React + Three.js with OrbitControls on multiple canvases - javascript

I have created a React + Three.js sandbox with multiple canvases, each showing a simple tetrahedron model (demo here).
Unfortunately, all instances seem to use THREE.OrbitControls the same way: no matter which canvas the mouse is pointing at, all models capture the event.
The desired outcome is that OrbitControls only updates the canvas the mouse is pointing at.
How should I change this so the model in each canvas behaves independently? (source code)

Just pass each canvas as the second parameter of the OrbitControls constructor.
That parameter defaults to document, but if you pass a specific DOM element, the controls will bind their event listeners to that element instead.
var controls1 = new THREE.OrbitControls(cam1, canvas1);
var controls2 = new THREE.OrbitControls(cam2, canvas2);
var controls3 = new THREE.OrbitControls(cam3, canvas3);
var controls4 = new THREE.OrbitControls(cam4, canvas4);
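For a fuller picture, here is a rough sketch (plain three.js, without the React wrapper) of one renderer, camera and controls per canvas, with each OrbitControls instance bound to its own canvas element; the element ids and camera settings are made up for illustration:

var canvases = [document.getElementById('canvas1'), document.getElementById('canvas2')];
var scenes = [], cameras = [], renderers = [], controls = [];

canvases.forEach(function (canvas) {
    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(45, canvas.clientWidth / canvas.clientHeight, 0.1, 1000);
    camera.position.z = 5;

    var renderer = new THREE.WebGLRenderer({ canvas: canvas });

    // Binding the controls to this specific canvas keeps its mouse events local to it.
    var orbit = new THREE.OrbitControls(camera, canvas);

    scenes.push(scene);
    cameras.push(camera);
    renderers.push(renderer);
    controls.push(orbit);
});

function animate() {
    requestAnimationFrame(animate);
    renderers.forEach(function (renderer, i) {
        controls[i].update();
        renderer.render(scenes[i], cameras[i]);
    });
}
animate();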

Related

Scaling Raster with Paper.js using Tween.js

I think I'm having a similar issue to this, in that I can't work out (or don't know if it exists) how to get access to the scaling applied to a given object (in my case, a Raster).
I need to know this so I can animate the scaling via Tween.js.
Does anyone have any ideas, or know whether it is possible to find out the current scaling applied to a Raster (or any) object?
I thought it was an issue specific to Rasters, so I tried tweening the scale property of a Path and then a Group, and I still couldn't get access to the values in order to animate them.
Because I am using Tween.js, I can't simply use the object.scale(value) function.
UPDATE
I even tried applying an arbitrary (animated) number to the scale function and it failed to work, i.e.:
object.scale(0);
object.arbitraryNumber = 0;
createjs.Tween.get(object)
    .to({ arbitraryNumber: 1 }, 1000, createjs.Ease.getPowInOut(2))
    .addEventListener("change", function (event) {
        event.target.target.scale(event.target.target.arbitraryNumber);
    });
Although this did not work, when the same approach was applied to the x position of the object, it animated fine.
Is there anything that needs to be flagged in order to update the scaling of an object?
When calling the Item.scale() method on each frame with values from 0 to 1, you are actually scaling the item down exponentially, because each call scales the item relative to its previous size.
What you want to do is animate the Item.scaling property instead.
You also have to know that, by default, Paper.js uses a global coordinate system and applies every transformation directly to the points.
You can change this behavior by setting the Item.applyMatrix property to false.
That way, scale changes will affect the item's matrix instead of its point coordinates, and you will be able to animate it as you expect.
Here is a simple Sketch of a scale animation:
var circle = new Path.Circle(view.center, 50);
circle.fillColor = 'orange';
circle.applyMatrix = false;

function onFrame(event) {
    circle.scaling = Math.sin(1 + event.count * 0.05);
}
You should be able to transpose this example to your Tween.js context easily.
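For instance, a rough sketch combining that advice with the createjs.Tween calls from the question might look like this (tweenScale is just a made-up helper property for the tween to animate; starting slightly above 0 avoids a degenerate matrix):

circle.applyMatrix = false;   // keep scaling in the matrix instead of baking it into the points
circle.tweenScale = 0.01;     // made-up helper property that the tween animates

createjs.Tween.get(circle)
    .to({ tweenScale: 1 }, 1000, createjs.Ease.getPowInOut(2))
    .addEventListener("change", function (event) {
        var item = event.target.target;   // the Paper.js item being tweened
        item.scaling = item.tweenScale;   // absolute value, so no compounding
    });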

Raycaster does not move BoxMesh objects

I'm using the Physijs script for physics such as gravitation.
I want to move objects in my scene with the Raycaster from the THREE.js script.
My problem is that the Raycaster only moves objects (a simple box) declared like:
var box = new Physijs.Mesh(cubeGeomtery.clone(), createMaterial);
But then the physics does not work. It only works if I declare it like:
var create = new Physijs.BoxMesh(cubeGeomtery.clone(), createMaterial);
But then the Raycaster / moving does not work.
The difference between the two is that the first is just a Mesh and the second is a BoxMesh.
Does anyone know why this doesn't work? I need a BoxMesh in order to use gravity and other physics.
Code to add cube
function addCube() {
    controls.enable = false;
    var cubeGeomtery = new THREE.CubeGeometry(85, 85, 85);
    var createTexture = THREE.ImageUtils.loadTexture("images/rocks.jpg");
    var createMaterial = new THREE.MeshBasicMaterial({ map: createTexture });
    var box = new Physijs.BoxMesh(cubeGeomtery.clone(), createMaterial);
    box.castShadow = true;
    box.receiveShadow = true;
    box.position.set(0, 300, 0);
    objects.push(box);
    scene.add(box);
}
Explanation
In Physijs, all primitive shapes (such as Physijs.BoxMesh) inherit from Physijs.Mesh, which in turn inherits from THREE.Mesh. In the Physijs.Mesh constructor there is a small internal object, the ._physijs field, and in that object there is a shape type declaration, set to null by default. That field must be re-assigned by one of its children; if it isn't, then when the shape is passed to the scene, the Physijs worker script won't know what kind of shape to generate and will simply abort. Since Physijs.Scene inherits from THREE.Scene, the scene still keeps a reference to the mesh internally like it should, which means that all methods from THREE.js will work (raycasting, for instance). However, the mesh is never registered as a physical object because it has no type!
Now, when you try to move the Physijs.BoxMesh directly through its position and rotation fields, your changes are immediately overridden by the physics updates, which are driven by the .simulate method on your scene object. When called, it delegates to the worker to compute new positions and rotations that correspond to the physics configuration of your scene. Once it's finished, the new values are transferred back to the main thread and applied automatically so that you don't have to do anything. This can be a problem in some cases (like this one!). Fortunately, the developer included two special fields in Physijs.Mesh: the .__dirtyPosition and .__dirtyRotation flags. Here's how you use them:
// Place box already in scene somewhere else
box.position.set(10, 10, 10);
// Set .__dirtyPosition to true to override physics update
box.__dirtyPosition = true;
// Rotate box ourselves
box.rotation.set(0, Math.PI, 0);
box.__dirtyRotation = true;
The flags get reset to false after updating the scene again via the .simulate method.
Conclusion
It is basically useless to create a Physijs.Mesh yourself; use one of the provided primitives instead. It is just a wrapper around THREE.Mesh for Physijs and has no physical properties until they are set properly by one of its children.
Also, when using a Physijs mesh, always set the .__dirtyPosition or .__dirtyRotation flag on the object whenever you directly modify its position or rotation, respectively. Take a look at the code snippet above and here.
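For reference, here is a rough sketch of how the two halves fit together: picking a Physijs.BoxMesh with a THREE.Raycaster and then moving it with the dirty flags. The mouse vector, camera and objects array are assumptions based on the question's code:

var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mouse, camera);          // mouse is a normalized THREE.Vector2

var hits = raycaster.intersectObjects(objects);  // BoxMesh inherits from THREE.Mesh, so raycasting works
if (hits.length > 0) {
    var picked = hits[0].object;
    picked.position.copy(hits[0].point);         // move the box ourselves...
    picked.__dirtyPosition = true;               // ...and tell Physijs not to overwrite it
}

scene.simulate();                                // the flag is reset on the next simulation step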

Point coordinate translate to specific Surface in Famo.us

We have a pretty complex web app built in Meteor. The UI is mainly nested HTML elements. Now we are trying to rewrite the UI with Famo.us so we can get better performance as well as add great animation effects. One feature in our app is that when the user drags on top of an element A, we need to draw a new element B based on the precise position of the mouse events in B. That is, we need to calculate the coordinate of a point in any element, even if the element has complex transforms. We were using the 'webkitConvertPointFromPageToNode' function in WebKit browsers (we only support WebKit) to do the job. Does Famo.us have a similar function so I can calculate a point's coordinate in a specific Surface? Or do you have any suggestions on how to accomplish such a feature with the current API?
Thanks
Given that the transforms in Famo.us are all backed by absolute positioning, finding the coordinates in any given surface is pretty straightforward. In the Event object you can grab the offsetX and offsetY of the target surface.
Check out this example:
Hope it helps!
var Engine = require('famous/core/Engine');
var Surface = require('famous/core/Surface');
var StateModifier = require('famous/modifiers/StateModifier');
var Transform = require('famous/core/Transform');

var context = Engine.createContext();

var surface = new Surface({
    size: [200, 200],
    properties: {
        backgroundColor: 'green',
        color: 'white',
        textAlign: 'center',
        lineHeight: '200px'
    }
});

surface.on('mousemove', function (e) {
    surface.setContent("x: " + e.offsetX + ", y: " + e.offsetY);
});

surface.state = new StateModifier({
    transform: Transform.translate(100, 100, 0)
});

context.add(surface.state).add(surface);
I have found the right way to do this.
First, I dug into the problem mentioned in my comment, namely that the offsetX/offsetY values are actually based on the child surfaces. The offsetX/offsetY values are generated by the DOM's MouseEvent and copied into Famo.us with no modification. The DOM doesn't provide the coordinate of the mouse point relative to the 'currentTarget'; it only provides the value for 'target', which is the element on which the event occurred. So we can only take the clientX/clientY coordinates in the viewport and then calculate the coordinate of that point on the target element. There is no official API to do the calculation either; only WebKit provides the 'webkitConvertPointFromPageToNode' API, because the layout engine knows everything about the position and transforms of a specific element.
But then I realised that with Famo.us, we know the transforms of each surface! In the render tree, all the modifiers on the path from the root context to a RenderNode form the transform for that node and the nodes below it. We can multiply them to get one transform matrix M, then do a coordinate-system transformation to calculate the point's correct coordinate in the node's local coordinate system.
But Famo.us doesn't have a direct API to get all the modifiers for a node, so I did it myself in my code. I would suggest that Famo.us add a 'parent' reference on each RenderNode; then we could get them easily for any node.
It took me a while, but this works for me:
var myX = event.clientX;
var myY = event.clientY;

// Walk up the event path and subtract the translation components (indices 12
// and 13 of the matrix3d values) of every transformed ancestor to get the
// point's coordinates in the surface's local coordinate system.
for (var i = 0; i < event.path.length; i++) {
    if (event.path[i].style === undefined)
        continue;
    var matrix = event.path[i].style.transform;
    var matrixPattern = /^\w*\((((\d+)|(\d*\.\d+)),\s*)*((\d+)|(\d*\.\d+))\)/i;
    if (matrixPattern.test(matrix)) {
        var matrixCopy = matrix.replace(/^\w*\(/, '').replace(')', '');
        myX -= matrixCopy.split(/\s*,\s*/)[12];
        myY -= matrixCopy.split(/\s*,\s*/)[13];
    }
}
Tested with align and size modifiers.

When to use the #render method in famo.us

In famo.us, there are some easy ways to perform animations/interactions using modifiers on a surface. For instance, dragging and animating surfaces have pretty straightforward examples in the famo.us guides.
...assume require('') statements above here...
var mainContext = Engine.createContext();

var draggable = new Draggable({...});
var surface = new Surface({...});
var transitionableTransform = new TransitionableTransform();

var modifier = new Modifier({
    origin: [.5, .5],
    transform: transitionableTransform
});

surface.pipe(draggable);

surface.on('click', function () {
    transitionableTransform.setScale([3, 3, 1], { duration: 300 });
});

mainContext.add(draggable).add(surface);
However, in more complicated scenarios you might want to coordinate multiple animations, starting/stopping/reversing as needed depending on the interaction. In those cases, things as simple as adding transforms with a duration might work at first, but aren't guaranteed to not fall out of sync the more the user interacts with them.
The #render method appears to be a common place to put some types of coordinated animation. My limited understanding is that it produces the 'specs' of the nodes being rendered and is called on each frame of the render loop. So you might be able to identify each step of a particular animation and then, depending on how it's interacted with, stop mid-animation and change course as needed.
For instance, Flipper seems to work this way
(src/views/Flipper.js):
Flipper.prototype.render = function render() {
    var pos = this.state.get(); // state is a transitionable

    var axis = this.options.direction;

    var frontRotation = [0, 0, 0];
    var backRotation = [0, 0, 0];
    frontRotation[axis] = Math.PI * pos;
    backRotation[axis] = Math.PI * (pos + 1);

    if (this.options.cull && !this.state.isActive()) {
        if (pos) return this.backNode.render();
        else return this.frontNode.render();
    }
    else {
        return [
            {
                transform: Transform.rotate.apply(null, frontRotation),
                target: this.frontNode.render()
            },
            {
                transform: Transform.rotate.apply(null, backRotation),
                target: this.backNode.render()
            }
        ];
    }
};
Is there any documentation on the role #render should play when animating, or on how exactly the render method is supposed to work (for instance, the correct way to construct the specs that get returned)? Is render supposed to be more low-level, and if so, should a different construct be used?
The only way I've seen the render method used so far is to return specs from pre-existing elements. Personally, I've used it only when creating my own "Views", where I add a RenderNode to my class and create a pass-through render method that simply calls the RenderNode's render method. That's enough to pass custom objects into .add functions and have them work. I learned of that here:
How to remove nodes from the Render Tree?
As for understanding the construction of RenderSpecs themselves, I'm not aware of any docs. The best way to get a sense of it would be to read through the _parseSpec function of SpecParser:
https://github.com/Famous/core/blob/master/SpecParser.js#L92
From that it appears that any of the following can be used as a RenderSpec:
An Entity id (assigned to every Surface upon creation)
An object containing any of:
opacity
transform
origin
size
An array of RenderSpecs
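As a rough illustration only (the names below are made up, and Surface and Transform are assumed to be required from famous/core as in the earlier example), a minimal pass-through render function built from those spec shapes could look like this:

function MyView() {
    this.surface = new Surface({ size: [100, 100], content: 'hello' });
    this.angle = 0;
}

MyView.prototype.render = function () {
    // An object spec wrapping the Surface's own spec (its entity id).
    return {
        transform: Transform.rotateZ(this.angle),
        opacity: 1,
        target: this.surface.render()
    };
};

// A plain object with a render method is enough to be added to the render tree:
// context.add(new MyView());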
If you want to take control of rendered nodes, write a custom View with a render function. The Flipper class is a simple example (and the RenderController is a complex example of this pattern)
How Famo.us renders:
1. Every RenderNode has a render function which creates a renderSpec.
2. The renderSpec contains information about a Modifier or Surface.
2.1 The Modifier specs are used to calculate the final CSS properties.
2.2 The Surface specs are coupled to DOM elements.
3. Every tick of the Engine, the renderSpec is rendered using the RenderNode.commit function.
4. The commit function uses the ElementAllocator (from the Context) to allocate/deallocate DOM elements (which actually recycles DOM nodes to conserve memory).
Therefore: just return the correct renderSpec in your custom View, and famo.us will manage memory and performance for you.
Notes:
You don’t need to use the View class – an object with a render function will suffice. The View class simply adds events and options which is a nice way to create encapsulated, reusable components.
When a Surface element is allocated in the DOM, the deploy event is fired.
When a Surface element is deallocated, the recall event is fired.
As copied from http://famousco.de/2014/07/look-inside-rendering/

how to get the html file of an edited canvas in html5

I am trying to make a website which helps its users create a page by dragging and dropping elements on the canvas. The user should be able to save the HTML file of the edited canvas. I cannot figure out how to convert the changes made to the canvas into an HTML file.
I don't think it's possible to get markup out of a canvas. I've searched for a month but couldn't find a valid solution, but maybe some experts know one. Best of luck, buddy.
Canvas is basically just a bitmap image. Whatever you draw on the canvas is stored as pixels, not as elements, so changes to the canvas are just changes in pixel values. To do what you wish, you would need to store your 'elements' as 'objects' within your code, where each 'object' stores all the required data for its 'element'.
It would then be possible to open a new window and export code into it using document.writeln.
The code below may give you an idea of what sort of thing would be needed
newwindow = window.open('', '_blank');
newwindow.document.writeln('<!DOCTYPE HTML>');
newwindow.document.writeln('<html>');
newwindow.document.writeln('<head>');
newwindow.document.writeln('<style>');
newwindow.document.writeln('#element0 {');
newwindow.document.writeln('background:' + obj0.background + ';');
newwindow.document.writeln('width:' + obj0.width + ';');
newwindow.document.writeln('}');
newwindow.document.writeln('</style>');
newwindow.document.writeln('</head>');
newwindow.document.writeln('<body>');
newwindow.document.writeln('<div id="element0"></div>');
newwindow.document.writeln('</body>');
newwindow.document.writeln('</html>');
newwindow.document.close();
Hope this helps
Canvas won't help you here for anything other than to visualize the objects you have dropped onto it.
You need to record the objects you drop in a "shadow" structure behind the scenes, so to speak. That is to say: build an object list internally which you can then use as source data to render:
Canvas visualization of it
Raw HTML code from it.
You can, for example, drop an image onto the canvas and have your code record a new object (the intention of the following code is to show the principle, not to provide a full working solution):
var myObjects = [];
/// a drop occurred
var o = new myElement(x, y, width, height, id, type);
myElement is a pre-defined object that you set up in advance to hold the given arguments.
Then push the object to your object stack and render it to canvas:
myObjects.push(o);
for (var i = 0, o; o = myObjects[i]; i++) {
    /// draw the look of this object here to canvas
}
When you then need an HTML version of it, you do the same:
for (var i = 0, o; o = myObjects[i]; i++) {
    var el = '<' + o.type + ' id="' + o.id + ' .... other things here
}
This way you can produce canvas graphics, HTML, send data over a socket, etc.
The key in this sort of thing is to keep the raw base data available. In this case that would be the element type you want to drop, plus its position and dimensions. For HTML you also have to consider things like nesting, but that would require a bit more code than shown here.
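To make the principle concrete, here is a rough, self-contained sketch of such a shadow structure; the property names (type, id, background, and so on) are illustrative assumptions, not a fixed API:

function myElement(x, y, width, height, id, type) {
    this.x = x; this.y = y;
    this.width = width; this.height = height;
    this.id = id; this.type = type;
    this.background = '#cccccc';
}

var myObjects = [];
var ctx = document.getElementById('canvas').getContext('2d');

function drawAll() {
    // Canvas visualization: redraw every recorded object as pixels.
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    for (var i = 0, o; o = myObjects[i]; i++) {
        ctx.fillStyle = o.background;
        ctx.fillRect(o.x, o.y, o.width, o.height);
    }
}

function toHTML() {
    // HTML export: the same base data rendered as markup instead of pixels.
    var html = '';
    for (var i = 0, o; o = myObjects[i]; i++) {
        html += '<' + o.type + ' id="' + o.id + '" style="position:absolute;' +
                'left:' + o.x + 'px;top:' + o.y + 'px;' +
                'width:' + o.width + 'px;height:' + o.height + 'px;' +
                'background:' + o.background + '"></' + o.type + '>\n';
    }
    return html;
}

// On a drop:
myObjects.push(new myElement(20, 30, 100, 50, 'element0', 'div'));
drawAll();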
