In famo.us, there are some easy ways to perform animations/interactions using modifiers on a surface. For instance, dragging and animating surfaces have pretty straightforward examples in the famo.us guides.
// ...assume require('') statements above here...
var mainContext = Engine.createContext();

var draggable = new Draggable({...});
var surface = new Surface({...});

var transitionableTransform = new TransitionableTransform();
var modifier = new Modifier({
    origin: [.5, .5],
    transform: transitionableTransform
});

surface.pipe(draggable);
surface.on('click', function () {
    transitionableTransform.setScale([3, 3, 1], {duration: 300});
});

// the modifier has to be in the chain for the scale transition to apply
mainContext.add(draggable).add(modifier).add(surface);
However, in more complicated scenarios you might want to coordinate multiple animations, starting, stopping, or reversing them as needed depending on the interaction. In those cases, something as simple as adding transforms with a duration might work at first, but it isn't guaranteed to stay in sync the more the user interacts with the element.
The #render method appears to be a common place to put some kinds of coordinated animation. My limited understanding is that it produces the 'specs' of the nodes being rendered and is called on each frame of the render loop. So you might be able to define each step of a particular animation and then, depending on how it's interacted with, stop mid-animation and change course as needed.
For instance, Flipper seems to work this way (src/views/Flipper.js):
Flipper.prototype.render = function render() {
    var pos = this.state.get(); // state is a Transitionable
    var axis = this.options.direction;

    var frontRotation = [0, 0, 0];
    var backRotation = [0, 0, 0];
    frontRotation[axis] = Math.PI * pos;
    backRotation[axis] = Math.PI * (pos + 1);

    if (this.options.cull && !this.state.isActive()) {
        if (pos) return this.backNode.render();
        else return this.frontNode.render();
    }
    else {
        return [
            {
                transform: Transform.rotate.apply(null, frontRotation),
                target: this.frontNode.render()
            },
            {
                transform: Transform.rotate.apply(null, backRotation),
                target: this.backNode.render()
            }
        ];
    }
};
Is there any documentation on the role #render should play when animating, or on how exactly the render method is supposed to work (for instance, the correct way to construct the specs that get returned)? Is render supposed to be more low-level, and if so, should a different construct be used?
The only way I've seen the render method used so far is to return specs from pre-existing elements. Personally, I've used it only when creating my own "Views", where I add a RenderNode to my class and create a pass-through render method that simply calls the RenderNode's render method. That's enough to pass custom objects into .add functions and have them work. I learned of that here:
How to remove nodes from the Render Tree?
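For what it's worth, a minimal version of that pass-through pattern might look like this (a sketch assuming the usual famo.us require() setup; MyView and its contents are placeholders, not code from that answer):
var RenderNode = require('famous/core/RenderNode');
var Surface = require('famous/core/Surface');

function MyView() {
    // internal render node holding this view's subtree
    this.node = new RenderNode();
    this.surface = new Surface({ content: 'hello' });
    this.node.add(this.surface);
}

// anything with a render() method can be passed to context.add(...)
MyView.prototype.render = function render() {
    return this.node.render();
};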
As for understanding the construction of RenderSpecs themselves, I'm not aware of any docs. The best way to get a sense of it would be to read through the _parseSpec function of SpecParser:
https://github.com/Famous/core/blob/master/SpecParser.js#L92
From that it appears that any of the following can be used as a RenderSpec:
An Entity id (assigned to every Surface upon creation)
An object containing any of:
opacity
transform
origin
size
An array of RenderSpecs
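Putting those forms together, a hand-written spec might look something like this (a sketch based on reading SpecParser, not an official example; MyWidget and its surfaces are hypothetical, and Transform is famo.us's core Transform module):
MyWidget.prototype.render = function render() {
    return {
        transform: Transform.translate(50, 0, 0),
        opacity: this.fading ? 0.5 : 1,
        target: [
            this.headerSurface.render(),   // a Surface's render() resolves to its entity id
            {
                origin: [0.5, 0.5],
                target: this.bodySurface.render()
            }
        ]
    };
};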
If you want to take control of rendered nodes, write a custom View with a render function. The Flipper class is a simple example (and the RenderController is a complex example of this pattern).
How Famo.us renders:
1. Every RenderNode has a render function which creates a renderSpec.
2. The renderSpec contains information about a Modifier or Surface.
   2.1 The Modifier specs are used to calculate the final CSS properties.
   2.2 The Surface specs are coupled to DOM elements.
3. Every tick of the Engine, the renderSpec is rendered using the RenderNode.commit function.
4. The commit function uses the ElementAllocator (from the Context) to allocate/deallocate DOM elements (which actually recycles DOM nodes to conserve memory).
Therefore: Just return the correct renderSpec in your custom View, and famo.us will manage memory and performance for you.
Notes:
You don't need to use the View class; an object with a render function will suffice. The View class simply adds events and options, which is a nice way to create encapsulated, reusable components.
When a Surface element is allocated in the DOM, the deploy event is fired.
When a Surface element is deallocated, the recall event is fired.
As copied from http://famousco.de/2014/07/look-inside-rendering/
Related
I have created a DomObject class that can display boxes that move around.
jsFiddle
Naturally, if I increase the number of boxes in the DomObject, the movement function takes longer to run, since it loops over every box.
beginMovement = () => {
    if (!this.timer) {
        // console.log("Begin Movement");
        this.timer = setInterval(this.movement, this.time);
    }
};

movement = () => {
    // here we know that it's running
    let i = 0;
    while (i < this.boxes.length) {
        this.boxes[i].move();
        i++;
    }
};
In the case that I increase the length of this.boxes, I notice that there are performance issues.
So my main question is: should I be using a loop like this in this instance? Or should I avoid using basic HTML for moving items altogether and move on to using canvas?
It depends what your goal is. You seem to be trying to do some sort of animation, in which case using canvas/WebGL is a better option if you are going for raw speed.
Right now your objects live in the DOM, which wasn't originally designed to display fancy graphics and animations. Every design consideration for canvas was made explicitly for animation and complex bitmap operations like colorization and blurring. You never have to "find" a canvas element, and you can access it far more quickly than even a cached DOM element. Even updating text in canvas is faster than doing so in the DOM.
Here is a related SO discussion on canvas vs DOM.
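For comparison, a minimal sketch of the same movement loop drawn on a canvas might look like this (assuming the boxes keep x/y/w/h state and a move() method, as in the question's DomObject; the 'stage' id is made up):
const canvas = document.getElementById('stage');
const ctx = canvas.getContext('2d');

function frame() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    for (const box of boxes) {
        box.move();                                // update position
        ctx.fillRect(box.x, box.y, box.w, box.h);  // one draw call per box, no DOM mutation
    }
    requestAnimationFrame(frame);                  // synced to the display instead of setInterval
}
requestAnimationFrame(frame);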
I am implementing a set of custom elements that will be used like this:
<om-root>
    ...
    <om-node id="node1">
        ...
        <om-node id="node2">
            ...
        </om-node>
        ...
    </om-node>
    ...
</om-root>
That is, my <om-node> elements will be mixed in with arbitrary HTML, which may have positioning and/or CSS transform applied.
The purpose of these <om-node> elements is to apply CSS affine transformations to their content based on various conditions. But regardless of its position in the hierarchy, each om-node computes a transformation relative to the root node.
I can't just apply the computed transformation to the node, because the browser will combine that with the transformations of all its ancestor elements: if I rotate node1 by 30 degrees, then node2 will also be rotated by 30 degrees before its own transformation is applied.
Ideally, what I want is something that works like Element.getClientRects(), but returns a matrix rather than just a bounding box. Then I could do some math to compensate for the difference between the coordinate systems of the <om-node> and <om-root> elements.
This question is similar to mine, but doesn't have a useful answer. The question mentions using getComputedStyle(), but that doesn't do what is claimed – getComputedStyle(elt).transform returns a transformation relative to the element's containing block, not the viewport. Plus, the result doesn't include the effects of "traditional" CSS positioning (in fact it doesn't have a value at all for traditionally-positioned elements).
So:
Is there a robust way to get the transformation matrix for an element relative to the viewport?
The layout engine obviously has this info, and I'd prefer not to do a complicated (and expensive) tree-walking process every time anything changes.
Having thought some more about the question, it occurred to me that, in fact, you can solve the problem using getBoundingClientRect().
Of course, getBoundingClientRect() on its own does not tell you how an element has been transformed, because the same bounding box can describe many possible transformations.
However, if we add three child elements with a known size and position relative to the parent, we can recover more information by comparing their bounding boxes. In practice these three "gauge" elements are empty and invisible; they sit above-left (box 1), above-right (box 2), and below-left (box 3) of the parent element's origin, so that all three touch that corner.
The vectors u̅ and v̅ are orthogonal unit vectors of the parent element's untransformed coordinate system. After the element has been transformed by various CSS positioning and transform properties, we first need to find the transformed unit vectors u̅' and v̅'. We can do that by comparing the bounding boxes of the three gauge elements:
the vector from box 1 to box 2 is equivalent to u̅'
the vector from box 1 to box 3 is equivalent to v̅'
the midpoint between [the top left of box 3] and [the bottom right of box 2] gives us point P: this is the transformed position of the parent element's origin
From these three values u̅', v̅' and P we can directly construct a 2D affine transformation matrix T:
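Concretely, and matching the DOMMatrix constructed in the code further down, T in CSS matrix(a, b, c, d, e, f) order is:
T = matrix(u'x, u'y, v'x, v'y, Px, Py)

        | u'x  v'x  Px |
  i.e.  | u'y  v'y  Py |
        |  0    0    1 |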
This matrix T represents all the transformations affecting the parent element – not just CSS transform rules, but also "traditional" positioning, the effects of margins and borders, etc. And, because it's calculated from getBoundingClientRect(), it is always relative to the viewport – you can compare any two elements directly, regardless of their relationship within the DOM hierarchy.
Note: all this assumes we are only dealing with 2D affine transformations, such as transform:rotate(30deg) or left:120px. Dealing with 3D CSS transforms would be more complicated, and is left as an exercise for the reader.
Putting the above into code form:
class WonderDiv extends HTMLElement {

    constructor () {
        super();
        this.gauges = [null, null, null];
    }

    connectedCallback () {
        this.style.display = "block";
        this.style.position = "absolute";
    }

    createGaugeElement (i) {
        let g = document.createElement("div");
        // applying the critical properties via a style
        // attribute makes them harder to override by accident
        g.style = "display:block; position:absolute;"
            + "margin:0px; width:100px; height:100px;"
            + "left:" + ( ((i+1)%2) ? "-100px;" : "0px;")
            + "top:"  + ( (i<2)     ? "-100px;" : "0px;");
        this.appendChild(g);
        this.gauges[i] = g;
        return g;
    }

    getClientTransform () {
        let r = [];
        let i;
        for (i = 0; i < 3; i++) {
            // this try/catch block creates the gauge elements
            // dynamically if they are missing, so (1) they aren't
            // created where they aren't needed, and (2) they are
            // restored automatically if someone else removes them.
            try   { r[i] = this.gauges[i].getBoundingClientRect(); }
            catch { r[i] = this.createGaugeElement(i).getBoundingClientRect(); }
        }
        // note the factor of 100 here - we've used 100px divs
        // instead of 1px divs, on a hunch that might be safer
        return DOMMatrixReadOnly.fromFloat64Array(new Float64Array([
            (r[1].left - r[0].left) / 100,   // u'x
            (r[1].top  - r[0].top)  / 100,   // u'y
            (r[2].left - r[0].left) / 100,   // v'x
            (r[2].top  - r[0].top)  / 100,   // v'y
            (r[1].right + r[2].left) / 2,    // Px
            (r[1].top + r[2].bottom) / 2     // Py
        ]));
    }
}

customElements.define("wonder-div", WonderDiv);
The custom <wonder-div> element behaves like a <div> but adds a getClientTransform() method, which works like getBoundingClientRect() except that it returns a DOMMatrix instead of a DOMRect.
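For example, to map points between the element's local (untransformed) coordinate space and the viewport, using standard DOMMatrixReadOnly methods (a usage sketch with made-up coordinates):
const node = document.querySelector("wonder-div");
const T = node.getClientTransform();                 // DOMMatrixReadOnly

// local point -> viewport coordinates
const inViewport = T.transformPoint(new DOMPoint(10, 20));

// viewport coordinates -> local point
const backToLocal = T.inverse().transformPoint(new DOMPoint(300, 150));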
CSS transformations are actually relatively heavy operations and come with some gotchas (they transform elements, after all), so you may not be able to avoid traversing the nodes unless you implement an intelligent state system, for example by storing all your objects plus their transformations in your JavaScript class.
That said, one easy workaround for small use cases is to disable transforms on the parent elements using something like display 'inline', but this is not suitable for all cases:
<div id="outside">
    <div id="inside">Absolute</div>
</div>
document.getElementById('outside').style.display = "inline";
The more robust approach is to retrieve and parse the computed styles dynamically:
function getTranslateXY(element) {
    const style = window.getComputedStyle(element);
    const matrix = new DOMMatrixReadOnly(style.transform);
    return {
        translateX: matrix.m41,
        translateY: matrix.m42
    };
}
Then you can dynamically set new transformations on any node by adding/subtracting from the current transformation state.
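For example, a small sketch that nudges an element relative to wherever its current transform has placed it, building on the getTranslateXY() helper above (note that writing a plain translate() back will discard any rotation or scale in the existing transform):
function nudgeRight(element, dx = 10) {
    const { translateX, translateY } = getTranslateXY(element);
    element.style.transform = `translate(${translateX + dx}px, ${translateY}px)`;
}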
I have created a React + Three.js sandbox with multiple canvases showing a simple tetrahedron model (demo here).
Unfortunately all instances seem to use Three.OrbitControls the same way: no matter which canvas the mouse is pointing at, all models capture the event.
The desired outcome is that OrbitControls only update the canvas the mouse is pointing at.
How should I change this so that the model in each canvas behaves independently? (source code)
Just pass each canvas as the constructor's second parameter (see the second parameter of the OrbitControls constructor).
This defaults to document, but if you pass a specific DOM element, it'll bind event listeners to that object instead.
var controls1 = new OrbitControls(cam1, canvas1);
var controls2 = new OrbitControls(cam2, canvas2);
var controls3 = new OrbitControls(cam3, canvas3);
var controls4 = new OrbitControls(cam4, canvas4);
I think I'm having a similar issue, in that I cannot work out how (or whether it is even possible) to get access to the scaling applied to a given object (in my instance, a raster).
I need to know this so I can animate the scaling via Tween.js.
Anyone have any ideas or know if indeed it is possible to find out the current scaling applied to a raster (or any) object?
I thought it was an issue with Rasters, so I tried tweening the scale property of a Path and then a Group, and I couldn't get access to the values in order to animate them.
Because I am using Tween.js, I cannot simply use the object.scale(value) function.
UPDATE
I even tried applying an arbitrary (animated) number to the scale function and it failed to work... i.e.:
object.scale( 0 );
object.arbitraryNumber = 0;

createjs.Tween.get( object )
    .to( { arbitraryNumber: 1 }, 1000, createjs.Ease.getPowInOut(2) )
    .addEventListener( "change", function( event ) {
        event.target.target.scale( event.target.target.arbitraryNumber );
    } );
Although this did not work, when the same approach was applied to the x position of the object, it animated fine.
Is there anything that needs to be flagged in order to update the scaling of an object?
When you call the Item.scale() method on each frame with values from 0 to 1, you are actually scaling the item down exponentially, because each call scales the item relative to its previous size.
What you want to do is animate the Item.scaling property instead.
You also have to know that, by default, Paper.js uses a global coordinate system and applies every transformation directly to the point coordinates.
You can change this behavior by setting the Item.applyMatrix property to false.
If you do, scale changes will affect the item's matrix instead of the point coordinates, and you will be able to animate it as you expect.
Here is a simple Sketch of a scale animation:
var circle = new Path.Circle(view.center, 50);
circle.fillColor = 'orange';
circle.applyMatrix = false;

function onFrame(event) {
    circle.scaling = Math.sin(1 + event.count * 0.05);
}
You should be able to transpose this example to your Tween.js context easily.
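For instance, here is a sketch of that transposition using the createjs.Tween pattern from the question (tweening a helper property, here arbitrarily named tweenScale, and copying it onto Item.scaling on each change):
circle.applyMatrix = false;       // keep the scale in the matrix, not baked into the points
circle.tweenScale = 0.001;        // helper property driven by the tween (near zero, not exactly zero)

createjs.Tween.get( circle )
    .to( { tweenScale: 1 }, 1000, createjs.Ease.getPowInOut(2) )
    .addEventListener( "change", function( event ) {
        var item = event.target.target;   // the tweened Paper.js item
        item.scaling = item.tweenScale;   // absolute value, so successive calls don't compound
    } );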
We have a pretty complex web app built in Meteor. The UI is mainly nested HTML elements. Now we are trying to rewrite the UI with Famo.us so we can get better performance as well as add great animation effects. One feature in our app is that when the user drags on top of an element A, we need to draw a new element B based on the precise position of the mouse events in B. That is, we need to calculate the coordinates of a point in any element, even when the element has complex transforms applied. We were using the 'webkitConvertPointFromPageToNode' function in WebKit browsers (we only support WebKit) to do the job. Does Famo.us have a similar function so I can calculate a point's coordinates in a specific Surface? Or do you have any suggestions on how to accomplish such a feature with the current API?
Thanks
Given that the transforms in Famo.us are all backed by absolute positioning, finding the coordinates in any given surface is pretty straightforward. In the Event object you can grab the offsetX and offsetY of the target surface.
Check out this example:
Hope it helps!
var Engine = require('famous/core/Engine');
var Surface = require('famous/core/Surface');
var StateModifier = require('famous/modifiers/StateModifier');
var Transform = require('famous/core/Transform');

var context = Engine.createContext();

var surface = new Surface({
    size: [200, 200],
    properties: {
        backgroundColor: 'green',
        color: 'white',
        textAlign: 'center',
        lineHeight: '200px'
    }
});

surface.on('mousemove', function(e) {
    surface.setContent("x: " + e.offsetX + ", y: " + e.offsetY);
});

surface.state = new StateModifier({
    transform: Transform.translate(100, 100, 0)
});

context.add(surface.state).add(surface);
I have found the right way to do this.
First, I dug into the problem mentioned in my comment: the offsetX/offsetY values are actually based on the child surfaces, because offsetX/offsetY are generated by the DOM's MouseEvent and copied into Famo.us with no modification. The DOM doesn't provide the coordinates of the mouse point on the 'currentTarget'; it only provides them for 'target', which is the element the event occurs on. So we can only use the clientX/clientY coordinates in the viewport, then calculate the coordinates of that point on the target element. There is no official API to do the calculation either; only WebKit provides the 'webkitConvertPointFromPageToNode' API, because its layout engine knows all about the position and transforms of a specific element.
But then I realised that with Famo.us, we know the transforms of each surface! In the render tree, all the modifiers on the path from the root context to a RenderNode form the transform for that node and the nodes below it. We can multiply them to get one transform matrix M. Then we can do a coordinate system transformation to calculate the point's correct coordinates in the node's local coordinate system.
But Famo.us doesn't have a direct API to get all the modifiers for a node, so I did it myself in my code. I would suggest that Famo.us add a 'parent' reference to each RenderNode, so they can be collected easily for any node.
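To illustrate the idea (a sketch only: getModifierChain() is a hypothetical helper for the tree walk described above, and it assumes famo.us's Transform.multiply/Transform.inverse utilities and Modifier.getTransform(); the composition order may need adjusting for your tree):
var Transform = require('famous/core/Transform');

function pointInSurface(surface, clientX, clientY) {
    // combine every modifier transform on the path from the context
    // down to the surface's render node into a single matrix M
    var M = Transform.identity;
    getModifierChain(surface).forEach(function (modifier) {
        M = Transform.multiply(M, modifier.getTransform());
    });

    // apply the inverse of M to the viewport point to get local coordinates
    // (famo.us transforms are 16-element column-major arrays)
    var inv = Transform.inverse(M);
    return [
        inv[0] * clientX + inv[4] * clientY + inv[12],
        inv[1] * clientX + inv[5] * clientY + inv[13]
    ];
}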
It took me a while, but this works for me:
var myX = event.clientX;
var myY = event.clientY;

// walk up the element chain (note: event.path is non-standard and Chrome-only)
for (var i = 0; i < event.path.length; i++) {
    if (event.path[i].style === undefined)
        continue;
    var matrix = event.path[i].style.transform;
    var matrixPattern = /^\w*\((((\d+)|(\d*\.\d+)),\s*)*((\d+)|(\d*\.\d+))\)/i;
    if (matrixPattern.test(matrix)) {
        var matrixCopy = matrix.replace(/^\w*\(/, '').replace(')', '');
        // entries 12 and 13 of a matrix3d() hold the x and y translation
        myX -= matrixCopy.split(/\s*,\s*/)[12];
        myY -= matrixCopy.split(/\s*,\s*/)[13];
    }
}
Tested with the align and size modifiers.