Making concurrent canvas animations in JavaScript?

I've been learning JavaScript canvas recently, and I came up with two ways of making animations. I searched Google for a while but cannot determine which way is correct.
Say I want to render different objects doing different things on the canvas at 30 fps. There are two ways to achieve this.
In both approaches, there is a main setInterval function that draws all objects at 30 fps.
Approach 1: every object has a nextframe(user_response) method, which changes the 'status' of the object according to user input and is called by the main setInterval 30 times per second. The main setInterval has to pass the user input into each nextframe(...) somehow, and it also calls draw for each object.
--The problem with this approach is that nextframe is called for every object on every frame, whether it needs it or not, which wastes system resources.
Approach 2: objects implement their own animation methods with setInterval. These methods get called according to user input, changing the object's 'status' 30 times per second, while the main setInterval function only calls draw for each object at 30 fps, behaving like 'taking pictures' of each object's status. The object statuses change independently on their own timers. So there is always one main 30 fps timer running, and if m objects are animated and n objects are not animated at the moment, there are (m+1) timers in total.
--The problem with this is that when many objects are animated I have many timers running, which also takes system resources.
So, which one is the more appropriate method? Or are they both wrong? :>
Thank you in advance!

The second one is the better one, except you should use requestAnimationFrame instead of setInterval.
To reduce resource usage, you can add conditions in the draw() methods to avoid redrawing when it is not necessary. But I think you need to redraw every frame anyway, because you must clear your stage in order to draw the moving objects.
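For reference, here is a minimal sketch of what that main loop could look like. The 'stage' element id, the objects array, the isAnimating flag, and the update()/draw() methods are assumptions for illustration, not an existing API:

// A single requestAnimationFrame loop that drives every object.
// 'stage', objects, isAnimating, update() and draw() are hypothetical names.
var canvas = document.getElementById('stage');
var ctx = canvas.getContext('2d');
var objects = [];                     // each entry exposes update(dt) and draw(ctx)
var lastTime = performance.now();

function mainLoop(now) {
  var dt = now - lastTime;            // milliseconds since the previous frame
  lastTime = now;

  ctx.clearRect(0, 0, canvas.width, canvas.height);   // clear the whole stage

  for (var i = 0; i < objects.length; i++) {
    var obj = objects[i];
    if (obj.isAnimating) {            // skip state updates for idle objects
      obj.update(dt);
    }
    obj.draw(ctx);                    // everything is redrawn after the clear
  }

  requestAnimationFrame(mainLoop);
}

requestAnimationFrame(mainLoop);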

Related

Javascript - Access object from memory-cache

I'm working on an app for image editing. Each time an edit is made, the new version shows up in the network tab as a png.
I'm trying to add an "undo" feature which, to me, would mean using the previous version of the image.
So if the current version is the last image, I want the undo function to load the second to last. Is this possible?
I've been searching for ways to do this but haven't really seen anything that suggests it is.
Any advice would be much appreciated.
Thanks!
Update
I've been looking into this:
https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/keys
So if the current version is the last image, I want the undo function to load the second to last. Is this possible?
Yes, in general this is possible. That said, the current approach isn't something I'd recommend, as the cache is persistent storage. You could store every edit as a base64 string and then, when you want to undo, call array.pop() and render the last element in that array. But this quickly runs into a problem: we could end up storing a lot of long base64 strings in the user's web browser.
Instead, a better way to do this is to have a single baseImage and, for anything the user does, keep track of the operations they performed, then simply replay them in memory. Let's take an example such as Paint:
We could define this as a blank white baseImage, followed by an array of three operations: operations = [DrawRect, DrawLine, DrawCircle]. Each of these could be thought of as an object that knows how to draw itself based on some points. An undo here would remove DrawCircle from the end, then start with the blank white image and perform the [DrawRect, DrawLine] operations we saved in the array to end up with the previous picture.
Some operations can be costly and eventually add up. We don't want to start from scratch and perform the hundreds or thousands of operations the user did every time we undo. This is why many editors only allow a set number of undos. What happens there is that the previous operations are folded into the baseImage, and we start from that instead.
This can be thought of as a buffer that only holds so many operations. A simple way would be: after, say, 20 operations, when an undo is performed, bake the oldest item into the baseImage and save that as the new baseImage. The catch is that this would then happen on every operation once we pass 20. If changing the baseImage is expensive, it could be staggered: every 10 operations, remove the first 10 and save the image at that point instead. That way we always have at least 10 undos available and we only change the baseImage on every 10th operation.
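A rough sketch of that idea, assuming a 2D canvas context; the names baseImage, operations, MAX_UNDO and the op objects are made up for illustration, not a specific library:

// Hypothetical baseImage + operations history with a capped undo depth.
var MAX_UNDO = 20;          // how many operations stay replayable
var baseImage = null;       // ImageData snapshot of everything already "baked in"
var operations = [];        // each op is an object with a draw(ctx) method

function applyOperation(ctx, op) {
  operations.push(op);
  op.draw(ctx);

  // Once the history grows past the limit, bake the oldest operations
  // into a new baseImage so future replays stay cheap.
  if (operations.length > MAX_UNDO) {
    var baked = operations.splice(0, operations.length - MAX_UNDO);
    redraw(ctx, baseImage, baked);
    baseImage = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
    redraw(ctx, baseImage, operations);
  }
}

function undo(ctx) {
  operations.pop();                    // drop the most recent operation
  redraw(ctx, baseImage, operations);  // replay what is left on top of the base
}

function redraw(ctx, base, ops) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  if (base) ctx.putImageData(base, 0, 0);
  for (var i = 0; i < ops.length; i++) {
    ops[i].draw(ctx);
  }
}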
I do want to point out that for the most part this is me thinking out loud, as I don't have much development experience in creating image editing or drawing software. Rather, this is just the approach/concepts I'd take if I were to make one. Hopefully these points help you make an undo feature that isn't too expensive in space, memory, or computation.

matter.js | Fixed Update problem for applyForce

I am making a player movement with applyForce using matter.js.
I am checking for pressed keys and applying force to my character in my game loop, which is normally called 60 times per second. But the problem begins when the FPS drops. If the loop is called only 30 times per second, how can I make applyForce have the same overall effect as when the FPS was 60?
Is there any analog of FixedUpdate like in Unity?
This is a classic problem in game development. One way you can solve this problem is instead of applying the same amount of force in every update, you can check a clock to see how much time has passed since the last update (e.g. call performance.now() in every update). Then multiply the amount of force you want to add by the amount of time that has passed.
I don't think this will work perfectly in all situations. Especially if you have small, fast moving objects, you might find objects clipping through each other. But I think this will be good enough for most cases, and you should be able to code it by hand.
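A minimal sketch of that time-scaling idea with matter.js; the player body, the keys object, and the force value are placeholders, not code from the question:

// Sketch of frame-rate-independent movement with matter.js: scale the force
// by the real elapsed time so a slow frame applies proportionally more.
var engine = Matter.Engine.create();
var player = Matter.Bodies.rectangle(100, 100, 40, 40);
Matter.World.add(engine.world, player);

var keys = { right: false };          // filled in by your key handlers
var TARGET_DELTA = 1000 / 60;         // the frame time the force was tuned for
var BASE_FORCE = 0.005;               // force per 60 fps frame (arbitrary value)
var lastTime = performance.now();

function gameLoop(now) {
  var delta = now - lastTime;
  lastTime = now;

  // At 30 fps, delta is ~33 ms, so the scale is ~2 and each frame applies twice the force.
  var scale = delta / TARGET_DELTA;

  if (keys.right) {
    Matter.Body.applyForce(player, player.position, { x: BASE_FORCE * scale, y: 0 });
  }

  Matter.Engine.update(engine, delta); // step the physics by the real elapsed time
  requestAnimationFrame(gameLoop);
}

requestAnimationFrame(gameLoop);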

Running into AngularJS performance issues with web workers, fast timers, and $scope.$apply()

I have a timer that runs in a web worker with a 10 millisecond interval. Each time the timer ticks, a function is called in the controller which increments a variable. This variable is used by a bootstrap progress bar on my page.
The problem that I'm encountering is that the progress bar doesn't update unless I call $scope.$apply() in the function call where the value gets updated.
Meanwhile, I have an array with a bunch of complex objects (100+) in it on the $scope. Since I need to call $scope.$apply() for the view to pick up the changes every time my timer ticks, it also re-checks this list of objects (every 10 ms), which is slowing down my application.
Does anyone have any ideas as to how I could potentially resolve this issue? Please let me know if I can provide additional details.
If the elements for the 100+ objects aren't all actually visible on screen at any one time, you can include only the ones on screen in the DOM (and so only have watchers for them) by using something like https://github.com/kamilkp/angular-vs-repeat (a colleague of mine had to hack it slightly to get it to do exactly what was needed; I forget the details).
If you know the variables only need to be updated in a certain $scope, you can call $scope.$digest() on that $scope. As opposed to $apply(), $digest() will only run the watchers on that $scope (and its children) rather than throughout the application.
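For example, something along these lines (assuming the worker message carries the new value; progress is a placeholder for whatever your progress bar binds to):

// Run only this scope's watchers instead of a full application-wide digest.
worker.onmessage = function (e) {
  $scope.progress = e.data;   // placeholder: however you derive the new value
  $scope.$digest();           // cheaper than $scope.$apply() for scope-local changes
};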
An update every 10 milliseconds is extremely frequent. If everything keeps up, this would be 100 updates a second: around 4 times the frame rate of a lot of video formats. A very simple way of speeding things up is to reduce this considerably.
One way I can think of doing this that would probably fit in with most architectures is to use a throttle function, such as _.throttle from lodash:
var throttledApply = _.throttle(function () { $scope.$apply(); }, 500);
And then when you receive a message, you would have something like:
worker.onmessage = function(e) {
  // ... Other processing code ...
  throttledApply();
};
If you're using the Bootstrap progress bar, it should still give a smooth transition between displayed values, even if the differences between them are large.

Moving object performance - ThreeJs

Context:
I'm working on a pretty simple THREE.JS project, and it is, I believe, optimized in a pretty good way.
I'm using a WebGLRenderer to display lots of Bode plots extracted from an audio signal every 50 ms. This is pretty cool, but obviously, the more Bodes I display, the laggier it gets. In addition, the Bodes move at a constant speed, leaving room for new ones to be displayed.
I'm now at the point where I have implemented every "basic" optimization I found on the Internet, and I manage to get a constant 30 fps with about 10,000,000 lines displayed, on a rather weak computer (nVidia GT 210 and Core i3 2100...).
Note also that I'm not using any lights or reflections... only basic lines =)
As this is a work project, I'm not allowed to show screenshots/code, sorry...
Current implementation:
I'm using an array to store all my Bodes, which are each displayed via a THREE.Line.
FYI, 2000 THREE.Line objects are currently used.
When a Bode has been displayed and moved for 40 s, it is deleted and its THREE.Line is re-used for another one. Note that to move these, I'm modifying the THREE.Line.position property.
Note also that I have already disabled matrix autoUpdate on my scene and objects, as I'm doing it manually. (Thanks for pointing that out, Volune.)
My Question:
Does modifying THREE.Line.position trigger heavy calculations that the renderer has to redo, or is three.js aware that my object did not otherwise change and able to avoid them?
In other words, I'd like to know whether rendering/updating the same object after it has just been translated is heavier for the rendering process than leaving it alone, without updating its matrix and so on.
Is there any sort of low-level optimization in three.js for rendering the same objects many times? Is that optimization cancelled when I move my objects?
If so, I have another approach in mind: using only two big Meshes that follow each other, but this involves merging/deleting parts of their geometries every frame... Might that be better?
Thanks in advance.
I found in the sources (here and here) that mesh matrices are updated each frame whether the position changed or not.
This means that modifying the position does not induce heavy calculation by itself. It also means that a lot of matrices are updated and a lot of uniforms are sent to the graphics card each frame.
I would suggest trying your idea with one or two big meshes. This should reduce the JavaScript computations internal to THREE.js, and the only big communication with the graphics card will be for the big buffers.
Also note that there is a WebGL function, bufferSubData (MSDN documentation), for updating parts of a buffer, but it does not seem to be usable in THREE.js yet.
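To illustrate the one-or-two-big-meshes idea, here is a rough sketch using a single BufferGeometry whose vertices are rewritten in place each frame. MAX_POINTS, moveEverything and the scene setup are assumptions, not code from the question, and older three.js versions use addAttribute instead of setAttribute:

// One big LineSegments object holding every Bode line in a single buffer.
var MAX_POINTS = 1000000;
var positions = new Float32Array(MAX_POINTS * 3);

var geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

var scene = new THREE.Scene();
var lines = new THREE.LineSegments(geometry, new THREE.LineBasicMaterial());
lines.frustumCulled = false;          // the bounds change every frame anyway
scene.add(lines);

// Moving all Bodes now means shifting vertices on the CPU and re-uploading
// one buffer, instead of updating thousands of object matrices.
function moveEverything(dx) {
  for (var i = 0; i < positions.length; i += 3) {
    positions[i] += dx;               // translate along x
  }
  geometry.attributes.position.needsUpdate = true;   // flag the buffer for re-upload
}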

RequestAnimationFrame for multiple canvases

I've got a page with layered <canvas> elements, as explained in the answer here. Together the canvases make up an animation, so each is cleared and redrawn as necessary.
Now I'm trying to incorporate requestAnimationFrame using the cross-browser shim, but I don't really know what requestAnimationFrame is doing behind the scenes.
Is it okay to have it update multiple canvases in each loop? Should each canvas have its own loop? Is the answer browser dependent?
Updating all the canvases in a single requestAnimationFrame is perfectly okay.
If the canvases are independent from each other and appear on different sections of the page, then you might want to use individual requestAnimationFrame handlers, passing the canvas element as the second argument. That way, only the currently visible canvases get updated. (Passing an element as the second argument is WebKit-specific, though.)
What requestAnimationFrame does is tell the browser that you would like to update the appearance of the page. The browser calls the callback when the page or element is next up for a redraw. And that’s going to happen when the page/element is visible, and never more often than the screen refresh rate.
Using requestAnimationFrame simply lets the browser control when reflows/repaints on the page happen.
It would be better to alter all the canvases in one callback, so the browser can repaint them in one go.
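As a sketch of that single-callback approach (the layer ids and the placeholder drawing code are made up for illustration):

// One requestAnimationFrame callback repaints every layered canvas per frame.
var layers = ['background', 'sprites', 'ui'].map(function (id) {
  var canvas = document.getElementById(id);
  return { canvas: canvas, ctx: canvas.getContext('2d') };
});

function drawLayer(layer, timestamp) {
  // Placeholder drawing: a small square sliding across each layer.
  layer.ctx.fillRect((timestamp / 10) % layer.canvas.width, 20, 10, 10);
}

function frame(timestamp) {
  layers.forEach(function (layer) {
    layer.ctx.clearRect(0, 0, layer.canvas.width, layer.canvas.height);
    drawLayer(layer, timestamp);
  });
  requestAnimationFrame(frame);       // all canvases repaint in the same frame
}

requestAnimationFrame(frame);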
This explanation of requestAnimationFrame was pretty helpful
The shim merely falls back to a timer if it is not available.
You should be fine. Update all of them in the same single loop.
requestAnimFrame isn't "tied" to a canvas or anything, just the function you pass it. So you can use it with three canvases or zero canvases just fine.
Put your canvas code in your JS file in this format, so requestAnimationFrame for multiple canvases will work:
(function(){ ... })();
