matter.js | Fixed Update problem for applyForce - javascript

I am implementing player movement with applyForce using matter.js.
I am checking for pressed keys and applying force to my character in my game loop, which is normally called 60 times per second. But the problem begins when the FPS drops: if the loop is called only 30 times per second, how can I apply the same amount of force as when the FPS was 60?
Is there any analog of FixedUpdate like in Unity?

This is a classic problem in game development. One way to solve it is, instead of applying the same amount of force on every update, to check a clock to see how much time has passed since the last update (e.g. call performance.now() in every update), then multiply the force you want to apply by the amount of time that has passed.
I don't think this will work perfectly in all situations; with small, fast-moving objects in particular, you might find objects clipping through each other. But I think it will be good enough for most cases, and you should be able to code it by hand.
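A minimal sketch of that approach, assuming a Matter.js body stored in a variable called player and a keys object tracking pressed keys (both names, and the force value, are illustrative):

let lastTime = performance.now();
const BASE_FORCE = 0.005;       // force magnitude tuned for a 60 FPS update (made-up value)
const BASE_STEP = 1000 / 60;    // the frame time that force was tuned for

function update() {
  const now = performance.now();
  const elapsed = now - lastTime; // ms since the last update; ~16.7 at 60 FPS, ~33.3 at 30 FPS
  lastTime = now;

  if (keys.right) {
    // Scale the force by how much longer this frame was than the 60 FPS baseline,
    // so a 30 FPS loop applies roughly twice the force half as often.
    const scale = elapsed / BASE_STEP;
    Matter.Body.applyForce(player, player.position, { x: BASE_FORCE * scale, y: 0 });
  }

  requestAnimationFrame(update);
}
requestAnimationFrame(update);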

Related

JavaScript - Best way to have multiple events fire at different timings

I'm working on a Node-RED flow (which runs on Node.js, so it is JavaScript-based) that reads several sensors at different intervals and triggers various other events.
Instead of having a timer for each event firing at its own frequency (I was afraid that would cause lag), I have one single timer that increments a seconds counter indefinitely, and at every second each event does a modulo operation on the counter to see whether it should fire.
This seems janky, as multiple divisions occur every second. Furthermore, as the counter gets bigger, I'm afraid those divisions will get slower, and that the integer will eventually overflow its maximum value after running for long periods of time.
So I'm now second-guessing my approach and asking the community whether individual timers are actually better for performance, or whether there's an even better way to do this. For the record, I will have maybe 20-40 events firing at different intervals.
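For what it's worth, a JavaScript number used as a seconds counter stays exact up to Number.MAX_SAFE_INTEGER (about 9 × 10^15), so counting one per second it would take hundreds of millions of years to overflow, and a modulo operation does not get slower as the value grows. If the unbounded growth still bothers you, wrap the counter. A rough sketch of the single-timer approach described above (the events and their intervals are made up for illustration):

const events = [
  { everySeconds: 5,  fire: () => console.log('read sensor A') },
  { everySeconds: 30, fire: () => console.log('read sensor B') },
];

// Wrap the counter at a value that every interval divides, so (seconds % interval)
// behaves the same after wrapping and the counter never grows unbounded.
const WRAP = 3600;
let seconds = 0;

setInterval(() => {
  seconds = (seconds + 1) % WRAP;
  for (const e of events) {
    if (seconds % e.everySeconds === 0) e.fire();
  }
}, 1000);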

How can I allow Easeljs Tickers to remain active when a user switches tabs

I am making a game using CreateJS and I have a problem with the game pausing automatically once I switch tabs. I have researched this topic and believe I've found that it isn't CreateJS pausing the Tickers but the browsers themselves. This has something to do with the Page Visibility API, which knows when the document is hidden or visible; once it is hidden, the browser throttles that document's requestAnimationFrame callbacks and setIntervals, which makes the game seem paused. This saves CPU and battery.
I need my game to keep running in the background even if the user switches tabs. What is the best way to do this? I just want to mention that I am using EaselJS with the Tickers, if that matters.
Please correct me if I was wrong about anything I said; I am still a beginner, and by correcting me you will help me understand the real problem. Thank you for your time.
The behaviour you are describing is expected: in a background tab, the browser throttles JavaScript execution to about 1 FPS.
The usual approach to get around this is to make sure your application/animations are not tied to a specific FPS. For example, here is a sample where a shape moves across the screen: http://jsfiddle.net/lannymcnie/4neobe00/2/
- It takes about 5 seconds to go from the left to the right
- A count shows how many times it resets
If you just add a fixed amount to the sprite's position each tick (as with the blue shape), it will only get updated about once per second while the tab is hidden. I used 1.66 because it closely matched the speed of the other sprite.
lockedShape.x = (lockedShape.x + 1.66); // Moves a certain amount per tick
However, the Ticker provides a delta in the tick event, which is the number of milliseconds elapsed since the last tick (around 17 at a reliable 60 FPS). This gives you a multiplier you can use to determine how long the frame took. In the sample, an index is incremented by 1 × event.delta:
index += event.delta;
Then the position is determined based on the index:
shape.x = (index/10)%500; // divide by 10 to slow it down, modulus 500 to make it loop
We can even determine how many times the item has "looped", because we know that if you divide that index by the width, it will give you the number of loops:
text.text = (index/10/500 |0).toString(); // rough position divided by 500, rounded.
So using this approach, you can make the rest of your content run at a consistent speed, since you know how much time has elapsed since the last tick. There is an official tutorial with a sample that does basically the same thing, which is probably what was in my head when I made my own example.
Note that TweenJS uses time-based tweens, and you can set framerates on MovieClips and Sprites in EaselJS, which uses this approach.
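Putting those snippets together, a minimal time-based tick handler might look like this (a sketch assuming the stage, shape, and text objects from the fiddle already exist):

let index = 0;

createjs.Ticker.addEventListener('tick', (event) => {
  index += event.delta;                            // event.delta is elapsed milliseconds, so index tracks real time
  shape.x = (index / 10) % 500;                    // position derives from elapsed time, not from tick count
  text.text = ((index / 10 / 500) | 0).toString(); // number of completed loops
  stage.update(event);                             // redraw the stage
});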
Hope that helps.

What's the harm in having a long maxDelayTime?

I'm building a live-looping application that uses several delay nodes. I initialize the delay nodes by setting the maxDelayTime to be slightly longer than the delayTime, because this seems like the right thing to do. I don't know if it actually makes a difference, but it seems wasteful to set a maxDelayTime of e.g. 3 minutes when I only need a delay of ~10-15 seconds.
However, I want the user to be able to resize the loop, and that is where I'm having problems. If the user wants the loop to be smaller, I can set delayTime to be a smaller number, and all is good. However, the user can't make the loop larger, because maxDelayTime cannot be overwritten. I COULD recreate all the delay nodes with an appropriate maxDelayTime, but the delay nodes are connected to/from a bunch of other nodes, so I'd rather not recreate the whole thing.
So my question is:
Is it a bad idea to create 8 delay nodes with maxDelayTime of 3 minutes, even though delayTime will typically be less than 30 seconds, just in case the user wants to make a longer loop?
Yes, that's a bad idea.
The best way to think of this is that maxDelayTime sets the size of the internal buffer that is continually updated, while delayTime just alters the lookup point in that buffer. If you set maxDelayTime aggressively large, you'll chew up a large amount of memory: 8 delay nodes in stereo at 44.1 kHz with a maxDelay of 3 minutes take up roughly 500 megabytes (8 nodes × 2 channels × 44,100 samples/s × 180 s × 4 bytes per sample ≈ 508 MB). On a mobile device, that's a huge amount; even on a desktop, it's quite a bit.
I'd probably have boundaries around some inflection points that swap out for new nodes (e.g. >30 sec, >2 min) and set the maxDelay for those sizes. If your default was 30 seconds, for example, your 8 nodes take "only" about 82 MB.
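A sketch of that tiered idea, assuming an existing AudioContext in ctx (the tier values echo the inflection points above; the helper names are made up):

const TIERS = [30, 120, 180]; // maxDelayTime tiers, in seconds

function makeDelay(ctx, delaySeconds) {
  // Pick the smallest tier that fits, so the internal buffer stays as small as possible.
  const max = TIERS.find((t) => delaySeconds <= t) ?? TIERS[TIERS.length - 1];
  return new DelayNode(ctx, { maxDelayTime: max, delayTime: delaySeconds });
}

function resizeDelay(ctx, node, newDelaySeconds) {
  // delayTime.maxValue reflects the maxDelayTime the node was created with.
  if (newDelaySeconds <= node.delayTime.maxValue) {
    node.delayTime.value = newDelaySeconds; // fits in the current buffer: just move the read point
    return node;
  }
  // Crossed a tier boundary: allocate a node with a bigger buffer. Reconnecting it
  // in place of the old one is up to the surrounding graph code.
  node.disconnect();
  return makeDelay(ctx, newDelaySeconds);
}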

RequestAnimationFrame behaviour... how does it work?

I have been playing around with requestAnimationFrame in Chrome and wondered how it actually behaves.
When I load my canvas and draw, I get a steady 60 FPS. If I scroll around, using an offset to click and drag around a map, the FPS drops (as expected); once I stop dragging around the map, the FPS creeps back up to its steady 60 FPS, again as expected.
Here, however, is where I wonder whether this is deliberate behaviour of requestAnimationFrame. If I drag the map around until the FPS drops below 30 for an extended period of time, then stop dragging, the FPS climbs back up, but this time it stops at 30 FPS and will not go higher. It appears as if the browser has decided that 30 FPS is the best option.
Is this deliberately done by the browser? I have been trying to find out whether this is the case, because it will go back to 60 FPS if I don't drop below 30 FPS for too long.
Yes, it's something that the browsers are capable of doing.
"How it's supposed to work" isn't really something that anybody can answer here, simply because what happens under the hood is 100% browser-specific.
But it's very safe to say that yes, the browser is capable of deciding when you should be locked into a 30 Hz refresh rather than a 60 Hz refresh.
An illustration of why this is the case:
requestAnimationFrame() is also tied into the Page Visibility API if the vendors want (very true for Chrome).
Basically, if the page isn't visible, they can slow the requestAnimationFrame() updates down to a few times per second or pause them altogether.
Given that knowledge, it's entirely plausible to believe that one of two things is happening:
- they're intentionally capping you at 30fps, because they feel your experience will be more stable there, based on averaged performance data, or
- they're intentionally throttling you, but there's some bug in the system (or some less-than-lenient math) which is preventing you from going back up to 60 after the coast has cleared; and if they are using averaged performance data, that might be part of the issue.
Either way, it is at the very least mostly intentional, with the only unanswered question being why it sticks at 30fps.
Did you leave it alone for 20 or 30 minutes after the fact, to see if it went back up at any time, afterwards?
You can run a Timeline analysis from Chrome DevTools to look for maverick JS that is slowing down your animation times.
https://developers.google.com/chrome-developer-tools/docs/timeline
rAF will find the best place to paint your changes, not the closest one. So if the JS in the rAF callback takes two frames' worth of time (around 16 ms per frame on your 60 Hz hardware), your FPS will drop to 30.
From Paul Irish via Boris
Actually, “It’s currently capped at 1000/(16 + N) fps, where N is the number of ms it takes your callback to execute. If your callback takes 1000ms to execute, then it’s capped at under 1fps. If your callback takes 1ms to execute, you get about 60fps.” (thx, Boris)
http://www.paulirish.com/2011/requestanimationframe-for-smart-animating/
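If you want to see what the browser has decided, a small sketch that logs the time between requestAnimationFrame callbacks will show whether you are being held at ~33 ms (30fps) rather than ~16.7 ms (60fps):

let last = performance.now();

function measure(now) {
  // now is the high-resolution timestamp the browser passes to rAF callbacks.
  console.log((now - last).toFixed(1) + ' ms since the previous frame');
  last = now;
  requestAnimationFrame(measure);
}
requestAnimationFrame(measure);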

How to determine the best "framerate" (setInterval delay) to use in a JavaScript animation loop?

When writing a JavaScript animation, you of course make a loop using setInterval (or repeated setTimeout). But what is the best delay to use in the setInterval/setTimeout call(s)?
In the jQuery API page for the .animate() function, the user "fbogner" says:
Just if anyone is interested: Animations are "rendered" using a setInterval with a time out of 13ms. This is quite fast! Chrome's fastest possible interval is about 10ms. All other browsers "sample" at about 20-30ms.
Any idea how jQuery determined to use this specific number?
Started bounty. I'm hoping someone with knowledge of the source code behind Chromium or Firefox can provide some hard facts that might back up the decision of a specific framerate. Or perhaps a list of animations (or frameworks) and their delays. I believe this makes for an interesting opportunity to do a bit of research.
Interesting - I just took the time to look at Google's Pac-Man source to see what they did. They set up an array of possible FPSes (90, 45, 30), start at the first one, and then each frame they check the "slowness" of the frame (amount the frame exceeded its allotted time). If the slowness exceeds 50ms 20 times, the framerate is notched down to the next in the list (90 -> 45, 45 -> 30). It appears that the framerate is never raised back up, presumably because the game is so short-lived that it wouldn't be worth the trouble to code that.
Oh, and the setInterval delay is of course set to 1000 / framerate. They do, in fact, use setInterval and not repeated setTimeouts.
I think this dynamic framerate feature is pretty neat!
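Reconstructed as a sketch (the 90/45/30 list and the 50 ms / 20-occurrences rule are as described above; everything else is illustrative):

const FPS_OPTIONS = [90, 45, 30];
let fpsIndex = 0;
let slowCount = 0;
let lastFrame = performance.now();
let interval = setInterval(frame, 1000 / FPS_OPTIONS[fpsIndex]);

function frame() {
  const now = performance.now();
  const allotted = 1000 / FPS_OPTIONS[fpsIndex];
  const slowness = (now - lastFrame) - allotted; // how far the frame exceeded its allotted time
  lastFrame = now;

  if (slowness > 50 && ++slowCount >= 20 && fpsIndex < FPS_OPTIONS.length - 1) {
    fpsIndex++;   // notch down: 90 -> 45, then 45 -> 30; never raised back up
    slowCount = 0;
    clearInterval(interval);
    interval = setInterval(frame, 1000 / FPS_OPTIONS[fpsIndex]);
  }

  // ...game update and draw work goes here...
}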
I would venture to say that a substantial fraction of web users have monitors that refresh at 60 Hz, which translates to one frame every 16.66 ms. So to make the monitor the bottleneck, you need to produce frames faster than one every 16.66 ms.
There are two reasons you would pick a value like 13 ms. First, the browser needs a little bit of time to repaint the screen (in my experience, never less than 1 ms), which puts you at, say, updating every 15 ms. That happens to be a very interesting number: the standard timer resolution on Windows is 15 ms (see John Resig's blog post). I suspect that a well-written 15 ms animation looks very nearly the same on a wide variety of browsers/operating systems.
FWIW, fbogner is plain wrong about non-Chrome browsers firing setInterval every 20-30ms. I wrote a test to measure the speed of setInterval firing, and got these numbers:
Chrome - 4ms
Firefox 3.5 - 15ms
IE6 - 15ms
IE8 - 15ms
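A test along those lines is easy to reproduce today; this sketch asks for a 1 ms interval and reports the average delay actually delivered, which exposes the browser's minimum timer resolution:

const samples = [];
let prev = performance.now();

const id = setInterval(() => {
  const now = performance.now();
  samples.push(now - prev);
  prev = now;

  if (samples.length >= 500) {
    clearInterval(id);
    const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
    console.log('average setInterval delay: ' + avg.toFixed(2) + ' ms');
  }
}, 1); // ask for 1 ms; the browser clamps to its minimum (4 ms in modern browsers)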
The same algorithm as runnable JavaScript (fpsWanted is just a number; it can be changed while executing, as long as frameBudget is recomputed):

let fpsWanted = 25;                  // target framerate, can be adjusted at runtime
let frameBudget = 1000 / fpsWanted;  // milliseconds available per frame

function doAnimation() {
  // ...your per-frame drawing/update work goes here...
}

function loop() {
  const t1 = performance.now();
  doAnimation();
  const t2 = performance.now();
  const animationTime = t2 - t1;

  if (animationTime > frameBudget) {
    // The wanted FPS cannot be reached. You can:
    // 1. decrease fpsWanted to see if a lower framerate can be achieved, or
    // 2. do nothing, because you want to get all you can from the CPU.
    setTimeout(loop, 0);
  } else {
    // The wanted FPS can be reached. You can:
    // 1. wait out the rest of the budget to keep a constant framerate of fpsWanted,
    // 2. increase the framerate if you want, or
    // 3. do nothing, because you want to get all you can from the CPU.
    setTimeout(loop, frameBudget - animationTime);
  }
}
loop();
Of course there can be variations of this, but it is the basic algorithm and it is valid for any kind of animation.
When writing loops for animations, it's best to find a balance between the speed of the loop and how much work needs to be done.
For example, if you want to slide a div across the page within one second as a nice, timely effect, you would skip coordinates and use a reasonably fast loop time, so the effect is noticeable but not jumpy.
It's a trial-and-error thing: you have to take the work, the timing, and the browser's capability into account so that it doesn't only look nice in one browser.
The numbers quoted by fbogner have been tested.
Browsers throttle JS activity to a certain degree so that the page stays usable at all times.
If your JavaScript were allowed to run every 5 ms, the browser runtime would have much less CPU time to refresh the rendering or react to user input (clicks), because JavaScript execution blocks the browser.
I think the Chrome devs allow you to run your JavaScript at much shorter intervals than the other browsers because their V8 JavaScript engine compiles the JavaScript; it therefore runs faster, and the browser is not blocked for as long as it is with interpreted JS code.
But it's not just that the engine is fast enough to permit shorter intervals: the devs have certainly tested for the shortest interval they can allow without blocking the browser for too long.
Don't know the reasoning behind jQuery's interval time, as 13 ms translates to roughly 77 fps, which is very fast. The "standard" framerate used in film is 24-25 fps, and it is fast enough that the human eye won't notice any jittering. 25 fps translates to 40 ms, so to answer your question: anything below 40 ms is enough for an animation.
