JavaScript - Best way to have multiple events fire at different timings

I'm working on a Node-red flow (which basically uses nodejs, so JavaScript-based) that reads several sensors at different intervals and triggers other various events.
Instead of having a timer for each event to fire at a specific frequency (I was afraid it would cause lag), I have one single timer that increments a seconds counter indefinitely, and every second each event does a modulo operation on that counter to see whether it should fire.
This seems janky, as multiple divisions occur each second. Furthermore, as the counter gets bigger, I'm afraid those divisions will get slower and the integer will eventually overflow its maximum value after running for long periods of time.
So I'm now second-guessing my approach and asking the community if individual timers are actually better for performance, or if there's an even better way to do this. For the record, I will have maybe 20-40 events to fire at different intervals.
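To make the trade-off concrete, here is a minimal sketch of the two approaches being compared; the sensor handlers and the chosen intervals are made up for illustration:
// Hypothetical sensor handlers, stand-ins for the real reads:
const readTemperature = () => console.log('read temperature');
const readHumidity = () => console.log('read humidity');

// Approach 1: one 1-second "heartbeat" timer plus modulo checks.
let tick = 0;
const events = [
    { every: 5,  run: readTemperature },
    { every: 30, run: readHumidity },
];
setInterval(() => {
    // Wrap the counter at a common multiple of all intervals (here: one day)
    // so it never grows without bound or overflows.
    tick = (tick + 1) % 86400;
    for (const e of events) {
        if (tick % e.every === 0) e.run();
    }
}, 1000);

// Approach 2: one timer per event. A few dozen timers is not a heavy load
// in Node.js; they all share the same event loop and timer queue.
setInterval(readTemperature, 5 * 1000);
setInterval(readHumidity, 30 * 1000);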

Related

matter.js | Fixed Update problem for applyForce

I am making a player movement with applyForce using matter.js.
I am checking for pressed keys and applying force to my character in my game loop, which is normally called 60 times per second. But the problem begins when FPS drops. If the loop is called only 30 times per second, how can I applyForce the same amount when FPS was 60?
Is there any analog of FixedUpdate like in Unity?
This is a classic problem in game development. One way you can solve this problem is instead of applying the same amount of force in every update, you can check a clock to see how much time has passed since the last update (e.g. call performance.now() in every update). Then multiply the amount of force you want to add by the amount of time that has passed.
I don't think this will work perfectly in all situations. Especially if you have small, fast moving objects, you might find objects clipping through each other. But I think this will be good enough for most cases, and you should be able to code it by hand.
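A minimal sketch of that idea, assuming a Matter.js setup where player, keys and baseForce already exist (they are placeholders here, not part of the original question):
// Scale the force by how much real time passed since the last update.
const FRAME_MS = 1000 / 60;          // frame time the force was originally tuned for
let lastTime = performance.now();

function update() {
    const now = performance.now();
    const scale = (now - lastTime) / FRAME_MS;   // 1 at 60 fps, ~2 at 30 fps
    lastTime = now;

    if (keys.right) {                // hypothetical input state
        Matter.Body.applyForce(player, player.position, {
            x: baseForce.x * scale,
            y: baseForce.y * scale,
        });
    }
    requestAnimationFrame(update);
}
requestAnimationFrame(update);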

Javascript OnScroll performance comparison

Update: Similar question with a very good answer that shows how to use requestAnimationFrame with scroll in a useful way:
scroll events: requestAnimationFrame VS requestIdleCallback VS passive event listeners
So let's say I want to add some expensive action on my site triggered by scrolling. For example, I'm using parallax effects in my jsfiddle.
Now I keep reading it must not be bound to the event directly, sometimes followed by snippets that are meant to be better. Just some examples:
Attaching JavaScript Handlers to Scroll Events = BAD!
How to develop high performance onScroll event?
How to make faster scroll effects?
60FPS onscroll event listener
What they say is basically don't do this:
// Bad guy 1
$(window).scroll( function() {
    animate(ex1);
});
or this
// Bad guy 2
window.addEventListener('scroll', onScroll, false);
function onScroll() {
    animate(ex2);
}
But use timeouts, intervals, requestAnimationFrame and whatnot, for example:
// Good guy
var scrolling1 = false;
$(window).scroll( function() {
    scrolling1 = true;
});
setInterval( function() {
    if (scrolling1) {
        scrolling1 = false;
        animate(ex3);
    }
}, 50 );
So, I went and added the options I found in the links above to a jsfiddle that tries to compare them by adding a counter to every approach, like so:
// Test
var counter = 0;
$(window).scroll( function() {
    counter = counter + 1;
    // output result of counter
    animate(ex1);
});
Best to check the complete jsfiddle
Outcome: Everything that runs smoothly ends up with about the same number of calculations. If I can live with choppy effects, maybe I can save some resources. And against everything I read, this seems logical to me!
First question:
Am I missing something or is this a valid test? If it's invalid, how could I test correctly?
Edit: To clarify, I want to test whether any of the above methods save performance at all.
Second question:
If it is valid, why is everyone nervous about onscroll? If fluid animations require 5000 calculations over the complete site, there's no way to change it anyway?
(Well, sometimes I use checks to determine whether an object is in the viewport or not, but honestly I don't even know whether those checks are as expensive as the code they prevent, especially if they involve five different values such as offset, windowHeight, scrollTop, getBoundingClientRect and outerHeight...)
So, @SirPeople has already answered your first question correctly: it is indeed a good test of how often the animate function gets called, but it's a bad test for comparing the performance of the different snippets.
This is a performance recording of the execution:
The function animate isn't expensive at all. I took a performance recording (next picture), which shows that it takes between 0.64ms and 1.29ms in the one iteration I looked at (points 1-5). And once the function is done, the repaint takes no time at all (point 6), which might be because the page has almost no content. When we take a look at the time, we can see that all five animation functions and the repaint happen in less than 10ms, which, under normal circumstances, means that we can get a fluid 60fps animation (point 7).
Also, if we want to compare onscroll event listeners, we need to test each one on its own and compare the results. If one of the listeners were really blocking, it would affect the whole page, and without performance debugging you wouldn't know which one it was.
I made two jsfiddles window.scroll and RAF. And, to my surprise, there does not seem to be any difference.
Why are people concerned about this?
As you can see in the jsfiddles linked above, if the event handlers get too large, the entire page is going to lag.
Now what?
I'm no performance guru myself, but:
Perhaps one of the other solutions is correct
We can mark your event listeners as passive, although in my test it didn't really improve at all
https://developers.google.com/web/updates/2016/06/passive-event-listeners
We can optimize the event listener by removing parallax effects
There's also this new thing called Intersection Observer, which is supposed to be much faster; I didn't test it (see the sketch below)
https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API
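As a rough illustration of those last two options (not tested against the fiddles above), a passive scroll listener and an IntersectionObserver that only flags elements while they are on screen might look like this; the '.parallax' selector and 'in-view' class are made-up names for the sketch:
// Passive listener: promises the browser the handler never calls
// preventDefault(), so scrolling doesn't have to wait for it.
window.addEventListener('scroll', onScroll, { passive: true });

// IntersectionObserver: get notified when elements enter or leave the
// viewport instead of recomputing offsets on every scroll event.
const observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
        entry.target.classList.toggle('in-view', entry.isIntersecting);
    });
});
document.querySelectorAll('.parallax').forEach(function (el) {
    observer.observe(el);
});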
I am not totally sure I understood your questions and all your statements correctly, but I will try to give you an answer:
Am I missing something or is this a valid test? If it's invalid, how could I test correctly?
It is a valid test if you are measuring the number of times a function has been called; that count will of course depend on the browser, the OS, whether the page is GPU-accelerated, and the other benchmark caveats already mentioned in the comments on your question.
If we consider that measurement correct, then using timeouts or requestAnimationFrame can save work because we are basically following the principles of debouncing and throttling: we do not want to call a function more often than is needed. With a timer we queue fewer function calls, and with requestAnimationFrame the calls are queued right before repainting and executed once per frame. With plain timeouts it can still happen that calculations overlap if they are very heavy.
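A minimal sketch of that throttling idea with requestAnimationFrame (essentially the same pattern as the "good guy" snippet above, but synced to repaints):
var scheduled = false;
$(window).scroll(function () {
    if (!scheduled) {
        scheduled = true;
        requestAnimationFrame(function () {
            scheduled = false;
            animate(ex3);   // at most one call per frame, right before repaint
        });
    }
});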
I found a better answer on why to use requestAnimationFrame, which explains the main problems with animations in the browser, such as shearing, flickering and frame skipping. It also includes a good demo.
I think your testing method is correct, but you should also interpret it correctly: maybe the call counts come out nearly the same because of your hardware and engine, but, as said, debouncing and throttling are a performance relief.
Here is also one more article from Twitter that supports not attaching handlers directly to window scroll. (Disclaimer: that article is from 2011, and browsers have since dealt with scroll optimizations in different ways.)
why is everyone nervous about onscroll? If fluid animations require 5000 calculations over the complete site, there's no way to change it anyway?
I do not think the nervousness is about the performance hit itself; rather, the user experience gets worse because of the animation problems mentioned above that over-calling scroll can cause, and even if your timer gets out of sync you can still see the same 'performance' problems. People recommend saving calls on scroll because:
Persistence of vision doesn't require a super high frame rate, so it is useless to try to show images more often.
For more complex calculations or heavy animations, browsers are already working on optimizations; as you have checked, some browsers have already optimized these things compared with two, three or six years ago, when the articles you cite were written.

How can I allow Easeljs Tickers to remain active when a user switches tabs

I am making a game using CreateJS and I have a problem with the game pausing automatically once I switch tabs. I have researched this topic and I believe I have found that it isn't CreateJS pausing the Tickers but the browsers themselves. This has to do with the Page Visibility API, which knows when the document is hidden or visible; once hidden, I believe the browser throttles the rAF or setInterval calls of that document, which makes it seem paused. This saves CPU and battery so they don't burn out.
I need my game to always keep running in the background even if the user switches tabs. What is the best way I can do this? Just want to mention that I am using Easeljs with the Tickers if that matters.
Please correct me if I was wrong with anything I said. I am still a beginner and by correcting me, I will be able to understand the real problem. Thank you for your time.
The behaviour you are describing is expected; the browser will throttle JavaScript execution to about 1 FPS.
The usual approach to get around this is to make sure your application/animations are not tied to a specific FPS. For example, here is a sample where a shape moves across the screen: http://jsfiddle.net/lannymcnie/4neobe00/2/
It takes about 5 seconds to go from the left to the right
A count shows how many times it resets
If you just add a fixed amount to the sprite each tick (such as the blue shape), it will only get updated about once per second while the tab is in the background. I used 1.66 because it closely matched the speed of the other sprite.
lockedShape.x = (lockedShape.x + 1.66); // Moves a certain amount per tick
However, the Ticker provides a delta in the tick event, which is the number of milliseconds elapsed since the last tick (about 17 at a reliable 60 fps), so it acts as a multiplier you can use to account for how long the frame actually took. In the sample, an index is incremented by event.delta:
index += event.delta;
Then the position is determined based on the index:
shape.x = (index/10)%500; // divide by 10 to slow it down, modulus 500 to make it loop
We can even determine how many times the item has "looped", because if you divide that (scaled) index by the width, it gives you the number of loops:
text.text = (index/10/500 |0).toString(); // rough position divided by 500, rounded.
So using this approach, you can make the rest of your content run at a consistent speed, since you know how much time has elapsed since the last tick. Here is an official tutorial (which has a sample that does basically the same thing, which is what was probably in my head when I made my own example).
Note that TweenJS uses time-based tweens, and you can set framerates on MovieClips and Sprites in EaselJS, which uses this approach.
Hope that helps.
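Putting those fragments together, the time-based version of the tick handler might look roughly like this (the stage, shape and text objects, and the index variable, are assumed from the fiddle's setup code):
createjs.Ticker.addEventListener("tick", function (event) {
    // event.delta is the number of milliseconds since the previous tick, so
    // index tracks real elapsed time even when a background tab only ticks
    // about once per second.
    index += event.delta;
    shape.x = (index / 10) % 500;                   // ~100 px per second, looping at 500 px
    text.text = (index / 10 / 500 | 0).toString();  // number of completed loops
    stage.update(event);
});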

How much of an effect can CPU have on JavaScript setInterval

I wrote a small jquery plugin that basically converts all words in an html element to spans, makes them invisible and then animates them into view. I made it so that you can define the time that it's supposed to take to load the whole element, and based on tests the math seems to be right, but in practice it takes quite a bit longer.
See jsfiddle: http://jsfiddle.net/A2DNN/
Note the variables "per" and "ms", this basically tells it to process "per" number of words every "ms" milliseconds.
In the log you'll see it process 1 word every 1 ms, which should result in a MUCH faster loading time.
So I'm just wondering, is it possible that the CPU is forming a bottleneck here? That is, JS is fading items into view, which is handled by the CPU, and the CPU isn't very fast at graphical processing.
It sounds almost silly to ask, I'd expect these days a CPU would laugh at a small bit of work load like this.
It is due to a minimum timeout enforced by the browser's JavaScript implementation. You can't actually get a 1 ms timeout; setTimeout/setInterval delays are clamped to a minimum of around 4 ms (and older browsers clamped to roughly 10-15 ms). There has already been a discussion about that here.
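One way to make the plugin independent of that clamp is to decide how many words should already be visible from the elapsed time, instead of revealing a fixed number per tick. A rough sketch, where the '.word' selector and revealWord are hypothetical stand-ins for the plugin's internals:
const wordSpans = document.querySelectorAll('.word');   // hypothetical selector
const revealWord = (el) => { el.style.opacity = 1; };   // hypothetical reveal
const total = wordSpans.length;
const duration = 2000;                 // desired total reveal time in ms
const start = performance.now();
let shown = 0;

(function step() {
    const elapsed = performance.now() - start;
    // How many words *should* be visible by now, no matter how late the
    // timer actually fired because of the minimum-timeout clamp.
    const target = Math.min(total, Math.floor(elapsed / duration * total));
    while (shown < target) revealWord(wordSpans[shown++]);
    if (shown < total) setTimeout(step, 10);
})();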

Understanding JavaScript timer thread issues

I'm starting on a JavaScript MMORPG that will actually work smoothly. Currently, I created a demo to prove that I can move characters around and have them chat with each other, as well as see each other move around live.
http://set.rentfox.net/
Now, JavaScript timers are something I have not used extensively, but from what I know (correct me if I'm wrong), having multiple setIntervals running at the same time doesn't really work well because it's all on a single thread.
Let's say I wanted to have 10 different people nuking fireballs at a monster by using sprite background positioning with setInterval -- that animation would require 10 setIntervals doing repainting of the DOM for sprite background-position shifts. Wouldn't that be a bit buggy?
I was wondering if there was a way around all this, perhaps using Canvas, so that animations can all happen concurrently without creating an event queue and I don't have to worry about timers.
Hope that makes sense, and please let me know if I need to clarify further.
The issue with multiple setIntervals is twofold. The first is as you indicate, since all Javascript on browsers is (currently) single-threaded, one timer's execution may hold up the next timer's execution. (Worker threads are coming, though; Firefox already has them, as does Safari 4 [and maybe others].) The second is that the timer happens at a set interval, but if your handler is still running when that interval expires, the second interval is completely skipped. E.g., the timer can interfere with itself.
That last part needs more explanation: Say you have a setInterval at 10ms (which is the fastest you can reasonably expect any implementation to do it; many are clamped so that they don't go faster than that). If your handler takes 13ms, the interval that should have happened 10ms after it began will be completely skipped.
I usually use setTimeout for this kind of thing. When my handler is triggered, I do my work and then schedule the next event at the end of the handler. Then (within the bounds of what you can be certain of), I know the next event will happen at that interval.
For what you're doing, it seems like a single "pulse" timer would be best, working through whatever it needs to do on the pulse. Whether that pulse timer uses setInterval or setTimeout is a judgment call based on what you're seeing with your actual code.
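A minimal sketch of that single self-rescheduling "pulse", with placeholder update functions standing in for the real game logic:
const PULSE_MS = 30;                  // one shared pulse instead of many intervals

function updateFireballs()  { /* placeholder: advance sprite background positions */ }
function updateCharacters() { /* placeholder: movement, chat bubbles, etc. */ }

function pulse() {
    updateFireballs();
    updateCharacters();
    // Schedule the next pulse only after this one has finished, so a slow
    // frame delays the next pulse instead of silently skipping it.
    setTimeout(pulse, PULSE_MS);
}
setTimeout(pulse, PULSE_MS);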
+1 to T. J. Crowder, the answer was perfect. I strongly recommend learning to use Canvas over DOM nodes for game animation; the latter is slow and buggy, and will hang the browser in any non-trivial situation. OTOH, Canvas is much faster and can be hardware accelerated, and even has a 3D context if you need it.
