How can I make my setTimeout functions run at the same speed? - javascript

Preface: I have a demo of the problem on my personal site (I hope this is OK; if not, I can try to set it up on jsFiddle). I'm intending this question to be a little fun, while also trying to understand the time functions take to run in JavaScript.
I'm incrementing the value of progress bars on a timeout. Ideally (if functions run instantaneously) they should fill at the same speed, but in the real world, they do not. The code is this:
function setProgress(bar, myPer) {
    bar.progressbar({ value: myPer })
       .children('.ui-progressbar-value')
       .html(myPer.toPrecision(3) + '%')
       .attr('align', 'center');
    myPer++;
    if (myPer == 100) { myPer = 0; }
}

function moveProgress(bar, myPer, inc, delay) {
    setProgress(bar, myPer);
    if (myPer >= 100) { myPer = 0; }
    setTimeout(function() { moveProgress(bar, myPer + inc, inc, delay); }, delay);
}

$(function() {
    moveProgress($(".progressBar#bar1"), 0, 1, 500);
    moveProgress($(".progressBar#bar2"), 0, 1, 500);
    moveProgress($(".progressBar#bar3"), 0, .1, 50);
    moveProgress($(".progressBar#bar4"), 0, .01, 5);
});
Naively, one would think they should all run (fill the progress bar) at the same speed.
However, in the first two bars (if we call "setting the progress bar" a single operation), I'm performing one operation every 500 ms for a total of 100 operations to fill the bar; in the third, I'm performing one operation every 50 ms for a total of 1,000 operations to fill the bar; in the fourth, I'm performing one operation every 5 ms for a total of 10,000 operations to fill the bar.
What part of my code takes the longest and causes these speed differences, and how could it be altered so the bars still behave the way they do now (the fourth bar getting smaller increments) but also fill at the same speed?

The biggest problem with using setTimeout for things like this is that your code execution happens between timeouts and is not accounted for in the value sent to setTimeout. If your delay is 5 ms and your code takes 5 ms to execute, you're essentially doubling your time.
Another factor is that once your timeout fires, if another piece of code is already executing, it will have to wait for that to finish, delaying execution.
This is very similar to problems people have when trying to use setTimeout for a clock or stopwatch. The solution is to compare the current time with the time that the program started and calculate the time based on that. You could do something similar. Check how long it has been since you started and set the % based on that.
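As a rough sketch of that idea applied to the question's code (assuming the same jQuery UI setup and the setProgress function from the question; moveProgressByTime and totalTime are names made up for illustration, and 50000ms is just the nominal fill time of 100 steps at 500ms):
function moveProgressByTime(bar, totalTime, delay) {
    var start = Date.now();
    function step() {
        // Derive the percentage from wall-clock time, so timer drift and
        // the cost of the DOM work no longer affect the fill speed.
        var elapsed = Date.now() - start;
        var myPer = (elapsed / totalTime) * 100 % 100;
        setProgress(bar, myPer);    // setProgress as defined in the question
        setTimeout(step, delay);
    }
    step();
}

// All bars fill in the same nominal 50 seconds, whatever their update delay:
moveProgressByTime($(".progressBar#bar1"), 50000, 500);
moveProgressByTime($(".progressBar#bar4"), 50000, 5);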

What causes the speed difference? Two things: first, the fact that you're executing more code to fill the bottom bar (as you allude to in the second-to-last paragraph). Second, every time you set a timeout, your browser queues it up; the actual delay may be longer than what you specify, depending on how much is in the queue (see MDN on window.setTimeout).

Love the question. I don't have a very precise answer, but here are my 2 cents:
JavaScript is a very fast language that deals very well with its event loop and therefore eats setTimeouts and setIntervals for breakfast.
There are limits though, and they depend on a large number of factors, such as browser and computer speed, the number of functions you have queued on the event loop, the complexity of the code to execute, and the timeout values...
In this case, I think it's obvious that if you try to execute one function every 500ms, it is going to behave a lot better than executing it every 50ms, and therefore a lot better than every 5ms. If you take into account that you are running them all on top of each other, you can predict that the performance will not be optimal.
You can try this exercise:
Take the 500ms one and run it alone. Mark the total time it took to fill the bar (right here you will see that it's going to take a little longer than predicted; a rough way to take that measurement is sketched below).
Try executing two 500ms timeouts at the same time, and see that the total time just got a bit longer.
If you add the 50ms one to it, and then the 5ms one, you will see that you lose performance every time...
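For the first step, a minimal way to take that measurement might look like this (a sketch, not from the original post; it just counts 100 nominal 500ms ticks and reports the real elapsed time):
var t0 = Date.now();
var steps = 0;
setTimeout(function tick() {
    steps++;                       // one "fill step" done
    if (steps === 100) {           // the bar would now be full
        console.log("expected ~50000ms, actual: " + (Date.now() - t0) + "ms");
        return;
    }
    setTimeout(tick, 500);
}, 500);
The gap between expected and actual grows once you run the 50ms and 5ms bars alongside it.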

Related

time remainder of setInterval JavaScript

I want to be able to run code on the condition that there is time left on a setInterval.
let car = setInterval(function() {
    // stuff
}, 2000);

let vehicle = setInterval(function() {
    train();
}, 1000);

function train() {
    // stuff
    // and if car has time left, do this stuff too.
}
At the last line of the train function, I want to check the car's time remaining but don't understand how to go about such a concept.
I guess knowing the exact time left or simply knowing that there is time left is the same to me, so whichever is easier.
Remember that timers are potentially quite imprecise, and that setInterval's rules are...interesting...if your handler runs past when the next interval was meant to start. So that's important to keep in mind.
But if your goal is to know how long it's been since the start of the call to the timer vs when the timer is going to be set to fire again, it's a matter of remembering when you started
var start = Date.now();
...and then later checking how long it's been relative to the interval. For instance:
if (Date.now() - start >= 1900) {
    // It's been at least 1900ms since the start of the `vehicle` call,
    // so based on a 2000ms interval, in theory you have only 100ms left
}
But, again, note that timers can be quite imprecise if other things are keeping the UI thread busy.
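Putting those pieces together with the question's intervals might look like the sketch below. Note that resetting start inside car's callback is an addition here (it is not in the snippet above) so the check stays meaningful after car's first cycle; the 2000ms and 100ms figures are just the ones used in the question and the example:
let start = Date.now();

let car = setInterval(function() {
    start = Date.now();    // remember when car's interval (re)started
    // stuff
}, 2000);

let vehicle = setInterval(function() {
    train();
}, 1000);

function train() {
    // stuff
    let remaining = 2000 - (Date.now() - start);   // nominal time before car fires again
    if (remaining > 100) {
        // car still (roughly) has time left, so do the extra stuff too
    }
}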
This is not going to be possible. Timers are actually executed outside of the JavaScript runtime, and the time specified for them is never a precise measurement; it's actually the minimum time after which the timer could be expected to run. The actual amount of time depends on how busy the JavaScript runtime is: only when the runtime is idle at the timer's timeout can the timer's callback function be run.
So, essentially, you'd have to calculate how much longer the JavaScript execution stack will need in order to become idle, which you can't do because in order to get that time measurement, you have to execute the remaining code. So, you can't get the answer until there's no time left.
But, based on your code, it seems that you could just set a simple "flag" variable that gets set when car runs. So, inside train you can check to see if that flag has been set yet and if not, run the train code and then reset the flag.
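A minimal sketch of that flag idea (the variable names are made up, and exactly when you reset the flag depends on what "time left" should mean in your case):
let carRan = false;

let car = setInterval(function() {
    // stuff
    carRan = true;          // mark that car has fired since train last looked
}, 2000);

let vehicle = setInterval(function() {
    train();
}, 1000);

function train() {
    // stuff
    if (carRan) {
        // car has fired since we last checked; do the extra stuff here
        carRan = false;     // reset the flag for the next cycle
    }
}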

Do I get the number of operations per second this way?

Look at this code:
function wait(time) {
    let i = 0;
    let a = Date.now();
    let x = a + (time || 0);
    let b;
    while ((b = Date.now()) <= x) ++i;
    return i;
}
If I run it in a browser (particularly Google Chrome, but I don't think it matters) as wait(1000), the page will obviously freeze for a second and then return the calculated value of i.
Let it be 10,000,000 (I'm getting values close to this one). This value varies every time, so let's take an average number.
Did I just get the current number of operations per second of my machine's processor?
Not at all.
What you get is the number of loop cycles completed by the Javascript process in a certain time. Each loop cycle consists of:
Creating a new Date object
Comparing two Date objects
Incrementing a Number
Incrementing the Number variable i is probably the least expensive of these, so the function is not really reporting how long the increment itself takes.
Aside from that, note that the machine is doing a lot more than running a Javascript process. You will see interference from all sorts of activity going on in the computer at the same time.
When running inside a Javascript process, you're simply too far away from the processor (in terms of software layers) to make that measurement. Beneath Javascript, there's the browser and the operating system, each of which can (and will) make decisions that affect this result.
No. You can get the number of language operations per second, though the actual number of machine operations per second on a whole processor is more complicated.
Firstly, the processor is not wholly dedicated to the browser, so it is actually likely switching back and forth between prioritized processes. On top of that, memory access is abstracted away and the processor uses extra operations to manage memory (page flushing, etc.), and this is not going to be very transparent to you at any given moment. Physical properties also mean that the real clock rate of the processor is dynamic... You can see it's pretty complicated already ;)
To really calculate the number of machine operations per second you need to measure the clock rate of the processor and multiply it by the number of instructions per cycle the processor can perform. Again, this varies, but the manufacturer's specs will likely be a good enough estimate :P.
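For instance, a hypothetical core running at 3 GHz and retiring 4 instructions per cycle would peak at roughly 12 billion instructions per second; sustained real-world throughput will be lower for the reasons above.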
If you wanted to use a program to measure this, you'd need to somehow dedicate 100% of the processor to your program and have it run a predictable set of instructions with no other hangups (like memory management). Then you need to include the number of instructions it takes to load the program instructions into the code caches. This is not really feasible however.
As others have pointed out, this will not help you determine the number of operations the processor does per second, due to the factors that prior answers have pointed out. I do however think that a similar experiment could be set up to estimate the number of operations executed by your JavaScript interpreter running in your browser. For example, take a function factorial(n), an operation that runs in O(n). You could execute an operation such as factorial(100) repeatedly over the course of a minute.
// Hypothetical O(n) factorial so the snippet is self-contained (not in the original post)
function factorial(n) {
    let result = 1;
    for (let i = 2; i <= n; i++) result *= i;
    return result;
}

function test() {
    let start = Date.now();
    let end = start + 60 * 1000;          // run for one minute
    let numberOfExecutions = 0;
    while (Date.now() < end) {
        factorial(100);
        numberOfExecutions++;
    }
    // ~100 operations per execution, spread over 60 seconds:
    return (numberOfExecutions * 100) / 60;
}
The idea here is that factorial is by far the most time consuming function in the code. And since factorial runs in O(n) we know factorial(100) is approximately 100 operations. Note that this will not be exact and that larger numbers will make for better approximations. Also remember that this will estimate the number of operations executed by your interpreter and not your processor.
There is a lot of truth to all previous comments, but I want to invert the reasoning a little bit because I do believe it is easier to understand it like that.
I believe that the fairest way to calculate it is with the most basic loop, not relying on any dates or function calls inside it, and instead calculating the values afterwards.
You will see that the smaller the run, the bigger the relative initial overhead is. That means it takes a small amount of time to start and finish each run, but at a certain point they all start reaching a number that can reasonably be considered how many operations per second JavaScript can run.
My example:
const oneMillion = 1_000_000;
const tenMillion = 10_000_000;
const oneHundredMillion = 100_000_000;
const oneBillion = 1_000_000_000;
const tenBillion = 10_000_000_000;
const oneHundredBillion = 100_000_000_000;
const oneTrillion = 1_000_000_000_000;

function runABunchOfTimes(times) {
    console.time('timer');
    for (let i = 0; i < times; ++i) {}
    console.timeEnd('timer');
}
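For example, it might be invoked like this (hypothetical calls, not from the original post); the console prints "timer: <elapsed ms>" for each run, and dividing the iteration count by that time in seconds gives the rough operations-per-second figure discussed below:
runABunchOfTimes(oneMillion);
runABunchOfTimes(oneHundredMillion);
runABunchOfTimes(oneBillion);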
I tried this on a 2020 MacBook that already had a lot of load on it, with many processes running.
At the very end I take the time the console shows the run took and divide the number of iterations by it. The oneTrillion and oneBillion runs come out virtually the same; however, when it goes down to oneMillion and 1,000 you can see that they are not as performant, due to the initial overhead of setting up the for loop in the first place.
We usually try to shy away from O(n^2) and slower functions exactly because we do not want to reach that maximum. If you were to perform a find inside a map for an array with all the cities in the world (around 10_000 according to Google, I haven't counted), we would already reach 100_000_000 iterations, and they would certainly not be as simple as just iterating through nothing like in my example. Your code would then take minutes to run, but I am sure you are aware of this, and that is why you posted the question in the first place.
Calculating how long it would take is tricky not only because of the above, but also because you cannot predict which device will run your function. Nowadays I can open it on my TV, my watch, or a Raspberry Pi, and none of them would be nearly as fast as the computer I am using while writing these functions. If I were to try to benchmark a device, though, I would use something like the function above, since it is the simplest loop operation I can think of.

requestAnimationFrame [now] vs performance.now() time discrepancy

Assumptions: rAF now time is calculated at the time the set of callbacks are all triggered. Therefore any blocking that happens before the first callback of that frame is called doesn't affect the rAF now and it's accurate--at least for that first callback.
Any performance.now() measurements made before a rAF set is triggered should be earlier than rAF now.
Test: Record before (a baseline time before anything happens). Set the next rAF. Compare rAF now and actual performance.now() to before to see how different they are.
Expected results:
var before = performance.now(), frames = ["with blocking", "with no blocking"], calls = 0;

requestAnimationFrame(function frame(rAFnow) {
    var actual = performance.now();
    console.log("frame " + (calls + 1) + " " + frames[calls] + ":");
    console.log("before frame -> rAF now: " + (rAFnow - before));
    console.log("before frame -> rAF actual: " + (actual - before));
    if (++calls < frames.length) { before = actual; requestAnimationFrame(frame); }
});

// blocking
for (var i = 0, l = 0; i < 10000000; i++) {
    l += i;
}
Observations: When there is blocking before the frame starts, the rAF now time is at times incorrect, even for that first frame. Sometimes the first frame's now is actually an earlier time than the recorded before time.
Whether there is blocking happening before the frame or not, every so often the in-frame time rAFnow will be earlier than the pre-frame time before--even when I setup the rAF after I take my first measurement. This can also happen without any blocking whatsoever, though this is rarer.
(I get a timing error on the first blocking frame most of the time. Getting an issue on the others is rarer, but does happen occasionally if you try running it a few times.)
With more extensive testing, I've found bad times with blocking prior to callback: 1% from 100 frames, no blocking: 0.21645021645021645% from ~400 frames, seemingly caused by opening a window or some other potentially CPU-intensive action by the user.
So it's fairly rare, but the problem is this shouldn't happen at all. If you want to do useful things with them, simulating time, animation, etc., then you need those times to make sense.
I've taken into account what people have said, but maybe I am still not understanding how things work. If this is all per-spec, I'd love some pseudo-code to solidify it in my mind.
And more importantly, if anyone has any suggestions for how I could get around these issues, that would be awesome. The only thing I can think of is taking my own performance.now() measurement every frame and using that--but it seems a bit wasteful, having it effectively run twice every frame, on top of any triggered events and so on.
The timestamp passed in to the requestAnimationFrame() callback is the time of the beginning of the animation frame. Multiple callbacks being invoked during the same frame all receive the same timestamp. Thus, it would be really weird if performance.now() returned a time before the parameter value, but not really weird for it to be after that.
Here's the relevant specification:
When the user agent is to run the animation frame callbacks for a Document document with a timestamp now, it must run the following steps:
If the value returned by the document object’s hidden attribute is true, abort these steps. [PAGE-VISIBILITY]
Let callbacks be a list of the entries in document’s list of animation frame callbacks, in the order in which they were added to the list.
Set document’s list of animation frame callbacks to the empty list.
For each entry in callbacks, in order: invoke the Web IDL callback function, passing now as the only argument, and if an exception is thrown, report the exception.
So you've registered a callback (let's say just one) for the next animation frame. Tick tick tick, BOOM, time for that animation frame to happen:
The JavaScript runtime makes a note of the time and labels that now.
The runtime makes a temporary copy of the list of registered animation frame callbacks, and clears the actual list (so that they're not accidentally invoked if things take so long that the next animation frame comes around).
The list has just one thing in it: your callback. The system invokes that with now as the parameter.
Your callback starts running. Maybe it's never been run before, so the JavaScript optimizer may need to do some work. Or maybe the operating system switches threads to some other system process, like starting up a disk buffer flush or handling some network traffic, or any of dozens of other things.
Oh right, your callback. The browser gets the CPU again and your callback code starts running.
Your code calls performance.now() and compares it to the now value passed in as a parameter.
Because a brief but non-ignorable amount of time may pass between step 1 and step 6, the return value from performance.now() may indicate that a couple of microseconds, or even more than a couple, have elapsed. That is perfectly normal behavior.
I encountered the same issue on Chrome, where calls to performance.now() would return a higher value than the now value passed into subsequent callbacks made by window.requestAnimationFrame().
My workaround was to set before using the now passed to the callback in the first window.requestAnimationFrame(), rather than performance.now(). It seems that using only one of the two functions to measure time guarantees progressing values.
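A minimal sketch of that workaround (assuming the same kind of frame loop as in the question; the point is simply that before is only ever set from rAF timestamps, never from performance.now()):
var before = null;    // seeded from the first rAF timestamp, not performance.now()
var calls = 0;

requestAnimationFrame(function frame(rAFnow) {
    if (before === null) {
        before = rAFnow;                                    // first frame only establishes the baseline
    } else {
        console.log("frame delta: " + (rAFnow - before));   // never negative: same clock source
        before = rAFnow;
    }
    if (++calls < 10) {
        requestAnimationFrame(frame);
    }
});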
I hope this helps anyone else suffering through this bug.

Why do multiple setTimeout() calls cause so much lag?

I have a complex animation sequence involving fades and transitions in JavaScript. During this sequence, which consists of four elements changing at once, a setTimeout is used on each element.
Tested in Internet Explorer 9, the animation works at realtime speed (it should take 1.6 seconds and it took exactly 1.6 seconds). ANY other browser will lag horribly, with animation times of 4 seconds (Firefox 3 and 4, Chrome, Opera) and something like 20 seconds in IE 8 and below.
How can IE9 go so fast while all other browsers are stuck in the mud?
I have tried to find ways of merging the elements into one, so as to have only one setTimeout at any given time, but unfortunately it wouldn't stand up to any interference (such as clicking a different link to start a new animation before the current one has finished).
EDIT: To elaborate in response to comments, here's the outline of the code:
link.onclick = function() {
    clearTimeout(colourFadeTimeout);
    colourFadeTimeout = setTimeout("colourFade(0);", 25);
    clearTimeout(arrowScrollTimeout);
    arrowScrollTimeout = setTimeout("arrowScroll(0);", 25);
    clearTimeout(pageFadeOutTimeout);
    pageFadeOutTimeout = setTimeout("pageFadeOut(0);", 25);
    clearTimeout(pageFadeInTimeout);
    pageFadeInTimeout = setTimeout("pageFadeIn(0);", 25);
};
Each of the four functions progress the fade by one frame, then set another timeout with the argument incremented, until the end of the animation.
You can see the page at http://adamhaskell.net/cw/index.html (Username: knockknock; Password: goaway) (it has sound and music, which can be disabled, but be warned!) - my JavaScript is very messy since I haven't really organised it properly, but it is commented a bit so hopefully you can see what the general idea is.
Several things:
Your timeout is 25ms. This translates to 40fps, which is a very high framerate to try to achieve via JavaScript, especially for things involving DOM manipulation that may trigger reflows. Increase it to 50 or 60ms; 15-20fps should be more than fluid enough for the kinds of animation you're doing. You're not trying to display video here, just move things around the page.
Don't use strings as the first parameter to setTimeout(), especially if you care about performance. That forces the engine to re-evaluate the string every time the timeout fires. Use a function instead. If you need to pass an argument, use an anonymous function to wrap the function you want to execute:
setTimeout(function() {
    pageFadeIn(0);
}, 50);
This will only get compiled once, when the script is loaded.
As mentioned by Ben, it is cheaper to use a single setTimeout to schedule the functions. For that matter, code clarity may improve by using setInterval instead (or it may not, depends on your coding style).
Additional answer:
Programming JavaScript animation is all about optimisation and compromise. It's possible to animate lots of things on the page with little slow-down, but you need to know how to do it right and decide what to sacrifice. As an example of just how much can be animated at once, see the demo real-time strategy game I wrote a couple of years ago.
Among the things I did to optimize the game are:
The walking soldiers are made up of only two frames of animation and I simply toggle between the two images. But the effect is very convincing nonetheless. You don't need perfect animation, just one that looks convincing.
I use a single setInterval for everything. It's cheaper CPU-wise and easier to manage. Just decide on a base frame rate and then schedule for different animation to start at different times.
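A rough sketch of that single-interval approach (the frame counter, rates and update functions here are made up for illustration, not taken from the game):
var frame = 0;
var BASE_MS = 66;    // ~15fps base rate

setInterval(function() {
    frame++;
    updateSoldiers();           // hypothetical: toggles between the two soldier frames
    if (frame % 2 === 0) {
        updateProjectiles();    // hypothetical: runs at half the base rate
    }
    if (frame % 10 === 0) {
        updateMinimap();        // hypothetical: scheduled far less often
    }
}, BASE_MS);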
Well, that's a lot of javascript (despite the "quadruple-dose of awesomeness" :)
You're firing a lot of setTimeout sequences, and I'm not sure how well JS engines are optimised for this, particularly IE <= 8.
Ok, maybe just a rough suggestion... You could maybe write a small timing engine.
Maintain a global object that stores your current running timed events with the function to run, and the delay...
Then have a single setTimeout handler that checks against that global object, decreases each delay on every iteration, and calls the function when its delay drops below 0.
Your global events object would look something like this:
var events = {
    fade1: {
        fn: func_name,
        delay: 25,
        params: {}
    },
    fadeArrow: {
        fn: func_name,
        delay: 500,
        params: {}
    },
    slideArrow: {
        fn: func_name,
        delay: 500,
        params: {
            arrow: some_value
        }
    }
};
Then create a function to loop through these at an interval (maybe 10 or 20 ms), decrease the delays, and call each function with its params as a parameter once its delay runs out (check Function.call for that).
Once done, fire setTimeout again with the same delay.
To cancel an event, just unset the property from the events list.
Build a few methods to add/remove queued events, update params, and so on.
That would reduce everything to just one timeout handler..
(just an idea)
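A rough sketch of the handler described above, using the events object shape shown (run-once semantics, the 20ms tick and the name tick are assumptions):
var TICK_MS = 20;

function tick() {
    for (var name in events) {
        var evt = events[name];
        evt.delay -= TICK_MS;            // count down this event's remaining delay
        if (evt.delay <= 0) {
            delete events[name];         // unschedule it before running (cancelling = deleting too)
            evt.fn.call(null, evt.params);
        }
    }
    setTimeout(tick, TICK_MS);           // a single timeout drives everything
}

setTimeout(tick, TICK_MS);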

How to tell what's causing slow HTML5 Canvas performance?

How can I tell if the canvas's slow performance is caused by the drawing itself, or the underlying logic that calculates what should be drawn and where?
The second part of my question is: how to calculate canvas fps? Here's how I did it, seems logical to me, but I can be absolutely wrong too. Is this the right way to do it?
var fps = 0;

setInterval(draw, 1000 / 30);
setInterval(checkFps, 1000);

function draw() {
    // ...
    fps++;
}

function checkFps() {
    $("#fps").html(fps);
    fps = 0;
}
Edit:
I replaced the above with the following, according to Nathan's comments:
var lastTimeStamp = new Date().getTime();

function draw() {
    // ...
    var now = new Date().getTime();
    $("#fps").html(Math.floor(1000 / (now - lastTimeStamp)));
    lastTimeStamp = now;
}
So how's this one? You could also display just the difference in ms since the last update; performance differences can be seen that way too. By the way, I also did a side-by-side comparison of the two, and they usually moved pretty much together (a difference of 2 at most); however, the latter one had bigger spikes when performance was extraordinarily low.
Your FPS code is definitely wrong
setInterval(checkFps, 1000);
No-one assures this function will be called exactly every second (it could be more than 1000ms, or less - but probably more), so
function checkFps() {
    $("#fps").html(fps);
    fps = 0;
}
is wrong (if fps is 32 at that moment it is possible that you have 32 frames in 1.5s (extreme case))
Better is to see how much real time has actually passed since the last update and calculate frames divided by that real time passed (JavaScript does have functions to get the time, e.g. Date.now(), which is accurate to the millisecond and should be good enough).
fps is, by the way, not a good name: it contains the number of frames (since the last update), not the number of frames per second, so frames would be a better name.
In the same way
setInterval(draw, 1000/30);
is wrong, since you want to achieve an FPS of 30, but setInterval is not very accurate (and will probably wait longer than you specify), so you will end up with a lower FPS even if the CPU is able to handle the load.
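A minimal sketch of counting frames against real elapsed time, along the lines of this answer (Date.now() is used for the clock; the jQuery output is kept from the question):
var frames = 0;
var lastCheck = Date.now();

function draw() {
    // ... draw the scene ...
    frames++;
}

function checkFps() {
    var now = Date.now();
    var elapsedSeconds = (now - lastCheck) / 1000;   // real time since the last check
    $("#fps").html((frames / elapsedSeconds).toFixed(1));
    frames = 0;
    lastCheck = now;
}

setInterval(draw, 1000 / 30);
setInterval(checkFps, 1000);   // even if this fires late, the division stays correct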
Webkit and Firebug both provide profiling tools to see where CPU cycles are being spent in your javascript code. I'd recommend starting there.
For the FPS calculation, I don't think your code is going to work, but I don't have any good recommendation :(
Reason being: browsers generally run your JavaScript and their UI updates on the same main thread. While the JavaScript is busy, the UI won't be repainted.
So, you can run some JavaScript looping code that'll "update" the UI 1000 times in succession (for instance, setting the color of some text), but unless you yield with a setTimeout to give the browser a chance to paint the change, you won't see any changes until the 1000 iterations are finished.
That said, I don't know if you can assertively increment your fps counter at the end of the draw() routine. Sure, your javascript function has finished, but did the browser actually draw?
Check that you aren't using some innerHTML method to debug your project. This can slow your project down in a way you can't imagine, especially if you do concatenation like innerHTML += newDebugValues;
Or, like desau said, profile your CPU usage with Firebug or the WebKit developer tools.
