I watched Jake Archibald's talk about the event loop - https://vimeo.com/254947206. Based on the talk, my understanding was that the event loop will execute as many macro tasks as it can fit in one frame, and if there is some long-running macro task it will cause frames to be skipped. So my expectation was that any task running longer than the usual frame duration would cause other tasks to be executed in the next frame. I tested that by creating one button and multiple handlers like this https://codepen.io/jbojcic1/full/qLggVW
I noticed that even though handlerOne is long running (due to calculating computationally intensive fibonacci), handlers 2, 3 and 4 are still executed in the same frame. Only timeoutHandler is being executed in the next frame. Here are the logs I am getting:
animationFrameCallback - 10:4:35:226
handler one called. fib(40) = 102334155
handler two called.
handler three called.
handler four called.
animationFrameCallback - 10:4:36:37
timeout handler called
animationFrameCallback - 10:4:36:42
So the question is: why are handlers two, three and four executed within the same frame as handler one?
To make things even more confusing, according to https://developer.mozilla.org/en-US/docs/Web/API/Frame_Timing_API,
A frame represents the amount of work a browser does in one event
loop iteration such as processing DOM events, resizing, scrolling,
rendering, CSS animations, etc.
and to explain "one event loop iteration" they linked https://html.spec.whatwg.org/multipage/webappapis.html#processing-model-8 where it's stated that in one iteration:
one macro task is processed,
all micro tasks are processed
rendering is updated
... (there are some other steps too, which are not important here)
which doesn't seem to be correct at all.
You are mixing a few concepts here.
The "frame" you are measuring in your codepen is the one of the step 10 - Update the rendering.
Quoting the specs:
This specification does not mandate any particular model for selecting rendering opportunities. But for example, if the browser is attempting to achieve a 60Hz refresh rate, then rendering opportunities occur at a maximum of every 60th of a second (about 16.7ms). If the browser finds that a browsing context is not able to sustain this rate, it might drop to a more sustainable 30 rendering opportunities per second for that browsing context, rather than occasionally dropping frames. Similarly, if a browsing context is not visible, the user agent might decide to drop that page to a much slower 4 rendering opportunities per second, or even less.
So it is not certain at what frequency this "frame" will fire, but generally it is around 60FPS (most monitors refresh at 60Hz), and in that span of time a lot of event loop iterations will normally occur.
Now, requestAnimationFrame is even more special in that it can discard frames if the browser thinks it has too many things to perform. So your fibonacci will most probably delay any execution of rAF callbacks until it's done.
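To see this concretely, here is a minimal sketch (my own, not part of the original codepen) that measures the gap between two rAF callbacks while a synthetic ~200ms blocking loop stands in for fib(40); the reported gap is far larger than the ~16.7ms a 60Hz frame would allow:
let firstFrameTime;
requestAnimationFrame(t1 => {
  firstFrameTime = t1;
  // Block the main thread for ~200ms (stand-in for the fib(40) call).
  const end = performance.now() + 200;
  while (performance.now() < end) { /* busy wait */ }
  requestAnimationFrame(t2 => {
    console.log(`gap between frames: ${(t2 - firstFrameTime).toFixed(1)}ms`);
  });
});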
What the MDN article you linked talks about is a "frame" in the realm of the PerformanceFrameTiming API. I must admit directly that I don't have a lot of knowledge about this particular API, and given its very limited browser support, I don't think we should spend too much time on it, except to say that this has nothing to do with a painting frame.
I think the most precise tool we currently have for measuring an event loop iteration is the messaging API (postMessage).
By creating a self-calling message event loop, we can hook into every event loop iteration.
let stopped = false;
let eventloops = 0;
onmessage = e => {
if(stopped) {
console.log(`There have been ${eventloops} event loop iterations in one anim frame`);
return;
}
eventloops++
postMessage('', '*');
};
requestAnimationFrame(()=> {
// start the message loop
postMessage('', '*');
// stop in one anim frame
requestAnimationFrame(()=> stopped = true);
});
Let's see how your code behaves at a deeper level:
let done = false;
let started = false;
onmessage = e => {
if (started) {
let a = new Date();
console.log(`new EventLoop - ${a.getHours()}:${a.getMinutes()}:${a.getSeconds()}:${a.getMilliseconds()}`);
}
if (done) return;
postMessage('*', '*');
}
document.getElementById("button").addEventListener("click", handlerOne);
document.getElementById("button").addEventListener("click", handlerTwo);
document.getElementById("button").addEventListener("click", handlerThree);
document.getElementById("button").addEventListener("click", handlerFour);
function handlerOne() {
started = true;
setTimeout(timeoutHandler);
console.log("handler one called. fib(40) = " + fib(40));
}
function handlerTwo() {
console.log("handler two called.");
}
function handlerThree() {
console.log("handler three called.");
}
function handlerFour() {
console.log("handler four called.");
done = true;
}
function timeoutHandler() {
console.log("timeout handler called");
}
function fib(x) {
if (x === 1 || x === 2) return 1
return fib(x - 1) + fib(x - 2);
}
postMessage('*', '*');
<button id="button">Click me</button>
OK, so there is actually one frame, as in one event loop iteration, firing between the event handlers and the setTimeout callback. I like that better.
But what about that "long running frames" thing we heard about?
I'd guess you are talking about the "spin the event loop" algorithm, which is indeed meant to allow the event loop not to block all the UI in some circumstances.
First, the specs only tell implementers that it is a recommendation to enter this algorithm for long-running scripts; it is not a must.
Then, this algorithm is there to allow the normal event loop processing of event registration and UI updates, but anything related to JavaScript is simply resumed at the next event loop iteration.
So there is actually no way from JS to know whether we entered this algorithm.
Even my MessageEvent-driven loop can't tell, because its event handler will simply be deferred until after we exit the long-running script.
Here is an attempt to put it in a more graphical way, at the risk of being technically inaccurate:
/**
* ...
* - handle events
* user-click => push([cb1, cb2, cb3]) to call stack
(* - paint if needed (may execute rAF callbacks if any))
*
* END OF LOOP
—————————————————————————
* BEGIN OF LOOP
*
* - execute call stack
* cb1()
* schedule `timeoutHandler`
* fib()
* ...
* ...
* ...
* ... <-- takes too long => "spin the event loop"
* [ pause call stack ]
* - handle events
(* - paint if needed (but do not execute rAF callbacks))
*
* END OF LOOP
—————————————————————————
* BEGIN OF LOOP
*
* - execute call stack
* [ resume call stack ]
* (*fib()*)
* ...
* ...
* cb2()
* cb3()
* - handle events
* `timeoutHandler` timed out => push to call stack
(* - paint if needed (may execute rAF callbacks if any) )
*
* END OF LOOP
—————————————————————————
* BEGIN OF LOOP
*
* - execute call stack
* `timeoutHandler`()
* - handle events
...
*/
The answer actually exists in https://html.spec.whatwg.org/multipage/webappapis.html#event-loop-processing-model
Key points:
The "frame" in "A frame represents the amount of work a browser does in one event loop iteration such as processing DOM events, resizing, scrolling, rendering, CSS animations, etc" is actually an event processing iteration, i.e. all the steps of https://html.spec.whatwg.org/multipage/webappapis.html#event-loop-processing-model
Your meaning of "paint frame" is the step 11 "Update the rendering" part.
When to create a new "paint frame" in a event iteration is determined by the browser:
This specification does not mandate any particular model for selecting rendering opportunities. But for example, if the browser is attempting to achieve a 60Hz refresh rate, then rendering opportunities occur at a maximum of every 60th of a second (about 16.7ms). If the browser finds that a browsing context is not able to sustain this rate, it might drop to a more sustainable 30 rendering opportunities per second for that browsing context, rather than occasionally dropping frames. Similarly, if a browsing context is not visible, the user agent might decide to drop that page to a much slower 4 rendering opportunities per second, or even less.
So it is possible that a new "paint frame" is created only after many event loop iterations (event/task processing).
For the long task, again, it is also possible that the browser decides not to create a new "paint frame" (maybe it decides to handle these events immediately after each other, or a paint is unnecessary because the view content does not change).
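As a rough illustration (my own sketch, not from the spec), you can count how many setTimeout macrotasks run between two consecutive rendering updates, using requestAnimationFrame as a proxy for the paint frame; on an idle page you will typically see many tasks per paint:
let tasksBetweenPaints = 0;
let framesLeft = 5;
function task() {
  if (framesLeft <= 0) return;      // stop once we have sampled a few frames
  tasksBetweenPaints++;
  setTimeout(task);                 // queue another macrotask right away
}
function onPaint() {
  console.log(`tasks since last paint: ${tasksBetweenPaints}`);
  tasksBetweenPaints = 0;
  if (--framesLeft > 0) requestAnimationFrame(onPaint);
}
setTimeout(task);
requestAnimationFrame(onPaint);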
Related
I ran into a problem where the timestamps received in requestAnimationFrame callbacks and mouse events do not seem to be in order. I expect them to be increasing (as I hope time goes only in one direction :)), but that doesn't seem to be the case. It can be illustrated by this example code:
<html><body>
<script type="text/javascript">
let lastTimestamp = -1;
function log(name, timestamp) {
console.log(name, timestamp);
console.assert(lastTimestamp < timestamp, "Invalid time", lastTimestamp, timestamp);
lastTimestamp = timestamp;
}
function update(timestamp) {
log("update", timestamp);
requestAnimationFrame(update);
}
requestAnimationFrame(update);
function mouseDown(event) {
log("mouseDown", event.timeStamp);
}
document.body.addEventListener("mousedown", mouseDown, false);
</script>
</body></html>
If you start clicking with your mouse, you will eventually see output where a mouseDown log carries a smaller timestamp than the update call logged just before it, which implies that the mouse-down event happened before the last update call.
In my production app I get the opposite situation: a call to update is made with a timestamp that is before the one of the last mouse-down callback.
Can someone explain this to me? From the documentation it looks like they are not necessarily measured in the same way, but wouldn't it make sense to time them the same way?
What happens here is that the AnimationFrameCallbacks queue has a higher priority than UI events.
So it may happen that your UI event fires in the same frame as the painting frame; it will thus get its timeStamp set at that moment, or even earlier by the OS when it first received the event. But the UA will choose to prioritize the AnimationFrameCallbacks over the UI event callbacks, so the UI event callback gets delayed until the next event loop iteration.
Since the rAF callback gets its own timestamp from inside the event loop iteration that calls it, this timestamp will be higher than the one of the UI event, even though its callback fires first.
Also note that Chrome has its requestAnimationFrame method completely broken, so it may not help for debugging.
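If all you need is a consistent ordering for your own bookkeeping, one workaround (my suggestion, not a fix for the underlying behaviour) is to sample performance.now() inside each callback instead of relying on the timestamps you are handed:
let last = -1;
function logNow(name) {
  const now = performance.now();    // sampled at call time, so it only moves forward
  console.assert(now >= last, "out of order", last, now);
  console.log(name, now);
  last = now;
}
function update() {
  logNow("update");
  requestAnimationFrame(update);
}
requestAnimationFrame(update);
document.body.addEventListener("mousedown", () => logNow("mouseDown"));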
I've inherited a codebase where the order in which JS executes is not clear since there's a lot of setTimeout calls, globals, and broken Promise chains. Rather than manually trace every execution path I'd like to capture what JS gets scheduled for execution on the browser's message queue over a time period, or in response to an event.
I can see Event Listeners and trace from when one fires, but this is proving too slow in my case. A single click can sprawl out into several scheduled scripts that each mutate a shared state. This is why I am not considering tracing from event handlers and am instead looking for an overarching timeline for all JS in the application.
Given that JS scripts are scheduled for execution, how can I see the order in which JS gets queued?
I've started with something like this, but this doesn't give me a fully reliable timeline.
const {
setTimeout,
setInterval,
} = window;
window._jsq = [];
window._record = f => {
window._jsq.push([f, new Error().stack]);
};
window.setTimeout = (...a) => {
window._record(a[0]);
return setTimeout.apply(window, a);
};
window.setInterval = (...a) => {
window._record(a[0]);
return setInterval.apply(window, a);
};
I'll take a crack at my own question from the angle of the OP snippet. Corrections appreciated.
Assuming you cannot see the message queue (or at least the scripts queued), you can still see the code that is scheduling other JS and the code that is scheduled to run. So, tracking both independently is possible.
This is not all good news because you still have to do legwork to 1) adapt that tracking to the various ways JS can get scheduled, and 2) make sense of what you capture.
In the setTimeout case, something quick and dirty like this can at least provide a sense of a scheduling timeline and when things actually happened. That's just a matter of wrapping functions.
const { setTimeout } = window;
// For visibility in DevTools console
window._schedulers = [];
window._calls = [];
const wrap = f => {
const { stack } = new Error();
window._schedulers.push([stack, f]);
return (...a) => {
window._calls.push([stack, f, a]);
return f(...a);
};
};
window.setTimeout = (f, delay, ...a) => {
return setTimeout.apply(window, [wrap(f), delay].concat(a));
}
Still, that's just one case and says nothing about when to start/stop monitoring and the potential trigger points where traceability is a concern as Mosè Raguzzini mentioned. In the case of Promises, this answer calls out Bluebird's checking facilities.
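For what it's worth, the same wrapping trick extends to other scheduling entry points. A sketch (it assumes the wrap helper and the _schedulers/_calls arrays from the snippet above are already in place, and it will not catch work scheduled through await or other internal mechanisms):
const { requestAnimationFrame, queueMicrotask } = window;
window.requestAnimationFrame = cb => requestAnimationFrame.call(window, wrap(cb));
window.queueMicrotask = cb => queueMicrotask.call(window, wrap(cb));
// Promise callbacks can be covered by patching the prototype.
const { then } = Promise.prototype;
Promise.prototype.then = function (onFulfilled, onRejected) {
  return then.call(
    this,
    typeof onFulfilled === "function" ? wrap(onFulfilled) : onFulfilled,
    typeof onRejected === "function" ? wrap(onRejected) : onRejected
  );
};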
It seems that until more native tools come out that visualize queued scripts and related info, you are stuck collecting and analyzing the data by hand.
There is no built-in automatic debugging tool for monitoring your browser event loop.
In order to monitor the browser's event loop you have to explicitly monitor the events you are interested in and pass them to the (in this case Chrome's) DevTools:
monitorEvents(document.body, "click");
More info about monitoring events in Chrome Dev Tools
Note #1: You don't know how custom events are called. They may not dispatch an event into the DOM (e.g. some libraries implement their own event registration and handling systems) so there is no general way of knowing when event listeners are being called, even if you can track the dispatch of the event.
Some libraries also simulate event bubbling, but again, unless you know the type of event, you can't listen for it.
However, you could implement your own event management system and implement a function to listen for all events for which listeners are set or events dispatched using your system.
Ref: How can I monitor all custom events emitted in the browser?
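A minimal sketch of that "own event management system" idea: a tiny bus with one central dispatch point, so anything routed through it is observable (the names here are made up for illustration):
const bus = {
  listeners: new Map(),
  on(type, fn) {
    if (!this.listeners.has(type)) this.listeners.set(type, new Set());
    this.listeners.get(type).add(fn);
  },
  emit(type, detail) {
    console.log(`[bus] ${type}`, detail);   // single logging point
    (this.listeners.get(type) || []).forEach(fn => fn(detail));
  }
};
bus.on("user:saved", d => console.log("handler got", d));
bus.emit("user:saved", { id: 42 });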
Note #2: a modern JS approach to events (i.e. React/Redux) involves dispatching ACTIONS instead of events. As actions are often logged for time-travel debugging purposes, monitoring events in this case is unnecessary.
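For example, with Redux a one-line logging middleware records every action that flows through the store, which covers the "what got scheduled" question for that part of the app (the store setup below is hypothetical):
const logger = store => next => action => {
  console.log("dispatching", action.type, action);   // every action passes through here
  return next(action);
};
// const store = createStore(rootReducer, applyMiddleware(logger));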
I'm working on a javascript application that performs 2 jobs.
The first job is more important and needs to run at 60fps. The other job is a "background" job that still needs to run but it's okay if it takes longer.
Normally the way I would do this is have the more important job's code in a RequestAnimationFrame loop, and put the background job on a web worker.
However the main job is already spawning 2 web workers, and I don't want to spawn a third for context switching and memory consumption reasons.
There is ~8 ms of processing time left over on the RequestAnimationFrame loop that I have to work with for the background job to run on, however it is a job that will take about 100 ms to complete.
My question: is there a way to write a loop that will pause itself every time the UI is about to be blocked?
Basically, run as much code as you can until the frame's remaining 8ms are used up, then pause until there is free time again.
This is currently experimental technology which isn't well-supported yet, but: There's requestIdleCallback, which:
...queues a function to be called during a browser's idle periods. This enables developers to perform background and low priority work on the main event loop, without impacting latency-critical events such as animation and input response. Functions are generally called in first-in-first-out order; however, callbacks which have a timeout specified may be called out-of-order if necessary in order to run them before the timeout elapses.
One of the key things about rIC is that it receives an IdleDeadline object which
...lets you determine how much longer the user agent estimates it will remain idle and a property, didTimeout, which lets you determine if your callback is executing because its timeout duration expired.
So you could have your loop stop when the deadline.timeRemaining() method returns a small enough number of remaining milliseconds.
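A small sketch of such a loop (the chunks of work are placeholders for your own code, and the 1ms threshold is an arbitrary safety margin):
function runInIdleTime(chunks) {
  requestIdleCallback(function step(deadline) {
    // Keep working only while the browser says idle time remains.
    while (chunks.length && deadline.timeRemaining() > 1) {
      chunks.shift()();
    }
    if (chunks.length) requestIdleCallback(step);   // resume in the next idle period
  });
}
runInIdleTime([...Array(100)].map((_, i) => () => console.log("chunk", i)));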
That said, I think I'd probably add the third worker and see what it looks like in aggressive testing before I tried other approaches. Yes, it's true that context-switching is costly and you don't want to overdo it. On the other hand, there's already plenty of other stuff going on on mobile devices, and architectures these days are quite fast at context switching. I can't speak to the memory demands of workers on mobiles (haven't measured them myself), but that's where I'd start.
I recommend requestIdleCallback() as the accepted answer does, but it is still experimental and I like coming up with stuff like this. You might even combine rIC with this answer to produce something more suited to your needs.
The first task is to split up your idle code into small runnable chunks so you can check how much time you have/spent between chunks.
One way is to create several functions in a queue that do the work needed, such as unprocessed.forEach(x=>workQueue.push(idleFunc.bind(null,x))), then have an executor that will at some point process the queue for a set amount of time.
If you have a loop that takes awhile to finish, you could use a generator function and yield at the end of each loop, then run it inside recursive calls to setTimeout() with your own deadline or requestIdleCallback().
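A rough sketch of that generator approach (the 8ms budget is taken from the question and is an assumption, not a magic number):
function* backgroundJob(items) {
  for (const item of items) {
    // ...do one item's worth of work here...
    yield item;
  }
}
function drive(gen, budgetMs = 8) {
  const deadline = performance.now() + budgetMs;
  let step = gen.next();
  while (!step.done && performance.now() < deadline) {
    step = gen.next();
  }
  if (!step.done) setTimeout(() => drive(gen, budgetMs), 0);   // yield to the browser, resume later
}
drive(backgroundJob([...Array(10000).keys()]));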
You could also have a recursive function that when processed, would add itself back to the end of the queue, which could help when you want to give other work time to run or when creating a function per piece of work would be absurd (e.g., hundreds of array items bound to a function that together only take 1ms to process).
Anyway, here's something I whipped up out of curiosity.
class IdleWorkExecutor {
constructor() {
this.workQueue=[];
this.running=null;
}
addWork(func) {
this.workQueue.push(_=>func());
this.start();
}
//
addWorkPromise(func) {
return new Promise(r=>{
this.workQueue.push(_=>r(func()));
this.start();
});
//DRY alternative with more overhead:
//return new Promise(r=>this.addWork(_=>r(func())));
}
sleep(ms) {
return new Promise(r=>setTimeout(r,ms));
}
//Only run the work loop when there is work to be done
start() {
if (this.running) {return this.running;}
return this.running=(async _=>{
//Create local reference to the queue and sleep for negligible performance gain...
const {workQueue,sleep}=this;
//Declare deadline as 0 to pause execution as soon as the loop is entered.
let deadline=0;
while (workQueue.length!==0) {
if (performance.now()>deadline) {
await sleep(10);
deadline=performance.now()+1;
}
/* shift off and execute a piece of work. push and shift are used to
create a FIFO buffer, but a growable ring buffer would be better. This
was chosen over unshift and pop because expensive operations shouldn't
be performed outside the idle executor. */
workQueue.shift()(deadline);
}
this.running=false;
})();
}
}
//Trying out the class.
let executor=new IdleWorkExecutor();
executor.addWork(_=>console.log('Hello World!'));
executor.addWorkPromise(_=>1+1).then(ans=>{
executor.addWork(_=>console.log('Answer: '+ans));
});
//A recursive busy loop function.
executor.addWork(function a(counter=20) {
const deadline=performance.now()+0.2;
let i=0;
while (performance.now()<deadline) {i++}
console.log(deadline,i);
if (counter>0) {
executor.addWork(a.bind(null,counter-1));
}
});
If you can use requestIdleCallback() in your code, adding it to IdleWorkExecutor is pretty simple:
function rICPromise(opt) {
return new Promise(r=>{
requestIdleCallback(r,opt);
});
}
//Inside start(), this replaces the sleep-based block:
if (!deadline || deadline.timeRemaining() <= 0) {
deadline = await rICPromise({timeout:5000});
}
So I've got a scroll event. It does a load of stuff to work out whether something should be moved on the page. When you scroll down, it fires off. If you wheel down or drag, it fires off bazillions and bazillions of times. As you'd expect, perhaps. Here's some simple dummy code to represent the sequence of events.
function scroller() {
// 1. A really expensive calculation that depends on the scroll position
// 2. Another expensive calculation to work out where should be now
// 3. Stop current animations
// 4. Animate an object to new position based on 1 and 2
}
$(window).on('resize', scroller);
Don't get me wrong, it's usually accurate, so there isn't so much of a concurrency issue. My animations inside the event call .stop() (as part #3) so the latest version is always* the right one, but it's eating up a lot of CPU. I'd like to be a responsible developer here, not expecting every user to have a quad core i7.
So to my question... Can I kill off previous calls to my method from a particular event handler? Is there any way I can interfere with this stack of queued/parallel-running "processes" so that when a new one is added to the stack, the old ones are terminated instantly? I'm sure there's a concurrency-minded way of wording this but I can't think of it.
*At least I think that's the case - if the calculations took longer in an earlier run, their animation could be the last one to be called and could cock up the entire run! Hmm. I hadn't thought about that before thinking about it here. Another reason to stop the previous iterations immediately!
You can debounce the handler so it runs only once the events stop firing for x milliseconds. E.g.:
(function ($) {
$.fn.delayEvent = function (event, callback, ms) {
var whichjQuery = parseFloat($().jquery, 10)
, bindMethod = whichjQuery > 1.7 ? "on" : "bind"
, timer = 0;
$(this)[bindMethod](event, function (event) {
clearTimeout (timer);
timer = setTimeout($.proxy(callback, this, event), ms);
});
return $(this);
};
})(jQuery);
$(window).delayEvent("resize", scroller, 1000);
Minimalistic demo: http://jsfiddle.net/karim79/z2Qhz/6/
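A different, non-jQuery option (my own variant, not part of the demo above), if the goal is mainly to stop burning CPU rather than to delay the work: throttle the handler to at most one run per animation frame with requestAnimationFrame, bound here to the scroll event the question describes.
function rafThrottle(fn) {
  let scheduled = false;
  return function (...args) {
    if (scheduled) return;            // already queued for this frame
    scheduled = true;
    requestAnimationFrame(() => {
      scheduled = false;
      fn.apply(this, args);
    });
  };
}
window.addEventListener("scroll", rafThrottle(scroller));
This keeps the handler in step with paints instead of running it for every wheel tick.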
When looking to improve a page's performance, one technique I haven't heard mentioned before is using setTimeout to prevent javascript from holding up the rendering of a page.
For example, imagine we have a particularly time-consuming piece of jQuery inline with the html:
$('input').click(function () {
// Do stuff
});
If this code is inline, we are holding up the perceived completion of the page while the piece of jquery is busy attaching a click handler to every input on the page.
Would it be wise to spawn a new thread instead:
setTimeout(function() {
$('input').click(function () {
// Do stuff
})
}, 100);
The only downside I can see is that there is now a greater chance the user clicks on an element before the click handler is attached. However, this risk may be acceptable and we have a degree of this risk anyway, even without setTimeout.
Am I right, or am I wrong?
The actual technique is to use setTimeout with a time of 0.
This works because JavaScript is single-threaded. A timeout doesn't cause the browser to spawn another thread, nor does it guarantee that the code will execute in the specified time. However, the code will be executed when both:
The specified time has elapsed.
Execution control is handed back to the browser.
Therefore calling setTimeout with a time of 0 can be considered as temporarily yielding to the browser.
This means if you have long running code, you can simulate multi-threading by regularly yielding with a setTimeout. Your code may look something like this:
var batches = [...]; // Some array
var currentBatch = 0;
// Start long-running code, whenever browser is ready
setTimeout(doBatch, 0);
function doBatch() {
if (currentBatch < batches.length) {
// Do stuff with batches[currentBatch]
currentBatch++;
setTimeout(doBatch, 0);
}
}
Note: While it's useful to know this technique in some scenarios, I highly doubt you will need it in the situation you describe (assigning event handlers on DOM ready). If performance is indeed an issue, I would suggest looking into ways of improving the real performance by tweaking the selector.
For example if you only have one form on the page which contains <input>s, then give the <form> an ID, and use $('#someId input').
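Another option along the same lines is event delegation: attach a single handler to a common ancestor and let the clicks bubble up, instead of binding one handler per <input>. A quick sketch (the #someId form is hypothetical):
// One delegated handler instead of one handler per input.
$('#someId').on('click', 'input', function () {
  // Do stuff
});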
setTimeout() can be used to improve the "perceived" load time -- but not the way you've shown it. Using setTimeout() does not cause your code to run in a separate thread. Instead setTimeout() simply yields the thread back to the browser for (approximately) the specified amount of time. When it's time for your function to run, the browser will yield the thread back to the javascript engine. In javascript there is never more than one thread (unless you're using something like "Web Workers").
So, if you want to use setTimeout() to improve performance during a computation-intensive task, you must break that task into smaller chunks, and execute them in-order, chaining them together using setTimeout(). Something like this works well:
function runTasks( tasks, idx ) {
idx = idx || 0;
tasks[idx++]();
if( idx < tasks.length ) {
setTimeout( function(){ runTasks(tasks, idx); },1);
}
}
runTasks([
function() {
/* do first part */
},
function() {
/* do next part */
},
function() {
/* do final part */
}
]);
Note:
The functions are executed in order. There can be as many as you need.
When the first function returns, the next one is called via setTimeout().
The timeout value I've used is 1. This is sufficient to cause a yield, and the browser will take the thread if it needs it, or allow the next task to proceed if there's time. You can experiment with other values if you feel the need, but usually 1 is what you want for these purposes.
You are correct, there is a greater chance of a "missed" click, but with a low timeout value, it's pretty unlikely.