Given this code:
$('#foo').css('height', '100px'); // or any other change to the DOM
console.log('done!');
When the 2nd statement executes, is it safe to assume that the reflow is complete?
Follow on question: if you replaced the second line with this, does it change the answer?
window.setTimeout(function() { console.log('done'); }, 1);
I hope my underlying question is clear. Thanks a lot for any input.
Browsers usually queue DOM modifications that require a reflow/repaint, to avoid performing that expensive operation multiple times. There are exceptions to that, however, as you can see in this Q&A: When does reflow happen in a DOM environment?.
Considering the code you posted, and assuming the console logs the output synchronously*, the answer to your first question is no. If you just change the height of an element, the browser will typically finish running all other synchronous code before performing the reflow/repaint operation. But, as the answers at the link above say, some actions do trigger an immediate reflow, so it's not possible to give a blanket answer to the "or any other change to the DOM" part of your question.
Considering the same assumptions above, the answer to your second question would be yes. The string "done" will be logged to the console in the next tick of the browser's event loop, so it's safe to assume that's after the reflow.
Usually you don't have to worry about that kind of browser behavior, unless you're optimizing code for performance, and trying to avoid reflows.
* Sometimes the console outputs later than expected; unfortunately I couldn't find a good link about that.
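To make the queued-vs-forced distinction concrete, here's a minimal sketch (the element id and values are made up):

var el = document.getElementById('foo');

el.style.height = '100px'; // write: a reflow is queued, not performed yet
el.style.width = '200px';  // another write: still only queued

// Reading a layout-dependent property forces the browser to flush the
// queue and perform a synchronous reflow on the spot:
var h = el.offsetHeight;

console.log('done!', h); // by this line, #foo's layout is up to date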
The .css() method is not asynchronous, thus you can safely assume that any statements after it will execute as you expect.
$('#foo').css('height', '100px');
console.log($('#foo').css('height')); // will log '100px'
In my question, DOM redraw methods are those that modify the DOM and cause the browser to redraw the page. For example:
const newChildNode = /*...*/;
document.body.appendChild(newChildNode);
const newHeight = document.body.scrollHeight;
This code works fine under normal circumstances, but I am not so sure how it behaves under high-pressure conditions, like when there are many requests to redraw the page. Can I assume that when document.body.scrollHeight is executed, newChildNode is already visible on screen?
We can divide this "redraw" process into 3 parts: DOM update, reflow, and repaint.
These operations do not all follow the same rules:
DOM update: Always synchronous. The DOM is just another JS object, and its manipulation methods are all synchronous.
Reflow: That's the strange beast you stumbled upon. This is the recalculation of the box positions of all the elements on the page.
Generally, browsers will wait until you have finished all DOM modifications, and thus the end of the current JS execution, before triggering it.
But some DOM methods will force this operation synchronously: e.g., all the HTMLElement.offsetXXX and similar properties, Element.getBoundingClientRect, accessing Node.innerText on an in-document node, or accessing some properties of the object returned by getComputedStyle (and probably others) will trigger a synchronous reflow, in order to return up-to-date values. So beware when you use these methods/properties; the sketch after this list shows the three phases in action.
Repaint: When things are actually passed to the rendering engine. Nothing in the specs says when this should happen. Most browsers will wait for the next screen refresh, but there is no guarantee it will always behave that way: e.g., Chrome is known for not triggering it when you block script execution with alert(), while Firefox will.
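A sketch tying those three phases back to the code in your question (same names as above):

const newChildNode = document.createElement('div');
document.body.appendChild(newChildNode); // DOM update: synchronous, in-memory only

// Reading scrollHeight forces a synchronous reflow, so the value below
// already accounts for newChildNode's box...
const newHeight = document.body.scrollHeight;

// ...but the repaint may still be pending: the geometry is computed
// even though nothing new has been drawn on screen yet.

So the scrollHeight read is safe even under pressure, but "visible on screen" is a separate, repaint-dependent question.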
I get wordy sometimes. tl;dr: skip to the actual question a few paragraphs down.
The motivation behind deprecating Mutation Events is well understood; their efficacy in achieving many types of tasks is questionable.
However, today, I have discovered a use for them that is highly dependent on those very same undesired properties.
I will first present the question, and then the reasons that led me to it, because the question will seem absurd without them.
Is it possible to use the new Mutation Observers in a way that we can have the VM stop at the instant of the change (like the DOM3 Mutation Events do), rather than report it to me after the fact?
Basically, the very thing that makes the Mutation Observer performant and "reasonable" is its asynchronicity, which means (necessarily, it seems) throwing away the stack, pushing a mutation record to a list, and delivering the list to qualified observers at the next tick or several ticks later.
What I am after is precisely that stack trace of the DOM3 Mutation Event. I really hope this will work, but basically the Mutation Event callback (which I get to write) has a stack trace that leads me back to the actual code that created the element I'm listening for. So in theory I'd write a Mutation Event handler like this:
// NOT in an onload cb
$("div#haystack").on('DOMNodeInserted', function(evt) {
  if (is_needle(evt.target)) {
    report(new Error().stack); // please, Chrome, tell me what code created the needle
  }
});
This gives me the golden answer.
It seems that Mutation Observers will make it impossible to extract this information. What, then, am I to do once Mutation Events are completely taken out? They have been deprecated for a while now.
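For contrast, a sketch of the Mutation Observer equivalent (reusing the question's is_needle() and report()), which shows why the stack is gone:

var observer = new MutationObserver(function(mutations) {
  mutations.forEach(function(mutation) {
    Array.prototype.forEach.call(mutation.addedNodes, function(node) {
      if (is_needle(node)) {
        // Too late: this callback runs after the mutating code has
        // already returned, so the stack trace only shows observer
        // plumbing, not the code that inserted the needle.
        report(new Error().stack);
      }
    });
  });
});
observer.observe(document.querySelector('div#haystack'), { childList: true, subtree: true });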
Now, to explain a little better the real actual circumstances, and why this matters.
I have been trying to kill a bug which I describe here: I have built a full-DOM serializer which nicely spits back out every element that exists on the webpage, and in comparing them, the broken page and the working page are identical. I have tested this and it is pretty thorough: it captures every little thing that's different. Whatever hovery-thing my mouse happens to be over, the CSS class that consequently gets set will be reflected in the HTML dump. Any text of any form on the page will show up if you search for it (provided it doesn't span across elements). All inline JS (and more importantly, all differences between inline JS) is present.
I have then gone on to verify that the broken page is missing several event handlers. So none of the clickable items respond to hover or clicks, and therefore no useful work can be done on the interactive form. This is not known to be the only problem, but it does fully explain the behavior. Given that the DOM has no differences in inline JS that explains the difference in behavior, then it must be the case that either the content of the linked resources or the invisible properties of elements (event handlers being in this category) are causing the difference in behavior.
Now I know which elements are supposed to have handlers, but I know not where in the comically large code base (ballpark: 200K lines of JS all loaded as one resource, assembled by several M lines of Perl serverside code) lies the code that assigns the events.
I have tried JS methods to watch modifications of object properties, such as this one (there are many, but all work on the same principle of defining setters and getters), which works the first time, and then subsequently breaks the app. Apparently assigning setters and getters causes the system to stop functioning. It's not clear to me how I can take that approach of watching property assignments to the point where I can get a list of code points that hit a specific element. It might be feasible, but surely not if I can only fire it once and it breaks everything thereafter.
So watching variables with JS is out.
I might be able to manually instrument jQuery itself, so that when my is_needle() succeeds on the element processed by jQuery, I log all event-related functions performed by jQuery on that element. This is dreadful, and I will resort to this if my Mutation Observer approach fails.
There are yet more ways to skin the cat, of course. I could use the handy getEventListeners() on my target element when it is working to get the list of event listener functions on it, then look at the code there, search the code base to find those functions, and then analyze the code to find all the places where those functions are inserted into event handlers. That is actually pretty straightforward.
Have you considered simply instrumenting .addEventListener function calls one way or another, e.g. via debugger breakpoints or by modifying the DOM element prototype to replace it with a wrapper method? This would be browser-specific but should be sufficient for your debugging needs.
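For instance, a hedged sketch of the wrapper-method variant, reusing the question's is_needle() (older browsers without EventTarget.prototype would need the wrap applied to the Element/Node prototypes instead):

var origAddEventListener = EventTarget.prototype.addEventListener;
EventTarget.prototype.addEventListener = function(type, listener, options) {
  if (this instanceof Element && is_needle(this)) {
    // Log who is attaching the handler, with a stack trace back to the caller:
    console.log('addEventListener("' + type + '") on', this, new Error().stack);
  }
  return origAddEventListener.call(this, type, listener, options);
};

Since jQuery attaches its handlers through addEventListener under the hood, this should also catch listeners added via .on() and friends.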
You also might want to try Firefox's tracer, available in Nightly builds, I think. It basically records function execution without the need to use breakpoints or to instrument code.
Let's say we have an onClick handler on a certain div, but before it can run, a big calculation done with jQuery, which takes about 3 seconds, keeps the script busy, so my click isn't handled.
So, 1 second passes and I click on the box. Nothing happens. 2 seconds. Nothing happens. 3 seconds, and jQuery completes its current task. My onclick jQuery event works and the box disappears.
The question is:
What would jQuery do in this case? Automatically create a thread to execute my onclick event instantly? Queue the calls (so it would execute my 3 clicks when the task is done, hence 3 event calls)? Ignore the first 2 clicks completely? Also, what should I do to avoid this kind of problem?
JavaScript functions as if it were single-threaded. It's my understanding that some browsers differ in actual implementation, but it is safe to write your scripts with the expectation that they will be executed linearly.
See this Question
I imagine your browser will queue up the clicks during the blocked UI, but it's up to the browser to decide how to handle that scenario. (My Chrome queues up click events during blocked UI)
That said, there's a cool feature implemented in newer browsers:
Web Workers
It allows you to perform expensive/long operations in the background without blocking UI. If your script is going to be running on mostly new browsers, it might be worth digging into this feature. BONUS: that article is written by the originator of jQuery! =)
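A minimal sketch of the pattern (the worker.js file name is made up; note that a worker cannot touch the DOM, so only the pure calculation can move there):

// main.js — the UI thread stays responsive while the worker crunches
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
  console.log('result:', e.data); // arrives without blocking clicks
};
worker.postMessage(100000000);

// worker.js — the expensive 3-second calculation lives here instead
self.onmessage = function(e) {
  var sum = 0;
  for (var i = 0; i < e.data; i++) sum += i;
  self.postMessage(sum);
};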
You could probably use a loading bar or similar progress indicator to inform the user that something is happening in the background.
Have a look at this jsfiddle. On Chrome, as Shad stated, the clicks get queued up and the events are handled when the calculation has finished. One weird thing is that the line before the big calculation
E('status').innerHTML = "Status: started";
doesn't seem to get executed until afterwards. (The assignment itself does run synchronously; it's the repaint that would make the new text visible that gets queued behind the long-running script.) Another surprising thing is how easy it is to make the entire browser hang by repeating a few operations 10,000 or 100,000 times.
If a server side solution is not possible, a solution could be to break the calculation down into smaller batches of operations, and carry them out one batch at a time with an interval of a few milliseconds to allow other parts of the code to operate. In the meantime you might need a 'please wait' message.
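A sketch of that batching idea (the batch size, delay, and expensiveOperation() are made up):

function processInBatches(items, batchSize, onDone) {
  var i = 0;
  (function nextBatch() {
    var end = Math.min(i + batchSize, items.length);
    for (; i < end; i++) {
      expensiveOperation(items[i]); // hypothetical per-item work
    }
    if (i < items.length) {
      setTimeout(nextBatch, 5); // yield so clicks and repaints can run
    } else if (onDone) {
      onDone(); // e.g. hide the 'please wait' message here
    }
  })();
}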
This is a rather complicated question that may simply be impossible with what's currently available, but if there were an easy way of doing it, it would be huge.
I'm debugging some JavaScript in Chrome, and because it's very event-driven, I prefer to get trace reports of the code (what got called, etc.) instead of breakpoints. So wherever I leave a breakpoint, I'd like to see the local function name and arguments.
The closest I can get is to drop a conditional breakpoint in, like the following:
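(The snippet itself didn't survive here; presumably it was a breakpoint condition along these lines, which logs and then evaluates to a falsy value so the debugger never actually pauses:)

console.log(arguments.callee.name, arguments), false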
There are two big problems with this approach:
Pasting this into each breakpoint is too cumbersome. People would be far more likely to use it if it could be chosen as the default action for each breakpoint.
In Google Chrome, the log calls get fired twice.
Any ideas on a way to surmount either of these problems? I think it might be possible in IE with VS, but the UI there seems equally cumbersome.
IE11 now has "tracepoints", independent of Visual Studio. They do exactly what you asked for three years ago. I don't see them in Chrome or any other browsers yet, but hopefully they will catch on soon!
The best option I found was to edit the JavaScript code in Chrome's Sources panel, adding a console.log.
It only works after the page has loaded (unless you can afford to put a breakpoint right after the refresh and then add the logging lines), and (worse) you have to redo it each time you reload the page.
Good luck with your search!
I couldn't find something to do this, so I wrote my own.
Now, instead of constantly inserting and removing console.log calls, I leave the logging in and only watch it when necessary.
Warning: specific code below is untested.
var debug = TraceJS.GetLogger("debug", "mousemove");
$('div').mousemove(function(evt) {
  debug(this.id, evt);
});
Every time the mouse is moved over a DIV, it generates a log event tagged ["mousemove", {id of that element}].
The fun part is being able to selectively watch events. When you want to only see mousemove events for element #a, call the following in the console:
TraceJS('a');
When you want to see all mousemove events, call:
TraceJS('mousemove');
Only events that match your filter are shown. If you call TraceJS() with no argument, the log calls stop being shown.
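The TraceJS source itself isn't shown above; purely as an illustration, here is a minimal guess at how such a tag-filtered logger could be structured (every name mirrors the usage above, but the implementation is assumed):

var TraceJS = (function() {
  var filter = null; // current filter; falsy means show nothing

  function trace(tag) { // TraceJS('a'), TraceJS('mousemove'), TraceJS()
    filter = tag;
  }

  trace.GetLogger = function() {
    var tags = Array.prototype.slice.call(arguments); // e.g. ["debug", "mousemove"]
    return function() {
      var args = Array.prototype.slice.call(arguments);
      // Show the entry only if the filter matches one of the logger's
      // tags or one of the logged values (e.g. the element id):
      if (filter && (tags.indexOf(filter) !== -1 || args.indexOf(filter) !== -1)) {
        console.log.apply(console, tags.concat(args));
      }
    };
  };

  return trace;
})();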
Suppose I modify the HTML DOM on line 1. Can I be sure that line 2 of the JavaScript will be working with the DOM modifications enacted by line 1?
This is the only explanation I can come up with for some buggy behavior I've been having on a form. The previous line is supposed to update the DOM, but sometimes the DOM is not updated by the time execution reaches the next line. Things seem to work fine when I go slower, though.
Yes, JavaScript DOM modifications occur sequentially, unless you are waiting for an asynchronous AJAX call to return. The next instruction will not run until the previous one has completed. However, please show your code!
Updating a specific property on a DOM element happens right away and should persist on a subsequent read of that property.
If you are relying on that change to propagate across the DOM, it can be tricky. For example, changing the size of an element and expecting a sibling element to report a new offset position as a result: the latter may not happen until the stack unwinds. I don't actually know the exact rules, but you have to be careful, and it is sometimes browser-dependent behavior. And scarier yet, sometimes throwing in an alert to help debug this makes the elements "realize" their new layout right away. Then you take the alert out and it goes back to buggy behavior.
So if you are positive that a DOM change hasn't had its impact right away, sometimes the thing to do is to call setTimeout with a callback function and a time value of 0. When the timer callback runs, you can complete the subsequent processing. YMMV
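A sketch of that deferral pattern (element, sibling, and continueProcessing() are hypothetical names):

element.style.height = '100px'; // the DOM write

setTimeout(function() {
  // Runs after the current stack unwinds, so layout has had a chance
  // to settle before the dependent read:
  var offset = sibling.offsetTop;
  continueProcessing(offset);
}, 0);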