JavaScript: Atomicity / Interactions between browser view and DOM

I have two specific JavaScript questions that are probably answered by one general answer. Please feel free to also submit the corresponding general question; I have difficulty expressing it myself.
When I manipulate multiple DOM elements in a single JavaScript callback, is the view possibly updated "live" with each individual manipulation, or atomically after the callback returns?
When a user clicks an HTML element twice in a short timeframe, and the corresponding click handler disables the HTML element, is there a guarantee that the handler won't be executed twice?

Preemptively, I do not have a standards citation for this. This is strictly in my experience.
I have never noticed the visible pixels update while JavaScript is executing. I suspect they will not during standard operation of the browser, though debugging may well present an exception. I have, however, observed synchronous reflow calculations occurring on DOM elements between the top and bottom of a single function call, even though those reflow calculations never made it to the pixel buffer (that I noticed). These appear to occur synchronously:
function foo() {
    $('#myElement').width();          // 100 -- current layout value
    $('#myElement').parent().width(); // 150
    $('#myElement').css('width', 200);
    // The write above forces a synchronous reflow before the next reads:
    $('#myElement').width();          // 200
    $('#myElement').parent().width(); // 250
}
Regarding multiple clicks on an element that is disabled within the click handler, I suspect the second click will not fire. I believe that when the operating system receives a click event, it passes it to the browser, where it is placed in a queue. This queue is serviced by the same thread that executes JavaScript. The OS click event will remain in the queue until JavaScript completes execution, at which time it will be removed, wrapped as a browser click event, and bubbled through the DOM. At this point the button will already be disabled and the click event will not activate it.
I'm guessing the pixel buffer is painted on-screen as another operation of this same thread, though I may be mistaken.
This is based on my vague recollection of standards that I have seen quoted and referenced elsewhere. I don't have any links.

All script execution happens on a single thread. Therefore you can never have simultaneous actions and don't have to worry about concurrent modification of elements. This also means you don't need to worry about a click handler being fired while one is currently executing. However, this doesn't mean the user can't fire it again immediately after your script finishes. The execution may be so fast that it's indistinguishable.

First Bullet: The updates will be live. For example, attach the following function to an onclick handler:
function () {
    var d = document.getElementById("myelement");
    d.setAttribute("align", "center");
    d.setAttribute("data-foo", "bar");
    d.setAttribute("data-bar", "baz");
}
Now load this in your browser and set a breakpoint on the first line. Trigger the event and step through line by line while watching the DOM. The updates happen live; they are not going to happen all at once.
If you want them to happen atomically, you'll want to clone the DOM element in question, make the changes on the clone, then replace the original element with the clone. The cloned element is still being updated in real time, but the user-visible effect is atomic.
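For illustration, here is a minimal sketch of that clone-and-swap approach in plain JS (the element id is hypothetical, reusing the attributes from the example above):
var original = document.getElementById("myelement");
var clone = original.cloneNode(true); // deep copy, detached from the document
// These changes happen off-document, so the user never sees the intermediate states.
clone.setAttribute("align", "center");
clone.setAttribute("data-foo", "bar");
clone.setAttribute("data-bar", "baz");
// Swap the clone in; the user-visible change happens in one step.
original.parentNode.replaceChild(clone, original);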
Second Bullet: If the second click event comes in after the element has been disabled, then yes, you won't get a second callback. But if there is any delay between the first click and the disable call (for example, some kind of lengthy check needs to be performed to determine whether the element should be disabled) and the second click occurs during that delay, the callback will fire a second time. The browser has no way to know that multiple click events aren't acceptable behavior in a given script.
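One common way to close that window is to disable the element synchronously at the very top of the handler, before any slow work begins. A minimal sketch, where doLengthyCheck is a hypothetical asynchronous check, not part of the original answer:
document.getElementById("myButton").addEventListener("click", function (e) {
    var button = e.currentTarget;
    if (button.disabled) return;  // defensive guard
    button.disabled = true;       // disable before any slow work starts
    doLengthyCheck(function (shouldStayDisabled) {
        if (!shouldStayDisabled) {
            button.disabled = false; // re-enable once the check completes
        }
    });
});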

Related

Javascript execution in browser

This might sound silly, but it has bugged me for the past couple of days. I just wanted some clarity on how JavaScript is interpreted and executed in a browser, especially during event handling. Suppose I have two functions bound to a click event on the same element. They might be two different event listeners written for two different classes, and the same element has both of those classes at the time of the click. Which function does the JS engine run first?
Does the interpreter re-interpret the complete JS file each time an event is triggered, or use bytecode generated during interpretation (as in Java), or specifically execute lines x to x+y?
Rather than knowing whether function 1 executes before function 2 or vice versa, I am more curious about the mechanism behind the whole process of registering and handling events using JS.
"I am more curious about the mechanism behind the whole process of registering and handling events using JS."
Think of it as a queue onto which listeners are pushed when you call element.addEventListener.
A queue is First-In-First-Out.
So, whichever event listener was added to the queue first (basically, registered on the event target first) will be executed first, until all of them have executed.
Note: If the same event-handler function is added more than once (same parameters to addEventListener), the duplicate registration is discarded; the handler still fires only once.
Secondly, when we add an event via addEventListener, we can specify a boolean called useCapture. If the value is true, the listener fires during the capturing phase, so listeners on ancestor elements run before the target element's listeners; if it is false (the default), the listener fires during the bubbling phase, after the target's listeners.
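For example, a small sketch of both rules (registration order and useCapture), with hypothetical parent/child elements:
var parent = document.getElementById("parent");
var child = document.getElementById("child");
child.addEventListener("click", function () { console.log("child: first"); });
child.addEventListener("click", function () { console.log("child: second"); });
// A capturing listener on the parent runs before the child's listeners;
// a bubbling listener on the parent runs after them.
parent.addEventListener("click", function () { console.log("parent: capture"); }, true);
parent.addEventListener("click", function () { console.log("parent: bubble"); }, false);
// Clicking #child logs: parent: capture, child: first, child: second, parent: bubble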

Does adding too many event listeners affect performance?

I have a general question about JavaScript (jQuery) events/listeners.
Is there any limit on the number of click listeners before I run into performance problems?
In terms of performance, the number of elements the event is bound to is where you'd see any issues.
Here is a jsperf test. You'll see that binding to many elements is much slower, even though only one event is being bound in each case.
The 3rd test in the jsperf shows how you'd bind the event to a parent element and use delegation to listen for the elements you want (in this case .many); see the sketch below.
N.B. The test shows the 3rd option as the fastest, but that's because it's targeting the element with an id rather than a class.
Update: Here's another perf test showing 3 events bound to an id vs one event bound using a class
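For reference, a sketch of the delegation pattern those tests compare, in the same jQuery style as the question (the selectors are hypothetical):
// One handler per element: binding gets slow when there are many elements.
$('.many').on('click', function () {
    console.log('clicked', this);
});
// Delegation: one handler on a parent listens for clicks that bubble up
// from matching descendants; binding cost is constant regardless of count.
$('#container').on('click', '.many', function () {
    console.log('clicked', this);
});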
Though this is an old question, I do not feel that it's completely answered yet.
As atmd pointed out: It's already important where you're adding the event handlers to.
But the original question seems to be more concerned about the performance impact of triggering those event handlers (e.g. click or scroll events).
And yes, adding additional event handlers to an element DOES decrease performance.
Here is a performance comparison to test the following cases:
https://jsbench.me/ztknuth40j/1
The results
One <div> has 10 click handlers, and the click event is triggered via jQuery.
→ 72,000 clicks/sec
One <div> has 100 click handlers, and the click event is triggered via jQuery.
→ 59,000 clicks/sec, about 19% slower than the first case
This shows that additional event handlers can slow down execution.
One <div> has 10 click handlers, and the click event is triggered via plain JS.
→ 84,000 clicks/sec, about 6% faster than the first case
Using plain JS is a little faster than using jQuery.
One <div> has 100 click handlers, and the click event is triggered via plain JS.
→ 14,000 clicks/sec, about 77% slower than the second case
This is interesting: when using native events, the number of listeners seems to degrade performance faster than with jQuery.
(These results vary on every run and depend largely on your hardware and browser.)
Keep in mind that those tests are done with an empty function. When adding a real function that performs some additional tasks, the performance will slow down even further.
Here is a second test that changes the contents of a div on each click:
https://jsbench.me/ztknuth40j/2
Is it slow?
On the other hand: even 100 operations per second is super fast (it means that every event handler is executed 100 times in a single second), and no user will notice the delay.
I think you will not run into problems with user-action events like click or mouseenter handlers, but you need to watch out when using events that fire rapidly, like scroll or mouseover.
Also, as computers get faster and browsers apply more and more optimizations, there is no hard limit on how many event handlers are "too many". It depends not only on the function that's called and the event that's observed, but also on the device and browser of the user.
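If you do have to listen to rapidly firing events, a simple throttle (a sketch, not part of the original answer) keeps the handler's cost bounded:
// Run fn at most once every `wait` milliseconds.
function throttle(fn, wait) {
    var last = 0;
    return function () {
        var now = Date.now();
        if (now - last >= wait) {
            last = now;
            fn.apply(this, arguments);
        }
    };
}
window.addEventListener('scroll', throttle(function () {
    // expensive work here
}, 100));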

Unbind inline javascript events from HTML elements in memory

How do I completely unbind inline javascript events from their HTML elements?
I've tried:
undelegating the event from the body element
unbinding the event from the element
and even removing the event attribute from the HTML element
To my surprise at least, only removing the onchange attribute (.removeAttr('onchange')) was able to prevent the event from firing again.
<input type="text" onchange="validateString(this)">
I know this is possible with delegates and that's probably the best way to go, but just play along here. This example is purely hypothetical just for the sake of proposing the question.
So the hypothetical situation is this:
I'm writing a javascript validation library that has javascript events tied to input fields via inline HTML attributes like so:
<input type="text" onchange="validateString(this)">
But, I'd like to make the library a little better by unbinding my events, so that people working with this library in a single-page application don't have to manage my event handlers and so that they don't have to clutter their code at all by wiring up input events to functions in my hypothetical validation library... whatever. None of that's true, but it seems like a decent usecase.
Here's the "sample" code of Hypothetical Validation Library.js:
http://jsfiddle.net/CoryDanielson/jwTTf/
To test, just type in the textbox and then click elsewhere to fire the change event. Do this with the web inspector open, recording on the Timeline tab. Highlight the region of the timeline that corresponds to when you fired the change event (fire it multiple times) and you'll see the event listener count (in the window below) increase by 100 on each change event. If managed and removed properly, each event listener would be removed before a new input is rendered, but I have not found a way to do that properly with inline JavaScript events.
What that code does is this:
onChange, the input element triggers a validation function
That function validates the input and colors the border if successful
Then, after 1 second (to demonstrate the memory leak), the input element is replaced with identical HTML 100 times in a row without unbinding the change event (because I don't know how to do that; that's the problem here). This simulates changing the view within a single-page app. It creates 100 new event listeners in the DOM, which is visible through the web inspector.
Interesting note: $('input').removeAttr('onchange'); will actually prevent the onchange event from firing in the future, but does not garbage collect the eventListener/DOM bookkeeping that is visible in the web inspector.
This screenshot is after the change event fires 3 times. Each time, 100 new DOM nodes are rendered with identical HTML, and I've attempted to unbind the onchange event from each node before replacing the HTML.
Update: I came back to this question and just did a quick little test using the JSFiddle to make sure that the answer was valid. I ran the 'test' dozens of times and then waited -- sure enough, the GC came through and took care of business.
I don't think you have anything to worry about. Although the memory can no longer be referenced and will eventually be garbage collected, it still shows up in the Web Inspector memory window. The memory will be collected when the GC decides to collect it (e.g., when the browser is low on memory or after some fixed time); the details are up to the GC implementer. You can verify this by clicking the "Collect Garbage" button at the bottom of the Web Inspector window. I'm running Chrome 23, and after I enter text in your validation box about 5 or 6 times, the memory usage comes crashing down, apparently due to garbage collection.
This phenomenon is not specific to inline events. I saw a similar pattern just by repeatedly allocating a large array and then overwriting the reference to that large array, leaving lots of orphaned memory for GC. Memory ramps up for a while, then the GC kicks in and does its job.
My first suggestion would have been to use off('change'), but it seems you've already tried that. It's possible that the reason it's not working is that the handler wasn't attached with .on('change'). I don't know too much about how jQuery handles listeners like this internally, but try attaching with .on('change', function () {...}) or .bind('change', function () {...}) instead.
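For completeness, a plain-JS sketch of neutralizing an inline handler directly, without going through jQuery (the selector is hypothetical):
var input = document.querySelector('input[onchange]');
// Clearing the reflected property detaches the inline handler...
input.onchange = null;
// ...and removing the attribute does so as well (this is what .removeAttr('onchange') does).
input.removeAttribute('onchange');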

Does JavaScript wait for this DOM event to finish?

Suppose I modify the HTML DOM on line 1. Can I be sure that line 2 of the JavaScript will be working with the DOM modifications enacted by line 1?
This is the only explanation I can come up with for some buggy behavior I've been having on a form. The previous line is supposed to update the DOM, but sometimes the DOM is not updated by the time execution reaches the next line. Things seem to work fine when I step through more slowly, though.
Yes, JavaScript DOM modifications occur sequentially, unless you are waiting for an asynchronous AJAX call to return. The next instruction will not run until the first has completed. However, please show your code!
Updating a specific property on a DOM element happens right away and should persist on a subsequent read of that property.
If you are relying on that change to propagate across the DOM, it can be tricky. For example, changing the size of an element and expecting a sibling element to report a new offset position as a result: the latter may not happen until the stack unwinds. I don't know the exact rules, but you have to be careful, and it is sometimes browser-dependent behavior. Scarier yet, sometimes throwing in an alert to help debug this makes the elements "realize" their new layout right away. Then you take the alert out and it goes back to the buggy behavior.
So if you are positive that a DOM change hasn't had its impact right away, then sometimes the thing to do is to call setTimeout with a callback function and a time value of 0. When the timer callback runs, you can complete the subsequent processing. YMMV.
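A minimal sketch of that setTimeout pattern (the element ids are hypothetical):
var box = document.getElementById('box');
var sibling = document.getElementById('boxSibling');
box.style.width = '200px'; // change that affects the sibling's layout
setTimeout(function () {
    // Runs after the current call stack unwinds, by which point layout has settled.
    console.log(sibling.offsetLeft);
}, 0);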

DOM input events vs. setTimeout/setInterval order

I have a block of JavaScript code running on my page; let's call it func1. It takes several milliseconds to run. While that code is running, the user may click, move the mouse, enter some keyboard input, etc. I have another block of code, func2, that I want to run after all of those queued-up input events have resolved. That is, I want to ensure the order:
func1
All handlers bound to input events that occurred while func1 was running
func2
My question is: Is calling setTimeout func2, 0 at the end of func1 sufficient to guarantee this ordering, across all modern browsers? What if that line came at the beginning of func1—what order should I expect in that case?
Please back up your answers with either references to the relevant specs, or test cases.
Update: It turns out that no, it's not sufficient. What I failed to realize in my original question was that input events aren't even added to the queue until the current code block has been executed. So if I write
// time-consuming loop...
setTimeout func2, 0
then only after that setTimeout is run will any input events (clicks, etc.) that occurred during the time-consuming loop be queued. (To test this, note that if you remove, say, an onclick callback immediately after the time-consuming loop, then clicks that happened during the loop won't trigger that callback.) So func2 is queued first and takes precedence.
Setting a timeout of 1 seemed to work around the issue in Chrome and Safari, but in Firefox, I saw input events resolving after timeouts as high as 80 (!). So a purely time-based approach clearly isn't going to do what I want.
Nor is it sufficient to simply wrap one setTimeout ... 0 inside of another. (I'd hoped that the first timeout would fire after the input events queued, and the second would fire after they resolved. No such luck.) Nor did adding a third, or a fourth, level of nesting suffice (see Update 2 below).
So if anyone has a way of achieving what I described (other than setting a timeout of 90+ milliseconds), I'd be very grateful. Or is this simply impossible with the current JavaScript event model?
Here's my latest JSFiddle testbed: http://jsfiddle.net/EJNSu/7/
Update 2: A partial workaround is to nest func2 inside of two timeouts, removing all input event handlers in the first timeout. However, this has the unfortunate side effect of causing some—or even all—input events that occurred during func1 to fail to resolve. (Head to http://jsfiddle.net/EJNSu/10/ and try rapidly clicking the link several times to observe this behavior. How many clicks does the alert tell you that you had?) So this, again, surprises me; I wouldn't think that calling setTimeout func2, 0, where func2 sets onclick to null, could prevent that callback from being run in response to a click that happened a full second ago. I want to ensure that all input events fire, but that my function fires after them.
Update 3: I posted my answer below after playing with this testbed, which is illuminating: http://jsfiddle.net/TrevorBurnham/uJxQB/
Move the mouse over the box (triggering a 1-second blocking loop), then click multiple times. After the loop, all the clicks you performed play out: The top box's click handler flips it under the other box, which then receives the next click, and so on. The timeout triggered in the mouseenter callback does not consistently occur after the click events, and the time it takes for the click events to occur varies wildly across browsers even on the same hardware and OS. (Another odd thing this experiment turned up: I sometimes get multiple jQuery mouseenter events even when I move the mouse steadily into the box. Not sure what's going on there.)
I think you are on the wrong track with your experiments. One problem is of course that you are fighting different message loop implementations here. The other (the one you didn't recognize, it seems) is different double-click handling. If you click the link twice, you won't get two click events in MSIE; rather, you get one click event and a dblclick event (to you it looks like the second click was "swallowed"). All other browsers seem to generate two click events and a dblclick event in this scenario. So you need to handle dblclick events as well.
As message loops go, Firefox should be the easiest to handle. As far as I know, Firefox adds messages to the queue even while JavaScript code is running, so a simple setTimeout(..., 0) is sufficient to run code after the messages are processed. You should refrain from hiding the link after func1() is done, however; at this point clicks aren't processed yet, and they won't trigger event handlers on a hidden element. Note that even a zero timeout doesn't get added to the queue immediately; current Firefox versions have 4 milliseconds as the lowest possible timeout value.
MSIE is similar, only that there you need to handle dblclick events as I mentioned before. Opera seems to work like that as well but it doesn't like it if you don't call event.preventDefault() (or return false from the event handler which is essentially the same thing).
Chrome however seems to add the timeout to the queue first and only add incoming messages after that. Nesting two timeouts (with zero timeout value) seems to do the job here.
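For illustration, a sketch of that nested-timeout trick (browser-dependent, as this answer stresses, and not guaranteed by any spec):
function runAfterQueuedInput(callback) {
    // First timeout: let the browser move pending OS input events into the
    // event queue. Second timeout: run after those events are processed.
    setTimeout(function () {
        setTimeout(callback, 0);
    }, 0);
}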
The only browser where I cannot make things work reliably is Safari (version 4.0 on Windows). The scheduling of messages seems random there; it looks like timers execute on a different thread and can push messages into the message queue at random times. In the end you probably have to accept that your code might not get interrupted on the first occasion and the user might have to wait a second longer.
Here is my adaptation of your code: http://jsfiddle.net/KBFqn/7/
If I'm understanding your question correctly, you have a long-running function but you don't want to block the UI while it is running? After the long-running function is done you then want to run another function?
If so, instead of using timeouts or intervals, you might want to use Web Workers. All modern browsers support them (in Internet Explorer, from IE10 onward).
I threw together an example page (couldn't put it on jsfiddle since Web Workers rely on an external .js file that has to be hosted on the same origin).
If you click A, B, C or D a message will be logged on the right. When you press start a Web Worker starts processing for 3 seconds. Any clicks during those 3 seconds will be immediately logged.
The important parts of the code are here:
func1.js (the code that runs inside the Web Worker):
onmessage = function (e) {
    var result,
        data = e.data, // the data passed in when this worker was called;
                       // here it is the JS literal {theData: 'to be processed by func1'}
        startTime;

    // busy-wait for a second to simulate work
    startTime = (new Date).getTime();
    while ((new Date).getTime() - startTime < 1000) {
        continue;
    }

    result = 42;

    // return our result
    postMessage(result);
};
The code that invokes the Web Worker:
var worker = new Worker("func1.js");

// this callback fires when "func1.js" is done executing
worker.onmessage = function (event) {
    log('Func1 finished');
    func2();
};

worker.onerror = function (error) {
    throw error;
};

// send some data to be processed
log('Firing Func1');
worker.postMessage({theData: 'to be processed by func1'});
At this point, I'm prepared to say that, regrettably, there is no solution to this problem that will work under all browsers, in every scenario, every time. In a nutshell: If you run a JavaScript function, there's no way to reliably distinguish between input events that the user triggered during that time and those the user triggered afterward. This has interesting implications for JS developers, especially those working with interactive canvases.
My mental model of how JS input events work was off the mark. I'd thought that it went
The user clicks a DOM element while code is running
If that element has a click event handler, the callback is queued
When all blocking code has executed, the callback is run
However, my experiments, and those contributed by Wladimir Palant (thanks, Wladimir) show that the correct model is
The user clicks a DOM element while code is running
The browser captures the coordinates, etc. of the click
Some time after all blocking code has executed, the browser checks which DOM element is at those coordinates, then runs the callback (if any)
I say "some time after" because different browsers seem to have very different behaviors for this—in Chrome for Mac, I can set a setTimeout func2, 0 at the end of my blocking code and expect func2 to run after the click callbacks (which run only 1-3ms after the blocking code finished); but in Firefox, the timeout always resolves first, and the click callbacks typically happen ~40ms after the blocking code finished executing. This behavior is apparently beyond the purview of any JS or DOM spec. As John Resig put it in his classic How JavaScript Timers Work:
When an asynchronous event occurs (like a mouse click, a timer firing, or an XMLHttpRequest completing) it gets queued up to be executed later (how this queueing actually occurs surely varies from browser-to-browser, so consider this to be a simplification).
(Emphasis mine.)
So what does this mean from a practical standpoint? This is a non-issue as the execution time of blocking code approaches 0. Which means that this problem is yet another reason to hew to that old advice: Break up your JS operations into small chunks to avoid blocking the thread.
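A common chunking sketch in plain JavaScript (the names are hypothetical):
function processInChunks(items, processItem, chunkSize) {
    var i = 0;
    (function nextChunk() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            processItem(items[i]);
        }
        if (i < items.length) {
            // Yield to the event loop so queued input events can be handled.
            setTimeout(nextChunk, 0);
        }
    })();
}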
Web workers, as Useless Code suggested, are even better when you can use them, but be aware that you're forgoing compatibility with Internet Explorer and all major mobile browsers.
Finally, I hope browser-makers will move forward on standardizing input events in the future. This is one of many quirks in that area. I hope Chrome will lead the way to the future: excellent thread isolation, low event latency, and relatively consistent queueing behavior. A web developer can dream, can't he?
You can use dispatchEvent with a custom event name at the end of your function. This won't work in IE, but it's still possible there; just use fireEvent instead.
Take a look at this:
http://jsfiddle.net/minitech/NsY9V/
Click "start the long run", and click on the textbox and type in it. Voilà!
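For illustration, a sketch of the custom-event approach this answer describes, using the modern CustomEvent constructor (at the time, non-IE browsers used document.createEvent and IE used fireEvent); the event name is arbitrary. Note that dispatchEvent invokes its listeners synchronously:
document.addEventListener('func1done', function () {
    func2();
});
function func1() {
    // ...long-running work...
    document.dispatchEvent(new CustomEvent('func1done')); // runs func2 synchronously
}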
You can make the event handlers check to see if a flag is set by func1; if so queue func2 if not already queued.
This may be either elegant or ugly depending on how specialized func2 is. (Actually, it's probably just ugly.) If you choose this approach, you need some way to hook events, or alternatively your own bindEvent(event, handler, ...) function which wraps the handler and binds the wrapped handler.
The correctness of this approach depends on all the events during func1 being queued at the same time. If this is not the case, you can either make func2 idempotent, or (depending on the semantics of func2) put an ugly "cannot be called again for N milliseconds" lock on it.
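A rough sketch of that flag-and-queue idea (all names are hypothetical, and func2 is the function to defer):
var pendingFunc2 = false;
function func1() {
    // ...long-running work...
    pendingFunc2 = true; // signal that func2 still needs to run
}
// Wrap each event handler so that, after it runs, it queues func2 once.
function wrapHandler(handler) {
    return function (event) {
        handler.call(this, event);
        if (pendingFunc2) {
            pendingFunc2 = false;  // queue func2 only once
            setTimeout(func2, 0);  // let any remaining queued events run first
        }
    };
}
// Usage: element.addEventListener('click', wrapHandler(onClick));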
Please describe your scenario in more detail.
What you need to do:
Some time ago I needed to do something similar, so I built a simple JavaScript routine that serializes async calls into one synchronous sequence; maybe you could use it with a small addition.
For example, this is how it works:
first, register all async or sync routines
second, register the end callback
third, register the calls to the routines with your parameters
fourth, start the process
In your case it is necessary to add one call routine, and that routine should be the unit of work (UoW) of the user's actions.
The main problem is then not calling the routines in order of execution, but tracking the changes made by the user.
first, register all async or sync routines
second, register the end callback
third, register the calls to the routines with your parameters:
-- register your first routine
-- register BlockUi (for example, so the view accepts no further changes)
-- register UiWriter (the UoW of changes made by the user)
-- register your last routine
fourth, start the process
In real code, that is a single dummy call function:
function Should_Can_Serializer_calls() {
    RegisterMethods(model);
    model.Queue.BeginUnitProcess();           // clear the execution stack, among other things
    model.Queue.AddEndMethod(SucessfullEnd);  // callback for the end routine
    model.AbstractCall("func1", 1, "edu", 15, ""); // set the first routine to execute
    model.AbstractCall("BlockUi");            // track changes and user actions
    model.AbstractCall("UiWork");             // track changes and user actions
    model.AbstractCall("func2", "VALUE");     // set the second routine to execute
    model.Process();                          // start the calls
}
The methods themselves should be asynchronous; for that you could use this library: http://devedge-temp.mozilla.org/toolbox/examples/2003/CCallWrapper/index_en.html
So, what do you want to do?
