How does jQuery have asynchronous functions? - javascript

I'm surprised I can't find a clear answer to this. So, in jQuery, you can do this:
$(someElements).fadeOut(1000);
$(someElements).remove();
This will start a fadeOut animation, but before it finishes its 1-second duration, the elements are removed from the DOM. But how is this possible? I keep reading that JavaScript is single-threaded (see also: Is JavaScript guaranteed to be single-threaded?). I know I can do either:
$(someElements).fadeOut(1000).promise().done(function() {
    $(someElements).remove();
});
or even better:
$(someElements).fadeOut(1000, function() {
    $(this).remove();
});
What I don't understand is how JavaScript runs in a "single thread" but I'm able to use these jQuery functions that execute asynchronously and visibly see the DOM change in different places at the same time. How does it work? This question is not: "How do I fix this".

jQuery animations (and pretty much all JavaScript-based animations) use timers. The call to .fadeOut() just STARTS the animation; it doesn't complete until some time later, after a series of timer operations has run.
This is all still single-threaded. .fadeOut() does the first step of the animation, sets a timer for the next step, and then the rest of your JavaScript (including the .remove()) runs to completion. When that JavaScript finishes and the time for the timer elapses, the timer fires and the next step of the animation happens. Finally, when all steps of the animation have completed, jQuery calls the completion function for the animation. That is a callback that allows you to do something when the animation is done.
That is how you fix this issue:
$(someElements).fadeOut(1000, function() {
    $(this).remove();
});
You use the animation completion function and only remove the item when the animation is done.
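To make that timer mechanism concrete, here is a rough, hand-rolled sketch of a fade driven by setTimeout. This is not jQuery's actual code; the element argument, the 20ms step size, and the opacity handling are just illustrative assumptions.
function fadeOutManually(element, duration) {
    var start = Date.now();
    function step() {
        // how far along the animation should be, based on elapsed time
        var progress = Math.min((Date.now() - start) / duration, 1);
        element.style.opacity = String(1 - progress);  // one small visual update
        if (progress < 1) {
            setTimeout(step, 20);                      // schedule the next step and return control
        } else {
            element.style.display = 'none';            // animation finished
        }
    }
    step();  // the first step runs synchronously; every later step runs from a timer
}
Between steps the thread is free, which is exactly why the .remove() in the question runs long before the fade is finished.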

There is a setInterval handler in jQuery that performs transformations on all registered animation properties. If you're coming from AS3, think of it as an EnterFrame handler, or like a Draw method in OpenGL.

You can use delay() to wait for a certain time, or use a callback for the animation (for example, replacing fadeOut with animate).
jQuery uses setTimeout and queues to animate.
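As a rough sketch of how delay() and jQuery's queues could be combined here (using someElements from the question; the 500ms pause is just an example, and .queue() is my addition since .remove() is not itself a queued method):
$(someElements).fadeOut(1000)
    .delay(500)                   // optional extra pause on the fx queue
    .queue(function(next) {
        $(this).remove();         // runs only after the fade (and the delay) have finished
        next();                   // let any later queue entries proceed
    });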

In much the same way that operating systems 20 years ago did multitasking. There were no threads; there was just a list of things that needed attention and a controller that gave attention to things based on that list.
A single thread just iterates through the list over and over, servicing all the things that need servicing. The only difference here is that some things have a wait period associated with them. They are in the list, but are flagged to be serviced only after a certain period. It's essentially a very basic scheduler implementation. The kernel on a computer does the same thing. Your CPU can only execute a few programs truly concurrently, and even then only to a limited degree. The operating system kernel has to decide who gets attention on a millisecond-by-millisecond basis (see jiffies). JavaScript's "kernel", or runtime, does the same thing, but essentially as if it were running on a CPU with only one core.
This doesn't talk about things like interrupt queues and such which a CPU can cope with, and I'm not sure Javascript has any analogue, but at a simple level, I think it's a fair representation.
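As a toy illustration of that scheduler idea (purely conceptual; the schedule/serviceLoop names and the 1ms polling are my own assumptions, not how any real engine is written), one loop servicing a task list might look like this:
// A toy scheduler: one list of tasks, some flagged with a wait period,
// and a single loop that services whatever is due.
var taskList = [];

function schedule(fn, delayMs) {
    taskList.push({ runAt: Date.now() + delayMs, fn: fn });
}

function serviceLoop() {
    var now = Date.now();
    for (var i = 0; i < taskList.length; i++) {
        if (taskList[i].runAt <= now) {            // only run tasks whose wait period has elapsed
            var task = taskList.splice(i--, 1)[0];
            task.fn();                             // runs to completion; nothing else runs meanwhile
        }
    }
    if (taskList.length > 0) setTimeout(serviceLoop, 1);  // keep iterating while work remains
}

schedule(function() { console.log('ran after ~100ms'); }, 100);
schedule(function() { console.log('ran almost immediately'); }, 0);
serviceLoop();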

Single Threading has nothing to do with Asynchronous programming. http://social.msdn.microsoft.com/Forums/uk/csharplanguage/thread/3de8670c-49ca-400f-a1dc-ce3a3619357d
If you have only one thread on which you can execute instructions, it won't /always/ be executing. During those empty spots, that's opportunity for more work. Asynchronous programming just breaks up the work into re-entry capable blocks and the thread jumps around to where it's needed. (Conceptual explanation)
In this case, your question might more appropriately be: why isn't this a blocking call? The answer is pretty clearly that it separates UI animations from other work. The whole JS environment shouldn't block for 1 second just to animate an element in small slices when it could instead be retrieving or transforming data, queuing up animations for other elements, etc.

Javascript OnScroll performance comparison

Update: Similar question with a very good answer that shows how to use requestAnimationFrame with scroll in a useful way:
scroll events: requestAnimationFrame VS requestIdleCallback VS passive event listeners
So let's say I want to add some expensive action on my site triggered by scrolling. For example, I'm using parallax effects in my jsfiddle.
Now I keep reading it must not be bound to the event directly, sometimes followed by snippets that are meant to be better. Just some examples:
Attaching JavaScript Handlers to Scroll Events = BAD!
How to develop high performance onScroll event?
How to make faster scroll effects?
60FPS onscroll event listener
What they say is basically don't do this:
// Bad guy 1
$(window).scroll( function() {
    animate(ex1);
});
or this
// Bad guy 2
window.addEventListener('scroll', onScroll, false);
function onScroll() {
    animate(ex2);
}
But use timeouts, intervals, requestAnimationFrame and whatnot, for example:
// Good guy
$(window).scroll( function() {
    scrolling1 = true;
});
setInterval( function() {
    if (scrolling1) {
        scrolling1 = false;
        animate(ex3);
    }
}, 50 );
So, I went and added the options I found in the links above to a jsfiddle that tries to compare them by adding a counter to every approach, like so:
// Test
$(window).scroll( function() {
    counter = counter + 1;
    // output result of counter
    animate(ex1);
});
Best to check the complete jsfiddle
Outcome: Everything that works smoothly ends up with about the same number of calculations. If I can live with choppy effects, maybe I can save some resources. And against everything I read, this seems logical to me!
First question:
Am I missing something or is this a valid test? If it's invalid, how could I test correctly?
Edit: To clarify, I want to test whether any of the above methods save performance at all.
Second question:
If it is valid, why is everyone nervous about onscroll? If fluid animations require 5000 calculations over the complete site, there's no way to change that anyway, is there?
(Well, sometimes I use checks to determine whether an object is in the viewport or not, but honestly I don't even know whether those checks aren't as expensive as the code they prevent, especially if they involve five different values such as offset, windowHeight, scrollTop, getBoundingClientRect and outerHeight...)
So, @SirPeople already answered your first question correctly: it is indeed a good test to see how often the animate function gets called, but it's a bad test for comparing the performance of the different snippets.
This is a performance recording of the execution:
The function animate isn't expensive at all. I took a performance recording (next picture), which shows that it takes between 0.64ms and 1.29ms in the one iteration I looked at (points 1-5). And once the function is done, the repaint takes no time at all (point 6), which might be because the page has almost no content. When we look at the time, we can see that all five animation functions and the repaint happen in less than 10ms, which, under normal circumstances, means we can get a fluid 60fps animation (point 7).
Also, if we want to compare onscroll event listeners, we need to test each one on its own and compare the results. If one of the listeners were really blocking, it would have an influence on the whole page, and without performance debugging you wouldn't know which one it was.
I made two jsfiddles window.scroll and RAF. And, to my surprise, there does not seem to be any difference.
Why are people concerned about this?
As you can see in the jsfiddles linked above, if the event handlers get too large, the entire page is going to lag.
Now what?
I'm no performance guru myself, but:
Perhaps one of the other solutions is correct
We can mark the event listeners as passive, although in my test it didn't really improve anything (see the sketch after this list)
https://developers.google.com/web/updates/2016/06/passive-event-listeners
We can optimize the event listener by removing parallax effects
There's also a newer API called Intersection Observer, which is supposed to be much faster; I didn't test it
https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API
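A rough sketch of those last two options follows; the .parallax-section selector, the visible class, and the 0.1 threshold are just placeholder assumptions.
// 1. Passive scroll listener: promises the browser that the handler will not
//    call preventDefault(), so scrolling never has to wait for it.
window.addEventListener('scroll', function() {
    requestAnimationFrame(function() {
        // update parallax positions here
    });
}, { passive: true });

// 2. IntersectionObserver: the browser tells us when an element enters or
//    leaves the viewport instead of us measuring it on every scroll event.
var observer = new IntersectionObserver(function(entries) {
    entries.forEach(function(entry) {
        entry.target.classList.toggle('visible', entry.isIntersecting);
    });
}, { threshold: 0.1 });
observer.observe(document.querySelector('.parallax-section'));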
I am not totally sure I understood your questions and all your statements correctly, but I will try to give you an answer:
Am I missing something or is this a valid test? If it's invalid, how could I test correctly?
It is a valid test if you are measuring the number of times a function has been called; the result will of course depend on the browser, the OS, whether rendering is GPU-accelerated, and the other benchmark parameters already commented on in your question.
If we consider that measurement correct, then it can be said that using timeouts or requestAnimationFrame could save time because we are basically following the principles of debouncing or throttling: we do not want to call a function more times than needed. With a timer we queue fewer function calls, and requestAnimationFrame enqueues calls just before repainting and executes them sequentially. With timeouts it could happen that calculations overlap if they are very heavy.
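As an illustration of that throttling idea, a common pattern (reusing the animate/ex3 names from the snippets above; the flag name is my own) does the work at most once per frame no matter how many scroll events fire:
var scrollScheduled = false;
window.addEventListener('scroll', function() {
    if (!scrollScheduled) {
        scrollScheduled = true;
        requestAnimationFrame(function() {
            scrollScheduled = false;
            animate(ex3);   // the heavy work runs at most once per painted frame
        });
    }
});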
I found a better answer on why to use requestAnimationFrame; it explains the main problems with animations in the browser, such as shearing, flickering and frame skipping, and it includes a good demo.
I think your testing method is correct, but it has to be interpreted correctly: maybe the call counts come out close to the same number because of your hardware and your engine, but as said, debouncing and throttling are a performance relief.
Here is also one more article from Twitter supporting not attaching handlers directly to window scroll. (Disclaimer: that article is from 2011, and browsers have since dealt with scroll optimizations in different ways.)
why is everyone nervous about onscroll? If fluid animations require 5000 calculations over the complete site, there's no way to change it anyway?
I do not think the nervousness is about the performance hit as such, but the user experience will be worse because of the animation problems mentioned above that overcalling scroll can cause; and even if your timer gets out of sync, you could still see the same 'performance' problems. People just recommend saving calls on scroll because:
Human visual persistence doesn't require a super-high frame rate, so it is useless to try to show images more often than needed.
For more complex calculations or heavy animations, browsers are already working on optimizations; as you have checked, some browsers have already optimized these things compared with 2, 3 or 6 years ago when the articles you cite were written.

How do multiple scripts run without web workers?

I am learning about web workers and found that we can now run multiple scripts on web pages. This is quite interesting, but one thing that came to my mind was: if we couldn't run multiple scripts before HTML web workers, how did I manage to run several slideshows on a single HTML page without that technology?
I may not properly understand this concept. Can anyone explain how I ran several slideshows, built with only JavaScript, without web workers?
Slideshows will typically use a setTimeout() to show the next slide. You can simply have multiple slideshows each using their own setTimeout().
The first slideshow shows its initial image, then sets a setTimeout() for a particular time in the future when it wants to switch to the next slide. The second slideshow then shows its initial image and sets a setTimeout() for a particular time in the future when it wants to switch to the next slide.
Then, both slideshows are done executing for now, until one of their setTimeout() calls fires. They never technically "execute at the same time". Only one ever actually has code running at a time. With short enough timer intervals, it may appear they are both operating at the same time, but technically they aren't.
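A minimal sketch of that idea, with two independent slideshows that each reschedule themselves (the container IDs and the intervals are placeholder assumptions, and each container is assumed to hold its slides as child elements):
function startSlideshow(containerId, intervalMs) {
    var slides = document.getElementById(containerId).children;
    var current = 0;
    // show the first slide, hide the rest
    for (var i = 0; i < slides.length; i++) {
        slides[i].style.display = (i === 0) ? 'block' : 'none';
    }
    function showNext() {
        slides[current].style.display = 'none';
        current = (current + 1) % slides.length;
        slides[current].style.display = 'block';
        setTimeout(showNext, intervalMs);   // reschedule; nothing else runs while this callback runs
    }
    setTimeout(showNext, intervalMs);
}

// Both appear to run "at the same time", but each callback executes alone on the single thread.
startSlideshow('slideshow1', 3000);
startSlideshow('slideshow2', 4000);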
Javascript without WebWorkers is single threaded. There is only ever one thread of execution running at a time. Timers are often used to simulate multiple things happening at the same time (such as multiple javascript-based animations running at the same time). But, that is only a simulation (sometimes a very effective one).
Slideshows may also use CSS3 animations or transitions to show slide transitions from one slide to the next which are controlled by the browser and don't use javascript to execute the animations (they use native code which may or may not be multi-threaded or may even use the GPU - that's browser implementation dependent).
You may find this answer and the references it contains helpful in understanding the javascript event model and single threading: How does JavaScript handle AJAX responses in the background?.

How to Write A Function That Appends an Item to the DOM and Delays the Next Tick?

I recently found the following question online:
Write a function that takes an object and appends it to the DOM, making it so that events are buffered until the next tick? Explain why this is useful?
Here is my response:
function appendElement(element) {
    setTimeout(function() {
        document.body.appendChild(element);
    }, 0);
}
Why did I set the timeout delay to zero?
According to this article, setting the timeout to 0, delays the events until the next tick:
The execution of func goes to the Event queue on the nearest timer tick. Note, that’s not immediately. No actions are performed until the next tick.
Here's what I am uncertain of:
Is my solution correct?
I cannot answer why this approach is beneficial
For reference, I got this question from this website listing 8 JavaScript interview questions.
I'd also like to point out that I am asking this question for my own research and improvement and not as part of a code challenge, interview question, or homework assignment.
I think you misunderstood the question. I read it as asking to append an element to the DOM, then delay any further processing until the next tick. Therefore:
document.body.appendChild(element);
setTimeout(function() {
    resumeProgramFlowFromHere();
}, 0);
// nothing here
That's useful when you want to make sure there is a reflow/repaint before some time-consuming operation takes place (to give users visual feedback). Browsers already force a repaint in certain circumstances, but when they don't, this technique can be useful.
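For example (a rough sketch; the status element is mine, and doHeavyWork is a hypothetical long-running synchronous function), appending first and deferring the heavy work gives the browser a chance to paint the new DOM state before the thread is blocked:
var status = document.createElement('div');
status.textContent = 'Processing...';
document.body.appendChild(status);      // append first, so the message is in the DOM

setTimeout(function() {
    doHeavyWork();                      // hypothetical long, synchronous operation
    status.textContent = 'Done';        // by now the "Processing..." text has been painted
}, 0);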
You can find some more information here and here.
That's my interpretation of the question, but I find it confusing too, probably because it's not clear what they mean by events. And there are other debatable questions on that site, the weirdest being:
What is the concept of “functions as objects” and how does this affect variable scope?
That simply makes no sense to me. Okay, functions are objects in JavaScript, and scopes are also related to functions, but those are distinct topics. The fact that functions are objects has nothing to do with scope.
So my advice is, take those interview questions with a grain of salt.
There are situations where you run into bugs that require this technique to fix. It's not specific to appending an element, though; that's just one use case.
I've encountered this from time to time when doing particular kinds of animations, where setting multiple CSS3 properties at the same time doesn't trigger the browser to redraw correctly.
While I don't have code examples of the previous case, you can see where I use the technique on my site http://popped.at. Look in this file, http://www.popped.at/js/main.js, and search for "//the 0ms timeout is needed for IE9". In this case, there was an issue in IE9 where canvas wasn't being updated properly.
(The site's backend isn't functioning at the moment, which is why it's dark. I'm working on that.)

How does JQuery do animations?

jQuery is written in JavaScript. As someone who knows a little of each, I have to wonder how they wrote some of it. How do you animate HTML elements in pure JavaScript? Is it just repeatedly updating the CSS property that is being animated, using standard DOM manipulation, with callbacks to make it asynchronous? Or is it something more sophisticated?
jQuery animations are just updating CSS properties on a recurring timer (which makes it asynchronous).
They also implement a tweening algorithm which keeps track of whether the animation is ahead of schedule or behind schedule and they adjust the step size in the animation at each step to catch up or slow down as needed. This allows the animation to finish in the time specified regardless of how fast the host computer is. The downside is that slow or busy computers will show more choppy animations.
They also support easing functions which control the time/shape of the acceleration of the animation. Linear means a constant speed. Swing is a more typical, start slow, accelerate to max speed in the middle and then end slowly.
Because animations are asynchronous, jQuery also implements an animation queue so that you can specify multiple animations that you want to see execute one after the other. The 2nd animation starts when the first animation finishes and so on. jQuery also offers a completion function so if you want some of your own code to run when an animation is complete, you can specify a callback function that will get called when the animation completes. This allows you to carry out some operation when the animation is complete such as start the animation of some other object, hide an object, etc...
FYI, the jQuery javascript source code is fully available if you want more details. The core of the work is in a local function called doAnimation(), though much of the work is done in functions called step and update which can be found with the definition of jQuery.fx.prototype.
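To make the idea concrete, here is a simplified sketch of a timer-driven tween with time-based progress and a swing-style easing curve. This is not jQuery's actual doAnimation/step code; the 13ms interval, the pixel units, and the #box element are assumptions for illustration.
function animateProperty(element, prop, from, to, duration, done) {
    var start = Date.now();
    var timer = setInterval(function() {
        var elapsed = Date.now() - start;
        var progress = Math.min(elapsed / duration, 1);      // time-based, so slow timers "catch up"
        var eased = 0.5 - Math.cos(progress * Math.PI) / 2;  // swing: slow at the ends, fastest in the middle
        element.style[prop] = (from + (to - from) * eased) + 'px';
        if (progress === 1) {
            clearInterval(timer);
            if (done) done();                                // completion callback
        }
    }, 13);
}

// e.g. slide an element 200px to the right over one second, then log a message
animateProperty(document.getElementById('box'), 'left', 0, 200, 1000, function() {
    console.log('animation finished');
});
Because progress is computed from elapsed time rather than a step counter, a missed or delayed tick makes the next step larger instead of making the whole animation longer, which matches the catch-up behavior described above.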
That's it: a plain old setInterval and some DOM manipulation.
Of course, the code is a bit more complex than that.
Look at http://code.jquery.com/jquery-latest.js, near the end of the file. Search for jQuery.fx.prototype.
If you inspect an element during a jQuery animation, you will see that it is almost always your CSS that changes: the styles are assigned to the element by jQuery rather than by the HTML you wrote. Get Firebug for Firefox, if you don't have it, and inspect what's going on while the animations execute.
Without having read the code myself, from what I understand it is indeed using standard Javascript methods and properties to update the DOM elements and CSS styles at regular intervals ("ticks"), which it accomplishes using standard setInterval() and setTimeout() methods.

Understanding JavaScript timer thread issues

I'm starting on a JavaScript MMORPG that will actually work smoothly. So far, I have created a demo to prove that I can move characters around and have them chat with each other, as well as see each other move around live.
http://set.rentfox.net/
Now, JavaScript timers are something I have not used extensively, but from what I know (correct me if I'm wrong), having multiple setIntervals running at the same time doesn't really work well because it's all on a single thread.
Let's say I wanted to have 10 different people nuking fireballs at a monster by shifting sprite background positions with setInterval -- that animation would require 10 setIntervals repainting the DOM for sprite background-position shifts. Wouldn't that be very buggy?
I was wondering if there was a way around all this, perhaps using Canvas, so that animations can all happen concurrently without creating an event queue and I don't have to worry about timers.
Hope that makes sense, and please let me know if I need to clarify further.
The issue with multiple setIntervals is twofold. The first is as you indicate, since all Javascript on browsers is (currently) single-threaded, one timer's execution may hold up the next timer's execution. (Worker threads are coming, though; Firefox already has them, as does Safari 4 [and maybe others].) The second is that the timer happens at a set interval, but if your handler is still running when that interval expires, the second interval is completely skipped. E.g., the timer can interfere with itself.
That last part needs more explanation: Say you have a setInterval at 10ms (which is about the fastest you can reasonably expect any implementation to run it; many are clamped so they don't go faster than that). If your handler takes 13ms, the interval that should have happened 10ms after it began will be completely skipped.
I usually use setTimeout for this kind of thing. When my handler is triggered, I do my work and then schedule the next event at the end of the handler. Then (within the bounds of what you can be certain of), I know the next event will happen at that interval.
For what you're doing, it seems like a single "pulse" timer would be best, working through whatever it needs to do on the pulse. Whether that pulse timer uses setInterval or setTimeout is a judgment call based on what you're seeing with your actual code.
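A sketch of that self-rescheduling "pulse" pattern (the 30ms interval and the updateGame function are hypothetical placeholders): because the next timeout is only scheduled after the current work finishes, a slow tick delays the next one instead of being skipped or piling up.
function pulse() {
    updateGame();              // hypothetical: move sprites, advance all animations, etc.
    setTimeout(pulse, 30);     // schedule the next pulse only after this one has finished
}
setTimeout(pulse, 30);         // start the pulse loop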
+1 to T. J. Crowder, the answer was perfect. I strongly recommend learning to use Canvas over DOM nodes for game animation; the latter is slow and buggy, and will hang the browser in any non-trivial situation. OTOH, Canvas is much faster and can be hardware accelerated, and even has a 3D context if you need it.
