Displaying a progress bar for LABjs, LABjs callbacks - javascript

How can I keep track of which scripts have been loaded so far, so I can display a progress bar when using LABjs (http://labjs.com/)? wait() doesn't work because then it won't parallel-load the next resource. Basically I'm looking for some kind of non-blocking callback function I can tie into. Does this exist in LABjs?

wait() does not affect LABjs' parallel loading... it will always load as much as possible in parallel (as allowed by the browser). wait() only affects the execution of scripts: if it's inserted between two script() calls, it ensures that the second script will "wait" for the first script to finish executing before it executes.
No, there is no exposed API for when a script finishes loading, because browsers do not expose a consistent event for that (only for when a script executes: "onload", as confusing as that name is).
Now, you can build a progress meter using wait() calls in between each script, but it will tell you something slightly different than what you asked: what percentage of the scripts have executed, not what percentage have downloaded. Depending on your needs, that might be quite acceptable.
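For example, here is a minimal sketch of that kind of wait()-based progress meter. The script URLs, the #progress element, and updateProgress() are assumptions for illustration, not part of LABjs:

var scripts = ["a.js", "b.js", "c.js"]; // hypothetical script URLs
var executed = 0;

// hypothetical: render executed/scripts.length as a percentage bar
function updateProgress() {
    var pct = Math.round((executed / scripts.length) * 100);
    document.getElementById("progress").style.width = pct + "%";
}

// build the chain: after each script executes, bump the counter
var chain = $LAB;
scripts.forEach(function (src) {
    chain = chain.script(src).wait(function () {
        executed++;
        updateProgress();
    });
});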

Related

What does synchronous vs asynchronous loading mean?

Reading from this site, I understand that using CommonJS means that when the browser finishes downloading your files, it will have to load them one by one because they depend on each other. But with AMD, it can load multiple files at the same time, so that even if file a depends on file b, part of file a can be executed before file b has finished?
CommonJS Modules: The dominant implementation of this standard is in Node.js (Node.js modules have a few features that go beyond CommonJS). Characteristics: compact syntax; designed for synchronous loading and servers.
Asynchronous Module Definition (AMD): The most popular implementation of this standard is RequireJS. Characteristics: slightly more complicated syntax, enabling AMD to work without eval() (or a compilation step); designed for asynchronous loading and browsers.
Synchronous programming means executing code line by line. Loading is the same: whatever you are loading loads one item at a time, in order.
Real-world example: you are standing in a queue at a cinema for a movie ticket.
Asynchronous is like many people ordering in a restaurant. You order food and other people order food; they don't need to wait for your order to finish. Everyone can order, but you don't know when each order will arrive. The same goes for loading: you can load multiple things at the same time or at different intervals, but there is no guarantee they will arrive in that order.
I hope the explanation is good enough.
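To put the analogy in code, here is a minimal sketch contrasting the two, assuming a hypothetical loadScript() helper that resolves a promise once a script has loaded:

// hypothetical helper: loads a script and resolves when it has executed
function loadScript(src) {
    return new Promise(function (resolve, reject) {
        var s = document.createElement("script");
        s.src = src;
        s.onload = resolve;
        s.onerror = reject;
        document.body.appendChild(s);
    });
}

// sequential (queue at the cinema): b.js does not even start until a.js is done
loadScript("a.js").then(function () {
    return loadScript("b.js");
});

// asynchronous (restaurant): both start immediately; completion order is not guaranteed
Promise.all([loadScript("a.js"), loadScript("b.js")]).then(function () {
    console.log("both loaded");
});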
The syntax for loading modules with CommonJS is as follows:
var MyModule = require("MyModule");
As you can see, this blocks the thread until the module has been downloaded, from either your filesystem or the web. This is called synchronous loading. It is impossible to do this in a normal web browser environment without hurting the user experience, since we cannot block the thread: the browser uses it to update the page.
With RequireJS, it's done as such:
// In your module
define(["dependencies", ...], function () {
    return MyModule;
});

// In your web page
require(["dependencies", ...], function (MyModules, ...) {
    // do stuff here
});
With this model, our web page does not depend on the timing of when the module should be loaded. We can load our scripts in parallel while the page is still being loaded. This is called asynchronous loading. Once the scripts are loaded, they will call define which notifies RequireJS that the scripts are indeed loaded and executed. RequireJS will then call your main function and pass in the initialized modules.

Ajax calls increase the website load time

Hi, I am working on a web app. In that app there is code like the below:
$(document).ready(function() {
    getNewLetterContent();
    getVideoComplition();
    getRssFeed();
    intializeEvents();
});
Each of these functions makes an ajax call to a third-party API. Most of these calls take a long time to get a response, which ultimately makes the app load slowly. The responses we get back are not very important for the initial view of the app (above-the-fold content). I have searched the internet for a solution and replaced document.ready with window.load, but that doesn't make much difference. Can you guys please help me improve the performance of the app alongside these calls?
You should check the code in all of these functions. JavaScript engines use an event loop: an event-driven system that notifies you about events and invokes the needed callbacks. That means ajax calls per se shouldn't slow down your webpage, because issuing them is just a small piece of JavaScript (a few functions to invoke and an XHR/fetch request to send).
There is a chance that these functions contain some heavy, blocking code, and that is why the page is really slow (this might be the case if they come from old third-party libraries).
Also, there are a few possibilities even with fully asynchronous code. First of all, browsers cap the number of concurrent requests, and if you exceed that limit heavily the page will be slow (I had this problem and added explicit waiting through promises, as in the sketch below).
Another possibility is that some function loads data and then starts heavy manipulation of the page (changing the DOM, forcing style recalculation, adding animations, etc.). Each case should be investigated, but I recommend starting with the Network tab in the Chrome developer tools.
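As a rough illustration of that explicit waiting (not from the original answer), here is a minimal sketch that runs a list of request-issuing functions with a fixed concurrency limit; the fetchTasks URLs and the limit of 2 are assumptions:

// Hypothetical list of functions that each return a promise for an ajax call
var fetchTasks = [
    function () { return fetch("/api/newsletter"); },
    function () { return fetch("/api/video-completion"); },
    function () { return fetch("/api/rss"); }
];

// Run the tasks with at most `limit` requests in flight at a time
function runWithLimit(tasks, limit) {
    var index = 0;
    function next() {
        if (index >= tasks.length) return Promise.resolve();
        var task = tasks[index++];
        return task().then(next); // when one finishes, pick up the next
    }
    var workers = [];
    for (var i = 0; i < limit && i < tasks.length; i++) {
        workers.push(next());
    }
    return Promise.all(workers);
}

runWithLimit(fetchTasks, 2).then(function () {
    console.log("all requests finished");
});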
The simplest solution is to add a timeout inside the ready handler; this lets the page finish loading without depending on these functions.
$(document).ready(function() {
    setTimeout(function() {
        getNewLetterContent();
        getVideoComplition();
        getRssFeed();
        intializeEvents();
    }, 8000);
});

Is there a provision in LABJS for a callback function if loading times out?

I am asynchronously loading scripts through LABjs and have a chain of dependent scripts. Now if one of the scripts in the chain breaks (in the sense that it cannot be downloaded, or the connection times out), I believe that the remaining scripts in the dependency chain will not be executed. In such an event, is it possible to provide a custom callback function to take appropriate measures if a particular script fails to load?
If this is not possible with LabJS, is it possible with any other Asynchronous script loader ?
Here's an example showing how to wrap setTimeout() timeouts around LABjs code... in this case it provides a fallback mechanism where it tries to load jquery from a CDN, then if the timeout passes, it aborts that and tries to load jquery from a local file instead.
https://gist.github.com/670840
According to getify, who happens to be sitting about 20 feet away from me, there's not a way to handle timeouts like that in general, mostly because a timeout is not an explicit, "positive" event. (In the specific case of how the library handles the dependency chain in such cases, I'll let the author himself clarify.)
What you can do is use your own watchdog to wait as long as you feel is appropriate. Just run an interval timer, checking for some tell-tale sign that your script has made it onto the page, and if after some number of iterations you fail to see it you can fall back on an alternative (different script host, whatever).
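A minimal sketch of such a watchdog, assuming you have already kicked off $LAB.script('jquery-from-cdn.js') elsewhere and want to fall back to a local copy; the file names, the helper name, and the 20 x 250ms budget are assumptions:

var attempts = 0;
var maxAttempts = 20; // ~5 seconds at 250ms per check

var watchdog = setInterval(function () {
    if (window.jQuery) {
        // tell-tale sign appeared: the script made it onto the page
        clearInterval(watchdog);
        load_dependent_scripts();
    } else if (++attempts >= maxAttempts) {
        // give up on the CDN and fall back to a local copy
        clearInterval(watchdog);
        $LAB.script("local-jquery.js").wait(load_dependent_scripts);
    }
}, 250);

function load_dependent_scripts() {
    $LAB.script("other-js.js");
}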
What about this? I have not tested this:
$LAB.script('jquery-from-cdn.js').wait(function(){
    if (!window.jQuery) {
        $LAB.script('local-jquery.js').wait(load_scripts);
    } else {
        load_scripts();
    }
});

function load_scripts() {
    $LAB.script('other-js.js');
}

Combining JavaScript files as recommended by YSlow - optimal size?

We have about 30 external JavaScripts on our page. Each is already minified.
To reduce HTTP requests and page load time, we are considering combining them to a single file.
This was recommended by the YSlow tool.
Is this wise, or is it better to combine them into, say, two files with 15 scripts each?
Is there an optimal size for the combined JavaScript files?
The fewer the HTTP requests, the better. If you want your page to work on Mobile devices as well, then keep the total size of each script node under 1MB (See http://www.yuiblog.com/blog/2010/07/12/mobile-browser-cache-limits-revisited/)
You might also want to check whether any of your scripts can be deferred to after onload fires. You could then make two combined files, one that's loaded in the page, and the other that's loaded after page load.
The main reason we ask people to reduce HTTP requests is because you pay the price of latency on each request. This is a problem if those requests are run sequentially. If you can make multiple requests in parallel, then this is a much better use of your bandwidth[*], and you pay the price of latency only once. Loading scripts asynchronously is a good way to do this.
To load a script after page load, do something like this:
// This function should be attached to your onload handler.
// It assumes a variable named script_url exists. You could easily
// extend it to use an array of scripts or figure it out some other
// way (see note below).
function lazy_load() {
    setTimeout(function() {
        var s = document.createElement("script");
        s.src = script_url;
        document.body.appendChild(s);
    }, 50);
}
This is called from onload, and sets a timeout for 50ms later at which point it will add a new script node to the document's body. The script will start downloading after that. Now since javascript is single threaded, the timeout will only fire after onload has completed even if onload takes more than 50ms to complete.
Now instead of having a global variable named script_url, you could have script nodes at the top of your document but with unrecognised content-types like this:
<script type="text/x-javascript-deferred" src="..."></script>
Then in your function, you just need to get all script nodes with this content type and load their srcs.
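A minimal sketch of that loop (the loading logic is the same as lazy_load above; the function name and selector approach are just one way to do it):

function lazy_load_deferred() {
    setTimeout(function() {
        // find every placeholder script node with the unrecognised content-type
        var deferred = document.querySelectorAll('script[type="text/x-javascript-deferred"]');
        for (var i = 0; i < deferred.length; i++) {
            var s = document.createElement("script");
            s.src = deferred[i].src;
            document.body.appendChild(s);
        }
    }, 50);
}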
Note that some browsers also support a defer attribute for script nodes that will do all this automatically.
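In browsers that support it, that looks like this (combined.js is a placeholder file name):
<script defer src="combined.js"></script>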
[*] Due to TCP window size limits, you won't actually use all the bandwidth that you have available on a single download. Multiple parallel downloads can make better use of your bandwidth.
The browser has to interpret just as much javascript when it is all combined into one monolithic file as it does when the files are broken up, so I would say it doesn't matter at all for interpretation and execution performance.
For network performance, the fewer HTTP requests, the better.

Any difference between lazy loading Javascript files vs. placing just before </body>

Looked around, couldn't find this specific question discussed. Pretty sure the difference is negligible, just curious as to your thoughts.
Scenario: All Javascript that doesn't need to be loaded before page render has been placed just before the closing </body> tag. Are there any benefits or detriments to lazy loading these instead through some Javascript code in the head that executes when the DOM load/ready event is fired? Let's say that this only concerns downloading one entire .js file full of functions and not lazy loading several individual files as needed upon usage.
Hope that's clear, thanks.
There is a big difference, in my opinion.
When you inline the JS at the bottom of the <body> tag, you're forcing the page to load those <script>s synchronously (must happen now) and sequentially (in a row), so you're slowing down the page a bit, as you must wait for those HTTP calls to finish and the JS engine to interpret your scripts. If you're putting lots of JS stacked up together at the bottom of the page, you could be wasting the user's time with network queueing (in older browsers only 2 connections per host at a time), as the scripts may depend on each other, so they must be downloaded in order.
If you want your DOM to be ready faster (usually what most code waits on before doing any event handling and animation), you must reduce the size of the scripts you need up front to as little as possible, as well as parallelize them.
For instance, YUI3 has a small dependency resolution and downloading script that you must load sequentially in the page (see YUI3's seed.js). After that, you go through the page and gather the dependencies and make 1 asynchronous and pipelined call to their CDN (or your own servers) to get a big ball of JS. After the JS ball is returned, your scripts execute the callbacks you've supplied. Here's the general pattern:
<script src="seed.js"></script>
<script>
    YUI().use('module', function(Y) {
        // done when the ball returns and is interpreted
    });
</script>
I'm not a particularly big fan of putting your scripts into 1 big ball (because if 1 dependency changes, you must download and interpret the whole thing over again!), but I am a fan of pipe-lining (combining scripts) and the event-based model.
When you do allow for asynchronous, event-based loading, you get better performance, but perhaps not perceived performance (though this can be counteracted).
For instance, parts of the page may not load for a second or two, and hence look different (if you're using JS to affect the page style, which I don't advise) or not be ready for user interaction until you (or those hosting your site) return your scripts.
Additionally, you must do some work to ensure your <script>s have the right dependencies to be able to execute properly. For instance, if you don't have jQuery or Prototype, you can't successfully call:
<script>
    $(function () {
        /* do something */
    });
</script>
or
<script>
    document.observe('dom:loaded', function () {
        /* do something */
    });
</script>
as the interpreter will say something like "Variable $ undefined". This can happen even if you've added both <script>s to the DOM at the same time, as I'd bet jQuery or Prototype are bigger than your application's JS (so the request for that data takes longer). Either way, without some type of limiting, you're leaving this up to chance.
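One way to impose that ordering when adding <script>s dynamically is to chain on the first script's onload event, so the dependent code only runs once the library is present. A minimal sketch, with hypothetical file names and helper:

function load_in_order(library_url, app_url) {
    var lib = document.createElement("script");
    lib.src = library_url; // e.g. jQuery or Prototype
    lib.onload = function () {
        // the library has executed, so $ (or document.observe) now exists
        var app = document.createElement("script");
        app.src = app_url;
        document.body.appendChild(app);
    };
    document.body.appendChild(lib);
}

// usage: load_in_order("jquery.js", "app.js");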
So, the choice is really up to you. If you can properly segment your dependencies - i.e. put the stuff you need up front and lazily load the other stuff later, it'll result in faster overall time until you hit the DOM being ready.
However, if you use a monolithic library like jQuery or the user expects to be able to see something involving JS animation or style right away, inlining might be better for you.
In terms of usability, you definitely shouldn't do this with anything the user expects a quick response from, like having a button do double duty as the load trigger in addition to its other function.
OTOH, replacing pagination with continuously loading the page as the user scrolls is a very good idea. I do find it a distraction when the load trigger is towards the end of the page; it's better to put it 1/2 to 3/4 of the way down.
