Is AMD (lazy loading) really efficient? - javascript

I have been developing a single page application which has become really huge now. I started off with RequireJS and AngularJS, but there are so many components that loading a single page makes around 40-50 requests to the server (including template files).
Even if the data is cached for all future requests, sending 40-50 requests for the first attempt turns out to be quite expensive and awfully slow on slower internet connections.
My understanding is that if we concatenate everything into two script files - Vendors.js (which doesn't change very often) and Private.js (which changes with every release) - the page load times would be much faster. If this is true, then why would someone even use RequireJS at all?

You can only compare efficiency, not make absolute claims about it. So you will need to ask
Is sending 40-50 requests more efficient than 1 request for a concatenated file?
No, definitely not. While you might get small advantages because of parallelisation (which is unlikely anyway), the overhead is just too much.
Is not requiring an unneeded file more efficient than always loading the file?
Yes, it obviously is.
And that's what lazy loading is all about: it requests files only when they are needed, instead of prematurely downloading everything.
So, for a fast app you need to determine which resources are always (or most often) needed, and concatenate them into one file. The other modules that are seldom needed can go on their own. Taking the caching of changing resources into consideration, like you did, allows for further optimisation.
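For example, with RequireJS a seldom-needed module can be requested only at the point where it is actually used; a minimal sketch (the module names here are made up):

    // app/main.js - the always-needed modules live in the concatenated file.
    define(['app/core', 'app/router'], function (core, router) {
        function openReports() {
            // 'app/reports' is only fetched from the server the first time
            // this function runs; later calls reuse the already-loaded module.
            require(['app/reports'], function (reports) {
                reports.show();
            });
        }
        return { openReports: openReports };
    });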
why would someone even use AMD at all?
Because it embraces modularisation. Also, it enables very flexible deployment strategies - from serving every module independently in development to using optimisers for production - without changing your code files.

Related

How much JavaScript can actually be loaded into memory by a browser?

I'm working on a BIG project, written in RoR with jQuery frontend. I'm adding AngularJS which has intelligent dependency injection, but what I want to know is how much javascript can I put on a page before the page becomes noticeably slow? What are the specific limits of each browser?
Assuming that my code is well factored and all operations run in constant time, how many functions, objects, and other things can I allocate in JavaScript before the browser hits its limit? There must be one, because any computer has a finite amount of RAM and disk space (although disk space would be an ambitious limit to hit with JavaScript).
I've looked online, but I've only seen questions about people asking how many assets they can load, i.e. how many megabytes they can load, etc. I want to know if there is an actual computational limit set by browsers and how they differ.
-- EDIT --
For the highly critical, I guess a better question is
How does a modern web browser determine the limit for the resources it allocates to a page? How much memory is a webpage allowed to use? How much disk space can a page use?
Obviously I use AJAX, and I know a decent amount about render optimization. It's not a question of how I can make my page faster, but rather: what is my resource limitation?
Although technically it sounds like a monumental task to reach the limits of a client machine, it's actually very easy to hit them with an accidental loop. Everyone has done it at least once!
It's easy enough to test: write a JS loop that uses huge amounts of memory and you'll find the memory usage of your PC will peg out, and will indeed eat into your virtual memory too, before the browser falls over.
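For example, a runaway loop like this (don't run it in a tab you care about) will exhaust memory long before any browser-imposed script limit becomes relevant:

    // Keeps allocating ~1 MB strings until the tab runs out of memory
    // and the browser freezes or kills the page.
    var hog = [];
    while (true) {
        hog.push(new Array(1000000).join('x'));
    }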
I'd say, from experience, that even if you don't get anywhere near the technological limits you're talking about, the patience of your visitors/users will run out before the resources do.
Maybe it's worth looking at AJAX solutions in order to load relevant parts of the page at a time if loading times turn out to be an issue.
Generally, you want to minify and package your JavaScript to reduce initial page requests as much as possible. Your web application should mainly consist of one JavaScript file when you're all done, but it's not always possible, as certain plugins might not be compatible with your dependency management framework.
I would argue that a single page application that starts to exceed 3 MB or 60 requests on an initial page load (with the cache turned off) is starting to get too big and unruly. You'll want to start looking for ways of distilling copy-and-pasted code down into extendable, reusable objects, and possibly dividing the one big application into a collection of smaller apps that all share the same library of models, collections, and views. If using RequireJS (what I use), you'll end up with different builds that will need to be compiled before launching any code if any of the dependencies contained within that build have changed.
Now, as for the 'speed' of your application, look at tutorials on render optimization for your chosen framework. Tricks like appending models' views one by one as they are added to the collection result in a faster-rendering page than trying to attach a huge blob of HTML all at once. Be careful of memory leaks: ensure you're closing references to your views when switching between the pages of your single page application. Create an 'onClose' method in your views that ensures all subviews and attached data references are destroyed when the view itself is closed, and garbage collection will do the rest.

Use a global variable for storing your collections and models, something like window.app.data = {}. Use a global view controller for navigating between the main sections of your application, which will help you close out view chains effectively. Use lazy loading wherever possible. Use 'base' models, collections, and views and extend them; doing this will give you more options later on for controlling the global behavior of these things.
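As a rough sketch of that 'onClose' pattern (assuming Backbone-style views; BaseView, PageView and subViews are illustrative names, not part of any library):

    // A 'base' view that knows how to tear itself down.
    var BaseView = Backbone.View.extend({
        close: function () {
            if (this.onClose) { this.onClose(); }  // let the view release subviews and data
            this.remove();                         // detach the view's element from the DOM
            this.off();                            // drop event listeners bound to the view
        }
    });

    // A concrete view cleans up everything it created.
    var PageView = BaseView.extend({
        onClose: function () {
            (this.subViews || []).forEach(function (view) { view.close(); });
            this.subViews = [];
            this.model = null;                     // drop data references so GC can collect them
        }
    });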
This is all stuff you sort of learn from experience over time, but if enough care is taken, it's possible to create a well-running single page application on your first try. You're more likely to discover problems with the application design as you go though, so be prepared to refactor your code as these problems come up.
It depends much more on the computer than the browser - a computer with a slow CPU and limited amount of RAM will slow down much sooner than a beefy desktop.
A good proxy for this might be to test the site on a few different smartphones.
Also, slower devices sometimes run outdated and/or less feature-rich browsers, so you could do some basic user-agent sniffing or feature detection on the client and fall back to plain server-rendered HTML.
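A minimal sketch of that kind of client-side check (the feature tests and the fallback URL are only examples):

    // If the browser lacks features the app relies on, send the user to a
    // plain server-rendered version instead of shipping the full JS app.
    if (!window.JSON || !document.querySelector || !window.addEventListener) {
        window.location.href = '/basic'; // hypothetical plain-HTML fallback page
    }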

When using require.js and creating a single file using r.js, do we still get the benefit of load time?

When using require.js and creating a single file using r.js, do we still get the benefit of load time?
In other words, the final single file that r.js produces is over 2 MB in size... however, in my code I take full advantage of require.js, i.e. I only load required modules when I need them in code.
So the question is, does require.js have to read the whole 2 MB file before it can start working? I mean, how would it be able to read only a portion of the 2 MB file... :/
and so producing a single file out of r.js may defeat the purpose of quick load times...
no?
Thanks,
Sean.
Why yes, it is possible to misuse r.js and deploy your application in a way that harms performance. There are three broad scenarios for deployment:
Each module is its own file. This is what happens when r.js is not used.
A single bundle contains all modules of the application. Simple applications often use this scenario.
Multiple modules are grouped in multiple bundles. A deployment scenario could involve putting modules A, B, C in file A.js and modules C, D, E in file C.js. More complex applications would conceivably use this scenario. This means using the modules option in r.js' build config.
Just like any other optimization task, using the last option requires reflecting on the architecture of the application to be deployed, and possibly using statistical methods to determine how best to split the application, or perhaps selecting the second option instead.
For instance, one of the applications I'm working on and that uses RequireJS is an editor with multiple modes (similar to how Emacs has modes, or IDEs that support multiple programming languages change how they behave depending on the language being edited). Since a user is most certainly not going to use all modes in a single editing session, it makes complete sense to optimize the application into: a) a core bundle that contains the modules that are always needed and b) one bundle per editing mode, containing all the modules that the mode defines. In this way, a typical usage scenario would boil down to downloading two files: the core bundle, plus one mode bundle.
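In r.js terms, that kind of split looks roughly like this (a sketch; 'core' and the mode names are placeholders for real module ids):

    // build.js - run with: node r.js -o build.js
    ({
        baseUrl: 'src',
        dir: 'build',
        modules: [
            // a) the core bundle that is always downloaded
            { name: 'core' },
            // b) one bundle per editing mode, excluding everything
            //    that is already included in the core bundle
            { name: 'modes/python',   exclude: ['core'] },
            { name: 'modes/markdown', exclude: ['core'] }
        ]
    })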
Similarly, an application which is internationalized might want to deploy the internationalization data in separate bundles, so that someone needing Hindi does not also download the data for 100 other languages with it.
Now to address some specific questions in the question:
So the question is, does require js have to read the whole 2MB file before it can start working?
Yes, RequireJS has to read and execute the whole file. While in theory there may be a way to section off a bundle and execute only the define calls required, I'm not convinced that in practice this could be done reliably. And in the end, it would not be surprising if this were a misoptimization, making performance worse.
and so producing a single file out of r.js may defeat the purpose of quick load times... no?
Yes, producing a single file may not be optimal. The solution is to do what I've explained above.
In my opinion, the answer is yes! You are using one HTTP request in that case, which means you are minimizing the number of round trips (RTTs).
Here is a summary from that article:
Round-trip time (RTT) is the time it takes for a client to send a request and the server to send a response over the network, not including the time required for data transfer. That is, it includes the back-and-forth time on the wire, but excludes the time to fully download the transferred bytes (and is therefore unrelated to bandwidth). For example, for a browser to initiate a first-time connection with a web server, it must incur a minimum of 3 RTTs: 1 RTT for DNS name resolution; 1 RTT for TCP connection setup; and 1 RTT for the HTTP request and first byte of the HTTP response. Many web pages require dozens of RTTs.

RTTs vary from less than one millisecond on a LAN to over one second in the worst cases, e.g. a modem connection to a service hosted on a different continent from the user. For small download file sizes, such as a search results page, RTT is the major contributing factor to latency on "fast" (broadband) connections. Therefore, an important strategy for speeding up web page performance is to minimize the number of round trips that need to be made. Since the majority of those round trips consist of HTTP requests and responses, it's especially important to minimize the number of requests that the client needs to make and to parallelize them as much as possible.
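As a rough illustration of why this matters for the original question (assuming a 100 ms RTT and a browser that allows 6 parallel connections per host): 48 requests need at least 48 / 6 = 8 sequential round trips, i.e. roughly 800 ms of pure latency before any transfer time, whereas a single concatenated file costs roughly one RTT of about 100 ms.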

Does separating your Javascripts into separate files have performance implications?

I'm writing a PhoneGap(Cordova) app with jQuery and jQuery Mobile.
I'm a noob, so for simplicity I like to keep my scripts in separate .js files, with functions divided among them roughly by purpose.
(1) Are there performance implications to this method?
(2) Are there programmatic effects that this has that I'm unaware of?
(3) Since it's a Cordova app, all the files will be pre-packaged, but does this turn into a better/worse idea when you're talking about a classically-accessed website?
Thanks!
EDIT
Since asking this question, I found this blog post: http://css-tricks.com/css-sprites/. It addresses the issue of multiple HTTP requests and the associated performance issues, albeit in the context of images.
Yes:
It takes longer to load several small files than one large file.
The scripts are executed in order, so with several files you have a load - run - load - run cycle, where each file can't be loaded until the previous one has run.
The scripts run in order, so if one depends on another, they have to be in the right order.
If it's loaded as one package, the network transfer time would be the same, but the process of loading the scripts into the page is still somewhat more complicated with several scripts.
Normally it does not affect performance much, but it reduces the number of requests made by the client browser.
There are no programmatic effects of keeping JS in separate files.
For a classic website, it's good to keep it the way you are using it.
As far as I understand, it is a design issue, and you need to keep a balance between splitting up and combining your scripts.
It will have no noticeable impact when executed by the client machine. However, having lots of separate JS files can have a noticeable impact on perceived load time by the user. This is one aspect of something called "page weight".
Each file requires a separate request to look up and then download the file. For small or low traffic sites, this will not matter much. But as usage goes up, it can become very noticeable.

Javascript errors / bad practice

Is it bad practice to have a single javascript file that gets loaded across all pages even if certain functions are only needed on certain pages? Or should the files be split up according to functionality on a given page and loaded only by pages that need them?
According to YSlow, fewer files are better, but try to keep each file under 25 KB. Also make sure you minify or otherwise reduce the size of the JS (and CSS) files. If possible, turn on gzip for JS on the webserver and set a far-future Expires header.
See here for the Yahoo Developer performance best practices.
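For example, one way to apply the gzip and far-future caching advice with a Node/Express server (just a sketch; the directory name and the one-year max-age are arbitrary choices):

    // Serve the minified JS/CSS gzipped and with far-future caching headers.
    var express = require('express');
    var compression = require('compression');

    var app = express();
    app.use(compression());                                  // gzip responses on the fly
    app.use(express.static('public', { maxAge: '365d' }));   // far-future Cache-Control
                                                              // (the modern equivalent of a
                                                              //  far-future Expires header)
    app.listen(8080);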
If this file is really large, it could impact certain users' perceived performance (i.e. download and render times). IMHO you should split it up into reasonable groups of functions, with each group having similar functions (such that pages only reference the files they need).
It depends on the size and complexity of the unused functions.
The JavaScript parser anyway only stores the location and the signature of each function; as far as I know, a function body is only parsed when it is executed.
If traffic is a problem for you, rather include only those you need...
Regards
Since the JS files are cached once they are downloaded, and the JS parser shows no noticeable performance difference between a big JS file (not a HUGE one ;)) and a small JS file, you should go with the single-file approach.
Also, it is known that multiple JS files reduce performance.
You're best off with a single JS file, as browsers will cache it after the first request for it (unless they've turned that off, in which case they deserve what they get). Another thing that will vastly, vastly increase your perceived performance on page load is turning on gzip compression in the web server - most JS files compress very well.
I would recommend using one big file: for each file the browser launches a web request, and most browsers (I'm not quite sure what the limit is in the newest versions of the well-known browsers) only launch a few concurrent web requests. The browser will wait until those files have been downloaded before launching the next web requests.
The one big file will be cached and other pages will load faster.
As @Frozenskys mentions, YSlow states that fewer files are better; one of the major performance enhancements proposed by the Yahoo team is to minimize the number of HTTP requests.
Of course, if you have a HUGE JavaScript file that literally takes seconds to download, it's better to split it up so that the user doesn't have to wait seconds before the page loads.
A single file means a single download; as this article explains, most browsers will only allow a limited number of parallel requests to a single domain. Although your single file will be bigger than multiple small ones, as the other answers have pointed out:
The file will be cached
Techniques like minification and server-side gzip compression will help to reduce the download time.
You can also include the script at the end of the page to improve the perceived load time.

Javascript and CSS parsing performance

I am trying to improve the performance of a web application. I have metrics that I can use to optimize the time taken to return the main HTML page, but I'm concerned about the external CSS and JavaScript files that are included from these HTML pages. These are served statically, with HTTP Expires headers, but are shared between all the pages of the application.
I'm concerned that the browser has to parse these CSS and JavaScript files for each page that is displayed and so having all the CSS and JavaScript for the site shared into common files will negatively affect performance. Should I be trying to split out these files so I link from each page to only the CSS and JavaScript needed for that page, or would I get little return for my efforts?
Are there any tools that could help me generate metrics for this?
Context: While it's true that HTTP overhead is more significant than parsing JS and CSS, ignoring the impact of parsing on browser performance (even if you have less than a meg of JS) is a good way to get yourself in trouble.
YSlow, Fiddler, and Firebug are not the best tools to monitor parsing speed. Unless they've been updated very recently, they don't separate the amount of time required to fetch JS over HTTP or load from cache versus the amount of time spent parsing the actual JS payload.
Parse speed is slightly difficult to measure, but we've chased this metric a number of times on projects I've worked on, and the impact on page loads was significant even with ~500k of JS. Obviously the older browsers suffer the most... hopefully Chrome, TraceMonkey and the like help resolve this situation.
Suggestion: Depending on the type of traffic you have at your site, it may be well worth your while to split up your JS payload so that large chunks of JS that will never be used on the most popular pages are never sent down to the client. Of course, this means that when a new client hits a page where this JS is needed, you'll have to send it over the wire.
However, it may well be the case that, say, 50% of your JS is never needed by 80% of your users due to your traffic patterns. If this is so, you should definitely use smaller, packaged JS payloads only on pages where the JS is necessary. Otherwise 80% of your users will suffer unnecessary JS parsing penalties on every single pageload.
Bottom Line: It's difficult to find the proper balance of JS caching and smaller, packaged payloads, but depending on your traffic pattern it's definitely well worth considering a technique other than smashing all of your JS into every single pageload.
I believe YSlow does, but be aware that unless all requests are over a loopback connection you shouldn't worry. The HTTP overhead of split-up files will impact performance far more than parsing, unless your CSS/JS files exceed several megabytes.
To add to kamen's great answer, I would say that on some browsers, the parse time for larger js resources grows non-linearly. That is, a 1 meg JS file will take longer to parse than two 500k files. So if a lot of your traffic is people who are likely to have your JS cached (return visitors), and all your JS files are cache-stable, it may make sense to break them up even if you end up loading all of them on every pageview.
