Some time ago I was reading an article about a library someone had built that can do the following:
- lazy loading of JS
- resolving dependencies between JS files (typically encountered when trying to "include" one js file from another)
- including files only once, even though they may be specified multiple times, regardless of how they are referenced (either directly as a file or as one of the dependencies)
I forgot to bookmark it, what a mistake. Can someone point me to something that can do the above? I know the Dojo and YUI libraries have something like this, but I am looking for something I can use with jQuery.
I am probably looking for one more feature as well. My site has ASP.NET user controls (reusable server-side code snippets) which have some JS. Some of that JS fires right away while the page is loading, which gives a bad user experience. The Yahoo performance guidelines say that JS should go at the bottom of the page, but that is not possible in my case, as it would require me to separate the JS and the corresponding server-side control into different files, and maintenance would become difficult. I could certainly put a jQuery document.ready() in each user control's JS to make sure it fires only after the DOM has loaded, but I am looking for a simpler solution.
Is there any way I could say "begin executing any JS only after the DOM has loaded" globally, rather than writing "document.ready" in every user control?
Microsoft Research has proposed a tool called Doloto. It takes care of the rewriting and function splitting, and makes on-demand JS loading possible.
From the site:
Doloto is a system that analyzes application workloads and automatically performs code splitting of existing large Web 2.0 applications. After being processed by Doloto, an application will initially transfer only the portion of code necessary for application initialization. The rest of the application's code is replaced by short stubs -- their actual function code is transferred lazily in the background or, at the latest, on-demand on first execution.
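The stub idea can be pictured roughly like this (a hand-written sketch of the general technique, not Doloto's actual output; the file name is made up):

function loadScriptSync(url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, false);        // block until the code arrives
    xhr.send(null);
    window.eval(xhr.responseText);      // indirect eval, so the code runs in global scope
}

// renderChart starts life as a short stub; heavy-feature.js is assumed to
// assign window.renderChart = the real implementation when it runs.
window.renderChart = function () {
    loadScriptSync('heavy-feature.js');
    return window.renderChart.apply(this, arguments);   // call the real version
};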
OK, I guess I found the links:
[>10 years ago; now they are all broken]
http://ajaxian.com/archives/usingjs-manage-javascript-dependencies
http://www.jondavis.net/techblog/post/2008/04/Javascript-Introducing-Using-%28js%29.aspx
I also found one more, for folks who are interested in lazy loading / dynamic JS dependency resolution:
http://jsload.net/
About the lazy-loading scripts thing: most libraries just add a <script> element to the HTML pointing to the JS file to be "included" (asynchronously), while others, like Dojo, fetch their dependencies using an XMLHttpRequest and then eval the contents, which makes them work synchronously.
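Roughly, the two approaches look like this (just a sketch; the file name is hypothetical):

// 1) Asynchronous: append a <script> element and let the browser fetch it.
function loadAsync(url, onload) {
    var s = document.createElement('script');
    s.src = url;
    s.onload = onload;                  // fires after the file has executed
    document.getElementsByTagName('head')[0].appendChild(s);
}

// 2) Synchronous (the XHR + eval style described above):
function loadSync(url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, false);        // false = block until the response arrives
    xhr.send(null);
    window.eval(xhr.responseText);      // indirect eval, so the code runs globally
}

loadAsync('widgets.js', function () { /* widgets.js is available here */ });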
I've used the YUI Loader, which is pretty simple to use, and you don't need the whole library to get it working. There are other libraries that give you just this specific functionality, but I think YUI's is the safe choice.
About your last question, I don't think there's anything like that. You would have to do it yourself, but it would be similar to using document.ready.
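If you did roll your own, it would amount to something like this thin wrapper (just a sketch with made-up names), which is really document.ready in disguise:

window.deferUntilReady = window.deferUntilReady || [];

// Each user control emits one short line instead of its own ready handler:
//   deferUntilReady.push(function () { /* control-specific setup */ });

jQuery(function () {
    for (var i = 0; i < window.deferUntilReady.length; i++) {
        window.deferUntilReady[i]();
    }
});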
I did a similar thing in my framework:
I created an include_js(file) function that includes a JS file only if it hasn't already been included, reading and executing it with a synchronous AJAX call.
Simply put that code at the top of the page that needs the dependencies and you're done!
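A minimal sketch of that approach might look like this (this is not the author's actual framework code; the file names are made up):

var includedFiles = {};

function include_js(file) {
    if (includedFiles[file]) return;    // already included once, do nothing
    includedFiles[file] = true;
    var xhr = new XMLHttpRequest();
    xhr.open('GET', file, false);       // synchronous: the file has run by the
    xhr.send(null);                     // time this call returns
    window.eval(xhr.responseText);      // indirect eval -> executes in global scope
}

include_js('lib/validation.js');        // loads and runs the file
include_js('lib/validation.js');        // second call is a no-op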
Related
I'm used to working with Java, in which (generally speaking) each class is defined in its own file. I like this. I think it makes code easier to work with and manage.
I'm beginning to work with javascript and I'm finding myself wanting to use separate files for different scripts I'm using on a single page. I'm currently limiting myself to only a couple .js files because I'm afraid that if I use more than this I will be inconvenienced in the future by something I'm currently failing to foresee. Perhaps circular references?
In short, is it bad practice to break my scripts up into multiple files?
There are lots of correct answers here, depending on the size of your application, whom you're delivering it to (by whom, I mean the intended devices, et cetera), and how much work you can do server-side to ensure you're targeting the correct devices (which is still a long way from 100% viable for most non-enterprise mortals).
When building your application, "classes" can reside in their own files, happily.
When splitting an application across files, or when dealing with classes whose constructors assume too much (like instantiating other classes), circular references or dead-end references ARE a large concern.
There are multiple patterns to deal with this, but the best one, of course, is to build your app with DI/IoC in mind, so that circular references don't happen.
You can also look into require.js or other dependency-loaders. How intricate you need to get is a function of how large your application is, and how private you would like everything to be.
When serving your application, the baseline for serving JS is to concatenate all of the scripts you need (in the correct order, if you're going to instantiate stuff which assumes other stuff exists), and serve them as one file at the bottom of the page.
But that's baseline.
Other methods might include "lazy/deferred" loading.
Load all of the stuff that you need to get the page working up-front.
Meanwhile, if you have applets or widgets which don't need 100% of their functionality on page-load, and in fact, they require user-interaction, or require a time-delay before doing anything, then make loading the scripts for those widgets a deferred event. Load a script for a tabbed widget at the point where the user hits mousedown on the tab. Now you've only loaded the scripts that you need, and only when needed, and nobody will really notice the tiny lag in downloading.
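A sketch of that deferred loading might look like this (the selector, file name, and init function are all hypothetical, and jQuery is assumed to be on the page already):

$('#tabs .tab').one('mousedown', function () {
    var tab = this;
    $.getScript('js/tabs-widget.js', function () {
        initTabs(tab);                  // assumed to be defined by tabs-widget.js
    });
});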
Compare this to people trying to stuff 40,000 line applications in one file.
Only one HTTP request, and only one download, but the parsing/compiling time now becomes a noticeable fraction of a second.
Of course, lazy-loading is not an excuse for leaving every class in its own file.
At that point, you should be packing them together into modules, and serving the file which will run that whole widget/applet/whatever (unless there are other logical places, where functionality isn't needed until later, and it's hidden behind further interactions).
You could also put the loading of these modules on a timer.
Load the baseline application stuff up-front (again at the bottom of the page, in one file), and then set a timeout for a half-second or so, and load other JS files.
You're now not getting in the way of the page's operation, or of the user's ability to move around. This, of course is the most important part.
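A sketch of that timer-based deferral (the file names are made up):

window.setTimeout(function () {
    var extras = ['js/analytics.js', 'js/social-widgets.js'];
    for (var i = 0; i < extras.length; i++) {
        var s = document.createElement('script');
        s.src = extras[i];
        s.async = true;                 // don't block anything the user is doing
        document.body.appendChild(s);
    }
}, 500);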
Update from 2020: this answer is very old by internet standards and is far from the full picture today, but still sees occasional votes so I feel the need to provide some hints on what has changed since it was posted. Good support for async script loading, HTTP/2's server push capabilities, and general browser optimisations to the loading process over the years, have all had an impact on how breaking up Javascript into multiple files affects loading performance.
For those just starting out with Javascript, my advice remains the same (use a bundler / minifier and trust it to do the right thing by default), but for anybody finding this question who has more experience, I'd invite them to investigate the new capabilities brought with async loading and server push.
Original answer from 2013-ish:
Because of download times, you should always try to serve your scripts as a single, big file. HOWEVER, if you use a minifier (which you should), it can combine multiple source files into one for you. So you can keep working on multiple files, then minify them into a single file for distribution.
The main exception to this is public libraries such as jQuery, which you should always load from a public CDN (chances are the user has already downloaded them from another site, so they don't need to be downloaded again). If you do use a public CDN, always have a fallback for loading from your own server if that fails.
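The usual fallback pattern looks something like this (the CDN URL and the local path are just examples; adjust them to your own layout):

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<script>
// If the CDN request failed, window.jQuery is undefined; fall back to a local copy.
window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');
</script>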
As noted in the comments, the true story is a little more complex;
Scripts can be loaded synchronously (<script src="blah"></script>) or asynchronously (s=document.createElement('script');s.async=true;...). Synchronous scripts block loading other resources until they have loaded. So for example:
<script src="a.js"></script>
<script src="b.js"></script>
will request a.js, wait for it to load, then load b.js. In this case, it's clearly better to combine a.js with b.js and have them load in one fell swoop.
Similarly, if a.js has code to load b.js, you will have the same situation no matter whether they're asynchronous or not.
But if you load them both at once, asynchronously, it can be faster, depending on the state of the client's connection to the server and a whole bunch of considerations that can only truly be determined by profiling.
(function (d) {
    // Grab an existing <script> element so we can insert before it (one is
    // guaranteed to exist, since this very code runs from a script element).
    var s = d.getElementsByTagName('script')[0],
        f = d.createElement('script');
    f.type = 'text/javascript';
    f.async = true;                     // don't block parsing or other downloads
    f.src = 'a.js';
    s.parentNode.insertBefore(f, s);    // start fetching a.js

    f = d.createElement('script');      // reuse the variable for the second script
    f.type = 'text/javascript';
    f.async = true;
    f.src = 'b.js';
    s.parentNode.insertBefore(f, s);    // b.js downloads in parallel with a.js
})(document);
It's much more complicated, but will load both a.js and b.js without blocking each other or anything else. Eventually the async attribute will be supported properly, and you'll be able to do this as easily as loading synchronously. Eventually.
There are two concerns here: a) ease of development b) client-side performance while downloading JS assets
As far as development is concerned, modularity is never a bad thing; there are also JavaScript module loaders (like RequireJS, which implements the AMD format) you can use to help you manage your modules and their dependencies.
However, to address the second point, it is better to combine all your JavaScript into a single file and minify it, so that the client doesn't spend too much time downloading all your resources. There are tools (RequireJS's optimizer, for instance) that let you do this as well (i.e., combine all your dependencies into a single file).
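For reference, a RequireJS optimizer (r.js) build profile looks roughly like this; the paths and module name below are placeholders for your own layout:

({
    baseUrl: "js",          // where the individual modules live
    name: "main",           // the entry-point module to trace dependencies from
    out: "main-built.js"    // the single combined, minified output file
})

// Run with: node r.js -o build.js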
It depends on the protocol you are using. If you are using HTTP/2, I suggest you split the JS into multiple files. If you are using HTTP/1.x, I advise you to use a single minified JS file.
Here is a sample of a website using HTTP and HTTP/2.
Thanks, hope it helps.
It does not really matter. If you use the same JavaScript on multiple pages, it can certainly be good to have a shared file to fetch it from, so you only need to update the script in one place.
I like keeping myself busy building modular web applications, but I don't want to spend time where I can save some.
For example I'm building a news module that should be easily implemented over multiple sites, because the same web-application is used.
However, not all websites will need a news module. Is it better/easier/faster to build an inline stylesheet/JavaScript file into the module itself, rather than to create a big external stylesheet/JavaScript with all the libraries, even though the file for the news module is not needed on all the other webpages?
It seems to be much easier to create an inline library in the module itself. So that this only gets loaded when needed, and saves load time and bandwidth on the other pages.
The other thing is that I like writing 'plug-and-play' modules. Say I move a file across the file server into the module folder, and the application takes care of the rest. With inline sheets, I don't have to add new lines to the header/footer etc.
What is the best solution for this, also taking into account that I'd rather spend 10 minutes moving a file and have it work than spend an hour appending external libraries just because it's considered 'good practice'?
If you're building web applications, are you using the MVC pattern? Do you separate your concerns (your templates/views, your models, and your logic/controllers)?
If you follow MVC, it makes maintaining and customizing your app easier.
To answer your exact question, what you need is RequireJS. This way you have only one place to declare your requirements, and RequireJS will handle the rest: load order and more.
Quoting from the requirejs website:
Over time, if you start to create more modular code that needs to be reused in a few places, the module format for RequireJS makes it easy to write encapsulated code that can be loaded on the fly.
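A minimal sketch of the RequireJS style the quote describes (the module names and file layout here are hypothetical, and a paths entry for 'jquery' is assumed in require.config):

// js/cart.js -- an encapsulated module that declares its one dependency:
define(['jquery'], function ($) {
    function addItem(id) {
        $('#cart-count').text(id);      // placeholder behaviour
    }
    return { addItem: addItem };
});

// Page code -- load the module (and, transitively, jQuery) on the fly:
require(['cart'], function (cart) {
    cart.addItem(42);
});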
Inline is never a solid way to maintain CSS. Take care to separate the description of your layout from your views. You can easily include the file in the same directory as the module so it should not be an issue.
Or, if you just want to load some script "on the fly", you can use jQuery.getScript: http://api.jquery.com/jQuery.getScript/. Otherwise you should follow George's way.
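For example, loading the news module's script only when it is needed (the file path and init function are hypothetical):

$.getScript('/modules/news/news.js', function () {
    initNewsModule();                   // assumed to be defined by news.js
});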
I am tinkering around with jQuery and am finding it very useful and almost exciting.
As of now, I am referencing the jQuery script via Google's CDN and I store plugins I use locally in a static/scripts directory.
Naturally, each page has its own individual implementation of the components required for the features it offers, e.g. the main page has the Twitter plugin, whereas the login page has form validation logic and password strength metering. However, certain components (the navigation bar, for example) use the same script across multiple pages.
Admittedly, I am not a fan of putting JavaScript code in the header of a page; I would rather have it in an external file (for caching, re-usability, and optimization purposes).
My question is: what is the preferred route for organizing the external files? I wanted to try to keep it to one JavaScript file for the entire site to reduce I/O requests. However, I am not sure how to implement document-ready functions conditionally on a per-page basis.
$(document).ready(function () { ... });
Is there some way to reference a page by some method (preferably ID-based rather than a URL conditional)?
Thank you in advance for your time!
You should try RequireJS.
This will allow you to load plugins only on the pages where you need them, and unload them again if they are not needed anymore.
Then again, it might be overkill. It really depends on the size of your project.
Paul Irish:
http://paulirish.com/2009/markup-based-unobtrusive-comprehensive-dom-ready-execution/
This will let you organize your scripts into blocks keyed by body class/ID and execute them automatically.
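The general idea is something like this (names are made up here; this is not Paul Irish's exact code):

var SITE = {
    common:  { init: function () { /* runs on every page */ } },
    contact: { init: function () { /* contact page only */ } },
    login:   { init: function () { /* login page only */ } }
};

jQuery(function () {
    SITE.common.init();
    var id = document.body.id;          // e.g. <body id="contact">
    if (id && SITE[id] && typeof SITE[id].init === 'function') {
        SITE[id].init();
    }
});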
First you might want to use YUI Compressor or some other JS compressing tool. Then perhaps creating a resource file (resx) for your JavaScript is the way to go. Then just reference the resource within your code. This is the approach Telerik took for their RadControl ASP.NET AJAX control framework.
Every beginners guide to Javascript talks about the evils of embedded scripts.
And I get it: definitely good advice for novices who have no concept of modular design. But every rule has an exception, and since I'm fairly new at web development (but not development in general) I want to ask if the following is a good exception:
I'm building a web application using MVC on the server-side (Django, for the record), and Require.js on the client-side to manage application logic scripts. I'm careful to keep these scripts DOM-agnostic.
It makes sense to me to embed the remaining DOM-interacting code directly in the server-side HTML templates that are defining that DOM. Creating separate JS files that are so minimal and tightly coupled with the content of the templates just feels unnecessary. Am I wrong?
Assuming you've got all application logic tucked away nicely in external files, is it really so bad to sneak a few lines of jQuery in with your HTML to hook that logic up to the DOM?
If you've got JS that applies to only one page and there's not very much of it then yes, I would put it directly in the page rather than as a separate JS include.
If you have a lot of JS that applies to only one page then a separate JS include gives you the advantage that (after the first request) it will sit in the browser cache rather than being downloaded every time a user refreshes that page. So in that case I'd probably stick with the external JS file.
Having made the decision to include JS in a particular html page, I prefer to include it all as a single block either in the head or at the end of the body - I don't want to have to scan through the html looking for multiple small script blocks buried in the markup. Also if I later decide to move the script to an external file it is easy to do so.
See this on where things are going, particularly the comment about AngularJS from Google:
http://blog.stevensanderson.com/2012/08/01/rich-javascript-applications-the-seven-frameworks-throne-of-js-2012/
Also see this thorough test and comparison: http://engineering.linkedin.com/frontend/client-side-templating-throwdown-mustache-handlebars-dustjs-and-more
It appears DOM templates may be the future, and to my mind they are easier to deal with when it comes to debugging.
I know that best practice for including javascript is having all code in a separate .js file and allowing browsers to cache that file.
But when we begin to use many jquery plugins which have their own .js, and our functions depend on them, wouldn't it be better to load dynamically only the js function and the required .js for the current page?
Wouldn't it be faster, on a page where I only need one function, to embed it dynamically in the HTML with a script tag instead of loading the whole JS file with all the plugins?
In other words, aren't there any cases in which there are better practices than keeping our whole javascript code in a separate .js?
It would seem at first glance that this would be a good idea, but in fact it would actually make matters worse. For example, if one page needs plugins 1, 2 and 3, then a file would be built server-side with those plugins in it. Now the browser goes to another page that needs plugins 2 and 4. This would cause another file to be built; this new file would be different from the first one, but it would also contain the code for plugin 2, so the same code ends up getting downloaded twice, bypassing the version the browser already has.
You are best off leaving the caching to the browser, rather than trying to second-guess it. However, there are options to improve things.
Top of the list is using a CDN. If the plugins you are using are fairly popular ones, the chances are that they are being hosted on a CDN. If you link to the CDN-hosted plugins, then for any visitor hitting your site for the first time who has also happened to hit another site using the same plugins from the same CDN, the plugins will already be cached.
There are, of course, other things you can do to speed your JavaScript up. Best practice includes placing all your script include tags as close to the bottom of the document as possible, so as not to hold up page rendering. You should also look into lazy initialization. This involves, for anything that needs significant setup to work, attaching a minimalist event handler that, when triggered, removes itself and sets up the real event handler.
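A sketch of that lazy-initialization idea (the element ID and setup functions are hypothetical):

function heavySetup(el)  { /* expensive wiring would go here */ }
function realHandler(e)  { /* the real click behaviour */ }

function placeholderHandler(e) {
    this.removeEventListener('click', placeholderHandler);  // run once only
    heavySetup(this);                                        // do the real setup now
    this.addEventListener('click', realHandler);
    realHandler.call(this, e);                               // don't swallow the first click
}

document.getElementById('gallery').addEventListener('click', placeholderHandler);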
One problem with having separate JS files is that it will cause more HTTP requests.
Yahoo have a good best practices guide on speeding up your site: http://developer.yahoo.com/performance/rules.html
I believe Google's Closure library has something for combining JavaScript files and dependencies, but I haven't looked too much into it yet, so don't quote me on it: http://code.google.com/closure/library/docs/calcdeps.html
Also there is a tool called jingo, http://code.google.com/p/jingo/, but again, I haven't used it yet.
I keep separate files for each plug-in and page during development, but during production I merge-and-minify all my JavaScript files into a single JS file loaded uniformly throughout the site. My main layout file in my web framework (Sinatra) uses the deployment mode to automatically either generate script tags for all JS files (in order, based on a manifest file) or perform the minification and include a single querystring-timestamped script inclusion.
Every page is given a body tag with a unique id, e.g. <body id="contact">.
For those scripts that need to be specific to a particular page, I either modify the selectors to be prefixed by the body:
$('body#contact form#contact').submit(...);
or (more typically) I have the onload handlers for that page bail early:
jQuery(function($){
if (!$('body#contact').length) return;
// Do things specific to the contact page here.
});
Yes, including code (or even a plug-in) that may only be needed by one page of the site is inefficient if the user never visits that page. On the other hand, after the initial load the entire site's JS is ready to roll from the cache.
Network latency is the main problem. You can get a very responsive page if you reduce the HTTP calls to one.
That means all the JS and CSS are bundled into the HTML page. And if you can forget IE6/7, you can inline the images as data:image/png;base64.
When we release a new version of our web app, a shell script minify and bundle everything into a single html page.
Then there is a second call for the data, and we render all the HTML client-side using a JS template library: PURE
Ensure the page is cached and gzipped. There is probably a size limit to consider. We try to stay under 400kb unzipped, and load secondary resources later when needed.
You can also try a service like http://www.blaze.io. It automatically performs most front-end optimization tactics and also couples in a CDN.
They're currently in private beta, but it's worth submitting your website to.
I would recommend you join common bits of functionality into individual javascript module files and load them only in the pages they are being used using RequireJS / head.js or a similar dependency management tool.
An example: if you are using lightbox popups, contact forms, tracking, and image sliders in different parts of the website, you could separate these into 4 modules and load them only where needed. That way you optimize caching and make sure your site has no unnecessary flab.
As a general rule it's always best to have fewer files rather than more. It's also important to work on the timing of each JS file, as some are needed BEFORE the page completes loading and some AFTER (i.e., when the user clicks something).
See a lot more tips in the article: 25 Techniques for Javascript Performance Optimization.
Including a section on managing Javascript file dependencies.
Cheers, hope this is useful.