So I've recently learned that putting your JS files at the bottom of the DOM is antiquated, and that I should once again be putting them in the <head> with the "async" and "defer" attributes.
Great. But I'm a bit confused as to which I should use, based on priority.
So I have:
jQuery
jQuery plugins that don't have immediate effects on the look of the page
jQuery plugins that do have immediate effects on the look of the page
my own personal scripts, which have immediate effects on the look of the page and also rely on jQuery
Which should get async, and which should get defer?
If I understand all this correctly, the ones that don't have an immediate effect on the look of the site should get defer, while everything else gets async. Correct? Or am I getting these mixed up?
It's quite simple. You should use [async] for scripts which can be executed in any order, and [defer] for scripts which have to be executed after HTML is parsed.
For example, if you have a script that adds social sharing icons next to your posts, and this script doesn't rely on any other script, you can use both [async] and [defer]. But if your script requires jQuery, you can't use [async], because it might then get executed before jQuery is loaded, and it will break.
If all your scripts require jQuery, then you shouldn't use [async] at all. As for [defer], it depends on whether your scripts access the DOM. For plugins it probably doesn't matter, but you'll probably need it for your own code.
If you wrap your scripts in $(document).ready();, you can use [defer] for scripts which don't have an immediate effect (e.g. those that require user interaction).
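A minimal sketch of the setup described above (the file names are just placeholders): since [defer] scripts execute in document order after parsing, jQuery and everything that depends on it can all get [defer], while a genuinely independent script can get [async].

```html
<head>
  <!-- defer: downloaded in parallel, executed in this order after parsing -->
  <script defer src="jquery.min.js"></script>
  <script defer src="jquery.plugin.js"></script>  <!-- depends on jQuery -->
  <script defer src="site.js"></script>           <!-- depends on jQuery -->

  <!-- async: independent of everything else, runs whenever it arrives -->
  <script async src="analytics.js"></script>
</head>
```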
I have some applications which dynamically create widgets. Some widgets need certain libraries to be included (e.g. CodeMirror, TinyMCE, jQuery, ...). These scripts are added to the document dynamically when the widget is first created; otherwise they are not included at all (it would be a waste of resources to pre-include every possible widget script when not all of them are used in every request).
The widgets can be created either server-side or client-side. On the client, the scripts are added to the page dynamically; otherwise the server adds them as script tags in the resulting HTML output.
On the client, I have noticed that execution order is not respected. For example, some CodeMirror addons load first (being more lightweight) yet fail because the main lib is not yet loaded (even though it comes before them as a script tag).
I tried using the defer attribute, which according to MDN preserves execution order in document order even though the scripts are loaded asynchronously. Yet I noticed it fails (note I use only the defer attribute, not async). Tested on Firefox and Chrome so far.
Is this true also for dynamically added script tags?
If not, what can be an alternative in order to respect execution order (without using onload callbacks)?
Thank you!
I resolved the issue by using callbacks to wait until a script has loaded before attaching the next script (which depends on the previous one) to the DOM.
However, I would expect this procedure to be handled by the browser (e.g. via the defer attribute).
If someone has a solution in which the browser handles all this interlinking for dynamically added script tags, please feel free to post an answer.
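The browser does offer something close to this: defer only applies to parser-inserted scripts, which is why it has no effect here, but dynamically created scripts default to async = true, and setting script.async = false before appending makes them execute in insertion order (the HTML spec's "in-order" behaviour). A sketch of that, plus a small helper that chains the loads explicitly (names here are illustrative, not from the question):

```javascript
// In a browser, dynamically inserted scripts ignore "defer", but
// setting async = false preserves insertion order:
//
//   const s = document.createElement('script');
//   s.src = src;
//   s.async = false;            // execute in insertion order
//   document.head.appendChild(s);
//
// Alternatively, chain the loads explicitly. runInOrder takes a list of
// functions that each start one load and return a Promise, and runs them
// strictly one after another.
function runInOrder(loaders) {
  return loaders.reduce(
    (chain, load) => chain.then(() => load()),
    Promise.resolve()
  );
}
```

In a browser each loader would append a script tag and resolve on its onload event; the chaining itself is plain Promise logic.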
Is there any difference between including external JavaScript and CSS files and including all JavaScript and CSS (even jQuery core!) inside the HTML file, between <style>...</style> and <script>...</script> tags, other than caching? (I want to do something with that HTML file locally, so caching doesn't matter.)
The difference is that your browser doesn't make those extra requests, and as you have pointed out, cannot cache them separately from the page.
From a functional standpoint, no, there is no difference once the resources have been loaded.
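For clarity, the two forms being compared look like this (file names are just placeholders):

```html
<!-- external: two extra requests, cacheable separately from the page -->
<link rel="stylesheet" href="styles.css">
<script src="jquery.min.js"></script>

<!-- inline: no extra requests, but shipped with every copy of the page -->
<style>
  body { margin: 0; }
</style>
<script>
  /* the full contents of jquery.min.js would go here */
</script>
```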
The reason we most often see external paths for CSS and JavaScript is that they are kept on a CDN or some sort of cache server in the cloud nowadays.
A very good example is when you include jQuery from Google:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
Here we see that Google is hosting it for us, and we don't need to worry about maintenance etc.
So if you want to keep the files locally, you will have to maintain them yourself.
There isn't any difference once the code is loaded. Of course it won't be cached, like you pointed out, but since you just need to do something locally, it really isn't that important.
One thing to remember is that you'd have to make sure dependency chains aren't broken, since the browser would otherwise be loading the scripts simultaneously.
Edit: Of course your main page would appear to take longer to load, since it has to download all that extra content before the <body> starts to load. You can avert that by moving your JS to the bottom (near the footer) instead.
When your CSS isn't loaded, your page appears unstyled at first and then settles once the styles are applied. So either you declare your CSS at the top of the page and wait for the browser to process it all before it starts rendering, or you let the first page load slowly and have further requests load quicker, since the style is by then cached.
Similarly with your script code: you now have to wait for the code to arrive with the page and then wait for the execution you have bound in $(document).ready(). I hope you realize that $(document).ready will now be delayed, since there is no caching.
There is a big performance issue with this: your load and DOMContentLoaded events will fire much later.
load fires when the browser has parsed the last line of your code, so the browser will show its loading indicator until all your resources are loaded and parsed. Browsers download multiple external resources in parallel; you lose that performance boost by inlining your JS and CSS in the HTML.
No difference on the client side, except you'll make fewer requests, thus loading faster. On the other hand, you won't be caching, and you also won't be able to share the style and the JavaScript among several pages.
If you're positive that CSS and JavaScript are only going to be used in this page, inline is fine IMO.
If you use the script and CSS only on one page, including them in the HTML would be the fastest way, as the browser needs to make only one request. If you use them on more pages, you should make them external so the browser can cache them and only has to download them once. Using jQuery from Google, for example, as mentioned by @hatSoft, is even better, as the browser is very likely to already have it in cache from other sites that reference it when your user visits for the first time. In real life you rarely use scripts and CSS on one page only, so making them external is most often best for performance, and definitely for maintenance. Personally I always keep HTML, JS and CSS strictly separate!
I want to place jQuery just before the closing </body> tag, as recommended. But because I'm using a content management system, inline scripts that require jQuery will be executed before the closing body tag.
My question now is: Is it worth collecting jQuery-based scripts in an array and running them at the end of the document once jQuery is loaded (EXAMPLE), or should I just load jQuery in the head section?
You could adopt the approach described here
The idea is to create a global q variable early in the header and use a temporary window.$ function to collect all the jQuery-dependent code/functions/plugins.
window.q = [];
window.$ = function (f) {
  q.push(f);
};
Then, after you load jQuery, you pass all the collected functions to the real $, which registers each one as a document-ready handler.
$.each(q, function (index, f) {
  $(f);
});
This way you can safely include your jQuery-dependent code before the jQuery lib itself.
Whether this is better than loading jQuery in the head depends on how much code you have to push into the temporary q queue.
jQuery placed in the <head> section requires a single blocking script. But if you have a lot of code to inject all over the document, you may end up with a lot of huge blocking scripts that stall the rendering process.
On the contrary, running a lot of scripts after the DOM-ready event can easily make your page load faster, so this approach is better and the benefits can be more evident.
So there's no definitive answer valid for all code and all pages: even though this kind of technique is generally good and preferable (because you get the benefit of as-late-as-possible script execution), you should always test both approaches and look at loading and rendering times. The article linked at the beginning has some considerations on performance, but also explains why Stack Overflow didn't use it.
Just load jQuery in the head; it will be cached after the first load and won't slow anything down after that.
Everything else sounds like it would be over the top and I am not sure that the gain in performance will be that significant to justify the extra work.
Also, sometimes, if your page loads slowly with the JavaScript at the bottom, the script can take longer to arrive and run, which means the page might be visible without JavaScript for a while and not provide such a good experience to the user.
Is there any drawback to putting code (which will interact with the DOM, adding event listeners and so on) just before the closing </body> tag?
<!-- all the HTML comes before this -->
<script>
(function() {
    do_stuff_with_the_DOM();
})();
</script>
</body>
It seems to work in my own tests, but I never see this method used in tutorials, code examples, or other people's projects. Is there a reason not to do it? Are there edge cases that only seem to pop up when you begin using this in production and see many page views across a variety of browsers?
My project doesn't use jQuery or any other toolkit, and I'm aware of the alternatives that mimic jQuery's $(document).ready() functionality. Do I really need to use one of those? (It should go without saying, but I'm looking to run the code before window.load.)
Note that the code I want to run (do_stuff_with_the_DOM() in the example above) would be defined in an external script file, if that makes a difference.
You should put your JavaScript code in the place that makes the most sense for what it needs to do. In most cases, you need your js to attach events to DOM objects, which is hard to imagine if those DOM objects don't exist when the js is running. So putting it at the bottom of the html is a sensible, simple, and common approach. Is it the best? That's arguable.
Another approach is to attach your JavaScript to the various events that different browsers fire when the DOM is fully loaded. There is nothing wrong with this, although detractors don't like that it's often done in a way that requires an additional blocking HTTP request in the head element.
Event delegation offers a third approach that lets you attach events to parent elements (such as body) very early; when the appropriate child elements exist, the events will fire as if they had been attached to those elements all along. It's a very cool approach with theoretically the best early-loading performance of any of the above, but it has pitfalls: not all events bubble all the way to the top, as you might expect, and it often tempts you to separate your JavaScript into multiple chunks, which can violate separation of content and behavior.
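A minimal sketch of the delegation approach (the element names and classes are illustrative): the listener goes on an ancestor that exists immediately, and the handler inspects which descendant the bubbled event actually came from, so it works even for elements added later.

```html
<body>
  <script>
    // Attached before the list below has been parsed; clicks on any
    // present or future li.todo bubble up to body and are handled here.
    document.body.addEventListener('click', function (e) {
      var item = e.target.closest ? e.target.closest('li.todo') : null;
      if (item) {
        item.classList.toggle('done');
      }
    });
  </script>
  <ul>
    <li class="todo">write answer</li>
    <li class="todo">test in browsers</li>
  </ul>
</body>
```

Note that this relies on the event bubbling (click does; some events, like focus or blur, do not), which is exactly the pitfall mentioned above.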
In General
Putting code just before </body> should always be the aim.
It's highly recommended, as the download of scripts (if requesting external JavaScript files) blocks parallel downloading (i.e. while a script is downloading, nothing else, be it another script or an image for example, can be downloaded at the same time).
The only time you should have an issue with this is in something like a poor CMS where you need to have jQuery in place in the <head> in order for some of its scripts to work.
Using inline JavaScript
Adding inline JavaScript (or inline CSS code) to a page is generally considered bad practice as, for one, it's a merging of concerns (i.e. you no longer have true separation between HTML/CSS/JS).
I cannot think of a negative performance issue if you did have all your code inlined; indeed Google uses this as a practice (they load all their JavaScript in a big comment, so that it isn't parsed, and then eval() elements of this blob of "text" as and when they need them).
I would note, however, that it's unlikely nowadays that you'll have many pages without a dependency on at least one external JavaScript file (be that jQuery, MooTools, Underscore or Backbone). In which case, as you will always have at least one external file (unless you're going the Google route), you might as well put both the external references AND the inline code together... at the bottom. Creating consistency.
References
Yahoo Developer Network best practices
Google Page Speed - Defer the loading of JavaScript
For a while I had been running JavaScript component initialization by waiting for the "onload" event to fire and executing a main() of sorts. It seemed cleaner and you could be sure the ID state of your DOM was in order. But after some time of putting that through its paces I found that the component's initialization was choked off by any sort of resource hanging during the load (images, css, iframes, flash, etc.).
Now I have moved the initialization invocation to the end of the HTML document itself, using an inline <script> block, and found that it runs the initialization before the other external resources have finished loading.
Now I wonder if there are some pitfalls that come with doing that instead of waiting for the "onload".
Which method are you using?
EDIT: Thanks. It seems each library has a specialized function for DOMContentLoaded/readyState implementation differences. I use prototype so this is what I needed.
For me, we use jQuery, and its document-ready state ensures that the DOM is loaded but does not wait for resources, like you say. You can of course do this without a JavaScript framework; it does require a function, which you can create, for example: document ready. Now, for the most part, putting the script at the end of the page ensures the rest of the page is there, but making sure the DOM is ready is never a bad thing.
jQuery has $(document).ready()
The ideal point at which to run most scripts is when the document is ready, and not necessarily when it is "loaded".
See here
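A minimal sketch of that pattern (it assumes jQuery has already been loaded, and the selector is just an example):

```html
<script src="jquery.min.js"></script>
<script>
  // Fires as soon as the DOM is parsed, without waiting for images,
  // iframes, flash, etc. to finish loading.
  $(document).ready(function () {
    $('#main a').on('click', function () { /* ... */ });
  });
</script>
```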
I use neither. Instead, I depend on YUI's onDomReady() (or onContentReady()/onAvailable()), because it handles the timing of initialization for me.
(Other JS libraries have similar methods for executing only once the page is fully loaded, as this is a common JS problem.)
That does not conform to any (X)HTML spec, and I would advise against it. It could put your site into the browser's quirks mode.
The correct way around the issue would be to use the DOMContentLoaded event, which isn't supported in all browsers. Hacks (e.g. polling doScroll() or using onreadystatechange) exist, so libraries are able to provide this functionality across browsers.
But there are still problems with DOMContentLoaded and chunked transfers which haven't been addressed by popular JavaScript frameworks.
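A rough sketch of the kind of fallback those libraries implement, assuming a document-like object (the legacy IE branch is simplified; real implementations also use the doScroll() polling trick). The document is passed in explicitly here so the branching logic can be exercised outside a browser; in real code you would close over the global document.

```javascript
// Runs cb once the DOM is ready, falling back to a legacy event when
// DOMContentLoaded is unavailable.
function whenReady(doc, cb) {
  if (doc.readyState === 'interactive' || doc.readyState === 'complete') {
    cb(); // the DOM has already been parsed
  } else if (doc.addEventListener) {
    doc.addEventListener('DOMContentLoaded', cb);
  } else if (doc.attachEvent) {
    // Legacy IE: watch readyState via onreadystatechange
    doc.attachEvent('onreadystatechange', function () {
      if (doc.readyState === 'complete') cb();
    });
  }
}
```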
Here's my take on the issue.