I've got a script executing on $(document).ready() that's supposed to vertically align block elements in my layout. 90% of the time, it works without issue. However, for the other 10% one of two things happens:
There's an obvious lag in the time it takes to do the centering, and the block elements jump into position. This could simply be performance related, as the page size is often large and there is a fair amount of JavaScript executing at once.
The centering will get completely messed up, and the block element will either be pushed down too far or not far enough. It appears as if it tried to calculate the height but was getting incorrect measurements.
Is there any reason why executing a script on DOM-ready would not have all the correct CSS values injected into the DOM yet? (All CSS is in the <head> via a <link>.)
Also, here's the script that's causing the issue (yes, it's been taken straight from here):
(function ($) {
    // VERTICALLY ALIGN FUNCTION
    $.fn.vAlign = function() {
        return this.each(function() {
            var ah = $(this).height();          // this element's height
            var ph = $(this).parent().height(); // its parent's height
            var mh = (ph - ah) / 2;             // top margin needed to center it
            $(this).css('margin-top', mh);
        });
    };
})(jQuery);
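For reference, here's how I'm invoking it on DOM ready (the .block selector is just a stand-in for my actual layout classes):
$(document).ready(function() {
    // runs as soon as the DOM is parsed -- which is exactly when the problem shows up
    $('.block').vAlign();
});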
Thanks.
From the 1.3 release notes:
The ready() method no longer tries to make any guarantees about waiting for all stylesheets to be loaded. Instead all CSS files should be included before the scripts on the page. More Information
From the ready(fn) documentation:
Note: Please make sure that all stylesheets are included before your scripts (especially those that call the ready function). Doing so will make sure that all element properties are correctly defined before jQuery code begins executing. Failure to do this will cause sporadic problems, especially on WebKit-based browsers such as Safari.
Note that the above is not even about actually rendering the CSS, so you may still see the screen change when ready() kicks in. But it should save you from problems.
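In other words, the safe ordering in the <head> looks like this (file names are placeholders):
<head>
    <!-- all stylesheets first, so computed styles exist before ready() fires -->
    <link rel="stylesheet" type="text/css" href="styles.css" />
    <!-- scripts after every stylesheet -->
    <script type="text/javascript" src="jquery.min.js"></script>
    <script type="text/javascript" src="site.js"></script>
</head>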
Actually, I find it a bit strange that just putting the CSS above the JS would solve all issues. CSS is loaded asynchronously, so JS loading can start and finish while the CSS is still being downloaded. So if the above is a solution, does that mean execution of any JS code is halted until all earlier requests have completed?
I did some testing, and indeed, sometimes JS is delayed until the CSS is loaded. I don't know why, because the waterfall shows that the JS completed loading long before the CSS download finished.
See JS Bin for some HTML and its results (this has a 10-second delay), and see webpagetest.org for its waterfall results. This uses some script from Steve Souders' cuzillion.com to mimic slow responses. In the waterfall, the reference to resource.cgi is the CSS. So, in Internet Explorer, the first external JS starts to load right after the CSS is requested (but that CSS will take another 10 seconds to finish). But the second <script> tag is not executed until the CSS has finished loading as well:
<link rel="stylesheet" type="text/css" href=".../a script that delays.cgi" />
<script type="text/javascript" src=".../jquery.min.js"></script>
<script type="text/javascript">
    alert("start after the CSS has fully loaded");
    $(document).ready(function() {
        $("p").addClass("sleepcgi");
        alert("ready");
    });
</script>
Another test with a second external JS after getting jQuery shows that the download of the second JS does not start until the CSS has loaded. Here, the first reference to resource.cgi is the CSS, the second the JS.
Moving the stylesheet below all the JS indeed shows that the JS (including the ready function) runs much earlier, but even then the jQuery-applied class --which is still unknown when the JS runs-- is applied correctly in my quick tests in Safari and Firefox. But it makes sense that things like $(this).height() will yield wrong values at that time.
However, additional testing shows that it is not a generic rule that JS is halted until earlier defined CSS is loaded. There seems to be some combination with using external JS and CSS. I don't know how this works.
Last notes: as JS Bin includes Google Analytics in each script when run from the bare URL (like jsbin.com/aqeno), the test results are actually changed by JS Bin... The Output tab of the edit URL (such as jsbin.com/aqeno/edit) does not include the additional Google Analytics bits and surely yields different results, but that URL is hard to test using webpagetest.org. The references to Stylesheets Block Downloads in Firefox and JavaScript Execution in IE, as given by strager, are a good start for a better understanding, but I have many questions left... Also note Steve Souders' IE8 Parallel Script Loading to make things even more complicated. (The waterfalls above were created using IE7.)
Maybe one should simply believe the release notes and documentation...
CSS/JavaScript/jQuery ordering doesn't work for me, but the following does:
$(window).load(function() { $('#abc')...} );
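Spelled out for the vertical-centering case above (the selector is illustrative), that would be something like:
// window load fires after stylesheets and images, so height() is reliable here
$(window).load(function() {
    $('.block').vAlign();
});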
DOM ready fires when all the DOM nodes are available. It has nothing to do with CSS. Try positioning the stylesheet earlier, or try loading it differently.
To the best of my knowledge, the ready event fires when the DOM is loaded, which means that all the blocking requests (i.e. JS) have loaded and the DOM tree is completely graphed. The ready state in IE relies on a slower event trigger (document.readyState change vs DOMContentLoaded) than most other browsers, so the timing is browser-dependent as well.
The existence of non-blocking requests (such as CSS and images) is completely asynchronous and unrelated to the ready state. If you are in a position where you require such resources you need to depend on the good old onload event.
According to HTML5, DOMContentLoaded is a plain DOM ready event that does not take stylesheets into account. However, the HTML5 parsing algorithm requires browsers to defer the execution of scripts until all previous stylesheets are loaded. (DOMContentLoaded and stylesheets)
In molily's tests (2010),
IE and Firefox blocked all subsequent script execution until stylesheets loaded
Webkit blocked subsequent execution only for external scripts (<script src>)
Opera did not block subsequent execution for any scripts
All modern browsers now support DOMContentLoaded (2017) so they may have standardized this behavior by now.
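If jQuery isn't needed, a plain listener works in any modern browser; a minimal sketch:
document.addEventListener('DOMContentLoaded', function() {
    // the DOM is parsed here, but whether stylesheets are already applied
    // depends on the blocking behavior described above
    console.log('DOM ready');
});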
Related
I have some JavaScript in the HEAD tag that dynamically inserts an asynchronously loading script tag before the last script on the page (that has been currently parsed). This dynamically included script tag contains JavaScript that needs to parse the DOM after the DOM is available, but before all images AND script tags have been loaded in. It's important that the JavaScript starts executing before all JS has been loaded in, because if there is a hanging script, this would lead to a bad user experience. This means I can't wait for the DOMContentLoaded event to fire. I don't have any flexibility as to where I place the first bit of JavaScript that is dynamically including the script tag.
My question is, is it safe for me to start parsing through the DOM right away, without waiting for the DOMContentLoaded event? If not, is there a way for me to do this without waiting for the DOMContentLoaded event?
...JavaScript in the HEAD ... dynamically inserts an asynchronously loading script tag before the last script on the page...
I'm assuming the loader script is inline, meaning that the highlighted bit actually refers to the "current" script element, i.e. the loader. This happens because only the HTML preceding the loader script tag has been parsed and interpreted, so the inserted script tag is actually still in the head and not at the bottom of the page. So the target script is limited to performing DOM operations on preceding elements only, unless you wrap the code in a DOM ready callback... which is what you're trying to avoid in the first place!
Basically you want to load all the HTML so the page is visible/scannable, start loading images/stylesheets (which happens in non-blocking threads), and then load any JavaScript. One approach is to put your target scripts at the bottom of the page; just pick their order correctly (interactivity first, enhancements second, third-party analytics/social media integration/anything else super-heavy last) and adjust for your needs. Technically it still blocks the page load, but by then there are only scripts left at the bottom of the page anyway (and since they are at the bottom, you can directly manipulate the DOM as soon as they're loaded, minus some IE7 quirks).
There is a relevant rant/overview I like to link to that provides decent examples and some timing trivia on the use and abuse of DOM ready callbacks, as well as the "other side of the story" on why stellar performance can be of lower value than a sane dependency management framework. The subject of the latter is far too broad to be exhausted in one answer, but something like the requirejs documentation should give you a fair idea of how the pattern works.
Perhaps another pattern to consider is building an SPA (single-page application), which leverages asynchronous access to content chunks rather than "traditional" navigation between complete pages. The pattern comes with an underestimated but rather significant performance benefit from not having to parse and re-execute shared JavaScript on every page, which would also address your (valid) concern about third-party JS performance. After all, a good caching policy alone does wonders for loading time, but the execution overhead of poor JavaScript code or massive frameworks remains.
Update: Figured this out. With your specific scenario in mind (i.e. no control over the markup per se, and wanting to be the last script to execute), you should wrap the insertion of the async script element into the DOM in a 0ms setTimeout callback:
setTimeout(function() {
    // the rest is how GA operates
    var targetScript = document.createElement('script');
    targetScript.type = 'text/javascript';
    targetScript.async = true;
    targetScript.src = 'target.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(targetScript, s);
}, 0);
Due to the single-threaded nature of the environment, a JS setTimeout callback is basically added to a queue for 0ms-delayed execution as soon as the thread is no longer busy (a more thorough explanation here). So the browser isn't even aware of the need to load, let alone execute, the target script until all "higher priority" code has completed! And since the DOM is operational by the time the script tag is added, you will not have to check for it explicitly in the target script itself (which is handy for when it's loaded "instantly" from cache).
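A trivial illustration of that queueing behavior:
console.log('1: synchronous');
setTimeout(function() {
    console.log('3: runs only after the current execution thread is done');
}, 0);
console.log('2: still synchronous');
// always logs 1, 2, 3 -- the 0ms callback waits for the thread to free up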
The behavior of the following techniques makes it safe to parse the DOM:
Using the window load or DOMContentLoaded event
Declaring or injecting your script at the bottom of the page
Placing the "async" attribute on your script tag
or doing this:
<script>
    setTimeout(function() {
        // script declared inside here will execute after the DOM is parsed
    }, 0);
</script>
Also, these will NOT BLOCK the loading of the page's DOM.
There is no need to wait for the DOMContentLoaded event when your script is declared below any DOM it depends on, UNLESS you need to do size calculations or positioning, since images/videos will change the sizing of things if width/height are not specified.
Here are some scenarios where this works.
DEPENDENT DOM IS ABOVE
<script src="jquery.js"></script>
<script>
$('mydom').slideDown('fast');
</script>
or try this:
<script>
    // won't fart
    setTimeout(function() {
        console.log(document.getElementById('mydom').innerHTML);
    }, 0);
</script>
DEPENDENT DOM IS BELOW or ABOVE (doesn't matter)
Here's my little test for you to see setTimeout working, as it's one of those strange things I didn't notice until recently, so it's nice to see a working example of it.
http://jsfiddle.net/FFLL2/
Yes, you should wait for the user agent to tell you that the DOM is loaded.
However, there is more than one way to do so.
There is a difference between onreadystatechange to interactive and DOMContentLoaded.
According to http://www.whatwg.org/specs/web-apps/current-work/multipage/the-end.html, the first thing the user agent does after it stops parsing the document is set the document readiness to "interactive". The user agent does not wait for scripts to be loaded. When the document readiness changes, the user agent fires readystatechange at the Document object.
So if the scripts you are worrying about are non-inline, you might hook up with readystatechange.
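A sketch of that hook (the init function name is mine, purely illustrative):
document.addEventListener('readystatechange', function() {
    if (document.readyState === 'interactive') {
        // parsing has stopped, but external resources may still be loading
        initDomDependentCode(); // hypothetical: your DOM-touching setup
    }
});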
Talking about cross-browser: This is the spec.
I strongly advise you to read the following article in full; it delves in detail into the mysteries of script loading and actual DOM readiness, and when it is safe to do what, also taking into account browser discrepancies.
http://www.html5rocks.com/en/tutorials/speed/script-loading/
To keep JavaScript from blocking webpage rendering, can't we just put all our JS files/code to be loaded/executed right before the closing </body> tag?
All JS files and code would then be downloaded and executed only after the whole page has been rendered, so what's the need for tricks like the one suggested in this article about non-blocking techniques to load JS files? He basically suggests using code like:
document.getElementsByTagName("head")[0].appendChild(script);
in order to defer script load while letting the webpage render, thus resulting in fast rendering speed of the webpage.
But without using this type of non-blocking technique (or other similar techniques), wouldn't we achieve the same non-blocking result by simply placing all our JS files (to be loaded/executed) before the closing </body> tag?
I'm even more surprised because the author (in the same article) suggests putting his code before the closing </body> tag (see the "Script placement" section of the article), so he is basically loading the scripts before the closing </body> tag anyway. What's the need for his code then?
I'm confused, any help appreciated, thanks!
UPDATE
FYI, Google Analytics uses a similar non-blocking technique to load their tracking code:
<script type="text/javascript">
...
(function()
{
var ga = document.createElement('script');
ga.type = 'text/javascript';
ga.async = true;
ga.src = 'your-script-name-here.js';
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(ga, s); //why do they insert it before the 1st script instead of appending to body/head could be the hint for another question.
})();
</script>
</head>
Generally speaking, no. Even if the scripts are loaded after all the content of the page, loading and executing them can still block the page. The reason for that is the possible presence of document.write calls in your scripts.
However, if all you want to achieve is fast loading of page contents, the result of placing script tags right before the </body> tag is the same as creating script tags dynamically. The most significant difference is that when you load scripts the common static way, they are executed one by one; in other words, there is no parallel execution of script files (and in old browsers the same is true for downloading the scripts too).
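To make that difference concrete, a rough sketch of the dynamic equivalent of two static tags at the bottom (file names are placeholders); note the loss of guaranteed execution order:
var s1 = document.createElement('script');
s1.src = 'a.js';
document.body.appendChild(s1); // downloads in parallel with b.js
var s2 = document.createElement('script');
s2.src = 'b.js';
document.body.appendChild(s2); // may execute before or after a.js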
If you want asynchronous scripts:
Use the (HTML5) async attribute if it is available in the browser you're targeting. This is what Google Analytics is doing in the code you posted (specifically the line ga.async = true; see the MDN link, scroll down for async).
However, this can cause your script to load at arbitrary times during the page load - which might be undesirable. It's worth asking yourself the following questions before choosing to use async.
Don't need user input? Then use the async attribute.
Need to respond to buttons or navigation? Then you need to put them at the top of the page (in the head) and not use the async attribute.
Async scripts run in any order, so if your script depends on (say) jQuery, and jQuery is loaded in another tag, your script might run before the jQuery script does, resulting in errors.
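For example, this combination is a race (script names are illustrative); if order matters, defer is the safer choice, since deferred scripts execute in document order:
<script async src="jquery.min.js"></script>
<script async src="my-plugin.js"></script> <!-- may run before jQuery exists -->
<script defer src="jquery.min.js"></script>
<script defer src="my-plugin.js"></script> <!-- always runs after jquery.min.js -->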
Why don't people put things at the bottom of the body tag? If the script takes long enough to load that it slows/pauses the load of the website, it's quite possible that the script will pause/hang the website after the website has loaded (expect different behaviour on different browsers), making your website appear unresponsive (click on a button and nothing happens). In most cases this is not ideal, which is why the async attribute was invented.
Alternatively, if your script is taking a long time to load, you might want to (after testing) minify and concatenate your script before sending it up to the server.
I recommend using require.js for minifying and concatenation, it's easy to get running and to use.
Minifying reduces the amount of data that needs to be downloaded.
Concatenating scripts reduces the number of "round-trips" to the server (for a far away server with 200ms ping, 5 requests takes 1 second).
One advantage of asynchronous loading (especially with something like the analytics snippet) is that, at least if you place it at the top, the script is loaded as soon as possible without costing any time in rendering the page. So with analytics, the chances of actually tracking a user before they leave the page (maybe before the page has fully loaded) are higher.
And insertBefore is used instead of append because, if I remember correctly, there was a bug (I think in some IE versions; see also the link below, there's something in the comments about that).
For me this link:
Async JS
was the most useful I have found so far, especially because it also brings up the issue that even with Google's analytics code the onload event will still be blocked (at least in some browsers). If you don't want this to happen, better to attach the function to the onload event.
As for putting the asynchronous snippet at the bottom, that is actually explained in the link you posted. He seems to do it just to make sure that the DOM is completely loaded without using the onload event. So it may depend on what your scripts are doing; if you're not manipulating the DOM, there should be no reason to add it at the bottom of the body. Besides that, I personally would prefer attaching it to the onload event anyway.
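For that last variant, the loader from the question would simply be wrapped in an onload handler; a sketch (old IE would need attachEvent instead of addEventListener):
window.addEventListener('load', function() {
    var ga = document.createElement('script');
    ga.type = 'text/javascript';
    ga.async = true;
    ga.src = 'your-script-name-here.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
}, false);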
I'm experiencing strange behavior with Firefox and Dojo. I have an HTML page with these lines in the <head> section:
...
<script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script>
<script type="text/javascript">
dojo.require("dojo.number");
</script>
...
Sometimes the page loads normally. But sometimes it won't. Firefox will fetch the whole html page but not render it. I see only a gray window.
After some experimenting I figured out that the rendering problem has something to do with the load time of the HTML. Firefox starts evaluating the HTML page while loading it. If the page takes too long to load, the above JavaScript will be executed BEFORE the HTML finishes loading.
If this happens, I get the gray window. Advising Firefox to show me the source code of the page will display the correct, complete HTML code. BUT: if I save the page to disk (File->Save Page As...) the HTML code is truncated and the above part looks like this:
...
<script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script>
<script type="text/javascript">
dojo.require("dojo.number");
</script></head><body></body></html>
This explains why I get to see a gray area. But why does this code appear there? I assume the require() method of Dojo does something "evil". But I can't figure out what. There is no document.write("</head><body></body></html>"); in the Dojo code. I checked for it.
The problem would be fixed if I placed the dojo.require("dojo.number"); statement in the window.onload event:
<script type="text/javascript">
window.load=function() {
dojo.require("dojo.number");
}
</script>
But I'm curious why this happens. Is there a JavaScript function which forces Firefox to stop evaluating the page? Does Dojo do something "bad"? Can anyone explain this behavior to me?
EDIT: Dojo 1.3.1, no JS errors or warnings.
What does the rest of the page look like? What elements should be rendering that aren't? What other JavaScript do you have?
What you have looks fine, but you will not be able to use methods in dojo.number or anything else loaded via dojo.require until after the page loads: you must wait for window.onload to fire, or use the dojo.addOnLoad() method to register a callback. The latter is actually a bit quicker than onload.
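A sketch of the dojo.addOnLoad route (the format call is just an example use of the module):
dojo.require("dojo.number");
dojo.addOnLoad(function() {
    // safe to call into dojo.number here: the module has finished loading
    console.log(dojo.number.format(1234.5, { places: 2 }));
});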
dojo.require uses synchronous XHR to load modules, which does block the browser, so if the load is unusually slow you will notice a delay in the rendering of the page.
I think this is a rendering bug in Firefox that I've seen in a number of contexts, where the one common factor is the amount of time the browser takes to load all the resources in the <head> of the page. The more scripts you have in the head that take a long time to request over the network or eval, the higher your chances are of running into this. Hitting the page with a warm cache notably reduces the possibility of running into the paint bug as well. Another way to mitigate it is to put the JavaScript at the end of the <body>, which is also a best practice since it doesn't block the browser from previewing markup immediately as it gets it.
Regarding the specifics of using dojo: common use cases include running things onload, like creating and starting up widgets. If you have code in an onload handler that uses a dojo module like a widget, then stick the dojo.require statement inside the onload handler as well, instead of before it. There's no point in suffering the performance penalty or blocking the initial UI rendering if you don't need the module until later. Then build custom dojo layers to include the minimal core (possibly a custom base to make it even smaller) and the other 90% of what you need in a separate layer. Load the minimal core layer in the head (to get dojo.addOnLoad, etc.) and then the other layer at the end of the body. If you live in a modular application framework where apps come and go in the page content area depending on the page you're on, each app should put the dojo.require statements for the respective dojo modules it uses immediately before the modules are actually referenced.
This obviously won't work if you need a module immediately in an inline script, but in that case a custom dojo build will help mitigate things as well.
I'm unaware of a reported issue with Mozilla, but I have also seen this much less often on other browsers some time ago.
Why are JS scripts usually placed in the header of a document? Is it required by standards, or is it just a convention with no particular reason?
See http://developer.yahoo.com/performance/rules.html#js_bottom
Although past practice has often been to place them in the header for the sake of centralizing scripts and styles (and the like), it is advisable now to place the scripts at the bottom to improve loading speed of the rest of the page.
To quote:
The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.
In some situations it's not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page's content, it can't be moved lower in the page. There might also be scoping issues. In many cases, there are ways to workaround these situations.
An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn't support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.
A <script src="url"></script> will block the downloading of other page components until the script has been fetched, compiled, and executed. It is better to call for the script as late as possible, so that the loading of images and other components will not be delayed.
It depends on what the script is doing. If your code is wrapped in an onLoad event then it doesn't matter where it goes, since it will return almost immediately and not block; otherwise you should put it where it fits, because placement does matter.
As for putting it at the end, it does give the user a little extra time to start looking at the page. Just ask yourself a question: does my site work without JavaScript? If it doesn't, then in my opinion it doesn't matter where you put it, since onLoad code will only be executed once the DOM has fully loaded (and that includes binary content like images). If the site can be used without JavaScript, then put the script at the end so that images can load faster.
Also note that most JS libraries use special code which works around the onLoad problem, using a custom event for this which fires once the DOM has loaded and doesn't wait for binary data.
Now that I've written all that, I have a question of my own. Is using, say, jQuery's
$(document).ready(function () {});
and putting the script tag at the end of the page the same as using the onLoad event and putting it at the start?
It should be the same, because the browser would load all images before loading the script which is last in the list. If you know the answer, leave a comment (I'm too lazy and it's too late to test it atm).
It's just a convention. It's usually recommended to put scripts at the end of the body so the page can display before loading them, which is always a plus. Also, document.body can't be used until the document is loaded, unless the script is in the body.
I want to have the addthis widget available for my users, but I want to lazy load it so that my page loads as quickly as possible. However, after trying it via a script tag and then via my lazy loading method, it appears to only work via the script tag. In the obfuscated code, I see something that looks like it's dependent on the DOMContentLoaded event (at least for Firefox).
Since the DOMContentLoaded event has already fired, the widget doesn't render properly. What to do?
I could just use a script tag (slower)... or could I fire (in a cross browser way) the DOMContentLoaded (or equivalent) event? I have a feeling this may not be possible because I believe that (like jQuery) there are multiple tests of the content ready event, and so multiple simulated events would have to occur.
Nonetheless, this is an interesting problem because I have seen a couple of widgets now assume that you are including their stuff via static script tags. It would be nice if they wrote code that was more useful to developers concerned about speed, but until then, is there a workaround? And/or are any of my assumptions wrong?
Edit:
Because the 1st answer to the question seemed to miss the point of my problem, I wanted to clarify the situation.
This is about a specific problem. I'm not looking for yet another lazy-load script, or a "check if some dependencies are loaded" script. Specifically, this problem deals with:
external widgets that you do not have control over and which may or may not be obfuscated
delaying the load of the external widgets until they are needed, or at least until substantially after everything else has been loaded, including other deferred elements
widgets that, because of how they were written, preclude existing, typical lazy loading paradigms
While it's esoteric, I have seen it happen with a couple of widgets, where the widget developers assume that you're just willing to throw in another script tag at the bottom of the page. I'm looking to save those 500-1000 ms**, though, as numerous studies by Yahoo, Google, and Amazon show it to be important to your users' experience.
**My testing with hammerhead and personal experience indicates that this will be my savings in this case.
The simplest solution is to set the parameter domready to 1 when embedding the addthis script into your page. Here is an example:
<script type="text/javascript"
src="http://s7.addthis.com/js/250/addthis_widget.js#username=addthis&domready=1">
</script>
I have tested it on IE, Firefox, Chrome, and Safari, and all worked fine. More information on addthis configuration parameters is available here.
This code solves the problem and saves the loading time that I was looking for.
After reading this post about how most current JS libraries implement tests for a DOM loaded event, I spent some time with the obfuscated code and was able to determine that addthis uses a combination of the mentioned doScroll method, timers, and the DOMContentLoaded event, depending on the browser. Since only those browsers dependent on the DOMContentLoaded event would need the following code anyway:
if (document.createEvent) {
    // synthesize and dispatch a DOMContentLoaded event for browsers that rely on it
    var evt = document.createEvent("MutationEvents");
    evt.initMutationEvent("DOMContentLoaded", true, true, document, "", "", "", 0);
    document.dispatchEvent(evt);
}
and the rest depend on timers testing for the existence of certain properties, I only had to accommodate this one case to be able to lazy load this external JS content rather than using static script tags, thus saving the time that I was hoping for. :)
Edit: If the goal is simply to have your other content load first, try putting the <script> tags near the bottom of your page. They will still be able to catch the DOMContentLoaded event, and the content that comes before will be loaded first.
Original:
In addition to loading on DOMContentLoaded, you could have it load once a certain var is set to true, e.g.
var isDOMContentLoaded = false;
document.addEventListener("DOMContentLoaded",function() { isDOMContentLoaded = true; }, false);
then add to the other script file
if (isDOMContentLoaded) loadThisScript();
Edit in response to comments:
Load the script, and run the function that the DOMContentLoaded listener fires (read the script if you're not sure what function is being called).
e.g.
var timerID;
var iteration = 0;
function checkAndLoad() {
    // poll until the external script has defined the function we need
    if (typeof loadThisScript != "undefined") {
        clearInterval(timerID);
        loadThisScript();
    }
    iteration++;
    if (iteration > 59) clearInterval(timerID); // give up after ~60 seconds
}
var extScript = document.createElement("script");
extScript.setAttribute("src", scriptSrcHere);
document.head.appendChild(extScript);
timerID = setInterval(checkAndLoad, 1000);
The above will try once a second for 60 seconds to check whether the function you need is available and, if so, run it.
AddThis has a section on how to load their tools asynchronously.
Current 'best' solution:
<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=[YOUR PROFILE ID]" async="async"></script>