For a while I had been running JavaScript component initialization by waiting for the "onload" event to fire and executing a main() of sorts. It seemed cleaner, and you could be sure the ID state of your DOM was in order. But after putting that through its paces for a while, I found that the component's initialization was choked off by any resource hanging during the load (images, CSS, iframes, Flash, etc.).
Now I have moved the initialization invocation to the end of the HTML document itself, using an inline <script> block, and found that initialization now runs before other external resources have finished loading.
Now I wonder if there are some pitfalls that come with doing that instead of waiting for the "onload".
Which method are you using?
EDIT: Thanks. It seems each library has a specialized function to handle DOMContentLoaded/readyState implementation differences. I use Prototype, so this is what I needed.
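For reference, Prototype's equivalent of a DOM-ready hook is its custom "dom:loaded" event; a minimal sketch (the wrapper function name is illustrative, and Prototype is assumed to be loaded):

```javascript
// Prototype fires a custom "dom:loaded" event once the DOM is parsed,
// without waiting for images or other resources.
function initOnDomLoaded(fn) {
  document.observe("dom:loaded", fn);
}
```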
For me, we use jQuery, and its document-ready event ensures that the DOM is loaded but does not wait for resources, as you say. You can of course do this without a JavaScript framework; it just requires a function you create yourself, for example a "document ready" helper. Now, for the most part, putting the script at the end of the page does ensure the rest of the page is there, but making sure the DOM is ready is never a bad thing.
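A framework-free DOM-ready helper can be sketched like this (a minimal sketch assuming a browser that fires DOMContentLoaded; the function name is illustrative):

```javascript
// Run a callback as soon as the DOM is parsed, without waiting for
// images, CSS, or other resources to finish loading.
function onDomReady(callback) {
  if (document.readyState !== "loading") {
    // DOM is already parsed: run immediately.
    callback();
  } else {
    document.addEventListener("DOMContentLoaded", callback);
  }
}
```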
jQuery has $(document).ready().
The ideal point at which to run most scripts is when the document is ready, and not necessarily when it is "loaded".
See here
I use neither. Instead, I depend on YUI's onDomReady() (or onContentReady()/onAvailable()), because it handles the timing of initialization for me.
(Other JS libraries have similar methods for executing only once the page is fully loaded, as this is a common JS problem.)
That does not conform to any (X)HTML spec, and I would advise against it. It could put your site into the browser's quirks mode.
The correct way around the issue is to use the DOMContentLoaded event, which isn't supported in all browsers. Hacks (e.g. polling doScroll() or using onreadystatechange) exist, so libraries are able to provide this functionality across browsers.
But there are still problems with DOMContentLoaded and chunked transfers which haven't been addressed by popular JavaScript frameworks.
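Roughly, the cross-browser fallbacks those libraries use can be sketched like this (a simplified sketch of the legacy-IE techniques; the function name and the 50 ms poll interval are illustrative):

```javascript
// DOMContentLoaded where available; for old IE, onreadystatechange plus
// the doScroll() polling hack. A guard ensures the callback fires once.
function bindReady(fn) {
  var fired = false;
  function run() {
    if (!fired) {
      fired = true;
      fn();
    }
  }
  if (document.addEventListener) {
    document.addEventListener("DOMContentLoaded", run, false);
  } else if (document.attachEvent) {
    // Old IE: fires when document.readyState flips to "complete".
    document.attachEvent("onreadystatechange", function () {
      if (document.readyState === "complete") run();
    });
    // doScroll() throws until the DOM is ready to be manipulated,
    // so polling it gives an earlier signal than readyState "complete".
    (function poll() {
      try {
        document.documentElement.doScroll("left");
      } catch (e) {
        setTimeout(poll, 50);
        return;
      }
      run();
    })();
  }
}
```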
Here's my take on the issue.
Related
I know document.write is considered bad practice; and I'm hoping to compile a list of reasons to submit to a 3rd party vendor as to why they shouldn't use document.write in implementations of their analytics code.
Please include your reason for claiming document.write as a bad practice below.
A few of the more serious problems:
document.write (henceforth DW) does not work in XHTML
DW does not directly modify the DOM, preventing further manipulation (trying to find evidence of this, but it's at best situational)
DW executed after the page has finished loading will overwrite the page, or write a new page, or not work
DW executes where encountered: it cannot inject at a given node point
DW is effectively writing serialised text which is not the way the DOM works conceptually, and is an easy way to create bugs (.innerHTML has the same problem)
It is far better to use the safe and DOM-friendly DOM manipulation methods.
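A sketch of that DOM-friendly alternative: createElement/appendChild work at any time, at any node, and leave the result manipulable (the function name is illustrative):

```javascript
// Append a paragraph of text to any parent node using DOM methods
// instead of document.write.
function appendMessage(parent, text) {
  var p = document.createElement("p");
  p.appendChild(document.createTextNode(text));
  parent.appendChild(p);
  return p; // a real node you can keep manipulating, unlike written text
}
```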
There's actually nothing wrong with document.write, per se. The problem is that it's really easy to misuse it. Grossly, even.
In terms of vendors supplying analytics code (like Google Analytics) it's actually the easiest way for them to distribute such snippets
It keeps the scripts small
They don't have to worry about overriding already established onload events or including the necessary abstraction to add onload events safely
It's extremely compatible
As long as you don't try to use it after the document has loaded, document.write is not inherently evil, in my humble opinion.
Another legitimate use of document.write comes from the HTML5 Boilerplate index.html example.
<!-- Grab Google CDN's jQuery, with a protocol relative URL; fall back to local if offline -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.3/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="js/libs/jquery-1.6.3.min.js"><\/script>')</script>
I've also seen the same technique for using the json2.js JSON parse/stringify polyfill (needed by IE7 and below).
<script>window.JSON || document.write('<script src="json2.js"><\/script>')</script>
It can block your page
document.write only works while the page is loading; if you call it after the page is done loading, it will overwrite the whole page.
This effectively means you have to call it from an inline script block, and that will prevent the browser from processing the parts of the page that follow. Scripts and images will not be downloaded until the writing block is finished.
Pro:
It's the easiest way to embed inline content from an external (to your host/domain) script.
You can overwrite the entire content in a frame/iframe. I used to use this technique a lot for menu/navigation pieces before more modern Ajax techniques were widely available (1998-2002).
Con:
It forces the rendering engine to pause until the external script is loaded, which can take much longer than an internal script.
It is usually used in such a way that the script is placed within the content, which is considered bad form.
Here's my twopence worth, in general you shouldn't use document.write for heavy lifting, but there is one instance where it is definitely useful:
http://www.quirksmode.org/blog/archives/2005/06/three_javascrip_1.html
I discovered this recently trying to create an AJAX slider gallery. I created two nested divs, and applied width/height and overflow: hidden to the outer <div> with JS. This was so that in the event that the browser had JS disabled, the div would float to accommodate the images in the gallery - some nice graceful degradation.
Thing is, as with the article above, this JS hijacking of the CSS didn't kick in until the page had loaded, causing a momentary flash as the div was loaded. So I needed to write a CSS rule, or include a sheet, as the page loaded.
Obviously, this won't work in XHTML, but since XHTML appears to be something of a dead duck (and renders as tag soup in IE) it might be worth re-evaluating your choice of DOCTYPE...
It overwrites content on the page which is the most obvious reason but I wouldn't call it "bad".
It just doesn't have much use unless you're creating an entire document using JavaScript in which case you may start with document.write.
Even so, you aren't really leveraging the DOM when you use document.write--you are just dumping a blob of text into the document so I'd say it's bad form.
It breaks pages using XML rendering (like XHTML pages).
Best case: some browsers switch back to HTML rendering and everything works fine.
Probable: some browsers disable the document.write() function in XML rendering mode.
Worst case: some browsers fire an XML error whenever the document.write() function is used.
Off the top of my head:
document.write needs to be used during the page load or body load. So if you want to use a script at any other time to update your page content, document.write is pretty much useless.
Technically document.write will only update HTML pages not XHTML/XML. IE seems to be pretty forgiving of this fact but other browsers will not be.
http://www.w3.org/MarkUp/2004/xhtml-faq#docwrite
Chrome may block document.write that inserts a script in certain cases. When this happens, it will display this warning in the console:
A Parser-blocking, cross-origin script, ..., is invoked via
document.write. This may be blocked by the browser if the device has
poor network connectivity.
References:
This article on developers.google.com goes into more detail.
https://www.chromestatus.com/feature/5718547946799104
Browser Violation
document.write is considered a browser violation, as it halts the parser from rendering the page. The parser receives the message that the document is being modified; hence, it gets blocked until JS has completed its process. Only then does the parser resume.
Performance
The biggest consequence of employing such a method is lowered performance. The browser will take longer to load page content. The adverse reaction on load time depends on what is being written to the document. You won't see much of a difference if you are adding a <p> tag to the DOM as opposed to passing an array of 50-some references to JavaScript libraries (something which I have seen in working code and resulted in an 11 second delay - of course, this also depends on your hardware).
All in all, it's best to steer clear of this method if you can help it.
For more info see Intervening against document.write()
I don't think using document.write is a bad practice at all. In simple words, it is like high voltage for inexperienced people. If you use it the wrong way, you get cooked. There are many developers who have used this and other dangerous methods at least once, and they never really dug into their failures. Instead, when something goes wrong, they just bail out and use something safer. Those are the ones who make such statements about what is considered a "Bad Practice".
It's like formatting a hard drive, when you need to delete only a few files and then saying "formatting drive is a bad practice".
Based on analysis done by Google-Chrome Dev Tools' Lighthouse Audit,
For users on slow connections, external scripts dynamically injected via document.write() can delay page load by tens of seconds.
One can think of document.write() (and .innerHTML) as evaluating a source code string. This can be very handy for many applications. For example if you get HTML code as a string from some source, it is handy to just "evaluate" it.
In the context of Lisp, DOM manipulation would be like manipulating a list structure, e.g. create the list (orange) by doing:
(cons 'orange '())
And document.write() would be like evaluating a string, e.g. create a list by evaluating a source code string like this:
(eval-string "(cons 'orange '())")
Lisp also has the very useful ability to create code using list manipulation (like using the "DOM style" to create a JS parse tree). This means you can build up a list structure using the "DOM style", rather than the "string style", and then run that code, e.g. like this:
(eval '(cons 'orange '()))
If you implement coding tools, like simple live editors, it is very handy to have the ability to quickly evaluate a string, for example using document.write() or .innerHTML. Lisp is ideal in this sense, but you can do very cool stuff also in JS, and many people are doing that, like http://jsbin.com/
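In JavaScript terms, the same contrast between "string style" and "DOM style" looks like this (a sketch; the function names are illustrative):

```javascript
// "String style", like eval-string: hand the browser serialized markup.
function addItemStringStyle(container, text) {
  container.innerHTML += "<li>" + text + "</li>";
}

// "DOM style", like consing up the list directly: build real nodes.
function addItemDomStyle(container, text) {
  var li = document.createElement("li");
  li.textContent = text;
  container.appendChild(li);
}
```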
A simple reason why document.write is a bad practice is that you cannot come up with a scenario where you cannot find a better alternative.
Another reason is that you are dealing with strings instead of objects (it is very primitive).
It only appends to documents.
It has nothing of the beauty of for instance the MVC (Model-View-Controller) pattern.
It is a lot more powerful to present dynamic content with ajax+jQuery or angularJS.
The disadvantages of document.write mainly depends on these 3 factors:
a) Implementation
document.write() is mostly used to write content to the screen as soon as that content is needed. This means it can happen anywhere, either in a JavaScript file or inside a script tag within an HTML file. Since script tags can be placed anywhere within an HTML file, it is a bad idea to have document.write() statements inside script blocks that are intertwined with the HTML of a web page.
b) Rendering
Well-designed code will generally take any dynamically generated content, store it in memory, and keep manipulating it as it passes through the code before it is finally written to the screen. So, to reiterate the last point of the preceding section: content rendered in place may appear faster, but it is not available to other code that in turn requires that content for processing. To solve this dilemma, we need to get rid of document.write() and implement things the right way.
c) Impossible Manipulation
Once it's written it's done and over with. We cannot go back to manipulate it without tapping into the DOM.
I think the biggest problem is that any elements written via document.write are added to the end of the page's elements. That's rarely the desired effect with modern page layouts and AJAX. (Keep in mind that the elements in the DOM are temporal; when the script runs can affect its behavior.)
It's much better to set a placeholder element on the page and then manipulate its innerHTML.
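That placeholder approach can be sketched like this (the id and function name are illustrative):

```javascript
// Render markup into a pre-placed placeholder instead of document.write,
// so content lands exactly where intended, even after the page has loaded.
function renderInto(id, html) {
  var el = document.getElementById(id);
  if (el) {
    el.innerHTML = html; // replaces the placeholder's content in place
  }
  return el;
}
```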
So I've recently learned that putting your js at the bottom of the DOM is antiquated, and that I should be once again putting them in the <head> with the “async” and “defer” attributes.
Great. But I'm a bit confused as to which I should use, based on priority.
So I have:
jquery
jquery plugins that don't have immediate effects on the look of the page
jquery plugins that do have immediate effects on the look of the page
my own personal scripts, which have immediate effects on the look of the page, and are also reliant on jquery
Which should get async, and which should get defer?
If I understand all this correctly, the ones that don't have an immediate effect on the look of the site should get defer, while everything else gets async. Correct? Or am I getting these mixed up.
It's quite simple. You should use [async] for scripts which can be executed in any order, and [defer] for scripts which have to be executed after HTML is parsed.
For example, if you have a script that add social sharing icons next to your posts, and this script doesn't rely on any other script, you can use both [async] and [defer]. But if your scripts requires jQuery, you can't use [async], because if you do, it might turn out that it gets executed before jQuery is loaded and it breaks.
If all your scripts require jQuery, then you shouldn't use [async] at all. As for [defer], it depends on whether your scripts access DOM. For plugins it probably doesn't matter, but you'll probably need it for your own code.
If you wrap your scripts in $(document).ready();, you can use [defer] for scripts which don't have immediate effect (e.g. require user interaction).
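In markup, the split described above might look like this (a sketch; the file names are illustrative):

```html
<head>
  <!-- defer: downloaded in parallel, executed in document order after
       parsing, so the dependency order (jQuery first) is preserved -->
  <script src="jquery.min.js" defer></script>
  <script src="jquery.plugins.js" defer></script>
  <script src="site.js" defer></script>

  <!-- async: executed as soon as it arrives, in no guaranteed order;
       only safe for scripts with no dependencies -->
  <script src="analytics.js" async></script>
</head>
```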
I want to place jQuery just before the closing </body> tag, as is recommended. But because I'm using a content management system, inline scripts that require jQuery will be executed before the closing body tag.
My question now is: is it worth collecting jQuery-based scripts in an array and running them at the end of the document once jQuery is loaded (EXAMPLE), or should I just load jQuery in the head section?
You could adopt the approach described here
The idea is to create a global q variable early in the header and use a temporary window.$ function to collect all the jQuery-dependent code/functions/plugins.
window.q = [];
window.$ = function (f) {
  // queue jQuery-dependent callbacks until the real jQuery loads
  q.push(f);
};
And after you load jQuery, you pass all the queued functions to the real $ (i.e. jQuery's ready mechanism):
$.each(q, function (index, f) {
  // hand each queued function to the real jQuery
  $(f);
});
In this way you will be able to safely include your jQuery code before the jQuery lib inclusion.
Whether this is better than loading jQuery in the head depends on how much code you have to push into the temporary q function.
jQuery placed in the <head> section requires a single blocking script. But if you have a lot of code to inject throughout the document, you may end up with a lot of huge blocking scripts that stop the rendering process.
On the contrary, loading a lot of scripts after the DOM-ready event can easily make your page faster to load, so this approach is better and the benefits can be more evident.
So there is no definitive answer valid for all code and all pages: even if this kind of technique is generally good and preferable (because you get the benefit of as-late-as-possible script execution), you should always test both approaches and look at loading and rendering times. The article at the beginning has some considerations on performance, but also explains why Stack Overflow didn't use it.
Just load jQuery in the head, it will be cached after the first load and won't slow down anything after that.
Everything else sounds like it would be over the top and I am not sure that the gain in performance will be that significant to justify the extra work.
Also, sometimes if your page loads slowly with the JavaScript at the bottom, the script can take longer to arrive, which means the page may be visible without JavaScript for a moment and not provide such a good experience to the user.
Is there any drawback to putting code (which will interact with the DOM, adding event listeners and so on) just before the closing </body> tag?
<!-- all the HTML comes before this -->
<script>
(function() {
    do_stuff_with_the_DOM();
})();
</script>
</body>
It seems to work in my own tests, but I never see this method used in tutorials, code examples, or other people's projects. Is there a reason not to do it? Are there edge cases that only seem to pop up when you begin using this in production and see many page views across a variety of browsers?
My project doesn't use jQuery or any other toolkit, and I'm aware of the alternatives that mimic jQuery's $(document).ready() functionality. Do I really need to use one of those? (It should go without saying, but I'm looking to run the code before window.load.)
Note that the code I want to run (do_stuff_with_the_DOM() in the example above) would be defined in an external script file, if that makes a difference.
You should put your JavaScript code in the place that makes the most sense for what it needs to do. In most cases, you need your js to attach events to DOM objects, which is hard to imagine if those DOM objects don't exist when the js is running. So putting it at the bottom of the html is a sensible, simple, and common approach. Is it the best? That's arguable.
Another approach is to attach your JavaScript to the various events that different browsers fire when the DOM is fully loaded. There is nothing wrong with this, although detractors don't like that it's often done in a way that requires an additional blocking HTTP request in the head element.
Event delegation offers a third approach that lets you attach events to parent elements (such as body) very early; when appropriate child elements exist, the events will fire as if they had been attached to those elements all along. It's a very cool approach with theoretically the best early-loading performance of any of the above, but it has pitfalls: not all events bubble all the way to the top as you might expect, and it often tempts you to separate your JavaScript into multiple chunks, which can violate separation of content and behavior.
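A small sketch of that delegation approach (the function name and selector are illustrative; Element.closest support is assumed):

```javascript
// One listener on a parent handles events for matching descendants,
// including elements created after the listener was attached.
function delegate(parent, selector, handler) {
  parent.addEventListener("click", function (event) {
    // Walk up from the event target to find a matching descendant.
    var match = event.target.closest(selector);
    if (match && parent.contains(match)) {
      handler(match, event);
    }
  });
}
```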
In General
Putting code just before </body> should always be the aim.
It's highly recommended, as the downloading of scripts (when requesting external JavaScript files) blocks parallel downloading (i.e. while a script is downloading, nothing else, be it another script or an image for example, can be downloaded at the same time).
The only time you should have an issue with this, is in something like a poor CMS system where you need to have jQuery in-place in the <head> in order for some of its scripts to work.
Using inline JavaScript
Adding inline JavaScript (or inline CSS code) to a page is generally considered bad practice as, for one, it's a merging of concerns (i.e. you no longer have true separation between HTML/CSS/JS).
I cannot think of a negative performance issue if you did have all your code inlined; indeed, Google uses this as a practice (they load all their JavaScript in a big comment, so that it isn't parsed, and then eval() elements of this blob of "text" as and when they need them).
I would note, however, that it's unlikely nowadays that you'll have many pages that don't at some point have a requirement on at least one external JavaScript file (be that jQuery, MooTools, Underscore or Backbone). In which case, as you will always have at least one external file (unless you're going the Google route), you might as well put both external references AND inline code together... at the bottom. Creating consistency.
References
Yahoo Developer Network best practices
Google Page Speed - Defer the loading of JavaScript
Recently I saw some HTML with only a single <script> element in its <head>...
<head>
<title>Example</title>
<script src="script.js" type="text/javascript"></script>
<link href="plain.css" type="text/css" rel="stylesheet" />
</head>
This script.js then adds any other necessary <script> elements and <link> elements to the document using document.write(...): (or it could use document.createElement(...) etc)
document.write("<link href=\"javascript-enabled.css\" type=\"text/css\" rel=\"stylesheet\" />");
document.write("<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js\" type=\"text/javascript\"></script>");
document.write("<script src=\"https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.10/jquery-ui.min.js\" type=\"text/javascript\"></script>");
document.write("<link href=\"http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.0/themes/trontastic/jquery-ui.css\" type=\"text/css\" rel=\"stylesheet\" />")
document.write("<script src=\"validation.js\" type=\"text/javascript\"></script>")
Note that there is a plain.css CSS file in the document <head> and script.js just adds any and all CSS and JavaScript which would be used by a JS-enabled user agent.
What are some of the pros and cons of this technique?
The blocking nature of document.write
document.write will pause everything that the browser is working on for the page (including parsing). It is highly recommended to avoid it because of this blocking behavior. The browser has no way of knowing what you're going to shove into the HTML text stream at that point, or whether the write will totally trash everything on the DOM tree, so it has to stop until you're finished.
Essentially, loading scripts this way forces the browser to stop parsing HTML. If your script is inline, then the browser will also execute those scripts before it goes on. Therefore, as a side note, it is always recommended that you defer loading scripts until after your page is parsed and you've shown a reasonable UI to the user.
If your scripts are loaded from separate files in the "src" attribute, then the scripts may not be consistently executed across all browsers.
Losing browser speed optimizations and predictability
This way, you lose a lot of the performance optimizations made by modern browsers. Also, when your scripts execute may be unpredictable.
For example, some browsers will execute the scripts right after you "write" them. In such cases, you lose parallel downloading of scripts, stylesheets, and other resources (many browsers can download resources, stylesheets, and scripts all at the same time), because the browser doesn't see the second script tag until it has downloaded and executed the first.
Some browsers defer the scripts until after the end to execute them.
The browser cannot continue to parse the HTML while document.write is running, and in certain cases, while the written scripts are executing, due to the blocking behavior of document.write, so your page shows up much more slowly.
In other words, your site has just become as slow as it was loading on a decades-old browser with no optimizations.
Why would somebody do it like this?
The reason you may want to use something like this is usually for maintainability. For instance, you may have a huge site with thousands of pages, each loading the same set of scripts and stylesheets. However, when you add a script file, you don't want to edit thousands of HTML files to add the script tags. This is particularly troublesome when loading JavaScript libraries (e.g. Dojo or jQuery) -- you have to change each HTML page when you upgrade to the next version.
The problem is that JavaScript doesn't have an #include or #import statement for you to include other files.
Some solutions
The solution to this is probably not by injecting scripts via document.write, but by:
Using #import directives in stylesheets
Using a server scripting language (e.g. PHP) to manage your "master page" and to generate all other pages (however, if you can't use this and must maintain many HTML pages individually, this is not a solution)
Avoid document.write, but load the JavaScript files via XHR, then eval() them -- this may have security concerns though
Use a JavaScript Library (e.g. Dojo) that has module-loading features so that you can keep a master JS file which loads other files. You won't be able to avoid having to update the version numbers of the library file though...
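For comparison, the createElement-based script injection that such loaders use under the hood can be sketched like this (the function name is illustrative); unlike document.write it doesn't block parsing, though execution order is not guaranteed for async scripts:

```javascript
// Inject a script tag with DOM methods instead of document.write.
// The download doesn't block the parser; the onload callback signals
// when the file has been fetched and executed.
function loadScript(src, onload) {
  var s = document.createElement("script");
  s.src = src;
  s.async = true;
  if (onload) {
    s.onload = onload;
  }
  (document.head || document.documentElement).appendChild(s);
  return s;
}
```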
One major disadvantage is browser incompatibility. Not all browsers correctly fetch and incorporate the resources into the DOM, so it's risky to use this approach. This is more true of stylesheets than scripts.
Another issue is one of maintainability. Concatenating and writing strings to add DOM elements on the client side can become a maintenance nightmare. It's better to use DOM methods such as createElement to semantically create elements.
One obvious advantage is that it makes conditional use of resources much easier. You can have logic that determines which resources to load, thereby reducing the bandwidth consumption and overall processing time for the page. I would use a library call such as jQuery $.getScript() to load scripts versus document.write. The advantage being that such an approach is cleaner and also allows you to have code executed when the request is completed or fails.
Well, I may as well throw my hat in the ring on this one...
If you examine Google's Closure Library, base.js, you will see that document.write is used in their writeScriptTag_() function. This is an essential part of the dependency-management system which Closure provides, and is an enormous advantage when creating a complicated, multi-file, library-based JavaScript application: it lets the file/code prerequisites determine loading order. We currently use this technique and have had little trouble with it. TBH, we have not had a single issue with browser compatibility, and we test regularly on IE 6/7/8, FF3/2, Safari 4/5 and Chrome latest.
The only disadvantage that we have had so far is that it can be challenging to track down issues caused by loading a resource twice, or failing to load one at all. Since the act of loading resources is programmatic, it is subject to programmatic errors, and unlike adding the tags directly to the HTML, it can be hard to see what the exact loading order is. However, this issue can be largely overcome by using a library with some form of dependency management system such as closure, or dojo.
EDIT: I have made a few comments to this nature, but I thought it best to summarize in my answer:
There are some problems with dojo.require() and jQuery.getScript() (both of which ultimately perform an Ajax request and eval()):
Loading via Ajax means no cross-domain scripting, i.e. no loading JavaScript that is not from your site. This will be a problem if you want to include https://ajax.googleapis.com as listed in the description.
Eval'd scripts will not show up in a JavaScript debugger's list of page scripts, making debugging quite challenging. Recent releases of Firebug will show you eval'd code; however, filenames are lost, making the act of setting breakpoints tedious. AFAIK, the WebKit JavaScript console and IE8 developer tools do not show eval'd scripts.
It has the advantage that you don't need to repeat the script references in each HTML file. The disadvantage is that the browser must fetch and execute the main JavaScript file before it can load the others.
I guess one advantage I could think of would be if you use these scripts on multiple pages, you only have to remember to include one script, and it saves some space.
At Google PageSpeed, they highly discourage you from using this technique, because it makes things slower. Apart from the sequential loading of your script.js before all the others, there's another catch:
Modern browsers use speculative parsers to more efficiently discover external resources [...] Thus, using JavaScript's document.write() to fetch external resources makes it impossible for the speculative parser to discover those resources, which can delay the download, parsing, and rendering of those resources.
It's also possible that this was written as a recommendation by an SEO firm to make the head element shorter, so that unique content is closer to the top of the document, also creating a higher text-to-HTML ratio. All in all, though, it does not sound like a very good way of achieving this; a better approach (though it would make maintenance more time-consuming) would probably be to condense the JavaScript into a single .js file and the CSS into a single .css file, if it were deemed utterly necessary to reduce the size of the head element.
A great disadvantage is that adding scripts into the head will pause the processing of the document until those scripts are completely downloaded, parsed, and executed (because the browser thinks they may use document.write). This will hurt responsiveness.
Nowadays it is recommended that you put your script tags right before </body>. Of course this is not possible 100% of the time, but if you are using unobtrusive JavaScript (as you should), all scripts can be repositioned at the end of the document.
HTML5 came up with the async attribute, which suggests that the browser execute the script only after the main document has been loaded. This is the behaviour of script-inserted scripts in many browsers, but not in all of them.
I advise against using document.write at all costs. Even without it, this approach results in one extra request to the server. (We like to minimize the number of requests, for example with CSS sprites.)
And yes, as others mentioned earlier, if scripting is disabled your page will be shown without CSS (which makes it potentially unusable).
If JavaScript is disabled - <script> and <link> elements won't be added at all.
If you place JavaScript init functions at the bottom of your page (which is a good practice) and link CSS with JavaScript, it may cause some delay before the CSS loads (a broken layout will be visible for a short time).