Should CSS always precede JavaScript?

In countless places online I have seen the recommendation to include CSS prior to JavaScript. The reasoning generally takes this form:
When it comes to ordering your CSS and JavaScript, you want your CSS
to come first. The reason is that the rendering thread has all the
style information it needs to render the page. If the JavaScript
includes come first, the JavaScript engine has to parse it all before
continuing on to the next set of resources. This means the rendering
thread can't completely show the page, since it doesn't have all the
styles it needs.
My actual testing reveals something quite different:
My test harness
I use the following Ruby script to generate specific delays for various resources:
require 'rubygems'
require 'eventmachine'
require 'evma_httpserver'
require 'date'

class Handler < EventMachine::Connection
  include EventMachine::HttpServer

  def process_http_request
    resp = EventMachine::DelegatedHttpResponse.new(self)

    return unless @http_query_string

    path = @http_path_info
    array = @http_query_string.split("&").map { |s| s.split("=") }.flatten
    parsed = Hash[*array]
    delay = parsed["delay"].to_i / 1000.0
    jsdelay = parsed["jsdelay"].to_i

    delay = 5 if delay > 5
    jsdelay = 5000 if jsdelay > 5000
    delay = 0 if delay < 0
    jsdelay = 0 if jsdelay < 0

    # Block which fulfills the request
    operation = proc do
      sleep delay
      if path.match(/\.js$/)
        resp.status = 200
        resp.headers["Content-Type"] = "text/javascript"
        resp.content = "(function(){
          var start = new Date();
          while(new Date() - start < #{jsdelay}){}
        })();"
      end
      if path.match(/\.css$/)
        resp.status = 200
        resp.headers["Content-Type"] = "text/css"
        resp.content = "body {font-size: 50px;}"
      end
    end

    # Callback block to execute once the request is fulfilled
    callback = proc do |res|
      resp.send_response
    end

    # Let the thread pool (20 Ruby threads) handle the request
    EM.defer(operation, callback)
  end
end

EventMachine::run {
  EventMachine::start_server("0.0.0.0", 8081, Handler)
  puts "Listening..."
}
The above mini server allows me to set arbitrary delays for JavaScript files (both a server-side transfer delay and a client-side execution delay) and arbitrary CSS delays. For example, http://10.0.0.50:8081/test.css?delay=500 gives me a 500 ms delay transferring the CSS.
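Before trusting any numbers, a quick sanity check of the harness can be run from a browser console; this is only a sketch (host and port as above, timings illustrative):

var t0 = performance.now();
var s = document.createElement('script');
// 400 ms transfer delay plus a 1000 ms busy-loop on execution
s.src = 'http://10.0.0.50:8081/test.js?delay=400&jsdelay=1000';
s.onload = function() {
    // load fires after a classic script executes, so expect roughly 1400 ms
    console.log('script took', Math.round(performance.now() - t0), 'ms');
};
document.head.appendChild(s);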
I use the following page to test.
<!DOCTYPE html>
<html>
  <head>
    <title>test</title>
    <script type='text/javascript'>
      var startTime = new Date();
    </script>
    <link href="http://10.0.0.50:8081/test.css?delay=500" type="text/css" rel="stylesheet">
    <script type="text/javascript" src="http://10.0.0.50:8081/test2.js?delay=400&jsdelay=1000"></script>
  </head>
  <body>
    <p>
      Elapsed time is:
      <script type='text/javascript'>
        document.write(new Date() - startTime);
      </script>
    </p>
  </body>
</html>
When I include the CSS first, the page takes 1.5 seconds to render.
When I include the JavaScript first, the page takes 1.4 seconds to render.
I get similar results in Chrome, Firefox and Internet Explorer. In Opera, however, the ordering simply does not matter.
What appears to be happening is that the JavaScript interpreter refuses to start until all the CSS is downloaded. So, it seems that having JavaScript includes first is more efficient as the JavaScript thread gets more run time.
Am I missing something? Is the recommendation to place CSS includes prior to JavaScript includes not correct?
Clearly we could add async, use setTimeout to free up the render thread, put the JavaScript in the footer, or use a JavaScript loader. The point here is about the ordering of essential JavaScript bits and CSS bits in the head.

This is a very interesting question. I've always put my CSS <link href="...">s before my JavaScript <script src="...">s because "I read one time that it's better." So, you're right; it's high time we do some actual research!
I set up my own test harness in Node.js (code below). Basically, I:
Made sure there was no HTTP caching so the browser would have to do a full download each time a page is loaded.
To simulate reality, I included jQuery and the H5BP CSS (so there's a decent amount of script/CSS to parse).
Set up two pages - one with CSS before script, one with CSS after script.
Recorded how long it took for the external script in the <head> to execute.
Recorded how long it took for the inline script in the <body> to execute, which is analogous to DOMReady.
Delayed sending CSS and/or script to the browser by 500 ms.
Ran the test 20 times in the three major browsers.
Results
First, with the CSS file delayed by 500 ms (the unit is milliseconds):
Browser:    | Chrome 18    | IE 9         | Firefox 9
CSS:        | first  last  | first  last  | first  last
=======================================================
Header Exec |              |              |
Average     |  583     36  |  559     42  |  565     49
St Dev      |   15     12  |    9      7  |   13      6
------------|--------------|--------------|------------
Body Exec   |              |              |
Average     |  584    521  |  559    513  |  565    519
St Dev      |   15      9  |    9      5  |   13      7
Next, I set jQuery to delay by 500 ms instead of the CSS:
Browser:    | Chrome 18    | IE 9         | Firefox 9
CSS:        | first  last  | first  last  | first  last
=======================================================
Header Exec |              |              |
Average     |  597    556  |  562    559  |  564    564
St Dev      |   14     12  |   11      7  |    8      8
------------|--------------|--------------|------------
Body Exec   |              |              |
Average     |  598    557  |  563    560  |  564    565
St Dev      |   14     12  |   10      7  |    8      8
Finally, I set both jQuery and the CSS to delay by 500 ms:
Browser:    | Chrome 18    | IE 9         | Firefox 9
CSS:        | first  last  | first  last  | first  last
=======================================================
Header Exec |              |              |
Average     |  620    560  |  577    577  |  571    567
St Dev      |   16     11  |   19      9  |    9     10
------------|--------------|--------------|------------
Body Exec   |              |              |
Average     |  623    561  |  578    580  |  571    568
St Dev      |   18     11  |   19      9  |    9     10
Conclusions
First, it's important to note that I'm operating under the assumption that you have scripts located in the <head> of your document (as opposed to the end of the <body>). There are various arguments regarding why you might link to your scripts in the <head> versus the end of the document, but that's outside the scope of this answer. This is strictly about whether <script>s should go before <link>s in the <head>.
In modern DESKTOP browsers, it looks like linking to CSS first never provides a performance gain. Putting CSS after script yields a trivial gain when both CSS and script are delayed, but a large gain when only the CSS is delayed. (Shown by the last columns in the first set of results.)
Given that linking to CSS last does not seem to hurt performance but can provide gains under certain circumstances, you should link to external style sheets after external scripts, but only on desktop browsers and only if the performance of old browsers is not a concern. Read on for the mobile situation.
Why?
Historically, when a browser encountered a <script> tag pointing to an external resource, the browser would stop parsing the HTML, retrieve the script, execute it, then continue parsing the HTML. In contrast, if the browser encountered a <link> for an external style sheet, it would continue parsing the HTML while it fetched the CSS file (in parallel).
Hence the widely repeated advice to put style sheets first: they would download first, and the first script could then download in parallel.
However, modern browsers (including all of the browsers I tested with above) have implemented speculative parsing, where the browser "looks ahead" in the HTML and begins downloading resources before scripts download and execute.
In old browsers without speculative parsing, putting scripts first will affect performance since they will not download in parallel.
Browser Support
Speculative parsing was first implemented in: (along with the percentage of worldwide desktop browser users using this version or greater as of Jan 2012)
Chrome 1 (WebKit 525) (100%)
Internet Explorer 8 (75%)
Firefox 3.5 (96%)
Safari 4 (99%)
Opera 11.60 (85%)
In total, roughly 85% of desktop browsers in use today support speculative loading. Putting scripts before CSS will incur a performance penalty for the remaining 15% of users globally; your mileage may vary based on your site's specific audience. (And remember, that number is shrinking.)
On mobile browsers, it's a little harder to get definitive numbers simply due to how heterogeneous the mobile browser and OS landscape is. Since speculative parsing was implemented in WebKit 525 (released Mar 2008), and just about every worthwhile mobile browser is based on WebKit, we can conclude that "most" mobile browsers should support it. According to quirksmode, iOS 2.2/Android 1.0 use WebKit 525. I have no idea what Windows Phone looks like.
However, I ran the test on my Android 4 device, and while I saw numbers similar to the desktop results, when I hooked it up to the fantastic new remote debugger in Chrome for Android, the Network tab showed that the browser was actually waiting to download the CSS until the JavaScript had completely loaded. In other words, even the newest version of WebKit for Android does not appear to support speculative parsing. I suspect it might be turned off due to the CPU, memory, and/or network constraints inherent to mobile devices.
Code
Forgive the sloppiness; this was quick and dirty.
File app.js
var express = require('express')
  , app = express.createServer()
  , fs = require('fs');

app.listen(90);

var file = {};
fs.readdirSync('.').forEach(function(f) {
    console.log(f);
    file[f] = fs.readFileSync(f);
    if (f != 'jquery.js' && f != 'style.css') app.get('/' + f, function(req, res) {
        res.contentType(f);
        res.send(file[f]);
    });
});

app.get('/jquery.js', function(req, res) {
    setTimeout(function() {
        res.contentType('text/javascript');
        res.send(file['jquery.js']);
    }, 500);
});

app.get('/style.css', function(req, res) {
    setTimeout(function() {
        res.contentType('text/css');
        res.send(file['style.css']);
    }, 500);
});

var headresults = {
    css: [],
    js: []
}, bodyresults = {
    css: [],
    js: []
};

app.post('/result/:type/:time/:exec', function(req, res) {
    headresults[req.params.type].push(parseInt(req.params.time, 10));
    bodyresults[req.params.type].push(parseInt(req.params.exec, 10));
    res.end();
});

app.get('/result/:type', function(req, res) {
    var o = '';
    headresults[req.params.type].forEach(function(i) {
        o += '\n' + i;
    });
    o += '\n';
    bodyresults[req.params.type].forEach(function(i) {
        o += '\n' + i;
    });
    res.send(o);
});
File css.html
<!DOCTYPE html>
<html>
  <head>
    <title>CSS first</title>
    <script>var start = Date.now();</script>
    <link rel="stylesheet" href="style.css">
    <script src="jquery.js"></script>
    <script src="test.js"></script>
  </head>
  <body>
    <script>document.write(jsload - start);bodyexec=Date.now()</script>
  </body>
</html>
File js.html
<!DOCTYPE html>
<html>
  <head>
    <title>CSS last</title>
    <script>var start = Date.now();</script>
    <script src="jquery.js"></script>
    <script src="test.js"></script>
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <script>document.write(jsload - start);bodyexec=Date.now()</script>
  </body>
</html>
File test.js
var jsload = Date.now();
$(function() {
    $.post('/result' + location.pathname.replace('.html','') + '/' + (jsload - start) + '/' + (bodyexec - start));
});
jQuery was jquery-1.7.1.min.js

There are two main reasons to put CSS before JavaScript.
Old browsers (Internet Explorer 6-7, Firefox 2, etc.) would block all subsequent downloads when they started downloading a script. So if you have a.js followed by b.css they get downloaded sequentially: first a then b. If you have b.css followed by a.js they get downloaded in parallel so the page loads more quickly.
Nothing is rendered until all stylesheets are downloaded - this is true in all browsers. Scripts are different - they block rendering of all DOM elements that are below the script tag in the page. If you put your scripts in the HEAD, it means the entire page is blocked from rendering until all stylesheets and all scripts are downloaded.
While it makes sense to block all rendering for stylesheets (so you get the correct styling the first time and avoid the flash of unstyled content, FOUC), it doesn't make sense to block rendering of the entire page for scripts. Often scripts don't affect any DOM elements, or affect only a portion of them. It's best to load scripts as low in the page as possible, or better yet, load them asynchronously.
It's fun to create examples with Cuzillion. For example, this page has a script in the HEAD so the entire page is blank until it's done downloading. However, if we move the script to the end of the BODY block the page header renders since those DOM elements occur above the SCRIPT tag, as you can see on this page.
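As a minimal sketch of the "load them asynchronously" option (the file name is hypothetical), a script element injected from JavaScript downloads without blocking parsing or rendering:

var s = document.createElement('script');
s.src = '/js/non-critical.js'; // hypothetical file
s.async = true;                // explicit, though injected scripts are async by default
document.head.appendChild(s);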

I would not put too much emphasis on the results that you have gotten. I believe it is subjective, but I have a reason to explain why it is better to put CSS before JavaScript.
During the loading of your website, there are two scenarios that you would see:
Case 1: white screen → unstyled website → styled website → interaction → styled and interactive website
Case 2: white screen → unstyled website → interaction → styled website → styled and interactive website
I honestly can't imagine anyone choosing Case 2. It would mean that visitors using slow Internet connections are faced with an unstyled website that already allows them to interact with it using JavaScript (since that is already loaded). Furthermore, the amount of time spent looking at an unstyled website would be maximized this way. Why would anyone want that?
It also works better, as jQuery states:
"When using scripts that rely on the value of CSS style properties,
it's important to reference external stylesheets or embed style
elements before referencing the scripts".
When the files are loaded in the wrong order (first JavaScript, then CSS), any JavaScript code relying on properties set in CSS files (for example, the width or height of a div) may not work correctly. It seems that with the wrong loading order the correct properties are 'sometimes' known to JavaScript (perhaps this is caused by a race condition?). This effect seems bigger or smaller depending on the browser used.
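To make that failure mode concrete, here is a minimal sketch; the element IDs are hypothetical, and assume an external stylesheet sets #sidebar { width: 300px; }:

// If this runs before the stylesheet has been applied, offsetWidth
// reflects the unstyled layout instead of the 300 px from the CSS.
var sidebar = document.getElementById('sidebar');
document.getElementById('content').style.marginLeft = sidebar.offsetWidth + 'px';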

Were your tests performed on your personal computer, or on a web server? Is it a blank page, or a complex online system with images, databases, etc.? Are your scripts performing a simple hover event action, or are they a core component of how your website renders and interacts with the user? There are several things to consider here, and the relevance of these recommendations almost always becomes a rule when you venture into high-caliber web development.
The purpose of the "put stylesheets at the top and scripts at the bottom" rule is that, in general, it's the best way to achieve optimal progressive rendering, which is critical to the user experience.
All else aside: assuming your test is valid, and you really are producing results contrary to the popular rules, it'd come as no surprise, really. Every website (and everything it takes to make the whole thing appear on a user's screen) is different and the Internet is constantly evolving.

I include CSS files before JavaScript for a different reason.
If my JavaScript code needs to do dynamic sizing of some page element (for those corner cases where CSS is really a pain in the back), then loading the CSS after the JS is running can lead to race conditions, where the element is resized before the CSS styles are applied and thus looks weird when the styles finally kick in. If I load the CSS beforehand, I can guarantee that things run in the intended order and that the final layout is what I want it to be.

Is the recommendation to include CSS before JavaScript invalid?
Not if you treat it as simply a recommendation. But if you treat it as a hard-and-fast rule? Yes, it is invalid.
From Window: DOMContentLoaded event:
Stylesheet loads block script execution, so if you have a <script>
after a <link rel="stylesheet" ...> the page will not finish parsing
and DOMContentLoaded will not fire - until the stylesheet is loaded.
It appears that you need to know what each script relies on and make sure that its execution is delayed until after the right completion event. If the script relies only on the DOM, it can resume in ondomready/domcontentloaded. If it relies on images being loaded or style sheets being applied, then if I read the above reference correctly, that code must be deferred until the onload event.
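A minimal sketch of wiring work to the right completion event; which handler is appropriate depends on what the code touches:

document.addEventListener('DOMContentLoaded', function() {
    // Safe for DOM-only work: the document is parsed, but images and
    // stylesheets may still be loading.
});
window.addEventListener('load', function() {
    // Safe for layout-dependent work: stylesheets and images are done,
    // so computed styles and dimensions are reliable here.
});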
I don't think that one sock size fits all, even though that is the way they are sold and I know that one shoe size does not fit all. I don't think that there is a definitive answer to which to load first, styles or script. It is more a case by case decision of what must be loaded in what order and what can be deferred until later as not being on the "critical path".
To speak to the observer who commented that it is better to delay the user's ability to interact until the sheet is pretty: there are many of you out there, and you annoy your counterparts who feel the opposite. They came to a site to accomplish a purpose, and delays to their ability to interact with a site while waiting for things that don't matter to finish loading are very frustrating. I am not saying that you are wrong, only that you should be aware that there is another faction that does not share your priority.
This question particularly applies to all of the ads being placed on web sites. I would love it if site authors rendered just placeholder divs for the ad content and made sure that their site was loaded and interactive before injecting the ads in an onload event. Even then I would like to see the ads loaded serially instead of all at once because they impact my ability to even scroll the site content while the bloated ads are loading. But that is just one person's point of view.
Know your users and what they value.
Know your users and what browsing environment they use.
Know what each file does, and what its prerequisites are. Making everything work will take precedence over both speed and pretty.
Use tools that show you the network time line when developing.
Test in each of the environments that your users use. It may be necessary to dynamically (server side, when creating the page) alter the loading order based on the user's environment.
When in doubt, alter the order and measure again.
It is possible that intermixing styles and scripts in the load order will be optimal; not all of one then all of the other.
Experiment with not just the order in which to load the files, but where: in the head? In the body? After the body? On DOM ready/loaded? On load?
Consider async and defer options when appropriate to reduce the net delay the user will experience before being able to interact with the page. Test to determine if they help or hurt.
There will always be trade-offs to consider when evaluating the optimal load order. Pretty vs. responsive being just one.

Updated 2017-12-16
I was not sure about the tests in the OP. I decided to experiment a little and ended up busting some of the myths.
Synchronous <script src...> will block downloading of the resources
below it until it is downloaded and executed
This is no longer true. Have a look at the waterfall generated by Chrome 63:
<head>
    <script src="//alias-0.redacted.com/payload.php?type=js&delay=333&rand=1"></script>
    <script src="//alias-1.redacted.com/payload.php?type=js&delay=333&rand=2"></script>
    <script src="//alias-2.redacted.com/payload.php?type=js&delay=333&rand=3"></script>
</head>
<link rel=stylesheet> will not block download and execution of
scripts below it
This is incorrect. The style sheet will not block the download of scripts below it, but it will block their execution (a little explanation here). Have a look at the performance chart generated by Chrome 63:
<link href="//alias-0.redacted.com/payload.php?type=css&delay=666" rel="stylesheet">
<script src="//alias-1.redacted.com/payload.php?type=js&delay=333&block=1000"></script>
Keeping the above in mind, the results in the OP can be explained as follows:
CSS First:
CSS Download   500 ms: <==================>
JS Download    400 ms: <==============>
JS Execution  1000 ms:                     <======================================>
DOM Ready    @1500 ms:                                                             ◆
JavaScript First:
JS Download    400 ms: <==============>
CSS Download   500 ms: <==================>
JS Execution  1000 ms:                 <======================================>
DOM Ready    @1400 ms:                                                         ◆
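The arithmetic behind the two timelines, as a tiny sketch:

var cssDownload = 500, jsDownload = 400, jsExec = 1000;
// CSS first: the stylesheet blocks script execution, so execution
// cannot start until both downloads are finished.
console.log('CSS first:', Math.max(cssDownload, jsDownload) + jsExec); // 1500
// JS first: the script executes as soon as its own download finishes.
console.log('JS first :', jsDownload + jsExec); // 1400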

The 2020 answer: it probably doesn't matter
The best answer here was from 2012, so I decided to test for myself. On Chrome for Android, the JS and CSS resources are downloaded in parallel, and I could not detect a difference in page rendering speed.
I included a more detailed writeup on my blog

I'm not exactly sure how you're testing 'render' time, as you're using JavaScript. However, consider this:
One page on your site is 50 kB, which is not unreasonable. The user is on the East Coast while your server is on the West. The MTU is definitely not 10k, so there will be a few trips back and forth. It may take half a second to receive your page and style sheets. Typically (for me) JavaScript (via jQuery plugins and such) is much larger than CSS. There's also what happens when your Internet connection chokes midway through the page, but let's ignore that (it happens to me occasionally, and I believe the CSS still renders, but I am not 100% sure).
Since the CSS is in the head, there may be additional connections to get it, which means it can potentially finish before the page does. Anyway, during the time the remainder of the page and the JavaScript files (which are many more bytes) take to download, the page is unstyled, which makes the site/connection appear slow.
Even if the JavaScript interpreter refuses to start until the CSS is done, the time taken to download the JavaScript code, especially when far from the server, is cutting into the CSS time, which will make the site not look pretty.
It’s a small optimization, but that’s the reason for it.

Here is a summary of all the previous major answers:
For modern browsers, put the CSS content wherever you like. They analyze your HTML file ahead of time (which is called speculative parsing) and start downloading CSS in parallel with HTML parsing.
For old browsers, keep putting the CSS on top (if you don't want to show a naked but interactive page first).
For all browsers, put the JavaScript content as far down the page as possible, since it will halt parsing of your HTML. Preferably, download it asynchronously (i.e., with an Ajax call).
There are also some experimental results for a particular case which claim that putting JavaScript first (as opposed to the traditional wisdom of putting CSS first) gives better performance, but there isn't any logical reasoning given for it, and it lacks validation regarding widespread applicability, so you can ignore it for now.
So, to answer the question: yes, the recommendation to include CSS before JavaScript is invalid for modern browsers. Put CSS wherever you like, and put JavaScript as close to the end as possible.

Steve Souders has already given a definitive answer, but...
I wonder whether there's an issue with both Sam's original test and Josh's repeat of it.
Both tests appear to have been performed on low latency connections where setting up the TCP connection will have a trivial cost.
How this affects the result of the test I'm not sure and I'd want to look at the waterfalls for the tests over a 'normal' latency connection but...
The first file downloaded should get the connection used for the HTML page, and the second file downloaded will get the new connection. (Flushing the <head> early alters that dynamic, but it's not being done here.)
In newer browsers the second TCP connection is opened speculatively so the connection overhead is reduced / goes away. In older browsers this isn't true, and the second connection will have the overhead of being opened.
Quite how/if this affects the outcome of the tests I'm not sure.

I think this won't be true in all cases, because CSS content downloads in parallel, but JavaScript can't. Consider the same case:
Instead of having a single piece of CSS content, take two or three CSS files and try them out these ways:
CSS..CSS..JavaScript
CSS..JavaScript..CSS
JavaScript..CSS..CSS
I'm sure CSS..CSS..JavaScript will give a better result than all others.

We have to keep in mind that new browsers have worked on their JavaScript engines, their parsers, and so on, optimizing common code and markup problems in such a way that the problems experienced in ancient browsers such as Internet Explorer 8 or earlier are no longer relevant, not only with regard to markup but also to the use of JavaScript variables, element selectors, etc.
I can see in the not-so-distant future a situation where technology has reached a point where performance is not really an issue any more.

Personally, I would not place too much emphasis on such "folk wisdom." What may have been true in the past might well not be true now. I would assume that all of the operations relating to a web-page's interpretation and rendering are fully asynchronous ("fetching" something and "acting upon it" are two entirely different things that might be being handled by different threads, etc.), and in any case entirely beyond your control or your concern.
I'd put CSS references in the "head" portion of the document, along with any references to external scripts. (Some scripts may demand to be placed in the body, and if so, oblige them.)
Beyond that ... if you observe that "this seems to be faster/slower than that, on this/that browser," treat this observation as an interesting but irrelevant curiosity and don't let it influence your design decisions. Too many things change too fast. (Anyone want to lay any bets on how many minutes it will be before the Firefox team comes out with yet another interim-release of their product? Yup, me neither.)

Related

Why is Google using the term "Render-Blocking JavaScript"?

See:
https://developers.google.com/speed/docs/insights/BlockingJS
Google talks there about "Render-Blocking JavaScript", but in my opinion that term is incorrect, confusing, and misleading. It almost looks like Google does not understand it either?
The point is that JavaScript execution always pauses / blocks rendering AND always pauses / blocks the "HTML parser" (at least in Chrome and Firefox). It even blocks in the case of an external js file combined with an async script tag!
So talking about removing "render-blocking JavaScript" by, for example, using async implies that there is also non-blocking JavaScript, or that "async JavaScript execution" does not block rendering, but that's not true!
The correct term would be "Render-Blocking Download(s)". With async you avoid exactly that: downloading the js file will not pause / block rendering. But the execution will still block rendering.
One more example which confirms that Google does not appear to "understand" it.
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Test</title>
  </head>
  <body>
    Some HTML line and this is above the fold
    <script>
      // Synchronous delay of 5 seconds
      var timeWhile = new Date().getTime();
      while( new Date().getTime() - timeWhile < 5000 );
    </script>
  </body>
</html>
I tested it in Firefox and Chrome, and they show (render) "Some HTML line and this is above the fold" after 5 seconds and not within 5 seconds! It looks like Google thinks that in a case like that the JavaScript will not block rendering, but as expected from theory, it does. Before the js execution starts, all the html is already in the DOM (except the closing body / html tags), but rendering is not done yet and will be paused. So if Google were really aware of this, then Chrome would first finish rendering before starting the execution of the JavaScript.
If you take the example above and you're using:
<script src="delay.js" async></script>
or
<script src="delay.js"></script>
instead of the inline JavaScript, then it can also give the same results as the example above. For example:
If the preloader (scanning for files to download ahead of time) has already downloaded "delay.js" before the "HTML parser" reaches the JavaScript part.
Usually external files from Google, Facebook, et cetera are already stored in the cache, so there is no download and they just take the file from cache.
In cases like that (and also with async), the result will be the same as the example above (at least in quite a lot of cases), because if there is no extra download time, the "JavaScript execution" can already start before the preceding html has finished rendering.
So in a case like that you could even consider putting "no-cache" / "no-store" on delay.js (or even an extra delay), to make your page render faster. By forcing a download (or extra delay) you give the browser some extra time to finish rendering the preceding html before executing the render-blocking JavaScript.
So I really don't understand why Google (and others) are using the term "Render-Blocking JavaScript", while from theory and from "real life" examples it looks like it's the wrong term and wrong thinking. I see no one talking about this on the internet, so I don't understand. I know I am f**king intelligent (j/k), but it looks kind of weird to me to be the only one with the thoughts above.
I work with developers on Chrome, Firefox, Safari and Edge, and I can assure you that the people working on these aspects of the browser understand the difference between async/defer and neither. You might find others will react more politely to your questions if you ask them politely.
Here's an image from the HTML spec on script loading and execution:
This shows that the blocking happens during fetch if a classic script has neither async nor defer. It also shows that execution always blocks parsing, or certainly the observable effects of parsing. This is because the DOM and JS run on the same thread.
I tested it in Firefox and Chrome, and they show (render) "Some HTML line and this is above the fold" after 5 seconds and not within 5 seconds!
Browsers may render the line above, but nothing below. Whether the above line renders depends on the timing of the event loop in regards to the screen refresh.
It looks like Google thinks that in a case like that the JavaScript will not block rendering
I'm struggling to find a reference to this. You linked to my article in an email you sent me, which specifically talks about rendering being blocked during fetching.
In cases like that (and also with async), the result will be the same
That isn't guaranteed by the spec. You're relying on retrieval from the cache being instant, which may not be the case.
in a case like that you could even consider putting "no-cache" / "no-store" on delay.js (or even an extra delay), to make your page render faster. By forcing a download (or extra delay) you give the browser some extra time to finish rendering the preceding html before executing the render-blocking JavaScript.
Why not use defer in this case? It achieves the same without the bandwidth penalty and unpredictability.
Maarten B, I did test your code and you are indeed correct. Whether you use async, defer, or whatever, the lines above the inline JavaScript are not being rendered. The information in Google's documentation is therefore incorrect.

Javascript inlined between script tags vs src DATA URI UTF-8 with percent-encoding

I build mobile-first and I use tiny frameworks (under 10 kB) which I inline in index.html to save on HTTP requests.
I have looked for days now, and it seems like everyone else who inlines JavaScript does it like this:
<script>UGLIFIED JAVASCRIPT</script>
I do it like this:
<script src="data:application/javascript;utf8, UGLIFIED PERCENT-ENCODED JAVASCRIPT"></script>
You may say percent-encoding will make the file much larger, but it actually doesn't, because of the way gzip works: it replaces repetition, and it doesn't matter whether the repeated phrase is <div> or %3Cdiv%3E.
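A quick way to sanity-check that claim with Node's built-in zlib (the sample string is arbitrary):

var zlib = require('zlib');
var js = 'document.querySelector("div").textContent = "hi";'.repeat(50);
var encoded = encodeURIComponent(js);
// gzip collapses the repeated percent-escapes just like any other
// repetition, so the compressed sizes end up much closer together.
console.log('plain  :', js.length, '->', zlib.gzipSync(js).length);
console.log('encoded:', encoded.length, '->', zlib.gzipSync(encoded).length);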
My question is: are there any potential advantages to my approach?
PS. One of my ideas was browser caching of file-like DATA-URI elements, but I don't know if this makes sense, since I would then also have to find a way of preventing the load of parts of index.html. Unless I could use the cached elements elsewhere - that would have its use cases too. Thoughts?
First, if your site isn't an SPA, inlining your shared scripts (regardless of method) means you're loading them on every page, negating the value of the browser cache.
Second, the trip across the wire may be similar for encoded vs. unencoded script, but the more important metric is the time it takes for the JavaScript to be parsed and compiled. URL decoding isn't free, and while I don't think it's going to matter much in the grand scheme of things, I see no reason why it would actually be faster to load than just putting the script within the tag.

How to optimize "Parse HTML" events?

While profiling my web app I noticed that my server is lightning fast and Chrome seems to be the bottleneck. I fired up Chrome's "timeline" developer tool and got the following numbers:
Total time: 523ms
Scripting: 369ms (70%)
I also ran a few console.log(performance.now()) calls from the main JavaScript file, and the load time is actually closer to 700 ms. This is pretty shocking for what I am rendering (an empty table and 2 buttons).
I continued my investigation by drilling into "Scripting":
Evaluating jQuery-min.js: 33ms
Evaluating jQuery-UI-min.js: 50ms
Evaluating raphael-min.js: 29ms
Evaluating content.js: 41ms
Evaluating jQuery.js: 12ms
Evaluating content.js: 19ms
GC Event: 63 ms
(I didn't list the smaller scripts, but they accounted for the remaining time.) I don't know what to make of this.
Are these numbers normal?
Where do I go from here? Are there other tools I should be running?
How do I optimize Parse HTML events?
For all the cynicism this question received, I am amused to discover they were all wrong.
I found Chrome's profiler output hard to interpret, so I turned to console.log(performance.now()). This led me to discover that the page was taking 1400 ms to load the JavaScript files, before I had even invoked a single line of code!
This didn't make much sense, so I revisited Chrome's JavaScript profiler tool. The default sorting order, Heavy (Bottom Up), didn't reveal anything meaningful, so I switched over to Chart mode. This revealed that many browser plugins were being loaded, and they were taking much longer to run than I had anticipated. So I disabled all plugins and reloaded the page. Guess what? The load time went down to 147 ms.
That's right: browser plugins were responsible for 90% of the load time!
So to conclude:
jQuery is substantially slower than native APIs, but this might be irrelevant in the grand scheme of things. This is why good programmers use profilers to find bottlenecks, as opposed to optimizing blindly. Don't trust people's subjective bias or a "gut feeling". Had I followed people's advice to optimize away jQuery, it wouldn't have made a noticeable difference (I would have saved 100 ms).
The timeline tool doesn't report the correct total time. Skip the pretty graphs and use the following tools...
Start simple. Use console.log(performance.now()) to verify basic assumptions.
Chrome's Javascript profiler
Chart will give you a chronological overview of the Javascript execution.
Tree (Top Down) will allow you to drill into methods, one level at a time.
Turn off all browser plugins, restart the browser, and try again. You'd be surprised how much overhead some plugins contribute!
I hope this helps others.
PS: There is a nice article at http://www.sitepoint.com/jquery-vs-raw-javascript-1-dom-forms/ which helps if you want to replace jQuery with the native APIs.
I think Parse HTML events happen every time you modify the inner HTML of an element, e.g.
$("#someiD").html(text);
A common style is to repeatedly append elements:
$.each(something, function() {
    $("#someTable").append("<tr>...</tr>");
});
This will parse the HTML for each row that's added. You can optimize this with:
var tablebody = '';
$.each(something, function() {
    tablebody += "<tr>...</tr>";
});
$("#someTable").html(tablebody);
Now it parses the entire thing at once, instead of repeatedly parsing it.

Browser performance impact of lots of js includes

I'm working on a website for work that uses one master layout for the whole site, which includes lots (over 40) of js files. This website is really slow to render. How much overhead is there for the browser to parse and (for lack of a better technical term) "deal with" all these includes? I know they are cached, so they are not downloaded on each page view. However, does each include get parsed and executed anew on every page refresh?
At any rate, I imagine there is some overhead in dealing with all these includes, but I'm not sure if it's big or small.
The best way to understand is to measure. Try merging those 40 js files into a single one and see if it makes a big difference. Also, compressing them could reduce bandwidth costs.
There will be an overhead to having multiple includes, but as you say those files are cached, so the overhead applies only to the first request. I think that if we ignore this initial overhead, the performance difference won't be enormous compared to the time spent in those scripts manipulating the DOM, etc.
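A minimal Node.js sketch of the merge experiment (file names are hypothetical):

var fs = require('fs');
var files = ['a.js', 'b.js', 'c.js']; // ...all 40 includes, in load order
var merged = files.map(function(f) {
    return fs.readFileSync(f, 'utf8');
}).join(';\n'); // the ';' guards against files missing a trailing semicolon
fs.writeFileSync('bundle.js', merged);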
It depends on what they do - to test, you could do this before they are all loaded:
<script>
var test_start_time = (new Date()).getTime();
</script>
and this after:
<script>
alert("took: " + (((new Date()).getTime()-test_start_time)/1000) + " seconds");
</script>
Definitely compare and contrast - that will be the most reliable judge.
Nevertheless, I do my best to only load one or two JS files in the head section, then I use jquery to test for certain elements which might require additional scripts or css once the DOM is loaded. For example, I use the source highlight js library to stylize pre tags:
if ($('pre').length > 0) {
    $.getScript(svx_cdns + 'pkgs/shjs-0.6/sh_main.min.js', function() {
        $('<link>', {
            'rel': 'stylesheet',
            'type': 'text/css',
            'href': svx_cdns + 'pkgs/shjs-0.6/css/sh_vim.min.css'
        }).appendTo('head');
        sh_highlightDocument('/s/pkgs/shjs-0.6/lang/', '.min.js');
    });
}
That way the page loads very quickly, and then "adjusts" afterwards.
You could try to put all of the .js files into one file and then compress it.
This will lower the number of requests made by the browser by 39 as well :).
Hope this helped.
The impact may be important. Take into account that script downloading blocks page rendering.
A couple of things you may try:
Combine as many scripts as you can so you download fewer files
Minify and compress the combined js files
Try to put as many references as you can at the bottom of the page so they don't block the rendering (this is not easy and must be done carefully; you might end up allowing interaction with some controls before the necessary JavaScript is downloaded).
Implement parallel download for js files (by default they are downloaded sequentially). Here you have some examples of that
Even if the files are cached, there's still a request to check if the file has been modified. You can change your caching strategy and set your files to never expire. That way the browser will not even ask if they've been modified. That means you'll need to add a cache buster to all your URLs. Look at Firebug's Net tab to be sure. I get a 304 Not Modified for all my css/js/imgs.
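A rough sketch of that strategy in the Express style used earlier in this thread; the versioned URL and max-age value are illustrative:

// Far-future expiry: the browser will not revalidate until the URL changes.
app.get('/js/app.v42.js', function(req, res) {
    res.setHeader('Cache-Control', 'public, max-age=31536000'); // ~1 year
    res.contentType('text/javascript');
    res.send(file['app.js']); // hypothetical entry in the file cache
});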
The files are going to have to be parsed every time, but that's probably not the bottleneck.
Try copying all your js into one file. One of our screens was including over 100 js files. We created a unified, minimized file and our screen load time went from 10 seconds to less than 3.

Is Javascript parsed/interpreted on load? (IE)

I know, for instance, that when Chrome downloads a JavaScript file, it is interpreted and JITed.
My question is, when IE6,7,8, first download a Javascript file, is the entire thing parsed and interpreted?
My understanding was that only top-level function signatures and anything executed in the global scope were parsed on load, and that function bodies and the rest were parsed on execution.
If they are fully parsed on load, what do you think the time savings would be on deferring the function bodies to be downloaded and parsed later?
They are fully parsed on load. (IE has to parse the script to know where each function body ends, of course.) In the open-source implementations, every function is compiled to bytecode or even to machine code at the same time, and I imagine IE works the same way.
If you have a page that's actually loading too slowly, and you can defer loading 100K of script that you're probably not going to use, it might help your load times. Or not; see the update below.
(Trivia: JS benchmarks like Sunspider generally do not measure the time it takes to parse and compile the code.)
UPDATE – Since I posted this answer, things have changed! Implementations still parse each script on load at least enough to detect any SyntaxErrors, as required by the standard. But they sometimes defer compiling functions until they are first called.
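A small sketch of the observable difference; the undefined variable is deliberate:

// The whole file is parsed at load, so a syntax error anywhere would
// prevent all of it from running. A runtime error inside a function,
// however, only surfaces when the function is actually called.
function neverCalled() {
    return someUndefinedVariable + 1; // no error until neverCalled() runs
}
console.log('parsed and top-level code executed');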
Because defining a function is actually an operation, yes, your entire JavaScript file is parsed, and all of the top-level operations are interpreted. The code inside your functions is not actually executed until it's called, but it is parsed.
for example:
var i = 0;
var print = function( a ) {
    document.getElementById( 'foo' ).innerHTML = a;
}
Everything gets parsed in the above example, and lines 1 and 2 get executed. However, line 3 doesn't get executed until it's called.
There are little "perceptual games" you can play with your users, like putting the script tags at the bottom of the HTML instead of at the top, so that the browser will render the top of the page before it receives the instructions to download and parse the javascript. You could probably also push your function definitions into a document.onload function, so that they don't get executed until after the whole page is loaded and in memory. However, this could cause a "flash of unstyled content" if your javascript is applying visual styles to things (such as jQuery UI stuff).
Yes, in all browsers the downloading of the resource blocks everything else on the page (CSS downloading, other JS downloading, rendering) if it is done with a <script> tag.
If you're loading all the JavaScript at the beginning or throughout your page, you will see hiccups, as a request takes about 50 ms and the parsing of a library file or something similar could take more than 100 ms. 100 ms is the standard threshold above which anything will be noticed as "lag" by the user.
The time savings may be negligible, but the slight loss of user experience if there are pauses when your page is loading may be significant depending on your situation.
See LABjs' site for a lot of articles and great explanations on the benefits of deferring loading and parsing.
What do you mean by "downloads"? When it's included with a <script> tag, or when it's downloaded through XMLHttpRequest?
If you mean inclusion by a script tag, then IE interprets all js files at once; otherwise you would not be able to call functions in that file or see a syntax error message.
If you mean download by XMLHttpRequest, then you have to evaluate the content of the file yourself.
