Free browser memory from a JS script - javascript

Is it possible freeing the browser memory from JavaScript objects?
I am designing the asynchronous navigation process of a web app, in which page parts are loaded through AJAX requests together with their necessary components (CSS, JS, images, etc.). I guess that without proper scripting, long usage of the app would load many different objects, causing brutal memory growth.
As far as I can tell, removing a script tag from the DOM only removes the DOM node, not the objects and functions it defines, which remain loaded in the browser's memory.
(A similar test: Should a "script" tag be allowed to remove itself?).
I also tried to test the browser's behaviour by setting a variable to a big string and then overwriting it:
<html>
<head>
    <title>JS memory test</title>
    <script type="text/javascript">
        function allocate() {
            window.x = '';
            var s = 'abcdefghijklmnopqrstuvwxyz0123456789-';
            for (var i = 0; i < 1000000; ++i) {
                window.x += 'hello' + i + s;
            }
            alert('allocated');
        }

        function free() {
            window.x = '';
            alert('freed');
        }
    </script>
</head>
<body>
    <input type="button" value="Allocate" onclick="allocate();" />
    <input type="button" value="Free" onclick="free();" />
</body>
</html>
Data from the Chrome task manager:
page weight when loaded: 5736 KB
page weight after setting the variable: 81720 KB
page weight after resetting the variable: 81740 KB
This surprises me, since I thought the JS garbage collector would throw away the old variable value once it is overwritten by the newly assigned value.
Firefox does not run each tab as a separate process, so there is no per-tab process monitor like Chrome's, but the global memory usage seems to behave much as it does in Chrome.
Am I making a wrong assumption somewhere?
Can somebody provide suggestions about effective programming practices or useful resources to read?
Thanks a lot.

You have no direct access to memory-management features. The best you can do is remove references to objects so that they become eligible for garbage collection, which runs from time to time as the browser sees fit.
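For example, a minimal sketch (the variable is illustrative):

var big = new Array(1000000).join('*'); // hold a large string
// ... use it ...
big = null; // drop the reference; the string becomes eligible for collection
            // whenever the engine decides to run the collector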
From time to time certain browsers exhibit memory leaks; there are strategies to reduce them, but usually for specific cases (e.g. the infamous IE circular-reference leak, which has been fixed, even in updated versions of IE 6, as far as I know).
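That leak came from DOM-JS reference cycles, roughly like this sketch (names are illustrative):

function attach(el) {
    el.onclick = function () {
        // this closure captures el, and el references the closure
        // through onclick: a cycle that old IE could not collect
    };
}
// the usual workaround was to break the cycle before discarding the node:
// el.onclick = null;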
It's probably not worth worrying about unless you find you have issues. Deal with it then.

Related

Javascript single page application and script management

I'm building a single page application with vanilla JS + Knockout JS. The application will consist of multiple sub-applications which I would like to dynamically load (and subsequently unload). The problem is that while I can add and execute a new script with the following:
function loadJs(url, hash) {
    var fileObj = document.createElement('script');
    fileObj.setAttribute('type', 'text/javascript');
    fileObj.setAttribute('src', url);
    if (hash != undefined) {
        fileObj.setAttribute('integrity', hash);
    }
    fileObj.setAttribute('crossorigin', 'anonymous');
    document.getElementsByTagName('head')[0].appendChild(fileObj);
}
I cannot remove it (including from memory) with the following:
function unloadJs(url) {
    var allScripts = Array.from(document.getElementsByTagName('script'));
    allScripts.forEach(script => {
        if (script.src == url) {
            script.remove();
        }
    });
}
The remove call only removes the tag, but the code is still in memory. Based on this, it seems I should just load all the application scripts when the application is initially opened, rather than managing them dynamically; that way, I could combine and minify the scripts into a single file. I was hoping to minimize browser memory usage and prevent leaks.
I read a few SO answers discussing closures and code leaving memory automatically when no longer referenced, but I couldn't establish exactly what the examples were showing, or whether my time would be well spent understanding closures specifically.
The Javascript code is only handling the UI, and there is minimal data manipulation. The major work will be done on the server with JS displaying the results. Is this type of thing simply premature optimization?
As you have found, removing the script tag does not automatically remove the code or variables from memory. They might get collected eventually, but it's not something you can guarantee. In my experience, they stick around for a while.
I would suggest combining your scripts into a single minified JS file and loading it in advance, as you describe.
If you're concerned about memory use, you could organize each mini-app's scripts so that all its objects and functions are part of one single parent object (sort of a namespace). You would create a new instance of that specific page/app's object when it's loaded, and then you can destroy/delete that object when it is unloaded. That would tell the garbage collector that the object can be removed.
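A minimal sketch of that namespace idea (all names are illustrative):

var apps = {};
apps.reports = {                    // one parent object per mini-app
    init: function () { /* build the UI, bind handlers */ },
    cache: [],
    destroy: function () { /* unbind handlers, remove DOM nodes, clear timers */ }
};
// on unload:
apps.reports.destroy();
delete apps.reports;  // everything reachable only through it can now be collected

Note that anything still referenced from outside (event handlers, timers, globals) will keep the object alive, which is why destroy() has to clean those up first.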

Why does this javascript have a memory leak in Chrome?

I have a timer function that creates a large amount of data, declared with var. Why does the object not get garbage-collected? The number shown by usedJSHeapSize keeps growing. Task Manager in Chrome also shows memory increasing.
I'm testing this in Windows 10, Chrome, using VS 2017.
If I copy and paste the code into a separate file called test.html and open that in Chrome, it also shows the leak.
I've tested this code in Edge and IE (using Developer Tools instead of usedJSHeapSize) and I see no memory leak.
Is this an issue with Chrome?
<script type="text/javascript">
    function refreshTimer() {
        try {
            var longStr = new Array(1000000).join('*');
            document.getElementById('div1').textContent = 'usedJSHeapSize: ' + window.performance.memory.usedJSHeapSize;
        }
        catch (err) {
            document.getElementById('div1').textContent = 'refreshTimer: ' + err;
        }
    }
    window.setInterval("refreshTimer()", 3 * 1000);
</script>
<div id="div1" style="font-family:Calibri"></div>
I expect that there would be no memory leak, because the large data object is declared with var and should be garbage-collected once it goes out of scope.
Edit to my original post:
I have run Chrome with and without "--enable-precise-memory-info", and it makes no difference. I have observed memory growing in Chrome -> More Tools -> Task Manager and in the Windows Task Manager, with just one instance of Chrome running my test.html file.
The only links I can find that mention this as a possible bug in Chrome are these:
Javascript garbage collection of typed arrays in Chrome
https://bugs.chromium.org/p/chromium/issues/detail?id=232415
These are old posts though and I can't believe the bug would live this long.
So - I'm still perplexed.
1/2/2019 - adding a comment to move this question to the top of SO. If anyone knows, please add your thoughts.
You've run into a security measure in Chrome: it does not expose true memory usage via "window.performance.memory", because attackers could use that information to attack the browser.
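You can see this for yourself with a minimal sketch (Chrome only, run in the console):

setInterval(function () {
    // without --enable-precise-memory-info the reported value is
    // quantized and updated infrequently, so it does not track the
    // real heap closely
    console.log(performance.memory.usedJSHeapSize);
}, 1000);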

Is this a Chrome bug in XPath evaluation

Open this page: http://sunnah.com/abudawud/2
Now run this simple XPath search query in the console. The browser tab then crashes:
for (var k = 0, kl = 2000; k < kl; k++) {
    console.log(k);
    var xpathResult = document.evaluate("//div[@class='hello']", document, null, XPathResult.ANY_TYPE, null);
}
On Chrome version 46.0.2490.80 (64-bit), on a MacBook Pro running OS X 10.10.5.
Unfortunately, I have to run XPath queries on this page a couple of thousand times to search for different elements, so I can't get away with making fewer calls to evaluate.
Whether it crashes depends on the XPath expression: for some expressions it crashes, and for others it does not.
It fails consistently on the same count so it makes me think it is not a timing issue or garbage collection issue.
I am not getting any error codes so I am not sure where else to look.
Update
After further investigation we believe this is a legitimate Chrome bug, or at least a not very good way of releasing memory. What happens is that if your XPath starts with / or //, the search context is expanded to the whole DOM, and for some reason Chrome keeps the DOM or some other intermediary object in memory. If the XPath starts with a relative path like div/p and the search scope (the second argument) is set to a portion of the DOM, the memory consumption is much more reasonable and there is no crash. Thanks to @JLRishe for several hints that were very helpful in reaching this conclusion.
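A minimal sketch of the scoped form (the container element is illustrative):

var container = document.getElementById('content'); // hypothetical scope element
// relative path + narrow context: memory use stays reasonable
var xpathResult = document.evaluate("div[@class='hello']", container, null, XPathResult.ANY_TYPE, null);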
Update2
I filed a bug on Chromium, but after a few months they rejected it as WontFix. I managed to work around it for the time being.
If I run your code on that page and watch Task Manager, I can see Chrome's working set increase to about 3.3 GB before it eventually crashes after about 1300 iterations.
Each XPath query is causing Chrome to allocate memory for the results and any operation involved in obtaining them, but it seems like it is not releasing any of the allocated memory because you are not releasing control of the thread.
I have found that the working set levels out at 1.65 GB and the operation finishes without crashing if I do this:
var k = 0;
var intv = setInterval(function () {
    console.log(k);
    var xpathResult = document.evaluate("//div[@class='hello']", document, null, XPathResult.ANY_TYPE, null);
    k += 1;
    if (k >= 2000) {
        clearInterval(intv);
    }
}, 0);
so something like that might be a possible solution.
This is still using considerable system resources, and this isn't even including any values you might be storing in the course of your operation. I encourage you to seek out a smarter approach that doesn't require running quite so many XPath queries.
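For instance, if the searches can be expressed as one query, a single snapshot avoids re-evaluating per element (a sketch; the expression is illustrative):

var snap = document.evaluate("//div[@class='hello']", document, null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
for (var i = 0; i < snap.snapshotLength; i++) {
    var node = snap.snapshotItem(i); // inspect each match here instead of
                                     // issuing a new query per element
}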

How to use FileSystemObject to read a file in JavaScript

I want to read a file with FileSystemObject. My code is as follows:
<!DOCTYPE html>
<html lang="en">
<head>
    <title>Read json</title>
</head>
<body>
    <script type="text/javascript">
        function readFile(filename) {
            var fso = new ActiveXObject("Scripting.FileSystemObject");
            var ForReading = 1;
            var f1 = fso.OpenTextFile(filename, ForReading);
            var text = f1.ReadAll();
            f1.close();
            return text;
        }
        var myJSONText = "text.txt";
        var myObject = readFile(myJSONText); // eval('(' + myJSONText + ')');
        document.write(myObject.name);
    </script>
</body>
</html>
First, let me repeat some of the comments above: I've never seen client-side use of ActiveXObject extolled as something that should be done.
Now, let me say I'm trying to learn how to do this myself. Here are some thoughts (and helpful links, see the bottom) on this question.
The general layout, according to "Much ADO about Text Files" in MSDN's Scripting Clinic column, is:
1. Create the object.
2. Create another object, using the first, that uses a method of the first object (such as getting a file).
3. Do things to the file.
4. Close the file.
How do you start? According to IE Dev Center (linked here), use an ActiveXObject in Javascript as follows:
newObj = new ActiveXObject(servername.typename[, location])
You've got that when you declare fso in your code. What about this "servername" thing, isn't the file accessed locally? Instead of "servername etc" you've put in Scripting.FileSystemObject. This is actually fine, if the HKEY_CLASSES_ROOT registry key on the host PC supports it (see ref above).
Once the ActiveXObject is successfully declared, and if the browser allows it (IE only), and if the end user agrees to any warnings that pop up ("An ActiveX control on this page might be unsafe to interact with other parts of the page..." etc), then the object allows you to use any of the methods associated with that object. That's where the power of the Windows Scripting FileSystemObject comes into play.
Any FileSystemObject (fso) method is now available to use, which, as the name suggests, means file (and directory) interaction on the local machine. Not just reading, as your question focuses on, but writing and deleting as well. A complete list of methods and properties is available at MSDN here. After being used, close out the file using the .close() method.
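For instance, writing a file is just as simple as reading one (a sketch; IE only, subject to the ActiveX warnings above, and the path is illustrative):

var fso = new ActiveXObject("Scripting.FileSystemObject");
var ts = fso.CreateTextFile("C:\\temp\\out.txt", true); // true = overwrite if it exists
ts.WriteLine("written from the page");
ts.Close();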
So, this is dangerous for obvious reasons. But what wasn't obvious to me at first was that these interactions with the filesystem may happen invisibly. There is a good chance that whatever you do, from reading a file to deleting a directory tree, no warnings or command prompts will come up to let you know what's happening because of your few lines of code.
Let me finish by commenting on the last bits of code above. Using JSON in conjunction with data pulled from the FileSystemObject provides a great way to allow JavaScript interaction (JSON .parse and .stringify come immediately to mind). With this, data could be stored locally, perhaps as an alternative to HTML5 local storage (ref this SO thread, which goes more in-depth with this concept, and another SO question I raised about this here).
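As a sketch of that idea, reusing the readFile function from the question (the file name and property are illustrative):

var text = readFile("data.json");  // e.g. the file contains {"name": "example"}
var obj = JSON.parse(text);        // safer than eval
document.write(obj.name);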
Here are some links for further reading:
IE Dev Center, JavaScript Objects, ActiveXObject
MSDN JScript Windows Scripting (including FileSystemObject methods, etc)
MSDN Scripting Clinic (older articles, many broken links, but still a lot of good info on this stuff)

Should CSS always precede JavaScript?

In countless places online I have seen the recommendation to include CSS prior to JavaScript. The reasoning is generally of this form:
When it comes to ordering your CSS and JavaScript, you want your CSS
to come first. The reason is that the rendering thread has all the
style information it needs to render the page. If the JavaScript
includes come first, the JavaScript engine has to parse it all before
continuing on to the next set of resources. This means the rendering
thread can't completely show the page, since it doesn't have all the
styles it needs.
My actual testing reveals something quite different:
My test harness
I use the following Ruby script to generate specific delays for various resources:
require 'rubygems'
require 'eventmachine'
require 'evma_httpserver'
require 'date'

class Handler < EventMachine::Connection
  include EventMachine::HttpServer

  def process_http_request
    resp = EventMachine::DelegatedHttpResponse.new(self)

    return unless @http_query_string

    path = @http_path_info
    array = @http_query_string.split("&").map { |s| s.split("=") }.flatten
    parsed = Hash[*array]

    delay = parsed["delay"].to_i / 1000.0
    jsdelay = parsed["jsdelay"].to_i

    delay = 5 if (delay > 5)
    jsdelay = 5000 if (jsdelay > 5000)
    delay = 0 if (delay < 0)
    jsdelay = 0 if (jsdelay < 0)

    # Block which fulfills the request
    operation = proc do
      sleep delay
      if path.match(/\.js$/)
        resp.status = 200
        resp.headers["Content-Type"] = "text/javascript"
        resp.content = "(function(){
          var start = new Date();
          while(new Date() - start < #{jsdelay}){}
        })();"
      end
      if path.match(/\.css$/)
        resp.status = 200
        resp.headers["Content-Type"] = "text/css"
        resp.content = "body {font-size: 50px;}"
      end
    end

    # Callback block to execute once the request is fulfilled
    callback = proc do |res|
      resp.send_response
    end

    # Let the thread pool (20 Ruby threads) handle the request
    EM.defer(operation, callback)
  end
end

EventMachine::run {
  EventMachine::start_server("0.0.0.0", 8081, Handler)
  puts "Listening..."
}
The above mini server allows me to set arbitrary delays for JavaScript files (both in serving them and in client-side execution time) and arbitrary CSS delays. For example, http://10.0.0.50:8081/test.css?delay=500 gives me a 500 ms delay transferring the CSS.
I use the following page to test.
<!DOCTYPE html>
<html>
<head>
    <title>test</title>
    <script type='text/javascript'>
        var startTime = new Date();
    </script>
    <link href="http://10.0.0.50:8081/test.css?delay=500" type="text/css" rel="stylesheet">
    <script type="text/javascript" src="http://10.0.0.50:8081/test2.js?delay=400&jsdelay=1000"></script>
</head>
<body>
    <p>
        Elapsed time is:
        <script type='text/javascript'>
            document.write(new Date() - startTime);
        </script>
    </p>
</body>
</html>
When I include the CSS first, the page takes 1.5 seconds to render.
When I include the JavaScript first, the page takes 1.4 seconds to render.
I get similar results in Chrome, Firefox and Internet Explorer. In Opera, however, the ordering simply does not matter.
What appears to be happening is that the JavaScript interpreter refuses to start until all the CSS is downloaded. So, it seems that having JavaScript includes first is more efficient as the JavaScript thread gets more run time.
Am I missing something? Is the recommendation to place CSS includes prior to JavaScript includes not correct?
It is clear that we could add async or use setTimeout to free up the render thread or put the JavaScript code in the footer, or use a JavaScript loader. The point here is about ordering of essential JavaScript bits and CSS bits in the head.
This is a very interesting question. I've always put my CSS <link href="...">s before my JavaScript <script src="...">s because "I read one time that it's better." So, you're right; it's high time we do some actual research!
I set up my own test harness in Node.js (code below). Basically, I:
Made sure there was no HTTP caching so the browser would have to do a full download each time a page is loaded.
To simulate reality, I included jQuery and the H5BP CSS (so there's a decent amount of script/CSS to parse).
Set up two pages - one with CSS before script, one with CSS after script.
Recorded how long it took for the external script in the <head> to execute.
Recorded how long it took for the inline script in the <body> to execute, which is analogous to DOMReady.
Delayed sending CSS and/or script to the browser by 500 ms.
Ran the test 20 times in the three major browsers.
Results
First, with the CSS file delayed by 500 ms (the unit is milliseconds):
Browser: Chrome 18 | IE 9 | Firefox 9
CSS: first last | first last | first last
=======================================================
Header Exec | | |
Average | 583 36 | 559 42 | 565 49
St Dev | 15 12 | 9 7 | 13 6
------------|--------------|--------------|------------
Body Exec | | |
Average | 584 521 | 559 513 | 565 519
St Dev | 15 9 | 9 5 | 13 7
Next, I set jQuery to delay by 500 ms instead of the CSS:
Browser: Chrome 18 | IE 9 | Firefox 9
CSS: first last | first last | first last
=======================================================
Header Exec | | |
Average | 597 556 | 562 559 | 564 564
St Dev | 14 12 | 11 7 | 8 8
------------|--------------|--------------|------------
Body Exec | | |
Average | 598 557 | 563 560 | 564 565
St Dev | 14 12 | 10 7 | 8 8
Finally, I set both jQuery and the CSS to delay by 500 ms:
Browser: Chrome 18 | IE 9 | Firefox 9
CSS: first last | first last | first last
=======================================================
Header Exec | | |
Average | 620 560 | 577 577 | 571 567
St Dev | 16 11 | 19 9 | 9 10
------------|--------------|--------------|------------
Body Exec | | |
Average | 623 561 | 578 580 | 571 568
St Dev | 18 11 | 19 9 | 9 10
Conclusions
First, it's important to note that I'm operating under the assumption that you have scripts located in the <head> of your document (as opposed to the end of the <body>). There are various arguments regarding why you might link to your scripts in the <head> versus the end of the document, but that's outside the scope of this answer. This is strictly about whether <script>s should go before <link>s in the <head>.
In modern DESKTOP browsers, it looks like linking to CSS first never provides a performance gain. Putting CSS after script gets you a trivial amount of gain when both CSS and script are delayed, but gives you large gains when CSS is delayed. (Shown by the last columns in the first set of results.)
Given that linking to CSS last does not seem to hurt performance but can provide gains under certain circumstances, you should link to external style sheets after you link to external scripts only on desktop browsers if the performance of old browsers is not a concern. Read on for the mobile situation.
Why?
Historically, when a browser encountered a <script> tag pointing to an external resource, the browser would stop parsing the HTML, retrieve the script, execute it, then continue parsing the HTML. In contrast, if the browser encountered a <link> for an external style sheet, it would continue parsing the HTML while it fetched the CSS file (in parallel).
Hence, the widely-repeated advice to put style sheets first – they would download first, and the first script to download could be loaded in parallel.
However, modern browsers (including all of the browsers I tested with above) have implemented speculative parsing, where the browser "looks ahead" in the HTML and begins downloading resources before scripts download and execute.
In old browsers without speculative parsing, putting scripts first will affect performance since they will not download in parallel.
Browser Support
Speculative parsing was first implemented in: (along with the percentage of worldwide desktop browser users using this version or greater as of Jan 2012)
Chrome 1 (WebKit 525) (100%)
Internet Explorer 8 (75%)
Firefox 3.5 (96%)
Safari 4 (99%)
Opera 11.60 (85%)
In total, roughly 85% of desktop browsers in use today support speculative loading. Putting scripts before CSS will have a performance penalty on 15% of users globally; your mileage may vary based on your site's specific audience. (And remember that number is shrinking.)
On mobile browsers, it's a little harder to get definitive numbers simply due to how heterogeneous the mobile browser and OS landscape is. Since speculative rendering was implemented in WebKit 525 (released Mar 2008), and just about every worthwhile mobile browser is based on WebKit, we can conclude that "most" mobile browsers should support it. According to quirksmode, iOS 2.2/Android 1.0 use WebKit 525. I have no idea what Windows Phone looks like.
However, I ran the test on my Android 4 device, and while I saw numbers similar to the desktop results, I hooked it up to the fantastic new remote debugger in Chrome for Android, and the Network tab showed that the browser was actually waiting to download the CSS until the JavaScript code had completely loaded – in other words, even the newest version of WebKit for Android does not appear to support speculative parsing. I suspect it might be turned off due to the CPU, memory, and/or network constraints inherent to mobile devices.
Code
Forgive the sloppiness – this was Q&D.
File app.js
var express = require('express')
  , app = express.createServer()
  , fs = require('fs');

app.listen(90);

var file = {};
fs.readdirSync('.').forEach(function(f) {
    console.log(f);
    file[f] = fs.readFileSync(f);
    if (f != 'jquery.js' && f != 'style.css') app.get('/' + f, function(req, res) {
        res.contentType(f);
        res.send(file[f]);
    });
});

app.get('/jquery.js', function(req, res) {
    setTimeout(function() {
        res.contentType('text/javascript');
        res.send(file['jquery.js']);
    }, 500);
});

app.get('/style.css', function(req, res) {
    setTimeout(function() {
        res.contentType('text/css');
        res.send(file['style.css']);
    }, 500);
});

var headresults = {
    css: [],
    js: []
}, bodyresults = {
    css: [],
    js: []
};

app.post('/result/:type/:time/:exec', function(req, res) {
    headresults[req.params.type].push(parseInt(req.params.time, 10));
    bodyresults[req.params.type].push(parseInt(req.params.exec, 10));
    res.end();
});

app.get('/result/:type', function(req, res) {
    var o = '';
    headresults[req.params.type].forEach(function(i) {
        o += '\n' + i;
    });
    o += '\n';
    bodyresults[req.params.type].forEach(function(i) {
        o += '\n' + i;
    });
    res.send(o);
});
File css.html
<!DOCTYPE html>
<html>
<head>
    <title>CSS first</title>
    <script>var start = Date.now();</script>
    <link rel="stylesheet" href="style.css">
    <script src="jquery.js"></script>
    <script src="test.js"></script>
</head>
<body>
    <script>document.write(jsload - start); bodyexec = Date.now();</script>
</body>
</html>
File js.html
<!DOCTYPE html>
<html>
<head>
    <title>JS first</title>
    <script>var start = Date.now();</script>
    <script src="jquery.js"></script>
    <script src="test.js"></script>
    <link rel="stylesheet" href="style.css">
</head>
<body>
    <script>document.write(jsload - start); bodyexec = Date.now();</script>
</body>
</html>
File test.js
var jsload = Date.now();
$(function() {
    $.post('/result' + location.pathname.replace('.html', '') + '/' + (jsload - start) + '/' + (bodyexec - start));
});
jQuery was jquery-1.7.1.min.js
There are two main reasons to put CSS before JavaScript.
Old browsers (Internet Explorer 6-7, Firefox 2, etc.) would block all subsequent downloads when they started downloading a script. So if you have a.js followed by b.css they get downloaded sequentially: first a then b. If you have b.css followed by a.js they get downloaded in parallel so the page loads more quickly.
Nothing is rendered until all stylesheets are downloaded - this is true in all browsers. Scripts are different - they block rendering of all DOM elements that are below the script tag in the page. If you put your scripts in the HEAD then the entire page is blocked from rendering until all stylesheets and all scripts are downloaded. While it makes sense to block all rendering for stylesheets (so you get the correct styling the first time and avoid the flash of unstyled content, FOUC), it doesn't make sense to block rendering of the entire page for scripts. Often scripts don't affect any DOM elements, or affect only a portion of them. It's best to load scripts as low in the page as possible, or, even better, to load them asynchronously.
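A minimal sketch of the asynchronous option (the file name is illustrative):

var s = document.createElement('script');
s.src = 'app.js';   // hypothetical script URL
s.async = true;     // fetch and execute without blocking parsing
document.head.appendChild(s);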
It's fun to create examples with Cuzillion. For example, this page has a script in the HEAD so the entire page is blank until it's done downloading. However, if we move the script to the end of the BODY block the page header renders since those DOM elements occur above the SCRIPT tag, as you can see on this page.
I would not put too much emphasis on the results you have got. I believe this is subjective, but I have a reason to explain why it is better to put the CSS in before the JavaScript.
During the loading of your website, there are two scenarios that you would see:
Case 1: white screen → unstyled website → styled website → interaction → styled and interactive website
Case 2: white screen → unstyled website → interaction → styled website → styled and interactive website
I honestly can't imagine anyone choosing Case 2. This would mean that visitors using slow Internet connections will be faced with an unstyled website that already allows them to interact with it using JavaScript (since that is already loaded). Furthermore, the amount of time spent looking at an unstyled website would be maximized this way. Why would anyone want that?
It also works better, as jQuery states:
"When using scripts that rely on the value of CSS style properties,
it's important to reference external stylesheets or embed style
elements before referencing the scripts".
When the files are loaded in the wrong order (first JavaScript, then CSS), any JavaScript code relying on properties set in CSS files (for example, the width or height of a div) won't read them correctly. It seems that with the wrong loading order the correct properties are only 'sometimes' known to JavaScript (perhaps this is caused by a race condition?). This effect seems bigger or smaller depending on the browser used.
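A sketch of the failure mode (the selector and stylesheet rule are illustrative, not from the question):

// stylesheet (loaded late): #panel { width: 500px; }
$(function () {
    var w = $('#panel').width(); // may read the unstyled width if the CSS
                                 // has not been applied yet
});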
Were your tests performed on your personal computer, or on a web server? Is it a blank page, or a complex online system with images, databases, etc.? Are your scripts performing a simple hover event action, or are they a core component of how your website renders and interacts with the user? There are several things to consider here, and the relevance of these recommendations almost always becomes a rule when you venture into high-calibre web development.
The purpose of the "put stylesheets at the top and scripts at the bottom" rule is that, in general, it's the best way to achieve optimal progressive rendering, which is critical to the user experience.
All else aside: assuming your test is valid, and you really are producing results contrary to the popular rules, it'd come as no surprise, really. Every website (and everything it takes to make the whole thing appear on a user's screen) is different and the Internet is constantly evolving.
I include CSS files before JavaScript for a different reason.
If my JavaScript code needs to do dynamic sizing of some page element (for those corner cases where CSS is really a pain in the back), then loading the CSS after the JS is running can lead to race conditions, where the element is resized before the CSS styles are applied and thus looks weird when the styles finally kick in. If I load the CSS beforehand, I can guarantee that things run in the intended order and that the final layout is what I want it to be.
Is the recommendation to include CSS before JavaScript invalid?
Not if you treat it as simply a recommendation. But if you treat it as a hard-and-fast rule? Yes, it is invalid.
From Window: DOMContentLoaded event:
Stylesheet loads block script execution, so if you have a <script>
after a <link rel="stylesheet" ...> the page will not finish parsing
and DOMContentLoaded will not fire - until the stylesheet is loaded.
It appears that you need to know what each script relies on and make sure that execution of the script is delayed until after the right completion event. If the script relies only on the DOM, it can resume in ondomready/domcontentloaded. If it relies on images to be loaded or style sheets to be applied, then if I read the above reference correctly, that code must be deferred until the onload event.
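In code, that distinction looks something like this (a sketch):

document.addEventListener('DOMContentLoaded', function () {
    // safe here: code that only needs the DOM tree
});
window.addEventListener('load', function () {
    // safe here: code that needs images loaded and stylesheets applied
});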
I don't think one sock size fits all, even though that is the way they are sold, and I know one shoe size does not fit all. I don't think there is a definitive answer to which to load first, styles or scripts. It is more a case-by-case decision of what must be loaded in what order and what can be deferred until later as not being on the "critical path".
To speak to the observers who commented that it is better to delay the user's ability to interact until the sheet is pretty: there are many of you out there, and you annoy your counterparts who feel the opposite. They came to a site to accomplish a purpose, and delays to their ability to interact with a site while waiting for things that don't matter to finish loading are very frustrating. I am not saying that you are wrong, only that you should be aware that another faction exists that does not share your priority.
This question particularly applies to all of the ads being placed on web sites. I would love it if site authors rendered just placeholder divs for the ad content and made sure that their site was loaded and interactive before injecting the ads in an onload event. Even then I would like to see the ads loaded serially instead of all at once because they impact my ability to even scroll the site content while the bloated ads are loading. But that is just one person's point of view.
Know your users and what they value.
Know your users and what browsing environment they use.
Know what each file does, and what its prerequisites are. Making everything work will take precedence over both speed and pretty.
Use tools that show you the network time line when developing.
Test in each of the environments that your users use. It may be necessary to alter the loading order dynamically (server-side, when creating the page) based on the user's environment.
When in doubt, alter the order and measure again.
It is possible that intermixing styles and scripts in the load order will be optimal; not all of one then all of the other.
Experiment with not just the order in which to load the files but also where: in the head? In the body? After the body? On DOM ready/loaded? On window load?
Consider async and defer options when appropriate to reduce the net delay the user will experience before being able to interact with the page. Test to determine if they help or hurt.
There will always be trade-offs to consider when evaluating the optimal load order. Pretty vs. responsive being just one.
Updated 2017-12-16
I was not sure about the tests in the OP, so I decided to experiment a little. I ended up busting some of the myths.
Synchronous <script src...> will block downloading of the resources
below it until it is downloaded and executed
This is no longer true. Have a look at the waterfall generated by Chrome 63:
<head>
    <script src="//alias-0.redacted.com/payload.php?type=js&delay=333&rand=1"></script>
    <script src="//alias-1.redacted.com/payload.php?type=js&delay=333&rand=2"></script>
    <script src="//alias-2.redacted.com/payload.php?type=js&delay=333&rand=3"></script>
</head>
<link rel=stylesheet> will not block download and execution of
scripts below it
This is incorrect. The style sheet will not block the download, but it will block execution of the script (a little explanation here). Have a look at the performance chart generated by Chrome 63:
<link href="//alias-0.redacted.com/payload.php?type=css&delay=666" rel="stylesheet">
<script src="//alias-1.redacted.com/payload.php?type=js&delay=333&block=1000"></script>
Keeping the above in mind, the results in the OP can be explained as follows:
CSS First:
CSS Download   500 ms: <------------------------>
JS Download    400 ms: <------------------->
JS Execution  1000 ms:                           <-------------------------------------------------->
DOM Ready    @1500 ms:                                                                               ◆
JavaScript First:
JS Download    400 ms: <------------------->
CSS Download   500 ms: <------------------------>
JS Execution  1000 ms:                      <-------------------------------------------------->
DOM Ready    @1400 ms:                                                                          ◆
The 2020 answer: it probably doesn't matter
The best answer here was from 2012, so I decided to test for myself. On Chrome for Android, the JS and CSS resources are downloaded in parallel and I could not detect a difference in page rendering speed.
I included a more detailed write-up on my blog.
I'm not exactly sure how you're testing 'render' time, as you're using JavaScript. However, consider this:
One page on your site is 50 kB, which is not unreasonable. The user is on the East Coast while your server is on the West. The MTU is definitely not 10 kB, so there will be a few trips back and forth. It may take half a second to receive your page and style sheets. Typically (for me) the JavaScript (via jQuery plugins and such) is much more than the CSS. There's also what happens when your Internet connection chokes up midway through the page, but let's ignore that (it happens to me occasionally, and I believe the CSS still renders, but I am not 100% sure).
Since the CSS is in the head, there may be additional connections to get it, which means it can potentially finish before the page does. Anyway, during the time it takes to download the remainder of the page and the JavaScript files (which are many more bytes), the page is unstyled, which makes the site/connection appear slow.
Even if the JavaScript interpreter refuses to start until the CSS is done, the time taken to download the JavaScript code, especially when far from the server, cuts into CSS time, which will make the site look not pretty.
It’s a small optimization, but that’s the reason for it.
Here is a summary of all the previous major answers:
For modern browsers, put the CSS content wherever you like it. They would analyze your HTML file (which they call speculative parsing) and start downloading CSS in parallel with HTML parsing.
For old browsers, keep putting the CSS on top (if you don't want to show a naked, but interactive page first).
For all browsers, put the JavaScript content as far down on the page as possible, since it will halt parsing of your HTML. Preferably, download it asynchronously (i.e., an Ajax call).
There are also some experimental results for a particular case claiming that putting JavaScript first (as opposed to the traditional wisdom of putting CSS first) gives better performance, but no logical reasoning is given for it, and it lacks validation regarding widespread applicability, so you can ignore it for now.
So, to answer the question: yes, the recommendation to include CSS before JavaScript is invalid for modern browsers. Put CSS wherever you like, and put JavaScript as far towards the end as possible.
Steve Souders has already given a definitive answer, but...
I wonder whether there's an issue with both Sam's original test and Josh's repeat of it.
Both tests appear to have been performed on low latency connections where setting up the TCP connection will have a trivial cost.
How this affects the result of the test I'm not sure and I'd want to look at the waterfalls for the tests over a 'normal' latency connection but...
The first file downloaded should get the connection used for the HTML page, and the second file downloaded will get the new connection. (Flushing the <head> early alters that dynamic, but it's not being done here.)
In newer browsers the second TCP connection is opened speculatively so the connection overhead is reduced / goes away. In older browsers this isn't true, and the second connection will have the overhead of being opened.
Quite how/if this affects the outcome of the tests I'm not sure.
I think this won't be true in all cases, because CSS content downloads in parallel but JavaScript code can't. Consider the same case:
Instead of having a single piece of CSS content, take two or three CSS files and try them out in these orders:
CSS..CSS..JavaScript
CSS..JavaScript..CSS
JavaScript..CSS..CSS
I'm sure CSS..CSS..JavaScript will give a better result than all others.
We have to keep in mind that new browsers have worked on their JavaScript engines, parsers and so on, optimizing common code and markup problems in such a way that the problems experienced in ancient browsers such as Internet Explorer 8 or earlier are no longer relevant, not only with regard to markup but also to the use of JavaScript variables, element selectors, etc.
I can see in the not-so-distant future a situation where technology has reached a point where performance is not really an issue any more.
Personally, I would not place too much emphasis on such "folk wisdom." What may have been true in the past might well not be true now. I would assume that all of the operations relating to a web-page's interpretation and rendering are fully asynchronous ("fetching" something and "acting upon it" are two entirely different things that might be being handled by different threads, etc.), and in any case entirely beyond your control or your concern.
I'd put CSS references in the "head" portion of the document, along with any references to external scripts. (Some scripts may demand to be placed in the body, and if so, oblige them.)
Beyond that ... if you observe that "this seems to be faster/slower than that, on this/that browser," treat this observation as an interesting but irrelevant curiosity and don't let it influence your design decisions. Too many things change too fast. (Anyone want to lay any bets on how many minutes it will be before the Firefox team comes out with yet another interim-release of their product? Yup, me neither.)
