Browser performance impact of lots of js includes - javascript

I'm working on a website for work that uses one master layout for the whole site which includes lots (over 40) of js files. This website is really slow to render. How much overhead is there for the browser to parse and (for lack of a better technical term) "deal with" all these includes? I know that they are cached, so they are not being downloaded on each page view. However, does each include get parsed and executed anew on every page refresh?
At any rate, I imagine there is some overhead in dealing with all these includes, but I'm not sure if it's big or small.

The best way to understand is to measure. Try merging those 40 js files into a single one and see if it makes a big difference. Also, compressing them could reduce bandwidth costs.
There is some overhead to having multiple includes, but as you say the files are cached, so that cost should only apply to the first request. If we ignore that initial overhead, the difference probably won't be enormous compared to the time those scripts spend manipulating the DOM, etc.

It depends on what they do. To test, you could put this before they are all loaded:
<script>
var test_start_time = (new Date()).getTime();
</script>
and this after:
<script>
alert("took: " + (((new Date()).getTime()-test_start_time)/1000) + " seconds");
</script>

Definitely compare and contrast - that will be the most reliable judge.
Nevertheless, I do my best to only load one or two JS files in the head section, then I use jQuery to test for certain elements which might require additional scripts or CSS once the DOM is loaded. For example, I use the SHJS source-highlighting library to stylize pre tags:
if ($('pre').length > 0) {
    $.getScript(svx_cdns + 'pkgs/shjs-0.6/sh_main.min.js', function() {
        $('<link>', {
            'rel': 'stylesheet',
            'type': 'text/css',
            'href': svx_cdns + 'pkgs/shjs-0.6/css/sh_vim.min.css'
        }).appendTo('head');
        sh_highlightDocument('/s/pkgs/shjs-0.6/lang/', '.min.js');
    });
}
That way the page loads very quickly, and then "adjusts" afterwards.

You could try to put all of the .js files into one file and then compress it.
This will also lower the number of requests made by the browser by 39 :).
Hope this helps.

The impact can be significant. Keep in mind that script downloading blocks page rendering.
A couple of things you may try:
Combine as many scripts as you can so you download fewer files.
Minify and compress the combined js files.
Put as many references as you can at the bottom of the page so they don't block rendering (this is not easy and must be done carefully; you might end up allowing interaction with some controls before the necessary JavaScript has downloaded).
Implement parallel download for js files (by default they are downloaded sequentially); a minimal sketch of one approach follows this list.
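As a rough illustration of that last point (a minimal sketch, not production code; the file names are made up), you can create the script elements yourself so the files download in parallel without blocking rendering:
<script>
// Hypothetical bundle names - replace with your own files.
var files = ['/js/lib.min.js', '/js/site.min.js'];
for (var i = 0; i < files.length; i++) {
    var s = document.createElement('script');
    s.src = files[i];
    s.async = true;   // download in parallel; note execution order is then not guaranteed
    document.getElementsByTagName('head')[0].appendChild(s);
}
</script>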

Even if the files are cached, there's still a request to check whether the file has been modified. You can change your caching strategy and set your files never to expire. That way the browser will not even ask if a file has been modified. That means you'll need to add a cache buster to all your URLs. Look at Firebug's Net tab to be sure; I get a 304 Not Modified for all my css/js/imgs.
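Setting that up might look something like this (hypothetical path and version number): the file is served with a far-future Expires/Cache-Control header and carries a version in the URL, so a deploy only bumps the query string:
<!-- cached "forever"; changing v forces a fresh download after a deploy -->
<script type="text/javascript" src="/js/site.min.js?v=42"></script>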
The files are going to have to be parsed every time, but that's probably not the bottleneck.
Try combining all your js into one file. One of our screens included over 100 js files. We created a unified, minified file and the screen's load time went from 10 seconds to less than 3.

Related

LSL HttpServer - Serving large Notecards from Prim Inventory on Secondlife

I am writing a Media-HUD that runs totally on local notecard files stored within the prim's inventory, with notecards named like index.html, style.css, icon.svg etc.
My hope is to use the LSL HttpServer functions, and the script's URL to create a totally self-contained media based HUD that is easy to edit like editing any web page.
This is completely possible on its own, however there is a limitation in that the pages must fit into the memory allocated to the LSL script. Under Mono this is only 64KB.
I want to remove this limitation by somehow, perhaps from JavaScript, reading in each 'file' from a notecard line by line in the user's browser itself (thus getting around the memory limit by only bringing one notecard line into memory at a time).
Is there a way to do this? Can I generate an entire file procedurally in JavaScript, by loading in the strings that make it up line by line, and then serve it as though it were a whole file? I'm not sure how feasible this is.
Any ideas/guidance greatly appreciated!
You could do this through Javascript using XMLHttpRequest. jQuery's wrapper for this is called Ajax. You could request each line individually, which would be slightly slower, or read in a number of lines at a time, at the script's leisure. http_request is not throttled so either works. Note that the loader has to be sent in a single response, because the LSL server has no way of pushing data "piecemeal" like an actual server does.
Notes:
llGetNotecardLine only returns the first 255 bytes per line.
llHTTPResponse must be called within ~20 seconds of the request, so you can't feasibly read more than 20 lines from a notecard at a time.
I'm not sure how this would work for non-DOM filetypes. All files would need to be embeddable in the HTML using the JavaScript DOM. To my knowledge, JavaScript can't arbitrarily create an external file and serve it to itself. Obviously it would not work for non-text filetypes, but you could certainly load in the rest of the HTML, CSS, and HTML5 SVG. Basically, if it can go in a single HTML file, you could load it in through JavaScript.
I have no experience with React, but it gives a good example of what is possible on the UI side when loading things in purely through JavaScript.
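A minimal sketch of what that client-side loader could look like, assuming the LSL script answers requests such as ?file=NAME&line=N with the line's text and something like "EOF" past the last line (the endpoint, parameter names and EOF marker are all assumptions for illustration):
<script>
// Fetch one notecard line per request and stitch the lines back together,
// so only one line at a time has to pass through LSL script memory.
function loadNotecard(file, line, parts, done) {
    var xhr = new XMLHttpRequest();
    // Hypothetical endpoint; in practice this would be the URL the LSL script got from llRequestURL().
    xhr.open('GET', '/lsl-url?file=' + encodeURIComponent(file) + '&line=' + line);
    xhr.onload = function () {
        if (xhr.status === 200 && xhr.responseText !== 'EOF') {
            parts.push(xhr.responseText);
            loadNotecard(file, line + 1, parts, done);   // request the next line
        } else {
            done(parts.join('\n'));                      // whole "file" reassembled in the browser
        }
    };
    xhr.send();
}

// Example: rebuild style.css and hand it to the page itself.
loadNotecard('style.css', 0, [], function (css) {
    var style = document.createElement('style');
    style.textContent = css;
    document.head.appendChild(style);
});
</script>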
So, less than 64 thousand characters in memory at most per script. I will give you some advice that might make your task feasible:
External resources
Minimize the amount of code you have to keep in your notecards by sourcing popular libraries (Bootstrap, React and such) from the web.
You will have to rely on their mirrors' availability, though. But it will greatly reduce the amount of memory needed to provide pretty and functional pages.
Minify your code
Use tools like Uglify or Closure Compiler to make your JavaScript lighter. You must be careful, though, as these tools will fit all your code onto a single long line by default, and you can't read lines longer than 255 characters with LSL. Luckily you can customize the options in these tools to limit the number of characters per line.
Divide and conquer
Since a single script can't handle much memory, make dedicated scripts. One could serve resources (act as a file server, providing your HTML and JS) while another receives API calls and handles the application's logic.

Javascript inlined between script tags vs src DATA URI UTF-8 with percent-encoding

I build mobile first and I use tiny frameworks (under 10kB) which I inline in index.html to save on HTTP requests.
I've looked for days now, and it seems like everyone else who inlines JavaScript does it like this:
<script>UGLIFIED JAVASCRIPT</script>
I do it like this:
<script src="data:application/javascript;utf8, UGLIFIED PERCENT-ENCODED JAVASCRIPT"></script>
You may say percent encoding will make the file much larger, but it actually doesn't because of the way gzip works: it replaces repetition, and it doesn't matter whether the repeated phrase is <div> or %3Cdiv%3E.
My question is- are there any potential advantages of my approach?
PS. One of my ideas was browser caching of file-like DATA-URI elements, but I don't know if this makes sense, since then I would also have to find a way of preventing the load of parts of index.html. Unless I could use the cached elements elsewhere - that would have its use cases too. Thoughts?
First, if your site isn't an SPA, inlining your shared scripts (regardless of method) means you're loading them on every page, negating the value of the browser cache.
Second, the trip across the wire may be similar for encoded vs. unencoded script, but the more important metric is the time it takes for the JavaScript to be parsed and compiled. URL decoding isn't free, and while I don't think it's going to matter much in the grand scheme of things, I see no reason why it would actually be faster to load than plain script within the tag.

URL to address in page data?

We have a JavaScript widget which loads data from a URL.
To reduce round-trips I would like to avoid a second HTTP request and put the data into the HTML page.
It would be great if I could leave the JavaScript widget unchanged.
Is there a URL scheme to read data from the current HTML page?
Example: instead of https://...., something like dom://....
No, but you can use data URIs, if that's a feasible approach for you. It's not the best choice for large amounts of data though.
I'm not sure I've completely understood your needs; zeroflagL's answer may well be correct. Possibly also read
http://blog.teamtreehouse.com/using-data-uris-speed-website before discarding the option.
Otherwise, although it might take a little adaptation of your JavaScript, consider that HTML5 has a feature called data blocks;
read about it at https://developer.mozilla.org/en/docs/Using_XML_Data_Islands_in_Mozilla.
Leveraging this feature you can reduce round-trips and put one or more datasets into the HTML page, in this case into namespaced script blocks like this:
<script id="purchase-order" type="application/xml">
<purchaseOrder xmlns="http://entities.your.own.domain/PurchaseOrderML">
or this:
<script id="another-set-of-data" type="application/xml">
<dataSet xmlns="http://entities.your.own.domain/DataSetML">
Your JavaScript can then access the data by reading it from the current HTML page. Example:
<script>
  function runDemo() {
    var orderSource = document.getElementById("purchase-order").textContent;
    var parser = new DOMParser();
    var doc = parser.parseFromString(orderSource, "application/xml");
    var lineItems = doc.getElementsByTagNameNS("http://entities.your.own.domain/PurchaseOrderML", "lineItem");
    var firstPrice = lineItems[0].getElementsByTagNameNS("http://entities.your.own.domain/PurchaseOrderML", "price")[0].textContent;
    document.body.textContent = "The purchase order contains " + lineItems.length +
      " line items. The price of the first line item is " + firstPrice + ".";
  }
</script>
I am also an advocate for data URIs, as they are the most transparent (client code-wise) way to embed data in web pages.
They were, however, first used to embed small images and other resources that would hamper performance due to the connection overhead and the parallel download limitations of HTTP/1. The tradeoff is delicate, since encoding data as data URIs can cause a (ballpark) 30% increase in data size; however, the critical point where data URIs stop being helpful is around the size of small images, which is usually orders of magnitude above serialized data.
The critical point here for a single page application scenario is that there's more than the single data-fetch roundtrip to consider.
Embedding the data for use by page's scripts on otherwise static HTML has the following implications:
The HTML itself can't be cached (or rather, only with a separate cached copy for every different set of embedded data and every version of the page).
The (multiple versions of the) entire page must be generated on a server that also has knowledge of how to get the data.
The inlined data might block page rendering for a user-perceivable time (this might be worked around by embedding the data at the end of the HTML, but client script execution would probably have to wait entirely, which also makes things like displaying a loading indicator harder to implement).
On the other hand, keeping the data on a separate round trip, despite the round trip itself, would:
Probably keep your already working implementation as it is
Allow for clients to use the cached HTML and scripts which would only need a refresh on an actual version change (there was a failed specification called AppCache for this purpose, now superseded by the experimental Service Workers)
Allow for that HTML and scripts to be fully static assets that can be served from 'dumb' CDNs that are faster and closer to the client browser and don't need to query the database or run any server-side code
All those are big wins in my view, so I recommend you seriously consider whether you need to embed the data, because it could be an early optimization that leads to a lot of pain and an actual decrease in performance! Especially because SPDY and now HTTP/2 are already coming in to address these round-trip and connection-number issues.
You could put the data, whatever it is, in the global window object, and use it later on.
But that requires you to change the code.
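A minimal sketch of that approach (the property name and data are made up): the page embeds the data, and the widget checks for it before falling back to its usual HTTP request:
<script>
// Emitted by the server directly into the HTML page.
window.WIDGET_DATA = { items: [1, 2, 3], updated: "2016-01-01" };
</script>
<script>
// Inside the widget: use the embedded data if present, otherwise fetch it as before.
var data = window.WIDGET_DATA;
if (!data) {
    // fall back to the original XMLHttpRequest call here
}
</script>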

Difference in performance and memory footprint between referencing a Javascript using src or directly injecting it in the HEAD

What are the differences in performance and memory footprint, if any, between these approaches:
1. Using Src
<script type="text/javascript" src="1MBOfjavascript.js"></script>
2. Directly injecting in the Head
$('head').append("<script type='text/javascript'>1MBOfJavascriptCode</script>");
I'm interested because we are developing a Cordova app where we use the second method to inject into the DOM a previously downloaded JavaScript bundle read from Local Storage.
Given that the script will probably grow in size, I'd like to know whether the second method could run into memory issues or other DOM problems.
I believe the overhead in such cases should be insignificant, as the main processing/memory consumption depends on what the script actually does. I.e. the memory used by the file itself would be the total size of the script, which is what, 1MB at most? During execution, however, the same script can easily use up 100MB.
Anyway, coming to the point:
Plain inclusion in the markup is better if the script has to be loaded in all cases, as it skips an extra script execution and does not cause a re-render by the browser after the append.
The append option should be used where you only need the script under specific conditions on the client side, so that you don't load it unnecessarily.
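For the Cordova case described in the question, the append approach might look roughly like this (a minimal sketch; the storage key is made up, and note the bundle only runs once this code runs):
<script>
// Inject a previously downloaded bundle that was saved to Local Storage.
var bundle = localStorage.getItem('jsBundle');   // hypothetical key
if (bundle) {
    var s = document.createElement('script');
    s.text = bundle;                             // inline the code instead of pointing at a URL
    document.head.appendChild(s);                // parsed and executed immediately on append
}
</script>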
The browser goes through all the elements before rendering the page, so I would imagine it is much the same. If anything, if you add scripts after the page has loaded, chances are you will get "call to undefined function" errors when you call a function that was added after the load.
My advice would be to add them at load time using src, but keep things tidy.
JavaScript's impact depends a lot on how and where you load your file; there are many ways to load a file or code.
The first method you mentioned is the conventional way to load a JavaScript file, i.e. via src, usually just before the closing </head> tag. Nowadays the trend is to load the file just before the closing </body> tag instead; it speeds up application load and gets the DOM ready faster.
The second method is not obviously a good way to load JavaScript, because some code should only run once the DOM is ready, and since you are loading your JavaScript with a jQuery append, it depends on the jQuery file having loaded first, which delays your code's execution. It is also possible that some of your code will not produce the desired output (depending on how you call your functions and how much they depend on DOM ready).
I would prefer to load a JavaScript file with the first method, and at the bottom of the page/app if possible.
Asynchronous loading can also be tried.
I think it comes down to browser behaviour: after retrieving the response from the web server, the browser parses all the elements first to build the DOM, and only once the DOM is ready does your script get executed. Based on that, the main difference is simply which code runs first: the referenced script or the injected one.
About the second method: I think the JavaScript engine in any browser can handle it easily and without problems. The engine doesn't really care about the size; it only affects how long the script takes to load and then execute. The real issue with injecting elements into the DOM is when you do it in a loop, where there might be a memory problem.
When you say big JavaScript files, yes, we should think about memory, but as far as the two methods go it's mostly a question of which gets executed first. That's my opinion.

Where to put JavaScript configuration functions?

What is the general developer opinion on including JavaScript code directly in the page instead of in a separate file referenced by a script tag?
So we all agree that jquery needs to be included with a script file, like below:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"
type="text/javascript"></script>
My question is about getting functions onto a page that is not on all pages of a site. Do we include the functions like the ones below directly in the page, or in a global include file (like the one above) called mysite.js?
$(document).ready(function(){
$(".clickme").click(function(event){
alert("Thanks for visiting!");
});
});
OK. So the question is: if the code above is going to apply to every class="clickme" element on specific pages only, and you have the ability to put it either in a separate include file called mysite.js or in the content of the page, which way would you go?
Arguments are:
If you include it in the page, it is only loaded on the specific pages that need the JS functionality.
Or you include it in a file, which the browser caches, but then jQuery has to spend x ms finding out that the handler has nothing to attach to on a page without the "clickme" class in it.
EDIT 1:
OK. One point I want to make sure people address: what is the effect of having the document.ready function reference things that do not exist in the page? Will that cause any kind of delay in the browser? Is the impact significant?
First of all - $("#clickme") will find the id="clickme" not class="clickme". You'd want $(".clickme") if you were looking for classes.
I (try to) never put any actual JavaScript code inside my XHTML documents, unless I'm working on testing something on a page quickly. I always link to an external JS file to load the functionality I want. Browsers without JS (like web crawlers) will not load these files, and it makes your code look much cleaner to the "view source".
If I need a bit of functionality only on one page - it sometimes gets its own include file. It all depends on how much functionality / slow selectors it uses. Just because you put your JS in an external JS file doesn't mean you need to include it on every page.
The main reason I use this practice - if I need to change some JavaScript code, it will all be in the same place, and change site wide.
As far as the performance question goes: some selectors take a lot of time, but most of them (especially those that deal with IDs) are very quick. Searching for a selector that doesn't exist is a waste of time, but when you weigh that against the wasted time of a second script HTTP request (which blocks the DOM from being ready, by the way), searching for an empty selector will generally win as the lesser of the two evils. The jQuery 1.3 performance notes and SlickSpeed will hopefully help you decide how many ms you are really losing to searching for a class.
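If the empty-selector cost still bothers you, one middle ground (a minimal sketch; /js/clickme.js is a made-up path) is to keep the page-specific code in its own file and have the site-wide file pull it in only when the page actually needs it:
$(document).ready(function () {
    // Only fetch the page-specific script when .clickme elements are present.
    if ($(".clickme").length) {
        $.getScript("/js/clickme.js");   // hypothetical file containing the click handler
    }
});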
I tend to use an external file so if a change is needed it is done in one place for all pages, rather than x changes on x pages.
Also if you leave the project and someone else has to take over, it can be a massive pain to dig around the project trying to find some inline js.
My personal preference is:
completely global functions, plugins and utilities - in a separate JavaScript file referenced on each page (much like the jQuery file)
specific page functionality - in a separate JavaScript file, referenced only on the page it is needed for (see the example below)
Remember that you can also minify and gzip the files.
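For example (made-up file names), the master layout references the global file on every page, while a specific page adds its own file:
<!-- in the master layout, on every page -->
<script type="text/javascript" src="/js/global.js"></script>

<!-- only on the page that needs it -->
<script type="text/javascript" src="/js/product-gallery.js"></script>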
I'm a firm believer in Unobtrusive JavaScript and therefore try to avoid having any JavaScript code in with the markup, even if the JavaScript is in its own script block.
I agree: never have code in your HTML page. In ASP.NET I have programmatically added a check for each page to see whether it has a same-named JavaScript file.
E.g. MyPage.aspx will look for MyPage.aspx.js.
For my MVC master page I have this code to add a JavaScript link:
// Add each page's javascript file
if (Page.ViewContext.View is WebFormView)
{
    WebFormView view = Page.ViewContext.View as WebFormView;
    string shortUrl = view.ViewPath + ".js";
    if (File.Exists(Server.MapPath(shortUrl)))
    {
        _clientScriptIncludes["PageJavascript"] = Page.ResolveUrl(shortUrl);
    }
}
This works well because:
It is automagically included in my files
The .js file lives alongside the page itself
Sorry if this doesn't apply to your language/coding style.
