Should I minify and concatenate JavaScript and CSS when using HTTP/2/SPDY?

Given the advantages of connection reuse and multiplexing in HTTP2 (and SPDY) and the availability of gzip compression, is the effort of adding a minification and concatenation step into a build process justified?

According to Surma from the Chrome team, on HTTP/2 you can and in fact should stop bundling: it no longer buys you anything, and serving files individually allows more efficient browser caching:
https://www.youtube.com/watch?v=w--PU4HO9SM (at 1:10)
I think that minifying or obfuscating can still be desirable, depending on your needs.

Testing is the only true means of deciding whether to minify and/or concatenate when resources are being served via H2/SPDY.
The idea behind HTTP/2 (H2) is to serve small static resources over a single multiplexed TCP connection. Tests have shown that most sites see a speed increase by not concatenating resources (and even by not using a CDN). It all depends on the sizes of the resources being served over H2/SPDY. I have seen one site gain 30%+ in speed and others show no change.
With that in mind, my suggestion is to test with all resources minified but not concatenated. I'd also test serving all common resources yourself (not from a CDN; that, too, depends on where your clients are).
Resources:
Akamai
Columnist Patrick Stox
HTTP/2 101 (Chrome Dev Summit 2015)

Yes, you still need to minify and concatenate JS and CSS files, for the following reasons (see the sketch after this list):
Script minification and SPDY compression are not the same thing. A good minifier knows how to take advantage of local scope and replace verbose variable names with short, repetitive, compression-friendly names.
SPDY multiplexes your requests so you don't have to stitch the scripts together yourself, but not all browsers support SPDY.
SPDY 2 and 3 are binary incompatible. When a browser supports 2 and the server advertises 3, the connection falls back to HTTP 1.1 over SSL, and you get no SPDY benefits at all.
Loading 10 files over one connection still incurs 10 fetches on the server side. Combining the files reduces disk I/O.
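A quick way to see the first point for yourself: gzip cannot shorten identifiers, so minify + gzip beats gzip alone. A minimal Node.js sketch (the sample function and its hand-minified form are purely illustrative):

    // Compare the gzipped size of verbose source vs. a minified version of the same code.
    const zlib = require('zlib');

    const verbose = `
    function calculateTotalPrice(itemList) {
      var runningTotal = 0;
      for (var itemIndex = 0; itemIndex < itemList.length; itemIndex++) {
        runningTotal += itemList[itemIndex].unitPrice * itemList[itemIndex].quantity;
      }
      return runningTotal;
    }`;

    // The same function with local names shortened and whitespace stripped.
    const minified =
      'function calculateTotalPrice(a){var t=0;for(var i=0;i<a.length;i++)' +
      't+=a[i].unitPrice*a[i].quantity;return t}';

    console.log('gzip only     :', zlib.gzipSync(verbose).length, 'bytes');
    console.log('minify + gzip :', zlib.gzipSync(minified).length, 'bytes');

The minified version should compress to fewer bytes, and it also parses faster, which gzip alone cannot give you.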
Your question is comparable to "can I care less about writing efficient code now that the machine can run faster?"
The answer is NO. Don't be lazy. Code properly.

Related

How to multithread a download in client-side javascript

I have very large (50-500GB) files for download, and the single-threaded download offered by the browser engine (edge, chrome, or firefox) is a painfully slow user experience. I had hoped to speed this up by using multithreading to download chunks of the file, but I keep running into browser sandbox issues.
So far the best approach I've found would be to download and stuff all the chunks into localStorage and then download that as a blob, but I'm concerned about the soft limits on storing that much data locally (as well as the performance of that approach when it comes to stitching all the data together).
Ideally, someone has already solved this (and my search skills weren't up to the task of finding it). The only things I have found have been server-side solutions (which have straightforward file system access). Alternatively, I'd like another approach that is less likely to trip browser security or storage-limit dialogs and more likely to provide the performance my users are seeking.
Many thanks!
One cannot. Browsers intentionally limit the number of connections to a website. To get around this limitation with today’s browsers requires a plugin or other means to escape the browser sandbox.
Worse, because of a lack of direct file system access, the data from multiple downloads has to be cached and then reassembled into the final file, instead of having multiple writers to the same file (and letting the OS cache handle optimization).
TLDR: Although it is possible to have multiple download threads, the maximum is low (browsers typically allow only a handful of connections per host), and the data has to be handled repeatedly. Use a plugin or an actual download tool such as an FTP client or curl.
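For completeness, a rough sketch of the chunked approach the question describes, assuming the server supports HTTP Range requests. The function name, chunk size and the final Blob assembly are illustrative; the per-host connection cap and the in-memory buffering described above still apply:

    // Fetch a large file in byte ranges and reassemble the pieces client-side.
    async function downloadInChunks(url, totalSize, chunkSize) {
      chunkSize = chunkSize || 8 * 1024 * 1024; // 8 MiB per chunk, purely illustrative
      const ranges = [];
      for (let start = 0; start < totalSize; start += chunkSize) {
        ranges.push([start, Math.min(start + chunkSize, totalSize) - 1]);
      }
      // The browser still caps concurrent connections per host, so extra requests queue,
      // and every chunk has to sit in memory until the final Blob is assembled.
      const parts = await Promise.all(ranges.map(function (range) {
        return fetch(url, { headers: { Range: 'bytes=' + range[0] + '-' + range[1] } })
          .then(function (res) { return res.blob(); });
      }));
      return new Blob(parts); // hand this to URL.createObjectURL() for a save link
    }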

Should I worry about memory consumption of a 100+ module html5 app?

Say I have an MVC-ish html5 app that consists of 100+ small modules. I'd like it to run as smooth as possible even on a tablet or a smartphone.
Since only a handful of the 100+ modules are in use simultaneously and I'd say half of them aren't even used during an ordinary session with the app, loading them as a single concatenated js file and keeping it all in memory feels kind of icky.
I currently use CujoJS curl, which is an AMD loader. It's great for development and I think it fits nicely in some production environments too. The downside of course is that individual files take longer to download, but I don't really consider it an issue in this case. What I'm worried about is the memory usage over time, like if the user never closes the window and more modules keep accumulating in the memory as they explore the app. As far as I know, AMD loaders don't provide any means to unload modules.
The question is, should I really be worried about memory consumption at all in this situation? As an exaggerated example, would the difference in memory usage between 200KiB (on-demand essential modules) and 4000KiB (everything from essentials to practically never used features) of js code be negligible even on a mobile device?
If I should be concerned about memory consumption, what should I do to minimize wasted memory? I can only think of minimizing the amount of code in memory by planning ahead, writing efficient code and unloading unneeded modules. Or as a last resort, by reloading the page at some points.
Bonus question: (How) can I unload modules from curl cache? I've read that in RequireJS it is possible with a little tweaking but I've found nothing for curl.
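For reference, the RequireJS tweak alluded to in the last paragraph is its undef() call, which removes a module from the loader's registry; I am not aware of a documented equivalent in curl. A minimal sketch (the module name and its run() method are hypothetical), with the caveat that undef() only drops the loader's reference, so anything else still holding the module keeps it in memory:

    // Load a rarely used module on demand, then let RequireJS forget it afterwards.
    require(['reports/rarelyUsedFeature'], function (feature) {
      feature.run();                                // use the module...
      require.undef('reports/rarelyUsedFeature');   // ...then drop it from the registry
    });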

Auditing front end performance on web application

I am currently trying to performance tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably more than if it was on the internet.
I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try and highlight areas that are worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:
Network Utilization
Combine external CSS (Red warning)
Combine external JavaScript (Red warning)
Enable gzip compression (Red warning)
Leverage browser caching (Red warning)
Leverage proxy caching (Amber warning)
Minimise cookie size (Amber warning)
Parallelize downloads across hostnames (Amber warning)
Serve static content from a cookieless domain (Amber warning)
Web Page Performance
Remove unused CSS rules (Amber warning)
Use normal CSS property names instead of vendor-prefixed ones (Amber warning)
Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views.
For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day?
I've tried searching for this but all I keep coming up with is the standard internet facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.
One size does not fit all with these things; the item that immediately jumps out as something that will have a big impact is "leverage browser caching". This reduces bandwidth use, obviously, but also tells the browser it doesn't need to re-parse whatever you've cached. Even if you have plenty of bandwidth, each file you download requires resources from the browser - a thread to manage the download, the parsing of the file, managing memory etc. Reducing that will make the app feel faster.
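As an illustration of that advice, a minimal sketch of long-lived caching headers using plain Node.js (the directory layout and the /static/ prefix are assumptions; the same headers can be set from any server or framework):

    // Serve fingerprinted static assets with a long max-age; keep HTML revalidated.
    const http = require('http');
    const path = require('path');
    const fs = require('fs');

    const root = path.join(__dirname, 'public'); // hypothetical asset directory

    http.createServer(function (req, res) {
      const file = path.join(root, path.normalize(req.url)); // naive lookup, sketch only
      fs.readFile(file, function (err, body) {
        if (err) { res.writeHead(404); return res.end(); }
        if (req.url.indexOf('/static/') === 0) {
          // Fingerprinted assets can be cached for a year; a new filename busts the cache.
          res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
        } else {
          // HTML is revalidated on each visit so users pick up new asset versions.
          res.setHeader('Cache-Control', 'no-cache');
        }
        res.end(body);
      });
    }).listen(8080);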
GZIP compression is possibly redundant, and potentially even harmful if you really do have unlimited bandwidth: it consumes resources on both the server and the client to compress and decompress the data. Not much, and I've never been able to measure it, but in theory it might make a difference.
Proxy caching may also help - depending on your company's network infrastructure.
Reducing cookie size may help - not just because of the bandwidth issue, but again managing cookies consumes resources on the client; this also explains why serving static assets from cookie-less domains helps.
However, if you're going to optimize the performance of the UI, you really need to understand where the slow-down is. Y!Slow and Chrome focus on common problems, many of them related to bandwidth and the behaviour of the browser. They don't know if one particular part of the JS is slow, or whether the server is struggling with a particular dynamic page request.
Tools like Firebug help with that - look at what's happening with the network, and whether any assets take longer than you expect. Use the JavaScript profiler to see where you're spending the most time.
Most of these tools provide steps or advice for a one-time check. That solves a few issues, but it does not tell you how your users actually experience your site. Real user monitoring is the right approach for measuring live user performance. You can use the Navigation Timing API to measure page load times and resource timings.
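For example, a minimal sketch of collecting those timings in the page itself, using the standard Navigation Timing and Resource Timing entries (in a real setup you would send the numbers to your monitoring endpoint instead of the console):

    // Log page load, time to first byte and per-resource durations after the page loads.
    window.addEventListener('load', function () {
      // Wait one tick so loadEventEnd has been recorded.
      setTimeout(function () {
        var nav = performance.getEntriesByType('navigation')[0];
        if (nav) {
          console.log('page load (ms):', Math.round(nav.loadEventEnd - nav.startTime));
          console.log('time to first byte (ms):', Math.round(nav.responseStart - nav.requestStart));
        }
        performance.getEntriesByType('resource').forEach(function (entry) {
          console.log(entry.name, Math.round(entry.duration) + ' ms');
        });
      }, 0);
    });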
If you are looking for a service, you can try https://www.atatus.com/, which provides real user monitoring, Ajax monitoring, transaction monitoring and JavaScript error tracking.
Here is a list of additional services you can use to test website speed:
http://sixrevisions.com/tools/free-website-speed-testing/

What tricks did Fabrice Bellard use to make his PC emulator in Javascript so fast?

Fabrice Bellard's PC emulator implemented in Javascript is impressively fast--it boots a small Linux image in the browser within a few seconds.
What techniques were used to get this performance?
I believe that giving general credit to the "speed" of modern JS interpreters is largely off-topic in a list of Bellard's techniques (since he does not replace the browser's engine). "What are his optimization techniques?" is a great question, and I would like to see a more detailed record of them.
The points I can name so far (see the sketch after this list):
(Optional) JS typed arrays avoid unnecessary memory allocation dynamics (resizing). A fixed element type and size allows allocating contiguous blocks of memory (with no variable-length element segments inside the block) and addressing elements of a single type uniformly.
Fast boot by a custom minimalistic booter (see linuxstart code published by Fabrice, also see his project called TCCBOOT http://bellard.org/tcc/tccboot.html)
Optimized uncompressed embedded kernel (See the kernel configuration, it is extra tiny and optimized for small "linuxes").
Minimal number of devices (the devices are super standard and easy for the kernel to recognize). So far I have only properly studied the serial device, but the rest benefit from similar properties. Ramdisk initialization is rather slow, though.
Small (2048 blocks) uncompressed root.bin ext2 system. The root system consists of minimal combination (rootfs, proc, tmpfs, devpts). No swap.
(Unsure) He has patched the buffer size for ttyS0 (the serial port device, or, to be precise, the kernel UART driver), which communicates with the terminal. Communication is in any case buffered by his term.js binding (I have found no transmit buffer in the UART itself). Note that emulation (as in this case) can be much faster than the real thing.
Please also mind the browser cache when refreshing the page. It kicks in very fast if it's all in memory (optimized by the host OS). Direct (in-memory, if cached) copying with load_binary() of the uncompressed binary segments (start_linux.bin, vmlinux26.bin, root.bin) means no hard-disk I/O limitations.
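To illustrate the typed-array point above, a minimal sketch of how an emulator can model guest RAM as one contiguous, fixed-size buffer (the size and helper names are illustrative, not Bellard's actual code):

    var RAM_SIZE = 16 * 1024 * 1024;     // 16 MiB of guest memory; a power of two so masking works
    var ram = new Uint8Array(RAM_SIZE);  // one contiguous allocation, never resized

    function readByte(addr)         { return ram[addr & (RAM_SIZE - 1)]; }
    function writeByte(addr, value) { ram[addr & (RAM_SIZE - 1)] = value & 0xff; }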
I used the excellent http://jsbeautifier.org/ to prettify the minified JS code. It looks to me like painstakingly written, un-fussy, sensible procedural code. It's a magnificent achievement in itself, but the credit has to be shared with the phenomenal performance of modern JavaScript interpreters.
As of 2018, Fabrice has used asm.js and WebAssembly to achieve this.
You can read more here.
If you look at the inspector (Chrome DevTools, or Firefox's Inspector), you will see some wasm:// sources (on Firefox), implying that he used WebAssembly to achieve this.
Maybe using a C to JavaScript compiler? Like Emscripten: http://code.google.com/p/emscripten/

When serving JavaScript files, is it safe to gzip it by default

The question fits in the title. I am not interested in what the spec recommend but what the mix of browsers currently deployed support the best.
Google Docs gzips their JS.
The Google AJAX Libraries API CDN gzips JS.
Yahoo gzips the JS for their YUI files.
The Yahoo home page gzips their JS.
So I think that the answer to my question is yes, it is fine to gzip JS for all browsers. But you'll let me know if you disagree.
If you gzip your .js (or any other content), two problems may arise: 1. gzip adds latency, which is wasted on files that don't compress well (compressing and decompressing takes time); 2. an older browser may not understand the gzipped content. To avoid problem 2, you should examine the Accept-Encoding and User-Agent headers (or other parts of the HTTP request) to decide whether the browser supports gzip. Modern browsers should not have problems with gzipped content.
An excerpt from http://httpd.apache.org/docs/2.2/mod/mod_deflate.html: "At first we probe for a User-Agent string that indicates a Netscape Navigator version of 4.x. These versions cannot handle compression of types other than text/html. The versions 4.06, 4.07 and 4.08 also have problems with decompressing html files. Thus, we completely turn off the deflate filter for them."
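As a minimal sketch of that negotiation in plain Node.js (the app.js file being served is hypothetical): compress only when the client's Accept-Encoding advertises gzip, and send Vary: Accept-Encoding so caches keep the two variants apart:

    const http = require('http');
    const zlib = require('zlib');
    const fs = require('fs');

    http.createServer(function (req, res) {
      const clientAcceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '');
      res.setHeader('Content-Type', 'application/javascript');
      res.setHeader('Vary', 'Accept-Encoding'); // keep caches from mixing up the variants
      const source = fs.createReadStream('./app.js'); // hypothetical file
      if (clientAcceptsGzip) {
        res.setHeader('Content-Encoding', 'gzip');
        source.pipe(zlib.createGzip()).pipe(res);
      } else {
        source.pipe(res);
      }
    }).listen(8080);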
No, it's not. Firstly, the browser must declare that it accepts gzip encoding, as covered in Supercharging JavaScript. On top of that, certain versions of IE6 have broken implementations, which is still an issue if they haven't been patched. More in The Internet Explorer Problem (with gzip encoding).
