Auditing front end performance on web application - javascript

I am currently trying to performance-tune the UI of a company web application. The application will only ever be accessed by staff, so the connection between the server and client will always be considerably faster than if the application were accessed over the internet.
I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try to highlight areas worth investigating. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:
Network Utilization
Combine external CSS (Red warning)
Combine external JavaScript (Red warning)
Enable gzip compression (Red warning)
Leverage browser caching (Red warning)
Leverage proxy caching (Amber warning)
Minimise cookie size (Amber warning)
Parallelize downloads across hostnames (Amber warning)
Serve static content from a cookieless domain (Amber warning)
Web Page Performance
Remove unused CSS rules (Amber warning)
Use normal CSS property names instead of vendor-prefixed ones (Amber warning)
Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views.
For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day?
I've tried searching for this but all I keep coming up with is the standard internet facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

One size does not fit all with these things; the item that immediately jumps out as something that will have a big impact is "leverage browser caching". This reduces bandwidth use, obviously, but also tells the browser it doesn't need to re-parse whatever you've cached. Even if you have plenty of bandwidth, each file you download requires resources from the browser - a thread to manage the download, the parsing of the file, managing memory etc. Reducing that will make the app feel faster.
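To make that concrete - and this is only a sketch, since the question doesn't say what the back end is - with a Node/Express server the relevant caching headers could be set roughly like this (any server has an equivalent setting):

// Hypothetical Express example: long-lived caching for static assets.
// Works best when asset filenames change whenever their content changes
// (e.g. app.a1b2c3.js), so the browser can safely cache them for a long time.
var express = require('express');
var app = express();

app.use('/static', express.static('public', {
    maxAge: '30d', // sets Cache-Control: max-age on static files
    etag: true     // lets the browser revalidate with If-None-Match
}));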
GZIP compression is possibly redundant, and potentially even harmful if you really do have unlimited bandwidth - compressing and decompressing the data consumes resources on both the server and the client. Not much, and I've never been able to measure it - but in theory it might make a difference.
Proxy caching may also help - depending on your company's network infrastructure.
Reducing cookie size may help - not just because of the bandwidth issue, but again managing cookies consumes resources on the client; this also explains why serving static assets from cookie-less domains helps.
However, if you're going to optimize the performance of the UI, you really need to understand where the slow-down is. YSlow and Chrome focus on common problems, many of them related to bandwidth and the behaviour of the browser. They don't know if one particular part of the JS is slow, or whether the server is struggling with a particular dynamic page request.
Tools like Firebug help with that - look at what's happening with the network, and whether any assets take longer than you expect. Use the JavaScript profiler to see where you're spending the most time.
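If you want to narrow things down before reaching for the full profiler, you can bracket suspect sections of your own code with the standard timing APIs (the renderGrid function below is just a placeholder for your own code):

// Quick-and-dirty timing of a suspected slow code path.
console.time('render-grid');
renderGrid(data);               // placeholder for your own function
console.timeEnd('render-grid'); // logs the elapsed time to the console

// Or use the User Timing API, so the marks show up in the profiler timeline:
performance.mark('grid-start');
renderGrid(data);
performance.mark('grid-end');
performance.measure('grid-render', 'grid-start', 'grid-end');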

Most of these tools provide advice based on a one-time check. That catches some issues, but it does not tell you how your users actually experience your site. Real user monitoring is the right way to measure live user performance. You can use the Navigation Timing API to measure page load time and resource timings.
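A minimal sketch of collecting those timings in the page itself, assuming a reasonably modern browser (the /rum endpoint is made up - send the data wherever you can aggregate it):

// Collect basic load metrics once the page has finished loading.
window.addEventListener('load', function () {
    // setTimeout so loadEventEnd has been filled in by the time we read it
    setTimeout(function () {
        var nav = performance.getEntriesByType('navigation')[0];
        var metrics = {
            pageLoad: Math.round(nav.loadEventEnd),
            domReady: Math.round(nav.domContentLoadedEventEnd),
            ttfb: Math.round(nav.responseStart - nav.requestStart),
            // slowest assets - useful for spotting uncached or uncombined files
            slowResources: performance.getEntriesByType('resource')
                .filter(function (r) { return r.duration > 500; })
                .map(function (r) { return { name: r.name, ms: Math.round(r.duration) }; })
        };
        navigator.sendBeacon('/rum', JSON.stringify(metrics)); // hypothetical endpoint
    }, 0);
});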
If you're looking for a service, you can try https://www.atatus.com/, which provides real user monitoring, Ajax monitoring, transaction monitoring and JavaScript error tracking.

Here is a list of additional services you can use to test website speed:
http://sixrevisions.com/tools/free-website-speed-testing/

Related

How to multithread a download in client-side javascript

I have very large (50-500GB) files for download, and the single-threaded download offered by the browser engine (edge, chrome, or firefox) is a painfully slow user experience. I had hoped to speed this up by using multithreading to download chunks of the file, but I keep running into browser sandbox issues.
So far the best approach I've found would be to download and stuff all the chunks into localStorage and then download that as a blob, but I'm concerned about the soft limits on storing that much data locally (as well as the performance of that approach when it comes to stitching all the data together).
Ideally, someone has already solved this (and my search skills weren't up to the task of finding it). The only thing I have found have been server-side solutions (which have straightforward file system access). Alternately, I'd like another approach less likely to trip browser security or limit dialogs and more likely to provide the performance my users are seeking.
Many thanks!
One cannot. Browsers intentionally limit the number of connections to a website. To get around this limitation with today’s browsers requires a plugin or other means to escape the browser sandbox.
Worse, because of a lack of direct file system access, the data from multiple downloads has to be cached and then reassembled into the final file, instead of having multiple writers to the same file (and letting the OS cache handle optimization).
TLDR: Although it is possible to have multiple download threads, the maximum is low (4), and the data has to be handled repeatedly. Use a plugin or an actual download program such as FTP or Curl.
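For completeness, this is roughly what the chunk-and-reassemble approach from the question looks like. It is only a sketch: it assumes the server supports HTTP Range requests, and the whole file still ends up in memory as a Blob, which rules it out for anything in the 50-500GB range:

// Fetch a file in N parallel ranged requests and stitch the parts back together.
// Only workable when the file fits comfortably in memory.
async function downloadInChunks(url, parts) {
    const head = await fetch(url, { method: 'HEAD' });
    const size = Number(head.headers.get('Content-Length'));
    const chunkSize = Math.ceil(size / parts);

    const buffers = await Promise.all(
        Array.from({ length: parts }, function (_, i) {
            const start = i * chunkSize;
            const end = Math.min(size - 1, start + chunkSize - 1);
            return fetch(url, { headers: { Range: 'bytes=' + start + '-' + end } })
                .then(function (r) { return r.arrayBuffer(); });
        })
    );
    return new Blob(buffers); // reassembled in order; the browser still caps real parallelism
}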

Is there a way to know anything about hardware resources of 'platform' accessing webpage?

I'd like to be able to find out about a browser's hardware resources from a web page, or at least a rough estimation.
Even when you detect the presence of modern technology (such as csstransforms3d, csstransitions, requestAnimationFrame) in a browser via a tool like Modernizr, you cannot be sure whether to activate some performance-consuming option (such as fancy 3D animation) or to avoid it.
I'm asking because I have (a lot of) experience with situations where the browser is modern (latest Chrome or Firefox supporting all cool technologies) but OS's CPU, GPU, and available memory are just catastrophic (32bit Windows XP with integrated GPU) and thus a decision based purely on detected browser caps is no good.
While Nickolay gave a very good and extensive explanation, I'd like to suggest one very simple, but possibly effective solution - you could try measuring how long it took for the page to load and decide whether to go with the resource-hungry features or not (Gmail does something similar - if the loading goes on for too long, a suggestion to switch to the "basic HTML" version will show up).
The idea is that, for slow computers, loading any page, regardless of content, should be, on average, much slower than on modern computers. Getting the amount of time it took to load your page should be simple, but there are a couple of things to note:
You need to experiment a bit to determine where to put the "too slow" threshold.
You need to keep in mind that slow connections can cause the page to load slower, but this will probably make a difference in a very small number of cases (using DOM ready instead of the load event can also help here).
In addition, the first time a user loads your site will probably be much slower, because nothing is cached yet. One simple solution is to keep your result in a cookie or local storage and only take loading time into account when the user visits for the first time.
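A minimal sketch of that idea (the threshold, storage key and initUi function are placeholders you would tune and replace for your own app):

// Decide once, on the first visit, whether this machine seems fast enough
// for the resource-hungry version, and remember the decision.
var SLOW_THRESHOLD_MS = 3000; // determine this experimentally
var stored = localStorage.getItem('useFancyEffects');

if (stored === null) {
    window.addEventListener('load', function () {
        var loadTime = performance.now(); // ms since navigation started
        var useFancy = loadTime < SLOW_THRESHOLD_MS;
        localStorage.setItem('useFancyEffects', useFancy ? 'yes' : 'no');
        initUi(useFancy); // placeholder for your own bootstrap code
    });
} else {
    initUi(stored === 'yes');
}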
Don't forget to always, no matter what detection mechanism you use and how accurate it is, allow the user to choose between the regular, resource-hungry version and the faster, "uglier" one - some people prefer better-looking effects even if it means the website will be slower, while others value speed and snappiness more.
In general, the available (to web pages) information about the user's system is very limited.
I remember a discussion of adding one such API to the web platform (navigator.hardwareConcurrency - the number of available cores), where the opponents of the feature explained the reasons against it, in particular:
The number of cores available to your app depends on other workload, not just on the available hardware. It's not constant, and the user might not be willing to let your app use all (or whatever fixed portion you choose) of the available hardware resources;
Helps "fingerprinting" the client.
Too oriented towards the specifics of today. The web is designed to work on many devices, some of which do not even exist today.
These arguments work as well for other APIs for querying the specific hardware resources. What specifically would you like to check to see if the user's system can afford running a "fancy 3D animation"?
As a user I'd rather you didn't use additional resources (such as fancy 3D animation) if it's not necessary for the core function of your site/app. It's sad really that I have to buy a new laptop every few years just to be able to continue with my current workflow without running very slowly due to lack of HW resources.
That said, here's what you can do:
Provide a fallback link for the users who are having trouble with the "full" version of the site.
If this is important enough to you, you could first run short benchmarks to check the performance and fall back to the less resource-hungry version of the site if you suspect that a system is short on resources.
You could target the specific high-end platforms by checking the OS, screen size, etc.
This article mentions this method on mobile: http://blog.scottlogic.com/2014/12/12/html5-android-optimisation.html
WebGL provides some information about the renderer via webgl.getParameter(). See this page for example: http://analyticalgraphicsinc.github.io/webglreport/
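For example (the debug extension is not exposed by every browser, so treat the result as a hint rather than a guarantee):

// Try to read GPU information from a throwaway WebGL context.
var gl = document.createElement('canvas').getContext('webgl');
if (gl) {
    var info = gl.getExtension('WEBGL_debug_renderer_info');
    if (info) {
        console.log(gl.getParameter(info.UNMASKED_VENDOR_WEBGL));   // e.g. "Intel Inc."
        console.log(gl.getParameter(info.UNMASKED_RENDERER_WEBGL)); // e.g. "Intel Iris OpenGL Engine"
    }
    console.log(gl.getParameter(gl.MAX_TEXTURE_SIZE)); // a rough capability indicator
}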

JavaScript: figuring out max memory that could be used in a program

JavaScript in Chrome (or any other browser for that matter, but I'd rather limit the discussion to Chrome to keep it simple) does not provide an API which can be used to observe memory-related information (e.g. how much memory is being used by the current tab where the JS is running).
I am looking for a creative solution for estimating how many bytes I can cache in a JavaScript object in my web page. The problem definition is that I would like to cache as much as possible.
Can anyone think of a decent way of estimating how much memory a tab can handle before it crashes / becomes unusable on a machine? I guess a statistical approach could work out fine for some cases, but I'm looking for something more dynamic.

How do you detect memory limits in JavaScript?

Can browsers enforce any sort of limit on the amount of data that can be stored in JavaScript objects? If so, is there any way to detect that limit?
It appears that by default, Firefox does not:
var data;
$(document).ready(function() {
    data = [];
    for (var i = 0; i < 100000000000; i++) {
        data.push(Math.random());
    }
});
That continues to consume more and more memory until my system runs out.
Since we can't detect available memory, is there any other way to tell we are getting close to that limit?
Update
The application I'm developing relies on very fast response times to be usable (it's the core selling point). Unfortunately, it also has a very large data set (more than will fit into memory on weaker client machines). Performance can be greatly improved by preemptively loading data strategically (guessing what will be clicked). The fallback to loading the data from the server works when the guesses are incorrect, but the server round trip isn't ideal. Making use of every bit of memory I can makes the application as performant as possible.
Right now, it works to allow the user to "configure" their performance settings (max data settings), but users don't want to manage that. Also, since it's a web application, I have to handle users setting that per computer (since a powerful desktop has a lot more memory than an old iPhone). It's better if it just uses optimal settings for what is available on the systems. But guessing too high can cause problems on the client computer too.
While it might be possible on some browsers, the right approach should be to decide what limit is acceptable for the typical customer and optionally provide a UI to define their limit.
Most heavy web apps get away with about a 10MB JavaScript heap size. There does not seem to be a guideline, but I would imagine consuming more than 100MB on desktop and 20MB on mobile is not really nice. For everything beyond that, look into local storage, e.g. the FileSystem API (and you can totally make it PERSISTENT).
UPDATE
The reasoning behind this answer is the following. It is almost never the case that a user runs only one application, let alone keeps only one browser tab open. Ultimately, consuming all available memory is never a good option, so determining the exact upper boundary is not necessary.
The amount of memory a user would be willing to allocate to your web app is guesswork. For example, a highly interactive data analytics tool is quite possible in JS and might need millions of data points. One option is to default to a lower resolution (say, daily instead of per-second measurements) or a smaller window (one day vs. a decade of seconds). But as the user keeps exploring the data set, more and more data will be needed, potentially crippling the underlying OS on the user agent's side.
Good solution is to go with some reasonable initial assumption. Let's open some popular web applications and go to dev tools - profiles - heap snapshots to take a look:
FB: 18.2 MB
GMail: 33 MB
Google+: 53.4 MB
YouTube: 54 MB
Bing Maps: 55 MB
Note: these numbers include DOM nodes and JS Objects on the heap.
It seems, then, that people have come to accept around 50MB of RAM for a useful web site. (Update 2022: nowadays it averages closer to 100MB.) Once you have built your DOM tree, fill your data structures with test data and see how much is OK to keep in RAM.
Using similar measurements with device emulation turned on in Chrome, you can see the consumption of the same sites on tablets and phones, BTW.
This is how I arrived at the 100 MB on desktop and 20 MB on mobile numbers. They seemed reasonable too. Of course, for the occasional heavy user it would be nice to have an option to bump the max heap up to 2 GB.
Now, what do you do if pumping all this data from the server every time is too costly?
One thing is to use Application Cache. It does create mild version management headaches but allows you to store around 5 MB of data. Rather than storing data though, it is more useful to keep app code and resources in it.
Beyond that we have three choices:
SQLite - support was limited and it seems to be abandoned
IndexedDB - a better option, but support is not universal yet (can I use it?)
FileSystem API
Of them, the FileSystem API is the most supported and can use a sizable chunk of storage.
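As a concrete illustration of the client-side-storage idea, here is a minimal IndexedDB sketch (shown instead of the FileSystem API only because it is shorter to demonstrate; the database and store names are made up):

// Open (or create) a small cache database and stash a record in it.
var req = indexedDB.open('app-cache', 1);

req.onupgradeneeded = function (e) {
    e.target.result.createObjectStore('records', { keyPath: 'id' });
};

req.onsuccess = function (e) {
    var db = e.target.result;
    var tx = db.transaction('records', 'readwrite');
    tx.objectStore('records').put({ id: 42, payload: 'large chunk of data kept off the JS heap' });
    tx.oncomplete = function () {
        console.log('cached to disk instead of the JS heap');
    };
};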
In Chrome the answer is Sure!
Go to the console and type:
performance.memory.jsHeapSizeLimit; // the maximum JS heap size available to the page
performance.memory.usedJSHeapSize; // how much you're currently using
arr = []; for(var i = 0; i < 100000; i++) arr.push(i);
performance.memory.usedJSHeapSize; // likely a larger number now
I think you'll want something like the following:
const memory = navigator.deviceMemory
console.log(`This device has at least ${memory}GiB of RAM.`)
You can check out the following: https://developer.mozilla.org/en-US/docs/Web/API/Navigator/deviceMemory
Note: This feature is not supported across all browsers.
Since a web app can't access any system-related information (like the available amount of memory), and since you would prefer not to ask users to manually set their performance settings, you must rely on a solution that gets such information about the user's system (available memory) without asking them. Seems impossible? Well, almost...
I suggest the following: make a Java applet that automatically gets the available memory size (e.g. using Runtime.exec(...) with an appropriate command - this requires the applet to be signed) and returns that information to the server or directly to the web page (with JSObject, see http://docs.oracle.com/javafx/2/api/netscape/javascript/JSObject.html).
However, that assumes all your users can run a Java applet within their browsers, which is not always the case. Instead, you could ask them to install a small piece of software on their machines that measures how much memory your app can use without crashing the browser and sends that information to your server. Of course, you would have to re-write that little program for every OS and architecture (Windows, Mac, Linux, iPhone, Android...), but that's simpler than having to re-write the whole application in order to gain some performance. It's a sort of in-between solution.
I don't think there is an easy solution. There will be some drawbacks, whatever you choose to do. Remember that web applications don't have the reputation of being fast, so if performance is critical, you should consider writing a traditional desktop application.

How to do performance analysis of a heavy JavaScript web app?

I have a huge web app that's switching from an HTML-rendered-on-the-server-and-pushed-to-the-client approach to a let-the-client-decide-how-to-render-the-data-the-server-sends one, which means performance on the client mattered in the past, but it's critical now. So I'm wondering whether, in the current state of affairs, it's possible to profile web apps and extract the same data (like call stacks, "threads", event handlers, number of calls to certain functions, etc.) we use for server-side perf.
I know every browser implements some of these functionalities to some extent (IE dev tools has an embedded profiler, so does Firefox [with Firebug], and Google Chrome has Speed Tracer), but I was wondering if it'd be possible to get, for example, stack traces of sessions. Is it advisable to instrument the code and have a knob to turn on/off the instrumentation? Or it's simply not that useful to go that level in analyzing JavaScript performance?
Fireunit is decent and YUI also provides a profiler, but neither provides stack traces or call frames. Unfortunately, there aren't many JS profiling tools out right now, and none of them are particularly great.
I think it's very important to go to a high level of performance analysis, especially considering the user will deal with the JS app 90%+ of the time directly.
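If you do decide to instrument your own code, a lightweight approach is to wrap the User Timing API behind a flag, so the overhead is near zero when it is switched off (the flag, label and renderResults function below are placeholders):

// Simple on/off instrumentation knob built on the User Timing API.
var PROFILING = /[?&]profile=1/.test(location.search); // e.g. enable with ?profile=1

function timed(label, fn) {
    if (!PROFILING) return fn();
    performance.mark(label + ':start');
    var result = fn();
    performance.mark(label + ':end');
    performance.measure(label, label + ':start', label + ':end');
    return result;
}

// Usage - wrap the expensive parts of the app:
timed('render-results', function () {
    renderResults(data); // placeholder for your own rendering code
});
// performance.getEntriesByType('measure') then gives per-label timings,
// and the measures also appear in the browser's profiler timeline.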
