How to do performance analysis of a heavy JavaScript web app? - javascript

I have a huge Web App that's switching from an HTML-rendered-on-the-server-and-pushed-to-the-client approach to a let-the-client-decide-how-to-render-the-data-the-server-sends approach, which means the performance on the client mattered in the past, but it's critical now. So I'm wondering whether, in the current state of affairs, it's possible to profile Web apps and extract the same data (like call stacks, "threads", event handlers, number of calls to certain functions, etc.) that we use for server-side perf.
I know every browser implements some of these functionalities to some extent (IE dev tools has an embedded profiler, so does Firefox [with Firebug], and Google Chrome has Speed Tracer), but I was wondering if it'd be possible to get, for example, stack traces of sessions. Is it advisable to instrument the code and have a knob to turn the instrumentation on and off? Or is it simply not that useful to go to that level in analyzing JavaScript performance?

Fireunit is decent and YUI also provides a profiler, but neither provides stack traces or call frames. Unfortunately, there aren't many JS profiling tools out right now, and none of them are particularly great.
I think it's very important to go to that level of performance analysis, especially considering the user deals with the JS app directly more than 90% of the time.
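If you do decide to instrument your own code, one low-tech option is to wrap the functions you care about and only collect data while a flag is switched on. The sketch below is illustrative only - the PROFILING_ENABLED flag, the stats object and the instrument() helper are made-up names, and this is no substitute for a real profiler:

    var PROFILING_ENABLED = false; // the "knob": flip to true to collect data
    var stats = {};                // call counts and total time per function name

    function instrument(name, fn) {
        if (!PROFILING_ENABLED) {
            return fn; // no overhead when the knob is off
        }
        return function () {
            var start = new Date().getTime();
            try {
                return fn.apply(this, arguments);
            } finally {
                var entry = stats[name] || (stats[name] = { calls: 0, totalMs: 0 });
                entry.calls += 1;
                entry.totalMs += new Date().getTime() - start;
            }
        };
    }

    // Usage: renderSidebar = instrument('renderSidebar', renderSidebar);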

Related

Is there a way to know anything about hardware resources of 'platform' accessing webpage?

I'd like to be able to find out about a browser's hardware resources from a web page, or at least a rough estimation.
Even when you detect the presence of modern technology (such as csstransforms3d, csstransitions, requestAnimationFrame) in a browser via a tool like Modernizr, you cannot be sure whether to activate some performance-consuming option (such as fancy 3D animation) or to avoid it.
I'm asking because I have (a lot of) experience with situations where the browser is modern (latest Chrome or Firefox supporting all the cool technologies) but the machine's CPU, GPU, and available memory are just catastrophic (32-bit Windows XP with an integrated GPU), and thus a decision based purely on detected browser capabilities is no good.
While Nickolay gave a very good and extensive explanation, I'd like to suggest one very simple, but possibly effective solution - you could try measuring how long it took for the page to load and decide whether to go with the resource-hungry features or not (Gmail does something similar - if the loading goes on for too long, a suggestion to switch to the "basic HTML" version will show up).
The idea is that, for slow computers, loading any page, regardless of content, should be, on average, much slower than on modern computers. Getting the amount of time it took to load your page should be simple, but there are a couple of things to note:
You need to experiment a bit to determine where to put the "too slow" threshold.
You need to keep in mind that slow connections can cause the page to load slower, but this will probably make a difference in a very small number of cases (using DOM ready instead of the load event can also help here).
In addition, the first time a user loads your site will probably be much slower, because nothing is cached yet. One simple solution for this is to keep your result in a cookie or local storage and only take loading time into account when the user visits for the first time (see the sketch after this list).
Don't forget to always, no matter what detection mechanism you used and how accurate it is, allow the user to choose between the regular, resource-hungry version and the faster, "uglier" one - some people prefer better-looking effects even if it means the website will be slower, while others value speed and snappiness more.
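As a rough sketch of the idea (the 3-second threshold, the localStorage key and the enableFancyEffects() hook are all placeholders you would pick yourself):

    // Capture a timestamp as early as possible, e.g. in an inline script in <head>.
    var pageStart = new Date().getTime();

    window.addEventListener('DOMContentLoaded', function () {
        var stored = null;
        try { stored = localStorage.getItem('useFancyEffects'); } catch (e) {}

        if (stored !== null) {
            // Returning visitor: reuse the earlier decision.
            enableFancyEffects(stored === 'true');
            return;
        }

        var loadMs = new Date().getTime() - pageStart;
        var fastEnough = loadMs < 3000; // arbitrary threshold - experiment with this

        try { localStorage.setItem('useFancyEffects', String(fastEnough)); } catch (e) {}
        enableFancyEffects(fastEnough);
    });

    function enableFancyEffects(enabled) {
        // Hypothetical hook: switch the resource-hungry features on or off here.
        document.documentElement.className += enabled ? ' fancy' : ' basic';
    }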
In general, the available (to web pages) information about the user's system is very limited.
I remember a discussion of adding one such API to the web platform (navigator.hardwareConcurrency - the number of available cores), where the opponents of the feature explained the reasons against it, in particular:
The number of cores available to your app depends on other workload, not just on the available hardware. It's not constant, and the user might not be willing to let your app use all (or whatever fixed portion you choose) of the available hardware resources;
Helps "fingerprinting" the client.
Too oriented on the specifics of today. The web is designed to work on many devices, some of which do not even exist today.
These arguments work as well for other APIs for querying the specific hardware resources. What specifically would you like to check to see if the user's system can afford running a "fancy 3D animation"?
As a user I'd rather you didn't use additional resources (such as fancy 3D animation) if it's not necessary for the core function of your site/app. It's sad really that I have to buy a new laptop every few years just to be able to continue with my current workflow without running very slowly due to lack of HW resources.
That said, here's what you can do:
Provide a fallback link for the users who are having trouble with the "full" version of the site.
If this is important enough to you, you could first run short benchmarks to check the performance and fall back to the less resource-hungry version of the site if you suspect that a system is short on resources.
You could target the specific high-end platforms by checking the OS, screen size, etc.
This article mentions this method on mobile: http://blog.scottlogic.com/2014/12/12/html5-android-optimisation.html
WebGL provides some information about the renderer via webgl.getParameter(). See this page for example: http://analyticalgraphicsinc.github.io/webglreport/
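To make the last two suggestions concrete, here is a rough sketch combining a tiny CPU benchmark with a WebGL renderer query (the iteration count, the 50 ms threshold and the software-renderer regex are placeholders; the WEBGL_debug_renderer_info extension and navigator.hardwareConcurrency are not available in every browser):

    // 1. Tiny CPU benchmark: how long does a fixed amount of work take?
    function cpuBenchmarkMs() {
        var now = function () {
            return (window.performance && performance.now) ? performance.now() : Date.now();
        };
        var start = now();
        var x = 0;
        for (var i = 0; i < 1e6; i++) { x += Math.sqrt(i); } // arbitrary workload
        return now() - start;
    }

    // 2. Ask WebGL about the renderer (may be unavailable or masked).
    function rendererString() {
        var canvas = document.createElement('canvas');
        var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
        if (!gl) { return ''; }
        var info = gl.getExtension('WEBGL_debug_renderer_info');
        return String(info ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL)
                           : gl.getParameter(gl.RENDERER));
    }

    var cores = navigator.hardwareConcurrency || 1; // 1 when unsupported
    var slowCpu = cpuBenchmarkMs() > 50;            // threshold needs experimentation
    var softwareGpu = /swiftshader|software|llvmpipe/i.test(rendererString());

    var useFancy3d = !slowCpu && !softwareGpu && cores > 2;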

Javascript function profiling

I'm currently testing Javascript Visualization Toolkits and want to measure execution time, memory consumption etc.
I know how to profile JavaScript with the Chrome dev tools, Google speed analyzer and so on, but I want users to perform the tests on their own and display the results (without using dev tools or installing an extension).
Is there a library or something that can be used to achieve this? Subtracting start and end time for each function does not seem like a good solution.
The best-case scenario would be a library to profile individual functions.
Caveat: you will not be able to get a CPU profile or memory usage from a JS-based testing solution. If that is what you are after, a Chrome extension may very well be the way forward.
If, however, this doesn't bother you and you are after a ready-made solution, Benchmark.js may prove to be a good starting point.
The method it uses is akin to what you mentioned - taking time differences in execution. However, it does so many times (100 or more) in order to average out the statistical errors. This keeps your results free of truly random errors (it does not, however, mean that your data will be meaningful).
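A minimal Benchmark.js example, assuming the library is already loaded on the page and that toolkitA.render()/toolkitB.render() are stand-ins for whatever visualization functions you want to compare:

    var suite = new Benchmark.Suite();

    suite
        .add('toolkit A render', function () {
            toolkitA.render(sampleData); // hypothetical function under test
        })
        .add('toolkit B render', function () {
            toolkitB.render(sampleData);
        })
        .on('cycle', function (event) {
            // e.g. "toolkit A render x 1,234 ops/sec ±1.23% (85 runs sampled)"
            console.log(String(event.target));
        })
        .on('complete', function () {
            console.log('Fastest is ' + this.filter('fastest').map('name'));
        })
        .run({ async: true });

The ops/sec figures can then be written into the DOM, so visitors can run the comparison in their own browser without any dev tools.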

Auditing front end performance on web application

I am currently trying to performance tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably more than if it was on the internet.
I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try and highlight areas that are worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:
Network Utilization
Combine external CSS (Red warning)
Combine external JavaScript (Red warning)
Enable gzip compression (Red warning)
Leverage browser caching (Red warning)
Leverage proxy caching (Amber warning)
Minimise cookie size (Amber warning)
Parallelize downloads across hostnames (Amber warning)
Serve static content from a cookieless domain (Amber warning)
Web Page Performance
Remove unused CSS rules (Amber warning)
Use normal CSS property names instead of vendor-prefixed ones (Amber warning)
Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views.
For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day?
I've tried searching for this but all I keep coming up with is the standard internet facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.
One size does not fit all with these things; the item that immediately jumps out as something that will have a big impact is "leverage browser caching". This reduces bandwidth use, obviously, but also tells the browser it doesn't need to re-parse whatever you've cached. Even if you have plenty of bandwidth, each file you download requires resources from the browser - a thread to manage the download, the parsing of the file, managing memory etc. Reducing that will make the app feel faster.
GZIP compression is possibly redundant, and potentially even harmful if you really do have unlimited bandwidth - it consumes resources on the server to compress the data and on the client to decompress it. Not much, and I've never been able to measure it - but in theory it might make a difference.
Proxy caching may also help - depending on your company's network infrastructure.
Reducing cookie size may help - not just because of the bandwidth issue, but again managing cookies consumes resources on the client; this also explains why serving static assets from cookie-less domains helps.
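If the app happens to be served from a Node/Express stack, the caching and compression items come down to a couple of lines; this is only a sketch of the idea (the header values are arbitrary), and equivalent settings exist for Apache, nginx and IIS:

    var express = require('express');
    var compression = require('compression');
    var app = express();

    // "Enable gzip compression": gzip every compressible response.
    app.use(compression());

    // "Leverage browser caching": long max-age for static assets
    // (ideally ones with versioned/fingerprinted file names).
    app.use('/static', express.static('public', { maxAge: '30d' }));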
However, if you're going to optimize the performance of the UI, you really need to understand where the slow-down is. Y!Slow and Chrome focus on common problems, many of them related to bandwidth and the behaviour of the browser. They don't know if one particular part of the JS is slow, or whether the server is struggling with a particular dynamic page request.
Tools like Firebug help with that - look at what's happening with the network, and whether any assets take longer than you expect. Use the JavaScript profiler to see where you're spending the most time.
Most of these tools provide steps or advice for a one-off check. That catches some issues, but it does not tell you how your users actually experience your site. Real user monitoring is the right way to measure live user performance. You can use the Navigation Timing API to measure page load time and resource timings.
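For example, the Navigation Timing API exposes the browser's own timestamps, so you can compute real page-load figures from actual visitors and send them to your own collector (the /rum endpoint below is just a placeholder):

    window.addEventListener('load', function () {
        // Wait a tick so loadEventEnd is populated before we read it.
        setTimeout(function () {
            var t = performance.timing;
            var metrics = {
                ttfb:     t.responseStart - t.navigationStart,
                domReady: t.domContentLoadedEventEnd - t.navigationStart,
                loadTime: t.loadEventEnd - t.navigationStart
            };
            var xhr = new XMLHttpRequest();
            xhr.open('POST', '/rum', true);
            xhr.setRequestHeader('Content-Type', 'application/json');
            xhr.send(JSON.stringify(metrics));
        }, 0);
    });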
If you want to look for service, you can try https://www.atatus.com/ which provides Real User monitoring, Ajax Monitoring, Transaction monitoring and JavaScript error tracking.
Here is a list of additional services you can use to test website speed:
http://sixrevisions.com/tools/free-website-speed-testing/

Adobe AIR, memory leaks

We all know how web browsers (such as Firefox) are certain to fill up memory consumption because we continuously execute JavaScript code (from websites) that is prone to memory leakage.
I am debating developing a desktop app, and given my experience with JavaScript/CSS/HTML, I thought I would give AIR a try; this way I don't have to use Java (for example) and deal with learning all its GUI Swing stuff.
The problem is that I worry about memory leakage in AIR, since AIR is simply a web browser with an API layer to interact with the Operating System.
Is it plausible to worry about memory leakage in AIR? What should I do about it?
My name is Rob Christensen and I am product manager on Adobe AIR. First, let me say that it is quite easy to build a desktop application, regardless of underlying technology, that consumes a large amount of memory and/or does not free up memory.
In the next release of AIR, we are looking at providing some additional capabilities to the AIR runtime to make it easier to identify memory leaks for JavaScript-based applications. Developers that are building Flash or Flex based applications can already take advantage of the memory profiler included in Flex Builder to track this down. We are hoping to do something similar for JavaScript developers as well.
In my experience talking to developers, memory leaks often occur when objects in memory are never cleaned up. For example, imagine a Twitter client that lists tweets from users based around a search keyword. Over time, more results come in and the list becomes longer. If there is no limit on the maximum number of tweets visible, memory will, of course, go up over time. Instead, the application should impose a reasonable limit on the number of items that appear in that list.
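In JavaScript terms, "impose a reasonable limit" can be as simple as trimming both the data array and the corresponding DOM nodes whenever new items arrive. The names below (MAX_TWEETS, renderTweet) are made up for the example:

    var MAX_TWEETS = 200; // arbitrary cap - tune it for your app
    var tweets = [];

    function addTweet(tweet, listElement) {
        tweets.push(tweet);
        listElement.appendChild(renderTweet(tweet)); // hypothetical render helper

        // Drop the oldest entries so neither the array nor the DOM grows forever.
        while (tweets.length > MAX_TWEETS) {
            tweets.shift();
            listElement.removeChild(listElement.firstChild);
        }
    }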
There are some resources available that describe best practices around handling memory in AIR. Though the examples in the article below are mostly written in ActionScript, the same concepts apply to JavaScript as well.
Performance-Tuning AIR applications
http://www.adobe.com/devnet/air/articles/air_performance.html
If there are memory leaks in the runtime, we jump on these as quickly as we can. We encourage developers to let us know about such issues by reporting them to our team using the following feedback form (www.adobe.com/go/wish).
If you are using an Ajax framework, you may want to look into whether there are known issues with memory leaks for that particular framework.
So, to summarize, yes, you should always worry about memory when building a desktop application -- whether with AIR or C++. As you are developing your application, you should monitor its memory usage so that you can identify any issues sooner rather than later. One way to do this is to run longevity tests -- keep your application open overnight to see if memory is creeping up.
In general, the tools available for browsers are very limited as well. I expect this will change soon as browser vendors also start providing more hooks into their browsers for identifying memory usage. Hope this helps.
Thank you!
-Rob
Product Manager, Adobe AIR
Sure. I've seen AIR apps on Linux swallow gigabytes of memory over time. It's a real blocker for me and stops me using them.
That said, other people on other platforms have no issue with it. Ultimately you need to decide what most of your market will be using and how affected they'll be by any issues in AIR (or other).
If it's not that important (but it's still an issue) submit bug reports and hope Adobe fix things.

Is debugging complex HTML/CSS/JavaScript pages feasible?

Consider a complex rich internet application with lots of user interaction. I'm thinking of extensive drag-drop support, server-side user input validation, custom drawn UI controls such as an Outlook-like calendar, real-time UI feedback, etc... Would such an application be debuggable? I mean, can you easily step through the source code, place breakpoints, view the contents of variables, see the current call stack, use a profiler to pinpoint performance issues, etc...
Yes, why wouldn't it be?
Complexity just means more code to dig through, but tools like console.trace() from Firebug make that easier.
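For instance, console.trace() prints the current call stack at the point where it runs, which helps when some handler fires from deep inside the code (a trivial illustration):

    function onDrop(event) {
        updateCalendar(event);
    }

    function updateCalendar(event) {
        // Logs the call stack (updateCalendar <- onDrop <- ...) to the
        // Firebug / dev tools console.
        console.trace();
    }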
Yes, it would be debug-able.
If you're using IE8 to test your site, you could use the Developer Tools to inspect individual HTML elements and change their CSS on the fly. There's also the ability to break into Javascript from the same interface.
If you're using Firefox, Firebug has almost identical abilities with a different interface.
Safari also has developer tools installed by default, you just have to go through the hoops of enabling them.
When you are designing your application, design it with debuggability and testability in mind. Make sure that individual parts are independently testable, you have enough test data, you have appropriate debug/probe points in your program logic, etc. Essentially, if the complexity is properly managed, debuggability won't be an issue at all.
If your job depended on it, you would find a way! :)
Seriously... a passenger jet has literally millions of parts and yet there are regular routine maintenance checks and if it breaks down it gets fixed. It's a very rare piece of software that approaches that much complexity.
Web app front ends tend to be relatively simple. Essentially you're just pushing some text from the server to the browser and making it pretty; and you're using various parts of the in-browser display as controls, some of which initiate some more text conversations with the server. There are lots of little things that can go wrong, of course, but much of the hardship is simply getting the browser (all of them!) to Do What You Mean.
The only truly difficult problems are those that are intermittent and/or timing sensitive. Those can be a bear to reproduce and trace. That calls for in-depth logical analysis of your source code and/or some specialized testing methods.
