JavaScript in Chrome (or any other browser for that matter, but I'd rather limit the discussion to Chrome to keep things simple) does not provide an API that can be used to observe memory-related information (e.g. how much memory is being used by the current tab where the JS is running).
I am looking for a creative solution for estimating how many bytes I can cache in a JavaScript object on the page my script is running in. The problem definition is that I would like to cache as much as possible.
Can anyone think of a decent way of estimating how much memory a tab can handle before it will crash or become unusable on a given machine? I guess a statistical approach could work out fine for some cases, but I'm looking for something more dynamic.
I am currently reading High Performance JavaScript by Nicholas C. Zakas and in the book he says things like:
Comparing the two pieces of code shows that using the Selectors API is
2 to 6 times faster across browsers (Figure 3-6).
What I'm looking for is a browser-based tool that lets me capture and measure the performance of a given piece of JavaScript and compare it against another script that uses a different approach (e.g., the Selectors API vs. getElementsByTagName).
I've used Chrome and Firebug, but neither of them really seem to give me the kind of comparisons he's doing here. Am I using these tools incorrectly or is there a new tool I'm not familiar with that I should be using?
The most popular approach is just to use the free online service at http://jsperf.com/.
Or clone it from GitHub.
It has one big advantage over manual testing: it uses a Java applet that gives access to a nanosecond timer, while JS timers (Date objects) can only resolve to milliseconds.
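To see why timer resolution matters, here is a minimal sketch of a hand-rolled benchmark using Date-based timers; the string-joining workload is just a placeholder, and only the aggregate time over many iterations is meaningful because a single run is far below one millisecond.
// Minimal hand-rolled benchmark sketch; the workload below is just a placeholder.
function timeIt(label, fn, iterations) {
    var start = new Date().getTime();           // Date timers: millisecond resolution only
    for (var i = 0; i < iterations; i++) {
        fn();
    }
    var elapsed = new Date().getTime() - start; // total ms across all iterations
    console.log(label + ": " + elapsed + " ms for " + iterations + " runs");
}

// Because a single run is far below 1 ms, only the aggregate time is meaningful.
timeIt("array join", function () {
    ["a", "b", "c"].join("");
}, 100000);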
The Chrome developer tools are the way to go. There are three awesome features.
Timeline. This will show you the rendering time of any executing JavaScript on a timeline graph.
Heap snapshot. This will take a snapshot of your current JS heap, including your entire object chain. It also shows how much memory each element is taking up, which provides a good way to find the places where your code is chewing through memory.
CPU profile. Shows how much CPU time each function is eating up; also useful for finding places to optimize and perhaps introduce web workers.
If you use the Chrome beta channel, check out the Speed Tracer extension (by Google). It's basically an enhanced timeline. If you're a jQuery person, the beta also has CSS selector profiling. I have yet to use it, so I can't speak to its usefulness.
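As a rough illustration of how you could do the kind of comparison the question asks about from the console, console.time()/console.timeEnd() (and console.profile()/console.profileEnd()) are supported by the Chrome developer tools and Firebug; the selector strings and iteration count below are just examples.
// Sketch: timing two equivalent DOM queries from the dev tools console.
console.time("Selectors API");
for (var i = 0; i < 10000; i++) {
    document.querySelectorAll("div");
}
console.timeEnd("Selectors API");            // prints the elapsed time to the console

console.time("getElementsByTagName");
for (var j = 0; j < 10000; j++) {
    document.getElementsByTagName("div");
}
console.timeEnd("getElementsByTagName");

// console.profile("label") / console.profileEnd() would record a CPU profile
// of the same region instead of just printing a duration.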
I have a huge web app that's switching from an HTML-rendered-on-the-server-and-pushed-to-the-client approach to a let-the-client-decide-how-to-render-the-data-the-server-sends approach, which means client-side performance mattered in the past, but it's critical now. So I'm wondering whether, in the current state of affairs, it's possible to profile web apps and extract the same data (like call stacks, "threads", event handlers, number of calls to certain functions, etc.) we use for server-side performance work.
I know every browser implements some of these functionalities to some extent (IE dev tools has an embedded profiler, so does Firefox [with Firebug], and Google Chrome has Speed Tracer), but I was wondering if it'd be possible to get, for example, stack traces of sessions. Is it advisable to instrument the code and have a knob to turn on/off the instrumentation? Or it's simply not that useful to go that level in analyzing JavaScript performance?
Fireunit is decent and YUI also provides a profiler, but neither provides stack traces or call frames. Unfortunately, there aren't many JS profiling tools out right now, and none of them are particularly great.
I think it's very important to go to that level of performance analysis, especially considering that users interact with the JS app directly more than 90% of the time.
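Regarding the instrumentation "knob" mentioned in the question: one lightweight approach is to wrap selected functions so they record call counts and timings only when a flag is switched on. The wrapper below is a hypothetical sketch, not part of any of the tools mentioned above.
// Hypothetical instrumentation wrapper; PROFILING_ENABLED is the on/off knob.
var PROFILING_ENABLED = false;
var callStats = {};                       // name -> { calls, totalMs }

function instrument(name, fn) {
    return function () {
        if (!PROFILING_ENABLED) {
            return fn.apply(this, arguments);
        }
        var start = new Date().getTime();
        try {
            return fn.apply(this, arguments);
        } finally {
            var stats = callStats[name] || (callStats[name] = { calls: 0, totalMs: 0 });
            stats.calls++;
            stats.totalMs += new Date().getTime() - start;
        }
    };
}

// Example: renderCalendar is assumed to be one of the app's own functions.
// renderCalendar = instrument("renderCalendar", renderCalendar);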
I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.
I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-)
Are there any examples of this sort of thing being done?
What would be the best ways to measure this, accounting for things like temporary slowdowns on busy CPUs?
Notes:
I'm not interested in browser/OS detection.
At the moment, I'm not interested in bandwidth measurements - only browser/cpu performance.
Things that might be interesting to measure:
Base JavaScript
DOM manipulation
DOM/CSS rendering
I'd like to do this in a way that affects the page's render-speed as little as possible.
BTW: In order not to confuse/irritate users with inconsistent behavior, this would of course require on-screen notifications to allow users to opt in/out of this whole performance-tuning process.
[Update: there's a related question that I missed: Disable JavaScript function based on user's computer's performance. Thanks Andrioid!]
Not to be a killjoy here, but this is not a feat that is currently possible in any meaningful way in my opinion.
There are several reasons for this, the main ones being:
Whatever measurement you do, if it is to have any meaning, will have to test the maximum potential of the browser/CPU, which you cannot do while maintaining any kind of reasonable user experience.
Even if you could, it would be a meaningless snapshot, since you have no idea what kind of load the CPU is under from applications other than the browser while your test is running, or whether that situation will continue while the user is visiting your website.
Even if you could do that, every browser has its own strengths and weaknesses, which means you'd have to test every DOM manipulation function to know how fast the browser would complete it. There is no "general" or "average" that makes sense here, in my experience; and even if there were, the speed with which DOM manipulation commands execute depends on the context of what is currently in the DOM, which changes as you manipulate it.
The best you can do is to either
Let your users decide what they want, and enable them to easily change that decision if they regret it
or better yet
Choose to give them something that you can be reasonably sure that the greater part of your target audience will be able to enjoy.
Slightly off topic, but following this train of thought: if your users are not tech leaders in their social circles (like most users here are, but most people in the world are not), don't give them too much choice, i.e. any choice that is not absolutely necessary; they don't want it, and they don't understand the technical consequences of their decision before it is too late.
A different approach, one that does not need an explicit benchmark, would be to enable features progressively.
You could apply the features in priority order and, after each one, drop the rest if a certain amount of time has passed.
By ensuring that the most expensive features come last, you would present the user with a somewhat appropriate selection of features based on how speedy the browser is.
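A minimal sketch of that idea; the feature functions are placeholders for the page's real effects, and the 200 ms budget is an arbitrary assumption.
// Sketch: enable features in priority order until a time budget is exhausted.
// The feature functions are placeholders and the 200 ms budget is an assumption.
function enableBasicWidgets() { /* cheap, always wanted   */ }
function enableDragAndDrop()  { /* moderately expensive   */ }
function enableAnimations()   { /* most expensive, last   */ }

var features = [enableBasicWidgets, enableDragAndDrop, enableAnimations];
var budgetMs = 200;
var start = new Date().getTime();

for (var i = 0; i < features.length; i++) {
    features[i]();                                  // cheapest features first
    if (new Date().getTime() - start > budgetMs) {
        break;                                      // drop the remaining, more expensive ones
    }
}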
You could try timing some basic operations; have a look at Steve Souders' Episodes and Yahoo's Boomerang for good ways of timing things browser-side. However, it's going to be rather complicated to work out how the metrics relate to an acceptable level of performance / a rewarding user experience.
If you're going to provide a UI to let users opt in / opt out, why not just let the user choose the level of eye candy in the app vs the rendering speed?
Take a look at some of Google's (copyrighted!) benchmarks for V8:
http://v8.googlecode.com/svn/data/benchmarks/v4/regexp.js
http://v8.googlecode.com/svn/data/benchmarks/v4/splay.js
I chose a couple of the simpler ones to give an idea of similar benchmarks you could create yourself to test feature sets. As long as you keep the run time of your tests, from the time logged at the start to the time logged at completion, under 100 ms on the slowest systems (which these Google tests vastly exceed), you should get the information you need without being detrimental to the user experience. While the Google benchmarks care about granularity fine enough to distinguish between the faster systems, you don't. All you need to know is which systems take longer than XX ms to complete.
Things you could test are regular expression operations (similar to the above), string concatenation, page scrolling, anything that causes a browser repaint or reflow, etc.
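For example, a much smaller test in the same spirit might time a short string-concatenation and regular-expression loop and simply check whether it stays under some threshold; the workload size and the 100 ms cutoff below are placeholders.
// Tiny feature-detection benchmark in the spirit of the V8 tests, kept well under 100 ms.
function isFastEnough(thresholdMs) {
    var start = new Date().getTime();
    var s = "";
    for (var i = 0; i < 5000; i++) {
        s += i.toString(36);                 // string concatenation workload
        /([a-z0-9]+)\d/.test(s.slice(-20));  // small regexp workload
    }
    return (new Date().getTime() - start) < thresholdMs;
}

if (!isFastEnough(100)) {
    // treat this as a slow system and skip the expensive effects
}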
You could run all the benchmarks you want using Web Workers and then, according to the results, store your assessment of the machine's performance in a cookie.
This only works in browsers with HTML5 support, of course.
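A rough sketch of that approach, with the worker built from a Blob so no extra file is needed; the 200 ms window, the cookie name and the one-week expiry are assumptions.
// Run a small benchmark off the main thread and remember the result in a cookie.
var workerSource =
    "var start = Date.now(), i = 0;" +
    "while (Date.now() - start < 200) { i++; }" +   // count iterations in 200 ms
    "postMessage(i);";

var blob = new Blob([workerSource], { type: "application/javascript" });
var worker = new Worker(URL.createObjectURL(blob));

worker.onmessage = function (e) {
    var score = e.data;
    // "perfScore" and the one-week expiry are arbitrary choices for this sketch.
    document.cookie = "perfScore=" + score + "; max-age=" + 60 * 60 * 24 * 7;
    worker.terminate();
};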
Some Ideas:
Putting a time-limit on the tests seems like an obvious choice.
Storing test results in a cookie also seems obvious.
Poor performance on a test could pause further scripts and trigger the display of a non-blocking prompt UI (like the save-password prompts common in modern web browsers) that asks the user whether they want to opt into further scripting effects, and store the answer in a cookie. While the user hasn't answered the prompt, periodically repeat the tests and auto-accept the scripting prompt if consecutive tests finish faster than the first one.
On a side note, slow network speeds could also probably be tested by timing the download of external resources (like the page's own CSS or JavaScript files) and comparing that result with the JavaScript benchmark results (see the sketch after this list). This may be useful on sites relying on lots of XHR effects and/or heavy use of <img/>s.
It seems that DOM rendering/manipulation benchmarks are difficult to perform before the page has started to render, and are thus likely to cause quite noticeable delays for all users.
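A sketch of the network-timing idea mentioned above, using an image load as the probe; the URL, the cache-busting query string, and the choice of asset are placeholders.
// Rough network-speed probe: time how long a known external resource takes to load.
// The URL (and the cache-busting query string) are placeholders for a real asset on the site.
var img = new Image();
var downloadStart = new Date().getTime();

img.onload = function () {
    var downloadMs = new Date().getTime() - downloadStart;
    // Compare downloadMs with the JavaScript benchmark results before
    // deciding how many XHR effects or <img> elements to use.
    console.log("Download took " + downloadMs + " ms");
};

img.src = "/assets/probe.png?nocache=" + downloadStart;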
I ran into a similar question and solved it this way; in fact, it helped me make some decisions.
After rendering the page, I do:
let now, finishTime, i = 0;
now = Date.now();               // returns the number of milliseconds since Jan 01 1970
finishTime = now + 200;         // we add 200 ms (1/5 of a second)
while (now < finishTime) {
    i++;
    now = Date.now();
}
console.log("I looped " + i + " times!!!");
After doing that, I tested it on several browsers with different operating systems, and the value of i gave me the following results:
Windows 10 - 8 GB RAM:
1,500,000 approx. on Chrome
301,327 approx. on Internet Explorer
141,280 (on Firefox in a virtual machine running Lubuntu with 2 GB of RAM)
macOS - 8 GB RAM:
3,000,000 approx. on Safari
1,500,000 approx. on Firefox
70,000 (on Firefox 41 in a virtual machine running Windows XP with 2 GB of RAM)
Windows 10 - 4 GB RAM (this is an old computer I have):
500,000 approx. on Google Chrome
I load a lot of divs in the form of a list; they are loaded dynamically according to the user's input, and this helped me limit the number of elements I create according to the performance each machine had shown. BUT
the JS is not everything! Even though the Lubuntu OS running on a virtual machine gave poor results, it loaded 20,000 div elements in less than 2 seconds and you could scroll through the list with no problem, while IE took more than 12 seconds and its performance was awful!
So that could be a good approach; but when it comes to rendering, that's another story. Still, this can definitely help you make some decisions.
Good luck, everyone!
Consider a complex rich internet application with lots of user interaction. I'm thinking of extensive drag-drop support, server-side user input validation, custom drawn UI controls such as an Outlook-like calendar, real-time UI feedback, etc... Would such an application be debuggable? I mean, can you easily step through the source code, place breakpoints, view the contents of variables, see the current call stack, use a profiler to pinpoint performance issues, etc...
Yes, why wouldn't it be?
Complexity just means more code to dig through, but tools like console.trace() from Firebug make that easier.
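For example, dropping console.trace() into a handler logs the call stack that led there, in Firebug and in the Chrome developer tools alike; saveAppointment is just a placeholder name.
function saveAppointment(data) {
    console.trace();           // prints the current call stack to the console
    // ...the rest of the handler would go here...
}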
Yes, it would be debuggable.
If you're using IE8 to test your site, you can use its Developer Tools to inspect individual HTML elements and change their CSS on the fly. There's also the ability to break into JavaScript from the same interface.
If you're using Firefox, Firebug has almost identical abilities with a different interface.
Safari also has developer tools installed by default, you just have to go through the hoops of enabling them.
When you are designing your application, design it with debuggability and testability in mind. Make sure that individual parts are independently testable, that you have enough test data, that you have appropriate debug/probe points in your program logic, and so on. Essentially, if the complexity is properly managed, debuggability won't be an issue at all.
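One cheap way to build in such probe points is a global debug flag that gates logging, so the probes cost nothing in production; the flag and helper names here are just an illustration, not a prescribed API.
// Hypothetical debug probe helper; DEBUG is the switch you flip while developing.
var DEBUG = false;

function probe(label, value) {
    if (DEBUG && window.console) {
        console.log("[probe] " + label, value);
    }
}

// Example probe point inside application logic (events is assumed to exist):
// probe("calendar.render", { events: events.length });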
If your job depended on it, you would find a way! :)
Seriously... a passenger jet has literally millions of parts and yet there are regular routine maintenance checks and if it breaks down it gets fixed. It's a very rare piece of software that approaches that much complexity.
Web app front ends tend to be relatively simple. Essentially you're just pushing some text from the server to the browser and making it pretty; and you're using various parts of the in-browser display as controls, some of which initiate some more text conversations with the server. There are lots of little things that can go wrong, of course, but much of the hardship is simply getting the browser (all of them!) to Do What You Mean.
The only truly difficult problems are those that are intermittent and/or timing sensitive. Those can be a bear to reproduce and trace. That calls for in-depth logical analysis of your source code and/or some specialized testing methods.