I'm currently looking for a good tool to profile JavaScript in the browser. The problems I have with what I currently use:
Chrome - when I start profiling after reloading the page, it takes forever to load. Not possible to finish.
Firefox - profiling with Firebug is not easily readable (it only shows a summary of each function in total).
I'm looking for a profiler that would let me see not only how much time each function "ate", but also analyze each call and its subcalls.
Something similar to the KCacheGrind display.
If you can convert your app to a standalone app, and if profiling on a device suits you, you can use the Intel XDK and the different profiling types it offers. You can find more info at this link: https://software.intel.com/en-us/html5/articles/using-the-profile-tab
The differences between the CDT and XDK profilers are as follows:
CPU profiler - the XDK annotates the source file with the time spent on each line, not only the call tree.
Memory profiler - the XDK profiler is more function-centric: it points out which functions (in the call tree) allocate memory and annotates the source view with the self and total memory allocated per line. You can use the bottom-up view to find the hotspots that allocate the most, or switch to the callee view to analyze which high-level functions implicitly allocate a lot through library calls.
Here is a tool I wrote: http://yellowlab.tools
It spies on and logs every JS access to the DOM during page load. It's a great tool for understanding what's going on and for optimizing browser-side JS performance.
Just launch a test, then click the "JS Timeline" tab.
I've got a highly recursive JavaScript function which calls no other JavaScript functions. It's just the one function calling itself, doing some simple logic and calling built-in methods (Array.slice, Array.splice, Array.push, etc.).
I'm trying to optimize it; however, Chrome's and Firefox's DevTools and Firebug's profilers (those are the only browsers the website works in) don't show anything more specific than function calls. Visual Studio has a nice feature where, after profiling an application, it tells you what percentage of execution time was spent on each line of your functions, which is really helpful.
I've tried breaking the function up into smaller functions, but then the function-call overhead inflates to take up most of my execution time.
Firebug's and the DevTools' profilers provide you with detailed information on how much time was spent within each function. See the following screenshots:
Firebug (Own Time column)
Firefox DevTools (Self Time column)
Chrome DevTools (Self column)
The Firefox DevTools furthermore allow you to include platform data by enabling the option Show Gecko Platform Data within the Performance panel options:
However, the tools only display data per function. They do not allow you to display per-line (or, more precisely, per-statement) information, probably because this is something the JavaScript author cannot influence directly.
If you believe this information would be relevant for JavaScript authors, you should file a feature request for each of those tools, explaining your reasoning.
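In the meantime, a crude way to approximate per-statement timings yourself is to instrument the hot function manually with performance.now(). This is only a sketch: the function and the accumulator labels below are my own invention, and the timer-call overhead will distort very small measurements.

```javascript
// Accumulate time per "line" of interest under a label of your choosing.
const lineTime = { slice: 0, rest: 0, recurse: 0 };

function work(arr) {
  if (arr.length <= 1) return arr;

  let t = performance.now();
  const head = arr.slice(0, 1);          // statement under test
  lineTime.slice += performance.now() - t;

  t = performance.now();
  const rest = arr.slice(1);             // another statement under test
  lineTime.rest += performance.now() - t;

  t = performance.now();
  const out = head.concat(work(rest));   // the recursive call itself
  lineTime.recurse += performance.now() - t;

  return out;
}

work(Array.from({ length: 200 }, (_, i) => i));
console.log(lineTime); // per-"line" totals in milliseconds
```

Because the recursion timing includes all nested calls, treat the numbers as relative hints rather than exact per-statement costs.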
Intel XDK provides the information you are asking for. Here is a link to the Intel XDK profiling tools: https://software.intel.com/en-us/xdk/docs/using-the-profile-tab There are several pictures and instructions on how to use it.
We collect the profile and annotate the source view with self-time metrics.
Currently we do this on Android devices, but we have plans to migrate the GUI to CDT and upstream it. Even before upstreaming, this functionality will be available on Windows and Linux in a Chromium-based browser named Crosswalk, which contains promising features like SIMD.js and WebCL.
A few more words regarding the collected information: the Intel XDK JavaScript CPU profiler currently annotates sources only with self time, but we are working on adding total times, i.e. how much time was spent on a certain line plus in all functions called from it.
To run the profiler, download the XDK, create a new project and add your code to it. Then switch to the Profile tab, plug in the device via a cable, select the CPU profiler if it is not already selected and press the Profile button. We look forward to your feedback.
I'm currently testing JavaScript visualization toolkits and want to measure execution time, memory consumption, etc.
I know how to profile JavaScript with the Chrome dev tools, Google's speed analyzer and so on, but I want users to be able to perform the tests on their own and display the results (without using dev tools or installing an extension).
Is there a library or something that can be used to achieve this? Subtracting start and end times for each function does not seem like a good solution.
The best-case scenario would be a library for profiling individual functions.
Caveat: you will not be able to get CPU profile or memory usage using a JS-based testing solution. If this is what you are after, a Chrome extension may very well be the way forward.
If, however, this doesn't bother you and you are after a ready-made solution, Benchmark.js may prove to be a good starting point.
The method it uses is akin to what you mentioned: taking time differences in execution. However, it does so many times (100 or more) in order to average out the statistical errors. This keeps your results free of truly random errors (it does not, however, mean that your data will be meaningful).
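Benchmark.js hides the details, but the core repeated-sampling idea can be sketched in a few lines of plain JS. Note this is my own minimal illustration, not Benchmark.js's actual implementation; the `bench` name and its options are invented for the example.

```javascript
// Run fn `samples` times, each sample executing `iterations` calls,
// and report the mean time per call in milliseconds. Averaging many
// samples smooths out one-off timer noise and scheduling hiccups.
function bench(fn, { samples = 100, iterations = 1000 } = {}) {
  const times = [];
  for (let s = 0; s < samples; s++) {
    const start = performance.now();
    for (let i = 0; i < iterations; i++) fn();
    times.push((performance.now() - start) / iterations);
  }
  const mean = times.reduce((a, b) => a + b, 0) / times.length;
  return { mean, samples: times.length };
}

const result = bench(() => [1, 2, 3].map(x => x * 2));
console.log(`${result.mean.toFixed(6)} ms per call`);
```

A real suite like Benchmark.js additionally calibrates the iteration count and reports the margin of error, which is why it is preferable to rolling your own for publishable numbers.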
I have a single web page application that is all JavaScript. I noticed the JavaScript heap size goes up on each AJAX call that returns a new view. Is there something I should be doing to clean up the older views?
I would recommend you take a look at the following resources. They show how to inspect the browser timeline in order to detect leaking references and performance problems.
The first one is from a Chrome developer talking this year at the Google IO:
http://www.youtube.com/watch?v=3pxf3Ju2row
The second one is from Paul Irish talking at the breaking point episode 2:
http://www.youtube.com/watch?v=PPXeWjWp-8Y
I'm sure you will find a lot of clues!
Anyway, if you share a test case of your code at jsfiddle.net, we can take a look :)
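One common culprit in single-page apps is that old views stay reachable through event listeners or closures, so the GC can never reclaim them. A minimal teardown pattern looks like this; the `View`/`showView`/`destroy` names are just an illustration, not from any specific framework:

```javascript
// Each view tracks what it attached so it can detach everything on teardown.
class View {
  constructor(name) {
    this.name = name;
    this.subscriptions = [];
  }
  on(emitter, event, handler) {
    emitter.addEventListener(event, handler);
    this.subscriptions.push({ emitter, event, handler });
  }
  destroy() {
    // Remove listeners so the view (and everything its closures
    // capture) becomes unreachable and can be garbage collected.
    for (const { emitter, event, handler } of this.subscriptions) {
      emitter.removeEventListener(event, handler);
    }
    this.subscriptions.length = 0;
  }
}

let currentView = null;
function showView(name) {
  if (currentView) currentView.destroy(); // release the old view first
  currentView = new View(name);
  return currentView;
}
```

If the heap still grows after each view swap, take two heap snapshots in DevTools and diff them to see which retaining path keeps the old view alive.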
I am currently reading High Performance JavaScript by Nicholas C. Zakas and in the book he says things like:
Comparing the two pieces of code shows that using the Selectors API is
2 to 6 times faster across browsers (Figure 3-6).
What I'm looking for is a browser based tool that lets me capture and measure the performance of a given piece of JavaScript and compare it against another script that uses a different approach (e.g., using the Selector API vs getElementsByTagName).
I've used Chrome and Firebug, but neither of them really seem to give me the kind of comparisons he's doing here. Am I using these tools incorrectly or is there a new tool I'm not familiar with that I should be using?
The most popular approach is to just use the free online service http://jsperf.com/,
or clone it from GitHub.
It has one big advantage over manual testing: it uses a Java applet that gives access to a nanosecond timer, while JS timers (Date objects) can only resolve to milliseconds.
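If you just want a quick local comparison without jsperf, a head-to-head harness along these lines works. All names here are mine, and millisecond-resolution timing means you need enough iterations for the difference to rise above the noise:

```javascript
// Times two candidate implementations over the same workload and
// reports which one was faster on this particular run.
function compare(label1, fn1, label2, fn2, iterations = 100000) {
  const time = (fn) => {
    const start = performance.now();
    for (let i = 0; i < iterations; i++) fn();
    return performance.now() - start;
  };
  const t1 = time(fn1);
  const t2 = time(fn2);
  const [fast, slow, ratio] =
    t1 <= t2 ? [label1, label2, t2 / t1] : [label2, label1, t1 / t2];
  console.log(`${fast} was ${ratio.toFixed(2)}x faster than ${slow}`);
  return { [label1]: t1, [label2]: t2 };
}

// Example in the spirit of the book's comparisons:
compare(
  'concat', () => 'a'.concat('b', 'c'),
  'plus',   () => 'a' + 'b' + 'c'
);
```

For the book's DOM examples (Selectors API vs getElementsByTagName) you would run the same harness in a page, passing `() => document.querySelectorAll('p')` and `() => document.getElementsByTagName('p')` as the two candidates.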
Chrome's developer tools are the way to go. There are three awesome features:
Timeline. This will show you the rendering time of any executing JavaScript on a timeline graph.
Heap snapshot. This will take a snapshot of your current JS, including your entire object chain. It also shows how much memory each element is taking, which provides a good way to find the places where your code is chewing through memory.
CPU profile. Shows how much CPU time a function is eating up, which is also useful for finding places to optimize and perhaps introduce web workers.
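You can also trigger the CPU profiler from code with console.profile() / console.profileEnd(), which is handy when you only want to capture one specific code path rather than the whole page load. The label and the loop below are arbitrary examples:

```javascript
// Start a named CPU profile, run the suspect code, then stop.
// The recorded profile appears in the DevTools profiling panel
// (outside the inspector these calls are harmless no-ops).
console.profile('hot-path');
for (let i = 0; i < 1e5; i++) {
  JSON.parse(JSON.stringify({ a: i, b: [i, i + 1] }));
}
console.profileEnd('hot-path');
```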
If you use the Chrome beta channel, check out the Speed Tracer extension (by Google). It's basically an enhanced timeline. If you're a jQuery person, the beta also has CSS selector profiling. I have yet to use it, so I can't speak to its usefulness.
Fabrice Bellard's PC emulator implemented in Javascript is impressively fast--it boots a small Linux image in the browser within a few seconds.
What techniques were used to get this performance?
I believe that giving general credit to the "speed" of modern JS interpreters is largely off-topic in a list of Bellard's techniques (since he does not replace the browser's engine). "What are his optimization techniques?" is a great question, and I would like to see a more detailed record of them.
The points I can name so far:
(Optional) JS typed arrays avoid unnecessary memory-allocation dynamics (resizing). A fixed element type (and size) allows the engine to allocate contiguous blocks of memory (with no variable-length element segments in such blocks) and to address elements of a single type uniformly.
Fast boot via a custom minimalistic boot loader (see the linuxstart code published by Fabrice; also see his project called TCCBOOT: http://bellard.org/tcc/tccboot.html).
An optimized, uncompressed embedded kernel (see the kernel configuration; it is extra tiny and optimized for small "linuxes").
A minimal number of devices (the devices are super standard and easy for the kernel to recognize; so far I have properly studied the serial device, but the rest benefit from similar properties). Ramdisk initialization is rather slow, though.
A small (2048-block) uncompressed root.bin ext2 filesystem. The root system consists of a minimal combination (rootfs, proc, tmpfs, devpts). No swap.
(Unsure) He has patched the buffer size for ttyS0 (the serial-port device, or more precisely the kernel UART driver) which communicates with the terminal. Communication is in any case buffered through his term.js binding (I have found no transmit buffer in the UART itself). Note that emulation (as in this case) can be much faster than the real thing.
Also mind the browser cache while refreshing the page: it kicks in very fast if everything is in memory (optimized by the host OS). Direct copying (with load_binary(), in-memory when cached) of the uncompressed binary segments (start_linux.bin, vmlinux26.bin, root.bin) means no hard-disk I/O limitations.
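The typed-array point is easy to illustrate: an Int32Array is a fixed-size view over one contiguous buffer, so element access never involves boxing, type checks per element, or resizing. This is only a small sketch of the idea, not Bellard's actual code:

```javascript
// A regular JS array can hold mixed types and grow on demand, which
// forces the engine to handle each element generically. A typed array
// fixes both the element type and the length up front.
const ram = new Int32Array(1024);     // 1024 * 4 bytes, zero-initialized

ram[0] = 0x12345678;                  // plain 32-bit store
ram[1] = ram[0] >>> 16;               // bitwise ops stay on integers

console.log(ram.length);              // 1024 (fixed; cannot grow)
console.log(ram.byteLength);          // 4096 contiguous bytes
console.log(ram[1].toString(16));     // "1234"
```

For an emulator, modeling guest RAM and registers this way maps naturally onto how engines JIT-compile integer arithmetic, which is a large part of why the emulation runs fast.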
I used the excellent http://jsbeautifier.org/ to prettify the minified JS code. It looks to me like painstakingly written, un-fussy, sensible procedural code. It's a magnificent achievement in itself, but the credit has to be shared with the phenomenal performance of modern JavaScript interpreters.
As of 2018, Fabrice has used asm.js and WebAssembly to achieve this.
You can read more here.
If you look at the inspector (Chrome DevTools, or Firefox's Inspector), you will see some wasm:// sources (on Firefox), implying that he used WebAssembly to achieve this.
Maybe by using a C-to-JavaScript compiler, like Emscripten: http://code.google.com/p/emscripten/