Chrome 39 JavaScript Performance Anomaly - javascript

I did a jsPerf test to see if there were any performance differences between using arguments or local variables in a function in JavaScript.
In Firefox 34, there was practically no difference. However, in Chrome 39, the compiler seems to be doing a lot of harm. See these results:
Can anyone explain why this happens?

First of all, for a benchmark that tries to measure arguments-vs-local-variables performance you are doing too much in each case: you allocate a closure again and again, you allocate an object from an object literal, and you use a for-in loop. All of these operations are far more expensive than local variable access. Their costs swamp and hide whatever small cost variable access has.
Now, the anomaly you are seeing is due to V8 not having a fast path for creating closures that contain literals: there is FastNewClosureStub, but it is only used when there are no literals in the closure[1]. This makes closure allocation more expensive in the first case than in the second, and you see it reflected in the score because closure allocation is the dominant part of your benchmark (it allocates one closure per op).
If you "hide" the literal creation[2] in a separate function, you will see the anomaly go away. Note that such hiding doesn't make the benchmark any more representative: it still isn't measuring what you want to measure.
Overall, trying to capture the performance characteristics of variable access in a benchmark is very hard, because these are usually among the fastest and smallest operations, even in code produced by the non-optimizing (baseline) compiler. In the most common case, when no variables are captured and the scope contains no with, eval, or arguments object, there will be no difference between arguments and local variables: both compile down to a single memory load.
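A minimal sketch of the effect described above (V8 around Chrome 39; the function names are illustrative, not from the original benchmark):

```javascript
// Case 1: the closure body contains an object literal, so (per the
// linked V8 source) FastNewClosureStub could not be used and
// allocating the closure took the slow path.
function makeWithLiteral() {
  return function () { return { answer: 42 }; };
}

// Case 2: the literal is "hidden" in a helper; the closure itself
// contains no literals and could take the fast allocation path.
function makeAnswer() { return { answer: 42 }; }
function makeWithoutLiteral() {
  return function () { return makeAnswer(); };
}

// Both closures behave identically; only the allocation cost differed.
console.log(makeWithLiteral()().answer === makeWithoutLiteral()().answer);
```

The two variants compute the same thing, which is exactly why a benchmark dominated by closure allocation ends up measuring the engine's allocation paths rather than variable access.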
[1] https://github.com/v8/v8-git-mirror/blob/9def087efcd844342c35f42628bac4ead49cac81/src/ia32/full-codegen-ia32.cc#L1213-L1218
[2] http://jsperf.com/variable-vs-variable-passed-as-an-argument-to-a-self-in/3

Related

How to compare performance of LuaJIT vs JavaScriptCore, for a bridge to Objective-C?

I’m writing a few unit tests for an Objective-C <-> LuaJIT bridge. The idea is to compare the performance of LuaJIT against JavaScriptCore and make the tests fail when JS has a better execution time than LuaJIT at some specific tasks.
What tasks?
a for loop that calculates a sum.
[Lua] [JS]
passing a dictionary and an array to the virtual machine, cloning the object, and returning the new object back to ObjC.
[Dictionary access Lua][Dictionary access JS][Array access Lua][Array access JS]
deep copying a dictionary containing arrays, dictionaries and values
[Deep copy Lua][Deep copy JS]
hmm.. seriously why?
I just want to switch back to developing games and keep updating LuaJIT with each new release, so that when the tests fail I can roll back to the previous version or do whatever is needed to have the fastest implementation of Lua running in the VM.
ok… and the question?
The tests are failing randomly: sometimes LuaJIT is the fastest, sometimes JS is. The only reason that comes to mind for this erratic behavior is that the garbage collector is messing with the execution times.
Is there a way to force the garbage collector to do its thing on each iteration, for both engines?
An alternative is to take a sample of N iterations and fit a finite mixture model with two means for the execution time: one for when the script runs in “normal” mode and one for when the GC kicks in, and then compare the performance of both engines accordingly.
sounds like too much of a hassle
Maybe. I just want a silver bullet to kill slower scripting engines, and what better way than comparing them with the native implementation of JavaScript on OS X and iOS.
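The "force a collection before each timed sample" idea can be sketched on the JavaScript side. This is a hedged sketch under two assumptions: in Node/V8, `global.gc()` exists only when Node is started with `--expose-gc`; for the embedded JavaScriptCore case, the C API offers an analogous `JSGarbageCollect(ctx)` call. The `timeTask` helper is an illustrative name, not part of either engine:

```javascript
// Time a task with the heap as clean as we can get it, so GC pauses
// from earlier garbage don't pollute the measurement.
function timeTask(task, iterations) {
  if (typeof global !== 'undefined' && typeof global.gc === 'function') {
    global.gc(); // only available under `node --expose-gc`
  }
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) task();
  return Number(process.hrtime.bigint() - start) / 1e6; // milliseconds
}

const ms = timeTask(() => {
  let sum = 0;
  for (let i = 0; i < 10000; i++) sum += i; // the "for loop that calculates a sum" task
  return sum;
}, 100);
console.log(typeof ms === 'number' && ms >= 0);
```

Even with forced collections, single samples stay noisy; taking the median of many samples per engine is usually more robust than any single comparison.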

JavaScript memory representation

Is there any way to dump the environment records at some point during the execution of a JavaScript program?
I want to detect whether two variables or object properties point to the same address, thus potentially producing "side effects".
I think one way to do it, is to get the bindings allocation address from an environment record.
Any tools are welcome.
Thanks.
In Firefox/Spidermonkey the thing you're looking for is called GC/CC logs. You can dump it from the browser (e.g. via about:memory) or from the command-line JS shell.
When you do, you'll find that a typical JS program has a rather large and complex graph of objects and properties, so finding the aliasing cases that you're interested in will be hard.
If, on the other hand, you have the list of object references you're interested in, checking with === is enough. (See also Equality comparisons and sameness on MDN.)
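A minimal illustration of that check: two references alias the same object exactly when `===` says so, and no address dump is needed.

```javascript
const a = { x: 1 };
const b = a;        // alias: same object, mutations are shared
const c = { x: 1 }; // structurally equal, but a different object

b.x = 2;
console.log(a === b, a.x); // true 2 – the "side effect" is visible through a
console.log(a === c);      // false  – no shared state, no side effects
```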

Tracking down memory leaks using node-memwatch?

I am attempting to use node-memwatch to track down memory leaks in my application. Currently I am creating a HeapDiff when the app starts and then doing a diff when memwatch detects a leak. I have found a few items that look suspect, but I don't understand how to map what is being reported to my code. For example, the following item is reported in the diff:
{ what: 'String',
  size_bytes: 4785072,
  size: '4.56 mb',
  '+': 32780,
  '-': 563 },
Which seems like a prime suspect for a memory leak. How can I figure out which piece of my code is causing this leak? In the examples they give on their site, "what" is typically something obvious like MyLeakyClass, not a system type...
It seems that feature is yet to be implemented:
"In particular, we want node-memwatch to be able to provide some examples of a leaked object (e.g., names of variables, array indices, or closure code)."
https://hacks.mozilla.org/2012/11/tracking-down-memory-leaks-in-node-js-a-node-js-holiday-season/
The LeakingClass example appears to come from this code: https://github.com/lloyd/node-memwatch/blob/master/examples/basic_heapdiff.js
What that means is that you have created 32,780 strings and garbage-collected 563 since starting your HeapDiff. (The ones you collected may or may not have been created in this window; they could already have existed when the diff started.) The total amount of memory used by strings has grown by 4.56 MB. That could all be in one string, or it could be perfectly evenly distributed among the 32k strings. You have no data on that.
Strings, of course, show up all over your code. So my advice is: don't look at those. Look for objects with more trackable (greppable, rarer, whatever) names that are growing, even if they appear to be growing less than your strings, and track those. In the process, you may find your big leaks.
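One way to act on that advice is to give your own allocations a rare, greppable constructor name so they stand out in a HeapDiff instead of drowning in anonymous 'String' rows. A small sketch (`CachedMessage` is an illustrative name, not part of node-memwatch):

```javascript
// Wrap accumulated strings in a named class: in a heap diff, growth in
// 'CachedMessage' now points straight at this code, whereas growth in
// 'String' could come from anywhere.
class CachedMessage {
  constructor(text) { this.text = text; }
}

const cache = [];
for (let i = 0; i < 3; i++) cache.push(new CachedMessage('msg ' + i));

console.log(cache.length, cache[0].constructor.name); // 3 CachedMessage
```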

Does assigning a new string value create garbage that needs collecting?

Consider this JavaScript code:
var s = "Some string";
s = "More string";
Will the garbage collector (GC) have work to do after this sort of operation?
(I'm wondering whether I should worry about assigning string literals when trying to minimize GC pauses.)
Edit: I'm slightly amused that, although I stated explicitly in my question that I need to minimize GC, everyone assumed I'm wrong about that. If one really must know the particular details: I've got a game in JavaScript. It runs fine in Chrome, but in Firefox it has semi-frequent pauses that seem to be due to GC. (I've even checked with the MemChaser extension for Firefox, and the pauses coincide exactly with garbage collection.)
Yes, strings need to be garbage-collected, just like any other type of dynamically allocated object. And yes, this is a valid concern as careless allocation of objects inside busy loops can definitely cause performance issues.
However, string values are immutable (unchangeable), and most modern JavaScript implementations use "string interning": they store only one instance of each unique string value. This means that if you have something like this...
var s1 = "abc",
s2 = "abc";
...only one instance of "abc" will be allocated. This only applies to string values, not String objects.
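The value-vs-object distinction can be seen directly with `===`. A small sketch:

```javascript
// Primitive string values with the same contents compare equal and
// (in engines that intern) may share storage.
var s1 = "abc",
    s2 = "abc";
console.log(s1 === s2); // true – same value

// String objects are always distinct heap allocations.
var o1 = new String("abc"),
    o2 = new String("abc");
console.log(o1 === o2);           // false – two separate objects
console.log(o1.valueOf() === s1); // true  – but the same value inside
```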
A couple of things to keep in mind:
Functions like substring, slice, etc. will allocate a new object for each function call (if called with different parameters).
Even though both variables point to the same data in memory, there are still two variables for the GC to process when a collection cycle runs. Having too many local variables can also hurt, as each of them needs to be processed by the GC, adding overhead.
Some further reading on writing high-performance JavaScript:
https://developer.mozilla.org/en-US/docs/JavaScript/Memory_Management
https://www.scirra.com/blog/76/how-to-write-low-garbage-real-time-javascript
http://jonraasch.com/blog/10-javascript-performance-boosting-tips-from-nicholas-zakas
Yes, but unless you are doing this in a loop millions of times it won't likely be a factor for you to worry about.
As you already noticed, JavaScript is not JavaScript. It runs on different platforms and thus will have different performance characteristics.
So the definite answer to the question "Will the GC have work to do after this sort of operation?" is: maybe. If the script is as short as you've shown, a JIT compiler might well drop the first string completely. But there's no rule in the language definition that says it has to be one way or the other. So in the end it's like it so often is in JavaScript: you have to try it.
The more interesting question might be: how can you avoid garbage collection? The answer is to minimize the allocation of new objects. Games typically have a fairly constant number of objects, and often no new objects are created until an old one falls out of use. For strings this may be harder, since they are immutable in JS, so try to replace strings with other (mutable) representations where possible.
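The "constant number of objects" idea is usually implemented as an object pool. A minimal sketch (the `pool`/`spawn`/`kill` names are illustrative):

```javascript
// Keep a fixed set of objects alive and reuse them, so a game frame
// produces no new garbage for the GC to collect.
const pool = Array.from({ length: 4 }, () => ({ x: 0, y: 0, live: false }));

function spawn(x, y) {
  const p = pool.find((q) => !q.live); // reuse a dead slot instead of allocating
  if (p) { p.x = x; p.y = y; p.live = true; }
  return p;
}

function kill(p) { p.live = false; } // return the slot to the pool

const p = spawn(10, 20);
console.log(pool.filter((q) => q.live).length); // 1
kill(p);
console.log(pool.filter((q) => q.live).length); // 0
```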
Yes, the garbage collector will have a string object containing "Some string" to get rid of. And, in answer to your question, that string assignment will make work for the GC.
Because strings are immutable and are used a lot, the JS engine has a pretty efficient way of dealing with them. You should not notice any pauses from garbage collecting a few strings. The garbage collector has work to do all the time in the normal course of javascript programming. That's how it's supposed to work.
If you are observing pauses from GC, I rather doubt it's from a few strings. There is more likely a much bigger issue going on. Either you have thousands of objects needing GC or some very complicated task for the GC. We couldn't really speculate on that without study of the overall code.
This should not be a concern unless you are doing some enormous loop and dealing with tens of thousands of objects. In that case, one might want to program a little more carefully to minimize the number of intermediate objects that are created. But, absent that level of objects, you should first write clear, reliable code and then optimize for performance only when something has shown you that there is a performance issue to worry about.
To answer your question "I'm wondering whether I should worry about assigning string literals when trying to minimize GC pauses": No.
You really don't need to worry about this sort of thing with regard to garbage collection.
GC is only a concern when creating & destroying huge numbers of Javascript objects, or large numbers of DOM elements.

How can I determine what objects are being collected by the garbage collector?

I have significant garbage collection pauses. I'd like to pinpoint the objects most responsible for this collection before I try to fix the problem. I've looked at the heap snapshot on Chrome, but (correct me if I am wrong) I cannot seem to find any indicator of what is being collected, only what is taking up the most memory. Is there a way to answer this empirically, or am I limited to educated guesses?
In Chrome's Profiles panel, take two heap snapshots: one before the action you want to check and one after.
Now click on the second snapshot.
On the bottom bar you will see a select box with the option "Summary". Change it to "Comparison".
Then, in the select box next to it, select the snapshot you want to compare against (it should automatically select Snapshot 1).
As the result you will get a table with the data you need, i.e. "New" and "Deleted" objects.
With newer Chrome releases there is a new tool available that is handy for this kind of task:
The "Record Heap Allocations" profiling type. The regular "Heap Snapshot" comparison tool (as explained in Rafał Łużyński's answer) cannot give you that kind of information, because each time you take a heap snapshot a GC run is performed first, so collected objects are never part of the snapshots.
However, the "Record Heap Allocations" tool records all allocations continuously (which is why it may slow down your application considerably while recording). If you are experiencing frequent GC runs, this tool can help you identify the places in your code where lots of memory is allocated.
Used alongside the heap snapshot comparison, you will see that most of the time much more memory is allocated between two snapshots than the comparison shows. In extreme cases the comparison will yield no difference at all, whereas the allocation tool will show lots and lots of allocated memory (which obviously had to be garbage collected in the meantime).
Unfortunately the current version of the tool does not show where an allocation took place, but it does show what was allocated and how it was retained at the time of allocation. From that data (and possibly the constructors) you should be able to identify your objects, and thus the places where they are being allocated.
If you're trying to choose between a few likely culprits, you could modify the object definitions to attach instances to the global scope (e.g. in a list under document or similar).
This stops them from being collected, which may make the program faster (they're no longer being reclaimed) or slower (because they build up and get checked by the mark-and-sweep collector on every cycle). Either way, if you see a change in performance, you may have found the problem.
One alternative is to look at how many objects are being created of each type (set up a counter in the constructor). If they're getting collected a lot, they're also being created just as frequently.
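A hedged sketch of the counter idea: count constructions (and, if your code has an explicit release step, destructions) per type to see which types churn the most. `Bullet` and `release` are illustrative names:

```javascript
// Per-type allocation counters: a type whose created count climbs
// quickly is also feeding the GC at the same rate.
const counts = { created: 0, released: 0 };

class Bullet {
  constructor() { counts.created++; }
  release() { counts.released++; } // called when the object is done with
}

for (let i = 0; i < 5; i++) new Bullet().release();
console.log(counts.created, counts.released); // 5 5
```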
Take a look at https://developers.google.com/chrome-developer-tools/docs/heap-profiling, especially the Containment view:
The Containment view is essentially a "bird's eye view" of your application's object structure. It allows you to peek inside function closures, to observe VM internal objects that together make up your JavaScript objects, and to understand how much memory your application uses at a very low level.
The view provides several entry points:
DOMWindow objects — these are objects considered "global" objects for JavaScript code;
GC roots — actual GC roots used by the VM's garbage collector;
Native objects — browser objects that are "pushed" inside the JavaScript virtual machine to allow automation, e.g. DOM nodes, CSS rules (see the next section for more details).
Below is an example of what the Containment view looks like:
