I did read a few articles like this one on MDN, and I got the general idea of how GC happens in JavaScript.
I still don't understand things like:
a) When does the garbage collector kick in? (Does it get called after some interval, or do certain conditions have to be met?)
b) Who is responsible for garbage collection? (Is it part of the JavaScript engine, or of the browser/Node?)
c) Does it run on the main thread or on a separate thread?
d) Which of the following has higher peak memory usage?
// first-case
// variables will be unreachable after each cycle
(function() {
  for (let i = 0; i < 10000; i++) {
    let name = 'this is name' + i;
    let index = i;
  }
})()

// second-case
// creating variable once
(function() {
  let i, name, index;
  for (i = 0; i < 10000; i++) {
    name = 'this is name' + i;
    index = i;
  }
})()
V8 developer here. The short answer is: it's complicated. In particular, different JavaScript engines, and different versions of the same engine, will do things differently.
To address your specific questions:
a) When does the garbage collector kick in? (Does it get called after some interval, or do certain conditions have to be met?)
Depends. Probably both. Modern garbage collectors often are generational: they have a relatively small "young generation", which gets collected whenever it is full. Additionally they have a much larger "old generation", where they typically do their work in many small steps, so as to never interrupt execution for too long. One common way to trigger such a small step is when N bytes (or objects) have been allocated since the last step. Another way, especially in modern tabbed browsers, is to trigger GC activity when a tab is inactive or in the background. There may well be additional triggers beyond these two.
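If you want to observe some of this yourself in Node (an embedder-specific detail, not something the spec or this answer prescribes): Node exposes a manual GC trigger behind a flag, which is handy for watching when memory actually gets reclaimed. A rough sketch:

// Sketch (Node-specific, for observation only): you normally can't force GC
// from JavaScript, but Node exposes a manual trigger behind a flag.
// Run with: node --expose-gc gc-demo.js
let junk = [];
for (let i = 0; i < 1e5; i++) junk.push({ payload: 'x'.repeat(100) });

console.log('before:', process.memoryUsage().heapUsed);
junk = null;          // make the objects unreachable
global.gc();          // explicit full GC (only available with --expose-gc)
console.log('after: ', process.memoryUsage().heapUsed);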
b) Who is responsible for garbage collection? (Is it part of the JavaScript engine, or of the browser/Node?)
The garbage collector is part of the JavaScript engine. That said, it must have certain interactions with the respective embedder to deal with embedder-managed objects (e.g. DOM nodes) whose lifetime is tied to JavaScript objects in one way or another.
c) Does it run on the main thread or on a separate thread?
Depends. In a modern implementation, typically both: some work happens in the background (in one or more threads), some steps are more efficient to do on the main thread.
d) Which of the following has higher peak memory usage?
These two snippets will (probably) have the same peak memory usage: neither of them ever lets objects allocated by more than one iteration be reachable at the same time.
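If you want to check this yourself in Node (a rough, noisy measurement, not an official technique), you can sample process.memoryUsage().heapUsed around each snippet; the deltas depend heavily on GC timing, so run each case in its own process and treat the numbers as rough indicators only:

// Rough sketch (Node.js): compare heap growth across each snippet.
function measure(label, fn) {
  const before = process.memoryUsage().heapUsed;
  fn();
  const after = process.memoryUsage().heapUsed;
  console.log(label, ((after - before) / 1024).toFixed(1), 'KiB heap delta');
}

measure('per-iteration lets', function() {
  for (let i = 0; i < 10000; i++) {
    let name = 'this is name' + i;
    let index = i;
  }
});

measure('hoisted vars', function() {
  let i, name, index;
  for (i = 0; i < 10000; i++) {
    name = 'this is name' + i;
    index = i;
  }
});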
Edit: if you want to read more about recent GC-related work that V8 has been doing, you can find a series of blog posts here: https://v8.dev/blog/tags/memory
I am reading the slides Breaking the JavaScript Speed Limit with V8, and there is an example like the code below. I cannot figure out why <= is slower than < in this case; can anybody explain that? Any comments are appreciated.
Slow:
this.isPrimeDivisible = function(candidate) {
  for (var i = 1; i <= this.prime_count; ++i) {
    if (candidate % this.primes[i] == 0) return true;
  }
  return false;
}
(Hint: primes is an array of length prime_count)
Faster:
this.isPrimeDivisible = function(candidate) {
  for (var i = 1; i < this.prime_count; ++i) {
    if (candidate % this.primes[i] == 0) return true;
  }
  return false;
}
[More Info] The speed difference is significant; in my local test environment the results are as follows:
V8 version 7.3.0 (candidate)
Slow:
time d8 prime.js
287107
12.71 user
0.05 system
0:12.84 elapsed
Faster:
time d8 prime.js
287107
1.82 user
0.01 system
0:01.84 elapsed
Other answers and comments mention that the difference between the two loops is that the first one executes one more iteration than the second one. This is true, but in an array that grows to 25,000 elements, one iteration more or less would only make a minuscule difference. As a ballpark guess, if we assume the average length as it grows is 12,500, then the difference we might expect should be around 1/12,500, or only 0.008%.
The performance difference here is much larger than would be explained by that one extra iteration, and the problem is explained near the end of the presentation.
this.primes is a contiguous array (every element holds a value) and the elements are all numbers.
A JavaScript engine may optimize such an array into a simple array of actual numbers, instead of an array of objects which happen to contain numbers but could contain other values or no value. The first format is much faster to access: it takes less code, and the array is much smaller so it will fit better in cache. But there are some conditions that may prevent this optimized format from being used.
One condition would be if some of the array elements are missing. For example:
let a = [];
a[0] = 10;
a[2] = 20;
Now what is the value of a[1]? It has no value. (It isn't even correct to say it has the value undefined - an array element containing the undefined value is different from an array element that is missing entirely.)
There isn't a way to represent this with numbers only, so the JavaScript engine is forced to use the less optimized format. If a[1] contained a numeric value like the other two elements, the array could potentially be optimized into an array of numbers only.
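A quick way to see the difference between a hole and an explicit undefined (my illustration, not from the original answer):

let a = [];
a[0] = 10;
a[2] = 20;

console.log(a.length);        // 3
console.log(a[1]);            // undefined (when read)
console.log(1 in a);          // false: index 1 is a hole, no element exists there
console.log(Object.keys(a));  // ['0', '2']: the hole has no key

a[1] = undefined;             // now index 1 holds an actual undefined value
console.log(1 in a);          // true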
Another reason for an array to be forced into the deoptimized format can be if you attempt to access an element outside the bounds of the array, as discussed in the presentation.
The first loop with <= attempts to read an element past the end of the array. The algorithm still works correctly, because in the last extra iteration:
this.primes[i] evaluates to undefined because i is past the array end.
candidate % undefined (for any value of candidate) evaluates to NaN.
NaN == 0 evaluates to false.
Therefore, the return true is not executed.
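You can verify that chain directly in a console (illustration only):

const primes = [2, 3, 5];
console.log(primes[3]);        // undefined: read past the end of the array
console.log(7 % undefined);    // NaN
console.log(NaN == 0);         // false, so `return true` is never reached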
So it's as if the extra iteration never happened - it has no effect on the rest of the logic. The code produces the same result as it would without the extra iteration.
But to get there, it tried to read a nonexistent element past the end of the array. This forces the array out of optimization - or at least did at the time of this talk.
The second loop with < reads only elements that exist within the array, so it allows an optimized array and code.
The problem is described in pages 90-91 of the talk, with related discussion in the pages before and after that.
I happened to attend this very Google I/O presentation and talked with the speaker (one of the V8 authors) afterward. I had been using a technique in my own code that involved reading past the end of an array as a misguided (in hindsight) attempt to optimize one particular situation. He confirmed that if you tried to even read past the end of an array, it would prevent the simple optimized format from being used.
If what the V8 author said is still true, then reading past the end of the array would prevent it from being optimized and it would have to fall back to the slower format.
Now it's possible that V8 has been improved in the meantime to efficiently handle this case, or that other JavaScript engines handle it differently. I don't know one way or the other on that, but this deoptimization is what the presentation was talking about.
I work on V8 at Google, and wanted to provide some additional insight on top of the existing answers and comments.
For reference, here's the full code example from the slides:
var iterations = 25000;

function Primes() {
  this.prime_count = 0;
  this.primes = new Array(iterations);
  this.getPrimeCount = function() { return this.prime_count; }
  this.getPrime = function(i) { return this.primes[i]; }
  this.addPrime = function(i) {
    this.primes[this.prime_count++] = i;
  }
  this.isPrimeDivisible = function(candidate) {
    for (var i = 1; i <= this.prime_count; ++i) {
      if ((candidate % this.primes[i]) == 0) return true;
    }
    return false;
  }
};

function main() {
  var p = new Primes();
  var c = 1;
  while (p.getPrimeCount() < iterations) {
    if (!p.isPrimeDivisible(c)) {
      p.addPrime(c);
    }
    c++;
  }
  console.log(p.getPrime(p.getPrimeCount() - 1));
}

main();
First and foremost, the performance difference has nothing to do with the < and <= operators directly. So please don't jump through hoops just to avoid <= in your code because you read on Stack Overflow that it's slow --- it isn't!
Second, folks pointed out that the array is "holey". This was not clear from the code snippet in OP's post, but it is clear when you look at the code that initializes this.primes:
this.primes = new Array(iterations);
This results in an array with a HOLEY elements kind in V8, even if the array ends up completely filled/packed/contiguous. In general, operations on holey arrays are slower than operations on packed arrays, but in this case the difference is negligible: it amounts to 1 additional Smi (small integer) check (to guard against holes) each time we hit this.primes[i] in the loop within isPrimeDivisible. No big deal!
TL;DR The array being HOLEY is not the problem here.
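(As an aside, and not something the slides suggest: if you did want a packed elements kind from the start, one option is to grow the array with push instead of pre-sizing it; as noted above, it makes little difference in this example.)

// Sketch: growing the array keeps its elements kind packed (no holes),
// instead of starting out holey via new Array(iterations).
function Primes() {
  this.prime_count = 0;
  this.primes = [];                 // packed and empty
  this.addPrime = function(i) {
    this.primes.push(i);            // appending never creates holes
    this.prime_count++;
  };
  // ... isPrimeDivisible etc. unchanged (with the `<` fix)
}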
Others pointed out that the code reads out of bounds. It's generally recommended to avoid reading beyond the length of arrays, and in this case it would indeed have avoided the massive drop in performance. But why though? V8 can handle some of these out-of-bound scenarios with only a minor performance impact. What's so special about this particular case, then?
The out-of-bounds read results in this.primes[i] being undefined on this line:
if ((candidate % this.primes[i]) == 0) return true;
And that brings us to the real issue: the % operator is now being used with non-integer operands!
integer % someOtherInteger can be computed very efficiently; JavaScript engines can produce highly-optimized machine code for this case.
integer % undefined on the other hand amounts to a way less efficient Float64Mod, since undefined is represented as a double.
The code snippet can indeed be improved by changing the <= into < on this line:
for (var i = 1; i <= this.prime_count; ++i) {
...not because <= is somehow a superior operator than <, but just because this avoids the out-of-bounds read in this particular case.
TL;DR: The slower loop is due to accessing the array 'out of bounds', which either forces the engine to recompile the function with fewer or even no optimizations, OR prevents it from compiling the function with those optimizations in the first place (if the (JIT-)compiler detected/suspected this condition before the first compiled 'version'); read on below for why.
Someone just has to say this (utterly amazed nobody has yet):
There used to be a time when the OP's snippet would have been a de facto example in a beginner's programming book, intended to emphasize that 'arrays' in JavaScript are indexed starting at 0, not 1, and as such used as an example of a common beginner's mistake (don't you love how I avoided the phrase 'programming error' ;)): out-of-bounds array access.
Example 1:
a dense array (contiguous, meaning no gaps between indexes, and with an actual element at each index) of 5 elements, using 0-based indexing (always the case in ES262).
var arr_five_char=['a', 'b', 'c', 'd', 'e']; // arr_five_char.length === 5
// indexes are: 0 , 1 , 2 , 3 , 4 // there is NO index number 5
Thus we are not really talking about a performance difference between < and <= (or 'one extra iteration'); we are asking:
'why does the correct snippet (b) run faster than the erroneous snippet (a)?'
The answer is 2-fold (although from an ES262 language implementer's perspective both are forms of optimization):
Data-Representation: how to represent/store the Array internally in memory (object, hashmap, 'real' numerical array, etc.)
Functional Machine-code: how to compile the code that accesses/handles (read/modify) these 'Arrays'
Item 1 is sufficiently (and correctly IMHO) explained by the accepted answer, but that only spends 2 words ('the code') on Item 2: compilation.
More precisely: JIT-Compilation and, even more importantly, JIT-RE-Compilation!
The language specification is basically just a description of a set of algorithms ('steps to perform to achieve the defined end result'). Which, as it turns out, is a very beautiful way to describe a language.
And it leaves the actual method that an engine uses to achieve specified results open to the implementers, giving ample opportunity to come up with more efficient ways to produce defined results.
A spec conforming engine should give spec conforming results for any defined input.
Now, with JavaScript code/libraries/usage increasing, and remembering how many resources (time/memory/etc.) a 'real' compiler uses, it's clear we can't make users visiting a web page wait that long (and require them to have that many resources available).
Imagine the following simple function:
function sum(arr) {
  var r = 0, i = 0;
  for (; i < arr.length;) r += arr[i++];
  return r;
}
Perfectly clear, right? Doesn't require ANY extra clarification, right? The return type is Number, right?
Well.. no, no & no... It depends on what argument you pass to the named function parameter arr...
sum('abcde'); // String('0abcde')
sum([1,2,3]); // Number(6)
sum([1,,3]); // Number(NaN)
sum(['1',,3]); // String('01undefined3')
sum([1,,'3']); // String('NaN3')
sum([1,2,{valueOf:function(){return this.val}, val:6}]); // Number(9)
var val=5; sum([1,2,{valueOf:function(){return val}}]); // Number(8)
See the problem? And consider that this barely scratches the surface of the possible permutations...
We don't even know what TYPE the function RETURNS until we are done...
Now imagine this same function-code actually being used on different types or even variations of input, both 'arrays' described completely literally in the source code and 'arrays' generated dynamically in-program...
Thus, if you were to compile the function sum JUST ONCE, the only way to always return the spec-defined result for any and all types of input is, obviously, to perform ALL spec-prescribed main AND sub steps, which is the only way to guarantee spec-conforming results (like an unnamed pre-y2k browser).
No optimizations (because no assumptions can be made), and a dead-slow interpreted scripting language remains.
JIT-Compilation (JIT as in Just In Time) is the current popular solution.
So, you start to compile the function using assumptions regarding what it does, returns and accepts.
You come up with checks that are as simple as possible to detect whether the function might start returning non-spec-conformant results (say, because it receives unexpected input).
Then you toss away the previously compiled result and recompile to something more elaborate, decide what to do with the partial result you already have (is it valid and to be trusted, or should it be computed again to be sure), tie the function back into the program and try again. Ultimately you fall back to stepwise script interpretation as in the spec.
All of this takes time!
All browsers keep working on their engines; with each and every sub-version you will see things improve and regress. At some point in history strings really were immutable strings (hence array.join was faster than string concatenation); now we use ropes (or similar), which alleviates the problem. Both return spec-conforming results, and that is what matters!
Long story short: just because JavaScript's semantics often have our back (as with this silent bug in the OP's example) does not mean that 'stupid' mistakes increase our chances of the compiler spitting out fast machine code. The compiler assumes we wrote the 'usually' correct instructions: the current mantra we 'users' (of the programming language) must have is: help the compiler, describe what we want, favor common idioms (take hints from asm.js for a basic understanding of what browsers can try to optimize and why); a rough sketch of what that means for the sum example is below.
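One way (my illustration, not from the original answer) to 'help the compiler' with the sum function above is simply to keep its call sites monomorphic: always pass the same shape of input, e.g. a packed array of numbers. Whether, and how much, a given engine rewards this is implementation-specific.

// Sketch: keep sum() monomorphic by always feeding it packed arrays of numbers.
function sum(arr) {
  var r = 0;
  for (var i = 0; i < arr.length; i++) r += arr[i];
  return r;
}

// Good: every call sees the same 'shape' of argument (a packed number array).
sum([1, 2, 3]);
sum([4, 5, 6, 7]);

// Risky: mixing in strings, holes or objects makes the call site polymorphic
// and can push the engine onto slower generic paths:
// sum('abcde'); sum([1, , 3]);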
Because of this, talking about performance is important BUT ALSO a minefield (and because of said minefield I really want to end by pointing to (and quoting) some relevant material):
Access to nonexistent object properties and out of bounds array elements returns the undefined value instead of raising an exception. These dynamic features make programming in JavaScript convenient, but they also make it difficult to compile JavaScript into efficient machine code.
...
An important premise for effective JIT optimization is that programmers use dynamic features of JavaScript in a systematic way. For example, JIT compilers exploit the fact that object properties are often added to an object of a given type in a specific order or that out of bounds array accesses occur rarely. JIT compilers exploit these regularity assumptions to generate efficient machine code at runtime. If a code block satisfies the assumptions, the JavaScript engine executes efficient, generated machine code. Otherwise, the engine must fall back to slower code or to interpreting the program.
Source:
"JITProf: Pinpointing JIT-unfriendly JavaScript Code"
Berkeley publication, 2014, by Liang Gong, Michael Pradel, Koushik Sen.
http://software-lab.org/publications/jitprof_tr_aug3_2014.pdf
ASM.JS (which also doesn't like out-of-bounds array access):
Ahead-Of-Time Compilation
Because asm.js is a strict subset of JavaScript, this specification only defines the validation logic—the execution semantics is simply that of JavaScript. However, validated asm.js is amenable to ahead-of-time (AOT) compilation. Moreover, the code generated by an AOT compiler can be quite efficient, featuring:
unboxed representations of integers and floating-point numbers;
absence of runtime type checks;
absence of garbage collection; and
efficient heap loads and stores (with implementation strategies varying by platform).
Code that fails to validate must fall back to execution by traditional means, e.g., interpretation and/or just-in-time (JIT) compilation.
http://asmjs.org/spec/latest/
and finally https://blogs.windows.com/msedgedev/2015/05/07/bringing-asm-js-to-chakra-microsoft-edge/
where there is a small subsection about the engine's internal performance improvements when removing the bounds check (while just hoisting the bounds check outside the loop already gave an improvement of 40%).
EDIT:
Note that multiple sources talk about different levels of JIT recompilation, down to interpretation.
Theoretical example based on the above information, regarding the OP's snippet:
Call to isPrimeDivisible
Compile isPrimeDivisible using general assumptions (like no out of bounds access)
Do work
BAM, suddenly array accesses out of bounds (right at the end).
Crap, says the engine, let's recompile isPrimeDivisible using different (fewer) assumptions, and this example engine doesn't try to figure out whether it can reuse the current partial result, so
Recompute all the work using the slower function (hopefully it finishes, otherwise repeat, and this time just interpret the code).
Return result
Hence the time in this case was:
First run (which failed at the end) + doing all the work over again using slower machine code for each iteration + the recompilation etc., which clearly takes more than 2 times longer in this theoretical example!
EDIT 2: (disclaimer: conjecture based on the facts below)
The more I think about it, the more I think that this answer might actually explain the more dominant reason for this 'penalty' on the erroneous snippet a (or the performance bonus on snippet b, depending on how you think of it), precisely why I'm adamant in calling it (snippet a) a programming error:
It's pretty tempting to assume that this.primes is a 'dense array' of pure numbers, which was either
A hard-coded literal in the source code (a known excellent candidate to become a 'real' array, as everything is already known to the compiler before compile time), OR
most likely generated using a numerical function filling a pre-sized array (new Array(/*size value*/)) in ascending sequential order (another long-known candidate to become a 'real' array).
We also know that the primes array's length is cached as prime_count! (indicating its intent and fixed size).
We also know that most engines initially pass arrays as copy-on-modify (when needed), which makes handling them much faster (if you don't change them).
It is therefore reasonable to assume that the array primes is most likely already an optimized array internally, which doesn't get changed after creation (simple for the compiler to determine if there is no code modifying the array after creation), and is therefore already (if applicable to the engine) stored in an optimized way, pretty much as if it were a Typed Array.
As I have tried to make clear with my sum function example, the argument(s) that get passed highly influence what actually needs to happen and, as such, how that particular code is compiled to machine code. Passing a String to the sum function shouldn't change the string, but it changes how the function is JIT-compiled! Passing an Array to sum should compile a different (perhaps even additional, for this type, or 'shape' as they call it, of object that got passed) version of machine code.
It seems slightly bonkers to convert the Typed-Array-like primes array on the fly to something_else while the compiler knows this function is not even going to modify it!
Under these assumptions, that leaves 2 options:
Compile as a number-cruncher assuming no out-of-bounds access, run into the out-of-bounds problem at the end, recompile and redo the work (as outlined in the theoretical example in edit 1 above)
The compiler has already detected (or suspected?) the out-of-bounds access up-front and the function was JIT-compiled as if the argument passed was a sparse object, resulting in slower functional machine code (as it would have more checks/conversions/coercions etc.). In other words: the function was never eligible for certain optimizations; it was compiled as if it received a 'sparse array'(-like) argument.
I now really wonder which of these 2 it is!
To add some science to this, here's a jsperf:
https://jsperf.com/ints-values-in-out-of-array-bounds
It tests the control case of an array filled with ints, looping while doing modular arithmetic and staying within bounds. It has 5 test cases:
1. Looping out of bounds
2. Holey arrays
3. Modular arithmetic against NaNs
4. Completely undefined values
5. Using a new Array()
It shows that the first 4 cases are really bad for performance. Looping out of bounds is a bit better than the other 3, but all 4 are roughly 98% slower than the best case.
The new Array() case is almost as good as the raw array, just a few percent slower.
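If the jsperf link goes stale, the in-bounds vs out-of-bounds comparison is easy to reproduce crudely with console.time (my sketch; the absolute numbers and the size of the gap will vary by engine and version):

// Sketch of the 'control' vs 'looping out of bounds' cases described above.
const len = 10000;
const ints = Array.from({ length: len }, (_, i) => i + 1);

function modSum(arr, limit) {
  let acc = 0;
  for (let i = 0; i < limit; i++) acc += 7 % arr[i];   // past the end, arr[i] is undefined
  return acc;
}

console.time('in bounds');
for (let r = 0; r < 1000; r++) modSum(ints, len);
console.timeEnd('in bounds');

console.time('out of bounds');
for (let r = 0; r < 1000; r++) modSum(ints, len + 1);  // reads one element past the end
console.timeEnd('out of bounds');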
I'm coding a function that takes an object and a projection that tells it which field it has to work on.
I'm wondering if I should use a string, like this:
const o = {
  a: 'Hello There'
};

function foo(o, str) {
  const a = o[str];
  /* ... */
}

foo(o, 'a');
Or with a function:
function bar(o, proj) {
  const a = proj(o);
  /* ... */
}

bar(o, o => o.a);
I think V8 creates classes for my JavaScript objects. If I use a string to access a field dynamically, will it still be able to create a class for my object, rather than a hashtable or something else?
V8 developer here. The answer to "which pattern should I use?" is probably "it depends". I can think of scenarios where the one or the other would (likely) be (a bit) faster, depending on your app's behavior. So I would suggest that you either try both (in real code, not a microbenchmark!) and measure yourself, or simply pick whichever you prefer and/or makes more sense in the larger context, and not worry about it until profiling shows that this is an actual bottleneck that's worth spending time on.
If the properties are indeed known at the call site, then the fastest option is probably to load the property before the call:
function baz(o, str, a) {
  /* ... */
}

baz(o, "a", o.a);
I realize that if things actually were this simple, you probably wouldn't be asking this question; if that assumption is true then this is a great example for how simplifications in microbenchmarks can easily change what the right answer is.
The answer to the classes question is that this decision has no impact on how V8 represents your objects under the hood -- that mostly depends on how you modify your objects, not on how you read from them. Also, for the record:
every object has a "hidden class"; whether or not it uses hash table representation is orthogonal to that
whether hash table mode or shape-tracking mode is better for any given object is one of the things that depend on the use case, which is precisely why both modes exist. I wouldn't worry too much about it, unless you know (from profiling) that it happens to be a problem in your case (more often than not, V8's heuristics get it right; manual intervention is rarely necessary).
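As a rough illustration of that first point (my example, with hypothetical names): the hidden class is determined by how an object is constructed and modified, not by how it is read, so both objects below share a shape regardless of whether you later read them via o.x, o[str], or a projection function.

// Sketch: both objects get the same hidden class because their properties
// are added with the same names in the same order.
function makePoint(x, y) {
  return { x: x, y: y };
}

const p1 = makePoint(1, 2);
const p2 = makePoint(3, 4);

// Reads, however you phrase them, don't change the representation:
const key = 'x';
console.log(p1.x, p2[key], (o => o.y)(p2));

// What can change the representation is modification, e.g. adding properties
// in a different order (typically a different shape) or deleting them:
const q = { y: 4, x: 3 };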
According to What's the Fastest Way to Code a Loop in JavaScript? and Why is to decrement the iterator toward 0 faster than incrementing,
a basic for loop is slower than a for loop with a simplified test condition,
i.e.:
console.log("+++++++");
var until = 100000000;
function func1() {
console.time("basic")
var until2 = until;
for (var i = 0; i < until2; i++) {}
console.timeEnd("basic")
}
function func2() {
console.time("reverse")
var until2 = until;
for (until2; until2--;) {}
//while(until2--){}
console.timeEnd("reverse")
}
func1();
func2();
As you can see, the first function is, contrary to expectations, faster than the second. Did something change since the release of the Oracle article, or did I do something wrong?
Yes, something has changed since the article was released. For one thing, Firefox has gone from version 3 to version 38. Generally, when a new version of a browser is released, the performance of several things changes.
If you try that code in different versions of different browsers on different systems, you will see quite a difference in performance. Different browsers are optimised for different JavaScript code.
As performance differs, and you can't rely on any measurements to be useful for very long, there are basically two principles that you can follow if you need to optimise Javascript:
Use the simplest and most common code for each task; that is the code that browser vendors will try to optimise the most.
Don't look for the best performance in a specific browser; look for the worst performance in any browser. Test the code in different browsers, and pick a method that doesn't give remarkably bad performance in any of them.
I'm trying to learn a bit more about JavaScript apart from the typical var x = function(){...} constructs, so I've gone for namespaces.
In PHP, I always work with namespaces to avoid collisions and to organize my constants, classes and functions. So far, I've just done the basic namespacing like this:
var helpers = {
  strings: {
    add: function(a, b) {
      alert(a + ' plus ' + b + ' equals to ' + (a + b));
    },
    msgbox: function(text) {
      alert(text);
    }
  }
};
So I can write HTML blocks like this:
<button class="ui-button" type="button" onclick="helpers.strings.msgbox('Hello, world!');"><img src="assets/images/alert.png" alt="Alert"> Click me!</button>
My questions are:
is there any practical/hard limit to the number of levels I can nest my namespaces within?
is there any performance impact associated with the level of nesting for any given function?
can I extend a given namespace later in time? Like... having a core.js file and extending the strings namespace to add more functions in, let's say, extended.js?
I'm not going to build a horribly nested structure or anything like that, but I would just like to know if there are any practical limitations imposed by the browser engine or the language itself, so my question is more of a theoretical nature (I'm not building a construct to test this, in this case).
is there any practical/hard limit to the number of levels I can nest
my namespaces within?
Obviously there is, because if nothing else more levels will require more memory, and memory is finite; in practice, other restrictions will also be in place (derived from the implementation details of each particular JavaScript engine).
But the practical answer is: if you have reason to believe you might go near these limits, you are doing something wrong.
is there any performance impact associated with the level of nesting
for any given function?
Yes, because each level of indirection involves finding where the next nested "namespace" object is in memory and looking up its properties. In practice this cost is infinitesimal compared to other stuff that your code will be doing, so you will not be able to measure any difference unless the number of levels is large and you are digging up a nested value within a loop.
For example, this is not the best of ideas:
for (var i = 0; i < 1000000000; ++i) {
  ns1.ns2.ns3.ns4.ns5.ns6.ns7.ns8.ns9.ns10.ns11.ns12.ns13.count += 1;
}
Fortunately if you ever need to do this there is a simple workaround:
var ns13 = ns1.ns2.ns3.ns4.ns5.ns6.ns7.ns8.ns9.ns10.ns11.ns12.ns13;
for (var i = 0; i < 1000000000; ++i) {
  ns13.count += 1;
}
can I extend a given namespace later in time? Like... having a core.js
file and extending the strings namespace for adding more functions in,
let's say, extended.js?
You can, but you have to be careful so that both of these files use a mechanism for injecting variables into a namespace that does not actually replace the contents of the namespace.
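A common pattern for that (my sketch, with hypothetical file names and helpers) is to reuse the namespace object if it already exists instead of overwriting it:

// core.js -- create the namespace only if it doesn't exist yet
var helpers = helpers || {};
helpers.strings = helpers.strings || {};

helpers.strings.msgbox = function(text) {
  alert(text);
};

// extended.js -- loaded later; extends the same namespace instead of replacing it
var helpers = helpers || {};
helpers.strings = helpers.strings || {};

helpers.strings.shout = function(text) {   // hypothetical extra helper
  alert(text.toUpperCase());
};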
There is a slight performance hit for each level of nesting, as it's another lookup. And there's the additional overhead of downloading more code if this is for client-side scripting. The first is likely quite minor. The second you'll have to decide for yourself.
And you can easily add new functions to your namespaces later:
helpers.strings.multiply = function(a, b) { /* ... */ };
Although I made heavy use of namespacing for years, I rarely do so now, preferring a module loading system to manage my dependencies, and not exposing even something like helpers. But these sorts of namespaces are easy to create and easy to use if you choose to do so.