Memory being allocated - javascript

By using the Chrome Developer Tools, I found out arrays and objects were being allocated. I went through my code looking for the obvious [], {} and new, but there aren't any. I have checked functions that create a new [], {} or use new, looked to see where those functions are used, and I've learned not to use them. So, how else can memory be allocated?
This is a problem for me, because every time GC kicks in, it blocks the main loop and the animation becomes inconsistent.

It is fruitless to worry overmuch about memory allocation. Memory will be allocated for everything: variables, arrays, objects, etc. There isn't much you can do with JavaScript without using a variable or an object, and in any case, memory allocation is not really something a JavaScript script controls directly. Any and all JavaScript will use some degree of memory no matter what. Indeed, I would say that if you have "learned to avoid using" objects and arrays, you have been misinformed or are learning the wrong lesson.
It is far more important to avoid circular references, to avoid excessive memory consumption per scope, and to generally avoid locking up the browser thread with tight loops and other bad practices. For instance, in a for loop, avoid recalculating the limit in the for declaration: for (var x = 1; x < myString.length; x++) should be var max = myString.length; for (var x = 1; x < max; x++). Even such optimizations (micro-optimizations in most cases) are not critical for a JavaScript developer, because the browser handles the overall memory allocation/consumption as well as the garbage collection of out-of-scope references.
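The length-hoisting advice can be sketched like this (a minimal illustration; myString is a stand-in variable, and in most real code the difference is negligible):

```javascript
var myString = "hello world";

// Recomputing the limit on every iteration (looks up .length each time):
var count1 = 0;
for (var x = 1; x < myString.length; x++) {
  count1++;
}

// Hoisting the limit out of the loop (a single lookup):
var count2 = 0;
var max = myString.length;
for (var y = 1; y < max; y++) {
  count2++;
}

console.log(count1 === count2); // true -- same behavior, fewer lookups
```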
For more information about practical techniques to avoid leaks, check out this article: http://www.javascriptkit.com/javatutors/closuresleak/index.shtml (or other articles like it). Otherwise, as long as you aren't leaking memory, it is expected that any script will allocate and use some amount of memory; it is unavoidable. Considering that the modern PC has gigabytes of memory available, your script's paltry kilobytes or even megabytes of memory use are not of much consequence - that's what the memory is there for.

Related

Why are Typed Arrays causing memory leaks in JavaScript

I noticed that I get this error when trying to create many Float32Arrays:
Uncaught RangeError: Array buffer allocation failed
I tried to reproduce the error like this:
console.log("trying to cause memory leaks");
for (var i = 0; i < 1000; i++) {
    console.log(i);
    var x = new Float32Array(100000000);
    //var x = new Array(100000000);
}
console.log("finished");
jsfiddle
And as far as I understand the concept of the garbage collector, it should collect and dump unreferenced objects. But since I'm getting the error mentioned above, I don't think it works as expected. Nevertheless, there are no problems when I do the same thing with an Array instead of a Float32Array. I can even scale the numbers up with Array.
So maybe I am ill-informed about the garbage collector, or there is something fishy about the Float32Array constructor. Maybe my test lacks integrity, i.e. the garbage collector has not enough time to collect, or something trivial like that?
Maybe someone can give me some insight?
PS: I'm using Chrome 52.0.2743.116 m, which uses the V8 engine AFAIK.
Nevertheless there are no problems when I do the same thing with an Array instead of a Float32Array. I can even scale the numbers up with Array.
new Array(100000000) doesn't allocate anything but an array object with a length property. That's it. No element slots are created, because standard arrays aren't really arrays at all (that's a post on my blog), they're just objects backed by Array.prototype with a special length property and special handling of properties whose names meet the spec's definition of "array indexes" (details in the spec).
In contrast, new Float32Array(100000000) has to allocate contiguous memory for 100,000,000 32-bit slots (plus the object overhead). So if there isn't a contiguous block of 400,000,000 bytes available for that buffer, it's going to fail.
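The byte cost is easy to verify from a typed array's BYTES_PER_ELEMENT and its underlying buffer (a small sketch using a modest length):

```javascript
// Each Float32Array element occupies 4 bytes, so a length-n array needs
// a contiguous buffer of n * 4 bytes:
var a = new Float32Array(1000);
console.log(a.BYTES_PER_ELEMENT); // 4
console.log(a.buffer.byteLength); // 4000
// Scaled up, new Float32Array(100000000) needs 400,000,000 contiguous bytes.
```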
Side note: I was able to run your loop to completion in Chrome 52.0.2743.116 (64-bit) on *nix. Took a while, but... No memory leaks showed up, system memory use looked like this while it was running (the flatline at the beginning is before I started it, I've chopped it off long before it finished, it would be way too wide otherwise):
We can see that V8 (the JavaScript engine in Chrome) lets some garbage pile up, then runs GC and cleans up some unreferenced Float32Arrays, then lets garbage pile up again, then GC, etc. No leak.
Chrome's task manager showed this usage:
Before starting: 58,000k
While running: 3,700,000 - 5,100,000k
At end: 450,000k
...which makes sense since nothing clears x at the end of the loop, so it's still referencing that last array.
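A sketch of why that last array stays alive, and how one could release it (sizes here are hypothetical and much smaller than the question's):

```javascript
var x;
for (var i = 0; i < 10; i++) {
  x = new Float32Array(1000); // the previous array becomes unreachable each time
}
// After the loop, x still references the final array:
console.log(x.length); // 1000

// Dropping the reference makes the final buffer collectable too:
x = null;
console.log(x); // null
```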

Chrome 39 JavaScript Performance Anomaly

I did a jsPerf test to see if there were any performance differences between using arguments or local variables in a function in JavaScript.
In Firefox 34, there was practically no difference. However, in Chrome 39, the compiler seems to be doing a lot of harm. See these results:
Can anyone explain why this happens?
First of all, for a benchmark that tries to measure arguments vs. local variables performance behavior, you are doing too much in each case - you allocate a closure again and again, you allocate an object from an object literal, and you use a for-in loop. All these operations are way more expensive than local variable access. Their costs add up and hide whatever small cost variable access has.
Now the anomaly you are seeing is due to V8 not having a fast path for creating closures that contain literals: there is FastNewClosureStub, but it is only used when there are no literals in the closure[1]. This makes closure allocation more expensive in the first case compared to the second - you are seeing this reflected in the score, as closure allocation is a rather dominant part of your benchmark (it allocates one closure per op).
If you "hide" literal creation[2] in a separate function, you will see the anomaly go away. Note: such hiding doesn't make the benchmark any more representative: it is still not measuring what you want to measure.
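The original jsPerf cases aren't reproduced here, but "hiding" the literal means something like the following sketch (function names are illustrative):

```javascript
// Variant A: a closure is allocated whose body contains an object literal --
// at the time, V8 could not use its fast closure-allocation path for this shape:
function runWithLiteral() {
  var f = function () {
    var state = { n: 1 }; // literal inside the closure body
    return state.n;
  };
  return f();
}

// Variant B: the literal is "hidden" in a helper, so the allocated
// closure contains no literals and the fast path applies:
function makeState() { return { n: 1 }; }
function runWithHiddenLiteral() {
  var f = function () {
    var state = makeState(); // no literal in the closure body
    return state.n;
  };
  return f();
}

console.log(runWithLiteral());       // 1
console.log(runWithHiddenLiteral()); // 1
```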
Overall, trying to capture the performance characteristics of variable access in a benchmark is very hard, because these are usually among the fastest and smallest operations, even in code produced by the non-optimizing (baseline) compiler. In the most common case - no variables are captured and the scope does not contain with, eval or the arguments object - there will be no difference between arguments and local variable access, both compiling down to a single memory load.
[1] https://github.com/v8/v8-git-mirror/blob/9def087efcd844342c35f42628bac4ead49cac81/src/ia32/full-codegen-ia32.cc#L1213-L1218
[2] http://jsperf.com/variable-vs-variable-passed-as-an-argument-to-a-self-in/3

Does assigning a new string value create garbage that needs collecting?

Consider this javascript code:
var s = "Some string";
s = "More string";
Will the garbage collector (GC) have work to do after this sort of operation?
(I'm wondering whether I should worry about assigning string literals when trying to minimize GC pauses.)
Edit: I'm slightly amused that, although I stated explicitly in my question that I need to minimize GC, everyone assumed I'm wrong about that. If one really must know the particular details: I've got a game in JavaScript - it runs fine in Chrome, but in Firefox it has semi-frequent pauses that seem to be due to GC. (I've even checked with the MemChaser extension for Firefox, and the pauses coincide exactly with garbage collection.)
Yes, strings need to be garbage-collected, just like any other type of dynamically allocated object. And yes, this is a valid concern as careless allocation of objects inside busy loops can definitely cause performance issues.
However, string values are immutable (unchangeable), and most modern JavaScript implementations use "string interning"; that is, they store only one instance of each unique string value. This means that if you have something like this...
var s1 = "abc",
s2 = "abc";
...only one instance of "abc" will be allocated. This only applies to string values, not String objects.
A couple of things to keep in mind:
Functions like substring, slice, etc. will allocate a new string value for each call (if called with different parameters).
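For instance, a small sketch of the substring point:

```javascript
var s = "hello world";
// Each call produces a new string value rather than a view into s:
var head = s.substring(0, 5);
var tail = s.slice(6);
console.log(head); // "hello"
console.log(tail); // "world"
```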
Even though both variables point to the same data in memory, there are still two variables to process when the GC cycle runs. Having too many local variables can also hurt you, as each of them needs to be processed by the GC, adding overhead.
Some further reading on writing high-performance JavaScript:
https://developer.mozilla.org/en-US/docs/JavaScript/Memory_Management
https://www.scirra.com/blog/76/how-to-write-low-garbage-real-time-javascript
http://jonraasch.com/blog/10-javascript-performance-boosting-tips-from-nicholas-zakas
Yes, but unless you are doing this in a loop millions of times it won't likely be a factor for you to worry about.
As you already noticed, JavaScript is not just one "JavaScript": it runs on different platforms and engines, and thus will have different performance characteristics.
So the definite answer to the question "Will the GC have work to do after this sort of operation?" is: maybe. If the script is as short as you've shown, then a JIT compiler might well drop the first string completely. But there's no rule in the language definition that says it has to be one way or the other. So in the end it's like it is all too often in JavaScript: you have to try it.
The more interesting question might be: how can you avoid garbage collection? And the answer is: try to minimize the allocation of new objects. Games typically have a fairly constant number of objects, and often no new objects are created until an old one becomes unused. For strings this might be harder, as they are immutable in JS. So try to replace strings with other (mutable) representations where possible.
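One common tactic along those lines is to collect pieces in an array and join once, instead of growing a string by repeated concatenation (a minimal sketch; whether this actually wins depends on the engine, since modern engines optimize concatenation internally):

```javascript
// Repeated concatenation can create an intermediate string per step:
var s = "";
for (var i = 0; i < 5; i++) {
  s += i;
}

// Collecting pieces in an array and joining once at the end:
var parts = [];
for (var j = 0; j < 5; j++) {
  parts.push(j);
}
var t = parts.join("");

console.log(s);       // "01234"
console.log(s === t); // true
```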
Yes, the garbage collector will have a string object containing "Some string" to get rid of. And, in answer to your question, that string assignment will make work for the GC.
Because strings are immutable and are used a lot, the JS engine has a pretty efficient way of dealing with them. You should not notice any pauses from garbage collecting a few strings. The garbage collector has work to do all the time in the normal course of javascript programming. That's how it's supposed to work.
If you are observing pauses from GC, I rather doubt it's from a few strings. More likely there is a much bigger issue going on: either you have thousands of objects needing GC, or some very complicated task for the GC. We couldn't really speculate on that without studying the overall code.
This should not be a concern unless you are running some enormous loop and dealing with tens of thousands of objects. In that case, one might want to program a little more carefully to minimize the number of intermediate objects that are created. But, absent that level of objects, you should first write clear, reliable code and then optimize for performance only when something has shown you that there is a performance issue to worry about.
To answer your question "I'm wondering whether I should worry about assigning string literals when trying to minimize GC pauses": No.
You really don't need to worry about this sort of thing with regard to garbage collection.
GC is only a concern when creating & destroying huge numbers of Javascript objects, or large numbers of DOM elements.

Memory safest way to pull sub-arrays from an array

I'm iterating through an array of arrays, pulling out the sub-arrays I need and discarding the rest.
var newArray = [];
for (var i = 0; i < oldArray.length; i++) {
    if (oldArray[i][property] == value) {
        newArray.push(oldArray[i]);
    }
}
oldArray = newArray;
Is this the most memory-friendly way to do this?
Will garbage collection safely take care of the sub-arrays I did not push onto
newArray?
Will newArray be scattered across memory in a way that could prevent this method from scaling efficiently?
JavaScript's prototypal nature makes everything an object, and objects behave essentially like hash maps. This means that when you are "pulling", as you say, objects from one array into another, you are only copying their references.
So yes, I would say it won't cause you many memory problems if you drop the references (at least in modern browsers). But garbage collectors are implemented in different ways depending on the browser you are working with.
1 - Is this the most memory-friendly way to do this?
If you drop the references, yes, it is a memory-friendly way to do it. You don't have any free like in C/C++, and for testing purposes in Chrome I think you can call window.gc() to trigger the garbage collector. A delete operator exists, but it removes properties from objects; it doesn't free memory directly.
2- Will garbage collection safely take care of the sub-arrays I did not append to newArray?
If there aren't any other references pointing to them, yes. Circular memory leaks were common in older browsers, but with modern ones it's safe to say yes.
3 - Will newArray be scattered across memory in a way that could prevent this method from scaling efficiently?
Yes, it will be scattered across memory, because in JavaScript arrays work like hash maps or linked hash maps (if I'm mistaken here, someone please correct me), but the garbage collector will take care of it, since it is used to working with maps. And again, you are only working with references: the objects stay in the same place, and you only store references in the array.
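That reference-copying behavior can be checked directly (keep is a made-up property for illustration):

```javascript
var oldArray = [{ keep: true }, { keep: false }, { keep: true }];

var newArray = [];
for (var i = 0; i < oldArray.length; i++) {
  if (oldArray[i].keep) {
    newArray.push(oldArray[i]); // copies the reference, not the object
  }
}

console.log(newArray.length);             // 2
console.log(newArray[0] === oldArray[0]); // true -- the very same object

// Once oldArray is reassigned, the { keep: false } object has no
// remaining references and becomes eligible for collection:
oldArray = newArray;
```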
1. No... fmsf is mostly right, but there are some micro-optimizations you could do for page load time and for the time spent checking the condition. If you leave oldArray.length in the loop, it will look up length on the oldArray object on every iteration, which can add time in large loops; also, declaring all your variables at once can save some time if the method this is contained in is called many times.
When someone downloads your script, it also helps to give variables the shortest names possible, to minimize the data transferred from server to client.
var nA = [],        // newArray
    i = 0,
    t = oA.length,  // oA = oldArray; p = property, v = value
    s;
for (; i < t; i++) {
    s = oA[i];
    if (s[p] == v) {
        nA.push(s);
    }
}
oA = nA;
If you want to get really serious, you would use a code minifier for the renaming of variables and white space removal.
2. Yes, JavaScript is pretty good about these things; the main thing to look out for is closure-related leaks in IE, which would not be caused by your code here. I have still found closure leaks in rare, odd cases in IE9, but these are usually caused by linking to the DOM, which is irrelevant in this case.
3. No, the only things changing with this code are references to objects, not how the objects are stored in memory.

What should and shouldn't I cache in javascript?

I know that it's a good idea to cache objects that will be used many times. But what if I use the following many times:
var chacheWindow = window;
var chacheDocument = document;
var chacheNavigator = navigator;
var chacheScreen = screen;
var chacheWindowLocationHash = window.location.hash;
var chacheDocumentBody = document.body;
Maybe it is only good to cache stuff between <html></html>? Please explain.
The point of caching is to avoid one of two things:
Typing long names repeatedly. Every one of your examples has a longer name than the original, so you don't get that benefit.
Performing an expensive operation repeatedly (or a slightly costly operation, e.g. array.length before a for loop, a very large number of times). There is no sign that you are getting that benefit either.
You probably shouldn't be copying the references to local variables at all.
Pretty hard to say exactly what you should cache.
I wouldn't cache native global objects or things that may change. What you are doing in your example is just creating another reference to the same object.
References to DOM elements should be cached, or else you will spend time searching for them again. The results of functions that perform heavy operations can also be cached.
You can use a profiler and look at the performance of different functions to get a hint about what you should cache.
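Caching the result of a heavy function usually means memoization; here is a minimal sketch (the "heavy" computation is a stand-in):

```javascript
var cache = {};
function expensiveSquare(n) {
  if (n in cache) {
    return cache[n];  // cache hit: skip the heavy work
  }
  var result = n * n; // stand-in for an expensive computation
  cache[n] = result;
  return result;
}

console.log(expensiveSquare(12)); // 144 (computed)
console.log(expensiveSquare(12)); // 144 (served from the cache)
```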
Caching is a double-edged sword. If you've got some value that's intensive to calculate or requires a round trip to the server, caching it is a fantastic idea. For pretty much all of the values that you specified in your question, you're not buying yourself anything; you're simply replacing one variable reference with another.
Generally speaking, it's not a good idea to bog yourself down in these micro-optimizations. Optimizing and improving performance is good, but your time is generally better spent looking for that loop that does way too much work, or fixing a bad algorithm, than handling this type of case, where any improvement at all would be nanoseconds at best - and again, for the values you mentioned you will see absolutely no improvement.
