I'm working on a page that, eventually, could have more than 100 arrays, only a few of which would be used at any given time.
At the moment, I'm populating all arrays as global variables, but I suspect this is inefficient in terms of memory use.
Should I change my code to clear the arrays when they are not being used? If so, what is the best way to do this? I'd guess var myArray = new Array(), but perhaps there's a better option.
Unless you have many thousands of objects in your arrays, you don't need to worry. Don't prematurely optimize your code; the browser is quite good at handling lots of small objects.
If memory does become an issue or you notice performance issues, you can simply reassign a new array to those variables:
myArray = [];
The garbage collector will clean up the objects that you dereferenced.
In a broader case, if there's no need to keep references to those objects, you don't even need the arrays to begin with. I.e., if you never access the elements that you put in the arrays a second time, just remove the arrays and don't bother assigning the data.
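If only a few of the arrays are needed at any given time, one option in this spirit is to build each array lazily and drop the reference once you are done with it. A minimal sketch, assuming a hypothetical buildArrayFor(name) function that produces the data:

var cache = {};

function getArray(name) {
    // Build the array only the first time it is requested.
    if (!cache[name]) {
        cache[name] = buildArrayFor(name); // hypothetical data source
    }
    return cache[name];
}

function releaseArray(name) {
    // Dropping the reference lets the garbage collector reclaim the array.
    delete cache[name];
}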
Related
I have to reach a value (direct access) many times in a very large 2D array. Is it better to assign it to a temporary variable, or should I use array[req.params.position.x][req.params.position.y].anyValue every time?
I know the "new variable" option would make the code easier to read; I was wondering whether it would have an impact on the performance of the code.
My hypothesis is that it acts like some kind of forEach inside a forEach and thus takes more time to reach the value every time. Is that right?
From your description array[req.params.position.x][req.params.position.y], it sounds like, whilst this is a 2D array, you also know up front the indexes into each array. This is direct access to the array, which is extremely quick. It would be different if you needed to search for something in the array, but here you don't need that.
Internally, in browsers, this will be constant time access no matter how big the array is. It does not need to "lookup", since the passed indexes will reference the value location in memory -- where it will be retrieved directly.
So there is no performance concern here.
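If you do want a shorter name anyway, caching the element in a local variable is purely a readability choice; the cost of the lookup is negligible either way. A small sketch, assuming req.params.position holds the indexes:

var pos = req.params.position;
var cell = array[pos.x][pos.y]; // direct, constant-time access

// 'cell' references the same object, so cell.anyValue is
// equivalent to array[req.params.position.x][req.params.position.y].anyValue.
console.log(cell.anyValue);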
I'm currently building a small application in Vanilla JS (without any dependencies like lodash/jquery) and I needed to compare two objects to check for equality in keys and values. I was just wondering about how to optimize this problem.
The keys of both objects are in the same order as they are derived from the same method. According to this answer, the fastest and most efficient way to do this is using JSON.stringify(object1) === JSON.stringify(object2).
However, in my app, if the two objects are not equal, then I loop through the two of them and perform some operations. The problem is that these operations are pretty performance heavy and run occasionally. I need to optimize my solution.
Therefore, I was wondering if JSON.stringify runs some sort of for loop internally as well. In my application, it is more likely that the two objects are unequal. If JSON.stringify loops over the objects anyway, I could remove the check and run my operations right away (they only change the program's behaviour when the objects are unequal), saving time and making the code more optimized. If I keep the check, then when the objects are unequal I am effectively running two loops for the same purpose, and one loop either way when they are equal; if JSON.stringify is essentially a loop internally, I could get away with just one loop regardless of whether the objects are equal. Am I making sense here? Please let me know if you don't understand something. Is this check useless, and should I remove it to optimize my code?
Your question touches 4 different areas:
The implementation (and thus performance) of JSON.stringify
The implementation (and thus performance) of object iteration
The quality and performance of the JIT compiler
The speed of memory allocation (JSON.stringify is a memory hog for big objects)
So it is quite clear that there is no universal answer for all JS engines and OSes.
I recommend you do the checks in your own code ... why?
While right now the order of attributes might be constant, future maintenance to your codebase might change that and introduce a hard to track down bug.
It is good practice to create an isEqual method for all object types you use (see the sketch after this list).
It is more readable.
Of course there are also disadvantages:
Your code will become bigger (this is linked to the readability point above).
Anything else I might have forgotten.
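As a rough illustration of the isEqual idea, here is a minimal sketch for a shallow comparison of plain objects with primitive values (nested objects would need a recursive, deep comparison):

function isEqual(a, b) {
    var keysA = Object.keys(a);
    var keysB = Object.keys(b);
    if (keysA.length !== keysB.length) {
        return false; // different number of keys
    }
    for (var i = 0; i < keysA.length; i++) {
        var key = keysA[i];
        if (a[key] !== b[key]) {
            return false; // value differs (or key missing in b)
        }
    }
    return true;
}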
For example:
array1 = new Array(5); array2 = new Array(10);
Both console.log(array1) and console.log(array2) would return [].
Then, what is the role of arrayLength here?
JavaScript hides a lot of the details that you'd typically have to deal with when working with arrays in many other languages. For example, an array in JavaScript can grow automatically as you push values to it.
However, under the covers, the runtime is still dealing with the same sort of memory allocation issues that languages like C or Java make visible to the developer. The runtime may set aside a little extra memory for the array to grow into, and then once it runs out of contiguous memory space, it'll reallocate a larger piece of memory somewhere else and copy all of the values from the first set of memory to another location.
(This is vastly oversimplifying things, but hopefully you get the general idea.)
If you know ahead of time exactly how many items you can expect to put into the array, using new Array(number) will give the runtime a hint that it can begin by allocating that much memory, and avoid the need for the memory to be reallocated and copied around as you grow it.
It's in this light, for example, that this page suggests the following practices to achieve maximum performance in the V8 javascript engine:
Don't pre-allocate large Arrays (e.g. > 64K elements) to their maximum size, instead grow as you go
Initialize using array literals for small fixed-sized arrays
Preallocate small arrays (<64k) to correct size before using them
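As a rough illustration of the last two points, both snippets below end up with the same n-element array; the preallocated version merely gives the engine a size hint up front (worth measuring in your own engine before relying on it):

var n = 1000;

// Grown as you go: the engine may reallocate as the array expands.
var grown = [];
for (var i = 0; i < n; i++) {
    grown.push(i);
}

// Preallocated: the length is set up front, the indices are filled in afterwards.
var prealloc = new Array(n);
for (var j = 0; j < n; j++) {
    prealloc[j] = j;
}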
Passing a number to the Array constructor sets the length property of the array without setting the indices of the items (which is why your console.log isn't showing anything).
To quote JavaScript Garden:
Being able to set the length of the array in advance is only useful in a few cases, like repeating a string, in which it avoids the use of a loop.
Here's an example of doing just that:
new Array(count + 1).join(stringToRepeat);
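For instance, to repeat "ab" three times:
new Array(4).join("ab"); // "ababab" -- the separator fills the three gaps between four empty slots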
Consider this javascript code:
var s = "Some string";
s = "More string";
Will the garbage collector (GC) have work to do after this sort of operation?
(I'm wondering whether I should worry about assigning string literals when trying to minimize GC pauses.)
Edit: I'm slightly amused that, although I stated explicitly in my question that I needed to minimize GC, everyone assumed I'm wrong about that. If one really must know the particular details: I've got a game in javascript -- it runs fine in Chrome, but in Firefox it has semi-frequent pauses that seem to be due to GC. (I've even checked with the MemChaser extension for Firefox, and the pauses coincide exactly with garbage collection.)
Yes, strings need to be garbage-collected, just like any other type of dynamically allocated object. And yes, this is a valid concern as careless allocation of objects inside busy loops can definitely cause performance issues.
However, string values are immutable (unchangeable), and most modern JavaScript implementations use "string interning", that is, they store only one instance of each unique string value. This means that if you have something like this...
var s1 = "abc",
s2 = "abc";
...only one instance of "abc" will be allocated. This only applies to string values, not String objects.
A couple of things to keep in mind:
Functions like substring, slice, etc. will allocate a new object for each function call (if called with different parameters).
Even though both variables point to the same data in memory, there are still two variables to process when the GC cycle runs. Having too many local variables can also hurt you, as each of them will need to be processed by the GC, adding overhead.
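As an illustration of the first point, hoisting a sliced string out of a loop avoids allocating a fresh string on every iteration (a sketch with assumed variable names):

// Allocates a new string on every iteration:
for (var i = 0; i < items.length; i++) {
    if (items[i].indexOf(text.slice(0, 10)) !== -1) { /* ... */ }
}

// Hoisted: the slice is allocated once and reused:
var prefix = text.slice(0, 10);
for (var j = 0; j < items.length; j++) {
    if (items[j].indexOf(prefix) !== -1) { /* ... */ }
}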
Some further reading on writing high-performance JavaScript:
https://developer.mozilla.org/en-US/docs/JavaScript/Memory_Management
https://www.scirra.com/blog/76/how-to-write-low-garbage-real-time-javascript
http://jonraasch.com/blog/10-javascript-performance-boosting-tips-from-nicholas-zakas
Yes, but unless you are doing this in a loop millions of times it won't likely be a factor for you to worry about.
As you already noticed, JavaScript is not one single JavaScript: it runs on different platforms and engines and thus will have different performance characteristics.
So the definite answer to the question "Will the GC have work to do after this sort of operation?" is: maybe. If the script is as short as you've shown it, then a JIT-Compiler might well drop the first string completely. But there's no rule in the language definition that says it has to be that way or the other way. So in the end it's like it is all too often in JavaScript: you have to try it.
The more interesting question might be: how can you avoid garbage collection? The answer is to try to minimize the allocation of new objects. Games typically have a fairly constant number of objects, and often no new objects are created until an old one becomes unused. For strings this might be harder, as they are immutable in JS, so try to replace strings with other (mutable) representations where possible.
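One common pattern in games is a simple object pool: allocate a fixed set of objects up front and reuse them instead of creating new ones every frame. A minimal sketch (the particular fields are just an assumption):

var pool = [];
for (var i = 0; i < 100; i++) {
    pool.push({ x: 0, y: 0, active: false });
}

function acquire() {
    // Reuse an inactive object instead of allocating a new one.
    for (var i = 0; i < pool.length; i++) {
        if (!pool[i].active) {
            pool[i].active = true;
            return pool[i];
        }
    }
    return null; // pool exhausted
}

function release(obj) {
    obj.active = false; // returned to the pool, no garbage created
}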
Yes, the garbage collector will have a string object containing "Some string" to get rid of. And, in answer to your question, that string assignment will make work for the GC.
Because strings are immutable and are used a lot, the JS engine has a pretty efficient way of dealing with them. You should not notice any pauses from garbage collecting a few strings. The garbage collector has work to do all the time in the normal course of javascript programming. That's how it's supposed to work.
If you are observing pauses from GC, I rather doubt it's from a few strings. There is more likely a much bigger issue going on. Either you have thousands of objects needing GC or some very complicated task for the GC. We couldn't really speculate on that without study of the overall code.
This should not be a concern unless you were doing some enormous loop and dealing with tens of thousands of objects. In that case, one might want to program a little more carefully to minimize the number of intermediate objects that are created. But, absent that level of objects, you should first write clear, reliable code and then optimize for performance only when something has shown you that there is a performance issue to worry about.
To answer your question "I'm wondering whether I should worry about assigning string literals when trying to minimize GC pauses": No.
You really don't need to worry about this sort of thing with regard to garbage collection.
GC is only a concern when creating & destroying huge numbers of Javascript objects, or large numbers of DOM elements.
I'm iterating through an array of arrays, pulling out the sub-arrays I need and discarding the rest.
var newArray = [];
for (var i = 0; i < oldArray.length; i++) {
if (oldArray[i][property] == value) {
newArray.push(oldArray[i]);
}
}
oldArray = newArray;
Is this the most memory-friendly way to do this?
Will garbage collection safely take care of the sub-arrays I did not push onto
newArray?
Will newArray be scattered across memory in a way that could prevent this method from scaling efficiently?
JavaScript's prototypal nature makes everything an object, and objects behave like maps, essentially hash maps. This means that when you are "pulling", as you say, objects from one array into another, you are only copying their references.
So yes, I would say it won't bring you many memory problems if you drop the references (at least in modern browsers). But garbage collectors are implemented in different ways depending on the browser you are working with.
1 - Is this the most memory-friendly way to do this?
If you drop the references, yes, it is a memory-friendly way to do it. You don't have a free like in C/C++, and for testing purposes in Chrome I think you can call window.gc() to trigger the garbage collector. A delete operator also exists, but it removes properties from objects rather than freeing memory directly.
2- Will garbage collection safely take care of the sub-arrays I did not append to newArray?
If there aren't any other references pointing to them. Yes. Circular memory leaks were common in older browsers but with the new ones it's safe to say yes.
3 - Will newArray be scattered across memory in a way that could prevent this method from scaling efficiently?
Yes, it may be scattered across memory, because in JavaScript arrays can work like hash maps or linked hash maps (if I'm mistaken here, someone please correct me), but the garbage collector will take care of it, because it is used to working with maps. And again, you are only working with references: the objects stay in the same place, and you only store references in the array.
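You can see the reference copying directly in a small self-contained demo: after pushing, both arrays point at the very same object, so no element data is duplicated.

var original = [{ id: 1 }, { id: 2 }];
var picked = [];
picked.push(original[0]); // copies only the reference, not the object

picked[0].seen = true;
console.log(original[0].seen); // true: both arrays reference the same object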
1. No... fmsf is mostly right, but there are some micro-optimizations you could do for page-load time and for the time it takes to check the condition. If you leave oldArray.length in the loop, it will look up length on the oldArray object for every iteration, which can add some time in large loops; declaring all your variables at once can also save some time if the containing method is called many times.
When someone downloads your script, it also helps to give your variables the shortest names possible, so the least data is transferred from server to client.
var nA = [],
    i = 0,
    t = oA.length,
    s;
for (; i < t; i++) {
    s = oA[i];
    if (s[p] == v) {
        nA.push(s); // s already holds oA[i]
    }
}
oA = nA;
If you want to get really serious, you would use a code minifier for the renaming of variables and white space removal.
2. Yes, JavaScript is pretty good about these things; the main thing to look out for is IE closure leaks, which would not be caused by your code here. I have still found closure leaks in rare, odd cases in IE9, but these are usually caused by references to the DOM, which is irrelevant in this case.
3. No, the only things that change with this code are references to objects, not how they are stored in memory.