I have an array declared as
var arr = new Array();
Then I have an array of objects returned by the server, and each object in this array always has three fields. I have to loop through it and add to the arr array conditionally.
Since arr is not pre-allocated, this hurts performance when the main array is large.
Is there any way I can pre-allocate the arr array after I get the main response array, so that I can avoid this performance issue?
Also, how do I get the size of an object?
Thanks.
Suppose you have 10 objects, and you are going to pass three values from each object to an array. You can initialize your array with length 30 (10 * 3) by passing the integer 30 to the Array constructor, like so:
var numObjects = 10;
var myArray = new Array(3*numObjects);
Please refer to my jsperf benchmark for proof of the performance gained. In short, pre-sizing your array is ~25% faster in Firefox 38, ~81% faster in Chrome 42, and ~16% faster in Internet Explorer 11. Exact numbers will vary with the environment of whoever runs the benchmark, but the trend remains consistent: the best performance comes from pre-sizing your arrays.
http://jsperf.com/array-growth-dynamic-vs-preset
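The shape of the comparison is roughly as follows (a minimal sketch of the two cases, not the jsperf code verbatim):

var numObjects = 10;

// Dynamically grown: the array starts empty and the engine may
// reallocate its backing storage as elements are pushed.
var dynamic = [];
for (var i = 0; i < 3 * numObjects; i++) dynamic.push(i);

// Pre-sized: the final length is declared up front.
var presized = new Array(3 * numObjects);
for (var j = 0; j < presized.length; j++) presized[j] = j;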
A more thorough discussion of this topic has occurred here on SO at
How to initialize an array's length in javascript?
Thank whatever deity you believe in (or not) that JavaScript does not have any direct access to memory allocation. That would have been truly horrible, considering the quality of much of the JS littering the interwebs.
JavaScript will by itself allocate memory to arrays on creation and reclaim it when they are garbage collected. Pre-filling an array will have no positive effect on memory usage or performance.
Edit: I was wrong. See #ThisClark's answer.
MDN has a pretty good article on how memory management and GC work in JavaScript.
You can filter your array using the filter function, as in the example below:
var result = [
  { age: 15 },
  { age: 21 },
  { age: 25 }
];

function isGreaterThan20(obj) {
  return obj.age > 20;
}

var arr = result.filter(isGreaterThan20);
// arr becomes [{ age: 21 }, { age: 25 }]
If you need to pre-allocate an array of a given size, use new Array(size).
In my project I have a history of changes.
The history is an array of objects, where each object holds 2 arrays.
So when I add a history snapshot it looks like this (but in reality I'm not adding empty arrays):
history.push({ // new history moment
  firstP: [],
  secondP: [],
})
The firstP array, for example, consists of objects like this:
{
  color: "red",
  move: 1,
  // ... and some other fields (max 14 fields if it matters)
}
firstP and secondP usually hold thousands of objects, so each history snapshot is pretty heavy in memory.
So I added a limit:
const limitOfSteps = 50;
Now after every push I check whether the length of the history is greater than 50. If it is, I do history.shift();
But what I see in memory is that even when shifting (removing the first element of the array), used memory keeps increasing. An element is added to the history whenever the user does something in the React app, so he can make as many changes as he wants.
I know there is a garbage collector, but how does it work with arrays?
Shifting the array should mean that the element is gone (and gone from memory too?).
But it's not gone immediately (if the user makes changes quickly, the whole app runs out of memory).
Would changing the removed element to undefined or null (just before shifting the array) free the memory more quickly?
The main goal is to use less memory... does anyone know how?
Edit:
The array may be shifted even a thousand times.
Edit 2: Maybe my question was all wrong? Maybe I should ask what happens when the whole array is removed?
It is all in the React app's state.
Slicing the history (making a copy) is probably much more memory-consuming, but it is inevitable because state is immutable.
My method to update looks something like this:
updateHistory = (newElement) => {
  const history = this.state.history.slice();
  history.push(newElement);
  if (history.length - 1 > 50) history.shift();
  this.setState({ history: history });
}
Does any of this make sense?
Garbage collection is done automatically in JavaScript; it's not something you have to manage yourself. This is also why you see memory increase when shifting items from your array, even though you limited its size to 50.
You can't force or prevent garbage collection, but when it runs, it takes care of removing values that no longer have any reference pointing to them.
An object is said to be "garbage", or collectible if there are zero references pointing to it.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management
In line with that approach, you can try removing references to the values yourself. Example:
var a = 1;
var array = [a, 2, 3, 4];
a = null;        // drop the variable's reference
array.shift();   // remove the first element so it can be collected

// or
var array = [1, 2, 3, 4];
array[0] = null; // overwrite the slot before removing it
array.shift();
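If the goal is simply to keep memory bounded, you can combine the copy, the append, and the trim in one pass (a sketch based on the updateHistory method from the question; limitOfSteps is the question's own constant):

const limitOfSteps = 50;

updateHistory = (newElement) => {
  // concat copies the array and appends the new snapshot;
  // slice(-limitOfSteps) keeps only the newest snapshots.
  // Everything dropped becomes unreachable and eligible for GC.
  const history = this.state.history
    .concat([newElement])
    .slice(-limitOfSteps);
  this.setState({ history: history });
};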
Is there a way to return the rest of an array in JavaScript, i.e. the portion of the array that consists of all elements but the first?
Note: I am not asking for a new array to be returned, e.g. with arr.slice(1), and I do not want to chop off the first element of the array, e.g. with arr.shift().
For example, given the array [3, 5, 8], the rest of the array is [5, 8], and if the rest is changed, e.g. by an assignment (a destructive operation), the array changes too. I just figured out that this works as a test proving the rest really is the rest of the array and not a new array consisting of the remaining elements.
Note: The following code example is to describe what I want, but not specifically what I want to do (i.e. not the operations I want to perform). What I want to do is in the every algorithm at the bottom.
var arr = [3, 5, 8];
var rest = rest(arr); // rest is [5, 8]
rest.push(13); // rest is [5, 8, 13] and hence the arr is [3, 5, 8, 13]
One example where I may need this is the following algorithm (and many others I am writing in that GitHub organization), in which I always use arr.slice(1):
function every(lst, f) {
  if (lst.length === 0) {
    return false;
  } else {
    if (f(lst[0]) === true) {
      return every(lst.slice(1), f);
    } else {
      return false;
    }
  }
}
I think having what I ask for instead of arr.slice(1) would keep the memory usage of such algorithms low and retain the recursive-functional style I want to employ.
No, this is generally not possible. There are no "views on" or "pointers to" normal arrays1.
You might use a Proxy to fake it, but I doubt this is a good idea.
1: It's trivial to do this on typed arrays (which are views on a backing buffer), but notice that you cannot push to them.
I possibly need this and I would want to have it for recursive-functional style algorithms where I currently use arr.slice(1) but would prefer to keep memory usage low
Actually, all of these implementations do have low memory usage - they don't allocate more memory than the input. Repeatedly calling slice(1) does lead to high pressure on the garbage collector, though.
If you were looking for better efficiency, I would recommend to
avoid recursion. JS engines still haven't implemented tail-call optimization, so recursion isn't cheap.
not to pass around (new copies of) arrays. Simply pass around an index at which to start, e.g. by using an inner recursive function that closes over the array parameter and accesses array[i] instead of array[0]. See #Pointy's updated answer for an example.
If you were looking for a more functional style, I would recommend to use folds. (Also known as reduce in JavaScript, although you might need to roll your own if you want laziness). Implement your algorithms in terms of fold, then it's easy to swap out the fold implementation for a more efficient (e.g. iterative) one.
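For instance, the every from the question can be written as a fold over the built-in reduce (a sketch; reduce is eager and walks the whole array, although && stops calling f after the first failure, and unlike the OP's version this returns true for an empty list):

function every(lst, f) {
  return lst.reduce(function(acc, x) {
    return acc && f(x);
  }, true);
}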
Last but not least, for higher efficiency while keeping a recursive style you can use iterators. Their interface might not look especially functional, but if you insist you could easily create an immutable wrapper that lazily produces a linked list.
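As a sketch of the iterator route (using nothing beyond the standard iteration protocol), the recursion can advance a shared iterator instead of copying the array on every call:

function every(lst, f) {
  function r(it) {
    var step = it.next();
    if (step.done) return true;
    return f(step.value) && r(it);
  }
  return r(lst[Symbol.iterator]());
}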
Please test this function:
function rest(arr) {
  var a = arr.slice(1);
  // Replace push so that values pushed onto the copy are also
  // appended to the original array. Invariant: arr is always exactly
  // one element longer than a, so after this[this.length] = ... grows
  // a by one, arr[this.length] is the next free slot of arr.
  a.push = function() {
    for (var i = 0, l = arguments.length; i < l; i++) {
      this[this.length] = arguments[i];
      arr[this.length] = arguments[i];
    }
    return this.length;
  };
  return a;
}
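A quick check of how it behaves (note that only push is intercepted; a plain index assignment on the returned array will not propagate back):

var arr = [3, 5, 8];
var r = rest(arr);   // r is [5, 8]
r.push(13);          // r is now [5, 8, 13]
console.log(arr);    // [3, 5, 8, 13], the original grew too
r[0] = 99;           // plain index write, not intercepted
console.log(arr[1]); // still 5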
Based on the code posted in the update to the question, it's clear why you might want to be able to "alias" a portion of an array. Here is an alternative that is more typical of how I would solve the (correctly) perceived efficiency problem with your implementation:
function every(lst, f) {
  function r(index) {
    if (index >= lst.length)
      return true; // different from OP, but I think correct
    return f(lst[index]) && r(index + 1);
  }
  return r(0);
}
That is still a recursive solution to the problem, but no array copy is made; the array is not changed at all. The general pattern is common even in more characteristically functional programming languages (Erlang comes to mind personally): the "public" API for some recursive code is augmented by an "internal" or "private" API that provides some extra tools for keeping track of the progress of the recursion.
original answer
You're looking for Array.prototype.shift.
var arr = [1, 2, 3];
var first = arr.shift();
console.log(first); // 1
console.log(arr); // [2, 3]
This is a linear time operation: the execution cost is relative to the length of the original array. For most small arrays that does not really matter much, but if you're doing lots of such work on large arrays you may want to explore a better data structure.
Note that with ordinary arrays it is not possible to create a new "shadow" array that overlaps another array. You can do something like that with typed arrays, but for general purpose use in most code typed arrays are somewhat awkward.
The first limitation of typed arrays is that they are, of course, typed, which means that the array "view" onto the backing storage buffer gives you values of only one consistent type. The second limitation is that the only available types are numeric types: integers and floating-point numbers of various "physical" (storage) sizes. The third limitation is that the size of a typed array is fixed; you can't extend the array without creating a new backing buffer and copying.
Such limitations would be quite familiar to a FORTRAN programmer of course.
So to create an array for holding 5 32-bit integers, you'd write
var ints = new Int32Array(5);
You can put values into the array just like you put values into an ordinary array, so long as you get the type right (well close enough):
for (let i = 0; i < 5; i++)
  ints[i] = i;
console.log(ints); // [0, 1, 2, 3, 4]
Now: to do what the OP asked, you'd grab the buffer from the array we just created, and then make a new typed array on top of the same buffer at an offset from the start. The offsets when doing this are always in bytes, regardless of the type used to create the original array. That's super useful for things like looking at the individual parts of a floating point value, and other "bit-banging" sorts of jobs, though of course that doesn't come up much in normal JavaScript coding. Anyway, to get something like the rest array from the original question:
var rest = new Int32Array(ints.buffer, 4);
In that statement, the "4" means that the new array will be a view into the buffer starting 4 bytes from the beginning; 32-bit integers being 4 bytes long, that means that the new view will skip the first element of the original array.
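Putting it together, the aliasing works in both directions:

var ints = new Int32Array(5);
for (let i = 0; i < 5; i++)
  ints[i] = i;

var rest = new Int32Array(ints.buffer, 4); // skip the first 4 bytes (one Int32)
rest[0] = 99;         // write through the view...
console.log(ints[1]); // 99: the original sees the change
ints[4] = -1;         // write through the original...
console.log(rest[3]); // -1: the view sees that change too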
Since JavaScript can't do this, the only real solution to your problem is WebAssembly. Otherwise use Proxy.
Apologies if this question has been asked before, but I'm finding it hard to word it in a way that would match previous questions.
Q: Is it more efficient to have something like:
mass[128] = {0.0}
speed[128] = {0.0}
age[128] = {0}
Or:
properties[128] = {mass=0.0, speed=0.0, age=0}
And why? Is there a simple rule to always bear in mind (are a few larger arrays better than many small ones, etc.)?
I'm writing in JS using Chrome. Reading and writing to elements very often.
Thanks very much!
In general, the answer here is: Do what makes the most sense to let you write the simplest, clearest code; and worry about any performance or memory issue if you actually run into one.
Using an array of objects with named properties will likely be more efficient in terms of access time on a modern JavaScript engine, and will likely be less efficient in terms of memory use. In both cases, the difference will be incredibly minor and probably imperceptible.
If your values are numbers and your arrays can be of fixed size, you might use typed arrays, since they really are arrays (whereas normal arrays aren't1 unless the JavaScript engine can do it as an optimization). But there are downsides to typed arrays (their being fixed-size, for instance), so again, if and when it becomes necessary...
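For illustration, the "many small arrays" layout from the question would look like this with typed arrays (a sketch; fixed size, numeric values only):

const COUNT = 128;
const mass  = new Float64Array(COUNT); // every element starts at 0.0
const speed = new Float64Array(COUNT);
const age   = new Int32Array(COUNT);   // every element starts at 0

// Reading and writing the "properties" of element 3:
mass[3] = 1.5;
speed[3] = 9.8;
age[3] = 7;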
Example of an array of objects with named properties:
var properties = [
  {mass: 0, speed: 0, age: 0},
  {mass: 1, speed: 1, age: 1},
  // ...
];
If you're using ES2015 (you said you're using Chrome, so you can), you might make that a const:
const properties = [
  {mass: 0, speed: 0, age: 0},
  {mass: 1, speed: 1, age: 1},
  // ...
];
That only makes properties a constant, not the contents of the array it points to, so you can still add, remove, or amend entries as desired.
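For instance:

const properties = [
  {mass: 0, speed: 0, age: 0}
];
properties.push({mass: 1, speed: 1, age: 1}); // fine: mutating the contents
properties[0].mass = 2;                       // fine: amending an entry
// properties = [];  // TypeError: assignment to a constant variable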
1 That's a post on my anemic little blog.
I'm looking for both practical, and also theoretical insight about my application.
Take 50,000 js objects, with 5 properties each, structured as
0: Object
   CostCenter: "1174"
   Country: "USA"
   Job: "110-Article Search"
   Team: "Financial"
   Username: "anderson"
And take 5 respective arrays (one for each object property) such as the 'Country' array
4: Array[4]
   0: "Asia Pacific"
   1: "Australia"
   2: "Brazil"
   3: "Canada"
What is the most efficient way to filter the 50,000 objects, eliminating all objects which have at least one property which has 0 matches in its respective array.
The max sizes for the arrays are:
CostCenter, 77
Country, 27
Job, 27
Team, 10
Username, 99
My first idea is to loop through the 50,000 objects, and if the 'CostCenter' property === any item in the CostCenter array, push the object into a temporary array of objects.
That might leave me with only 20,000 objects in the temporary array. Then repeat this process for each property and its respective filtering array, building a new temporary array each time.
Finally this process would leave me with the last array, which would be the resultant data after going through 5 filters.
It takes about 20 seconds for me to download the 18 MB JSON file (but I'm okay with that)...
...which is far longer than the time it takes my Chrome browser on 16 GB of RAM to process the JSON into 50,000 JS objects AND to loop over those objects, dynamically building the filtering arrays with all the unique values contained in the JSON.
Is this efficient? It seems very very fast for the amount of data being processed, but I also get the feeling some user environments (like my boss' iPad) may run out of in-browser memory.
What better ways are there?
Should I do this in Node.JS? I am a javascript programmer, so that seems like it may not take too long to learn. Plus Node is super duper hip these days... maybe I should get on with it.
Will some browsers fail to download an 18 MB JSON file? Where can I find info about limits?
Basically you want
var arrays = {
  "Country": […],
  …
};
var result = my50000items.filter(function(item) {
  for (var prop in arrays)
    if (arrays[prop].indexOf(item[prop]) == -1)
      return false;
  return true;
});
You can optimise this by replacing the indexOf call with a faster property lookup. To do so, make:
var lookups = {};
for (var prop in arrays) {
  var obj = lookups[prop] = {};
  for (var i = 0; i < arrays[prop].length; i++)
    obj[arrays[prop][i]] = true;
}
Then you can use
var result = my50000items.filter(function(item) {
  for (var prop in lookups)
    if (!lookups[prop][item[prop]])
      return false;
  return true;
});
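If ES2015 is available, a Set gives the same constant-time membership test without abusing object keys (a sketch reusing the arrays variable from above):

var lookups = {};
for (var prop in arrays)
  lookups[prop] = new Set(arrays[prop]);

var result = my50000items.filter(function(item) {
  for (var prop in lookups)
    if (!lookups[prop].has(item[prop]))
      return false;
  return true;
});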
var arr = [];
arr[50] = 'foo';
arr[10000] = 'boo';
Depends on the implementation. This:
arr = [];
arr[1000] = 1;
arr[1000000000] = 2;
arr.sort();
will give [1, 2] on Chrome (in no more time than sorting a dense array with two elements), but an allocation size overflow error on Firefox.
No harm at all. Just make sure you test whether the value is defined before using it, though.
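For example:

var arr = [];
arr[50] = 'foo';
arr[10000] = 'boo';

if (arr[200] !== undefined) {
  // never runs: index 200 was never assigned
}
console.log(200 in arr); // false: a hole, not a stored undefined
console.log(50 in arr);  // true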
Consider working with a "key/value" map (a plain object) for such things:
var arr = {};
arr[50] = 'foo';
arr[10000] = 'boo';
With this you lose the ability to get the number of elements (arr.length will be undefined) and you have to iterate it with a different kind of loop, but if you don't need either of those, IMO that's the better way.
No. Most JavaScript implementations will allocate two slots (i.e. the array will allocate the same amount of memory as if it had just two elements with the indexes 0 and 1).