I want to save an array into an object at runtime, using a loop.
For example, I take an input array inp = [2, 7, 20, 15, 19] and I want to save it as obj = {0: 2, 1: 7, 2: 20, 3: 15, 4: 19}, at runtime, with something like:
for (i = 0; i < inp.length; i++)
{ save each element of the array into the respective object property }
The problem is that I have to save arrays of different lengths, these array come from taking an input from user.
I am also sorting the object afterwards and returning the indices in another array in my code. I am stuck only at how to save an array in an object during runtime. I searched a lot for a clue to get started but, I could not find anything.
Auto-assign (credit: am not i am):
var obj = inp.slice();
Manual Assignment:
var obj = {};
for(var i=0, n=inp.length; i<n; i++)
obj[i]=inp[i];
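For instance, with the sample input from the question (a quick sketch you could paste into a console; the expected output is shown as a comment):
var inp = [2, 7, 20, 15, 19];
var obj = {};
for (var i = 0, n = inp.length; i < n; i++)
    obj[i] = inp[i];
console.log(obj); // { '0': 2, '1': 7, '2': 20, '3': 15, '4': 19 }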
Though an Array is technically a subclass of Object in JavaScript, the only thing that really happens in going from an Array to a plain Object is that you lose the native methods (indexOf, concat, reverse, etc.) that are created during the array's construction.
Array is already an object
If you do some experiments, you will find:
typeof([]) // <-- returns "object"
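If you do want a plain object rather than an array copy, note that newer runtimes also offer built-in one-liners for this conversion. A minimal sketch, assuming an ES2015+ environment (ES2018 for object spread):
var obj1 = Object.assign({}, inp); // { 0: 2, 1: 7, 2: 20, 3: 15, 4: 19 }
var obj2 = { ...inp };             // same result, via object spread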
Related
Ever since its introduction in ECMA-262, 3rd Edition, the Array.prototype.push method's return value is a Number:
15.4.4.7 Array.prototype.push ( [ item1 [ , item2 [ , … ] ] ] )
The arguments are appended to the end of the array, in the order in which they appear. The new length of the array is returned as the result of the call.
What were the design decisions behind returning the array's new length, as opposed to returning something potentially more useful, like:
A reference to the newly appended item/s
The mutated array itself
Why was it done like this, and is there a historical record of how these decisions came to be made?
I understand the expectation for array.push() to return the mutated array instead of its new length, and the desire to use that syntax for chaining.
However, there is a built in way to do this: array.concat().
Note that concat expects to be given an array, not an item. So, remember to wrap the item(s) you want to add in [], if they are not already in an array.
newArray = oldArray.concat([newItem]);
Array chaining can be accomplished by using .concat(), as it returns an array,
but not by .push(), as it returns an integer (the new length of the array).
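A quick sketch of the difference (nothing here beyond the standard Array methods):
var chained = [1].concat(2).concat([3, 4]); // [1, 2, 3, 4] -- each .concat() returns an array, so calls chain
// [1].push(2).push(3) would throw a TypeError: [1].push(2) evaluates to 2, and a number has no .push method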
Here is a common pattern used in React for changing the state variable, based on its prior value:
// the property value we are changing
selectedBook.shelf = newShelf;
this.setState((prevState) => ({
  books: prevState.books
    .filter((book) => book.id !== selectedBook.id)
    .concat(selectedBook)
}));
The state object has a books property that holds an array of book objects.
A book is an object with id and shelf properties (among others).
setState() takes an object that holds the new value to be assigned to state.
selectedBook is already in the books array, but its shelf property needs to be changed.
We can only give setState a top-level object, however.
We cannot tell it to go find the book, look for a property on that book, and give it this new value.
So we take the books array as it is,
filter to remove the old copy of selectedBook,
then concat to add selectedBook back in, after updating its shelf property.
Great use case for wanting to chain push.
However, the correct way to do this is actually with concat.
Summary:
array.push() returns a number (the mutated array's new length).
array.concat([item]) returns a new array.
Technically, it returns a new array with the added element(s) appended to the end, and leaves the initial arrays unchanged.
Returning a new array instance, as opposed to recycling the existing one, is an important distinction; it is what makes concat so useful for state objects in React applications, where new data has to trigger a re-render.
I posted this in TC39's communication hub, and was able to learn a bit more about the history behind this:
push, pop, shift, unshift were originally added to JS1.2 (Netscape 4) in 1997.
They were modeled after the similarly named functions in Perl.
JS1.2 push followed the Perl 4 convention of returning the last item pushed.
JS1.3 (Netscape 4.06, summer 1998) changed push to follow the Perl 5 convention of returning the new length of the array.
See the original jsarray.c source:
/*
* If JS1.2, follow Perl4 by returning the last thing pushed. Otherwise,
* return the new array length.
*/
I cannot explain why they chose to return the new length, but in response to your suggestions:
Returning the newly appended item:
Given that JavaScript uses C-style assignment, where an assignment expression evaluates to the assigned value (as opposed to Basic-style assignment, which does not), you can still have that behavior:
var addedItem;
myArray.push( addedItem = someExpression() );
(though I recognise this does mean you can't have it as part of an r-value in a declaration+assignment combination)
Returning the mutated array itself:
That would be in the style of "fluent" APIs, which gained popularity significantly after ECMAScript 3 was completed, and it would not be in keeping with the style of other library features in ECMAScript. Again, it isn't that much extra legwork to enable the scenarios you're after by creating your own push method:
Array.prototype.push2 = function(x) {
    this.push(x);
    return this;
};
myArray.push2( foo ).push2( bar ).push2( baz );
or:
Array.prototype.push3 = function(x) {
    this.push(x);
    return x;
};
var foo = myArray.push3( computeFoo() );
I was curious since you asked. I made a sample array and inspected it in Chrome.
var arr = [];
arr.push(1);
arr.push(2);
arr.push(3);
console.log(arr);
Since I already have reference to the array as well as every object I push into it, there's only one other property that could be useful... length. By returning this one additional value of the Array data structure, I now have access to all the relevant information. It seems like the best design choice. That, or return nothing at all if you want to argue for the sake of saving 1 single machine instruction.
Why was it done like this, and is there a historical record of how these decisions came to be made?
No clue - I'm not certain a record of rationale along these lines exists. It would be up to the implementers, and is likely commented in any given code base implementing the ECMAScript standard.
I don't know "Why was it done like this, and is there a historical record of how these decisions came to be made?".
But I also think it's neither clear nor intuitive that push() returns the length of the array, as below:
let arr = ["a", "b"];
let test = arr.push("c");
console.log(test); // 3
So, if you want a clearer and more intuitive method instead of push(), you can use concat(), which returns the array with its values, as below:
let arr = ["a", "b"];
let test = arr.concat("c");
console.log(test); // ["a", "b", "c"]
The question is partially answered in the document you mention (ECMA-262, 3rd edition): there are methods that mutate the array and methods that don't. Of the mutating methods for adding elements, push and unshift return the new length of the array (splice, which can also insert elements depending on the position you want, returns the removed elements instead).
If you want to get back the resulting array, you can use concat. concat takes any number of arrays (or values) to append to the original array and puts all the elements into a new array, i.e.:
const array1 = ['a', 'b', 'c'];
const array2 = ['d', 'e', 'f'];
const array3 = ['g', 'h'];
const array4 = array1.concat(array2, array3);
The new array will contain all the elements, and the other three arrays won't be changed. There are many other ways to add elements to an array, both mutating and non-mutating. So there is your answer: push returns the length because it changes the array in place, so it doesn't need to return the full array.
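For reference, running the snippet above in a console would give something like this (a sketch; array1 through array4 are the constants defined just above):
console.log(array4); // ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
console.log(array1); // ['a', 'b', 'c'] -- the original arrays are untouched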
I have a variable called uids
var uids = [];
Then I write some value to one of its properties:
uids[16778923] = "3fd6335d-b0e4-4d77-b304-d30c651ed509"
But before that I check:
if (!uids[user.id]) {
uids[user.id] = generateKey(user);
}
This behaves OK. If I try to get the value of a property,
uids[currentUser.id]
it will give me the value of that property. If I try to call some method like
Object.keys(uids);
it will give me what I expected. And here the mystery comes...
uids;
RAM, rest in peace. See Node eating RAM.
I am very confused now. What's wrong?
This is because you are creating a huge array and Node will reserve memory for it - who knows what comes next. I'd say that's a scenario where you would use a Map (or a plain object, but a Map feels better here).
var uids = new Map();
var key = 456464564564654;

if (!uids.has(key)) {
    uids.set(key, generateKey(user));
}
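A possible follow-up, sketching how the rest of the question's code might look with a Map (user, currentUser and generateKey are from the question; everything else is an assumption):
if (!uids.has(user.id)) {
    uids.set(user.id, generateKey(user));
}
uids.get(currentUser.id);  // read a single value
Array.from(uids.keys());   // roughly what Object.keys(uids) gave you before
uids.size;                 // number of entries, with no huge sparse length involved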
You are creating an empty array (length is zero), then you assign some value to an arbitrary index. This makes the array grow as big as the index and assigns the value to that index. Look at this example using the Node.js REPL:
> var a = []
undefined
> a[5] = "something"
'something'
> a
[ , , , , , 'something' ]
> a.length
6
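With the index from the question, the same thing happens on a much larger scale (a sketch; exactly how the REPL prints such a sparse array varies between Node versions):
> var a = []
undefined
> a[16778923] = "3fd6335d-b0e4-4d77-b304-d30c651ed509"
'3fd6335d-b0e4-4d77-b304-d30c651ed509'
> a.length
16778924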
Instead of creating an array, you could create a Map() or a plain JavaScript object. JavaScript objects behave like maps, but only strings (and symbols) can be used as keys. If you use a number as a key, JavaScript will convert it to a string automatically.
Personally, I would go with objects because they perform better for this. Instantiating an object takes longer than instantiating a Map (and it doesn't seem like you need to create several groups of "uids"), but once created, adding new keys and retrieving values from any key is faster with plain objects. At least that's how things go in my Node.js v6.7.0 on Ubuntu 14.04, but you could try it yourself. It would also require the least alteration to your code.
var uids = {}  // a plain, ordinary empty JavaScript object instead of an array

if (!uids[user.id]) {                  // getting a value from a key works the same
    uids[user.id] = generateKey(user)  // assignment works the same
}

////

uids[16778923] = "3fd6335d-b0e4-4d77-b304-d30c651ed509"  // the key will be "16778923"
uids[16778923]  // getting the value for key "16778923" can be done using 16778923 instead of "16778923"

////

uids[currentUser.id]  // still returns values like this
Object.keys(uids)     // still returns an array of keys like this, but they are all strings
It seems like JavaScript somehow tries to optimize the code, so if we want to fill a multidimensional array (largeArr) with the changing values of a one-dimensional array (smallArr) within a loop, and use this code:
largeArr = []
smallArr = []
for (i = 0; i < 2; i++)
{
    smallArr[0] = i
    smallArr[1] = 2 * i
    largeArr[i] = smallArr
}
we get an unexpected result: largeArr = [[1,2],[1,2]] (it should be [[0,0],[1,2]]). So it looks like JavaScript calculates the smallArr values first, and only then fills largeArr.
To get the right result, we must declare smallArr inside the loop:
largeArr = []
for (i = 0; i < 2; i++)
{
    smallArr = []
    smallArr[0] = i
    smallArr[1] = 2 * i
    largeArr[i] = smallArr
}
and then it works as expected (largeArr=[[0,0],[1,2]]).
Why does it behave this way?
Because pointers, that's why. JavaScript takes after Java, and C, in this (and only this) way. When you do the assignment
largeArr[i] = smallArr
you're assigning a pointer. A breakdown of pointers:
In C (and, to a lesser extent, Java and JavaScript) you don't have a basic array type - instead, an array points to a space in memory, and you can fill that space with whatever information you want (or rather, whatever you've declared). How does a pointer exist in memory? As a four (or eight, or two, depending on your system) byte memory address, which tells the compiler/parser where to get the appropriate information. So, when you do that assignment, you're telling it: "Hey, set largeArr[i] equal to the memory address of smallArr." Thus, when you make changes to smallArr, they're reflected every time you dereference the array - because it's actually the same array. But when you do:
smallArr = []
inside the loop, you're saying, "make a new array, and set smallArr equal to the address of that array." That way, the arrays stay separate.
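If you would rather keep reusing a single smallArr variable, another option (my suggestion, not part of the answer above) is to store a copy of its current values instead of the reference:
largeArr[i] = smallArr.slice(); // .slice() with no arguments returns a shallow copy of the array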
With the line largeArr[i] = smallArr, you set the i-th property to a reference to smallArr. You do not copy it. In the end, all properties of largeArr point to the same single smallArr, whose values you have overwritten each time.
By initializing smallArr on each loop iteration, you create new objects, so each property of largeArr points to a different array. By the way, it is an assignment, not a declaration - you would (and should) declare the variables as local (to the function) with a var statement.
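Put together, a properly declared version of the working loop might look like this (the same fix as the question's second snippet, just with var scoping added):
var largeArr = [];
for (var i = 0; i < 2; i++) {
    var smallArr = []; // a brand new array object on every iteration
    smallArr[0] = i;
    smallArr[1] = 2 * i;
    largeArr[i] = smallArr;
}
// largeArr is [[0, 0], [1, 2]]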
In the last iteration of the for loop,
smallArr[0]=i
smallArr[1]=2*i
(where i = 1), the above code becomes:
smallArr[0]=1
smallArr[1]=2
And your big array is nothing more than this:
[smallArr, smallArr]
which leads to the unexpected result:
[[1, 2], [1, 2]]
In JavaScript, objects are copied by reference (a kind of C-style pointer).
In order to get the desired result, you must copy the array by value, or assign a different array in each iteration:
var largeArr = [];
for (i = 0; i < 2; i++)
    largeArr[i] = [i, 2 * i];
When you assign an array reference as you have above, you're not assigning the values of that array, but just a reference to the array.
Think of it as a pointer. largeArr[0] and largeArr[1] are pointing to smallArr, and the loop iterations are simply changing the contents of smallArr. The thing to which largeArr's elements point is not changing.
I'm using a specific game-making framework, but I think the question applies to JavaScript in general.
I was trying to make a narration script so the player can see "The orc hits you." at the bottom of the screen. I wanted to show the last 4 messages at one time, and possibly allow the player to look back through a log of 30-50 messages if they want. To do this I set up an object and an array to push the objects into.
So I set up some variables like this initially...
servermessage: {"color1":"yellow", "color2":"white", "message1":"", "message2":""},
servermessagelist: new Array(),
and when I use this command (below) multiple times with different data, triggered by an event, by manipulating servermessage.color1 ... .message1 etc...
servermessagelist.push(servermessage)
it overwrites the entire array with copies of that data... any idea why, or what I can do about it?
So if I push color1 "RED" and message1 "Rover", the data is correct. Then if I push
color1 "yellow" and message1 "Bus", the data is two copies of .color1: "yellow", .message1: "Bus".
When you push servermessage into servermessagelist you're really (more or less) pushing a reference to that object. So any changes made to servermessage are reflected everywhere you have a reference to it. It sounds like what you want to do is push a clone of the object into the list.
Declare a function as follows:
function cloneMessage(servermessage) {
    var clone = {};
    for (var key in servermessage) {
        if (servermessage.hasOwnProperty(key))  // ensure we're not adding inherited props
            clone[key] = servermessage[key];
    }
    return clone;
}
Then every time you want to push a message into the list, do:
servermessagelist.push( cloneMessage(servermessage) );
When you add the object to the array, it's only a reference to the object that is added. The object is not copied by adding it to the array. So, when you later change the object and add it to the array again, you just have an array with several references to the same object.
Create a new object for each addition to the array:
servermessage = {"color1":"yellow", "color2":"white", "message1":"", "message2":""};
servermessagelist.push(servermessage);
servermessage = {"color1":"green", "color2":"red", "message1":"", "message2":"nice work"};
servermessagelist.push(servermessage);
There are two ways to copy the object before pushing it into the array.
1. Create a new object with Object.assign() and then push it (note this is a shallow copy; nested objects are still shared):
servermessagelist = [];
servermessagelist.push(Object.assign({}, servermessage));
2. Create a new copy of the object with JSON.stringify() and push it using JSON.parse() (a deep copy):
servermessagelist = [];
servermessagelist.push(JSON.parse(JSON.stringify(servermessage)));
This method is useful for nested objects.
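To see why the distinction matters, here is a small sketch contrasting the two approaches (the nested details object is hypothetical, not from the question):
var servermessage = { color1: "yellow", details: { from: "orc" } };
var shallow = Object.assign({}, servermessage);
var deep = JSON.parse(JSON.stringify(servermessage));
servermessage.details.from = "goblin";
console.log(shallow.details.from); // "goblin" -- the nested object is still shared
console.log(deep.details.from);    // "orc"    -- the nested object was copied as well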
servermessagelist: new Array() empties the array every time it's executed. Only execute that code once when you originally initialize the array.
I also had the same issue. I had a somewhat complex object that I was pushing into the array. What I did: I converted the object to a string using JSON.stringify() and pushed that into the array.
When reading it back from the array, I just convert the string back to an object using JSON.parse().
This works fine for me, though it is a bit of a roundabout solution.
Post here if you have alternative options.
I do not know why a JSON way of doing this has not been suggested yet.
You can first stringify the object and then parse it again to get a copy of the object.
let uniqueArr = [];
let referencesArr = [];
let obj = {a: 1, b:2};
uniqueArr.push(JSON.parse(JSON.stringify(obj)));
referencesArr.push(obj);
obj.a = 3;
obj.c = 5;
uniqueArr.push(JSON.parse(JSON.stringify(obj)));
referencesArr.push(obj);
//You can see the differences in the console logs
console.log(uniqueArr);
console.log(referencesArr);
This solution also works on objects containing nested keys.
Before pushing, stringify the object with
JSON.stringify(obj)
and when you read it back, parse it with
JSON.parse(obj);
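Put together with the question's variables, the pattern might look like this (a sketch; servermessage and servermessagelist are from the question):
servermessagelist.push(JSON.stringify(servermessage)); // store an independent string snapshot
var lastMessage = JSON.parse(servermessagelist[servermessagelist.length - 1]); // parse it back when reading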
As mentioned multiple times above, the easiest way of doing this is to make the object a string and convert it back to a JSON object:
this.<JSONObjectArray>.push(JSON.parse(JSON.stringify(<JSONObject>)));
Works like a charm.