Redux: cloning state does not work - javascript

I have a very simple question, but…
The code of a reducer (in a redux/react-native app):
...
case SAMPLES_DELETE_REQUEST_SUCCESS: {
  var newState = Object.assign({}, state);
  const indexToDelete = newState.samples.findIndex(sample => {
    return sample.id == action.sample.id
  })
  newState.samples.splice(indexToDelete, 1)
  debugger;
  return newState
}
...
Ok, I copy the state and store it into newState. But when I do newState.samples.splice(indexToDelete, 1), newState is correctly modified, but so is state! Why?? I must be tired…

The splice function modifies the original array, and Object.assign does not do deep cloning (it only copies the top level). Therefore you are still modifying the original state!
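To see the problem in isolation, here is a minimal sketch (my own example, not from the question): the shallow copy and the original share the very same nested samples array, so splicing one splices the other.
const state = { samples: [{ id: 1 }, { id: 2 }] };
const newState = Object.assign({}, state); // shallow copy: only top-level keys are copied

console.log(newState.samples === state.samples); // true -- both point at the same array

newState.samples.splice(0, 1); // mutates the shared array
console.log(state.samples.length); // 1 -- the original state changed too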
You will have to manually copy the nested object (or array) you want to clone:
// Deep clone via a JSON round-trip
const obj1 = { a: 0, b: { c: 0 } };
let obj2 = JSON.parse(JSON.stringify(obj1));

As someone mentioned before, you could use JSON.parse(JSON.stringify(obj)) to create a new copy of the entire object (nested objects as well). If you don't want to do that, you could check out libraries like Immutable.js.
Also, if you want to use spread notation, a better way to do it would be:
return {
  ...state,
  samples: state.samples.filter(sample => sample.id !== action.sample.id)
}
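If you would rather keep the splice-based logic from the question, a sketch that copies the nested array before mutating it (so only the copy is touched) could look like this:
case SAMPLES_DELETE_REQUEST_SUCCESS: {
  const samples = state.samples.slice(); // copy the nested array first
  const indexToDelete = samples.findIndex(sample => sample.id === action.sample.id);
  if (indexToDelete !== -1) {
    samples.splice(indexToDelete, 1); // safe: this mutates only the copy
  }
  return { ...state, samples };
}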

Related

Best way to create a deep cloned object literal from an object with accessors?

I have an object that is a combination of literals and accessors:
const obj = {
  stuff: [],
  get processedStuff() { return this.stuff.map(el => `${el}!`) }
}
obj.stuff = ['woot']
console.log(obj.processedStuff) // ['woot!']
I want to create a deepClone of obj so that the clone behaves entirely like a literal. So in the clone, changes to stuff will no longer result in changes to processedStuff:
const obj2 = cl0n3Me(obj)
obj2.stuff = ['nope']
console.log(obj.processedStuff) // ['woot!']
Using a library function like cloneDeep in lodash doesn't do this -- the accessors come along for the ride and are a part of the new obj.
I can do this with the following...
const obj2 = JSON.parse(JSON.stringify(obj))
...however I am not sure if that is the most efficient / recommended way to do this.
What say ye javascript masters?
You can do a deep clone using spread syntax. As far as I know there are no downsides, but I could be wrong; in my testing it works for your example. It should be noted that this is not a complete deep-cloning solution, as it won't handle Date or Map cloning without additional code for those types.
function clone(o) {
  if (Array.isArray(o)) {
    const o2 = [...o]; // shallow copy of the array
    for (let i = 0; i < o2.length; i++) {
      if (typeof o2[i] === 'object' && o2[i] !== null) {
        o2[i] = clone(o2[i]); // recurse into nested objects/arrays
      }
    }
    return o2;
  } else {
    const o2 = { ...o }; // spread invokes getters and stores plain values, dropping the accessors
    for (let k in o2) {
      if (typeof o2[k] === 'object' && o2[k] !== null) {
        o2[k] = clone(o2[k]);
      }
    }
    return o2;
  }
}
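Applied to the obj from the question, a quick check (using the clone function above) shows the getter has been flattened into a plain snapshot value:
const obj = {
  stuff: ['woot'],
  get processedStuff() { return this.stuff.map(el => `${el}!`) }
}

const obj2 = clone(obj);
obj2.stuff = ['nope'];
console.log(obj2.processedStuff); // ['woot!'] -- a plain snapshot, no longer computed
console.log(obj.processedStuff);  // ['woot!'] -- the original is untouched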

One-liner to remove an element from an object

This is related to storing manipulated objects in react, but is also a general javascript question.
In react, you usually take a current state object, and add to it like so:
setFormErrors({...formErrors, agreeToRefund: 'Some message'})
where formErrors and setFormErrors is what comes out of useState() hook.
To do a delete, you have to do the more verbose:
const newFormErrors = {...formErrors};
delete newFormErrors.agreeToRefund;
setFormErrors(newFormErrors)
Which is a little tedious. Is there a more abbreviated one-liner to do this?
In 2 statements, yes, but not 1.
const { agreeToRefund, ...newFormErrors } = formErrors;
setFormErrors(newFormErrors)
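If the property name is only known at runtime, the same trick works with a computed key (a sketch; the underscore-prefixed name is just a convention for a deliberately unused binding):
const key = 'agreeToRefund';
const { [key]: _removed, ...newFormErrors } = formErrors;
setFormErrors(newFormErrors);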
But if it were me, I'd change the consumers of formErrors to understand a property value of null or '' instead of removing it from state entirely.
setFormErrors({...formErrors, agreeToRefund: null})
Or, if you don't mind, you can play with map and the comma operator: box the object in an array so you can map over it, delete what you want, then pop it back out.
setFormErrors([{ ...newFormErrors }].map(obj => (delete obj.agreeToRefund, obj)).pop())
let formErrors = { firstname: 'error', lastname: 'error again' }
console.log('formErrors:', formErrors)

let newFormErrors = { ...formErrors, agreeToRefund: 'Some message' }
console.log('newFormErrors:', newFormErrors)

let noAgreeToRefund = [{ ...newFormErrors }].map(obj => (delete obj.agreeToRefund, obj)).pop()
console.log('noAgreeToRefund:', noAgreeToRefund)
One option is to create a utility function (probably exported from a util library):
function newObjWithoutProperty(obj, prop) {
  const newObj = { ...obj };
  delete newObj[prop];
  return newObj;
}
then it becomes:
setFormErrors(newObjWithoutProperty(formErrors, 'agreeToRefund'));
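A hypothetical extension (my addition, not part of the original answer) that removes several properties at once via rest parameters:
function newObjWithoutProperties(obj, ...props) {
  const newObj = { ...obj };
  for (const prop of props) {
    delete newObj[prop]; // drop each requested key from the copy
  }
  return newObj;
}

// usage (the second key is made up for illustration):
setFormErrors(newObjWithoutProperties(formErrors, 'agreeToRefund', 'someOtherKey'));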

Is mutating accumulator in reduce function considered bad practice?

I'm new to functional programming and I'm trying to rewrite some code to make it more functional-ish to grasp the concepts. Just now I've discovered the Array.reduce() function and used it to create an object of arrays of combinations (I used a for loop before that). However, I'm not sure about something. Look at this code:
const sortedCombinations = combinations.reduce(
  (accum, comb) => {
    if (accum[comb.strength]) {
      accum[comb.strength].push(comb);
    } else {
      accum[comb.strength] = [comb];
    }
    return accum;
  },
  {}
);
Obviously, this function mutates its argument accum, so it is not considered pure. On the other hand, if I understand reduce correctly, it discards the accumulator from the previous iteration and doesn't use it after calling the callback function. Still, it's not a pure function. I can rewrite it like this:
const sortedCombinations = combinations.reduce(
  (accum, comb) => {
    const tempAccum = Object.assign({}, accum);
    if (tempAccum[comb.strength]) {
      tempAccum[comb.strength].push(comb);
    } else {
      tempAccum[comb.strength] = [comb];
    }
    return tempAccum;
  },
  {}
);
Now, in my understanding, this function is considered pure. However, it creates a new object on every iteration, which consumes some time and, obviously, memory.
So the question is: which variant is better and why? Is purity really so important that I should sacrifice performance and memory to achieve it? Or maybe I'm missing something, and there is some better option?
TL;DR: It isn't, if you own the accumulator.
It's quite common in JavaScript to use the spread operator to create nice looking one-liner reducing functions. Developers often claim that it also makes their functions pure in the process.
const foo = xs => xs.reduce((acc, x) => ({...acc, [x.a]: x}), {});
//------------------------------------------------------------^
// (initial acc value)
But let's think about it for a second... What could possibly go wrong if you mutated acc? e.g.,
const foo = xs => xs.reduce((acc, x) => {
  acc[x.a] = x;
  return acc;
}, {});
Absolutely nothing.
The initial value of acc is an empty object literal created on the fly. Using the spread operator is only a "cosmetic" choice at this point. Both functions are pure.
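To make that concrete, a small sketch (my example): mutating an accumulator you own is unobservable from the outside, since the input stays untouched and repeated calls give the same result:
const xs = [{ a: 'k1' }, { a: 'k2' }];

const foo = xs => xs.reduce((acc, x) => {
  acc[x.a] = x; // mutates only the freshly created {} accumulator
  return acc;
}, {});

const result = foo(xs);
console.log(result.k1 === xs[0]); // true -- the values are shared, the object is new
console.log(Object.keys(xs[0]));  // ['a'] -- the input was not modified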
Immutability is a trait, not a process per se. Meaning that cloning data to achieve immutability is most likely both a naive and inefficient approach to it. Most people forget that the spread operator only does a shallow clone anyway!
I wrote this article a little while ago where I claim that mutation and functional programming don't have to be mutually exclusive and I also show that using the spread operator isn't a trivial choice to make.
Creating a new object on every iteration is common practice, and sometimes recommended, despite any potential performance issues.
(EDIT:) I guess that is because if you want to give only one piece of general advice, copying is less likely to cause problems than mutating. The performance starts to become a "real" issue if you have more than, let's say, about 1000 iterations. (For more details see my update below.)
You can make your function pure, e.g. in this way:
const sortedCombinations = combinations.reduce(
  (accum, comb) => {
    return {
      ...accum,
      [comb.strength]: [
        ...(accum[comb.strength] || []),
        comb
      ]
    };
  },
  {}
);
Purity might become more important if your state and reducer are defined somewhere else:
const myReducer = (accum, comb) => {
  return {
    ...accum,
    [comb.strength]: [
      ...(accum[comb.strength] || []),
      comb
    ]
  };
};

const initialState = {};
const sortedCombinations = combinations.reduce(myReducer, initialState);
const otherSortedCombinations = otherCombinations.reduce(myReducer, initialState);
const otherThing = otherList.reduce(otherReducer, initialState);
Update (2021-08-22):
preface to this update
As stated in the comments (and also mentioned in the question), copying on every iteration is of course less performant, and I admit that in many cases, technically, I can't see any disadvantage to mutating the accumulator (if you know what you are doing!). Actually, thinking about it again, inspired by the comments and other answers, I have changed my mind a bit and will consider mutating more often now, at least where I don't see any risk that e.g. somebody else might misunderstand my code later. But then again, the question was explicitly about purity... anyway, here are some more details:
purity
(Disclaimer: I must admit here that I know React, but I don't know much about "the world of functional programming" and its arguments about the advantages, e.g. in Haskell.)
Using this "pure" approach is a tradeoff. You loose performance, and you win easier understandable and less coupled code.
E.g. in React, with many nested Components, you can always rely on the consistent state of the current component.
You know it will not be changed anywhere outside, except if you have passed down some 'onChange' callback explicitly.
If you define an object, you know for sure it will always stay unchanged.
If you need a modified version, you would have an new variable assignment,
this way it is obvious that you are working with a new version of the data
from here down, and any code that might use the old object will not be affected.:
const myObject = { a1: 1, a2: 2, a3: 3 };      // <-- stays unchanged
// ... much other code ...
const myOtherObject = modifySomehow(myObject); // <-- new version of the data
Pros, Cons, and Caveats
I can't give general advice on which way (copy or mutate) is "the better one". Mutating is more performant, but it can cause lots of hard-to-debug problems if you aren't absolutely sure what's happening, at least in somewhat complex scenarios.
1. problem with a non-pure reducer
As already mentioned in my original answer, a non-pure function might unintentionally change some outside state:
var initialValue = { a1: 1, a2: 2, a3: 3, a4: 4 };
var newKeys = ['n1', 'n2', 'n3'];

var result = newKeys.reduce((acc, key) => {
  acc[key] = 'new ' + key;
  return acc;
}, initialValue);

console.log('result:', result);             // We are interested in the 'result',
console.log('initialValue:', initialValue); // but the initialValue has also changed.
Somebody might argue that you can copy the initial value beforehand:
var result = newKeys.reduce((acc, key) => {
  acc[key] = 'new ' + key;
  return acc;
}, { ...initialValue }); // <-- copy beforehand
But this might be even less efficient in cases where e.g. the object is very big and nested, the reducer is called often, and maybe there are multiple conditionally applied small modifications inside the reducer, each changing only a little (think of useReducer in React, or the Redux reducer).
2. shallow copies
Another answer stated correctly that even with the supposedly pure approach there might still be references to the original object. And this is indeed something to be aware of, but the problems arise only if you do not follow this 'immutable' approach consistently enough:
var initialValue = { a1: { value: '11' }, a2: { value: '22' } }; // <-- an object with nested 'non-primitive' values

var newObject = Object.keys(initialValue).reduce((acc, key) => {
  return {
    ...acc,
    ['newkey_' + key]: initialValue[key], // <-- copies a reference to the original nested object
  };
}, {}); // <-- starting with an empty new object, expected to be 'pure'

newObject.newkey_a1.value = 'new ref value'; // <-- changes the value behind the shared reference
console.log(initialValue.a1); // <-- initialValue has changed as well
This is not a problem if you take care that no references are copied (which is sometimes not trivial):
var initialValue = { a1: { value: '11' }, a2: { value: '22' } };

var newObject = Object.keys(initialValue).reduce((acc, key) => {
  return {
    ...acc,
    ['newkey_' + key]: { value: initialValue[key].value }, // <-- copies the value
  };
}, {});

newObject.newkey_a1.value = 'new ref value';
console.log(initialValue.a1); // <-- initialValue has not changed
3. performance
Performance is no problem with a few elements, but if the object has several thousand items, it becomes a significant issue indeed:
// create a large object
var myObject = {};
for (var i = 0; i < 10000; i++) { myObject['key' + i] = i; }

// Copying 10000 items takes seconds (the cost grows quadratically:
// a new object is created 10000 times, with 1, 2, 3, ..., 10000 properties):
console.time('copy');
var result = Object.keys(myObject).reduce((acc, key) => {
  return {
    ...acc,
    [key]: myObject[key] * 2
  };
}, {});
console.timeEnd('copy');

// Mutating 10000 items takes milliseconds (growing linearly):
console.time('mutate');
var result = Object.keys(myObject).reduce((acc, key) => {
  acc[key] = myObject[key] * 2;
  return acc;
}, {});
console.timeEnd('mutate');

What is the difference between { something: value } & Object.assign({}, { something: value })

I am learning React & Redux. I wanted to find the difference between these two pieces of code.
export default function xReducer(state = [], action) {
  switch (action.type) {
    // I am simply assigning my key value to the action property
    case 'SOMETHING': {
      return [...state, { myKey: action.x }]
    }
    // I am using Object.assign
    case 'SOMEMORE': {
      return [...state, Object.assign({}, { myKey: action.x })]
    }
  }
}
To the best of my knowledge, in this particular example, there is no difference. You use Object.assign to combine multiple objects where there may be overlapping keys, such that the values of those keys in objects to the right override the values in objects to the left. The canonical example is something like this:
let options = Object.assign({}, defaultOptions, passedOptions)
In this case, since the only objects being merged are an empty one and a single literal one, the result is the same as the literal one by itself.
The usage of Object.assign in your example provides no benefit.
Object.assign creates a shallow copy. Basically, using Object.assign will create a new instance of an object, keeping the original object intact. In React terminology, it's all about keeping your objects immutable.
var obj1 = { prop1: "A" };
var obj2 = Object.assign({}, obj1);
obj1 === obj2; // false, as they are 2 different instances
This can be achieved by doing:
var obj2 = {
  prop1: obj1.prop1
};
obj1 === obj2; // false, as they are 2 different instances
Note that Object.assign works well for primitive values (string, number, and boolean being the most significant) as they are immutable (they cannot be changed). Object.assign will not create new instances of reference types such as Array, Function, or Object; it copies the references to them instead.
Example:
var obj3 = {
  fn: function() {}
};
var obj4 = Object.assign({}, obj3);
obj3.fn === obj4.fn; // true, as Object.assign does not create a new instance of fn
When used properly, Object.assign is versatile and powerful.
See Mozilla's page: https://developer.mozilla.org/en-US/docs/Glossary/Primitive

Nodejs: how to clone an object

If I clone an array, I use cloneArr = arr.slice(). I want to know how to clone an object in Node.js.
For utilities and classes where there is no need to squeeze every drop of performance, I often cheat and just use JSON to perform a deep copy:
function clone(a) {
  return JSON.parse(JSON.stringify(a));
}
This isn't the only answer or the most elegant one; all of the other answers should be considered for production bottlenecks. However, it is a quick and dirty solution, quite effective, and useful in most situations where I would clone a simple hash of properties.
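One caveat worth spelling out (my addition): the JSON round-trip silently drops or transforms anything without a JSON representation, such as functions, undefined values, and Date objects:
const source = {
  when: new Date(),
  greet() { return 'hi'; },
  missing: undefined
};

const copy = JSON.parse(JSON.stringify(source));
console.log(typeof copy.when);  // 'string' -- the Date became an ISO string
console.log('greet' in copy);   // false -- functions are dropped
console.log('missing' in copy); // false -- undefined values are dropped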
Object.assign hasn't been mentioned in any of the above answers.
let cloned = Object.assign({}, source);
If you're on a new enough version (object spread landed in ES2018), you can use the spread syntax:
let cloned = { ... source };
Reference: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign
There are some Node modules out there if you don't want to "roll your own". This one looks good: https://www.npmjs.com/package/clone
Looks like it handles all kinds of stuff, including circular references. From the github page:
clone masters cloning objects, arrays, Date objects, and RegEx
objects. Everything is cloned recursively, so that you can clone dates
in arrays in objects, for example. [...] Circular references? Yep!
You can use lodash as well. It has clone and cloneDeep methods.
var _ = require('lodash');

var objects = [{ 'a': 1 }, { 'b': 2 }];

var shallow = _.clone(objects);
console.log(shallow[0] === objects[0]);
// => true

var deep = _.cloneDeep(objects);
console.log(deep[0] === objects[0]);
// => false
It's hard to do a generic but useful clone operation because what should be cloned recursively and what should be just copied depends on how the specific object is supposed to work.
Something that may be useful is
function clone(x)
{
    if (x === null || x === undefined)
        return x;
    if (typeof x.clone === "function")
        return x.clone();
    if (x.constructor == Array)
    {
        var r = [];
        for (var i = 0, n = x.length; i < n; i++)
            r.push(clone(x[i]));
        return r;
    }
    return x;
}
In this code the logic is:
in case of null or undefined, just return the same value (the special case is needed because it's an error to look for a clone method on them)
does the object have a clone method? then use that
is the object an array? then do a recursive cloning operation
otherwise just return the same value
This clone function should allow implementing custom clone methods easily... for example
function Point(x, y)
{
    this.x = x;
    this.y = y;
    ...
}

Point.prototype.clone = function()
{
    return new Point(this.x, this.y);
};

function Polygon(points, style)
{
    this.points = points;
    this.style = style;
    ...
}

Polygon.prototype.clone = function()
{
    return new Polygon(clone(this.points), this.style);
};
When you know that the correct cloning operation for a specific array in the object is just a shallow copy, you can call values.slice() instead of clone(values). For example, in the above code I am explicitly requiring that cloning a polygon clones the points, but shares the same style object. If I want to clone the style object too, I can just pass clone(this.style) instead.
There is no native method for cloning objects. Underscore implements _.clone, which is a shallow clone.
_.clone = function(obj) {
  return _.isArray(obj) ? obj.slice() : _.extend({}, obj);
};
It either slices it or extends it.
Here's _.extend
// extend the obj (first parameter)
_.extend = function(obj) {
  // for each other parameter
  each(slice.call(arguments, 1), function(source) {
    // loop through all properties of the other objects
    for (var prop in source) {
      // if the property is not undefined then add it to the object
      if (source[prop] !== void 0) obj[prop] = source[prop];
    }
  });
  // return the object (first parameter)
  return obj;
};
Extend simply iterates through all the source objects and copies their properties onto the target object; passing an empty object as the target is what makes the result a new object.
You can roll your own naive implementation if you want:
function clone(o) {
  var ret = {};
  Object.keys(o).forEach(function (val) {
    ret[val] = o[val];
  });
  return ret;
}
There are good reasons to avoid deep cloning, because closures cannot be cloned. I've personally asked a question about deep cloning objects before, and the conclusion I came to is that you just don't do it. My recommendation is to use underscore and its _.clone method for shallow clones.
For a shallow copy, I like to use the reduce pattern (usually in a module or such), like so:
var newObject = Object.keys(original).reduce(function (obj, item) {
  obj[item] = original[item];
  return obj;
}, {});
Here's a jsperf for a couple of the options: http://jsperf.com/shallow-copying
Old question, but there's a more elegant answer than what's been suggested so far; use the built-in util._extend:
var extend = require("util")._extend;
var varToCopy = { test: 12345, nested: { val: 6789 } };
var copiedObject = extend({}, varToCopy);
console.log(copiedObject);
// outputs:
// { test: 12345, nested: { val: 6789 } }
Note the use of the first parameter with an empty object {} - this tells extend that the copied object(s) need to be copied to a new object. If you use an existing object as the first parameter, then the second (and all subsequent) parameters will be shallow-merged over the first parameter variable.
Using the example variables above, you can also do this:
var anotherMergeVar = { foo: "bar" };
extend(copiedObject, { anotherParam: 'value' }, anotherMergeVar);
console.log(copiedObject);
// outputs:
// { test: 12345, nested: { val: 6789 }, anotherParam: 'value', foo: 'bar' }
Very handy utility, especially where I'm used to extend in AngularJS and jQuery.
Hope this helps someone else; object reference overwrites are a misery, and this solves it every time!
Node.js 17.x added the structuredClone() method, which performs a deep clone.
Reference documentation: https://developer.mozilla.org/en-US/docs/Web/API/structuredClone
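A minimal usage sketch: unlike the JSON round-trip, structuredClone preserves Dates, Maps, Sets, and even circular references (though it still throws on functions):
const source = { when: new Date(), nested: { val: 1 } };
source.self = source; // a circular reference is fine

const copy = structuredClone(source);
console.log(copy.nested === source.nested); // false -- it is a deep copy
console.log(copy.when instanceof Date);     // true  -- Dates survive
console.log(copy.self === copy);            // true  -- the cycle is preserved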
I implemented a full deep copy. I believe it's the best pick for a generic clone method, but it does not handle cyclical references.
Usage example:
parent = { prop_chain: 3 }
obj = Object.create(parent)
obj.a = 0; obj.b = 1; obj.c = 2;

obj2 = copy(obj)
console.log(obj, obj.prop_chain)
// { a: 0, b: 1, c: 2 } 3
console.log(obj2, obj2.prop_chain)
// { a: 0, b: 1, c: 2 } 3

parent.prop_chain = 4
obj2.a = 15
console.log(obj, obj.prop_chain)
// { a: 0, b: 1, c: 2 } 4
console.log(obj2, obj2.prop_chain)
// { a: 15, b: 1, c: 2 } 4
The code itself: it copies objects together with their prototypes, and it also copies functions (which might be useful for someone).
function copy(obj) {
  // To copy something that is not an object or function, just return it
  // (this check must come first, so we never read .clone off null):
  if (typeof obj !== 'object' && typeof obj !== 'function' || obj == null)
    return obj;
  // Delegate to a custom clone method if the object provides one:
  if (typeof obj.clone === 'function')
    return obj.clone();
  var newObj;
  if (typeof obj === 'object') {
    // Copy the prototype chain:
    newObj = {};
    var proto = Object.getPrototypeOf(obj);
    Object.setPrototypeOf(newObj, proto);
  } else {
    // If the object is a function, re-evaluate its source to copy it:
    var aux;
    newObj = eval('aux=' + obj.toString());
    // And copy the prototype:
    newObj.prototype = obj.prototype;
  }
  // Copy the object's own properties with a deep copy:
  for (var i in obj) {
    if (obj.hasOwnProperty(i)) {
      if (typeof obj[i] !== 'object')
        newObj[i] = obj[i];
      else
        newObj[i] = copy(obj[i]);
    }
  }
  return newObj;
}
With this copy I can't find any difference between the original and the copied one, except when the original used closures in its construction, so I think it's a good implementation. I hope it helps.
Depending on what you want to do with your cloned object, you can utilize the prototypal inheritance mechanism of JavaScript and achieve a somewhat-cloned object through:
var clonedObject = Object.create(originalObject);
Just remember that this isn't a full clone - for better or worse.
A good thing about it is that you haven't actually duplicated the object, so the memory footprint will be low.
Some tricky things to remember about this method: iteration over properties defined in the prototype chain sometimes works a bit differently, and any changes to the original object will affect the cloned object as well, unless that property has also been set on the clone itself.
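A short sketch of that last caveat: property reads fall through to the original until the clone shadows the property itself:
const originalObject = { name: 'original', size: 1 };
const clonedObject = Object.create(originalObject);

originalObject.size = 2;
console.log(clonedObject.size); // 2 -- the read falls through to the original

clonedObject.size = 99;         // 'size' is now set on the clone itself
originalObject.size = 3;
console.log(clonedObject.size); // 99 -- the shadowing property wins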
Objects and arrays in JavaScript are passed by reference, so if you update the copied value it may be reflected in the original object. To prevent this, you can deep clone the object so no references are shared, using the lodash library's cloneDeep method.
Run this command:
npm install lodash
const ld = require('lodash')
const objectToCopy = {name: "john", age: 24}
const clonedObject = ld.cloneDeep(objectToCopy)
Try this module for complex structures, developed especially for Node.js: https://github.com/themondays/deppcopy
It works faster than JSON stringify and parse on large structures, and it supports BigInt.
For an array, one can use:
var arr = [1, 2, 3];
var arr_2 = arr;          // arr_2 points at the same array
console.log(arr_2);       // [1, 2, 3]
arr = arr.slice(0);       // arr now points at a fresh copy
console.log(arr);         // [1, 2, 3]
arr[1] = 9999;            // modifies only the copy
console.log(arr_2);       // [1, 2, 3] -- the original is untouched
How about this method?
const v8 = require('v8');

const structuredClone = obj => {
  return v8.deserialize(v8.serialize(obj));
};
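A quick usage check (a sketch): the v8 round-trip uses the structured clone algorithm, so nested data and Dates survive, but it throws on functions:
const cloned = structuredClone({ nested: { val: 1 }, when: new Date() });
console.log(cloned.nested.val);           // 1 -- nested data is deep-copied
console.log(cloned.when instanceof Date); // true -- Dates survive serialization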
