We have a project using React + Redux + ImmutableJS. One of our engineers recently added a helper method to support destructuring an ImmutableJS Map when passing it to a component as props:
export function toObjectShallow(mapping: Map) {
  const result = {};
  mapping.forEach((value, key) => {
    result[key] = value;
  });
  return result;
}
Thus, we can still do the following and avoid being verbose with repeated calls to Map.get:
<XyzComponent {...toObjectShallow(this.props.xyz)}/>
Yes, that's essentially making two shallow copies (our method + destructuring) of the original object. That should be of minimal expense. I'm wondering though, since I don't see this kind of recommendation really anywhere else in the React/Redux/Immutable communities, is there something else I'm missing that would make this unideal?
The passed properties are still the original, immutable props. It's just that the containing object is mutated, which doesn't matter because it isn't passed to the component anyway. So, what gives? This seems like such a simple solution while avoiding toJS(). Why isn't it really mentioned anywhere?
I followed the advice in the redux docs to use a HOC for all connected components to allow me to interact with plain javascript objects outside of redux. So my selectors and reducers still use ImmutableJS objects, but the rest of the code uses plain javascript objects:
https://redux.js.org/docs/recipes/UsingImmutableJS.html#use-a-higher-order-component-to-convert-your-smart-components-immutablejs-props-to-your-dumb-components-javascript-props
edit - not sure if this is the toJS you are mentioning above; I had assumed you meant ImmutableJS's toJS.
As far as preferences go: with an HOC you only need to do the conversion once per component, as opposed to each time you use the component.
I think the "correct" answer I was looking for was to continue using Immutable JS's toObject() which does a shallow conversion of an Immutable Map object to a regular object, thus allowing us to continue using syntactic sugar while keeping the properties immutable and not having to use our own shallow copy.
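For illustration, the shallow behaviour of toObject() can be sketched in plain JS (a hypothetical stand-in, not Immutable.js code): top-level entries are copied into a fresh container, while the values themselves are left untouched.

```javascript
// Hypothetical sketch of what a shallow conversion does: copy the
// top-level entries into a new object, but keep each value as-is.
function toObjectShallow(entries) {
  const result = {};
  for (const [key, value] of Object.entries(entries)) {
    result[key] = value; // same value reference, new container
  }
  return result;
}

const xyz = { a: 1, nested: Object.freeze({ b: 2 }) };
const plain = toObjectShallow(xyz);
// plain is a new container, but plain.nested is the very same frozen object
```

The values stay immutable; only the wrapper is a fresh, mutable object, which is exactly what spreading into props needs.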
Related
I like the functional programming paradigm, which in my opinion produces cleaner and easier-to-understand code, but I'd like to know whether the performance loss is significant.
Let's take the example of a function that adds a new property to an object. A non-functional approach would look like this:
const addProp = (obj, prop, val) => {
  obj[prop] = val; // Mutate the object
  return obj;
};
While the functional approach would look like this:
const addProp = (obj, prop, val) => ({ ...obj, [prop]: val }); // Shallow clone
What's the cost of the shallow clone compared to the object mutation? I'd like to know how much functional programming patterns degrade performance (if they do).
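For reference, the behavioural difference can be demonstrated like this (the same two functions as above, renamed so both can coexist in one snippet):

```javascript
// Mutating version: modifies and returns its input.
const addPropMutating = (obj, prop, val) => {
  obj[prop] = val;
  return obj;
};

// Pure version: shallow-clones the input, leaving it untouched.
const addPropPure = (obj, prop, val) => ({ ...obj, [prop]: val });

const a = { x: 1 };
const b = addPropPure(a, 'y', 2);     // b is a new object, a is unchanged
const c = addPropMutating(a, 'y', 2); // a itself now has y, and c === a
```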
The performance of making shallow copies is much worse than the performance of mutation. If I have an array with 400 elements and I make a shallow copy, that's going to be much more expensive than destructively modifying a single element of the array. And the longer the array is, the worse the gap becomes. The same problem occurs with an object with many properties.
One essential element of efficient functional programming is using the right data structures. In this case, you want a data structure where modifying part of the data structure does not mean you have to completely recreate the data structure. You want a local modification of a data structure to be able to share most of its memory with the original.
An extremely basic example of this is a singly linked list. If you wish to prepend a single element to a linked list, this is an O(1) operation because the new list shares most of its memory with the old list. However, adding a single element to an array without mutating the original array is a linear-time operation because the whole array must be copied.
For sequences and maps, one generally ends up using some sort of balanced tree. Functional programming languages include these data structures as part of their standard libraries, but I'm sure someone has written a library for functional Javascript.
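As a minimal sketch of that memory sharing, here is an O(1) persistent prepend on a singly linked list in plain JS (an illustration, not a production library):

```javascript
// A persistent singly linked list node: prepending allocates exactly
// one new node and shares the entire tail with the original list.
const cons = (head, tail) => ({ head, tail });

const list1 = cons(2, cons(3, null)); // 2 -> 3
const list2 = cons(1, list1);         // 1 -> 2 -> 3, O(1) to build

// list1 is unchanged, and list2.tail is literally the same object,
// so the two lists share all but one node of their memory.
```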
I was playing with React.Children and the children prop, and I found that there is no difference between using children.filter() and React.Children.toArray(children).filter(). I was also able to render a specific child from the children prop by using its index directly and treating it like a normal array.
So I wonder: is there any difference between the two approaches? And if I can use the prop directly with array methods, why did the React team implement the toArray() method in the first place?
I tried console.log() on both of them and I see no difference between the two approaches:
import { Children } from 'react';

const ChildrenContainer = (props) => {
  const { children, howMany } = props;
  console.log(children, '\n', Children.toArray(children));
  return (
    <div className="card-columns">
      {Children.map(children, (thisArg) => thisArg)}
    </div>
  );
};
If you look at the docs here: https://reactjs.org/docs/react-api.html#reactchildren you will notice some things.
React.Children deals with null and undefined. Calling children.filter() would error should children be null or undefined.
It deals with <Fragment> children
Includes helper functions such as .only() that provide more react specific capabilities than a regular array.
toArray() alters keys to preserve semantics of nested arrays.
So in your specific case, you might not notice any difference. But even if you are certain that children will always be a usable array, it is still good practice to use React.Children. This is because it abstracts away the children concept so that your code is more future-proofed (against changes to your code, and potential changes to React itself).
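A rough way to see the null/undefined point is this simplified sketch (not React's actual implementation) of the normalization toArray performs:

```javascript
// Simplified sketch of why Children.toArray is safer than calling
// array methods on `children` directly: it normalizes the odd cases.
function toArraySketch(children) {
  if (children == null) return [];                              // null/undefined -> []
  const arr = Array.isArray(children) ? children : [children];  // single child -> [child]
  return arr.flat(Infinity);                                    // flatten nested arrays
}

// children.filter() would throw if children were null; the normalized
// form can always be filtered, even for a single non-array child.
toArraySketch(null).filter(Boolean);
toArraySketch('only child').filter(Boolean);
```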
let oldMessages = Object.assign({}, this.state.messages);
// this.state.messages[0].id = 718
console.log(oldMessages[0].id);
// Prints 718
oldMessages[0].id = 123;
console.log(this.state.messages[0].id);
// Prints 123
How can I prevent oldMessages from being a reference? I want to change the value of oldMessages without changing the value of state.messages.
You need to make a deep copy. Lodash's cloneDeep makes this easy:
import cloneDeep from 'lodash/cloneDeep';
const oldMessages = cloneDeep(this.state.messages);
oldMessages[0].id = 123;
First let's clarify the difference between shallow and deep clone:
A shallow clone is a clone that has its primitive properties copied, but its REFERENCE properties still point to the original.
Allow me to clarify:
let original = {
  foo: "brlja",
  howBigIsUniverse: Infinity,
  mrMethodLookAtMe: () => "they call me mr. Method",
  moo: {
    moo: "MOO"
  }
};
// shallow copy
let shallow = Object.assign({}, original);
console.log(original, shallow); // looks OK
shallow.moo.moo = "NOT MOO";
console.log(original, shallow); // changing the copy changed the original
Notice how changing an inner property of the shallow copy's non-primitive property was REFLECTED on the original object.
So why would we use shallow copy?
It is definitely FASTER.
It can be done in pure JS with a one-liner.
When would you use shallow copy?
All of your object's properties are primitives
You are making a partial copy where all your copied properties are primitives
You don't care about the fate of the original (is there a reason to copy and not use that one instead?)
OK, let's get into making a proper (deep) copy. A deep copy should obviously have the original object copied into the clone by value, not by reference. And this should persist as we drill deeper into the object. So if we have an object nested X levels deep inside the original's property, it should still be a copy, not a reference to the same thing in memory.
What most people suggest is to abuse the JSON API: they think that turning an object into a string and then back into an object will make a deep copy. Well, yes and NO. Let's attempt to do just that.
Extend our original example with:
let falseDeep = JSON.parse(JSON.stringify(original));
falseDeep.moo.moo = "HEY I CAN MOO AGAIN";
console.log(original, falseDeep); // moo.moo is decoupled
Seems ok, right? WRONG!
Take a look at what happened to the mrMethodLookAtMe and howBigIsUniverse properties that I sneaked in from the start :)
One gives back null, which is definitely not Infinity, and the other one is GONE. Well, that is no bueno.
In short: There are problems with "smarter" values like NaN or Infinity that get turned to null by JSON API. There are FURTHER problems if you use:
methods, RegExps, Maps, Sets, Blobs, FileLists, ImageDatas, sparse Arrays, Typed Arrays as your original object's properties.
Why? Well, this produces some of the nastiest-to-track bugs out there. Before TypeScript became a thing, I had nightmares tracking down disappearing methods, or a type silently being turned into another (which passed someone's sloppy input-parameter check but then couldn't produce a valid result).
Time to wrap this up! So what is the correct answer?
You write your own implementation of a deep copy. I like you but please don't do this when we have a deadline to meet.
Use a deep cloning function provided to you by the library or framework you already use in the project.
Lodash's cloneDeep
Many people still use jQuery. So in our example (please put import where it belongs, on top of the file):
import jQ from "jquery";
let trueDeep = jQ.extend(true, {}, original); // the empty object must be the target, or original itself gets mutated
console.log(original, trueDeep);
This works, it makes a nice deep copy and is a one-liner. But we had to import the entire jQuery. Which is fine if it is already being used in project, but I tend to avoid it since it is over-bloated and has terribly inconsistent naming.
Similarly, AngularJS users can use angular.copy().
But what if my framework/library does not have a similar function?
You can use my personal SUPERSTAR among JS libraries (I am not involved in the project, just a big fan) - Lodash (or _ for friends).
So extend our example with (again, mind the position of import):
import _ from "lodash"; // cool kids know _ is low-dash
let lodashDeep = _.cloneDeep(original);
console.log(original, lodashDeep);
It is a simple one-liner, it works, and it is fast.
This is pretty much it :)
Now you know the difference between shallow and deep copy in JS. You realize JSON API abuse is just that, abuse and not a true solution. If you are using jQuery or AngularJS already you now know there is a solution already there for you. If not you can write your own or consider using lodash.
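Worth adding since this answer was written: modern browsers and Node.js (17+) ship a built-in structuredClone() that deep-copies most built-ins (plain objects, Dates, Maps, Sets, typed arrays) with no library at all, though it still throws on functions:

```javascript
// structuredClone is built into modern runtimes (Node 17+, current browsers).
const original = { moo: { moo: "MOO" }, when: new Date(0), tags: new Set(["a"]) };
const clone = structuredClone(original);

clone.moo.moo = "NOT MOO"; // a true deep copy: the original is unaffected
```

Note that, unlike lodash's cloneDeep, structuredClone throws a DataCloneError if the object contains functions, so it doesn't cover the mrMethodLookAtMe case above.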
The entire example can be found here:
codesandbox - entire example
Try using:
let tempVar = JSON.parse(JSON.stringify(this.state.statename));
What you are actually doing with
let oldMessages = Object.assign({}, this.state.messages);
is a shallow copy, which is similar to {...this.state.messages} with the spread operator.
An object has its own reference in memory; to break it you can use JSON.parse(JSON.stringify(object)).
No matter how deeply nested its keys are, this removes the reference to the original object and you get a new object.
This concept is called a deep copy or deep clone.
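One caveat worth keeping in mind with the JSON round-trip: it only works for JSON-safe data. Functions and undefined disappear entirely, and values like Infinity or NaN become null:

```javascript
// The JSON round-trip silently drops or rewrites non-JSON values.
const state = { id: 718, ratio: Infinity, onClick: () => {} };
const copy = JSON.parse(JSON.stringify(state));

// copy.id survives as 718, but copy.ratio is null and copy.onClick is gone
```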
I am trying to figure out the most idiomatic implementation of the react lifecycle method shouldComponentUpdate. I feel that I, and possibly others, don't utilize this method to the extent it could be because it is optional.
Generally I want to check if the props or state of an object have changed between updates.
This does not work as this equality is pointing at the object reference:
shouldComponentUpdate(nextProps, nextState) {
return this.props !== nextProps;
}
So then we go down the rabbit hole of object cloning, which seems like a little bit of an ugly solution:
return JSON.stringify(this.props) !== JSON.stringify(nextProps);
// lodash deep-equality method
return !_.isEqual(this.props, nextProps);
Another possibility is using an immutable library like immutablejs, but that is another dependency I'm not sure I want to add to the project, and another API to learn.
Am I missing something? Is there a more concise approach to this?
You can simply use React's shallow compare to compare props and state.
Hope this helps!
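In modern React that usually means extending React.PureComponent, which implements shouldComponentUpdate with a shallow comparison for you. The comparison itself boils down to something like this sketch (not React's exact source):

```javascript
// Sketch of a shallow comparison: same key set, and each value is
// reference-equal (Object.is). Nested objects are NOT compared deeply.
function shallowEqual(objA, objB) {
  if (Object.is(objA, objB)) return true;
  const keysA = Object.keys(objA);
  const keysB = Object.keys(objB);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((key) => Object.is(objA[key], objB[key]));
}

// In a component, shouldComponentUpdate(nextProps, nextState) would return:
//   !shallowEqual(this.props, nextProps) || !shallowEqual(this.state, nextState)
```

This is cheap (no cloning, no deep traversal), and it is exactly why immutable updates matter: a shallow compare can only detect a change if changed data gets a new reference.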
Consider the following:
[SELECT]: (state, action) => {
  let newState = { ...state }
  delete newState[action.payload.key]
  return newState
}
Why is it acceptable for me to mutate the shallow copy, return it and still satisfy the rule about not mutating my state?
It is acceptable because (at least in your example code), the mutation is on the shallow copy, so you are not modifying an object that any other code is currently referring to.
It would not be acceptable to make a shallow copy and then modify a nested object! The key is that you clone any and all objects in the object tree that are on the path to the deep property you wish to change.
From the Redux FAQ:
It’s important to remember that whenever you update a nested value,
you must also return new copies of anything above it in your state
tree. If you have state.a.b.c.d, and you want to make an update to d,
you would also need to return new copies of c, b, a, and state. This
state tree mutation diagram demonstrates how a change deep in a tree
requires changes all the way up.
A reducer in Redux should be pure, which means it must guarantee that it doesn't change or affect anything outside of its own function (i.e. it produces no side effects). If you mutate the passed state object directly, you violate this.
By creating a new copy, you know for certain that no other part of your application uses it (it was just created, so how could they?). Which in turn means that it's safe to mutate it without worrying about strange side effects.
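Concretely, updating a deep property while keeping every ancestor fresh looks like this chain of spreads (a hypothetical state shape, following the FAQ's state.a.b.c.d example):

```javascript
const state = { a: { b: { c: { d: 1 } }, other: { untouched: true } } };

// New copies of state, a, b and c; everything off the changed path
// (like state.a.other) is shared by reference, not copied.
const next = {
  ...state,
  a: {
    ...state.a,
    b: {
      ...state.a.b,
      c: { ...state.a.b.c, d: 2 },
    },
  },
};
```

The original tree is untouched, and a shallow reference check on any ancestor of d now detects the change.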