Background: like many people in programming, I'm building a React application without an educational background in Computer Science; mine was in a scientific discipline. I keep running into the same question, and I would greatly appreciate clarification from someone who knows more about what's going on under the hood.
In the render() of a React.Component, I can access props/state in either of two ways:
// reference directly from this
render() {
  return <div>{this.props.text}</div>;
}

// reference from declared local variable
render() {
  const { props } = this;
  return <div>{props.text}</div>;
}
Which way would be considered the 'optimized' way?
I may be wrong, but I would assume that when props/state are accessed from 'this', some lookup is performed on the component each time. Is that lookup computationally more expensive than reading a local variable?
I do, however, see it accessed from 'this' in many helpful articles, tutorials, and Stack Overflow answers, so my reasoning could be flawed.
Performance shouldn't be a factor here whatsoever. The only difference I'd point to is that const { props } = this; is one extra variable assignment, which almost certainly costs a negligible amount of memory.
The choice here is purely about readability and which one "looks" better to the programmer.
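The readability benefit mostly shows up when several values are pulled out at once. A quick sketch with hypothetical prop/state names (Spinner and List are made-up components, not from the question):

// Hypothetical props/state, purely to illustrate the readability argument.
render() {
  const { title, items, onSelect } = this.props;
  const { isLoading } = this.state;

  return (
    <div>
      <h1>{title}</h1>
      {isLoading ? <Spinner /> : <List items={items} onSelect={onSelect} />}
    </div>
  );
}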
Related
Is it acceptable to create a global object on window for variables, objects, properties, and methods needed across components?
It seems to me a simple, easy, and direct approach. As long as everyone working on the project is aware of this global object, it is safe.
Regarding cross-component access, I've looked at:
Prop drilling [1]. Tedious, error-prone, and considered bad practice by some.
Redux [2]. Good for larger projects, but can get involved.
Context [3]. Also a bit of a pain. Seemingly just another way to create a global, with more overhead.
I know that globals are often viewed like the metric system in the USA: Evil and from the Devil. The reason is that programmers can step on each other - recall the classic foo/bar example, where everyone loves to use those variables in the global scope and their values get overwritten. However, I think it's safe as long as the global object is named to minimize conflicts and the other developers on the project are aware of its use.
Given the contortions from other global state systems, I think that this might be a valid use case.
Complete, Minimal, Verifiable Example
Note how we hang a couple of variables, an array, and a Cheese object (with properties and getter/setter methods) off the global object.
class Cheese {
  constructor(brand, type) {
    this.brand = brand;
    this.type = type;
  }
  getBrand() { return this.brand; }
  setBrand(b) { this.brand = b; return 0; }
}

window.myGlobal = {};
window.myGlobal["swiss"] = new Cheese("Jarlsberg", "Swiss");
window.myGlobal["cheese"] = ["Swiss", "Cheddar", "Brie"];
window.myGlobal["version"] = "0.1.5";
...and when we need something from a component:
console.log(window.myGlobal.swiss.getBrand()); //-> "Jarlsberg"
console.log(window.myGlobal.cheese[1]);        //-> "Cheddar"
console.log(window.myGlobal.version);          //-> "0.1.5"
References:
[1] https://kentcdodds.com/blog/prop-drilling
[2] https://redux.js.org/
[3] https://reactjs.org/docs/context.html
With window globals you cannot track changes to your state, and they are not reactive (components don't listen for changes to them). They are also not safe: they can be mutated from outside your components, for example by a user opening the browser console and assigning new values to your variables.
The official Vuex docs (Vuex being the Vue.js equivalent of Redux) say:
So why don't we extract the shared state out of the components, and manage it in a global singleton? With this, our component tree becomes a big "view", and any component can access the state or trigger actions, no matter where they are in the tree!
By defining and separating the concepts involved in state management and enforcing rules that maintain independence between views and states, we give our code more structure and maintainability.
By "global singleton" they mean an object instantiated once whose fields and methods are accessed from the whole app, much like window.
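To make that concrete, a "global singleton" can simply be a plain module that exports one store object. Here is a minimal sketch; the setState/subscribe API is only illustrative, not a real library:

// store.js - a hypothetical module-scoped singleton (a sketch, not a real library API)
const listeners = [];

export const store = {
  state: { version: "0.1.5", cheeses: ["Swiss", "Cheddar", "Brie"] },

  setState(partial) {
    Object.assign(this.state, partial);
    listeners.forEach(listener => listener(this.state)); // notify subscribers so components can re-render
  },

  subscribe(listener) {
    listeners.push(listener);
    return () => listeners.splice(listeners.indexOf(listener), 1); // returns an unsubscribe function
  }
};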
I am developing a web app using React, just so anybody reading knows the context. That said, this is essentially a question about ES6 code optimization rather than React as such.
TL;DR: I know it reeeeeally won't make any difference because of how simple the whole thing is, but I am wondering, in theory, what the best approach is. What is the "good practice" standard for declaring functions used within ES6 class methods, with the best performance?
What would be the optimal place to define a "helper" function that is used by one of the methods of my class? For example:
class Table extends Component {
  constructor() {
    super();
    this.instanceBasedFilter = function () {...}; // Option 1
  }

  state = {};

  render() {
    const { filterProp } = this.props;
    return (
      <React.Fragment>
        <table>
          {this.filterMethod(filterProp)}
        </table>
      </React.Fragment>
    );
  }

  prototypeBasedFilter() {...} // Option 2

  filterMethod = filter => {
    filter = filter || [];
    function methodBasedFilter() {...} // Option 3
    filter.filter(/* need to pass a function here */);
  };
}

function outsideBasedFilter() {...} // Option 4
So obviously, they are all different, and some overhead cannot be avoided; I am just wondering which would be considered the best approach. For argument's sake, let's disregard the option of placing the filter helper in a different .js file and say that it is specific to this component.
My view on the options is below; correct me if I am wrong, and suggest a best-practice option if you know one.
Option 1:
A function object is created every time the component is instantiated and mounted, which can add up to a moderate number of creations. On the plus side, it only takes up memory for as long as the component is mounted; once the component is removed, the memory can be freed, if I understand the garbage collector properly.
Option 2:
A single function object is created on the prototype and shared by all instances, so the work is done only once; however, it stays in memory for as long as the application runs.
Option 3:
A function object is created every time the component updates, which can really end up being a lot of times. On the plus side, as soon as rendering is done the memory can be freed. This option is what I'd intuitively go for if I weren't thinking about optimizing, because I'd just inline an arrow function and be done with it. But it carries the most overhead, no?
Option 4:
Honestly, this one has me the most wound up, since I can't work out how it gets compiled. Placing a function declaration outside the class itself exposes it to that class and its prototype, but I have no clue how webpack/Babel compiles it. My educated guess is that it's similar to Option 2 in terms of optimization: I'd assume it's held in the scope of some anonymous function that represents the module environment, which means it will never be collected for as long as the app is running, but it will also only be defined once.
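To make my guess concrete, this is roughly the shape I have in mind (the predicate is hypothetical, and this is not actual webpack output):

// Sketch: created once, when the module is first evaluated, and reused by every render.
const outsideBasedFilter = row => row != null; // hypothetical predicate

class Table extends Component {
  render() {
    const { filterProp = [] } = this.props;
    // The same function object is passed on every render; nothing is re-created here.
    return <table>{filterProp.filter(outsideBasedFilter)}</table>;
  }
}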
So best practice?
For quite some time I've been wondering about this question: when working with AngularJS, should I use the model object's properties directly in the view, or can I use a function to get that property value?
I've been doing some minor home projects in Angular, and (especially when working with read-only directives or controllers) I tend to create scope functions to access and display scope objects and their property values in the views. But performance-wise, is this a good way to go?
This way seems easier for maintaining the view code, since if the object changes for some reason (a server-side change or anything else), I only have to change the directive's JS code instead of the HTML.
Here's an example:
// this goes inside the directive's link function
scope.getPropertyX = function() {
  return scope.object.subobject.propX;
};
in the view I could simply do
<span>{{ getPropertyX() }}</span>
instead of
<span>{{ object.subobject.propX }}</span>
which is harder to maintain amidst the HTML clutter that is sometimes involved.
Another case is using scope functions to test property values for evaluation in an ng-if, instead of using the test expression directly:
scope.testCondition = function() {
  return scope.obj.subobj.propX === 1 && scope.obj.subobj.propY === 2 && ...;
};
So, are there any pros/cons to this approach? Could you give me some insight into this issue? It's been bothering me lately: how might a heavy app behave when, for example, a directive gets really complex and, on top of that, is used inside an ng-repeat that generates hundreds or thousands of instances?
Thank you
I don't think creating functions for all of your properties is a good idea. Not only will more function calls be made every digest cycle to check whether the return value has changed, but it also seems less readable and maintainable to me. It can add a lot of unnecessary code to your controllers and effectively turns your controller into a view model. Your second case seems perfectly fine; complex operations are exactly what you'd want your controller to handle.
As for performance, it does make a difference according to a test I wrote (a fiddle; I tried to use jsPerf but couldn't get a different setup per test). Properties are almost twice as fast: 223,000 digests/sec using properties versus 120,000 digests/sec using getter functions. Watches are created for bindings that use Angular's $parse.
One thing to think about is inheritance. If you uncomment the ng-repeat list in the fiddle and inspect the scope of one of the elements, you can see what I'm talking about. Each child scope that is created inherits the parent scope's properties. For objects it inherits a reference, so if you have 50 properties on your object, only the object reference value is copied to the child scope. If you have 50 manually created functions, each of those functions is copied to each child scope. The timings are slower for both methods: 126,000 digests/sec for properties and 80,000 digests/sec with getter functions.
I really don't see how it would make your code easier to maintain; it seems more difficult to me. If you don't want to have to touch your HTML when the server object changes, it would probably be better to handle that in a JavaScript object instead of putting getter functions directly on your scope, i.e.:
$scope.obj = new MyObject(obj); // MyObject class
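For example, a sketch of what such a wrapper might look like (the field names here are hypothetical), so that only the class changes if the server payload changes:

// Hypothetical wrapper: map the server payload once; the view binds to plain properties.
function MyObject(obj) {
  this.propX = obj.subobject.propX;
  this.propY = obj.subobject.propY;
}

// In the controller: $scope.obj = new MyObject(serverResponse);
// In the view: {{ obj.propX }} keeps working even if the server shape changes.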
In addition, Angular 2.0 will be using Object.observe(), which should increase performance even more but would not improve the performance of getter functions on your scope.
It looks like this code is all executed for each function call. It calls contextGetter(), fnGetter(), and ensureSafeFunction(), as well as ensureSafeObject() for each argument, for the scope itself, and for the return value.
return function $parseFunctionCall(scope, locals) {
  var context = contextGetter ? contextGetter(scope, locals) : scope;
  var fn = fnGetter(scope, locals, context) || noop;
  if (args) {
    var i = argsFn.length;
    while (i--) {
      args[i] = ensureSafeObject(argsFn[i](scope, locals), expressionText);
    }
  }
  ensureSafeObject(context, expressionText);
  ensureSafeFunction(fn, expressionText);
  // IE stupidity! (IE doesn't have apply for some native functions)
  var v = fn.apply
    ? fn.apply(context, args)
    : fn(args[0], args[1], args[2], args[3], args[4]);
  return ensureSafeObject(v, expressionText);
};
},
By contrast, simple properties are compiled down to something like this:
(function(s, l /**/) {
  if (s == null) return undefined;
  s = ((l && l.hasOwnProperty("obj")) ? l : s).obj;
  if (s == null) return undefined;
  s = s.subobj;
  if (s == null) return undefined;
  s = s.A;
  return s;
})
Performance-wise - it is likely to matter little
Jason Goemaat did a great job providing a benchmarking fiddle, where you can change the last line from:
setTimeout(function() { benchmark(1); }, 500);
to
setTimeout(function() { benchmark(0); }, 500);
to see the difference.
But he also frames the answer as properties are twice as fast as function calls. In fact, on my mid-2014 MacBook Pro, properties are three times faster.
But equally, the difference between calling a function and accessing the property directly is about 0.00001 seconds, or 10 microseconds.
This means that if you have 100 getters, they will be slower by about 1 ms compared to accessing 100 properties directly.
Just to put things in context, the time it takes sensory input (a photon hitting the retina) to reach our consciousness is 300ms (yes, conscious reality is 300ms delayed). So you'd need 30,000 getters on a single view to get the same delay.
Code-quality-wise - it could matter a great deal
In the days of assembler, software was looked at like this:
A collection of executable lines of code.
But nowadays, specifically for software that have even the slightest level of complexity, the view is:
A social interaction between communicating objects.
The latter is concerned much more with how behaviour is established via communicating objects than with the actual low-level implementation. In turn, the communication is governed by an interface, which is typically achieved using the query or command principle. What matters is the interface of (or the contract between) collaborating objects, not the low-level implementation.
By inspecting properties directly, you hit the internals of an object, bypassing its interface and thus coupling the caller to the callee.
Obviously, with getters, you may rightly ask "what's the point?". Well, consider these changes to the implementation that won't affect the interface:
Checking for edge cases, such as whether the property is defined at all.
Changing getName() to return first + last name rather than just the name property.
Deciding to store the property in a mutable construct.
So even a seemingly simple implementation may change in a way that, with getters, requires a single change, but without them requires many changes.
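For instance, a hypothetical getName() whose implementation changes without any caller noticing (the names here are purely illustrative):

// Hypothetical example: getName() started out returning a stored `name` field...
//   getName() { return this.name; }
// ...and later changed to derive the value. Callers of getName() are unaffected.
class Person {
  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }
  getName() {
    return (this.firstName + ' ' + this.lastName).trim();
  }
}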
I vote getters
So I argue that unless you have a profiled case for optimisation, you should use getters.
Whatever you put in the {{}} is going to be evaluated A LOT. It has to be evaluated each digest cycle in order to know whether the value has changed. Thus, one very important rule of Angular is to make sure you don't have expensive operations in any $watches, including those registered through {{}}.
Now, the difference between referencing the property directly and having a function do nothing but return it seems negligible to me. (Please correct me if I'm wrong.)
So, as long as your functions aren't performing expensive operations, I think it's really a matter of personal preference.
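In practice, that usually means computing anything costly once, when its inputs change, and binding to the stored result rather than recomputing it inside the expression. A sketch with hypothetical names:

// Sketch: recompute only when `items` changes, not on every digest.
$scope.$watchCollection('items', function(items) {
  $scope.visibleItems = (items || []).filter(function(item) {
    return item.active; // hypothetical, potentially expensive test
  });
});
// The view then binds to the cached result: <li ng-repeat="item in visibleItems">...</li>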
I'm just curious. Maybe someone knows what JavaScript engines can optimize in 2013 and what they can't? Any expectations for the near future? I was looking for some good articles, but there is still no "bible" on the internet.
OK, let's focus on a single question:
Suppose I have a function which is called every 10ms or in a tight loop:
function bottleneck() {
  var str = 'Some string',
      arr = [1, 2, 3, 4],
      job = function () {
        // do something;
      };
  // Do something;
  // console.log(Date.getTime());
}
As you can see, I do not need to calculate the initial values for the variables every time. But if I move them to an outer scope, I will lose out on variable lookup speed. So is there a way to tell the JavaScript engine to do such an obvious thing - precalculate the variables' initial values?
I've created a jsPerf to clarify my question. I'm experimenting with different types; I'm especially interested in functions and primitives.
If you need to call a function every 10ms and it's a bottleneck, the first thought you should have is "I shouldn't call this function every 10ms" - something went wrong in your architecture. That said, see 1b in http://jsperf.com/variables-caching/2, which is about four times faster than your "cached" version. The main reason is that for every variable in your code, you're either moving up a scope or redeclaring. In 1b, we go up scope once to get "initials", then set up local aliases for its contents from the local reference. Much time is saved.
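The pattern being described boils down to something like this (a sketch of the idea, not the exact jsperf code):

// Hoisted once, outside the hot path.
var initials = {
  str: 'Some string',
  arr: [1, 2, 3, 4],
  job: function () { /* do something */ }
};

function bottleneck() {
  // One scope hop to reach `initials`, then cheap local aliases for everything else.
  var str = initials.str,
      arr = initials.arr,
      job = initials.job;
  // ... do the actual work with str, arr and job
}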
(Concerns V8)
Well, the array data itself is not recreated, but a unique array object needs to be created every time. The backing store for the values 1, 2, 3, 4 is shared by these objects.
The string is interned, and it is actually fastest to copy-paste the same string everywhere as a literal rather than referencing some common variable. But for maintainability you don't really want to do that.
Don't create any new functions inside a hot function. If your job function references any variables from the bottleneck function, then those variables become context-allocated and slow to access everywhere, even in the outer function, and (as of now) it prevents inlining of the bottleneck function. Inlining is a big-deal optimization you don't want to miss when it's otherwise possible.
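Concretely, that means defining job outside bottleneck whenever it doesn't need to close over the hot function's locals. A sketch:

// `job` is created once and closes over nothing from bottleneck(),
// so bottleneck's locals stay cheap to access and the function remains inlinable.
function job(value) {
  // do something with value
}

function bottleneck() {
  var str = 'Some string';
  var arr = [1, 2, 3, 4];
  job(arr[0]); // pass what job needs as arguments instead of closing over it
}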
I'm curious what my game would look like in a functional style instead of OOP.
Here are core lines of code in Node.js:
if (!player.owns(flag) && player.near(flag) && flag.isUnlocked()) {
  player.capture(flag);
}
My guess was, it could look like this:
var canCapture = [not(owns), isNear, canUnlock].every(function(cond) {
  return cond(playerData, flagData);
});

if (canCapture) {
  // how to capture?
}
But I'm not sure, as I'm not an experienced functional coder. I'm interested in any answer close to the subject (it can even be in another programming style).
It could look somewhat like this:
if (!player.owns(flag) && player.near(flag) && flag.isUnlocked()) {
  capturingPlayer = player.capture(flag);
}
where capturingPlayer is a new object whose only difference from player is that it has captured a flag; player itself is unmodified by the call to capture.
If you prefer a "non-OO" syntax (whatever that might mean):
if (!owns(player, flag) && near(player, flag) && isUnlocked(flag)) {
  capturingPlayer = capture(player, flag);
}
To expand and hopefully clarify a bit:
Functional programming, in the sense employed by the functional programming community, does not just mean "functions/procedures are first-class objects".
What it does mean is that functions are functions in the mathematical sense, i.e.
All functions return a value.
Every function returns the same value every time it's passed the same arguments.
A function has no side-effects whatsoever - there are no mutable objects or assignment.
So, as long as none of your object's methods mutate the object, you don't really need to change much to program in a "functional style".
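As a sketch of what a side-effect-free capture might look like (the flags field is an assumption, not from the original game code):

// A pure capture: returns a new player object instead of mutating the argument.
function capture(player, flag) {
  return Object.assign({}, player, {
    flags: player.flags.concat([flag]) // assumed field holding the player's captured flags
  });
}

// Usage: var capturingPlayer = capture(player, flag); // `player` itself is left untouched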
Edit:
Unfortunately both "functional" and "object-oriented" (in particular) are pretty ill-defined concepts.
Try and find a definition of "object-oriented" - there are at least as many definitions as there are people attempting to define it.
To get an understanding of functional programming, read Why functional programming matters by John Hughes, at least twice.