Attribute vs constant access in JavaScript

I'm working on high-performance web components, and I'm wondering whether it's worth assigning an object property's value to a constant before accessing it multiple times.
I mean turning this:
let counter = 0;
for (let i = 0, len = parentObject.myObject.items.length; i < len; i++) {
// items is an array of integers
counter += parentObject.myObject.items[i];
}
Into this:
let counter = 0;
const { myObject } = parentObject;
const { items } = myObject;
for (let i = 0, len = items.length; i < len; i++) {
counter += items[i];
}
In Python this change would have a noticeable impact on performance. However, the tests I have made (code at https://gist.github.com/Edorka/fbfb0778c859d8f518f0508414d3e6a2) show no difference:
caseA total 124999750000
Execution time (hr): 0s 1.88101ms
caseB total 124999750000
Execution time (hr): 0s 1.117547ms
I'm not sure whether my tests are flawed or whether the VM has some optimization for this case that I'm not aware of.
UPDATE: Following @George Jempty's suggestion, I made a quick adaptation on JSPerf at https://jsperf.com/attribute-vs-constants but the results remain quite erratic.

Nested property access is one of the most frequently executed operations in JavaScript. You can expect it to be heavily optimized.
Indeed, the V8 engine caches object properties at run time, so the performance benefit of caching manually would be negligible.
Live demo on jsperf.com
Conclusion: don't worry about it!
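For what it's worth, the two forms can be checked for equivalence directly. Below is a minimal sketch; the `parentObject` shape and the 500,000-item array mirror the question's gist, but the names are stand-ins:

```javascript
// Stand-in for the structure in the question: 500,000 integers.
const parentObject = {
  myObject: { items: Array.from({ length: 500000 }, (_, i) => i) },
};

// Case A: nested property access on every iteration.
function caseA() {
  let counter = 0;
  for (let i = 0, len = parentObject.myObject.items.length; i < len; i++) {
    counter += parentObject.myObject.items[i];
  }
  return counter;
}

// Case B: destructure once, then access the local binding.
function caseB() {
  let counter = 0;
  const { items } = parentObject.myObject;
  for (let i = 0, len = items.length; i < len; i++) {
    counter += items[i];
  }
  return counter;
}

console.log(caseA()); // 124999750000, matching the question's output
console.log(caseB()); // 124999750000
```

Both versions compute the same total, and the near-identical timings in the question are consistent with the engine optimizing the repeated property chain.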

Related

Array for-loop: temp save the Array[i] or keep calling Array[i] ? Which is better/faster?

I'm trying to find out whether I should temporarily copy an array item to a variable while I work with it inside a for-loop.
Is there any performance difference, other than making the code more readable?
(JavaScript)
var max = someArray.length;
for (var i = 0; i < max; i++) {
// Here I will use someArray[i] quite often. Like this:
if (someArray[i].position.y > blabla) {
//...
}
}
OR:
var max = someArray.length;
for (var i = 0; i < max; i++) {
var tmp = someArray[i]; // <---- CHANGE
// Here I will use tmp quite often. Like this:
if (tmp.position.y > blabla) {
//...
}
}
Caveat: Worry about performance when you have a specific performance problem to worry about. Until then, write whatever seems clearest and least error prone to you and your team. 99.999999% of the time, the specific performance of a given loop just won't matter in real world terms.
With that said:
With modern JavaScript, I'd probably use a for-of loop (ES2015+) instead:
for (const entry of someArray) {
if (entry.position.y > blabla) {
// ...
}
}
In theory, that uses an iterator behind the scenes, which involves function calls; in practice, it's likely that if you're dealing with an actual array and that array uses the standard iterator, the JavaScript engine will be able to optimize the loop if the loop is a hot spot (and if it isn't, it's not worth worrying about).
Re your two alternatives, if i and someArray don't change in the loop and someArray is just a normal array, the JavaScript engine is likely to be able to optimize it into something like your second loop. As a matter of style, before for-of I always used a local within the loop rather than retyping someArray[i] each time, but that's just because it's easier to type tmp (or some more meaningful name) than someArray[i].
If there's a reason to believe that a specific loop is being slow (so for some reason, it's not getting optimized), then I might go with something like your second example:
for (let i = 0, max = someArray.length; i < max; i++) {
const tmp = someArray[i];
if (tmp.position.y > blabla) {
//...
}
}
But again, this is mostly a matter of style until/unless you have a specific performance problem you're diagnosing and fixing.

JS array length - when is it updated, when is it calculated? [duplicate]

What is the time complexity of a call to array.length in JavaScript? I think it would be constant, since it seems that property is set automatically on all arrays and you're just looking it up?
I think it would be constant since it seems that property is set automatically on all arrays and you're just looking it up?
Right. It's a property which is stored (not calculated) and automatically updated as necessary. The specification is explicit about that here and here amongst other places.
In theory, a JavaScript engine would be free to calculate length on access as though it were an accessor property as long as you couldn't tell (which would mean it couldn't literally be an accessor property, because you can detect that in code), but given that length is used repeatedly a lot (for (let n = 0; n < array.length; ++n) springs to mind), I think we can assume that all JavaScript engines in widespread use do what the spec says or at least something that's constant time access.
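A quick sketch of that stored-and-updated behavior: mutations keep `length` in sync, and assigning to `length` truncates the array.

```javascript
const arr = [10, 20, 30];
console.log(arr.length); // 3 - stored on the array, not recalculated

arr.push(40);            // mutating the array updates length automatically
console.log(arr.length); // 4

arr.length = 2;          // writing to length truncates the array
console.log(arr);        // [10, 20]
```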
Just FWIW: Remember that JavaScript's standard arrays are, in theory, just objects with special behavior. And in theory, JavaScript objects are property bags. So looking up a property in a property bag could, in theory, depend on how many other properties are there, if the object is implemented as some kind of name->value hashmap (and they used to be, back in the bad old days). Modern engines optimize objects (Chrome's V8 famously creates dynamic classes on the fly and compiles them), but operations on those objects can still change property lookup performance. Adding a property can cause V8 to create a subclass, for instance. Deleting a property (actually using delete) can make V8 throw up its hands and fall back into "dictionary mode," which substantially degrades property access on the object.
In other words: It may vary, engine to engine, even object to object. But if you use arrays purely as arrays (not storing other non-array properties on them), odds are you'll get constant-time lookup.
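As a concrete illustration of why `delete` is the odd one out (a sketch): it leaves a hole and does not touch `length`, quite apart from any engine de-optimization it may trigger.

```javascript
const arr = [1, 2, 3];
delete arr[1];           // leaves a hole; may also degrade the engine's
                         // internal representation of the object
console.log(arr.length); // still 3 - delete does not update length
console.log(1 in arr);   // false - index 1 is now a hole
console.log(arr[1]);     // undefined

// To actually remove the element (and shrink length), use splice:
const arr2 = [1, 2, 3];
arr2.splice(1, 1);
console.log(arr2);        // [1, 3]
console.log(arr2.length); // 2
```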
It doesn't seem like a bottleneck, but if you want to be sure, use var len = arr.length and check for yourself. It doesn't hurt, and it seems to be a tad faster on my machine, albeit not a significant difference.
var arr = [];
for (var i = 0; i < 1000000; i++) {
arr[i] = Math.random();
}
var start = new Date();
for (var i = 0; i < arr.length; i++) {
arr[i] = Math.random();
}
var time1 = new Date() - start;
var start = new Date();
for (var i = 0, len = arr.length; i < len; i++) {
arr[i] = Math.random();
}
var time2 = new Date() - start;
document.getElementById("output").innerHTML = ".length: " + time1 + "<br/>\nvar len: " + time2;
<div id="output"></div>

Performance of array includes vs mapping to an Object and accessing it in JavaScript

According to the fundamentals of CS
searching an unsorted list has to occur in O(n) time, whereas direct access into a hash map occurs in O(1) time.
So is it more performant to map an array into a dictionary and then access the element directly, or should I just use includes? This question is specifically about JavaScript, because I believe the answer comes down to core implementation details of how includes() and {} are implemented.
let y = [1,2,3,4,5]
y.includes(3)
or...
let y = {
1: true,
2: true,
3: true,
4: true,
5: true
}
5 in y
It's true that object lookup occurs in constant time - O(1) - so using object properties instead of an array is one option, but if you're just trying to check whether a value is included in a collection, it would be more appropriate to use a Set, which is a (generally unordered) collection of values that can also be looked up in constant time. (Using a plain object instead would require you to have values in addition to your keys, which you don't care about, so use a Set instead.)
const set = new Set(['foo', 'bar']);
console.log(set.has('foo'));
console.log(set.has('baz'));
This pays off when you have to look up multiple values against the same Set. But building the Set (just like adding properties to an object) is O(n) in total, so if you're only going to look up a single value, once, there's no benefit to this or the object technique, and you may as well just use an array includes test.
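In code, the trade-off looks like this (a sketch with made-up values): pay the O(n) build cost once, then every membership check is O(1).

```javascript
const allowed = ['red', 'green', 'blue']; // hypothetical collection

// Build once - O(n) over the number of values.
const allowedSet = new Set(allowed);

// Query many times - each check is O(1) on the Set,
// versus O(n) for allowed.includes(value).
function isAllowed(value) {
  return allowedSet.has(value);
}

console.log(isAllowed('green')); // true
console.log(isAllowed('pink'));  // false
```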
Updated 04/29/2020
As the commenter rightly pointed out, it would seem V8 was optimizing away the Array includes calls. An updated version that assigns the result to a variable and uses it produces more expected results. In that case Object address is fastest, followed by Set has, with Array includes a distant third (on my system/browser).
All the same, I do stand by my original point, that if making micro-optimizations it is worth testing assumptions. Just make sure your tests are valid ;)
Original
Well. Despite the obvious expectation that Object address and Set has should outperform Array includes, benchmarks against Chrome indicate that implementation trumps expectation.
In the benches I ran against Chrome Array includes was far and away the best performer.
I also tested locally with Node and got more expected results. In that Object address wins, followed closely by Set has, then Array includes was marginally slower than both.
Bottom line is, if you're making micro-optimizations (not recommending that) it's worth benchmarking rather than assuming which might be best for your particular case. Ultimately it comes down to the implementation, as your question implies. So optimizing for the target platform is key.
Here's the results I got:
Node (12.6.0):
ops for Object address 7804199
ops for Array includes 5200197
ops for Set has 7178483
Chrome (75.0):
https://jsbench.me/myjyq4ixs1/1
This isn't necessarily a direct answer to the question, but here is a related performance test I ran quickly in my Chrome dev tools.
function getRandomInt(max) {
return Math.floor(Math.random() * max);
}
var arr = [1,2,3];
var t = performance.now();
for (var i = 0; i < 100000; i++) {
var x = arr.includes(getRandomInt(3));
}
console.log(performance.now() - t);
var t = performance.now();
for (var i = 0; i < 100000; i++) {
var n = getRandomInt(3);
var x = n == 1 || n == 2 || n == 3;
}
console.log(performance.now() - t);
VM44:9 9.100000001490116
VM44:16 5.699999995529652
I find the Array includes syntax nice to look at, so I wanted to know whether performance was likely to be an issue the way I use it, for instance for checking whether a variable is one of a set of enums. There doesn't seem to be much impact in situations like this with a short list. Then I ran this:
function getRandomInt(max) {
return Math.floor(Math.random() * max);
}
var t = performance.now();
for (var i = 0; i < 100000; i++) {
var x = [1,2,3].includes(getRandomInt(3));
}
console.log(performance.now() - t);
var t = performance.now();
for (var i = 0; i < 100000; i++) {
var n = getRandomInt(3);
var x = n == 1 || n == 2 || n == 3;
}
console.log(performance.now() - t);
VM83:8 12.600000001490116
VM83:15 4.399999998509884
So the way I actually use it (and like looking at it) performs quite a bit worse, though still not significantly unless it runs a few million times. Using it inside an Array.filter that may run a lot, for example as a React Redux selector, may therefore not be a great idea, like I was about to do when I decided to test this.
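One way to keep the `includes` readability without rebuilding the array literal on every call (a sketch, not from the tests above) is to hoist the list out of the hot path:

```javascript
// Hoisted once, instead of allocating an inline [1, 2, 3] per call.
const VALID_STATES = [1, 2, 3];

function isValidState(n) {
  return VALID_STATES.includes(n);
}

// Behaves like the chained comparisons (n == 1 || n == 2 || n == 3)
// without the per-call array allocation.
console.log(isValidState(2)); // true
console.log(isValidState(9)); // false
```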

Remove element of array without splice()

I'm developing a JavaScript game and I want to keep my memory usage as low as possible.
Therefore I set some objects to null again, so they can get garbage collected.
I read an article which recommends avoiding functions like Array.splice(), because it allocates a new array (for the removed elements), which costs memory.
So I've implemented a JSFiddle with my own function, which deletes the element at a specific index, shifts all elements behind it down by one, and then sets the length to length - 1. This only mutates the existing array instead of allocating a new one:
Function to use instead of splice:
var deleteElem = function(arr, el) {
var index = arr.indexOf(el);
if (index > -1) {
var len = arr.length - 1;
for (var i = index; i < len; i++) {
arr[i] = arr[i + 1];
}
arr.length = len;
}
}
In the JSFiddle, my function is sometimes faster, sometimes slower than splice...
Should I pay more attention to better performance and worse memory, or better memory and worse performance?
What other ways exist to avoid using Array.splice?
You need to realize how jsPerf runs your code. It doesn't run the setup before each run of your snippet - the code runs hundreds or thousands of times per setup.
That means you are operating on an empty array for 99.999999% of the calls and thus not measuring anything useful.
You could at least get some sense out of it by measuring like this: http://jsperf.com/splice-vs-own-function/2. However, note that the array allocation on each run might blunt the differences, so your method may actually be much faster than the benchmark can show.
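If element order doesn't matter, another common splice-free removal technique (not mentioned in the answer above; a sketch) is swap-and-pop: overwrite the doomed element with the last one, then shrink the array by one, avoiding the shifting loop entirely.

```javascript
// Removes the first occurrence of el from arr in place.
// The removal itself is O(1), at the cost of not preserving order
// (the indexOf scan is still O(n)).
function swapRemove(arr, el) {
  const index = arr.indexOf(el);
  if (index > -1) {
    arr[index] = arr[arr.length - 1]; // move the last element into the gap
    arr.pop();                        // shrink the array in place
  }
}

const items = [10, 20, 30, 40];
swapRemove(items, 20);
console.log(items); // [10, 40, 30]
```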

Do loops check the array.length every time when comparing i against array.length?

I was browsing around and I found this:
var i, len;
for(i = 0, len = array.length; i < len; i++) {
//...
}
My first thoughts are:
Why did they do that? (it must be better for some reason)
Is it worth it? (I assume yes; why else would they do it this way?)
Do normal loops (the ones that don't cache the length) check the array.length each time?
A loop consisting of three parts is executed as follows:
for (A; B; C)
A - Executed before the enumeration
B - condition to test
C - expression after each enumeration (so, not if B evaluated to false)
So, yes: The .length property of an array is checked at each enumeration if it's constructed as for(var i=0; i<array.length; i++). For micro-optimisation, it's efficient to store the length of an array in a temporary variable (see also: What's the fastest way to loop through an array in JavaScript?).
Equivalent to for (var i=0; i<array.length; i++) { ... }:
var i = 0;
while (i < array.length) {
...
i++;
}
Is it worth it? (obviously yes, why else would they do it this way?)
Absolutely yes. Because, as you say, the loop will read the array's length on every iteration, which adds overhead. Run the following code snippets in Firebug or the Chrome dev tools and compare:
// create an array with 50,000 items
(function(){
window.items = [];
for (var i = 0; i < 50000; i++) {
items.push(i);
}
})();
// a profiler function that will return given function's execution time in milliseconds
var getExecutionTime = function(fn) {
var start = new Date().getTime();
fn();
var end = new Date().getTime();
console.log(end - start);
}
var optimized = function() {
var newItems = [];
for (var i = 0, len = items.length; i < len; i++) {
newItems.push(items[i]);
}
};
var unOptimized = function() {
var newItems= [];
for (var i = 0; i < items.length; i++) {
newItems.push(items[i]);
}
};
getExecutionTime(optimized);
getExecutionTime(unOptimized);
Here are the approximate results (in milliseconds) in various browsers:
Browser optimized unOptimized
Firefox 14 26
Chrome 15 32
IE9 22 40
IE8 82 157
IE7 76 148
So consider it again, and use the optimized way :)
Note: I tried to run this code on jsPerf, but I couldn't access jsPerf at the time; I guess it was down.
I always thought that in JavaScript, length was just a property of the array object, pre-calculated by previous array operations (creation, addition, removal) or overridden by the user, so you're just looking up a variable anyway. I must admit I had just assumed that because of the lack of parentheses, but looking at the MDN page for array.length, it seems to say the same thing.
In languages where length is a method, or where it is calculated by a standard library function, you should pre-calculate the length before running the loop so it isn't recalculated on every iteration, particularly for large datasets. Even then, in modern high-level languages like Python, len() just returns the length property of the object anyway.
So unless I'm mistaken, the complexity is just O(1), and from that standpoint, even if the variable were slightly faster than a property to lookup each pass, it wouldn't be worth the potential trouble of creating/reusing additional variables outside of the protective for loop scope.
However, I suspect that in this case the example's programmer chose this approach simply out of a habit they picked up in another language and carried forward into JavaScript.
One reason to do this is if you're adding elements to the array during the loop but do not want to iterate over them. Say you want to turn [1, 2, 3] into [1, 2, 3, 1, 2, 3]. You could do that with:
var initialLength = items.length;
for(var i=0; i<initialLength; ++i) {
items.push(items[i]);
}
If you don't save the length before the loop, then array.length will keep increasing and the loop will run until the browser crashes / kills it.
Other than that, as the others said, it mildly affects performance. I wouldn't make a habit out of doing this because "premature optimization is the root of all evil". Plus, if you change the size of the array during the loop, doing this could break your code. For instance, if you remove elements from the array during the loop but keep comparing i to the previous array size, then the loop will try to access elements beyond the new size.
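A sketch of that failure mode: with the length cached, shrinking the array mid-loop makes the index run past the new end and read undefined.

```javascript
const arr = [1, 2, 3, 4];
const seen = [];
for (let i = 0, len = arr.length; i < len; i++) {
  seen.push(arr[i]); // once the array shrinks below i, this reads undefined
  arr.pop();         // shrink the array mid-loop
}
console.log(seen); // [1, 2, undefined, undefined]
```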
Yes
It checks array.length every time. But if we don't want our loop to run over elements added during the iteration, we can use the forEach method.
forEach is a built-in JavaScript method that iterates like a for loop, but it only visits the elements that were in the array before the iteration started.
Let's understand this from the snippet below:
array = [1,2,3,4]
for(i=0;i<=array.length;i++){
console.log(array[i]);
array.push(array[i]+1);
}
The output looks like:
1
2
3
4
5
6
...and so on (infinitely), since it checks array.length on each iteration.
Let's check with the forEach method:
array = [1,2,3,4]
array.forEach((element) => {
console.log(element);
array.push(element+1);
})
console.log("array elements after loop",array);
It only processes the 4 elements which were present in the array before the iteration started.
Note, however, that forEach is affected if we pop elements out of the array: elements removed before being visited are skipped.
Let's see this with an example:
array = [1,2,3,4]
array.forEach((element) => {
console.log(element);
array.pop()
})
console.log("array elements after loop",array);
Here are a number of performance tests for different approaches
http://www.websiteoptimization.com/speed/10/
