Converting a larger byte array to a string - javascript

When N is set to 125K the following works
let N = 125000
let x = [...Array(N)].map((xx, i) => i)
let y = String.fromCodePoint(...x)
console.log(y.length)
When N is set to 128K that same code breaks:
Uncaught RangeError: Maximum call stack size exceeded
This is a common operation: what is the optimal way to achieve the conversion?
Note that I did look at this related Q&A: https://stackoverflow.com/a/3195961/1056563. We should not depend on node.js, and the approaches using fromCharCode.apply are failing as well. Finally, that answer is nearly ten years old.
So what is an up to date performant way to handle this conversion?

The problem is caused by implementations having limits on the number of arguments a single call will accept. An exception is raised when too many arguments (over ~128K in this case) are supplied to String.fromCodePoint via the spread operator.
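For illustration, the limit applies to any call that expands a huge array into an argument list, whether through spread syntax or Function.prototype.apply, and the exact threshold varies by engine, so the ~128K figure above is environment-specific (a small, hedged demonstration; 200000 is just an arbitrary size large enough to trip typical limits):
const big = Array.from({ length: 200000 }, (_, i) => i % 0x10000);
try {
  String.fromCodePoint(...big);            // spread syntax
} catch (e) {
  console.log('spread:', e.message);       // e.g. "Maximum call stack size exceeded"
}
try {
  String.fromCharCode.apply(null, big);    // apply() hits the same kind of limit
} catch (e) {
  console.log('apply:', e.message);
}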
One way to solve this relatively efficiently, albeit with slightly more code, is to batch the operation across multiple calls. Here is my proposed implementation, which addresses what I perceive as issues relating to scaling performance and to code points outside the BMP (String.fromCharCode works on UTF-16 code units and would require surrogate pairs to be supplied explicitly, whereas fromCodePoint handles such code points directly, which is why it is preferable here).
let N = 500 * 1000;
let A = [...Array(N)].map((x, i) => i); // start with "an array".

function codePointsToString(cps) {
  let rs = [];
  let batch = 32767; // Supported 'all' browsers
  for (let i = 0; i < cps.length; ) {
    let e = i + batch;
    // Build batch section, defer to Array.join.
    rs.push(String.fromCodePoint.apply(null, cps.slice(i, e)));
    i = e;
  }
  return rs.join('');
}

var result = codePointsToString(A);
console.log(result.length);
Also, I wanted a trophy. The code above should run in O(n) time and minimize the number of objects allocated. No guarantees on this being the 'best' approach. A benefit of the batching approach, and why the cost of apply (or spread invocation) is subsumed, is that there are significantly fewer calls to String.fromCodePoint and fewer intermediate strings. YMMV - especially across environments.
Here is an online benchmark. All tests have access to, and use, the same generated "A" array of 500k elements.

The given answers perform poorly: I measured 19 seconds on one of them, and the others are similar (*). It is necessary to preallocate the output array. The following runs in 20 to 40 milliseconds, roughly three orders of magnitude faster.
function wordArrayToByteArray(hash) {
  var result = [...Array(hash.sigBytes)].map(x => -1)
  let words = hash.words
    // map each word to an array of bytes
    .map(function (v) {
      // create an array of 4 bytes (less if sigBytes says we have run out)
      var bytes = [0, 0, 0, 0].slice(0, Math.min(4, hash.sigBytes))
        // grab that section of the 4 byte word
        .map(function (d, i) {
          return (v >>> (8 * i)) % 256;
        })
        // flip that
        .reverse();
      // remove the bytes we've processed
      // from the bytes we need to process
      hash.sigBytes -= bytes.length;
      return bytes;
    })
  words.forEach((w, i) => {
    result.splice(i * 4, 4, ...w)
  })
  result = result.map(function (d) {
    return String.fromCharCode(d);
  }).join('')
  return result
}
(*) With the possible exception of #User2864740's answer - we are awaiting his numbers. But his solution also uses apply() inside the loop, which leads me to believe it will also be slow.

"Old fashion" JavaScript:
var N=125000;
var y="";
for(var i=0; i<N; i++)
y+=String.fromCharCode(i);
console.log(y.length);
Worked with N=1000000

Related

How to discriminate two numbers that are very near?

So my case goes like this: my software, which is being developed in JavaScript, needs to manipulate exact numeric values, but sometimes the values end up way too near each other and I need to discriminate between them.
This is a case example:
0:(2) [112.02598008561951, 9.12963236661007]
1:(2) [112.02598008561952, 9.129632366610064]
2:(2) [9.751846481442218, 3.5376744911193576]
In this array, positions 0 and 1 hold similar values with only a slight difference in the last decimals, and the one that counts is position 0. Because the two numbers are so near each other, they mess with the process that follows.
So, how do I discriminate near numbers and keep only the first of the similar ones?
In the end, the result would be an array like this:
0:(2) [112.02598008561951, 9.12963236661007]
1:(2) [9.751846481442218, 3.5376744911193576]
I tried truncating, but I need the whole number to work with.
Edit: as one of the comments asked whether the number of points can vary, in my real problem I get a series of numbers that I sort; normally I get about 3 points, or in the best case 2 points.
Sometimes this problem happens when I get near numbers: the first layer of sorting doesn't work as intended, and then the next part doesn't work well either.
In short, you can assume it is always about 3 coordinate positions.
In short, your easiest option is to round to a fixed number of decimal places. Floats in JS (and in computing in general) can be tricky. For example, this should make you want to throw your computer:
var x = 0.1 * 0.2; //-> 0.020000000000000004
There are different use cases that need super exact precision (e.g. when dealing with money, the trajectory of a satellite, etc.), but most use cases only need "good enough" precision. In your case, it's best to round all of your numbers to a fixed decimal length so that you don't run into these low-level inaccuracies.
var ACCURACY = 100000000;
var round = (num) => Math.round(num * ACCURACY) / ACCURACY;
var x = round(0.1 * 0.2); //-> 0.02
If you trust the numbers you have and you're just needing to filter out a pair which is close to another pair, you will need to write a little function to apply your heuristics.
var areClose = (x, y) => Math.abs(x - y) < 0.0000000001;

var filterPoints = (arr) => {
  return arr.filter(([x, y], i) => {
    for (var n = i - 1; n >= 0; n--) {
      if (areClose(x, arr[n][0]) && areClose(y, arr[n][1])) {
        return false;
      }
    }
    return true;
  });
}

filterPoints([
  [112.02598008561951, 9.12963236661007],
  [112.02598008561952, 9.129632366610064],
  [9.751846481442218, 3.5376744911193576],
]);
// [
//   [112.02598008561951, 9.12963236661007],
//   [9.751846481442218, 3.5376744911193576]
// ]
Note: this will keep the "first" set of values. If you wish to keep the "last" set, then you can flip the inner loop to crawl upwards:
for(var n = i + 1; n < arr.length; n++) { ...
Let's see if I understood correctly: you have this array of vertex points; usually it's just a two-element two-dimensional array, but sometimes it receives an extra vertex-point array with a slightly different value (a difference of about 1*10^-14), and you want to discard the extra higher values.
I came up with something like this:
const arr = [
  [112.02598008561951, 9.12963236661007],
  [112.02598008561952, 9.129632366610064],
  [9.751846481442218, 3.5376744911193576],
];

for (let i = 0; i < arr.length - 1; i++) {
  const diff = Math.abs(arr[i][0] - arr[i + 1][0])
  if (diff <= 0.00000000000002) arr.splice(i + 1, 1);
}

console.log("NEW ARR", arr)
This just checks the first element of each pair since, if I understood correctly, the second element automatically differs by a similar amount.
I'm using a 2*10^-14 threshold since 1*10^-14 is not enough; I'm not sure whether that's due to JS float-precision issues.
You could sort and reduce:
let arr = [
  [112.02598008561952, 9.129632366610064],
  [112.02598008561951, 9.12963236661007],
  [9.751846481442218, 3.5376744911193576]
]

arr.sort((a, b) => a[0] - b[0]); // swap a and b for descending

const precision = 0.00000000000002;
arr = arr.reduce((acc, cur, i) => {
  if (i === 0) { acc.push(cur); return acc }
  const diff = Math.abs(acc[acc.length - 1][0] - cur[0])
  if (diff > precision) acc.push(cur)
  return acc
}, [])

console.log(arr)
What about something like
const removeDecimalPlaces = (num) => Math.floor(num * 10000000000000) / 10000000000000;
console.log(removeDecimalPlaces(112.02598008561951) === removeDecimalPlaces(112.02598008561952))

How can I efficiently search a string for occurrences of words?

Essentially, I have a Set of words, about 250,000 of them, and I want to be able to return a list of which ones are found in a given string.
e.g. if the input string is 'APPLEASEDITION', I want to return
[APP, APPLE, PLEA, PLEAS, PLEASE, PLEASED, LEA, LEAS, LEASE, LEASED, EA, EAS, EASE, EASED, AS, SEDITION, EDITION, IT, TI, ON]
I came up with this code, which works faster than the method mentioned above for shorter input strings (up to 15 characters), but doubles in execution time with each added letter:
const findWords = (instring, solutions = null) => {
  if (!solutions) solutions = new Set();
  if (!instring) {
    return new Set();
  }
  if (words[instring]) {
    solutions.add(instring);
  }
  const suffix = instring.slice(1);
  const prefix = instring.slice(0, instring.length - 1);
  if (!solutions.has(prefix))
    solutions = new Set([...solutions, ...findWords(prefix, solutions)]);
  if (!solutions.has(suffix))
    solutions = new Set([...solutions, ...findWords(suffix, solutions)]);
  return solutions;
};
Wondering if anyone can help me out optimizing the code?
Edit:
I made a different solution, it works much better
const getAllSubstrings = (str) => {
  let result = [];
  for (let i = 0; i < str.length; i++) {
    for (let j = i + 1; j < str.length + 1; j++) {
      result.push(str.slice(i, j));
    }
  }
  return result;
}

const findWords = (instring) => {
  const solutions = []
  let subs = getAllSubstrings(instring)
  for (let sub of subs) {
    if (words[sub])
      solutions.push(sub)
  }
  return solutions
}
Still open to possible improvements, but this works well enough for my use case
As it stands your logic assumes your input starts or ends with the phrase, but doesn't consider words in the middle - you'll need to generate permutations
Convert your dictionary to a hash where the words are keys - lookups go from O(n) to O(1) - so you can check whether a possible word is in the dictionary with dictionary[possibleWord]
You could convert your array of dictionary words into a binary search tree or a trie - there might be a performance benefit to converting your source text to a collection of BSTs/Tries, where each one represents a possible word/permutation, and then comparing BSTs/Tries rather than strings, but I'm not sure how that'd be faster than string comparison at the moment.
You can limit the length of the candidate substrings to the length of the longest word in your dictionary. You'll still end up with a lot of candidates, but possibly fewer than you have currently.
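A rough sketch of the hash-keyed dictionary plus max-length ideas (illustrative names only; `dictionary` is assumed to be an array of uppercase words rather than your actual data structure):
const buildLookup = (dictionary) => {
  // A Set gives O(1) membership checks; also record the longest word
  // so substring generation can be capped below.
  const words = new Set(dictionary);
  const maxLen = dictionary.reduce((m, w) => Math.max(m, w.length), 0);
  return { words, maxLen };
};

const findWordsIn = (input, { words, maxLen }) => {
  const found = new Set();
  for (let i = 0; i < input.length; i++) {
    // Substrings longer than the longest dictionary word can never match.
    const limit = Math.min(input.length, i + maxLen);
    for (let j = i + 1; j <= limit; j++) {
      const candidate = input.slice(i, j);
      if (words.has(candidate)) found.add(candidate);
    }
  }
  return [...found];
};

// Example: findWordsIn('APPLEASEDITION', buildLookup(['APP', 'APPLE', 'PLEA']))
// -> ['APP', 'APPLE', 'PLEA']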
As the comments state you may want to do this server side for more power/in a language more efficient than JS, or using WASM.
Some javascript libraries that have binary search tree tools:
https://developers.google.com/closure/library/
https://www.npmjs.com/package/binary-search-tree
https://www.npmjs.com/package/trie-search
Alternatively, you might be able to create two hashes (one of permutations, one of dictionary words), or another data structure that's made for creating a "diff" or "overlap", and extract the keys that are in both sets.

Finding a Hamiltonian path with JavaScript. How to improve efficiency?

I'm trying to solve this kata:
Given an integer N (<1000), return an array of integers 1..N where the sum of each 2 consecutive numbers is a perfect square. If that's not possible, return false.
For example, if N=15, the result should be this array: [9, 7, 2, 14, 11, 5, 4, 12, 13, 3, 6, 10, 15, 1, 8]. Below N=14, there's no answer, so the function should return false.
I thought 'how hard can this be?' and it's been long days in the rabbit hole. I've been programming for just a few months and don't have a background in CS, so I'll write what I understand of the problem so far, trying to use the proper concepts, but please feel free to tell me if any expression is not correct.
Apparently, the problem is very similar to a known problem in graph theory called TSP. In this case, the vertices are connected if the sum of them is a perfect square. Also, I don't have to look for a cycle, just find one Hamiltonian Path, not all.
I understand that what I'm using is backtracking. I build an object that represents the graph and then try to find the path recursively. This is how I build the object:
function buildAdjacentsObject (limit) {
  const potentialSquares = getPotentialSquares(limit)
  const adjacents = {}
  for (let i = 0; i < (limit + 1); i++) {
    adjacents[i] = {}
    for (let j = 0; j < potentialSquares.length; j++) {
      if (potentialSquares[j] > i) {
        const dif = potentialSquares[j] - i
        if (dif <= limit) {
          adjacents[i][dif] = 1
        } else {
          break
        }
      }
    }
  }
  return adjacents
}

function getPotentialSquares (limit) {
  const maxSum = limit * 2 - 1
  let square = 4
  let i = 3
  const potentialSquares = []
  while (square <= maxSum) {
    potentialSquares.push(square)
    square = i * i
    i++
  }
  return potentialSquares
}
At first I was using a hash table with an array of adjacent nodes on each key. But when my algorithm had to delete vertices from the object, it had to look for elements in arrays several times, which took linear time every time. I made the adjacent vertices hashable and that improved my execution time. Then I look for the path with this function:
function findSquarePathInRange (limit) {
  // Build the graph object
  const adjacents = buildAdjacentsObject(limit)
  // Deep copy the object before making any changes
  const adjacentsCopy = JSON.parse(JSON.stringify(adjacents))
  // Create empty path
  const solution = []
  // Recursively complete the path
  function getSolution (currentCandidates) {
    if (solution.length === limit) {
      return solution
    }
    // Sort the candidate vertices to start with the ones with fewer adjacent vertices
    currentCandidates = currentCandidates.sort((a, b) => {
      return Object.keys(adjacentsCopy[a]).length -
        Object.keys(adjacentsCopy[b]).length
    })
    for (const candidate of currentCandidates) {
      // Add the candidate to the path
      solution.push(candidate)
      // and delete it from the object
      for (const candidateAdjacent in adjacents[candidate]) {
        delete adjacentsCopy[candidateAdjacent][candidate]
      }
      if (getSolution(Object.keys(adjacentsCopy[candidate]))) {
        return solution
      }
      // If no solution was found, delete the element from the path
      solution.pop()
      // and add it back to the object
      for (const candidateAdjacent in adjacents[candidate]) {
        adjacentsCopy[candidateAdjacent][candidate] = 1
      }
    }
    return false
  }
  const endSolution = getSolution(
    Array.from(Array(limit).keys()).slice(1)
  )
  // The elements of the path can't be strings
  return (endSolution) ? endSolution.map(x => parseInt(x, 10)) : false
}
My solution works 'fast' but it's not fast enough. I need to pass more than 200 tests in less than 12 seconds and so far it's only passing 150. Probably both my algorithm and my usage of JS can be improved, so, my questions:
Can you see a bottleneck in the code? The sorting step should be the one taking the most time, but it also gets me to the solution faster. Also, I'm not sure if I'm using the best data structure for this kind of problem. I tried classic looping instead of for..in and for..of, but it didn't change the performance.
Do you see any place where I can save previous calculations to look for them later?
Regarding the last question, I read that there is a dynamic programming solution to the problem, but everywhere I found one it looks for the minimum distance, the number of paths, or the existence of a path, not the path itself. I read this everywhere but I'm unable to apply it:
Also, a dynamic programming algorithm of Bellman, Held, and Karp can be used to solve the problem in time O(n^2 * 2^n). In this method, one determines, for each set S of vertices and each vertex v in S, whether there is a path that covers exactly the vertices in S and ends at v. For each choice of S and v, a path exists for (S, v) if and only if v has a neighbor w such that a path exists for (S − v, w), which can be looked up from already-computed information in the dynamic program.
I just can't see how to implement that if I'm not looking for all the paths. I found an implementation of a similar problem in Python that uses a cache and some bit manipulation, and while I could translate it from Python, I'm not sure how to apply those concepts to my algorithm.
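For concreteness, the recurrence in that quote maps to a bitmask table (an illustrative sketch only; heldKarpSquarePath is a made-up name, and because the table has N * 2^N entries it is only feasible for roughly N <= 20, which is part of why it doesn't directly help for N up to 1000):
function heldKarpSquarePath (n) {
  const isSquare = (x) => Number.isInteger(Math.sqrt(x))
  // adj[v] lists the neighbours of vertex v + 1 (vertices handled 0-based internally).
  const adj = []
  for (let v = 0; v < n; v++) {
    adj[v] = []
    for (let w = 0; w < n; w++) {
      if (w !== v && isSquare(v + w + 2)) adj[v].push(w)
    }
  }
  const full = (1 << n) - 1
  // dp[mask][v]: can some path visit exactly the vertices in mask and end at v?
  // parent[mask][v] remembers the previous vertex so the path can be rebuilt.
  const dp = Array.from({ length: full + 1 }, () => new Array(n).fill(false))
  const parent = Array.from({ length: full + 1 }, () => new Array(n).fill(-1))
  for (let v = 0; v < n; v++) dp[1 << v][v] = true
  for (let mask = 1; mask <= full; mask++) {
    for (let v = 0; v < n; v++) {
      if (!dp[mask][v]) continue
      for (const w of adj[v]) {
        if (mask & (1 << w)) continue // w is already on the path
        const next = mask | (1 << w)
        if (!dp[next][w]) {
          dp[next][w] = true
          parent[next][w] = v
        }
      }
    }
  }
  // Any end vertex that covers the full set yields a Hamiltonian path.
  for (let end = 0; end < n; end++) {
    if (!dp[full][end]) continue
    const path = []
    let mask = full
    let cur = end
    while (cur !== -1) {
      path.push(cur + 1) // back to 1-based labels
      const prev = parent[mask][cur]
      mask &= ~(1 << cur)
      cur = prev
    }
    return path.reverse()
  }
  return false
}
// Example: heldKarpSquarePath(15) returns one valid ordering of 1..15
// (such as the array shown in the kata description).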
I'm currently out of ideas so any hint of something to try would be super helpful.
EDIT 1:
After Photon's comment, I tried going back to using a hash table for the graph, storing the adjacent vertices as arrays. I also added a separate array of booleans to keep track of the remaining vertices.
That improved my efficiency a lot. With these changes I avoided the need to convert object keys to arrays all the time, there was no need to copy the graph object since it was not going to be modified, and no need to loop after adding one node to the path. The bad part is that I then needed to check that separate object when sorting, to see which adjacent vertices were still available. Also, I had to filter the arrays before passing them to the next recursion.
Yosef's approach from the first answer, using an array to store the adjacent vertices and accessing them by index, proved even more efficient. My code so far (no changes to the square-finding function):
function square_sums_row (limit) {
  const adjacents = buildAdjacentsObject(limit)
  const adjacentsCopy = JSON.parse(JSON.stringify(adjacents))
  const solution = []
  function getSolution (currentCandidates) {
    if (solution.length === limit) {
      return solution
    }
    currentCandidates = currentCandidates.sort((a, b) => {
      return adjacentsCopy[a].length - adjacentsCopy[b].length
    })
    for (const candidate of currentCandidates) {
      solution.push(candidate)
      for (const candidateAdjacent of adjacents[candidate]) {
        adjacentsCopy[candidateAdjacent] = adjacentsCopy[candidateAdjacent]
          .filter(t => t !== candidate)
      }
      if (getSolution(adjacentsCopy[candidate])) {
        return solution
      }
      solution.pop()
      for (const candidateAdjacent of adjacents[candidate]) {
        adjacentsCopy[candidateAdjacent].push(candidate)
      }
    }
    return false
  }
  return getSolution(Array.from(Array(limit + 1).keys()).slice(1))
}

function buildAdjacentsObject (limit) {
  const potentialSquares = getPotentialSquares(limit)
  const squaresLength = potentialSquares.length
  const adjacents = []
  for (let i = 1; i < (limit + 1); i++) {
    adjacents[i] = []
    for (let j = 0; j < squaresLength; j++) {
      if (potentialSquares[j] > i) {
        const dif = potentialSquares[j] - i
        if (dif <= limit) {
          adjacents[i].push(dif)
        } else {
          break
        }
      }
    }
  }
  return adjacents
}
EDIT 2:
The code performs fine in most of the cases, but my worst case scenarios suck:
// time for 51: 30138.229ms
// time for 77: 145214.155ms
// time for 182: 22964.025ms
EDIT 3:
I accepted Yosef's answer as it was super useful for improving the efficiency of my JS code. I then found a way to tweak the algorithm to avoid paths with dead ends, using some of the restrictions from the paper A Search Procedure for Hamilton Paths and Circuits.
Basically, before calling another recursion, I check 2 things:
Whether there is any node with no edges left that is not yet part of the path, while the path is still missing more than 1 node
Whether there are more than 2 remaining nodes with only 1 edge (one can be the following node, which had 2 edges before deleting the edge to the current node, and the other can be the last node)
Both situations make it impossible to find a Hamiltonian path with the remaining nodes and edges (if you draw the graph it will be clear why). Following that logic, there is another possible improvement if you check nodes with only 2 edges (one way in and one way out); I think you could use that to delete other edges in advance, but it was not necessary, at least for me.
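To make the two checks concrete, here is a hypothetical sketch (illustrative names only, not the actual kata code; remaining[v] is assumed to hold the still-available neighbours of v, used[v] marks vertices already on the path, and missing is how many vertices the path still needs):
function canStillFinish (remaining, used, missing) {
  let degreeOne = 0
  for (let v = 1; v < remaining.length; v++) {
    if (used[v]) continue
    const degree = remaining[v].length
    // Check 1: an unvisited vertex with no edges left is unreachable,
    // unless it is the single vertex still missing from the path.
    if (degree === 0 && missing > 1) return false
    // Check 2: at most two unvisited vertices may be down to one edge
    // (the next vertex on the path and the final endpoint).
    if (degree === 1 && ++degreeOne > 2) return false
  }
  return true
}
Calling something like this before each recursive step prunes branches that cannot possibly be completed.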
Now the algorithm performs a bit worse in most cases (where just sorting by remaining edges was already enough to pick the next node, the checks add extra work), but it solves the worst cases in much better time. For example, limit = 77 is now solved in 15 ms, while limit = 1000 went from 30 ms to 100 ms.
This is a really long post; if you have any edit suggestions, let me know. I don't think posting the final code is the best idea, given that you can't check solutions on the platform before solving the kata, but the accepted answer and this final edit should be good guidance for thinking about this last part while still learning something. Hope it's useful.
By replacing the object with an array, you save yourself from converting the object to an array every time you want to find the length (which you do a lot - at every step of the sort algorithm) or get the keys for the next candidates. In my tests the code below has been far more effective in terms of execution time
(0.102s vs 1.078s for limit=4500 on my machine):
function buildAdjacentsObject (limit) {
  const potentialSquares = getPotentialSquares(limit)
  const adjacents = [];
  for (let i = 0; i < (limit + 1); i++) {
    adjacents[i] = [];
    for (let j = 0; j < potentialSquares.length; j++) {
      if (potentialSquares[j] > i) {
        const dif = potentialSquares[j] - i
        if (dif <= limit) {
          adjacents[i].push(dif)
        } else {
          break
        }
      }
    }
  }
  return adjacents
}

function getPotentialSquares (limit) {
  const maxSum = limit * 2 - 1
  let square = 4
  let i = 3
  const potentialSquares = []
  while (square <= maxSum) {
    potentialSquares.push(square)
    square = i * i
    i++
  }
  return potentialSquares
}

function findSquarePathInRange (limit) {
  // Build the graph object
  const adjacents = buildAdjacentsObject(limit)
  // Deep copy the object before making any changes
  const adjacentsCopy = JSON.parse(JSON.stringify(adjacents))
  // Create empty path
  const solution = [];
  // Recursively complete the path
  function getSolution (currentCandidates) {
    if (solution.length === limit) {
      return solution
    }
    // Sort the candidate vertices to start with the ones with fewer adjacent vertices
    currentCandidates = currentCandidates.sort((a, b) => {
      return adjacentsCopy[a].length - adjacentsCopy[b].length
    });
    for (const candidate of currentCandidates) {
      // Add the candidate to the path
      solution.push(candidate)
      // and delete it from the object
      for (const candidateAdjacent of adjacents[candidate]) {
        adjacentsCopy[candidateAdjacent] = adjacentsCopy[candidateAdjacent].filter(t => t !== candidate)
      }
      if (getSolution(adjacentsCopy[candidate])) {
        return solution
      }
      // If no solution was found, delete the element from the path
      solution.pop()
      // and add it back to the object
      for (const candidateAdjacent of adjacents[candidate]) {
        adjacentsCopy[candidateAdjacent].push(candidate);
      }
    }
    return false
  }
  const endSolution = getSolution(
    Array.from(Array(limit).keys()).slice(1)
  )
  // The elements of the path are already numbers here, no conversion needed
  return endSolution
}

var t = new Date().getTime();
var res = findSquarePathInRange(4500);
var t2 = new Date().getTime();
console.log(res, ((t2 - t) / 1000).toFixed(4) + 's');

Different ways to create Javascript arrays

I want to understand the performance difference for constructing arrays. Running the following program, I am puzzled by the output below:
Time for range0: 521
Time for range1: 149
Time for range2: 1848
Time for range3: 8411
Time for range4: 3487
I don't understand why 3 takes longer than 4, while 1 takes less time than 2. Also, the map function seems very inefficient; what is it useful for?
function range0(start, count) {
  var arr = [];
  for (var i = 0; i < count; i++) {
    arr.push(start + i);
  }
  return arr;
}

function range1(start, count) {
  var arr = new Array(count);
  for (var i = 0; i < count; i++) {
    arr[i] = start + i;
  }
  return arr;
}

function range2(start, count) {
  var arr = Array.apply(0, Array(count));
  for (var i = 0; i < count; i++) {
    arr[i] = start + i;
  }
  return arr;
}

function range3(start, count) {
  var arr = new Array(count);
  return arr.map(function(element, index) {
    return index + start;
  });
}

function range4(start, count) {
  var arr = Array.apply(0, Array(count));
  return arr.map(function(element, index) {
    return index + start;
  });
}

function profile(range) {
  var iterations = 100000,
      start = 0, count = 1000,
      startTime, endTime, finalTime;
  startTime = performance.now();
  for (var i = 0; i < iterations; ++i) {
    range(start, count);
  }
  endTime = performance.now();
  finalTime = (endTime - startTime);
  console.log(range.name + ': ' + finalTime + ' ms');
}

[range0, range1, range2, range3, range4].forEach(profile);
I don't understand why 3 takes longer than 4
Me neither. It is a surprising result, given my superficial analysis and the results I obtained by profiling the code. On my computer running Google Chrome 50, range4 is twice as slow compared to range3.
I'd have to study the Javascript implementation you are using in order to figure out why that happens.
while 1 takes less time than 2.
range1 executes faster because it uses a loop and optimizes memory allocations, while range2 uses functions and does unnecessary memory allocations.
Also, seems the map function is very inefficient; what is the use of it?
The map function is used to compute a new Array based on the values of an existing one.
[1, 2, 3, 4, 5].map(number => number * number);
// [1, 4, 9, 16, 25]
On my computer
Time for range0: 783
Time for range1: 287
Time for range2: 10541
Time for range3: 14981
Time for range4: 28243
My results reflect my expectations regarding the performance of each function.
A superficial analysis of each function
range0
Creates an Array and populates it via a loop. It is the most simple and straightforward code possible. I suppose it could be understood as the baseline for performance comparison.
range1
Uses the Array constructor with a length parameter. This greatly optimizes the underlying memory allocation required to store the elements. Since the exact number of elements is known beforehand, the memory does not have to be reallocated as the number of elements grows; the exact amount of memory needed to store all elements can be allocated exactly once, when the Array is instantiated.
range2
Applies an arguments list built from Array(count) to the Array constructor, with this set to the number 0. Because apply reads the holes of that sparse array as undefined, the result is a dense array of count undefined elements, unlike new Array(count), which only sets the length and creates no elements at all. Building and reading that throw-away arguments list is extra work, and the loop overwrites every element afterwards anyway.
(If all you want is an array of a given length, new Array(count) or Array.call(null, count) gives you that without materializing count arguments.)
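A small illustration of that difference (the behaviour is specified: apply reads holes as undefined, and map skips holes):
const sparse = new Array(3);              // length 3, but no elements at all
const dense = Array.apply(0, Array(3));   // [undefined, undefined, undefined]

console.log(0 in sparse);                 // false - index 0 is a hole
console.log(0 in dense);                  // true  - index 0 really holds undefined

console.log(sparse.map((_, i) => i));     // [ <3 empty items> ] - callback never runs
console.log(dense.map((_, i) => i));      // [ 0, 1, 2 ]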
range3
Like range1, but uses map with a callback instead of a loop. Note that map skips holes and new Array(count) produces nothing but holes, so the callback is never actually invoked here; the cost is in iterating count indices of a sparse array and allocating the (equally sparse) result, and range3 never actually fills in the range values.
Either way, two separate array allocations take place instead of just one as in range1.
range4
Combines the allocation overhead of range2 with the map call of range3, except that here the array passed to map is dense, so the callback really is invoked count times.
Amazingly, it executes faster than range3 on your computer. It's unclear to me why that happened; I suppose one would have to investigate your particular JavaScript implementation to figure it out.

Javascript performance array of objects preassignment vs direct use

I have a doubt about how speed is affected by using arrays of object data: should I use them directly, or pre-assign the values to simple vars?
I have an array of elements, for example 1000 elements.
Every array item is an object with 10 properties (for example).
And finally I use some of these properties to do 10 calculations.
So I have APPROACH1:
var nn = my_array.length;
var a1, a2, a3, a4 ... a10;
var cal1, cal2, .. cal10;
for (var x = 0; x < nn; x++) {
  // assignment
  a1 = my_array[x].data1;
  ..
  a10 = my_array[x].data10;
  // calculations
  cal1 = a1 * a10 + a2 * Math.abs(a3);
  ...
  cal10 = (a8 - a7) * 4 + Math.sqrt(a9);
}
And APPROACH2
var nn = my_array.length;
for (var x = 0; x < nn; x++) {
  // calculations
  cal1 = my_array[x].data1 * my_array[x].data10 + my_array[x].data2 * Math.abs(my_array[x].data3);
  ...
  cal10 = (my_array[x].data8 - my_array[x].data7) * 4 + Math.sqrt(my_array[x].data9);
}
Is assigning a1 ... a10 from my_array and then doing the calculations faster than doing the calculations with my_array[x].property directly, or is it the other way around?
I don't know how the 'JS compiler' works...
The short answer is: it depends on your JavaScript engine. There is no right and wrong here, only "this has worked in the past" and "this doesn't seem to speed things up any more".
<tl;dr> If I weren't going to run a jsperf test, I would go with the "Cached example" one example down. </tl;dr>
A general rule of thumb is (read: was) that if you are going to use an array element more than once, it can be faster to cache it in a local variable, and if you are going to use an object property more than once, it should also be cached.
Example:
You have this code:
// Data generation (not discussed here)
function GetLotsOfItems() {
  var ret = [];
  for (var i = 0; i < 1000; i++) {
    ret[i] = { calc1: i * 4, calc2: i * 10, calc3: i / 5 };
  }
  return ret;
}

// Your calculation loop
var myArray = GetLotsOfItems();
for (var i = 0; i < myArray.length; i++) {
  var someResult = myArray[i].calc1 + myArray[i].calc2 + myArray[i].calc3;
}
Depending on your browser (read: this REALLY depends on your browser / its JavaScript engine) you could make this faster in a number of different ways.
You could for example cache the element being used in the calculation loop
Cached example:
// Your cached calculation loop
var myArray = GetLotsOfItems();
var element;
var arrayLen = myArray.length;
for (var i = 0; i < arrayLen; i++) {
  element = myArray[i];
  var someResult = element.calc1 + element.calc2 + element.calc3;
}
You could also take this a step further and run it like this:
var myArray = GetLotsOfItems();
var element;
for (var i = myArray.length; i--;) { // Start at the last element, travel backwards to the start
  element = myArray[i];
  var someResult = element.calc1 + element.calc2 + element.calc3;
}
What you do here is start at the last element, use the condition block to check whether i > 0, and only AFTER that decrement it by one (so the loop still runs with i == 0, whereas --i would run from 1000 down to 1). However, in modern code this is usually slower, because you read the array backwards, and reading an array in the forward order usually allows for run-time or compile-time optimizations (which are automatic, mind you, so you don't need to do anything for them to work). Depending on your JavaScript engine this might not be applicable, and the backwards-going loop could be faster.
However, in my experience this runs slower in Chrome than the second, "kinda-optimized" version (I have not tested this in jsperf, but in a CSP solver I wrote two years ago I ended up caching array elements, but not properties, and I ran my loops from 0 to length).
You should (in most cases) write your code in a way that makes it easy to read and maintain. Caching array elements is, in my opinion, as easy to read (if not easier) as non-cached access, it might be faster (it is, at least, not slower), and it is quicker to write if you use an IDE with autocomplete for JavaScript :P
