Browser optimization of splice to slice - JavaScript

I have the following 2 functions:
// destructive
const getEveryX = (arr, x, offset) => {
  const _arr = [...arr];
  const _arrArr = [];
  // split off a leading chunk when a valid offset is given
  if (offset && offset < _arr.length) {
    _arrArr.push(_arr.splice(0, offset));
  }
  while (_arr.length > x) {
    _arrArr.push(_arr.splice(0, x));
  }
  if (_arr.length) {
    _arrArr.push(_arr);
  }
  return _arrArr;
};
and
// copying
const getEveryX2 = (arr, x, offset) => {
  let _pointer = 0;
  const _arrArr = [];
  // split off a leading chunk when a valid offset is given
  if (offset && offset < arr.length) {
    _arrArr.push(arr.slice(0, offset));
    _pointer = offset;
  }
  while (arr.length >= _pointer + x) {
    _arrArr.push(arr.slice(_pointer, _pointer + x));
    _pointer += x;
  }
  if (_pointer < arr.length) {
    _arrArr.push(arr.slice(_pointer));
  }
  return _arrArr;
};
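For reference, a call without an offset should chunk the array like this (illustrative values):

getEveryX([1, 2, 3, 4, 5, 6, 7], 3);  // [[1, 2, 3], [4, 5, 6], [7]]
getEveryX2([1, 2, 3, 4, 5, 6, 7], 3); // [[1, 2, 3], [4, 5, 6], [7]]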
I wrote the second function because I thought it would be faster to copy the parts I need from the original array instead of copying the original and splicing out the beginning every time (both functions should do the same; the first uses splice, the second slice). I tested it, and this doesn't seem to be the case: they both take the same time.
My theory is that the compiler knows what I want to do in both cases and creates the same code.
I could also be completely wrong, and the second version shouldn't be faster without optimizations.
Do you know what is going on here?

I tested it and this doesn't seem to be the case, they both take the same time.
No, your test case is broken. jsPerf doesn't run setup and teardown around each run of your snippets; it runs your snippets in a loop between setup and teardown. You are emptying testArr on the first run, so the rest of the iterations only measure the while (testArr.length > 1) condition evaluation (yielding false).
I've updated the benchmark, and as expected, slice is now performing better.
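A minimal sketch of a fair manual benchmark (names and sizes are illustrative, not from the original jsPerf page): each timed run gets a fresh copy of the input, so a destructive snippet can't empty the array once and skew every later iteration.

const base = Array.from({ length: 10000 }, (_, i) => i);

function bench(label, fn) {
  const t0 = performance.now();
  for (let run = 0; run < 1000; run++) {
    fn([...base], 10); // fresh input every run
  }
  console.log(label, (performance.now() - t0).toFixed(1), 'ms');
}

bench('splice', getEveryX);
bench('slice', getEveryX2);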

Related

Can we change the iteration direction of Array.find?

Look at this crazy question... I have an array with 30,000 items, and I have to run something like this over it:
const bigArray = [
  { createdAt: 1 },
  { createdAt: 2 },
  { createdAt: 3 },
  // and so on... up to 30,000
];
const found = bigArray.find(x => x.createdAt > 29950)
And the thing here is that I know that, 100% of the time, that element will be at index 29,950 (approx.), because the array is already sorted by createdAt (coming from the backend).
How does .find work? Does it iterate starting from the first element? Is there a way to say "I know it's closer to the end... change your behavior"?
Of course there is the alternative of doing something like:
bigArray.reverse()
const prevIndex = bigArray.findIndex(x => x.createdAt <= 29950);
const found = bigArray[prevIndex - 1];
bigArray.reverse()
But I'm not sure if that's actually going to be worse (because of the multiple unnecessary iterations we'll also have there... I guess).
Who can give me some clues about this?
It's not that I have a bug here... not even a performance issue (because 30,000 is not that much), but it feels like there should be something for this, and I've never heard about it in ~16 years of working with JavaScript.
Thanks so much!
Based upon the documentation here, it appears that find is O(n) time complexity, where n is the length of the array.
Since your elements are sorted, you can use binary search and reduce the time complexity to O(log n).
This is the basic binary search iterative algorithm:
function binarySearchIterative(nums, target) {
  let res = -1;
  let left = 0;
  let right = nums.length - 1;
  while (left <= right && res === -1) {
    const mid = Math.floor(left + (right - left) / 2);
    if (nums[mid] === target) {
      res = mid;
    } else if (nums[mid] > target) {
      right = mid - 1; // discard the upper half
    } else {
      left = mid + 1; // discard the lower half
    }
  }
  return res;
}
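For the question's data, the search is for the first element whose createdAt exceeds a threshold rather than an exact match, so a lower-bound variant fits better. A sketch, assuming the array is sorted ascending by createdAt (the function name is mine):

function firstIndexAbove(arr, threshold) {
  let left = 0;
  let right = arr.length - 1;
  let res = -1;
  while (left <= right) {
    const mid = Math.floor(left + (right - left) / 2);
    if (arr[mid].createdAt > threshold) {
      res = mid;       // candidate; keep looking further left
      right = mid - 1;
    } else {
      left = mid + 1;
    }
  }
  return res;
}

// const found = bigArray[firstIndexAbove(bigArray, 29950)];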
I'm not aware of any option for Array.prototype.findIndex that starts from the end... I do know for sure that using Array.prototype.reverse is very expensive, and you could write your own algorithm like this if you know you're likely to find the result you need near the end:
const bigArray = [
  { createdAt: 1 },
  { createdAt: 2 },
  { createdAt: 3 }
];

// Add the function to Array.prototype
Array.prototype.findIndexFromEnd = function (cond) {
  for (let i = this.length - 1; i >= 0; i--) {
    if (cond(this[i])) return i;
  }
  return -1;
};

// Gives 1 as expected
console.log(bigArray.findIndexFromEnd(x => x.createdAt == 2));

// Or use an external function if you don't want to edit the prototype
function findIndexFromEnd(array, cond) {
  for (let i = array.length - 1; i >= 0; i--) {
    if (cond(array[i])) return i;
  }
  return -1;
}

// Gives 1 as expected
console.log(findIndexFromEnd(bigArray, (x) => x.createdAt == 2));
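One caveat when applying this to the original question: scanning from the end finds the last matching index, so to get the first element above the threshold you would search for the last element at or below it and step one index forward (a sketch, reusing the question's bigArray):

const i = findIndexFromEnd(bigArray, x => x.createdAt <= 29950);
const found = bigArray[i + 1]; // first element with createdAt > 29950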

How to reduce the time complexity of this problem?

After a long, difficult coding challenge, there was one problem that kept bugging me. I thought about it for quite a while but couldn't find a way to solve it. I'm providing the problem and an example below.
Input
v: an array of numbers.
q: a 2-dimensional array whose nested arrays each contain 3 elements.
Description
v is an array, and q is a list of commands that do different things depending on their nested elements:
if the first element of a nested array is 1 => the second and third elements are indices, and the command returns sum(v[second..third]) (as you can see, the range is inclusive)
if the first element of a nested array is 2 => the element at the second index becomes the third, same as v[second] = third
Input example
v : [1,2,3,4,5]
q : [[1,2,4], [2,3,8], [1,2,4]]
Example
With the provided example, it goes like this:
command is [1,2,4] => the first element is 1, so it returns the sum from v[2] to v[4] (inclusive) => 12.
command is [2,3,8] => the first element is 2, so it sets v[3] to 8 (now v is [1,2,3,8,5]).
command is [1,2,4] => the first element is 1, so it returns the sum from v[2] to v[4] (inclusive) => 16, as the element at index 3 was changed by the previous command.
So the final answer is [12, 16]
Question.
The code below is how I solved it; however, it has O(n²) complexity. I wonder how I can reduce the time complexity in this case.
I tried making a hash object, but it didn't work; I can't think of a good way to build a cache here.
function solution(v, q) {
  const answer = [];
  for (let i = 0; i < q.length; i++) {
    const [a, b, c] = q[i];
    if (a === 1) {
      let sum = 0;
      for (let j = b; j <= c; j++) {
        sum += v[j];
      }
      answer.push(sum);
    } else if (a === 2) {
      v[b] = c;
    }
  }
  return answer;
}
This type of problem can typically be solved more efficiently with a Fenwick tree (binary indexed tree).
Here is an implementation:
class BinaryIndexedTree extends Array {
  constructor(length) {
    super(length + 1);
    this.fill(0);
  }
  add(i, delta) {
    i++; // make index 1-based
    while (i < this.length) {
      this[i] += delta;
      i += i & -i; // add least significant bit
    }
  }
  sumUntil(i) {
    i++; // make index 1-based
    let sum = 0;
    while (i) {
      sum += this[i];
      i -= i & -i;
    }
    return sum;
  }
}
function solution(values, queries) {
  const tree = new BinaryIndexedTree(values.length);
  values.forEach((value, i) => tree.add(i, value));
  const answer = [];
  for (const [a, b, c] of queries) {
    if (a === 1) {
      answer.push(tree.sumUntil(c) - tree.sumUntil(b - 1));
    } else {
      tree.add(b, c - values[b]);
      values[b] = c;
    }
  }
  return answer;
}
let answer = solution([1,2,3,4,5], [[1,2,4], [2,3,8], [1,2,4]]);
console.log(answer);
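// → [12, 16], matching the worked example above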
Time Complexity
The time complexity of running tree.add or tree.sumUntil once is O(log n), where n is the size of the input values (values.length). So this is also the time complexity of running one query.
The creation of the tree costs O(n), as this is the size of the tree.
The initialisation of the tree with values costs O(n log n), as each value in the input really acts as a query that updates a value from 0 to the actual value.
Executing the queries costs O(m log n), where m is the number of queries (queries.length).
So in total, we have a time complexity of O(n + n log n + m log n) = O((m + n) log n).
Further reading
For more information on Fenwick trees, see BIT: What is the intuition behind a binary indexed tree and how was it thought about?

Smallest Common Multiple of a range of numbers - works on Codepen.io and not on freecodecamp.org

I'm doing the freeCodeCamp certifications, and there's one problem I cannot find a solution to: the task is to calculate the least common multiple (LCM) for an array of integers (which also means a RANGE of integers between a min and a max value).
The snippet below gives the correct answer here on SO, on CodePen, and in my local environment, but not on freecodecamp.org.
function smallestCommons(arr) {
  // sorting and cloning the array
  const fullArr = createArray(arr.sort((a, b) => a - b));
  // calculating the theoretical limit of the size of the LCM
  const limit = fullArr.reduce((a, c) => a * c, 1);
  // setting the number to start the iteration with
  let i = fullArr[0];
  // iteration to get the LCM
  for (i; i <= limit; i++) {
    // if every number in fullArr divides the
    // number being tested (i), then it's the LCM
    if (fullArr.every(e => !(i % e))) {
      // stop the for loop
      break;
    }
  }
  // return the LCM
  return i;
}

// utility function to create the fullArr const in the main function
function createArray([a, b]) {
  const r = [];
  for (let i = b; i >= a; i--) {
    r.push(i);
  }
  return r;
}

// displaying the results
console.log(smallestCommons([23, 18]));
What I see:
the code works correctly with the 4 other test arrays on freecodecamp.org
the code gives false results, or no result at all, for the array [23, 18]. If I do get a result, it's not consistent (like 1,000,000 once, then 3,654,236; I made these numbers up, but the behavior is like that). The result for the [23, 18] input should be 6,056,820 (and it is that here on SO, but not on freecodecamp.org)
As this code is far from optimal, I have a feeling that the code execution just runs out of resources at some point, but I get no error about it.
I read the hints on the page (yes, I tried the solutions, and they do work), but I'd like to submit my own code: I know my algorithm is (theoretically) good, although not optimal, and I'd like to make it work in practice too.
I also see that this question has caused problems for others (it's been asked on SO), but I don't feel it's a duplicate.
Does anyone have any ideas?
As it turned out, it was a resource problem: the algorithm in my question was theoretically correct but wasn't optimal or efficient.
Here's one that is more efficient at solving this problem:
// main function
function smallestCommons(arr) {
  const fullArr = createArray(arr.sort((a, b) => a - b));
  return findLcm(fullArr, fullArr.length);
}

// creating the range of numbers based on a min and a max value
function createArray([a, b]) {
  const r = [];
  for (let i = b; i >= a; i--) {
    r.push(i);
  }
  return r;
}

// smallest common multiple of n numbers
function findLcm(arr, n) {
  let ans = arr[0];
  for (let i = 1; i < n; i++) {
    ans = (arr[i] * ans) / gcd(arr[i], ans);
  }
  return ans;
}

// greatest common divisor (Euclidean algorithm)
function gcd(a, b) {
  if (b == 0) return a;
  return gcd(b, a % b);
}

console.log(smallestCommons([1, 5]));
console.log(smallestCommons([5, 1]));
console.log(smallestCommons([2, 10]));
console.log(smallestCommons([23, 18]));
This method was OK on the testing sandbox environment.
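As a worked example of why this is so much cheaper: for [23, 18], the loop folds the range pairwise via lcm(a, b) = a * b / gcd(a, b), giving lcm(18, 19) = 342, then lcm(342, 20) = 3420, lcm(3420, 21) = 23940, lcm(23940, 22) = 263340, and finally lcm(263340, 23) = 6056820, the expected answer, after just five gcd computations instead of millions of trial divisions.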

Does this JavaScript function have a linear or quadratic time complexity?

I'm trying to get my head around this solution in a Google interview video: https://youtu.be/XKu_SEDAykw?t=1139.
Though they say it is linear in the video, I'm not 100% certain if (and why) the entire solution is linear rather than quadratic.
Because the find()/includes() method is nested in the for loop, that would make me assume it has a run-time of O(N * N).
But find()/includes() searches an array that grows by one element per step, making me think the run-time is in fact just O(N + N)?
Here's my version of the solution in JavaScript:
const findSum = (arr, val) => {
  let searchValues = [val - arr[0]];
  for (let i = 1; i < arr.length; i++) {
    let searchVal = val - arr[i];
    if (searchValues.includes(arr[i])) {
      return true;
    } else {
      searchValues.push(searchVal);
    }
  }
  return false;
};
My workings:
When i = 1, searchValues.length = 1
When i = 2, searchValues.length = 2
When i = 3, searchValues.length = 3
Shouldn't that imply a linear run-time of O(N + (N - 1))? Or am I missing something?!
Thanks for your help!
Yes, your solution is quadratic: as you mentioned, .includes traverses the array, just like the surrounding for loop does. In the interview, however, they talk about an unordered_set for the lookup structure, which implies a hash set with O(1) average lookup/insertion time, making the algorithm O(n) (and O(n²) only in the pathological worst case). The JS equivalent would be a Set:
const findSum = (arr, sum) =>
  arr.some((set => n => set.has(n) || !set.add(sum - n))(new Set));
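Spelled out, the same Set-based idea reads like this (an equivalent, more explicit sketch; the name findSumExplicit is mine):

const findSumExplicit = (arr, sum) => {
  const seen = new Set();
  for (const n of arr) {
    if (seen.has(n)) return true; // n is the complement of some earlier element
    seen.add(sum - n);            // remember the complement we'd need later
  }
  return false;
};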

JavaScript: can this poker straight detection function be optimised?

Looking to optimise my code to get even more speed. Currently the code detects any poker hand and takes ~350 ms to do 32,000 iterations. The function that detects straights, however, seems to be taking the biggest individual chunk of time, at about 160 ms, so I'm checking whether there's any way to optimise it further.
The whole code was originally written in PHP, since that is what I'm most familiar with, but despite PHP 7's speed boost it still seems to be slower than JavaScript. What I found when translating to JavaScript, though, is that many of PHP's built-in functions are not present in JavaScript, which caused unforeseen slowdowns. It is still faster overall than the original PHP code, but I'm looking to see if it can be optimised more. Perhaps the answer is no, but I thought I'd check anyway.
I have written the functions range and arrays_equal myself, since these are either missing from JavaScript or don't quite work properly.
function straight(handval) {
  if (arrays_equal(handval.slice(0, 4), [2, 3, 4, 5]) && handval[handval.length - 1] == 14) {
    // ace-low straight (A-2-3-4-5)
    return [4, 14];
  } else {
    // normal straight
    for (let i = handval.length - 5; i >= 0; i--) {
      let subhand = handval.slice(i, i + 5);
      if (arrays_equal(subhand, range(subhand[0], subhand[subhand.length - 1]))) {
        return [4, subhand[4]];
      }
    }
    return [0];
  }
}

function arrays_equal(a, b) { return !!a && !!b && !(a < b || b < a); }

function range(start, end) {
  let arr = [];
  for (let i = start; i <= end; i++) {
    arr.push(i);
  }
  return arr;
}
Handval comes in as a simple array of 5-7 elements, with numbers from 2-14 representing the cards. So, for example, it could be [6,8,4,11,13,2] or [8,4,13,8,10].
EDIT: The function is called with the hand sorted at the same time, using this code:
straight(handval.slice(0).sort(sortNumber));
function sortNumber(a,b) { return a - b; }
You could just go from right to left and count the number of sequential numbers:
function straight(handval) {
  // ace-low straight (A-2-3-4-5)
  if ([2, 3, 4, 5].every((el, i) => handval[i] === el) && handval[handval.length - 1] === 14)
    return [4, 14];

  let count = 1;
  for (let i = handval.length - 1; i >= 1; i -= 1) {
    if (handval[i] === handval[i - 1] + 1) {
      count += 1;
      if (count === 5) return [4, handval[i + 3]];
    } else {
      count = 1;
    }
  }
  return [0];
}
That is way faster, as it:
1) does not create intermediate arrays on every iteration, which you did with range and slice
2) does not compare arrays as strings, which requires a typecast and a string comparison, both way slower than comparing two numbers against each other
3) does not check each of the 3 five-card windows (positions 1-5, 2-6, 3-7) on its own, but does all of that in one pass, so it iterates over the hand once instead of 3 × 5 times
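For example, with sorted hands in the question's format (illustrative calls):

console.log(straight([2, 4, 5, 6, 7, 8, 13])); // [4, 8]  (straight to the eight)
console.log(straight([2, 3, 4, 5, 9, 14]));    // [4, 14] (ace-low wheel)
console.log(straight([2, 4, 6, 8, 10, 12]));   // [0]     (no straight)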
