Look at this crazy question... I have an array with 30,000 items, and I have to run something like this over it:
const bigArray = [
{ createdAt: 1 },
{ createdAt: 2 },
{ createdAt: 3 },
// And so on... up to 30,000 items
];
const found = bigArray.find(x => x.createdAt > 29950)
And the thing here is that I know that 100% of the time, that element will be at index 29,950 (approximately), because the array is already sorted by createdAt (it comes from the backend that way).
How does .find work? Does it iterate starting from the first element? Is there a way to say "I know it's closer to the end... change your behavior"?
Of course there is the alternative of doing something like:
bigArray.reverse()
const prevIndex = bigArray.findIndex(x => x.createdAt <= 29950);
const found = bigArray[prevIndex - 1];
bigArray.reverse()
But I'm not sure if that's actually going to be worse (because of the multiple unnecessary iterations the two reverse calls add... I guess).
Can anyone give me some clues about this?
It's not that I have a bug here... not even a performance issue (because 30,000 is not that much), but it feels like there should be something for this, and I've never heard about it in ~16 years of working with JavaScript.
Thanks so much!
Based upon the documentation here, it appears that find is O(n) time complexity, where n is the length of the array.
Since your elements are sorted, you can do a binary search instead and reduce the time complexity to O(log n).
This is the basic binary search iterative algorithm:
function binarySearchIterative(nums, target) {
  let left = 0;
  let right = nums.length - 1;
  while (left <= right) {
    const mid = Math.floor(left + (right - left) / 2);
    if (nums[mid] === target) {
      return mid;
    } else if (nums[mid] > target) {
      right = mid - 1; // target can only be in the lower half
    } else {
      left = mid + 1; // target can only be in the upper half
    }
  }
  return -1; // not found
}
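Note that the original question searches by a predicate (createdAt > 29950) rather than for an exact value, so what's actually needed is a lower-bound variant: the first index at which the predicate flips from false to true. A minimal sketch along those lines (binarySearchFirstMatch is a made-up name, not a built-in), assuming the array is sorted ascending by createdAt:
function binarySearchFirstMatch(arr, predicate) {
  let left = 0;
  let right = arr.length; // right is exclusive
  while (left < right) {
    const mid = Math.floor((left + right) / 2);
    if (predicate(arr[mid])) {
      right = mid; // mid matches, but an earlier match may exist
    } else {
      left = mid + 1; // everything up to mid fails the predicate
    }
  }
  return left < arr.length ? left : -1;
}

const index = binarySearchFirstMatch(bigArray, x => x.createdAt > 29950);
const found = index === -1 ? undefined : bigArray[index];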
I'm not aware of any option for Array.prototype.findIndex that starts from the end. I do know for sure that calling Array.prototype.reverse twice is expensive (each call visits and mutates every element), and you could write your own helper like this if you know you're likely to find the result near the end:
const bigArray = [
{ createdAt: 1 },
{ createdAt: 2 },
{ createdAt: 3 }
];
// Add the function to Array.prototype
Array.prototype.findIndexFromEnd = function (cond) {
for(let i = this.length - 1; i >= 0; i--) {
if(cond(this[i])) return i;
}
return -1;
}
// Gives 1 as expected
console.log(bigArray.findIndexFromEnd(x => x.createdAt == 2));
// Or use an external function if you don't want to edit the prototype
function findIndexFromEnd(array, cond) {
for(let i = array.length - 1; i >= 0; i--) {
if(cond(array[i])) return i;
}
return -1;
}
// Gives 1 as expected
console.log(findIndexFromEnd(bigArray, (x) => x.createdAt == 2));
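For what it's worth, if your runtime supports ES2023, Array.prototype.findLastIndex (and its sibling findLast) already does exactly this, iterating from the end natively:
// Gives 1, same as the helpers above
console.log(bigArray.findLastIndex(x => x.createdAt == 2));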
I have a function that gets a number and has to calculate the multiplication of each number with the rest of the numbers in the sequence.
If the input is 10, it should calculate the multiplications 1x1, 1x2, 1x3, ..., 10x1, 10x2, 10x3, ..., 10x10 (passing through all the numbers sequentially).
So at first sight I thought I need a double loop to do all possible multiplications, but for big numbers it runs in O(n*n), which is too slow.
I heard there is a way to use only one loop. Do you know any post related to this subject? The only ones I found don't take into account that I need to perform the calculation for each number with the rest of the numbers of the array.
Here is the code:
for (let i = 1; i <= n; i++) {
  for (let j = 1; j <= n; j++) {
    // do i * j
  }
}
Here's a way to do it with one loop (map), with the help of recursion.
const order = 10;
let sequence = [];
for (let i = 0; i < order; i++) {
sequence.push(i + 1);
}
const getMulTable = (order, table = []) => {
if (order === 1) {
table.push(sequence);
return table;
}
table = getMulTable(order - 1, table);
table.push(sequence.map(el => el * order));
return table;
}
console.log(getMulTable(order));
Note that this doesn't reduce the time complexity: the output itself contains n² products, so any algorithm that produces all of them must do at least O(n²) work. This is just the shortest way I know of to write it.
Use collection methods like map.
let n = 10
// build [1, 2, ..., n]
let arr = Array.from({ length: n }, (_, i) => i + 1)
let res = []
for (let i = 1; i <= n; i++) {
  res.push(arr.map(a => a * i))
}
console.log(res.flat())
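If the requirement is literally a single loop, you can also iterate one counter over all n² index pairs and decode the two factors from it. A small sketch (this doesn't beat O(n²) either, for the reason above: there are n² products to emit):
const n = 10
const products = []
for (let k = 0; k < n * n; k++) {
  const i = Math.floor(k / n) + 1 // left factor: 1..n
  const j = (k % n) + 1 // right factor: 1..n
  products.push(i * j)
}
console.log(products)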
As the question states, I am trying to solve a LeetCode problem (Climbing Stairs). The solutions are available online, but I want to implement my own. I have built my logic and the logic is totally fine. However, I am unable to optimize the code, and the time limit is exceeded for large numbers.
Here's my code:
let count = 0;
const climbingStairs = (n, memo = [{stairs: null}]) => {
if(n === memo[n]) {
count += memo[n].stairs;
}
if(n < 0) return;
if(n === 0) return memo[n].stairs = count++;
memo[n] = climbingStairs(n - 1, memo) + climbingStairs(n - 2, memo);
return memo[n];
}
climbingStairs(20); //running fine on time
climbingStairs(40); //hangs as the code isn't optimized
console.log(count); //the output for the given number
The code optimization using the memoization object is not working. I have tried multiple ways but am still facing issues. Any help in optimizing the code would be appreciated. Thanks!
There is no need for the count variable; you can memoize this way:
const climbStairs = (n, memo = []) => {
if(n <= 2) return n;
if(memo[n]) {
return memo[n];
}
memo[n] = climbStairs(n - 1, memo) + climbStairs(n - 2, memo);
return memo[n];
}
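A quick check with the asker's inputs (expected values in the comments):
console.log(climbStairs(20)); // 10946
console.log(climbStairs(40)); // 165580141, returns instantly thanks to the memo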
Actually, you do not store a value but NaN in the array.
You need to return zero to get a numerical value for the addition.
Furthermore, you assign a new value on each call, even if you already have that value in the array.
A good idea is to use only a single type (object vs. number) in the array, not mixed types, because you would need different handling for each type.
const climbingStairs = (n, memo = [1]) => {
if (n < 0) return 0;
return memo[n] ??= climbingStairs(n - 1, memo) + climbingStairs(n - 2, memo);
}
console.log(climbingStairs(5));
console.log(climbingStairs(20));
console.log(climbingStairs(40));
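As an aside (not part of either answer above), the same recurrence can also be computed iteratively, which avoids recursion and the memo array altogether; a minimal sketch:
function climbStairsIterative(n) {
  let prev = 1; // ways to reach step 0
  let curr = 1; // ways to reach step 1
  for (let i = 2; i <= n; i++) {
    [prev, curr] = [curr, prev + curr]; // same recurrence, rolled forward
  }
  return curr;
}

console.log(climbStairsIterative(40)); // 165580141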
After a long, difficult coding challenge, there was a problem that bugged me. I thought about it for an adequate amount of time but couldn't find a way to solve it. I am providing the problem and an example below.
Input
v : an array of numbers.
q : a 2-dimensional array whose nested arrays each contain 3 elements.
Description
v is an array and q is a list of commands that do different things depending on their nested elements.
if the first element of a nested array is 1 => the second and third elements become indices, and the command returns sum(v[second..third]) (as you can see, the range is inclusive)
if the first element of a nested array is 2 => the element at the second index becomes the third, same as v[second] = third
Input example
v : [1,2,3,4,5]
q : [[1,2,4], [2,3,8], [1,2,4]]
Example
With a provided example, it goes like
command is [1,2,4] => first element is 1, so it returns the sum from v[2] to v[4] (inclusive) => 12.
command is [2,3,8] => first element is 2, so it sets v[3] to 8 (now v is [1,2,3,8,5]).
command is [1,2,4] => first element is 1, so it returns the sum from v[2] to v[4] (inclusive) => 16, as the element at index 3 was changed by the previous command.
So the final answer is [12, 16]
Question.
The code below is how I solved it; however, it has O(n**2) complexity. I wonder how I can reduce the time complexity in this case.
I tried making a hash object, but it didn't work. I can't think of a good way to build a cache for this.
function solution(v, q) {
  let answer = [];
  for (let i = 0; i < q.length; i++) {
    let [a, b, c] = q[i];
    if (a === 1) {
      let sum = 0;
      for (let j = b; j <= c; j++) {
        sum += v[j];
      }
      answer.push(sum);
    } else if (a === 2) {
      v[b] = c;
    }
  }
  return answer;
}
This type of problem can typically be solved more efficiently with a Fenwick tree (also called a binary indexed tree).
Here is an implementation:
class BinaryIndexedTree extends Array {
constructor(length) {
super(length + 1);
this.fill(0);
}
add(i, delta) {
i++; // make index 1-based
while (i < this.length) {
this[i] += delta;
i += i & -i; // add least significant bit
}
}
sumUntil(i) {
i++; // make index 1-based
let sum = 0;
while (i) {
sum += this[i];
i -= i & -i;
}
return sum;
}
}
function solution(values, queries) {
const tree = new BinaryIndexedTree(values.length);
values.forEach((value, i) => tree.add(i, value));
const answer = [];
for (const [a, b, c] of queries) {
if (a === 1) {
answer.push(tree.sumUntil(c) - tree.sumUntil(b - 1));
} else {
tree.add(b, c - values[b]);
values[b] = c;
}
}
return answer;
}
let answer = solution([1,2,3,4,5], [[1,2,4], [2,3,8], [1,2,4]]);
console.log(answer);
Time Complexity
The time complexity of running tree.add or tree.sumUntil once is O(log n), where n is the size of the input values (values.length). So this is also the time complexity of running one query.
The creation of the tree costs O(n), as this is the size of the tree.
The initialisation of the tree with values costs O(n log n), as each value in the input effectively acts as a query that updates an entry from 0 to its actual value.
Executing the queries costs O(m log n), where m is the number of queries (queries.length).
So in total, we have a time complexity of O(n + n log n + m log n) = O((m+n) log n).
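As a side note (not part of the original answer), the O(n log n) initialisation can be reduced to O(n) with the standard linear-construction trick: copy the values into the tree one-to-one, then push each node's partial sum up to its parent once. A sketch reusing the BinaryIndexedTree class above (buildTree is my own helper name):
function buildTree(values) {
  const tree = new BinaryIndexedTree(values.length);
  // place each value at its 1-based leaf position
  for (let i = 0; i < values.length; i++) tree[i + 1] = values[i];
  // propagate partial sums: each node adds itself into its parent once
  for (let i = 1; i < tree.length; i++) {
    const parent = i + (i & -i);
    if (parent < tree.length) tree[parent] += tree[i];
  }
  return tree;
}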
Further reading
For more information on Fenwick trees, see BIT: What is the intuition behind a binary indexed tree and how was it thought about?
Given an array of numbers, for example [14,6,10], how can I find the possible combinations/pairs that add up to a given target value?
For example, with [14,6,10], I'm looking for a target value of 40.
My expected output would be:
10 + 10 + 6 + 14
14 + 14 + 6 + 6
10 + 10 + 10 + 10
*Order is not important
With that being said, this is what I tried so far:
function Sum(numbers, target, partial) {
var s, n, remaining;
partial = partial || [];
s = partial.reduce(function (a, b) {
return a + b;
}, 0);
if (s === target) {
console.log("%s", partial.join("+"))
}
for (var i = 0; i < numbers.length; i++) {
n = numbers[i];
remaining = numbers.slice(i + 1);
Sum(remaining, target, partial.concat([n]));
}
}
>>> Sum([14,6,10],40);
// returns nothing
>>> Sum([14,6,10],24);
// return 14+10
It is actually of no use here, since it only finds combinations where each number is used at most once.
So how can I do it?
You could either add the value at the current index (as long as the running sum is smaller than the wanted sum) or proceed to the next index.
function getSum(array, sum) {
function iter(index, temp) {
var s = temp.reduce((a, b) => a + b, 0);
if (s === sum) result.push(temp);
if (s >= sum || index >= array.length) return;
iter(index, temp.concat(array[index]));
iter(index + 1, temp);
}
var result = [];
iter(0, []);
return result;
}
console.log(getSum([14, 6, 10], 40));
For getting a limited result set, you could specify the length and check it in the exit condition.
function getSum(array, sum, limit) {
function iter(index, temp) {
var s = temp.reduce((a, b) => a + b, 0);
if (s === sum) result.push(temp);
if (s >= sum || index >= array.length || temp.length >= limit) return;
iter(index, temp.concat(array[index]));
iter(index + 1, temp);
}
var result = [];
iter(0, []);
return result;
}
console.log(getSum([14, 6, 10], 40, 5));
TL;DR: Skip to Part II for the real thing.
Part I
Nina Scholz's answer to this fundamental problem is a beautiful manifestation of an algorithm. Honestly, it confused me a lot, for two reasons:
When I try [14,6,10,7,3] with a target of 500, it makes 36,783,575 calls to the iter function without blowing the call stack, yet memory shows no significant usage at all.
My dynamic programming solution goes a little faster (or maybe not), but there is no way it can handle the above case without exhausting the 16 GB of memory.
So I shelved my solution and instead started investigating her code a little further in the dev tools, and discovered both its beauty and a little bit of its shortcomings.
First, I believe this algorithmic approach, which includes a very clever use of recursion, might deserve a name of its own. It is very memory efficient and only uses memory for the constructed result set. The stack grows and shrinks dynamically, never getting anywhere close to its limit.
The problem is that, while very efficient, it still makes a huge number of redundant calls. Looking into that, a slight modification cuts the 36,783,575 calls to iter down to 20,254,744, roughly 45% fewer, which yields much faster code. The catch is that the input array must be sorted ascending.
So here comes a modified version of Nina's algorithm. (Be patient... it takes around 25 seconds to finish.)
function getSum(array, sum) {
function iter(index, temp) {cnt++ // counting iter calls -- remove in production code
var s = temp.reduce((a, b) => a + b, 0);
sum - s >= array[index] && iter(index, temp.concat(array[index]));
sum - s >= array[index+1] && iter(index + 1, temp);
s === sum && result.push(temp);
return;
}
var result = [];
array.sort((x,y) => x-y); // cheap, considering the input array should be small for a reasonable output size
iter(0, []);
return result;
}
var cnt = 0,
arr = [14,6,10,7,3],
tgt = 500,
res;
console.time("combos");
res = getSum(arr,tgt);
console.timeEnd("combos");
console.log(`source numbers are ${arr}
found ${res.length} unique ways to sum up to ${tgt}
iter function has been called ${cnt} times`);
Part II
Even though I was impressed with the performance, I wasn't comfortable with the above solution, for no solid reason I could name. The way it works through side effects, and the hard-to-understand double recursion, disturbed me.
So here comes my approach to this question. It is many times more efficient than the accepted solution, even though I am going functional in JS. There is still room to make it a little faster with ugly imperative tricks.
The difference is:
Given numbers: [14,6,10,7,3]
Target Sum: 500
Accepted Answer:
Number of possible answers: 172686
Resolves in: 26357ms
Recursive calls count: 36783575
Answer Below:
Number of possible answers: 172686
Resolves in: 1000ms
Recursive calls count: 542657
function items2T([n,...ns],t){cnt++ //remove cnt in production code
var c = ~~(t/n);
return ns.length ? Array(c+1).fill()
.reduce((r,_,i) => r.concat(items2T(ns, t-n*i).map(s => Array(i).fill(n).concat(s))),[])
: t % n ? []
: [Array(c).fill(n)];
};
var cnt = 0, result;
console.time("combos");
result = items2T([14, 6, 10, 7, 3], 500)
console.timeEnd("combos");
console.log(`found ${result.length} unique ways to sum up to 500
and performed ${cnt} recursive calls`);
Another important point: if the given array is sorted descending, the number of recursive calls is reduced (sometimes greatly), letting us squeeze more juice out of this lemon. Compare the run above with the one below, where the input array is sorted descending.
function items2T([n,...ns],t){cnt++ //remove cnt in production code
var c = ~~(t/n);
return ns.length ? Array(c+1).fill()
.reduce((r,_,i) => r.concat(items2T(ns, t-n*i).map(s => Array(i).fill(n).concat(s))),[])
: t % n ? []
: [Array(c).fill(n)];
};
var cnt = 0, result;
console.time("combos");
result = items2T([14, 10, 7, 6, 3], 500)
console.timeEnd("combos");
console.log(`found ${result.length} unique ways to sum up to 500
and performed ${cnt} recursive calls`);
I have an array of prime numbers:
const primes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]
I want to find the largest number in this list that is <= the given number.
For example ... getHighestPrimeNumber(58) ... should return 53, being the prime number with the greatest value which is also less than or equal to 58
Expected results:
getHighestPrimeNumber(58) === 53
getHighestPrimeNumber(53) === 53
getHighestPrimeNumber(52) === 47
My current approach is to iterate through the prime numbers, but this is very inefficient, especially given that there may be 10,000+ numbers in the list. Thanks!
Vanilla JS or Lodash is fine
Since you posted this with the lodash tag, just FYI: with lodash this is trivial thanks to _.sortedIndex, which does a binary search:
const primes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]
const closestPrime = (n) => {
let index = _.sortedIndex(primes, n)
return primes[index] == n ? primes[index] : primes[index-1]
}
console.log(closestPrime(58))
console.log(closestPrime(53))
console.log(closestPrime(52))
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.min.js"></script>
Seems like a field for a divide and conquer approach. Something like a binary search:
const primes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]

function findHighestPrimeNumberRecursive(arr, ceiling) {
  if (arr.length === 1) {
    // base case: last remaining candidate
    return arr[0] <= ceiling ? arr[0] : undefined;
  }
  const midpoint = Math.floor(arr.length / 2);
  if (arr[midpoint] === ceiling) {
    // we found it!
    return arr[midpoint];
  } else if (arr[midpoint] > ceiling) {
    // midpoint is too big: the answer must be in the left half
    return findHighestPrimeNumberRecursive(arr.slice(0, midpoint), ceiling);
  } else {
    // midpoint still fits: the answer is midpoint or something in the right half
    return findHighestPrimeNumberRecursive(arr.slice(midpoint), ceiling);
  }
}

function getHighestPrimeNumber(ceiling) {
  return findHighestPrimeNumberRecursive(primes, ceiling);
}
(Note that each slice copies part of the array; tracking lo/hi indices instead, as the answer below does, avoids those copies.)
This is a good task for a binary search:
const bisect = (needle, haystack) => {
let lo = 0;
let hi = haystack.length - 1;
while (lo <= hi) {
const mid = ~~((hi - lo) / 2 + lo);
if (haystack[mid] === needle) {
return needle;
}
else if (haystack[mid] > needle) {
hi = mid - 1;
}
else {
lo = mid + 1;
}
}
return haystack[hi];
};
const getHighestPrimeNumber = n => {
const primes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97];
return bisect(n, primes);
};
console.log(getHighestPrimeNumber(58) === 53);
console.log(getHighestPrimeNumber(53) === 53);
console.log(getHighestPrimeNumber(52) === 47);
A couple of notes:
You'll likely want to make your prime number array a parameter to getHighestPrimeNumber so it isn't created and garbage-collected on every function call. At that point, you might as well just call the binary search directly.
If you're concerned about queries below the bounds of the array (where hi ends at -1 and the function returns undefined), you can handle them according to some policy, for example: return haystack[Math.max(hi, 0)]; to clamp to the smallest prime.
Binary search is O(log n) time complexity. Set lookups are O(1) on average, so you may see a performance boost if you maintain a Set alongside the array and try a lookup there first, although that only short-circuits when the query value is itself in the list.