Dynamic Programming: Implementing a solution using memoization - javascript

As the title states, I am trying to solve a LeetCode problem. Solutions are available online, but I want to implement my own. I have built my logic, and the logic itself is fine. However, I am unable to optimize the code, so the time limit is exceeded for large inputs.
Here's my code:
let count = 0;

const climbingStairs = (n, memo = [{stairs: null}]) => {
  if (n === memo[n]) {
    count += memo[n].stairs;
  }
  if (n < 0) return;
  if (n === 0) return memo[n].stairs = count++;

  memo[n] = climbingStairs(n - 1, memo) + climbingStairs(n - 2, memo);
  return memo[n];
}

climbingStairs(20); // running fine on time
climbingStairs(40); // hangs as the code isn't optimized
console.log(count); // the output for the given number
The optimization using the memoization object is not working. I have tried multiple approaches but am still facing issues. Any help with optimizing the code would be appreciated. Thanks!

There is no need for the count variable; you can memoize this way:
const climbStairs = (n, memo = []) => {
  if (n <= 2) return n;
  if (memo[n]) {
    return memo[n];
  }
  memo[n] = climbStairs(n - 1, memo) + climbStairs(n - 2, memo);
  return memo[n];
}

Actually, you do not store a value but NaN in the array.
You need to return zero to get a numerical value for the addition.
Furthermore, you assign a new value on each call, even if you already have that value in the array.
A good idea is to use only a single type (object vs. number) in the array and not mixed types, because you would need different handling for each type.
const climbingStairs = (n, memo = [1]) => {
  if (n < 0) return 0;
  return memo[n] ??= climbingStairs(n - 1, memo) + climbingStairs(n - 2, memo);
}

console.log(climbingStairs(5));
console.log(climbingStairs(20));
console.log(climbingStairs(40));
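If recursion depth ever becomes a concern for larger n, the same recurrence can also be filled in bottom-up with a plain loop. This is a minimal sketch of that alternative (the function name is made up here; it is not part of the answers above):

const climbingStairsIterative = (n) => {
  // ways[i] = number of ways to reach step i, using the same recurrence ways[i] = ways[i - 1] + ways[i - 2]
  const ways = [1, 1];
  for (let i = 2; i <= n; i++) ways[i] = ways[i - 1] + ways[i - 2];
  return ways[n];
};

console.log(climbingStairsIterative(40)); // 165580141, same as the memoized versions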

Can we change the iteration direction of Array.find?

Look at this crazy question... I have an array with 30,000 items, and I have to run something like this over it:
const bigArray = [
  { createdAt: 1 },
  { createdAt: 2 },
  { createdAt: 3 },
  // And so on... 30,000
];

const found = bigArray.find(x => x.createdAt > 29950);
And the thing here is that I know that, 100% of the time, that element will be at index 29,950 (approximately), because the array is already sorted by createdAt (coming from the backend).
How does .find work? Does it iterate starting from the first element? Is there a way to say "I know it's closer to the end... change your behavior"?
Of course there is the alternative of doing something like:
bigArray.reverse()
const prevIndex = bigArray.findIndex(x => x.createdAt <= 29950);
const found = bigArray[prevIndex - 1];
bigArray.reverse()
But I'm not sure if that's actually going to be worse (because there will also be some unnecessary extra iterations... I guess).
Who can give me some clues about this?
It's not that I have a bug here... not even a performance issue (because 30,000 is not that much), but it feels like there should be something for this, and I've never heard about it in ~16 years of working with JavaScript.
Thanks so much!
Based on the documentation here, it appears that find has O(n) time complexity, where n is the length of the array.
Since your elements are sorted, you can try to do binary search and reduce time complexity to O(log n).
This is the basic binary search iterative algorithm:
function binarySearchIterative(nums, target) {
  let res = -1;
  let left = 0;
  let right = nums.length - 1;
  while (left <= right && res === -1) {
    const mid = Math.floor(left + (right - left) / 2);
    if (nums[mid] === target) {
      res = mid;
    } else if (nums[mid] > target) {
      right = mid - 1; // discard the upper half
    } else {
      left = mid + 1; // discard the lower half
    }
  }
  return res;
};
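To apply the same idea to the original array of objects sorted by createdAt, a lower-bound style search can return the first element whose createdAt exceeds the threshold. This is a sketch along those lines; the helper name findFirstGreater is made up for illustration:

function findFirstGreater(sortedArr, threshold) {
  let left = 0;
  let right = sortedArr.length - 1;
  let res = -1;
  while (left <= right) {
    const mid = Math.floor(left + (right - left) / 2);
    if (sortedArr[mid].createdAt > threshold) {
      res = mid;       // candidate found, keep searching further left
      right = mid - 1;
    } else {
      left = mid + 1;
    }
  }
  return res; // index of the first element with createdAt > threshold, or -1
}

const sample = [{ createdAt: 1 }, { createdAt: 2 }, { createdAt: 3 }];
console.log(sample[findFirstGreater(sample, 1)]); // { createdAt: 2 }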
I'm not aware of any option for Array.prototype.findIndex that starts from the end. I do know for sure that using Array.prototype.reverse is very expensive, and you could write your own algorithm like this if you know that you're likely to find the result near the end:
const bigArray = [
  { createdAt: 1 },
  { createdAt: 2 },
  { createdAt: 3 }
];

// Add the function to Array.prototype
Array.prototype.findIndexFromEnd = function (cond) {
  for (let i = this.length - 1; i >= 0; i--) {
    if (cond(this[i])) return i;
  }
  return -1;
}

// Gives 1 as expected
console.log(bigArray.findIndexFromEnd(x => x.createdAt == 2));

// Or use an external function if you don't want to edit the prototype
function findIndexFromEnd(array, cond) {
  for (let i = array.length - 1; i >= 0; i--) {
    if (cond(array[i])) return i;
  }
  return -1;
}

// Gives 1 as expected
console.log(findIndexFromEnd(bigArray, (x) => x.createdAt == 2));

Smallest Common Multiple of a range of numbers - works on Codepen.io and not on freecodecamp.org

I'm working on the Freecodecamp certifications, and there's one problem I cannot find a solution to: the task is to calculate the Least Common Multiple (LCM) for an array of integers (which also means a RANGE of integers, between a min and a max value).
The snippet below gives the correct answer here on SO, on Codepen.io, and in my local environment, but not on freecodecamp.org.
function smallestCommons(arr) {
  // sorting and cloning the array
  const fullArr = createArray(arr.sort((a, b) => a - b))
  // calculating the theoretical limit of the size of the LCM
  const limit = fullArr.reduce((a, c) => a * c, 1)
  // setting the number to start the iteration with
  let i = fullArr[0]
  // iteration to get the LCM
  for (i; i <= limit; i++) {
    // if every number in fullArr divides
    // the number being tested (i), then it's the LCM
    if (fullArr.every(e => !(i % e))) {
      // stop the for loop
      break
    }
  }
  // return the LCM
  return i;
}

// utility function to create the fullArr const in the
// main function
function createArray([a, b]) {
  const r = []
  for (let i = b; i >= a; i--) {
    r.push(i)
  }
  return r
}

// displaying the results
console.log(smallestCommons([23, 18]));
The errors I see:
The code works correctly with 4 other arrays on freecodecamp.org.
The code gives false results, or no result at all, for the array [23, 18]. If I get a result, it's not consistent (like 1,000,000 once, then 3,654,236; I made these numbers up, but the behavior is like that). The result for the [23, 18] input should be 6,056,820 (and it is that here on SO, but not on freecodecamp.org).
As this code is far from optimal, I have a feeling that the code execution just runs out of resources at some point, but I get no error for that.
I read the hints on the page (yes, I tried the solutions, and they do work), but I'd like to submit my own code: I know my algorithm is (theoretically) good (although not optimal), but I'd like to make it work in practice too.
I also see that this question has caused problems for others (it's been asked on SO before), but I don't feel mine is a duplicate.
Does anyone have any ideas?
As it turned out, it was a resource problem: the algorithm in my question was theoretically correct but not optimal or efficient.
Here's one that solves the problem more efficiently:
// main function
function smallestCommons(arr) {
  const fullArr = createArray(arr.sort((a, b) => a - b))
  return findLcm(fullArr, fullArr.length);
}

// creating the range of numbers based on a min and a max value
function createArray([a, b]) {
  const r = []
  for (let i = b; i >= a; i--) {
    r.push(i)
  }
  return r
}

// smallest common multiple of n numbers
function findLcm(arr, n) {
  let ans = arr[0];
  for (let i = 1; i < n; i++) {
    ans = (arr[i] * ans) / gcd(arr[i], ans);
  }
  return ans;
}

// greatest common divisor
function gcd(a, b) {
  if (b == 0) return a;
  return gcd(b, a % b);
}

console.log(smallestCommons([1, 5]));
console.log(smallestCommons([5, 1]));
console.log(smallestCommons([2, 10]));
console.log(smallestCommons([23, 18]));
This method was OK in the testing sandbox environment.
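The same pairwise gcd/lcm idea can also be written as a single reduce over the range. This is just a compact restatement of the approach above, shown as a standalone sketch (renamed to avoid clashing with the function above):

const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));
const lcm = (a, b) => (a * b) / gcd(a, b);

function smallestCommonsCompact([a, b]) {
  const [min, max] = a < b ? [a, b] : [b, a];
  const range = Array.from({ length: max - min + 1 }, (_, i) => min + i);
  return range.reduce(lcm);
}

console.log(smallestCommonsCompact([23, 18])); // 6056820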

Javascript can this poker straight detection function be optimised?

I'm looking to optimise my code to get even more speed. Currently the code detects any poker hand and takes ~350 ms for 32,000 iterations. The function that detects straights, however, seems to be taking the biggest individual chunk of time, at about 160 ms, so I'm looking to see whether it can be optimised further.
The whole thing was originally written in PHP, since that is what I'm most familiar with, but despite PHP 7's speed boost it still seems to be slower than JavaScript. What I found when translating to JavaScript, though, is that many of PHP's built-in functions are not present in JavaScript, which caused unforeseen slowdowns. It is still faster overall than the original PHP code, but I'm looking to see if it can be optimised more. Perhaps the answer is no, but I thought I'd check anyway.
I have written the functions range and arrays_equal myself, since these are either missing from JavaScript or don't quite work the way I need.
function straight(handval) {
  if (arrays_equal(handval.slice(0, 4), [2, 3, 4, 5]) && handval[handval.length - 1] == 14) { // if is Ace 2345
    return [4, 14];
  } else { // if normal straight
    for (let i = handval.length - 5; i >= 0; i--) {
      let subhand = handval.slice(i, i + 5);
      if (arrays_equal(subhand, range(subhand[0], subhand[subhand.length - 1]))) {
        return [4, subhand[4]];
      }
    }
    return [0]
  }
}

function arrays_equal(a, b) { return !!a && !!b && !(a < b || b < a); }

function range(start, end) {
  let arr = [];
  for (let i = start; i <= end; i++) {
    arr.push(i);
  }
  return arr;
}
handval comes in as a simple array of 5-7 numbers from 2-14 representing the cards. So, for example, it could be [6,8,4,11,13,2] or [8,4,13,8,10].
EDIT: The function is called and sorted at the same time with this code:
straight(handval.slice(0).sort(sortNumber));
function sortNumber(a,b) { return a - b; }
You could just go from right to left and count the number of sequential numbers:
function straight(handval) {
  if ([2, 3, 4, 5].every((el, i) => handval[i] === el) && handval[handval.length - 1] === 14)
    return [4, 14];

  let count = 1;
  for (let i = handval.length - 1; i >= 1; i -= 1) {
    if (handval[i] === handval[i - 1] + 1) {
      count += 1;
      if (count === 5) return [4, handval[i + 3]];
    } else {
      count = 1;
    }
  }
  return [0];
}
That is way faster, as it:
1) does not create intermediate arrays on every iteration, which you did with range and slice;
2) does not compare arrays as strings, which requires a type cast and a string comparison and is way slower than comparing two numbers against each other;
3) does not check each of the 3 ranges on its own (1-5, 2-6, 3-7), but does all of that in one run, so it only iterates 5 positions instead of 3 x 5.
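For example, feeding the function a 7-card hand sorted ascending, as in the question's calling code (a quick illustration of the return format, not part of the original answer):

console.log(straight([2, 4, 5, 6, 7, 8, 13]));   // [4, 8] -> straight, high card 8
console.log(straight([2, 5, 7, 9, 11, 12, 14])); // [0]    -> no straight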

Find possible numbers in array that can sum to a target value

Given an array of numbers, for example [14,6,10], how can I find the possible combinations/pairs that add up to a given target value?
For example, with [14,6,10], I'm looking for a target value of 40.
My expected output would be:
10 + 10 + 6 + 14
14 + 14 + 6 + 6
10 + 10 + 10 + 10
*Order is not important
With that being said, this is what I tried so far:
function Sum(numbers, target, partial) {
  var s, n, remaining;
  partial = partial || [];

  s = partial.reduce(function (a, b) {
    return a + b;
  }, 0);

  if (s === target) {
    console.log("%s", partial.join("+"))
  }

  for (var i = 0; i < numbers.length; i++) {
    n = numbers[i];
    remaining = numbers.slice(i + 1);
    Sum(remaining, target, partial.concat([n]));
  }
}
>>> Sum([14,6,10],40);
// returns nothing
>>> Sum([14,6,10],24);
// return 14+10
It is not much use as it stands, since it only finds sums in which each number is used at most once.
So how can I do it?
You could either add the value at the current index again, as long as the running sum is smaller than the wanted sum, or proceed with the next index.
function getSum(array, sum) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length) return;
    iter(index, temp.concat(array[index]));
    iter(index + 1, temp);
  }

  var result = [];
  iter(0, []);
  return result;
}

console.log(getSum([14, 6, 10], 40));
For getting a limited result set, you could specify the length and check it in the exit condition.
function getSum(array, sum, limit) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length || temp.length >= limit) return;
    iter(index, temp.concat(array[index]));
    iter(index + 1, temp);
  }

  var result = [];
  iter(0, []);
  return result;
}

console.log(getSum([14, 6, 10], 40, 5));
TL;DR: Skip to Part II for the real thing.
Part I
Nina Scholz's answer to this fundamental problem shows us a beautiful manifestation of an algorithm. Honestly, it confused me a lot, for two reasons:
When I try [14,6,10,7,3] with a target of 500, it makes 36,783,575 calls to the iter function without blowing the call stack. Yet memory shows no significant usage at all.
My dynamic programming solution goes a little faster (or maybe not), but there is no way it can handle the above case without exhausting the 16 GB of memory.
So I shelved my solution and instead started investigating her code a little further in the dev tools, and discovered both its beauty and also a little bit of its shortcomings.
First, I believe this algorithmic approach, which includes a very clever use of recursion, might possibly deserve a name of its own. It's very memory efficient and only uses memory for the constructed result set. The stack dynamically grows and shrinks continuously, never coming anywhere close to its limit.
The problem is, while being very efficient, it still makes a huge number of redundant calls. Looking into that, with a slight modification the 36,783,575 calls to iter can be cut down to 20,254,744... about 45% fewer, which yields much faster code. The catch is that the input array must be sorted ascending.
So here comes a modified version of Nina's algorithm. (Be patient... it will take around 25 seconds to finish.)
function getSum(array, sum) {
  function iter(index, temp) {
    cnt++ // counting iter calls -- remove in production code
    var s = temp.reduce((a, b) => a + b, 0);
    sum - s >= array[index] && iter(index, temp.concat(array[index]));
    sum - s >= array[index + 1] && iter(index + 1, temp);
    s === sum && result.push(temp);
    return;
  }

  var result = [];
  array.sort((x, y) => x - y); // a very cheap operation, considering the input array should be small for reasonable output
  iter(0, []);
  return result;
}
var cnt = 0,
    arr = [14, 6, 10, 7, 3],
    tgt = 500,
    res;

console.time("combos");
res = getSum(arr, tgt);
console.timeEnd("combos");
console.log(`source numbers are ${arr}
found ${res.length} unique ways to sum up to ${tgt}
iter function has been called ${cnt} times`);
Part II
Even though I was impressed with the performance, I wasn't comfortable with the above solution, for no solid reason that I can name. The way it relies on side effects and the very hard to understand double recursion disturbed me.
So here comes my approach to this question. It is many times more efficient than the accepted solution, even though I am going functional in JS. We still have room to make it a little faster with ugly imperative tricks.
The difference is:
Given numbers: [14,6,10,7,3]
Target sum: 500
Accepted answer:
Number of possible answers: 172686
Resolves in: 26357ms
Recursive calls count: 36783575
Answer below:
Number of possible answers: 172686
Resolves in: 1000ms
Recursive calls count: 542657
function items2T([n, ...ns], t) {
  cnt++ // remove cnt in production code
  var c = ~~(t / n);
  return ns.length
    ? Array(c + 1).fill()
        .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
    : t % n
      ? []
      : [Array(c).fill(n)];
};

var cnt = 0, result;

console.time("combos");
result = items2T([14, 6, 10, 7, 3], 500)
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);
Another important point: if the given array is sorted descending, the number of recursive iterations is reduced (sometimes greatly), allowing us to squeeze more juice out of this lemon. Compare the above with the run below, where the input array is sorted descending.
function items2T([n, ...ns], t) {
  cnt++ // remove cnt in production code
  var c = ~~(t / n);
  return ns.length
    ? Array(c + 1).fill()
        .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
    : t % n
      ? []
      : [Array(c).fill(n)];
};

var cnt = 0, result;

console.time("combos");
result = items2T([14, 10, 7, 6, 3], 500)
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);

How to explain cached fibonacci algorithm complexity

I'm trying to correctly explain the complexity of the cached Fibonacci algorithm. Here is the code (https://jsfiddle.net/msthhbgy/2/):
function allFib(n) {
  var memo = [];
  for (var i = 0; i < n; i++) {
    console.log(i + ":" + fib(i, memo))
  }
}

function fib(n, memo) {
  if (n < 0) return 0;
  else if (n === 1) return 1;
  else if (memo[n]) return memo[n];

  memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
  return memo[n];
}

allFib(5);
The solution is taken from "Cracking the Coding Interview" and adapted to JavaScript.
So here is a "not very nice" tree of function calls
I was thinking like this: "The leftmost branch (the bold one) is where the evaluation actually happens, and it is exactly the number passed to allFib the first time, so the complexity is O(n). Everything to the right will be taken from the cache and will not require extra function calls." Is that correct? Also, how do I connect this to tree "theory"? The depth and the height of the tree in this case are 4, not 5 (close to n but not equal to it). I want the answer to be not just intuitive but more rigorous.
Here is a function that really uses the cache:
function Fibonacci() {
  var memo = [0, 1];
  this.callCount = 0;
  this.calc = function (n) {
    this.callCount++;
    return n <= 0
      ? 0
      : memo[n] || (memo[n] = this.calc(n - 1) + this.calc(n - 2));
  }
}

var fib = new Fibonacci();

console.log('fib(15) = ', fib.calc(15));
console.log('calls made: ', fib.callCount);
fib.callCount = 0; // reset counter
console.log('fib(5) = ', fib.calc(5));
console.log('calls made: ', fib.callCount);
fib.callCount = 0;
console.log('fib(18) = ', fib.calc(18));
console.log('calls made: ', fib.callCount);
The number of function calls made is:
(n - min(i, n)) * 2 + 1
where i is the index of the last entry in memo.
This you can see as follows with the example of n = 18 and i = 15:
The calls are made in this order:
calc(18)
  calc(17) // this.calc(n-1) with n=18
    calc(16) // this.calc(n-1) with n=17
      calc(15) // this.calc(n-1) with n=16, this can be returned from memo
      calc(14) // this.calc(n-2) with n=16, this can be returned from memo
    calc(15) // this.calc(n-2) with n=17, this can be returned from memo
  calc(16) // this.calc(n-2) with n=18, this can be returned from memo
The general pattern is that this.calc(n-1) and this.calc(n-2) are called just as many times as each other (of course), plus the original call calc(n).
Here is an animation for when you call fib.calc for the first time as fib.calc(5). The arrows show the calls that are made. The more to the left, the deeper the recursion. The bubbles are colored when the corresponding result is stored in memo:
This evidently is O(n) when i is a given constant.
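As a quick sanity check of that linear behaviour (this snippet is not from the original answer; it just reuses the Fibonacci class defined above): for a fresh instance, memo starts as [0, 1], so i = 1 and the formula predicts (n - 1) * 2 + 1 calls.

[5, 10, 20, 40].forEach(n => {
  const f = new Fibonacci(); // fresh instance, memo = [0, 1]
  f.calc(n);
  console.log(n, f.callCount); // 9, 19, 39, 79 -> grows linearly with n
});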
First, check for negative n and move the value to zero.
Then check whether the value is cached; if so, take the cached value. If not, compute it, assign it to the cache, and return the result.
For the special cases of n === 0 or n === 1, assign n.
function fibonacci(number) {
  function f(n) {
    return n in cache
      ? cache[n]
      : cache[n] = n === 0 || n === 1 ? n : f(n - 1) + f(n - 2);
  }

  var cache = [];
  return f(number);
}

console.log(fibonacci(15));
console.log(fibonacci(5));
Here is the version with predefined values in the cache, as Thomas suggested.
function fibonacci(number) {
  function f(n) {
    return n in cache
      ? cache[n]
      : cache[n] = f(n - 1) + f(n - 2);
  }

  var cache = [0, 1];
  return f(number);
}

console.log(fibonacci(15));
console.log(fibonacci(5));
