JavaScript: find the first number in an array that is <= a given number

I have an array of prime numbers:
const primes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]
I want to find the largest number in this list that is <= a given number.
For example ... getHighestPrimeNumber(58) ... should return 53, the prime with the greatest value that is also less than or equal to 58.
Expected results:
getHighestPrimeNumber(58) === 53
getHighestPrimeNumber(53) === 53
getHighestPrimeNumber(52) === 47
My current approach is to iterate through the prime numbers, but this is very inefficient, especially given that there may be 10,000+ numbers in the list. Thanks!
Vanilla JS or Lodash is fine

Since you posted this with the lodash tag, just FYI that this is trivial with _.sortedIndex:
const primes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]
const closestPrime = (n) => {
  let index = _.sortedIndex(primes, n)
  return primes[index] == n ? primes[index] : primes[index - 1]
}
console.log(closestPrime(58))
console.log(closestPrime(53))
console.log(closestPrime(52))
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.min.js"></script>

Seems like a case for a divide-and-conquer approach, something like a binary search:
const primes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]
function findHighestPrimeNumberRecursive(arr, ceiling) {
  if (arr.length === 1) {
    // only one candidate left
    return arr[0];
  }
  const midpoint = Math.floor(arr.length / 2);
  if (arr[midpoint] === ceiling) {
    // we found it!
    return arr[midpoint];
  }
  if (arr[midpoint] > ceiling) {
    // midpoint is too big; the answer lies in the lower half
    return findHighestPrimeNumberRecursive(arr.slice(0, midpoint), ceiling);
  }
  // midpoint is a candidate; keep it and search the upper half
  return findHighestPrimeNumberRecursive(arr.slice(midpoint), ceiling);
}
function getHighestPrimeNumber(ceiling) {
  return findHighestPrimeNumberRecursive(primes, ceiling);
}
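A quick check against the expected results (my addition, using the corrected version above):

console.log(getHighestPrimeNumber(58)); // 53
console.log(getHighestPrimeNumber(53)); // 53
console.log(getHighestPrimeNumber(52)); // 47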

This is a good task for a binary search:
const bisect = (needle, haystack) => {
  let lo = 0;
  let hi = haystack.length;
  while (lo <= hi) {
    const mid = ~~((hi - lo) / 2 + lo);
    if (haystack[mid] === needle) {
      return needle;
    } else if (haystack[mid] > needle) {
      hi = mid - 1;
    } else {
      lo = mid + 1;
    }
  }
  return haystack[hi];
};
const getHighestPrimeNumber = n => {
  const primes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97];
  return bisect(n, primes);
};
console.log(getHighestPrimeNumber(58) === 53);
console.log(getHighestPrimeNumber(53) === 53);
console.log(getHighestPrimeNumber(52) === 47);
A couple of notes:
You'll likely want to make your prime number array a parameter to getHighestPrimeNumber so it isn't created and garbage collected on every function call. At this point, you might as well just call the binary search directly.
If you're concerned about queries over and under the bounds of the array, you can handle those according to some policy, for example: return haystack[Math.min(hi,haystack.length-1)];.
Binary search is O(log n) time complexity. Set lookups are O(1), so you may see a performance boost if you maintain a Set in addition to the array and try a lookup there first, as in the sketch below.
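A minimal sketch combining both notes (my addition; it assumes primes and bisect are defined at module level as above, and the Set only speeds up exact hits):

const primeSet = new Set(primes);
const getHighestPrimeNumber = n => primeSet.has(n) ? n : bisect(n, primes);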


Can we change the iteration direction of Array.find?

Look at this crazy question... I have an array with 30,000 items, and I have to run something like this over it:
const bigArray = [
  { createdAt: 1 },
  { createdAt: 2 },
  { createdAt: 3 },
  // and so on... 30,000
];
const found = bigArray.find(x => x.createdAt > 29950)
And the thing here is that I know that 100% of the time, that element will be at index 29,950 (approx.), because the array is already sorted by createdAt (coming from the backend).
How does .find work? Does it iterate starting from the first element? Is there a way to say "I know it's closer to the end... change your behavior"?
Of course there is the alternative of doing something like:
bigArray.reverse()
const prevIndex = bigArray.findIndex(x => x.createdAt <= 29950);
const found = bigArray[prevIndex - 1];
bigArray.reverse()
But I'm not sure if that's actually going to be worse (because of the multiple unnecessary iterations we'll also have there... I guess).
Who can give me some clues about this?
It's not that I have a bug here... not even a performance issue (because 30,000 is not that much), but it feels like there should be something for this, and I've never heard about it in ~16 years of working with JavaScript.
Thanks so much!
Based upon the documentation here, it appears that find is O(n) time complexity, where n is the length of the array.
Since your elements are sorted, you can try to do binary search and reduce time complexity to O(log n).
This is the basic binary search iterative algorithm:
function binarySearchIterative(nums, target) {
  let res = -1;
  let left = 0;
  let right = nums.length - 1;
  while (left <= right && res === -1) {
    const mid = Math.floor(left + (right - left) / 2);
    if (nums[mid] === target) {
      res = mid;
    } else if (nums[mid] > target) {
      right = mid - 1;
    } else {
      left = mid + 1;
    }
  }
  return res;
}
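Since the asker's array holds objects sorted by createdAt, and the goal is the first element past a threshold rather than an exact match, a lower-bound variant fits better (my sketch; the helper name is mine):

function firstIndexAfter(arr, target) {
  let left = 0;
  let right = arr.length;
  while (left < right) {
    const mid = Math.floor(left + (right - left) / 2);
    if (arr[mid].createdAt > target) {
      right = mid;
    } else {
      left = mid + 1;
    }
  }
  return left; // arr.length if no element qualifies
}
const found = bigArray[firstIndexAfter(bigArray, 29950)];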
I'm not aware of any option for Array.prototype.findIndex that starts from the end. I do know for sure that using Array.prototype.reverse is very expensive, so you could write your own algorithm like this if you know you're likely to find the result near the end:
const bigArray = [
  { createdAt: 1 },
  { createdAt: 2 },
  { createdAt: 3 }
];
// Add the function to Array.prototype
Array.prototype.findIndexFromEnd = function (cond) {
  for (let i = this.length - 1; i >= 0; i--) {
    if (cond(this[i])) return i;
  }
  return -1;
}
// Gives 1 as expected
console.log(bigArray.findIndexFromEnd(x => x.createdAt == 2));
// Or use an external function if you don't want to edit the prototype
function findIndexFromEnd(array, cond) {
  for (let i = array.length - 1; i >= 0; i--) {
    if (cond(array[i])) return i;
  }
  return -1;
}
// Gives 1 as expected
console.log(findIndexFromEnd(bigArray, (x) => x.createdAt == 2));
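For what it's worth, engines that support ES2023 (Node 18+, current browsers) now ship Array.prototype.findLastIndex, which scans from the end natively:

// first element with createdAt > 29950, located by scanning from the end:
const idx = bigArray.findLastIndex(x => x.createdAt <= 29950);
const found = bigArray[idx + 1];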

How can I make my code below more efficient in terms of performance?

The code below compares two arrays and checks whether the elements at matching indices in both arrays have the same prime factors. If that is true, the count of matching pairs ("matching") increases by 1.
/* eslint-disable no-console */
const primeFactors = (n) => {
  let number = n;
  const factors = [];
  let divisor = 2;
  while (number >= 2) {
    if (number % divisor === 0) {
      factors.push(divisor);
      number /= divisor;
    } else {
      divisor += 1;
    }
  }
  return factors;
};
const solution = (A, B) => {
  let matching = 0;
  for (let index = 0; index < A.length; index += 1) {
    const a = A[index];
    const b = B[index];
    let aFactors = primeFactors(a);
    aFactors = new Set(aFactors);
    aFactors = Array.from(aFactors);
    aFactors = aFactors.sort((first, second) => first - second);
    let bFactors = primeFactors(b);
    bFactors = new Set(bFactors);
    bFactors = Array.from(bFactors);
    bFactors = bFactors.sort((first, second) => first - second);
    if (JSON.stringify(aFactors) === JSON.stringify(bFactors)) {
      matching += 1;
    }
  }
  return matching;
};
This will return 1, since only 15 and 75 at matching indices have the same prime factors (3 and 5 each):
console.log(solution([15, 10, 3], [75, 30, 5]));
How can I make this algorithm more efficient? It currently has an efficiency score of 84%, having failed two optimization tests for large data sets.
Converting an array to a Set and back to an array seems like a lot of wasted effort. Why not eliminate duplicates right in primeFactors?
while (number >= 2) {
  if (number % divisor === 0) {
    factors.push(divisor);
    while (number % divisor === 0) {
      number /= divisor;
    }
  } else {
    divisor += 1;
  }
}
There is no need to sort the arrays obtained this way; they are already sorted. There is also no need to stringify them; just compare them element by element, as in the sketch below.
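A minimal helper for that comparison (my sketch, not from the original answer):

const sameFactors = (a, b) =>
  a.length === b.length && a.every((value, i) => value === b[i]);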
The fundamental speedup comes from the observation that two numbers have the same set of prime factors if and only if each has the same set of prime factors as their gcd. The gcd is very easy to compute; it also tends to be much smaller, and hence much easier to decompose, than its arguments. Besides, it requires only one decomposition rather than two, as in your solution. Consider:
same_prime_composition(a, b)
  g = gcd(a, b)
  primes = primeFactors(g)
  return is_decomposable(a/g, primes) && is_decomposable(b/g, primes)

is_decomposable(x, primes)
  for p in primes
    while (x % p == 0)
      x /= p
  return x === 1
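A direct JavaScript translation of that pseudocode (my sketch; it assumes the de-duplicating primeFactors above, and the helper names are mine):

const gcd = (a, b) => b === 0 ? a : gcd(b, a % b);

const isDecomposable = (x, primes) => {
  for (const p of primes) {
    while (x % p === 0) x /= p;
  }
  return x === 1;
};

const samePrimeComposition = (a, b) => {
  const g = gcd(a, b);
  const primes = primeFactors(g);
  return isDecomposable(a / g, primes) && isDecomposable(b / g, primes);
};

console.log(samePrimeComposition(15, 75)); // true  (both are products of 3 and 5)
console.log(samePrimeComposition(10, 30)); // false (30 has the extra prime 3)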
It might be beneficial to compute the prime numbers beforehand.
I don't know if javascript supports divmod. If it does, there is even more room for optimization.
A quick speedup is to have primeFactors stop at sqrt(n). If number is still greater than 1 at that point, it is necessarily the last prime factor.
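A sketch of primeFactors with that square-root cutoff (my variant of the loop above):

const primeFactors = (n) => {
  const factors = [];
  let number = n;
  let divisor = 2;
  while (divisor * divisor <= number) {
    if (number % divisor === 0) {
      factors.push(divisor);
      while (number % divisor === 0) number /= divisor;
    } else {
      divisor += 1;
    }
  }
  if (number > 1) factors.push(number); // the remainder is the last prime factor
  return factors;
};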

Is there a way to avoid number to string conversion & nested loops for performance?

I just took a coding test online and this one question really bothered me. My solution was correct but was rejected for being unoptimized. The question is as follows:
Write a function combineTheGivenNumber taking two arguments:
numArray: number[]
num: a number
The function should check all the concatenation pairs that can result in a number equal to num and return their count.
E.g. if numArray = [1, 212, 12, 12] & num = 1212, then the return value of combineTheGivenNumber will be 3.
The pairs are as follows:
numArray[0]+numArray[1]
numArray[2]+numArray[3]
numArray[3]+numArray[2]
The function I wrote for this purpose is as following:
function combineTheGivenNumber(numArray, num) {
  // convert all numbers to strings for easy concatenation
  numArray = numArray.map(e => e + '');
  // also convert `num` to a string for easy comparison
  num = num + '';
  let pairCounts = 0;
  // iterate over the array to get pairs
  numArray.forEach((e, i) => {
    numArray.forEach((f, j) => {
      if (i !== j && num === (e + f)) {
        pairCounts++;
      }
    });
  });
  return pairCounts;
}
console.log('Test 1: ', combineTheGivenNumber([1,212,12,12],1212));
console.log('Test 2: ', combineTheGivenNumber([4,21,42,1],421));
From my experience, I know conversion of numbers to strings is slow in JS, but I am not sure whether my approach reflects a lack of knowledge or whether the tester is ignoring this fact. Can anyone suggest further optimization of the code snippet?
Eliminating the number-to-string conversion should be a significant speed boost, but I am not sure how to check for concatenated numbers otherwise.
Eliminating the number-to-string conversion should be a significant speed boost
No, it won't.
Firstly, you're not converting strings to numbers anywhere; more importantly, the exercise asks for concatenation, so working with strings is exactly what you should do. No idea why they're even passing numbers. You're doing fine already by doing the conversion only once for each number input, not every time you form a pair. And last but not least, avoiding the conversion will not be a significant improvement.
To get a significant improvement, you should use a better algorithm. derpirscher is correct in his comment: "[It's] the nested loop checking every possible combination which hits the time limit. For instance, for your example, when the outer loop points at 212 you don't need to do any checks, because regardless of whatever you concatenate to 212, it can never result in 1212."
So use
let pairCounts = 0;
numArray.forEach((e, i) => {
  if (num.startsWith(e)) {
  //^^^^^^^^^^^^^^^^^^^^^
    numArray.forEach((f, j) => {
      if (i !== j && num === e + f) {
        pairCounts++;
      }
    });
  }
});
You might do the same with suffixes, but it becomes more complicated to rule out concatenation to oneself there.
Optimising further, you can even achieve a linear complexity solution by putting the strings in a lookup structure, then when finding a viable prefix just checking whether the missing part is an available suffix:
function combineTheGivenNumber(numArray, num) {
  const strings = new Map();
  for (const num of numArray) {
    const str = String(num);
    strings.set(str, 1 + (strings.get(str) ?? 0));
  }
  const whole = String(num);
  let pairCounts = 0;
  for (const [prefix, pCount] of strings) {
    if (!whole.startsWith(prefix))
      continue;
    const suffix = whole.slice(prefix.length);
    if (strings.has(suffix)) {
      let sCount = strings.get(suffix);
      if (suffix == prefix) sCount--; // no self-concatenation
      pairCounts += pCount * sCount;
    }
  }
  return pairCounts;
}
(The proper handling of duplicates is a bit tricky: e.g. for [12, 12] and 1212, the prefix "12" has count 2 but each copy may only pair with the other copy, giving 2 * (2 - 1) = 2 pairs.)
I like your approach of going to strings early. I can suggest a couple of simple optimizations.
You only need the numbers that are valid "first parts" and those that are valid "second parts"
You can use the javascript .startsWith and .endsWith to test for those conditions. All other strings can be thrown away.
The lengths of the strings must add up to the length of the desired answer
Suppose your target string is 8 digits long. If you have 2 valid 3-digit "first parts", then you only need to know how many valid 5-digit "second parts" you have. Suppose you have 9 of them. Those first parts can only combine with those second parts, and give you 2 * 9 = 18 valid pairs.
You don't actually need to keep the strings!
It struck me that if you know you have 2 valid 3-digit "first parts", you don't need to keep those actual strings. Knowing that they are valid 3-digit first parts is all you need to know.
So let's build an array containing:
How many valid 1-digit first parts do we have?,
How many valid 2-digit first parts do we have?,
How many valid 3-digit first parts do we have?,
etc.
And similarly an array containing the number of valid 1-digit second parts, etc.
X first parts and Y second parts can be combined in X * Y ways
Except if the parts are the same length, in which case a string that is both a valid first part and a valid second part must not be paired with itself. Subtracting those self-pairs gives X * Y - B, where B counts the strings of that length that are valid on both sides. (When the two halves of the target are identical, every such string is valid on both sides and this reduces to the familiar X * (Y - 1).)
So not only do we not need to keep the strings, but we only need to do the multiplication of the appropriate elements of the arrays.
5 1-char first parts & 7 3-char second parts = 5 * 7 = 35 pairs
6 2-char first parts & 6 2-char second parts, all valid on both sides = 6 * 6 - 6 = 30 pairs
etc
So this becomes extremely easy. One pass over the strings, tallying the "first part" and "second part" matches of each length. This can be done with an if and a ++ of the relevant array element.
Then one pass over the lengths, which will be very quick as the array of lengths will be very much shorter than the array of actual strings.
function combineTheGivenNumber(numArray, num) {
  const sElements = numArray.map(e => "" + e);
  const sTarget = "" + num;
  const targetLength = sTarget.length;
  const startsByLen = new Array(targetLength).fill(0);
  const endsByLen = new Array(targetLength).fill(0);
  const bothByLen = new Array(targetLength).fill(0);
  sElements.forEach(sElement => {
    if (sElement.length >= targetLength) return; // too long to be half of a pair
    const starts = sTarget.startsWith(sElement);
    const ends = sTarget.endsWith(sElement);
    if (starts) startsByLen[sElement.length]++;
    if (ends) endsByLen[sElement.length]++;
    if (starts && ends) bothByLen[sElement.length]++;
  });
  // We can now throw away the strings. We have separate arrays:
  // startsByLen[1] is the count of strings (without attempting to remove duplicates) which are the first character of the required answer
  // startsByLen[2] similarly the count of strings which are the first 2 characters of the required answer
  // etc.
  // and endsByLen[1] is the count of strings which are the last character ...
  // and endsByLen[2] is the count of strings which are the last 2 characters, etc.
  // bothByLen counts strings which are valid on both sides, to rule out self-pairs below
  let pairCounts = 0;
  for (let firstElementLength = 1; firstElementLength < targetLength; firstElementLength++) {
    const secondElementLength = targetLength - firstElementLength;
    if (firstElementLength === secondElementLength) {
      // same length: a string that is both a valid first and second part cannot pair with itself
      pairCounts += startsByLen[firstElementLength] * endsByLen[secondElementLength] - bothByLen[firstElementLength];
    } else {
      pairCounts += startsByLen[firstElementLength] * endsByLen[secondElementLength];
    }
  }
  return pairCounts;
}
console.log('Test 1: ', combineTheGivenNumber([1, 212, 12, 12], 1212));
console.log('Test 2: ', combineTheGivenNumber([4, 21, 42, 1], 421));
Depending on the setup, integer slicing can be marginally faster, although in the end it falls short.
Also, when tested on higher N values, the previous answer exploded in jsfiddle, possibly a memory error.
As far as I have tested with both random and hand-crafted values, my solution holds. It is based on the observation that if X and Y concatenated == Z, then the following must be true:
Z - Y == X * 10^(floor(log10(Y)) + 1)
an example of this:
1212 - 12 = 1200
12 * 10^(floor(log10(12)) + 1) = 12 * 10^(1+1) = 12 * 100 = 1200
Now in theory, this should be faster than manipulating strings, and in many other languages it most likely would be. However in JavaScript, as I just learned, the situation is a bit more complicated. JavaScript does some weird things with casting that I haven't figured out yet. In short: when I tried storing the numbers (and their counts) in a Map, the code got significantly slower, making any possible gains from these logarithm shenanigans evaporate. Furthermore, storing them in a custom-crafted data structure isn't guaranteed to be faster, since you have to build it, etc. Also, it would be quite a lot of work.
As it stands, this log comparison is ~8 times faster in a case without (or with just a few) matches, since the quadratic factor has yet to kick in. As long as the possible postfix count isn't too high, it will outperform the linear solution. Unfortunately it is still quadratic in nature, with the breaking point depending on the total number of strings as well as their length.
So if you are searching for a needle in a haystack, for example looking for a few pairs in a huge heap of numbers, this can help. In the other case of searching for many matches, it won't. Similarly, if the input array were sorted, you could use binary search to push the breaking point further up.
In the end, unless you manage to figure out how to store ints in a Map (or some custom implementation of one) in a way that doesn't completely kill the performance, the linear solution of the previous answer will be faster. It can still be useful even with the performance hit if your computation is going to be memory heavy: storing numbers takes less space than storing strings.
var log10 = Math.log(10)
function log10floored(num) {
  return Math.floor(Math.log(num) / log10)
}
function combineTheGivenNumber(numArray, num) {
  let count = 0
  for (var i = 0; i != numArray.length; i++) {
    let portion = num - numArray[i]
    let removedPart = Math.pow(10, log10floored(numArray[i]))
    if (portion % (removedPart * 10) == 0) {
      for (var j = 0; j != numArray.length; j++) {
        if (j != i && portion / (removedPart * 10) == numArray[j]) {
          count += 1
        }
      }
    }
  }
  return count
}
// The previous solution, which I used for timing, comparison and check purposes
function combineTheGivenNumber2(numArray, num) {
  const strings = new Map();
  for (const num of numArray) {
    const str = String(num);
    strings.set(str, 1 + (strings.get(str) ?? 0));
  }
  const whole = String(num);
  let pairCounts = 0;
  for (const [prefix, pCount] of strings) {
    if (!whole.startsWith(prefix))
      continue;
    const suffix = whole.slice(prefix.length);
    if (strings.has(suffix)) {
      let sCount = strings.get(suffix);
      if (suffix == prefix) sCount--; // no self-concatenation
      pairCounts += pCount * sCount;
    }
  }
  return pairCounts;
}
var myArray = []
for (let i = 0; i != 10000000; i++) {
  myArray.push(Math.floor(Math.random() * 1000000))
}
var a = new Date()
var t1 = a.getTime()
console.log('Test 1: ', combineTheGivenNumber(myArray, 15285656));
var b = new Date()
var t2 = b.getTime()
console.log('Test 2: ', combineTheGivenNumber2(myArray, 15285656));
var c = new Date()
var t3 = c.getTime()
console.log('Test1 time: ', t2 - t1)
console.log('Test2 time: ', t3 - t2)
Small update
As long as you are willing to take a performance hit at setup and settle for ~2x the performance, a simple "hashing" table can help. (Hash tables are nice and tidy; this is a simple modulo lookup table, but the principle is similar.)
Technically this isn't linear; practically it is enough for most cases, unless you are extremely unlucky and all your numbers fall into the same bucket.
function combineTheGivenNumber(numArray, num) {
  let count = 0
  let size = 1000000
  let numTable = new Array(size)
  // store element indices in the buckets, so the self-pair check below compares real indices
  for (var i = 0; i != numArray.length; i++) {
    let idx = numArray[i] % size
    if (numTable[idx] == undefined) {
      numTable[idx] = [i]
    } else {
      numTable[idx].push(i)
    }
  }
  for (var i = 0; i != numArray.length; i++) {
    let portion = num - numArray[i]
    let removedPart = Math.pow(10, log10floored(numArray[i]))
    if (portion % (removedPart * 10) == 0) {
      let bucket = numTable[portion / (removedPart * 10) % size]
      if (bucket != undefined) {
        for (var j = 0; j != bucket.length; j++) {
          if (bucket[j] != i && portion / (removedPart * 10) == numArray[bucket[j]]) {
            count += 1
          }
        }
      }
    }
  }
  return count
}
Here's a simplified and partially optimised approach with 2 loops:
// let's optimise 'combineTheGivenNumber', where
// a=array of numbers AND n=number to match
const ctgn = (a, n) => {
  // convert our given number to a string using `toString` for clarity
  // this isn't entirely necessary but means we can use strict equality later
  const ns = n.toString();
  // reduce is an efficient mechanism to return a value based on an array, giving us
  // _=[accumulator], na=[array number] and i=[index]
  return a.reduce((_, na, i) => {
    // convert our 'array number' to an 'array number string' for later concatenation
    const nas = na.toString();
    // iterate back over our array of numbers ... we're using an optimised/reverse loop
    for (let ii = a.length - 1; ii >= 0; ii--) {
      // skip the current array number
      if (i === ii) continue;
      // number + string === string, which lets us strictly compare our 'number to match'
      // if there's a match we increment the accumulator
      if (a[ii] + nas === ns) ++_;
    }
    // we're done
    return _;
  }, 0);
}
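A quick check against the question's test cases (my addition; note the second case has two valid pairs, 4+21 and 42+1):

console.log('Test 1: ', ctgn([1, 212, 12, 12], 1212)); // 3
console.log('Test 2: ', ctgn([4, 21, 42, 1], 421)); // 2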

Trying to find sum of factorial number using big-int npm gives wrong answer

I am doing a Euler problem where you need to find the sum of the digits of a factorial. So, for example, 10! = 3628800, and 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
I wrote this using the big-int library to deal with large numbers.
factorialize = (num) => {
  if (num < 0) {
    return -1;
  } else if (num == 0) {
    return 1;
  } else {
    return (num * factorialize(num - 1));
  }
}
findFactorialSum = (x) => {
  let total = 0;
  let result = bigInt(factorialize(x));
  // let result = factorialize(x).toString().split("").map(el => parseInt(el));
  // result.split("");
  let converted = result.toString().split("").map(el => parseInt(el));
  console.log(converted);
  for (let i = 0; i <= converted.length - 1; i++) {
    total = total + converted[i]
  }
  console.log(total);
  return total;
}
This works for small factorials and gives the right answers, but as soon as you go for something bigger than 12 it gives wrong answers. For example, for 100 I get 683, but the answer according to the site should be 648 ><. I am guessing the big-int library I am using returns the wrong number, but it worked for smaller numbers so I don't see what the issue can be.
I'm assuming the BigInt library you are using takes a big number as a string. Something like
bigint("23837458934509644434537504952635462348")
You are doing
let result = bigInt(factorialize(x));
The call to factorialize(100) has already overflowed JavaScript's MAX_SAFE_INTEGER and passes an already-imprecise value to the bigInt call.
You have to use BigInts to calculate the factorial as well.
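You can see the cutoff directly with the asker's own factorialize (a minimal illustration, my addition):

console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(Number.isSafeInteger(factorialize(18))); // true
console.log(Number.isSafeInteger(factorialize(19))); // false: 19! exceeds 2^53, so its digits can no longer be trusted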
In addition to Jeril's answer, which identifies your culprit, you can also use reduce to calculate the sum of an array. Demo:
const factorialize = (bigNum) => {
  if (bigNum.lt(0)) {
    return bigInt(-1);
  } else if (bigNum.eq(0)) {
    return bigInt(1);
  } else {
    return bigNum.times(factorialize(bigNum.minus(1)));
  }
};
const findFactorialSum = (x) => {
  const result = factorialize(bigInt(x)),
        total = result.toString().split('')
                      .reduce((sum, digit) => sum + +digit, 0);
  console.log(result.toString().split('').join('+') + ' = ' + total);
  return total;
};
findFactorialSum(10); // 27
findFactorialSum(15); // 45
findFactorialSum(20); // 54
<script src="https://peterolson.github.io/BigInteger.js/BigInteger.min.js"></script>

Find possible numbers in array that can sum to a target value

Given an array of numbers, for example [14, 6, 10], how can I find the possible combinations/pairs that add up to a given target value?
For example, with [14, 6, 10] and a target value of 40,
my expected output will be
10 + 10 + 6 + 14
14 + 14 + 6 + 6
10 + 10 + 10 + 10
*Order is not important
With that being said, this is what I tried so far:
function Sum(numbers, target, partial) {
  var s, n, remaining;
  partial = partial || [];
  s = partial.reduce(function (a, b) {
    return a + b;
  }, 0);
  if (s === target) {
    console.log("%s", partial.join("+"))
  }
  for (var i = 0; i < numbers.length; i++) {
    n = numbers[i];
    remaining = numbers.slice(i + 1);
    Sum(remaining, target, partial.concat([n]));
  }
}
>>> Sum([14,6,10],40);
// returns nothing
>>> Sum([14,6,10],24);
// return 14+10
It is actually useless for my case, since it only finds combinations where each number is used at most once.
So how to do it?
You could keep adding the value at the current index as long as the sum is smaller than the wanted sum, or proceed with the next index.
function getSum(array, sum) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length) return;
    iter(index, temp.concat(array[index]));
    iter(index + 1, temp);
  }
  var result = [];
  iter(0, []);
  return result;
}
console.log(getSum([14, 6, 10], 40));
For getting a limited result set, you could specify the length and check it in the exit condition.
function getSum(array, sum, limit) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length || temp.length >= limit) return;
    iter(index, temp.concat(array[index]));
    iter(index + 1, temp);
  }
  var result = [];
  iter(0, []);
  return result;
}
console.log(getSum([14, 6, 10], 40, 5));
TL;DR: Skip to Part II for the real thing.
Part I
Nina Scholz's answer to this fundamental problem shows us a beautiful manifestation of an algorithm. Honestly, it confused me a lot, for two reasons:
When I try [14,6,10,7,3] with a target of 500, it makes 36,783,575 calls to the iter function without blowing the call stack, yet memory shows no significant usage at all.
My dynamic programming solution goes a little faster (or maybe not), but there is no way it can handle the above case without exhausting the 16GB memory.
So I shelved my solution and instead started investigating her code a little further in the dev tools, and discovered both its beauty and also a little of its shortcomings.
First, I believe this algorithmic approach, which includes a very clever use of recursion, might possibly deserve a name of its own. It's very memory efficient and only uses up memory for the constructed result set. The stack dynamically grows and shrinks continuously, staying nowhere close to its limit.
The problem is that, while being very efficient, it still makes huge amounts of redundant calls. Looking into that, with a slight modification the 36,783,575 calls to iter can be cut down to 20,254,744, around 45% fewer, which yields much faster code. The catch is that the input array must be sorted ascending.
So here comes a modified version of Nina's algorithm. (Be patient... it will take like 25 secs to finalize.)
function getSum(array, sum) {
  function iter(index, temp) {
    cnt++ // counting iter calls -- remove in production code
    var s = temp.reduce((a, b) => a + b, 0);
    sum - s >= array[index] && iter(index, temp.concat(array[index]));
    sum - s >= array[index + 1] && iter(index + 1, temp);
    s === sum && result.push(temp);
    return;
  }
  var result = [];
  array.sort((x, y) => x - y); // a very cheap operation, considering the input array should be small for reasonable output
  iter(0, []);
  return result;
}
var cnt = 0,
    arr = [14, 6, 10, 7, 3],
    tgt = 500,
    res;
console.time("combos");
res = getSum(arr,tgt);
console.timeEnd("combos");
console.log(`source numbers are ${arr}
found ${res.length} unique ways to sum up to ${tgt}
iter function has been called ${cnt} times`);
Part II
Even though I was impressed with the performance, I wasn't comfortable with the above solution, for no solid reason that I can name. The way it works through side effects, and the double recursion that is very hard to understand, disturbed me.
So here comes my approach to this question. It is many times more efficient than the accepted solution, even though I am going functional in JS. We still have room to make it a little faster in ugly imperative ways.
The difference is;
Given numbers: [14,6,10,7,3]
Target Sum: 500
Accepted Answer:
Number of possible answers: 172686
Resolves in: 26357ms
Recursive calls count: 36783575
Answer Below:
Number of possible answers: 172686
Resolves in: 1000ms
Recursive calls count: 542657
function items2T([n, ...ns], t) {
  cnt++ // remove cnt in production code
  var c = ~~(t / n);
  return ns.length ? Array(c + 1).fill()
                       .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
                   : t % n ? []
                           : [Array(c).fill(n)];
};
var cnt = 0, result;
console.time("combos");
result = items2T([14, 6, 10, 7, 3], 500)
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);
Another important point: if the given array is sorted descending, then the number of recursive iterations will be reduced (sometimes greatly), allowing us to squeeze more juice out of this lemon. Compare the above with the one below, where the input array is sorted descending.
function items2T([n, ...ns], t) {
  cnt++ // remove cnt in production code
  var c = ~~(t / n);
  return ns.length ? Array(c + 1).fill()
                       .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
                   : t % n ? []
                           : [Array(c).fill(n)];
};
var cnt = 0, result;
console.time("combos");
result = items2T([14, 10, 7, 6, 3], 500)
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);
