I'm looking to optimise my code to get even more speed. Currently the code detects any poker hand and takes ~350ms to do 32,000 iterations. The function that detects straights, however, seems to be taking the biggest individual chunk of time at about 160ms, so I'm looking to see if there is any way to optimise it further.
The whole thing was originally written in PHP, since that is what I'm most familiar with, but despite PHP 7's speed boost it still seems to be slower than JavaScript. What I found when translating to JavaScript, though, is that many of PHP's built-in functions are not present in JavaScript, which caused unforeseen slowdowns. It is still faster overall than the original PHP code, but I'm looking to see if it can be optimised more. Perhaps the answer is no, but I thought I'd check anyway.
I have written the functions range and arrays_equal since these are either missing from JavaScript or don't quite work properly.
function straight(handval) {
    if (arrays_equal(handval.slice(0, 4), [2, 3, 4, 5]) && handval[handval.length - 1] == 14) { // if it is Ace 2345
        return [4, 14];
    }
    else { // if normal straight
        for (let i = handval.length - 5; i >= 0; i--) {
            let subhand = handval.slice(i, i + 5);
            if (arrays_equal(subhand, range(subhand[0], subhand[subhand.length - 1]))) {
                return [4, subhand[4]];
            }
        }
        return [0];
    }
}
function arrays_equal(a,b) { return !!a && !!b && !(a<b || b<a); }
function range(start, end) {
    let arr = [];
    for (let i = start; i <= end; i++) {
        arr.push(i);
    }
    return arr;
}
handval comes in as a simple array of 5-7 numbers from 2-14 representing the cards. So for example it could be [6,8,4,11,13,2] or [8,4,13,8,10].
EDIT: The input is sorted as the function is called, with this code:
straight(handval.slice(0).sort(sortNumber));
function sortNumber(a,b) { return a - b; }
You could just go from right to left and count the number of sequential numbers:
function straight(handval) {
    if ([2, 3, 4, 5].every((el, i) => handval[i] === el) && handval[handval.length - 1] === 14)
        return [4, 14];

    let count = 1;
    for (let i = handval.length - 1; i >= 1; i -= 1) {
        if (handval[i] === handval[i - 1] + 1) {
            count += 1;
            if (count === 5) return [4, handval[i + 3]];
        } else {
            count = 1;
        }
    }
    return [0];
}
That is way faster because it:
1) does not create intermediate arrays on every iteration, which you did with range and slice;
2) does not compare arrays as strings, which requires a typecast and a string comparison and is way slower than comparing two numbers against each other;
3) does not check all 3 possible 5-card windows separately (positions 1-5, 2-6, 3-7), but does all of that in one run, so it makes a single pass over the hand instead of 3 x 5 comparisons.
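For example, a quick sanity check of the rewritten straight() (my own test hands, not from the original answer; the hand is sorted ascending before the call, as in the question's EDIT):
function sortNumber(a, b) { return a - b; }
console.log(straight([2, 5, 7, 9, 11, 13].sort(sortNumber)));    // [0]     -> no straight
console.log(straight([4, 11, 13, 5, 6, 7, 8].sort(sortNumber))); // [4, 8]  -> straight ending at 8
console.log(straight([9, 3, 4, 5, 2, 14].sort(sortNumber)));     // [4, 14] -> the Ace 2345 straight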
I just took a coding test online and this one question really bothered me. My solution was correct but was rejected for being unoptimized. The question is as follows:
Write a function combineTheGivenNumber taking two arguments:
numArray: number[]
num: a number
The function should check all the concatenation pairs that can result in making a number equal to num and return their count.
E.g. if numArray = [1, 212, 12, 12] and num = 1212, then combineTheGivenNumber should return 3.
The pairs are as follows:
numArray[0]+numArray[1]
numArray[2]+numArray[3]
numArray[3]+numArray[2]
The function I wrote for this purpose is as follows:
function combineTheGivenNumber(numArray, num) {
    // convert all numbers to strings for easy concatenation
    numArray = numArray.map(e => e + '');
    // also convert the `hay` to string for easy comparison
    num = num + '';
    let pairCounts = 0;
    // iterate over the array to get pairs
    numArray.forEach((e, i) => {
        numArray.forEach((f, j) => {
            if (i !== j && num === (e + f)) {
                pairCounts++;
            }
        });
    });
    return pairCounts;
}
console.log('Test 1: ', combineTheGivenNumber([1,212,12,12],1212));
console.log('Test 2: ', combineTheGivenNumber([4,21,42,1],421));
From my experience, I know conversion of numbers to strings is slow in JS, but I am not sure whether my approach is wrong, I'm lacking some knowledge, or the tester is ignoring this fact. Can anyone suggest further optimization of the code snippet?
Elimination of string to number to string will be a significant speed boost but I am not sure how to check for concatenated numbers otherwise.
"Elimination of string to number to string will be a significant speed boost"
No, it won't.
Firstly, you're not converting strings to numbers anywhere, but more importantly the exercise asks for concatenation, so working with strings is exactly what you should do. No idea why they're even passing numbers. You're doing fine already by doing the conversion only once for each number input, not every time you form a pair. And last but not least, avoiding the conversion will not be a significant improvement.
To get a significant improvement, you should use a better algorithm. @derpirscher is correct in his comment: "[It's] the nested loop checking every possible combination which hits the time limit. For instance for your example, when the outer loop points at 212 you don't need to do any checks, because regardless, whatever you concatenate to 212, it can never result in 1212".
So use
let pairCounts = 0;
numArray.forEach((e, i) => {
    if (num.startsWith(e)) {
    //      ^^^^^^^^^^^^^^^^^
        numArray.forEach((f, j) => {
            if (i !== j && num === e + f) {
                pairCounts++;
            }
        });
    }
});
You might do the same with suffixes, but it becomes more complicated to rule out concatenation to oneself there.
Optimising further, you can even achieve a linear complexity solution by putting the strings in a lookup structure, then when finding a viable prefix just checking whether the missing part is an available suffix:
function combineTheGivenNumber(numArray, num) {
    const strings = new Map();
    for (const num of numArray) {
        const str = String(num);
        strings.set(str, 1 + (strings.get(str) ?? 0));
    }
    const whole = String(num);
    let pairCounts = 0;
    for (const [prefix, pCount] of strings) {
        if (!whole.startsWith(prefix))
            continue;
        const suffix = whole.slice(prefix.length);
        if (strings.has(suffix)) {
            let sCount = strings.get(suffix);
            if (suffix == prefix) sCount--; // no self-concatenation
            pairCounts += pCount * sCount;
        }
    }
    return pairCounts;
}
(the proper handling of duplicates is a bit tricky)
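For example, checking it against the question's two test cases (my own calls, not part of the original answer):
console.log(combineTheGivenNumber([1, 212, 12, 12], 1212)); // 3 -- the duplicate "12" prefix/suffix is handled by the counts in the Map
console.log(combineTheGivenNumber([4, 21, 42, 1], 421));    // 2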
I like your approach of going to strings early. I can suggest a couple of simple optimizations.
You only need the numbers that are valid "first parts" and those that are valid "second parts".
You can use the JavaScript .startsWith and .endsWith methods to test for those conditions. All other strings can be thrown away.
The lengths of the strings must add up to the length of the desired answer
Suppose your target string is 8 digits long. If you have 2 valid 3-digit "first parts", then you only need to know how many valid 5-digit "second parts" you have. Suppose you have 9 of them. Those first parts can only combine with those second parts, and give you 2 * 9 = 18 valid pairs.
You don't actually need to keep the strings!
It struck me that if you know you have 2 valid 3-digit "first parts", you don't need to keep those actual strings. Knowing that they are valid 3-digit first parts is all you need to know.
So let's build an array containing:
How many valid 1-digit first parts do we have?
How many valid 2-digit first parts do we have?
How many valid 3-digit first parts do we have?
etc.
And similarly an array containing the number of valid 1-digit second parts, etc.
X first parts and Y second parts can be combined in X * Y ways
Except if the parts are the same length, in which case we are reusing the same list, and so it is just X * (Y-1).
So not only do we not need to keep the strings, but we only need to do the multiplication of the appropriate elements of the arrays.
5 1-char first parts & 7 3-char second parts = 5 * 7 = 35 pairs
6 2-char first parts & 4 2-char second parts = 6 * (4-1) = 18 pairs
etc.
So this becomes extremely easy. One pass over the strings, tallying the "first part" and "second part" matches of each length. This can be done with an if and a ++ of the relevant array element.
Then one pass over the lengths, which will be very quick as the array of lengths will be very much shorter than the array of actual strings.
function combineTheGivenNumber(numArray, num) {
    const sElements = numArray.map(e => "" + e);
    const sTarget = "" + num;
    const targetLength = sTarget.length;

    const startsByLen = (new Array(targetLength)).fill(0);
    const endsByLen = (new Array(targetLength)).fill(0);
    sElements.forEach(sElement => {
        if (sTarget.startsWith(sElement)) {
            startsByLen[sElement.length]++;
        }
        if (sTarget.endsWith(sElement)) {
            endsByLen[sElement.length]++;
        }
    });

    // We can now throw away the strings. We have two separate arrays:
    // startsByLen[1] is the count of strings (without attempting to remove duplicates) which are the first character of the required answer
    // startsByLen[2] similarly the count of strings which are the first 2 characters of the required answer
    // etc.
    // and endsByLen[1] is the count of strings which are the last character ...
    // and endsByLen[2] is the count of strings which are the last 2 characters, etc.

    let pairCounts = 0;
    for (let firstElementLength = 1; firstElementLength < targetLength; firstElementLength++) {
        const secondElementLength = targetLength - firstElementLength;
        if (firstElementLength === secondElementLength) {
            pairCounts += startsByLen[firstElementLength] * (endsByLen[secondElementLength] - 1);
        } else {
            pairCounts += startsByLen[firstElementLength] * endsByLen[secondElementLength];
        }
    }
    return pairCounts;
}

console.log('Test 1: ', combineTheGivenNumber([1, 212, 12, 12], 1212));
console.log('Test 2: ', combineTheGivenNumber([4, 21, 42, 1], 421));
Depending on the setup, the integer slicing can be marginally faster, although in the end it falls short. Also, when tested on higher N values, the previous answer blew up in JSFiddle - possibly a memory error.
As far as I have tested with both random and hand-crafted values, my solution holds. It is based on the observation that if X and Y concatenated == Z, then the following must be true:
Z - Y == X * 10^(floor(log10(Y)) + 1)
an example of this:
1212 - 12 = 1200
12 * 10^(floor(log10(12)) + 1) = 12 * 10^(1+1) = 12 * 100 = 1200
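Or, checking the same identity in code (my own illustration of the arithmetic above):
// X concatenated in front of Y equals Z exactly when Z - Y is X shifted left by the digit count of Y
const X = 12, Y = 12, Z = 1212;
console.log(Z - Y === X * Math.pow(10, Math.floor(Math.log10(Y)) + 1)); // true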
Now in theory, this should be faster than manipulating strings. And in many other languages it most likely would be. However in JavaScript, as I just learned, the situation is a bit more complicated. JavaScript does some weird things with casting that I haven't figured out yet. In short - when I tried storing the numbers (and their counts) in a map, the code got significantly slower, making any possible gains from these logarithm shenanigans evaporate. Furthermore, storing them in a custom-crafted data structure isn't guaranteed to be faster, since you have to build it etc. Also it would be quite a lot of work.
As it stands this log comparison is ~8 times faster in a case without (or with just a few) matches, since the quadratic factor is yet to kick in. As long as the possible postfix count isn't too high, it will outperform the linear solution. Unfortunately it is still quadratic in nature, with the breaking point depending on the total number of strings as well as their length.
So if you are searching for a needle in a haystack - for example, you are looking for a few pairs in a huge heap of numbers - this can help. In the other case of searching for many matches, this won't help. Similarly, if the input array were sorted, you could use binary search to push the breaking point further up.
In the end, unless you manage to figure out how to store ints in a map (or some custom implementation of it) in a way that doesn't completely kill the performance, the linear solution of the previous answer will be faster. It can still be useful even with the performance hit if your computation is going to be memory heavy, since storing numbers takes less space than storing strings.
var log10 = Math.log(10)

function log10floored(num) {
    return Math.floor(Math.log(num) / log10)
}

function combineTheGivenNumber(numArray, num) {
    let count = 0
    for (var i = 0; i != numArray.length; i++) {
        let portion = num - numArray[i]
        let removedPart = Math.pow(10, log10floored(numArray[i]))
        if (portion % (removedPart * 10) == 0) {
            for (var j = 0; j != numArray.length; j++) {
                if (j != i && portion / (removedPart * 10) == numArray[j]) {
                    count += 1
                }
            }
        }
    }
    return count
}
// The previous solution, that I used for timing, comparison and check purposes
function combineTheGivenNumber2(numArray, num) {
    const strings = new Map();
    for (const num of numArray) {
        const str = String(num);
        strings.set(str, 1 + (strings.get(str) ?? 0));
    }
    const whole = String(num);
    let pairCounts = 0;
    for (const [prefix, pCount] of strings) {
        if (!whole.startsWith(prefix))
            continue;
        const suffix = whole.slice(prefix.length);
        if (strings.has(suffix)) {
            let sCount = strings.get(suffix);
            if (suffix == prefix) sCount--; // no self-concatenation
            pairCounts += pCount * sCount;
        }
    }
    return pairCounts;
}
var myArray = []
for (let i = 0; i != 10000000; i++) {
    myArray.push(Math.floor(Math.random() * 1000000))
}

var a = new Date()
var t1 = a.getTime()
console.log('Test 1: ', combineTheGivenNumber(myArray, 15285656));
var b = new Date()
var t2 = b.getTime()
console.log('Test 2: ', combineTheGivenNumber2(myArray, 15285656));
var c = new Date()
var t3 = c.getTime()
console.log('Test1 time: ', t2 - t1)
console.log('Test2 time: ', t3 - t2)
Small update
As long as you are willing to take a performance hit during setup and settle for roughly 2x the performance, using a simple "hashing" table can help. (Hash tables are nice and tidy; this is just a simple modulo lookup table, but the principle is similar.)
Technically this isn't linear, but practically it is enough for most cases - unless you are extremely unlucky and all your numbers fall in the same bucket.
function combineTheGivenNumber(numArray, num) {
    let count = 0
    let size = 1000000
    let numTable = new Array(size)
    // store the *indices* of the numbers in the buckets so that
    // self-concatenation can be ruled out reliably below
    for (var i = 0; i != numArray.length; i++) {
        let idx = numArray[i] % size
        if (numTable[idx] == undefined) {
            numTable[idx] = [i]
        } else {
            numTable[idx].push(i)
        }
    }
    for (var i = 0; i != numArray.length; i++) {
        let portion = num - numArray[i]
        let removedPart = Math.pow(10, log10floored(numArray[i]))
        if (portion % (removedPart * 10) == 0) {
            let prefix = portion / (removedPart * 10)
            let bucket = numTable[prefix % size]
            if (bucket != undefined) {
                for (var j = 0; j != bucket.length; j++) {
                    // compare against the stored index, not the bucket position,
                    // so an element is never paired with itself
                    if (bucket[j] != i && prefix == numArray[bucket[j]]) {
                        count += 1
                    }
                }
            }
        }
    }
    return count
}
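A quick check of the bucketed version against the question's original examples (my own calls; log10floored is the helper defined earlier in this answer):
console.log(combineTheGivenNumber([1, 212, 12, 12], 1212)); // 3
console.log(combineTheGivenNumber([4, 21, 42, 1], 421));    // 2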
Here's a simplified, and partially optimised approach with 2 loops:
// let's optimise 'combineTheGivenNumber', where
// a=array of numbers AND n=number to match
const ctgn = (a, n) => {
    // convert our given number to a string using `toString` for clarity
    // this isn't entirely necessary but means we can use strict equality later
    const ns = n.toString();
    // reduce is an efficient mechanism to return a value based on an array, giving us
    // _=[accumulator], na=[array number] and i=[index]
    return a.reduce((_, na, i) => {
        // convert our 'array number' to an 'array number string' for later concatenation
        const nas = na.toString();
        // iterate back over our array of numbers ... we're using an optimised/reverse loop
        for (let ii = a.length - 1; ii >= 0; ii--) {
            // skip the current array number
            if (i === ii) continue;
            // string + number === string, which lets us strictly compare our 'number to match'
            // if there's a match we increment the accumulator
            if (a[ii] + nas === ns) ++_;
        }
        // we're done
        return _;
    }, 0);
};
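A short usage example against the question's test cases (my own calls, not part of the original answer):
console.log('Test 1: ', ctgn([1, 212, 12, 12], 1212)); // 3
console.log('Test 2: ', ctgn([4, 21, 42, 1], 421));    // 2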
I'm building an app and in one of my functions I need to generate random and unique 4-digit codes. Obviously there is a finite range from 0000 to 9999, but the entire list will be wiped each day, and each day I will not need more than the available number of codes, which means it's possible to have unique codes for each day. Realistically I will probably only need a few hundred codes a day.
The way I've coded it for now is the simple brute-force way: generate a random 4-digit number, check whether it exists in an array, and if it does, generate another number; if it doesn't, return the generated number.
Since it's 4 digits, the runtime isn't anything too crazy and I'm mostly generating a few hundred codes a day so there won't be some scenario where I've generated 9999 codes and I keep randomly generating numbers to find the last remaining one.
It would also be fine to have letters in there as well instead of just numbers if it would make the problem easier.
Other than my brute force method, what would be a more efficient way of doing this?
Thank you!
Since you have a constrained number of values that will easily fit in memory, the simplest way I know of is to create a list of the possible values and select one randomly, then remove it from the list so it can't be selected again. This will never have a collision with a previously used number:
function initValues(numValues) {
    const values = new Array(numValues);
    // fill the array with each value
    for (let i = 0; i < values.length; i++) {
        values[i] = i;
    }
    return values;
}

function getValue(array) {
    if (!array.length) {
        throw new Error("array is empty, no more random values");
    }
    const i = Math.floor(Math.random() * array.length);
    const returnVal = array[i];
    array.splice(i, 1);
    return returnVal;
}

// sample code to use it
const rands = initValues(10000);
console.log(getValue(rands));
console.log(getValue(rands));
console.log(getValue(rands));
console.log(getValue(rands));
This works by doing the following:
Generate an array of all possible values.
When you need a value, select one from the array with a random index.
After selecting the value, remove it from the array.
Return the selected value.
Items are never repeated because they are removed from the array when used.
There are no collisions with used values because you're always just selecting a random value from the remaining unused values.
This relies on the fact that an array of integers is pretty well optimized in Javascript so doing a .splice() on a 10,000 element array is still pretty fast (as it can probably just be memmove instructions).
FYI, this could be made more memory efficient by using a typed array since your numbers can be represented in 16-bit values (instead of the default 64 bits for doubles). But, you'd have to implement your own version of .splice() and keep track of the length yourself since typed arrays don't have these capabilities built in.
For even larger problems like this where memory usage becomes a problem, I've used a BitArray to keep track of previous usage of values.
Here's a class implementation of the same functionality:
class Randoms {
    constructor(numValues) {
        this.values = new Array(numValues);
        for (let i = 0; i < this.values.length; i++) {
            this.values[i] = i;
        }
    }

    getRandomValue() {
        if (!this.values.length) {
            throw new Error("no more random values");
        }
        const i = Math.floor(Math.random() * this.values.length);
        const returnVal = this.values[i];
        this.values.splice(i, 1);
        return returnVal;
    }
}

const rands = new Randoms(10000);
console.log(rands.getRandomValue());
console.log(rands.getRandomValue());
console.log(rands.getRandomValue());
console.log(rands.getRandomValue());
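As an aside, here is a rough sketch of the typed-array variant mentioned above (my own illustration, not part of the original answer). Since the draw order doesn't matter, it sidesteps re-implementing .splice() entirely by swapping the chosen value with the last live element and shrinking a manually tracked length:
class TypedRandoms {
    constructor(numValues) {
        // 16 bits per value instead of a 64-bit double per array slot
        this.values = new Uint16Array(numValues);
        for (let i = 0; i < numValues; i++) {
            this.values[i] = i;
        }
        this.length = numValues; // typed arrays can't shrink, so track the live length ourselves
    }

    getRandomValue() {
        if (!this.length) {
            throw new Error("no more random values");
        }
        const i = Math.floor(Math.random() * this.length);
        const returnVal = this.values[i];
        this.values[i] = this.values[this.length - 1]; // overwrite the chosen slot with the last live value
        this.length--;                                 // and shrink the live region by one
        return returnVal;
    }
}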
Knuth's multiplicative method looks to work pretty well: it'll map numbers 0 to 9999 to a random-looking other number 0 to 9999, with no overlap:
const hash = i => i * 2654435761 % 10000;

const s = new Set();
for (let i = 0; i < 10000; i++) {
    const n = hash(i);
    if (s.has(n)) { console.log(i, n); break; }
    s.add(n);
}
To implement it, simply keep track of an index that gets incremented each time a new one is generated:
const hash = i => i * 2654435761 % 10000;

let i = 1;
console.log(
    hash(i++),
    hash(i++),
    hash(i++),
    hash(i++),
    hash(i++),
);
These results aren't actually random, but they probably do the job well enough for most purposes.
Disclaimer:
This is copy-paste from my answer to another question here. The code was in turn ported from yet another question here.
Utilities:
function isPrime(n) {
    if (n <= 1) return false;
    if (n <= 3) return true;
    if (n % 2 == 0 || n % 3 == 0) return false;
    for (let i = 5; i * i <= n; i = i + 6) {
        if (n % i == 0 || n % (i + 2) == 0) return false;
    }
    return true;
}

function findNextPrime(n) {
    if (n <= 1) return 2;
    let prime = n;
    while (true) {
        prime++;
        if (isPrime(prime)) return prime;
    }
}

function getIndexGeneratorParams(spaceSize) {
    const N = spaceSize;
    const Q = findNextPrime(Math.floor(2 * N / (1 + Math.sqrt(5))));
    const firstIndex = Math.floor(Math.random() * spaceSize);
    return [firstIndex, N, Q];
}

function getNextIndex(prevIndex, N, Q) {
    return (prevIndex + Q) % N;
}
Usage
// Each day you bootstrap to get a tuple of these parameters and persist them throughout the day.
const [firstIndex, N, Q] = getIndexGeneratorParams(10000)

// need to keep track of the previous index generated;
// it's the seed used to generate the next one.
let prevIndex = firstIndex

// calling this function gives you the unique code
function getHashCode() {
    prevIndex = getNextIndex(prevIndex, N, Q)
    return prevIndex.toString().padStart(4, "0")
}

console.log(getHashCode());
Explanation
For simplicity, let's say you want to generate non-repeating numbers from 0 to 35 in random order. We get pseudo-randomness by polling a "full cycle iterator"†. The idea is simple:
lay the indexes 0..35 out in a circle, and denote the upper bound as N=36
decide a step size, denoted as Q (Q=23 in this case) given by this formula‡
Q = findNextPrime(Math.floor(2 * N / (1 + Math.sqrt(5))))
randomly decide a starting point, e.g. number 5
start generating seemingly random nextIndex from prevIndex, by
nextIndex = (prevIndex + Q) % N
So if we put 5 in we get (5 + 23) % 36 == 28. Put 28 in we get (28 + 23) % 36 == 15.
This process will go through every number in the circle (jumping back and forth among points on the circle), and it will pick each number only once, without repeating. When we get back to our starting point 5, we know we've reached the end.
†: I'm not sure about this term, just quoting from this answer
‡: This formula only gives a nice step size that makes things look more "random"; the only requirement for Q is that it must be coprime to N.
This problem is so small I think a simple solution is best. Build an ordered array of the 10k possible values & permute it at the start of each day. Give the k'th value to the k'th request that day.
This avoids the possible problem your solution has of repeated collisions.
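A minimal sketch of that idea (my own illustration, using a standard Fisher-Yates shuffle):
function makeDailyCodes() {
    // all 10,000 possible codes, in order
    const codes = Array.from({ length: 10000 }, (_, i) => i);
    // Fisher-Yates shuffle
    for (let i = codes.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [codes[i], codes[j]] = [codes[j], codes[i]];
    }
    return codes;
}

// at the start of each day:
const todaysCodes = makeDailyCodes();
let nextRequest = 0;

// the k'th request of the day simply gets the k'th value:
function nextCode() {
    return String(todaysCodes[nextRequest++]).padStart(4, "0");
}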
My function fails (returns undefined) when the answer is a negative odd number only.
Otherwise it works.
Can anyone see why?
Instructions:
You are given an array (which will have a length of at least 3, but could be very large) containing integers. The array is either entirely comprised of odd integers or entirely comprised of even integers except for a single integer N. Write a method that takes the array as an argument and returns this "outlier" N.
My code:
function findOutlier(integers) {
    let binary = integers.map((int, i) => int % 2);
    let count = 0;
    for (let i = 0; i < binary.length; i++) {
        if (binary[i] == 0)
            count++;
    }
    if (count > 1) {
        return integers[binary.indexOf(1)];
    } else {
        return integers[binary.indexOf(0)];
    }
}
The JavaScript % operator can return negative numbers: the result takes the sign of the left-hand operand, so a negative odd value maps to -1 instead of 1. Thus your .indexOf(1) won't find the -1 in the array.
You could fix it in the .map() callback by using (i) => i & 1 to directly check the least-significant bit.
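That is, something along these lines (a minimal sketch of the suggested fix, not code from the original answer):
// & 1 isolates the least-significant bit, which is 1 for odd values
// even when they are negative (e.g. -3 & 1 === 1)
let binary = integers.map(int => int & 1);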
If it were me, I would interpret the warning in the assignment that the array could be "very large" as a warning that iteration should be minimized. Thus I'd be tempted to approach the problem differently. Once you've seen more than one even number or more than one odd number, you can assume that the first number that doesn't fit the pattern is the outlier. (Oh, and it occurs to me that the provision that the array always has at least 3 elements is another hint at the desired solution: you only need to check the first 3 elements to determine whether the input array is almost-all-even or almost-all-odd.)
So maybe something like:
function outlier(integers) {
    function par(i) { return i & 1; }
    let parity = par(integers[0]);
    if (parity != par(integers[1])) {
        if (parity == par(integers[2]))
            // [0] and [2] are the true parity so [1] is the outlier
            return integers[1];
        // [1] and [2] are the true parity so [0] is the outlier
        return integers[0];
    }
    return integers.find((i) => par(i) != parity);
}
JavaScript's % operator will return negative numbers when you use a negative number on the left-hand side of %.
For example, -3 % 2 = -1.
To handle this case you can change your code like this:
function findOutlier(integers) {
    let binary = integers.map((int, i) => Math.abs(int) % 2);
    let count = 0;
    for (let i = 0; i < binary.length; i++) {
        if (binary[i] == 0)
            count++;
    }
    if (count > 1) {
        return integers[binary.indexOf(1)];
    } else {
        return integers[binary.indexOf(0)];
    }
}
This works, but there are ways to solve the problem with better time complexity. As mentioned in the other answers, you only need to look at the first three elements to decide whether the outlier will be even or odd.
You could use an object to store the first two indices of odd (false) and even (true) values, and exit early as soon as at least two of one type and one of the other type have been found.
const isEven = value => value % 2 === 0;

function find(array) {
    var indices = { true: [], false: [] },
        i, key;

    for (i = 0; i < array.length; i++) {
        key = isEven(array[i]);
        if (indices[key].length < 2) indices[key].push(i);
        if (indices.true.length > 1 && indices.false.length === 1) return indices.false[0];
        if (indices.false.length > 1 && indices.true.length === 1) return indices.true[0];
    }
    return indices;
}

console.log(find([1, 0, 0]));
console.log(find([0, 1, 0]));
console.log(find([0, 0, 1]));
console.log(find([0, 1, 1]));
console.log(find([1, 0, 1]));
console.log(find([1, 1, 0]));
I'm working through the freeCodeCamp certifications, and there's one problem I cannot see a solution to: the task is to calculate the Least Common Multiple (LCM) for an array of integers (which also means a RANGE of integers, between a min and a max value).
The snippet below gives the correct answer here on SO, on Codepen.io, and in my local environment - but not on freecodecamp.org.
function smallestCommons(arr) {
    // sorting and cloning the array
    const fullArr = createArray(arr.sort((a, b) => a - b));
    // calculating the theoretical limit of the size of the LCM
    const limit = fullArr.reduce((a, c) => a * c, 1);
    // setting the number to start the iteration with
    let i = fullArr[0];
    // iteration to get the LCM
    for (i; i <= limit; i++) {
        // if every number in the fullArr divides
        // the number being tested (i), then it's the LCM
        if (fullArr.every(e => !(i % e))) {
            // stop the for loop
            break;
        }
    }
    // return LCM
    return i;
}

// utility function to create the fullArr const in the
// main function
function createArray([a, b]) {
    const r = [];
    for (let i = b; i >= a; i--) {
        r.push(i);
    }
    return r;
}

// displaying the results
console.log(smallestCommons([23, 18]));
The error I see:
the code works correctly with 4 other arrays on freecodecamp.org
the code gives wrong results - or no results at all - for the array [23, 18]. If I get a result, it's not consistent (like 1,000,000 once, then 3,654,236 - I made these numbers up, but the behavior is like that). The result for the [23, 18] input should be 6,056,820 (and it is here on SO, but not on freecodecamp.org).
As this code is far from optimal I have a feeling that the code execution just runs out of resources at one point, but I get no error for that.
I read the hints on the page (yes, I tried the solutions, and they do work), but I'd like to submit my own code: I know my algorithm is (theoretically) good (although not optimal), but I'd like to make it work in practice too.
I also see that this question had caused problems to others (it's been asked on SO), but I don't feel it's a duplicate.
Does anyone have any ideas?
As it turned out, it was a resource problem - the algorithm in my question was theoretically correct but wasn't optimal or efficient.
Here's a version that is more efficient at solving this problem:
// main function
function smallestCommons(arr) {
    const fullArr = createArray(arr.sort((a, b) => a - b));
    return findLcm(fullArr, fullArr.length);
}

// creating the range of numbers based on a min and a max value
function createArray([a, b]) {
    const r = [];
    for (let i = b; i >= a; i--) {
        r.push(i);
    }
    return r;
}

// smallest common multiple of n numbers
function findLcm(arr, n) {
    let ans = arr[0];
    for (let i = 1; i < n; i++) {
        ans = (arr[i] * ans) / gcd(arr[i], ans);
    }
    return ans;
}

// greatest common divisor
function gcd(a, b) {
    if (b == 0) return a;
    return gcd(b, a % b);
}

console.log(smallestCommons([1, 5]));
console.log(smallestCommons([5, 1]));
console.log(smallestCommons([2, 10]));
console.log(smallestCommons([23, 18]));
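As a side note (my own illustration, not part of the original answer): findLcm folds the range pairwise using the identity lcm(a, b) = a * b / gcd(a, b), so the loop runs only once per element instead of testing every candidate up to the product of all the numbers, which is presumably what exhausted the resources in the original version:
// e.g. lcm(4, 6): gcd(4, 6) = 2, so 4 * 6 / 2 = 12
console.log(findLcm([4, 6], 2)); // 12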
This method was OK in the testing sandbox environment.
Given an array of numbers, for example [14,6,10], how can I find the possible combinations/pairs that add up to a given target value?
For example, if I have [14,6,10] and I'm looking for a target value of 40,
my expected output will be:
10 + 10 + 6 + 14
14 + 14 + 6 + 6
10 + 10 + 10 + 10
*Order is not important
With that being said, this is what I tried so far:
function Sum(numbers, target, partial) {
    var s, n, remaining;
    partial = partial || [];
    s = partial.reduce(function (a, b) {
        return a + b;
    }, 0);
    if (s === target) {
        console.log("%s", partial.join("+"));
    }
    for (var i = 0; i < numbers.length; i++) {
        n = numbers[i];
        remaining = numbers.slice(i + 1);
        Sum(remaining, target, partial.concat([n]));
    }
}
>>> Sum([14,6,10],40);
// returns nothing
>>> Sum([14,6,10],24);
// return 14+10
It is not much use as it stands, since it only finds combinations where each number is used at most once.
So how can I do this?
You could either add the value at the current index again, as long as the sum is smaller than the wanted sum, or proceed with the next index.
function getSum(array, sum) {
    function iter(index, temp) {
        var s = temp.reduce((a, b) => a + b, 0);
        if (s === sum) result.push(temp);
        if (s >= sum || index >= array.length) return;
        iter(index, temp.concat(array[index]));
        iter(index + 1, temp);
    }

    var result = [];
    iter(0, []);
    return result;
}

console.log(getSum([14, 6, 10], 40));
To get a limited result set, you could specify a maximum length and check it in the exit condition.
function getSum(array, sum, limit) {
    function iter(index, temp) {
        var s = temp.reduce((a, b) => a + b, 0);
        if (s === sum) result.push(temp);
        if (s >= sum || index >= array.length || temp.length >= limit) return;
        iter(index, temp.concat(array[index]));
        iter(index + 1, temp);
    }

    var result = [];
    iter(0, []);
    return result;
}

console.log(getSum([14, 6, 10], 40, 5));
TL;DR: Skip to Part II for the real thing.
Part I
Nina Scholz's answer to this fundamental problem just shows us a beautiful manifestation of an algorithm. Honestly it confused me a lot, for two reasons:
When I try [14,6,10,7,3] with a target of 500, it makes 36,783,575 calls to the iter function without blowing the call stack. Yet memory shows no significant usage at all.
My dynamic programming solution goes a little faster (or maybe not), but there is no way it can handle the above case without exhausting the 16GB of memory.
So I shelved my solution and instead started investigating her code a little further in dev tools, and discovered both its beauty and also a little bit of its shortcomings.
First, I believe this algorithmic approach, which includes a very clever use of recursion, might possibly deserve a name of its own. It's very memory efficient and only uses up memory for the constructed result set. The stack dynamically grows and shrinks continuously, getting nowhere close to its limit.
The problem is that, while being very efficient, it still makes huge amounts of redundant calls. Looking into that, with a slight modification the 36,783,575 calls to iter can be cut down to 20,254,744 - roughly 45% fewer - which yields much faster code. The catch is that the input array must be sorted ascending.
So here comes a modified version of Nina's algorithm. (Be patient... it will take around 25 seconds to finish.)
function getSum(array, sum) {
    function iter(index, temp) {
        cnt++; // counting iter calls -- remove in production code
        var s = temp.reduce((a, b) => a + b, 0);
        sum - s >= array[index] && iter(index, temp.concat(array[index]));
        sum - s >= array[index + 1] && iter(index + 1, temp);
        s === sum && result.push(temp);
        return;
    }

    var result = [];
    array.sort((x, y) => x - y); // this is a very cheap operation considering the size of the input array should be small for reasonable output
    iter(0, []);
    return result;
}

var cnt = 0,
    arr = [14, 6, 10, 7, 3],
    tgt = 500,
    res;

console.time("combos");
res = getSum(arr, tgt);
console.timeEnd("combos");
console.log(`source numbers are ${arr}
found ${res.length} unique ways to sum up to ${tgt}
iter function has been called ${cnt} times`);
Part II
Even though I was impressed with the performance, I wasn't comfortable with the above solution, for no solid reason that I can name. The way it works through side effects, and the very hard to understand double recursion, disturbed me.
So here comes my approach to this question. It is many times more efficient than the accepted solution, despite going functional in JS. We still have room to make it a little faster with ugly imperative tricks.
The difference is:
Given numbers: [14,6,10,7,3]
Target Sum: 500
Accepted Answer:
Number of possible answers: 172686
Resolves in: 26357ms
Recursive calls count: 36783575
Answer Below:
Number of possible answers: 172686
Resolves in: 1000ms
Recursive calls count: 542657
function items2T([n, ...ns], t) {
    cnt++; // remove cnt in production code
    var c = ~~(t / n);
    return ns.length ? Array(c + 1).fill()
                                   .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
                     : t % n ? []
                             : [Array(c).fill(n)];
};

var cnt = 0, result;

console.time("combos");
result = items2T([14, 6, 10, 7, 3], 500);
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);
Another important point: if the given array is sorted descending, the number of recursive iterations is reduced (sometimes greatly), allowing us to squeeze more juice out of this lemon. Compare the run above with the one below, where the input array is sorted descending.
function items2T([n, ...ns], t) {
    cnt++; // remove cnt in production code
    var c = ~~(t / n);
    return ns.length ? Array(c + 1).fill()
                                   .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
                     : t % n ? []
                             : [Array(c).fill(n)];
};

var cnt = 0, result;

console.time("combos");
result = items2T([14, 10, 7, 6, 3], 500);
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);