Recursion time complexity - javascript

I am looking for help in understanding the time/space complexity of my solution for finding the permutations of an array. I know that because I am using an Array.forEach method, my time complexity includes O(n), but since I am also using recursion, I do not know how the time complexity changes. The recursion seems to me to be O(n) as well. Does that make the overall time complexity of the algorithm O(n^2)? And is the space complexity O(n) as well, since each recursive call returns a bigger array? Thanks in advance.
function getPermutations(array) {
  if (array.length <= 1) {
    return array.length === 0 ? array : [array];
  }
  let lastNum = array[array.length - 1];
  let arrayWithoutLastNum = array.slice(0, array.length - 1);
  let permutations = getPermutations(arrayWithoutLastNum);
  let memory = [];
  permutations.forEach(element => {
    // insert lastNum at every possible position of each shorter permutation
    for (let i = 0; i <= element.length; i++) {
      let elementCopy = element.slice(0);
      elementCopy.splice(i, 0, lastNum);
      memory.push(elementCopy);
    }
  });
  return memory;
}
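For what it's worth, neither complexity can be as low as O(n^2) here: the function returns all n! permutations, each of length n, so both time and space are at least on the order of n * n!. A quick illustrative check of the output growth (my own snippet, not from the original post):

// The result alone holds n! arrays of length n, so no algorithm
// producing it can run faster than O(n * n!).
for (const n of [3, 4, 5, 6]) {
  const input = Array.from({ length: n }, (_, i) => i + 1);
  console.log(n, '->', getPermutations(input).length); // 6, 24, 120, 720
}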


Getting the time complexity

I wrote this algorithm. Can you help me calculate its time complexity?
I don't have nested loops, but I do call .includes inside a map callback.
function isPrime(num) {
  for (var i = 2; i < num; i++) if (num % i === 0) return false;
  return num > 1;
}
// `function` is a reserved word and cannot be used as an identifier,
// so the main function is named filterData here.
const filterData = (dataA, dataB) => {
  let temp = {};
  let tempArray = [];
  // count the frequency of each value in dataB
  dataB.forEach(function (x) {
    temp[x] = (temp[x] || 0) + 1;
  });
  // map is used only for iteration here; forEach would be more idiomatic
  dataA.map(item => {
    if (dataB.includes(item) && !isPrime(temp[item])) {
      tempArray.push(item);
    } else if (!dataB.includes(item)) {
      tempArray.push(item);
    }
  });
  return tempArray;
};
// sample inputs for illustration; the original post did not define A and B
const A = [1, 2, 3, 4, 5];
const B = [2, 2, 3, 4];
console.log('Input A:', A);
console.log('Input B:', B);
console.log('Output:', filterData(A, B));
Some observations:
B.includes has a time complexity of O(B.length).
isPrime has a time complexity of O(num). Since the argument is the frequency of a value in B, it is bounded by B's size, so it has a worst case of O(B.length). As isPrime is only called after a B.includes call that already costs O(B.length), it does not affect the overall complexity.
As B.includes is called once for every value in A, the overall time complexity is O(A.length * B.length).
The complexity can be reduced by replacing B.includes(item) with a lookup in the frequency map (temp[item]), at which point the isPrime calls dominate. If isPrime is memoized, the total cost of all isPrime calls together is O(A.length + B.length), so that is then also the overall time complexity (see the sketch below).
This cannot be reduced further: even without any call to isPrime, iterating over both input arrays is necessary and already costs that much.
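A minimal sketch of that reduced version, reusing isPrime from the question; filterDataFast, freq, and isPrimeMemo are illustrative names, not from the thread:

// O(A.length + B.length) sketch: the frequency map replaces
// B.includes, and primality results are cached per distinct frequency.
const filterDataFast = (dataA, dataB) => {
  const freq = {};
  dataB.forEach(x => { freq[x] = (freq[x] || 0) + 1; });
  const memo = {};
  const isPrimeMemo = n => {
    if (!(n in memo)) memo[n] = isPrime(n);
    return memo[n];
  };
  // keep an item unless it occurs in dataB a prime number of times
  return dataA.filter(item =>
    freq[item] === undefined || !isPrimeMemo(freq[item])
  );
};

The memoization bound holds because the distinct frequency values sum to at most B.length, so all cache misses together cost O(B.length).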
Your isPrime function has a worst-case time complexity of O(n).
The forEach over dataB is a separate O(n) pass.
The map over dataA does O(n) work per item in each of the includes and isPrime calls.
So, the total time complexity is O(n^2).

Why does my sieve not perform well for finding primes?

I wrote two prime finder functions and the sieve only performs about 10% better. I'm using two optimizations for the simple version.
Don't check even numbers.
Only check up to the square root, i.e. j * j <= i (equivalent).
and one optimization for the sieve version:
Only check up to the square root, i.e. i * i <= n (equivalent).
What optimizations can I add to the sieve?
My sieve is pretty slow. I don't want to do a bitwise implementation yet; I want to understand whether this implementation offers any benefits, or if I missed an implementation point.
The inner for loop in the pseudocode here looks interesting / odd
https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
I don't know how to interpret it. (Update: the OP indicated in the comments that the oddness was an artifact of incorrect formatting after copy-pasting the pseudocode from Wikipedia; with the corrected formatting it is clear now.)
Here it is:
algorithm Sieve of Eratosthenes is
    input: an integer n > 1.
    output: all prime numbers from 2 through n.

    let A be an array of Boolean values, indexed by integers 2 to n,
    initially all set to true.

    for i = 2, 3, 4, ..., not exceeding √n do
        if A[i] is true
            for j = i², i²+i, i²+2i, i²+3i, ..., not exceeding n do
                A[j] := false

    return all i such that A[i] is true.
// prime-2
// 2 optimizations - odds and square root
function prime2(n) {
  const primes = [2];
  not_prime: for (let i = 3; i < n; i += 2) {
    for (let j = 2; j * j <= i; j++) {
      if (i % j === 0) {
        continue not_prime;
      }
    }
    primes.push(i);
  }
  return primes;
}
// prime-3
// sieve implementation
function prime3(n) {
  const primes = [];
  const sieve = (new Array(n)).fill(true);
  for (let i = 2; i * i <= n; i += 1) {
    if (sieve[i]) {
      for (let j = i + i; j < n; j += i) {
        sieve[j] = false;
      }
    }
  }
  makePrimes(sieve, primes, n);
  return primes;
}

function makePrimes(sieve, primes, n) {
  for (let i = 2; i < n; i++) {
    if (sieve[i]) {
      primes.push(i);
    }
  }
}
What you see is an expression of the difference in theoretical run-time complexities, i.e. the true algorithmic difference between the two algorithms.
The optimal trial division sieve's complexity is O(n^1.5/(log n)^2) (*), whereas the sieve of Eratosthenes' complexity is O(n log log n).
According to the empirical run time figures posted by Scott Sauyet in the comments,

    n        prime2    prime3
    1e6       279ms      36ms
    1e7      6946ms     291ms
    ----------------------------
    n^         1.40      0.91

the empirical orders of growth are roughly ~n^1.4 and ~n in the measured range, which is a good fit.
So your genuine sieve does perform well, and the trial division sieve performs as expected: the algorithmic nature of a code will always beat the presence or absence of any secondary optimizations, if we increase the problem size enough.
And comparing performances by measuring them at just one problem size is never enough. Even if you see just a 10% difference over the "simpler" one now, test at bigger sizes and the difference will grow. (A sketch for measuring the empirical order of growth yourself follows.)
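To reproduce such measurements, a rough sketch, assuming prime2/prime3 as defined above and performance.now being available (as it is in browsers and modern Node):

// Estimate the empirical order of growth from two problem sizes:
// if t ~ C * n^k, then k = log(t2/t1) / log(n2/n1).
function orderOfGrowth(fn, n1, n2) {
  const time = n => {
    const t0 = performance.now();
    fn(n);
    return performance.now() - t0;
  };
  return Math.log(time(n2) / time(n1)) / Math.log(n2 / n1);
}
console.log('prime2: ~n^' + orderOfGrowth(prime2, 1e6, 1e7).toFixed(2));
console.log('prime3: ~n^' + orderOfGrowth(prime3, 1e6, 1e7).toFixed(2));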
If you want some pointers about what can be further improved in your code, note for starters that you start the inner loop from i+i instead of from i*i.
Another common optimization is to special-case 2, start from 3, increment the candidates by 2, and use an inner-loop increment of 2*i instead of just i, for an instant 2x speedup. This is the simplest form of wheel factorization, which can be applied further, though with diminishing returns for each additional prime; using a 2-3-5-7 wheel is common and should give about another 2x speedup, if memory serves.
Last but not least, make it segmented. A sketch combining the first two fixes follows after the footnote.
(*) that's π(n) * π(√n) coming from the primes, and no more than that from the composites.
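Here is a minimal odds-only sieve combining the first two suggestions (inner loop starting at i*i and stepping by 2*i); prime4 and its index mapping are illustrative choices, not from the original answer:

// prime-4 (sketch)
// odds-only sieve: index k represents the odd number 2k+1, so the
// array is half the size, and only odd multiples are crossed off.
function prime4(n) {
  if (n <= 2) return [];
  const half = Math.floor(n / 2);           // number of odd candidates below n
  const sieve = new Array(half).fill(true);
  for (let i = 3; i * i < n; i += 2) {
    if (sieve[(i - 1) / 2]) {
      for (let j = i * i; j < n; j += 2 * i) {
        sieve[(j - 1) / 2] = false;
      }
    }
  }
  const primes = [2];
  for (let k = 1; k < half; k++) {
    if (sieve[k]) primes.push(2 * k + 1);
  }
  return primes;
}

Indexing odd m at (m - 1) / 2 halves the memory; together with the i*i start this matches the Wikipedia pseudocode above while skipping even numbers entirely.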

browser optimization of splice to slice

I have the following 2 functions:
// destructive
const getEveryX = (arr, x, offset) => {
  const _arr = [...arr];
  let _arrArr = [];
  if (offset && offset >= arr.length) {
    _arrArr.push(_arr.splice(0, offset));
  }
  while (_arr.length > x) {
    _arrArr.push(_arr.splice(0, x));
  }
  if (_arr.length) {
    _arrArr.push(_arr);
  }
  return _arrArr;
};
and
// copying
const getEveryX2 = (arr, x, offset) => {
  let _pointer = 0;
  const _arrArr = [];
  if (offset && offset >= arr.length) {
    _arrArr.push(arr.slice(0, offset));
  }
  while (arr.length >= _pointer + x) {
    _arrArr.push(arr.slice(_pointer, _pointer + x));
    _pointer += x;
  }
  if (arr.length) {
    _arrArr.push(arr.slice(_pointer, arr.length - 1));
  }
  return _arrArr;
};
I wrote the second function because I thought it would be faster to copy the parts I need from the original array instead of copying the original and splicing out the beginning every time (both functions should do the same; the first uses splice, the second slice). I tested it, and that doesn't seem to be the case: they both take the same time.
My theory is that the compiler knows what I want to do in both cases and creates the same code.
I could also be completely wrong, and the second version shouldn't be expected to be faster without optimizations.
Do you know what is going on here?
I tested it, and that doesn't seem to be the case: they both take the same time.
No, your test case is broken. JSPerf doesn't run setup and teardown around each run of your snippets; it runs your snippets in a loop between setup and teardown. You are emptying testArr on the first run, so the remaining iterations only measure the while (testArr.length > 1) condition evaluation (yielding false).
I've updated the benchmark, and as expected, slice now performs better.
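For reference, a minimal sketch of a harness that avoids that pitfall by rebuilding the input inside the timed loop (the sizes, chunk width, and bench name here are illustrative):

// Rebuild the input on every iteration so the destructive splice
// version never runs against an already-emptied array.
function bench(label, fn, runs = 1000) {
  const start = performance.now();
  for (let i = 0; i < runs; i++) {
    const input = Array.from({ length: 1000 }, (_, k) => k); // fresh array per run
    fn(input, 7);
  }
  console.log(label + ':', (performance.now() - start).toFixed(1) + 'ms');
}
bench('splice (destructive)', getEveryX);
bench('slice (copying)', getEveryX2);

The construction cost is paid identically in both runs, so it cancels out when comparing the two.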

Does this JavaScript function have a linear or quadratic time complexity?

I'm trying to get my head around this solution in a Google interview video: https://youtu.be/XKu_SEDAykw?t=1139.
Though they say it is linear in the video, I'm not 100% certain whether (and why) the entire solution is linear rather than quadratic.
Because the find()/includes() method is nested in the for loop, I would assume a run-time of O(N * N).
But find()/includes() searches an array that grows by one element per iteration, which makes me think the run-time is in fact just O(N + N)?
Here's my version of the solution in JavaScript:
const findSum = (arr, val) => {
  let searchValues = [val - arr[0]];
  for (let i = 1; i < arr.length; i++) {
    let searchVal = val - arr[i];
    if (searchValues.includes(arr[i])) {
      return true;
    } else {
      searchValues.push(searchVal);
    }
  }
  return false;
};
My workings:
When i = 1, searchValues.length = 1
When i = 2, searchValues.length = 2
When i = 3, searchValues.length = 3
Shouldn't that imply a linear run-time of O(N + (N - 1))? Or am I missing something?!
Thanks for your help!
Yes, your solution is quadratic: .includes traverses the array, just as the surrounding for loop does. In the interview, however, they talk about an unordered_set for the lookups, which implies a hash set with O(1) average lookup/insertion time, making the algorithm O(n) (with a degenerate worst case of O(n²)). The JS equivalent is a Set:
// Set#has and Set#add are O(1) on average; add returns the set itself
// (truthy), so !set.add(...) evaluates to false and the scan continues.
const findSum = (arr, sum) =>
  arr.some((set => n => set.has(n) || !set.add(sum - n))(new Set()));
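For example, with illustrative inputs (not from the video):

console.log(findSum([1, 2, 4, 4], 8)); // true: 4 + 4
console.log(findSum([1, 2, 3, 9], 8)); // false: no pair sums to 8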

Find possible numbers in array that can sum to a target value

Given an array of numbers, for example [14, 6, 10], how can I find the possible combinations/pairs that add up to a given target value?
For example, with [14, 6, 10] I am looking for a target value of 40.
My expected output will be:
10 + 10 + 6 + 14
14 + 14 + 6 + 6
10 + 10 + 10 + 10
*Order is not important
With that being said, this is what I tried so far:
function Sum(numbers, target, partial) {
  var s, n, remaining;
  partial = partial || [];
  s = partial.reduce(function (a, b) {
    return a + b;
  }, 0);
  if (s === target) {
    console.log("%s", partial.join("+"));
  }
  for (var i = 0; i < numbers.length; i++) {
    n = numbers[i];
    remaining = numbers.slice(i + 1);
    Sum(remaining, target, partial.concat([n]));
  }
}
>>> Sum([14,6,10], 40);
// returns nothing
>>> Sum([14,6,10], 24);
// returns 14+10
It is actually useless for my purpose, since it only finds combinations in which each number is used at most once.
So how do I do it?
You could either take the value at the current index again, as long as the running sum is smaller than the wanted sum, or proceed with the next index.
function getSum(array, sum) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length) return;
    iter(index, temp.concat(array[index])); // take the current value again
    iter(index + 1, temp);                  // proceed with the next value
  }
  var result = [];
  iter(0, []);
  return result;
}

console.log(getSum([14, 6, 10], 40));
For getting a limited result set, you could specify a length limit and check it in the exit condition.
function getSum(array, sum, limit) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length || temp.length >= limit) return;
    iter(index, temp.concat(array[index]));
    iter(index + 1, temp);
  }
  var result = [];
  iter(0, []);
  return result;
}

console.log(getSum([14, 6, 10], 40, 5));
TL;DR: skip to Part II for the real thing.
Part I
Nina Scholz's answer to this fundamental problem shows us a beautiful manifestation of an algorithm. Honestly, it confused me a lot, for two reasons:
When I try [14,6,10,7,3] with a target of 500, it makes 36,783,575 calls to the iter function without blowing the call stack, yet memory shows no significant usage at all.
My dynamic programming solution goes a little faster (or maybe not), but there is no way it can handle the above case without exhausting the 16GB of memory.
So I shelved my solution, started investigating her code a little further in the dev tools, and discovered both its beauty and a little bit of its shortcomings.
First, I believe this algorithmic approach, which includes a very clever use of recursion, might deserve a name of its own. It is very memory efficient, using memory only for the constructed result set; the stack grows and shrinks dynamically, never getting anywhere close to its limit.
The problem is that, while very efficient, it still makes huge numbers of redundant calls. Looking into that, a slight modification cuts the 36,783,575 calls to iter down to 20,254,744, roughly 45% fewer, which yields much faster code. The catch is that the input array must be sorted ascending.
So here comes a modified version of Nina's algorithm. (Be patient, it will take around 25 seconds to finish.)
function getSum(array, sum) {
  function iter(index, temp) {
    cnt++; // counting iter calls -- remove in production code
    var s = temp.reduce((a, b) => a + b, 0);
    sum - s >= array[index] && iter(index, temp.concat(array[index]));
    sum - s >= array[index + 1] && iter(index + 1, temp);
    s === sum && result.push(temp);
    return;
  }
  var result = [];
  array.sort((x, y) => x - y); // a very cheap operation, considering the input array should be small for reasonable output
  iter(0, []);
  return result;
}

var cnt = 0,
    arr = [14, 6, 10, 7, 3],
    tgt = 500,
    res;

console.time("combos");
res = getSum(arr, tgt);
console.timeEnd("combos");
console.log(`source numbers are ${arr}
found ${res.length} unique ways to sum up to ${tgt}
iter function has been called ${cnt} times`);
Part II
Even though I was impressed by the performance, I wasn't comfortable with the above solution, for no solid reason that I can name. The way it relies on side effects, and the very hard to understand double recursion, disturbed me.
So here comes my approach to this question. It is many times more efficient than the accepted solution, despite my going functional in JS. There is still room to make it a little faster in ugly imperative ways.
The difference:
Given numbers: [14, 6, 10, 7, 3]
Target sum: 500
Accepted answer:
Number of possible answers: 172,686
Resolves in: 26,357ms
Recursive call count: 36,783,575
Answer below:
Number of possible answers: 172,686
Resolves in: 1,000ms
Recursive call count: 542,657
function items2T([n, ...ns], t) {
  cnt++; // counting recursive calls -- remove in production code
  var c = ~~(t / n); // the max count of n that fits into t
  return ns.length ? Array(c + 1).fill()
                     // for every feasible count i of n, solve the rest for t - n*i
                     .reduce((r, _, i) => r.concat(items2T(ns, t - n * i)
                                                   .map(s => Array(i).fill(n).concat(s))), [])
                   : t % n ? []                  // remainder left over: dead end
                           : [Array(c).fill(n)]; // exact fit: c copies of n
}

var cnt = 0, result;

console.time("combos");
result = items2T([14, 6, 10, 7, 3], 500);
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);
Another important point: if the given array is sorted descending, the number of recursive calls is reduced (sometimes greatly), allowing us to squeeze more juice out of this lemon. Compare the run above with the one below, where the input array is sorted descending.
function items2T([n, ...ns], t) {
  cnt++; // counting recursive calls -- remove in production code
  var c = ~~(t / n);
  return ns.length ? Array(c + 1).fill()
                     .reduce((r, _, i) => r.concat(items2T(ns, t - n * i)
                                                   .map(s => Array(i).fill(n).concat(s))), [])
                   : t % n ? []
                           : [Array(c).fill(n)];
}

var cnt = 0, result;

console.time("combos");
result = items2T([14, 10, 7, 6, 3], 500);
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);
