I'm trying to get my head around this solution in a Google interview video: https://youtu.be/XKu_SEDAykw?t=1139.
Though they say it is linear in the video, I'm not 100% certain why (or even whether) the entire solution is linear rather than quadratic.
Because the find()/includes() method is nested in the for loop, I would assume it has a run-time of O(N * N).
But find()/includes() searches an array that grows by one element per iteration, which makes me think the run-time is in fact just O(N + N)?
Here's my version of the solution in JavaScript:
const findSum = (arr, val) => {
  let searchValues = [val - arr[0]];
  for (let i = 1; i < arr.length; i++) {
    let searchVal = val - arr[i];
    if (searchValues.includes(arr[i])) {
      return true;
    } else {
      searchValues.push(searchVal);
    }
  }
  return false;
};
My workings:
When i = 1, searchValues.length = 1
When i = 2, searchValues.length = 2
When i = 3, searchValues.length = 3
Shouldn't that imply a linear run-time of O(N + (N - 1))? Or am I missing something?!
Thanks for your help!
Yes, your solution is quadratic: .includes traverses the array just like the surrounding for loop does, so in the worst case you perform 1 + 2 + ... + (N - 1) = N(N - 1)/2 comparisons, which is O(N²). In the interview, however, they talk about an unordered_set for the lookups, which implies it could be implemented as a hash set with O(1) average lookup/insertion time, making the algorithm O(n) (degrading towards O(n²) only in the worst, worst case of pathological hash collisions). The JS equivalent would be a Set:
const findSum = (arr, sum) =>
  arr.some((set => n => set.has(n) || !set.add(sum - n))(new Set()));
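If the one-liner is hard to parse: the IIFE just smuggles a Set into the some callback. Here is a more explicit sketch of the same idea (the names findSumExplicit and seen are mine, chosen for illustration):
const findSumExplicit = (arr, sum) => {
  const seen = new Set(); // complements we still hope to encounter
  for (const n of arr) {
    if (seen.has(n)) return true; // n completes some earlier element
    seen.add(sum - n);            // remember what would complete n
  }
  return false;
};
console.log(findSumExplicit([1, 2, 4, 4], 8)); // true (4 + 4)
console.log(findSumExplicit([1, 2, 4, 9], 8)); // false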
I am looking for help in understanding the time/space complexity of my solution for finding the permutations of an array. I know that because I am using an Array.forEach method my time complexity includes O(n); however, since I am also using recursion, I do not know how the time complexity changes. The recursion also seems to be O(n) to me. Does that make the overall time complexity of the algorithm O(n^2)? And is the space complexity O(n) as well, since each recursive call returns a bigger array? Thanks in advance.
function getPermutations(array) {
  // base case: [] has no permutations, [x] has exactly one
  if (array.length <= 1) {
    return array.length === 0 ? array : [array];
  }
  let lastNum = array[array.length - 1];
  let arrayWithoutLastNum = array.slice(0, array.length - 1);
  // recursively permute the first n - 1 elements
  let permutations = getPermutations(arrayWithoutLastNum);
  let memory = [];
  // insert the last element into every position of every permutation
  permutations.forEach(element => {
    for (let i = 0; i <= element.length; i++) {
      let elementCopy = element.slice(0);
      elementCopy.splice(i, 0, lastNum);
      memory.push(elementCopy);
    }
  });
  return memory;
}
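As a quick empirical sanity check of the growth (this check is my own addition, not part of the original post): an n-element input yields n! permutations of length n each, so building the output alone costs O(n * n!), which dwarfs any O(n^2) estimate.
// print n alongside the number of permutations produced: 1, 2, 6, 24, 120, ...
for (let n = 1; n <= 7; n++) {
  const input = Array.from({ length: n }, (_, i) => i + 1);
  console.log(n, getPermutations(input).length); // the count grows as n!
}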
After a long, difficult coding challenge, there was a problem that bugged me. I thought about it for an adequate amount of time but couldn't find a way to solve it. I am providing the problem and an example below.
Input
v : an array of numbers.
q : a 2-dimensional array whose nested arrays have 3 elements each.
Description
v is an array and q is a list of commands, each doing a different thing according to its nested elements.
if the first element of a nested array is 1 => the second and third elements become indices, and it returns the sum of v[second..third] (as you can see, the range is inclusive)
if the first element of a nested array is 2 => the element at the second index becomes the third, same as v[second] = third
Input example
v : [1,2,3,4,5]
q : [[1,2,4], [2,3,8], [1,2,4]]
Example
With the provided example, it goes like this:
command is [1,2,4] => the first element is 1, so it should return the sum from v[2] to v[4] (inclusive) => 12.
command is [2,3,8] => the first element is 2, so it sets v[3] to 8 (now v is [1,2,3,8,5]).
command is [1,2,4] => the first element is 1, so it should return the sum from v[2] to v[4] (inclusive) => 16, as the element at index 3 was changed by the previous command.
So the final answer is [12, 16].
Question.
The code below is how I solved it; however, it has O(n**2) complexity. I wonder how I can reduce the time complexity in this case.
I tried making a hash object, but it didn't work. I can't think of a good way to build a cache in this case.
function solution(v, q) {
  let answer = [];
  for (let i = 0; i < q.length; i++) {
    let [a, b, c] = q[i];
    if (a === 1) {
      // type-1 command: sum v[b..c] inclusive
      let sum = 0;
      for (let j = b; j <= c; j++) {
        sum += v[j];
      }
      answer.push(sum);
    } else if (a === 2) {
      // type-2 command: assignment
      v[b] = c;
    }
  }
  return answer;
}
This type of problem can typically be solved more efficiently with a Fenwick tree (also known as a binary indexed tree).
Here is an implementation:
class BinaryIndexedTree extends Array {
  constructor(length) {
    super(length + 1);
    this.fill(0);
  }
  add(i, delta) {
    i++; // make index 1-based
    while (i < this.length) {
      this[i] += delta;
      i += i & -i; // add least significant bit
    }
  }
  sumUntil(i) {
    i++; // make index 1-based
    let sum = 0;
    while (i) {
      sum += this[i];
      i -= i & -i;
    }
    return sum;
  }
}
function solution(values, queries) {
  const tree = new BinaryIndexedTree(values.length);
  values.forEach((value, i) => tree.add(i, value));
  const answer = [];
  for (const [a, b, c] of queries) {
    if (a === 1) {
      answer.push(tree.sumUntil(c) - tree.sumUntil(b - 1));
    } else {
      tree.add(b, c - values[b]); // apply the delta between old and new value
      values[b] = c; // keep values in sync so future deltas stay correct
    }
  }
  return answer;
}
let answer = solution([1,2,3,4,5], [[1,2,4], [2,3,8], [1,2,4]]);
console.log(answer); // [12, 16]
Time Complexity
The time complexity of running tree.add or tree.sumUntil once is O(log n), where n is the size of the input values (values.length). So this is also the time complexity of running one query.
The creation of the tree costs O(n), as this is the size of the tree.
The initialisation of the tree with values costs O(n log n), as each value in the input effectively acts as a query that updates a value from 0 to the actual value.
Executing the queries costs O(m log n), where m is the number of queries (queries.length).
So in total, we have a time complexity of O(n + n log n + m log n) = O((m+n) log n).
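To make the O(log n) bound concrete, here is a small trace of how sumUntil answers a prefix query by repeatedly clearing the least significant bit of its 1-based index (the sample values below are arbitrary, my own illustration):
const tree = new BinaryIndexedTree(8);
[3, 1, 4, 1, 5, 9, 2, 6].forEach((v, i) => tree.add(i, v));
// sumUntil(5) should return 3 + 1 + 4 + 1 + 5 + 9 = 23.
// Internally the 1-based index starts at 6 (binary 110):
//   sum += tree[6] (covers values[4..5]), then i -= 6 & -6 leaves 4 (binary 100),
//   sum += tree[4] (covers values[0..3]), then i -= 4 & -4 leaves 0 and we stop.
// Two array reads instead of six: that is the O(log n) at work.
console.log(tree.sumUntil(5)); // 23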
Further reading
For more information on Fenwick trees, see BIT: What is the intuition behind a binary indexed tree and how was it thought about?
I have the following 2 functions:
//destructive
const getEveryX = (arr, x, offset) => {
  const _arr = [...arr];
  let _arrArr = [];
  if (offset && offset >= arr.length) {
    _arrArr.push(_arr.splice(0, offset));
  }
  while (_arr.length > x) {
    _arrArr.push(_arr.splice(0, x));
  }
  if (_arr.length) {
    _arrArr.push(_arr);
  }
  return _arrArr;
}
and
//copying
const getEveryX2 = (arr, x, offset) => {
  let _pointer = 0;
  const _arrArr = [];
  if (offset && offset >= arr.length) {
    _arrArr.push(arr.slice(0, offset));
  }
  while (arr.length >= _pointer + x) {
    _arrArr.push(arr.slice(_pointer, _pointer + x));
    _pointer += x;
  }
  if (arr.length) {
    _arrArr.push(arr.slice(_pointer, arr.length - 1));
  }
  return _arrArr;
};
I wrote the second function because I thought it would be faster to copy the parts I need from the original array instead of copying the original and splicing out the beginning every time (both functions should do the same thing; the first uses splice, the second slice). I tested it, and this doesn't seem to be the case: they both take the same time.
My theory is that the compiler knows what I want to do in both cases and creates the same code.
I could also be completely wrong, and the second version shouldn't be faster even without optimizations.
Do you know what is going on here?
"I tested it, and this doesn't seem to be the case: they both take the same time."
No, your test case is broken. JSPerf doesn't run setup and teardown for each run of your snippets; it runs your snippets in a loop between setup and teardown. You are emptying testArr on the first run, so the rest of the iterations only measure the while (testArr.length > 1) condition evaluation (yielding false).
I've updated the benchmark, and as expected slice is now performing better.
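For reference, a minimal harness that avoids this pitfall recreates the input inside the measured region (the names and sizes below are illustrative, not from the original JSPerf page):
const makeInput = () => Array.from({ length: 10000 }, (_, i) => i);
const time = (label, fn) => {
  const t0 = performance.now();
  for (let run = 0; run < 100; run++) {
    fn(makeInput(), 3); // a fresh array every run, so splice can't starve later runs
  }
  console.log(label, (performance.now() - t0).toFixed(1), 'ms');
};
time('splice', getEveryX);
time('slice ', getEveryX2);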
I'm doing the freeCodeCamp certifications, and there's one problem I cannot find a solution to: the task is to calculate the least common multiple (LCM) for an array of integers (which also means a RANGE of integers, between a min and a max value).
The snippet below gives the correct answer here on SO, on Codepen.io, and in my local environment. But not on freecodecamp.org.
function smallestCommons(arr) {
  // sorting and cloning the array
  const fullArr = createArray(arr.sort((a, b) => a - b))
  // calculating the theoretical limit of the size of the LCM
  const limit = fullArr.reduce((a, c) => a * c, 1)
  // setting the number to start the iteration with
  let i = fullArr[0]
  // iteration to get the LCM
  for (i; i <= limit; i++) {
    // if every number in the fullArr divides
    // the number being tested (i), then it's the LCM
    if (fullArr.every(e => !(i % e))) {
      // stop the for loop
      break
    }
  }
  // return LCM
  return i;
}
// utility function to create the fullArr const in the
// main function
function createArray([a, b]) {
  const r = []
  for (let i = b; i >= a; i--) {
    r.push(i)
  }
  return r
}
// displaying the results
console.log(smallestCommons([23, 18]));
The errors I see:
the code works correctly with 4 other arrays on freecodecamp.org
the code gives false results, or no results at all, for the array [23, 18]. If I get a result, it's not consistent (like 1,000,000 once, then 3,654,236; I made these numbers up, but the behavior is like that). The result for the [23, 18] input should be 6,056,820 (and it is here on SO, but not on freecodecamp.org)
As this code is far from optimal, I have a feeling that the code execution just runs out of resources at some point, but I get no error telling me that.
I read the hints on the page (yes, I tried the solutions, and they do work), but I'd like to submit my own code: I know my algorithm is (theoretically) good (although not optimal), and I'd like to make it work in practice too.
I also see that this question has caused problems for others (it's been asked on SO before), but I don't feel it's a duplicate.
Does anyone have any ideas?
As it turned out, it was a resource problem: the algorithm in my question was theoretically correct but wasn't optimal or efficient.
Here's one that is more efficient at solving this problem:
// main function
function smallestCommons(arr) {
  const fullArr = createArray(arr.sort((a, b) => a - b))
  return findLcm(fullArr, fullArr.length);
}
// creating the range of numbers based on a min and a max value
function createArray([a, b]) {
  const r = []
  for (let i = b; i >= a; i--) {
    r.push(i)
  }
  return r
}
// smallest common multiple of n numbers,
// folding with the identity lcm(a, b) = a * b / gcd(a, b)
function findLcm(arr, n) {
  let ans = arr[0];
  for (let i = 1; i < n; i++) {
    ans = (arr[i] * ans) / gcd(arr[i], ans);
  }
  return ans;
}
// greatest common divisor (Euclidean algorithm)
function gcd(a, b) {
  if (b == 0) return a;
  return gcd(b, a % b);
}
console.log(smallestCommons([1, 5]));
console.log(smallestCommons([5, 1]));
console.log(smallestCommons([2, 10]));
console.log(smallestCommons([23, 18]));
This method was OK in the testing sandbox environment.
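As a side note, the same pairwise fold can be written compactly with reduce; this is just a stylistic variant of the answer above, not part of the original:
const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));
// lcm(x, y) = x * y / gcd(x, y), folded over the whole range
const smallestCommons = (arr) => {
  const [min, max] = [...arr].sort((a, b) => a - b);
  return Array.from({ length: max - min + 1 }, (_, i) => min + i)
    .reduce((lcm, n) => (lcm * n) / gcd(lcm, n), 1);
};
console.log(smallestCommons([23, 18])); // 6056820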
Given an array of numbers, for example [14,6,10], how can I find the possible combinations/pairs that add up to a given target value?
For example, with [14,6,10] I'm looking for a target value of 40.
My expected output would be:
10 + 10 + 6 + 14
14 + 14 + 6 + 6
10 + 10 + 10 + 10
*Order is not important
With that being said, this is what I tried so far:
function Sum(numbers, target, partial) {
  var s, n, remaining;
  partial = partial || [];
  s = partial.reduce(function (a, b) {
    return a + b;
  }, 0);
  if (s === target) {
    console.log("%s", partial.join("+"));
  }
  for (var i = 0; i < numbers.length; i++) {
    n = numbers[i];
    remaining = numbers.slice(i + 1);
    Sum(remaining, target, partial.concat([n]));
  }
}
>>> Sum([14,6,10],40);
// returns nothing
>>> Sum([14,6,10],24);
// return 14+10
It is actually useless for my purpose, since it only finds sums in which each number is used at most once.
So how can I do it?
You could either add the value at the current index again, as long as the running sum is smaller than the wanted sum, or proceed with the next index.
function getSum(array, sum) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length) return;
    iter(index, temp.concat(array[index])); // take array[index] (again)
    iter(index + 1, temp);                  // move on without it
  }
  var result = [];
  iter(0, []);
  return result;
}
console.log(getSum([14, 6, 10], 40));
For getting a limited result set, you could specify a maximum length and check it in the exit condition.
function getSum(array, sum, limit) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length || temp.length >= limit) return;
    iter(index, temp.concat(array[index]));
    iter(index + 1, temp);
  }
  var result = [];
  iter(0, []);
  return result;
}
console.log(getSum([14, 6, 10], 40, 5));
TL;DR: Skip to Part II for the real thing.
Part I
@Nina Scholz's answer to this fundamental problem just shows us a beautiful manifestation of an algorithm. Honestly, it confused me a lot, for two reasons:
When I try [14,6,10,7,3] with a target of 500, it makes 36,783,575 calls to the iter function without blowing the call stack, yet memory shows no significant usage at all.
My dynamic programming solution goes a little faster (or maybe not), but there is no way it can handle the above case without exhausting the 16GB of memory.
So I shelved my solution and instead started investigating her code a little further in the dev tools, and discovered both its beauty and a little bit of its shortcomings.
First, I believe this algorithmic approach, which includes a very clever use of recursion, might deserve a name of its own. It's very memory efficient and only uses memory for the constructed result set. The stack dynamically grows and shrinks continuously, never getting anywhere close to its limit.
The problem is that, while it is very efficient, it still makes huge numbers of redundant calls. Looking into that, with a slight modification the 36,783,575 calls to iter can be cut down to 20,254,744, roughly 45% fewer, which yields much faster code. The catch is that the input array must be sorted ascending.
So here comes a modified version of Nina's algorithm. (Be patient... it will take around 25 seconds to finish.)
function getSum(array, sum) {
  function iter(index, temp) {
    cnt++; // counting iter calls -- remove in production code
    var s = temp.reduce((a, b) => a + b, 0);
    sum - s >= array[index] && iter(index, temp.concat(array[index]));
    sum - s >= array[index + 1] && iter(index + 1, temp);
    s === sum && result.push(temp);
  }
  var result = [];
  array.sort((x, y) => x - y); // a very cheap operation, considering the input array should be small for a reasonable output size
  iter(0, []);
  return result;
}
var cnt = 0,
    arr = [14, 6, 10, 7, 3],
    tgt = 500,
    res;
console.time("combos");
res = getSum(arr, tgt);
console.timeEnd("combos");
console.log(`source numbers are ${arr}
found ${res.length} unique ways to sum up to ${tgt}
iter function has been called ${cnt} times`);
Part II
Even though I was impressed with the performance, I wasn't comfortable with the above solution, for no solid reason that I can name. The way it works through side effects and its very-hard-to-understand double recursion disturbed me.
So here comes my approach to this question. It is many times more efficient than the accepted solution, despite my going functional in JS. We still have room to make it a little faster with ugly imperative ways.
The difference is:
Given numbers: [14,6,10,7,3]
Target Sum: 500
Accepted Answer:
Number of possible answers: 172686
Resolves in: 26357ms
Recursive call count: 36783575
Answer Below:
Number of possible answers: 172686
Resolves in: 1000ms
Recursive call count: 542657
function items2T([n, ...ns], t) {
  cnt++; // remove cnt in production code
  var c = ~~(t / n); // how many whole copies of n fit into t
  return ns.length
    ? Array(c + 1).fill()
        // use i copies of n (0..c), then recurse on the remaining numbers for the rest
        .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
    : t % n
      ? []                  // t is not a multiple of the only remaining number
      : [Array(c).fill(n)]; // t divides evenly: exactly one way, c copies of n
}
var cnt = 0, result;
console.time("combos");
result = items2T([14, 6, 10, 7, 3], 500);
console.timeEnd("combos");
console.log(`${result.length} unique ways to sum up to 500
and ${cnt} recursive calls were performed`);
Another important point: if the given array is sorted descending, the number of recursive calls is reduced (sometimes greatly), allowing us to squeeze more juice out of this lemon. Compare the above with the run below, where the input array is sorted descending.
function items2T([n, ...ns], t) {
  cnt++; // remove cnt in production code
  var c = ~~(t / n);
  return ns.length
    ? Array(c + 1).fill()
        .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
    : t % n
      ? []
      : [Array(c).fill(n)];
}
var cnt = 0, result;
console.time("combos");
result = items2T([14, 10, 7, 6, 3], 500);
console.timeEnd("combos");
console.log(`${result.length} unique ways to sum up to 500
and ${cnt} recursive calls were performed`);