I've been doing a lot of the practice tests on Codility, and although all my solutions work, they always have hidden stress tests that include massive arrays. The following code, for example, needs to take an array A and find the lowest positive number missing from the sequence.
e.g. given A = [1, 3, 6, 4, 1, 2], the function should return 5. 5 is the value of i, which increments in the loop; the function returns as soon as index i is not found in the array.
It works correctly, but Codility tested it with an array of up to 100,000 elements and it timed out at the 6-second limit. Most of my solutions on the practice tests fail for the same reason. Any help greatly appreciated.
console.log(solution([1, 3, 6, 4, 1, 2]));
function solution(A) {
let i = 0
for (i = 1; i <= A.length ; i++){
if(A.includes(i) == false){
return i
}
}
return i
}
There is no way to look through N items in less than O(N), which is what you're doing. The issue is that you also look through the array once for each of its N items, so you do roughly N*N work, and that can be improved.
The most "typical" way to improve this approach is to use a Set or a similar data structure with amortised (usually constant-time) access to elements. In your case this would look like:
console.log(solution([1, 3, 6, 4, 1, 2]))
function solution(A) {
// build existing elements, in O(N)
const values = new Set(A)
let i = 0
for (i = 1; i <= A.length; i++){
if(!values.has(i)){
return i
}
}
return i
}
This runs in O(N) to create the set plus O(N) to iterate the array, doing constant-time work on each step.
Your code loops through every item in the array for every item in the array. This gives a worst-case runtime of O(n^2). You can get a much better result if you sort the array, and then iterate through it looking for any missing numbers.
function compareNumeric(a,b) {
return a - b;
}
function solution(A) {
A.sort(compareNumeric);
let expect = 0;
for( let i=0; i<A.length; i++) {
if( A[i] > expect+1) return expect+1;
if( A[i] === expect+1) expect++;
}
return expect+1;
}
console.time('simple');
console.log(solution([1, 3, 6, 4, 1, 2]));
console.timeEnd('simple');
// worst-case: all numbers from 1-1M exist but are randomly shuffled in the array
const bigArray = Array.from({length:1000000},(_,i)=>i+1);
function shuffle(a) {
// credit: https://stackoverflow.com/questions/6274339/how-can-i-shuffle-an-array
for (let i = a.length - 1; i > 0; i--) {
const j = Math.floor(Math.random() * (i + 1));
[a[i], a[j]] = [a[j], a[i]];
}
return a;
}
shuffle(bigArray);
console.time('worst-case');
console.log(solution(bigArray));
console.timeEnd('worst-case');
This will give you a runtime of O(n log n) and should be the fastest possible solution as far as I can tell.
Please check the two solutions below; they have time complexity O(n log n) and O(n) respectively, so they will be much faster than the original.
const arrayGenerator = (len) => {
    const array = [];
    for (let i = 0; i < len; i++) {
        array.push(Math.floor(Math.random() * len) + 1);
    }
    return array;
}
function sort_array_solution(A) {
    const sorted = A.map(a => a).sort((a, b) => a - b); // sort a copy numerically
    let ans = 1;
    for (let i = 0; i < sorted.length; i++) {
        if (sorted[i] < ans) continue;
        if (sorted[i] > ans) return ans;
        ans++;
    }
    return ans;
}
function object_key_solution(A) {
    const obj = {};
    for (let i = 0; i < A.length; i++) {
        obj[A[i]] = 1;
    }
    for (let ans = 1; ; ans++) {
        if (!obj.hasOwnProperty(ans)) return ans;
    }
}
console.log(sort_array_solution([1,2]))
console.log(object_key_solution([1,2]))
const arr = arrayGenerator(10);
console.log(sort_array_solution(arr))
console.log(object_key_solution(arr))
const arr1 = arrayGenerator(100000);
console.log(sort_array_solution(arr1))
console.log(object_key_solution(arr1))
I have a problem where, given an array of integers, I need to find sets of three numbers that add up to equal zero. The below solution works but isn't as optimal as I'd like and I am looking for ways to optimize it to avoid unnecessary processing.
What I am doing below is iterating through all combinations of numbers (while making sure the nested loops never revisit the same indices) and checking whether the three numbers in the innermost loop add up to zero. If yes, I convert the triple to a string, and if that string isn't already in the results array I add it. Right before returning, I convert the strings back into arrays.
I appreciate any suggestions on how to further optimize this or if I missed out on some opportunity to implement better. I am not looking for a total refactor, just some adjustments that will improve performance.
var threeSum = function(nums) {
const sorted = nums.sort();
if(sorted.length && (sorted[0] > 0 || sorted[sorted.length-1] < 0)) {
return [];
}
let result = [];
for(let i=0; i < sorted.length; i++) {
for(let z=i+1; z < sorted.length; z++) {
for(let q=z+1; q < sorted.length; q++) {
if(sorted[i]+sorted[z]+sorted[q] === 0) {
const temp = [sorted[i], sorted[z], sorted[q]].join(',');
if(!result.includes(temp)) {
result.push(temp);
}
}
}
}
}
return result.map(str => str.split(','));
};
Sample Input: [-1,0,1,2,-1,-4]
Expected Output: [[-1,-1,2],[-1,0,1]]
One obvious optimisation is to precalculate the sum of the first two numbers just before the third nested loop, then check inside the third loop whether the third iterated number equals the negative of that sum.
A second optimisation is to take advantage of the fact that your items are sorted and use a binary search over the rest of the array for the negative of the sum of the first two terms, instead of the third loop. This second optimisation brings the complexity from O(N³) down to O(N² log N).
Which leads to the third optimisation: store in a map each pair sum as the key and, as the value, an array of the different pairs producing that sum. Each time you are about to run the binary search again, first check whether the sum already exists in that map; if it does, you can simply output each pair stored under that sum together with the negated sum.
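To make the second optimisation concrete, here is a minimal sketch; the function name threeSumBinary and the Map-based de-duplication are illustrative, not taken from the post above. It sorts numerically, precalculates each pair sum, then binary-searches the remainder of the array for its negative.
function threeSumBinary(nums) {
    const sorted = [...nums].sort((a, b) => a - b);
    const result = new Map(); // joined triple -> triple, used to drop duplicates
    for (let i = 0; i < sorted.length; i++) {
        for (let z = i + 1; z < sorted.length; z++) {
            const target = -(sorted[i] + sorted[z]); // precalculated pair sum, negated
            // binary search for target in sorted[z + 1 ..]
            let lo = z + 1, hi = sorted.length - 1;
            while (lo <= hi) {
                const mid = (lo + hi) >> 1;
                if (sorted[mid] === target) {
                    const key = [sorted[i], sorted[z], target].join(',');
                    if (!result.has(key)) result.set(key, [sorted[i], sorted[z], target]);
                    break;
                } else if (sorted[mid] < target) {
                    lo = mid + 1;
                } else {
                    hi = mid - 1;
                }
            }
        }
    }
    return Array.from(result.values());
}
console.log(threeSumBinary([-1, 0, 1, 2, -1, -4])); // [ [ -1, -1, 2 ], [ -1, 0, 1 ] ]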
The OP's solution runs in O(N³) time with no additional storage.
The classic "use a hash table" solution to find the missing element can bring that down to O(N²) time with O(N) storage.
The solution involves building a number map using an object. (You could use a Map object as well, but then you can't be as expressive with ++ and -- operators). Then just an ordinary loop and inner loop to evaluate all the pairs. For each pair, find if the negative sum of those pairs is in the map.
function threeSum(nums) {
    var nummap = {}; // map a value to the number of occurrences in nums
    var solutions = new Set(); // set of solutions as strings
    // map each value in nums into the number map
    nums.forEach((val) => {
        var k = nummap[val] ? nummap[val] : 0; // k is the current count of val in nummap
        nummap[val] = k + 1; // increment by 1 and update
    });
    // for each pair of numbers, see if we can find a solution in the number map
    for (let i = 0; i < nums.length; i++) {
        var ival = nums[i];
        nummap[ival]--; // temporarily remove this instance so it can't be reused as the third element
        for (let j = i + 1; j < nums.length; j++) {
            var jval = nums[j];
            nummap[jval]--;
            var target = -(ival + jval); // this could compute "-0", but it works itself out since 0 == -0 and toString strips the sign
            // if target is in the number map, we have a solution
            if (nummap[target]) {
                // sort this three-sum solution and insert it into the set of solutions;
                // we do this to filter out duplicate solutions
                var tmp = [ival, jval, target];
                tmp.sort((a, b) => a - b); // sort numerically for a canonical key
                solutions.add(tmp.toString());
            }
            nummap[jval]++; // restore original instance count in nummap
        }
        nummap[ival]++; // restore original instance count in nummap
    }
    for (const s of solutions) {
        console.log(s);
    }
}
threeSum([9,8,7,-15, -9,0]);
var threeSum = function(unsortedNums) {
const nums = unsortedNums.sort();
if(nums.length && (nums[0] > 0 || nums[nums.length-1] < 0)) {
return [];
}
const result = new Map();
for(let i=0; i < nums.length; i++) {
for(let z=i+1; z < nums.length; z++) {
for(let q=z+1; q < nums.length; q++) {
if(nums[i]+nums[z]+nums[q] === 0) {
const toAdd = [nums[i], nums[z], nums[q]];
const toAddStr = toAdd.join(',');
if(!result.has(toAddStr)) {
result.set(toAddStr, toAdd);
}
}
}
}
}
return Array.from(result.values());
};
Recently I had an interview question as follows:
Let us consider we have two sorted arrays of different lengths. We need to find the common elements in the two arrays.
var a=[1,2,3,4,5,6,7,8,9,10];
var b = [2,4,5,7,11,15];
for(var i=0;i<a.length;i++){
for(var j=0;j<b.length;j++){
if(a[i]==b[j]){
console.log(a[i],b[j])
}
}
}
I wrote it like above. The interviewer then said: now assume a has 2000 elements and b has 3000 elements, how would you write it more efficiently?
Please explain your answers with sample code so I can understand more clearly.
The easiest way!!
var a = [1,2,3,4,5,6,7,8,9,10];
var b = [2,4,5,7,11,15];
for(let i of a){
if(b.includes(i)){
console.log(i)
}
}
--------- OR --------------
var c = a.filter(value => b.includes(value))
console.log(c)
Since the arrays are sorted, binary search is the key.
Basically, you're searching an item in an array.
You compare the item against the middle index of the array (length / 2)
If both are equal, you found it.
If the item is less than the element at the middle index, compare it next against the element at index length / 4, i.e. ((0 + length / 2) / 2); if it is greater, compare it against the element at index ((length / 2) + length) / 2 (the middle of the upper half), and so on.
That way, if for example you have to search for an item in a 40,000-element array, at worst you find out that the item isn't in the array after about 16 comparisons:
I'm searching for "something" in an array with 40 000 indexes, minimum index where I can find it is 0, the maximum is 39999.
"something" > arr[20000]. Let's assume that. I know that now the minimum index to search is 20001 and the maximum is 39999. I'm now searching for the middle one, (20000 + 39999) / 2.
Now, "something" < arr[30000], it limits the search from indexes 20001 to 29999. (20000 + 30000) / 2 = 25000.
"something" > arr[25000], I have to search from 25001 to 29999. (25000 + 30000) / 2 = 27500
"something" < arr[27500], I have to search from 25001 to 27499. (25000 + 27500) / 2 = 26250
"something" > arr[26250], I have to search from 26251 to 27499. (26250 + 27500) / 2 = 26875
"something" < arr[26875], I have to search from 26251 to 26874. (26250 + 26875) / 2 = 26563
And so on... Of course, you have to round and stuff to avoid floating indexes
var iteration = 1;
function bSearch(item, arr)
{
    var minimumIndex = 0;
    var maximumIndex = arr.length - 1;
    var index = Math.round((minimumIndex + maximumIndex) / 2);
    while (true)
    {
        ++iteration;
        if (item == arr[index])
        {
            arr.splice(0, minimumIndex); // everything below minimumIndex is smaller than item, and the outer loop searches in ascending order, so it can be dropped
            return (true);
        }
        if (minimumIndex >= maximumIndex) // ">=" so the search also stops when the bounds cross
        {
            arr.splice(0, minimumIndex);
            return (false);
        }
        if (item < arr[index])
        {
            maximumIndex = index - 1;
            index = Math.ceil((minimumIndex + maximumIndex) / 2);
        }
        else
        {
            minimumIndex = index + 1;
            index = Math.floor((minimumIndex + maximumIndex) / 2);
        }
    }
}
var arrA = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; // sample data from the question
var arrB = [2, 4, 5, 7, 11, 15];
for (var i = 0; i < arrA.length; ++i)
{
    if (bSearch(arrA[i], arrB))
        console.log(arrA[i]);
}
console.log("number of iterations : " + iteration);
You could use a nested approach, keeping an index into each array and finding the values by incrementing the indices: advance the index that points at the smaller value, and if equal values are found, increment both indices.
Time complexity: max. O(n+m), where n is the length of array a and m is the length of array b.
var a = [1, 2, 3, 4, 5, 6, 8, 10, 11, 15], // left side
b = [3, 7, 8, 11, 12, 13, 15, 17], // right side
i = 0, // index for a
j = 0; // index for b
while (i < a.length && j < b.length) { // prevent running forever
while (a[i] < b[j]) { // check left side
++i; // increment index
}
while (b[j] < a[i]) { // check right side
++j; // increment
}
if (a[i] === b[j]) { // check equalness
console.log(a[i], b[j]); // output or collect
++i; // increment indices
++j;
}
}
Since both arrays are sorted, just save the latest match index, then start your inner loop from that index.
var lastMatchedIndex = 0;
for (var i = 0; i < a.length; i++) {
    for (var j = lastMatchedIndex; j < b.length; j++) {
        if (a[i] == b[j]) {
            console.log(a[i], b[j]);
            lastMatchedIndex = j;
            break;
        }
    }
}
=================
UPDATE :
As Xufox mentioned in the comments, once a[i] is lower than b[j] you can break out of the inner loop, since there is no point in continuing it: every remaining value of b is even larger.
var lastMatchedIndex = 0;
for (var i = 0; i < a.length; i++) {
    for (var j = lastMatchedIndex; j < b.length; j++) {
        if (a[i] == b[j]) {
            console.log(a[i], b[j]);
            lastMatchedIndex = j;
            break;
        }
        if (a[i] < b[j]) { // no match possible for a[i] beyond this point
            lastMatchedIndex = j;
            break;
        }
    }
}
An optimal strategy is one that minimizes the number of comparisons and array reads.
Theoretically, what you want is to alternate which list you progress through, so as to avoid unnecessary comparisons. Because the lists are sorted, we know that no number to the left of any index in a list can ever be larger than the value at that index, so there is never a reason to look backwards.
Assuming the following list A = [1,5], list B = [1,1,3,4,5,6] and indexes a and b both starting at 0, you would want your code to go like this:
A[a] == 1, B[b] == 1
A[a] == B[b] --> add indexes to results and increase b (B[b] == 1)
A[a] == B[b] --> add indexes to results and increase b (B[b] == 3)
A[a] < B[b] --> don't add indexes to results and increase a (A[a] == 5)
A[a] > B[b] --> don't add indexes to results and increase b (B[b] == 4)
A[a] > B[b] --> don't add indexes to results and increase b (B[b] == 5)
A[a] == B[b] --> add indexes to results and increase b (B[b] == 6)
A[a] < B[b] --> don't add indexes to results and increase a (A is at the end, so we terminate and return results)
Below is my JavaScript performing the above described algorithm:
//Parameters
var listA = [];
var listB = [];
//Parameter initialization
(function populateListA() {
var value = 0;
while (listA.length < 200) {
listA.push(value);
value += Math.round(Math.random());
}
})();
(function populateListB() {
var value = 0;
while (listB.length < 300) {
listB.push(value);
value += Math.round(Math.random());
}
})();
//Searcher function
function findCommon(listA, listB) {
//List of results to return
var results = [];
//Initialize indexes
var indexA = 0;
var indexB = 0;
//Loop through list a
while (indexA < listA.length) {
//Get value of A
var valueA = listA[indexA];
var result_1 = void 0;
//Get last result or make a first result
if (results.length < 1) {
result_1 = {
value: valueA,
indexesInA: [],
indexesInB: []
};
results.push(result_1);
}
else {
result_1 = results[results.length - 1];
}
//If higher than last result, make new result
//Push index to result
if (result_1.value < valueA) {
//Make new object
result_1 = {
value: valueA,
indexesInA: [indexA],
indexesInB: []
};
//Push to list
results.push(result_1);
}
else {
//Add indexA to list
result_1.indexesInA.push(indexA);
}
//Loop through list b
while (indexB < listB.length) {
//Get value of B
var valueB = listB[indexB];
//If b is less than a, move up list b
if (valueB < valueA) {
indexB++;
continue;
}
//If b is greather than a, break and move up list a
if (valueB > valueA) {
break;
}
//If b matches a, append index to result
result_1.indexesInB.push(indexB);
//Move up list B
indexB++;
}
//Move up list A
indexA++;
}
//Return all results with values in both lines
return results.filter(function (result) { return result.indexesInB.length > 0; });
}
//Run
var result = findCommon(listA, listB);
//Output
console.log(result);
We could iterate one array and look for matches in the other, but each time we find a match we move the nested loop's starting point to the matched element + 1 for the next iteration. This works because both arrays are sorted, so on each match the portion of the second array left to compare gets shorter from the left.
We can also break out of the nested loop as soon as the inner array's element is greater than the outer one's current element, because we will never find a match after that point (the array is ordered, so only greater values remain). Here is an example finding duplicates in two arrays of roughly 10,000 elements; it takes about 15 milliseconds:
var arr = [];
var arr2 = [];
for(let i = 0; i<9999; i++){
arr.push(i);
arr2.push(i+4999)
}
var k = 0;//<-- the index we start to compare
var res = [];
for (let i = 0; i < arr2.length; i++) {
for (let j = k; j < arr.length; j++) {
if (arr2[i] === arr[j]) {
res.push(arr2[i]);
k = j + 1;//<-- updates the index
break;
} else if (arr[j] > arr2[i]) {//<-- there is no need to keep going
break;
}
}
}
console.log(res.length)
I did not print res, because it has 5000 elements.
You can build a hash from the first array (regardless of whether it is sorted or not), then iterate the second array and check for existence in the hash!
let arr1 = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150],
    arr2 = [15, 30, 45, 60, 75, 90, 105, 120, 135, 150, 165],
    hash = arr1.reduce((h, e) => (h[e] = 1, h), {}), // iterate first array once
    common = arr2.filter(v => hash[v]);              // iterate second array once
console.log('Common elements: ', common);
Not sure but this may help
let num1 = [2, 3, 6, 6, 5];
let num2 = [1, 3, 6, 4];
var array3 = num1.filter((x) => {
return num2.indexOf(x) != -1
})
console.log(array3);
I sometimes find it convenient to turn one list into a hashset.
var hashA = {};
for(var i=0; i<a.length; i++) {hashA[a[i]] = true;}
then you can search the hashset.
for(var i=0; i<b.length; i++) {if(hashA[b[i]]) {console.log(b[i]);}}
This isn't as fast as the binary search, of course, because you have to take time to build the hashset, but it's not bad, and if you need to keep the list and do a lot of future searching it might be the best option. Also, I know JavaScript objects aren't really just hashsets (it's complicated), but it mostly works pretty well.
Honestly though, for 3000 items I wouldn't change the code. That's still not big enough to be an issue. That will run in like 30ms. So it also depends on how often it's going to run. Once an hour? Forget about it. Once per millisecond? Definitely gotta optimize that.
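If you want to check that kind of timing yourself, here is a rough benchmark sketch; the array sizes match the 2000/3000 figures from the question, but the contents are made up and the exact numbers you get will vary by machine.
// naive nested-loop intersection over 2000 x 3000 elements, timed with console.time
var bigA = Array.from({ length: 2000 }, (_, i) => i * 2);
var bigB = Array.from({ length: 3000 }, (_, i) => i * 3);
console.time('nested loop');
var matches = [];
for (var i = 0; i < bigA.length; i++) {
    for (var j = 0; j < bigB.length; j++) {
        if (bigA[i] === bigB[j]) matches.push(bigA[i]);
    }
}
console.timeEnd('nested loop');
console.log(matches.length + ' common elements');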
If we are talking about an algorithm to find the common elements between two arrays, then here is my take.
function common(arr1, arr2) {
    // keep the elements of arr1 that also appear in arr2
    var newArr = arr1.filter(function(v) { return arr2.indexOf(v) >= 0; });
    return newArr;
}
But if you are also going to think about performance, then you should try other approaches as well.
First check the performance of the different kinds of JavaScript loops here; it will help you figure out the best way:
https://dzone.com/articles/performance-check-on-different-type-of-for-loops-a
https://hackernoon.com/javascript-performance-test-for-vs-for-each-vs-map-reduce-filter-find-32c1113f19d7
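As a concrete alternative that is not taken from the linked articles, here is a minimal Set-based sketch; it replaces the repeated indexOf scans with constant-time lookups, giving roughly O(n + m) work (the function name commonWithSet is just illustrative).
function commonWithSet(arr1, arr2) {
    const seen = new Set(arr1);            // build the lookup once
    return arr2.filter(v => seen.has(v));  // keep values present in both
}
console.log(commonWithSet([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [2, 4, 5, 7, 11, 15])); // [2, 4, 5, 7]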
I am having a hard time wrapping my brain around this one :/
(Removing Negatives) Given an array X of multiple values (e.g. [-3,5,1,3,2,10]), write a program that removes any negative values in the array. Once your program is done X should be composed of just positive numbers. Do this without creating a temporary array and only using pop method to remove any values in the array.
My thought was to write a loop through the array. If X[i] is negative, start another loop that swaps X[j] and X[j+1] until the end of the array (to preserve the order of the array), then use pop().
When I run the script, it looks like the loop is infinite. It also looks like, if there are two negative values in a row, the second one may get skipped on the next iteration of i. Is there a simpler way?
var X = [1,-6,-7,8,9];
//test= [1,-7,8,-6,9]
temp = 0
for (i = 0;i<X.length-1;i++){
if (X[i]<0){
for (j = i;j<=X.length-1;j++){
X[j] = temp
X[j] = X[j+1]
X[j+1] = temp
}
if(X[X.length-1] < 0){X.pop()}
}
};
console.log(X);
Very similar to your mentioned approach, except there's no reason to maintain order (unless that is missing from the description). Loop in reverse and when a negative is found, swap it with the last element and pop. If we first pop all negatives off of the end, we know the last element is not negative.
var x = [1, -6, -7, 8, 9, -3];
// strip all negatives off the end
while (x.length && x[x.length - 1] < 0) {
x.pop();
}
for (var i = x.length - 1; i >= 0; i--) {
if (x[i] < 0) {
// replace this element with the last element (guaranteed to be positive)
x[i] = x[x.length - 1];
x.pop();
}
}
document.body.innerHTML = '<pre>' + JSON.stringify(x, null, 4) + '</pre>';
This solution has linear complexity as it only iterates the list once.
Sort the array first so the negative numbers are at the end.
We can sort with a callback that moves the negative numbers to the end.
Then iterate backwards and remove the last indices with pop as long as they are negative.
What we're left with is the positive values.
var X = [-3, 5, 3, 8, 1,-6,-7,8,9];
X.sort(function(a,b) {
return b - a;
});
for (var i=X.length; i--;) {
if ( X[i] < 0 ) X.pop();
}
document.body.innerHTML = '<pre>' + JSON.stringify(X, null, 4) + '</pre>';
There are many good answers already. Here's a straightforward filter that doesn't sort the array and uses an auxiliary array index j <= i:
function removeNeg(arr) {
var j = 0;
// filter array
for (var i = 0; i < arr.length; i++) {
if (arr[i] >= 0) arr[j++] = arr[i];
}
// pop excess elements
while (j < arr.length) arr.pop();
}
This is really the C programmer's approach to James Montagne's answer, which is neater, because it pops as you go.
var x = [1, -6, -7, 8, 9];
var i = 0;
while (i < x.length) {
if (x[i] < 0) {
x[i] = x[x.length - 1];
x.pop();
} else {
i++;
}
}
just pop, no other methods of Array used
Here's a very simple solution that doesn't require sorting. For every element, shift it, push it if it is not negative. Do this a number of times equivalent to the array size. This can either be done with shift/push or pop/unshift.
var X = [-3, 5, 1, 3, 2, 10]; // sample input from the problem statement
var origLen = X.length;
for (var i = 0; i < origLen; i++) {
    var val = X.pop();     // take the last element off the end
    if (val >= 0)          // zero isn't negative, so keep it too
        X.unshift(val);    // put kept values back on the front
}
console.log(X);