What is the runtime complexity of this function? - javascript

I believe it's quadratic, O(n^2), but I'm not 100% sure because I'm uncertain how the .filter() and .map() operations work in JavaScript.
The big question I have is whether the entire filter() operation completes before starting a single map() operation, or if it's smart enough to perform the map() operation while it's already iterating within the filter() operation.
The method
function subscribedListsFromSubscriptions(subscriptions: Subscription[]) {
  return new Set(subscriptions.filter((list) => {
    return list.subscribed;
  }).map((list) => {
    return list.list_id;
  }));
}
Example input data
let subscriptions = [{
  list_id: 'abc',
  subscribed: false
}, {
  list_id: 'ghi',
  subscribed: false
}];
From what I see
It appears to be:
filter() for each element of subscriptions - time n
map() for each remaining element - time n (at maximum)
new Set() for each remaining element - time n (at maximum)
For the new Set() operation, I'm guessing it's creating a new object and adding each element to the created instance.
If there were many duplicates in the data, one might expect the Set construction to do less work. But we don't expect many duplicates, and from my understanding of Big O, it's the worst case that counts.
From this analysis, I'm expecting the time complexity to be either O(n^2) or O(n^3). But as stated, I'm not sure how to interpret it for certain.
Any help in this would be greatly appreciated. Thanks in advance!

I think your interpretation of the order of operations is correct: filter, then map, then create a Set.
However, in order for this algorithm to reach O(n^2), you would need a nested loop, for example:
creating the Set for each element of the array, or
comparing each element with each other element in the array.
This is not the case here. In the worst case (no duplicates), the algorithm iterates the input array three times, meaning O(3n) complexity, which is still linear, not quadratic.
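To make the three sequential passes concrete, here is a minimal single-pass equivalent (a sketch in plain JavaScript, not the asker's original code); it does the same work in one loop, which changes only the constant factor, not the linear growth:
function subscribedListsFromSubscriptions(subscriptions) {
  const result = new Set();
  // One pass over the input: O(n) total.
  for (const list of subscriptions) {
    if (list.subscribed) {
      result.add(list.list_id); // Set.add is O(1) on average
    }
  }
  return result;
}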

Related

Time complexity of pushing elements using Object.values(test[i])

What would be the time complexity of doing something like:
// assume all values are primitives
const result = [];
for (const test of tests) {
  result.push(Object.values(test));
}
I know that Object.keys is O(n) and thus assume the same for Object.values, which makes me believe this is O(n²), but I'm looking for a concrete answer.
Since Object.values happens in O(N), and you have a nested for loop, your belief is right: your algorithm has O(n²) time complexity.
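To make that bound concrete, here is a small illustration with toy (hypothetical) data: the total work is the number of rows times the keys walked per row.
const tests = [{ a: 1, b: 2 }, { a: 3, b: 4 }]; // n rows, k keys each
const result = [];
for (const test of tests) {          // n iterations
  result.push(Object.values(test));  // Object.values walks k keys: O(k)
}
console.log(result); // [[1, 2], [3, 4]]
// Total: O(n * k); if k grows with n, that is O(n²).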

Intersect multiple arrays of objects

So first of all, I am not expecting a specific solution to my problem, but rather some insights from more experienced developers that could enlighten me and put me on the right track. I am not yet experienced enough in algorithms and data structures, and I take this as a challenge for myself.
I have n number of arrays, where n >= 2.
They all contain objects and in the end, I want an array that contains only the common elements between all these arrays.
array1 = [{ id: 1 }, { id: 2 }, { id: 6 }, { id: 10 }]
array2 = [{ id: 2 }, { id: 4 }, { id: 10 }]
array3 = [{ id: 2 }, { id: 3 }, { id: 10 }]
arrayOfArrays = [array1, array2, array3]
intersect = [{ id: 2 }, { id: 10 }]
How would one approach this problem? I have read solutions using Divide And Conquer, or Hash tables, and even using the lodash library but I would like to implement my own solution for once and not rely on anything external, and at the same time practice algorithms.
For efficiency, I would start by locating the shortest array; this is the one you should work with. You can run a reduce on the arrayOfArrays to iterate through it and return the index of the shortest array.
const shortestIndex = arrayOfArrays.reduce((accumulator, currentArray, currentIndex) => currentArray.length < arrayOfArrays[accumulator].length ? currentIndex : accumulator, 0);
Take the shortest array and call the reduce function again; this will iterate through the array and let you accumulate a final value. The second parameter is the starting value, which is a new array.
shortestArray.reduce((accumulator, currentObject) => /*TODO*/, [])
For the code, we basically need to loop through the remaining arrays and make sure the current object exists in all of them. You can use the every function since it fails fast: the first array the object doesn't exist in will make it return false.
Inside the every you can call some to check if there is at least one match.
isMatch = remainingArrays.every(array => array.some(object => object.id === currentObject.id))
If it's a match, add it to the accumulator which will be your final result. Otherwise, just return the accumulator.
return isMatch ? [...accumulator, currentObject] : accumulator;
Putting all that together should get you a decent solution. I'm sure there are more optimizations that could be made, but that's where I would start.
reduce
every
some
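Assembled into one function, a minimal sketch might look like this (not the answerer's exact code; it assumes, as in the question, that objects are matched by their id):
function intersectById(arrayOfArrays) {
  // Locate the index of the shortest array so we test the fewest candidates.
  const shortestIndex = arrayOfArrays.reduce((accumulator, currentArray, currentIndex) =>
    currentArray.length < arrayOfArrays[accumulator].length ? currentIndex : accumulator, 0);
  const shortestArray = arrayOfArrays[shortestIndex];
  const remainingArrays = arrayOfArrays.filter((_, index) => index !== shortestIndex);
  // Keep an object only if every remaining array contains a matching id.
  return shortestArray.reduce((accumulator, currentObject) => {
    const isMatch = remainingArrays.every(array =>
      array.some(object => object.id === currentObject.id));
    return isMatch ? [...accumulator, currentObject] : accumulator;
  }, []);
}
// intersectById([array1, array2, array3]) -> [{ id: 2 }, { id: 10 }]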
The general solution is to iterate through one input and check for each value whether it exists in all of the other inputs. (Time complexity: O(l * n * l), where n is the number of arrays and l is the average length of an array.)
Following the ideas of the other two answers, we can improve this brute-force approach a bit by
iterating through the smallest input
using a Set for efficient lookup of ids instead of iteration
so it becomes (with O(l * n + min_l * n) = O(n * l))
const arrayOfIdSets = arrayOfArrays.map(arr =>
  new Set(arr.map(val => val.id))
);
const smallestArray = arrayOfArrays.reduce((smallest, arr) =>
  smallest.length < arr.length ? smallest : arr
);
const intersection = smallestArray.filter(val =>
  arrayOfIdSets.every(set => set.has(val.id))
);
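With the question's example data (array1, array2, array3 as above), this yields:
console.log(intersection); // [{ id: 2 }, { id: 10 }]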
A good way to approach these kinds of problems, both in interviews and in regular life, is to think of the most obvious approach you can come up with, no matter how inefficient, and then think about how you can improve it. This is usually called a "brute force" approach.
So for this problem, perhaps an obvious but inefficient approach would be to iterate through every item in array1 and check if it is in both array2 and array3, noting it down (in another array) if it is. Then repeat for each item in array2 and in array3, making sure to only note down items you haven't noted down before.
We can see that will be inefficient because we'll be looking for a single item in an array many times, which is quite slow for an array. But it'll work!
Now we can get to work improving our solution. One thing to notice is that finding the intersection of 3 arrays is the same as finding the intersection of the third array with the intersection of the first and second array. So we can look for a solution to the simpler problem of the intersection of 2 arrays, to build one of an intersection for 3 arrays.
This is where it's handy to know your data structures. You want to be able to ask the question, "does this structure contain a particular element?" as quickly as possible. Think about which structures are good for that kind of lookup (known as search). More experienced engineers have this memorized/learned, but you can reference something like https://www.bigocheatsheet.com/ to see that sets are good at this.
I'll stop there to not give the full solution, but once you've seen that sets are fast at both insertion and search, think about how you can use that to solve your problem.

object access vs array access in javascript

I have a series of data whose size increases gradually, and I want to find a specific row by its Id. I have two options. First: create an array, push every new row into it, and whenever I want a row, search through the items (or use the array prototype function find). The other option is to create an object and, every time a new row comes, add it as a property whose name is the row's Id; when I want a row, I get the property by its name (Id). Which option is the most efficient? (Or is there a third option?)
first option:
const array = [
  {
    "Id": 15659,
    "FeederCode": 169,
    "NmberOfRepetition": 1
  },
  {
    "Id": 15627,
    "FeederCode": 98,
    "NmberOfRepetition": 2
  },
  {
    "Id": 15557,
    "FeederCode": 98,
    "NmberOfRepetition": 1
  }
]
each time a new row comes, a new row is pushed into this array.
access : array.find(x => x.Id === 15659)
second option:
const object = {
  15659: {
    "Id": 15659,
    "FeederCode": 169,
    "NmberOfRepetition": 1
  },
  15627: {
    "Id": 15627,
    "FeederCode": 98,
    "NmberOfRepetition": 2
  },
  15557: {
    "Id": 15557,
    "FeederCode": 98,
    "NmberOfRepetition": 1
  }
}
each time a new row comes, a new property is added to this object.
access : object[15659]
Edit: I read somewhere that adding new properties to an existing object can be costly.
If you are looking to perform search operations, you should use an Object, as it gives better performance compared to searching in an Array.
The complexity of search in an Object is O(1), while in an Array it is O(n). Hence, to get better performance, you should use an Object.
Well, in the first example you will have to iterate the array every time you use find.
In the second example you access a property directly, leading to O(1) execution time: always fixed, no matter how many items are in there. So for better performance you ought to go with your second option.
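A small sketch contrasting the two access patterns (toy rows modeled on the question's data):
const rows = [
  { Id: 15659, FeederCode: 169, NmberOfRepetition: 1 },
  { Id: 15627, FeederCode: 98, NmberOfRepetition: 2 }
];
// Option 1: array — find scans elements until a match: O(n) per lookup.
const viaFind = rows.find(x => x.Id === 15627);
// Option 2: object keyed by Id — property access is O(1) on average.
const byId = {};
for (const row of rows) byId[row.Id] = row;
const viaKey = byId[15627];
console.log(viaFind === viaKey); // true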
Reading from objects is faster and takes O(1) time, as #NikhilAggarwal just said.
But recently I was reading about V8 optimizations and wanted to check, so I used benchmark.js for confirmation.
Here are my findings -
Number of entries in obj or arr : 100000
Number of fetch operations from Obj: 47,174,859 ops/sec
Number of search operation from Array: 612 ops/sec
If we reduce the number of entries, the throughput for the object stays almost the same, but for arrays it rises dramatically (array search slows down roughly in proportion to the number of entries).
Number of entries in obj or arr : 100
Number of fetch operations from Obj: 44,264,116 ops/sec
Number of search operation from Array: 520,709 ops/sec
Number of entries in obj or arr : 10
Number of fetch operations from Obj: 46,739,607 ops/sec
Number of search operation from Array: 3,611,517 ops/sec
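As for the asker's "third option": none of the answers above mention it, but a Map is a common alternative; it also gives average O(1) lookup and is designed for keys that are added over time (a sketch, not from the original answers):
const byId = new Map();
// Each time a new row comes, set it under its Id: O(1) on average.
byId.set(15659, { Id: 15659, FeederCode: 169, NmberOfRepetition: 1 });
byId.set(15627, { Id: 15627, FeederCode: 98, NmberOfRepetition: 2 });
console.log(byId.get(15627)); // { Id: 15627, FeederCode: 98, NmberOfRepetition: 2 }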

What is the difference between array.fill() and array.apply()

Array.fill()
Array(10).fill(0);
Array.apply()
Array.apply(0, new Array(10));
Both seem to do much the same thing. So what is the difference between them, and which one is best for performance?
I pretty much got an answer, but here's an update:
Array.fill()
console.log(Array(10).fill(undefined));
Array.apply()
console.log(Array.apply(undefined, new Array(10)));
Now both really do seem to do the same thing. So what is the difference between them, and which one is best for performance?
"Both seem to do much the same thing."
No, they don't. The first fills the array with the value 0. The second fills it with undefined. Note that the 0 you're passing in the second example is completely ignored; the first argument to Function#apply sets what this is during the call, and Array with multiple arguments doesn't use this at all, so you could pass anything there.
Example:
var first = Array(10).fill(0);
console.log(first);
var second = Array.apply(0, new Array(10));
console.log(second);
So what is the difference between them...
See above. :-) Also, see notes below on the follow-up question.
Subjectively: Array.fill is clearer (to me). :-)
...and which one is best for performance?
It's irrelevant. Use the one that does what you need to do.
In a follow-up, you've asked the difference between
Array(10).fill(undefined)
and
Array.apply(undefined, new Array(10))
The end result of them is the same: An array with entries whose values are undefined. (The entries are really there, e.g. .hasOwnProperty(0) will return true. As opposed to new Array(10) on its own, which creates a sparse array with length == 10 with no entries in it.)
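A quick sketch to see that difference in action:
const sparse = new Array(10);                           // length 10, but no entries
const filled = Array(10).fill(undefined);               // 10 real entries
const applied = Array.apply(undefined, new Array(10));  // also 10 real entries
console.log(sparse.hasOwnProperty(0));  // false
console.log(filled.hasOwnProperty(0));  // true
console.log(applied.hasOwnProperty(0)); // true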
In terms of performance, it's extremely unlikely it matters. Either is going to be plenty fast enough. Write what's clearest and works in your target environments (Array.fill was added in ES2015, so doesn't exist in older environments, although it can easily be polyfilled). If you're really concerned about the difference in performance, write your real-world code both ways and profile it.
Finally: As far as I know there's no particular limit on the size of the array you can use with Array.fill, but Function#apply is subject to the maximum number of arguments for a function call and the maximum stack size in the JavaScript platform (which could be large or small; the spec doesn't set requirements). See the MDN page for more about the limit, but for instance Array.apply(0, new Array(200000)) fails on V8 (the engine in Chrome, Chromium, and Node.js) with a "Maximum call stack size exceeded" error.
I did a test for that:
const calculateApply = function(items) {
  console.time('calculateApply');
  Array.apply(undefined, new Array(items));
  console.timeEnd('calculateApply');
}
const calculateFill = function(items) {
  console.time('calculateFill');
  Array(items).fill(undefined);
  console.timeEnd('calculateFill');
}
const getTime = function(items) {
  console.log(`for ${items} items the time of fill is: `)
  calculateFill(items)
  console.log(`for ${items} items the time of apply is:`)
  calculateApply(items)
}
getTime(10)
getTime(100000)
getTime(100000000)
getTime(10)
getTime(100000)
getTime(100000000)
and here is the result:
for 10 items the time of fill is:
calculateFill: 0.481ms
for 10 items the time of apply is:
calculateApply: 0.016ms
for 100000 items the time of fill is:
calculateFill: 2.905ms
for 100000 items the time of apply is:
calculateApply: 1.942ms
for 100000000 items the time of fill is:
calculateFill: 6157.238ms
for 100000000 items the time of apply is:
/Users/n128852/Projects/pruebas/index.js:3
Array.apply(0, new Array(items));
^
RangeError: Maximum call stack size exceeded
https://www.ecma-international.org/ecma-262/6.0/#sec-function.prototype.apply
https://www.ecma-international.org/ecma-262/6.0/#sec-array.prototype.fill
Here you have the information. As you can read there, apply has to expand the whole array into individual call arguments, which is what runs into the stack limit; fill, conversely, simply iterates over the array and writes the value into each slot.

Finding max/min in a data string

I have a string of data containing x,y pairs, like this:
[
[0.519999980926514, 0.0900000035762787],
[0.529999971389771, 0.689999997615814],
[0.519999980926514, 2.25],
[0.850000023841858, 2.96000003814697],
[1.70000004768372, 3.13000011444092],
[1.91999995708466, 3.33999991416931],
[0.839999973773956, 3.5],
[1.57000005245209, 3.38000011444092],
[0.819999992847443, 3.00999999046326],
[1.69000005722046, 2.99000000953674],
[2.98000001907349, 3.23000001907349],
[0.509999990463257, 1.11000001430511],
[0.670000016689301, 1.35000002384186],
[0.660000026226044, 1.26999998092651],
[0.689999997615814, 0.0500000007450581],
[1.30999994277954, 0.0599999986588955],
[0.569999992847443, 0.0299999993294477],
[0.629999995231628, 0.0399999991059303],
[0.720000028610229, 0.0399999991059303],
[0.639999985694885, 0.0399999991059303],
[0.540000021457672, 0.0399999991059303],
[0.550000011920929, 0.0500000007450581],
[0.850000023841858, 0.0399999991059303],
[0.610000014305115, 0.0199999995529652],
[0.509999990463257, 0.0500000007450581],
[0.610000014305115, 0.0599999986588955],
[0.5, 0.0599999986588955],
[0.639999985694885, 0.0599999986588955]
]
What I want to do is find the max and min in each pair.
Is there any way of doing this without going through the entire string and examining each element in the pair?
There is no way to outperform O(n) (where n is the number of pairs) on this problem: every pair has to be examined at least once, and even determining the maximum of a single pair requires one comparison.
To do what you want, you ought to turn to JavaScript's wonderful map and apply functions:
function maxOfSubArrays(input) {
  // assumes `input` is an array of n-element arrays
  return input.map(function(el) { return Math.max.apply(Math, el); });
}
map returns a new array with each element set to the return value of the function applied to the corresponding element in the original array (i.e. mapped[i] = f(input[i])). apply calls a function, unpacking the provided array as the arguments (so Math.max.apply(Math, [1, 2]) is the same as Math.max(1, 2)).
To find the minimum rather than the maximum, use Math.min. To get both, it is easiest to simply return [Math.min..., Math.max...].
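For instance, the combined min/max variant suggested above might look like this sketch:
function minMaxOfSubArrays(input) {
  // For each pair, return [min, max].
  return input.map(function(el) {
    return [Math.min.apply(Math, el), Math.max.apply(Math, el)];
  });
}
// minMaxOfSubArrays([[1, 2], [5, 3]]) -> [[1, 2], [3, 5]]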
EDIT: If I understand your comment correctly, you want to treat it like an nx2 matrix, where n is the number of pairs (also: number of rows). Then, you want to find the maximum of each column. This is relatively easy to do with apply and map:
function maxOfColumns(input) {
  return [Math.max.apply(Math, input.map(function(el) { return el[0]; })),
          Math.max.apply(Math, input.map(function(el) { return el[1]; }))];
}
An attentive reader will note that this creates a copy of the entire data set. For large data sets, this can be a problem. In such a case, using map to construct the columns would not be ideal. However, for most use cases there will not be a significant performance difference.
Here is a JSFiddle that demonstrates both variants: http://jsfiddle.net/utX53/ Look in the JS Console to see the results (Ctrl-Shift-J on Chrome in Win/Lin)
There does not seem to be any particular structure that could be used to speed up the process, which means that O(n) is still the fastest this can be done.
A final word: maxOfColumns can be trivially extended to handle an arbitrary number of columns. I leave it to the reader to figure out how (mostly because it is much less readable than the above).
