Time complexity of pushing elements using Object.values(test[i]) - javascript

What would be the time complexity of doing something like:
// assume all values are primitives
const result = [];
for (const test of tests) {
  result.push(Object.values(test));
}
I know that Object.keys is O(n) and thus assume the same for Object.values, which makes me believe this is O(n²), but I'm looking for a concrete answer.

Since Object.values itself runs in O(N) over the object's own enumerable properties, and you call it once per iteration of your for loop, your belief is right: the algorithm has O(n²) time complexity.
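To make the counting concrete, here is a small sketch that tallies how many values get copied in total, assuming a hypothetical input of n objects with n keys each (the data and the counter are my own, not from the question):

// hypothetical input: n objects, each with n keys
const n = 100;
const tests = Array.from({ length: n }, () =>
  Object.fromEntries(Array.from({ length: n }, (_, j) => [`k${j}`, j]))
);

const result = [];
let valuesCopied = 0;
for (const test of tests) {
  const values = Object.values(test); // O(number of keys) per call
  valuesCopied += values.length;
  result.push(values);
}
console.log(valuesCopied); // n * n = 10000 values touched in total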

Related

Is `Array.prototype.reduceRight` identical to `reverse` followed by `reduce`?

Array.prototype.reduceRight reduces an array to a single value, working right-to-left (i.e. starting from the end of the array).
Is calling reduceRight exactly the same as calling reverse followed by reduce? If so, why does reduceRight exist?
Array#reduceRight is not the same as Array#reverse() -> Array#reduce(). Here is the key difference: .reduce()/.reduceRight() do not modify the starting array:
const arr = ["a", "b", "c"];
const combine = arr.reduceRight((a, b) => a+b, "");
console.log(combine);
console.log(arr);
However, .reverse() does:
const arr = ["a", "b", "c"];
const combine = arr.reverse().reduce((a, b) => a+b, "");
console.log(combine);
console.log(arr);
There is also a performance question: .reverse() incurs an additional O(n) pass to reverse the array in place, on top of the .reduce() that already runs in O(n). Yes, the final complexity is still O(n) (we ignore the constants), but a single pass through the array is faster.
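A rough micro-benchmark sketch of that difference (my own code; absolute timings vary by engine and input size, the point is only that the reverse-based version makes extra passes and, here, an extra copy to avoid mutating the source):

const big = Array.from({ length: 1e6 }, (_, i) => i);

console.time('reduceRight');
big.reduceRight((sum, x) => sum + x, 0);           // one pass
console.timeEnd('reduceRight');

console.time('reverse + reduce');
[...big].reverse().reduce((sum, x) => sum + x, 0); // copy, reverse, then reduce
console.timeEnd('reverse + reduce');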
As already mentioned, it is not the same. Here are two more use cases, where those two versions behave differently:
Everything that is based on the index (the third argument of the callback called by both reduce and reduceRight) might behave differently (see the sketch after this list).
Reverse loops are often used when the original array is being modified. Depending on the exact use case, that might work with reduceRight but break with reduce.
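As a small illustration of the index point, a sketch (my own example):

const letters = ["a", "b", "c"];

letters.reduce((acc, item, index) => {
  console.log(index, item); // 0 "a", 1 "b", 2 "c"
  return acc;
}, null);

letters.reduceRight((acc, item, index) => {
  console.log(index, item); // 2 "c", 1 "b", 0 "a"
  return acc;
}, null);

// After reverse(), reduce visits the items in the same order as reduceRight,
// but the indices start at 0 again, so index-based logic can behave differently.
[...letters].reverse().reduce((acc, item, index) => {
  console.log(index, item); // 0 "c", 1 "b", 2 "a"
  return acc;
}, null);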
It was already mentioned that .reverse() modifies the initial array.
In addition, according to spec:
https://tc39.es/ecma262/#sec-array.prototype.reduce
https://tc39.es/ecma262/#sec-array.prototype.reduceright
Implementations are a bit different.
Let me draw an analogy with .push() and .unshift(): they do much the same thing, inserting an element into an array. We use push a lot and unshift quite rarely, but from time to time there is a perfect moment for unshift, just as there is for reduceRight.

What is the runtime complexity of this function?

I believe it's quadratic O(n^2), but I'm not 100% sure because of uncertainty about how the .filter() and .map() operations work in JavaScript.
The big question I have is whether the entire filter() operation completes before starting a single map() operation, or if it's smart enough to perform the map() operation while it's already iterating within the filter() operation.
The method
function subscribedListsFromSubscriptions(subscriptions: Subscription[]) {
  return new Set(subscriptions.filter((list) => {
    return list.subscribed;
  }).map((list) => {
    return list.list_id;
  }));
}
Example input data
let subscriptions = [{
  list_id: 'abc',
  subscribed: false
}, {
  list_id: 'ghi',
  subscribed: false
}];
From what I see
It appears to be:
filter() for each element of subscriptions - time n
map() for each remaining element - time n (at maximum)
new Set() for each remaining element - time n (at maximum)
For the new Set() operation, I'm guessing it's creating a new object and adding each element to the created instance.
If there were many duplicates in the data, one might expect efficiency to improve. But we don't expect many duplicates, and from my understanding of Big O, the worst case is what counts.
From this analysis, I'm expecting the time complexity to be either O(n^2) or O(n^3). But as stated, I'm unsure of how to interpret it for certain.
Any help in this would be greatly appreciated. Thanks in advance!
I think your interpretation of the order of operations is correct: filter, then map, then create a Set.
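One way to see that ordering for yourself is to log from both callbacks; a small sketch with made-up data:

const subscriptions = [
  { list_id: 'abc', subscribed: true },
  { list_id: 'ghi', subscribed: false },
];

const ids = subscriptions
  .filter((list) => {
    console.log('filter', list.list_id);
    return list.subscribed;
  })
  .map((list) => {
    console.log('map', list.list_id);
    return list.list_id;
  });

// Every "filter" line prints before any "map" line:
// filter abc, filter ghi, then map abc.
console.log(new Set(ids)); // Set { 'abc' }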
However, in order for this algorithm to reach O(n^2), you would have to create a nested loop, for example:
create the Set for each element of the array
compare each element with each other element in the array.
This is not the case here. In the worst-case scenario (no duplicates), the algorithm iterates over the input array three times, giving O(3n), which is still linear, not quadratic.
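If those extra passes ever mattered, the Set could be built in a single pass with a plain loop; a sketch, not the original code:

function subscribedListsFromSubscriptionsSinglePass(subscriptions) {
  const ids = new Set();
  for (const list of subscriptions) {
    if (list.subscribed) {
      ids.add(list.list_id); // filter, transform, and insert in one pass
    }
  }
  return ids;
}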

Why is a hash map get/set considered to have O(1) complexity?

Assume we have the following hash map class in Javascript:
class myHash {
  constructor() {
    this.list = [];
  }

  // Sums the character codes of the input string: O(k) in the string length.
  hash(input) {
    var checksum = 0;
    for (var i = 0; i < input.length; i++) {
      checksum += input.charCodeAt(i);
    }
    return checksum;
  }

  get(input) {
    return this.list[this.hash(input)];
  }

  set(input, value) {
    this.list[this.hash(input)] = value;
  }
}
The hash function has a loop which has a complexity of O(n) and is called during getters and setters. Doesn't this make the complexity of the hash map O(n)?
When you're performing Big-O analysis you need to be very clear what the variables are. Oftentimes the n is left undefined, or implied, but it's crucial to know what exactly it is.
Let's define n as the number of items in the hash map.
When n is the only variable under consideration then all of the methods are O(1). None of them loops over this.list, and so all operate in constant time with respect to the number of items in the hash map.
But, you might object: there's a loop in hash(). How can it be O(1)? Well, what is it looping over? Is it looping over the other items in the map? No. It's looping over input, but input.length is not a variable we're considering.
When people analyze hash map performance they normally ignore the length of the strings being passed in. If we do that, then with respect to n hash map performance is O(1).
If you do care about string lengths then you need to add another variable to the analysis.
Let's define n as the number of items in the hash map.
Let's define k as the length of the string being read/written.
The hash function is O(k) since it loops over the input string in linear time. Therefore, get() and set() are also O(k).
Why don't we care about k normally? Why do people only talk about n? It's because k is a factor when analyzing the hash function's performance, but when we're analyzing how well the hash map performs we don't really care about how quickly the hash function runs. We want to know how well the hash map itself is doing, and none of its code is directly impacted by k. Only hash() is, and hash() is not a part of the hash map, it's merely an input to it.
Yes, the string size (k) does matter (more precisely, the complexity of the hash function does).
Assume:
getting an item by array index takes f(n) time
the hash function takes g(k) time
Then the complexity is O(f(n) + g(k)).
We know that g(k) is O(k), and if we assume f(n) is O(1), the complexity becomes O(k).
Furthermore, if we assume the string size k is never greater than a constant c, the complexity becomes O(c), which can be rewritten as O(1).
So in fact, given your implementation, O(1) is the correct bound only if:
getting an item by array index takes O(1), and
the string is never longer than a constant c.
Notes
Some hash functions may themselves be O(1), e.g. simply taking the first character or the length.
Whether array-index access really takes O(1) should also be checked; for example, in JavaScript a sparse array may take longer to access.
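For instance, a hash that only looks at a bounded part of the key does a constant amount of work regardless of string length; a sketch illustrating that note (not a recommendation for real-world hashing):

// Hashes only the length and at most the first 4 characters,
// so its cost does not grow with the length of the input string.
function boundedHash(input) {
  let checksum = input.length;
  const limit = Math.min(input.length, 4);
  for (let i = 0; i < limit; i++) {
    checksum += input.charCodeAt(i);
  }
  return checksum;
}

console.log(boundedHash('abc'));              // short key
console.log(boundedHash('a'.repeat(100000))); // very long key, same amount of work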

Sort strings that express a time period and take the shortest one

I have an array like this:
const array = ['Monthly', 'Annually', 'Quarterly', 'Annually', 'Quarterly', 'Monthly'];
After removing the duplicates I should get an array like this:
const array = ['Monthly', 'Annually', 'Quarterly'];
Now I should get the shortest period. I have thought of associating every string with a number, transforming the array in this way:
const array = [{name:'Monthly', order:1}, {name:'Annually',order:3}, {name:'Quarterly',order:2}];
and then compute the min according to the order.
Do you have some other proposal to improve the algorithm? Could it be improved?
One small improvement:
Removing duplicates is redundant: it requires O(n) space or O(n log n) time, since it is a variant of the element distinctness problem, while finding the minimal value can be done in O(n) time and O(1) space.
Also, I'm not sure what your "order" exactly is; if it is something that is calculated once at compile time / pre-processing and is constant, that's fine, as long as you don't sort the list for each query.
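A sketch of that idea, with a fixed order map built once (the names are my own):

// Fixed ranking of periods, shortest first; defined once, not per query.
const PERIOD_ORDER = { Monthly: 1, Quarterly: 2, Annually: 3 };

function shortestPeriod(periods) {
  // Single O(n) pass, O(1) extra space: no dedup, no sort.
  return periods.reduce((best, current) =>
    PERIOD_ORDER[current] < PERIOD_ORDER[best] ? current : best
  );
}

const array = ['Monthly', 'Annually', 'Quarterly', 'Annually', 'Quarterly', 'Monthly'];
console.log(shortestPeriod(array)); // 'Monthly'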

Array.prototype.map() and Array.prototype.forEach()

I've an array (example array below) -
a = [{"name":"age","value":31},
{"name":"height (inches)","value":62},
{"name":"location","value":"Boston, MA"},
{"name":"gender","value":"male"}];
I want to iterate through this array of objects and produce a new Object (not specifically reduce).
I've these two approaches -
a = [{"name":"age","value":31},
{"name":"height (inches)","value":62},
{"name":"location","value":"Boston, MA"},
{"name":"gender","value":"male"}];
// using Array.prototype.map()
b = a.map(function (item) {
  var res = {};
  res[item.name] = item.value;
  return res;
});
console.log(JSON.stringify(b));
var newObj = [];
// using Array.prototype.forEach()
a.forEach(function (d) {
  var obj = {};
  obj[d.name] = d.value;
  newObj.push(obj);
});
console.log(JSON.stringify(newObj))
Is it not right to just use either one for this sort of operation?
Also, I'd like to understand the use case scenarios where one will be preferred over the other? Or should I just stick to for-loop?
As you've already discussed in the comments, there's no outright wrong answer here. Aside from some rather fine points of performance, this is a style question. The problem you are solving can be solved with a for loop, .forEach(), .reduce(), or .map().
I list them in that order deliberately, because each one of them could be re-implemented using anything earlier in the list. You can use .reduce() to duplicate .map(), for instance, but not the reverse.
In your particular case, unless micro-optimizations are vital to your domain, I'd make the decision on the basis of readability and code-maintenance. On that basis, .map() does specifically and precisely what you're after; someone reading your code will see it and know you're consuming an array to produce another array. You could accomplish that with .forEach() or .reduce(), but because those are capable of being used for more things, someone has to take that extra moment to understand what you ARE using them for. .map() is the call that's most expressive of your intent.
(Yes, that means in essence prioritizing efficiency-of-understanding over efficiency-of-execution. If the code isn't part of a performance bottleneck in a high-demand application, I think that's appropriate.)
You asked about scenarios where another might be preferred. In this case, .map() works because you're outputting an array, and your output array has the same length as your input array. (Again; that's what .map() does). If you wanted to output an array, but you might need to produce two (or zero) elements of output for a single element of input, .map() would be out and I'd probably use .reduce(). (Chaining .filter().map() would also be a possibility for the 'skip some input elements' case, and would be pretty legible)
If you wanted to split the contents of the input array into multiple output arrays, you could do that with .reduce() (by encapsulating all of them as properties of a single object), but .forEach() or the for loop would look more natural to me.
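For completeness, here is one way splitting into multiple output arrays with .reduce() might look, using the data above and an arbitrary grouping criterion of my own (numeric vs. text values):

const split = a.reduce(
  function (acc, item) {
    var bucket = typeof item.value === 'number' ? 'numeric' : 'text';
    acc[bucket].push(item);
    return acc;
  },
  { numeric: [], text: [] }
);

console.log(split.numeric.length); // 2 (age, height)
console.log(split.text.length);    // 2 (location, gender)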
First, either of those will work, and with your example there's no reason not to use whichever is more comfortable for your development cycle. I would probably use map, since that is what it is for: to create "a new array with the results of calling a provided function on every element in this array."
However, are you asking which is the absolute fastest? Then neither of those; the fastest by 2.5-3x will be a simple for-loop (see http://jsperf.com/loop-vs-map-vs-foreach for a simple comparison):
var newObj = [];
// note: this loop condition stops at the first falsy element, which is fine here since every element is an object
for (var i = 0, item; item = a[i]; i++) {
  var obj = {};
  obj[item.name] = item.value;
  newObj.push(obj);
}
console.log(JSON.stringify(newObj));
