What would be the time complexity of Array.from()? For example:
const set = new Set();
set.add('car');
set.add('cat');
set.add('dog');
console.log(Array.from(set)); // time complexity of making this conversion from Set to Array
It's O(n). When used on an iterable (like a Set), Array.from iterates over the iterable and puts every item into the new array, so there is one operation per item the iterable yields.
It is always going to be O(n), as the number of iterations is directly proportional to the number of elements in the set. The actual time complexity is O(n) for retrieving values from the set plus O(n) for pushing them into an array:
O(n) + O(n) = O(2n)
But since constant factors are dropped in Big-O analysis, it should be O(n).
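To see where the O(n) comes from, here is a rough sketch of what Array.from does with an iterable (simplified; the real spec also handles array-likes and an optional mapping function):

function arrayFromSketch(iterable) {
  const result = [];
  for (const item of iterable) { // one iteration per item the iterable yields
    result.push(item);           // amortized O(1) per push
  }
  return result;                 // n items => O(n) total
}

console.log(arrayFromSketch(set)); // ["car", "cat", "dog"]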
Related
What would be the time complexity of doing something like:
// assume all values are primitives
const result = [];
for (const test of tests) {
  result.push(Object.values(test));
}
I know that Object.keys is O(n) and thus assume the same for Object.values, which makes me believe this is O(n²), but I'm looking for a concrete answer.
Since Object.values runs in O(n), and you call it inside a for loop, your belief is right: your algorithm has O(n²) time complexity (strictly O(n·m), where m is the number of values per object, which becomes O(n²) when m grows with n).
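For a concrete sense of the total work, here is a hypothetical counting version of the loop above (the tests data is made up for illustration):

const tests = [{ a: 1, b: 2 }, { a: 3, b: 4 }, { a: 5, b: 6 }]; // n objects, m values each
let operations = 0;
const result = [];
for (const test of tests) {           // runs n times
  const values = Object.values(test); // O(m) work per call
  operations += values.length;
  result.push(values);                // amortized O(1)
}
console.log(operations); // 6, i.e. n * m value copies in total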
I know that looking up a hash value by key is O(1) (constant time), but is it the same when finding the key that has a certain value? I'm looking up the key using:
const answer = Object.keys(dict).find(key => dict[key] === 1);
dict is an object that has integer keys.
That's O(n) time complexity, since you're performing a linear search on an array (the array returned by Object.keys). At worst, the item you want will be the last item in the array (or it won't be in the array at all), which would require n operations to determine (n being the length of the array) - hence, it's O(n) time.
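A sketch of what that one-liner does step by step, with a made-up dict (the data is illustrative only):

const dict = { 3: 0, 7: 1, 9: 2 };
const keys = Object.keys(dict);                   // builds an n-element array: O(n)
const answer = keys.find(key => dict[key] === 1); // linear scan: up to n checks
console.log(answer); // "7" - found after checking "3" first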
Is the time complexity of inserting an element at a certain index in an empty array O(1)?
For example:
let array = [];
array[5] = 5;
Since I want to insert elements from the end of the array toward the front, using the unshift method would cost O(n) per operation. I wonder if the above operation performs better than unshift.
Thanks!
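For reference, a minimal sketch of the two approaches being compared, filling indices back to front (the data is illustrative):

const n = 5;

// Approach 1: assign by index; each assignment is a single property write,
// though the array stays sparse until all slots are filled
const byIndex = [];
for (let i = n - 1; i >= 0; i--) {
  byIndex[i] = i;
}

// Approach 2: unshift; each call shifts every existing element, O(n) per call
const byUnshift = [];
for (let i = n - 1; i >= 0; i--) {
  byUnshift.unshift(i);
}

console.log(byIndex, byUnshift); // both end up as [0, 1, 2, 3, 4]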
Assume we have the following hash map class in Javascript:
class myHash {
  constructor() {
    this.list = [];
  }

  // sum of character codes: loops once per character, so O(k) for a key of length k
  hash(input) {
    var checksum = 0;
    for (var i = 0; i < input.length; i++) {
      checksum += input.charCodeAt(i);
    }
    return checksum;
  }

  get(input) {
    return this.list[this.hash(input)];
  }

  set(input, value) {
    this.list[this.hash(input)] = value;
  }
}
The hash function contains a loop with a complexity of O(n) and is called by both the getter and the setter. Doesn't this make the complexity of the hash map O(n)?
When you're performing Big-O analysis you need to be very clear what the variables are. Oftentimes the n is left undefined, or implied, but it's crucial to know what exactly it is.
Let's define n as the number of items in the hash map.
When n is the only variable under consideration then all of the methods are O(1). None of them loops over this.list, and so all operate in constant time with respect to the number of items in the hash map.
But, you object: there's a loop in hash(). How can it be O(1)? Well, what is it looping over? Is it looping over the other items in the map? No. It's looping over input, but input.length is not a variable we're considering.
When people analyze hash map performance they normally ignore the length of the strings being passed in. If we do that, then with respect to n hash map performance is O(1).
If you do care about string lengths then you need to add another variable to the analysis.
Let's define n as the number of items in the hash map.
Let's define k as the length of the string being read/written.
The hash function is O(k) since it loops over the input string in linear time. Therefore, get() and set() are also O(k).
Why don't we care about k normally? Why do people only talk about n? It's because k is a factor when analyzing the hash function's performance, but when we're analyzing how well the hash map performs we don't really care about how quickly the hash function runs. We want to know how well the hash map itself is doing, and none of its code is directly impacted by k. Only hash() is, and hash() is not a part of the hash map, it's merely an input to it.
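To make the distinction concrete, here is the class above in use (the keys are made up):

const map = new myHash();
map.set('ab', 1);     // hash() loops 2 times (k = 2)
map.set('abcdef', 2); // hash() loops 6 times (k = 6)
map.get('ab');        // still loops 2 times, no matter how many entries exist
// growing n (the number of entries) never changes how long hash('ab') takes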
Yes, the string size (k) does matter (more precisely, the hash function's complexity matters).
Assume:
Getting an item by array index takes f(n) time.
The hash function takes g(k) time.
Then the complexity is O(f(n) + g(k)).
We know that g(k) is O(k), and if we assume f(n) is O(1), the complexity becomes O(k).
Furthermore, if we assume the string size k is never greater than a constant c, the complexity becomes O(c), which can be rewritten as O(1).
So in fact, given your implementation, O(1) is a correct bound only if:
Getting an item by array index takes O(1).
The string is never longer than a constant c.
Notes
Some hash functions may themselves be O(1), e.g. ones that simply take the first character or the length.
Whether getting an item by array index takes O(1) should be checked; for example, in JavaScript a sparse array may take longer to access.
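For instance, a sketch of an O(1) hash of the kind the first note mentions (illustrative only; it trades speed for far more collisions):

function firstCharHash(input) {
  // looks at a fixed amount of the input, so the cost does not depend on k
  return input.length > 0 ? input.charCodeAt(0) : 0;
}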
I have a series of data whose size increases gradually, and I want to find a specific row by its Id. I have two options. First: create an array, push every new row into it, and whenever I want a row, search through the items (or use the array prototype function find). The other option is to create an object and, every time a new row comes in, add it as a property (with the property name being the Id of the row); when I want a row, get the property of this object by its name (Id). Now I want to know which option is the most efficient. (Or is there a third option?)
first option:
const array = [
  {
    "Id": 15659,
    "FeederCode": 169,
    "NmberOfRepetition": 1
  },
  {
    "Id": 15627,
    "FeederCode": 98,
    "NmberOfRepetition": 2
  },
  {
    "Id": 15557,
    "FeederCode": 98,
    "NmberOfRepetition": 1
  }
]
Each time a new row comes in, a new row is pushed into this array.
Access: array.find(x => x.Id === 15659)
second option:
const object = {
  15659: {
    "Id": 15659,
    "FeederCode": 169,
    "NmberOfRepetition": 1
  },
  15627: {
    "Id": 15627,
    "FeederCode": 98,
    "NmberOfRepetition": 2
  },
  15557: {
    "Id": 15557,
    "FeederCode": 98,
    "NmberOfRepetition": 1
  }
}
Each time a new row comes in, a new property is added to this object.
Access: object[15659]
Edit: I read somewhere that adding new properties to an existing object is costly.
If you are looking to perform search operations, you should use an Object, as it gives better performance than searching in an Array.
The complexity of a lookup in an Object is O(1), while a search in an Array is O(n). Hence, to yield better performance, you should use an Object.
Well, in the first example you will have to iterate the array every time you use find.
In the second example you will be accessing a property directly, leading to O(1) execution time: always fixed, no matter how many items are in there. So for better performance you ought to go with your second approach.
Reading from objects is faster and takes O(1) time, as @NikhilAggarwal just said.
But recently I was reading about V8 optimizations and wanted to check, so I used Benchmark.js for confirmation.
Here are my findings -
Number of entries in obj or arr : 100000
Number of fetch operations from Obj: 47,174,859 ops/sec
Number of search operation from Array: 612 ops/sec
If we reduce the entries, the number of operations per second for the object remains almost the same, but for the array it increases roughly in inverse proportion to the number of entries, as expected for a linear search.
Number of entries in obj or arr : 100
Number of fetch operations from Obj: 44,264,116 ops/sec
Number of search operation from Array: 520,709 ops/sec
Number of entries in obj or arr : 10
Number of fetch operations from Obj: 46,739,607 ops/sec
Number of search operation from Array: 3,611,517 ops/sec
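A minimal sketch of how such a comparison can be reproduced with console.time instead of Benchmark.js (the row shape mirrors the examples above; numbers will vary by engine):

const N = 100000;
const arr = [];
const obj = {};
for (let i = 0; i < N; i++) {
  const row = { Id: i, FeederCode: i % 200, NmberOfRepetition: 1 };
  arr.push(row); // option 1: array of rows
  obj[i] = row;  // option 2: Id -> row
}

console.time('array find, O(n)');
arr.find(x => x.Id === N - 1); // worst case: scans all N rows
console.timeEnd('array find, O(n)');

console.time('object lookup, O(1)');
const hit = obj[N - 1];        // direct property access
console.timeEnd('object lookup, O(1)');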