I know that looking up a hash value by key is O(1) (constant time), but is the same true when finding the key that has a certain value? I'm looking up the key like this:
const answer = Object.keys(dict).find(key => dict[key] === 1);
dict is an object that has integer keys.
That's O(n) time complexity, since you're performing a linear search on an array (the array returned by Object.keys). At worst, the item you want will be the last item in the array (or it won't be in the array at all), which would require n operations to determine (n being the length of the array) - hence, it's O(n) time.
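If you need the reverse lookup more than once, a common workaround (a sketch, not from the original question) is to build an inverted Map up front, paying O(n) once so that each later value-to-key lookup is O(1) on average:

const dict = { 10: 1, 20: 2, 30: 3 };

// Build the value -> key index once: O(n)
const inverse = new Map(Object.entries(dict).map(([key, value]) => [value, key]));

// Every subsequent lookup is O(1) on average
console.log(inverse.get(1)); // "10" (keys come back as strings)

Note this assumes the values are unique; duplicate values would overwrite each other in the Map.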
What would be the time complexity of Array.from()? For example:
const set = new Set();
set.add('car');
set.add('cat');
set.add('dog');
console.log(Array.from(set)); // time complexity of making this conversion from a Set to an Array
It's O(n). When used on an iterable (like a Set), Array.from iterates over the iterable and puts every item returned into the new array, so there's an operation for every item returned by the iterable.
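Conceptually, for an iterable input, Array.from behaves roughly like this hand-rolled loop (a sketch that ignores the optional map callback and array-like inputs):

function arrayFromSketch(iterable) {
  const result = [];
  for (const item of iterable) { // one step per element: O(n)
    result.push(item);
  }
  return result;
}

console.log(arrayFromSketch(new Set(['car', 'cat', 'dog']))); // ["car", "cat", "dog"]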
It is always going to be O(n), as the number of iterations is directly proportional to the number of elements in the set. The actual time complexity is O(n) for retrieving values from the set plus O(n) for pushing them into the array:
O(n) + O(n) = O(2n)
But since constant factors are dropped in Big-O analysis, this simplifies to O(n).
Arrays in JS are objects, yet deleting an array element is O(n) whereas deleting an object property is O(1). Why?
Please let me know where I am going wrong. Thanks!
This is not an obvious question. For objects it is clearer: the delete operator runs in constant time, because you delete a specific property or method and never iterate over the object.
An array is an object with ordered indexes, and for deletion we use a method that iterates over the array to remove the item, such as Array.prototype.splice():
let arr = [1,6,10,99,44]; // to delete 10, you have to iterate over 1, 6 and 10
arr.splice(2,1); //arr = [1,6,99,44]
The above runs in linear time, O(n), but we can achieve constant time when deleting from the end of an array:
let arr = [1,6,10,99,44]; // to delete the last item
arr.length = 4; // arr = [1,6,10,99]
Finally, one tip: never use the delete operator on an array, as in delete arr[2], because the array's length does not change and you end up with a sparse array [1, 6, empty, 99, 44].
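A quick side-by-side comparison of the two behaviours:

let a = [1, 6, 10, 99, 44];
delete a[2];              // leaves a hole; length does not change
console.log(a.length, a); // 5 [1, 6, <1 empty item>, 99, 44]

let b = [1, 6, 10, 99, 44];
b.splice(2, 1);           // removes the element and reindexes the rest
console.log(b.length, b); // 4 [1, 6, 99, 44]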
deleting an obj element has BigO O(1)
That is because objects in JS behave like a hash map.
deleting an array element has BigO O(n)
That is because an array is a special object that keeps its elements in one contiguous chunk of memory, one after another with no free space between them. After deleting the element at index i, every element with an index greater than i has to move left to fill the released space.
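For example, removing the element at index 1 of a five-element array forces every later element to move one slot left, which is roughly what the engine does for splice (a sketch):

const arr = [1, 6, 10, 99, 44];
const i = 1; // index to remove

for (let j = i; j < arr.length - 1; j++) {
  arr[j] = arr[j + 1]; // shift each later element left: O(n) moves in total
}
arr.length -= 1;

console.log(arr); // [1, 10, 99, 44]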
Assume we have the following hash map class in Javascript:
class myHash {
  constructor() {
    this.list = [];
  }

  // Sum of character codes: loops once over the input string
  hash(input) {
    var checksum = 0;
    for (var i = 0; i < input.length; i++) {
      checksum += input.charCodeAt(i);
    }
    return checksum;
  }

  get(input) {
    return this.list[this.hash(input)];
  }

  set(input, value) {
    this.list[this.hash(input)] = value;
  }
}
The hash function has a loop which has a complexity of O(n) and is called during getters and setters. Doesn't this make the complexity of the hash map O(n)?
When you're performing Big-O analysis you need to be very clear what the variables are. Oftentimes the n is left undefined, or implied, but it's crucial to know what exactly it is.
Let's define n as the number of items in the hash map.
When n is the only variable under consideration then all of the methods are O(1). None of them loops over this.list, and so all operate in constant time with respect to the number of items in the hash map.
But, you object: there's a loop in hash(). How can it be O(1)? Well, what is it looping over? Is it looping over the other items in the map? No. It's looping over input, but input.length is not a variable we're considering.
When people analyze hash map performance they normally ignore the length of the strings being passed in. If we do that, then with respect to n hash map performance is O(1).
If you do care about string lengths then you need to add another variable to the analysis.
Let's define n as the number of items in the hash map.
Let's define k as the length of the string being read/written.
The hash function is O(k) since it loops over the input string in linear time. Therefore, get() and set() are also O(k).
Why don't we care about k normally? Why do people only talk about n? It's because k is a factor when analyzing the hash function's performance, but when we're analyzing how well the hash map performs we don't really care about how quickly the hash function runs. We want to know how well the hash map itself is doing, and none of its code is directly impacted by k. Only hash() is, and hash() is not a part of the hash map, it's merely an input to it.
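To make this concrete with the class from the question: the cost of get() and set() tracks the key length k, not the number of stored items n (a sketch):

const map = new myHash();

map.set('a', 1);              // k = 1: hash() does one loop iteration
map.set('a'.repeat(1000), 2); // k = 1000: hash() does a thousand iterations

// Same n either way; the work done depends only on the key's length
console.log(map.get('a')); // 1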
Yes, the string size (k) does matter (more precisely, the hash function's complexity does).
Assume:
Getting an item by array index takes f(n) time
The hash function takes g(k) time
Then the complexity is O(f(n) + g(k)).
We know that g(k) is O(k), and if we assume f(n) is O(1), the complexity becomes O(k).
Furthermore, if we assume the string size k is never greater than a constant c, the complexity becomes O(c), which can be rewritten as O(1).
So in fact, given your implementation, O(1) is the correct bound only if:
getting an item by array index takes O(1), and
the string is no longer than some constant c.
Notes
Some hash functions may themselves be O(1), e.g. simply taking the first character or the length.
Whether getting an item by array index really takes O(1) should be checked; for example, in JavaScript a sparse array may take longer to access.
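For instance, a length-based hash runs in O(1) regardless of input size, at the price of far more collisions (a sketch, not a recommendation):

// O(1) hash: reads a property, never loops over the characters
function constantTimeHash(input) {
  return input.length;
}

console.log(constantTimeHash('car')); // 3
console.log(constantTimeHash('dog')); // 3, collides with 'car'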
I have a series of data whose size increases gradually, and I want to find a specific row by its Id. I have two options. First: create an array, push every new row into it, and every time I want a row, search through the items (or use the array prototype function find). The other option: create an object and, every time a new row comes, add it as a property whose name is the Id of the row; when I want a row, get the property of the object by its name (Id). Now I want to know which option is the most efficient. (Or is there a third option?)
First option:
const array = [
  {
    "Id": 15659,
    "FeederCode": 169,
    "NmberOfRepetition": 1
  },
  {
    "Id": 15627,
    "FeederCode": 98,
    "NmberOfRepetition": 2
  },
  {
    "Id": 15557,
    "FeederCode": 98,
    "NmberOfRepetition": 1
  }
]
Each time a new row comes, it is pushed into this array.
Access: array.find(x => x.Id === 15659)
Second option:
const object = {
  15659: {
    "Id": 15659,
    "FeederCode": 169,
    "NmberOfRepetition": 1
  },
  15627: {
    "Id": 15627,
    "FeederCode": 98,
    "NmberOfRepetition": 2
  },
  15557: {
    "Id": 15557,
    "FeederCode": 98,
    "NmberOfRepetition": 1
  }
}
Each time a new row comes, a new property is added to this object.
Access: object[15659]
Edit: I read somewhere that adding new properties to an existing object carries a significant cost.
If you are looking to perform search operations, you should use an Object, as it gives better performance than searching an Array.
Lookup in an Object is O(1), while search in an Array is O(n). Hence, for better performance, you should use an Object.
Well, in the first example you will have to iterate over the array every time you use find.
In the second example you access a property directly, leading to O(1) execution time, always fixed no matter how many items are in there. So for better performance you ought to go with your second option.
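As a third option, a Map gives the same average O(1) lookup while keeping the keys as real numbers instead of coerced strings (a sketch reusing the row shape from the question):

const rows = new Map();

// Insertion: O(1) on average per row
rows.set(15659, { Id: 15659, FeederCode: 169, NmberOfRepetition: 1 });
rows.set(15627, { Id: 15627, FeederCode: 98, NmberOfRepetition: 2 });

// Lookup by Id: O(1) on average, no linear scan
console.log(rows.get(15659)); // { Id: 15659, FeederCode: 169, NmberOfRepetition: 1 }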
Reading from objects is faster and takes O(1) time, as @NikhilAggarwal just said.
But recently I was reading about V8 optimizations and wanted to check, so I used benchmark.js for confirmation.
Here are my findings:
Number of entries in obj or arr: 100,000
Fetch operations from Obj: 47,174,859 ops/sec
Search operations from Array: 612 ops/sec
If we reduce the entries, the throughput for the object stays almost the same, but for the array it rises sharply:
Number of entries in obj or arr: 100
Fetch operations from Obj: 44,264,116 ops/sec
Search operations from Array: 520,709 ops/sec
Number of entries in obj or arr: 10
Fetch operations from Obj: 46,739,607 ops/sec
Search operations from Array: 3,611,517 ops/sec
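A rough way to reproduce the effect without benchmark.js (a sketch; the absolute numbers will vary by engine and machine):

const n = 100000;
const obj = {};
const arr = [];
for (let i = 0; i < n; i++) {
  obj[i] = { Id: i };
  arr.push({ Id: i });
}

console.time('object access');
for (let i = 0; i < 1000; i++) obj[n - 1]; // direct property access
console.timeEnd('object access');

console.time('array find');
for (let i = 0; i < 1000; i++) arr.find(x => x.Id === n - 1); // worst-case linear scan
console.timeEnd('array find');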
I'm dealing with an array of "events" where the key of the array is the Unix Timestamp of the event. In other words let's assume we have the following array of event objects in JS:
var MyEventsArray=[];
MyEventsArray[1513957775]={lat:40.671978333333, lng:14.778661666667, eventcode:46};
MyEventsArray[1513957845]={lat:40.674568332333, lng:14.568661645667, eventcode:23};
MyEventsArray[1513957932]={lat:41.674568332333, lng:13.568661645667, eventcode:133};
and so on for thousands of rows...
Data are sent along with an Ajax call, encoded in JSON, to be processed with JS. When the data set is received, I have another Unix timestamp, let's say 1513957845, coming from another source, and I want to find the event that happened at that time. That part is easy: I just take the element of the array with the given index (the second in the list above).
Now the question: imagine that the given index is not found (say we are looking for UXTimestamp=1513957855) because it never existed in the array, but I want to take the closest index (in the example above I would take the element MyEventsArray[1513957845], as its index 1513957845 is the closest to 1513957855). What can I do to obtain this result?
My difficulty is in handling the array indexes, since when I receive the array I don't know where they begin.
How will the machine handle a situation like that?
Will the machine allocate (and waste) memory for dummy/empty elements placed between the rows, or does the engine have some ability to build its own index and optimize the space? In other words: is it safe to play with indexes as we're doing, or is it better to allocate the array as:
var MyEventsArray=[];
MyEventsArray['1513957775']={lat:40.671978333333, lng:14.778661666667, eventcode:46};
MyEventsArray['1513957845']={lat:40.674568332333, lng:14.568661645667, eventcode:23};
MyEventsArray['1513957932']={lat:41.674568332333, lng:13.568661645667, eventcode:133};
and so on for thousands of rows...
In this case the key and the index are clearly different, so here it would be possible to get the first element with MyArray[0] even though we don't know the key value. Is this approach more expensive in terms of memory (here we must save both index and key), or are the effects the same for the engine?
There is no difference between MyEventsArray[1513957775] and MyEventsArray['1513957775']. Deep down, array indexes are just property names, and property names are strings.
Regarding the question of whether these sparse indices will lead to millions of empty cells being allocated, no, that won't happen. Sparse arrays only store what you put in them, not empty space.
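A quick check illustrates this:

var sparse = [];
sparse[1513957775] = { eventcode: 46 };

console.log(sparse.length);              // 1513957776 (highest index + 1)
console.log(Object.keys(sparse).length); // 1 (only one slot actually stored)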
If you want to find the closest key, you can obtain an array of the keys, compute each key's distance from the target, and sort by that distance:
var MyEventsArray=[];
MyEventsArray[1513957775]={lat:40.671978333333, lng:14.778661666667, eventcode:46};
MyEventsArray[1513957845]={lat:40.674568332333, lng:14.568661645667, eventcode:23};
MyEventsArray[1513957932]={lat:41.674568332333, lng:13.568661645667, eventcode:133};
var target = 1513957855;
var closest = Object.keys(MyEventsArray)
  .map(k => ({ k, delta: Math.abs(target - k) }))
  .sort((a, b) => a.delta - b.delta)[0].k;
console.log(closest);
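Since Object.keys returns integer-like keys in ascending numeric order, you can also binary-search for the closest key in O(log n) instead of sorting by delta (a sketch reusing MyEventsArray from above):

function closestKey(keys, target) {
  let lo = 0, hi = keys.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (keys[mid] < target) lo = mid + 1;
    else hi = mid;
  }
  // keys[lo] is the first key >= target; its left neighbour may be closer
  if (lo > 0 && target - keys[lo - 1] < keys[lo] - target) lo -= 1;
  return keys[lo];
}

var keys = Object.keys(MyEventsArray).map(Number);
console.log(closestKey(keys, 1513957855)); // 1513957845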
You could use Array#some, which allows exiting the iteration once the delta gets greater than the previous delta.
var array = [];
array[1513957775] = { lat: 40.671978333333, lng: 14.778661666667, eventcode: 46 };
array[1513957845] = { lat: 40.674568332333, lng: 14.568661645667, eventcode: 23 };
array[1513957932] = { lat: 41.674568332333, lng: 13.568661645667, eventcode: 133 };
var key = 0,
search = 1513957855;
Object.keys(array).some(function (k) {
  if (Math.abs(k - search) > Math.abs(key - search)) {
    return true;
  }
  key = k;
});

console.log(key);
You can use Object.keys(MyEventsArray) to get an array of the keys (which are strangely expressed as strings); you could then iterate through that and find the closest match.
var MyEventsArray=[];
MyEventsArray[1513957775]={lat:40.671978333333, lng:14.778661666667, eventcode:46};
MyEventsArray[1513957845]={lat:40.674568332333, lng:14.568661645667, eventcode:23};
MyEventsArray[1513957932]={lat:41.674568332333, lng:13.568661645667, eventcode:133};
Object.keys(MyEventsArray)
["1513957775", "1513957845", "1513957932"]
Reference: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array