What is the difference between array.fill() and array.apply() - javascript

Array.fill()
Array(10).fill(0);
Array.apply()
Array.apply(0, new Array(10));
Both seem to do much the same thing. So what is the difference between them, and which one is better for performance?
I got a pretty good answer, but here is a follow-up.
Update:
Array.fill()
console.log(Array(10).fill(undefined));
Array.apply()
console.log(Array.apply(undefined, new Array(10)));
Now both seem to do the same thing. So what is the difference between them, and which one is better for performance?

Both seem to do much the same thing.
No, they aren't. The first fills the array with the value 0. The second fills it with undefined. Note that the 0 you're passing in the second example is completely ignored; the first argument to Function#apply sets what this is during the call, and Array with multiple arguments doesn't use this at all, so you could pass anything there.
Example:
var first = Array(10).fill(0);
console.log(first);
var second = Array.apply(0, new Array(10));
console.log(second);
So what is the difference between them...
See above. :-) Also, see notes below on the follow-up question.
Subjectively: Array.fill is clearer (to me). :-)
...and which one is best for performance?
It's irrelevant. Use the one that does what you need to do.
In a follow-up, you've asked the difference between
Array(10).fill(undefined)
and
Array.apply(undefined, new Array(10))
The end result of them is the same: An array with entries whose values are undefined. (The entries are really there, e.g. .hasOwnProperty(0) will return true. As opposed to new Array(10) on its own, which creates a sparse array with length == 10 with no entries in it.)
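The dense-versus-sparse distinction above can be checked directly; a quick sketch (variable names are my own):

```javascript
// Both calls produce a dense array of 3 real entries whose values are
// undefined, while a bare new Array(3) is sparse: length 3, no entries.
const filled = Array(3).fill(undefined);
const applied = Array.apply(undefined, new Array(3));
const sparse = new Array(3);

console.log(filled.hasOwnProperty(0));  // true: the entry exists
console.log(applied.hasOwnProperty(0)); // true: the entry exists
console.log(sparse.hasOwnProperty(0));  // false: a hole, not an entry
console.log(sparse.length);             // 3: length is set regardless
```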
In terms of performance, it's extremely unlikely it matters. Either is going to be plenty fast enough. Write what's clearest and works in your target environments (Array.fill was added in ES2015, so doesn't exist in older environments, although it can easily be polyfilled). If you're really concerned about the difference in performance, write your real-world code both ways and profile it.
Finally: As far as I know there's no particular limit on the size of the array you can use with Array.fill, but Function#apply is subject to the maximum number of arguments for a function call and the maximum stack size in the JavaScript platform (which could be large or small; the spec doesn't set requirements). See the MDN page for more about the limit, but for instance Array.apply(0, new Array(200000)) fails on V8 (the engine in Chrome, Chromium, and Node.js) with a "Maximum call stack size exceeded" error.

I did a test for that:
const calculateApply = function(items) {
    console.time('calculateApply');
    Array.apply(undefined, new Array(items));
    console.timeEnd('calculateApply');
};
const calculateFill = function(items) {
    console.time('calculateFill');
    Array(items).fill(undefined);
    console.timeEnd('calculateFill');
};
const getTime = function(items) {
    console.log(`for ${items} items the time of fill is:`);
    calculateFill(items);
    console.log(`for ${items} items the time of apply is:`);
    calculateApply(items);
};
getTime(10);
getTime(100000);
getTime(100000000);
and here is the result:
for 10 items the time of fill is:
calculateFill: 0.481ms
for 10 items the time of apply is:
calculateApply: 0.016ms
for 100000 items the time of fill is:
calculateFill: 2.905ms
for 100000 items the time of apply is:
calculateApply: 1.942ms
for 100000000 items the time of fill is:
calculateFill: 6157.238ms
for 100000000 items the time of apply is:
/Users/n128852/Projects/pruebas/index.js:3
Array.apply(0, new Array(items));
^
RangeError: Maximum call stack size exceeded
https://www.ecma-international.org/ecma-262/6.0/#sec-function.prototype.apply
https://www.ecma-international.org/ecma-262/6.0/#sec-array.prototype.fill
Here you have the relevant spec sections. As you can read there, apply has to build an argument list and push every element onto the call stack at once, which is why it hits the engine's stack-size limit for large arrays. fill, conversely, is iterative and has no such limit.

Related

How does V8 optimise the creation of very large arrays?

Recently, I had to work on optimising a task that involved the creation of really large arrays (~ 10⁸ elements).
I tested a few different methods, and, according to jsperf, the following option seemed to be the fastest.
var max = 10000000;
var arr = new Array(max);
for (let i = 0; i < max; i++) {
    arr[i] = true;
}
Which was ~ 85% faster than
var max = 10000000;
var arr = [];
for (let i = 0; i < max; i++) {
    arr.push(true);
}
And indeed, the first snippet was much faster in my actual app as well.
However, my understanding was that the V8 engine was able to perform optimised operations on arrays with the PACKED_SMI_ELEMENTS elements kind, as opposed to arrays of HOLEY_ELEMENTS.
So my question is the following:
if it's true that new Array(n) creates an array that's internally marked with HOLEY_ELEMENTS, (which I believe is true) and
if it's true that [] creates an array that's internally marked with PACKED_SMI_ELEMENTS (which I'm not too sure is true)
why is the first snippet faster than the second one?
Related questions I've been through:
Create a JavaScript array containing 1...N
Most efficient way to create a zero filled JavaScript array?
V8 developer here. The first snippet is faster because new Array(max) informs V8 how big you want the array to be, so it can allocate an array of the right size immediately; whereas in the second snippet with []/.push(), the array starts at zero capacity and has to be grown several times, which includes copying its existing elements to a new backing store.
https://www.youtube.com/watch?v=m9cTaYI95Zc is a good presentation but probably should have made it clearer how small the performance difference between packed and holey elements is, and how little you should worry about it.
In short: whenever you know how big you need an array to be, it makes sense to use new Array(n) to preallocate it to that size. When you don't know in advance how large it's going to be in the end, then start with an empty array (using [] or new Array() or new Array(0), doesn't matter) and grow it as needed (using a.push(...) or a[a.length] = ..., doesn't matter).
Side note: your "for loop with new Array() and push" benchmark creates an array that's twice as big as you want.
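The two recommendations above can be sketched as follows (function names are my own):

```javascript
// Preallocate when the final size is known: the engine can size the
// backing store once instead of growing and copying it repeatedly.
function buildKnownSize(n) {
    const arr = new Array(n);
    for (let i = 0; i < n; i++) arr[i] = true;
    return arr;
}

// Start empty and grow when the final size is unknown in advance.
function buildUnknownSize(source) {
    const arr = [];
    for (const item of source) arr.push(item);
    return arr;
}

console.log(buildKnownSize(3));        // [ true, true, true ]
console.log(buildUnknownSize([1, 2])); // [ 1, 2 ]
```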

What is the runtime complexity of this function?

I believe it's quadratic O(n^2) but not 100% sure due to uncertainty of how the .filter() and .map() operations work in JavaScript.
The big question I have is whether the entire filter() operation completes before starting a single map() operation, or if it's smart enough to perform the map() operation while it's already iterating within the filter() operation.
The method
function subscribedListsFromSubscriptions(subscriptions: Subscription[]) {
    return new Set(subscriptions.filter((list) => {
        return list.subscribed;
    }).map((list) => {
        return list.list_id;
    }));
}
Example input data
let subscriptions = [{
    list_id: 'abc',
    subscribed: false
}, {
    list_id: 'ghi',
    subscribed: false
}];
From what I see
It appears to be:
filter() for each element of subscriptions - time n
map() for each remaining element - time n (at maximum)
new Set() for each remaining element - time n (at maximum)
For the new Set() operation, I'm guessing it's creating a new object and adding each element to the created instance.
If there were many duplicates in the data, one might expect the efficiency to increase. But we don't expect many duplicates in the data, and from my understanding of 'Big O', the worst case is what counts.
From this analysis, I'm expecting the time complexity to be either O(n^2) or O(n^3). But as stated, I'm unsure of how to interpret it for certain.
Any help in this would be greatly appreciated. Thanks in advance!
I think your interpretation of the order of operations is correct: filter, then map, then create a Set.
However, in order for this algorithm to reach O(n^2), you would have to create a nested loop, for example:
create the Set for each element of the array
compare each element with each other element in the array.
This is not the case here. In the worst-case scenario (no duplicates), the algorithm will iterate the input array three times, meaning O(3n) complexity, which is still linear (O(n)), not quadratic.
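For completeness, the three passes can be fused into one loop. This is still O(n); it merely avoids building the two intermediate arrays. A sketch (dropping the TypeScript annotation):

```javascript
// Single pass: filter, map, and Set insertion folded into one loop.
function subscribedListsSinglePass(subscriptions) {
    const ids = new Set();
    for (const list of subscriptions) {
        if (list.subscribed) ids.add(list.list_id);
    }
    return ids;
}

const subs = [
    { list_id: 'abc', subscribed: false },
    { list_id: 'ghi', subscribed: true }
];
console.log(subscribedListsSinglePass(subs)); // Set { 'ghi' }
```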

what is use of adding length property to objects in javascript?

Arrays support a length property, as strings do, but plain objects don't inherently have one.
Suppose we add a length property as below:
{"a":0,"b":0,length:2}
What is the use case for the above code?
An object doesn't have a length, per se. It depends on what this length represents.
For example, in the code you posted, it's not immediately obvious what the length of 2 is counting, but it might make sense in the context of what that object actually represents.
Here are both methods of getting the length.
You'll notice that if you keep track of your own length, the display code is a bit shorter, but you need a whole function to add a new key, and you'd even need another function to remove one.
Object.keys converts the keys into an array, from which we can read the length; this, of course, takes slightly longer since it has to do the conversion.
I generally run with the assumption that a computer is going to make fewer mistakes than me, so, if I can, I should offload as much work as possible onto it.
// initial set up
let obj1 = {"a":0,"b":0};
let obj2 = {"a":0,"b":0,length:2};
// initial lengths
console.log("obj1: "+Object.keys(obj1).length);
console.log("obj2: "+obj2.length);
// adding the standard way
obj1["c"] = 0;
// adding while keeping length in sync
function addObj2(key, value) {
    obj2[key] = value;
    obj2.length++;
}
addObj2("c",0);
console.log("--");
// new lengths
console.log("obj1: "+Object.keys(obj1).length);
console.log("obj2: "+obj2.length);
I hope this makes sense.

Set of pairs of numbers in Javascript

ES6 has a new Set data structure for storing sets of unique objects. However it is based on object references as opposed to value comparisons. As far as I can tell this makes it impossible to have a set of pairs of numbers without stringifying.
For example, typing in Chrome's console (needs Chrome 38+):
> var s = new Set();
< undefined
> s.add([2, 3]);
< Set {[2, 3]}
> s.has([2, 3])
< false <--- was hoping for 'true'
This appears to be by design: since I passed a different array of [2, 3] to has(), it returns false, because although the contents are the same it only compares object references, and I allocated a new and different array to pass to has(). I would need to store a reference to the original array I passed to add() in order to check with has(), but this is not always possible. For example, if the number pairs represent co-ordinates, I might need to check if the set has [obj.x, obj.y], but this will always return false since it allocates a new array.
The workaround is to stringify the arrays and key on strings like "2, 3" instead. However in something performance-sensitive like a game engine, it is unfortunate if every set access needs to make a string allocation and convert and concatenate number strings.
Does ES6 provide any feature to solve this problem without stringifying, or is there any feature on the horizon with ES7 that could help as well?
It is not perfectly optimal for very compute-intensive tasks, but you could build a key with template literals for a more idiomatic approach that still maintains efficiency, e.g.
set.add(`${x}_${y}`);
and membership testing:
set.has(`${i}_${j}`);
(note I've purposely avoided using , as a delimiter since it can be confusing in some fields such as finance).
Another thing that could be done is using the width of the first dimension to flatten the pair into a single number, if you know the bounds, e.g.
set.has(x + y * width);
or if you're working with small numbers in general (not exceeding the 10,000s) and don't know what the max width would be, you could use an arbitrary very large number. This is slightly less optimal but still better than string concatenation:
set.has(x + y * Math.floor(Math.sqrt(Number.MAX_SAFE_INTEGER)));
Again, these are not perfect solutions, since they do not work with very large numbers where x + y * width may exceed Number.MAX_SAFE_INTEGER, but they are some things in your toolbox without needing to know a fixed array size.
[Super late here, but since ES7 did not fix this after all, and I noticed these were not specifically mentioned for others weighing the pros and cons: two approaches (the first explicitly does not solve the problem; the second possibly does).]
As you've noted [2, 3] === [2, 3] is false, meaning you can't use Set like this; however, is Set really the best option for you?
You may find that using a two-level data structure like this will be better for you
var o = {};
function add(o, x, y) {
    if (!o[x]) o[x] = {};
    o[x][y] = true;
}
function has(o, x, y) {
    return !!(o[x] && o[x][y]);
}
function del(o, x, y) {
    if (!o[x]) return;
    delete o[x][y];
    // maybe delete `o[x]` if Object.keys(o[x]).length === 0
}
You could do a similar structure with a Map pointing to Sets if you wanted to use ES6
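That Map-of-Sets variant might look like this (a sketch; the function names mirror the plain-object version above):

```javascript
const pairs = new Map(); // outer Map: x -> Set of y values

function add(m, x, y) {
    if (!m.has(x)) m.set(x, new Set());
    m.get(x).add(y);
}

function has(m, x, y) {
    return m.has(x) && m.get(x).has(y);
}

function del(m, x, y) {
    const ys = m.get(x);
    if (!ys) return;
    ys.delete(y);
    if (ys.size === 0) m.delete(x); // drop empty inner sets
}

add(pairs, 2, 3);
console.log(has(pairs, 2, 3)); // true
del(pairs, 2, 3);
console.log(has(pairs, 2, 3)); // false
```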
is there any feature on the horizon with ES7 that could help as well?
There is a proposal in ECMAScript 7 to add Value Objects. Basically, it's a new immutable data type where identical value objects are compared by value, not by reference.
Depending on what kinds of value objects are implemented and/or if custom ones can be defined, they may solve this issue.

Can I make a "Virtual Array" in JavaScript?

I'm calling a JavaScript function that wants an array of things to display. It displays a count, and displays the items one by one. Everything works when I pass it a normal JavaScript array.
But I have too many items to hold in memory at once. What I'd like to do, is pass it an object with the same interface as an array, and have my method(s) be called when the function tries to access the data. And in fact, if I pass the following:
var featureArray = {length: count, 0: func(0)};
then the count is displayed, and the first item is correctly displayed. But I don't want to assign all the entries, or I'll run out of memory. And the function currently crashes when the user tries to display the second item. I want to know when item 1 is accessed, and return func(1) for item 1, and func(2) for item 2, etc. (i.e., delaying the creation of the item until it is requested).
Is this possible in JavaScript?
If I understand correctly, this would help:
var object = {length: count, data: function (index) {
    // create and return the item for this index
}};
Then, instead of doing array[1], array[2], et cetera, you'd do object.data(1), object.data(2), and so on.
Since there seems to be a constraint that the data must be accessed using array indexing via normal array indexing arr[index] and that can't be changed, then the answer is that NO, you can't override array indexing in Javascript to change how it works and make some sort of virtual array that only fetches data upon demand. It was proposed for ECMAScript 4 and rejected as a feature.
See these two other posts for other discussion/confirmation:
How would you overload the [] operator in Javascript
In javascript, can I override the brackets to access characters in a string?
The usual way to solve this problem would be to switch to using a method such as .get(n) to request the data and then the implementor of .get() can virtualize however much they want.
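A .get(n)-style virtual collection might look like this (a sketch with invented names; items are computed on demand rather than stored):

```javascript
// Array-like object that exposes length and a get(i) accessor, so huge
// "arrays" never have to materialize their elements in memory.
function makeVirtualList(count, func) {
    return {
        length: count,
        get(i) {
            if (i < 0 || i >= count) return undefined;
            return func(i); // computed on demand, never stored
        }
    };
}

const list = makeVirtualList(1e9, i => ({ id: i }));
console.log(list.length);    // 1000000000
console.log(list.get(2).id); // 2
```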
P.S. Others indicate that you could use a Proxy object for this in Firefox (not supported in other browsers as far as I know), but I'm not personally familiar with Proxy objects, as their use seems rather limited to code that only targets Firefox right now.
Yes, generating items on the go is possible. You will want to have a look at Lazy.js, a library for producing lazily computed/loaded sequences.
However, you will need to change your function that accepts this sequence, it will need to be consumed differently than a plain array.
If you really need to fake an array interface, you'd use Proxies. Unfortunately, it is only a harmony draft and currently only supported in Firefox's JavaScript 1.8.5.
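(Proxies have since been standardized in ES2015 and are supported in all modern engines.) A minimal sketch of a lazily computed, array-like Proxy, assuming every element can be derived from its index:

```javascript
// Proxy that intercepts numeric property reads and computes the element
// on demand, so arr[i] works without storing anything.
function makeVirtualArray(length, func) {
    return new Proxy({}, {
        get(target, prop, receiver) {
            // Symbols (e.g. Symbol.iterator probes) can't be coerced to numbers.
            if (typeof prop === 'symbol') return Reflect.get(target, prop, receiver);
            if (prop === 'length') return length;
            const i = Number(prop);
            if (Number.isInteger(i) && i >= 0 && i < length) return func(i);
            return Reflect.get(target, prop, receiver);
        }
    });
}

const va = makeVirtualArray(1e6, i => i * 2);
console.log(va.length); // 1000000
console.log(va[5]);     // 10
```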
Assuming that the array is only accessed in an iteration, i.e. starting with index 0, you might be able to do some crazy things with getters:
var featureArray = (function(func) {
    var arr = {length: 0};
    function makeGetter(i) {
        arr.length = i + 1;
        Object.defineProperty(arr, i, {
            get: function() {
                var val = func(i);
                Object.defineProperty(arr, i, {value: val});
                makeGetter(i + 1);
                return val;
            },
            configurable: true,
            enumerable: true
        });
    }
    makeGetter(0);
    return arr;
}(func));
However, I'd recommend avoiding that and instead switching the library that is expecting the array. This solution is very error-prone if anything other than accessing its indices in order is done with the "array".
Thank you to everyone who has commented and answered my original question - it seems that this is not (currently) supported by JavaScript.
I was able to get around this limitation and still do what I wanted. It uses an aspect of the program that I did not mention in my original question (I was trying to simplify it), so it is understandable that others couldn't recommend this. That is, it doesn't technically answer my original question, but I'm sharing it in case others find it useful.
It turns out that one member of the object in each array element is a callback function. That is (using the terminology from my original question), func(n) is returning an object, which contains a function in one member, which is called by the method being passed the data. Since this callback function knows the index it is associated with (at least, when being created by func(n)), it can add the next item in the array (or at least ensure that it is already there) when it is called. A more complicated solution might go a few ahead, and/or behind, and/or could cleanup items not near the current index to free memory. This all assumes that the items will be accessed consecutively (which is the case in my program).
E.g.,
1) Create a variable that will stay in scope (e.g., a global variable).
2) Call the function with an object like I gave as an example in my original question:
var featureArray = {length: count, 0: func(0)};
3) func() can be something like:
function func(r) {
    return {
        f: function() {
            featureArray[r + 1] = func(r + 1);
            DoOtherStuff(r);
        }
    };
}
Assuming that f() is the member with the function that will be called by the external function.
