Apologies if this question has been asked before, but I'm finding it hard to word it in a way that would turn up earlier answers.
Q: Is it more efficient to have something like:
mass[128] = {0.0}
speed[128] = {0.0}
age[128] = {0}
Or:
properties[128] = {mass=0.0, speed=0.0, age=0}
And why? Is there a simple rule to always bear in mind (e.g., are a few larger arrays better than many small ones)?
I'm writing in JS for Chrome, and I read and write elements very often.
Thanks very much!
In general, the answer here is: Do what makes the most sense to let you write the simplest, clearest code; and worry about any performance or memory issue if you actually run into one.
Using an array of objects with named properties will likely be more efficient in terms of access time on a modern JavaScript engine, and will likely be less efficient in terms of memory use. In both cases, the difference will be incredibly minor and probably imperceptible.
If your values are numbers and your arrays can be of fixed size, you might use typed arrays, since they really are arrays (whereas normal arrays aren't¹ unless the JavaScript engine can apply that as an optimization). But there are downsides to typed arrays (their fixed size, for instance), so again, if and when it becomes necessary...
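A minimal sketch of what the typed-array route can look like, assuming fixed-size numeric data (the variable names are just illustrative):

```javascript
// Structure-of-arrays layout with typed arrays: each property gets its
// own fixed-size numeric array, zero-initialized by the constructor.
const COUNT = 128;
const mass  = new Float64Array(COUNT); // every element starts at 0.0
const speed = new Float64Array(COUNT);
const age   = new Uint32Array(COUNT);  // every element starts at 0

// Reading and writing works by index, as with normal arrays:
mass[3] = 1.5;
speed[3] = 2.25;
age[3] = 7;
console.log(mass[3], speed[3], age[3]); // 1.5 2.25 7
```

Note that typed arrays cannot grow; assigning past index `COUNT - 1` is silently ignored rather than extending the array.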
Example of an array of objects with named properties:
var properties = [
  {mass: 0, speed: 0, age: 0},
  {mass: 1, speed: 1, age: 1},
  // ...
];
If you're using ES2015 (you said you're using Chrome, so you can), you might make that a const:
const properties = [
  {mass: 0, speed: 0, age: 0},
  {mass: 1, speed: 1, age: 1},
  // ...
];
That only makes properties a constant, not the contents of the array it points to, so you can still add, remove, or amend entries as desired.
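A quick illustration of that distinction (a sketch; the values are arbitrary):

```javascript
// `const` freezes the binding, not the array it refers to.
const properties = [{mass: 0, speed: 0, age: 0}];

properties[0].mass = 10;                      // amending an entry: fine
properties.push({mass: 1, speed: 1, age: 1}); // adding an entry: fine
properties.pop();                             // removing an entry: fine

// properties = []; // reassigning the binding itself would throw a TypeError
console.log(properties.length, properties[0].mass); // 1 10
```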
¹ That's a post on my anemic little blog.
I'm trying to find the best way to group products by category and iterate over them in O(n) to get some insights from the categories.
I have the sample data:
[
  {
    "code": 25754,
    "description": "ADAPTADOR BLUETOOH USB RECEPTOR DE AUDIO P2",
    "price": 5.0,
    "stock": 10,
    "category": {
      "id": 1,
      "name": "Adapters"
    }
  },
  {
    "code": 20212,
    "description": "ADAPTADOR CONECTOR HDMI FEMEA L / FEMEA",
    "price": 2.8,
    "stock": 20,
    "category": {
      "id": 2,
      "name": "Eletronics"
    }
  }
]
I need to invert the relationship, having a list of categories with their corresponding products, and for that I wrote this solution:
function group_by_categories(products) {
  const categories = {}
  for (const product of products) {
    const { category, ...cleanedProduct } = product
    categories[category.id] = categories[category.id] || category
    categories[category.id].products = categories[category.id].products || []
    categories[category.id].products.push(cleanedProduct)
  }
  return Object.values(categories)
}
// returns
[
  { id: 1, name: 'Adapters', products: [ [Object] ] },
  { id: 2, name: 'Eletronics', products: [ [Object] ] }
]
But I am struggling in two things.
Is this the best way to invert the relationship? How can I replicate it in another language like C, where I have no objects to use as unique keys?
Once you have this type of data, is iterating over categories and products (to see how many items a category has, for example) necessarily O(n²)?
I appreciate all the help, even if you can only answer one question. Also, sorry for my bad English, I'm trying to be as clear as possible here.
So you have three issues: 1) use the code/id as keys instead, 2) use sets instead of arrays, and 3) use an appropriate data structure so you don't duplicate work you've already done.
You really just want to map the connections, not all the data. The code is probably unique to each product, so that is your key; the category id is likewise unique, so that's the key on the other side. The mapping structures should only concern themselves with the minimal amount of unique data. This may improve performance a fair bit as well, since the amount of data that gets copied is perhaps 1/10 to 1/100 of yours in terms of characters (though translating that exactly into time saved is difficult). Of course, that's a minor point compared to the O(n²) behavior, but it would likely speed things up by itself.
You should be using sets as well as a hash (object). The idea is O(1) lookup, O(1) size check, O(1) inclusion test (i.e., does category X contain code Y?), and O(1) mapping back to the original data (code Y is a product with this information).
Keep in mind that if you have access to an actual database, that is probably the better solution 95% or so of the time once you start wanting to do more complex queries. I would say you almost certainly should be using one; there's probably a database that will suit your needs.
That being said, if you don't need to go too far with your queries, what follows is what you need. It looks like you have to answer these three questions:
Given the code, what is the record (sample data)? This is just a simple code:product object, e.g. {25754: {code: ..., price: ..., stock: ...}, 20212: {code: ..., price: ..., stock: ...}}
Given a category, what are the codes in that category? Here you have a category_id:set(codes) lookup. It's very important to add codes to a set and not a list/array: sets have O(1) add, delete, and inclusion check, while lists/arrays have O(1) add, O(n) delete, and O(n) inclusion check.
Given a category, how many products are in it? This is just a data[category].size check (length instead of size in some languages).
Main thing is to use dictionaries and sets for performance.
Building the lookups is O(P), where P is the total number of products. Each query afterwards should be O(1).
To avoid O(n²) behavior, the lookup should only be calculated once. Should you need to add or remove products from categories, adjust the lookup itself rather than rebuilding it every time. That may mean storing it in a database, building it when the app first starts and keeping it in memory, or, if the number of products isn't too large, building it on each request (since it only uses the minimal amount of data, that can be practical). Generally speaking, even with 1000 products, building the category lookup and iterating over it should take milliseconds.
Basically, your category code should look like this after you've written out the methods:
...
category_lookup = build_category_lookup() # O(P)
product_lookup = build_product_lookup() # O(P)
...
products_for_category = category_lookup[category_id] # O(1)
products_count = products_for_category.length # O(1) | C: sizeof(products_for_category)
...
(Mostly code in ruby these days, so snake_case)
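Since the question is about JavaScript, here is one way the same lookups could be sketched with `Map` and `Set` (`buildLookups` is an illustrative name, and the data follows the sample shape above):

```javascript
// Build the lookups once, in O(P) over the products.
function buildLookups(products) {
  const productByCode = new Map();   // code -> full product record
  const codesByCategory = new Map(); // category id -> Set of codes
  const categoryById = new Map();    // category id -> {id, name}
  for (const product of products) {
    productByCode.set(product.code, product);
    categoryById.set(product.category.id, product.category);
    if (!codesByCategory.has(product.category.id)) {
      codesByCategory.set(product.category.id, new Set());
    }
    codesByCategory.get(product.category.id).add(product.code);
  }
  return { productByCode, codesByCategory, categoryById };
}

const products = [
  { code: 25754, price: 5.0, stock: 10, category: { id: 1, name: "Adapters" } },
  { code: 20212, price: 2.8, stock: 20, category: { id: 2, name: "Eletronics" } },
];

const { productByCode, codesByCategory } = buildLookups(products);
console.log(codesByCategory.get(1).has(25754)); // true  (O(1) inclusion test)
console.log(codesByCategory.get(1).size);       // 1     (O(1) size check)
console.log(productByCode.get(20212).stock);    // 20    (O(1) record lookup)
```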
I'm brand new to JavaScript (second week of learning), so I'm sorry if this is a stupid question! I've found two ways to do the thing I'm trying to do and need some advice on whether the second way is acceptable.
I know a class can have a property that takes an array, where the array lists multiple values for that one property. But if I have a class with properties that are groupable (they relate to the same aspect of the object), can I merge them into one property that takes an array rather than listing each as its own property?
So, if I have a Tent class with a mainUses property, I know I can pass a new instance an array for a tent that has multiple main uses.
class Tent {
  constructor(mainUses) {
    this.mainUses = mainUses;
  }
}

const myTent = new Tent(["Backpacking", "Mountaineering", "Bikepacking"]);
And if I also have a minimum pack weight, standard weight and maximum weight for the tent I can do:
class Tent {
  constructor(mainUses, minWeight, standardWeight, maxWeight) {
    this.mainUses = mainUses;
    this.minWeight = minWeight;
    this.standardWeight = standardWeight;
    this.maxWeight = maxWeight;
  }
}

const myTent = new Tent(["Backpacking", "Mountaineering", "Bikepacking"], "2kg", "2.2kg", "2.4kg");
But what about if I group the 3 weights into one 'super-property' (for want of a better description) like this:
class Tent {
  constructor(mainUses, weight) {
    this.mainUses = mainUses;
    this.weight = weight;
  }
}
And then pass the 'super-property' an array listing the 3 weights:
const myTent = new Tent (["Backpacking","Mountaineering","Bikepacking"],["2kg","2.2kg","2.4kg"]);
And then, so that they can still easily be accessed, add a comment to the class listing the indices to use when I make a new instance or want to access something.
/*minWeight [0], standardWeight [1], maxWeight[2]*/
So that myTent.minWeight would become myTent.weight[0].
Is there any reason I shouldn't do it this way? It appears to work for what I'm trying to do, but I'm worried that it's bad form, hacky, or wrong for some other reason.
I've tried to search for an example of it being used this way but I don't really know how to describe it succinctly enough to search effectively. The example I've used doesn't show it well but where I have lots of properties that I could group into a single array, it ends up being much simpler. I feel like it's semantically...off somehow?
You could do it like this, but if you have different properties in the same array it might be better to use an array of objects. In my opinion, that is better organized and more object-oriented, and it still holds up if you add more properties.
const myTent = [
  { item: 'Backpacking', weight: '2kg' },
  { item: 'Mountaineering', weight: '2.2kg' },
  { item: 'Bikepacking', weight: '2.4kg' },
];
Objects are generally used instead of arrays for this purpose because it is much easier to not get confused about which element means which thing when they are named.
// Weight property
{
  min: 2,
  standard: 2.2,
  max: 2.4,
  unit: 'kg',
}
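Putting that together with the asker's Tent class, a sketch of what the named-property version might look like (the field names are just one possible choice):

```javascript
class Tent {
  constructor(mainUses, weight) {
    this.mainUses = mainUses;
    this.weight = weight; // expected shape: {min, standard, max, unit}
  }
}

const myTent = new Tent(
  ["Backpacking", "Mountaineering", "Bikepacking"],
  { min: 2, standard: 2.2, max: 2.4, unit: "kg" }
);

// Named access instead of remembering that index 0 means "min":
console.log(myTent.weight.min + myTent.weight.unit); // "2kg"
```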
I have two arrays,
const pets = ["dog", "cat", "hamster"]
const wishlist = ["bird", "snake"]
I want to append wishlist to pets, which can be done using two methods,
Method 1:
pets.push.apply(pets,wishlist)
Which results in: [ 'dog', 'cat', 'hamster', 'bird', 'snake' ]
Method 2:
pets.push(...wishlist)
Which also results in: [ 'dog', 'cat', 'hamster', 'bird', 'snake' ]
Is there is a difference between these two methods in terms of performance when I deal with larger data?
Both Function.prototype.apply and the spread syntax may cause a stack overflow when applied to large arrays:
let xs = new Array(500000),
    ys = [], zs;

xs.fill("foo");

try {
  ys.push.apply(ys, xs);
} catch (e) {
  console.log("apply:", e.message);
}

try {
  ys.push(...xs);
} catch (e) {
  console.log("spread:", e.message);
}

zs = ys.concat(xs);
console.log("concat:", zs.length);
Use Array.prototype.concat instead. Besides avoiding stack overflows, concat has the advantage that it also avoids mutation. Mutations are considered harmful because they can lead to subtle side effects.
But that isn't a dogma. If you are within a function scope and perform mutations to improve performance and relieve garbage collection, that's fine, as long as the mutations aren't visible in the parent scope.
With push you are appending to the existing array, with spread operator you are creating a copy.
a=[1,2,3]
b=a
a=[...a, 4]
alert(b);
=> 1, 2, 3
a=[1,2,3]
b=a
a.push(4)
alert(b);
=> 1, 2, 3, 4
push.apply as well:
a=[1,2,3]
c=[4]
b=a
Array.prototype.push.apply(a,c)
alert(b);
=> 1, 2, 3, 4
concat makes a copy:
a=[1,2,3]
c=[4]
b=a
a=a.concat(c)
alert(b);
=> 1, 2, 3
Appending in place (by reference) is preferable, especially for larger arrays.
The spread syntax is a fast way of making a copy, which traditionally would be done with something like:
a=[1,2,3]
b=[]
a.forEach(i=>b.push(i))
a.push(4)
alert(b);
=> 1, 2, 3
If you need a copy, use the spread syntax; it's fast for this. Or use concat as pointed out by #ftor. If not, use push. Keep in mind, however, that there are some contexts where you can't mutate. Additionally, any of these gives you a shallow copy, not a deep copy; for a deep copy you'll need something like Lodash's cloneDeep. Read more here: https://slemgrim.com/mutate-or-not-to-mutate/
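To illustrate the shallow-copy caveat (a sketch; `structuredClone` or a library deep-clone would serve the same purpose as the JSON round-trip used here):

```javascript
const original = [{ name: "dog" }, { name: "cat" }];

// Spread copies the outer array, but the inner objects are shared:
const shallow = [...original];
shallow[0].name = "bird";
console.log(original[0].name); // "bird" -- the nested object was shared

// A deep copy keeps the nested objects independent (this JSON round-trip
// only works for JSON-safe data; structuredClone or Lodash's cloneDeep
// are more general):
const deep = JSON.parse(JSON.stringify(original));
deep[1].name = "snake";
console.log(original[1].name); // "cat"
```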
For appending to a large array, the spread syntax is vastly faster. I don't know how #ftor / #Liau Jian Jie drew their conclusions; possibly bad tests.
Chrome 71.0.3578.80 (Official Build) (64-bit), FF 63.0.3 (64-bit), & Edge 42.17134.1.0
It makes sense, since concat() makes a copy of the array and doesn't even attempt to reuse the same memory.
The claim about "mutations" doesn't seem to be based on anything; if you're overwriting your old array anyway, concat() has no benefit.
The only reason not to use ... would be stack overflow; I agree with the other answers that for huge arrays you can't use ... or apply.
But even then, a plain for loop calling push() is roughly twice as fast as concat() in all browsers and won't overflow.
There's no reason to use concat() unless you need to keep the old array.
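A sketch of that loop-based append, which avoids both the copy and the argument-count limit (the sizes here are arbitrary):

```javascript
const pets = ["dog", "cat", "hamster"];
const wishlist = new Array(500000).fill("bird"); // large enough to break apply/spread

// One push call per element: no giant argument list, so no overflow,
// and pets is extended in place rather than copied.
for (let i = 0; i < wishlist.length; i++) {
  pets.push(wishlist[i]);
}
console.log(pets.length); // 500003
```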
Apart from what ftor pointed out, Array.prototype.concat is, on average, at least 1.4x faster than the array spread operator.
See results here:
https://jsperf.com/es6-add-element-to-create-new-array-concat-vs-spread-op
You can run the test on your own browser and machine here: https://www.measurethat.net/Benchmarks/Show/579/1/arrayprototypeconcat-vs-spread-operator
Interpreting the question as which is more performant in general, and using .push() only as an example, it looks like apply is [only slightly] faster (except in MS Edge; see below).
Here's a performance test on just the overhead on calling a function dynamically for the two methods.
function test() { console.log(arguments[arguments.length - 1]); }
var using = (new Array(200)).fill(null).map((e, i) => (i));
test(...using);
test.apply(null, using)
I tested in Chrome 71.0.3578.80 (Official Build) (64-bit), FF 63.0.3 (64-bit), and Edge 42.17134.1.0, and these were my results after running them a few times on their own; the initial results were always skewed one way or the other.
As you can see, Edge seems to have a better implementation for apply than it does for ... (but don't try to compare the results across browsers; we can't tell from this data whether Edge has a better apply than the others, a worse ..., or a bit of both).
Given this, unless you're targeting Edge specifically, I'd say go with ..., as it reads more cleanly, especially since with apply you'd otherwise need to pass a this value back in.
It's possible that it's dependent on the size of the array, too, so like #Jaromanda X said, do your own testing and change 200 if you need to really make sure.
The other answers interpreted the question as which would be better for .push() specifically, got caught up in the 'problem' being solved, and simply recommended .concat() instead: basically the standard "why are you doing it that way?", which can irk people coming from Google who aren't looking for solutions involving .push() (say, for Math.max, or their own custom function).
user6445533's answer was accepted, but I feel the test case is a bit weird; that doesn't seem like how you would usually use the spread syntax.
Why not just do:
let newPets = [...pets, ...wishlist]
It will not hit the stack overflow issue described.
As Hashbrown mentioned, it may give you a performance benefit as well.
*I'm also in the middle of learning ES6. Sorry if I'm wrong.
If you are using ES2015, then the spread syntax is the way to go. With it, your code looks less verbose and much cleaner than with the other approach. When it comes to speed, I believe there is little to choose between the two.
I have an array declared as
var arr = new Array();
Then I have an array of objects returned by the server, and each object in this array always has three fields. I have to loop through it and add to the arr array conditionally.
Since arr is not pre-allocated, this hurts performance when the main array is large.
Is there any way I can pre-allocate the arr array after I get the response, so that I can avoid this performance issue?
Also, how do I get the size of an object?
Thanks.
Suppose you have 10 objects, and you are going to pass three values from each object to an array. You can initialize your array with length 30 (10*3) by passing the integer 30 to the Array constructor as such:
var numObjects = 10;
var myArray = new Array(3*numObjects);
Please refer to my jsperf benchmark for proof of the performance gained. In short, pre-sizing your array is ~25% faster in Firefox 38, ~81% faster in Chrome 42, and ~16% faster in Internet Explorer 11. Numbers will vary with the machine of whoever runs the benchmarks, but the trend remains consistent: the best performance comes from pre-sizing your arrays.
http://jsperf.com/array-growth-dynamic-vs-preset
A more thorough discussion of this topic has occured here on SO at
How to initialize an array's length in javascript?
Thank whatever deity you believe in (or not) that Javascript does not have any direct access to memory allocation. That would have been truly horrible considering the quality of much of the JS littering the interwebs.
JavaScript will by itself allocate memory for arrays on creation and reclaim it when they are garbage collected. Pre-filling an array will have no positive effect on memory usage or performance.
Edit: I was wrong. See #ThisClark's answer.
MDN has a pretty good article on how memory management and GC work in JavaScript.
You can filter your array using the filter function, as in the example below:
var result = [
  { age: 15 },
  { age: 21 },
  { age: 25 }
];

function isGreaterThan20(obj) {
  return obj.age > 20;
}

var arr = result.filter(isGreaterThan20);
// arr becomes [{ age: 21 }, { age: 25 }]
If you need to pre-allocate an array with a defined size, use new Array(size).
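A small sketch of that, pre-sizing with `new Array(size)` and then filling by index (the size and values are arbitrary):

```javascript
const size = 1000;
const arr = new Array(size); // pre-allocated with length 1000

// Fill every slot by index; the array never has to grow during the loop.
for (let i = 0; i < size; i++) {
  arr[i] = i * 2;
}
console.log(arr.length, arr[size - 1]); // 1000 1998
```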
I have looked everywhere for this, but nobody seems to use associative arrays inside objects. Here is my object:
var player = {
  Level: 1,
  Stats: [{Defense: 5}, {Attack: 1}, {Luck: 3}]
};
I need to access the values of Defense, Attack, and Luck, but how?
I have tried this but it hasn't worked:
player.Stats.Defense
player.Stats.Attack
player.Stats.Luck
Any ideas? Thanks!
P.S. Does it make a difference that I am using jQuery?
You've said you're in control of the structure. If so, change it to this:
var player = {
  Level: 1,
  Stats: {Defense: 5, Attack: 1, Luck: 3}
};
Note that Stats is now an object, not an array. Then you can access that information the way you tried to: player.Stats.Defense and so on. There's no reason to make Stats an array of dissimilar objects; that just makes your life difficult.
You've used the term "associative array" which makes me think you have a PHP background. That term isn't commonly used in the JavaScript world, to avoid confusion with arrays. "Object," "map," or "dictionary" are the terms usually used, probably in that order, all referring to objects ({}). Probably nine times out of ten, if you would use an associative array for something in PHP, you'd use an object for it in JavaScript (in addition to using objects for the same sort of thing you use objects for in PHP).
P.S. Does it make a difference that I am using jQuery?
No, this is language-level rather than library-level, but it's a perfectly reasonable question.
(Making this a CW answer because it's basically what all the comments on the question are saying.)
Since Stats: [{Defense: 5}, {Attack: 1}, {Luck: 3}] is an array of objects, you need to do:
player.Stats[0].Defense
player.Stats[1].Attack
player.Stats[2].Luck
Here player.Stats is an array of objects, so you'll have to use an index to access those objects.
var player = {
  Level: 1,
  Stats: [{Defense: 5}, {Attack: 1}, {Luck: 3}]
};
Use these :
player.Stats[0].Defense
player.Stats[1].Attack
player.Stats[2].Luck