Why is nulling a value faster than undefining it? (JavaScript V8)

Heyho, I experimented a bit with Maps vs. objects in JavaScript and found that nulling a property is much faster (about 2.5 times) than setting the value to undefined, or than simply deleting the property:
https://www.measurethat.net/Benchmarks/Show/15592/0/compare-nulling-undefining-and-deleting-of-javascript-o
If you are wondering why I always create the Map and/or the JavaScript object inside each test case: I did it so that every test has the same "overhead".
EDIT:
I also have this one, which contains a logical mistake (it sets a value from null to null, or from undefined to undefined):
https://www.measurethat.net/Benchmarks/Show/15587/0/delete-vs-null-vs-undefined-vs-void-0-vs-objectcreatenu
and here the result is even more extreme: about 4 million ops/sec (null to null) vs. 4 thousand ops/sec (undefined to undefined).
I know the test isn't really relevant in practice; I'm asking out of pure interest :).
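For reference, the three benchmark cases essentially boil down to these operations (a simplified sketch, not the exact benchmark code):

const obj = { a: 1 };
obj.a = null;       // "nulling"
obj.a = undefined;  // "undefining"
delete obj.a;       // "deleting"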

(V8 developer here.)
Microbenchmarks are misleading! Don't waste your time on them.
Setting an object property to null or to undefined has the same speed. Guaranteed.
Considering the significant difference that your test reproducibly shows, I got curious and dug in a bit. Turns out measurethat.net's framework code is... let's say... far from perfect: it uses "direct eval" in a way that introduces huge performance artifacts for accessing globals (that's one of several reasons why "never use direct eval!" is common advice), and one of JavaScript's historical accidents is that while null is a reserved keyword, undefined is just a global variable. See here for how ridiculous it gets:
https://www.measurethat.net/Benchmarks/Show/15627/0/accessing-null-vs-undefined
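Here's a self-contained sketch of that historical accident, independent of any benchmark framework: null is a literal, while undefined is an ordinary binding that has to be looked up and can even be shadowed:

function demo() {
  // `null` is a keyword/literal, so `obj.a = null` compiles to a constant store.
  // `undefined`, on the other hand, is just a property of the global object,
  // and it isn't even reserved, so it can be shadowed:
  var undefined = "oops";
  // Direct eval shares this scope, so looking up `undefined` walks the scope
  // chain and finds the shadowing local variable:
  console.log(eval("undefined")); // logs "oops"
}
demo();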
If you know what's going on, you can sidestep the issue:
https://www.measurethat.net/Benchmarks/Show/15635/0/null-vs-undefined-iiffe
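The trick behind that second benchmark is roughly the following pattern (a sketch, not the exact code): wrap the snippet in an IIFE so that undefined becomes a cheap local binding instead of a lookup that has to go through the eval'd scope to the global object:

// The parameter `undefined` receives no argument, so inside the IIFE it really is
// undefined, but it's now a fast local binding rather than a global lookup:
(function (obj, undefined) {
  obj.a = undefined;
})({ a: 1 });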
But keep in mind that in a real app that doesn't use eval, you wouldn't see any of these differences; you're just playing games with a bad benchmark runner there!
Another thing that's really weird on that site is that when editing test cases and "validating" them, they produce very different results (I've seen 100x!) compared to "running" them after submitting them.
In short, I wouldn't trust any of the reported numbers on that site.
All that said, it makes sense that delete a.a is slower, because deleting an object property is a more complicated operation than overwriting an existing property's value. Importantly (and your benchmark doesn't show this, because it's too simple!), deleting properties often has non-local effects, i.e. it's not the deletion itself that's necessarily slow, but other parts of your app might get slowed down as side effects. We generally recommend not to use the delete keyword at all. Deleting entries in Maps is a different story: that's perfectly fine, as Maps are built to support that efficiently.
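To make that last contrast concrete, a small illustrative sketch (names are made up, not from the benchmark):

const obj = { a: 1, b: 2 };
delete obj.a;    // may switch the object to a slower internal representation, and in a
                 // larger app the slowdown can show up in other places that use such objects

const map = new Map([["a", 1], ["b", 2]]);
map.delete("a"); // Maps are built for this; no comparable side effects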

JavaScript objects also have to account for which keys are "enumerable". Deleting a key removes it from the list of enumerable keys. This is similar to removing an element from an array, which is a slower operation than simply overwriting a value with null.
I'm not sure why setting the value to undefined is slower. I would assume it's for similar reasons, since a missing key also reads back as undefined.
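For example, the enumerability difference is easy to see (a small sketch):

const obj = { a: 1, b: 2 };
obj.a = null;
console.log(Object.keys(obj)); // ["a", "b"] -- "a" is still an enumerable key
delete obj.b;
console.log(Object.keys(obj)); // ["a"]      -- "b" is gone from the key list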

I would say it's because delete actually has to remove the property, whereas assigning null just empties its content.
Comparison
delete removes the property, making it slower, but it frees some memory.
Nullifying is faster, but the property, while null, still exists.
An object with 3000 properties that are all null takes more space in RAM than an empty object.
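A quick sketch of the "still exists" part:

const obj = { a: 1 };
obj.a = null;
console.log("a" in obj); // true  -- the property still exists (and still takes up space)
delete obj.a;
console.log("a" in obj); // false -- the property is gone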
In conclusion
While performance and the technical aspects of JavaScript are very interesting, and performance in general shouldn't be completely ignored, this difference is beyond unnoticeable and you shouldn't care about it in real-life scenarios.
Note:
This is my personal understanding and shouldn't be taken as official information. Feel free to correct me in the comments.

Related

Why is getting from Map slower than getting from object?

I'm considering migrating my state management layer to using Map versus using a standard object.
From what I've read, Map is effectively a hash table, whereas objects use hidden classes under the hood. The general advice is that when properties are likely to be added or removed dynamically, it's more efficient to use a Map.
I set up a little test and, to my surprise, accessing values in the Object version was faster.
https://jsfiddle.net/mfbx9da4/rk4hocwa/20/
The article also mentions fast and slow properties. Perhaps the reason my code sample in test1 is so fast is that it is using fast properties? This seems unlikely, as the object has 100,000 keys. How can I tell whether the object is using fast properties or dictionary lookup? Why would the Map version be slower?
And yes, in practice, looks like a premature optimization, root of all evil ... etc etc. However, I'm interested in the internals and curious to know of best practices of choosing Map over Object.
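(Side note, assuming a Node.js environment: one way to inspect whether V8 considers an object to have "fast" properties is the --allow-natives-syntax flag together with the %HasFastProperties intrinsic. A rough, illustrative sketch:)

// Run with: node --allow-natives-syntax fast-props.js  (filename is just an example)
const big = {};
for (let i = 0; i < 100000; i++) {
  big["key" + i] = i;                    // add many properties dynamically
}
// %HasFastProperties is a V8-internal intrinsic, only parseable behind the flag above:
console.log(%HasFastProperties(big));    // whether V8 still uses "fast" properties for `big`

const small = { a: 1, b: 2 };
delete small.a;                          // deleting a non-last property typically
console.log(%HasFastProperties(small));  // switches the object to dictionary mode (false)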
(V8 developer here.)
Beware of microbenchmarks, they are often misleading.
V8's object system is implemented the way it is because in many cases it turns out to be very fast -- as you can see here.
The primary reason why we recommend using Map for map-like use cases is because of non-local performance effects that the object system can exhibit when certain parts of the machinery get "overloaded". In a small test like the one you have created, you won't see this effect, because nothing else is going on. In a large app (using many objects with many properties in many different usage patterns), it's still not guaranteed (because it depends on what the rest of the app is doing) but there's a good chance that using Maps where appropriate will improve overall performance -- if the overall system previously happened to run into one of the unfortunate situations.
Another reason is that Maps handle deletion of entries much better than Objects do, because that's a use case their implementation explicitly anticipates as common.
That said, as you already noted, worrying about such details in the abstract is a case of premature optimization. If you have a performance problem, then profile your app to figure out where most time is being spent, and then focus on improving those areas. If you do end up suspecting that the use of objects-as-maps is causing issues, then I recommend changing the implementation in the app itself, and measuring (with the real app!) whether it makes a difference.
(See here for a related, similarly misleading microbenchmark, where even the microbenchmark itself started producing opposite results after minor modifications: Why "Map" manipulation is much slower than "Object" in JavaScript (v8) for integer keys?. That's why we recommend benchmarking with real apps, not with simplistic miniature scenarios.)

'marking' an object for garbage collection in NodeJS

I'm working with some code in NodeJS, and some objects (i.e., 'events') will be medium-lived, and then discarded.
I don't want them becoming a memory burden when I stop using them, and I want to know if there is a way to mark an object to be garbage-collected by the V8 engine. (or better yet- completely destroy the object on command)
I understand that garbage collection is automatic, but since these objects will, 60% of the time, outlive the young generation, I would like to make sure there is a way they don't camp out in the old-generation for a while after they are discarded, while avoiding the inefficiency of searching the entire thing.
I've looked around, and so far can't find anything in the NodeJS docs. I have two main questions:
Would this even be that good? Would it be worth it to be able to 'mark' large amounts of unused objects to be gc'ed? (possibly 100+ at a time)
Is there even a way to do this?
Anything (speculation, hints, articles) would be appreciated. Thanks!
(V8 developer here.) There's no way to do this, and you don't need to worry about it. Marking works the other way round: the GC finds and marks live objects. Dead objects are never marked, and there's no explicit act of destroying them. The GC never even looks at dead objects. Which also means that dead objects are not a burden.
"Garbage collector" really is a misleading term: it doesn't actually find or collect garbage; instead it finds non-garbage and keeps it, and everything it hasn't found it just ignores by assuming that the respective memory regions are free.
In theory, there could be a way to manually add (the memory previously occupied by) objects to the "free list"; but there's a fundamental problem with that: part of the point of automatic memory management is that automating it provides better security and stability than relying on manual memory management (with programmers being humans, and humans making mistakes). That means that by design, a GC can't trust anyone else to declare objects as unreachable; it would always insist on verifying that claim -- which is equivalent to disregarding it, as the only way to verify it is to run a full regular GC cycle.
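In practical terms, the closest you can get is simply dropping your own references (when they would otherwise stay alive) and letting the GC do its thing. A trivial sketch with made-up names:

function handleEvent(e) { /* stand-in for whatever work the app does */ }

let event = { type: "user-signup", payload: {} }; // made-up example of a medium-lived object
handleEvent(event);
event = null;   // drop the last reference; the object is now unreachable, and the GC will
                // reclaim its memory whenever it next runs; no explicit action is needed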

JavaScript: performance constraints of `delete` keyword

I'm trying to better learn how JS works under the hood and I've heard in the past that the delete keyword (specifically node.js or browsers using V8) results in poor performance, so I want to see if I can figure out what the benefits/detriments are for using that keyword.
I believe the reasoning for not using delete is that removing a property leads to a rebuilding of hidden class transitions and thus a recompiling of the inline cache. However, I believe it is also true that the object prototype will no longer enumerate that property, so if the object is used heavily the upfront cost may eventually pay off.
So:
Are my assumptions about the tradeoffs correct?
If they are correct, is one factor more important than the other (e.g. is rebuilding the IC much more expensive than many prototype enumerations)?
V8 developer here. Short answer: "it depends".
Having an unused property doesn't hurt; there is no general "enumeration cost" unless you actually perform explicit enumerations. In other words, an "enumeration cost" only exists if you find yourself doing something like this:
for (var p in object) {
  if (p === old_property_that_I_could_have_deleted) continue;
  /* process other properties... */
}
The key reason why it's hard to give a concrete answer (or to provide a canonical example where an effect would be measurable) is because the effects are non-local: they depend both on what exactly you're doing with the object in question, and on what the rest of your app is doing. Deleting a property from one object may well cause operations on other objects to become slower. Or faster. It depends.
To take a step back and look at the high-level situation: JavaScript as a language sort of assumes that objects are represented as dictionaries. Deleting an entry in a dictionary should be perfectly fine, which is why it makes sense that the delete operator exists. In practice, it turns out that an engine can achieve huge performance improvements for read-heavy apps, which is by far the most common case, if it does not store objects as dictionaries, but instead more like something that resembles C/C++ structs. However, such an object representation is (1) generally hard/inefficient to do when properties get deleted, and (2) the engine may well interpret even the first deletion of a property as a hint that the programmer wants this particular object to behave like a dictionary, so it might switch the internal representation over. If a fast-to-modify dictionary is what you wanted, then that's fine (it will provide a benefit even); however if you wanted the object to remain in slow-to-modify/fast-to-read mode, you would perceive the transition to fast-to-modify/slow-to-read dictionary mode as a performance problem.
Thankfully there is a great solution nowadays: when you want a dictionary, use a Map or Set. Engines can (and usually will) assume that you'll want to delete entries from these, so the implementations are optimized for making that possible without negative side effects; in particular no hidden classes are involved.
A few remarks on your assumptions: deleting a property makes an object (mostly) leave the system of hidden class transitions, no transitions will be rebuilt. There is no single global "inline cache", there are many inline caches sprinkled all over your functions. They don't get rebuilt, they just transition to slower and slower modes the more different cases they have to handle. (That's generally how caching works: caching a single case provides huge speedups; on the other end of the scale if you have as many different cases as executions, then a cache just wastes time and memory without providing any benefit.) Again the effect of dictionary-mode objects depends on the overall situation: an inline cache dealing with (mostly) dictionary-mode objects typically exhibits performance somewhere in between (1) an inline cache that only has to deal with objects sharing the single same hidden class, and (2) an inline cache that has to deal with hundreds or thousands of different hidden classes.
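To illustrate that last point with a toy example (illustrative only; the real effects depend on what the whole app is doing):

function readA(o) { return o.a; }    // the property load here gets its own inline cache (IC)

const o1 = { a: 1, b: 2, c: 3 };
const o2 = { a: 4, b: 5, c: 6 };     // same hidden class as o1
readA(o1);
readA(o2);                           // the IC has only seen one hidden class: the fast case

delete o2.b;                         // deleting a property that isn't the most recently added
                                     // one typically drops o2 into dictionary ("slow") mode
readA(o2);                           // the same IC now also has to handle that slower case,
                                     // which can make readA a bit slower for o1 as well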

Experimenting with auto-removed items from WeakSet/WeakMap (via garbage collection) in Node.js when .size doesn't exist?

#1. Workaround for lack of .size property?
In JavaScript, I've never used either WeakSet or WeakMap before, and I don't know that much about garbage collection in general (I'm a PHP + JS developer, so this is the first time I've really needed to think about garbage collection). But I think I have a good use case for WeakMap right now. So I'd like to at least start experimenting with it.
The main thing I want to confirm in my experiments is the automatic removal of objects when they've been garbage collected. This would be easy to test if I could just access a WeakSet.size / WeakMap.size property on the instances to check their size, but they don't exist on the "weak" versions.
I'm guessing that the results could vary, seeing that the size is going to depend on whether the garbage collector has run yet. But that's OK, as none of this experimentation code will be used in production... I just want to confirm that I actually understand how garbage collection and WeakSet/WeakMap work. The idea of using this feature without being able to test (and therefore fully understand) it makes me very uneasy, and I'm concerned that I'll end up finding out about memory leaks when it's too late (in production).
Are there any workarounds or alternatives to deal with the lack of WeakSet.size and WeakMap.size... at least just for debugging/testing/learning purposes?
If not a .size workaround, is there maybe a way to check the memory usage of my WeakMap collection instances? That would be just as useful, as that's where the main concern is.
The only thing I can think of right now is checking the memory of the entire Node.js process... which doesn't seem very reliable to me.
#2. What is .length for?
Also, I'm a bit confused about why there is a .length property on the class constructors of both WeakSet and WeakMap (not on your instances of them).
According to:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakSet
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakMap
...both pages say that .length is:
The value of the length property is 0.
Are they literally just hard-coded to the number 0 at all times? What's the point of that?
Searching for either "WeakSet.length" or "WeakMap.length" on Google (including the double quotes for exact results) yields no further information, only about 20 results for each, which are just mirrors of the MDN pages.
The size won't be available for either WeakSet or WeakMap, since their keys are just weakly held references to objects, and those are handled by the garbage collector. Since the collector cannot (or at least shouldn't) be manually controlled, it will free the memory of those objects, once they are no longer referenced, at any point during runtime. Implementing a workaround to see the current size would therefore be neither effective nor recommended.
The .length property exists because WeakSet and WeakMap are constructor functions, and every function has a .length reporting its number of declared (non-optional) parameters; since the iterable argument to both constructors is optional, it is simply 0. It has nothing to do with the number of entries, and given that the collector may clear entries at any point, a meaningful entry count couldn't be exposed there anyway.
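You can check this yourself, and compare it with other built-in constructors:

console.log(WeakMap.length); // 0 -- the optional iterable argument doesn't count
console.log(WeakSet.length); // 0
console.log(Map.length);     // 0
console.log(Array.length);   // 1 -- Array(len) declares one parameter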
As for experimenting with them, you can try them out in Chrome with the garbage collector exposed, and manually call it to see how the WeakMap clears itself after an object reference is lost (explained in this answer). Otherwise, you may still see the reference inside the WeakMap or WeakSet, since devtools usually prevents the garbage collector from running.
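For example, in Node.js you could do something like the following. This is a sketch only: it assumes the process is started with --expose-gc so that global.gc() exists, and it uses FinalizationRegistry (available since Node 14.6) as a stand-in for the missing .size:

// Run with: node --expose-gc weakmap-gc-test.js  (filename is just an example)
const registry = new FinalizationRegistry((label) => {
  console.log(`${label} was garbage-collected`);
});

const wm = new WeakMap();
let key = { id: 1 };
wm.set(key, "some value");
registry.register(key, "the WeakMap key");

console.log(wm.has(key)); // true, while we still hold a strong reference
key = null;               // drop the only strong reference to the key object

global.gc();              // only exists when Node is started with --expose-gc
// Finalization callbacks are neither synchronous nor guaranteed to be prompt, but at
// some later turn of the event loop the registry should report the key as collected,
// which implies the corresponding WeakMap entry is gone as well.
setTimeout(() => {}, 100); // keep the process alive briefly so the callback can fire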

Managing memory in JavaScript

Suppose I ensure that properties are deleted and objects are set to null after use in a JavaScript program.
1) Will this help me in better memory management? How do I verify this?
2) As far as I know, after deletion the object representation will be changed to a dictionary. Will this affect the overall performance?
Will this help me in better memory management?
Unlikely. Usually the objects go out of scope on their own anyway, and setting the variables holding them to null is just unnecessary, if not even slowing down execution.
How do I verify this?
Profile the memory consumption of your application with and without those lines.
As far as I know, after deletion the object representation will be changed to a dictionary. Will this affect the overall performance?
Yes it will, adversely, so don't do this. Instead of deleting properties, you should just set them to null via assignment, if you really care about destroying the reference.
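In other words (a trivial sketch with a made-up object):

const user = { name: "Ada", sessionData: { token: "..." } }; // made-up example object

// Rather than:  delete user.sessionData;   (can push `user` into a slower representation)
// prefer, if you just want to drop the reference:
user.sessionData = null;  // keeps the object's shape; the old session data becomes collectable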
But seriously, there is in general absolutely no need to "manage memory in javascript". The garbage collector works quite fine, and there's very rarely a need to help it. Unless you are programming in odd ways or doing terrible (or very advanced and critical) stuff, don't worry.
