Javascript Object Big-O

Coming from Java, a JavaScript object reminds me of a HashMap in Java.
Javascript:
var myObject = {
  firstName: "Foo",
  lastName: "Bar",
  email: "foo@bar.com"
};
Java:
HashMap<String, String> myHashMap = new HashMap<String, String>();
myHashMap.put("firstName", "Foo");
myHashMap.put("lastName", "Bar");
myHashMap.put("email", "foo#bar.com");
In a Java HashMap, the hashCode() of the key determines the bucket (entry) used for storage and retrieval. The majority of the time, basic operations such as put() and get() run in constant time; when a hash collision occurs, those operations degrade to O(n), because the collided entries are stored in a linked list.
My question is:
How does JavaScript store objects?
What is the performance of these operations?
Will there ever be collisions or other scenarios that degrade the performance, as in Java?
Thanks!

Javascript looks like it stores things in a map, but that's typically not the case. You can access most properties of an object as if they were an index in a map, and assign new properties at runtime, but the backing code is much faster and more complicated than just using a map.
There's nothing requiring VMs not to use a map, but most try to detect the structure of the object and create an efficient in-memory representation for that structure. This can lead to a lot of optimizations (and deopts) while the program is running, and is a very complicated situation.
This blog post, linked in the question comments by @Zirak, has quite a good discussion of the common structures and when VMs may switch from a struct to a map. It can often seem unpredictable, but it is largely based on a set of heuristics within the VM and how many different objects it believes it has seen. That is largely related to the properties (and their types) of return values, and tends to be centered around each function (especially constructor functions).
There are a few questions and articles that dig into the details (but are hopefully still understandable without a ton of background):
slow function call in V8 when using the same key for the functions in different objects
Why is getting a member faster than calling hasOwnProperty?
http://mrale.ph/blog/2013/08/14/hidden-classes-vs-jsperf.html (and the rest of this blog)
The performance varies greatly, based on the above. Worst case should be a map access, best case is a direct memory access (perhaps even a deref).
There are a large number of scenarios that can have performance impacts, especially given how the JITter and VM will create and destroy hidden classes at runtime, as they see new variations on an object. Suddenly encountering a new variant of an object that was presumed to be monomorphic before can cause the VM to switch back to a less-optimal representation and stop treating the object as an in-memory struct, but the logic around that is pretty complicated and well-covered in this blog post.
You can help by making sure objects created from the same constructor tend to have very similar structures, and making things as predictable as possible (good for you, maintenance, and the VM). Having known properties for each object, set types for those properties, and creating objects from constructors when you can should let you hit most of the available optimizations and have some awfully quick code.
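As a rough illustration of that advice (the Point constructor below is invented for this example, not taken from the question), keeping every instance the same shape lets the VM reuse one hidden class instead of deoptimizing:
// Every Point gets the same properties, in the same order, with the same types,
// so an engine like V8 can give all instances one shared hidden class.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var a = new Point(1, 2);
var b = new Point(3, 4);

// Adding a property to only *some* instances creates a second hidden class and
// can push call sites from monomorphic to polymorphic:
// b.label = 'origin-ish'; // avoid this kind of ad-hoc extension

function squaredLength(p) {
  // Stays fast as long as every p passed in shares the same hidden class.
  return p.x * p.x + p.y * p.y;
}

console.log(squaredLength(a) + squaredLength(b)); // 30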

Related

Why should I use immutablejs over object.freeze?

I have researched on the net about the benefits of immutablejs over Object.freeze() but didn't find anything satisfying!
My question is: why should I use this library and work with non-native data structures when I can freeze a plain old javascript object?
I don't think you understood what immutablejs offers. It's not a library which just turns your objects immutable, it's a library around working with immutable values.
Without simply repeating their docs and mission statement, I'll state two things it provides:
Types. They implemented (immutable) infinite ranges, stacks, ordered sets, lists, ...
All of their types are implemented as Persistent Data Structures.
I lied, here's a quote of their mission statement:
Immutable data cannot be changed once created, leading to much simpler application development, no defensive copying, and enabling advanced memoization and change detection techniques with simple logic. Persistent data presents a mutative API which does not update the data in-place, but instead always yields new updated data.
I urge you to read the articles and videos they link to and more about Persistent Data Structures (since they're the thing immutablejs is about), but I'll summarise in a sentence or so:
Let's imagine you're writing a game and you have a player which sits on a 2d plane. Here, for instance, is Bob:
var player = {
  name: 'Bob',
  favouriteColor: 'moldy mustard',
  x: 4,
  y: 10
};
Since you drank the FP koolaid you want to freeze the player (brrr! hope Bob got a sweater):
var player = Object.freeze({
  name: 'Bob',
  ...
});
And now enter your game loop. On every tick the player's position is changed. We can't just update the player object since it's frozen, so we copy it over:
function movePlayer(player, newX, newY) {
  return Object.freeze(Object.assign({}, player, { x: newX, y: newY }));
}
That's fine and dandy, but notice how much useless copying we're making: On every tick, we create a new object, iterate over one of our objects and then assign some new values on top of them. On every tick, on every one of your objects. That's quite a mouthful.
Immutable wraps this up for you:
var player = Immutable.Map({
  name: 'Bob',
  ...
});
function movePlayer(player, newX, newY) {
  return player.set('x', newX).set('y', newY);
}
And through the ノ*✧゚ magic ✧゚*ヽ of persistent data structures they promise to do the least amount of operations possible.
There is also the difference of mindsets. When working with "a plain old [frozen] javascript object", the default assumption everywhere is mutability, and you have to go the extra mile to achieve meaningful immutability (that is to say, immutability which acknowledges that state exists). That's part of the reason freeze exists: when you try to do otherwise, things panic. With Immutablejs, immutability is of course the default assumption, and it has a nice API on top of it.
That's not to say all's pink and rosy with a cherry on top. Of course, everything has its downsides, and you shouldn't cram Immutable everywhere just because you can. Sometimes, just freezing an object is Good Enough. Heck, most of the time that's more than enough. It's a useful library which has its niche, just don't get carried away with the hype.
According to my benchmarks, immutable.js is optimized for write operations, faster than Object.assign(); however, it is slower for read operations. So the decision depends on the type of your application and its read/write ratio. Following is a summary of the benchmark results:
-- Mutable
Total elapsed = 103 ms = 50 ms (read) + 53 ms (write).
-- Immutable (Object.assign)
Total elapsed = 2199 ms = 50 ms (read) + 2149 ms (write).
-- Immutable (immutable.js)
Total elapsed = 1690 ms = 638 ms (read) + 1052 ms (write).
-- Immutable (seamless-immutable)
Total elapsed = 91333 ms = 31 ms (read) + 91302 ms (write).
-- Immutable (immutable-assign (created by me))
Total elapsed = 2223 ms = 50 ms (read) + 2173 ms (write).
Ideally, you should profile your application before introducing any performance optimization; however, immutability is one of those design decisions that must be made early. When you start using immutable.js, you need to use it throughout your entire application to get the performance benefits, because interop with plain JS objects using fromJS() and toJS() is very costly.
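For example (a hypothetical boundary, not part of the benchmarks above), the idea is to convert once on the way in and once on the way out, rather than round-tripping inside hot code:
// Assumes Immutable.js is loaded as the global `Immutable`, as in the snippets above.
var initialState = Immutable.fromJS({ player: { name: 'Bob', x: 4, y: 10 } });

// Inside the application, stay in Immutable.js land: persistent updates are cheap.
var nextState = initialState.setIn(['player', 'x'], 5);

// Only convert back to plain objects at the edges (logging, third-party APIs, ...),
// because fromJS()/toJS() walk the entire structure every time they are called.
console.log(nextState.toJS()); // { player: { name: 'Bob', x: 5, y: 10 } }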
PS: Just found out that a deep-frozen array (1000 elements) becomes very slow to update, about 50 times slower, so you should use deep freeze in development mode only. Benchmark results:
-- Immutable (Object.assign) + deep freeze
Total elapsed = 45903 ms = 96 ms (read) + 45807 ms (write).
Neither of them makes the object deeply immutable.
However, using Object.freeze you'll have to create the new instances of the object / array by yourself, and they won't have structural sharing. So every change will require deeply copying everything, and the old collection will be garbage collected.
immutablejs on the other hand will manage the collections, and when something changes, the new instance will use the parts of the old instance that haven't changed, so less copying and garbage collecting.
There are a couple of major differences between Object.freeze() and immutable.js.
Let's address the performance cost first. Object.freeze() is shallow: it will make the object itself immutable, but nested objects and methods inside it can still be mutated. The Object.freeze() documentation addresses this and even goes on to provide a "deepFreeze" function, which is even more costly in terms of performance. Immutable.js, on the other hand, will make the object as a whole (nested properties, methods, etc.) immutable at a lower cost.
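A recursive deep freeze looks roughly like the helper sketched on MDN (the version below is a simplified illustration); having to walk every nested object is exactly where the extra cost comes from:
// Minimal recursive deep freeze, in the spirit of the MDN example.
// Note: this sketch does not guard against cyclic references.
function deepFreeze(obj) {
  Object.getOwnPropertyNames(obj).forEach(function (name) {
    var value = obj[name];
    if (value && typeof value === 'object') {
      deepFreeze(value); // freeze nested objects and arrays first
    }
  });
  return Object.freeze(obj);
}

var config = deepFreeze({ server: { host: 'example.com', port: 80 } });
config.server.port = 8080;       // ignored (throws a TypeError in strict mode)
console.log(config.server.port); // 80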
Additionally, should you ever need to clone an immutable variable, Object.freeze() will force you to create an entirely new variable, while Immutable.js can reuse the existing immutable variable to create the clone more efficiently. Here's an interesting quote about this from this article:
"Immutable methods like .set() can be more efficient than cloning
because they let the new object reference data in the old object: only
the changed properties differ. This way you can save memory and
performance versus constantly deep-cloning everything."
In a nutshell, Immutable.js makes logical connections between the old and new immutable variables, thus improving the performance of cloning and reducing the space frozen variables take up in memory. Object.freeze() sadly does not - every time you clone a new variable from a frozen object you basically write all the data anew, and there is no logical connection between the two immutable variables even if (for some odd reason) they hold identical data.
So in terms of performance, especially if you constantly make use of immutable variables in your program, Immutable.js is a great choice. However, performance is not everything and there are some big caveats to using Immutable.js. Immutable.js uses its own data structures, which makes debugging, or even just logging data to the console, a royal pain. It also might lead to a loss of basic JavaScript functionality (for example, you cannot use ES6 destructuring with it). The Immutable.js documentation is infamously impossible to understand (because it was originally written for use only within Facebook itself), requiring a lot of web-searching even when simple issues arise.
I hope this covers the most important aspects of both approaches and helps you decide which will work best for you.
Object.freeze does not do any deep freezing natively; I believe that immutable.js does.
The same with any library -- why use underscore, jquery, etc etc.
People like re-using the wheels that other people built :-)
The biggest reason that comes to mind - outside of having a functional API that helps with immutable updates - is the structural sharing utilized by Immutable.js. If you have an application that needs enforced immutability (i.e., you're using Redux), then if you're only using Object.freeze you're going to be making a copy for every 'mutation'. This isn't really efficient over time, since it will lead to GC thrashing. With Immutable.js, you get structural sharing baked in (as opposed to having to implement an object pool / a structural sharing model of your own) since the data structures returned from Immutable are tries. This means that all mutations are still referenced within the data structure, so GC thrashing is kept to a minimum. More about this is on Immutable.js's doc site (and a great video going into more depth by the creator, Lee Byron):
https://facebook.github.io/immutable-js/
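A quick way to see the structural sharing in action (assuming Immutable.js is loaded as the global Immutable):
var state1 = Immutable.Map({
  players: Immutable.List(['Bob', 'Alice']),
  score: 0
});

// Only `score` changes; the untouched `players` List is reused, not copied.
var state2 = state1.set('score', 10);

console.log(state1.get('players') === state2.get('players')); // true
console.log(state1.get('score'), state2.get('score'));        // 0 10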

JavaScript usage of new keyword and memory management

What is the basic difference between these two statements from the memory standpoint? I just want to know whether making objects with new does anything special about memory allocation and garbage collection, or whether both are identical.
I have to load huge binary data into an array, so I want to have an idea.
Another question: can I force de-allocation of any memory from JavaScript directly, like GC.Collect() in C# or the delete operator?
var x=8;
var y=new Number(8);
Thanks for your help in advance
Difference: none.
As for forcing deallocation: no.
(you can set all references to null; but that may be an unnecessary hint to the GC)
Javascript is fully managed and doesn't provide an API like C# to "order" the GC to do stuff. Indeed, you may even find that some objects end up tied to the DOM and aren't deleted until their associated nodes are. And each browser is a different flavour.
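For the huge-binary-data case in the question, the most you can do is drop every reference once you're done and let the GC take it from there (a rough sketch; the size is only illustrative):
// Allocate a large buffer (illustrative size), process it, then release the reference.
var bigData = new Uint8Array(100 * 1024 * 1024); // ~100 MB

// ... fill and process bigData ...

bigData = null; // there is no free()/GC.Collect(); once nothing references the
                // buffer, the engine is free to reclaim it whenever it chooses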

How would you explain Javascript Typed Arrays to someone with no programming experience outside of Javascript?

I have been messing with Canvas a lot lately, developing some ideas I have for a web-based game. As such I've recently run into Javascript Typed Arrays. I've done some reading, for example at MDN, and I just can't understand anything I'm finding. It seems most often, when someone is explaining Typed Arrays, they use analogies to other languages that are a little beyond my understanding.
My experience with "programming," if you can call it that (and not just front-end scripting), is pretty much limited to Javascript. I do feel as though I understand Javascript pretty well outside of this instance, however. I have deeply investigated and used the Object.prototype structure of Javascript, and more subtle factors such as variable referencing and the value of this, but when I look at any information I've found about Typed Arrays, I'm just lost.
With this frame-of-reference in mind, can you describe Typed Arrays in a simple, usable way? The most effective depicted use-case, for me, would be something to do with Canvas image data. Also, a well-commented Fiddle would be most appreciated.
In typed programming languages (to which JavaScript kinda belongs) we usually have variables of fixed declared type that can be dynamically assigned values.
With Typed Arrays it's quite the opposite.
You have a fixed chunk of data (represented by an ArrayBuffer) that you do not access directly. Instead, this data is accessed through views. Views are created at run time and effectively declare some portion of the buffer to be of a certain type. These views are sub-classes of ArrayBufferView. A view defines a certain contiguous portion of this chunk of data as elements of an array of a certain type. Once the type is declared, the browser knows the length and content of each element, as well as the number of such elements. With this knowledge, browsers can access individual elements much more efficiently.
So we are dynamically assigning a type to a portion of what is actually just a buffer. We can assign multiple views to the same buffer.
From the Specs:
Multiple typed array views can refer to the same ArrayBuffer, of different types,
lengths, and offsets.
This allows for complex data structures to be built up in the ArrayBuffer.
As an example, given the following code:
// create an 8-byte ArrayBuffer
var b = new ArrayBuffer(8);
// create a view v1 referring to b, of type Int32, starting at
// the default byte index (0) and extending until the end of the buffer
var v1 = new Int32Array(b);
// create a view v2 referring to b, of type Uint8, starting at
// byte index 2 and extending until the end of the buffer
var v2 = new Uint8Array(b, 2);
// create a view v3 referring to b, of type Int16, starting at
// byte index 2 and having a length of 2
var v3 = new Int16Array(b, 2, 2);
The following buffer and view layout is created (the spec illustrates this with a byte-layout diagram):
This defines an 8-byte buffer b, and three views of that buffer, v1,
v2, and v3. Each of the views refers to the same buffer -- so v1[0]
refers to bytes 0..3 as a signed 32-bit integer, v2[0] refers to byte
2 as a unsigned 8-bit integer, and v3[0] refers to bytes 2..3 as a
signed 16-bit integer. Any modification to one view is immediately
visible in the other: for example, after v2[0] = 0xff; v2[1] = 0xff;
then v3[0] == -1 (where -1 is represented as 0xffff).
So instead of declaring data structures and filling them with data, we take data and overlay it with different data types.
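Since the question specifically asked about Canvas: getImageData() hands you exactly this kind of view, a Uint8ClampedArray over the raw RGBA bytes of the canvas. A minimal browser-only sketch (not from the original answer):
// Invert the colours of a canvas by writing through the typed-array view
// returned by getImageData(): a Uint8ClampedArray of [R, G, B, A, R, G, B, A, ...].
var canvas = document.createElement('canvas');
canvas.width = canvas.height = 100;
var ctx = canvas.getContext('2d');

ctx.fillStyle = '#663399';
ctx.fillRect(0, 0, 100, 100);

var image = ctx.getImageData(0, 0, 100, 100);
var pixels = image.data; // Uint8ClampedArray, one byte per channel

for (var i = 0; i < pixels.length; i += 4) {
  pixels[i]     = 255 - pixels[i];     // red
  pixels[i + 1] = 255 - pixels[i + 1]; // green
  pixels[i + 2] = 255 - pixels[i + 2]; // blue
  // pixels[i + 3] is alpha; leave it unchanged
}

ctx.putImageData(image, 0, 0);
document.body.appendChild(canvas); // shows the inverted square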
I spend all my time in javascript these days, but I'll take a stab at quick summary, since I've used typed arrays in other languages, like Java.
The closest thing I think you'll find in the way of comparison, when it comes to typed arrays, is a performance comparison. In my head, Typed Arrays enable compilers to make assumptions they can't normally make. If someone is optimizing things at the low level of a javascript engine like V8, those assumptions become valuable. If you can say, "Data will always be of size X" (or something similar), then you can, for instance, allocate memory more efficiently, which lets you (getting more jargon-y, now) reduce how many times you go to access memory and find it's not in a CPU cache. Accessing the CPU cache is much faster than having to go to RAM, I believe. When doing things at a large scale, those time savings add up quickly.
If I were to do up a jsfiddle (no time, sorry), I'd be comparing the time it takes to perform certain operations on typed arrays vs non-typed arrays. For example, I imagine "adding 100,000 items" being a performance benchmark I'd try, to compare how the structures handle things.
What I can do is link you to: http://jsperf.com/typed-arrays-vs-arrays/7
All I did to get that was google "typed arrays javascript performance" and clicked the first item (I'm familiar with jsperf, too, so that helped me decide).
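In that spirit, here is a rough, unscientific console.time() comparison (invented for illustration; absolute numbers will vary wildly between engines):
// Sum one million doubles stored in a plain Array vs a Float64Array.
var N = 1000000;
var plain = new Array(N);
var typed = new Float64Array(N);

for (var i = 0; i < N; i++) {
  plain[i] = Math.random();
  typed[i] = plain[i];
}

console.time('plain Array sum');
var sum1 = 0;
for (var i = 0; i < N; i++) { sum1 += plain[i]; }
console.timeEnd('plain Array sum');

console.time('Float64Array sum');
var sum2 = 0;
for (var i = 0; i < N; i++) { sum2 += typed[i]; }
console.timeEnd('Float64Array sum');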

Why will ES6 WeakMap's not be enumerable?

Before my re-entry into JavaScript (and related) I did lots of ActionScript 3, and there they had a Dictionary object that had weak keys just like the upcoming WeakMap; but the AS3 version was still enumerable like a regular generic object, while the WeakMap specifically has no .keys() or .values().
The AS3 version allowed us to rig some really interesting and useful constructs, but I feel the JS version is somewhat limited. Why is that?
If the Flash VM could do it, then what is keeping browsers from doing the same? I read how it would be 'non-deterministic', but that is sort of the point, right?
Finally found the real answer: http://tc39wiki.calculist.org/es6/weak-map/
A key property of Weak Maps is the inability to enumerate their keys. This is necessary to prevent attackers observing the internal behavior of other systems in the environment which share weakly-mapped objects. Should the number or names of items in the collection be discoverable from the API, even if the values aren't, WeakMap instances might create a side channel where one was previously not available.
It's a tradeoff. If you introduce object <-> object dictionaries that support enumerability, you have two options with relation to garbage collection:
Consider the key entry a strong reference that prevents garbage collection of the object that's being used as a key.
Make it a weak reference that allows its keys to be garbage collected whenever every other reference is gone.
If you do #1, you will make it extremely easy to shoot yourself in the foot by leaking large objects into memory all over the place. On the other hand, if you go with option #2, your key dictionary becomes dependent on the state of garbage collection in the application, which will inevitably lead to impossible-to-track-down bugs.
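A small sketch of what option #2 looks like in practice, and why there is simply nothing to enumerate:
var wm = new WeakMap();

var node = { id: 'some-object-you-do-not-own' }; // stand-in key
wm.set(node, { clicks: 3 }); // attach metadata without touching the object itself

console.log(wm.get(node)); // { clicks: 3 }

// There is deliberately no wm.keys(), wm.values(), wm.entries() or wm.size.
// Once the last outside reference is gone, the entry may silently disappear:
node = null; // the GC is now free to collect both the key and its value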

Why aren't strings mutable? [duplicate]

Possible Duplicate:
Why can't strings be mutable in Java and .NET?
Why .NET String is immutable?
Several languages have chosen this, such as C#, Java, and Python. If it is intended to save memory or gain efficiency for operations like compare, what effect does it have on concatenation and other modifying operations?
Immutable types are a good thing generally:
They work better for concurrency (you don't need to lock something that can't change!)
They reduce errors: mutable objects are vulnerable to being changed when you don't expect it which can introduce all kinds of strange bugs ("action at a distance")
They can be safely shared (i.e. multiple references to the same object) which can reduce memory consumption and improve cache utilisation.
Sharing also makes copying a very cheap O(1) operation when it would be O(n) if you had to take a defensive copy of a mutable object. This is a big deal because copying is an incredibly common operation (e.g. whenever you want to pass parameters around).
As a result, it's a pretty reasonable language design choice to make strings immutable.
Some languages (particularly functional languages like Haskell and Clojure) go even further and make pretty much everything immutable. This enlightening video is very much worth a look if you are interested in the benefits of immutability.
There are a couple of minor downsides for immutable types:
Operations that create a changed string like concatenation are more expensive because you need to construct new objects. Typically the cost is O(n+m) for concatenating two immutable Strings, though it can go as low as O(log (m+n)) if you use a tree-based string data structure like a Rope. Plus you can always use special tools like Java's StringBuilder if you really need to concatenate Strings efficiently.
A small change on a large string can result in the need to construct a completely new copy of the large String, which obviously increases memory consumption. Note however that this isn't usually a big issue in garbage-collected languages since the old copy will get garbage collected pretty quickly if you don't keep a reference to it.
Overall though, the advantages of immutability vastly outweigh the minor disadvantages. Even if you are only interested in performance, the concurrency advantages and cheapness of copying will in general make immutable strings much more performant than mutable ones with locking and defensive copying.
It's mainly intended to prevent programming errors. For example, Strings are frequently used as keys in hashtables. If they could change, the hashtable would become corrupted. And that's just one example where having a piece of data change while you're using it causes problems. Security is another: if you're checking whether a user is allowed to access a file at a given path before executing the operation they requested, the string containing the path had better not be mutable...
It becomes even more important when you're doing multithreading. Immutable data can be safely passed around between threads while mutable data causes endless headaches.
Basically, immutable data makes the code that works on it easier to reason about. Which is why purely functional languages try to keep everything immutable.
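The same holds in JavaScript, where strings are immutable too; here is a small sketch of the hashtable-key point (the file path is made up for the example):
var path = '/home/user/report.txt';

var upper = path.toUpperCase(); // every string method returns a brand-new string
console.log(path);              // '/home/user/report.txt' (unchanged)
console.log(upper);             // '/HOME/USER/REPORT.TXT'

path[0] = 'X';                  // has no effect (throws a TypeError in strict mode)
console.log(path[0]);           // '/'

// Because the key can never change underneath it, the map stays consistent:
var permissions = new Map([[path, 'read-only']]);
console.log(permissions.get('/home/user/report.txt')); // 'read-only'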
In Java, not only String but all primitive wrapper classes (Integer, Double, Character, etc.) are immutable. I am not sure of the exact reason, but I think these are the basic data types on which all the programming schemes work. If they changed, things could go wild. To be more specific, I'll use an example: say you have opened a socket connection to a remote host. The host name would be a String and the port would be an Integer. What if these values were modified after the connection is established?
As far as performance is concerned, Java allocates memory for string literals from a separate area called the String literal pool rather than from the stack or the regular heap. The pool is indexed, and if you use the literal "String" twice, both references point to the same object from the pool.
Having strings immutable also makes creating new string references cheap, as the same strings will be readily available from the pool of previously created Strings, thereby reducing the cost of new object creation.
