Consider the following JavaScript function (1):
function setData(domElement) {
    domElement.myDataProperty = {
        'suppose': 'this',
        'object': 'is',
        'static': 'and',
        'pretty': 'big'
    };
}
Now what I don't like about this function is that an identical object is created from scratch every time the function is called. Since the object never changes, I would rather create it just once. So we could make the following adjustment (2):
var dataObject = {
    'suppose': 'this',
    'object': 'is',
    'static': 'and',
    'pretty': 'big'
};

function setData(domElement) {
    domElement.myDataProperty = dataObject;
}
Now the object is created once when the script is loaded and stored in dataObject. But let's assume that setData is called only occasionally -- on most page loads the function is never used. What I don't like in that case is that the object is always created and held in memory, even when it will never be used. I figured you could do something like this to strike the ideal balance (3):
var dataObject;

function setData(domElement) {
    if (!dataObject) {
        dataObject = {
            'suppose': 'this',
            'object': 'is',
            'static': 'and',
            'pretty': 'big'
        };
    }
    domElement.myDataProperty = dataObject;
}
Would that make sense? I figure it depends on when the interpreter decides to create the object. Does it really wait until it passes the !dataObject condition, or does it enter the function, try to be smart, and construct the object in advance? Perhaps different JavaScript engines have different policies in this regard?
Then of course there is the question of whether these optimizations will ever matter in practice. Obviously this depends on factors like the size of the object, the speed of the engine, the amount of resources available, etc. But in general, which would you say is the more significant optimization: going from (1) to (2), or from (2) to (3)?
The answer is, you're not supposed to know. The examples you showed differ very little. The only time you'd reasonably worry about this is if you had actual evidence that one approach or the other was noticeably harming performance or memory usage on a particular interpreter. Until then, it's the interpreter's job to worry about that stuff for you.
That said, if you really want to know... try it and find out. Call the different versions 1,000,000 times and see what difference it makes.
Make a giant version of the object and see if that makes a dent. Watch the task manager. Try different browsers. Report back your results. It's a much better way to find out than just asking a bunch of jerks on the internet what they guess might be the case.
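For instance, a crude harness along these lines would give you a first number to argue about (a sketch only; setData1 and setData2 stand in for versions (1) and (3), and the iteration count is arbitrary):

function benchmark(label, fn) {
    var el = {}; // stand-in for a real DOM element
    var start = new Date().getTime();
    for (var i = 0; i < 1000000; i++) {
        fn(el);
    }
    console.log(label + ': ' + (new Date().getTime() - start) + ' ms');
}

benchmark('create every call', setData1);
benchmark('lazy singleton', setData2);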
Just keep in mind that the object has to be in memory anyway, regardless... at the very least as source text.
A new object must be created -- it cannot not be, partly because the spec requires it, but mostly because any alternative behaviour would be counterintuitive. Take:
function f() {
    return {a: "b", c: "d"};
}

var o = f();
alert([o.c, o.e]); // Alerts "d," -- o.e is undefined
delete o.c;
o.e = "f";
o = f();
alert([o.c, o.e]); // Alerts "d," again; if the object was only created once this would produce ",f"
Do you really expect a new object expression to not actually produce the object you're asking for? Because that's what you seem to want.
Conceivably you just want to do:
var myFunction = (function () {
    var object = {a: "b", c: "d"};
    return function () { return object; };
})();
Which would get the effect you want, although you would have to realise that the returned object is completely mutable, and every caller would be sharing that same mutating instance.
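To make the sharing concrete (a short illustration; the callers are hypothetical, myFunction is the one defined above):

var first = myFunction();
var second = myFunction();
first.a = "changed";     // mutate through one reference...
alert(second.a);         // ...and every other caller sees "changed"
alert(first === second); // true -- it is literally the same instance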
First, I'd implement it in situation #2 and load it once immediately after the page is loaded.
If there was a problem with page speed, I would measure the time taken for specific tasks within the page.
If it was very expensive to create the object (relatively speaking), then I would move to situation #3.
There's no point in adding the 'if' statement if it really doesn't buy you anything... and in this case, creating a simple/big object is no sweat off your CPU's back. Without measurements, you're not optimizing - you're just shooting blind.
It's actually a fairly common method of initializing things that I've personally used in C++ and Java.
First, this optimization will never matter in practice.
Second, the last function is exactly as good as the first function. Well, almost. In the first I suppose you're at the mercy of the garbage collector, which should destroy the old object when you reassign domElement.myDataProperty. Still, without knowing exactly how the garbage collector works on your target platform (and it can be very different across browsers), you can't be sure you're saving any work at all really.
Try all three of them in a couple of browsers and find out which is faster.
Related
I'm just curious. Maybe someone knows what JavaScript engines can optimize in 2013 and what they can't? Any predictions for the near future? I was looking for some good articles, but there is still no "bible" on the internet.
Ok, let's focus on a single question:
Suppose I have a function which is called every 10ms or in a tight loop:
function bottleneck() {
    var str = 'Some string',
        arr = [1, 2, 3, 4],
        job = function () {
            // do something;
        };
    // Do something;
    // console.log(new Date().getTime());
}
I do not need to calculate the initial values for the variables every time, as you can see. But if I move them to an upper scope, I will lose out on variable lookup. So is there a way to tell the JavaScript engine to do such an obvious thing -- precompute the variables' initial values?
I've created a jsperf to clarify my question. I'm experimenting with different types. I'm especially interested in functions and primitives.
If you need to call a function every 10ms, and it's a bottleneck, the first thought you should have is "I shouldn't call this function every 10ms". Something went wrong in your architecting. That said, see 1b in http://jsperf.com/variables-caching/2, which is about four times faster than your "cached" version -- the main reason being that for every variable in your code, you're either moving up scope or redeclaring. In 1b, we go up scope once, to get "initials", then set up local aliases for its contents from that local reference. Much time is saved.
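The shape of the idea, roughly (a sketch assuming an initials object in an outer scope, mirroring the jsperf case):

// One shared object holds the precomputed initial values.
var initials = {
    str: 'Some string',
    arr: [1, 2, 3, 4],
    job: function () { /* do something */ }
};

function bottleneck() {
    var cache = initials, // one walk up the scope chain...
        str = cache.str,  // ...then cheap local aliases
        arr = cache.arr,
        job = cache.job;
    // Do something with str, arr and job.
    // Note: arr and job are now shared between calls, not fresh copies.
}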
(Concerns V8)
Well, the array data itself is not created anew, but a unique array object needs to be created every time. The backing array for the values 1,2,3,4 is shared by these objects.
The string is interned, and it is actually fastest to copy-paste the same string everywhere as a literal rather than referencing some common variable. But for maintainability you don't really want to do that.
Don't create any new functions inside a hot function: if your job function references any variables from the bottleneck function, then those variables become context-allocated and slow to access anywhere, even in the outer function, and (as of now) that prevents inlining of the bottleneck function. Inlining is a big optimization you don't want to miss when it is otherwise possible.
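Under that advice, the safer shape looks something like this (a sketch; job is hoisted out so it captures nothing from bottleneck):

// Defined once, at a scope where it closes over nothing from bottleneck().
function job() {
    // do something;
}

function bottleneck() {
    job(); // no closure is allocated inside the hot function
    // Do something;
}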
I have a function that is called every 1 second.
var latestObject; // updated separately; it depends on user input so it may not be different every second
var previousObject;

function Tick(object) {
    if (latestObject !== previousObject) { // Problem is here
        previousObject = latestObject;     // or here
        // do stuff with latestObject;
    }
}
However, when latestObject is updated its properties are changed; the variable is not set to a different object. So previousObject and latestObject are always equal and the "do stuff" never happens.
I could do:
function Tick(object) {
    var latestObjectString = JSON.stringify(latestObject);
    if (latestObjectString !== previousObject) { // Problem is here
        previousObject = latestObjectString;     // or here
        // do stuff with latestObject;
    }
}
But then I'm doing JSON.stringify once every second; this seems inefficient, especially as latestObject is quite big, and quite deep.
Wouldn't it be better to set previousObject to be a copy of latestObject, so that when properties on latestObject are changed, previousObject stays the same? Then the copying only happens when the objects differ, which is less often than every second. But wouldn't there be a problem, as copyOfObject == object would never be true?
(the object is mostly properties, but has a few functions that don't ever change).
(No jQuery)
Description of the problem
The problem here is indeed related to the fact that the same object is assigned to two different variables. Even if you change it through one of them, the change is visible through the other.
This example shows you what really happens (jsFiddle: http://jsfiddle.net/tadeck/4hFC2/):
var objA = {'a': 10, 'b': 20};
var objB = objA; // same instance assigned to both names
objB.a = 30;     // instance is modified, its "a" property is changed
// now both objA.a and objB.a show 30, as objA and objB are the same instance
However, having two different objects is not so ideal either, as comparing them is non-trivial (proof here: http://jsfiddle.net/tadeck/GN2m4/).
Solution no. 1. for comparing the objects
To solve this problem:
You need to use two different objects (e.g. by using some solution similar to jQuery's .extend() to construct a new object from the existing one). You currently achieve that part using unnecessary serialization.
You need to compare them in a slightly more complex way (a pretty universal solution for that is here: https://stackoverflow.com/a/1144249/548696).
In comparison to this, your solution may look less complex (at least in terms of code). I suggest running some JS performance tests to find out which is more reasonable. JSON.stringify() is not always natively supported, so it may be doing things similarly complex (and resource-consuming) as the alternative solution I mentioned.
Solution no. 2. for solving the overall issue of detecting the changes
The other option is to rebuild your script and use e.g. flags to mark the object as changed by user input. That would save you processing the whole object every second and may result in large efficiency gains.
The things you need to do in this case are:
In your user-input handlers, set the flag whenever the user changes some part of the object (see the sketch after this list),
Optionally, you could first compare the specific value with the original object (if the user has changed it quickly and then reverted the change, just mark the value as unchanged),
To limit the processing of the changed object, you could even mark which properties were changed (so you process only those properties, nothing else),
To achieve part of this solution, you could even use JavaScript setters and getters, as described by John Resig.
But, as I mentioned, it may require rebuilding your script (which we haven't seen, so we cannot say whether that is necessary or whether it could be applied rather easily).
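A minimal sketch of the flag idea (dirty and markChanged are my names, not from the question):

var dirty = false;

// Call this from every user-input handler that touches latestObject.
function markChanged() {
    dirty = true;
}

function Tick() {
    if (dirty) {
        dirty = false;
        // do stuff with latestObject -- no stringify or comparison needed
    }
}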
I'm using John Resig's recipe for JavaScript 'classes' and inheritance. I've stripped my code back to something like this for this question:
MyClass = Class.extend({
    // create an <h3>Hello world!</h3> in the HTML document
    init: function (divId) {
        this._divId = divId;
        this._textDiv = document.createElement("h3");
        this._textDiv.innerHTML = "Hello world!";
        document.getElementById(divId).appendChild(this._textDiv);
    },

    // remove the <h3> and delete this object
    remove: function () {
        var container = document.getElementById(this._divId);
        container.parentNode.removeChild(container);
        // can I put some code here to release this object?
    }
});
All works well:
var widget = new MyClass("theDivId");
...
widget.remove();
I'm going to have hundreds of these things on a page (obviously with some sensible functionality) and I'd like a simple way to release the memory for each object. I understand I can use widget = null; and trust the GC to release the object when required (?), but can I do something explicit in the remove() method? I know that placing this = null; at the end of remove() doesn't work ;)
There is no way to destroy objects manually; the only way is to free all references to your object and trust removal to the GC.
Actually, in your code you should also clear this._textDiv = null and container = null in the remove method, because such lingering references can be a problem for the GC in some browsers.
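That is, something like this in the question's remove() method (same code, references cleared at the end):

remove: function () {
    var container = document.getElementById(this._divId);
    container.parentNode.removeChild(container);
    this._textDiv = null; // drop our references so the GC can reclaim the nodes
    container = null;
}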
No. You don't have any way of accessing the garbage collector directly. As you say, the best you can do is make sure the object is no longer referenced.
IMO, it's better that way. The garbage collector is much smarter than you (and me), because years of research have gone into writing the thing, and even when you try to make optimisations, you're likely still not doing a better job than it would.
Of course, if you're embedding a JS engine you will be able to control execution and force garbage collection (among much more), although I very much doubt you're in that position. If you're interested, download and compile SpiderMonkey (or V8, or whatever engine tickles your fancy); in the REPL I think it's gc() for both.
That brings me to another point: since the standard doesn't define the internals of garbage collection, even if you determine that invoking the GC at some point in your code is helpful, it's likely that it will not reap the same benefits across all platforms.
this is a keyword to which you cannot assign any value. The only way to remove objects from a scope is to manually assign null to every variable.
This method doesn't always work, however: in some implementations of XMLHttpRequest, one has to reset the onreadystatechange and open members before the XMLHttpRequest object is freed from memory.
Is it possible to create an object container where changes can be tracked
Said object is a complex nested object of data (JSON-compliant).
The wrapper allows you to get the object, and save changes, without specifically stating what the changes are
Does there exist a design pattern for this kind of encapsulation
Deep cloning is not an option since I'm trying to write a wrapper like this to avoid doing just that.
The solution of serialization should only be considered if there are no other solutions.
An example of use would be
var foo = state.get();
// change state
state.update(); // or state.save();
client.tell(state.recentChange());
A jsfiddle snippet might help : http://jsfiddle.net/Raynos/kzKEp/
It seems like implementing an internal hash to keep track of changes is the best option.
[Edit]
To clarify: this is actually done in node.js on the server. The only thing that changes is that the solution can be specific to the V8 implementation.
Stripping away the JavaScript aspect of this problem, there are only three ways to know if something has changed:
Keep a copy or representation to compare with.
Observe the change itself happening in-transit.
Be notified of the change.
Now take these concepts back to JavaScript, and you have the following patterns:
Copy: either a deep clone, full serialization, or a hash.
Observe: force the use of a setter, or tap into the JavaScript engine (not very applicable).
Notify: modify the code that makes the changes so it publishes events (again, not very applicable).
Seeing as you've ruled out a deep clone and the use of setters, I think your only option is some form of serialisation... see a hash implementation here.
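As a rough sketch of that approach (ChangeTracker is my name for it; the serialized form doubles as the "hash", and a real hash function would merely shorten the stored string):

function ChangeTracker(obj) {
    this._obj = obj;
    this._snapshot = JSON.stringify(obj); // baseline fingerprint
}

ChangeTracker.prototype.get = function () {
    return this._obj;
};

// Returns true (and resets the baseline) if anything changed since the last check.
ChangeTracker.prototype.hasChanged = function () {
    var current = JSON.stringify(this._obj);
    if (current !== this._snapshot) {
        this._snapshot = current;
        return true;
    }
    return false;
};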
You'll have to wrap all your nested objects with a class that reports to you when something changes. The thing is, if you put an observer only on the first-level object, you'll only receive notifications for the properties contained directly in that object.
For example, imagine you have this object:
var obj = new WrappedObject({
    property1: {
        property1a: "foo",
        property1b: 20
    }
});
If you don't wrap the object contained in property1, you'll only receive a "get" event for property1, and just that, because when someone runs obj.property1.property1a = "bar", the only interaction with obj will be when it asks for the reference of the object contained in property1, and the modification will happen on an unobserved object.
The best approach I can imagine is iterating over all the properties when you wrap the first object, and recursively constructing a wrapper for every property where typeof property == "object".
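Something in that direction, perhaps (a sketch using Object.defineProperty getters/setters; wrapDeep and the onChange callback are illustrative names, not an existing API):

function wrapDeep(target, onChange) {
    var wrapped = {};
    Object.keys(target).forEach(function (key) {
        // Recurse so that nested objects report their changes too.
        var value = (typeof target[key] === "object" && target[key] !== null)
            ? wrapDeep(target[key], onChange)
            : target[key];
        Object.defineProperty(wrapped, key, {
            enumerable: true,
            get: function () { return value; },
            set: function (newValue) {
                value = newValue;
                onChange(key, newValue); // report every assignment
            }
        });
    });
    return wrapped;
}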
I hope my understanding of your question was right. Sorry if not! It's my first answer here :$.
There's something called reactive programming that kind of resembles what you're asking about, but it's more involved and would probably be overkill.
It seems like you would like to keep a history of values, correct? This shouldn't be too hard as long as you restrict changes to a setter function. Of course, this is more difficult in JavaScript than it is in some other languages. Real private fields demand some clever use of closures.
Assuming you can do all of that, just write something like this into the setter.
function setVal(x) {
    history.push(value); // record the old value before overwriting it
    value = x;           // assumes value and history live in an enclosing closure
}
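Fleshed out with the closure the answer alludes to (a sketch; makeTracked is my name for it):

function makeTracked(initial) {
    var value = initial;
    var history = [];
    return {
        get: function () { return value; },
        set: function (x) {
            history.push(value); // keep every previous value
            value = x;
        },
        history: function () { return history.slice(); } // a copy, so callers can't tamper
    };
}

var tracked = makeTracked(1);
tracked.set(2);
tracked.set(3);
// tracked.history() -> [1, 2]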
You can use the solution that processing.js uses.
Write the script that accesses the wrapped object normally...
var foo = state.get();
foo.bar = "baz";
state.update();
client.tell(state.recentChange());
...but in the browser (or on the server if loading speed is important) before it runs, parse the code and convert it to this,
var foo = state.get();
state.set(foo, "bar", "baz");
state.update();
client.tell(state.recentChange());
This could also be used to do other useful things, like operator overloading:
// Before conversion
var a = new Vector(), b = new Vector();
return a + b * 3;

// After conversion
var a = new Vector(), b = new Vector();
return Vector.add(a, Vector.multiply(b, 3));
It would appear that node-proxy implements a way of doing this by wrapping a proxy around the entire object. I'll look in more detail at how it works.
https://github.com/samshull/node-proxy
In terms of memory consumption, are these equivalent or do we get a new function instance for every object in the latter?
var f = function () { alert(this.animal); };
var items = [];
for (var i = 0; i < 10; ++i) {
    var item = {"animal": "monkey"};
    item.alertAnimal = f;
    items.push(item);
}
and
var items = [];
for (var i = 0; i < 10; ++i) {
    var item = {"animal": "monkey"};
    item.alertAnimal = function () { alert(this.animal); };
    items.push(item);
}
EDIT
I'm thinking that in order for closure to work correctly, the second instance would indeed create a new function each pass. Is this correct?
You should prefer the first method, since the second one creates a new function every time the interpreter passes that line.
Regarding your edit: we are in the same scope the whole time, since JavaScript has function scope rather than block scope, so this might be optimizable, but I have not encountered an implementation that doesn't create the function every time. I would recommend not relying on this (possibly feasible) optimization, since implementations that lack it could easily exceed memory limits if you use this technique extensively (which is bad, since you do not know which implementation will run your code, right?).
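If the items warrant a constructor anyway, the prototype gives you the single shared function for free (a sketch, not from the original question):

function Item(animal) {
    this.animal = animal;
}

// One function object, shared by every instance via the prototype.
Item.prototype.alertAnimal = function () {
    alert(this.animal);
};

var items = [];
for (var i = 0; i < 10; ++i) {
    items.push(new Item("monkey"));
}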
I am not an expert, but it seems to me that different JavaScript engines could handle this in different ways.
For example, V8 has something called hidden classes, which could affect memory consumption when accessing the same property. Maybe somebody can confirm or deny this.