In Angular, I have an object that will be exposed across my application via a service.
Some of the fields on that object are dynamic, and will be updated as normal by bindings in the controllers that use the service. But some of the fields are computed properties, that depend on the other fields, and need to be dynamically updated.
Here's a simple example (which is working on jsbin here). My service model exposes fields a, b and c, where c is calculated from a and b in calcC(). Note, in my real application the calculations are a lot more complex, but the essence is here.
The only way I can think to get this to work, is to bind my service model to the $rootScope, and then use $rootScope.$watch to watch for any of the controllers changing a or b and when they do, recalculating c. But that seems ugly. Is there a better way of doing this?
A second concern is performance. In my full application a and b are big lists of objects, which get aggregated down to c. This means that the $rootScope.$watch functions will be doing a lot of deep array checking, which sounds like it will hurt performance.
I have this all working with an evented approach in BackBone, which cuts down the recalculation as much as possible, but angular doesn't seem to play well with an evented approach. Any thoughts on that would be great too.
Here's the example application.
var myModule = angular.module('myModule', []);

//A service providing a model available to multiple controllers
myModule.factory('aModel', function($rootScope) {
  var myModel = {
    a: 10,
    b: 10,
    c: null
  };

  //compute c from a and b
  var calcC = function() {
    myModel.c = parseInt(myModel.a, 10) * parseInt(myModel.b, 10);
  };

  $rootScope.myModel = myModel;
  $rootScope.$watch('myModel.a', calcC);
  $rootScope.$watch('myModel.b', calcC);

  return myModel;
});

myModule.controller('oneCtrl', function($scope, aModel) {
  $scope.aModel = aModel;
});

myModule.controller('twoCtrl', function($scope, aModel) {
  $scope.anotherModel = aModel;
});
Although from a high level, I agree with the answer by bmleite ($rootScope exists to be used, and using $watch appears to work for your use case), I want to propose an alternative approach.
Use $rootScope.$broadcast to push changes to a $rootScope.$on listener, which would then recalculate your c value.
This could either be done manually - i.e. when you would be actively changing a or b values, or possibly even on a short timeout to throttle the frequency of the updates. A step further from that would be to create a 'dirty' flag on your service, so that c is only calculated when required.
Obviously such an approach means your controllers, directives, etc. become much more involved in triggering the recalculation - but if you don't want to bind an update to every possible change of a or b, the issue becomes a matter of 'where to draw the line'.
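A minimal sketch of that event-based variant, assuming the event name ('modelChanged') and the dirty flag are conventions you would define yourself rather than anything in the original code:

myModule.factory('aModel', function ($rootScope) {
  var myModel = { a: 10, b: 10, c: null, dirty: true };

  // Recompute c only when a change has been announced and the model is flagged dirty.
  $rootScope.$on('modelChanged', function () {
    if (myModel.dirty) {
      myModel.c = parseInt(myModel.a, 10) * parseInt(myModel.b, 10);
      myModel.dirty = false;
    }
  });

  return myModel;
});

// In a controller, after changing a or b:
//   aModel.dirty = true;
//   $rootScope.$broadcast('modelChanged');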
I must admit, the first time I read your question and saw your example I thought to myself "this is just wrong", however, after looking into it again I realized it wasn't so bad as I thought it would be.
Let's face the facts: the $rootScope is there to be used. If you want to share anything application-wide, that's the perfect place to put it. Of course you will need to be careful; it's something that's shared between all the scopes, so you don't want to change it inadvertently. But let's face it, that's not the real problem: you already have to be careful when using nested controllers (because child scopes inherit parent scope properties) and non-isolated scope directives. The 'problem' is already there and we shouldn't use it as an excuse not to follow this approach.
Using $watch also seems to be a good idea. It's something the framework already provides for free and it does exactly what you need. So, why reinvent the wheel? The idea is basically the same as a 'change' event approach.
On a performance level, your approach can in fact be 'heavy', but it will always depend on how frequently you update the a and b properties. For example, if you set a or b as the ng-model of an input box (like in your jsbin example), c will be re-calculated every time the user types something... that's clearly over-processing. If you take a softer approach and update a and/or b only when necessary, then you shouldn't have performance problems. It would be the same as re-calculating c using 'change' events or a setter & getter approach. However, if you really need to re-calculate c in real time (i.e. while the user is typing), the performance problem will always be there, and it is not the use of $rootScope or $watch that will make it better or worse.
Summing up: in my opinion, your approach is not bad (at all!), just be careful with the $rootScope properties and avoid 'real-time' processing.
I realize this is a year and a half later, but since I've recently had the same decision to make, I thought I'd offer an alternative answer that "worked for me" without polluting $rootScope with any new values.
It does, however, still rely on $rootScope. Rather than broadcasting messages, it simply calls $rootScope.$digest.
The basic approach is to provide a single complex model object as a field on your Angular service. You can provide more than one as you see fit; just follow the same basic approach, and make sure each field holds a complex object whose reference doesn't change, i.e. don't re-assign the field with a new complex object. Instead, only modify the fields of this model object.
var myModule = angular.module('myModule', []);

//A service providing a model available to multiple controllers
myModule.service('aModel', function($rootScope, $timeout) {
  var myModel = {
    a: 10,
    b: 10,
    c: null
  };

  //compute c from a and b
  var calcC = function() {
    myModel.c = parseInt(myModel.a, 10) * parseInt(myModel.b, 10);
  };
  calcC();

  this.myModel = myModel;

  // Simulate an asynchronous process that frequently changes the value of myModel. Note that
  // not appending false to the end of the $timeout would simply call $digest on $rootScope anyway,
  // but we explicitly avoid that for the example, since most asynchronous processes wouldn't
  // be operating in the context of a $digest or $apply call.
  var delay = 2000; // 2 second delay
  var func = function() {
    myModel.a = myModel.a + 10;
    myModel.b = myModel.b + 5;
    calcC();
    $rootScope.$digest();
    $timeout(func, delay, false);
  };
  $timeout(func, delay, false);
});
Controllers that wish to depend on the service's model are then free to inject the model into their scope. For example:
$scope.theServiceModel = aModel.myModel;
And bind directly to the fields:
<div>A: {{theServiceModel.a}}</div>
<div>B: {{theServiceModel.b}}</div>
<div>C: {{theServiceModel.c}}</div>
And everything will automatically update whenever the values update within the service.
Note that this will only work if you inject types that inherit from Object (e.g. array, custom objects) directly into the scope. If you inject primitive values like string or number directly into scope (e.g. $scope.a = aModel.myModel.a) you will get a copy put into scope and will thus never receive a new value on update. Typically, best practice is to just inject the whole model object into the scope, as I did in my example.
In general, this is probably not a good idea. It's also (in general) bad practice to expose the model implementation to all of its callers, if for no other reason than refactoring becomes more difficult and onerous. We can easily solve both:
myModule.factory('aModel', function () {
  var myModel = { a: 10, b: 10 };
  return {
    get_a: function () { return myModel.a; },
    get_b: function () { return myModel.b; },
    get_c: function () { return myModel.a + myModel.b; }
  };
});
That's the best practice approach. It scales well, only gets called when it's needed, and doesn't pollute $rootScope.
PS: You could also update c when either a or b is set to avoid the recalc in every call to get_c; which is best depends on your implementation details.
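A rough sketch of that variant, with hypothetical set_a/set_b names (not part of the original answer), so that c is kept up to date on every write and get_c stays cheap:

myModule.factory('aModel', function () {
  var myModel = { a: 10, b: 10, c: 20 };

  // Recalculate c once per write instead of once per read.
  function recalc() { myModel.c = myModel.a + myModel.b; }

  return {
    set_a: function (value) { myModel.a = value; recalc(); },
    set_b: function (value) { myModel.b = value; recalc(); },
    get_a: function () { return myModel.a; },
    get_b: function () { return myModel.b; },
    get_c: function () { return myModel.c; }
  };
});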
From what I can see of your structure, having a and b as getters may not be a good idea but c should be a function...
So I could suggest
myModule.factory('aModel', function () {
  return {
    a: 10,
    b: 10,
    c: function () { return this.a + this.b; }
  };
});
With this approach you cannot, of course, two-way bind c to an input variable. But two-way binding c does not make sense anyway: if you set the value of c, how would you split the value between a and b?
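For completeness, consuming that factory would look roughly like this (the controller name is just a placeholder); the interpolated expression is re-evaluated on each digest:

myModule.controller('oneCtrl', function ($scope, aModel) {
  $scope.aModel = aModel;
});

// In the template:
//   <div>C: {{aModel.c()}}</div>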
Related
For quite some time I've been wondering about this question: when working with AngularJS, should I use the model object's properties directly in the view, or can I use a function to get that property value?
I've been doing some minor home projects in Angular, and (especially when working with read-only directives or controllers) I tend to create scope functions to access and display scope objects and their property values in the views, but performance-wise, is this a good way to go?
This way seems easier for maintaining the view code, since, if for some reason the object is changed (due to a server implementation or any other particular reason), I only have to change the directive's JS code, instead of the HTML.
Here's an example:
//this goes inside directive's link function
scope.getPropertyX = function() {
  return scope.object.subobject.propX;
};
in the view I could simply do
<span>{{ getPropertyX() }}</span>
instead of
<span>{{ object.subobject.propX }}</span>
which is harder to maintain amidst the HTML clutter that is sometimes involved.
Another case is using scope functions to test property values for evaluation in an ng-if, instead of using that test expression directly:
scope.testCondition = function() {
  return scope.obj.subobj.propX === 1 && scope.obj.subobj.propY === 2 && ...;
};
So, are there any pros/cons to this approach? Could you provide me with some insight on this issue? It's been bothering me lately, wondering how a heavy app might behave when, for example, a directive gets really complex and, on top of that, is used inside an ng-repeat that could generate hundreds or thousands of instances.
Thank you
I don't think creating functions for all of your properties is a good idea. Not only will more function calls be made every digest cycle to see if the function's return value has changed, but it also seems less readable and maintainable to me. It could add a lot of unnecessary code to your controllers and is sort of turning your controller into a view model. Your second case seems perfectly fine; complex operations are exactly what you would want your controller to handle.
As for performance it does make a difference according to a test I wrote (fiddle, tried to use jsperf but couldn't get different setup per test). The results are almost twice as fast, i.e. 223,000 digests/sec using properties versus 120,000 digests/sec using getter functions. Watches are created for bindings that use angular's $parse.
One thing to think about is inheritance. If you uncomment the ng-repeat list in the fiddle and inspect the scope of one of the elements you can see what I'm talking about. Each child scope that is created inherits the parent scope's properties. For objects it inherits a reference, so if you have 50 properties on your object it only copies the object reference value to the child scope. If you have 50 manually created functions it will copy each of those functions to every child scope that inherits from it. The timings are slower for both methods: 126,000 digests/sec for properties and 80,000 digests/sec with getter functions.
I really don't see how it would make your code easier to maintain; to me it seems more difficult. If you don't want to have to touch your HTML when the server object changes, it would probably be better to handle that in a JavaScript object instead of putting getter functions directly on your scope, i.e.:
$scope.obj = new MyObject(obj); // MyObject class
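A minimal sketch of that idea, where MyObject and the server-side field names are hypothetical; the point is that only this constructor has to change if the server shape changes, not the templates:

// Hypothetical mapping layer between the server's shape and the names the templates use.
function MyObject(serverObj) {
  this.propX = serverObj.subobject.propX;
  this.propY = serverObj.subobject.propY;
}

// In the controller:
//   $scope.obj = new MyObject(obj);
// In the template:
//   <span>{{ obj.propX }}</span>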
In addition, Angular 2.0 will be using Object.observe() which should increase performance even more, but would not improve the performance using getter functions on your scope.
It looks like this code is all executed for each function call. It calls contextGetter(), fnGetter(), and ensureSafeFunction(), as well as ensureSafeObject() for each argument, for the scope itself and for the return value.
return function $parseFunctionCall(scope, locals) {
  var context = contextGetter ? contextGetter(scope, locals) : scope;
  var fn = fnGetter(scope, locals, context) || noop;
  if (args) {
    var i = argsFn.length;
    while (i--) {
      args[i] = ensureSafeObject(argsFn[i](scope, locals), expressionText);
    }
  }
  ensureSafeObject(context, expressionText);
  ensureSafeFunction(fn, expressionText);
  // IE stupidity! (IE doesn't have apply for some native functions)
  var v = fn.apply
    ? fn.apply(context, args)
    : fn(args[0], args[1], args[2], args[3], args[4]);
  return ensureSafeObject(v, expressionText);
};
},
By contrast, simple properties are compiled down to something like this:
(function(s, l /**/) {
  if (s == null) return undefined;
  s = ((l && l.hasOwnProperty("obj")) ? l : s).obj;
  if (s == null) return undefined;
  s = s.subobj;
  if (s == null) return undefined;
  s = s.A;
  return s;
})
Performance-wise - it is likely to matter little
Jason Goemaat did a great job providing a benchmarking fiddle, where you can change the last line from:
setTimeout(function() { benchmark(1); }, 500);
to
setTimeout(function() { benchmark(0); }, 500);
to see the difference.
But he also frames the answer as properties being twice as fast as function calls. In fact, on my mid-2014 MacBook Pro, properties are three times faster.
But equally, the difference between calling a function or accessing the property directly is 0.00001 seconds - or 10 microseconds.
This means that if you have 100 getters, they will be slower by 1ms compared to having 100 properties accessed.
Just to put things in context, the time it takes sensory input (a photon hitting the retina) to reach our consciousness is 300ms (yes, conscious reality is 300ms delayed). So you'd need 30,000 getters on a single view to get the same delay.
Code quality wise - it could matter a great deal
In the days of assembler, software was looked at like this:
A collection of executable lines of code.
But nowadays, specifically for software that has even the slightest level of complexity, the view is:
A social interaction between communicating objects.
The latter is concerned much more with how behaviour is established via communicating objects than with the actual low-level implementation. In turn, the communication is granted by an interface, which is typically achieved using the query or command principle. What matters is the interface of (or the contract between) collaborating objects, not the low-level implementation.
By inspecting properties directly, you hit the internals of an object, bypassing its interface, and thus couple the caller to the callee.
Obviously, with getters, you may rightly ask "what's the point?". Well, consider these changes to the implementation that won't affect the interface:
Checking for edge cases, such as whether the property is defined at all.
Changing getName() to return first + last names rather than just the name property.
Deciding to store the property in a mutable construct.
So even a seemingly simple implementation may change in a way that would require only a single change with getters, but many changes without them.
I vote getters
So I argue that unless you have a profiled case for optimisation, you should use getters.
Whatever you put in the {{}} is going to be evaluated A LOT. It has to be evaluated on each digest cycle in order to know whether the value changed or not. Thus, one very important rule of Angular is to make sure you don't have expensive operations in any $watches, including those registered through {{}}.
Now the difference between referencing the property directly or having a function do nothing else but return it, to me, seems negligible. (Please correct me if I'm wrong)
So, as long as your functions aren't performing expensive operations, I think it's really a matter of personal preference.
Is there a way to extract a variable that is closed over by a function?
In the (JavaScript-like) R language values that are
closed-over can be accessed by looking up the function's scope
directly. For example, the constant combinator takes a value and returns
a function that always yields said value.
K = function (self) {
function () {
self
}
}
TenFunction = K(10)
TenFunction()
10
In R the value bound to "self" can be looked up directly.
environment(TenFunction)[[ "self" ]]
10
In R this is a perfectly normal and acceptable thing to want to do. Is
there a similar mechanism in JavaScript?
My motivation is that I'm working with functions that I
create with an enclosed value called "self". I'd like to be able
to extract that data back out of the function. A mock example loosely
related to my problem is.
var Velocity = function (self) {
  return function (time) {
    return self.vx0 + self.ax * time
  }
}
var f = Velocity({vx0: 10, ax: 100})
I'd really like to extract the values of self.vx0 and self.ax as they are
difficult to recover by other means. Is there a function "someFun" that does this?
someFun(f).self
{vx0: 10, ax: 100}
Any help or insights would be appreciated. If any clarification is needed leave a comment below and I'll edit my question.
Not as you have described, no. Function objects support very few reflective methods, most of which are deprecated or obsolete. There is a good reason for this: while closures are a common way to implement lexically scoped functions, they are not the only way, and in some cases they may not be the fastest. Javascript probably avoids exposing such details to allow implementations more flexibility to improve performance.
That said, you can get around this in various ways. One approach is to add an argument to the inner function telling it that it should return the value of a certain variable rather than doing what it usually does. Alternatively, you can store the variable alongside the function.
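For example, here is a sketch of the second idea applied to the Velocity function from the question - simply storing self as a property on the returned function so it can be read back later:

var Velocity = function (self) {
  var f = function (time) {
    return self.vx0 + self.ax * time;
  };
  f.self = self; // expose the enclosed value alongside the function
  return f;
};

var f = Velocity({vx0: 10, ax: 100});
f.self; // {vx0: 10, ax: 100}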
For an example of an alternative implementation technique, look up "lambda lifting". Some implementations may use different approaches in different situations.
Edit
An even better reason not to allow that sort of reflection is that it breaks the function abstraction rather horribly, and in doing so exposes hairy details of how the function was produced. If you want that sort of access, you really want an object, not a function.
Updated with three methods that work, and the original one that does not
I have made an angular js directive, and I am trying to access the ctrl.$modelValue. It does not work in the main flow.
I have three potential solutions, all of which have drawbacks.
Method 1 does not work as I would hope, and I can't find any other property available in the directive directly accessible in this way.
Method 2 works because it waits until the current flow is complete, and then executes in the next moment. This happens to be after the angular js lifecycle is complete, and the controller seems to be hooked up to the model at this point. This does not seem ideal to me, as it is waiting for all execution to finish. If it is possible, I would prefer to run my code as soon as the controller is linked to the model, and not after all the code in current flow completes.
Method 3 works well, accessing the model from the $scope, and determining what the model is from the string representation accessed on the attrs object. The drawback is that this method uses eval in order to get hold of the addressed value - and as we all know, eval is evil.
Method 4 works, but it seems like an overly complex way to access a simple property. I can't believe that there is not a simpler way than the string manipulation, and while loop. I am not confident that the function for accessing properties is 100% robust. At least I might like to change it to use a for loop.
Which method should I use, or is there a 5th method that has no drawbacks?
DEMO: http://jsfiddle.net/billymoon/VE9dX/9/
HTML:
<div ng-app="myApp">
  <div ng-controller="inControl">
    I like to drink {{drink.type}}<br>
    <input my-dir ng-model="drink.type">
  </div>
</div>
JavaScript:
var app = angular.module('myApp', []);

app.controller('inControl', function($scope) {
  $scope.drink = {type: 'water'};
});

app.directive('myDir', function(){
  return {
    restrict: 'A',
    require: 'ngModel',
    link: function($scope, element, attrs, ctrl) {
      // Method 1
      // logs NaN
      console.log({"method-1": ctrl.$modelValue});

      // Method 2
      // on next tick it is ok
      setTimeout(function(){
        console.log({"method-2": ctrl.$modelValue});
      }, 0);

      // Method 3
      // using eval to access model on scope is ok
      // eval is necessary in case model is chained
      // like `drink.type`
      console.log({"method-3": eval("$scope." + attrs.ngModel)});

      // Method 4
      // using complex loop to access model on scope
      // which does same thing as eval method, without eval
      var getProperty = function(obj, prop) {
        var parts = prop.split('.'),
            last = parts.pop(),
            l = parts.length,
            i = 1,
            current = parts[0];
        while ((obj = obj[current]) && i < l) {
          current = parts[i];
          i++;
        }
        if (obj) {
          return obj[last];
        }
      };
      console.log({"method-4": getProperty($scope, attrs.ngModel)});
    }
  };
});
There are quite a few alternatives, some better than others depending on your requirements - e.g. whether you need to be notified when the view value changes, or the model value, or whether you are happy with just the initial value.
Just to know the initial value you can use either of the following:
console.log('$eval ' + $scope.$eval(attrs.ngModel));
console.log('$parse ' + $parse(attrs.ngModel)($scope));
Both $eval and $parse give the same end result; however, $eval hangs off $scope, whereas $parse is an Angular service which converts an expression into a function. The returned $parse function can then be invoked and passed a context (usually scope) in order to retrieve the expression's value. In addition, if the expression is assignable, the returned $parse function will have an assign property. The assign property is a function that can be used to change the expression's value on the given context. See the $parse docs.
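A short sketch of the $parse read, plus a write through assign, inside a directive like yours (note that $parse has to be injected into the directive factory; the 'coffee' value is just an illustration):

app.directive('myDir', function ($parse) {
  return {
    restrict: 'A',
    require: 'ngModel',
    link: function ($scope, element, attrs, ctrl) {
      var getter = $parse(attrs.ngModel);   // e.g. parses 'drink.type'
      console.log(getter($scope));          // read the current model value

      if (getter.assign) {
        getter.assign($scope, 'coffee');    // write back through the same expression
      }
    }
  };
});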
If you need to be informed when the model value changes, you could use $watch, however there are better ways when dealing with ngModel. If you need to track changes to the model value when the change occurs on the model itself, i.e. inside your code, you can use modelCtrl.$formatters:
ctrl.$formatters.push(function(value){
  console.log('Formatter ' + value);
});
Note that $formatters are only called when the model value changes from within your code, and NOT when the model changes from user input. You can also use $formatters to alter the value displayed in the view, e.g. convert the display text to uppercase without changing the underlying model value.
When you need to be informed of model value changes that occurs from user input, you can use either modelCtrl.$parsers or modelCtrl.$viewChangeListeners. They are called whenever user input changes the underlying model value:
ctrl.$viewChangeListeners.push(function(){
  console.log('$viewChangeListener ' + ctrl.$modelValue, arguments);
});

ctrl.$parsers.push(function(value){
  console.log('$parsers ' + value, arguments);
  return value;
});
$parsers allow you to change the value coming from user input before it reaches the model if you need to, whereas $viewChangeListeners just let you know that the input value changed.
To sum up, if you only need the initial value, use either $eval or $parse, if you need to know when the value changes you need a combination of $formatters and $parsers/$viewChangeListeners.
The following fiddle shows all these and more options, based on your original fiddle:
http://jsfiddle.net/VE9dX/6/
Instead of using the native eval use the $eval function on the $scope object:
console.log($scope.$eval(attrs.ngModel))
See this fiddle: http://jsfiddle.net/VE9dX/7/
I have slightly modified the example from the following URL (http://docs.angularjs.org/cookbook/helloworld) as follows, placing the name value within an attrs object property:
<!doctype html>
<html ng-app>
  <head>
    <script src="http://code.angularjs.org/1.2.9/angular.min.js"></script>
    <script>
      function HelloCntl($scope) {
        $scope.attrs = {
          name: 'World'
        };
      }
    </script>
  </head>
  <body>
    <div ng-controller="HelloCntl">
      Your name: <input type="text" ng-model="attrs.name"/>
      <hr/>
      Hello {{attrs.name || "World"}}!
    </div>
  </body>
</html>
One benefit I can see is that the HTML source code can be searched for /attrs\.\w+/ (for example) if there is ever a need to easily find all such attributes within the view rather than the controller (e.g. a search for name could collide with form element names). Also, within the controller, I imagine that partitioning off the attributes needed by the front end might lend itself to better organization.
Is anybody else using this level of abstraction? Are there any further benefits to its usage? And most importantly, might there be any specific drawbacks to it?
It's recommended that you always use a dot in your ngModels in order to avoid potential issues with prototypal inheritance that are discussed in Angular's Guide to Understanding Scopes:
This issue with primitives can be easily avoided by following the
"best practice" of always have a '.' in your ng-models – watch 3
minutes worth. Misko demonstrates the primitive binding issue with
ng-switch.
Prototypal inheritance and primitives
In JavaScript's approach to inheritance, reading and writing a primitive act differently. When reading, if the primitive doesn't exist on the current scope, it tries to find it on a parent scope. However, if you write to a primitive that doesn't exist on the current scope, it immediately creates one on that scope.
You can see the problem this can cause in this fiddle, which has 3 scopes - one parent and two children that are siblings. First type something in the "parent" and you'll see that both children are updated. Then type something different in one of the children. Now only that child is updated, because the write caused the child to create its own copy of the variable. If you now update the parent again, only the other child will track it. And if you type something into the sibling child, all three scopes will now have their own copies.
This can obviously cause lots of issues.
Prototypal inheritance and objects
Try the same experiment with this fiddle in which each ngModel uses a property of an object instead of a primitive. Now both reading and writing act consistently.
When you write to a property of an object it acts just like reading does (and the opposite of how writing to a primitive does). If the object you're writing to does not exist on the current scope, it looks up its parent chain trying to find that object. If it finds one with that name, it writes to the property on that found object.
So, while in the primitive example we started with 1 variable and then, after writing to the children, ended up with 3 copies of the variable - when we use an object we only ever have the one property on the one object.
Since we almost always (perhaps always) want this consistent behavior, the recommendation is to only use object properties, not primitives, in an ngModel - or, said more commonly, "always use a dot in your ngModel".
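A plain JavaScript illustration of that read/write asymmetry, with Object.create standing in for how a child scope prototypally inherits from its parent:

var parent = { name: 'primitive', user: { name: 'object' } };
var child = Object.create(parent);   // child inherits from parent

child.name;                   // 'primitive' - the read walks up the prototype chain
child.name = 'changed';       // the write creates a NEW property on child
parent.name;                  // still 'primitive' - parent and child are now out of sync

child.user.name = 'changed';  // reading child.user walks up the chain, then writes on the shared object
parent.user.name;             // 'changed' - both still point at the same object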
I do this as well. I also put all action functions (button clicks etc.) into a $scope.actions object. And since I use socket.io, I put those callbacks into a $scope.events object. It usually keeps my controllers nicely organized, and I can easily find the function I need when I have to do any editing.
app.controller('Ctrl', ['$scope', function ($scope) {
  $scope.data = {
    //contains data like arrays, strings, numbers etc
  };
  $scope.actions = {
    //contains callback functions for actions like button clicks, select boxes changed etc
  };
  $scope.events = {
    //contains callback functions for socket.io events
  };
}]);
Then in my templates I can do something like:
<input ng-click="actions.doSomething()">
I also do a partial version of this for services, using a private and a public data object:
app.factory('$sysMsgService', ['$rootScope', function($rootScope){
  //data that the outside scope does not need to see.
  var privateData = {};

  var service = {
    data: {
      //contains the public data the service needs to keep track of
    },
    //service functions defined after this
  };

  return service;
}]);
Is it possible to create an object container where changes can be tracked?
Said object is a complex nested object of data (compliant with JSON).
The wrapper allows you to get the object, and save changes, without specifically stating what the changes are.
Does there exist a design pattern for this kind of encapsulation?
Deep cloning is not an option since I'm trying to write a wrapper like this to avoid doing just that.
The solution of serialization should only be considered if there are no other solutions.
An example of use would be
var foo = state.get();
// change state
state.update(); // or state.save();
client.tell(state.recentChange());
A jsfiddle snippet might help : http://jsfiddle.net/Raynos/kzKEp/
It seems like implementing an internal hash to keep track of changes is the best option.
[Edit]
To clarify, this is actually done in node.js on the server. The only thing that changes is that the solution can be specific to the V8 implementation.
Stripping away the javascript aspect of this problem, there are only three ways to know if something has changed:
Keep a copy or representation to compare with.
Observe the change itself happening in-transit.
Be notified of the change.
Now take these concepts back to javascript, and you have the following patterns:
Copy: either a deep clone, full serialization, or a hash.
Observe: force the use of a setter, or tap into the javascript engine (not very applicable)
Notify: modifying the code that makes the changes to publish events (again, not very applicable).
Seeing as you've ruled out a deep clone and the use of setters, I think your only option is some form of serialisation... see a hash implementation here.
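A minimal sketch of that snapshot/serialisation idea, shaped after the state.get()/state.update() usage from the question (JSON.stringify stands in for a real hash, which would only make the comparison cheaper):

function StateWrapper(obj) {
  var snapshot = JSON.stringify(obj);

  this.get = function () { return obj; };

  this.hasChanged = function () {
    return JSON.stringify(obj) !== snapshot;
  };

  // Accept the current state as the new baseline.
  this.update = function () {
    snapshot = JSON.stringify(obj);
  };
}

var state = new StateWrapper({ a: { b: 1 } });
state.get().a.b = 2;
state.hasChanged(); // true
state.update();
state.hasChanged(); // false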
You'll have to wrap all your nested objects in a class that notifies you when something changes. The thing is, if you put an observer only on the first-level object, you'll only receive notifications for the properties contained in that object.
For example, imagine you have this object:
var obj = new WrappedObject({
  property1: {
    property1a: "foo",
    property1b: 20
  }
});
If you don't wrap the object contained in property1, you'll only receive a "get" event for property1, and just that, because when someone runs obj.property1.property1a = "bar", the only interaction you'll have with obj will be when it asks for the reference of the object contained in property1, and the modification will happen on an unobserved object.
The best approach I can imagine is iterating over all the properties when you wrap the first object, and recursively constructing a wrapper object for every property where typeof property == "object".
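A rough sketch of that recursive wrapping using Object.defineProperty, where the wrap name and the notify callback are placeholders rather than anything from the question:

function wrap(obj, notify, path) {
  path = path || '';
  var wrapped = {};
  Object.keys(obj).forEach(function (key) {
    var value = obj[key];
    if (typeof value === 'object' && value !== null) {
      value = wrap(value, notify, path + key + '.');  // recurse into nested objects
    }
    Object.defineProperty(wrapped, key, {
      enumerable: true,
      get: function () { return value; },
      set: function (newValue) {
        notify(path + key, newValue);                 // report the change
        value = newValue;
      }
    });
  });
  return wrapped;
}

var obj = wrap({ property1: { property1a: "foo", property1b: 20 } }, console.log);
obj.property1.property1a = "bar"; // logs: property1.property1a bar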
I hope my understanding of your question was right. Sorry if not! It's my first answer here :$.
There's something called reactive programming that kind of resembles what you ask about, but it's more involved and would probably be overkill.
It seems like you would like to keep a history of values, correct? This shouldn't be too hard as long as you restrict changes to a setter function. Of course, this is more difficult in JavaScript than it is in some other languages. Real private fields demand some clever use of closures.
Assuming you can do all of that, just write something like this into the setter.
function setVal(x)
{
  history.push(value);
  value = x;
}
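Fleshed out a little, the closure the answer alludes to might look like this sketch (the names are illustrative, not from the question):

function makeTrackedValue(initial) {
  var value = initial;
  var history = [];              // private via the closure

  return {
    get: function () { return value; },
    set: function (x) {
      history.push(value);       // remember the previous value
      value = x;
    },
    history: function () { return history.slice(); }
  };
}

var tracked = makeTrackedValue(1);
tracked.set(2);
tracked.history(); // [1]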
You can use the solution that processing.js uses.
Write the script that accesses the wrapped object normally...
var foo = state.get();
foo.bar = "baz";
state.update();
client.tell(state.recentChange());
...but in the browser (or on the server if loading speed is important) before it runs, parse the code and convert it to this,
var foo = state.get();
state.set(foo, "bar", "baz");
state.update();
client.tell(state.recentChange());
This could also be used to do other useful things, like operator overloading:
// Before conversion
var a=new Vector(), b=new Vector();
return a + b * 3;
// After conversion
var a=new Vector(), b=new Vector();
return Vector.add(a,Vector.multiply(b,3));
It would appear that node-proxy implements a way of doing this by wrapping a proxy around the entire object. I'll look at how it works in more detail.
https://github.com/samshull/node-proxy