JSON.stringify missing superclass properties when using a subclass with Spine

I'm seeing strange behaviour using JSON.stringify against a subclassed model in Spine, and I'm hoping someone can help!
Here's a simplified excerpt from some code that we've got on one of our projects:
define([
    "jquery",
    "underscore",
    "spine" // assumes Spine is available as an AMD module under this id; adjust to your setup
],
function ($, _, Spine) {
    var SuperClass = Spine.Model.sub();
    SuperClass.configure("SuperClass", "SuperClassProperty");

    var SubClass = SuperClass.sub();
    SubClass.configure("SubClass", "SubClassProperty");

    var instance = new SubClass({ SuperClassProperty: "Super", SubClassProperty: "Sub" });
    console.log(instance);

    var json = JSON.stringify(instance);
    console.log(json);
});
The "console.log(instance)" is printing out exactly what I would expect in this scenario:
result
SubClassProperty: "Sub"
SuperClassProperty: "Super"
cid: "c-0"
__proto__: ctor
However, when I use JSON.stringify against the instance, this is all that comes back:
{"SubClassProperty":"Sub"}
Why doesn't the SuperClassProperty get included in the stringify?
I've ruled out a problem with the JSON.stringify method by forcing JSON2 to override Chrome's native JSON object; both implementations yield the same result. It looks like stringify will delegate to the "toJSON" function on the object if there is one - and in this case there is (as part of Spine).
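(For illustration, that delegation is standard JSON.stringify behaviour for any object with a toJSON method:)
var obj = { a: 1, toJSON: function () { return { b: 2 }; } };
console.log(JSON.stringify(obj)); // {"b":2}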
So it looks like either (a) this is a bug in Spine, or (b) I'm doing something incorrectly, which is the more likely option.
I know I can work around this problem by re-configuring the superclass properties on the subclass as well:
SubClass.configure("SubClass", "SuperClassProperty", "SubClassProperty");
However this seems counter-intuitive to me (what's the point of subclassing?), so I'm hoping that's not the answer.
Update: I've done some debugging through the Spine source code, and from what I can tell the problem is the way that I'm configuring the subclass:
var SubClass = SuperClass.sub();
SubClass.configure("SubClass", "SubClassProperty");
Calling "configure" here appears to wipe out the attributes from SuperClass. The "toJSON" implementation on the Model prototype is as follows:
Model.prototype.toJSON = function() {
    return this.attributes();
};
Since the attributes collection is reset when SubClass is configured, the SuperClass properties don't come through in the JSON string.
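From what I can see in the debugger, the class-level attributes array is replaced rather than extended - something like this (my observation; not verified against every Spine version):
console.log(SuperClass.attributes); // ["SuperClassProperty"]
console.log(SubClass.attributes);   // ["SubClassProperty"] - the superclass attribute is gone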
I'm not sure if I shouldn't be calling "configure" on subclassed objects, but I can't find anywhere in the documentation that says I should be doing something else - this is the only reference I can find for subclassing Models (from: http://spinejs.com/docs/models):
Models can also be easily subclassed:
var User = Contact.sub();
User.configure("User");

As I suspected, the problem was in the way I was using Spine. This comment from the author of Spine implies that calling "configure" on a subclass will wipe out the attributes of the superclass. I have to admit I don't understand why; it seems counter-intuitive to me, but at least I now know that it's not a bug.
In case anyone else runs into this issue, the way I've worked around it is by adding my own extension method to the Spine Model as follows:
(function () {
    var Model = Spine.Model;

    // Configure a subclass while preserving the superclass's attribute list.
    Model.configureSub = function () {
        var baseAttributes = this.attributes.slice();
        this.configure.apply(this, arguments);
        this.attributes = baseAttributes.concat(this.attributes);
        return this;
    };
})();
Now to configure my subclass:
var SubClass = SuperClass.sub();
SubClass.configureSub("SubClass", "SubClassProperty");
And now my JSON correctly reflects the properties from both the super and subclasses.
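For reference, the earlier example now serializes along these lines (property order may differ):
var instance = new SubClass({ SuperClassProperty: "Super", SubClassProperty: "Sub" });
console.log(JSON.stringify(instance));
// {"SuperClassProperty":"Super","SubClassProperty":"Sub"}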

Related

JavaScript: Is the nesting of constructor instances inside a constructed 'wrapper' problematic?

Hopefully this question won't be flagged as too subjective, but I'm newish to OOP and struggling a bit when it comes to sharing data between parts of my code that I think should be separated to some extent.
I'm building a (non-geo) map thing (using leaflet.js, which is superduper) that has a map (duh) and a sidebar that basically contains a UI (toggling markers both individually and en masse, searching said marker toggles, as well as other standard UI behaviour). I'm slightly confused about organisation too (how modular is too modular?), but I can stumble through that myself I guess. I am using a simple JSON file for my settings for the time being.
I started with static methods stored in objects, which is essentially unusable, or rather un-reusable, so I went for nested constructors (kinda) so I could pass the parent scope around for easier access to my settings and states properties:
function MainThing(settings) {
    this.settings = settings;
    this.states = {};
}

function SubthingMaker(parent) {
    this.parent = parent;
}

SubthingMaker.prototype.method = function() {
    var data = this.parent.settings.optionOne;
    console.log(data);
    this.parent.states.isVisible = true;
};

MainThing.prototype.init = function() {
    this.subthing = new SubthingMaker(this);
    // and some other fun stuff
};
And then I could just create an instance of MainThing and run its init(), and it should all work lovely. Like so:
var options = {
    "optionOne": "Hello",
    "optionTwo": "Goodbye"
};

var test = new MainThing(options);
test.init();
test.subthing.method();
Should I really be nesting in this manner or will it cause me problems in some way? If this is indeed okay, should I keep going deeper if needed (maybe the search part of my ui wants its own section, maybe the map controls should be separate from DOM manipulation, I dunno) or should I stay at this depth? Should I just have separate constructors and store them in an object when I create an instance of them? Will that make it difficult to share/reference data stored elsewhere?
As regards my data storage, is this an okay way to handle it, or should I be creating a controller for my data and sending requests and submissions to it when necessary, even if that data is then tucked away in simple JSON format? this.parent really does start to get annoying after a while. I suppose I should really be binding if I want to change my scope, but it just doesn't seem an elegant way to access the overall state data of the application, especially since the UI needs to check the state for almost everything it does.
Hope you can help and I hope I don't come across as a complete idiot, thanks!
P.S. I think the code I posted works, but if it doesn't, it's the general idea I was hoping to capture, not this specific example. I created a much simpler version of my actual code because I don't want to incur the wrath of the SO gods with my first post. (Yes, I did just use a postscript.)
An object may contain as many other objects as are appropriate for doing its job. For example, an object may contain an Array as part of its instance data. Or, it may contain some other custom object. This is normal and common.
You can create/initialize these other objects that are part of your instance data in either your constructor or in some other method such as a .init() method whichever is more appropriate for your usage and design.
For example, you might have a Queue object:
function Queue() {
    this.q = [];
}

Queue.prototype.add = function(item) {
    this.q.push(item);
    return this;
};

Queue.prototype.next = function() {
    return this.q.shift();
};

var q = new Queue();
q.add(1);
q.add(2);
console.log(q.next()); // 1
This creates an Array object as part of its constructor and then uses that Array object in the performance of its function. There is no difference here whether this creates a built-in Array object or it calls new on some custom constructor. It's just another Javascript object that is being used by the host object to perform its function. This is normal and common.
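If you'd rather defer creation to an .init() method, as in your own code, the contained object can just as easily be created there. A quick sketch reusing the Queue above (the Widget name is only for illustration):
function Widget() {
    this.q = null; // created later in init(), not in the constructor
}

Widget.prototype.init = function() {
    this.q = new Queue(); // the contained object is created here instead
    return this;
};

var w = new Widget().init();
w.q.add("hello");
console.log(w.q.next()); // "hello"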
One note is that what you are doing with your MainThing and SubthingMaker violates OOP principles, because they are too tightly coupled and have too much access to each other's internals:
SubthingMaker.prototype.method = function() {
    // it reads something from the parent's settings
    var data = this.parent.settings.optionOne;
    console.log(data);
    // it changes the parent's state directly
    this.parent.states.isVisible = true;
};
A better idea would be to make them less dependent on each other.
It is probably OK for the MainThing to have several "subthings", since your main thing looks like a top-level object which will coordinate smaller things.
But it would be better to isolate these smaller things; ideally they should work even if there is no MainThing, or if you have some different main thing:
function SubthingMaker(options) {
    // no 'parent' here, it just receives its own options
    this.options = options;
}

SubthingMaker.prototype.method = function() {
    // use its own options instead of reading them through the MainThing
    var data = this.options.optionOne;
    console.log(data);
    // return the data from the method instead of
    // directly modifying something in MainThing
    return true;
};

MainThing.prototype.doSomething = function() {
    // MainThing calls the subthing and modifies its own data
    this.states.isVisible = this.subthing.method();
    // and some other fun stuff
};
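To complete the picture, MainThing would then hand the subthing only the options it needs when wiring things up (a hypothetical init, since that part isn't shown refactored above):
MainThing.prototype.init = function() {
    // pass just the relevant options, not a reference to MainThing itself
    this.subthing = new SubthingMaker(this.settings);
};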
Also, to avoid confusion, it is better not to use the parent / child terms in this case. What you have here is aggregation or composition of objects, while parent / child are usually used to describe inheritance.

Javascript 'normal' objects vs module pattern

Currently I'm developing a large-scale JavaScript app (single page), and I've searched around the web for best practices. Most projects use the module pattern so the objects don't pollute the global namespace. At this moment I use normal objects:
function LoginModel(){
    this.model = new MyModel();
    this.getModel = function(){
        return this.model;
    };
}
This is readable and easy to maintain (in my opinion). Is it better to use the module pattern just because of the namespacing, or does it have other advantages I'm not aware of (countering memory leaks, ...)? Furthermore, I've already split up the files to have a nice MVC pattern, and I destroy every object when needed (to counter memory leaks). So the main question is: do I need, in my case, to use the module pattern or not?
Module pattern:
var LoginModel = (function(){
    var model = MyModel;

    function getModel(){
        return model;
    }

    return {
        getModel: getModel
    };
})();
The module pattern is better for overall code organization. It lets you have data, logic and functions that are private to that scope.
In your second example getModel() is the only way to get the model from the outside. The variables declared in the module are hidden unless explicitly exposed. This can be a very handy thing.
And there's not really any drawback, other than being a little more complex. You just get more options for organization and encapsulation.
I'd use a plain object until my model gets complex enough to need more structure and some private scoping. And when you hit that point, it's trivial to redefine it as a revealing module without breaking any of the code that uses it.
If you're only going to be using one instance per page, I don't see the need to involve the new keyword. So personally I would create a revealing module like you did in your last example, and expose an object with the "public" properties.
Though I don't see your point with the getModel() function, since MyModel is obviously accessible outside of the scope.
I would have rewritten it slightly:
var LoginModel = (function(model, window, undefined){
    function init(){ } // or whatever

    function doSomethingWithModel(){
        console.log(model);
    }

    return { init: init };
})(MyModel, window);
If you're uncertain which modules will get a model, you can use loose augmentation and change
})(MyModel, window);
to
})(MyModel || {}, window);
If you need several instances of a module, it would look something like this:
var LoginModel = (function(model, window, undefined){
    function loginModel(name){ // constructor
        this.name = name; // something instance specific
    }

    loginModel.prototype.getName = function(){
        return this.name;
    };

    return loginModel;
})(MyModel, window);
var lm1 = new LoginModel('foo');
var lm2 = new LoginModel('bar');
console.log(lm1.getName(), lm2.getName()); // 'foo', 'bar'
There are several concepts conflated in your question.
With what you call the "normal object", the function becomes a constructor function and requires the new keyword.
The second example uses the Revealing Module Pattern inside of an IIFE. This is the most popular variant of the Module Pattern, but unfortunately deeply flawed. For an explanation of the differences see my answer here, and for its flaws, see here.
Now, your question poses a false dichotomy -- normal objects or module pattern? You don't have to make a choice -- a normal object can use the module pattern simply by putting whatever it wants to keep private inside its closure scope. For example,
function LoginModel(){
    var _notifyListeners = function(){
        // Do your stuff here
    };

    this.model = new MyModel();
    this.getModel = function(){
        _notifyListeners();
        return this.model;
    };
}
This is an example of a "normal object" using the module pattern. What you have to avoid doing is what the Revealing Module Pattern does -- putting everything into closure scope. You should only put the things you want to keep private inside the closure scope.
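Usage stays the same as with any constructor (a quick check, assuming MyModel exists):
var lm = new LoginModel();
lm.getModel();          // works, and calls _notifyListeners internally
// lm._notifyListeners; // undefined - it never leaves the closure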

Require.js: When extending a module, can I use prototype directly or should I use extend?

If I am using Require.js for managing modules in a project with Backbone.js (which also has underscore), when extending a module I can do something like this
require(['Home'], function(home) {
    'use strict';
    var view = home.View.prototype;
    _.extend(view, {
        anotherTitle: 'Welcome Jean Luc Picard'
    });
});
or
require(['Home'], function(home) {
    'use strict';
    var view = home.View.prototype;
    view.anotherTitle = 'Welcome Jean Luc Picard'; // this is a new attribute
});
What is the most appropriate way to do it?
You should use extend.
Remember that modifying the prototype of a Backbone type (which is what I assume home.View to be) will extend/modify that prototype for all consumers, which might be unexpected behavior. Even when using require, you can modify those dependencies directly and impact subsequent require calls (this happens because you're modifying the dependency in the require cache, so subsequent requests for the same dependency are served the already-modified module).
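A quick sketch of why that matters (assuming 'Home' exports a Backbone view as home.View):
require(['Home'], function (home) {
    home.View.prototype.anotherTitle = 'Welcome Jean Luc Picard';
});

// Elsewhere - require returns the same cached module, so every consumer
// of home.View now sees the modified prototype:
require(['Home'], function (home) {
    var view = new home.View();
    console.log(view.anotherTitle); // 'Welcome Jean Luc Picard'
});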
Using .extend() is the best way to create your own extended/custom types. Behind the scenes, extend() does in fact modify the prototype, but it's the prototype of the new object type created and effectively uses the backbone type as a super.
So, to create your own type:
// Create a new object whose prototype is the combination of Backbone.View
// and the object passed to extend
var MyCustomView = Backbone.View.extend({ ... });
var instance = new MyCustomView();
If, however, home.View is already a custom view type, it's probably functionally equivalent to either call .extend on it or add things to the prototype manually. However, for consistency and clarity's sake, you should always use the handy dandy extend method to create a sub type.
var MySubCustomView = MyCustomView.extend({ ... });
var instance = new MySubCustomView();
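So for the original question, the non-mutating route would be something along these lines (assuming home.View is a Backbone view; the PicardView name is only for illustration):
var PicardView = home.View.extend({
    anotherTitle: 'Welcome Jean Luc Picard'
});
var view = new PicardView(); // home.View.prototype is left untouched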

Javascript Module pattern - how to reveal all methods?

I have module pattern done like this:
var A = (function(x) {
    var methodA = function() { ... };
    var methodB = function() { ... };
    var methodC = function() { ... };
    ...
    return {
        methA: methodA,
        methB: methodB
    };
})(window);
This code lets me call only methA() and methB() on A, which is what I want and what I like. Now the problem I have: I want to unit test it with no pain, or at least with minimal effort.
First I thought I could simply return this, but I was wrong - it returns the window object. (Can someone explain why?)
Second, I found a solution somewhere online - to include this method inside my return block:
__exec: function() {
    var re = /(\(\))$/,
        args = [].slice.call(arguments),
        name = args.shift(),
        is_method = re.test(name),
        name = name.replace(re, ''),
        target = eval(name);
    return is_method ? target.apply(this, args) : target;
}
This method lets me call the methods like this: A.__exec('methA', arguments);
It is almost what I want, but quite ugly. I would prefer A.test.methA() where test would never be used in production code - just for revealing private methods.
EDIT
I see people telling me to test the big thing instead of the small parts. Let me explain. In my opinion an API should reveal only the needed methods, not a bunch of internal functions. The internals, because of their small size and limited functionality, are much easier to test than testing the whole thing and guessing which part went wrong.
While I may be wrong, I would still like to see how I could return references to all the methods from the object itself :).
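Something like this would give me the A.test.methA() shape I have in mind (just a sketch; the test block would be stripped or gated behind a build flag for production):
var A = (function (x) {
    var methodA = function () { /* ... */ };
    var methodB = function () { /* ... */ };
    var methodC = function () { /* ... */ };

    var api = {
        methA: methodA,
        methB: methodB
    };

    // exposed only so tests can reach the internals
    api.test = {
        methA: methodA,
        methB: methodB,
        methC: methodC
    };

    return api;
})(window);

// in a test: A.test.methC();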
Answer to your first question (you return this, but it returns window, not the object you wanted): in JavaScript, this inside a function refers to the global object unless the function is called as a method of an object.
Consider these examples:
1) this points to the global object:
function(){
    return this;
}
2) this points to the object:
var obj = {
    value: "foo",
    getThisObject: function(){
        return this;
    }
};
Your case is example #1, because you have a function that returns an object; that function is not a method of any object.
The best answer to your second question is to test only public methods, but if that is really important to you, I can propose the following: create your modules dynamically on the server side.
How it works:
create separate scripts for the functionality you want;
create tests for these separate scripts;
create a method that combines the scripts into one however you want;
to load a script, reference the combining method.
Hopefully, it can solve your problem. Good luck!
Why not use namespaces to add your modules and public methods to the JS engine? Like this:
window['MyApp']['MODULE1'] = { "METHOD1" : {}, "METHOD2" : {}};
I write modules like this: Sample module in JavaScript. And I test them like this: Simple unit testing in JavaScript.
The use of eval() is generally not a good idea.

Emacs: Finding function definition in etags

Syntax check in js2-mode is awesome.
But sometimes I just want to define a function named "delete" or "new", even though that's not a good idea. Js2-mode seems to treat this as an error.
How can I use built-in keywords as function names in js2-mode? I need your help.
================================================
I am sorry for my stupid question...
I'm using etags.
But writing something like:
exports.new = function() {
};
it seems etags will treat this as the definition of 'exports.new', not 'new'.
TAGS
};exports.new248,8614
So I'm trying to write something like:
function new() {
}
exports.new = new;
How stupid I am !!!
So my question turns back to: how do I make etags find the definition of 'new', not 'exports.new'?
Thanks. :)
"Js2-mode seems to treat this as an error"
It is an error, isn't it?
I really don't understand why you'd want to do it, but the following works:
someObject["new"] = function() {
alert("This is the 'new' function.");
}
someObject["new"]();
Assuming someObject already exists as an object. Or:
var someObject = {
    "new" : function() {},
    "delete" : function() {}
};
someObject["new"]();
someObject["delete"]();
In the browser you can say window["new"] = function() {}, but you can't call the resulting function with new(), you have to say window["new"]().
In node.js I believe the equivalent would be global["new"] = function() {}. I don't use node, but I assume this would create a global function called "new", but you wouldn't be able to call it with the new() syntax, you'd have to say global["new"]().
I do not recommend doing this.
