Saving only changed attributes in Backbone.js - javascript

I'm attempting to save only changed attributes of my model by doing the following:
this.model.set("likes", this.model.get("likes") + 1);
this.model.save();
and extending the Backbone prototype like so:
var backbone_common_extension = {
    sync: function(method, model, options) {
        options.contentType = 'application/json';
        if (method == 'update') {
            options.data = JSON.stringify(model.changedAttributes() || {});
        }
        console.log(options.data);
        Backbone.sync.call(this, method, model, options);
    }
};
_.extend(Backbone.Model.prototype, backbone_common_extension);
The problem is that model.changedAttributes() is always empty. I've tried passing {silent: true} to the set method, but it's the same thing. How can I keep Backbone from clearing out changedAttributes() before sync?

Relying on changedAttributes for this sort of thing tends not to work very well. Backbone doesn't really have a well-defined set/commit/save/rollback system, so you'll find that changedAttributes gets emptied when you don't expect it to.
Lucky for you, that doesn't matter; unlucky for you, it doesn't matter because your whole approach is wrong. What happens if you load your model from the server and then five likes come in from other browsers? Your approach would overwrite all those likes and data would get lost.
You should let the server manage the total number of likes; your model should simply send a little "there is one more like" message and let the server respond with the new total. I'd do it more like this:
// On the appropriate model...
add_like: function() {
    var _this = this;
    $.ajax({
        type: 'post',
        url: '/some/url/that/increments/the/likes',
        success: function(data) {
            _this.set('likes', data.likes);
        },
        error: function() {
            // Whatever you need...
        }
    });
}
Then the /some/url/... handler on the server would send a simple "add one" update to its database and send back the updated number of likes. That leaves the database responsible for managing the data and concurrency issues; databases are good at this sort of thing, client-side JavaScript not so much.
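For illustration, here is a minimal sketch of what that server-side handler might look like, assuming a Node/Express app; the route and the in-memory counter are hypothetical, and a real implementation would do an atomic increment in the database instead.

var express = require('express');
var app = express();

// Hypothetical in-memory storage, keyed by post id; a real handler would run
// an atomic UPDATE (e.g. SET likes = likes + 1) against the database.
var likeCounts = {};

app.post('/posts/:id/like', function (req, res) {
    var id = req.params.id;
    likeCounts[id] = (likeCounts[id] || 0) + 1;
    // Respond with the new total so the client model can simply set it.
    res.json({ likes: likeCounts[id] });
});

app.listen(3000);

The client-side success callback above then just sets whatever total the server reports, so concurrent likes from other browsers are never overwritten.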


Calculated fields in a JSON API?

I'm wondering to what extent we should add to an API values that can be calculated from the raw data and extra available information (from the browser session, the user interface, or whatever).
For example, we could have an API returning this JSON:
{
    owner_id: '123',
    canAddComment: true,
    etc...
}
That gives us the value "canAddComment" directly.
Or we could have just this:
{
    owner_id: '123',
    etc...
}
Here, canAddComment can be calculated from the owner_id. For example, a user can add comments if the session owner_id is different from the one received from the API.
We could have done this in JavaScript instead:
//supposing we have a session object with our browser session values
var params = {owner_id: session.owner_id};
$.post(apiURL, params, function(data){
    var result = JSON.parse(data);
    //processing the data
    result["canAddComments"] = result.owner_id !== session.owner_id;
    //do whatever with it
});
What would be the best approach? What's the recommendation in these kinds of cases?
I'm not exactly sure what you want to know, but my first reaction is to make a function out of it.
// constructor
function Comment() {
    this.owner_id;
    this.canAddComment = function() {
        return session.user_id === this.owner_id;
    }
}

var session = {
    user_id: 22,
    name: 'Kevin'
};

var my_comment = new Comment();
my_comment.owner_id = 15;

if (my_comment.canAddComment()) {
    $.ajax({
        // now read the form and submit the data ...
    })
}
else {
    alert('Only the author of this message can update ...');
}
EDIT:
My question was mainly whether this should be calculated on the backend side or calculated after retrieving the API data.
Not sure if there is a generic answer. A few arguments: if you want the method to be secret, you must do it on the server side. In other cases, I think it makes perfect sense to let the client PC use its processing power.
Okay, one example: sorting. Let's say there is a table full of data; clicking on a column header sorts the table by that column. Does it make sense to send all the data to the server, have it sort the table, and return an array of keys? No, I think it makes more sense to let the client process this.
http://tablesorter.com/docs/
It's similar with Google Maps: you drag a marker to some place, and then the program must calculate the closest 5 bus stops. Obviously you calculate all of this on the client side.
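A minimal sketch of that kind of client-side calculation, assuming stops is an array of {name, lat, lng} objects already available in the browser (all names here are hypothetical); the distance formula is a rough equirectangular approximation, which is good enough for ranking nearby stops:

function closestStops(marker, stops, count) {
    // Rough planar distance, adequate for comparing nearby points.
    function approxDistance(a, b) {
        var dLat = a.lat - b.lat;
        var dLng = (a.lng - b.lng) * Math.cos(((a.lat + b.lat) / 2) * Math.PI / 180);
        return Math.sqrt(dLat * dLat + dLng * dLng);
    }
    return stops
        .slice() // copy so the original array is not reordered
        .sort(function (s1, s2) {
            return approxDistance(marker, s1) - approxDistance(marker, s2);
        })
        .slice(0, count);
}

// e.g. closestStops({lat: 52.52, lng: 13.40}, stops, 5);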

How can I cleanly pull a Parse.Object relation's records when fetching the object?

In the Parse JavaScript guide, on the subject of Relational Data it is stated that
By default, when fetching an object, related Parse.Objects are not fetched. These objects' values cannot be retrieved until they have been fetched.
They also go on to state that when a relation field exists on a Parse.Object, one must use the relation's query().find() method. The example provided in the docs:
var user = Parse.User.current();
var relation = user.relation("likes");
relation.query().find({
    success: function(list) {
        // list contains the posts that the current user likes.
    }
});
I understand how this is a good thing, in terms of SDK design, because it prevents one from potentially grabbing hundreds of related records unnecessarily. Only get the data you need at the moment.
But, in my case, I know that there will never be a time when I'll have more than say ten related records that would be fetched. And I want those records to be fetched every time, because they will be rendered in a view.
Is there a cleaner way to encapsulate this functionality by extending Parse.Object?
Have you tried using include("likes")?
I'm not as familiar with the JavaScript API as the ObjC API, so in the example below I'm not sure if "objectId" is the actual key name you need to use...
var user = Parse.User.current();
var query = new Parse.Query(Parse.User);
query.equalTo("objectId", user.id);
query.include("likes");
query.find({
    success: function(users) {
        // Do stuff
    }
});
In general, you want to think about reversing your relationship. I'm not sure it is a good idea to be adding custom values to the User object. Think about creating a Like type and having it point to the user instead.
Example from Parse docs:
https://parse.com/docs/js_guide#queries-relational
var query = new Parse.Query(Comment);

// Retrieve the most recent ones
query.descending("createdAt");

// Only retrieve the last ten
query.limit(10);

// Include the post data with each comment
query.include("post");

query.find({
    success: function(comments) {
        // Comments now contains the last ten comments, and the "post" field
        // has been populated. For example:
        for (var i = 0; i < comments.length; i++) {
            // This does not require a network access.
            var post = comments[i].get("post");
        }
    }
});
Parse.Object's fetch(options), which returns a {Parse.Promise}, combined with Parse.Promise's always(callback), is the key.
We may override the fetch method when extending Parse.Object so that it always retrieves the relation's objects.
For example, consider the case where we want to retrieve a post and its comments (let's assume this is happening inside a view that wants to render both):
var Post = Parse.Object.extend("Post"),
    postsQuery = new Parse.Query(Post),
    myPost;

postsQuery.get("xWMyZ4YEGZ", {
    success: function(post) {
        myPost = post;
    }
}).then(function(post) {
    post.relation("comments").query().find({
        success: function(comments) {
            myPost.comments = comments;
        }
    });
});
If we had to do this every time we wanted to get a post and its comments, it would get very repetitive and very tiresome. And, we wouldn't be DRY, copying and pasting like 15 lines of code every time.
So, instead, let's encapsulate that by extending Parse.Object and overriding its fetch function, like so:
/*
models/post.js
*/
window.myApp = window.myApp || {};
window.myApp.Post = Parse.Object.extend("Post", {
    fetch: function(options) {
        var _arguments = arguments;
        this.commentsQuery = this.relation("comments").query();
        return this.commentsQuery.find({
            success: (function(_this) {
                return function(comments) {
                    return _this.comments = comments;
                };
            })(this)
        }).always((function(_this) {
            return function() {
                return _this.constructor.__super__.fetch.apply(_this, _arguments);
            };
        })(this));
    }
});
Disclaimer: you have to really understand how closures and IIFEs work, in order to fully grok how the above works, but here's what will happen when fetch is called on an existing Post, at a descriptive level:
Attempt to retrieve the post's comments and set them on the post's comments attribute.
Regardless of the outcome of that operation (whether it fails or not), always perform the post's default fetch operation and invoke all of that operation's callbacks.
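As a hedged usage sketch (the attribute name "title" is hypothetical), fetching a post with the override above also populates post.comments before the original fetch callbacks fire:

var post = new window.myApp.Post();
post.id = "xWMyZ4YEGZ";

post.fetch({
    success: function (fetchedPost) {
        // fetchedPost.comments was set by the overridden fetch, so the view
        // can render the post and its comments in one pass.
        console.log(fetchedPost.get("title"), fetchedPost.comments);
    }
});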

Ext JS Store POST request filter params

I have a Store configured with a proxy to POST data to the server. I add records to this store dynamically. After calling the sync() method on the store, the data gets sent to the server. But looking at the network traffic I see that each record's whole data is sent. How can I configure the store to send only specific data (like only the ID of each record)?
I have tried setting the writeAllFields property to false on the JSON writer connected to the proxy, but this did not help.
I have also tried this approach: Ext.JS Prevent Proxy from sending extra fields, but the request was not even performed:
var documentStore = Ext.getStore('Document');
var trashStore = Ext.getStore('TrashDocuments');

documentStore.each(function(record) {
    console.debug(record);
    //record.phantom = true;
    //record.setDirty();
    trashStore.add(record);
    documentStore.remove(record);
});

var newWriter = Ext.create('Ext.data.writer.Json', {
    getRecordData: function(record) {
        return {'id': record.data.id};
    }
});
trashStore.getProxy().setWriter(newWriter);

trashStore.sync({
    success: function() {
        console.debug("success!!");
    },
    failure: function() {
        console.debug("failed...");
    },
    callback: function() {
        console.debug("calling callback");
    },
    scope: this
});
console.debug("END");
The writeAllFields config is not working because it is only meaningful for non-phantom records; all fields of a phantom (new) record are viewed as "changed" and therefore included in the request packet.
To exclude specific fields from being included in the request, add persist: false to the fields you don't want to include.
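For example, a minimal sketch of a model definition, assuming a hypothetical Document model; fields marked persist: false are left out of the request body the writer builds, so only the id is sent when the store syncs:

Ext.define('MyApp.model.Document', {
    extend: 'Ext.data.Model',
    fields: [
        { name: 'id',      type: 'int' },
        { name: 'title',   type: 'string', persist: false },
        { name: 'content', type: 'string', persist: false }
    ],
    proxy: {
        type: 'ajax',
        url: '/documents',
        writer: { type: 'json' }
    }
});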
That being said, I don't quite understand why you'd only want to write the ids of newly added records. Unless you are explicitly having Ext JS create those ids for you, what are you actually going to be sending to the server? I don't know your use case, but typically it is desirable to have your persistence layer (e.g., a database) assign identifiers to your records.

Is it a code smell if I have the need to save a Backbone.Collection?

I've been trying to wrap my head around best RESTful practices while using BackboneJS. I feel like I've written myself into a bit of a knot and could use some guidance.
My scenario is this: a user wants to create a new Playlist with N items in it. The data for the N items is coming from a third-party API in bursts of 50 items. As such, I want to add a new, empty Playlist and, as the bursts of 50 come in, save the items and add them to my Playlist.
This results in my Playlist model having a method, addItems, which looks like:
addItems: function (videos, callback) {
    var itemsToSave = new PlaylistItems();
    var self = this;

    // Create a new PlaylistItem with each Video.
    videos.each(function (video) {
        var playlistItem = new PlaylistItem({
            playlistId: self.get('id'),
            video: video
        });
        itemsToSave.push(playlistItem);
    });

    itemsToSave.save({}, {
        success: function () {
            // OOF TERRIBLE.
            self.fetch({
                success: function () {
                    // TODO: For some reason when I call self.trigger then allPlaylists
                    // triggers fine, but if I go through fetch it doesn't trigger?
                    self.trigger('reset', self);

                    if (callback) {
                        callback();
                    }
                }
            });
        },
        error: function (error) {
            console.error("There was an issue saving " + self.get('title'), error);
        }
    });
}
itemsToSave is generally a Collection with 50 items in it. Since Backbone does not provide a save for Collections, I wrote my own. I didn't care much for creating a Model wrapper for my Collection.
So, when I call Save, none of my items have IDs. The database assigns the IDs, but that information isn't implicitly updated by Backbone because I'm saving a Collection and not a Model. As such, once the save is successful, I call fetch on my Playlist to retrieve the updated information. This is terrible because a Playlist could have thousands of items in it -- I don't want to be fetching thousands of items every time I save multiple.
So, I'm thinking maybe I need to override the Collection's parse method and manually map the server's response back to the Collection.
This all seems... overkill/wrong. Am I doing something architecturally incorrect? How does a RESTful architecture handle such a scenario?
My opinion is to do what works and feels clean enough, and disregard whatever the RESTafarian credence might be. Bulk create, bulk update, and bulk delete are real-world use cases that the REST folks just close their eyes to and pretend don't exist. Something along these lines sounds like a reasonable first attempt to me (see the sketch after this list):
create a bulkAdd method or override add carefully if you are feeling confident
don't make models or add them to the collection yet though
do your bulk POST or whatever to get them into the database and get the assigned IDs back
then add them as models to the collection
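A rough sketch of that flow, assuming the server exposes a hypothetical bulk endpoint at /playlists/:id/items that accepts an array of item attributes and responds with the same items with their ids assigned (the playlistId property and callback handling are illustrative, not part of the question's code):

var PlaylistItems = Backbone.Collection.extend({
    model: PlaylistItem,

    bulkAdd: function (attrsArray, options) {
        var self = this;
        return $.ajax({
            type: 'POST',
            url: '/playlists/' + this.playlistId + '/items',
            contentType: 'application/json',
            data: JSON.stringify(attrsArray)
        }).done(function (itemsWithIds) {
            // Only now do the items become models in the collection,
            // already carrying their server-assigned ids, so no follow-up fetch is needed.
            self.add(itemsWithIds);
            if (options && options.success) options.success(itemsWithIds);
        }).fail(function (xhr) {
            if (options && options.error) options.error(xhr);
        });
    }
});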

ExtJS 4 - Update/Refresh single record

I have a problem that's bugging me.
I have a grid and when I dblclick on an item I want to open a window to edit that item. Pretty standard stuff. The problem is, I want to be sure the record is up to date, because other people using the program may have changed it or even deleted it.
I could reload the store, but I only want one specific record to be checked... So I figured I would just go get the data, build another record, and replace the existing one in the store, but I really want to know the best way to do this.
Bear in mind a RESTful proxy is not an option for me, even though I don't know if the update operation works in this case (server -> client).
EDIT:
This may help somebody:
All I did was copy the data and raw objects from the new record to the old one and then "commit" the changes. Worked for me.
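In code, that was roughly the following (a minimal sketch; oldRecord is the record already in the store and newRecord is the freshly loaded one, both hypothetical names):

Ext.apply(oldRecord.data, newRecord.data);
oldRecord.raw = newRecord.raw;
oldRecord.commit(); // clears the dirty state and refreshes bound views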
Thank you.
ExtJS 4.1
I had a similar problem and as an experiment tried
sStore.load({
    id: mskey,
    addRecords: true
});
where mskey is a uuid of a currently loaded record.
I did not remove the existing record first (as an experiment) and it updated the existing record that had the same id with the new data from the server (via the model --> REST proxy). Perfect!
I know you said you are not using a REST proxy, but this might help others who found this post searching for search terms like your topic name (which is how I got here!)
So, it looks like 'addRecords' means add or update.
FYI,
Murray
The best way to do something like this would be to reload the record in the event which opens the window. So where you would, for example, load the record from the grid store into a form within the window, you can use your model to load from the id.
Item.load(id, { success: function(r) { form.loadRecord(r); } });
Once saved, you should probably also call refresh on the grid view, which will redraw the changes from the save event. You can also use refreshNode (see grid view documentation) on the exact record in the store if you're concerned about performance.
Of course you do not have to use the RESTful proxy with this; you can use any proxy as long as it will load the single record.
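Tying that together, a hedged sketch of the dblclick handler (the model, window, and grid names are hypothetical):

grid.on('itemdblclick', function (view, record) {
    // Reload the record by id before showing the edit window, so the form
    // always reflects the server's current data.
    MyApp.model.Item.load(record.getId(), {
        success: function (freshRecord) {
            var win = Ext.create('MyApp.view.EditWindow');
            win.down('form').loadRecord(freshRecord);
            win.show();
        },
        failure: function () {
            Ext.Msg.alert('Error', 'The record could not be reloaded; it may have been deleted.');
        }
    });
});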
With ExtJS 4.1, here is an override.
In CoffeeScript:
# Adds "reload" to models
Ext.define "Ext.ux.data.Model",
  override: "Ext.data.Model",
  # callBack is called with success:boolean
  reload: (callBack) ->
    Ext.getClass(@).load @getId(),
      success: (r, o) =>
        for k, v of r.data
          @data[k] = v
        @commit()
        callBack(true) if Ext.isFunction(callBack)
      failure: =>
        callBack(false) if Ext.isFunction(callBack)
In JS (not tested):
Ext.define("Ext.ux.data.Model", {
override: "Ext.data.Model",
reload: function(callBack) {
var me = this;
return Ext.getClass(this).load(this.getId(), {
success: function(r, o) {
var k;
for (k in r.data) {
me.data[k] = r.data[k];
}
me.commit();
if (Ext.isFunction(callBack)) {
callBack(true);
}
},
failure: function() {
if (Ext.isFunction(callBack)) {
callBack(false);
}
}
});
}
});
I created an override on the Ext.data.Model to add an additional method that can be used to update the data of an existing record (model instance).
Ext.define('Ext.overrides.data.Model', {
    override: 'Ext.data.Model',

    /**
     * Refresh the data of a record from the server
     */
    reloadData: function(cb) {
        var me = this;
        var id = me.getId();

        Ext.ModelManager.getModel(me.modelName).load(id, {
            callback: function(record, operation, success) {
                if (!success) {
                    Ext.Error.raise('Problem reloading data from server in record');
                }
                if (!record) {
                    Ext.Error.raise('No record from server to reload data from');
                }

                //change the data of the record without triggering anything
                Ext.apply(me.data, record.getData());

                //call a final callback if it was supplied
                if (cb) {
                    cb.apply(me, arguments);
                }
            }
        });

        return me;
    }
});
This is how you can use it. It's actually pretty simple:
myRecord.reloadData(function(record, operation, success) {
    //Done updating the data in myRecord
});
Internally it uses the load method on the associated model to create a new record. That new record is based on the same id as the original record that the reloadData method was called on. In the callback, the data of the new record is applied to the data of the original record. No events are triggered, which is probably what you want.
This is Ext 4.2.1. There are probably dozens of scenarios where this solution breaks, but we can always refine, can't we?
Update: This solution basically implements the same thing as the one by #Drasill. Oh well... This one was tested, though.
