I have some collections which are related to others through an ID.
For instance, I have the collections Posts and Comments, and I want to display the number of comments for each post. Therefore, I have a field in Posts called numComments. I could update this number in a method every time a comment with the same postId is either inserted or removed, but I will instead use some hooks/observers to ensure the number is always up to date.
Therefore, I have created a file server/observers.js with the following content:
Comments.find().observe({
    added: function(document) {
        Posts.update({ postId: document.postId }, { $inc: { numComments: 1 } });
    },
    changed: function(newDocument, oldDocument) {
        // nothing to do: a changed comment does not affect the count
    },
    removed: function(document) {
        Posts.update({ postId: document.postId }, { $inc: { numComments: -1 } });
    },
});
I like this kind of solution, but is it a good way to do it?
My problem is that since I implemented this functionality, the console window prints an awful lot of errors/warnings. I suspect the observers are the cause.
In the documentation (http://docs.meteor.com/#/full/observe), it says:
observe returns a live query handle, which is an object with a stop method. Call stop with no arguments to stop calling the callback functions and tear down the query. The query will run forever until you call this (..)
I am not sure what that means, but I think the observers should be stopped manually.
Have a look at this answer. It might lead you in the right direction, since the example is very similar to what you want. You don't need a dedicated field in your collection to get a reactive count of your comments; you can build it in your publish function.
I am not sure what it means but I think the observers should be
stopped manually
You're right. In the example linked above, the query is wrapped inside a handle variable. Notice the
self.onStop(function () {
handle.stop();
});
It allows you to make sure that no observers are still running once you stop publishing.
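As a rough illustration of what stop() buys you, here is a plain-Node sketch; this is not Meteor code, and liveQuery, emitAdded, and emitRemoved are made-up names standing in for the real live query machinery:

```javascript
// Minimal stand-in for a live query handle: callbacks fire until stop() is called.
function liveQuery(callbacks) {
  var stopped = false;
  return {
    emitAdded: function (doc) { if (!stopped) callbacks.added(doc); },
    emitRemoved: function (doc) { if (!stopped) callbacks.removed(doc); },
    stop: function () { stopped = true; } // tear down: no callbacks fire afterwards
  };
}

var numComments = 0;
var handle = liveQuery({
  added: function () { numComments += 1; },
  removed: function () { numComments -= 1; }
});

handle.emitAdded({ postId: 1 });  // counter goes to 1
handle.stop();                    // what self.onStop(function () { handle.stop(); }) ensures
handle.emitAdded({ postId: 1 });  // ignored: the handle is stopped
```

Without the stop() call, the last emit would have pushed the counter to 2 even though the publication was gone, which is exactly the kind of leak the documentation warns about.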
Related
I am using ExtJS 3.4 and I want records to appear when I use getModifiedRecords().
dtl.store.getAt(i).markDirty();
This is what I did, and when I console.log() the record I can see
dirty:true and also
modified:{idStyle: "TEST-4", idStyleDtl: 2052, color: 6, colorNm: "BLACK", s1: 0, …}
It clearly shows it has been modified and the dirty flag is set, but when I do objStore.getModifiedRecords() it returns just an empty []. I don't understand why it won't return the modified records. Is there any other condition I need to change?
Thanks
So let's start with the solution (click the X button and look into the console): FIDDLE
Now what happens here is that I manually call my own function added to the store, called markDirtyFix, and pass it the record I want to mark as dirty so that it is put into the modified-records list. It is actually a copy of the store's afterEdit function; I just commented out the fireEvent for 'update' (because that forces a request to be sent on update, and you want to mark the record dirty "only locally"):
afterEdit: function(record){
    if(this.modified.indexOf(record) == -1){
        this.modified.push(record);
    }
    // this.fireEvent('update', this, record, Ext.data.Record.EDIT); // removed
},
So the miracle happens and store.getModifiedRecords() returns an array that includes your record.
Now let's talk about the problem. The problem is that markDirty() looks like this (from the official docs):
markDirty: function(){
    this.dirty = true;
    if(!this.modified){
        this.modified = {};
    }
    this.fields.each(function(f) {
        this.modified[f.name] = this.data[f.name];
    }, this);
}
and there is nothing in here that calls afterEdit or anything like it to mark your record dirty at the store level (as you can see, it becomes dirty only at the record level).
Maybe someone will say it is meant to be used only at the record level, but the markDirty doc description says:
Marking a record dirty causes the phantom to be returned by Ext.data.Store.getModifiedRecords ..
So it should have worked (but didn't).
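The gap can be reproduced in a few lines of plain JavaScript; this is a toy Store/Record pair for illustration, not real ExtJS code:

```javascript
// Toy versions of Store and Record showing the two levels of "dirty".
function Store() { this.modified = []; }
Store.prototype.getModifiedRecords = function () { return this.modified; };
Store.prototype.markDirtyFix = function (record) {
  // copy of afterEdit without the 'update' event
  if (this.modified.indexOf(record) === -1) { this.modified.push(record); }
};

function Record(data) { this.data = data; this.dirty = false; }
Record.prototype.markDirty = function () { this.dirty = true; }; // record level only

var store = new Store();
var rec = new Record({ idStyle: 'TEST-4' });

rec.markDirty();
var before = store.getModifiedRecords().length; // 0: the store never heard about it

store.markDirtyFix(rec);
var after = store.getModifiedRecords().length;  // 1: now the store tracks the record
```

The record is genuinely dirty after markDirty(), but nothing pushed it into the store's modified list, which is why getModifiedRecords() stays empty until something like markDirtyFix runs.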
I'm trying to call save on a restangularized object, but the save method completely ignores any changes made to the object; it seems to have bound the original unmodified object.
When I run this in the debugger, I see that when my saveSkill method (see below) is entered, right before I call save, the skill object reflects the changes I made to its name and description fields. If I then do a "step into", I go into Restangular's save method. However, the 'this' variable within the Restangular save method holds my old skill, with the name and description equal to whatever they were when loaded. It's ignoring the changes I made to my skill.
The only way I could see this happening is if someone called bind on save, though I can't see why Restangular would do that. My only guess is it's due to my calling $object, but I can't find much in the way of documentation to confirm this.
I'm afraid I can't copy and paste; all my code examples are typed by hand, so forgive any obvious syntax issues as typos. I don't know how much I need to describe, so here is the shortened version; I can retype more if needed:
state('skill.detail', {
    url: '/:id',
    data: { pageTitle: 'Skill Detail' },
    template: 'template.tpl.html',
    controller: 'SkillFormController',
    resolve: {
        isCreate: function() { return false; },
        skill: function(SkillService, $stateParams) {
            return SkillService.get($stateParams.id, { "$expand": "people" }).$object;
        }
    }
});
My SkillService looks like this:
angular.module('project.skill').factory('SkillService', ['Restangular', function(Restangular) {
    var route = "skills";
    var SkillService = Restangular.all(route);
    SkillService.restangularize = function(element, parent) {
        var skill = Restangular.restangularizeElement(parent, element, route);
        return skill;
    };
    return SkillService;
}]);
Inside my template.tpl.html I have your standard text boxes bound to name and description, and a save button. The save button calls saveSkill(skill) of my SkillFormController, which looks like this:
$scope.saveSkill = function(skill) {
    skill.save().then(function(returnedSkill) {
        toaster.pop('success', "YES!", returnedSkill.name + " saved.");
        // ...(other irrelevant stuff)
    });
};
If it matters, I have an addElementTransformer hook that runs a method calling skill.addRestangularMethod() to add a getPeople method to all skill objects. I don't include the code since I doubt it's relevant, but I can elaborate on it if needed.
I got this to work, though I honestly still don't entirely know why it works; I only know the fix I used.
First, as stated in the comments, Restangular does bind all of its methods to the original restangularized object. This usually works, since you are simply aliasing the restangularized object; as long as you use that object, your modifications will work.
This can be an issue with Restangular.copy() vs angular.copy(). Restangular.copy() makes sure to restangularize the copied object properly, rebinding the Restangular methods to the new copy. If you call only angular.copy() instead of Restangular.copy(), you will get results like mine above.
However, I was not doing any copy of the object (okay, I saved a master copy to revert to if cancel was hit, but that used Restangular.copy() and besides wasn't being used in my simple save scenario).
As far as I can tell, my problem was using the .$object call on the Restangular promise. I walked through Restangular enough to see it was doing some extra logic restangularizing methods after a promise returns, but I didn't get to the point of following $object's logic. However, replacing the $object call with a then() function that did nothing but save the returned result has fixed my issues. If someone can explain how, I would love to update this question, but I can't justify using work time to hunt down an already-fixed problem further, even if I really would like to understand the cause better.
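The bind behavior described above can be demonstrated in plain JavaScript, independent of Restangular; attachBoundSave is a made-up name for illustration:

```javascript
// A method bound at "restangularize" time keeps pointing at the object it was bound to.
function attachBoundSave(obj) {
  obj.save = function () { return this.name; }.bind(obj); // bound once, here
  return obj;
}

var original = attachBoundSave({ name: 'old name' });

// A plain copy (roughly what angular.copy does, versus Restangular.copy)
var copy = Object.assign({}, original);
copy.name = 'new name';

var saved = copy.save(); // 'old name': `this` inside save is still `original`
```

Because the copy carries the same bound function, calling copy.save() reads from the original object, which is the "old values" symptom described in the question.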
I'm using Parse cloud code to update some counters on a user when after_delete is called on certain classes. A user has counters for subscriptions, followers and following that are incremented in the before_save for subscriptions and follows and decremented in the before_delete for the same classes.
The issue I'm running into is when a user is deleted. The after_delete function destroys all related subscriptions/follows, but this triggers an update to the (deleted) user via before_delete for subscriptions/follows. This always causes the before_delete to error out.
Perhaps I'm conceptually mixed up about the best way to accomplish this, but I can't figure out how to properly set up the following code in the follow before_delete:
var fromUserPointer = follow.get("fromUser");
var toUserPointer = follow.get("toUser");
fromUserPointer.fetch().then(function(fromUser) {
    // update following counter
    // if fromUser is already deleted, none of the rest of the promise chain is executed
}).then(function(fromUser) {
    return toUserPointer.fetch();
}).then(function(toUser) {
    // update followers count
});
Is there a way to determine if the fromUserPointer and toUserPointer point to a valid object short of actually performing the fetch?
It's not an error to not find the user, but by not handling the missing-object case on the fetch, it's being treated implicitly as an error.
So...
fromUserPointer.fetch().then(function(result) {
    // good stuff
}).then(function(result) {
    // good stuff
}).then(function(result) {
    // good stuff
}, function(error) {
    // this is good stuff too; if there's no mode of failure
    // above that would cause you to want NOT to delete, then...
    response.success();
});
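The idea of treating a missing object as a non-error can be sketched with plain Promises; fetchUser and safeFetchUser here are hypothetical stand-ins for Parse's fetch() and your wrapper around it:

```javascript
// Stand-in for a Parse fetch(): rejects when the object was already deleted.
function fetchUser(exists) {
  return exists
    ? Promise.resolve({ name: 'someUser' })
    : Promise.reject(new Error('object not found'));
}

// Recover from the rejection so the rest of the chain still runs.
function safeFetchUser(exists) {
  return fetchUser(exists).then(
    function (user) { return user; },  // found: update counters here
    function (err) { return null; }    // deleted: not a real error, carry on with null
  );
}
```

With this shape, a missing fromUser resolves to null and the chain continues on to the toUserPointer fetch instead of erroring out, which is the behavior the answer above is describing.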
I have a big logic problem using Node/Mongoose/Socket.io... Let's say I have a Server model which is often queried at the same time in my application, and some calls involve updating data in the model.
db.Server.findOne({ _id: reference.server }).exec(function(error, server) {
    catches.error(error);
    if (server !== null) {
        server.anything = "ahah";
        server.save(function(error) { });
    }
});
Sometimes two people will call this at the same time: while the first person is still save()-ing, the other guy could already have findOne'd the "server" and got the "old object", which isn't up to date, and then save() it.
The big problem here is that when the second guy save()s the "server" (the "old object"), it will literally overwrite the changes of the first one... You can imagine the big conflicts this will create in my application.
I thought about changing all the save() methods to update(), which gets rid of the problem, but at some points in the project it's very tricky to use update() directly, and not as practical.
Is there a way to "lock" the findOne() call while someone is updating it? Like when you findOne() you also say "hey, I will update this soon, so don't let people find it right now" (with Mongoose, or even MongoDB).
I've been searching for a while and I can't find any answer :(
Hope you understood my problem ;) thank you!
As you can tell, this is not the best way to handle processing updates on your data. If you consider what you are asking to do, it essentially boils down to:
Fetch an object from the database.
Update a property in code.
Save that data back with no guarantee something else modified it.
So where possible you need to avoid that pattern and follow the common-sense approach of just changing an existing value where it is presently not set to that value. This means processing an "update" type of statement with an operator such as $set:
db.Server.findOneAndUpdate(
    { "_id": reference.server, "anything": { "$ne": "ahah" } },
    { "$set": { "anything": "ahah" } },
    function(err, server) {
        if (server != null) {
            // then something was actually found and modified,
            // so server now is the updated document
        }
    }
);
This means you are throwing away any field validation or other save hooks from Mongoose, but it is an "atomic" form of update in that reading and writing are not separate operations, which is how you are currently implementing it.
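The lost-update problem that the atomic form avoids can be shown with two interleaved read-modify-write sequences; this is an in-memory sketch for illustration only:

```javascript
// Two "clients" both read, then both write: the first increment is lost.
var doc = { count: 0 };

var readA = doc.count;     // client A reads 0
var readB = doc.count;     // client B reads 0 too, before A writes
doc.count = readA + 1;     // A writes 1
doc.count = readB + 1;     // B also writes 1, overwriting A's change

var lostUpdateResult = doc.count; // 1, not 2

// An atomic $inc-style operator reads and writes as one step instead:
doc.count = 0;
doc.count += 1; // each operation sees the latest value
doc.count += 1;
var atomicResult = doc.count; // 2
```

The findOne-then-save pattern is the first half of this sketch; an update with $inc or $set is the second, where the database never lets the read and the write separate.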
If you are looking to implement some type of "locking" then a similar approach is your best way to do this. So if you want to set a "state" on a document to show that someone is currently editing it, then maintain a field to do so and build it into your queries.
For "reading" a document and getting the information that you want to present to an "edit" then you would do something like this:
db.Server.findOneAndUpdate(
    { "_id": docId, "locked": false },
    { "$set": { "locked": true } },
    function(err, document) {
    }
);
This means that as someone "grabs" the edit, subsequent operations will not be able to do so, since they are looking to retrieve a document whose locked state is false, and it no longer is. The same principle applies when committing your edit as a "save", just in reverse:
db.Server.findOneAndUpdate(
    { "_id": docId, "locked": true },
    { "$set": { "locked": false } },
    function(err, document) {
    }
);
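An in-memory sketch of the lock/unlock pattern; findOneAndUpdate here is a simplified stand-in for the real driver call, only showing the "match then modify as one step" semantics:

```javascript
// Simplified stand-in: find the first doc matching `query`, apply `$set`, return it.
var docs = [{ _id: 1, locked: false }];

function findOneAndUpdate(query, update) {
  var doc = docs.find(function (d) {
    return Object.keys(query).every(function (k) { return d[k] === query[k]; });
  });
  if (!doc) { return null; }          // no match: someone else holds the lock
  Object.assign(doc, update.$set);    // match: take (or release) the lock
  return doc;
}

var first = findOneAndUpdate({ _id: 1, locked: false }, { $set: { locked: true } });
var second = findOneAndUpdate({ _id: 1, locked: false }, { $set: { locked: true } });
// first is the document; second is null because the lock was already taken
```

The second caller gets nothing back and knows to retry or report the document as busy; in real MongoDB the match-and-modify happens atomically on the server, which is what makes the lock safe.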
You can always do more advanced things, such as saving revisions or expecting a version number with operations, or any other form of handling. But generally speaking, you should be managing this yourself according to your needs.
I just realized I posted a similar post on stack overflow. You can find the post here:
How to read/write a document in parallel execution with mongoDB/mongoose
In this post someone told me to keep some data in memory to avoid this behavior. This is what I did and it works great. But if you are using multiple processes, you need to find a way to share memory between them.
In my server/server.js:
Meteor.methods({
    saveOnServer: function() {
        var totalCount = Collections.find({
            "some": "condition"
        }).count();
        if (totalCount) {
            var customerId = Collections.update('someId', {
                "$addToSet": {
                    objects: object
                }
            }, function(err) {
                if (err) {
                    throw err;
                } else {
                    return true;
                }
            });
        } else {}
    }
});
I'm afraid that when saveOnServer() is called by 2 clients at the same time, it will return the same totalCount for each client and basically end up inserting the same integer number as the object id. The end goal is to insert a row on the server side with an atomic operation that only completes when the totalCount is successfully returned and the document is inserted, ensuring that no duplicate id exists. I'm trying not to use the MongoDB _id but to have my own integer incrementing id column.
I'm wondering how I can ensure that a field gets auto-incremented for each insert operation. I am currently relying on getting the total count of documents. Is a race condition possible here? If so, what is the Meteor way of dealing with this?
In Meteor's concurrency model, you can imagine a whole method as an uninterruptible block of stuff that happens. In order for Meteor to switch from running one method midway to say, starting another method, you need to "yield"—the method needs to signal, "I can be interrupted."
Methods yield whenever they do something asynchronous, which in practice means any time you do a database update or call a method with a callback in Meteor 0.6.5 and later. Since you give your update call a callback, Meteor will always try to do something in between the call to update and the update's callback. However, in Meteor 0.6.4.2 and earlier, database updates were uninterruptible regardless of the use of callbacks.
However, multiple calls to saveOnServer will happen in order and do not cause a race condition. You can call this.unblock() to allow multiple calls to saveOnServer to occur "simultaneously"—i.e., not share the same saveOnServer queue of uninterruptible blocks of stuff.
Given the code you have, another method modifying Collections can change the value of count() between the call and the update.
You can prevent one method from making the other invalid midway by implementing the following data models:
saveOnServer: function () {
    // ...
    Collections.update({ _id: someId, initialized: true, collectionCount: { $gt: 0 } },
        { $addToSet: { objects: object } });
    // ...
}
When adding objects to Collections:
insertObject: function() {
    // ...
    var count = Collections.find({ some: condition }).count();
    Collections.insert({ _id: someId, initialized: false, collectionCount: count });
    Collections.update({ initialized: false },
        { $set: { initialized: true }, $inc: { collectionCount: 1 } });
}
Note, while this may seem inefficient, it reflects the exact cost of making an update and insert in different methods behave the way you intend. In saveOnServer you cannot insert.
Conversely, if you removed the callback from Collections.update, it will occur synchronously and there will be no race condition in Meteor 0.6.5 and later.
You can make this collection have a unique key on an index field, and then keep it updated as follows:
1) Whenever you insert into the collection, first do a query to get the maximum index and insert the document with index + 1.
2) To find out the number of documents just do the query to get the max of the index.
Insertion is now a pair of queries, a read and a write, so it can fail. (DB ops can always fail, though.) However, it can never leave the database in an inconsistent state - the Mongo index will guarantee that.
The syntax for building an index in Meteor is this:
MyCollection._ensureIndex('index', {unique: 1});
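The two steps above can be sketched in memory, with a Set standing in for what the unique Mongo index enforces; insertWithIndex is a made-up name for illustration:

```javascript
// In-memory sketch: read the current max index, insert at max + 1.
var rows = [];
var usedIndexes = new Set(); // stands in for the unique Mongo index

function insertWithIndex(doc) {
  var max = rows.reduce(function (m, r) { return Math.max(m, r.index); }, 0);
  var candidate = Object.assign({}, doc, { index: max + 1 });
  if (usedIndexes.has(candidate.index)) {
    throw new Error('duplicate key'); // the unique index rejects the second writer
  }
  usedIndexes.add(candidate.index);
  rows.push(candidate);
  return candidate;
}

var a = insertWithIndex({ name: 'first' });
var b = insertWithIndex({ name: 'second' });
// a.index is 1, b.index is 2, and the max index doubles as the document count
```

If two writers raced to the same index value, one would get the duplicate-key error instead of silently overwriting, which is the consistency guarantee the answer relies on.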
Another way to do this borrows from a mechanism Hibernate/JPA follows, which is to set up a collision field. Most of the time this can be an update timestamp that is set on each update. Just prior to doing any update, query the update timestamp. Then you can specify the update with a condition requiring the update timestamp to be what you just fetched. If it has changed in the interim, the update won't happen, and you check the return code/count to see whether the row was updated or not.
JPA does this automatically for you when you add an annotation for this collision field, but this is essentially what it does behind the scenes.
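The collision-field idea can also be sketched in memory; updateIfUnchanged is a made-up helper mirroring a conditional update against a single record:

```javascript
// Update only when the stored timestamp still matches the one read earlier.
var record = { value: 'a', updatedAt: 100 };

function updateIfUnchanged(expectedTimestamp, newValue, now) {
  if (record.updatedAt !== expectedTimestamp) {
    return 0; // someone updated in the interim: zero rows affected
  }
  record.value = newValue;
  record.updatedAt = now;
  return 1; // one row affected
}

var seen = record.updatedAt;                        // read the timestamp first
var okCount = updateIfUnchanged(seen, 'b', 200);    // succeeds: returns 1
var staleCount = updateIfUnchanged(seen, 'c', 300); // fails: timestamp moved on
```

Checking the affected-row count is how the caller learns their copy was stale, exactly as JPA's optimistic-locking version check does behind the scenes.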