I see that when publishing, collection._connection.publish_handlers is populated, and so is collection._connection.method_handlers, and probably other areas as well.
I basically want to clean up the memory by removing the references to that collection and its publication entirely.
Basically each user of the app has a list of collections for that user. There is a publish function that looks like this for the user to get their list of collections:
Meteor.publish('users_collections', function() {
  var self = this;
  var handle = UsersCollections.find({ownerId: self.userId}).observeChanges({
    added: function(id, collectionInfo) {
      UsersCollectionManager.addUsersCollection(self.userId, collectionInfo.name);
    }
  });
});
That publishes that user's list of collections (and any user that connects gets their list).
Once the user gets their list, each of those collections is made reactive with new Meteor.Collection and then published.
UsersCollectionManager.addUsersCollection = function(userId, collectionName) {
  if (self.collections[userId].collections[collectionName] === undefined) {
    self.collections[userId].collections[collectionName] = new Meteor.Collection(collectionName);
    Meteor.publish(collectionName, function() {
      return self.collections[userId].collections[collectionName].find();
    });
  }
};
Once the user disconnects, a function of mine gets run. If that user has no connections left open (for example, they had multiple windows open and all of them are now closed), it starts a 30s timeout to clean up all of these publish calls and new Meteor.Collection calls to save memory, since the other users of the app won't need this user's collections.
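For reference, the disconnect handling looks roughly like this (simplified; the subscription counting and cleanupUserData are placeholders for my actual code):
// Simplified sketch of the disconnect handling described above.
var openSubscriptions = {}; // userId -> number of live subscriptions

Meteor.publish('users_collections', function() {
  var self = this;
  if (!self.userId) {
    return self.ready();
  }
  openSubscriptions[self.userId] = (openSubscriptions[self.userId] || 0) + 1;

  self.onStop(function() {
    openSubscriptions[self.userId] -= 1;
    if (openSubscriptions[self.userId] === 0) {
      // all windows closed: give the user 30s before cleaning up
      Meteor.setTimeout(function() {
        if (openSubscriptions[self.userId] === 0) {
          cleanupUserData(self.userId); // the cleanup I'm asking about below
        }
      }, 30 * 1000);
    }
  });

  // ...the observeChanges logic from above...
});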
I'm not sure how to actually clean those up from memory.
I don't see an "unpublish" or "Collection.stop" type of method in the Meteor API.
How would I perform this cleanup?
You need to do this in two steps. First, stop and delete the publications, then delete the collection from Meteor.
The first step is fairly easy. It requires you to store the handle of each subscription:
var handles = [];

Meteor.publish('some data', function() {
  // Do stuff, send cursors, this.ready(), ...
  handles.push(this);
});
And later in time, stop them:
var handle;
while ((handle = handles.shift())) {
  handle.stop();
}
All your publications are stopped. Now to delete the publication handler. It's less standard:
delete Meteor.default_server.publish_handlers['some data'];
delete Meteor.server.publish_handlers['some data'];
You basically have to burn down the reference to the handler. That's why it's less standard.
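Putting both steps together, a self-contained variant of the snippets above could look like this (handlesByName is a hypothetical map you would fill from inside each publish function, and publish_handlers is an undocumented internal, so adapt it to your Meteor version):
var handlesByName = {};

Meteor.publish('some data', function() {
  // Do stuff, send cursors, this.ready(), ...
  (handlesByName['some data'] = handlesByName['some data'] || []).push(this);
});

function unpublish(name) {
  var handles = handlesByName[name] || [];
  var handle;
  while ((handle = handles.shift())) {
    handle.stop(); // stop every live publication of this record set
  }
  // burn the registered handler (internal API, its location varies by version)
  if (Meteor.server) delete Meteor.server.publish_handlers[name];
  if (Meteor.default_server) delete Meteor.default_server.publish_handlers[name];
}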
For the collection, you first need to remove all documents, then you have to delete the reference. Fortunately, deleting all documents is very easy:
someCollection.remove({}); // kaboom (an empty selector removes every document on the server)
Deleting the reference to the collection is trickier. If it is a collection declared without a name (new Mongo.Collection(null);), then you can just drop the reference (delete someCollection if it is stored as an object property, or let the variable go out of scope); it is not stored anywhere else as far as I know.
If it is a named collection, then it also exists in the Mongo database, which means you have to tell Mongo to drop it. I have no idea how to do that at the moment. Maybe the Mongo driver can do it, or it will need some takeover of the DDP connection.
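If you are on a more recent Meteor release, the collection can hand you the underlying node Mongo driver collection, which knows how to drop itself. A sketch, assuming rawCollection() is available in your version:
// Server-side sketch: drop the named collection from MongoDB itself.
// rawCollection() exposes the node MongoDB driver collection (newer Meteor only).
someCollection.rawCollection().drop(function (err) {
  if (err) {
    console.error('Could not drop collection', err);
  }
  // after the drop, remove the JavaScript reference as described above
});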
You can call subscriptionHandle.stop(); on the client. If the user has disconnected, the publications will have stopped anyway.
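On the client that simply looks like:
// Client: keep the subscription handle and stop it when the data is no longer needed.
var handle = Meteor.subscribe('users_collections');
// ...later...
handle.stop();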
I am trying to implement a process as described below:
Create a sale_transaction document in the device.
Put the sale_transaction document in Pouch.
Since there's a live replication between Pouch & Couch, let the sale_transaction document flow to Couch.
Upon successful replication of the sale_transaction document to Couch, delete the document in Pouch.
Don't let the deleted sale_transaction document in Pouch flow to Couch.
Currently, I have implemented a two-way sync from both databases, where I'm filtering each document that is coming from Couch to Pouch, and vice versa.
For the replication from Couch to Pouch, I didn't want to let sale_transaction documents go through, since I could just get these documents from Couch.
PouchDb.replicate(remoteDb, localDb, {
  // Replicate from Couch to Pouch
  live: true,
  retry: true,
  filter: (doc) => {
    return doc.doc_type !== "sale_transaction";
  }
});
While for the replication from Pouch to Couch, I put in a filter to prevent deleted sale_transaction documents from going through.
PouchDb.replicate(localDb, remoteDb, {
  // Replicate from Pouch to Couch
  live: true,
  retry: true,
  filter: (doc) => {
    if (doc.doc_type === "sale_transaction" && doc._deleted) {
      // These are deleted transactions which I don't want to replicate to Couch
      return false;
    }
    return true;
  }
}).on("change", (change) => {
  // Handle change
  replicateOutChangeHandler(change);
});
I also implemented a change handler to delete the sale_transaction documents in Pouch after they have been written to Couch.
function replicateOutChangeHandler(change) {
  for (let doc of change.docs) {
    if (doc.doc_type === "sale_transaction" && !doc._deleted) {
      localDb.upsert(doc._id, function(prevDoc) {
        if (!prevDoc._deleted) {
          prevDoc._deleted = true;
        }
        return prevDoc;
      }).then((res) => {
        console.log("Deleted Document After Replication", res);
      }).catch((err) => {
        console.error("Deleted Document After Replication (ERROR): ", err);
      });
    }
  }
}
The flow of the data seems to work at first. But when I later fetch the sale_transaction document from Couch and edit it, I have to repeat the process of writing the document to Pouch, letting it flow to Couch, and then deleting it in Pouch. After a few rounds of editing the same document, though, the document in Couch ends up deleted as well.
I am fairly new to Pouch & Couch, and to NoSQL in general, and I'm wondering if I'm doing something wrong in this process.
For a situation like the one you've described above, I'd suggest tweaking your approach as follows:
Create a PouchDB database as a replication target from CouchDB, but treat this database as a read-only mirror of the CouchDB database, applying whatever transforms you need in order to strip certain document types from the local store. For the sake of this example, let's call this database mirror. The mirror database only gets updated one-way, from the canonical CouchDB database via transform replication.
Create a separate PouchDB database to store all your sales transactions. For the sake of this example, let's call this database user-data.
When the user creates a new sale transaction, this document is written to user-data. Listen for changes on user-data, and when a document is created, use the change handler to create and write the document directly to CouchDB.
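Here is a rough sketch of that change handler, assuming userData is the local PouchDB instance and remoteDb points at the CouchDB database (both names, and the choice to delete the local copy immediately, are illustrative):
// Sketch: treat user-data's _changes feed as a worker queue.
// When a sale_transaction shows up locally, push it straight to CouchDB.
var userData = new PouchDB('user-data');

userData.changes({live: true, since: 'now', include_docs: true})
  .on('change', function (change) {
    var doc = change.doc;
    if (doc._deleted || doc.doc_type !== 'sale_transaction') {
      return;
    }
    // strip the local revision so CouchDB treats it as a brand new document
    var copy = Object.assign({}, doc);
    delete copy._rev;
    remoteDb.put(copy)
      .then(function () {
        // optionally drop the local copy once CouchDB has acknowledged it
        return userData.remove(doc);
      })
      .catch(function (err) {
        console.error('Failed to push sale_transaction to CouchDB', err);
      });
  });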
At this point, CouchDB is receiving sales transactions from user-data, but your transform replication is preventing them from polluting mirror. You could leave it at that, in which case user-data will have local copies of all sales transactions. On logout, you can just delete the user-data database. Alternatively, you could add some more complex logic in the change handler to delete the document once CouchDB has received it.
If you really wanted to get fancy, you could do something even more elaborate. Leave the sales transactions in user-data after they're written to CouchDB, and in your transform replication from CouchDB to mirror, look for these newly created sales transaction documents. Instead of removing them, just strip them of everything but their _id and _rev fields, and use these as 'receipts'. When one of these IDs matches an ID in user-data, that document can be safely deleted.
Whichever method you choose, I suggest you think about your local PouchDB's _changes feed as a worker queue, instead of putting all of this elaborate logic in replication filters. The methods above should all survive offline cases without introducing conflicts, and recover nicely when connectivity is restored. I'd recommend the last solution, though it might be a bit more work than the others. Hope this helps.
Maybe add an additional field to mark records for deletion. Then a periodic routine running on both Pouch and Couch can scan for records marked for deletion and delete them.
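A sketch of what that periodic routine could look like on the Pouch side (pendingDelete is just an example field name):
// Sketch: every few minutes, purge documents that were flagged for deletion.
function purgeFlaggedDocs(db) {
  return db.allDocs({include_docs: true}).then(function (result) {
    var flagged = result.rows
      .map(function (row) { return row.doc; })
      .filter(function (doc) { return doc && doc.pendingDelete; });
    return Promise.all(flagged.map(function (doc) {
      return db.remove(doc); // leaves the usual _deleted tombstone
    }));
  });
}

setInterval(function () {
  purgeFlaggedDocs(localDb).catch(console.error);
}, 5 * 60 * 1000);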
I want to make a homepage where several pieces of data are published, but only when the user first visits the page: the visitor would get the latest 10 articles published, but that's it; the list won't keep changing.
Is there a way to make the inbuilt pub/sub mechanism turn itself off after a set amount of time or number of records, or another mechanism?
Right now I'm using a very simple setup that doesn't "turn off":
latestNews = new Mongo.Collection('latestNews');

if (Meteor.isClient) {
  Meteor.subscribe("latestNews");
}

if (Meteor.isServer) {
  Meteor.publish('latestNews', function() {
    return latestNews.find({}, {sort: {createdAt: -1}, limit: 10});
  });
}
The pub/sub pattern as it is implemented in Meteor is all about reactive data updates. In your case, that would mean that if the author or last update date of an article changes, users would see this change immediately reflected on their home page.
However you want to send data once and not update it ever again.
Meteor has built-in functionality to handle this scenario: Methods. A method is a way for the client to tell the server to execute computations and/or send pure, non-reactive data.
// Server code
var lastTenArticlesOptions = {
  sort: {
    createdAt: -1
  },
  limit: 10
};

Meteor.methods({
  'retrieve last ten articles': function() {
    return latestNews.find({}, lastTenArticlesOptions).fetch();
  }
});
Note that, contrary to publications, we do not send a Mongo.Cursor! Cursors are used in publications as a handy (aka magic) way to tell the server which data to send.
Here, we send the data directly by fetching the cursor to get an array of articles, which is then automatically EJSON-stringified and sent to the client.
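On the client you simply call the method and keep the result wherever suits you; for example (Session is just one possible place to store it):
// Client code: fetch the articles once, no reactivity involved.
Meteor.call('retrieve last ten articles', function (error, articles) {
  if (error) {
    console.error('Could not load the latest articles', error);
    return;
  }
  Session.set('latestArticles', articles); // or a ReactiveVar, template state, ...
});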
If you need to send reactive data to the client and then, at some later point in time, stop pushing updates, your best bet is to rely on a pub/sub temporarily and then manually stop the publication (server-side) or the subscription (client-side):
Meteor.publish('last ten articles', function() {
  return latestNews.find({}, lastTenArticlesOptions);
});

var subscription = Meteor.subscribe('last ten articles');

// Later...
subscription.stop();
On the server-side you would store the publication handle (this) and then manipulate it.
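For instance (a sketch; keeping the handles in an array is just one way to track them):
// Server: remember each publication handle so it can be stopped later.
var articlePublications = [];

Meteor.publish('last ten articles', function() {
  articlePublications.push(this);
  return latestNews.find({}, lastTenArticlesOptions);
});

// Later, stop pushing updates to every subscribed client:
articlePublications.forEach(function(handle) {
  handle.stop();
});
articlePublications = [];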
Stopping a subscription or publication does not destroy the documents already sent (the user won't see the last ten articles suddenly disappear).
I've been trying to wrap my head around best RESTful practices while using BackboneJS. I feel like I've written myself into a bit of a knot and could use some guidance.
My scenario is this: a user wants to create a new Playlist with N items in it. The data for the N items is coming from a third-party API in bursts of 50 items. As such, I want to add a new, empty Playlist and, as the bursts of 50 come in, save the items and add to my Playlist.
This results in my Playlist model having a method, addItems, which looks like:
addItems: function (videos, callback) {
  var itemsToSave = new PlaylistItems();
  var self = this;

  // Create a new PlaylistItem with each Video.
  videos.each(function (video) {
    var playlistItem = new PlaylistItem({
      playlistId: self.get('id'),
      video: video
    });
    itemsToSave.push(playlistItem);
  });

  itemsToSave.save({}, {
    success: function () {
      // OOF TERRIBLE.
      self.fetch({
        success: function () {
          // TODO: For some reason when I call self.trigger then allPlaylists triggers fine,
          // but if I go through fetch it doesn't trigger?
          self.trigger('reset', self);
          if (callback) {
            callback();
          }
        }
      });
    },
    error: function (error) {
      console.error("There was an issue saving " + self.get('title'), error);
    }
  });
}
itemsToSave is generally a Collection with 50 items in it. Since Backbone does not provide a save method for Collections, I wrote my own. I didn't care much for creating a Model wrapper around my Collection.
So, when I call save, none of my items have IDs. The database assigns the IDs, but that information isn't implicitly updated by Backbone, because I'm saving a Collection and not a Model. As such, once the save is successful, I call fetch on my Playlist to retrieve the updated information. This is terrible, because a Playlist could have thousands of items in it -- I don't want to fetch thousands of items every time I save a batch of them.
So, I'm thinking maybe I need to override the Collection's parse method and manually map the server's response back to the Collection.
This all seems... overkill/wrong. Am I doing something architecturally incorrect? How does a RESTful architecture handle such a scenario?
My opinion: do what works and feels clean enough, and disregard what the RESTafarian credo might say. Bulk create, bulk update, and bulk delete are real-world use cases that the REST folks just close their eyes to and pretend don't exist. Something along these lines sounds like a reasonable first attempt to me (a rough sketch follows the list):
create a bulkAdd method, or carefully override add if you are feeling confident
don't make models or add them to the collection yet, though
do your bulk POST (or whatever) to get them into the database and get the assigned IDs back
then add them as models to the collection
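A rough sketch of what that could look like; the bulk endpoint URL, the response shape (created items echoed back with their database ids), and the nested items collection (this.get('items')) are all assumptions about your app:
// Sketch of a bulkAdd on the Playlist model (Backbone with jQuery assumed).
addItems: function (videos, callback) {
  var self = this;

  // plain attribute hashes only -- no models, nothing added to the collection yet
  var payload = videos.map(function (video) {
    return { playlistId: self.get('id'), video: video.toJSON() };
  });

  Backbone.ajax({
    url: '/playlists/' + self.get('id') + '/items', // hypothetical bulk endpoint
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(payload),
    success: function (createdItems) {
      // the response carries the assigned ids, so the models are
      // complete by the time they enter the collection
      self.get('items').add(createdItems);
      if (callback) { callback(); }
    },
    error: function (xhr) {
      console.error('Bulk save failed for ' + self.get('title'), xhr);
    }
  });
}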
I have a set of records that I would like to update sequentially in perpetuity. Basically:
Get least recently updated record
Update record
Set date of record to now (i.e. send it to the back of the list)
Back to step 1
Here is what I was thinking using Firebase:
// update record function
var updateRecord = function() {
  // get least recently updated record
  firebaseOOO.limit(1).once('value', function(snapshot) {
    var key = _.keys(snapshot.val())[0];
    /*
     * do 1-5 seconds of non-Firebase processing here
     */
    snapshot.ref().child(key).transaction(
      // update record
      function(data) {
        return updatedData;
      },
      // update priority after commit (would like to do it in transaction)
      function(error, committed, snap2) {
        snap2.ref().setPriority(snap2.val().dateUpdated);
      }
    );
  });
};
// listen whenever priority changes (aka. new item needs processing)
firebaseOOO.on('child_moved', function(snapshot) {
  updateRecord();
});
// kick off the whole thing
updateRecord();
Is this a reasonable thing to do?
In general, this type of daemon is precisely what was envisioned for use with the Firebase NodeJS client. So, the approach looks good.
However, in the on() call it looks like you're dropping the snapshot that's being passed in on the floor. This might be application specific to what you're doing, but it would be more efficient to consume that snapshot in relation to the once() that happens in the updateRecord().
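In other words, something along these lines (a sketch; updatedData is the same placeholder used in the question):
// Sketch: consume the snapshot handed to child_moved instead of re-querying.
var processSnapshot = function(snapshot) {
  /*
   * do 1-5 seconds of non-Firebase processing on snapshot.val() here
   */
  snapshot.ref().transaction(
    // update record
    function(data) {
      return updatedData; // placeholder from the question
    },
    // update priority after commit
    function(error, committed, snap2) {
      if (committed) {
        snap2.ref().setPriority(snap2.val().dateUpdated);
      }
    }
  );
};

firebaseOOO.on('child_moved', function(snapshot) {
  processSnapshot(snapshot);
});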
I'm quite new to Meteor and Mongo, and even though I don't want them, I need some relations.
I have a Collection called Feeds and another called UserFeeds where I have a feedid and a userid, and I publish the user feeds on the server like this:
Meteor.publish('feeds', function() {
  return Feeds.find({_id: {$in: _.pluck(UserFeeds.find({user: this.userId}).fetch(), 'feedid')}});
});
I find the user on UserFeeds, fetch it (returns an array) and pluck it to have only the feedid field, and then find those feeds on the Feeds collection.
And subscribe on the client like this:
Deps.autorun(function() {
  Meteor.subscribe("feeds");
});
The problem is that when I add a new feed and a new userfeed the client doesn't receive the change, but when I refresh the page the new feed does appear.
Any idea of what I'm missing here?
Thanks.
I've run into this, too. It turns out publish functions on the server don't re-run reactively: if they return a Collection cursor, as you're doing (and as most publish functions do), then the publish function will run once and Meteor will store the cursor and send down updates only when the contents of the cursor change. The important thing here is that Meteor will not re-run the publish function, nor, therefore, the Collection.find(query), when query changes.
If you want the publish function to re-run, then the way I've done it so far is to set up the publish function to receive an argument. That way the client, whose collections do update reactively, can re-subscribe reactively. The code would look something like:
// client
Meteor.subscribe('user_feeds');

Deps.autorun(function() {
  var allFeeds = UserFeeds.find({user: Meteor.userId()}).fetch();
  var feedIds = _.pluck(allFeeds, 'feedid');
  Meteor.subscribe('feeds', feedIds);
});

// server
Meteor.publish('feeds', function(feedIds) {
  return Feeds.find({_id: {$in: feedIds}});
});
I believe the Meteorite package publish-with-relations is designed to solve this problem, although I haven't used it.
EDIT: I believe the publish function will re-run when the userId changes, which means that you can have a server-side check to make sure the user is logged in before publishing sensitive data.
I think your problem is that .fetch() which you use here…
UserFeeds.find({user:this.userId}).fetch()
…removes the reactivity.
.fetch() returns an array instead of a cursor, and that array won't be reactive.
http://docs.meteor.com/#fetch
try this ...
Meteor.autosubscribe(function() {
  Meteor.subscribe("feeds");
});
and in the Template JS ...
Template.templateName.feeds = function() {
  return Feeds.find(); // or any specific call
};
in the HTML ...
{{#each feeds}}
  do some stuff
{{else}}
  no feed
{{/each}}
You can use the reactive-publish package (I am one of the authors). It allows you to create publish endpoints which depend on the result of another query, in your case a query on UserFeeds.
Meteor.publish('feeds', function () {
  this.autorun(function (computation) {
    var feeds = _.pluck(UserFeeds.find({user: this.userId}, {fields: {feedid: 1}}).fetch(), 'feedid');
    return Feeds.find({_id: {$in: feeds}});
  });
});
The important part is that you limit the UserFeeds fields only to feedid to make sure autorun does not rerun when some other field changes in UserFeeds, a field you do not care about.