I have a set of records that I would like to update sequentially in perpetuity. Basically:
1. Get the least recently updated record
2. Update the record
3. Set the record's date to now (i.e., send it to the back of the list)
4. Go back to step 1
Here is what I was thinking using Firebase:
// update record function
var updateRecord = function() {
  // get least recently updated record (lowest priority first)
  firebaseOOO.limit(1).once('value', function(snapshot) {
    var key = _.keys(snapshot.val())[0];
    /*
     * do 1-5 seconds of non-Firebase processing here
     */
    snapshot.ref().child(key).transaction(
      // update record (updatedData is the result of the processing above)
      function(data) {
        return updatedData;
      },
      // update priority after commit (would like to do it in the transaction)
      function(error, committed, snap2) {
        snap2.ref().setPriority(snap2.val().dateUpdated);
      }
    );
  });
};
// listen whenever priority changes (aka. a new item needs processing)
firebaseOOO.on('child_moved', function(snapshot) {
  updateRecord();
});
// kick off the whole thing
updateRecord();
Is this a reasonable thing to do?
In general, this type of daemon is precisely what was envisioned for use with the Firebase NodeJS client. So, the approach looks good.
However, in the on() call it looks like you're dropping the snapshot that's passed in on the floor. That may be specific to your application, but it would be more efficient to consume that snapshot instead of re-querying with once() inside updateRecord().
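For example, a minimal sketch of that idea, reusing the transaction body from above (updatedData stands in for the result of your processing, as in your original code):
// consume the child_moved snapshot directly instead of re-querying with once()
firebaseOOO.on('child_moved', function(snapshot) {
  /*
   * do 1-5 seconds of non-Firebase processing here
   */
  snapshot.ref().transaction(
    function(data) {
      return updatedData;
    },
    function(error, committed, snap2) {
      snap2.ref().setPriority(snap2.val().dateUpdated);
    }
  );
});
// a single once()-based updateRecord() call is still needed to kick things off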
I have to connect to an external database and get access to its collections. It works fine when I use it, but the problem is when I need collection hooks, e.g. Collection.after.insert(function(userId, doc)). The hook is not being fired. I have the following code:
// TestCollection.js
let database = new MongoInternals.RemoteCollectionDriver(
  "mongodb://127.0.0.1:3001/meteor",
  { oplogUrl: 'mongodb://127.0.0.1:3001/local' }
);
let TestCollection = new Mongo.Collection("testCollection", { _driver: database });
module.exports.TestCollection = TestCollection;
console.log(TestCollection.findOne({name: 'testItem'})); // writes out the item correctly
// FileUsingCollection.js
import { TestCollection } from '../collections/TestCollection.js';
console.log(TestCollection.findOne({name: 'testItem'})); // writes out the item correctly a second time
TestCollection.after.update(function (userId, doc) {
  console.log('after update');
}); // this is NOT being fired when I change the content of the remote collection (from the external app whose database I am connected to)
How can I make this work?
EDIT:
I have spent many hours reading about this, and I think it might be connected with things like:
- oplog
- replicaSet
But I am a newbie to Meteor and can't figure out what those things are about. I have set MONGO_OPLOG_URL and I added the oplog parameter to the database driver, as I read here: https://medium.com/@lionkeng/2-ways-to-share-data-between-2-different-meteor-apps-7b27f18b5de9
but nothing changed. And I don't know how to use a replicaSet or how to add it to the URL. Can anybody help?
You can also try something like the code below:
var observer = YourCollections.find({}).observeChanges({
  added: function (id, fields) {
  }
});
You can also use 'addedBefore(id, fields, before)', 'changed(id, fields)', 'movedBefore(id, before)', and 'removed(id)'.
For more features, see the Meteor documentation on observeChanges.
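If the hook still doesn't fire for external writes, one likely cause (an assumption based on the oplog notes in the question, not something verified here) is that the Mongo server isn't running as a replica set, so there is no oplog to tail. A rough sketch of the connection strings, where the replica-set name rs0 is made up:
// Assumes mongod was started with --replSet rs0 and rs.initiate() has been run.
// The replicaSet query parameter is part of the standard Mongo connection URL;
// the oplog itself lives in the 'local' database.
let database = new MongoInternals.RemoteCollectionDriver(
  "mongodb://127.0.0.1:3001/meteor?replicaSet=rs0",
  { oplogUrl: "mongodb://127.0.0.1:3001/local" }
);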
I'm looking to implement a solution where I can query the database through Mongoose on a regular interval and then store the results to serve to my clients.
I'm assuming this will reduce my response time when my users pull the collection.
I attempted to implement this plan by creating an empty global object and then writing a function that queries the db and stores the results in that global object. At the end of the function I set a 60-second timeout that runs the function again. I call this function the first time the server controller gets called when the app is first run.
I then set my clients up so that when they requested the collection, it would first look to see if the global object exists, and if so return that as the response. I figured this would cut my 7-10 second queries down to < 1 sec.
In my novice thinking I assumed that, Node.js being 'single-threaded', something like this could work quite well, but it just seemed to eat up all my RAM and cause fatal errors.
Am I on the right track with my thinking or is it better to query the db every time people pull the collection?
Here is the code in question:
var allLeads = {};
var getAllLeads = function() {
  allLeads = {};
  console.log('Getting All Leads...');
  Lead.find().sort('-lastCalled').exec(function(err, leads) {
    if (err) {
      console.log('Error getting leads');
    } else {
      allLeads = leads;
    }
  });
  setTimeout(function() {
    getAllLeads();
  }, 60000);
};
getAllLeads();
Thanks in advance for your assistance.
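For reference, a minimal sketch of the same polling pattern restructured to avoid overlapping queries, assuming the same Lead model as above: the next refresh is scheduled only after the current query finishes, and lean() makes Mongoose return plain objects instead of full documents, which keeps memory usage down.
var allLeads = [];
var getAllLeads = function() {
  console.log('Getting All Leads...');
  Lead.find().sort('-lastCalled').lean().exec(function(err, leads) {
    if (err) {
      console.log('Error getting leads');
    } else {
      allLeads = leads; // swap in the fresh copy only once it's ready
    }
    // schedule the next refresh only after this one completes
    setTimeout(getAllLeads, 60000);
  });
};
getAllLeads();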
I'm using AngularFire. I have some code which is supposed to add a new record to an array of records and, in the promise's then function, re-evaluate the array to find out which record in the collection has the most recent datestamp.
this.addRecord = function() {
  // Add a new record with the value the user has typed in
  $scope.records.$add({
    "date": (new Date()).toString(),
    "value": $scope.newValue
  }).then(function(ref) {
    // Use underscore's max to determine which record is the newest
    var _newestValue = _.max($scope.records, function(record) {
      return record.date;
    }).value;
    sync.$update({ 'newestValue': _newestValue });
  });
  // Have angular clear the field
  $scope.newValue = '';
};
The problem is that when the promise's .then() fires, my local copy of $scope.records has not yet been updated with the newest record. So while Firebase out on the server now has the new record, when I iterate over $scope.records I get all the records except the one I just added. After the .then() completes, I can see that the record has been added.
Maybe I'm using the promise wrong? I was under the impression that AngularFire would only call the .then() after Angular had added the new record on the server and synced up the local collection.
What's the right way to do this? I just need to reliably know when the record has been added locally. Thanks in advance!
So it turns out using model.$watch was the right way to go. $watch only fires when synced changes are made to the model, so you know you can reliably count on them.
var watchCallback = function() {
  if ($scope.loadingModel) return; // block updates during load
  var _newestValue = _.max($scope.records, function(record) {
    return record.date;
  }).value;
  sync.$update({ 'newestValue': _newestValue });
};
$scope.records.$watch(watchCallback);
AngularFire's model.$add().then() fires when the changes have been sent to the server, not when the local model has been synced up on the client. So $add().then() is more appropriately used to confirm that changes were saved, or something along those lines.
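For example, a small sketch of that confirmation use (assuming, per the AngularFire API of this era, that the promise resolves with the new record's Firebase reference, whose key is available via name()):
$scope.records.$add({
  "date": (new Date()).toString(),
  "value": $scope.newValue
}).then(function(ref) {
  // the write has reached the server; ref points at the new record
  console.log('saved record ' + ref.name());
});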
I see that when publishing, collection._connection.publish_handlers is populated, and so is collection._connection.method_handlers, and probably other areas.
I want to clean up memory by removing the references to that collection and its publication entirely.
Basically, each user of the app has their own list of collections. There is a publish function like this that gives the user their list of collections:
Meteor.publish('users_collections', function() {
  var self = this;
  var handle = UsersCollections.find({ownerId: self.userId}).observeChanges({
    added: function(id, collectionInfo) {
      UsersCollectionManager.addUsersCollection(self.userId, collectionInfo.name);
    }
  });
});
That publishes that user's list of collections (and any user that connects gets their list).
Once the user gets their list, each of those collections is made reactive with new Meteor.Collection and then published.
UsersCollectionManager.addUsersCollection = function(userId, collectionName) {
  if (self.collections[userId].collections[collectionName] === undefined) {
    self.collections[userId].collections[collectionName] = new Meteor.Collection(collectionName);
    Meteor.publish(collectionName, function() {
      return self.collections[userId].collections[collectionName].find();
    });
  }
};
Once the user disconnects, a cleanup function runs. If that user has no connections open (e.g. they had multiple windows open and all of them are now closed), it starts a 30s timeout to clean up all of these publish calls and new Meteor.Collection calls to save memory, since the other users of the app won't need this user's collections.
I'm not sure how to actually cleanup those from memory.
I don't see an "unpublish" or "Collection.stop" type of method in the Meteor API.
How would I perform cleanup?
You need to do this in two steps. First, stop and delete the publications, then delete the collection from Meteor.
The first step is fairly easy. It requires you to store the handles of each subscription:
var handles = [];
Meteor.publish('some data', function() {
  // do stuff, send cursors, this.ready(), ...
  handles.push(this);
});
And later on, stop them:
var handle;
while (handle = handles.shift()) {
  handle.stop();
}
All your publications are stopped. Now to delete the publication handler. It's less standard:
delete Meteor.default_server.publish_handlers['some data'];
delete Meteor.server.publish_handlers['some data'];
You basically have to burn down the reference to the handler. That's why it's less standard.
For the collection, you first need to remove all documents and then delete the reference. Fortunately, deleting all documents is very easy:
someCollection.remove({}); // kaboom (a server-side remove with an empty selector deletes every document)
Deleting the reference to the collection is trickier. If it is a collection declared without a name (new Mongo.Collection(null)), then you can just delete the variable (delete someCollection); it is not stored anywhere else as far as I know.
If it is a named collection, then it exists in the Mongo database, which means you have to tell Mongo to drop it. I have no idea how to do that at the moment. Maybe the Mongo driver can do it, or it will need some takeover on the DDP connection.
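One possible route leans on Meteor internals, so treat the exact names here as assumptions that may differ between versions: grab the raw node driver's Db object and ask Mongo to drop the collection.
// MongoInternals exposes the underlying node-mongodb-native Db object.
var db = MongoInternals.defaultRemoteCollectionDriver().mongo.db;
db.dropCollection('someCollectionName', function(err, result) {
  if (err) { console.log('drop failed:', err); }
});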
You can call subscriptionHandle.stop(); on the client. If the user has disconnected, the publications will have stopped anyway.
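For completeness, a minimal client-side sketch of that, using the 'users_collections' publication from the question:
// keep the handle that Meteor.subscribe() returns...
var sub = Meteor.subscribe('users_collections');
// ...and stop it once the data is no longer needed
sub.stop();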
I have the following query:
fire = new Firebase 'ME.firebaseio.com'
users = fire.child 'venues/ID/users'
users.once 'value', (snapshot) ->
  # do things with snapshot.val()
  ...
I am loading 10+ MB of data, and the request takes around 1 second per MB. Is it possible to give the user a progress indicator as content streams in? Ideally I'd like to process the data as it comes in as well (not just notify).
I tried using the on "child_added" event instead, but it doesn't work as expected: instead of children streaming in at a consistent rate, they all arrive at once after the entire dataset is loaded (which takes 10-15 seconds), so in practice it seems to be a less performant version of on "value".
You should be able to cut your download time from 10-20 seconds to a few milliseconds by starting with some denormalization.
For example, we could move the images and any other peripherals comprising the majority of the payload to their own path, keep only the meta data (name, email, etc) in the user records, and grab the extras separately:
/users/user_id/name, email, etc...
/images/user_id/...
The number of event listeners you attach, or the number of paths you connect to, does not add any significant overhead locally or to networking bandwidth (you only pay for the payload), so you can do something like this to "normalize" after grabbing the meta data:
var firebaseRef = new Firebase(URL);
firebaseRef.child('users').on('child_added', function(snap) {
  console.log('got user ', snap.name());
  // I chose once() here to snag the image, assuming they don't change much
  // but on() would work just as well
  firebaseRef.child('images/' + snap.name()).once('value', function(imageSnap) {
    console.log('got image for user ', imageSnap.name());
  });
});
You'll notice right away that when you move the bulk of the data out and keep only the meta info for users locally, they will be lightning-fast to grab (all of the "got user" logs will appear right away). Then the images will trickle in one at a time after this, allowing you to create progress bars or process them as they show up.
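To illustrate, here's a rough progress-bar sketch on top of that structure. It assumes a /users_count value that you maintain yourself (a made-up path, not something Firebase provides):
var loaded = 0, total = 0;
firebaseRef.child('users_count').once('value', function(snap) {
  total = snap.val();
});
firebaseRef.child('users').on('child_added', function(snap) {
  firebaseRef.child('images/' + snap.name()).once('value', function(imageSnap) {
    loaded++;
    // total may still be 0 if the count hasn't arrived yet
    if (total) { console.log(Math.round(100 * loaded / total) + '% loaded'); }
  });
});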
If you aren't willing to denormalize the data, there are a couple ways you could break up the loading process. Here's a simple pagination approach to grab the users in segments:
var firebaseRef = new Firebase(URL);
grabNextTen(firebaseRef, null);
function grabNextTen(ref, startAt) {
  // fetch 11 when resuming, since startAt() re-includes the previous last entry
  ref.limit(startAt ? 11 : 10).startAt(startAt).once('value', function(snap) {
    var lastEntry;
    snap.forEach(function(userSnap) {
      // skip the startAt() entry, which we've already processed
      if (userSnap.name() === startAt) { return; }
      processUser(userSnap);
      lastEntry = userSnap.name();
    });
    // stop once no new entries come back
    if (lastEntry === undefined) { return; }
    didTenUsers(lastEntry);
    // setTimeout closes the call stack, allowing us to recurse
    // indefinitely without a maximum call stack error
    setTimeout(grabNextTen.bind(null, ref, lastEntry), 0);
  });
}
function processUser(snap) {
  console.log('got user', snap.name());
}
function didTenUsers(lastEntry) {
  console.log('finished up to ', lastEntry);
}
A third popular approach would be to store the images in a static cloud asset store like Amazon S3 and simply keep the URLs in Firebase. For large data sets in the hundreds of thousands, this is very economical, since those solutions are a bit cheaper than Firebase storage.
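Under that approach a user record might look something like this (the bucket name and path are made-up placeholders):
var userId = 'user_id'; // hypothetical key, matching the /users/user_id layout above
firebaseRef.child('users/' + userId).set({
  name: 'Kato',
  email: 'kato@example.com',
  // the image itself was uploaded to S3 separately; only the URL lives here
  imageUrl: 'https://my-bucket.s3.amazonaws.com/avatars/' + userId + '.jpg'
});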
But I'd highly suggest you both read the article on denormalization and invest in that approach first.