MongoDB Tailable Cursor in Node.js, how to stop the stream - javascript

I use the code below to get data from a MongoDB capped collection:
function listen(conditions, callback) {
    db.openConnectionsNew([req.session.client_config.db], function(err, conn) {
        if (err) { console.log({err: err}); return next(err); }
        coll = db.opened[db_name].collection('messages');
        latestCursor = coll.find(conditions).sort({$natural: -1}).limit(1);
        latestCursor.nextObject(function(err, latest) {
            if (latest) {
                conditions._id = {$gt: latest._id};
            }
            options = {
                tailable: true,
                awaitdata: true,
                numberOfRetries: -1
            };
            stream = coll.find(conditions, options).sort({$natural: -1}).stream();
            stream.on('data', callback);
        });
    });
}
and then I use sockets.broadcast(roomName, 'data', document);
On the client side:
io.socket.get('/get_messages/', function(resp) {
});
io.socket.on('data', function notificationReceivedFromServer(data) {
    console.log(data);
});
This works perfectly, as I am able to see any new document that is inserted into the db.
I can see in mongod -verbose that every 200ms there is a query running with {$gt: latest_id}, and this is fine, but I have no idea how I can close this query. I am very new to Node.js and am using the MongoDB tailable option for the first time, so I am totally lost. Any help or clue is highly appreciated.

What is returned from the .stream() method of the Cursor object returned from .find() is an implementation of the node stream interface. Specifically, this is a "readable" stream.
As such, its "data" event is emitted whenever there is new data received and available in the stream to be read.
There are other methods such as .pause() and .resume() which can be used to control the flow of these events. Typically you would call these "inside" a "data" event callback, where you wanted to make sure the code in that callback was executed before the "next" data event was processed:
stream.on("data", function(data) {
    // pause before processing
    stream.pause();
    // do some work, possibly with a callback
    something(function(err, result) {
        // Then resume when done
        stream.resume();
    });
});
But of course this is just a matter of "scoping". So as long as the "stream" variable is defined in a scope where another piece of code can access it, then you can call either method at any time.
Again, by the same token of scoping, you can just "undefine" the "stream" object at any point in the code, making the "event processing" redundant.
// Just overwrite the object
stream = undefined;
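Expanding on that scoping point, here is a minimal sketch of a start/stop pair. This is not code from the question or the driver docs; startListening and stopListening are hypothetical helpers, and it assumes the stream reference is kept somewhere other code can reach it:
// Sketch only: startListening/stopListening are hypothetical helpers.
var stream; // kept in an outer scope so stopListening() can reach it

function startListening(coll, conditions, callback) {
    var options = { tailable: true, awaitdata: true, numberOfRetries: -1 };
    stream = coll.find(conditions, options).sort({$natural: -1}).stream();
    stream.on('data', callback);
}

function stopListening() {
    if (stream) {
        stream.pause();                    // stop emitting 'data' events
        stream.removeAllListeners('data'); // drop the callback
        stream = undefined;                // release the reference
    }
}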
So it is worth knowing. In fact, the newer "version 2.x" of the node driver wraps a "stream interface" directly into the standard Cursor object without the need to call .stream() to convert. Node streams are very useful and powerful things, and it is well worth coming to terms with their usage.

Related

Understanding anonymous functions in Express js

I am new to Express and am trying to wrap my head around callbacks in RESTful actions. In my PUT request below, I'm confused about the line I have marked. Why is response.pageInfo.book being set to the second parameter of the anonymous function (result)? That seems kind of arbitrary.
Also, what is the best way to inspect some of these parameters (req, res, result, etc.)? When I console.log them, nothing shows up in my terminal or in my browser console.
exports.BookEdit = function(request, response) {
    var id = request.params.id;
    Model.BookModel.findOne({
        _id: id
    }, function(error, result) {
        if (error) {
            console.log("error");
            response.redirect('/books?error=true&message=There was an error finding a book with this id');
        } else {
            response.pageInfo.title = "Edit Book";
            response.pageInfo.book = result; // <-- the marked line
            response.render('books/BookEdit', response.pageInfo);
        }
    });
};
The findOne function takes a query ({_id : id}) and a callback as arguments. The callback gets called after findOne has finished querying the database. This callback pattern is very common in nodejs. Typically the callback will have 2 arguments
the first one, error, is only set if there was an error;
the second one usually contains the value being returned. In this case you are finding one book in the database.
The line you have marked is where the book object is assigned to a variable which will be sent back to be rendered in the browser. It is basically just a JavaScript object.
As for your second question, about how to debug this stuff, here is what you can do:
In your code, type the word debugger;
e.g.
var id = request.params.id;
debugger;
Next, instead of running your program like this:
node myprogram.js
... run with debug flag, i.e.
node debug myprogram.js
It will pause at the beginning, and you can continue by pressing c and then Enter.
Next it will stop at that debugger line above. Type repl and then Enter, and you'll be able to inspect objects and variables by typing their names.
This works very well and requires no installation. However, you can also take a more visual approach and install a debugger such as node-inspector, which does the same thing but in a web browser. If you use a good IDE (e.g. WebStorm) you can also debug Node.js pretty easily.
In the above, the document that is the result of the findOne() query is being added to the pageInfo key of the response and is then being rendered in a template. The first parameter is a potential error that must be checked and the remainder contain data. It's the standard node idiom that an asynchronous call returns to a callback where you do your work.
The writer of the code has also decided to decorate the response object with an extra attribute. This is often done when a request passes through a number of middleware functions and you might want to build up the response incrementally (for example having a middleware function that adds information about the current user to the pageInfo key).
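As an illustration of that kind of decoration, here is a minimal sketch of such a middleware. The names are hypothetical; it assumes an Express app and that the session has already been populated:
// Sketch only: hypothetical middleware that builds up res.pageInfo.
app.use(function(req, res, next) {
    res.pageInfo = res.pageInfo || {};   // make sure the key exists
    res.pageInfo.currentUser = req.session ? req.session.user : null;
    next();                              // hand control to the next handler
});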
Look and see what else is on response.pageInfo. Information was probably put there by previous middleware (especially since the function above expects the pageInfo key to exist). Just do a console.log(response.pageInfo) and look on your server log or standard out.

Parse Cloud Code Ending Prematurely?

I'm writing a job that I want to run every hour in the background on Parse. My database has two tables. The first contains a list of Questions, while the second lists all of the user/question agreement pairs (QuestionAgreements). Originally my plan was just to have the client count the QuestionAgreements itself, but I'm finding that this results in a lot of requests that really could be done away with, so I want this background job to run the count and then update a field directly on Question with it.
Here's my attempt:
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
    Parse.Cloud.useMasterKey();
    var query = new Parse.Query("Question");
    query.each(function(question) {
        var agreementQuery = new Parse.Query("QuestionAgreement");
        agreementQuery.equalTo("question", question);
        agreementQuery.count({
            success: function(count) {
                question.set("agreementCount", count);
                question.save(null, null);
            }
        });
    }).then(function() {
        status.success("Finished updating Question Agreement Counts.");
    }, function(error) {
        status.error("Failed to update Question Agreement Counts.");
    });
});
The problem is, this only seems to be running on a few of the Questions, and then it stops, appearing in the Job Status section of the Parse Dashboard as "succeeded". I suspect the problem is that it's returning prematurely. Here are my questions:
1 - How can I keep this from returning prematurely? (Assuming this is, in fact, my problem.)
2 - What is the best way of debugging cloud code? Since this isn't client side, I don't have any way to set breakpoints or anything, do I?
status.success is called before the asynchronous success calls of count are finished. To prevent this, you can use promises here. Check the docs for Parse.Query.each.
Iterates over each result of a query, calling a callback for each one. If the callback returns a promise, the iteration will not continue until that promise has been fulfilled.
So, you can chain the count promise:
agreementQuery.count().then(function (count) {
    question.set("agreementCount", count);
    question.save(null, null);
});
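Putting that together with the docs quote above, a minimal sketch of the whole job might look like this. It is an interpretation of the answer, not code from it; the key point is returning the chained promise from the each callback so the iteration waits for each count and save:
// Sketch (not from the answer): return the promise from the each callback.
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
    Parse.Cloud.useMasterKey();
    var query = new Parse.Query("Question");
    query.each(function(question) {
        var agreementQuery = new Parse.Query("QuestionAgreement");
        agreementQuery.equalTo("question", question);
        // Returning the promise makes each() wait before moving on.
        return agreementQuery.count().then(function(count) {
            question.set("agreementCount", count);
            return question.save();
        });
    }).then(function() {
        status.success("Finished updating Question Agreement Counts.");
    }, function(error) {
        status.error("Failed to update Question Agreement Counts.");
    });
});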
You can also use parallel promises to make it more efficient.
There are no breakpoints in Cloud Code, which makes Parse really hard to debug. The only way is to log your variables with console.log.
I was able to utilize promises, as suggested by knshn, to make it so that my code would complete before calling status.success.
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
    Parse.Cloud.useMasterKey();
    var promises = []; // Set up a list that will hold the promises being waited on.
    var query = new Parse.Query("Question");
    query.each(function(question) {
        var agreementQuery = new Parse.Query("QuestionAgreement");
        agreementQuery.equalTo("question", question);
        agreementQuery.equalTo("agreement", 1);
        // Make sure that the count finishes running first!
        promises.push(agreementQuery.count().then(function(count) {
            question.set("agreementCount", count);
            // Make sure that the object is actually saved first!
            promises.push(question.save(null, null));
        }));
    }).then(function() {
        // Before exiting, make sure all the promises have been fulfilled!
        Parse.Promise.when(promises).then(function() {
            status.success("Finished updating Question Agreement Counts.");
        });
    });
});

resending peerconnection offer

I'm currently trying to rebroadcast my local stream to all my peer connections. Options I tried:
1) Loop through all my peer connections and recreate them with the new local stream. The problem I encounter here is the fact that createOffer is asynchronous.
2) Create one SDP and send it to all peers. Problem: no video.
Would anyone have a way to resend an offer to a list of peers?
Each PC needs to recreate an offer (as bwrent said).
As you obviously are using p2p multiparty (multiple peer connections), you might want to pass the peerID to the createOffer success callback every time; then you don't have to worry about it being asynchronous. You need to make the full handshake (offer, answer, candidate) peerID-dependent.
(Simplified) Example from our SDK
Skyway.prototype._doCall = function (targetMid) {
    var self = this;
    var pc = this._peerConnections[targetMid]; // this is thread / asynchronous safe
    pc.createOffer(
        function (offer) {
            self._setLocalAndSendMessage(targetMid, offer); // pass the targetID down the callback chain
        },
        function (error) { self._onOfferOrAnswerError(targetMid, error); },
        constraints
    );
};
Skyway.prototype._setLocalAndSendMessage = function (targetMid, sessionDescription) {
    var self = this;
    var pc = this._peerConnections[targetMid]; // this is thread / asynchronous safe
    pc.setLocalDescription(
        sessionDescription,
        function () { self._sendMessage({ target: targetMid /* ... */ }); }, // success callback
        function () {} // error callback
    );
};
If by async you mean that when a callback fires it has the wrong variable for whom to send it to, because the loop has already ended and the variable contains the last 'person', then you can scope it to solve the asynchronous problem:
for (var i = 0; i < peerConnections.length; i++) {
    (function(id) {
        // Inside here you have the right id. Even if the loop has ended and the
        // i variable has changed to something else, the id variable is still the same.
    })(i);
}
This is a bit like Alex's answer, as his answer also describes an example of scoping the variable inside the function executing the .createOffer.
Another way to handle this correctly is to use renegotiation. Whenever you change a stream, the onnegotiationneeded event handler is automatically fired. Inside this function you create a new offer and send that to the other person. As you mentioned you have multiple peer connections listening to the stream, you need to know whom to send the SDP to. If you add the person's id to the rtc object, you can then get it back inside the onnegotiationneeded handler via this.id.
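A minimal sketch of that renegotiation idea, using the legacy callback-style WebRTC API. The sendToPeer and logError helpers are hypothetical placeholders for your own signalling and error handling:
// Sketch only: sendToPeer and logError are hypothetical helpers.
pc.id = peerId; // tag the connection so the handler knows whom to signal

pc.onnegotiationneeded = function() {
    var self = this; // the RTCPeerConnection, so self.id is the peer's id
    self.createOffer(function(offer) {
        self.setLocalDescription(offer, function() {
            sendToPeer(self.id, self.localDescription); // send the new SDP out
        }, logError);
    }, logError);
};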

Strange issue with socket.on method

I am facing a strange issue with calling socket.on methods from the JavaScript client. Consider the code below:
for (var i = 0; i < 2; i++) {
    var socket = io.connect('http://localhost:5000/');
    socket.emit('getLoad');
    socket.on('cpuUsage', function(data) {
        document.write(data);
    });
}
Here basically I am listening for a cpuUsage event which is emitted by the socket server, but for each iteration I am getting the same value. This is the output:
0.03549148310035006
0.03549148310035006
0.03549148310035006
0.03549148310035006
Edit: Here is the server-side code; basically I am using the node-usage library to calculate CPU usage:
socket.on('getLoad', function (data) {
    usage.lookup(pid, function(err, result) {
        cpuUsage = result.cpu;
        memUsage = result.memory;
        console.log("Cpu Usage1: " + cpuUsage);
        console.log("Cpu Usage2: " + memUsage);
        /*socket.emit('cpuUsage', result.cpu);
        socket.emit('memUsage', result.memory);*/
        socket.emit('cpuUsage', cpuUsage);
        socket.emit('memUsage', memUsage);
    });
});
Whereas on the server side, I am getting different values for each emit. I find it very strange that this is happening. I tried setting data = null after each socket.on call, but it still prints the same value. I didn't know what phrase to search for, so I posted here. Can anyone please guide me?
Please note: I am basically a Java developer and have less experience on the JavaScript side.
You are making the assumption that when you use .emit(), a subsequent .on() will wait for a reply, but that's not how socket.io works.
Your code basically does this:
it emits two getLoad messages directly after each other (which is probably why the returned value is the same);
it installs two handlers for the cpuUsage message sent back by the server.
This also means that each time you run your loop, you're installing more and more handlers for the same message.
Now I'm not sure what exactly it is you want. If you want to periodically request the CPU load, use setInterval or setTimeout. If you want to send a message to the server and want to 'wait' for a response, you may want to use acknowledgement functions (not very well documented, but see this blog post).
But you should assume that for each type of message, you should only call socket.on('MESSAGETYPE', ...) once during the runtime of your code.
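For the acknowledgement approach, a minimal sketch might look like the following. This is an interpretation rather than code from the answer, and it assumes the server calls the acknowledgement function with the usage result:
// Sketch only: client passes a callback as the last argument to emit();
// socket.io calls it when the server invokes its acknowledgement function.
socket.emit('getLoad', function(cpuUsage) {
    document.write(cpuUsage);
});

// Server: the acknowledgement function arrives as the handler's argument.
socket.on('getLoad', function(ack) {
    usage.lookup(pid, function(err, result) {
        ack(result.cpu);
    });
});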
EDIT: here's an example client-side setup for a periodic poll of the data:
var socket = io.connect(...);
socket.on('connect', function() {
    // Handle the server response:
    socket.on('cpuUsage', function(data) {
        document.write(data);
    });
    // Start an interval to query the server for the load every 30 seconds:
    setInterval(function() {
        socket.emit('getLoad');
    }, 30 * 1000); // milliseconds
});
Use this line instead:
var socket = io.connect('iptoserver', {'force new connection': true});
Replace iptoserver with the actual ip to the server of course, in this case localhost.
Edit.
That is, if you want to create multiple clients.
Otherwise you have to place the initialization of the socket variable before the for loop, as sketched below.
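A minimal sketch of that alternative (single connection, handler installed once):
// Sketch of the single-socket alternative.
var socket = io.connect('http://localhost:5000/');
socket.on('cpuUsage', function(data) { // install the handler once
    document.write(data);
});
for (var i = 0; i < 2; i++) {
    socket.emit('getLoad');            // reuse the same connection
}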
I suspected the call returns the average CPU usage since the process started, which seems to be the case here. Checking the node-usage documentation page (average-cpu-usage-vs-current-cpu-usage) I found:
By default CPU Percentage provided is an average from the starting time of the process. It does not correctly reflect the current CPU usage. (this is also a problem with linux ps utility)
But If you call usage.lookup() continuously for a given pid, you can turn on keepHistory flag and you'll get the CPU usage since last time you track the usage. This reflects the current CPU usage.
The documentation also gives an example of how to use it:
var usage = require('usage'); // the node-usage library

var pid = process.pid;
var options = { keepHistory: true };
usage.lookup(pid, options, function(err, result) {
    // result.cpu now reflects usage since the previous lookup
});

How do I return the results of a query using Sequelize and Javascript?

I'm new at JavaScript and I've hit a wall hard here. I don't even think this is a Sequelize question; it is probably more about JavaScript behavior.
I have this code:
sequelize.query(query).success(function(row) {
    console.log(row);
});
The row variable contains the value(s) that I want, but I have no idea how to access them other than printing to the console. I've tried returning the value, but it isn't returned to where I expect it and I'm not sure where it goes. I want my row, but I don't know how to obtain it :(
Using JavaScript on the server side like that requires that you use callbacks. You cannot "return" them like you want; you can, however, write a function to perform actions on the results.
sequelize.query(query).success(function(row) {
    // Here is where you do your stuff on row
    // End the process
    process.exit();
});
A more practical example, in an express route handler:
// Create a session
app.post("/login", function(req, res) {
    var username = req.body.username,
        password = req.body.password;
    // Obviously, do not inject this directly into the query in the real
    // world ---- VERY BAD.
    return sequelize
        .query("SELECT * FROM users WHERE username = '" + username + "'")
        .success(function(row) {
            // Also - never store passwords in plain text
            if (row.password === password) {
                req.session.user = row;
                return res.json({success: true});
            }
            else {
                return res.json({success: false, incorrect: true});
            }
        });
});
(Ignore the SQL injection and plain-text password issues in the example; they are left that way for brevity.)
Functions act as "closures" by storing references to any variable in the scope the function is defined in. In my example above, the correct res value is stored for reference per request by the callback I've supplied to sequelize. The direct benefit of this is that more requests can be handled while the query is running, and once it's finished more code will be executed. If this weren't the case, then your process (assuming Node.js) would wait for that one query to finish, blocking all other requests. This is not desired. The callback style is such that your code can do what it needs and move on, waiting for important or processor-heavy pieces to finish up and call a function once complete.
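If you need the rows somewhere else in your own code, the usual pattern is to give your function a callback parameter instead of a return value. Here is a minimal sketch under that assumption; the getRows helper is hypothetical, and it uses the old .success()/.error() style API shown above:
// Sketch only: getRows is a hypothetical helper wrapping the query.
function getRows(query, done) {
    sequelize.query(query)
        .success(function(rows) {
            done(null, rows);   // node style: error first, data second
        })
        .error(function(err) {
            done(err);
        });
}

// Usage:
getRows("SELECT * FROM users", function(err, rows) {
    if (err) return console.error(err);
    console.log(rows);          // work with the rows here
});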
EDIT
The API for handling callbacks has changed since this question was answered. Sequelize now returns a Promise from .query(), so changing .success to .then should be all you need to do.
According to the changelog:
Backwards compatibility changes:
Events support have been removed so using .on('success') or .success() is no longer supported. Try using .then() instead.
According to the Raw queries documentation, you will use something like this now:
sequelize.query("SELECT * FROM `users`", { type: sequelize.QueryTypes.SELECT })
    .then(function(users) {
        console.log(users);
    });
