I guess I don't "get" async programming - javascript

I've been using node.js for about 6 months now, off and on. But, I guess I still don't completely understand designing a program around asynchronous calls.
Take, for example, my most recent program: it needs to read a config, use that config to connect to a database, and then connect to every address in the database asynchronously.
I'm using the modules fnoc and node-mysql, but this is just pseudocode.
// first, get the config
fnoc(function (err, confs) {
  // code stuff in here
  // now check that there's a database config
  if (confs.hasOwnProperty("database")) {
    // set up db connection
    var mysql_conn = mysql.createConnection({
      host: confs.database.host,
      user: confs.database.user,
      password: confs.database.password,
      database: confs.database.database
    });
    // do some querying here.
    mysql_conn.query(query, function (err, records, fields) {
      records.forEach(function (host) {
        // four levels in, and just now starting the device connections
      });
    });
  }
});
Every time I write something like this with callback inside of callback inside of callback, I feel like I'm doing something wrong. I know of promises and the async node library, but it seems like if those are the solutions, they should be default functionality. Am I doing something wrong, or is it just not clicking for me?
EDIT: Some suggestions include using functions for the callbacks, but that seems worse somehow (unless I'm doing it wrong, which is entirely possible). You end up calling one function inside of another, and it seems especially spaghetti-ish.
The example above, with functions:
function make_connection(hosts) {
  hosts.forEach(function (host) {
    // here's where the fun starts
  });
}

function query_db(dbinfo) {
  var mysql_conn = mysql.createConnection({
    host: dbinfo.host,
    user: dbinfo.user,
    password: dbinfo.password,
    database: dbinfo.database
  });
  // do some querying here.
  mysql_conn.query(query, function (err, records, fields) {
    make_connection(records);
  });
}
// first, get the config
fnoc(function (err, confs) {
  // code stuff in here
  // now check that there's a database config
  if (confs.hasOwnProperty("database")) {
    // set up db connection and do the querying
    query_db(confs.database);
  }
});

The aim of asynchronous functions and callbacks is to avoid any conflicts (which can happen more than you think!) between objects.
I'd like to point you to this async enthusiast: http://www.sebastianseilund.com/nodejs-async-in-practice
Yes, the callbacks do take some getting used to, but it's worth it!

In short: instead of
foo(function () {
  // ... stuff #1 ...
  bar(function () {
    // ... stuff #2 ...
    baz();
  });
});
do
foo(handleFoo);

function handleFoo() {
  // ... stuff #1 ...
  bar(handleBar);
}

function handleBar() {
  // ... stuff #2 ...
  baz();
}
Of course, it can (and maybe should) be more granular, but that depends on the actual code. This is just a pattern to avoid nesting functions. You can also encapsulate these methods further.
This is the "vanilla" approach. There are also libraries that let you manage this in nicer ways.

If you feel really tired of endless callbacks, try Q or co. You can write node.js code like this:
co(function *() {
  // `get` is assumed to return a thunk or promise (e.g. a thunkified HTTP request)
  var a = get('http://google.com');
  var b = get('http://yahoo.com');
  var c = get('http://cloudup.com');
  var res = yield [a, b, c]; // run all three in parallel
  console.log(res);
})();
It's almost a different way of writing, so it can be difficult to grasp at first, but the results are quite good.
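For the database example from the first question, a Q-based version might look roughly like this (a sketch only; Q.nfcall and Q.ninvoke wrap the callback-style APIs, and the identifiers mirror the question):
var Q = require('q');

// Sketch: the same config -> query -> connect flow as a flat promise chain.
Q.nfcall(fnoc)
  .then(function (confs) {
    var mysql_conn = mysql.createConnection(confs.database);
    return Q.ninvoke(mysql_conn, 'query', query);
  })
  .then(function (results) {
    var records = results[0]; // ninvoke packs (records, fields) into an array
    records.forEach(function (host) {
      // connect to each device here
    });
  })
  .fail(function (err) {
    console.error(err);
  })
  .done();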

Related

Yeoman - Delaying logging until after task completion

I'm getting frustrated with part of a Yeoman Generator I'm building. As it's my first, I have no doubt I'm missing something obvious, but here goes.
Simply put, I'm trying to log a message, Do Things™, and then log another message only when those things have been done.
Here's the method:
repos: function () {
  var self = this;
  this.log(highlightColour('Pulling down the repositories'));

  // Skeleton
  this.remote('user', 'skeleton', 'master', function (err, remote) {
    if (!err) {
      remote.bulkDirectory('.', self.destinationRoot());
    } else {
      self.log('\n');
      self.log(alertColour('Failed to pull down Skeleton'));
      repoErr = true;
    }
  });

  //
  // Three more near identical remote() tasks
  //

  if (!repoErr) {
    self.log(successColour('Success!'));
    self.log('\n');
  } else {
    self.log(alertColour('One or more repositories failed to download!'));
  }
},
Each of the individual remote() tasks is working fine, but I get both the first and last self.log() messages before the file copying happens. It seems trivial, but I simply want the success message to come after everything has completed.
For example, in the terminal I see:
Pulling down the repositories
Success!
file copying results
It should be:
Pulling down the repositories
file copying results
Success!
I thought it could be something to do with using this.async() with done() at the end of each remote() task, and I tried that, but whenever I do, none of the code fires at all.
I've even tried breaking everything (including the messages) into separate methods, but still no luck.
Such a simple goal, but I'm out of ideas! I'd be grateful for your help!
EDIT: In case you're wondering, I know the messages are coming first because any alerts regarding file conflicts are coming after the messages :)
This is not an issue related to Yeoman. You have asynchronous code, but you're handling it as if it were synchronous.
In the example you posted here, just do the logging as part of the this.remote callback:
repos: function () {
  var self = this;
  this.log(highlightColour('Pulling down the repositories'));

  // Skeleton
  this.remote('user', 'skeleton', 'master', function (err, remote) {
    if (!err) {
      remote.bulkDirectory('.', self.destinationRoot());
      self.log(successColour('Success!'));
      self.log('\n');
    } else {
      self.log('\n');
      self.log(alertColour('Failed to pull down Skeleton'));
      self.log(alertColour('One or more repositories failed to download!'));
    }
  });
},
Maybe your actual use case is more complex; in that case you can use a module like async (or any other alternative) to handle more complex async flows. Either way, Yeoman doesn't provide helpers to handle asynchronous code, as this is the bread and butter of Node.js.
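If the four remote() pulls are independent of each other, a sketch along these lines (using async.each, this.async(), and placeholder repo names) could replace the flag-based approach:
var async = require('async'); // npm install async

repos: function () {
  var self = this;
  var done = this.async(); // tell Yeoman to wait for our callback
  this.log(highlightColour('Pulling down the repositories'));

  // Placeholder repo names standing in for the four remote() tasks.
  async.each(['skeleton', 'repo-two', 'repo-three', 'repo-four'], function (repo, cb) {
    self.remote('user', repo, 'master', function (err, remote) {
      if (err) {
        self.log(alertColour('Failed to pull down ' + repo));
        return cb(err);
      }
      remote.bulkDirectory('.', self.destinationRoot());
      cb();
    });
  }, function (err) {
    if (err) {
      self.log(alertColour('One or more repositories failed to download!'));
    } else {
      self.log(successColour('Success!'));
    }
    done();
  });
},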

Jasmine spyOn mongoose save

I'd like to mock the save() function of a Mongoose model. The function I want to test looks like this in a file called user.js:
var User = require('./User.js');

post: function (req, res) {
  var user = new User({
    password: req.body.password,
    email: req.body.email
  });
  user.save(function (err) {
    if (err) {
      // ....
    } else {
      // ....
    }
  });
}
I tried to write a test that looks like this in another file called user_spec.js:
var Handler = require('./user.js');

it('works properly', function () {
  spyOn(User, 'save').andReturn(null);
  Handler.post(req, res);
});
but that gives me the error:
save() method does not exist
I've done some more digging and it looks like the User model itself does not have the save() method, an instance does. This would mean I have to mock the constructor of User, but I'm having a lot of trouble with this. Other posts refer to a statement like:
spyOn(window, 'User')
to fix this, but in Node.js, global (the window equivalent here) does not have User, since I import it as a variable. Is it possible to mock the constructor to give me something with a mocked save()? I've also taken a look at an npm module called rewire, but I was hoping I could do this without mocking and replacing the entire user module in my handler.
This does not solve the issue of mocking a local variable, but it will solve the issue of unit testing the creation of new documents.
When creating a new document, it is better to use Model.create(). This can be mocked effectively, and it is simply less code. The right way to handle this and test it would be:
var User = require('./User.js');

post: function (req, res) {
  User.create({
    password: req.body.password,
    email: req.body.email
  }, function (err) {
    if (err) {
      // ....
    } else {
      // ....
    }
  });
}
Corresponding test:
var Handler = require('./user.js');

it('works properly', function () {
  spyOn(User, 'create').andReturn(null);
  Handler.post(req, res);
});
Hopefully this workaround will help other people getting frustrated with jasmine and mongoose unit testing.
You can only swap a function with a spy after the object is created. Hence this will work:
var user = new User(…);
spyOn(user, 'save').…;
doSomething();
where this will not:
spyOn(User, 'save').…
doSomething()
Of course you could change the function inside mongoose that creates the save function on the object… but you probably don't want to go there.
In a sane world, you would be able to do this.
spyOn(Model.prototype, 'save')
However, Mongoose tries to overload all their Model functions to work as node.js callbacks and Promises simultaneously. To do this, they manipulate the prototype in a way that is a little hard to predict without reading the actual Model code (https://github.com/Automattic/mongoose/blob/master/lib/model.js).
Here's an example that actually worked for me.
spyOn(Model.prototype, '$__save').and.callFake(function (options, callback) {
  callback();
});
For the record, I am using Mongoose with Promises in the application code.
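To round that out, here is a rough sketch of how such a spy might be used in a Jasmine 2.x spec; the model, data, and expectations are illustrative rather than taken from the original post:
describe('saving a user', function () {
  beforeEach(function () {
    // Fake out Mongoose's internal save so no database is touched.
    spyOn(User.prototype, '$__save').and.callFake(function (options, callback) {
      callback();
    });
  });

  it('calls back without an error', function (done) {
    var user = new User({ email: 'a@example.com', password: 'secret' });
    user.save(function (err) {
      expect(err).toBeFalsy();
      expect(User.prototype.$__save).toHaveBeenCalled();
      done();
    });
  });
});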

In meteor, can pub/sub be used for arbitrary in-memory objects (not mongo collection)

I want to establish a two-way (bidirectional) communication within my meteor app. But I need to do it without using mongo collections.
So can pub/sub be used for arbitrary in-memory objects?
Is there a better, faster, or lower-level way? Performance is my top concern.
Thanks.
Yes, pub/sub can be used for arbitrary objects. Meteor’s docs even provide an example:
// server: publish the current size of a collection
Meteor.publish("counts-by-room", function (roomId) {
  var self = this;
  check(roomId, String);
  var count = 0;
  var initializing = true;

  // observeChanges only returns after the initial `added` callbacks
  // have run. Until then, we don't want to send a lot of
  // `self.changed()` messages - hence tracking the
  // `initializing` state.
  var handle = Messages.find({roomId: roomId}).observeChanges({
    added: function (id) {
      count++;
      if (!initializing)
        self.changed("counts", roomId, {count: count});
    },
    removed: function (id) {
      count--;
      self.changed("counts", roomId, {count: count});
    }
    // don't care about changed
  });

  // Instead, we'll send one `self.added()` message right after
  // observeChanges has returned, and mark the subscription as
  // ready.
  initializing = false;
  self.added("counts", roomId, {count: count});
  self.ready();

  // Stop observing the cursor when client unsubs.
  // Stopping a subscription automatically takes
  // care of sending the client any removed messages.
  self.onStop(function () {
    handle.stop();
  });
});

// client: declare collection to hold count object
Counts = new Mongo.Collection("counts");

// client: subscribe to the count for the current room
Tracker.autorun(function () {
  Meteor.subscribe("counts-by-room", Session.get("roomId"));
});

// client: use the new collection
console.log("Current room has " +
  Counts.findOne(Session.get("roomId")).count +
  " messages.");
In this example, counts-by-room is publishing an arbitrary object created from data returned from Messages.find(), but you could just as easily get your source data elsewhere and publish it in the same way. You just need to provide the same added and removed callbacks as in the example here.
You’ll notice that on the client there’s a collection called counts, but this is purely in-memory on the client; it’s not saved in MongoDB. I think this is necessary to use pub/sub.
If you want to avoid even an in-memory-only collection, you should look at Meteor.call. You could create a Meteor.method like getCountsByRoom(roomId) and call it from the client like Meteor.call('getCountsByRoom', 123) and the method will execute on the server and return its response. This is more the traditional Ajax way of doing things, and you lose all of Meteor’s reactivity.
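A minimal sketch of that method-based alternative (the method body just counts messages; adapt it to wherever your data actually lives):
// server
Meteor.methods({
  getCountsByRoom: function (roomId) {
    check(roomId, String);
    return Messages.find({ roomId: roomId }).count();
  }
});

// client: no collection involved, but also no reactivity
Meteor.call('getCountsByRoom', Session.get('roomId'), function (err, count) {
  if (!err) {
    console.log('Current room has ' + count + ' messages.');
  }
});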
Just to add another easy solution: you can pass connection: null to your collection instantiation on the server. This is not well documented, but I heard from the Meteor folks that it makes the collection in-memory.
Here's an example posted by Emily Stark a year ago:
if (Meteor.isClient) {
  Test = new Meteor.Collection("test");
  Meteor.subscribe("testsub");
}

if (Meteor.isServer) {
  Test = new Meteor.Collection("test", { connection: null });
  Meteor.publish("testsub", function () {
    return Test.find();
  });

  Test.insert({ foo: "bar" });
  Test.insert({ foo: "baz" });
}
Edit
This should go under the comments, but it would be too long, so I'm posting it as an answer. Or perhaps I misunderstood your question?
I wonder why you are against Mongo; I find it a good match with Meteor.
Anyway, everyone's use case is different, and your idea is doable, but not without some serious hacks.
If you look at the Meteor source code, you can find tools/run-mongo.js, which is where Meteor talks to Mongo; you could tweak it or implement your own adapter to work with your in-memory objects.
Another approach would be to wrap your in-memory objects and write a database layer that intercepts the existing MongoDB communication (default port 27017); you would have to take care of system environment variables like MONGO_URL to make it work properly.
A final approach is to wait until Meteor officially supports other databases like Redis.
Hope this helps.

Chaining methods on-demand

The title is probably really bad, so sorry for that :/
I have a library that creates users for me with predefined capabilities. Right now that works by doing something like
var User = require(...).User;
var user = new User(...);
// user has methods like these, which are all async
user.register(callback);
user.addBla(callback);
I also have wrapper methods which work like:
lib.createUser.WithBla(callback)
However, that naturally leads to a huge number of methods once you consider the various combinations. So I have two ideas:
somehow make those calls chainable without huge levels of callback-function juggling, e.g. lib.createUser(callback).WithBla().WithBlub().WithWhatever()...
passing some sort of capabilities object, like lib.createUser({Bla: true, Blub: true}, callback)
However, I don't have the slightest clue how to actually implement either of these, considering all those methods are asynchronous and use callbacks (which I cannot change, as they are based on the node module request).
Maybe not quite what you had in mind, but you could use the library async for this.
var user = new User();
user.addSomeValue = function (someValue, cb) { cb(null); };

// Execute some functions in series (one after another)
async.series([
  // These two will get a callback as their first (and only) argument.
  user.register,
  user.addBla,
  // If you need to pass variables to the function, you can use a closure:
  function (cb) { user.addSomeValue(someValue, cb); },
  // Or use .bind(). Be sure not to forget the first param ('this'):
  user.addSomeValue.bind(user, someValue)
], function (err, results) {
  if (err) throw "One of the functions failed!";
  console.log(
    "The various functions gave these values to the callbacks:",
    results
  );
});
The result is a single callback, not many nested ones.
Another option would be to re-write your code to use Promises.
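As a sketch of the second idea from the question (a capabilities object), you could also build the list of steps dynamically and hand it to async.series; the capability names and the wrapper function are illustrative:
var async = require('async');

function createUser(capabilities, callback) {
  var user = new User();
  var steps = [function (cb) { user.register(cb); }];

  // Only add the steps the caller asked for.
  if (capabilities.Bla)  steps.push(function (cb) { user.addBla(cb); });
  if (capabilities.Blub) steps.push(function (cb) { user.addBlub(cb); });

  async.series(steps, function (err) {
    callback(err, user); // hand back the configured user
  });
}

// usage
createUser({ Bla: true, Blub: true }, function (err, user) {
  if (err) return console.error(err);
  // user is registered with the requested capabilities
});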

How can I work around deeply nested callbacks within NodeJS and sqlite-3?

I've been getting to grips with node and node-sqlite3 and need to build a report up based on a number of queries:
var db = require('./db');

module.exports = {
  getActivity: function (user_id, done) {
    var report = {};
    db.get('SELECT * FROM warehouse WHERE user_id = ?', user_id, function (err, warehouse) {
      report.warehouse = warehouse;
      db.all('SELECT * FROM shops WHERE warehouse_id = ?', report.warehouse.id, function (err, shops) {
        report.shops = shops;
        return done(report);
      });
    });
  }
};
My goal was to be able to generate a report from a route and serialize it as a JSON response. Here's how my route looks:
app.get('/api/hello',
  auth.check,
  function (req, res) {
    hello.getActivity(1, function (data) {
      res.send(data);
    });
  });
I will most likely have more queries to include in this report and thus more nested callbacks. What options do I have to avoid this? I'm familiar with promises etc., but node-sqlite3 doesn't have anything built in for cleaning this up. Maybe I am using it incorrectly?
Last of all, I am passing in a 'done' callback from the route. Maybe this is the Node way of doing things, but it would be great if I could simply return the report once it's generated, without the callback. Is there a better pattern for this?
Any help appreciated!
I have a report engine built on Node that has the same issue of multiple queries. To keep things clean, I use async, which is an awesome control-flow library:
https://github.com/caolan/async#series
You will want to look at the async.series. It keeps your code a little cleaner than tons of embedded functions.
NOTE: You will need to declare any variables you want to share between steps outside of the async.series context. For example, below I use the variable one inside the function for step two:
// keep context to shared values outside of the async functions
var one,
    two;

async.series([
  function (callback) {
    // do some stuff ...
    one = 'one';
    callback(null, one);
  },
  function (callback) {
    // access value from previous step
    two = one + one;
    callback(null, two);
  }
],
// optional callback
function (err, results) {
  // results is now equal to ['one', 'oneone']
});
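Applied to the getActivity example from the question, a sketch using async.series and a shared report object might look like this (error handling kept deliberately thin):
var async = require('async');
var db = require('./db');

module.exports = {
  getActivity: function (user_id, done) {
    var report = {};

    async.series([
      function (callback) {
        db.get('SELECT * FROM warehouse WHERE user_id = ?', user_id, function (err, warehouse) {
          report.warehouse = warehouse;
          callback(err);
        });
      },
      function (callback) {
        db.all('SELECT * FROM shops WHERE warehouse_id = ?', report.warehouse.id, function (err, shops) {
          report.shops = shops;
          callback(err);
        });
      }
      // more query steps can be added here without nesting any deeper
    ], function (err) {
      done(report);
    });
  }
};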
