Is there a way, using native JavaScript promises, to create a promise and attach thenables to it without knowing at construction time how it will resolve? For example:
var foo = new Promise(function (resolve, reject) {
  // I don't know how this will resolve yet as some other object will resolve it
});

foo.then(function (val) {
  console.log("first " + val);
});

foo.resolve("bar");

foo.then(function (val) {
  console.log("second " + val);
});

// result
// first bar
// second bar
Simply save the resolve and reject functions inside a closure.
var makePromise = function () {
  var resolvePromise = null,
      rejectPromise = null,
      promise = new Promise(function (resolve, reject) {
        resolvePromise = resolve;
        rejectPromise = reject;
      });
  return { promise: promise, resolve: resolvePromise, reject: rejectPromise };
};
var deferredSomething = function () {
  var deferredThing = makePromise();

  waitAWhile()
    .then(doStuff)
    .then(function (result) {
      if (result.isGood) {
        deferredThing.resolve(result.data);
      } else {
        deferredThing.reject(result.error);
      }
    });

  return deferredThing.promise;
};
This is actually most of the difference between the "deferred" concept and the "promise" concept: one more level, on top, that holds the actual remote controls, which you can hand to someone else, while you give the .then/.success/.done/etc. to your consumers.
Once you bring those functions out into your upstream process, you can happily lazy-load whatever you'd like using the "thenable" you return, and then succeed or fail your chain (or leave it hanging) at will...
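For instance, here is a minimal sketch of handing out those remote controls (the names startupGate, onAppReady, and someBootloader are hypothetical, purely for illustration):
var startupGate = makePromise();

// Consumers only ever see the promise ("thenable") side...
function onAppReady(callback) {
  return startupGate.promise.then(callback);
}

// ...while some other piece of code holds the remote controls and
// decides later whether (and how) things resolve:
someBootloader.once('ready', function () {
  startupGate.resolve("app is up");
});
someBootloader.once('fatal', function (err) {
  startupGate.reject(err);
});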
UPDATE
Seeing as this is probably going to continue to be the chosen answer, and continue to be voted down, even though it solves the exact problem the asker was having (namely, retrofitting code which was not written with ES6 promises in mind), I figure I'll add a more detailed example of exactly why using this antipattern selectively can be better than nothing:
MongoClient.connect("mongodb://localhost:27017/mydb", (err, db) => {
  db.collection("mycollection", (err, collection) => {
    collection.find().toArray((err, results) => {
      doStuff(results);
    });
  });
});
If I were to write a library here, hoping to reach the point where I could write:
let dbConnected = MongoClient.connect(dbURL);

dbConnected
  .then(db => db.collection(myCollection))
  .then(collection => collection.find(query))
  .then(stream => doStuff(stream));
...or alternatively:
composeAsync(
  (stream) => doStuff(stream),
  (collection) => collection.find(query),
  (db) => db.collection(myCollection)
)(dbConnected);
...for ease of use within the library, does it make sense to wrap every single function body within an instantiated promise?
// find = curry(query, collection)
return new Promise(function (resolve, reject) {
  /* whole function body, here */
  /* do lots of stuff which is irrelevant to the resolution of
     mongo.db.collection.find, but is relevant to its calling */
  collection.find(query).toArray(/* node-callback */ function (err, result) {
    if (err) {
      reject(err);
    } else {
      resolve(result);
    }
  });
});
...or, looking at the pattern of really only requiring the Node-specific callback to be resolved, does it make more sense to have some form of promise resolver, to save having to write out / copy-paste a half-dozen purely redundant lines which should be completely DRYed up?
// find = curry(query, collection)
let resolver = new NodeResolver();
collection.find(query).toArray(resolver.resolve);
return resolver.promise;
Yes, that is an antipattern... ...yet it is an antipattern which requires fewer keystrokes, restores the natural flow of the promise chain, fixes a problem with Node's callback-only API, reduces the potential for errors, et cetera.
Yes, there are already libraries which do this...
...solutions specific to X library or Y...
...or solutions which globally override methods of various modules (scary)
...or solutions which, again, basically force you to pass in all of the details of the call you're making:
wrapNodeMethod(fs, "read", url, config).then(data => { /*...*/ });
But there is no simple solution for the case of inverting all of that pain, without either:
a) wrapping the entire function body in a promise, to feed the async callback a resolver
b) using an antipattern within a library, in order to pass Node callbacks a resolver that the rest of the function-body needs to know precisely nothing about.
Even if the data needed to be transformed within the resolver, it would still make more sense to return the transformed set in a new promise:
let resolver = new NodeResolver();
somethingAsync(resolver.resolve);
return resolver.promise.then(transformData).then(logTransform);
...than to wrap the whole body, including the transforms, just for closure-scope reference, just for the sake of avoiding an "antipattern" which clearly goes against the grain of what has become a very prominent JS platform/paradigm.
Now, personally, I'd be happier if IO||Node methods returned a promise and/or a stream, as well as taking a callback, as a core part of the platform...
...that's not going to happen...
...but you can't possibly tell me that writing less, and keeping Node modules DRY, while still using ES6 promises, is an "antipattern", without providing a more eloquent solution.
If you can, indeed, provide me something that I can use in any NodeJS callback, which does, indeed, provide a more eloquent solution to this, such that I don't have to wrap every body of every method which contains an async callback in a new constructor, or use clunky dispatcher methods, or hijack entire modules to override their global functionality...
...I'd be more than willing to retract my statement that this particular pattern is still highly-useful, as regards interfacing with APIs which are prone to pyramids o'dewm.
If the result of one promise depends on another promise, you should just create the promise using then.
What was proposed by @Norguard, in its direct form, doesn't make much sense (it's even been coined the deferred anti-pattern). The code below does exactly the same, and no extra promise is needed:
var deferredSomething = function () {
  return waitAWhile()
    .then(doStuff)
    .then(function (result) {
      if (result.isGood) {
        return result.data;
      } else {
        throw result.error;
      }
    });
};
And even if, for whatever reason, you needed to create the promise upfront, with the constructor pattern it would be cleaner to do it this way:
var deferredSomething = function () {
  return new Promise(function (resolve, reject) {
    waitAWhile()
      .then(doStuff)
      .then(function (result) {
        if (result.isGood) {
          resolve(result.data);
        } else {
          reject(result.error);
        }
      });
  });
};
I'm trying to use Promise.all() with an array of promises that is populated inside a forEach loop right before, but it seems like Promise.all() is not waiting for all the promises to complete before executing its callback.
What's wrong with the following code? (I tried to simplify it before posting, so parts of it might not make complete sense, but the promises and loops are all there.)
class test {
  constructor(sql) {
    Promise.all([this.sync(sql, 0), this.sync(sql, 1)]).then((data) => {
      console.log(data);
    });
  }

  sync(sql, id = 0) {
    return new Promise((resolve, reject) => {
      request.get('http://localhost/test/' + id, {
        json: true
      }, (req, res) => {
        var promises = [];
        res.body['items'].forEach(item => {
          promises.push(new Promise((resolve, reject) => {
            this.existingRecord(sql, item['id']).then(() => {
              resolve(false);
            }).catch(() => {
              this.add(sql, item).then(resolve(id));
            })
          }))
        });
        Promise.all(promises).then((data) => resolve(data));
      });
    });
  }

  add(sql, data) {
    return new Promise((resolve, reject) => {
      console.log('Inserting ' + data['id']);
      var request = new sql.Request();
      var query = `INSERT INTO test (col1, col2) VALUES (${utils.prepareInsertdata(data)})`;
      request.query(query, (err, result) => {
        if (err) {
          console.log('ERROR INSERTING: ' + data['id']);
          console.log(err);
        }
        resolve();
      });
    });
  }
}
First off, you're making it much harder to write good, clean, error-free code when you have a mix of promises and regular callbacks in your control flow. I find that the best way to write asynchronous code using promises is to first take any asynchronous operations that are not promise based and create promise-based wrappers for them and then write my logic and control flow using only promises. This makes a consistent path for flow of control and for error handling and it removes the mess of promisifying things from the actual main logic.
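For example (a sketch, assuming Node 8+, where util.promisify exists), you wrap the callback-style API once, up front, and the main logic then deals only in promises:
const util = require('util');
const fs = require('fs');

// wrap the callback-based API once...
const readFile = util.promisify(fs.readFile);

// ...and the rest of the logic is promise-only
readFile('./config.json', 'utf8')
  .then(text => JSON.parse(text))
  .catch(err => console.error(err));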
Then, I see several significant issues in your code.
Asynchronous operations in the constructor
It's almost never a good idea to put asynchronous operations in the constructor. This is because the constructor HAS to return the object itself, so there's no simple way to communicate back to the code that created your object when the asynchronous operations are actually done, or whether they succeeded or failed. It is not entirely clear to me what you're trying to accomplish with those async operations, but this is likely a bad design pattern. I favor a factory function that returns a promise that resolves to the new object, combining the creation of the object with the asynchronous operations. This gives you everything you need: a fully formed object, knowledge of when the async operations are done, and the ability to handle errors from them. You can see more about this factory-function option and some other design options here:
Asynchronous operations in constructor
Improve .then() handler construction
When you do this:
this.add(sql, item).then(resolve(id));
You are calling resolve(id) immediately and passing its return value to .then(), rather than waiting for the .then() handler to be called before calling resolve(id). All of this is complicated by the fact that you're mixing regular callbacks and promises.
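The minimal fix, if you kept the wrapper promise, would be to pass .then() a function instead of the result of calling resolve(id) right away:
// pass a function that calls resolve, rather than calling resolve immediately
this.add(sql, item).then(() => resolve(id));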
Creating new wrapped promises rather than just returning existing promises
This is related to your mix of regular callbacks and promises: you'd much rather just return an existing promise than wrap it in a new promise that you have to manually resolve and reject. More than half the time, manually wrapping things in a new promise misses proper error handling, and it just results in more code than is needed.
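A condensed sketch of the difference, borrowing the existingRecord() call from the question (treated here as a free function for brevity):
// manually wrapping an existing promise: more code, and a rejection
// from existingRecord() is silently dropped because reject is never wired up
function lookupWrapped(sql, id) {
  return new Promise((resolve, reject) => {
    existingRecord(sql, id).then(record => resolve(record));
  });
}

// just returning the existing promise: shorter, and rejections propagate
function lookup(sql, id) {
  return existingRecord(sql, id);
}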
Race Conditions
In any sort of multi-user database environment, you can't write database code such as:
if (record exists) {
  do one thing
} else {
  create new record
}
This is a race condition. If some other database request comes in during the processing of this, it could change the database in the middle of this and you'd be trying to create a record that just got created by another piece of code.
The usual solution varies by database (and you don't say exactly which database library you're using). Usually, you want to let the database manage the creation of unique records making it so that a duplicate record (by whatever key you're managing uniqueness in this table by) isn't allowed in the database and the concurrency of that is managed by the database itself. Some databases have an atomic operation such as findOrCreate() that will find an existing record or create a new one in an atomic fashion. Other databases have other approaches. But, it's important to make sure that adding unique records to the database is an atomic operation that can't ever create unwanted duplicates.
I'd suggest this implementation:
// use the promise version of the request library (already promisified for us)
const rp = require('request-promise');

class test {
  constructor() {
  }

  init(sql) {
    return Promise.all([this.sync(sql, 0), this.sync(sql, 1)]).then((data) => {
      console.log(data);
      // do something with the data here - probably store it in instance data
    });
  }

  sync(sql, id = 0) {
    // note: with json: true and no resolveWithFullResponse option,
    // request-promise resolves with the parsed body itself
    return rp.get('http://localhost/test/' + id, {json: true}).then(body => {
      // process all items
      return Promise.all(body.items.map(item => {
        return this.existingRecord(sql, item.id).then(() => {
          return false;
        }).catch(() => {
          // it's probably bad form here to do this for all possible database errors
          // probably this should be looking for a specific error of id not found
          // or something like that.
          // This is also likely a race condition. You would typically avoid the race
          // condition by making the item key unique in the database and just doing an add,
          // letting the database tell you the add failed because the item already exists.
          // This will allow the database to control the concurrency and avoid race conditions.
          return this.add(sql, item);
        });
      }));
    });
  }
}

// factory function that returns a promise that resolves to a new object
// don't use new test() elsewhere
function createTestObj(sql) {
  let t = new test();
  return t.init(sql).then(() => {
    // resolve to our new object
    return t;
  });
}
For your add() method, I'd switch to using the promise interface in your sql database. There should either be one built-in or a 3rd party package that will add one on top of your database interface. This will prevent the manual creation of promises and the incomplete error handling in your add() method.
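As a sketch of what that might look like (this assumes the mssql package, where request.query() returns a promise when no callback is passed, and SQL Server's duplicate-key error number 2627; adjust for your actual driver):
add(sql, data) {
  const request = new sql.Request();
  const query = `INSERT INTO test (col1, col2) VALUES (${utils.prepareInsertdata(data)})`;
  return request.query(query).catch(err => {
    if (err.number === 2627) {
      // unique-key violation: the record already exists, so treat it as
      // "already there" instead of failing (this also sidesteps the race condition)
      return false;
    }
    console.log('ERROR INSERTING: ' + data.id);
    throw err; // propagate anything else rather than swallowing it
  });
}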
I'm new to Node.js and having trouble understanding this issue: I tried to run a describe call against an array, and the AWS function seems to run after the main function has finished.
Here's the main function (it loops through a list of ACM ARNs and checks each one's status):
var checkCertStatus = function(resolveObj){
  var promise = new Promise(function(resolve, reject){
    console.log('1');
    var retObj = '';
    resolveObj.Items.forEach(function(element) {
      var certDescribeParams = {
        CertificateArn: element.sslCertId
      };
      console.log('2');
      acm.describeCertificate(certDescribeParams, function(err, data) {
        if (err) reject(new Error(err));
        else {
          console.log(data.Certificate.DomainName + ': ' + data.Certificate.Status);
          retObj += data;
        }
      });
    });
    console.log('3');
    resolve(retObj);
    return promise;
  })
}
Based on the debug log, and assuming there are 2 items to be processed, here's what I got:
1
2
2
3
example.com: ISSUED
example2.com: ISSUED
Basically, I need to pass this result to the next function in the chain (with promise and stuff).
Welcome to Node.js! Speaking generally, it might be helpful to study up on the asynchronous programming style. In particular, you seem to be mixing Promises and callbacks, which may make this example more confusing than it needs to be. I suggest using the AWS SDK's built-in feature to convert responses to Promises.
The first thing I notice is that you are manually constructing a Promise with a resolve/reject function. This is often a red flag unless you are creating a library. Most other libraries support Promises which you can simply use and chain. (This includes AWS SDK, as mentioned above.)
The second thing I notice is that your checkCertStatus function does not return anything. It creates a Promise but does not return it at the end. Your return promise; line is actually inside the callback function used to create the Promise.
Personally, when working with Promises, I prefer to use the Bluebird library. It provides more fully-featured Promises than native, including methods such as map. Conveniently, the AWS SDK can be configured to work with an alternative Promise constructor via AWS.config.setPromisesDependency() as documented here.
To simplify your logic, you might try something along these lines (untested code):
// `AWS` is the aws-sdk client object; require it before configuring it
const AWS = require('aws-sdk');
const Promise = require('bluebird');
AWS.config.setPromisesDependency(Promise);

const checkCertStatus = (resolveObj) => {
  const items = resolveObj.Items;
  console.log(`Mapping ${items.length} item(s)`);
  return Promise.resolve(items)
    .map((item) => {
      const certDescribeParams = {
        CertificateArn: item.sslCertId,
      };
      console.log(`Calling describeCertificate for ${item.sslCertId}`);
      return acm.describeCertificate(certDescribeParams)
        .promise()
        .then((data) => {
          console.log(`${data.Certificate.DomainName}: ${data.Certificate.Status}`);
          return data;
        });
    });
};
We're defining checkCertStatus as a function which takes in resolveObj and returns a Promise chain starting from resolveObj.Items. (I apologize if you are not yet familiar with Arrow Functions.) The first and only step in this chain is to map the items array to a new array of Promises returned from the acm.describeCertificate method. If any one of these individual Promises fails, the top-level Promise chain will reject as well. Otherwise, the top-level Promise chain will resolve to an array of the results. (Note that I included an inessential .then step just to log the individual results, but you could remove that clause entirely.)
Hope this helps, and I apologize if I left any mistakes in the code.
I have a function that reads a directory and copies and creates a new file within that directory.
function createFiles (countryCode) {
  fs.readdir('./app/data', (err, directories) => {
    if (err) {
      console.log(err)
    } else {
      directories.forEach((directory) => {
        fs.readdir(`./app/data/${directory}`, (err, files) => {
          if (err) console.log(err)
          console.log(`Creating ${countryCode}.yml for ${directory}`)
          fs.createReadStream(`./app/data/${directory}/en.yml`).pipe(fs.createWriteStream(`./app/data/${directory}/${countryCode}.yml`))
        })
      })
    }
  })
}
How do I do this using promises or Promise.all() to resolve when it's complete?
First, you need to wrap each file stream in a promise that resolves when the stream emits the finish event:
new Promise((resolve, reject) => {
  fs.createReadStream(`./app/data/${directory}/en.yml`).pipe(
    fs.createWriteStream(`./app/data/${directory}/${countryCode}.yml`)
  ).on('finish', resolve);
});
Then you need to collect these promises in an array. This is done by using map() instead of forEach() and returning the promise:
var promises = directories.map((directory) => {
  ...
  return new Promise((resolve, reject) => {
    fs.createReadStream( ...
    ...
  });
});
Now you have a collection of promises that you can wrap with Promise.all() and use with a handler when all the wrapped promises have resolved:
Promise.all(promises).then(completeFunction);
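Putting the pieces together, a sketch of the whole function (completeFunction stands for whatever you want to run once every copy has finished):
function createFiles (countryCode) {
  fs.readdir('./app/data', (err, directories) => {
    if (err) return console.log(err)
    var promises = directories.map((directory) => {
      return new Promise((resolve, reject) => {
        fs.createReadStream(`./app/data/${directory}/en.yml`).pipe(
          fs.createWriteStream(`./app/data/${directory}/${countryCode}.yml`)
        ).on('finish', resolve);
      });
    });
    Promise.all(promises).then(completeFunction);
  });
}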
In recent versions of Node (8.0.0 and later), there's a new util.promisify function you can use to get a promise. Here's how we might use it:
// Of course we'll need to require important modules before doing anything
// else.
const util = require('util')
const fs = require('fs')

// We use the "promisify" function to make calling promisifiedReaddir
// return a promise.
const promisifiedReaddir = util.promisify(fs.readdir)

// (You don't need to name promisified variables promisifiedXYZ -
// you could just do `const readdir = util.promisify(fs.readdir)` - but
// I call it promisifiedReaddir here for clarity.)

function createFiles(countryCode) {
  // Since we're using our promisified readdir function, we'll be storing
  // a Promise inside of the readdirPromise variable..
  const readdirPromise = promisifiedReaddir('./app/data')

  // ..then we can make something happen when the promise finishes (i.e.
  // when we get the list of directories) by using .then():
  return readdirPromise.then(directories => {
    // (Note that we only get the parameter `directories` here, with no `err`.
    // That's because promises have their own way of dealing with errors;
    // try looking up on "promise rejection" and "promise error catching".)

    // We can't use a forEach loop here, because forEach doesn't know how to
    // deal with promises. Instead we'll use a Promise.all with an array of
    // promises.

    // Using the .map() method is a great way to turn our list of directories
    // into a list of promises; read up on "array map" if you aren't sure how
    // it works.
    const promises = directories.map(directory => {
      // Since we want an array of promises, we'll need to `return` a promise
      // here. We'll use our promisifiedReaddir function for that; it already
      // returns a promise, conveniently.
      return promisifiedReaddir(`./app/data/${directory}`).then(files => {
        // (For now, let's pretend we have a "copy file" function that returns
        // a promise. We'll actually make that function later!)
        return copyFile(`./app/data/${directory}/en.yml`, `./app/data/${directory}/${countryCode}.yml`)
      })
    })

    // Now that we've got our array of promises, we actually need to turn them
    // into ONE promise, that completes when all of its "children" promises
    // are completed. Luckily there's a function in JavaScript that's made to
    // do just that - Promise.all:
    const allPromise = Promise.all(promises)

    // Now if we do a .then() on allPromise, the function we passed to .then()
    // would only be called when ALL promises are finished. (The function
    // would get an array of all the values in `promises` in order, but since
    // we're just copying files, those values are irrelevant. And again, don't
    // worry about errors!)

    // Since we've made our allPromise which does what we want, we can return
    // it, and we're done:
    return allPromise
  })
}
Okay, but there are probably still a few things that might be puzzling you..
What about errors? I kept saying that you don't need to worry about them, but it is good to know a little about them. Basically, in promise-terms, when an error happens inside of a util.promisify'd function, we say that that promise rejects. Rejected promises behave mostly the same way you'd expect errors to; they throw an error message and stop whatever promise they're in. So if one of our promisifiedReaddir calls rejects, it'll stop the whole createFiles function.
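For example, at the call site a rejection from anywhere inside createFiles() surfaces in a single .catch() (using 'de' as a sample country code):
createFiles('de')
  .then(() => console.log('All files copied!'))
  .catch(err => console.error('Something went wrong:', err))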
What about that copyFile function? Well, we have two options:
Use somebody else's function. No need to re-invent the wheel! quickly-copy-file looks to be a good module (plus, it returns a promise, which is useful for us).
Program it ourselves.
Programming it ourselves isn't too hard, actually, but it takes a little bit more than simply using util.promisify:
function copyFile(from, to) {
  // Hmm.. we want to copy a file. We already know how to do that in normal
  // JavaScript - we'd just use a createReadStream and pipe that into a
  // createWriteStream. But we need to return a promise for our code to work
  // like we want it to.

  // This means we'll have to make our own hand-made promise. Thankfully,
  // that's not actually too difficult..
  return new Promise((resolve, reject) => {
    // Yikes! What's THIS code mean?
    // Well, it literally says we're returning a new Promise object, with a
    // function given to it as an argument. This function takes two arguments
    // of its own: "resolve" and "reject". We'll look at them separately
    // (but maybe you can guess what they mean already!).

    // We do still need to create our read and write streams like we always do
    // when copying files:
    const readStream = fs.createReadStream(from)
    const writeStream = fs.createWriteStream(to)

    // And we need to pipe the read stream into the write stream (again, as
    // usual):
    readStream.pipe(writeStream)

    // ..But now we need to figure out how to tell the promise when we're done
    // copying the files.
    // Well, we'll start by doing *something* when the pipe operation is
    // finished. That's simple enough; we'll just set up an event listener:
    writeStream.on('close', () => {
      // Remember the "resolve" and "reject" functions we got earlier? Well, we
      // can use them to tell the promise when we're done. So we'll do that here:
      resolve()
    })

    // Okay, but what about errors? What if, for some reason, the pipe fails?
    // That's simple enough to deal with too, if you know how. Remember how we
    // learned a little on rejected promises, earlier? Since we're making
    // our own Promise object, we'll need to create that rejection ourself
    // (if anything goes wrong).
    writeStream.on('error', err => {
      // We'll use the "reject" argument we were given to show that something
      // inside the promise failed. We can specify what that something is by
      // passing the error object (which we get passed to our event listener,
      // as usual).
      reject(err)
    })

    // ..And we'll do the same in case our read stream fails, just in case:
    readStream.on('error', err => {
      reject(err)
    })

    // And now we're done! We've created our own hand-made promise-returning
    // function, which we can use in our `createFiles` function that we wrote
    // earlier.
  })
}
..And here's all the finished code, so that you can review it yourself:
const util = require('util')
const fs = require('fs')

const promisifiedReaddir = util.promisify(fs.readdir)

function createFiles(countryCode) {
  const readdirPromise = promisifiedReaddir('./app/data')

  return readdirPromise.then(directories => {
    const promises = directories.map(directory => {
      return promisifiedReaddir(`./app/data/${directory}`).then(files => {
        return copyFile(`./app/data/${directory}/en.yml`, `./app/data/${directory}/${countryCode}.yml`)
      })
    })

    const allPromise = Promise.all(promises)
    return allPromise
  })
}

function copyFile(from, to) {
  return new Promise((resolve, reject) => {
    const readStream = fs.createReadStream(from)
    const writeStream = fs.createWriteStream(to)

    readStream.pipe(writeStream)

    writeStream.on('close', () => {
      resolve()
    })

    writeStream.on('error', err => {
      reject(err)
    })

    readStream.on('error', err => {
      reject(err)
    })
  })
}
Of course, this implementation isn't perfect. You could improve it by looking at other implementations - for example, this one destroys the read and write streams when an error occurs, which is a bit cleaner than our method (which doesn't do that). The most reliable way would probably be to go with the module I linked earlier!
I highly recommend you watch funfunfunction's video on promises. It explains how promises work in general, how to use Promise.all, and more; and he's almost certainly better at explaining this whole concept than I am!
First, create a function that returns a promise:
function processDirectory(directory, countryCode) {
  return new Promise((resolve, reject) => {
    fs.readdir(`./app/data/${directory}`, (err, files) => {
      if (err) return reject(err);
      console.log(`Creating ${countryCode}.yml for ${directory}`);
      fs.createReadStream(`./app/data/${directory}/en.yml`)
        .pipe(fs.createWriteStream(`./app/data/${directory}/${countryCode}.yml`))
        .on('finish', resolve);
    });
  });
}
Then use Promise.all:
Promise.all(directories.map(directory => processDirectory(directory, countryCode)))
  .then(...)
  .catch(...);
There could be a simple answer for this but I've only ever had to use extension methods (are they even called that in JS?) in C#..
I have a 3rd party library that uses events. I need a function to be called after the event is called. I could do this easily via a promise but the 3rd party library was written before ES6 and does not have promises built in. Here's some sample code -
wRcon.on('disconnect', function() {
  console.log('You have been Disconnected!')
})
Ideally I would like to be able to implement something like this -
wRcon.on('disconnect', function() {
  console.log('You have been Disconnected!')
}).then(wRcon.reconnect())
So to summarize my question, how do I extend wRcon.on to allow for a promise (or some other callback method)?
Promises and events, while they seem similar on the surface, actually solve different problems in Javascript.
Promises provide a way to manage asynchronous actions where an outcome is expected, but you just don't know how long it's going to take.
Events provide a way to manage asynchronous actions where something might happen, but you don't know when (or even if) it will happen.
In Javascript there is no easy way to "interface" between the two.
But for the example you've given, there is an easy solution:
wRcon.on('disconnect', function() {
  console.log('You have been Disconnected!')
  wRcon.reconnect()
})
This is perhaps not as elegant as your proposed solution, and it breaks down if your intention is to append a longer chain of .then() handlers on the end, but it will get the job done.
You could wrap the connection in a way that would create a promise:
const promise = new Promise((resolve) => {
  wRcon.on('disconnect', function() {
    resolve();
  });
}).then(() => wRcon.reconnect());
However, this does not seem like an ideal place to use Promises. Promises define a single flow of data, not a recurring event-driven way to deal with application state. This setup would only reconnect once after a disconnect.
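If you did want the promise-per-event shape anyway, one way (a sketch) is to arm a fresh one-shot promise for each next disconnect, working around the fact that a promise can only settle once (this assumes wRcon exposes .once() like a standard EventEmitter; with only .on() you'd need to remove the listener yourself):
function nextDisconnect() {
  return new Promise(resolve => {
    wRcon.once('disconnect', resolve);
  });
}

function keepAlive() {
  nextDisconnect().then(() => {
    console.log('You have been Disconnected!');
    wRcon.reconnect();
    keepAlive(); // re-arm for the following disconnect
  });
}

keepAlive();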
Three/four options for you, with the third/fourth being my personal preference:
Replacing on
You could replace the on on the wRcon object:
const original_on = wRcon.on;
wRcon.on = function(...args) {
  original_on.apply(this, args);
  return Promise.resolve();
};
...which you could use almost as you showed (note the _ =>):
wRcon.on('disconnect', function() {
  console.log('You have been Disconnected!');
}).then(_ => wRcon.reconnect());
...but I'd be quite leery of doing that, not least because other parts of wRcon may well call on and expect it to have a different return value (such as this, as on is frequently a chainable op).
Adding a new method (onAndAfter)
Or you could add a method to it:
wRcon.onAndAfter = function(...args) {
  wRcon.on.apply(this, args);
  return Promise.resolve();
};
...then
wRcon.onAndAfter('disconnect', function() {
  console.log('You have been Disconnected!');
}).then(_ => wRcon.reconnect());
...but I don't like to modify other API's objects like that.
Utility method (standalone, or on Function.prototype)
Instead, I think I'd give myself a utility function (which is not "thenable"):
const after = (f, callback) => {
  return function(...args) {
    const result = f.apply(this, args);
    Promise.resolve().then(callback).catch(_ => undefined);
    return result;
  };
};
...then use it like this:
wRcon.on('disconnect', after(function() {
  console.log('You have been Disconnected!');
}, _ => wRcon.reconnect()));
That creates a new function to pass to on which, when called, (ab)uses a Promise to schedule the callback (as a microtask for the end of the current macrotask; use setTimeout if you just want it to be a normal [macro]task instead).
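For instance, a macrotask variant of after only swaps the scheduling line; everything else stays the same:
const afterMacrotask = (f, callback) => {
  return function(...args) {
    const result = f.apply(this, args);
    setTimeout(callback, 0); // a normal (macro)task instead of a microtask
    return result;
  };
};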
You could make after something you add to Function.prototype, a bit like bind:
Object.defineProperty(Function.prototype, "after", {
  value: function(callback) {
    const f = this;
    return function(...args) {
      const result = f.apply(this, args);
      Promise.resolve().then(callback).catch(_ => undefined);
      return result;
    };
  }
});
...and then:
wRcon.on('disconnect', function() {
  console.log('You have been Disconnected!');
}.after(_ => wRcon.reconnect()));
And yes, I'm aware of the irony of saying (on the one hand) "I don't like to modify other API's objects like that" and then showing an option modifying the root API of Function. ;-)
In Node.js, a lot of function calls are implemented as non-blocking functions. Sometimes there are dependencies between these calls. For example, when a user logs in, I first want to get the user's type, and then, according to that type, fetch the tasks assigned to it. With non-blocking functions I end up with nested calls:
$http.post(url_login, login_request)
  .success(function(req) {
    $http.post(url_get_tasks, req.type)
      .success(function(req) {
        //update tasks
      })
  })
A lot of libraries provide non-blocking functions only, for example the node_redis library. Is there a good way to handle nested non-blocking function calls and resolve the dependency problem?
There is a page dedicated to this exact problem: http://callbackhell.com/
To sum up, you have the option:
to use named functions, as codebox suggests,
to use the async library (a quick sketch follows below),
or to use promises.
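For completeness, here is a quick sketch of the async-library option, assuming hypothetical callback-style wrappers loginAsync and getTasksAsync for the two requests:
const async = require('async');

// loginAsync and getTasksAsync are illustrative callback-style functions
// with the standard (err, result) signature - not part of the question's code.
async.waterfall([
  callback => loginAsync(login_request, callback),      // calls callback(err, req)
  (req, callback) => getTasksAsync(req.type, callback), // calls callback(err, tasks)
], (err, tasks) => {
  if (err) return console.error(err);
  // update tasks
});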
If I am not mistaken, you are using the Angular $http service, which should support proper promise chaining, so you could reduce your code to:
$http.post(url_login, login_request)
  .then(function(req){
    return $http.post(url_get_tasks, req.type);
  })
  .then(function(req){
    // update tasks
  })
  .catch(function(err){
    // errors from the first AND second call land here
  })
You can write a simple wrapper for async functions to turn them into promises:
// this assumes standard node.js callbacks with the
// signature (error, data)
function toPromise(fn){
  return function() {
    var args = Array.prototype.slice.call(arguments);
    var scope = this;
    return new Promise(function(resolve, reject){
      var cb = function(err, data) {
        return err ? reject(err) : resolve(data);
      };
      args.push(cb);
      fn.apply(scope, args);
    });
  };
}
// turn fs.readdir into a promise
var readdir = toPromise(require('fs').readdir);

readdir('.')
  .then(function(files){
    console.log(files);
  })
  .catch(function(err){
    console.log('reading files failed');
  })
There was another question regarding Angular $http promises which might help you understand them better: here
I'm not 100% sure what you are asking, but if you want to reduce nesting in your code, you can move the inner functions out like this:
function success1(req) {
  //update tasks
}

function success2(req) {
  $http.post(url_get_tasks, req.type).success(success1)
}

$http.post(url_login, login_request).success(success2)
Obviously you should pick better names than success1, though.