Do I have to fulfil my JavaScript promises?

In a Node.js environment if I do this:
var doNotResolve = true;
function a() {
  return new Promise(resolve => {
    if (doNotResolve) {
      return;
    }
    resolve(10);
  });
}
a().then(() => {
  // I don't want this getting fired
});
On an incoming request, is this a memory leak? If I were using a plain old callback, everything would turn out just fine if I didn't execute whatever callback was supplied, but this feels like it might not be OK... the very name "promise" implies this is somewhat wrong.
If I had to I could return a "fake promise" (return { then: () => {} }) inside function a() rather than a "real promise" if doNotResolve was true, but that feels a bit gross.
The particular use-case is that of an isomorphic React.js application where I don't want HTTP requests actually getting made (but I do want my stores to update to a state that causes, say, a loading icon to appear).

Why would you do that instead of rejecting?
The benefit of promises is that they allow both resolving and rejecting, which:
Doesn't fire the then handler (unless you provide two callbacks, which is considered bad practice)
Does fire the catch handler, which explicitly handles errors
Still fires the finally handler
You can simply do:
function a() {
  return new Promise((resolve, reject) => {
    if (doNotResolve) {
      reject(new Error('Oh noes!'));
      return;
    }
    resolve(10);
  });
}
Any good Promise implementation will give you a stacktrace from the location you called reject to help you debug async code, as well as calling any catch/finally handlers:
a().then(val => {
  console.log('Got data:', val);
}).catch(err => {
  console.error(err);
}).finally(() => {
  console.log('Done!');
});
Never rejecting or resolving a promise will, depending on your implementation, leave it on the stack of pending promises and very likely throw or log an error when your page unloads or the node process exits. I know Bluebird will complain if you've left any promises pending, since it typically indicates a bug within the async part of your code.
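To make the "pending forever" behavior concrete, here is a minimal sketch (plain Node, no libraries): the then handler never runs, and once the event loop drains, Node simply exits without ever settling the promise.

```javascript
// A promise whose executor never calls resolve or reject stays pending forever.
let fired = false;
const pending = new Promise(() => {});
pending.then(() => { fired = true; });

// Give the microtask queue a chance to drain, then observe that the
// handler still has not run: nothing ever settled the promise.
setTimeout(() => {
  console.log('handler fired?', fired); // prints "handler fired? false"
}, 10);
```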


await/async how to handle unresolved promises

How do you handle promises that do not resolve?
Example:
class Utils {
  static async thisFunctionOnlyResolvesWhenPassed2AndNeverRejects(number: number) {
    return new Promise((resolve, reject) => {
      if (number === 2) {
        resolve('ok')
      }
    })
  }
}
console.log(await Utils.thisFunctionOnlyResolvesWhenPassed2AndNeverRejects(2))
// this will print "ok" because 2 is passed and the promise is resolved
console.log(await Utils.thisFunctionOnlyResolvesWhenPassed2AndNeverRejects(5))
// this will never settle: the program silently stops here
The uncaughtException and unhandledRejection handlers report nothing when the promise never settles. Adding a try/catch around the await doesn't work (no errors). Finally, the only thing that works is using Promise.then instead of await.
Problem is the code base is riddled with async/await and Promises that sometimes resolve (depending on conditions)
Question: Is there a typescript flag I can add to detect a missing resolve/reject? or maybe an automated way to transpile all the async/await to use Promise.then?
When using a debugger, the program stops after the Promise and it is difficult to find which function/promise has a missing resolve/reject.
Rewriting all the async/await calls to use Promise.then is my last resort.
If you have promises that occasionally don't resolve or reject and that's not the way they are supposed to work (which it usually isn't), then you just have to fix that. There really is no work-around. The proper fix is to get down to the lowest level and fix the code so it reliably resolves or rejects every time.
This is not the proper fix, but implementing a timeout wrapper could help with debugging, giving you a log message with some semblance of a stack trace for a timed-out promise:
function rejectT(t) {
  // create potential error here for better opportunity at stack trace
  let e = new Error("Promise timed out");
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(e);
      reject(e);
    }, t);
  });
}
function timeout(p, t = 5000) {
  return Promise.race([p, rejectT(t)]);
}
You can then wrap any promise such that instead of:
fn().then(...).catch(...)
You can use:
timeout(fn()).then(...).catch(...);
Or, if you want to set a custom timeout value:
timeout(fn(), 1000).then(...).catch(...);
Again, this is debugging code to help find the culprits that need fixing and to help test fixes, not to bandaid your code.
Rewriting all the async/await calls to use Promise.then is my last resort.
I don't see how this is going to help at all. If await never finishes, neither will promise.then(). They are exactly the same in that regard. If the promise never resolves or rejects, then the .then() handler will never get called either.
Problem is the code base is riddled with async/await and Promises that sometimes resolve (depending on conditions)
There's no shortcut here other than methodical code review to find suspect code that has code paths that may never resolve or reject and then building unit tests to test every function that returns a promise in a variety of conditions.
One likely source of code that never resolves or rejects are some of the promise anti-patterns. The precise reason some of them are anti-patterns is because they can be very easy to mess up. Here are a few references that might spike your sensitivity to suspect code:
Promise Anti-Patterns
Common Promise Anti-Patterns and How to Avoid Them
ES6 Promises: Patterns and Anti-Patterns
async function thisFunctionOnlyResolvesWhenPassed2AndNeverRejects(number) {
  return new Promise((resolve, reject) => {
    if (number === 2) {
      resolve('ok')
    } else {
      reject('error:' + number)
    }
  })
}
(async () => {
  try {
    console.log(await thisFunctionOnlyResolvesWhenPassed2AndNeverRejects(2))
    // this will print "ok" because 2 is passed and the promise is resolved
  } catch (e) {
    console.error(e);
  }
  try {
    console.log(await thisFunctionOnlyResolvesWhenPassed2AndNeverRejects(5))
    // this now rejects instead of hanging, so the error is caught below
  } catch (e) {
    console.error(e);
  }
})()

Early returning inside an asynchronous function

Assume a scenario where you have to call an asynchronous function but are not really interested in the success/failure outcome of that function. In that case, what are the pros and cons of the two patterns below with respect to the call stack, the callback queue, and the event loop?
Pattern-1
async setSomething() {
  try {
    set(); // another async function
  } catch (err) {
    // log the error here
  }
  return true;
}
Pattern-2
async setSomething() {
  try {
    await set(); // another async function
  } catch (err) {
    // log the error here
  }
  return true;
}
Pattern 1 does not catch any errors that occur during the asynchronous operations in the set function; any such error will result in an unhandled promise rejection, which should be avoided. Pattern 1 will only catch errors thrown during set's synchronous phase (such as when setting up a fetch request), which are unlikely to occur in most situations.
Example:
// open your browser's console to see the uncaught rejection
const set = () => new Promise((_, reject) => setTimeout(reject, 500));
async function setSomething() {
  try {
    set(); // another async function
  } catch (err) {
    console.log('err');
  }
  return true;
}
setSomething();
So, pattern 2 is likely preferable. If you don't care about the result of the asynchronous call, then don't await or call .then when you call setSomething().
Or, for something this simple, you might consider using Promise methods only, no async function needed:
const setSomething = () => set()
  .catch((err) => {
    // log the error here
  });
This answer is unconventional advice about the question rather than a direct answer to the examples posted by the OP.
not really interested in the success/failure outcome of that function
If the above statement is the case, then the return value does not depend on the result of the async invocation.
When you're not bothered about the result of the async invocation, you're better off not using async/await or any kind of promise at all. Just invoke the function like any other function and proceed with the rest of the code.
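A minimal sketch of that fire-and-forget style, using a made-up set() standing in for the question's async function:

```javascript
// Hypothetical stand-in for set() from the question.
const set = () => new Promise((resolve) => setTimeout(() => resolve('done'), 10));

function setSomething() {
  // Fire and forget: no await. Attaching .catch keeps a failure from
  // surfacing as an unhandled rejection, without blocking the caller.
  set().catch((err) => console.error('set failed:', err));
  return true; // returns immediately, independent of set()'s outcome
}

console.log(setSomething()); // prints "true"
```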

Why add `async` to a promise callback [duplicate]

I've been trying to get a conceptual understanding of why the following code doesn't catch the throw. If you remove the async keyword from the new Promise(async (resolve, ... part then it works fine, so it has to do with the fact that the Promise executor is an async function.
(async function() {
  try {
    await fn();
  } catch(e) {
    console.log("CAUGHT fn error -->", e)
  }
})();
function fn() {
  return new Promise(async (resolve, reject) => {
    // ...
    throw new Error("<<fn error>>");
    // ...
  });
}
The answers here, here, and here repeat that "if you're in any other asynchronous callback, you must use reject", but by "asynchronous" they're not referring to async functions, so I don't think their explanations apply here (and if they do, I don't understand how).
If instead of throw we use reject, the above code works fine. I'd like to understand, fundamentally, why throw doesn't work here. Thanks!
This is the async/await version of the Promise constructor antipattern!
Never ever use an async function as a Promise executor function (even when you can make it work1)!
[1: by calling resolve and reject instead of using return and throw statements]
by "asynchronous" they're not referring to async functions, so I don't think their explanations apply here
They could as well. A simple example where it cannot work is
new Promise(async function() {
  await delay(…);
  throw new Error(…);
})
which is equivalent to
new Promise(function() {
  return delay(…).then(function() {
    throw new Error(…);
  });
})
where it's clear now that the throw is inside an asynchronous callback.
The Promise constructor can only catch synchronous exceptions, and an async function never throws - it always returns a promise (which might get rejected though). And that return value is ignored, as the promise is waiting for resolve to be called.
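That "ignored return value" claim can be seen directly in a small sketch: the async executor's implicit rejected promise is discarded, so the constructed promise stays pending forever. (The unhandledRejection listener is only there because modern Node would otherwise crash on the discarded rejection.)

```javascript
// Swallow the unhandled rejection from the discarded executor return value,
// since modern Node crashes on unhandled rejections by default.
process.on('unhandledRejection', () => {});

// The async executor throws; its implicit return value is a rejected
// promise, which the Promise constructor silently ignores.
const p = new Promise(async () => {
  throw new Error('lost');
});

let settled = false;
p.then(() => { settled = true; }, () => { settled = true; });

setTimeout(() => {
  console.log('settled?', settled); // prints "settled? false" - p is pending forever
}, 10);
```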
This is because the only way to "communicate" to the outside world from within a Promise executor is to use the resolve and reject functions. You could use the following for your example:
function fn() {
  return new Promise(async (resolve, reject) => {
    // there is no real reason to use an async executor here since there is nothing async happening
    try {
      throw new Error('<<fn error>>')
    } catch(error) {
      return reject(error);
    }
  });
}
An example would be when you want to do something that has convenient async functions, but also requires a callback. The following contrived example copies a file by reading it using the async fs.promises.readFile function with the callback based fs.writeFile function. In the real world, you would never mix fs functions like this because there is no need to. But some libraries like stylus and pug use callbacks, and I use something like this all the time in those scenarios.
const fs = require('fs');
function copyFile(infilePath, outfilePath) {
  return new Promise(async (resolve, reject) => {
    try {
      // the fs.promises library provides convenient async functions
      const data = await fs.promises.readFile(infilePath);
      // the fs library also provides methods that use callbacks
      // the following line doesn't need a return statement, because there is nothing to return the value to
      // but IMO it is useful to signal intent that the function has completed (especially in more complex functions)
      return fs.writeFile(outfilePath, data, (error) => {
        // note that if there is an error we call the reject function
        // so whether an error is thrown in the promise executor or in the callback, the reject function will be called
        // so from the outside, copyFile appears to be a perfectly normal async function
        return (error) ? reject(error) : resolve();
      });
    } catch(error) {
      // this will only catch errors from the main body of the promise executor (ie. the fs.promises.readFile statement)
      // it will not catch any errors from the callback to the fs.writeFile statement
      return reject(error);
      // the return statement is not necessary, but IMO communicates the intent that the function is completed
    }
  });
}
Apparently everyone says this is an anti-pattern, but I use it all the time when I want to do some async stuff before doing something that can only be done with a callback (not for copying files like my contrived example). I don't understand why people think it is an anti-pattern (to use an async promise executor), and haven't seen an example yet that has convinced me that it should be accepted as a general rule.

What is a practical / elegant way to manage complex event sequences with cancellation in JavaScript?

I have a JavaScript (EmberJS + Electron) application that needs to execute sequences of asynchronous tasks. Here is a simplified example:
Send a message to a remote device
Receive response less than t1 seconds later
Send another message
Receive second response less than t2 seconds later
Display success message
For simple cases this seems reasonably easy to implement with Promises: 1 then 2 then 3 ... It gets a little trickier when timeouts are incorporated, but Promise.race and Promise.all seem like reasonable solutions for that.
However, I need to allow users to be able to cancel a sequence gracefully, and I am struggling to think of sensible ways to do this. The first thing that came to mind was to do some kind of polling during each step to see if a variable someplace has been set to indicate that the sequence should be canceled. Of course, that has some serious problems with it:
Inefficient: most of the polling time is wasted
Unresponsive: an extra delay is introduced by having to poll
Smelly: I think it goes without saying that this would be inelegant. A cancel event is completely unrelated to time so shouldn't require using a timer. The isCanceled variable may need to be outside of the promise' scope. etc.
Another thought I had was to perhaps race everything so far against another promise that only resolves when the user sends a cancel signal. A major problem here is that the individual tasks running (that the user wants to cancel) don't know that they need to stop, roll-back, etc. so even though the code that gets the promise resolution from the race works fine, the code in the other promises does not get notified.
Once upon a time there was talk about cancel-able promises, but it looks to me like the proposal was withdrawn so won't be incorporated into ECMAScript any time soon though I think the BlueBird promise library supports this idea. The application I'm making already includes the RSVP promise library, so I didn't really want to bring in another one but I guess that's a potential option.
How else can this problem be solved?
Should I be using promises at all? Would this be better served by a pub/sub event system or some such thing?
Ideally, I'd like to separate the concern of being canceled from each task (just like how the Promise object is taking care of the concern of asynchronicity). It'd also be nice if the cancellation signal could be something passed-in/injected.
Despite not being graphically skilled, I've attempted to illustrate what I'm trying to do by making the two drawings below. If you find them confusing then feel free to ignore them.
If I understand your problem correctly, the following may be a solution.
Simple timeout
Assume your mainline code looks like this:
send(msg1)
  .then(() => receive(t1))
  .then(() => send(msg2))
  .then(() => receive(t2))
  .catch(() => console.log("Didn't complete sequence"));
receive would be something like:
function receive(t) {
  return new Promise((resolve, reject) => {
    setTimeout(() => reject("timed out"), t);
    receiveMessage(resolve, reject);
  });
}
This assumes the existence of an underlying API receiveMessage, which takes two callbacks as parameters, one for success and one for failure. receive simply wraps receiveMessage with the addition of the timeout which rejects the promise if time t passes before receiveMessage resolves.
User cancellation
But how to structure this so that an external user can cancel the sequence? You have the right idea to use a promise instead of polling. Let's write our own cancelablePromise:
function cancelablePromise(executor, canceler) {
  return new Promise((resolve, reject) => {
    canceler.then(e => reject(`cancelled for reason ${e}`));
    executor(resolve, reject);
  });
}
We pass an "executor" and a "canceler". "Executor" is the technical term for the parameter passed to the Promise constructor, a function with the signature (resolve, reject). The canceler we pass in is a promise, which when fulfilled, cancels (rejects) the promise we are creating. So cancelablePromise works exactly like new Promise, with the addition of a second parameter, a promise for use in canceling.
Now you can write your code as something like the following, depending on when you want to be able to cancel:
var canceler1 = new Promise(resolve =>
  document.getElementById("cancel1").addEventListener("click", resolve)
);
send(msg1)
  .then(() => cancelablePromise(receiveMessage, canceler1))
  .then(() => send(msg2))
  .then(() => cancelablePromise(receiveMessage, canceler2))
  .catch(() => console.log("Didn't complete sequence"));
If you are programming in ES6 and like using classes, you could write
class CancelablePromise extends Promise {
  constructor(executor, canceler) {
    super((resolve, reject) => {
      canceler.then(reject);
      executor(resolve, reject);
    });
  }
}
This would then obviously be used as in
send(msg1)
  .then(() => new CancelablePromise(receiveMessage, canceler1))
  .then(() => send(msg2))
  .then(() => new CancelablePromise(receiveMessage, canceler2))
  .catch(() => console.log("Didn't complete sequence"));
If programming in TypeScript, with the above code you will likely need to target ES6 and run the resulting code in an ES6-friendly environment which can handle the subclassing of built-ins like Promise correctly. If you target ES5, the code TypeScript emits might not work.
The above approach has a minor (?) defect. Even if canceler has fulfilled before we start the sequence, or invoke cancelablePromise(receiveMessage, canceler1), although the promise will still be canceled (rejected) as expected, the executor will nevertheless run, kicking off the receiving logic--which in the best case might consume network resources we would prefer not to. Solving this problem is left as an exercise.
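One possible solution to that exercise (a sketch, using a hand-rolled makeCanceler helper of my own rather than a bare promise) is to pair the canceler promise with a synchronously readable flag, so the executor is skipped entirely when cancellation has already happened:

```javascript
// A canceler exposing both a promise (for async notification) and a
// synchronously inspectable flag, so work need never start at all.
function makeCanceler() {
  let canceled = false;
  let fire;
  const promise = new Promise((resolve) => { fire = resolve; });
  return {
    promise,
    cancel: (e) => { canceled = true; fire(e); },
    isCanceled: () => canceled,
  };
}

function cancelablePromise(executor, canceler) {
  return new Promise((resolve, reject) => {
    if (canceler.isCanceled()) {
      return reject('cancelled before start'); // executor never runs
    }
    canceler.promise.then(e => reject(`cancelled for reason ${e}`));
    executor(resolve, reject);
  });
}
```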
"True" cancelation
But none of the above addresses what may be the real issue: to cancel an in-progress asynchronous computation. This kind of scenario was what motivated the proposals for cancelable promises, including the one which was recently withdrawn from the TC39 process. The assumption is that the computation provides some interface for cancelling it, such as xhr.abort().
Let's assume that we have a web worker to calculate the nth prime, which kicks off on receiving the go message:
function findPrime(n) {
  return new Promise(resolve => {
    var worker = new Worker('./find-prime.js');
    worker.addEventListener('message', evt => resolve(evt.data));
    worker.postMessage({cmd: 'go', n});
  });
}
> findPrime(1000000).then(console.log)
< 15485863
We can make this cancelable, assuming the worker responds to a "stop" message to terminate its work, again using a canceler promise, by doing:
function findPrime(n, canceler) {
  return new Promise((resolve, reject) => {
    // Initialize worker.
    var worker = new Worker('./find-prime.js');
    // Listen for worker result.
    worker.addEventListener('message', evt => resolve(evt.data));
    // Kick off worker.
    worker.postMessage({cmd: 'go', n});
    // Handle canceler--stop worker and reject promise.
    canceler.then(e => {
      worker.postMessage({cmd: 'stop'});
      reject(`cancelled for reason ${e}`);
    });
  });
}
The same approach could be used for a network request, where the cancellation would involve calling xhr.abort(), for example.
By the way, one rather elegant (?) proposal for handling this sort of situation, namely promises which know how to cancel themselves, is to have the executor, whose return value is normally ignored, instead return a function which can be used to cancel itself. Under this approach, we would write the findPrime executor as follows:
const findPrimeExecutor = n => resolve => {
  var worker = new Worker('./find-prime.js');
  worker.addEventListener('message', evt => resolve(evt.data));
  worker.postMessage({cmd: 'go', n});
  return e => worker.postMessage({cmd: 'stop'});
};
In other words, we need only to make a single change to the executor: a return statement which provides a way to cancel the computation in progress.
Now we can write a generic version of cancelablePromise, which we will call cancelablePromise2, which knows how to work with these special executors that return a function to cancel the process:
function cancelablePromise2(executor, canceler) {
  return new Promise((resolve, reject) => {
    var cancelFunc = executor(resolve, reject);
    canceler.then(e => {
      if (typeof cancelFunc === 'function') cancelFunc(e);
      reject(`cancelled for reason ${e}`);
    });
  });
}
Assuming a single canceler, your code can now be written as something like
var canceler = new Promise(resolve =>
  document.getElementById("cancel").addEventListener("click", resolve)
);
function chain(msg1, msg2, canceler) {
  const send = n => () => cancelablePromise2(findPrimeExecutor(n), canceler);
  const receive = () => cancelablePromise2(receiveMessage, canceler);
  return send(msg1)()
    .then(receive)
    .then(send(msg2))
    .then(receive)
    .catch(e => console.log(`Didn't complete sequence for reason ${e}`));
}
chain(msg1, msg2, canceler);
At the moment that the user clicks on the "Cancel" button, and the canceler promise is fulfilled, any pending sends will be canceled, with the worker stopping in midstream, and/or any pending receives will be canceled, and the promise will be rejected, that rejection cascading down the chain to the final catch.
The various approaches that have been proposed for cancelable promise attempt to make the above more streamlined, more flexible, and more functional. To take just one example, some of them allow synchronous inspection of the cancellation state. To do this, some of them use the notion of "cancel tokens" which can be passed around, playing a role somewhat analogous to our canceler promises. However, in most cases cancellation logic can be handled without too much complexity in pure userland code, as we have done here.
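As a footnote, the cancel-token idea did eventually ship in the platform as AbortController/AbortSignal (modern browsers and Node 15+). Here is a sketch of the findPrime example using it, with a setTimeout standing in for the worker:

```javascript
// AbortSignal plays the cancel-token role: it can be inspected
// synchronously (signal.aborted) and listened to for the abort event.
function findPrime(n, signal) {
  return new Promise((resolve, reject) => {
    if (signal.aborted) return reject(new Error('cancelled'));
    const timer = setTimeout(() => resolve(15485863), 1000); // stand-in for the worker
    signal.addEventListener('abort', () => {
      clearTimeout(timer); // stop the underlying work
      reject(new Error('cancelled'));
    });
  });
}

const controller = new AbortController();
const result = findPrime(1000000, controller.signal).catch(e => e.message);
controller.abort(); // the abort event fires and the promise rejects
result.then(console.log); // prints "cancelled"
```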

exception handling, thrown errors, within promises

I am running external code as a 3rd party extension to a node.js service. The API methods return promises. A resolved promise means the action was carried out successfully, a failed promise means there was some problem carrying out the operation.
Now here's where I'm having trouble.
Since the 3rd party code is unknown, there could be bugs, syntax errors, type issues, any number of things that could cause node.js to throw an exception.
However, since all the code is wrapped up in promises, these thrown exceptions are actually coming back as failed promises.
I tried to put the function call within a try/catch block, but it's never triggered:
// worker process
var mod = require('./3rdparty/module.js');
try {
  mod.run().then(function (data) {
    sendToClient(true, data);
  }, function (err) {
    sendToClient(false, err);
  });
} catch (e) {
  // unrecoverable error inside of module
  // ... send signal to restart this worker process ...
}
In the above psuedo-code example, when an error is thrown it turns up in the failed promise function, and not in the catch.
From what I read, this is a feature, not an issue, with promises. However I'm having trouble wrapping my head around why you'd always want to treat exceptions and expected rejections exactly the same.
One case is about actual bugs in the code, possibly irrecoverable -- the other is just possible missing configuration information, or a parameter, or something recoverable.
Thanks for any help!
Crashing and restarting a process is not a valid strategy to deal with errors, not even bugs. It would be fine in Erlang, where a process is cheap and does one isolated thing, like serving a single client. That doesn't apply in node, where a process costs orders of magnitude more and serves thousands of clients at once.
Lets say that you have 200 requests per second being served by your service. If 1% of those hit a throwing path in your code, you would get 20 process shutdowns per second, roughly one every 50ms. If you have 4 cores with 1 process per core, you would lose them in 200ms. So if a process takes more than 200ms to start and prepare to serve requests (minimum cost is around 50ms for a node process that doesn't load any modules), we now have a successful total denial of service. Not to mention that users hitting an error tend to do things like e.g. repeatedly refresh the page, thereby compounding the problem.
Domains don't solve the issue because they cannot ensure that resources are not leaked.
Read more at issues #5114 and #5149.
Now you can try to be "smart" about this and have a process recycling policy of some sort based on a certain number of errors, but whatever strategy you approach it will severely change the scalability profile of node. We're talking several dozen requests per second per process, instead of several thousands.
However, promises catch all exceptions and then propagate them in a manner very similar to how synchronous exceptions propagate up the stack. Additionally, they often provide a finally method, which is meant to be the equivalent of try...finally. Thanks to those two features, we can encapsulate that clean-up logic by building "context-managers" (similar to with in python, using in C# or try-with-resources in Java) that always clean up resources.
Lets assume our resources are represented as objects with acquire and dispose methods, both of which return promises. No connections are being made when the function is called, we only return a resource object. This object will be handled by using later on:
function connect(url) {
  return {acquire: cb => pg.connect(url), dispose: conn => conn.dispose()}
}
We want the API to work like this:
using(connect(process.env.DATABASE_URL), async (conn) => {
  await conn.query(...);
  // do other things
  return someResult;
});
We can easily achieve this API:
function using(resource, fn) {
  return Promise.resolve()
    .then(() => resource.acquire())
    .then(item =>
      Promise.resolve(item).then(fn).finally(() =>
        // bail if disposing fails, for any reason (sync or async)
        Promise.resolve()
          .then(() => resource.dispose(item))
          .catch(terminate)
      )
    );
}
The resources will always be disposed of after the promise chain returned within using's fn argument completes. Even if an error was thrown within that function (e.g. from JSON.parse) or its inner .then closures (like the second JSON.parse), or if a promise in the chain was rejected (equivalent to callbacks calling with an error). This is why it's so important for promises to catch errors and propagate them.
If however disposing the resource really fails, that is indeed a good reason to terminate. It's extremely likely that we've leaked a resource in this case, and it's a good idea to start winding down that process. But now our chances of crashing are isolated to a much smaller part of our code - the part that actually deals with leakable resources!
Note: terminate is basically throwing out-of-band so that promises cannot catch it, e.g. process.nextTick(() => { throw e });. What implementation makes sense might depend on your setup - a nextTick based one works similar to how callbacks bail.
How about using callback based libraries? They could potentially be unsafe. Let's look at an example to see where those errors could come from and which ones could cause problems:
function unwrapped(arg1, arg2, done) {
  var resource = allocateResource();
  mayThrowError1();
  resource.doesntThrow(arg1, (err, res) => {
    mayThrowError2(arg2);
    done(err, res);
  });
}
mayThrowError2() is within an inner callback and will still crash the process if it throws, even if unwrapped is called within another promise's .then. These kinds of errors aren't caught by typical promisify wrappers and will continue to cause a process crash as per usual.
However, mayThrowError1() will be caught by the promise if called within .then, and the inner allocated resource might leak.
We can write a paranoid version of promisify that makes sure that any thrown errors are unrecoverable and crash the process:
function paranoidPromisify(fn) {
  return function(...args) {
    return new Promise((resolve, reject) => {
      try {
        fn(...args, (err, res) => err != null ? reject(err) : resolve(res));
      } catch (e) {
        process.nextTick(() => { throw e; });
      }
    });
  };
}
Using the promisified function within another promise's .then callback now results in a process crash if unwrapped throws, falling back to the throw-crash paradigm.
It's the general hope that as you use more and more promise-based libraries, they will use the context-manager pattern to manage their resources, and therefore you will have less need to let the process crash.
None of these solutions are bulletproof - not even crashing on thrown errors. It's very easy to accidentally write code that leaks resources despite not throwing. For example, this node-style function will leak resources even though it doesn't throw:
function unwrapped(arg1, arg2, done) {
  var resource = allocateResource();
  resource.doSomething(arg1, function(err, res) {
    if (err) return done(err);
    resource.doSomethingElse(res, function(err, res) {
      resource.dispose();
      done(err, res);
    });
  });
}
Why? Because when doSomething's callback receives an error, the code forgets to dispose of the resource.
This sort of problem doesn't happen with context-managers. You cannot forget to call dispose: you don't have to, since using does it for you!
References: why I am switching to promises, context managers and transactions
It is almost the most important feature of promises. If it wasn't there, you might as well use callbacks:
var fs = require("fs");
fs.readFile("myfile.json", function(err, contents) {
  if (err) {
    console.error("Cannot read file");
  } else {
    try {
      var result = JSON.parse(contents);
      console.log(result);
    } catch (e) {
      console.error("Invalid json");
    }
  }
});
(Before you say that JSON.parse is the only thing that throws in js, did you know that even coercing a variable to a number, e.g. +a, can throw a TypeError?)
However, the above code can be expressed much more clearly with promises because there is just one exception channel instead of 2:
var Promise = require("bluebird");
var readFile = Promise.promisify(require("fs").readFile);
readFile("myfile.json").then(JSON.parse).then(function(result) {
  console.log(result);
}).catch(SyntaxError, function(e) {
  console.error("Invalid json");
}).catch(function(e) {
  console.error("Cannot read file");
});
Note that catch is sugar for .then(null, fn). If you understand how the exception flow works, you will see that it is kind of an anti-pattern to generally use .then(fnSuccess, fnFail).
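A quick sketch of that sugar relationship (failing is a made-up helper): both forms attach only a rejection handler and behave identically.

```javascript
// .catch(fn) is sugar for .then(null, fn): both attach only a rejection
// handler and let fulfillment pass through untouched.
const failing = () => Promise.reject(new Error('boom'));

failing()
  .then(null, e => 'handled: ' + e.message)
  .then(console.log); // prints "handled: boom"

failing()
  .catch(e => 'handled: ' + e.message)
  .then(console.log); // prints "handled: boom"
```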
The point of .then(success, fail) is not at all to provide an alternative way to attach your callbacks; it is to make the written code look almost the same as it would look when writing synchronous code:
try {
  var result = JSON.parse(readFileSync("myjson.json"));
  console.log(result);
} catch (SyntaxError e) {
  console.error("Invalid json");
} catch (Error e) {
  console.error("Cannot read file");
}
(The sync code will actually be uglier in reality because javascript doesn't have typed catches)
Promise rejection is simply a form of failure abstraction. So are node-style callbacks (err, res) and exceptions. Since promises are asynchronous, you can't use try-catch to actually catch anything, because errors are likely to happen in a later tick of the event loop.
A quick example:
function test(callback) {
  throw 'error';
  callback(null);
}
try {
  test(function () {});
} catch (e) {
  console.log('Caught: ' + e);
}
Here we can catch the error, as the function is synchronous (though callback-based). Another example:
function test(callback) {
  process.nextTick(function () {
    throw 'error';
    callback(null);
  });
}
try {
  test(function () {});
} catch (e) {
  console.log('Caught: ' + e);
}
Now we can't catch the error! The only option is to pass it in the callback:
function test(callback) {
  process.nextTick(function () {
    callback('error', null);
  });
}
test(function (err, res) {
  if (err) return console.log('Caught: ' + err);
});
Now it's working just like in the first example. The same applies to promises: you can't use try-catch, so you use rejections for error-handling.
