I've inherited a codebase at work that contains a dozen or so examples of the following pattern:
var promise = null;
try {
    promise = backendService.getResults(input);
}
catch (exception) {
    console.error(exception);
}
if (promise !== null) {
    promise.then(function (response) {
        // do stuff
    })
    .catch(function (error) {
        console.error(error);
    });
}
Where backendService is an Angular service that in turn calls a REST service through $http.
So here's my question: is that try/catch really necessary? Will there ever be any scenario where a particular error/exception is thrown that the promise's .catch fails to catch?
This has been the subject of a bit of debate on the team all morning, and the only resolution we've come up with is that we don't think it's necessary, but (a) changing it breaks the tests that were written alongside it (which would also need to be changed), and (b) well... it's defensive coding, right? It's not a Bad Thing.
The merits of actually bothering to refactor it into oblivion when there are more important things to do aren't what I'm asking about, though. I just want to know if it's a reasonable pattern when promises are being passed around like this (in AngularJS specifically, if that makes a difference), or just paranoia.
Do promises in AngularJS catch every exception/error?
No. Only exceptions that are thrown from inside then/catch callbacks are automatically caught. All errors happening outside of them will need to be handled explicitly.
Will there ever be any scenario where a particular error/exception is thrown that the promise's .catch fails to catch?
Yes. That backendService.getResults(input) call might not return a promise at all, but throw an exception instead. Or it doesn't even get that far: if backendService is not defined you'll get a ReferenceError, and if .getResults is not a function you'll get a TypeError.
is that try/catch really necessary?
Not really. In the latter case, where your code has a grave mistake, you probably don't mind it throwing and crashing. The former case, where backendService.getResults(input) throws synchronously, is heavily frowned upon: asynchronous functions should never throw but only return promises, exactly so that you don't have to write two error-handling constructs.
well... it's defensive coding, right? It's not a Bad Thing.
Yeah, pretty much. But it is questionable here. A synchronous exception is really unexpected here, not just a service failure that can be handled. The logging message in the catch block should signify that.
Notice that it also isn't defensive enough. It doesn't catch the probably more likely mistake where getResults() does return, but returns something that is not a promise; calling .then() on that might throw. Similarly, the if (promise !== null) check is dubious, as it hides the case where null is returned erroneously (we could really use try-catch-else here).
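As a rough sketch of a more defensive variant (using plain ES6 promises; with AngularJS the same idea works through $q), you can start the chain from an already-resolved promise so that all of these failure modes land in one handler:
// Sketch: a synchronous throw inside getResults becomes a rejection, and a
// non-promise return value is simply passed along to the next .then instead of
// blowing up on a direct .then() call. Rejections reach the same .catch too.
Promise.resolve()
    .then(function () {
        return backendService.getResults(input);
    })
    .then(function (response) {
        // do stuff
    })
    .catch(function (error) {
        console.error(error);
    });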
is that try/catch really necessary?
Not really, as long as your backendService.getResults() returns a promise, e.g. return $http(...).
Will there ever be any scenario where a particular error/exception is thrown that the promise's .catch fails to catch?
I don't think so; any error will become a rejected promise and fall into your .catch() handler.
well... it's defensive coding, right? It's not a Bad Thing.
It depends... JavaScript's try/catch has some performance overhead, so if you're using it just for the sake of making sure, you could remove it :)
Take a further look at the try-catch discussion here if you wish: Javascript Try-Catch Performance Vs. Error Checking Code
Background
I am reading every inch of the docs and trying to learn about Folktale as much as I can.
Recently, I decided to try Future.
Do we need a Future?
Now, while I understand the difference between Task and Promise, and between Task and Future (support for cancellation), the difference between Future and Promise is not clear to me.
Why would I ever want to use a Future instead of a Promise ? What benefits would I have?
Well, you can say: "This way you actually have a monad, instead of a sorry excuse for a monad".
And that is a fine argument on its own, but... given that I always need to convert from Promise to something else (to Future), and that Future's API is pretty much the same, it is not clear to me, as someone new, why I should care about Future at all.
Code sample
Let's assume I have this function, where request is a function that makes a request and returns some results.
extractRequestInfo is a function that extracts data from the response object.
If something fails, I catch the error and return an object with all the data, the badId and the error.
const requestFruit = request => data =>
    request( data )
        .then( extractRequestInfo )
        .catch( error => ( { badId: prop( [ "Id" ], data ), error } ) );
Given that this is an HTTP request, I know I don't need a Task because there is no cancellation I can do here. So my options are Promise and Future.
Questions
How would I use Future in this sample?
Since this is something that can fail, should I use Result as well?
Quoting the response from the creator Quil:
Future solves the same problem Promise does, so there isn't much of a conceptual difference between the two. The difference is more in how they solve the problem.
Promises can either settle successfully or fail. In any transformation you apply to a promise's value, errors thrown synchronously will be implicitly caught and reject the promise as well. This is interesting in async/await because you can handle these errors (synchronous and asynchronous) in a similar way--you don't need to lift every synchronous operation into a promise, because the runtime will do that for you.
The downside of this is that it's very easy to catch errors that you didn't intend to, and have your system run in an inconsistent state. I don't think you can do much with static analysis here either.
Futures don't have that problem because nothing is lifted into a future implicitly. If you want synchronous operations to use the Future pipeline for handling errors, you have to put them there explicitly. This gives you more control over error handling, and uncaught errors will still crash the process as expected (avoiding having your program run into inconsistent memory states for cases you didn't predict), but it takes more effort to write programs this way.
Other than that, if you consider Tasks, Futures model the eventual value of a Task with a success case, a failure case, and a cancellation case. Promises only have a success case and a failure case, so cancellation is modelled as a special failure value. This changes the idioms for handling cancellations a bit. It's possible for code using promises to handle failures without being aware of this special cancellation value, which may be a problem since this value may easily be lost during these transformations.
In codebases that mix promises and tasks, these problems are more complicated because the implicit lifting of errors that promises do is not very compatible with the explicit lifting of errors that tasks/futures expect (this can lead to problems like this one: #163). Finding these bugs becomes a lot harder than if you had only promises or only tasks/futures. Not sure what's the best way to handle these cases yet.
For the original discussion:
https://github.com/origamitower/folktale/issues/200
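To illustrate the "implicit lifting" downside Quil describes, here is a minimal plain-promise sketch (not Folktale-specific): a simple typo inside a .then callback is silently converted into a rejection and swallowed by the handler below, instead of crashing the process.
// Sketch: the typo (JSON.prase) is implicitly lifted into a rejection,
// so the catch handler runs rather than the process crashing on a TypeError.
Promise.resolve('{ "Id": 1 }')
    .then(text => JSON.prase(text)) // TypeError: JSON.prase is not a function
    .catch(error => console.error("caught, perhaps unintentionally:", error.message));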
For example
p = new Promise(function (resolve, reject) {
    throw 'err';
});
p.done();
In most promise polyfill libraries, the done call will rethrow the error, and the current execution will exit.
But if we use p.then(), nothing will happen; the error is swallowed by the promise. If we use p.catch, we have no way to exit the current execution. I want to achieve something like:
try {
    // something
} catch (err) {
    if (check(err)) {
        throw err;
    }
}
No.
Not only will .done likely not be supported in future versions of the spec, it is also unneeded. Quoting from the threads Mariusz linked to:
Domenic:
it's still error-prone: if you slip up and don't follow the rule even once, you might silence an error forever.
Mark Miller (who pioneered the concept of promises):
Note that weak-refs, hopefully in ES7, will provide us one of the diagnostic tools we need to bridge this gap. Using weak-refs, if a rejected promise gets collected without having notified any handlers, we can arrange that this generates a diagnostic. The promise implementation would have to keep the reason in the promise's executor (post-mortem gc handler), so that it has the diagnostic to report after discovery that the promise has been rejected.
Yehuda Katz on RSVP's error handler:
The approach we're taking in RSVP is to install an unhandled promise monitor that throws by default.
You can opt a particular promise out of this behavior by attaching a noop failure handler, if you know that you will be attaching asynchronous error handlers. We will probably have sugar for this (.undone :p)
In our experience, moving the burden from literally everyone to people who may want to attach async error handlers is appropriate.
And, from the actual repo that preceded the spec, Domenic said:
done's job will be done by integrating unhandled rejection tracking functionality into dev tools. Most TC39ers, from what I understand, as well as myself, perceive that as enough for the spec to be complete.
The spec committee did not just ignore .done; they deemed it unnecessary and error-prone. Modern promise libraries automatically detect unhandled rejections; two examples of this are When and Bluebird, which pioneered the idea.
.done is an artifact, originating from the fact that browsers could not detect unhandled rejections. The truth is, detecting them deterministically is impossible, but for the vast majority of cases it is completely possible.
Don't believe me? Open Firefox and play with its native promises:
p = new Promise(function (resolve, reject) {
    throw 'err';
});
// Logs as error: Unhandled error: `err`
Simply put, Firefox uses garbage-collection hooks to determine that a promise was disposed of in an unhandled rejected state, and fires a global error handler which defaults to writing to the console.
Now, the problem is that native promises are not very usable yet: in IE they don't exist, and in Chrome unhandled-rejection detection has not been implemented yet. But it's coming, and it'll be there. Meanwhile you can use an ES6-compatible library like Bluebird, which will do this rejection tracking for you.
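For what it's worth, engines have since grown native hooks for exactly this kind of tracking; a minimal sketch of subscribing to them, assuming a modern browser or Node.js:
// Browser: fires when a rejected promise still has no handler at the end of the turn.
window.addEventListener('unhandledrejection', function (event) {
    console.error('Unhandled rejection:', event.reason);
});

// Node.js equivalent:
process.on('unhandledRejection', function (reason) {
    console.error('Unhandled rejection:', reason);
});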
If you want to polyfill done (which I strongly recommend against), the polyfill by torazaburo has a few shortcomings. It declares an enumerable property on the promise prototype, and generally this is not how the spec was designed: you are expected to subclass promises in order to extend them rather than monkey-patch them; sadly, no implementations currently support this.
So in short:
Wait for native promises to stabilize before you use them; in the meanwhile you can use libraries that implement the spec, like Bluebird. Once things stabilize, not having .done will not be an issue at all.
Utilize patterns for detecting errors; for example, check out the disposer pattern here.
Use the developer tools when available; long stack traces and async debugging are big pluses. Also note that you should not throw strings if you want meaningful stack traces.
Good luck and happy coding.
No, AFAIK done is not part of the spec. To mimic its behavior, you should throw the exception on the next tick, outside the purview of the promises chain:
p.catch(function (e) {
    setTimeout(function () { throw e; });
});
This is essentially how libraries implement done. See this excerpt from the Q documentation:
Much like then, but ... the resulting rejection reason is thrown as an exception in a future turn of the event loop.
Implementing done yourself
If you want to implement the approximate semantics of done as typically understood, then something like:
Promise.prototype.done = function (onFulfilled, onRejected) {
    this
        .then(onFulfilled, onRejected)
        .catch(function (e) {
            setTimeout(function () { throw e; });
        });
};
Setting up an error handler
If you want a chance to handle these errors yourself, you could set up an error handler:
Promise.onError = function (e) {
    console.log("The sky is falling", e);
    throw e;
};
Then invoke the handler on the next tick:
Promise.prototype.done = function (onFulfilled, onRejected) {
    this
        .then(onFulfilled, onRejected)
        .catch(function (e) {
            setTimeout(Promise.onError || function () { throw e; }, 1, e);
        });
};
The current position of TC39 is that this issue can and should be solved natively in browser engines with developer tools. That's why they're opposed to providing done within the native API.
It's indeed a controversial decision; see the following links for discussions on the matter:
https://github.com/domenic/promises-unwrapping/issues/19
http://mozilla.6506.n7.nabble.com/Where-d-Promise-done-go-td281461.html
https://github.com/promises-aplus/promises-spec/issues/43
https://github.com/slightlyoff/Promises/issues/33
Using the native (ES6) Promise, should I reject with an Error:
Promise.reject(new Error('Something went wrong'));
Or should I just reject with a string:
Promise.reject('Something went wrong');
And what is the difference in browser behaviour?
Yes, it most definitely should be an Error. A string is not an error; when you have errors, it usually means something went wrong, which means you'd really appreciate a good stack trace. No Error, no stack trace.
Just like with try/catch, when you add a .catch to a rejection you want to be able to log the stack trace; rejecting with strings ruins that for you.
I'm on mobile so this answer is rather short, but I really can't emphasize enough how important this is. In large (10K+ LoC) apps, stack traces in rejections have really made the difference between easy remote bug hunting and a long night in the office.
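A quick sketch of the difference:
// Rejecting with an Error preserves a stack trace; rejecting with a string does not.
Promise.reject(new Error('Something went wrong'))
    .catch(err => console.error(err.stack)); // full stack trace

Promise.reject('Something went wrong')
    .catch(err => console.error(err.stack)); // undefined - nothing to trace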
I recommend using only Error objects (not strings) as rejection reasons.
Justification
Other parts of your code already produce Error objects as rejection reasons.
If some code fails, the exception carries an Error object. Likewise, if you call an external library that does not support promises, it will throw an Error object when something fails.
If one of the errors mentioned above occurs inside the promise, it will end up in your catch handler as an Error object.
Therefore, if you use a string as the rejection reason, you have to expect that the catch can receive either your string (from your own code) or an Error (when some general error occurs), so you will have to write ugly code like (err.message || err) everywhere you handle the error.
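A short sketch of that mixed-reason mess (somePromise and log are hypothetical names, just for illustration):
// The one handler has to cope with both shapes of rejection reason.
somePromise()
    .catch(function (err) {
        // err may be a string (our own reject) or an Error (thrown by a library)
        log(err.message || err);
    });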
Recently, I've been reading TJ's blog article "Farewell Node.js".
I don't quite understand the "Node fails" part. Here it is:
Error-handling in Go is superior in my opinion. Node is great in the sense that you have to think about every error, and decide what to do. Node fails however because:
you may get duplicate callbacks
you may not get a callback at all (lost in limbo)
you may get out-of-band errors
emitters may get multiple “error” events
missing “error” events sends everything to hell
often unsure what requires “error” handlers
“error” handlers are very verbose
callbacks suck
What specific problem is being referred to when the author writes "you may not get a callback at all (lost in limbo)"?
It means the error is lost in limbo: the failing operation did not "get a callback" to report it to, so the error is "swallowed", since there is no callback to handle it.
var foo = function (onSuccess, onFailure) {
    // ...
    // uh-oh, I failed
    var err = new Error("I failed");
    if (onFailure) {
        onFailure(err);
    }
    else {
        // well, that probably wasn't too important anyway...
    }
};
foo(function () { console.log("success!"); } /* no second argument... */);
Note that in synchronous coding (say, most Java) it's much harder for this to happen. Catch blocks are much better enforced, and if an exception escapes anyway, it goes to the uncaught-exception handler, which by default crashes the system. It's like this in Node too, except that in the paradigm above, where an exception isn't thrown, it's likely swallowed.
Strong community convention could solve it in my trivial example above, but convention cannot completely solve this in general. See e.g. the Q promise library, which supports a done method.
Q.fcall(promisedStep1)
    .then(promisedStep2)
    .then(promisedStep3)
    .then(promisedStep4)
    .then(function (value4) {
        // Do something with value4
    })
    .catch(function (error) {
        // Handle any error from all above steps
    })
    .done();
The done call there instructs the promise chain to throw any unhandled exceptions (say, if the catch block were missing, or if the catch block itself throws). But it is fully the responsibility of the programmer to call done, as it must be, since only the programmer knows when the chain is complete. If a programmer forgets to call done, the error will sit dangling in the promise chain. I have had real production bugs caused by this; I agree it's a serious problem.
I'll be honest that a lot of that block in the post doesn't make much sense to me. But I'm an experienced Node.js programmer and this is the only thing I can think that could mean.
I have a Node.js server that I want to be able to handle exceptions without crashing, and I've got code kind of like the below. What I want to know is: with all the event-driven awesomeness and callbacks and lambdas and all that, will my exceptions still be caught by my main entry point?
try {
    http.get(..., function (results) {
        // Might get an exception here
        results.on('data', function () {
            // Might also get an exception here
        });
        results.on('end', function () {
            // Might also get an exception here
        });
    });
} catch (e) {
    // Will the exceptions from the lambdas be caught here?
    console.log('Nicely caught error: (' + e.name + '): ' + e.message);
}
Thanks
It depends on the flow of control. Node.js puts emphasis on being asynchronous, and one of the main drawbacks of asynchronicity is that the code doesn't flow the way you might be used to in a synchronous language.
In a synchronous language, the caller is blocked while a function is waiting for some data. This makes the programmer's job fairly simple, because they can be guaranteed that when the function that's waiting for data returns, there will be data for the caller to consume.
It's the exact opposite in an asynchronous language, or with non-blocking I/O. In this case, the caller is blocked only for the duration of the function call itself; the function doesn't wait for data or I/O to complete before returning. This makes things slightly harder on the programmer, because when a function call returns there are no guarantees about whether data will be available. Hence, non-blocking I/O typically implies callback functions that get called when data is available to act on.
try/catch blocks work with the call stack. That is, when an exception is thrown, the runtime will unwind the call stack until it finds a catch block that surrounds the call that threw the exception. But since http.get is a non-blocking call, it exits immediately after registering some callbacks, and processing continues. The callbacks are called in a separate "thread", and therefore the calls aren't nested within the original try/catch block.
A diagram would really help explain things here but unfortunately I don't have one available to me.
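In lieu of a diagram, here's a stripped-down sketch of the same effect, using setTimeout instead of http.get:
try {
    setTimeout(function () {
        // By the time this runs, the try/catch below has long since exited.
        throw new Error('thrown later, from the event loop');
    }, 0);
} catch (e) {
    console.log('never reached:', e.message); // the stack that contained the catch is gone
}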
The error-handling style for the Node.js standard library is to call the same callback, but pass a non-null first argument representing the error. If you have exceptional conditions in your asynchronous code, keep them in that format.
A throw would climb up the caller chain, which normally isn't aware of what callbacks are doing (for example, the tcp layer doesn't care that its data is parsed as http). Throwable exceptions are a bad fit for asynchronous programming.
In your example code, none of the potential exceptions thrown from within the http.get callback will land in your catch block. The stack of the callback is built from node's event loop when data is available to read.
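A small sketch of that error-first convention (the readConfig helper name is just for illustration):
var fs = require('fs');

// Report failures through the callback's first argument instead of throwing.
function readConfig(path, callback) {
    fs.readFile(path, 'utf8', function (err, text) {
        if (err) return callback(err); // pass the error along, don't throw it
        callback(null, text);          // first argument is null on success
    });
}

readConfig('./config.json', function (err, text) {
    if (err) return console.error('failed:', err.message);
    console.log('loaded', text.length, 'characters');
});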
There is a way to catch uncaught exceptions in node:
process.on("uncaughtException", function (err) {
console.log("uncaught exception: " + err);
});
This will sort of work for your example, depending on where the exception is.
The trouble is, uncaught exceptions can unravel node's inner workings in surprising ways, so you really don't want to depend on this. The only reliable way to catch all possible exceptions in a way that you can deal with them is to put a try/catch around every entry point from the event loop.
This sounds pretty tedious, but it usually isn't that bad. In your example program you are using Node's API for HTTP requests, which is still a very low-level interface. For most things, you'd want to wrap up this exception-catching functionality once and use it as a library.
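As a rough sketch of wrapping it up once (the guarded and handleChunk names are hypothetical; results is the emitter from the question's code):
// Wrap each event-loop entry point so exceptions are caught at the boundary.
function guarded(handler) {
    return function () {
        try {
            handler.apply(this, arguments);
        } catch (e) {
            console.error('handler failed:', e); // handle or log instead of crashing
        }
    };
}

results.on('data', guarded(function (chunk) {
    // this might throw; the guard catches it at the event-loop boundary
    handleChunk(chunk);
}));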