So, I want my first-level catch to be the one that handles the error. Is there any way to propagate my error up to that first catch?
Reference code, not working (yet):
Promise = require('./framework/libraries/bluebird.js');
function promise() {
var promise = new Promise(function(resolve, reject) {
throw('Oh no!');
});
promise.catch(function(error) {
throw(error);
});
}
try {
promise();
}
// I WANT THIS CATCH TO CATCH THE ERROR THROWN IN THE PROMISE
catch(error) {
console.log('Caught!', error);
}
You cannot use try-catch statements to handle exceptions thrown asynchronously, as the function has "returned" before any exception is thrown. You should instead use the promise.then and promise.catch methods, which represent the asynchronous equivalent of the try-catch statement. (Or use the async/await syntax noted in Edo's answer.)
What you need to do is to return the promise, then chain another .catch to it:
function promise() {
var promise = new Promise(function(resolve, reject) {
throw('Oh no!');
});
return promise.catch(function(error) {
throw(error);
});
}
promise().catch(function(error) {
console.log('Caught!', error);
});
Promises are chainable, so if a promise rethrows an error, it will be delegated down to the next .catch.
By the way, you don't need to use parentheses around throw statements (throw a is the same as throw(a)).
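For illustration, a minimal sketch of that delegation down the chain (my own example, not part of the question's code):
Promise.reject(new Error('Oh no!'))
    .catch(function(error) {
        console.log('First catch:', error.message);
        throw error; // rethrowing keeps the chain rejected
    })
    .then(function() {
        console.log('This is skipped'); // never runs, the chain is still rejected
    })
    .catch(function(error) {
        console.log('Second catch:', error.message); // the rethrown error lands here
    });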
With the new async/await syntax you can achieve this. Please note that at the time of writing this is not supported by all browsers; you will probably need to transpile your code with Babel (or something similar).
// Because of the "async" keyword here, calling getSomeValue()
// will return a promise.
async function getSomeValue() {
if (somethingIsNotOk) {
throw new Error('uh oh');
} else {
return 'Yay!';
}
}
(async function() {
try {
// "await" will wait for the promise to resolve or reject
// if it rejects, an error will be thrown, which you can
// catch with a regular try/catch block
const someValue = await getSomeValue();
doSomethingWith(someValue);
} catch (error) {
console.error(error);
}
})();
No! That's completely impossible, as promises are inherently asynchronous. The try-catch clause will have finished execution when the exception is thrown (and time travel still will not have been invented).
Instead, return promises from all your functions, and hook an error handler on them.
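A minimal sketch of that structure (fetchData and transform are hypothetical placeholders, not functions from the question):
function doWork() {
    // return the promise so the caller can attach the error handler
    return fetchData().then(function(data) {
        return transform(data);
    });
}

doWork().catch(function(error) {
    console.log('Caught!', error);
});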
I often find the need to ensure a Promise is returned, and almost as often the need to handle a local error and then optionally rethrow it.
function doSomeWork() {
return Promise.try(function() {
return request.get(url).then(function(response) {
// ... do some specific work
});
}).catch(function(err) {
console.log("Some specific work failed", err);
throw err; // IMPORTANT! throw unless you intend to suppress the error
});
}
The benefit of this technique (Promise.try/.catch) is that you start/ensure a Promise chain without the resolve/reject requirement, which can easily be missed and create a debugging nightmare.
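If you are not using Bluebird, a common way (to the best of my knowledge) to get the same guarantee with native promises is to start the chain with Promise.resolve(), so a synchronous throw becomes a rejection instead of escaping the function:
function doSomeWork() {
    return Promise.resolve().then(function() {
        return request.get(url).then(function(response) {
            // ... do some specific work
        });
    }).catch(function(err) {
        console.log("Some specific work failed", err);
        throw err; // rethrow unless you intend to suppress the error
    });
}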
To expand on Edo's answer: if you want to catch the errors of an async function that you don't want to wait for immediately, you can add an await statement at the end of your function.
(async function() {
try {
const asyncResult = someAsyncAction();
// "await" will wait for the promise to resolve or reject
// if it rejects, an error will be thrown, which you can
// catch with a regular try/catch block
const someValue = await getSomeValue();
doSomethingWith(someValue);
await asyncResult;
} catch (error) {
console.error(error);
}
})();
If someAsyncAction fails, the catch block will handle it.
Related
It's easy to forget to use try/catch in an async function or otherwise fail to catch all possible errors when working with promises. This can cause an endless "await" if the Promise is never resolved nor rejected.
Is there any way (such as via a proxy or altering the promise constructor) to cause an async function or other promises to be rejected if there is an uncaught error? The following shows a generalized case. I'm looking for some way to get past the "await" (as in "p" should be rejected when the error is thrown) without fixing "badPromise".
async function badPromise() {
const p = new Promise((res) => {
delayTimer = setTimeout(() => {
console.log('running timeout code...');
if (1 > 0) throw new Error('This is NOT caught!'); // prevents the promise from ever resolving, but may log an error message to the console
res();
}, 1000);
});
return p;
}
(async () => {
try {
console.log('start async');
await badPromise();
console.log('Made it to the end'); // never get here
} catch (e) {
console.error('Caught the problem...', e); // never get here
}
})();
Promises already reject in the case of an uncaught synchronous error:
in a Promise constructor, for synchronous (thrown) errors
If an error is thrown in the executor, the promise is rejected.
in onFulfilled and onRejected functions, such as in then and catch
If a handler function: [...] throws an error, the promise returned by then gets rejected with the thrown error as its value.
in async functions
Return Value: A Promise which will be resolved with the value returned by the async function, or rejected with an exception thrown from, or uncaught within, the async function.
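A small sketch of my own to make those cases concrete, including the asynchronous case from the question that does not reject:
// 1. Synchronous throw in the executor: the promise rejects.
new Promise(function() {
    throw new Error('sync, in executor');
}).catch(function(e) { console.log('caught:', e.message); });

// 2. Synchronous throw in a .then handler: the returned promise rejects.
Promise.resolve().then(function() {
    throw new Error('sync, in handler');
}).catch(function(e) { console.log('caught:', e.message); });

// 3. Asynchronous throw inside setTimeout: the executor has already
// returned successfully, so this promise never rejects (or resolves).
new Promise(function(resolve) {
    setTimeout(function() {
        throw new Error('async, in timer');
    }, 0);
}).catch(function(e) { console.log('never reached'); });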
Your problem here isn't that Promise doesn't handle uncaught errors, it's fundamentally because your error is asynchronous: As far as the Promise is concerned, its executor function is a successful little function that calls setTimeout. By the time your setTimeout handler runs and fails, it does so with its own stack that is unrelated to the Promise object or its function; nothing related to badPromise or p exists within your setTimeout handler other than the res reference the handler includes via closure. As in the question "Handle error from setTimeout", the techniques for catching errors in setTimeout handlers all involved editing or wrapping the handler, and per the HTML spec for timers step 9.2 there is no opportunity to catch or interject an error case for the invocation of the function passed into setTimeout.
Other than editing badPromise, there's almost nothing you can do.
Alternatives:
Modify/overwrite both the Promise constructor and the setTimeout method in sequence, wrapping the Promise constructor's method to save the resolve/reject parameters, and then wrapping the global setTimeout method so as to wrap the setTimeout handler with a try/catch that invokes the newly-saved reject parameter. Due to the fragility of changing both global services, I strongly advise against any solutions like this.
Create a wrapper higher-order function (i.e. function that returns a function) that accepts a rejection callback and wraps the setTimeout call. This is technically an edit to badPromise, but it does encapsulate what's changing. It'd look something like this:
function rejectOnError(rej, func) {
return (...args) => {
try {
return func(...args);
} catch (e) {
rej(e);
}
};
}
async function badPromise() {
const p = new Promise((res, rej) => { // save reject
delayTimer = setTimeout(rejectOnError(rej, () => { // to use here
console.log('running timeout code...');
if (1 > 0) throw new Error('Now this is caught');
res();
}), 1000);
});
return p;
}
badPromise().catch(x => console.error(`outer: ${x}`));
console.log('bad promise initiated');
The underlying issue is that timer callbacks run as top level code and the only way to detect errors in them is to listen for global error events. Here's an example of using a global handler to detect such errors, but it has issues which I'll discuss below the code:
"use strict";
let delayTimer; // declare variable
async function badPromise() {
const p = new Promise((res) => {
let delayTimer = setTimeout(() => { // declare variable!!!
console.log('running timeout code...');
if (1 > 0) throw new Error('This is NOT caught!'); // prevents the promise from ever resolving, but may log an error message to the console
res();
}, 1000);
});
return p;
}
(async () => {
let onerror;
let errorArgs = null;
let pError = new Promise( (res, rej)=> {
onerror = (...args) => rej( args); // error handler rejects pError
window.addEventListener("error", onerror);
})
.catch( args => errorArgs = args); // Catch handler resolves with error args
// race between badPromise and global error
await Promise.race( [badPromise(), pError] );
window.removeEventListener("error", onerror); // remove global error handler
console.log("Made it here");
if( errorArgs) {
console.log(" but a global error occurred, arguments array: ", errorArgs);
}
})();
Issues
The code was written without caring what is passed to a global error handler added using addEventListener - you may get different arguments if you use window.onerror = errorHandler.
The promise race can be won by any error event that bubbles up to window in the example. It need not have been generated in the badPromise() call.
If multiple calls to badPromise are active concurrently, trapping global errors won't tell you which badPromise call errored.
Hence badPromise really is bad and needs to be handled with kid gloves. If you seriously cannot fix it you may need to ensure that you only ever have one call to it outstanding, and you are doing nothing else that might generate a global error at the same time. Whether this is possible in your case is not something I can comment on.
Alternative
A more generic alternative may be to start a timer before calling badPromise and use it to time out the pending state of the returned promise:
let timer;
let timeAllowed = 5000;
let timedOut = false;
let timeout = new Promise( res => timer = setTimeout(res, timeAllowed))
    .then( () => timedOut = true);
await Promise.race( [badPromise(), timeout]);
clearTimeout( timer);
console.log( "timed out: %s", timedOut);
There may be a way to do this, but in your case I think you really want to use the reject function inside your Promise instead of throw. That's really what reject is for.
async function badPromise() {
const p = new Promise((res, reject) => {
delayTimer = setTimeout(() => {
console.log('running timeout code...');
if (1 > 0) {
reject('This is NOT caught!');
return;
}
res();
}, 1000);
});
return p;
}
(async () => {
try {
console.log('start async');
await badPromise();
console.log('Made it to the end'); // never gets here
} catch (e) {
console.error('Caught the problem...', e); // should work now
}
})();
Maybe not an answer to what you want, but you could use a pattern like this for setTimeout:
function testErrors() {
new Promise((resolve, reject) => {
setTimeout(() => resolve(), 1000);
}).then(() => {
throw Error("other bad error!");
}).catch(err => {
console.log("Catched", err);
})
}
I'm not sure if "fail-fast" is the best way to describe this methodology, but ever since I started to learn about programming I have always been taught to design functions like this:
function doSomething() {
... // do error-prone work here
if (!allGood) {
// Report error, cleanup and return immediately. Makes for cleaner,
// clearer code where error-handling is easily seen at the top
...
return;
}
// Success! Continue on with (potentially long and ugly) code that may distract from the error
}
As such, I'm trying to call a promisified function like so:
doSomethingAsync(param).catch(err => {
console.error(err);
}).then(() => {
// Continue on with the rest of the code
});
But this gives me behaviour akin to the finally block of a classic try...catch...finally statement, i.e. the then() block will always be called, even after an error. Sometimes this is useful, but I rarely find myself needing such functionality (or try...catch statements in general, for that matter).
So in the interest of failing as quickly and clearly as possible, is there a way that I can make the second example above work in the way that I expect (i.e. then() is only executed if catch() wasn't, yet a single catch() will still catch all errors raised by doSomethingAsync())?
If you use async and await instead of .then, you can effectively wait for the Promise to resolve (or reject), and if it rejects, return early:
(async () => {
try {
await doSomethingAsync(param);
} catch(err) {
console.error(err);
return;
}
// Continue on with the rest of the code
})();
const doSomethingAsync = () => new Promise((resolve, reject) => Math.random() < 0.5 ? resolve() : reject('bad'));
(async () => {
try {
await doSomethingAsync();
} catch(err) {
console.error(err);
return;
}
console.log('continuing');
})();
That's what I'd prefer. You can also use the .then(onResolve, onReject) technique, though it's usually not recommended:
function onReject(err) {
console.log(err);
};
doSomethingAsync(param).then(onResolve, onReject);
function onResolve() {
// Continue on with the rest of the code
}
const doSomethingAsync = () => new Promise((resolve, reject) => Math.random() < 0.5 ? resolve() : reject('bad'));
function onReject(err) {
console.log(err);
};
doSomethingAsync().then(onResolve, onReject);
function onResolve() {
console.log('continuing');
}
This will have onReject only handle errors thrown by doSomethingAsync(param). If your onResolve can throw inside its body as well, then you'll have to chain another .catch onto it (which will start to look a bit messy - it's usually nicer to catch errors in just one place).
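A rough sketch of what that extra .catch would look like:
doSomethingAsync(param)
    .then(onResolve, onReject) // onReject only sees errors from doSomethingAsync
    .catch(function(err) {
        // errors thrown inside onResolve end up here instead
        console.error('onResolve failed:', err);
    });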
I found the term "The Ghost Promise" here, which looks like my case.
I have the code like this:
return Q.Promise(function(resolve, reject) {
firstFunctionThatReturnPromise()
.then(function(firstResult) {
return _check(firstResult) ? resolve(firstResult) : secondFunctionThatReturnPromise();
})
.then(function(secondResult) {
console.log(secondResult);
return thirdFunctionThatReturnPromise(secondResult);
})
.then(function(thirdResult) {
resolve(thirdResult);
})
.catch(function(e) {
reject(e)
});
});
The problem is, even though the _check returns true, it still proceeds to the console.log command (which results in undefined).
In case the _check returns false, things work as expected.
So my question is:
Is the behavior described above normal?
Is there a more elegant way to handle this case?
Update 1: Many questioned why I use Q.Promise instead of returning the result directly. It's because this is a generic function, shared by other functions.
// Usage in other functions
genericFunction()
.then(function(finalResult) {
doSomething(finalResult);
})
.catch(function(err) {
handleError(err);
});
First off, there's no reason to wrap a new promise around any of this. Your operations already return promises, so it is an error-prone anti-pattern to rewrap them in a new promise.
Second off, as others have said, a .then() handler has these choices:
It can return a result which will be passed to the next .then() handler. Not returning anything passes undefined to the next .then() handler.
It can return a promise whose resolved value will be passed to the next .then() handler or rejected value will be passed to the next reject handler.
It can throw which will reject the current promise.
There is no way from a .then() handler to tell a promise chain to conditionally skip some following .then() handlers other than rejecting.
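For illustration only (somePromise, fetchAgain and the value properties are hypothetical placeholders), the three choices look like this:
somePromise().then(function(value) {
    if (value.ok) {
        return value;                  // 1. plain value: the next .then receives it
    }
    if (value.shouldRetry) {
        return fetchAgain();           // 2. promise: the next .then receives its resolved value
    }
    throw new Error('giving up');      // 3. throw: rejects, skips ahead to the reject handler
}).then(function(result) {
    console.log(result);
}).catch(function(err) {
    console.error(err);
});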
So, if you want to branch your promise based on some condition logic, then you need to actually nest your .then() handlers according to your branching logic:
a().then(function(result1) {
if (result1) {
return result1;
} else {
// b() is only executed here in this conditional
return b().then(...);
}
}).then(function(result2) {
// as long as no rejection, this is executed for both branches of the above conditional
// result2 will either be result1 or the resolved value of b()
// depending upon your conditional
})
So, when you want to branch, you make a new nested chain that lets you control what happens based on the conditional branching.
Using your pseudo-code, it would look something like this:
firstFunctionThatReturnPromise().then(function (firstResult) {
if (_check(firstResult)) {
return firstResult;
} else {
return secondFunctionThatReturnPromise().then(function (secondResult) {
console.log(secondResult);
return thirdFunctionThatReturnPromise(secondResult);
})
}
}).then(function (finalResult) {
console.log(finalResult);
return finalResult;
}).catch(function (err) {
console.log(err);
throw err;
});
Even if this is inside a genericFunction, you can still just return the promise you already have:
function genericFunction() {
return firstFunctionThatReturnPromise().then(function (firstResult) {
if (_check(firstResult)) {
return firstResult;
} else {
return secondFunctionThatReturnPromise().then(function (secondResult) {
console.log(secondResult);
return thirdFunctionThatReturnPromise(secondResult);
})
}
}).then(function (finalResult) {
console.log(finalResult);
return finalResult;
}).catch(function (err) {
console.log(err);
throw err;
});
}
// usage
genericFunction().then(...).catch(...)
The behavior is expected. When you chain your .then() statements, you cannot break out of the chain early except by throwing an error.
Your top-level promise (the one returned by Q.Promise()) gets resolved after _check(); but you actually have an inner promise chain that continues to execute.
By specification, then() returns a promise: https://promisesaplus.com/#point-40
You can see for yourself in the source code of Q: https://github.com/kriskowal/q/blob/v1/q.js#L899
For your desired behavior, you'll actually need another nested promise chain.
return Q.Promise(function(resolve, reject) {
firstFunctionThatReturnPromise().then(function(firstResult) {
if (_check(firstResult)) {
resolve(firstResult);
} else {
return secondFunctionThatReturnPromise().then(function(secondResult) {
console.log(secondResult);
return thirdFunctionThatReturnPromise(secondResult);
});
}
});
});
I have never used Q, but whatever a .then() handler returns is transformed into a promise and passed to the next .then(). Here, your first .then() doesn't return anything, so it returns undefined. That undefined is wrapped in a new promise and passed to the next handler, which is why you get secondResult == undefined.
You can see it in action in the following CodePen : http://codepen.io/JesmoDrazik/pen/xOaXKE
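A quick sketch of that point (shown with native promises; Q's then behaves the same way here):
Promise.resolve('first')
    .then(function(value) {
        console.log(value); // "first"
        // no return statement, so the handler returns undefined
    })
    .then(function(value) {
        console.log(value); // undefined, just like secondResult in the question
    });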
I'm a bit confused understanding Q promise error handling. Let's say I have the following functions (for demonstration only):
function first() {
console.log('first');
var done = Q.defer();
done.resolve('first');
return done.promise;
}
function second() {
console.log('second');
var done = Q.defer();
done.resolve('second');
return done.promise;
}
function third() {
console.log('third');
var done = Q.defer();
done.resolve('third');
return done.promise;
}
function fourth() {
console.log('fourth');
var done = Q.defer();
done.resolve('fourth');
return done.promise;
}
function doWork() {
return first().then(function() {
return second();
}).then(function() {
return third()
}).then(function() {
return fourth();
});
}
doWork().catch(function(err) {
console.log(err);
});
Everything went fine.
Now, if the second, third or fourth function produces an error (thrown by an async call, for example), I can catch it gracefully.
For example, if in the second, third or fourth function I add:
throw new Error('async error');
The error is caught. Perfect!
But what confuses me is that if the error is thrown in the first function, the error is not caught, and that breaks my execution.
Could someone please tell me why, or what I am doing wrong?
Thanks a lot!
Only exceptions in then callbacks are caught by promise implementations. If you throw in first, the exception will bubble up and can only be caught by a try-catch statement.
That's exactly why asynchronous (promise-returning) functions should never throw. Instead, reject the promise you're returning (done.reject(…) or return Q.reject(…)). If you don't trust your function, you can use Promise.resolve().then(first).… to start your chain.
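A sketch of that defensive start to the chain, using the question's function names:
// If first() throws synchronously, the throw now happens inside a .then
// callback, so it becomes a rejection and reaches the final .catch.
Promise.resolve()
    .then(first)
    .then(second)
    .then(third)
    .then(fourth)
    .catch(function(err) {
        console.log(err);
    });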
Wrap the logic that can break in a try block and reject the promise with the error in the catch block.
var def = q.defer();
try {
// sync or async logic that can break
}
catch (ex) {
def.reject(ex);
}
return def.promise;
I code JavaScript quite a bit, and although I think I understand the workings of promises, I'm not sure I fully understand the advantages that promises bring to the JS world. Consider the code below: simple asynchronous calls with callbacks containing further calls, and so on.
(function doWorkOldSchool() {
setTimeout(function() {
// once done resolve promise
console.log("work done");
setTimeout(function goHome() {
// once done resolve promise
console.log("got home");
try {
setTimeout(function cookDinner() {
// this exception will not be caught
throw "No ingredients for dinner!";
console.log("dinner cooked");
setTimeout(function goToSleep() {
// once done resolve promise
console.log("go to sleep");
}, 2000);
}, 2000);
} catch (ex) {
console.log(ex);
}
}, 2000);
}, 2000);
}());
One problem I see with this:
Exceptions thrown inside the callbacks are useless. Is it correct to say that by the time a throw happens, the enclosing try block has already gone out of scope, so the exception cannot be caught there and instead bubbles all the way to the top? How can this sort of exception be dealt with?
The second problem I see is that this nesting can get really deep, and even though you can keep the callback function code outside of the setTimeout calls, it can become a mess.
So first, could someone please clarify whether there are any other obvious problems or advantages of this sort of coding?
Now, below I have prepared a program that does the same thing, really, but this time using promises:
function doWork() {
return new Promise(function(res, rej) {
// do your asynchronous stuff
setTimeout(function() {
// once done resolve promise
res("work done");
}, 2000);
});
}
function goHome(succ) {
console.log(succ);
return new Promise(function(res, rej) {
// do your asynchronous stuff
setTimeout(function() {
// once done resolve promise
res("got home");
}, 2000);
});
}
function cookDinner(succ) {
console.log(succ);
//if exception thrown here it will be caught by chained err handler
throw "No ingredients for dinner Exception!";
return new Promise(function(res, rej) {
// do your asynchronous stuff
setTimeout(function() {
// something went wrong so instead of using throw we reject the promise
rej("No ingredients for dinner!");
// once done resolve promise
}, 2000);
});
}
function goToSleep(succ) {
console.log(succ);
return new Promise(function(res, rej) {
// do your asynchronous stuff
setTimeout(function() {
// once done resolve promise
res("zzz... zz..");
}, 2000);
});
}
doWork()
.then(goHome)
.then(cookDinner)
.then(goToSleep)
.then(function(succ) {
console.log(succ);
}, function(err) {
console.log(err);
});
Compared to the previous solution, I see no obvious problems with this approach, apart from the fact that you obviously must have an understanding of promises to code/maintain this thing. The advantages, however, would be:
exceptions thrown INSIDE OF handlers will be caught by the err handler that is chained somewhere further down.
the rejected Promises will be caught by chained err handler
the code is much cleaner
Now, is my understanding correct or are there any other advantages/disadvantages of each approach?
Your code is employing some anti-patterns: you should never create promises (e.g. by new Promise, "deferreds" and so on) in application code. You must also never mix callbacks with promises, because then you lose exception bubbling (the point of promises) and make your code super verbose.
You can use a library or implement a delay yourself:
function delay(ms, val) {
return new Promise(function(res){setTimeout(res.bind(null, val), ms);});
}
Then:
function doWork() {
return delay(2000, "work done");
}
doWork()
.then(function(succ) {
console.log(succ);
return delay(2000, "got home");
})
.then(function(succ) {
console.log(succ);
// Never throw strings, yet another anti-pattern
throw new Error("No ingredients for dinner Exception!");
// Pointless to add code after throw
})
.then(function(succ) {
return delay(2000, "zzz.. zz..");
})
// Don't use the second argument of .then, use .catch
.catch(function(err) {
console.log("error: ", err);
});
Why should you use .catch instead of the second argument? Well, compare to how you would write it synchronously:
try {
doWork();
console.log("work done");
sleep(2000);
console.log("got home");
sleep(2000);
throw new Error("No ingredients for dinner Exception!";)
sleep(2000);
console.log("zzz.. zz..");
}
catch(e) {
console.log("error: ", err);
}
Get it?
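To make the difference concrete, here is a small sketch of my own: the second argument of .then does not see an error thrown by the fulfillment handler in the same .then call, while a chained .catch does.
// Note: this first chain is left unhandled on purpose, so the error
// surfaces as an unhandled rejection rather than in the reject handler.
Promise.resolve()
    .then(function() {
        throw new Error("boom");
    }, function(err) {
        console.log("never called for the throw above");
    });

Promise.resolve()
    .then(function() {
        throw new Error("boom");
    })
    .catch(function(err) {
        console.log("caught:", err.message); // called
    });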
Promises can only return/decide on a value once; once a value is chosen, it cannot be changed. So even if a user clicks on the div more than once, the Promise still only resolves once.
Ex:
p = new Promise(function (res, rej) {
var b = document.getElementById('helloWorld');
b.onclick = function() {
res(prompt('value'));
};
});
p.then(function(val) { console.log(val); });
There will always be only one value in the log, no matter how many times the div is clicked.
This is useful for GUIs and controls for games/applications.
You can also have two event listeners inside the Promise, which is useful for loading images and files. Promise handlers work even when they are attached after the promise has settled: you can attach the success handler right away and create the failure handler later. A normal event handler would simply miss an error that fired before it was attached, but a promise calls the handler whenever it is finally created. This lets you focus on how to react to an event rather than worrying about the timing of things. Very useful for loading things.
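A rough sketch of the image-loading case described above ('photo.png' is a placeholder URL):
function loadImage(url) {
    return new Promise(function(resolve, reject) {
        var img = new Image();
        img.onload = function() { resolve(img); };                                 // success listener
        img.onerror = function() { reject(new Error('Failed to load ' + url)); };  // failure listener
        img.src = url;
    });
}

// Handlers can be attached at any time, even long after the image finished loading.
loadImage('photo.png').then(function(img) {
    document.body.appendChild(img);
}).catch(function(err) {
    console.error(err);
});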