I just started using Promises in JavaScript using the Q library. I am running into a race condition and am wondering what would be the best way to resolve it.
The problem is that Q always calls the callbacks using process.nextTick() (also mentioned in the Promises/A+ spec) which means that I might miss some state changes in the object in the time between the resolution of the promise and the time the callbacks are called. My concrete problem is that I have an incoming connection and am missing the first messages. Messages are distributed using EventEmitter.
The code looks something like this. It uses the palava-client (but the problem should be universal) and is written in CoffeeScript:
defer = q.defer()
session.on 'peer_joined', (peer) ->
  defer.resolve(peer)
return defer.promise
And somewhere else
peer_promise.then (peer) ->
peer.on 'message', (msg) ->
console.log "Message received:", msg
The first messages are sometimes lost because they were emitted before the promise handlers ever ran. The problem does not occur with plain EventEmitter callbacks, because those are invoked synchronously, so the JavaScript thread cannot process incoming messages in between.
The following code never misses messages:
session.on 'peer_joined', (peer) ->
  peer.on 'message', (msg) ->
    console.log "Message received:", msg
Do you see any way I can solve the problem using promises? I really would like to keep using them in my abstraction layer to ensure that only one peer is accepted. Is there any other way to avoid such race conditions?
As someone who usually promotes using promises, my suggestion is:
Do not use promises here
Promises represent one-time events. They are an abstraction over values; once a promise changes its state, it can no longer change. A promise starts off as pending and settles exactly once, as either fulfilled or rejected.
You are facing a scenario where you have many users, each user joins and needs to attach event handlers, and users might 'go away'. Your scenario simply doesn't match the linear, one-shot flow promises excel at. Promises are useful for a certain class of problems; they are not for every concurrency problem. Using an event emitter here is perfectly appropriate.
Your case (a user joining) does not really represent a one-time operation whose result a promise can proxy. The code that "doesn't miss messages" is indeed the more correct approach.
If you still choose to use promises here
There are a few things you can do:
You can use Q's progression events and add a progress handler in the creation phase. Note that Kris (Q's author) has called progression broken, and it is being removed in the next version of Q. I recommend against it.
You can wrap the message callback so that messages are buffered until a handler is attached: accumulate messages fired before a handler exists (in an array), and then replay them all when the message handler is added (after you resolve, in a .then on the deferred you return).
Related
What's expected to happen with running Javascript promises when my system sleeps and then get back to the resume state?
Let's suppose I have a JavaScript application with some promises doing their asynchronous work (such as requesting data from a remote endpoint). I put my OS into sleep mode and go away. After some time, I come back and resume the system. Would those promises continue running after that? Can I assume that either the "then" or the "catch" handler will be called for each of them, or is it possible to get some zombie promises that will never settle?
It would be great if the answer came with some source to back it up =)
Yes, of course you can have promises that never resolve. Here is a promise that never resolves:
const promiseThatNeverResolve = new Promise((resolve, reject) => {})
Those are implementation errors, though; in general, asynchronous operations use some timeout to force a decision about resolving or rejecting the promise.
In any case, I think it is worth testing the scenario you mention, observing what happens, and reasoning about it.
If you execute the code below, put your laptop to sleep for less than 50 seconds, and then wake it, you will see the promise resolve correctly, showing "finished OK". The laptop saves the current state of memory before going to sleep and restores that same state when it wakes. This means the promise was saved in the pending state and recovered in the pending state; the same holds for the Node event loop, processes, and so on.
But it works even if you sleep for more than 60 seconds. The laptop wakes with the same event loop and stack state it had before going to sleep, like a time machine that freezes time and resumes as if none had passed, so the resolve function is still scheduled to run and the promise is still pending. Then, due to the implementation of setTimeout, the resolve function is called at least once, no matter how much time has passed. If the implementation chose to throw an exception instead, the promise would be rejected and you would see the log from the catch().
It is interesting to observe that after waking the laptop after more than 60 seconds, it took another 10 seconds to show the successful result, probably while restoring the internal memory state.
const myProm = new Promise((resolve, reject) => {
  setTimeout(resolve, 50000)
})
myProm.then(() => console.log('finished OK')).catch(() => console.log('failed'))
This has nothing to do with promises.
There are some promises doing their asynchronous work (as requesting data from a remote endpoint)
Promises do not do any work. Promises are only a notification mechanism (and a bit more); they represent an asynchronous result, not the work that produces it.
So it depends on the work you're doing, for whose result you got a promise, and on how that API (like your network request) handles being suspended by the operating system. Assuming there are no mistakes in the driver code or the libraries your JS runtime's host objects expose, you should get an error about the cancelled request. Even if that error code were ignored, any reasonably well-designed API for remote access should at some point run into a timeout and invoke a callback that will then settle your promise.
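The timeout idea can be sketched with a hypothetical `withTimeout` helper (not a standard API) that races the real work against a timer, so the returned promise always settles even if the underlying operation never calls back:

```javascript
// Race `promise` against a timer; whichever settles first wins.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timed out')), ms);
  });
  // Clear the timer either way so it doesn't keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

A promise that never settles on its own, like `new Promise(() => {})`, is thereby converted into one that rejects after `ms` milliseconds.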
If you are talking about the system being suspended between the time a promise was resolved and the time the then handlers get called: yes, the handlers were already scheduled, and the JS runtime is responsible for the normal continuation of the program.
Background
I am reading every inch of the docs and trying to learn about Folktale as much as I can.
Recently, I decided to try Future.
Do we need a Future?
Now, while I understand the difference between Task and Promise, and between Task and Future (support for cancellation), the difference between Future and Promise is not clear to me.
Why would I ever want to use a Future instead of a Promise? What benefits would it give me?
Well, you can say: "This way you actually have a monad, instead of a sorry excuse for a monad".
And that is a fine argument on its own, but... given that I always need to convert from Promise to something else (to Future), and that the Future API is pretty much the same, it is not clear to me, as someone new, why I should care about Future at all.
Code sample
Let's assume I have this function, where request is a function that makes a request and returns some results, and extractRequestInfo is a function that extracts data from the response object.
If something fails, I catch the error and return an object with all the relevant data: the badId and the error.
const requestFruit = request => data =>
    request( data )
        .then( extractRequestInfo )
        .catch( error => ( { badId: prop( [ "Id" ], data ), error } ) );
Given that this is an HTTP request, I know I don't need a Task because there is no cancellation I can do here. So my options are Promise and Future.
Questions
How would I use Future in this sample?
Since this is something that can fail, should I use Result as well?
Quoting the response from the creator Quil:
Future solves the same problem Promise does, so there isn't much of a
conceptual difference between the two. The difference is more in how
they solve the problem.
Promises can either settle successfully or fail. In any transformation
you apply to a promise's value, errors thrown synchronously will be
implicitly caught and reject the promise as well. This is interesting
in async/await because you can handle these errors (synchronous and
asynchronous) in a similar way--you don't need to lift every
synchronous operation into a promise, because the runtime will do that
for you.
The downside of this is that it's very easy to catch errors that you
didn't intend to, and have your system run in an inconsistent state. I
don't think you can do much with static analysis here either.
Futures don't have that problem because nothing is lifted into a
future implicitly. If you want synchronous operations to use the
Future pipeline for handling errors, you have to put them there
explicitly. This gives you more control over error handling, and
uncaught errors will still crash the process as expected (avoiding
having your program run into inconsistent memory states for cases you
didn't predict), but it takes more effort to write programs this way.
Other than that, if you consider Tasks, Futures model the eventual
value of a Task with a success case, a failure case, and a
cancellation case. Promises only have a success case and a failure
case, so cancellation is modelled as a special failure value. This
changes the idioms for handling cancellations a bit. It's possible for
code using promises to handle failures without being aware of this
special cancellation value, which may be a problem since this value
may easily be lost during these transformations.
In codebases that mix promises and tasks, these problems are more
complicated because the implicit-lifting of errors that promises do is
not very compatible with the explicit-lifting of errors that
tasks/futures expect (this can lead to problems like this one: #163).
Finding these bugs becomes a lot harder than if you had only promises
or only tasks/futures. Not sure what's the best way to handle these
cases yet.
For the original discussion:
https://github.com/origamitower/folktale/issues/200
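The implicit lifting Quil describes can be demonstrated with plain promises: a programming bug inside a `.then` is swallowed into the rejection path instead of crashing the process (a deliberate, hypothetical example):

```javascript
// A typo (ReferenceError: usr is not defined) inside .then is
// implicitly caught and "handled" by the generic .catch, masking the bug.
const result = Promise.resolve({ id: 1 })
  .then(user => usr.id)       // intended: user.id — this throws instead
  .catch(() => 'fallback');   // the bug is silently converted to a fallback
```

With a Future, the synchronous `ReferenceError` would not be lifted into the failure channel unless you put it there explicitly, so the bug would surface as a crash instead of a quiet fallback.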
I'm having trouble controlling execution flow. This is a follow-on to node.js, bluebird, poor control of execution path and node.js table search fails with promises in use. Judging by console.log print-outs, my recursive routine works great, except that the first call to resolve() (a signal from the nth recursive call) gives the green light to follow-on code that shouldn't get it until the first call to the recursive routine calls resolve(). It turns out the first call to the recursive routine delivers the answer I want reported, but by the time it reports it, the follow-on code is no longer listening and is running blissfully along with an "undefined" answer. Bad.
My code is much too long to share here. I tried to write a small model of the problem, but haven't found the combination of factors that replicates the behavior.
Sound familiar? How do you keep proper control over Promises releasing follow-on code on time?
I thought maybe the first call to the routine could start an array passed into a Promise.all and later calls would add another entry to that array. I haven't tried it. Crazy?
Without seeing your actual code, we can't answer specifically.
Sound familiar? How do you keep proper control over Promises releasing
follow-on code on time?
The answer is always to not resolve the first promise in the chain until you're ready for things to execute and to structure your promise chain so that dependent things don't get executed until the things they are waiting on have been properly resolved. If something is executing too soon, then you're either calling something too soon or your promise structure is not correct. Without seeing your actual code, we cannot know for sure.
A common mistake is this:
someAsyncOperation().then(someOtherAsync()).then(...)
which should be:
someAsyncOperation().then(someOtherAsync).then(...)
where you should pass a reference to the next async function rather than calling it immediately and passing its return value.
I thought maybe the first call to the routine could start an array
passed into a Promise.all and later calls would add another entry to
that array. I haven't tried it. Crazy?
You cannot pass an array to Promise.all() and then add things to the array later - that is not a supported capability of Promise.all(). You can chain subsequent things onto the results of Promise.all() or do another Promise.all() that includes the promise from the previous Promise.all() and some more promises.
var someArrayOfPromises = [...];
var pAll = Promise.all(someArrayOfPromises);

var someMorePromises = [...];
someMorePromises.push(pAll);
Promise.all(someMorePromises).then(...)
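The chaining approach can be seen in a small runnable sketch (illustrative values only): the second Promise.all includes the promise from the first, so its result nests the first batch's results.

```javascript
// First batch of work.
const firstBatch = Promise.all([Promise.resolve(1), Promise.resolve(2)]);

// A later batch that also waits on the first batch's combined promise.
const combined = Promise.all([firstBatch, Promise.resolve(3)]);
```

`combined` resolves only after both the first batch and the additional promise have resolved, yielding the nested array `[[1, 2], 3]`.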
I'm running an Angular app, and when testing a click() in Protractor, I don't know when I should resolve the promise with a then().
I found this on Protractor API:
A promise that will be resolved when the click command has completed.
So, should I use click().then() in every click?
So, should I use click().then() in every click?
Definitely not.
It's not needed because Protractor/WebDriverJS has this mechanism called "Control Flow" which is basically a queue of promises that need to be resolved:
WebDriverJS maintains a queue of pending promises, called the control
flow, to keep execution organized.
and Protractor waits for Angular naturally and out-of-the-box:
You no longer need to add waits and sleeps to your test. Protractor
can automatically execute the next step in your test the moment the
webpage finishes pending tasks, so you don’t have to worry about
waiting for your test and webpage to sync.
Which leads to quite straightforward testing code:
var elementToBePresent = element(by.css(".anotherelementclass"));
expect(elementToBePresent.isPresent()).toBe(false);
element(by.css("#mybutton")).click();
expect(elementToBePresent.isPresent()).toBe(true);
Sometimes though, if you experience synchronization/timing issues, or your app under test is non-Angular, you may solve them by resolving the click() explicitly with then() and continuing inside the click callback:
expect(elementToBePresent.isPresent()).toBe(false);
element(by.css("#mybutton")).click().then(function () {
expect(elementToBePresent.isPresent()).toBe(true);
});
There are also Explicit Waits to the rescue in these cases, but it's not relevant here.
Yes, you should.
Maybe it's not necessary right now, but it might be in future versions.
So, if click() returns a promise, you should use it.
http://www.protractortest.org/#/api?view=webdriver.WebElement.prototype.click
It seems there is no way for jQuery to know when your app is done using a promise. Since memory is managed in JS, I presume the promise continues to exist until all references to it are gone. Specifically, it will exist until it is resolved AND the code that created or used it has finished (functions returned, etc.), at which point it will be garbage collected.
Can anyone verify my assumptions? Or add other thoughts to this?
Understanding the underlying mechanics has some important implications: memory leaks, potential caching opportunities (by persisting promises after they have been resolved), etc. My next step is to dive into the jQuery source, but I was hoping for some additional guidance before starting that.
If there are no references to a resolved promise, it will (eventually) be disposed. Otherwise, it will be kept in memory in case anyone wants to access its value.
Promises are no different here from any other object in this case.
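This can be seen with a plain native promise: as long as the promise is referenced, its value is retained, and handlers attached long after it settles still receive it.

```javascript
// A promise that settles immediately; its value (42) is kept alive
// for as long as `settled` itself is reachable.
const settled = Promise.resolve(42);
```

Attaching a `.then` well after resolution still delivers the retained value; only when `settled` becomes unreachable can the promise and its value be collected.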
Promises are only cleaned up in one case: when the deferred is done.
From the jQuery source:
.done( updateFunc( i, resolveContexts, resolveValues ) )
which eventually leads to:
deferred.resolveWith( contexts, values );
Note that resolveWith follows jQuery's convention of taking a tuple name (resolve, in this case) suffixed with "With" in order to issue a callback to deferred.resolve. It essentially calls the original callback with the same context as the deferred object.
Internally when a callback from a list is fired, jQuery will remove that from the list of callbacks held for that list.
Thus, the only way a promise is resolved is by being done. There is no timer that monitors it.
The promise will be attached either to the target, if one is passed to the jQuery constructor, or to a new jQuery instance. That object's lifetime is the lifetime of the list which holds these deferred callbacks.
As with any other garbage collection, this lifetime will be browser dependent (IE sometimes does interesting things).