What happens to a jQuery promise after it is done being used?

It seems that there is no way for jQuery to know when your app is done using a promise. Since memory is managed in JavaScript, I presume that the promise continues to exist until all references to it are gone. Specifically, it will exist until it is resolved AND the code that created or used it has finished (functions have returned, etc.), at which point it will be garbage collected.
Can anyone verify my assumptions? Or add other thoughts to this?
Understanding the underlying mechanics has some important implications: memory leaks, potential caching opportunities (by persisting promises after they have been resolved), and so on. My next step is to dive into the jQuery source, but I was hoping for some additional guidance before starting that.

If there are no references to a resolved promise, it will (eventually) be disposed of. Otherwise, it will be kept in memory in case anything still wants to access its value.
Promises are no different from any other object in this respect.
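For the caching idea raised in the question, a minimal sketch (the cache and URL here are illustrative, not from the original post): deliberately keeping a reference to a settled promise lets later callers reuse its value without repeating the request.

// Hypothetical promise cache: the `cache` reference keeps each settled
// promise (and its resolved value) alive, so repeat calls skip the AJAX.
var cache = {};
function getUser(id) {
  if (!cache[id]) {
    cache[id] = $.getJSON('/users/' + id); // jqXHR implements the promise interface
  }
  return cache[id]; // handlers attached after resolution fire immediately
}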

Promises are only removed in one case: when they are done (resolved).
From the jQuery source:
.done( updateFunc( i, resolveContexts, resolveValues ) )
which eventually leads to:
deferred.resolveWith( contexts, values );
Note that resolveWith follows a jQuery naming convention: each of what they call tuples (resolve, in this case) has a "With" variant that fires the same callbacks with an explicitly supplied context; plain deferred.resolve() just calls resolveWith using the deferred object itself as the context.
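To illustrate, a small sketch (the context object is made up): deferred.resolveWith fires the .done() callbacks with the supplied context as this, while plain deferred.resolve() uses the deferred itself.

var deferred = $.Deferred();
deferred.done(function (value) {
  console.log(this.label, value); // `this` is the context passed to resolveWith
});
deferred.resolveWith({ label: 'ctx' }, ['hello']); // logs "ctx hello"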
Internally when a callback from a list is fired, jQuery will remove that from the list of callbacks held for that list.
Thus, the only way a promise is resolved is by being done; there is no timer monitoring it.
The promise will be attached either to the target, if one is passed into the jQuery constructor, or to a new jQuery instance. That object's lifetime is the lifetime of the list which holds these deferred callback lists.
As with any other garbage collection, this lifetime will be browser dependent (IE sometimes does interesting things).

Related

Are there any potential memory leaks when using deferreds?

I get a jQuery.Deferred somewhere in my code, and I add several callbacks to it which are member methods of short-lived objects. I was wondering if there is any potential for memory leaks in this situation, similar to .NET event handlers.
I was checking the code of jQuery, but haven't seen any part where callbacks are cleared. I didn't even find where the lifecycle of a deferred object should end.
Could anyone please shed some light on this topic?
EDIT
As I'm thinking about it, it narrows down to this question: in JavaScript, will holding a reference to a member function of an object (not its prototype) prevent the object from being GC'd? Because jQuery seems to hold these function references in the callbacks of the deferred object.
I haven't seen any part where callbacks are cleared.
The callbacks are cleared when the promise is settled (fulfilled or rejected).
I didn't even find where the lifecycle of a deferred object should end.
The lifecycle of a promise ends when nothing holds a reference to it any more.
There are generally two things that hold a reference to it: the resolver (e.g. a timeout, an ajax request, etc.) that eventually will settle the promise, and entities that store the promise because they want to use it (i.e. its result) later. The promise object in turn holds a reference to all its callbacks (until settled), and to the result value (once settled).
A leak can occur if the promise is never resolved, has callbacks attached to it, and is prevented from being garbage-collected by some references. That's very rare though.
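A hedged sketch of that rare case (the registry and widget here are hypothetical): a deferred that is never settled, has callbacks attached, and is itself pinned by a long-lived reference.

var registry = []; // long-lived, hypothetical
function watch(widget) {
  var deferred = $.Deferred();
  deferred.done(function () {
    widget.refresh(); // the closure keeps `widget` reachable
  });
  registry.push(deferred); // never settled, never removed: nothing here can be collected
}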
In JavaScript, will holding a reference to a member function of an object (not prototype) prevent the object from being GC'd?
No, in general it will not. There are no "members" in JavaScript, just plain standalone functions.
Though of course the function, when it is a closure, could hold a reference to the object and keep it from being collected.
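A small sketch of that distinction (Widget is made up): an unbound prototype method keeps no instance alive, while a bound or closed-over function does.

function Widget(name) { this.name = name; }
Widget.prototype.getName = function () { return this.name; };

var ref = Widget.prototype.getName; // plain function: holds no particular instance

var w = new Widget('a');
var bound = w.getName.bind(w); // `bound` references `w`,
// so `w` stays reachable for as long as `bound` does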
I'll answer my own question, because after some thinking it seems to be a simple one. JavaScript functions don't actually belong "tightly" to the objects where they're defined, unless they are manually bound with Function.prototype.bind for example, but that's another case.
So if functions live their own lives, holding a reference to one should not prevent collection of the object where it was originally defined.
Also, I have to note that none of this even matters if I don't hold a direct or closed-over reference to the deferred object itself, because then, whenever it has done its job (resolved/rejected), it will be collectible.
Please someone more experienced correct me if any assumption is wrong here.
From what I've seen (and some light reading: Do never resolved promises cause memory leak?), there is negligible impact from unresolved Promises -- or Deferreds -- unless you are:
Creating a large number: hundreds of instances of any object are a drag that requires special handling and design
Maintaining references to instances that prevent a GC run from cleaning up any out-of-scope items

Javascript, Node, Promises, and recursion

I'm having trouble controlling execution flow. This is a follow-on to node.js, bluebird, poor control of execution path and node.js table search fails with promises in use. Judging by console.log print-outs, my recursive routine works great, except that the first call to resolve() (a signal to the nth recursive call) gives the green light to follow-on code that shouldn't get that green light until the first call to the recursive routine calls resolve(). It turns out the first call to the recursive routine delivers the answer I want reported, but by the time it reports it, the follow-on code is no longer listening for it and is running blissfully along with an "undefined" answer. Bad.
My code is much too long to share here. I tried to write a small model of the problem, but haven't found the combination of factors to replicate the behavior.
Sound familiar? How do you keep proper control over Promises releasing follow-on code on time?
I thought maybe the first call to the routine could start an array passed into a Promise.all and later calls would add another entry to that array. I haven't tried it. Crazy?
Without seeing your actual code, we can't answer specifically.
Sound familiar? How do you keep proper control over Promises releasing follow-on code on time?
The answer is always to not resolve the first promise in the chain until you're ready for things to execute and to structure your promise chain so that dependent things don't get executed until the things they are waiting on have been properly resolved. If something is executing too soon, then you're either calling something too soon or your promise structure is not correct. Without seeing your actual code, we cannot know for sure.
A common mistake is this:
someAsyncOperation().then(someOtherAsync()).then(...)
which should be:
someAsyncOperation().then(someOtherAsync).then(...)
where you should pass a reference to the next async function rather than calling it immediately and passing its return value.
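In the spirit of that advice, a hedged sketch of promise-friendly recursion (fetchPage is a hypothetical async function returning a promise): each level returns the next level's promise instead of resolving a shared deferred early, so the outer .then() only runs once the whole recursion has settled.

function collectAll(page, acc) {
  return fetchPage(page).then(function (items) {
    if (items.length === 0) return acc;             // base case settles the chain
    return collectAll(page + 1, acc.concat(items)); // recurse by returning a promise
  });
}
collectAll(0, []).then(function (all) {
  console.log('collected', all.length, 'items');    // runs only after the deepest call
});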
I thought maybe the first call to the routine could start an array passed into a Promise.all and later calls would add another entry to that array. I haven't tried it. Crazy?
You cannot pass an array to Promise.all() and then add things to the array later - that is not a supported capability of Promise.all(). You can chain subsequent things onto the results of Promise.all() or do another Promise.all() that includes the promise from the previous Promise.all() and some more promises.
var someArrayOfPromises = [...];
var pAll = Promise.all(someArrayOfPromises);
var someMorePromises = [...];
someMorePromises.push(pAll);
Promise.all(someMorePromises).then(...);

JavaScript Promises and race conditions

I just started using Promises in JavaScript using the Q library. I am running into a race condition and am wondering what would be the best way to resolve it.
The problem is that Q always calls the callbacks using process.nextTick() (also mentioned in the Promises/A+ spec) which means that I might miss some state changes in the object in the time between the resolution of the promise and the time the callbacks are called. My concrete problem is that I have an incoming connection and am missing the first messages. Messages are distributed using EventEmitter.
The code looks something like this. It uses the palava-client (but the problem should be universal) and is written in CoffeeScript:
defer = q.defer()
session.on 'peer_joined', (peer) ->
  defer.resolve(peer)
return defer.promise
And somewhere else
peer_promise.then (peer) ->
  peer.on 'message', (msg) ->
    console.log "Message received:", msg
The first messages are sometimes lost because they get emitted before the promise callbacks are ever notified. The problem does not occur using only EventEmitter and callbacks, because the callbacks are called immediately, blocking the JavaScript thread from handling the incoming messages in the meantime.
The following code never misses messages:
session.on 'peer_joined', (peer) ->
  peer.on 'message', (msg) ->
    console.log "Message received:", msg
Do you see any way I can solve the problem using promises? I really would like to keep using them in my abstraction layer to ensure that only one peer is accepted. Is there any other way to avoid such race conditions?
As someone who usually promotes using promises, my suggestion is:
Do not use promises here
Promises represent one-time events. They are an abstraction over values: once a promise changes its state, it can no longer be changed. A promise starts off as pending and changes state once, to either fulfilled or rejected.
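To make the one-time point concrete, a tiny sketch with Q (the library the question uses):

var deferred = Q.defer();
deferred.promise.then(function (v) { console.log(v); });
deferred.resolve('first');  // settles the promise
deferred.resolve('second'); // ignored: a promise settles exactly once, so only "first" is logged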
You are facing a scenario where you have many users; each user joins and needs to add events, and the users might 'go away'. Your scenario simply doesn't describe the linear flow promises excel at. Promises are useful for a certain kind of scenario - they are not for every concurrency problem. Using an event emitter here is perfectly appropriate.
Your case (a user joining) does not really represent a one-time operation a promise can proxy. The code that "doesn't miss messages" is indeed the more correct approach.
If you still choose to use promises here
There are a few things you can do:
You can use Q's progression events and add a progress handler in the creation phase. Note that Kris (Q's author) has called progression broken, and it is being removed in the next version of Q. I recommend against it.
You can wrap the message callback so that it only fires once a handler has been attached: accumulate messages fired before a handler exists (in an array), then trigger them all when the message handler is added (after you resolve, in a .then on the deferred you return). A sketch follows.
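A hedged sketch of that buffering approach, in plain JavaScript with illustrative names (onMessage is not the palava API): messages that arrive before the consumer attaches a handler are queued, then replayed on attach.

var defer = q.defer();
session.on('peer_joined', function (peer) {
  var pending = [];
  var handler = null;
  peer.on('message', function (msg) {
    if (handler) { handler(msg); }
    else { pending.push(msg); }  // buffer until someone listens
  });
  defer.resolve({
    peer: peer,
    onMessage: function (fn) {   // consumer attaches its handler here
      handler = fn;
      pending.forEach(fn);       // flush the backlog in order
      pending = [];
    }
  });
});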

Should a JS API Convert jqXHRs to Promises?

I've seen a pattern of people converting their jqXHR objects (i.e. the objects jQuery returns when you do an AJAX operation) into promises. For instance:
function doAJAX() {
  var jqXHR = $.get('http://www.example.com');
  return jqXHR.promise();
}
I've never bothered with this pattern because it doesn't seem to really offer anything. When people talk about converting $.Deferreds to promises, they recommend doing so to prevent the user from prematurely resolving the deferred. But jqXHRs effectively already are promises: they implement the promise interface and they can't be resolved prematurely.
But I'm now working on a public-facing API, where I'm willing to bend over backwards (code-wise) if it will result in a better API for the customer. So, my question is: am I missing anything? Will throwing .promise() after every AJAX call that gets returned to the customer actually make things better for them in any way, or is this just an example of patternitis, where people apply .promise() to their jqXHRs simply because they're used to doing it to their deferreds?
jQuery's jqXHR, as returned from $.get(), is already a fully functioning promise object, not a deferred object, so it already has a promise's protections. You don't need to convert it, and doing so only hides existing jqXHR functionality.
So, you can already directly do this:
$.get('http://www.example.com').done(function() {
  // your code here
});
Straight from the jQuery doc for $.get():
As of jQuery 1.5, all of jQuery's Ajax methods return a superset of the XMLHTTPRequest object. This jQuery XHR object, or "jqXHR," returned by $.get() implements the Promise interface, giving it all the properties, methods, and behavior of a Promise (see Deferred object for more information). ...
The Promise interface also allows jQuery's Ajax methods, including $.get(), to chain multiple .done(), .fail(), and .always() callbacks on a single request, and even to assign these callbacks after the request may have completed. If the request is already complete, the callback is fired immediately.
And, to your question:
Will throwing .promise() after every AJAX call that gets returned to the customer actually make things better for them in any way?
No, in my opinion this will not make it a better API; it will only serve to hide the jqXHR functionality and turn it into a bare promise. The jqXHR object is already a promise object and not a deferred object, so the promise-level protection against people mucking with the deferred is already there.
The only reason I can think of to return ONLY a promise object from your API would be if you're truly trying to hide the fact that you're using jQuery underneath, so that there is no way for the API user to reach any other jQuery features. But unless you're also hiding ALL the argument functionality on the input to the Ajax call (so it doesn't look like jQuery on the input side either), I see no point in hiding output functionality. After all, it is jQuery underneath, so why go to all the trouble of abstracting/redefining the whole jQuery ajax interface?
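That said, if hiding jQuery entirely were the goal, one hedged option (assuming an environment with standard promises, native or via a library) is to coerce the thenable jqXHR so callers only ever see a plain promise:

function fetchData(url) {
  // jqXHR is a thenable, so a standard promise can assimilate it;
  // the caller gets no jQuery-specific surface to depend on
  return Promise.resolve($.get(url));
}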
Most best practices are invented not just to get things done now, but to get things done further down the road as well. It all revolves around maintainability, reusability, flexibility, etc. Requirements and technology change, and your code should be robust enough to adapt to those changes. The more tight coupling between components, for instance, the harder it is to change one part without affecting others.
When designing a public-facing API this is really important as well, probably even more so, since you should try to make things easy not only for your future self, but also for all the users of your API. Backwards compatibility is one of those things users of an API really like, since it means they can upgrade your library without breaking any existing code. Obviously it's not always possible to make everything 100% backwards compatible (and sometimes it's not even desirable). It's a balancing act: try to foresee possible future changes and code towards them. However, this doesn't mean you should jump through hoops just because maybe one day this or that will change. You need to weigh the pros and cons and analyse the amount of work involved.
Now let's get to the question:
If you return a jqXHR object from your method, people will start using it not just as a promise, but as a full-fledged jqXHR object. They'll see it for what it is in the documentation, source code, or their inspector, and they _will_ start using it as that. This means their code now expects not just a promise but a jqXHR object, since that's what they are using. However, this limits you, the creator of the public API, because if one day down the road you don't want to return a jqXHR object for whatever reason, you'll be making a change that isn't backwards compatible.
So let's assess how realistic this scenario is and what the solution is.
Possible reasons why you might not be returning a jqXHR object in the future include (but are not limited to):
What if you decide to use a different library instead of jQuery (for this method)?
What if you need/want a different retrieval mechanism? For instance instead of retrieving data through ajax, you want to retrieve it from LocalStorage. Or maybe both?
What if you want to wrap jQuery promises with a promise library (for instance Q allows you to do this, since jQuery promises aren't proper promises)?
What if the flow changes and this method is only the first in a sequence of distributed processors?
In all of the above cases you will either jump through hoops (for instance passing a stale jqXHR object around) to avoid breaking backwards compatibility, or you'll just break it. In all of those cases, people who have been using the returned object not only as a promise but as a jqXHR object will have to change their code, and that can be quite a substantial change.
Now let's get back to the balancing act. None of the above scenarios would be worth worrying about if avoiding the potential headaches required something convoluted or elaborate, but it doesn't. It comes down to one simple line:
return jqXHR.promise();
The implementation detail is abstracted away; people don't know about, and especially can't rely on, one specific implementation. I can almost guarantee that it will save you trouble.
Many people seem to have a kneejerk reaction when it comes to best practices: "but it's totally possible to do it with [other solution]". Sure. Nobody's saying it's impossible or insurmountable. You just try to keep things easy both now and in the future.

Is promise a closure?

In the closure tag wiki page, it reads: "jQuery itself is one big closure."
But is a promise a closure as well? Could you please explain why or why not? This is how I understand closure: assign a function to a variable and reuse it with different environments. A promise does that with $.ajax(), but I could not find anywhere on Stack Overflow where promises are introduced as closures. Maybe because there are other features of promises, like $.Deferred(), resolve(), and fail(), that expand their functionality beyond simple function passing?
Closures
This is how I understand closure: assign a function to a variable and reuse it with different environments.
That's not a strictly accurate definition of a closure.
A closure is a function that has access to a referencing environment. In JavaScript, that means a function returned by another function that retains access to the original function's scope. There are other SO questions that describe this very well.
Closures are general-purpose structures that can be used in a variety of ways. One of their biggest benefits is that they protect private scope, which is why libraries like jQuery are often written as closures, so that they don't need to expose all their functions globally.
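A minimal sketch of that "one big closure" pattern (the library name is made up): an immediately-invoked function keeps helpers private and exposes only a public surface.

var myLib = (function () {
  var count = 0;                  // private: unreachable from outside the closure
  function bump() { count += 1; } // private helper
  return {
    tick: function () { bump(); return count; } // public API closes over the state
  };
})();
myLib.tick(); // 1, but `count` itself cannot be touched directly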
Promises
Promises are a different concept. They are a way of structuring asynchronous code to make the flow easier to follow. A promise object in particular provides functions to chain operations in a clear and easy-to-read way. A promise might be implemented using closures, but it does not have to be. For instance, here is an implementation that does not use closures:
https://gist.github.com/814052/690a6b41dc8445479676b347f1ed49f4fd0b1637
whereas jQuery's implementation uses at least one closure, but isn't really based on them:
http://james.padolsey.com/jquery/#v=1.10.2&fn=jQuery.Deferred
Conclusion
Promises and closures aren't directly related concepts. Closures are a programming technique that might be used in a promise implementation; in the end it is neither impossible nor necessary to implement one that way.
You wouldn't ask if a birdhouse was a 2x4, even if you used one to make it. The same can be said of promises and closures. Promises make use of closures to retain references to state, callbacks and other such things.
Because of the asynchronous nature of JavaScript, the language and its runtimes give us a lot of power. First off, a Promise in jQuery, although not unique to jQuery, is an object that will, as the documentation puts it, observe when all actions of a certain type bound to the collection, queued or not, have finished. This means you can use this object to know when to continue after a set or queue of items has finished some behavior. A closure, on the other hand, is not unique to jQuery either; it is a JavaScript construct, one that combines two things: a function, and the environment in which that function was created. This means not only executing a function, but doing so in possibly an entirely different context.
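The collection-level behavior described there can be seen with jQuery's $(...).promise(), which resolves once an element's queued effects finish (the selector and durations here are illustrative):

$('#box').fadeOut(400).fadeIn(400);
$('#box').promise().done(function () {
  console.log('all queued effects on #box have finished');
});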
Closures and promises are different concepts. Closures refer to the scope of variables, whereas promises are used to 'promise' that something will happen once an asynchronous action is done. Since JavaScript is non-blocking (not asynchronous -- edit), it will not wait for a function to get a response if it needs to access the network or disk; that said, you can have a promise execute after something is done.
