What happens when an XMLHttpRequest is aborted?

Will an aborted XMLHttpRequest still download the response from the server?
At what point in the request lifecycle does it differ from a regular request?
Do different browsers behave differently?
Is it bad practice to abort requests?

No, the download will (or at least should) be cancelled; it is in my browser.
When a request is aborted, its readyState is changed to XMLHttpRequest.UNSENT (0) and the request's status code is set to 0. -- MDN
No, at least hopefully not. They should be following the spec.
In my opinion, definitely not. It's a waste of bandwidth and other resources to have requests you no longer need running in the background. Much better to abort them.
Two recent use-cases from personal experience:
A table with various parameters for filtering. Depending on the parameters selected, the resulting request sometimes took a while to complete. If you selected a slow set of parameters A and then a fast set of parameters B before A completed, you'd first see the results of B in the table; then A would eventually complete and "replace" the contents of the table, so you'd suddenly see A instead.
Solution: Abort the previous incomplete request before starting the next one (see the sketch after these examples).
An SPA whose pages sometimes had long-running requests, for example the previously mentioned table. When navigating away to a different page, there were sometimes several requests still running in the background for data that was no longer needed.
Solution: Register those requests to be aborted when the page/component was unmounted.
Solution: Register those requests to be aborted when the page/component was unmounted.
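As a minimal sketch of the first use case (the /table-data endpoint and the renderTable helper are placeholders, not anything from the original code):

    // Keep a reference to the in-flight request so it can be aborted.
    var currentRequest = null;

    function loadTable(params) {
        if (currentRequest) {
            // Abort the previous request; its readyState becomes 0 and its
            // load callback will never fire.
            currentRequest.abort();
        }

        var xhr = new XMLHttpRequest();
        currentRequest = xhr;
        xhr.open('GET', '/table-data?' + params); // placeholder endpoint
        xhr.onload = function () {
            currentRequest = null;
            renderTable(JSON.parse(xhr.responseText)); // placeholder renderer
        };
        xhr.send();
    }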

Related

Race conditions during simultaneous link click and asynchronous AJAX request?

I'm currently facing a situation similar to the relatively-simple example shown below. When a user clicks on a link to a third-party domain, I need to capture certain characteristics present in the user's DOM and store that data on my server. It's critical that I capture this data for all JS-enabled users, with zero data loss.
I'm slightly concerned that my current implementation (shown below) may be problematic. What would happen if the external destination server was extremely fast (or my internal /save-outbound-link-data endpoint was extremely slow), and the user's request to visit the external link was processed before the internal AJAX request had enough time to complete? I don't think this would be a problem (because in this situation, the browser doesn't care about receiving a response from the AJAX request), but getting some confirmation from fellow developers would be much appreciated.
Also, would the answer to the question above vary if the <a> link pointed to an internal URL rather than an external one?
<script type="text/javascript">
    $(document).ready(function() {
        $('.record-outbound-click').on('click', function(event) {
            var link = $(this);

            $.post(
                '/save-outbound-link-data',
                {
                    destination: link.attr('href'),
                    category: link.data('cat')
                },
                function() {
                    // Link tracked successfully.
                }
            );
        });
    });
</script>
<a href="http://www.stackoverflow.com" class="record-outbound-click" data-cat="programming">
    Visit Stack Overflow
</a>
Please note that using event.preventDefault(), along with window.location.href = link.attr('href') inside $.post's success callback, isn't a viable solution for me. Neither is sending the user to a preliminary script on my server (for instance, /outbound?cat=programming&dest=http://www.stackoverflow.com), capturing their data, and then redirecting them to their destination.
Edit 2
Also consider the handshake step (Google's docs):
Time it took to establish a connection, including TCP handshakes/retries and negotiating a SSL.
I don't think your client and the server you're sending the AJAX request to can complete the handshake if the client is no longer holding a connection open to the server (i.e., you're already at Stack Overflow or whatever website your link navigates to).
Edit 1
More broadly, though, I was hoping to understand from a theoretical point of view whether or not the risk I'm concerned about is a legitimate one.
That's an interesting question, and the answer may not be as obvious as it seems.
The timings below come from a single sample request/response in my Network tab; they definitely shouldn't be taken as representative of requests/responses in general.
I think the gap we might be most concerned with is the 1.933 ms stall time. There are also other steps that need to happen before the actual request is sent (which itself took about 0.061 ms).
I'd be worried if there's an interruption in any of the 3 steps leading up to the actual request (which together took about 35 ms, give or take).
I think the question is, if you go somewhere else before the "stalled", "DNS Lookup", and "Initial connection" steps happen, is the request still going to be sent? That part, I don't know. But what about any general computer or browser lag beforehand?
Like you mentioned, the idea that the request/response cycle to/from Stack Overflow would somehow be faster than what's happening on your client (i.e., the initiation itself, not even the complete cycle, of a network request to your server) is probably a bit far-fetched. But theoretically (and, as you mentioned, that's what you're interested in), it's probably a bad idea in general to depend on these kinds of race conditions.
Original answer
What about making the AJAX request synchronous?
$.ajax({
    type: "POST",
    url: url,
    async: false
});
This is generally a terrible idea, but if, in your case, the legacy code is so limiting that you have no way to modify it and this is your last option (think, zombie apocalypse), then consider it.
See jQuery: Performing synchronous AJAX requests.
The reason it's a bad idea is that it's completely blocking (in normal circumstances, you don't want potentially un-completable requests blocking your main thread). But in your case, it looks like that's exactly what you want.
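For illustration only, roughly how the synchronous variant might be wired into the click handler from the question (same endpoint and data attributes; this is a sketch, not a recommendation):

    $('.record-outbound-click').on('click', function() {
        var link = $(this);

        $.ajax({
            type: 'POST',
            url: '/save-outbound-link-data',
            async: false, // blocks the main thread until the request completes
            data: {
                destination: link.attr('href'),
                category: link.data('cat')
            }
        });

        // The browser only follows link.attr('href') after this handler
        // returns, by which point the synchronous request has finished.
    });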

How to clean chrome in-memory cache?

I'm developing a Chrome extension and I'm trying to perform an action each time a user searches on Google. Currently I'm using the chrome.webRequest onBeforeRequest listener. It works perfectly in most cases, but some requests are served from the cache and don't trigger any call. I've found this in the API documentation about caching:
Chrome employs two caches — an on-disk cache and a very fast in-memory cache. The lifetime of an in-memory cache is attached to the lifetime of a render process, which roughly corresponds to a tab. Requests that are answered from the in-memory cache are invisible to the web request API. If a request handler changes its behavior (for example, the behavior according to which requests are blocked), a simple page refresh might not respect this changed behavior. To make sure the behavior change goes through, call handlerBehaviorChanged() to flush the in-memory cache. But don't do it often; flushing the cache is a very expensive operation. You don't need to call handlerBehaviorChanged() after registering or unregistering an event listener.
I've tried using the handlerBehaviorChanged() method to empty the in-memory cache, but there was no difference. Although it's not recommended, I've even tried to call it after every request.
This is my code:
chrome.webRequest.MAX_HANDLER_BEHAVIOR_CHANGED_CALLS_PER_10_MINUTES = 1000;

chrome.webRequest.onBeforeRequest.addListener(function (details) {
    // perform action
    chrome.webRequest.handlerBehaviorChanged();
}, {
    urls: ["*://*.google.com/*"]
});
Is there any way to empty/disable this in-memory cache from the extension?
I assume the "caching" is performed by the Google website itself with some JavaScript objects and arrays, so emptying the browser's in-memory cache won't help.
My first thought was that the data was stored in sessionStorage, since the values contained the search term (here I searched for "test") and are updated/created on every request/change of the selected search word.
I tried clearing the sessionStorage (even periodically), but it didn't really change the "not"-loading; furthermore, the storage was recreated, and even without the storage the different results were still displayed.
Given this, and the fact that I can't check several thousand lines of minified JavaScript, I can only assume that the website itself does the caching of the requests. I hope this information points you in the right direction.
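If it helps, roughly what the periodic clearing described above could look like in a content script (it didn't change the behaviour for me, so treat it as an experiment rather than a fix):

    // Clear the page's sessionStorage every few seconds (experimental).
    setInterval(function () {
        try {
            sessionStorage.clear();
        } catch (e) {
            // Some pages restrict storage access; ignore failures.
        }
    }, 5000);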

Ajax-intensive page: reuse the same XMLHttpRequest object or create new one every time?

I'm working on some sort of online multiuser editor / coop interface, which will be doing a lot (as in, thousands) of ajax requests during one page lifetime.
What would be best: ('best' in terms of stability, compatibility, avoiding trouble)
Create one XMLHttpRequest object and reuse that for every HTTP request
Create a new XMLHttpRequest object for every HTTP request
Manage a dynamic 'pool' of XMLHttpRequest objects, creating a new one when starting a HTTP request and no existing object is available, and tagging a previously created object as 'available' when its last request was completed successfully
I think 1 is not an option, because some requests may fail, I may be initiating new requests while a previous one hasn't finished yet, etc.
As for 2, I guess this is a memory leak, or may result in insane memory/resource usage. Or can I somehow close or delete an object when its request is finished? (Where/how?) Or does the JS garbage collector properly take care of this itself?
I've never tried 3 before, but it feels like the best of both worlds. Or is an approach like that unnecessary, or am I still missing potential problems? Exactly when can I assume a request to be finished (and thus the object to be available for a new request): is that when receiving readyState 4 and HTTP status 200? (I.e., can I be sure no more updates or callbacks will ever follow after that?)
Create a new one when you need one. The GC will deal with the old ones once they are not needed anymore.
However, for something like a cooperative editor you might want to consider using WebSockets instead of sending requests all the time. The overhead of a small HTTP request is huge while there is almost no overhead with a WebSocket connection.
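For instance, a minimal sketch of the WebSocket approach (the endpoint and message format are made up for illustration):

    // Hypothetical editor endpoint; one persistent connection instead of
    // thousands of short-lived HTTP requests.
    var socket = new WebSocket('wss://example.com/editor');

    socket.addEventListener('open', function () {
        // Push an edit to the server as it happens.
        socket.send(JSON.stringify({ type: 'edit', position: 42, text: 'hello' }));
    });

    socket.addEventListener('message', function (event) {
        // Apply edits broadcast by the server for other users.
        var edit = JSON.parse(event.data);
        console.log('remote edit', edit);
    });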

Disable browser cache

I implemented a REST service and I'm using a web page as a client.
My page has some JavaScript functions that perform the same HTTP GET request to the REST server several times and process the replies.
My problem is that the browser caches the first reply and doesn't actually send the following requests.
Is there some way to force the browser to execute all the requests without caching?
I'm using Internet Explorer 8.0.
Thanks
Not sure if it can help you, but sometimes I add a random parameter to the URL of my request in order to avoid it being cached.
So instead of having:
http://my-server:8080/myApp/foo?bar=baz
I will use:
http://my-server:8080/myApp/foo?bar=baz&random=123456789
Of course, the value of the random parameter is different for every request. You can use the current time in milliseconds for that.
Not really. This is a known issue with IE; the classic solution is to append a random parameter to the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache: false AJAX option, for instance).
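A short sketch of both variants (the URL is the one from the question; new Date().getTime() is used because IE8 doesn't have Date.now()):

    // Let jQuery append the cache-busting parameter (_=<timestamp>) for you:
    $.ajax({
        url: 'http://my-server:8080/myApp/foo?bar=baz',
        cache: false,
        success: function (data) {
            // process the reply
        }
    });

    // Or append the current time in milliseconds yourself:
    var url = 'http://my-server:8080/myApp/foo?bar=baz&random=' + new Date().getTime();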
Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST, and the fact that it can (if properly followed by both client and server) allow a high degree of caching while also giving fine control over cache expiry and revalidation is one of its key advantages.
There is though an issue, as you have spotted, with subsequent GETs to the same URI from the same document (as in DOM document lifetime, reload the page and you'll get another go at that XMLHttpRequest request). Pretty much IE seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page; it uses the cached version even if the entity isn't cacheable.
Firefox has the opposite problem, and will send a subsequent request even when caching information says that it shouldn't!
We could add a random or time-stamped bogus parameter at the end of a query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around. We obviously don't want to repeat a full unconditional request when we don't need to.
However, this behaviour has a time component. If we delay the subsequent request by a second, then IE will re-request when appropriate while Firefox will honour the max-age and expires headers and not re-request when needless.
Hence, if two requests could be within a second of each other (either we know they are called from the same function, or there's the chance of two events triggering it in close succession) using setTimeout to delay the second request by a second after the first has completed will make it use the cache correctly, rather than in the two different sorts of incorrect behaviour.
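As a sketch (the getResource helper and URI are stand-ins for whatever GET you're repeating):

    // Hypothetical helper around the GET we want to repeat.
    function getResource(callback) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/myApp/foo?bar=baz'); // URI borrowed from the earlier example
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                callback(xhr);
            }
        };
        xhr.send();
    }

    getResource(function () {
        // Delay the second request by a second so IE re-requests when
        // appropriate and Firefox honours max-age/expires instead of
        // re-requesting needlessly.
        setTimeout(function () {
            getResource(function (xhr) {
                // use xhr.responseText
            });
        }, 1000);
    });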
Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.
Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for a current status to a resource. This does smell heavily of abusing REST and POSTing what should really be a GET though.
Which can mean that on balance the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.

javascript: cancel all kinds of requests

My website makes a lot of requests. I often need to cancel all current requests, so that the browser is not blocking relevant new requests.
I have 3 kinds of requests:
Ajax
inserted script tags (which do JSONP communication)
inserted image tags (which cause the browser to request data from various servers)
For Ajax it's no problem, as the XMLHttpRequest object supports cancelling.
What I need is a way to make any browser stop loading resources, from DOM-Objects.
It looks like simply removing an object (e.g. an image tag) from the DOM only helps to avoid a request if the request is not already running.
UPDATE: a way to cancel all requests that are irrelevant, rather than literally every request, would be perfect.
window.stop() should cancel any pending image or script requests.
I think document.close() stops all requests, but I'm not so sure about it.
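A rough sketch of combining the two ideas: track your own XHRs so they can be aborted individually, and fall back to window.stop() (or document.execCommand('Stop') in old IE) for image/script requests you can't cancel any other way. Note that window.stop() halts all loading on the page, not just the irrelevant requests:

    var pendingRequests = [];

    function trackedRequest(url) {
        var xhr = new XMLHttpRequest();
        pendingRequests.push(xhr);
        xhr.open('GET', url);
        xhr.onloadend = function () {
            // Remove the request from the list once it finishes or is aborted.
            var i = pendingRequests.indexOf(xhr);
            if (i !== -1) {
                pendingRequests.splice(i, 1);
            }
        };
        xhr.send();
        return xhr;
    }

    function cancelEverything() {
        // Abort our own Ajax requests...
        pendingRequests.slice().forEach(function (xhr) {
            xhr.abort();
        });
        pendingRequests = [];

        // ...and ask the browser to stop loading remaining images/scripts.
        if (window.stop) {
            window.stop();
        } else if (document.execCommand) {
            document.execCommand('Stop'); // old IE equivalent
        }
    }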
