Making multiple HTTP GET requests with JavaScript

I am attempting to make multiple HTTP GET requests in JS, but I'm a total noob. I wouldn't be opposed to jQuery either. The idea is that I make a call to my server so I can populate a JS chart.
var client;

function handler() {
  if (client.readyState == 4 && client.status == 200) {
    // make a chart
  }
}

window.onload = function() {
  client = new XMLHttpRequest();
  client.onreadystatechange = handler;
  client.open("GET", "http://someurl/stuff");
  client.send();
}
So the above is the basic idea of a web request; the handler acts as a callback. This works when creating one GET request, but if I make two or more, I get mixed results: sometimes all requests work, other times none, or just one. The error that occurs is that the connection has not been closed.

The primary issue with this code is that it uses a single global variable (client) for every request, so concurrent requests clobber each other's state.
(The fact that the order of the handler invocations is undefined, due to the asynchronous nature of the requests, is secondary to this lack of re-entrancy: there is no guarantee that the handler sees the correct XHR object when it accesses the client variable.)
Closures and objects avoid this issue, but you may as well just use jQuery (or whatever library you prefer that provides a nice XHR interface, preferably one with Promises).
Since you "would not be opposed to jQuery", use it. Take care to only do work from within the callbacks (or promise handlers), using the returned data they provide, because the requests are, well, asynchronous.

Related

At what point in a function call is an AJAX request actually initiated by the browser?

Let's say I have a function that does a standard AJAX request:
function doXHR() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://jsonplaceholder.typicode.com/posts');
  xhr.send();
  xhr.onreadystatechange = () => {
    console.log('ready state change');
  };
}
doXHR();
console.log('done');
This code will cause the browser to start an AJAX request, but at what point in the function does the request actually start? According to this post: https://blog.raananweber.com/2015/06/17/no-there-are-no-race-conditions-in-javascript/
[Calling xhr.onreadystate() after send()] is possible, because the HTTP request is only executed after the current scope has ended its tasks. This enables the programmer to set the callbacks at any position he wishes. As JavaScript is single-threaded, the request can only be sent when this one single thread is free to run new tasks. The HTTP request is added to the list of tasks to be executed, that is managed by the engine.
But when I add a breakpoint in devtools right after the send call, I do get a network request, albeit in a pending state.
At the breakpoint, the XHR's readystate is 1, which is XMLHttpRequest.OPENED. According to MDN's documentation, XMLHttpRequest.OPENED means that xhr.open() has been called: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/readyState
But if I comment out xhr.send() so that only .open() is called, then I get no pending network request.
Thinking that perhaps I need the function scope to end in order for the request to actually be sent out, I moved the breakpoint to after the function call (and modified the code a bit to see the XHR readyState), but I still get the pending network request.
It seems like the debugger not only freezes code execution, but also any network requests. This makes sense, as you don't want network requests to complete while you're on a breakpoint, but this also makes it impossible to tell when the network request is actually initiated.
My question is, at what point does the request actually get sent out? Does calling xhr.send() merely set up the request, but requires that the function scope end before the request is initiated? Does xhr.send() immediately initiate the request before the function scope ends? Or is there something else going on here?
send immediately initiates the request. This is at least hinted at by MDN's documentation for send.
The author of the blog you link to is correct that there are no race conditions per se, but that does not keep things from happening out of order. An example would be loading multiple <script> tags with the async attribute set on them: if one script depends on the other, you can end up with an unpredictable sequence of events (which is very similar to a race condition), because the two asynchronous loads finish at different times.
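As a small illustration of that out-of-order behavior, translated to XHR requests (the URLs are placeholders): two requests started back-to-back may complete in either order.

['/slow-endpoint', '/fast-endpoint'].forEach(function(url) {  // placeholder URLs
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function() {
    // Completion order depends on the network, not on start order.
    console.log('finished:', url);
  };
  xhr.send();
});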
It is true that you can set onreadystatechange after calling send, because even if the request failed immediately, the event won't be dispatched until the current function scope completes. The delay here is not in the dispatching of the network request, but in the dispatching of the event saying that the network request completed.
It is important to note that networking itself is not handled in JavaScript, but by the browser implementation, which is native code and could be multi-threaded (although I do not know whether it is). This means the browser is perfectly capable of handling network tasks while your JavaScript is running.
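To see that in isolation, here is a small sketch (reusing the placeholder endpoint from the question): send() hands the request to the browser's network stack immediately, but no readystatechange event can be delivered until the current script task finishes.

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicode.com/posts');
xhr.send();                          // the native network request starts here

// Attached after send(), but never "too late": events queue until
// this whole script task returns to the event loop.
xhr.onreadystatechange = function() {
  console.log('readyState:', xhr.readyState);
};

// Busy-wait to block the JS thread; the download can proceed natively
// in the meantime, but the handler above still fires only afterwards.
var end = Date.now() + 500;
while (Date.now() < end) { /* spin */ }
console.log('sync code done');       // always logs before any readyState change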

Is it possible to cancel asynchronous call independently of its current state?

When I type text into my textfield widget, I send a request to the server with every character typed, to fetch matching data.
When I type really fast I swarm the server with requests, which causes my control to freeze. I managed to create a throttling mechanism where I set how many ms the client should wait before sending a request, but that requires choosing an arbitrary wait constant. A better solution would be to simply cancel the previous request when the next key is pressed.
Is it possible to cancel AJAX request independently of its current state? If yes, how to achieve this?
Call XMLHttpRequest.abort()
See: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/abort
You'll have to track your requests somehow, perhaps in an array.
var requests = [];

var xhr = new XMLHttpRequest();
xhr.open("GET", "https://developer.mozilla.org/");
xhr.send();
requests.push(xhr);

// Later, abort anything still in flight:
requests.forEach(function(req) { req.abort(); });
MDN says:
The XMLHttpRequest.abort() method aborts the request if it has already
been sent. When a request is aborted, its readyState is set to 0
(UNSENT), but the readystatechange event is not fired.
What's important to note here is that while you will be able to .abort() requests on the client side (and thus not have to worry about the server's response), you are still swarming your server because all those requests are still being sent.
My opinion is that you had it right the first time, by implementing a mechanism that limits the frequency of AJAX requests. You mentioned that you had a problem with your control freezing (I assume you mean that the browser either takes longer to respond to user actions, or stops responding completely), and this could be a sign that there is a problem with the way your application handles asynchronous code.
Make sure you are using async APIs like Promise correctly, avoid loops that do heavy processing or just wait around in client code, and make your event processing (i.e your AJAX callback) simple and fast to reduce the impact on the user.
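For reference, here is a minimal sketch combining both ideas, debouncing the keystrokes and aborting the previous in-flight request; the /suggest endpoint, textField, and renderSuggestions are all hypothetical names:

var pending = null;  // the XHR currently in flight, if any
var timer = null;    // debounce timer

function fetchSuggestions(query) {
  if (pending) pending.abort();  // cancel the previous request, whatever its state
  var xhr = new XMLHttpRequest();
  pending = xhr;
  xhr.open("GET", "/suggest?q=" + encodeURIComponent(query));  // hypothetical endpoint
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
      renderSuggestions(JSON.parse(xhr.responseText));  // hypothetical callback
    }
  };
  xhr.send();
}

textField.addEventListener("input", function(e) {  // textField: hypothetical element
  clearTimeout(timer);  // debounce: wait for a pause in typing
  timer = setTimeout(function() { fetchSuggestions(e.target.value); }, 250);
});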

Using file-scope variables in Node.js

Beginner question
Running on server side, in node.js:
If I use a file-scope (or even global) variable which is set by an export.function(), where that exported function is called via AJAX from a client, and multiple requests come in from different clients, is the variable now prone to unexpected results?
I.e. do I need to set up an array so that every time the export.function() is called, it adds a new file-scope instance for that particular AJAX request? Or is this magically handled by Node.js, where every AJAX request gets its own instance of the server?
Requests will share the same instances, so you'll need to guard against this. Note, however, that blocks of synchronous code are executed to completion before execution switches to handling another request, which simplifies the "guarding" you need to do.
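A minimal sketch of the hazard, assuming an Express-style handler (the route, query parameter, and setTimeout stand-in for real async work are all illustrative):

// Shared file-scope state: every request to this process sees the same variable.
let lastQuery = null;

exports.unsafe = function(req, res) {
  lastQuery = req.query.q;                   // request A writes here...
  setTimeout(function() {                    // ...then yields to the event loop...
    res.send("you asked for " + lastQuery);  // ...request B may have overwritten it
  }, 100);
};

exports.safe = function(req, res) {
  const query = req.query.q;                 // local: each call gets its own copy
  setTimeout(function() {
    res.send("you asked for " + query);      // the closure captures this request's value
  }, 100);
};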

One AJAX handler or multiple handlers?

My webpage needs to send/receive several AJAX operations while being used. Currently I use one AJAX handler to handle all events. All AJAX requests are sent/received using the format "(type) | {json string}".
A piece of JS code handles the AJAX requests/responses: parse the response text -> get the type -> a switch...case doing something for each type.
This works, but as AJAX events grow there get to be too many cases, e.g. from 0 to 99, which is not likely to be easy to maintain or develop further.
Maybe I need to split the single AJAX handler into multiple ones? But then how does the browser know which AJAX response should go to which specific handler?
Any advice is appreciated.
Currently the code looks like this (one of the pages, using plain JavaScript, no framework):
xmlhttp.onreadystatechange = function() {
  if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
    //alert(xmlhttp.responseText);
    var reply = decodeArray(xmlhttp.responseText); // convert to JSON object and some other stuff
    switch (reply.type) {
      case 0:
        // ...
        break;
      case 1:
        // ...
        break;
      // ... cases up to 99
    }
  }
};
This is one of the fundamental challenges in software engineering: you have a task that you perform repeatedly, but the details change slightly with each variation, so how do you reuse code and keep things clean? Every substantial app has this problem.
You need a design approach that enforces good Separation of Concerns. I usually develop an API for my app that is completely separate from the other parts of the app. My API does one thing and one thing only: it communicates with the server.
It has no application-specific code.
This means I can reuse my API in different apps if necessary. It also means I can test my API independently of the app.
So let's say you need to do something like loadObject. The application sees:
App.API = {
  loadObject: function(params, onSuccess, onFail) {...}
};
The key to keeping this decoupled is the onSuccess and onFail callbacks. The application using the API passes in these functions, so the API knows nothing of the application-specific logic; all it knows is that it fires these callbacks appropriately. The callbacks are functions that take the response data as arguments; all the API does is pass the response data to the callback.
Since the details of your AJAX calls usually have lots of items in common, I would also create some sort of XhrWrapper that fires the requests. So in your loadObject you would use your XHR helper to make the request:
xhr.sendRequest(url, method, onSuccess, onFail);
This way, all the tedium of firing XHRs is minimized.
Obviously, you can go further; for example, failure is bad most of the time, so my XHR wrapper has a default onFail implementation, and specific API methods can pass in an override only where it makes sense.
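Putting it together, a minimal sketch of this layering (App.API and loadObject follow the post; the wrapper internals, URL, and renderChart callback are illustrative):

var App = {};

// Transport layer: knows how to fire XHRs, nothing about the app.
var XhrWrapper = {
  sendRequest: function(url, method, onSuccess, onFail) {
    var xhr = new XMLHttpRequest();
    xhr.open(method, url);
    xhr.onreadystatechange = function() {
      if (xhr.readyState != 4) return;
      if (xhr.status == 200) {
        onSuccess(xhr.responseText);
      } else {
        (onFail || XhrWrapper.defaultFail)(xhr.status);  // default failure handling
      }
    };
    xhr.send();
  },
  defaultFail: function(status) { console.error("request failed:", status); }
};

// API layer: one method per server operation, still application-agnostic.
App.API = {
  loadObject: function(params, onSuccess, onFail) {
    XhrWrapper.sendRequest("/objects?id=" + params.id, "GET", onSuccess, onFail);
  }
};

// Application code: the callbacks carry all the app-specific logic.
App.API.loadObject({ id: 42 }, function(data) {
  renderChart(JSON.parse(data));  // renderChart: hypothetical app callback
});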

Ajax-intensive page: reuse the same XMLHttpRequest object or create new one every time?

I'm working on some sort of online multiuser editor / co-op interface which will be doing a lot (as in, thousands) of AJAX requests during one page lifetime.
What would be best ('best' in terms of stability, compatibility, and avoiding trouble)?
1. Create one XMLHttpRequest object and reuse it for every HTTP request.
2. Create a new XMLHttpRequest object for every HTTP request.
3. Manage a dynamic 'pool' of XMLHttpRequest objects: create a new one when starting an HTTP request and no existing object is available, and tag a previously created object as 'available' when its last request has completed successfully.
I think 1 is not an option, because some requests may fail, I may be initiating new requests while a previous one is not finished yet, etc.
As for 2, I guess this is a memory leak, or may result in insane memory/resource usage. Or can I somehow close or delete an object when its request is finished (where/how?), or does the JS garbage collector properly take care of this itself?
I've never tried 3 before, but it feels like the best of both worlds. Or is an approach like that unnecessary, or am I still missing potential problems? Exactly when can I assume a request to be finished (and thus the object to be available for a new request): is that when receiving readyState 4 and HTTP status 200? (I.e., can I be sure no more updates or callbacks will ever follow after that?)
Create a new one when you need one. The GC will deal with the old ones once they are not needed anymore.
However, for something like a cooperative editor you might want to consider using WebSockets instead of sending requests all the time. The overhead of a small HTTP request is huge, while a message over an established WebSocket connection carries almost none.
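A minimal sketch of the WebSocket alternative (the endpoint, message shapes, and applyRemoteEdit are illustrative):

// One persistent connection replaces thousands of short-lived XHRs.
var socket = new WebSocket("wss://example.com/editor");  // placeholder endpoint

socket.onopen = function() {
  socket.send(JSON.stringify({ type: "join", doc: "doc-42" }));
};

socket.onmessage = function(event) {
  var msg = JSON.parse(event.data);
  applyRemoteEdit(msg);  // hypothetical: merge a collaborator's change
};

// Each edit is a single small frame, with no per-message HTTP headers.
function sendEdit(edit) {
  socket.send(JSON.stringify({ type: "edit", payload: edit }));
}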
