How to control server side AJAX call timeout - javascript

I am writing server code in ASP.NET and using the MVC framework. My client code is javascript.
Whether you use jQuery or not to send an AJAX request from the browser to the server, you can set a timeout, and of course the browser also has a timeout it can enforce separately. If either of those timeouts is reached, the failure method of the AJAX invocation is called and one of the arguments indicates a timeout.
HOWEVER. I have recently discovered that the server can also time out an AJAX call and terminate it with a 500 error code, which the client receives as a 500 error no different from any unexpected error. In this case, though, the underlying server worker process continues: it is only the web server itself (IIS in my case) that interrupts the request and sends a 500 error code without letting the worker know, so the worker carries on blissfully unaware that there is a problem.
Meanwhile, the client's error handler for the AJAX call gets a status of 500, which it would naturally interpret as a failure of the worker process, perhaps an unhandled exception and ungraceful termination. As far as I can see, there is no way for the client to KNOW that the problem was a timeout, as opposed to an unexpected exception. So the client code might incorrectly assume that the worker process is dead, when in fact it is very much alive.
So... a threefold question:
1. Is there a way in ASP.NET MVC to control the server timeout settings?
2. Is there a way for the worker process that the server thinks is taking too long to be informed when the server generates a timeout on its behalf?
3. Is there a way for the client-side AJAX failure callback to know that a particular 500 error is not because the worker had an unexpected error, but because the wrapping server code decided it was taking too long?
Regarding #3, I can see that the responseText property of the AJAX response does contain some HTML that, if rendered, would tell the user there was a timeout, but programmatically parsing for that seems really messy and unreliable.
Anyone else run into this?
ADD / EDIT, 4pm PDT on 1/26:
Based on a comment immediately below suggesting that I might find a solution in this article, I implemented the suggested filter. It did not work. I verified the filter was in play by explicitly throwing a timeout exception from the worker process, which did trigger it; but in the actual server-timeout case it was never invoked.
I should add that this application is running as a Windows Azure web site. From the circumstantial evidence/data I have been able to accumulate, I believe that IIS on the VM is itself interrupting the request and responding with a 500 error without even telling the underlying worker process or MVC app that it has summarily terminated the request.
So this almost seems like an IT issue of being able to configure IIS on that particular VM for that particular web site.
From my data, it appears that IIS is just canceling the request after 230 seconds.

Try this web.config entry:
<system.web>
<httpRuntime executionTimeout="your-timeout-in-seconds" />
...
</system.web>
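On the client side, one hedged workaround for question #3 is to set the AJAX timeout lower than the server's observed 230-second cutoff, so the failure callback fires with jQuery's own "timeout" status instead of an ambiguous 500. The helper below is a sketch under that assumption; the function name, the URL, and the 200-second value are all illustrative, while the textStatus values ("timeout", "abort", "error") are jQuery's documented ones.

```javascript
// Hypothetical helper: classify a jQuery AJAX failure so the error handler
// can tell a client-side timeout apart from a server-side 500.
function classifyAjaxFailure(jqXHR, textStatus) {
  if (textStatus === 'timeout') return 'client-timeout'; // our own timeout fired first
  if (textStatus === 'abort') return 'aborted';
  if (jqXHR && jqXHR.status >= 500) return 'server-error'; // may still be an IIS timeout
  return 'other';
}

// Intended usage in the browser (jQuery assumed; the point is to keep the
// client timeout under IIS's 230-second cutoff so it fires first):
// $.ajax({ url: '/api/longJob', timeout: 200000 })
//   .fail(function (jqXHR, textStatus) {
//     console.log(classifyAjaxFailure(jqXHR, textStatus));
//   });
```

This doesn't tell the worker process anything (question #2), but it at least stops the client from mistaking a timeout for a crash.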


How does Javascript handle obsolete callbacks

I'm trying to understand the event loop in Node.js. There are fantastic resources online, but I keep finding myself wondering the same thing. The scenario is below:
For Javascript/Node
On a web page with a getData button.
The user clicks the button and a request is sent to the server to the getData route (in this simplified scenario when the server gets the data it returns a new page showing the data). The data request is asynchronously offloaded to a Node API (and out of the call stack).
Before the API returns the data, the user changes pages in the browser. A new HTTP request is sent to the server for a different page, and this fires an http.get(url, callback), adding the new page's callback to the callback queue (this beats the data request, and the new page is returned).
What happens to the old callback for the now redundant data? Presumably when the event loop reaches the getData callback in the cb queue, it will attempt to run the callback function. Will Node send a response to change the page again with the data displaying page? If it was Ajax and the data returned after a page change what happens to the callback in this scenario, does ajax simply ignore the returned data on the new, different page?
I know Ajax handles on-page updates, but I wanted to simplify to understand how the event loop handles situations where the call stack has moved on and the older callbacks are no longer useful.
Thank you for the responses, they have been very useful! The only thing I would add is this (I understand this scenario requires very bad server design, but the theory is my interest here): if the server does respond with a page that is no longer wanted by the user (due to the delay in the async response), does the browser understand that this is not the most recent request? I think, as HTTP is stateless, the browser will receive the response over TCP and reload with the previously requested page (to the client's frustration).
Many thanks!
Pretty sure the browser discards the response because the request itself is closed. Depending on the timing, the server (whether it's node.js or Apache or anything else) might still finish processing the request in time and try to respond to it. If it does respond, it doesn't care what the browser does with the response.
You say
The user clicks the button and a request is sent to the server to the getData route (in this simplified scenario when the server gets the data it returns a new page showing the data). The data request is asynchronously offloaded to a Node API (and out of the call stack)
The browser is oblivious to the server's implementation. It doesn't care how the server responds, as long as it responds. At the same time, the server is also oblivious to the "state" of the browser. All the server knows is that there is an open request it needs to respond to, and it does so. Once it has responded, it doesn't care.
I know Ajax handles on-page updates, but I wanted to simplify to understand how the event loop handles situations where the call stack has moved on and the older callbacks are no longer useful.
The call stack doesn't really "move on" in that sense: the server received a request, and as long as that request is open, the server should respond to it. If another request comes in (even if the new request is nearly identical), the server will respond to that request separately as well.
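A minimal sketch of how an Ajax-style client ends up ignoring a stale response (all names here are made up for illustration): the old callback still runs when the event loop reaches it, but its result is dropped because a newer request has superseded it.

```javascript
// Each getData call tags its callback with a request id; only the most
// recent id is allowed to update the page state.
let currentRequestId = 0;
let shownData = null;

function getData(send) {
  const requestId = ++currentRequestId;
  // `send` stands in for the async transport; it receives the callback
  // that will eventually be run by the event loop.
  send(function onResponse(data) {
    if (requestId !== currentRequestId) return; // stale -> discard silently
    shownData = data;
  });
}

// The first request is slow: we just capture its callback for later.
let lateReply;
getData(function (cb) { lateReply = cb; });

// The user triggers a second request, which completes immediately.
getData(function (cb) { cb('new page'); });

// The old callback still runs -- and harmlessly drops its data.
lateReply('old data');
// shownData is now 'new page', not 'old data'
```

The event loop happily runs both callbacks; "redundant" only means the second one returns early.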
In a simplified HTTP request scenario, a connection is established between client (eg Web browser) and server first, using TCP.
HTTP is sent over this link to configure the exchange of data.
The TCP connection is maintained until either end closes it (usually client says "Thanks! All done!"). So the TCP connection remains open while the Node.js process fetches some data asynchronously.
If a user gives up waiting for a response and (say) closes the browser, then the TCP connection is closed on both ends. Meanwhile, the callback for the database call continues to exist in server memory, ready for when the DB returns. The DB then returns, the callback is run, and as soon as the callback attempts to send data back to the client over the TCP connection it will discover it has been closed and a suitable error will be raised (and possibly caught) on the server-side.
The callback then exits, the stack frame is popped off the stack, the memory allocated to the stack frame is freed, and control returns to the previous frame. When the garbage collector in the Node.js JavaScript runtime next runs, any objects solely linked to that no-longer-in-existence frame are swept up and the memory associated with them is freed.
In an unhappy-path scenario (e.g. the power plug pulled on the client computer), the server will think the TCP connection is still open for a configurable period, until it times out or discovers it is unable to successfully send data across it, at which point the server closes it and cleans up the associated memory.
This can be complicated by the concept of HTTP keep-alive, which involves the browser and server agreeing to use a single TCP connection for multiple HTTP requests to save time. Under HTTP keep-alive, the HTTP protocol is leveraged to ensure the browser will safely ignore any data returned over the TCP connection that corresponds to a no-longer active request.
And there is an even more modern technique of multiplexing several concurrent HTTP requests over the same TCP connection too, but that is outside the scope of this question.
I think.

How to stop all functions (multiple asyncs etc.) from executing if the user aborts the connection to the server (refresh while connecting, close browser, etc.)

I'm currently working with React.js and server-side rendering, where I have multiple async calls requesting JSON from an API server. The problem is that if the user stops the connection, the functions keep executing even though there is nobody to serve the result to.
For example, if I hold down the refresh button, the Node.js Express server will keep sending hundreds of async requests (and probably non-async functions as well, which take longer to execute) and then executing their callbacks once responses are received.
So basically I need some way to stop functions from firing if the user aborts the HTTP request, worst case while holding the refresh button down...
I've tried res.end(), but the functions keep firing. Is there some smart way to listen to an event and stop the async work?
Requests are made with "superagent" and async flow with "async.js".
Thanks.
There are no particularly great options for you on the server.
If a single client sends off multiple requests to the server and then, while the requests are being processed or are sitting in the server queue waiting to be processed, the end user hits refresh, the browser will drop/close the open Ajax sockets that the currently open web page has.
But requests that are already underway on the server will continue to process, and the server will not even know those sockets have been closed until it tries to write to them. At that point it may (depending upon timing) discover that the sockets have been closed. But of course, by that time you've already processed the requests and have the results ready to go. It may also happen that the request is processed and the response sent before the server is even aware that the socket has been closed. This will eventually cause an error on the socket (and a resulting close of the socket) when no response, or a close, comes back from the other end while awaiting confirmation of the data that was sent (since TCP is a reliable protocol, every send must be ACKed by the other end of the connection).
So, this all explains why there really isn't a whole lot you can do on the server.
The situation you describe will be exacerbated if the client is sending multiple API requests in parallel and the end-user is somehow led to refresh the page in the middle of that. The things you can do to lessen this issue are as follows:
Combine multiple requests into a single API request. If your API already permits that type of structuring of the request, then switch to that. Or, if you control the API, then add new capabilities to the API so that rather than having a client send multiple requests in parallel, it can send a single request and get all the data in that one request. This limits the number of requests that might get caught in process when the web page is closed.
Limit the number of parallel requests you send. You can do this either by using the Async library feature that controls how many requests are sent at once from a list of requests you're trying to send or you can serialize your requests so one is sent only after the previous one finishes. This would also limit the number of requests that might get caught in process when the web page is closed.
If you had a particularly expensive server-side operation (something that might take minutes to run such as building a large report file), you could devise some sort of check-in with the client and then during your processing you could check to see if the client was still active and if not, then you could abort your own server-side processing. For most normal types of server-side requests, this would probably not be worth the extra trouble (since verifying the client is still active has its own cost), but if you have a particularly costly server-side operation, you could implement something like this.
If you have async operations in your request processing, and you're willing to put checks into your request handling after each async operation, then you could register a listener for the 'close' event with req.connection.addListener('close', function () { ... });, set some sort of flag on that request that you check after each async operation, and then abort the rest of the request processing whenever you find that flag set on that connection. I scoured the net module docs for some flag or method on the socket object that would tell you whether it has already been closed (such information should be there), but I could not find it mentioned in the Node.js docs.
Aborting requests halfway through isn't the job of async. From async's point of view, a request to the server is just a single task that can't be split up any further. If it is possible to do something like this, it would be part of the built-in http module or express. But I'm not sure that it would be very helpful, even if there is a way of doing it. The request has probably reached the internet by the time you try to unsend it. What you really want is not to send so many requests in the first place.
You haven't posted any code, so I can't write modifications to your code in response. Instead of having an event handler that says "send a request and do something with the response", write one that says "if you haven't sent a request in the last 2 seconds, send one and do something with the response, and also save the response in a variable; otherwise, look at the value of the variable and pretend you've received it as the response". (I'm assuming that all requests are identical - if not, then you need to think a bit more about it.)
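That "don't re-send within 2 seconds, reuse the saved response" idea can be sketched as a small wrapper. All names and the window length are illustrative; sendRequest stands in for whatever superagent call the app actually makes, and this assumes the requests are identical.

```javascript
// Wraps a request function so identical requests are only sent once per
// window; callers inside the window get the cached response instead.
function makeThrottledFetcher(sendRequest, windowMs) {
  let lastSentAt = -Infinity;
  let cached;
  return function fetch(onResponse) {
    const now = Date.now();
    if (now - lastSentAt >= windowMs) {
      lastSentAt = now;
      sendRequest(function (response) {
        cached = response;      // remember for callers inside the window
        onResponse(response);
      });
    } else {
      onResponse(cached);       // pretend the cached value just arrived
    }
  };
}

// Demo with a fake request function:
let hits = 0;
const fetchData = makeThrottledFetcher(function (cb) { cb('payload #' + ++hits); }, 2000);
fetchData(function (r) { /* 'payload #1' -- real request */ });
fetchData(function (r) { /* 'payload #1' again -- served from cache */ });
// hits === 1: only one request actually went out
```

Holding the refresh button then costs at most one real request per window instead of hundreds.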

JXcore error handling for multiple requests in Node.js

I am running Node.js code (server.js) under JXcore using
jx mt-keep:4 server.js
We have a lot of request hits per second, and mostly transactions take place.
I am looking for a way to catch the error in case any thread dies, with the request information returned to me, so that I can catch that request and take appropriate action based on it.
That way I would not lose the incoming request and could handle it.
This is a Node.js project that, due to project urgency, has been moved to JXcore.
Please let me know if there is a way to handle this, even at the code level.
Actually it's similar to a single Node.js instance: you have the same tools and options for handling errors.
Besides, a JXcore thread warns the task queue when it catches an unexpected exception on the JS side (the task queue stops sending requests to that instance), then safely restarts the particular thread. You may listen to the 'uncaught exception' and 'restart' events for the thread and manage a softer restart.
process.on('restart', function (res_cb, exit_code) {
  // thread needs a restart (due to an unhandled exception, IO, hardware, etc.)
  // prepare your app for this thread's restart
  // call res_cb(exit_code) when ready to restart
});
Note: JXcore expects the application to be up and running for at least 5 seconds before restarting any thread. This limitation presumably protects the application from looping thread restarts.
You may also start your application using 'jx monitor'; it supports multiple threads and reloads crashed processes.

Prevent recursive calls of XmlHttpRequest to server

I've been googling for hours for this issue, but did not find any solution.
I am currently working on this app, built on Meteor.
Now the scenario is: after the website is opened and all the assets have been loaded in the browser, the browser constantly makes repeated XHR calls to the server. These calls are made at a regular interval of 25 seconds.
This can be seen in the Network tab of browser console. See the Pending request of the last row in image.
I can't figure out from where it originates, and why it is invoked automatically even when the user is idle.
Now the question is, How can I disable these automatic requests? I want to invoke the requests manually, i.e. when the menu item is selected, etc.
Any help will be appreciated.
[UPDATE]
In response to Jan Dvorak's comment:
When I type "e" in the search box, the list of events whose names start with the letter "e" is displayed.
The request goes with all valid parameters and the Payload like this:
["{\"msg\":\"sub\",\"id\":\"8ef5e419-c422-429a-907e-38b6e669a493\",\"name\":\"event_Coll_Search_by_PromoterName\",\"params\":[\"e\"]}"]
And this is the response, which is valid.
a["{\"msg\":\"data\",\"subs\":[\"8ef5e419-c422-429a-907e-38b6e669a493\"]}"]
The code for this action is posted here
But in the case of the automatic recurring requests, the request goes without any payload and the response is just the letter "h", which is strange, isn't it? How can I get rid of this?
Meteor has a feature called
Live page updates.
Just write your templates. They automatically update when data in the database changes. No more boilerplate redraw code to write. Supports any templating language.
To support this feature, Meteor needs to do some server-client communication behind the scenes.
Traditionally, HTTP was created to fetch inert data: the client tells the server it needs something, and it gets it. There is no way for the server to tell the client that it has something new. Later, the need to push data to the client arose, and several alternatives came into existence:
polling:
The client makes periodic requests to the server. The server responds with new data or says "no data" immediately. It's easy to implement and doesn't use many resources. However, it's not exactly live: it can be used for a news ticker, but it's not exactly good for a chat application.
If you increase the polling frequency, you improve the update rate, but the resource usage grows with the polling frequency, not with the data transfer rate. HTTP requests are not exactly cheap. One request per second from multiple clients at the same time could really hurt the server.
hanging requests:
The client makes a request to the server. If the server has data, it sends it. If the server doesn't have data, it doesn't respond until it does. Changes are picked up immediately, and no data is transferred when it doesn't need to be. It does have a few drawbacks, though:
If a web proxy sees that the server is silent, it eventually cuts off the connection. This means that even if there is no data to send, the server needs to send a keep-alive response anyways to make the proxies (and the web browser) happy.
Hanging requests don't use up (much) bandwidth, but they do take up memory. Nowadays' servers can handle multiple concurrent TCP connections, so it's less of an issue than it was before. What does need to be considered is the amount of memory associated with the threads holding on to these requests - especially when the connections are tied to specific threads serving them.
Browsers have hard limits on the number of concurrent requests per domain and in total. Again, this is less of a concern now than it was before. Thus, it seems like a good idea to have one hanging request per session only.
Managing hanging requests feels kinda manual as you have to make a new request after each response. A TCP handshake takes some time as well, but we can live with a 300ms (at worst) refractory period.
Chunked response:
The client creates a hidden iFrame with a source corresponding to the data stream. The server responds with an HTTP response header immediately and leaves the connection open. To send a message, the server wraps it in a pair of <script></script> tags that the browser executes when it receives the closing tag. The upside is that there's no connection reopening but there is more overhead with each message. Moreover, this requires a callback in the global scope that the response calls.
Also, this cannot be used with cross-domain requests as cross-domain iFrame communication presents its own set of problems. The need to trust the server is also a challenge here.
Web Sockets:
These start as a normal HTTP connection but they don't actually follow the HTTP protocol later on. From the programming point of view, things are as simple as they can be. The API is a classic open/callback style on the client side and the server just pushes messages into an open socket. No need to reopen anything after each message.
There still needs to be an open connection, but it's not really an issue here with the browser limits out of the way. The browser knows the connection is going to be open for a while, so it doesn't need to apply the same limits as to normal requests.
These seem like the ideal solution, but there is one major issue: IE<10 doesn't know them. As long as IE8 is alive, web sockets cannot be relied upon. Also, the native Android browser and Opera mini are out as well (ref.).
Still, web sockets seem to be the way to go once IE8 (and IE9) finally dies.
What you see are hanging requests with the timeout of 25 seconds that are used to implement the live update feature. As I already said, the keep-alive message ("h") is used so that the browser doesn't think it's not going to get a response. "h" simply means "nothing happens".
Chrome supports web sockets, so Meteor could have used them with a fallback to long requests, but, frankly, hanging requests are not at all bad once you've got them implemented (sure, the browser connection limit still applies).
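The hanging-request loop described above can be sketched in a few lines. Here request() is an assumed helper that performs one hanging HTTP request and calls back with the body; "h" is the keep-alive message explained earlier.

```javascript
// Issue one hanging request; when it completes, handle the message and
// immediately issue the next one, so a request is always pending.
function longPollLoop(request, onMessage) {
  request(function handle(message) {
    if (message !== 'h') {       // 'h' = keep-alive, "nothing happened"
      onMessage(message);
    }
    request(handle);             // re-subscribe right away
  });
}
```

With a real transport, request() would be an XHR whose server side holds the response open for up to ~25 seconds, which is exactly the pending request visible in the Network tab.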

How to handle aborted synchronous AJAX

My web server can change its IP in response to a specific HTTP request.
The thing is that the browser uses a synchronous $.ajax() to post this request. Since the server IP changes, the request is aborted once it times out ("Aborted" in the Firebug Net tab). However, since the post is synchronous, the browser (FF in this case) hangs indefinitely. As far as I understand, it is not possible to time out or programmatically abort a synchronous AJAX call.
For many practical reasons, I can't change the request to be async. Any ideas how to handle this situation? Thanks
You have a couple options available to you.
Change server-side behavior
Change the web application on the server to complete its response to the client before changing its IP address. Then, your application gets a response whether or not the call succeeded.
Use async AJAX calls
Self-explanatory. You don't want to do this, but you really should, and not for just the problem you're having now. If your application really requires significant changes for this to work, then it probably has other design issues that could be revisited as well.
You're currently using a fire-and-forget method anyway, so I really don't see why this would be a problem.
