Long Polling: How do I calm it down? - javascript

I'm working on a simple chat implementation in a function that has an ajax call that invokes a setTimeout to call itself on success. This runs every 30 seconds or so. This works fine, but I'd like a more immediate notification when a message has come. I'm seeing a lot of examples for long polling with jQuery code that looks something like this:
function poll()
{
    $.ajax(
    {
        data: {"foo": "bar"},
        url: "webservice.do",
        success: function(msg)
        {
            doSomething(msg);
        },
        complete: poll
    });
}
I understand how this works, but this will just keep repeatedly sending requests to the server immediately. Seems to me there needs to be some logic on the server that will hold off until something has changed, otherwise a response is immediately sent back, even if there is nothing new to report. Is this handled purely in javascript or am I missing something to be implemented server-side? If it is handled on the server, is pausing server execution really a good idea? In all of your experience, what is a better way of handling this? Is my setTimeout() method sufficient, maybe with just a smaller timeout?
I know about websockets, but as they are not widely supported yet, I'd like to stick to current-gen techniques.

Do not pause the server execution... it will drain server resources if a lot of people try to chat...
Use the client side to manage the pause time, as you did with your setTimeout, but with a lower delay.
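For illustration, a minimal sketch of that suggestion, reusing the poll() shape from the question (the 2-second pause is just an example value):

// a sketch only: re-arm the poll a couple of seconds after each response
// completes, rather than firing the next request immediately
function poll()
{
    $.ajax(
    {
        data: {"foo": "bar"},
        url: "webservice.do",
        success: function(msg)
        {
            doSomething(msg);
        },
        complete: function()
        {
            setTimeout(poll, 2000); // shorter than 30 s, but not a tight loop
        }
    });
}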

You missed the long part in "long polling". It is incumbent on the server to not return unless there's something interesting to say. See this article for more discussion.
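To illustrate what "not returning until there's something to say" can look like, here is a minimal Node.js-style sketch of the server side; the poster's webservice.do backend is unknown, so the function names, response shape and the 30-second safety timeout are purely illustrative:

// requests are parked instead of answered immediately
var pendingResponses = [];

// the long-poll endpoint: hold the response open
function handlePoll(req, res) {
    pendingResponses.push(res);

    // safety valve: answer with "no news" after 30 s so the client re-polls
    // and intermediaries don't kill the idle connection
    setTimeout(function () {
        var i = pendingResponses.indexOf(res);
        if (i !== -1) {
            pendingResponses.splice(i, 1);
            res.end(JSON.stringify({ messages: [] }));
        }
    }, 30000);
}

// called whenever a new chat message arrives: answer every parked poll
function publishMessage(msg) {
    pendingResponses.forEach(function (res) {
        res.end(JSON.stringify({ messages: [msg] }));
    });
    pendingResponses = [];
}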

You've identified the trade-off: open connections to the web server, therefore consuming HTTP connections (i.e. the response must block server side), vs frequent 'is there anything new?' requests, therefore consuming bandwidth. WebSockets may be an option if your browser base can support them (most 'modern' browsers do: http://caniuse.com/websockets).

There is no proper way to handle this on the JavaScript side through traditional Ajax polling, as you will always have a lag at one end or the other if you are looking to throttle the number of requests being made. Take a look at a Node.js-based solution, or perhaps the Ajax Push Engine (www.ape-project.org), which is PHP based.

Related

Is there any option for setInterval()?

I am reading a file repeatedly, after some delay, as follows:
setInterval(function(){
    $.getJSON("json/someFile.json", function(data){
        // Some code
    });
}, 5000);
I am reading this file repeatedly after a delay because it is being updated in another part of the code. I want to avoid using setInterval().
Is there any way by which I can know that the file has been updated, and read it only when that happens?
Firstly, setInterval is a native JavaScript method; it does not come from jQuery. Second, what you've done is called polling, meaning that you request some information periodically in order to keep it up to date. The alternative is using WebSockets. WebSockets are a two-way connection between the client and the server, which can both push and receive messages. This way, you can send a socket message to the client whenever the file is updated in the backend.
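As a rough illustration of that push model (the endpoint URL and message shape below are made up, and the server side still has to be written):

// a hedged sketch: the server pushes a notification when the file changes,
// and only then does the client re-read it
var socket = new WebSocket("ws://example.com/file-updates"); // illustrative URL

socket.onmessage = function (event) {
    var update = JSON.parse(event.data);
    if (update.file === "json/someFile.json") {
        $.getJSON("json/someFile.json", function (data) {
            // Some code: the file was just updated, re-read it once
        });
    }
};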
I'm assuming you're talking about client-side code. Then no: there is no way to "watch" a JSON file like you could with a file watcher in "regular" applications. You need either:
Interval-based checking, as you're doing now. However, as suggested in the comments by @George, you might be better off using setTimeout and only re-firing the Ajax request in specific situations (e.g. on success, perhaps not on failures). With your current approach the function runs on the interval regardless, so if a request takes longer than the interval to respond you get a build-up of requests (see the sketch after this list);
Websockets (potentially with fallback to something like long-polling), perhaps using another library for that + the server-side part of this solution;
No other way I'm afraid.
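A minimal sketch of the first option, with illustrative retry delays:

// re-arm the timer only after the current request has settled,
// so slow responses can never pile up
function checkFile() {
    $.getJSON("json/someFile.json")
        .done(function (data) {
            // Some code
            setTimeout(checkFile, 5000);   // re-check after success
        })
        .fail(function () {
            setTimeout(checkFile, 15000);  // back off a bit after a failure
        });
}
checkFile();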
As a footnote, this hasn't got much to do with jQuery. First, setInterval is not from jQuery but a regular window function, and second, the problem of "watching" a file isn't specific to how you're doing the Ajax call (you're using jQuery, but you could use another library for it too).

Race conditions during simultaneous link click and asynchronous AJAX request?

I'm currently facing a situation similar to the relatively-simple example shown below. When a user clicks on a link to a third-party domain, I need to capture certain characteristics present in the user's DOM and store that data on my server. It's critical that I capture this data for all JS-enabled users, with zero data loss.
I'm slightly concerned that my current implementation (shown below) may be problematic. What would happen if the external destination server was extremely fast (or my internal /save-outbound-link-data endpoint was extremely slow), and the user's request to visit the external link was processed before the internal AJAX request had enough time to complete? I don't think this would be a problem (because in this situation, the browser doesn't care about receiving a response from the AJAX request), but getting some confirmation from fellow developers would be much appreciated.
Also, would the answer to the question above vary if the <a> link pointed to an internal URL rather than an external one?
<script type="text/javascript">
    $(document).ready(function() {
        $('.record-outbound-click').on('click', function(event) {
            var link = $(this);
            $.post(
                '/save-outbound-link-data',
                {
                    destination: link.attr('href'),
                    category: link.data('cat')
                },
                function() {
                    // Link tracked successfully.
                }
            );
        });
    });
</script>
<a href="http://www.stackoverflow.com" class="record-outbound-click" data-cat="programming">
Visit Stack Overflow
</a>
Please note that using event.preventDefault(), along with window.location.href = var.attr('href') inside $.post's success callback, isn't a viable solution for me. Neither is sending the user to a preliminary script on my server (for instance, /outbound?cat=programming&dest=http://www.stackoverflow.com), capturing their data, and then redirecting them to their destination.
Edit 2
Also consider the handshake step (Google's docs):
Time it took to establish a connection, including TCP handshakes/retries and negotiating a SSL.
I don't think you and the server you're sending the AJAX request to can complete the handshake if your client is no longer open for connection to the server (i.e., you're already at Stackoverflow or whatever website your link navigates to.)
Edit 1
More broadly, though, I was hoping to understand from a theoretical point of view whether or not the risk I'm concerned about is a legitimate one.
That's an interesting question, and the answer may not be as obvious as it seems.
That's based on just one sample request/response in my network tab; it definitely shouldn't be taken as a trend or as representative of requests/responses in general.
I think the gap we might be most concerned with is the 1.933 ms stall time. There are also additional steps that need to happen before the actual request is sent (which itself took about 0.061 ms).
I'd be worried if there's an interruption in any of the 3 steps leading up to the actual request (which took about 35ms give or take).
I think the question is, if you go somewhere else before the "stalled", "DNS Lookup", and "Initial connection" steps happen, is the request still going to be sent? That part, I don't know. But what about any general computer or browser lag beforehand?
Like you mentioned, the idea that somehow the req/res cycle to/from Stackoverflow would be faster than what's happening on your client (i.e., the initiation itself -- not even the complete cycle -- of a network request to your server) is probably a bit ridiculous, but I think theoretically (as you mentioned, this is what you're interested in), it's probably a bad idea in general to depend on these types of race conditions.
Original answer
What about making the AJAX request synchronous?
$.ajax({
    type: "POST",
    url: url,
    async: false
});
This is generally a terrible idea, but if, in your case, the legacy code is so limiting that you have no way to modify it and this is your last option (think, zombie apocalypse), then consider it.
See jQuery: Performing synchronous AJAX requests.
The reason it's a bad idea is because it's completely blocking (in normal circumstances, you don't want potentially un-completeable requests blocking your main thread). But in your case, it looks like that's actually exactly what you want.
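For completeness, a sketch of how that might slot into the click handler from the question (again: this blocks the UI until the tracking call returns, so use it with care):

// a sketch only: the default navigation is not prevented; it simply
// cannot start until this synchronous call has returned
$('.record-outbound-click').on('click', function() {
    var link = $(this);
    $.ajax({
        type: "POST",
        url: '/save-outbound-link-data',
        async: false,
        data: {
            destination: link.attr('href'),
            category: link.data('cat')
        }
    });
});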

adding latency to a web application

I want to slow down my app by adding latency for ajax requests. I have two options: doing it in javascript and doing it server-side. With javascript, I could easily add a setTimeout on my requests but there are about 30 different requests and I'm wondering if there's a better way, with less code.
I want to slow down ajax requests server-side. What's the best way to do it? I'm using about 25 different asmx web services (will be converted to wcf soon) and I'm wondering how to make it so that all requests have 1000ms of latency.
My goal is to change as little code as possible so that I can turn this feature on/off by changing as little as possible.
Thanks for your suggestions.
In case you're wondering why: I'm running on my local machine. I'm going to do a user-testing session and I need to simulate real ajax requests. Without latency, the ajax request happens almost instantaneously.
You could add a
System.Threading.Thread.Sleep(1000)
in the OnRequestBegin handler, or wherever you can intercept the request before doing the actual work.
You can hook in a timeout in the server-side code before it responds to the Ajax request. At least then your Ajax's interaction with a latent response is authentic.
If you are using jquery for the ajax call, just go into the jquery code file and add the latency there. Would that work?
Look for this line in the main jquery file:
ajax: function( url, options ) {
Add this code right after:
var ms = 1000; // wait time in milliseconds
ms += new Date().getTime();
while (new Date() < ms) { } // busy-wait (blocks the thread) until the delay has passed
ms is the number of milliseconds to wait
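A less invasive variant of the same idea is to wrap $.ajax from your own code instead of editing the jQuery source, and to delay with setTimeout rather than a busy-wait. A rough sketch (the 1000 ms value is the latency you asked for; note that callers only get a promise back, not the full jqXHR object):

// a sketch only: every $.ajax call is delayed by DELAY_MS before it is sent
(function () {
    var originalAjax = $.ajax;
    var DELAY_MS = 1000; // artificial latency; set to 0 (or remove the wrapper) to turn off

    $.ajax = function () {
        var args = arguments;
        var deferred = $.Deferred();

        setTimeout(function () {
            originalAjax.apply($, args)
                .done(deferred.resolve)
                .fail(deferred.reject);
        }, DELAY_MS);

        return deferred.promise();
    };
})();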
I would put an HTTP proxy in the way, one which you control and can make slow.
Since I know Perl best I'd use something like http://metacpan.org/pod/HTTP::Proxy and add a filter method that did nothing but wait a second.

how to silently guarantee executing an ASP.NET MVC3 action on page unload

I need to execute an action of a controller when a user leave a page (close, refresh, go to link, etc.). The action code is like:
public ActionResult WindowUnload(int token)
{
    MyObjects[token].Dispose();
    return Content("Disposed");
}
On window unload I make an Ajax request to the action:
$(window).unload(function ()
{
    $.ajax({
        type: "POST",
        url: "#Url.Action("WindowUnload")",
        data: { token: "#ViewData["Token"]" },
        cache: false,
        async: true
    });
    //alert("Disposing.");
})
The above ajax request does not come to my controller, i.e., the action is not executed.
To make the code above work I have to uncomment the alert line, but I don't want to fire an alert on the user.
If I change async option to false (alert is commented), then it sometimes works. For example, if I refresh the page several times too fast then the action will not be executed for every unload.
Any suggestions how to execute the action on every unload without alert?
Note, I don't need to return anything from action to the page.
Updated: answers summary
It is not reliably possible to make a request on unload, since that is not proper or expected behavior on unload. So it is better to redesign the application and avoid making an HTTP request on window unload.
If it is not avoidable, then there are common solutions (described in the question):
Call ajax synchronously, i.e., async: false.
Pros: works in most cases.
Pros: silent
Cons: does not work in some cases, e.g., when a user refreshes the window several times too fast (observed in Firefox)
Use alert on success or after ajax call
Pros: seems to work in all cases.
Cons: is not silent and fires pop up alert.
According to the unload documentation, with async: false it should work as expected. However, this will always be a bit shaky - for example, the user can leave your page by killing/crashing the browser and you will not receive any callback. Also, browser implementations vary. I fear you won't get anything failproof.
HTTP is stateless and you can never get a reliable way to detect that the user has left your page.
Suggested events:
Session timeout (if you are using sessions)
The application is going down
A timer (needs to be combined with the previous suggestion)
Remove the previous token when a new page is visited.
Why does this need to happen at all?
From the code snippet you posted, you are attempting to use this to dispose of objects server-side? You are supposed to call Dispose to free up any unmanaged resources your objects are using (such as database connections).
This should be done during the processing of each request. There shouldn't be any unmanaged resources awaiting a dispose when the client closes the browser window.
If you are attempting this in the manner noted above, the code needs to be reworked.
Have you tried onbeforeunload()?
$(window).bind('beforeunload', function()
{
    alert('unloading!');
});
or
window.onbeforeunload = function() {
    alert('unloading!');
}
From the comment you made to #Frazzell's answer it sounds like you are trying to manage concurrency. So, on the chance that this is the case, here are two common methods for managing it.
Optimistic concurrency
Optimistic concurrency adds a timestamp to the table. When the client edits the record the timestamp is included in the form. When they post their update the timestamp is also sent and the value is checked to make sure it is the most recent in the table. If it is, the update succeeds. If it is not then someone else got in sooner with an update so it is discarded. How you handle this is then up to you.
Pessimistic concurrency
If you often experience concurrency clashes then pessimistic concurrency may be better. Here, when the client edits the record, a flag is set on that row to lock it. This remains until the client completes the edit, and no other user can edit that row. This method avoids users losing changes but adds an administration overhead to the application. Now you need a way to release unwanted locks. You also have to inform the user through the UI that a row is locked for edit.
In my experience it is best to start with optimistic concurrency. If I have lots of people reporting problems I will try to find out why people are having these conflicts. It may be that I have to break down some entities into smaller types, as they have become responsible for doing too many jobs.
This won't work, and even if you somehow manage to make it work it will give you lots of headaches later on, because this is not how the browser/HTTP is supposed to be used. When the page is unloading, the browser fires the unload event and then unloads the page; you cannot make it wait, not even by making sync Ajax calls. If the call is still in progress when the browser unloads the page, the call gets cancelled, which is why you sometimes see it reach the server and sometimes don't. If you could tell us why you want to do this, we could suggest a better approach.
You can't. The only thing you can do is prompt the user to stay and hope for the best. There are a whole host of security concerns here.

How to perform Ajax requests, a few at a time

I am not really sure it is possible in JavaScript, so I thought I'd ask. :)
Say we have 100 requests to be done and want to speed things up.
What I was thinking of doing is:
Create a loop that will launch the first 5 ajax calls
Wait until they all return (success - call a function to update the dom / error) - not sure how, maybe with a global counter?
Repeat until all requests are done.
Considering browser JavaScript does not support threads, can we "exploit" the async functionality to do that?
Do you think it would work, or there are inherent problems doing that in JavaScript?
Yes, I have done something similar to this before (a sketch follows the list). The basic process is:
Create a stack to store your jobs (requests, in this case).
Start out by executing 3 or 4 of the requests.
In the callback of the request, pop the next job out of the stack and execute it (giving it the same callback).
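A rough sketch of that process (the job list, MAX_PARALLEL and updateDom below are illustrative, not part of any particular library):

// start a few "workers"; each one pulls the next job off the stack
// as soon as its current request settles
var jobs = [/* ... the 100 request URLs ... */];
var MAX_PARALLEL = 5;

function runNext() {
    if (jobs.length === 0) {
        return; // stack is empty, nothing left to start
    }
    var url = jobs.pop();
    $.getJSON(url)
        .done(function (data) {
            updateDom(data); // hypothetical: whatever "success" means for your app
        })
        .always(runNext);    // on success or error, take the next job
}

for (var i = 0; i < MAX_PARALLEL; i++) {
    runNext();
}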
I'd say, the comment from Dancrumb is the "answer" to this question, but anyway...
Current browsers do limit parallel HTTP requests, so you can easily just start all 100 requests immediately, and the browser will take care of sending them as fast as possible, limited to a decent number of parallel requests.
So, just start them all immediately and trust on the browser.
However, this may change in the future (the number of parallel requests that a browser sends increases as end-user internet bandwidth increases and technology advances).
EDIT: You should also think and read about the meaning of "asynchronous" in a JavaScript context. Asynchronous here just means that you give up control of something to some other part of the system. So "sending" an async request just means that you tell the browser to do so; you do not control the browser, you just tell it to send that request and to notify you about the outcome.
It's actually slower to break up 100 requests and post them in batches of 5, waiting for each batch to complete before sending the next. You might be better off simply sending all 100 requests; remember, JavaScript is single-threaded, so it can only resolve one response at a time anyway.
A better way is set up a batch request service that accepts something like:
/ajax_batch?req1=/some/request.json&req2=/other/request.json
And so on. Basically you send multiple requests in a single HTTP request. The response of such a request would look like:
[
    {"reqName":"req1","data":{}},
    {"reqName":"req2","data":{}}
]
Your ajax_batch service would resolve each request and send back the results in proper order. Client side, you keep track of what you sent and what you expect, so you can match up the results to the correct requests. Downside, it takes quite some coding.
The speed gain would come entirely from a massive reduction of HTTP requests.
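A client-side sketch of the batching idea (the endpoint and response shape are the illustrative ones from above):

// build "/ajax_batch?req1=...&req2=..." from the individual requests
var urls = ["/some/request.json", "/other/request.json"];

var query = urls.map(function (url, i) {
    return "req" + (i + 1) + "=" + encodeURIComponent(url);
}).join("&");

$.getJSON("/ajax_batch?" + query, function (results) {
    // results arrive as [{ "reqName": "req1", "data": {...} }, ...]
    results.forEach(function (result) {
        // match result.reqName back to the request you issued and handle its data
    });
});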
There's a limit on how many requests you can batch this way, because the URL length has a limit, IIRC.
DWR does exactly that, AFAIK.
