HTTP Long Polling - Timeout best practice - javascript

I'm playing with JavaScript AJAX and long polling, trying to find the best value for the server response timeout.
I've read many docs but couldn't find a detailed explanation of the timeout. Some people choose 20 seconds, others 30 seconds...
I use logic like in the diagram.
How can I choose a better value for the timeout? Can I use 5 minutes? Is that normal practice?
PS: Possible AJAX client internet connections: Ethernet (RJ-45), WiFi, 3G, 4G; also behind NAT or a proxy.
I worry that the connection could be dropped by a third party in some cases with a long timeout.
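For reference, the loop looks roughly like the sketch below (the /poll endpoint and the handleUpdate callback are placeholders, and TIMEOUT_MS is the value I'm asking about):

    // Minimal long-poll loop (sketch only).
    const TIMEOUT_MS = 30000; // the value in question: 20 s? 30 s? 5 min?

    async function longPoll() {
      while (true) {
        try {
          const response = await fetch('/poll', {
            // Give up client-side if the server holds the request too long.
            signal: AbortSignal.timeout(TIMEOUT_MS),
          });
          if (response.ok) {
            handleUpdate(await response.json()); // application-specific
          }
        } catch (err) {
          // Timeout or network error: wait briefly, then poll again.
          await new Promise((resolve) => setTimeout(resolve, 1000));
        }
      }
    }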

Maybe it's your grasp of English which is the problem, but it's the lifetime of the connection (the time between the connection opening and closing) you need to worry about more than the timeout (the length of time with no activity after which the connection will be terminated).
Despite the existence of websockets, there is still a lot of deployed hardware which will drop connections regardless of activity (and some which will look for inactivity) where it thinks the traffic is HTTP or HTTPS - sometimes as a design fault, sometimes as a home-grown mitigation of Slowloris attacks. That you have 3G and 4G clients means you can probably expect problems with a 5-minute lifespan.
Unfortunately there's no magic solution to knowing what will work universally. The key thing is to know how widely distributed your users are. If they're all on your LAN and connecting directly to the server, then you should be able to use a relatively large value; however, setting the duration to unlimited will reveal any memory leaks in your app - sometimes it's better to refresh every now and again anyway.
Taking the case where there is infrastructure other than hubs and switches between your server and the clients, you need to provide a mechanism for detecting and re-establishing a dropped connection regardless of the length of time (see the sketch after this list). Once you have worked out how to do this, then:
1. dropped connections are only a minor performance glitch and do not have a significant effect on the functionality;
2. it's trivial to then add the capability to log dropped connections and thereby determine the optimal connection time, eliminating the small problem described in (1).
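As an illustration only (none of this is from the original answer), the detect-and-reconnect logic might look like the sketch below; the endpoint, the 30-second timeout, the backoff values, and the handleEvent callback are all placeholders. Logging how long each poll survived is what later lets you pick the optimal connection time:

    // Sketch: reconnect with exponential backoff and log dropped connections.
    async function pollWithReconnect(url) {
      let backoff = 1000; // start with a 1 s retry delay
      while (true) {
        const startedAt = Date.now();
        try {
          const response = await fetch(url, { signal: AbortSignal.timeout(30000) });
          handleEvent(await response.json()); // application-specific
          backoff = 1000;                     // a successful poll resets the backoff
        } catch (err) {
          // Dropped or timed out: record how long the connection lasted,
          // then retry with exponential backoff so a flaky network isn't hammered.
          console.log('poll dropped after', Date.now() - startedAt, 'ms', err);
          await new Promise((resolve) => setTimeout(resolve, backoff));
          backoff = Math.min(backoff * 2, 30000);
        }
      }
    }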

Your English is fine.
TL;DR - 5-30s depending on user experience.
I suggest long poll timeouts be 100x the server's "request" time. This makes a strong argument for 5-20s timeouts, depending on your urgency to detect dropped connections and disappeared clients.
Here's why:
Most examples use 20-30 seconds.
Most routers will silently drop connections that stay open too long.
Clients may "disappear" for reasons like network errors or going into a low-power state.
Servers cannot detect a dropped connection until they try to write to it. This makes 5-minute timeouts a bad practice, as they will tie up sockets and resources. It would also make an easy DoS attack on your server.
So, < 30 seconds would be "normal". How should you choose?
What is the cost-benefit of making the long-poll connections?
Let's say a regular request takes 100ms of server "request" time to open/close the connection, run a database query, and compute/send a response.
A 10-second timeout is 10,000 ms, so your request time is 1% of the long-polling time: 100 / 10,000 = 0.01 = 1%.
A 20-second timeout: 100 / 20,000 = 0.5%.
A 30-second timeout: 100 / 30,000 = 0.33%, etc.
Beyond 30 seconds, the practical benefit of a longer timeout will always be less than a 0.33% performance improvement, so there is little reason to go past 30 s.
Conclusion
I suggest long poll timeouts be 100x the server's "request" time. This makes a strong argument for 5-20s timeouts, depending on your urgency to detect dropped connections and disappeared clients.
Best practice: Configure your client and server to abandon requests at the same timeout. Give the client extra network ping time for safety. E.g. server = 100x request time, client = 102x request time.
Best practice: Long polling is superior to websockets for many/most use cases because of the lack of complexity, more scalable architecture, and HTTP's well-known security attack surface area.
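To illustrate the first best practice, a rough sketch of coordinating the two timeouts follows; Express on the server is an assumption, onNextEvent is a placeholder for your event source, and the 100 ms request time, 10 s server timeout, and 10.2 s client timeout are example numbers only:

    // Server side (Express assumed as an example framework).
    const express = require('express');
    const app = express();

    const SERVER_TIMEOUT_MS = 10000;   // ~100x a 100 ms request time

    app.get('/poll', (req, res) => {
      // Respond with "no events" when the server-side timeout expires,
      // so the client can immediately issue its next poll.
      const timer = setTimeout(() => res.json({ events: [] }), SERVER_TIMEOUT_MS);
      onNextEvent((event) => {         // placeholder for your event source
        clearTimeout(timer);
        res.json({ events: [event] });
      });
    });
    app.listen(3000);

    // Client side: abandon the request slightly later than the server does,
    // so a normal "empty" response is never mistaken for a network failure.
    const CLIENT_TIMEOUT_MS = 10200;   // ~102x the request time
    async function pollOnce() {
      return fetch('/poll', { signal: AbortSignal.timeout(CLIENT_TIMEOUT_MS) });
    }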

Related

Fetching data in intervals vs SSE / WebSockets?

I wanted to implement SSE for notifications, so the user can get new notifications.
What is the downside of fetching that data at intervals in my frontend? Can't I just do that? Why is it common to use SSE or WebSockets instead of fetching data periodically?
Thanks
Polling is always a tradeoff between how often you poll (and thus how much resource you waste - in this case client and server CPU, client battery, and network traffic) vs. the latency of noticing a notification.
I'd be annoyed if every one of the dozen tabs I have open was wasting CPU time and network traffic to contact some server every few seconds or minutes. Minutes wouldn't be totally terrible, but it would mean the worst-case latency for a notification to pop up would be minutes after the thing happened, unlike on Stack Overflow, where comment notifications appear within a second or so.
From the server's PoV, it would potentially pile up a lot of traffic if everyone with the page open was making an HTTP request every few seconds, not just users that were actively clicking on things that needed server interaction.
Polling every second would be bad enough when waiting for something locally; it would be ridiculous in a case like yours involving network traffic. This is why we have event-driven methods of doing things all the way from low-level I/O (DMA + interrupts) to user-space socket APIs (select/poll) to GUI applications to network stuff.
Polling is the easy but ugly solution. Either amateur-hour or done as a fallback if no better API is available. IDK if it would make sense to do it while prototyping before implementing the proper way in this case, or as a fallback, but it's not a good plan A.
https://en.wikipedia.org/wiki/Polling_(computer_science)
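For contrast, here is a minimal server-sent events sketch of the event-driven approach the answer favours: the browser keeps one connection open and the server pushes only when something actually happens. The /notifications endpoint, onNewNotification, and showNotification are placeholders invented for illustration.

    // Client: EventSource reconnects automatically and receives pushed events.
    const source = new EventSource('/notifications');
    source.onmessage = (event) => {
      showNotification(JSON.parse(event.data)); // application-specific
    };

    // Server (plain Node): push a message only when an event occurs.
    const http = require('http');
    http.createServer((req, res) => {
      res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
      });
      // Placeholder hook; in practice this fires when a new notification is created.
      onNewNotification((n) => res.write(`data: ${JSON.stringify(n)}\n\n`));
    }).listen(3000);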

LibGDX GWT - WebSocket Causing Temporary FPS Spikes

BACKGROUND:
I'm implementing a client that communicates with a server using a WebSocket. The core of the client application is built on top of LibGDX and it is deployed solely in browser form using GWT (HTML5/JavaScript). This is for a side project - a fast-paced (high network traffic), online, browser-based game.
Without providing any source - essentially the client code is an empty LibGDX application that provides a correct implementation of gwt-websockets (a commonly recommended GWT wrapper for JavaScript websockets).
SYMPTOMS OF THE PROBLEM:
(pre-note: This appears to be a purely client-side issue) I start the application and connect to the server. On 1000 ms intervals the client application sends a packet to the server and receives a packet back - no problems. There are also no problems when I send and/or receive two, three, or even five packets per second.
Speeding that process up, however - sending more than roughly 10 packets per second (100 ms intervals) from the client to the server, OR receiving more than 10 packets per second, OR both sending and receiving 10 packets per second - causes the client to slowly drop in FPS while the packets are pouring in, out, or both. The more network communication there is, the lower the FPS floor becomes (it slowly drains from 60...55...50...45...all the way down to 1 if you keep sending packets). Meanwhile, the server is completely healthy.
Here's the weird thing which makes me suspect there is some sort of buffer overflow on the client - after a packet has been "handled" (note: my Websocket.onHandle() method is empty), the FPS jumps back up to ~60 as if nothing ever happened. From this point, if a packet is sent or received, the FPS drops right back down to the floor value (except a tad bit worse each time this occurs).
FURTHER DEBUGGING:
I've looked into the possibility of memory leaks on my end, but after going through a 15 hour debugging session I doubt my code is at fault here (not to mention it is essentially empty). My network class that communicates via Websocket contains literally a bare-bones implementation and the symptoms only occur upon network activity.
I've read about potential garbage collection causing undesirable performance hits in a GWT deployment. I don't know much about this other than I cannot rule it out.
I doubt this matters but my server uses Java-WebSocket to communicate with the client's gwt-websocket.
MY BEST GUESS:
I'm really stumped here. My leading suspicion is that there exists some sort of bug in gwt-websockets with buffers and the handling of frequent network traffic or possibly there is a memory leak.

How can I sync two JavaScript timers via WebSockets

I have a site with a JS visual metronome (like a conductor) that can be triggered via a WebSocket on multiple machines.
How can I make the start of the metronomes as synchronized as possible?
Thanks.
What I would do is measure latency, and compensate accordingly.
For each connection to the server, the server should regularly send ping messages (not a real ICMP ping... just some message over your established websocket channel), and measure the amount of time it takes to get a response back from the client. Divide this latency by 2, and you have a decent guess as to the time it takes for your messages to get from the server to the client.
Once you know this, you can estimate the timing difference between the two clients, and adjust accordingly.
Note that this isn't perfect by any means. I would maintain your metronome clock entirely client-side, and only use the communication with the server to adjust. This will give smoother performance for the user.
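A rough sketch of that ping/latency measurement over an existing WebSocket connection might look like this (the message format is invented, and the server side assumes the Node "ws" library):

    // Server side: ping each client periodically and keep half the round trip
    // as an estimate of one-way latency.
    function startLatencyProbe(ws) {
      setInterval(() => {
        ws.send(JSON.stringify({ type: 'ping', sentAt: Date.now() }));
      }, 5000);
      ws.on('message', (raw) => {
        const msg = JSON.parse(raw);
        if (msg.type === 'pong') {
          ws.latencyMs = (Date.now() - msg.sentAt) / 2; // one-way estimate
        }
      });
    }

    // Client side: echo pings straight back so the server can measure.
    socket.onmessage = (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === 'ping') {
        socket.send(JSON.stringify({ type: 'pong', sentAt: msg.sentAt }));
      }
    };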
This is a clock recovery problem.
The problem you will face is that wall time is likely to be different on each system and, furthermore, will tend to drift apart over the longer term. In addition to this, the audio word clock is not driven from wall time either and will have a drift of its own. This means you have no chance of achieving sample or phase accuracy. Both IEEE 1394 audio streaming and the MPEG transport stream layers pull this trick off by sending a bit-accurate timestamp embedded in the data stream. You clearly don't get this luxury with the combination of Ethernet, a network stack, the Nagle algorithm, and any transmit queue in WebSockets.
Accuracy in the range of tens of milliseconds to 100 ms might be more realistic.
You might like to look at the techniques the Network Time Protocol uses to solve an essentially similar problem.
Get the time from all the clients (UTC). Find each client's offset. Schedule a start time according to each client's offset.
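As a sketch of that last suggestion (the message protocol, the clients map, and startMetronome are invented for illustration; offsetMs is assumed to mean clientClock minus serverClock):

    // Server: broadcast a shared start time, corrected per client.
    // clients is a Map of WebSocket -> offsetMs (clientClock - serverClock).
    function scheduleStart(clients, delayMs = 2000) {
      const serverStart = Date.now() + delayMs; // start 2 s from now, server clock
      for (const [ws, offsetMs] of clients) {
        // Express the start time in each client's own clock.
        ws.send(JSON.stringify({ type: 'start', at: serverStart + offsetMs }));
      }
    }

    // Client: fire the metronome when the local clock reaches "at".
    socket.onmessage = (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === 'start') {
        setTimeout(startMetronome, msg.at - Date.now()); // startMetronome is app-specific
      }
    };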

Javascript timers & Ajax polling/scheduling

I've been looking for a simpler way than Comet or Long-Polling to push some very basic ajax updates to the browser.
In my research, I've seen that people do in fact use Javascript timers to send Ajax calls at set intervals. Is this a bad approach? It almost seems too easy. Also consider that the updates I'll be sending are not critical data, but they will be monitoring a process that may run for several hours.
As an example - Is it reliable to use this design to send an ajax call every 10 seconds for 3 hours?
Thanks, Brian
Generally, using timers to update content on a page via Ajax is at least as robust as relying on a long-lived stream connection like Comet. Firewalls, short DHCP leases, etc., can all interrupt a persistent connection, but polling will re-establish a client connection on each request.
The trade-off is that polling often requires more resources on the server. Even a handful of clients polling for updates every 10 seconds can put a lot more load on your server than normal interactive users, who are more likely to load new pages only every few minutes, and will spend less time doing so before moving to another site. As one data point, a simple Sinatra/Ajax toy application I wrote last year had 3-5 unique visitors per day to the normal "text" pages, but its Ajax callback URL quickly became the most-requested portion of any site on the server, including several sites with an order of magnitude (or more) higher traffic.
One way to minimize load due to polling is to separate the Ajax callback server code from the general site code, if at all possible, and run it in its own application server process. That "service middleware" service can handle polling callbacks, rather than giving up a server thread/Apache listener/etc. for what effectively amounts to a question of "are we there yet?"
Of course, if you only expect to have a small number (say, under 10) users using the poll service at a time, go ahead and start out running it in the same server process.
One thing that might be useful here: polling at an unchanging interval is simple, but it is often unnecessary or undesirable.
One method that I've been experimenting with lately is having positive and negative feedback on the poll. Essentially, an update is either active (changes happened) or passive (no newer changes were available, so none were needed). Updates that are passive increase the polling interval. Updates that are active set the polling interval back to the baseline value.
So for example, on this chat that I'm working on, different users post messages. The polling interval starts off at the high value of 5 seconds. If other site users are chatting, you get updated every 5 secs about it. If activity slows down, and no-one is chatting since the latest message was displayed, the polling interval gets slower and slower by about a second each time, eventually capping at once every 3 minutes. If, an hour later, someone sends a chat message again, the polling interval suddenly drops back to 5 second updates and starts slowing.
High activity -> frequent polling. Low activity -> eventually very infrequent polling.
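A minimal sketch of that adaptive-interval idea, using roughly the numbers from the chat example above (the /updates endpoint and the render callback are placeholders):

    // Adaptive polling: active responses reset the interval to the baseline,
    // passive ("nothing new") responses stretch it toward the cap.
    const BASELINE_MS = 5000;   // 5 s baseline, as in the chat example
    const STEP_MS = 1000;       // slow down by about a second per idle poll
    const CAP_MS = 180000;      // cap at once every 3 minutes
    let intervalMs = BASELINE_MS;

    async function pollLoop() {
      try {
        const res = await fetch('/updates');
        const data = await res.json();
        if (data.changed) {
          render(data);                                          // application-specific
          intervalMs = BASELINE_MS;                              // activity: speed back up
        } else {
          intervalMs = Math.min(intervalMs + STEP_MS, CAP_MS);   // idle: slow down
        }
      } catch (err) {
        intervalMs = Math.min(intervalMs + STEP_MS, CAP_MS);     // errors also back off
      }
      setTimeout(pollLoop, intervalMs);
    }
    pollLoop();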

Advantage of COMET over long request polling?

I've been wondering if there is a real advantage to using COMET / push-technologies over the much simpler polling with long requests where the server will wait a certain maximum time for new events to happen before telling the clients that nothing happened.
Both technologies have similar client latencies and while common wisdom is that long requests are worse because they need to establish a new connection, there's also the fact that there is HTTP keep-alive -- so in the end, both seem to produce a very similar amount of traffic / load.
So is there some clear advantage to using COMET?
AFAIK polling with long requests pretty much IS comet. Polling with short requests is not.
Some advantages I can think of:
Makes client programming easier.
Minimum latency between the real event and the notification reaching the client. With polling this has a mean time of [POLL TIME]/2 and a worst case of [POLL TIME].
Can minimize the resources needed in the server. See this article for example. New server technologies need to be used for this to happen.
