What are some good use cases for Server Sent Events - javascript

I discovered SSE (Server Sent Events) pretty late, but I can't seem to figure out the use cases where it would be more efficient than using setInterval() and Ajax.
I guess that if we had to update the data multiple times per second, then having one single connection would produce less overhead. But apart from that case, when would one really choose SSE?
I was thinking of this scenario:
A new user comment is added to the database.
The server periodically queries the DB for changes. If it finds a new comment, it sends a notification to the client with SSE.
Also, this SSE question came to mind after having to make a simple "live" website change (when someone posts a comment, notify everybody who is on the site). Is there really a way of doing this without periodically querying the database?

Nowadays web technologies are used to implement all sorts of applications, including those which need to fetch constant updates from the server.
As an example, imagine a graph in your web page which displays real-time data. Your page must refresh the graph any time there is new data to display.
Before Server Sent Events the only way to obtain new data from the server was to perform a new request every time.
Polling
As you pointed out in the question, one way to look for updates is to use setInterval() and an Ajax request. With this technique, our client performs a request once every X seconds, whether or not there is new data. This technique is known as polling.
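For reference, a minimal sketch of this technique (the /api/updates endpoint is a made-up name):

```javascript
// Ask the server for fresh data every X seconds,
// whether or not anything has changed in the meantime.
setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/updates'); // hypothetical endpoint
  xhr.onload = function () {
    if (xhr.status === 200) {
      var data = JSON.parse(xhr.responseText);
      // ... refresh the graph with `data` ...
    }
  };
  xhr.send();
}, 5000); // here X = 5 seconds
```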
Events
Server Sent Events, on the contrary, are asynchronous: the server itself notifies the client when there is new data available.
In the scenario of your example, you would implement SSE in such a way that the server sends an event immediately after adding the new comment, instead of polling the DB.
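A rough sketch of that setup (the /comments/stream endpoint and event name are illustrative, not from the question). On the client, a single EventSource replaces the repeated Ajax calls:

```javascript
// Browser: one persistent connection instead of repeated requests
var source = new EventSource('/comments/stream'); // hypothetical endpoint
source.addEventListener('comment', function (e) {
  var comment = JSON.parse(e.data);
  // ... append the new comment to the page ...
});
```

On the server (a Node.js sketch), the code path that saves the comment writes an event to every open stream, so no DB polling is involved:

```javascript
var http = require('http');
var clients = []; // open event-stream responses

http.createServer(function (req, res) {
  if (req.url === '/comments/stream') {
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive'
    });
    clients.push(res);
    req.on('close', function () {
      clients.splice(clients.indexOf(res), 1);
    });
  }
}).listen(8000);

// Call this right after inserting the comment into the database:
function broadcastComment(comment) {
  clients.forEach(function (res) {
    res.write('event: comment\n');
    res.write('data: ' + JSON.stringify(comment) + '\n\n');
  });
}
```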
Comparison
Now the question is when it is advisable to use polling vs. SSE. Aside from compatibility issues (not all browsers support SSE, although there are polyfills which essentially emulate SSE via polling), you should focus on the frequency and regularity of the updates.
If you are uncertain about the frequency of the updates (how often new data will be available), SSE may be the solution, because it avoids all the extra requests that polling would perform.
However, it is wrong to say in general that SSE produces less overhead than polling. That is because SSE requires an open TCP connection to work. This essentially means that some resources on the server (e.g. a worker and a network socket) are allocated to one client until the connection is over. With polling, instead, the connection may be reset after the request is answered.
Therefore, I would not recommend using SSE if the average number of connected clients is high, because the open connections could create significant overhead on the server.
In general, I advise using SSE only if your application requires real-time updates. As a real-life example, I developed data acquisition software in the past and had to provide a web interface for it. In this case, a lot of graphs were updated every time a new data point was collected. That was a good fit for SSE because the number of connected clients was low (essentially, only one), the user interface had to update in real time, and the server was not flooded with requests as it would have been with polling.
Many applications do not require real time updates, and thus it is perfectly acceptable to display the updates with some delay. In this case, polling with a long interval may be viable.

Related

Real-Time with Node.js: WebSocket + Server-Side Polling vs. Client-Side Polling

I'm developing an application that displays real-time data (charts, etc.) from Redis. Updated data arrives in Redis very quickly (within milliseconds), so it would make sense to show updates as often as possible (as long as the human eye can notice them).
Technology stack:
Node.js as a web server
Redis that holds the data
JavaScript/HTML (AngularJS) as a client
Right now I have client-side polling (a GET request to the Node.js server every second, which queries Redis for updates).
Is there an advantage to doing server-side polling instead, and exposing updates through WebSocket? Every WebSocket connection would require a separate Node.js poll (setInterval), though, since client queries may differ. But no more than 100 WebSocket connections are expected.
Any pros/cons between these two approaches?
If I understood your question correctly: you have fewer than 100 users who are going to use your resource simultaneously, and you want to find out which is the better way to give them updates:
clients ask for updates with a timed request (one per second)
the server keeps track of clients and, whenever there is an update, pushes it to them
I think the best solution depends on the data that you have and on how important it is for users to get it.
I would go with client-side if:
people do not care if their data is a little bit stale
there would be, on average, more than one update during each one-second interval
I do not have time to modify the code
I would go with server-side if:
it is important to have up-to-date data and users cannot tolerate lag
updates are infrequent (if, for example, there is an update only once per minute, only 1 in 60 client-side requests would be useful, while the server would issue just that one update)
One good thing is that Node.js already has an excellent library for this purpose: socket.io.
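To make that concrete, here is a hedged sketch of the server-side variant with Socket.IO: one poll loop per connection, pushing only when the value changes. The Redis key name, the port, and the one-second interval are assumptions based on the question:

```javascript
var io = require('socket.io')(3000);
var redis = require('redis').createClient();

io.on('connection', function (socket) {
  var last = null;
  // one server-side poll per connection, since client queries may differ
  var timer = setInterval(function () {
    redis.get('latest:datapoint', function (err, value) { // hypothetical key
      if (!err && value !== last) {
        last = value;
        socket.emit('update', value); // push only when the value changed
      }
    });
  }, 1000);

  socket.on('disconnect', function () {
    clearInterval(timer); // stop polling when the client goes away
  });
});
```

The upside over client-side polling is that the per-second requests never leave the server's network, and clients only receive traffic when there is something new to show.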

Advice on which technology to use for real time notifications

I have X activity sensors connected to a server that inserts data into a database every time a sensor is triggered. What I'm trying to do is create a web interface with a blueprint of the facility (SVG) and, whenever a sensor is triggered, besides the DB insert, show some sort of alert on my blueprint. For that I think I need to keep an open connection to the server.
I was thinking of using WebSockets, but that might be overkill since I only need to retrieve data from the server. But running an Ajax call every second doesn't sound very efficient either. Are there any other alternatives?
Thank you
Some potential choices include:
WebSocket
Adobe® Flash® Socket
AJAX long polling
AJAX multipart streaming
Forever Iframe
JSONP Polling
Which transport you end up using will depend on your requirements for browser support and on the technology you are using on the server to handle these requests. The transport choice may also depend on your network topology: what types of load balancers you need to integrate with, proxies, etc.
There are many libraries available on both the client and server sides, many of which support more than one of these transports.
For example (not an exhaustive list):
socket.io for nodejs
WebSocket
Adobe® Flash® Socket
AJAX long polling
AJAX multipart streaming
Forever Iframe
JSONP Polling
SignalR for an asp/.net backend
WebSockets
Server-Sent Events
ForeverFrame
Long Polling
Atmosphere for a java backend
WebSockets
Server Side Events (SSE)
Long-Polling
Forever frame
JSONP
IMO - WebSockets are NOT overkill for this type of problem and lend themselves nicely to this type of application.
Without specifically discussing frameworks or knowing what is running in the backend of your server(s), we have a few options to consider for the frontend:
Websockets
Websockets are designed for bidirectional communication, although it is kind of shocking how many users are surfing the web in a browser that doesn't support them. I always recommend a fallback, such as one of the other methods listed below.
SSE
SSE is an HTML5 spec and is still shaky at best. Try scrolling on a page while an SSE event fires... It may be a little easier on the backend, but it sometimes hangs on the client side, since it runs inside the same thread that the DOM runs in.
Long Polling
Keeps your connection open. It doesn't scale well with PHP, but performs swimmingly with Python + Twisted or Node.js on the backend.
Good Old Ajax
Keep your requests small and you still have a scalable solution. Yes, a full GET request is the most expensive option, but it is supported in just about every browser rolled out in the past ten years. It is also worth noting that GET requests are easy to scale horizontally with more hardware.
In a perfect world:
You would break up your application into a few components operating behind a reverse proxy such as Nginx, and then use Node.js + Socket.IO to handle the realtime aspects of your app.
Another option would be to use small Ajax requests and offer WebSocket support for the browsers that support it. This advice applies specifically to a PHP backend.
WebSocket is certainly not overkill. On the contrary: with WebSockets you have a bidirectional communication channel, which means the server can initiate communication whenever it sees fit (e.g. when sensor data changes).
In a previous project, I have used node.js together with socket.io, to monitor 50+ sensors. Data was updated in real-time in a browser. The data was visualized using smoothie.js.
Whenever a sensor value was updated, it was communicated to the browser. Some sensors only updated once a minute, others once a second, ...
Polling would have been overkill, because it would retrieve all data for all sensors, even those that had not been updated yet.
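To illustrate the pattern (the event name, port, and payload shape are all made up):

```javascript
// Server (Node.js + Socket.IO): broadcast right after the DB insert
// succeeds, so idle sensors generate no traffic at all.
var io = require('socket.io')(8080);

function onSensorTriggered(sensorId, value) {
  // call this from the code path that writes the reading to the DB
  io.emit('sensor', { id: sensorId, value: value });
}

// Browser: light up the matching element in the SVG blueprint.
// var socket = io('http://myserver:8080');
// socket.on('sensor', function (msg) {
//   document.getElementById('sensor-' + msg.id).classList.add('alert');
// });
```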
I had a similar problem and did a lot of research on this. As I understand it, there are three main options:
Short polling: Have an endpoint that your javascript client pings every second. This is the worst option, because the pings add up to one second of latency to your communication, and, depending on how you implement it, the endpoint could query the database every second, adding unnecessary overhead.
Long polling: Have an endpoint that your javascript client pings, which holds the connection until a) the event occurs or b) the connection times out. If the endpoint returns a response, the client gets the event information. If the endpoint does not return a response, no event has occurred, and the client sends a new request. This is a good option because events can immediately trigger the response to the client, assuming you have an asynchronous interprocess communication layer (like 0MQ) to send the message without any sort of polling (see the sketch after this list).
Websocket: Have your javascript client connect to a websocket server, which will send a message to your client immediately upon the event trigger.
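As a concrete illustration of option 2, a minimal long-polling loop on the client (the /events/next endpoint and the handleEvent handler are hypothetical):

```javascript
function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/events/next'); // hypothetical endpoint
  xhr.onload = function () {
    if (xhr.status === 200 && xhr.responseText) {
      handleEvent(JSON.parse(xhr.responseText)); // your handler (assumed)
    }
    poll(); // reconnect immediately; an empty response means "timed out"
  };
  xhr.onerror = function () {
    setTimeout(poll, 1000); // back off briefly on network errors
  };
  xhr.send();
}
poll();
```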
I think a websocket is your best option, because it accommodates immediate communication of the event without all the request/response overhead. And most importantly, this is exactly what websockets are designed to do! As such, you will probably have to write the least amount of custom code with this solution.
There are two great commercial services that might work for you.
Firebase - a javascript hierarchical database and realtime messaging/synchronization platform; uses websockets and has other fallbacks
PubNub - a real time message passing and queue system, uses websockets

Prevent recursive calls of XmlHttpRequest to server

I've been googling for hours for this issue, but did not find any solution.
I am currently working on this app, built on Meteor.
Now the scenario is: after the website is opened and all the assets have been loaded in the browser, the browser constantly makes recursive XHR calls to the server. These calls are made at a regular interval of 25 seconds.
This can be seen in the Network tab of the browser console; see the pending request in the last row of the image.
I can't figure out where they originate from, or why they are invoked automatically even when the user is idle.
Now the question is: how can I disable these automatic requests? I want to invoke the requests manually, i.e. when a menu item is selected, etc.
Any help will be appreciated.
[UPDATE]
In response to the Jan Dvorak's comment:
When I type "e" in the search box, the list of events whose names start with the letter "e" is displayed.
The request goes with all valid parameters and the Payload like this:
["{\"msg\":\"sub\",\"id\":\"8ef5e419-c422-429a-907e-38b6e669a493\",\"name\":\"event_Coll_Search_by_PromoterName\",\"params\":[\"e\"]}"]
And this is the response, which is valid.
a["{\"msg\":\"data\",\"subs\":[\"8ef5e419-c422-429a-907e-38b6e669a493\"]}"]
The code for this action is posted here
But in the case of the automatic recursive requests, the request goes without the payload and the response is just the letter "h", which is strange, isn't it? How can I get rid of this?
Meteor has a feature called
Live page updates.
Just write your templates. They automatically update when data in the database changes. No more boilerplate redraw code to write. Supports any templating language.
To support this feature, Meteor needs to do some server-client communication behind the scenes.
Traditionally, HTTP was created to fetch dead data: the client tells the server it needs something, and it gets it. There is no way for the server to tell the client it has something new. Later, a need arose to push data to the client, and several alternatives came into existence:
polling:
The client makes periodic requests to the server. The server responds with new data or says "no data" immediately. It's easy to implement and doesn't use many resources. However, it's not exactly live: it can work for a news ticker, but it's not good for a chat application.
If you increase the polling frequency, you improve the update rate, but the resource usage grows with the polling frequency, not with the data transfer rate. HTTP requests are not exactly cheap. One request per second from multiple clients at the same time could really hurt the server.
hanging requests:
The client makes a request to the server. If the server has data, it sends it; if it doesn't, it withholds the response until it does. Changes are picked up immediately, and no data is transferred when it doesn't need to be. It does have a few drawbacks, though:
If a web proxy sees that the server is silent, it eventually cuts off the connection. This means that even if there is no data to send, the server needs to send a keep-alive response anyway to make the proxies (and the web browser) happy.
Hanging requests don't use up (much) bandwidth, but they do take up memory. Nowadays' servers can handle multiple concurrent TCP connections, so it's less of an issue than it was before. What does need to be considered is the amount of memory associated with the threads holding on to these requests - especially when the connections are tied to specific threads serving them.
Browsers have hard limits on the number of concurrent requests per domain and in total. Again, this is less of a concern now than it was before. Thus, it seems like a good idea to have one hanging request per session only.
Managing hanging requests feels kinda manual as you have to make a new request after each response. A TCP handshake takes some time as well, but we can live with a 300ms (at worst) refractory period.
Chunked response:
The client creates a hidden iFrame with a source corresponding to the data stream. The server responds with an HTTP response header immediately and leaves the connection open. To send a message, the server wraps it in a pair of <script></script> tags that the browser executes when it receives the closing tag. The upside is that there's no connection reopening but there is more overhead with each message. Moreover, this requires a callback in the global scope that the response calls.
Also, this cannot be used with cross-domain requests as cross-domain iFrame communication presents its own set of problems. The need to trust the server is also a challenge here.
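A minimal sketch of the technique, assuming Node.js on the server and a global handleMessage callback in the parent page (both names are illustrative):

```javascript
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.write('<html><body>'); // headers go out now; the connection stays open
  var n = 0;
  var timer = setInterval(function () {
    // the browser executes each block as soon as the closing tag arrives
    res.write('<script>parent.handleMessage(' +
              JSON.stringify({ tick: ++n }) + ')</script>');
  }, 1000);
  req.on('close', function () { clearInterval(timer); });
}).listen(8000);

// Parent page: <iframe src="http://server:8000/" style="display:none"></iframe>
// plus a global function handleMessage(msg) { ... } to receive the data.
```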
Web Sockets:
These start as a normal HTTP connection but they don't actually follow the HTTP protocol later on. From the programming point of view, things are as simple as they can be. The API is a classic open/callback style on the client side and the server just pushes messages into an open socket. No need to reopen anything after each message.
There still needs to be an open connection, but it's not really an issue here with the browser limits out of the way. The browser knows the connection is going to be open for a while, so it doesn't need to apply the same limits as to normal requests.
These seem like the ideal solution, but there is one major issue: IE<10 doesn't support them. As long as IE8 is alive, web sockets cannot be relied upon. Also, the native Android browser and Opera Mini are out as well (ref.).
Still, web sockets seem to be the way to go once IE8 (and IE9) finally dies.
What you see are hanging requests with a timeout of 25 seconds, used to implement the live update feature. As I already said, the keep-alive message ("h") is used so that the browser doesn't think it's not going to get a response. "h" simply means "nothing happened".
Chrome supports web sockets, so Meteor could have used them with a fallback to hanging requests, but, frankly, hanging requests are not bad at all once you've got them implemented (sure, the browser connection limit still applies).

WebSockets: useful for reducing overhead?

I am building a dynamic search (updated on every keystroke): my current scheme is to send a new Ajax request to the server on each keystroke and get data back as JSON.
I considered opening a WebSocket for every search "session" in order to save some overhead.
I know that this will save time, but the question is whether it is really worth it, considering these parameters:
80ms average ping time
166ms: time between each keystroke, assuming the user types relatively fast
A worst-case transfer rate of 1 MB/s, with each data packet that has to be received on a keystroke being no more than 1 KB.
The app also takes something like 30-40ms to weld the search results to the DOM.
I found this: HTTP vs Websockets with respect to overhead, but it was a different use case.
Will WebSockets reduce anything besides the pure HTTP overhead? And how large is the HTTP overhead, assuming no cookies and minimal headers?
I guess that HTTP opens a new network socket for each request, while a WebSocket lets us use a single one the whole time. If my understanding is correct, what is the actual overhead of opening a new network socket?
It seems like WebSockets provide a better performance in situations like yours.
WebSocket
small handshake header
full duplex communication after the handshake
after the connection is established, only 2 bytes are added per transmitted message
HTTP
HTTP headers are sent along with each request
On the other hand, WebSocket is a relatively new technology, so it would be wise to investigate web browser support and potential network-related issues.
Ref:
http://websocket.org/quantum.html
http://www.youtube.com/watch?v=Z897fkPn7Rw
http://en.wikipedia.org/wiki/WebSocket#Browser_support
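For illustration, a minimal sketch of the single-connection approach (the URL, the element id, and renderResults are assumptions, not part of the question):

```javascript
var ws = new WebSocket('ws://example.com/search'); // hypothetical URL
var input = document.getElementById('search-box'); // hypothetical element

input.addEventListener('keyup', function () {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(input.value); // a few bytes of framing instead of full HTTP headers
  }
});

ws.onmessage = function (e) {
  renderResults(JSON.parse(e.data)); // your rendering function (assumed)
};
```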

Quick AJAX responses from Rails application

I have a need to send alerts to a web-based monitoring system written in RoR. The brute force solution is to frequently poll a lightweight controller with javascript. Naturally, the downside is that in order to get a tight response time on the alerts, I'd have to poll very frequently (every 5 seconds).
One idea I had was to have the AJAX-originated polling thread sleep on the server side until an alert arrived on the server. The server would then wake up the sleeping thread and get a response back to the web client that would be shown immediately. This would have allowed me to cut the polling interval down to once every 30 seconds or every minute while improving the time it took to alert the user.
One thing I didn't count on was that mongrel/rails doesn't launch a thread per web request as I had expected it to. That means that other incoming web requests block until the first thread's sleep times out.
I've tried tinkering around with calling "config.threadsafe!" in my configuration, but that doesn't seem to change the behavior to a thread per request model. Plus, it appears that running with config.threadsafe! is a risky proposition that could require a great deal more testing and rework on my existing application.
Any thoughts on the approach I took or better ways to go about getting the response times I'm looking for without the need to deluge the server with requests?
You could use Rails Metal to improve the controller performance or maybe even separate it out entirely into a Sinatra application (Sinatra can handle some serious request throughput).
Another idea is to look into a push solution using Juggernaut or similar.
One approach you could consider is to have (some or all of) your requests create deferred monitoring jobs in an external queue which would in turn periodically notify the monitoring application.
What you need is Juggernaut, a Rails plugin that allows your app to initiate a connection and push data to the client. In other words, your app can have a real-time connection to the server with the advantage of instant updates.
