Quick AJAX responses from Rails application - javascript

I have a need to send alerts to a web-based monitoring system written in RoR. The brute force solution is to frequently poll a lightweight controller with javascript. Naturally, the downside is that in order to get a tight response time on the alerts, I'd have to poll very frequently (every 5 seconds).
One idea I had was to have the AJAX-originated polling thread sleep on the server side until an alert arrived on the server. The server would then wake up the sleeping thread and get a response back to the web client that would be shown immediately. This would have allowed me to cut the polling interval down to once every 30 seconds or every minute while improving the time it took to alert the user.
One thing I didn't count on was that mongrel/rails doesn't launch a thread per web request as I had expected it to. That means that other incoming web requests block until the first thread's sleep times out.
I've tried tinkering around with calling "config.threadsafe!" in my configuration, but that doesn't seem to change the behavior to a thread per request model. Plus, it appears that running with config.threadsafe! is a risky proposition that could require a great deal more testing and rework on my existing application.
Any thoughts on the approach I took or better ways to go about getting the response times I'm looking for without the need to deluge the server with requests?

You could use Rails Metal to improve the controller performance or maybe even separate it out entirely into a Sinatra application (Sinatra can handle some serious request throughput).
Another idea is to look into a push solution using Juggernaut or similar.

One approach you could consider is to have (some or all of) your requests create deferred monitoring jobs in an external queue which would in turn periodically notify the monitoring application.

What you need is Juggernaut, a Rails plugin that allows your app to hold a connection open and push data to the client. In other words, your app can have a real-time connection to the server, with the advantage of instant updates.

Related

What are some good use cases for Server Sent Events

I discovered SSE (Server-Sent Events) pretty late, but I can't seem to figure out use cases where it would be more efficient than using setInterval() and AJAX.
I guess that if we had to update the data multiple times per second, then keeping one single connection open would produce less overhead. But aside from that case, when would one really choose SSE?
I was thinking of this scenario:
A new user comment from the website is added to the database
The server periodically queries the DB for changes. If it finds a new comment, it sends a notification to the client with SSE
Also, this SSE question came to mind after having to implement a simple "live" website change (when someone posts a comment, notify everybody who is on the site). Is there really any way of doing this without periodically querying the database?
Nowadays web technologies are used to implement all sorts of applications, including those which need to fetch constant updates from the server.
As an example, imagine having a graph in your web page which displays real-time data. Your page must refresh the graph any time there is new data to display.
Before Server Sent Events the only way to obtain new data from the server was to perform a new request every time.
Polling
As you pointed out in the question, one way to look for updates is to use setInterval() and an AJAX request. With this technique, our client will perform a request once every X seconds, whether there is new data or not. This technique is known as polling.
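For illustration, a minimal client-side polling sketch; the /comments/latest endpoint, the 5-second interval, and the renderComments helper are assumptions, not part of the question:

```javascript
// Poll the server every 5 seconds, whether or not new data exists.
// The endpoint name, interval, and renderComments() are placeholders.
setInterval(async () => {
  try {
    const response = await fetch('/comments/latest');
    const comments = await response.json();
    renderComments(comments); // assumed rendering helper
  } catch (err) {
    console.error('Polling request failed', err);
  }
}, 5000);
```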
Events
Server-Sent Events, on the contrary, are asynchronous. The server itself will notify the client when there is new data available.
In the scenario of your example, you would implement SSE in such a way that the server sends an event immediately after adding the new comment, rather than by polling the DB.
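A rough sketch of that setup, assuming a Node.js/Express server (the /events route and the commentBus emitter are hypothetical names, not part of the answer):

```javascript
// Minimal SSE sketch, assuming Node.js with Express.
const express = require('express');
const { EventEmitter } = require('events');

const app = express();
const commentBus = new EventEmitter(); // hypothetical in-process event bus
commentBus.setMaxListeners(0);         // many clients may be listening

// Wherever a comment is saved, emit it instead of waiting for anyone to poll:
// commentBus.emit('new-comment', { author, text });

app.get('/events', (req, res) => {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  res.flushHeaders();
  res.write(': connected\n\n'); // SSE comment to open the stream

  const onComment = (comment) =>
    res.write(`event: comment\ndata: ${JSON.stringify(comment)}\n\n`);

  commentBus.on('new-comment', onComment);
  req.on('close', () => commentBus.off('new-comment', onComment));
});

app.listen(3000);
```

On the browser side, a single open connection replaces the timer:

```javascript
// Browser side: one persistent connection, no setInterval.
const source = new EventSource('/events');
source.addEventListener('comment', (event) => {
  const comment = JSON.parse(event.data);
  console.log('New comment:', comment);
});
```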
Comparison
Now the question may be when it is advisable to use polling vs SSE. Aside from compatibility issues (not all browsers support SSE, although there are some polyfills which essentially emulate SSE via polling), you should focus on the frequency and regularity of the updates.
If you are uncertain about the frequency of the updates (how often new data will be available), SSE may be the solution because it avoids all the extra requests that polling would perform.
However, it is wrong to say in general that SSE produces less overhead than polling. That is because SSE requires an open TCP connection to work. This essentially means that some resources on the server (e.g. a worker and a network socket) are allocated to one client until the connection is closed. With polling, by contrast, the connection can be closed as soon as the request is answered.
Therefore, I would not recommend using SSE if the average number of connected clients is high, because this could create some overhead on the server.
In general, I advise using SSE only if your application requires real-time updates. As a real-life example, I developed data acquisition software in the past and had to provide a web interface for it. In this case, a lot of graphs were updated every time a new data point was collected. That was a good fit for SSE because the number of connected clients was low (essentially, only one), the user interface had to update in real time, and the server was not flooded with requests as it would have been with polling.
Many applications do not require real-time updates, and thus it is perfectly acceptable to display the updates with some delay. In this case, polling with a long interval may be viable.

Real-Time with Node.js: WebSocket + Server-Side Polling vs. Client-Side Polling

I'm developing an application that displays real-time data (charts, etc.) from Redis. Updated data arrives in Redis very quickly (within milliseconds), so it would make sense to show updates as often as possible (as long as the human eye can notice them).
Technology stack:
Node.js as a web server
Redis that holds the data
JavaScript/HTML (AngularJS) as a client
Right now I have client-side polling (a GET request to the Node.js server every second, which queries Redis for updates).
Is there an advantage to doing server-side polling instead and exposing updates through a WebSocket? Every WebSocket connection would require a separate Node.js poll (setInterval), though, since client queries may differ. But I don't expect more than 100 WebSocket connections.
Any pros/cons between these two approaches?
If I understood your question correctly: you have fewer than 100 users who are going to use your resource simultaneously, and you want to find out which is the better way to give them updates:
clients ask for updates with a timed request (one per second)
the server keeps track of clients and, whenever there is an update, pushes it to them.
I think the best solution depends on the data that you have and how important it is for users to get this data.
I would go with client-side if:
people do not care if their data is a little bit stale
there would typically be more than one update per second
I do not have time to modify the code
I would go with server-side if:
it is important to have up-to-date data and users cannot tolerate lag
updates are infrequent (if, for example, there is an update only once per minute, only 1 in 60 client-side requests would be useful, whereas the server would push just that one update)
One good thing is that Node.js already has an excellent socket.io library for this purpose.
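A rough sketch of that server-side approach, assuming socket.io and the node-redis client; the 'chart:data' key, the 1-second interval, and the event name are placeholders, not part of the question:

```javascript
// Server-side polling of Redis, pushed out over WebSockets with socket.io.
const http = require('http');
const { Server } = require('socket.io');
const { createClient } = require('redis');

async function main() {
  const redis = createClient();
  await redis.connect();

  const httpServer = http.createServer();
  const io = new Server(httpServer);

  let lastPayload = null;

  // One poll loop on the server, shared by every connected client.
  setInterval(async () => {
    const payload = await redis.get('chart:data'); // placeholder key
    if (payload !== lastPayload) {
      lastPayload = payload;
      io.emit('chart-update', payload); // push only when something changed
    }
  }, 1000);

  io.on('connection', (socket) => {
    // Send the latest snapshot immediately so a new client isn't blank.
    if (lastPayload !== null) socket.emit('chart-update', lastPayload);
  });

  httpServer.listen(3000);
}

main().catch(console.error);
```

Note that when every client watches the same data, a single shared poll loop is enough, so you do not necessarily need one setInterval per WebSocket connection.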

How to deal with a big set of pending requests

I want to implement a web site that will display to the user a notification about some event that happened on the server. My plan is:
to make an asynchronous request to the server (ASP.NET) which will have a 600-second timeout
if the event occurs on the server within those 600 seconds, the server will respond with the event details
if the event does not occur, the server will send a 'no event' response at the end of the 600 seconds
upon receiving a response from the server, the JS will process it and send the next request (a client-side sketch of this loop is shown below).
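A minimal client-side sketch of that loop, assuming a hypothetical /notifications/wait endpoint and showNotification helper (neither is part of the question):

```javascript
// Client-side long-poll loop: ask, wait up to ~600 s, process, ask again.
async function waitForEvents() {
  while (true) {
    try {
      const controller = new AbortController();
      // Give the client a slightly longer timeout than the server's 600 s.
      const timer = setTimeout(() => controller.abort(), 620 * 1000);
      const response = await fetch('/notifications/wait', {
        signal: controller.signal,
      });
      clearTimeout(timer);
      const result = await response.json();
      if (result.event) {
        showNotification(result.event); // assumed UI helper
      }
      // 'no event' responses simply fall through and we ask again.
    } catch (err) {
      // Network error or abort: back off briefly before retrying.
      await new Promise((resolve) => setTimeout(resolve, 5000));
    }
  }
}

waitForEvents();
```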
The problem with this approach is that, for a large number of visitors, the web site will have a lot of 'pending' requests.
Questions:
Should I consider that a problem? What is the solution? Should I perhaps use another approach?
Please advise; any feedback is welcome.
I don't know specifics about asp.net's handling of pending requests, but what you are describing is basically long-polling. It's tricky for a number of reasons, including but not limited to:
each pending request consumes a thread, and you'll need to store state on each of those threads
if you have enough connections (not necessarily all that many; see above), you'll need them to span multiple machines, and you then need to come up with an architecture to distribute endpoints across those machines, and make sure each incoming request goes to the right machine. If you're only broadcasting the same data to all your users, this becomes much easier.
proxies or ISPs or what-have-you may shut down your long-poll request. You'll need an architecture resilient to that.
Here's a question about long-polling in asp.net: How to do long-polling AJAX requests in ASP.NET MVC? It's probably a good place to start.
Also you could consider a 3rd-party service like pusher to handle these connections for you, or (disclaimer: I work on App Engine) App Engine's Channel API.
Surely you could make more frequent requests to the server that do not consume server resources for 10 whole minutes?
e.g. send an AJAX request every 60 seconds or so, and return whether or not any event has occurred. The downside is that it could take up to a minute for a user to see a notification about an event, so if you need it more or less immediately, that is a problem.
If it does have to be immediate, it seems like looking into "long polling" with something like node.js might be a solution, though non-trivial to implement.

Javascript 1 second apart ajax requests? Resource usage?

I have a long-term goal of eventually creating a chat of some sort by any means, but for now I'd just like to have a simple one with some MySQL and AJAX calls.
To make the chat seem instant, I'd like the AJAX request interval to be as short as possible. I get the feeling that if it's a second or less, it's going to bog down the browser, the user's connection, or my server.
Assuming the server doesn't return anything, how much bandwidth and CPU/memory would the client use with constant AJAX calls spaced one second apart?
edit: I'm still open to suggestions on how I can do a chat server. Anything that's possible with free hosting from x10 or 000webhost. I've been told of Henoku but I have no clue how to use it.
edit: Thanks for the long polling suggestion, but that uses too much cpu on the servers.
One technique that can be used is to use a long-running ajax request. The client asks if there's any chat data. The server receives the request. If there's chat data available, it returns that data immediately. If there is no chat data, it hangs onto the request for some period of time (perhaps two minutes) and if some chat data appears during that two minutes, the web request returns immediately with that data. If the full two minutes elapses and no chat data is received, then the ajax call returns with no data.
The client can then immediately issue another request to wait another two minutes for some data.
To make these "long" http requests work, you just need to make sure that your underlying ajax call has a timeout set for longer than the time you've set it for on the server.
On the server, you need an efficient mechanism for waiting on data, probably involving semaphores or something like that, because you don't want to be polling internally in the server either.
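The answer doesn't prescribe a platform; as one illustration of waiting without internal polling, here is a rough Node.js/Express sketch that uses an event emitter in place of a semaphore (the /chat/wait route, chatBus, and the two-minute window are assumptions):

```javascript
// Long-poll endpoint that waits for chat data instead of polling internally.
const express = require('express');
const { EventEmitter } = require('events');

const app = express();
const chatBus = new EventEmitter();
chatBus.setMaxListeners(0); // many waiting clients

// Whenever a message is stored, publish it:
// chatBus.emit('message', savedMessage);

app.get('/chat/wait', (req, res) => {
  const onMessage = (message) => {
    cleanup();
    res.json({ messages: [message] });
  };

  // Give up after two minutes and return an empty response.
  const timer = setTimeout(() => {
    cleanup();
    res.json({ messages: [] });
  }, 2 * 60 * 1000);

  function cleanup() {
    clearTimeout(timer);
    chatBus.off('message', onMessage);
  }

  chatBus.once('message', onMessage);
  req.on('close', cleanup);
});

app.listen(3000);
```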
Doing it this way, you can get a near-instantaneous response on the client while making only about 30 requests an hour.
To be friendly to the battery of a laptop or mobile device, you need to be sensitive to when your app isn't actually being used (browser not displayed, not the current tab, etc...) and stop the requests during that time.
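On the client, one way to detect this is the Page Visibility API; a small sketch (startLongPolling and stopLongPolling are assumed wrappers around whatever request loop you use):

```javascript
// Pause the chat requests while the tab is hidden, resume when it is visible.
document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    stopLongPolling();   // assumed helper: cancel/stop the request loop
  } else {
    startLongPolling();  // assumed helper: start the request loop again
  }
});
```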
As to your other questions, repeated ajax calls (as long as they are spaced at least some small amount of time apart) don't really use much in the way of CPU or memory. They may use battery if they keep the computer from going into an idle mode.

If I wanted to create an AJAX chat what communication technique should be used to maintain scalability?

I put together an AJAX chat a while back with ASP.NET MVC and jQuery. The JavaScript would hit the server about every 7 seconds to check for new messages. Obviously this was horrible for performance as the chat grew and included more and more users. The site traffic grew exponentially with so many requests going on. A user could leave the computer on all day without even being there, and they would still be making hits every 7 seconds.
Is there a better way to do this? I have heard of something called "push" but I haven't really been able to wrap my head around it. I think I just need to be pointed in the right direction.
1.) What is the best way to develop an AJAX chat and have it be scalable?
2.) What is push and how would I use that with jQuery?
1.) What is the best way to develop an AJAX chat and have it be scalable?
I agree with #freakish about the complexity and potential lack of scaling of IIS.
However, there is a relatively new Microsoft option in the works called SignalR which could become a core part of ASP.NET. More details in this related SO Question:
AJAX Comet - Is there any solution Microsoft is working on or supports to allow it to be scalable?
2.) What is push and how would I use that with jQuery?
Partially answered elsewhere, but it's a long-held persistent connection between the server and the client which means the server can instantly 'push' data to the client when it has new data available.
jQuery does support making AJAX requests, but the core library doesn't expose ways of doing HTTP long-polling or HTTP streaming. More information in this SO answer to 'Long Polling/HTTP Streaming General Questions'.
Server push is a technology that allows the server to push data back to the client without forcing the client to make many requests (like one every 7 seconds). It is not really a matter of JavaScript but rather of good server scripting. The upcoming HTML5 will make it simple thanks to server-sent events and/or WebSockets, which provide a true TCP connection between the machines.
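A tiny browser-side sketch of the WebSocket variant (the URL, the message shape, and the appendToChatLog helper are assumptions for illustration):

```javascript
// One persistent connection; the server pushes chat messages as they arrive.
const socket = new WebSocket('wss://example.com/chat'); // placeholder URL

socket.addEventListener('message', (event) => {
  const message = JSON.parse(event.data);
  appendToChatLog(message); // assumed UI helper
});

socket.addEventListener('open', () => {
  // Outgoing messages go over the same connection, no extra HTTP requests.
  socket.send(JSON.stringify({ type: 'join', room: 'lobby' }));
});
```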
But if you intend to make a webpage compatible with older browsers, then the most common technique is long polling. The client sends a request to the server, and the server does not respond until it has new data. When it does, the response is sent, and the client, immediately after receiving the data, calls the server with a new request. In practice, however, this requires the server to be well written (for example, it has to maintain thousands of idle requests at the same time) and can become a rather big challenge for developers.
I hope this helps. :) Good luck!
The technique you should use is real-time, persistent, long-running connections over a web page using WebSockets. You can use this library.
Node.js is becoming quite popular for building something like this and supports socket connections, so you could push the data out only when there is a new message. But that would mean learning something completely new.
Another nice option would be to use MVC's OutputCacheAttribute with the SQL dependency option, so your AJAX page could be cached and would only require a new request when a new chat message appears. Also, you would want your controller to be an asynchronous controller to help reduce the load on IIS.
Enjoy, optimization is always fun and very time consuming!
