What is the limit on sending concurrent ajax requests with node.js?

I am making a node.js server that makes a lot of ajax requests to download json or html from different websites (not ddos/dos). I was wondering how many requests I can make at the same time without problems. For example, is it ok to do
for (let i = 0; i < 1000; i += 1) {
    callajax();
}
or do I have to do
function call() {
    callajax(call);
}
which starts the next ajax call only when the current one finishes.
If the top one is ok, how many ajax calls can I make at the same time before I have to wait for them to return? I don't want problems with non-returning ajax requests.
Can anyone share some insight into this?
Also, let's say I have two machines on the same wifi network. If both of them run their own node.js server making ajax requests, is that the same thing as doing it from one server?

This will depend upon several variables.
How many simultaneous sockets is node.js configured to allow (this is a configuration option within node.js; for outgoing HTTP requests it is http.globalAgent.maxSockets)?
If all the requests are going to the same host, then how many simultaneous connections can that host handle?
If the requests take some time for the receiving server to process, how fast can it process each request, and will the last request complete in less time than your timeout value is set to?
In general, it would be best to limit the number of simultaneous requests you send to the same host to something manageable like 5-10 because in most cases you won't actually get better throughput by sending more simultaneous requests, but you could exhaust some resources or hit timeouts with a lot of simultaneous requests.
The actual max value is dependent upon a lot of configuration choices and specifics related to the type of request and would have to be discovered via testing.
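To make the 5-10 cap concrete, here is a minimal sketch of a concurrency limiter in plain callback-style JavaScript. runLimited is a hypothetical helper name, and callajax is assumed to take a completion callback:

function runLimited(tasks, limit, done) {
    let inFlight = 0;
    let index = 0;
    let finished = 0;
    function next() {
        // start tasks until we hit the limit or run out
        while (inFlight < limit && index < tasks.length) {
            inFlight += 1;
            tasks[index++](function () {
                inFlight -= 1;
                finished += 1;
                if (finished === tasks.length) {
                    done();
                } else {
                    next(); // a slot freed up; start the next task
                }
            });
        }
    }
    next();
}

// usage: queue 1000 calls but keep only 10 in flight at once
const tasks = [];
for (let i = 0; i < 1000; i += 1) {
    tasks.push(function (cb) { callajax(cb); });
}
runLimited(tasks, 10, function () { console.log('all done'); });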

Related

Is there a javascript API which predicts that an ajax request is likely to be throttled?

I am implementing part of a large javascript application where various scripts will be doing AJAX requests, often simultaneously.
I am considering implementing some helper class which can prioritize a stream of AJAX requests, and in some cases optimize by using a bulk api that the server understands instead of many small requests.
However I have a dilemma: if I just call e.g. jquery.ajax(), then I don't know if the request will be sent straight away, or be throttled by the browser (there may be requests executing and waiting for response already, sent from other components of the app). So, I don't really know when to do the bunching optimization, and when to just send a request straight away.
Is there a good way to ask the browser whether I'm going to be throttled in advance of issuing an AJAX request?
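There is no standard browser API that answers this directly, but one workaround is to count your own in-flight requests and compare against the browser's per-host connection limit. A minimal sketch assuming jQuery (MAX_PARALLEL is an assumed value; real limits vary by browser):

let inFlight = 0;
const MAX_PARALLEL = 6; // assumed per-host limit; actual values vary by browser

function trackedAjax(options) {
    inFlight += 1;
    return $.ajax(options).always(function () {
        inFlight -= 1;
    });
}

function likelyThrottled() {
    // heuristic: if we already have as many requests out as the browser
    // will run in parallel, a new one will probably be queued
    return inFlight >= MAX_PARALLEL;
}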

Is there any good trick for a server to handle more requests if I don't have to send any data back?

I want to handle a lot of (> 100k/sec) POST requests from javascript clients with some kind of service server. Not much of this data will be stored, but I have to process all of it, so I cannot spend my whole server's power just serving requests. All the processing needs to be done in the same server instance, otherwise I'll need to use a database for synchronization between servers, which would be slower by orders of magnitude.
However, I don't need to send any data back to the clients, and they don't even expect a response.
So far my plan has been to create a few proxy server instances that buffer the requests and send them to the main server in bigger packs.
For example, let's say that I need to handle 200k requests/sec and each server can handle 40k. I can split the load between 5 of them. Then each one will buffer requests and send them to the main server in packs of 100. This results in 2k requests/sec on the main server (however, each message will be 100 times bigger, which probably means around 100-200 kB). I could even send them to the main server using UDP to decrease the amount of resources needed (then I need only one socket on the main server, right?).
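For concreteness, a minimal node.js sketch of one such buffering proxy (the main-server host, the '/batch' path, and the flush interval are placeholders; the batch size of 100 is the question's own number):

const http = require('http');

let buffer = [];

function flush() {
    if (buffer.length === 0) return;
    const batch = buffer;
    buffer = [];
    const out = http.request({ host: 'main-server', port: 8080, method: 'POST', path: '/batch' });
    out.on('error', function () { /* sketch: drop the batch on failure */ });
    out.end(JSON.stringify(batch));
}

http.createServer(function (req, res) {
    let body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
        buffer.push(body);
        if (buffer.length >= 100) flush(); // forward a full pack of 100
        res.writeHead(200);
        res.end(); // clients expect no data back
    });
}).listen(8080);

setInterval(flush, 1000); // flush partial packs so nothing sits forever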
I'm just wondering whether there is another way to speed things up, especially since, as I said, I don't need to send anything back. I also have full control over the javascript clients, but unfortunately javascript is unable to send data using UDP, which would probably be the solution for me (I don't even care if 0.1% of the data is lost).
Any ideas?
Edit in response to the answers given so far:
The problem isn't the server being too slow at processing events from the queue or at putting events into the queue itself. In fact I plan to use the disruptor pattern (http://code.google.com/p/disruptor/), which has been shown to process up to 6 million requests per second.
The only problem I can potentially have is the need to keep 100, 200 or 300k sockets open at the same time, which cannot be handled by any of the mainstream servers. I know some custom solutions are possible (http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3), but I'm wondering if there is a way to make even better use of the fact that I don't have to reply to clients.
(For example, some way to embed part of the data in the initial TCP packet and handle TCP packets as if they were UDP. Or some other kind of magic ;))
Make a single, fast (probably written in C) function that receives all the requests, behind a very fast server (like nginx). The only job of this function is to store the requests in a very fast queue (like redis, if you have enough RAM).
In another process (or server), pop items off the queue and do the real work, processing the requests one by one.
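A minimal node.js sketch of that producer/consumer split, assuming the classic callback-style node_redis API (the queue name and port are placeholders; the C-behind-nginx variant has the same shape):

// --- producer process: accept the POST, enqueue it, reply immediately ---
const http = require('http');
const redis = require('redis');
const queue = redis.createClient();

http.createServer(function (req, res) {
    let body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
        queue.lpush('requests', body); // fire-and-forget enqueue
        res.writeHead(200);
        res.end();
    });
}).listen(8080);

// --- consumer process (run as a separate program): pop and do the real work ---
const worker = redis.createClient(); // own connection: BRPOP blocks the connection it runs on

function nextItem() {
    worker.brpop('requests', 0, function (err, reply) {
        if (!err) handleRequest(reply[1]); // reply is [queueName, value]; handleRequest is hypothetical
        nextItem();
    });
}
nextItem();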
If you have control of the clients, as you say, then your proxy server doesn't even need to be an HTTP server, because you can assume that all of the requests are valid.
You could implement it as a raw socket server that simply writes back a canned 200 response, reads the client request until it disconnects, and then queues the request for processing.
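A sketch of that idea with node's net module (the canned response string and the enqueue hand-off are illustrative assumptions):

const net = require('net');

net.createServer(function (socket) {
    // write a canned 200 immediately; no HTTP parsing at all
    socket.write('HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n');
    const chunks = [];
    socket.on('data', function (chunk) { chunks.push(chunk); });
    socket.on('end', function () {
        enqueue(Buffer.concat(chunks)); // enqueue is hypothetical: hand off for processing
    });
    socket.on('error', function () {}); // ignore client resets in this sketch
}).listen(8080);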
I think what you're describing is an implementation of a Message Queue. You also will need something to hand off these requests to whatever queue you use (RabbitMQ is quite good, there are many alternatives).
You'll also need something else running that can do whatever processing you actually want on the requests. You haven't made that very clear, so I'm not sure exactly what would be right for you. Essentially the idea is that incoming requests are dumped as quickly and simply as possible into the queue by your web server, and then the web server is free to go back to serving more requests. When the system has spare resources, it uses them to process the queue; when it's busy, the queue just keeps growing.
Not sure what platform you're on, but you might want to look at something like Lighttpd for serving the POSTs. You might (if same-domain restrictions don't shoot you down) get away with having Lighttpd run on a subdomain of your application (e.g. post.myapp.com). Failing that, you could put a proper load balancer in front of your web servers altogether (so all requests go to www.myapp.com and the load balancer decides whether to forward them to the web server or to the queue processor).
Hope that helps
Consider using MongoDB for persisting your requests; its fire-and-forget write mode can help your servers respond faster.
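"Fire and forget" here maps to an unacknowledged write concern (w: 0). A hedged sketch with the node.js mongodb driver (the database and collection names are placeholders):

const { MongoClient } = require('mongodb');

async function main() {
    const client = await MongoClient.connect('mongodb://localhost:27017');
    const requests = client.db('ingest').collection('requests');

    // w: 0 = unacknowledged "fire and forget" write: the driver does not
    // wait for the server to confirm, so the caller can move on sooner
    await requests.insertOne({ payload: 'example' }, { writeConcern: { w: 0 } });
}

main();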

How to deal with a big set of pending requests

I want to implement a web site that will display to the user a notification about some event that happened on the server. My plan is:
make an asynchronous request to the server (ASP.NET) with a 600 second timeout
if an event occurs on the server within those 600 seconds, the server responds with the event details
if no event occurs, the server sends a 'no event' response at the end of the 600 seconds
upon receiving a response from the server, the JS processes it and sends the next request.
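That loop might look like the following minimal sketch, assuming jQuery (the URL and the showNotification handler are placeholders):

function poll() {
    $.ajax({
        url: '/events/wait',
        timeout: 610000, // a bit longer than the server's 600s window
        success: function (data) {
            if (data.event) {
                showNotification(data.event); // hypothetical handler
            }
        },
        complete: poll // send the next request whether we got an event or not
    });
}
poll();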
The problem with this approach is that for a large number of visitors the web site will have a lot of 'pending' requests.
Questions:
Should I consider that a problem? What is the solution? Should I implement a different approach?
Please advise; any feedback is welcome.
I don't know specifics about asp.net's handling of pending requests, but what you are describing is basically long-polling. It's tricky for a number of reasons, including but not limited to:
each pending request consumes a thread, and you'll need to store state on each of those threads
if you have enough connections (not necessarily all that many; see above), you'll need them to span multiple machines, and you then need to come up with an architecture to distribute endpoints across those machines, and make sure each incoming request goes to the right machine. If you're only broadcasting the same data to all your users, this becomes much easier.
proxies or ISPs or what-have-you may shut down your long-poll request. You'll need an architecture resilient to that.
Here's a question about long-polling in asp.net: How to do long-polling AJAX requests in ASP.NET MVC? It's probably a good place to start.
Also you could consider a 3rd-party service like pusher to handle these connections for you, or (disclaimer: I work on App Engine) App Engine's Channel API.
Surely you could make more frequent requests to the server that do not consume server resources for 10 whole minutes?
e.g. send an AJAX request every 60 seconds or so, and return whether or not any event has occurred. The downside is that it could take up to a minute for a user to see a notification about an event, so if you need it more or less immediately, that is a problem.
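A minimal sketch of that fixed-interval polling, again assuming jQuery (the endpoint and handler names are placeholders):

setInterval(function () {
    $.get('/events/check', function (data) {
        if (data.event) {
            showNotification(data.event); // hypothetical handler
        }
    });
}, 60000); // one request per minute per client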
If it does have to be immediate, it seems like looking into "long polling" with something like node.js might be a solution, though non-trivial to implement.

Javascript 1 second apart ajax requests? Resource usage?

I have a long-term goal of eventually creating a chat of some sort, but for now I'd like to just have a simple one with some MySQL and ajax calls.
To make the chat seem instant, I'd like the ajax request interval to be as short as possible. I get the feeling that if it's a second or less, it's going to bog down the browser, the user's internet, or my server.
Assuming the server doesn't return anything, how much bandwidth and cpu/memory would the client use with constant, one second apart ajax calls?
edit: I'm still open to suggestions on how I can do a chat server. Anything that's possible with free hosting from x10 or 000webhost. I've been told about Heroku but I have no clue how to use it.
edit: Thanks for the long polling suggestion, but that uses too much cpu on the servers.
One technique that can be used is to use a long-running ajax request. The client asks if there's any chat data. The server receives the request. If there's chat data available, it returns that data immediately. If there is no chat data, it hangs onto the request for some period of time (perhaps two minutes) and if some chat data appears during that two minutes, the web request returns immediately with that data. If the full two minutes elapses and no chat data is received, then the ajax call returns with no data.
The client can then immediately issue another request to wait another two minutes for some data.
To make these "long" http requests work, you just need to make sure that your underlying ajax call has a timeout set for longer than the time you've set it for on the server.
On the server, you need an efficient mechanism for waiting for data, probably involving semaphores or something like that, because you don't want to be polling internally in the server either.
Doing it this way, you can get near instantaneous response on the client, but only be making 30 requests an hour.
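In node.js terms, that server-side wait could look like the following sketch, with an EventEmitter standing in for the semaphore (single-process only; the two-minute window is the answer's own number, and the routes are placeholders):

const http = require('http');
const { EventEmitter } = require('events');

const chat = new EventEmitter();
chat.setMaxListeners(0); // one pending listener per waiting client

http.createServer(function (req, res) {
    if (req.method === 'GET') {
        // long poll: hold the response open for up to two minutes
        function onMessage(msg) {
            clearTimeout(timer);
            res.end(JSON.stringify({ messages: [msg] }));
        }
        const timer = setTimeout(function () {
            chat.removeListener('message', onMessage);
            res.end(JSON.stringify({ messages: [] })); // nothing this round
        }, 120000);
        chat.once('message', onMessage);
    } else if (req.method === 'POST') {
        let body = '';
        req.on('data', function (chunk) { body += chunk; });
        req.on('end', function () {
            chat.emit('message', body); // wake every waiting long-poll
            res.end();
        });
    }
}).listen(8080);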
To be friendly to the battery of a laptop or mobile device, you need to be sensitive to when your app isn't actually being used (browser not displayed, not the current tab, etc...) and stop the requests during that time.
As to your other questions, repeated ajax calls (as long as they are spaced at least some small amount of time apart) don't really use much in the way of CPU or memory. They may use battery if they keep the computer from going into an idle mode.

is it better to group my ajax requests or send every request separately?

I'm developing an ajax project, but I'm confused about something.
I have 3 functions that send data to the server; each function has a certain job.
Is it better to group the ajax requests that will be sent from each function into one big request, which will reduce my request count and time, or to send each request separately, which will reduce the execution time of my code on the server side but will make many requests?
P.S: for my application it works in both cases.
If you can consolidate them neatly into a single request, I'd suggest you do that. Browsers put restrictions on how many requests you can make simultaneously (it can be as low as 2), as will many servers.
Of course, as always, there are exceptions. If you have some data that you need to query every 10 seconds, and other data you need to query every 10 minutes, it's really not necessary to query the 10-minute data along with the 10-second data every 10 seconds.
Perhaps a single request with options that control what is sent back. You can request large amounts of data, or small amounts, all from the same request, depending on the options you pass along with it.
Just a theory.
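For example, a single endpoint whose request body says what to send back might look like this sketch, assuming jQuery (the URL and field names are placeholders):

$.ajax({
    url: '/api/sync',
    method: 'POST',
    contentType: 'application/json',
    data: JSON.stringify({ want: ['chat', 'notifications'] }), // ask only for what's needed
    success: function (data) {
        // only the requested pieces are present, e.g. data.chat, data.notifications
    }
});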
There are several factors involved here.
Putting them together on the client has the advantage of the client doing more of the work, which is good.
All but the newest browsers only allow 2 requests to the same domain at a time. If you have a lot of latency, this would lead you to group requests together if they're going to the same domain. However, newer browsers allow 8 requests at a time, so this problem will gradually go away.
But on the other hand
Do the requests need to all be made at the same time? Keeping the requests separate will allow you to maintain flexibility.
Probably the most important: Keeping the request separate will probably result in more maintainable, straightforward code.
I would suggest doing whatever keeps the code on the browser straightforward. Javascript (or frameworks that do ajax for you) is difficult enough to maintain and coordinate. Keep it simple!
