Increase Concurrent HTTP calls - javascript

I went through many posts about this on SO but didn't find a suitable solution.
From one of the answers I got this list of maximum concurrent connections to one domain:
IE 6 and 7: 2
IE 8: 6
IE 9: 6
IE 10: 8
IE 11: 8
Firefox 2: 2
Firefox 3: 6
Firefox 4 to 46: 6
Opera 9.63: 4
Opera 10: 8
Opera 11 and 12: 6
Chrome 1 and 2: 6
Chrome 3: 4
Chrome 4 to 23: 6
Safari 3 and 4: 4
How can I make more HTTP calls than the per-domain maximum set by browsers?
I also came across this:
One trick you can use to increase the number of concurrent connections is to host your images from a different subdomain. These will be treated as separate requests; each domain is what will be limited to the concurrent maximum.
IE6 and IE7 have a limit of two. IE8 is 6 if you're on broadband, 2 if you are on dial-up.
But I don't have a scenario like this. I am fetching specific data from a single web server. How do I overcome this?
I make 14 HTTP calls to the same server on page load, which is why the actual page takes so long to load. How can I improve the performance of my website through concurrent AJAX/HTTP calls?

What you can do is spread that load across several subdomains. Instead of using only www, use www1, www2, www3, www4 and round-robin between them on the client side.
You'll need to configure your web server so that the www* subdomains all end up in the same place.
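A minimal client-side sketch of that round-robin, assuming www1-www4.example.com all resolve to the same server and send the appropriate CORS headers (the host names and paths are placeholders):

// Rotate requests across several subdomains so each host gets its own
// per-host connection limit in the browser (placeholder hosts and paths).
const hosts = ['www1.example.com', 'www2.example.com', 'www3.example.com', 'www4.example.com'];
let next = 0;

function shardedUrl(path) {
    const host = hosts[next];
    next = (next + 1) % hosts.length;
    return 'https://' + host + path;
}

// Usage: the 14 startup calls get spread over four hosts instead of queueing on one.
fetch(shardedUrl('/api/chart-data'))
    .then(response => response.json())
    .then(data => console.log(data));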

14 requests is not an issue in itself. It only becomes an issue if the server response time is large, so the root issue is most likely server-side performance.
These solutions are possible:
use an HTTP cache (the server should send the corresponding headers)
use a cache in the middle (e.g. a CDN, Varnish)
optimize the server side
content related:
combine several requests into one (a minimal sketch of this follows the list)
remove duplicated information from requests
do not load information the client doesn't render
use a cache on the server side
...and plenty of other approaches.
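As a hedged illustration of "combine several requests into one", the client could ask a single batch endpoint for everything it needs in one round trip (the endpoint name and payload shape below are assumptions, not part of the original answer):

// One request instead of many: ask a hypothetical /api/batch endpoint
// for several named resources at once, then fan the results back out.
async function loadAll(resourceNames) {
    const response = await fetch('/api/batch', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ resources: resourceNames }),
    });
    if (!response.ok) throw new Error('Batch request failed: ' + response.status);
    return response.json(); // e.g. { charts: [...], grid: [...], map: [...] }
}

loadAll(['charts', 'grid', 'map']).then(data => {
    console.log(data.charts, data.grid, data.map);
});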
UPDATE:
Suggestions for people who have to download many static resources and have trouble with that:
Check the size of the resources and optimize where possible.
Use HTTP/2 - it shares one connection between requests, so the server is less loaded and responds faster, mostly because it doesn't need to establish a separate TLS connection per request (the web is secure nowadays; everybody uses HTTPS).
The HTTP specification limits the number of parallel requests to a single domain. This leaves the option of increasing the number of parallel requests by using several different domains (or subdomains) to download the required resources.

Just to extend Charly Koza's answer, as it has some limitations depending on user count etc.
The first thing to look at is using CDNs; I will assume you have done this already.
The fact that you are only hitting one server is not a problem: the browser allows concurrent connections per DNS host, not just per IP.
If you have access to your DNS management and can dynamically spawn new subdomains, look at free services like Cloudflare's API.
Alternatively, create a wildcard domain, which will allow any subdomain to point to one server.
In this way, on the server side, you can detect whether the user already has X connections active; if so, the following scenario can be used:
Dynamically create a new subdomain on the same IP, or, if using the wildcard, pick a random subdomain such as newDomainRandom.domain.com.
Then return the user a 301 redirect to the new domain; the user's browser will register this as a new connection to another domain.
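A rough server-side sketch of that flow, assuming a Node/Express server behind a wildcard DNS record; the per-user connection counting, the shard naming and the limit value are all hypothetical:

// Hypothetical sketch: when a client already has "too many" active requests,
// 301-redirect it to a random wildcard subdomain that points at the same server.
const express = require('express');
const crypto = require('crypto');
const app = express();

const MAX_PER_HOST = 6;               // assumed per-host browser limit
const active = new Map();             // client IP -> number of in-flight requests

app.get('/data/:resource', (req, res) => {
    const count = active.get(req.ip) || 0;

    if (count >= MAX_PER_HOST && !req.hostname.startsWith('shard-')) {
        const shard = 'shard-' + crypto.randomBytes(4).toString('hex') + '.example.com';
        return res.redirect(301, 'https://' + shard + req.originalUrl);
    }

    active.set(req.ip, count + 1);
    res.on('finish', () => active.set(req.ip, Math.max(0, (active.get(req.ip) || 1) - 1)));
    res.json({ resource: req.params.resource });
});

app.listen(80);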
There is a lot of pseudo-work here, but this is more of a networking issue than a coding issue.
Compulsory warning on this method, though:
There are no limits in using 301 redirects on a site. You can implement more than 100k of 301 redirects without getting any penalty. But: too many 301 redirects put unnecessary load on the server and reduce speed.

How can I make more HTTP calls than the per-domain maximum set by browsers?
That is an HTTP/1.1 limit (6-8). If you are able to change the server (you tagged this question http), the best solution is to use HTTP/2 (RFC 7540) instead of HTTP/1.1.
HTTP/2 multiplexes many HTTP requests on a single connection; see this diagram. Where HTTP/1.1 has a limit of roughly 6-8, HTTP/2 has no fixed limit, but says that "It is recommended that this value (SETTINGS_MAX_CONCURRENT_STREAMS) be no smaller than 100" (RFC 7540). That number is far better than 6-8.
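If you control the server, a minimal Node.js HTTP/2 sketch looks like the following; the certificate paths are placeholders, since browsers only speak HTTP/2 over TLS:

// Minimal HTTP/2 server using Node's built-in http2 module.
// The key/cert paths are placeholders for a real (or self-signed) certificate.
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
    key: fs.readFileSync('server-key.pem'),
    cert: fs.readFileSync('server-cert.pem'),
});

server.on('stream', (stream, headers) => {
    // Each request arrives as a stream multiplexed over the same connection.
    stream.respond({ ':status': 200, 'content-type': 'application/json' });
    stream.end(JSON.stringify({ path: headers[':path'] }));
});

server.listen(8443);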

I have a data driven application. ... I have google map, pivot, charts and grids
In the comments you mentioned that the data comes from different providers; if those are on different domains, try using dns-prefetch, e.g. like this:
<link rel="dns-prefetch" href="http://www.your-data-domain-1.com/">
<link rel="dns-prefetch" href="http://www.your-data-domain-2.com/">
<link rel="dns-prefetch" href="http://www.3rd-party-service-1.com/">
<link rel="dns-prefetch" href="http://www.3rd-party-service-2.com/">
You need to list all the domains you call via AJAX that are not the actual domain of your website itself.
It forces the browser to send the DNS requests as soon as it reads and parses that HTML, not when your code requests data from those domains for the first time. It might save you up to a few hundred milliseconds when the browser actually makes an AJAX request for the data.
See also:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-DNS-Prefetch-Control
https://caniuse.com/#feat=link-rel-dns-prefetch

Just use HTTP/2. HTTP/1.x has a limit on concurrent connections, about 6-10 depending on the browser.

If your server is able to perform tasks concurrently, there is no added value in opening multiple connections to it, other than being able to update your UI earlier (before the slowest tasks complete), e.g. to let users use the page before all the tasks have finished. That makes for a good user experience.
However, there are other ways to achieve this than opening parallel HTTP connections. In short, you can merge all your endpoints into one endpoint that handles all tasks and asynchronously places the result of each finished task into the response. The client can then process each task's result as it finishes.
To achieve this you need some sort of protocol/API that operates on top of an HTTP connection, or a connection that has been upgraded to a WebSocket. Below are some alternatives that provide asynchronous response messages (a minimal client-side sketch using server-sent events follows the list):
Server-side-events spec
Facebook's BigPipe concept: BigPipe: Pipelining web pages for high performance. Some implementation: server / client
socket.io: server/ client
CometD framework (serverside is java-based)
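As a rough illustration of the "one endpoint, asynchronous results" idea using the server-sent events option above; the endpoint, event names and render functions are hypothetical:

// One SSE connection replaces N parallel XHRs. The server is assumed to run
// all tasks concurrently and push each result as a named event when it finishes.
const source = new EventSource('/api/all-tasks');   // hypothetical endpoint

source.addEventListener('chart-data', e => renderCharts(JSON.parse(e.data)));
source.addEventListener('grid-data', e => renderGrid(JSON.parse(e.data)));
source.addEventListener('map-data', e => renderMap(JSON.parse(e.data)));

// The server signals that every task has completed, so the connection can be closed.
source.addEventListener('done', () => source.close());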

Related

Server sent events and browser limits

I have a web application that listens for Server-Sent Events. While I was working and testing with multiple windows open, things were not working and I banged my head several times looking in the wrong direction; eventually, I realized that the problem was concurrent connections.
However, I was only testing a very limited number, even though I was running the test on Apache (I know, I should use Node).
I then, switched browser and noticed something really interesting: apparently Chrome limits Server Sent Events connections to 4-5, while Opera doesn't. Firefox, on the other hand, after 4-5 simultaneous connections, refuses to load any other page.
What is the reason behind this? Does the limit only apply to SSE connections from the same source, or would it be the same if I were to test open them from a different domain? Is there any chance that I am misusing SSE and this is actually blocking the browsers, or this is a known behaviour? Is there any way around it?
The way this works in all browsers is that each domain gets a limited number of connections, and the limit is global for your whole application. That means if you have one connection open for realtime communication, you have one fewer for loading images, CSS and other pages. On top of that, you don't get new connections for new tabs or windows; all of them need to share the same pool of connections. This is very frustrating, but there are good reasons for limiting the connections. A few years back this limit was 2 in all browsers (based on the rules in the HTTP/1.1 spec, http://www.ietf.org/rfc/rfc2616.txt), but now most browsers use 4-10 connections in general. Mobile browsers, on the other hand, still need to limit the number of connections for battery-saving purposes.
These tricks are available:
Use more host names. By assigning e.g. www1.example.com and www2.example.com you get new connections for each host name. This trick works in all browsers. Don't forget to set the cookie domain to cover the whole domain (example.com, not www.example.com).
Use WebSockets. WebSockets are not limited by these restrictions and, more importantly, they do not compete with the rest of your website's content.
Reuse the same connection when you open new tabs/windows. If you have gathered all realtime communication logic into an object called Hub, you can reuse that object in all opened windows like this:
window.hub = (window.opener && window.opener.hub) || new Hub();
Or use Flash sockets - not the best advice these days, but it might still be an option if WebSockets aren't.
Remember to add a few seconds of delay between each SSE request to let queued requests clear before starting a new one. Also add a little more waiting time for each second the user is inactive; that way you can concentrate your server resources on the users that are active. Finally, add a random amount of delay to avoid the thundering-herd problem (a sketch of this follows).
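A small sketch of that last trick: reconnect with a base delay, an extra penalty for idle users, and random jitter. The endpoint URL, the idle tracking and the handleMessage function are assumptions for illustration:

// Reconnect to the SSE endpoint with a base delay, extra delay for idle users,
// and random jitter to avoid the thundering-herd problem.
let idleSeconds = 0;
setInterval(() => { idleSeconds += 1; }, 1000);
document.addEventListener('mousemove', () => { idleSeconds = 0; });

function connect() {
    const source = new EventSource('/events');                 // hypothetical endpoint
    source.onmessage = e => handleMessage(JSON.parse(e.data)); // handleMessage is assumed
    source.onerror = () => {
        source.close();
        const baseDelay = 3000;                 // a few seconds between attempts
        const idlePenalty = idleSeconds * 1000; // back off further for inactive users
        const jitter = Math.random() * 2000;    // spread clients out
        setTimeout(connect, baseDelay + idlePenalty + jitter);
    };
}
connect();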
Another thing to remember: when using a multithreaded, blocking language such as Java or C#, you risk tying up resources in your long-polling request that are needed for the rest of your application. For example, in C# each request locks the Session object, which means the whole application is unresponsive while an SSE request is active.
Node.js is great for these things for many reasons, as you have already figured out. If you were using Node.js you would use socket.io or engine.io, which take care of all these problems for you by using WebSockets, Flash sockets and XHR polling, and because Node is non-blocking and single-threaded it consumes very few resources on the server while it is waiting for things to send. A C# application consumes one thread per waiting request, which takes at least 2 MB of memory just for the thread.
One way to get around this issue is to shut down the connections on all the hidden tabs, and reconnect when the user visits a hidden tab.
I'm working with an application that uniquely identifies users, which allowed me to implement this simple workaround:
When users connect to sse, store their identifier, along with a timestamp of when their tab loaded. If you are not currently identifying users in your app, consider using sessions & cookies.
When a new tab opens and connects to sse, in your server-side code, send a message to all other connections associated with that identifier (that do not have the current timestamp) telling the front-end to close down the EventSource. The front-end handler would look something like this:
myEventSourceObject.addEventListener('close', () => {
    myEventSourceObject.close();
    myEventSourceObject = null;
});
Use the JavaScript Page Visibility API to check whether an old tab is visible again, and reconnect that tab to the SSE if it is.
document.addEventListener('visibilitychange', () => {
    if (!document.hidden && myEventSourceObject === null) {
        // reconnect your eventsource here
    }
});
If you set up your server code like step 2 describes, on re-connect, the server-side code will remove all the other connections to the sse. Hence, you can click between your tabs and the EventSource for each tab will only be connected when you are viewing the page.
Note that the page visibility api isn't available on some legacy browsers:
https://caniuse.com/#feat=pagevisibility
2022 Update
This problem has been fixed in HTTP/2.
According to the Mozilla docs:
When not used over HTTP/2, SSE suffers from a limitation to the maximum number of open connections, which can be especially painful when opening multiple tabs, as the limit is per browser and is set to a very low number (6).
The issue has been marked as "Won't fix" in Chrome and Firefox.
This limit is per browser + domain, which means that you can open 6 SSE connections across all of the tabs to www.1.example and another 6 SSE connections to www.2.example (per Stackoverflow).
When using HTTP/2, the maximum number of simultaneous HTTP streams is negotiated between the server and the client (defaults to 100).
Spring Boot 2.1+ ships by default with Tomcat 9.0.x which supports HTTP/2 out of the box when using JDK 9 or later.
If you are using any other backend, please enable http/2 to fix this issue.
You are right about the number of simultaneous connections.
You can check this list for max values: http://www.browserscope.org/?category=network
And unfortunately, I never found any workaround, except multiplexing and/or using different hostnames.

Do AJAX applications that use POST requests always fail in Internet Explorer?

I have recently discovered that problems with intermittent failures for users running my application using Internet Explorer is due to a bug in Internet Explorer. The bug is in the HTTP stack, and should be affecting all applications using POST requests from IE. The result is a failure characterized by a request that seems to hang for about 5 minutes (depending on server type and configuration), then fail from the server end. The browser application will error out of the post request after the server gives up. I will explain the IE bug in detail below.
As far as I can tell this will happen with any application using XMLHttpRequest to send POST requests to the server, if the request is sent at the wrong moment. I have written a sample program that attempts to send POSTs at just those times: it sends continuous POSTs to the server at the precise moment the server closes the connections. The interval is derived from the Keep-Alive header sent by the server.
I am finding that when running from IE to a server with a bit of latency (i.e. not on the same LAN), the problem occurs after only a few POSTs. When it happens, IE locks up so hard that it has to be force closed. The ticking clock is an indication that the browser is still responding.
You can try it by browsing to: http://pubdev.hitech.com/test.post.php. Please take care that you don't have any important unsaved information in any IE session when you run it, because I am finding that it will crash IE.
The full source can be retrieved at: http://pubdev.hitech.com/test.post.php.txt. You can run it on any server that has php and is configured for persistent connections.
My questions are:
What are other people's experiences with this issue?
Is there a known strategy for working around this problem (other than "use another browser")?
Does Microsoft have better information about this issue than the article I found (see below)?
The problem is that web browsers and servers by default use persistent connections as described in RFC 2616 section 8.1 (see http://www.ietf.org/rfc/rfc2616.txt). This is very important for performance--especially for AJAX applications--and should not be disabled. There is however a small timing hole where the browser may start to send a POST on a previously used connection at the same time the server decides the connection is idle and decides to close it. The result is that the browser's HTTP stack will get a socket error because it is using a closed socket. RFC 2616 section 8.1.4 anticipates this situation, and states, "...clients, servers, and proxies MUST be able to recover from asynchronous close events. Client software SHOULD reopen the transport connection and retransmit the aborted sequence of requests without user interaction..."
Internet Explorer does resend the POST when this happens, but when it does it mangles the request. It sends the POST headers, including the Content-Length of the data as posted, but it does not send the data. This is an improper request, and the server will wait an unspecified amount of time for the promised data before failing the request with an error. I have been able to demonstrate this failure 100% of the time using a C program that simulates an HTTP server, which closes the socket of an incoming POST request without sending a response.
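For illustration only, a rough Node.js stand-in for that C test server: it closes the socket of an incoming POST without sending a response, which should provoke the same recovery path in the client (this is an assumption-laden sketch, not the author's original program):

// Sketch: accept connections and, when a POST arrives, close the socket
// without responding, approximating the C test server described above.
const net = require('net');

const server = net.createServer(socket => {
    socket.once('data', chunk => {
        if (chunk.toString().startsWith('POST')) {
            socket.destroy(); // close with no response; the client must recover
        } else {
            socket.end('HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok');
        }
    });
});

server.listen(8080);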
Microsoft seems to acknowledge this failure in http://support.microsoft.com/kb/895954. They say that it affects IE versions 6 through 9. They provide a hotfix for this problem, which has shipped with all versions of IE since IE 7. The hotfix does not seem satisfactory for the following reasons:
It is not enabled unless you use regedit to add a key called FEATURE_SKIP_POST_RETRY_ON_INTERNETWRITEFILE_KB895954 to the registry. This is not something I would expect my users to have to do.
The hotfix does not actually fix the broken POST. Instead, if the socket gets closed as anticipated by the RFC, it simply errors out immediately without trying to resend the POST. The application still fails--it just fails sooner.
The following example is a self-contained PHP program that demonstrates the bug. It attempts to send continuous POSTs to the server at the precise moment the server closes the connections. The interval is derived from the Keep-Alive header sent by the server.
We've encountered this problem with IE on a regular basis. There is no good solution. The only solution that is guaranteed to solve the problem is to ensure that the web server keepalive timeout is higher than the browser keepalive timeout (by default with IE this is 60s). Any situation where the web server is set to a lower value can result in IE attempting to reuse the connection and sending a request that gets rejected with a TCP RST because the socket has been closed. If the web server keepalive timeout value is higher than IE's keepalive timeout then IE's reuse of the connections ensure that the socket won't be closed. With high latency connections you'll have to consider the latency time as the time spent in-transit could be an issue.
Keep in mind however, that increasing the keepalive on the server means that an idle connection is using server sockets for that much longer. So you may need to size the server to handle a large number of inactive idle connections. This can be a problem as it may result in a burst of load to the server that the server isn't able to handle.
Another thing to keep in mind: you note that RFC 2616 section 8.1.4 states:
You forgot a very important part. Here's the full text:
Client software SHOULD reopen the transport connection and retransmit the aborted sequence of requests without user interaction so long as the request sequence is idempotent (see section 9.1.2). Non-idempotent methods or sequences MUST NOT be automatically retried, although user agents MAY offer a human operator the choice of retrying the request(s). Confirmation by user-agent software with semantic understanding of the application MAY substitute for user confirmation. The automatic retry SHOULD NOT be repeated if the second sequence of requests fails.
An HTTP POST is non-idempotent as defined by 9.1.2. Thus the behavior of the registry hack is actually technically correct per the RFC.
No, generally POST works in IE. What you describe may be an issue, but it isn't a major enough issue to deserve such a huge post.
And when you issue a POST AJAX request, to make sure every browser inconsistency is covered, just use jQuery.
One more thing:
No one sane will tell you to "use another browser", because IE is widely used and needs to be taken care of (well, except IE6 and, for some, maybe even some newer versions).
So POST has to work in IE, but to cover yourself against unexpected buggy behavior, use jQuery and you can sleep well.
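For example, a plain jQuery POST with an explicit error handler (the URL and payload are placeholders); note that, per the RFC excerpt above, a failed non-idempotent POST should not be retried blindly:

// Plain jQuery POST with explicit error handling (placeholder URL and data).
$.ajax({
    url: '/api/save',
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify({ id: 42, value: 'example' }),
    success: function (result) {
        console.log('saved', result);
    },
    error: function (xhr, status) {
        // Surface the failure to the user instead of retrying automatically,
        // since POST is non-idempotent.
        alert('Save failed: ' + status);
    }
});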
I have never encountered this issue, and our clients mostly run IE6.
I suspect you've configured your keep-alive timer to be too long. Most people configure it to be under 1 second, because persistent connections are only meant to speed up page loading, not to service AJAX calls.
If you have keep-alive configured too long, you'll face much more severe problems than IE crashing - your server will run out of file descriptors for open sockets!*
* note: Incidentally, opening and not closing connections to HTTP servers is a well-known DoS attack that tries to force the server to reach its maximum open-socket limit, which is why most server admins also configure connection timeouts to avoid having sockets open for too long.

Checking for updates using AJAX - bandwidth optimization possibilities?

I'm creating a simple online multiplayer game, with which two players (clients) can play the game with each other. The data is sent to and fetched from a server, which manages all data concerning this.
The problem I'm facing is how to fetch updates from the server efficiently. The data is fetched using AJAX: every 5 seconds, data is fetched from the server to check for updates. This is however done using HTTP, which means all headers are sent each time as well. The data itself is kept to an absolute minimum.
I was wondering if anyone has tips on how to save bandwidth in this server/client scenario. Would it be possible to fetch using a custom protocol or something like that, to prevent all the headers (like 'Server: Apache') being sent every single time? I basically only need the data itself (only 9 bytes), not all the headers (which are 100 bytes or more).
Thanks.
Comet or Websockets
HTML5's websockets (as mentioned in other answers here) may have limited browser support at the moment, but using long-lived HTTP connections to push data (aka Comet) gives you similar "streaming" functionality in a way that even IE6 can cope with. Comet is rather tricky to implement though, as it is kind of a hack taking advantage of the way browsers just happened to be implemented at the time.
Also note that both techniques will require your server to handle a lot more simultaneous connections than it's used to, which can be a problem even if they're idle most of the time. This is sometimes referred to as the C10K problem.
This article has some discussion of websockets vs comet.
Reducing header size
You may have some success reducing the HTTP headers to the minimum required, to save bytes. But you will need to keep Date, as it is not optional according to the spec (RFC 2616). You will probably also need Content-Length to tell the browser the size of the body; you might be able to drop it and close the connection after sending the body bytes, but that would prevent the browser from taking advantage of HTTP/1.1 persistent connections.
Note that the Server header is not required, but Apache doesn't let you remove it completely - the ServerTokens directive controls this, and the shortest setting results in Server: Apache as you already have. I don't think other webservers usually let you drop the Server header either, but if you're on a shared host you're probably stuck with Apache as configured by your provider.
HTML5 WebSockets will be the way to do this in the near future.
http://net.tutsplus.com/tutorials/javascript-ajax/start-using-html5-websockets-today/
This isn't possible in all browsers, but it is supported in newer ones (Chrome, Safari). You should use a framework that uses WebSockets and gracefully degrades to long polling (you don't want to poll at fixed intervals unless there are always events waiting). This way you get the benefit of the newer browsers, and that pool will continue to expand as people upgrade.
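A rough sketch of that graceful degradation, assuming a hypothetical /updates endpoint reachable both as a WebSocket and as a long-polling URL:

// Prefer WebSockets when available, fall back to long polling otherwise.
// Endpoint paths and message format are assumptions for illustration.
function subscribe(onUpdate) {
    if (window.WebSocket) {
        const ws = new WebSocket('wss://example.com/updates');
        ws.onmessage = e => onUpdate(JSON.parse(e.data));
    } else {
        (function poll() {
            const xhr = new XMLHttpRequest();
            xhr.open('GET', '/updates?poll=1', true);
            xhr.onload = () => {
                if (xhr.status === 200 && xhr.responseText) onUpdate(JSON.parse(xhr.responseText));
                poll(); // re-issue immediately; the server holds the request until there is news
            };
            xhr.onerror = () => setTimeout(poll, 5000);
            xhr.send();
        })();
    }
}

subscribe(update => console.log('game state', update));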
For Java the common solution is Atmosphere: http://atmosphere.java.net. It has a jQuery plugin as well as an abstraction at the servlet container level.

Understanding mod_proxy and Apache 2 for writing a comet-server

I am currently trying to implement a simple HTTP server for a comet-style technique (long-polling XHR requests). As JavaScript is very strict about cross-domain requests, I have a few questions:
As I understand it, every Apache worker is blocked while serving a request, so writing the "script" as a usual website would block Apache once all workers have a request to serve. --> Does not work!
I came up with the idea of writing my own simple HTTP server just for serving these long-polling requests. This server should be non-blocking, so each worker could handle many requests at the same time. As my site also contains content/images etc. and my server does not need to serve content, I started it on a different port than 80. The problem now is that the JavaScript delivered by Apache can't interact with my comet server running on a different port, because of cross-domain restrictions. --> Does not work!
Then I came up with the idea of using mod_proxy to map my server onto a new subdomain. I couldn't really figure out how mod_proxy works, but I imagine I would end up with the same effect as in my first approach?
What would be the best way to combine this kind of classic website with these long-polling XHR requests? Do I need to implement content delivery in my own server?
I'm pretty sure using mod_proxy will block a worker while the request is being processed.
If you can use 2 IPs, there is a fairly easy solution.
Let's say IP A is 1.1.1.1 and IP B is 2.2.2.2, and let's say your domain is example.com.
This is how it will work:
- Configure Apache to listen on port 80, but ONLY on IP A.
- Start your other server on port 80, but only on IP B.
- Configure the XHR requests to go to a subdomain of your domain, but on the same port, so the cross-domain restrictions don't prevent them. Your site is example.com, and the XHR requests go to xhr.example.com, for example.
- Configure your DNS so that example.com resolves to IP A, and xhr.example.com resolves to IP B.
- You're done.
This solution will work if you have 2 servers and each one has its IP, and it will work as well if you have one server with 2 IPs.
If you can't use 2 IPs, I may have another solution, I'm checking if it's applicable to your case.
This is a difficult problem. Even if you get past the security issues you're running into, you'll end up having to hold a TCP connection open for every client currently looking at a web page. You won't be able to create a thread to handle each connection, and you won't be able to "select" on all the connections from a single thread. Having done this before, I can tell you it's not easy. You may want to look into libevent, which memcached uses to a similar end.
Up to a point you can probably get away with setting long timeouts and allowing Apache to have a huge number of workers, most of which will be idle most of the time. Careful choice and configuration of the Apache worker module will stretch this to thousands of concurrent users, I believe. At some point, however, it will not scale up any more.
I don't know what your infrastructure looks like, but we have load-balancing boxes in the network racks called F5s. These present a single external domain, but redirect the traffic to multiple internal servers based on their response times, cookies in the request headers, etc. They can be configured to send requests for a certain path within the virtual domain to a specific server. Thus you could have example.com/xhr/foo requests mapped to a specific server to handle these comet requests. Unfortunately, this is not a software solution, but a rather expensive hardware solution.
Anyway, you may need some kind of load-balancing system (or maybe you have one already), and perhaps it can be configured to handle this situation better than Apache can.
I had a problem years ago where I wanted customers using a client-server system with a proprietary binary protocol to be able to access our servers on port 80 because they were continuously having problems with firewalls on the custom port that the system used. What I needed was a proxy that would live on port 80 and direct the traffic to either Apache or the app server depending on the first few bytes of what came across from the client. I looked for a solution and found nothing that fit. I considered writing an Apache module, a plugin for DeleGate, etc., but eventually rolled by own custom content-sensing proxy service. That, I think, is the worst-case scenario for what you're trying to do.
To answer the specific question about mod_proxy: yes, you can set up mod_proxy to serve content that is generated by a server (or service) that is not public facing (i.e. which is only available via an internal address or localhost).
I've done this in a production environment and it works very, very well: Apache forwarding some requests to Tomcat via AJP workers, and others to a GIS application server via mod_proxy. As others have pointed out, cross-site security may stop you working on a subdomain, but there is no reason why you can't proxy requests to mydomain.com/application.
To talk about your specific problem: I think you are really getting bogged down in looking at the problem as "long-lived requests", i.e. assuming that when you make one of these requests, that's it, and the whole process needs to stop. It seems as though you are trying to solve an application-architecture issue via changes to system architecture. In fact, what you need to do is treat these background requests exactly as such, and multi-thread them:
Client makes the request to the remote service "perform task X with data A, B and C"
Your service receives the request: it passes it onto a scheduler which issues a unique ticket / token for the request. The service then returns this token to the client "thanks, your task is in a queue running under token Z"
The client then hangs onto this token, shows a "loading/please wait" box, and sets up a timer that fires, say, every second.
When the timer fires, the client makes another request to the remote service: "have you got the results for my task? It's token Z."
Your background service can then check with your scheduler, and will likely return either an empty document ("no, not done yet") or the results.
When the client gets the results back, it can simply clear the timer and display them.
So long as you're reasonably comfortable with threading (which you must be, given you've indicated you're looking at writing your own HTTP server), this shouldn't be too complex. On top of the HTTP listener part you need:
A scheduler object - a singleton that really just wraps a "first in, first out" queue. New tasks go onto the end of the queue and jobs are pulled off from the beginning; just make sure the code that hands out a job is thread-safe (lest you get two workers pulling the same job from the queue).
Worker threads, which can be quite simple - get access to the scheduler and ask for the next job: if there is one, do the work and send the results; otherwise just sleep for a period and start over.
This way, you're never going to block Apache for longer than need be, as all you are doing is issuing requests for "do X" or "give me the results for X". You'll probably want to build in some safety features at a few points - such as handling tasks that fail, and making sure there is a time-out on the client side so it doesn't wait indefinitely.
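A minimal client-side sketch of that ticket/token flow; the endpoint names, the 204 "not done yet" convention and the one-second interval are assumptions taken from the description above:

// 1. Submit the task and receive a token.
// 2. Poll every second with the token until the results arrive.
async function runTask(taskPayload) {
    const submit = await fetch('/service/start', {      // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(taskPayload),
    });
    const { token } = await submit.json();

    return new Promise((resolve, reject) => {
        const timer = setInterval(async () => {
            const poll = await fetch('/service/result?token=' + encodeURIComponent(token));
            if (poll.status === 204) return;             // "no, not done yet"
            clearInterval(timer);
            poll.ok ? resolve(await poll.json()) : reject(new Error('task failed'));
        }, 1000);
    });
}

runTask({ action: 'perform task X', data: ['A', 'B', 'C'] })
    .then(results => console.log(results));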
For number 2: you can get around cross-domain restrictions by using JSONP.
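A bare-bones JSONP sketch for completeness; it assumes the comet server on the other port wraps its JSON response in the callback name passed in the query string:

// Bare-bones JSONP: inject a script tag pointing at the other-port server;
// cross-domain restrictions do not apply to script tags. The server is assumed
// to wrap its JSON in the callback name it receives.
function jsonp(url, callback) {
    const name = 'jsonp_cb_' + Date.now();
    const script = document.createElement('script');
    window[name] = function (data) {
        delete window[name];
        script.parentNode.removeChild(script);
        callback(data);
    };
    script.src = url + (url.indexOf('?') === -1 ? '?' : '&') + 'callback=' + name;
    document.head.appendChild(script);
}

jsonp('http://example.com:8080/poll', data => console.log(data));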
Three alternatives:
Use nginx. This means you run 3 servers: nginx, Apache, and your own server.
Run your server on its own port.
Use Apache mod_proxy_http (as you suggested yourself).
I've confirmed mod_proxy_http (Apache 2.2.16) works proxying a Comet application (powered by Atmosphere 0.7.1) running in GlassFish 3.1.1.
My test app with full source is here: https://github.com/ceefour/jsfajaxpush

max number of concurrent file downloads in a browser?

Two related questions:
What is the maximum number of files that a web page is allowed to open concurrently (e.g. images, CSS files, etc.)? I assume this value differs between browsers (and maybe per file type). For example, I am pretty sure that JavaScript files can only be loaded one at a time (right?).
Is there a way I can use javascript to query this information?
For Internet Explorer see this MSDN article. Basically, unless the user has edited the registry or run an 'internet speedup' program, they are going to have a maximum of two connections if using IE7 or earlier. IE8 tries to be smart about it and can create up to 6 concurrent connections, depending on the server and the type of internet connection. In JavaScript, on IE8, you can query the property window.maxConnectionsPerServer.
For Firefox, the default is 2 for FF2 and earlier, and 6 for FF3. See the Mozilla documentation. I'm not aware of any way to retrieve this value from JavaScript in FF.
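A small feature-test sketch based on the two answers above (only IE exposes the property; elsewhere you can only fall back to a guess):

// window.maxConnectionsPerServer is IE-only (IE8+); elsewhere it is undefined,
// so fall back to a conservative assumption.
var perServer = (typeof window.maxConnectionsPerServer !== 'undefined')
    ? window.maxConnectionsPerServer
    : 6; // common default in other browsers; not queryable outside IE

console.log('Assumed concurrent connections per server: ' + perServer);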
Most HTTP servers have little ability to restrict the number of connections from a single host, other than to ban the IP. In general, this isn't a good idea, as many users are behind a proxy or a NAT router, which would allow for multiple connections to come from the same IP address.
On the client side, you can artificially increase this amount by requesting resources from multiple domains. You can set up www1, www2, etc. aliases which all point to the same web server, then mix up where the static content is pulled from. This incurs a small overhead the first time, due to the extra DNS resolutions.
One interesting way to get around the X connections per server limit is to map static resources like scripts and images to their own domains... img.foo.com or js.foo.com.
I have only read about this - not actually tried or tested. So please let me know if this doesn't work.
I know at least in Firefox, this value is configurable (network.http.max-connections, network.http.max-connections-per-server, and network.http.pipelining.maxrequests), so I doubt you'll get a definitive answer on this one. The default is 4, however.
What are you attempting to accomplish?
The limitation is usually the web server. It's common that a web server only allows two concurrent downloads per user.
Active scripting engines like ASP.NET only execute one request at a time per user. Requests for static files are not handled by the scripting engine, so you can still fetch, for example, an image while getting an .aspx file.
Pages often have content from different servers, like traffic-measuring scripts and such. As the download limit is per server, you can typically download two files at a time from each server.
As this is a server limitation, you can't find out anything about it using JavaScript.
There is nothing in HTTP that limits the number of sessions.
However, there are configuration items in FF for one, that specifically set how many total sessions and how many sessions to a single server are allowed. Other browsers may also have this feature.
In addition, a server can limit how many sessions come in totally and from each client IP address.
So the correct answer is:
1/ The number of sessions is limited to the minimum of that imposed by the client (browser) and the server.
2/ Because of this, there's no reliable way to query it in JavaScript.
This is both a server and a browser limitation. General netiquette holds that no more than 4 simultaneous connections are allowable. Most servers allow a maximum of 2 connections by default, and most browsers follow suit. Most are configurable.
No.
