Two related questions:
What is the maximum number of concurrent files that a web page is allowed to open (e.g., images, CSS files, etc.)? I assume this value differs between browsers (and maybe per file type). For example, I am pretty sure that javascript files can only be loaded one at a time (right?).
Is there a way I can use javascript to query this information?
For Internet Explorer, see this MSDN article. Basically, unless the user has edited the registry or run an 'internet speedup' program, they are going to have a maximum of two connections if using IE7 or earlier. IE8 tries to be smart about it and can create up to 6 concurrent connections, depending on the server and the type of internet connection. In JavaScript, on IE8, you can query the property window.maxConnectionsPerServer.
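As a hedged sketch of how you might query that property (the property name is confirmed for IE8; the helper function itself is just an illustration):

```javascript
// Report the browser's per-server connection limit where it is
// exposed. Only IE8 ever exposed window.maxConnectionsPerServer;
// every other browser ends up returning null here.
function getConnectionLimit(win) {
  return (typeof win.maxConnectionsPerServer === 'number')
    ? win.maxConnectionsPerServer
    : null;
}
```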
For Firefox, the default is 2 for FF2 and earlier, and 6 for FF3. See the Mozilla documentation. I'm not aware of any way to retrieve this value from JavaScript in FF.
Most HTTP servers have little ability to restrict the number of connections from a single host, other than to ban the IP. In general, this isn't a good idea, as many users are behind a proxy or a NAT router, which would allow for multiple connections to come from the same IP address.
On the client side, you can artificially increase this amount by requesting resources from multiple domains. You can set up www1, www2, etc. aliases which all point to the same web server, then mix up where the static content is pulled from. This will incur a small overhead the first time due to the extra DNS resolution.
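A minimal sketch of that trick, assuming www1..wwwN are aliases you have configured for the same server (the hostnames here are hypothetical). Hashing the path keeps each asset on a stable host, so the browser cache isn't defeated by random assignment:

```javascript
// Pick a stable shard host for a static asset path. The www1..wwwN
// aliases are assumed to all point at the same web server.
function shardHost(path, shardCount) {
  var hash = 0;
  for (var i = 0; i < path.length; i++) {
    hash = (hash * 31 + path.charCodeAt(i)) % shardCount;
  }
  return 'www' + (hash + 1) + '.example.com'; // same path -> same host, always
}
```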
One interesting way to get around the X connections per server limit is to map static resources like scripts and images to their own domains... img.foo.com or js.foo.com.
I have only read about this - not actually tried or tested. So please let me know if this doesn't work.
I know at least in Firefox, this value is configurable (network.http.max-connections, network.http.max-connections-per-server, and network.http.pipelining.maxrequests), so I doubt you'll get a definitive answer on this one. The default is 4, however.
What are you attempting to accomplish?
The limitation is usually the web server. It's common that a web server only allows two concurrent downloads per user.
Server-side scripting engines like ASP.NET execute only one request at a time per user. Requests for static files are not handled by the scripting engine, so you can still fetch, for example, an image while an aspx page is being generated.
Pages often have content from different servers, like traffic measuring scripts and such. As the download limit is per server you can typically download two files at a time from each server.
As this is a server limitation, you can't find out anything about it using javascript.
There is nothing in HTTP that limits the number of sessions.
However, there are configuration items in FF for one, that specifically set how many total sessions and how many sessions to a single server are allowed. Other browsers may also have this feature.
In addition, a server can limit how many sessions come in totally and from each client IP address.
So the correct answer is:
1/ The number of sessions is limited to the minimum of the limits imposed by the client (browser) and the server.
2/ Because of this, there's no reliable way to query it in JavaScript.
This is both a server and browser limitation. General netiquette holds that no more than 4 simultaneous connections are allowable. Most servers allow a maximum of 2 connections by default, and most browsers follow suit. Most are configurable.
No.
I went through many posts about this on SO but did not find any suitable solution.
I got this from one of the answers: the maximum concurrent connections to one domain are:
IE 6 and 7: 2
IE 8: 6
IE 9: 6
IE 10: 8
IE 11: 8
Firefox 2: 2
Firefox 3: 6
Firefox 4 to 46: 6
Opera 9.63: 4
Opera 10: 8
Opera 11 and 12: 6
Chrome 1 and 2: 6
Chrome 3: 4
Chrome 4 to 23: 6
Safari 3 and 4: 4
How can I make more HTTP calls to one domain than the maximum set by browsers?
I went through this
One trick you can use to increase the number of concurrent connections is to host your images on a different subdomain. These will be treated as separate requests; each domain is what is limited to the concurrent maximum.
IE6 and IE7 have a limit of two. IE8 allows 6 if you're on broadband, 2 if you are on dial-up.
But I don't have a scenario like this. I am fetching specific data from a single web server. How will I overcome this?
I am making 14 HTTP calls to the same server at startup, which is why the actual page takes so long to load. How can I increase the performance of my website through concurrent AJAX/HTTP calls?
What you can do is spread that load over many subdomains. Instead of using only www, use www1, www2, www3, www4 and round-robin between them client-side.
You'll need to configure your web server so that the www* subdomains all end up at the same place.
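A sketch of that client-side round-robin, assuming www1..wwwN all resolve to the same backend (the hostnames are placeholders):

```javascript
// Rotate successive requests across numbered subdomains so each
// hostname gets its own browser connection pool.
var shardCounter = 0;
function nextShardUrl(path, shardCount) {
  shardCounter = (shardCounter % shardCount) + 1; // cycles 1..N
  return 'https://www' + shardCounter + '.example.com' + path;
}
```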
14 requests is not an issue in itself. It becomes an issue only if the server response time is large, so most likely the root issue is server-side performance.
These solutions are possible:
use HTTP cache (server should send corresponding headers)
use cache at the middle (e.g. CDN, Varnish)
optimize server side
content related:
combine several requests into one
remove duplicated information in requests
do not load information which client doesn't render
use cache at server side
etc... any other approach... there are plenty of them.
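For the "combine several requests into one" item above, a hedged sketch (the /api/batch endpoint is a made-up example, not a real API):

```javascript
// Build one batched request in place of many small ones.
function buildBatchRequest(ids) {
  return {
    url: '/api/batch',                 // hypothetical endpoint
    method: 'POST',
    body: JSON.stringify({ ids: ids }) // one round trip instead of ids.length
  };
}
```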
UPDATE:
Suggestions for people who have to download static resources and have trouble with that...
Check size of resources and optimize where possible
Use HTTP/2 - it shares one connection between requests, so the server is less loaded and responds faster, mostly because it doesn't need to establish a separate SSL connection per request (the web is secure nowadays; everybody uses HTTPS)
Browsers limit the number of parallel requests to a single domain (a practice rooted in the HTTP/1.1 specification). This leaves the chance to increase the number of parallel requests by using several different domains (or subdomains) to download the required resources
Just to extend Charly Koza's answer, as it has some limitations depending on user count etc.
The first thing you should look into is using CDNs; I will assume you have done this already.
The fact that you are only hitting one server is not a problem: the browser allows concurrent connections based on the DNS hostname, not just the IP.
If you have access to your DNS management and can dynamically spawn up a new subdomain, look to free services like CloudFlare's API.
Alternatively, create a wildcard domain, which will allow any subdomain to point to 1 server.
In this way, on your server side, you can identify whether the user already has X connections active. If so, the following scenario can be done:
Dynamically create a new subdomain on the same IP or, if using the wildcard, a random subdomain like newDomainRandom.domain.com. Then return the user a 301 redirect to the new domain; the user's client will then register this as a new connection to another domain.
There is a lot of pseudo-work here, but this is more of a networking issue than a coding issue.
Compulsory warning on this method though :
There are no limits on using 301 redirects on a site. You can implement more than 100k 301 redirects without getting any penalty. But: too many 301 redirects put unnecessary load on the server and reduce speed.
How to call more than the maximum http calls set by browsers to one domain.
That is an HTTP/1.1 limit (6-8). If you are able to change the server (you tagged this question http), the best solution is to use HTTP/2 (RFC 7540) instead of HTTP/1.1.
HTTP/2 multiplexes many HTTP requests over a single connection; see this diagram. While HTTP/1.1 has a limit of roughly 6-8, HTTP/2 has no fixed limit; instead it says "It is recommended that this value (SETTINGS_MAX_CONCURRENT_STREAMS) be no smaller than 100" (RFC 7540). That number is much better than 6-8.
I have a data-driven application. ... I have Google Maps, pivots, charts and grids.
In comments you mentioned that data is coming from different providers, in case those are on different domains - try using dns-prefetch, i.e. like this:
<link rel="dns-prefetch" href="http://www.your-data-domain-1.com/">
<link rel="dns-prefetch" href="http://www.your-data-domain-2.com/">
<link rel="dns-prefetch" href="http://www.3rd-party-service-1.com/">
<link rel="dns-prefetch" href="http://www.3rd-party-service-2.com/">
You need to list all domains which you are calling via AJAX and which are not the actual domain of your website itself.
It will force the browser to send the DNS request as soon as it reads and parses that HTML, not when your code first requests data from those domains. It might save you up to a few hundred milliseconds when the browser actually makes an AJAX request for the data.
See also:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-DNS-Prefetch-Control
https://caniuse.com/#feat=link-rel-dns-prefetch
Just use HTTP/2. HTTP/1.x has a limit on concurrent connections per domain, depending on the browser about 6-10.
If your server is able to perform tasks concurrently, there is no added value in opening multiple connections to it, other than being able to update your UI earlier (before the completion of the slowest tasks), e.g. to let users use the page before all the tasks have finished. This offers a good user experience.
However, there are other solutions to achieve this than opening parallel HTTP connections. In short, you can merge all your endpoints under one endpoint that handles all tasks, and asynchronously place the result of each finished task into the response. The client can then process each task's result as it finishes.
To achieve this you need some sort of protocol/API that operates on top of an HTTP connection, or a connection that has been upgraded to a websocket. Below are some alternatives which provide the async response message feature:
Server-side-events spec
Facebook's BigPipe concept: BigPipe: Pipelining web pages for high performance. Some implementation: server / client
socket.io: server/ client
CometD framework (serverside is java-based)
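As a client-side sketch of the single-endpoint idea using Server-Sent Events (the /api/tasks endpoint and the "task-done" event name are hypothetical):

```javascript
// Parse one task result from an SSE message event.
function parseTaskEvent(e) {
  return JSON.parse(e.data);
}

// Subscribe to a single endpoint that streams each task's result as it
// finishes, instead of opening one HTTP connection per task.
function listenForTasks(onResult) {
  var source = new EventSource('/api/tasks'); // hypothetical endpoint
  source.addEventListener('task-done', function (e) {
    onResult(parseTaskEvent(e));
  });
  return source;
}
```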
I have a web application that listens for Server Sent Events. While I was working and testing with multiple windows open, things were not working and I banged my head for several times looking in the wrong direction: eventually, I realized that the problem was concurrent connections.
However, I was testing with a very limited number of connections, even though I am running the test on Apache (I know, I should use Node).
I then switched browsers and noticed something really interesting: apparently Chrome limits Server-Sent Events connections to 4-5, while Opera doesn't. Firefox, on the other hand, after 4-5 simultaneous connections, refuses to load any other page.
What is the reason behind this? Does the limit only apply to SSE connections from the same source, or would it be the same if I were to test open them from a different domain? Is there any chance that I am misusing SSE and this is actually blocking the browsers, or this is a known behaviour? Is there any way around it?
The way this works in all browsers is that each domain gets a limited number of connections, and the limits are global for your whole application. That means if you have one connection open for realtime communication, you have one less for loading images, CSS and other pages. On top of that, you don't get new connections for new tabs or windows; all of them need to share the same pool of connections. This is very frustrating, but there are good reasons for limiting the connections. A few years back, this limit was 2 in all browsers (based on the rules in the HTTP/1.1 spec, http://www.ietf.org/rfc/rfc2616.txt), but now most browsers use 4-10 connections in general. Mobile browsers, on the other hand, still need to limit the number of connections for battery-saving purposes.
These tricks are available:
Use more hostnames. By assigning e.g. www1.example.com and www2.example.com, you get new connections for each hostname. This trick works in all browsers. Don't forget to change the cookie domain to include the whole domain (example.com, not www.example.com)
Use web sockets. Web sockets are not limited by these restrictions and more importantly they are not competing with the rest of your websites content.
Reuse the same connection when you open new tabs/windows. If you have gathered all realtime communication logic into an object called Hub, you can recall that object from all opened windows like this:
window.hub = (window.opener && window.opener.hub) || new Hub();
Or use Flash - not quite the best advice these days, but it might still be an option if websockets aren't.
Remember to add a few seconds of delay between SSE requests to let queued requests clear before starting a new one. Also add a little more waiting time for each second the user is inactive; that way you can concentrate your server resources on the users that are active. Also add a random amount of delay to avoid the thundering herd problem.
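The delay advice in that last point can be sketched like this (the base delay, penalty, and jitter numbers are arbitrary illustrative choices, not recommendations from any spec):

```javascript
// Compute the wait before the next SSE request: a base delay, extra
// backoff per second of user inactivity, and random jitter so many
// clients don't all reconnect at the same instant (thundering herd).
function reconnectDelayMs(idleSeconds, random) {
  var base = 3000;                          // a few seconds between requests
  var idlePenalty = idleSeconds * 100;      // back off further for idle users
  var jitter = Math.floor(random() * 2000); // 0..2s of random spread
  return base + idlePenalty + jitter;
}
```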
Another thing to remember when using a multithreaded, blocking language such as Java or C#: you risk tying up resources in your long-polling request that are needed for the rest of your application. For example, in C# each request locks the Session object, which means the whole application is unresponsive while an SSE request is active.
NodeJS is great for these things, for many reasons, as you have already figured out. If you were using NodeJS, you would have used socket.io or engine.io, which take care of all these problems for you by using websockets, flashsockets and XHR polling; and because Node is non-blocking and single-threaded, it consumes very little server resources while waiting for things to send. A C# application consumes one thread per waiting request, which takes at least 2MB of memory just for the thread.
One way to get around this issue is to shut down the connections on all the hidden tabs, and reconnect when the user visits a hidden tab.
I'm working with an application that uniquely identifies users which allowed me to implement this simple work-around:
When users connect to sse, store their identifier, along with a timestamp of when their tab loaded. If you are not currently identifying users in your app, consider using sessions & cookies.
When a new tab opens and connects to sse, in your server-side code, send a message to all other connections associated with that identifier (that do not have the current timestamp) telling the front-end to close down the EventSource. The front-end handler would look something like this:
myEventSourceObject.addEventListener('close', () => {
    myEventSourceObject.close();
    myEventSourceObject = null;
});
Use the JavaScript Page Visibility API to check whether an old tab is visible again, and reconnect that tab to the SSE if it is.
document.addEventListener('visibilitychange', () => {
    if (!document.hidden && myEventSourceObject === null) {
        // reconnect your EventSource here
    }
});
If you set up your server code like step 2 describes, on re-connect, the server-side code will remove all the other connections to the sse. Hence, you can click between your tabs and the EventSource for each tab will only be connected when you are viewing the page.
Note that the page visibility api isn't available on some legacy browsers:
https://caniuse.com/#feat=pagevisibility
2022 Update
This problem has been fixed in HTTP/2.
According to the Mozilla docs:
When not used over HTTP/2, SSE suffers from a limitation to the maximum number of open connections, which can be especially painful when opening multiple tabs, as the limit is per browser and is set to a very low number (6).
The issue has been marked as "Won't fix" in Chrome and Firefox.
This limit is per browser + domain, which means that you can open 6 SSE connections across all of the tabs to www.1.example and another 6 SSE connections to www.2.example (per Stackoverflow).
When using HTTP/2, the maximum number of simultaneous HTTP streams is negotiated between the server and the client (defaults to 100).
Spring Boot 2.1+ ships by default with Tomcat 9.0.x which supports HTTP/2 out of the box when using JDK 9 or later.
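For Spring Boot specifically, the switch is a single documented property (note that TLS must also be configured, since browsers only speak HTTP/2 over HTTPS):

```properties
# application.properties - enable HTTP/2 support in Spring Boot
server.http2.enabled=true
```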
If you are using any other backend, please enable http/2 to fix this issue.
You are right about the number of simultaneous connections.
You can check this list for max values: http://www.browserscope.org/?category=network
And unfortunately, I never found any workaround, except multiplexing and/or using different hostnames.
I am working on a file upload system which will store individual parts of large files on more than one server. So the distribution of a 1GB file will look something like this:
Server 1: 0-128MB
Server 2: 128MB-256MB
Server 3: 256MB-384MB
... etc
The intention of this is to allow for redundancy (each part will exist on more than one server), security (no one server has access to the entire file), and cost (bandwidth expenses are distributed).
I am curious if anyone has an opinion on how I might be able to "trick" web browsers into downloading the various parts all in one link.
What I had in mind was something like:
Browser is linked to Server 1, which provides a content-size of the full file
Once 128MB is served, Server 1 will intentionally close the connection
Hopefully, the browser will try to restart the download, requesting Server 1
Server 1 provides a 3XX redirect to Server 2
Browser continues downloading from Server 2
I don't know for certain that my example works, as I haven't tested it yet. I was curious if there were other solutions someone might have?
I'd like to make the whole process as easy as possible (ideally requiring no work beyond a simple download). I don't want the users to have to use another program (e.g. cat-ing the files together). I'd also like not to use a proxy server, since it would incur extra bandwidth costs.
As far as I'm aware, there is no javascript solution for writing a file, if there was one, that would be great.
AFAIK this is not possible by using the HTTP protocol. You can probably use a custom browser extension but it would depend on the browser. Another alternative is to create a Java applet that would download the file from different servers. The applet can accept the URLs to the different servers as parameters.
To save the generated file:
https://stackoverflow.com/a/4551467/329062
That solution stores the file in memory though, so it won't work with very large files.
You can download the partial files into a JS variable using JSONP. That will also let you get around the same-origin policy.
JavaScript's security model will only allow you to access data from the same origin the JavaScript came from - i.e. not multiple servers.
If you are going to have the file bits on multiple servers, you will need the user to load the web page, fetch the bits, and then finally stick the bits together in the correct order. If you can manage to get all your users to do this (correctly), you are a better man than I.
It's possible to do in modern browsers over standard HTTP.
You can use XHR2 with CORS to download file chunks as ArrayBuffers and then merge them using Blob constructor and use createObjectURL to send merged file to the user.
However, I suspect that browsers will store these objects in RAM, so it's probably a bad idea to use it for large files.
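A sketch of that approach with fetch instead of raw XHR2 (the chunk URLs are placeholders; everything is held in RAM, so this is only reasonable for modestly sized files):

```javascript
// Merge downloaded ArrayBuffer chunks into one Blob.
function mergeChunks(buffers, mimeType) {
  return new Blob(buffers, { type: mimeType });
}

// Download every chunk (CORS permitting), merge them, and return an
// object URL the page can offer as a download link.
function downloadParts(urls, mimeType) {
  return Promise.all(urls.map(function (u) {
    return fetch(u).then(function (r) { return r.arrayBuffer(); });
  })).then(function (buffers) {
    return URL.createObjectURL(mergeChunks(buffers, mimeType));
  });
}
```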
I'm creating a simple online multiplayer game, with which two players (clients) can play the game with each other. The data is sent to and fetched from a server, which manages all data concerning this.
The problem I'm facing is how to fetch updates from the server efficiently. The data is fetched using AJAX: every 5 seconds, data is fetched from the server to check for updates. This is however done using HTTP, which means all headers are sent each time as well. The data itself is kept to an absolute minimum.
I was wondering if anyone would have tips on how to save bandwidth in this server/client scenario. Would it be possible to fetch using a custom protocol or something like that, to prevent all headers (like 'Server: Apache') being sent each single time? I basically only need the very data (only 9 bytes) and not all headers (which are like 100 bytes if it's not more).
Thanks.
Comet or Websockets
HTML5's websockets (as mentioned in other answers here) may have limited browser support at the moment, but using long-lived HTTP connections to push data (aka Comet) gives you similar "streaming" functionality in a way that even IE6 can cope with. Comet is rather tricky to implement though, as it is kind of a hack taking advantage of the way browsers just happened to be implemented at the time.
Also note that both techniques will require your server to handle a lot more simultaneous connections than it's used to, which can be a problem even if they're idle most of the time. This is sometimes referred to as the C10K problem.
This article has some discussion of websockets vs comet.
Reducing header size
You may have some success reducing the HTTP headers to the minimum required to save bytes. But you will need to keep Date, as it is not optional according to the spec (RFC 2616). You will probably also need Content-Length to tell the browser the size of the body; you might be able to drop it and close the connection after sending the body bytes, but that would prevent the browser from taking advantage of HTTP/1.1 persistent connections.
Note that the Server header is not required, but Apache doesn't let you remove it completely - the ServerTokens directive controls this, and the shortest setting results in Server: Apache as you already have. I don't think other webservers usually let you drop the Server header either, but if you're on a shared host you're probably stuck with Apache as configured by your provider.
HTML5 WebSockets will be the way to do this in the near future.
http://net.tutsplus.com/tutorials/javascript-ajax/start-using-html5-websockets-today/
This isn't possible in all browsers, but it is supported in newer ones (Chrome, Safari). You should use a framework that uses websockets and then gracefully degrades to long polling (you don't want to poll at fixed intervals unless there are always events waiting). This way you get the benefit of the newer browsers, and that pool will continue to expand as people upgrade.
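The graceful-degradation choice comes down to feature detection; a minimal sketch (the transport names are just labels for whatever framework you pick):

```javascript
// Prefer WebSocket when the browser provides it; otherwise fall back
// to long polling.
function chooseTransport(global) {
  return (typeof global.WebSocket === 'function') ? 'websocket' : 'long-polling';
}
```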
For Java the common solution is Atmosphere: http://atmosphere.java.net. It has a jQuery plugin as well as an abstraction at the servlet container level.
I need to load a couple thousand records of user data (user contacts in a contact-management system, to be precise) from a REST service and run a search on them. Unfortunately, the REST service doesn't offer a search which meets my needs, so I'm reduced to loading a bunch of data and searching through it myself. Loading the records is time-consuming, so I only want to do it once for each user.
Obviously this data needs to be cached. Unfortunately, server-side caching is not an option. My client runs apps on multiple servers, and there's no way to predict which server a given request will land on.
So, the next option is to cache this data on the browser side and run searches on it there. For a user with thousands of contacts, this could mean caching several megs of data. What problems might I run in to storing several megs of javascript data in browser memory?
Storing several megs of Javascript data should cause no problems. Memory leaks will. Think about how much RAM modern computers have - a few megabytes is a molecule in the drop in the proverbial bucket.
Be careful when doing anything client side if you intend your users to use mobile devices. While desktops won't have an issue, Mobile Safari will stop working at (I believe) 10Mb of JavaScript data. (See this article for more info on Mobile Safari). Other mobile browsers are likely to have similar memory restrictions. Figure out the minimal set of info that you can return to allow the user to perform the search, and then lazy load richer records from the REST API as you need them.
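A sketch of that "minimal searchable set" idea (the field names are hypothetical): keep a small lowercased key per contact in memory, and fetch the full record by id only when needed:

```javascript
// Build a compact search index from full contact records.
function buildIndex(contacts) {
  return contacts.map(function (c) {
    return { id: c.id, key: (c.name + ' ' + c.email).toLowerCase() };
  });
}

// Case-insensitive substring search; returns matching ids so the
// richer records can be lazy-loaded from the REST API afterwards.
function search(index, query) {
  var q = query.toLowerCase();
  return index
    .filter(function (e) { return e.key.indexOf(q) !== -1; })
    .map(function (e) { return e.id; });
}
```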
As an alternative, proxy the REST service in question and create your own search on a server that you control. You could do this pretty quickly and easily with Python + Django + XML models. No doubt there are equally simple ways to do this in whatever your preferred dev language is. (In re-reading, I see that you can't do server-side caching, which may make this point moot.)
You can manage tens of thousands of records safely in the browser. I'm running search & sorting benchmarks with jOrder (http://github.com/danstocker/jorder) on such datasets with no problem.
I would look at a distributed server-side cache. If you keep the data in the browser, then as the system grows you will have to increase the browser cache lifetime to keep traffic down.