Optimal number of requests from client - javascript

Let's say we have a page. While rendering it, we need to make about 15 API requests to fetch data.
How will this number of requests affect performance on desktop and mobile? Do I need to make any changes to reduce the number of requests? It would be great if you could send me a link to material that clarifies this topic.

Optimization in this case really depends on the result of the API calls, i.e. what you are getting in each response. Is it the same static data each time, the same data with slight changes, or data that changes in real time?
There are many optimization techniques: choosing between synchronous and asynchronous calls, caching, batching, payload reduction, and more. You can find plenty of material on each of these with a single search. It is up to you to decide which technique to use and where to use it.
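For example, if several of those 15 calls are independent of each other, firing them in parallel and waiting for all of them is usually better than issuing them one after another. A minimal sketch (the endpoint URLs here are placeholders, not from the question):

```javascript
// Fire independent API calls in parallel instead of sequentially.
// The endpoint paths below are placeholders for your own API.
async function loadPageData() {
  const [user, cart, recommendations] = await Promise.all([
    fetch('/api/user').then((res) => res.json()),
    fetch('/api/cart').then((res) => res.json()),
    fetch('/api/recommendations').then((res) => res.json()),
  ]);
  return { user, cart, recommendations };
}

loadPageData().then((data) => console.log(data));
```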

Various browsers have different limits for the maximum number of connections per host name; you can find the exact numbers at http://www.browserscope.org/?category=network
Here is an interesting article about connection limitations from web performance expert Steve Souders: http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/
That many requests to one domain/service is not much. The latest versions of browsers support around 6 simultaneous HTTP 1.x connections per domain. That means your first 6 service calls (to a particular domain) need to complete before the next HTTP connection to that domain is initiated. (With HTTP/2 this limitation goes away.) So if your application is not intended to be high-performing, you are usually fine.
On the other hand, if every millisecond counts, then it's better to have an edge service / GraphQL layer (my preference) that aggregates all the services and sends the result to the browser.
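As a rough illustration of that aggregation idea (not the answerer's actual setup), an edge endpoint could fan out to the individual services on the server and return a single combined payload to the browser. The Express usage and service URLs below are assumptions:

```javascript
// Hypothetical Express edge endpoint that aggregates several backend calls
// into a single response for the browser. Requires Node 18+ for global fetch.
const express = require('express');
const app = express();

app.get('/api/page-data', async (req, res) => {
  try {
    const [user, cart, recommendations] = await Promise.all([
      fetch('http://user-service/user').then((r) => r.json()),
      fetch('http://cart-service/cart').then((r) => r.json()),
      fetch('http://reco-service/recommendations').then((r) => r.json()),
    ]);
    res.json({ user, cart, recommendations }); // one round trip for the client
  } catch (err) {
    res.status(502).json({ error: 'upstream failure' });
  }
});

app.listen(3000);
```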

Related

Best way to increase odds of connecting to a server that is temporarily timing out due to an influx of visitors?

Problem
When trying to access and purchase a specific item from store X, which releases limited quantities at random times throughout the week, loading the page in the browser is essentially pointless: 99 out of 100 requests time out, and by the time one page loads, the stock is sold out.
Question
What would be the fastest way to load these pages from a website -- one that is currently under high amounts of stress and timing out regularly -- programmatically, or even via the browser?
For example, is it better to send multiple requests and wait until a "timed out" response is received? Is it best to retry the request after X seconds have passed regardless? Etc.
Tried
I've tried both approaches above in the browser without much luck, so I'm thinking of putting together a Python or JavaScript solution to better my chances, but couldn't find an answer to my question via Google.
EDIT:
Just to clarify, the website in question doesn't sporadically time out -- it is strictly when new stock is released and the website is bombarded with visitors. Once stock is bought up, the site returns to normal. New stock releases last anywhere from 5 minutes to 25 minutes.
The best way is to inspect the website and find out how its HTTP requests are made; there may be a dedicated "buy" request you can issue directly. Beyond that, there is no fastest way: you want to load from a server that is stressed, so you will have the same 'luck' as everyone else. You could try to decrease the ping of your internet connection, but that will do minimal good.
There can be many reasons why you are getting so many request timeouts from the server. It may be your client application, the server application's settings (to curb unfavourable request behaviour from certain clients), or simply a DNS resolution taking too long. One thing that is sure, though, is that bombarding the server with many requests at a time will definitely not guarantee you fewer timeouts, and may well aggravate the situation.
One way you can approach the problem (if you don't have control of the server side) is to monitor the server application's behaviour from your end for at least a day or two. A simple script that sends test requests at regular intervals might do the trick. You can measure parameters like request resolution time, frequency of failed requests, and type (cause) of failure (if that is deducible). These parameters can be measured over a given period (a day or two) to know statistically when it is more favourable to make requests to the server. This "profiling" of the server may not always be accurate, but it can be repeated with better thought-out parameters to get better results. BTW, enough data may even benefit from some AI-driven analytics :)
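A minimal sketch of such a probing script in Node.js might look like the following; the target URL and the 60-second interval are assumptions, so adjust them to something polite for the site in question:

```javascript
// Probe a URL at a fixed interval and log latency / failure cause,
// so you can later look for time windows when the server responds best.
// Requires Node 18+ for the global fetch API.
const TARGET = 'https://example.com/product-page'; // placeholder URL
const INTERVAL_MS = 60 * 1000;

async function probe() {
  const started = Date.now();
  try {
    const res = await fetch(TARGET, { signal: AbortSignal.timeout(10000) });
    console.log(new Date().toISOString(), 'status', res.status, 'latency_ms', Date.now() - started);
  } catch (err) {
    console.log(new Date().toISOString(), 'failed', err.name, 'after_ms', Date.now() - started);
  }
}

setInterval(probe, INTERVAL_MS);
probe();
```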

When do you want more/less http requests? [closed]

It seems like, to have your page load fast, you would want a series of small HTTP requests.
If it were one big one, the user might have to wait much longer to see that the page was there at all.
However, I've heard that minimizing your HTTP requests is more efficient. For example, this is why sprites are created for multiple images.
Is there a general guideline for when you want more and when you want fewer?
Multiple requests create overhead from both the connection and the headers.
It's like downloading the contents of an FTP site: one site has a single 1 GB blob, another has 1,000,000 files totalling a few MB. On a good connection, the 1 GB file could be downloaded in a few minutes, but the other is sure to take all day because the transfer negotiation ironically takes more time than the transfer itself.
HTTP is a bit more efficient than FTP, but the principle is the same.
What is important is the initial page load, which needs to be small enough to show some content to the user; additional assets outside of the user's view can be loaded afterwards. A page with a thousand tiny images will always benefit from a sprite, because the negotiations would put strain not only on the connection but potentially on the client computer as well.
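As one common way to load assets outside the user's view only when needed (not something this answer spells out, just an illustration), an IntersectionObserver can swap in an image's real source once it scrolls near the viewport:

```javascript
// Lazy-load images: each <img class="lazy" data-src="..."> gets its real
// source only when it approaches the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // start the actual download now
      obs.unobserve(img);
    }
  }
}, { rootMargin: '200px' }); // start a little before the image becomes visible

document.querySelectorAll('img.lazy').forEach((img) => observer.observe(img));
```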
EDIT 2 (25-08-2017)
Another update here; some time has passed and HTTP/2 is (becoming) a real thing. I suggest reading this page for more information about it.
Taken from the second link (at the time of this edit):
It is expected that HTTP/2.0 will:
Substantially and measurably improve end-user perceived latency in most cases, over HTTP/1.1 using TCP.
Address the "head of line blocking" problem in HTTP.
Not require multiple connections to a server to enable parallelism, thus improving its use of TCP, especially regarding congestion control.
Retain the semantics of HTTP/1.1, leveraging existing documentation (see above), including (but not limited to) HTTP methods, status codes, URIs, and where appropriate, header fields.
Clearly define how HTTP/2.0 interacts with HTTP/1.x, especially in intermediaries (both 2->1 and 1->2).
Clearly identify any new extensibility points and policy for their appropriate use.
The third goal (not requiring multiple connections to enable parallelism) explains how HTTP/2 handles requests differently from HTTP/1. Whereas HTTP/1 will create around 8 (differs per browser) simultaneous ("parallel") connections to fetch as many resources as possible, HTTP/2 re-uses the same connection. This removes the time and network latency required to create new connections, which in turn speeds up asset delivery. Additionally, your web server will have an easier time with roughly 8 times fewer connections to keep open. Imagine the gains there :)
HTTP/2 is also already quite widely supported in major browsers; caniuse has a table for it :)
EDIT (30-11-2015)
I've recently found this article on the topic of page speed. The post is very thorough and, at worst, it's an interesting read, so I'd definitely give it a shot.
Original
There are too many answers to this question, but here are my 2 cents.
If you want to build a website you'll need a few basic things in your tool belt, like HTML, CSS and JS, maybe even PHP / Rails / Django (or one of the 10,000+ other web frameworks) and MySQL.
The front-end part is basically everything that gets sent to the client on every request. The server-side language calculates what needs to be sent, which is how you build your website.
Now when it comes to managing assets (images, CSS, JS) you're diving into HTTP land, since you'll want to make as few requests as possible. The reason for this is that each request carries a penalty: connection setup, headers and, on the first request to a domain, a DNS lookup.
This per-request penalty does not dictate your entire website, of course. It's all about the balance between the number of requests and the read- / maintainability for the programmers building the website.
Some frameworks, like Rails, allow you to combine all your JS and CSS files into one big concatenated JS file and one big CSS file before you deploy your application to your server. This ensures that (unless configured otherwise) ALL the JS and ALL the CSS used in the website get sent in one request per file.
Imagine having a popup script and a script that fetches articles through AJAX. These will be two different files, and when you deploy without combining them, each page load that includes both the popup and the article script will send two requests, one for each file.
The reason this is not quite true in practice is that browsers cache whatever they can, whenever they can, because in the end browsers and the people who build websites want the same thing: the best experience for our users!
This means that during the first request your website ever answers for a client, the browser will cache as much as possible to make consecutive page loads faster.
This is kind of like the browser's way of helping websites become faster.
Now when the brilliant browserologists think of something, it's more or less our job to make sure it works in the browser. Usually these sorts of things, caching and the like, are trivial and not hard to implement (thank god for that).
Having a lot of HTTP requests in a page load isn't an end-of-the-world thing, since it only slows down the first, uncached load, but overall having fewer requests means this per-request penalty appears less often and gives your users more of an instant page load.
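As a rough illustration of the file-merging idea (not tied to any particular framework's asset pipeline), a tiny build step could concatenate your scripts into one file so the page only has to fetch a single asset. The file names here are made up:

```javascript
// Minimal build step: concatenate several JS files into one bundle.
// In practice you would use your framework's asset pipeline or a bundler,
// but the idea is the same. File names are placeholders.
const fs = require('fs');

const sources = ['popup.js', 'articles.js', 'analytics-wrapper.js'];
const bundle = sources
  .map((file) => `// ---- ${file} ----\n` + fs.readFileSync(file, 'utf8'))
  .join('\n');

fs.writeFileSync('bundle.js', bundle);
console.log(`Wrote bundle.js from ${sources.length} files`);
```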
There are also other techniques besides file-merging that you can use to your advantage: when including a JavaScript file you can choose to load it async or defer.
With async, the script is downloaded in the background without blocking the HTML parser, regardless of its order of inclusion within the HTML; as soon as the download finishes, the parser is paused and the script is executed right away.
With defer it's a bit different. It's kind of like async, but the files will be executed in the correct order and only after the HTML parser is done.
Something you wouldn't want to be async is jQuery, for instance: it's the key library for a lot of websites and you'll want to use it in other scripts, so using async and not being sure when it's downloaded and executed is not a good plan.
Something you would want to be async is a Google Analytics script, for instance: it's effectively optional for the end-user and thus should be labelled as not important. No matter how much you care about the stats, your website isn't built for you but by you :)
To get back to requests and blend all this talk about async and defer together: you can have multiple JS files on your page and still not have the HTML parser pause to execute them; mark such a script as defer and you'll be fine, since the user's HTML and CSS will load while the script waits nicely for the HTML parser to finish.
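For concreteness, a minimal sketch of how those attributes look in the page (the file names are placeholders):

```html
<!-- Must execute in order, before scripts that depend on it: use defer -->
<script src="jquery.js" defer></script>
<script src="site.js" defer></script>

<!-- Independent, order doesn't matter, run whenever it arrives: use async -->
<script src="analytics.js" async></script>
```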
This is not an example of reducing HTTP requests, but it is an example of an alternative solution should you have that one file that doesn't really belong anywhere except in a separate request.
You will also never be able to build a perfect website, and neither will http://github.com or http://stackoverflow.com, but it doesn't matter: they are fast enough for our eyes not to see any crazy flashing content, and that is what truly matters to end-users.
If you are curious about how many requests is normal: don't be. It's different for every website and for the purpose of the website, though I agree some things do go over the top sometimes. But it is what it is, and all we have to do is support browsers like they are supporting us, even IE / Edge, since they are also improving (slowly but steadily anyway).
I hope my story made sense to you. I did re-read the post before submitting but couldn't find any irregular typing or other kinds of illogical things.
Good luck!
The HTTP protocol is verbose, so the ratio of header size to payload size makes it more efficient to have a larger payload. On top of that, this is still distributed communication, which makes it inherently slow. You also usually have to set up and tear down a TCP connection for each request.
Also, I have found that small requests tend to repeat data between themselves in an attempt to achieve RESTful purity (like including user data in every response).
The only time small requests are useful is when the data may not be needed at all, so you only load it when needed. However, even then it may be more performant to simply retrieve it all in one go.
You always want fewer requests.
The reason we separate JavaScript/CSS code into other files is that we want the browser to cache them, so other pages on our website will load faster.
If we have a single-page website with no common libraries (like jQuery), it's best to include all the code in your HTML.

Overhead of separated javascript files in application loading time

My company is building a single page application using JavaScript extensively. As time goes on, the number of JavaScript files to include in the associated HTML page is getting bigger and bigger.
We have a program that minifies the JavaScript files during the integration process, but it does not merge them, so the number of files is not reduced.
Concretely this means that when the page is loaded, the browser requests the JavaScript files one by one, initiating an HTTP request each time.
Does anyone have metrics, or a sort of benchmark, that would indicate to what extent the overhead of requesting the JavaScript files one by one is really a problem that would require merging the files into a single one?
Thanks
It really depends on the number of users, the number of connections allowed by the server, and the maximum number of connections the client can open.
Generally, a browser can make multiple HTTP requests at the same time, so in theory there shouldn't be much difference between having one JavaScript file or a few.
You don't only have to consider the JavaScript files, but the images too of course, so a high number of files can indeed slow things down (if you hit the maximum number of simultaneous connections on the server or client side). So in that regard it would be wise to merge those files.
@Patrick already explained the benefits of merging. There is, however, also a benefit to having many small files. Browsers by default give you a maximum number of parallel requests per domain. The HTTP standard originally suggested 2, but browsers don't follow that anymore. This means that requests beyond the limit have to wait.
You can use subdomains and redirect requests from them to your server. Then you can code the client in such a way that it uses a unique subdomain for each file. Thus you'll be able to download all files at the same time (requests won't queue), effectively increasing performance (note that you will probably need more static file servers to handle the traffic).
I haven't seen this being used in real life, but I think it's an idea worth mentioning and testing. Related:
Max parallel http connections in a browser?
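A minimal sketch of that domain-sharding idea on the client side might look like this; the shard host names are assumptions, each would need to serve (or redirect to) the same static files, and note that this trick matters much less under HTTP/2:

```javascript
// Spread asset requests across several shard subdomains so the per-domain
// connection limit applies to each shard separately. Host names are placeholders.
const SHARDS = ['static1.example.com', 'static2.example.com', 'static3.example.com'];

function shardUrl(path, index) {
  const host = SHARDS[index % SHARDS.length]; // deterministic, so caching still works
  return `https://${host}/${path}`;
}

['app.js', 'vendor.js', 'widgets.js', 'charts.js'].forEach((file, i) => {
  const script = document.createElement('script');
  script.src = shardUrl(file, i);
  document.head.appendChild(script);
});
```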
I think you should have a look at your app's architecture rather than at what is out there.
But this site should give you a good idea: http://www.browserscope.org/?category=network
Browsers and servers may have their own rules, which can differ. If you search for HTTP request limits, you will find a lot of posts; for example, the maximum number of simultaneous HTTP requests is limited per domain.
But speaking a bit about software development: I like the component-based approach.
You should group your files per component. Depending on your application's requirements, you can load the mandatory components first and lazy-load the less-needed ones later, or on the fly. I don't think you should download the entire app if it's huge and has a lot of different functionalities that may or may not all be used by your users.
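One common way to express that kind of per-component lazy loading (just a sketch; the module paths and element ids are made up) is a dynamic import() that is only triggered when the component is actually needed:

```javascript
// Load the heavy "reports" component only when the user asks for it.
// Module paths and element ids are placeholders for your own application.
document.querySelector('#open-reports').addEventListener('click', async () => {
  const { renderReports } = await import('./components/reports.js');
  renderReports(document.querySelector('#main'));
});
```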

HTML5 LocalStorage as cache and single asset request

I would like to know what the limits and cons of the following concept are:
Requirements:
Browser with LocalStorage support.
Server-side asynchronous non-blocking I/O technology.
Let's imagine the following request flow:
client GET / request -> server. We call this stage the "greeting", which is an interesting stage because the client now sends (also through headers of course):
ip
browser
browser version
language
charset
server -> client (200 OK)
client -> IF OK
-> establish a websocket with the server
once the websocket has been established we enter the "asset stream" stage.
server -> looks for matching assets (stylesheets, images, JavaScript files, fonts etc.) that are specific to the language, browser and resolution, and STREAMS them through the websocket.
server -> client (websocket, async stream of assets)
BENEFIT 1. No multiple requests over the wire, avoiding DNS lookups etc.
BENEFIT 2. Cache the hell out of these assets in localStorage, which is the following stage.
request -> put in LocalStorage cache.
request -> render website.
I would like to get some opinions on what might be a good idea and what might not, etc.
My first thoughts were:
CDNs are not supported in this architecture
We need one single request to get the JavaScript / HTML that starts the WebSocket etc.
I hope my question was clear.
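For reference, a minimal sketch of the "put in LocalStorage cache" stage described above, assuming the server sends each asset over the websocket as a JSON message with a name, type and text content (that message format is purely an assumption about the protocol):

```javascript
// Cache assets received over the websocket in localStorage, then apply them.
// The message format { name, type, content } is an assumed convention.
const ws = new WebSocket('wss://example.com/assets');

ws.onmessage = (event) => {
  const asset = JSON.parse(event.data);
  localStorage.setItem('asset:' + asset.name, JSON.stringify(asset));
  applyAsset(asset);
};

function applyAsset(asset) {
  if (asset.type === 'css') {
    const style = document.createElement('style');
    style.textContent = asset.content;
    document.head.appendChild(style);
  } else if (asset.type === 'js') {
    const script = document.createElement('script');
    script.textContent = asset.content;
    document.head.appendChild(script);
  }
}

// On a later visit, replay cached assets before (or instead of) opening the socket.
Object.keys(localStorage)
  .filter((key) => key.startsWith('asset:'))
  .forEach((key) => applyAsset(JSON.parse(localStorage.getItem(key))));
```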
Interesting approach, it's definitely worth thinking about. Let me be your devil's advocate:
BENEFIT 1. No multiple requests over the wire, avoiding DNS lookups etc.
This is true, although it's only an issue when you're accessing a page/site for the first time. It's also somewhat mitigated by the prefetching that modern browsers implement. It's important to remember that browsers will download multiple resources in parallel, which could be faster, and definitely more progressively responsive, than downloading the whole payload in bulk.
With today's technologies you can already serve full-fledged pages and applications with only a handful of resources as far as the web client is concerned (all of them can be gzipped!):
HTML
combined and minified CSS files as one resource
same for JS
image sprite
BENEFIT 2. Cache the hell out of these assets in localStorage...
Browsers already cache the hell out of such assets! In addition, there are proven and intelligent techniques to invalidate those caches (which is the second biggest challenge in software development).
Other things to consider:
Don't underestimate CDNs. They are life savers when it comes to latency. Your approach is not latency friendly during the first request.
AJAX and progressive enhancement approaches can already optimize the web app experience to make it feel like a desktop app.
You will need to re-invent or modify tools like Firebug to work with one stream containing all resources. No web development can be imagined nowadays without those tools.
If browsers don't support this approach natively, then you will still have a hell of a time programming and letting the browser know what your stream contains and how to handle it. By the time you process the stream and fire all necessary events (in the optimal sequence!) you might not gain as much benefit as you hoped for.
Good luck!

Saving Application State in Node.js

How can I save the application state for a Node.js application that consists mostly of HTTP requests?
I have a script in Node.js that works with a RESTful API to import a large number (10,000+) of products into an e-commerce application. The API has a limit on the number of requests that can be made, and we are starting to brush up against that limit. On a previous run the script exited with an Error: connect ETIMEDOUT, probably due to exceeding the API limits. I would like to be able to retry the connection 5 times and, if that fails, resume after an hour when the limit has been reset.
It would also be beneficial to save the progress throughout in case of a crash (power goes down, network crashes etc.) and to be able to resume the script from the point where it left off.
I know that Node.js operates as a giant event queue: all HTTP requests and their callbacks get put into that queue (together with some other events). This makes it a prime target for saving the state of current execution. Another pleasant feature (not totally necessary for this project) would be being able to distribute the work among several machines on different networks to increase throughput.
So is there an existing way to do this? A framework, perhaps? Or do I need to implement it myself? In that case, any useful resources on how this can be done would be appreciated.
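A minimal sketch of the "retry 5 times, then wait an hour" behaviour described above (the request function, delays and endpoint are assumptions; a real implementation would also persist progress, as the answer below suggests):

```javascript
// Retry a request up to MAX_RETRIES times with a short delay between attempts;
// if all attempts fail, wait an hour (for the API limit to reset) and start over.
const MAX_RETRIES = 5;
const RETRY_DELAY_MS = 5000;
const COOL_DOWN_MS = 60 * 60 * 1000;

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function requestWithRecovery(doRequest) {
  for (;;) {
    for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
      try {
        return await doRequest();
      } catch (err) {
        console.error(`Attempt ${attempt} failed:`, err.message);
        if (attempt < MAX_RETRIES) await sleep(RETRY_DELAY_MS);
      }
    }
    console.log('All retries failed, sleeping for an hour...');
    await sleep(COOL_DOWN_MS);
  }
}

// Usage (the endpoint is a placeholder):
// const product = await requestWithRecovery(() =>
//   fetch('https://api.example.com/products/123').then((r) => r.json()));
```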
I'm not sure what you mean when you say
I know that Node.js operates as a giant event queue: all HTTP requests and their callbacks get put into that queue (together with some other events). This makes it a prime target for saving the state of current execution
Please feel free to comment or expound on this if you find it relevant to the answer.
That said, if you're simply looking for a persistence mechanism for this particular task, I might recommend Redis, for a few reasons (see the sketch after this list):
It allows atomic operations on many data types; for example, if you had an entry in Redis called num_requests_made that represented the number of requests made, you could increment it easily with INCR num_requests_made, and the operation is guaranteed to be atomic, making it easier to scale to multiple workers.
It has several data types that could prove useful for your needs; for example, a simple string could represent the number of API requests made during a certain period of time (as in the previous bullet point); you might store details of failed API requests that need to be resubmitted in a list; etc.
It provides pub/sub mechanisms which would allow you to communicate easily between multiple instances of the program.
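Here is a small sketch of how those pieces might fit together, assuming the node-redis client (v4 or later, camelCase commands); the key names are just examples, not part of the original answer:

```javascript
// Track progress and failed requests in Redis so a crashed run can resume.
// Assumes the node-redis v4+ client; key names are arbitrary examples.
const { createClient } = require('redis');

async function main() {
  const client = createClient(); // defaults to localhost:6379
  await client.connect();

  // Atomically count requests made in the current window.
  const made = await client.incr('num_requests_made');
  console.log(`Requests made so far: ${made}`);

  // Remember a failed request so it can be resubmitted later.
  await client.rPush('failed_requests', JSON.stringify({ productId: 123 }));

  // On resume, read back the list of failed requests.
  const pending = await client.lRange('failed_requests', 0, -1);
  console.log(`Requests to retry: ${pending.length}`);

  await client.quit();
}

main().catch(console.error);
```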
If this sounds interesting or useful and you're not already familiar with Redis, I highly recommend trying out the interactive tutorial, which introduces you to a few data types and commands for them. Another good piece of reading material is A fifteen minute introduction to Redis data types.
