How to reduce server "Wait" time?

I am trying to optimize my site's speed and I'm using the great tool at pingdom.com. Right now, over 50% of the time it takes to load the page is "Wait" time, as shown in the screenshot below. What can I do to reduce this? Also, how typical is this figure? Are there benchmarks on this? Thanks!
EDIT:
OK, let me clarify a few things. There are no server-side scripts or database calls going on, just HTML, CSS, JS, and images. I have already done some things, like pushing JS to the end of the body tag to get parallel downloads. I am aware that main.html and templates.html add to the overall wait time by being fetched synchronously after js.js downloads; that's not the problem. I am just surprised at how much "wait" time there is for each request. Does server distance affect this? What about being on a shared server, does that affect the wait time? Is there any low-hanging fruit to remedy these issues?

The most common reason for this in the case of Apache is reverse DNS lookups. This means the server tries to figure out the name of your machine each time you make a request. This can take several seconds, which explains why you see a long wait time followed by a very quick load: the problem is not bandwidth.
The obvious solution for this is to disable hostname lookups in /etc/httpd/conf/httpd.conf:
HostnameLookups Off
However, this is usually NOT enough. In many cases Apache still does a reverse lookup even when you have disabled hostname lookups, so you need to take a careful look at each line of your Apache config. In particular, one of the most common culprits is logging. By default on many Red Hat/CentOS installations, the log format includes %h, which stands for "hostname" and requires Apache to do a reverse lookup. You can see this here:
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
Change each %h to %a (the remote IP address) to solve this problem:
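LogFormat "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%a %l %u %t \"%r\" %>s %b" common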

If the page is waiting on multiple server requests, you can make sure those requests are sent asynchronously in parallel so that you are not serializing them.
The slowest possible way to fetch multiple requests is to send one request, wait for its response, send the next request, wait for its response, etc... It's usually much faster to send all requests asynchronously and then process all responses as they arrive. This shortens the total wait time to the longest wait time for any single request rather than the cumulative wait time of all requests.
If you are only making one single request, then all you can do on the client side is to make sure the request is sent to the server as early as possible in the page-loading sequence, so that other parts of the page can be doing their business while the request is in flight; getting the request started sooner means it finishes sooner.
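A minimal sketch of the difference (modern fetch syntax shown for brevity; the URLs are just the files named in the question):

// Serial: total time is the sum of all the round trips.
async function loadSerial(urls) {
  const results = [];
  for (const url of urls) {
    results.push(await (await fetch(url)).text());
  }
  return results;
}

// Parallel: total time is roughly the single slowest round trip.
async function loadParallel(urls) {
  const responses = await Promise.all(urls.map(url => fetch(url)));
  return Promise.all(responses.map(r => r.text()));
}

loadParallel(['main.html', 'templates.html']).then(parts => {
  // Both documents have arrived; insert them into the page here.
});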

The wait time, also known as time to first byte (TTFB), is how long it takes the server to send the first byte from when the connection is initiated. If this is high, it means your server has to do a lot of work to render the page before sending it. We need more information about what your site is doing to render the page.

TTFB is directly influenced by the "physical" distance between browser and server. A CDN proxy is the best way to shorten that distance. This, coupled with native caching capabilities, helps provide a swifter response by loading cached objects from the nearest POP (point of presence) location.
The effect will depend on the user's geo-location and the CDN's spread. Still, you can expect a significant improvement of 50-70% or more.
Speaking from experience, I have seen cases in which 90% of the content was cached and delivered directly from a proxy located on a different continent, on the other side of the globe.

This is an issue with the server. According to Pingdom, the "Wait" time is defined as "The web browser is waiting for data from the server".
There isn't much you can do from the JavaScript or code end to fix this.

Related

How to efficiently send tons of GET requests with PHP

I'm working on a project in which I have to develop a simple PHP-based web module from which the users (admins) can send SMS messages (follow-ups) to students, for advertisement and other needs.
The SMS API is very simple: I just need to send a GET request to a cross-origin domain along with the phone number and message.
I tested it with file_get_contents("sms_api_url?credentials"); and it works fine.
What worries me is that the SMS will be sent to TONS of numbers, so I have to send the request multiple times in a loop, which will take a lot of time and, I think, consume too many resources.
Also, the max execution time for PHP is set to 30 seconds, which I don't want to change.
I thought of using client-side JavaScript to send the cross-origin requests in a loop so that it won't affect my server, but that wouldn't be secure, as it would reveal the API credentials.
What technology should I use to accomplish my goals and send tons of GET requests efficiently?
You've told us nothing about the actual volume you need to handle, the metrics for processing/connection time, nor what constraints there are on the implementation.
As it stands this is way too broad to answer. But some approaches you might consider are:
1) Running concurrent requests - but note that, just like domain sharding, this can undermine your bandwidth if overused
2) You can have PHP scripts running indefinitely outside the webserver (using the CLI SAPI) and these can be launched from a web session.
I thought of using client-side JavaScript to send the cross-origin requests in a loop so that it won't affect my server, but that wouldn't be secure, as it would reveal the API credentials.
If you send directly to the endpoint, then yes, you'd need the credentials in the browser. But if you implement a proxy script on your webserver which injects the credentials, then the browser never needs to see them.
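A minimal sketch of such a proxy, shown in JavaScript (Node.js) purely for illustration since the same idea works as a PHP script; the endpoint path, parameter names, and gateway URL are all assumptions:

// sms-proxy.js - keeps the SMS API credentials on the server.
const http = require('http');
const https = require('https');

const GATEWAY = 'https://sms.example.com/send'; // hypothetical aggregator endpoint
const CREDENTIALS = 'user=me&pass=secret';      // never exposed to the browser

http.createServer((req, res) => {
  const url = new URL(req.url, 'http://localhost');
  if (url.pathname !== '/send-sms') { res.writeHead(404); res.end(); return; }
  // A real version must also verify the caller is an authenticated admin.
  const phone = encodeURIComponent(url.searchParams.get('phone'));
  const msg = encodeURIComponent(url.searchParams.get('msg'));
  // Inject the credentials server-side and forward the request.
  https.get(`${GATEWAY}?${CREDENTIALS}&to=${phone}&text=${msg}`, gw => {
    res.writeHead(gw.statusCode);
    gw.pipe(res);
  });
}).listen(8080);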
Using cron has certain advantages, but you really don't want to be spawning a task from crond to send one SMS message; it needs to run in batches, and you need to manage the concurrency.
You might want to consider switching to a different aggregator who can offer bulk processing.
Regardless of the approach, you will need a way to store the messages/phone numbers and a locking mechanism around their retrieval and processing.
Personally, I'd be tempted to look at using an MTA for this or perhaps even Kannel - but that's more an approach for handling volumes in excess of 300,000 per day.
Sending as many network requests as needed and finishing in less than 30 seconds are two requirements that somewhat contradict each other. Also, raw "efficiency" can just mean squeezing every single resource out of the server, which may not be desirable.
That said, I think the key points are:
I may be wrong but, as far as I know, there are only two ways to prevent an unauthorised party from consuming a web service: private credentials and IP filtering. Neither is possible in browser-based JavaScript.
Don't make a human being stare at the computer until a task of this kind completes. There's absolutely no need to, and it can even cause the task to abort.
If you need to send the same text to different recipients, find out whether the SMS provider has an API that allows doing it in a single API request. Large batch deliveries get one or two orders of magnitude harder when this feature is not available.
In short, you need:
A command-line script
A task scheduler (e.g. cron)
A preference for server stability over maximum efficiency (you may even want to throttle your requests)
Send the requests from the server, but don't do it in the PHP script that generates the page.
Instead, store information about the desired messages in a database.
Write another program which periodically checks the database for unsent messages and makes the calls to the API. You could run it using cron; a minimal sketch follows.
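A sketch of that worker, in JavaScript (Node.js) for illustration; the table layout, the query() helper, and sendSms() are all hypothetical:

// send-pending-sms.js - run from cron, e.g.: * * * * * node send-pending-sms.js
const { query } = require('./db');            // hypothetical database helper
const { sendSms } = require('./sms-client');  // hypothetical gateway wrapper

async function main() {
  // Grab a modest batch; a real version needs the locking mentioned above
  // so that overlapping cron runs don't double-send.
  const rows = await query('SELECT id, phone, body FROM messages WHERE sent = 0 LIMIT 50');
  for (const row of rows) {
    await sendSms(row.phone, row.body);       // throttle here if required
    await query('UPDATE messages SET sent = 1 WHERE id = ?', [row.id]);
  }
}

main().catch(console.error);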

How to load a javascript file from the best source

I have a few web servers in different locations and would like to load my javascript files from the fastest (nearest??) server. For example in Location A, I would expect the users to get their files from servers in that Location, but users from Location B would get their files from other servers, hopefully servers from location B, but that is not necessary.
I have found how to load javascript files conditionally, and I think that is a good start. I just need a way to find which is the best source(faster response).
Thanks,
Just use a CDN if you want that minimal performance advantage; it would only make a difference of a few milliseconds.
There is a list of CDNs at http://jquery.com/download/#using-jquery-with-a-cdn
The only advantage of using a CDN is that the user may have downloaded the jQuery library earlier from another website, so the jQuery library is reused from its cache.
If you are encountering performance problems, try profiling the website and checking the amount of time that a resource takes to run or load.
This isn't really a problem the client should solve. You should put your servers behind a proxy that balances the load. If the proxy's bandwidth isn't enough, then I think you're out of luck. A quick-and-dirty solution is to use Math.random() on the client side and choose the server based on that; it should balance the load pretty evenly.
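A minimal sketch of that quick-and-dirty approach (the mirror URLs are hypothetical):

// Pick one mirror at random so the load spreads evenly across them.
var mirrors = [
  'https://a.example.com/js/app.js',  // hypothetical server in location A
  'https://b.example.com/js/app.js'   // hypothetical server in location B
];
var script = document.createElement('script');
script.src = mirrors[Math.floor(Math.random() * mirrors.length)];
document.head.appendChild(script);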
If you were to measure the response time from the mirror servers, you would just introduce more load. Let's say we have a way to determine the response time. You would either request the file from all servers, meaning you just made everything worse, or you would wait for server1 and, if it didn't respond in time, move on to server2. But by doing this you introduced load on server1.
Also, pinging the server isn't a real indicator of that server's available capacity. The server might be able to respond to a ping quickly, since the response is short and requires no real I/O, but requesting a file could mean reading from the disk.

How can I reduce the waiting time of an HTTP response?

While searching for tips to speed up performance, one site suggested that I reduce the number of HTTP requests: with the help of Firebug, a Firefox extension, you can find out how many resource requests are made by your ASP.NET web page. It is very important to reduce the number of HTTP requests on your server; it will help reduce the server load and allow more visitors to access your website.
Based on this, I checked my site and noticed that it has to wait a long time for the HTTP response.
How can I reduce this time to speed up performance?
The most probable reason is some expensive operation in one of the events of the page-load process. You need to optimize. To confirm, you can record a DateTime value in Page_PreInit and check the total processing time taken by Page_PreRender.
Most of the time, the database call will be the culprit.

Does using jQuery.get effectively double the ping time?

Suppose I have some script myScript.js that uses jQuery.get() to retrieve a small piece of data from the server. Suppose also that my ping time is horrible at 1500ms. Does using jQuery.get effectively double the ping time to 3000ms?
Or is there async magic that allows some sort of parallel processing? The reason I'm asking is that we use jQuery.get() fairly liberally and I'm wondering if it is an area we need to look at optimizing.
Edit: "double" compared to if I could somehow rearrange things to just load all the data upon the initial page load and bypass jQuery.get altogether.
Ping time is usually server-related, whereas jQuery is all client side. So the answer is no, it doesn't affect your ping time.
If you're asking whether using jQuery.get (or AJAX in general) can make your client side slower, then the answer is that yes, the more JS you have, the slower the client generally gets if you're trying to process a lot of things, since everything pretty much runs on the same thread. However, by default these AJAX requests are asynchronous, so until the server sends the response back the thread is usually idling anyway.
I'd suggest you open your page in Chrome and use the developer tools to see the network usage. That will tell you exactly how much time is taken 'waiting' on the server.
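If the concern is several calls adding up, note that independent $.get calls already run concurrently; a small sketch (with hypothetical URLs) that fires two at once and waits for both:

// Both requests are in flight at the same time, so the total wait is
// roughly one round trip rather than two.
$.when($.get('/data/part1'), $.get('/data/part2'))
  .done(function (res1, res2) {
    // Each resN is [data, statusText, jqXHR]; the bodies are resN[0].
    console.log(res1[0], res2[0]);
  });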
If you break down a request, you can get an idea of what latency you can expect.
Every TCP connection begins with a three-way-handshake:
SYN (client to server)
SYN-ACK (server to client)
ACK (client to server)
If the request fits within one TCP packet (~1500 bytes), it can be sent as the last part of the handshake to optimize the network flow.
The response might be sent in just one packet as well (depending on its size). Once it is sent, both sides engage in connection termination, which takes two pairs of FIN-ACK sequences unless the connection is kept alive. At this point I'm not entirely sure whether the server can send the FIN together with the last response packet.
So, in the best case scenario you can expect at least 2x ping time, but more likely 3-4x.
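Applied to the 1500 ms ping in the question: the handshake costs one round trip (~1500 ms) before the request even goes out, and the request/response exchange costs at least one more, so the floor is roughly 3000 ms, with 4500-6000 ms plausible once the response spans several packets.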

Is there any good trick for the server to handle more requests if I don't have to send any data back?

I want to handle a lot of POST requests (>100k/sec) from JavaScript clients with some kind of service server. Not much of this data will be stored, but I have to process all of it, so I cannot spend my whole server capacity on merely serving requests. All the processing needs to be done in the same server instance; otherwise I'll need to use a database for synchronization between servers, which will be slower by orders of magnitude.
However, I don't need to send any data back to the clients, and they don't even expect it.
So far my plan has been to create a few proxy server instances that can buffer the requests and send them to the main server in bigger packs.
For example, let's say I need to handle 200k requests/sec and each server can handle 40k. I can split the load between five of them. Then each one will buffer requests and send them on to the main server in packs of 100. This results in 2k requests/sec on the main server (though each message will be about 100 times bigger, probably around 100-200 kB). I could even send them to the main server over UDP to decrease the resources needed (then I'd need only one socket on the main server, right?).
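(A minimal sketch of that buffering idea in JavaScript/Node.js, purely illustrative; onRequest and forwardPack are hypothetical hooks into the proxy:)

// Buffer incoming request bodies and flush them in packs of 100.
const PACK_SIZE = 100;
let buffer = [];

function onRequest(body) {       // called once per incoming POST
  buffer.push(body);
  if (buffer.length >= PACK_SIZE) {
    forwardPack(buffer);         // hypothetical: one request to the main server
    buffer = [];
  }
}

// A real version would also flush on a timer so that a partial
// pack never sits in the buffer indefinitely.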
I'm just wondering whether there is another way to speed things up, especially since, as I said, I don't need to send anything back. I also have full control over the JavaScript clients, but unluckily JavaScript is unable to send data over UDP, which would probably be the solution for me (I don't even care if 0.1% of the data is lost).
Any ideas?
Edit in response to the answers given so far.
The problem isn't the server being too slow at processing events from the queue, or at putting events into the queue itself. In fact, I plan to use the disruptor pattern (http://code.google.com/p/disruptor/), which has been shown to process up to 6 million requests per second.
The only problem I might have is needing 100k, 200k, or 300k sockets open at the same time, which none of the mainstream servers can handle. I know some custom solutions are possible (http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3), but I'm wondering whether there is a way to make even better use of the fact that I don't have to reply to the clients.
(For example, some way to embed part of the data in the initial TCP packet and handle TCP packets as if they were UDP. Or some other kind of magic ;))
Make a single, fast function (probably in C) that receives all requests from a very fast server (like nginx). The only job of this function is to store the requests in a very fast queue (like Redis, if you have enough RAM).
In another process (or on another server), pop items off the queue and do the real work, processing the requests one by one.
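A minimal sketch of the two halves, in JavaScript (Node.js with the node-redis client) for illustration; the list name and handleRequest() are assumptions:

const { createClient } = require('redis');

// Ingest side: push each raw request body onto a Redis list and return at once.
async function enqueue(client, body) {
  await client.rPush('requests', body); // 'requests' is a hypothetical list name
}

// Worker side (a separate process): block until an item arrives, then process it.
async function workLoop() {
  const client = createClient();
  await client.connect();
  for (;;) {
    const item = await client.blPop('requests', 0); // timeout 0 = wait forever
    handleRequest(item.element);                    // hypothetical real work
  }
}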
If you have control of the clients, as you say, then your proxy server doesn't even need to be an HTTP server, because you can assume that all of the requests are valid.
You could implement it as a non-HTTP server that simply sends back a 200, reads the client request until it disconnects, and then queues the requests for processing.
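A minimal sketch of that idea, in JavaScript (Node.js) for illustration; queueForProcessing is a hypothetical hand-off to the queue:

const net = require('net');

// Not a real HTTP server: it acknowledges immediately with a canned 200,
// then keeps reading whatever the client sends until it disconnects.
net.createServer(socket => {
  socket.write('HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n');
  const chunks = [];
  socket.on('data', chunk => chunks.push(chunk));
  socket.on('end', () => queueForProcessing(Buffer.concat(chunks)));
}).listen(8080);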
I think what you're describing is an implementation of a message queue. You will also need something to hand these requests off to whatever queue you use (RabbitMQ is quite good; there are many alternatives).
You'll also need something else running that can do whatever processing you actually want on the requests. You haven't made that very clear, so I'm not too sure exactly what would be right for you. Essentially the idea is that incoming requests are dumped into the queue as quickly and simply as possible by your web server, and then the web server is free to go back to serving more requests. When the system has some resources, it uses them to process the queue; when it's busy, the queue just keeps growing.
I'm not sure what platform you're on, but you might want to look at something like Lighttpd for serving the POSTs. You might (if same-domain restrictions don't shoot you down) get away with running Lighttpd on a subdomain of your application (e.g. post.myapp.com). Failing that, you could put a proper load balancer in front of your webservers altogether (so all requests go to www.myapp.com and the load balancer decides whether to forward them to the web server or to the queue processor).
Hope that helps
Consider using MongoDB for persisting your requests; its fire-and-forget write mode can help your servers respond faster.
