I am using WordPress on a dedicated server from Namecheap, and only one site is running on this server.
Even so, my waterfall time (TTFB) is in the range of 500 ms, and I want to bring it down to around 100 ms.
This is my website (http://ucbrowserdownload.net/) and its waterfall.
As far as I can tell everything is set up correctly on my end, but I still haven't found a solution.
For comparison, you can also check http://labnol.org/
That site also runs WordPress with the same theme, and even though I load far fewer images and posts on my index page, I still get a huge waterfall.
I want to know how to fix this and where the problem lies: in WordPress, in the theme, or in the host.
I have been completely stuck with no solution for the last few weeks.
Your help will be highly appreciated.
Thank you.
Original Source
Optimization of Nginx
This article presents an optimal Nginx configuration. We will briefly go through the parameters that are already familiar and add some new ones that directly affect TTFB.
Connections
First we need to define the number of Nginx worker processes (worker_processes). Each worker process can handle many connections and is bound to a physical CPU core. If you know exactly how many cores your server has, you can specify the number yourself, or let Nginx decide:
worker_processes auto;
# Determine the number of worker processes automatically
In addition, you must specify the number of connections:
worker_connections 1024;
# Number of connections per worker process, typically between 1024 and 4096
Requests
For the web server to be able to process the maximum number of requests, use the multi_accept directive, which is switched off by default:
multi_accept on;
# Worker processes will accept all new connections at once
Note that this is only useful when there is a large number of simultaneous requests. If there are not that many requests, it makes sense to optimize the worker processes so that they do not run in vain:
accept_mutex on;
# Worker processes will accept new connections in turn
Improving TTFB and server response time depends on the tcp_nodelay and tcp_nopush directives:
tcp_nodelay on;
tcp_nopush on;
# Activate the tcp_nodelay and tcp_nopush directives
Without going into too much detail, these two options disable certain TCP features that were relevant in the 90s, when the Internet was just gaining momentum, but make little sense today. The first directive sends data as soon as it is available (bypassing Nagle's algorithm). The second sends the response headers (the web page) and the beginning of the file in a single packet instead of waiting for the packet to fill up (i.e., it enables TCP_CORK). As a result, the browser can start rendering the web page sooner.
At first glance the two options seem contradictory, so tcp_nopush should be used in conjunction with sendfile. In that case packets are filled before being sent, because sendfile is much faster and more efficient than the read + write approach. Once a packet is full, Nginx automatically disables tcp_nopush, and tcp_nodelay forces the socket to send the data.
Enabling sendfile is very simple:
sendfile on;
# Enable a more efficient file-sending method than read + write
The combination of all three directives reduces the load on the network and speeds up file delivery.
Buffers
Another important optimization concerns buffer sizes: if they are too small, Nginx will hit the disk frequently; if they are too large, RAM fills up quickly. Four directives need to be set. client_body_buffer_size and client_header_buffer_size set the buffer sizes for reading the client request body and header, respectively. client_max_body_size sets the maximum size of a client request, and large_client_header_buffers specifies the maximum number and size of buffers for reading large request headers.
The optimal buffer settings will look like this:
client_body_buffer_size 10k;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
# 10k buffer for the request body, 1k for the header, 8m maximum request size, and 2 x 1k buffers for reading large headers
Timeouts and keepalive
Properly configured timeouts and keepalive settings can also significantly improve server responsiveness.
The client_body_timeout and client_header_timeout directives set the timeouts for reading the request body and header:
client_body_timeout 10;
client_header_timeout 10;
# Set the waiting time in seconds
If the client stops responding, reset_timedout_connection tells Nginx to close such connections:
reset_timedout_connection on;
# Close timed-out connections
The keepalive_timeout directive sets how long a keepalive connection stays open before it is closed, and keepalive_requests limits the number of requests served over one keepalive connection from the same client:
keepalive_timeout 30;
keepalive_requests 100;
# 30-second keepalive timeout and a limit of 100 requests per client connection
Finally, send_timeout sets how long Nginx waits between two successive write operations while transmitting the response:
send_timeout 2;
# Nginx will wait 2 seconds between writes before closing the connection
Caching
Enabling caching significantly improves server response time. Caching methods are covered in more detail in the material about caching with Nginx; what matters here is enabling cache-control. Nginx can tell the client to cache rarely changed data that is used often on the client side. To do this, add a line to the server section:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { expires 365d; }
# Target file extensions and cache lifetime
It also does not hurt to cache metadata about frequently used files:
open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# Cache descriptors for up to 10,000 files; entries unused for 20 seconds are dropped, and entries are revalidated every 30 seconds
open_file_cache specifies the maximum number of files whose metadata is cached and how long unused entries are kept. open_file_cache_valid sets how often the cached information is revalidated, open_file_cache_min_uses specifies the minimum number of client accesses before a file is cached, and open_file_cache_errors enables caching of file lookup errors.
Logging
Logging is another feature that can significantly reduce the performance of the entire server and, consequently, response time and TTFB. The best solution is to disable the access log and record only critical errors:
access_log off;
error_log /var/log/nginx/error.log crit;
# Turn off access logging and record only critical errors
Gzip compression
The usefulness of Gzip is hard to overstate. Compression significantly reduces traffic and relieves the channel. But it has a downside: compression takes time, so turning it off would improve TTFB and server response time. At this stage, however, we cannot recommend disabling Gzip, because compression improves the Time To Last Byte, i.e., the time required for the full page to load, and in most cases that is the more important metric. TTFB and server response time will also benefit greatly from the large-scale rollout of HTTP/2, which has built-in header compression and multiplexing, so in the future the gain from disabling Gzip may not be as noticeable as it is now.
PHP Optimization: FastCGI in Nginx
Modern sites rely on server-side technologies such as PHP, which is also important to optimize. Normally PHP opens a file, checks and compiles the code, and then executes it. With many such files and processes this adds up, so PHP can cache the compiled result for rarely changed files using the OPcache module. In addition, Nginx, connected to PHP via the FastCGI module, can store the output of a PHP script and send it to the user instantly.
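As a quick way to verify that OPcache is actually active on a given host, here is a small PHP sketch (this is not from the article; opcache_get_status() simply reports the module's state, and the file name is made up):
<?php
// opcache_check.php - report whether the OPcache module is loaded and enabled.
if (!function_exists('opcache_get_status')) {
    exit("OPcache extension is not loaded\n");
}
$status = opcache_get_status(false);   // false = skip the per-script details
if ($status === false || empty($status['opcache_enabled'])) {
    echo "OPcache is loaded but not enabled (check opcache.enable in php.ini)\n";
} else {
    echo "OPcache is enabled; memory used: " . $status['memory_usage']['used_memory'] . " bytes\n";
}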
Key takeaways
Resource optimization and correct web server settings are the main factors affecting TTFB and server response time. Also remember to update your software regularly to the latest stable release, since updates bring optimizations and performance improvements.
Related
I have to code a website with the capability of watching many live streams (video-surveillance cameras) at the same time.
So far, I'm using MJPEG and JS to play my live videos, and it works well ... but only for up to 6 streams!
Indeed, I'm stuck with the 6-parallel-downloads limit most browsers have (link).
Does anyone know how to bypass this limit? Is there a trick?
So far, my options are:
increase the limit (only possible in Firefox), but I don't like messing with my users' browser settings
merge the streams into one big stream/video on the server side, so that I have only one download at a time. But then I won't be able to deal with each stream individually, will I?
switch to a JPEG stream and deal with a queue of images to be refreshed on the front end (but if I have, say, 15 streams, I'm afraid I will overwhelm the client browser with requests: 15 x 25 images/s)
Do I have any other options? Is there a trick or a library? For example, could I merge my streams into one big pipe (so one download at a time) but still have access to each one individually in the front-end code?
I'm not sure I'm on the right Stack Exchange site to ask this; if I'm not, please tell me ;-)
Why not stream everything over one connection (if you have control over the server side and the line can handle it)? You make one request, and all 15 streams are sent/streamed over that single connection (not as one big stream), with the headers of each chunk matching the appropriate stream ID. Read more: http://qnimate.com/what-is-multiplexing-in-http2/
More in-depth here: https://hpbn.co/http2/
With HTTP/1.0 or 1.1 you are out of luck for this scenario - back when they were developed, a single video or MP3 file was already heavy stuff (workarounds existed, e.g. torrent libraries, but they were unreliable and not suited for most scenarios beyond plain downloading/streaming). For your interactive scenario, HTTP/2 is the way to go, imho.
As Codebreaker007 said, I would prefer HTTP2 stream multiplexing too. It is specifically designed to get around the very problem of too many concurrent connections.
However, if you are stuck with HTTP/1.x, I don't think you're completely out of luck. It is possible to merge the streams in a way that lets the client side destructure and manipulate the individual streams, although admittedly it takes a bit more work, and you might have to resort to client-side polling.
The idea is simple - define a really simple data structure:
[streamCount len1 data1 len2 data2 ...]
Byte 0 ~ 3: 32-bit unsigned int, number of merged streams
Byte 4 ~ 7: 32-bit unsigned int, length of stream 1's data (len1)
Byte 8 ~ 8+len1-1: binary data of stream 1
Byte 8+len1 ~ 8+len1+3: 32-bit unsigned int, length of stream 2's data
...
Each data segment is allowed to have a length of 0 and is handled no differently in that case.
On the client side, poll continuously for more data, expecting this structure. Destructure it and pipe the data into each individual stream's buffer; then you can still manipulate the component streams individually.
On the server side, buffer the data from the individual component streams in memory. On each response, empty the buffers, compose this data structure, and send it (see the sketch below).
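For illustration, here is a minimal PHP sketch of that server-side composition step (the buffer source, variable names, and endpoint name are assumptions, not part of the original answer):
<?php
// compose_frame.php - hypothetical polling endpoint that empties the per-stream
// buffers and returns them as: [streamCount][len1][data1][len2][data2]...
// Assumed: $streamBuffers holds the bytes accumulated for each camera since the
// last poll (in practice this would come from shared memory, Redis, etc.).
$streamBuffers = [
    1 => $bytesFromCamera1 ?? '',
    2 => $bytesFromCamera2 ?? '',
];
$payload = pack('N', count($streamBuffers));    // 32-bit big-endian stream count
foreach ($streamBuffers as $data) {
    $payload .= pack('N', strlen($data));       // 32-bit length prefix
    $payload .= $data;                          // raw stream bytes (may be empty)
}
header('Content-Type: application/octet-stream');
header('Content-Length: ' . strlen($payload));
echo $payload;
On the client, the same structure can be walked with a DataView over the response ArrayBuffer: read a 4-byte length, slice that many bytes for that stream, and repeat.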
But again, this is very much a plaster solution. I would recommend using HTTP2 stream as well, but this would be a reasonable fallback.
Sorry if this is a duplicate; I would think it would be, but I couldn't find anything.
I have a Flex application that posts data back to a PHP/MySQL server via IE. I haven't run into any problems yet, but knowing this ahead of time might save me a bunch of frustration and work. Is there a size limit to posting data via HTTP?
This article says no:
http://www.netlobo.com/ie_form_submit.html
This discussion says yes:
http://bytes.com/topic/php/answers/538226-what-maximum-limit-using-post-method
Everything I can find online goes back and forth on this, so please limit answers to personally tested/verified numbers.
I want to post back an XML string that can be quite large (say, up to 5 MB).
If it makes any difference: the browser will always be IE (our product requires it), the post comes from an HTTPService in Flex, the web server runs PHP, and the DB is MySQL.
It depends on the server configuration. If you're working with PHP under Linux or similar, you can control it via the .htaccess configuration file, like so:
#set max post size
php_value post_max_size 20M
And, yes, I can personally attest to the fact that this works :)
If you're using IIS, I don't have any idea how you'd set this particular value.
The URL portion of a request (GET and POST) can be limited by both the browser and the server - generally a safe size is 2 KB, as almost no browsers or servers use a smaller limit.
The body of a request (POST) is normally* limited by the server on a byte-size basis in order to prevent a type of DoS attack (note that this means character escaping can increase the byte size of the body). The most common server setting is 10 MB, though all popular servers allow this to be increased or decreased via a settings file or panel.
*Some exceptions exist with older cell phone or other small-device browsers - in those cases it is more a function of the heap space reserved for this purpose on the device than anything else.
Also, in the php.ini file there is a setting:
max_input_vars
which in my version of PHP (5.4.16) defaults to 1000.
From the manual:
"How many input variables may be accepted (limit is applied to $_GET, $_POST and $_COOKIE superglobal separately)"
Ref.: http://www.php.net/manual/en/info.configuration.php#ini.max-input-vars
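If you just want to see what your server currently allows, a quick check of the relevant settings (the script name is made up) is:
<?php
// limits.php - print the PHP settings that cap the size and shape of a POST
foreach (['post_max_size', 'upload_max_filesize', 'max_input_vars', 'memory_limit'] as $key) {
    echo $key . ' = ' . ini_get($key) . PHP_EOL;
}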
You can post a large amount of data by adjusting the php.ini setting max_input_vars.
Its default value is 1000, but if you want to send a large amount of data you have to increase it accordingly.
If you can't change it with ini_set, you have to do it through .htaccess or by editing the php.ini file directly:
max_input_vars = 2500
memory_limit = 256M
As David pointed out, I would go with KB in most cases.
php_value post_max_size 2K
Note: my form is simple, just a few text boxes, not long text.
(PHP shorthand for KB is K, as outlined here.)
By default, a POST request has a maximum size of 8 MB, but you can modify it according to your requirements.
You can change it by opening the php.ini file (the PHP configuration file).
Find
post_max_size = 8M ; for me, that was on line 771
and replace the 8 according to your requirements.
One of the better solutions is to avoid submitting more than 1,000 separate input fields: concatenate multiple inputs with a special character, e.g. #.
See this:
<input type='text' name='hs1' id='hs1'>
<input type='text' name='hs2' id='hs2'>
<input type='text' name='hs3' id='hs3'>
<input type='text' name='hs4' id='hs4'>
<input type='text' name='hs5' id='hs5'>
<input type='hidden' name='hd' id='hd'>
Then, using a script (JavaScript or JScript), combine them:
document.getElementById("hd").value = document.getElementById("hs1").value+"#"+document.getElementById("hs2").value+"#"+document.getElementById("hs3").value+"#"+document.getElementById("hs4").value+"#"+document.getElementById("hs5").value
With this, you will bypass the max_input_vars issue. If you increase max_input_vars in the php.ini file, that is harmful to the server because it uses more server cache memory, and this can sometimes crash the server.
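On the PHP side, the combined value can then be split back into its parts; a minimal sketch (the field name follows the example form above, and it assumes # never appears inside the values):
<?php
// Split the single hidden field back into its component values.
$combined = isset($_POST['hd']) ? $_POST['hd'] : '';
$values   = $combined === '' ? array() : explode('#', $combined);
// $values[0] corresponds to hs1, $values[1] to hs2, and so on.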
It is up to the HTTP server to decide whether there is a limit. The product I work on allows the admin to configure it.
For developers who cannot change the PHP configuration because of their web hosting (my settings: 256 MB max size, 1000 max variables):
I had the same issue: only 2 out of 5 big data objects (associative arrays) with substructures were received on the server side.
I found out that each substructure gets "flattened" in the POST request, so one object becomes hundreds of scalar variables. In the end, instead of 5 object variables, the request was really sending many hundreds of elementary variables.
The solution in this case is to serialize each substructure into a string; it is then received on the server as 5 string variables.
Example:
{variable1:JSON.stringify(myDataObject1),variable2:JSON.stringify(myDataObject2)...}
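On the server, each string can be decoded back into an array; a small sketch (the field names match the hypothetical example above):
<?php
// Each POSTed field now arrives as one JSON string, counting as a single
// input variable instead of hundreds of flattened ones.
$dataObject1 = json_decode(isset($_POST['variable1']) ? $_POST['variable1'] : 'null', true);
$dataObject2 = json_decode(isset($_POST['variable2']) ? $_POST['variable2'] : 'null', true);
if ($dataObject1 === null || $dataObject2 === null) {
    http_response_code(400);    // field missing or JSON malformed
    exit('Invalid payload');
}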
This is a cross-post from the Software Engineering Q&A site.
There are a couple of (actually, a lot of) websites that provide internet speed tests. I tried to build the same thing, but I'm still not able to get accurate results.
What I tried
I created several files on the server, let's say of (1, 8, 32, 64, 128, 256, 512, 1024) KB.
Then on the client side, I'm downloading each of them.
What I'm measuring
start time of the request to the server
time of the first response from the server
time when the download finishes
then internet speed = total transferred size / time taken in seconds.
I checked a couple of other websites; they do not download large files (more than 5 KB), but instead make a lot of requests to the server in parallel.
There also seems to be some smoothing or stabilizing factor, something that samples the data and calculates better results.
Here is how speedtest.net implements it, but I'm still not able to understand it properly:
https://support.speedtest.net/hc/en-us/articles/203845400-How-does-the-test-itself-work-How-is-the-result-calculated-
Can someone help me understand this and point me in the right direction for calculating internet speed?
Edit: I want to show users of my web app how much speed they are getting on it. For this, I'm trying to apply a general approach similar to Speedtest's, but instead of measuring against multiple servers, I want to try with one server only.
The general idea is to find parameters that let you saturate the physical communication channel. The main part is determining how many parallel downloads it takes to reach that goal.
A single connection is clearly not sufficient, because there are many periods of overhead during which you could be sending other packets. In a very rough approximation: to receive data you send a request packet from A to B, and the data is then sent back from B to A; you can clearly request something else while that data is coming back. You can also think about how many data packets can be in flight along the link from X to Y at once, just as several cars can be on the same road from B to A at the same time, each car being a packet from a given connection.
Determining the speed of a connection is highly dependent on many factors, and what is obtained is only an approximation.
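To feed those parallel downloads from a single server, each request can hit a simple endpoint that returns an incompressible, uncacheable payload of a known size. A rough PHP sketch (the endpoint name and size parameter are made up for illustration):
<?php
// payload.php?kb=256 - return N kilobytes of random (incompressible) data
// so that gzip and caches do not distort the measurement.
$kb   = max(1, min(4096, (int)($_GET['kb'] ?? 256)));   // clamp to 1 KB .. 4 MB
$data = random_bytes($kb * 1024);
header('Content-Type: application/octet-stream');
header('Content-Length: ' . strlen($data));
header('Cache-Control: no-store, no-cache, must-revalidate');
echo $data;
The client then starts several of these downloads at once, sums the bytes received, and divides by the elapsed wall-clock time.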
Here is the case:
I have a JS file that monitors web ads. Because of the browser cache, when I update the JS on the server side, the JS on the client side is not refreshed immediately. How can I force the client to refresh the JS as soon as I update it on the server?
P.S. The add-a-version-number strategy is not useful in my case.
Simple strategy: add a version number as a query string to your JS files, and change the number. This will cause browsers to fetch your JS files again:
<script src="mysource.js?version=123"></script>
Whenever you change your script on the server, change this version number in the HTML too. Or better yet, apply a random number as the version value every time you request this script.
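If the page is rendered by PHP, a common way to avoid bumping the number by hand is to derive the version from the file's modification time; a small sketch (the script path is an assumption):
<?php
// Emit the script tag with the file's mtime as the cache-busting version,
// so the URL changes automatically whenever mysource.js is updated.
$path = __DIR__ . '/mysource.js';   // hypothetical location of the script
$ver  = file_exists($path) ? filemtime($path) : time();
echo '<script src="mysource.js?version=' . $ver . '"></script>';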
You can use HTTP's cache-control mechanisms to control the browser's caching.
When serving a copy of your JS file, include an ETag and/or Last-Modified header in the response. Also include a "Cache-Control: must-revalidate" header. This tells the browser that it must check back with the server every time, and it can send an If-None-Match and/or If-Modified-Since header in future requests to ask the server to send the file only if it's changed.
If you'd like to avoid the load of browsers checking with the server every time, and it's OK for the changes to not take effect immediately, you can also include a Date header with the current time and an Expires header set to some point in the future — maybe 12 or 24 hours. That allows the browser to use its cached copy for the specified amount of time before it has to check back with your server again.
HTTP's cache-control features are pretty robust, but there are plenty of nuances, such as controls for intermediate caches (e.g. other systems between your server and the user's browser). You'll want to read about caching in HTTP overall, not just the specific header fields that I've mentioned.
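If the JS file is served through a small PHP wrapper, the revalidation approach sketched above might look roughly like this (the wrapper name and script path are assumptions, not from the original answer):
<?php
// serve_js.php - serve mysource.js with validators so browsers revalidate
// their cached copy instead of blindly reusing a stale one.
$path  = __DIR__ . '/mysource.js';  // hypothetical script location
$mtime = filemtime($path);
$etag  = '"' . md5($mtime . '-' . filesize($path)) . '"';
header('Content-Type: application/javascript');
header('Cache-Control: must-revalidate, max-age=0');
header('ETag: ' . $etag);
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');
// Answer 304 Not Modified if the browser's cached copy is still current.
if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? '') === $etag) {
    http_response_code(304);
    exit;
}
readfile($path);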
You can do this by changing the name of the file. Add a version number (it could be a query parameter, e.g. filename.js?v= followed by the output of PHP's time()) or just append some random numbers to the end of the filename.
Actually, I'm not sure whether you can force the client to refresh this type of file, but by changing the file name you force the browser to fetch the newest version.
I have created a website for a friend. Because he wished to have a music player continue to play music through page loads, I decided to load content into the page via ajax (facilitated by jQuery). It works fine, it falls back nicely when there is no javascript, and the back/forward buttons are working great, but it's dreadfully slow on the server.
A couple points:
The initial page load is fairly quick. The Chrome developer console tells me that "index.php" loads in about 2.5 seconds. I have it set up so that query string params dictate which page is loaded, and this time frame is approximately accurate for them all. For the homepage, there is 8.4KB of data loaded.
When I load the content in via an ajax request, no matter the size of the data downloaded, it takes approximately 20 seconds. The smallest amount of data that is loaded in this way is about 500 bytes. There is obviously a mismatch here.
So Chrome tells me that the vast majority of the time spent is "waiting" which I take to mean that the server is dealing with the request. So, that can only mean, I guess, that either my code is taking a long time, or something screwy is going on with the server. I don't think it's my code, because it's fairly minimal:
$file = "";
if (isset($_GET['page'])) {
$file = $_GET['page'];
} else if (isset($_POST['page'])) {
$file = $_POST['page'];
} else {
$file = "home";
}
$file = 'content/' . $file . '.php';
if (file_exists($file)) {
include_once($file);
} else {
include_once('content/404.php');
}
This is in a content_loader.php file which my javascript (in this scenario) sends a GET request to along with a "page" parameter. HTML markup is returned and put into a DIV on the page.
I'm using the jQuery .get() shorthand function, so I don't imagine I could be messing anything up there, and I'm confident it's not a Javascript problem because the delay is in waiting for the data from the server. And again, even when the data is very small, it takes about 20 seconds.
I currently believe it's a problem with the server, but I don't understand why a request made through javascript would be so much slower than a request made the traditional way through the browser. As an additional note, some of the content pages do connect to a MySQL database, but some do not. It doesn't seem to matter what the page requires for processing or how much data it consists of, it takes right around 20 seconds.
I'm at a loss... does anyone know of anything that might explain this? Also, I apologize if this is not the correct place for such a question, none of the other venues seemed particularly well suited for the question either.
As I mentioned in my comment, a definite possibility could be reverse DNS lookups. I've had this problem before and I bet it's the source of your slow requests. There are certain Apache config directives you need to watch out for in both regular apache and vhost configs as well as .htaccess. Here are some links that should hopefully help:
http://www.tablix.org/~avian/blog/archives/2011/04/reverse_dns_lookups_and_apache/
http://betabug.ch/blogs/ch-athens/933
To find more resources just Google something like "apache slow reverse dns".
A little explanation
In a reverse DNS lookup an attempt is made to resolve an IP address to a hostname. Most of the time with services like Apache, SSH and MySQL this is unnecessary and it's a bad idea as it only serves to slow down requests/connections. It's good to look for configuration settings for your different services and disable reverse DNS lookups if they aren't needed.
In Apache there are certain configuration settings that cause a reverse lookup to occur. Things like HostnameLookups and allow/deny rules specifying domains instead of IP addresses. See the links above for more info.
As you suggested in your comment, the PHP script is executing quickly once it finally runs. The time is spent waiting on Apache - most likely to do a reverse DNS lookup, and failing. You know the problem isn't with your code, it's with the other services involved in the request.
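To check whether reverse DNS really is the bottleneck, one quick experiment is to time a lookup from PHP itself; a small diagnostic sketch (not from the original answer, just an illustration):
<?php
// rdns_check.php - time how long a reverse DNS lookup of the client IP takes.
// If this takes seconds (or times out), slow or failing reverse DNS is a likely
// culprit for the long "waiting" phase you see in Chrome.
$ip    = $_SERVER['REMOTE_ADDR'] ?? '8.8.8.8';  // fall back to a public IP when run from the CLI
$start = microtime(true);
$host  = gethostbyaddr($ip);                    // the reverse lookup itself
$ms    = (microtime(true) - $start) * 1000;
printf("reverse lookup of %s -> %s in %.1f ms\n", $ip, var_export($host, true), $ms);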
Hope this helps!