Sorry if this is a duplicate; I would think it would be, but I couldn't find anything.
I have a Flex application that posts data back to a PHP/MySQL server via IE. I haven't run into any problems yet, but knowing this ahead of time might save me a bunch of frustration and work. Is there a size limit to posting data via HTTP?
This article says no:
http://www.netlobo.com/ie_form_submit.html
This discussion says yes:
http://bytes.com/topic/php/answers/538226-what-maximum-limit-using-post-method
And everything I'm able to find online goes back and forth, so please limit answers to personally tested/verified numbers.
I want to post back an XML string that can be quite large (say up to 5 MB).
If it makes any difference: the browser will always be IE (our product requires it), the post comes from an HTTPService in Flex, the web server runs PHP, and the DB is MySQL.
It depends on the server configuration. If you're working with PHP under Linux or similar, you can control it using the .htaccess configuration file, like so:
#set max post size
php_value post_max_size 20M
And, yes, I can personally attest to the fact that this works :)
If you're using IIS, I don't have any idea how you'd set this particular value.
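If you are not sure what a given server currently allows, a quick sanity check from PHP itself (just a sketch using the standard ini_get() function) is to print the relevant directives:
<?php
// Show the request-size-related limits currently in effect for this PHP install.
foreach (array('post_max_size', 'upload_max_filesize', 'memory_limit', 'max_input_vars') as $key) {
    echo $key . ' = ' . ini_get($key) . "\n";
}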
The URL portion of a request (GET and POST) can be limited by both the browser and the server; generally the safe size is 2 KB, as there are almost no browsers or servers that use a smaller limit.
The body of a request (POST) is normally* limited by the server on a byte-size basis in order to prevent a type of DoS attack (note that this means character escaping can increase the byte size of the body). The most common server setting is 10 MB, though all popular servers allow this to be increased or decreased via a settings file or panel.
*Some exceptions exist with older cell phone or other small-device browsers; in those cases it is more a function of heap space reserved for this purpose on the device than anything else.
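To get a feel for how much escaping can inflate the body that the server actually sees, here is a rough PHP illustration (urlencode() stands in for whatever escaping your client applies; the sample string is made up):
<?php
// Escaping non-ASCII and special characters increases the byte size of the payload.
$xml = str_repeat('<item name="café & été"/>', 1000);
echo strlen($xml) . " bytes raw\n";
echo strlen(urlencode($xml)) . " bytes after urlencode()\n";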
Also, in PHP.INI file there is a setting:
max_input_vars
which in my version of PHP (5.4.16) defaults to 1000.
From the manual:
"How many input variables may be accepted (limit is applied to $_GET, $_POST and $_COOKIE superglobal separately)"
Ref.: http://www.php.net/manual/en/info.configuration.php#ini.max-input-vars
You can post a larger amount of form data by adjusting the php.ini setting max_input_vars.
Its default value is 1000, but if you want to send a larger number of input variables you have to increase it accordingly.
If you can't change it with ini_set(), you have to do it through .htaccess or by editing php.ini directly:
max_input_vars = 2500
memory_limit = 256M
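If you suspect variables are being silently dropped, a rough server-side check (a sketch only; PHP also logs a warning when the limit is hit) is to compare what arrived against the configured limit:
<?php
// If the number of received POST variables is at or above max_input_vars,
// PHP has almost certainly discarded the rest of the form fields.
$limit = (int) ini_get('max_input_vars');
if ($limit > 0 && count($_POST, COUNT_RECURSIVE) >= $limit) {
    error_log("POST data may have been truncated (max_input_vars = $limit)");
}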
As David pointed out, I would go with KB in most cases.
php_value post_max_size 2K
Note: my form is simple, just a few text boxes, not long text.
(The PHP shorthand for kilobytes is K.)
By default, a POST request has a maximum size of 8 MB, but you can modify it according to your requirements.
The modification can be done by opening the php.ini file (the PHP configuration settings).
Find
post_max_size = 8M   ; for me, that was on line 771
and replace 8M according to your requirements.
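Note that when the request body is bigger than post_max_size, PHP does not raise an error in your script; it simply hands you an empty $_POST. A small defensive check (just a sketch; adapt the error handling to your app) makes that failure visible:
<?php
// If the body exceeded post_max_size, $_POST and $_FILES arrive empty even
// though the Content-Length header shows that data was sent.
if ($_SERVER['REQUEST_METHOD'] === 'POST'
        && empty($_POST) && empty($_FILES)
        && !empty($_SERVER['CONTENT_LENGTH'])) {
    die('Request rejected: body exceeds post_max_size (' . ini_get('post_max_size') . ')');
}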
One of the best solutions for this is not to use more than 1,000 input fields in the first place. You can concatenate multiple inputs with a special character, for example #.
See this:
<input type='text' name='hs1' id='hs1'>
<input type='text' name='hs2' id='hs2'>
<input type='text' name='hs3' id='hs3'>
<input type='text' name='hs4' id='hs4'>
<input type='text' name='hs5' id='hs5'>
<input type='hidden' name='hd' id='hd'>
Using any script (JavaScript or JScript),
document.getElementById("hd").value = document.getElementById("hs1").value+"#"+document.getElementById("hs2").value+"#"+document.getElementById("hs3").value+"#"+document.getElementById("hs4").value+"#"+document.getElementById("hs5").value
With this, you will bypass the max_input_vars issue. If you increase max_input_vars in the php.ini file, that is harmful to the server because it uses more server cache memory, and this can sometimes crash the server.
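On the PHP side you would then split the combined field back into its parts, for example (assuming the hidden field is named hd as in the HTML above and that # never occurs inside the values):
<?php
// Split the single concatenated field back into individual values;
// index 0 corresponds to hs1, index 1 to hs2, and so on.
$values = explode('#', isset($_POST['hd']) ? $_POST['hd'] : '');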
It is up to the http server to decide if there is a limit. The product I work on allows the admin to configure the limit.
For developers who cannot change the PHP configuration because of their web hosting (my settings: 256 MB max size, 1,000 max variables):
I had the same issue: only 2 out of 5 big data objects (associative arrays) with substructures were received on the server side.
I found out that the whole substructure is being "flattened" in the POST request, so one object becomes hundreds of literal variables. In the end, instead of 5 object variables, the request is really sending many hundreds of elementary variables and runs into the max_input_vars limit.
The solution in this case is to serialize each of the substructures into a string; it is then received on the server as 5 string variables.
Example:
{variable1:JSON.stringify(myDataObject1),variable2:JSON.stringify(myDataObject2)...}
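On the server each of those strings can then be decoded back into an array, e.g. (variable names taken from the example above):
<?php
// Each substructure arrives as a single string variable; decode it back.
$myDataObject1 = json_decode($_POST['variable1'], true);
$myDataObject2 = json_decode($_POST['variable2'], true);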
I used https://github.com/js-cookie/js-cookie to set cookies. When I check in the browser, the cookies are set properly, but when I extract the cookies on the server side (PHP) it behaves abnormally. Sometimes $_COOKIE holds all the data, which is normal and fine, but sometimes $_COOKIE is missing some of the cookie data that was set on the client side.
You can see in the screenshot that the sub-total and laravel_session keys are missing when I print the cookies in PHP, while both are present in the request header.
I am using Laravel 5.1
Cookie size is limited to about 4,000 bytes (including key, value, and expiration date). You probably exceeded the limit, and your data was cut off.
You can increase the allowed header size by changing the value of LimitRequestFieldSize in your Apache configuration file.
Keep in mind that storing so much data in cookies is generally a sign of bad design; try using a session or local storage instead.
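If you want to confirm that the header really is the culprit, a quick check on the PHP side (just a sketch; browsers apply their own per-cookie limits and the server caps each header line) is to log the size of the raw Cookie header:
<?php
// Size of the raw Cookie header as PHP received it; if cookies are being
// dropped, this number is usually sitting close to a limit.
$rawCookieHeader = isset($_SERVER['HTTP_COOKIE']) ? $_SERVER['HTTP_COOKIE'] : '';
error_log('Cookie header is ' . strlen($rawCookieHeader) . ' bytes');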
I am using the WordPress framework on a dedicated server from Namecheap, and only one site is running on this server.
Even so, I'm getting waterfall times in the range of 500 ms, but I want to bring them down to around 100 ms.
This is my website (http://ucbrowserdownload.net/) and its waterfall.
You can see that everything looks fine from my end, but I still haven't found a solution.
You can also check http://labnol.org/
That website is also built with WordPress and uses the same theme; even though I load far fewer images and posts on my index page, I still see a huge waterfall.
I want to know how to solve this and where the problem is: in WordPress, in the theme, or in the hosting.
I've been completely stuck with no solution for the last few weeks.
Your help will be highly appreciated.
Thank you.
Original Source
Optimization of Nginx
The optimal Nginx configuration is presented in this article. Let's briefly go over the parameters that are already familiar and add some new ones that directly affect TTFB.
Connections
First we need to define the number of Nginx "workers" via the worker_processes directive. Each worker process is able to handle many connections and is bound to a physical processor core. If you know exactly how many cores your server has, you can specify the number yourself, or let Nginx decide:
worker_processes auto;
# Determine the number of worker processes automatically
In addition, you must specify the number of connections:
worker_connections 1024;
# Number of connections per worker process, typically from 1024 to 4096
Requests
So that the web server can process the maximum number of requests, use the multi_accept directive, which is switched off by default:
multi_accept on;
# Worker processes will accept all new connections at once
Note that this is only useful when a large number of requests arrive simultaneously. If there are not that many requests, it makes sense to optimize the worker processes instead, so that they do not work in vain:
accept_mutex on;
# Worker processes will take turns accepting connections
Improving TTFB and server response time depends on the tcp_nodelay and tcp_nopush directives:
tcp_nodelay on;
tcp_nopush on;
# Activate the tcp_nodelay and tcp_nopush directives
Without going into too much detail, these two options disable certain TCP features that were relevant in the 90s, when the Internet was just gaining momentum, but make no sense in the modern world. The first directive sends data as soon as it is available (bypassing the Nagle algorithm). The second sends the response header (of the web page) and the beginning of the file without waiting for the packet to fill up (i.e. it enables TCP_CORK), so the browser can start rendering the web page sooner.
At first glance the two options seem contradictory, which is why tcp_nopush should be used together with sendfile. In that case packets are filled before being sent, since sendfile is much faster and more efficient than the read + write approach. Once a packet is full, Nginx automatically disables tcp_nopush, and tcp_nodelay forces the socket to send the data. Enabling sendfile is very simple:
sendfile on;
# Enable a more efficient file-sending method than read + write
The combination of all three directives reduces the load on the network and speeds up the sending of files.
Buffers
Another important optimization concerns buffer sizes: if they are too small, Nginx will frequently go to disk; if they are too big, RAM will fill up quickly. Four directives are involved. client_body_buffer_size and client_header_buffer_size set the buffer sizes for reading the client request body and header, respectively. client_max_body_size sets the maximum size of a client request, and large_client_header_buffers specifies the maximum number and size of buffers for reading large request headers.
Optimal buffer settings look like this:
client_body_buffer_size 10k;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
# 10 KB buffer for the request body, 1 KB for the header, an 8 MB cap on the request size, and 2 buffers of 1 KB for reading large headers
Timeouts and keepalive
Properly configured timeouts and keepalive can also significantly improve server responsiveness.
The client_body_timeout and client_header_timeout directives set how long Nginx waits while reading the request body and header:
client_body_timeout 10;
client_header_timeout 10;
# Set the waiting time in seconds
If the client stops responding, reset_timedout_connection tells Nginx to close such connections:
reset_timedout_connection on;
# Close timed-out connections
keepalive_timeout sets how long a keepalive connection stays open, and keepalive_requests limits the number of keepalive requests from the same client:
keepalive_timeout 30;
keepalive_requests 100;
# Set the keepalive timeout to 30 seconds and allow 100 requests per client connection
Finally, send_timeout sets the wait time between two write operations while transmitting the response:
send_timeout 2;
# Nginx will wait up to 2 seconds between writes
Caching
Enabling caching significantly improves server response time. Caching methods are laid out in more detail in the material about caching with Nginx, but what matters here is enabling Cache-Control. Nginx can tell clients to cache rarely changing data that is used often on the client side. To do this, add a line to the server section:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { expires 365d; }
# Target file formats and cache duration
It also does not hurt to cache information about commonly used files:
open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# Enable caching of metadata for 10,000 files, revalidated every 30 seconds
open_file_cache specifies the maximum number of files for which information is stored and for how long. open_file_cache_valid sets the interval after which the cached information must be revalidated, open_file_cache_min_uses specifies the minimum number of client accesses required for a file to stay in the cache, and open_file_cache_errors enables caching of file lookup errors.
Logging
This is another feature that can significantly reduce the performance of the entire server and, accordingly, the response time and TTFB. The best solution is to disable the access log and record only critical errors:
access_log off;
error_log /var/log/nginx/error.log crit;
# Turn off the main access log
Gzip compression
The usefulness of gzip is difficult to overstate. Compression can significantly reduce traffic and relieve the channel. But it has a downside: compression takes time, so to improve TTFB and server response time it would have to be turned off. At this stage we cannot recommend disabling gzip, because compression improves the Time To Last Byte, i.e. the time required for a full page load, and that is in most cases the more important metric. TTFB and server response time will also benefit greatly from the wide adoption of HTTP/2, which has built-in header compression and multiplexing, so in the future the gain from disabling gzip may not be as noticeable as it is now.
PHP Optimization: FastCGI in Nginx
All modern sites rely on server-side technology such as PHP, which is also important to optimize. Typically PHP opens a file, validates and compiles the code, and then executes it. With the OPcache module, PHP can cache the compiled result for rarely changing files, and Nginx, connected to PHP through the FastCGI module, can store the result of a PHP script and send it to the user instantly.
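If you go down the OPcache route, a quick way to verify that it is actually active is a small PHP sketch like this (opcache_get_status() is part of the standard OPcache extension; the memory figure is only available when the cache is on):
<?php
// Report whether OPcache is active and how much memory it is currently using.
if (function_exists('opcache_get_status') && ($status = opcache_get_status(false))) {
    echo 'OPcache enabled, memory used: ' . $status['memory_usage']['used_memory'] . " bytes\n";
} else {
    echo "OPcache is not enabled\n";
}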
The most important points
Optimizing resources and configuring the web server correctly are the main factors influencing TTFB and server response time. Also do not forget about regular software updates to stable releases, which bring optimizations and performance improvements.
I have a form which is submitted via mailto to an email server.
As most of you know, there is a limit to the mailto content, beyond which it won't work because it exceeds the URL character limit.
I developed some custom, domain-specific data compression, but it is still not enough (if all fields are filled, it will bust the limit; this is rare... but rare is bad enough for the client. Never is better.).
I found the Lempel–Ziv–Welch algorithm (http://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch) and concluded it would let me save 40% of the length on average.
Unfortunately, I of course need to call encodeURIComponent to send it via mailto, and since the LZW algorithm returns many characters that are not URL-safe, URL-encoding will in fact make the result worse.
Before you tell me it would be easier to POST to a server using a server-side language, let me tell you that this is a really unique situation where the form has to be submitted via email from a client-side application, because email is the only way the end users can connect with the outside world...
So, do you know any way to compress data efficiently without encodeURIComponent ruining it all?
Or is there a way to send content to mailto without going through the browser?
I've seen some ways to open Outlook with ActiveX and such, but this is pretty browser/email-client specific.
I also checked options where I save the form info to a file using JavaScript... but the application users are, well, let's just say not experts at all, and from what I've been told, they could fail to attach the file to the email. (Yes, they are that bad.)
So I'm looking for the simplest option, where user involvement is almost zero and the result is an email sent with the form data, all of that without server-side languages, with a compression algorithm if applicable.
Thanks a lot for your help!
You'll have a hard time getting to "never" with compression, since there will always be strings that a compressor expands instead of compresses. (Basic mathematical property of compression.)
Having said that, there are much better compressors than LZW, depending on the length of your input. You should try zlib and lzma. The binary output of those would then need to be coded using only the allowed URL characters.
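For what it's worth, here is the shape of that idea in PHP, i.e. deflate and then re-encode the binary output with URL-safe characters (a sketch only, with a made-up $formData input; in the asker's client-side-only setup the same steps would have to be done in JavaScript, e.g. with a zlib port such as pako):
<?php
// Hypothetical input: the serialized form data.
$formData = 'name=John&comment=a fairly long block of free text';

// Compress, then map the binary output onto URL-safe characters (base64url).
$compressed = gzdeflate($formData, 9);
$urlSafe    = rtrim(strtr(base64_encode($compressed), '+/', '-_'), '=');

// Reverse on the receiving side.
$original = gzinflate(base64_decode(strtr($urlSafe, '-_', '+/')));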
Is there a limit to the length of the parameter that can be added to the URL for an Ajax request? I am using the Thin server on Ruby, and made an Ajax request from the web browser in this format:
var io = new XMLHttpRequest();
io.open("GET", "http://localhost:3000/?v=" + encodeURIComponent(JSON.stringify(v)), true);
io.send();
When the length of the string v exceeds about 7000 bytes, it seems to crash. When it is less, it seems to work. Is my observation right? Where is the restriction coming from: Thin, JavaScript, or the browser? I use the Google Chrome browser.
Is there a limit to the length of the parameter that can be added to the URL for Ajax?
Yes, if you are using a GET request there's a limit which will depend on the client browser. And this limit has nothing to do with AJAX. IIRC it was around 4K for IE but things might have changed. But in any case there's a limit. If you don't want to be limited you should use POST.
Restriction most likely comes from the browser. According to this discussion you should try to keep your URLs under about 2000 characters.
The limit on a GET request is measured in bytes rather than characters. With plain ASCII one character is one byte, but non-ASCII UTF-8 characters take at least two bytes each (and more once percent-encoded), so the effective number of characters you can fit is roughly halved or worse.
You won't have this problem with POST, though.
I have a very specific situation where a JavaScript page creates a new window containing a meta refresh that goes to a PHP page, with one variable holding everything the JavaScript page has put into it. Like so:
form.write('<meta http-equiv="refresh" content="0;URL=contentreq/index.php?data=');
Very long text with a lot of data (over 3000 characters)
form.write('" />');
The PHP page gets it like this:
$data = $_GET['data'];
$order = htmlentities(stripslashes(strip_tags($data)));
The problem is that an existing app works this way and I'm not in a position to fix it properly, so I was wondering whether there is some way to encrypt/encode the data variable so that it becomes a lot shorter (my Apache server does not like an 82,512-character-long URL...), like TinyURL does, but such that PHP can decode it back. I don't know the right term for this, so my googling does not give me many results.
The right term for this would be compression, but it won't work in this case either: the general URL length limit is around 2,000 characters because IE sets it that low, and if your data is tough to compress, you won't reliably fit 3 KB into 2 KB.
The only idea that comes to mind, if you can't store the data in a PHP session, is to "send the data ahead" using an Ajax POST request. That can carry much more than 3 KB. When the request has been sent, redirect to the next page, and have PHP fetch the transmitted data from the session or something.
It's not pretty, but the only idea that comes to my mind for this very specific scenario.
Another idea, although this is purely client-side so you can't pass on the URL, is storing the data in the browser-side local storage.
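In case it helps, here is a minimal PHP sketch of the "send the data ahead" approach described above (the endpoint name receive_data.php and the field name data are made up for illustration):
<?php
// receive_data.php (hypothetical endpoint): the Ajax POST sends the big payload
// here first; we stash it in the session so the next page can use it
// without putting anything in the URL.
session_start();
$_SESSION['handoff_data'] = isset($_POST['data']) ? $_POST['data'] : '';

// On the next page (after the redirect), read it back:
// session_start();
// $data = isset($_SESSION['handoff_data']) ? $_SESSION['handoff_data'] : '';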
Reference:
Maximum URL length is 2,083 characters in Internet Explorer
jQuery Ajax
PHP Sessions
A URL shortening service saves hash-URL pairs and does redirects from the short URL to the long ones. This is not applicable to your case because your data URL part is dynamic.
An approach you can take is to put all your data in a cookie and let the PHP script read it from that cookie. Another option would be to POST your request to the PHP script, but this one may not be applicable.
Also note that there are differences between encryption, encoding and compression. You weren't asking for encryption or encoding, but compression.
Regards,
Alin
Compression was the term I was looking for. I've rebuilt the JavaScript function to do a POST using the link Pekka sent. It now works with the bizarrely long queries.
Thanks!
This is the code I've used:
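// Build a hidden form on the fly and submit it as a POST request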
document.body.innerHTML += '<form id="dynForm" action="http://example.com/" method="post"><input type="hidden" name="q" value="a"></form>';
document.getElementById("dynForm").submit();