An Ajax request with gzip Content-Encoding (IIS7) is not working. Below is the code that sends the request.
Can someone help me figure out what is wrong in my code?
Thanks in advance.
function sendRequest(url, callback, postData)
{
    var req = createXMLHTTPObject();
    if (!req) {
        return;
    }
    var method = (postData) ? "POST" : "GET";
    req.open(method, "xml/" + url, true);
    req.setRequestHeader('User-Agent', 'XMLHTTP/1.0');
    if (postData) {
        req.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
        req.setRequestHeader("Content-Encoding", "gzip");
    }
    req.onreadystatechange = function() {
    }
    req.send(postData);
}
For security reasons, the browser does not allow you to override some headers, including "Content-Encoding".
One way to transparently get the requests from your XMLHttpRequest highly compressed is to use HTTP/2 (e.g. serve your website via CloudFlare).
When using HTTP/2, although the HTTP headers do not say Content-Encoding: gzip, the underlying HTTP/2 protocol compresses everything it can.
It also compresses better than plain gzip because:
it compresses the headers themselves
header compression uses a standard dictionary
I think body compression builds a dictionary over multiple messages (brotli - I haven't double-checked that though)
You can see whether your server is using HTTP/2 as follows:
Open Chrome and press F12 to open the developer tools
Click on the Network tab
Close the request inspector panel (the one with the Headers, Preview, Response and Timing tabs)
Right-click on the Name header of the list of requests and tick Protocol
Navigate to your website and watch which protocol is used for each request - in the Protocol column you want to see h2, not http/1.1
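If you would rather check from script than from DevTools, the Resource Timing API also exposes the negotiated protocol; a minimal sketch to run in the page's console:

// Logs the protocol the page itself was fetched over, e.g. "h2" or "http/1.1".
var nav = performance.getEntriesByType('navigation')[0];
console.log('protocol: ' + nav.nextHopProtocol);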
I wouldn't recommend using JavaScript compression libraries, because the extra client-side work causes slowdown and inefficiency.
The problem doesn't seem to be related to the header but to the compression itself.
You don't seem to compress your postData.
If postData is already compressed, there is no need to try to manually set Content-Encoding.
If it is not, either let the browser negotiate the transfer encoding with the server (this is part of the protocol and done automatically, with the server saying whether it accepts it, but I think that's rarely the case) or (if you really, really need to) encode it yourself. This SO question points to a library for compressing browser-side: JavaScript implementation of Gzip
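A minimal sketch of the encode-it-yourself route, assuming a gzip library such as pako is loaded and, importantly, that your server-side handler is written to expect a gzip-compressed body (browsers will not decompress request bodies for you):

var compressed = pako.gzip(postData); // Uint8Array of gzipped bytes
var req = new XMLHttpRequest();
req.open('POST', 'xml/' + url, true);
// Use a binary content type; the server must know the body is gzipped.
req.setRequestHeader('Content-Type', 'application/octet-stream');
req.send(compressed); // send the raw compressed bytes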
I want the user to be able to enter their website URL into an input box that is part of a Chrome extension, and have the extension use an AJAX request (or something similar) to detect, and tell the user, whether the server behind the URL supports sending responses via HTTP/2. Is this possible?
Maybe the WebRequest API has a way of picking up this information? Or the new Fetch API? Could your request somehow tell the server that only HTTP/2 replies are understood? I can't see an obvious way.
I know you can use window.chrome.loadTimes().connectionInfo to get the protocol of the current page, but this requires loading the whole page, which I don't want to do.
Example URLs:
Delivered over HTTP2: https://cdn.sstatic.net/
Delivered over HTTP 1.1: https://stackoverflow.com/
HTTP/2 responses require a "status" response header - https://http2.github.io/http2-spec/#HttpResponse - so to check whether a response used HTTP/2, you can use the chrome.webRequest.onHeadersReceived event with "responseHeaders" in extraInfoSpec. For example, with your test cases:
chrome.webRequest.onHeadersReceived.addListener(function(details) {
    var isHttp2 = details.responseHeaders.some(function(header) {
        return header.name === 'status';
    });
    console.log('Request to ' + details.url + ', http2 = ' + isHttp2);
}, {
    urls: ['https://cdn.sstatic.net/*', 'http://stackoverflow.com/*'],
    types: ['xmlhttprequest']
}, ['responseHeaders']);

// Tests:
fetch('http://stackoverflow.com');
fetch('https://cdn.sstatic.net');
EDIT: Apparently you can do this with the iframe and webRequest trick! I found a reference gist (though I haven't tested it myself):
https://gist.github.com/dergachev/e216b25d9a144914eae2
OLD ANSWER
You probably won't be able to do this without an external API. Here's why:
1) Using Ajax alone requires that the server behind the URL being tested send CORS headers back; otherwise the browser will not accept the response.
2) You could create an iframe on the fly and use chrome.loadTimes().connectionInfo in the iframe's contentWindow, but if the server sends an X-Frame-Options: Deny header the browser won't let you load the URL in the iframe either.
3) Stripping the X-Frame-Options header via the webRequest API, as mentioned here:
Getting around X-Frame-Options DENY in a Chrome extension?
will likely not work; AFAIK Chrome extensions are not allowed to modify the response body.
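For reference, the header-stripping part on its own would look roughly like this sketch (it needs the webRequest, webRequestBlocking and host permissions; whether the framed page then exposes what you need is a separate question):

chrome.webRequest.onHeadersReceived.addListener(function(details) {
    // Drop X-Frame-Options so the target page can load in our iframe.
    var headers = details.responseHeaders.filter(function(header) {
        return header.name.toLowerCase() !== 'x-frame-options';
    });
    return { responseHeaders: headers };
}, {
    urls: ['<all_urls>'],
    types: ['sub_frame']
}, ['blocking', 'responseHeaders']);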
Possible solutions
1) The problems above could be solved using a simple proxy that adds the appropriate headers. Here's a reference on how to do it using Nginx:
http://balaji-damodaran.com/programming/2015/07/30/nginx-headers.html
2) Just create a custom API that does the request for you server-side and parses the result to check for HTTP/2 support. If your extension gets popular, it would still be fairly easy to scale up, e.g. via caching and horizontal scaling.
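A minimal sketch of option 2 in Node.js, assuming you detect support by attempting an ALPN-negotiated h2 connection (the endpoint shape and error handling here are illustrative):

const http2 = require('http2');

// Resolves true if the origin negotiates HTTP/2 via ALPN, false otherwise.
function supportsHttp2(origin) {
    return new Promise((resolve) => {
        const session = http2.connect(origin);
        session.on('connect', () => { session.close(); resolve(true); });
        session.on('error', () => resolve(false));
    });
}

supportsHttp2('https://cdn.sstatic.net').then((ok) => console.log('h2: ' + ok));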
Hope this helps!
I have a REST API that accepts an audio file via an HTTP POST. The API supports the Transfer-Encoding: chunked request header so that the file can be uploaded in pieces while it is being created by a recorder running on the client. This way the server can start processing the file as it arrives, for improved performance. For example:
HTTP 1.1 POST .../v1/processAudio
Transfer-Encoding: chunked
[Chunk 1 256 Bytes] (server starts processing when it arrives)
[Chunk 2 256 Bytes]
[Chunk 3 256 Bytes]
...
The audio files are typically short, around 10K to 100K in size. I have C# and Java code that is working, so I know the API works. However, I cannot seem to get the recording and upload working in a browser using JavaScript.
Here is my test code that does a POST to localhost with Transfer-Encoding:
<html>
<script type="text/javascript">
    function streamUpload() {
        var blob = new Blob(['GmnQPBU+nyRGER4JPAW4DjDQC19D']);
        var xhr = new XMLHttpRequest();
        // Add any event handlers here...
        xhr.open('POST', '/', true);
        xhr.setRequestHeader("Transfer-Encoding", "chunked");
        xhr.send(blob);
    }
</script>
<body>
    <div id='demo'>Test Chunked Upload using XHR</div>
    <button onclick="streamUpload()">Start Upload</button>
</body>
</html>
The problem is that I'm receiving the following error in Chrome:
Refused to set unsafe header "Transfer-Encoding"
streamUpload # uploadTest.html:14
onclick # uploadTest.html:24
After looking at the XHR documentation I'm still confused, because it does not talk about unsafe request headers. I'm wondering if it's possible that XHR does not allow or implement Transfer-Encoding: chunked for HTTP POST?
I've looked at workarounds using multiple XHR.send() requests and WebSockets, but both are undesirable because they would require significant changes to the server APIs, which are already in place, simple, stable and working. The only issue is that we cannot seem to POST from a browser with pseudo-streaming via the Transfer-Encoding: chunked request header.
Any thoughts or advice would be very helpful.
As was mentioned in a comment, you're not allowed to set that header, as it's controlled by the user agent.
For the full set of headers, see 4.6.2 The setRequestHeader() method from W3C XMLHttpRequest Level 1, and note that Transfer-Encoding is one of the headers controlled by the user agent so that it can manage those aspects of transport:
Accept-Charset
Accept-Encoding
Access-Control-Request-Headers
Access-Control-Request-Method
Connection
Content-Length
Cookie
Cookie2
Date
DNT
Expect
Host
Keep-Alive
Origin
Referer
TE
Trailer
Transfer-Encoding
Upgrade
User-Agent
Via
There is a similar list in the WhatWG Fetch API Living Standard.
https://fetch.spec.whatwg.org/#terminology-headers
As other replies have already mentioned, you aren't allowed to set the Transfer-Encoding header yourself.
However, you also don't actually need HTTP chunked transfer encoding in order to incrementally stream a file to your server and start processing parts of it right away. A regular HTTP POST works just fine for that. Even though it is transmitted as a single HTTP request, I believe the streaming/chunking magic happens for you at the TCP level (others are welcome to correct me on where exactly the magic happens). I can confirm this works because I've done it with node.js and Express on the backend, and I'm sure it works with other server-side technologies as well.
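A minimal sketch of that server side in node.js/Express (the route name and chunk handling are illustrative assumptions, not your actual API):

const express = require('express');
const app = express();

app.post('/v1/processAudio', (req, res) => {
    // The request object is a readable stream: chunks arrive as the
    // client uploads them, before the request body is complete.
    req.on('data', (chunk) => {
        console.log('received ' + chunk.length + ' bytes');
    });
    req.on('end', () => res.sendStatus(200));
});

app.listen(3000);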
HTTP chunked transfer encoding is only useful when you DON'T know the size of the stream you are going to send in advance (live video, video conference calls, remote desktop sessions, chats, etc.). For those cases, WebSockets are a more widely deployed solution to the same problem:
https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API
For your use case, where you DO know the size of the file in advance, you are probably better off sticking with your XMLHttpRequest and abandoning chunked transfer encoding. Alternatively, you can give the new Fetch API a try:
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
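For example, a sketch of the same upload via fetch (the browser computes Content-Length from the Blob itself, so no Transfer-Encoding is needed):

var blob = new Blob(['GmnQPBU+nyRGER4JPAW4DjDQC19D']);
fetch('/v1/processAudio', { method: 'POST', body: blob })
    .then(function(res) { console.log('upload status: ' + res.status); });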
I got caught up in problems with CORS, HTTPS and POST requests, and just cannot get out of it.
My setup:
I have a web app that runs over HTTPS
I need to make POST requests to an API on a different domain (sailsjs with CORS enabled)
The API is set up to accept both HTTP and HTTPS requests (nginx)
Problems:
On desktop everything works fine if I use HTTPS. If I make a call over HTTP, I get the error: "Blocked loading mixed active content"
On mobile I spent days and days trying to make HTTPS calls work; I tried hundreds of ways, to no success. HTTP calls work fine, but again I cannot call HTTP from HTTPS.
So basically, on desktop everything works splendidly while everything is on HTTPS. On mobile it doesn't work, and when switching to an HTTP target everything breaks because of mixed content.
More Details
The web app is using Angular, which is making CORS POST HTTPS requests without any problems on all platforms.
The small part that I am trying to make work uses a vanilla XHR request:
submit: function(data) {
    var request = this.createCORSRequest("POST", 'https://heregoesthedomain.com');
    if (request) {
        request.onload = function() {
            // Success code goes here.
            console.log('success');
        };
        request.onerror = function() {
            // Error code goes here.
            console.log('error');
        };
        request.send(JSON.stringify(data));
    }
},
createCORSRequest: function(method, url) {
    var xhr = new XMLHttpRequest();
    if ("withCredentials" in xhr) {
        // Most browsers.
        xhr.open(method, url, true);
    } else if (typeof XDomainRequest != "undefined") {
        // IE8 & IE9
        xhr = new XDomainRequest();
        xhr.open(method, url);
    } else {
        // CORS not supported.
        xhr = null;
    }
    return xhr;
}
Angular seems to be making POST requests from HTTPS to HTTPS, from domain to domain, without a problem across all platforms, while I just cannot seem to make it work in plain JS. I cannot use Angular's code, because I need these calls to happen before Angular is initialized.
Update 1
Just to make it clear: I didn't post this question after five minutes of trying. I have done all kinds of debugging, I have used remote consoles for mobile browser debugging, and so on.
If you ask for the error messages, there simply aren't any. The request returns status: 0 and all the other XMLHttpRequest values are simply empty. I have read in one of the specs that requests sometimes get blocked when they are made from an origin that is no longer active/doesn't exist, but that is not my case; I am simply staying on the same static page.
Since your API works with both schemes, you could make the requested URL match the scheme of the page by using a scheme-relative (protocol-relative) URL:
this.createCORSRequest("POST",'//heregoesthedomain.com')
This should solve the
Blocked loading mixed active content
problem.
Also, on IE8/IE9 you cannot make a CORS request the normal way; you can sort it out by using XDomainRequest.
Nevertheless, in the browser you will still get problems with mixed content. The only solution is creating a special route, so that you have some calls over HTTP and the others over HTTPS.
If you have Netflix, notice that they sort this out by playing the movie over HTTP: the movie chunks are fetched over HTTP while the session is handled over HTTPS.
I have found a very useful Chrome extension called Postman. It is especially handy when you are programming RESTful applications.
One thing I am confused about is how this plugin/extension is able to send POST requests successfully to different domains.
I tried voting in a poll using Postman.
After submitting, the vote was actually counted, but when I tried doing the same using AJAX and JavaScript, it failed because of the browsers' same-origin policy.
How is that even possible?
Here is my code using jQuery. I ran it on my own computer, though (localhost).
init: function() {
    $.ajax({
        url: 'http://example.com/vote.php',
        type: 'POST',
        dataType: 'html',
        data: {
            id: '1'
        },
        success: function(data) {
            if (data == 'voted') {
                $('.set-result').html('you already voted. try again after 24 hours');
            } else {
                $('.set-result').html('successfully voted');
            }
        }
    });
},
Chrome packaged apps can have cross-domain permissions. When you install Postman, it prompts you that this app will access any domain.
By placing */* in the permissions section of your manifest file, you can do this.
Read more here:
https://developer.chrome.com/extensions/xhr.html
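As a hedged sketch: inside an extension or packaged app whose manifest declares such host permissions, an ordinary cross-domain XHR just works (URL and form field reused from the question):

// Assumes the manifest grants host permissions covering example.com,
// e.g. "permissions": ["http://*/*", "https://*/*"].
var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://example.com/vote.php', true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.onload = function() {
    console.log(xhr.responseText); // no same-origin restriction applies here
};
xhr.send('id=1');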
You can add the following headers to the Ajax request you send in Postman:
Content-Type: application/json
X-Requested-With: XMLHttpRequest
Sounds like the site that hosts the poll (the vote.php script) needs to have an Access-Control-Allow-Origin header set to allow posting from a list of sites (or from all sites).
A value of * for the header will allow posting from any website:
Access-Control-Allow-Origin: *
i.e. you could put the following at the top of vote.php:
header('Access-Control-Allow-Origin: *');
Chrome extensions and apps are not subject to the same security limitations placed on normal webpages.
Additional debugging tips:
If you're trying to access remote services from web pages you have open on your local file system in your browser, you might find your browser applies different security rules to them than it does to files served from a web service.
e.g. If you open local files from a location like C:\MyDocuments\webroot\index.htm (Windows) or /Users/joe/Sites/index.html (Mac) in your browser, your AJAX request might not work in most browsers, even with the header specified.
Apple's Safari applies almost no cross-domain restrictions to files opened locally, but Firefox is much more strict about what it permits, with Chrome somewhere in the middle. Running a web server locally (e.g. on http://localhost/) is a good idea to avoid unexpected behaviour.
Additionally, other libraries that provide functions to handle Ajax requests (such as AngularJS) may require other headers to be set on the server by default. You can usually see the reason for failure in a browser debug console.
2021 Oct
In my investigation, I found out that you need an extra field in the header of your request, so simply add the following key-value pair to the request headers:
key: X-Requested-With | value: XMLHttpRequest
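For instance, with jQuery that might look like this sketch (endpoint reused from the question):

$.ajax({
    url: 'http://example.com/vote.php',
    type: 'POST',
    headers: { 'X-Requested-With': 'XMLHttpRequest' },
    data: { id: '1' }
});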
In my JavaScript function I make this Ajax call. It works fine, but only when I access the web page from the firebird server. I have the same code on my testing server. The Ajax call asks to download some files, but only the firebird server has its IP registered with our clients, so it is able to scp there. I need to do the same when I access the PHP files from the testing server. All the servers are inside the intranet.
Is it possible to use dataType text to do so?
Do I need to make any changes on the server side?
The ajax call:
url = "https://firebird"+path+"/tools.php?";
jQuery.ajax({
type: 'get',
dataType: 'text',
url: url,
data: {database: database_name, what: 'download', files: files, t: Math.random() },
success: function(data, textStatus){
document.getElementById("downloading").innerHTML+=data;
}
});
Update 1
My little web application restores databases so I can do my testing on them. Now I want to enhance it so I can connect to our customers and download a particular backup. Our customers allow only the firebird server to connect to their networks, but I have my own server dedicated to testing. So every time I want to download a database, I need to connect to firebird. The source of my web application and the folder with all the backups are mounted into the same location on both servers, firebird and testing. Right now my solution (for downloading) works, but only from firebird, whereas I basically work only on the testing server.
Update 2
I make two ajax calls. One is a pure jQuery call (I guess I can apply any solution to this one) and the other is an ajax call from jsTree; I created a new question for that one. It seems to me that I have to go for #zzzz's option b).
To do cross-domain requests, your options are fairly limited. As #Mrchief mentioned, you could use a server-side proxy or JSONP.
Another option is Cross-Origin Resource Sharing (CORS), a W3C working draft. Quoting from this blog post:
The basic idea behind CORS is to use custom HTTP headers to allow both
the browser and the server to know enough about each other to
determine if the request or response should succeed or fail.
For a simple request, one that uses either GET or POST with no custom
headers and whose body is text/plain, the request is sent with an
extra header called Origin. The Origin header contains the origin
(protocol, domain name, and port) of the requesting page so that the
server can easily determine whether or not it should serve a response.
You can find some live examples on this site.
You will need to make changes on the server side to accept CORS requests. Since you have control over the server, this shouldn't be a problem. Another downside of CORS is that it might not be compatible with older browsers, so if some of your essential audience uses incompatible browsers, the server-side proxy may actually be the better option for you.
I just want to offer an alternative.
I am not too sure about your network setup, but if you have access to the DNS, maybe it would be easiest to just give your servers arbitrary subdomains of the same domain - something like www.foo.com for the web front end and firebird.private.foo.com for the firebird server. This way it becomes cross-subdomain instead of cross-domain. Then, somewhere in your JavaScript on both pages:
document.domain = "foo.com";
This gentleman achieved this solution here.
You have the following options:
a) You use jsonp as your dataType, but this involves making changes on the server side to pass the data back as JSON rather than as text. This change might be as simple as:
{
    "text": <your current text json encoded>
}
and on the JS side you read it as response.text. Having said that, if you are getting the text for your file from some other domain, I am not sure how easy it is for you to change that code.
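A sketch of the client side under that assumption (reusing the parameters from your question; tools.php must wrap the file contents in the JSONP callback):

jQuery.ajax({
    type: 'get',
    dataType: 'jsonp', // jQuery appends the callback parameter automatically
    url: url,
    data: { database: database_name, what: 'download', files: files },
    success: function(response) {
        document.getElementById("downloading").innerHTML += response.text;
    }
});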
b) The other option is to write a handler/endpoint on your own server, i.e. within your domain, that makes an HTTP request to the third domain, gets the file, and sends it back to your client. Effectively your client then talks only to your domain, and you have control over everything. As most of your questions are based on Ruby, here is an example:
req = Net::HTTP.get_response(URI.parse('http://www.domain.com/coupons.txt'))
@play = req.body
You can find more details about this here.
Hope this helps.
Another idea is to use your web server as a proxy. You will need to consider the security implications of this route.