I'm implementing Dropbox's "/list_folder/longpoll" call through their API in JavaScript. As described here, the API call fails due to a cross-domain access control error.
Dropbox recommends a hacky workaround that goes against W3C standards: setting an invalid "Content-Type" header, "text/plain; charset=dropbox-cors-hack", which somehow satisfies the requirements for a "simple cross-site request" and therefore skips the cross-domain check. Because this goes against web standards, the browser changes the header back to a valid form, and the API call always fails.
I discovered a couple possible workarounds for this:
Using my own server to divert the call from browser->Dropbox to browser->my server->Dropbox. The browser makes an AJAX call to the server, the server makes a cURL request to Dropbox, and everything works fine.
This method's drawback is that you have to have a capable server with spare resources to keep all your users' longpoll connections open. I wasn't able to implement this efficiently in PHP.
Using JavaScript's new fetch() method instead of XMLHttpRequest. fetch() seems to allow setting the invalid header, and the API call works fine (see the sketch below); setting a normal header instead of the hacky one results in a failed call.
The drawback of this method is browser support. Without the fetch polyfill, only Chrome and Firefox support this. Using the polyfill theoretically adds support for IE and Safari too, but because the polyfill is based on XMLHttpRequest, the headers get changed back to valid ones, just as they would with plain XMLHttpRequest. Except in IE, where the invalid headers don't get changed back, because IE.
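For reference, the working fetch() call looks roughly like this. A minimal sketch, assuming the v2 notify endpoint; myCursor is a placeholder for a cursor already obtained from /files/list_folder:

// Non-standard charset keeps this a "simple" request, so no CORS preflight is sent
fetch('https://notify.dropboxapi.com/2/files/list_folder/longpoll', {
    method: 'POST',
    headers: { 'Content-Type': 'text/plain; charset=dropbox-cors-hack' },
    body: JSON.stringify({ cursor: myCursor, timeout: 90 }) // myCursor is a placeholder
}).then(function (response) {
    return response.json();
}).then(function (result) {
    if (result.changes) {
        // call /files/list_folder/continue here to fetch the actual changes
    }
}).catch(function (error) {
    console.error('longpoll failed', error);
});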
I went with the second workaround, so now I'm left without Safari support.
My question is this: how do I solve this problem? Maybe by somehow making PHP handle long (1-2 minute) cURL calls more efficiently? Or maybe by somehow hacking my way into a cross-browser way of setting an invalid Content-Type header?
I'm thinking about iframes, but this is getting a little ridiculous :)
Related
I'm working on a Google Chrome extension which gets the page URL and analyzes it. How can I intercept the browser's request and serve that request conditionally based on some criteria? I've been searching but couldn't find any material.
That's going to be very tricky, if at all possible.
The closest thing the extensions API provides is the blocking webRequest API. There, you can intercept a request and decide to allow or block it, but..
You can only do that before the request is sent out, so you can only rely on the URL and maybe the request headers. Even in later events (when it's too late to redirect), at no point does the webRequest API give access to the response itself.
You have to make the decision synchronously, which severely limits your processing options.
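To illustrate the shape of what is available, here is a minimal sketch of a blocking listener. It assumes the manifest declares the "webRequest" and "webRequestBlocking" permissions plus matching host permissions; the filtering criterion is just an example:

chrome.webRequest.onBeforeRequest.addListener(
    function (details) {
        // Must decide synchronously, based only on the URL, method, etc.
        if (details.url.indexOf('example-blocked-keyword') !== -1) {
            // could also return { redirectUrl: chrome.runtime.getURL('loading.html') }
            return { cancel: true }; // block the request
        }
        return {}; // let it through unchanged
    },
    { urls: ['<all_urls>'] },
    ['blocking']
);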
What you could do (very much in theory) is always redirect the request to your own "loading" page, meanwhile trying to replicate the request yourself (near-impossible to fully do, also consider side-effects), analyze the response and then substitute the "loading" page with the real one.
It's going to be either very complicated or impossible to do in complex cases. You're basically trying to implement an intercepting proxy in a Chrome extension - it doesn't really provide the full toolset to do so.
I run a site A and I want to be able to POST data to site B, which is hosted on a different subdomain. Now I have complete access to A, but cannot modify B at all.
My requirements are:
supports file upload
does not refresh browser on POST
uses Windows integrated security
works in IE 7/8 (does not need to support any other browsers)
What's the best way to accomplish this?
What I've tried:
Ideally this could be done with a simple AJAX call. However, the current standard does not support sending binary data (it is supported in the XMLHttpRequest Level 2 standard, which is not implemented in IE yet).
So the next best thing is to POST to a hidden <iframe> element. Now I've tried this but the server on site B won't accept the data. I looked at the request and the only discrepancies that I found were the referer URL and the integrated authentication. The referer URL might have to be spoofed, which cannot be accomplished by this method. Also for some reason the authentication isn't being negotiated. I'm not 100% sure why.
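For context, the hidden-<iframe> approach I tried looks roughly like this (element IDs and the site B URL are placeholders):

var iframe = document.createElement('iframe');
iframe.name = 'uploadTarget';
iframe.style.display = 'none';
document.body.appendChild(iframe);

var form = document.getElementById('uploadForm'); // form containing the <input type="file">
form.target = 'uploadTarget';                     // response loads into the hidden iframe, no page refresh
form.method = 'POST';
form.enctype = 'multipart/form-data';             // older IE may need form.encoding instead
form.action = 'https://b.example.com/upload';     // site B endpoint (placeholder)
form.submit();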
Ideas:
I'm thinking of creating a proxy page on the server that I run (site A) that forwards the request to site B. Site A also uses integrated security. I don't see anything wrong with this, but I'm not sure if this is the best way to go. Will there be any authentication issues if I just forward the request over?
Using a proxy seems to be the only thing that can work in your case. If you only needed a GET request, it could be done using JSONP, provided the server supports JSONP. For a direct cross-origin AJAX request (as opposed to the <iframe> hack) to work, the server would have to send the header
Access-Control-Allow-Origin: *
which is not the case for you.
So using a proxy seems to be the solution.
I'm using jQuery's $.ajax() method to get about 26KB of JSONP data.
The data is returned perfectly in FF, Chrome, IE, and Safari from every location I've tested it (work, home, mobile phone, etc.).
The only exception is one of my clients who attempted to access the site from behind his company's firewall. Unfortunately, I was unable to get him to provide me with the response, but I know my success function is executing - so it seems that the response is being corrupted somehow, since anything that references the returned JSON is coming up undefined.
My question is this: is it possible that a firewall would place restrictions on the length of XHR responses? Is there some other obvious explanation that maybe I'm missing?
Many thanks.
UPDATE:
For anyone who happens to stumble upon this post... I had completely forgotten that the AJAX call was to one of my development servers using non-standard ports. Pretty sure that's why his firewall didn't like it.
Thanks to all who commented.
I was going to suggest that, apart from you messing up the URLs, some firewalls actively filter requests and might strip relevant JavaScript calls from them (paranoid networks make for unique development environments).
Just a heads-up to people who might be scratching their heads in the future when their apps work here and there, but not OVER there in a corporate setting.
We are able to reliably recreate the following scenario:
Create a small HTML page that makes AJAX requests to a server (using HTTP POST)
Disconnect from the network and reconnect
Monitor the packets that IE generates after the failure
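The test page boils down to something like the following (a rough sketch; the endpoint is a placeholder, and the traffic is watched with a packet sniffer as described above):

setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/echo', true); // placeholder endpoint on the server
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('ping=' + new Date().getTime()); // the body that IE fails to send after reconnecting
}, 100);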
After a failed network connection, IE makes the next AJAX request but only sends the HTTP header (not the body) when doing the HTTP post. This causes all sorts of problems on the server as it is only a partial request. Google this issue with Bing and you'll find lots of people complaining about "random server errors" using AJAX or unexplained AJAX failures.
We know that IE (unlike most other browsers) always sends an HTTP POST as TWO TCP/IP packets: the header and the body are sent separately. In the case directly after a failure, IE only sends the header; it never sends the payload, and the server eventually responds with a timeout.
So my question is: why does it behave this way? It seems wrong based on the HTTP spec, and other browsers don't behave this way. Is it simply a bug? Surely this creates havoc in any serious AJAX-based web application.
Reference information:
There is a similar problem, triggered by HTTP keep-alive timeouts shorter than 1 minute, documented here:
http://us.generation-nt.com/xmlhttprequest-post-sometimes-fails-when-server-using-keep-aliv-help-188813541.html
http://support.microsoft.com/default.aspx?kbid=831167
There does not seem to be a clear answer to this question, so I will offer my empirical data as a substitute and provide some ways to work around the problem. Maybe some MS insider will one day shed some light on this...
If HTTP Keep-Alive is disabled on the server, this issue goes away. In other words, your HTTP 1.1 server will respond to every Ajax request with a Connection: Close line in the response. This keeps IE happy but causes every Ajax request to open a new connection. This can have a significant performance impact, especially on high latency networks.
The issue is triggered easily if Ajax requests are made in rapid succession. For example, if we make Ajax requests every 100 ms and then the network status changes, the error is easy to reproduce. Although most applications probably do not make such requests, you might well have a couple of server calls happening right after each other, which could lead to this problem. Being less chatty keeps IE happy.
It happens even without NTLM authentication.
It happens when your HTTP keep-alive timeout on the server is shorter than the default (60 seconds on Windows). Details are provided in the link in the question.
It does not happen with Chrome or Firefox. FF sends the POST as a single packet, so it seems to avoid this issue altogether.
It happens in IE 6, 7, 8. Could not reproduce with IE 9 beta.
The Microsoft KB article titled "When you use Microsoft Internet Explorer or another program to perform a re-POST operation, only the header data is posted" seems to fix this problem.
The article provides a hotfix. For later browsers such as IE8, it says the hotfix is already included but needs to be enabled through registry settings on the client PC.
I had a similar problem where some older versions of IE would send back only the Header and not the body of a POST. My problem turned out to be related to IE and NTLM. Since you didn't mention NTLM, this probably does not help, but just in case:
http://support.microsoft.com/kb/251404
This is a longshot, but IE (and even Firefox) sometimes "remembers" the connection it uses for an HTTP request. Notes/examples:
In Firefox, if I change the proxy settings and hit SHIFT-RELOAD on a page, it still uses the old proxy. However, if I kill the old proxy ("killall squid"), it starts using the new proxy.
When you disconnect/reconnect, do you receive a new IP address or anything similar? Can you somehow monitor the old IP address to see if IE is sending data to that now-dead address?
My guess is that IE is sending the data, just down the wrong path. It might be smart enough to not cache network connections for "POST" packets, but might not be smart enough to do that for POST payloads.
This probably doesn't affect most AJAX apps, since people rarely disconnect and re-connect to their networks?
Are you using NTLM authentication?
When using NTLM authentication, IE doesn't send the POST data right away. It sends the headers, expects a 401 Unauthorized response, then sends the authorization, and only after this 're-authentication' does it send the POST body.
I had a similar problem today when using $.ajax and was able to fix it by setting async to false.
$.ajax({
    async: false, // synchronous request; blocks until the POST completes
    url: '[post action url]',
    data: $form.serialize(),
    type: 'POST',
    success: successCallback
});
Is it possible to do a cross-site call, in Javascript, to a WCF service?
I don't mind if it's a POST or a GET.
But I've heard that these days, browsers don't allow cross-site calls with either POST or GET.
How can I circumvent this and still call a WCF Service?
There's not a whole lot you can do to circumvent the browser's cross-site scripting blockers. Those blockers stop XMLHttpRequests from going to any domain but the one that loaded the containing script or page.
That said, there is one commonly used workaround: use JavaScript to write a new entry into the DOM that references a src that is a cross-site URL. You pass all your RPC method arguments in this "script" URL, and it returns some JavaScript that gets executed, telling you success or failure.
There's no way to do a POST in this manner; the src URL must be a GET, so you have to pass your arguments that way. I'm not sure if WCF has a "GET only" method of access. And, since the browser will expect the result of the remote tag to be a valid JavaScript object, you'll have to make sure that your WCF service obeys that as well, otherwise you'll get JavaScript errors.
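A bare-bones sketch of that script-tag approach, with a placeholder URL and parameter names (the service has to wrap its JSON response in the named callback function):

function callRemoteService(value, onSuccess) {
    var callbackName = 'jsonp_cb_' + new Date().getTime();
    var script = document.createElement('script');
    window[callbackName] = function (result) {
        // Clean up, then hand the result to the caller
        window[callbackName] = undefined;
        document.body.removeChild(script);
        onSuccess(result);
    };
    script.src = 'http://other-domain.example.com/Service.svc/GetData' +
                 '?value=' + encodeURIComponent(value) +
                 '&callback=' + callbackName; // server must reply with callbackName({...})
    document.body.appendChild(script);
}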
Another common method of circumventing cross-site scripting is to write a proxy for your requests. In other words, if you want to access domain test.com from scripts hosted on example.com, then make some URL on example.com that proxies the request over to test.com in the proper way.
For your example, proxying is likely the right answer, assuming that WCF doesn't have its own cross-site scripting restrictions.
Are you using jQuery by any chance? jQuery supports Cross-Domain JSON requests using "JSONP". You will be limited to GET requests, but I've tried it out and it works well! It's also very simple to get working.
See the "Cross-Domain getJSON (using JSONP) " section on this page for details:
http://docs.jquery.com/Release:jQuery_1.2/Ajax
And here's some background on JSONP:
http://bob.pythonmac.org/archives/2005/12/05/remote-json-jsonp/
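A minimal sketch, assuming your service is reachable at a URL like the placeholder below (the "callback=?" part tells jQuery to use JSONP via a script tag under the hood):

$.getJSON('http://other-domain.example.com/Service.svc/GetData?callback=?',
    { id: 42 },                  // placeholder query parameters
    function (data) {
        // data is the parsed JSON returned by the service
        console.log(data);
    });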
Let me know how it goes!
New W3C recommendations are being standardised to allow cross-site requests between trusted parties via the Access Control for Cross-Site Requests specification.
This requires a server serving suitable Access Control HTTP headers and a browser capable of understanding and acting upon such headers.
In short, if a remote host says it likes your domain, and a browser understands what this means, you can perform XMLHttpRequests against that host regardless of the same-origin policy.
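For example (a sketch only, with placeholder host names), a plain XMLHttpRequest succeeds if the remote host's response carries a header like Access-Control-Allow-Origin: http://your-site.example.com (or *):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://api.remote-host.example.com/data', true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // The browser only exposes the response because the remote host opted in
        console.log(xhr.responseText);
    }
};
xhr.send();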
Currently very few browsers support this functionality. IE8 apparently does (I haven't tested it) and Firefox 3.1 does (I have tested this extensively). I expect other browsers to follow suit quite quickly.
You shouldn't expect sufficient adoption of compatible browsers until 2012 at the earliest.
That's the ultimate solution to the problem. The downside is waiting a few years before it can be used in mainstream applications.
If this is for use within an environment you fully control, such as for an intranet where you can determine which browser is used and where you can configure multiple servers to issue the correct headers, it works perfectly.
To expand on Ben's answer... I extended our WCF service to support JSONP calls from jQuery using code similar to this example from Microsoft:
http://msdn.microsoft.com/en-us/library/cc716898.aspx