I'm using jQuery's $.ajax() method to get about 26KB of JSONP data.
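The call is roughly of this shape (the URL and the `items` property below are placeholders, not the real endpoint):

```javascript
// Rough shape of the call; the URL and the "items" property are placeholders
$.ajax({
    url: 'http://dev.example.com/api/data',
    dataType: 'jsonp',               // jQuery adds the callback=? parameter automatically
    success: function (data) {
        console.log(data.items);     // references into the returned JSON
    }
});
```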
The data is returned perfectly in FF, Chrome, IE, and Safari from every location I've tested it (work, home, mobile phone, etc.).
The only exception is one of my clients who attempted to access the site from behind his company's firewall. Unfortunately, I was unable to get him to provide me with the response, but I know my success function is executing - so it seems that the response is being corrupted somehow, since anything that references the returned JSON is coming up undefined.
My question is this: is it possible that a firewall would place restrictions on the length of XHR responses? Is there some other obvious explanation that maybe I'm missing?
Many thanks.
UPDATE:
For anyone who happens to stumble upon this post... I had completely forgotten that the AJAX call was to one of my development servers using non-standard ports. Pretty sure that's why his firewall didn't like it.
Thanks to all who commented.
I was going to suggest that, apart from you messing up the URLs, some firewalls have active filtering of requests, which might strip relevant JavaScript from your requests (paranoid networks make for unique development environments).
Just a heads up to people that might be scratching their head in the future when their apps work here, there but not OVER there in a corporate setting.
Related
Sometimes it could be useful for a web application to be able to determine whether a seeming slowness in the application is due to network conditions or something else.
To make it easier to determine the cause in this kind of situation, I was thinking about creating something like a "check the connectivity" client-side script that makes e.g. an AJAX call to the server and returns some useful metrics that make it easier to determine if the cause lies with the network.
But how much useful network troubleshooting could be done using a JavaScript that calls the server?
It sounds like you need a way to keep an eye on your site's performance from the end-user's perspective.
One way to do this is to have your client-side scripts include a way to log to a log aggregation site like SumoLogic. Here is a doc to reference about using client-side JavaScript to log to SumoLogic.
On your server side, you could implement a /ping API endpoint that would just immediately return true so you know how long it takes your user to at least reach your site. You can then log to SumoLogic how long that request took. You could do this with other requests as well to see which APIs are slower than others.
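A minimal sketch of the client side of that idea (the /ping path and the logging call are placeholders for whatever you actually wire up):

```javascript
// Sketch only: time a round trip to a hypothetical /ping endpoint
var start = Date.now();
$.get('/ping', function () {
    var elapsedMs = Date.now() - start;
    // Replace this console.log with your real log shipping (e.g. the SumoLogic JS logger)
    console.log('/ping round trip: ' + elapsedMs + ' ms');
});
```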
If you include geo-location when logging to SumoLogic, you can see how well your site performs around the world.
And if you want to get really fancy, then you should implement a custom header that your APIs understand and that carries a transaction token of some sort for all requests. When your server receives that header, it should use that token throughout the request's logs so you can see where things go wrong and what to do about them.
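As a rough illustration (the header name and token format here are made up, not any standard):

```javascript
// Illustrative only: attach a per-request token so server-side logs can be correlated
var token = Date.now().toString(36) + '-' + Math.random().toString(36).slice(2);
$.ajax({
    url: '/api/some-endpoint',                    // hypothetical endpoint
    headers: { 'X-Transaction-Token': token },    // hypothetical header your APIs would understand
    success: function () {
        console.log('Request ' + token + ' completed');
    }
});
```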
Another good site to check out for this sort of thing is New Relic - Browser Monitoring. This is much more performance-centric and you don't get the insights of your own logs, but it's an awesome app in its own right.
EDIT
As mentioned in the comments by @Bergi, you could also have your server respond with the headers immediately and measure performance that way.
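A rough sketch of that approach, using the fact that readyState 2 fires once the response headers have arrived (the /ping path is the same hypothetical endpoint as above):

```javascript
// Sketch: measure time until the response headers arrive (readyState 2 = HEADERS_RECEIVED)
var start = Date.now();
var xhr = new XMLHttpRequest();
xhr.open('GET', '/ping');
xhr.onreadystatechange = function () {
    if (xhr.readyState === 2) {
        console.log('Headers received after ' + (Date.now() - start) + ' ms');
    }
};
xhr.send();
```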
I'm implementing Dropbox's "/list_folder/longpoll" call through their API in JavaScript. As described here, the API call fails due to a cross-domain access control error.
Dropbox recommends a hacky workaround that goes against W3C standards: setting an invalid "Content-Type" header "text/plain; charset=dropbox-cors-hack", which somehow helps to comply with requirements for a "simple cross-site request" and therefore skips the cross-domain check. Because this is against the web standards, the browser modifies the header back to a valid form, and the API call always fails.
I discovered a couple of possible workarounds for this:
Using my own server to divert the call from browser->dropbox to browser->own server->dropbox. Make an ajax call to the server, the server makes a cURL request to dropbox, and everything works fine.
This method's drawback is that you have to actually have a capable server with spare resources to keep all your user's longpoll connections open. I wasn't able to implement this in PHP efficiently.
Using JavaScript's new fetch() method instead of XMLHttpRequest. It seems to allow setting the invalid header, and the API call works fine. Setting a normal (not the hacky one) header results in a failed call.
The drawback of this method is browser support. Without the fetch polyfill, only Chrome and Firefox support this. Using the polyfill theoretically adds support for IE and Safari too. But because the polyfill is based on XMLHttpRequest, the headers are changed back to valid ones, as they would be when using plain XMLHttpRequest. Except for IE, where the invalid headers don't get changed back, because IE.
I went with the second workaround, so now I'm left without Safari support.
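For reference, the fetch() call looks roughly like this (the cursor comes from an earlier /list_folder call, and the timeout value is just what I happen to use):

```javascript
// Rough sketch of the fetch() workaround with the "invalid" Content-Type header
var cursor = 'CURSOR_FROM_PREVIOUS_LIST_FOLDER_CALL';   // placeholder value
fetch('https://notify.dropboxapi.com/2/files/list_folder/longpoll', {
    method: 'POST',
    headers: {
        // The header Dropbox suggests so the request stays a "simple" cross-site request
        'Content-Type': 'text/plain; charset=dropbox-cors-hack'
    },
    body: JSON.stringify({ cursor: cursor, timeout: 60 })
}).then(function (response) {
    return response.json();
}).then(function (result) {
    console.log('Changes pending:', result.changes);
});
```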
My question is this: how to solve this problem? Maybe by somehow making PHP handle long (1-2 minute) cURL calls more efficiently? Or maybe by somehow hacking my way into a cross-browser solution of setting an invalid Content-Type header?
I'm thinking about iframes, but this is getting a little ridiculous :)
Ultimately I need to know what domain is hosting one of my JavaScript files. I have read and experienced firsthand that $_SERVER['HTTP_REFERER'] is unreliable. One of the first 3 browser/computer combos I tested didn't send the HTTP_REFERER, and I know that it can be spoofed. I implemented a different solution using two JavaScript methods.
document.referrer
AND
window.location.href
I use the former to get the URL of the window where someone clicked on one of my links. I use the latter to see which domain my JavaScript file is included on. I have tested it a little so far and it is grabbing the URLs from the browser very well with no hiccups. My question is, are the two JavaScript methods reliable? Will they return the URL from the browser every time, or are there caveats like with $_SERVER['HTTP_REFERER'] that I haven't run into yet?
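Stripped down, the part of the script in question is essentially this (how the values get sent back to my server is omitted):

```javascript
// Simplified: the two values the script records
var includingPage = window.location.href;   // URL of the page my script is included on
var cameFrom      = document.referrer;      // URL of the page where the visitor clicked the link
// ...both values are then reported back to my server (omitted here)
```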
You should always assume that any information about the referrer URI is going to be unavailable (or perhaps even unreliable), due to browsers or users wanting to conceal this information because of privacy issues.
In general, you won't have the referrer information when linking from an HTTPS to an HTTP domain. Check this question for more info on this:
https://webmasters.stackexchange.com/questions/47405/how-can-i-pass-referrer-header-from-my-https-domain-to-http-domains
About using window.location.href, I'd say it's reliable in practice, but only because it's in the client's interest to supply the correct information, so that applications depending on it behave as expected.
Just keep in mind that this is still the client side sending you some information, so it'll always be up to the browser to send you something that is correct. You can't have control over that, just trust that it's going to work according to what is specified in the standard. The client might still decide to conceal it or fake it for any reason.
For example, in some situations, such as third-party included scripts (again for privacy reasons), the browser might opt to just leave it blank.
I am about to look at building an extension for Chrome that can listen for a particular HTTP GET request and then react by passing the body on to another application.
This will be limited to one website, and I am only concerned about the one request (though the query parameters can change).
The data will most likely be communicated to the other application using another GET or POST request by the extension.
I am new to most of the topics concerned with this issue. I have not created a Chrome extension before, though I see it has a lot of documentation, which has helped greatly.
The question I am asking is: how, in a Google Chrome extension, do I react to a GET request?
I am aware there is a network view in the dev tools, so I guess what I am asking is probably possible. I had guessed it may be done with an "Event" as listed here http://developer.chrome.com/extensions/events.html, but I cannot find an "onGET" event or anything similar.
Thanks,
Luke
It would appear that this is not particularly easy to do. There is an "onCompleted" callback in Chrome's "webRequest" API, but that would not give me access to the response body.
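For what it's worth, listening for the request itself is straightforward; it's the response body that isn't exposed. A sketch (the URL pattern is a placeholder, and the manifest needs the "webRequest" permission plus matching host permissions):

```javascript
// Sketch only (background page): fires when a matching request completes,
// but "details" carries metadata (URL, method, status) and never the response body
chrome.webRequest.onCompleted.addListener(
    function (details) {
        console.log('GET completed:', details.url, details.statusCode);
    },
    { urls: ['*://www.example.com/the-endpoint*'], types: ['xmlhttprequest'] }
);
```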
What I have had to do to solve my issue is get the plugin
https://chrome.google.com/webstore/detail/proxy-switchy/caehdcpeofiiigpdhbabniblemipncjj/related
And set up a proxy on my local machine. ProxySwitchy detects the request to the URL I care about, and uses my proxy instead. My proxy then duplicates the message and allows me to do with it as I please.
I run a site A and I want to be able to POST data to site B, which is hosted on a different subdomain. Now I have complete access to A, but cannot modify B at all.
My requirements are:
supports file upload
does not refresh browser on POST
uses Windows integrated security
works in IE 7/8 (does not need to support any other browsers)
What's the best way to accomplish this?
What I've tried:
Ideally this could be done in a simple AJAX call. However, the current standard does not support sending binary data (it is supported in the XMLHttpRequest Level 2 standard, which is not implemented in IE yet).
So the next best thing is to POST to a hidden <iframe> element. Now I've tried this but the server on site B won't accept the data. I looked at the request and the only discrepancies that I found were the referer URL and the integrated authentication. The referer URL might have to be spoofed, which cannot be accomplished by this method. Also for some reason the authentication isn't being negotiated. I'm not 100% sure why.
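For reference, the hidden <iframe> attempt looks roughly like this (site B's URL and the field name are placeholders):

```html
<!-- Sketch of the hidden-iframe POST I tried; the action URL and field name are placeholders -->
<iframe name="upload-target" style="display:none"></iframe>
<form action="http://b.example.com/upload" method="post"
      enctype="multipart/form-data" target="upload-target">
    <input type="file" name="file" />
    <input type="submit" value="Upload" />
</form>
```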
Ideas:
I'm thinking of creating a proxy page on the server that I run (site A) that forwards the request to site B. Site A also uses integrated security. I don't see anything wrong with this, but I'm not sure if this is the best way to go. Will there be any authentication issues if I just forward the request over?
Using a proxy seems to be the only thing which can work in your case. If you want to make a GET request, then it can be done using JSONP, provided that the server supports JSONP. To make the <iframe> hack work, the server should send the header
Access-Control-Allow-Origin:*
which is not the case with you.
So using a proxy seems to be the solution.