XMLHttpRequest with modified Accept header in Chrome extension - javascript

I am incorporating a DOI-to-formatted citation feature in my Chrome extension. dx.doi.org introduced content negotiation about one year ago to support this kind of activity.
I noticed some odd behavior in Chrome when I performed a cross-domain XMLHttpRequest with a modified Accept header. Below is a minimal example of the function that reproduces the issue (jQuery is used below, but the exact same behavior occurs with plain XMLHttpRequest):
function getCitation() {
    var resolveUrl = "http://dx.doi.org/10.2331/suisan.32.804";
    var content = 'text/bibliography; style=bibtex; locale=en-US';
    document.getElementById("notify").innerHTML = "Loading...";
    var jqxhr = jQuery.ajax({
        url: resolveUrl,
        headers: { Accept: content },
        dataType: "text",
        type: "GET"
    });
    jqxhr.done(function() {
        document.getElementById("notify").innerHTML = jqxhr.responseText;
    });
    jqxhr.error(function() {
        document.getElementById("notify").innerHTML = "No citation found";
    });
}
Some extra info: *.doi.org and *.crossref.org permissions are defined in the manifest.
The function works fine and returns the citation for 10.2331/suisan.32.804 in BibTeX format for the en-US locale. The problem arises when I visit http://dx.doi.org/10.2331/suisan.32.804 in any Chrome tab after running this function (not through the extension, I really mean pasting the url in the omnibox). The site returns a 406 error (Accept header request not accepted). If I try once more, the page loads. Then, the real kicker is that after the page loads successfully, the XMLHttpRequest no longer succeeds. So apparently, my header alteration is globally affecting Chrome and isn't limited to my extension's sandbox. I ended up resolving the issue by changing the GET request to POST and everything functions as desired.
If you're really interested in seeing this behavior first hand, I packaged a minimal example extension: citeTest.crx (35KB)
Now for my questions:
GET: When I make this XMLHttpRequest, is text/bibliography; style=bibtex; locale=en-US supposed to be attached to the Accept header outside of my extension? (am I committing some form of bad programming?)
GET: Each time I make this XMLHttpRequest, is text/bibliography; style=bibtex; locale=en-US being appended, so eventually the Accept header gets really really long? A problem I have is that the style and locale settings may change each time the request is made, so I can't set it just once.
POST: The problem I described only occurs with GET requests. Could this problem similarly surface if a user tries to submit a POST based form on dx.doi.org?

Ok, so I think I've gotten to the bottom of this. The problem surfaced because of caching. When the Accept header was set to text/bibliography, requests were redirected to data.crossref.org (or data.datacite.org), because that's where non-text/html content negotiations are handled. When I later made a request with Accept: text/html, the request was still pointed at data.crossref.org (or data.datacite.org) because of caching. The problem is that data.crossref.org and data.datacite.org do not process text/html; only dx.doi.org handles those requests, so they were returning error 406.
In sum, simply adding cache: false to the jQuery.ajax options lets me use GET requests just fine. My questions about headers are mostly irrelevant.
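For completeness, the fixed call is just the snippet above with cache: false added; jQuery then appends a _={timestamp} parameter to GET URLs, so the stale cached redirect is never reused:
var jqxhr = jQuery.ajax({
    url: resolveUrl,
    headers: { Accept: content },
    dataType: "text",
    type: "GET",
    cache: false // forces a unique URL per request, bypassing the cached redirect
});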

Related

How can I cancel consecutive requests to my server? [duplicate]

What would cause a page to be canceled? I have a screenshot of the Chrome Developer Tools.
This happens often but not every time. It seems like once some other resources are cached, a page refresh will load the LeftPane.aspx. And what's really odd is this only happens in Google Chrome, not Internet Explorer 8. Any ideas why Chrome would cancel a request?
We fought a similar problem where Chrome was canceling requests to load things within frames or iframes, but only intermittently and it seemed dependent on the computer and/or the speed of the internet connection.
This information is a few months out of date, but I built Chromium from scratch, dug through the source to find all the places where requests could get cancelled, and slapped breakpoints on all of them to debug. From memory, these are the only places where Chrome will cancel a request:
The DOM element that caused the request to be made got deleted (e.g. an IMG is being loaded, but before the load happened, you deleted the IMG node)
You did something that made loading the data unnecessary (e.g. you started loading an iframe, then changed the src or overwrote the contents)
There are lots of requests going to the same server, and a network problem on earlier requests showed that subsequent requests weren't going to work (DNS lookup error, an earlier identical request resulted in e.g. an HTTP 400 error code, etc.)
In our case we finally traced it down to one frame trying to append HTML to another frame, which sometimes happened before the destination frame had even loaded. Once you touch the contents of an iframe, it can no longer load the resource into it (how would it know where to put it?), so it cancels the request.
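A minimal sketch of that kind of fix (the element id and htmlFromOtherFrame are hypothetical, and same-origin frames are assumed): defer the write until the destination frame has finished loading.
var frame = document.getElementById('destFrame'); // hypothetical id
frame.addEventListener('load', function () {
    // Touch the frame's document only after it has finished loading;
    // writing into it earlier makes Chrome cancel the frame's own request
    frame.contentDocument.body.insertAdjacentHTML('beforeend', htmlFromOtherFrame);
});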
status=canceled can also happen on ajax requests triggered by JavaScript events:
<script>
$("#call_ajax").on("click", function(event){
    $.ajax({
        ...
    });
});
</script>
<button id="call_ajax">call</button>
The event successfully sends the request, but it is then canceled (though still processed by the server). The reason is that button elements submit their enclosing form on click, regardless of any ajax request made in the same click handler.
To prevent the request from being cancelled, event.preventDefault(); has to be called:
<script>
$("#call_ajax").on("click", function(event){
    event.preventDefault();
    $.ajax({
        ...
    });
});
</script>
NB: Make sure you don't have any wrapping form elements.
I had a similar issue where my button with onclick={} was wrapped in a form element. When clicking the button, the form was also submitted, and that messed it all up...
Another thing to look out for could be the AdBlock extension, or extensions in general.
But "a lot" of people have AdBlock....
To rule out extensions, open a new tab in incognito, making sure that "Allow in incognito" is off for the extensions you want to test.
In my case, I found it was jQuery's global timeout setting: a jQuery plugin had set the global timeout to 500ms, so whenever a request exceeded 500ms, Chrome would cancel it.
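If a plugin has done something like this globally, a per-request timeout overrides it; a minimal sketch (the URL is hypothetical):
// What the plugin effectively did: every request times out after 500ms
$.ajaxSetup({ timeout: 500 });
// Override for one slow request so Chrome doesn't cancel it
$.ajax({
    url: '/slow/endpoint', // hypothetical
    timeout: 10000         // 10 seconds for this request only
});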
You might want to check the "X-Frame-Options" header. If it's set to SAMEORIGIN or DENY, then the iframe insertion will be canceled by Chrome (and other browsers) per the spec.
Also, note that some browsers support the ALLOW-FROM setting but Chrome does not.
To resolve this, you will need to remove the "X-Frame-Options" header tag. This could leave you open to clickjacking attacks so you will need to decide what the risks are and how to mitigate them.
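As a sketch of removing the header selectively rather than everywhere (assuming a Node/Express backend; the path is hypothetical, so adjust for your server stack):
// Express middleware: keep clickjacking protection except on the page meant to be framed
app.use(function (req, res, next) {
    if (req.path !== '/embeddable-page') { // hypothetical path
        res.setHeader('X-Frame-Options', 'SAMEORIGIN');
    }
    next();
});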
Here's what happened to me: the server was returning a malformed "Location" header for a 302 redirect.
Chrome failed to tell me this, of course. I opened the page in firefox, and immediately discovered the problem.
Nice to have multiple tools :)
Another place we've encountered the (canceled) status is in a particular TLS certificate misconfiguration. If a site such as https://www.example.com is misconfigured such that the certificate does not include the www. but is valid for https://example.com, Chrome will cancel the request and automatically redirect to the latter site. This is not the case for Firefox.
Currently valid example: https://www.pthree.org/
A cancelled request happened to me when redirecting between secure and non-secure pages on separate domains within an iframe. The redirected request showed in dev tools as a "cancelled" request.
I have a page with an iframe containing a form hosted by my payment gateway. When the form in the iframe was submitted, the payment gateway would redirect back to a URL on my server. The redirect recently stopped working and ended up as a "cancelled" request instead.
It seems that Chrome (I was using Windows 7 Chrome 30.0.1599.101) no longer allowed a redirect within the iframe to go to a non-secure page on a separate domain. To fix it, I just made sure any redirected requests in the iframe were always sent to secure URLs.
When I created a simpler test page with only an iframe, there was a warning in the console (which I had previously missed, or which maybe didn't show up before):
[Blocked] The page at https://mydomain.com/Payment/EnterDetails ran insecure content from http://mydomain.com/Payment/Success
The redirect turned into a cancelled request in Chrome on PC, Mac and Android. I don't know if it is specific to my website setup (SagePay Low Profile) or if something has changed in Chrome.
Chrome Version 33.0.1750.154 m consistently cancels image loads if I am using the Mobile Emulation pointed at my localhost; specifically with User Agent spoofing on (vs. just Screen settings).
When I turn User Agent spoofing off, image requests aren't canceled and I see the images.
I still don't understand why; in the former case, where the request is cancelled, the Request Headers (with the "CAUTION: Provisional headers are shown" notice) contain only:
Accept
Cache-Control
Pragma
Referer
User-Agent
In the latter case, all of those plus others like:
Cookie
Connection
Host
Accept-Encoding
Accept-Language
Shrug
I got this error in Chrome when I redirected via JavaScript:
<script>
window.location.href = "devhost:88/somepage";
</script>
As you can see, I forgot the 'http://'. After I added it, it worked.
Here is another case of a request being canceled by Chrome, which I just encountered and which is not covered by any of the answers above.
In a nutshell
A self-signed certificate not being trusted on my Android phone.
Details
We are in the development/debug phase. The URL points to a host with a self-signed certificate. The code is like:
location.href = 'https://some.host.com/some/path'
Chrome just canceled the request silently, leaving no clue for a web-development newbie like myself to fix the issue. Once I downloaded and installed the certificate on the Android phone, the issue was gone.
If you use axios, this can help you:
// change timeout delay:
instance.defaults.timeout = 2500;
https://github.com/axios/axios#config-order-of-precedence
In my case, I had an anchor with a click event like
<a href="" onclick="somemethod($index, hour, $event)">
Inside the click event I had a network call, and Chrome was cancelling the request. An anchor whose href is "" reloads the page, so the network call started by the click event gets cancelled. When I replaced the href with void, like
<a href="javascript:void(0)" onclick="somemethod($index, hour, $event)">
The problem went away!
If you make use of Observable-based HTTP requests like the ones built into Angular (2+), the HTTP request can be canceled when the observable is unsubscribed (a common thing when you're using the RxJS 6 switchMap operator to combine streams). In most cases it's enough to use the mergeMap operator instead, if you want the request to complete.
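A minimal sketch of the difference (RxJS 6; the button id and endpoint are hypothetical):
import { fromEvent } from 'rxjs';
import { switchMap, mergeMap } from 'rxjs/operators';
import { ajax } from 'rxjs/ajax';

const clicks$ = fromEvent(document.getElementById('save'), 'click'); // hypothetical button

// switchMap: a second click unsubscribes the first request, which DevTools shows as (canceled)
clicks$.pipe(switchMap(() => ajax.post('/api/save'))).subscribe(); // hypothetical endpoint

// mergeMap: every request is allowed to run to completion
clicks$.pipe(mergeMap(() => ajax.post('/api/save'))).subscribe();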
I had faced the same issue, somewhere deep in our code we had this pseudocode:
create an iframe
onload of iframe submit a form
After 2 seconds, remove the iframe
Thus, when the server took more than 2 seconds to respond, the iframe the server was writing the response into was removed, but the response was still being written and there was no iframe to write to, so Chrome cancelled the request. To avoid this, I made sure the iframe is removed only after the response is complete; alternatively, you can change the target to "_blank".
Thus one of the reasons is:
when the resource (an iframe in my case) that you are writing into is removed or deleted before you stop writing to it, the request will be cancelled
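In code, that fix amounts to removing the iframe from its load handler instead of on a timer; a sketch (the ids are hypothetical):
var iframe = document.getElementById('submitTarget'); // hypothetical id
iframe.addEventListener('load', function () {
    // The response has been fully written into the frame;
    // removing it now no longer cancels the request
    iframe.parentNode.removeChild(iframe);
});
document.getElementById('uploadForm').submit(); // hypothetical form whose target attribute names the iframe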
I embed all font types (woff, woff2, ttf) when embedding a web font in a style sheet. Recently I noticed that Chrome cancels the requests for ttf and woff when woff2 is present. I'm using Chrome version 66.0.3359.181 right now, but I am not sure when Chrome started canceling the extra font types.
We had this problem with a <button> tag inside a form that was supposed to send an ajax request from JS. But the request was canceled because the browser submits the form automatically on any click on a button inside a form.
So if you really want to use a button instead of a regular div or span on the page, and you want to submit the form through JS, you should set up a listener that calls preventDefault.
e.g.
$('button').on('click', function(e){
    e.preventDefault();
    // do ajax
    $.ajax({
        ...
    });
})
I had the exact same thing with two CSS files that were stored in another folder outside my main css folder. I'm using Expression Engine and found that the issue was in the rules in my htaccess file. I just added the folder to one of my conditions and it fixed it. Here's an example:
RewriteCond %{REQUEST_URI} !(images|css|js|new_folder|favicon.ico)
So it might be worth checking your htaccess file for any potential conflicts.
The same happened to me when calling a .js file with $.ajax and making an ajax request; what I did was call it normally instead.
In my case the code to show e-mail client window caused Chrome to stop loading images:
document.location.href = mailToLink;
moving it to $(window).load(function () {...}) instead of $(function () {...}) helped.
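In other words (a sketch; mailToLink as in the snippet above):
// Before: ran at DOM ready, while images were still loading
// $(function () { document.location.href = mailToLink; });

// After: wait for the window load event, when all resources have finished
$(window).load(function () {
    document.location.href = mailToLink;
});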
In case this helps anybody: I came across the cancelled status when I left out the return false; in the form submit handler. This caused the ajax send to be immediately followed by the default submit action, which overwrote the current page. The code is shown below, with the important return false at the end.
$('form').submit(function() {
    $.validator.unobtrusive.parse($('form'));
    var data = $('form').serialize();
    // serialize() returns a string, so append the token instead of setting a property on it
    data += '&__RequestVerificationToken=' + encodeURIComponent($('input[name=__RequestVerificationToken]').val());
    if ($('form').valid()) {
        $.ajax({
            url: this.action,
            type: 'POST',
            data: data,
            success: submitSuccess,
            error: submitFailed // $.ajax takes "error", not "fail"
        });
    }
    return false; // needed to stop default form submit action
});
Hope that helps someone.
For anyone coming from LoopbackJS and attempting to use the custom stream method as provided in their chart example: I was getting this error using a PersistedModel; switching to a basic Model fixed my issue of the EventSource status cancelling out.
Again, this is specifically for the LoopBack API. Since this is a top answer and high on Google, I figured I'd throw this into the mix of answers.
For me, the 'canceled' status appeared because the file did not exist. Strange that Chrome does not show a 404.
It was as simple as an incorrect path for me. I would suggest the first step in debugging would be to see if you can load the file independently of ajax etc.
The requests might have been blocked by a tracking protection plugin.
It happened to me when loading 300 images as background images. I'm guessing that once the first one timed out, it cancelled all the rest, or that it hit the maximum number of concurrent requests. I need to implement loading them 5 at a time; see the sketch below.
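A minimal sketch of such a throttle (plain JS; imageUrls is a hypothetical array of URLs):
function loadInBatches(urls, limit) {
    var next = 0;
    function loadNext() {
        if (next >= urls.length) return;
        var img = new Image();
        img.onload = img.onerror = loadNext; // start another only when this one settles
        img.src = urls[next++];
    }
    // Kick off at most `limit` parallel loads; each completion starts the next
    for (var i = 0; i < limit; i++) loadNext();
}
loadInBatches(imageUrls, 5); // imageUrls: hypothetical array of the 300 URLs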
One of the reasons could be that XMLHttpRequest.abort() was called somewhere in the code; in this case, the request will have the canceled status in the Chrome Developer Tools Network tab.
In my case, it started appearing after the Chrome 76 update.
Due to an issue in my JS code, window.location was getting updated multiple times, which resulted in canceling the previous request.
Although the issue was present before, Chrome only started cancelling the request after the update to version 76.
I had the same issue when updating a record. Inside save() I was prepping the raw data taken from the form to match the database format (doing a lot of mapping of enum values, etc.), and this intermittently canceled the PUT request. I resolved it by taking the data prepping out of save() and creating a dedicated dataPrep() method for it. I made dataPrep async and awaited all the memory-intensive data conversion, then returned the prepped data to the save() method for use in the HTTP PUT client. I made sure to await dataPrep() before calling the put method:
const dataToUpdate = await dataPrep();
http.put(apiUrl, dataToUpdate);
This solved the intermittent cancelling of the request.

JavaScript set HTTP request header value and FireFox issue

In order to prevent direct URL access to download files on my website, I use the following HTTP header, which is set by this JavaScript when the download link is clicked:
$('a.download').on('click', function(){
    $.ajax({
        url: '/ajax/preventDownload',
        headers: { 'x-rarity-download-header': 'download' }
    });
})
The server checks whether 'x-rarity-download-header' is present in the HTTP request, and if not, doesn't allow the user to download the file.
Right now this approach does not work in all browsers; for example, it works in Firefox 50 but apparently not in some previous versions like 48. It also doesn't work in Safari.
What could be the reason for this, and how can I fix it?
First, you should respect the spelling: it should be X-Rarity-Download-Header.
Some software takes this more seriously than others and only allows standardized headers, or custom headers with an uppercase X- prefix.
Besides that, I suggest you switch to a more common method like OAuth2 tokens.
Or something way easier, like:
1. Visit the site
2. Set a cookie
3. Allow download.
4. If the cookie is not set, don't allow the download.
You can also try jQuery's beforeSend option, which seems a slightly better place to add headers: http://api.jquery.com/jquery.ajax/
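For reference, the beforeSend variant would look like this (same hypothetical header as above):
$('a.download').on('click', function(){
    $.ajax({
        url: '/ajax/preventDownload',
        beforeSend: function (xhr) {
            // Set the custom header on the underlying XMLHttpRequest
            xhr.setRequestHeader('X-Rarity-Download-Header', 'download');
        }
    });
});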

Replacing entire page via AJAX causes Permission Denied error in IE only

I have an AJAX post that retrieves data from the server and either replaces part of the page or, in some cases, the full page. This is controlled by a JavaScript fullRefresh parameter. The problem is that the refresh code works fine in Firefox but causes a Permission Denied error in the bowels of jQuery after it runs in IE, although it would appear to actually replace the page contents successfully.
IE version 11.0.9600.16659
JQuery version 1.8.2
Error message
Unhandled exception at line 2843, column 3 in http://localhost:62761/Scripts/jquery-1.8.2.js
0x800a0046 - JavaScript runtime error: Permission denied
My code is
function RefreshScreenContent(formActionUrl, formHTML, fullRefresh) {
    fullRefresh = (typeof fullRefresh === "undefined") ? false : fullRefresh;
    if (fullRefresh) {
        document.write(formHTML);
        document.close();
    }
    else {
        $("#content-parent").html(formHTML);
    }
}
The partial refreshes work fine, but the full refreshes are the problem. I have tried hardcoding the document.write call to write a well-formed simple HTML page rather than formHTML, in case that was somehow the problem, but even a simple single-word page causes the error.
The actual error occurs at some point later, in a callback inside jQuery.
The AJAX post to the server is in the same application, i.e. it is not a cross-domain request. I have seen posts online talking about cross-domain issues that are not applicable here.
Can anyone tell me why this is happening and how to stop it? Is there an alternative IE way of replacing the page contents?
Your code is fine (at least at first glance). My guess is that you make the call in such a way that it is interpreted as cross-domain.
I would suggest checking:
http vs https (most common)
the destination port
the root url
maybe the "destination" page makes some requests of its own; check that they are on the same domain
The reason why IE may be the only one with the problem is that it has stricter security defaults than other browsers (check the advanced security settings; I can't remember where they are in the menu), so it interprets requests in a more "paranoid" manner.
I repeat, what I said is just a guess, based on cases I've run into.
In the end I used the approach here to replace the body tag in the page with the one in the markup the AJAX call receives back: https://stackoverflow.com/a/7839921/463967
I would have preferred to replace all the content, not just the body, but I can always adapt later to include the header etc., as the body is enough for my needs right now. This works in IE and Firefox.
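The gist of that approach, as a sketch (not my exact code): extract the body from the returned markup and swap it in with jQuery, avoiding the document.write() call that triggers the IE error.
function replaceBody(formHTML) {
    // Strip everything up to and including <body ...>, and everything from </body> onward
    var bodyHtml = formHTML.replace(/^[\s\S]*<body[^>]*>|<\/body>[\s\S]*$/g, '');
    $('body').html(bodyHtml);
}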

Converting jquery code to prototype for a cross browser Ajax request in order to obtain Latest Tweets

My first post !
I had to fetch my latest tweet, and so had to perform a cross-domain request. The current app uses Prototype, but I am somewhat familiar with jQuery.
So, I began in jquery as :
$.ajax('http://twitter.com/status/user_timeline/apocalyptic_AB.json?count=1&callback=?', {
    dataType: "jsonp",
    success: function(data, text, xhqr){
        $.each(data, function(i, item) {
            console.log(item.text);
        });
    }
});
I get a warning:
'Resource interpreted as Script but transferred with MIME type application/json.'
But, I do get to see my last tweet. Fine.
So, I decided to do the same in Prototype and then try eliminating warnings, if any. But I got nowhere, even after trying for hours.
This is what I came up with initially in Prototype. A lot of changes/alterations followed, but none worked.
new Ajax.Request('http://twitter.com/status/user_timeline/apocalyptic_AB.json?count=1&callback=?', {
    contentType: "jsonp",
    onSuccess: function(transport){
        console.log(transport);
    }
});
The request succeeds, but the response text is nil / "". I get no error in Firefox, but in Chrome the error is:
XMLHttpRequest cannot load http://twitter.com/status/user_timeline/apocalyptic_AB.json?count=1&callback=?. Origin http://localhost:4000 is not allowed by Access-Control-Allow-Origin.
Refused to get unsafe header "X-JSON"
Any help shall be greatly appreciated. Thank you.
Thanks Darin for getting me back onto Dandean's JSONP for prototype js.
Although I did not mention it in the first place (the question was getting a bit long), I had tried using Ajax.JSONRequest from Dandean (the very link you are referring to). The request was constantly failing, and I didn't look further into it, as I had assumed it would be pretty straightforward to get this done in Prototype too, like in jQuery. As I got no more answers, justifiably so, I decided to wrap my head around Ajax.JSONRequest. The request failure was not due to a gateway timeout; it was because the request URL had the callback param repeated in it.
So, the request url turned out to be
GET (twitter.com/status/user_timeline/apocalyptic_AB.json?count=1&&callback=?&callback=_prototypeJSONPCallback_0)
Thus, I defined my url without callback and it performed as desired.
However, I still get a warning :
Resource interpreted as Script but transferred with MIME type application/json
Here is the equivalent Prototype code:
new Ajax.JSONRequest('http://twitter.com/status/user_timeline/apocalyptic_AB.json?count=1', {
    onSuccess: function(response){
        response.responseJSON.each(function(item){
            console.log(item.text);
        });
    }
});
You may take a look at the following page.
Regarding the error [Refused to get unsafe header "X-JSON"], this can happen if your page is under SSL, but the URL referenced in your AJAX call is not also an HTTPS URL.

jQuery $.get() function succeeds with 200 but returns no content in Firefox

I'm writing my first bit of jQuery, and I'm having a problem with jQuery.get(). I'm calling this:
$.get(url, updateList);
where updateList is defined like so:
function updateList(data)
{
    if (data)
    {
        $('#contentlist').html(data);
    }
    else
    {
        $('#contentlist').html('<li>Nothing found. Try again</li>');
    }
}
The function runs, and updateList is called. It works fine in Internet Explorer. However, in Firefox, the data parameter is always empty. I would expect it to be filled with the content of the webpage I passed in as the URL. Am I using it wrong?
Notes;
In Firebug, I've enabled the Net panel, and the request shows up. I get a 200 OK. The Headers tab looks fine, while the Response and HTML panels are both empty.
The page I'm trying to download is a straight HTML page -- there's no problem with server code.
The page with JavaScript is local to my machine; the page I'm downloading is hosted on the Internet.
I've tried checking the URL by copy-pasting it from my page into the browser -- it happily returns content.
The error occurs even in Firefox Safe Mode -- hopefully that rules out rogue addins.
You probably won't be able to do this due to cross-domain security. Internet Explorer will allow you to Ajax a remote domain when running from file://, but Firefox and Chrome won't.
Try to put both files on the same server and see if it works (it should).
You'll most likely need to fix the page you're querying with XHR, because it should be returning content. Copy the link from the Firebug Net tab into a new tab, and edit that page with your text editor so it spits content back.
Stick in an alert (or a breakpoint in Firebug) and see whether any data is returned, and whether it is an object. If it is an object, you may need to drill into it to get your markup.
