My Chrome extension performs a GET request, which works fine. Because testing is faster with snippets, I want to do the exact same thing in the Chrome console or in Chrome Snippets. Minimal example:
fetch(url, {
  method: "GET"
})
  .then(response => response.text())
  .then(html => console.log(html))
  .catch(error => console.log(error))
Unfortunately, there I only get
TypeError: Failed to fetch as the error, and
Failed to load resource: net::ERR_FAILED in Chrome's inline error marker.
In my Chrome extension I ran into a CORS issue, so what I did in my AWS Lambda function was to set the response headers to
const headers = {
  'Content-Type': 'application/json',
  'Access-Control-Allow-Headers': 'Content-Type',
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Credentials': true
};
so I suppose CORS isn't the problem here. But I can't figure out what difference it makes to run the requests in the extension vs. in the console/snippets. What could I be missing here?
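For context, a minimal sketch of how those headers are returned from the Lambda (the handler shape, status code, and body below are illustrative assumptions; only the headers object comes from my actual function):
// Sketch of a Lambda proxy-integration handler returning the CORS headers above.
exports.handler = async (event) => {
  const headers = {
    'Content-Type': 'application/json',
    'Access-Control-Allow-Headers': 'Content-Type',
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  };
  return {
    statusCode: 200,
    headers,
    body: JSON.stringify({ ok: true }) // placeholder payload
  };
};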
I also do not see the request in AWS CloudWatch, so I suppose it doesn't even leave my machine. I am testing with a Chrome profile that has zero extensions installed; the same happens in incognito.
To rule out any issues with my server, I have also run the examples from https://lockevn.medium.com/snippet-to-fetch-remote-rest-api-and-display-in-console-of-chrome-devtool-6896a7cd6041
async function testGet() {
  const response = await fetch('https://jsonplaceholder.typicode.com/posts')
  console.log(await response.json())
}

async function testPost() {
  let r = await fetch('https://jsonplaceholder.typicode.com/posts', {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      lockevn: 1
    }),
  })
  console.log(await r.json())
}

testGet()
testPost()
Chrome's Network tab shows the request as stalled
The linked 'explanation' gives
Queueing: The browser queues requests when:
There are higher priority requests.
There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
The browser is briefly allocating space in the disk cache.
Stalled: The request could be stalled for any of the reasons described in Queueing.
Higher priority seems odd, 6 connections can't be the issue either since I restarted my browser before testing, and the disk cache issue doesn't sound like the problem either. I'm on macOS with no antivirus.
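For anyone trying to narrow down where a "stalled" request spends its time, the Resource Timing API gives a rough programmatic view. This is only an illustrative sketch: url is the same variable as in the snippet at the top, an entry may not be recorded for a failed request, and detailed phase timings are zeroed for cross-origin responses unless the server sends Timing-Allow-Origin.
// Run after the fetch attempt; if the browser recorded a timing entry for the URL,
// this shows how long it spent on it overall.
const [entry] = performance.getEntriesByName(url);
if (entry) {
  console.log('startTime:', entry.startTime, 'duration (ms):', entry.duration);
}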
I managed to find the issue. To avoid potentially privileging my requests by opening the Chrome developer console in my AWS dashboard tab, I created a new tab (chrome://new-tab-page/) and performed the requests in its console. This returned the errors described.
When I updated my question with the example code, I wanted to confirm that it ran before asking someone to try it on their machine. For quick validation I opened the console in the Stack Overflow tab, and it worked. I only wanted to check that the code could be interpreted, but it turned out to actually return a result. The same holds for my AWS endpoint: if I run the request from an https website, it works fine. No idea why this is not documented, but "disk cache" is mentioned as a potential error.
tl;dr: don't run console requests from a new tab (chrome://new-tab-page/); open the console on any regular website instead. This may be because the new-tab page isn't a normal http(s) origin, so the request effectively starts with an empty origin and the CORS headers never get a chance to apply (?)
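A quick illustrative check (not from the original post) to see which origin a console snippet will actually run under before pasting fetch calls into it:
// The page DevTools is attached to determines the origin of console requests.
console.log(location.origin); // e.g. "chrome://new-tab-page" vs. "https://stackoverflow.com"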
I specifically avoided using a website's console for testing because I wanted to prevent potential cookies on the AWS page from doing something that someone else couldn't reproduce on their machine. Good thinking, bad result, haha.
Thank you so much for the helpful comments, much appreciated.
I had this issue for more than 3 months and didn't get an answer that could help me.
The issue:
I'm integrating two applications. Application A has an HTTPS method to authenticate and assigns a cookie to the web client that calls the method. Application B needs to call some methods from A, but to do this it needs to use the previously retrieved cookie, otherwise it gets an Unauthorized error (401). The problem is that when I get the cookie, Chrome blocks the assignment.
function HttpRequestForNXV5(url) {
  var requestOptions = {
    method: 'GET',
    redirect: 'follow'
  };

  fetch(url, requestOptions)
    .then(response => response.text())
    .then(result => console.log(result))
    .catch(error => console.log('error', error));
}
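Worth noting: a plain fetch like the helper above does not attach cross-site cookies at all. Below is only a minimal sketch of the same helper with credentials requested explicitly; for it to work, application A would also have to issue the cookie with SameSite=None; Secure and respond with Access-Control-Allow-Credentials: true and an exact, non-wildcard Access-Control-Allow-Origin.
// Sketch: same helper, but asking the browser to send and accept cookies cross-site.
function HttpRequestForNXV5WithCookies(url) {
  var requestOptions = {
    method: 'GET',
    redirect: 'follow',
    credentials: 'include' // include cookies on cross-site requests
  };

  return fetch(url, requestOptions)
    .then(response => response.text())
    .then(result => console.log(result))
    .catch(error => console.log('error', error));
}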
What did I try?
Enable "Allow all cookies" in "Cookies and other site data"
Added "mysite.com" in "Sites that can always use cookies"
Enable "No protection" in "Security"
To be honest, a lot of ways that I do not remember.
The only way that I found that works is by disabling web security for the Chrome browser.
Running "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-web-security --user-data-dir="C:/ChromeDevSession"
This is not a solution for us because disabling ALL SECURITY for ALL SITES does not sound good for our client's security.
What do I want?
Assign the cookie from domain A to B.
Be able to use this cookie from B.
Not disable ALL Chrome security, just bypass it for this site.
NOTES: The applications interact in a private network and use self-signed certificates.
I have a simple test TamperMonkey (Firefox) script that calls fetch.
The script is simple.
(function() {
    'use strict';

    // Requires @grant GM_registerMenuCommand in the userscript header.
    GM_registerMenuCommand("Test", postData);
})();

// Assumed for completeness; the original script does not show this definition.
const HEADERS = { 'Content-Type': 'application/json' };

function postData() {
    fetch('https://httpbin.org/post', {
        method: 'POST',
        headers: HEADERS,
        body: JSON.stringify("{test:data}")
    })
}
But running this exact same script on different websites gives different requests.
For instance here, on StackOverflow, and on most sites, it will make one HTTP/2 post. And it works.
(Anecdotally on some sites it will first send an OPTIONS request. It is not an issue but just emphasizing the fact that behavior can be different.)
On others (Twitter, for instance), it will instead send an HTTP/1.1 POST. The API then responds with a 202 and nothing happens (the post data that should be mirrored is not returned).
Is there a way to control which HTTP version TamperMonkey uses when making requests?
===
Following #DraganS's comment, I added the (Connection, upgrade) and (Upgrade, HTTP/2.0) headers.
They do not seem to be taken into account (I don't see 'Upgrade' in the final request, and Connection is set to keep-alive).
Interestingly, though, this change makes the websites that didn't previously send the OPTIONS request now send it first.
Not on Twitter though, that is still in HTTP/1.1
===
Edit 2: I was initially testing a specific API, but switching to a full test script that sends requests to httpbin (which should just mirror the request) shows the exact same behavior.
===
Starting to think it's not TamperMonkey related.
I'm not getting the error in Firefox but in Chrome, from the console, just doing a
fetch('https://httpbin.org/post', {
    method: 'POST',
    body: JSON.stringify('{}')
})
on Twitter returns
Refused to connect to 'https://httpbin.org/post' because it violates the following Content Security Policy directive: "connect-src 'self' blob: https://*.giphy.com https://*.pscp.tv https://*.video.pscp.tv https://*.twimg.com https://api.twitter.com https://api-stream.twitter.com https://ads-api.twitter.com https://aa.twitter.com https://caps.twitter.com https://media.riffsy.com https://pay.twitter.com https://sentry.io https://ton.twitter.com https://twitter.com https://upload.twitter.com https://www.google-analytics.com https://accounts.google.com/gsi/status https://accounts.google.com/gsi/log https://app.link https://api2.branch.io https://bnc.lt wss://*.pscp.tv https://vmap.snappytv.com https://vmapstage.snappytv.com https://vmaprel.snappytv.com https://vmap.grabyo.com https://dhdsnappytv-vh.akamaihd.net https://pdhdsnappytv-vh.akamaihd.net https://mdhdsnappytv-vh.akamaihd.net https://mdhdsnappytv-vh.akamaihd.net https://mpdhdsnappytv-vh.akamaihd.net https://mmdhdsnappytv-vh.akamaihd.net https://mdhdsnappytv-vh.akamaihd.net https://mpdhdsnappytv-vh.akamaihd.net https://mmdhdsnappytv-vh.akamaihd.net https://dwo3ckksxlb0v.cloudfront.net".
So the cause would be a Twitter CORS policy? Is it something that is avoidable? Or does this mean it's just impossible to have a script making requests outside of these from twitter.com?
The fact that the HTTP version seen in Firefox is different is probably just a side effect.
The presence or not of a preflight OPTIONS request is normal behavior.
The error is due to the CSP, not CORS as I initially thought (a lot of my previous errors were CORS related).
Solution: do not use fetch, but the GM.xmlhttpRequest function. It is able to get around the CSP, but you will not see the request in the console.
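A minimal sketch of the same POST using GM_xmlhttpRequest instead of fetch (assumes the httpbin test from above and a // @grant GM_xmlhttpRequest line in the userscript header):
// Sketch: the page's CSP does not apply to GM_xmlhttpRequest, so this works even on twitter.com.
function postDataViaGM() {
    GM_xmlhttpRequest({
        method: 'POST',
        url: 'https://httpbin.org/post',
        headers: { 'Content-Type': 'application/json' },
        data: JSON.stringify({ test: 'data' }),
        onload: response => console.log(response.responseText),
        onerror: response => console.log('error', response)
    });
}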
I seem to be missing a key piece here. Any ideas?
componentDidMount() {
  this.retrieveConfig();
}

retrieveConfig = () => {
  const { locnNbr } = this.props.data;

  axios.get(`/some/uri/stuff`).then(res => {
    console.log('retrieveConfig res', res);
    this.setState({ config: res.data });
  });
};
componentDidMount = () => {
  this.retrieveConfig();
};

retrieveConfig = () => {
  Axios.get("https://jsonplaceholder.typicode.com/todos/1")
    .then(res => console.log(res.data))
    .catch(err => console.log(err));
};
I hope this fixes your problem.
You get the HTML response only on mobile, and it works from Postman, because the server URL you are trying to hit cannot be reached from the mobile device. Add the server's public certificate (.cer) to your mobile device: Postman accepts the server's certificate, whereas the mobile device will not hit server URLs whose certificates it does not trust.
Consider double checking your URLs.
I'm assuming you're trying to reach a REST API endpoint but you're getting a web page as your response, which to me suggests you're hitting the wrong port on the right domain. If you don't specify a port, the request goes to the default web port (80 for http, 443 for https), and it's unlikely your API is running on that port. So make sure you're specifying your URL like:
Axios.get("http://sub.domain.com:123/endpoint").then...
If you could share which URL you're using in browser and in Postman that might give more information on what's going on differently there.
The 200/304 thing makes sense from that perspective as well. It's a 200 because the request itself went fine, but Chrome has cached that web page and the server is reporting that it hasn't changed (that's what 304 means).
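If cached 304s keep getting in the way while debugging, a cache-busting request is one workaround (sketch only; the URL is the example endpoint used earlier in this thread, and the same Axios import is assumed):
// Sketch: append a unique query parameter so caches treat it as a new resource.
Axios.get("https://jsonplaceholder.typicode.com/todos/1", {
  params: { _: Date.now() } // cache-busting value
}).then(res => console.log(res.data));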
Sometimes I come across this issue in Firefox Developer Edition and I have not figured out why this happens (I believe it has to do with the response headers).
I manage to fix it every time by clearing the browser's cache.
I have a generated React site I am hosting in an S3 bucket. One of my components attempts to fetch something when loaded:
require('isomorphic-fetch')
...
componentDidMount() {
  fetch(`${url}`)
    .then(res => {
      console.log(res);
      this.setState({
        users: res
      })
    })
    .catch(e => {
      // do nothing
    })
}
The url I am fetching is an AWS API Gateway. I have enabled CORS there, via the dropdown, with no changes to the default configuration.
In my console, for both the remote site and locally during development, I see:
"Failed to load url: No 'Access-Control-Allow-Origin' header is present on the requested resource." etc
However, in the Chrome Network tab, I can see the request and the response, with status 200, etc. In the console, my console.log and this.setState are never called.
I understand that CORS is a common pain point, and that many questions have touched on CORS. My question: Why does the response show no error in the Network tab, while simultaneously erroring in the console?
The fetch(`${url}`) call returns a promise that resolves with a Response object, and that Response object provides methods that resolve with text, JSON data, or a Blob.
So to get the data you want, you need to do something like this:
componentDidMount() {
  fetch(`${url}`)
    .then(res => res.text())
    .then(text => {
      console.log(text);
      this.setState({
        users: text
      })
    })
    .catch(e => {
      // do nothing
    })
}
"Failed to load url: No 'Access-Control-Allow-Origin' header is present on the requested resource." etc
That means the browser isn’t allowing your frontend code to access the response from the server, because the response doesn’t include the Access-Control-Allow-Origin header.
So in order for the above code to work, you’ll need to fix the server configuration on that server so that it sends the necessary Access-Control-Allow-Origin response header.
However, in the Chrome Network tab, I can see the request and the response, with status 200, etc. In the console, my console.log and this.setState are never called.
That’s expected in the case where the server doesn’t send the Access-Control-Allow-Origin response header. In that case, the browser still gets the response — and that’s why you can see it in the devtools Network tab — but just because the browser gets the response doesn’t mean it will expose the response to your frontend JavaScript code.
The browser will only let your code access the response if it includes the Access-Control-Allow-Origin response header; if the response doesn’t include that header, then the browser blocks your code from accessing it.
My question: Why does the response show no error in the Network tab, while simultaneously erroring in the console?
For the reason outlined above. The browser itself runs into no error in getting the response. But your code hits an error because it's trying to access a res object that isn't there; the browser hasn't created that res object, because the browser isn't exposing the response to your code.
You may be seeing the status 200 for the OPTIONS request, not the GET. There is a CORS setting to handle legacy clients so the preflight won't confuse your client; I had to do that last time in a React app. Your error is that your CORS isn't configured properly (sorry, obviously). Chrome won't let your client talk to the backend if it doesn't get the headers properly. Other browsers probably behave the same, and probably React too. It may be part of the HTTP protocol if only one side has CORS enabled; someone can correct me there. It's a similar security consideration to sending a request to HTTP from HTTPS. Chrome blocks it.
It looks to me like it's your backend. CORS isn't active or it would put that header on, and after that, you would see errors about origin mismatch in the frontend client.
In my experience, it's a 2-3 step combo. First, make sure OPTIONS responses don't send confusing signals to your client (look for settings to do with 200); this is a config setting in your backend. Then make sure the backend is configured to use CORS: you very specifically need to enter the origin hostname and port that the backend should expect traffic from.
I could probably give better input if I see what languages and/or frameworks you are using besides React.
This is what you would do in Express and Node for your backend:
const cors = require('cors')

// note http or https
app.use(cors({
  origin: 'http://example.com:1337',
  //origin: '*',
  methods: 'GET,HEAD,PUT,PATCH,POST,DELETE',
  optionsSuccessStatus: 200 // some legacy browsers (IE11, various SmartTVs) choke on 204
}))
My last React app was detonating without optionsSuccessStatus, reporting success when it was actually failing.
To give you a little bit of imagery to work with, CORS is simple but finicky. It's a simple matter of alignment. Once your backend is configured to a) use CORS and b) know who to accept traffic from, it's done. Once your frontend is configured to handle this traffic, it's done. It's like aligning a square peg in a round hole until you get the config settings aligned.
Try using Postman to send some GET requests to the backend. You can observe the response headers from there.
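Alternatively, a quick check from the browser console on the frontend's origin shows whether CORS is actually letting your code read anything (sketch; the URL is a placeholder for your backend endpoint):
// If CORS is not configured, the request never resolves for your code and lands in .catch;
// if it is configured, you get a readable response here.
fetch('http://backend.example.com:1337/endpoint')
  .then(res => console.log('CORS ok, status:', res.status))
  .catch(err => console.log('blocked by CORS (or network error):', err));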
I'm attempting to port a Chrome extension to Edge. The Chrome extension works fine, and all HTTP requests are working as expected. When these same requests fire in the port, I get this error:
XMLHttpRequest: Network Error 0x2efd, Could not complete the operation due to error 00002efd.
This issue seems to pop up for a lot of Microsoft stuff, including Windows Phone. Maybe there is a similar answer to my issue for this extension, but I'm permitting ALL URLs in my manifest...
This is the request:
$http.get(url)
  .then(function () {
  })
  .catch(function () {
    var args = arguments;
  });
I've also tried the jQuery way:
$.ajax({
  url: url,
  success: function () {
  },
  error: function () {
    var args = arguments;
  }
});
I can't share the exact URL because it is part of our business architecture, but the Chrome extension consumes it just fine. If I open the URL directly in a browser (Edge or Chrome) it shows the result just fine... I'm at a loss. I know the error means the request can't connect, but why? And how do I fix it?
Seems to be a known bug that hasn't been fully triaged as of 2016-10-07.
In another bug report Microsoft mysteriously says "This is by design as extensions don’t support loopback" and closed it as such. That would certainly be an incompatibility with the Chrome model.
The symptom seems to be that connections to sites that are considered part of the Local Intranet by Windows network stack are denied as part of an aggressive XSS prevention policy.
There is definitely nothing you can do on the extension side until this is resolved by MS. If anything, extension code needs to be privileged enough to do this, even if that breaks their compartmentalization model.
It's possible that you can make some environment changes, though. I would experiment with Internet Options for the "Local intranet" zone, for example setting Protected Mode on, disabling it for the Internet zone, or more likely somehow making sure the site isn't considered intranet. Using a domain name instead of an IP address may also help "fool" Edge into thinking it's not intranet.