I have an Angular app that uses an external API to get country ISO codes.
This API uses HTTPS, and the request is failing with an error.
The thing is: when I use a proxy in my local Angular environment, mapping /iso-api/ to the real URL, it works fine:
"/iso-api/*": {
"target": "https://www...",
"pathRewrite": { "^/iso-api": "" },
"secure": false,
"changeOrigin": true,
"logLevel": "debug"
}
But I want this to work in production, so I need to call the real URL directly.
My server is already returning the Access-Control-Allow-Origin: * header.
I've tried running the Angular dev server with SSL (since the external API uses HTTPS), but I get the same error.
I know one solution would be to implement a proxy on the server, but I believe that shouldn't be necessary and that there should be a way to retrieve this data from the frontend.
Help please.
This is the network error in Chrome (the request and response headers are below).
In Firefox, the request ends with 200 OK and returns data, but a CORS error is thrown and I cannot access the data from the app: CORS header 'Access-Control-Allow-Origin' missing.
General
Request URL: https://www...
Referrer Policy: no-referrer-when-downgrade
Request headers
:method: GET
:scheme: https
accept: application/json, text/plain, */*
accept-encoding: gzip, deflate, br
accept-language: es-ES,es;q=0.9,en;q=0.8
origin: http://localhost:4200
referer: http://localhost:4200/app/login
sec-fetch-dest: empty
sec-fetch-mode: cors
sec-fetch-site: cross-site
Response headers
accept-ranges: bytes
cache-control: max-age=0
content-encoding: gzip
content-language: en-US
content-length: 68356
content-type: application/json
date: Mon, 27 Apr 2020 14:49:30 GMT
expires: Mon, 27 Apr 2020 14:49:30 GMT
referrer-policy: strict-origin-when-cross-origin
server-timing: cdn-cache; desc=HIT
server-timing: edge; dur=1
server-timing: ACTT;dur=0,ACRTT;dur=88
set-cookie: ... expires=Mon, 27 Apr 2020 16:49:30 GMT; max-age=7200; path=/; domain=...; HttpOnly
set-cookie: ... Domain=...; Path=/; Expires=Mon, 27 Apr 2020 18:49:30 GMT; Max-Age=14400; HttpOnly
set-cookie: ... Domain=...; Path=/; Expires=Tue, 27 Apr 2021 14:49:30 GMT; Max-Age=31536000; Secure
status: 200
vary: Accept-Encoding
UPDATE
Angular service code
import { HttpClient } from '@angular/common/http';
...
constructor(
  private _http: HttpClient,
  private _errorUtil: ErrorUtilService,
  private _converter: StoreConverter
) {}
...
getCountries(): Observable<CountryWithLanguages[]> {
  return this._http.get<GetStoresResponse>(API.storeUrl).pipe(
    catchError(this._errorUtil.handle),
    map(result => result.stores),
    switchMap(stores => stores),
    filter(this._isActiveStore),
    map(store => this._converter.toView(store)),
    toArray()
  );
}
To serve the app I use the Angular dev server. I do not add the 'Access-Control-Allow-Origin' header manually, but in the browser I can see that it is being added.
angular.json
"serve": {
"builder": "#angular-devkit/build-angular:dev-server",
"options": {
"browserTarget": "push-web-app:build",
"proxyConfig": "src/proxy-local.conf.json"
},
}
You can't read a resource from another domain unless that domain allows it; letting any page do so would be a security hole. You can read more here: Same-origin policy.
Sending Access-Control-Allow-Origin: * from your own server won't give you access to the external API. The provider of that API needs to grant your origin permission; you can't give yourself this permission.
The error you posted states that the Access-Control-Allow-Origin header is missing, which means the API isn't sending this header.
There are two likely reasons why this API isn't sending the Access-Control-Allow-Origin header:
1. A misconfiguration on the side of this API. In this case you have to ask the provider of the API to fix the issue.
2. The API provider is restricting access to the API on purpose. In this case you have to ask the provider to allow requests from your domain.
You can also proxy the request through your own server. The core difference when using a proxy is that your server fetches the resource from the API instead of the client's browser. See David's answer on how to configure such a proxy with nginx.
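For reference, before the browser will expose the response to your app, the API's response would have to include a header along these lines (the origin value here is purely illustrative):
Access-Control-Allow-Origin: https://your-production-domain.com
or Access-Control-Allow-Origin: * for a fully public API. The response headers you posted contain neither, which is why the browser blocks access to the data.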
You cannot bypass CORS on the browser side. If you are not able to modify the server side, your only solution is to use a proxy.
For development purposes you can use Angular's built-in proxy server, but not for production.
Here is a basic nginx config to do this:
server {
    listen 80;
    server_name yourdomain.com; # domain where your Angular code is deployed

    location /iso-api {
        # strip the /iso-api prefix before passing the request upstream
        rewrite ^/iso-api/(.*)$ /$1 break;
        proxy_pass https://thirdpartyapidomain.com; # URL of the API you are trying to access
    }

    location / {
        # Your normal Angular location
        # try_files ...
    }
}
This will redirect requests like
http://yourdomain.com/iso-api/countriesList to https://thirdpartyapidomain.com/countriesList
Since the client and the API calls are now on the same domain, you should not have CORS issues.
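To make the same Angular code work behind both the dev proxy and this nginx proxy, point the service at a relative, same-origin path instead of the absolute API URL. A minimal sketch (the isoApiBaseUrl name and the /stores path are illustrative, not taken from your code):
// src/environments/environment.ts
export const environment = {
  production: false,
  // same-origin prefix; the dev proxy (locally) or nginx (in production) forwards it to the real API
  isoApiBaseUrl: '/iso-api'
};

// in the service -- only the URL changes, the rest of the pipe stays the same
// import { environment } from '../environments/environment';
getCountries(): Observable<CountryWithLanguages[]> {
  return this._http.get<GetStoresResponse>(`${environment.isoApiBaseUrl}/stores`).pipe(
    // ...same operators as before
  );
}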
Use this site to resolve the CORS error:
https://cors-anywhere.herokuapp.com/
Example usage:
https://cors-anywhere.herokuapp.com/https://freegeoip.app/json
this._http.get('https://cors-anywhere.herokuapp.com/https://freegeoip.app/json')
It's just a workaround, but it works. You can use it even just to check whether the error is related to CORS or to something else.
Try the .get call with the withCredentials: true option:
return this._http.get<GetStoresResponse>(API.storeUrl, { withCredentials: true }).pipe();
...and/or making sure your browsers are up to date.
Related
My aim is to run my Quasar app on other devices connected to the local area network. I managed to serve it as expected, but when I log in to the website I get this error, POST http://10.0.0.20:8080/MyComposer/?submitId=3 404 (Not Found), despite it working
fine on my localhost before. Why is it not reading the classes in my index.php in the backend folder properly?
P.S. I don't know if this could solve my problem, but when I used phpinfo() to debug, I noticed that the REQUEST_METHOD there is GET instead of POST. Is it possible to swap them? I'll try whatever you guys give me.
Console
General
Request URL: http://10.0.0.20:8080/MyComposer/?submitId=3
Request Method: POST
Status Code: 404 Not Found
Remote Address: 10.0.0.20:8080
Referrer Policy: no-referrer-when-downgrade
Response Headers
Connection: keep-alive
Content-Length: 151
Content-Security-Policy: default-src 'none'
Content-Type: text/html; charset=utf-8
Date: Tue, 28 Jul 2020 12:18:12 GMT
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-Powered-By: Express
Request Headers
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9,es;q=0.8
Connection: keep-alive
Content-Length: 49
Content-Type: application/json
Host: 10.0.0.20:8080
Origin: http://10.0.0.20:8080
Referer: http://10.0.0.20:8080/
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36
Headers.php
<?php
header('Access-Control-Allow-Origin: http://10.0.0.20:8080/'); // also tried using $_SERVER['HTTP_ORIGIN'] here
header('Access-Control-Allow-Credentials: true');
header('Access-Control-Max-Age: 3600'); // note: the correct header name is Access-Control-Max-Age
if (strtoupper($_SERVER['REQUEST_METHOD']) === 'OPTIONS') {
    if (isset($_SERVER['HTTP_ACCESS_CONTROL_REQUEST_METHOD']))
        header("Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS");
    if (isset($_SERVER['HTTP_ACCESS_CONTROL_REQUEST_HEADERS']))
        header("Access-Control-Allow-Headers: {$_SERVER['HTTP_ACCESS_CONTROL_REQUEST_HEADERS']}");
    exit(0);
}
store.js
actions: {
  LOGIN (context, payload) {
    return new Promise((resolve, reject) => {
      axios
        .post('/MyComposer/', payload, {
          headers: { 'Content-Type': 'application/json' },
          params: {
            submitId: 3
          }
        })
index.php
<?php
require 'Classes/Headers.php';
require 'vendor/autoload.php';
echo 'Hello!';
phpinfo();
use Classes\SubjectClass;
use Classes\TestClass;
use Classes\AnswerClass;
use Classes\LoginClass;
use Classes\RegisterClass;
use Classes\TeacherClass;
use Classes\StudentClass;
use Classes\AccountClass;
use Classes\AccessClass;
use Classes\SchoolClass;
if ($_SERVER["REQUEST_METHOD"] == "POST") {
$addsubject = new SubjectClass();
$addsubject->addSubject();
$addtest = new TestClass();
$addtest->addTest();
$submitTest = new AnswerClass();
$submitTest->submitTest();
$submitLoginData = new LoginClass();
$submitLoginData->submitLoginData();
$addAccountData = new RegisterClass();
$addAccountData->addAccountData();
$addSchool = new SchoolClass();
$addSchool->addSchool();
}
The problem was caused not by a coding error but by two web servers being installed on the affected system: a XAMPP installation running on the default port 80 and a Node.js server running on port 8080.
To diagnose the problem, we first copy-pasted the URL used in the script into a browser window, which gave the same 404 HTTP error. This ruled out the axios.post() call as the cause of the behaviour.
Next, the basic HTTP port assignment was tested. Calling http://10.0.0.20 (the user's IP inside the local network) returned the correct XAMPP homepage. Checking httpd.conf and its Listen setting (which should have been Listen 8080) showed that Apache was using the default HTTP port instead. Changing it to 8080 (as used in the script) and restarting Apache resulted in the server not starting, with the error:
Problem detected! Port 8080 in use by "C:\Program
Files\nodejs\node.exe" with PID 3808! Apache WILL NOT start without
the configured ports free! You need to uninstall/disable/reconfigure
the blocking application or reconfigure Apache and the Control Panel
to listen on a different port.
This confirmed that the mixed-up ports were the cause of the problem. Removing the :8080 from the scripts made sure the requests were sent to the right server.
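In the store above that amounts to dropping the explicit port so the request reaches Apache/XAMPP instead of the Node server. A sketch of one way that can look, assuming Apache stays on its default port 80 (where exactly the port appears depends on your axios/base-URL configuration):
// store.js -- absolute URL without the :8080, so the call hits XAMPP on port 80
axios
  .post('http://10.0.0.20/MyComposer/', payload, {
    headers: { 'Content-Type': 'application/json' },
    params: { submitId: 3 }
  })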
I have been getting this error while trying to start a Jenkins job with some javascript code on a web app that makes an XMLHttpRequest. Both Jenkins and the web app are on the same machine, different ports.
Access to XMLHttpRequest at '<JENKINS_URL>' from origin '<WEB_APP_URL:443>' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Jenkins is on port 8080, but I have redirected it to port 80. I have the web app on port 443. I have the following lines in the nginx configuration file corresponding to Jenkins:
location / {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
proxy_pass http://localhost:8080;
proxy_read_timeout 90s;
}
But I still get the error. The weird thing is, when I do a curl to the Jenkins URL such as below
curl -u <JENKINS_USER>:<TOKEN> -I <JENKINS_JOB_URL>
I get this output
Server: nginx/1.16.1
Date: Wed, 03 Jun 2020 17:12:12 GMT
Location: http://<JENKINS_URL>/queue/item/331/
Connection: keep-alive
X-Content-Type-Options: nosniff
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range
Access-Control-Expose-Headers: Content-Length,Content-Range
And it has the header, but I don't know what else might be wrong. I might be missing something really obvious, as I am pretty new to this and have never dealt with CORS before, but any help is really appreciated, thanks. Happy to upload additional information if needed.
One extra thing, I also edited the following parameter in the /etc/sysconfig/jenkins file as below.
JENKINS_LISTEN_ADDRESS="127.0.0.1"
Finally figured it out!
I had installed a plugin called "CORS support for Jenkins" the other day without configuring it. I think that, by default, Jenkins was deferring to this plugin and saw that I had it disabled.
Once I enabled and configured the plugin under (JENKINS_URL)/configure, and removed the "Access-Control-Allow-Origin: *" line from my nginx configuration, my JavaScript worked.
I have a ReactJS client running webpack-dev-server on localhost:3000. It connects to a Hapi API server on localhost:8080 and I'm trying to provide a basic cookie using hapi-auth-jwt2 (I've also tried hapi-auth-cookie with equal results).
I can see that the response provides a valid set-cookie header and everything looks okay, but my browser ignores it and the cookie is never set (verified by checking document.cookie and using browser tools like Chrome's Application tab). When I connect directly to the API server with Postman, it picks up the set-cookie header correctly and stores it, so I think it's just some kind of domain/port/host configuration issue.
As a simple test, I tried deploying to our EC2 environment, but that didn't help. The EC2 environment is similar, with one instance serving the client and another serving the API. I've also tried modifying my local hosts file to map a domain (for example 127.0.0.1 example.com) and providing the domain=.example.com field in the cookie, but that also didn't help.
I think I'm just missing something basic but I don't know what it is. See below for response/request headers on login.
Request Headers
POST /login HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Content-Length: 47
Accept: application/json, text/plain, */*
Origin: http://localhost:3000
Authorization: undefined
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36
Content-Type: application/json
Referer: http://localhost:3000/
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Response Headers
HTTP/1.1 200 OK
authorization: <jwt token>
vary: origin,accept-encoding
access-control-allow-origin: http://localhost:3000
access-control-allow-credentials: true
access-control-expose-headers: WWW-Authenticate,Server-Authorization
content-type: application/json; charset=utf-8
set-cookie: cookie=token; Max-Age=604800; Expires=Wed, 16 May 2018 21:11:23 GMT; SameSite=Lax; Path=/
cache-control: no-cache
content-encoding: gzip
Date: Wed, 09 May 2018 21:11:23 GMT
Connection: keep-alive
Transfer-Encoding: chunked
http-proxy-middleware, which webpack-dev-server uses under the hood, has options for cookie domain/path rewrites (cookieDomainRewrite and cookiePathRewrite).
You should see if those satisfy your needs. Otherwise, you can also manually parse and rewrite the cookies in the onProxyRes callback.
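A minimal sketch of both approaches in the webpack-dev-server proxy config, assuming you route the login call through the dev server instead of hitting localhost:8080 directly (the /login path and target come from your headers; everything else is illustrative):
// webpack.config.js -- devServer section
devServer: {
  proxy: {
    '/login': {
      target: 'http://localhost:8080',
      changeOrigin: true,
      // option 1: let the proxy rewrite the cookie so the browser accepts it for localhost:3000
      cookieDomainRewrite: 'localhost',
      cookiePathRewrite: '/',
      // option 2: adjust the Set-Cookie header manually
      onProxyRes(proxyRes) {
        const cookies = proxyRes.headers['set-cookie'];
        if (cookies) {
          proxyRes.headers['set-cookie'] = cookies.map(cookie =>
            cookie.replace(/;\s*Domain=[^;]+/i, '')
          );
        }
      }
    }
  }
}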
I have a local environment and I'm trying to log in to a service. I'm using the 'request' library in the client, and Express with express-session in the service.
I'm using Chrome and when I login to the service I get the following response headers:
FROM: http://app.dev:3000
TO: http://app.dev:4000/login/local
HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: http://app.dev:3000
Vary: Origin, X-HTTP-Method-Override
Access-Control-Allow-Credentials: true
Content-Type: application/json; charset=utf-8
Content-Length: 304
ETag: W/"130-fdQBs605dSVTeEqXEuXrvdcQTLk"
set-cookie: auth=TOKEN; Path=/; Expires=Sun, 25 Jun 2017 01:48:41 GMT
Date: Sat, 24 Jun 2017 13:48:41 GMT
Connection: keep-alive
When I log in with Postman the cookie gets stored correctly. Subsequent requests through Postman include the cookie and everything works fine.
But when I make the same request with the npm request library, it doesn't save the cookie, and subsequent requests to the backend do not include cookies; an example request to the service after logging in shows no cookie being sent.
The request library documentation doesn't mention the withCredentials option, but setting it to true fixes the issue. The cookie now gets saved and is sent on subsequent requests.
const requestBase = request.defaults({
baseUrl: 'http://app.dev:4000/',
withCredentials: true,
});
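With that in place, subsequent calls through requestBase let the browser attach the stored cookie automatically. A quick usage sketch (the /profile path is hypothetical):
// any later call made via requestBase now carries the auth cookie set at login
requestBase.get('/profile', (err, res, body) => {
  if (err) throw err;
  console.log(body); // response from the session-protected endpoint
});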
I have an AJAX file upload using Dropzone.js, which sends a file to my hapi server. I realised the browser sends a preflight OPTIONS request, but my hapi server doesn't seem to send the right response headers, so I am getting errors in Chrome.
Here is the error I get in Chrome:
XMLHttpRequest cannot load http://localhost:3000/uploadbookimg. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:4200' is therefore not allowed access.
This is the hapi route handler:
server.route({
  path: '/uploadbookimg',
  method: 'POST',
  config: {
    cors: true,
    payload: {
      output: 'stream',
      parse: true,
      allow: 'multipart/form-data'
    },
    handler: require('./books/webbookimgupload')
  }
});
In my understanding, hapi should send all the CORS headers in response to the preflight (OPTIONS) request.
I can't understand why it is not.
Network request/response from Chrome:
**General**
Request Method:OPTIONS
Status Code:200 OK
Remote Address:127.0.0.1:3000
**Response Headers**
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
cache-control: no-cache
vary: accept-encoding
Date: Wed, 27 Apr 2016 07:25:33 GMT
Connection: keep-alive
Transfer-Encoding: chunked
**Request Headers**
OPTIONS /uploadbookimg HTTP/1.1
Host: localhost:3000
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Access-Control-Request-Method: POST
Origin: http://localhost:4200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.87 Safari/537.36
Access-Control-Request-Headers: accept, cache-control, content-type
Accept: */*
Referer: http://localhost:4200/books/upload
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Thanks in advance
hapi's cors: true is a wildcard rule that allows CORS requests from all domains, except in a few cases, one of which is when the request carries additional headers outside of hapi's default whitelist:
["accept", "authorization", "content-type", "if-none-match", "origin"]
See the cors option section in the API docs under route options:
headers - a strings array of allowed headers ('Access-Control-Allow-Headers'). Defaults to ['Accept', 'Authorization', 'Content-Type', 'If-None-Match'].
additionalHeaders - a strings array of additional headers to headers. Use this to keep the default headers in place.
Your problem is that Dropzone sends a couple of headers along with the file upload that aren't in this list:
x-requested-with (not in your headers above but was sent for me)
cache-control
You have two options to get things working; you need to change something on either the server or the client.
Option 1 - Whitelist the extra headers:
server.route({
  config: {
    cors: {
      origin: ['*'],
      additionalHeaders: ['cache-control', 'x-requested-with']
    }
  },
  method: 'POST',
  path: '/upload',
  handler: function (request, reply) {
    ...
  }
});
Option 2 - Tell Dropzone not to send those extra headers
Not possible yet through their config but there's a pending PR to allow it: https://github.com/enyo/dropzone/pull/685
I want to add my 2 cents on this one, as the above did not fully resolve the issue in my case.
I started my hapi server at localhost:3300. Then I made a request from localhost:80 to http://localhost:3300/ to test CORS. This led to Chrome still blocking the resource, saying that
No 'Access-Control-Allow-Origin' header is present on the requested
resource
(which was not true at all).
Then I changed the XHR request to fetch a URL for which I had actually created a route in hapi, which in my case was http://localhost:3300/api/test. This worked.
To work around the issue I created a "catch-all" route in hapi (to bypass the built-in 404 catch):
const Boom = require('boom'); // you can require Boom when you have hapi

server.route({
  method: '*',
  path: '/{any*}',
  handler: function (request, reply) {
    reply(Boom.notFound());
  }
});