Mapbox tiles API gives 403 - javascript

I'm receiving a 403 error when requesting data from the Mapbox static tiles API. This call had been working fine until recently, and we haven't made any changes to our web server configuration, nor to our URL policy.
We are accessing via a web page on one of the whitelisted domains, not a tool/CLI.
A 403 suggests some kind of access problem. I tried removing the URL policy for the access token in question (to allow access from any domain), and this seemed to work: the API calls started succeeding again. So I think the problem only concerns tokens scoped to specific URLs/domains.
Here's an example of the 403 response headers I'm seeing:
Access-Control-Allow-Methods GET
Access-Control-Allow-Origin *
Cache-Control no-cache
Connection keep-alive
Content-Length 23
Content-Type application/json; charset=utf-8
Date Tue, 28 Sep 2021 15:41:04 GMT
ETag W/"17-bqIm6pxC4cx+ZoszvXxsClwgWw8"
Via 1.1 572270b8624c0596173ef8189682d917.cloudfront.net (CloudFront)
X-Amz-Cf-Id pxwf39dmi1zB3oFY9dvYia_dVZpcgKpYCTDJT5Vjfp85MsU8NuVeLA==
X-Amz-Cf-Pop LHR52-C1
X-Cache Error from cloudfront
X-Content-Type-Options nosniff
X-Edge-Origin-Shield-Skipped 0
X-Powered-By Express
In particular, the "X-Cache: Error from cloudfront" header seems interesting, although it isn't really informative enough for me to act on.
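For reference, a minimal diagnostic sketch (the style ID, tile coordinates and token below are placeholders, not our real values) that fetches a single tile and logs the status and body, to compare the URL-restricted token against an unrestricted one:
const token = 'pk.YOUR_TOKEN_HERE'; // the URL-restricted token under test
const tileUrl = 'https://api.mapbox.com/styles/v1/mapbox/streets-v11/tiles/256/3/4/3'
  + '?access_token=' + token;

fetch(tileUrl)
  .then(async (res) => {
    // With Access-Control-Allow-Origin: * the status and body are readable;
    // a URL-restricted token typically answers 403 with a short JSON message.
    console.log('status:', res.status);
    if (!res.ok) {
      console.log('body:', await res.text());
    }
  })
  .catch((err) => console.error('Network/CORS failure:', err));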

Related

301 response with 'Cross-Origin Request Blocked' despite having correct CORS headers configured

I'm accessing NASA pictures with their public API, but I get this error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at
[nasa api website] (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
But when I inspect their response headers, 'Access-Control-Allow-Origin' is present and set to '*', as you can see here:
RESPONSE HEADERS:
Access-Control-Allow-Origin *
Age 0
Cache-Control max-age=0, private, must-revalidate
Content-Encoding gzip
Content-Type application/json; charset=utf-8
Date Sat, 28 Mar 2020 14:37:13 GMT
Etag W/"e26hidden..."
Referrer-Policy strict-origin-when-cross-origin
Server openresty
Strict-Transport-Security max-age=31536000; includeSubDomains
Vary Origin
Via https/1.1 api-umbrella (ApacheTrafficServer [cMsSf ]), 1.1 vegur
X-Cache MISS
X-Content-Type-Options nosniff
X-Download-Options noopen
X-Frame-Options SAMEORIGIN
X-Permitted-Cross-Domain-Policies none
X-RateLimit-Limit 1000
X-RateLimit-Remaining 999
X-Request-Id 00c8c415-37ad-474b-bfbd-8e968d60f37f
X-Runtime 0.125778
X-Xss-Protection 1; mode=block
REQUEST HEADERS:
Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding gzip, deflate, br
Accept-Language en-US,en;q=0.5
Connection keep-alive
Host api.nasa.gov
If-None-Match W/"e26chidden.."
Upgrade-Insecure-Requests 1
User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:74.0) Gecko/999991 Firefox/74.0
There’s a common mistake that can happen when specifying a URL for a cross-origin request in code, and the mistake can cause browsers to end up reporting a CORS error when in fact the problem is simply an easy-to-overlook mistake in the request URL itself.
The mistake is just a missing "s": using "http" as the URL protocol part instead of "https".
That missing "s" causes the server to respond with a 3xx redirect to the equivalent https location of the URL. But the problem is: by default, many/most servers won't include the Access-Control-Allow-Origin header in 3xx responses. So the browser gets that 3xx, but because it lacks the Access-Control-Allow-Origin header, the browser refuses to let your code follow the redirect; instead it stops right there and emits a CORS error.
So when you encounter a case like this, the way to troubleshoot it is: Open the Network pane in devtools and inspect the response. Check the response status code shown there and check the response headers. If the cause is the mistake described in this answer, you’ll see a Location response header. That value is the URL to which the server is trying to redirect the request.
And when you look at the Location value, you might initially think it's exactly the same as the request URL in your code, because it's easy to overlook that the only difference is that single missing "s". But if you take the URL from that Location value, replace the request URL in your frontend code with it, and it works, then the difference becomes apparent.
So in the case of the URL in this question, the problem was simply that the frontend code specified an http://mars.jpl.nasa.gov URL that should instead have been an https://mars.jpl.nasa.gov URL.
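A minimal sketch of the difference, assuming a fetch-based frontend (the endpoint path is just a placeholder):
// The only difference between the failing and the working request is the scheme.
const badUrl  = 'http://mars.jpl.nasa.gov/some/endpoint';  // answered with a 3xx redirect -> CORS error
const goodUrl = 'https://mars.jpl.nasa.gov/some/endpoint'; // served directly with the CORS headers shown above

fetch(goodUrl)
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error(err));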

How does Google manage SSO across multiple domains [google.com, youtube.com]

When we log in to Google, it sets cookies for both the Google and YouTube domains.
In the Network tab, I see two interesting GET calls:
https://mail.google.com/accounts/SetOSID?authuser
https://accounts.youtube.com/accounts/SetSID
How can I achieve the same result?
I have to set a cookie for a website like youtube.com from a website like google.com.
I tried making an XMLHttpRequest(). It didn't work. But if I open my second domain in a separate tab, it sets the cookie for the second domain.
My request headers:
Request URL: https://mysite.ngrok.io/SetSID?params=xyz
Request Method: GET
Status Code: 302 Found
Remote Address: xx.xx.xxx.xxx:xxx
Referrer Policy: no-referrer-when-downgrade
My response headers:
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Origin: *
Content-Length: 184
Content-Type: text/plain; charset=utf-8
Date: Wed, 09 May 2018 16:29:34 GMT
Location: https://mysite.ngrok.io/SetSID/callback?params=xyz
set-cookie: _cookie=sample; Path=/; HttpOnly
Vary: Accept
X-Powered-By: Express
If we inspect the console, the request type is binary. Did that somehow help them achieve the current goal?
This API is hit:
https://myaccount.google.com/accounts/SetOSID?authuser=1&continue=https%3A%2F%2Faccounts.youtube.com%2Faccounts%2FSetSID
followed by this API:
https://accounts.youtube.com/accounts/SetSID?ssdc=1&sidt=xxx&continue=https://mail.google.com/mail/u//?authuser
If I'm not wrong, Google is loading https://accounts.youtube.com for a split second and setting up the cookies there. It then continues to mail.google.com (see the continue params).
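To make the pattern concrete, here is a hypothetical Express sketch of such a "SetSID"-style endpoint on the second domain (parameter names and validation are placeholders; Google's real implementation is of course not public):
const express = require('express');
const app = express();

// Reached via a top-level redirect from the first domain (not via XHR), so the
// browser accepts the Set-Cookie as first-party for this domain.
app.get('/accounts/SetSID', (req, res) => {
  // In a real system `sidt` would be a short-lived, signed, single-use token
  // minted by the first domain; validation is omitted in this sketch.
  const { sidt, continue: next } = req.query;
  res.cookie('SID', sidt, { httpOnly: true, secure: true, sameSite: 'none' });
  // Continue the redirect chain back to the URL given in `continue`.
  res.redirect(next || 'https://first-domain.example/');
});

app.listen(8080);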

Cookies are not accessible within JavaScript (and the dev tools) but are sent along with XHR requests (no HttpOnly used)

I'm using a front-end and a back-end application on different domains with session-based authorization. I have set up a working CORS configuration, which works as expected on localhost (e.g. from port :9000 to port :8080). As soon as I deploy the applications to secure domains (both domains allow HTTPS only), the CSRF cookie is no longer accessible within JavaScript, leading to an incorrect follow-up request from the front-end (missing the CSRF header).
The cookie is set by the back-end in the Set-Cookie header without using the HttpOnly flag. It is actually set somewhere in the browser, because the follow-up request contains both the session cookie and the CSRF cookie. Trying to access it by JavaScript (using e.g. document.cookie in the console) returns an empty string. The DevTools of Chrome do not show any cookies on the front-end domain (the back-end domain is not even listed).
I'm expecting the cookie to be set and being visible on the current domain (front-end domain). I'm using the withCredentials flag of the axios library.
Do you have any idea why the cookie cannot be accessed from JavaScript or from the DevTools in Chrome? Does this have anything to do with the Strict-Transport-Security header?
Headers
1. Initial GET Response Header
HTTP/1.1 401 Unauthorized
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: https://[my-frontend-domain]
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Encoding: gzip
Content-Type: application/json;charset=UTF-8
Date: Wed, 20 Sep 2017 11:57:07 GMT
Expires: 0
Pragma: no-cache
Server: Apache-Coyote/1.1
Set-Cookie: CSRF-TOKEN=[some-token]; Path=/
Vary: Origin,Accept-Encoding
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: [some-token]
X-Xss-Protection: 1; mode=block
Content-Length: [some-length]
Strict-Transport-Security: max-age=15768000; includeSubDomains
2. Follow-up POST Request Header
POST /api/authentication HTTP/1.1
Host: [my-backend-host]
Connection: keep-alive
Content-Length: [some-length]
Pragma: no-cache
Cache-Control: no-cache
Accept: application/json, text/plain, */*
Origin: [my-frontend-host]
User-Agent: [Google-Chrome-User-Agent]
Content-Type: application/x-www-form-urlencoded
DNT: 1
Referer: [my-frontend-host]
Accept-Encoding: gzip, deflate, br
Accept-Language: de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4,de-CH;q=0.2,it;q=0.2
Cookie: [some-other-cookies]; CSRF-TOKEN=[same-token-as-in-the-previous-request]
This request should contain a CSRF header which would automatically be added if the cookie was accessible with JavaScript.
TL;DR: Read-access to cross-domain cookies is not possible. Adding the CSRF token to the response header would be a solution. Another solution to completely circumvent CORS & cross-domain requests would be to use a reverse proxy.
Problem
As stated in my question above, the JavaScript part of my front-end (e.g. https://example1.com) is trying to access a non-HttpOnly cookie from my back-end on e.g. https://example2.com. To be able to access a remote API with JavaScript, I'm using CORS, which allows the requests to go through. I'm using withCredentials: true on the front-end side and Access-Control-Allow-Credentials: true on the back-end side. The Set-Cookie header then sets the cookie on the back-end origin, not on the front-end origin. Therefore, the cookie is neither visible in the DevTools nor via the document.cookie command in JavaScript.
Cookies, set on the back-end origin, are always part of a request to the back-end via CORS. I would, however, need access to the content of the CSRF cookie to add the token into the request header (to prevent CSRF attacks). As I found out, there is no way to read (or write) cookies from a different domain with JavaScript – no matter what CORS setting is used (see these StackOverflow answers: [1], [2]). The browser restricts access to the content of a cookie to same-domain origins.
Solutions
This leads to the conclusion that there is no way to access the contents of a non-HttpOnly cookie from a different domain. A workaround for this issue is to put the CSRF token into an additional, custom response header. Such headers normally cannot be read from a different origin either, but they can be exposed via the back-end's CORS setting Access-Control-Expose-Headers. This is secure as long as one uses a strictly limited Access-Control-Allow-Origin header.
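A sketch of that first workaround, assuming an Express back-end and an axios front-end (the header name, origins and routes are illustrative, not taken from the original setup):
// --- Back-end (Express): expose the CSRF token via a custom header ---
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'https://example1.com'); // strictly limited, not '*'
  res.set('Access-Control-Allow-Credentials', 'true');
  res.set('Access-Control-Expose-Headers', 'X-CSRF-TOKEN');       // make the header readable cross-origin
  res.set('X-CSRF-TOKEN', 'token-generated-per-session');         // placeholder value
  next();
});
app.listen(8080);

// --- Front-end (axios): read the exposed header and echo it on the POST ---
const axios = require('axios');

async function authenticate(credentials) {
  const init = await axios.get('https://example2.com/api/init', { withCredentials: true });
  const csrfToken = init.headers['x-csrf-token'];
  return axios.post('https://example2.com/api/authentication', credentials, {
    withCredentials: true,
    headers: { 'X-CSRF-TOKEN': csrfToken },
  });
}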
Another workaround would be to use a reverse proxy, which circumvents the issues with CORS and cross-domain requests altogether. Such a reverse proxy provides a special path on the front-end which is forwarded to the back-end (server-side). For example, calls to https://front-end/api are proxied to https://back-end/api. Because all requests from the front-end are made to the front-end proxy on the same domain, the browser treats every call as a same-domain request and cookies are set directly on the front-end origin. Drawbacks of this solution include potential performance issues because of another server being in between (delays), and the cookies need to be set on two origins (login twice when accessing the back-end directly). Setting up a reverse proxy can be done with nginx or Apache, or very easily by using http-proxy-middleware in Node.js:
var express = require('express');
var proxy = require('http-proxy-middleware');

// Proxy every /api/* call to the back-end so the browser only ever talks to
// the front-end origin and cookies are set first-party.
var options = {
  target: 'https://[server]', // back-end base URL
  changeOrigin: true,         // rewrite the Host header to the target
  secure: true                // verify the back-end's TLS certificate
};
var exampleProxy = proxy(options);

var app = express();
app.use('/api', exampleProxy);
app.use(express.static(__dirname + "/public"));
app.listen(process.env.PORT || 8080);
In short, it is not possible to access cross-origin cookies; document.cookie can only access cookies of the current (or a parent) domain.
The hint that this was the root cause was ssc-hrep3 mentioning "both domains" in his question.
It's very easy to make that mistake when switching from a localhost deployment, which uses only different ports for the back-end and front-end servers, to one that uses two different hosts. That will work locally, because cookies are shared across ports, and will fail when two different hosts are used (unlike some other CORS issues, which also show up locally).
See ssc-hrep3's answer for more information and a workaround.
1. You may need to add the Access-Control-Allow-Headers header to allow specific headers to be passed.
For testing purposes, try adding the following to your server's response headers (for the OPTIONS method):
Access-Control-Allow-Headers: Content-Type, *
In production I recommend limiting the headers, for example as follows (I'm not 100% sure this is the correct header list; you'll need to experiment to see if it works):
Access-Control-Allow-Headers: Cookie, Set-Cookie
See this for reference: https://quickleft.com/blog/cookies-with-my-cors/
2. Another problem you may experience is that your cookies will be set on the domain where your back-end service is located, not on the domain you are querying from.
Please check this as well.
3. As a variant of the last problem, the browser can prohibit setting a cookie for domain b.xxx.com from a request that comes from a.xxx.com.
In this case you may try setting the cookie on the parent domain xxx.com, so it will be available to your client side.
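For illustration only (the domain, route and cookie name are placeholders, not from the question), scoping the cookie to the parent domain from an Express back-end might look like this:
const express = require('express');
const app = express();

app.get('/api/init', (req, res) => {
  // Hypothetical sketch: scope the cookie to the shared parent domain so that
  // both a.xxx.com and b.xxx.com receive it on subsequent requests.
  res.cookie('CSRF-TOKEN', 'some-token', {
    domain: '.xxx.com', // parent domain shared by front-end and back-end hosts
    path: '/',
    secure: true        // both hosts are HTTPS-only
  });
  res.sendStatus(204);
});

app.listen(8080);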
As you can read here, the XHR specification explicitly disallows reading Set-Cookie. The best way to do it would be to pass the information in a header instead of a cookie.

Google oauth 400 response: No 'Access-Control-Allow-Origin' header is present on the requested resource

I'm trying to make a request from client-side JavaScript to the Google OAuth endpoint (https://accounts.google.com/o/oauth2/v2/auth) with a Google Calendar scope. Thing is, I can't use Google's JavaScript client, because I'm using webpack and I don't want to have to separately include a script outside of my bundle.js.
So instead, I'm using axios (https://github.com/mzabriskie/axios) to make the HTTP GET to the aforementioned authorization endpoint. Here's what my request looks like:
https://accounts.google.com/o/oauth2/v2/auth?response_type=token&client_id={client id here}&nonce=c8ef445540186351d9108ad64d7a5b65&scope=https:%2F%2Fwww.googleapis.com%2Fauth%2Fcalendar
I generated the nonce using the crypto-js library's MD5 function. Here are the request headers:
Accept:application/json, text/plain, */*
Origin:http://localhost:8000
Referer:http://localhost:8000/admin
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36
The response I get from google looks like this:
alt-svc:quic=":443"; ma=2592000; v="32,31,30,29,28,27,26,25"
alternate-protocol:443:quic
cache-control:no-cache, no-store, max-age=0, must-revalidate
content-encoding:gzip
content-type:text/html; charset=utf-8
date:Mon, 18 Apr 2016 07:16:21 GMT
expires:Fri, 01 Jan 1990 00:00:00 GMT
pragma:no-cache
server:GSE
status:400
x-content-type-options:nosniff
x-frame-options:SAMEORIGIN
x-xss-protection:1; mode=block
And I see this log in my chrome devtools console:
XMLHttpRequest cannot load https://accounts.google.com/o/oauth2/v2/auth?response_type=token&client_id={client id here}&scope=https:%2F%2Fwww.googleapis.com%2Fauth%2Fcalendar. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8000' is therefore not allowed access. The response had HTTP status code 400.
I've made sure that in my google app console, under the corresponding client id, I've added http://localhost:8000 to the "Authorized Javascript Origins" field, and I've made sure to enable authorization for the google calendar api. I've read that the web client implicit auth flow doesn't use the redirect uri, but I've tried filling that out regardless (I've tried the values http://localhost:8000 and http://localhost:8000/admin, which is the page I'm sending my request from). Having spent hours googling this, I've found nothing to suggest that what I'm doing shouldn't work.
So my question is, if I've allowed http://localhost:8000 as an authorized origin for my client ID, why am I still not able to send a request (albeit via javascript) to that auth endpoint?
I believe you should redirect the browser to this OAuth2 endpoint rather than just trying to make an AJAX request to it.
https://developers.google.com/identity/protocols/OAuth2UserAgent#overview
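A minimal sketch of that redirect-based (implicit) flow; the client ID and redirect URI below are placeholders:
// Build the authorization URL and navigate the whole page to it instead of
// issuing an XHR; Google then redirects back to redirect_uri with the token.
const params = new URLSearchParams({
  response_type: 'token',
  client_id: 'YOUR_CLIENT_ID.apps.googleusercontent.com',
  redirect_uri: 'http://localhost:8000/admin',
  scope: 'https://www.googleapis.com/auth/calendar',
});
window.location.assign('https://accounts.google.com/o/oauth2/v2/auth?' + params);

// After consent, the access token arrives back in the URL fragment, e.g.
// http://localhost:8000/admin#access_token=...&token_type=Bearer&expires_in=3600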

HTTPS to HTTP JSONP request

I'm having issues sending JSONP requests from an HTTPS site to an HTTP site.
I have a (non-local) test environment over HTTPS (with a valid certificate) where I'm able to run all these cross-site/"cross-protocol" requests successfully (with warnings, but without errors).
Google Chrome Javascript Console output:
The page at https://my.test.environment/ ran insecure content from http://non.secure.site/service?jsonCallback=jsonp1331132928704
However, in production, (on Google App Engine, appspot subdomain) Google Chrome is blocking all requests waiting for user confirmation.
Google Chrome Javascript Console output (special attention to [blocked] text):
[blocked] The page at https://production.appspot.com/ ran insecure content from http://non.secure.site/service?jsonCallback=jsonp1331132928704
I know what I'm doing is not secure, but these services are provided by a third party and there is no SSL communication available so far. I'm really confused by this because I don't get why it works (with warnings) in the test environment and not under appspot (Google App Engine).
I tried to investigate headers with no success.
Test environment headers:
Connection:Keep-Alive
Content-Encoding:gzip
Content-Language:es
Content-Length:2524
Content-Type:text/html;charset=utf-8
Date:Wed, 07 Mar 2012 15:48:30 GMT
Keep-Alive:timeout=15, max=100
Set-Cookie: cookie_info...
Vary:Accept-Encoding
APPSpot headers:
access-control-allow-credentials:false
access-control-allow-origin:*
cache-control:no-cache, must-revalidate
content-encoding:gzip
content-length:47890
content-type:text/html; charset=utf-8
date:Wed, 07 Mar 2012 14:52:02 GMT
expires:Fri, 01 Jan 1990 00:00:00 GMT
pragma:no-cache
server:Google Frontend
set-cookie: coookie_info....
status:200 OK
vary:Accept-Encoding
version:HTTP/1.1
I have no idea why this is working in the test environment while the same approach is blocked on APPSpot by Google Chrome.
Any thoughts?
An Apache proxy will make the request to the endpoint on your behalf. You can even make non-JSONP requests to a service (JSON, XML, images, POST, PUT, DELETE, etc.) because the browser thinks it's making the request to the same domain.
Your non.secure.site vhost file would contain something like
ProxyRequests Off
ProxyPreserveHost On
<Proxy *>
    Allow from all
</Proxy>
ProxyPass /appspot https://production.appspot.com/
ProxyPassReverse /appspot https://production.appspot.com/
Once you set it up you just call the service like...
http://non.secure.site/appspot/service?jsonCallback=jsonp1331132928704
Google "ProxyPass" for more info:
https://serverfault.com/questions/429404/help-me-understand-how-to-use-proxypass
If you have no other option but to use that unsecured 3rd-party API, you can think about MITM-ing that API yourself.
Create a server-side script that is accessed only through SSL and acts as a proxy or forwarder between your script tag and the API. That way you can increase security by doing your own checks and validations on the data, and because you'll serve it over SSL you won't get any "Mixed Content" errors.
BTW, I haven't tested it; there's always the chance that sites under a Google certificate served from GAE will act differently.
Hope I could help.
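For illustration of the forwarder idea above, a hypothetical Node.js sketch (the upstream URL, route and certificate paths are placeholders):
const express = require('express');
const https = require('https');
const http = require('http');
const fs = require('fs');

const app = express();

// The page calls https://your.domain/service; this endpoint fetches the
// plain-HTTP third-party service server-side and relays the body, so the
// browser never loads mixed content.
app.get('/service', (req, res) => {
  // Forward the original query string (e.g. the jsonCallback parameter).
  const qs = new URLSearchParams(req.query).toString();
  http.get('http://non.secure.site/service?' + qs, (upstream) => {
    res.set('Content-Type', upstream.headers['content-type'] || 'application/javascript');
    upstream.pipe(res);
  }).on('error', () => res.status(502).json({ error: 'upstream failure' }));
});

// Served over TLS so every request the browser makes stays on https://.
https.createServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt'),
}, app).listen(443);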
I ran into the same issue doing the same thing between HTTP and HTTPS. It is a cross-domain issue.
The most important thing is that the server-side page you are using for the curl call has to set some headers to allow the HTTP-to-HTTPS connection. These are below:
header("Access-Control-Allow-Origin: your https url");
header("Access-Control-Allow-Methods: POST, GET");
header("Access-Control-Max-Age: 1728000");
header("Access-Control-Allow-Headers: Content-Type, Connection, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control");
header("Connection: close");
