HTTPS to HTTP JSONP request - javascript

I'm having issues sending JSONP requests from an HTTPS site to an HTTP site.
I have a (non-local) test environment over HTTPS (with a valid certificate) where I'm able to run all of these cross-site/"cross-protocol" requests successfully (with warnings, but without errors).
Google Chrome Javascript Console output:
The page at https://my.test.environment/ ran insecure content from http://non.secure.site/service?jsonCallback=jsonp1331132928704
However, in production (on Google App Engine, appspot subdomain), Google Chrome is blocking all requests pending user confirmation.
Google Chrome JavaScript Console output (note the [blocked] text):
[blocked] The page at https://production.appspot.com/ ran insecure content from http://non.secure.site/service?jsonCallback=jsonp1331132928704
I know what I'm doing is not secure, but these services are provided by a third party and there is no SSL communication available so far. I'm really confused by this because I don't get why it works (with warnings) in the test environment and not under appspot (Google App Engine).
I tried to investigate headers with no success.
Test environment headers:
Connection:Keep-Alive
Content-Encoding:gzip
Content-Language:es
Content-Length:2524
Content-Type:text/html;charset=utf-8
Date:Wed, 07 Mar 2012 15:48:30 GMT
Keep-Alive:timeout=15, max=100
Set-Cookie: cookie_info...
Vary:Accept-Encoding
APPSpot headers:
access-control-allow-credentials:false
access-control-allow-origin:*
cache-control:no-cache, must-revalidate
content-encoding:gzip
content-length:47890
content-type:text/html; charset=utf-8
date:Wed, 07 Mar 2012 14:52:02 GMT
expires:Fri, 01 Jan 1990 00:00:00 GMT
pragma:no-cache
server:Google Frontend
set-cookie: coookie_info....
status:200 OK
vary:Accept-Encoding
version:HTTP/1.1
I have no idea why this works in the test environment while the same approach is blocked on appspot by Google Chrome.
Any thoughts?

An Apache proxy will make the request to the endpoint on your behalf. You can even make non-JSONP requests to a service (JSON, XML, images, POST, PUT, DELETE, etc.) because the browser thinks it's making the request to the same domain.
Your non.secure.site vhost file would contain something like:
ProxyRequests Off
ProxyPreserveHost On
<Proxy *>
Allow from all
</Proxy>
ProxyPass /appspot https://production.appspot.com/
ProxyPassReverse /appspot https://production.appspot.com/
Once you set it up, you just call the service like:
http://non.secure.site/appspot/service?jsonCallback=jsonp1331132928704
Google "ProxyPass" for more info:
https://serverfault.com/questions/429404/help-me-understand-how-to-use-proxypass
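To make the mechanics concrete, here is a small sketch of what the JSONP call to the proxied path looks like on the client. The jsonpUrl helper is illustrative (not part of any library); the /appspot prefix and the jsonCallback parameter mirror the ProxyPass config above, and the callback name is the one from the question.

```javascript
// Build the JSONP URL for the proxied endpoint. In practice a JSONP helper
// (e.g. jQuery's $.ajax with dataType: 'jsonp') generates the callback name;
// here it is hard-coded for illustration.
function jsonpUrl(base, callbackParam, callbackName) {
  return base + '?' + callbackParam + '=' + encodeURIComponent(callbackName);
}

var url = jsonpUrl('http://non.secure.site/appspot/service',
                   'jsonCallback', 'jsonp1331132928704');
// → 'http://non.secure.site/appspot/service?jsonCallback=jsonp1331132928704'

// In the browser, JSONP then amounts to injecting this URL as a script tag:
// var s = document.createElement('script'); s.src = url; document.head.appendChild(s);
```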

If you have no other option but to use that unsecured 3rd-party API, you can consider MITMing that API yourself.
Create a server-side script that will be accessed only through SSL and will act as a proxy or forwarder between your script tag and the API. That way you can increase security by doing your own checks and validations on the data, and because you'll serve it over SSL, you won't get any "Mixed Content" errors.
BTW, I haven't tested it; there's always the chance that sites served from GAE under Google's certificate will act differently.
Hope I could help.

I ran into the same issue doing the same thing between HTTP and HTTPS. It is a cross-domain issue.
The most important thing you need is that the server-side page you use for the cURL request has to set some headers to allow the HTTP-to-HTTPS connection. These are below:
header("Access-Control-Allow-Origin: your https url");
header("Access-Control-Allow-Methods: POST, GET");
header("Access-Control-Max-Age: 1728000");
header("Access-Control-Allow-Headers: Content-Type, Connection, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control");
header("Connection: close");
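For comparison, the same headers can be expressed as a small Express-style middleware; this is a sketch, and the allowed-origin value is a placeholder you would replace with your actual HTTPS URL.

```javascript
// Express-style middleware mirroring the PHP header() calls above.
// The allowedOrigin argument is a placeholder for your HTTPS site's URL.
function corsHeaders(allowedOrigin) {
  return function (req, res, next) {
    res.setHeader('Access-Control-Allow-Origin', allowedOrigin);
    res.setHeader('Access-Control-Allow-Methods', 'POST, GET');
    res.setHeader('Access-Control-Max-Age', '1728000');
    res.setHeader(
      'Access-Control-Allow-Headers',
      'Content-Type, Connection, Depth, User-Agent, X-File-Size, ' +
      'X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control'
    );
    next();
  };
}

// Usage: app.use(corsHeaders('https://your-https-site.example'));
```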

Related

Mapbox tiles API gives 403

I'm receiving a 403 error when requesting data from the Mapbox static tiles API. This API call has been working fine until recently, and we haven't made any changes to our web server configuration, nor our URL policy.
We are accessing via a web page on one of the whitelisted domains, not a tool/CLI.
A 403 suggests that the issue is some kind of access issue. I tried removing the URL policy for the access token in question (to allow access from any domain), and this seemed to work - the API calls started succeeding again. So I think the problem only concerns tokens scoped to specific URLs/domains.
Here's an example of the 403 response headers I'm seeing:
Access-Control-Allow-Methods GET
Access-Control-Allow-Origin *
Cache-Control no-cache
Connection keep-alive
Content-Length 23
Content-Type application/json; charset=utf-8
Date Tue, 28 Sep 2021 15:41:04 GMT
ETag W/"17-bqIm6pxC4cx+ZoszvXxsClwgWw8"
Via 1.1 572270b8624c0596173ef8189682d917.cloudfront.net (CloudFront)
X-Amz-Cf-Id pxwf39dmi1zB3oFY9dvYia_dVZpcgKpYCTDJT5Vjfp85MsU8NuVeLA==
X-Amz-Cf-Pop LHR52-C1
X-Cache Error from cloudfront
X-Content-Type-Options nosniff
X-Edge-Origin-Shield-Skipped 0
X-Powered-By Express
In particular, the "X-Cache: Error from cloudfront" header seems to be of interest, although it is not really informative enough for me to act on.

Cookies are not accessible within JavaScript (and the dev tools) but sent along with XHR request (no httponly used)

I'm using both a front-end and a back-end application on different domains with session-based authorization. I have set up a working CORS configuration, which works as expected on localhost (e.g. from port :9000 to port :8080). As soon as I deploy the applications on secure domains (both domains only allow HTTPS), the CSRF cookie is no longer accessible within JavaScript, leading to an incorrect follow-up request from the front-end (missing the CSRF header).
The cookie is set by the back-end in the Set-Cookie header without using the HttpOnly flag. It is actually set somewhere in the browser, because the follow-up request contains both the session cookie and the CSRF cookie. Trying to access it by JavaScript (using e.g. document.cookie in the console) returns an empty string. The DevTools of Chrome do not show any cookies on the front-end domain (the back-end domain is not even listed).
I'm expecting the cookie to be set and visible on the current domain (the front-end domain). I'm using the withCredentials flag of the axios library.
Do you have any idea, why the cookie cannot be accessed from JavaScript nor from the DevTools in Chrome? Does this have anything to do with the Strict-Transport-Security header?
Headers
1. Initial GET Response Header
HTTP/1.1 401 Unauthorized
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: https://[my-frontend-domain]
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Encoding: gzip
Content-Type: application/json;charset=UTF-8
Date: Wed, 20 Sep 2017 11:57:07 GMT
Expires: 0
Pragma: no-cache
Server: Apache-Coyote/1.1
Set-Cookie: CSRF-TOKEN=[some-token]; Path=/
Vary: Origin,Accept-Encoding
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: [some-token]
X-Xss-Protection: 1; mode=block
Content-Length: [some-length]
Strict-Transport-Security: max-age=15768000; includeSubDomains
2. Follow-up POST Request Header
POST /api/authentication HTTP/1.1
Host: [my-backend-host]
Connection: keep-alive
Content-Length: [some-length]
Pragma: no-cache
Cache-Control: no-cache
Accept: application/json, text/plain, */*
Origin: [my-frontend-host]
User-Agent: [Google-Chrome-User-Agent]
Content-Type: application/x-www-form-urlencoded
DNT: 1
Referer: [my-frontend-host]
Accept-Encoding: gzip, deflate, br
Accept-Language: de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4,de-CH;q=0.2,it;q=0.2
Cookie: [some-other-cookies]; CSRF-TOKEN=[same-token-as-in-the-previous-request]
This request should contain a CSRF header which would automatically be added if the cookie was accessible with JavaScript.
TL;DR: Read-access to cross-domain cookies is not possible. Adding the CSRF token to the response header would be a solution. Another solution to completely circumvent CORS & cross-domain requests would be to use a reverse proxy.
Problem
As stated in my question above, the JavaScript part of my front-end (e.g. https://example1.com) is trying to access a non-HttpOnly cookie from my back-end on e.g. https://example2.com. To be able to access a remote API with JavaScript, I'm using CORS. This allows the requests to go through. I'm using withCredentials: true on the front-end side and Access-Control-Allow-Credentials: true on the back-end side. The Set-Cookie header then sets the cookie on the back-end origin and not on the front-end origin. Therefore, the cookie is neither visible in the DevTools nor via the document.cookie command in JavaScript.
Cookies, set on the back-end origin, are always part of a request to the back-end via CORS. I would, however, need access to the content of the CSRF cookie to add the token into the request header (to prevent CSRF attacks). As I found out, there is no way to read (or write) cookies from a different domain with JavaScript – no matter what CORS setting is used (see these StackOverflow answers: [1], [2]). The browser restricts access to the content of a cookie to same-domain origins.
Solutions
This leads to the conclusion that there is no way to access the contents of a non-HttpOnly cookie of a different domain. A workaround for this issue is to put the CSRF token into an additional, custom response header. Such headers can usually not be accessed from a different origin either, but they can be exposed by the back-end's CORS setting Access-Control-Expose-Headers. This is secure as long as one uses a strictly limited Access-Control-Allow-Origin header.
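As a sketch of that header-based workaround (the X-CSRF-TOKEN header name is illustrative, not taken from the original application):

```javascript
// Server side (Express-style sketch): send the CSRF token in a custom header
// and explicitly expose it to the cross-origin front-end.
function sendCsrfToken(res, token) {
  res.setHeader('X-CSRF-TOKEN', token);
  res.setHeader('Access-Control-Expose-Headers', 'X-CSRF-TOKEN');
}

// Client side: read the exposed header from the response (axios lower-cases
// header names) and echo it back on the follow-up request.
function csrfHeaderFrom(responseHeaders) {
  return { 'X-CSRF-TOKEN': responseHeaders['x-csrf-token'] };
}
```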
Another workaround is to use a reverse proxy, which circumvents the issues with CORS and cross-domain requests altogether. Such a reverse proxy provides a special path on the front-end which is redirected to the back-end (server-side). For example, calls to https://front-end/api are proxied to https://back-end/api. Because all requests from the front-end are made to the front-end proxy on the same domain, the browser treats every call as a same-domain request and the cookies are set directly on the front-end origin. Drawbacks of this solution include potential performance issues because of another server being in between (delays), and the cookies need to be set on two origins (login twice when directly accessing the back-end). Setting up a reverse proxy can be done with nginx, Apache, or very easily by using http-proxy-middleware in Node.js:
var express = require('express');
var proxy = require('http-proxy-middleware');

var options = {
  target: 'https://[server]', // the back-end to forward /api calls to
  changeOrigin: true,         // rewrite the Host header to the target
  secure: true                // verify the target's SSL certificate
};
var exampleProxy = proxy(options);

var app = express();
app.use('/api', exampleProxy);                  // proxy all /api requests
app.use(express.static(__dirname + "/public")); // serve the front-end itself
app.listen(process.env.PORT || 8080);
In short, it is not possible to access cross-origin cookies, document.cookie can only access the current (or parent) domain cookies.
The hint that this was the root cause was ssc-hrep3 mentioning "both domains" in his question.
It's very easy to make that mistake when switching from a localhost deployment, which uses only different ports for the back-end and front-end servers, to one that uses two different hosts. It will work locally, because cookies are shared across ports, and fail when two different hosts are used (unlike some other CORS issues, which would also show up locally).
See ssc-hrep3's answer for more information and a workaround.
1
You may need to add the Access-Control-Allow-Headers header to allow specific headers to be passed.
Please try adding the following to your server's response headers (for the OPTIONS method) for testing purposes:
Access-Control-Allow-Headers: Content-Type, *
In production I recommend limiting the headers, as follows (but I'm not 100% sure this is the correct header list; you'll need to experiment to see if it works):
Access-Control-Allow-Headers: Cookie, Set-Cookie
See this for reference: https://quickleft.com/blog/cookies-with-my-cors/
2
Another problem you may experience is that your cookies will be set on the domain where your back-end service is located (not on the domain you are querying from).
Please check this as well.
3
As a variant of the last problem: the browser can prohibit setting a cookie for domain b.xxx.com from a request which comes from a.xxx.com.
In this case you may try to set the cookie on the parent domain xxx.com, so it will be available to your client side.
As you can read here, the XHR specification explicitly disallows reading Set-Cookie. The best approach would be to pass the information in a header instead of a cookie.

Google oauth 400 response: No 'Access-Control-Allow-Origin' header is present on the requested resource

I'm trying to make a request from client-side JavaScript to the Google OAuth endpoint (https://accounts.google.com/o/oauth2/v2/auth) with a Google Calendar scope. Thing is, I can't use Google's JavaScript client, because I'm using webpack and I don't want to have to separately include a script outside of my bundle.js.
So instead, I'm using axios (https://github.com/mzabriskie/axios) to make the HTTP GET to the aforementioned token endpoint. Here's what my request looks like:
https://accounts.google.com/o/oauth2/v2/auth?response_type=token&client_id={client id here}&nonce=c8ef445540186351d9108ad64d7a5b65&scope=https:%2F%2Fwww.googleapis.com%2Fauth%2Fcalendar
I generated the nonce using the crypto-js library's MD5 function. Here are the request headers:
Accept:application/json, text/plain, */*
Origin:http://localhost:8000
Referer:http://localhost:8000/admin
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36
The response I get from google looks like this:
alt-svc:quic=":443"; ma=2592000; v="32,31,30,29,28,27,26,25"
alternate-protocol:443:quic
cache-control:no-cache, no-store, max-age=0, must-revalidate
content-encoding:gzip
content-type:text/html; charset=utf-8
date:Mon, 18 Apr 2016 07:16:21 GMT
expires:Fri, 01 Jan 1990 00:00:00 GMT
pragma:no-cache
server:GSE
status:400
x-content-type-options:nosniff
x-frame-options:SAMEORIGIN
x-xss-protection:1; mode=block
And I see this log in my chrome devtools console:
XMLHttpRequest cannot load https://accounts.google.com/o/oauth2/v2/auth?response_type=token&client_id={client id here}&scope=https:%2F%2Fwww.googleapis.com%2Fauth%2Fcalendar. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8000' is therefore not allowed access. The response had HTTP status code 400.
I've made sure that in my google app console, under the corresponding client id, I've added http://localhost:8000 to the "Authorized Javascript Origins" field, and I've made sure to enable authorization for the google calendar api. I've read that the web client implicit auth flow doesn't use the redirect uri, but I've tried filling that out regardless (I've tried the values http://localhost:8000 and http://localhost:8000/admin, which is the page I'm sending my request from). Having spent hours googling this, I've found nothing to suggest that what I'm doing shouldn't work.
So my question is, if I've allowed http://localhost:8000 as an authorized origin for my client ID, why am I still not able to send a request (albeit via javascript) to that auth endpoint?
I believe you should redirect the browser to this OAuth2 endpoint rather than just trying to make an AJAX request to it.
https://developers.google.com/identity/protocols/OAuth2UserAgent#overview
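In code, the redirect would look roughly like this sketch of the implicit flow. The client_id and redirect_uri values are placeholders, and buildAuthUrl is an illustrative helper, not part of any Google library.

```javascript
// Build the implicit-flow authorization URL and navigate the whole browser
// window to it, instead of issuing an XHR (which CORS will block).
function buildAuthUrl(params) {
  const qs = Object.entries(params)
    .map(([k, v]) => k + '=' + encodeURIComponent(v))
    .join('&');
  return 'https://accounts.google.com/o/oauth2/v2/auth?' + qs;
}

const url = buildAuthUrl({
  response_type: 'token',
  client_id: 'YOUR_CLIENT_ID',                 // placeholder
  redirect_uri: 'http://localhost:8000/admin', // placeholder
  scope: 'https://www.googleapis.com/auth/calendar'
});

// In the browser: window.location = url;
// Google then redirects back to redirect_uri with the access token in the
// URL fragment, which your page parses on load.
```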

CORS & example.com

I'm having trouble with CORS.
I use an API which has
Access-Control-Allow-Origin: http://www.example.com
Because of that, I can't access the information I need to continue building my website.
But, strangely, I can see it if I put the API URL into the Firefox address bar.
These are my request headers:
Host: carto.strasmap.eu
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:39.0) Gecko/20100101 Firefox/39.0
Accept: application/json, text/plain
Accept-Language: fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://192.168.1.49:9000/
Origin: http://192.168.1.49:9000
Connection: keep-alive
And the response headers:
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Origin: http://www.example.com/
Access-Control-Max-Age: 0
Cache-Control: max-age=31536000
Connection: Keep-Alive
Content-Encoding: gzip
Content-Length: 781
Content-Type: text/javascript; charset=utf-8
Date: Sat, 25 Jul 2015 01:23:50 GMT
Expires: Sun, 24 Jul 2016 01:23:50 GMT
Keep-Alive: timeout=5, max=100
Server: Apache
Vary: Accept-Encoding
X-Powered-By: PHP/5.6.8
Of course, I can't modify the API.
I use AngularJS for my website.
Is there anything I can do to get the data hidden behind this ?
Thank you for your help
Lothigo
No.
If the Access-Control-Allow-Origin header is example.com and you're attempting to access it from any other origin, you won't be able to.
Is there anything I can do to get the data hidden behind this ?
No, not with pure client code, but yes if you can involve a custom server. See the possible work-arounds discussed below.
Same-origin security in a browser prevents an Ajax request to a page at origin Y when that request is made from a web page that is not also at origin Y. This can only be changed by having the server that serves the request enable CORS, either from the origin whose page you are making the request from or from all origins. The only way to change that is by changing the CORS support on the API server. There is nothing you can do purely on the client side to override the same-origin protections. And, if there were a pure client-side way to override it, it would quickly be closed as a security hole.
Same origin protections do not apply to a URL typed into the URL bar since there is no "origin" that is different than the URL entered into the URL bar. That explains why you can access the API server by typing URLs directly into the URL bar. The same origin protections for Ajax calls made from a web page are additional security measures implemented by the browser that do not apply when entering a URL directly into the URL bar. But, there is no way to use this capability from Javascript to skirt the same origin protections because Javascript cannot freely reach across windows of different origins.
There are some possible work-arounds.
If the API server supports JSONP, then you could use that. But, since JSONP is specifically for cross origin requests, if the API server isn't allowing cross origin requests with a regular Ajax request, then they probably wouldn't be allowing them via JSONP.
You can implement your own server proxy. From your existing web page, you would make a request of your own server proxy. That proxy would either already be on the same origin as your web page or would support CORS from at least the origin on your web page so that the Ajax call to your own server proxy would be permitted. Your server proxy would then call the API server to get the results you want and return them via the Ajax call made to the server proxy. Since same origin protections are implemented and enforced only in the browser for Ajax calls made from the browser, the server proxy is not limited by them and it can freely access the API server.
If you are calling another origin (host) from your origin, the Ajax call will fail because the other origin disallows access from a different host. So, to access the API, you need to allow the particular path pattern on the server side.
web.xml file in java project.
<web-app>
<filter>
<filter-name>myFilter</filter-name>
<filter-class>CorsFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>myFilter</filter-name>
<url-pattern>/rest/*</url-pattern>
</filter-mapping>
<servlet>
<servlet-name>myRestPath</servlet-name>
<servlet-class>com.MyServlet</servlet-class>
</servlet>
<servlet-mapping>
<url-pattern>/rest/*</url-pattern>
<servlet-name>myRestPath</servlet-name>
</servlet-mapping>
</web-app>
You can edit your logic in the MyFilter.java file, or you can also add init parameters in the web.xml file.
MyFilter.java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class CorsFilter implements Filter {
    @Override
    public void init(FilterConfig fConfig) throws ServletException {
        // no initialization needed
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain filterChain) throws IOException, ServletException {
        HttpServletResponse resp = (HttpServletResponse) response;
        // "*" allows any origin to access the API
        resp.addHeader("Access-Control-Allow-Origin", "*");
        // methods which are allowed through the filter
        resp.addHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS, PUT, DELETE, HEAD");
        // custom header entries
        resp.addHeader("Access-Control-Allow-Headers", "X-PINGOTHER, Origin, X-Requested-With, Content-Type, Accept");
        filterChain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}
I think this will be the right solution for your query.

Can browsers react to Set-Cookie specified in headers in an XSS jquery.getJSON() request?

(Note: This is a follow up to my question Can jQuery.getJSON put a domain's cookies in the header of the request it makes? and covers the XSS case of Setting a cookie in an AJAX request?)
I've been told I'm unable to set cookies to be read by other domains that are not subdomains of the current domain using $.cookie(..., ..., {domain: ...}). But in a comment on a response to my last question, @zanlok said "The server's reply, however, can definitely set a cookie" and it got two upvotes.
So I thought I'd try using a service which was created for the explicit purpose of setting cookies called Freebase's "touch" API. The call looks like:
$.getJSON("http://api.sandbox-freebase.com/api/service/touch",
{}, // URL parameters
afterCookieIsSetCallback); // Callback function
Looking in FireBug at the response header it's like this:
Date Wed, 24 Nov 2010 03:35:28 GMT
Server Apache
X-Metaweb-Cost [...]
Etag [...]
Expires Wed, 24 Nov 2010 03:35:29 GMT
Cache-Control no-store
Vary Accept-Encoding
Content-Encoding gzip
Set-Cookie mwLastWriteTime=1290569730|10325_9202a8c04000641f80000000199eff96|sandbox; expires=Thu, 25-Nov-2010 03:35:28 GMT; Path=/
Last-Modified Wed, 24 Nov 2010 03:35:28 GMT
Content-Length 134
Content-Type text/plain; charset=utf-8
X-Cache MISS from cache01.sandbox.sjc1.metaweb.com
Connection keep-alive
X-Metaweb-TID cache;cache01.sandbox.sjc1:8101;2010-11-24T03:35:28Z;0001
So there's definitely a Set-Cookie in there, and the script runs the response handler. Yet the cookie is not present in the request headers for later JSON requests this script makes to .sandbox-freebase.com.
(By contrast, simply typing the touch api URL into the address bar and loading it that way does set the cookie for future requests. That applies even in other tabs.)
This seems to be a deviation from a prior "expected behavior", because there was a toolkit published by MetaWeb circa "2007-2009" which seemed to think such an approach could work:
http://www.google.com/codesearch/p?hl=en#v099O4eZ5cA/trunk/src/freebase/api.js&q=touch%20package:http://mjt%5C.googlecode%5C.com&l=340
Without knowing much about it, I'm wondering if it was a recent change that Firefox adopted and then WebKit followed suit. Perhaps the one mentioned here:
http://trac.webkit.org/browser/trunk/WebCore/xml/XMLHttpRequest.cpp#L856
So is there any canonical documentation on this particular issue?
The AJAX call you are making is a request to a domain outside the domain of the top-level URL (the URL in the address bar). This makes it a third-party cookie, and by default Internet Explorer won't persist a third-party cookie. That means the cookie will come back in the Set-Cookie header on the first request, but subsequent requests you make to that server will not have that cookie sent in the request.
Like you said, if you go directly to the url in your browser it works. This is because in this case it's a first party cookie.
In order for IE to accept third-party cookies, the server that is sending the Set-Cookie header on its response must also have a P3P policy header set.
Here is an example: when you navigate to CNN, you will notice one of the requests it makes is to the domain b.scorecardresearch.com. scorecardresearch is dropping a tracking cookie, but this cookie is considered a third-party cookie. So, in order to make it work, they had to also include a P3P header; see the headers below:
HTTP/1.1 200 OK
Content-Length: 43
Content-Type: image/gif
Date: Thu, 02 Dec 2010 19:57:16 GMT
Connection: keep-alive
Set-Cookie: UID=133a68a4-63.217.184.91-1288107038; expires=Sat, 01-Dec-2012 19:57:16 GMT; path=/; domain=.scorecardresearch.com
P3P: policyref="/w3c/p3p.xml", CP="NOI DSP COR NID OUR IND COM STA OTC"
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Cache-Control: private, no-cache, no-cache=Set-Cookie, no-store, proxy-revalidate
Server: CS
If you were to copy this header and add it to the response, you would notice that the cookies start working:
P3P: policyref="/w3c/p3p.xml", CP="NOI DSP COR NID OUR IND COM STA OTC"
It's best that you craft a P3P header specific for your business, but the above should work for testing purposes.
If I understand you correctly, you are wondering why the server sends Set-Cookie only on the first request. If so, it's by design; take a look here:
http://en.wikipedia.org/wiki/HTTP_cookie
Set-Cookie is like a setter: the server sends it for the browser to cache the value locally. It could send it every time, but there is no need to, so it sends it again only when it needs to change the value stored locally.
The browser, on the other hand, will send the Cookie header every time with the contents set by the last issued Set-Cookie from the server.
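The round trip described above can be illustrated with a toy sketch: parse a Set-Cookie header into a stored name/value pair, then serialize the stored pairs the way a browser builds the Cookie header. The helper names and the cookie value are illustrative only.

```javascript
// Parse a Set-Cookie header into its name/value pair, ignoring attributes
// such as expires and Path (which only control storage, not what is echoed).
function parseSetCookie(header) {
  const [pair] = header.split(';');
  const [name, value] = pair.split('=');
  return { name: name.trim(), value: value.trim() };
}

// Serialize a stored cookie jar the way a browser builds the Cookie header
// for subsequent requests to the same domain.
function cookieHeaderFor(jar) {
  return Object.entries(jar).map(([n, v]) => n + '=' + v).join('; ');
}

const jar = {};
const c = parseSetCookie('mwLastWriteTime=1290569730|abc|sandbox; expires=Thu, 25-Nov-2010 03:35:28 GMT; Path=/');
jar[c.name] = c.value;
// cookieHeaderFor(jar) → 'mwLastWriteTime=1290569730|abc|sandbox'
```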
