Google Earth API: allow HTTPS connection with unregistered certificate - JavaScript

Is it possible to allow an "insecure" HTTPS connection in order to load a KML file from a server? Right now, if the plugin hits an HTTPS error, it simply does not load the KML. Google Earth itself loads the KML but asks for approval; the API just does nothing...
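For context, KML loading in the plugin typically goes through google.earth.fetchKml, along these lines (a sketch; the URL is a placeholder and ge is an existing plugin instance):

// fetchKml exposes no error details: on any failure, including a
// certificate error, the callback simply receives null.
google.earth.fetchKml(ge, 'https://example.com/data.kml', function (kmlObject) {
  if (kmlObject) {
    ge.getFeatures().appendChild(kmlObject);
  } else {
    // An HTTPS certificate problem lands here, indistinguishable from a 404.
    alert('KML failed to load');
  }
});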

Nope.
This is one of my major gripes with the plugin. It will only pull data over an HTTPS connection if there are no errors. This means that:
The SSL certificate must be valid.
The SSL certificate must be trusted.
There can be no authentication prompts (pass-through authentication that produces no prompting works fine).
The only workaround I've found is to go in and manually trust the certificate on the client's machine. Make sure you trust the certificate in each browser that will be used (Chrome, IE, Firefox).
Having spoken with Google directly about this, I wonder whether this is something that can be solved, or whether it's just one of the "brutal realities" put in place by the web browser container.

Related

How can I force fetch to accept a self-signed certificate in a web app front end?

I am prototyping a simple web app front end that needs to fetch JSON data from my server. The server itself works fine -- I can click on the link, and the JSON data shows up in the browser. But the following simple script fails:
fetch('https://x.x.x.x:8000') // MY URL FAILS
// fetch('https://jsonplaceholder.typicode.com/todos/1') // ALTERNATE URL WORKS
  .then(function () {
    alert("Successful");
  })
  .catch(function () {
    alert("Failure");
  });
I'm completely new to this sort of front-end work (and to JavaScript in general), so I might be overlooking an obvious reason, but two possibilities come to mind:
my server uses a self-signed certificate for testing purposes; and/or
I'm using a non-standard port.
The first of these possible explanations seems more likely.
Accessing the web page generates a bunch of errors, none of which mean anything to me (except the one about not finding the favicon).
I will temporarily post the full URL in a comment below, in case anyone else wants to see what happens, but I'll delete it once a working solution is suggested.
To answer your question as asked: no, you definitely can't use fetch to force the client (browser) to ignore cert errors. Especially for cross-origin requests (and going from one port to another is cross-origin), that would be a huge security hole. It would allow anybody who could gain a man-in-the-middle position on a victim's network (not hard) to steal information from the victim's HTTPS connections, using fraudulent certificates to intercept the HTTPS requests and responses.
You might be able to force server-side JS (in Node or similar) to ignore cert validation errors, since in that case you (hopefully!) control the code the server is running. But it doesn't look like that's what you're doing, and in a web page, somebody else (the server owner) controls what code you (the browser) are running, so you definitely can't let that code turn off important security features!
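For completeness, here is what the server-side version of that looks like - a minimal Node.js sketch, assuming a local test endpoint on port 8000; rejectUnauthorized: false disables certificate checking entirely, so it is for local testing only:

const https = require('https');

// For local testing only: skip certificate validation for this one request.
https.get('https://localhost:8000/', { rejectUnauthorized: false }, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(JSON.parse(body)));
}).on('error', console.error);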
Attack scenario for if JS could turn off cert validation:
Suppose you and I both control web servers. I, a malicious attacker, would like to intercept the traffic between your users and your web server. I even have a man-in-the-middle (MitM) network position on some of your users! However, you are of course using TLS (via HTTPS), so I can't decrypt or modify the traffic.
However, your users sometimes connect to my server as well, not knowing it is malicious (maybe I mostly use it to serve relatively innocuous stuff, like a comment system or analytics tools, so lots of sites embed my scripts). My server can tell when a browser requests content from an IP address where I could launch an MitM attack, and serve them malicious scripts.
Now, in the real world, this doesn't matter! Sites don't trust other sites, because of the Same-Origin Policy, a critical browser security feature. My site (or the scripts I serve) can cause your users to submit requests to any other server that I choose, but they can't read the responses (if the other server is cross-origin), and they can't turn off certificate validation so my MitM position is mostly useless.
However, suppose there were a way - as you propose - for scripts to tell the browser "it's OK, just trust this one particular self-signed cert when making this request". This changes everything. My MitM host will generate a self-signed cert (and corresponding private key) for your site, and send the cert to my own web server. When a potential victim loads a script from me, it not only contains instructions to make HTTPS requests to your site, it also specifies that the browser should trust the self-signed certificate that my MitM node generated.
The victim's browser would then start the request, attempting to establish a TLS connection to your server. My MitM node would intercept the request and reply with its self-signed certificate. Normally the browser would reject that, but in this case it doesn't, because you created a way to tell browsers to accept a particular self-signed cert. Therefore, the victim's browser trusts my self-signed certificate. The actual request never even makes it to your server. The victim's browser, believing itself to be interacting with the legitimate server (yours) rather than with my MitM host, sends an HTTP request containing secrets such as cookies and/or API keys / bearer tokens / login credentials / etc. My MitM intercepts that (as it's intercepting all traffic), decrypts it (trivially, because it is in fact one end of the TLS tunnel), and can access the victim's account on your server. (My MitM host can also duplicate the responses from your server that the victim would usually see, to keep them unsuspecting. It can even tamper with these responses, if I want to mislead the user.)
The usual way to solve this is to install the server's certificate as trusted in the browser (or in the OS). That way, the browser will recognize the certificate's issuer (itself) as valid, and consider the certificate valid.
What happens if you go to https://x.x.x.x:8000/ in the browser directly? If you get a certificate error, well, that's your problem: the browser doesn't trust the certificate of the server hosted on that port. You should have an opportunity to temporarily or permanently trust that certificate (exact details will depend on the browser).
Note that, of course, doing this on your own computer won't fix it for anybody else's computer. They'd need to manually trust your certificate too.
The actual solution is, of course, to install a trusted certificate. Perhaps you should try Let's Encrypt or similar, for a nice free cert that every client will trust without extra shenanigans?
Just had the same problem and stumbled upon the solution by accident. It is possible: just have the user open the self-signed site, click 'Show more' and then 'Accept the risk and continue'. After doing that, fetch requests go through like nothing ever went wrong.
It works on both Firefox and Chrome.
This method just has the caveat that the user has to do that setup first, and on Chrome the address bar displays 'Not secure' even when the rest of the page is secure.
But if you need HTTPS locally, this works like a charm. Hope this helps the people who came here from Google :)
EDIT:
Also worth mentioning, I tested it on localhost but it works everywhere.

Why do I get a getCurrentPosition() and watchPosition() "insecure origins" error in Chrome on localhost?

I'm working on a website in my local development environment (Ubuntu 16.04) and testing the website on Chrome (58) via http://localhost.example/ - which connects to the local web server.
Running this Javascript:
$(document).ready(function () {
  if (navigator.geolocation) {
    // showPosition is the success callback, defined elsewhere on the page.
    navigator.geolocation.getCurrentPosition(showPosition);
  }
});
Triggers this error:
[Deprecation] getCurrentPosition() and watchPosition() no longer work on insecure origins. To use this feature, you should consider switching your application to a secure origin, such as HTTPS. See https://sites.google.com/a/chromium.org/dev/Home/chromium-security/deprecating-powerful-features-on-insecure-origins for more details.
Why is that? I understand that public-facing websites need to be running over HTTPS for the geolocation functionality to work; we have a number of public websites running similar code over HTTPS.
However, according to the deprecation documentation:
localhost is treated as a secure origin over HTTP, so if you're able to run your server from localhost, you should be able to test the feature on that server.
The above JavaScript is running inline in the HTML body loaded via http://localhost.example/test-page/ - so why am I getting the "insecure origins" error in Chrome?
Firefox (53) shows the in-browser location access prompt, as expected.
Chrome considers localhost over HTTP to be secure. Since you are using the hostname localhost.example over HTTP, it is not considered secure.
Note: Firefox will behave similarly as of Firefox 55
SSL over HTTP (that is, HTTPS) ensures private communication between client and server. Without it, the information may cross untrusted networks in transit, where any third party (a hacker) on the network can steal it. To avoid that, browsers force the user to use a secure connection before exposing powerful features.
On a local server, the information never leaves the private local network, so there is no need for this kind of security. We can therefore expect a browser to allow geolocation without SSL on a local server; ideally, browser developers should skip this validation for localhost, 127.0.0.1, and similar origins.
There are tricks available to avoid such issues: you can install a self-signed SSL certificate on the local server, or you can edit Chrome's configuration to allow specific domains to access geolocation, the webcam, etc.
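A quick way to check which side of this rule your origin falls on is the standard isSecureContext property (supported in current Chrome and Firefox):

// true on https:// pages and on http://localhost; false on http://localhost.example
if (window.isSecureContext) {
  console.log('Secure context: geolocation should be available.');
} else {
  console.warn('Insecure context: getCurrentPosition() will be blocked.');
}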
Helpful links:
https://sites.google.com/a/chromium.org/dev/Home/chromium-security/deprecating-powerful-features-on-insecure-origins
https://ngrok.com/

Can I permit geolocation for unsecured origins for development purposes?

I'm trying to develop a web app that uses javascript's geolocation functions. Since version 50, Google Chrome has blocked access to its geolocation functions for origins not using HTTPS. That's not a problem when I deploy my code to a production environment (which has a valid SSL cert), but for development I'm just using a hosts file entry to preview my code running on a local VM (specifically, Laravel's Homestead), which obviously doesn't have a valid SSL cert.
Is there a way to configure Google Chrome to permit access to the geolocation functions on my development VM, even though it's an "insecure origin"? Alternatively, is there any way I can configure Homestead so that Chrome will believe it's secure?
With your configuration (modifying the hosts file to point your domain's DNS at your machine), you can create a trusted certificate, using Let's Encrypt for example.
Just to mention it, http://localhost is considered secure, and Chrome has an --unsafely-treat-insecure-origin-as-secure startup flag, as described in the answer below.
The simplest answer to this question turns out to be that Homestead actually sets up self-signed certificates by default, so accessing your dev code via HTTPS works already, albeit with Chrome issuing an invalid certificate warning. However, if you accept that warning and agree to proceed to the insecure site anyway, Chrome allows the site to use Geolocation as though it were secure.
However, if that doesn't take your fancy, there are other options:
Set up Homestead with valid SSL certs
If you have a production webserver and control of a public domain name, you can use certbot to generate a trusted certificate on that server, and then copy the cert files to your Homestead box to use instead of the self-signed certs it auto-generates.
The disadvantage to this approach is that the certificates certbot generates are only valid for 90 days, so you'll need to repeat this process every three months (or any time you re-provision your Homestead box).
1. Add an A record to your DNS that directs the domain you want to use for development (say, local-dev.yourdomain.com) to your production server.
2. Install certbot on the production server, and run certbot-auto certonly to generate a valid cert for local-dev.yourdomain.com.
3. Copy the files /etc/letsencrypt/live/local-dev.yourdomain.com/fullchain.pem and /etc/letsencrypt/live/local-dev.yourdomain.com/privkey.pem from your production server to your Homestead box.
4. Update your Homestead.yaml file to ensure that it directs requests for local-dev.yourdomain.com to the correct code directory on the box.
5. On your Homestead box, overwrite the files /etc/nginx/ssl/local-dev.yourdomain.com.crt and /etc/nginx/ssl/local-dev.yourdomain.com.key with the fullchain.pem and privkey.pem files (respectively) that you copied in step 3.
6. Update the hosts file on your development machine to point local-dev.yourdomain.com to 192.168.10.10 (or whatever IP is specified in your Homestead.yaml file).
7. Access your site via https://local-dev.yourdomain.com and enjoy that hard-earned green padlock icon.
Explicitly configure Chrome to treat your (non-https) domain as secure
Chrome has an --unsafely-treat-insecure-origin-as-secure startup flag that can be used for this purpose, but it requires the use of a distinct user profile (settable via a second flag) in order to work.
From the Chromium wiki:
You can run chrome with the --unsafely-treat-insecure-origin-as-secure="http://example.com" flag (replacing "example.com" with the origin you actually want to test), which will treat that origin as secure for this session. Note that you also need to include the --user-data-dir=/test/only/profile/dir to create a fresh testing profile for the flag to work.
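For example, a hypothetical Linux invocation (adjust the binary name, profile path, and origin to match your setup):

google-chrome --user-data-dir=/tmp/chrome-test-profile --unsafely-treat-insecure-origin-as-secure="http://local-dev.yourdomain.com"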
For development purposes, I use ngrok. You can get a secure tunnel to localhost, which lets you debug webhooks locally and test mobile apps or APIs with the backend mapped to http or https. It's really simple to install and use.
ngrok official site
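For example, assuming your local server listens on port 8000, a single command exposes it at a public HTTPS URL:

ngrok http 8000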

SSL, detecting if the browser supports Cloud Flare SSL then redirecting

I have implemented CloudFlare Flexible SSL.
The problem is that not all browsers support the type of certificate CloudFlare uses, and as a result some users get warning messages.
I wish to redirect users who can use CloudFlare SSL to the SSL version of my web site.
I initially tried using .htaccess to redirect HTTP to HTTPS; however, I need a way to redirect only the browsers that can use the SSL cert.
CloudFlare provides a list of supported browsers, but it's not accurate: I found that Safari on OS X 10.8 is not supported unless updates are run.
In my software, I simply try to load an HTTPS website; if I get a positive result, I send the user to the HTTPS website. But users who visit my website directly need to be directed to the correct site.
I can use PHP, JavaScript and .htaccess.
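A JavaScript sketch of that probe approach (assumptions: /pixel.png is a hypothetical small asset served from the HTTPS origin, and any load failure is treated as "this browser can't use the cert"):

// Try to load a tiny resource over HTTPS; redirect only if it succeeds.
var probe = new Image();
probe.onload = function () {
  // TLS handshake and load succeeded: this browser accepts the certificate.
  window.location.href = 'https://' + window.location.host + window.location.pathname;
};
probe.onerror = function () {
  // Handshake or load failed: keep the user on plain HTTP.
};
// Cache-bust so a previously cached image can't fake a successful probe.
probe.src = 'https://' + window.location.host + '/pixel.png?' + Date.now();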

CORS with IE11+ Access Denied with SSL to localhost

The Very Short Version: is anybody successfully requesting local resources via AJAX, in IE, over SSL? I can't get past an "access denied" error.
The Longer Version:
I am using AJAX to retrieve JSON from an application that runs a local web service. The web service channel is encrypted so that if the remote site is being served over HTTPS, no "insecure resource on a secure page" errors appear.
So, in the address bar is a remote site of some sort... mysite.com. It is receiving information from https://localhost/.
The web service sets the correct CORS headers, and everything works in Chrome and Firefox. In IE, if I put my https://localhost resource into the address bar, the correct resource is returned and displayed. However, when using AJAX (rather than the address bar), a security setting in IE denies access. This is documented (in part) here:
Access denied in IE 10 and 11 when ajax target is localhost
The only proper solution among the replies is to add the requesting domain (mysite.com in this case) to the trusted sites. This works, but we would prefer not to require user intervention... pointing to a knowledge-base article on how to add a trusted site is hardly a great user experience. The other replies to that question are invalid for the same reasons as below.
Some more stumbling around and I discovered this:
CORS with IE, XMLHttpRequest and ssl (https)
It had a reply containing a wrapper for AJAX requests in IE. That seemed promising, but as it turns out, IE11 has deprecated the XDomainRequest API. This was probably the right thing for Microsoft to do... but now the "hack" workaround of adding a void onProgress handler to the XDR object is no longer an option, and the once-promising workaround wrapper is rendered null and void.
Has anybody come across either:
a) a way to get those requests through without needing to modify the trusted sites in IE? In other words, an updated version of the workaround in the second link?
b) as a "next best" case: a way to prompt the user to add the site to their trusted zone? "mysite.com wishes to be added to your trusted zones. Confirm Yes/No" and have it done, without them actually needing to open up their native settings dialogues and doing it manually?
For security reasons, Internet Explorer's XDomainRequest object blocks access (see #6 here) to the Intranet Zone from the Internet Zone. I would not be surprised to learn that this block was ported into the IE10+ CORS implementation for the XMLHttpRequest object.
One approach which may help is simply to change from localhost to 127.0.0.1, as the latter is treated as Internet Zone rather than Intranet Zone, and as a consequence the zone-crossing is avoided.
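In practice, that just means targeting the IP address in the request URL - a sketch, with a hypothetical /service/data path:

// Use 127.0.0.1 instead of localhost so IE classifies the target as
// Internet Zone and no zone-crossing occurs.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://127.0.0.1/service/data', true);
xhr.onload = function () { console.log(xhr.responseText); };
xhr.onerror = function () { console.error('Request blocked or failed'); };
xhr.send();

Note that the service's certificate must also be valid for 127.0.0.1, not just for localhost, or the TLS handshake will fail.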
However, you should be aware that Internet Explorer 10+ will block all access to the local computer (via any address) when a site is running in Enhanced Protected Mode (EPM) -- see "Loopback blocked" in this post. Currently, IE uses EPM only for Internet sites when run in the Metro/Immersive browsing mode (not in Desktop mode), but this could change in the future.
No, there's no mechanism to show the Zones-Configuration UI from JavaScript or to automatically move a site from one zone to another. However, the fact that you have a local server implies that you are already running code on the client, which means you could use the appropriate API to update the zone mapping on the client. Note that such a change requires that you CLEARLY obtain user permission first, lest your installer be treated as malware by Windows Defender and other security products.
So, in summary, using the IP address should serve as a workaround for many, but not all platforms.
Since those are two different domains, one solution would be to create an application which proxies the requests in the direction you want.
If you have control over the example.com end, and want to support users who bring their own localhost service, this would be harder, as you would have to impose more requirements on what they bring.
If, however, you have control over what runs in localhost, and want to access example.com and have it access the localhost service, set up redirection in your web server of preference, or use a reverse proxy. You could add an endpoint to the same localhost app which doesn't overlap existing paths - for example, route http://localhost/proxy/%1 to http://%1, leaving the rest of localhost alone. Or run a proxy on, say, http://localhost:8080 which performs a similar redirection and can serve example.com from one path and the API from another.
This winds up being a type of "glue" or integration code, which should allow you to mock interactions up to a point.
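A minimal Node.js sketch of that proxy endpoint (an assumption-laden illustration using only the built-in http module, with no hardening): requests to /proxy/<host>/<path> are forwarded to http://<host>/<path>, and every other path is left to the local API.

const http = require('http');

http.createServer((req, res) => {
  const match = req.url.match(/^\/proxy\/([^\/]+)(\/.*)?$/);
  if (!match) {
    // Not a proxy path: hand off to the local API (placeholder response here).
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    return res.end('local API response\n');
  }
  // Forward the request to the named host, preserving the method and body.
  const upstream = http.request(
    { host: match[1], path: match[2] || '/', method: req.method, headers: { host: match[1] } },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  upstream.on('error', () => { res.writeHead(502); res.end('Bad gateway\n'); });
  req.pipe(upstream);
}).listen(8080);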
