As of January, Chrome will warn its users that a site is not secure if it contains either a password or credit-card field and isn't served via HTTPS (see https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html).
This raises a small problem: when you have a web service running locally (for example, the login page of your home router) that is not served over HTTPS, because the certificate would most likely expire before the user updates the device's software, your users will see this warning.
Mocking the password-field seems too hacky and will likely cause problems on mobile devices.
What would be good alternatives to solve this problem without serving the site with HTTPS?
You should not consider this a problem, but a feature.
If you're running the service over HTTP instead of HTTPS, then your users should expect to be warned about it. Allowing any exceptions to Chrome's new rule would be likely to cause uncertainty.
The fact that a site owner is worried a certificate will expire is no excuse: would it not be preferable to use certificates anyway, and instead risk a warning about an expired certificate? That is at least a visible problem that can be fixed fairly easily.
If the new standard implies that a user should be warned about an insecure connection, then hiding that warning means the standard is broken, and that you're providing a false sense of security, which may be worse than no security at all.
If you want to host a page over an unencrypted connection, that's up to you, but you should probably just accept that the warning will be shown.
I am prototyping a simple web app front end that needs to fetch JSON data from my server. The server itself works fine -- I can click on the link, and the JSON data shows up in the browser. But the following simple script fails:
fetch('https://x.x.x.x:8000') // MY URL FAILS
// fetch('https://jsonplaceholder.typicode.com/todos/1') // ALTERNATE URL WORKS
  .then(function () {
    alert("Successful")
  })
  .catch(function () {
    alert("Failure")
  });
I'm completely new to this sort of front-end work (and to JavaScript in general), so I might be overlooking an obvious reason, but the two that come to mind are
my server uses a self-signed certificate for testing purposes; and/or
I'm using a non-standard port.
The first of these possible explanations seems more likely.
Accessing the web page generates a bunch of errors, none of which mean anything to me (except for not finding the favicon).
I will temporarily post the full URL in a comment below, in case anyone else wants to see what happens, but I would delete it once a working solution is suggested.
To answer your question as asked: no, you definitely can't use fetch to force the client (browser) to ignore cert errors. Especially for cross-origin requests (and going from one port to another is cross-origin), that would be a HUGE security hole: it would allow anybody who could gain a man-in-the-middle position on a victim's network (not hard) to use fraudulent certificates to intercept the victim's HTTPS requests and responses and steal information.
You might be able to force server-side JS (in Node or similar) to ignore cert validation errors, since in that case you (hopefully!) control the code the server is running. But it doesn't look like that's what you're doing, and in a web page, somebody else (the server owner) controls what code you (the browser) are running, so you definitely can't let that code turn off important security features!
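For completeness, this is roughly what that looks like server-side. A minimal Node.js sketch using the built-in https module (the URL and port are placeholders); never do this outside of local testing against your own server:

// DANGEROUS: disables certificate validation; for local testing only.
const https = require('https');

https.get(
  'https://127.0.0.1:8000/',     // placeholder URL for your local service
  { rejectUnauthorized: false }, // accept the self-signed certificate
  function (res) {
    let body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { console.log(res.statusCode, body); });
  }
).on('error', console.error);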
Attack scenario, if JS could turn off cert validation:
Suppose you and I both control web servers. I, a malicious attacker, would like to intercept the traffic between your users and your web server. I even have a man-in-the-middle (MitM) network position on some of your users! However, you are of course using TLS (via HTTPS), so I can't decrypt or modify the traffic.
However, your users sometimes connect to my server as well, not knowing it is malicious (maybe I mostly use it to serve relatively innocuous stuff, like a comment system or analytics tools, so lots of sites embed my scripts). My server can tell when a browser requests content from an IP address where I could launch an MitM attack, and serve them malicious scripts.
Now, in the real world, this doesn't matter! Sites don't trust other sites, because of the Same-Origin Policy, a critical browser security feature. My site (or the scripts I serve) can cause your users to submit requests to any other server that I choose, but they can't read the responses (if the other server is cross-origin), and they can't turn off certificate validation so my MitM position is mostly useless.
However, suppose there were a way, as you propose, for scripts to tell the browser "it's OK, just trust this one particular self-signed cert when making this request". This changes everything. My MitM host will generate a self-signed cert (and corresponding private key) for your site, and send the cert to my own web server. When a potential victim loads a script from me, it not only contains instructions to make HTTP requests to your site, it also specifies that the browser should trust the self-signed certificate that my MitM node generated.
The victim's browser would then start the request, attempting to establish a TLS connection to your server. My MitM node would intercept the request and reply with its self-signed certificate. Normally the browser would reject that, but in this case it doesn't, because you created a way to tell browsers to accept a particular self-signed cert. Therefore, the victim's browser trusts my self-signed certificate. The actual request never even makes it to your server. The victim's browser, believing itself to be interacting with the legitimate server (yours) rather than with my MitM host, sends an HTTP request containing secrets such as cookies and/or API keys / bearer tokens / login credentials / etc. My MitM intercepts that (as it's intercepting all traffic), decrypts it (trivially, because it is in fact one end of the TLS tunnel), and can access the victim's account on your server. (My MitM host can also duplicate the responses from your server that the victim would usually see, to keep them unsuspecting. The MitM host can even tamper with these responses, if I want it to mislead the user.)
The usual way to solve this is to install the server's certificate as trusted in the browser (or in the OS). That way, the browser will recognize the certificate's issuer (itself) as valid, and consider the certificate valid.
What happens if you go to https://x.x.x.x:8000/ in the browser directly? If you get a certificate error, well, that's your problem: the browser doesn't trust the certificate of the server hosted on that port. You should have an opportunity to temporarily or permanently trust that certificate (exact details will depend on the browser).
Note that, of course, doing this on your own computer won't fix it for anybody else's computer. They'd need to manually trust your certificate too.
The actual solution is, of course, to install a trusted certificate. Perhaps you should try Let's Encrypt or similar, for a nice free cert that every client will trust without extra shenanigans?
Just had the same problem and stumbled upon the solution by accident. It is possible: just have the user open the self-signed site, click 'Show more', and then 'Accept the risk and continue'. After doing that, fetch requests go through as if nothing had ever been wrong.
It works on both Firefox and Chrome (screenshots omitted).
The only caveats are that you have to do this setup manually, and that Chrome displays 'Not secure' even when the rest of the page is secure.
But if you need HTTPS locally, this works like a charm. Hope this helps the people who came here from Google :)
EDIT:
Also worth mentioning: I tested it on localhost, but it works everywhere.
I have logic on my server that mostly makes curl requests (e.g. accessing social networks). However, some of those sites will soon start blocking my servers' IPs.
I could, of course, use a VPN or deploy multiple servers per location, but that won't be accurate, and some of the networks might still block the user's account.
I am trying to find a creative solution that runs from the user's browser (it is OK to ask for their permission, as it is an action they are explicitly trying to execute), though I am trying to avoid extra installations (e.g. downloadable plugins/extensions or a desktop app).
Is there a way to turn the client's browser into a proxy for my server, and run those curl calls from their machine instead of sending them from my own server (e.g. using WebSockets, polling, etc.)?
It depends on exactly what sort of curl requests you are making. In theory, you could simulate these using an XMLHttpRequest. However, for security reasons these are generally not allowed to access resources hosted on a different site. (Imagine the sort of issues it could cause for instance if visiting any website could cause your browser to start making requests to Facebook to send messages on your behalf.) Basically it will depend on the Cross-origin request policy of the social networks that you are querying. If the requests being sent are intended to be publicly available without authentication then it is possible that your system will work, otherwise it will probably be blocked.
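To make that concrete: the browser-side request itself is trivial, and whether it succeeds is decided entirely by the target site's CORS response headers, not by your code. A sketch (the URL is a placeholder):

fetch('https://social-network.example/api/public-data', { mode: 'cors' })
  .then(function (res) { return res.json(); })
  .then(function (data) { console.log(data); })
  // Without an Access-Control-Allow-Origin header from the server,
  // the browser blocks the response and you end up here instead.
  .catch(function (err) { console.error('Blocked or failed:', err); });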
Recently, I switched the server for my site, and I managed to lose the decrypted SSL key, and I cannot remember the password for the encrypted one.
It turned out that the server had set HSTS on, and now many visitors are unable to load the pages since I don't have a valid SSL cert, and their browsers refuse to connect via http due to the HSTS.
So, I need a way to disable that HSTS from their browsers. Asking them to clear browsing data is a no-go, but I was wondering if I could write a Firefox/Chrome-compatible JavaScript snippet to clear it. (The script would be on a different domain.)
I've been digging around a bit, but haven't found much info on how I should approach the problem, if it is even possible. All other suggestions are welcome too.
HSTS is there to make a trade-off: you take on the responsibility of providing, from now on, a secure SSL connection the browser can count on, and in return the browser will refuse anything but an SSL connection to your domain. It puts an additional burden on you, but increases security for your visitors.
The browser stores this preference in an internal database which cannot be cleared by any website. If it were possible for any site to simply revoke this preference via JavaScript, the whole system would be pointless.
You'll have to manually clear the database and/or remove that specific entry. Every browser does it differently, see http://classically.me/blogs/how-clear-hsts-settings-major-browsers for an overview.
The real solution:
Install a valid cert
Get people to visit your site
Send a new header (see the sketch below)
These days, getting a valid cert is free, or costs less than a sandwich ($8 or so).
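For step 3, the header that undoes HSTS is Strict-Transport-Security: max-age=0, and it must be served over a valid HTTPS connection. A minimal sketch, assuming a plain Node.js server and that key.pem/cert.pem are your new, valid key and certificate:

const https = require('https');
const fs = require('fs');

https.createServer(
  {
    key: fs.readFileSync('key.pem'),   // assumed filename: your new private key
    cert: fs.readFileSync('cert.pem'), // assumed filename: your new, valid certificate
  },
  function (req, res) {
    // max-age=0 tells the browser to delete its stored HSTS entry for this host.
    res.setHeader('Strict-Transport-Security', 'max-age=0');
    res.end('HSTS cleared');
  }
).listen(443); // standard HTTPS port (requires appropriate privileges)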
The Very Short Version: is anybody successfully requesting local resources via AJAX, in IE, over SSL? I cannot get past an "access denied" error.
The Longer Version:
I am using AJAX to retrieve JSON from an application that runs a local web service. The web service channel is encrypted so that if the remote site is being served over HTTPS, no "insecure resource on a secure page" errors appear.
So, in the address bar is a remote site of some sort... mysite.com. It is receiving information from https://localhost/.
The web service is setting correct headers for CORS and everything works in Chrome and Firefox. In IE, if I put my https://localhost resource into the address bar, the correct resource is returned and displayed. However, when using AJAX (not just the address bar), a security setting in IE is denying access. This is documented (in part) here:
Access denied in IE 10 and 11 when ajax target is localhost
The only proper solution offered in the replies is to add the requesting domain (mysite.com in this case) to the trusted sites. This works, but we would prefer to avoid user intervention; pointing to a knowledge-base article on how to add a trusted site is hardly a great user experience. The other replies to that question are invalid for the same reasons as those given below.
Some more stumbling around and I discovered this:
CORS with IE, XMLHttpRequest and ssl (https)
Which had a reply containing a wrapper for AJAX requests in IE. It seemed promising, but as it turns out, IE11 has now deprecated the XDomainRequest API. This was probably the right thing for Microsoft to do... but now the "hack" workaround of adding a void onProgress handler to the XDR object is obviously not an option and the once-promising workaround wrapper is rendered null and void.
Has anybody come across either:
a) a way to get those requests through without needing to modify the trusted sites in IE? In other words, an updated version of the workaround in the second link?
b) as a "next best" case: a way to prompt the user to add the site to their trusted zone? "mysite.com wishes to be added to your trusted zones. Confirm Yes/No" and have it done, without them actually needing to open up their native settings dialogues and doing it manually?
For security reasons, Internet Explorer's XDomainRequest object blocks access (see #6 here) to the Intranet Zone from the Internet Zone. I would not be surprised to learn that this block was ported into the IE10+ CORS implementation for the XMLHttpRequest object.
One approach which may help is to simply change from localhost to 127.0.0.1 as the latter is treated as Internet Zone rather than Intranet Zone and as a consequence the zone-crossing is avoided.
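In other words, the page would request the service by IP instead of by name. A sketch (the endpoint path is a placeholder, and the service's certificate must of course also be valid for 127.0.0.1):

var xhr = new XMLHttpRequest();
// Addressing the service as 127.0.0.1 keeps the request in the Internet
// Zone, so IE does not see an Internet-to-Intranet zone crossing.
xhr.open('GET', 'https://127.0.0.1/service', true);
xhr.onload = function () { console.log(xhr.responseText); };
xhr.onerror = function () { console.error('Request failed'); };
xhr.send();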
However, you should be aware that Internet Explorer 10+ will block all access to the local computer (via any address) when a site is running in Enhanced Protected Mode (EPM)-- see "Loopback blocked" in this post. Currently, IE uses EPM only for Internet sites when run in the Metro/Immersive browsing mode (not in Desktop) but this could change in the future.
No, there's no mechanism to show the Zones-Configuration UI from JavaScript or to automatically move a site from one zone to another. However, the fact that you have a local server implies that you are running code on the client already, which means you could use the appropriate API to update the Zone Mapping on the client. Note that such a change requires that you CLEARLY obtain user permission first, lest your installer be treated as malware by Windows Defender and other security products.
So, in summary, using the IP address should serve as a workaround for many, but not all platforms.
Since those are two different domains, one solution would be to create an application which proxies the requests in the direction you want.
If you have control over the example.com end, and want to support users who bring their own localhost service, this would be harder, as you would have to provide more requirements for what they bring.
If however, you have control over what runs in localhost, and want to access example.com, and have it access the localhost service, set up redirection in your web server of preference, or use a reverse proxy. You could add an endpoint to the same localhost app which doesn't overlap paths, for example, route http://localhost/proxy/%1 to http://%1, leaving the rest of localhost alone. Or, run a proxy on e.g. http://localhost:8080 which performs a similar redirection, and can serve example.com from a path, and the API from another.
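As a rough illustration of that second option, here is a minimal reverse proxy using only Node's core http module (the port and the /proxy/ path scheme are assumptions, not anything standard):

const http = require('http');

http.createServer(function (clientReq, clientRes) {
  // Map /proxy/<host>/<path> to http://<host>/<path>; anything else 404s.
  const match = clientReq.url.match(/^\/proxy\/([^\/]+)(\/.*)?$/);
  if (!match) {
    clientRes.writeHead(404);
    clientRes.end('Not found');
    return;
  }
  const host = match[1];
  const path = match[2] || '/';
  const proxyReq = http.request(
    {
      host: host,
      path: path,
      method: clientReq.method,
      headers: Object.assign({}, clientReq.headers, { host: host }),
    },
    function (proxyRes) {
      // Relay the upstream status, headers, and body back to the client.
      clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
      proxyRes.pipe(clientRes);
    }
  );
  proxyReq.on('error', function () {
    clientRes.writeHead(502);
    clientRes.end('Bad gateway');
  });
  clientReq.pipe(proxyReq);
}).listen(8080);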
This winds up being a type of "glue" or integration code, which should allow you to mock interactions up to a point.
I'm designing a widget which, when clicked upon in a third-party site, will require authentication (username/password) from our server. I tend to see most, if not all, sites that use widgets requiring authentication tend to use a pop-up window rather than an in-page dialog box (such as using an inserted iframe), and I was wondering why this is the case. Are there actual security factors that distinguish an inserted iframe vs a popup window?
In many cases, this is due to efficiency and security:
On many sites, you authenticate yourself over SSL (HTTPS). However, supporting SSL connections has a higher cost than plain HTTP.
Therefore, while you are just a visitor on the site, you use HTTP. When you need to authenticate, you do so from a separate window, usually over SSL, where you can see the address bar (and can verify that you are actually using SSL, check the certificate, and so on).
It is a common pattern that you can also find when visiting online shops: only when you come to pay do you move from HTTP to HTTPS.
There is a problem with this approach (it would be safer to use SSL for the whole connection), but efficiency (which implies lower cost) usually wins. This is especially true for smaller companies.