Recently, I switched the server for my site, and I managed to lose the decrypted SSL key, and I cannot remember the password for the encrypted one.
It turned out that the server had HSTS enabled, and now many visitors are unable to load the pages: since I don't have a valid SSL certificate, their browsers refuse to fall back to plain HTTP because of HSTS.
So I need a way to clear that HSTS setting from their browsers. Asking them to clear their browsing data is a no-go, but I was wondering if I could write a Firefox/Chrome-compatible piece of JavaScript to clear it. (The script would be served from a different domain.)
I've been digging around a bit, but haven't found much info on how I should approach the problem, if it is even possible. All other suggestions are welcome too.
HSTS is a trade-off: you take on the responsibility of providing, from now on, a secure SSL connection the browser can count on, and in return the browser will refuse anything but an SSL connection to your domain. It puts an additional burden on you, but increases security for your visitors.
The browser stores this preference in an internal database which cannot be cleared by any website. If it were possible for any site to simply revoke this preference via JavaScript, the whole mechanism would be pointless.
You'll have to manually clear the database and/or remove that specific entry. Every browser does it differently; see http://classically.me/blogs/how-clear-hsts-settings-major-browsers for an overview.
The real solution:
Install a valid cert
Get people to visit your site
Send a new header
These days, getting a valid cert is free, or costs less than a sandwich ($8 or so).
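A minimal sketch of the "send a new header" step, assuming a Node/Express stack and placeholder certificate paths; adapt it to whatever server you actually run. Once the new, valid certificate is installed, serving Strict-Transport-Security with max-age=0 tells returning browsers to drop the stored HSTS entry (browsers only honour the header over a trusted HTTPS connection):

```js
const express = require('express');
const https = require('https');
const fs = require('fs');

const app = express();

app.use((req, res, next) => {
  // max-age=0 clears the HSTS entry for returning visitors;
  // switch back to a long max-age once the certificate setup is stable.
  res.set('Strict-Transport-Security', 'max-age=0');
  next();
});

app.get('/', (req, res) => res.send('ok'));

https.createServer({
  key: fs.readFileSync('/path/to/new-key.pem'),   // placeholder path
  cert: fs.readFileSync('/path/to/new-cert.pem'), // placeholder path
}, app).listen(443);
```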
Related
As of January, Chrome will warn its users that a site is not secure if it contains either a password or credit-card field and isn't served via HTTPS (see https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html ).
This raises a small problem:
When you have a web service running locally (for example, the web login page of your home router) that is not served over HTTPS, because there is a strong chance the certificate would expire before the user updates the software, your users will see this warning.
Mocking the password-field seems too hacky and will likely cause problems on mobile devices.
What would be good alternatives to solve this problem without serving the site with HTTPS?
You should not consider this a problem, but a feature.
If you're running the service on HTTP instead of HTTPS, then your users should expect to be warned about it. Allowing any exceptions to Chrome's new rule would be likely to cause uncertainty.
The fact that a site owner is worried a certificate will expire is no excuse: Would it not be preferable to use certificates anyway, and rather risk getting a warning about an outdated certificate instead? That is at least a visible problem that can be fixed fairly easily.
If the new standard implies that a user should be warned about an insecure connection, then hiding that warning means the standard is broken, and that you're providing a false sense of security, which may be worse than no security.
If you want to host a page over an unencrypted connection, that's up to you, but you should probably just accept that the warning will be shown.
I've written a Go server with a custom binary WebSocket protocol, and a Dart client. User authentication on the server employs scrypt with the recommended parameters N=16384, r=8, p=1 (salt of length 16 and derived key of length 64); my i7 desktop takes maybe a second or two to crank through authentication on the server side. That's compared to the practically instant authentication of, say, SHA-512.
I had trouble finding scrypt implementations for Dart and while this one works, generating the same hash with the same parameters in a browser (Firefox) takes too long to practically complete. I can get it down to a handful of seconds on the same machine using N=1024 and r<=8 but if I settle on that for compatibility, on the server side, the authentication time is for practical purposes instant again.
Scrypt is great on the server side but I'm wondering if it's practical for a browser client. Admittedly I haven't seen any/many examples of people using scrypt for browser authentication. Should I persevere and tackle the performance (e.g. maybe by using other JavaScript libraries from Dart), or is this a basic limitation at the moment? How low can you wind down the scrypt parameters before you may as well just use more widely available, optimised crypto hashing algos such as SHA?
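For a feel of the cost gap described above, here is a rough sketch using Node's built-in crypto.scrypt (the question's stack is Go/Dart; this is purely illustrative, and the password string is made up):

```js
const crypto = require('crypto');

function timeScrypt(label, N, r, p) {
  const salt = crypto.randomBytes(16);       // 16-byte salt, as in the question
  const start = Date.now();
  crypto.scrypt('correct horse battery staple', salt, 64, { N, r, p }, (err, key) => {
    if (err) throw err;
    console.log(`${label}: ${Date.now() - start} ms, ${key.length}-byte key`);
  });
}

timeScrypt('N=16384, r=8, p=1', 16384, 8, 1); // the recommended server-side parameters
timeScrypt('N=1024,  r=8, p=1', 1024, 8, 1);  // the reduced, browser-friendly set
```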
Use HTTPS. If you're hashing the password in the browser and then sending the hash to the server for comparison, what's to prevent an attacker from simply sniffing the hashed password and hijacking the session by sending the same hash himself?
Even if you come up with an encryption scheme to prevent that, the attacker could simply inject an additional <script> tag with a keylogger via a MITM attack to steal the password before it's encrypted.
Basically, no matter how you cut it, you have to use HTTPS to ensure that your communications are not sniffable and no MITM attack has taken place. And once your connection is already secured over HTTPS, which is encrypted with a (minimum) 128-bit key and would take longer than the known age of the universe to crack, you might as well just use the HTTPS connection to send your password; doing additional client-side encryption of the password is probably not necessary.
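A sketch of that recommendation, assuming a Node/Express server behind TLS (the route name and the in-memory user store are hypothetical): the browser POSTs the plain password over HTTPS and all scrypt work stays on the server, using the parameters from the question.

```js
const crypto = require('crypto');
const express = require('express');

const app = express();
app.use(express.json());

// Normally a database; a hard-coded map of username -> { salt, hash } for illustration.
const users = new Map();

function hashPassword(password, salt, cb) {
  crypto.scrypt(password, salt, 64, { N: 16384, r: 8, p: 1 }, cb);
}

app.post('/login', (req, res) => {
  const user = users.get(req.body.username);
  if (!user) return res.sendStatus(401);
  hashPassword(req.body.password, user.salt, (err, hash) => {
    if (err || !crypto.timingSafeEqual(hash, user.hash)) return res.sendStatus(401);
    res.sendStatus(200); // issue a session cookie or token here
  });
});

app.listen(3000); // in production this would sit behind TLS (https.createServer or a terminating proxy)
```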
@maaartinus ...
I've never thought about not using HTTPS. I'm curious if offloading the password-based key derivation overhead to the client makes any sense.
If I may, I'll come at this problem from a non-web direction and work back to the browser use case. Way back when, I worked with EFT*POS security standards and secure communications for financial transactions, for example the credit-card machine in the supermarket; that's just to establish my grounding on this topic. That said, I think the original question HAS been covered comprehensively. I decided to add a comment to enrich the conversation in this area (it is quite topical).
The procedure is about the conversation between the terminal (iPhone, smartphone, browser, etc.) and the host. Premise: you naturally don't want anyone sniffing your username/password pair. Assume your typical web page or login screen at work. Over the intranet, LAN, WAN and VPN, whatever you type is dispatched from your keyboard to the host; these links may already be encrypted these days. On the web, the browser has two main options: HTTP (clear text) and HTTPS (encrypted). Let's just stick with the (username, password) pair.
Your terminal (such as browser or mobile) needs to be "trusted" by the host (server, phone company, etc).
There's a lot of standard stuff you can (and should) do first, and you can get creative from there on. Think of it as a pyramid. At the bottom are things you can do on your PC; that's the base of the pyramid, and there's loads of good information about it (e.g. from the Electronic Frontier Foundation, EFF). It is about protecting yourself, your data (intangible property) and your rights.
That said here are a few points to consider:
Everything sent via HTTP is clear-text. It can be read and copied.
A hash sent over clear text can be captured and replayed, and the password behind it can be cracked given enough computation. It is just maths.
Even if you use scrypt or another slow method, the hash can still be cracked, given enough time.
If you're on the web, any hash implemented in the browser (terminal/client) is transparent to anyone who can load the web page and the JavaScript code (as ntoskrnl pointed out above).
HTTPS, on the other hand, encrypts everything and protects each message with a keyed MAC. In addition, the keys used are negotiated per conversation, are unique to that session, and are agreed when the session is established. It is a considerably 'better-er' protection over the whole of the message.
The main thing making it better-er in the first instance is the negotiation. The idea overall is that the message protection is based on keys known only to the two end-points.
Once again, that could be cracked if you have enough time, etc. The main thing making it challenging is the to-and-fro of the key negotiation.
Let's back up a little and consider cryptography. The notion is to hide the message in a way that permits the message to be revealed. Think of this as a lock and key, where the door is the procedure/algorithm and your message is the contents of the room.
HTTPS works to separate the lock from the key in a pragmatic fashion, in time and via process.
Whatever is done in the HTTPS room stays in the HTTPS room. You must have the key to enter, poke about and do unwanted stuff. Imho, any extra security should only be considered within an HTTPS space.
There are methods to improve on that foundation. I think of security like a pyramid; these methods sit about 4 or 5 layers above base considerations like the transport protocol.
Such options include:
SMS authentication number to your phone.
Something like a dongle or personalised ID-key.
A physical message, such as a letter (snail mail) or an e-mail, with an authentication number.
Anything else you come up with.
In summary, if your need is, say, 'make it safe', there are many things that can be done. If you can't use HTTPS, hashing passwords locally needs to be managed extremely carefully, and hashes have vulnerabilities. When you are not using HTTPS, anything you can do in the browser is like wet rice paper trying to stave off a sword.
I've searched for this quite a lot but the answers are not always clear.
Is there a solid way to get a user's IP address despite them being behind a proxy, tor, etc?
Preferably using ASP.NET
I just cannot imagine "big sites" like Google or Hotmail/Outlook not having some relatively reliable way of bypassing these things, especially since they (at least Hotmail/Outlook) require you to use JavaScript.
If there was a way to get a user's real IP address when they were using a good proxy or Tor, what would be the point of using either?
You can detect Tor users by checking if their IP address is that of an exit node. There aren't that many. You cannot get their actual IP address without exploiting some browser bug or hoping they use Flash or Java. Most Tor users don't use any browser plugins and disable JavaScript.
Some proxies send X-Forwarded-For headers, so you can catch users using bad proxies. Good ones are indistinguishable from regular users, as they don't send any extra information.
If you are trying to prevent bots, remember that most bots just send and receive HTTP requests. They aren't browsers. Your best bet is detecting bot-like behavior.
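As an illustration of using those hints (exit-node check plus X-Forwarded-For), here is a sketch in Node/Express rather than the ASP.NET the question mentions; the exit-node list is a placeholder that you would refresh periodically from the Tor project's published list, and neither signal should be treated as reliable:

```js
const express = require('express');
const app = express();

// Hypothetical, periodically refreshed set of Tor exit-node IPs.
const torExitNodes = new Set(['203.0.113.7', '198.51.100.23']);

app.use((req, res, next) => {
  // X-Forwarded-For only exists if some proxy chose to add it, and it can be forged,
  // so treat it as a hint rather than the "real" address.
  const forwardedFor = req.headers['x-forwarded-for'];
  const remoteIp = req.socket.remoteAddress;

  req.clientHints = {
    remoteIp,
    forwardedFor: forwardedFor ? forwardedFor.split(',').map(s => s.trim()) : [],
    looksLikeTor: torExitNodes.has(remoteIp),
  };
  next();
});

app.get('/', (req, res) => res.json(req.clientHints));
app.listen(3000);
```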
It's not possible to unmask someone's IP behind a proxy unless you have some relationship with the proxy.
However, some HTTP proxies add an "X-Forwarded-For" header line which identifies the real source IP address.
Hope this helps.
I am using Node.js to write an image upload service. Paying clients will be able to send an image file to the endpoint I have set up on my server. However, when each request comes in, I need to confirm that it is actually a paying client making the request. I thought about having the client give me their domain name and just checking the Referer header, but someone could easily spoof that header and use my service without paying. How do SaaS developers deal with this problem? Is it possible to solve it without requiring my clients to run some server-side code?
Are you building an external image hosting service for websites or is it to share something that HAS to be private and SECURE? If it is the former then read ahead.
Of course, the header can be spoofed. Here's why you should not worry about it:
The alternative is ugly: to build a secure provisioning service, you will have to develop some kind of token system that the website owner implements at his end as well. Chances are, he would not sign up with you because there are simpler alternatives available.
Spoofing has to be done on the client side, and very few "users" will actually do this. Two geeks spoofing headers on their own machines will not make a big difference to you. If they write some proxy or middleware that does this work automatically and many people start using it, it could be a problem, but this is not very likely.
I guess you already know, but since you haven't mentioned it: this is called hotlinking. Google the topic to find more resources.
You cannot authenticate a browser with a referrer header.
If you want to authenticate an individual, then you will likely need a login system that they provide credentials to (username/pwd) and you check those against your allowed user base. If they pass, then you set a certain type of cookie in the browser that indicates they are a legit user. Subsequent requests from this user will contain that cookie which you can check on every request.
The cookie needs to be something that you create and can verify, and that cannot easily be guessed or forged (like a session ID or an encrypted token from your server). You would typically set an expiration on the cookie after some period of time so that the user has to log in again.
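A rough sketch of that signed-token idea, assuming a Node server (the secret, token layout and cookie name are placeholders, not a prescribed scheme): the server HMACs a value it can later verify, so a client cannot forge it without the secret.

```js
const crypto = require('crypto');

const SECRET = process.env.SESSION_SECRET || 'replace-me'; // placeholder secret

function signToken(username, expiresAtMs) {
  const payload = `${username}.${expiresAtMs}`;
  const sig = crypto.createHmac('sha256', SECRET).update(payload).digest('hex');
  return `${payload}.${sig}`;
}

function verifyToken(token) {
  const [username, expiresAt, sig] = String(token).split('.');
  if (!username || !expiresAt || !sig) return null;
  const expected = crypto.createHmac('sha256', SECRET)
    .update(`${username}.${expiresAt}`).digest('hex');
  const sigOk = sig.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
  return sigOk && Date.now() < Number(expiresAt) ? username : null;
}

// After a successful login you might set it as an HttpOnly cookie, e.g. with Express:
//   res.cookie('session', signToken('alice', Date.now() + 3600 * 1000), { httpOnly: true, secure: true });
// and check verifyToken(req.cookies.session) on every subsequent request.
```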
The way to delete cookies in javascript is to set the expiry date to be in the past. Now this doesn't actually delete the cookie, at least in Firefox. It just means the cookie will be deleted on browser close.
This is a problem for us: we have a product that involves archiving web pages from potentially many sites, with all this content stored on our server. And to make sure that pages render properly we include all js as well. However often cookies are set by js, and given that the page is cached on our server, these cookies are set under our domain.
So over time cookies from dozens of archived sites build up under our domain. And eventually the Cookie header exceeds the max content length, resulting in an HTTP 400 error code.
And because our clients are mostly in corporate environments they never reboot their machines or close their browsers: they can be left on for months. So this "soft" delete doesn't work, at least not reliably.
Is there any way to physically remove cookies intra-session in JavaScript? Or, alternatively, is there any way to stop them being set in the first place?
It's not possible. Period. I've been struggling with this for several weeks without finding a solution.
Whoever invented the cookie getter/setter should be %insert_painful_punishment_here%.
Particularly Internet Exploder is a beast when it comes to deleting cookies. I can't remember the exact issue, but I think it involved https and cookie names containing ;.
All I can offer is a workaround: Send a response body with your 400 response, something like 'please restart your browser'.
In addition to setting the expiration in the past, set the value to an empty string. This will at least reduce the size of the cookie immediately.
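A minimal sketch of that combination (an empty value plus a past expiry; the path, and the domain if one was set, must match how the cookie was originally created):

```js
function clearCookie(name, path = '/') {
  // The empty value shrinks the Cookie header immediately; the past expiry
  // marks the cookie for removal. Repeat with an explicit "domain=" attribute
  // if the cookie was originally set with one.
  document.cookie = `${name}=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=${path}`;
}

// Example: clear every cookie visible to this page.
document.cookie.split(';').forEach(c => clearCookie(c.split('=')[0].trim()));
```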
I would think that cookies should be deleted immediately in all browsers. For example, when I log out of a website, Firefox does not require me to close my browser to delete the cookie that shows that I am logged into the site. If this isn't happening, I suggest you look into Firefox bugs and possibly open a new one with them.
In the meantime, I'd look at my web server and see if it is possible to set the max content length to something higher than it already is.
You could overwrite the cookie with a new one.
"It is because we are NOT using iframes that we have this issue. The cached page is being rendered by our server, so any cookies get set under our domain." --OP
If you have no control over the javascript that is setting the cookies (which seems extremely odd, why do you not have control?), you can constantly read and empty the cookie, dumping the data to another larger database (preferably server-side, or perhaps HTML5 client storage).
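A hedged sketch of that "constantly read and empty" approach (the interval and the localStorage key are illustrative; a server-side store would work the same way):

```js
const ARCHIVE_KEY = 'archivedCookies'; // hypothetical localStorage bucket

setInterval(() => {
  if (!document.cookie) return;
  const stash = JSON.parse(localStorage.getItem(ARCHIVE_KEY) || '{}');
  document.cookie.split(';').forEach(entry => {
    const [name, ...rest] = entry.split('=');
    stash[name.trim()] = rest.join('=');            // keep the data for later use
    // Expire the cookie so the Cookie header stays small.
    document.cookie = `${name.trim()}=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/`;
  });
  localStorage.setItem(ARCHIVE_KEY, JSON.stringify(stash));
}, 5000);
```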