I'm wondering what the serious issues are with the following setup:
Username/password login scheme
JavaScript/AJAX requests the salt value from the server (we have established in previous questions that salt is not a secret value)
JavaScript performs an SHA-1 (or similar) hash of the password and salt.
JavaScript/AJAX returns the hash to the server
The server applies another salt/hash on top of the one sent via AJAX.
Transactions are over HTTPS.
I'm concerned about problems that may exist, but I can't convince myself that this is that bad of a setup. Assume that all users need JavaScript enabled, as jQuery is heavily used on the site. It's basically an attempt to add an additional layer of security on top of the plaintext password.
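For concreteness, here's roughly what the client-side steps would look like (a sketch only, assuming a hypothetical /salt endpoint that returns JSON, and using SHA-256 via the Web Crypto API rather than SHA-1):

```javascript
// Sketch of the client-side hashing step. The /salt endpoint and its JSON
// response shape are assumptions for illustration; SHA-256 is used because
// the Web Crypto API exposes it directly.
async function hashPasswordForLogin(username, password) {
  const resp = await fetch(`/salt?user=${encodeURIComponent(username)}`);
  const { salt } = await resp.json();                  // salt is not a secret
  const data = new TextEncoder().encode(salt + password);
  const digest = await crypto.subtle.digest('SHA-256', data);
  // hex-encode the digest before POSTing it to the server,
  // where it gets salted/hashed again before comparison
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}
```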
As always: be very careful about designing cryptographic protocols yourself.
But that being said, I can see the advantage in the scheme. It will protect against the password being revealed through a man-in-the-middle attack, and it will ensure that the server never sees the actual password, thus preventing some inside attacks. On the other hand, it does not protect against man-in-the-browser, phishing, etc.
You might want to read through RFC 2617 about HTTP Digest access authentication. That scheme is similar to what you propose.
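The core of the Digest scheme is the response computation. A sketch of that computation (in Node rather than the browser, since the Web Crypto API has no MD5), following the RFC 2617 field names for the qop="auth" case:

```javascript
// Sketch of the RFC 2617 Digest "response" value (qop="auth" case).
const crypto = require('crypto');
const md5 = s => crypto.createHash('md5').update(s).digest('hex');

function digestResponse({ username, realm, password, method, uri,
                          nonce, nc, cnonce, qop }) {
  const ha1 = md5(`${username}:${realm}:${password}`);
  const ha2 = md5(`${method}:${uri}`);
  return md5(`${ha1}:${nonce}:${nc}:${cnonce}:${qop}:${ha2}`);
}
```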
All that effort of passing salts and hashes between the client and server is already built into the underlying HTTPS/SSL protocol. I would be very surprised if a security layer in javascript is going to help very much. I recommend keeping it simple and use plaintext over SSL on the client-side. Worry about encryption on the server-side.
This doesn't add any additional security. The JavaScript code is present in the client, so the hashing algorithm is known. You gain nothing from doing a client-side hash in this case.
Also, there's no reason why the client should know about the hashing salt. It actually should be a secret value, especially if you're using a shared salt.
I'll 100% disagree with the accepted answer and say that under no circumstances should an original password ever Ever EVER leave the client. It should always be salted and hashed. Always, without exception.
Two reasons...
1. The client should not rely on all the server components and internal networks being TLS. It is quite common for the TLS endpoint to be a load-balancing reverse proxy, which communicates with app servers in plaintext because devops can't be bothered to generate server certs for all their internal servers.
2. Many users are pathologically inclined to use a common password for all of their services. The fact that a server has plaintext passwords, even if only in memory, makes it an attractive target for external attack.
You're not gaining anything. There's no point to a salt if Joe Public can see it by clicking View > Source, and the old maxim about never trusting client input goes double for password hashing.
If you really want to increase security, use a SHA-2 based hash (SHA-224/256/384/512), as SHA-1 has potential vulnerabilities. NIST no longer recommends SHA-1 for applications that are vulnerable to collision attacks (like password hashes).
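For example, a quick sketch in Node swapping in SHA-512 (though for stored passwords, a deliberately slow KDF such as bcrypt, scrypt or PBKDF2 is generally preferred over any single fast hash):

```javascript
// Sketch: the same salt+password hash, but with SHA-512 instead of SHA-1.
const crypto = require('crypto');

function hashWithSalt(salt, password) {
  return crypto.createHash('sha512').update(salt + password).digest('hex');
}

console.log(hashWithSalt('random-salt-value', 'correct horse battery staple'));
```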
I'm building a password manager site using nodejs on the back end. When the user registers and saves a password I encrypt it and then store it in the db, so it's safe. The problem is that I need a safe way to send it from the database and show it to the user when needed. Which is the best way, send it encrypted to the client and decrypt it with a script or decrypt on the back end before sending it? Is https safe enough to protect requests and responses?
Try encrypting and decrypting at the device level, so the only risk you take is exposure of the master password. A good way to do that is with the Node fs module: https://nodejs.org/api/fs.html.
With fs you can write and read data at the device level.
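A minimal sketch of what that could look like, assuming the key is derived locally from the master password with scrypt and never leaves the device (the file layout and field names are made up for illustration):

```javascript
// Sketch: encrypt a vault entry with a key derived from the master password,
// persist it with fs, and read it back. Nothing here is sent over the wire.
const fs = require('fs');
const crypto = require('crypto');

function saveEntry(path, masterPassword, plaintext) {
  const salt = crypto.randomBytes(16);
  const key = crypto.scryptSync(masterPassword, salt, 32);   // local key derivation
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  fs.writeFileSync(path, JSON.stringify({
    salt: salt.toString('hex'),
    iv: iv.toString('hex'),
    tag: cipher.getAuthTag().toString('hex'),
    data: data.toString('hex'),
  }));
}

function loadEntry(path, masterPassword) {
  const rec = JSON.parse(fs.readFileSync(path, 'utf8'));
  const key = crypto.scryptSync(masterPassword, Buffer.from(rec.salt, 'hex'), 32);
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, Buffer.from(rec.iv, 'hex'));
  decipher.setAuthTag(Buffer.from(rec.tag, 'hex'));
  return Buffer.concat([
    decipher.update(Buffer.from(rec.data, 'hex')),
    decipher.final(),
  ]).toString('utf8');
}
```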
When making decisions about cryptography, ask two essential questions:
What is your threat model?
Should you be implementing it yourself or use an existing solution (such as BitWarden, an open-source password manager)?
Your seemingly-simple question is tricky, because Web security in general is a complex topic. In an application such as yours, several layers are involved, and you must decide what to do about each of them in your threat model:
Client/browser
Front-end server
Back-end server (can be the same as the front-end server depending on architecture)
Database
Depending on who controls what, the answers will be different. For example, if you're developing a password manager to be deployed on a company's premises, then it likely won't matter whether you encrypt/decrypt on the "back-end" or on the "front-end", as they'll usually end up being the same host, managed by the same IT people. A compromise of the host can result in injecting malicious code, which can then intercept all passwords, keys, etc. right there in the browser, and it's little help that all crypto is done on the client when the client-side code is controlled by the attacker.
In such cases, what will matter more is policy decisions - e.g. if the passwords sometimes safeguard GDPR-subjected personal records, you may need to implement the principle of least privilege and make the journey of the plaintext passwords as short as possible - whether a server-hosted "site" can ever accomplish this becomes an organizational/legal question, rather than a technical one.
Analyze your threat model carefully - what attack scenarios are you defending against? Do different people own the DB layer and the back-end servers, or could a single person dump all DB data and undetectably replace the client-side JS? What do your defenses protect - data at rest, data in transit? You might find that, depending on your desired security properties, a Web-based password manager is not feasible. On the other hand, it may be that you're after simple, single-tier deployments, in which case your job is easier as you can do crypto on the back-end.
The least you could do, if you decide to roll your own password manager, is to look at existing software and learn from it. Pick something that has been audited (e.g. Bitwarden!), find the audit document, see what pitfalls the original authors had run into.
HTTPS is so heavily scrutinized that you can assume it's safe, as long as you keep OpenSSL on your system updated. Honestly, your password manager's implementation, or perhaps your DB/OS patch level, is more likely to be vulnerable to attack.
But to answer your question: in theory, decrypting on the client side could be safer, but only if the decryption truly happens on the client and the decryption key is never transmitted over the wire.
That way, even if somebody else gains access to the data or taps data in transit (even with HTTPS decrypted), they will not be able to decrypt it because they will not get the decryption key.
After all, that is basically end-to-end encryption, like some messaging apps do, just in an asynchronous fashion.
And why not look at how others do it? For example, 1Password builds on the Secure Remote Password (SRP) protocol and has published a white paper detailing the exact intricacies, so you can definitely use that as a reference.
My friend has an idea about protecting the cookies stored in the browser by encrypting them using a library such as the Stanford Javascript Crypto Library.
Meanwhile, I believe such actions are not possible because JavaScript has no access to the file system.
The question is:
What would be the functionality of the said library?
What does it encrypt? I believe its encryption would be limited to the variables of a JS application and not files on the host.
You're asking
What kind of data could be encrypted using javascript?
and Bergi answered that in the comments:
In general, you can encrypt all data that can be represented in binary
That's true, but this is not what you're actually trying to ask. I believe you're looking for scenarios where crypto libraries are useful in the browser. But more on that a little further down.
I believe its encryption would be limited to the variables of a JS application and not files on the host
Yes and no. Anything that can be accessed by JavaScript can be encrypted. Whether this encryption adds any security is a whole other issue. Values that are accessible through variables in JavaScript code can be encrypted. The same goes for user input, which includes files that the user explicitly opened in a file dialog in order to upload them (example).
Additionally, your JavaScript code has access to the whole file system in Chrome if you really want it.
Here are some scenarios where using Cryptography in JavaScript could make sense, but not all of them are recommended (not exhaustive, but common):
File storage (e.g. Mega), where the symmetric encryption key is never sent to the server but is kept on the client or entered directly by the user. Its security depends on your trust that the service provider doesn't change their own JavaScript and log the key that was used for encryption.
A password manager (e.g. Clipperz) is similar to file storage, but its code is injected into other sites and it must be resilient enough not to blurt out all its secrets. It can use many different cryptographic primitives.
Poor-man's HTTPS (e.g. too many Stack Overflow questions), where the server has its RSA private key and sends the RSA public key over HTTP (sic!) to the browser. The browser can encrypt any data and send it back to the server (maybe also establishing a symmetric key in the process). The server can decrypt the message with its private key and respond. This is sort-of secure as long as there is no man-in-the-middle attacker that simply injects their own JavaScript to copy any browser data to the attacker's server. SJCL implements ElGamal encryption instead of RSA for this use case.
Hashing data before uploading, in order to check for transmission errors or achieve deduplication (no need to upload the file, because somebody else already did so). Hashing is technically in the realm of cryptography, and many libraries do that.
Online calculators (e.g. my authenticated encryption tests), where valid and easy-to-use implementations of algorithms can be consulted directly when implementing the same algorithms in another language. The data is never sent to the server and is encrypted purely in the browser. My "calculator" can be used to test one's own implementation, because it is verified against various test vectors. Others are there to help friends pass hidden messages without proper e-mail encryption.
These should not be done with browser-based crypto:
If you're using only symmetric encryption over HTTP and the exact same key is present at the server and the client, then you have a problem, because the key must be sent in some way from the client to the server or back. If you send the encryption key from the server to the client, or the other way around, you need to encrypt your symmetric encryption key. The easiest way to do this would be to use TLS, but if you use TLS then the data as well as the key are already encrypted, so you don't need to encrypt them yourself. Doing it over plain HTTP doesn't provide any security, just a little bit of obfuscation: any passive attacker (observer) can read your messages. You should read: Javascript Cryptography Considered Harmful
Hashing a password for login is a bad practice. The general consensus is that you need to hash a password many times (PBKDF2, bcrypt, scrypt, Argon2) in order to check whether a user has sent the correct username and password. Some think that if we hash on the client, the password is not sent in the clear over the network and everything is secure. The problem is that if they think that, they are not using HTTPS (which they need). At the same time, the hashed password becomes the new password. And if the server doesn't implement a constant-time comparison (see the sketch after this list), it is trivial to use a timing side-channel attack to log in as any person whose username you know.
JWT for sessions: Part 1 and part 2
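On the constant-time comparison point above, a minimal server-side sketch in Node, assuming both values are hex-encoded digests of the same algorithm:

```javascript
// Sketch: constant-time comparison of a stored hash against a submitted one.
const crypto = require('crypto');

function safeEqual(storedHex, receivedHex) {
  const a = Buffer.from(storedHex, 'hex');
  const b = Buffer.from(receivedHex, 'hex');
  if (a.length !== b.length) return false;   // timingSafeEqual requires equal lengths
  return crypto.timingSafeEqual(a, b);
}
```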
Cookies are in fact accessible via JavaScript, just like the DOM is.
You could encrypt them by running the value you want to store through the encryption algorithm.
Depending on what you want to store and how the encryption/decryption mechanism works this may or may not be a good idea.
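For instance, a sketch using the Web Crypto API rather than SJCL (assuming you already have an AES CryptoKey from somewhere; where that key lives is exactly what decides whether this is security or mere obfuscation):

```javascript
// Sketch: encrypt a value with AES-GCM before storing it in a cookie.
// `key` is assumed to be a CryptoKey, e.g. from crypto.subtle.generateKey().
async function encryptCookieValue(key, value) {
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    key,
    new TextEncoder().encode(value)
  );
  const bytes = new Uint8Array([...iv, ...new Uint8Array(ciphertext)]);
  document.cookie = 'data=' + encodeURIComponent(btoa(String.fromCharCode(...bytes)));
}
```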
I've written a Go server with custom binary websocket protocol, and a Dart client. User authentication on the server employs scrypt and the recommended parameters N=16384, r=8, p=1 (with salt of length 16 and generated key length of 64), my i7 desktop takes maybe a second or two to crank through authentication on the server side. That's compared to practically instant, say, SHA-512 authentication.
I had trouble finding scrypt implementations for Dart and while this one works, generating the same hash with the same parameters in a browser (Firefox) takes too long to practically complete. I can get it down to a handful of seconds on the same machine using N=1024 and r<=8 but if I settle on that for compatibility, on the server side, the authentication time is for practical purposes instant again.
Scrypt is great on the server side, but I'm wondering if it's practical for a browser client. Admittedly I haven't seen any/many examples of people using scrypt for browser authentication. Should I persevere and tackle the performance (e.g. maybe using other JavaScript libraries from Dart), or is this a basic limitation at the moment? How low can you wind down the scrypt parameters before you may as well just use more widely available, optimised crypto hashing algos such as SHA?
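For illustration (in Node rather than Dart or Go), this is the kind of quick measurement that shows the gap between the two parameter sets mentioned above:

```javascript
// Sketch: time scryptSync at the two parameter sets discussed above.
const crypto = require('crypto');

function timeScrypt(N, r, p) {
  const start = process.hrtime.bigint();
  crypto.scryptSync('password', 'somesalt', 64, { N, r, p, maxmem: 256 * 1024 * 1024 });
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`N=${N}, r=${r}, p=${p}: ${ms.toFixed(0)} ms`);
}

timeScrypt(16384, 8, 1);  // the recommended parameters
timeScrypt(1024, 8, 1);   // the weakened parameters that finish quickly in a browser
```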
Use HTTPS. If you're hashing the password in the browser and then sending the hash to the server for comparison, what's to prevent an attacker from simply sniffing the hashed password and hijacking the session by sending the same hash himself?
Even if you come up with an encryption scheme to prevent that, the attacker could simply inject an additional <script> tag with a keylogger via a MITM attack to steal the password before it's encrypted.
Basically no matter how you cut it, you have to use HTTPS to ensure that your communications are not sniffable and no MITM attack has taken place. And once your connection is already secured over HTTPS, which is encrypted with a (minimum) 128-bit key and would take longer than the known age of the universe to crack, you might as well just use the HTTPS connection to send your password and doing additional encryption of the password client-side is probably not necessary.
@maaartinus ...
I've never thought about not using HTTPS. I'm curious if offloading the password-based key derivation overhead to the client makes any sense.
If I may, I'll come at this problem from a non-Web direction and then come back to the browser use case. Way back when, I worked with EFT*POS security standards and secure communications for financial transactions, for example the credit-card machine in the supermarket. That's just to establish my grounding on this topic. That said, I think the original question HAS been covered comprehensively. I decided to add a comment to enrich the conversation in this area (it is quite topical).
The procedure is about a conversation between the terminal (iPhone, smartphone, browser, etc.) and the host. Premise: you naturally don't want anyone sniffing your username/password pairing. Assume your typical web page or login screen at work. Over the intranet, LAN, WAN and VPN, whatever you type is dispatched from your keyboard to the host. These links may already be encrypted these days. The web on the Internet has two main options, via the browser: HTTP (clear text) and HTTPS (encrypted). Let's just stick with the (username, password) pair.
Your terminal (such as browser or mobile) needs to be "trusted" by the host (server, phone company, etc).
There's a lot of standard stuff you can (and should) do first, and you can get creative from there on. Think of it as a pyramid. At the bottom are things you can do on your PC. That's the base of the pyramid. And there's loads of good information about that (e.g. the Electronic Frontier Foundation, EFF); it is about protecting yourself, your data (intangible property) and your rights.
That said here are a few points to consider:
Everything sent via HTTP is clear-text. It can be read and copied.
A hash sent over clear text can be captured and replayed, or brute-forced offline to recover the password. It is just maths and time.
Even if you use scrypt or another slow method, the hash can still be brute-forced, given enough time.
If you're on the web, any hash implemented in the browser (terminal/client) is transparent to anyone who can load the web page and JavaScript code (as pointed out by ntoskrnl above).
HTTPS, on the other hand, encrypts everything and protects each message with a MAC. In addition, the keys used are negotiated per conversation, unique to the session, and agreed at session-establishment time. It is a slightly 'better-er' protection over the whole of the message.
The main thing making it better-er in the first instance is the negotiation. The idea overall is that message protection is based on a key known only to the two end-points.
Once again that can be cracked if you have enough time, etc. The main thing making this challenging is establishing the to-and-fro for the negotiations.
Let's back up a little and consider cryptography. The notion is to hide the message in a way that permits the message to be revealed. Think of this as a lock and key, where the door is the procedure/algorithm and your message is the contents of the room.
HTTPS works to separate the lock from the key in a pragmatic fashion, in time and via process.
Whatever is done in the HTTPS room stays in the HTTPS room. You must have the key to enter, poke about and do unwanted stuff. IMHO, any extra security should only be considered within an HTTPS space.
There are methods to improve on that foundation. I think of security like a pyramid, and these sit about 4 or 5 layers above base considerations like the transport protocol.
Such options include:
SMS authentication number to your phone.
Something like a dongle or personalised ID-key.
A physical message, such as snail mail or e-mail, with an authentication number.
Anything else you come up with.
In summary, if your need says "make it safe", there are many things that can be accomplished. If you can't use HTTPS, hashing password(s) locally needs to be managed extremely carefully. Hashes have vulnerabilities. When you are not using HTTPS, anything you can do in the browser is like wet rice paper trying to stave off a sword.
Sorry if this question has been asked before, but I haven't found the exact question. I have an HTML form that is being submitted in plaintext. I know there is HTTPS with SSL, but I don't want to buy a certificate. Is it possible to encrypt the form data in some way? I am thinking about two things:
Hashing the form data via JavaScript - in fact I only want to send a password, so I don't need to know its original value.
RSA - not sure if it can be implemented in JavaScript.
What would you suggest? Any other variants?
Whatever browser-side encryption you perform will require the use of an encryption key - this will be available to an attacker. So while your password will be encrypted to the casual observer, there is no extra security afforded against a targeted attack.
Hashing is useless in this context because the hashed version of the password becomes the password used to authorise/register the user.
The only solution to this problem is an SSL certificate - they are remarkably cheap!
http://en.gandi.net/ssl/grid (no affiliate link)
You could even use a self-signed certificate (if you can educate your users to trust the browser warning that will appear). As self-signed certs don't have a "certificate authority" to certify that the certificate was legitimately procured (and not, for example, presented by a remote host in a man-in-the-middle attack) browsers (and users) are pretty vociferous in their dismissal of them as "insecure".
There is a good article on Javascript Security at Matasano Security:
Secure delivery of Javascript to browsers is a chicken-egg problem.
Browser Javascript is hostile to cryptography.
The "view-source" transparency of Javascript is illusory.
Until those problems are fixed, Javascript isn't a serious crypto research environment, and suffers for it.
Hashing is a one-way process; you cannot find the original value from a hash.
There is a Blowfish encryption library for JavaScript, but I don't really see the purpose in that since (like Andy stated in his answer) the key you use for encryption will be available in the plaintext that is sent to the client.
The standard (and, by the way, only) way to do this is HTTPS. You can just use your own certificate to enable SSL, no need to buy one. But browsers might warn the visitor that the certificate has not been signed by a known authority.
I am using the basic-auth twitter API (no longer available) to integrate twitter with my blog's commenting system. The problem with this and many other web APIs out there is that they require the user's username and password to do anything useful. I don't want to deal with the hassle and cost of installing a SSL certificate, but I also don't want passwords passed over the wire in clear text.
I guess my general question is: How can I send sensitive data over an insecure channel?
This is my current solution and I'd like to know if there are any holes in it:
Generate a random key on the server (I'm using php).
Save the key in a session and also output the key in a javascript variable.
On form submit, use Triple DES in javascript with the key to encrypt the password.
On the server, decrypt the password using the key from the session and then destroy the session.
The end result is that only the encrypted password is sent over the wire and the key is only used once and never sent with the password. Problem solved?
Generate a random key on the server (I'm using php).
Save the key in a session and also output the key in a javascript variable.
On form submit, use Triple DES in javascript with the key to encrypt the password.
This avoids sending the password in the clear over the wire, but it requires you to send the key in the clear over the wire, which would allow anyone eavesdropping to decode the password.
It's been said before and I'll say it again: don't try to make up your own cryptographic protocols! There are established protocols out there for this kind of thing that have been created, peer reviewed, beat on, hacked on, poked and prodded by professionals, use them! No one person is going to be able to come up with something better than the entire cryptographic and security community working together.
Your method has a flaw - if someone were to intercept the transmission of the key to the user and the user's encrypted reply they could decrypt the reply and obtain the username/password of the user.
However, there is a way to securely send information over an insecure medium, so long as the information is not capable of being modified in transit, known as the Diffie-Hellman key exchange. Basically, two parties are able to compute a shared key used to encrypt the data based on their conversation, yet an observer does not have enough information to deduce the key.
Setting up the conversation between the client and the server can be tricky though, and much more time consuming than simply applying SSL to your site. You don't even have to pay for it - you can generate a self-signed certificate that provides the necessary encryption. This won't protect against man-in-the-middle attacks, but neither will the Diffie-Hellman algorithm.
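For what it's worth, a minimal unauthenticated Diffie-Hellman sketch using Node's crypto module, just to show the shape of the exchange; note that it does nothing about the man-in-the-middle problem mentioned above:

```javascript
// Sketch: both sides agree on a well-known group (RFC 3526 modp14, 2048-bit),
// exchange only public values, and derive the same shared secret.
const crypto = require('crypto');

const alice = crypto.getDiffieHellman('modp14');
const bob = crypto.getDiffieHellman('modp14');
const alicePub = alice.generateKeys();
const bobPub = bob.generateKeys();

const aliceSecret = alice.computeSecret(bobPub);
const bobSecret = bob.computeSecret(alicePub);

// An eavesdropper who sees only alicePub and bobPub cannot compute this secret,
// but an active attacker could run two separate exchanges (MITM).
console.log(aliceSecret.equals(bobSecret)); // true
```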
You don't have to have a certificate on your server; it's up to the client whether they are willing to talk to an unauthenticated server. Key agreement can still be performed to establish a private channel. It wouldn't be safe to send private credentials to an unauthenticated server though, which is why you don't see SSL used this way in practice.
To answer your general question: you just send it. I think your real general question is: “How do I send sensitive data over an insecure channel—and keep it secure?” You can't.
It sounds like you've decided that security isn't worth the $10–20 per month a certificate would cost, and to protect Twitter passwords, that's probably true. So, why spend time to provide the illusion of security? Just make it clear to your users that their password will be sent in the clear and let them make their own choice.
So how is this any more secure? Even though you might have secured browser<>your server, what about the rest of the Internet (your server<>twitter)?
IMHO, it's unacceptable to ask for a username and password of another service and expect people to enter that. And if you care that much - don't integrate them until they get their act straight and re-enable OAuth. (They supported it for a while, but disabled it a few months ago.)
In the mean time, why not offer OpenID? Every Google, Yahoo!, VOX etc. account has one. People might not be aware of it but chances are really, really high that they already have OpenID. Check this list to see what I mean.
When the key is sent between the client and the server it is clear text and subject to interception. Combine that with the encrypted text of the password and the password is decrypted.
Diffie-Hellman is a good solution. If you only need to authenticate them, and not actually transmit the password (because the password is already stored on the server), then you can use HTTP Digest Authentication, or some variation thereof.
APIs and OAuth
Firstly, as others have said, you shouldn't be using a user's password to access the API, you should be getting an OAuth token. This will allow you to act on that user's behalf without needing their password. This is a common approach used by many APIs.
Key Exchange
If you need to solve the more general problem of exchanging information over insecure connections, there are several key exchange protocols as mentioned by other answers.
In general key exchange algorithms are secure from eavesdroppers, but because they do not authenticate the identity of the users, they are vulnerable to man-in-the-middle attacks.
From the Wikipedia page on Diffie-Hellman:
In the original description, the Diffie–Hellman exchange by itself does not provide authentication of the communicating parties and is thus vulnerable to a man-in-the-middle attack. A person in the middle may establish two distinct Diffie–Hellman key exchanges, one with Alice and the other with Bob, effectively masquerading as Alice to Bob, and vice versa, allowing the attacker to decrypt (and read or store) then re-encrypt the messages passed between them. A method to authenticate the communicating parties to each other is generally needed to prevent this type of attack. Variants of Diffie-Hellman, such as STS, may be used instead to avoid these types of attacks.
Even STS is insecure in some cases where an attacker is able to insert their own identity (signing key) in place of either the sender or receiver.
Identity and Authentication
This is exactly the problem SSL is designed to solve, by establishing a hierarchy of 'trusted' signing authorities which have in theory verified who owns a domain name, etc, someone connecting to a website can verify that they are indeed communicating with that domain's server, and not with a man-in-the-middle who has placed themselves in between.
You can create a self-signed certificate which will provide the necessary configuration to encrypt the connection, but will not protect you from man in the middle attacks for the same reason that unauthenticated Diffie-Hellman key exchange will not.
You can get free SSL certificates valid for 1 year from https://www.startssl.com/ - I use them for my personal sites. They're not quite as 'trusted' whatever that means, since they only do automatic checks on people who apply for one, but it's free. There are also services which cost very little (£10/year from 123-Reg in the UK).
I've implemented a different approach
Server: user name and password-hash stored in the database
Server: send a challenge with the form to request the password, store it in the session with a timestamp and the client's IP address
Client: hash the password, concat challenge|username|passwordhash, hash it again and post it to the server
Server: verify timestamp, IP, do the same concatenation/hashing and compare it
This applies to a password transmission. Using it for data means using the final hash as the encryption key for the plain text and generating a random initialization vector transmitted with the cipher text to the server.
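A minimal sketch of the client-side step, assuming SHA-256 via the Web Crypto API and hex encoding (the server repeats the same concatenation and hashing against its stored password hash and compares):

```javascript
// Sketch of the client-side challenge-response step described above.
async function sha256Hex(text) {
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(text));
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}

async function buildLoginResponse(challenge, username, password) {
  const passwordHash = await sha256Hex(password);               // hash the password
  return sha256Hex(`${challenge}|${username}|${passwordHash}`); // hash the concatenation
}
```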
Any comments on this?
The problem with client-side javascript security is that the attacker can modify the javascript in transit to a simple {return input;} thereby rendering your elaborate security moot. Solution: use browser-provided (ie. not transmitted) RSA. From what I know, not available yet.
How can I send sensitive data over an insecure channel?
With a pre-shared secret key. This is what you attempt in your suggested solution, but you can't send that key over the insecure channel. Someone mentioned DH, which will help you negotiate a key. But the other part of what SSL does is provide authentication, to prevent man-in-the-middle attacks so that the client knows they are negotiating a key with the person they intend to communicate with.
Chris Upchurch's advice is really the only good answer there is for 99.99% of engineers - don't do it. Let someone else do it and use their solution (like the guys who wrote the SSL client/server).
I think the ideal solution here would be to get Twitter to support OpenID and then use that.
An ssl certificate that is self-signed doesn't cost money. For a free twitter service, that is probably just fine for users.
To Oli:
In your approach, for example, I'm on the same subnet behind the same router, so I get the same IP as my colleagues at work. I open the same URL in a browser, so the server generates the timestamp for the same IP; then I use a TCP/IP dump to sniff the hashed or non-hashed password from my colleague's connection. I can sniff everything he sends, so I have all the hashes from his form, and the server has a matching timestamp (mine) and the same IP. So I send everything using a POST tool and hey, I'm logged in.
If you don't want to use SSL, why not try some other protocol, such as Kerberos?
A basic overview is here:
http://www.kerberos.org/software/tutorial.html
Or if you want to go somewhat more in depth, see
http://www.hitmill.com/computers/kerberos.html
I have a similar issue (wanting to encrypt data in forms without paying for an SSL certificate), so I did some hunting and found this project: http://www.jcryption.org/
I haven't used it yet, but it looks easy to implement, and I thought I'd share it here in case anyone else is looking for something like it and finds themselves on this page like I did.