Some of the guys here are developing an application which incorporates some 'secure areas' accessible by logging in. In the past, the login form and subsequent 'secure' pages were all plain text transmitted over http, as it's an application that goes out for use on shared servers where there is little chance of being able to use SSL (think WordPress and the like). Most people just shrugged their shoulders as that's all they expected - it's hardly a national bank.
We are now thinking of writing the next version using a JavaScript front end, with the advantage of loading all the images and CSS once, then writing HTML into the DOM thereafter with Ext JS (or maybe jQuery). We'd like to encrypt user input on the client before it is sent to the server, and decrypt server output in the browser before it is rendered to HTML, so as to introduce some sort of security for users. There are also gains to be had in page loading times, as we're only sending gzipped JSON back and forth.
While playing around, we realised that the method we were looking at to encrypt the basic stuff also doubled up as an authentication mechanism for login in the first place.
For simplicity, the flow looks like this (a rough client-side sketch follows the steps):
The user connects to the login page over standard http, where the browser downloads the JavaScript package containing the hashing and encryption algorithms (SHA-256 and AES for example).
User enters username, password and secret into a login form.
The browser JavaScript sends a hash of username and password to the server via AJAX. The secret is only stored in JavaScript and is never sent across the internet.
The server looks up the hash and retrieves username and secret from the database.
The server sends a hash (same algorithm as the browser) of username and secret back to the browser.
The browser JavaScript creates a hash of username and secret and compares it to the hash sent back from the server.
If they are the same, the browser JavaScript encrypts response with secret and sends the message back to the server.
The server decrypts the message with secret to find the expected response and starts a new session.
Subsequent communications are encrypted and decrypted both ways with secret.
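For illustration, here is a rough client-side sketch of the handshake using SJCL (any SHA-256/AES library would do); the concatenation format, variable names and use of sjcl.encrypt for the final step are just placeholders, not a reviewed design:

    // Illustrative only: SJCL provides the SHA-256 and AES pieces mentioned above.
    var loginHash = sjcl.codec.hex.fromBits(
        sjcl.hash.sha256.hash(username + password));      // sent to the server via AJAX

    // ...server replies with serverHash = SHA-256(username + secret)...

    var expectedHash = sjcl.codec.hex.fromBits(
        sjcl.hash.sha256.hash(username + secret));         // secret never leaves the browser

    if (expectedHash === serverHash) {
        // Prove we know secret by encrypting the agreed response with it (AES under the hood).
        var reply = sjcl.encrypt(secret, response);
        // ...POST reply back to the server to start the session...
    }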
There seem to be a few advantages of this type of system, but are we right in thinking:
The user knows they are talking to their server if the server manages to create a hash of username and secret, proving the server knows and understands username and secret.
The server knows the user is genuine if they manage to encrypt response with secret, proving the user knows secret.
At no time is secret ever transmitted in plain text, nor is it possible to determine secret from the hash.
A sniffer will only ever find the 'secure' URL and see compressed hashes and encrypted blobs in the query string. If they send a malformed request to the URL, no response is given. If they somehow manage to guess an appropriate request, they still have to be able to decrypt it.
It all seems quick enough to be imperceptible to the user. Can anyone poke holes in this? We all just assumed we shouldn't be playing with JavaScript encryption!
Don't do this. Please use SSL/TLS. See Javascript Cryptography Considered Harmful.
If you can provide a single SSL site to deliver your JavaScript securely (to avoid the attack mentioned above), then you can use the open-source Forge library to provide cross-domain TLS connections to your other sites after generating self-signed certificates for them. The Forge library also provides other basic crypto if you opt to go in a different direction. Forge has an XMLHttpRequest wrapper that is nearly all JavaScript, with a small piece that leverages Flash's socket API to enable cross-domain communication.
http://digitalbazaar.com/2010/07/20/javascript-tls-1/
https://github.com/digitalbazaar/forge
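For the "other basic crypto" side of Forge, a minimal AES sketch might look like the following; the cipher mode and encodings here are assumptions, so check the Forge docs for the version you use:

    var forge = require('node-forge');   // or the browser bundle

    var key = forge.random.getBytesSync(16);
    var iv  = forge.random.getBytesSync(16);

    // Encrypt a JSON payload with AES-CBC.
    var cipher = forge.cipher.createCipher('AES-CBC', key);
    cipher.start({ iv: iv });
    cipher.update(forge.util.createBuffer(JSON.stringify({ secret: 'payload' }), 'utf8'));
    cipher.finish();
    var encryptedHex = cipher.output.toHex();

    // Decrypt it again with the same key and IV.
    var decipher = forge.cipher.createDecipher('AES-CBC', key);
    decipher.start({ iv: iv });
    decipher.update(forge.util.createBuffer(forge.util.hexToBytes(encryptedHex)));
    decipher.finish();
    var plaintext = decipher.output.toString();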
Related
I'm looking to implement some efficient (i.e. with good performance) logic that does payload signing in our web application. The goal is for the HTML5 client to have a guarantee that the contents of a received payload are indeed those that were generated by our backend.
We don't want to do payload hash generation with shared salt because the user can easily open the HTML5 source and find the salt phrase.
We have implemented RSA signing for now, where our backend adds a payload signature using its private key and our HTML5 client validates it using its baked-in public key. However, the signature generation process takes 250 ms (for a relatively small payload), and due to the nature of the signed requests this amount of time is unacceptable.
The only other idea is to generate a shared secret at runtime every time a client initializes its session with the backend. The secret, however, can't be sent in plaintext form, so it seems we're going to have to implement a Diffie-Hellman exchange mechanism, something we'd like to avoid if possible or automate with existing libraries.
Remember that the secrecy and encryption need to be done at the application layer, due to the nature of how we sell our product. We're not looking to encrypt our traffic; that is something our customers may or may not implement (since it's an intranet application). However, we have to avoid exposing anything related to our license-checking mechanisms, etc., to them. The backend is not cloud based and is not controlled by us; it is installed on the customers' machines, on premises.
The frontend is JavaScript and the backend is Java.
Note that a Diffie-Hellman exchange is not protected against MITM attacks; therefore, not encrypting the traffic means that you need to authenticate the DH data coming from the server. This is why a web server using a DH-based cipher suite signs the DH elements sent over the network with the private key of its server certificate, so that the client can check that those elements really come from the server it wants to connect to. Those elements are public but need to be signed.
What you call "payload hash generation with shared salt" is a keyed-hash message authentication code (HMAC), so it is based on a shared secret, as you noticed, and since you do not want to use this mechanism, it means that you do not trust the client. Therefore, you have to use asymmetric cryptography to sign your payload.
Signing a server payload with an asymmetric algorithm means that you first need to let the server share a public key with the client. Since you do not encrypt data between the client and the server, you need to deploy the server's public key inside the client source code.
You talk about the signature generation process, but the signature check process on the client side is also very important in your case, because the total time the user has to wait is the sum of the time to sign and the time to check the signature (moreover, the signature can often be computed in advance on the server if the data to sign is not dynamically generated, but the verification can never be anticipated). So you need a fast way of checking a signature on the client side. First, sign a hash, not the whole payload. Then choose the fastest asymmetric signature algorithm available in your development environment on the client side. Note that checking an RSA signature is faster than checking a DSA or ECDSA one for key lengths corresponding to the same security level, so you should stay with RSA.
Everything up to this point may not help you much! There is, however, a way to improve performance while keeping RSA for signing and verification, and it is much the same trick SSL/TLS uses to speed up browsers downloading multiple pages or objects from the same server: a session cache. You share a common secret for a specific session with one specific user, and never reuse that secret for other sessions. When the user connects for the first time, use RSA only once, to exchange an ephemeral shared secret or to exchange DH material from which this shared secret is created. Then, each time the server needs to sign an object, it creates a keyed-hash message authentication code with this session-specific secret. If the user then finds the secret, for instance in the debug tools of his browser, it's not a problem: the secret is only there to let him verify that something coming from the server has not been altered, and he cannot use it to tamper with exchanges between the server and other users.
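On the client side, verifying such a keyed hash is cheap. A minimal sketch with the browser's Web Crypto API (assuming the session secret and tag arrive as byte arrays; names are illustrative):

    // sessionSecret: Uint8Array exchanged once at session start (RSA or signed DH).
    // payloadBytes / tagBytes: the payload and the HMAC tag the server attached to it.
    async function verifyPayload(sessionSecret, payloadBytes, tagBytes) {
        const key = await crypto.subtle.importKey(
            'raw', sessionSecret,
            { name: 'HMAC', hash: 'SHA-256' },
            false, ['verify']);
        // Resolves to true only if the tag was produced with the same session secret.
        return crypto.subtle.verify('HMAC', key, tagBytes, payloadBytes);
    }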
We ended up using TweetNaCl on both the client and the server side. The library provides a very easy and fast way to do a DH-like shared-secret exchange without going through a custom implementation. With an ephemeral shared secret we can easily generate keyed hashes instead of signatures for our payloads, dropping from 250 ms to 10 μs. Also, RSA-signing the initial DH exchange is important, and it is the only place we still use RSA.
Please read AlexandreFenyo's answer above for the proper theory on how to handle such cases.
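For reference, a rough sketch of the exchange with tweetnacl-js; the keyed-hash construction below (SHA-512 over sharedKey || payload) is illustrative rather than the exact scheme we shipped:

    const nacl = require('tweetnacl');

    // Each side creates an ephemeral key pair and exchanges public keys
    // (the exchange itself is RSA-signed, as mentioned above).
    const client = nacl.box.keyPair();
    const server = nacl.box.keyPair();          // lives on the backend in reality

    // Both ends derive the same 32-byte secret; it never crosses the wire.
    const sharedKey = nacl.box.before(server.publicKey, client.secretKey);

    // Cheap keyed hash over a payload instead of a 250 ms RSA signature.
    function tagPayload(payloadBytes) {
        const buf = new Uint8Array(sharedKey.length + payloadBytes.length);
        buf.set(sharedKey);
        buf.set(payloadBytes, sharedKey.length);
        return nacl.hash(buf);                  // SHA-512; a proper HMAC would be stricter
    }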
As the title says.
Environment: passwords are already hashed in the database, and the connection uses HTTPS.
My question is simple: I have seen some websites that use SSL but also hash/encrypt the password in the frontend (client browser) when the form is submitted.
Is that required?
In my mind, since the website is using SSL, there is no need to encrypt the password in the frontend (client browser). Whatever the frontend does, a hacker can do the same by using the same library (the JavaScript hashing algorithm the client browser imported) to produce and send the token. Even adding a salt is pointless; it is just an extra step for the hacker to reproduce.
Unless the salt comes from somewhere else, not from the same source (the page rendered by the server), for example from a mobile device using an OTP as the salt, that could work.
Otherwise I don't think encryption in the frontend (client browser) is useful.
Am I correct, or have I missed something?
The problem is, a website doesn't have any alternative but to trust the HTTPS/SSL connection. Whatever encryption you do on the client side (browser) will be done in JavaScript, and this script must first be sent to the client. A man-in-the-middle can just do the same as your client does, or he can simply remove the whole script.
It is the same problem you have when you and your colleague try to invent a secret language while the bad guy is listening. If you do not already share a secret, this is impossible. An SSL certificate solves this problem, because browsers have a built-in list of root certificates, which acts as the already shared secret.
The situation is a bit different for apps with a client and a server part. There you could install a secret key with your app, and based on this already shared secret you can establish a secure connection.
So the short answer is: yes, it is OK to send the password in plaintext, as long as the connection is encrypted with HTTPS/SSL.
I just read about the Stanford Javascript Crypto Library (jsfiddle example), which supports SHA-256, AES, and other standard encryption schemes entirely in JavaScript. The library seems very nifty, but I don't know of a reasonable use case for it.
As some questions have already pointed out, client side encryption is not a safe way to pass secure data to a server. HTTPS should be used instead. So, are there any projects that would benefit from or require client side encryption?
Use Case 1
How about local storage? You might want to store some data, but encrypt it so that other users of the computer cannot access it?
For example:
User connects to server over HTTPS.
Server authenticates user.
Server serves an encryption password specific to this user.
User does some stuff locally.
Some data is stored locally (encrypted with the password).
User wanders off.
User comes back to site at later stage.
User connects over HTTPS.
Server authenticates user.
Server serves the user's encryption password.
Client-side JS uses encryption password to decrypt local data.
User does something or other locally with their now-decrypted, in-memory local data.
This could be useful in cases where you have a fat client, with lots of (sensitive) data that needs to be used across sessions, where serving the data from the server is infeasible due to size. I can't think of that many instances where this would apply...
It could also be useful in cases where the user of the application generates sensitive data and that data does not need to (or shouldn't) ever be sent to (or stored on) the server.
For an applied example, you could store the user's credit card details locally, encrypted, and use JS to auto-fill a form with them. You could have done this by storing the data server side instead and serving a pre-populated form that way, but with this approach you don't have to store their credit card details on the server (about which, in some countries, there are strict laws). Obviously, it's debatable whether storing credit card details encrypted on the user's machine is more or less of a security risk than storing them server side.
There's quite probably a better applied example...
I don't know of any existing project which uses this technique.
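As a sketch of what the client-side part could look like with SJCL (encryptionPassword being the per-user password served after login in the steps above; the function names are made up):

    // Encrypt before persisting: sjcl.encrypt derives an AES key from the password
    // and returns a JSON string containing salt, IV and ciphertext.
    function saveLocal(storageKey, sensitiveObject, encryptionPassword) {
        localStorage.setItem(storageKey,
            sjcl.encrypt(encryptionPassword, JSON.stringify(sensitiveObject)));
    }

    // Decrypt after the user re-authenticates and the password is served again.
    function loadLocal(storageKey, encryptionPassword) {
        var blob = localStorage.getItem(storageKey);
        return blob ? JSON.parse(sjcl.decrypt(encryptionPassword, blob)) : null;
    }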
Use Case 2
How about for performance improvements over HTTPS, facilitated via password sharing?
For example:
User connects to server over HTTPS.
Server authenticates user.
Server serves an encryption password specific to this user.
Server then redirects to HTTP (which has much less of an overhead than HTTPS, and so will be much better in terms of performance).
Because both the server and the client have the encryption password (and that password was shared over a secure connection), they can now both send and receive securely encrypted sensitive data, without the overhead of encrypting / decrypting entire requests with HTTPS. This means that the server could serve a web page where only the sensitive parts of it are encrypted. The client could then decrypt the encrypted parts.
This use case is probably not all that worthwhile, because HTTPS generally has acceptable performance levels, but would help if you need to squeeze out a bit more speed.
Use Case 3
Host-proof storage. You can encrypt data client side and then send it to the server. The server can store the data and share it, but without knowing the client's private key, it cannot decrypt it. This is thought to be the basis for services such as LastPass.
Like anything on the client, you can use obfuscation to make things more difficult for casual users to peek inside, but since the client would also need to have a copy of the decryptor there's nothing to stop the user from using the decryptor themselves either.
JavaScript is an insecure environment, period.
One use that comes to mind is host-proofing. That is where you want to store the data on the server or store and forward through the server but not give the server access to the data.
The client can encrypt the data prior to transmission to the server and keep the private key or at least the password for the private key locally.
I believe that this is the basis for services such as LastPass.
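A host-proof sketch using the Web Crypto API: the master password never leaves the browser, so the server only ever stores an opaque blob. The PBKDF2 iteration count and AES-GCM choice here are assumptions for illustration, not LastPass's actual design:

    // Derive an AES key from the master password; only salt, IV and ciphertext go to the server.
    async function deriveKey(masterPassword, salt) {
        const material = await crypto.subtle.importKey(
            'raw', new TextEncoder().encode(masterPassword),
            'PBKDF2', false, ['deriveKey']);
        return crypto.subtle.deriveKey(
            { name: 'PBKDF2', salt: salt, iterations: 100000, hash: 'SHA-256' },
            material, { name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt']);
    }

    async function encryptForUpload(masterPassword, plaintext) {
        const salt = crypto.getRandomValues(new Uint8Array(16));
        const iv = crypto.getRandomValues(new Uint8Array(12));
        const key = await deriveKey(masterPassword, salt);
        const ciphertext = await crypto.subtle.encrypt(
            { name: 'AES-GCM', iv: iv }, key, new TextEncoder().encode(plaintext));
        return { salt: salt, iv: iv, ciphertext: ciphertext };   // store these server side
    }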
My security knowledge is kind of limited, but I might learn something. I'm planning to create an AJAX application where I encrypt/decrypt passwords client-side with a typed master password,
using a JavaScript AES library, and then send/retrieve the encrypted data to/from Google App Engine (user authenticated). I actually found a project with the same idea: http://code.google.com/p/safety-vault/
In my mind, as long as I keep my local computer secure (keyloggers), this should be quite secure, or am I missing something?
As long as you use SSL for the webapp, this should be fine. Without SSL, an attacker could modify the page to insert some Javascript that sends them your password when you type it.
You might want to reconsider your threat model, though. Do you trust the server? If not, you shouldn't trust it to not send you a page that captures your master password when you enter it. If you do, you shouldn't have any qualms in sending your master password to the server.
There is a problem here, as I assume at some point you're going to have to send your master password to the browser client? If you have the master password, then you can decrypt the stream you send...
Use HTTPS, it's what it was designed for.
You are effectively trusting Google App Engine employees, and transitively the entire trust chain behind them, not to steal your passwords. Encrypting client side doesn't mean anything if you are executing JavaScript code the server sends you; furthermore, if you don't have HTTPS implemented properly, it's trivial for someone to mount a man-in-the-middle attack and steal your passwords as they are transmitted. Just store the passwords locally, or encrypt them with a well-known tool like GPG and upload them.
Hey everyone, I am researching a project where we would need to keep a value encrypted from the client all the way to a black box system without decrypting it at any point in between. We are using SSL between the browser and web server, but the values are automatically decrypted at the web server, which is what we need to keep from happening. We need to be able to pass it through the web server (still encrypted) and through other back end systems until it hits its final destination where it would be decrypted.
So my question is what options are available to us for maintaining an encrypted state for a value from the browser back, without decrypting it anywhere along the way?
Have you thought about doing a simple RSA encryption on the values and sending that through the system? You will need to make sure the clients have the public key with which to encrypt the data, but that would be easy and secure enough to pass around.
To my knowledge, most libraries out there support RSA. A nice demo of how to do it purely in JavaScript can be found here.
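As an alternative to that library, current browsers can do this natively with the Web Crypto API. A hedged sketch, assuming the black box's public key is available to the page in SPKI/DER form:

    // Encrypt a value so that only the holder of the matching private key
    // (the black box) can decrypt it, no matter how many hops it passes through.
    async function encryptForBlackBox(publicKeyDer, value) {
        const publicKey = await crypto.subtle.importKey(
            'spki', publicKeyDer,
            { name: 'RSA-OAEP', hash: 'SHA-256' },
            false, ['encrypt']);
        return crypto.subtle.encrypt({ name: 'RSA-OAEP' }, publicKey,
            new TextEncoder().encode(value));
    }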
You'll want to take a look at public-key encryption. SSL protects your session (browser <-> server) but not the full transport. I'd suggest encrypting your data once it's received from the client, then sending the encrypted data all the way in.
Here's a terrible diagram outlining the flow of data:

            client browser       web server       random server       blackbox
    route   <------- SSL ------->  <----------- not encrypted ----------->
    data                          *---------- PGP/GPG encrypted ---------->
Basically your data is encrypted via SSL to the web server, where it is PGP/GPG encrypted, then sent downstream. SSL doesn't matter beyond that point (or at least, isn't the primary form of encryption).
Unless you can guarantee JavaScript in your environment, it may be better to encrypt at the web server to make sure your data is secure if the user has JavaScript off for some reason.
If you use a binary type in your database, the web server should send it as-is. Your client can then encrypt the data before inserting it, and would then have to decrypt the data after fetching it. Neither the web server nor the database server itself would be able to view the data.
The black box system, by definition, can't decrypt the data unless it was built to do that. I'd suggest discussing the problem with the developers of the black box system.