I'm looking to implement some efficient (i.e. with good performance) logic that does payload signing in our web application. The goal is for the HTML5 client to have a guarantee that the contents of a received payload are indeed those that were generated by our backend.
We don't want to do payload hash generation with shared salt because the user can easily open the HTML5 source and find the salt phrase.
We have implemented RSA signing for now, where our backend adds a payload signature using its Private Key and our HTML5 client validates it using its baked-in Public Key. However, the signature generation process takes 250ms (for a relatively small payload), and due to the nature of the signed requests this amount of time is unacceptable.
The only other idea is to generate a shared secret at runtime every time a client initializes its session with the backend. The secret however can't be sent in plaintext form so it seems we're going to have to implement a Diffie-Hellman exchange mechanism, something we'd like to avoid if possible or automate with existing libraries.
Remember that the secrecy and encryption need to be done at the Application layer, due to the nature of how we sell our product. We're not looking to encrypt our traffic; that is something our customers may or may not implement (since it's an intranet application). However, we have to avoid exposing anything related to our license-checking mechanisms etc. to them. The backend is not cloud based and is not controlled by us, but installed on the customers' machines, on premises.
Frontend is Javascript and backend is Java.
Note that the Diffie-Hellman exchange mechanism is not protected against MITM attacks; therefore, not encrypting traffic means that you need to authenticate the DH data coming from the server. This is why a web server using a DH-based cipher suite signs the DH elements sent over the network with the private key of its server certificate, so the client can check that those elements really come from the server it wants to connect to. Those elements are public but need to be signed.
What you call "payload hash generation with shared salt" is a keyed-hash message authentication code, so it is based on a shared secret, as you noticed, and since you do not want to use this mechanism, it means that you do not trust the client. Therefore, you have to use asymmetric cryptography to sign your payload.
Signing a server payload with an asymmetric algorithm means that you first need to let the server share a public key with the client. Since you do not encrypt data between the client and the server, you need to deploy the server public key inside the client source code.
You talk about the signature generation process, but the signature check process on the client side is also very important in your case, because the total time the user has to wait for the result is the sum of the time to sign and the time to check the signature (moreover, the signature can often be computed in advance on the server, if the data to sign is not dynamically generated, but the verification can never be anticipated). So you need a rapid way of checking a signature on the client side. First, sign a hash, not the whole payload. Then choose the fastest asymmetric signature algorithm that is available in your development environment on the client side. Note that checking an RSA signature is faster than checking a DSA or ECDSA one, for key lengths corresponding to the same security level. So you should stay with RSA.
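As an illustration, here is a minimal hash-then-sign sketch, assuming Node's built-in crypto module on the server side (in the browser the verification would typically go through the Web Crypto API, but the idea is identical):

    const crypto = require('crypto');

    // One-time setup: generate an RSA key pair (in practice the public key
    // is baked into the client).
    const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
      modulusLength: 2048,
    });

    const payload = Buffer.from('{"license":"valid"}'); // illustrative payload

    // Sign: crypto.sign hashes the payload with SHA-256 and signs the digest,
    // so the expensive RSA operation runs on a short hash, not the raw data.
    const signature = crypto.sign('sha256', payload, privateKey);

    // Verify (receiving side): recompute the hash and check the signature.
    const ok = crypto.verify('sha256', payload, publicKey, signature);
    console.log(ok); // true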
Everything up to this point may not help you so much! Now, there is a way to improve performance while using RSA to sign and verify signatures, and it is essentially the same one SSL/TLS uses to improve browser performance when downloading multiple pages or other objects from the same server: use a session cache. You share a common secret for a specific session with one specific user. Never use this common secret for other sessions. When the user connects for the first time, use RSA only once, to exchange an ephemeral shared secret or to exchange DH material that creates this shared secret. Then, each time the server needs to sign an object, it creates a keyed-hash message authentication code with this specific secret. Therefore, if the user finds the secret, for instance using the debug mode of his browser, it's not a problem: this secret is only there to help him verify that something coming from the server has not been altered. The user cannot use this secret to alter data exchanges between the server and other users.
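A minimal sketch of that per-session keyed-hash idea, assuming Node's crypto module (the secret and payload names are illustrative; the secret itself must first be exchanged under RSA as described above):

    const crypto = require('crypto');

    // Exchanged once per session, protected by RSA (or RSA-signed DH).
    const sessionSecret = crypto.randomBytes(32);

    // Per payload: a keyed hash instead of an expensive RSA signature.
    function macPayload(payload) {
      return crypto.createHmac('sha256', sessionSecret)
                   .update(payload)
                   .digest('hex');
    }

    const tag = macPayload('{"data":"..."}');
    // The client holds the same sessionSecret, recomputes the HMAC and
    // compares it with the tag attached to the payload.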
We ended up using TweetNaCl on both the client and the server side. The library provides a very easy and fast way to do a DH-like shared-secret exchange without going through a custom implementation. With an ephemeral shared secret we can easily generate hashes instead of signatures for our payloads, dropping from 250ms to 10μs. Also, RSA-signing the initial DH exchange is important, and it is the only place we use RSA.
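For reference, a rough sketch of the exchange with tweetnacl-js, together with the companion tweetnacl-auth package for the keyed hash (both sides are shown in one snippet for brevity; in production the server's public key must be RSA-signed as noted):

    const nacl = require('tweetnacl');
    const auth = require('tweetnacl-auth');

    // Ephemeral per-session key pairs (Curve25519).
    const server = nacl.box.keyPair();
    const client = nacl.box.keyPair();

    // Each side derives the same shared secret from its own secret key
    // and the peer's public key.
    const serverShared = nacl.box.before(client.publicKey, server.secretKey);
    const clientShared = nacl.box.before(server.publicKey, client.secretKey);

    // Server: authenticate a payload with the shared secret (HMAC-SHA-512-256).
    const payload = new TextEncoder().encode('{"license":"valid"}');
    const tag = auth(payload, serverShared);

    // Client: recompute and compare in constant time.
    const ok = nacl.verify(tag, auth(payload, clientShared));
    console.log(ok); // true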
Please read Alexandre Fenyo's answer above for the proper theory on how such cases are usually handled.
Related
I've recently started using modern front end technologies like React/Angular and as a result have started using tools like JSON Server to recreate dummy restful db interactions.
My understanding is that most rest api's authenticate via some kind of token and secret that is either passed as part of the url or as a header. This seems fine for retrieving data, but is it not risky exposing these login credentials in a front end language like JS when writing is possible?
My thinking is that all it would take is a simple view source for somebody to steal my token/secret and potentially start populating my db with data.
In the problem that you describe, the client (browser) has the login credentials because the server provides them. There is no "exposing": the credentials are already exposed. Exposing your credentials to every client means that there is no security.
When we talk about security, we consider the client to be the browser, not the real person operating it. As you said, the real person can access all the browser's data.
To secure your API, the secret key must be kept secret. This means that each client has a different key and uses it to get their data/services from your RESTful server.
In a simple scenario this key can be used/managed like the session id.
The client should first pass through an authorization process (login maybe) and then a temporary key can be generated for the client's session.
Generally, a key is converted to rights. If every client by default has the key, everyone has the default rights, so you may also remove the key and set the default rights to every request.
A client that you don't want to have full access to your db should have a key that gives him limited access to your db.
On the other hand, if the client itself can keep the key secret, this is secure: for example, PHP code on a server that uses the secret key to access your API.
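A rough sketch of issuing such a temporary key, assuming an Express-style server (the route handlers, store and rights values are illustrative):

    const crypto = require('crypto');
    const sessions = new Map(); // illustrative in-memory store

    // After a successful login, hand out a random temporary key.
    function login(req, res) {
      const key = crypto.randomBytes(32).toString('hex');
      sessions.set(key, { user: req.body.username, rights: 'read-only' });
      res.json({ apiKey: key });
    }

    // Every API request must present the key; rights are looked up server side.
    function authorize(req, res, next) {
      const session = sessions.get(req.get('X-Api-Key'));
      if (!session) return res.status(401).end();
      req.rights = session.rights;
      next();
    }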
My friend has an idea about protecting the cookies stored in the browser by encrypting them with a library such as the Stanford Javascript Crypto Library.
Meanwhile, I believe such actions are not possible because JavaScript has no access to the file system.
The question is:
What would be the functionality of said library?
What does it encrypt? I believe its encryption would be limited to the variables of a JS application and not to files on the host.
You're asking
What kind of data could be encrypted using javascript?
and Bergi answered that in the comments:
In general, you can encrypt all data that can be represented in binary
That's true, but this is not what you're actually trying to ask. I believe you're looking for scenarios where crypto libraries are useful in the browser. But more on that a little further down.
I believe its encryption would be limited to the variables of a JS application and not to files on the host
Yes and no. Anything that can be accessed by JavaScript can be encrypted. Whether this encryption adds any security is a whole other issue. Values that are accessible through variables in JavaScript code can be encrypted. The same goes for user input, which includes files that the user explicitly opened in a file dialog in order to upload them (example).
Additionally, your JavaScript code has access to the whole file system in Chrome if you really want it.
Here are some scenarios where using Cryptography in JavaScript could make sense, but not all of them are recommended (not exhaustive, but common):
File storage (e.g. Mega) where the symmetric encryption key is never sent to the server but kept on the client or is directly entered by the user. Its security depends on your trust that the service provider doesn't change their own JavaScript and log the key that was used for encryption.
Password managers (e.g. Clipperz), which are similar to file storage, but whose code is injected into other sites and must be resilient so as not to blurt out all their secrets. They can use many different cryptographic primitives.
Poor-man's HTTPS (e.g. too many Stack Overflow questions) where the server has its RSA private key and sends the RSA public key over HTTP (sic!) to the browser. The browser can encrypt any data and send it back to the server (maybe also establishing a symmetric key in the process). The server can decrypt the message with its private key and respond. This is sort-of secure as long as there is no man-in-the-middle attacker who simply injects his own JavaScript that copies any browser data to the attacker's server. SJCL implements ElGamal encryption instead of RSA for this use case.
Hashing data before uploading in order to check for transmission errors or achieve deduplication (no need to upload the file, because somebody else already did so). Hashing is technically in the realm of cryptography, and many libraries do that.
Online calculators (e.g. my authenticated encryption tests) where valid and easy-to-use implementations of algorithms can be used directly, for instance as a reference when implementing the same algorithms in another language. The data is never sent to the server and is encrypted purely in the browser. My "calculator" can be used to test one's own implementation, because it is verified by various test vectors. Others are there to help friends pass hidden messages without proper e-mail encryption.
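For a concrete feel, SJCL's convenience API covers the symmetric part of the scenarios above in two calls (the password here stands in for a user-supplied or server-delivered key):

    // SJCL's high-level API: PBKDF2 key derivation + AES under the hood.
    const ciphertext = sjcl.encrypt('correct horse battery staple', 'secret note');

    // ciphertext is a JSON string bundling the salt, IV and encrypted data.
    const plaintext = sjcl.decrypt('correct horse battery staple', ciphertext);
    console.log(plaintext); // "secret note"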
These should not be done with browser-based crypto:
If you're using only symmetric encryption over HTTP and the exact same key is present at the server and the client, then you have a problem, because the key must be sent in some way from the client to the server or back. If you send the encryption key from the server to the client or the other way around, you need to encrypt your symmetric encryption key. The easiest way to do this would be to use TLS. If you use TLS, then the data as well as the key are encrypted, so you don't need to encrypt them yourself. Doing it without TLS doesn't provide any security, just a little bit of obfuscation: any passive attacker (observer) can read your messages. You should read: Javascript Cryptography Considered Harmful
Hashing a password for login is a bad practice. The general consensus is that you need to hash a password many times (PBKDF2, bcrypt, scrypt, Argon2) in order to check whether a user has sent the correct username and password. Some think that if we hash on the client, the password is not sent in the clear over the network and everything is secure. The problem is that if they think that, they are not using HTTPS (which they need). At the same time, the hashed password becomes the new password. If the server doesn't implement a constant-time comparison, it is trivial to use a timing side-channel attack to log in as any person whose username you know.
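For the server side of that last point, a minimal sketch of a constant-time comparison, assuming Node's crypto module:

    const crypto = require('crypto');

    // Compare a client-supplied digest against the stored one without
    // leaking, via timing, where the first mismatching byte is.
    function hashesMatch(receivedHex, storedHex) {
      const a = Buffer.from(receivedHex, 'hex');
      const b = Buffer.from(storedHex, 'hex');
      if (a.length !== b.length) return false; // timingSafeEqual needs equal lengths
      return crypto.timingSafeEqual(a, b);
    }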
JWT for sessions: Part 1 and part 2
Cookies are in fact accessible via JavaScript, just like the DOM is.
You could encrypt them by running the value you want to store through the encryption algorithm.
Depending on what you want to store and how the encryption/decryption mechanism works this may or may not be a good idea.
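If you do try it, the mechanics might look like this with SJCL (the key variable is the hard part: anything in the same JavaScript context can read it too):

    const key = '...'; // assumed to come from somewhere outside this snippet

    // Encrypt before writing the cookie...
    document.cookie = 'prefs=' +
      encodeURIComponent(sjcl.encrypt(key, JSON.stringify({ theme: 'dark' })));

    // ...and decrypt after reading it back.
    const raw = decodeURIComponent(
      document.cookie.split('; ')
        .find(c => c.startsWith('prefs='))
        .slice('prefs='.length));
    const prefs = JSON.parse(sjcl.decrypt(key, raw));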
By a parent-company-wide mandate, I need to structure my web-app that any personally identifiable client information will only be shown to users if the user clicks on a "View" button in the field.
(This isn't a high-level security thing, and it doesn't need ultra-high-level encryption. All authenticated and authorized users of the app will be able to view the data. The mandate is for the data to be stored encrypted and not decrypted until just prior to display to the end user. The intent is to keep any employee from having easy access to a screen full of sensitive info that could make large-scale information stealing easy for even the tech un-savvy. "Locks keep honest people honest.")
The obvious challenge of doing this in JavaScript is that both the ciphertext and the key would need to be used to decrypt the value, and they'd both have to come from the server(s) via HTTPS. Implemented poorly, the web app could make the encrypted data easier to get in clear text, since it could expose the keys and shared secrets used by the encryption algorithm.
At minimum, I can send a guid to the client as a limited time one-use-only token to allow them to access an HTTPS web service that would return one item of clear-text data per click of a view button. (Which technically does nothing to enhance security except throttling the speed that the data could be stolen.)
Are there known methods for web apps to use splitting data between an initial request, and a later AJAX request to enhance the data security? I certainly could roll my own system for the AJAX view requests that could use things like time of day, session key, etc. to validate requests, but I'm not sure if that would really get me any benefit -- I would still end up with two options: Responding to the AJAX request via HTTPS with the clear text, or sending both the cipher text and the decryption keys/secrets to a browser to decrypt with.
NOTE: Keep in mind, the only point here is to stop non-techie employees from being tempted to do bad things with "easy-stealing" data. I'm totally comfortable if anyone wants to debunk or prove whether any techniques actually benefit security or are just busy-work that are equally vulnerable.
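For what it's worth, a sketch of the one-use token idea mentioned above, assuming an Express-style endpoint (decryptField and the route shape are hypothetical):

    const crypto = require('crypto');
    const pendingTokens = new Map(); // token -> record id, consumed on use

    // Issued alongside each masked field when the page is rendered.
    function issueToken(recordId) {
      const token = crypto.randomUUID();
      pendingTokens.set(token, recordId);
      setTimeout(() => pendingTokens.delete(token), 60000); // short lifetime
      return token;
    }

    // Called by the "View" button; returns one clear-text value, burns the token.
    function reveal(req, res) {
      const recordId = pendingTokens.get(req.params.token);
      if (recordId === undefined) return res.status(403).end();
      pendingTokens.delete(req.params.token); // one use only
      res.json({ value: decryptField(recordId) }); // decryptField: hypothetical server-side decrypt
    }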
I just read about the Stanford Javascript Crypto Library (jsfiddle example) which supports SHA-256, AES, and other standard cryptographic schemes entirely in JavaScript. The library seems very nifty, but I don't know of a reasonable use case for it.
As some questions have already pointed out, client side encryption is not a safe way to pass secure data to a server. HTTPS should be used instead. So, are there any projects that would benefit from or require client side encryption?
Use Case 1
How about local storage? You might want to store some data but encrypt it so that other users of the computer cannot access it.
For example:
User connects to server over HTTPS.
Server authenticates user.
Server serves an encryption password specific to this user.
User does some stuff locally.
Some data is stored locally (encrypted with the password).
User wanders off.
User comes back to site at later stage.
User connects over HTTPS.
Server authenticates user.
Server serves the user's encryption password.
Client-side JS uses encryption password to decrypt local data.
User does something or other locally with their now-decrypted, in-memory local data.
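A sketch of the encrypt/decrypt steps in that flow, using SJCL and localStorage (encryptionPassword is the value delivered by the server over HTTPS; appState is illustrative):

    // While the user works: persist the data encrypted.
    localStorage.setItem('appData',
      sjcl.encrypt(encryptionPassword, JSON.stringify(appState)));

    // Next session, after the server re-delivers the same password:
    const restored = JSON.parse(
      sjcl.decrypt(encryptionPassword, localStorage.getItem('appData')));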
This could be useful in cases where you have a fat client, with lots of (sensitive) data that needs to be used across sessions, where serving the data from the server is infeasible due to size. I can't think of that many instances where this would apply...
It could also be useful in cases where the user of the application generates sensitive data and that data does not need to (or shouldn't) ever be sent to (or stored on) the server.
For an applied example, you could store the user's credit card details locally, encrypted, and use JS to auto-enter them into a form. You could have done this by instead storing the data server side and serving a pre-populated form that way, but with this approach you don't have to store their credit card details on the server (which some countries have strict laws about). Obviously, it's debatable whether storing credit card details encrypted on the user's machine is more or less of a security risk than storing them server side.
There's quite probably a better applied example...
I don't know of any existing project which use this technique.
Use Case 2
How about for performance improvements over HTTPS, facilitated via password sharing?
For example:
User connects to server over HTTPS.
Server authenticates user.
Server serves an encryption password specific to this user.
Server then redirects to HTTP (which has much less of an overhead than HTTPS, and so will be much better in terms of performance).
Because both the server and the client have the encryption password (and that password was shared over a secure connection), they can now both send and receive securely encrypted sensitive data, without the overhead of encrypting / decrypting entire requests with HTTPS. This means that the server could serve a web page where only the sensitive parts of it are encrypted. The client could then decrypt the encrypted parts.
This use case is probably not all that worthwhile, because HTTPS generally has acceptable performance levels, but would help if you need to squeeze out a bit more speed.
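The mechanics of the sensitive-parts-only idea could be as simple as this with SJCL, since the same library runs in Node and in the browser (sharedPassword is the value exchanged earlier over HTTPS):

    // One end encrypts the sensitive fragment before it travels over HTTP...
    const field = sjcl.encrypt(sharedPassword, accountNumber);

    // ...and the other end decrypts it with the same password.
    const clear = sjcl.decrypt(sharedPassword, field);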
Use Case 3
Host-proof storage. You can encrypt data client side and then send it to the server. The server can store the data and share it, but without knowing the client's private key, it cannot decrypt it. This is thought to be the basis for services such as LastPass.
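A sketch of the host-proof pattern; the master password never leaves the browser, so the server only ever sees ciphertext (the /vault endpoint and data names are illustrative):

    // Client side, before upload:
    const blob = sjcl.encrypt(masterPassword, JSON.stringify(vaultEntries));
    await fetch('/vault', { method: 'POST', body: blob });

    // Later, on any device: fetch the blob and decrypt locally.
    const stored = await (await fetch('/vault')).text();
    const entries = JSON.parse(sjcl.decrypt(masterPassword, stored));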
Like anything on the client, you can use obfuscation to make things more difficult for casual users to peek inside, but since the client would also need to have a copy of the decryptor, there's nothing to stop the user from using the decryptor themselves either.
JavaScript is an insecure environment, period.
One use that comes to mind is host-proofing. That is where you want to store the data on the server or store and forward through the server but not give the server access to the data.
The client can encrypt the data prior to transmission to the server and keep the private key or at least the password for the private key locally.
I believe that this is the basis for services such as LastPass.
Some of the guys here are developing an application which incorporates some 'secure areas' accessible by logging in. In the past, the login form and subsequent 'secure' pages were all plain text transmitted over http, as it's an application that goes out for use on shared servers where there is little chance of being able to use SSL (think WordPress and the like). Most people just shrugged their shoulders as that's all they expected - it's hardly a national bank.
We are now thinking of writing the next version using a JavaScript front end, with the advantage of loading all the images & CSS once, then writing HTML into the DOM thereafter with extJS (or maybe jQuery). We'd like to encrypt user input at the client before being sent to the server, then decrypt server output at the browser before being rendered to HTML so as to introduce some sort of security for users. There are also gains to be had with reducing page loading times, as we're only sending gzipped JSON back and forth.
While playing around, we realised that the method we were looking at to encrypt the basic stuff also doubled up as an authentication mechanism for login in the first place.
For simplicity...:
The user connects to the login page over standard http, where the browser downloads the JavaScript package containing the hashing and encryption algorithms (SHA-256 and AES for example).
User enters username, password and secret into a login form.
The browser JavaScript sends a hash of username and password to the server via AJAX. The secret is only stored in JavaScript and is never sent across the internet.
The server looks up the hash and retrieves username and secret from the database.
The server sends a hash (same algorithm as the browser) of username and secret back to the browser.
The browser JavaScript creates a hash of username and secret and compares it to the hash sent back from the server.
If they are the same, the browser JavaScript encrypts response with secret and sends the message back to the server.
The server decrypts the message with secret to find the expected response and starts a new session.
Subsequent communications are encrypted and decrypted both ways with secret.
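For concreteness, the hashing and comparison steps of that flow might look like this with SJCL (username, password, secret, response and hashFromServer are assumed to be in scope):

    const hex = bits => sjcl.codec.hex.fromBits(bits);

    // Browser: hash of username and password, sent via AJAX.
    const loginHash = hex(sjcl.hash.sha256.hash(username + ':' + password));

    // Browser: recompute hash(username + secret) and compare with the
    // value the server sent back.
    const expected = hex(sjcl.hash.sha256.hash(username + ':' + secret));
    const serverIsGenuine = expected === hashFromServer;

    // Browser: encrypt the response with the shared secret.
    const message = sjcl.encrypt(secret, response);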
There seem to be a few advantages of this type of system, but are we right in thinking:
The user knows they are talking to their server if the server manages to create a hash of username and secret, proving the server knows and understands username and secret.
The server knows the user is genuine if they manage to encrypt response with secret, proving the user knows secret.
At no time is secret ever transmitted in plain text, nor is it possible to determine secret from the hash.
A sniffer will only ever find out the 'secure' URL and detect compressed hashes and encrypted blobs in the query string. If they send a malformed request to the URL, no response is given. If they somehow manage to guess an appropriate request, they still have to be able to decrypt it.
It all seems quick enough as to be imperceptible to the user. Can anyone see through this? We had all just assumed we shouldn't be playing with JavaScript encryption!
Don't do this. Please use SSL/TLS. See Javascript Cryptography Considered Harmful.
If you can provide a single SSL site to deliver your JavaScript securely (to avoid the attack mentioned above), then you can use the open-source Forge library to provide cross-domain TLS connections to your other sites after generating self-signed certificates for them. The Forge library also provides other basic crypto primitives if you opt to go in a different direction. Forge has an XMLHttpRequest wrapper that is nearly all JavaScript, with a small piece that leverages Flash's socket API to enable cross-domain communication.
http://digitalbazaar.com/2010/07/20/javascript-tls-1/
https://github.com/digitalbazaar/forge