So now I am pretty much sold on the idea of having a pure HTML+JS front end where all processing happens client side in the browser, and the back end provides all the data in JSON/XML/some other format.
Here's the dilemma:
For authentication, I am using an OAuth2 bearer token, which gets generated when the user authenticates with a username and password (e.g. at the login stage).
There is an extra layer of security for the client-side application (i.e. a front-end web server or mobile app) that is making requests to this Web API: when it makes the initial request, it passes a "client_id" and "client_secret" to prove the client app is authorized to make requests to the back-end server.
In the traditional .NET way, I would store the encrypted client_id and key in web.config, and my C# (or VB.NET) code would retrieve them and send them over SSL to the server. That way the client_id and client_secret are never exposed (in rendered HTML, for example) to the client-side browser.
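For context, here is a minimal sketch of the kind of token request being described, assuming a standard OAuth2 resource owner password credentials grant (the /token endpoint and all credential values are illustrative). In a pure JS client, every value in this request is visible in the browser's developer tools:

    // Illustrative only: a browser-side OAuth2 token request. Anything the
    // script can read (client_id, client_secret, username, password) can be
    // read by the user of that browser as well.
    var body = "grant_type=password" +
        "&username=" + encodeURIComponent("alice") +
        "&password=" + encodeURIComponent("p@ssw0rd") +
        "&client_id=" + encodeURIComponent("my-spa-client") +
        "&client_secret=" + encodeURIComponent("not-really-secret");

    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/token", true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.onload = function () {
        var token = JSON.parse(xhr.responseText).access_token;
        // token is then sent as "Authorization: Bearer <token>" on later calls
    };
    xhr.send(body);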
In a pure JavaScript environment, how can I secure my client_id and client_secret (or any other sensitive data, for that matter)?
Thanks
I don't think you can secure your "secrets".
HTML5/JS code is plain text; anyone with a text editor can see it. What people normally try to do is obfuscate their code using JavaScript minifiers/compressors; see here for a good discussion. The practice is called security through obscurity, but note that obfuscation is not security: given time and effort, a determined "hacker" will eventually find your secrets. Another step you can take to deter, delay and frustrate such attacks is to spread bits of your secrets around the code, in different modules, etc. Having said that, you'll need to write code to assemble them at some point, so again, no real security.
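As a toy illustration of that last point (all values are made up): however the pieces are scattered, the assembly code ships to the browser too, so a debugger or a console.log recovers the whole thing.

    // Pieces of a "secret" scattered across the code base (module A, module B,
    // a runtime computation...). Purely illustrative values.
    var part1 = "c4f3";
    var part2 = "b00c";
    var part3 = "5ecret";

    // At some point the app has to put them back together, and that code is
    // just as visible as the pieces themselves.
    function getSecret() {
        return part1 + part2 + part3;
    }

    console.log(getSecret()); // anyone can breakpoint or log this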
I have a similar problem: I wanted to use a "shared secret" with the server so I could hash my client requests, making them tamper-proof and impossible to recreate without the attacker knowing the shared secret. Unfortunately I had to give up on the idea, since I realised I couldn't keep the secret secret enough.
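For reference, the request-signing idea looks roughly like the sketch below (assuming the CryptoJS library is loaded; the secret and fields are illustrative). The flaw is exactly that sharedSecret has to live in the page, so anyone reading the source can forge their own "tamper-proof" requests.

    // HMAC request signing with a shared secret, sketched with CryptoJS.
    var sharedSecret = "illustrative-shared-secret"; // visible to the client

    function signRequest(params) {
        // Canonicalize the request so client and server hash the same string.
        var canonical = Object.keys(params).sort().map(function (k) {
            return k + "=" + params[k];
        }).join("&");
        var signature = CryptoJS.HmacSHA256(canonical, sharedSecret).toString();
        return canonical + "&sig=" + signature;
    }

    // The server recomputes the HMAC with the same secret and rejects mismatches.
    var signed = signRequest({ action: "getReport", id: "42", ts: Date.now() });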
Related
I am currently architecting a new SaaS-based application which will include a RESTful API. I want to secure the communications between the browser and the API server. To do this I plan on using a "two-legged" OAuth approach. My question is: how secure can this approach be if I am computing the HMACs in the browser using a JavaScript HMAC library? Wouldn't this approach expose the secret key?
General observations about client-side Javascript based "security":
anything you do on the client in Javascript is entirely visible to said client; you cannot hide anything from the user
yes, if you're sending out private keys to the client in Javascript, they cease to be private keys
anything happening client side cannot be trusted, at all; you don't even have any proof that the client is running your code, all you see is the result of it
if you're trying to do some client-side magic to protect from third parties: a third party in a position to do any harm is typically also in the position to intercept all the Javascript that your server is sending to the client in the first place...
if you're protecting the transport of said Javascript from said third parties by using SSL... you don't need any more client-side Javascript code to add any more protection to that channel
Beyond this, I'm not entirely sure who is supposed to authenticate against whom here and what you want to keep secret from whom; but hopefully these points will get you thinking.
This is quite tricky. I would prefer having server-side key authorization, but we are limited to a JavaScript-only implementation. Customers will use a JavaScript library that requests certain pieces of data, but each customer has to be authorized to use that data. That's where the authorization part comes in, and it cannot involve any server-side implementation (on the customer's side).
The JavaScript library requests data from my server, but not all customers are allowed to see every piece of data. That's why I need to authorize the customer.
Currently I simply place a customer ID in the JavaScript library, which is sent to the server to authorize the request. This is not very safe, though: you could simply copy this ID into your own library to get data from the server that you normally could not retrieve.
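Roughly, the current scheme looks like the sketch below (the endpoint and names are illustrative); since the ID sits in plain sight in the distributed library, anyone can lift it and make the same call.

    // Illustrative sketch of the current approach: a customer ID baked into
    // the distributed JavaScript library and sent with every data request.
    var CUSTOMER_ID = "cust-12345"; // visible to anyone who views the source

    function fetchData(resource, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "https://api.example.com/data?resource=" +
            encodeURIComponent(resource) + "&customer=" + CUSTOMER_ID, true);
        xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
        xhr.send();
    }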
I don't need a 100% waterproof solution, but my current implementation is just pure garbage. As the solution needs to be pure JavaScript, I understand there will be many ways to spoof the authorization. I just need authorization that's as safe as it can get with JavaScript only. Any ideas?
Some of the guys here are developing an application which incorporates some 'secure areas' accessible by logging in. In the past, the login form and subsequent 'secure' pages were all plain text transmitted over http, as it's an application that goes out for use on shared servers where there is little chance of being able to use SSL (think WordPress and the like). Most people just shrugged their shoulders as that's all they expected - it's hardly a national bank.
We are now thinking of writing the next version using a JavaScript front end, with the advantage of loading all the images & CSS once, then writing HTML into the DOM thereafter with extJS (or maybe jQuery). We'd like to encrypt user input at the client before it is sent to the server, then decrypt server output in the browser before it is rendered to HTML, so as to introduce some sort of security for users. There are also gains to be had in reducing page loading times, as we're only sending gzipped JSON back and forth.
While playing around, we realised that the method we were looking at to encrypt the basic stuff also doubled up as an authentication mechanism for login in the first place.
For simplicity...:
The user connects to the login page over standard http, where the browser downloads the JavaScript package containing the hashing and encryption algorithms (SHA-256 and AES for example).
User enters username, password and secret into a login form.
The browser JavaScript sends a hash of username and password to the server via AJAX. The secret is only stored in JavaScript and is never sent across the internet.
The server looks up the hash and retrieves username and secret from the database.
The server sends a hash (same algorithm as the browser) of username and secret back to the browser.
The browser JavaScript creates a hash of username and secret and compares it to the hash sent back from the server.
If they are the same, the browser JavaScript encrypts response with secret and sends the message back to the server.
The server decrypts the message with secret to find the expected response and starts a new session.
Subsequent communications are encrypted and decrypted both ways with secret.
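For concreteness, the client-side half of the hashing and comparison steps might look like the sketch below (using CryptoJS for SHA-256 and AES; all values are illustrative). This is only to make the scheme concrete, not a recommendation (see the answer further down).

    // Sketch of the browser side of the scheme above, using CryptoJS.
    var username = "alice", password = "p@ssw0rd", secret = "user-secret";

    // Send hash(username + password) to the server via AJAX.
    var loginHash = CryptoJS.SHA256(username + password).toString();

    // Verify the hash(username + secret) the server sends back.
    function serverIsGenuine(serverHash) {
        return serverHash === CryptoJS.SHA256(username + secret).toString();
    }

    // Encrypt the expected response with the secret and send it back.
    function encryptResponse(response) {
        return CryptoJS.AES.encrypt(response, secret).toString();
    }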
There seem to be a few advantages of this type of system, but are we right in thinking:
The user knows they are talking to their server if the server manages to create a hash of username and secret, proving the server knows and understands username and secret.
The server knows the user is genuine if they manage to encrypt response with secret, proving the user knows secret.
At no time is secret ever transmitted in plain text, nor is it possible to determine secret from the hash.
A sniffer will only ever find out the 'secure' URL and detect compressed hashes and ciphertext in the query string. If they send a malformed request to the URL, no response is given. If they somehow manage to guess an appropriate request, they still have to be able to decrypt it.
It all seems quick enough to be imperceptible to the user. Can anyone see holes in this? We all just assumed we shouldn't be playing with JavaScript encryption!
Don't do this. Please use SSL/TLS. See Javascript Cryptography Considered Harmful.
If you can provide a single SSL site to deliver your JavaScript securely (to avoid the attack mentioned above), then you can use the open-source Forge library to provide cross-domain TLS connections to your other sites, after generating self-signed certificates for them. The Forge library also provides other basic crypto building blocks if you opt to go in a different direction. Forge has an XMLHttpRequest wrapper that is nearly all JavaScript, with a small piece that leverages Flash's socket API to enable cross-domain communication.
http://digitalbazaar.com/2010/07/20/javascript-tls-1/
https://github.com/digitalbazaar/forge
This is probably a generic security question, but I thought I'd ask in the realm of what I'm developing.
The scenario is: a web service (WCF Web API) that uses an API key to validate and tell me who the user is, and a mix of jQuery and application code on the front end.
On the one hand, the traffic can be HTTPS so it cannot be inspected, but if I use the same key per user (say a GUID) and I am using it in both places, then there's a chance it could be taken and someone could impersonate the user.
If I implement something akin to OAuth, then a per-user and per-app key is generated, and that could work, but for the jQuery side I would still need the app's API key in the JavaScript.
This would only be a problem if someone was on the actual computer and did a view-source.
What should I do?
md5 or encrypt the key somehow?
Put the key in a session variable, then when using ajax retrieve it?
Get over it, it's not that big a deal/problem.
I'm sure it's probably a common problem - so any pointers would be welcome.
To make this clearer: this is my own API that I am querying against, not Google's, etc. So I can do per-session tokens and so on; I'm just trying to work out the best way to secure the client-side tokens/keys that I would use.
I'm being a bit overly cautious here, but just using this to learn.
(I suggest tagging this post "security".)
First, you should be clear about what you're protecting against. Can you trust the client at all? A crafty user could stick a Greasemonkey script on your page and call exactly the code that your UI calls to send requests. Hiding everything in a JavaScript closure only means you need a debugger; it doesn't make an attack impossible. Firebug can trace HTTPS requests. Also consider a compromised client: is there a keylogger installed? Is the entire system secretly running virtualized, so that an attacker can inspect any part of memory at any time, at their leisure? Security is really tricky when you're as exposed as a web app is.
Nonetheless, here are a few things for you to consider:
Consider not actually using keys but rather HMAC hashes of, e.g., a token you give immediately upon authentication.
DOM storage can be a bit harder to poke at than cookies (see the sketch after this list).
Have a look at Google's implementation of OAuth 2 for an example security model. Basically you use tokens that are only valid for a limited time (and perhaps for a single IP address). That way even if the token is intercepted or cloned, it's only valid for a short length of time. Of course you need to be careful about what you do when the token runs out; could an attacker just do the same thing your code does and get a new valid token?
Don't neglect server-side security: even if your client should have checked before submitting the request, check again on the server if the user actually has permission to do what they're asking. In fact, this advice may obviate most of the above.
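Combining a couple of the points above, a minimal sketch (the login flow, header name and endpoints are illustrative): keep the short-lived token the server issued at login in DOM storage and attach it to each request, while the server re-checks the token and the user's permissions on every call.

    // Illustrative: a short-lived token from the login response is kept in
    // sessionStorage (DOM storage) rather than a long-lived key.
    var tokenFromLoginResponse = "token-issued-at-login"; // placeholder value
    sessionStorage.setItem("apiToken", tokenFromLoginResponse);

    function callApi(url, payload, done) {
        $.ajax({
            url: url,
            type: "POST",
            data: JSON.stringify(payload),
            contentType: "application/json",
            headers: { "X-Auth-Token": sessionStorage.getItem("apiToken") },
            success: done
        });
    }
    // The server validates the token (and its expiry) and the user's
    // permissions on every request, per the last point above.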
It depends on how the API key is used. API keys like those provided by Google are tied to the URL of the site originating the request; if you try to use the key on a site with a different URL, the service throws an error, thus removing the need to protect the key on the client side.
Some basic APIs, however, are tied to a client and can be used across multiple domains. In that case I have previously gone with the practice of wrapping the API in server-side code, placing restrictions on how the client can communicate with that local service, and protecting the service itself.
My overall recommendation, however, would be to apply restrictions on the Web API around how keys can be used, which removes the complication and the necessity of trying to protect them on the client.
How about using jQuery to call server-side code that handles communication with the API? If you are using MVC, you can call a controller action that contains the code and API key to hit your service and returns a partial view (or even JSON) to your UX. If you are using Web Forms, you could create an .aspx page that does the API communication in the code-behind and then writes content to the response stream for your UX to consume. Then your UX code can just contain some $.post() or $.load() calls to your server-side code, and both your API key and endpoint would be protected.
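For instance, the browser side of that proxy approach might look like this (the controller route and parameter names are illustrative); the API key never leaves the server:

    // The API key stays in server-side code/config; the browser only talks to
    // your own controller action, which calls the third-party API on its behalf.
    $.post("/Reports/GetSummary", { reportId: 42 }, function (result) {
        // result is the partial view HTML or the JSON returned by the controller
        $("#summary").html(result);
    });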
Generally, in cases like this, you proxy requests through the server using 'AJAX', and the server verifies that the browser making the requests is authorized to do so. If you want to call the service directly from JavaScript, then you need some kind of token system like JSON Web Tokens (JWT), and you'll have to work out cross-domain issues if the service is located somewhere other than the current domain.
see http://blogs.msdn.com/b/rjacobs/archive/2010/06/14/how-to-do-api-key-verification-for-rest-services-in-net-4.aspx for more information
(How to do API Key Verification for REST Services in .NET 4)
I'm checking out the Amazon SimpleDB documentation. They mention only server-side languages.
Is there any way to insert data into the DB directly from the client side, without going through a server?
If not, how come?
Yes and no. Since you need to protect your secret key for AWS (hackers could use it to abuse your account), you can't authenticate requests in JS directly.
While you could create an implementation in JS, it would be inherently insecure. Practical for some internal uses, perhaps, but it could never be safely deployed publicly (as that would expose your secret key). What you could do instead is use your server to authenticate (sign) the requests to SimpleDB and let the JS perform the actual request to Amazon. Though it's a bit roundabout, it would work.
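A rough sketch of that split, assuming a hypothetical /sign endpoint on your own server that builds and signs the SimpleDB request with your secret key and returns the full URL to call (all names here are illustrative):

    // Hypothetical flow: our own server signs the SimpleDB request (it holds
    // the AWS secret key); the browser then sends the signed request itself.
    function querySimpleDB(selectExpression, done) {
        // 1. Ask our server for a signed request URL; the key never leaves it.
        $.getJSON("/sign", { action: "Select", expr: selectExpression }, function (signed) {
            // 2. Send the pre-signed request to Amazon and process the result
            //    client side (subject to the cross-domain issues noted below).
            $.get(signed.url, function (xml) {
                done(xml);
            });
        });
    }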
The downside is that you'd need to do a bunch of processing on the client side. You're also likely fetching more data than your app ultimately consumes or outputs, so processing the data on the client instead of on the server would add latency, simply because you're transferring more data to the user and processing it more slowly there.
Hope this helps
If not, how come?
Security. You authenticate to the DB with your developer account. Amazon does not know about your end users (which it would need to, in order to authenticate access directly from the browser). It is up to the application code to determine what end users are allowed to do and what not.
That said, there is the Javascript Scratchpad for Amazon SimpleDB sample application. It does access SimpleDB directly from the browser (you have to type in your AWS credentials).
SimpleDBAdmin is a JavaScript/PHP-based interface:
http://awsninja.com/2010/07/08/simpledbadmin-a-phpmyadmin-like-interface-for-amazon-simpledb/
The PHP side is a relay script [relay.php] which passes the requests made by the JavaScript client on to the server, takes the response from the server, and reformats it for the client. This is an easy way around the cross-domain restrictions on JavaScript [if the web client downloaded the page containing the JavaScript code from www.example.com, then by default the browser will only allow that JavaScript to connect back to www.example.com].
Everything else, including request signing, is done by the Javascript code.
Also note that Amazon recently released a new beta service that allows you to set up sub-accounts under your Amazon account. The SimpleDB protection is very basic [either on or off per account], but as it does provide some limited form of request tracking, it could be argued that using JavaScript and giving each user their OWN user ID and key for request signing is MORE secure. Having every user use the SAME user ID and certificate would, of course, be insecure.
There is a free, pure JavaScript interface available. Please see https://chrome.google.com/webstore/detail/ddhigekdfabonefhiildaiccafacphgg
See this answer to the similar question on allowing secure, anonymous, read-only access to SimpleDB from untrusted clients: anonymous read with amazon simpledb.
Some variations from that answer:
don't set the access policy to read-only. However, it allows fine-grained control, so you may still wish to limit the kinds of writes allowed
don't be anonymous. The AWS docs on token-based auth and the example apps show parallel paths: anonymous access, or non-anonymous AWS/federated access with your credentials but without exposing your secret key.