I'm writing a Node.js server that responds to HTTP GET requests with a dynamic HTML page, rendered on the fly with data retrieved according to the client's request.
To identify each client I use a session token (JWT), and this is sent back to the server as a query parameter in each GET request, along with the other information, e.g.:
my.domain/api/service?token=blablabla&req=123
It does work, but I wonder whether sending the session token as a query parameter is a good (and safe) idea.
I would send it in the headers, but that is harder from the client's page, because right now I just set an href attribute to the URL above.
Do you recommend another way?
Security-wise, it doesn't really matter how you send it, as long as it doesn't contain sensitive information (e.g. a password), because a JWT is not encrypted: it's only encoded, and the token can be decoded very easily.
Even if someone (a hacker, a user, etc.) alters the token, the server will verify it and notice (if you've set up verification correctly), and you can deny access to the page, media, data or whatever your user requests.
Important! Use SSL! Otherwise a hacker can steal the token from its owner and use it himself; the server only checks that the token is valid and not altered, not where it came from. Read more: man-in-the-middle attack.
How you do it is totally up to you and your project; however, I would personally send it via a header.
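For illustration, a minimal sketch of that verification in Node.js/Express, assuming the jsonwebtoken package; it accepts the token from an Authorization header or, as in the question, from the query string (the names SECRET and requireToken are made up for the example):

const jwt = require('jsonwebtoken'); // assumed dependency
const SECRET = process.env.JWT_SECRET; // server-side signing secret

function requireToken(req, res, next) {
  // Accept the token from the Authorization header, falling back to ?token=...
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : req.query.token;
  if (!token) return res.status(401).send('Missing token');
  try {
    req.user = jwt.verify(token, SECRET); // throws if the token was altered or has expired
    next();
  } catch (err) {
    res.status(401).send('Invalid or tampered token');
  }
}

// Usage: app.get('/api/service', requireToken, handler);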
Related
We offer a web service where a user can execute a POST request and get HTML. This is done server to server. The POST that is sent with the request data includes the user's secret key, and other parameters.
We want to allow for this to be loaded via AJAX. So the request will be done client side, after the page has loaded. This way, no server side implementation will be required to install our service, beyond a slight modification to output the script.
We are not sure how to secure this operation, because we cannot output the secret key as a JavaScript parameter (it is exposed that way). We usually use a user ID + apiKey combination to authenticate the request when it is done server to server.
We know the user's key, and their valid domain, so if there's a way to make absolutely sure the domain that is sending the request cannot be faked, it may also solve our issue.
How can we make it so that we can differentiate requests coming that way, without exposing our secret key, so that we may provide the information only to authenticated requests?
I've been reading up on REST and there are a lot of questions on SO about it, as well as on a lot of other sites and blogs. Though I've never seen this specific question asked... for some reason, I can't wrap my mind around this concept...
If I'm building a RESTful API and I want to secure it, one of the methods I've seen is to use a security token. When I've used other APIs, there's been a token and a shared secret... makes sense. What I don't understand is: if requests to a REST service operation are made through JavaScript (XHR/Ajax), what is to prevent someone from sniffing that out with something simple like Firebug (or "view source" in the browser), copying the API key, and then impersonating that person using the key and secret?
We're exposing an API that partners can only use on domains that they have registered with us. Its content is partly public (but preferably only to be shown on the domains we know), but is mostly private to our users. So:
To determine what is shown, our user must be logged in with us, but this is handled separately.
To determine where the data is shown, a public API key is used to limit access to domains we know, and above all to ensure the private user data is not vulnerable to CSRF.
This API key is indeed visible to anyone, we do not authenticate our partner in any other way, and we don't need REFERER. Still, it is secure:
When our get-csrf-token.js?apiKey=abc123 is requested:
Look up the key abc123 in the database and get a list of valid domains for that key.
Look for the CSRF validation cookie. If it does not exist, generate a secure random value and put it in an HTTP-only session cookie. If the cookie did exist, get the existing random value.
Create a CSRF token from the API key and the random value from the cookie, and sign it. (Rather than keeping a list of tokens on the server, we're signing the values. Both values will be readable in the signed token, that's fine.)
Set the response to not be cached, add the cookie, and return a script like:
var apiConfig = apiConfig || {};
if (document.domain === 'example.com'
    || document.domain === 'www.example.com') {
    apiConfig.csrfToken = 'API key, random value, signature';
    // Invoke a callback if the partner wants us to
    if (typeof apiConfig.fnInit !== 'undefined') {
        apiConfig.fnInit();
    }
} else {
    alert('This site is not authorised for this API key.');
}
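For what it's worth, a rough sketch of the server side of the steps above in Node.js/Express, assuming an HMAC signature with a server-side secret; SIGNING_SECRET, the cookie name, and the helpers lookUpDomains and buildScript are invented for the example, and req.cookies assumes the cookie-parser middleware:

const crypto = require('crypto');
const SIGNING_SECRET = process.env.CSRF_SIGNING_SECRET; // hypothetical server-side secret

app.get('/get-csrf-token.js', (req, res) => {
  const apiKey = req.query.apiKey;
  const domains = lookUpDomains(apiKey); // hypothetical DB lookup of valid domains for this key
  // Reuse the existing random value, or create one and set it in an HTTP-only cookie
  let random = req.cookies.csrfValidation;
  if (!random) {
    random = crypto.randomBytes(32).toString('hex');
    res.cookie('csrfValidation', random, { httpOnly: true, secure: true });
  }
  // Sign "apiKey.random" so the server can later verify the pair without storing it
  const signature = crypto.createHmac('sha256', SIGNING_SECRET)
    .update(apiKey + '.' + random).digest('hex');
  const token = [apiKey, random, signature].join('.');
  res.set('Cache-Control', 'no-store');
  res.type('application/javascript');
  res.send(buildScript(domains, token)); // hypothetical helper emitting the script shown above
});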
Notes:
The above does not prevent a server side script from faking a request, but only ensures that the domain matches if requested by a browser.
The same origin policy for JavaScript ensures that a browser cannot use XHR (Ajax) to load and then inspect the JavaScript source. Instead, a regular browser can only load it using <script src="https://our-api.com/get-csrf-token.js?apiKey=abc123"> (or a dynamic equivalent), and will then run the code. Of course, your server should not support Cross-Origin Resource Sharing nor JSONP for the generated JavaScript.
A browser script can change the value of document.domain before loading the above script. But the same origin policy only allows for shortening the domain by removing prefixes, like rewriting subdomain.example.com to just example.com, or myblog.wordpress.com to wordpress.com, or in some browsers even bbc.co.uk to co.uk.
If the JavaScript file is fetched using some server side script then the server will also get the cookie. However, a third party server cannot make a user’s browser associate that cookie to our domain. Hence, a CSRF token and validation cookie that have been fetched using a server side script, can only be used by subsequent server side calls, not in a browser. However, such server side calls will never include the user cookie, and hence can only fetch public data. This is the same data a server side script could scrape from the partner's website directly.
When a user logs in, set some user cookie in whatever way you like. (The user might already have logged in before the JavaScript was requested.)
All subsequent API requests to the server (including GET and JSONP requests) must include the CSRF token, the CSRF validation cookie, and (if logged on) the user cookie. The server can now determine if the request is to be trusted:
The presence of a valid CSRF token ensures the JavaScript was loaded from the expected domain, if loaded by a browser.
The presence of the CSRF token without the validation cookie indicates forgery.
The presence of both the CSRF token and the CSRF validation cookie does not ensure anything: this could either be a forged server side request, or a valid request from a browser. (It could not be a request from a browser made from an unsupported domain.)
The presence of the user cookie ensures the user is logged on, but does not ensure the user is a member of the given partner, nor that the user is viewing the correct website.
The presence of the user cookie without the CSRF validation cookie indicates forgery.
The presence of the user cookie ensures the current request is made through a browser. (Assuming a user would not enter their credentials on an unknown website, and assuming we don’t care about users using their own credentials to make some server side request.) If we also have the CSRF validation cookie, then that CSRF validation cookie was also received using a browser. Next, if we also have a CSRF token with a valid signature, and the random number in the CSRF validation cookie matches the one in that CSRF token, then the JavaScript for that token was also received during that very same earlier request during which the CSRF cookie was set, hence also using a browser. This then also implies the above JavaScript code was executed before the token was set, and that at that time the domain was valid for the given API key.
So: the server can now safely use the API key from the signed token.
If at any point the server does not trust the request, then a 403 Forbidden is returned. The widget can respond to that by showing a warning to the user.
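To make the CSRF part of those checks concrete, a minimal sketch of the token/cookie comparison on the server, continuing the invented names from the earlier sketch (the user-cookie checks are omitted, and the header/query names are examples only):

const crypto = require('crypto'); // SIGNING_SECRET as in the earlier sketch

function validateCsrf(req, res, next) {
  const token = req.get('X-Csrf-Token') || req.query.csrfToken; // example names
  const cookieRandom = req.cookies.csrfValidation;
  if (!token || !cookieRandom) return res.status(403).send('Forbidden');
  const [apiKey, random, signature] = token.split('.');
  const expected = crypto.createHmac('sha256', SIGNING_SECRET)
    .update(apiKey + '.' + random).digest('hex');
  // Constant-time comparison to avoid leaking information through timing
  const valid = signature && expected.length === signature.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
  if (!valid || random !== cookieRandom) return res.status(403).send('Forbidden');
  req.apiKey = apiKey; // the API key from the signed token can now be trusted
  next();
}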
It's not required to sign the CSRF validation cookie, as we're comparing it to the signed CSRF token. Not signing the cookie makes each HTTP request shorter, and the server validation a bit faster.
The generated CSRF token is valid indefinitely, but only in combination with the validation cookie, so effectively until the browser is closed.
We could limit the lifetime of the token's signature. We could delete the CSRF validation cookie when the user logs out, to meet the OWASP recommendation. And to not share the per-user random number between multiple partners, one could add the API key to the cookie name. But even then one cannot easily refresh the CSRF validation cookie when a new token is requested, as users might be browsing the same site in multiple windows, sharing a single cookie (which, when refreshing, would be updated in all windows, after which the JavaScript token in the other windows would no longer match that single cookie).
For those who use OAuth, see also OAuth and Client-Side Widgets, from which I got the JavaScript idea. For server side use of the API, in which we cannot rely on the JavaScript code to limit the domain, we're using secret keys instead of the public API keys.
The API secret is not passed explicitly; the secret is used to generate a signature of the current request. On the server side, the server generates the signature following the same process, and if the two signatures match, the request is authenticated successfully -- so only the signature is passed with the request, not the secret.
This question has an accepted answer but just to clarify, shared secret authentication works like this:
1. The client has a public key; this can be shared with anyone, it doesn't matter, so you can embed it in JavaScript. This is used to identify the user on the server.
2. The server has a secret key, and this secret MUST be protected. Therefore, shared key authentication requires that you can protect your secret key. So a public JavaScript client that connects directly to another service is not possible, because you need a server middleman to protect the secret.
3. The server signs the request using some algorithm that includes the secret key (the secret key is sort of like a salt) and preferably a timestamp, then sends the request to the service. The timestamp is there to prevent "replay" attacks: a signature of a request is only valid for around n seconds. You can check that on the server by getting the timestamp header, which should contain the value of the timestamp that was included in the signature. If that timestamp is expired, the request fails.
4. The service gets the request, which contains not only the signature but also all the fields that were signed, in plain text.
5. The service then signs the request in the same way using the shared secret key and compares the signatures (a minimal sketch follows below).
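A minimal sketch of that scheme in Node.js; the field names and the exact string being signed are invented for the example, and only the signature ever travels with the request, never the secret:

const crypto = require('crypto');

// Both sides know the shared secret; only the signature is transmitted.
function signRequest(publicKey, secretKey, timestamp, body) {
  return crypto.createHmac('sha256', secretKey)
    .update(publicKey + '\n' + timestamp + '\n' + body)
    .digest('hex');
}

// Client side (must itself be a server, so the secret stays protected):
const timestamp = Math.floor(Date.now() / 1000);
const body = JSON.stringify({ req: 123 });
const signature = signRequest('my-public-key', 'my-secret-key', timestamp, body);
// Send publicKey, timestamp, body and signature; the service recomputes the HMAC
// with the secret it has on file for that public key, and rejects the request if
// the signatures differ or the timestamp is older than n seconds.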
I will try to answer the question in its original context. So the question is: "Is the secret (API) key safe to be placed within JavaScript?"
In my opinion it is very unsafe, as it defeats the purpose of authentication between the systems. Since the key will be exposed to the user, the user may retrieve information he/she is not authorized to, because in typical REST communication authentication is based only on the API key.
A solution, in my opinion, is for the JavaScript call to essentially pass the request to an internal server component that is responsible for making the REST call. The internal server component, let's say a Servlet, will read the API key from a secured source such as a permission-based file system, insert it into the HTTP header, and make the external REST call.
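For illustration, a rough sketch of that proxy idea in Node.js/Express rather than a Servlet; the route, UPSTREAM_URL and header are invented, and Node 18+ is assumed for the built-in fetch. The browser only ever calls our own endpoint, so the key never leaves the server:

const API_KEY = process.env.API_KEY; // read from a secured source, never sent to the browser
const UPSTREAM_URL = 'https://partner-api.example.com/data'; // hypothetical external REST service

app.get('/proxy/data', async (req, res) => {
  // The browser talks only to us; we add the secret credential server side.
  const upstream = await fetch(UPSTREAM_URL, {
    headers: { 'Authorization': 'Bearer ' + API_KEY }
  });
  res.status(upstream.status).json(await upstream.json());
});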
I hope this helps.
I suppose you mean session key, not API key. That problem is inherited from the HTTP protocol and known as session hijacking. The normal "workaround" is, as on any web site, to switch to HTTPS.
To run the REST service securely you must enable HTTPS, and probably client authentication. But after all, this is beyond the REST idea; REST never talks about security.
What you want to do on the server side is generate an expiring session id that is sent back to the client on login or signup.
The client can then use that session id as a shared secret to sign subsequent requests.
The session id is only passed once and this MUST be over SSL.
See example here
Use a nonce and timestamp when signing the request to prevent session hijacking.
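For illustration only, a browser-side sketch of that signing step, assuming the crypto-js library is available and inventing the header names; the session id itself was received once over SSL and is never sent again:

// Browser side, assuming the CryptoJS (crypto-js) library has been loaded.
// sessionId was obtained once over SSL at login and is kept in memory.
function signedHeaders(sessionId, body) {
  var timestamp = Math.floor(Date.now() / 1000);
  var nonce = Math.random().toString(36).slice(2); // one-time value against replays
  var signature = CryptoJS.HmacSHA256(timestamp + '\n' + nonce + '\n' + body, sessionId).toString();
  return { 'X-Timestamp': timestamp, 'X-Nonce': nonce, 'X-Signature': signature };
}
// The server recomputes the HMAC with the session id it issued, rejects stale
// timestamps, and remembers recently seen nonces so replayed requests are refused.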
I'm now two weeks into learning and building an AngularJS + PHP system and I'm still struggling with authentication. I've been reading a lot of posts about AngularJS and not one of them seems to consider the security aspect of authentication. I also had an interesting response when I asked about the security of AngularJS storages on another post, and got two great links to Stormpath's blogs which cover areas of security when dealing with tokens.
Most tutorials and examples about AngularJS seem to take a JWT approach and send that token to your REST API via HTTP headers, but given that the token is stored in JavaScript this can expose it to multiple attack types, one of them being XSS. To be secure against this type of attack, the solution is to set a cookie with the HttpOnly and Secure flags. Now the token gets passed on every request, it's not accessible to JavaScript, and it's secure. However, this raises a question at the point where you authenticate the user: how is this any different from using sessions when you're only dealing with HTTP requests originating from the same server?
When checking if a user has already logged in, we usually check if a $_SESSION variable exists, let's say uid. Now, with a token-based approach, we send the token in HTTP headers, read that token, then validate it and get user information. In AngularJS we then get the successful response and return a promise.
Sessions have the advantage of being handled by the server. The server creates a session and handles its destruction automatically if it still lingers there. When dealing with token-based authentication you have to take care of the token's expiration, refreshing and destruction with a scheduled script if the user has not destroyed it himself. This seems like too much work.
The idea of using tokens is to allow for a server to be completely stateless. The server just provides a login service, that upon successful login returns a temporary token, and it immediately forgets the token, it does not store it anywhere (database, memory).
Then the client sends the token at each subsequent request. The token has the property that it's self-validating: it includes the validity, the username and a cryptographic signature.
Such signature proves that the token is valid to the server, even if the server had thrown away the token completely.
This way the server does not have to take care of expiration/destruction of tokens: it can validate incoming tokens by inspecting the token alone (thanks to the signature).
And this is the advantage of JSON Web Tokens: they allow for a completely stateless server that does not have to manage authentication token lifecycle.
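For what it's worth, a minimal sketch of that with the jsonwebtoken package in Node.js (SECRET is a placeholder for the server's signing secret):

const jwt = require('jsonwebtoken');
const SECRET = process.env.JWT_SECRET;

// Login: issue a token and forget about it; the validity, the username and the
// signature all live inside the token itself, nothing is stored server side.
const token = jwt.sign({ sub: 'alice' }, SECRET, { expiresIn: '1h' });

// Any later request: expiry and signature are checked with no database lookup.
const payload = jwt.verify(token, SECRET); // throws if expired or tampered with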
Quick background:
A full-JavaScript AngularJS SPA client that talks to a RESTful API server. I am trying to work out the best authentication for the API server. The client will have roles, and I am not concerned if the user can see areas of the client they aren't allowed to, because the server should be airtight.
Authentication flow:
1. The user POSTs a username and password to, let's say, /api/authenticate.
2. If valid, the server generates an API token (a SHA hash of fields, or MD5) and some other metadata determining roles, to pass back in the reply to the POST in 1).
3. The token is stored in a session cookie (no expiry, HTTP-only, SSL).
4. Each request after authentication takes the token in the cookie and verifies that this is the user.
5. SSL is used on the server.
Questions:
Is this the best way to secure the server?
Do I need to worry about replay attacks w/ SSL? If so best way to manage this?
I tried to think of a way to do HMAC security with AngularJS but I can't store a private key on a javascript client.
I initially went with the HTTP authentication method, but sending the username and password with each request seems odd.
Any suggestions or examples would be appreciated.
I'm currently working on a similar situation using AngularJS + Node as a REST API, authenticating with HMAC.
I'm in the middle of working on this though, so my tune may change at any point. Here's what I have, though. Anyone willing to poke holes in this, I welcome that as well:
User authenticates, username and password over https
Server (in my case node.js+express) sends back a temporary universal private key to authenticated users. This key is what the user will use to sign HMACs client side and is stored in LocalStorage on the browser, not a cookie (since we don't want it going back and forth on each request).
The key is stored in Node.js memory and regenerates every six hours, keeping a record of the last key generated. For 10 seconds after the key changes, the server actually generates two HMACs: one with the new key, one with the old key. That way, requests that were made while the key changed are still valid. If the key changed, the server sends the new one back to the client so it can store it in LocalStorage. The key is a SHA-256 of a UUID generated with node-uuid, hashed with crypto. And after typing this out, I realize this may not scale well, but anyway...
The key is then stored in LocalStorage on the browser (the app actually spits out a your-browser-is-too-old page if LocalStorage is not supported before you can even try to login).
Then all requests beyond the initial authentication send three custom headers:
Auth-Signature: HMAC of username+time+request.body (in my case request.body is a JSON.stringify()'d representation of the request vars) signed with the locally stored key
Auth-Username: the username
X-Microtime: A unix timestamp of when the client generated its HMAC
The server then checks the X-Microtime header, and if the gap between X-Microtime and now is greater than 10 seconds, drop the request as a potential replay attack and throw back a 401.
Then the server generates its own HMAC using the same sequence as the client, Auth-Username+X-Microtime+req.body, using the 6-hour private key in Node memory (a sketch of this check follows below).
If HMACs are identical, trust the request, if not, 401. And we have the Auth-Username header if we need to deal with anything user specific on the API.
All of this communication is intended to happen over HTTPS obviously.
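Roughly, the server-side check could look like the following sketch; currentKey stands for the six-hour key held in Node memory, the header names match the ones above, and req.body assumes a JSON body parser:

const crypto = require('crypto');

function verifyHmac(req, res, next) {
  const username = req.get('Auth-Username');
  const signature = req.get('Auth-Signature');
  const microtime = parseInt(req.get('X-Microtime'), 10);
  // Reject anything outside the 10 second window as a potential replay.
  if (!username || !signature || Math.abs(Date.now() / 1000 - microtime) > 10) {
    return res.sendStatus(401);
  }
  const body = JSON.stringify(req.body); // same representation the client signed
  const expected = crypto.createHmac('sha256', currentKey) // six-hour key in memory
    .update(username + microtime + body).digest('hex');
  if (expected !== signature) return res.sendStatus(401);
  next();
}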
Edit:
The key would have to be returned to the client after each successful request to keep the client up to date with the dynamic key. This is problematic since it does the same thing that a cookie does basically.
You could make the key static and never changing, but that seems less secure because the key would never expire. You could also assign a key per user, that gets returned to the client on login, but then you still have to do user lookups on each request anyway, might as well just use basic auth at that point.
Edit #2
So, after doing some testing of my own, I've decided to go with a backend proxy to my REST API, still using HMAC.
Angular connects to a same-domain backend, the backend runs the HMAC procedure from above, and the private key is stored on this proxy. Having this on the same domain allows us to block CORS.
On successful auth, Angular just gets a flag, and we store the logged-in state in LocalStorage. No keys, but something that identifies the user and is OK to be made public. For me, the presence of this stored value is what determines if the user is logged in. We remove the LocalStorage value when they log out or we decide to invalidate their "session".
Subsequent calls from angular to same domain proxy contain user header. The proxy checks for user header (which can only be set by us because we've blocked cross-site access), returns 401 if not set, otherwise just forwards the request through to the API, but HMAC'd like above. API passes response back to proxy and thus back to angular.
This allows us to keep private bits out of the front end, while still allowing us to build an API that can authenticate quickly without DB calls on every request, and remain state-less. It also allows our API to serve other interfaces like a native mobile app. Mobile apps would just be bundled with the private key and run the HMAC sequence for each of their requests.
I need to get some data from Site B into Site A's server side. In order to make the request to Site B to retrieve the data, there are cookies associated with Site B's domain which need to be present. I assume I therefore need to do this in JavaScript with JSONP?
My idea was to use JavaScript to make the request to B, then capture the result and stick it in a cookie on A's domain, such that subsequent requests to A would carry the cookie with the returned data (it doesn't matter that it takes two requests to A to get the information to A's server side). This would work fine, except it's completely hackable.
The data itself isn't secret but I need to prevent request forgery or people on Site A calling the JSONP callback function manually, or setting the A cookie manually with stolen or otherwise faked data. Also, is there any other loophole for hacking? This would also need preventing!
The only way I can think of doing this is:
Site A generates a random token and stores it in the session. It then appends this token to the query string of the JSONP request to Site B. Site B then responds, but encrypts the usual data along with the token, using digital signing. Site A then sticks this value in a cookie on A. In the next request to A, A's server side can capture the cookie, get the value, decrypt it, check the token and, if it matches the value in the session, trust the rest of the data.
Does this sound sensible? Is there an easier way? My goal is to reduce the complexity at A's end.
Thanks
The way to avoid it being hackable is to have the sites communicate with each other directly, rather than using client-side JavaScript. Write a small light-weight REST API which allows the data to be transferred behind the scenes, server to server.
When linking to Site A, include an authentication token in the URL which can then be checked using the behind-the-scenes call to Site B. This call can transfer any additional required information. The token should probably be IP-bound, and expire after use. Upon success, you can set up your cookie information in Site A, to avoid the need for further round trips.
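For example, something along these lines on Site A's server (a sketch only; the URL and field names are invented, and Node 18+ is assumed for the built-in fetch):

// Site A's server exchanges the one-time token for the data, server to server.
async function fetchFromSiteB(oneTimeToken, clientIp) {
  const response = await fetch('https://site-b.example.com/api/data', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ token: oneTimeToken, ip: clientIp }) // token is IP-bound and expires after use
  });
  if (!response.ok) throw new Error('Site B rejected the token');
  return response.json(); // Site A can now set its own cookie from this data
}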
You could use easyXDM to communicate between the domains. With it you have two JavaScript programs, one on the consumer's domain and one on the provider's, which can assert the domain of the consumer. Both of these programs can interact with the user, and the user can authenticate themselves to both parties. With the provider's program knowing who the user is, and knowing who the consumer is, the provider can pass whatever data it wants to the consumer.
This is what big companies like Twitter, Disqus and LinkedIn use for their APIs.