I have an API that I would like to restrict access to. I can provide access keys and check them with each request, but I'm not sure how far this is really going to go.
The API is used by applications, but it is also used by a web app which someone can just view the source of. If they did, they would have the key and could easily make API calls.
Is there a more reliable way to secure access? I'm not sure what the standard practice here is.
Edit: After thinking about it, I could use a two-prong approach. The web app can use POST with CSRF, and applications can use API keys. Any other ideas, or is this a generally accepted solution? (Note, this still wouldn't work for third-party web apps.)
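To make that concrete, here is a rough Express-style sketch of what I mean (purely illustrative; Express, the session middleware, and the key store are my assumptions, not something I already have):

// Hypothetical sketch of the two-prong idea: browser requests must echo a
// per-session CSRF token, while other applications send an API key instead.
const express = require('express');
const session = require('express-session');
const crypto = require('crypto');

const app = express();
app.use(express.json());
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

const apiKeys = new Set(['example-key-for-a-registered-app']); // placeholder key store

app.get('/csrf-token', (req, res) => {
  // The web app fetches this once and sends it back with every POST.
  req.session.csrfToken = req.session.csrfToken || crypto.randomBytes(16).toString('hex');
  res.json({ csrfToken: req.session.csrfToken });
});

app.post('/api/things', (req, res) => {
  const fromWebApp = req.body.csrfToken && req.body.csrfToken === req.session.csrfToken;
  const fromApiClient = apiKeys.has(req.get('X-Api-Key'));
  if (!fromWebApp && !fromApiClient) {
    return res.status(403).json({ error: 'missing CSRF token or API key' });
  }
  res.json({ ok: true });
});

app.listen(3000);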
Your API is never private, since it's used by a web app which I assume is available to the general public. If that's the case, there's really no point in trying to keep it secret, since anyone and everyone already has access to it.
If on the other hand, this web app is only available to registered users, you can use a token system to check for authorization. When the user successfully logs in, you pass back a token (usually something 20 to 30 characters long). Every API request would require a valid token. Tokens can be set to expire automatically (using a database job) X hours after creation if your application requires higher security thresholds. If security isn't a big issue, they can be renewed automatically every time a request is made.
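As a rough illustration of that flow, in Node-style JavaScript (the in-memory map and the names are made up; in practice the tokens would live in your database):

// Illustrative token issue/check cycle; swap the Map for a database table.
const crypto = require('crypto');

const TOKEN_TTL_HOURS = 12;            // the "X hours" mentioned above
const tokens = new Map();              // token -> { userId, createdAt }

function issueToken(userId) {
  const token = crypto.randomBytes(15).toString('hex'); // ~30 characters
  tokens.set(token, { userId, createdAt: Date.now() });
  return token;                        // handed back after a successful login
}

function checkToken(token) {
  const entry = tokens.get(token);
  if (!entry) return null;
  const ageHours = (Date.now() - entry.createdAt) / 3600000;
  if (ageHours > TOKEN_TTL_HOURS) {    // expired; a DB job could purge these instead
    tokens.delete(token);
    return null;
  }
  // Lower-security variant: renew on every request instead of expiring.
  // entry.createdAt = Date.now();
  return entry.userId;                 // every API request must pass this check
}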
This is essentially a two tiered approach. Temporary tokens are generated for users to directly connect to your API so that permanent credentials are never sent to the client. Predefined keys are given to third party developers who build applications on top of your API and have their own back-end.
If it's your API you can simply do this.
1) Insert the following code into your API file(s)
$authToken = "APItoken"; //variables
if( !isset($_REQUEST["authToken"]) || $_REQUEST["authToken"] != $authToken )
die("Need auth token");
2) You will now need to GET/POST/PUT the URL like this:
http://www.yoursite.com/api1.php?authToken=APItoken&nextParam=&paramAfterThat=
If this helped please mark it as the answer
I don't have any idea how to implement this. After a bit of searching I found out that Medium keeps track of the browser and not the user; what I mean is that you can access three free articles from each new browser on the same machine (if I'm wrong, do point it out). I am using React and Firebase for my website.
Edit: I was thinking along the lines of getting some kind of ID which is unique to a browser, since cookies and local storage can always be bypassed.
I don't know if it's a clean way to do it, but you can associate an IP address with a unique counter. Or use a cookie, though the user can bypass that by clearing their cookies.
The answer depends heavily on your application setup, and especially on the service backing your front end.
If you are using a self-hosted backend, for example a Node.js/Express-based server, within your route middleware you can read the remote address from the req.connection.remoteAddress request property along with the user agent from req.header('User-Agent'), and forward these to your datastore, Firebase in this case.
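For the self-hosted case, a minimal Express sketch of that idea could look like the following (the SHA-256 key and the in-memory counter are just stand-ins for whatever you actually write to Firebase):

// Derive an opaque per-browser key from the remote address and User-Agent,
// then count article reads against it. Replace the Map with a Firebase write.
const express = require('express');
const crypto = require('crypto');

const app = express();
const counts = new Map(); // stand-in for a counter stored in Firebase

app.use((req, res, next) => {
  const ip = req.connection.remoteAddress;
  const userAgent = req.header('User-Agent') || '';
  // Hash the pair so you store an opaque key rather than raw IP addresses.
  req.readerKey = crypto.createHash('sha256').update(ip + userAgent).digest('hex');
  next();
});

app.get('/article/:id', (req, res) => {
  const count = (counts.get(req.readerKey) || 0) + 1;
  counts.set(req.readerKey, count);
  if (count > 3) {
    return res.status(403).send('Free article limit reached');
  }
  res.send('article body goes here');
});

app.listen(3000);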
If you are deploying your application to Google Cloud Functions, you can access the remote peer address from the fastly-client-ip request header and still forward it to your storage system.
Use JavaScript and implement a system that uses a cookie or local storage to track how many articles have been read on your website.
On most of these websites, however, you are still able to bypass this limit by clearing the cache or using an incognito window.
To also limit those scenarios you can use a cookie in combination with an IP address, which has its own drawbacks, especially in corporate environments and on mobile connections, where IP addresses are heavily shared or change frequently. Depending on your situation this may or may not matter.
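A bare-bones client-side version of such a counter could look like this (trivially bypassed, as noted, so treat it as a soft limit only):

// Soft article limit kept in localStorage. Clearing storage or opening an
// incognito window resets it, so this is a convenience gate, not security.
const FREE_ARTICLES = 3;
const STORAGE_KEY = 'articlesRead';

function articlesRead() {
  return parseInt(localStorage.getItem(STORAGE_KEY) || '0', 10);
}

function recordArticleRead() {
  localStorage.setItem(STORAGE_KEY, String(articlesRead() + 1));
}

// On each article page:
if (articlesRead() < FREE_ARTICLES) {
  recordArticleRead();
  // render the article
} else {
  // show the sign-up prompt instead
}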
Circumstances
I'm developing a web app with AngularJS.
I have a RESTful API on the server side with GET and POST commands.
I want to use the API within my module (that is, in JavaScript) to display and edit my data.
I want to protect the API with some kind of authentication (basic auth with an API key, for example).
I don't want to protect the API when a user uses the app itself
Actual question
Okay, I guess the last point is a bit unclear.
I want a user to be able to use the app in their browser without any authentication.
But when a third-party app wants to access the API, it has to use authentication.
Since JavaScript is executed on the client side, I obviously can't write a master key into the JS or anything similar.
Is there any kind of pattern or solution to solve this problem?
More specifications
Referring to @EliranMalka and @shaunhusain:
On the server side I use Tornado with its built-in template engine. I actually only use the template engine to render the index page and insert CSS and JS dynamically.
The code for authentication would just be something like:
def is_authenticated(request):
    if 'api_key' in request.arguments:
        # NOTE: interpolating user input into SQL invites injection;
        # use a parameterized query in real code.
        return sql("SELECT id FROM keys WHERE key='%s'" % request.arguments['api_key']).count == 1
    return False
My AngularJS module is doing something similar to:
$http.get('/api/foo?api_key=1234')
  .then(function (result) {
    $scope.data = result.data;
  });
As you can see, I'm writing my API key into the JS at the moment, but I want to avoid this.
Also, what do you mean exactly by third-party?
A request that is not third-party would be: using the app at http://app.example.com with a browser.
A third-party request would come from an Android app, for example. Something that comes from outside or remote.
A JS request from the browser on the actual page would not count as remote (again: since it's JS, it technically is remote, but I hope it's clearer now).
Oh and before I forget...
I'm aware that my plan is a bit weird, but it's just a learning(-web-development)-by-doing project.
Also, the API key is not really meant to prevent abuse; it is rather there to log third-party usage.
P.S. I hope my question is clear.
Hmm, well, I'll try to address the questions, but here are a few things.
The question isn't really appropriate in its current format for stackoverflow.com (it should be a programming question of the form "I tried X and Y happened"); it's perhaps closer to a StackExchange question, but it is still fairly open-ended.
Include more information about the specifics of the languages (and/or frameworks) you're using server-side, and any code you have that is relevant (authentication code?).
Putting the key into the client code and transmitting it from the client means anyone with a debugging proxy or packet sniffer (check out Charles or Wireshark) can grab the key, so just to reiterate, you're right that that's not the way to go.
Check out how other organizations allow you to get access to their APIs (for example Google, LinkedIn, Facebook, Twitter) to get a feel for how it works. In all of these cases you are signed into the service to be able to create an API key, and in some cases you have to specify which domain the requests with that API key will come from. If you take the same precautions, check the API key sent with a request against a database of registered API users, and verify the domain in the request, then I'd say you're in pretty good shape.
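As a sketch of that last check (written in Express-style JavaScript rather than your Tornado setup, with made-up keys and domains), the idea is just: look the key up, then compare the request's origin against the domain registered for it:

// Illustrative API-key-plus-domain check; registeredKeys stands in for your
// database of registered API users.
const express = require('express');
const app = express();

const registeredKeys = {
  // api key -> domain the key was registered for
  'abc123example': 'https://thirdparty.example.org',
};

app.use('/api', (req, res, next) => {
  const key = req.query.api_key || req.get('X-Api-Key');
  const origin = req.get('Origin') || req.get('Referer') || '';
  const allowedDomain = registeredKeys[key];
  if (!allowedDomain || !origin.startsWith(allowedDomain)) {
    return res.status(401).json({ error: 'unknown API key or unregistered domain' });
  }
  next();
});

app.get('/api/foo', (req, res) => res.json({ data: 'hello' }));
app.listen(8000);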
I'm planning to refactor a legacy Rails 2 app by splitting the logic into a RESTful API and the view into a separate JavaScript client. The API itself will be protected by OAuth 2. This is basically the second option explained in this question:
Separate REST JSON API server and client?
There are a lot of questions out there concerning the security of using OAuth with a JS app. The main concern seems to be that storing the access token on the client is a bad idea, since it acts as a password and someone with physical access to the computer can hijack the user's identity. A possible solution I've read about is to expire the access token every hour or so and use a refresh token stored in Yahoo's YQL to request a new token when necessary. This doesn't look like a good solution to me, since in the end you again need a token to access the YQL service.
But in the end, aren't we facing the same problem as with persistent sessions? I mean, AFAIK, the common method to keep a session alive across browser openings and closings (when you tick "remember me") is to generate a token associated with the user and store it both in the DB and in a long-lived cookie. So again, anyone with access to this cookie has the "key" to your session. AFAIK this is the method all the "big guys" use.
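For what it's worth, the "remember me" pattern I mean boils down to something like this (a sketch with made-up names, assuming Express with cookie-parser):

// The browser only ever holds the random token; whoever presents it "is" the user.
const crypto = require('crypto');

const rememberTokens = new Map(); // token -> userId; a DB table in practice

function issueRememberMeCookie(res, userId) {
  const token = crypto.randomBytes(32).toString('hex');
  rememberTokens.set(token, userId);
  res.cookie('remember_me', token, {
    maxAge: 30 * 24 * 3600 * 1000, // 30 days
    httpOnly: true,                // keep it away from page JavaScript at least
    secure: true,
  });
}

function userFromRememberMeCookie(req) {
  const token = req.cookies && req.cookies.remember_me;
  return token ? rememberTokens.get(token) : undefined;
}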
Am I right? And if I am, aren't we worrying too much about something we cannot control at all? Of course, I'm talking about applications where an intrusion is not too harmful for the user, like social networks, blogs, forums, etc.
For my CMS component I'm implementing an integration with the Twitter API to fetch and display a list of tweets (either connected to a user or to a search query). I'm using the Twitter RESTful API v1.1, since version 1.0 is going to be dropped in two months. The two requests of interest to me are user_timeline and search.
Since my technology relies strongly on caching, I need to avoid server-side processing as much as possible, providing static HTML and a piece of JavaScript. I had already done this for the old version of the API and it worked fine. The new approach, however, requires providing authentication data via OAuth. One of the properties (oauth_signature) is a hash of the other properties (among them oauth_timestamp and oauth_nonce, which should (should they?) be unique per Twitter request) and the secret keys, which makes it insecure to generate it on the client side.
Is there any secure way to get a list of tweets on the client side using the new API?
The simple answer to your question is, "no, there is no secure way of doing this without server-side code." What I would do is set up a service to poll Twitter every xxxx seconds and retrieve the desired tweets. You should cache or store the results and then empty them each time you make your next request. If you are using C#, I have been working on a C# Twitter library that replicates Twitter's API and already has support for grabbing a user's timeline. I will be adding support for filter and search within the next two days (each one should take no more than ten minutes to implement, excluding testing, if you decide to do it yourself). You can reference this library in the service that I mentioned before.
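A rough sketch of that poll-and-cache service, shown here in Node rather than C# (the actual Twitter call is stubbed out, since it needs your OAuth credentials and would live server-side):

// One server-side job polls Twitter on a timer; clients only read the cache.
const express = require('express');
const app = express();

const POLL_INTERVAL_MS = 60 * 1000; // the "every xxxx seconds" above
let cachedTweets = [];

async function fetchTweetsFromTwitter() {
  // Placeholder: call user_timeline or search here with a properly
  // OAuth-signed request made from the server, never from the browser.
  return [];
}

async function refreshCache() {
  try {
    cachedTweets = await fetchTweetsFromTwitter(); // replace the old results
  } catch (err) {
    console.error('Twitter poll failed, keeping the last good cache', err);
  }
}

refreshCache();
setInterval(refreshCache, POLL_INTERVAL_MS);

// The page's JavaScript reads from here instead of calling Twitter directly.
app.get('/tweets', (req, res) => res.json(cachedTweets));
app.listen(3000);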
If you do not have the server resources that you need for this, then I strongly caution you against using solutions that circumvent Twitter's intended securities, as it could leave you or your client in a vulnerable position.
You'll have to write a proxy web service on your server side. And as you say, caching will be critical to stay under the limit of 15 requests per 15 minutes for basic stuff like pulling tweets.
Definitely avoid doing any auth stuff on the front end. The new "application-only" auth using OAuth 2 would allow you to embed bearer tokens in JavaScript, meaning you wouldn't need to do any of the signature stuff you're talking about. But don't: anyone could use your bearer token, and even if your own users didn't exhaust the rate limit, other people stealing your token might.
If you don't have the server-side resources to do this yourself, you might want to look at Flamingo. It'll do the auth and the caching for you, so you only need to work in JS like you used to.
As you may know, the Pinterest API (api.pinterest.com) seems to be down now. This site, http://tijn.bo.lt/pinterest-api, says that read-only access still works. What exactly does that mean?
Can I build an application using this API, but not use it for pinning or creating my own board?
Sorry if my question is too ridiculous; I am very new to building an application with an API.
If the API permits read-only access alone, that means you can consume data from the source, but you cannot write to it. You could probably get a list of items from your board, but you wouldn't be able to programmatically push a new item to your board.
It's a one-way road, until they open up another lane.
The information posted on that site is a bit out of date.
The API was until recently allowing read/write access, but about two weeks ago Pinterest stopped issuing new access tokens via their original authentication scheme. The new scheme requires API users to generate an OAuth signature to receive an access token (needed to use the API), and consequently the API is only accessible to those who have received a client_id and client_secret for their application from Pinterest.
Caveat: if you happen to have an old access_key issued using the old API, you apparently can still use that to make API calls, though I'm guessing those tokens will expire soon.