eBay API - securing AppID in a JavaScript application - javascript

I am about to publish a demo JavaScript application based on the eBay Finding API on my personal website; I was wondering if there is a way to prevent my AppID from being read and exploited.
Is it possible to associate the AppID with a specific domain? I haven't been able to find an answer either on the eBay Developer Forums or in the official documentation.

If you send data to the client, the client can read the data. There is no way to prevent this (if JavaScript can decode it, so can the user). In order to avoid that, you need to keep the data (your AppID) on your site, and process the request on your server. So the JavaScript needs to talk to your server, and your server will then pass on the request to eBay, adding the AppID, and then pass the results back to the JavaScript.
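For illustration, here is a minimal sketch of that proxy, assuming a Node.js/Express server on Node 18+ (for the global fetch). The /api/ebay-search route and the EBAY_APP_ID environment variable are illustrative names, and the Finding API parameter names should be checked against the current eBay documentation:

// Server-side proxy: the browser calls this route, and the AppID never leaves the server.
const express = require('express');
const app = express();

const EBAY_APP_ID = process.env.EBAY_APP_ID; // kept out of any client-side code

app.get('/api/ebay-search', async (req, res) => {
  const keywords = req.query.keywords || '';
  // findItemsByKeywords request to the Finding API; parameter names as documented
  // at the time - verify against the current docs before relying on this.
  const url = 'https://svcs.ebay.com/services/search/FindingService/v1'
    + '?OPERATION-NAME=findItemsByKeywords'
    + '&SERVICE-VERSION=1.0.0'
    + '&SECURITY-APPNAME=' + encodeURIComponent(EBAY_APP_ID)
    + '&RESPONSE-DATA-FORMAT=JSON'
    + '&keywords=' + encodeURIComponent(keywords);
  try {
    const ebayResponse = await fetch(url);
    res.json(await ebayResponse.json()); // pass the results back to the browser
  } catch (err) {
    res.status(502).json({ error: 'eBay request failed' });
  }
});

app.listen(3000);

Your page's JavaScript then calls /api/ebay-search?keywords=... instead of calling eBay directly.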

To answer your question...
It doesn't seem possible to restrict an AppID to a domain: the limits don't work on a per-site basis, and you usually have just one AppID for all your uses/sites. See this comprehensive thread from 2010 (quoted below); I doubt much has changed. The upshot is that it basically doesn't matter for a read-only application such as search results on your website.
More generally about securing JSON API calls in-browser
Checking the referrer is the best way to secure an otherwise public API. This is how Google restricts their API keys for maps, for instance: https://developers.google.com/maps/documentation/javascript/tutorial
About the only thing that will prevent fraud is activity monitoring. Given that the API is called from third-party computers, you would have to track trends for abuse, perhaps by comparing the list of API calls to other website activity, or by using JSONP to inspect the browser's properties via AJAX. Google can cross-reference their API calls with their Google Analytics calls, for example, though there could always be false positives.
In the end, if the fear is CSRF, there's this: How to reliably secure public JSONP requests?
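As a rough sketch of what a referrer check might look like in such a proxy (Express middleware here; the allow-list is illustrative, and the Referer header can be spoofed by non-browser clients, so treat this as a speed bump rather than authentication):

const ALLOWED_HOSTS = ['example.com', 'www.example.com']; // your own domains

function checkReferrer(req, res, next) {
  const referer = req.get('referer') || '';
  let host = '';
  try {
    host = new URL(referer).hostname;
  } catch (e) {
    // missing or malformed Referer header - fall through to the rejection below
  }
  if (ALLOWED_HOSTS.includes(host)) {
    return next();
  }
  res.status(403).json({ error: 'Requests must come from the website itself' });
}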
Quoting verbatim from the eBay thread in case the URL changes again:
There is one DevID per developer account.
There could be multiple AppID, but these are only available via paid support ticket.
Each AppID can have multiple CertID. The CertID determines your call limits.
You can generate unlimited tokens for each AppID. Each token is a pairing of AppID, UserID, and the associated eBay user's password. Tokens are currently active for 18 months. They must then be regenerated. Tokens can also be prematurely 'revoked' either via the API or website preferences.
For the API families that require a token, you can use a single token based on your own UserID to retrieve most public information. However, private transaction details are only available when you use a token generated for the target UserID. Some calls actually derive the UserID from the token.
If multiple applications share the same AppID, they will both contribute towards the daily call limits. That's why you might want to request a separate AppID.
https://www.x.com/developers/ebay/ebay-api-call-limits
The limits shown in the chart are 'aggregate' for the given API family. There's an implicit per-AppID limit. For the Trading API, eBay further limits use on a per-call or per-time-interval basis. Some calls like AddItem have higher limits. GetApiAccessRules will return your actual limits and usage.
Per-IP-address means the IP address of the calling machine. If you were to rotate through multiple IP addresses, you'd actually multiply your limit. There are many read-only 'widgets' written in JavaScript or Flash which run in the client browser and thus use the client IP to make the calls. In that case, the call limit is pretty insignificant.
AppID, DevID and CertID belong to the creator of the developer account. That creator is bound by the API license provisions.
As the owner of the keys, you are not to allow any 3rd-party programmatic control of the API. Strictly speaking, that means that both the keys and any token derived from those keys should remain private (i.e. under your exclusive control).
Obviously, eBay does not enforce that strict interpretation since FetchToken is suggested for client-side applications. A sophisticated user could easily grab the token coming or going. What harm can someone do with a token based on their own UserID?
Burn through your daily call limit
Create an API application that violates the license
For more of the debate, see this earlier thread. (Link broken)
Once your application passes the eBay Compatible Application Check, you can request either 1.5M shared or 20K calls per user.
For further information about eBay's APIs, I suggest asking on their forum.

Related

how to secure parse initialize with app and secret?

I'm setting up parse framework in javascript. I notice that I need to call
Parse.initialize("app", "secret")
Since this is in the page source, couldn't anyone take this and make calls against my account?
Is there a more secure way to store this info?
As per Parse Security Guide your JavaScript key is NOT secret:
When an app first connects to Parse, it identifies itself with an Application ID and a Client key (or REST Key, or .NET Key, or JavaScript Key, depending on which platform you're using). These are not secret and by themselves they do not secure an app. These keys are shipped as a part of your app, and anyone can decompile your app or proxy network traffic from their device to find your client key. This exploit is even easier with JavaScript — one can simply "view source" in the browser and immediately find your client key.
So yes, anyone who found your key can make calls.
But you can (and should) restrict what anyone holding the key can do.
Using Class-Level Permissions you restrict what can be done with individual classes.
Using Object-Level Permissions you restrict what can be done with selected objects.
See also Roles and Roles Hierarchy for simultaneously setting permissions for a group of several users.
For instance, you can restrict access to specific users only; access is granted only when one of those users is logged in. Any other "hacker" can try to use your keys, but the request will be rejected by Parse.
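As a small sketch with the Parse JavaScript SDK (the "Note" class name is just an example; class-level permissions themselves are configured in the Parse dashboard rather than in code):

var Note = Parse.Object.extend('Note');
var note = new Note();
note.set('text', 'only I can read and write this');

// Object-level permissions: restrict this object to the currently logged-in user.
var acl = new Parse.ACL(Parse.User.current());
acl.setPublicReadAccess(false); // no anonymous reads
note.setACL(acl);

note.save().then(function () {
  // Requests made with the public JavaScript key but without this user's
  // session cannot read or modify the object.
});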

How do you securely access Windows Azure Mobile Services with Javascript in a web app?

I need a primer web/javascript security.
According to How to use an HTML/JavaScript client for Windows Azure Mobile Services, in javascript on the client side, after including a link to MobileServices.Web-1.0.0.min.js you're supposed to create a client like this:
var MobileServiceClient = WindowsAzure.MobileServiceClient;
var client = new MobileServiceClient('AppUrl', 'AppKey');
which means including my AppKey in the javascript on the page. Should I be worried about the AppKey being public?
Also, it seems easy enough for someone to put an XHR breakpoint in to read the X-ZUMO-APPLICATION and X-ZUMO-AUTH headers while making a REST call when logged in. The usefulness of this is somewhat reduced with a cross-origin resource sharing whitelist, but what's to stop someone with this information from adding javascript to the page and executing arbitrary operations on my backend database? Restricting table permissions to authenticated users wouldn't help in this scenario.
Do I need to be concerned? What do banking apps do about this sort of thing?
In the same link you shared, the application key is described as an unsafe mechanism for authenticating users: "A unique value that is generated by Mobile Services, distributed with your app, and presented in client-generated requests. While useful for limiting access to your mobile service from random clients, this key is not secure and should not be used to authenticate users of your app."
Moreover, when you enable authentication on all the endpoints, either through ACS or through open authentication, once your main ASP.NET/PHP page is authorized, the browser handles the federated identity through cookies for subsequent calls until your session ends.
In most applications, HTTPS protects against man-in-the-middle attacks. Strong encryption of cookies, combined with short expiry times, raises the bar further, and IP-address-based checks can also help improve security.
ramiramilu's answer covers most of the question. There's one more thing which I'll add:
Also, it seems easy enough for someone to put an XHR breakpoint in to read the X-ZUMO-APPLICATION and X-ZUMO-AUTH headers while making a REST call when logged in
Yes, someone can add a breakpoint and find out the value of the X-ZUMO-AUTH header which they're sending. But the value of that header is specific to the logged-in user (in this case it would be the "attacker" themselves) - they wouldn't be able to get information about other people out of that header. And there are even easier ways to get the value of that header (just browse to https://<mobileservicename>.azure-mobile.net/login/<authProvider> and after entering your credentials you'll see the header encoded in the URI).
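To make the "restricting table permissions isn't enough" point concrete, here is a sketch of server-side table scripts in the Node-based scripting model Mobile Services used at the time; the owner column name is an illustrative assumption:

// read script for the table: only return rows created by the authenticated caller,
// so a captured application key or auth token cannot be used to read other users' data.
function read(query, user, request) {
    query.where({ owner: user.userId });
    request.execute();
}

// insert script: stamp each new row with the caller's user id on the server,
// ignoring whatever the client claims.
function insert(item, user, request) {
    item.owner = user.userId;
    request.execute();
}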

Twitter API 1.1 to fetch tweets list

For my CMS component I'm implementing integration with the Twitter API to fetch and display a list of tweets (either tied to a user or to a search query). I'm using the Twitter RESTful API v1.1, since the 1.0 version is going to be dropped in two months. The two interesting requests for me are the user_timeline and search ones.
Since my technology relies strongly on caching, I need to avoid server-side processing as much as possible, serving static HTML and a piece of JavaScript. I already did this for the old version of the API and it worked fine. The new approach, however, requires providing authentication data via OAuth. One of the properties (oauth_signature) is a hash of the other properties (among them oauth_timestamp and oauth_nonce, which should (should they?) be unique per Twitter request) and of secret keys, which makes it insecure to generate on the client side.
Is there any secure way to get list of tweets on client-side using new API?
The simple answer to your question is, "no, there is no secure way of doing this without server-side code." What I would do is set up a service to poll Twitter every xxxx seconds and retrieve the desired tweets. You should cache or store the results and then empty them each time you make your next request. If you are using C#, I have been working on a C# Twitter library that replicates Twitter's API and already has support for grabbing a user's timeline. I will be adding support for filter and search within the next two days (each one should take no more than ten minutes to implement, excluding testing, if you decide to do it yourself). You can reference this library in the service that I mentioned before.
If you do not have the server resources that you need for this, then I strongly caution you against using solutions that circumvent Twitter's intended securities, as it could leave you or your client in a vulnerable position.
You'll have to write a proxy web service on your server side. And as you say, caching will be critical to stay within the 15-requests-per-15-minutes limit for basic stuff like pulling tweets.
Definitely avoid doing any auth stuff on the front end. The new "application only" auth using OAuth 2 would allow you to embed bearer tokens in JavaScript, meaning you don't need to do any of the signature stuff you're talking about. But don't: anyone could use your bearer token, and even if your own users didn't exhaust the rate limit, other people stealing your token might.
If you don't have the server-side resources to do this yourself, you might want to look at Flamingo. It'll do the auth and the caching for you, so you only need to work in JS like you used to.
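If you do run your own server, here is a rough sketch of the proxy-plus-caching approach described above, assuming Node 18+ (global fetch) and Twitter's application-only auth for API 1.1; the endpoint URLs and the 60-second cache window are assumptions to verify against the current docs:

const CONSUMER_KEY = process.env.TWITTER_KEY;       // never sent to the browser
const CONSUMER_SECRET = process.env.TWITTER_SECRET; // never sent to the browser

let bearerToken = null;
let cache = { body: null, fetchedAt: 0 };
const CACHE_MS = 60 * 1000; // stay well under 15 requests per 15 minutes

async function getBearerToken() {
  if (bearerToken) return bearerToken;
  const credentials = Buffer.from(CONSUMER_KEY + ':' + CONSUMER_SECRET).toString('base64');
  const res = await fetch('https://api.twitter.com/oauth2/token', {
    method: 'POST',
    headers: {
      'Authorization': 'Basic ' + credentials,
      'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
    },
    body: 'grant_type=client_credentials',
  });
  bearerToken = (await res.json()).access_token;
  return bearerToken;
}

// Called from your own route handler; the page's JavaScript only ever talks to your server.
async function getUserTimeline(screenName) {
  if (cache.body && Date.now() - cache.fetchedAt < CACHE_MS) return cache.body;
  const token = await getBearerToken();
  const res = await fetch(
    'https://api.twitter.com/1.1/statuses/user_timeline.json?count=10&screen_name='
      + encodeURIComponent(screenName),
    { headers: { 'Authorization': 'Bearer ' + token } }
  );
  cache = { body: await res.json(), fetchedAt: Date.now() };
  return cache.body;
}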

Securing API Keys in Javascript

I'm building a payment plugin for a website where users can buy some website-internal currency with real money. The backend I use, which handles the payment process, is this.
It provides (beside others) a JavaScript library to communicate with their API, so you don't have to let your system touch sensitive payment data like credit card numbers etc.
The problem is:
For now the API key, secret hash and other sensitive data are hardcoded right into my script, which initiates the communication with the server. So in theory every half-decent user could just copy them out of the browser and do nasty sh*t with them, especially if they have access to the API documentation.
So, this isn't secure and it definitely cannot go live this way.
I'm working with CakePHP, and I thought of collecting those sensitive keys with some AJAX calls to my controllers/models after pressing the submit button.
The problem there is that this connection isn't secured and can easily be man-in-the-middled.
Are there other, better ways to secure my API keys in JavaScript?
Use token-based auth, HTTPS and CSRF tokens, and never, ever put a secret on the client.
Use OAuth so users don't even need to send you a password - use someone else's authentication system.
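A minimal sketch of what the client side of that looks like: the browser sends only a CSRF token and the order details to your own server over HTTPS, and the server holds the API key and secret and talks to the payment provider. The /purchase endpoint and the meta tag holding the token are illustrative, not CakePHP specifics:

var csrfToken = document.querySelector('meta[name="csrf-token"]').content;

fetch('/purchase', {
  method: 'POST',
  credentials: 'same-origin',          // include the session cookie
  headers: {
    'Content-Type': 'application/json',
    'X-CSRF-Token': csrfToken,         // the server rejects requests without a valid token
  },
  body: JSON.stringify({ amount: 500, currency: 'coins' }),
}).then(function (res) {
  return res.json();
});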

Is it possible to use the Google Analytics API to provide stats for customer's page views?

Let's say I run a site where customers are willing to pay for a page that shows some sort of cool info about them. The whole site is tracked using Google Analytics.
To provide stat tracking for the customers, would it be possible to mine the data from Google Analytics, using the AJAX API?
Are there any show-stoppers I should look out for before attempting this?
I'm trying to avoid writing my own stat-tracking solution.
Update, a bit more clarification: I'm looking to be able to build a stats page that shows a few stats for a specific url (page views, traffic sources, etc...), not necessarily in real-time. I would cache the page to prevent hitting API rate limits.
There are two major impediments: one technological, and one legal-ish. Together, they make the Google Analytics Data Export API an unfit solution.
Technological: Google Analytics data is not available in real time. Delays in data processing run from 3-4 hours to 24-48 hours. Page views are processed fastest; things like custom variables often take a day or so. In theory, you could tag each user with a custom variable, and then query against that custom variable for information.
Legal-ish: The Google Analytics Terms of Service prohibit you from collecting personally identifiable information. So you can't use a custom variable that stores their username on your site without violating the Terms of Service. Here's the relevant section.
PRIVACY. You will not (and will not allow any third party to) use the Service to track or collect personally identifiable information of Internet users, nor will You (or will You allow any third party to) associate any data gathered from Your website(s) (or such third parties' website(s)) with any personally identifying information from any source as part of Your use (or such third parties' use) of the Service. You will have and abide by an appropriate privacy policy and will comply with all applicable laws relating to the collection of information from visitors to Your websites. You must post a privacy policy and that policy must provide notice of your use of a cookie that collects anonymous traffic data.
As far as alternatives go, it depends on what information you want. You can access the visitor's IP address on the server side and use it with a third-party tool or a command-line call to find out their rough location (much the same way that Google does). You can similarly access the referrer on the server side. Much of the information that gets sent to Google is actually stored in the Analytics cookies (the __utm-prefixed cookies). There's a wide body of literature on reading these cookies (see: http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=how+to+parse+google+analytics+cookies)
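For example, here is a rough client-side sketch of pulling the traffic source out of the classic __utmz cookie; the layout assumed here (four dot-separated fields followed by utmcsr/utmccn/utmcmd pairs) is from the old ga.js cookies and may not match newer Analytics versions:

function getUtmzCampaignData() {
  var match = document.cookie.match(/(?:^|;\s*)__utmz=([^;]*)/);
  if (!match) return null;
  var value = decodeURIComponent(match[1]);
  // Drop the leading "domainhash.timestamp.sessions.campaigns." prefix.
  var campaignPart = value.split('.').slice(4).join('.');
  var data = {};
  campaignPart.split('|').forEach(function (pair) {
    var kv = pair.split('=');
    data[kv[0]] = kv[1]; // e.g. utmcsr: "google", utmcmd: "organic"
  });
  return data;
}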
