Hide 3rd party API-key with firebase - javascript

I'm building a website on Firebase. It's a simple look-up service with a single input element that fires a request to a 3rd-party API:
www.3rdparty.com/api/[myapikey]/method
The problem is that I'm limited to x requests per second and I can't expose my API key to the users.
My eventual goal is to store the responses in Firebase so that I can limit the number of requests that reach the 3rd party (a caching layer).

Putting such an API key into the client-side code of your application introduces the risk of malicious users taking your key and using it for their own purposes. There is nothing you can do about that, except simply not including the API key in the client-side code. This applies equally to Android and iOS code, by the way.
Since you can't put the API key in client-side code, you'll have to run it on a server. This is quite a common scenario for using server-side code within a Firebase architecture: the code needs access to some information that regular clients cannot be trusted with. It is covered by pattern 2 in our blog post on common Firebase application architectures.
From that blog post:
An example of such an architecture in action would be clients placing tasks for the server to process in a queue. You can have one or more servers picking off items from the queue whenever they have resources available, and then place the result back into your Firebase database so the clients can read them.
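A minimal sketch of how that could look with Cloud Functions for Firebase, caching responses in the Realtime Database so repeated look-ups never reach the third party. This is illustrative, not from the original answer: the function name, cache path, and TTL are assumptions, and it assumes a Node 18+ runtime for the global fetch.

    const functions = require('firebase-functions');
    const admin = require('firebase-admin');
    admin.initializeApp();

    const API_KEY = process.env.THIRD_PARTY_API_KEY; // stays on the server
    const TTL_MS = 60 * 60 * 1000;                   // serve cached answers for 1 hour

    exports.lookup = functions.https.onCall(async (data) => {
      const key = encodeURIComponent(data.query);    // assumed safe as a DB path segment
      const ref = admin.database().ref(`cache/${key}`);
      const cached = (await ref.once('value')).val();
      if (cached && Date.now() - cached.fetchedAt < TTL_MS) {
        return cached.result;                        // cache hit: no 3rd-party request
      }
      const res = await fetch(`https://www.3rdparty.com/api/${API_KEY}/method?q=${key}`);
      const result = await res.json();
      await ref.set({ result, fetchedAt: Date.now() });
      return result;
    });

The browser then calls this through the Firebase SDK's httpsCallable('lookup'), and only the function ever sees the key.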

Related

Where to store API key in ReactJS?

A quick question!
I have a web app that only fetches data from an API, so I don't have a backend part. My question is: where do you keep your API key? According to the React docs you should not store API keys in the app, so how do you manage this when you have a web app that consumes an API and you have to use the API key in your GET requests?
Let's go through a bit of explanation so you can connect the dots and design this more robustly. In the end, there are two (three) places to store it:
frontend (your React application)
backend (your server)
third-party service
TL;DR: use non-frontend solution + rate limiting to a registered user and have the registration step secured properly (+ captcha).
Frontend
Storing anything on the frontend side is mostly a bad idea, unless you're completely sure you can allow such data to be exposed: constants, names, icons, maybe some URLs so you don't have them hardcoded in the JS files.
Your "compiled" ReactJS (or any other framework) when built is just a slightly mangled (minified/transpiled/etc etc) JavaScript, but for that to work the client has to retrieve it and have it executed in the browser. Therefore before the ReactJS application even starts, there are 1+ requests downloading the JavaScript code and other parts of the application depending on the framework.
You can see those in the network monitoring tab of any modern browser, or simply use Wireshark (unless the traffic is encrypted, in which case it's a bit more annoying) or a local proxy if you're using a less sane browser.
After retrieval you can simply try Ctrl+F, or run the code through any online de-minifier/de-obfuscator if you don't know how to do it yourself, and retrieve the key.
Implications when retrieved
I can impersonate you for the service that issues the API key
I can lock your key/account by calling too often (just for fun or to retrieve some info)
I can use your site for scraping while not needing to pay for the API key (if paid) or to register with such a service vendor
If it's a per-request API key and there's some limitation that would make it cost you money, I can just run some silly while (true) { callYourApi() } via an anonymizing service just to make it cost you money
Depending on the API key and how seriously you intend to approach this problem, you might utilize a .env file for development purposes only. But you should never ever store an API key in the frontend unless you explicitly have to (e.g. maps), because it's mostly a very stupid idea and allows anyone to misuse it.
Backend
Your server, if properly configured and secured, can store the key anywhere that isn't accessible by simple path traversal (if it's in a file) or by scraping (if you were to send the key to the frontend for execution there).
Therefore the sanest and most secure way is to retrieve the data (from any service) via either a custom API or a scheduled script that collects the data, so that when your frontend is loaded it can retrieve everything pre-rendered or already fetched; no key is needed on the client in that case.
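For illustration, a hypothetical sketch of that idea: a tiny Express proxy that holds the key server-side and hands the frontend only the fetched data (the upstream URL and env variable name are placeholders; assumes Node 18+ for the global fetch).

    const express = require('express');
    const app = express();

    app.get('/api/data', async (req, res) => {
      // The key lives in the server's environment, never in the shipped JS.
      const upstream = `https://api.example.com/${process.env.API_KEY}/method`;
      const data = await fetch(upstream).then((r) => r.json());
      res.json(data);
    });

    app.listen(3000);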
However! There's a trick to that. If you design your custom API as /api/<key>=123 or /api/<param> and you pass that parameter straight through to the original API, the attacker couldn't care less about the API key, because you've already built them an API for free and made it public and insecure.
So it's GET /yourapi/<my data>, and they get the benefit of your API key without ever even seeing it.
How to do it safely? Two simple approaches:
pre-rendering data to HTML
You then fetch it with the frontend and just display it. This can still be scraped; it's just a bit more annoying if the markup is complex, but that's it. Server-side rendering sounds nice, but it doesn't really work for this case: it should mostly be used to make the frontend fast or to template the views, never as a silver-bullet security solution (because it doesn't work that way).
rate limiting + CORS + account management
With rate limiting you make sure that a user (preferably one who has to log in before the API can be called at all) can call that API only, say, 10 times within 1 hour, and with CORS you make sure it's callable only from your frontend.
It's not a silver bullet either: anybody with a little bit of brain can simply scrape your API locally and thus get around CORS, but the rate limit will still hit hard if you forbid registering more than one user from a single IP, or if you require a phone number for verification. Add some annoying captcha too, so it's problematic to automate for some people.
It can still be attacked and misused, but it's painful unless you allow the same phone number (or any other ID that takes effort to get) to be used multiple times, so it'll make the most incompetent people go away... and the remaining ones, well, they'd play with your website anyway, so get a proper security assessment or harden your server if you maintain it alone. (A sketch of the rate-limit + CORS combination follows below.)
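A rough sketch using the express-rate-limit and cors npm packages; the window, limit, origin, and requireLogin middleware are illustrative assumptions, not recommendations from this answer.

    const express = require('express');
    const rateLimit = require('express-rate-limit');
    const cors = require('cors');

    const app = express();

    // Hypothetical auth middleware, assuming express-session or similar.
    function requireLogin(req, res, next) {
      if (!req.session || !req.session.userId) return res.status(401).end();
      next();
    }

    app.use(cors({ origin: 'https://your-frontend.example' })); // enforced by browsers only
    app.use('/api', rateLimit({ windowMs: 60 * 60 * 1000, max: 10 })); // e.g. 10 calls/hour

    app.get('/api/data', requireLogin, async (req, res) => {
      // ...call the third-party service with the server-side key here...
      res.json({ ok: true });
    });

    app.listen(3000);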
Third-party
It's like option 2, but you don't maintain the "low-level" server part, because the third party manages it for you; you just specify the conditions under which it'll be called. This applies to Firebase or Supabase, which behave like a separate backend but can offer multiple such modules.
Thus you'd use Firebase Functions (or an alternative), where your key could even be hardcoded and the client (browser) wouldn't have any access to it; add a rate limit, CORS, perhaps some user registration limit, and you're more or less done.
Note: Any domain, IP, region, or phone number restriction can be bypassed, so do not rely on them. They are just a means of requiring effort when misusing your website for something other than what you intended.
domain: curl http(s)://yourweb/path -H "Host: spoofed-domain"
region or IP: proxy, VPN, Tor, I2P, just somebody else's computer/server + ssh, some random WiFi
phone number: I can go to a local shop and buy 10 fresh ones if I want to
It's more of a recommendation to keep your API keys server-side and let your web app communicate with your server. Otherwise, malicious users could easily steal your API key and use it for whatever they want.
If you think it isn't much of a security risk if your key gets (scratch that, is) compromised, that's fine; you can just keep it in your web app. It really depends on your use case and what that API key is for.
The only way to do this without exposing your API keys in your client app is to create a backend and serve the client app from the backend app, as stated in Kelvin Schoofs' and Peter Badida's answers above (or use a third-party service such as AWS Credential Vault). I suggest you use the Node Express library for the backend, as it will handle a lot of the boilerplate code for you. There are plenty of tutorials online for this.
Using a dotenv file, as suggested by a few other users, will only hide your API keys from version control tools like Git (because you can ignore the dotenv file in .gitignore). It is very important that you understand how dotenv works with a React app: any user who opens the dev console in their browser can view your exposed API keys in the served static files.
Create a dotenv file and store all secrets and API keys in it. Make sure to prefix every variable with REACT_APP_.
DOCS: https://create-react-app.dev/docs/adding-custom-environment-variables/
dotenv package: https://www.npmjs.com/package/dotenv
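For completeness, CRA-style usage looks roughly like this. Remember: the value is inlined into the built bundle, so this hides the key from Git, not from users.

    // .env in the project root (read by Create React App at build time):
    //   REACT_APP_API_KEY=abc123

    // Anywhere in your React code:
    const apiKey = process.env.REACT_APP_API_KEY;        // inlined at build time
    fetch(`https://api.example.com/data?key=${apiKey}`); // hypothetical endpoint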

How to communicate securely between shell app and micro application (frontend) via pubsub

I have a shell application, which is the container application that performs all the API communication. I also have multiple micro applications that just broadcast the API request signal to the shell application.
Now, keeping security in mind, how can the shell application ensure that an API request signal is coming from a trusted micro app that I own?
To be very precise: is there a way to let the shell application know that the signal is coming from a micro app that it owns and not from some untrusted source (like hacking or XSS)?
As per the micro-frontend architecture, each micro frontend should make calls to its own API (microservice). However, your shell app can provide a common/global library that helps the micro frontends make the AJAX calls. But the onus of making the call must remain with the individual micro frontend.
From your question it is unclear if your apps are running in iframes, or are being loaded directly into your page.
In the case of iframes, you're using postMessage, and you can check the origin of a received message via event.origin. Compare this against a list of allowed domains.
If your micro apps are directly on your page then you just control what is allowed to load into them.
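A minimal sketch of the iframe case; the origins are placeholders.

    // Shell side: only accept messages from known micro-app origins.
    const ALLOWED_ORIGINS = ['https://micro-a.example', 'https://micro-b.example'];

    window.addEventListener('message', (event) => {
      if (!ALLOWED_ORIGINS.includes(event.origin)) return; // drop untrusted senders
      // ...handle event.data, e.g. forward the API request...
    });

    // Micro-app side (inside the iframe), pinning the shell's origin as the target:
    window.parent.postMessage({ type: 'api-request', payload: {} }, 'https://shell.example');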
So, in most microfrontends, each microapp does its own API calls to the corresponding microservice on the backend, and the shell app is ignorant of it. The most the shell app would do here is pass some app config to all microapps, with things like the hostnames of the various backends and an auth token if all the backends use the same auth.
But to ensure the shell app doesn't have, say, an advertisement with malicious code trying to pose as another microapp, well...
How are the microapps talking to the shell? Is there a common custom event? The name of the CustomEvent would have to be known to the intruder, but that's only security by obscurity, which isn't real security.
Other methods like postMessage work between window objects, which I don't think helps your case.
You might be able to reuse the authToken that the shell and microapps both know, since it was communicated at startup. But if you have microapps that come and go, that won't work either.
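One way to sketch that token idea, assuming a fixed set of microapps registered at startup and a CustomEvent channel (the event name and helper are hypothetical):

    // Shell: issue each microapp a random token at mount time, verify it per event.
    const tokens = new Map();

    function registerMicroApp(name) {
      const token = crypto.randomUUID(); // browser Web Crypto API
      tokens.set(name, token);
      return token;                      // handed to the microapp when it's mounted
    }

    window.addEventListener('api-request', (event) => {
      const { app, token, payload } = event.detail;
      if (tokens.get(app) !== token) return; // unknown or spoofed sender: drop it
      // ...perform the API call on the microapp's behalf...
    });

    // Microapp side, using the token it received at mount time:
    // window.dispatchEvent(new CustomEvent('api-request',
    //   { detail: { app: 'cart', token: myToken, payload: { /* ... */ } } }));

As noted above, any script running in the same window can read that token from memory, so this only raises the bar; it is not real isolation.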

Upload from Client Browser to Google Cloud Storage Using JavaScript

I am using Google Cloud Storage. I have looked at different methods for uploading to Cloud Storage. The most common method I've found is to send the file to a server, which then sends it to Google Cloud Storage.
I want to move the file directly from the user's web browser to Google Cloud Storage. I can't find any tutorials related to this. I have read through the Google API Client SDK for JavaScript.
Going through the Google API reference, it states that files can be transferred using an HTTP request. But I am confused about how to do it using the API client library for JavaScript.
People here would normally ask me to share some code, but I haven't written any; I have failed to find a method to do the job.
EDIT 1: Untested Sample Code
So I got really interested in this, and had a few minutes to throw some code together. I decided to build a tiny Express server to get the access token, but still do the upload from the client. I used fetch to do the upload instead of the client library.
I don't have a Google Cloud account and thus have not tested this, so I can't confirm that it works, but I can't see why it shouldn't. The code is on my GitHub here.
Please read through it and make the necessary changes before attempting to run it. Most notably, you need to specify the location of the private key file, as well as ensure that it's there, and you need to set the bucket name in index.html.
End of edit 1
Disclaimer: I've only ever used the Node.js Google client library for sending emails, but I think I have a basic grasp of Google's APIs.
In order to use any Google service, we need access tokens to verify our identity; however, since we are looking to allow any user to upload to our own Cloud Storage bucket, we do not need to go through the standard OAuth process.
Google provides what they call a service account, which is an account that we use to identify instances of our own apps accessing our own resources. Whereas in a standard OAuth process we'd need to identify our app to the service, have the user consent to using our app (and thus grant us permission), get an access token for that specific user, and then make requests to the service; with a service account, we can skip the user consent process, since we are, in a sense, our own user. Using a service account enables us to simply use our credentials generated from the Google API console to generate a JWT (JSON web token), which we then use to get an access token, which we use to make requests to the cloud storage service. See here for Google's guide on this process.
In the past, I've used packages like this one to generate JWTs, but I couldn't find any client libraries for encoding JWTs, mostly because they are generated almost exclusively on servers. However, I found this tutorial, which, at a cursory glance, seems sufficient for writing our own encoding algorithm.
I'd like to point out here that opening an app to allow the public free access to your Google resources may prove detrimental to you or your organization in the future, as I'm sure you've considered. This is a major security risk, which is why all the tutorials you've seen so far have implemented two consecutive uploads.
If it were me, I would at least do the first part of the authentication process on my server: when the user is ready to upload, I would send a request to my server to generate the access token for Google services using my service account's credentials, and then I would send each user a new access token that my server generated. This way, I have an added layer of security between the outside world and my Google account, as the burden of the authentication lies with my server, and only the uploading gets done by the client.
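A rough Node sketch of that first step, using the google-auth-library package; the key file path, scope, and route are assumptions, and real code should authenticate the user before minting anything.

    const express = require('express');
    const { GoogleAuth } = require('google-auth-library');

    const auth = new GoogleAuth({
      keyFile: 'service-account.json', // path to your service account credentials
      scopes: ['https://www.googleapis.com/auth/devstorage.read_write'],
    });

    const app = express();

    app.get('/upload-token', async (req, res) => {
      // Authenticate/authorize YOUR user here before handing out anything.
      const token = await auth.getAccessToken(); // short-lived (~1 hour)
      res.json({ token });
    });

    app.listen(3000);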
Anyway, once we have the access token, we can utilize the CORS feature that Google provides to upload files to our bucket. This feature allows us to use standard XHR 2 requests to access Google's services, and is essentially designed to be used in place of the JavaScript client library. I would prefer the CORS feature over the client library only because I think it's a little more straightforward and slightly more flexible in its implementation. (I haven't tested this, but I think fetch would work here just as well as XHR 2.)
From here, we'd need to get the file from the user, along with any information we want about it (read: the file name), and then make a POST request to https://www.googleapis.com/upload/storage/v1/b/<BUCKET_NAME_HERE>/o (replacing <BUCKET_NAME_HERE> with the name of your bucket, of course), with the access token added to the URL as per the "Making authenticated requests" section of the CORS feature page, and whatever other parameters you wish to include in the body/query string, as per the Cloud Storage API documentation on inserting an object. An API listing for the Cloud Storage service can be found here for reference.
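Sketched with fetch, the client-side piece might look roughly like this; BUCKET and the /upload-token route are placeholders, this uses the simple media upload variant of the endpoint above, and it puts the token in an Authorization header, which the API should accept just as well as the URL parameter.

    async function uploadFile(file) {
      const { token } = await fetch('/upload-token').then((r) => r.json());
      const url = 'https://www.googleapis.com/upload/storage/v1/b/BUCKET/o'
                + `?uploadType=media&name=${encodeURIComponent(file.name)}`;
      const res = await fetch(url, {
        method: 'POST',
        headers: { Authorization: `Bearer ${token}`, 'Content-Type': file.type },
        body: file,
      });
      return res.json(); // the created object's metadata
    }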
As I'd never done this before and don't have the ability to test it, I didn't originally have any sample code to include with my answer (see edit 1 above for an untested attempt), but I hope that my post is clear enough that putting the code together should be relatively straightforward from here.
Just to set the record straight, I've always found OAuth to be pretty confusing, and have generally shied away from playing with it due to my fear of its unknowns. However, I think I've finally mastered it, especially after this post, so I can't wait to get a free hour to play around with it.
Please let me know if anything I said is not clear or coherent.

Azure Event Hubs: How to grant SAS tokens to Javascript publishers (running in browser)?

I am building a website analytics solution based on Azure Event Hubs. I have Javascript code embedded in each web page that publishes events directly to an Event Hub via the Azure Event Hubs REST API.
The REST API requires that each call be authenticated via a SAS token. My question is: do I have to code up a server-side endpoint that provides my publishers with temporary tokens before they can start publishing?
Are there alternative approaches?
Does the REST API provide this "authenticate" end point out of the box? (couldn't find it here)
Or, how terrible, security-wise, would it be to have a token hard-coded into the client-side code?
Or, technically feasible but security-wise even worse than option 2: hard-code the Event Hub's Shared Access Key in the client-side code and use something like the (unofficial) Azure ServiceBus JavaScript SDK to generate the SAS token on the fly?
The Event Hubs REST API does not provide an authentication endpoint. You will have to code up the generation of SAS tokens per client (browser or device) on your server side (maybe as part of your AuthN/Z routines?). Refer to the RedDog.ServiceBus NuGet package to generate SAS tokens for your Event Hub per client. Also see this article on IoT, which explains authenticating against Event Hubs using the aforementioned package.
In my opinion, I would much rather do the above and rule out #2 and #3. They both leave the solution vulnerable and violate best practices.
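RedDog.ServiceBus is a .NET package; the same SAS construction in Node, following Microsoft's documented token format, looks roughly like this:

    const crypto = require('crypto');

    function createSasToken(resourceUri, keyName, key, ttlSeconds = 3600) {
      const expiry = Math.floor(Date.now() / 1000) + ttlSeconds;
      const stringToSign = encodeURIComponent(resourceUri) + '\n' + expiry;
      const signature = crypto
        .createHmac('sha256', key)
        .update(stringToSign)
        .digest('base64');
      return `SharedAccessSignature sr=${encodeURIComponent(resourceUri)}`
           + `&sig=${encodeURIComponent(signature)}&se=${expiry}&skn=${keyName}`;
    }

    // e.g. createSasToken('https://mynamespace.servicebus.windows.net/myhub',
    //                     'SendPolicy', process.env.SHARED_ACCESS_KEY);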
Considering the example set by Google Analytics and other browser analytics providers, the second alternative in my question is quite acceptable.
That is, a SAS token can be generated on a "per site" (or "per analytics customer") basis and be shared by all browsers that this site is tracked on. The generation of the keys can be done via a tool like Sandrino Di Mattia's Event Hubs Signature Generator based on his RedDog Azure library.
This way tokens can be generated once when a publisher is onboarded and there is no need for an online Web API endpoint to be constantly available.
As an alternative approach, you could consider Application Insights for event ingestion. Depending on the type of event collection you're doing, you could use it and export data via the built-in archiving mechanisms, or query its endpoints for specific events from time to time. App Insights was designed for in-browser JS scenarios, can handle a large number of requests per second, and gives you reports, analytics, querying endpoints, and some other interesting features. It provides an SDK and a JS lib you can use, and implements batching for you using the browser's local storage.
As a side note, consider browsers (and any JS code running in them) an insecure client. That means that even if you write a mechanism that requests a SAS key from a server-side app written by you, any developer will be able to intercept it in memory. So the most secure thing you could do is a) have server-side code that generates a short-lived SAS key and b) make your clients authenticate before calling that server-side code. Or ignore the problem and filter out the invalid events you receive.
Both GA and App Insights work by exposing a common key. As far as I know, Google Analytics uses heuristics to filter invalid requests. I suppose App Insights does the same.

How to limit entries to aws javascript dynamodb per user?

Using the JavaScript web SDK that Amazon just released, what is to prevent a user (even after federated auth) from abusing access to the database?
For example, how could you limit the length of a string that they can submit for a field?
Also, how could you limit the number of entries they submit for a table with multiple rows? Or on s3, limit the amount of files, or size of files, they can submit?
I can imagine how to control this in the node.js implementation, but it seems like someone could write their own script on the client side to circumvent rules.
EDIT: Just to clarify which SDK we are discussing: AWS SDK for JavaScript in the Browser
I would suggest that the only good answer to the question is very straightforward, even though you may not like it: "don't let them directly access the resource," followed closely by "trust no one." The credentials for your services only belong in the hands of fully-trusted entities for whatever those credentials theoretically authorize, so I'm not sure who you're trying to protect against.
You limit the size of a field they can submit by validating their request at your application server, and then sending the request on to the database or other underlying system -- or not -- depending on whether the request is valid. Properly implemented at this point in your stack, such restrictions are impossible to violate no matter what happens on the client side; conversely, any implementation that presumes the client side will behave correctly, predictably, reasonably, and as intended is a very naive implementation, indeed.
Controlling such things is the job of your application server: it receives client requests via your API -- the one you design for your application to interact with clients -- validates them, and calls the underlying systems to fulfill those requests it deems legitimate.
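For illustration, a sketch of that validation layer in Express; the field name, limits, requireAuth middleware, and storage helpers are all hypothetical.

    const express = require('express');
    const app = express();
    app.use(express.json());

    // Hypothetical auth middleware: verify the session/token and set req.user.
    function requireAuth(req, res, next) { /* ... */ next(); }

    app.post('/items', requireAuth, async (req, res) => {
      const { note } = req.body;
      if (typeof note !== 'string' || note.length > 280) {
        return res.status(400).json({ error: 'note must be a string of at most 280 chars' });
      }
      const count = await countItemsForUser(req.user.id); // hypothetical DynamoDB count
      if (count >= 100) {
        return res.status(403).json({ error: 'item limit reached' });
      }
      await putItem(req.user.id, note); // hypothetical DynamoDB write
      res.status(201).end();
    });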
You can use IAM and grant each user permission to access only his own records.
For example, you can use DynamoDB fine-grained access control (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/FGAC_DDB.html), which, when combined with web identity federation, can restrict access to records whose keys start with the user's ID. Read more here: http://aws.typepad.com/aws/2013/10/fine-grained-access-control-for-amazon-dynamodb.html
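A fine-grained policy along the lines of AWS's documented examples might look roughly like this (the table ARN is a placeholder, and the identity variable shown is the Cognito one; other providers use their own). Attached to the federated role, it lets a user touch only items whose partition key equals their own identity ID:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
          "ForAllValues:StringEquals": {
            "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
          }
        }
      }]
    }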
