How to secure API keys in front-end / client-side JavaScript?

Scenarios where I need to keep the keys secret:
Azure / Google Maps API KEYs
STUN & TURN Server credentials for WEBRTC
At first I was adding the keys directly to the client code; then I found out that this is a serious vulnerability, since anyone could take those credentials and use them for their own needs.
Later I tried using environment variables in a .env file, but I could not find a way to use them properly (Express + Node.js):
res.render("pages/crimestats", { apikey: process.env.AZURE_KEY })
Here I was passing the key to the rendered page as a variable, but the key was still visible when I viewed the page source in the browser.
So what is the proper way to handle this?
What I have in mind:
using an API call to get the key in the front end
// but then anyone can call that API request from the browser console, right?
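To make that idea concrete, here is a minimal sketch (the route name and setup are made up) of an Express endpoint that returns the key to the front end on request. As noted, anyone who can open the browser console can call it too, so it only keeps the key out of the page source; it does not actually protect it.

// Hypothetical Express route: hands the key from the environment to the client.
// Anyone who can run fetch("/api/azure-key") in the console still gets the key,
// so this is obfuscation, not protection.
app.get("/api/azure-key", function (req, res) {
    res.json({ apikey: process.env.AZURE_KEY });
});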

Related

Is it possible for a PaaS HTTP trigger called by JavaScript's fetch to be restricted on a per-domain basis?

I've developed a JS script that helps users enter their info in a form by fetching public data.
Now I'm thinking of deploying it as a kind of API service.
Is it possible, and safe enough, for the HTTP triggers of PaaS offerings like GCF and AWS Lambda to be invoked only from specific domains I allow? For example, by having the function read the Origin header of the JS fetch and check its domain.
I've considered generating a passcode per customer and placing it in a key.js in the user's directory or in an environment value, hosting my JS file at a URL, and having the customer's website load my JS with the key.js value as a query parameter so I can check its validity.
But forms can be anywhere in a customer's site tree, and placing the passcode in an env value for each customer can be bothersome at scale.
You can use reCAPTCHA v3: add the allowed domains that can access your function endpoint, and verify the token is valid in the function implementation.
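A minimal sketch of that verification step inside the function, assuming Node.js and using only the built-in https module (the secret and response parameters are what the siteverify endpoint expects; everything else here is illustrative):

// Verify a reCAPTCHA token that the browser sent along with the request.
const https = require("https");
const querystring = require("querystring");

function verifyRecaptcha(token, secretKey, callback) {
    const body = querystring.stringify({ secret: secretKey, response: token });
    const req = https.request({
        hostname: "www.google.com",
        path: "/recaptcha/api/siteverify",
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" }
    }, function (res) {
        let data = "";
        res.on("data", function (chunk) { data += chunk; });
        res.on("end", function () {
            const result = JSON.parse(data);
            // For v3, result.score can also be checked against a threshold.
            callback(null, result.success);
        });
    });
    req.on("error", callback);
    req.write(body);
    req.end();
}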
This isn't a native GCF feature, but you could try:
- adding a filter in your GCF code (e.g. Express) to check the requesting domain (a sketch follows this list);
- making your GCF private and letting it ensure callers are authorized (GCP callers);
- running it in Cloud Run, App Engine or another service behind Identity-Aware Proxy and screening out callers that way.
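A rough sketch of the first option, assuming an HTTP-triggered Cloud Function (which receives Express-style req/res objects); the function name and allowed domains are placeholders. Keep in mind that Origin headers can be forged by non-browser clients, so this is a convenience filter rather than real authentication.

// Reject requests whose Origin header is not on an allowlist.
const ALLOWED_ORIGINS = [
    "https://customer-a.example.com",
    "https://customer-b.example.com"
];

exports.myEndpoint = function (req, res) {
    const origin = req.get("origin") || "";
    if (ALLOWED_ORIGINS.indexOf(origin) === -1) {
        res.status(403).send("Forbidden");
        return;
    }
    // Echo the origin back so the browser's fetch can read the response.
    res.set("Access-Control-Allow-Origin", origin);
    res.json({ ok: true });
};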

InvalidCastException: Azure Durable Functions Error

Locally testing Azure Durable Functions with VSCode + JavaScript. Able to successfully trigger the HTTP triggered Orchestration Client and can even see the request headers + body no problem. However, I receive the following error when attempting to trigger the Orchestrator:
Unable to cast object of type 'Microsoft.Azure.WebJobs.DurableOrchestrationContext' to type 'System.String'
I don't understand why DurableOrchestrationContext is trying to be turned into a string.
Code calling the Orchestrator:
context.bindings.patient = [{
    FunctionName: "OrchestratorJS",
    Input: req,
    InstanceId: id
}];
Notes:
- I tried sending just a string as the Input, but to no effect.
- I have successfully created Durable Functions for a different project which makes this even more frustrating.
The Functions runtime is trying to cast DurableOrchestrationContext to a string because of how languages are handled in Functions v2. Unlike v1, v2 runs JavaScript functions through a Node language worker hosted in a different process from the runtime host. The language worker and the host communicate via gRPC protocol. When a function is called, the runtime host must pass bound parameter information to the function over gRPC. Parameters bound to complex objects, like DurableOrchestrationContext, must be serialized to JSON strings, passed via gRPC, and finally rehydrated for a function to consume them.
We introduced the DurableOrchestrationContext-to-string conversion in the 1.4.0 release. Could you update to the latest version of the extension (1.5.0) and try your function again?
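For reference, a minimal JavaScript orchestrator that the client binding above would start might look like the following (this assumes the durable-functions npm package; the activity name is illustrative):

const df = require("durable-functions");

// The runtime rehydrates the DurableOrchestrationContext and exposes it
// on context.df inside the orchestrator.
module.exports = df.orchestrator(function* (context) {
    const input = context.df.getInput();          // whatever was passed as Input
    const result = yield context.df.callActivity("ProcessPatient", input);
    return result;
});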

Google Plus API - Keyinvalid

I am trying to use the JavaScript SDK to do an OAuth login and access the Google Plus API. It's basically the same code as here: https://developers.google.com/api-client-library/javascript/features/authentication
In my Firebug console, this is the URL that the API request is being sent to:
https://content.googleapis.com/discovery/v1/apis/plus/v1/rest?fields=servicePath%2Cresources%2Cparameters%2Cmethods&pp=0&key={key}
This is the error that comes back:
{"error":{"errors":[{"domain":"usageLimits","reason":"keyInvalid","message":"Bad Request"}],"code":400,"message":"Bad Request"}}
I have:
1. Added Google Plus Api to my project
2. Created oauth credentials
3. Set up my consent screen
However, I am still getting the error.
The reason is that you have the key defined in the request. As specified in the discovery API docs (https://developers.google.com/discovery/v1/getting_started#before_starting):
"The APIs Discovery Service provides only public methods that do not
require authentication. In addition, unlike the requests you make to
many other Google APIs, the requests you make to the Discovery Service
API should not include an API key. If you do provide a key, the
requests will fail. This behavior helps ensure that you don't
accidentally disclose your API key when distributing tools that are
based on the Google APIs Discovery Service."
So you can solve the problem by removing the key from your request entirely.
If you are using Google's javascript client to do this and the error occurs when loading further APIs, you have to unset the key first:
gapi.client.setApiKey( null );
gapi.client.load( "plus", "v1", function( apiresponse ) { ... } );
If another function requires the key later, you have to set it again.
To avoid setting and unsetting the key constantly, I load all the needed APIs before authentication, then set the API key, and thus no longer run into the issue.
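A sketch of that ordering (the API names are just examples): unset the key, load everything you need, and only then attach the key for the calls that require it.

gapi.client.setApiKey(null);                     // no key during discovery
gapi.client.load("plus", "v1", function () {
    gapi.client.load("oauth2", "v2", function () {
        // All discovery/loading is done; now the key is safe to set.
        gapi.client.setApiKey("YOUR_API_KEY");
    });
});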

Google OAuth WildCard Domains

I am using Google auth but keep getting an origin mismatch. The project I am working on has subdomains that are generated by the user. So for example there can be:
john.example.com
henry.example.com
larry.example.com
In my app settings I have one of my origins being http://*.example.com but I get an origin mismatch. Is there a way to solve this? Btw my code looks like this:
gapi.auth.authorize({
    client_id: 'xxxxx.apps.googleusercontent.com',
    scope: ['https://www.googleapis.com/auth/plus.me',
            'https://www.googleapis.com/auth/userinfo.email',
            'https://www.googleapis.com/auth/userinfo.profile'],
    state: 'http://henry.example.com',
    immediate: false
}, function(result) {
    if (result != null) {
        gapi.client.load('oauth2', 'v2', function() {
            console.log(gapi.client);
            gapi.client.oauth2.userinfo.get().execute(function(resp) {
                console.log(resp);
            });
        });
    }
});
Hooray for useful yet unnecessary workarounds (thanks for complicating yourself into a corner Google)....
I was using Google Drive via the JavaScript API to open the file picker, retrieve the file info/URL, and then download the file to my server using cURL. Once I finally realized that all my wildcard domains would have to be registered, I about had a stroke.
What I do now is the following (this is my use case, cater it to yours as you need to):
1. On the page that you are on, create an onclick event to open a new window on a specific domain (https://googledrive.example.com/oauth/index.php?unique_token={some unique token}).
2. On the new popup I did all my Google Drive authentication, had a button to click which opened the file picker, then retrieved at least the metadata that I needed from the file. Then I stored the token (primary key), access_token, downloadurl and filename in my database (MySQL).
3. Back on step one's page, I created a setTimeout() loop that would run an Ajax call every second with that same unique_token to check when it had been entered in the database. Once it finds it, I kill the loop, retrieve the contents and do with them as I will (in this case I uploaded them through a separate upload script that uses cURL to fetch the file).
This is obviously not the best method for handling this, but it's better than entering each and every subdomain into Google's cloud console. I bet you can probably do this with Google's server-side OAuth libraries, but my use case was a little complicated and I was cranky because I was frustrated by the past 4 days I'd spent on a silly little integration with Google.
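A rough sketch of the polling step (step 3) using jQuery; the check URL, the row fields, and startServerSideDownload are placeholders for whatever your own endpoint and upload script look like:

function waitForPickedFile(uniqueToken) {
    $.getJSON("/oauth/check.php", { unique_token: uniqueToken }, function (row) {
        if (row && row.downloadurl) {
            // The popup has stored the token/URL/filename; hand it off.
            startServerSideDownload(row);
        } else {
            // Not there yet; ask again in a second.
            setTimeout(function () { waitForPickedFile(uniqueToken); }, 1000);
        }
    });
}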
Wildcard origins are not supported, same for redirect URIs.
The fact that you can register a wildcard origin is a bug.
You can use the state parameter, but be very careful with that, make sure you don't create an open redirector (an endpoint that can redirect to any arbitrary URL).
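A small sketch of using state safely in this scenario, assuming Express on the server: the OAuth flow always returns to one registered origin, state only carries the subdomain label the user started from, and it is validated against a strict pattern so the endpoint cannot be abused as an open redirector. All names here are illustrative.

var ALLOWED_SUBDOMAIN = /^[a-z0-9-]+$/;   // a bare label, never a full URL

app.get("/oauth/callback", function (req, res) {
    var sub = String(req.query.state || "");
    if (!ALLOWED_SUBDOMAIN.test(sub)) {
        return res.status(400).send("Invalid state");
    }
    // ... finish the OAuth exchange here ...
    res.redirect("https://" + sub + ".example.com/signed-in");
});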

Publish data from browser app without writing my own server

I need users to be able to post data from a single page browser application (SPA) to me, but I can't put server-side code on the host.
Is there a web service that I can use for this? I looked at Amazon SQS (simple queue service) but I can't call their REST APIs from within the browser due to cross origin policy.
I favour ease of development over robustness right now, so even just receiving an email would be fine. I'm not sure that the site is even going to catch on. If it does, then I'll develop a server-side component and move hosts.
Not only are there web services, but nowadays there are robust systems that provide a way to run some server-side logic for your applications. They are called BaaS, or Backend as a Service, providers, and they usually provide a backbone for your front-end applications.
Although they have multiple uses, I'm going to list the most common in my opinion:
For mobile applications - Instead of having to learn an API for each device you code for, you can use a standard platform to store the logic and data for your application.
For prototyping - If you want to create a slick application but don't want to code all the backend logic for the data (let alone deal with all the operations and system administration that represents), then with a BaaS provider you only need good front-end skills to code the simplest CRUD applications you can imagine. Some BaaS providers even allow you to bind Reduce algorithms to the calls you perform against their API.
For web applications - When PaaS (Platform as a Service) came to town to ease the job of back-end developers by removing the hassle of system administration and operations, it was only logical that the same would happen to the back end itself. There are many clones that showcase the real power of this strategy.
All of this is amazing, but I have yet to mention any of them. I'm going to list the ones that I know best and have actually used in projects. There are probably many more, but as far as I know, these have satisfied most of my needs for any of the uses mentioned previously.
Parse.com
Parse's most outstanding features target mobile devices; however, nowadays Parse contains an incredible number of APIs that allow you to use it as a full-featured backend service for JavaScript, Android and even Windows 8 applications (the Windows 8 SDK was introduced a few months ago, earlier this year).
What does Parse code look like in JavaScript?
Parse works through classes and objects (ain't that beautiful?), so you first create a specific class (this can be done through JavaScript, REST or even the Data Browser manager) and then add objects to it.
First, add Parse as a script tag:
<script type="text/javascript" src="http://www.parsecdn.com/js/parse-1.1.15.min.js"></script>
Then, initialize Parse with your Application ID and JavaScript Key:
Parse.initialize("APPLICATION_ID", "JAVASCRIPT_KEY");
From there, it's all object manipulation
var Person = Parse.Object.extend("Person"); // Person is a class *cof* uppercase *cof*
var personObject = new Person();
personObject.save({name: "John"}, {
    success: function(object) {
        console.log("The object with the data " + JSON.stringify(object) + " was saved successfully.");
    },
    error: function(model, error) {
        console.log("There was an error! The following model and error object were provided by the Server");
        console.log(model);
        console.log(error);
    }
});
What about authentication and security?
Parse has a User-based authentication system, which pretty much allows you to store a base of users that can manipulate the data. If you map the data to User information, you can ensure that only a given user can manipulate specific data. Plus, in the settings of your Parse application, you can specify that no clients are allowed to create classes, to ensure unnecessary calls aren't performed.
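A hedged sketch of tying saved objects to the signed-in user, so only that user can read or write them (the class, field and credential values are illustrative):

Parse.User.logIn("john", "secretpassword", {
    success: function (user) {
        var Person = Parse.Object.extend("Person");
        var personObject = new Person();
        personObject.setACL(new Parse.ACL(user));   // only this user can access it
        personObject.save({ name: "John" });
    },
    error: function (user, error) {
        console.log(error);
    }
});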
Did you REALLY use it in a web application?
Yes, it was my tool of choice for a medium fidelity prototype.
Firebase.com
Firebase's main feature is the ability to bring real-time behavior to your application without all the hassle. You don't need a MeteorJS server to bring push notifications to your software. If you know JavaScript, you are halfway to bringing real-time magic to your users.
What does Firebase look like in JavaScript?
Firebase works in a REST fashion, and I think they do an amazing job structuring the Glory of REST. As a good example, look at the following Resource structure in Firebase:
https://SampleChat.firebaseIO-demo.com/users/fred/name/first
You don't need to be a rocket scientist to know that you are retrieving the first name of the user "Fred", given there's at least one (usually there would be a UUID instead of a name, but hey, it's an example, give me a break).
To start using Firebase, as with Parse, add their CDN JavaScript:
<script type='text/javascript' src='https://cdn.firebase.com/v0/firebase.js'></script>
Now, create a reference object that will allow you to consume the Firebase API
var myRootRef = new Firebase('https://myprojectname.firebaseIO-demo.com/');
From there, you can create a bunch of neat applications.
var USERS_LOCATION = 'https://SampleChat.firebaseIO-demo.com/users';
var userId = "Fred"; // Username
var usersRef = new Firebase(USERS_LOCATION);
usersRef.child(userId).once('value', function(snapshot) {
    var exists = (snapshot.val() !== null);
    if (exists) {
        console.log("Username " + userId + " is part of our database");
    } else {
        console.log("We have no register of the username " + userId);
    }
});
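Writing works the same way; here is a small sketch using the same reference (the field names are made up):

usersRef.child(userId).set({
    name: { first: "Fred", last: "Flintstone" },
    online: true
}, function (error) {
    if (error) {
        console.log("The write failed: " + error);
    } else {
        console.log("User " + userId + " saved");
    }
});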
What about authentication and security?
You are in luck! Firebase released their Security API about two weeks ago! I have yet to explore it, but I'm sure it fills most of the gaps that allowed random people to use your reference for their own purposes.
Did you REALLY use it in a web application?
Eeehm... ok, no. I used it in a Chrome Extension! It's still in progress, but it's going to be a real-time chat inside a Chrome Extension. Ain't that cool? Fine. I find it cool. Anyway, you can browse more awesome examples for Firebase on their examples page.
What's the magic of these services? If you know your Dependency Injection and Mock Object Testing, at some point you can completely replace any of these services with your own REST web service.
Since these services were created to be used inside any application, they are CORS ready. As stated before, I have successfully used both of them from multiple domains without any issue (I'm even trying to use Firebase in a Chrome Extension, and I'm sure I will succeed soon).
Both Parse and Firebase have Data Browser managers, which means that you can see the data you are manipulating through a simple web browser. As a final disclaimer, I have no relationship with either of these services other than the fact that James Taplin (Firebase co-founder) was amazing enough to lend me some beta access to Firebase.
You actually CAN use SQS from the browser, even without CORS, as long as you only need the browser to send messages, not receive them. Warning: this is a kludge that would make my CS professors cry.
When you perform a GET request via JavaScript, the browser will always perform the request; however, you'll only get access to the response if it came from the same origin (protocol, host, port). This is your ticket to ride, since messages can be posted to an SQS queue with just a GET, and who really cares about the response anyway?
Assuming you're using jQuery, your queue is https://sqs.us-east-1.amazonaws.com/71717171/myqueue, and it allows anyone to post a message, the following will post a message with the body "HITHERE" to the queue:
$.ajax({
    url: 'https://sqs.us-east-1.amazonaws.com/71717171/myqueue' +
         '?Action=SendMessage' +
         '&Version=2012-11-05' +
         '&MessageBody=HITHERE'
});
There'll be an error in the console saying that the request failed, but the message will show up in the queue anyway.
Have you considered JSONP? That is one way of calling cross-domain scripts from JavaScript without running into the same-origin policy. You're going to have to set up some script somewhere to send you the data, though; client-side JavaScript alone just isn't up to the task.
Depending on what kind of data you want to send, and what you're going to do with it, one way of solving it would be to post the data to a Google Spreadsheet using Ajax. It's a bit tricky to accomplish, though. Here is another Stack Overflow question about it.
If presentation isn't that important you can just have an embedded Google Spreadsheet Form.
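For the Ajax route, a very rough sketch of the usual trick is below: post to the form that backs the spreadsheet. The form ID and the entry.* field name are placeholders you would copy out of your own form's HTML, and, as with the SQS example above, the browser blocks the cross-origin response but the submission itself still lands.

$.post(
    "https://docs.google.com/forms/d/e/FORM_ID_PLACEHOLDER/formResponse",
    { "entry.123456789": JSON.stringify({ user: "anna", score: 42 }) }
);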
What about mailto:youremail@goeshere.com? ihihi
In the meantime, you can spin up a free host like Altervista or Heroku, or something similar, and connect to their server. If I remember correctly, these free services let you run your own small server-side endpoints, so you can create a sort of personal web service and push Ajax requests to it. Obviously their servers are slow on free accounts, but I think that's enough if you don't have much user traffic; otherwise you should move to a better VPS, hosting or cloud solution.
Maybe CouchDB can provide what you're after. IrisCouch provides free CouchDB instances. Lock it down so that users can't view documents, add a sensible validation function, and you've got yourself an easy RESTful place to stick your data.
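A small sketch of the "sensible validation function" part: a design document whose validate_doc_update rejects anything that does not look like a submission (the document layout and field names are only examples):

var designDoc = {
    _id: "_design/submissions",
    validate_doc_update: function (newDoc, oldDoc, userCtx) {
        if (oldDoc) {
            throw({ forbidden: "documents can only be created, not modified" });
        }
        if (!newDoc.payload) {
            throw({ forbidden: "a payload field is required" });
        }
    }.toString()   // CouchDB stores the function as a string in the design doc
};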
