Securely store data in a Node CLI app - javascript

I am currently writing a NodeJS command-line app. The app makes an API call and returns some data to the user. Given that this is a public API, the user requires an API token. This CLI will be installed globally on the user's machine via npm i -g super-cool-api-cli.
The first time the user runs the CLI they are prompted for the token, and then I store it so that on subsequent runs they don't need to enter it again. I have also provided the user a way to reset it. I am storing it in the actual directory of my CLI module, which as stated is installed globally, and it looks something like this:
fs.writeFile(__dirname + '/.token.json', JSON.stringify({ token: token }, null, 2), 'utf8', (e) => {
  // error handling and whatever
});
I name the file .token.json, using a dot to at least make the file hidden by default.
I guess what I am asking is if there is a better/more secure way of storing sensitive information in a NodeJS command-line app that you would be running more than once. I thought about using things like environment variables, but they only seem to last for the lifetime of the process.
Security considerations are a skill I somewhat lack, but greatly desire to learn more about, so thank you in advance for your tips.

I think it's best to use the credential storage facilities provided by the OS for this sort of thing, assuming of course that each user has their own account on the machine. The only NPM package I know that handles that is node-keytar.
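A minimal sketch of what that looks like with node-keytar (the service and account names below are made up for illustration); keytar delegates to the OS keychain (Keychain on macOS, libsecret on Linux, Credential Vault on Windows):

```javascript
// Sketch only: assumes the node-keytar package is installed and the OS
// provides a credential store for the current user.
const keytar = require('keytar');

async function saveToken(token) {
  await keytar.setPassword('super-cool-api-cli', 'api-token', token);
}

async function loadToken() {
  // Resolves to the stored string, or null if nothing is stored yet.
  return keytar.getPassword('super-cool-api-cli', 'api-token');
}

async function resetToken() {
  await keytar.deletePassword('super-cool-api-cli', 'api-token');
}
```

This keeps the secret out of your module directory entirely, and the OS handles encryption at rest.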

You can store your token in SQLite. Note that stock SQLite has no username/password support, so to actually protect the .db file you would need an encryption extension such as SQLCipher. The Node bindings for SQLite are here: https://github.com/mapbox/node-sqlite3

The standard place to store such tokens is in the user's ~/.netrc file (see specifications here). Heroku does this for example.
A nice consequence of this standard is that there exist libraries to read/write this file (such as netrc-rw).
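For reference, a .netrc entry is plain text keyed by host name (the machine, login, and password values below are placeholders):

```
machine api.example.com
  login my-username
  password my-api-token
```

Heroku's CLI, for instance, keeps its api.heroku.com credentials in this file. The file should be restricted to the owner (chmod 600).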

A semi-conventional location to store secrets, like keys, is the .ssh directory:

- it often has ACLs restricted to the user, and your file would follow the related ACL pattern;
- the typical files of this directory include unencrypted secret keys (nothing prevents you from further encrypting yours);
- a dot-file in there should not get in the way of typical uses of the directory.

Related

How to write to environment variables on Heroku

I have made a website meant to be used by only one person, so I want to dynamically write to a .env file on Heroku without the dyno restarting,
because this is meant only for one person. I don't want to deal with a database.
Something like this:
require('dotenv').config();
console.log(process.env.MYVAL); // Not my value
process.env.MYVAL = "MYVAL";
console.log(process.env.MYVAL); // MYVAL
You could use the Heroku API to do that,
but it will have to restart the dyno (see the docs).
You can set the environment variables in the settings tab on your Heroku dashboard and also using the command line. Please check the following documentation to get more information.
Configuration and Config Vars
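For example, with the Heroku CLI (the variable and app names below are placeholders):

```shell
# Set a config var (this restarts the dyno):
heroku config:set MYVAL=some-value -a my-app

# Read it back:
heroku config:get MYVAL -a my-app
```

Config vars set this way persist across restarts, unlike anything written to the dyno's file system.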
You need to persist data (even if it is a single value). Therefore you should not write to the Heroku file system (it is ephemeral), nor store it in environment variables (Heroku config vars).
I understand using a database might not be worth it, and in this case I would use external file storage (Amazon S3, Dropbox, or even a GitHub private repository).
Under Files on Heroku you can see some options and (Python) code.

My bot cannot access to blob storage account with a system assigned managed identity

I'm exploring using Azure blob storage with my bot. I'd like to use it as a persistent store for state, as well as storing transcripts.
I configure the BlobStorage object like this:
storageProvider = new BlobStorage({
  containerName: process.env.BlobContainerName,
  storageAccountOrConnectionString: process.env.BlobConnectionString
});
As sensitive information is stored in these files, especially transcripts, I'm working with my team on securing the storage account and the container within it.
We have created a system assigned managed identity for the application service hosting the bot, and we have given this account the 'Storage Blob Data Contributor' role, which, as I understand it, provides read, write and delete access to stored content.
Unfortunately when the bot tries to access the storage the access attempt fails. I see the following error in the 'OnTurnError trace':
StorageError: Forbidden
Interestingly, running the bot locally with the same blob storage connection string works, suggesting that this issue is related to the service identity and/or the permissions that it has.
Does anyone know what could be causing the error? Are more permissions required to the storage account? Any thoughts on increasing the logging of the error to potentially see a more detailed error message is also most welcome.
At this moment in time I do not believe that the framework supports using a system assigned managed identity for access to the blob storage.
In looking into this I found a number of examples of Node.js that use two specific packages for accessing blob storage using a system assigned identity. Specifically:
@azure/storage-blob - https://www.npmjs.com/package/@azure/storage-blob
@azure/identity - https://www.npmjs.com/package/@azure/identity
The identity package is the one that provides the functionality to get a token associated with a credential, that is then used by code in the storage-blob package to interact with the storage account.
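A minimal sketch of how those two packages combine (the account URL and container/blob names below are placeholders), assuming the code runs somewhere a managed identity or other credential is available:

```javascript
// Sketch only: assumes @azure/identity and @azure/storage-blob are installed.
const { DefaultAzureCredential } = require('@azure/identity');
const { BlobServiceClient } = require('@azure/storage-blob');

// DefaultAzureCredential tries managed identity, environment credentials,
// Azure CLI login, etc., in order.
const credential = new DefaultAzureCredential();

const serviceClient = new BlobServiceClient(
  'https://mystorageaccount.blob.core.windows.net', // placeholder account URL
  credential
);

async function writeExampleBlob() {
  const container = serviceClient.getContainerClient('transcripts'); // placeholder
  const blob = container.getBlockBlobClient('example.txt');
  const body = 'hello';
  await blob.upload(body, Buffer.byteLength(body));
}
```

The credential produces tokens behind the scenes; no connection string or access key appears in the code.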
If I look at the dependency tree for the bot framework I don’t see either of these packages. Instead I see:
azure-storage - https://www.npmjs.com/package/azure-storage
botbuilder-azure - https://www.npmjs.com/package/botbuilder-azure
Taking a deep dive into these two packages, I don't see any code for connecting to an Azure storage account that uses a credential; the only code I can find uses access keys. My current conclusion, therefore, is that the bot framework doesn't support accessing a storage account using a credential.
While we could explore adding code that uses these packages, such significant development work is outside the scope of our project at present.
If anyone with more knowledge than I can see that this is incorrect please let me know via a comment and I'll explore further.
For the moment we have settled on continuing to use access keys, as this is no less secure than the way the bot accesses other services, such as cognitive services like QnA Maker.

GitHub Collaborators and Write Privileges

Let's pretend that we have a GitHub organization with 2 members, in which one is the admin and the other a machine user. The machine user is added as a collaborator to each new repository that is created in the organization. In order to allow the machine user to push to the repositories we can use an access token with just that scope. Now we have a problem: the machine user could potentially push anything to any repository, since the access token grants account-wide access. There are at least 3 potential ways that might be used to solve this problem:
Create a new deploy key for each repository and use it to restrict write access only to whom has the key;
Use pre-receive hooks to reject commits that are not compliant with some check;
Create a machine user for each new repository and add it just to that precise repository as a collaborator with its push only access token.
These solutions all have drawbacks:
Deploy keys are more inconvenient to create than access tokens because I need to create them myself instead of issuing a request to GitHub. Moreover, I don't know how I can use them with JavaScript to be able to push to GitHub without the terminal and without installing Git on the local machine. There are many tutorials (example 1, example 2) about using deploy keys with the terminal, and also about using passwords and access tokens to authenticate JavaScript libraries to upload to GitHub, but what if you want to be able to push without the terminal and by only granting access to a single specific repository?
Pre-receive hooks are an enterprise-only feature;
Creating a bunch of machine users is not very efficient.
Are there other ways to solve this problem? Are there other ways to reject commits besides pre-receive hooks? Can you help me figure out how to use JavaScript to push commits to GitHub by using deploy keys (what about this)? Is there a way to discover which access token was used to push using a webhook?

Securing JS client-side SDKs

I'm working on a React-Redux web-app which integrates with AWS Cognito for user authentication/data storage and with the Shopify API so users can buy items through our site.
With both SDKs (Cognito, Shopify), I've run into an issue: Their core functionality attaches data behind the scenes to localStorage, requiring both SDKs to be run client-side.
But running this code entirely client-side means that the API tokens which both APIs require are completely insecure, such that someone could just grab them from my bundle and then authenticate/fill a cart/see inventory/whatever from anywhere (right?).
I wrote issues on both repos to point this out. Here's the more recent one, on Shopify. I've looked at similar questions on SO, but nothing I found addresses these custom SDKs/ingrained localStorage usage directly, and I'm starting to wonder if I'm missing/misunderstanding something about client-side security, so I figured I should just ask people who know more about this.
What I'm interested in is whether, abstractly, there's a good way to secure a client-side SDK like this. Some thoughts:
Originally, I tried to proxy all requests through the server, but then the localStorage functionality didn't work, and I had to fake it out post-request and add a whole bunch of code that the SDK is designed to take care of. This proved prohibitively difficult/messy, especially with Cognito.
I'm also considering creating a server-side endpoint that simply returns the credentials and blocks requests from outside the domain. In that case, the creds wouldn't be in the bundle, but wouldn't they be eventually scannable by someone on the site once that request for credentials has been made?
Is the idea that these secret keys don't actually need to be secure, because adding to a Shopify cart or registering a user with an application don't need to be secure actions? I'm just worried that I obviously don't know the full scope of actions that a user could take with these credentials, and it feels like an obvious best practice to keep them secret.
Thanks!
Can't you just put the keys and such in a .env file? This way nobody can see what keys you've got stored in there. You can then access your keys through process.env.YOUR_VAR
For Cognito you could store stuff like user pool id, app client id, identity pool id in a .env file.
NPM package for dotenv can be found here: NPM dotenv
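For instance, a .env file holding the Cognito values mentioned above (all values are placeholders):

```
COGNITO_USER_POOL_ID=us-east-1_example
COGNITO_APP_CLIENT_ID=example-client-id
COGNITO_IDENTITY_POOL_ID=us-east-1:example-identity-pool
```

With dotenv loaded these are available as e.g. process.env.COGNITO_USER_POOL_ID; keep the .env file itself out of version control. Note that anything bundled into a client-side app is still visible to users, so this hides values from your repository, not from the browser.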
Furthermore, what supersecret stuff are you currently storing that you're worried about? By "API tokens", do you mean the OpenId token which you get after authenticating to Cognito?
I can respond to the Cognito portion for this. Your AWS Secret Key and Access Key are not stored in the client. For your React.js app, you only need the Cognito User Pool Id and the App Client Id in your app. Those are the only keys that are exposed to the user.
I cover this in detail in a comprehensive tutorial here - http://serverless-stack.com/chapters/login-with-aws-cognito.html

Get access to DocumentDB with JS

I'm developing an app which should connect to an external DocumentDB database (not mine). The app is built with Cordova/Ionic.
I found a JavaScript library from Microsoft Azure in order to ensure a DocumentDB database connection, but it is asking for some weird stuff like collection_rid and tokens.
I've got the following from the guys of the external DocumentDB database:
Endpoint: https://uiuiui.documents.azure.com:443/
Live DocumentDB API ReadOnly Key: P8riQBgFUH...VqFRaRA==
.Net Connection String: AccountEndpoint=https://uiuiui.documents.azure.com:443/;AccountKey=jl23...lk23==;
But how am I supposed to retrieve the collection_rid and token from this information?
Without row-level authorization, DocumentDB is designed to be accessed from a server-side app, not directly from javascript in the browser. When you give it the master token, you get full access which is generally not what you want for your end-user clients. Even the read-only key is usually not what you want to hand out to your clients. The Azure-provided javascript library is designed to be run from node.js as your server-side app.
That said, if you really want to access it from the browser without a proxy app running on a server, you can definitely do so using normal REST calls directly hitting the DocumentDB REST API. I do not think the Azure-provided SDK will run directly in the browser, but with help from Browserify and some manual tweaking (it's open source) you may be able to get it to run.
You can get the collection name from the same folks who provided you the connection string information and use name-based routing to access the collection. I'm not sure exactly what you mean by token but I'm guessing that you are referring to the session token (needed for session-level consistency). Look at the REST API specs if you want to know the details about how that token gets passed back and forth (in HTTP headers) but it's automatically taken care of by the SDKs if you go that route.
Please note that DocumentDB also provides support equivalent to row-level authorization by enabling you to create specific permissions on the desired entities. Once you have such a permission, you can retrieve the corresponding token, which is scoped to be valid for a certain time period. You would need to set up a mid-tier that can fetch these tokens and distribute to your user application. The user application can then use these tokens as bearer-tokens instead of using the master key.
You can find more details at https://msdn.microsoft.com/en-us/library/azure/dn783368.aspx
https://msdn.microsoft.com/en-us/library/azure/7298025b-bcf1-4fc7-9b54-6e7ca8c64f49
