I have made a website meant to be used by only one person, so I want to dynamically write to the .env file on Heroku without the dyno restarting.
Since this is meant for a single user, I don't want to deal with a database.
Something like this:
require('dotenv').config();
console.log(process.env.MYVAL); // Not my value
process.env.MYVAL = "MYVAL";
console.log(process.env.MYVAL); // MYVAL
You could use the Heroku API to do that,
but it will have to restart the dyno (see the docs).
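For reference, a minimal sketch of updating a config var through the Heroku Platform API; the app name and token are placeholders, and Heroku will restart the dyno once the config var changes:

// Sketch: update a Heroku config var via the Platform API.
// 'my-app' and HEROKU_API_TOKEN are placeholders for your own values.
const https = require('https');

function setConfigVar(appName, apiToken, key, value) {
  const body = JSON.stringify({ [key]: value });
  const options = {
    hostname: 'api.heroku.com',
    path: `/apps/${appName}/config-vars`,
    method: 'PATCH',
    headers: {
      'Accept': 'application/vnd.heroku+json; version=3',
      'Authorization': `Bearer ${apiToken}`,
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body),
    },
  };
  return new Promise((resolve, reject) => {
    const req = https.request(options, (res) => {
      let data = '';
      res.on('data', (chunk) => (data += chunk));
      res.on('end', () => resolve(JSON.parse(data)));
    });
    req.on('error', reject);
    req.write(body);
    req.end();
  });
}

// Usage: setConfigVar('my-app', process.env.HEROKU_API_TOKEN, 'MYVAL', 'MYVAL');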
You can set environment variables in the Settings tab of your Heroku dashboard, and also from the command line (e.g. heroku config:set MYVAL=somevalue). Please check the following documentation for more information.
Configuration and Config Vars
You need to persist data (even if it is a single value). Therefore you should not write to the Heroku file system (it is ephemeral), nor store it in environment variables (Heroku config vars).
I understand using a database might not be worth it, and in this case I would use external file storage (Amazon S3, Dropbox, or even a GitHub private repository).
On Files on Heroku you can see some options and (Python) code.
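As an illustration, a minimal sketch of persisting a single value in S3 with the AWS SDK for Node.js; the bucket and object names are placeholders, and you would still keep the AWS credentials themselves in Heroku config vars:

// Sketch: persist a single value in S3 instead of the dyno's file system.
// 'my-bucket' and 'myval.json' are placeholder names.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function saveValue(value) {
  await s3.putObject({
    Bucket: 'my-bucket',
    Key: 'myval.json',
    Body: JSON.stringify({ value }),
    ContentType: 'application/json',
  }).promise();
}

async function loadValue() {
  const res = await s3.getObject({ Bucket: 'my-bucket', Key: 'myval.json' }).promise();
  return JSON.parse(res.Body.toString()).value;
}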
Related
I have downloaded Node.js and created the Firebase function files in the website directory (firebase.json, the functions folder, and others).
If I were to write JavaScript cloud functions inside the project/functions/index.js file, they won't be private when I upload them to my GitHub repository for my static website (something.github.io).
So how would I go about calling the firebase cloud functions in my index.js to my static website without uploading the index.js (to keep certain functions private)?
Edit:
I understand now that there are environment variables, but how can I incorporate that into a GitHub Pages website with the Firebase Admin SDK and cloud functions?
How do I upload my GitHub Pages project and still link my client side to the environment variables? Would I need to upload my index.js containing my cloud functions? But doesn't uploading the index.js defeat the purpose of the client not being able to see the functions/data?
The comment below mentioned software called Heroku; what exactly is its purpose when I already have GitHub and Firebase interacting with my website and database?
Also, I saw a method of using dotenv to create a .env file for secret data (such as API keys) and using .gitignore to prevent the file from being uploaded. Would that work on GitHub Pages, and if so, would the client be able to see the .env file? And if they can't, can the client-side website still read the .env file even though it isn't pushed to GitHub?
This is a good use of environment variables. Essentially, say you had an API key of 12345. And you had a function as such:
async function fetchResults() {
  await fetch("myapi.com/lookup?key=12345");
}
You could do instead:
async function fetchResults() {
  await fetch(`myapi.com/lookup?key=${process.env.API_KEY}`);
}
The environment variable is kept on your machine so it never goes to GitHub, hence allowing you to expose the code as open source while maintaining confidentiality of your sensitive keys.
Edit: So I reread your question and I see you're talking about publishing to GitHub pages. The key thing to note is that anything that is hosted client-side the user will be able to see. GitHub Pages only hosts the "client" portion of your application. So if your client (browser) website makes an API call to myapi.com/lookup?key=12345, they will be able to see it no matter what, since it's their browser making a request and they can see anything their browser does.
However, the best practice here is to write server-side code to run that part of your application as well. For this, you can use what I suggested above: add environment variables to whichever server you use to host (for example, you can do this easily with Zeit Now or Heroku). That way, you can share your code, but your environment variables stay secret on the machine that runs your server-side code.
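In this Firebase setup, that server-side code could be a Cloud Function that holds the key and is called from the GitHub Pages site. A minimal sketch, assuming a hypothetical third-party API and a config value set with the Firebase CLI:

// functions/index.js -- sketch of keeping an API key server-side.
// 'myapi.key' is a config value you would set with:
//   firebase functions:config:set myapi.key="12345"
const functions = require('firebase-functions');
const fetch = require('node-fetch'); // assumed dependency of the functions package

exports.lookup = functions.https.onRequest(async (req, res) => {
  const key = functions.config().myapi.key; // never shipped to the client
  const apiRes = await fetch(`https://myapi.com/lookup?key=${key}&q=${req.query.q}`);
  res.json(await apiRes.json());
});

// The static site only ever calls the function's public URL, e.g.
//   fetch('https://us-central1-<project>.cloudfunctions.net/lookup?q=test')
// so the key itself never appears in the repository or the browser.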
Assume we worked on a staging Realtime Database during development, and it shaped up to be a big and complex JSON structure. Is there any way to define a blueprint for Firebase's Realtime Database so that the structure from the staging database can be moved to production, without the data that is currently in the staging database?
The Firebase Database is a schemaless database. If you remove the actual values from the database, nothing will be left. So there also won't be anything to clone in that case.
Any rules about the data structure (validation) and access permissions are captured in Firebase's security rules. If you define those in a separate file, you can use the Firebase CLI to deploy them to either environment. See this for how to do that: How do I deploy Firebase Database Security rules using the command line?
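As an illustration, the firebase.json in your project can point at a rules file (the file name below is the conventional one, but it is just an example):

{
  "database": {
    "rules": "database.rules.json"
  }
}

With that in place, running firebase deploy --only database against whichever project you have selected with firebase use pushes the rules, including any .validate constraints that describe your structure, to that environment.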
I am currently writing a NodeJS command-line app. The app makes an API call and returns some data to the user. Given that this is a public API, the user requires an API token. This CLI will be installed globally on the user's machine via npm i -g super-cool-api-cli.
The first time the user runs the CLI they are prompted for the token, and then I store it so that each subsequent time they run it they don't need to put it in. I have provided the user a way to reset it as well. I am storing it in the actual directory of my CLI module, which as stated is installed globally, and it looks something like this:
const fs = require('fs');

// Persist the token next to the globally-installed module
fs.writeFile(__dirname + '/.token.json', JSON.stringify({ token: token }, null, 2), 'utf8', (e) => {
  // error handling and whatever
});
I name the file .token.json, using a dot to at least make the file hidden by default.
I guess what I am asking is if there is a better/more secure way of storing sensitive information in a NodeJS command line app, that you would be running more than once. I thought about using things like environment variables but they seem to expire at the end of the process.
Security considerations are a skill I somewhat lack, but greatly desire to learn more about, so thank you in advance for your tips.
I think it's best to use the credential storage facilities provided by the OS for this sort of thing, assuming of course that each user has their own account on the machine. The only NPM package I know that handles that is node-keytar.
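A minimal sketch of what that could look like with node-keytar; the service and account names are placeholders:

// Sketch: store the token in the OS credential store via node-keytar.
// 'super-cool-api-cli' and 'default' are placeholder service/account names.
const keytar = require('keytar');

async function saveToken(token) {
  await keytar.setPassword('super-cool-api-cli', 'default', token);
}

async function loadToken() {
  // Resolves to null if no token has been stored yet
  return keytar.getPassword('super-cool-api-cli', 'default');
}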
You can store your token in SQLite, and set a username/password for the sqlite.db file; here are the Node bindings for SQLite: https://github.com/mapbox/node-sqlite3
The standard place to store such tokens is in the user's ~/.netrc file (see specifications here). Heroku does this for example.
A nice consequence of this standard is that there exist libraries to read/write this file (such as netrc-rw).
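For illustration, an entry in ~/.netrc looks roughly like this (the host, login, and token are placeholders):

machine api.super-cool-api.com
  login user@example.com
  password <api-token>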
A semi-conventional location to store secrets, like keys, is the .ssh directory.
It often has ACLs restricted to the user, so your file would follow the related ACL pattern. The typical files in this directory include unencrypted secret keys, and nothing prevents you from further encrypting yours. A dot-file in there should not get in the way of typical uses of the directory.
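A minimal sketch of that approach, assuming a hypothetical .super-cool-api-token dot-file and relying on the directory's restrictive permissions plus a 0600 mode on the file itself:

// Sketch: keep the token in a user-only dot-file under ~/.ssh.
// '.super-cool-api-token' is a hypothetical file name.
const fs = require('fs');
const os = require('os');
const path = require('path');

const tokenPath = path.join(os.homedir(), '.ssh', '.super-cool-api-token');

function saveToken(token) {
  // mode 0o600: readable/writable by the owner only
  fs.writeFileSync(tokenPath, token, { encoding: 'utf8', mode: 0o600 });
}

function loadToken() {
  return fs.readFileSync(tokenPath, 'utf8').trim();
}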
I'm working on a project where we'll (hopefully) be using backbone.js to power our web app. The caveat is that it will be run either on a web server (i.e. using http:// type URLs) or from the local file system (i.e. using file:/// URLs).
What would be the simplest way to adapt a model object to read files from a local file (i.e. file:///...)?
I faced a similar problem in my book: I wanted to configure persistence to use localStorage, but keep the model/collection code the same as when working with a server.
I used the Backbone.localStorage adapter and wrote a mixin to configure storage on an entity (see https://github.com/davidsulc/marionette-gentle-introduction/commit/3b441c9355ac49348eebb3eca27c06ec79b9f64d). Then, in the code, I can simply execute the mixin function to configure that model/collection to use localStorage (see line 6 at https://github.com/davidsulc/marionette-gentle-introduction/blob/bcb16d45876c321e071624319bf87c8a9cf1d656/assets/js/entities/contact.js#L6).
You can get the code using this technique at https://github.com/davidsulc/marionette-gentle-introduction and the book is available at https://leanpub.com/marionette-gentle-introduction
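For reference, the basic shape of the adapter-based approach; the collection and store names are placeholders, and the mixin in the linked code just applies this kind of configuration conditionally:

// Sketch: a collection configured to persist via Backbone.localStorage,
// so its fetch()/save() calls never hit a server. "contacts" is a placeholder store name.
var Contacts = Backbone.Collection.extend({
  localStorage: new Backbone.LocalStorage('contacts'),
  model: Backbone.Model
});

var contacts = new Contacts();
contacts.fetch(); // reads from localStorage instead of issuing an HTTP request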
As a server developer, I would get my PHP code to access environment variables for deployment settings.
How would you approach the same problem for a purely HTML/JavaScript/jQuery page?
For example, would you load in a JSON file?
I'm tracking the page in git, and I don't want to save person-specific information in the main repo.
Use some build system (Ant, Phing, shell scripts...) and create a template for the config file.
In the build step, just fill the template with real values (taken from the environment or wherever you want) and prepend the real script with the configuration object.
As a result of the build process you'll have a specific file for each particular client.
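A minimal sketch of that idea as a small Node build script; the file names and the /*CONFIG*/ placeholder token are assumptions:

// build.js -- sketch: fill a config template from environment variables
// and prepend the resulting configuration object to the real script.
// config.template.js is assumed to contain a line like:  var CONFIG = /*CONFIG*/{};
const fs = require('fs');

const config = {
  apiUrl: process.env.API_URL || 'http://localhost:8080',
  userName: process.env.DEPLOY_USER || 'anonymous'
};

const template = fs.readFileSync('config.template.js', 'utf8');
const filled = template.replace('/*CONFIG*/{}', JSON.stringify(config, null, 2));

// app.src.js is the person-agnostic script kept in git;
// app.js is the generated, client-specific file that is not committed.
fs.writeFileSync('app.js', filled + '\n' + fs.readFileSync('app.src.js', 'utf8'));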