Front-end sensitive info - JavaScript

I am building my first React app and am not sure about front-end security. I am making a call to the following third-party library: emailjs.sendForm(serviceID, templateID, templateParams, userID);
The userID field is sensitive information. I make the following call in my onSubmit handler. I am wondering if I need to secure this information somehow? Also, is there a way for me to check whether a user can see this information by inspecting the page and finding the code in the method?
emailjs
  .sendForm(
    "gmail",
    "client-email",
    "#form",
    "**here_is_the_sensitive_info**"
  )
  .then(() => {
    resetForm({});
  })
  .catch(() => {
    const acknowledgement = document.createElement("H6");
    acknowledgement.innerHTML = "Something went wrong, please try again.";
    document.getElementById("form").appendChild(acknowledgement);
  });

In this case, EmailJS is meant to be used in the browser, so I don't think that the userId is sensitive at all.
In their own documentation, you can see the following instruction to get started.
<script type="text/javascript"
        src="https://cdn.jsdelivr.net/npm/emailjs-com@2.4.1/dist/email.min.js">
</script>
<script type="text/javascript">
  (function(){
    emailjs.init("YOUR_USER_ID");
  })();
</script>
That said, anyone can definitely see this in the source of the page in their browser. You are right to be cautious about anything sensitive in client-side JavaScript.
To prevent anyone from using your userId on their own website (which is very unlikely, since it only triggers emails that you configured), you can apparently whitelist your own domain with their paid plan.
The .env file, when used in a frontend project, only sets environment variables that are used at compile time. The file itself never reaches the browser, but the values are often just interpolated (e.g. with webpack's DefinePlugin) into the final bundle source, so there is nothing inherently more secure here.
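For illustration, a minimal sketch of that interpolation with webpack's DefinePlugin (the variable name is hypothetical):

// webpack.config.js - sketch, assuming webpack 4/5
const webpack = require('webpack');
module.exports = {
  // ...rest of the config...
  plugins: [
    new webpack.DefinePlugin({
      // the value is baked into the bundle as a string literal at build time
      'process.env.REACT_APP_USER_ID': JSON.stringify(process.env.REACT_APP_USER_ID),
    }),
  ],
};

After the build, every occurrence of process.env.REACT_APP_USER_ID in the source is replaced by the literal value, which anyone can find in the shipped JavaScript.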
WARNING: Do not store any secrets (such as private API keys) in your React app!
Environment variables are embedded into the build, meaning anyone can view them by inspecting your app's files.
# (s) for sensitive info
.env -> compilation -> bundle -> browser -> third-party
(s)         (s)         (s)        (s)          (s)
That said, when used in a Node.js server, the .env file serves to set, again, environment variables, but this time, at the start of the application. These values are not shared with the frontend though, so one way to use this as a secure solution is to expose your own endpoint, whitelisting only your own domain, which then uses the sensitive information only on the server.
.env -> Node.js server -> third-party
(s)          (s)              (s)
              ^
              | (api call)
...bundle -> browser
But then again, here, EmailJS' userId is not sensitive information.

You should never have sensitive info in the frontend. You should, for instance, have a Node.js instance running, expose an endpoint to the frontend, and call it. Then, inside your Node.js application, you should have a .env file with your credentials.
Then just use the .env info from your Node.js server, as in the sketch below.
If you have sensitive info in the frontend, you are exposing everything.
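A minimal sketch of that setup, assuming Express and dotenv are installed (the endpoint path and variable name are illustrative):

// server.js - the secret stays on the server; the browser only calls this endpoint
require('dotenv').config(); // loads SECRET_API_KEY from .env at startup
const express = require('express');
const app = express();

app.post('/api/send-email', (req, res) => {
  const secret = process.env.SECRET_API_KEY;
  // ...call the third-party service here using `secret`...
  res.json({ ok: true });
});

app.listen(3000, () => console.log('listening on 3000'));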

First, install dotenv in your project:
npm install dotenv
Then check your package.json to confirm it was installed; you should see an entry like "dotenv": "^10.0.0". Configure it at the top of the file where you want to use it with require('dotenv').config();.
Any value defined in the .env file is then read through process.env, as in the sketch below.
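For illustration, a minimal sketch (the variable name API_KEY is hypothetical):

# .env - keep this file out of version control
API_KEY=some-secret-value

// index.js
require('dotenv').config(); // reads .env and populates process.env
console.log(process.env.API_KEY); // -> 'some-secret-value'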
For more information on handling sensitive info, please see https://www.youtube.com/watch?v=17UVejOw3zA
Thank you.

Related

Deploying React.js and Node.js full stack on AWS production?

I have currently deployed the React and Node.js apps on nginx, which sits on AWS. I have no issues in deployment and no errors.
The current environment is: PRODUCTION.
But I have a doubt whether the method I follow is right or wrong. This is the method I followed: https://jasonwatmore.com/post/2019/11/18/react-nodejs-on-aws-how-to-deploy-a-mern-stack-app-to-amazon-ec2
The following is my nginx configuration
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        root /var/apps/front_end/build;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://0.0.0.0:3005/;
    }
}
As shown above, I copied the build folder (after npm run build) to the AWS instance and pointed nginx at that location; the backend is copied as-is to the AWS instance, and after npm start it runs on port 3005, which the /api location proxy-passes to.
I see a couple of others using a server.js for the front end, putting the build folder files there, and pointing nginx at that server.js.
So should I do it that way, or am I good with the current method as seen in the link above?
Just like everything else, there are multiple ways to go about this. Judging by the way you have ended the question, it looks like you are open to exploring them.
Here are my preferences, in increasing order of responsibility on my side vs. what AWS handles for me:
AWS Amplify:
Given that you are already using React and Node, this will be a relatively easy switch. Amplify is not only a very useful frontend framework, making it easy to add functionality like authentication, social logins, and rotating API keys (via Cognito and API Gateway), but also backend logic that can eventually be deployed on AWS API Gateway and AWS Lambda. Amplify also provides a CI/CD pipeline and connects with GitHub.
In minutes, you can have a scalable service, with the option to host the frontend via AWS CloudFront (a global CDN service) or via S3 hosting, deploy the API via API Gateway and Lambda, have a CI/CD pipeline set up via AWS CodeDeploy and CodeBuild, and have user management via AWS Cognito. You can have multiple environments (dev, test, beta, etc.) and set things up so that any push to the master branch is automatically deployed to the infra, with other branches mapped to specific environments. To top it all off, the same stack can be used to test and develop locally.
If you are rather tied down to using a specific service or function in a specific way, you can build up any combination of the above services: API Gateway for managing the API, Cognito for user management, Lambda for compute capacity, etc.
Remember, these are managed services, so you offload a lot of engineering hours to AWS, and being serverless means you pay for what you use.
Coming to the example you have shared: you don't want your Node process to be responsible for serving static assets. It's a waste of compute power, since there is no intelligence attached to serving JS, CSS, or images, and you would introduce an extra process into the loop. Instead, have nginx serve static assets itself, as sketched below. Refer to the official guide or this StackOverflow answer.
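For example, building on the question's config, a sketch of serving static assets straight from nginx with long-lived caching (the cache values are illustrative, not required):

location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico)$ {
    root /var/apps/front_end/build;
    expires 30d;
    add_header Cache-Control "public";
}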

Load local file using Netlify functions

I've written a script which takes a JSON file and outputs it to an API endpoint using Netlify's Functions feature (https://functions.netlify.com/). For the most part, this works without a hitch, however, one of my endpoints has a lot of text and for ease of editing, I've split the large text blocks into markdown files which I then loaded into the endpoint.
Locally, this works perfectly, but when deployed I get a console error saying Failed to load resource: the server responded with a status of 502. I presume this is because I used a Node fs method and Netlify doesn't allow that; however, I can't find any information about this.
The code I've used is here:
const marked = require('marked')
const clone = require('lodash').cloneDeep
const fs = require('fs')
const resolve = require('path').resolve

const data = require('../data/json/segments.json')

// Clone the object
const mutatedData = clone(data)

// Mutate the cloned object in place (forEach, since the return value is not used)
mutatedData.forEach(item => {
  if (item.content) {
    const file = fs.readFileSync(resolve(`./src/data/markdown/${item.content}`), 'utf-8')
    item.content = marked(file)
  }
})

exports.handler = function (event, context, callback) {
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ data: mutatedData })
  })
}
I've also attempted to replace
const file = fs.readFileSync(resolve(`./src/data/markdown/${item.content}`), 'utf-8')
with
const file = require(`../data/markdown/${item.content}`)
but that complains about a loader, and I'd like to avoid adding webpack configs if possible as I'm using create-react-app; besides, I doubt it would help, as I'd still be accessing the file system after build time.
Has anyone else come across this issue before?
At the time when this answer is written (September 2019), Netlify does not seem to upload auxiliary files to AWS Lambda, it appears that only the script where the handler is exported will be uploaded. Even if you have multiple scripts exporting multiple handlers, Netlify seems to upload them into isolated "containers" (different AWS instances), which means the scripts will not be able to see each other in relative paths. Disclaimer: I only tested with a free account and there could be settings that I'm not aware of.
Workaround:
For auxiliary scripts, make them into NPM packages, add into package.json and require them in your main script. They will be installed and made available to use.
For static files, you can host them on Netlify just like before you have AWS Lambda, and make http requests to fetch the files in your main script.
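For the second workaround, here is a minimal sketch (it assumes the markdown files are deployed as static assets on your Netlify site and that node-fetch is available; the URL and paths are hypothetical):

// functions/segments.js - fetch static files over HTTP instead of reading the file system
const fetch = require('node-fetch')
const marked = require('marked')
const data = require('../data/json/segments.json')

exports.handler = async function (event) {
  const mutatedData = await Promise.all(
    data.map(async item => {
      if (!item.content) return item
      // the /markdown/ path is hypothetical; use wherever the files are hosted
      const res = await fetch(`https://your-site.netlify.app/markdown/${item.content}`)
      const file = await res.text()
      return { ...item, content: marked(file) }
    })
  )
  return {
    statusCode: 200,
    body: JSON.stringify({ data: mutatedData })
  }
}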

Environment variables returning undefined when connecting Firebase Firestore to a node.js server hosted on Heroku

I am quite new to the Node.js way of doing things, coming from using PHP and SQL for everything. I was recently trying to set up a Node server on Heroku in order to do some back-end logic on my Google Firestore database.
I didn't want my private key to be publicly accessible as a file, and I saw from a previous Stack Overflow question that the easiest way was to store the values as environment variables and use them to set up the certificate that initialises firebase-admin.
I have now tried to set this up, hosting it on Heroku. Having defined my environment variables, I double-checked that the names are correct and have been set using heroku config:set, and then checked using heroku config. I would show this, but it obviously contains my private key!
So when I try to run this segment of code:
// requires firebase module
var admin = require('firebase-admin');

const private_key = process.env.FIREBASE_PRIVATE_KEY_ID;
console.log(private_key);

// initialises a firebase app with the credential
admin.initializeApp({
  credential: admin.credential.cert({
    "private_key": process.env.FIREBASE_PRIVATE_KEY_ID,
    "client_email": process.env.FIREBASE_CLIENT_EMAIL,
    "project_id": process.env.FIREBASE_PROJECT_ID,
    "private_key_id": process.env.FIREBASE_PRIVATE_KEY_ID
  }),
  databaseURL: "https://MY_APP.firebaseio.com"
});

// get access to firestore from initialised app
var db = admin.firestore();
With MY_APP changed out from what it is in the source code.
So when I run this and console log the key, I get:
[1
I am sorry if this is a trivial problem; as I say, I am definitely a beginner with Node. I have done a bit with the HTTP module for handling some requests, but not connecting to Firebase. Any advice or help would be greatly appreciated!
heroku config:set sets environment variables on Heroku, but heroku local runs your application on your local machine:
Heroku Local is a command-line tool to run Procfile-backed apps. It is installed automatically as part of the Heroku CLI. Heroku Local reads configuration variables from a .env file. Heroku Local makes use of node-foreman to accomplish its tasks.
heroku local will automatically read a file called .env and set environment variables for you based on what it finds there. This file should not be tracked by Git (it's for your local environment, not for Heroku, and as you mentioned it will contain sensitive information that shouldn't be checked in anyway).
If you want to copy a configuration variable that you currently have on Heroku you can add it to your .env by running
heroku config:get CONFIG-VAR-NAME -s >> .env
(There are actually many ways to set environment variables on your local machine, and any of them will work with heroku local. If you prefer another method, go for it.)
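As a quick sanity check, you can fail fast when the variable is missing (a sketch; the variable name matches the question's code):

// top of your server script - verify the variable is present before initialising
if (!process.env.FIREBASE_PRIVATE_KEY_ID) {
  console.error('FIREBASE_PRIVATE_KEY_ID is not set; check your .env (heroku local) or heroku config (deployed)');
  process.exit(1);
}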

MongoDB Cloud9 Connection

So, I am wondering if there is a way to connect to the MongoDB I have set up in my Cloud9 workspace from an HTML page. I mean, I have already connected to the db from the terminal and everything is working like a charm, but I need to do some stuff inside my script in an HTML document, and when I try calling the function which contains this code it does nothing:
var MongoClient = require('mongodb').MongoClient,
    format = require('util').format;

MongoClient.connect('mongodb://127.0.0.1:27017/ingesoft', function (err, db) {
  if (err) {
    throw err;
  } else {
    console.log("successfully connected to the database");
  }
  db.close();
});
I have saved the same code into a file.js and run it from the console using node file.js, and it outputs "successfully connected to the database" into the console log; plus, the terminal running Mongo's connection shows me one more connection to the db. The thing is, when I try to run that code inside my script it doesn't work. Sorry for my ignorance, I am new to Mongo.
Any help would be much appreciated
To simplify your question, here's what's going on:
node file.js containing the code in your question is working
pasting the same code to your html file is not
So, getting to the bottom of the issue, let's ask first: what's the difference between running node file.js and putting the code in html?
The difference is that node ... is running on your Cloud9 workspace (let's call it the server machine).
Your MongoDB server is also running on that server machine
The mongodb npm package you installed is also present on the server machine
The url: mongodb://127.0.0.1:27017/ingesoft references 127.0.0.1 which is the localhost for your server
whereas with the code on your browser:
The code is being run on your customer's machine
That machine doesn't have your Mongodb server
Browsers usually don't support require
You can do requires if you bundle code and use something like webpack or browserify. Did you perhaps do that?
If you did indeed package everything, was the mongodb package that you're requiring packaged?
Can the mongodb package be run from the client side?
The url: mongodb://127.0.0.1:27017/ingesoft references 127.0.0.1 which is the localhost for your customer's machine
Basically, as you can see from the above, the two are very different.
If you want to talk to your db, a lot of people go the following route:
Make a server application that implements some form of REST API
That REST API talks to your DB
Your client code knows how to talk to the REST API and get the required data
That way, you only talk to your MongoDB using your server, and the client can talk to your server via the internet.
This is, of course, an oversimplification, but I hope this resolves your confusion.
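For illustration, here is a minimal sketch of that route, assuming Express and a 3.x+ mongodb driver on the server side (the collection name, port, and endpoint path are hypothetical):

// server.js - runs on the Cloud9 workspace, next to MongoDB
const express = require('express');
const { MongoClient } = require('mongodb');
const app = express();

app.get('/api/ingesoft', async (req, res) => {
  const client = await MongoClient.connect('mongodb://127.0.0.1:27017');
  try {
    // 'items' is a hypothetical collection name
    const docs = await client.db('ingesoft').collection('items').find().toArray();
    res.json(docs);
  } finally {
    await client.close();
  }
});

app.listen(8080, () => console.log('API listening on 8080'));

The browser then talks to this endpoint instead of the database, e.g. fetch('/api/ingesoft').then(r => r.json()).then(console.log);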

Save File to Webserver from POST Request

I am making a POST request with some JavaScript to a Python script in my /var/www/cgi-bin on my web server, and then in this Python script I want to save the image file to my html folder, located at /var/www/html, so it can later be retrieved. Right now the only way I know how to do this is to use chmod 777, which I do not want to do.
So how else can I save a file that I grab from my webpage using JavaScript and then send to the server with JavaScript via POST?
Currently when I do this I get an error saying Python does not have permission to save, as its mode is 755.
Here is the Python code. I know it works, as the error just says I don't have permission to write the file:
fh = open("/var/www/html/logo.png", "wb")
fh.write(photo.decode('base64'))
fh.close()
If you don't want to change the permission of that directory to 777, you can change the owner of the directory to your HTTP server user; then your web app will be able to write files into that directory, because the owner has rwx (7) permission on it.
To do that (since you're using Apache as your web server, remember to log in as root):
chown -R apache:apache /var/www/cgi-bin/
Remember that then only the user apache and root have rwx on that directory, and others have rx.
And this command means:
chown - change the owner of the directory
-R - operate on files and directories recursively
apache:apache - apache user, apache group
/var/www/cgi-bin/ - the directory
Try the man chown command to check the manual page of chown and learn more; here's an online version.
If you need to change it back: I think the default owner of that directory is root, so log in as root and run:
chown -R root:root /var/www/cgi-bin/
We solved the problem in the chat room.
The error message directly indicates that the role/user the Python server runs as doesn't have write access to the folder. You need to grant write access to either that role or the web server user. Make sure to give only write access, and not write + execute access.
