In Meteor we put all sensitive code in /server and browser code in /client. Meteor then automatically compiles and minifies all /client code for us. Thanks Meteor.
However, I'm using https://github.com/alanning/meteor-roles to manage content by user roles. One of those roles is administrator, and I have client-side scripts for use only by that user, e.g. /client/admin-only/**.js. All code in those scripts checks that the user is an administrator and only calls the server to do sensitive tasks, but I don't want anyone but an administrator to even be able to see that code.
What I want to ensure is that these client admin JS files are only downloaded to users who are actual administrators and not included in the auto-compiled/minified JS created by Meteor.
Is there any way to set up Meteor to generate two versions of its client JS, one for normal users and one for administrators, and only download those files based on user role?
The Meteor Guide addresses this issue:
While the client-side code of your application is necessarily accessible by the browser, every application will have some secret code on the server that you don’t want to share with the world. Secret business logic in your app should be located in code that is only loaded on the server. This means it is in a server/ directory of your app, in a package that is only included on the server, or in a file inside a package that was loaded only on the server.
Basically, MDG's guidance is to dumb down that admin view as much as possible. If that's not acceptable, you'll need to bundle it as a separate Meteor application, either hosted only on an internally accessible network, or using two MongoDB instances so you can separate authentication out for the second app.
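Whichever way you split the bundles, the sensitive work itself should stay on the server. Here's a minimal sketch, assuming alanning:meteor-roles and a hypothetical method name, of gating a privileged task in code that lives under /server:

Meteor.methods({
  'admin.sensitiveTask'() {
    // "admin" is a placeholder role name; use whatever role your app defines.
    if (!Roles.userIsInRole(this.userId, 'admin')) {
      throw new Meteor.Error('not-authorized');
    }
    // ...perform the privileged work here, server-side only...
  }
});

Because this file lives in /server it is never shipped to the browser, so even someone who reads the admin client code can't see how the sensitive task actually works.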
I have downloaded Node.js and have created the Firebase function files in the website directory (firebase.json, the functions folder, and others).
If I were to write JavaScript Cloud Functions inside the project/functions/index.js file, they wouldn't be private when I upload them to my GitHub repository for my static website (something.github.io).
So how would I go about calling the Firebase Cloud Functions in my index.js from my static website without uploading the index.js (to keep certain functions private)?
Edit:
I understand now that there are environment variables, but how can I incorporate them into a GitHub Pages website with the Firebase Admin SDK and Cloud Functions?
How do I upload my GitHub Pages project and still link my client side to the environment variables? Would I need to upload my index.js containing my Cloud Functions? But doesn't uploading the index.js defeat the purpose of the client not being able to see the functions/data?
The comment below mentioned something called Heroku; what exactly is the purpose of it when I already have GitHub and Firebase interacting with my website and database?
I also saw a method of using dotenv to create a .env file for secret data (such as API keys) and using .gitignore to prevent the file from being uploaded. Would that work on GitHub Pages, and if so, would the client be able to see the .env? And if they can't, can the website still read from the .env even though it isn't pushed to GitHub?
This is a good use of environment variables. Essentially, say you had an API key of 12345 and a function like this:
async function fetchResults() {
  await fetch("myapi.com/lookup?key=12345");
}
You could do instead:
async function fetchResults() {
  await fetch(`myapi.com/lookup?key=${process.env.API_KEY}`);
}
The environment variable is kept on your machine so it never goes to GitHub, hence allowing you to expose the code as open source while maintaining confidentiality of your sensitive keys.
Edit: So I reread your question and I see you're talking about publishing to GitHub Pages. The key thing to note is that the user will be able to see anything that is hosted client-side. GitHub Pages only hosts the "client" portion of your application. So if your client (browser) website makes an API call to myapi.com/lookup?key=12345, they will be able to see it no matter what, since it's their browser making the request and they can see anything their browser does.
However, the best practice here is to write server-side code to run your application as well. For this, you can use what I suggested above, where you add environment variables to whichever server you use to host (for example, you can do this easily with Zeit Now or Heroku). That way, you can share your code but your environment variables stay secret on the machine that runs your server-side code.
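As a rough sketch of that idea, a tiny server-side proxy might look like the following; Express, the /results route, and myapi.com are all assumptions for illustration, and it relies on a Node version where fetch is built in (18+):

const express = require("express");
const app = express();

app.get("/results", async (req, res) => {
  // The key is read from an environment variable on the server,
  // so it never appears in the client bundle or in the repository.
  const response = await fetch(
    `https://myapi.com/lookup?key=${process.env.API_KEY}`
  );
  res.json(await response.json());
});

app.listen(process.env.PORT || 3000);

Your GitHub Pages site would then call this server's /results endpoint instead of calling myapi.com directly, so the key never reaches the browser.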
I have a Node.js app on Heroku linked with GitHub. The problem is that every time I deploy the master branch to Heroku, the whole app gets overwritten on Heroku.
Usually this is perfectly fine, however in the root folder there's a database file which constantly gets updated through a chat app, and on every deploy it gets reset. The SQLite database gets automatically generated if it does not exist when the main script is run.
My question is: how do I make the SQLite database file the only persistent file in the Heroku app, so it doesn't get overwritten on deploys from the master branch on GitHub?
I have tried adding a .slugignore file and including database.sqlite there.
To answer your question, as you've discovered, you can ignore files when deploying to Heroku using a .slugignore file.
However, this won't solve your problem. Heroku's filesystem is ephemeral. Any changes you make to it will be lost whenever your dyno restarts. This happens frequently (at least once per day). As a result, even if you add database.sqlite to your .slugignore, your data will be lost.
The solution is to use a client-server database instead of a file-based database like SQLite. Heroku Postgres is a fairly straightforward choice that has a free tier. If you don't want to use PostgreSQL you can choose another database.
Whatever database you choose, I urge you to also switch to the same database in development. Database systems aren't drop-in replacements for each other, and you don't want to discover the differences when you're trying to deploy to production.
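For a rough idea of the switch, assuming the node-postgres ("pg") driver and the DATABASE_URL config var that the Heroku Postgres add-on sets for you, the chat app could write messages like this (the table and column names are just placeholders):

const { Pool } = require("pg");

// Heroku Postgres connections require SSL; DATABASE_URL is set by the add-on.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false }
});

async function saveMessage(text) {
  // Rows written here live in Postgres, so they survive deploys and dyno restarts.
  await pool.query("INSERT INTO messages (body) VALUES ($1)", [text]);
}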
I have a Laravel PHP app which is basically an API that the user will access through an Angular single-page app. Currently the Angular app is contained within the public folder, but I want to break it off on its own so that I can deploy it via Amazon CloudFront.
I found this article for hosting static websites on CloudFront which explains the basics, but I cannot find anything that discusses hitting an API from your CDN-served site.
I would still like to have three different environments, dev, staging, and production, which each currently have their own Elastic Beanstalk-managed instances and separate databases. I would like their addresses to be dev.blah.com, staging.blah.com, and blah.com respectively, and have each version of the Angular app hit the correct backend, etc.
I would like to be able to deploy the Angular app in a similar way to how I deploy to Elastic Beanstalk, i.e. git push.
Can I set it up so I don't need to modify the API endpoints in the Angular app for each environment, i.e. the dev version hits dev.blah.com/get/user/1 and, with the same source, staging hits staging.blah.com/get/user/1? Will this happen automatically, or do I need to take specific actions to allow for this?
Are all these things possible? I don't expect a step-by-step guide; I'm just looking for an outline of the process and a push towards where I can find the resources to learn how to do this myself, as my searches have not turned up much.
On CloudFront, in the "Behaviors" tab of your distribution, you can assign a path pattern to each origin. For example, you can specify that /* requests are routed to an S3 bucket with your static resources, while /api/* is routed to your API backend.
As for the dev/staging/prod environments, those would probably be three different distributions too. They can point to the same or to different origins.
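Because that routing happens at whichever distribution served the page, the Angular app can use relative URLs and the same build will hit the right backend in every environment. A small sketch of the idea (the module name, service name, and the /api/get/user/1 path are just illustrations):

angular.module('app').factory('UserApi', function ($http) {
  return {
    get: function (id) {
      // Relative URL: resolves against dev.blah.com, staging.blah.com or
      // blah.com, depending on which distribution served the page.
      return $http.get('/api/get/user/' + id);
    }
  };
});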
See "Whole Site Delivery with CloudFront"
I have developed a Rails app that currently uses a lot of JavaScript.
Most of this JavaScript is code that I have written myself, and it is generally only used in a few places (all by authenticated users).
Basically, what I would like to do is to only allow a user to access these JS files if they are logged in and authorized (I use Devise and CanCan for authentication and authorization).
I would still like the files to be precompiled (concatenated and minimised) the same as the asset pipeline does, but these files should then be stored somewhere not accessible to the public, and served by Rails (or similar) only when the user is authorized to access them.
I have tried and failed searching, but I feel I must be missing something simple, as this is surely common practice in a lot of Rails apps.
Therefore, I was hoping to get some help finding information on this matter, as I'm at a loss as to what I can do other than compiling the JS file manually and adding it to a view the user is authorized to access.
Any help would be appreciated!
Edit:
To clarify what I'm asking:
I want to try to find something similar to the asset pipeline that will concatenate and minify the JS files as normal.
Then, when the user tries to access this js file:
1. If the user is logged in, the JS file is served to the user as normal.
2. If the user is not logged in, the user is given an error message (or a 401 Unauthorized, 404 Not Found, or similar), meaning an unauthorized user cannot access the script.
Basically, something similar to what happens when you try to access a JSON file you aren't entitled to view.
You could simply use different layouts for logged-in users, or render a partial that includes your precompiled JavaScript.
e.g. in application.html.haml
- if current_user
  = javascript_include_tag "your_special_user_js"
I don't know if this answers your question, but it was my understanding that this is the behaviour you are trying to achieve.
I have an HTML5 app which is capable of running offline. However, I need to password protect the directory this app resides in to only allow access to authorized users. Initially I was using a PHP login page which set a cookie (outside of the app directory) then redirected to the app directory. The app (JavaScript) checks for the cookie and if it's there it lets the user run the app. If not, it redirects them back out of the app directory.
The problem with this method is that all of the files in the directory are still accessible if referenced directly (which I don't want). I do not want users to have to authenticate every time they hit the directory (it's a one-time authentication process; the cookie is there so that they never have to type their username/password again), and I also want to have a stylized login form (i.e. not using the default browser login box for http authentication).
Finally, because this is an offline HTML5 app, I can't include any PHP code in the app itself.
Any suggestions?
That doesn't sound like something you could do from JavaScript. The script would need access to the file system to be able to restrict access to the folder, wouldn't it?
Unless this feature is exposed by the browser via a JavaScript API, I don't think it will be possible. It sounds like it would be a useful feature, though.
Perhaps you could encrypt vital data, but apart from slowing down the application, I'm not sure what good it would do, since all the necessary keys would have to be stored locally as well...
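If you did go the encryption route, a hedged sketch of what it might look like with the Web Crypto API, deriving a key from a passphrase the user types at "login" (the function and variable names are placeholders, and this only obscures the data rather than truly protecting it):

async function encryptPayload(passphrase, payload) {
  const enc = new TextEncoder();
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12));

  // Derive an AES-GCM key from the passphrase with PBKDF2.
  const baseKey = await crypto.subtle.importKey(
    "raw", enc.encode(passphrase), "PBKDF2", false, ["deriveKey"]
  );
  const key = await crypto.subtle.deriveKey(
    { name: "PBKDF2", salt: salt, iterations: 100000, hash: "SHA-256" },
    baseKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt"]
  );

  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv: iv }, key, enc.encode(payload)
  );
  // salt and iv can be stored alongside the ciphertext; only the passphrase stays secret.
  return { salt: salt, iv: iv, ciphertext: ciphertext };
}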
Since the general rule of security on the web is that you can never ever rely on anything that happens client-side (e.g. in Javascript) without a double check on the server-side, this will of course pose a problem when the app is running offline and the server-side is not available :(
Looking at the "make JavaScript redirect if the cookie exists" problem, unless I'm mistaken, it would be trivial for a malicious user to edit the JavaScript, using for example Firebug, and bypass the redirect entirely.
EDIT: By the way, what level of security are you looking for? The "mom won't be able to accidentally access my account"-level (which it sounds like you already achieved), or the "no one, except maybe the NSA, should be able to hack it"-level?