I want to make a server in Node.js that allows users to upload files to AWS S3.
For each uploaded file there is a globally unique URL by default, as shown below:
https://s3.amazonaws.com/aws-website-test-psjjm/about.html
What are the different strategies to setup access control for these files uploaded to S3, using Node.js?
Take a look at the S3 access overview:
https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-overview.html
There are two main types of access control: bucket policies and ACLs. Which you use will depend on what you are trying to achieve; you'll need to go through each and decide which works best for you. These can control both account-level access and external access to your bucket.
ACL docs
IAM policies docs
Once you know what you want to achieve, you can apply those policies to the bucket in a number of ways, such as directly in the console, via CloudFormation, or with the CLI.
You can find examples here
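As a concrete sketch (bucket name and prefix here are placeholders, not from the question), a bucket policy that grants public read on a single prefix while leaving the rest of the bucket private looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForPublicPrefix",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/public/*"
    }
  ]
}
```

Anything not matched by a statement stays private, so you can mix a public prefix for website assets with private prefixes for user uploads.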
I am working on a React project where users can upload files and generate a unique passcode, which creates a folder in my S3 bucket named with this passcode. The user (or someone else) can then access the website on another computer, type in this passcode, and retrieve the files.
I don't have much experience with S3 so the settings are a bit overwhelming. How can I configure a bucket for this project? I read about something called a "signed-URL". Would that accomplish what I want to do?
This sounds like a Pastebin with a password, except that it is multiple files under one code. It's also a bit similar to Dropbox, in the way that it can 'share' files.
I would recommend:
Your app generates a Unique ID (UUID)
Your app invites the user to upload a set of files:
These can be uploaded to Amazon S3 using pre-signed URLs, which allow the files to go straight to S3. Make sure they are uploaded to a path prefixed with the UUID.
The app gives the user the UUID for later retrieval
Another user goes to the app and requests files, providing the UUID
The app then presents a list of files from that path. When showing this list, the app creates an Amazon S3 pre-signed URL for each file, allowing the user to download them directly from S3.
You have some process that 'cleans up' files after a period of time, either based on the upload time and/or the download time
Basically, the Amazon S3 bucket is kept private and all objects are kept private. There is no configuration required on the bucket or the objects. Instead, the 'magic' comes from your application generating pre-signed URLs, which allow time-limited access to a private object.
Please refer to this article: How to host a website on S3 without getting lost in the sea.
Maybe this could be of help for the scenario you mentioned in the question.
Please refrain from asking for a full tutorial on Stack Overflow, as it is against community guidelines. We are happy to point you in the right direction, but it's not right to ask for full code or a tutorial.
If you really need the full app or code here, just post your broken code and someone will definitely fix it for you, but it's not a free service.
My website is hosted on Firebase. The website has an <iframe> element that loads aaa.html from a bucket in Google Cloud Storage:
<iframe src="https://storage.googleapis.com/bucket/aaa.html" />
And aaa.html will also load other files (js files or img files) stored in the same bucket.
When I use gsutil iam ch allUsers:objectViewer gs://bucket to make the bucket and all its files public, the website works perfectly. But I do not want users to be able to link to aaa.html outside of my website, so making the bucket public seems incorrect.
So, is there a way to make bucket public to a specific website?
OK. You mentioned you don't want your files linkable outside your website, so signed URLs are not for you: anyone holding a signed URL can use it until it expires.
I think you can achieve your goal by doing the following:
Define a service account with the roles and permissions you need to get the stored objects in your bucket. This service account will be used by your application.
Use the GCS API to fetch the stored files in your bucket from your backend. Here is how to use a service account to call an API in your code:
Hope this is helpful :D
I have a web service involving the attachment of multiple documents to one of many "objects". To simplify this process and make it easier to edit each file individually if the user so desires, I want the user to be able to synchronise all of these files onto a directory on his/her computer (much like that of Google Drive or Dropbox). If the user were to change one of these files, remove a file, or add a file, this would reflect on my web service, and hence affect the files that are attached to this "object".
What would be the best choice of services to do this? I am currently using a Node.JS back-end, although I suspect this will do little to influence the choice of storage. I'm looking for a service that allows the user the flexibility of full filesystem-level CRUD, whilst synchronising the files in a secure manner to a subset of a larger object storage collection, much like synchronising (and only providing access to) a subdirectory of an AWS S3 bucket, hence the title of this question.
I'm currently looking into (somehow) doing this with AWS S3, although I am open to using another storage service.
Thanks in advance.
AWS Storage Gateway (cached gateway) allows you to edit files locally and the Gateway will synchronize the update automatically over to S3.
You will need to install a small VM on the machine. Typically, if your clients have a private data centre or server, this configuration allows a shared folder (or a NAS) to be synchronized with S3.
We are creating an app where every user has a designated dropbox folder which is located in a dropbox folder created only for the app. The user should have only access to his own folder.
The problem is that with the created API access token you have access to all folders of all users. In our app we are able to restrict access so the user can only reach his own folder, but because the access token must be hard-coded into the web app, anyone could eventually get hold of it. With the access token they would have access to all user folders (and the client data would be unsecured).
So there are two possibilities:
We access Dropbox via PHP and restrict the access there. The app gets the user folder via AJAX and the PHP script handles the restrictions. But there is no possibility to access Dropbox via PHP (in API v2).
The data is stored on the users own Dropbox accounts, but we don't want the users to need an own Dropbox account to get access to our app functionalities. And the company should always have access to all user folders.
Is there any possibility to encrypt and hide the access token in the javascript code? Or are there other ways to solve this problem?
As noted in the comments, you can't just hide the access token in JavaScript. While you can make it more difficult for an attacker to extract the token, you can't make it impossible. (Client-side apps, such as in browser JavaScript, fundamentally can't keep secrets.)
A few other notes:
But there is no possibility to access Dropbox via PHP (in API v2).
This isn't really true. While Dropbox does not offer an official PHP SDK for Dropbox API v2, you can still access Dropbox API v2 from PHP either using the HTTPS endpoints directly, or using a third party library.
The data is stored on the users own Dropbox accounts, but we don't want the users to need an own Dropbox account to get access to our app functionalities
The API was designed with the intention that each user would link their own Dropbox account, in order to interact with their own files. Accessing a single account like this isn't recommended.
I just set up a simple JS photo uploader on my site. It uploads photos directly to a Google Cloud Storage bucket.
I used the code from their official JavaScript library.
But if you look at that code, you'll see that it requires authentication. I'm authenticated and able to upload photos, but I want everyone to be able to just upload their files without signing in to their Google Accounts.
Is that possible?
You could use a POST policy doc: https://cloud.google.com/storage/docs/xml-api/post-object#policydocument
This will let you build a signed request with various constraints built in (content-length-range, content-type, etc.), allowing unauthenticated users to upload to your bucket within the given constraints.