I am using Node and the AWS JavaScript SDK. The workflow is like this:
The web app calls an API that creates a few files and downloads them to EBS; these files are then uploaded to S3. Depending on file sizes etc., it is possible that one file starts uploading while another upload hasn't finished yet.
Is this OK for S3? Thanks.
it is possible that one file starts uploading while another uploading hasn't finished yet
This is normal; it just means the uploads are asynchronous. A synchronous upload would mean that you start uploading the next file only after the previous one has finished.
IMO, the best approach is to run the uploads asynchronously (as a batch), collect the responses from S3, and then send back the URIs of the uploaded files.
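For illustration, here is a minimal sketch of that batch approach using the SDK v2's upload() with Promise.all; the bucket name and file paths are placeholders:

```js
// A minimal sketch of concurrent (batch) uploads with the AWS SDK for JavaScript (v2).
// The bucket name and file paths are placeholders.
const fs = require('fs');
const path = require('path');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

async function uploadBatch(filePaths, bucket) {
  // Start all uploads at once; S3 handles concurrent uploads to different keys fine.
  const uploads = filePaths.map((filePath) =>
    s3.upload({
      Bucket: bucket,
      Key: path.basename(filePath),
      Body: fs.createReadStream(filePath),
    }).promise()
  );

  // Wait for every upload to settle, then return the object URLs.
  const results = await Promise.all(uploads);
  return results.map((r) => r.Location);
}

// Example usage (hypothetical paths/bucket):
// uploadBatch(['/mnt/ebs/a.csv', '/mnt/ebs/b.csv'], 'my-bucket')
//   .then((uris) => console.log('Uploaded:', uris));
```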
Related
I need to upload large files to S3 using the AWS SDK. I see the upload API (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property) does exactly that, but the issue is with setting the body: I would need to read the whole file upfront, and that may not be possible for huge files due to memory constraints.
Is there a better way of doing it without reading the whole file into memory first?
It is possible to upload files in chunks rather than in a single upload. In fact, AWS recommends using multipart upload when uploading files that are bigger than 100 MB.
Multipart upload allows you to upload a single object as a set of parts. If transmission of any part fails, you can re-transmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object.
Follow this official AWS link to learn more about multipart uploads:
http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
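As a minimal sketch under the SDK v2 API: if you pass a read stream as Body, upload() streams the file and performs the multipart upload for you, so the whole file never has to sit in memory. The bucket name, key, and file path below are placeholders:

```js
// A minimal sketch: pass a read stream as Body so upload() streams the file
// and performs a multipart upload under the hood, instead of buffering it all in memory.
const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

const upload = s3.upload(
  {
    Bucket: 'my-bucket',        // placeholder bucket name
    Key: 'big-file.bin',        // placeholder object key
    Body: fs.createReadStream('/path/to/big-file.bin'),
  },
  {
    partSize: 10 * 1024 * 1024, // 10 MB parts
    queueSize: 4,               // up to 4 parts in flight at once
  }
);

upload.on('httpUploadProgress', (p) => console.log(`${p.loaded} bytes sent`));

upload.promise()
  .then((data) => console.log('Done:', data.Location))
  .catch((err) => console.error('Upload failed:', err));
```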
I'm developing a website in which I have to upload files to the server. There are many file-upload controls out there, but none of them has served my purpose: I want to upload, say, 1000 files, but in chunks of 200 files, so that server calls are kept to a minimum. In that scenario, 5 calls would be made to the server. I have looked into Plupload and Dropzone, but each of them makes a separate call to the server, i.e. 10 files means 10 server calls. Is there any file-upload control that serves this purpose, or any option in the above-mentioned controls that I can make use of?
Look into Fine Uploader, and perhaps use Amazon S3 as the server-side storage:
http://fineuploader.com/
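Whichever control you pick, the batching itself can also be done by hand: slice the selected files into groups and send each group as one multipart/form-data request. A minimal sketch, assuming a hypothetical '/upload' endpoint and 'files' field name:

```js
// A minimal sketch of client-side batching: split the selected files into
// groups of batchSize and send each group as a single multipart/form-data POST.
// The '/upload' endpoint and 'files' field name are hypothetical.
async function uploadInBatches(files, batchSize = 200) {
  for (let i = 0; i < files.length; i += batchSize) {
    const batch = files.slice(i, i + batchSize);
    const form = new FormData();
    batch.forEach((file) => form.append('files', file, file.name));

    // One request per batch: 1000 files with batchSize 200 = 5 requests.
    const res = await fetch('/upload', { method: 'POST', body: form });
    if (!res.ok) throw new Error(`Batch starting at ${i} failed: ${res.status}`);
  }
}

// Example usage with a plain file input:
// document.querySelector('input[type=file]').addEventListener('change', (e) => {
//   uploadInBatches(Array.from(e.target.files));
// });
```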
Is there a file-select library, similar to Dropzone, that just manages the file selection process and lets me control how the files are uploaded? It seems to me, unless I have missed something, that most of these libraries require you to specify a server-side endpoint that will handle the upload. Currently, my setup is such that files are sent to my server and then to S3, which is far less efficient than sending them directly to AWS via the JavaScript API.
I like how Dropzone lets me select files from multiple directories, adding to the list of files to upload on the fly.
We are currently using filepicker.io's pickAndStore to allow users to upload multiple files (some small, some big) to an S3 bucket.
What happens to files that have finished uploading when a user closes the filepicker modal? On the JavaScript client side, onError is passed a 101 error, but it gives no information about completed files.
Are they removed by filepicker from our S3 bucket, or should we assume that there are stale files hanging around?
Thanks!
I haven't found a solution to this issue yet either, and it's causing orphaned files in our S3 bucket (files our app does not know about) which we need to clean up later.
How to reproduce the issue:
Set up filepicker's pickAndStore to allow multiple file uploads.
Upload 5 files.
Let three complete.
Close Filepicker window.
Issue: We never receive a partial JS callback for the three successes, letting us know that three files are now sitting in the S3 bucket.
Two proposed solutions:
1. For multiple uploads, provide a callback that fires when all uploads are complete, failed, or the window was closed, with arrays indicating which files succeeded and which failed. I like this option more.
2. Have a success/failure callback per file? (A hypothetical shape for this is sketched below.)
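To illustrate option 2, here is a hypothetical sketch of what per-file hooks could look like; the onFileSuccess/onFileError options are invented for illustration and are not part of filepicker's actual API:

```js
// Hypothetical shape for per-file callbacks (option 2). The onFileSuccess and
// onFileError hooks do not exist in filepicker's current API; they only
// illustrate the proposal.
const uploaded = []; // files confirmed in the S3 bucket
const failed = [];   // files that errored out

filepicker.pickAndStore(
  { multiple: true },
  { location: 'S3' },
  (allBlobs) => console.log('All done:', allBlobs),
  (error) => {
    // On error 101 (dialog closed) we would still know what landed in S3:
    console.log('Dialog closed; already uploaded:', uploaded);
  },
  {
    // Proposed per-file hooks:
    onFileSuccess: (blob) => uploaded.push(blob),
    onFileError: (file, err) => failed.push({ file, err }),
  }
);
```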
Is it possible to download files with AJAX requests via XHR2? I am able to upload files to the server, but I cannot find a way to download them.
PHP Version 5.4.12
Apache/2.4.4 (Win64) PHP/5.4.12
I need to find a solution to download very large files (up to 4 GB). To save server memory, the files need to be sent in chunks. I also need to monitor download progress, to provide feedback especially on large files.
I have tried many things, but nothing works well: I run out of PHP memory, it takes very long for the download to start, I cannot monitor the progress, cURL is very slow, X-Sendfile provides no download progress, or I cannot run two PHP scripts at once (one sending the data while the other monitors the progress).
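On the client side, a plain XHR2 request can at least report download progress while the PHP script streams the file in chunks; a minimal sketch, assuming a hypothetical download.php endpoint that sends a Content-Length header:

```js
// A minimal sketch of a client download with progress via XHR2.
// 'download.php?file=big.zip' is a placeholder URL; the PHP side must stream
// the file (e.g. read and echo it in small chunks) so server memory stays bounded.
function downloadWithProgress(url, onProgress) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.responseType = 'blob'; // receive the body as a Blob, not a string

    xhr.onprogress = (e) => {
      // lengthComputable requires the server to send a Content-Length header.
      if (e.lengthComputable) onProgress(e.loaded / e.total);
    };
    xhr.onload = () =>
      xhr.status === 200 ? resolve(xhr.response) : reject(new Error(`HTTP ${xhr.status}`));
    xhr.onerror = () => reject(new Error('Network error'));
    xhr.send();
  });
}

// Example usage: save the blob via a temporary object URL.
// downloadWithProgress('download.php?file=big.zip', (f) => console.log(`${Math.round(f * 100)}%`))
//   .then((blob) => {
//     const a = document.createElement('a');
//     a.href = URL.createObjectURL(blob);
//     a.download = 'big.zip';
//     a.click();
//   });
```

Note that responseType 'blob' still buffers the whole file in browser memory, so for files approaching 4 GB a streaming approach (e.g. fetch with a ReadableStream response body) may be needed instead.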