Changes before upload in Slingshot Meteor package - JavaScript

I am using Slingshot to upload images to my Amazon S3 bucket and it works like a charm. However, I also want to use Slingshot to upload other files (text, doc, odf, etc.) to S3, but I want to convert all of these files to PDF first, before they are uploaded to S3. I am familiar with Node.js packages like "https://github.com/gfloyd/node-unoconv", which convert files to PDF, but how would I be able to integrate one of them with Slingshot?
Ultimately, I want every text, doc, etc. file that the client uploads to end up in S3 in PDF format. Is there any way to do that?
I'm an amateur in Meteor, so I would be grateful for a detailed explanation.
Thanks.

Slingshot uploads directly to S3, so I am not sure how you would efficiently convert the files before the upload. You could do the conversion client side, but that is obviously not 100% reliable and can be heavy for the client.
For a recent application I used AWS Lambda to resize images as soon as they are uploaded to an S3 bucket.
It is very easy to use: you just create a package with your code, upload it to Lambda, and trigger the Lambda function whenever a file is uploaded to S3.
The service is really cheap, and the AWS tutorial on image resizing with Lambda is easy to follow; it should be easy to adapt to PDF conversion, since you can use any npm package inside the function.
There are a lot of steps involved: creating S3 buckets, creating IAM roles, creating the Lambda function, etc. But once it is configured it is quite reliable.
I think it is the best tool to pair with S3, especially when you are uploading directly to buckets.
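For illustration only, here is a minimal sketch of what such an S3-triggered Lambda handler could look like (Node.js runtime, aws-sdk v2). The conversion step is just a placeholder, because a real function would need a conversion tool (e.g. a bundled unoconv/LibreOffice or an external service) available inside the Lambda environment; the destination bucket and the convertToPdf helper are hypothetical.

```
// Minimal sketch of an S3-triggered Lambda (Node.js runtime, aws-sdk v2).
// The document-to-PDF conversion itself is a placeholder; the destination
// bucket name and the convertToPdf() helper are hypothetical.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // S3 puts the triggering object's location in the event record.
  const record = event.Records[0];
  const srcBucket = record.s3.bucket.name;
  const srcKey = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

  // Download the file the client just uploaded (doc, odf, txt, ...).
  const original = await s3.getObject({ Bucket: srcBucket, Key: srcKey }).promise();

  // Placeholder: convert the Buffer to PDF with whatever tool you bundle
  // into the Lambda package or call out to.
  const pdfBuffer = await convertToPdf(original.Body); // hypothetical helper

  // Write the converted document to a destination bucket/key.
  await s3.putObject({
    Bucket: 'my-converted-files', // hypothetical bucket
    Key: srcKey.replace(/\.[^.]+$/, '.pdf'),
    Body: pdfBuffer,
    ContentType: 'application/pdf',
  }).promise();
};
```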

Related

Is there a way to compress Buffer data to a smaller size with NodeJS?

I'm working on a Lambda function which fetches images from Google Drive and uploads them to an S3 bucket.
The data I'm working with is a Buffer, and when I upload it to the S3 bucket the size is 2.8 MB. However, I need to compress it to be under 2 MB, and I can't seem to find a suitable library that can handle this server side.
Any advice?
For compressing images before storing them, you can use Jimp, an image-processing library that provides all the facilities you're looking for.
For reference, see the Jimp documentation.
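Purely as a rough sketch (not taken verbatim from the Jimp docs), re-encoding a Buffer at a lower JPEG quality might look like this; the quality value of 60 is an arbitrary starting point you would tune until the output fits under your 2 MB limit.

```
// Sketch: shrink an image Buffer with Jimp by re-encoding at lower JPEG quality.
// The quality value (60) is an arbitrary starting point, not a recommendation.
const Jimp = require('jimp');

async function compressImage(inputBuffer) {
  const image = await Jimp.read(inputBuffer);  // decode the Buffer
  return image
    .quality(60)                               // lower JPEG quality
    .getBufferAsync(Jimp.MIME_JPEG);           // back to a Buffer for the S3 upload
}
```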

Creating a web UI for an ftp file server that can archive folders as ZIP to download

I'm trying to create a file server to serve a bunch of files for download, with a web-based UI to search/filter and download the files.
Right now I'm using a basic FTP server, and I'm planning to create a simple (Flask + JS) web app that redirects me to the FTP URL for the selected file so it can be downloaded. The web app is needed so I can build my own tagging and filtering system.
However, I'm having trouble finding out how to download whole directories and folders as a ZIP, basically like Google Drive's "Download as zip" function, while still keeping the raw folder available for browsing.
Is there a way to do this?
This involves working with "blobs", a JavaScript way of dealing with streamable binary data.
I haven't tried this particular library myself, but there's an article here about using the JSZip library specifically for the kind of thing I think you're talking about: https://davidwalsh.name/javascript-zip
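Purely as an illustrative sketch (as said above, I haven't used the library myself), here is roughly how JSZip can bundle a few fetched files into one downloadable Blob in the browser; the file names and the endpoint URL are hypothetical.

```
// Sketch: fetch some files, pack them into a ZIP with JSZip, and offer the
// archive for download in the browser. URLs and file names are placeholders.
const zip = new JSZip();

async function downloadFolderAsZip() {
  const files = ['report.txt', 'data.csv'];              // hypothetical listing
  for (const name of files) {
    const response = await fetch('/ftp-proxy/' + name);  // hypothetical endpoint
    zip.file(name, await response.blob());               // add each file to the archive
  }

  // Generate the ZIP as a Blob and trigger a download via an object URL.
  const blob = await zip.generateAsync({ type: 'blob' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'folder.zip';
  link.click();
}
```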

S3 : How to upload a large file using S3 in nodejs using aws-sdk

I need to upload large files to S3 using the AWS SDK. I see that the upload API https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property does exactly that, but the issue is with setting the body: I would need to read the whole file upfront, which may not be possible for huge files due to memory constraints.
Is there a better way of doing it without reading the whole file into memory first?
It is possible to upload files in chunks rather than in a single upload. In fact, AWS recommends using Multipart Upload for files bigger than 100 MB.
Multipart upload allows you to upload a single object as a set of parts. If transmission of any part fails, you can re-transmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object.
Follow this official link on AWS to learn more about uploading using multipart:
http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
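As a side note (not from the AWS guide above), the JavaScript SDK v2 can also manage the multipart upload for you: s3.upload() accepts a readable stream as the Body and splits it into parts under the hood, so the file never has to be read into memory in one piece. A minimal sketch, with hypothetical bucket, key, and tuning values:

```
// Sketch: stream a large local file to S3 with a managed multipart upload
// (aws-sdk v2). Bucket, key, and tuning values are placeholders.
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const upload = s3.upload(
  {
    Bucket: 'my-bucket',                        // hypothetical bucket
    Key: 'backups/huge-file.bin',               // hypothetical key
    Body: fs.createReadStream('./huge-file.bin'),
  },
  {
    partSize: 10 * 1024 * 1024,                 // 10 MB parts
    queueSize: 4,                               // up to 4 parts in flight at once
  }
);

upload.on('httpUploadProgress', (p) => console.log('uploaded', p.loaded, 'bytes'));

upload.send((err, data) => {
  if (err) return console.error('Upload failed:', err);
  console.log('Upload finished at', data.Location);
});
```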

List all files in my S3 bucket on my website using JavaScript

I have an AWS S3 bucket with lots of images split into separate folders. I need to list all of those images on my website using JavaScript, and to be able to upload a new image to the S3 bucket directly from my website. What is the best way to do this in JavaScript?
Sorry mate, no offense meant here, but why don't you try reading the official documentation?
Here is a JavaScript example from Amazon that lists the S3 objects in a bucket:
https://aws.amazon.com/code/Amazon-S3/1713
I hope this helps.
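To give a rough idea (the bucket name and prefix below are placeholders, not taken from Amazon's sample), listing objects with the JavaScript SDK looks roughly like this; note that listObjectsV2 returns at most 1,000 keys per call, so a big bucket needs paging via ContinuationToken.

```
// Sketch: list objects in a bucket with aws-sdk v2. Bucket and prefix are
// placeholders; large buckets require paging with ContinuationToken.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.listObjectsV2({ Bucket: 'my-image-bucket', Prefix: 'photos/' }, (err, data) => {
  if (err) return console.error(err);
  data.Contents.forEach((obj) => console.log(obj.Key, obj.Size));
});
```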

Download and extract gz file using javascript [ClientSide]

I'd like to download a database compressed as a .gz file in a Cordova application and extract it to create a local WebSQL database.
Is this possible?
I need to do this because my app has to work offline, and the database I have to sync is so big that downloading it uncompressed is unthinkable.
This sync only happens once in my app, so after that it is not a big deal.
Thanks in advance!
PS: I'm currently downloading each table separately, but it takes too long and the code is a mess with all the callbacks and stuff. A single compressed file with all the data would be much more helpful.
You can find gzip decompressors for JavaScript in this question: JavaScript implementation of Gzip
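For example, with pako (one widely used JavaScript gzip library; assuming you can bundle it or load it via a script tag in your Cordova app), decompressing a downloaded .gz client side could look roughly like this; the URL is hypothetical and the dump is assumed to be UTF-8 text.

```
// Sketch: download a .gz file and decompress it client side with pako.
// The URL is a placeholder; the payload is assumed to be UTF-8 text (e.g. SQL).
async function fetchAndExtract() {
  const response = await fetch('https://example.com/db-dump.sql.gz'); // hypothetical URL
  const compressed = new Uint8Array(await response.arrayBuffer());

  // Inflate the gzip data and decode it as a string.
  const text = pako.ungzip(compressed, { to: 'string' });

  // ...then feed `text` into your WebSQL import logic.
  return text;
}
```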
