Node.js/MongoDB/GridFS resize images on upload - javascript

I am saving uploaded images in MongoDB GridFS with Node.js/Express/gridfs-stream/multiparty using streams.
Works fine.
Now I would like to "normalize" (resize) images to a standard size before storing them in the database.
I could use gm https://github.com/aheckmann/gm and keep streaming, but I would have to install native ImageMagick (not an option), or
I could use something like lwip https://github.com/EyalAr/lwip and have a "pure Node" setup, but then I cannot stream.
So is there a streaming way to go request -> resize -> store to GridFS without installing external libraries?
Current solution (missing the resize step):
function storeImage(req, err, succ) {
    var conn = mongoose.connection;
    var gfs = Grid(conn.db);
    var context = {};
    var form = new multiparty.Form();

    form.on('field', function (name, value) {
        context[name] = value;
        console.log(context);
    });

    form.on('part', function (part) {
        // handle events only if file part
        if (!part.filename) { return; }

        var options = {
            filename: part.filename,
            metadata: context,
            mode: 'w',
            root: 'images'
        };
        var ws = gfs.createWriteStream(options);

        // success GridFS
        ws.on('close', function (file) {
            console.log(file.filename + file._id);
            succ(file._id);
        });

        // error GridFS
        ws.on('error', function (errMsg) {
            console.log('An error occurred!', errMsg);
            err(errMsg);
        });

        part.pipe(ws);
    });

    // Close emitted after form parsed
    form.on('close', function() {
        console.log('Upload completed!');
    });

    form.parse(req);
}
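For reference, if ImageMagick is available on the host (as it eventually was on the Lambda setup described below), the resize step can stay fully streaming by piping the multipart part through gm before it reaches GridFS. A minimal sketch that would replace the part.pipe(ws) line above; the 800x800 bound and the imageMagick subclass option are assumptions, not part of the original code:

// inside form.on('part', ...), instead of part.pipe(ws)
var gm = require('gm').subClass({ imageMagick: true }); // assumes ImageMagick is installed

gm(part)
    .resize(800, 800)                    // fit within 800x800, keeping aspect ratio
    .stream(function (resizeErr, stdout, stderr) {
        if (resizeErr) { return err(resizeErr); }
        stdout.pipe(ws);                 // the resized image is what gets stored in GridFS
    });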

For posterity:
1) Initially I used lwip while I was storing images locally. When people started uploading bigger images (which was added as a requirement), lwip started blowing up my instance on Heroku, so I switched to
2) gm over ImageMagick running on AWS Lambda, which has ImageMagick preinstalled in the default instance. Images are now stored on S3 and distributed via CloudFront.

Related

Using SSH2 and SFTPStream to stream file from server to AWS S3 Bucket

I'm trying to use the ssh2 module to take a file from a server and add it to an S3 bucket in AWS. I would like to be able to stream the file so that I don't have to have it in memory. I tried the following:
const Client = require('ssh2').Client;
const aws = require('aws-sdk');
const s3 = new aws.S3();

exports.handler = function(event, context, callback) {
    let connSettings = {
        host: event.serverHost,
        port: event.port,
        username: event.username,
        password: event.password
    };

    let conn = new Client();
    conn.on('ready', function() {
        conn.sftp(true, function(err, sftp) {
            if (err) throw err;

            // filename, s3Bucket and s3Key are defined elsewhere
            let stream = sftp.createReadStream(filename);

            let putParams = {
                Bucket: s3Bucket,
                Key: s3Key,
                Body: stream
            };
            s3.putObject(putParams, function (err) {
                if (err) throw err;
                console.log("Uploaded!");
            });
        });
    }).connect(connSettings);
};
However, the method sftp.createReadStream(filename) is looking at my local directory and not the server. Other than that, it works.
Is there a way I can stream a file from a server to S3?
I know I could use the sftp.fastGet method to download the file from the server, save it locally, and then upload it to S3. But I would prefer not to have to save the file locally. The s3 SDK accepts a stream, so it would be much more convenient to just stream it.
UPDATE: the method sftp.createReadStream(filename) is correctly reading from the server, not locally. It is the s3.putObject method that, for some reason, tries to read the file locally even though I'm giving it a stream created from the server path.
I fixed this by using the s3.upload method instead, which handles streams of unknown length (putObject tries to determine the Content-Length up front).
s3.upload(putParams, function (err) {
    if (err) throw err;
    console.log("Uploaded!");
});
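Put together, the working version pipes the SFTP read stream straight into s3.upload inside the sftp callback. A rough sketch of that flow; filename, s3Bucket and s3Key are assumed to come from the event, and error handling is simplified:

conn.on('ready', function () {
    conn.sftp(function (err, sftp) {
        if (err) return callback(err);

        // read directly from the remote server; nothing is written to local disk
        let stream = sftp.createReadStream(event.filename);   // assumed event field

        s3.upload({
            Bucket: event.s3Bucket,   // assumed event field
            Key: event.s3Key,         // assumed event field
            Body: stream
        }, function (err, data) {
            conn.end();
            if (err) return callback(err);
            console.log('Uploaded to ' + data.Location);
            callback(null, data.Location);
        });
    });
}).connect(connSettings);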

Generate Download URL After Successful Upload

I have successfully uploaded files to Firebase Storage via Google Cloud Storage from JS! What I noticed is that, unlike files uploaded directly, the files uploaded through Google Cloud only have a storage location URL, not a full URL, which means they cannot be read. I'm wondering if there is a way to generate a full URL on upload for the "Download URL" part of Firebase's actual storage.
Code being used:
var filename = image.substring(image.lastIndexOf("/") + 1).split("?")[0];
var gcs = gcloud.storage();
var bucket = gcs.bucket('bucket-name-here.appspot.com');

request(image).pipe(bucket.file('photos/' + filename).createWriteStream(
        {metadata: {contentType: 'image/jpeg'}}))
    .on('error', function(err) {})
    .on('finish', function() {
        console.log(imagealt);
    });
When using the GCloud client, you want to use getSignedUrl() to download the file, like so:
bucket.file('photos/' + filename).getSignedUrl({
    action: 'read',
    expires: '03-17-2025'
}, function(err, url) {
    if (err) {
        console.error(err);
        return;
    }
    // The file is now available to read from this URL.
    request(url, function(err, resp) {
        // resp.statusCode = 200
    });
});
You can either:
a) create a download URL through the Firebase console, or
b) request the download URL programmatically from a Firebase client, in which case one will be created on the fly for you.
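For option (b), the Firebase web client SDK can generate such a URL on demand once it has a reference to the object. A minimal sketch, assuming an initialized Firebase app and the same 'photos/' + filename path used above:

// client side, after firebase.initializeApp(config)
var storageRef = firebase.storage().ref('photos/' + filename);

storageRef.getDownloadURL().then(function (url) {
    // a full, tokenized download URL is created on the fly
    console.log(url);
}).catch(function (err) {
    console.error(err);
});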

AWS SDK JS: Multipart upload to S3 resulting in Corrupt data

Trying to upload an mp4 file using the AWS JS SDK with a multipart upload, I keep getting a file-corrupt error when I try to download and play it locally.
Gists of my code:
Initiating the multipart upload with params:
const createMultipartUploadParams = {
    Bucket: bucketname,
    Key: fileHash.file_name,
    ContentType: 'video/mp4' // TODO: Change hardcode
};
Call:
s3Instance.createMultipartUpload(createMultipartUploadParams, function(err, data) {
});
Doing the chunking:
Params:
const s3ChunkingParams = {
    chunkSize,
    noOfIterations,
    lastChunk,
    UploadId: data.UploadId
};
Reading the file:
const reader = new FileReader();
reader.readAsArrayBuffer(file);
Uploading each chunk:
reader.onloadend = function onloadend() {
    console.log('onloadend');
    const partUploadParams = {
        Bucket: bucketname,
        Key: file_name,
        PartNumber: i, // Iterating over all parts
        UploadId: s3ChunkingParams.UploadId,
        Body: reader.result.slice(start, stop) // Chunking up the file
    };
    s3Instance.uploadPart(partUploadParams, function(err, data1) {
    });
};
Finally completing the multipartUpload:
s3Instance.completeMultipartUpload(completeMultipartParams, function(err, data)
I am guessing the problem is how I am reading the file, so I have tried Content-Encoding it as base64, but that makes the size unusually huge. Any help is greatly appreciated!
Tried this too
The only thing that could corrupt the final object is if you are uploading additionally padded content for your individual parts, which leads to the assembled object being wrong. I do not believe S3 is doing something fishy here.
You can verify the final size of the object after uploading the file; if it doesn't match your local copy, then you know you have a problem somewhere.
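One quick way to do that check with the same SDK is a headObject call; a sketch, reusing bucketname and fileHash.file_name from the snippets above:

// compare the uploaded object's size with the local file's size
s3Instance.headObject({
    Bucket: bucketname,
    Key: fileHash.file_name
}, function (err, data) {
    if (err) return console.log(err);
    console.log('S3 size: ' + data.ContentLength + ', local size: ' + file.size);
});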
Are you trying to upload from the browser?
Alternatively, you can look at https://github.com/minio/minio-js. It has a minimal set of abstracted APIs implementing the most commonly used S3 calls.
Here is a Node.js example of a streaming upload.
$ npm install minio
$ cat >> put-object.js << EOF
var Minio = require('minio')
var fs = require('fs')

// find out your s3 end point here:
// http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
var s3Client = new Minio({
    url: 'https://<your-s3-endpoint>',
    accessKey: 'YOUR-ACCESSKEYID',
    secretKey: 'YOUR-SECRETACCESSKEY'
})

var localFile = 'your_localfile.zip'
var fileStream = fs.createReadStream(localFile)

fs.stat(localFile, function(e, stat) {
    if (e) {
        return console.log(e)
    }
    s3Client.putObject('mybucket', 'hello/remote_file.zip', 'application/octet-stream', stat.size, fileStream, function(e) {
        return console.log(e) // should be null
    })
})
EOF
putObject() here is a fully managed single function call: for file sizes over 5MB it automatically does multipart internally. You can resume a failed upload as well, and it will start from where it left off by verifying previously uploaded parts.
So you don't necessarily have to go through the trouble of writing lower-level multipart calls.
Additionally, this library is isomorphic, so it can be used in browsers as well.

Downloading Torrent with Node.JS

I was wondering if anyone had an example of how to download a torrent using NodeJS? Essentially, I have an RSS Feed of torrents that I iterate through and grab the torrent file url, then would like to initiate a download of that torrent on the server.
I've parsed and looped through the RSS just fine; however, the few npm packages I've tried have either crashed or were just unstable. If anyone has any suggestions, examples, anything... I would greatly appreciate it. Thanks.
router.get('/', function(req, res) {
    var options = {};
    parser.parseURL('rss feed here', options, function(err, articles) {
        var i = 0;
        var torrent;
        for (var title in articles.items) {
            console.log(articles.items[i]['url']);
            //download torrent here
            i++;
        }
    });
});
You can use node-torrent for this.
Then, to download a torrent:
var Client = require('node-torrent');
var fs = require('fs');

var client = new Client({logLevel: 'DEBUG'});
var torrent = client.addTorrent('a.torrent');

// when the torrent completes, move its files to another area
torrent.on('complete', function() {
    console.log('complete!');
    torrent.files.forEach(function(file) {
        var newPath = '/new/path/' + file.path;
        fs.rename(file.path, newPath);
        // while still seeding need to make sure file.path points to the right place
        file.path = newPath;
    });
});
Alternatively, for more control, you can use transmission-daemon and control it via its XML-RPC protocol. There's a node module called transmission that does the job! Example:
var Transmission = require('transmission');
var transmission = new Transmission({
    port : 9091,
    host : '127.0.0.1'
});

transmission.addUrl('my.torrent', {
    "download-dir" : "/home/torrents"
}, function(err, result) {
    if (err) {
        return console.log(err);
    }
    var id = result.id;
    console.log('Just added a new torrent.');
    console.log('Torrent ID: ' + id);
    getTorrent(id); // helper sketched below
});
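getTorrent is not defined in the snippet above; a minimal version, assuming the transmission module's get(id, callback) call, could look like this:

// fetch status details for the torrent we just added
function getTorrent(id) {
    transmission.get(id, function(err, result) {
        if (err) {
            return console.log(err);
        }
        result.torrents.forEach(function(torrent) {
            console.log('Name: ' + torrent.name + ', percent done: ' + torrent.percentDone);
        });
    });
}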
If you are working with video torrents, you may be interested in Torrent Stream Server. It's a server that downloads and streams video at the same time, so you can watch the video without fully downloading it. It's based on the torrent-stream library.
Another interesting project is webtorrent. It's a nice torrent library that works in both Node.js and the browser and has streaming support. From my experience it doesn't have very good support in the browser, but it should fully work in Node.js.
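A minimal webtorrent download in Node.js, suitable for dropping into the RSS loop above, might look like this (a sketch; torrentUrl and the download path are placeholders):

var WebTorrent = require('webtorrent');
var client = new WebTorrent();

// torrentUrl is the torrent file URL pulled from the RSS item
client.add(torrentUrl, { path: '/path/to/downloads' }, function (torrent) {
    torrent.on('done', function () {
        console.log('Downloaded: ' + torrent.name);
    });
});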

Convert wav to mp3 using Meteor FS Collections on Startup

I'm trying to transcode all wav files into mp3 using Meteor and Meteor FS Collections. My code works when I upload a wav file through the uploader -- that is, it converts the wav to an mp3 and lets me play the file. But I'm looking for a Meteor solution that will transcode and add the file to the DB if the file is a wav and exists in a certain directory. According to Meteor FSCollection, this should be possible if the files have already been stored. Here is their example code (gm is for ImageMagick; I've replaced gm with ffmpeg and installed ffmpeg from Atmosphere):
Images.find().forEach(function (fileObj) {
    var readStream = fileObj.createReadStream('images');
    var writeStream = fileObj.createWriteStream('images');
    gm(readStream).swirl(180).stream().pipe(writeStream);
});
I'm using Meteor-CollectionFS (https://github.com/CollectionFS/Meteor-CollectionFS):
if (Meteor.isServer) {
    Meteor.startup(function () {
        Wavs.find().forEach(function (fileObj) {
            var readStream = fileObj.createReadStream('.wavs/mp3');
            var writeStream = fileObj.createWriteStream('.wavs/mp3');
            this.ffmpeg(readStream).audioCodec('libmp3lame').format('mp3').pipe(writeStream);
            Wavs.insert(fileObj, function(err) {
                console.log(err);
            });
        });
    });
}
And here is my FS.Collection and FS.Store information. Currently everything resides in one JS file.
Wavs = new FS.Collection("wavs", {
    stores: [
        new FS.Store.FileSystem("wav"),
        new FS.Store.FileSystem("mp3", {
            path: '~/wavs/mp3',
            beforeWrite: function(fileObj) {
                return {
                    extension: 'mp3',
                    fileType: 'audio/mp3'
                };
            },
            transformWrite: function(fileObj, readStream, writeStream) {
                ffmpeg(readStream).audioCodec('libmp3lame').format('mp3').pipe(writeStream);
            }
        })
    ]
});
When I try to insert the file into the db on the server side I get this error: MongoError: E11000 duplicate key error index.
Otherwise, if I drop a wav file into the directory and restart the server, nothing happens. I'm new to Meteor, please help. Thank you.
The error is clear: you're trying to insert a new object with the same (duplicate) _id. You should first delete the _id, or just update the existing document instead of adding a new one. If you do not provide the _id field, it will be added automatically.
delete fileObj._id;
Wavs.insert(fileObj, function(error, result) {
});
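Applied to the startup loop from the question, that fix would look roughly like this (a sketch that assumes the store names 'wav' and 'mp3' from the FS.Collection definition above):

if (Meteor.isServer) {
    Meteor.startup(function () {
        Wavs.find().forEach(function (fileObj) {
            // read from the original 'wav' store, write into the 'mp3' store
            var readStream = fileObj.createReadStream('wav');
            var writeStream = fileObj.createWriteStream('mp3');
            ffmpeg(readStream).audioCodec('libmp3lame').format('mp3').pipe(writeStream);

            // drop the existing _id so the insert gets a fresh one
            delete fileObj._id;
            Wavs.insert(fileObj, function (error, result) {
                if (error) console.log(error);
            });
        });
    });
}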
See: How do I remove a property from a JavaScript object?
Why do you want to convert the files only on startup, i.e. only one time? You probably want to do this continuously, and if so you should use this:
Tracker.autorun(function(){
    //logic
});
