I'm trying to transcode all wav files into mp3s using Meteor and Meteor FS Collections. My code works when I upload a wav file to the uploader -- that is, it will convert the wav to an mp3 and let me play the file. But I'm looking for a Meteor solution that will transcode the file and add it to the DB if it is a wav and exists in a certain directory. According to the Meteor FSCollection docs this should be possible if the files have already been stored. Here is their example code (gm is for ImageMagick; I've replaced gm with ffmpeg and installed ffmpeg from AtmosphereJS):
Images.find().forEach(function (fileObj) {
  var readStream = fileObj.createReadStream('images');
  var writeStream = fileObj.createWriteStream('images');
  gm(readStream).swirl(180).stream().pipe(writeStream);
});
I'm using Meteor-CollectionFS (https://github.com/CollectionFS/Meteor-CollectionFS). Here is my attempt:
if (Meteor.isServer) {
  Meteor.startup(function () {
    Wavs.find().forEach(function (fileObj) {
      var readStream = fileObj.createReadStream('.wavs/mp3');
      var writeStream = fileObj.createWriteStream('.wavs/mp3');
      this.ffmpeg(readStream).audioCodec('libmp3lame').format('mp3').pipe(writeStream);
      Wavs.insert(fileObj, function(err) {
        console.log(err);
      });
    });
  });
}
And here is my FS.Collection and FS.Store information. Currently everything resides in one JS file.
Wavs = new FS.Collection("wavs", {
  stores: [
    new FS.Store.FileSystem("wav"),
    new FS.Store.FileSystem("mp3", {
      path: '~/wavs/mp3',
      beforeWrite: function(fileObj) {
        return {
          extension: 'mp3',
          fileType: 'audio/mp3'
        };
      },
      transformWrite: function(fileObj, readStream, writeStream) {
        ffmpeg(readStream).audioCodec('libmp3lame').format('mp3').pipe(writeStream);
      }
    })
  ]
});
When I try to insert the file into the db on the server side I get this error: MongoError: E11000 duplicate key error index:
Otherwise, if I drop a wav file into the directory and restart the server, nothing happens. I'm new to Meteor, please help. Thank you.
The error is clear: you're trying to insert another object with the same (duplicate) _id. You should first delete the _id, or update the existing document instead of inserting a new one. If you don't provide the _id field, it will be added automatically.
delete fileObj._id;
Wavs.insert(fileObj, function(error, result) {
});
See this question: How do I remove a property from a JavaScript object?
Why do you want to convert the files only on startup, i.e. only once? You probably want to do this continuously; if so, you should use something like this:
Tracker.autorun(function(){
  //logic
});
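A rough, untested sketch of the continuous approach on the server, using cursor.observe rather than Tracker.autorun (since the conversion runs server side) and reusing the "wav" and "mp3" store names from your FS.Collection:

Meteor.startup(function () {
  Wavs.find().observe({
    added: function (fileObj) {
      // read from the "wav" store and write the converted audio into the "mp3" store
      var readStream = fileObj.createReadStream('wav');
      var writeStream = fileObj.createWriteStream('mp3');
      ffmpeg(readStream).audioCodec('libmp3lame').format('mp3').pipe(writeStream);
    }
  });
});

Note there is no second insert here, so the duplicate key error does not come up; the mp3 data is written into another store of the same file object.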
So I'm completely lost at this point. I've had a mixture of success and failure, but I can't for the life of me get this working. I'm building up a zip file and storing it in a folder structure based on uploadRequestIds, and that all works fine. I'm fairly new to Node, but all I want is to take the file that was built up (it is completely valid and opens fine once it has been constructed in the backend) and send it on to the client.
const prepareRequestForDownload = (dirToStoreRequestData, requestId) => {
  const output = fs.createWriteStream(dirToStoreRequestData + `/Content-${requestId}.zip`);
  const zip = archiver('zip', { zlib: { level: 9 } });
  output.on('close', () => { console.log('archiver has been finalized.'); });
  zip.on('error', (err) => { throw err; });
  zip.pipe(output);
  zip.directory(dirToStoreRequestData, false);
  zip.finalize();
}
This is my function that builds up a zip file from all the files in a given directory and then stores it in that same directory.
All I thought I would need to do was set some headers with an attachment disposition type, create a read stream of the zip file into the response, and React would then be able to save the content. But that just doesn't seem to be the case. How should this be handled on both sides: reading the zip and sending it from the API, and receiving the response on the React side so the file auto-downloads or prompts the user to save it?
This is what the temp structure looks like
There are a few strategies to resolve this. When you redirect to a URL that ends with .zip, most browsers will start downloading automatically. What you can do is return the download path to your client, something like:
http://api.site.com.br/my-file.zip
and then you can use:
window.open('URL here','_blank')
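On the API side, the route behind that URL only has to stream the zip with an attachment disposition. A rough Express sketch (the route path and directory are illustrative, not taken from your code):

const express = require('express');
const path = require('path');

const app = express();
// placeholder: point this at the same directory the zip was written to
const dirToStoreRequestData = path.join(__dirname, 'temp');

app.get('/download/:requestId', (req, res) => {
  const zipPath = path.join(dirToStoreRequestData, `Content-${req.params.requestId}.zip`);
  // res.download sets Content-Disposition: attachment and streams the file to the response
  res.download(zipPath, `Content-${req.params.requestId}.zip`, (err) => {
    if (err) console.error('Download failed:', err);
  });
});

With the attachment disposition in place, window.open (or a plain link) pointed at that route makes the browser save the file instead of navigating to it.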
I have followed many solutions provided in previous questions but mine is not working. The problem is with the .json extension. Whenever I use filename.json, the app crashes with ERR_CONNECTION_RESET but still creates an empty .json file. However, if I change the extension to filename.txt, fs.writeFile successfully creates filename.txt with the data inside and the app works as expected. Did I miss any configuration needed to create the JSON file?
Here is the example code I used.
var fs = require('fs');

var jsonData = '{"persons":[{"name":"John","city":"New York"},{"name":"Phil","city":"Ohio"}]}';

// parse json
var jsonObj = JSON.parse(jsonData);
console.log(jsonObj);

// stringify JSON Object
var jsonContent = JSON.stringify(jsonObj);
console.log(jsonContent);

fs.writeFile("./public/output.json", jsonContent, 'utf8', function(err) {
  if (err) {
    console.log("An error occurred while writing JSON Object to File.");
    return console.log(err);
  }
  console.log("JSON file has been saved.");
});
So, ERR_CONNECTION_RESET means that the connection was closed midway. My guess, as in the comments, is that the dev server is reloading: a file watcher such as nodemon restarts the server when it sees the new .json file appear under public/, killing the request.
Try using --ignore public/**/*.json and it should work.
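Assuming the watcher is nodemon (the --ignore flag above is nodemon syntax; this is an assumption about your setup), the same exclusion can also live in a nodemon.json next to package.json:

{
  "ignore": ["public/**/*.json"]
}

With that in place, writing output.json no longer restarts the server, so the response can finish instead of being reset.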
I use the following code to get files from the file system.
The code is from a blog post called Building a File Uploader with NodeJs.
I was able to see the UI, etc., when I ran my project.
I cannot use the uploads-directory part of the code (form.uploadDir) since I don't have an uploads folder in my project:
app.post('/upload', function(req, res){
  // create an incoming form object
  var form = new formidable.IncomingForm();

  // specify that we want to allow the user to upload multiple files in a single request
  form.multiples = true;

  // store all uploads in the /uploads directory - cannot use it
  //form.uploadDir = path.join(__dirname, '/uploads');

  // every time a file has been uploaded successfully,
  // rename it to its original name
  form.on('file', function(field, file) {
    fs.rename(file.path, path.join(form.uploadDir, file.name));
  });

  // log any errors that occur
  form.on('error', function(err) {
    console.log('An error has occurred: \n' + err);
  });

  // once all the files have been uploaded, send a response to the client
  form.on('end', function() {
    res.end('success');
  });

  // parse the incoming request containing the form data
  form.parse(req);
});
My question is: how should I get the file from the UI with the code above? I need to get the file content from the form when I choose my file.
The application is deployed to the cloud; when I run it on localhost I use the following code (which works):
const readStream = fs.createReadStream("./file.html");
...
const writeStream = sftp.createWriteStream("/index.html");
...
readStream.pipe(writeStream);
This reads a file from the file system at the given path and overwrites another file (here index.html) with its contents.
As @aedneo said, when the user chooses a file, formidable creates the file in the upload folder. I just needed to rename it and give the right path to the write stream method, and it works!
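For anyone who lands here later, a rough, untested sketch of that resolution, reusing the formidable form and the sftp handle from the snippets above and streaming the uploaded temp file instead of renaming it:

form.on('file', function(field, file) {
  // formidable has already written the upload to a temporary path (file.path),
  // so stream that temp file straight to the SFTP destination
  const readStream = fs.createReadStream(file.path);
  const writeStream = sftp.createWriteStream("/index.html");
  readStream.pipe(writeStream);
});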
I am saving uploaded images in MongoDB GridFS with Node.js/Express/gridfs-stream/multiparty using streams.
Works fine.
Now I would like to "normalize" (resize) images to some standard format before storing to database.
I could use gm (https://github.com/aheckmann/gm) and have streaming, but I would have to install native ImageMagick (not an option), or
use something like lwip (https://github.com/EyalAr/lwip) and have a "pure Node" setup, but then I cannot have streaming.
So is there a way to stream request -> resize -> store to GridFS without installing external libraries?
Current solution (missing the resize step):
var mongoose = require('mongoose');
var Grid = require('gridfs-stream');
var multiparty = require('multiparty');

function storeImage(req, err, succ){
  var conn = mongoose.connection;
  var gfs = Grid(conn.db);
  var context = {};
  var form = new multiparty.Form();

  form.on('field', function(name, value){
    context[name] = value;
    console.log(context);
  });

  form.on('part', function(part){
    // handle events only if file part
    if (!part.filename) { return; }

    var options = {
      filename: part.filename,
      metadata: context,
      mode: 'w',
      root: 'images'
    };

    var ws = gfs.createWriteStream(options);

    // success GridFS
    ws.on('close', function (file) {
      console.log(file.filename + file._id);
      succ(file._id);
    });

    // error GridFS
    ws.on('error', function (errMsg) {
      console.log('An error occurred!', errMsg);
      err(errMsg);
    });

    part.pipe(ws);
  });

  // Close emitted after form parsed
  form.on('close', function() {
    console.log('Upload completed!');
  });

  form.parse(req);
}
For posterity:
1) Initially I used lwip while I was storing images locally. When people started uploading bigger images (which was added as a requirement), lwip started blowing up my instance on Heroku, so I switched to
2) gm over ImageMagick running on AWS Lambda, which has ImageMagick preconfigured in the default instance. Images are now stored on S3 and distributed via CloudFront.
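For reference, once gm and ImageMagick are available, the missing resize step slots in where the part is piped to GridFS. A rough sketch of the 'part' handler from the question (800 is an arbitrary target width):

var gm = require('gm').subClass({ imageMagick: true }); // use the ImageMagick backend

form.on('part', function(part) {
  if (!part.filename) { return; }

  var ws = gfs.createWriteStream({
    filename: part.filename,
    metadata: context,
    mode: 'w',
    root: 'images'
  });

  gm(part)        // feed the incoming file part into ImageMagick
    .resize(800)  // resize to 800px wide, keeping the aspect ratio
    .stream()     // stream the converted image back out
    .pipe(ws);    // ...straight into GridFS, no temp files on disk
});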
I'm using dropbox.js to upload files from my web app to the cloud.
I noticed that if you upload two files with the same name, it just creates another version or revision.
The thing is, I haven't found any way to programmatically download a specific revision of the file.
Is there any workaround? Any help will be appreciated.
I'm using this function to generate the download link:
function downFile(i) {
  var client = new Dropbox.Client({
    key: "xxxxxxxxxxx",
    secret: "xxxxxxxxxx",
    sandbox: false,
    token: "xxxxxxxxxxxx"
  });

  client.makeUrl(i, {
    downloadHack: false
  }, function(error, data) {
    if (error) {
      return console.log(error); // Something went wrong.
    }
    $("#mylink").html(data.url);
    $("#mylink").attr("href", data.url);
  });
}
In dropbox.js, the method to download a file is called readFile, and it takes an optional rev parameter to specify which revision of the file you want to access.
So something like client.readFile(path, { rev: 'abc123' }, function (err, contents) { ... }); should work.
The code you have so far seems to be creating a share link for the file. There's no way to create a share link that points to a specific revision of the file... you'll always get the latest file contents from a share link.
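For completeness, a rough sketch of wiring the rev option into a helper shaped like downFile (the Blob/object-URL handling is illustrative and untested):

function downFileRevision(path, rev) {
  // same client configuration as in downFile above (keys omitted)
  var client = new Dropbox.Client({
    key: "xxxxxxxxxxx",
    secret: "xxxxxxxxxx",
    sandbox: false,
    token: "xxxxxxxxxxxx"
  });

  client.readFile(path, { rev: rev }, function(error, contents) {
    if (error) {
      return console.log(error); // Something went wrong.
    }
    // contents holds that revision's data; wrap it in a Blob and point the
    // link at an object URL so the browser offers it as a download
    var blob = new Blob([contents]);
    $("#mylink").attr("href", URL.createObjectURL(blob));
    $("#mylink").attr("download", path);
  });
}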