fluent-ffmpeg: Unable to remove video screenshot after creation - javascript

I have the following code, which creates a screenshot for the video I've uploaded:
var thumbFileName = 'tmp_file.jpg';
ffmpegCommand = ffmpeg(videoFile)
    .on('end', function() {
        callback(null, tempUploadDir + thumbFileName);
    })
    .on('error', function(err) {
        callback(err);
    })
    .screenshots({
        timestamps: ['50%'],
        filename: thumbFileName,
        folder: tempUploadDir
    });
The code works well and the screenshot is created. The callback reads the file stream, stores it in the database, and finally tries to delete thumbFileName from the filesystem.
And here is the issue I'm encountering: I'm not able to delete the file. Even if I try it manually, it says the file is locked by another process (NodeJS), and I can't delete it until I stop the application.
In the callback I've also tried killing the command with ffmpegCommand.kill() before deleting the screenshot, but I still have the same issue. The file is removed using fs.unlink, and this works when the thumbnail is generated from an image (even one post-processed with effects via sharp), but not with ffmpeg. Apparently ffmpeg is still running, and that's why I can't delete the thumb.

'cannot open input file' error using webp-converter to convert webp to PNG

Expected behavior
My event listener below is supposed to:
Download a webp.
Save it to a temp directory.
Pass the file path of the saved webp to the webp.dwebp function.
Convert it to a PNG using the webp-converter package function.
And finally save that PNG to the same directory (I'm using a static temp folder for now).
// Discord bot event listener to put the webp image in the temp folder and then convert it to png
let timer;
client.on(Events.MessageUpdate, (oldMessage, newMessage) => {
    newMessage.attachments.forEach((attachment) => {
        if (attachment.url) {
            clearTimeout(timer);
            timer = setTimeout(() => {
                // Use tmp package to create a temp directory
                // download the image at attachment.url and save it to the temp directory
                request(attachment.url)
                    .pipe(fs.createWriteStream(`temp/image${attachment.id}.webp`))
                    .on("close", () => {
                        console.log("Image downloaded to temp directory");
                        // convert webp to png
                        webp.dwebp(
                            `temp/image${attachment.id}.webp`,
                            `temp/image${attachment.id}.png`,
                            "-quiet -quiet",
                            function (status, error) {
                                if (status == 100) {
                                    console.log("Successfully converted to PNG");
                                } else {
                                    console.log("Error during conversion: ", error);
                                }
                            }
                        );
                    });
            }, 14000);
        }
    });
});
Actual behavior
I can successfully download the webp to the temp directory, but it fails when trying to convert. I get the following error.
Error: Command failed: "C:\Users\mr\camp\discordapi\node_modules\webp-converter\bin\lemp/image1064098822244020244.webp" -quiet -quiet "temp/image1064098822244020244.png" "function
== 100) { console.log("Successfully converted to PNG"); } else { console.log("Error during con
cannot open input file ', error); } }'
at ChildProcess.exithandler (node:child_process:412:12)
at ChildProcess.emit (node:events:513:28)
code: 4294967295,
killed: false,
signal: null,
What I've tried
At first I thought it was a problem with the tmp library I was using, so I cut that out to test and just went with a static /temp folder. Downloading to /temp still works as expected, but I still run into the same error at the webp-to-PNG process.
I tried moving the webp.dwebp process outside of the request chain, but that doesn't work because I can't grab the file path variable outside the scope of the request function.
My next idea was to append the attachment.id to the file path as a UID, to make it easier to find the webp file, and then write a function that constructs a path to it outside of the event listener entirely, essentially doing the webp.dwebp process by itself.
So I defined an empty pngImageId object outside the function to write that attachment.id to, so I could find it easily, but I got stuck trying to smuggle that variable itself out of the event handler; same problem.
I just need to convert this pesky webp file to a .PNG so I can send it to the client and render it. Really scratching my head here.

How do I open and display a base64 pdf from inside my Cordova App?

I am creating an App for Android using Cordova, and I would like to open and display a file (PDF or image) that is served from the server as Base64-encoded binary data.
Of course I have read the multiple other posts on the subject that already exist on this website, but none of the proposed solutions have worked for me, more details below.
To be more precise, the server sends a JSON-file to the app, which among many other things contains a string consisting of the base64-encoded contents of a PDF file. I want to convert this data back into the represented PDF and display it to the user.
If this were a pure browser page, I would simply package my base64 data into a data-URL, attach this as the href of some anchor, and add a download-attribute. Optionally I could wrap all of my data into a blob and create an object url for that first.
In Cordova, this does not work. Clicking the <a> does nothing. Here is what I have attempted so far:
Using the file plugin, I can write the binary data to a file on the device. This works, and using a terminal I can see that the file was downloaded correctly, but into an app-private directory which I cannot access normally (e.g. through the file explorer).
Accessing the user's "downloads" folder is blocked by the file system
Using window.open with the file path as the first argument and "_system" as the target does nothing. There is no error but also nothing happens. Setting the target to "_blank" instead, I get an error saying ACCESS_DENIED.
Using cordova.InAppBrowser behaves the same way as window.open.
With the plugin file-opener2 installed, the app will not compile, because the plugin is looking for an android4 toolchain, and I am building for android 9 and up
The plugin document-viewer (restricting to PDFs for the time being) suffers the same problem and does not compile.
Passing the data-URI to window.open (or cordova.InAppBrowser) directly loads for a very long time and eventually tells me that the desired page could not be loaded.
The PDF file I am using for testing is roughly 17kb after converting to base64. I know this is technically above the spec for how long data-URIs can be, but Chrome in the browser has no trouble with it whatsoever, and using a much shorter URI (only a few dozen bytes) produces the same behavior.
Ideally, what I would like to do is download the file and then trigger the user's standard browser to open the file itself. That way, I would not have to deal with MIME types, and it would look exactly how the user expects on their own device.
Alternatively, if that doesn't work, I would be ok with downloading the file into a system-wide directory and prompting the user to open it themselves. This is not optimal, but I would be able to swallow that pill.
And lastly, if there is a plugin or some other solution that solves the problem amazingly, but for PDFs only, then I can also work out something else for images (e.g. embedding a new <img> element into my app and assigning the URI to that).
I would be thankful for any suggestion you might have on how to solve this problem. The code I use to download the file currently is shown below.
Thank you for your time.
var filePath = cordova.file.externalDataDirectory; // Note: documentsDirectory is set to "" by Cordova, so I cannot use that
var fileName = "someFileName.pdf";
var mime = "application/pdf";
var dataBlob = /* some blob containing the binary data for a PDF */;

function writeFile(fileEntry, dataBlob) {
    // Create a FileWriter object for our FileEntry.
    // This code is taken directly from the cordova-plugin-file documentation
    fileEntry.createWriter(function (fileWriter) {
        fileWriter.onwriteend = function () {
            console.log("Successful file write...");
            readFile(fileEntry);
        };
        fileWriter.onerror = function (e) {
            console.log("Failed file write: " + e.toString());
        };
        fileWriter.write(dataBlob);
    });
}
window.resolveLocalFileSystemURL(
    filePath,
    function onResolveSuccess(dirEntry) {
        dirEntry.getFile(
            fileName,
            { create: true },
            function onGetFileSuccess(file) {
                writeFile(file, dataBlob);
                // At this point, the file has been downloaded successfully
                window.open(file.toURL(), "_system"); // This line does nothing, and I don't understand why.
            }
        );
    }
);
I managed to solve the problem.
As per the documentation of the file-opener2 plugin, you need to also add the androidx-adapter plugin to correct for the outdated (android 4) packages. With the plugins file, file-opener2 and androidx-adapter installed, the complete code is the following:
var filePath = cordova.file.externalDataDirectory; // Note: documentsDirectory is set to "" by Cordova, so I cannot use that
var fileName = "someFileName.pdf";
var mime = "application/pdf";
var dataBlob = /* some blob containing the binary data for a PDF */;

function writeFile(fileEntry, dataBlob) {
    // Create a FileWriter object for our FileEntry.
    // This code is taken directly from the cordova-plugin-file documentation
    fileEntry.createWriter(function (fileWriter) {
        fileWriter.onwriteend = function () {
            console.log("Successful file write...");
            readFile(fileEntry);
        };
        fileWriter.onerror = function (e) {
            console.log("Failed file write: " + e.toString());
        };
        fileWriter.write(dataBlob);
    });
}
window.resolveLocalFileSystemURL(
    filePath,
    function onResolveSuccess(dirEntry) {
        dirEntry.getFile(
            fileName,
            { create: true },
            function onGetFileSuccess(file) {
                writeFile(file, dataBlob);
                // At this point, the file has been downloaded successfully
                cordova.plugins.fileOpener2.open(
                    filePath + fileName,
                    mime,
                    {
                        error: function () { },
                        success: function () { }
                    }
                );
            }
        );
    }
);
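For completeness, the dataBlob placeholder in the listings above can be produced from the base64 string in the server's JSON using standard browser APIs. A minimal sketch (the variable names are illustrative):

```javascript
// Decode a base64 string into a Blob with the given MIME type.
// atob yields a binary string; copy its char codes into a byte array.
function base64ToBlob(base64, mimeType) {
    var binary = atob(base64);
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return new Blob([bytes], { type: mimeType });
}

// e.g. var dataBlob = base64ToBlob(jsonResponse.pdfData, "application/pdf");
```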

How to download entire website from inside the website

I'm making a website, in which I want to offer the user the option to download the whole website (CSS and images included) for them to modify. I know I can download individual resources with a link carrying the download attribute, e.g.
<a href="somefile.css" download>Click Me</a>
but like I said, this only downloads one file, whereas I would like to download the entire website.
If it helps you visualise what I mean: in Chrome, IE and Firefox you can press Ctrl+S to download the entire website (make sure you save it as "Web page, Complete").
Edit: I know I can create a .zip file that it will download, however doing so requires me to update it every time I make a change, which is something I'd rather not do, as I could potentially be making a lot of changes.
As I mentioned, it is better to have a cron job or something similar that periodically creates a zip file of all the desired static content.
If you insist on doing it in JavaScript on the client side, have a look at JSZip.
You still have to find a way to get the list of the server's static files to save.
For instance, you can create a txt file in which each line is a link to a static file of the website.
You will then have to iterate over this file and use $.get to fetch each file's content.
Something like this:
// Get list of files to save (either by GET request or hardcoded)
var filesList = ["f1.json /echo/jsonp?name=1", "inner/f2.json /echo/jsonp?name=2"];

function createZip() {
    var zip = new JSZip();
    // make a bunch of requests to get the files' content
    var requests = [];
    // for scoping the fileName
    var _then = (fname) => (data) => ({ fileName: fname, data: data });
    for (var file of filesList) {
        var [fileName, fileUrl] = file.split(" ");
        requests.push($.get(fileUrl).then(_then(fileName)));
    }
    // When all requests have finished
    $.when(...requests).then(function () {
        // Add each result to the zip
        for (var arg of arguments) {
            zip.file(arg.fileName, JSON.stringify(arg.data));
        }
        // Save
        zip.generateAsync({ type: "blob" })
            .then(function (blob) {
                saveAs(blob, "site.zip");
            });
    });
}

$("#saver").click(() => {
    createZip();
});
JSFiddle
Personally, I don't like this approach. But do as you prefer.
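The cron-job approach mentioned above could be as simple as a single crontab entry that rebuilds the archive on a schedule; the paths and timing below are assumptions for illustration only:

```shell
# crontab entry: rebuild the downloadable archive every night at 02:00
# (assumed layout: the site's static files live in /var/www/site/public)
0 2 * * * cd /var/www/site && zip -qr public/site.zip public -x "public/site.zip"
```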

node.js - How to chain download many images from a url

Basically, I want to download a large number of images from an image service. I have a very large JSON object with all of the URLs (~500 or so). I tried a few npm image downloader packages as well as some other code that downloaded all the images at the same time; however, about 50% of the downloaded images had data loss (a large portion of the image was transparent when viewed). How can I download the images one after another (waiting until the last one is complete before starting the next) to avoid the data loss?
Edit: here is the relevant code, using request:
var download = function (url, dest, callback) {
    request.get(url)
        .on('error', function (err) { console.log(err); })
        .pipe(fs.createWriteStream(dest))
        .on('close', callback);
};

links.forEach(function (str) {
    var filename = str[0].split('/').pop() + '.jpeg';
    console.log(filename);
    console.log('Downloading ' + filename);
    download(str[0], filename, function () { console.log('Finished Downloading ' + filename); });
});
My links JSON looks like this:
[["link.one.com/image-jpeg"], ["link.two.com/image-jpeg"]]
Okay, so first things first:
I really do not believe that downloading those 500+ images will all start at once. The V8 engine (roughly, the executor of Node.js code) manages a reasonable number of threads and reuses them, so it won't create "lots of" new threads but will wait for other threads to get done.
Now, even if it all started at once, I don't think the files would get damaged. If the files were getting corrupted, you wouldn't have been able to open them at all.
So, I am pretty sure the problem with the images is not what you think.
Now, for the original question, and to test whether I am wrong, you can try to download those files in sequence like this:
var recursiveDownload = function (urlArray, nameArray, i) {
    if (i < urlArray.length) {
        request.get(urlArray[i])
            .on('error', function (err) { console.log(err); })
            .pipe(fs.createWriteStream(nameArray[i]))
            .on('close', function () { recursiveDownload(urlArray, nameArray, i + 1); });
    }
};
recursiveDownload(allUrlArray, allNameArray, 0);
Since you are doing a large number of downloads, you could also try aria2c. See the aria2 documentation for further details.

Simple video upload system w/ gridfs-stream works with direct iOS recording, but not when choosing a video from the phone's gallery

I'm trying to build a very basic video uploading system with gridfs-stream.
It does work on desktop, but only partially on my phone (an iPhone 5 on iOS 9, using the Chrome browser).
It works perfectly using the direct recording iOS feature (recording a video that is not in the photo gallery and without saving it), but when I try to choose a saved video in the photo gallery, the video is not uploaded properly.
When saved to the temp directory before being written to the database, the file is empty (size: 0kb)
No chunks are saved in the fs.chunks collection
The "length" attribute in the fs.files collection is 0
It doesn't display, just as if the file didn't exist.
Here is the code:
function saveVideo(file) {
    var name = file.filename.split('.').slice(0, -1),
        extension = file.filename.split('.').pop(),
        randomString = Math.random().toString(36).substring(7);
    file.filename = name + '_' + randomString + '.' + extension;
    var writeStream = gridFs.createWriteStream({
        filename: file.filename,
        metadata: {
            validated: false
        }
    });
    fs.createReadStream(file.file).pipe(writeStream);
    writeStream.on('close', function () {
        fs.unlink(file.file);
    });
}

app.post('/addVideo', function (req, res) {
    var files = req.files['file-input'];
    if (files.length) {
        files.forEach(function (file, index) {
            saveVideo(file);
            if (index == files.length - 1) {
                res.redirect('/admin');
            }
        });
    }
    else {
        if (files.filename) {
            saveVideo(files);
        }
        res.redirect('/admin');
    }
});

app.get('/videos/:filename', function (req, res, next) {
    gridFs.exist({ 'filename': req.params.filename }, function (err, found) {
        if (err) {
            return next(err);
        }
        if (found) {
            var readStream = gridFs.createReadStream({
                filename: req.params.filename
            });
            readStream.pipe(res);
        }
        else {
            res.status(404).send('Not Found');
        }
    });
});
Things I've thought about:
Using photos instead of videos does work, but I need my webapp to be able to receive videos.
Maybe there is a difference between direct recording and choosing a video from the gallery? I really don't understand, because apart from the name that is automatically assigned by iOS, I don't see any. I tried to force a specific mimetype/encoding, but I wasn't really sure how to do it and didn't manage to make it work (I'm not even sure how that part works, to be honest).
Using another browser: strangely, when using Safari instead of mobile Chrome, the file is properly put into the temp directory before being written to the database, but it is not properly displayed afterwards. The video doesn't have the right size and there's the black background placeholder, just as if the video didn't exist; however, I can right-click the link in the Chrome inspector and open it in a new window, resulting in a file download. The video is not corrupted, the file is not "empty", nor is the length attribute in the database zero as it is when uploading from Chrome (the video does work).
If it can help: when submitting my form in Chrome using the direct recording feature, it reloads the page and the input is empty. When choosing a file from the gallery, it takes a little more time, but it doesn't reload the page and the file is still in the input, just as if it didn't do anything. The loading bar does appear and stops in the middle. This is the code I'm using. I tried multiple versions (using the capture attribute, for example), but it didn't work. I'm using the express-busboy module to decode the form's data.
<form action="/addVideo" method="POST" enctype="multipart/form-data">
    <input type="file" accept="video/*" multiple name="file-input"/>
    <input type="submit"/>
</form>
I tried using connect-busboy module instead of express-busboy (it doesn't write files before adding them, it directly streams it to the db), but the problem still happens.
Thanks.
