I want to save all my images in an imageStore; saving, uploading and so on work fine, except for this "little" problem. With this code, I go through all images and try to save them:
jQuery('img').each(function () {
    FS.Utility.eachFile(event, function (file) {
        Images.insert(file, function (err, fileObj) {
        });
    });
});
The problem: I'm getting all images as image elements, not as files, but the insert method needs files. So how can I convert my images to files?
So, this is my problem. I am trying to save images to the local drive from JavaScript, and the code that does that looks like this:
window.onload = function () {
    console.log(" IN ON LOAD FUNCTION ")
    for (filename = 1; filename <= 80; filename++) {
        console.log(filename)
        html2canvas(document.querySelector(`.lead${filename}`)).then(canvas => {
            canvas.toBlob(function (blob) {
                saveAs(blob, `${filename}.png`);
            });
        });
    }
}
So the code is quite clear: I have about 80 divs with different class names, and I am rendering each one to a canvas and downloading that canvas in PNG format. The problem is that sometimes it downloads about 10 images, and sometimes it doesn't download at all. Any help regarding this issue? Any other way around it?
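One way around it (a sketch, under the assumption that firing all 80 html2canvas/saveAs calls at once is what drops downloads): run the captures strictly one at a time by chaining promises. `runSequentially` is a hypothetical helper; each task would wrap one capture-and-save.

```javascript
// Sketch: run async tasks strictly one after another.
// `tasks` is an array of functions that each return a promise,
// e.g. () => html2canvas(div).then(canvas => /* toBlob + saveAs */).
function runSequentially(tasks) {
  return tasks.reduce(function (chain, task) {
    return chain.then(function (results) {
      return task().then(function (result) {
        results.push(result); // collect each task's result in order
        return results;
      });
    });
  }, Promise.resolve([]));
}
```

With this, the next capture only starts after the previous download has been handed to the browser.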
I'm making a website, in which I want to offer the user to download the whole website (CSS and images included) for them to modify. I know I can download individual resources with
<a href="..." download>Click Me</a>
but like I said, this only downloads one file, whereas I would like to download the entire website.
If it helps you visualise what I mean: in Chrome, IE and Firefox you can press Ctrl+S to download the entire website (make sure you save it as "Web page, Complete").
Edit: I know I can create a .zip file for the user to download; however, doing so requires me to update it every time I make a change, which is something I'd rather not do, as I could potentially be making a lot of changes.
As mentioned, it would be better to have a cron job (or something similar) that periodically creates a zip file of all the desired static content.
If you insist on doing it in JavaScript on the client side, have a look at JSZip.
You still have to find a way to get the list of static files from the server to save.
For instance, you can create a txt file where each line is a link to one of the site's static files.
You will then have to iterate over this list and use $.get to fetch each file's content.
Something like this:
// Get list of files to save (either by GET request or hardcoded)
var filesList = ["f1.json /echo/jsonp?name=1", "inner/f2.json /echo/jsonp?name=2"];

function createZip() {
    var zip = new JSZip();
    // Make a bunch of requests to get the files' content
    var requests = [];
    // Factory so each callback keeps its own fileName in scope
    var _then = (fname) => (data) => ({ fileName: fname, data });
    for (var file of filesList) {
        var [fileName, fileUrl] = file.split(" ");
        requests.push($.get(fileUrl).then(_then(fileName)));
    }
    // When all requests have finished
    $.when(...requests).then(function () {
        // Add each result to the zip
        for (var arg of arguments) {
            zip.file(arg.fileName, JSON.stringify(arg.data));
        }
        // Save
        zip.generateAsync({ type: "blob" })
            .then(function (blob) {
                saveAs(blob, "site.zip");
            });
    });
}

$("#saver").click(() => {
    createZip();
});
Personally, I don't like this approach. But do as you prefer.
Basically, I want to download a large number of images from an image service. I have a very large JSON object with all of the URLs (~500 or so). I tried a few npm image downloader packages as well as some other code that downloaded every image at the same time; however, about 50% of the downloaded images had data loss (a large portion of the image was transparent when viewed). How can I download the images one after another (waiting until the last one is complete before starting the next) to avoid the data loss?
Edit: here is the relevant code, using request:
var download = function (url, dest, callback) {
    request.get(url)
        .on('error', function (err) { console.log(err) })
        .pipe(fs.createWriteStream(dest))
        .on('close', callback);
};

links.forEach(function (str) {
    var filename = str[0].split('/').pop() + '.jpeg';
    console.log('Downloading ' + filename);
    download(str[0], filename, function () { console.log('Finished Downloading ' + filename) });
});
My links JSON looks like this:
[["link.one.com/image-jpeg"], ["link.two.com/image-jpeg"]]
Okay, so first things first:
I really do not believe that those 500+ downloads all run at once. Node.js performs its I/O through libuv, which manages a limited pool of threads and sockets and reuses them, so it won't create "lots of" new threads, but will wait for earlier work to finish.
Now, even if it all started at once, I don't think the files would get damaged. If the files were getting corrupted, you wouldn't be able to open them at all.
So, I am pretty sure the problem with the images is not what you think.
Now, for the original question, and to test whether I am wrong, you can try downloading those files in sequence like this:
var recursiveDownload = function (urlArray, nameArray, i) {
    if (i < urlArray.length) {
        request.get(urlArray[i])
            .on('error', function (err) { console.log(err) })
            .pipe(fs.createWriteStream(nameArray[i]))
            .on('close', function () { recursiveDownload(urlArray, nameArray, i + 1); });
    }
}

recursiveDownload(allUrlArray, allNameArray, 0);
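The same sequential idea can also be written with async/await instead of recursion (a sketch; `download` here is assumed to be a promise-returning wrapper around the request/pipe pattern above):

```javascript
// Sketch: download files one after another; each await completes
// before the next download starts.
async function downloadAll(urls, names, download) {
  for (let i = 0; i < urls.length; i++) {
    await download(urls[i], names[i]);
  }
}
```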
Since you are doing a large number of downloads, try Aria2c. See the Aria2 documentation for further details.
I have the following code that creates a screenshot for a video I've uploaded:
var thumbFileName = 'tmp_file.jpg';

ffmpegCommand = ffmpeg(videoFile)
    .on('end', function () {
        callback(null, tempUploadDir + thumbFileName)
    })
    .on('error', function (err) {
        callback(err);
    })
    .screenshots({
        timestamps: ['50%'],
        filename: thumbFileName,
        folder: tempUploadDir
    });
The code works pretty well and the screenshot is created. The callback reads the file stream, stores it in the database, and eventually tries to delete thumbFileName from the filesystem.
And here is the issue I'm encountering: basically, I'm not able to delete the file. Even if I try it manually, it says the file is locked by another process (Node.js), and I can't delete it until I stop the application.
In the callback I've also tried to kill the command with ffmpegCommand.kill() before deleting the screenshot, but I'm still having the same issue. The file is removed with fs.unlink, and this works when the thumbnail is generated from an image (even post-processed with effects, achieved with sharp), but not with ffmpeg. Apparently ffmpeg is still running, and that's why I can't delete the thumb.
I'm kinda new to programming in general. My problem is that I want to download a file and after that do something.
Danbooru.search(tag, function (err, data) { // search for a random image with the given tag
    data.random()   // select a random image with the given tag
        .getLarge() // get a link to the image
        .pipe(require('fs').createWriteStream('random.jpg')); // download the image
});
Now I want to do a console.log after the file has been downloaded. I don't want to use setTimeout, since the files take different amounts of time to download.
Thanks for the help.
See if this works for you: just save the write stream to a variable and listen for the finish event on it.
Danbooru.search(tag, function (err, data) {
    var stream = data.random()
        .getLarge()
        .pipe(require('fs').createWriteStream('random.jpg'));

    stream.on('finish', function () { console.log('file downloaded'); });
});