I am making a Node.js app that will be used primarily to download images from the web. I have created this function that successfully downloads images. I want the function to also show a live preview of the image as it downloads, without impacting download speed. Is it possible to "tap the pipe" and draw the image to an HTML canvas or img as it downloads? I am using Electron, so I am looking for a Chromium/Node.js-based solution.
Edit:
I've also found out you can chain pipes (r.pipe(file).pipe(canvas);), but I'm not sure whether that would download the file first and then show it on the canvas, or update both as the file downloads.
I've also thought of creating two separate pipes from the request (var r = request(url); r.pipe(file); r.pipe(canvas);), but I'm not sure if that would try to download the image twice.
I'm also not particularly familiar with the HTML canvas and haven't been able to test these ideas, because I don't know how to pipe an image to a canvas or an img element for display in the application.
const fs = require('fs-extra');
const request = require('request');

function downloadFile(url, filename) {
  return new Promise(resolve => {
    var path = filename.substring(0, filename.lastIndexOf('\\'));
    if (!fs.existsSync(path)) fs.ensureDirSync(path);
    var file = fs.createWriteStream(filename);
    var r = request(url).pipe(file);
    // How would I also pipe this to a canvas or img element?
    r.on('error', function(err) { console.log(err); throw new Error('Error downloading file'); });
    r.on('finish', function() { file.close(); resolve('done'); });
  });
}
I ended up asking another similar question that provides the answer to this one. It turns out the solution is to buffer the image in memory as it downloads, encode it with base64, and display it using a data:image URL.
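For reference, here is a minimal sketch of that approach (assuming an Electron renderer with nodeIntegration enabled and an <img id="preview"> element; the element id, the JPEG MIME type, and the helper name are my own illustrative choices, not from the linked answer). Note that re-encoding the whole buffer on every chunk is wasteful for large files, and whether the partially downloaded image actually paints depends on the image format and on Chromium:

const fs = require('fs-extra');
const request = require('request');

function downloadWithPreview(url, filename) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    const file = fs.createWriteStream(filename);
    const r = request(url);
    r.pipe(file); // the download to disk proceeds as before
    r.on('data', chunk => {
      // "tap the pipe": keep a copy of each chunk in memory
      chunks.push(chunk);
      // base64-encode what has arrived so far and show it as a data: URL
      const b64 = Buffer.concat(chunks).toString('base64');
      document.getElementById('preview').src = 'data:image/jpeg;base64,' + b64;
    });
    r.on('error', reject);
    file.on('finish', () => { file.close(); resolve('done'); });
  });
}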
Related
I'm currently creating a real-time chat application. This is a web application that uses Node.js for the backend and Socket.IO for communication between client and server.
Currently, I'm working on creating user profiles with profile pictures. These profile pictures will be stored in a folder called images/profiles/. The file will be named by the user's id. For example: user with the id 1 will have their profile pictures stored in images/profiles/1.png. Very self-explanatory.
When the user submits the form to change their profile picture, the browser JavaScript will get the image, and send it to the server:
form.addEventListener('submit', handleForm);

function handleForm(event) {
  event.preventDefault(); // stop page from reloading
  let profilePicture; // set variable for profile picture
  let profilePictureInput = document.getElementById('profilePictureInput'); // get image input
  const file = profilePictureInput.files[0]; // get the input's first file
  if (file) {
    const fileReader = new FileReader(); // initialize file reader
    fileReader.readAsDataURL(file);
    fileReader.onload = function () {
      profilePicture = this.result; // put result into variable
      socket.emit("request-name", {
        profilePicture: profilePicture,
        id: userID,
      }); // send result, along with user id, to server
    };
  }
}
I've commented most of the code so it's easy to follow. The server then gets this information. With this information, the server is supposed to convert the sent image to PNG format (I can use whatever format, but it has to be the same for all images). I am currently using the jimp library for this task, but it doesn't seem to work.
const jimp = require('jimp'); // initialize Jimp
socket.on('request-name', (data) => { // when request has been received
  // read the buffer from image (I'm not 100% sure what Buffer.from() does, but I saw this online)
  jimp.read(Buffer.from(data.profilePicture), function (error, image) {
    if (error) throw error; // throw error if there is one
    image.write(`images/profiles/${data.id}.png`); // write image to designated place
  });
});
The error I get:
Error: Could not find MIME for Buffer <null>
I've scoured the internet for answers but was unable to find any. I am open to using another library if that helps. I can also change the file format (.png to .jpg or .jpeg, if needed; it just needs to be consistent across all files). The only things I cannot change are the use of JavaScript/Node.js and Socket.IO to send the information to the server.
Thank you in advance. Any and all help is appreciated.
If you're just getting the data URI as a string, you can construct a Buffer from it and then use the built-in fs module to write the file. Note that the data:image/...;base64, prefix is not itself valid base64 and will corrupt the decoded bytes, so it must be stripped before decoding. Make sure the relative path is accurate.

const fs = require('fs');

socket.on('request-name', data => {
  // strip the data URL prefix ("data:image/png;base64,"), which is not valid base64
  const base64Data = data.profilePicture.replace(/^data:image\/\w+;base64,/, '');
  const imgBuffer = Buffer.from(base64Data, 'base64');
  fs.writeFile(`images/profiles/${data.id}.png`, imgBuffer, err => { if (err) throw err; });
});
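If you also want the stored extension to follow the uploaded image's actual type, the MIME type can be pulled out of the same data URI first (a sketch of my own, not from the original answer; the regular expression and variable names are illustrative). If all files must end up in one consistent format, a conversion library such as jimp would still be needed after decoding:

socket.on('request-name', data => {
  // a data URI looks like "data:image/png;base64,iVBORw0..."
  const match = data.profilePicture.match(/^data:image\/(\w+);base64,(.+)$/);
  if (!match) return; // not a data URI we recognize
  const [, extension, base64Data] = match;
  fs.writeFile(`images/profiles/${data.id}.${extension}`,
    Buffer.from(base64Data, 'base64'),
    err => { if (err) console.error(err); });
});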
Imagine I have a video file and I want to build a blob URL from that file, then play it in an HTML page. So far I tried this but I could not make it work:
var files = URL.createObjectURL(new Blob([someVideoFile], {type: "video/mp4"}));
document.getElementById(videoId).setAttribute("src", files); // video tag id
document.getElementById(videoPlayer).load(); // this is the source tag id
document.getElementById(videoPlayer).play(); // this is the source tag id
It gives me a blob URL but won't play the video... Am I doing something wrong? I am pretty new to Electron, so excuse me if my code is not good enough.
I saw the similar questions mentioned in the comments, but they don't work for me, just as they don't work for others on those pages.
I know this is an old question, but it still deserves a working answer.
In order to play a video in the renderer context, you're on the right track: you can use a blob URL and assign it as the video source. However, a local file path is not a valid URL, which is why your current code doesn't work.
Unfortunately, in Electron there are currently only 3 ways to generate a blob from a file in the renderer context:
Have the user drag it into the window, and use the drag-and-drop API
Have the user select it via a file input: <input type="file">
Read the entire file with the 'fs' module, and generate a Blob from it
The third option (the only one without user input) can be done as long as nodeIntegration is enabled, or if it is done in a non-sandboxed preload script. To accomplish this via streaming rather than loading the entire file at once, the following module can be used:
// fileblob.js
const fs = require('fs');

// convert system file into blob
function fileToBlob(path, {bufferSize=64*1024, mimeType='application/octet-stream'}={}) {
  return new Promise((resolve, reject) => {
    // create incoming stream from file
    const stream = fs.createReadStream(path, {highWaterMark: bufferSize});
    // initialize empty blob
    var blob = new Blob([], {type: mimeType});
    stream.on('data', buffer => {
      // append each chunk by building a new blob concatenating the new chunk
      blob = new Blob([blob, buffer], {type: mimeType});
    });
    stream.on('error', reject);
    stream.on('close', () => {
      // resolve with resulting blob
      resolve(blob);
    });
  });
}

// convert blob into system file
function blobToFile(blob, path, {bufferSize=64*1024}={}) {
  return new Promise((resolve, reject) => {
    // create outgoing stream to file
    const stream = fs.createWriteStream(path);
    stream.on('error', reject);
    stream.on('ready', async () => {
      // iterate one chunk at a time
      for (let i = 0; i < blob.size; i += bufferSize) {
        // read chunk
        let slice = await blob.slice(i, i + bufferSize).arrayBuffer();
        // write chunk
        if (!stream.write(new Uint8Array(slice))) {
          // wait for next drain event
          await new Promise(resolve => stream.once('drain', resolve));
        }
      }
      // close file and resolve
      stream.on('close', () => resolve());
      stream.close();
    });
  });
}

module.exports = {
  fileToBlob,
  blobToFile,
};
Then, in a preload script or in a renderer with nodeIntegration enabled, something like the following would load the file into a blob and use it for the video player:
const {fileToBlob} = require('./fileblob');

fileToBlob("E:/nodeJs/test/app/downloads/clips/test.mp4", {mimeType: "video/mp4"}).then(blob => {
  var url = URL.createObjectURL(blob);
  document.getElementById(videoId).setAttribute("src", url);
  document.getElementById(videoPlayer).load();
  document.getElementById(videoPlayer).play();
});
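For completeness, the module's reverse direction works the same way (a short sketch; the destination path is illustrative and the blob is assumed to already exist):

const {blobToFile} = require('./fileblob');

// write a blob (e.g. one produced by fileToBlob) back to disk
blobToFile(blob, "E:/nodeJs/test/app/downloads/clips/copy.mp4")
  .then(() => console.log('blob saved'));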
Again, unfortunately this is slow for large files (the entire file still ends up in memory). We're still waiting for a better solution from Electron:
https://github.com/electron/electron/issues/749
https://github.com/electron/electron/issues/35629
Try
video.src = window.URL.createObjectURL(vid);
For more details please refer to this answer
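Note that createObjectURL expects a Blob or File, not a file path, so vid must come from something like a file input. A minimal sketch (the input element id is my own illustrative choice):

// <input type="file" id="videoInput" accept="video/*">
document.getElementById('videoInput').onchange = event => {
  const vid = event.target.files[0]; // a File, which is a Blob
  video.src = window.URL.createObjectURL(vid);
  video.play();
};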
I'm making a website, in which I want to offer the user the option to download the whole website (CSS and images included) for them to modify. I know I can download individual resources with
<a href="..." download>Click Me</a>
but like I said, this only downloads one file, whereas I would like to download the entire website.
If it helps you visualise what I mean: in Chrome, IE, and Firefox you can press Ctrl+S to download the entire website (make sure you save it as "Web page, Complete").
Edit: I know I can create a .zip file for the user to download; however, doing so requires me to update it every time I make a change, which is something I'd rather not do, as I could potentially be making a lot of changes.
As I mentioned, it is better to have a cron job or something similar that creates a zip file of all the desired static content once in a while.
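A minimal sketch of such a server-side job, assuming Node.js with the archiver package and a public/ directory holding the static site (both are assumptions; the script could then be run periodically via cron):

const fs = require('fs');
const archiver = require('archiver');

// bundle the static site directory into site.zip
const output = fs.createWriteStream('site.zip');
const archive = archiver('zip', { zlib: { level: 9 } });

output.on('close', () => console.log(`site.zip written (${archive.pointer()} bytes)`));
archive.on('error', err => { throw err; });

archive.pipe(output);
archive.directory('public/', false); // put the directory's contents at the zip root
archive.finalize();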
If you insist on doing it in JavaScript on the client side, have a look at JSZip.
You still have to find a way to get the list of the server's static files to save.
For instance, you can create a txt file where each line is a link to a static file on the server.
You will then have to iterate over this file and use $.get to fetch each file's content.
Something like this:
// Get list of files to save (either by GET request or hardcoded)
const filesList = ["f1.json /echo/jsonp?name=1", "inner/f2.json /echo/jsonp?name=2"];

function createZip() {
  const zip = new JSZip();
  // make a bunch of requests to get each file's content
  const requests = [];
  // for scoping the fileName
  const _then = (fname) => data => ({ fileName: fname, data });
  for (const file of filesList) {
    const [fileName, fileUrl] = file.split(" ");
    requests.push($.get(fileUrl).then(_then(fileName)));
  }
  // When all have finished
  $.when(...requests).then(function () {
    // Add each result to the zip
    for (const arg of arguments) {
      zip.file(arg.fileName, JSON.stringify(arg.data));
    }
    // Save
    zip.generateAsync({ type: "blob" })
      .then(function (blob) {
        saveAs(blob, "site.zip");
      });
  });
}

$("#saver").click(() => {
  createZip();
});
Personally, I don't like this approach. But do as you prefer.
Basically, I want to download a large number of images from an image service. I have a very large JSON object with all of the URLs (~500 or so). I tried a few npm image downloader packages, as well as some other code that downloaded all the images at the same time; however, about 50% of the downloaded images had data loss (a large portion of the image was transparent when viewed). How can I download each image one after another (waiting until the last one is complete before starting the next) to avoid the data loss?
Edit: here is the relevant code, using request:
const fs = require('fs');
const request = require('request');

var download = function(url, dest, callback) {
  request.get(url)
    .on('error', function(err) { console.log(err); })
    .pipe(fs.createWriteStream(dest))
    .on('close', callback);
};

links.forEach(function(str) {
  var filename = str[0].split('/').pop() + '.jpeg';
  console.log(filename);
  console.log('Downloading ' + filename);
  download(str[0], filename, function() { console.log('Finished Downloading ' + filename); });
});
My links JSON looks like this:
[["link.one.com/image-jpeg"], ["link.two.com/image-jpeg"]]
Okay, so first things first:
I really do not believe that downloading those 500+ images will all start at once. Node.js (through libuv, which handles I/O under the hood) manages a reasonable pool of resources and reuses it, so it won't spawn "lots of" new threads, but will wait for earlier work to get done.
Now, even if they all started at once, I don't think the files would get damaged. If the files were getting corrupted, you wouldn't have been able to open them at all.
So, I am pretty sure the problem with the images is not what you think.
Now, for the original question, and to test whether I am wrong, you can try to download those files in a sequence like this:
var recursiveDownload = function (urlArray, nameArray, i) {
  if (i < urlArray.length) {
    request.get(urlArray[i])
      .on('error', function(err) { console.log(err); })
      .pipe(fs.createWriteStream(nameArray[i]))
      .on('close', function () { recursiveDownload(urlArray, nameArray, i + 1); });
  }
};

recursiveDownload(allUrlArray, allNameArray, 0);
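The same one-at-a-time pattern can also be written as a loop with async/await (my own sketch, not from the original answer; it wraps each download in a promise so the next one only starts after the previous stream closes):

const fs = require('fs');
const request = require('request');

// download a single file, resolving when the write stream closes
const downloadOne = (url, dest) => new Promise((resolve, reject) => {
  request.get(url)
    .on('error', reject)
    .pipe(fs.createWriteStream(dest))
    .on('close', resolve);
});

async function downloadAll(urlArray, nameArray) {
  for (let i = 0; i < urlArray.length; i++) {
    await downloadOne(urlArray[i], nameArray[i]); // strictly sequential
  }
}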
Since you are doing a large number of downloads, try Aria2c. See the Aria2 documentation for further details.
I have a question about the File API and uploading files in JavaScript and how I should do this.
I have already built a file uploader that was quite simple: it took the files from an input and made a request to the server, and the server then handled the files and saved a copy in an uploads directory.
However, I am trying to give people the option to preview a file before uploading it. So I took advantage of the File API, specifically new FileReader() and its readAsDataURL() method.
The file object has a list of properties such as .size and .lastModifiedDate, and I added the readAsDataURL() output to my file object as a property for easy access in my Angular ng-repeat().
My question is, it occurred to me as I was doing this that I could store the data URL in a database rather than upload the actual file. I was also unsure whether modifying the File object directly, by attaching its data URL as a property, would affect its transfer.
What is the best practice? Is it better to upload the file, or can you just store the data URL and then output that, since that is essentially the file itself? Should I not modify the file object directly?
Thank you.
Edit: I should also note that this is a project for a customer who wants it to be hard for users to simply take uploaded content from the application, save it, and redistribute it. Would saving the files as data URLs in a database mitigate right-click-save-as behavior, or not really?
There is more than one way to preview a file. The first is a data URL via FileReader, as you mention, but there is also URL.createObjectURL, which is faster.
Decoding and encoding to and from base64 takes longer and needs more computation and more CPU/memory than if the data stayed in binary format,
which I can demonstrate below:
var url = 'https://upload.wikimedia.org/wikipedia/commons/c/cc/ESC_large_ISS022_ISS022-E-11387-edit_01.JPG'

fetch(url).then(res => res.blob()).then(blob => {
  // Simulates a file as if you were to upload it through a file input and listen for onchange
  var files = [blob]
  var img = new Image
  var t = performance.now()
  var fr = new FileReader

  img.onload = () => {
    // show it...
    // $('body').append(img)
    var ms = performance.now() - t
    document.body.innerHTML = `it took ${ms.toFixed(0)}ms to load the image with FileReader<br>`

    // Now create an object URL instead of using base64, which takes time to
    // 1) encode the blob to base64
    // 2) decode it back again from base64 to binary
    var t2 = performance.now()
    var img2 = new Image

    img2.onload = () => {
      // show it...
      // $('body').append(img)
      var ms2 = performance.now() - t2
      document.body.innerHTML += `it took ${ms2.toFixed(0)}ms to load the image with URL.createObjectURL<br><br>`
      document.body.innerHTML += `URL.createObjectURL was ${(ms - ms2).toFixed(0)}ms faster`
    }

    img2.src = URL.createObjectURL(files[0])
  }

  fr.onload = () => (img.src = fr.result)
  fr.readAsDataURL(files[0])
})
The base64 version will also be about a third larger (every 3 bytes of binary become 4 base64 characters). For mobile devices, I think you would want to save that bandwidth and battery.
But then there is also the latency of doing an extra request when the file is served separately, and that's where HTTP/2 comes to the rescue.