Download file from link and check if download is complete (Node.js / JavaScript)

I'm trying to download a file from a link. This is my code:
const url = "https://lalalai.s3-us-west-2.amazonaws.com/media/split/a703a60c-54f3-48f4-91b5-43a60c-54f3-48f4-91b5-4b3bc71dfd2f/accompaniment";
const file = fs.createWriteStream("./uploads/exp.mp3");
https.get(url, async function (response) {
    await response.pipe(file);
    console.log("done");
});
The problem is that it logs "done" but the downloaded file is still not finished.
Can someone help me? Thanks :D

I think you can listen for events on the stream and apply a callback for what should happen on each particular event. Below is example code.
const url = "https://lalalai.s3-us-west-2.amazonaws.com/media/split/a703a60c-54f3-48f4-91b5-43a60c-54f3-48f4-91b5-4b3bc71dfd2f/accompaniment";
const file = fs.createWriteStream("./uploads/exp.mp3");
https.get(url, function (response) {
    response.pipe(file);
    console.log("downloading started");
    response.on("error", (err) => {
        console.log("some error occurred while downloading");
        throw err;
    });
    // "end" fires when the response data has been fully received; if you
    // also need the file flushed to disk, listen for the write stream's
    // "finish" event.
    response.on("end", () => {
        console.log("it worked, download completed");
    });
});
Sometimes the file being downloaded arrives on response.body instead of response, so check for that as well if the above doesn't work.

Related

Get Azure uploaded blob file url

I'm uploading a data stream to Azure Storage, and I would like to get the link to the blob file.
let insertFile = async function (blobName, stream) {
    const containerName = 'texttospeechudio';
    try {
        await blobService.createContainerIfNotExists(containerName, {
            publicAccessLevel: 'blob'
        }, (err, result, response) => {
            if (!err) {
                console.log(result);
            }
        });
        let resultstream = blobService.createWriteStreamToBlockBlob(containerName, blobName, (err, result, response) => {
            console.log(result); // was console.log(res): `res` is not defined in this callback
        });
        stream.pipe(resultstream);
        stream.on('error', function (error) {
            console.log(error);
        });
        stream.once('end', function (end) {
            console.log(end);
            //OK
        });
    }
    catch (err) {
        console.log(err);
    }
}
I added a callback to createWriteStreamToBlockBlob, but execution never gets inside it.
I would like to find a way to get the uploaded file's URL.
There is no file URL returned in the response, according to the Put Blob REST spec.
However, an Azure Storage resource URL can generally be composed with the following pattern:
https://{myaccount}.blob.core.windows.net/{mycontainer}/{myblob}
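So one option is simply to build the URL yourself from the pieces you already have. A small sketch of that idea (the account name and blob name below are made up for illustration):

```javascript
// Compose the blob URL from its parts. "myaccount" and "speech.mp3"
// are illustrative values, not names taken from the question.
function blobUrl(accountName, containerName, blobName) {
    return `https://${accountName}.blob.core.windows.net/` +
           `${containerName}/${encodeURIComponent(blobName)}`;
}

console.log(blobUrl("myaccount", "texttospeechudio", "speech.mp3"));
// https://myaccount.blob.core.windows.net/texttospeechudio/speech.mp3
```

`encodeURIComponent` keeps blob names with spaces or special characters valid in the URL.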

Get image as buffer from URL

I have spent a couple of hours trying to get image data as a buffer. Search results led me to the "request" module; other suggestions pointed to modules requiring newer versions of Node, which I cannot use because we depend on Node v6.11 so far.
Here are my trials:
request(imageURL).pipe(fs.createWriteStream('downloaded-img-1.jpg')).on('close', function () {
    console.log('ok');
});

request(imageURL, function (err, message, response) {
    fs.writeFile('downloaded-img-2.jpg', response, 'binary', function (err) {
        console.log('File saved.');
    });
    fs.writeFile('downloaded-img-3.jpg', chunks, 'binary', function (err) {
        console.log('File saved.');
    });
    resolve(response);
})
.on('data', function (chunk) {
    chunks.push(chunk);
})
.on('response', function (response) {
});
The "downloaded-img-1.jpg" gets downloaded correctly, but I have to avoid saving the file to disk and then reading it back as a stream; it's a PRD environment constraint. So the next option is to use the image data directly, as attempted for "downloaded-img-2.jpg" and "downloaded-img-3.jpg", by waiting for the "response" or collecting hand-made "chunks". The problem is that these two images are always corrupted, and I don't know why.
What is the point behind all of that? I am trying to add the image behind the URL to a zip file, and the zip library I use (js-zip) accepts a buffer as input. Any ideas why I am not getting the "chunks" or the "response" correctly?
I've tested the code below in Node 6.9.2; it downloads an image as a buffer. I also write the buffer to a file (just to test all is OK!). The body object is a buffer containing the image data:
"use strict";

var request = require('request');
var fs = require('fs');

var options = {
    url: "https://upload.wikimedia.org/wikipedia/commons/thumb/5/52/Hubble2005-01-barred-spiral-galaxy-NGC1300.jpg/1920px-Hubble2005-01-barred-spiral-galaxy-NGC1300.jpg",
    method: "get",
    encoding: null
};

console.log('Requesting image..');
request(options, function (error, response, body) {
    if (error) {
        console.error('error:', error);
    } else {
        console.log('Response: StatusCode:', response && response.statusCode);
        console.log('Response: Body: Length: %d. Is buffer: %s', body.length, (body instanceof Buffer));
        fs.writeFileSync('test.jpg', body);
    }
});

Disable the listener to src of iframe

I have an iframe in my template; its src refers to a physical HTML file.
<iframe id="myiframe" src="{{src}}">
</iframe>
In the controller, a function rewriteAllFiles is called from time to time to update the physical HTML, JS, and CSS files. It first removes all the old files, then writes the new ones, then refreshes the iframe.
var refreshIframe = function () {
    var iframe = document.getElementById('myiframe');
    iframe.src = iframe.src;
    console.log("refreshIframe, done");
}

rewriteAllFiles = function (path, files) {
    return $http.post('/emptyDir', { dir: prefix + idP + "/", path: path })
        .then(function (res) {
            console.log("rewriteAllFiles /emptyDir, done");
            return $http.post('/writeFiles', { dir: prefix + idP + "/", files: files })
                .then(function (res) {
                    console.log("rewriteAllFiles /writeFiles, done");
                    refreshIframe();
                    return res;
                });
        });
}

$scope.$watch(... {
    return rewriteAllFiles($location.path(), $scope.files);
})
My test shows that sometimes it works without error, but sometimes it gives the following log:
rewriteAllFiles /emptyDir, done
GET http://localhost:3000/tmp/P0L7JSEOWux3YE1FAAA6/index.html 404 (Not Found)
rewriteAllFiles /writeFiles, done
refreshIframe, done
So it seems that after the old files are removed and before the new files are written, the HTML template tries to load src, and of course the file is temporarily unavailable. I did not set a watcher for src; does anyone know where the listener on src is, and how to disable it?
Edit 1: here is the code of writeFiles on the server side:
router.post('/writeFiles', function (req, res, next) {
    var dir = req.body.dir, files = req.body.files;
    var fs = require('fs');
    var queue = files.map(function (file) {
        return new Promise(function (resolve, reject) {
            fs.writeFile(dir + file.name, file.body, function (err) {
                if (err) return reject(err);
                console.log(dir + file.name + " is written");
                resolve(file);
            });
        });
    });
    Promise.all(queue)
        .then(function (files) {
            console.log("All files have been successfully written");
            res.json(dir);
        })
        .catch(function (err) {
            return console.log(err);
        });
});
You can try {{::src}}. A one-time binding is only rendered once.
I realised that it is related to another line of code in the controller:
$scope.src = getSrc(); // getSrc() returns the right html link, initialised by a resolve

$scope.$watch(... {
    return rewriteAllFiles($location.path(), $scope.files);
})
When we load the template, there is a possibility that $scope.src = getSrc() is executed just after the old files are removed and before the new files are written, which raises a Not Found error.
So we just need to enforce the execution order with then; the following code works:
$scope.$watch(... {
    return rewriteAllFiles($location.path(), $scope.files)
        .then(function () {
            $scope.src = getSrcP();
        });
});
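The underlying fix is plain promise sequencing: any step that reads the files must be chained after the promise that writes them. A stripped-down sketch of that ordering, with stand-in functions in place of the Angular code:

```javascript
// Simulated versions of the two steps, for illustration only.
function rewriteAllFiles() {
    return new Promise((resolve) =>
        setTimeout(() => {
            console.log("files written");
            resolve();
        }, 10)
    );
}

function refreshIframe() {
    console.log("iframe refreshed");
}

// Chaining with .then() guarantees the refresh never runs while the
// files are missing.
rewriteAllFiles().then(refreshIframe);
```

The same chaining applies to `$scope.src`: assign it inside the `.then()`, never in code that can run concurrently with the rewrite.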

How to write an array of objects to a file with Node.js

I'm trying with the code below; I can write the file on the server and the file has content, but when I download the file, it's blank.
var file = fs.createWriteStream('./tmp/text.txt');
file.on('error', function (err) { console.log(err); });
rows.forEach(function (v) {
    file.write(v + "\r\n");
    productExport.DeleteProduct(v.ProductId, function (err) {
        if (err) throw err;
    });
});
file.end();

var file = "./tmp/text.txt";
res.download(file); // Set disposition and send it.
How can I download the file with all the content?
The way your code is structured is incorrect for what you're trying to do. Primarily, the issue is that you respond with res.download() before the file stream is done being written. Additionally, you have var file being declared twice in the same scope, which isn't correct either.
var file = "./tmp/text.txt";
var writeStream = fs.createWriteStream(file);

writeStream.on('error', err => console.log(err));
writeStream.on('finish', () => {
    return res.download(file); // Set disposition and send it.
});

rows.forEach((v) => {
    writeStream.write(v + "\r\n");
    productExport.DeleteProduct(v.ProductId, function (err) {
        if (err) throw err;
    });
});
writeStream.end();
If you're confused by this, and perhaps by async processing in general, this is the de facto answer on SO for understanding async in Node.js.
Writing data to a file through I/O is an asynchronous operation. You have to wait for the WriteStream to complete before you can download the file.
var file = fs.createWriteStream('./tmp/text.txt');
file.on('error', function (err) { console.log(err); });
rows.forEach(function (v) {
    file.write(v + "\r\n");
    productExport.DeleteProduct(v.ProductId, function (err) {
        if (err) throw err;
    });
});
file.end();
file.on('finish', function () {
    var file = "./tmp/text.txt";
    res.download(file); // Set disposition and send it.
});
Extra information:
fs.createWriteStream returns a WriteStream object.
The WriteStream documentation states that it emits six events: close, drain, error, finish, pipe, and unpipe.
In the Node.js world, you should always expect to use a callback or wait for a complete/finish event when I/O operations are involved.

Node: Downloading a zip through Request, Zip being corrupted

I'm using the excellent Request library for downloading files in Node for a small command line tool I'm working on. Request works perfectly for pulling in a single file, no problems at all, but it's not working for ZIPs.
For example, I'm trying to download the Twitter Bootstrap archive, which is at the URL:
http://twitter.github.com/bootstrap/assets/bootstrap.zip
The relevant part of the code is:
var fileUrl = "http://twitter.github.com/bootstrap/assets/bootstrap.zip";
var output = "bootstrap.zip";

request(fileUrl, function (err, resp, body) {
    if (err) throw err;
    fs.writeFile(output, body, function (err) {
        console.log("file written!");
    });
});
I've tried setting the encoding to "binary" too, but no luck. The actual zip is ~74 KB, but when downloaded through the above code it's ~134 KB, and on double-clicking in Finder to extract it, I get the error:
Unable to extract "bootstrap" into "nodetest" (Error 21 - Is a directory)
I get the feeling this is an encoding issue but not sure where to go from here.
Yes, the problem is with encoding. When you wait for the whole transfer to finish, body is coerced to a string by default. You can tell request to give you a Buffer instead by setting the encoding option to null:
var fileUrl = "http://twitter.github.com/bootstrap/assets/bootstrap.zip";
var output = "bootstrap.zip";

request({ url: fileUrl, encoding: null }, function (err, resp, body) {
    if (err) throw err;
    fs.writeFile(output, body, function (err) {
        console.log("file written!");
    });
});
Another more elegant solution is to use pipe() to point the response to a file writable stream:
request('http://twitter.github.com/bootstrap/assets/bootstrap.zip')
.pipe(fs.createWriteStream('bootstrap.zip'))
.on('close', function () {
console.log('File written!');
});
A one liner always wins :)
pipe() returns the destination stream (the WriteStream in this case), so you can listen to its close event to get notified when the file was written.
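As to why the corrupted file grew from ~74 KB to ~134 KB: coercing binary data to a default (UTF-8) string replaces every invalid byte with a 3-byte replacement character. A tiny sketch demonstrating the loss (the byte values are illustrative):

```javascript
// Round-tripping binary data through a default (utf8) string replaces
// bytes that are not valid UTF-8 with the replacement character U+FFFD,
// which encodes back to 3 bytes each; the data both grows and changes.
const original = Buffer.from([0x50, 0x4b, 0x03, 0x04, 0xff, 0xfe]); // "PK" magic + junk bytes
const roundTripped = Buffer.from(original.toString()); // lossy conversion
console.log(original.length, roundTripped.length); // 6 10
console.log(original.equals(roundTripped)); // false
```

Setting `encoding: null` (or using pipe()) keeps the body as a Buffer and avoids this conversion entirely.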
I was looking for a function that requests a zip and extracts it without creating any file on my server. Here is my TypeScript function; it uses the JSZip module and Request:
let bufs: Buffer[] = [];
let buf: Buffer;

request
    .get(url)
    .on('end', () => {
        buf = Buffer.concat(bufs);
        JSZip.loadAsync(buf).then((zip) => {
            // zip.files contains a list of files;
            // check the JSZip documentation.
            // Example of getting a text file:
            // zip.file("bla.txt").async("text").then(...)
        }).catch((error) => {
            console.log(error);
        });
    })
    .on('error', (error) => {
        console.log(error);
    })
    .on('data', (d) => {
        bufs.push(d);
    });
