I'm trying to get ready to move a Node.js application I've been working on into a production environment (I'm using Heroku). Users add images to the site via URL. Right now the images are just saved on the server; I'd like to move the storage to S3, but I'm having some difficulties.
The idea is to save the image to disk first and then upload it, but I'm struggling to find a way to be notified when the file has finished writing to disk. Streams don't seem to use the typical Node callback style.
Here is the code as it is now. I'm using the request node module which may be complicating things rather than simplifying them:
requestModule(request.payload.url).pipe(fs.createWriteStream("./public/img/submittedContent/" + name));

// what do I do here?

fs.readFile("./public/img/submittedContent/" + name, function (err, data) {
    if (err) { console.warn(err); }
    else {
        s3.putObject({
            Bucket: "submitted_images",
            Key: name,
            Body: data
        }).done(function (s3response) {
            console.log("success!");
            reply({message: 'success'});
        }).fail(function (s3response) {
            console.log("failure");
            reply({message: 'failure'});
        });
    }
});
Any advice would be appreciated. Thanks!
Try listening for the finish event on the writable stream:
requestModule(request.payload.url)
    .pipe(fs.createWriteStream("./public/img/submittedContent/" + name))
    .on('finish', function () {
        // do stuff with saved file
    });
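Putting that together with your existing upload code, a minimal sketch (reusing your s3 client, requestModule, and reply exactly as in the question) might look like:

var filePath = "./public/img/submittedContent/" + name;

requestModule(request.payload.url)
    .pipe(fs.createWriteStream(filePath))
    .on('finish', function () {
        // the file is fully flushed to disk, so it is now safe to read it back
        fs.readFile(filePath, function (err, data) {
            if (err) { return console.warn(err); }
            s3.putObject({
                Bucket: "submitted_images",
                Key: name,
                Body: data
            }).done(function () {
                reply({message: 'success'});
            }).fail(function () {
                reply({message: 'failure'});
            });
        });
    });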
Unless you're modifying the image, you shouldn't upload to Heroku; upload directly to S3 instead.
See Direct to S3 File Uploads in Node.js
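If the goal is simply to avoid holding files on the Heroku dyno, another option is to skip the disk entirely on the server: the AWS SDK's s3.upload (unlike putObject) accepts a readable stream as Body, so the download can be piped straight through. A rough sketch, assuming the same s3 client and names as the question:

s3.upload({
    Bucket: "submitted_images",
    Key: name,
    Body: requestModule(request.payload.url)  // stream the image straight to S3
}, function (err, data) {
    if (err) { return reply({message: 'failure'}); }
    reply({message: 'success'});
});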
So I'm completely lost at this point. I've had a mixture of success and failure, but I can't for the life of me get this working. I'm building up a zip file and storing it in a folder structure based on uploadRequestIds, and that all works fine. I'm fairly new to Node, but all I want is to take the file that was built up (which is completely valid and opens fine once it's been constructed on the backend) and send it on to the client.
const prepareRequestForDownload = (dirToStoreRequestData, requestId) => {
    const output = fs.createWriteStream(dirToStoreRequestData + `/Content-${requestId}.zip`);
    const zip = archiver('zip', { zlib: { level: 9 } });

    output.on('close', () => { console.log('archiver has been finalized.'); });
    zip.on('error', (err) => { throw err; });

    zip.pipe(output);
    zip.directory(dirToStoreRequestData, false);
    zip.finalize();
}
This is my function that builds a zip file from all the files in a given directory and then stores it in that same directory.
All I thought I would need to do is set a Content-Disposition: attachment header, pipe a read stream of the zip file into res.send, and React would be able to save the content. But that just doesn't seem to be the case. How should this be handled, both on the API side (reading the zip and sending it) and on the React side (receiving the response and auto-downloading or prompting the user to save the file)?
There are a few strategies to resolve this. Most browsers, when redirected to a URL ending in .zip, will start downloading it automatically. So one option is to return to your client the path for download, something like:
http://api.site.com.br/my-file.zip
and then you can use:
window.open('URL here','_blank')
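On the API side you also need to wait until the archive has finished writing before responding; res.download will set the attachment headers for you. A rough sketch, assuming an Express route and that dirToStoreRequestData is in scope, mirroring your prepareRequestForDownload:

app.get('/download/:requestId', (req, res) => {
    const zipPath = `${dirToStoreRequestData}/Content-${req.params.requestId}.zip`;
    const output = fs.createWriteStream(zipPath);
    const zip = archiver('zip', { zlib: { level: 9 } });

    // only respond once the zip has been fully written to disk
    output.on('close', () => res.download(zipPath));
    zip.on('error', (err) => res.status(500).send(err.message));

    zip.pipe(output);
    zip.directory(dirToStoreRequestData, false);
    zip.finalize();
});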
I am trying to download a file from outside of my root directory; however, every time I try, it takes the file from the root directory. Users of my site will need to be able to download these files.
The file has initially been uploaded to Amazon S3 and I have accessed it using the getObject function.
Here is my code:
app.get('/test_script_api', function (req, res) {
    var fileName = req.query.File;
    s3.getObject(
        { Bucket: "bucket-name", Key: fileName },
        function (error, s3data) {
            if (error != null) {
                console.log("Failed to retrieve an object: " + error);
            } else {
                // I have tried passing the S3 data, but it asks for a string
                res.download(s3data.Body);
                // So I have tried just passing the file name & an absolute path
                res.download(fileName);
            }
        }
    );
});
This returns the following error:
Error: ENOENT: no such file or directory, stat '/home/ec2-user/environment/test2.txt'
When I enter an absolute path it just appends this onto the end of /home/ec2-user/environment/
How can I change the directory res.download is trying to download from?
Is there an easier way to download your files from Amazon S3?
Any help would be much appreciated here!
I had the same problem and I found this answer:
NodeJS How do I Download a file to disk from an aws s3 bucket?
Based on that, you need to use createReadStream() and pipe().
Read more about stream.pipe() here: https://nodejs.org/en/knowledge/advanced/streams/how-to-use-stream-pipe/
res.attachment() will set the Content-Disposition header for you: https://expressjs.com/en/api.html#res.attachment
This code should work for you (based on the answer in the above link):
app.get('/test_script_api', function (req, res) {
    var fileName = req.query.File;
    res.attachment(fileName);

    var file = s3.getObject({
        Bucket: "bucket-name",
        Key: fileName
    }).createReadStream()
        .on("error", function (error) {
            // log the failure and end the response instead of hanging
            console.log("Failed to retrieve an object: " + error);
            res.status(500).end();
        });

    file.pipe(res);
});
In my case, on the client side, I used
This made sure that the file downloads.
I need to figure out where my files are downloaded to when I use filesDownload(). I don't see an argument for the file destination. Here's my code:
require('isomorphic-fetch');
var Dropbox = require('dropbox').Dropbox;
var dbx = new Dropbox({ accessToken: 'accessToken', fetch });

dbx.filesDownload({ path: 'filepath' })
    .then(function (response) {
        console.log(response);
    })
    .catch(function (error) {
        console.log(error);
    });
I'm getting a successful callback when I run the code but I don't see the file anywhere.
I need to know where my files are downloading to and how to specify the file destination in my function.
Thanks,
Gerald
I've used the function as described in the SDK's documentation (http://dropbox.github.io/dropbox-sdk-js/Dropbox.html#filesDownload__anchor) but I have no idea where my file goes.
Expected result: files are downloaded from Dropbox to the path that I have designated.
Actual result: I get a successful callback from Dropbox, but I cannot find the downloaded files.
In Node.js, the Dropbox API v2 JavaScript SDK download-style methods return the file data in the fileBinary property of the object they pass to the callback (which is response in your code).
You can find an example of that here:
https://github.com/dropbox/dropbox-sdk-js/blob/master/examples/javascript/node/download.js#L20
So, you should be able to access the data as response.fileBinary. It doesn't automatically save it to the local filesystem for you, but you can then do so if you want.
You need to use the fs module to save the binary data to a file:

var fs = require('fs');

dbx.filesDownload({ path: YourfilePath })
    .then(function (response) {
        console.log(response.media_info);
        fs.writeFile(response.name, response.fileBinary, 'binary', function (err) {
            if (err) { throw err; }
            console.log('File: ' + response.name + ' saved.');
        });
    })
    .catch(function (error) {
        console.error(error);
    });
I have a file repository, and when I call it from the browser it automatically downloads the file, which is fine.
But I want to make this request from my server and then serve the resulting file to the browser. Here is an example of the GET request from my server:
downloadFile(req, res, next) {
    let options = {
        url: 'url to my file repo',
    };
    request(options, function (err, resp, body) {
        if (err) {
            res.status(500).send("");
            return;
        }
        for (const header in resp.headers) {
            if (resp.headers.hasOwnProperty(header)) {
                res.setHeader(header, resp.headers[header]);
            }
        }
        resp.pipe(res);
    });
}
The request works fine, and when I access my server from the browser it starts downloading the file. Everything seems to work except for one thing: the file can't be opened. "This file format can't be opened," says the image viewer (if the file is an image, for example).
Where is the problem? Is it the way I serve the file from the server?
Thank you in advance. I've lost a lot of time and can't find the solution.
That's because what you probably want to send is the body of the response (e.g. the data of your image). By the time the callback fires, request has already consumed the response stream into body, so piping resp at that point sends nothing useful.
So instead of resp.pipe(res), use:
res.send(body)
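One caveat for binary files such as images: request decodes the body as a UTF-8 string by default, which corrupts binary data. Passing encoding: null keeps body as a raw Buffer. A minimal sketch based on the code above:

let options = {
    url: 'url to my file repo',
    encoding: null  // keep the body as a raw Buffer instead of a UTF-8 string
};
request(options, function (err, resp, body) {
    if (err) { return res.status(500).send(""); }
    res.setHeader('Content-Type', resp.headers['content-type']);
    res.send(body);  // body is a Buffer, so binary files arrive intact
});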
Use the developer mode of your browser to check the network messages passing between the server and your browser.
I want to download a .csv file on the frontend.
This is my code:
$http.get('/entity/consultations/_/registerationReport')
    .success(function (data) {
        myWindow = window.open('../entity/consultations/_/registerationReport', '_parent');
        myWindow.close();
    });
And on the server I use the json2csv converter to write the CSV file:
json2csv({ data: report, fields: fields }, function (err, csv) {
    if (err) throw err;
    res.setHeader('Content-Type', 'application/csv');
    res.setHeader('Content-Disposition', 'attachment; filename=Report.csv');
    res.end(csv, 'binary');
});
But it prints the CSV data in the browser instead of downloading the file.
@Pawan, there's nothing wrong with your json2csv function. The issue is that you're trying to trigger the download with an XMLHttpRequest (XHR) call made through Angular's $http service. An XHR call implies that your code will handle the response from the server, so the browser ignores the Content-Disposition header and does not trigger a download.
From what I can tell, you have several options:
If you don't have any pre-processing to do on the client, why not just use a direct link to /entity/consultations/_/registerationReport (using an <a> tag)? See the sketch at the end of this answer.
You may also call $window.open(...) from your Angular code (this has the ugly side effect of a flashing popup tab or window).
There are probably a number of other solutions, but these are the only ones that immediately come to mind. The bottom line is that XHR is not the right tool for the task you're trying to accomplish.
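As a rough sketch of the first two options (the route path is taken from your code; the download attribute is optional and merely suggests a filename to the browser):

<!-- option 1: a plain link; the browser honors Content-Disposition and saves the file -->
<a href="/entity/consultations/_/registerationReport" download="Report.csv">Download report</a>

// option 2: from Angular code, navigate the window directly instead of using $http
$window.open('/entity/consultations/_/registerationReport', '_self');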