Electron upload with progress (JavaScript)

I have an Electron app that needs to upload very big files to a server over HTTP from the renderer process, without user input. I decided to use axios as my HTTP client because it can report upload progress, but I ran into a few problems.
Browser JavaScript and Node.js don't play well together in some areas. I used fs.createReadStream to get the file, but axios does not understand what a ReadStream object is, and I can't pipe the stream into FormData, which is where the file needs to go (there are several topics about this on their GitHub issues tab, but nothing has been done about it so far).
I ended up using fs.readFileSync together with the form-data module and its getBuffer() method, but now the entire file is loaded into memory before the upload, and with files as big as mine that kills the Electron process.
While googling I found the request library, which can in fact pipe a stream into a request, but it is deprecated and no longer maintained, and apparently I can't get upload progress from it.
I'm running out of options. How do you upload files with Electron without user input (so without a file input) and without loading them into memory upfront?
P.S. The form-data GitHub page has a snippet showing how to upload a file stream with axios, but it doesn't work for me: nothing is sent, and downgrading the library as one issue suggested didn't help either...
const form = new FormData();
const stream = fs.createReadStream(PATH_TO_FILE);
form.append('image', stream);

// In a Node.js environment you need to set the boundary in the 'Content-Type'
// header by calling the `getHeaders` method
const formHeaders = form.getHeaders();

axios.post('http://example.com', form, {
  headers: {
    ...formHeaders,
  },
})
  .then(response => response)
  .catch(error => error)

I was able to solve this and I hope it helps anyone facing the same problem.
Since request is deprecated, I looked for alternatives and found got for Node.js HTTP requests. It supports Stream, fs.ReadStream, etc.
You will need form-data as well; it lets you put streams inside a FormData and assign them to a key.
The following code solved my problem:
import fs from 'fs'
import got from 'got'
import FormData from 'form-data'

const stream = fs.createReadStream('some_path')

// NOT the native browser FormData
const formData = new FormData()
formData.append('file', stream, 'filename');

try {
  const res = await got.post('https://my_link.com/upload', {
    body: formData,
    headers: {
      ...formData.getHeaders() // sets the boundary and the Content-Type header
    }
  }).on('uploadProgress', progress => {
    // here we get our upload progress; progress.percent is a float between 0 and 1
    console.log(Math.round(progress.percent * 100))
  });

  if (res.statusCode === 200) {
    // upload succeeded
  } else {
    // handle the error
  }
} catch (e) {
  console.log(e);
}
Works perfectly in Electron renderer process!
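One follow-up note: depending on the got version, it may already work out the total upload size from the form-data body on its own. If progress.percent stays at 0 or progress.total comes back undefined, a minimal sketch (reusing the formData from above and form-data's getLength method) that supplies an explicit Content-Length looks like this:

import { promisify } from 'util'

// form-data can compute the total size of the fs.ReadStream entries it knows about.
const getLength = promisify(formData.getLength.bind(formData))
const contentLength = await getLength()

const res = await got.post('https://my_link.com/upload', {
  body: formData,
  headers: {
    ...formData.getHeaders(),
    'Content-Length': String(contentLength) // lets got report a meaningful percent
  }
}).on('uploadProgress', progress => {
  console.log(Math.round(progress.percent * 100))
})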

Related

GET vs POST file download using express.js - changing the REST verb results in larger and incorrect file

My situation is that one of my services provides many REST GET APIs for downloading files (typically xlsx).
In a new API I have a POST endpoint that does exactly the same thing as an existing GET endpoint (I've copy-pasted the code line by line), except that it uses POST instead of GET:
export async function postDownloadMyFile (
  req: Request,
  res: Response,
  next: NextFunction
): Promise<Response> {
  // ... xlsx workBook creation code
  const buffer = await workBook.xlsx.writeBuffer();
  res.setHeader('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
  res.setHeader('Content-Disposition', 'attachment; filename=' + 'MyFile.xlsx');
  res.write(buffer, 'binary');
  res.end();
  return res;
}
But when the frontend client requests the file, it comes back roughly twice the size it has with the GET request and cannot be opened (presumably the format is wrong or the buffer is written out incorrectly).
Changing the frontend and backend to use GET (without modifying the body of the function above) 'fixes' the issue.
Are there some additional headers I'm meant to be setting?
Thanks

POST cutting off PDF data

I posted a question yesterday (linked here) about trying to store a PDF in a database and retrieve it at a later date. Since then I have been advised that, since I cannot use cloud computing services, it is best to upload the PDF files to local storage and save the file's URL in the database instead. I have begun implementing this, but I have run into some trouble.
I am currently using FileReader() as shown below to process the input file and send it to the server:
var input_file = "";
let reader = new FileReader();
reader.readAsText(document.getElementById("input_attachment").files[0]);
reader.onloadend = function () {
  input_file = "&file=" + reader.result;
  const body = /* all the rest of my data */ + input_file;
  const method = {
    method: "POST",
    body: body,
    headers: {
      "Content-type": "application/x-www-form-urlencoded"
    }
  };
};
After this block of code I do a stock-standard fetch(), and a route on my server receives it. Almost all the data arrives exactly as expected, but the file is cut off somewhere around 1300 characters in (making for quite an incomplete PDF). What does arrive matches the first 1300 or so characters of the original PDF I uploaded.
I have seen suggestions that you are meant to use the "multipart/form-data" content type to upload files, but when I do that I seem to receive only the first 700 characters or so of my PDF. I have tried using the Multer middleware to handle the "multipart/form-data", but it just doesn't seem to upload anything (though I can't guarantee I am using it correctly; a typical setup is sketched below for reference).
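For reference, the usual way to wire up Multer looks roughly like the sketch below. The field name 'fileToUpload' and the destination directory are assumptions on my part, and the answer further down ends up using Formidable instead.

const express = require('express');
const multer = require('multer');

const app = express();
// Files are written to disk under uploads/ instead of being buffered in memory.
const upload = multer({ dest: 'uploads/' });

app.post('/upload', upload.single('fileToUpload'), (req, res) => {
  // req.file describes the stored file, req.body holds the other form fields.
  res.json({ path: req.file.path, fields: req.body });
});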
I also initially had trouble with a 'payload too large' error from fetch, but for now I have resolved it like this:
app.use(bodyParser.urlencoded({ limit: "50mb", extended: false, parameterLimit: 50000 }));
I suspect this may not be implemented correctly, though, as I have seen discussion suggesting that the urlencoded limit is set before the file is loaded and cannot be changed in the middle of the program.
Any and all help is greatly appreciated, and I will likely use any information here to construct an answer on my original question from yesterday, so that anybody else facing these sorts of issues has a resource to go to.
I personally found the solution to this problem, as follows. On the client side of my application, this is an example of what I implemented:
const formData = new FormData();
formData.append("username", "John Smith");
formData.append("fileToUpload", document.getElementById("input_attachment").files[0]);

const method = {
  method: "POST",
  body: formData
};

fetch(url, method)
  .then(res => res.json())
  .then(res => alert("File uploaded!"))
  .catch(err => alert(err.message))
As you can see, I switched from "application/x-www-form-urlencoded" to "multipart/form-data" for uploading files. Node.js and Express, however, do not natively support this encoding type. I chose the Formidable library (I found it the easiest to use without too much overhead), which you can read about here. Below is an example of my server-side implementation of this middleware:
const express = require('express');
const app = express();
const formidable = require('formidable');

app.post('/upload', (req, res) => {
  const form = formidable({ uploadDir: `${__dirname}/file/`, keepExtensions: true });
  form.parse(req, (err, fields, files) => {
    if (err) {
      console.log(err.stack);
    } else {
      console.log(fields.username);
    }
  });
});
The file(s) are automatically saved to the directory specified in uploadDir, and keepExtensions ensures the file extension is preserved as well. The non-file inputs are accessible through the fields object, as the fields.username example above shows.
From what I have found, this is the easiest way to set up a simple file upload system.
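As a rough sketch of what the files object gives you inside form.parse (the property names differ between Formidable versions, so treat them as assumptions to check against your version):

form.parse(req, (err, fields, files) => {
  if (err) return res.status(500).json({ error: err.message });
  const uploaded = files.fileToUpload; // same field name as in the client example above
  // Formidable v1 exposes the saved location as `path`; v2+ renames it to `filepath`.
  const savedPath = uploaded.filepath || uploaded.path;
  // Store savedPath (or a URL derived from it) in the database instead of the PDF itself.
  res.json({ username: fields.username, file: savedPath });
});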

Getting ERR_FS_FILE_TOO_LARGE while using unirest file send with put

I am using unirest to upload a file like so
unirest.put(fullUri)
  .auth({
    user: self.userName,
    pass: self.password
  })
  .header('X-Checksum-Sha1', sha1Hash)
  .header('X-Checksum-Md5', md5Hash)
  .send(fs.readFileSync(filePath))
  .end(function (response) {
    // ...
  });
This works fine for smaller files, but for large files I get an ERR_FS_FILE_TOO_LARGE error. I have already tried increasing max_old_space_size without success. It looks like I could fix this by streaming the file, but I can't find an API for that in the unirest library.
It looks like this is an issue with form-data. From the GitHub issues:
It turns out that unirest uses the npm module form-data, and form-data requires that, if it receives a stream that is not an fs.ReadStream, you provide the file information in an additional options argument.
Example:
form.append(
  'my_file',
  stream,
  {
    filename: 'bar.jpg',
    contentType: 'image/jpeg',
    knownLength: 19806,
  },
)
See: https://github.com/form-data/form-data#void-append-string-field-mixed-value--mixed-options-
Streaming files with unirest is available via the .attach method:
unirest.put(fullUri)
  .auth({
    user: self.userName,
    pass: self.password
  })
  .header('X-Checksum-Sha1', sha1Hash)
  .header('X-Checksum-Md5', md5Hash)
  // .send(fs.readFileSync(filePath))
  .attach('filename', fs.createReadStream(filePath))
  .end(function (response) {
    // ...
  });
"I can't find an API to do that in the unirest library."
That's because there is none: https://github.com/Kong/unirest-nodejs/issues/49:
"You can use the underlying request library to do streaming if you want, I am open to a pull request either on this branch or the 1.0 version to add streaming."
The issue is still open.
But from this issue and from the source code you can see that end() returns the underlying request (see https://github.com/request/request):
Unirest.request = require('request')
...
end: function (callback) {
  ...
  Request = Unirest.request($this.options, handleRequestResponse)
  Request.on('response', handleGZIPResponse)
  ...
  return Request
}
and from request's source code you can see that no actual request has been sent yet (it is deferred). So you can hook into it and use its API instead:
const request = unirest.put(constants.server2)
  .auth({
    user: self.userName,
    pass: self.password
  })
  .header('X-Checksum-Sha1', sha1Hash)
  .header('X-Checksum-Md5', md5Hash)
  // .send(fs.readFileSync(filePath))
  .end(...)

fs.createReadStream(filePath).pipe(request) // just pipe it!
As a side note: unirest is based on request, and request is now deprecated. So... maybe you should steer away from unirest.
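For what it's worth, here is a rough sketch of the same PUT done with got's stream API instead of unirest. The variables are the ones from the question, and the exact option names should be checked against your got version:

const fs = require('fs');
const stream = require('stream');
const { promisify } = require('util');
const got = require('got');

const pipeline = promisify(stream.pipeline);

async function uploadWithGot(fullUri, filePath, userName, password, sha1Hash, md5Hash) {
  await pipeline(
    fs.createReadStream(filePath),   // the file is streamed, never read fully into memory
    got.stream.put(fullUri, {
      username: userName,
      password: password,
      headers: {
        'X-Checksum-Sha1': sha1Hash,
        'X-Checksum-Md5': md5Hash
      }
    }),
    new stream.PassThrough()         // drain the response so the pipeline can finish
  );
}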

createReadStream not working/extremely slow for large files

I'm using the Dropbox API to upload files. To get the files to Dropbox I go through the following steps:
1. Upload the file from a form to a local directory on the server.
2. Read the file from the local directory using fs.createReadStream.
3. Send the file to Dropbox via the Dropbox API.
The issue:
For some reason fs.createReadStream takes absolutely ages when reading and uploading a large file. The file I'm trying to upload is only 12 MB, which is not big, yet it takes approximately 18 minutes to upload/process it.
I don't know whether the issue is in createReadStream or in the Dropbox API code.
It works fine with files in the kilobyte range.
My code:
let options = {
  method: 'POST',
  uri: 'https://content.dropboxapi.com/2/files/upload',
  headers: {
    'Authorization': 'Bearer TOKEN HERE',
    'Dropbox-API-Arg': "{\"path\": \"/test/" + req.file.originalname + "\",\"mode\": \"overwrite\",\"autorename\": true,\"mute\": false}",
    'Content-Type': 'application/octet-stream'
  },
  // I think the issue is here.
  body: fs.createReadStream(`uploads/${req.file.originalname}`)
};
rp(options)
  .then(() => {
    return _deleteLocalFile(req.file.originalname)
  })
  .then(() => {
    return _generateShareableLink(req.file.originalname)
  })
  .then((shareableLink) => {
    sendJsonResponse(res, 200, shareableLink)
  })
  .catch(function (err) {
    sendJsonResponse(res, 500, err)
  });
Update:
const rp = require('request-promise-native');
I ran into something similar before, and after a lot of head scratching and digging around I was able to resolve it, in my case anyway.
For me, the issue arose because the default chunk size for createReadStream() is quite small (64 KB), and for some reason this had a knock-on effect when uploading to Dropbox.
The solution therefore was to increase the chunk size.
// Try using chunks of 256kb
body: fs.createReadStream(`uploads/${req.file.originalname}`, {highWaterMark : 256 * 1024});
https://github.com/request/request#streaming
I believe you need to pipe the stream into the request.
See this answer: Sending large image data over HTTP in Node.js
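A rough sketch of what that piping looks like with the underlying request library (reusing the options object from the question, minus its body):

const fs = require('fs');
const request = require('request'); // request-promise-native wraps this library

// Remove the stream from the options and pipe the file in instead.
const { body, ...streamOptions } = options;

fs.createReadStream(`uploads/${req.file.originalname}`)
  .pipe(request(streamOptions))
  .on('response', (response) => {
    console.log('Dropbox responded with status', response.statusCode);
  })
  .on('error', (err) => {
    sendJsonResponse(res, 500, err);
  });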

Piping zip file from SailsJS backend to React Redux Frontend

I have a SailsJS backend where I generate a zip file that my frontend, a React app with Redux, requests. I'm using sagas for the async calls and fetch for the request. In the backend I tried things like:
// zipFilename is the absolute path
res.attachment(zipFilename).send();
or
res.sendfile(zipFilename).send();
or
res.download(zipFilename).send();
or piping the stream with:
const filestream = fs.createReadStream(zipFilename);
filestream.pipe(res);
On my frontend I try to parse it with:
parseJSON = (response) => {
  return response.clone().json().catch(() => response.text());
}
Everything I tried ends up with an empty zip file. Any suggestions?
There are various issues with the options that you tried out:
res.attachment will just set the Content-Type and Content-Disposition headers, but it will not actually send anything.
You can use this to set the headers properly, but you need to pipe the ZIP file into the response as well.
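A minimal sketch of that combination, using the zipFilename from the question:

const fs = require('fs');

// res.attachment sets Content-Disposition (and Content-Type from the extension);
// piping the read stream is what actually sends the bytes.
res.attachment(zipFilename);
fs.createReadStream(zipFilename).pipe(res);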
res.sendfile: You should not call .send() after this. From the official docs' examples:
app.get('/file/:name', function (req, res, next) {
  var options = { ... };
  res.sendFile(req.params.name, options, function (err) {
    if (err) {
      next(err);
    } else {
      console.log('Sent:', fileName);
    }
  });
});
If the ZIP is properly built, this should work fine and set the proper Content-Type header as long as the file has the proper extension.
res.download: Same thing, you should not call .send() after this. From the official docs' examples:
res.download('/report-12345.pdf', 'report.pdf', function(err) { ... });
res.download will use res.sendfile to send the file as an attachment, thus setting both Content-Type and Content-Disposition headers.
However, you mention that the ZIP file is being sent but it is empty, so you should probably check if you are creating the ZIP file properly. As long as they are built properly and the extension is .zip, res.download should work fine.
If you are building them on the fly, check this out:
This middleware creates a ZIP file with multiple files on the fly and sends it as an attachment. It uses lazystream and archiver:
const fs = require('fs');
const lazystream = require('lazystream');
const archiver = require('archiver');

function middleware(req, res) {
  // Set the response's headers:
  // You can also use res.attachment(...) here.
  res.writeHead(200, {
    'Content-Type': 'application/zip',
    'Content-Disposition': 'attachment; filename=DOWNLOAD_NAME.zip',
  });

  // Files to add to the ZIP:
  const filesToZip = [
    'assets/file1',
    'assets/file2',
  ];

  // Create a new ZIP file:
  const zip = archiver('zip');

  // Set up some callbacks:
  zip.on('error', errorHandler); // your error handler
  zip.on('finish', function () {
    res.end(); // Send the response once the ZIP is finished.
  });

  // Pipe the ZIP output to res:
  zip.pipe(res);

  // Add files to the ZIP:
  filesToZip.forEach((filename) => {
    zip.append(
      new lazystream.Readable(() => fs.createReadStream(filename)),
      { name: filename }
    );
  });

  // Finalize the ZIP. Compression will start and the output will
  // be piped to res. Once the ZIP is finished, res.end() will be
  // called.
  zip.finalize();
}
You can build on this to cache the built ZIPs instead of building them on the fly every time, which is time- and resource-consuming and inadvisable for most use cases.
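As a rough sketch of that caching idea (the paths and the file list are placeholders, and it assumes the ZIP contents do not change between requests):

const fs = require('fs');
const path = require('path');
const archiver = require('archiver');

function cachedZipMiddleware(req, res) {
  const cachedPath = path.join(__dirname, 'cache', 'DOWNLOAD_NAME.zip');

  // Serve the cached ZIP if it was already built.
  if (fs.existsSync(cachedPath)) {
    return res.download(cachedPath, 'DOWNLOAD_NAME.zip');
  }

  // Otherwise build it once to disk, then send it.
  const output = fs.createWriteStream(cachedPath);
  const zip = archiver('zip');

  output.on('close', () => res.download(cachedPath, 'DOWNLOAD_NAME.zip'));
  zip.on('error', (err) => res.status(500).send(err.message));

  zip.pipe(output);
  ['assets/file1', 'assets/file2'].forEach((filename) => {
    zip.file(filename, { name: filename });
  });
  zip.finalize();
}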
