FileReader() is unable to read large files - javascript

I'm using the following approach in order to preview images before uploading them:
$("#file").change(function() {
var reader = new FileReader();
reader.readAsArrayBuffer(this.files[0]);
var fileName = this.files[0].name;
var fileType = this.files[0].type;
alert(fileType)
reader.onloadend = function() {
var base64Image = btoa(String.fromCharCode.apply(null, new Uint8Array(this.result)));
// I show the image now and convert the data to base 64
}
}
I have noticed that when the image is large, this approach fails and I cannot preview the image.
I am unsure whether the problem is the base64 conversion or the FileReader itself.
Is there any setting to increase the maximum size, or is there any workaround?
Here is the error message thrown in the console:
Uncaught RangeError: Maximum call stack size exceeded
at FileReader.reader.onloadend

Your problem is that you use Function.prototype.apply, which converts every item of your typed array into a separate argument to String.fromCharCode.
Engines impose a limit on the number of arguments a function call can take, and a large file exceeds it; that is what the "Maximum call stack size exceeded" error is telling you.
To avoid this when dealing with large files, the best approach is to not process them in JavaScript at all.
If you need to send the file to your server, simply send the Blob directly; this is easily achieved with the FormData API.
If you need to display the file, e.g. in an HTML media element, use the URL.createObjectURL(yourFile) method.
And if you really need a data URI version of the file, use the reader.readAsDataURL(yourFile) method.
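A minimal sketch of all three alternatives, reusing the question's #file input; the #preview element and the /upload endpoint are placeholders:
var file = document.querySelector('#file').files[0];

// 1. Upload: send the Blob as-is, nothing is read into JS memory
var form = new FormData();
form.append('image', file, file.name);
fetch('/upload', { method: 'POST', body: form }); // endpoint is hypothetical

// 2. Preview: let the browser stream the file itself
var img = document.querySelector('#preview');
img.src = URL.createObjectURL(file);
img.onload = function () {
    URL.revokeObjectURL(img.src); // release the reference once displayed
};

// 3. Data URI, only if it is really needed
var reader = new FileReader();
reader.onloadend = function () {
    var dataURI = reader.result; // "data:image/...;base64,...."
};
reader.readAsDataURL(file);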

Works for me:
var reader = new FileReader();
reader.onload = function (evt) {
    // Build the binary string character by character instead of passing
    // the whole array to String.fromCharCode.apply at once, which would
    // hit the engine's maximum-arguments limit
    var binary = '';
    var bytes = new Uint8Array(reader.result);
    var len = bytes.byteLength;
    for (var i = 0; i < len; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    console.log(btoa(binary));
};
reader.readAsArrayBuffer(file);
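The character-by-character loop can be slow for very large buffers. A common variant (a sketch, not part of the answer above) converts the array in fixed-size chunks, staying below the argument limit while avoiding millions of single-character concatenations:
function arrayBufferToBase64(buffer) {
    var bytes = new Uint8Array(buffer);
    var chunkSize = 0x8000; // 32768 arguments per call, safely below engine limits
    var parts = [];
    for (var i = 0; i < bytes.length; i += chunkSize) {
        parts.push(String.fromCharCode.apply(null, bytes.subarray(i, i + chunkSize)));
    }
    return btoa(parts.join(''));
}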

If you read the file using a FileReader, the whole file is loaded into memory, so trying to handle large files this way will simply crash the browser tab. If you really need to pass your file around as a base64 string, I recommend adding a file-size constraint to prevent problems. In short, none of the FileReader methods are suitable for this purpose unless you are dealing with small files, say no larger than 100 MB or so; beyond that you will run into problems.
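A minimal sketch of such a size guard; the 100 MB cutoff is arbitrary and should be tuned to the target devices:
var MAX_BYTES = 100 * 1024 * 1024; // 100 MB, an arbitrary cutoff

function readAsBase64(file, callback) {
    if (file.size > MAX_BYTES) {
        callback(new Error('file too large to base64-encode in the browser'));
        return;
    }
    var reader = new FileReader();
    reader.onloadend = function () {
        // reader.result is "data:<mime>;base64,<payload>"
        callback(null, reader.result.split(',')[1]);
    };
    reader.readAsDataURL(file);
}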

After playing around, here's the solution:
$("#file").change(function () {
    var reader = new FileReader();
    reader.readAsBinaryString(this.files[0]);
    var fileName = this.files[0].name;
    var fileType = this.files[0].type;
    alert(fileType);
    reader.onloadend = function () {
        var base64Image = btoa(this.result);
    };
});
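Note that readAsBinaryString is deprecated in favor of readAsArrayBuffer; readAsDataURL yields the same base64 payload in one step, as in this equivalent sketch (file stands for this.files[0]):
var reader = new FileReader();
reader.onloadend = function () {
    // reader.result is "data:<mime>;base64,<payload>"
    var base64Image = reader.result.split(',')[1];
};
reader.readAsDataURL(file);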

Related

JavaScript FileReader Slice Performance

I am trying to access the first few lines of text files using the File API in JavaScript.
In order to do so, I slice an arbitrary number of bytes from the beginning of the file and hand the blob over to the FileReader.
For large files this takes very long, even though my understanding is that only the first few bytes of the file need to be accessed.
Is there some implementation in the background that requires the whole file to be accessed before it can be sliced?
Does it depend on the browser's implementation of the File API?
I have currently tested in both Chrome and Edge (Chromium).
Analysis in Chrome using the performance dev tools shows a lot of idle time before reader.onloadend fires and no increase in RAM usage. This might be, however, because the File API is implemented in the browser itself and does not show up in the JavaScript performance statistics.
My implementation of the FileReader looks something like this:
const reader = new FileReader();
reader.onloadend = (evt) => {
    if (evt.target.readyState == FileReader.DONE) {
        console.log(evt.target.result.toString());
    }
};
// Slice the first 10240 bytes of the file
var blob = files.item(0).slice(0, 1024 * 10);
// Start reading the sliced blob
reader.readAsBinaryString(blob);
This works fine, but as described it performs quite poorly for large files. I tried it with 10 kB, 100 MB and 6 GB files. The time until the first 10 kB are logged seems to correlate directly with the file size.
Any suggestions on how to improve performance for reading the beginning of a file?
Edit:
Using Response and DOM streams as suggested by @BenjaminGruenbaum sadly does not improve the read performance.
var dest = new WritableStream({
    write(str) {
        console.log(str);
    },
});
var blob = files.item(0).slice(0, 1024 * 10);
(blob.stream ? blob.stream() : new Response(blob).body)
    // Decode the binary-encoded response to string
    .pipeThrough(new TextDecoderStream())
    .pipeTo(dest)
    .then(() => {
        console.log('done');
    });
How about this:
function readFirstBytes(file, n) {
    return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => {
            resolve(reader.result);
        };
        reader.onerror = reject;
        reader.readAsArrayBuffer(file.slice(0, n));
    });
}
// file must be a File/Blob object, not a string
readFirstBytes(file, 10).then(buffer => {
    console.log(buffer);
});
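A hedged usage example, assuming an <input type="file" id="file"> element on the page:
document.querySelector('#file').addEventListener('change', (evt) => {
    readFirstBytes(evt.target.files[0], 10).then(buffer => {
        console.log(new Uint8Array(buffer)); // first 10 bytes of the file
    });
});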

FileReader memory leak in Chrome

I have a webpage with file upload functionality. The upload is performed in 5 MB chunks. I want to calculate the hash of each chunk before sending it to the server. The chunks are represented by Blob objects. In order to calculate the hash I am reading each blob into an ArrayBuffer using the native FileReader. Here is the code:
var reader = new FileReader();
var getHash = function (blob, callback) {
    reader.onloadend = function (e) {
        var hash = util.hash(e.target.result);
        callback(hash);
    };
    reader.readAsArrayBuffer(blob);
};
var processChunk = function (chunk) {
    if (chunk) {
        getHash(chunk, function (hash) {
            util.sendToServer(chunk, hash, function () {
                // this callback is called when the chunk upload is finished
                processChunk(chunks.shift());
            });
        });
    }
};
var chunks = file.splitIntoChunks(); // gets an array of blobs
processChunk(chunks.shift());
The problem: using FileReader.readAsArrayBuffer seems to eat up a lot of memory which is never released. So far I have tested with a 5 GB file on the following browsers:
Chrome 55.0.2883.87 m (64-bit): the memory goes up to 1-2GB quickly and oscillates around that. Sometimes it goes all the way up and browser tab crashes. It can use more memory than the size of read chunks. E.g. after reading 500MB of chunks the process already uses 700MB of memory.
Firefox 50.1.0: memory usage oscillates around 300-600MB
Code adjustments I have tried - all to no avail:
re-using the same FileReader instance for all chunks (as suggested in this question)
creating new FileReader for each chunk
adding timeout before starting new chunk
setting the FileReader and the ArrayBuffer to null after each read
The question: is there a way to fix the problem? Is this a bug in the FileReader implementations or am I doing something wrong?
EDIT: Here is a JSFiddle https://jsfiddle.net/andy250/pjt9udeu/
This is a bug in Chrome on Windows. It is reported here: https://bugs.chromium.org/p/chromium/issues/detail?id=674903

How do I get an image's binary data using JS?

I want to make a multiple-image upload system with a progress bar. I want to do it with simple code (using jQuery or JS). When the user has selected his images in the browser, I want to show those images, and with an upload button he starts uploading them via AJAX into his folder.
So, questions:
1.) Is it possible to show the selected image (without any complicated code)?
2.) Do I get a variable or array where the selected images are stored as base64 code (data:/img:dfd5d/d54fs..... something like this) or encoded?
3.) How do I add a progress bar?
I haven't written any code yet because I don't know how to start. I am new to computer science.
But I found this code on this site:
function previewFile() {
    var preview = document.querySelector('img');
    var file = document.querySelector('input[type=file]').files[0];
    var reader = new FileReader();
    reader.onloadend = function () {
        preview.src = reader.result;
    };
    if (file) {
        reader.readAsDataURL(file);
    } else {
        preview.src = "";
    }
}
This code is easy and I understand it, but one thing is not clear: what does the line var reader = new FileReader(); mean? Why use new, and what is it?
Thanks in advance, and please don't explain it in a complicated way; I am not very good at English, so please explain in simple words if possible.
Assuming that you have this field
<input type="file" onchange="showImage(this)"/>
you can create a script to take the binary data and show it
function showImage(input) {
    var reader = new FileReader();
    // validating...
    var fileType = input.files[0].type;
    var fileSize = input.files[0].size;
    // file type (this will validate the MIME type; only png, jpeg, gif allowed)
    var fileTypes = ["image/png", "image/jpeg", "image/gif"];
    if (fileTypes.indexOf(fileType) < 0) {
        // return error, invalid mimetype
        return false;
    }
    // file cannot be more than 500kb
    if (fileSize > 500000) {
        // return error, image too big
        return false;
    }
    reader.onload = function (e) {
        // e.target.result contains the data URL of the image
        jQuery('#myimagetopreview').attr('src', e.target.result);
    };
    reader.readAsDataURL(input.files[0]);
}
This should work; if you have problems, tell me.
Edit: FileReader is not supported by all browsers; check the documentation for details: https://developer.mozilla.org/en/docs/Web/API/FileReader
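A minimal feature-detection guard for that case:
if (window.FileReader) {
    // safe to construct: var reader = new FileReader();
} else {
    // fall back, e.g. upload without a client-side preview
}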
The FileReader API has "Working Draft" status and isn't part of the official JS API. I think you have to wait until browsers support this new API, or you have to activate experimental JS APIs in the browser.

How to get all sliced data from the entire file

I got the original code from here: Using Javascript FileReader with huge files
But my purpose is different: the author wants to get just a part of the whole file, whereas I want all of it.
I'm trying to modify it with a loop, mixed with this technique: slice large file into chunks and upload using ajax and html5 FileReader
All attempts fail. Is there any way I can get what I want?
var getSource = function (file) {
    var reader = new FileReader();
    reader.onload = function (e) {
        if (e.target.readyState == FileReader.DONE) {
            process(e.target.result);
        }
    };
    var part = file.slice(0, 1024 * 1024);
    reader.readAsBinaryString(part);
};

function process(data) {
    // data is processed here
}
Thank you,
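A minimal sketch of one way to read the entire file sequentially, one 1 MB slice at a time (untested; it assumes process() can accept partial chunks):
function readAllChunks(file, chunkSize) {
    var offset = 0;
    var reader = new FileReader();
    reader.onload = function (e) {
        process(e.target.result); // handle this chunk
        offset += chunkSize;
        if (offset < file.size) {
            readNext(); // continue with the next slice
        }
    };
    function readNext() {
        reader.readAsBinaryString(file.slice(offset, offset + chunkSize));
    }
    readNext();
}
readAllChunks(file, 1024 * 1024);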

How to get correct MD5 hash after saving canvas as .png in Chrome extension?

I am trying to save an image canvas to disk as .png in a Chrome extension, with a file name reflecting its MD5 hash. For this I use something like this:
var img = document.createElement("img");
img.src = canvas.toDataURL("image/png");
var image_data = atob(img.src.split(',')[1]);
var arraybuffer = new ArrayBuffer(image_data.length);
var view = new Uint8Array(arraybuffer);
for (var i = 0; i < image_data.length; i++) {
    view[i] = image_data.charCodeAt(i);
}
var blob = new Blob([view], { type: 'image/png' });
var url = (window.webkitURL || window.URL).createObjectURL(blob);
var b = new FileReader();
b.readAsDataURL(blob);
b.onloadend = function () {
    filename = SparkMD5.hash(b.result);
};
// ....some code
chrome.downloads.download({ url: url, filename: filename + '.png', saveAs: false });
The file is saved correctly, but the MD5 hash that I get in code via SparkMD5 is different from the one I see in Windows after the file is saved. I cannot understand why. I have experimented a bit with different approaches to saving (XMLHttpRequest directly, etc.), but no luck yet. Probably I misunderstand some basic concept, as I am a bit of a newbie to web programming.
I have also saved files via chrome.pageCapture.saveAsMHTML with the use of FileReader, and in that case the MD5 hashes are equal.
What is wrong, and is there a way to get equal MD5 hashes for the file name and the final file while saving a .png from a Chrome extension?
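One thing worth checking (an observation, not a confirmed answer from the thread): b.result above is a data-URL string ("data:image/png;base64,..."), so SparkMD5.hash digests that string, while Windows computes the MD5 of the raw bytes written to disk. A sketch that hashes the raw bytes instead, assuming SparkMD5's ArrayBuffer mode is available:
var b = new FileReader();
b.readAsArrayBuffer(blob); // read the raw PNG bytes, not a data URL
b.onloadend = function () {
    // SparkMD5.ArrayBuffer.hash digests the binary content,
    // which should match the MD5 of the file on disk
    var filename = SparkMD5.ArrayBuffer.hash(b.result);
};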
