Converting Uint8Array crashing browser for large files - javascript

I have an app with an input of type "file". The following methods grab the file, then prep it to be sent to the server via AJAX.
private StartUpload = (files) => {
    if (files && files.length === 1) {
        this.GetFileProperties(files[0])
            .done((properties: IFileProperties) => {
                $('input[type=file]').val("");
                if (this._compatibleTypes.indexOf(properties.Extension) >= 0) {
                    var base64 = this.ArrayBufferToBase64(properties.ArrayBuffer);
                    this.DoFileUpload(base64, properties.Extension).always(() => {
                        this.ShowDialogMessage('edit_document_upload_complete', 'edit_document_upload_complete');
                    });
                } else {
                    this.ShowDialogMessage('edit_document_upload_incompatible', 'edit_document_upload_compatible_types', this._compatibleTypes);
                }
            });
    } else {
        this.ShowDialogMessage('edit_document_upload_one_file', 'edit_document_upload_one_file_msg');
    }
};
private ArrayBufferToBase64(buffer): any {
    var binary = '';
    var bytes = new Uint8Array(buffer);
    for (var xx = 0, len = bytes.byteLength; xx < len; xx++) {
        binary += String.fromCharCode(bytes[xx]);
    }
    return window.btoa(binary);
}
private DoFileUpload = (base64, extension) => {
    this.IsLoading(true);
    var dfd = $.Deferred();
    var data = {
        data: base64
    };
    UpdateFormDigest((<any>window)._spPageContextInfo.webServerRelativeUrl, (<any>window)._spFormDigestRefreshInterval);
    var methodUrl = "_vti_bin/viewfile/FileInformation.asmx/AddScannedItemAlt";
    $.ajax({
        headers: {
            "X-RequestDigest": $("#__REQUESTDIGEST").val()
        },
        url: methodUrl,
        contentType: "application/json",
        data: JSON.stringify(data),
        dataType: 'json',
        type: "POST",
        success: (response) => {
            // do stuff
            dfd.resolve(response);
        },
        error: (e) => {
            // do stuff
            dfd.reject(e);
        }
    });
    return dfd;
};
This works perfectly in the vast majority of cases. However, when the file size is large (say 200MB+) it kills the browser.
Chrome shows a blackish-grey page with the "Aw, Snap!" message and essentially dies.
IE shows an "Out of Memory" console error but continues to work.
FF shows an "Unresponsive script" warning. Choosing "don't show me again" lets it run until an "out of memory" console error appears.
This is where it dies:
for (var xx = 0, len = bytes.byteLength; xx < len; xx++) {
    binary += String.fromCharCode(bytes[xx]);
}
Wrapping a try/catch around this does nothing and no error is caught.
I can step into the loop without a crash, but stepping through every iteration is impractical since len = 210164805. Instead I added console.log(xx) to the loop and let it fly - but the browser crashes before anything shows up in the log.
Is there some limit to the size a string can be that could be causing the browser to crash once exceeded?
Thanks

You need to do this asynchronously by breaking the work up into blocks or time segments.
This means your code will need to use a callback, but otherwise it's straightforward -
Example
var bytes = new Uint8Array(256*1024*1024); // 256 MB buffer
convert(bytes, function(str) { // invoke the process with a callback defined
    alert("Done!");
});

function convert(bytes, callback) {
    var binary = "", blockSize = 2*1024*1024, // 2 MB block
        block = blockSize, // block segment
        xx = 0, len = bytes.byteLength;
    (function _loop() {
        while (xx < len && --block > 0) { // copy until block segment = 0
            binary += String.fromCharCode(bytes[xx++]);
        }
        if (xx < len) { // more data to copy?
            block = blockSize; // reinit new block segment
            binary = ""; // for demo, to avoid running out of memory
            setTimeout(_loop, 10); // KEY: async wait
            // update a progress bar so we can see something going on:
            document.querySelector("div").style.width = (xx / len) * 100 + "%";
        }
        else callback(binary); // if done, invoke callback
    })(); // self-invoking loop
}
html, body {width:100%;margin:0;overflow:hidden}
div {background:#4288F7;height:10px}
<div></div>
Converting a large buffer to a string can easily make the client run out of memory. A 200 MB buffer converted to a string adds another 2 x 200 MB, since strings are stored as UTF-16 (i.e. 2 bytes per char), so here we use 600 MB out of the box.
It also depends on the browser and how it handles memory allocation, as well as on the system. The browser will try to protect the computer against malevolent scripts that attempt to fill up the memory, for example.
You should be able to keep the data as an ArrayBuffer and send that to the server directly.
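For illustration, a minimal sketch of that idea: wrap the ArrayBuffer in a Blob and post the raw bytes so no giant base64 string is ever built. The "/upload" URL and "ext" parameter here are placeholders, not the AddScannedItemAlt endpoint from the question.
// Sketch only: posts the raw bytes instead of a base64 string.
// "/upload" and the "ext" parameter are hypothetical.
function uploadRawFile(arrayBuffer, extension) {
    return $.ajax({
        url: "/upload?ext=" + encodeURIComponent(extension),
        type: "POST",
        data: new Blob([arrayBuffer]),      // raw bytes, no string conversion
        processData: false,                 // keep jQuery from serializing the body
        contentType: "application/octet-stream"
    });
}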

Related

Intensive job javascript, browser freeze

I need to construct a blob: I receive almost 100 parts (~500 KB each) that have to be decrypted and then assembled into a blob file.
It actually works fine, but the decryption is CPU-intensive and freezes my page.
I have tried different approaches, with jQuery deferreds and timeouts, but I always hit the same problem.
Is there a way to avoid freezing the UI thread?
var parts = blobs.sort(function (a, b) {
    return a.part - b.part;
});
// our final byte arrays
var byteArrays = [];
for (var i = 0; i < blobs.length; i++) {
    // this job is intensive and takes time
    byteArrays.push(that.decryptBlob(parts[i].blob.b64, fileType));
}
// create a new blob with all the data
var blob = new Blob(byteArrays, { type: fileType });
The body of the for(...) loop is synchronous, so the entire decryption process is synchronous; in simple words, decryption happens chunk after chunk on the main thread. How about making it asynchronous and decrypting multiple chunks in parallel? In JavaScript terms, that means Web Workers. Workers run in parallel, so if you spawn 5 workers, for example, the total time drops to roughly T / 5 (T = total time in synchronous mode).
Read more about worker threads here :
https://blog.logrocket.com/node-js-multithreading-what-are-worker-threads-and-why-do-they-matter-48ab102f8b10/
Thanks to Sebastian Simon,
I took the worker route, and it's working fine.
var chunks = [];
var decryptedChunkFnc = function (args) {
    // my blob-building job here
};
// determine the maximum number of workers to use
var maxWorker = 5;
if (totalParts < maxWorker) {
    maxWorker = totalParts;
}
for (var iw = 0; iw < maxWorker; iw++) {
    // no need for eval: just create each worker directly
    var wo = new Worker("decryptfile.min.js");
    var item = blobs.pop();
    wo.postMessage(MyObjectPassToTheFile);
    wo.onmessage = decryptedChunkFnc;
}
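The worker script itself isn't shown. A minimal sketch of what decryptfile.min.js might contain, assuming each posted message carries one encrypted part; decryptChunk is a placeholder for the real decryption routine.
// Hypothetical worker script; the real decryptfile.min.js is not shown above.
// It decrypts one part off the main thread and posts the result back.
self.onmessage = function (e) {
    var part = e.data;                      // e.g. { part: 3, b64: "..." }
    var decrypted = decryptChunk(part.b64); // placeholder for the real decryption
    self.postMessage({ part: part.part, bytes: decrypted });
};

function decryptChunk(b64) {
    // placeholder: real code would base64-decode and decrypt here
    return new Uint8Array(0);
}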

javascript: string limited to 268,400,000 length?

With JavaScript and Chrome (on Electron) I am reading files in chunks and appending them to a string. I can see that if I try to read a 462 MB file, I get the error RangeError: Invalid string length, and if I print the string length on every chunk, the last read shows 268,400,000, reading chunks of 100,000 bytes.
What is this error about? A JavaScript string limit? My computer saying stop? I can see that CPU stays below 50% and memory doesn't go higher than 55%.
I am about to start on a workaround, but I cannot find anything about a length limit, so maybe I am facing another type of error?
The code I'm using to read files:
var start, temp_end, end;
var BYTES_PER_CHUNK = 100000;

function readFile(file_to_read, param) {
    if (param.start < param.end) {
        return new Promise(function(resolve) {
            var chunk = file_to_read.file.slice(param.start, param.temp_end);
            var reader = new FileReader();
            reader.onload = function(e) {
                if (e.target.readyState == 2) { // the file is being uploaded in chunks, and the chunk has been successfully read
                    document.getElementById('file_monitor').max = param.end;
                    document.getElementById('file_monitor').value = param.temp_end;
                    //file.data += new TextDecoder("utf-8").decode(e.target.result);
                    Promise.resolve()
                        .then(function() {
                            file_to_read.data += e.target.result;
                        }).then(function() {
                            param.start = param.temp_end; // 0 if a new file, the previous one if still reading the same file
                            param.temp_end = param.start + BYTES_PER_CHUNK;
                            if (param.temp_end > param.end)
                                param.temp_end = param.end;
                            resolve(readFile(file_to_read, param));
                        }).catch(function(e) {
                            console.log(e);
                            console.log(file_to_read.data.length);
                            console.log(file_to_read.data);
                            console.log(e.target.result);
                            resolve();
                        });
                }
            };
            reader.readAsText(chunk);
            // reader.readAsBinaryString(chunk);
        });
    } else
        return Promise.resolve();
}
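In line with the earlier advice about not materializing one enormous string, one workaround sketch (assuming the chunks do not all have to live in a single string) is to collect them in an array and process or join them only as needed. onChunkRead is a hypothetical helper, not part of the code above.
// Sketch: keep each chunk as its own array element; each stays far below the
// engine's per-string limit, and pieces can be processed or written out one by one.
var chunks = [];

function onChunkRead(text) {
    chunks.push(text);          // instead of file_to_read.data += text
}

// Join only if the combined size is known to be safe; otherwise iterate:
// chunks.forEach(function(part) { processPart(part); });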

PHP function fails to return correct result after ~1000 ajax posts

I'm debugging some code, and to do so I am repeatedly making AJAX posts to a PHP script on my localhost Apache24 server. In simple terms, the PHP script takes an integer value and returns a different data string depending on the input integer.
I'm stepping through numerous integer values with a for loop on the JavaScript side, starting at x = 1. I've noticed, however, that after ~980 AJAX posts, the PHP function stops returning the correct data; it seems to only return the data for x = 980, even as x continues to increment. Console.log confirms that the x value doesn't hang at 980.
I initially thought maybe the script was buggy, but then I restarted the loop at x = 980 and, sure enough, the PHP script worked fine until x = ~1900, when it stopped working again.
Is there a reason the PHP script fails to work after ~980 requests? I have received no errors on either the web side or the server side.
function interpretDisplay(input_string) {
    var display = input_string.split("?");
    for (var x = 0; x < 16; x++) {
        document.getElementById("ids").innerHTML = display + " ";
    }
}

function runDisplay(x) {
    values[1] += "&seed=" + x;
    $.post("test.php", values[1], function(data) {
        console.log(x);
        if (x % 1 == 0) {
            interpretDisplay(data);
        }
        if (x < 1000) {
            setTimeout(function() {
                runDisplay(x + 1);
            }, 10);
        }
    });
}

var url = window.location.href;
var values = url.split('?');
runDisplay(1);

Node.js net tcp buffering memory leak

I'm writing a TCP game server in Node.js and am having issues splitting the TCP stream into messages. Since I want to read numbers and floats from the buffer, I cannot find a suitable module to outsource this to, as all the ones I've found deal with simple strings terminated by a newline delimiter. I decided to prefix each message with its length in bytes.
I wrote a simple program to spam the server with random, well-formed messages (each with a UInt16LE prefix giving the length of the message). I noticed that the longer I leave the programs running, the more memory my server uses. I tried tracing the memory allocation with a debugging tool, with no success, so I figured I'd post my code here and hope for a reply.
So here is my code... any tips or pointers as to where I'm going wrong, or what I could do differently or more efficiently, would be amazing!
Thanks.
server.on("connection", function(socket) {
var session = new sessionCS(socket);
console.log("Connection from " + session.address);
// data buffering variables
var currentBuffer = new Buffer(args.bufSize);
var bufWrite = 0;
var bufRead = 0;
var mSize = null;
var i = 0;
socket.on("data", function(dataBuffer) {
// check if buffer risk of overflow
if (bufWrite + dataBuffer.length > args.bufSize-1) {
var newBufWrite = 0;
var newBuffer = new Buffer(args.bufSize);
while(bufRead < bufWrite) {
newBuffer[newBufWrite] = currentBuffer[bufRead];
newBufWrite++;
bufRead++;
}
currentBuffer = newBuffer;
bufWrite = newBufWrite;
bufRead = 0;
newBufWrite = null;
}
// appending buffer
for (i=0; i<dataBuffer.length; i++) {
currentBuffer[bufWrite] = dataBuffer[i];
bufWrite ++;
}
// if beginning of message not acknowleged
if (mSize === null && (bufWrite - bufRead) >= 2) {
mSize = currentBuffer.readUInt16LE(bufRead);
}
// if difference between read and write is greater or equal to message mSize + 2
// +2 for the integer holding the message size
// this means that a full message is in the buffer and needs to be extracted
while ((bufWrite - bufRead) >= mSize+2) {
bufRead += 2;
var messageBuffer = new Buffer(mSize);
for(i=0; i<messageBuffer.length; i++) {
messageBuffer[i] = currentBuffer[bufRead];
bufRead++;
}
// this is where the message buffer would be passed to the router
router(session, messageBuffer);
messageBuffer = null;
// seeinf if another message length indicator is in the buffer
if ((bufWrite - bufRead) >= 2) {
mSize = currentBuffer.readUInt16LE(bufRead);
}
else {
mSize = null;
}
}
});
}
Buffer Frame Serialization Protocol (BUFSP) https://github.com/teambition/bufsp
It may be what you want: it encodes messages into buffers, writes them to TCP, then receives and splits the TCP stream back into messages.
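For comparison, a minimal sketch of the same 2-byte length-prefix framing done with Buffer.concat and slice instead of hand-copying bytes into a fixed-size buffer. This is an alternative sketch, not the BUFSP API, and it assumes a Node version that provides Buffer.alloc; sessionCS and router are the same names used in the question.
server.on("connection", function (socket) {
    var session = new sessionCS(socket);   // as in the original code
    var pending = Buffer.alloc(0);         // accumulated, not-yet-framed bytes

    socket.on("data", function (chunk) {
        pending = Buffer.concat([pending, chunk]);
        // peel off every complete [2-byte LE length][payload] frame
        while (pending.length >= 2) {
            var mSize = pending.readUInt16LE(0);
            if (pending.length < mSize + 2) break;   // incomplete frame, wait for more data
            router(session, pending.slice(2, mSize + 2));
            pending = pending.slice(mSize + 2);      // drop the consumed bytes
        }
    });
});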

Firefox UI becomes unresponsive while downloading many files with Addon SDK API

I have a problem that is rather hard to debug: I need to download a lot (~400) of rather small (~3-4 MB) files in the background using the Firefox Add-on SDK API.
I tried using the old API (nsIWebBrowserPersist) as well as the new API (Downloads.jsm) (shortened code):
Task.spawn(function () {
    for (var i = 0; i < documents.length; i++) {
        var url = ...;
        var file = ...;
        let download = yield Downloads.createDownload({
            source: url,
            target: file,
        });
        yield download.start();
        yield download.finalize();
    }
});
But the UI gets extremely unresponsive after some time. I tried reusing and overwriting the same file, because my first guess was Windows file handles accumulating over time, but it didn't help. It does not seem to be related to system performance either; sometimes it works, and on the same machine five minutes later it fails.
Is there a known issue with downloading a lot of files using the Firefox SDK API, or am I doing something wrong?
I found that by using an alternative API the download became faster and the UI more responsive:
function downloadFromUrl(url, file, callback) {
    var channel = chrome.Cc["@mozilla.org/network/io-service;1"]
        .getService(chrome.Ci.nsIIOService)
        .newChannel(url, 0, null);
    var bstream = chrome.Cc["@mozilla.org/binaryinputstream;1"]
        .createInstance(chrome.Ci.nsIBinaryInputStream);
    bstream.setInputStream(channel.open());
    var fos = chrome.Cc["@mozilla.org/network/safe-file-output-stream;1"]
        .createInstance(chrome.Ci.nsIFileOutputStream);
    try {
        fos.init(file, 0x04 | 0x08 | 0x10 | 0x20 | 0x40, 0600, 0); // see: https://developer.mozilla.org/en-US/docs/PR_Open#Parameters
        var length = 0;
        var size = 0;
        while (size = bstream.available()) {
            fos.write(bstream.readBytes(size), size);
            length += size;
            callback(length);
        }
    } finally {
        if (fos instanceof chrome.Ci.nsISafeOutputStream) {
            fos.finish();
        } else {
            fos.close();
        }
    }
}
I know that this is a rather primitive API, but it works much better than the alternatives.
Edit:
I improved the above function, but it may be too bloated; here it is anyway:
/**
 * Downloads from a given url to a local file
 * @param url url to download
 * @param file local file
 * @param callback called during the download, signature: callback(currentBytes)
 * @returns downloadResult {contentType, error: false | ExceptionObject}
 */
function downloadFromUrl(url, file, callback) {
    let result = {
        contentType: null,
        error: false
    };
    try {
        let channel = chrome.Cc["@mozilla.org/network/io-service;1"]
            .getService(chrome.Ci.nsIIOService)
            .newChannel(url, 0, null);
        let bstream = chrome.Cc["@mozilla.org/binaryinputstream;1"]
            .createInstance(chrome.Ci.nsIBinaryInputStream);
        bstream.setInputStream(channel.open());
        let fos = chrome.Cc["@mozilla.org/network/safe-file-output-stream;1"]
            .createInstance(chrome.Ci.nsIFileOutputStream);
        try {
            // const values from https://developer.mozilla.org/en-US/docs/PR_Open#Parameters
            const PR_RDWR = 0x04;        // Open for reading and writing.
            const PR_CREATE_FILE = 0x08; // If the file does not exist, the file is created. If the file exists, this flag has no effect.
            const PR_APPEND = 0x10;      // The file pointer is set to the end of the file prior to each write.
            const PR_TRUNCATE = 0x20;    // If the file exists, its length is truncated to 0.
            const PR_SYNC = 0x40;        // If set, each write will wait for both the file data and file status to be physically updated.
            fos.init(file, PR_RDWR | PR_CREATE_FILE | PR_APPEND | PR_TRUNCATE | PR_SYNC, 0600, 0);
            let length = 0;
            let size = bstream.available();
            while (size) {
                fos.write(bstream.readBytes(size), size);
                length += size;
                callback(length);
                size = bstream.available();
            }
            fos.flush();
            result.contentType = channel.contentType;
        } finally {
            if (fos instanceof chrome.Ci.nsISafeOutputStream) {
                fos.finish();
            } else {
                fos.close();
            }
        }
    } catch (e) {
        result.error = e;
    }
    return result;
}
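A hypothetical call site, assuming the Add-on SDK's chrome module and FileUtils.jsm are available; the URL and file name are placeholders.
// Hypothetical usage; URL and file name are placeholders.
const { Cu } = require("chrome");
const { FileUtils } = Cu.import("resource://gre/modules/FileUtils.jsm", {});

var target = FileUtils.getFile("TmpD", ["example.pdf"]);   // file in the temp directory
var result = downloadFromUrl("https://example.com/example.pdf", target, function (bytes) {
    console.log("downloaded " + bytes + " bytes so far");
});
if (result.error) {
    console.error("download failed: " + result.error);
} else {
    console.log("content type: " + result.contentType);
}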
