PeerJS/WebRTC connection fails on rapid chunk transmission - javascript

I'm using PeerJS, but I suspect this problem applies to WebRTC in general; I hope you can help me out:
I'm trying to write simple peer-to-peer file sharing. I'm using serialization: "none" for the PeerJS connection's DataChannel, as I'm sending nothing but pure ArrayBuffers.
Everything is fine with files around 10 MB, but I have problems sending bigger files (30+ MB). For example, after sending around the first 10-20 chunks of a 900 MB zip file, the connection between peers starts throwing Connection is not open. You should listen for the "open" event before sending messages. (on the sender side)
My setup:
A file is dragged to the drag&drop area; the sender uses FileReader to read it as ArrayBuffers in chunks of 64 * 1024 bytes (no difference with 16 * 1024), and as soon as each chunk is read, it's sent via peer.send(ChunkArrayBuffer).
The receiver creates a blob from each received chunk; after the transmission finishes, it creates a complete blob out of those and gives the user a link to it.
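The receiving side looks roughly like this (a simplified sketch rather than my exact code; the completion signal and the finishDownload name are illustrative):
// conn is the incoming DataConnection from peer.on("connection", ...)
var chunks = [];
conn.on("data", function(chunk) {
    chunks.push(chunk); // each chunk arrives as an ArrayBuffer
});

// once the sender signals that the transfer is finished:
function finishDownload(name, mime) {
    var blob = new Blob(chunks, {type: mime});
    var link = document.createElement("a");
    link.href = URL.createObjectURL(blob);
    link.download = name;
    link.textContent = "Download " + name;
    document.body.appendChild(link);
}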
My peer connection settings:
var con = peer.connect(peerid, {
    label: "file",
    reliable: true,
    serialization: "none"
})
My sending function:
function sliceandsend(file, sendfunction) {
    var fileSize = file.size;
    var name = file.name;
    var mime = file.type;
    var chunkSize = 64 * 1024; // bytes
    var offset = 0;

    function readchunk() {
        var r = new FileReader();
        var blob = file.slice(offset, chunkSize + offset);
        r.onload = function(evt) {
            if (!evt.target.error) {
                offset += chunkSize;
                console.log("sending: " + (offset / fileSize) * 100 + "%");
                if (offset >= fileSize) {
                    con.send(evt.target.result); // final chunk
                    console.log("Done reading file " + name + " " + mime);
                    return;
                } else {
                    con.send(evt.target.result);
                }
            } else {
                console.log("Read error: " + evt.target.error);
                return;
            }
            readchunk();
        };
        r.readAsArrayBuffer(blob);
    }

    readchunk();
}
Any ideas what can cause this?
Update: Setting a 50 ms timeout between chunk transmissions helped a bit; loading of the 900 MB file reached 6% (instead of 1-2% previously) before the errors started. Maybe it's some kind of limit on simultaneous operations through the datachannel, or I'm overflowing some kind of datachannel buffer?
Update 1: Here's my PeerJS connection object with the DataChannel object inside it (screenshot omitted).

Good news, everyone!
It was a DataChannel buffer overflow problem; thanks go to this article: http://viblast.com/blog/2015/2/25/webrtc-bufferedamount/
bufferedAmount is a property of the DataChannel (DC) object which, in the latest Chrome version, reports the amount of data (in bytes) currently sitting in the buffer. When it exceeds 16 MB, the DC is silently closed.
Therefore anyone who encounters this problem needs to implement a buffering mechanism at the application level that watches this property and holds back messages when needed. Also be aware that in versions of Chrome prior to 37, the same property reports the quantity (not the size) of buffered messages, and on top of that it's broken under Windows, where it always reports 0. However, with v < 37 the DC is not closed on overflow; an exception is thrown instead, which can also be caught to detect the overflow.
I made an edit in the unminified peer.js code for myself; here you can see both methods handled in one function (for more of the source code, look at https://github.com/peers/peerjs/blob/master/dist/peer.js#L217):
DataConnection.prototype._trySend = function(msg) {
    var self = this;
    function buffering() {
        self._buffering = true;
        setTimeout(function() {
            // Try again.
            self._buffering = false;
            self._tryBuffer();
        }, 100);
        return false;
    }
    if (self._dc.bufferedAmount > 15728640) {
        return buffering(); // custom buffering if > 15 MB is buffered in the DC
    } else {
        try {
            this._dc.send(msg);
        } catch (e) {
            return buffering(); // custom buffering if a DC exception is caught
        }
        return true;
    }
}
Also opened an issue on PeerJS GitHub: https://github.com/peers/peerjs/issues/291

Have a look at Transfer a file.
That page shows how to transfer a file via WebRTC datachannels.
To accomplish this in an interoperable way, the file is split into chunks which are then transferred via the datachannel. The datachannel is reliable and ordered by default, which is well suited to file transfers.
Although it doesn't use PeerJS, it can be adapted to use PeerJS, and the code is easy to follow and works without any issues.
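For the sending side in particular, newer browsers expose a bufferedAmountLowThreshold property and a bufferedamountlow event on RTCDataChannel, so the backpressure can be event-driven instead of polled with setTimeout. A rough sketch (the threshold values and the sendFileInChunks name are my own; with PeerJS you would grab the underlying channel, i.e. the _dc seen in the patch above):
function sendFileInChunks(dataChannel, file, chunkSize) {
    var offset = 0;
    // fire 'bufferedamountlow' once the queue drains below 1 MB (arbitrary)
    dataChannel.bufferedAmountLowThreshold = 1024 * 1024;

    function sendNext() {
        var reader = new FileReader();
        reader.onload = function(evt) {
            dataChannel.send(evt.target.result);
            offset += chunkSize;
            if (offset >= file.size) return; // done
            if (dataChannel.bufferedAmount > 8 * 1024 * 1024) {
                // too much queued: wait for the buffer to drain, then continue
                dataChannel.onbufferedamountlow = function() {
                    dataChannel.onbufferedamountlow = null;
                    sendNext();
                };
            } else {
                sendNext();
            }
        };
        reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
    }
    sendNext();
}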

Related

Reading files from CD using HTML5, webkitdirectory takes more time compared to reading local files

I have a web application which allows the user to upload DICOM and non-DICOM files to their account. I am using JavaScript, HTML5, webkitdirectory, Chrome, and Datatable to populate the selected files on the UI. The issue I am facing is this:
While selecting files from the local machine, the following code works pretty fast and the selected files are populated immediately on the UI, but when selecting the same number of files from a CD it takes much longer to render. Here is an example:
For a CD with 20 DICOM + 2 non-DICOM studies, about 2241 images in total, it takes about 5-6 minutes to populate the list on the UI the first time. If I select the same CD folder again, the list populates in roughly 60 seconds, provided it has been populated once before during the same session.
But if I use the same set of files from the local machine, it takes roughly 6-7 seconds to populate the UI.
Here is my code, which is executed for each and every DICOM file:
var fileReader = new FileReader();
fileReader.onload = function(evt) {
    console.log("Completed Reading");
    var arrayBuffer = fileReader.result;
    var byteArray = new Uint8Array(arrayBuffer);
    _parseDicom(byteArray);
    try {
        if (fileReader.readyState !== 2) {
            fileReader.abort();
        }
    } catch (err) {
        console.log('error occurred: ' + err);
    }
};
var blob = f.slice(0, 50000);
console.log("Starting to Read");
fileReader.readAsArrayBuffer(blob);
After analyzing the issue, here is what I came up with:
My basic guess is that the OS takes time to mount the CD, since it is an external drive. It takes less time on second access because the CD content is already mounted.
The time between "Starting to Read" and "Completed Reading" is considerably longer when reading files from the CD than from the local machine.
I also tried looking for the DICOMDIR file, which is an index of all study files contained on the disc and is included for exactly this reason: to avoid lengthy scans of the disc. But I didn't find any standard or way to parse the DICOMDIR file in JavaScript.
Is there any way to reduce the amount of time it takes to read files from a CD?
UPDATE:
I am able to get the DICOMDIR file structure into JavaScript now, using dicomParser:
https://github.com/chafey/dicomParser
var fr = new FileReader();
fr.onload = function(evt) {
    var byteArray = new Uint8Array(fr.result);
    try {
        var dataSet = dicomParser.parseDicom(byteArray);
        _searchDicom(dataSet, f);
    } catch (err) {
        if (typeof err.dataSet != 'undefined') {
            _searchDicom(err.dataSet, f);
        }
    }
};
var blob = f.slice(0, 1000000);
fr.readAsArrayBuffer(blob);
function _searchDicom(dataset, f) {
    var data = dataset.elements.x00041220.items;
    if (data) {
        data.forEach(function(e) {
            if (e.dataSet.string('x00041430') === 'PATIENT') {
                console.log("Patient Name - " + e.dataSet.string('x00100010'));
            }
            else if (e.dataSet.string('x00041430') === 'STUDY') {}
            else if (e.dataSet.string('x00041430') === 'SERIES') {}
            else if (e.dataSet.string('x00041430') === 'IMAGE') {}
        });
    }
}
The structure of the object is similar to what is displayed at https://rawgit.com/chafey/dicomParser/master/examples/dumpWithDataDictionary/index.html when you upload a DICOMDIR file.
The problem here is that I am not able to collect all patients, studies, series, or images at once. The only solution I have found is to iterate and check whether each item is a patient, study, series, or image record.
Is there any method/standard to retrieve them in a better way?
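One way to organize that iterate-and-check pass, sketched on top of the same dicomParser tags used above (the helper name is mine), is to bucket the directory records by type in a single traversal:
// Group DICOMDIR directory records (tag x00041220) by their record type
// (tag x00041430) in one pass, instead of checking types ad hoc.
function _collectDicomdirRecords(dataset) {
    var buckets = { PATIENT: [], STUDY: [], SERIES: [], IMAGE: [] };
    var items = dataset.elements.x00041220.items;
    if (items) {
        items.forEach(function(e) {
            var type = e.dataSet.string('x00041430');
            if (buckets[type]) {
                buckets[type].push(e.dataSet);
            }
        });
    }
    return buckets;
}

// usage: var records = _collectDicomdirRecords(dataSet);
// records.PATIENT.forEach(function(p) { console.log(p.string('x00100010')); });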

Browser freezes (OOM?) when computing a large file MD5 using JS Crypto library

I am using Webpack, compiling a bundled JS file.
The problem
I have a Worker that I am offloading the hashing work to. I pass a file and its size to it. I previously did not use a Worker; however, when Chrome reacted badly to hashing a large file, I assumed the main thread was being blocked by the hashing mechanism. This could be a false assumption.
The code works correctly for small files. However, for large files, once it reaches the part where the final hash is generated, Chrome shows an error (screenshot omitted).
Firefox is a bit more helpful and shows this message:
Error: Uncaught, unspecified "error" event. (out of memory)
However, the piping of data should alleviate this issue; fileReaderStream reads data in chunks of 1 MB.
The code
import Crypto from 'crypto'
import fileReaderStream from 'filereader-stream'
import concat from 'concat-stream'
var progress = require('progress-stream');
self.onmessage = (event) => {
switch (event.data.topic) {
case 'hash': {
var file = event.data.file;
var filesize = event.data.filesize;
let p1 = progress({
length: filesize,
time: 100 /* ms */
});
let p2 = progress({
length: filesize,
time: 100 /* ms */
});
p1.on('progress', function(progress) {
console.log('p1', progress);
});
p2.on('progress', function(progress) {
console.log('p2', progress);
});
let md5 = Crypto.createHash('md5');
console.log("START HASH");
var reader = fileReaderStream(file);
reader.pipe(p1).pipe(md5).pipe(p2).pipe(concat((data) => {
console.log("DONE HASH");
console.log(data);
}));
break;
}
}
}
Small file example (5,248 KB)
Large file example (643 MB)
Additional Information
Screenshot of memory usage. It takes up 3 GB in a few seconds.
So it could be worth using a different library, if this one is poorly implemented with regard to memory management.
This JavaScript library is implemented by Stanford: https://bitwiseshiftleft.github.io/sjcl/
You may also want to consider using a more secure hashing algorithm than MD5, due to its vulnerability to collisions via the birthday attack.
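Independent of the library choice, the 3 GB spike suggests chunks are accumulating somewhere in the pipeline rather than being consumed one at a time. A minimal sketch of a more direct approach, assuming the same browserified crypto module as in the question (the helper name and the 1 MB chunk size are my own; if the bundled hash doesn't accept typed arrays, wrap the chunk in a Buffer): feed each chunk to update() and take the digest once at the end, so memory use stays bounded by the chunk size.
import Crypto from 'crypto'

function hashFileIncrementally(file, done) {
    var CHUNK_SIZE = 1024 * 1024; // 1 MB per read
    var md5 = Crypto.createHash('md5');
    var offset = 0;

    function readNext() {
        var reader = new FileReader();
        reader.onload = function(evt) {
            // only the hash state and the current chunk live in memory
            md5.update(new Uint8Array(evt.target.result));
            offset += CHUNK_SIZE;
            if (offset < file.size) {
                readNext();
            } else {
                done(md5.digest('hex'));
            }
        };
        reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK_SIZE));
    }
    readNext();
}

// usage in the worker: hashFileIncrementally(file, hex => postMessage(hex))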

Large blob file in Javascript

I have an XHR object that downloads 1GB file.
function getFile(callback) {
    var xhr = new XMLHttpRequest();
    xhr.onload = function() {
        if (xhr.status == 200) {
            callback.apply(xhr);
        } else {
            console.log("Request error: " + xhr.statusText);
        }
    };
    xhr.open('GET', 'download', true);
    xhr.onprogress = updateProgress;
    xhr.responseType = "arraybuffer";
    xhr.send();
}
But the File API can't load all that into memory, even from a worker; it throws out of memory...
btn.addEventListener('click', function() {
    getFile(function() {
        var worker = new Worker("js/saving.worker.js");
        worker.onmessage = function(e) {
            saveAs(e.data); // FileSaver.js creates a URL from the blob... but it's too large
        };
        worker.postMessage(this.response);
    });
});
Web Worker
onmessage = function(e) {
    var view = new DataView(e.data, 0);
    var file = new File([view], 'file.zip', {type: "application/zip"});
    postMessage(file); // send the File object itself back, not the string 'file'
};
I'm not trying to compress the file; it is already compressed by the server.
I thought of storing it first in IndexedDB, but I'd have to load a blob or file anyway; even if I request it by byte ranges, sooner or later I will have to build this giant blob...
I want to create a blob: URL and hand it to the user after it has been downloaded by the browser.
I'll use the FileSystem API for Google Chrome, but I want to make something for Firefox too. I looked into the FileHandle API but found nothing...
Do I have to build an extension for Firefox in order to do the same thing the FileSystem API does for Google Chrome?
Ubuntu 32 bits
Loading 1 GB+ with AJAX isn't convenient just for monitoring download progress and filling up the memory.
Instead, I would just send the file with a Content-Disposition header to save the file.
There are, however, ways to work around that in order to monitor the progress. Option one is to have a second websocket that signals how much you have downloaded while you are downloading normally with a GET request; the other option will be described later at the bottom.
I know you talked about using Blink's sandboxed filesystem in the conversation, but it has some drawbacks. It may need permission if using persistent storage. It only allows 20% of the available disk space that is left. And if Chrome needs to free some space, it will throw away other domains' temporary storage, starting with the least recently used. Besides, it doesn't work in private mode.
Not to mention that support for it has been dropping and it may never end up in other browsers, though they will most likely not remove it, since many sites still depend on it.
The only way to process this large file is with streams. That is why I created StreamSaver. This only works in Blink (Chrome and Opera) at the moment, but it will eventually be supported by other browsers, with the WHATWG spec to back it up as a standard.
fetch(url).then(res => {
    // One idea is to get the filename from the Content-Disposition header...
    const size = ~~res.headers.get('Content-Length')
    const fileStream = streamSaver.createWriteStream('filename.zip', size)
    const writeStream = fileStream.getWriter()
    // Later you will be able to just simply do
    // res.body.pipeTo(fileStream)
    // instead of pumping
    const reader = res.body.getReader()
    const pump = () => reader.read()
        .then(({ value, done }) => {
            // here you know how large the value (chunk) is and you can
            // figure out the download speed/progress when comparing it to the size
            return done
                ? writeStream.close()
                : writeStream.write(value).then(pump)
        })
    // Start the reader
    pump().then(() =>
        console.log('Closed the stream, Done writing')
    )
})
This will not take up any memory.
I also have a theory: if you split the file into chunks and store them in IndexedDB, then later merge them together, it should work, since a blob isn't made of data; it's more like a pointer to where a file can be read from.
Meaning that if you store the chunks in IndexedDB and then do something like this (using FileSaver or an alternative):
finalBlob = new Blob([blob_A_fromDB, blob_B_fromDB])
saveAs(finalBlob, 'filename.zip')
But I can't confirm this, since I haven't tested it; it would be good if someone else could.
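A rough sketch of that idea for anyone who wants to test it (the database and store names are made up, and getAll needs a reasonably recent browser):
// Untested sketch: store each downloaded chunk as a Blob in IndexedDB,
// then merge the stored Blobs into a single one at the end.
var open = indexedDB.open('downloads', 1)
open.onupgradeneeded = function() {
    open.result.createObjectStore('chunks', { autoIncrement: true })
}
open.onsuccess = function() {
    var db = open.result

    // call once per downloaded chunk
    function storeChunk(arrayBuffer) {
        db.transaction('chunks', 'readwrite')
            .objectStore('chunks')
            .add(new Blob([arrayBuffer]))
    }

    // call when the download is complete
    function mergeAndSave() {
        var req = db.transaction('chunks').objectStore('chunks').getAll()
        req.onsuccess = function() {
            saveAs(new Blob(req.result, { type: 'application/zip' }), 'filename.zip')
        }
    }
}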
Blobs are cool until you want to download a large file; there is a 600 MB limit (in Chrome) on blobs, since Chrome stores everything in memory.

How to pass a blob from a Chrome extension to a Chrome app

A Little Background
I've been working for a couple of days on a Chrome extension that takes a screenshot of given web pages multiple times a day. I used this as a guide and things work as expected.
There's one minor requirement extensions can't meet, though: the user must have access to the folder where the images (screenshots) are saved, but Chrome extensions don't have access to the file system. Chrome apps, on the other hand, do. Thus, after much looking around, I've concluded that I must create both a Chrome extension and a Chrome app. The idea is that the extension creates a blob of the screenshot and then sends that blob to the app, which saves it as an image to a user-specified location. And that's exactly what I'm doing: I'm creating a blob of the screenshot on the extension side and then sending it over to the app, where the user is asked to choose where to save the image.
The Problem
Up to the saving part, everything works as expected. The blob is created on the extension side, sent over to the app, received by the app, the user is asked where to save, and the image is saved... THAT is where things fall apart. The resulting image is unusable. When I try to open it, I get a message saying "Can't determine type". Below is the code I'm using.
First, ON THE EXTENSION side, I create a blob and send it over, like this:
chrome.runtime.sendMessage(
    APP_ID, /* I got this from the app */
    {incomingBlob: blob}, /* blob created previously; it's correct */
    function(response) {
        appendLog("response: " + JSON.stringify(response));
    }
);
Then, ON THE APP side, I receive the blob and attempt to save it like this:
// listen for external messages
chrome.runtime.onMessageExternal.addListener(
    function(request, sender, sendResponse) {
        if (sender.id in blacklistedIds) {
            sendResponse({"result": "sorry, could not process your message"});
            return; // don't allow this extension access
        } else if (request.incomingBlob) {
            appendLog("from " + sender.id + ": " + request.incomingBlob);
            // attempt to save blob to chosen location
            if (_folderEntry == null) {
                // get a directory to save in if not yet chosen
                openDirectory();
            }
            saveBlobToFile(request.incomingBlob, "screenshot.png");
            /*
            // inspect object to try to see what's wrong
            var keys = Object.keys(request.incomingBlob);
            var keyString = "";
            for (var key in keys) {
                keyString += " " + key;
            }
            appendLog("Blob object keys:" + keyString);
            */
            sendResponse({"result": "Ok, got your message"});
        } else {
            sendResponse({"result": "Oops, I don't understand this message"});
        }
    }
);
Here's the function ON THE APP that performs the actual save:
function saveBlobToFile(blob, fileName) {
    appendLog('entering saveBlobToFile function...');
    chrome.fileSystem.getWritableEntry(_folderEntry, function(entry) {
        entry.getFile(fileName, {create: true}, function(entry) {
            entry.createWriter(function(writer) {
                //writer.onwrite = function() {
                //    writer.onwrite = null;
                //    writer.truncate(writer.position);
                //};
                appendLog('calling writer.write...');
                writer.write(blob);
                // Also tried writer.write(new Blob([blob], {type: 'image/png'}));
            });
        });
    });
}
There are no errors, no hiccups. The code runs, but the image is useless. What exactly am I missing? Where am I going wrong? Is it that we can only pass strings between extensions and apps? Is the blob getting corrupted on the way? Does my app not have access to the blob because it was created on the extension? Can anyone please shed some light?
UPDATE (9/23/14)
Sorry for the late update, but I was assigned to a different project and could not get back to this until 2 days ago.
So after much looking around, I've decided to go with Daniel Herr's suggestion of using a SharedWorker and a page embedded in a frame in the app. The idea is that the extension supplies the blob to the SharedWorker, which forwards it to a page from the extension that is embedded in a frame in the app. That page then forwards the blob to the app using parent.postMessage(...). It's a bit cumbersome, but it seems it's the only option I have.
Let me post some code so that it makes a bit more sense:
Extension:
var worker = new SharedWorker(chrome.runtime.getURL('shared-worker.js'));
worker.port.start();
worker.port.postMessage('hello from extension'); // Can send a blob here too
worker.port.addEventListener("message", function(event) {
    $('h1Title').innerHTML = event.data;
});
proxy.js
var worker = new SharedWorker(chrome.runtime.getURL('shared-worker.js'));
worker.port.start();
worker.port.addEventListener("message",
    function(event) {
        parent.postMessage(event.data, 'chrome-extension://[extension id]');
    }
);
proxy.html
<script src='proxy.js'></script>
shared-worker.js
var ports = [];
var count = 0;
onconnect = function(event) {
    count++;
    var port = event.ports[0];
    ports.push(port);
    port.start();
    /*
    On both the extension and the app, I get count = 1 and ports.length = 1.
    I'm running them side by side. This is so maddening!!!
    What am I missing?
    */
    var msg = 'Hi, you are connection #' + count + ". ";
    msg += " There are " + ports.length + " ports open so far.";
    port.postMessage(msg);
    port.addEventListener("message",
        function(event) {
            for (var i = 0; i < ports.length; ++i) {
                //if (ports[i] != port) {
                ports[i].postMessage(event.data);
                //}
            }
        });
};
On the app
context.addEventListener("message",
    function(event) {
        appendLog("message from proxy: " + event.data);
    }
);
So this is the execution flow... On the extension, I create a shared worker and send a message to it. The shared worker should be capable of receiving a blob, but for testing purposes I'm only sending a simple string.
Next, the shared worker receives the message and forwards it to everyone who has connected. The proxy.html/js, which is inside a frame in the app, has indeed connected at this point and should receive anything forwarded by the shared worker.
Next, proxy.js should receive the message from the shared worker and send it to the app using parent.postMessage(...). The app is listening via window.addEventListener("message", ...).
To test this flow, I first open the app, then I click the extension button. I get no message on the app. I get no errors either.
The extension can communicate back and forth with the shared worker just fine. The app can communicate with the shared worker just fine. However, the message I send from the extension -> proxy -> app does not reach the app. What am I missing?
Sorry for the long post guys, but I'm hoping someone will shed some light as this is driving me insane.
Thanks
Thanks for all your help, guys. I found the solution to be to convert the blob into a binary string in the extension and then send the string over to the app using Chrome's message passing API. On the app side, I then did what Francois suggested to convert the binary string back into a blob. I had tried this solution before, but it had not worked because I was using the following code on the app:
blob = new Blob([blobAsBinString], {type: mimeType});
That code may work for text files or simple strings, but it fails for images (perhaps due to character encoding issues). That's where I was going insane. The solution is to use what Francois provided from the beginning:
var bytes = new Uint8Array(blobAsBinString.length);
for (var i = 0; i < bytes.length; i++) {
    bytes[i] = blobAsBinString.charCodeAt(i);
}
blob = new Blob([bytes], {type: mimeString});
That code retains the integrity of the binary string, and the blob is recreated properly on the app side.
Now, I also incorporated something suggested by some of you here and by RobW elsewhere, which is to split the blob into chunks and send it over like that, in case the blob is too large. The entire solution is below:
ON THE EXTENSION:
function sendBlobToApp() {
    // read the blob in chunks and send it to the app
    // Note: I crashed the app using 1 KB chunks. 1 MB chunks work just fine.
    // I decided to use 256 KB as that seems neither too big nor too small.
    var CHUNK_SIZE = 256 * 1024;
    var start = 0;
    var stop = CHUNK_SIZE;
    var remainder = blob.size % CHUNK_SIZE;
    var chunks = Math.floor(blob.size / CHUNK_SIZE);
    var chunkIndex = 0;
    if (remainder != 0) chunks = chunks + 1;

    var fr = new FileReader();
    fr.onload = function() {
        var message = {
            blobAsText: fr.result,
            mimeString: mimeString,
            chunks: chunks
        };
        // APP_ID was obtained elsewhere
        chrome.runtime.sendMessage(APP_ID, message, function(result) {
            if (chrome.runtime.lastError) {
                // Handle error, e.g. app not installed
                // appendLog is defined elsewhere
                appendLog("could not send message to app");
            }
        });
        // read the next chunk of bytes
        processChunk();
    };
    fr.onerror = function() { appendLog("An error occurred while reading file"); };

    processChunk();

    function processChunk() {
        chunkIndex++;
        // exit if there are no more chunks
        if (chunkIndex > chunks) {
            return;
        }
        if (chunkIndex == chunks && remainder != 0) {
            stop = start + remainder;
        }
        var blobChunk = blob.slice(start, stop);
        // prepare for next chunk
        start = stop;
        stop = stop + CHUNK_SIZE;
        // read the chunk as a binary string
        fr.readAsBinaryString(blobChunk);
    }
}
ON THE APP
chrome.runtime.onMessageExternal.addListener(
    function(request, sender, sendResponse) {
        if (sender.id in blacklistedIds) {
            return; // don't allow this extension access
        } else if (request.blobAsText) {
            // new chunk received
            _chunkIndex++;
            var bytes = new Uint8Array(request.blobAsText.length);
            for (var i = 0; i < bytes.length; i++) {
                bytes[i] = request.blobAsText.charCodeAt(i);
            }
            // store blob
            _blobs[_chunkIndex - 1] = new Blob([bytes], {type: request.mimeString});
            if (_chunkIndex == request.chunks) {
                // merge all blob chunks
                var mergedBlob;
                for (var j = 0; j < _blobs.length; j++) {
                    if (j > 0) {
                        // append blob
                        mergedBlob = new Blob([mergedBlob, _blobs[j]], {type: request.mimeString});
                    } else {
                        mergedBlob = new Blob([_blobs[j]], {type: request.mimeString});
                    }
                }
                saveBlobToFile(mergedBlob, "myImage.png", request.mimeString);
            }
        }
    }
);
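A side note on the merge step: since the Blob constructor accepts an array of blob parts, the loop above can most likely be collapsed into a single call (untested simplification):
// merge all stored chunk blobs in one constructor call
var mergedBlob = new Blob(_blobs, {type: request.mimeString});
saveBlobToFile(mergedBlob, "myImage.png", request.mimeString);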
Does my app not have access to the blob because it was created on the extension? Can anyone please shed some light?
Exactly! You may want to pass a dataUrl instead of a blob. Something like the code below could work:
/* Chrome Extension */
var blobToDataURL = function(blob, cb) {
    var reader = new FileReader();
    reader.onload = function() {
        // pass the full data URL; the app splits off the base64 part itself
        cb(reader.result);
    };
    reader.readAsDataURL(blob);
};

blobToDataURL(blob, function(dataUrl) {
    chrome.runtime.sendMessage(APP_ID, {dataUrl: dataUrl}, function() {});
});

/* Chrome App */
function dataURLtoBlob(dataURL) {
    var byteString = atob(dataURL.split(',')[1]),
        mimeString = dataURL.split(',')[0].split(':')[1].split(';')[0];
    var ab = new ArrayBuffer(byteString.length);
    var ia = new Uint8Array(ab);
    for (var i = 0; i < byteString.length; i++) {
        ia[i] = byteString.charCodeAt(i);
    }
    var blob = new Blob([ia], {type: mimeString});
    return blob;
}

chrome.runtime.onMessageExternal.addListener(
    function(request) {
        var blob = dataURLtoBlob(request.dataUrl);
        saveBlobToFile(blob, "screenshot.png");
    });
I am extremely interested in this question, as I am trying to accomplish something similar.
These are the questions I have found to be related:
How can a Chrome extension save many files to a user-specified directory?
Implement cross extension message passing in chrome extension and app
Does chrome.runtime support posting messages with transferable objects?
Pass File object to background.js from content script or pass createObjectURL (and keep alive after refresh)
According to Rob W, in the first link:
"Chrome's fileSystem (app) API can directly write to the user's filesystem (e.g. ~/Documents or %USERPROFILE%\Documents), specified by the user."
If you can write to a user's filesystem, you should be able to read from it, right?
I haven't had the opportunity to try this out, but instead of directly passing the file blob to the app, you could save the item to your downloads using the Chrome extension downloads API.
Then you could retrieve it with the Chrome app filesystem API to gain access to it.
Edit:
I keep reading that the filesystem this API can access is sandboxed, so I have no idea if this solution is possible. Being sandboxed and Rob W's description of "writing directly to the user's filesystem" sound like opposites to me.
Edit:
Rob W has revised his answer here: Implement cross extension message passing in chrome extension and app.
It no longer uses a shared worker, and passes file data as a string to the backend, which can turn the string back into a blob.
I'm not sure what the max length of a message is, but Rob W also mentions a solution for slicing up blobs to send them in pieces.
Edit:
I have sent 43 MB of data without crashing my app.
That's really an interesting question. From my point of view, it can be done using these techniques:
First of all, you should convert your blob to an ArrayBuffer. This can be done with FileReader, and it is an async operation.
Then comes some magic from the Encoding API, which is currently available in stable Chrome, to convert your ArrayBuffer into a string. This operation is sync.
Then you can communicate with other extensions/apps using the Chrome API like this. I am using this technique to promote one of my apps (a new packaged app) via another famous legacy app. And since legacy packaged apps are in fact extensions, I think everything will be okay.

Why is XMLHttpRequest so slow on localhost

We're developing an application that uses XMLHttpRequest to upload files with drag&drop support. We're using a jQuery plugin for that, but it's not the issue here.
Our tester has reported that uploading files on localhost takes a serious amount of time, considering he's basically sending files to his own machine through the browser: a 20 MB file took about 30 seconds to upload (!).
I was assigned to investigate the problem, and I found that the problematic thing is XMLHttpRequest. When I forced a fallback mechanism (iframe; it works but has no progress bar support), the same file our tester was uploading took less than a second.
I've written a simple testing script to see what the deal is (it's very quick and dirty, don't judge me):
// var file was leaked from our jQuery uploader; it's basically input.files[0] where input = <input type="file">
average = 0;
averages = [];
previous = 0;
previous_time = 0;

x = new XMLHttpRequest;
x.open("POST", "/something/accept_file?upload_param_name=file", true);
x.upload.onprogress = function(e) {
    now = e.loaded;
    now_time = Date.now();
    diff = now - previous;
    diff_time = now_time - previous_time;
    console.log("speed", (diff / diff_time));
    averages.push((diff / diff_time));
    previous = now;
    previous_time = now_time;
}
x.onreadystatechange = function() {
    if (x.readyState == 4) {
        for (i = 0, l = averages.length; i < l; i++) {
            average += averages[i];
        }
        console.log("AVG SPEED: ", average / averages.length);
    }
}
x.setRequestHeader("X-Requested-With", "XMLHttpRequest");
x.setRequestHeader("X-File-Name", "test");
x.setRequestHeader("Content-Type", "application/octet-stream");
x.send(file);
Average speed of uploading the file to the same server: 488.3 KB/s
Average speed of uploading the file to a remote server: 801.7 KB/s (which sounds about right, considering our office internet connection)
Now my question is: why is XMLHttpRequest so slow with binary files? It looks to me like it sends the file through our whole network so that it passes through our router again, but the Networking section in Task Manager didn't register any network activity spike (it did when uploading to the remote server, though). Or am I doing something wrong?
Edit: as I can see, any mention of the keywords "jQuery plugin" makes people think in the wrong terms, so:
x = new XMLHttpRequest;
x.open("POST", "/something/accept_file?upload_param_name=file", true);
x.send(file);
This is enough to trigger the problem (slow upload). No jQuery, no fancy callbacks and progress bars, no chunking: three lines of code.
