We're developing an application that uses XMLHttpRequest for uploading files with drag&drop support. We're using a jQuery plugin for that, but it's not the issue here.
Our tester has reported that uploading files on localhost takes a serious amount of time, considering he's basically sending files to his own machine through the browser. A 20 MB file took about 30 seconds to upload (!).
I was assigned to investigate the problem and found that the culprit is XMLHttpRequest. When I forced the fallback mechanism (an iframe; it works but has no progress bar support), the same file our tester was uploading took less than a second.
I've written a simple testing script to see what the deal is (it's very quick and dirty, don't judge me):
// var file was leaked from our jQuery uploader; it's basically input.files[0] where input = <input type="file">
average = 0;
averages = [];
previous = 0;
previous_time = 0;
x = new XMLHttpRequest();
x.open("POST", "/something/accept_file?upload_param_name=file", true);
x.upload.onprogress = function(e) {
now = e.loaded;
now_time = Date.now();
diff = now - previous;
diff_time = now_time - previous_time;
console.log("speed", (diff / diff_time));
averages.push((diff / diff_time));
previous = now;
previous_time = now_time;
}
x.onreadystatechange = function() {
if (x.readyState == 4) {
for(i=0, l=averages.length; i < l; i++) {
average += averages[i]
}
console.log("AVG SPEED: ", average/averages.length)
}
}
x.setRequestHeader("X-Requested-With", "XMLHttpRequest");
x.setRequestHeader("X-File-Name", "test");
x.setRequestHeader("Content-Type", "application/octet-stream");
x.send(file);
Average speed of uploading file to the same server: 488.3 KB/s
Average speed of uploading file to the remote server: 801.7 KB/s (which sounds about right considering our office internet connection)
Now my question is: why is XMLHttpRequest so slow with binary files? It looks to me like it sends the file through our whole network so that it passes through our router again, but the Networking section in Task Manager didn't register any network activity spike (it did when uploading to the remote server, though). Or am I doing something wrong?
edit: As I see it, any mention of the keywords "jQuery plugin" makes people think in the wrong terms, so:
x = new XMLHttpRequest();
x.open("POST", "/something/accept_file?upload_param_name=file", true);
x.send(file);
This is enough to trigger the problem (slow upload). No jQuery, no fancy callbacks and progress bars, no chunking - three lines of code.
My website is taking longer to load than is optimal because it requests a lot of small files, and those small files add up to a lot of requests. The ones I have the most control over are a number of data files which will always be loaded, but keeping them as separate files makes the process of generating them easier. Is there a way that I could make one HTTP request for a (tar?) file, and then process that efficiently with JavaScript? This is the function that I am using right now to read in the data files. What I would really like is a way to request one file that can be easily parsed. The file structure is very simple, just a collection of 4-byte floats in a repeating pattern. I suppose I could, and I might, combine them into a slightly more complex data structure, but if there's a way to just combine all of these files and read them in JavaScript, I would love to see it!
Also of some note are some small icon files; I have a dozen or so of those that I would love to treat the same way: combine them into a single file and just load that one.
function loadBinaryFloatArray(url, convertFunction,variable_name, onLoaded) {
var mRequest = new XMLHttpRequest();
mRequest.open('GET', url);
mRequest.responseType = 'arraybuffer';
mRequest.onreadystatechange = function () {
if (this.readyState === 4) {
// Get bytes
var buffer = this.response;
var dataview = new DataView(buffer);
// Create output array (the code reads 8-byte float64 values)
var mFloatArray = new Float64Array(buffer.byteLength / 8);
// Copy floats
for (var i = 0; i < mFloatArray.length; i++)
{
mFloatArray[i] = dataview.getFloat64(i * 8,true); // At every 8th byte
}
onLoaded(convertFunction(Array.prototype.slice.call(mFloatArray)),variable_name)
}
};
mRequest.send();
}
Well, the simplest thing for your icons would be to use a sprite map.
That being said, joining files on the server is normally not something you should do, because then you have to resend the whole huge request if it fails.
I also took a look at your website. For me it loads pretty fast. The major problem is that you keep requesting the same image (WhereIsRoadster.png) over and over again, which is probably what is slowing your website down. Or it might come down to your internet connection. Without more details, there is not much more I can tell you.
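If you want to handle the sprite map in JavaScript rather than CSS, you can slice the icons out of a single image with a canvas. A rough sketch, assuming hypothetical 32x32 icons laid out in one row of a sprites.png:

// Slice icon #n out of a single sprite image (assumes 32x32 icons in one row)
function loadIcon(spriteImg, n) {
    var canvas = document.createElement('canvas');
    canvas.width = 32;
    canvas.height = 32;
    var ctx = canvas.getContext('2d');
    // copy the 32x32 region starting at x = n * 32 into the canvas
    ctx.drawImage(spriteImg, n * 32, 0, 32, 32, 0, 0, 32, 32);
    return canvas;
}

var sprites = new Image();
sprites.onload = function () {
    document.body.appendChild(loadIcon(sprites, 2)); // third icon
};
sprites.src = 'sprites.png';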
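If you decide to concatenate the data files anyway, despite that caveat, parsing the combined buffer on the client is straightforward, since typed-array views can be created at byte offsets into one ArrayBuffer. A minimal sketch, assuming a hypothetical combined URL and a known list of per-file float counts (using the same 8-byte floats as the loader above):

// Fetch one combined file and split it back into per-file Float64Arrays.
// 'counts' is the number of floats in each original file, known in advance.
function loadCombinedFloatArrays(url, counts, onLoaded) {
    var request = new XMLHttpRequest();
    request.open('GET', url);
    request.responseType = 'arraybuffer';
    request.onload = function () {
        var buffer = request.response;
        var offset = 0;
        var arrays = counts.map(function (count) {
            // Float64Array view over this file's slice of the buffer
            var part = new Float64Array(buffer, offset, count);
            offset += count * 8; // advance by count floats * 8 bytes each
            return part;
        });
        onLoaded(arrays);
    };
    request.send();
}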
I have an XHR object that downloads a 1 GB file.
function getFile(callback)
{
var xhr = new XMLHttpRequest();
xhr.onload = function () {
if (xhr.status == 200) {
callback.apply(xhr);
}else{
console.log("Request error: " + xhr.statusText);
}
};
xhr.open('GET', 'download', true);
xhr.onprogress = updateProgress;
xhr.responseType = "arraybuffer";
xhr.send();
}
But the File API can't load all of that into memory, even from a worker; it throws out of memory...
btn.addEventListener('click', function() {
getFile(function() {
var worker = new Worker("js/saving.worker.js");
worker.onmessage = function(e) {
saveAs(e.data); // FileSaver.js it creates URL from blob... but its too large
};
worker.postMessage(this.response);
});
});
Web Worker
onmessage = function (e) {
var view = new DataView(e.data, 0);
var file = new File([view], 'file.zip', {type: "application/zip"});
postMessage(file); // post the File object itself, not the string 'file'
};
I'm not trying to compress the file; it is already compressed by the server.
I thought about storing it in IndexedDB first, but I'll have to load a blob or file anyway; even if I request byte ranges, sooner or later I will have to build this giant blob...
I want to create a blob: URL and hand it to the user after the browser has downloaded it.
I'll use the FileSystem API for Google Chrome, but I want to make something for Firefox too. I looked into the FileHandle API, but found nothing...
Do I have to build an extension for Firefox in order to do the same thing the FileSystem API does for Google Chrome?
Ubuntu 32-bit
Loading 1 GB+ with AJAX just to monitor download progress isn't convenient; it fills up the memory.
Instead I would just send the file with a Content-Disposition header to save the file.
There are, however, ways to work around this and still monitor progress. Option one is to have a second WebSocket that signals how much has been downloaded while you download normally with a GET request (sketched below); the other option is described towards the bottom.
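A rough sketch of that first option, with hypothetical endpoint names; the download itself is a plain navigation (so the browser, not AJAX, handles the 1 GB), while a WebSocket reports progress:

// hypothetical endpoints: /download streams the file with
// Content-Disposition: attachment, /progress reports bytes sent so far
var ws = new WebSocket('wss://example.com/progress?id=123');
ws.onmessage = function (e) {
    var info = JSON.parse(e.data); // e.g. { loaded: 52428800, total: 1073741824 }
    updateProgress(info.loaded / info.total); // your own UI hook
};
location.href = '/download?id=123'; // browser saves straight to disk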
I know you talked about using Blink's sandboxed FileSystem in the conversation, but it has some drawbacks. It may need permission if using persistent storage. It only allows 20% of the available disk space that is left. And if Chrome needs to free some space, it will evict the temporary storage of whichever other domain was least recently used, to make room for the most recent file. Besides, it doesn't work in private mode.
Not to mention that support for it is being dropped and it may never land in other browsers - but they will most likely not remove it, since many sites still depend on it.
The only way to process a file this large is with streams. That is why I have created StreamSaver. It is only going to work in Blink (Chrome & Opera) ATM, but it will eventually be supported by other browsers, with the WHATWG spec to back it up as a standard.
fetch(url).then(res => {
// One idea is to get the filename from Content-Disposition header...
const size = ~~res.headers.get('Content-Length')
const fileStream = streamSaver.createWriteStream('filename.zip', size)
const writeStream = fileStream.getWriter()
// Later you will be able to just simply do
// res.body.pipeTo(fileStream)
// instead of pumping
const reader = res.body.getReader()
const pump = () => reader.read()
.then(({ value, done }) => {
// here you know how large the value (chunk) is and you can
// figure out the download speed/progress when comparing it to the size
return done
? writeStream.close()
: writeStream.write(value).then(pump)
})
// Start the reader
pump().then(() =>
console.log('Closed the stream, Done writing')
)
})
This will not take up any memory.
I also have a theory: if you split the file into chunks and store them in IndexedDB, and then later merge them back together, it will work.
A blob isn't made of data... it's more like a pointer to where a file can be read from.
Meaning that if you store the chunks in IndexedDB and then do something like this (using FileSaver or an alternative):
finalBlob = new Blob([blob_A_fromDB, blob_B_fromDB])
saveAs(finalBlob, 'filename.zip')
But I can't confirm this since I haven't tested it; it would be good if someone else could.
Blobs are cool until you want to download a large file; there is a 600 MB limit (in Chrome) for blobs, since everything is stored in memory.
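To make that theory concrete, here is an untested sketch of the merge step, assuming the chunks were previously stored in an object store named 'chunks' in a database named 'downloads', keyed in order:

// Read all stored chunk blobs back and combine them; a Blob of Blobs
// is just references, so this should not copy the underlying data
var open = indexedDB.open('downloads', 1);
open.onsuccess = function () {
    var db = open.result;
    var store = db.transaction('chunks', 'readonly').objectStore('chunks');
    var req = store.getAll(); // returns the chunk blobs in key order
    req.onsuccess = function () {
        var finalBlob = new Blob(req.result, { type: 'application/zip' });
        saveAs(finalBlob, 'filename.zip'); // FileSaver.js
    };
};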
I'm using PeerJS, but I thought this problem might apply to WebRTC in general; hope you can help me out:
I'm trying to write a simple peer-to-peer file sharing tool. I'm using serialization: "none" for the PeerJS DataChannel connection, as I'm sending just pure ArrayBuffers.
Everything is fine with files around 10 MB, but I have problems sending a bigger file (30+ MB). For example, after sending around the first 10-20 chunks of a 900 MB zip file, the connection between peers starts throwing Connection is not open. You should listen for the "open" event before sending messages. (on the sender side).
My setup:
A file is dragged to the drag & drop area; the sender uses FileReader to read it as an ArrayBuffer in chunks of 64 * 1024 bytes (no difference with 16 * 1024), and as soon as each chunk is read, it's sent via peer.send(ChunkArrayBuffer).
The receiver creates a blob from each received chunk; after the transmission has finished it creates a complete blob out of those and gives the user a link (a sketch of the receiver side follows the sending function below).
My peer connection settings:
var con = peer.connect(peerid, {
label: "file",
reliable: true,
serialization: "none"
})
My sending function:
function sliceandsend(file, sendfunction) {
var fileSize = file.size;
var name = file.name;
var mime = file.type;
var chunkSize = 64 * 1024; // bytes
var offset = 0;
function readchunk() {
var r = new FileReader();
var blob = file.slice(offset, chunkSize + offset);
r.onload = function(evt) {
if (!evt.target.error) {
offset += chunkSize;
console.log("sending: " + (offset / fileSize) * 100 + "%");
if (offset >= fileSize) {
con.send(evt.target.result); ///final chunk
console.log("Done reading file " + name + " " + mime);
return;
}
else {
con.send(evt.target.result);
}
} else {
console.log("Read error: " + evt.target.error);
return;
}
readchunk();
};
r.readAsArrayBuffer(blob);
}
readchunk();
}
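For completeness, the receiver side is essentially this (a simplified sketch; 'incoming' is the receiving DataConnection, and 'name'/'mime' arrive as metadata sent separately):

// Receiver sketch: collect each incoming ArrayBuffer chunk,
// then build one blob out of them and hand the user a link
var received = [];
incoming.on('data', function (chunk) {
    received.push(chunk);
});
// called once the transmission has finished:
function finish(name, mime) {
    var file = new Blob(received, { type: mime });
    var a = document.createElement('a');
    a.href = URL.createObjectURL(file);
    a.download = name;
    a.textContent = 'Download ' + name;
    document.body.appendChild(a);
}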
Any ideas what can cause this?
Update: Setting a 50 ms timeout between chunk transmissions helped a bit; the 900 MB file reached 6% (instead of the previous 1-2%) before the errors started. Maybe it's some kind of limit on simultaneous operations through the datachannel, or some kind of datachannel buffer overflowing?
Update 1: Here's my PeerJS connection object with the DataChannel object inside it: (screenshot omitted)
Good News everyone!
It was a DataChannel buffer overflow problem; thanks to this article: http://viblast.com/blog/2015/2/25/webrtc-bufferedamount/
bufferedAmount is a property of the DataChannel (DC) object which, in the latest Chrome version, shows the amount of data (in bytes) currently sitting in the buffer; when it exceeds 16 MB, the DC is silently closed.
Therefore anyone who encounters this problem needs to implement a buffering mechanism at the application level that watches this property and holds messages back when needed. Also, be aware that in versions of Chrome prior to 37
the same property shows the quantity (not the size) of buffered messages, and on top of that it's broken under Windows and shows 0. However, with v<37 the DC is not closed on overflow - only an exception is thrown, which can also be caught to indicate buffer overflow.
I made an edit in the unminified peer.js code for myself; here you can see both methods in one function (for more of the source code you can look at https://github.com/peers/peerjs/blob/master/dist/peer.js#L217):
DataConnection.prototype._trySend = function(msg) {
var self = this;
function buffering() {
self._buffering = true;
setTimeout(function() {
// Try again.
self._buffering = false;
self._tryBuffer();
}, 100);
return false;
}
if (self._dc.bufferedAmount > 15728640) {
return buffering(); ///custom buffering if > 15MB is buffered in DC
} else {
try {
this._dc.send(msg);
} catch (e) {
return buffering(); ///custom buffering if DC exception caught
}
return true;
}
}
Also opened an issue on PeerJS GitHub: https://github.com/peers/peerjs/issues/291
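For reference, on a raw RTCDataChannel (not PeerJS) newer browsers let you apply this kind of backpressure without polling on a timer, via bufferedAmountLowThreshold and the bufferedamountlow event. A minimal sketch, assuming 'dc' is an open RTCDataChannel:

// Pause sending while the channel buffer is full; resume on the
// bufferedamountlow event instead of retrying on a timeout
var THRESHOLD = 1024 * 1024; // 1 MB
dc.bufferedAmountLowThreshold = THRESHOLD;

function sendChunk(chunk) {
    if (dc.bufferedAmount > THRESHOLD) {
        dc.onbufferedamountlow = function () {
            dc.onbufferedamountlow = null;
            dc.send(chunk); // retry once the buffer has drained
        };
    } else {
        dc.send(chunk);
    }
}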
Have a look at Transfer a file
This page shows how to transfer a file via WebRTC datachannels.
To accomplish this in an interoperable way, the file is split into chunks which are then transferred via the datachannel. The datachannel is reliable and ordered by default, which is well-suited to file transfers.
Although it doesn't use PeerJS, it can be adapted to do so, and the code is easy to follow and works without any issues.
A Little Background
I've been working for a couple of days on a Chrome extension that takes a screenshot of given web pages multiple times a day. I used this as a guide and things work as expected.
There's one minor requirement extensions can't meet, though. The user must have access to the folder where the images (screenshots) are saved, but Chrome Extensions don't have access to the file system. Chrome Apps, on the other hand, do. Thus, after much looking around, I've concluded that I must create both a Chrome Extension and a Chrome App. The idea is that the extension creates a blob of the screenshot and then sends that blob to the app, which saves it as an image to a user-specified location. And that's exactly what I'm doing: I'm creating a blob of the screenshot on the extension side and then sending it over to the app, where the user is asked to choose where to save the image.
The Problem
Up to the saving part, everything works as expected. The blob is created on the extension, sent over to the app, received by the app, the user is asked where to save, and the image is saved... THAT is where things fall apart. The resulting image is unusable. When I try to open it, I get a message that says "Can't determine type". Below is the code I'm using:
First ON THE EXTENSION side, I create a blob and send it over, like this:
chrome.runtime.sendMessage(
APP_ID, /* I got this from the app */
{incomingBlob: blob}, /* blob created previously; the listener below checks request.incomingBlob */
function(response) {
appendLog("response: "+JSON.stringify(response));
}
);
Then, ON THE APP side, I receive the blob and attempt to save it like this:
// listen for external messages
chrome.runtime.onMessageExternal.addListener(
function(request, sender, sendResponse) {
if (sender.id in blacklistedIds) {
sendResponse({"result":"sorry, could not process your message"});
return; // don't allow this extension access
} else if (request.incomingBlob) {
appendLog("from "+sender.id+": " + request.incomingBlob);
// attempt to save blob to choosen location
if (_folderEntry == null) {
// get a directory to save in if not yet chosen
openDirectory();
}
saveBlobToFile(request.incomingBlob, "screenshot.png");
/*
// inspect object to try to see what's wrong
var keys = Object.keys(request.incomingBlob);
var keyString = "";
for (var key in keys) {
keyString += " " + key;
}
appendLog("Blob object keys:" + keyString);
*/
sendResponse({"result":"Ok, got your message"});
} else {
sendResponse({"result":"Ops, I don't understand this message"});
}
}
);
Here's the function ON THE APP that performs the actual save:
function saveBlobToFile(blob, fileName) {
appendLog('entering saveBlobToFile function...');
chrome.fileSystem.getWritableEntry(_folderEntry, function(entry) {
entry.getFile(fileName, {create: true}, function(entry) {
entry.createWriter(function(writer) {
//writer.onwrite = function() {
// writer.onwrite = null;
// writer.truncate(writer.position);
//};
appendLog('calling writer.write...');
writer.write(blob);
// Also tried writer.write(new Blob([blob], {type: 'image/png'}));
});
});
});
}
There are no errors. No hiccups. The code works but the image is useless. What exactly am I missing? Where am I going wrong? Is it that we can only pass strings between extensions/apps? Is the blob getting corrupted on the way? Does my app not have access to the blob because it was created on the extension? Can anyone please shed some light?
UPDATE (9/23/14)
Sorry for the late update, but I was assigned to a different project and could not get back to this until 2 days ago.
So after much looking around, I've decided to go with Danniel Herr's suggestion, which is to use a SharedWorker and a page embedded in a frame in the app. The idea is that the extension supplies the blob to the SharedWorker, which forwards it to an extension page embedded in a frame inside the app. That page then forwards the blob to the app using parent.postMessage(...). It's a bit cumbersome, but it seems to be the only option I have.
Let me post some code so that it makes a bit more sense:
Extension:
var worker = new SharedWorker(chrome.runtime.getURL('shared-worker.js'));
worker.port.start();
worker.port.postMessage('hello from extension'); // can send a blob here too; SharedWorker messages go through .port
worker.port.addEventListener("message", function(event) {
$('h1Title').innerHTML = event.data;
});
proxy.js
var worker = new SharedWorker(chrome.runtime.getURL('shared-worker.js'));
worker.port.start();
worker.port.addEventListener("message",
function(event) {
parent.postMessage(event.data, 'chrome-extension://[extension id]');
}
);
proxy.html
<script src='proxy.js'></script>
shared-worker.js
var ports = [];
var count = 0;
onconnect = function(event) {
count++;
var port = event.ports[0];
ports.push(port);
port.start();
/*
On both the extension and the app, I get count = 1 and ports.length = 1
I'm running them side by side. This is so maddening!!!
What am I missing?
*/
var msg = 'Hi, you are connection #' + count + ". ";
msg += " There are " + ports.length + " ports open so far."
port.postMessage(msg);
port.addEventListener("message",
function(event) {
for (var i = 0; i < ports.length; ++i) {
//if (ports[i] != port) {
ports[i].postMessage(event.data);
//}
}
});
};
On the app
window.addEventListener("message",
function(event) {
appendLog("message from proxy: " + event.data);
}
);
So this is the execution flow... On the extension I create a shared worker and send a message to it. The shared worker should be capable of receiving a blob, but for testing purposes I'm only sending a simple string.
Next, the shared worker receives the message and forwards it to everyone who has connected. The proxy.html/js, which is inside a frame in the app, has indeed connected at this point and should receive anything forwarded by the shared worker.
Next, proxy.js should receive the message from the shared worker and send it to the app using parent.postMessage(...). The app is listening via window.addEventListener("message", ...).
To test this flow, I first open the app, then I click the extension button. I get no message on the app. I get no errors either.
The extension can communicate back and forth with the shared worker just fine. The app can communicate with the shared worker just fine. However, the message I send from extension -> proxy -> app never reaches the app. What am I missing?
Sorry for the long post guys, but I'm hoping someone will shed some light as this is driving me insane.
Thanks
Thanks for all your help, guys. I found the solution to be to convert the blob into a binary string in the extension and then send the string over to the app using Chrome's message passing API. On the app side, I then did what Francois suggested to convert the binary string back into a blob. I had tried this solution before, but it hadn't worked because I was using the following code on the app:
blob = new Blob([blobAsBinString], {type: mimeType});
That code may work for text files or simple strings, but it fails for images (perhaps due to character encoding issues). That's where I was going insane. The solution is to use what Francois had provided from the beginning:
var bytes = new Uint8Array(blobAsBinString.length);
for (var i=0; i<bytes.length; i++) {
bytes[i] = blobAsBinString.charCodeAt(i);
}
blob = new Blob([bytes], {type: mimeString});
That code retains the integrity of the binary string, and the blob is recreated properly on the app.
Now I also incorporated something suggested by some of you here, and by RobW elsewhere, which is to split the blob into chunks and send it over piece by piece, in case the blob is too large. The entire solution is below:
ON THE EXTENSION:
function sendBlobToApp() {
// read the blob in chunks and send it to the app
// Note: I crashed the app using 1 KB chunks. 1 MB chunks work just fine.
// I decided to use 256 KB as that seems neither too big nor too small
var CHUNK_SIZE = 256 * 1024;
var start = 0;
var stop = CHUNK_SIZE;
var remainder = blob.size % CHUNK_SIZE;
var chunks = Math.floor(blob.size / CHUNK_SIZE);
var chunkIndex = 0;
if (remainder != 0) chunks = chunks + 1;
var fr = new FileReader();
fr.onload = function() {
var message = {
blobAsText: fr.result,
mimeString: mimeString,
chunks: chunks
};
// APP_ID was obtained elsewhere
chrome.runtime.sendMessage(APP_ID, message, function(result) {
if (chrome.runtime.lastError) {
// Handle error, e.g. app not installed
// appendLog is defined elsewhere
appendLog("could not send message to app");
}
});
// read the next chunk of bytes
processChunk();
};
fr.onerror = function() { appendLog("An error occurred while reading file"); };
processChunk();
function processChunk() {
chunkIndex++;
// exit if there are no more chunks
if (chunkIndex > chunks) {
return;
}
if (chunkIndex == chunks && remainder != 0) {
stop = start + remainder;
}
var blobChunk = blob.slice(start, stop);
// prepare for next chunk
start = stop;
stop = stop + CHUNK_SIZE;
// convert chunk as binary string
fr.readAsBinaryString(blobChunk);
}
}
ON THE APP
chrome.runtime.onMessageExternal.addListener(
function(request, sender, sendResponse) {
if (sender.id in blacklistedIds) {
return; // don't allow this extension access
} else if (request.blobAsText) {
//new chunk received
_chunkIndex++;
var bytes = new Uint8Array(request.blobAsText.length);
for (var i=0; i<bytes.length; i++) {
bytes[i] = request.blobAsText.charCodeAt(i);
}
// store blob
_blobs[_chunkIndex-1] = new Blob([bytes], {type: request.mimeString});
if (_chunkIndex == request.chunks) {
// merge all blob chunks
var mergedBlob;
for (var j = 0; j < _blobs.length; j++) {
if (j > 0) {
// append to the merged blob
mergedBlob = new Blob([mergedBlob, _blobs[j]], {type: request.mimeString});
}
else {
mergedBlob = new Blob([_blobs[j]], {type: request.mimeString});
}
}
saveBlobToFile(mergedBlob, "myImage.png", request.mimeString);
}
}
}
);
Does my app not have access to the blob because it was created on the
extension? Can anyone please shed some light?
Exactly! You may want to pass a data URL instead of a blob. Something like the code below could work:
/* Chrome Extension */
var blobToDataURL = function(blob, cb) {
var reader = new FileReader();
reader.onload = function() {
cb(reader.result); // the full data URL, e.g. "data:image/png;base64,..."
};
reader.readAsDataURL(blob);
};
blobToDataURL(blob, function(dataUrl) {
chrome.runtime.sendMessage(APP_ID, {dataUrl: dataUrl}, function() {});
});
/* Chrome App */
function dataURLtoBlob(dataURL) {
var byteString = atob(dataURL.split(',')[1]),
mimeString = dataURL.split(',')[0].split(':')[1].split(';')[0];
var ab = new ArrayBuffer(byteString.length);
var ia = new Uint8Array(ab);
for (var i = 0; i < byteString.length; i++) {
ia[i] = byteString.charCodeAt(i);
}
var blob = new Blob([ia], {type: mimeString});
return blob;
}
chrome.runtime.onMessageExternal.addListener(
function(request) {
var blob = dataURLtoBlob(request.dataUrl);
saveBlobToFile(blob, "screenshot.png");
});
I am extremely interested in this question, as I am trying to accomplish something similar.
These are the questions I have found to be related:
How can a Chrome extension save many files to a user-specified directory?
Implement cross extension message passing in chrome extension and app
Does chrome.runtime support posting messages with transferable objects?
Pass File object to background.js from content script or pass createObjectURL (and keep alive after refresh)
According to Rob W, in the first link:
"Chrome's fileSystem (app) API can directly write to the user's filesystem (e.g. ~/Documents or %USERPROFILE%\Documents), specified by the user."
If you can write to a user's filesystem, you should be able to read from it, right?
I haven't had the opportunity to try this out, but instead of directly passing the file blob to the app, you could save the item to the user's downloads using the Chrome extension downloads API.
Then you could retrieve it with the Chrome App filesystem API to gain access to it.
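The extension half of that idea could look something like this (an untested sketch; it needs the "downloads" permission in the manifest, and the filename here is just an example):

// In the extension: save the screenshot blob via the downloads API
var url = URL.createObjectURL(blob);
chrome.downloads.download({
    url: url,
    filename: 'screenshots/screenshot.png', // relative to the Downloads folder
    conflictAction: 'uniquify'
}, function (downloadId) {
    URL.revokeObjectURL(url); // the download has captured the blob by now
});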
Edit:
I keep reading that the filesystem the API can access is sandboxed. So I have no idea if this solution is possible. Its being sandboxed and Rob W's description of "writing directly to the user's filesystem" sound like opposites to me.
Edit:
Rob W has revised his answer here: Implement cross extension message passing in chrome extension and app.
It no longer uses a shared worker, and passes file data as a string to the backend, which can turn the string back into a blob.
I'm not sure what the max length of a message is, but Rob W also mentions a solution for slicing up blobs to send them in pieces.
Edit:
I have sent 43 MB of data without crashing my app.
That's really an interesting question. From my point of view it can be done using these techniques:
First of all, you should convert your blob to an ArrayBuffer. This can be done with FileReader, and it is an async operation.
Then comes some magic from the Encoding API, which is currently available in stable Chrome: you convert your ArrayBuffer into a string. This operation is sync.
Then you can communicate with other extensions/apps using the Chrome API like this. I am using this technique to promote one of my apps (a new packaged app) from another famous legacy app. And since legacy packaged apps are in fact extensions, I think everything will be okay.
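The first step might look like this (a sketch; note that the Encoding API conversion in step two needs care, since decoding arbitrary binary bytes as text is not loss-free for every encoding):

// Step 1: Blob -> ArrayBuffer, asynchronously
function blobToArrayBuffer(blob, callback) {
    var reader = new FileReader();
    reader.onload = function () {
        callback(reader.result); // an ArrayBuffer
    };
    reader.readAsArrayBuffer(blob);
}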
I'm looking for a way to save large files (exactly 8 megabytes) in Safari. I have tried using the URI scheme, the eligrey FileSaver, and the Flash plugin Downloadify. All of these cause Safari to allocate memory until the web worker process reaches about 2 gigabytes, and then Safari crashes.
I realize there have been questions like this one before, and I have tried everything those questions came up with. Links:
Using HTML5/Javascript to generate and save a file
How to Save a file at client side using JavaScript?
create a file using javascript in chrome on client side
This code works on Firefox & Google Chrome (it uses the eligrey FileSaver library for saveAs):
function io_saveData (){
var bb;
var buffer;
var data;
alert ("The file will now be saved.");
bb = new BlobBuilder();
for (var i = 0;i<kMapHeight;i++){
var stduint8 = new Uint8Array (uint16map[i].buffer);
var stduint8LittleEndian = new Uint8Array (kMapWidth*2);
//byte swap work around
for (var j = 0;j<stduint8.length;j+=2){
stduint8LittleEndian [j] = stduint8 [j+1]
stduint8LittleEndian [j+1] = stduint8 [j];
}
bb.append(stduint8LittleEndian.buffer);
}
var blob = bb.getBlob("example/binary");
saveAs(blob, "Data File");
bb = null;
buffer = null;
data = null;
}
I'm looking for a way for Safari to create a download without crashing. The deployment area is Mac OS X, so each machine has Apache built in along with PHP; I would rather not take that route, though.
Here you go. First, store the file in the HTML5 filesystem, and once the data has been stored, download it using the FileSaver API. I worked on this and got good results, without blocking the UI or crashing the browser. It's better to do it in a web worker for performance.
Here is a helpful article on it:
TEMPORARY storage has a default quota of 50% of the available disk as a shared pool (50 GB => 25 GB). (It's not restricted to 1 GB anymore.)
http://updates.html5rocks.com/tag/filesystem
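A minimal sketch of that approach (the Chrome-prefixed FileSystem API; assumes a blob named `blob` has already been built):

// Ask for 100 MB of TEMPORARY storage and write the blob into
// a sandboxed file, then expose it via a filesystem: URL
window.webkitRequestFileSystem(window.TEMPORARY, 100 * 1024 * 1024, function (fs) {
    fs.root.getFile('data.bin', { create: true }, function (entry) {
        entry.createWriter(function (writer) {
            writer.onwriteend = function () {
                console.log('Saved, available at ' + entry.toURL());
            };
            writer.onerror = function (e) { console.log('Write failed', e); };
            writer.write(blob);
        });
    });
});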
Unfortunately, Safari 7 seems not to support writing files.
https://github.com/eligrey/FileSaver.js/issues/12
http://caniuse.com/#feat=filesystem