According to Google's guide on Chrome's Blob implementation, Blob data is paged to disk when memory is scarce. Blobs can therefore be used to temporarily hold large amounts of data without running out of memory, which is particularly useful when you want to build large download files client-side.
However, I have noticed that my application does not work in incognito mode: it runs out of memory when creating the download file. Checking chrome://blob-internals indeed shows that Chrome does not page blob data to disk in incognito mode; all of it is kept in RAM.
This is puzzling, as I have been unable to find any documentation of this behavior. Is there a way to force Chrome to use the disk? What alternatives exist for storing large amounts of data in incognito mode?
Screenshot: Successful blob paging writes blob data to disk
Screenshot: No paging in incognito mode; ERR_OUT_OF_MEMORY
Test code:
const buffer = new ArrayBuffer(1e8); // 100 MB
const arr = [];
for (let i = 0; i < 20; i++) { // You may have to modify this if your RAM is larger/smaller.
    arr.push(buffer);
}
let blob = new Blob(arr, { // First blob builds correctly.
    type: "application/octet-stream"
});
let blob2 = new Blob(arr, { // Second blob fails only in incognito mode.
    type: "application/octet-stream"
});
// Check "chrome://blob-internals"
Related
If I create multiple new blobs:
var myBlob1 = new Blob(['Hello world!'], {type: 'text/plain'});
var myBlob2 = new Blob(['Very cool!'], {type: 'text/plain'});
var myBlob3 = new Blob(['I love AI!'], {type: 'text/plain'});
How can I list them all in Chrome DevTools (or in plain JavaScript) to expose all the Blobs and get all their information? The result I'm after would be something like:
var blobs = getBlobs(); // [myBlob1, myBlob2, myBlob3]
I found this question while looking for this very URL, which I knew existed but had forgotten; it's hard to find documentation for it.
When using Chrome you can open chrome://blob-internals/, which lists all the Blob URLs that have been created. It doesn't provide a nice interface like the dev tools, but it should help you track whether you are revoking URLs appropriately.
You can take a heap snapshot from the Memory panel of your dev tools and then filter by the Blob class.
From there, you should have a list of all the Blobs that were in memory when you took the snapshot.
But you won't have much information on these Blobs, and even their number might be off.
And there is no way to get this from web APIs: if such a method existed, browsers could never garbage-collect unused Blobs.
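There is no built-in getBlobs(); if you only need to enumerate Blobs that your own code creates, one option is to route creation through a small registry of your own. A sketch (blobRegistry, trackBlob and getBlobs are hypothetical helpers, and note that the strong references defeat the garbage collection mentioned above):
var blobRegistry = new Set();

function trackBlob(parts, options) {
    var blob = new Blob(parts, options);
    blobRegistry.add(blob); // Strong reference: tracked Blobs can never be collected.
    return blob;
}

function getBlobs() {
    return [...blobRegistry];
}

var myBlob1 = trackBlob(['Hello world!'], {type: 'text/plain'});
console.log(getBlobs()); // [myBlob1]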
I am trying to use FineUploader to upload a large number of files. I also need to manipulate the files before uploading them - namely, I need to anonymize some identifying information. In another answer, Ray Nicholus suggested rejecting the original file in the onSubmit handler and then re-adding the manipulated file. So my onSubmit handler looks like this:
onSubmit: function (id, name) {
    var file = this.getFile(id);
    if (file.isAnonymized) {
        return;
    }
    var reader = new FileReader();
    reader.onload = function () {
        var arrayBuffer = this.result;
        var byteArray = new Uint8Array(arrayBuffer);
        // Manipulate the byteArray in some way...
        var blob = new window.Blob([byteArray]);
        blob.isAnonymized = true;
        // Add the anonymized file instead...
        uploader.addFiles({blob: blob, name: name});
    };
    reader.readAsArrayBuffer(file);
    // ...and cancel the original file.
    return false;
},
This works fine for a small number of files. In a concrete example, a customer tried to upload ~1,500 files of 3 MB each in Firefox and saw Firefox's memory usage spike through the roof before the tab eventually crashed. Other browsers (Chrome, Edge) exhibit similar behavior. The browser's developer tools don't show any large memory allocations. Simply uploading the files as-is causes no problems, but that's not an option.
I cleaned up the example at https://github.com/sgarcialaguna-mms/file-upload-memory/ somewhat and am now confident that the error is due to the FineUploader library holding on to blobs longer than needed.
The example now loads one file into memory at a time, then passes the blob to the upload library. I also now use an actual server (the Django example from the FineUploader server-examples repository).
With Firefox, when I drag in ~1 GB of files, Firefox's memory usage steadily rises during the upload and stays high even after the upload completes. If I open about:memory, click "Minimize memory usage" to trigger a garbage collection, and press "Measure", the file data shows up under "memory-file-data/large". If I then call uploader.reset(), trigger a garbage collection again, and measure once more, Firefox's memory usage drops sharply and the "memory-file-data/large" objects are gone. As per https://github.com/FineUploader/fine-uploader/issues/1540#issuecomment-194201646, calling this._handler.expunge(id) after every upload works as well.
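Freeing each file's data as soon as it finishes uploading might look like this (a sketch using FineUploader's internal _handler API mentioned in the issue above, which may change between versions; the element and endpoint are placeholders):
var uploader = new qq.FineUploader({
    element: document.getElementById('uploader'),
    request: {endpoint: '/upload'},
    callbacks: {
        onComplete: function (id, name, response) {
            // Drop FineUploader's internal reference to this file's data
            // so the browser can reclaim the memory.
            this._handler.expunge(id);
        }
    }
});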
Chrome behaves a bit differently, due to a long-standing bug it eventually starts throwing ERR_FILE_NOT_FOUND errors once more than 500 MB of blob data accumulates. The chrome://blob-internals page shows which blobs are being held as well as their refcount.
I don't know if there is an easy way to tell which variable / closure / whatever is holding on to these objects, but it would help immensely.
I am trying to do cross-extension message passing between a Chrome extension and a Chrome app according to this article, but I am not sure how to do it correctly. I used the background JS to receive and send messages, but I have no clue whether it is working or not. What I actually want is to save a file from a Chrome extension; since that cannot be done directly, I thought this approach could work. Any idea, suggestion, or example is highly welcome.
I have gone through many alternatives, such as those that appear in this question. One of the answers points to this example, which I found works fine. I hope I can use this mechanism to save files using the Chrome App's fileSystem API.
The Chrome messaging APIs can only transfer JSON-serializable values. If the files are small, then you could just read the file content using FileReader in the extension, send the message over the external messaging channel to the Chrome App, then save the data using the FileWriter API.
When the files are big, read the file in chunks using file.slice(start, end), then follow the same method as for small files (see the chunked sketch after the app code below).
Extension:
var app_id = '.... ID of app (32 lowercase a-p characters) ....';
var file = ...; // File or Blob object, e.g. from an <input type=file>
var fr = new FileReader();
fr.onload = function() {
    var message = {
        blob: fr.result,
        filename: file.name,
        filetype: file.type
    };
    chrome.runtime.sendMessage(app_id, message, function(result) {
        if (chrome.runtime.lastError) {
            // Handle error, e.g. app not installed
            console.warn('Error: ' + chrome.runtime.lastError.message);
        } else {
            // Handle success
            console.log('Reply from app: ', result);
        }
    });
};
fr.onerror = function() { /* handle error */ };
// file or sliced file.
fr.readAsText(file);
App:
chrome.runtime.onMessageExternal.addListener(
    function(message, sender, sendResponse) {
        // TODO: Validate that sender.id is allowed to invoke the app!
        // Do something, e.g. convert back to Blob and do whatever you want.
        var blob = new Blob([message.blob], {type: message.filetype});
        console.log('TODO: Do something with ' + message.filename + ':', blob);
        // Do something, e.g. reply to message
        sendResponse('Processed file');
        // If you want to send a reply asynchronously, uncomment the next line.
        // return true;
    });
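For large files, the extension-side reader can be adapted to send slices instead of the whole file. Here is a sketch; the chunk size and the offset/done fields are my own convention, not part of any API:
var CHUNK_SIZE = 1024 * 1024; // 1 MB per message (arbitrary choice)

function sendFileInChunks(file, app_id) {
    var offset = 0;
    (function readNextChunk() {
        var fr = new FileReader();
        fr.onload = function() {
            chrome.runtime.sendMessage(app_id, {
                blob: fr.result,
                filename: file.name,
                filetype: file.type,
                offset: offset,
                done: offset + CHUNK_SIZE >= file.size
            }, function(result) {
                offset += CHUNK_SIZE;
                if (offset < file.size) readNextChunk();
            });
        };
        fr.readAsText(file.slice(offset, offset + CHUNK_SIZE));
    })();
}
The app side would then concatenate the received pieces in offset order before building the final Blob.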
EDIT: Although the following method sounded nice in theory, it does not work in practice because a separate SharedWorker process is created for the app and the extension.
If you want to send huge files (e.g. Files), then you could implement the following:
Extension: Create proxy.html (content = <script src=proxy.js></script>). (feel free to pick any other name).
Extension: Put proxy.html in web_accessible_resources.
App: Bind a window.onmessage event listener. This event listener will receive messages from the frame you're going to load in the next step.
App: Load chrome-extension://[EXTENSIONID]/proxy.html in a frame within your app. This extension ID can either be hard-coded (see Obtaining Chrome Extension ID for development) or exchanged via the external extension message passing API (make sure that you validate the source - hardcoding the ID would be the best way).
Extension: When proxy.html is loaded, check whether location.ancestorOrigins[0] == 'chrome-extension://[APPID]' to avoid a security leak. Terminate all steps if this condition fails.
Extension: When you want to pass a File or Blob to the app, use parent.postMessage(blob, 'chrome-extension://[APPID]');
App: When it receives the blob from the extension frame, save it to the FileSystem that you obtained through the chrome.fileSystem API.
The last task to solve is getting a file from the extension to the extension frame (proxy.html) that is embedded in the app. This can be done via a SharedWorker, see this answer for an example (you can skip the part that creates a new frame because the extension frame is already created in one of the previous steps).
Note that at the moment (Chrome 35), Files can only be sent with a work-around due to a bug.
I am building a file storage for HTML5, using IndexedDB as the storage. I request the files from the server via XMLHttpRequest with the response type set to arraybuffer (for Chrome) and blob (for other browsers).
Everything is fine even if the collection of files totals 500 MB or more (it can even reach gigabytes). But I noticed something strange when adding a file to IndexedDB: it triggers an error when a single file exceeds ~120 MB, so that file is not stored. When a file is less than ~120 MB, it is stored fine.
Note that the error only occurs when storing a single file > 120 MB: a 200 MB .mp4 file, for example, triggers it, but five videos of 100 MB each (500 MB total) are all fine.
I would like to know whether this is a deliberate limit or some glitch; I didn't find any documentation about it. I tested in IE and Chrome, and both give the same error.
EDIT:
OK, I apparently get this error in IndexedDB's add or put function when storing the file, inside e.target.error.message:
The serialized value is too large (size=140989466 bytes, max=133169152 bytes)
At the time this question was asked, Chrome still didn't support saving Blobs to IndexedDB; that support only landed the following month.
For anyone facing the same issue nowadays, store Blobs or Files directly, not ArrayBuffers.
Unlike ArrayBuffers, saving a Blob to IDB doesn't require serializing its data: the serialization steps for a Blob just take a snapshot of the JS object and keep a link to the same underlying byte sequence, which itself is not cloned.
The only limit you should face would be the one of the db itself.
Code taken from Chrome's announcement:
var store = db.transaction(['entries'], 'readwrite').objectStore('entries');
// Store the object
var req = store.put(blob, 'blob');
req.onerror = function(e) {
    console.log(e);
};
req.onsuccess = function(event) {
    console.log('Successfully stored a blob as Blob.');
};
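Reading the value back hands you a Blob again; a minimal sketch against the same object store:
var readStore = db.transaction(['entries'], 'readonly').objectStore('entries');
var getReq = readStore.get('blob');
getReq.onsuccess = function(event) {
    var blob = event.target.result; // a Blob, not an ArrayBuffer
    // E.g. hand it to the user via an object URL:
    console.log('Got back a Blob of ' + blob.size + ' bytes:',
                URL.createObjectURL(blob));
};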
I think this is an issue with your browser's implementation of IndexedDB. I ran into this same error myself in Firefox when I tried to store a 100 MB file into an IndexedDB record, while the identical code worked fine in Chrome. Different browsers seem to have different implementation quirks and limits.
Personally, I suspect this is a bug in Firefox, since Firefox grants the requested size but then prevents single-record usage of that entire size, whereas Chrome is more forgiving.
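If you have to stay under such a per-record limit, one possible workaround (my own suggestion, not documented behavior) is to slice the file into several smaller records and reassemble it on read:
var CHUNK = 64 * 1024 * 1024; // 64 MB, comfortably under the ~127 MB limit above

function storeChunked(store, key, blob) {
    var count = Math.ceil(blob.size / CHUNK);
    for (var i = 0; i < count; i++) {
        // Each slice is stored as its own record under "key:i".
        store.put(blob.slice(i * CHUNK, (i + 1) * CHUNK), key + ':' + i);
    }
    store.put(count, key + ':count'); // remember how many pieces there are
}
// On read: fetch key:0 .. key:(count-1) and rebuild with new Blob([piece0, piece1, ...]).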
I'm trying to create a page with links to online PDFs.
When you click one of these links, it should save the file locally and add a name/path to local storage.
I then iterate over the local storage keys to display links to each saved file.
I'm having issues with saving files locally. I tried using the Chrome filesystem API:
function saveFile() {
    chrome.fileSystem.chooseEntry({
        type: "saveFile",
        suggestedName: "file.txt"
    },
    function (savedFile) {
        localStorage[s] = savedFile.fullPath;
    });
}
but I get Uncaught TypeError: Cannot read property 'chooseEntry' of undefined.
Essentially, I need to save a file to the system and get its path. It would be preferable if there were no prompt to select the name/location.
If the app is a Chrome extension, then I think it is likely that the fileSystem API is not enabled; for a Chrome app you need to enable it via the application's manifest.json file, and you may also be able to ask for user permission.
If it is a web application, you can request a sandboxed file system using window.webkitRequestFileSystem, although this will only work in Chrome (and maybe still Opera).
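A minimal sketch of that (deprecated, Chrome-prefixed) API, requesting a small temporary quota:
window.webkitRequestFileSystem(window.TEMPORARY, 5 * 1024 * 1024, function(fs) {
    fs.root.getFile('file.txt', {create: true}, function(entry) {
        entry.createWriter(function(writer) {
            writer.onwriteend = function() {
                console.log('Saved to ' + entry.toURL());
            };
            writer.write(new Blob(['hello'], {type: 'text/plain'}));
        });
    });
}, function(err) {
    console.error(err);
});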
For cross-browser file storage and download support you could use something like Dexie for browser IndexedDB storage, together with the FileSaver and Blob libraries, although you will not have the same storage capacity and flexibility as with the native Chrome APIs. Here is an example using FileSaver and Blob:
var blob = new Blob([data.text], {
    type: "text/csv;charset=utf-8"
});
saveAs(blob, data.label + ".csv");
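And a minimal Dexie sketch for keeping such a Blob in IndexedDB until the user asks for the download (the database and table names here are just examples):
var db = new Dexie('fileStore'); // hypothetical database name
db.version(1).stores({files: 'name'}); // primary key: name

// Save the blob under a name...
db.files.put({name: data.label + '.csv', blob: blob});

// ...and later retrieve it and hand it to FileSaver:
db.files.get(data.label + '.csv').then(function(entry) {
    saveAs(entry.blob, entry.name);
});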
Is this a Google Chrome Application?
If so, you must have the following entry in your manifest file:
"permissions" : [
{
"fileSystem" : ["write", "directory"]
}
]
(Be careful with the spelling: I encountered the error because I wrote permission instead of permissions.)
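With the permission in place, a minimal save could look like this (a sketch; error handling mostly omitted):
chrome.fileSystem.chooseEntry({
    type: 'saveFile',
    suggestedName: 'file.txt'
}, function(writableEntry) {
    writableEntry.createWriter(function(writer) {
        writer.onwriteend = function() {
            // fullPath is only a virtual path; chrome.fileSystem.retainEntry
            // can be used if you need access to the entry across sessions.
            console.log('Saved ' + writableEntry.fullPath);
        };
        writer.write(new Blob(['Hello world'], {type: 'text/plain'}));
    }, function(err) {
        console.error(err);
    });
});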