IndexedDB bug in Google Chrome - javascript

I'm trying to create a ring buffer so that I can store a lot of JSON data.
The goal is to save around 300,000 records and overwrite them cyclically. For the test, I randomly created 1,000 records (with 10 float values per record) and saved them as JSON in IndexedDB.
To persist the data in IndexedDB, I used a loop (from 0 to 99) and the "put" command.
My observation is the following:
On the first pass, the DB is created and the 100 records are saved successfully.
The first refresh also works: the newly generated random float values are saved. But memory usage increases significantly.
After a second refresh, the random data is no longer changed, because memory usage has exceeded the limit.
The keys for the IndexedDB records are set in a loop (starting at 0 and ending at 99).
In other browsers like Firefox and MS Edge, the test runs fine, even after 100 refreshes.
Does anyone know the cause of this problem, or better yet, have a solution?
It would also be OK to delete all records from the IndexedDB while the page is reloading.
So I tried to remove all data during initialization, but even then the memory usage stayed at a high level (over 230 MB).
function getObjectStore(store_name, mode) {
    var tx = db.transaction(store_name, mode);
    return tx.objectStore(store_name);
}

function putDbElement(number, json, _callback) {
    var obj = {
        number: number,
        json: json
    };
    var store = getObjectStore(DB_STORE_NAME, 'readwrite');
    var req;
    try {
        req = store.put(obj);
        _callback(); // note: fires immediately, before the put has actually completed
    } catch (e) {
        throw e;
    }
}

for (var i = 0; i < 100; i++) {
    putDbElement(
        i,
        getRandomJson(1000),
        function() {
            console.log("created: " + i);
        }
    );
}

IndexedDB is asynchronous.
You are opening a new transaction for each iteration, which could be the reason for the high memory usage.
You also need to handle success and error events. You can use loops, but they must run within a single transaction, inside onsuccess. Each put operation should then have its own success/error handlers too. (A sketch follows below.)
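A minimal sketch of that pattern, assuming the same db, DB_STORE_NAME, and getRandomJson globals from the question:

var tx = db.transaction(DB_STORE_NAME, 'readwrite'); // one transaction for all puts
var store = tx.objectStore(DB_STORE_NAME);

for (var i = 0; i < 100; i++) {
    (function(n) { // capture the loop index so the handler logs the right number
        var req = store.put({ number: n, json: getRandomJson(1000) });
        req.onsuccess = function() { console.log("created: " + n); };
        req.onerror = function() { console.error("put failed: " + n); };
    })(i);
}

tx.oncomplete = function() { console.log("all 100 records written"); };
tx.onerror = function(e) { console.error("transaction failed", e); };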

Thanks a lot for your quick answer.
I've extended the code with onsuccess and onerror, but I still had the same problem.
Although I found no solution, I did find an explanation for the problem: IndexedDB size keeps growing even though the data saved doesn't change.
Chrome uses LevelDB underneath to be faster, but in my case I find that behavior irritating.
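For anyone who wants to try the delete-on-reload approach mentioned in the question, a minimal sketch (assuming the same db and DB_STORE_NAME globals); note that in this case it reportedly did not bring Chrome's memory usage back down:

function clearStore(_callback) {
    var tx = db.transaction(DB_STORE_NAME, 'readwrite');
    var req = tx.objectStore(DB_STORE_NAME).clear(); // removes every record in the store
    req.onsuccess = function() { _callback(); };
    req.onerror = function(e) { console.error("clear failed", e); };
}

// on page initialization:
clearStore(function() { console.log("store emptied"); });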


How do I fix or work around this memory leak in fetch?

Similar to this question:
Fetch API leaks memory in Chrome
When using fetch to regularly poll data, Chrome's memory usage continually increases without ever releasing the memory, which eventually causes a crash.
https://jsfiddle.net/abfinz/ahc65y3s/13/
const state = {
    response: {},
    count: 0
}

function poll(){
    fetch('https://upload.wikimedia.org/wikipedia/commons/3/3d/LARGE_elevation.jpg')
        .then(response => {
            state.response = response;
            state.count = state.count + 1;
            if (state.count < 20){
                console.log(state.count);
                setTimeout(poll, 3000);
            } else {
                console.log("Will restart in 1 minute");
                state.count = 0;
                setTimeout(poll, 60000);
            }
        });
}
poll();
This JSFiddle demonstrates the issue fairly well. By polling for data every 3 seconds, it seems that something is causing Chrome to continually hold onto the memory. If I let it stop and wait for a few minutes, it usually will release the memory, but if polling continues, it usually holds onto it. Also, if I let it run for a few full cycles, even forcing garbage collection from the performance tab of the dev tools doesn't always release all of the memory.
The memory doesn't show up in the JS Heap. I have to use the Task Manager to see it.
Occasionally, the memory will clear while actively polling, but inevitably builds to extreme levels.
Edge also shows the issue, but seems to be more proactive in clearing the memory. Though it still eventually builds to 1GB+ of extra memory usage.
Am I doing something wrong, or is this a bug? Any ideas on how I can get this kind of polling to work long-term without the memory leak?
I played around with it a bit, and it seems to be a bug in the handling of the response: the allocated memory is not freed if you never call any of the response's body-reading functions.
The Chrome task manager and the Windows task manager both report a constant size of about 30 MB when I run the snippet below with this order of execution. It also runs on JSFiddle at 30 MB, even at request #120.
const state = {
        response: {},
        count: 0
    },
    uri = 'https://upload.wikimedia.org/wikipedia/commons/3/3d/LARGE_elevation.jpg';

!function poll() {
    const controller = new AbortController(),
        signal = controller.signal;
    // using this you can cancel the request and destroy it completely.
    fetch(uri, { signal })
        // this is triggered as soon as the header data is transferred
        .then(response => {
            /**
             * Continuing without doing anything with the response
             * fills the memory.
             *
             * Chrome downloads the file in the background and
             * seems to wait for a call to one of the
             * response.fx() methods or an abort signal.
             *
             * This seems to be a bug, or a small design mistake,
             * if the response is never used.
             *
             * If response.json(), .blob(), .body or .text() is
             * called, the memory will be freed.
             */
            return response.blob();
        }).then((binary) => {
            // store the data in a variable
            return state.response = binary;
        }).catch((err) => {
            console.error(err);
        }).finally(() => {
            // and start the next poll
            console.log(++state.count, state.response.type, (state.response.size / 1024 / 1024).toFixed(2) + ' MB');
            requestAnimationFrame(poll);
            // console.dir(state.response);
            // aborting reduces memory a bit more
            controller.abort();
        })
}()

JavaScript: Like-Counter with Memory

I am looking to create a Like counter with persistent memory!
Right now, my project is stored on a USB drive, and I'm not thinking of uploading my semi-finished site to the Internet just yet. I'm carrying it around, plugging it in and working.
One feature of the site is a Heart counter and a Like counter, each with its symbolic icon.
I have a little sideline JavaScript file with a dozen functions that handle the click events and such, like the number count of the counters.
But, as the values of the counters are auto-assigned to temporary memory, if you were to reload the page, the counter numbers would reset to their default, zero. A huge headache...
Reading from .txt
I thought of using the experimental FileReader object to handle the problem, but I soon found that it needs a user-supplied file to operate (from my examinations).
Here's my attempt:
if (heartCount || likeCount >= 1) {
    var reader = new FileReader();
    // note: readAsText() expects a File or Blob object, not a bare file
    // name, and it returns nothing - the result arrives asynchronously
    // in reader.onload via reader.result
    var readerResults = reader.readAsText(heartsAndLikes.txt);
    //return readerResults
    alert(readerResults);
}
When loaded, the page runs through standard operations, except for the above.
This, in my opinion, would have been the ideal solution...
Reading from Cookies
Cookies don't seem like an option, as they reside on a per-computer basis.
They are stored on the computer's drive, not in the JavaScript file... sad...
HTML5 Web Storage
Using the new Web Storage would probably be a big help. But again, it is on a per-computer basis, no matter how beautiful the system is...
localStorage.heartCount = 0 //Originally...

function heartButtonClicked() {
    if (localStorage.heartCount) {
        localStorage.heartCount = Number(localStorage.heartCount) + 1
    }
    document.getElementById('heartCountDisplay').innerHTML = localStorage.heartCount
} //Function is tied to the heartCountButton directly via the 'onclick' method
However, I am questioning whether web storage can be carried over on a USB drive...
Summarised ideas
Currently, I am leaning towards reading and editing files, as that's most suited to my situation. But...
Which would you use? Would you introduce a different method?
Please, tell me about it! :)
if (typeof(Storage) !== "undefined") { //make sure local storage is available
    if (!localStorage.heartCount) { //if heartCount is not set then set it to zero
        localStorage.heartCount = 0;
    }
} else {
    alert('Local storage is not available');
}

function heartButtonClicked() {
    if (localStorage.heartCount) { //if heartCount exists then increment it by one
        localStorage.heartCount++;
    }
    //display the result
    document.getElementById('heartCountDisplay').innerHTML = localStorage.heartCount
}
This will only work on a per-computer basis and will not persist on your thumb drive. The only way I can think of to persist the data on your drive is to manually download a JSON or text file. (A sketch follows below.)
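A minimal sketch of that manual-export idea, assuming you want to save the current counter values as a JSON file that you then copy onto the drive by hand:

function exportCounts() {
    var data = JSON.stringify({ heartCount: localStorage.heartCount });
    var blob = new Blob([data], { type: 'application/json' });
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'heartsAndLikes.json'; // the browser saves it to the download folder
    a.click();
    URL.revokeObjectURL(a.href); // release the temporary object URL
}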

NodeJS - Memory/CPU management with MongoJS Stream

I'm parsing a fairly large dataset from MongoDB (about 40,000 documents, each with a decent amount of data inside).
The stream is being accessed like so:
var cursor = db.domains.find({ html: { $exists: true } });

cursor.on('data', function(rec) {
    i++;
    var url = rec.domain;
    var $ = cheerio.load(rec.html);
    checkList($, rec, url, i);
    // This "checkList" function parses the HTML with Cheerio to find
    // different elements on the page. Lots of if/else statements.
});

cursor.on('end', function(){
    console.log("Streamed all objects!");
})
Each record gets parsed with Cheerio (the record contains HTML data from a page scraped earlier), then I process the Cheerio data to look for various selectors, and the results are saved back to MongoDB.
For the first ~2,000 objects the data is parsed quite quickly (in ~30 seconds). After that it becomes far slower, around 50 records being parsed per second.
Looking at my MacBook Air's Activity Monitor, I see that it's not using a crazy amount of memory (226.5 MB of 8 GB RAM), but it is using a whole lot of CPU (io.js is taking up 99% of my CPU).
Is this a possible memory leak? The checkList function isn't particularly intensive (or at least, as far as I can tell - there are quite a few nested if/else statements but not much else).
Am I meant to be clearing my variables after they're used, like setting $ = '' or similar? Any other reason why Node would be using so much CPU?
You basically need to "pause" the stream, or otherwise "throttle" it, so it does not execute on every data item received straight away. The code in the "data" event does not wait for completion before the next event fires, so the events keep arriving unless you stop them from emitting.
var cursor = db.domains.find({ html: { $exists: true } });

cursor.on('data', function(rec) {
    cursor.pause(); // stop processing new events
    i++;
    var url = rec.domain;
    var $ = cheerio.load(rec.html);
    checkList($, rec, url, i);
    // if checkList() is synchronous then resume here
    cursor.resume(); // start events again
});

cursor.on('end', function(){
    console.log("Streamed all objects!");
})
If checkList() contains async methods, then pass in the cursor:
checkList($, rec, url, i, cursor);
And process the "resume" inside:
function checkList(data, rec, url, i, cursor) {
    somethingAsync(args, function(err, result) {
        // We're done
        cursor.resume(); // start events again
    })
}
The "pause" stops the events emitting from the stream until the "resume" is called. This means your operations don't "stack up" in memory and wait for each to complete.
You probably want more advanced flow control for some parallel processing, but this is basically how you do it with streams.
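A minimal sketch of such throttling with limited parallelism, assuming a hypothetical callback-taking variant checkList(data, rec, url, i, done): pause once too many documents are in flight, and resume when a slot frees up.

var MAX_IN_FLIGHT = 5, // tune to taste
    inFlight = 0;

cursor.on('data', function(rec) {
    i++;
    if (++inFlight >= MAX_IN_FLIGHT) {
        cursor.pause(); // stop new events while we are at capacity
    }
    checkList(cheerio.load(rec.html), rec, rec.domain, i, function done() {
        if (--inFlight < MAX_IN_FLIGHT) {
            cursor.resume(); // a slot is free, let events flow again
        }
    });
});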

How to get the value of SELECT COUNT(*)?

I've literally been trying all day to make Firefox obey my will...
I want:
int c = SELECT COUNT(*) FROM ...
I've tried executeAsync({...}), but I believe it's the wrong paradigm, as I want the result immediately. (And mozIStoragePendingStatement results in errors.)
var count = 0;
var conn = Services.storage.openDatabase(dbfile); // Will also create the file if it does not exist

let statement = conn.createStatement("SELECT COUNT(*) FROM edges LIMIT 42;");
console.log("columns: " + statement.columnCount);      // prints "1"
console.log("col name:" + statement.getColumnName(0)); // is "COUNT(*)"

while (statement.executeStep()) {
    count = statement.row.getResultByIndex(0); // "illegal value"
    count = statement.row.getString(0);        // "illegal value", too
    // count = statement.row.COUNT(*);         // hahaha. not even valid syntax
    count = statement.row[0];                  // hahaha. "undefined"
    count = statement.row[1];                  // hahaha. "undefined"
}
statement.reset();
It basically works, but I don't get the value. What's wrong with the statements inside the loop?
Thanks for any hints...
I've tried executeAsync({...});, but I believe it's the wrong paradigm, as I want the result immediately.
You shouldn't want that, the Storage API is asynchronous for a reason. Synchronous access to databases can cause a random delay (e.g. if the hard drive is busy). And since your code executes on the main thread (the same thread that services the user interface) the entire user interface would hang while your code is waiting for the database to respond. The Mozilla devs tried synchronous database access in Firefox 3 and quickly noticed that it degrades user experience - hence the asynchronous API, the database processing happens on a background thread without blocking anything.
You should change your code to work asynchronously. Something like this should do for example:
Components.utils.import("resource://gre/modules/Services.jsm");

var conn = Services.storage.openDatabase(dbfile);
if (conn.schemaVersion < 1)
{
    conn.createTable("edges", "s INTEGER, t INTEGER");
    conn.schemaVersion = 1;
}

var statement = conn.createStatement("SELECT COUNT(*) FROM edges");
statement.executeAsync({
    handleResult: function(resultSet)
    {
        var row = resultSet.getNextRow();
        var count = row.getResultByIndex(0);
        processResult(count);
    },
    handleError: function(error) {},
    handleCompletion: function(reason) {}
});

// Close the connection once the pending operations are completed
conn.asyncClose();
See also: mozIStorageResultSet, mozIStorageRow.
Try aliasing COUNT(*) as total, then fetch that by name. (A sketch follows below.)
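A minimal sketch of that aliasing idea, reusing the async pattern from the answer above (assuming the same conn and processResult):

var statement = conn.createStatement("SELECT COUNT(*) AS total FROM edges");
statement.executeAsync({
    handleResult: function(resultSet)
    {
        var row = resultSet.getNextRow();
        var count = row.getResultByName("total"); // fetch the aliased column by name
        processResult(count);
    },
    handleError: function(error) {},
    handleCompletion: function(reason) {}
});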

How to stop intense Javascript loop from freezing the browser

I'm using JavaScript to parse an XML file with about 3,500 elements. I'm using jQuery's .each() function, but I could use any form of loop.
The problem is that the browser freezes for a few seconds while the loop executes. What's the best way to stop freezing the browser without slowing the code down too much?
$(xmlDoc).find("Object").each(function() {
    //Processing here
});
I would ditch the "each" function in favour of a for loop, since it is faster. I would also add some waits using setTimeout, but only every so often and only if needed. You don't want to wait 5 ms each time, because then processing 3,500 records would take approx 17.5 seconds.
Below is an example using a for loop that processes 100 records (you can tweak that) at 5 ms intervals, which gives a 175 ms overhead.
var xmlElements = $(xmlDoc).find('Object');
var length = xmlElements.length;
var index = 0;

var process = function() {
    for (; index < length; index++) {
        var toProcess = xmlElements[index];
        // Perform xml processing
        if (index + 1 < length && (index + 1) % 100 == 0) {
            index++;                // continue with the next record on the next run
            setTimeout(process, 5); // yield so the browser can repaint and handle input
            return;                 // stop this run; the timeout resumes the loop
        }
    }
};
process();
I would also benchmark the different parts of the xml processing to see if there is a bottleneck somewhere that may be fixed. You can benchmark in Firefox using Firebug's profiler and by writing out to the console like this:
// start benchmark
var t = new Date();
// some xml processing
console.log("Time to process: " + (new Date() - t) + "ms"); // parens needed, or the date is concatenated as a string first
Hope this helps.
Set a timeout between processing chunks to prevent the loop from eating up all the browser's resources. In total it would only take a few seconds to process and loop through everything, which is not unreasonable for 3,500 elements.
// convert the jQuery set to a plain array so that shift() is available
var xmlElements = $(xmlDoc).find('Object').toArray();

var processing = function() {
    var element = xmlElements.shift();
    //process element;
    if (xmlElements.length > 0) {
        setTimeout(processing, 5);
    }
}
processing();
I'd consider converting the 3,500 elements from XML to JSON server-side, or even better, uploading it to the server already converted, so that it's native to JS from the get-go.
This would minimize your load and probably make the file size smaller too.
You can call setTimeout() with a duration of ZERO and it will yield as desired. (A sketch follows below.)
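A minimal sketch of that zero-delay yield, assuming a hypothetical handleItem() for the per-element work:

function processInChunks(items, start) {
    var end = Math.min(start + 100, items.length);
    for (var i = start; i < end; i++) {
        handleItem(items[i]); // hypothetical per-element work
    }
    if (end < items.length) {
        // duration 0: run again as soon as the browser has caught up
        setTimeout(function() { processInChunks(items, end); }, 0);
    }
}

processInChunks($(xmlDoc).find("Object").toArray(), 0);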
Long loops without freezing the browser are possible with the Turboid framework. With it, you can write code like:
loop(function(){
    // Do something...
}, number_of_iterations, number_of_milliseconds);
More details in this turboid.net article: Real loops in Javascript
JavaScript is single-threaded, so aside from setTimeout, there's not much you can do. If using Google Gears is an option for your site, it provides the ability to run JavaScript in a true background thread.
You could use the HTML5 Web Workers API, but that will only work in the Firefox 3.1 and Safari 4 betas atm.
I had the same problem, which happened when the user refreshed the page successively. The cause was two nested for loops that ran more than 52,000 times. This problem was harsher in Firefox 24 than in Chrome 29, since Firefox would crash sooner (around 2,000 ms sooner than Chrome). What I did, and it worked, was to use "for" loops instead of each, and then refactor the code so that the whole loop array was divided into 4 separate calls whose results are merged into one. This solution has proven to work.
Something like this:
var entitiesToLoop = ["..."]; // Mainly a big array

loopForSubset(0, firstInterval);
loopForSubset(firstInterval, secondInterval);
...

// a function declaration is hoisted, so the calls above work; the original
// `var loopForSubset = function ...` would not be assigned yet at call time
function loopForSubset(startIndex, endIndex) {
    for (var i = startIndex; i < endIndex; i++) {
        //Do your stuff as usual here
    }
}
The other solution that also worked for me was implemented with the Worker API from HTML5. Use the same concept in workers, as they keep your browser from freezing because they run in the background of your main thread. If just applying this with the Workers API does not work, place each instance of loopForSubset in a different worker and merge the results inside the main caller of the Worker.
I mean, this might not be perfect, but it has worked. I can help with more real code chunks if someone still thinks this might suit them. (A minimal worker sketch follows below.)
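A minimal Web Worker sketch of that idea (the file name loop-worker.js and the message payload are assumptions for illustration):

// main script
var worker = new Worker('loop-worker.js'); // hypothetical file name
worker.onmessage = function(e) {
    console.log('worker finished:', e.data);
};
worker.postMessage(entitiesToLoop); // hand the big array to the background thread

// loop-worker.js
onmessage = function(e) {
    var result = [];
    for (var i = 0; i < e.data.length; i++) {
        // heavy per-entity work happens here, off the main thread
    }
    postMessage(result);
};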
You could try deferring each iteration with a zero-delay timeout:
$(xmlDoc).find("Object").each(function(index, element) {
    (function(el) {      // capture the current element for the timeout callback
        setTimeout(function() {
            //your stuff with el goes here
        }, 0);           // zero delay: runs after the browser yields
    })(element);
});
This won't harm you much ;)
As a modification of #tj111's answer, the full usable code:
//add pop and shift functions to the jQuery library. Put this somewhere in your code.
//pop is not used here, but you can use it in other parts of your code.
(function( $ ) {
    $.fn.pop = function() {
        var top = this.get(-1);
        this.splice(this.length - 1, 1);
        return top;
    };
    $.fn.shift = function() {
        var bottom = this.get(0);
        this.splice(0, 1);
        return bottom;
    };
})( jQuery );

//the core of the code:
var $divs = $('body').find('div'); //.each();
var s = $divs.length;
var mIndex = 0;

var process = function() {
    var div = $divs.shift(); //take the next element off the front
    //here your own code.
    //progress bar:
    mIndex++;
    // e.g.: progressBar(mIndex / s * 100., $pb0);
    //start new iteration.
    if ($divs.length > 0) {
        setTimeout(process, 5);
    } else {
        //when calculations are finished.
        console.log('finished');
    }
}
process();
