I want to add objects to a table in IndexedDB in one transaction:
_that.bulkSet = function (data, key) {
    var transaction = _db.transaction([_tblName], "readwrite"),
        store = transaction.objectStore(_tblName),
        ii = 0;

    // Queue the value/key pair until 3000 of them have accumulated.
    _bulkKWVals.push(data);
    _bulkKWKeys.push(key);

    if (_bulkKWVals.length == 3000) {
        insertNext();
    }

    // Insert one record at a time, chaining on each success event.
    function insertNext() {
        if (ii < _bulkKWVals.length) {
            store.add(_bulkKWVals[ii], _bulkKWKeys[ii]).onsuccess = insertNext;
            ++ii;
        } else {
            console.log(_bulkKWVals.length);
        }
    }
};
It seems to work fine, but it is not a very optimized way of doing this, especially when the number of objects is very high (~50,000-500,000). How could I optimize it? Ideally I want to add the first 3000, then remove them from the array, then add another 3000, i.e. in chunks. Any ideas?
When inserting that many rows consecutively, it is not possible to get good performance.
I'm an IndexedDB dev and have real-world experience with IndexedDB at the scale you're talking about (writing hundreds of thousands of rows consecutively). It ain't too pretty.
In my opinion, IDB is not suitable for use when a large amount of data has to be written consecutively. If I were to architect an IndexedDB app that needed lots of data, I would figure out a way to seed it slowly over time.
The issue is writes, and the problem as I see it is that the slowness of writes, combined with their I/O-intensive nature, gets worse over time. (Reads are always lightning fast in IDB, for what it's worth.)
To start, you'll get savings from re-using transactions. Because of that, your first instinct might be to try to cram everything into the same transaction. But what I've found in Chrome, for example, is that the browser doesn't seem to like long-running writes, perhaps because of some mechanism meant to throttle misbehaving tabs.
I'm not sure what kind of performance you're seeing, but average numbers might fool you depending on the size of your test. The limiting factor is throughput, but if you're trying to insert large amounts of data consecutively, pay attention to writes over time specifically.
I happen to be working on a demo with several hundred thousand rows at my disposal, and have stats. With my visualization disabled, running pure dash on IDB, here's what I see right now in Chrome 32 on a single object store with a single non-unique index with an auto-incrementing primary key.
With a much, much smaller 27k-row dataset, I saw 60-70 entries/second sustained:
* ~30 seconds: 921 entries/second on average (there's always a great burst of inserts at the start), 62/second at the moment I sampled
* ~60 seconds: 389/second average (sustained decrease starting to outweigh the effect of the initial burst), 71/second at the moment
* ~1:30: 258/second average, 67/second at the moment
* ~2:00 (~1/3 done): 188/second on average, 66/second at the moment
Some examples with a much smaller dataset show far better performance, but similar characteristics. Ditto much larger datasets; the effects are greatly exaggerated, and I've seen as little as under 1 entry per second when leaving it running for multiple hours.
IndexedDB is actually designed to optimize for bulk operations. The problem is that the spec and certain docs do not advertise the way it works. If you pay close attention to the parts of the IndexedDB specification that define how the mutating operations on IDBObjectStore work (add(), put(), delete()), you'll find that they allow callers to call them synchronously and omit listening to the success events of all but the last request. By doing that (but still listening to onerror), you will get enormous performance gains.
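A minimal sketch of that pattern, assuming a db handle and an out-of-line-key store named "items" (both placeholder names, not from the question):

function bulkAdd(db, values, keys, done) {
    var transaction = db.transaction(["items"], "readwrite");
    var store = transaction.objectStore("items");
    transaction.onerror = function (event) {
        console.error("Bulk insert failed:", event.target.error);
    };
    for (var i = 0; i < values.length; i++) {
        var request = store.add(values[i], keys[i]);
        // Fire all requests back to back; only the very last one gets an
        // onsuccess listener, per the technique described above.
        if (i === values.length - 1) {
            request.onsuccess = done;
        }
    }
}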
This example using Dexie.js shows the possible bulk speed, as it inserts 10,000 rows in 680 ms on my MacBook Pro (using Opera/Chromium).
This is accomplished with the Table.bulkPut() method in the Dexie.js library:
db.objects.bulkPut(arrayOfObjects)
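If I remember the Dexie API correctly, bulkPut() returns a promise, so in practice the call would be chained roughly like this (a sketch, not verbatim from the library docs):

db.objects.bulkPut(arrayOfObjects)
    .then(function (lastKey) {
        console.log("Bulk put finished; last key was", lastKey);
    })
    .catch(function (error) {
        console.error("Bulk put failed:", error);
    });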
I want to measure the memory usage of my web SPA using performance.memory, and the purpose is to detect whether there is any problem, i.e. a memory leak, during the webapp's lifetime.
For this reason I would need to call this API at a specific time interval - it could be every 3 seconds, every 30 seconds, or every 1 minute, ... Then I have a question - to detect any issue quickly and effectively I would have to make the interval as short as I could, but then I have a concern about performance. The measuring itself could affect the performance of the webapp if the measuring is such an expensive task (hopefully I don't think that is the case, though).
With this background above, I have the following questions:
Is performance.memory the kind of method that would affect the browser's main thread's performance, such that I should care about the frequency of use?
Is there a right way or procedure to determine whether a (JavaScript) task is affecting the performance of a device? If question 1 is uncertain, then I would have to find another way to work out the proper interval for calling the memory measurement.
(V8 developer here.)
Calling performance.memory is pretty fast. You can easily verify that in a quick test yourself: just call it a thousand times in a loop and measure how long that takes.
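For example, a naive version of that test (see the caveats in the edit below) might look like this:

let m, before = performance.now();
for (let i = 0; i < 1000; i++) {
    m = performance.memory; // read the property a thousand times
}
let after = performance.now();
console.log(`${(after - before) / 1000} ms per call on average`);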
[EDIT: Thanks to @Kaiido for highlighting that this kind of microbenchmark can in general be very misleading; for example, the first operation could be much more expensive, or the benchmark scenario could be so different from the real application's scenario that the results don't carry over. Do keep in mind that writing useful microbenchmarks always requires some understanding/inspection of what's happening under the hood!
In this particular case, knowing a bit about how performance.memory works internally, the results of such a simple test are broadly accurate; however, as I explain below, they also don't matter.
--End of edit]
However, that observation is not enough to solve your problem. The reason why performance.memory is fast is also the reason why calling it frequently is pointless: it just returns a cached value, it doesn't actually do any work to measure memory consumption. (If it did, then calling it would be super slow.) Here is a quick test to demonstrate both of these points:
function f() {
    if (!performance.memory) {
        console.error("unsupported browser");
        return;
    }
    let objects = [];
    for (let i = 0; i < 100; i++) {
        // We'd expect heap usage to increase by ~1MB per iteration.
        objects.push(new Array(256000));
        let before = performance.now();
        let memory = performance.memory.usedJSHeapSize;
        let after = performance.now();
        console.log(`Took ${after - before} ms, result: ${memory}`);
    }
}
f();
(You can also see that browsers clamp timer granularity for security reasons: it's not a coincidence that the reported time is either 0ms or 0.1ms, never anything in between.)
(Second) however, that's not as much of a problem as it may seem at first, because the premise "to detect any issue quickly and effectively I would have to make the interval as short as I could" is misguided: in garbage-collected languages, it is totally normal that memory usage goes up and down, possibly by hundreds of megabytes. That's because finding objects that can be freed is an expensive exercise, so garbage collectors are carefully tuned for a good compromise: they should free up memory as quickly as possible without wasting CPU cycles on useless busywork. As part of that balance they adapt to the given workload, so there are no general numbers to quote here.
Checking memory consumption of your app in the wild is a fine idea, you're not the first to do it, and performance.memory is the best tool for it (for now). Just keep in mind that what you're looking for is a long-term upwards trend, not short-term fluctuations. So measuring every 10 minutes or so is totally sufficient, and you'll still need lots of data points to see statistically-useful results, because any single measurement could have happened right before or right after a garbage collection cycle.
For example, if you determine that all of your users have higher memory consumption after 10 seconds than after 5 seconds, then that's just working as intended, and there's nothing to be done. Whereas if you notice that after 10 minutes, readings are in the 100-300 MB range, and after 20 minutes in the 200-400 MB range, and after an hour they're 500-1000 MB, then it's time to go looking for that leak.
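A minimal sketch of that kind of long-term sampling (the sample structure and reporting strategy are my own assumptions):

const memorySamples = [];

function sampleMemory() {
    if (!performance.memory) return; // only Chromium-based browsers expose this
    memorySamples.push({
        uptimeMs: performance.now(),
        usedJSHeapSize: performance.memory.usedJSHeapSize,
    });
    // A real app would ship these samples to an analytics backend and look
    // for a long-term upwards trend across many users and sessions.
}

setInterval(sampleMemory, 10 * 60 * 1000); // every 10 minutes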
Due to the reasons outlined in this question, I am building my own client-side search engine rather than using the ydn-full-text library, which is based on fullproof. What it boils down to is that fullproof spawns "too freaking many records", on the order of 300,000 records, whilst (after stemming) there are only about 7,700 unique words. So my 'theory' is that fullproof is based on traditional assumptions which only apply to the server side:
Huge indices are fine
Processor power is expensive
(and the assumption of dealing with longer records, which just isn't applicable to my case, as my records are on average only 24 words[1])
Whereas on the client side:
Huge indices take ages to populate
Processing power is still limited, but relatively cheaper than on the server side
Based on these assumptions I started off with an elementary inverted index (giving just 7,700 records, as IndexedDB is a document/NoSQL database). This inverted index has been stemmed using the Lancaster stemmer (the most aggressive of the two or three popular ones), and during a search I would retrieve the index for each of the words, assign a score based on the overlap of the different indices and on the similarity of the typed word vs. the original (Jaro-Winkler distance).
The problem with this approach:
Combination of "popular_word + popular_word" is extremely expensive
So, finally getting to my question: how can I alleviate the above problem with minimal growth of the index? I do understand that my approach will be CPU-intensive, but as a traditional full-text search index seems unusably big, this seems to be the only reasonable road to go down. (Pointing me to good resources or works is also appreciated.)
[1] This is a more or less artificial splitting of unstructured texts into small segments; however, this artificial splitting is standardized in the relevant field, so it has been used here as well. I have not studied the effect on the index size of keeping these 'snippets' together versus throwing huge chunks of text at fullproof. I assume that it would not make a huge difference, but if I am mistaken then please do point this out.
This is a great question, thanks for bringing some quality to the IndexedDB tag.
While this answer isn't quite production-ready, I wanted to let you know that if you launch Chrome with --enable-experimental-web-platform-features, there should be a couple of features available that might help you achieve what you're looking to do.
IDBObjectStore.openKeyCursor() - value-free cursors, in case you can get away with the stem only
IDBCursor.continuePrimaryKey(key, primaryKey) - allows you to skip over items with the same key
I was informed of these via an IDB developer on the Chrome team and while I've yet to experiment with them myself this seems like the perfect use case.
My thought is that if you approach this problem with two different indexes on the same column, you might be able to get that join-like behavior you're looking for without bloating your stores with gratuitous indexes.
While consecutive writes are pretty terrible in IDB, reads are great. Good performance across 7700 entries should be quite tenable.
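To make that concrete, here is a hedged sketch of a value-free index scan combining those two features (the "terms" store and "stem" index are placeholder names; continuePrimaryKey() needs the experimental flag in the Chrome of that era):

var tx = db.transaction("terms", "readonly");
var index = tx.objectStore("terms").index("stem");
var request = index.openKeyCursor(IDBKeyRange.only("run"));
request.onsuccess = function () {
    var cursor = request.result;
    if (!cursor) return; // scan finished
    // A key cursor never loads record values, only keys.
    console.log("stem:", cursor.key, "primary key:", cursor.primaryKey);
    // For join-like scans, you can jump ahead to a known position instead
    // of stepping one record at a time:
    // cursor.continuePrimaryKey("run", someLargerPrimaryKey);
    cursor.continue();
};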
I'm currently writing an application that displays a lot, and I mean, a lot of 2D paths (made of hundreds or thousands of tiny segments) on an HTML5 canvas. Typically, a few million points. These points are downloaded from a server into a binary ArrayBuffer.
I probably won't be using that many points in the real world, but I'm kinda interested in how I could improve the performance. You can call it curiosity if you want ;)
Anyway, I've tested the following solutions:
Using gl.LINES or gl.LINE_STRIP with WebGL, and computing everything in shaders on the GPU. Currently the fastest; it can display up to 10M segments without flinching on my MacBook Air. But there are very strict constraints on the binary format if you want to avoid processing things in JavaScript, which is slow.
Using Canvas2D, draw a huge path with all the segments in one stroke() call. When I'm getting past 100k points, the page freezes for a few seconds before the canvas is updated. So, not working here.
Using Canvas2D, but drawing each path with its own stroke() call. Despite what others have been saying on the internet, this is much faster than drawing everything in one call, but still a lot slower than WebGL. Things start to get bad when I reach about 500k segments. (Both Canvas2D variants are sketched below.)
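For reference, minimal sketches of the two Canvas2D variants (paths is assumed to be an array of arrays of {x, y} points):

// Variant 2: one huge path, a single stroke() call.
function drawAllInOneStroke(ctx, paths) {
    ctx.beginPath();
    for (var p = 0; p < paths.length; p++) {
        var path = paths[p];
        ctx.moveTo(path[0].x, path[0].y);
        for (var i = 1; i < path.length; i++) {
            ctx.lineTo(path[i].x, path[i].y);
        }
    }
    ctx.stroke();
}

// Variant 3: one stroke() call per path.
function drawEachPathSeparately(ctx, paths) {
    for (var p = 0; p < paths.length; p++) {
        var path = paths[p];
        ctx.beginPath();
        ctx.moveTo(path[0].x, path[0].y);
        for (var i = 1; i < path.length; i++) {
            ctx.lineTo(path[i].x, path[i].y);
        }
        ctx.stroke();
    }
}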
The two Canvas2D solutions require looping through all the points of all the paths in JavaScript, so this is quite slow. Do you know of any method(s) that could improve JavaScript's iteration speed in an ArrayBuffer, or processing speed in general?
But, what's strange is, the screen isn't updated immediately after all the canvas draw calls have finished. When I start getting to the performance limit, there is a noticeable delay between the end of the draw calls and the update of the canvas. Do you have any idea where that comes from, and is there a way to reduce it?
First, WebGL was a nice and hyped idea, but the amount of processing required to decode and display the binary data simply doesn't work in shaders, so I ruled it out.
Here are the main bottlenecks I've encountered. Some of them are quite common in general programming, but it's a good idea to remember them:
It's best to use multiple, small for loops
Create variables and closures at the highest level possible, don't create them inside the for loops
Render your data in chunks, and use setTimeout to schedule the next chunk after a few milliseconds: that way, the user will still be able to use the UI (see the sketch after this list)
JavaScript objects and arrays are fast and cheap, use them. It's best to read/write them in sequential order, from the beginning to the end.
If you don't write data sequentially in an array, use objects (because non-sequential read-writes are cheap for objects) and push the indexes into an index array. I used a SortedList implementation to keep the indexes sorted, which I found here. The overhead was minimal (about 10-20% of the rendering time), and in the end it was well worth it.
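A sketch of the chunked-rendering pattern from the third point above (drawSegment and the 5 ms delay are placeholders):

function renderInChunks(ctx, segments, chunkSize) {
    var i = 0;
    function renderChunk() {
        var end = Math.min(i + chunkSize, segments.length);
        for (; i < end; i++) {
            drawSegment(ctx, segments[i]); // stand-in for your drawing code
        }
        if (i < segments.length) {
            setTimeout(renderChunk, 5); // yield so the UI stays responsive
        }
    }
    renderChunk();
}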
That's about everything I remember. If I do find something else, I'll update this answer!
I'm using a jQuery plugin called Tablesorter to do client-side sorting of a log table in one of my applications. I am also making use of the tablepager add-in.
I really like the responsiveness that client-side sorting and paging brings to the party. I also like how you don't have to hit the web server or database repeatedly.
However I can see that, in time, the log I'm displaying could grow quite large. I'm sure there comes a point where client-side paging and sorting is going to be impractical. At what point will this technique begin to collapse under its own weight? 500 records? 2,000 records? 10,000 records?
EDIT:
In a nutshell, what criteria would you use to decide between client-side sorting/paging and server-side paging? Does the size of the expected result set factor into your decision? Where is the tipping point?
This technique will probably collapse when the browser or client host can't take it.
Use server-side pagination to prevent this.
I would first consider the amount of data I am sending to the client, which in turn determines the loading time.
Say each row of the table is 200 bytes, and I am sending 10,000 rows to the client (which allows client-side sorting and pagination). That means sending 200 * 10000 = 2,000,000 bytes, aka 2 MB. It will take the browser quite some time to load this from the server, then some time for the sorting plugin to sort everything, then the pagination some time to page it.
In fact, your server load will increase with the need to send ALL the rows to the client.
Normally with so much data and iteration for Javascript to handle, the browser (Firefox or similar) will lock up and look as though it is crashing.
If you use server-side sorting + pagination, the client sees accurate and up-to-date information. Also, say you have the same 10,000 rows, each 200 bytes, with 20 rows per page. You are only sending 20 * 200 = 4000 bytes, which is 4 KB; relatively small, and easily handled by the browser/server.
"However I can see that, in time, the log I'm displaying could grow quite large. I'm sure there comes a point where client-side paging and sorting is going to be impractical. At what point will this technique begin to collapse under its own weight? 500 records? 2,000 records? 10,000 records?"
This really depends on a lot of different things, like table size, number of columns, and which browser and version the person is using. I can routinely sort up to 1,000 records before seeing a real problem. If you start approaching this number, I would definitely start looking at server-side sorting. With AJAX, server-side sorting can be quite efficient and still offer a decent user experience.
The best way to assess your particular situation is to try it out and see. Browsers, although not really designed to handle really large amounts of data like that, can still handle it. The user experience will be abysmal, but the number of records they can handle is quite high.
A few hundred is probably okay, depending on the number of columns. This will most certainly break down when you're dealing with data on the order of 10^3 (thousands).
These have been my empirical findings across different browsers, but I was usually on beefy hardware. I would limit your data set to hundreds.
I was reading about output buffering in JavaScript here, and was trying to get my head around the script the author says was the fastest at printing 1 to 1,000,000 to a web page. (Scroll down to the header "The winning one million number script".) After studying it a bit, I have a few questions:
What makes this script so efficient compared to other approaches?
Why does buffering speed things up?
How do you determine the proper buffer size to use?
Does anyone here have any tricks up her/his sleeve that could optimize this script further?
(I realize this is probably CS101, but I'm one of those blasted, self-taught hackers and I was hoping to benefit from the wisdom of the collective on this one. Thanks!)
What makes this script so efficient compared to other approaches?
There are several optimizations that the author is making to this algorithm. Each of them requires a fairly deep understanding of how the underlying mechanisms are utilized (e.g. JavaScript, CPU, registers, cache, video card, etc.). I think there are 2 key optimizations that he is making (the rest are just icing):
Buffering the output
Using integer math rather than string manipulation
I'll discuss buffering shortly since you ask about it explicitly. The integer math that he's utilizing has two performance benefits: integer addition is cheaper per operation than string manipulation and it uses less memory.
I don't know how JavaScript and web browsers handle the conversion of an integer to a display glyph in the browser, so there may be a penalty associated with passing an integer to document.write when compared to a string. However, he is performing (1,000,000 / 1,000) document.write calls versus on the order of 1,000,000 integer additions. This means he is performing roughly 3 orders of magnitude more operations to form the message than he is to send it to the display. Therefore, the penalty for sending an integer vs. a string to document.write would have to exceed 3 orders of magnitude to offset the performance advantage of manipulating integers.
Why does buffering speed things up?
The specifics of why it works vary depending on what platform, hardware, and implementation you are using. In any case, it's all about balancing your algorithm to your bottleneck inducing resources.
For instance, in the case of file I/O, buffering is helpful because it takes advantage of the fact that a rotating disk can only write a certain amount at a time. Give it too little work and it won't be using every available bit that passes under the head as the disk rotates. Give it too much, and your application will have to wait (or be put to sleep) while the disk finishes your write; time that could be spent getting the next record ready for writing! Some of the key factors that determine the ideal buffer size for file I/O include: sector size, file system chunk size, interleaving, number of heads, rotation speed, and areal density, among others.
In the case of the CPU, it's all about keeping the pipeline full. If you give the CPU too little work, it will spend time spinning NO OPs while it waits for you to task it. If you give the CPU too much, you may not dispatch requests to other resources, such as the disk or the video card, which could execute in parallel. This means that later on the CPU will have to wait for these to return with nothing to do. The primary factor for buffering in the CPU is keeping everything you need (for the CPU) as close to the FPU/ALU as possible. In a typical architecture this is (in order of decreasing proximity): registers, L1 cache, L2 cache, L3 cache, RAM.
In the case of writing a million numbers to the screen, it's about drawing polygons on your screen with your video card. Think about it like this: let's say that for each new number that is added, the video card must do 100,000,000 operations to draw the polygons on your screen. At one extreme, if you put 1 number on the page at a time and then have your video card write it out, and you do this for 1,000,000 numbers, the video card will have to do 10^14 operations - 100 trillion operations! At the other extreme, if you took the entire 1 million numbers and sent them to the video card all at once, it would take only 100,000,000 operations. The optimal point is somewhere in the middle. If you do it one at a time, the CPU does a unit of work, then waits around for a long time while the GPU updates the display. If you write the entire 1M-item string first, the GPU is doing nothing while the CPU churns away.
How do you determine the proper buffer size to use?
Unless you are targeting a very specific and well-defined platform with a specific algorithm (e.g. writing packet routing for an internet router), you typically cannot determine this mathematically. Typically, you find it empirically: guess a value, try it, record the results, then pick another. You can make some educated guesses about where to start and what range to investigate based on the bottlenecks you are managing.
Does anyone here have any tricks up her/his sleeve that could optimize this script further?
I don't know if this would work, and I have not tested it; however, buffer sizes typically come in multiples of 2, since the underpinnings of computers are binary and word sizes are typically multiples of two (but this isn't always the case!). For example, 64 bytes is more likely to be optimal than 60 bytes, and 1024 is more likely to be optimal than 1000. One of the bottlenecks specific to this problem is that most browsers to date (Google Chrome being the first exception that I'm aware of) run JavaScript serially within the same thread as the rest of the web page's rendering mechanics. This means that the JavaScript does some work filling the buffer and then waits a long time until the document.write call returns. If the JavaScript were run as a separate process, asynchronously, like in Chrome, you would likely get a major speed-up. This is of course attacking the source of the bottleneck, not the algorithm that uses it, but sometimes that is the best option.
Not nearly as succinct as I would like it, but hopefully it's a good starting point. Buffering is an important concept for all sorts of performance issues in computing. Having a good understanding of the underlying mechanisms that your code is using (both hardware and software) is extremely useful in avoiding or addressing performance issues.
I would bet the slowest thing in printing 1m numbers is the browser redrawing the page, so the fewer times you call document.write(), the better. Of course this needs to be balanced against large string concatenations (because they involve allocating and copying).
Determining the right buffer size is found through experimentation.
In other cases, buffering helps align work along natural boundaries. Here are some examples:
32 bit CPUs can transfer 32 bits more efficiently.
TCP/IP packets have maximum sizes.
File I/O classes have internal buffers.
Images, like TIFFs, may be stored with their data in strips.
Aligning with the natural boundaries of other systems can often have performance benefits.
One way to think about it is to consider that a single call to document.write() is very expensive. However, building an array and joining that array into a string is not. So reducing the number of calls to document.write() effectively reduces the overall computational time needed to write the numbers.
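A hedged sketch of that approach (the buffer size of 1,000 is an arbitrary choice):

function writeNumbersBuffered(n, bufferSize) {
    var buffer = [];
    for (var i = 1; i <= n; i++) {
        buffer.push(i);
        if (buffer.length === bufferSize) {
            document.write(buffer.join(" ") + " "); // one write per chunk
            buffer.length = 0; // reset for the next chunk
        }
    }
    if (buffer.length > 0) {
        document.write(buffer.join(" ")); // flush the remainder
    }
}

writeNumbersBuffered(1000000, 1000);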
Buffers are a way to tie together two pieces of work with different costs.
Computing and filling arrays has a small cost for small arrays, but a large cost for large arrays.
document.write has a large constant cost regardless of the size of the write, but scales at less than O(n) for larger inputs.
So queuing up larger strings to write, or buffering them, speeds overall performance.
Nice find on the article by the way.
So this one has been driving me crazy, because I don't think it really is the fastest. So here is my experiment that anyone can play with. Perhaps I wrote it wrong or something, but it would appear that doing it all at once, instead of using a buffer, is actually the faster operation. Or at least it is in my experiments.
<html>
<head>
    <script type="text/javascript">
        function printAllNumberBuffered(n, bufferSize)
        {
            var startTime = new Date();
            var oRuntime = document.getElementById("divRuntime");
            var oNumbers = document.getElementById("divNumbers");
            var i = 0;
            var currentNumber;
            var numArray = new Array(bufferSize);

            for(currentNumber = 1; currentNumber <= n; currentNumber++)
            {
                numArray[i] = currentNumber;

                if(currentNumber % bufferSize == 0)
                {
                    // Flush a full buffer with a single join and write.
                    oNumbers.textContent += numArray.join(' ');
                    i = 0;
                }
                else
                {
                    i++;
                }
            }

            if(i > 0)
            {
                // Drop the stale entries left over from the previous flush,
                // then write out the remaining i numbers.
                numArray.splice(i, bufferSize - i);
                oNumbers.textContent += numArray.join(' ');
            }

            var endTime = new Date();
            oRuntime.innerHTML += "<div>Number: " + n + " Buffer Size: " + bufferSize + " Runtime: " + (endTime - startTime) + "</div>";
        }

        function PrintNumbers()
        {
            var tbNumber = document.getElementById("tbNumber");
            var tbBufferSize = document.getElementById("tbBufferSize");
            var oNumbers = document.getElementById("divNumbers");
            var n = parseInt(tbNumber.value, 10);
            var bufferSize = parseInt(tbBufferSize.value, 10);

            oNumbers.textContent = "";
            printAllNumberBuffered(n, bufferSize);
        }
    </script>
</head>
<body>
    <table border="1">
        <tr>
            <td colspan="2">
                <div>Number: <input id="tbNumber" type="text" />Buffer Size: <input id="tbBufferSize" type="text" /><input type="button" value="Run" onclick="PrintNumbers();" /></div>
            </td>
        </tr>
        <tr>
            <td style="vertical-align:top" width="30%">
                <div id="divRuntime"></div>
            </td>
            <td width="90%">
                <div id="divNumbers"></div>
            </td>
        </tr>
    </table>
</body>
</html>