Opening 500+ nodes at once in d3.js - javascript

Currently I am trying to expand a d3.js tree which contains over 100,000 nodes. Many leaves exist under multiple parents, as they fit multiple sections/items/regions. A search performed by the user results in the tree opening up to every leaf with that node ID, which on rare occasions can mean the graph tries to open up to 2000 leaf nodes at once. So far the only way I have found to do this without crashing Chrome is the following setInterval code.
var timeout = setInterval(function () {
    // i is assumed to be initialised to 0 before the interval starts
    for (var j = i; j < i + 10; j++) {
        makeEl(d[j]);
        Search.rules += (j + 1) + ") " + Graph.findNode(d[j]) + "<br><br>";
        highlightPathTo(d[j].id);
        if (j >= d.length - 1) {
            // When all of the elements have been iterated through.
            clearInterval(timeout);
            highlight.selected = d;
            $('#highlights').removeClass('empty');
            break;
        }
    }
    i += 10;
}, 500);
However, this takes minutes and is very laggy. Is there any other way to accomplish opening this number of nodes in one go that would result in quicker completion?

Not likely. JavaScript and browsers can't magically transcend the limits of physics (or your computer). Browsers have come a long way in handling huge documents, but there are limits. I don't know for sure how much memory each tree node needs, but if each of them needs just 1 KB, we're talking about 100 MB of raw data which needs to be rendered on the screen. If each node takes just 10 ms to render, simply drawing the page will take 1000 seconds.
So my guess, without looking more closely at the problem, is that you can't do it. And you probably shouldn't: the human brain isn't able to process that much information, nor is there a computer screen which could display it all. Think harder about what you really want to achieve and find a better representation.
You can dump a ton of data on your poor user, but that will just drown them. Find a way to present just the important bits, the few gems under the ton of garbage.
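In that spirit, here is a minimal sketch of the "present just the important bits" idea, reusing the helpers from the question (makeEl, Graph.findNode, highlightPathTo, Search, highlight) and assuming a hypothetical MAX_VISIBLE cap: only the first couple of hundred matches are actually expanded, and the rest are reported as a count instead of being rendered.
var MAX_VISIBLE = 200; // hypothetical cap on how many matches get expanded

function expandMatches(d) {
    var shown = Math.min(d.length, MAX_VISIBLE);
    for (var j = 0; j < shown; j++) {
        makeEl(d[j]);
        Search.rules += (j + 1) + ") " + Graph.findNode(d[j]) + "<br><br>";
        highlightPathTo(d[j].id);
    }
    if (d.length > shown) {
        // Summarise the remainder instead of trying to render thousands of nodes.
        Search.rules += "...and " + (d.length - shown) + " more matches not expanded.<br><br>";
    }
    highlight.selected = d.slice(0, shown);
    $('#highlights').removeClass('empty');
}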

Related

Most efficient way to fill a typed array

I have a lot of points, each represented by an id and a position as lat/long coordinates. Every 10 seconds a point is able to move and its position is updated. There may be at most 100,000 points, but this number can also vary over time.
I need to send all this data to a user when they connect to my server.
To achieve this, I put them in a typed array like so:
var data = new Float32Array(points.length * 3);
for (var i = 0; i < points.length; i++) {
    data[i * 3]     = points[i].id;
    data[i * 3 + 1] = points[i].position.lat;
    data[i * 3 + 2] = points[i].position.long;
}
The thing is, doing this takes about 0.9 milliseconds on my computer, which isn't bad but is not quite satisfying. The surprising thing is that when the three assignment lines inside the loop (lines 3-5 of the snippet) are commented out, the algorithm is of course a lot faster; but once any one of them is uncommented, it doesn't change much whether the other two are commented or not, and the time stays very similar.
I would like to know if I am doing something wrong here or if there is a faster way to achieve what I want.
Thanks in advance,
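For what it's worth, a minimal sketch of how one could time just the packing loop in isolation with performance.now(), assuming the same points array of { id, position: { lat, long } } objects as in the question:
var t0 = performance.now();
var data = new Float32Array(points.length * 3);
for (var i = 0; i < points.length; i++) {
    data[i * 3]     = points[i].id;
    data[i * 3 + 1] = points[i].position.lat;
    data[i * 3 + 2] = points[i].position.long;
}
var t1 = performance.now();
console.log('packed ' + points.length + ' points in ' + (t1 - t0).toFixed(3) + ' ms');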

timing in Javascript acting strange

I am working in JS to compare the performance of a brute-force O(n^2) algorithm and Barnes-Hut, which is O(n log n).
In my code right now I am doing the same thing with the same data five times, like this:
let data = [/* array of 100 data points */];
let count = 5;
while (count > 0) {
    count--;
    let iterationCount = 5 - count; // 1, 2, ... 5
    console.time('brute force time ' + iterationCount);
    bruteForce(data);               // the O(n^2) pairwise version
    console.timeEnd('brute force time ' + iterationCount);
    console.time('BH time ' + iterationCount);
    barnesHut(data);                // the Barnes-Hut version
    console.timeEnd('BH time ' + iterationCount);
}
Though the code is the same each time, console.time is showing different and worrying results.
The difference between BH and brute force is not consistent across iterations,
and it is not predictable from one run of the code to the next.
One more thing to notice is that every time I run the code, the brute force and Barnes-Hut timings are almost the same in the first iteration, whereas after the first it shows that Barnes-Hut is way better.
PS: everything in both functions is scoped to each function (i.e. local variables), and both share the same data in each iteration, so the code is identical in every iteration!
Can anyone help me understand why I am getting results like this?
I'd guess these times are so tiny that random noise affects the results - your machine doing other processing, perhaps. I'd increase the data size, and therefore the processing time; maybe you'd see more consistent results then.
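As a rough sketch of that advice, one could scale the input up and average several timed runs; performance.now() tends to be easier to post-process than reading individual console.time lines. Here bruteForce and barnesHut stand in for the two implementations above, and makeRandomBodies is a hypothetical generator for a larger data set:
function averageMs(fn, data, runs) {
    // Run fn(data) `runs` times and return the mean duration in milliseconds.
    let total = 0;
    for (let r = 0; r < runs; r++) {
        const t0 = performance.now();
        fn(data);
        total += performance.now() - t0;
    }
    return total / runs;
}

const bigData = makeRandomBodies(10000); // hypothetical: build a much larger input
console.log('brute force:', averageMs(bruteForce, bigData, 10), 'ms');
console.log('Barnes-Hut:', averageMs(barnesHut, bigData, 10), 'ms');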

Did I get the number of operations per second this way?

Look at this code:
function wait(time) {
    let i = 0;
    let a = Date.now();
    let x = a + (time || 0);
    let b;
    while ((b = Date.now()) <= x) ++i;
    return i;
}
If I run it in a browser (particularly Google Chrome, but I don't think it matters) as wait(1000), the machine will obviously freeze for a second and then return the counted value of i.
Let's say it is 10,000,000 (I'm getting values close to that). The value varies every run, so let's take an average number.
Did I just get the current number of operations per second of my machine's processor?
Not at all.
What you get is the number of loop cycles completed by the Javascript process in a certain time. Each loop cycle consists of:
Creating a new Date object
Comparing two Date objects
Incrementing a Number
Incrementing the Number variable i is probably the least expensive of these, so the function is not really reporting how long the increment itself takes.
Aside from that, note that the machine is doing a lot more than running a Javascript process. You will see interference from all sorts of activity going on in the computer at the same time.
When running inside a Javascript process, you're simply too far away from the processor (in terms of software layers) to make that measurement. Beneath Javascript, there's the browser and the operating system, each of which can (and will) make decisions that affect this result.
No. You can get the number of language operations per second, though the actual number of machine operations per second on a whole processor is more complicated.
Firstly, the processor is not wholly dedicated to the browser, so it is most likely switching back and forth between prioritized processes. On top of that, memory access is abstracted away and the processor spends extra operations managing memory (page flushing, etc.), which is not going to be very transparent to you at any given time. On top of that, physical properties mean that the real clock rate of the processor is dynamic... You can see it's pretty complicated already ;)
To really calculate the number of machine operations per second you need to measure the clock rate of the processor and multiply it by the number of instructions per cycle the processor can perform. Again this varies, but the manufacturer's specs will likely be a good enough estimate :P
If you wanted to use a program to measure this, you'd need to somehow dedicate 100% of the processor to your program and have it run a predictable set of instructions with no other hang-ups (like memory management). Then you would need to include the number of instructions it takes to load the program's instructions into the code caches. This is not really feasible, however.
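As a rough, purely hypothetical illustration of that calculation: a 3 GHz core retiring around 4 instructions per cycle would peak at roughly 3 × 10^9 × 4 = 1.2 × 10^10 machine instructions per second per core, which is orders of magnitude more than the roughly ten million loop iterations per second the wait() function reports.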
As others have pointed out, this will not help you determine the number of operations the processor performs per second, for the reasons the prior answers give. I do, however, think that a similar experiment could be set up to estimate the number of operations executed by the JavaScript interpreter running in your browser. For example, take a function factorial(n), an operation that runs in O(n). You could execute factorial(100) repeatedly over the course of a minute.
// O(n) helper assumed by this answer: n * (n - 1) * ... * 1
function factorial(n) {
    let result = 1;
    for (let i = 2; i <= n; i++) result *= i;
    return result;
}

function test() {
    let start = Date.now();
    let end = start + 60 * 1000;          // run for one minute
    let numberOfExecutions = 0;
    while (Date.now() < end) {
        factorial(100);
        numberOfExecutions++;
    }
    // ~100 operations per call, spread over 60 seconds
    return numberOfExecutions * 100 / 60; // estimated operations per second
}
The idea here is that factorial is by far the most time-consuming function in the code, and since factorial runs in O(n) we know factorial(100) is approximately 100 operations. Note that this will not be exact and that larger numbers will make for better approximations. Also remember that this estimates the number of operations executed by your interpreter, not by your processor.
There is a lot of truth in all the previous comments, but I want to invert the reasoning a little bit, because I believe it is easier to understand that way.
I believe the fairest way to calculate it is with the most basic loop possible, not relying on any dates or function calls inside it, and instead working the values out afterwards.
You will see that the smaller the run, the bigger the share taken by the initial overhead: it takes a small amount of time to start and finish each function call, but at a certain point the results all converge on a number that is close enough to be considered how many operations per second JavaScript can run.
My example:
const oneMillion = 1_000_000;
const tenMillion = 10_000_000;
const oneHundredMillion = 100_000_000;
const oneBillion = 1_000_000_000;
const tenBillion = 10_000_000_000;
const oneHundredBillion = 100_000_000_000;
const oneTrillion = 1_000_000_000_000;

function runABunchOfTimes(times) {
    console.time('timer');
    for (let i = 0; i < times; ++i) {}
    console.timeEnd('timer');
}
I tried this on a 2020 MacBook that already has a lot of load on it, with many processes running; these were my results:
At the very end I take the time the console showed for the run and divide the number of runs by it. The oneTrillion and oneBillion runs are virtually the same; however, when it goes down to oneMillion and 1000 you can see that they are not as performant, due to the initial overhead of setting up the for loop in the first place.
We usually try to stay away from O(n^2) and slower functions exactly because we do not want to hit that maximum. If you were to perform a find inside a map over an array of all the cities in the world (around 10_000 according to Google, I haven't counted), you would already reach 100_000_000 iterations, and each of them would certainly not be as simple as just iterating over nothing like in my example. Your code would then take minutes to run, but I am sure you are aware of this and that is why you posted the question in the first place.
Calculating how long it would take is tricky, not only because of the above but also because you cannot predict which device will run your function. Nowadays I can open it on my TV, my watch or a Raspberry Pi, and none of them would be nearly as fast as the computer I am writing these functions on. Still, if I were to benchmark a device, I would use something like the function above, since it is the simplest loop operation I could think of.
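A small variation on the same idea, as a sketch: measure the empty loop with performance.now() instead of reading the console output, so the iterations-per-second figure can be computed directly (the exact numbers will of course depend entirely on the machine and on what else it is doing at the time):
function iterationsPerSecond(times) {
    const t0 = performance.now();
    for (let i = 0; i < times; ++i) {} // the same empty loop as above
    const elapsedMs = performance.now() - t0;
    return times / (elapsedMs / 1000); // loop iterations per second
}

console.log(iterationsPerSecond(oneBillion));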

something is not right with console.time();

I am testing my JavaScript's speed using the console.time() method, so it logs how long a function takes to run on page load.
if (window.devicePixelRatio > 1) {
    var images = $('img');
    console.time('testing');
    var imagesObj = images.length;
    for (var i = 0; i < imagesObj; i++) {
        var lowres = images.eq(i).attr('src'),
            highres = lowres.replace(".", "_2x.");
        images.eq(i).attr('src', highres);
    }
    console.timeEnd('testing');
}
But every time I reload the page it gives me a pretty different value. Should it have this behaviour? Shouldn't it give me a consistent value?
I have loaded it 5 times in a row and the values are the following:
5.051 ms
4.977 ms
8.009 ms
5.325 ms
6.951 ms
I am running this on XAMPP and in Chrome btw.
Thanks in advance
console.time/timeEnd is working correctly, and the timing does indeed fluctuate by a tiny amount.
However, when dealing with such small numbers - the timings are all less than 1/100 of a second! - the deviation is irrelevant and can be influenced by a huge number of factors.
There is always some variation; it could be caused by a number of things:
the server responding slightly slower (that can also block other parts of the browser)
your processor is doing something in the mean time
your processor clocked down to save power
random latency in the network
browser extension doing something in the background
Also, Firefox has a system that intelligently tries to optimize JavaScript execution; in most cases it will perform better, but the effect is somewhat random.

How to optimally render large amounts of DOM elements using javascript?

On a web page I have a quite large list of items (say, product cards, each containing an image and text) - about 1000 of them. I want to filter this list on the client (only the items which are not filtered away should be shown), but there is a rendering performance problem. If I apply a very narrow filter so that only 10-20 items remain and then cancel it (so all items have to be shown again), the browser (Chrome on a very nice machine) hangs for a second or two.
I re-render the list using the following routine:
for (var i = 0, l = this.entries.length; i < l; i++) {
    $(this.cls_prefix + this.entries[i].id).css("display", this.entries[i].id in dict ? "block" : "none");
}
dict is a hash of the allowed items' ids.
This function itself runs instantly; it's the rendering that hangs. Is there a more optimal way to re-render than changing the "display" property of the DOM elements?
Thanks for your answers in advance.
Why load 1000 items? First you should consider something like pagination, showing around 30 items per page. That way, you are not loading that much.
Then, if you are really into that "loop a lot of items" approach, consider using timeouts. I had a demo once that illustrates the consequences of looping: it blocks the UI and will cause the browser to lag, especially on long loops. But when using timers, you delay each iteration, allowing the browser to breathe once in a while and do something else before the next iteration starts - see the batching sketch after this answer.
Another thing to note is that you should avoid repaints and reflows, which means avoiding moving elements around and changing styles more often than necessary. Also, another tip is to remove from the DOM the nodes that are not actually visible. If you don't need to display something, remove it; why waste memory on something that isn't actually seen?
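A minimal sketch of that timer-based batching idea, reusing this.entries, cls_prefix and dict from the question and assuming a hypothetical BATCH_SIZE: each timeout handles one slice of the list, so the browser gets a chance to repaint and respond between batches.
var BATCH_SIZE = 100; // hypothetical chunk size
var self = this;      // the object holding entries and cls_prefix, as in the question

function renderBatch(start) {
    var end = Math.min(start + BATCH_SIZE, self.entries.length);
    for (var i = start; i < end; i++) {
        $(self.cls_prefix + self.entries[i].id).css("display", self.entries[i].id in dict ? "block" : "none");
    }
    if (end < self.entries.length) {
        // Yield to the browser before handling the next slice.
        setTimeout(function () { renderBatch(end); }, 0);
    }
}

renderBatch(0);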
You can use a setTimeout trick that defers each iteration to a later turn of the event loop instead of doing all the work in one blocking pass, which avoids freezing the client. I suspect that the total processing, from start to finish, would take about the same amount of time, but at least this way the interface can still be used and the result is a better user experience:
var self = this; // preserve the outer `this` for use inside the callbacks
for (let i = 0, l = this.entries.length; i < l; i++) {
    // `let` gives each scheduled callback its own copy of i
    setTimeout(function () {
        $(self.cls_prefix + self.entries[i].id).css("display", self.entries[i].id in dict ? "block" : "none");
    }, 0);
}
Dude - the best way to handle "large amounts of DOM elements" is to NOT do it on the client, and/or DON'T use Javascript if you can avoid it.
If there's no better solution (and I'm sure there probably is!), then at LEAST partition your working set down to what you actually need to display at that moment (instead of the whole, big, honkin' enchilada!)
