Ways to speed up rendering / plotting performance on plotly.js - javascript

I'm looking for all kinds of ways to speed up the rendering of plots in JavaScript, maybe we can collect some stuff here?
I'm plotting scientific data, around 500 points + errors per trace (normally between 1 and 4 traces). Doesn't sound like too much, does it? Now the data changes according to user input - a slider - and my goal is to update the plots as smoothly as possible while dragging the slider. That's where I realized I'm not satisfied with the drawing speed.
Since the y data (+ errors) can change completely, I use Plotly.redraw - in my case this takes about 30-40 ms on Chrome. Seems OK, but considering that I want to present 5-15 plots at the same time, this often adds up to half a second or more. 1 or 2 fps ain't exactly what one would call "smooth". Plus, that's on Chrome; it looks way worse in other browsers.
So I wonder if redrawing is the only option, and if so, how to speed it up? Any ideas? I tried type: 'scattergl' and while this seems to be a big boost (down to 10-15 ms), it only works like a charm for small plots with a single trace; I can't manage to get it to work for all 10-15 plots - it throws multiple different errors, not worth listing since they are always different on different machines. So my conclusion is that the scattergl interface isn't as mature as the SVG one, but maybe I'm using it wrong?
Sorry for the long text, but now I would be really glad to hear some ideas on how to speed things up.
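
For reference, here is a minimal sketch of the slider-driven update described above. The element ids, selectors, and the computeY/computeErrors helpers are made up for illustration; Plotly.restyle is shown as one possible lighter-weight alternative to a full Plotly.redraw, since it only touches the listed attributes of the listed traces.

```javascript
// Minimal sketch of the slider-driven update described above.
// 'param-slider', '.science-plot', computeY() and computeErrors() are
// hypothetical names, not from the original setup.
const slider = document.getElementById('param-slider');
const plots = document.querySelectorAll('.science-plot');

slider.addEventListener('input', () => {
  const t = Number(slider.value);
  plots.forEach((gd, i) => {
    const newY = computeY(i, t);        // recompute the ~500 y values
    const newErr = computeErrors(i, t); // recompute the error bars
    // Plotly.restyle only updates the listed attributes on the listed traces,
    // which can be cheaper than a full Plotly.redraw of the figure.
    Plotly.restyle(gd, { y: [newY], 'error_y.array': [newErr] }, [0]);
  });
});
```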

Related

Questions around VivaGraph WebGL based rendering

I have been using VivaGraphs for network analysis, but my knowledge is very rusty around JavaScript and concepts of SVG and WebGL in particular. I have been able to create nice networks using both SVG and WebGL and need a few pointers from you:
I feel WebGL is way faster than SVG when it comes to rendering large networks. I tried it on a network with 80k edges and 20k nodes. Am I right in this assumption?
SVG makes it far easier to customize the appearance of nodes and edges; WebGL is far too restrictive (or maybe that's my lack of knowledge). Do you agree that SVG gives me far more flexibility in customization?
One thing I noticed is that I need to pause my graph after some time, otherwise the clusters in my graph keep drifting. Is there any way I can restrict my graph coordinates so that it never goes off screen?
One major issue I faced with WebGL was that when I paused the renderer, none of my code worked (events for node hover, click, etc.). The moment I resumed it, everything worked again. This is not the case with SVG: my hover/click functions on nodes work even if the renderer is paused. This is a big showstopper in my case. Do you think there is a way to counter this?
Please open an issue on the GitHub repository or share a link with the broken WebGL inputs - I'll be happy to take a look and fix the problem.
As for your intuition: yes, WebGL is much faster, but it requires more effort to work with.
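
A hedged sketch of the WebGL setup under discussion, assuming the VivaGraphJS example API (Viva.Graph.graph, webglGraphics, webglInputEvents, renderer.run/pause); the names may need checking against the version in use:

```javascript
// Hedged sketch of a WebGL VivaGraph setup; method names follow the
// VivaGraphJS examples and may differ between versions.
var graph = Viva.Graph.graph();
graph.addLink(1, 2); // toy data standing in for the 20k-node / 80k-edge network

var graphics = Viva.Graph.View.webglGraphics();

// webglInputEvents wires hover/click handlers for the WebGL renderer;
// the question above reports that these stop firing while the renderer
// is paused, which is the behaviour being asked about.
var events = Viva.Graph.webglInputEvents(graphics, graph);
events.click(function (node) { console.log('clicked', node.id); });

var renderer = Viva.Graph.View.renderer(graph, { graphics: graphics });
renderer.run();

// Stop the force-directed layout from drifting once it has roughly settled:
setTimeout(function () { renderer.pause(); }, 10000);
```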

dc.js / crossfilter performance issue 12,000+ rows of CSV data

I'm having some performance issues using dc.js and crossfilter. I'm creating a set of graphs displaying different dimensions of some cycling data (code here, data here, viewable here). The charts render after a second or two, but the main performance issues occur when clicking any of the graphs - the transition animations kind of "snap" after a delay, and it's a bit jarring. I've also noticed that just removing the empty line chart at the top of the page causes the three remaining graphs to perform much better, with transitions returning to normal.
I've looked at a few similar questions such as this one, but it doesn't necessarily seem applicable since I'm not splitting by multiple dimensions at one time. Is 12,000 records just getting toward the upper end of what crossfilter can handle? The file is only about 1.4 MB, so it seems a little surprising that there would be issues at this size, but maybe that just demonstrates a lack of understanding on my part. I would greatly appreciate any pointers on this one, as I'm stumped. Thanks for reading.
Usually this means something is slowing down the Crossfilter updates, resulting in the browser freezing until most of the transition is already done.
The only thing that sticks out at me is that you have 2 variable declarations in the accessor function for your dayOfWeek dimension. It would be better to define that property up-front when you load your data.
The only other possible problem I see is the Date object in your data and the dimension defined based on it. These types of complex objects can slow things down quite a bit (and the d3.js date parsing isn't extremely fast), but I don't see that showing up as a major problem in the Chrome profiler, so I don't think that's what's slowing you down here.
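
A hedged illustration of the "define the property up-front" suggestion; the field names and the d3 v3-style date parser are assumptions, not taken from the linked code:

```javascript
// Hedged sketch: compute derived values once at load time instead of inside
// the dimension accessor, so crossfilter does no extra work on every filter.
var parseDate = d3.time.format('%Y-%m-%d %H:%M:%S').parse; // d3 v3-style parser

data.forEach(function (d) {
  d.date = parseDate(d.dateString); // parse once at load time
  d.dayOfWeek = d.date.getDay();    // precompute the derived value
});

var ndx = crossfilter(data);

// The accessor now only reads a plain property.
var dayOfWeekDim = ndx.dimension(function (d) { return d.dayOfWeek; });
```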

(Raphael) Alternative to free_transform.js? (Slow Animation)

I have a web app that creates rows and columns of holes. Each hole can also have text and a path or two associated with it. I store all of these in a set. The user also has the option of moving this set using free_transform.js. It works great if I have fewer than 50 holes (fewer in Firefox, for some reason), but the problem really shows itself when the user creates a grid of 100 or more holes.
The functionality works fine; it's just that the animation is slow to catch up with my mouse movement. At worst I would say the delay is 2 seconds, which as a client I would find really annoying, and of course we don't want to annoy the client.
My question is: does anyone have experience using Raphael/free_transform in this context? Is there a better solution? Are the tools I am using insufficient for my goal?
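
For context, a hedged sketch of the setup described in the question (a grid of holes collected into one Raphael set and handed to the free_transform plugin); the dimensions and the paper.freeTransform call are assumptions based on the plugin's usual usage:

```javascript
// Hedged sketch of the described setup; sizes and the freeTransform call
// signature are assumptions, not taken from the original app.
var paper = Raphael('canvas', 800, 600);
var holes = paper.set();

for (var row = 0; row < 10; row++) {
  for (var col = 0; col < 10; col++) { // 100 holes: where the slowdown appears
    var x = 40 + col * 30;
    var y = 40 + row * 30;
    holes.push(paper.circle(x, y, 10));
    holes.push(paper.text(x, y, row + ',' + col));
  }
}

// Dragging the handle transforms every SVG element in the set individually,
// which is the per-element work that gets slow at 100+ holes.
paper.freeTransform(holes);
```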

Browser render engines: which strategy would be best for a huge image background

I'm planning a project based on a kind of 'interactive world' experience, where the browser's viewport moves around to show many different graphic environments; it must all be fluid, with no page-to-page breaks. The project is in JS/HTML5/CSS3.
The problem this poses is that the entire 'world' will be perhaps 8,000-15,000 px square (it also rotates, and has various PNG alpha overlays on top of it).
I was going to run some tests, but there are so many ways to approach this and I'm looking for the most fluid one. My knowledge of the internal workings of browser render engines isn't great, so I thought I'd ask around.
I can't use the 'tiling' approach that Google Maps uses, as it's not fluid enough (too blocky); also, when rotating, it's going to create headaches doing the math/transforms to work out which tiles to load at which angles. So here are the 2 choices I have boiled it down to:
(1) The "Huge" image approach
The benefit of this is that once it's loaded everything is easy; the downside is that it's going to be huge and I cannot show an incremental preloader, as the image queue will essentially be 2 images (the overlay and the huge image).
(2) Image segments
The benefit is that I can show a preloader with an image queue at 10% increments (10 images).
Question:
Is the 2nd approach going to put a more painful overhead on the browser's rendering engine because there are 9 separate sets of calculations being done, or do browser engines simply see them as one painted area once it's initially rendered and then update it as a whole? Or does the browser have to run the same transform/repaint process 9 times each time the DOM is changed (rotated, etc.)?
Thanks very much.
LOTS of tests later, the result: use one big image; it seems to be less for the browser to deal with.
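
For illustration, a small sketch of how either approach might be driven: the viewport movement becomes a single transform on the parent container, whether it holds one huge image or 9-10 tiles (the class name and numbers are made up):

```javascript
// Illustrative sketch only (class name, sizes and angle are assumptions):
// the container holds either one huge <img> or a grid of tile <img>s, and
// the viewport movement is a single transform applied to that container.
const world = document.querySelector('.world');

function moveViewport(x, y, angleDeg) {
  // One transform on the parent; the children are not transformed individually.
  world.style.transform =
    'translate(' + -x + 'px, ' + -y + 'px) rotate(' + angleDeg + 'deg)';
}

moveViewport(4000, 2500, 15);
```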

HTML5 canvas performance on small vs. large files

I seem to be experiencing varying performance with an HTML5 canvas depending on the memory size of the page... perhaps the number of images (off-screen canvases) that are loaded. How do I best locate the source of the performance problem? Or does anyone know whether there is in fact a performance issue when a lot of data is loaded, even if it isn't all being used at once?
Here's an example of good performance. I have a relatively simple map. It's between 700 and 800 KB. You can drag to scroll around this map relatively smoothly.
There's another file (which you may not want to look at due to its large size).
It's about 16 MB and contains dozens, maybe on the order of a hundred, images/canvases. It draws a smaller view, so it should be faster. But it isn't: many of the maps lag quite severely compared to the simpler demo.
I could try to instrument the code to start timing things, but I have not done this in JavaScript before, and could use some help. If there are easier ways to locate the source of performance problems, I'd be interested.
In Google Chrome and Chromium, you can open the developer tools (Tools -> Developer Tools) and then click on "Profiles". Press the record circle at the bottom, let the canvas redraw, and then click the circle again. This gives you a profile that shows how much time was spent where.
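
For the manual instrumentation mentioned in the question, here is a minimal sketch using performance.now() and console.time(); drawMap() is just a stand-in for whatever redraws the canvas:

```javascript
// Minimal timing sketch; drawMap() is a hypothetical redraw function.
function timedDraw() {
  var t0 = performance.now();
  drawMap();
  var t1 = performance.now();
  console.log('redraw took ' + (t1 - t0).toFixed(1) + ' ms');
}

// console.time/timeEnd give the same information with less code:
console.time('redraw');
drawMap();
console.timeEnd('redraw');
```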
I've been working on some complex canvas stuff where rendering performance mattered to me.
I wrote some test cases on jsPerf and came to the conclusion that, as a rule of thumb, a source offscreen canvas should never be more than 65,536 pixels (i.e. 256 × 256).
I haven't yet come to a conclusion about why this is, but likely a data structure or data type has to be changed when dealing with large source canvases.
putImageData showed similar results.
The destination canvas size didn't seem to matter.
Here are some tests I wrote that explore this performance limitation:
http://jsperf.com/magic-canvas/2
http://jsperf.com/pixel-count-matters/2
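
A hedged sketch of that rule of thumb: splitting one large offscreen source into tiles of at most 65,536 pixels (here 256 × 256) before drawing them with drawImage (variable and function names are made up):

```javascript
// Hedged sketch: break a large offscreen canvas into <= 65,536-pixel tiles
// so each drawImage call reads from a small source.
var TILE = 256;

function makeTiles(source) {
  var tiles = [];
  for (var y = 0; y < source.height; y += TILE) {
    for (var x = 0; x < source.width; x += TILE) {
      var tile = document.createElement('canvas');
      tile.width = Math.min(TILE, source.width - x);
      tile.height = Math.min(TILE, source.height - y);
      // Copy the corresponding region of the large source into the small tile.
      tile.getContext('2d').drawImage(
        source, x, y, tile.width, tile.height, 0, 0, tile.width, tile.height);
      tiles.push({ canvas: tile, x: x, y: y });
    }
  }
  return tiles;
}

// Later, draw each small tile onto the visible canvas:
// tiles.forEach(function (t) { ctx.drawImage(t.canvas, t.x, t.y); });
```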
