WebGL rendering a lot of objects - performance issue - javascript

I'm working with the ChemDoodle library for rendering complex chemical structures. Everything works fine, but with big molecules (about 20k atoms) it gets quite slow. I don't have much experience in graphics, but I think it might be because each atom is rendered independently: every time the scene is rerendered, it has to iterate over the array of atoms (the geometry should be buffered instead).
My idea was to create some structure that is calculated once at init time, so that during rendering only the camera changes its position. I don't need to manipulate individual atoms, only use the mouse to rotate/move the whole molecule. Is something like this even possible, and would it improve performance?
I would appreciate knowing whether this is possible (or any other suggestions), ideally in pure WebGL, without Three.js.
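For what it's worth, this is exactly the pattern WebGL favors: pack the whole molecule into one vertex buffer at init time, upload it with gl.STATIC_DRAW, and then change only a view-matrix uniform per frame while the mouse rotates the camera. A minimal sketch of the init-time packing step (packAtoms and the attribute layout are hypothetical names, not ChemDoodle API):

```javascript
// Pack all atom data into one interleaved Float32Array, once at init time.
// Per frame, only a view-matrix uniform changes; the array is never touched again.
function packAtoms(atoms) {
  // atoms: [{x, y, z, radius}, ...]
  const stride = 4;                              // x, y, z, radius per atom
  const data = new Float32Array(atoms.length * stride);
  atoms.forEach((a, i) => {
    data.set([a.x, a.y, a.z, a.radius], i * stride);
  });
  return data;
}

// At init (once):
//   gl.bindBuffer(gl.ARRAY_BUFFER, buf);
//   gl.bufferData(gl.ARRAY_BUFFER, packAtoms(atoms), gl.STATIC_DRAW);
// Per frame (no iteration over atoms):
//   gl.uniformMatrix4fv(uViewLoc, false, viewMatrix);
//   gl.drawArrays(gl.POINTS, 0, atoms.length);
```

The key point is that rotating the molecule then costs one uniform upload and one draw call, regardless of atom count.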

Related

React-like programming without React

I grew up using jQuery and have been following a programming pattern one could call "React-like", but without using React. I would like to know why my graphics performance is nonetheless doing so well.
As an example, my front-end has a table that displays some "state" (in React terms). However, this "state" for me is just kept in global variables. I have an update_table() function, which is the central place where updates to the table happen: it takes the "state" and renders the table with it. The first thing it does is call $("#table").empty() to get a clean start, and then it fills in the rows with the "state" information.
The data (the "state") changes dynamically every 2-3 seconds on the server side. I poll it using Ajax, and once I get the new data/"state", I just call update_table().
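The pattern described above could be sketched roughly like this (update_table is from the question; renderRows and the polling URL are hypothetical):

```javascript
// Global "state" and a central update function, as described above.
let tableState = [];

// Build all row markup as one string, so the DOM is touched in one operation.
function renderRows(state) {
  return state.map(row =>
    "<tr>" + row.map(cell => "<td>" + cell + "</td>").join("") + "</tr>"
  ).join("");
}

// In the browser (jQuery assumed loaded):
//   function update_table() {
//     $("#table").empty();                      // clean start
//     $("#table").append(renderRows(tableState));
//   }
//   setInterval(() =>
//     $.get("/data", d => { tableState = d; update_table(); }), 2000);
```

Building the markup as a single string also means jQuery inserts it into the DOM in one go, rather than row by row.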
This is a perfect problem to solve with React, I know. However, after implementing this simple solution with jQuery, I see that it works just fine (I'm not populating a huge table here; I have at most 20 rows and 5 columns).
I expected to see flickering because of the $("#table").empty() call followed by adding rows one by one inside update_table(). However, the browser (Chrome/Safari) somehow seems to do a very good job of updating only the elements that have actually changed (almost as if the browser had an implementation of virtual-DOM diffing, like React!).
I guess your question is why you can have such a good graphics performance without React.
What you see as a "good graphics performance" really is a matter of definition or, worse, opinion.
The classic Netscape processing cycle (which all modern browsers inherit) has basically four main stages. Here is the full-blown Gecko engine description.
As long as you manipulate the DOM, you're in the "DOM update" stage and no rendering is performed at all. Only when your code yields does the next stage start. Because of the DOM changes, the sizes or positions of some elements may have changed too, so this stage recomputes the layout. After that comes the rendering stage, where the pixels are redrawn.
This means that if your code changes a very large number of elements in the DOM, they are all still rendered together, not incrementally. So the empty() call does not cause a render if you repopulate the table immediately afterwards.
Now, when the rendering stage redraws the pixels of an element like "13872", it may draw them at the exact same position with the exact same colors. No pixel changes color, and thus there is no flickering you could see.
That said, your graphics performance is excellent -- yes. But how did you measure it? You just looked at it and decided it's perfect. Visually, it really may be very, very good, because all you need is to keep the layout stage from sizing/positioning something differently.
But actual performance is not measured with the lazy eyes of us humans (there are many usability studies in that field; let's just say that one frame at 60 Hz takes 16.6 ms, so it is enough to render in less than that). It is measured with an actual metric (updates per second, or whatever). Consider that on older machines, with older browsers and slower graphics cards, your "excellent" performance may look shameful. How do you know it is still good on an old Toshiba tablet with 64 MB of graphics memory?
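To turn "updates per second" into an actual number rather than an eyeball judgment, one simple approach is to collect requestAnimationFrame timestamps and derive a frame rate from them. A small sketch (fps is a hypothetical helper, not a browser API):

```javascript
// Compute frames per second from an array of frame timestamps (in ms),
// such as those passed to requestAnimationFrame callbacks.
function fps(timestamps) {
  if (timestamps.length < 2) return 0;
  const elapsed = timestamps[timestamps.length - 1] - timestamps[0];
  return (timestamps.length - 1) * 1000 / elapsed;
}

// In the browser, collect timestamps like this:
//   const stamps = [];
//   function tick(t) {
//     stamps.push(t);
//     if (stamps.length < 120) requestAnimationFrame(tick);
//     else console.log(fps(stamps));   // ~60 on a 60 Hz display, if nothing stalls
//   }
//   requestAnimationFrame(tick);
```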
And what about scaling? If you have 100x the elements you have now, are you sure it will scale well? What if some data takes more (or less) space and changes the whole layout? All of these edge conditions may not be covered by your simple approach.
A library like React takes into account those cases you may not have encountered yet, and offers a uniform pattern to approach them.
So if you are happy with your solution, you don't need React. I often avoid jQuery because ES5/ES6 is already pretty good these days, and I can just jot down 3-4 lines of code using document.getElementById() and the like. But I realize that on larger projects or in complex cases jQuery is the perfect tool.
Look at React like that: a tool that is useful when you realize you need it, and cumbersome when you think you can do without. It's all up to you :)
When you have something like this:
$("#table").empty()
.html("... new content of the table ... ");
then the following happens:
.empty() removes the content and marks the rendering tree / layout as invalid.
.html() adds new content and marks the rendering tree / layout as invalid.
Marking as invalid, among other things, calls InvalidateRect() (on Windows), which causes the window to receive a WM_PAINT event at some point in the future.
While handling WM_PAINT, the browser calculates the layout and renders the result.
Therefore multiple change requests are collapsed into a single window painting operation.
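The coalescing described above can be modeled in a few lines: an invalidate() that schedules at most one paint, no matter how many times it is called before control returns to the browser. This is a toy model of the mechanism, not browser code (all names hypothetical):

```javascript
// Simulate invalidation coalescing: many invalidate() calls, one paint.
// `schedule` stands in for the browser's deferred WM_PAINT delivery.
function makeInvalidator(paint, schedule) {
  let dirty = false;
  return function invalidate() {
    if (dirty) return;                 // already scheduled: collapse the request
    dirty = true;
    schedule(() => { dirty = false; paint(); });
  };
}
```

Calling invalidate() three times in a row (as .empty() followed by .html() would, in effect) still results in a single paint when the scheduled callback finally runs.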

2D utilities for drawing boxes, text, etc. in Three.js?

Our library ApiNATOMY renders a tree-map to navigate a hierarchical space (like human anatomy, for instance). We used to use nested <div> elements for every tile, but the reflow process was a constant source of slowdown. We've now switched to pure WebGL (Three.js) and got a remarkable performance boost.
However, this lost us the convenience of HTML-like markup. It now takes 10-20 lines of code to draw a box, and another 10-20 to render some text. Keeping things centered when the box is resized is also a manual job, and I don't even dare dream of automatic line breaks. Then there's the problem of THREE.FontUtils being an incredibly slow way to render text. I've found a way to do it with a separate canvas, but that also requires a lot of manual code and is less flexible with respect to sizing.
So my question is: Is there some library of utility classes/functions to make these sorts of 2D jobs in Three.js easier and more robust? Alternatively, any general tips or references?
(Note: There will still be 3D aspects to ApiNATOMY, so a pure 2D renderer is not an option.)
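For reference, the separate-canvas approach mentioned above usually boils down to drawing the text into a 2D canvas and using that canvas as a Three.js texture. A sketch under those assumptions (nextPow2 is a hypothetical helper; older WebGL implementations wanted power-of-two texture sizes):

```javascript
// Round up to the next power of two, for WebGL-friendly texture dimensions.
function nextPow2(n) {
  return Math.pow(2, Math.ceil(Math.log2(n)));
}

// In the browser, with Three.js loaded:
//   const canvas = document.createElement("canvas");
//   const ctx = canvas.getContext("2d");
//   ctx.font = "24px sans-serif";
//   canvas.width  = nextPow2(ctx.measureText(label).width);
//   canvas.height = nextPow2(32);
//   ctx.font = "24px sans-serif";      // resizing resets the context state
//   ctx.fillText(label, 0, 24);
//   const texture = new THREE.Texture(canvas);
//   texture.needsUpdate = true;
//   const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
```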

Improve performance when using GoJS

I have to deal with very big data sources lately and was wondering if there was a way to increase the performance of GoJS.
GoJS processes my data very efficiently, and the TreeView I am trying to make is displayed shortly after the site has fully loaded. Unfortunately, when panning the view, the diagram begins to lag a little.
I am now looking for a way to decrease that lag to a minimum.
I tried fiddling with the layout options, but it did not result in a significant performance increase.
Regarding the diagram: it has "relatively" few nodes (498, to be precise), but my template is unfortunately rather complicated. It has a nested itemArray, which generates rows and the columns inside each row. I also use a slightly modified version of the "LayeredTreeView" model.
These nodes are in 388 invisible groups. Generating the diagram without layout features like crossing reduction only takes a moderate amount of time.
I have also just discovered the performance page of the GoJS introduction. It mentions that complex templates make GoJS slow. Could that be the case here?
Complicated templates take longer to build than simple ones, so the loading time will take longer when the nodes are complex and detailed.
However, once all of the Nodes and Links have been created and initialized in the Diagram, scrolling (a.k.a. panning) should be pretty fast.
Virtualization decreases load time because very few nodes and links need to be created and shown initially. However, virtualization does slow down scrolling and zooming, because nodes and links have to be created as the viewport changes. And as that performance page suggests, implementing virtualization requires a lot more programming work, and it might not even be feasible or faster, depending on the circumstances.
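The core of such a virtualization scheme is viewport culling: only the items intersecting the current viewport exist as real nodes. A generic sketch, not GoJS API (in GoJS, logic like this would live in a "ViewportBoundsChanged" listener, adding and removing node data from the model as the viewport moves, as the official Virtualization samples do):

```javascript
// Generic viewport culling: keep only the items whose rectangles
// intersect the viewport rectangle. All rects are {x, y, w, h}.
function visibleIn(viewport, items) {
  return items.filter(it =>
    it.x < viewport.x + viewport.w && it.x + it.w > viewport.x &&
    it.y < viewport.y + viewport.h && it.y + it.h > viewport.y
  );
}
```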

Javascript - Large number of rendered objects

I'm building a web app based on JavaScript/jQuery and PHP, and I need to render and manage (e.g. let the user drag/drop, resize, etc.) a large number (20,000+) of drawn objects on a web page. I'm looking for recommendations on the approach/libraries to use, mainly to get an idea of how to do this while keeping page performance acceptable.
The objects are simple geometric shapes (rectangles, circles, squares, etc.) that I will need to attach event handlers to and be able to move/resize. Shape attributes will be based on properties of JavaScript objects, and I'll need to change the shapes based on the JavaScript object properties and vice versa.
The canvas area is likely to be quite large (not sure if this will affect performance?), although not all objects will be visible on the page at once; the user must be able to scroll around the full canvas within a div (using overflow, etc.). I have built something for test purposes using jQuery SVGDOM, which works well when I have a couple of hundred objects, but the page grinds to a halt when I go over 1,000 objects.
What I like about SVGDOM is the way it fits nicely with jQuery for referencing the DOM objects (for event handlers, etc.), but I am willing to (try to) develop more complex code if that is what it takes to handle the larger number of objects that SVGDOM doesn't seem happy with from a performance perspective.
Any suggestions for how to do this?
I think you need to look into WebGL, which renders using the GPU. A good framework for that is three.js.
Still, to manage your expectations: making 20k objects interactive is really a big challenge and might need some smart caching system to fake it. If you target mobile devices too, I would say your goal is way too ambitious. I am sometimes already happy if 100 objects run/move smoothly.
I'm taking as the answer to my original question that it is not practical to display/manage the number of objects that I need on a single page, whether with SVG or by drawing directly to the canvas.
So my approach must be to reduce the number of objects displayed at any given time -- now I just need to figure out the best way to do that...
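One way to reduce the live object count while keeping interactivity is to draw everything to a single canvas and keep the 20k shapes as plain JavaScript objects, using a spatial hash grid so that hit-testing for drag/resize never scans all of them. This technique is not from the thread; SpatialGrid is a hypothetical sketch:

```javascript
// Spatial hash grid: find candidate shapes under the mouse by checking
// only the 3x3 cell neighborhood around the cursor, not all 20k shapes.
class SpatialGrid {
  constructor(cellSize) {
    this.cellSize = cellSize;
    this.cells = new Map();            // "cx,cy" -> array of objects
  }
  cellOf(x, y) {
    return [Math.floor(x / this.cellSize), Math.floor(y / this.cellSize)];
  }
  insert(obj) {                        // obj: {x, y, ...}
    const [cx, cy] = this.cellOf(obj.x, obj.y);
    const key = cx + "," + cy;
    if (!this.cells.has(key)) this.cells.set(key, []);
    this.cells.get(key).push(obj);
  }
  near(x, y) {                         // candidates in the 3x3 neighborhood
    const [cx, cy] = this.cellOf(x, y);
    const out = [];
    for (let dx = -1; dx <= 1; dx++)
      for (let dy = -1; dy <= 1; dy++)
        out.push(...(this.cells.get((cx + dx) + "," + (cy + dy)) || []));
    return out;
  }
}
```

A mousedown handler would then call grid.near(mouseX, mouseY) and run the precise hit-test only on that handful of candidates.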

Canvas vs CSS3 for Web Application

A very common question, but almost every comparison I've seen focuses mainly on games with a lot of interaction.
What I'll be working on is a web application that manipulates objects one at a time. For example, an object can be either an image or a text; it can be replaced, resized, rotated, zoomed, and deleted.
If the manipulations were applied to many objects at once, I know canvas would be the better choice, but here a manipulation can only be applied to one object at a time. Each container will have at most about 30 objects in it, and I'll be working with multiple containers (maybe around 20) that are hidden or shown depending on the interaction.
The question is whether to use canvas or CSS3. What I'm weighing is performance and the complexity of the app.
I don't have a lot of experience with canvas, but as far as I know, if you use it together with requestAnimationFrame, the performance is pretty similar to CSS animations. You should also consider that CSS animations are very limited when it comes to complex animations.
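With canvas, "resize/rotate/zoom one object at a time" means redrawing with a new transform on each frame. The math involved is just a rotation plus a scale; a sketch (transformPoint is a hypothetical helper showing the same math a ctx.setTransform-based redraw applies):

```javascript
// Apply the manipulations from the question (resize/rotate) to a point:
// a rotation by `angle` radians followed by a uniform `scale`.
function transformPoint(p, { scale = 1, angle = 0 } = {}) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return { x: scale * (c * p.x - s * p.y), y: scale * (s * p.x + c * p.y) };
}

// Canvas redraw per interaction, driven by requestAnimationFrame:
//   function draw() {
//     ctx.clearRect(0, 0, canvas.width, canvas.height);
//     ctx.setTransform(scale * Math.cos(a),  scale * Math.sin(a),
//                      -scale * Math.sin(a), scale * Math.cos(a), x, y);
//     ctx.drawImage(img, 0, 0);
//   }
//   requestAnimationFrame(draw);
```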
