Moving object performance - Three.js - JavaScript

Context:
I'm working on a fairly simple THREE.js project which is, I believe, already reasonably well optimized.
I'm using a WebGLRenderer to display lots of Bode plots extracted from an audio signal every 50 ms. This works well, but obviously the more Bodes I display, the laggier it gets. In addition, the Bodes move at constant speed, leaving space for new ones to be displayed.
I'm now at the point where I've implemented every "basic" optimization I found on the internet, and I manage a constant 30 fps with about 10,000,000 lines displayed, on fairly weak hardware (an nVidia GT 210 and a Core i3 2100).
Note also that I'm not using any lights, reflections, etc. Only basic lines =)
As this is a work project, I'm not allowed to show any screenshots or code, sorry...
Current implementation:
I'm using an array to store all my Bodes, each of which is displayed via a THREE.Line.
FYI, 2000 THREE.Line objects are currently in use.
Once a Bode has been displayed and moved for 40 s, it is deleted and its THREE.Line is re-used for another one. Note that to move them, I modify the THREE.Line.position property.
Note also that I have already disabled matrix auto-updates on my scene and objects, since I do them manually (thanks for pointing that out, Volune).
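A minimal sketch of that manual-matrix handling, assuming SPEED and delta come from the surrounding animation loop:

line.matrixAutoUpdate = false;        // done once, when the line is created

function moveLine(line, delta) {
  line.position.x -= SPEED * delta;   // constant-speed scroll
  line.updateMatrix();                // recompute the matrix only because we moved it
}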
My question:
Does modifying THREE.Line.position trigger heavy recalculation in the renderer? Or does three.js know that my object has not otherwise changed, and avoid that work?
In other words, I'd like to know whether re-rendering an object that was merely translated is heavier than leaving it alone entirely, without updating its matrix, etc.
Is there any sort of low-level optimization in three.js for rendering the same objects many times? And is that optimization cancelled when I move an object?
If so, I have another approach in mind: using only two big meshes that follow each other, but this means merging/deleting parts of their geometries every frame... Might that be better?
Thanks in advance.

I found in the sources (here and here) that mesh matrices are updated every frame whether or not the position changed.
This means that the position modification does not itself induce heavy calculation. It also means that a lot of matrices are updated and a lot of uniforms are sent to the graphics card every frame.
I would suggest trying your idea with one or two big meshes. This should reduce the JavaScript computations internal to THREE.js, and the only big communications with the graphics card would be the big buffers themselves.
Also note that WebGL has a bufferSubData function (MSDN documentation) to update parts of a buffer, but it does not seem to be usable from THREE.js yet.
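For reference, a rough sketch of the one-big-mesh idea using the BufferGeometry API (the layout constants are assumptions, and older THREE.js revisions spell setAttribute as addAttribute):

// All Bode traces share one LineSegments object, so the renderer handles a
// single matrix and a single draw call. POINTS_PER_BODE is an assumed size.
const MAX_BODES = 2000;
const POINTS_PER_BODE = 128;
const positions = new Float32Array(MAX_BODES * POINTS_PER_BODE * 2 * 3);

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
const lines = new THREE.LineSegments(geometry, new THREE.LineBasicMaterial());
lines.matrixAutoUpdate = false;
scene.add(lines);

// Replacing one expired Bode means overwriting its slice and re-uploading:
function replaceBode(index, newPoints /* Float32Array of xyz triples */) {
  positions.set(newPoints, index * POINTS_PER_BODE * 2 * 3);
  geometry.attributes.position.needsUpdate = true; // THREE.js re-sends the whole buffer
}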

Related

React-like programming without React

I grew up using jQuery and have been following a programming pattern that one could call "React-like", but without React. I would like to understand why my rendering performance is so good nonetheless.
As an example, in my front-end I have a table that displays some "state" (in React terms). However, this "state" for me is just kept in global variables. I have an update_table() function which is the central place where updates to the table happen. It takes the "state" and renders the table with it: the first thing it does is call $("#table").empty() to get a clean start, and it then fills in the rows with the "state" information.
I have some dynamically changing data (the "state") on the server side, updating every 2-3 seconds, which I poll using Ajax; once I get the data/"state", I just call update_table().
This is the perfect problem to solve with React, I know. However, after implementing this simple solution with jQuery, I see that it works just fine (I'm not populating a huge table here; I have at most 20 rows and 5 columns).
I expected to see flickering because of the $("#table").empty() call followed by adding rows one by one inside update_table(). However, the browser (Chrome/Safari) somehow seems to do a very good job of updating only the elements that actually changed (almost as if the browser had an implementation of virtual DOM diffing, like React!).
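A minimal sketch of the pattern described above (the element ID, the /state endpoint, and the state shape are hypothetical):

// Global "state" plus one central update_table() that rebuilds from scratch.
let state = [];                       // e.g. [{ name: "cpu", value: 42 }, ...]

function update_table() {
  const $table = $("#table").empty(); // clean start
  for (const row of state) {
    $table.append(
      $("<tr>").append($("<td>").text(row.name), $("<td>").text(row.value))
    );
  }
}

// Poll every 2-3 seconds; "/state" is an assumed endpoint.
setInterval(function () {
  $.getJSON("/state", function (data) { state = data; update_table(); });
}, 2500);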
I guess your question is why you can get such good graphics performance without React.
What you see as "good graphics performance" is really a matter of definition or, worse, opinion.
The classic Netscape processing cycle (which all modern browsers inherit) has basically four main stages. Here is the full-blown Gecko engine description.
As long as you are manipulating the DOM, you're in the "DOM update" stage and no rendering is performed AT ALL. Only when your code yields does the next stage start. Because of the DOM changes, the sizes or positions of some elements may have changed too, so this stage recomputes the layout. After that comes rendering, where the pixels are redrawn.
This means that even if your code changes a very large number of elements in the DOM, they are all still rendered together, not incrementally. So the empty() call does not render anything if you repopulate the table immediately afterwards.
Now, when you see the pixels of an element like "13872", the rendering stage may render them at the exact same position with the exact same colors. There is no change in pixel color, and thus no flickering you could see.
That said, your graphics performance may well be excellent -- yes. But how did you measure it? You just looked at it and decided it's perfect. Visually it really may be very good, because all that is needed is that the layout stage doesn't size or position anything differently.
But actual performance is not measured with the lazy eyes of us humans (there are many usability studies in that field; let's just say that one frame at 60 Hz takes 16.6 ms, so rendering in less than that is enough). It is measured with an actual metric (updates per second, or whatever). Consider that on older machines with older browsers and slower graphics cards your "excellent" performance may look shameful. How do you know it is still good on an old Toshiba tablet with 64 MB of graphics memory?
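For instance, a quick sketch to get an actual number instead of an eyeball estimate (update_table is the function from the question):

const t0 = performance.now();
update_table();
// Layout and paint happen after the script yields, so sample the next frame too:
requestAnimationFrame(function () {
  const ms = performance.now() - t0;
  console.log("update took ~" + ms.toFixed(1) + " ms (budget: 16.6 ms at 60 Hz)");
});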
And what about scaling? If you have 100x the elements you have now, are you sure it will scale well? What if some data takes more (or less) space and changes the whole layout? All of these edge conditions may not be covered by your simple approach.
A library like React takes into account those cases you may not have encountered yet, and offers a uniform pattern to approach them.
So if you are happy with your solution, you don't need React. I often avoid jQuery because ES5/ES6 is already pretty good these days and I can just jot down 3-4 lines of code using document.getElementById() and the like. But I realize that on larger projects or in complex cases, jQuery is the perfect tool.
Look at React that way: a tool that is useful when you realize you need it, and cumbersome when you think you can do without it. It's all up to you :)
When you have something like this:
$("#table").empty()
.html("... new content of the table ... ");
then the following happens:
.empty() removes the content and marks the rendering tree / layout as invalid.
.html() adds new content and marks the rendering tree / layout as invalid.
Marking as invalid, among other things, calls InvalidateRect() (on Windows), which causes the window to receive a WM_PAINT event at some point in the future.
While handling WM_PAINT, the browser calculates the layout and renders the result.
Therefore multiple change requests are collapsed into a single window painting operation.
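You can see the collapsing directly in a plain-DOM sketch:

// Both mutations invalidate the layout, but the browser paints only once,
// after this script task ends; the intermediate empty state is never shown.
const el = document.getElementById("table");
el.innerHTML = "";                           // invalidate
el.innerHTML = "<tr><td>fresh</td></tr>";    // invalidate again
// No paint has happened yet; layout + paint run once when the task yields.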

WebGL rendering a lot of objects - performance issue

I'm working with the ChemDoodle library for rendering complex chemical structures. Everything works fine, but with big molecules (about 20k atoms) it's quite slow. I don't have much experience in graphics, but I think it might be because each atom is rendered independently: every time the scene is re-rendered, it has to iterate over the array of atoms (it should be buffered instead).
My idea was to create some structure that is calculated at init time, so that on render only the camera would change its position. I don't need to manipulate the atoms, only use the mouse to rotate/move the whole molecule. Is something like this even possible, and would it improve performance?
I would appreciate knowing whether it's possible (or any other suggestions), ideally in pure WebGL, without THREE.js.
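That is exactly how static scenes are usually handled: upload the geometry once, then change only the camera uniform per frame. A rough pure-WebGL sketch, assuming a compiled shader program and an atomPositions Float32Array already exist:

const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, atomPositions, gl.STATIC_DRAW); // uploaded once

const positionLoc = gl.getAttribLocation(program, "a_position");
const viewLoc = gl.getUniformLocation(program, "u_viewMatrix");
gl.enableVertexAttribArray(positionLoc);

function render(viewMatrix) {          // called on mouse rotate/move
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
  gl.uniformMatrix4fv(viewLoc, false, viewMatrix); // the only per-frame change
  gl.drawArrays(gl.POINTS, 0, atomPositions.length / 3);
}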

JavaScript - Large number of rendered objects

I'm building a web app based on JavaScript/jQuery and PHP, and I need to render and manage (e.g. have the user drag/drop, resize, etc.) a large number (20,000+) of drawn objects on a web page. I'm looking for recommendations on approach and libraries, mainly to get an idea of how to do this while keeping page performance acceptable.
The objects are simple geometric shapes (rectangles, circles, squares, etc.) that I need to attach event handlers to and be able to move/resize. Shape attributes will be based on properties of JavaScript objects, and I'll need to change the shapes based on the JavaScript object properties and vice versa.
The canvas area is likely to be quite large (not sure if this affects performance?), although not all objects will be visible on the page at once; the user must be able to scroll around the full canvas within a div (using overflow, etc.). I have built something for test purposes using jQuery SVGDOM, which works well when I have a couple of hundred objects, but the page grinds to a halt when I go over 1,000.
What I like about SVGDOM is the way it fits nicely with jQuery for referencing the DOM objects (for event handlers, etc.), but I'm willing to (try to) develop more complex code if that's what it takes to handle the larger number of objects that SVGDOM doesn't seem happy with from a performance perspective.
Any suggestions for how to do this?
I think you need to look into WebGL, which renders using the GPU. A good framework for that is three.js.
Still, to manage your expectations: making 20k objects interactive is a really big challenge and might need some smart caching system to fake it. If you target mobile devices too, I would say your goal is way too ambitious. I am sometimes already happy if 100 objects run/move smoothly.
I'm taking as the answer to my original question that it is not practical to display and manage the number of objects I need on a single page, whether with SVG or directly on the canvas.
So my approach must be to reduce the number of objects displayed at any given time; now I just need to figure out the best way to do that...
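One common approach is viewport culling: only draw the shapes that intersect the visible scroll area. A hedged sketch for a 2D canvas, where the shapes array and its fields are assumptions:

// Skip shapes outside the visible viewport before drawing anything.
// "shapes" is an assumed array of { x, y, w, h, draw(ctx) } objects.
function drawVisible(ctx, shapes, view /* { x, y, w, h } in world coords */) {
  ctx.clearRect(0, 0, view.w, view.h);
  for (const s of shapes) {
    const offscreen = s.x + s.w < view.x || s.x > view.x + view.w ||
                      s.y + s.h < view.y || s.y > view.y + view.h;
    if (offscreen) continue;          // culled: never touches the canvas
    ctx.save();
    ctx.translate(-view.x, -view.y);  // world -> screen
    s.draw(ctx);
    ctx.restore();
  }
}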

Monte-carlo methods in graphics

I was reading through this interesting post about using JavaScript to generate an image of a rose. However, I'm a bit confused, as the article claims the author used Monte Carlo methods to reduce the code size.
My understanding was that the author used Monte Carlo methods to do something like GIF interlacing, so that the image would appear to load more quickly. Have I missed something?
The Monte Carlo (MC) method used by the author has nothing to do with the resulting image file type; it has everything to do with how the image was generated in the first place. Since the point of JS1K is to write compact code, the author defines the rose by mathematical forms that are then filled in with tiny dots (so they look like a solid image) by a basic renderer.
How do you fill those forms in? One method is to sample the surface uniformly, that is, place a dot at every point of a fixed interval. As #Jordan quoted, this works if and only if the interval is set correctly: make it too small and rendering takes too long; make it too large and the image is patchwork. You can bypass the whole problem by sampling the surface randomly instead. This is where the MC method comes in.
I've seen this confusion over MC before, since it is often thought of as a tool for numerical simulation. While it is widely used as such, the core idea is to randomly sample an interval, with a bias that weights each sample appropriately (depending on the problem). For example, a physics simulation might use a weight of e^(-E/kT), whereas a numerical integrator might use a weight proportional to the derivative at the sample point. The Wikipedia entry (and the references therein) is a good starting place for more detail.
You can think of the complete rose as a function that is fully computed. As the MC algorithm runs, it samples this function while converging onto the correct answer.
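In code, the difference is simply where the sample points come from; a hedged sketch, where surfacePoint(u, v) is an assumed parametric mapping from [0,1]^2 to canvas coordinates:

function sampleSurface(ctx, surfacePoint, samplesPerTick) {
  for (let i = 0; i < samplesPerTick; i++) {
    const p = surfacePoint(Math.random(), Math.random()); // uniform random sample
    ctx.fillRect(p[0], p[1], 1, 1);   // one tiny dot; coverage converges over time
  }
}

// Run repeatedly and the dots converge onto the complete rose, e.g.:
// setInterval(function () { sampleSurface(ctx, rosePoint, 5000); }, 16);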
The author writes in the article that he uses Monte Carlo sampling to overcome the limitations of interval-based sampling, because the latter "requires the setting of a proper interval for each surface. If the interval is big, it render fast but it can end with holes in the surface that has not been filled. On the other hand, if the interval is too little, the time for rendering increments up to prohibitive quantities." I believe the WebMonkey article's conclusion regarding code size is incorrect.

Drawing Application Zoom Concept

I'm building an application on top of canvas. It consists of a simple DOM that gets redrawn on every mouse move (yes, it is necessary); for performance reasons, not every part gets redrawn, only what is needed.
The app is working well, but I'd like to add a zoom feature. The way I see it, it can be done in three different ways:
1 - Every DOM element gets recalculated (position and size) every time the user zooms in or out. This might have precision issues, and it's not a very good abstraction.
2 - The canvas has a resolution property (i.e. when the user zooms out, the resolution might change from 1 to .75). The calculations then have to be done on every redraw.
3 - Use the built-in translate() and scale() methods. Possibly the most elegant and fastest solution; however, it is not intuitive at all, and it might be difficult for me or someone else to understand later on how it is done (these methods work on the full canvas: first you translate and scale the canvas, and afterwards everything you draw gets 'magically' translated and scaled).
Which one is best or are there other possibilities I'm not thinking of?
I would use the built-in translate()/scale() methods. If you're worried about the performance or quality of any of these methods, try to implement it in a way that lets you swap in one of the other options for comparison, should the results give you any concern.
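A minimal sketch of option 3, assuming a drawScene function that works in untransformed 'world' coordinates:

function redraw(ctx, zoom, panX, panY) {
  ctx.setTransform(1, 0, 0, 1, 0, 0);  // reset any previous transform
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.translate(panX, panY);           // pan first...
  ctx.scale(zoom, zoom);               // ...then zoom
  drawScene(ctx);                      // everything drawn is now 'magically' scaled
}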
