Marching Cubes Performance (2D) - JavaScript

I'm trying to implement a 2D version of the marching cubes algorithm (marching squares?), and one of the major roadblocks I've run into is performance (using WebGL and three.js). I notice that there's a huge tradeoff between quality (voxel/square size) and performance, and I think the culprit is the center (solid area) of the metaballs.
Obviously I don't care about the faces on the inside of the metaballs, since that's a completely solid area anyway; but I'm not sure how to avoid polygonizing the interior area without treating it the same as the rest of the surface. The problem becomes worse as I add more metaballs to the mix.
How can I get around this problem, so that I can maintain decent quality and render many metaballs at a decent framerate?

If you are implementing the standard marching squares technique, then the cells entirely inside or outside the surface shouldn't be a problem. In fact they are the cheapest cases, because you don't need to do any computation for them.
If you want to reduce the poly count in areas where it is not needed (the central area of the circle), you need to look into an adaptive sampling technique. In this case the most suitable structure is probably a quadtree (the 2D analogue of an octree).
The speed issue when decreasing the cell size will always be there, because Marching Cubes is an O(n^3) algorithm (n being the grid resolution per axis), so marching squares is O(n^2); still slow. There is no way around that. (An adaptive sampling structure, as mentioned above, will speed things up.)
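To illustrate why the solid interior is cheap, here is a minimal marching-squares sketch; the names field, iso and the grid layout are assumptions for illustration, not your code:

```javascript
// Minimal marching-squares classification sketch. Assumed: `field` is
// a (h+1) x (w+1) 2D array of scalar samples, `iso` is the threshold.
function marchSquares(field, w, h, iso) {
  const cells = [];
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      // Build a 4-bit case index from the cell's four corner samples.
      let caseIndex = 0;
      if (field[y][x]         > iso) caseIndex |= 1; // bottom-left
      if (field[y][x + 1]     > iso) caseIndex |= 2; // bottom-right
      if (field[y + 1][x + 1] > iso) caseIndex |= 4; // top-right
      if (field[y + 1][x]     > iso) caseIndex |= 8; // top-left

      // Cases 0 (fully outside) and 15 (fully inside) produce no
      // geometry, so the solid interior costs only this test.
      if (caseIndex === 0 || caseIndex === 15) continue;

      // ...emit contour segment(s) for the remaining 14 cases,
      // interpolating crossing points along the cell edges.
      cells.push({ x, y, caseIndex });
    }
  }
  return cells;
}
```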
It seems to me you could improve the quality at the lower resolution. The circle seems to alias a lot (assuming this is not just low screen resolution). I would check again how you interpolate on the edges of the squares (I hope you don't just use the centres of the edges); using proper interpolation will give you a better approximation, and you will get better results at lower resolutions.
See Paul Bourke's article on marching cubes and check out the interpolation section if you are not already doing this.
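For the interpolation itself, the idea (following Bourke) is to solve linearly for the crossing point along each cut edge instead of taking the edge midpoint. A sketch, where p1/p2 are the edge endpoints and v1/v2 their field values:

```javascript
// Linear interpolation of the surface crossing along one cell edge,
// in the spirit of Bourke's article (names are illustrative).
function interpolateEdge(iso, p1, p2, v1, v2) {
  // Guard against division by zero on a (nearly) flat edge.
  if (Math.abs(v2 - v1) < 1e-9) return { x: p1.x, y: p1.y };
  const t = (iso - v1) / (v2 - v1); // 0..1 position of the crossing
  return {
    x: p1.x + t * (p2.x - p1.x),
    y: p1.y + t * (p2.y - p1.y),
  };
}
```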
Here are some references for 3D isosurface extraction techniques (mostly based on MC); you could benefit from them in your 2D case:
Kazhdan et al., 2007: Unconstrained Isosurface Extraction on Arbitrary Octrees
Manson and Schaefer, 2010: Isosurfaces over Simplicial Partitions of Multiresolution Grids
Wilhelms and Van Gelder, 1992: Octrees for Faster Isosurface Generation
PS: also check out their references for many more similar and perhaps foundational papers!

Related

Can GLSL be used instead of WebGL?

This may be a bit of a naive question, so please go easy on me. But I was looking at shaders on shadertoy.com and I'm amazed at how small the GLSL code is for the 3D scenes. Digging deeper, I noticed that most of the shaders use a technique called ray marching.
This technique makes it possible to avoid using vertices/triangles altogether and just employ the pixel shader and some math to create some pretty complex scenes.
So I was wondering: why do 3D scenes usually use triangle meshes with WebGL instead of just pixel shaders? Can't we render the entire scene with GLSL and pixel shaders (aka fragment shaders)?
The simple answer is that the techniques on shadertoy are probably 10, 100, or 1000 times slower than using vertices and triangles.
Compare this shadertoy forest that runs at 1fps at best fullscreen on my laptop
https://www.shadertoy.com/view/4ttSWf
To this Skyrim forest which runs at 30 to 60fps
https://www.youtube.com/watch?v=PjqsYzBrP-M
Compare this Shadertoy city which runs at 5fps on my laptop
https://www.shadertoy.com/view/XtsSWs
To this Cities:Skylines city which runs at 60fps
https://www.youtube.com/watch?v=0gI2N10QyRA
Compare this Shadertoy Journey clone which runs at 1fps fullscreen on my laptop
https://www.shadertoy.com/view/ldlcRf
to the actual Journey game on PS3, a machine with an arguably slower GPU than my laptop given that the PS3 came out in 2006, and yet it runs at 60fps
https://www.youtube.com/watch?v=61DZC-60x20#t=0m46s
There are plenty of other reasons. A typical 3D world uses gigabytes of data for textures, characters, animations, collisions, etc.; none of that is available in just GLSL. Another is that these shaders often use fractal techniques, so there's no easy way to actually design anything; instead the authors just search the math for something interesting. That would not be a good way to design game levels, for example. In other words, using vertex data makes things far more flexible and editable.
Compare the Journey examples above: the Shadertoy example is a single scene, versus the game, which is a vast designed world with buildings and ruins and puzzles, etc...
There's a reason it's called ShaderTOY. It's meant as a fun challenge: given a single function whose only input is which pixel is currently being drawn, write code to draw something. As such, the images people have managed to draw given that limit are amazing!
But they aren't generally the techniques used to write real apps. If you want your app to run fast and be flexible, you use the more traditional techniques of vertices and triangles; the techniques used by GTA5, Red Dead Redemption 2, Call of Duty, Apex Legends, Fortnite, etc...
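For context, the whole shadertoy setup boils down to a fragment shader run over a fullscreen quad. A minimal three.js sketch of that pattern (the sphere-tracing scene is an illustrative toy, not taken from any of the shaders linked above):

```javascript
import * as THREE from 'three';

// Fullscreen quad + per-pixel fragment shader: the basic shadertoy setup.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const material = new THREE.ShaderMaterial({
  uniforms: {
    uResolution: {
      value: new THREE.Vector2(window.innerWidth, window.innerHeight),
    },
  },
  // Pass the quad straight through; all the work happens per pixel.
  vertexShader: /* glsl */ `
    void main() { gl_Position = vec4(position, 1.0); }
  `,
  // A tiny raymarcher: sphere-trace a single sphere at the origin.
  fragmentShader: /* glsl */ `
    uniform vec2 uResolution;
    float sdSphere(vec3 p) { return length(p) - 0.5; }
    void main() {
      vec2 uv = (gl_FragCoord.xy * 2.0 - uResolution) / uResolution.y;
      vec3 ro = vec3(0.0, 0.0, 2.0);       // ray origin (camera)
      vec3 rd = normalize(vec3(uv, -1.5)); // ray direction
      float t = 0.0;
      for (int i = 0; i < 64; i++) {       // march the ray forward
        float d = sdSphere(ro + rd * t);
        if (d < 0.001) break;              // close enough: surface hit
        t += d;
      }
      vec3 col = t < 10.0 ? vec3(1.0 - t * 0.25) : vec3(0.0);
      gl_FragColor = vec4(col, 1.0);
    }
  `,
});

scene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material));
renderer.render(scene, camera);
```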

Bad rendering of ParticleSystems in Three.js with low-end graphics cards

I am trying to use particle systems to speed up the rendering of a system of stars, but I've noticed that the display is really bad on weak graphics cards (for example Intel HD, which is pretty widespread). The particles, which should have a specific texture, are replaced by ugly squares with strange colors and transparency. For instance, this system of particles renders to:
This can be reproduced with any instance of THREE.ParticleSystem or THREE.Points (the more modern name). All the other THREE objects (spheres, cubes, planes, etc.) render fine on my GPU; only particles are buggy.
Is there a way to avoid this effect? Otherwise, is there a method other than particle systems to display a large number of objects without slowing everything down?
I'm not sure about your specific case, but I've found that drawing 'Point' primitives can be problematic on some GPUs, drivers and/or API versions.
They are just a primitive type and should work the same as triangles and lines, but on some GPUs, especially low-end ones, they just don't work. And even when drawing points works by itself, it may not support point sizes, or texturing, or something else...
In that case you can replace them with regular textured quads and it should be fine. You'll probably lose some performance this way, so you may want to keep both approaches and select one based on the GPU.
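A sketch of that quad fallback using a modern three.js API (THREE.InstancedMesh; the stars array, texture and quad size here are placeholder assumptions):

```javascript
import * as THREE from 'three';

// Fallback for GPUs where gl.POINTS misbehave: one small textured
// quad per star via InstancedMesh. Assumed: a `stars` array of
// THREE.Vector3 positions and an already-loaded `texture`.
function makeStarQuads(stars, texture) {
  const quad = new THREE.PlaneGeometry(0.1, 0.1);
  const material = new THREE.MeshBasicMaterial({
    map: texture,
    transparent: true,
    depthWrite: false, // avoid sorting artifacts between stars
  });
  const mesh = new THREE.InstancedMesh(quad, material, stars.length);
  const m = new THREE.Matrix4();
  for (let i = 0; i < stars.length; i++) {
    // Place each quad instance at its star's position.
    m.makeTranslation(stars[i].x, stars[i].y, stars[i].z);
    mesh.setMatrixAt(i, m);
  }
  mesh.instanceMatrix.needsUpdate = true;
  return mesh;
}
```

Note that instanced quads don't billboard on their own; for camera-facing particles you'd rotate the instances toward the camera each frame, or use THREE.Sprite when the counts are small.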

How can I increase map rendering performance in HTML Canvas?

We are developing a web-based game. The map has a fixed size and is procedurally generated.
At the moment, all the map's polygons are stored in one array and checked one by one to decide whether they should be drawn. This costs a lot of performance. What is the best rendering/buffering solution for big maps?
What I've tried:
Quadtrees. Problem: performance was still not great, because there are so many polygons.
Drawing sections of the map to offscreen canvases. A test run: http://norizon.ch/repo/buffered-map-rendering/ Problem: the browser crashes when trying to buffer that much data, and such big images (maybe 2000x2000) still seem to perform badly on a canvas.
(posting comments as an answer for convenience)
One idea could be, when the user is translating the map, to re-use the part that will still be in view, and to redraw only the strip(s) that are no longer correct.
I believe (can you confirm?) that the most costly operation is the drawing, not finding which polygons to draw.
If so, you should use your quadtree to find the polygons that are within the strips. Notice that, given JavaScript's overhead, a simple 2D bucket grid that stores the polygons within a given (x, y) tile might be faster to use (if the cost of the quadtree is too high).
Now, I have some doubts about the precise way you should do this; I'm afraid you'll have to experiment/benchmark, and maybe choose a preferred browser.
Problems:
• Copying a canvas onto itself can be very slow depending on the device/browser (it might in fact require two copies).
• Using an offscreen canvas can be very slow depending on the device/browser (it might not use hardware acceleration when off-screen).
If you are drawing things on top of the map, you can either use a secondary canvas on top of the map canvas, or you'll be forced to use an off-screen canvas that you copy on each frame.
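A sketch of that re-use idea, using two canvases to sidestep the self-copy problems listed above (dx/dy are the scroll offsets in pixels, and drawStrip is a placeholder for your polygon-drawing pass):

```javascript
// Scroll the map by (dx, dy): copy the still-visible region from the
// front canvas into a back canvas, then redraw only the newly exposed
// strips. Positive dx/dy means the view moves right/down, so the old
// content shifts left/up.
function scrollMap(front, back, dx, dy, drawStrip) {
  const w = front.width, h = front.height;
  const bctx = back.getContext('2d');

  // 1. Re-use the overlapping region, shifted by the scroll offset.
  bctx.clearRect(0, 0, w, h);
  bctx.drawImage(front, -dx, -dy);

  // 2. Redraw only the strips that entered the view.
  if (dx > 0) drawStrip(bctx, w - dx, 0, dx, h);  // right strip
  if (dx < 0) drawStrip(bctx, 0, 0, -dx, h);      // left strip
  if (dy > 0) drawStrip(bctx, 0, h - dy, w, dy);  // bottom strip
  if (dy < 0) drawStrip(bctx, 0, 0, w, -dy);      // top strip

  // 3. Swap roles for the next frame; the caller displays `back`.
  return [back, front];
}
```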
I have tried a lot of things and this solution turned out to be the best for us.
Because our map has a fixed size, it is calculated server-side.
One big image atlas with all the required tiles is loaded at the beginning of the game. For each image in the atlas, a separate canvas is created. The client loads the whole map data into one two-dimensional array whose values determine which tile has to be drawn. Maybe it would be even better if the map were drawn on a separate canvas, so that only the strips would have to be repainted; but the performance is really good, so we won't change that.
Three conclusions:
Images are fast. getImageData is not!
JavaScript does not yet have great support for multithreading, so we don't calculate the map client-side at game time.
Quadtrees are fast. Arrays are faster.
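A sketch of that atlas approach (the tile size, the single-row atlas layout and the map array shape are assumptions):

```javascript
// Slice a loaded atlas image into per-tile canvases once, up front.
// Assumed: square tiles of `size` pixels, laid out left-to-right.
function sliceAtlas(atlasImage, size) {
  const tiles = [];
  for (let x = 0; x + size <= atlasImage.width; x += size) {
    const c = document.createElement('canvas');
    c.width = c.height = size;
    // Copy one tile out of the atlas (9-argument drawImage).
    c.getContext('2d')
      .drawImage(atlasImage, x, 0, size, size, 0, 0, size, size);
    tiles.push(c);
  }
  return tiles;
}

// Paint the map: map[row][col] is the index of the tile to draw.
function drawMap(ctx, map, tiles, size) {
  for (let row = 0; row < map.length; row++) {
    for (let col = 0; col < map[row].length; col++) {
      ctx.drawImage(tiles[map[row][col]], col * size, row * size);
    }
  }
}
```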

Best practice: Rendering volume (voxel) based data in WebGL

I'm searching for one (or more) best practice(s) for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: voxels (volumetric pixels) forming a cube, with coordinates x, y, z and a color attached.
Goal: Use OpenGL to display this data as you move through it from different sides.
Question: What's the best practice to render those voxels, depending on the viewpoint? How (in which type of object) should the data be stored?
Consider the following:
The cube of data can be considered as z layers of x-y data. It should be possible to view in between layers; the displayed color should then be interpolated from the closest matching voxels.
For my application, I have data sets of (x, y, z) = (512, 512, 128) and more, containing medical data (scans of hearts, brains, ...).
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are two major ways to represent/render 3D datasets: rasterization and ray-tracing.
One solid rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section. You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second for thousands of vertices.
However, unless you simply want to render cubes (which I doubt) rather than a surface, you will also need more information associated with each voxel than just its position and color.
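As a starting point, here is a minimal sketch using the MarchingCubes object from the three.js examples (the addon path and constructor match recent three.js releases; check against your version):

```javascript
import * as THREE from 'three';
import { MarchingCubes } from 'three/addons/objects/MarchingCubes.js';

// Polygonize a scalar field on a 64^3 grid and add it to the scene.
const resolution = 64;
const material = new THREE.MeshNormalMaterial();
const surface = new MarchingCubes(resolution, material);

// Fill the density field. Here a couple of metaball-style blobs stand
// in for real medical data; addBall adds a radial falloff to the
// field at normalized (0..1) coordinates.
surface.reset();
surface.addBall(0.5, 0.5, 0.5, 0.3, 12);
surface.addBall(0.3, 0.6, 0.4, 0.2, 12);
surface.update(); // extract the isosurface mesh

scene.add(surface); // assumes an existing THREE.Scene named `scene`
```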
The other way is raycasting. Unless you find a really efficient raycasting algorithm, a naive implementation will take a serious performance hit.
You cast rays from your camera position through your data structure, stop marching when you reach a surface, and project the intersection point back to screen space with the desired color.
You can draw the resulting pixels into a texture buffer and map it onto a full-screen quad with a simple shader.
In both cases, you need more information than just colors and cubes. For example, you need at least density values at each corner of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for ray-casting: you need at least some density information to figure out where the surface lies.
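To make the ray-casting loop concrete, here is a CPU-side sketch that marches one ray through a density volume until it crosses the iso threshold (the volume layout and all names are assumptions; a real implementation would do this, with trilinear sampling, in a shader):

```javascript
// March one ray through a density volume (Float32Array, x-fastest
// layout, dims = {x, y, z}) until the density crosses `iso`.
function castRay(volume, dims, origin, dir, iso, step = 0.5) {
  // Nearest-neighbor sample; returns 0 outside the volume.
  const sample = (x, y, z) => {
    const xi = Math.floor(x), yi = Math.floor(y), zi = Math.floor(z);
    if (xi < 0 || yi < 0 || zi < 0 ||
        xi >= dims.x || yi >= dims.y || zi >= dims.z) return 0;
    return volume[xi + yi * dims.x + zi * dims.x * dims.y];
  };
  const maxT = Math.hypot(dims.x, dims.y, dims.z);
  for (let t = 0; t < maxT; t += step) {
    const x = origin.x + dir.x * t;
    const y = origin.y + dir.y * t;
    const z = origin.z + dir.z * t;
    if (sample(x, y, z) > iso) {
      return { x, y, z, t }; // first surface hit along the ray
    }
  }
  return null; // the ray exited the volume without hitting anything
}
```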
One of the keys is also how you organize the data in your structure, especially for out-of-core access.

Is HTML5's Canvas API Rotate or Translate function better?

After a short discussion with a friend about a Canvas project, we realized that there is no clear-cut answer on whether the Canvas rotate or translate functions are better to use. Mainly we want to know which is best for rendering performance, and why.
PS: The save-and-restore method for rotating images that you see all over the net is terrible...
Since they both boil down to matrix transforms, and translate is merely an offset, presumably translation would in general yield better performance than most types of rotation, because there are simply more operations involved in a rotation than in a translation. See Transformation matrix, Translation matrix and Rotation matrix for some background.
But since the two aren't interchangeable in any way, shape or form, I fail to see the fruit of the debate.
Also, I should add that, since there is a general movement toward hardware acceleration for canvas-related features, the cost of any transformation may end up effectively constant (i.e. no actual cost) due to co-processor parallelism in the rendering pipeline...
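On the save/restore complaint: you can set the whole matrix in one call and reset it afterwards, with no save()/restore() pair. A sketch (names are illustrative):

```javascript
// Rotate an image about its own center without save()/restore():
// set the full transform in one call, draw, then reset the matrix.
function drawRotated(ctx, image, cx, cy, angle) {
  const cos = Math.cos(angle), sin = Math.sin(angle);
  // setTransform(a, b, c, d, e, f): a rotation matrix with the
  // translation to (cx, cy) folded into the same call.
  ctx.setTransform(cos, sin, -sin, cos, cx, cy);
  // Draw centered on the origin of the rotated frame.
  ctx.drawImage(image, -image.width / 2, -image.height / 2);
  ctx.setTransform(1, 0, 0, 1, 0, 0); // back to the identity matrix
}
```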
