Ways of speeding up THREE.js - javascript

I am writing a simple minecraft clone in THREE.js.
However, the result is very laggy.
I am using box geometry for the voxels, but I need to remove blocks when they are broken and need to use mouse picking.
I have heard that merging geometries speeds things up, but as far as I am aware that means you cannot remove individual voxels or use mouse picking.
What other ways are there to speed things up in THREE.js?

Using a box per voxel (making one draw call per voxel) will be too slow on any machine even if you wrote it in assembly language.
You need to build a mesh for every section of your world. As in, divide the world into 48x48x48-unit chunks and build one mesh that contains all the voxels in that area. When the user edits a box, you edit the mesh's vertices rather than removing a Box object.
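As an illustration, here is a minimal sketch of building one merged mesh per chunk. The chunk size, the getVoxel lookup and the material are assumptions, and only the top face of each voxel is emitted for brevity; a full version repeats the pattern for all six faces.

function buildChunkMesh(getVoxel, chunkX, chunkY, chunkZ, size) {
  const positions = [];
  for (let x = 0; x < size; x++) {
    for (let y = 0; y < size; y++) {
      for (let z = 0; z < size; z++) {
        const wx = chunkX * size + x, wy = chunkY * size + y, wz = chunkZ * size + z;
        // Skip empty voxels and faces hidden by a solid neighbor above
        if (!getVoxel(wx, wy, wz) || getVoxel(wx, wy + 1, wz)) continue;
        // Two triangles forming the voxel's top face
        positions.push(
          wx, wy + 1, wz,   wx, wy + 1, wz + 1,   wx + 1, wy + 1, wz + 1,
          wx, wy + 1, wz,   wx + 1, wy + 1, wz + 1,   wx + 1, wy + 1, wz
        );
      }
    }
  }
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
  geometry.computeVertexNormals();
  return new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ color: 0x88aa55 }));
}

Mouse picking still works: raycast against the chunk mesh, derive the voxel coordinates from the intersection point, and rebuild only that chunk's geometry when a block is broken.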

Related

Find the exact element clicked in three.js with raycaster and intersects when small parts are closely coupled together to form a complex model

While working in three.js, I have a 3D model which has several small parts joined together to form one loaded model. I am using raycaster and intersects to find the clicked element. It is basically a door which has small screws, handles, hinges, bolts, rails, rotation points, pivots, etc.
My problem is that the element is not identified by the raycaster unless I zoom in; only when I zoom in to a good extent am I able to identify the clicked element. Can anyone help me get rid of the need to zoom in?
Your best bet might be to create an additional mesh attached to each of the small elements (like screws): a large sphere geometry that surrounds the screw. This larger sphere would be invisible (i.e. its material's visible flag is false) and would function to intercept raycasts for the small elements that are difficult to hit on their own.
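A minimal sketch of that idea, assuming screwMesh is one of the small parts; the sphere radius and the userData wiring are illustrative:

const proxy = new THREE.Mesh(
  new THREE.SphereGeometry(2), // comfortably larger than the screw
  new THREE.MeshBasicMaterial({ visible: false }) // invisible, but still hit by the raycaster
);
proxy.userData.target = screwMesh; // remember which part this proxy stands for
screwMesh.add(proxy); // the proxy follows the screw's transforms

const hits = raycaster.intersectObjects(scene.children, true);
if (hits.length > 0) {
  const clicked = hits[0].object.userData.target || hits[0].object;
}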

How to make THREE.Mesh look volumetric with WebVR?

I'm working on porting an existing three.js project to WebVR + Oculus Rift. Basically, this app takes an STL file as input, creates a THREE.Mesh based on it and renders it on an empty scene. I managed to make it work in Firefox Nightly with the VREffect plugin for three.js and VRControls. The problem I have is that models rendered in VR aren't really 3D. Namely, when I move the HMD back and forth, an active 3D model doesn't get closer/farther, and I can't see different sides of the model. It looks as if the model were a flat background image stuck in its position. If I add a THREE.AxisHelper to the scene, it is transformed correctly when the HMD is moved.
Originally, THREE.OrbitControls were used in the app and models were rotated and moved properly.
There's quite some amount of source code so I'll post some snippets on demand.
It turned out that technically there was no problem. The issue was essentially the different scales of my models and of the Oculus movements. When VRControls is used with default settings, it reports the position of the HMD as it reads it from the Oculus, in meters. So the range of movement of my head could barely exceed 1 m, whereas the average size of my models is a few dozen of their own units. Used together in the same scene, it was as if the viewer were an ant looking at a giant model. Naturally, the ant has to walk a while to see another side of the model. That's why it didn't seem like a 3D body.
Fortunately, there's a scale property on VRControls that can be used to adjust the scale of HMD movements. When I set it to about 30, everything works pretty well.
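A minimal sketch of that fix; the value 30 matched this model's units and will differ per scene:

const controls = new THREE.VRControls(camera);
controls.scale = 30; // 1 m of real head movement maps to 30 scene units
// and in the render loop:
controls.update();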
Thanks to @brianpeiris's comment, I decided to check the coordinates of the model and camera once again to make sure they weren't tied to each other. And that led me to the solution.

Best practice: Rendering volume (voxel) based data in WebGL

I'm searching for one (or more) best practice(s) for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: Voxels (Volumetric Pixels), forming a cube, with coordinates x,y,z and a color attached.
Goal: Use OpenGL to display this data, as you move through it from different sides.
Question: What's the best practice for rendering those voxels, depending on the viewpoint? In which type of object can the data be stored?
Consider the following:
The cube of data can be considered as z layers of x/y data. It should be possible to view in between layers; the displayed color should then be interpolated from the closest matching voxels.
For my application, I have data sets of (x,y,z) = (512,512,128) and more, containing medical data (scans of hearts, brains, ...).
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are two major ways to represent and render 3D datasets: rasterization and ray tracing.
One fair rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section. You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second for thousands of vertices.
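A minimal sketch using the MarchingCubes helper from the three.js examples; the resolution, material and sampleDensity function are assumptions, and the helper's API has shifted between releases, so check the example source for your version:

import { MarchingCubes } from 'three/examples/jsm/objects/MarchingCubes.js';

const resolution = 64; // the scalar field holds resolution^3 density samples
const surface = new MarchingCubes(resolution, new THREE.MeshNormalMaterial());
surface.isolation = 0.5; // density threshold at which the surface is extracted
for (let z = 0; z < resolution; z++)
  for (let y = 0; y < resolution; y++)
    for (let x = 0; x < resolution; x++)
      surface.setCell(x, y, z, sampleDensity(x, y, z)); // sampleDensity: your voxel data
surface.update(); // rebuild the polygon mesh from the scalar field
scene.add(surface);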
However, unless you simply want to represent cubes (which I doubt) rather than a surface, you will also need more information associated with each voxel than just its position and color.
The other way is ray casting. Unless you find a really efficient ray-casting algorithm, you will take a serious performance hit with a naive implementation.
You can cast rays from your camera position through your data structure, stop marching when you reach a surface, and project the intersection point back to screen space with the desired color.
You can draw the resulting pixels into a texture buffer and map it onto a full-screen quad with a simple shader.
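A minimal CPU-side sketch of marching one ray through a density grid; densityAt, the step size and the iso level are assumptions, and a real implementation would run this per pixel in a fragment shader:

function castRay(origin, direction, maxDistance, stepSize, isoLevel) {
  const p = origin.clone();
  const step = direction.clone().normalize().multiplyScalar(stepSize);
  for (let t = 0; t < maxDistance; t += stepSize) {
    if (densityAt(p.x, p.y, p.z) >= isoLevel) return p.clone(); // surface hit
    p.add(step);
  }
  return null; // the ray left the volume without hitting a surface
}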
In both cases, you need more information than just colors and cubes. For example, you need at least density values at each corner of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for ray casting: you need at least some density information to figure out where the surface lies.
One of the keys is also how you organize the data in your structure, especially for out-of-core access.

'Culling' in a voxel world

I have a world full of voxels; let's say my world is 320*320*96 voxels. My idea is to load that entire world into the RAM of my video card so that no performance is lost transferring new "chunks" to the GPU. The number of faces generated to display that voxel world should easily fit into the memory of modern graphics cards.
However, the problem I am facing now is how to not display parts of that world. I want to limit the view of this world to (for example) 128*128*96 and shift the world or the camera around to show different parts.
To demonstrate my problem, picture a (simple) scene consisting of a ground plane with the "viewable" area shown in white; I am looking for the right WebGL/three.js functions to restrict the view to just the white part.
You could remove the meshes you don't want to display from the scene:
scene.remove( mesh );
And add them back to the scene when you do want to display them:
scene.add( mesh );
I recommend splitting your voxel world into chunks (like Minecraft) and making those chunks into meshes individually. Add the chunk meshes that you want visible to the scene and remove them when you want to hide them.
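A minimal sketch of that scheme; the chunk size, the Map keyed by "cx,cz" strings and the square view window are all assumptions:

const CHUNK = 16; // voxels per chunk side
function updateVisibleChunks(scene, chunkMeshes, centerX, centerZ, radius) {
  for (const [key, mesh] of chunkMeshes) {
    const [cx, cz] = key.split(',').map(Number);
    const inside = Math.abs(cx * CHUNK - centerX) <= radius &&
                   Math.abs(cz * CHUNK - centerZ) <= radius;
    if (inside && mesh.parent === null) scene.add(mesh); // chunk enters the window
    if (!inside && mesh.parent !== null) scene.remove(mesh); // chunk leaves it
  }
}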

Rounded Plane In THREE JS

THREE.js can often seem angular and straight-edged. I haven't used it for very long and thus am struggling to understand how to curve the world, so to speak. I would imagine a renderer or something must be changed, but the idea is to take a 2D map and turn it into a simple three-lane running game. However, if you look at the picture below from another, similar game, how can I achieve the fish-eye effect?
I would do that kind of effect on a per-vertex basis, depending on the distance from the camera.
Also, a slightly tweaked perspective camera with a bigger vertical FOV might boost the effect of the "curviness".
It's just a simple distortion effect that has been simulated in some way; it probably isn't really curved. Hope this helps.
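A rough sketch of the per-vertex approach; the quadratic falloff and the strength value are assumptions, and it assumes the geometry's vertices are in world space (e.g. an untransformed ground plane):

function bendGeometry(geometry, camera, strength) {
  const pos = geometry.attributes.position;
  const v = new THREE.Vector3();
  for (let i = 0; i < pos.count; i++) {
    v.fromBufferAttribute(pos, i);
    const dist = camera.position.distanceTo(v);
    pos.setY(i, v.y - strength * dist * dist); // drop far vertices for a "curved world" look
  }
  pos.needsUpdate = true;
  geometry.computeVertexNormals();
}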
I'm sure there are many possible different approaches... Here's one that creates nice barrel distortion effect.
You can do something like that by rendering a normal wide-angle camera to a texture, projecting that texture onto a lens-shaped plane (or even a sphere), and then doing the actual on-screen render from a camera pointing at that shape.
I don't have the code available at the moment, but I should be able to dig it up in a few days if you're interested. Or you can adapt one of the three.js examples: three.js includes some postprocessing examples where the scene is first rendered into a texture, that texture is applied to a quad, and the quad is then photographed with an orthographic camera. You can modify such an example by changing the orthographic camera to a perspective one, then distorting/changing the quad into something more appropriately shaped.
Taken to extremes, this approach can produce some pixelization / blocky artifacts.
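A minimal sketch of that two-pass setup, assuming an existing scene and renderer; the FOV values, sphere dimensions and camera placement are all assumptions to be tuned:

const aspect = window.innerWidth / window.innerHeight;
const target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
const wideCamera = new THREE.PerspectiveCamera(100, aspect, 0.1, 1000); // pass 1: wide angle
const lens = new THREE.Mesh(
  new THREE.SphereGeometry(5, 64, 64, 0, Math.PI), // half sphere acting as the "screen"
  new THREE.MeshBasicMaterial({ map: target.texture, side: THREE.BackSide })
);
const rttScene = new THREE.Scene();
rttScene.add(lens);
const outputCamera = new THREE.PerspectiveCamera(60, aspect, 0.1, 100);
outputCamera.position.z = 8; // looks at the inside of the lens

function render() {
  renderer.setRenderTarget(target); // pass 1: render the scene into the texture
  renderer.render(scene, wideCamera);
  renderer.setRenderTarget(null); // pass 2: render the textured lens to the screen
  renderer.render(rttScene, outputCamera);
}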
