I have a world full of voxels; let's say my world is 320*320*96 voxels. My idea is to load that entire world into the RAM of my video card so that no performance is lost transferring new "chunks" to the GPU. The number of faces generated to display that voxel world should easily fit into the memory of modern graphics cards.
However, the problem I am facing now is how to not display parts of that world. I want to limit the view to (for example) 128*128*96 voxels and shift the world or the camera around to show different parts.
To demonstrate my problem, have a look at a (simple) scene consisting of a ground plane with the "viewable" area shown in white. I am looking for the right WebGL/three.js functions to restrict the view to just the white part.
You could remove voxels you don't want to display from the scene.
scene.remove( mesh )
And add them to the scene when you do want to display them.
scene.add( mesh )
I recommend splitting your voxel world into chunks (like Minecraft) and making those chunks into meshes individually. Add the chunk meshes that you want visible to the scene and remove them when you want to hide them.
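A minimal sketch of that chunking idea (the chunk size, the chunks map and the buildChunkMesh helper are assumptions for illustration, not a finished implementation):

var CHUNK = 32;              // assumed chunk size in voxels
var chunks = {};             // "cx,cy,cz" -> THREE.Mesh

function buildChunkMesh( world, cx, cy, cz ) {
    // Placeholder: a real implementation would merge this chunk's visible voxel faces here.
    return new THREE.Mesh( new THREE.BoxGeometry( CHUNK, CHUNK, CHUNK ), new THREE.MeshLambertMaterial() );
}

function setChunkVisible( scene, world, cx, cy, cz, visible ) {
    var key = cx + ',' + cy + ',' + cz;
    var mesh = chunks[ key ];
    if ( visible ) {
        if ( ! mesh ) {
            mesh = buildChunkMesh( world, cx, cy, cz );
            chunks[ key ] = mesh;
        }
        scene.add( mesh );      // show the chunk
    } else if ( mesh ) {
        scene.remove( mesh );   // hide it again; the built mesh stays cached
    }
}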
Related
While working on three.js, I have a 3D model which has several small parts that are joined together to form one loaded model. I am using a raycaster and its intersects to find out which element was clicked. It is basically a door which has small screws, handles, hinges, bolts, rails, rotation points, pivots, etc.
My problem is that with the raycaster in three.js the clicked element is not identified unless I zoom in; only when I zoom in quite far am I able to identify it. Can anyone help me get rid of the need to zoom in?
You might be best to create an additional mesh attached/added to the small elements (like screws). The additional mesh could be a large sphere geometry that surrounds the screw. This larger sphere would be invisible (i.e. its material's visible property is false) and would serve to intercept raycasting for small elements like screws that are difficult to hit on their own.
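A rough sketch of that idea (the screwMesh variable, the proxy radius and the userData.target convention are assumptions for illustration):

// Invisible "hit proxy": a larger sphere around the screw that the raycaster can still hit.
var proxyGeometry = new THREE.SphereGeometry( 2 );                      // radius chosen to taste
var proxyMaterial = new THREE.MeshBasicMaterial( { visible: false } );
var hitProxy = new THREE.Mesh( proxyGeometry, proxyMaterial );
hitProxy.userData.target = screwMesh;            // remember which real part this proxy stands for
screwMesh.add( hitProxy );                       // the proxy follows the screw automatically

// When raycasting, map a proxy hit back to the real part:
var hits = raycaster.intersectObjects( scene.children, true );
if ( hits.length > 0 ) {
    var picked = hits[ 0 ].object.userData.target || hits[ 0 ].object;
    // ... handle the picked part
}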
I have a demo where I use hundreds of cubes that have exactly the same geometry and texture, for example:
texture = THREE.ImageUtils.loadTexture ...
material = new THREE.MeshLambertMaterial( map: texture )
geometry = new THREE.BoxGeometry( 1, 1, 1 )
cubes = []
for i in [0..1000]
cubes.push new THREE.Mesh geometry, material
... on every frame
for cube in cubes
// do something with each cube
Once all the cubes are created I start moving them on the screen.
All of them have the same texture and the same size; they just change position and rotation. The problem here is that when I start using many hundreds of cubes the computer starts to struggle to render it.
Is there any way I could tell Three.js / WebGL that all those objects are the same object, just identical copies in different positions?
I read something about BufferGeometry and Geometry2 being able to boost performance for this sort of situation, but I'm not exactly sure what would be best in this case.
Thank you
Is there any way I could tell Three.js / WebGL that all those objects are the same object, just identical copies in different positions?
Unfortunately there's nothing that can automatically determine and optimize rendercalls in that regard. That would be pretty awesome.
I read something about BufferGeometry and Geometry2 being able to boost performance for this sort of situation, but I'm not exactly sure what would be best in this case.
So, the situation here is this: the normal THREE.Geometry class three.js provides is built for developer convenience, but is a bit removed from how data is handled by WebGL. This is what DirectGeometry (earlier called Geometry2) and BufferGeometry are for. A BufferGeometry is a representation of how WebGL expects data for draw calls to be held: it contains a typed array for every attribute of the geometry. The conversion from Geometry to BufferGeometry happens automatically every time geometry.verticesNeedUpdate is set to true.
If you don't change any of the attributes, this conversion will happen once per geometry (of which you have one), so this is completely fine, and moving to a BufferGeometry won't help (simply because it is already being used under the hood).
The main problem you face with several hundred meshes is the number of draw calls required to render the scene. Generally speaking, every instance of THREE.Mesh represents a single draw call. And those draw calls are expensive: a single draw call that outputs hundreds of thousands of triangles is no problem at all, but thousands of draw calls with 100 triangles each will very quickly become a serious performance problem.
Now, there are different ways the number of draw calls can be reduced with three.js. The first (as already mentioned in the comments) is to combine multiple meshes/geometries into a single one (in the end, meshes are just collections of triangles, so there is no requirement that they form a single "body" or anything like that). This isn't too practical in your case, as it would involve applying the position and rotation of each of your cubes via JS and updating the vertex arrays accordingly on every frame.
What you are really looking for is a WebGL-feature called geometry instancing.
This is not as easy to use as regular meshes and geometries, but not too complicated either.
With instancing, you can render a huge number of objects in a single draw call. All of the rendered objects share a single geometry (your cube geometry with its vertices, normals and UV coordinates). The instancing happens when you add special attributes of type InstancedBufferAttribute that can contain independent values for each instance. So you could add two per-instance attributes for position and rotation (or a single per-instance transformation matrix if you like).
These examples should pretty much be what you are looking for:
http://threejs.org/examples/?q=instancing
The only difficulty with instancing as of now is the material: you will need to provide a custom vertex shader that knows how to apply your per-instance attributes to the vertex positions from the original geometry (this can also be seen in the code of the examples).
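A minimal sketch of that setup, roughly following the instancing examples (the attribute name, instance count and shaders are assumptions, and newer three.js releases use setAttribute instead of addAttribute):

// One cube geometry shared by every instance.
var baseGeometry = new THREE.BoxBufferGeometry( 1, 1, 1 );

var geometry = new THREE.InstancedBufferGeometry();
geometry.index = baseGeometry.index;
geometry.addAttribute( 'position', baseGeometry.attributes.position );
geometry.addAttribute( 'uv', baseGeometry.attributes.uv );

// One extra vec3 per instance: that cube's position.
var count = 1000;
var offsets = new Float32Array( count * 3 );
for ( var i = 0; i < count; i ++ ) {
    offsets[ i * 3 + 0 ] = ( Math.random() - 0.5 ) * 100;
    offsets[ i * 3 + 1 ] = ( Math.random() - 0.5 ) * 100;
    offsets[ i * 3 + 2 ] = ( Math.random() - 0.5 ) * 100;
}
geometry.addAttribute( 'offset', new THREE.InstancedBufferAttribute( offsets, 3 ) );

// Custom shaders that add the per-instance offset to every vertex.
var material = new THREE.ShaderMaterial( {
    vertexShader: [
        'attribute vec3 offset;',
        'void main() {',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4( position + offset, 1.0 );',
        '}'
    ].join( '\n' ),
    fragmentShader: 'void main() { gl_FragColor = vec4( 1.0 ); }'
} );

scene.add( new THREE.Mesh( geometry, material ) );   // 1000 cubes, one draw call

The fragment shader here just outputs white; applying your texture and lighting on top of this is additional work, as in the linked examples.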
You have a webgl tag, so I'm going to give a non-three.js answer.
The best way to handle this is to allocate a float texture holding the model transform matrix data (or just vec3 positions if that's all you need). Then you allocate a mesh chunk containing all your cube data. You also need to add an additional attribute, which I refer to as the modelTransform index. For each "cube instance" in the mesh chunk, write the modelTransform index value corresponding to the correct offset in the model transform data texture.
On each frame, you calculate the correct model transform data for all the cubes and write it into the model transform data texture at the correct offsets. Upload the texture to the GPU every frame.
In the vertex shader, look up the model transform data using the modelTransform index attribute and the float texture. The rest stays the same.
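A rough sketch of what that vertex-shader lookup could look like (the attribute/uniform names and a square RGB float texture holding one position per cube are assumptions):

// GLSL vertex shader (kept here as a JS template string) that fetches a per-cube
// position from a float texture, addressed by the modelTransform index attribute.
var vertexShaderSource = `
    attribute vec3 a_position;         // cube vertex position
    attribute float a_transformIndex;  // which texel of the data texture this cube uses
    uniform sampler2D u_transforms;    // float texture, one position per cube
    uniform float u_textureSize;       // width/height of the (square) data texture
    uniform mat4 u_viewProjection;

    void main() {
        // Convert the 1D index into 2D texture coordinates (sampling texel centers).
        float col = mod( a_transformIndex, u_textureSize );
        float row = floor( a_transformIndex / u_textureSize );
        vec2 uv = ( vec2( col, row ) + 0.5 ) / u_textureSize;

        vec3 cubePosition = texture2D( u_transforms, uv ).rgb;
        gl_Position = u_viewProjection * vec4( a_position + cubePosition, 1.0 );
    }
`;

Note that sampling a texture in the vertex shader requires vertex texture fetch support, which is available on virtually all WebGL-capable hardware today.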
This is what I am using in my engine and it works well for smallish objects such as cubes. Note, however, that updating 150,000 cubes at 60 FPS will likely eat most of your CPU budget in JS. This is unavoidable regardless of which instancing scheme you choose.
If the motion/animation of each cube is fixed, then an even better way is to upload a velocity attribute and an initial creation timestamp attribute for each cube instance. On each frame, send the current time as a uniform and calculate the position as "pos += attr_velocity * getDeltaTime(attr_initTime, unif_currentTime);". This skips the per-frame CPU work altogether and allows you to render a much higher number of cubes.
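As a sketch, the per-vertex math behind that idea might look like this (all the names are made up):

// GLSL snippet for a vertex shader: the position is extrapolated entirely on the GPU.
var motionSnippet = `
    attribute vec3 a_velocity;      // constant velocity of this cube
    attribute float a_spawnTime;    // when this cube started moving (seconds)
    uniform float u_currentTime;    // updated once per frame from JS

    vec3 animatedPosition( vec3 basePosition ) {
        float dt = u_currentTime - a_spawnTime;
        return basePosition + a_velocity * dt;
    }
`;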
I recently made a project in WebGL, using JavaScript and the 3D library three.js.
However, its performance is very poor: slow at the beginning and at best close to okay.
The objects in my game are: 1 car, 6 oranges, 161 cheerios, 1 table, 1 fork and 6 candles.
You control the car as in a race game (WASD or directional keys), which you drive through a circuit limited by cheerios. The car is composed of several three.js geometries (box, torus, cylinder, sphere). If an orange collides with the car, the player goes back to the beginning of the track and loses 1 life.
All oranges move in a straight uniform movement, and can kill the car if they collide with it. The orange model is composed of three.js geometry sphere and cylinder.
The table is a cube scaled to be 300x1x300 in xyz coordinates.
Each candle is a point-light source whose intensity varies to give a flickering sensation.
Besides the 6 point lights, there is also an ambient light and 1 directional light, all created with three.js.
The fork has a billboard-like behaviour: it always rotates to point toward the currently active camera, and is represented by a plane.
Whenever an orange reaches the end of its trajectory and temporarily disappears, or the car finishes a lap, an explosion of particles occurs.
Each explosion can have several particles (at least 100), and each particle is a very small plane with billboard-like behaviour.
Upon the creation of an explosion, all its particles are individually created and added to the scene.
Each explosion also has a time to live in milliseconds, usually 1000. When it expires, the explosion is removed from the scene.
All objects in the game have their own textures, and not all textures have a "good" size, i.e. dimensions that are powers of 2 (32x32, 256x256, 1024x1024, etc.). Each texture is loaded with the deprecated method THREE.ImageUtils.loadTexture(URL).
Everything was built with three.js, from the scene, cameras and lights, to the meshes, geometries and materials.
I noticed that after adding so many cheerios the performance diminished dramatically, so the problem may be rooted in the large number of cheerios rendered each frame.
Since they all share the same model (a simple torus with a simple texture), is there any way of using only one model for all the cheerios (much like in OpenGL with VS libs)?
How can I improve its performance?
Tell me if you need more specific information regarding the problem.
Create a geometry. Then create the cheerio meshes. After creating each mesh, do not add it to the scene; instead merge it into the geometry with:
var globalCheeriosGeometry = new THREE.Geometry();
// create 161 cheerio meshes and merge each one into the global geometry
globalCheeriosGeometry.mergeMesh( cheeriosMesh );
Thus you will create one geometry containing all the cheerios in the scene. Then create one mesh with this geometry and add it to the scene. That will significantly reduce the number of draw calls in your scene.
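A rough sketch of the whole flow (the torus dimensions, the material and the cheerioPositions/cheerioTexture placeholders are assumptions):

var globalCheeriosGeometry = new THREE.Geometry();
var cheerioGeometry = new THREE.TorusGeometry( 1, 0.4, 8, 16 );             // placeholder dimensions
var cheerioMaterial = new THREE.MeshLambertMaterial( { map: cheerioTexture } );

for ( var i = 0; i < 161; i ++ ) {
    var cheeriosMesh = new THREE.Mesh( cheerioGeometry );
    cheeriosMesh.position.copy( cheerioPositions[ i ] );   // wherever your track places it
    cheeriosMesh.updateMatrix();                           // mergeMesh merges using the mesh's matrix
    globalCheeriosGeometry.mergeMesh( cheeriosMesh );
}

// One mesh and one draw call instead of 161.
scene.add( new THREE.Mesh( globalCheeriosGeometry, cheerioMaterial ) );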
I would guess it is something along the lines of calling an expensive (in terms of computation) three.js method too many times. I would profile your game first to determine whether the problem is CPU-bound or GPU-bound.
Besides the 6 pointlights, there is also ambient light and 1 directional light, all created with three.js.
Lighting calculations are expensive and have to be done per pixel for every light source, so consider cutting down the number of light sources.
Each explosion can have several particles (at least 100), and each particle is a very small plane with billboard-like behaviour.
I hope this is done with a billboard particle system and not as individual planes; otherwise three.js will probably issue one draw call per plane.
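As a sketch, a single THREE.Points object can carry all of an explosion's particles in one draw call (the particle count, spread, size and the sparkTexture placeholder are assumptions; newer three.js releases use setAttribute instead of addAttribute):

// One buffer geometry holding every particle position of the explosion.
var particleCount = 100;
var positions = new Float32Array( particleCount * 3 );
for ( var i = 0; i < particleCount; i ++ ) {
    positions[ i * 3 + 0 ] = ( Math.random() - 0.5 ) * 2;
    positions[ i * 3 + 1 ] = ( Math.random() - 0.5 ) * 2;
    positions[ i * 3 + 2 ] = ( Math.random() - 0.5 ) * 2;
}

var particleGeometry = new THREE.BufferGeometry();
particleGeometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );

// Point sprites always face the camera, so the billboarding comes for free.
var particleMaterial = new THREE.PointsMaterial( { size: 0.2, map: sparkTexture, transparent: true } );

scene.add( new THREE.Points( particleGeometry, particleMaterial ) );   // one draw call per explosion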
I am writing a simple minecraft clone in THREE.js.
However, the result is very laggy.
I am using box geometry for the voxels, but I need to remove blocks when they are broken and need to use mouse picking.
I have heard that joining geometries speeds things up, but as far as I am aware that means you cannot remove any of the voxels or use mouse picking.
What are the other ways of speeding up in THREE.js?
Using a box per voxel (one draw call per voxel) will be too slow on any machine, even if you wrote it in assembly language.
You need to build a mesh for every section of your world: divide the world into 48x48x48-unit chunks, say, and build one mesh that contains all the voxels in that area. When the user edits a block you edit that chunk's mesh (its vertices) rather than removing a Box object.
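A rough sketch of rebuilding one chunk's mesh after an edit (the chunk data layout and the isSolid lookup are assumptions; a real implementation would also skip faces hidden between neighbouring voxels):

// Rebuild the single mesh for one 48x48x48 chunk after a block changes.
function rebuildChunk( chunk ) {
    var geometry = new THREE.Geometry();
    var box = new THREE.BoxGeometry( 1, 1, 1 );
    var matrix = new THREE.Matrix4();

    for ( var x = 0; x < 48; x ++ )
        for ( var y = 0; y < 48; y ++ )
            for ( var z = 0; z < 48; z ++ )
                if ( chunk.isSolid( x, y, z ) ) {        // hypothetical voxel lookup
                    matrix.makeTranslation( x, y, z );
                    geometry.merge( box, matrix );       // add this voxel's triangles
                }

    chunk.mesh.geometry.dispose();                       // free the old vertices
    chunk.mesh.geometry = geometry;
}

Mouse picking still works, because the raycaster intersects the merged mesh's triangles; you just map the hit point or face back to a voxel coordinate yourself.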
How can I get the value at each corner of my perspective camera's view in a 3D scene?
I'm using Three.js library.
To be more precise, I will mark what I want to know with the following marker:
These are the coordinates I need to know:
I need this because I'm creating a real map engine with movement in the 3D scene via the mouse cursor.
What I'm trying to achieve is available here:
http://www.zephyrosanemos.com/windstorm/current/live-demo.html
As you can see in this sample, new terrain is loaded when the camera reaches a new location (terrain that was not previously available), and the old tiles are garbage-collected when the camera leaves their viewport:
Now, I want to show a screenshot from my three.js application:
As you can see, I'm loading my scene statically: only one plane with buildings is available (the building data is loaded from my server, and the data was taken from some OSM services).
It can be controlled only by pressing keyboard buttons (e.g. the 3D scene moves to a new location when pressing the arrow keys; you can also see the empty space in the map :) this is only because I prepared cut-down data in the DB for testing purposes; when the application is ready it won't be empty, and it's much easier to work with a small number of records in the DB). All meshes are deleted, and with each new movement the new data is loaded and the new buildings are rendered.
But I want them to be loaded dynamically as the camera moves, as in the example of dynamic terrain generation. I suppose I should prepare a big plane matrix which loads data only for the 8 surrounding planes (as in the terrain generation sample) and add logic for the camera intersecting/leaving the old view to make this dynamic.
So... I want you to help me with this hard task :)
To get the field-of-view angle, simply read the value of this property:
THREE.PerspectiveCamera.fov
With that angle you can construct an imaginary viewing cone (frustum) and test it for collisions. For the collision part, refer to this question:
How to detect collision in three.js?
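As a sketch, three.js can also build the camera's view frustum for you, which you could use to decide which map tiles to load or unload (the tiles collection and the loadTile/unloadTile helpers are hypothetical):

// Build the camera frustum from its projection and world matrices.
var frustum = new THREE.Frustum();
var projScreenMatrix = new THREE.Matrix4();

camera.updateMatrixWorld();
projScreenMatrix.multiplyMatrices( camera.projectionMatrix, camera.matrixWorldInverse );
frustum.setFromMatrix( projScreenMatrix );   // called setFromProjectionMatrix in newer three.js releases

// Load any tile whose bounding sphere enters the view; unload the rest.
tiles.forEach( function ( tile ) {
    if ( frustum.intersectsSphere( tile.boundingSphere ) ) {
        loadTile( tile );
    } else {
        unloadTile( tile );
    }
} );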