I am currently using the three.js Geometry class to create a shape and then performing multiple CSG operations on it, continuously redrawing the shape.
This process of performing multiple CSG operations is slow: on every click I use raycasting to pick the shape, then perform CSG between the selected shape and a pre-defined shape (any shape or geometry).
So my questions are:
Will using BufferGeometry speed up my CSG? And if so, is there any library that performs CSG operations on THREE.BufferGeometry instances?
Is there any other way to speed up the process?
This is my code flow:
var objects = [];
init();
render();
function init() {
    // scene and camera setup ... etc.
    var sphereGeometry = new THREE.SphereGeometry(200, 32, 32);
    var sphereMesh = new THREE.Mesh(sphereGeometry, material); // material defined elsewhere
    scene.add(sphereMesh);
    objects.push(sphereMesh); // raycaster.intersectObjects expects meshes, not bare geometries
    // raycasting config
    // bind mouse click and move events
}
function onMouseDown() {
    var intersects = raycaster.intersectObjects(objects);
    // ...
    // get the intersected shape,
    // perform CSG between it and the pre-defined shape
    // (this also involves converting the geometry to the
    //  CSG library's geometry and back to a three.js geometry),
    // then replace the existing shape with the result
    // ...
    render();
}
I am using this library for CSG operations, and the overall flow is similar to this example from the three.js examples.
I don't have numbers for a performance comparison, but you can find a BufferGeometry-capable library in the develop branch of ThreeCSG (ThreeCSG develop, from Wilt).
It supports BufferGeometry (from the examples):
var nonIndexedBoxMesh = new THREE.Mesh( nonIndexedBufferGeometry, material );
var bsp1 = new ThreeBSP( nonIndexedBoxMesh );
var indexedBoxMesh = new THREE.Mesh( indexedBufferGeometry, material );
var bsp2 = new ThreeBSP( indexedBoxMesh );
var geometry = bsp1.subtract( bsp2 ).toBufferGeometry();
var mesh = new THREE.Mesh( geometry, material );
It works with three.js r75.
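To connect that to your code flow, here is a minimal sketch of the onMouseDown step using that build. Note that toolMesh (the pre-defined shape) and material are assumed to exist already; those names are only for illustration:
function onMouseDown() {
    var intersects = raycaster.intersectObjects(objects);
    if (intersects.length === 0) return;

    var picked = intersects[0].object;

    // CSG between the picked mesh and the pre-defined "tool" mesh
    var result = new ThreeBSP(picked).subtract(new ThreeBSP(toolMesh));
    var resultMesh = new THREE.Mesh(result.toBufferGeometry(), material);

    // swap the old mesh for the result
    scene.remove(picked);
    objects.splice(objects.indexOf(picked), 1);
    scene.add(resultMesh);
    objects.push(resultMesh);

    render();
}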
Related
I'm new to the area of geometry generation and manipulation, and I'm planning on doing this on an intricate and large scale. I know the basic way of doing it is like what's shown in the answer to this question:
var geom = new THREE.Geometry();
var v1 = new THREE.Vector3(0,0,0);
var v2 = new THREE.Vector3(0,500,0);
var v3 = new THREE.Vector3(0,500,500);
geom.vertices.push(v1);
geom.vertices.push(v2);
geom.vertices.push(v3);
geom.faces.push( new THREE.Face3( 0, 1, 2 ) );
geom.computeFaceNormals();
var object = new THREE.Mesh( geom, new THREE.MeshNormalMaterial() );
object.position.z = -100;//move a bit back - size of 500 is a bit big
object.rotation.y = -Math.PI * .5;//triangle is pointing in depth, rotate it -90 degrees on Y
scene.add(object);
But I do have experience with image manipulation, working directly with a typed-array image buffer on the GPU, which is essentially the same thing as manipulating 3D points: colors are essentially 3D points on a 2D grid (in the case of a buffer, flattened out to a 1D typed array), and I know just how much faster that kind of large-scale manipulation is when processed with shaders on the GPU.
So I'm wondering if I can access the geometry in three.js directly as a typed array buffer. If so, I can use gpu.js to manipulate it on the GPU rather than CPU and boost performance exponentially.
Basically I'm asking if there's something like canvas's getImageData method for three.js geometry.
As ThJim01 mentioned in the comment, THREE.BufferGeometry is the way to go, but if you insist on using THREE.Geometry to initialize your list of triangles, you can use the BufferGeometry.fromGeometry function to generate the BufferGeometry from the Geometry you originally made.
var geometry = new THREE.Geometry();
// ... initialize verts and faces ...
// Initialize the BufferGeometry
var buffGeom = new THREE.BufferGeometry();
buffGeom.fromGeometry(geometry);
// Print the typed array for the position of the vertices
console.log(buffGeom.getAttribute('position').array);
Note that the resulting geometry will not have an index array and will just be a list of disjoint triangles (which is how it was represented in the first place!).
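If you then want to change those vertices in place (for example with values computed by gpu.js), you can write back into the same typed array and flag the attribute for re-upload. A minimal sketch, continuing from buffGeom above:
var position = buffGeom.getAttribute('position');
var array = position.array; // Float32Array laid out as [x0, y0, z0, x1, y1, z1, ...]

// Example: push every vertex 10 units along +Z
for (var i = 2; i < array.length; i += 3) {
    array[i] += 10;
}

position.needsUpdate = true; // tell three.js to re-upload the attribute to the GPU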
Hope that helps!
I'm using Three.js and trying to create some custom shapes, similar to one that appears in a project by one of the agencies using three.js:
three.js featured project example
How did they generate these boxes with holes inside? (In that example the
boxes basically have only borders around them and are empty inside.)
As far as I could see in the code (I was trying to figure it out myself), they use BoxGeometry, but I have no idea how to accomplish that. Does anyone know, or can anyone give me any directions? It would be really helpful, as I'm stuck with this and have no idea how to create them.
So in THREE.js, Meshes represent any kind of 3D object. They combine geometries and materials. Generally, to create a mesh you call
var mesh = new THREE.Mesh( geometry, material );
If you use any of the built-in materials (MeshBasicMaterial, MeshLambertMaterial, etc., which wrap the built-in shaders), they have a wireframe boolean property that gives you this look.
var geometry = new THREE.BoxGeometry( x, y, z ),
    material = new THREE.MeshBasicMaterial( {
        wireframe: true, // this makes the object render as a wireframe
        color: 0xffffff  // you can alter other properties as well
    } );
var box = new THREE.Mesh( geometry, material );
// You can also change it later
box.material.wireframe = false;
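Keep in mind that wireframe draws every triangle edge, including the diagonal across each box face. If you only want the outer borders of the box, another approach (I cannot say whether the featured project uses it) is THREE.EdgesGeometry rendered with THREE.LineSegments:
var boxGeometry = new THREE.BoxGeometry( x, y, z );
var edges = new THREE.EdgesGeometry( boxGeometry ); // keeps only the hard edges of the box
var border = new THREE.LineSegments( edges, new THREE.LineBasicMaterial( { color: 0xffffff } ) );
scene.add( border );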
I'm rendering an object with textures using MTL and OBJ files with Three.js. My code here works but my model is displayed as flat shaded. How do I enable smooth shading?
var scene = new THREE.Scene();

var mtlLoader = new THREE.MTLLoader();
mtlLoader.setPath('assets/');
mtlLoader.setBaseUrl('assets/');
mtlLoader.load('asset.mtl', function(materials) {
    materials.preload();

    var objLoader = new THREE.OBJLoader();
    objLoader.setMaterials(materials);
    objLoader.setPath('assets/');
    objLoader.load('asset.obj', function(object) {
        //
        // This solved my problem
        //
        object.traverse(function(child) {
            if (child instanceof THREE.Mesh) {
                child.material.shading = THREE.SmoothShading;
            }
        });
        //
        //
        scene.add(object);
    });
});
EDIT:
I updated my code with a solution that fixed my problem based on the accepted answer.
It could be one of two things that I can think of right now.
It could be that the material is set to FlatShading. In this case, just retrieve the object and use object.material.shading = THREE.SmoothShading; to fix it.
If that doesn't change anything, it's possible that the object contains per-vertex normals (meaning that every vertex of every triangle has a normal attached to it) and that all the normals of each triangle point in the same direction. This is something that is better fixed in the 3D-editing process, but you can also recompute the normals in three.js:
object.geometry.computeVertexNormals(true);
This should [1] recompute the normals for smooth surfaces. However, it only works for regular Geometries and indexed BufferGeometries (or, to put it the other way around: it won't work if the geometry has no information about vertices being reused by adjacent faces).
[1]: I didn't test it myself and am just going by what I read in the code.
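Putting both suggestions together, a minimal sketch of what you could run on the loaded object (assuming an older three.js release where material.shading still exists) looks like this:
object.traverse(function(child) {
    if (child instanceof THREE.Mesh) {
        child.material.shading = THREE.SmoothShading;
        child.material.needsUpdate = true; // the shader program has to be rebuilt after changing shading
        // rebuild smooth normals where the geometry allows it
        child.geometry.computeVertexNormals();
    }
});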
You may also need to smooth the geometry by welding nearby vertices first:
geometry = BufferGeometryUtils.mergeVertices(geometry, 0.1); // 0.1 is the merge tolerance in world units
geometry.computeVertexNormals();
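BufferGeometryUtils is not part of the three.js core build; it ships with the examples (examples/jsm/utils/BufferGeometryUtils.js), so it has to be imported separately, and the exact import form depends on your three.js version. Applied to the OBJ-loaded object from the question, a sketch might look like this:
object.traverse(function(child) {
    if (child instanceof THREE.Mesh) {
        // weld vertices that are within 0.1 units of each other so adjacent
        // faces share vertices, then rebuild the averaged (smooth) normals
        child.geometry = BufferGeometryUtils.mergeVertices(child.geometry, 0.1);
        child.geometry.computeVertexNormals();
    }
});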
I'm using Three.js to display planes, however I can't seem to find a way to change the normal of it. There's a Plane class that has a normal property so is there any way to use this instead of the PlaneGeometry one?
PlaneGeometry offers no means to change its normal, which is effectively always (0,0,1).
To make the plane geometry face in a different direction, you need to transform its vertices. This
is done by converting a Plane object to a transformation matrix and applying that
matrix to the PlaneGeometry. Here is code that generates a transformation matrix:
// Assumes that "plane" is the source THREE.Plane object.
// Normalize the plane
var normPlane=new THREE.Plane().copy(plane).normalize();
// Rotate from (0,0,1) to the plane's normal
var quaternion=new THREE.Quaternion()
.setFromUnitVectors(new THREE.Vector3(0,0,1),normPlane.normal);
// Calculate the translation
var position=new THREE.Vector3(
-normPlane.constant*normPlane.normal.x,
-normPlane.constant*normPlane.normal.y,
-normPlane.constant*normPlane.normal.z);
// Create the matrix
var matrix=new THREE.Matrix4()
.compose(position,quaternion,new THREE.Vector3(1,1,1));
// Transform the geometry (assumes that "geometry"
// is a THREE.PlaneGeometry or indeed any
// THREE.Geometry; note that newer three.js
// releases call this method applyMatrix4)
geometry.applyMatrix(matrix);
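As a usage sketch (the plane values below are made up for illustration), you would build a THREE.Plane, run the code above against it, and then wrap the transformed geometry in a mesh:
// a plane facing +Y, 5 units above the origin (0*x + 1*y + 0*z - 5 = 0)
var plane = new THREE.Plane( new THREE.Vector3( 0, 1, 0 ), -5 );
var geometry = new THREE.PlaneGeometry( 10, 10 );

// ... run the snippet above to build "matrix" from "plane" ...
geometry.applyMatrix( matrix );

var mesh = new THREE.Mesh( geometry, new THREE.MeshBasicMaterial( { side: THREE.DoubleSide } ) );
scene.add( mesh );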
There is another option that perhaps can suit you. You can use lookAt method from the Mesh class. This method is inherited from Object3D class. You just need to specify the point where the plane will look. This way you can reuse your PlaneGeometry for other Mesh instances.
var geometry = new THREE.PlaneGeometry( 12, 12 );
var material = new THREE.MeshBasicMaterial( { color: 0x005E99 } );
var plane = new THREE.Mesh( geometry, material );
plane.lookAt(new THREE.Vector3(0.7, 0.7, 0.7));
I have created meshes and rendered 10 3D objects using three.js.
How do I access each object individually to perform scaling, rotation and all that? Do I
need to get each object individually (like a div)?
Please help me solve this issue.
Thanks!
You do not seem to be asking a real question, but rather asking for someone to teach you something. In the 'startup code', a SphereGeometry object is combined with a MeshBasicMaterial object in order to create the Mesh object, which is the 3D object whose position, rotation, etc. you can then get or set. Here are the lines of code I am referring to:
var geometry = new THREE.SphereGeometry( 75, 20, 10 );
var material = new THREE.MeshBasicMaterial( { color: 0xffffff, wireframe: true } );
var mesh = new THREE.Mesh( geometry, material );
Once you create mesh objects you need to add them to the scene with a call to scene.add(mesh). At this point you can set or get the rotation or position like so:
mesh.position.x = 50;
mesh.rotation.z = Math.PI / 2; // rotations are in radians
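For example, a minimal sketch that creates 10 such meshes, keeps them in an array, and then manipulates individual objects later:
var meshes = [];

for (var i = 0; i < 10; i++) {
    var geometry = new THREE.SphereGeometry( 75, 20, 10 );
    var material = new THREE.MeshBasicMaterial( { color: 0xffffff, wireframe: true } );
    var mesh = new THREE.Mesh( geometry, material );

    mesh.position.x = i * 200; // spread the objects out along the x axis
    scene.add( mesh );
    meshes.push( mesh );      // keep a reference so each object can be accessed later
}

// Access any object individually by its index
meshes[3].scale.set( 2, 2, 2 );
meshes[7].rotation.y = Math.PI / 4;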