Three.js - Editing faces - javascript

I thought I'd ask this here because I can't find any information anywhere (SO or the three.js documentation) - How do you get the average x,y,z coordinates of a specific face? Or at the very least, is there a way to get the x,y,z coordinates of the three vertices that make up a face? And then use those to calculate the average?
So far I have
var fLength = plane.geometry.faces.length;
for (var i = 0; i < fLength; i++) {
var f = plane.geometry.faces[i];
//How do I get the x,y,z of the current/i face?
//Is there a way to move this face?
//Is there a way to extrude this face? Or do -anything- with it for that matter?
}
Additionally, are there any methods for moving a face apart from moving its vertices? What about extruding faces? I realize it's quite a few questions; however, the process for all of this seems a little unclear to me...

Each Three.js Face3 contains properties a, b and c. Those are the indices into the vertex array of the same geometry object. For example, use
v1 = plane.geometry.vertices[f.a];
to get the Vector3 representing the position of the first vertex in the face.
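So, as a rough sketch (assuming a classic THREE.Geometry with faces and vertices arrays), the average of a face's three vertices could be computed like this:
var geom = plane.geometry;
for (var i = 0; i < geom.faces.length; i++) {
  var f = geom.faces[i];
  // the three corner positions of this face
  var v1 = geom.vertices[f.a];
  var v2 = geom.vertices[f.b];
  var v3 = geom.vertices[f.c];
  // centroid = (v1 + v2 + v3) / 3
  var centroid = new THREE.Vector3()
    .add(v1)
    .add(v2)
    .add(v3)
    .divideScalar(3);
  // centroid.x, centroid.y, centroid.z are the averaged coordinates
}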
Three.js doesn't offer many convenience methods for modifying the geometry of an existing object. Like the underlying graphics APIs, its focus is on quickly composing and displaying a scene of mostly static objects (vertex shader operations aside).
You'll have to manually adjust the individual vertices that make up the faces (and set the correct dirty flags), or even rebuild faces if your modifications effectively change the edges. Depending on your use case, your operation might be simpler to perform in a vertex shader.
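Going the CPU route, a minimal sketch of moving one face (again assuming a classic THREE.Geometry; the offset is an arbitrary example) is to displace its three vertices and set the dirty flag:
var f = plane.geometry.faces[0];
var offset = new THREE.Vector3(0, 0, 1); // arbitrary example offset
plane.geometry.vertices[f.a].add(offset);
plane.geometry.vertices[f.b].add(offset);
plane.geometry.vertices[f.c].add(offset);
plane.geometry.verticesNeedUpdate = true; // tell three.js to re-upload the positions
Keep in mind that vertices shared with neighbouring faces will drag those faces along; a true "extrude" means duplicating vertices and building new faces for the sides.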
The point is, the topic of building and modifying geometry quickly gets pretty hairy. If you need any help there, make sure to ask a specific question, outlining your desired operation on the geometry.

Related

Threejs - Best way to handle partial transforms on geometry vertices

I have a mesh with a geometry of about 5k vertices. This is the result of multiple geometries being merged, so it's all in one flat array. So far so good: I can modify individual vertices, and by setting the flag verticesNeedUpdate=true I can see the changes reflected on my mesh. My problem is that I need to apply translations and rotations to some of these vertices, and I was wondering if there is a better way of applying transforms to each collection apart from modifying each vertex position inside a loop.
One idea I had was to create a new geometry and assign the vertices subset (by reference) so I could then do the transforms, but that seemed weird, so I stopped and asked this question instead.
I took a look at this example https://threejs.org/examples/?q=constr#webgl_buffergeometry_constructed_from_geometry but I have no idea how I would go about rotating/scaling groups of vertices.
Also, from what I understand, this reduces the calls to the GPU by uploading only one set of vertices instead of making hundreds of calls. But then I wonder if modifying this geometry's vertices on each frame defeats the whole purpose of doing all this?
As far as I can see, the example creates multiple heart shapes (geometry), applies a transformation, and then combines the transformed vertices into one geometry (BufferGeometry). So all hearts are in the same geometry and drawn in one call. The downside is that you can't manipulate the hearts individually.
The key here is that the transformations are done once, up front, and the transformed coordinates are uploaded to the GPU. You don't want to update the vertices each frame on the CPU.
geometry.lookAt( vector );
geometry.translate( vector.x, vector.y, vector.z );
is responsible for transforming the vertices, before they are added to the bufferGeometry.
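In outline, the pattern is roughly this (a sketch; heartGeometry, positions and directions are placeholder names, and merge/fromGeometry belong to the classic-geometry API of older three.js releases):
var merged = new THREE.Geometry();
for (var i = 0; i < positions.length; i++) {
  var piece = heartGeometry.clone();   // fresh copy of the template geometry
  piece.lookAt(directions[i]);         // orient it
  piece.translate(positions[i].x, positions[i].y, positions[i].z); // place it
  merged.merge(piece);                 // bake the transformed vertices into one geometry
}
var bufferGeometry = new THREE.BufferGeometry().fromGeometry(merged);
var mesh = new THREE.Mesh(bufferGeometry, material); // everything drawn in a single call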
If you can add an 'index' to each vertex, you could use a UBO for storing matrices and give vertices different transformations (in the vertex shader) within the same drawcall.

Copy mesh thousand times and animate without big performance hit?

I have a demo where I used hundreds of cubes that have exactly the same geometry and texture, for example:
texture = THREE.ImageUtils.loadTexture ...
material = new THREE.MeshLambertMaterial( map: texture )
geometry = new THREE.BoxGeometry( 1, 1, 1 )
cubes = []
for i in [0..1000]
cubes.push new THREE.Mesh geometry, material
... on every frame
for cube in cubes
// do something with each cube
Once all the cubes are created I start moving them on the screen.
All of them have the same texture and the same size; they just change position and rotation. The problem here is that when I start using many hundreds of cubes the computer starts to struggle to render it.
Is there any way I could tell Three.js / WebGL that all those objects are the same object, just identical copies in different positions?
I read something about BufferGeometry and Geometry2 being able to boost performance for this sort of situation, but I'm not exactly sure what would be best in this case.
Thank you
Is there any way I could tell Three.js / WebGL that all those objects are the same object, just identical copies in different positions?
Unfortunately there's nothing that can automatically determine and optimize rendercalls in that regard. That would be pretty awesome.
I read something about BufferGeometry and Geometry2 being able to boost performance for this sort of situation but i'm not exactly sure what would be the best in this case.
So, the gist here is this: the normal THREE.Geometry class three.js provides is built for developer convenience, but is a bit removed from how the data is handled by WebGL. This is what DirectGeometry (earlier called Geometry2) and BufferGeometry are for. A BufferGeometry is a representation of how WebGL expects data for drawcalls to be held: it contains a typed array for every attribute of the geometry. The conversion from Geometry to BufferGeometry happens automatically every time geometry.verticesNeedUpdate is set to true.
If you don't change any of the attributes, this conversion will happen once per geometry (of which you have 1), so this is completely OK, and moving to a buffer geometry won't help (simply because you are effectively already using one).
The main problem you face with several hundred objects is the number of drawcalls required to render the scene. Generally speaking, every instance of THREE.Mesh represents a single drawcall. And those drawcalls are expensive: a single drawcall that outputs hundreds of thousands of triangles is no problem at all, but thousands of drawcalls with 100 triangles each will very quickly become a serious performance problem.
Now, there are different ways the number of drawcalls can be reduced using three.js. The first is (as already mentioned in the comments) to combine multiple meshes/geometries into a single one (in the end, meshes are just collections of triangles, so there's no requirement that they form a single "body" or anything like that). This isn't too practical in your case, as it would involve applying the position and rotation of each of your cubes via JS and updating the vertex arrays accordingly on each frame.
What you are really looking for is a WebGL-feature called geometry instancing.
This is not as easy to use as regular meshes and geometries, but not too complicated either.
With instancing, you can create a huge number of objects in a single drawcall. All of the rendered objects will share a single geometry (your cube geometry with its vertices, normals and uv coordinates). The instancing happens when you add special attributes of type InstancedBufferAttribute that contain independent values for each of the instances. So you could add two per-instance attributes for position and rotation (or a single per-instance transformation matrix if you like).
These examples should pretty much be what you are looking for:
http://threejs.org/examples/?q=instancing
The only difficulty with instancing as of now is the material: you will need to provide a custom vertex-shader that knows how to apply your per-instance-attributes to the vertex-positions from the original geometry (this can also be seen in the code of the examples).
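A minimal sketch of the idea (attribute names are made up; newer three.js releases use setAttribute where older ones had addAttribute):
var instances = 1000;
var box = new THREE.BoxBufferGeometry(1, 1, 1);

// instanced geometry: shares the box's vertex data, adds one offset per instance
var geometry = new THREE.InstancedBufferGeometry();
geometry.index = box.index;
geometry.attributes.position = box.attributes.position;

var offsets = new Float32Array(instances * 3);
for (var i = 0; i < instances; i++) {
  offsets[i * 3 + 0] = (Math.random() - 0.5) * 100;
  offsets[i * 3 + 1] = (Math.random() - 0.5) * 100;
  offsets[i * 3 + 2] = (Math.random() - 0.5) * 100;
}
geometry.setAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));

// the custom vertex shader applies the per-instance offset
var material = new THREE.RawShaderMaterial({
  vertexShader: [
    'precision highp float;',
    'uniform mat4 modelViewMatrix;',
    'uniform mat4 projectionMatrix;',
    'attribute vec3 position;',
    'attribute vec3 offset;',
    'void main() {',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position + offset, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: 'precision highp float; void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }'
});

scene.add(new THREE.Mesh(geometry, material)); // 1000 cubes, one drawcall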
You have a webgl tag, so I'm going to give a non-three.js answer.
The best way to handle this is to allocate a float texture holding model transform matrix data (or just vec3 positions if that's all you need). Then you allocate a mesh chunk containing all your cube data. You need to add an additional attribute, which I refer to as the modelTransform index. For each "cube instance" in the mesh chunk, write the modelTransform index value corresponding to the correct offset in the model transform data texture.
On each frame, you calculate the model transform data for all the cubes and write it to the model transform data texture at the correct offsets. Upload the texture to the GPU on each frame.
In the vertex shader, access the model transform data from the modelTransform index attribute and the float texture. Rest is the same.
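As a sketch of what the vertex shader side could look like (GLSL for WebGL1; uTransforms, uTransformsSize and modelTransformIndex are made-up names, and each matrix is assumed to be stored as four consecutive RGBA texels):
attribute vec3 position;
attribute float modelTransformIndex;  // which matrix in the texture this vertex uses
uniform sampler2D uTransforms;        // float texture: 4 texels (= 4 columns) per matrix
uniform vec2 uTransformsSize;         // texture size in texels
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

vec4 fetchTexel(float i) {
  float x = mod(i, uTransformsSize.x);
  float y = floor(i / uTransformsSize.x);
  return texture2D(uTransforms, (vec2(x, y) + 0.5) / uTransformsSize);
}

void main() {
  float base = modelTransformIndex * 4.0;
  mat4 model = mat4(fetchTexel(base),
                    fetchTexel(base + 1.0),
                    fetchTexel(base + 2.0),
                    fetchTexel(base + 3.0));
  gl_Position = projectionMatrix * viewMatrix * model * vec4(position, 1.0);
}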
This is what I am using in my engine and it works well for smallish objects such as cubes. Note, however, that updating 150000 cubes at 60 FPS will likely take most of your CPU resources in JS. This is unavoidable regardless of which instancing scheme you take.
If the motion/animation of each cube is fixed, then an even better way to do it is to upload a velocity attribute and an initial creation timestamp attribute for each cube instance. On each frame, send the current time as a uniform and calculate the position as "pos += attr_velocity * getDeltaTime(attr_initTime, unif_currentTime);". This skips the CPU work altogether and allows you to render a much higher number of cubes.

WebGL display loaded model without matrix

I'm learning WebGL. I've managed to draw stuff and hopefully understood the pipeline. Now, every tutorial I see explains matrices before even loading a mesh. While that may be good for most people, I think I need to concentrate on the process of loading external geometry, maybe through a JSON file. I've read that OpenGL by default displays things orthogonally, so I ask: is it possible to display a 3d mesh without any kind of transformation?
Now, every tutorial I see explains matrices before even loading a mesh.
Yes. Because understanding transformations is essential and you will need to work with them. They're not hard to understand, and the sooner you wrap your head around them, the better. In the case of OpenGL, the model-view transformation part is actually rather simple:
The transformation matrix is just a bunch of vectors (in columns) placed within a "parent" coordinate system. The first three columns define how the X, Y and Z axes of the "embedded" coordinate system are aligned within the "parent"; the W column moves it around. By varying the lengths of the basis vectors you can stretch, i.e. scale, things.
That's it, there's nothing more to it (in the modelview) than that. Learn the rules of matrix-matrix multiplication. Matrix-vector multiplication is just a special case of matrix-matrix multiplication.
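For instance, a column-major 4x4 model matrix as WebGL expects it (the values are just an example):
// The first three columns are the X, Y and Z basis vectors of the object's
// local frame; the fourth column places its origin in the parent frame.
var model = new Float32Array([
  1, 0, 0, 0,   // column 0: local X axis
  0, 1, 0, 0,   // column 1: local Y axis
  0, 0, 1, 0,   // column 2: local Z axis
  2, 3, 4, 1    // column 3: translation, moves the object to (2, 3, 4)
]);
// Doubling the first column's length would scale the object by 2 along X.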
The projection matrix is a little bit trickier, but I suggest you don't bother too much with it, just use GLM, Eigen::3D or linmath.h to build the matrix. The best analogy for the projection matrix is being the "lens" of OpenGL, i.e. this is where you apply zoom (aka field of view), tilt and shift. But the place of the "camera" is defined through the modelview.
is it possible to display a 3d mesh without any kind of transformation?
No. Because the mesh coordinates have to be transformed into screen coordinates. However, an identity transform is perfectly possible, which, yes, looks like a dead-on orthographic projection where the coordinate range [-1, 1] in either dimension is mapped to fill the viewport.

Three.js. Finding neighboring vertices within radius

I've been looking for a way to find the vertices within a certain radius of a given point. One way to do this is brute force: after selecting a point (ray picking), loop over all vertices, check whether each is within the set radius, and voilà. However, this tends to get quite slow for models with lots of vertices.
What I would want to do is use ray picking to select a point on the model. This would give me the face this point is on. From that face I can get the vertices belonging to it. These vertices can be "shared" across faces. This might allow me to search outward from this point, flagging visited vertices and stopping whenever the distance reaches the set maximum (radius). However, from what I can see from a dump of the geometry, I can get the vertices belonging to a face directly, but there's no way to get the faces that a vertex belongs to. That is, without preprocessing. Am I right here, or did I miss something?
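The preprocessing I have in mind would be something along these lines (just a sketch, assuming a classic THREE.Geometry; the function name is made up):
// Build a vertex-index -> face-indices lookup once, reuse it for every search.
function buildVertexToFaces(geometry) {
  var map = {};
  for (var i = 0; i < geometry.faces.length; i++) {
    var f = geometry.faces[i];
    [f.a, f.b, f.c].forEach(function (v) {
      (map[v] = map[v] || []).push(i);
    });
  }
  return map;
}
// var vertexToFaces = buildVertexToFaces(mesh.geometry);
// vertexToFaces[42] would list the faces sharing vertex 42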

how to "sort" polygons 3d?

I am still working on my "javascript 3d engine" (link inside stackoverflow).
At first, all my polygons were faces of cubes, so sorting them by average Z worked fine. But now I've "evolved" and I want to draw my polygons (which may contain more than 4 vertices) in the right order, namely, those that are close to the camera will be drawn last.
Basically, I know how to rotate them and "perspective"-ize them into 2D, but I don't know how to draw them in the right order.
just to clarify:
//my 3d shape = array of polygons
//polygon = array of vertices
//vertex = point with x,y,z
//rotation is around (0,0,0) and my view point is (0,0,something) I guess.
can anyone help?
p.s: some "catch phrases" I came up with, looking for the solution: z-buffering, ray casting (?!), plane equations, view vector, and so on - guess I need a simple to understand answer so that's why I asked this one. thanks.
p.s2: I don't mind too much about overlapping or intersecting polygons... so maybe the painter's algorithm indeed might be good. But what is it exactly? How do I decide the distance of a polygon? A polygon has many points.
The approach of sorting polygons and then drawing them back-to-front (farthest first) is called the "painter's algorithm". Unfortunately the sorting step is in general an unsolvable problem, because it's possible for 3 polygons to overlap each other cyclically.
Thus there is not necessarily any polygon that is "on top". Alternate approaches such as using a Z buffer or BSP tree (which involves splitting polygons) don't suffer from this problem.
How do I decide the distance of a polygon? A polygon has many points.
Painter's algorithm is the simplest to implement, but it works only in very simple cases because it assumes that there is only a single "distance" or z-value for each polygon (which you could approximate to be the average of z-values of all points in the polygon). Of course, this will produce wrong results if two polygons intersect each other.
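A sketch of that approximation, using the question's own structures (an array of polygons, each an array of {x, y, z} points):
function averageZ(polygon) {
  var sum = 0;
  for (var i = 0; i < polygon.length; i++) sum += polygon[i].z;
  return sum / polygon.length;
}

// Sort so the polygon farthest from the camera comes first
// (flip the comparison if your camera looks down the other direction of the z axis).
polygons.sort(function (a, b) {
  return averageZ(a) - averageZ(b);
});

// polygons.forEach(drawPolygon); // then draw them in that order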
In reality, there isn't a single distance value for a polygon -- each point on the surface of a polygon can be at a different distance from the viewer, so each point has its own "distance" or depth.
You already mentioned Z-buffering, and that is one way of doing this. I don't think you can implement this efficiently on an HTML canvas, but here's the general idea:
You need to maintain an additional canvas, the "z-buffer", where each pixel's colour represents the z-depth of the corresponding pixel on the main canvas.
To draw a polygon, you go through each point on its surface and draw only those points which are closer to the viewer than any previous objects, as indicated by the z-buffer.
I think you will get some ideas by investigating BSP trees (binary space partitioning trees), even if the algorithm will require splitting some of your polygons in two.
Some examples can be found here http://www.devmaster.net/articles/bsp-trees/ or by googling for BSP trees. Posting code as a reply is, in my opinion, not practical since this is a complex topic.
