I am new to Three.js, so perhaps I am not going about this optimally. I have geometry which I create as follows:
const geo = new THREE.PlaneBufferGeometry(10,0);
I then apply a rotation to it:
geo.applyMatrix( new THREE.Matrix4().makeRotationX( Math.PI * 0.5 ) );
Then I create a Mesh from it:
const open = new THREE.Mesh( geo, materialNormal);
I then apply a bunch of operations to the mesh to position it correctly, as follows:
open.position.copy(v2(10, 20));
open.position.z = 0.5 * 10;
open.position.x -= 20;
open.position.y -= 10;
open.rotation.z = angle;
Now what is the best way to get the vertices of the mesh, both before and after its position is changed? I was surprised to discover that a mesh in three.js does not expose its vertices directly.
Any hints and code samples would be greatly appreciated.
I think you're getting tripped up by some semantics regarding three.js objects.
1) A Mesh does not have vertices. A Mesh contains references to Geometry/BufferGeometry, and Material(s). The vertices are contained in the Mesh's geometry property/object.
2) You're using PlaneBufferGeometry, which means an implementation of a BufferGeometry object. BufferGeometry keeps its vertices in the position attribute (mesh.geometry.attributes.position). Keep in mind that the vertex order may be affected by the index property (mesh.geometry.index).
Now to your question: the geometry's origin is also its parent Mesh's origin, so your "before mesh transformation" vertex positions are exactly the same as when you created the mesh. Just read them out as-is.
To get the "after mesh transformation" vertex positions, you'll need to take each vertex, and convert it from the Mesh's local space, into world space. Luckily, three.js has a convenient function to do this:
var tempVertex = new THREE.Vector3();
// set tempVertex based on information from mesh.geometry.attributes.position
mesh.localToWorld(tempVertex);
// tempVertex is converted from local coordinates into world coordinates,
// which is its "after mesh transformation" position
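Putting both together, here is a minimal sketch (assuming mesh is the plane mesh from the question; fromBufferAttribute and localToWorld are standard three.js methods):

// Make sure the mesh's world matrix reflects its latest position/rotation.
mesh.updateMatrixWorld(true);

const posAttr = mesh.geometry.attributes.position;
const localVerts = [];
const worldVerts = [];

for (let i = 0; i < posAttr.count; i++) {
    // "Before": the vertex as authored, in the mesh's local space.
    const local = new THREE.Vector3().fromBufferAttribute(posAttr, i);
    localVerts.push(local);

    // "After": the same vertex converted into world space.
    worldVerts.push(mesh.localToWorld(local.clone()));
}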
Here's an example written in TypeScript. It gets an object's vertices in the world coordinate system.
GetObjectVertices(obj: THREE.Object3D): { pts: Array<THREE.Vector3>, faces: Array<THREE.Face3> }
{
    let pts: Array<THREE.Vector3> = [];
    let rs = { pts: pts, faces: null };
    if (obj.hasOwnProperty("geometry"))
    {
        let geo = obj["geometry"];
        if (geo instanceof THREE.Geometry)
        {
            for (let pt of geo.vertices)
            {
                // Clone so the original geometry is left untouched.
                // Note: obj.matrix is the local transform; use obj.matrixWorld
                // instead if the object has transformed ancestors.
                pts.push(pt.clone().applyMatrix4(obj.matrix));
            }
            rs.faces = geo.faces;
        }
        else if (geo instanceof THREE.BufferGeometry)
        {
            // Convert to a (legacy) Geometry to get a vertices/faces view.
            let tempGeo = new THREE.Geometry().fromBufferGeometry(geo);
            for (let pt of tempGeo.vertices)
            {
                pts.push(pt.applyMatrix4(obj.matrix));
            }
            rs.faces = tempGeo.faces;
            tempGeo.dispose();
        }
    }
    return rs;
}
Or, reading the position attribute directly:
if (geo instanceof THREE.BufferGeometry)
{
    let positions: Float32Array = geo.attributes["position"].array;
    let ptCount = positions.length / 3;
    for (let i = 0; i < ptCount; i++)
    {
        let p = new THREE.Vector3(positions[i * 3], positions[i * 3 + 1], positions[i * 3 + 2]);
        // Transform to world space and collect, as above.
        pts.push(p.applyMatrix4(obj.matrix));
    }
}
I have a Mesh created with a BufferGeometry.
I also have the coordinates of where my mouse intersects the Mesh, using the Raycaster.
I am trying to detect faces within(and touching) a radius from the intersection point.
Once I detect the "tangent" faces, I then want to color the faces. Because I am working with a BufferGeometry, I am manipulating the buffer attributes on my geometry.
Here is my code:
let vertexA;
let vertexB;
let vertexC;
let intersection;
const radius = 3;
const color = new THREE.Color('red');
const positionsAttr = mesh.geometry.attributes.position;
const colorAttr = mesh.geometry.attributes.color;
// on every mouseMove event, do below:
vertexA = new THREE.Vector3();
vertexB = new THREE.Vector3();
vertexC = new THREE.Vector3();
intersection = raycaster.intersectObject(mesh)[0].point; // intersectObject returns an array
// function to detect tangent edge
function isEdgeTouched(v1, v2, point, radius) {
    const line = new THREE.Line3();
    const closestPoint = new THREE.Vector3();
    line.set(v1, v2);
    line.closestPointToPoint(point, true, closestPoint);
    return point.distanceTo(closestPoint) < radius;
}
// function to color a face
function colorFace(faceIndex) {
    colorAttr.setXYZ(faceIndex * 3 + 0, color.r, color.g, color.b);
    colorAttr.setXYZ(faceIndex * 3 + 1, color.r, color.g, color.b);
    colorAttr.setXYZ(faceIndex * 3 + 2, color.r, color.g, color.b);
    colorAttr.needsUpdate = true;
}
// iterate over each face, color it if tangent
for (let i = 0; i < positionsAttr.count / 3; i++) {
    vertexA.fromBufferAttribute(positionsAttr, i * 3 + 0);
    vertexB.fromBufferAttribute(positionsAttr, i * 3 + 1);
    vertexC.fromBufferAttribute(positionsAttr, i * 3 + 2);
    if (isEdgeTouched(vertexA, vertexB, intersection, radius)
        || isEdgeTouched(vertexB, vertexC, intersection, radius)
        || isEdgeTouched(vertexC, vertexA, intersection, radius)) {
        colorFace(i);
    }
}
While this code works, its performance is very poor, especially when I am working with a geometry with many faces. When I checked the performance monitor in Chrome DevTools, I noticed that both the isEdgeTouched and colorFace functions take up too much time on each iteration over a face.
Is there a way to improve this algorithm, or is there a better algorithm to use to detect adjacent faces?
Edit
I got some help from the THREE.js Slack channel and modified the algorithm to use Three's Sphere. I am now no longer doing "edge" detection, but instead checking whether each face is within the Sphere.
Updated code below:
const sphere = new THREE.Sphere(intersection, radius);
// now checking if each vertex of a face is within the sphere;
// if all are, then color the face at index i.
// (Note: the position attribute is in the mesh's local space, so this
// assumes the mesh has no transform; otherwise convert each vertex
// with mesh.localToWorld first.)
for (let i = 0; i < positionsAttr.count / 3; i++) {
    vertexA.fromBufferAttribute(positionsAttr, i * 3 + 0);
    vertexB.fromBufferAttribute(positionsAttr, i * 3 + 1);
    vertexC.fromBufferAttribute(positionsAttr, i * 3 + 2);
    if (sphere.containsPoint(vertexA)
        && sphere.containsPoint(vertexB)
        && sphere.containsPoint(vertexC)) {
        colorFace(i);
    }
}
When I tested this in my app, I noticed that the performance has definitely improved from the previous version. However, I am still wondering if I could improve this further.
This seems to be a classic nearest-neighbors problem.
You can narrow the search by finding the nearest triangles to a given point very fast by building a Bounding Volume Hierarchy (BVH) for the mesh, such as the AABB-tree.
BVH:
https://en.m.wikipedia.org/wiki/Bounding_volume_hierarchy
AABB-Tree:
https://www.azurefromthetrenches.com/introductory-guide-to-aabb-tree-collision-detection/
Then you can run a range query against the BVH using a sphere or a box of the given radius. That amounts to traversing the BVH with the sphere/box "query", which is used to quickly discard, very early, the bounding-volume nodes that do not intersect it. In the end, the real distance or intersection test is made only against triangles whose bounding volume intersects the sphere/box "query", typically a very small fraction of the triangles.
The complexity of a query against the BVH is O(log n), in contrast with your approach, which is O(n).
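To make the idea concrete, here is a naive illustrative sketch (not production code): a median-split AABB tree built with standard THREE.Box3/THREE.Sphere calls, assuming triangles is an array of objects with Vector3 fields a, b, c (e.g. THREE.Triangle instances). In practice a library such as three-mesh-bvh does this for you.

// Build a simple AABB tree: each node stores a bounding box and either
// two children or a small list of triangles.
function buildAABBTree(triangles, maxLeafSize = 8) {
    const box = new THREE.Box3();
    triangles.forEach(t => box.expandByPoint(t.a).expandByPoint(t.b).expandByPoint(t.c));
    if (triangles.length <= maxLeafSize) {
        return { box: box, triangles: triangles };
    }
    // Split along the box's longest axis at the median triangle centroid.
    const size = box.getSize(new THREE.Vector3());
    const axis = size.x > size.y ? (size.x > size.z ? 'x' : 'z') : (size.y > size.z ? 'y' : 'z');
    const sorted = triangles.slice().sort((t1, t2) =>
        (t1.a[axis] + t1.b[axis] + t1.c[axis]) - (t2.a[axis] + t2.b[axis] + t2.c[axis]));
    const mid = sorted.length >> 1;
    return {
        box: box,
        left: buildAABBTree(sorted.slice(0, mid), maxLeafSize),
        right: buildAABBTree(sorted.slice(mid), maxLeafSize)
    };
}

// Range query: collect triangles whose bounding volumes intersect the sphere.
function querySphere(node, sphere, out = []) {
    if (!node.box.intersectsSphere(sphere)) return out; // early discard
    if (node.triangles) {
        out.push(...node.triangles); // candidates for the exact test
    } else {
        querySphere(node.left, sphere, out);
        querySphere(node.right, sphere, out);
    }
    return out;
}

You would build the tree once per geometry, then on each mouse move call querySphere(tree, new THREE.Sphere(intersection, radius)) and run the exact containment/coloring test only on the returned candidates.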
My problem is the following:
I have two intersecting surfaces created with THREE.ParametricGeometry. Like this:
I need to draw the intersection of these two surfaces. Using the Wolfram|Alpha API I get the intersection function and render it. Like this:
But, as you can see, the intersection mesh is much bigger than the two surfaces.
So I thought that I could compute the intersection of the surfaces' bounding boxes (this intersection can be seen in the image above) and 'limit', so to speak, the intersection mesh to this box's dimensions.
I've tried setting the intersection mesh's scale property to the bounding box's dimensions (the difference between the box's max and min), but this only makes the intersection mesh even bigger.
Any thoughts on how I can accomplish this?
The intersection mesh is created like this (ThreeJS r81):
// 'intersections' is an array of mathematical functions in string format.
intersections.forEach(function (value) {
    var rangeX = bbox.getSize().x - (bbox.getSize().x * -1);
    var rangeY = bbox.getSize().y - (bbox.getSize().y * -1);
    var zFunc = math.compile(value); // The parsing is done with MathJS
    // 'bbox' is the intersected bounding box.
    var meshFunction = function (x, y) {
        x = rangeX * x + (bbox.getSize().x * -1);
        y = rangeY * y + (bbox.getSize().y * -1);
        var scope = { x: x, y: y };
        var z = zFunc.eval(scope);
        if (!isNaN(z))
            return new THREE.Vector3(x, y, z);
        else
            return new THREE.Vector3();
    };
    var geometry = new THREE.ParametricGeometry(meshFunction, segments, segments, true);
    var material = new THREE.MeshBasicMaterial({
        color: defaults.intersectionColor,
        side: THREE.DoubleSide
    });
    var mesh = new THREE.Mesh(geometry, material);
    intersectionMeshes.push(mesh);
    // 'intersectionMeshes' is returned and then added to the scene.
});
I think that scaling the intersection mesh wouldn't work, as the intersection would become incorrect.
Let's try to do this with three.js clipping (a sketch follows below):
1) Set renderer.localClippingEnabled to true.
2) Compute the bounding box of the surfaces.
3) For each of the 6 sides of the bounding box, compute a plane with the normal pointing inside the box (e.g. right side: new THREE.Plane(new THREE.Vector3(-1, 0, 0), bbox.max.x)). You now have an array of six clipping planes.
4) Create a new THREE.Material with material.clippingPlanes set to that array of clipping planes.
5) Use this material for the intersection mesh.
Note that with local clipping, the intersection mesh and the surface meshes should share the same world transformation (putting all these meshes into a THREE.Group would be reasonable).
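A minimal sketch of those steps (illustrative; it assumes bbox is a THREE.Box3 computed from the surfaces, e.g. via new THREE.Box3().setFromObject(surfaceMesh)):

renderer.localClippingEnabled = true;

// One plane per box side, each normal pointing into the box.
// THREE.Plane(normal, constant) keeps points where normal·p + constant >= 0.
var clippingPlanes = [
    new THREE.Plane(new THREE.Vector3(1, 0, 0), -bbox.min.x),  // left
    new THREE.Plane(new THREE.Vector3(-1, 0, 0), bbox.max.x),  // right
    new THREE.Plane(new THREE.Vector3(0, 1, 0), -bbox.min.y),  // bottom
    new THREE.Plane(new THREE.Vector3(0, -1, 0), bbox.max.y),  // top
    new THREE.Plane(new THREE.Vector3(0, 0, 1), -bbox.min.z),  // back
    new THREE.Plane(new THREE.Vector3(0, 0, -1), bbox.max.z)   // front
];

var material = new THREE.MeshBasicMaterial({
    color: defaults.intersectionColor,
    side: THREE.DoubleSide,
    clippingPlanes: clippingPlanes
});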
I'm trying to use part of a video as a texture on a Three.js mesh.
The video is here: http://video-processing.s3.amazonaws.com/example.MP4. It comes from a fisheye lens, and I want to use only the part with actual content, i.e. the circle in the middle.
I want to somehow mask, crop, or position and stretch the video on the mesh so that only this part shows and the black part is ignored.
Video code
var video = document.createElement( 'video' );
video.loop = true;
video.crossOrigin = 'anonymous';
video.preload = 'auto';
video.src = "http://video-processing.s3.amazonaws.com/example.MP4";
video.play();
var texture = new THREE.VideoTexture( video );
texture.minFilter = THREE.NearestFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
var material = new THREE.MeshBasicMaterial( { map : texture } );
The video is then projected onto a 220-degree sphere to give the VR impression.
var geometry = new THREE.SphereGeometry( 200,100,100, 0, 220 * Math.PI / 180, 0, Math.PI);
Here is a code pen
http://codepen.io/bknill/pen/vXBWGv
Can anyone let me know how best to do this?
You can use texture.repeat to scale the texture:
http://threejs.org/docs/#Reference/Textures/Texture
For example, to scale 2x on both axes:
texture.repeat.set(0.5, 0.5);
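If you also need to center the cropped region, combine repeat with texture.offset (both standard THREE.Texture properties). A rough sketch, assuming the circular content spans roughly the middle half of the frame; the exact numbers would need tuning against your footage:

// Sample only the middle 50% of the video in each dimension...
texture.repeat.set(0.5, 0.5);
// ...and shift the sampled window so it is centered on the frame.
texture.offset.set(0.25, 0.25);
texture.wrapS = THREE.ClampToEdgeWrapping;
texture.wrapT = THREE.ClampToEdgeWrapping;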
In short, you need to update the UV-Map of the sphere so that the relevant area of your texture is assigned to the corresponding vertices of the sphere.
The UV-coordinates for each vertex define the coordinates within the texture that are assigned to that vertex (in a range [0..1], so coordinates (0, 0) are the top left corner and (1, 1) the bottom right corner of your video). This example should give you an idea of what this is about.
Those UV-coordinates are stored in your geometry as geometry.faceVertexUvs[0] such that every vertex of every face has a THREE.Vector2 value for the UV-coordinate. This is a two-dimensional array, the first index is the face-index and the second one the vertex-index for the face (see example).
As for generating the UV-map, there are at least two ways to do this. The probably easier way (YMMV, but I'd always go this route) would be to create the UV-map using 3D-editing software like Blender and export the resulting object using the three.js exporter plugin.
The other way is to compute the values by hand. I would suggest you first try to simply use an orthographic projection of the sphere. So basically, if you have a unit-sphere at the origin, simply drop the z-coordinate of the vertices and use u = x/2 + 0.5 and v = y/2 + 0.5 as UV-coordinates.
In JS that would be something like this:
// create the geometry (note that for simplicity, we're
// a) using a unit-sphere and
// b) using an exact half-sphere)
const geometry = new THREE.SphereGeometry(1, 18, 18, Math.PI, Math.PI);
const uvs = geometry.faceVertexUvs[0];
const vertices = geometry.vertices;

// compute the UVs from the vertices of the sphere. You will probably need
// something a bit more elaborate than this for the 220-degree FOV, and maybe
// some lens-distortion correction, but it will boil down to something like this:
for (let i = 0; i < geometry.faces.length; i++) {
    const face = geometry.faces[i];
    const faceVertices = [vertices[face.a], vertices[face.b], vertices[face.c]];
    for (let j = 0; j < 3; j++) {
        const vertex = faceVertices[j];
        uvs[i][j].set(vertex.x / 2 + 0.5, vertex.y / 2 + 0.5);
    }
}
geometry.uvsNeedUpdate = true;
(If you need more information in either direction, drop a comment and I will elaborate.)
My display has a resolution of 7680x4320 pixels. I want to display up to 4 million differently colored squares, and I want to change the number of squares with a slider. I currently have two versions. One uses canvas fillRect and looks something like this:
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
for (var i = 0; i < num_squares; i++) {
    ctx.fillStyle = someColor;
    ctx.fillRect(pos_x, pos_y, square_width, square_height); // fillRect takes (x, y, width, height)
    // set pos_x and pos_y for next square
}
And one with WebGL and three.js. Same loop, but I create a box geometry and a mesh for every square:
var geometry = new THREE.BoxGeometry(width_height, width_height, 0);
for (var i = 0; i < num_squares; i++) {
    var material = new THREE.MeshLambertMaterial({ color: Math.random() * 0xffffff });
    material.emissive = new THREE.Color(Math.random(), Math.random(), Math.random());
    var object = new THREE.Mesh(geometry, material);
}
They both work quite well for a few thousand squares. The first version can do up to one million squares, but anything over a million is awfully slow. I want to update the color and the number of squares dynamically.
Does anyone have tips on how to be more efficient with three.js/WebGL/canvas?
EDIT1: Second version. This is what I do at the beginning and whenever the slider changes:
// Remove all objects from scene
var obj, i;
for (i = scene.children.length - 1; i >= 0; i--) {
    obj = scene.children[i];
    if (obj !== camera) {
        scene.remove(obj);
    }
}

// Fill scene with new objects
num_squares = gui_dat.squareNum;
var window_pixel = window.innerWidth * window.innerHeight;
var pixel_per_square = window_pixel / num_squares;
var width_height = Math.floor(Math.sqrt(pixel_per_square));
var geometry = new THREE.BoxGeometry(width_height, width_height, 0);
var pos_x = width_height / 2;
var pos_y = width_height / 2;
for (var i = 0; i < num_squares; i++) {
    var material = new THREE.MeshLambertMaterial({ color: Math.random() * 0xffffff });
    material.emissive = new THREE.Color(Math.random(), Math.random(), Math.random());
    var object = new THREE.Mesh(geometry, material);
    object.position.x = pos_x;
    object.position.y = pos_y;
    pos_x += width_height;
    if (pos_x > window.innerWidth) {
        pos_x = width_height / 2;
        pos_y += width_height;
    }
    scene.add(object);
}
The fastest way to draw squares is to use the gl.POINTS primitive and set gl_PointSize to the pixel size.
In three.js, gl.POINTS is wrapped inside the THREE.PointCloud object.
You'll have to create a geometry object with one position for each point and pass that to the PointCloud constructor.
Here is an example of THREE.PointCloud in action:
http://codepen.io/seanseansean/pen/EaBZEY
geometry = new THREE.Geometry();

for (i = 0; i < particleCount; i++) {
    var vertex = new THREE.Vector3();
    vertex.x = Math.random() * 2000 - 1000;
    vertex.y = Math.random() * 2000 - 1000;
    vertex.z = Math.random() * 2000 - 1000;
    geometry.vertices.push(vertex);
}

...

materials[i] = new THREE.PointCloudMaterial({ size: size });
particles = new THREE.PointCloud(geometry, materials[i]);
I didn't dig through all the code, but I've set the particle count to 2M, and from my understanding five point clouds are generated, so 2M * 5 = 10M particles, and I'm getting around 30fps.
The highest number of individual points I've seen so far was with potree.
http://potree.org/, https://github.com/potree
Try some demos; I was able to view 5 million points in 3D at 20-30fps. I believe this is also the current technological limit.
I didn't test potree myself, so I can't say much about this tech. But there is a data converter and a viewer (three.js based), so you should only need to figure out how to convert your data.
Briefly, about your question:
The best way to handle large data sets is to group them in a quad-tree (2D) or oct-tree (3D). This allows the program not to bother with parts that are too far from the camera or not visible at all.
On the other hand, the program doesn't like it when you make too many WebGL calls. Think of it like this: you want to create ~60 images each second, but each time you set some parameter for the GPU, the program must do some synchronization. Splitting the data means you will need to do more setup, so the tree must not be too detailed.
One last thing. Someone said:
You'll probably want to pass an array of values as one of the shader uniforms
I don't suggest it; bad idea. Texture lookups are quite fast, but attributes are always faster. If we are talking about 4M points, you can't afford reading data from uniforms.
Sorry I can't help you with the code. I could do it without three.js, but I'm not a three.js expert. :)
I would recommend trying the Pixi framework (as mentioned in the comments above).
It has a WebGL renderer, and some benchmarks are very promising.
http://www.goodboydigital.com/pixijs/bunnymark_v3/
It can handle a lot of animated sprites.
If your app only displays the squares without animating them, and they are very simple sprites (only one color), then it would give even better performance than the demo linked above.
I have a model which intersects with my raycaster. The raycaster returns the correct point, but the face normal vector is not what I expect. Three.js has a built-in VertexNormalsHelper; when I use it, it displays the correct normals. But when I create two cubes, one at the position of the intersection point and the other at the normal vector, it looks like this:
The red cube is the raycaster intersection point, the blue cube is the face normal.
My code is simple, just a basic raycaster, and I copy the positions of the points to the cubes. When I load my model I update everything on the geometry. I use OrbitControls for the camera movement.
var intersects = this.checkIntersection(this.surfaceModel);
if (intersects.length > 0) {
    var p = intersects[0].point;
    var normal = intersects[0].face.normal.clone();
    // Red & blue cube position update
    this.pointHelper_A.position.copy(p);
    this.pointHelper_B.position.copy(normal);
}
Here is an image with the VertexNormalsHelper turned on, so you can see that the normals are fine there:
I've learned some vector math since, so: the normal vector is in the right position. When you need a point at the ray intersection, perpendicular to the intersected face, you multiply the face normal vector by a scalar (this will be the distance between the face and the new point) and then add the resulting vector to the intersection point.
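In code, that idea is roughly this (a sketch; distance is a hypothetical offset, and it assumes the mesh has no rotation/scale, otherwise the normal must first be brought into world space as described below):

var distance = 5; // hypothetical distance between the face and the new point
var offsetPoint = intersects[0].point.clone()
    .add(intersects[0].face.normal.clone().multiplyScalar(distance));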
I've read the source code of VertexNormalsHelper, and from that I've created a function which gives you the normal vector of a face. So it can tell you that point, but I don't think this is the real solution to the main problem; it also involves many operations.
Here is the code:
// object: THREE.Mesh, face: THREE.Face3
this.getFaceNormalPosition = function (object, face) {
    var v1 = new THREE.Vector3();
    var keys = ['a', 'b', 'c', 'd']; // Face3 vertex keys ('d' is legacy Face4)
    object.updateMatrixWorld(true);
    var size = 1; // length of the normal offset
    var normalMatrix = new THREE.Matrix3();
    normalMatrix.getNormalMatrix(object.matrixWorld);
    var vertexWorld = new THREE.Vector3();
    var verts = object.geometry.vertices;
    var worldMatrix = object.matrixWorld;
    for (var j = 0, jl = face.vertexNormals.length; j < jl; j++) {
        var vertexId = face[keys[j]];
        var vertex = verts[vertexId];
        var normal = face.vertexNormals[j];
        // vertex position in world space
        vertexWorld.copy(vertex).applyMatrix4(worldMatrix);
        // vertex normal in world space, scaled to 'size', anchored at the vertex
        v1.copy(normal).applyMatrix3(normalMatrix).normalize().multiplyScalar(size);
        v1.add(vertexWorld);
    }
    // note: v1 ends up holding the offset point for the face's last vertex
    return v1;
};
I had the same problem of not understanding the face normal I was getting. The solution for me was to multiply the normal with the normal matrix of the object (I didn't know that was a thing). Here is a simple helper class written in TypeScript; porting it to JavaScript shouldn't be too hard:
import { Vector3, Object3D, Matrix3, Face3 } from 'three';

export class Face3Utils {
    private static _matrix3: Matrix3;

    private static get matrix3(): Matrix3 {
        if (this._matrix3 === undefined) {
            this._matrix3 = new Matrix3();
        }
        return this._matrix3;
    }

    public static getWorldNormal(face: Face3, object: Object3D, normalVector?: Vector3): Vector3 {
        if (normalVector === null || normalVector === undefined) {
            normalVector = new Vector3();
        }
        object.updateMatrixWorld(true);
        Face3Utils.matrix3.getNormalMatrix(object.matrixWorld);
        normalVector.copy(face.normal).applyMatrix3(Face3Utils.matrix3).normalize();
        return normalVector;
    }
}
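For example, wired into the raycast handler from the question (a sketch using the question's names checkIntersection, surfaceModel, and pointHelper_B):

var intersects = this.checkIntersection(this.surfaceModel);
if (intersects.length > 0) {
    // Convert the face normal into world space before using it as an
    // offset from the intersection point.
    var worldNormal = Face3Utils.getWorldNormal(intersects[0].face, this.surfaceModel);
    this.pointHelper_B.position.copy(intersects[0].point).add(worldNormal);
}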
}