Three.js Ray ignore transparent pixels - javascript

I have some Object3Ds containing plane geometries that use transparent PNGs as their materials.
The trouble I'm having is that the Raycaster picks up the entire object, so clicking anywhere near the plane is enough to trigger the corresponding functions.
Is it possible to hide the transparent parts of a mesh from the raycaster?
Further to Alex's assistance, I've got the actual point of intersection on the object.
How do I now convert this into a pixel on the image, so I can test it for transparency?

I'm not familiar with Three.js, but the online documentation suggests that the raycast is computed from the distribution of objects in space, not from their material composition. As far as I can tell, there's no built-in way to tell the Raycaster that only opaque pixels should count as hits.
I'm thinking that first you use Three.js's Raycaster.intersectObjects(objects, recursive) to find which of your objects broadly intersect the ray. Once the objects have been filtered, you need a second, more specific test to determine whether the ray actually hits an opaque pixel.
A simple way of doing this would be to take an object we know the ray intersects and determine where the ray crosses it. (This increases in complexity depending on whether your objects can be scaled, or whether your objects or ray can be rotated.) The absolute location of the intersection point in 3D space can be found using trigonometry, which will need to account for the near and far values of your ray.
Knowing the location and dimensions of your image, you can then convert the raycast intersection coordinate in your 3D world into discrete 2D coordinates relative to the image you're using.
As an example, imagine a simple 2D case. If you had a 50×50 px image sitting at the origin, and the ray intersects the image at position (25,25), you could use this information to index the pixel data array and test the alpha value of that pixel. Since PNG image data is stored as RGBA, you're looking at four bytes per pixel, stored contiguously, so you'd need an implementation looking something like the example below.
// Assuming the image has been drawn once to a 2D canvas context (ctx) so its pixels can be read.
var data = ctx.getImageData(0, 0, bitmap.width, bitmap.height).data;
var i = (x + y * bitmap.width) * 4;   // 4 bytes (R, G, B, A) per pixel
var pixelRed   = data[i];
var pixelGreen = data[i + 1];
var pixelBlue  = data[i + 2];
var pixelAlpha = data[i + 3];
Once you've found the area of pixel data, you can test the alpha bytes to determine whether they're opaque (0xFF) or transparent (0x00). You can use this to qualify whether the raycast has successfully collided with your objects or not.
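If Three.js exposes the texture coordinate of the hit (recent versions report a uv on each intersection for geometries with UVs, and a PlaneGeometry has them), that saves you the trigonometry above. A rough sketch of the whole test, assuming the PNG has been drawn to an offscreen 2D canvas; alphaCanvas and opaqueIntersection are illustrative names, not part of any library:
var raycaster = new THREE.Raycaster();
var alphaCtx = alphaCanvas.getContext('2d'); // offscreen canvas holding the PNG
function opaqueIntersection(mouseNdc, camera, meshes) {
    raycaster.setFromCamera(mouseNdc, camera);   // mouseNdc: normalized device coords (-1..1)
    var hits = raycaster.intersectObjects(meshes);
    for (var i = 0; i < hits.length; i++) {
        var uv = hits[i].uv;                                  // 0..1 across the plane
        var x = Math.floor(uv.x * alphaCanvas.width);
        var y = Math.floor((1 - uv.y) * alphaCanvas.height);  // canvas y runs downward
        var alpha = alphaCtx.getImageData(x, y, 1, 1).data[3];
        if (alpha > 0) return hits[i];                        // first non-transparent hit
    }
    return null;
}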

Related

Map (vrm) animated humanoid model based on skeleton coordinates in three.js

I'm really new to three.js and animation in general, and currently pretty confused by concepts like what rotation angles are, what exactly a VRM is and how it interacts with three.js, and what humanoid animation is, but I will try to be as explicit as I can about my question below.
So I have a sequence of frames, where each frame has a set of coordinates (x, y, z; imagine x goes from left to right on your screen, y from top to bottom, and z comes out of the screen) for human joints (e.g. left foot, right foot, left shoulder, etc.), and I would like a 3D animated model to move based on the provided coordinates.
From what I have seen people do so far (e.g. the RM motion capture demo using pixiv three-vrm), it seems like they modify the rotation (z) of the humanoid bone node (returned by getBoneNode) in order to map the human motion onto the animated model.
My questions are:
The author of the above link only needs to compute the rotation around the z-axis since the input is a 2D video, but in my case the input is 3D coordinates, so how can I calculate the rotation values? From the documentation on Object3D in three.js, it looks like the rotations are Euler angles.
i. How can one calculate these Euler angles given, e.g., the coordinate of the left shoulder?
ii. And for which humanoid body/bone parts do I need to do this calculation? E.g. does it even make sense to talk about the rotation of LeftShoulder or the nose?
iii. This is probably silly, but just thinking out loud here: why can't I just supply the xyz coordinate value as the position attribute of these humanoid bone nodes? E.g. something like:
currentVrm.humanoid.getBoneNode(THREE.VRMSchema.HumanoidBoneName.Neck).position = (10, -2.5, 1)
Would this not get the animated model moving the same way as the person in the frames whose coordinates are provided?
What exactly does a humanoid bone node look like, and how is it represented? The three.js docs only say it's an Object3D; it can't be just a vector, right? Because from my limited understanding of Euler angles, it doesn't make complete sense to have all three Euler angles for a vector (since it can't roll like a cylinder). I'm asking because I'm confused about which angles need to be calculated for each humanoid bone node, and how. E.g. if I have leftShoulder = (3, 11.2, -8.72), do I just calculate its angle to each of the x, y, z axes and supply those angles to the rotation attributes of the bone node?
I can't tell you much about three.js, but I can tell you something about VRM.
Basically you have a bone hierarchy: root - hips - spine - chest - neck... etc.
From the chest you have left/right_shoulder - l/r_upper_arm - l/r_lower_arm - l/r_hand, etc., and from the hips you have the legs and feet.
Every bone has 3 position coordinates (X, Y, Z) and a quaternion (X, Y, Z, W), which means that if you want to find the position of some bone in the world coordinate system, you have to walk the whole hierarchy (starting from the root), applying quaternions and adding positions.
For example, if I want to find the 'neck' bone position I have to:
take the 'root' coordinates and apply the 'root' quaternion;
take the 'hips' position, apply the 'hips' quaternion, and add the resulting coordinates to the 'root' coordinates;
take the 'spine' coordinates, apply the 'spine' quaternion, and add the resulting coordinates to the 'hips' coordinates;
take the 'chest' coordinates, apply the 'chest' quaternion, and add the resulting coordinates to the 'spine' coordinates;
take the 'neck' coordinates, apply the 'neck' quaternion, and add the resulting coordinates to the 'chest' coordinates.
Also, 'applying a quaternion' means that you also keep the previous quaternions in mind (you do that by multiplication); that is, the resulting quaternion for 'neck' would be
q_neck_res = q_neck * q_chest * q_spine * q_hips * q_root
There is a procedure to convert between Euler angles and quaternion if needed.
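In three.js terms this hierarchy walk is what the scene graph already does for you, so a rough sketch of both ideas might look like the following (it reuses the getBoneNode call from the question; treat it as an illustration under those assumptions rather than a drop-in solution):
// World position of the neck: three.js accumulates root→hips→spine→chest→neck internally.
var neckNode = currentVrm.humanoid.getBoneNode(THREE.VRMSchema.HumanoidBoneName.Neck);
var neckWorldPos = new THREE.Vector3();
neckNode.getWorldPosition(neckWorldPos);       // positions added up the hierarchy

var neckWorldQuat = new THREE.Quaternion();
neckNode.getWorldQuaternion(neckWorldQuat);    // quaternions multiplied up the hierarchy

// Converting between a quaternion and Euler angles (both directions):
var euler = new THREE.Euler().setFromQuaternion(neckNode.quaternion);
var quat  = new THREE.Quaternion().setFromEuler(euler);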

THREE.JS raycasting performance

I am trying to find the closest distance from a point to a large, complex Mesh, along a plane, within a range of directions:
for (var zDown in verticalDistances) {
    var myIntersect = {};
    for (var theta = Math.PI / 2 - 0.5; theta < Math.PI / 2 + 0.5; theta += 0.3) {
        var rayDirection = new THREE.Vector3(
            Math.cos(theta),
            Math.sin(theta),
            0
        ).transformDirection(object.matrixWorld);
        // console.log(rayDirection);
        _raycaster.set(verticalDistances[zDown].minFacePoint, rayDirection, 0, 50);
        // console.time('raycast: ');
        var intersect = _raycaster.intersectObject(planeBufferMesh);
        // console.timeEnd('raycast: '); // this is huge!!! ~ 2,300 ms
        // console.log(_raycaster);
        // console.log(intersect);
        if (intersect.length == 0) continue;
        if ((!('distance' in myIntersect)) || myIntersect.distance > intersect[0].distance) {
            myIntersect.distance = intersect[0].distance;
            myIntersect.point = intersect[0].point.clone();
        }
    }
    // do stuff
}
I get great results with mouse hover on the same surface, but when running this loop each raycast takes over 2 seconds. The only thing I can think of is that the BackSide of the DoubleSide material is much slower to test?
Also, I notice that as I space my verticalDistances[zDown].minFacePoint values farther apart, the raycast starts to speed up (~500 ms/cast). So as the distance between verticalDistances[i].minFacePoint and verticalDistances[i+1].minFacePoint increases, the raycaster performs faster.
I would go the octree route, but the mouse hover event works extremely well on the exact same planeBuffer. Is this a material side issue that could be solved by loading 2 FrontSide meshes pointing in opposite directions?
Thank You!!!!
EDIT: It is not a front/back issue. I ran my raycast down both the front and the back side of the plane buffer geometry with the same result. Live example coming.
EDIT 2: Working example here. Performance is a little better than the original case but still too slow. I need to move the cylinder in real time. I can optimize a bit by precomputing certain things, but mouse hover is instant. When you look at the console timings, the first two (~500 ms) are what I am getting for every cast.
EDIT 3: Added a mouse hover event, which performs the same as the other raycasters. I am not getting the results in my working code that I get in this sample, though. The results I get for every raycast are the same as the first 1 or 2 in the sample, around 500 ms. If I could get it down to 200 ms I could target the items I am looking for and do far less raycasting. I am completely open to suggestions on better methods. Is an octree the way to go?
raycast: : 467.27001953125ms
raycast: : 443.830810546875ms
EDIT 4: #pailhead Here is my plan.
1. Find the closest grid vertex to the point on the plane. I can do a scan of the vertices in the x/y direction and then calculate the minimum distance.
2. Once I have that closest vertex, I know my closest point has to be on a face containing that vertex. So I will find all faces with that vertex using object.mesh.index.array and calculate the plane-to-point distance for each face. It seems like a raycast should be a little smarter than a full scan when intersecting a mesh, and at least cull faces based on max distance? #WestLangley, any suggestions?
EDIT 5:
#pailhead, thank you for the help. It's appreciated. I have really simplified my example (<200 lines, with many more comments). Is the raycaster checking every face? It would be much quicker to pick out the faces within the raycasting range specified in the constructor and do a face-to-point calculation; there is no way this should be looping over every face to raycast. I'm going to write my own PlaneBufferGeometry raycast function tonight, after taking a peek at the source code and checking out octrees. I would think that, given a range in the raycaster constructor, you could pull out the plane buffer vertices within that range (ignoring z) and then just raycast those or do a point-to-plane calculation. I guess I could just create a "mini" surface from that bounding circle and raycast against it. But the fact that the max distance (the manual calls it "far") doesn't affect the speed of the raycaster makes me wonder how much it is optimized for PlaneBuffer geometries. FYI, your 300k loop is ~3 ms on jsfiddle.
EDIT 6: It looks like all meshes are treated the same in the raycast function, which means it won't smartly narrow down the area for a PlaneBufferGeometry. Looking at mesh.js line 266, it loops over the entire index array. I guess for a regular mesh you don't know which faces are where because it's a TIN, but a planeBuffer could really use a bounding box/sphere rule, because the x/y positions are in a known order and only the z values are unknown. Last edit; the answer will be next.
FYI: for max speed, you could use math. There is no need to use ray casting. https://brilliant.org/wiki/3d-coordinate-geometry-equation-of-a-plane/
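For instance, the point-to-plane distance from that link can be computed with three.js's own math helpers; a small sketch, where vA, vB, vC and queryPoint are placeholder names for a face's corners and the point being tested:
// Distance from a point to the plane through three vertices of a face.
var plane = new THREE.Plane().setFromCoplanarPoints(vA, vB, vC); // the face's corners
var distance = plane.distanceToPoint(queryPoint);                // signed distance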
The biggest issue resolved was filtering out faces of the PlaneBufferGeometry based on vertex index. With a PlaneBufferGeometry you can find a bounding sphere or rectangle that gives you the faces you need to check. They are ordered in x/y in the index array, so that filters out many of the faces. I did an indexOf of the bottom-left position and a lastIndexOf of the top-right corner position in the index array. RAYCASTING CHECKS EVERY FACE.
I also gave up on finding the distance from each face of the object and instead used a vertical path down the center of the object. This decreased the number of raycasts needed.
Lastly, I did my own face walkthrough and used the Triangle.closestPointToPoint() function on each face.
I ended up getting around 10 ms per point-to-surface calculation (single raycast) and around 100 ms per object (10 vertical slices). I was seeing 2.5 seconds per raycast and 25+ seconds per object prior to optimization.
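A rough sketch of that face walk, assuming an indexed PlaneBufferGeometry and a precomputed list of candidate face numbers; candidateFaces and minFacePoint are placeholders, and the geometry is assumed to be in the same coordinate space as the query point (otherwise apply matrixWorld first):
// Walk a set of candidate faces and keep the closest point to minFacePoint.
var tri = new THREE.Triangle();
var tmp = new THREE.Vector3();
var pos = planeBufferMesh.geometry.attributes.position;
var idx = planeBufferMesh.geometry.index.array;
var best = { distance: Infinity, point: new THREE.Vector3() };

candidateFaces.forEach(function (f) {            // f = face number, 3 indices per face
    tri.a.fromBufferAttribute(pos, idx[f * 3]);
    tri.b.fromBufferAttribute(pos, idx[f * 3 + 1]);
    tri.c.fromBufferAttribute(pos, idx[f * 3 + 2]);
    tri.closestPointToPoint(minFacePoint, tmp);  // closest point on this triangle
    var d = tmp.distanceTo(minFacePoint);
    if (d < best.distance) {
        best.distance = d;
        best.point.copy(tmp);
    }
});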

How to use different drawArrays types in one program using WebGL?

I have an assignment with little context on how to actually implement what the professor is asking (I am also a novice at JavaScript, but I know a lot about C and C++). The WebGL program must render 3 different types of drawArrays calls: POINTS, TRIANGLE_FAN, and LINES.
I have a different array for each, and I know how to draw one type at a time, but I am unsure how to draw 3 different types.
Should all the vertices be put into one giant array? I tried this, and the first TRIANGLE_FAN would draw correctly, but calling drawArrays again for the other two types, with the offset ('first') set to the index where the line and then the point start, gave me these errors:
WebGL error INVALID_OPERATION in drawArrays(ONE, 36, 2)
WebGL error INVALID_OPERATION in drawArrays(NONE, 40, 1)
Alternatively, using separate arrays for each type and a buffer for each, how do you call drawArrays when more than one buffer has been set on 'gl' (from getWebGLContext(canvas))?
For reference, this is what the professor assigned:
Write a WebGL program that displays a rotating pendulum. The pendulum bob is free to rotate through 360 degrees about an anchor point at the center of the canvas. The pendulum has the following three components.
1) The anchor point is a green square centered at the origin (0,0) with point size = 5 pixels.
2) The bob is a blue hexagon of radius r = 0.1. Render this with a triangle fan centered at the origin (along with a ModelView matrix that translates and rotates).
3) The bob is attached to the anchor point by a rigid red wire of length l = 0.8.
Use global variables for the point size of the anchor, the radius of the bob, the length of the wire, and the angular velocity of rotation in degrees per second. Set the initial angular velocity to 45 and allow an interactive user to increase or decrease the value in multiples of 10 degrees per second with button presses.
It's up to you. WebGL doesn't care as long as you specify things correctly. In your case, if you put them in the same buffer you need to specify the offset to each piece of data.
The most common way would be to put the data for each draw call in its own buffer.
Also, the first argument to drawArrays is the primitive type (gl.POINTS, gl.LINES, gl.TRIANGLE_FAN, etc.), not ONE or NONE as shown in your question.
You might want to check out some tutorials on WebGL.
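As a rough illustration of the one-buffer-per-draw-call approach (the Float32Array vertex arrays and the aPositionLoc attribute location are placeholders, and the shader program is assumed to already be compiled and linked):
function drawShape(gl, aPositionLoc, data, mode, count) {
    // One buffer per draw call: upload the vertices, point the attribute at them, draw.
    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
    gl.vertexAttribPointer(aPositionLoc, 2, gl.FLOAT, false, 0, 0); // 2 floats per vertex
    gl.enableVertexAttribArray(aPositionLoc);
    gl.drawArrays(mode, 0, count);
}

// The anchor, bob, and wire each get their own call with their own primitive type.
drawShape(gl, aPositionLoc, anchorVertices, gl.POINTS, 1);
drawShape(gl, aPositionLoc, bobVertices, gl.TRIANGLE_FAN, 8); // hexagon fan: center + 6 + closing repeat
drawShape(gl, aPositionLoc, wireVertices, gl.LINES, 2);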

Three.JS: Get position of rotated object

In Three.JS, I am capable of rotating an object about its origin. If I were to do this with a line, for instance, the line rotates, but the positions of its vertices are not updated with their new locations. Is there some way to apply the rotation matrix to the positions of the vertices to find their new locations? Say I rotate a line with points at (0,0,0) and (0,100,100) by 45° on the x, 20° on the y, and 100° on the z. How would I go about finding the actual positions of the vertices with respect to the entire scene?
Thanks
Yes, 'entire scene' means world position.
THREE.Vector3() has an applyMatrix4() method;
you can do the same thing the shader does, so to transform a vertex into world space you would do this:
yourPoint.applyMatrix4(yourObject.matrixWorld);
to transform that into camera space you next apply the inverse of the camera's world matrix
yourPoint.applyMatrix4(camera.matrixWorldInverse);
to get an actual screen position in -1 to 1
yourPoint.applyMatrix4(camera.projectionMatrix);
you would access your point like this
var yourPoint = yourObject.geometry.vertices[0]; //first vertex
Also, rather than doing this three times, you can combine the matrices (projection * view * model) and apply the result once:
var neededVMmatrix = new THREE.Matrix4().multiplyMatrices(camera.matrixWorldInverse, yourObject.matrixWorld); // view * model
var neededPVMmatrix = new THREE.Matrix4().multiplyMatrices(camera.projectionMatrix, neededVMmatrix); // projection * view * model
yourPoint.applyMatrix4(neededPVMmatrix);
If you need a good tutorial on what this does under the hood, I recommend this:
Alteredq posted everything there is to know about three.js matrices here
edit
One thing to note, though: if you want just the rotation, not the translation, you need to use the upper 3x3 portion of the model's world matrix, which is the rotation matrix. This might be slightly more complicated. I forget exactly what three.js gives you, but I think the normalMatrix would do the trick, or you can convert your THREE.Vector3() to a THREE.Vector4() and set .w to 0, which prevents any translation from being applied.
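For example, something along these lines should give a rotation-only transform (a sketch, not a definitive recipe; both calls are standard three.js helpers):
// Rotation only, no translation: transformDirection uses the upper 3x3 and normalizes the result.
var dir = yourPoint.clone().transformDirection(yourObject.matrixWorld);

// Or extract the rotation explicitly and apply it, keeping the vector's length.
var rot = new THREE.Matrix4().extractRotation(yourObject.matrixWorld);
var rotated = yourPoint.clone().applyMatrix4(rot);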
edit2
If you want to move the line point in your example, instead of applying it to the particle, apply it to:
var yourVertexWorldPosition = geo.vertices[1].clone(); // this is your second line point, whatever you set it to in your init function
yourVertexWorldPosition.applyMatrix4(line.matrixWorld); // this transforms the new vector into world space using the line's world matrix
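Putting this together for the line in the original question, a rough sketch might be (line here stands for the THREE.Line built from (0,0,0) and (0,100,100)):
// Rotate the line, then read back the world-space positions of its vertices.
line.rotation.set(
    THREE.Math.degToRad(45),   // 45° about x
    THREE.Math.degToRad(20),   // 20° about y
    THREE.Math.degToRad(100)   // 100° about z
);
line.updateMatrixWorld(true);  // make sure matrixWorld reflects the new rotation

var worldVertices = line.geometry.vertices.map(function (v) {
    return v.clone().applyMatrix4(line.matrixWorld); // vertex position in scene space
});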

How would you create a particle SURFACE emitter based on a created canvas shape? HTML5 CANVAS JS

I have a shape (a quarter circle) that I've created using the HTML canvas functions:
moveTo
lineTo
quadraticCurveTo
How do I go about exploding the shape into particles and then returning them to form a circle?
I'm not going to write the code for you because it will take some time, and I'm sure you can find examples on the web, but I'll tell you the theory you need to know in order to make such a thing.
Create an in-memory canvas (using document.createElement('canvas')) that will never be seen on the page. This canvas must be at least as large as your object; I'm going to assume it is exactly as large as your object. We'll call this tempCanvas, and its context tempCtx.
Draw your object to tempCtx.
There will be some event that you didn't mention exactly but I'm sure you have in mind. Either you press a button or click on the object and it "explodes". For the sake of picking something I'll assume you want it to explode on click.
So to do the explosion:
Draw the object onto your normal context: ctx.drawImage(tempCanvas, x, y) so the user sees something
You're going to want to have an array of pixels for the location of each pixel in tempCanvas. So if tempCanvas is 20x30 you'll want an array of [20][30] to correspond.
You have to keep data for each of those pixels. Specifically, their starting point, which is easy, because pixel [2][4]'s starting point is (2,4)! And also their current location, which is identical to the starting point at first but will change each frame.
When the explosion event occurs keep track of the original mouse x and y position.
At this point, for every single pixel you have a vector from the click point to that pixel, which means you have a direction. If you clicked in the middle of the object, you'll save the mouse coordinates (10,15) (see note 1). So now every pixel of the to-be-exploded image has its trajectory. There's a bit of math here that I'm taking for granted, but if you search either on SO or elsewhere on the web, you'll find out how to get the slope of these lines and extend them.
For every frame hereafter you take each pixel [x][y] and use ctx.drawImage(tempCanvas, x, y, 1, 1, newX, newY, 1, 1), where x and y are the pixel's [x][y], and newX and newY are calculated from the vector by finding the next point along its line.
The result will be each pixel of the image being drawn slightly farther away from the original click point. If you continue to do this frame after frame, it will look as if the object has exploded.
That's the general idea, anyway. Let me know if any of it is unclear.
note 1: Most likely your normal canvas won't be the same size as the to-explode object. Maybe the object is placed at 100,100 so you really clicked on 110, 115 instead of 10,15. I'm omitting that offset just for the sake of simplicity.
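A rough sketch of the frame update described above (tempCanvas, ctx, and the click coordinates are assumed to exist as in the steps; speed is an arbitrary constant):
// One moving "particle" per pixel of the exploded shape.
var particles = [];
for (var px = 0; px < tempCanvas.width; px++) {
    for (var py = 0; py < tempCanvas.height; py++) {
        particles.push({ sx: px, sy: py, x: px, y: py });
    }
}

function explodeFrame(clickX, clickY) {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    var speed = 2;
    particles.forEach(function (p) {
        var dx = p.sx - clickX, dy = p.sy - clickY;   // direction away from the click
        var len = Math.sqrt(dx * dx + dy * dy) || 1;
        p.x += (dx / len) * speed;                    // step along the trajectory
        p.y += (dy / len) * speed;
        // Copy this single source pixel to its new location.
        ctx.drawImage(tempCanvas, p.sx, p.sy, 1, 1, p.x, p.y, 1, 1);
    });
}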
