Physijs simple collision between meshes without gravity - javascript

I am using Physijs to detect static collisions between my meshes, as I need to know which surfaces are intersecting.
I hacked together a simple demo that seems to work.
Currently I have to configure my scene to use gravity, which prevents me from positioning my meshes at an arbitrary y position, as they start to fall or float.
Is there a simple way to remove gravity from the simulation and just use the mesh collision detection?
--- Update ---
I had to explicitly set the mass of each mesh to 0 rather than leaving it blank. With mass = 0, gravity has no effect. Great!
However, the meshes are not reporting a collision.
Any ideas where I am going wrong?
thanks
-lp

You cannot use Physijs for collision detection alone; it only comes fully equipped with a real-time physics simulation, based on the ammo.js library. Setting the mass of the meshes to 0 made them static: they became unresponsive to external forces such as gravity and collision responses (i.e. the change of velocity applied to a mesh after a collision is detected). Crucially, two static meshes that overlap each other do not fire a collision event.
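For reference, here is a minimal sketch (illustrative only, not part of the solutions below, assuming the standard Physijs setup with its worker scripts) of how mass is passed when a Physijs mesh is created; a mass of 0 is exactly what makes it static:
// The third constructor argument of a Physijs mesh is its mass.
// A mass of 0 makes the mesh static: it ignores gravity, but also collision responses.
var staticBox = new Physijs.BoxMesh(
    new THREE.CubeGeometry(5, 5, 5),
    new THREE.MeshBasicMaterial({ color: 0x888888 }),
    0 // mass
);
scene.add(staticBox);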
Solution A: Use ammo.js directly
Ported from Bullet Physics, the library provides the tools for running full physics simulations or for just detecting collisions between defined shapes (functionality that Physijs doesn't expose). Here's a snippet for detecting a collision between two rigid spheres:
var bt_collision_configuration;
var bt_dispatcher;
var bt_broadphase;
var bt_collision_world;
var scene_size = 500;
var max_objects = 10; // Tweak this as needed

bt_collision_configuration = new Ammo.btDefaultCollisionConfiguration();
bt_dispatcher = new Ammo.btCollisionDispatcher(bt_collision_configuration);

var wmin = new Ammo.btVector3(-scene_size, -scene_size, -scene_size);
var wmax = new Ammo.btVector3(scene_size, scene_size, scene_size);
// This is one type of broadphase; Ammo.js has others that might be faster
bt_broadphase = new Ammo.bt32BitAxisSweep3(
    wmin, wmax, max_objects, 0, true /* disable raycast accelerator */);
bt_collision_world = new Ammo.btCollisionWorld(bt_dispatcher, bt_broadphase, bt_collision_configuration);

// Create two collision objects
var sphere_A = new Ammo.btCollisionObject();
var sphere_B = new Ammo.btCollisionObject();

// Move each to a specific location
sphere_A.getWorldTransform().setOrigin(new Ammo.btVector3(2, 1.5, 0));
sphere_B.getWorldTransform().setOrigin(new Ammo.btVector3(2, 0, 0));

// Create the sphere shape with a radius of 1
var sphere_shape = new Ammo.btSphereShape(1);

// Set the shape of each collision object
sphere_A.setCollisionShape(sphere_shape);
sphere_B.setCollisionShape(sphere_shape);

// Add the collision objects to our collision world
bt_collision_world.addCollisionObject(sphere_A);
bt_collision_world.addCollisionObject(sphere_B);

// Perform collision detection
bt_collision_world.performDiscreteCollisionDetection();

var numManifolds = bt_collision_world.getDispatcher().getNumManifolds();
// For each contact manifold
for (var i = 0; i < numManifolds; i++) {
    var contactManifold = bt_collision_world.getDispatcher().getManifoldByIndexInternal(i);
    var obA = contactManifold.getBody0();
    var obB = contactManifold.getBody1();
    contactManifold.refreshContactPoints(obA.getWorldTransform(), obB.getWorldTransform());
    var numContacts = contactManifold.getNumContacts();
    // For each contact point in that manifold
    for (var j = 0; j < numContacts; j++) {
        // Get the contact information
        var pt = contactManifold.getContactPoint(j);
        var ptA = pt.getPositionWorldOnA();
        var ptB = pt.getPositionWorldOnB();
        var ptdist = pt.getDistance();
        // Do whatever else you need with the information...
    }
}
// Oh yeah! Ammo.js wants us to deallocate
// the objects with 'Ammo.destroy(obj)'
I translated this from the equivalent C++ code, so some syntax may be off; check the Ammo.js API bindings for anything that doesn't work.
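As noted in the comment above, Ammo.js objects live on the Emscripten heap and have to be freed manually. A rough cleanup sketch for the objects created above (order and completeness not guaranteed; adapt as needed):
// Remove the collision objects from the world before destroying them
bt_collision_world.removeCollisionObject(sphere_A);
bt_collision_world.removeCollisionObject(sphere_B);
Ammo.destroy(sphere_A);
Ammo.destroy(sphere_B);
Ammo.destroy(sphere_shape);
Ammo.destroy(bt_collision_world);
Ammo.destroy(bt_broadphase);
Ammo.destroy(bt_dispatcher);
Ammo.destroy(bt_collision_configuration);
Ammo.destroy(wmin);
Ammo.destroy(wmax);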
Solution B: Use THREE's ray caster
The ray caster approach is less accurate, but you can improve its precision by adding more vertices to your shapes. Here's some code to detect a collision between two boxes:
// General box mesh data
var boxGeometry = new THREE.CubeGeometry(100, 100, 20, 1, 1, 1);
var boxMaterial = new THREE.MeshBasicMaterial({ color: 0x8888ff, wireframe: true });
// Create box that detects collision
var dcube = new THREE.Mesh(boxGeometry, boxMaterial);
// Create box to check collision with
var ocube = new THREE.Mesh(boxGeometry, boxMaterial);
// Create ray caster
var rcaster = new THREE.Raycaster(new THREE.Vector3(0, 0, 0), new THREE.Vector3(0, 1, 0));

// Cast a ray through every vertex or extremity
for (var vi = 0, l = dcube.geometry.vertices.length; vi < l; vi++) {
    // Vertex position in world space
    var glovert = dcube.geometry.vertices[vi].clone().applyMatrix4(dcube.matrix);
    // Direction from the cube's position to that vertex
    var dirv = glovert.sub(dcube.position);
    // Set up the ray caster from the cube's position toward the vertex
    rcaster.set(dcube.position, dirv.clone().normalize());
    // Get collision result
    var hitResult = rcaster.intersectObject(ocube);
    // Check if collision is within range of the other cube
    if (hitResult.length && hitResult[0].distance < dirv.length()) {
        // There was a hit detected between dcube and ocube
    }
}
Check out these links for more information (and maybe their source code):
Three.js-Collision-Detection
Basic Collision Detection, Raycasting with Three.js
THREE's ray caster docs


THREE.js updating object matrix after setting position with Raycaster

I have run into a problem with FPS camera controls in a three.js scene. I'm using a Raycaster to determine the camera group position based on its intersection with the scene along the Y axis (a poor man's gravity, if you will =) and then apply user input to move around. The camera group position keeps getting reset to the intersection location on every frame, essentially gluing you to the spot.
I'm assuming this is either an updateMatrix() problem or that a Vector3 is getting passed by reference somewhere, but for the life of me I can't seem to put my finger on it. I need some help... I hope this code is clear enough to help understand the problem:
renderer.setAnimationLoop((event) => {
    if (clock.running) {
        update();
        renderer.render(scene, character.view);
    }
});
clock.start();
//
const update = () => {
    // velocity
    const velocity = new THREE.Vector3();
    velocity.x = input.controller.direction.x;
    velocity.z = input.controller.direction.y;
    velocity.clampLength(0, 1);
    if (velocity.z < 0) {
        velocity.z *= 1.4;
    }
    // gravity
    if (scene.gravity.length() > 0) {
        const origin = new THREE.Vector3().copy(character.view.position);
        const direction = new THREE.Vector3().copy(scene.gravity).normalize();
        const intersection = new THREE.Raycaster(origin, direction).intersectObjects(scene.collision).shift();
        if (intersection) {
            character.group.position.copy(intersection.point);
            character.group.updateMatrix();
        }
    }
    // rotation
    const rotation = new THREE.Euler();
    rotation.x = input.controller.rotation.y;
    rotation.y = input.controller.rotation.x;
    character.group.rotation.set(0, rotation.y, 0);
    character.view.rotation.set(rotation.x, 0, 0);
    // velocity.applyEuler(rotation);
    const quaternion = new THREE.Quaternion();
    character.group.getWorldQuaternion(quaternion);
    velocity.applyQuaternion(quaternion);
    // collision
    const origin = new THREE.Vector3().setFromMatrixPosition(character.view.matrixWorld);
    const direction = new THREE.Vector3().copy(velocity).normalize();
    const raycaster = new THREE.Raycaster(origin, direction);
    for (const intersection of raycaster.intersectObjects(scene.collision)) {
        if (intersection.distance < 0.5) {
            // face normals ignore object quaternions
            const normal = new THREE.Vector3().copy(intersection.face.normal);
            const matrix = new THREE.Matrix4().extractRotation(intersection.object.matrixWorld);
            normal.applyMatrix4(matrix);
            // normal
            normal.multiplyScalar(velocity.clone().dot(normal));
            velocity.sub(normal);
        }
    }
    // step
    const delta = 0.001 / clock.getDelta();
    velocity.multiplyScalar(delta);
    // apply
    character.group.position.add(velocity);
};
The camera setup is a lot like the PointerLockControls helper, the camera being a child of a Group Object for yaw and pitch. The controller input is defined elsewhere as it can come from the mouse or a gamepad, but it returns normalized values.
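For context, a rough sketch of that hierarchy (illustrative only; the names match the code above but the construction is assumed):
// Illustrative only: the yaw/pitch rig described above
const character = {
    group: new THREE.Group(),                                                            // yaw rotates the group
    view: new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000) // pitch rotates the camera itself
};
character.group.add(character.view);
scene.add(character.group);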
To be more precise, the part that is causing the problem is here:
// gravity
if (scene.gravity.length() > 0) {
    const origin = new THREE.Vector3().copy(character.view.position);
    const direction = new THREE.Vector3().copy(scene.gravity).normalize();
    const intersection = new THREE.Raycaster(origin, direction).intersectObjects(scene.collision).shift();
    if (intersection) {
        character.group.position.copy(intersection.point);
        character.group.updateMatrix();
    }
}
If I comment out character.group.position.copy(intersection.point);, for example, the camera moves like it's supposed to (except of course it's flying), but otherwise it moves a frame's worth of distance and then gets reset back to the intersection point on the next frame.
I have tried all manner of updateMatrix(), updateMatrixWorld(), updateProjectionMatrix(), and Object.matrixWorldNeedsUpdate = true, but alas no joy.
I apologise for using a copy/paste of my code rather than a testable case scenario. Thank you for your time.
Holy cow, I feel dumb... const origin = new THREE.Vector3().copy(character.view.position); returns local space coordinates, of course it gets reset to the origin!
Replacing it with const origin = new THREE.Vector3().setFromMatrixPosition(character.view.matrixWorld); gives me the proper result.
There's a lesson in there somewhere about staring blankly at your code for too long. I hope at least that this question helps someone out there one day.
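For anyone else hitting this, the distinction is between an object's local position and its world position; a quick illustrative comparison (not from the original code):
// .position of a child object is expressed in its parent's (local) space
const localPos = character.view.position.clone();
// the world-space position must be read from the world matrix...
const worldPos = new THREE.Vector3().setFromMatrixPosition(character.view.matrixWorld);
// ...or with the equivalent helper
character.view.getWorldPosition(worldPos);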

Repair normals on possibly bad .stl files

I am new to Three.js and have been assigned the task of repairing the normals on files that occasionally come in and appear to be bad. We do not know if they are bad scans or possibly bad uploads. We are looking into the upload function, but we would also like to repair the files if possible. Can anyone provide any ideas or tips for repairing a file or finding the correct normals?
Below is the code where we grab the normals. NOTE: this code generally works fine; it is only a problem when the normals are bad. I am also attaching one of the files so you can see the kind of normals and "bad file" I am dealing with. Get File here
We are also using VTK on the backend with C++, so a solution or idea using either of these is helpful.
my.geometry = geometry;
var front = new THREE.MeshPhongMaterial(
    { color: 0xe2e4dc, shininess: 50, side: THREE.DoubleSide });
var mesh = [new THREE.Mesh(geometry, front)];
my.scene.add(mesh[0]);
my.objects.push(mesh[0]);
var rc = new THREE.Raycaster();
var modelData = { 'objects': [mesh[0].id], 'id': mesh[0].id };
var normalFound = false;
for (var dy = 80; dy >= -80; dy = dy - 10) {
    console.log('finding a normal on', 0, dy, -200);
    rc.set(new THREE.Vector3(0, dy, -200), new THREE.Vector3(0, 0, 1));
    var hit = rc.intersectObjects([mesh[0]]);
    if (hit.length) {
        my.normal = hit[0].face.normal.normalize();
        console.log('normal', my.normal.z);
        modelData['normal'] = my.normal;
        if ((my.normal.z > 0.9 && my.normal.z < 1.1)) {
            my.requireOrienteering = true;
            modelData['arch'] = 'lower';
            normalFound = true;
            console.log('we have a lower arch');
        } else if ((my.normal.z < -0.9 && my.normal.z > -1.1)) {
            modelData['arch'] = 'upper';
            normalFound = true;
            console.log('we have an upper arch');
        }
        break;
    }
}
Calculating the normals is the easy step. If you take the cross product of two vectors, you get a vector that is orthogonal to both of them. All you have to do then is normalize it, since normals should be unit length so they don't mess up lighting calculations.
For smooth shading, you calculate the normals of all faces that meet at a vertex and average them. For flat shading, each vertex has multiple normals (one for each face it belongs to).
In pseudo code it will look like this for quads:
foreach quad : mesh
    foreach vertex : quad
        vector1 = neighborVertex.pos - vertex.pos;
        vector2 = otherNeighborVertex.pos - vertex.pos;
        vertex.normal = normalize(cross(vector1, vector2));
    end foreach;
end foreach;
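For triangles in three.js, a minimal sketch of the same idea using the classic THREE.Geometry API (the built-in geometry.computeFaceNormals() / computeVertexNormals() do essentially this for you):
function recomputeFaceNormals(geometry) {
    for (var i = 0; i < geometry.faces.length; i++) {
        var face = geometry.faces[i];
        var vA = geometry.vertices[face.a];
        var vB = geometry.vertices[face.b];
        var vC = geometry.vertices[face.c];
        // two edges of the triangle
        var edge1 = new THREE.Vector3().subVectors(vB, vA);
        var edge2 = new THREE.Vector3().subVectors(vC, vA);
        // the cross product is orthogonal to the face; normalize it for lighting
        face.normal.copy(edge1.cross(edge2).normalize());
    }
    geometry.normalsNeedUpdate = true;
}
Note that this only recomputes geometric normals from each face's winding order; it won't fix faces that are wound inconsistently (flipped), which is the harder problem and what the VTK filter below addresses.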
VTK has a filter named vtkPolyDataNormals that you can run on your file to compute normals. You probably want to call ConsistencyOn(), NonManifoldTraversalOn(), and AutoOrientNormalsOn() before running it.
If you want point-normals (instead of per-cell normals) and your shape has sharp corners, you probably want to provide a feature angle with SetFeatureAngle() and call SplittingOn().

Create very soft shadows in three.js?

Is it possible to create a very soft / very subtle shadow in three.js?
Like in this picture?
Everything I've managed to do so far is this:
My Lights:
hemisphereLight = new THREE.HemisphereLight(0xaaaaaa,0x000000, 0.9);
ambientLight = new THREE.AmbientLight(0xdc8874, 0.5);
shadowLight = new THREE.DirectionalLight(0xffffff, 1);
shadowLight.position.set(5, 20, -5);
shadowLight.castShadow = true;
shadowLight.shadowCameraVisible = true;
shadowLight.shadowDarkness = 0.5;
shadowLight.shadow.camera.left = -500;
shadowLight.shadow.camera.right = 500;
shadowLight.shadow.camera.top = 500;
shadowLight.shadow.camera.bottom = -500;
shadowLight.shadow.camera.near = 1;
shadowLight.shadow.camera.far = 1000;
shadowLight.shadowCameraVisible = true;
shadowLight.shadow.mapSize.width = 4096; // default is 512
shadowLight.shadow.mapSize.height = 4096; // default is 512
and render:
renderer.shadowMapEnabled = true;
renderer.shadowMapSoft = true;
renderer.shadowMapType = THREE.PCFSoftShadowMap;
Thank you
You can soften shadows by setting radius like this:
var light = new THREE.PointLight(0xffffff, 0.2);
light.castShadow = true;
light.shadow.radius = 8;
I was curious about this too, so I played around with all the variables I could find. The first real change came from this one, at init:
shadowLight.shadow.mapSize.width = 2048; // you had 4096 here; no need to go over 2048
shadowLight.shadow.mapSize.height = 2048;
Then I tested something else. When I set:
shadowLight = new THREE.DirectionalLight(0xffffff /* color */, 1.75 /* keep this under 2 in this case */, 1000 /* roughly the range between the light and the floor */);
the shadows smoothed out more, and they improved again when I also set my directional light's position with Y at 250:
shadowLight.position.set(100 /* X, just for some side effects */, 250 /* Y, above the scene */, 0 /* Z */);
Then I changed this value to match the width of my floor:
d = 1000;
shadowLight.shadow.camera.left = -d;
shadowLight.shadow.camera.right = d;
shadowLight.shadow.camera.top = d;
shadowLight.shadow.camera.bottom = -d;
because if you use this:
var helper = new THREE.CameraHelper( shadowLight.shadow.camera );
and also add it to the scene:
scenes.add( shadowLight, helper );
...you can see the box of your light's shadow camera; it should be sized to roughly match your scene width, I think. Hope it helps someone out.
What you're looking at is called Ambient Occlusion. There are a few things already available to look at, and you can probably find more now that you know what to search for. For example: Ambient occlusion in threejs
Actually, that is not ambient occlusion. AO is only the contact shadow between two meshes that are very close to each other.
The soft shadows you are looking for can be achieved in two ways:
The first and easier way: create lightmaps in the 3D software you use to make your models. That is, bake the shadows (and ambient occlusion, and even textures and materials if you want) into one texture that you can use later in Three.js. BUT you will not be able to move those objects afterwards; or rather, you can move them, but their shadows will stay baked onto the objects they were being projected onto when you baked the lightmap.
The other way is to do something like what is done in this example, but unfortunately I haven't been able to go through it yet and I don't know much about it:
http://helloracer.com/webgl/
Good luck with it! Regards.
EDIT: Sorry... the 2nd option, the one with the F1 car, is still a lightmap :( But that shadow is made to follow the car, so the effect is quite nice in the end. Here is the shadow being used; it is all baked, not calculated in real time:
http://helloracer.com/webgl/obj/textures/Shadow.jpg
I think drei's soft shadows are a good solution:
https://github.com/pmndrs/drei#softshadows

THREE.js line drawn with BufferGeometry not rendering if the origin of the line isn't in the camera's view

I am writing a trace-line function for a visualization project that requires jumping between time step values. My issue is that, during rendering, the line created using THREE.js's BufferGeometry and the setDrawRange method will only be visible if the origin of the line is in the camera's view. Panning away makes the line disappear, and panning back toward the origin of the line (usually 0,0,0) makes it appear again. Is there a reason for this, and a way around it? I have tried playing around with render settings.
The code I have included is being used in testing and draws the trace of the object as time progresses.
var traceHandle = {

    /* setup() returns trace-line */
    setup : function (MAX_POINTS) {
        var lineGeo = new THREE.BufferGeometry();
        //var MAX_POINTS = 500*10;
        var positions = new Float32Array( MAX_POINTS * 3 ); // 3 values (x, y, z) per point
        lineGeo.addAttribute('position', new THREE.BufferAttribute(positions, 3));
        var lineMaterial = new THREE.LineBasicMaterial({ color: 0x00ff00 });
        var traceLine = new THREE.Line(lineGeo, lineMaterial);
        scene.add(traceLine);
        return traceLine;
    },

    /****
     * updateTrace() updates and draws trace line
     * Need 'index' saved globally for this
     ****/
    updateTrace : function (traceLine, obj, timeStep, index) {
        traceLine.geometry.setDrawRange( 0, timeStep );
        traceLine.geometry.dynamic = true;
        var positions = traceLine.geometry.attributes.position.array;
        positions[index++] = obj.position.x;
        positions[index++] = obj.position.y;
        positions[index++] = obj.position.z;
        // required after the first render
        traceLine.geometry.attributes.position.needsUpdate = true;
        return index;
    }
};
Thanks a lot!
Likely, the bounding sphere is not defined or has radius zero. Since you are adding points dynamically, you can set:
traceLine.frustumCulled = false;
The other option is to make sure the bounding sphere is current, but given your use case, that seems too computationally expensive.
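If you would rather keep frustum culling, the alternative mentioned above would look roughly like this, called at the end of updateTrace() after the positions change (recomputing the bounds on every update is what makes it expensive):
traceLine.geometry.computeBoundingSphere();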
three.js r.73

Trying to workout how to use quaternions to rotate a camera that is moving along a path to look in new direction vector

I am trying to rotate the camera smoothly, without altering the y-vector of the camera direction. I can use lookAt(), but it changes the camera direction in a flash, which is not what I want; I would like a smooth transition as the direction of the camera changes. I have been reading up (and not understanding everything), but it seems to me that quaternions are the solution to this problem.
I have this.object (my camera) moving along a set path (this.spline.points). The location of the camera at any one time is (thisx, thisy, thisz).
I have cc[i], the direction vector for the direction I would like the camera to face (formerly I was using lookAt(cc[i]), which changes the direction correctly, but too quickly/instantaneously).
Using info I have read, I tried the code below, but it just results in the screen going black at the point when the camera is due to move.
Could anyone please explain whether I am on the right track, and how to correct my code?
Thanks
var thisx = this.object.matrixWorld.getPosition().x.toPrecision(3);
var thisy = this.object.matrixWorld.getPosition().y.toPrecision(3);
var thisz = this.object.matrixWorld.getPosition().z.toPrecision(3);
var i = 0;
do {
    var pathx = this.spline.points[i].x.toPrecision(3);
    var pathz = this.spline.points[i].z.toPrecision(3);
    if (thisx == pathx && thisz == pathz) {
        this.object.useQuaternion = true;
        this.object.quaternion = new THREE.Quaternion(thisx, thisy, thisz, 1);
        var newvect;
        newvect.useQuaternion = true;
        newvect.quaternion = new THREE.Quaternion(thisx + cc[i].x, thisy + cc[i].y, thisz + cc[i].z, 1);
        var newQuaternion = new THREE.Quaternion();
        THREE.Quaternion.slerp(this.object.quaternion, newvect.quaternion, newQuaternion, 0.5);
        this.object.quaternion = newQuaternion;
        //this.object.lookAt( cc[i]);
        i = cc.length;
    } else i++;
} while (i < cc.length);
There is no need to call this.object.useQuaternion = true. That is default behavior.
Also, this.object.quaternion contains the current rotation, so no need to generate that either.
You might want to try a different approach - construct the rotation matrix from the spline position, lookAt and up vectors, creating a path of quaternions as a preprocessing step:
var eye = this.spline.points[i].clone().normalize();
var center = cc[i].normalize();
var up = this.object.up.normalize();
var rotMatrix = new THREE.Matrix4().lookAt(eye, center, up);
You could then create the quaternions from the rotation matrix:
var quaternionAtSplineCoordinates = [];
quaternionAtSplineCoordinates.push(new THREE.Quaternion().setFromRotationMatrix(rotMatrix));
Once you have that path, you could apply the quaternion to the camera in your animation loop - provided you have a large enough number of samples. Otherwise, you could consider using slerp to generate the intermediate points.
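A rough sketch of that last step, assuming the quaternionAtSplineCoordinates array from above and an interpolation factor t in [0, 1] between samples i and i + 1 (both names are illustrative):
var qm = new THREE.Quaternion();
// interpolate between the two precomputed samples...
THREE.Quaternion.slerp(quaternionAtSplineCoordinates[i], quaternionAtSplineCoordinates[i + 1], qm, t);
// ...and orient the camera with the result
this.object.quaternion.copy(qm);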
