I'm being driven mildly insane looking for a working combination of interactions. I basically need to make something like a Google Earth-style setup, where you can:
orbit round an object, highlighting the centre-most location,
click a menu link and animate rotation of the object to a particular 'location' (highlighting the new location).
I'm using OrbitControls for the first bit, and was hoping to tween the OrbitControls directly for the menu-link bit, but couldn't get the camera to move along the right path. So I put the camera inside an object, and whilst OrbitControls handles the camera, the tweening is done on the object ('camHolder') instead.
So there are two moving parts (cam controlled by user's mouse, camHolder tweened into position by link clicks), and when either one moves, the rotational difference between them changes. In order to highlight the right 'point' between these two rotation values, I need to keep track of the offset between the two. Basically (simplified version of the codepen):
// ------- MOUSE/CAMERA INTERACTION ---------
// location of points (in radians):
var pointLongs=[-3,-2,-2.5,-2,-1.5,-1,-0.5,0,1,2,2.5,3];
// most recent point highlighted (by menu click):
var currentPoint = 5;
// get diff (in radians) between camera and current point
var pointDistance = pointLongs[currentPoint] - camera.rotation.y;
// the offset rotation of cam (i.e. what's closest to the front):
var offset = camera.rotation.y + pointDistance;
// find the closest value to offset in pointLongs array:
var closest = pointLongs.reduce(function (prev, curr) {
    return (Math.abs(curr - offset) < Math.abs(prev - offset) ? curr : prev);
});
var closestPointIndex = pointLongs.indexOf(closest);
// highlight that point (raise it up):
scene.getObjectByName(pointNames[closestPointIndex]).position.y = 20;
This seems to work as long as pointDistance is above 0, but if it isn't, the tracking of the current 'point' only works on part of the mouse-orbit circle, when it should work all the way round.
Codepen here: http://codepen.io/anon/pen/BNPWya (the Sole tween code is embedded in there so skip the first chunk...). Try rotating the shape with the mouse, and notice that the points aren't raised all the way around. Click the random / next menu buttons, and the 'gap' changes... Sometimes it does go all the way round!
I've tried changing just about all the values (making pointLongs all positive; the initial rotation of the camera; etc.) but my maths is generally terrible, and I've lost the ability to see straight - anyone have any ideas? Please ask if something doesn't make sense!
I'd add the tag 'HelpMeWestLangleyYoureMyOnlyHope' but I don't have enough reputation :D
TL;DR: the rotations of the object and the camera won't 'sync'; I need to either correct the difference, or maybe find a way to tween the position/rotation of OrbitControls?
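Not from the original post, but as a note on the angle arithmetic involved: camera.rotation.y and the tweened holder rotation both wrap around at ±π, so a raw subtraction can jump by 2π. A minimal sketch of wrapping an angular difference before comparing rotations (all values assumed to be in radians):
// Not from the original post: a minimal sketch of wrapping an angular
// difference into [-PI, PI] before comparing the two rotations.
function wrapAngle(a) {
    return Math.atan2(Math.sin(a), Math.cos(a)); // maps any angle into [-PI, PI]
}

var pointDistance = wrapAngle(pointLongs[currentPoint] - camera.rotation.y);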
Related
In my project I have a player walk around a globe. The globe is not just a sphere; it has mountains and valleys, so I need the player's z position to change. For this I'm raycasting a single ray from the player's position against a single object (the globe), getting the point where they intersect, and changing the player's position accordingly. I'm only raycasting when the player moves, not on every frame.
For a complex object it takes forever. It takes ~200ms for an object with ~1M polys (faces) (a 1024x512-segment sphere). Does raycasting test against every single face?
Is there a traditional, fast way to achieve this in THREE, like some acceleration structure (octree? BVH? - tbh from my Google searches I haven't been able to find such a thing included in THREE), or some other thinking-out-of-the-box (no raycasting) method?
var dir = g_Game.earthPosition.clone();
var startPoint = g_Game.cubePlayer.position.clone();
var directionVector = dir.sub(startPoint.multiplyScalar(10));
g_Game.raycaster.set(startPoint, directionVector.clone().normalize());
var t1 = new Date().getTime();
var rayIntersects = g_Game.raycaster.intersectObject(g_Game.earth, true);
if (rayIntersects[0]) {
    var dist = rayIntersects[0].point.distanceTo(g_Game.earthPosition);
    dist = Math.round(dist * 100 + Number.EPSILON) / 100;
    g_Player.DistanceFromCenter = dist + 5;
}
var t2 = new Date().getTime();
console.log(t2 - t1);
Thank you in advance
Do not use the three.js Raycaster for this.
Consider Ray.js (THREE.Ray), which offers the function intersectTriangle(a, b, c, backfaceCulling, target).
Suggested optimizations:
If the player starts from a known position ⇒ you already know the initial height − no need to raycast (or just do a one-time full, slow mesh intersection).
If the player moves in small steps ⇒ the next raycast will most likely hit the same face as before.
Optimization #1 − remember the previous face and raycast against it first.
If the player does not jump ⇒ the next raycast will most likely hit a face adjacent to the face the player was on before.
Optimization #2 − build up a cache so that, given a face index, you can retrieve its adjacent faces in O(1) time (a rough sketch follows below).
This cache may be loaded from a file if your planet is not generated in real time.
So with this approach, on each move you do one O(1) read from the cache and raycast 1-6 faces.
Win!
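Not from the original answer, but a rough sketch of what Optimization #2 could look like with an indexed BufferGeometry and THREE.Ray.intersectTriangle. Names like buildVertexFaceCache and lastFaceIndex are placeholders for whatever your game keeps track of:
// Sketch only; assumes an indexed BufferGeometry and a `lastFaceIndex`
// remembered from the previous successful cast.
import * as THREE from 'three';

// Cache: vertex index -> array of face indices that use that vertex.
function buildVertexFaceCache(geometry) {
  const index = geometry.index.array;
  const cache = new Map();
  for (let face = 0; face < index.length / 3; face++) {
    for (let i = 0; i < 3; i++) {
      const v = index[face * 3 + i];
      if (!cache.has(v)) cache.set(v, []);
      cache.get(v).push(face);
    }
  }
  return cache;
}

// The previous face plus every face sharing one of its vertices.
function adjacentFaces(geometry, cache, faceIndex) {
  const index = geometry.index.array;
  const faces = new Set([faceIndex]);
  for (let i = 0; i < 3; i++) {
    cache.get(index[faceIndex * 3 + i]).forEach(f => faces.add(f));
  }
  return faces;
}

// Intersect only the candidate faces with Ray.intersectTriangle.
function raycastFaces(ray, mesh, faces) {
  const pos = mesh.geometry.attributes.position;
  const index = mesh.geometry.index.array;
  const a = new THREE.Vector3(), b = new THREE.Vector3(), c = new THREE.Vector3();
  const hit = new THREE.Vector3();
  let best = null;
  faces.forEach(face => {
    a.fromBufferAttribute(pos, index[face * 3]).applyMatrix4(mesh.matrixWorld);
    b.fromBufferAttribute(pos, index[face * 3 + 1]).applyMatrix4(mesh.matrixWorld);
    c.fromBufferAttribute(pos, index[face * 3 + 2]).applyMatrix4(mesh.matrixWorld);
    if (ray.intersectTriangle(a, b, c, false, hit) !== null) {
      const d = ray.origin.distanceToSquared(hit);
      if (best === null || d < best.distSq) {
        best = { point: hit.clone(), faceIndex: face, distSq: d };
      }
    }
  });
  return best; // null means fall back to a full (slow) intersectObject
}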
For a complex object it takes forever. It takes ~200ms for an object with ~1m polys (faces) (1024x512 segments sphere). Does raycasting cast against every single face?
Out of the box, THREE.js checks every triangle when performing a raycast against a mesh, and there are no acceleration structures built into THREE.
I've worked with others on the three-mesh-bvh package (github, npm) to help address this problem, though, which may help you get up to the speeds you're looking for. Here's how you might use it:
import * as THREE from 'three';
import { MeshBVH, acceleratedRaycast } from 'three-mesh-bvh';
THREE.Mesh.prototype.raycast = acceleratedRaycast;
// ... initialize the scene...
globeMesh.geometry.boundsTree = new MeshBVH(globeMesh.geometry);
// ... initialize raycaster...
// Optional. Improves the performance of the raycast
// if you only need the first collision
raycaster.firstHitOnly = true;
const intersects = raycaster.intersectObject(globeMesh, true);
// do something with the intersections
There are some caveats mentioned in the README, so keep those in mind (the mesh index is modified, only non-animated BufferGeometry is supported, etc.). And there's still some memory optimization that could be done, but there are some tweakable options to help tune that.
I'll be interested to hear how this works for you! Feel free to leave feedback in the issues on how to improve the package, as well. Hope that helps!
I think you should pre-render the height map of your globe into a texture, assuming your terrain is not dynamic. Read all of it into a typed array, and then whenever your player moves you only need to back-project the player's coordinates into that texture, query it, offset and multiply, and you should get what you need in O(1) time.
It's up to you how you generate that height map. Actually, if you have a bumpy globe, then you should probably start with a height map in the first place, and use that in your vertex shader to render the globe (with the input sphere being perfectly smooth). Then you can use the same height map to query the player's z.
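Not from the original answer, but a minimal sketch of the O(1) lookup, assuming the height map is an equirectangular texture already read into a typed array (heightData, width and height are placeholders):
// Minimal sketch; assumes `heightData` is a typed array read from the
// pre-rendered equirectangular height map of size width x height.
function sampleHeight(heightData, width, height, playerPos, globeCenter) {
  // direction from the globe centre to the player
  const dir = playerPos.clone().sub(globeCenter).normalize();
  // back-project onto spherical (equirectangular) texture coordinates
  const u = 0.5 + Math.atan2(dir.z, dir.x) / (2 * Math.PI);
  const v = 0.5 - Math.asin(dir.y) / Math.PI;
  const x = Math.min(width - 1, Math.floor(u * width));
  const y = Math.min(height - 1, Math.floor(v * height));
  // offset/scale this raw value into actual terrain height as needed
  return heightData[y * width + x];
}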
Edit: Danger! This may cause someone's death one day. The edge case I see here is that the nearest collision will not be seen, because searchRange will not contain the nearest triangle but will contain the second-nearest one, returning it as the closest. I.e. a robotic arm may stop near the torso instead of stopping at the arm right in front of it.
Anyway,
here's a hack for when you're raycasting not too far from the previous result, i.e. during consecutive mousemove events. This will not work for completely random rays.
Mesh raycasting supports drawRange to limit how many triangles will be searched. Also, each raycast result comes with a faceIndex telling you which triangle was hit. If you're raycasting continuously, i.e. on mousemove, or a laser is linearly scanning a mesh, you can first search the area near* the previous hit.
* Triangles that are close in the data may look like neighbours, but it's not guaranteed they are sorted in any way. Still, it's very likely that triangles close in the data are close in space.
let lastFaceIndex = null
const searchRange = 2000 * 3 // 2000 triangles, 3 index entries each

function raycast(mesh, raycaster) {
    // limited search: temporarily restrict drawRange to a window of triangles
    // around the previous hit, cast, then restore the full range
    if (lastFaceIndex !== null) {
        const drawRange = mesh.geometry.drawRange
        drawRange.start = Math.max(0, lastFaceIndex * 3 - searchRange)
        drawRange.count = searchRange * 2
        const intersects = raycaster.intersectObjects([mesh]);
        drawRange.start = 0
        drawRange.count = Infinity
        if (intersects.length) {
            lastFaceIndex = intersects[0].faceIndex
            return intersects[0]
        }
    }
    // regular (full) search
    const intersects = raycaster.intersectObjects([mesh]);
    if (!intersects.length) {
        lastFaceIndex = null
        return null
    }
    lastFaceIndex = intersects[0].faceIndex
    return intersects[0]
}
I am trying to find the closest distance from a point to a large, complex mesh along a plane within a range of directions:
for (var zDown in verticalDistances) {
    var myIntersect = {};
    for (var theta = Math.PI / 2 - 0.5; theta < Math.PI / 2 + 0.5; theta += 0.3) {
        var rayDirection = new THREE.Vector3(
            Math.cos(theta),
            Math.sin(theta),
            0
        ).transformDirection(object.matrixWorld);
        // console.log(rayDirection);
        // note: THREE.Raycaster.set() only takes (origin, direction);
        // near/far are set via the constructor or the .near/.far properties
        _raycaster.set(verticalDistances[zDown].minFacePoint, rayDirection, 0, 50);
        // console.time('raycast: ');
        var intersect = _raycaster.intersectObject(planeBufferMesh);
        // console.timeEnd('raycast: '); // this is huge!!! ~ 2,300 ms
        // console.log(_raycaster);
        // console.log(intersect);
        if (intersect.length == 0) continue;
        if ((!('distance' in myIntersect)) || myIntersect.distance > intersect[0].distance) {
            myIntersect.distance = intersect[0].distance;
            myIntersect.point = intersect[0].point.clone();
        }
    }
    // do stuff
}
I get great results with mouse hover on the same surface, but when performing this loop the raycasting takes over 2 seconds per cast. The only thing I can think of is that the BackSide of the DoubleSide material is a ton slower?
Also, I notice that as I space my verticalDistances[zDown].minFacePoint values farther apart, the raycast starts to speed up (~500 ms/cast). So as the distance between verticalDistances[i].minFacePoint and verticalDistances[i+1].minFacePoint increases, the raycaster performs faster.
I would go the route of using an octree, but the mouse hover event works extremely well on the exact same planeBuffer. Is this a material-side issue that could be solved by loading two FrontSide meshes pointing in opposite directions?
Thank You!!!!
EDIT: It is not a front/back issue. I ran my raycast down the front and back side of the plane buffer geometry with the same result at the same spot. Live example coming.
EDIT 2: Working example here. Performance is a little better than the original case but still too slow. I need to move the cylinder in real time. I can optimize a bit by finding certain things, but mouse hover is instant. When you look at the console times, the first two (~500 ms) are the results I am getting for all casts.
EDIT 3: Added a mouse hover event that performs the same as the other raycasters. I am not getting the results in my working code that I get in this sample, though. The results I get for all raycasts are the same as the first 1 or 2 in the sample, around 500 ms. If I could get it down to 200 ms I could target the items I am looking for and do far less raycasting. I am completely open to suggestions on better methods. Is an octree the way to go?
raycast: : 467.27001953125ms
raycast: : 443.830810546875ms
EDIT 4: @pailhead Here is my plan.
1. Find the closest grid vertex to the point on the plane. I can do a scan of vertices in the x/y direction and then calculate the minimum distance.
2. Once I have that closest vertex, I know that my closest point has to be on a face containing that vertex. So I will find all faces with that vertex using object.mesh.index.array and calculate the plane-to-point distance for each face. It seems like a raycast should be a little smarter than a full scan when intersecting a mesh, and at least cull points based on max distance? @WestLangley, any suggestions?
EDIT 5:
@pailhead, thank you for the help; it's appreciated. I have really simplified my example (<200 lines, with many more comments). Is the raycaster checking every face? It would be much quicker to pick out the faces within the raycasting range specified in the constructor and do a face-to-point calculation. There is no way this should be looping over every face to raycast. I'm going to write my own PlaneBufferGeometry raycast function tonight, after taking a peek at the source code and checking out octrees. I would think that if we have a range in the raycaster constructor, we could pull out the plane buffer vertices within that range, ignoring z, and then just raycast those or do a point-to-plane calculation. I guess I could just create a "mini" surface from that bounding circle and then raycast against it. But the fact that the max distance (the manual calls it "far") doesn't affect the speed of the raycaster makes me wonder how much it is optimized for PlaneBuffer geometries. FYI, your 300k loop is ~3 ms on jsfiddle.
EDIT 6: It looks like all meshes are treated the same in the raycast function. That means it won't smartly hunt out the area for a PlaneBufferGeometry. Looking at mesh.js line 266, we loop over the entire index array. I guess for a regular mesh you don't know what faces are where because it's a TIN, but a PlaneBuffer could really use a bounding box/sphere rule, because the x/y positions are in a known order and only the z is unknown. Last edit; the answer will be next.
FYI: for max speed, you could use math. There is no need to use ray casting. https://brilliant.org/wiki/3d-coordinate-geometry-equation-of-a-plane/
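For reference (not from the comment itself), the point-to-plane math the link describes boils down to a few vector operations:
// Distance from point P to the plane through triangle vertices A, B, C
// (all THREE.Vector3); no raycast involved.
function pointToPlaneDistance(P, A, B, C) {
  const ab = new THREE.Vector3().subVectors(B, A);
  const ac = new THREE.Vector3().subVectors(C, A);
  const normal = new THREE.Vector3().crossVectors(ab, ac).normalize();
  return Math.abs(new THREE.Vector3().subVectors(P, A).dot(normal));
}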
The biggest issue resolved was filtering out faces of the PlaneBufferGeometry based on vertex index. With a PlaneBufferGeometry you can find a bounding sphere or rectangle that gives you the faces you need to check. They are ordered in x/y in the index array, so that filters out many of the faces. I did an indexOf of the bottom-left position and a lastIndexOf of the top-right corner position in the index array. RAYCASTING CHECKS EVERY FACE.
I also gave up on finding the distance from each face of the object, and instead used a vertical path down the center of the object. This decreased the number of raycasts needed.
Lastly, I did my own face walk-through and used Triangle.closestPointToPoint() on each face (a rough sketch follows below).
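A rough sketch of that face walk (not the original code; candidateFaces stands in for the index range filtered by the bounding rectangle, and queryPoint for the point being measured):
// Sketch only: walk the pre-filtered faces and keep the nearest point.
const tri = new THREE.Triangle();
const a = new THREE.Vector3(), b = new THREE.Vector3(), c = new THREE.Vector3();
const closest = new THREE.Vector3();
const pos = planeBufferMesh.geometry.attributes.position;
const index = planeBufferMesh.geometry.index.array;

let bestDistance = Infinity;
let bestPoint = null;

candidateFaces.forEach(function (face) {
  a.fromBufferAttribute(pos, index[face * 3]);
  b.fromBufferAttribute(pos, index[face * 3 + 1]);
  c.fromBufferAttribute(pos, index[face * 3 + 2]);
  tri.set(a, b, c);
  tri.closestPointToPoint(queryPoint, closest); // point-to-triangle, no ray needed
  const d = queryPoint.distanceTo(closest);
  if (d < bestDistance) {
    bestDistance = d;
    bestPoint = closest.clone();
  }
});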
I ended up with around 10 ms per point-to-surface calculation (single raycast) and around 100 ms per object (10 vertical slices) to surface. I was seeing 2.5 seconds per raycast and 25+ seconds per object prior to optimization.
So I'm very new to THREE.js and I've been trying to figure this out for a few hours now: how do I determine whether or not a mesh is facing a selected point? Essentially what I have is an RTS-style game, where you can select a character and select where it moves to. Currently you can select the character and where you want it to move to on the map and it will start walking, but I can't figure out how to determine whether it is facing the right direction. I don't want to use lookAt, because I want the mesh to turn while it walks forward, and not do anything instantaneously.
Ideas?
A simple solution is to select an arbitrary look vector:
var lookVector = new THREE.Vector3(0,0,1);
and when you need to do the check, transform a copy of this vector with the mesh matrix (make sure the matrix is updated, and account for any geometry transformations if you applied any):
var direction = lookVector.clone().applyMatrix4(mesh.matrix);
// boundingSphere lives on the geometry (call computeBoundingSphere() first if it's null),
// and its center needs the same transform so both points are in the same space
var origin = mesh.geometry.boundingSphere.center.clone().applyMatrix4(mesh.matrix);
var lookVectorAtThisTime = direction.sub(origin);
then calculate the angle to your point of interest
var vectorToPOI = POI.clone().sub(origin); // clone so the original point isn't modified
var angle = lookVectorAtThisTime.angleTo(vectorToPOI);
if (angle < minAngle) {
    // looking at the point
}
You can also calculate your look vector directly from the geometry, or use some origin vector other than the center of the object, but this should get you on the right path.
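Not from the answer above, but a hedged sketch of the gradual per-frame turning this enables, assuming the mesh's forward direction is +Z and it only rotates around Y (turnSpeed is a made-up tuning value):
var turnSpeed = 0.05; // max radians turned per frame (assumption)

function turnTowards(mesh, targetPoint) {
    // heading the mesh should have to face the target (+Z forward convention)
    var toTarget = targetPoint.clone().sub(mesh.position);
    var targetAngle = Math.atan2(toTarget.x, toTarget.z);
    // signed difference, wrapped into [-PI, PI] so we turn the short way round
    var diff = targetAngle - mesh.rotation.y;
    diff = Math.atan2(Math.sin(diff), Math.cos(diff));
    // turn a little each frame instead of snapping with lookAt
    mesh.rotation.y += Math.sign(diff) * Math.min(Math.abs(diff), turnSpeed);
}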
I'm trying to make a Tiny Wings-like game in JavaScript.
I first saw a technique using Box2D; I'm using the closure-web version (because of the memory leak fixes).
In short, I explode the curve into polygons so it looks like this:
I also tried Chipmunk-js, where I use the segment shape to simulate my ground, like this:
In both cases, I'm experiencing some "crashes" or "bumps" at the common points between polygons or segments when a circle is rolling.
I asked about it for Chipmunk and the author said he implemented a radius property for the segment to reduce this behavior. I tried it and it indeed did the trick, but it's not perfect. I still have some bumps (I had to set the radius to 30px to get a positive effect).
The "bumps" appear at the shared points between two polygons:
Using the edging technique, as illandril suggested to me (he only tested with polygon-polygon contact), to keep the circle from crashing into an edge:
I also tried adding the bullet option as Luc suggested, and nothing seems to change.
Here is the demo of the issue.
You can try changing these values to check:
bullet option
edge size
iterations count
the physics
(only tested on latest dev Chrome)
Be patient (or change the horizontal gravity) and you'll see what I mean.
Here is the repo for those interested.
The best solution is edge shapes with ghost vertices, but if that's not available in the version/port you're using, the next best thing is what the diagram in your question calls 'edging', except you extend the polygons further underground with a very shallow slope, like in this thread: http://www.box2d.org/forum/viewtopic.php?f=8&t=7917
I first thought the problem could come from the change of slope between two adjacent segments, but since on a flat surface of polygons you still have bumps I think the problem is rather hitting the corner of a polygon.
I don't know if you can set up two sets of polygons overlapping each other? Just use the same interpolation calculations and generate a second set of polygons, as in the diagram below: you have the red set of polygons built, and you add the green set by setting the left vertices of a green polygon in the middle of a red polygon, and its right vertices in the middle of the next red polygon.
[diagram: the red set of ground polygons with a green set overlaid, each green polygon spanning the joint between two red ones]
This should work on concave curves and... well you should be flying over the convex ones anyway.
If this doesn't work, try using a larger number of polygons to build the slope. Use a tenth of the circle's radius for the polygon width, maybe even less. That should reduce your slope discontinuities.
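A rough sketch of generating that second, offset set (not from the answer; groundPoints is the interpolated curve and createGroundPoly stands in for whatever builds one Box2D ground polygon from two surface points):
function buildOverlappingGround(groundPoints) {
    // red set: one polygon per pair of consecutive curve samples
    for (var i = 0; i < groundPoints.length - 1; i++) {
        createGroundPoly(groundPoints[i], groundPoints[i + 1]);
    }
    // green set: the same polygons shifted by half a segment, so each green
    // polygon covers the joint between two red polygons
    for (var j = 0; j < groundPoints.length - 2; j++) {
        var midA = {
            x: (groundPoints[j].x + groundPoints[j + 1].x) / 2,
            y: (groundPoints[j].y + groundPoints[j + 1].y) / 2
        };
        var midB = {
            x: (groundPoints[j + 1].x + groundPoints[j + 2].x) / 2,
            y: (groundPoints[j + 1].y + groundPoints[j + 2].y) / 2
        };
        createGroundPoly(midA, midB);
    }
}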
-- Edit
In Box2D.js, line 5082 (in this repo at least), you have the PreSolve(contact, manifold) function that you can override to check whether the manifolds (the directions in which the snowball is impulsed when colliding with the polygons) are correct.
To do so, you would need to recover the manifold vector and compare it to the normal of the curve. It should look something like this (maybe not exactly):
Box2D.Dynamics.b2ContactListener.prototype.PreSolve = function (contact, oldManifold) {
    // contact instanceof Box2D.Dynamics.Contacts.b2Contact == true
    var localManifold, worldManifold, xA, xB, man_vect, curve_vect, normal_vect, angle;

    localManifold = contact.GetManifold();
    if (localManifold.m_pointCount == 0)
        return; // or raise an exception

    worldManifold = new Box2D.Collision.b2WorldManifold();
    contact.GetWorldManifold(worldManifold);

    // deduce the impulse direction from the manifold points
    man_vect = worldManifold.m_normal.Copy();

    // we need two points close to & surrounding the collision to compute the normal vector
    // not sure this is the right order of magnitude
    xA = worldManifold.m_points[0].x - 0.1;
    xB = worldManifold.m_points[0].x + 0.1;
    man_vect.Normalize();

    // now we have the abscissas, let's get the ordinates of these points on the curve
    // the subtraction of these two points will give us a vector parallel to the curve
    var SmoothConfig = {
        params: {
            method: 'cubic',
            clip: 'mirror',
            cubicTension: 0,
            deepValidation: false
        },
        options: {
            averageLineLength: .5
        }
    };

    // get the points, smooth and smooth config stuff here
    var smooth = Smooth(global_points, SmoothConfig);
    curve_vect = new Box2D.Common.Math.b2Vec2(xB, smooth(xB)[1]);
    curve_vect.Subtract(new Box2D.Common.Math.b2Vec2(xA, smooth(xA)[1]));

    // now turn it to have a normal vector, turned upwards
    normal_vect = new Box2D.Common.Math.b2Vec2(-curve_vect.y, curve_vect.x);
    if (normal_vect.y > 0)
        normal_vect.NegativeSelf();
    normal_vect.Normalize();
    worldManifold.m_normal = normal_vect.Copy();

    // and finally compute the angle between the two vectors
    angle = Box2D.Common.Math.b2Math.Dot(man_vect, normal_vect);
    $('#angle').text("" + Math.round(Math.acos(angle) * 36000 / Math.PI) / 100 + "°");

    // here try to raise an exception if the angle is too big (maybe after a few ms)
    // with different thresholds on the angle value to see if the bumps correspond
    // to a manifold that's not normal enough to your curve
};
I'd say the problem has been tackled in Box2D 2.2.0; see its manual, section 4.5, "Edge Shapes".
The thing is, it's a feature of the 2.2.0 version, along with the chain shape, and box2dweb is actually ported from 2.2.1a - I don't know about box2dweb-closure.
Anything I've tried by modifying Box2D.Collision.b2Collision.CollidePolygonAndCircle has resulted in erratic behaviour, at least part of the time (e.g. the ball bumping in random directions, but only when it rolls slowly).
I have an application with many draggable objects that can also be rotated in 90 degree increments. I'm trying to figure out how to stop the user from dragging the objects outside the Raphael paper (canvas).
This is fairly simple for unrotated objects. I can simply see if the current x and y coordinates are less than 0 and set them to 0 instead. I can adjust similarly by checking if they are outside the canvas width and height.
However, a problem arises when the object is rotated because for some odd reason the coordinate plane rotates as well. Is there an easy way to keep objects inside the canvas? Or is there an example of some this somewhere?
I have spent many hours fiddling with this and I can't seem to make sense of the rotated coordinate plane in order to adjust my calculations. Even when debugging the current coordinates, they seem to shift oddly if I drag an object, release it, and then drag the object again.
Any help is greatly appreciated.
Thanks,
Ryan
I had a similar problem, I needed to move a shape within the boundaries of another shape, so what I did was:
element.drag(App.onMove, App.onStart, App.onEnd); // Raphael's Element.drag takes (onmove, onstart, onend)
...
onStart: function (x, y, e) {
    // Initialize values so it doesn't recalculate per iteration;
    // this allows dragging to resume from the point where it was left
    App.oldX = 0;
    App.oldY = 0;
    App.currentCircleX = App.fingerPath.attr('cx');
    App.currentCircleY = App.fingerPath.attr('cy');
},
onMove: function (dx, dy, x, y, e) {
    App.setDirection(dx, dy);
},
onEnd: function (e) {
    // nothing to do here for now
},
// this function tells the element to move only if it's within the bound area
setDirection: function (dx, dy) {
    var isXYinside;
    this.newX = this.currentCircleX - (this.oldX - dx);
    this.newY = this.currentCircleY - (this.oldY - dy);
    // HERE is the key: this method receives your bounding path, evaluates the
    // given position and returns true or false
    isXYinside = Raphael.isPointInsidePath(this.viewportPath, this.newX, this.newY);
    this.oldX = dx;
    this.oldY = dy;
    // so if it is within the bound area it will move, otherwise it will just stay there
    if (isXYinside) {
        this.fingerPath.attr({
            "cx": this.newX,
            "cy": this.newY
        });
        this.currentCircleX = this.newX;
        this.currentCircleY = this.newY;
    }
}
I know this is an old one, but I stumbled upon this question when trying to figure out a way to do it. So here's my 2 cents in case someone has this problem.
Reference:
Raphael.isPointInsidePath
Have you tried Element.getBBox()?
There are two flavours, which give the result before rotation and after rotation.
You should toggle the Boolean argument and test it.
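Not from the answer, but a sketch of how getBBox() could be used to clamp a drag inside the paper. Because getBBox() without the Boolean argument reflects the current rotation, the same check covers rotated and unrotated objects. paperW/paperH and the translate-based drag are assumptions:
var paperW = 800, paperH = 600; // assumed paper size

element.drag(
    function onMove(dx, dy) {
        // dx/dy are cumulative since drag start, so work out this frame's delta
        var stepX = dx - this.lastDx;
        var stepY = dy - this.lastDy;
        // current bounding box, after any rotation
        var box = this.getBBox();
        // clamp the delta so the box stays inside [0, paperW] x [0, paperH]
        stepX = Math.max(-box.x, Math.min(paperW - box.x2, stepX));
        stepY = Math.max(-box.y, Math.min(paperH - box.y2, stepY));
        this.translate(stepX, stepY);
        this.lastDx += stepX;
        this.lastDy += stepY;
    },
    function onStart() {
        this.lastDx = 0;
        this.lastDy = 0;
    }
);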