I am new to three.js.
I have a scene with an object in it which can be moved around on all of the X, Y, Z axes using TransformControls.js.
When I translate/move the object inside the scene by clicking and dragging along any of the axes (i.e. X, Y, Z), I want to get the updated X, Y, Z position coordinates of that particular object inside the scene.
I use mesh.position.set( 0, 0, 0 ); to set the position of the object prior to rendering the scene, but I am not able to find out how to get the dynamic position of an object inside a scene.
Eventually I want to save the updated position coordinates after the transform operation and re-render the scene with the object at the updated coordinates when the user comes back to the page or refreshes it.
Any pointers would be very helpful.
Thank you
THREE.TransformControls requires a few steps to use.
Create your THREE.TransformControls object
Add it to your scene
Attach it to the object you wish to manipulate
var xformControl = new THREE.TransformControls(camera, renderer.domElement);
scene.add(xformControl);
// assuming you add "myObj" to your scene...
xformControl.attach(myObj);
// and then later...
xformControl.detach();
Attaching the control to an object inserts a manipulation "gizmo" into the scene. Dragging the various parts of the gizmo performs different kinds of transformations. After you are done transforming the object with the gizmo, mesh.position will reflect the new position.
Additional information for clarity:
The position of the object will not be updated until you use the "gizmo" to move it. Example:
Your object is in the scene at (10, 10, 10)
xformControl.attach(yourObject)
The "gizmo" is created at (10, 10, 10)
Your object remains at (10, 10, 10)
Use the "gizmo" to translate the object in the +Y direction
Your object will now have an updated position
console.log(yourObject.position.y > 10); // true
I might be too late, but you can get the updated value by using TransformControls' objectChange event.
Example code:
const transformControls = new TransformControls(camera, renderer.domElement);
transformControls.addEventListener('objectChange', (e) => {
  console.log(e.target.object.position.x);
});
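To also cover the saving/restoring part of the question: here is a minimal sketch (assuming browser localStorage is available and myObj is the object being transformed) that persists the position on every change and restores it on page load:

// Save the position whenever the gizmo moves the object
transformControls.addEventListener('objectChange', (e) => {
  const p = e.target.object.position;
  localStorage.setItem('myObjPosition', JSON.stringify([p.x, p.y, p.z]));
});

// On page load, restore the saved position before the first render
const saved = localStorage.getItem('myObjPosition');
if (saved !== null) {
  myObj.position.fromArray(JSON.parse(saved));
}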
I'm trying to add an Object3D to my gltf model and place it above the model. I'm doing it the following way:
this.el.addEventListener('model-loaded', () => {
  this.bar = new MyCustomObject3D();
  const size = new THREE.Vector3();
  let box = new THREE.Box3().setFromObject(this.el.object3D);
  box.getSize(size);
  let height = size.y + 1;
  this.bar.position.set(0, height, 0);
  this.el.setObject3D("bar", this.bar);
  // same result:
  // this.el.object3D.add(this.bar);
});
The height is 2, and if I placed an element with this position into the root (i.e. the scene) it would be placed correctly, right above the model. But when I add it to the Object3D it ends up somewhere below the model, at a height of ~0.5. Only by multiplying the height by 25 could I achieve the right position.
So how do I calculate the exact offset needed to place the new Object3D above the model, without multiplying by an arbitrary number?
UPDATE:
Adding a reproducible example. Note the width and height I had to pass to the GLTF model.
One way of placing objects above a model is grabbing its bounding box and placing the object above it.
In general, it is simple - just like you did:
let box = new THREE.Box3().setFromObject(this.el.object3D);
box.getSize(size);
let height = size.y + 1;
this.bar.position.set(0, height, 0);
But in this case - the bounding box is off. Way off. The minimum is way too low, and the maximum is somewhere in the middle. Why is that? (tldr: check it out here)
The culprit is: skinning. The model is transformed by its bones - which is a form of vertex displacement that happens on the GPU (in the vertex shader) and has nothing to do with the geometry (source).
Here is some visual aid - the model with its armature:
And without the armature applied:
Now we see why the box is off - it corresponds to the bottom picture!
So we need to re-create what the bones are doing to the geometry:
1. The hard route
You need to take a THREE.Box3.
Iterate through each geometry point of the model
Apply the bone transform to each point (it is done here - but not available in a-frame 1.0.4)
Expand the THREE.Box3 by each transformed point (a sketch of this follows below)
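A minimal sketch of the hard route, assuming a three.js build where THREE.SkinnedMesh exposes boneTransform() (renamed applyBoneTransform() in later releases) - as noted above, this is not available in a-frame 1.0.4:

function computeSkinnedBoundingBox(skinnedMesh) {
  const box = new THREE.Box3();
  const vertex = new THREE.Vector3();
  const position = skinnedMesh.geometry.attributes.position;
  for (let i = 0; i < position.count; i++) {
    // Read the rest-pose vertex and apply the current bone transforms to it
    vertex.fromBufferAttribute(position, i);
    skinnedMesh.boneTransform(i, vertex);
    // Move the point into world space and expand the box around it
    vertex.applyMatrix4(skinnedMesh.matrixWorld);
    box.expandByPoint(vertex);
  }
  return box;
}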
2. The easy route
While looking into this, I've made a utility function THREE.Box3Utils.fromSkinnedMesh(mesh, box3); - box3 will be the bounding box of the model at the time when the function is called.
The function is a part of this repo.
It's used in this example.
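Usage would look something like this (a sketch, assuming the utility script from the repo is loaded and skinnedMesh is the model's skinned mesh):

const box = new THREE.Box3();
THREE.Box3Utils.fromSkinnedMesh(skinnedMesh, box);
// Place the bar just above the real (skinned) bounding box
const size = new THREE.Vector3();
box.getSize(size);
this.bar.position.set(0, size.y + 1, 0);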
One can easily create a THREE.BoxGeometry by passing the width, height, and depth as three separate arguments at construction time.
I would like to create any and all THREE[types]() with no parameters and set the values after that.
Is there a way to set the dimensions/size of the box geometry after creation (possibly when it is already buried in a Mesh, too), other than scaling etc.?
I couldn't find this in the documentation. If it is possible, maybe this is just a documentation gap; if not, consider it a feature request, if not a bug. Any thoughts on how to classify this?
If you want to scale a mesh, you have two choices: scale the mesh
mesh.scale.set( x, y, z );
or scale the mesh's geometry
mesh.geometry.scale( x, y, z );
The first method modifies the mesh's matrix transform.
The second method modifies the vertices of the geometry.
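To see the difference concretely, here is a quick sketch (variable names are illustrative):

const geo = new THREE.BoxGeometry(1, 1, 1);
const mesh = new THREE.Mesh(geo, new THREE.MeshNormalMaterial());

// 1. Scaling the mesh: the vertices are untouched, only the matrix changes
mesh.scale.set(2, 2, 2);
geo.computeBoundingBox();
console.log(geo.boundingBox.max.x); // still 0.5

// 2. Scaling the geometry: the vertices themselves are rewritten
geo.scale(2, 2, 2);
geo.computeBoundingBox();
console.log(geo.boundingBox.max.x); // now 1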
Look at the source code so you understand what each scale method is doing.
three.js r.73
When you instantiate a BoxGeometry object, or any other geometry for that matter, the vertices and such buffers are created on the spot using the parameters provided. As such, it is not possible to simply change a property of the geometry and have the vertices update; the entire object must be re-instantiated.
You will need to create your geometries once you have the parameters for them available. You can, however, create meshes without geometries, add them to a scene, and update the mesh's geometry property once you have enough information to instantiate the object. Failing that, you could also set a default value at first and then scale to reach your target.
Technically, scaling only creates the illusion of an updated geometry, and the question did say (other than scaling). So, I would say a better approach would be to reassign the geometry property of your mesh to a new geometry.
mesh.geometry = new THREE.BoxGeometry(newSize, newSize, newSize)
With this approach you can update any aspect of the geometry including width segments for example. This is especially useful when working with non box geometries like cylinders or spheres.
Here is a full working example using this approach:
let size = 10
let newSize = 20
// Create a blank geometry and make a mesh from it.
let geometry = new THREE.BoxGeometry()
let material = new THREE.MeshNormalMaterial()
let mesh = new THREE.Mesh(geometry, material)
// Adding this mesh to the scene won't display anything because ...
// the geometry has no parameters yet.
scene.add(mesh)
// Unless you intend to reuse your old geometry, dispose of it;
// this will significantly reduce the memory footprint.
mesh.geometry.dispose()
// Update the mesh geometry to a new geometry with whatever parameters you desire.
// You will now see these changes reflected in the scene.
mesh.geometry = new THREE.BoxGeometry(size, size, size)
// You can update the geometry as many times as you like.
// This can be done before or after adding the mesh to the scene.
mesh.geometry = new THREE.BoxGeometry(newSize, newSize, newSize)
I'm using the Physijs script for physics, like gravitation.
I want to move objects in my scene with the Raycaster from the THREE.js script.
My problem is that the Raycaster only moves objects (a simple box) declared like:
var box = new Physijs.Mesh(cubeGeometry.clone(), createMaterial);
But here the physics does not work. It only works if I declare it like:
var create = new Physijs.BoxMesh(cubeGeometry.clone(), createMaterial);
But here the Raycaster / moving does not work.
The difference between the two is that the first is just a Mesh and the second is a BoxMesh.
Does anyone know why this doesn't work? I need the BoxMesh in order to use gravity and other physics.
Code to add a cube:
function addCube()
{
  controls.enabled = false;
  var cubeGeometry = new THREE.CubeGeometry(85, 85, 85);
  var createTexture = THREE.ImageUtils.loadTexture("images/rocks.jpg");
  var createMaterial = new THREE.MeshBasicMaterial({ map: createTexture });
  var box = new Physijs.BoxMesh(cubeGeometry.clone(), createMaterial);
  box.castShadow = true;
  box.receiveShadow = true;
  box.position.set(0, 300, 0);
  objects.push(box);
  scene.add(box);
}
Explanation
In Physijs, all primitive shapes (such as the Physijs.BoxMesh) inherit from Physijs.Mesh, which in turn inherits from THREE.Mesh. In the Physijs.Mesh constructor, there is a small internal object: the ._physijs field. And, in that object, there is... a shape type declaration, set to null by default. That field must be re-assigned by one of its children. If it isn't, then when the shape is passed to the scene, the Physijs worker script won't know what kind of shape to generate and will simply abort. Since Physijs.Scene inherits from THREE.Scene, the scene keeps a reference to the mesh internally like it should, which means that all methods from THREE.js will work (raycasting, for instance). However, it is never registered as a physical object because it has no type!
Now, when you try to move the Physijs.BoxMesh directly via its position and rotation fields, the change is immediately overridden by the physics updates, which are driven by the .simulate method on your scene object. When called, it delegates to the worker to compute new positions and rotations that correspond to the physics configuration of your scene. Once it's finished, the new values are transferred back to the main thread and applied automatically so that you don't have to do anything. This can be a problem in some cases (like this one!). Fortunately, the developer included two special fields in Physijs.Mesh: the .__dirtyPosition and .__dirtyRotation flags. Here's how you use them:
// Place box already in scene somewhere else
box.position.set(10, 10, 10);
// Set .__dirtyPosition to true to override physics update
box.__dirtyPosition = true;
// Rotate box ourselves
box.rotation.set(0, Math.PI, 0);
box.__dirtyRotation = true;
The flags get reset to false after updating the scene again via the .simulate method.
Conclusion
It is basically useless to create a Physijs.Mesh yourself; use one of the provided primitives instead. It is just a wrapper around THREE.Mesh for Physijs and has no physical properties until modified properly by one of its children.
Also, when using a Physijs mesh, always set either the .__dirtyPosition or the .__dirtyRotation flag on the object to directly modify its position or rotation, respectively. Take a look at the code snippet above and here.
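Putting this together with the raycasting from the question, a minimal sketch (assuming raycaster and the objects array are already set up as in the question) could look like:

// After picking a Physijs mesh with the raycaster...
var intersects = raycaster.intersectObjects(objects);
if (intersects.length > 0) {
  var picked = intersects[0].object;
  // Move it manually, and flag the change so the simulation doesn't override it
  picked.position.set(0, 300, 0);
  picked.__dirtyPosition = true;
}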
My program creates a dynamic number of point-cloud objects with custom attributes that include the alpha value of each particle. This works fine; however, when the objects are nested within each other (say, spheres), the smaller (inner) ones are obscured by the bigger ones, even though their particles' alpha is set properly. When I reverse the order of adding the point-cloud objects to the scene, starting with the bigger ones and going down to the smaller ones, I can see the smaller ones through the bigger ones.
My question is whether there is a way to tell the renderer to update or recalculate the alpha values, or to re-render the smaller inner objects so that they show up?
I ran into the same problem as you did. I fixed it by calculating and setting the renderDepth for each mesh. For this you need the camera position and the center of your mesh.
You have probably already created meshes for each object. If you save all these meshes into an array, it's easier to calculate and set the renderDepth on these objects.
Here's an example of how I did it.
updateRenderDepthOnRooms(cameraPosition: THREE.Vector3): void {
  var rooms: Room[] = this.getAllRooms();
  rooms.forEach((room) => {
    var roomCenter = getCenter(room.mesh.geometry);
    var renderDepth = 0 - roomCenter.distanceToSquared(cameraPosition);
    room.mesh.renderDepth = renderDepth;
  });
}

function getCenter(geometry: THREE.Geometry): THREE.Vector3 {
  geometry.computeBoundingBox();
  var bb = geometry.boundingBox;
  var offset = new THREE.Vector3();
  offset.addVectors(bb.min, bb.max);
  offset.multiplyScalar(0.5);
  return offset;
}
So, to get the center of your object, you can take the geometry from your mesh and use the getCenter(..) function from my example. Then you calculate the renderDepth with the three.js function distanceToSquared(..) and set that renderDepth on your mesh.
That's it. Hope this will help you.
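Since the depths depend on the camera position, a natural place to recompute them is right before rendering each frame. A sketch, assuming the usual requestAnimationFrame loop and that the method above is reachable as updateRenderDepthOnRooms:

function animate() {
  requestAnimationFrame(animate);
  // Re-sort the nested transparent objects for the current camera position
  updateRenderDepthOnRooms(camera.position);
  renderer.render(scene, camera);
}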
I have a scene, meshes, and a target object.
When I set up
mesh.lookAt(object)
the mesh correctly faces the object.
How can I repeat this rotation of the mesh on another mesh, to force the other mesh to face the same direction (not the same object, but the same orientation as the first mesh has)?
How can I get the rotation coordinates of the first mesh?
How can I get these coordinates without having to create a mesh and call mesh.lookAt(object)? That is, how can I calculate the coordinates alone, without applying them to some object?
UPDATE:
The only possible solution is to create a new THREE.Object3D() and use object.lookAt(target). Then repeat the rotation for every object loaded later, like: new_object.rotation.set(object.rotation.x, object.rotation.y, object.rotation.z)
This way you create only one Object, not a lot of useless Vector3-s.
Do not use new_object.rotation = object.rotation. It is a functional solution, but the variables stay connected: a change of the object's rotation will update new_object.rotation too (the renderer updates all values each frame).
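A minimal sketch of that approach (assuming target is the object to face and new_object is a later-loaded object):

// One throwaway Object3D computes the orientation once
const helper = new THREE.Object3D();
helper.lookAt(target.position);

// Copy (do not assign!) the resulting rotation onto each later-loaded object
new_object.rotation.copy(helper.rotation);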
You can set the local rotation of the other meshes to the local rotation of the mesh facing in the correct direction. Copy the rotation rather than assigning it (see the asker's note above about the variables staying connected):
anyOtherMesh.rotation.copy(mesh.rotation);
What about a
lookAt( new THREE.Vector3( target.position.x, target.position.y, target.position.z ) )
?