Three.JS how to use ray casters with multiple cameras? - javascript

I'm hoping someone here has done this before. I have a ThreeJS scene, which is essentially a cube. The cube has 6 images, one on each of its faces, basically rendering a 360° panoramic photo.
Within our 3D space we have 6 cameras, each pointing in one of the directions forward, backward, left, right, up, and down. Basically we want to be able to project all of these views at once.
So, to do this I've made 6 cameras and added them to the scene. Within the scene we have clickable hotspots, and I want to be able to use a raycaster to register a click on a hotspot in ANY of the 6 camera views. But I can't get the raycaster to work on even one. This is the code I am using:
if (this._my_cameras && this._my_cameras[0]) {
    var mouse = {};
    mouse.x = (this._nextMouseX / this._my_cameras[0].viewport.width) * 2 - 1;
    mouse.y = -(this._nextMouseY / this._my_cameras[0].viewport.height) * 2 + 1;
    Object.assign(
        this._mouse,
        coords.normalizeScreenXY(mouse, this._my_cameras[0].viewport)
    );
    console.log("this._mouse: " + util.inspect(this._mouse));
    this._raycaster.setFromCamera(this._mouse, this._my_cameras[0]);
} else {
    this._raycaster.setFromCamera(this._mouse, this._camera);
}
let intersections = this._raycaster.intersectObjects(
    this._scene.children,
    true
);
this._nextMouseX / Y are the raw mouse coordinates on screen. My normalize function should normalize the mouse coordinates to the -1 to 1 range as needed. This all works fine if I have one camera taking up the whole view, but with 6 cameras I never get a raycaster intersection with my targets.
Does anyone have an idea on how to get raycasting or object picking working across multiple cameras?
Edit 1:
This is the code I am now trying to get to work, using the cameras' viewports for the raycasting:
for (var i = 0; i < this._igloo_cameras.length; i++) {
    this._mouse.x = (this._nextMouseX / this._igloo_cameras[i].viewport.w) * 2 - 1;
    this._mouse.y = -(this._nextMouseY / this._igloo_cameras[i].viewport.z) * 2 + 1;
    console.log("this._igloo_cameras[i].viewport: " + util.inspect(this._igloo_cameras[i].viewport));
    console.log("Igloo Camera #" + i + " this._mouse: " + util.inspect(this._mouse));
    this._raycaster.setFromCamera(this._mouse, this._igloo_cameras[i]);
    let intersections = this._raycaster.intersectObjects(
        this._scene.children,
        true
    );
    all_intersections.push(...intersections);
}
With this I get mouse values outside of the -1 to 1 range, and I still don't get accurate click locations/targets.
This is what I get in my console:
this._igloo_cameras[i].viewport: { x: 0, y: 685, z: 685, w: 685 }
Igloo Camera #0 this._mouse: { x: -0.5883211678832116, y: -1.3649635036496353, rawX: 141, rawY: 810 }
this._igloo_cameras[i].viewport: { x: 685, y: 685, z: 685, w: 685 }
Igloo Camera #1 this._mouse: { x: -0.5883211678832116, y: -1.3649635036496353, rawX: 141, rawY: 810 }
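For what it's worth, per-viewport normalization generally has to subtract the viewport's pixel offset (not just divide by its size) and flip Y relative to the whole canvas, since DOM mouse events use a top-left origin while GL viewports use a bottom-left one. A minimal sketch of that idea, assuming a plain `{x, y, width, height}` viewport in pixels (the helper name is hypothetical, not from the code above):

```javascript
// Hypothetical helper: convert a top-left-origin mouse position into
// normalized device coordinates (-1..1) for one camera's viewport.
// `viewport` is assumed to be {x, y, width, height} in pixels with a
// bottom-left origin, as renderer.setViewport() expects.
function mouseToViewportNDC(mouseX, mouseY, viewport, canvasHeight) {
  // Flip Y: DOM events use a top-left origin, GL viewports bottom-left.
  const glY = canvasHeight - mouseY;
  // Reject pointers outside this viewport so another camera can claim them.
  if (mouseX < viewport.x || mouseX > viewport.x + viewport.width) return null;
  if (glY < viewport.y || glY > viewport.y + viewport.height) return null;
  // Normalize relative to the viewport's own origin, not the full canvas.
  return {
    x: ((mouseX - viewport.x) / viewport.width) * 2 - 1,
    y: ((glY - viewport.y) / viewport.height) * 2 - 1,
  };
}
```

The returned pair can then be passed to raycaster.setFromCamera(ndc, camera) for the camera owning that viewport; a null return means the click belongs to a different camera's viewport.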

Related

THREE.JS & Reality Capture - Rotation issue photogrammetry reference camera's in a 3D space

Thanks for taking the time to review my post. I hope that this post will not only yield results for myself but perhaps help others too!
Introduction
Currently I am working on a project involving pointclouds generated with photogrammetry. It consists of photos combined with laser scans. The software used to make the pointcloud is Reality Capture. Besides the pointcloud export, one can export "Internal/External camera parameters", providing the ability to retrieve the photos that were used to make up a certain 3D point in the pointcloud. Reality Capture isn't that well documented online, and I have also posted in their forum regarding the camera variables; perhaps that can be of use in solving the issue at hand?
Only a few variables listed in the camera parameters file are relevant (for now) for referencing camera positioning: the filename, x, y, alt for the location, and heading, pitch, and roll for the rotation.
Currently the generated pointcloud is loaded into the browser-compatible THREE.JS viewer, after which the camera parameters .csv file is loaded, and for each known photo a 'PerspectiveCamera' is spawned with a green cube. An example is shown below:
The challenge
As a matter of fact, you might already know what the issue is based on the previous image (or the title of this post, of course ;P). Just in case you might not have spotted it: the direction of the cameras is all wrong. Let me visualize it with some shabby self-drawn vectors that roughly show in which direction each camera should be facing (marked in red) and how it is currently oriented (green).
Row 37, DJI_0176.JPG, is the rightmost camera with a red reference line; row 38 is DJI_0177.JPG, etc. The last picture (row 48, DJI_0189.JPG) corresponds with the leftmost image of the clustered images (I didn't draw the other two camera references within the image above, so I did not include the others).
When you copy the data below into an Excel sheet it should display correctly ^^
#name x y alt heading pitch roll f px py k1 k2 k3 k4 t1 t2
DJI_0174.JPG 3.116820957 -44.25690188 14.05258109 -26.86297007 66.43104338 1.912026354 30.35179628 7.25E-03 1.45E-03 -4.02E-03 -2.04E-02 3.94E-02 0 0 0
DJI_0175.JPG -5.22E-02 -46.97266554 14.18056658 -16.2033133 66.11532302 3.552072396 30.28063771 4.93E-03 4.21E-04 1.38E-02 -0.108013599 0.183136287 0 0 0
DJI_0176.JPG -3.056586953 -49.00754998 14.3474763 4.270483155 65.35247679 5.816970677 30.50596933 -5.05E-03 -3.53E-03 -4.94E-03 3.24E-02 -3.84E-02 0 0 0
DJI_0177.JPG -6.909437337 -50.15910066 14.38391206 19.4459053 64.26828897 6.685020944 30.6994734 -1.40E-02 4.72E-03 -5.33E-04 1.90E-02 -1.74E-02 0 0 0
DJI_0178.JPG -11.23696688 -50.36025313 14.56924433 19.19192622 64.40188316 6.265995184 30.7665397 -1.26E-02 2.41E-03 1.24E-04 -4.63E-03 2.84E-02 0 0 0
DJI_0179.JPG -16.04060554 -49.92320365 14.69721478 19.39979452 64.85507307 6.224929846 30.93772566 -1.19E-02 -4.31E-03 -1.27E-02 4.62E-02 -4.48E-02 0 0 0
DJI_0180.JPG -20.95614556 -49.22915437 14.92273203 20.39327092 65.02028543 6.164031482 30.99807237 -1.02E-02 -7.70E-03 1.44E-03 -2.22E-02 3.94E-02 0 0 0
DJI_0181.JPG -25.9335097 -48.45330177 15.37330388 34.24388008 64.82707628 6.979877709 31.3534556 -1.06E-02 -1.19E-02 -5.44E-03 2.39E-02 -2.38E-02 0 0 0
DJI_0182.JPG -30.40507957 -47.21269946 15.67804925 49.98858409 64.29238807 7.449650513 31.6699868 -8.75E-03 -1.31E-02 -4.57E-03 2.31E-02 2.68E-03 0 0 0
DJI_0183.JPG -34.64277285 -44.84034207 15.89229254 65.84203906 62.9109777 7.065942792 31.78292476 -8.39E-03 -2.94E-03 -1.40E-02 8.96E-02 -0.11801932 0 0 0
DJI_0184.JPG -39.17179024 -40.22577764 16.28164396 65.53938063 63.2592604 6.676581293 31.79546988 -9.81E-03 -8.13E-03 1.01E-02 -8.44E-02 0.179931606 0 0 0
DJI_0185.JPG -43.549378 -33.09364534 16.64130671 68.61427166 63.15205908 6.258411625 31.75339036 -9.78E-03 -7.12E-03 4.75E-03 -6.25E-02 0.1541638 0 0 0
DJI_0186.JPG -46.5381556 -24.2992233 17.2286956 74.42382577 63.75110346 6.279208736 31.88862443 -1.01E-02 -1.73E-02 1.02E-02 -6.15E-02 4.89E-02 0 0 0
DJI_0187.JPG -48.18737751 -14.67333218 17.85446854 79.54477952 63.0503902 5.980759013 31.69602914 -8.83E-03 -1.01E-02 -7.63E-03 -7.49E-03 2.71E-02 0 0 0
DJI_0188.JPG -48.48581505 -13.79840485 17.84756621 93.43316271 61.87561678 5.110113503 31.6671977 1.99E-03 -9.40E-04 2.40E-02 -0.180515731 0.32814456 0 0 0
DJI_0189.JPG -48.32815991 -13.88055437 17.77818573 106.3277582 60.87171036 4.039469869 31.50757712 2.84E-03 4.12E-03 8.54E-03 -1.32E-02 3.89E-02 0 0 0
Things tried so far
Something we discovered was that the exported model was mirrored from reality; however, this did not affect the placement of the camera references, as they aligned perfectly. We attempted to mirror the referenced cameras, the pointcloud, and the viewport camera, but this did not seem to fix the issue at hand (hence the camera.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));).
So far we have attempted to load Euler angles, set angles directly, or convert and apply a Quaternion, sadly without any good results. The camera reference file is being parsed with the following logic:
// Await the .csv file being parsed from the server
await new Promise((resolve) => {
    (file as Blob).text().then((csvStr) => {
        const rows = csvStr.split('\n');
        for (const row of rows) {
            const col = row.split(',');
            if (col.length > 1) {
                const suffixes = col[0].split('.');
                const extension = suffixes[suffixes.length - 1].toLowerCase();
                const validExtensions = ['jpeg', 'jpg', 'png'];
                if (!validExtensions.includes(extension)) {
                    continue;
                }
                // == Parameter index by .csv column names ==
                // 0: #name; 1: x; 2: y; 3: alt; 4: heading; 5: pitch; 6: roll; 7: f (focal);
                // == Non .csv param ==
                // 8: bool isRadianFormat default false
                this.createCamera(col[0], parseFloat(col[1]), parseFloat(col[2]), parseFloat(col[3]), parseFloat(col[4]), parseFloat(col[5]), parseFloat(col[6]), parseFloat(col[7]));
            }
        }
        resolve(true);
    });
});
}
Below you will find the code snippet for instantiating a camera with its position and rotation. I left some additional comments in to elaborate a bit more, and I left the commented-out code lines in as well to show what else we have been trying:
private createCamera(fileName: string, xPos: number, yPos: number, zPos: number, xDeg: number, yDeg: number, zDeg: number, f: number, isRadianFormat = false): void {
    // Convert to radians, as THREE.JS explicitly works in radians only
    const xRad = isRadianFormat ? xDeg : THREE.MathUtils.degToRad(xDeg);
    const yRad = isRadianFormat ? yDeg : THREE.MathUtils.degToRad(yDeg);
    const zRad = isRadianFormat ? zDeg : THREE.MathUtils.degToRad(zDeg);
    // Create camera reference and extract frustum
    // Statically set the FOV and aspect ratio; near is set to 0.1 by default and far is dynamically set whenever a point is clicked in 3D space.
    const camera = new THREE.PerspectiveCamera(67, 5280 / 2970, 0.1, 1);
    const pos = new THREE.Vector3(xPos, yPos, zPos); // Reality Capture: z = up; THREE: y = up
    /* ===
    To set an Euler angle one must provide the heading (x), pitch (y) and roll (z), as well as the order (fourth argument, 'XYZ') in which the rotations will be applied.
    As a last resort we even tried switching the xRad, yRad and zRad variables, as well as switching the rotation orders.
    Possible orders: XYZ, XZY, YZX, YXZ, ZYX, ZXY
    === */
    const rot = new THREE.Euler(xRad, yRad, zRad, 'XYZ');
    //camera.setRotationFromAxisAngle(new THREE.Vector3(0,))
    //camera.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));
    // const rot = new THREE.Quaternion();
    // rot.setFromAxisAngle(new THREE.Vector3(1, 0, 0), zRad);
    // rot.setFromAxisAngle(new THREE.Vector3(0, 1, 0), xRad);
    // rot.setFromAxisAngle(new THREE.Vector3(0, 0, 1), yRad);
    // XYZ
    // === Update camera frustum ===
    camera.position.copy(pos);
    // camera.applyQuaternion(rot);
    camera.rotation.copy(rot);
    camera.setRotationFromEuler(rot);
    camera.updateProjectionMatrix(); // TODO: Assert whether projection update is required here
    /* ===
    The camera.applyMatrix4 call below was an attempt at mirroring several aspects of the 3D viewer.
    We tried mirroring each individual photo camera position, the pointcloud itself, and the viewport camera, both separately and together. It made no difference however.
    === */
    //camera.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));
    // Instantiate CameraPosition instance and push to array
    const photo: PhotoPosition = {
        file: fileName,
        camera,
        position: pos,
        rotation: rot,
        focal: f,
        width: 5120, // Statically set for now
        height: 5120, // Statically set for now
    };
    this.photos.push(photo);
}
The cameras created in the snippet above are then grabbed by the next piece of code, which passes them to the camera manager and draws a CameraHelper for each (displayed in both 3D viewer pictures above). It is written within an async function that awaits the csv file being loaded before proceeding to initialize the cameras.
private initializeCameraPoses(url: string, csvLoader: CSVLoader) {
    const absoluteUrl = url + '\\references.csv';
    (async (scene, csvLoader, url, renderer) => {
        await csvLoader.init(url);
        const photos = csvLoader.getPhotos(); // The cameras created by the createCamera() method
        this.inspectionRenderer = new InspectionRenderer(scene); // InspectionRenderer manages all further camera operations
        this.inspectionRenderer.populateCameras(photos);
        for (const photoData of photos) {
            // Draw the green cube
            const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
            const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
            const cube = new THREE.Mesh(geometry, material);
            scene.add(cube);
            cube.position.copy(photoData.position);
            photoData.camera.updateProjectionMatrix();
            // Draws the yellow camera viewport helper in the scene
            const helper = new CameraHelper(photoData.camera);
            renderer.render(scene, photoData.camera);
            scene.add(helper);
        }
    })(this.scene, csvLoader, absoluteUrl, this.renderer);
}
Marquizzo's code snippet
The code snippet Marquizzo posted below seems to bring us a lot closer to a solution. The cameras now seem to be oriented in the correct direction; however, the pitch seems to be a little off somehow. Below I will include the source image of DJI_0189.JPG. Note that for this example the FOV is currently not being set, as it looks chaotic when a camera helper is rendered for every camera position. For this example I have rendered only the DJI_0189 camera helper.
The edit @Marquizzo provided, inverting the pitch (const rotX = deg2rad(photo.pitch * -1);), results in the midpoint intersection always being slightly lower than expected:
When the pitch is adjusted to const rotX = deg2rad(photo.pitch * -.5);, you'll see that the midpoint intersection is closer to that of the source image:
Somehow I think that a solution is within reach and that in the end it'll come down to some very small detail that has been overlooked. I'm really looking forward to seeing a reply. If something is still unclear, please say so and I'll provide the necessary details ^^
Thanks for reading this post so far!
At first glance, I see three possibilities:
It's hard to see where the issue is without seeing how you're using the createCamera() method. You could be swapping pitch with heading or something like that. In Three.js, heading is rotation around the Y-axis, pitch around the X-axis, and roll around the Z-axis.
Secondly, do you know in what order the heading, pitch, roll measurements were taken by your sensor? That will affect the way in which you initiate your THREE.Euler(xRad, yRad, zRad, 'XYZ'), since the order in which to apply rotations could also be 'YZX', 'ZXY', 'XZY', 'YXZ' or 'ZYX'.
Finally, you have to think "What does heading: 0 mean to the sensor?" It could mean different things between real-world and Three.js coordinate system. A camera with no rotation in Three.js is looking straight down towards -Z axis, but your sensor might have it pointing towards +Z, or +X, etc.
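To see concretely why rotation order matters, here is a tiny dependency-free illustration (not from the original answer): applying the same two 90° rotations in a different order sends a camera's default -Z view direction to entirely different places.

```javascript
// Rotate a vector [x, y, z] about the X or Y axis by angle a (radians, right-handed).
function rotX(v, a) {
  const [x, y, z] = v;
  return [x, y * Math.cos(a) - z * Math.sin(a), y * Math.sin(a) + z * Math.cos(a)];
}
function rotY(v, a) {
  const [x, y, z] = v;
  return [x * Math.cos(a) + z * Math.sin(a), y, -x * Math.sin(a) + z * Math.cos(a)];
}

const forward = [0, 0, -1]; // a three.js camera looks down -Z by default
const a = Math.PI / 2;

// X first, then Y: the view direction ends up pointing along +Y
const xThenY = rotY(rotX(forward, a), a);
// Y first, then X: the view direction ends up pointing along -X
const yThenX = rotX(rotY(forward, a), a);
```

This is exactly why trying all six order strings blindly is painful: only the order matching the sensor's convention will reproduce its orientation.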
Edit:
I added a demo below; I think this is what you needed, based on the screenshots. Notice I multiplied pitch by -1 so the cameras "look down", and added 180 to the heading so they're pointing in the right... heading.
const DATA = [
{name: "DJI_0174.JPG", x: 3.116820957, y: -44.25690188, alt: 14.05258109, heading: -26.86297007, pitch: 66.43104338, roll: 1.912026354},
{name: "DJI_0175.JPG", x: -5.22E-02, y: -46.97266554, alt: 14.18056658, heading: -16.2033133, pitch: 66.11532302, roll: 3.552072396},
{name: "DJI_0176.JPG", x: -3.056586953, y: -49.00754998, alt: 14.3474763, heading: 4.270483155, pitch: 65.35247679, roll: 5.816970677},
{name: "DJI_0177.JPG", x: -6.909437337, y: -50.15910066, alt: 14.38391206, heading: 19.4459053, pitch: 64.26828897, roll: 6.685020944},
{name: "DJI_0178.JPG", x: -11.23696688, y: -50.36025313, alt: 14.56924433, heading: 19.19192622, pitch: 64.40188316, roll: 6.265995184},
{name: "DJI_0179.JPG", x: -16.04060554, y: -49.92320365, alt: 14.69721478, heading: 19.39979452, pitch: 64.85507307, roll: 6.224929846},
{name: "DJI_0180.JPG", x: -20.95614556, y: -49.22915437, alt: 14.92273203, heading: 20.39327092, pitch: 65.02028543, roll: 6.164031482},
{name: "DJI_0181.JPG", x: -25.9335097, y: -48.45330177, alt: 15.37330388, heading: 34.24388008, pitch: 64.82707628, roll: 6.979877709},
{name: "DJI_0182.JPG", x: -30.40507957, y: -47.21269946, alt: 15.67804925, heading: 49.98858409, pitch: 64.29238807, roll: 7.449650513},
{name: "DJI_0183.JPG", x: -34.64277285, y: -44.84034207, alt: 15.89229254, heading: 65.84203906, pitch: 62.9109777, roll: 7.065942792},
{name: "DJI_0184.JPG", x: -39.17179024, y: -40.22577764, alt: 16.28164396, heading: 65.53938063, pitch: 63.2592604, roll: 6.676581293},
{name: "DJI_0185.JPG", x: -43.549378, y: -33.09364534, alt: 16.64130671, heading: 68.61427166, pitch: 63.15205908, roll: 6.258411625},
{name: "DJI_0186.JPG", x: -46.5381556, y: -24.2992233, alt: 17.2286956, heading: 74.42382577, pitch: 63.75110346, roll: 6.279208736},
{name: "DJI_0187.JPG", x: -48.18737751, y: -14.67333218, alt: 17.85446854, heading: 79.54477952, pitch: 63.0503902, roll: 5.980759013},
{name: "DJI_0188.JPG", x: -48.48581505, y: -13.79840485, alt: 17.84756621, heading: 93.43316271, pitch: 61.87561678, roll: 5.110113503},
{name: "DJI_0189.JPG", x: -48.32815991, y: -13.88055437, alt: 17.77818573, heading: 106.3277582, pitch: 60.87171036, roll: 4.039469869},
];
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
    45,
    window.innerWidth / window.innerHeight,
    1,
    1000
);
camera.position.z = 100;
const renderer = new THREE.WebGLRenderer({
    antialias: true,
    canvas: document.querySelector("#canvas")
});
renderer.setSize(window.innerWidth, window.innerHeight);
const controls = new THREE.OrbitControls(camera, renderer.domElement);

// Helpers
const axesHelper = new THREE.AxesHelper(20);
scene.add(axesHelper);
const plane = new THREE.Plane(new THREE.Vector3(0, 1, 0), 0);
const planeHelper = new THREE.PlaneHelper(plane, 50, 0xffff00);
scene.add(planeHelper);

let deg2rad = THREE.MathUtils.degToRad;

function createCam(photo) {
    let tempCam = new THREE.PerspectiveCamera(10, 2.0, 1, 30);
    // Altitude is actually the y-axis,
    // "y" is actually the z-axis
    tempCam.position.set(photo.x, photo.alt, photo.y);
    // Modify pitch & heading so they match Three.js coordinates
    const rotX = deg2rad(photo.pitch * -1);
    const rotY = deg2rad(photo.heading + 180);
    const rotZ = deg2rad(photo.roll);
    tempCam.rotation.set(rotX, rotY, rotZ, "YXZ");
    let helper = new THREE.CameraHelper(tempCam);
    scene.add(tempCam);
    scene.add(helper);
}

for (let i = 0; i < DATA.length; i++) {
    createCam(DATA[i]);
}

function animate() {
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}
animate();
html, body { margin:0; padding:0;}
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js"></script><script src="https://rawgit.com/mrdoob/three.js/dev/examples/js/controls/OrbitControls.js"></script>
<canvas id="canvas"></canvas>

Problem with click corner of fabricjs using threejs raycast

I'm trying to recreate this app, and it is currently working now. But I can't click the corner of the text perfectly; it always needs to be offset.
https://jsfiddle.net/naonvl/ecdxfkbm/3/
Right now it's hard to scale the text. I think the getRealPosition is not correct, so the mouse X and Y are also not precise.
Does anyone know how to fix this?
The problem is I set getIntersects like this:
var getIntersects = function (point, objects) {
    mouse.set(point.x * 2 - 0.97, -(point.y * 2) + 0.97);
    raycaster.setFromCamera(mouse, camera);
    return raycaster.intersectObjects(objects);
};
compared to the three.js example here: https://threejs.org/examples/#webgl_raycast_texture
function getIntersects(point, objects) {
    mouse.set((point.x * 2) - 1, -(point.y * 2) + 1);
    raycaster.setFromCamera(mouse, camera);
    return raycaster.intersectObjects(objects, false);
}
I miscalculated the mouse.set there. I changed the calculation and tweaked the X and Y positions of the mouse by adding 3 to each of them, and it raycasted perfectly at the center of the mouse pointer.
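Rather than tuning magic constants like 0.97 or +3, the 0..1 point is usually derived from the canvas's on-screen bounding rect, which accounts for any page offset automatically. A minimal sketch (the helper name is hypothetical):

```javascript
// Convert a mouse event into a 0..1 point relative to the canvas,
// regardless of where the canvas sits on the page.
function getRelativePoint(event, canvas) {
  const rect = canvas.getBoundingClientRect();
  return {
    x: (event.clientX - rect.left) / rect.width,
    y: (event.clientY - rect.top) / rect.height,
  };
}
```

The result can be fed straight into a getIntersects like the three.js example's, which expects exactly this 0..1 convention.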

a-frame entity-positioning and rotation

Sadly I am not familiar with the positioning and rotation of entities in 3D space, so I want to create a function that positions an entity with easier-to-understand parameters, like:
createEntity(vertical, horizontal, distance)
for
<a-entity position="-2 0 -2" rotation="-10 30 0"></a-entity>
where vertical and horizontal are float values between 0 and 360, and distance is a float where 0 is position "0 0 0" and the higher the value, the farther away the entity goes.
The rotation should face the camera at init.
Are there helper functions for these calculations?
It sounds like you want to use the Spherical coordinate system to position the elements, and the look-at component to rotate the objects towards the camera.
I'm not aware of any helpers, but it's quite easy to do this with a custom component, like this:
// Register the component
AFRAME.registerComponent('fromspherical', {
    // We will use two angles and a radius provided by the user
    schema: {
        fi: {},
        theta: {},
        r: {},
    },
    init: function () {
        // Convert to radians
        let fi = this.data.fi * Math.PI / 180;
        let theta = this.data.theta * Math.PI / 180;
        // The 'horizontal' axis is x. The 'vertical' axis is y.
        // The calculations below are straight from the wiki site.
        let z = (-1) * Math.sin(theta) * Math.cos(fi) * this.data.r;
        let x = Math.sin(theta) * Math.sin(fi) * this.data.r;
        let y = Math.cos(theta) * this.data.r;
        // Position the element using the provided data
        this.el.setAttribute('position', {
            x: x,
            y: y,
            z: z
        });
        // Rotate the element towards the camera
        this.el.setAttribute('look-at', '[camera]');
    }
});
Check it out in this fiddle.
The calculations are in a different order than on the wiki website. This is because in A-Frame the XYZ space looks like this:
The camera is looking along the negative Z axis upon default initialization.
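Usage would then look something like this (a sketch; the angle and radius values are arbitrary examples). Note that look-at is not part of core A-Frame and needs to be included separately:

```html
<a-entity fromspherical="fi: 45; theta: 60; r: 3" geometry="primitive: box"></a-entity>
```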

Rotated elements collision / overlapping detection

I need some help with detecting collision/overlap of 2 elements, one of which is dynamically rotating.
I'm making a game where the user controls a sword which rotates and moves depending on the mouse cursor position; the image used for it is 5 pixels wide and 130 long, and rotation is possible over a full 360 degrees. Targets will spawn across the page which the user needs to slash with the sword, so I need to detect when the image slices through one of the target divs. I can't seem to get a working detection going because of the rotation of the image.
Is there anyone that can point me in the right direction / has a solution for detections such as these?
Example of what I have so far here, no proper detection yet
http://tinyurl.com/ovfhfwx
I started off with something like the following:
var square = {x: 500, y: 500, width: 25, height: 25};
var intervalId = setInterval(function () {
    var lightsaber = {
        x: getOffset(document.getElementById('lightsaber')).left,
        y: getOffset(document.getElementById('lightsaber')).top,
        width: $("#lightsaber").width(),
        height: $("#lightsaber").height()
    };
    if (lightsaber.x < square.x + square.width &&
        lightsaber.x + lightsaber.width > square.x &&
        lightsaber.y < square.y + square.height &&
        lightsaber.height + lightsaber.y > square.y) {
        console.log('collision');
        //clearInterval(intervalId);
    }
}, 33);
This obviously doesn't rotate the collision box and just works as if the sword were always standing straight. I was hoping there would be a similar solution, but one where the collision box does rotate.
Thanks!
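One common direction (a sketch, not from the original post) is to transform points into the rotated element's local frame and reuse an axis-aligned test there: inverse-rotate the point about the sword's center, then check it against the unrotated bounds. Sampling a few points along the blade against each target this way gives a workable slice test. The function names here are hypothetical:

```javascript
// Rotate a point into the local frame of a rect rotated by angleRad
// about its center (cx, cy): translate to the center, then rotate back.
function pointToLocal(px, py, cx, cy, angleRad) {
  const dx = px - cx, dy = py - cy;
  const cos = Math.cos(-angleRad), sin = Math.sin(-angleRad);
  return { x: dx * cos - dy * sin, y: dx * sin + dy * cos };
}

// rect: {x, y, width, height} with x/y the top-left BEFORE rotation.
// Returns true if the point lies inside the rect after it is rotated
// by angleRad about its own center.
function pointInRotatedRect(px, py, rect, angleRad) {
  const cx = rect.x + rect.width / 2, cy = rect.y + rect.height / 2;
  const p = pointToLocal(px, py, cx, cy, angleRad);
  return Math.abs(p.x) <= rect.width / 2 && Math.abs(p.y) <= rect.height / 2;
}
```

For full rectangle-vs-rectangle overlap with arbitrary rotation, the general tool is the separating axis theorem, but point sampling along a 5-pixel-wide blade is usually close enough for a game like this.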

Three JS - How to scale a moving object

I'm trying to scale a moving object in THREE; however, when I scale it, it scales my object and also moves it to another position.
I use the following to set the scale of the object:
// Initiates the scale transition
function doTriangleScale(intersection) {
    var tween = new TWEEN.Tween(intersection.object.parent.scale)
        .to({
            x: scaleSize,
            y: scaleSize,
            z: scaleSize,
        }, scaleEase);
    tween.start();
    // Should check whether the user is still on the object in question; no time for this now
    setTimeout(function () {
        doTriangleScaleRevert(intersection);
    }, 2000);
}

// Triggers the scale revert to default
function doTriangleScaleRevert(intersection) {
    var tween = new TWEEN.Tween(intersection.object.parent.scale)
        .to({
            x: 1,
            y: 1,
            z: 1
        }, scaleEase);
    tween.start();
}
This works if the objects are not moving about; however, my objects ARE moving about, via the following code in the render loop:
scene.traverse(function (e) {
    if (e instanceof THREE.Mesh && e.name != "pyramid") {
        e.rotation.x += rotationSpeed;
        e.rotation.y += rotationSpeed;
        e.rotation.z += rotationSpeed;
        if (triangleFloatLeft) {
            e.position.x += 0.01;
        } else {
            e.position.x -= 0.01;
        }
    }
});
I'm looking for a solution that will scale the objects from their center.
Thanks!
Objects scale around their origin. For example, if you have a cube:
var geo = new THREE.BoxGeometry(1, 1, 1);
If you scale a mesh with geometry geo, it will grow around its center. However if you move the geometry's origin:
geo.applyMatrix(new THREE.Matrix4().makeTranslation(0, 0.5, 0));
Now it will grow upwards, because the origin was moved to the box's floor.
The origin of your object is likely not at its center. You can move the origin either in the model itself (if you imported it) or in Three.js as above.
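As a sketch of the same idea with plain arrays (in three.js itself, geometry.center() performs this recentering for you): compute the bounding box of the vertices, then shift every vertex by the box's center so the origin ends up in the middle.

```javascript
// Translate vertex positions so the bounding-box center sits at the
// origin; scaling the object afterwards grows it symmetrically.
// vertices: flat array [x0, y0, z0, x1, y1, z1, ...]
function recenter(vertices) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (let i = 0; i < vertices.length; i += 3) {
    for (let k = 0; k < 3; k++) {
      min[k] = Math.min(min[k], vertices[i + k]);
      max[k] = Math.max(max[k], vertices[i + k]);
    }
  }
  const center = min.map((m, k) => (m + max[k]) / 2);
  return vertices.map((v, i) => v - center[i % 3]);
}
```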
