Customize PlaneGeometry edges for each segment in ThreeJS

Currently PlaneGeometry has options to change the number of width and height segments, but this has no effect on the edge layout. Each segment currently has indexed positions that create an 'N' shape when viewing the geometry in wireframe mode:
Indexes currently are:
0 = South West
1 = North West
2 = South East
3 = North East
This gives us an 'N' shape for each segment in wireframe mode, but instead of this 'N' shape I would like to create an 'X' shape, with edges for every segment. I'm currently using planes to create different heights, and an 'X' shape would make the result look less angular (screenshots below).
I think all required vertices already exist, but how is it possible to get an extra edge between point 0 and 3 for each segment?
I've tried looking for the answer online but couldn't find a clear one, and many articles predate version R125, which made breaking changes to geometries. Currently I'm using version R135.
I'm guessing I will need to create a custom BufferGeometry, but I'm unsure how to execute this properly without losing too much performance.
All red and blue lines are currently existing edges in wireframe mode.
All green lines are desired and currently missing. What would be the best way to add them without losing performance?
Thanks in advance!

It took me a fair number of attempts, but in the end it didn't turn out to be too hard. I've created this custom plane by building a custom BufferGeometry, although it's probably about 3x heavier to use:
At the moment it holds 36 (12 * 3) position values per tile segment, whereas the default PlaneGeometry holds 12 (4 * 3). I'm not sure whether 3x more positions automatically means 3x more performance usage, but it definitely uses more than the default PlaneGeometry.
Here are the results (changes in height smoothen out prettier):
Code to create the geometry:
const geometry = new THREE.BufferGeometry();

const vertices = new Float32Array([
  // (North Face)
  .5, -.5, 0, // 0: Center
  1, -1, 0,   // 1: NE
  1, 0, 0,    // 2: NW

  // (East Face)
  .5, -.5, 0, // 3: Center
  0, -1, 0,   // 4: SE
  1, -1, 0,   // 5: NE

  // (South Face)
  .5, -.5, 0, // 6: Center
  0, 0, 0,    // 7: SW
  0, -1, 0,   // 8: SE

  // (West Face)
  .5, -.5, 0, // 9: Center
  1, 0, 0,    // 10: NW
  0, 0, 0,    // 11: SW
]);

geometry.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );

const material = new THREE.MeshBasicMaterial( { color: 0xffffff, wireframe: true } );
const mesh = new THREE.Mesh( geometry, material );
mesh.rotation.x = - Math.PI / 2;
scene.add( mesh );
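As a side note (my own unbenchmarked sketch, not part of the answer above): an indexed BufferGeometry can describe the same 'X'-shaped tile with only 5 unique vertices (the 4 corners plus the center), letting the 4 triangles share them through an index buffer, which keeps the position count closer to that of the default PlaneGeometry:
// Sketch: the same 'X'-shaped tile, but indexed so each vertex is stored only once
// (5 vertices = 15 position floats instead of 36).
const geometry = new THREE.BufferGeometry();
const vertices = new Float32Array([
  0, 0, 0,    // 0: SW
  1, 0, 0,    // 1: NW
  0, -1, 0,   // 2: SE
  1, -1, 0,   // 3: NE
  .5, -.5, 0, // 4: Center
]);
// Four triangles, all sharing the center vertex (same winding as the answer above).
geometry.setIndex([
  4, 3, 1, // north face
  4, 2, 3, // east face
  4, 0, 2, // south face
  4, 1, 0, // west face
]);
geometry.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );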


THREE.JS & Reality Capture - Rotation issue photogrammetry reference camera's in a 3D space

Thanks for taking the time to review my post. I hope that this post will not only yield results for myself but perhaps help others too!
Introduction
Currently I am working on a project involving pointclouds generated with photogrammetry, combining photos with laser scans. The software used to make the pointcloud is Reality Capture. Besides the pointcloud export, one can export "internal/external camera parameters", which makes it possible to retrieve the photos that were used to make up a certain 3D point in the pointcloud. Reality Capture isn't that well documented online, and I have also posted in their forum regarding the camera variables; perhaps it can be of use in solving the issue at hand?
Only a few variables listed in the camera parameters file are relevant (for now) for referencing camera positioning: the filename, x, y and alt for the location, and heading, pitch and roll for the rotation.
Currently the generated pointcloud is loaded into the browser-compatible THREE.JS viewer, after which the camera parameters .csv file is loaded and, for each known photo, a 'PerspectiveCamera' is spawned with a green cube. An example is shown below:
The challenge
As a matter of fact you might already know what the issue is based on the previous image (or the title of this post, of course ;P). Just in case you haven't spotted it: the direction of the cameras is all wrong. Let me visualize it with some shabby self-drawn vectors that roughly show in which direction each camera should be facing (marked in red) and how it is currently oriented (green).
Row 37, DJI_0176.jpg, is the rightmost camera with a red reference line; row 38 is DJI_0177, etc. The last picture (row 48, DJI_0189.jpg) corresponds with the leftmost image of the clustered images (I didn't draw the other two camera references in the image above).
When you copy the data below into an Excel sheet it should display correctly ^^
#name x y alt heading pitch roll f px py k1 k2 k3 k4 t1 t2
DJI_0174.JPG 3.116820957 -44.25690188 14.05258109 -26.86297007 66.43104338 1.912026354 30.35179628 7.25E-03 1.45E-03 -4.02E-03 -2.04E-02 3.94E-02 0 0 0
DJI_0175.JPG -5.22E-02 -46.97266554 14.18056658 -16.2033133 66.11532302 3.552072396 30.28063771 4.93E-03 4.21E-04 1.38E-02 -0.108013599 0.183136287 0 0 0
DJI_0176.JPG -3.056586953 -49.00754998 14.3474763 4.270483155 65.35247679 5.816970677 30.50596933 -5.05E-03 -3.53E-03 -4.94E-03 3.24E-02 -3.84E-02 0 0 0
DJI_0177.JPG -6.909437337 -50.15910066 14.38391206 19.4459053 64.26828897 6.685020944 30.6994734 -1.40E-02 4.72E-03 -5.33E-04 1.90E-02 -1.74E-02 0 0 0
DJI_0178.JPG -11.23696688 -50.36025313 14.56924433 19.19192622 64.40188316 6.265995184 30.7665397 -1.26E-02 2.41E-03 1.24E-04 -4.63E-03 2.84E-02 0 0 0
DJI_0179.JPG -16.04060554 -49.92320365 14.69721478 19.39979452 64.85507307 6.224929846 30.93772566 -1.19E-02 -4.31E-03 -1.27E-02 4.62E-02 -4.48E-02 0 0 0
DJI_0180.JPG -20.95614556 -49.22915437 14.92273203 20.39327092 65.02028543 6.164031482 30.99807237 -1.02E-02 -7.70E-03 1.44E-03 -2.22E-02 3.94E-02 0 0 0
DJI_0181.JPG -25.9335097 -48.45330177 15.37330388 34.24388008 64.82707628 6.979877709 31.3534556 -1.06E-02 -1.19E-02 -5.44E-03 2.39E-02 -2.38E-02 0 0 0
DJI_0182.JPG -30.40507957 -47.21269946 15.67804925 49.98858409 64.29238807 7.449650513 31.6699868 -8.75E-03 -1.31E-02 -4.57E-03 2.31E-02 2.68E-03 0 0 0
DJI_0183.JPG -34.64277285 -44.84034207 15.89229254 65.84203906 62.9109777 7.065942792 31.78292476 -8.39E-03 -2.94E-03 -1.40E-02 8.96E-02 -0.11801932 0 0 0
DJI_0184.JPG -39.17179024 -40.22577764 16.28164396 65.53938063 63.2592604 6.676581293 31.79546988 -9.81E-03 -8.13E-03 1.01E-02 -8.44E-02 0.179931606 0 0 0
DJI_0185.JPG -43.549378 -33.09364534 16.64130671 68.61427166 63.15205908 6.258411625 31.75339036 -9.78E-03 -7.12E-03 4.75E-03 -6.25E-02 0.1541638 0 0 0
DJI_0186.JPG -46.5381556 -24.2992233 17.2286956 74.42382577 63.75110346 6.279208736 31.88862443 -1.01E-02 -1.73E-02 1.02E-02 -6.15E-02 4.89E-02 0 0 0
DJI_0187.JPG -48.18737751 -14.67333218 17.85446854 79.54477952 63.0503902 5.980759013 31.69602914 -8.83E-03 -1.01E-02 -7.63E-03 -7.49E-03 2.71E-02 0 0 0
DJI_0188.JPG -48.48581505 -13.79840485 17.84756621 93.43316271 61.87561678 5.110113503 31.6671977 1.99E-03 -9.40E-04 2.40E-02 -0.180515731 0.32814456 0 0 0
DJI_0189.JPG -48.32815991 -13.88055437 17.77818573 106.3277582 60.87171036 4.039469869 31.50757712 2.84E-03 4.12E-03 8.54E-03 -1.32E-02 3.89E-02 0 0 0
Things tried so far
Something we discovered was that the exported model was mirrored compared to reality; this did not affect the placement of the camera references, however, as they aligned perfectly. We attempted to mirror the referenced cameras, the pointcloud and the viewport camera, but this did not seem to fix the issue at hand (hence the camera.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));).
So far we have attempted to load Euler angles, set the angles directly, and convert and apply a Quaternion, sadly without any good results. The camera reference file is being parsed with the following logic:
// Await the .csv file being parsed from the server
await new Promise((resolve) => {
  (file as Blob).text().then((csvStr) => {
    const rows = csvStr.split('\n');
    for (const row of rows) {
      const col = row.split(',');
      if (col.length > 1) {
        const suffixes = col[0].split('.');
        const extension = suffixes[suffixes.length - 1].toLowerCase();
        const validExtensions = ['jpeg', 'jpg', 'png'];
        if (!validExtensions.includes(extension)) {
          continue;
        }
        // == Parameter index by .csv column names ==
        // 0: #name; 1: x; 2: y; 3: alt; 4: heading; 5: pitch; 6: roll; 7: f (focal);
        // == Non .csv param ==
        // 8: bool isRadianFormat, default false
        this.createCamera(col[0], parseFloat(col[1]), parseFloat(col[2]), parseFloat(col[3]), parseFloat(col[4]), parseFloat(col[5]), parseFloat(col[6]), parseFloat(col[7]));
      }
    }
    resolve(true);
  });
});
Below you will find the code snippet for instantiating a camera with its position and rotation. I left some additional comments in to elaborate a bit more, and I kept the commented-out lines so you can see what else we have been trying:
private createCamera(fileName: string, xPos: number, yPos: number, zPos: number, xDeg: number, yDeg: number, zDeg: number, f: number, isRadianFormat = false): void {
  // Convert to radians, as THREE.JS works exclusively in radians
  const xRad = isRadianFormat ? xDeg : THREE.MathUtils.degToRad(xDeg);
  const yRad = isRadianFormat ? yDeg : THREE.MathUtils.degToRad(yDeg);
  const zRad = isRadianFormat ? zDeg : THREE.MathUtils.degToRad(zDeg);

  // Create camera reference and extract frustum
  // Statically set the FOV and aspect ratio; near is set to 0.1 by default and far is dynamically set whenever a point is clicked in the 3D space.
  const camera = new THREE.PerspectiveCamera(67, 5280 / 2970, 0.1, 1);
  const pos = new THREE.Vector3(xPos, yPos, zPos); // Reality Capture: z = up; THREE: y = up

  /* ===
  To set an Euler angle one must provide the heading (x), pitch (y) and roll (z) as well as the order (the fourth argument, 'XYZ') in which the rotations will be applied.
  As a last resort we even tried swapping the xRad, yRad and zRad variables as well as switching the rotation orders.
  Possible orders: XYZ, XZY, YZX, YXZ, ZYX, ZXY
  === */
  const rot = new THREE.Euler(xRad, yRad, zRad, 'XYZ');
  // camera.setRotationFromAxisAngle(new THREE.Vector3(0,))
  // camera.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));
  // const rot = new THREE.Quaternion();
  // rot.setFromAxisAngle(new THREE.Vector3(1, 0, 0), zRad);
  // rot.setFromAxisAngle(new THREE.Vector3(0, 1, 0), xRad);
  // rot.setFromAxisAngle(new THREE.Vector3(0, 0, 1), yRad);
  // XYZ

  // === Update camera frustum ===
  camera.position.copy(pos);
  // camera.applyQuaternion(rot);
  camera.rotation.copy(rot);
  camera.setRotationFromEuler(rot);
  camera.updateProjectionMatrix(); // TODO: Assert whether projection update is required here

  /* ===
  The camera.applyMatrix4 call below was an attempt at rotating several aspects of the 3D viewer.
  We tried it on each individual photo camera position, on the pointcloud itself and on the viewport camera, both separately and together. It made no difference however.
  === */
  // camera.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));

  // Instantiate CameraPosition instance and push to array
  const photo: PhotoPosition = {
    file: fileName,
    camera,
    position: pos,
    rotation: rot,
    focal: f,
    width: 5120, // Statically set for now
    height: 5120, // Statically set for now
  };
  this.photos.push(photo);
}
The cameras created in the snippet above are then grabbed by the next piece of code which passes the cameras to the camera manager and draws a CameraHelper (displayed in both 3D viewer pictures above). It is written within an async function awaiting the csv file to be loaded before proceeding to initialize the cameras.
private initializeCameraPoses(url: string, csvLoader: CSVLoader) {
  const absoluteUrl = url + '\\references.csv';
  (async (scene, csvLoader, url, renderer) => {
    await csvLoader.init(url);
    const photos = csvLoader.getPhotos(); // The cameras created by the createCamera() method
    this.inspectionRenderer = new InspectionRenderer(scene); // InspectionRenderer manages all further camera operations
    this.inspectionRenderer.populateCameras(photos);
    for (const photoData of photos) {
      // Draw the green cube
      const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
      const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
      const cube = new THREE.Mesh(geometry, material);
      scene.add(cube);
      cube.position.copy(photoData.position);
      photoData.camera.updateProjectionMatrix();
      // Draws the yellow camera viewport to the scene
      const helper = new CameraHelper(photoData.camera);
      renderer.render(scene, photoData.camera);
      scene.add(helper);
    }
  })(this.scene, csvLoader, absoluteUrl, this.renderer);
}
Marquizzo's code snippet
The code snippet Marquizzo posted below seems to bring us a lot closer to a solution. The cameras seem to be oriented in the correct direction. However, the pitch seems to be a little off somehow. Below I will include the source image of DJI_0189.jpg. Note that for this example the FOV is not being set, as it looks chaotic when a camera helper is rendered for every camera position; here I have rendered only the DJI_0189 camera helper.
The edit #Marquizzo provided, inverting the pitch (const rotX = deg2rad(photo.pitch * -1);), results in the midpoint intersection always being slightly lower than expected:
When the pitch is adjusted to const rotX = deg2rad(photo.pitch * -.5); you'll see that the midpoint intersection is closer to that of the source image:
Somehow I think that a solution is within reach and that in the end it'll come down to some very small detail that has been overlooked. I'm really looking forward to seeing a reply. If something is still unclear please say so and I'll provide the necessary details ^^
Thanks for reading this post so far!
At first glance, I see three possibilities:
It's hard to see where the issue is without seeing how you're calling the createCamera() method. You could be swapping pitch with heading or something like that. In Three.js, heading is rotation around the Y-axis, pitch around the X-axis, and roll around the Z-axis.
Secondly, do you know in what order the heading, pitch, roll measurements were taken by your sensor? That will affect the way in which you initiate your THREE.Euler(xRad, yRad, zRad, 'XYZ'), since the order in which to apply rotations could also be 'YZX', 'ZXY', 'XZY', 'YXZ' or 'ZYX'.
Finally, you have to think "What does heading: 0 mean to the sensor?" It could mean different things between real-world and Three.js coordinate system. A camera with no rotation in Three.js is looking straight down towards -Z axis, but your sensor might have it pointing towards +Z, or +X, etc.
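To illustrate the second point above (my own sketch; the angles are taken from the DJI_0174.JPG row in the question), the same three angles give different orientations depending on the Euler order, so the chosen order has to match the sensor's convention:
// Same angles, different Euler orders -> different resulting orientations.
const deg2rad = THREE.MathUtils.degToRad;
const pitch = deg2rad(66.43104338);
const heading = deg2rad(-26.86297007);
const roll = deg2rad(1.912026354);
const qXYZ = new THREE.Quaternion().setFromEuler(new THREE.Euler(pitch, heading, roll, 'XYZ'));
const qYXZ = new THREE.Quaternion().setFromEuler(new THREE.Euler(pitch, heading, roll, 'YXZ'));
console.log(THREE.MathUtils.radToDeg(qXYZ.angleTo(qYXZ))); // non-zero: the two conventions disagree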
Edit:
I added a demo below, I think this is what you needed from the screenshots. Notice I multiplied pitch * -1 so the cameras "Look down", and added +180 to the heading so they're pointing in the right... heading.
const DATA = [
{name: "DJI_0174.JPG", x: 3.116820957, y: -44.25690188, alt: 14.05258109, heading: -26.86297007, pitch: 66.43104338, roll: 1.912026354},
{name: "DJI_0175.JPG", x: -5.22E-02, y: -46.97266554, alt: 14.18056658, heading: -16.2033133, pitch: 66.11532302, roll: 3.552072396},
{name: "DJI_0176.JPG", x: -3.056586953, y: -49.00754998, alt: 14.3474763, heading: 4.270483155, pitch: 65.35247679, roll: 5.816970677},
{name: "DJI_0177.JPG", x: -6.909437337, y: -50.15910066, alt: 14.38391206, heading: 19.4459053, pitch: 64.26828897, roll: 6.685020944},
{name: "DJI_0178.JPG", x: -11.23696688, y: -50.36025313, alt: 14.56924433, heading: 19.19192622, pitch: 64.40188316, roll: 6.265995184},
{name: "DJI_0179.JPG", x: -16.04060554, y: -49.92320365, alt: 14.69721478, heading: 19.39979452, pitch: 64.85507307, roll: 6.224929846},
{name: "DJI_0180.JPG", x: -20.95614556, y: -49.22915437, alt: 14.92273203, heading: 20.39327092, pitch: 65.02028543, roll: 6.164031482},
{name: "DJI_0181.JPG", x: -25.9335097, y: -48.45330177, alt: 15.37330388, heading: 34.24388008, pitch: 64.82707628, roll: 6.979877709},
{name: "DJI_0182.JPG", x: -30.40507957, y: -47.21269946, alt: 15.67804925, heading: 49.98858409, pitch: 64.29238807, roll: 7.449650513},
{name: "DJI_0183.JPG", x: -34.64277285, y: -44.84034207, alt: 15.89229254, heading: 65.84203906, pitch: 62.9109777, roll: 7.065942792},
{name: "DJI_0184.JPG", x: -39.17179024, y: -40.22577764, alt: 16.28164396, heading: 65.53938063, pitch: 63.2592604, roll: 6.676581293},
{name: "DJI_0185.JPG", x: -43.549378, y: -33.09364534, alt: 16.64130671, heading: 68.61427166, pitch: 63.15205908, roll: 6.258411625},
{name: "DJI_0186.JPG", x: -46.5381556, y: -24.2992233, alt: 17.2286956, heading: 74.42382577, pitch: 63.75110346, roll: 6.279208736},
{name: "DJI_0187.JPG", x: -48.18737751, y: -14.67333218, alt: 17.85446854, heading: 79.54477952, pitch: 63.0503902, roll: 5.980759013},
{name: "DJI_0188.JPG", x: -48.48581505, y: -13.79840485, alt: 17.84756621, heading: 93.43316271, pitch: 61.87561678, roll: 5.110113503},
{name: "DJI_0189.JPG", x: -48.32815991, y: -13.88055437, alt: 17.77818573, heading: 106.3277582, pitch: 60.87171036, roll: 4.039469869},
];
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  45,
  window.innerWidth / window.innerHeight,
  1,
  1000
);
camera.position.z = 100;
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  canvas: document.querySelector("#canvas")
});
renderer.setSize(window.innerWidth, window.innerHeight);
const controls = new THREE.OrbitControls( camera, renderer.domElement );

// Helpers
const axesHelper = new THREE.AxesHelper( 20 );
scene.add(axesHelper);
const plane = new THREE.Plane( new THREE.Vector3( 0, 1, 0 ), 0 );
const planeHelper = new THREE.PlaneHelper( plane, 50, 0xffff00 );
scene.add(planeHelper);

let deg2rad = THREE.MathUtils.degToRad;

function createCam(photo) {
  let tempCam = new THREE.PerspectiveCamera(10, 2.0, 1, 30);
  // Altitude is actually y-axis,
  // "y" is actually z-axis
  tempCam.position.set(photo.x, photo.alt, photo.y);
  // Modify pitch & heading so it matches Three.js coordinates
  const rotX = deg2rad(photo.pitch * -1);
  const rotY = deg2rad(photo.heading + 180);
  const rotZ = deg2rad(photo.roll);
  tempCam.rotation.set(rotX, rotY, rotZ, "YXZ");
  let helper = new THREE.CameraHelper(tempCam);
  scene.add(tempCam);
  scene.add(helper);
}

for (let i = 0; i < DATA.length; i++) {
  createCam(DATA[i]);
}

function animate() {
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
animate();
html, body { margin:0; padding:0;}
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js"></script><script src="https://rawgit.com/mrdoob/three.js/dev/examples/js/controls/OrbitControls.js"></script>
<canvas id="canvas"></canvas>

threejs AnimationClip Example needed

In three.js: I have an Object3D and want to do simple keyframed animations with it: move, rotate, scale.
There is a simple example here: https://threejs.org/examples/#misc_animation_keys but the old approach does not work anymore, since the animation system changed completely and rotation in animations switched to quaternions in three.js some time ago.
I am searching for a very simple example like that, but working with the new animation system. I already googled it and found nothing, and there is no documentation on the three.js page.
Using Blender or Collada to create the animation is not an option, since I have imported the model from a STEP file, which is supported by neither one.
EDIT: I have solved the problem with the example, but I still have an issue: I want to animate a nested Object3D, but only the root Object3D, so I specified keys only for the root object, not the whole hierarchy. It throws an error because the animation keys hierarchy does not have the same structure as the root Object3D hierarchy. But this is another problem and needs another question.
The problem with the example was that rotation in animation keys is now specified as a quaternion, not as an Euler rotation like in the example. So adding a fourth value (1) to the rotation param made it work.
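For reference, a minimal sketch (mine, not from the original post) of what a quaternion-valued rotation key looks like with the current animation system; the identity rotation is (0, 0, 0, 1), which is where that fourth value comes from:
// Rotation keys are quaternions (x, y, z, w); the identity rotation is (0, 0, 0, 1).
// The second key below is roughly a 90° turn around the Y axis.
const rotationTrack = new THREE.QuaternionKeyframeTrack(
  '.quaternion',
  [ 0, 1 ],                             // key times in seconds
  [ 0, 0, 0, 1, 0, 0.7071, 0, 0.7071 ]  // one (x, y, z, w) quadruple per key
);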
Finally I found one good example of setting the desired values in keyframes:
Misc animation keys
The full source can be found by inspecting that page. Here is the essential part:
// create a keyframe track (i.e. a timed sequence of keyframes) for each animated property
// Note: the keyframe track type should correspond to the type of the property being animated
// POSITION
var positionKF = new THREE.VectorKeyframeTrack( '.position', [ 0, 1, 2 ], [ 0, 0, 0, 30, 0, 0, 0, 0, 0 ] );
// SCALE
var scaleKF = new THREE.VectorKeyframeTrack( '.scale', [ 0, 1, 2 ], [ 1, 1, 1, 2, 2, 2, 1, 1, 1 ] );
// ROTATION
// Rotation should be performed using quaternions, using a QuaternionKeyframeTrack
// Interpolating Euler angles (.rotation property) can be problematic and is currently not supported
// set up rotation about x axis
var xAxis = new THREE.Vector3( 1, 0, 0 );
var qInitial = new THREE.Quaternion().setFromAxisAngle( xAxis, 0 );
var qFinal = new THREE.Quaternion().setFromAxisAngle( xAxis, Math.PI );
var quaternionKF = new THREE.QuaternionKeyframeTrack( '.quaternion', [ 0, 1, 2 ], [ qInitial.x, qInitial.y, qInitial.z, qInitial.w, qFinal.x, qFinal.y, qFinal.z, qFinal.w, qInitial.x, qInitial.y, qInitial.z, qInitial.w ] );
// COLOR
var colorKF = new THREE.ColorKeyframeTrack( '.material.color', [ 0, 1, 2 ], [ 1, 0, 0, 0, 1, 0, 0, 0, 1 ], THREE.InterpolateDiscrete );
// OPACITY
var opacityKF = new THREE.NumberKeyframeTrack( '.material.opacity', [ 0, 1, 2 ], [ 1, 0, 1 ] );
// create an animation sequence with the tracks
// If a negative time value is passed, the duration will be calculated from the times of the passed tracks array
var clip = new THREE.AnimationClip( 'Action', 3, [ scaleKF, positionKF, quaternionKF, colorKF, opacityKF ] );
// setup the AnimationMixer
mixer = new THREE.AnimationMixer( mesh );
// create a ClipAction and set it to play
var clipAction = mixer.clipAction( clip );
clipAction.play();
The animation has 3 keyframes at times [0, 1, 2] = [initial, final, initial].
The position array [ 0, 0, 0, 30, 0, 0, 0, 0, 0 ] means (0,0,0) -> (30,0,0) -> (0,0,0).
I found only this one:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_animation_scene.html
Also, I was able to write one myself:
// Let's create a mesh
this.mesh = new THREE.Mesh( geometry, material );
this.clock = new THREE.Clock();
// Save this mixer somewhere
this.mixer = new THREE.AnimationMixer( this.mesh );
// AnimationClipCreator is part of the three.js examples, not the core build
let animation = THREE.AnimationClipCreator.CreateRotationAnimation(100, "y");
this.mixer.clipAction( animation ).play();
// In the animation loop of your scene:
var delta = 0.75 * this.clock.getDelta();
this.mixer.update( delta );
This is going to rotate the given mesh around the y axis.

Repeat section of texture over mesh

I'm using a spritesheet (atlas) for low-res textures, and I want to be able to repeat the same portion of the texture multiple times without adding more triangles.
I coded so far a plane like this:
var textureSpritemap = loadTexture('textures.png');
var geometry = new t3.PlaneGeometry(80, 80);
var material = new t3.MeshBasicMaterial({map: textureSpritemap});
var plane = new t3.Mesh(geometry, material);
setPlaneUVs(plane, [0, 0.5, 0, 0, 0.5, 0.5, 0.5, 0]);
textureSpritemap.repeat.set(2, 2);
I understand that it's possible to repeat a texture multiple times, but I want to be able to repeat only a portion.
Sprite map:
Intended result:
Actual result:
Any thoughts?

Three.js multiple spotlight performance

I'm making a racing game in three.js and I'm stuck with the following problem...
I have 2 cars, so I need to render at least 4 spotlights for the front and rear lights of each car...
We also need some lights on the road...
So I have this code:
//front car1 light
var SpotLight = new THREE.SpotLight( 0xffffff, 5, 300, Math.PI/2, 1 );
SpotLight.position.set( 50, 10, 700 );
SpotLight.target.position.set(50, 0, 800);
SpotLight.castShadow = true;
SpotLight.shadowCameraVisible = false;
SpotLight.shadowDarkness = 0.5;
scene.add(SpotLight);
//front car2 light
var SpotLight = new THREE.SpotLight( 0xffffff, 5, 300, -Math.PI/2, 1 );
SpotLight.position.set( -50, 10, 40 );
SpotLight.target.position.set(-50, 0, 100);
SpotLight.castShadow = true;
SpotLight.shadowCameraVisible = false;
SpotLight.shadowDarkness = 0.5;
scene.add(SpotLight);
//rear car1 light
var SpotLight = new THREE.SpotLight( 0xff0000, 2, 200, Math.PI/2, 2 );
SpotLight.position.set( 50, 20, 660 );
SpotLight.target.position.set(50, 0, 600);
SpotLight.castShadow = true;
SpotLight.shadowCameraVisible = false;
SpotLight.shadowDarkness = 0.5;
scene.add(SpotLight);
//rear car2 light
var SpotLight = new THREE.SpotLight( 0xff0000, 2, 100, Math.PI/2, 1 );
SpotLight.position.set( -50, 20, -35 );
SpotLight.target.position.set(-50, 0, -100);
SpotLight.castShadow = true;
SpotLight.shadowCameraVisible = false;
SpotLight.shadowDarkness = 0.5;
scene.add(SpotLight);
//some road light
var SpotLight = new THREE.SpotLight( 0x404040, 3, 500, Math.PI/2, 2 );
SpotLight.position.set( 0, 300, 0 );
SpotLight.target.position.set(0, 0, 0);
SpotLight.castShadow = true;
SpotLight.shadowCameraVisible = false;
SpotLight.shadowDarkness = 0.5;
scene.add(SpotLight);
Nothing special... but performance dropped to 20-30 FPS and it's a little bit laggy.
And if I add more lights in the future, the performance will drop even further...
Has anyone encountered similar problems already? How do you deal with this? Or maybe I'm doing something wrong?
Lights are very expensive in real-time rendering. You'll need to find the cheapest approach that mimics the result you're after.
For instance, you could place a textured plane in front of each car with a texture that looks as if there were spotlights aimed at the floor. It won't be physically right, but it will give the impression that it is, you'll save 4 spotlights, and your game will run at 60fps.
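As a rough sketch of that idea (mine; the texture file name is a placeholder), an additive-blended plane laid on the road in front of the car can stand in for a headlight cone:
// Cheap "fake" headlight: an additive-blended plane with a light-cone texture.
// 'light-cone.png' is a placeholder asset name.
const coneTexture = new THREE.TextureLoader().load('light-cone.png');
const fakeLight = new THREE.Mesh(
  new THREE.PlaneGeometry(30, 60),
  new THREE.MeshBasicMaterial({
    map: coneTexture,
    transparent: true,
    blending: THREE.AdditiveBlending,
    depthWrite: false
  })
);
fakeLight.rotation.x = -Math.PI / 2;  // lay it flat on the road
fakeLight.position.set(50, 0.1, 740); // just in front of car1, slightly above the ground
scene.add(fakeLight);                 // ideally add it to the car object so it moves along with it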
Shadows are most likely the culprit in this case - under the hood, the scene needs to be rendered from the point of view of each shadow-casting light. If possible, save shadows for the most important lights and disable them on the others.
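A minimal sketch of that suggestion (the variable names are placeholders for the spotlights created above): keep castShadow only on the light that matters most and switch it off on the rest:
// Only the overhead road light casts shadows; the headlight/taillight spots do not.
roadLight.castShadow = true;
for (const light of [frontLight1, frontLight2, rearLight1, rearLight2]) {
  light.castShadow = false; // the light still illuminates, but skips the extra shadow render pass
}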
For many lights, you could try to use WebGLDeferredRenderer, it can handle multiple lights much better than the default renderer. It's experimental work in progress though, so you are likely to run into other problems. Also I'm not sure if it helps shadow mapping performance.
I had exactly the same issue. Besides mrdoob's and yaku's suggestions, which were really helpful, another approach is reducing the number of segments and polygons in your geometries.
For example, if you have a simple cylinder in your scene, you can reduce the number of segments by setting heightSegments and radialSegments at initialization time.
As a very simple example, avoid doing something like this if you need to create a simple cylinder:
sampleCylinderGeo = new THREE.CylinderGeometry(2, 2, 5, 16, 32);
instead try:
sampleCylinderGeo = new THREE.CylinderGeometry(2, 2, 5, 8, 1);
Of course, if you want a smoother cylinder you can increase the radial segments from 8 to something like 16 or more according to your needs, but for heightSegments it's simply useless to have more than 1 segment in a simple cylinder.
So just adjust the number of segments according to your needs; you will save lots of unnecessary segments and achieve much higher FPS when working with lights, especially when you have lots of geometries in your scene.

cube texture is reversed in one of the cube faces

I use an image for the cube texture; the image is shown correctly on 3 of the 4 faces and looks reversed on the 4th face.
My relevant code is the following:
//dom
var container2=document.getElementById('share');
//renderer
var renderer2 = new THREE.CanvasRenderer();
renderer2.setSize(100,100);
container2.appendChild(renderer2.domElement);
//Scene
var scene2 = new THREE.Scene();
//Camera
var camera2 = new THREE.PerspectiveCamera(50,200/200,1,1000);
camera2.up=camera.up;
//
camera2.position.z = 90;
//
scene2.add(camera2);
//Axes
var axes2= new THREE.AxisHelper();
//Add texture for the cube
//Use image as texture
var img2 = new THREE.MeshBasicMaterial({ //CHANGED to MeshBasicMaterial
map:THREE.ImageUtils.loadTexture('img/fb.jpg')
});
img2.map.needsUpdate = true;
//
var cube = new THREE.Mesh(new THREE.CubeGeometry(40,40,40),img2);
scene2.add(cube);
The image size is 600×600 px. Any suggestion is appreciated, thanks in advance.
First off, it should be pointed out for others that you are trying to develop using the javascript library "three.js". The documentation can be found here: http://mrdoob.github.com/three.js/docs
The crux of the issue is that textures get mapped to Mesh objects based upon UV coordinates stored in the Geometry objects. The THREE.CubeGeometry object has its UV coordinates stored in the array faceVertexUvs.
It contains the following arrays of UV coordinates for the 4 vertices of each of the 6 faces:
{{0,1}, {0,0}, {1,0}, {1,1}}, // Right Face (Top of texture Points "Up")
{{0,1}, {0,0}, {1,0}, {1,1}}, // Left Face (Top of texture Points "Up")
{{0,1}, {0,0}, {1,0}, {1,1}}, // Top Face (Top of texture Points "Backward")
{{0,1}, {0,0}, {1,0}, {1,1}}, // Bottom Face (Top of texture Points "Forward")
{{0,1}, {0,0}, {1,0}, {1,1}}, // Front Face (Top of texture Points "Up")
{{0,1}, {0,0}, {1,0}, {1,1}} // Back Face (Top of texture Points "Up") **Culprit**
These UV coordinates are mapped to each of the faces that make up the cube, which are:
{0, 2, 3, 1}, // Right Face (Counter-Clockwise Order Starting RTF)
{4, 6, 7, 5}, // Left Face (Counter-Clockwise Order Starting LTB)
{4, 5, 0, 1}, // Top Face (Counter-Clockwise Order Starting LTB)
{7, 6, 3, 2}, // Bottom Face (Counter-Clockwise Order Starting LBF)
{5, 7, 2, 0}, // Front Face (Counter-Clockwise Order Starting LTF)
{1, 3, 6, 4} // Back Face (Counter-Clockwise Order Starting RTB)
The above numbers are indexes into the array of vertices which, for THREE.CubeGeometry, are stored in vertices; there are 8 of them:
{20, 20, 20}, // Right-Top-Front Vertex
{20, 20, -20}, // Right-Top-Back Vertex
{20, -20, 20}, // Right-Bottom-Front Vertex
{20, -20, -20}, // Right-Bottom-Back Vertex
{-20, 20, -20}, // Left-Top-Back Vertex
{-20, 20, 20}, // Left-Top-Front Vertex
{-20, -20, -20}, // Left-Bottom-Back Vertex
{-20, -20, 20} // Left-Bottom-Front Vertex
NOTE: All relative directions above are assuming the camera is placed along the positive z axis looking towards a cube centered on the origin.
So the real culprit is the back face, which has the texture's top pointing upwards. In this case you want the texture's top to point downwards on the back face, so that when the cube is flipped upside down by the rotations and viewed the way you have it, the image appears as you expect. It needs to change as follows:
{{1,0}, {1,1}, {0,1}, {0,0}} // FIXED: Back Face (Top of texture Points "Down")
This change can be made in code to get the display you would like:
var cubeGeometry = new THREE.CubeGeometry(40, 40, 40);
cubeGeometry.faceVertexUvs[0][5] = [new THREE.UV(1, 0), new THREE.UV(1, 1), new THREE.UV(0, 1), new THREE.UV(0, 0)];
var cube = new THREE.Mesh(cubeGeometry, img2);
For further reading, I recommend the following link on Texture Mapping with UV coordinates http://www.rozengain.com/blog/2007/08/26/uv-coordinate-basics/.
