Three.js Using Raycaster to detect line and cone children of ArrowHelper - javascript

I have a functioning Raycaster for a simple painting app. I use it for a "bucket tool" in which the user can click on an object and change its color. It works for geometry objects such as BoxGeometry and CircleGeometry, but I'm struggling to apply it to the children of an ArrowHelper object. Because ArrowHelper isn't a shape and does not possess a geometry attribute, Raycaster does not detect collision with its position when checking scene.children for intersections. However, the children of ArrowHelper objects are always two things: a line and a cone, both of which have geometry, material, and position attributes.
I HAVE TRIED:
Toggling the recursive boolean of the function .intersectObjects(objects: Array, recursive: Boolean, optionalTarget: Array) to true, so that it includes the children of the objects in the array.
Circumventing the ArrowHelper parent by iterating through scene.children for ArrowHelper objects and adding their lines and cones into a separate array of objects. From there I attempted to check for intersections with only the list of lines and cones, but no intersections were detected.
Raycaster setup:
const runRaycaster = (mouseEvent) => {
... // sets mouse and canvas bounds here
const raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObjects(scene.children, true);
if (intersects.length > 0) {
for (let i = 0; i < intersects.length; i++) {
// works for GEOMETRY ONLY
// needs modifications for checking ArrowHelpers
intersects[i].object.material.color.set(currentColor);
}
}
};
Here's my attempt to check the lines and cones individually, without the ArrowHelper parent:
let arrowObjectsList = [];
for (let i = 0; i < scene.children.length; i++) {
if (scene.children[i].type === 'ArrowHelper') {
arrowObjectsList.push(scene.children[i].line);
arrowObjectsList.push(scene.children[i].cone);
} else {
console.log(scene.children[i].type);
}
}
console.log(arrowObjectsList); // returns 2 objects per arrow on the canvas
// intersectsArrows always returns empty
const intersectsArrows = raycaster.intersectObjects(arrowObjectsList, true);
SOME NOTES:
Every ArrowHelper, its line, and its cone have uniquely identifiable names so they can be recolored/repositioned/deleted later.
The Raycaster runs with every onMouseDown and onMouseMove event.
Notably, the line and cone children of ArrowHelpers are BufferGeometry and CylinderBufferGeometry, respectively, rather than variations of Geometry. I'm wondering if this has anything to do with it. According to this example from the Three.JS documentation website, BufferGeometry can be detected by Raycaster in a similar fashion.

Setting recursive = true worked for me. Run the simple code below, and click on the arrow head. You will see the intersection information printed to the console. (three.js r125)
let W = window.innerWidth;
let H = window.innerHeight;
const renderer = new THREE.WebGLRenderer({
antialias: true,
alpha: true
});
document.body.appendChild(renderer.domElement);
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(28, 1, 1, 1000);
camera.position.set(5, 5, 5);
camera.lookAt(scene.position);
scene.add(camera);
const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(0, 0, -1);
camera.add(light);
const mesh = new THREE.ArrowHelper(
new THREE.Vector3(0, 0, 1),
new THREE.Vector3(0, 0, 0),
2,
0xff0000,
1,
1
);
scene.add(mesh);
function render() {
renderer.render(scene, camera);
}
function resize() {
W = window.innerWidth;
H = window.innerHeight;
renderer.setSize(W, H);
camera.aspect = W / H;
camera.updateProjectionMatrix();
render();
}
window.addEventListener("resize", resize);
resize();
render();
// RAYCASTER STUFF
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();
renderer.domElement.addEventListener('mousedown', function(e) {
mouse.set(
(e.clientX / window.innerWidth) * 2 - 1, -(e.clientY / window.innerHeight) * 2 + 1
);
raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObjects(scene.children, true);
console.log(intersects);
});
html,
body {
width: 100%;
height: 100%;
padding: 0;
margin: 0;
overflow: hidden;
background: skyblue;
}
<script src="https://threejs.org/build/three.min.js"></script>
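If you also want to recolor the whole arrow when either of its children is hit, one possible extension of the mousedown handler above (a sketch, not part of the original answer) is to walk up from the intersected line or cone to its ArrowHelper parent, which exposes a setColor helper:

renderer.domElement.addEventListener('mousedown', function(e) {
  mouse.set(
    (e.clientX / window.innerWidth) * 2 - 1,
    -(e.clientY / window.innerHeight) * 2 + 1
  );
  raycaster.setFromCamera(mouse, camera);
  const intersects = raycaster.intersectObjects(scene.children, true);
  for (const hit of intersects) {
    // The line and cone report the ArrowHelper as their parent,
    // so one hop up recolors the whole helper at once.
    if (hit.object.parent && hit.object.parent.type === 'ArrowHelper') {
      hit.object.parent.setColor(0x00ff00); // any color you like
      break;
    }
  }
});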

After closer inspection, it turned out to be a matter of the position being set, not the arrow itself. The position of the arrow varied based on the user's mouse click, which specifies the start point. However, ArrowHelper still presented several problems: it was very difficult to select the line, because the lineWidth value of LineBasicMaterial cannot take any value other than 1, despite being editable. This is due to a limitation of the OpenGL Core Profile, as addressed in the docs and in this question. Similarly, the cone would not respond to setLength. This limits the customization of the ArrowHelper tool pretty badly.
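As an aside, picking thin lines can also be made more forgiving without touching lineWidth by widening the raycaster's line threshold (measured in world units). A minimal sketch, assuming a three.js build recent enough to expose raycaster.params.Line:

const raycaster = new THREE.Raycaster();
// widen the pick tolerance around lines (default is 1 world unit)
raycaster.params.Line.threshold = 3;
raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObjects(scene.children, true);

That only helps with picking, though; it does nothing for the lineWidth and setLength limitations above.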
Because of this, I decided to entirely replace ArrowHelper with two objects coupled together: a tube (TubeBufferGeometry) and a cone (ConeGeometry), both assigned a MeshBasicMaterial, in a way that can be accessed by Raycasters out of the box.
... // the pos Float32Array is set according to user mouse coordinates.
const v1 = new THREE.Vector3(pos[0], pos[1], pos[2]);
const v2 = new THREE.Vector3(pos[3], pos[4], pos[5]);
const material = new THREE.MeshBasicMaterial({
color: color,
side: THREE.DoubleSide,
});
// Because there are only two vectors, no actual curve occurs.
// Therefore, it's our straight line.
const tubeGeometry = new THREE.TubeBufferGeometry(
new THREE.CatmullRomCurve3([v1, v2]), 1, 3, 3, false);
const coneGeometry = new THREE.ConeGeometry(10, 10, 3, 1, false);
arrowLine = new THREE.Mesh(tubeGeometry, material);
arrowTip = new THREE.Mesh(coneGeometry, material);
// needs names to be updated later.
arrowLine.name = 'arrowLineName';
arrowTip.name = 'arrowTipName';
When placing the arrow, the user will click and drag to specify the start and end point of the arrow, so the arrow and its tip have to be updated with onMouseMove. We have to use Math.atan2 to get the angle in degrees between v1 and v2, with v1 as the center. Subtracting 90 orients the rotation to the default position.
... // on the onMouseMove event, pos is updated with new coords.
const setDirection = () => {
const v1 = new THREE.Vector3(pos[0], pos[1], pos[2]);
const v2 = new THREE.Vector3(pos[3], pos[4], pos[5]);
// copying the v2 pos ensures that the arrow tip is always at the end.
arrowTip.position.copy(v2);
// rotating the arrow tip according to the angle between start and end
// points, v1 and v2.
let angleDegrees = 180 - (Math.atan2(pos[1] - pos[4], pos[3] - pos[0]) * 180 / Math.PI - 90);
const angleRadians = angleDegrees * Math.PI / 180;
arrowTip.rotation.set(0, 0, angleRadians);
// NOT VERY EFFICIENT, but it does the job to "update" the curve.
arrowLine.geometry.copy( new THREE.TubeBufferGeometry(new THREE.CatmullRomCurve3([v1, v2]),1,3,3,false));
scene.add(arrowLine);
scene.add(arrowTip);
};
Out of the box, this "arrow" allows me to select and edit it with Raycaster without a problem. No worrying about line positioning, line thickness, or line length.
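With that in place, the bucket-tool raycast from the question can treat the tube and the tip like any other mesh; a minimal sketch using the names assigned above (both meshes share one material here, so recoloring either one recolors the whole arrow):

raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObjects(scene.children, false);
for (const hit of intersects) {
  if (hit.object.name === 'arrowLineName' || hit.object.name === 'arrowTipName') {
    hit.object.material.color.set(currentColor);
  }
}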

Related

How do I detect if X and Z position is intersecting a certain mesh (not using mouse)? - Three.js

Background of Question
I am working on a game that is a mix between Europa Universalis 4 and Age of Empires 3. The game is made in JavaScript and uses the Three.js (r109) library. As of right now I have made randomly generated low-poly terrain with trees and reflective water. In the beginning I want the game to spawn a Navy, represented by a galleon (in the screenshot below). I want to make it so that when it's called to spawn, it will pick a random location within the bounds of the water. The water mesh is represented by a semi-opaque plane spanning the size of the map, with a THREE.Reflector object underneath it. The terrain is also a plane but has been altered using a SimplexNoise heightmap.
The Question
How do I detect if an x and z position intersects with the water mesh and not the terrain mesh? THREE.Raycaster seems to be useful for what I am trying to do, but I want to know if there is a better solution. If using THREE.Raycaster is the best option, how would I go about implementing it for this purpose? Should I make an individual THREE.Raycaster for every object I am doing this with? Keep in mind I'm not placing this object with the mouse; I want to place it with a method that checks the position as stated above.
It's difficult to give specific advice without knowing anything at all about your code, but it sounds like all you need to do is create a collision list for your valid water surfaces and then check that when you want to spawn something.
A very simple jsfiddle is here. It creates a "land" mesh (green) and a "water" mesh (blue), and adds the "water" mesh to a variable called collisionList. It then calls a spawn function for coordinates diagonally across both surfaces. The function uses a raycaster to check if the coordinates are over the "water" mesh and spawns a red cube if they are.
Here's the code:
window.onload = function() {
var camera = null, land = null, water = null, renderer = null, lights;
var collisionList;
var d, n, scene = null, animID;
n = document.getElementById('canvas');
function load() {
var height = 600, width = 800;
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(60, width/height, 1, 1000);
camera.position.set(0, 0, -10);
camera.lookAt(new THREE.Vector3(0, 0, 0));
scene.add(camera);
lights = [];
lights[0] = new THREE.PointLight(0xffffff, 1, 0);
lights[1] = new THREE.PointLight(0xffffff, 1, 0);
lights[2] = new THREE.PointLight(0xffffff, 1, 0);
lights[0].position.set(0, 200, 0);
lights[1].position.set(100, 200, 100);
lights[2].position.set(-100, -200, -100);
scene.add(lights[0]);
scene.add(lights[1]);
scene.add(lights[2]);
water = new THREE.Mesh(new THREE.PlaneGeometry(7, 7, 10),
new THREE.MeshStandardMaterial({
color: 0x0000ff,
side: THREE.DoubleSide,
}));
water.position.set(0, 0, 0);
scene.add(water);
land = new THREE.Mesh(new THREE.PlaneGeometry(12, 12, 10),
new THREE.MeshStandardMaterial({
color: 0x00ff00,
side: THREE.DoubleSide,
}));
land.position.set(0, 0, 1);
scene.add(land);
renderer = new THREE.WebGLRenderer();
renderer.setSize(width, height);
n.appendChild(renderer.domElement);
collisionList = [ water ];
for(var i = -6; i < 6; i++)
spawn(i);
animate();
}
function spawn(x) {
var dir, intersect, mesh, ray, v;
v = new THREE.Vector3(x, x, -1);
dir = new THREE.Vector3(0, 0, 1);
ray = new THREE.Raycaster(v, dir.normalize(), 0, 100);
intersect = ray.intersectObjects(collisionList);
if(intersect.length <= 0)
return;
mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1, 1, 1, 1),
new THREE.MeshStandardMaterial({ color: 0xff0000 }));
mesh.position.set(x, x, 0);
scene.add(mesh);
}
function animate() {
if(!scene) return;
animID = requestAnimationFrame(animate);
render();
update();
}
function render() {
if(!scene || !camera || !renderer) return;
renderer.render(scene, camera);
}
function update() {
if(!scene || !camera) return;
}
load();
};
As for whether this is a smart way to do it, that really depends on the design of the rest of your game.
If your world is procgen then it may be more efficient/less error prone to generate the spawn points (and any other "functional" parts of the world) first and use that to generate the geography instead of the other way around.

Camera position for endless game in ThreeJs

I'm studying Three.js and I'm trying to make my first game: an endless game.
I have read this article and the purpose is to do something very similar.
The protagonist (the hero) is a blue ball that rolls towards the "infinity" and must avoid some obstacles that gradually arise in front of him. The user can avoid these obstacles by guiding the ball to the left or right and jumping (the idea is to use the keyboard and in particular the left/right arrow keys and the space bar to jump).
Here is my idea:
I want to follow the idea of the article but not to copy the code (I want to understand it).
This is what I've done so far:
let sceneWidth = window.innerWidth;
let sceneHeight = window.innerHeight;
let canvas;
let camera;
let scene;
let renderer;
let dom;
let sun;
let hero;
let ground;
let clock;
let spotLight;
let ambientLight;
init();
function init() {
createScene();
showHelpers();
update();
}
/**
* Set up scene.
*/
function createScene() {
clock = new THREE.Clock();
clock.start();
scene = new THREE.Scene();
window.scene = scene;
camera = new THREE.PerspectiveCamera(60, sceneWidth / sceneHeight, 0.1, 1000);
camera.position.set(0, 0, 0);
renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setClearColor(0x333f47, 1);
renderer.shadowMap.enabled = true;
renderer.shadowMapSoft = true;
renderer.setSize(sceneWidth, sceneHeight);
canvas = renderer.domElement;
document.body.appendChild(canvas);
// const orbitControls = new THREE.OrbitControls(camera, canvas);
addGround();
addHero();
addLight();
camera.position.set(0, -1, 0.6);
camera.lookAt(new THREE.Vector3(0, 0, 0));
window.addEventListener("resize", onWindowResize, false);
}
/**
* Show helper.
*/
function showHelpers() {
const axesHelper = new THREE.AxesHelper(5);
// scene.add(axesHelper);
const spotLightHelper = new THREE.SpotLightHelper(spotLight);
scene.add(spotLightHelper);
}
/**
* Add ground to scene.
*/
function addGround() {
const geometry = new THREE.PlaneGeometry(1, 4);
const material = new THREE.MeshLambertMaterial({
color: 0xcccccc,
side: THREE.DoubleSide
});
ground = new THREE.Mesh(geometry, material);
ground.position.set(0, 1, 0);
ground.receiveShadow = true;
scene.add(ground);
}
/**
* Add hero to scene.
*/
function addHero() {
var geometry = new THREE.SphereGeometry(0.03, 32, 32);
var material = new THREE.MeshLambertMaterial({
color: 0x3875d8,
side: THREE.DoubleSide
});
hero = new THREE.Mesh(geometry, material);
hero.receiveShadow = true;
hero.castShadow = true;
scene.add(hero);
hero.position.set(0, -0.62, 0.03);
}
/**
* Add light to scene.
*/
function addLight() {
// spot light
spotLight = new THREE.SpotLight(0xffffff);
spotLight.position.set(2, 30, 0);
spotLight.angle = degToRad(10);
spotLight.castShadow = true;
spotLight.shadow.mapSize.width = 1024;
spotLight.shadow.mapSize.height = 1024;
spotLight.shadow.camera.near = 1;
spotLight.shadow.camera.far = 4000;
spotLight.shadow.camera.fov = 45;
scene.add(spotLight);
// ambient light
ambientLight = new THREE.AmbientLight(0x303030, 2);
scene.add(ambientLight);
}
/**
* Call game loop.
*/
function update() {
render();
requestAnimationFrame(update);
}
/**
* Render the scene.
*/
function render() {
renderer.render(scene, camera);
}
/**
* On window resize, render again the scene.
*/
function onWindowResize() {
sceneHeight = window.innerHeight;
sceneWidth = window.innerWidth;
renderer.setSize(sceneWidth, sceneHeight);
camera.aspect = sceneWidth / sceneHeight;
camera.updateProjectionMatrix();
}
/**
* Degree to radiants
*/
function degToRad(degree) {
return degree * (Math.PI / 180);
}
<script src="https://threejs.org/build/three.min.js"></script>
(JSFiddle)
I'm having several problems, the first is the position of objects and the camera.
I would like to be able to position the plane so that its shorter side is positioned at the beginning of the screen (the entire plane must therefore be visible; there must not be a hidden part).
I would like the ball to be positioned horizontally in the middle and vertically almost at the beginning of the floor (in short, as shown in the figure), with its shadow projected onto the plane. Every object should cast its shadow onto the plane.
I'm using a spotlight and Lambert materials, so the shadow should be there, but it is not. Why?
I don't even understand how to position objects.
I understood that the point (0, 0, 0) is the center of the screen.
I would like the ground to be at y=0 and all the other objects are positioned above as if they were resting.
My code works but I don't know if there are better ways to handle object placement.
I would also simplify my life by giving the sphere a radius of 1 instead of 0.03 and then making the scene "smaller" by moving the camera away to zoom out (I think this is the trick).
So, I need help setting the scene correctly.
That is my first application in ThreeJs so any advice is welcome!
EDIT 1
I changed camera.lookAt(new THREE.Vector3(0, 0, 0)); to camera.lookAt(new THREE.Vector3(0, 0, -5)); and I added spotLight.lookAt(new THREE.Vector3(0, 0, -5));.
This is the result:
Not exactly what I want...
You're right in placing your plane and sphere at 0 on the y-axis. The problem you're having is that you're telling the camera to look straight at (0, 0, 0) when you do
camera.lookAt(0, 0, 0);
so you'll get the ball perfectly centered. What you should do is tell the camera to look a little bit ahead of the sphere. You'll have to tweak the value, but something like this should do the trick:
camera.lookAt(0, 0, -5);
Additionally, your spotlight is pointing straight ahead. When you place it at (2, 30, 0), its effects get lost. You need to point it to where you want:
spotLight.lookAt(0, 0, -5);
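Note that, according to the three.js docs, a SpotLight shines from its position towards its .target object, so an equivalent (and arguably more idiomatic) way to aim it is to move that target instead of calling lookAt; a small sketch:

spotLight.position.set(2, 30, 0);
spotLight.target.position.set(0, 0, -5); // aim the cone ahead of the hero
scene.add(spotLight.target); // the target must be in the scene graph so its matrix updates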

smooth terrain from height map three js

I am currently trying to create some smooth terrain using the PlaneBufferGeometry of three.js from a height map I got from Google Images:
https://forums.unrealengine.com/filedata/fetch?id=1192062&d=1471726925
but the result is kinda choppy..
(Sorry, this is my first question and evidently I need 10 reputation to post images, otherwise I would.. but here's an even better thing: a live demo! left click + drag to rotate, scroll to zoom)
I want, like I said, a smooth terrain, so am I doing something wrong, or is this just the expected result and I need to smooth it afterwards somehow?
Also here is my code:
const IMAGE_SRC = 'terrain2.png';
const SIZE_AMPLIFIER = 5;
const HEIGHT_AMPLIFIER = 10;
var WIDTH;
var HEIGHT;
var container = jQuery('#wrapper');
var scene, camera, renderer, controls;
var data, plane;
image();
// init();
function image() {
var image = new Image();
image.src = IMAGE_SRC;
image.onload = function() {
WIDTH = image.width;
HEIGHT = image.height;
var canvas = document.createElement('canvas');
canvas.width = WIDTH;
canvas.height = HEIGHT;
var context = canvas.getContext('2d');
console.log('image loaded');
context.drawImage(image, 0, 0);
data = context.getImageData(0, 0, WIDTH, HEIGHT).data;
console.log(data);
init();
}
}
function init() {
// initialize camera
camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, .1, 100000);
camera.position.set(0, 1000, 0);
// initialize scene
scene = new THREE.Scene();
// initialize directional light (sun)
var sun = new THREE.DirectionalLight(0xFFFFFF, 1.0);
sun.position.set(300, 400, 300);
sun.distance = 1000;
scene.add(sun);
var frame = new THREE.SpotLightHelper(sun);
scene.add(frame);
// initialize renderer
renderer = new THREE.WebGLRenderer();
renderer.setClearColor(0x000000);
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(window.innerWidth, window.innerHeight);
container.append(renderer.domElement);
// initialize controls
controls = new THREE.OrbitControls(camera, renderer.domElement);
controls.enableDamping = true;
controls.dampingFactor = .05;
controls.rotateSpeed = .1;
// initialize plane
plane = new THREE.PlaneBufferGeometry(WIDTH * SIZE_AMPLIFIER, HEIGHT * SIZE_AMPLIFIER, WIDTH - 1, HEIGHT - 1);
plane.castShadow = true;
plane.receiveShadow = true;
var vertices = plane.attributes.position.array;
// apply height map to vertices of plane
for(i=0, j=2; i < data.length; i += 4, j += 3) {
vertices[j] = data[i] * HEIGHT_AMPLIFIER;
}
var material = new THREE.MeshPhongMaterial({color: 0xFFFFFF, side: THREE.DoubleSide, shading: THREE.FlatShading});
var mesh = new THREE.Mesh(plane, material);
mesh.rotation.x = - Math.PI / 2;
mesh.matrixAutoUpdate = false;
mesh.updateMatrix();
plane.computeFaceNormals();
plane.computeVertexNormals();
scene.add(mesh);
animate();
}
function animate() {
requestAnimationFrame(animate);
renderer.render(scene, camera);
controls.update();
}
The result is jagged because the height map has low color depth. I took the liberty of coloring a portion of the height map (Paint bucket in Photoshop, 0 tolerance, non-continuous) so you can see for yourself how large the areas with the same color value, i.e. the same height, are.
The areas of the same color will create a plateau in your terrain. That's why you have plateaus and sharp steps in your terrain.
What you can do is either smooth out the Z values of the geometry or use a height map that stores 16 bits or even 32 bits of height information. The current height map only uses 8 bits, i.e. 256 values.
One thing you could do to smooth things out a bit is to sample more than just a single pixel from the heightmap. Right now, the vertex indices directly correspond to the pixel position in the data-array. And you just update the z-value from the image.
for(i=0, j=2; i < data.length; i += 4, j += 3) {
vertices[j] = data[i] * HEIGHT_AMPLIFIER;
}
Instead you could do things like this:
get multiple samples with certain offsets along the x/y axes
compute a (weighted) average value from the samples
That way you would get some smoothing at the borders of the same-height areas.
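A minimal sketch of that idea, assuming the WIDTH, HEIGHT, data, vertices and HEIGHT_AMPLIFIER variables from the question: average a small neighbourhood of pixels instead of reading a single one.

// average the red channel of the 3x3 pixels around (x, y)
function sampleAverage(data, x, y, width, height) {
  let sum = 0, count = 0;
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      const px = Math.min(width - 1, Math.max(0, x + dx));
      const py = Math.min(height - 1, Math.max(0, y + dy));
      sum += data[(py * width + px) * 4]; // red channel of RGBA
      count++;
    }
  }
  return sum / count;
}

// drop-in replacement for the single-pixel vertex loop
for (let y = 0, j = 2; y < HEIGHT; y++) {
  for (let x = 0; x < WIDTH; x++, j += 3) {
    vertices[j] = sampleAverage(data, x, y, WIDTH, HEIGHT) * HEIGHT_AMPLIFIER;
  }
}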
The second option is to use something like a blur-kernel (gaussian blur is horribly expensive, but maybe something like a fast box-blur would work for you).
As you are very limited in resolution due to just using a single byte, you should convert that image to float32 first:
const highResData = new Float32Array(data.length / 4);
for (let i = 0; i < highResData.length; i++) {
highResData[i] = data[4 * i] / 255;
}
Now the data is in a format that allows for far higher numeric resolution, so we can smooth it. You could either adapt something like StackBlur to the float32 use case, use ndarrays and ndarray-gaussian-filter, or implement something simple yourself. The basic idea is to find an average value for all the values in those uniformly colored plateaus.
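If you go the "implement something simple yourself" route, a naive box blur over that Float32Array could look like this (a sketch; WIDTH and HEIGHT are the image dimensions from the question, and the result stays in the 0..1 range):

// naive box blur over the float height field; radius is in pixels
function boxBlur(src, width, height, radius) {
  const dst = new Float32Array(src.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0, count = 0;
      for (let dy = -radius; dy <= radius; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          const px = x + dx, py = y + dy;
          if (px >= 0 && px < width && py >= 0 && py < height) {
            sum += src[py * width + px];
            count++;
          }
        }
      }
      dst[y * width + x] = sum / count;
    }
  }
  return dst;
}

const smoothed = boxBlur(highResData, WIDTH, HEIGHT, 2);
// scale back up (e.g. smoothed[k] * 255 * HEIGHT_AMPLIFIER) when writing the z values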
Hope that helps, good luck :)

Rotating icosahedron with circles located at every vertex in three.js

I have an icosahedron mesh which I am rotating, and circle geometries whose positions I set to each of its vertices at every frame of the animation loop.
geometry = new THREE.IcosahedronGeometry(isoRadius, 1);
var material = new THREE.MeshBasicMaterial({
color: wireframeColor,
wireframe: true
});
isoMesh = new THREE.Mesh(geometry, material);
scene.add(isoMesh);
Set each circle geometry's location as the icosahedron mesh rotates:
function animate() {
isoMesh.rotation.x += 0.005;
isoMesh.rotation.y += 0.002;
// update vertices
isoMesh.updateMatrix();
isoMesh.geometry.applyMatrix(isoMesh.matrix);
isoMesh.rotation.set(0, 0, 0);
for (var i = 0; i < geometry.vertices.length; i++) {
nodes[i].position.copy(geometry.vertices[i]);
nodes[i].lookAt(camera.position);
}
Where var geometry is the geometry of the icosahedron. If I remove the line "isoMesh.rotation.set(0, 0, 0);", the icosahedron rotates correctly, but the rotation of the nodes compounds and spins way too quickly. If I add that line, the nodes rotate correctly, but the icosahedron does not move at all.
I do not understand three.js well enough yet to understand what is happening. Why would adding and removing this affect the nodes' and icosahedron's rotations separately? I believe it has something to do with the difference between the mesh and the geometry since I am using the geometry to position the nodes, but the rotation of the mesh is what shows visually. Any idea what is happening here?
The solution is multi-layered.
Your Icosahedron:
You were half-way there with rotating your icosahedron and its vertices. Rather than applying the rotation to all the vertices (which would actually cause some pretty extreme rotation), apply the rotation to the mesh only. But that doesn't update the vertices, right? Right. More on that in a moment.
Your Circles:
You have the right idea of placing them at each vertex, but as WestLangley said, you can't use lookAt for objects with rotated/translated parents, so you'll need to add them directly to the scene. Also, if you can't get the new positions of the vertices for the rotated icosahedron, the circles will simply remain in place. So let's get those updated vertices.
Getting Updated Vertex Positions:
Like I said above, rotating the mesh updates its transformation matrix, not the vertices. But we can USE that updated transformation matrix to get the updated world positions for the circles. Object3D.localToWorld allows us to transform a local THREE.Vector3 (like your icosahedron's vertices) into world coordinates. (Also note that I copy each vertex into a temporary vector, because localToWorld overwrites the given THREE.Vector3.)
Takeaways:
I've tried to isolate the parts relevant to your question into the JavaScript portion of the snippet below.
Try not to update geometry unless you have to.
Only use lookAt with objects in the world coordinate system
Use localToWorld and worldToLocal to transform vectors between coordinate systems.
// You already had this part
var geometry = new THREE.IcosahedronGeometry(10, 1);
var material = new THREE.MeshBasicMaterial({
color: "blue",
wireframe: true
});
var isoMesh = new THREE.Mesh(geometry, material);
scene.add(isoMesh);
// Add your circles directly to the scene
var nodes = [];
for(var i = 0, l = geometry.vertices.length; i < l; ++i){
nodes.push(new THREE.Mesh(new THREE.CircleGeometry(1, 32), material));
scene.add(nodes[nodes.length - 1]);
}
// This is called in render. Get the world positions of the vertices and apply them to the circles.
var tempVector = new THREE.Vector3();
function updateVertices(){
if(typeof isoMesh !== "undefined" && typeof nodes !== "undefined" && nodes.length === isoMesh.geometry.vertices.length){
isoMesh.rotation.x += 0.005;
isoMesh.rotation.y += 0.002;
for(var i = 0, l = nodes.length; i < l; ++i){
tempVector.copy(isoMesh.geometry.vertices[i]);
nodes[i].position.copy(isoMesh.localToWorld(tempVector));
nodes[i].lookAt(camera.position);
}
}
}
html *{
padding: 0;
margin: 0;
width: 100%;
overflow: hidden;
}
#host {
width: 100%;
height: 100%;
}
<script src="http://threejs.org/build/three.js"></script>
<script src="http://threejs.org/examples/js/controls/TrackballControls.js"></script>
<script src="http://threejs.org/examples/js/libs/stats.min.js"></script>
<div id="host"></div>
<script>
// INITIALIZE
var WIDTH = window.innerWidth,
HEIGHT = window.innerHeight,
FOV = 35,
NEAR = 1,
FAR = 1000;
var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(WIDTH, HEIGHT);
document.getElementById('host').appendChild(renderer.domElement);
var stats= new Stats();
stats.domElement.style.position = 'absolute';
stats.domElement.style.top = '0';
document.body.appendChild(stats.domElement);
var camera = new THREE.PerspectiveCamera(FOV, WIDTH / HEIGHT, NEAR, FAR);
camera.position.z = 50;
var trackballControl = new THREE.TrackballControls(camera, renderer.domElement);
trackballControl.rotateSpeed = 5.0; // need to speed it up a little
var scene = new THREE.Scene();
var light = new THREE.PointLight(0xffffff, 1, Infinity);
camera.add(light);
scene.add(light);
function render(){
if(typeof updateVertices !== "undefined"){
updateVertices();
}
renderer.render(scene, camera);
stats.update();
}
function animate(){
requestAnimationFrame(animate);
trackballControl.update();
render();
}
animate();
</script>

Does three.js renderer clone the objects positions?

I created a small scene with 3 spheres and a triangle connecting the 3 centers of the spheres, i.e. the triangle vertex positions are the same variables as the sphere positions.
Now I expected that if i change the position of one of the spheres, the triangle vertex should be moved together with it (since it's the same position object) and therefore still connect the three spheres.
However, if I do this coordinate change AFTER the renderer was called, the triangle is NOT changed. (Though it does change if I move the sphere BEFORE the renderer is called.)
This seems to indicate that the renderer doesn't use the original position objects but a clone of them.
Q: Is there a way to avoid this cloning behaviour (or whatever is the reason for the independent positions) so I can still change two objects with one variable change? Or am I doing something wrong?
The code:
var width = window.innerWidth;
var height = window.innerHeight;
var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(width, height);
document.body.appendChild(renderer.domElement);
var scene = new THREE.Scene;
var camera = new THREE.PerspectiveCamera(30, width / height, 0.1, 10000);
camera.position=new THREE.Vector3(50,50,50);
camera.lookAt(new THREE.Vector3(0,0,0));
scene.add(camera);
var pointLight = new THREE.PointLight(0xffffff);
pointLight.position=camera.position;
scene.add(pointLight);
var sphere=[];
var sphereGeometry = new THREE.SphereGeometry(1,8,8);
var sphereMaterial = new THREE.MeshLambertMaterial({ color: 0xff0000 });
var triGeom = new THREE.Geometry();
for (var i=0; i<3; i++) {
sphere[i] = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphere[i].position=new THREE.Vector3(10*i,20+5*(i-1)^2,0);
scene.add(sphere[i]);
triGeom.vertices.push(sphere[i].position);
}
triGeom.faces.push( new THREE.Face3( 0, 1, 2 ) );
triGeom.computeFaceNormals();
var tri= new THREE.Mesh( triGeom, new THREE.MeshLambertMaterial({side:THREE.DoubleSide, color: 0x00ff00}) );
scene.add(tri);
sphere[0].position.x+=10; // this changes both sphere and triangle vertex
renderer.render(scene, camera);
sphere[1].position.x+=10; // this changes only the sphere
renderer.render(scene, camera);
This is probably because of the geometry caching feature. You will have to set triGeom.verticesNeedUpdate = true every time you change a vertex position.
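Applied to the snippet above, that means flagging the shared geometry as dirty before the second render call; a minimal sketch:

sphere[1].position.x += 10;        // moves the sphere and the shared Vector3
triGeom.verticesNeedUpdate = true; // tell three.js to re-upload the triangle's vertices
renderer.render(scene, camera);    // the triangle now follows the sphere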
