New to three.js, how to animate this ball in three.js - javascript

Hi, I'm just learning WebGL and JavaScript.
I've made this three.js WebGL scene, and actually, come to think of it... they're the same object:
http://goo.gl/gOiHX4
The ball is 'joined' to the rest of the 3D object, so I'll make another sphere in Blender by itself.
So say I have a ball.js and the rest of the structure, tribunal.js
how would I move the ball.js around the 3D environment in this case?
Like maybe in a circle around the structure, in a constant loop.
pastebin for code too:
http://paste.ubuntu.com/6549663/
<!doctype html>
<html lang="en">
<head>
<title>My 3D webGL experiment</title>
<meta charset="utf-8">
</head>
<body style="margin: 0;">
<script src="js/three.min.js"></script>
<script src="js/OrbitControls.js"></script>
<script>
// Set up the scene, camera, renderer, controls and mesh as global variables.
var scene, camera, renderer, controls, mesh;
init();
animate();
// Sets up the scene.
function init() {
// Create the scene and set the scene size.
scene = new THREE.Scene();
var WIDTH = window.innerWidth,
HEIGHT = window.innerHeight;
// Create a renderer and add it to the DOM.
renderer = new THREE.WebGLRenderer({antialias:true});
renderer.setSize(WIDTH, HEIGHT);
document.body.appendChild(renderer.domElement);
// Create a camera, zoom it out from the model a bit, and add it to the scene.
camera = new THREE.PerspectiveCamera(45, WIDTH / HEIGHT, 0.1, 20000);
camera.position.set(90,80,0);
scene.add(camera);
// Create an event listener that resizes the renderer with the browser window.
window.addEventListener('resize', function() {
var WIDTH = window.innerWidth,
HEIGHT = window.innerHeight;
renderer.setSize(WIDTH, HEIGHT);
camera.aspect = WIDTH / HEIGHT;
camera.updateProjectionMatrix();
});
// Set the background color of the scene.
renderer.setClearColor(0xB5DBDB, 1);
// Create a light, set its position, and add it to the scene.
var light = new THREE.PointLight(0xf44fff);
light.position.set(200,200,200);
scene.add(light);
// Load in the mesh and add it to the scene.
var loader = new THREE.JSONLoader();
loader.load( "models/tribunal.js", function(geometry){
var material = new THREE.MeshLambertMaterial({color: 0xCC0000});
mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
});
// Add OrbitControls so that we can pan around with the mouse.
controls = new THREE.OrbitControls(camera, renderer.domElement);
}
// Renders the scene and updates the render as needed.
function animate() {
// Read more about requestAnimationFrame at http://www.paulirish.com/2011/requestanimationframe-for-smart-animating/
requestAnimationFrame(animate);
// Render the scene.
renderer.render(scene, camera);
controls.update();
}
</script>
</body>
</html>

In THREE.js, any movement of an object can be accomplished by changing its properties: position, rotation and/or scale. These properties are not simple numbers, so changing them to suit your needs often requires using built-in functions to be sure the change is handled correctly. For example, the position of a mesh is defined as a Vector which can be changed using the set() function like so:
mesh.position.set( 0, 0, 0 ); // Standard [ x, y, z ] coordinate system
There are many other ways to change the values in a Vector described in the documentation and code. Think of the properties of an object as objects themselves, and familiarize yourself with the methods available to you for those objects and their parent objects.
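For instance, a few of the Vector3 methods you will use most often (otherMesh here is just a hypothetical second object):
mesh.position.x += 10; // change a single component directly
mesh.position.copy( otherMesh.position ); // copy another object's position
mesh.position.add( new THREE.Vector3( 0, 5, 0 ) ); // offset by another vector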
To answer your question: continuously moving an object over a period of time requires more code inside your animate() function. The following code will move a mesh 1 unit in a positive direction along the x-axis every time the animate() function is called:
mesh.position.x += 1;
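For the circular motion you describe, here is a minimal sketch of an animate() loop, assuming the ball has been loaded into a global variable called ball (for example by a second JSONLoader call for a models/ball.js file) and that the structure sits at the origin:
var angle = 0;   // current angle around the structure, in radians
var radius = 40; // distance from the structure's centre - tune this to your model

function animate() {
    requestAnimationFrame( animate );

    if ( ball ) {        // the model loads asynchronously, so it may not exist yet
        angle += 0.01;   // orbit speed
        ball.position.x = radius * Math.cos( angle );
        ball.position.z = radius * Math.sin( angle );
        // ball.position.y stays wherever you placed it
    }

    renderer.render( scene, camera );
    controls.update();
}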
Tips:
There are many types of movement, but they are mostly combinations of position, rotation, and/or scale.
It is important to remember that child objects are affected by parent objects. If Mesh B is attached to Mesh A, movement applied to Mesh A will move Mesh B as well.
Variable references to objects inside your animate() loop need to be global so the animate() loop knows what object you are talking about.
Changes in position, scale and even rotation can quickly move an object out of the camera's frustum (its field of view).
Use OrbitControls.js and console.log() to help debug animations.

Related

three.js DirectionalLight and shadow cut off

As per the screenshot, shadows cast onto the THREE.PlaneGeometry(250, 380, 1, 1) below are cut off.
Steps I've taken to enable shadows
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap;
..
camera = new THREE.PerspectiveCamera(35, window.innerWidth / window.innerHeight, 1, 1000);
..
mainLight = new THREE.DirectionalLight(0xffffff, 0.5);
mainLight.position.set(50, 50, 50);
mainLight.castShadow = true;
mainLight.shadow.mapSize.width = width * window.devicePixelRatio;
mainLight.shadow.mapSize.height = width * window.devicePixelRatio;
mainLight.shadow.camera.near = 1;
mainLight.shadow.camera.far = 1000;
mainLight.shadow.camera.fov = 100;
scene.add(mainLight);
..
plane.receiveShadow = true;
..
model.castShadow = true;
model.receiveShadow = true;
I've played with different values like the shadow camera FOV and far plane values...
Is this a caveat of using DirectionalLight? I need even lighting across all of my models, which a SpotLight doesn't give me.
I found three.js shadow cutoff but it simply suggested using a SpotLight instead and gave no explanation as to why that changes anything.
When I do use a SpotLight, I lose the shadows on the ground plane altogether.
--
Thanks
See the three.js documentation for DirectionalLightShadow:
This is used internally by DirectionalLights for calculating shadows.
Unlike the other shadow classes, this uses an OrthographicCamera to calculate the shadows, rather than a PerspectiveCamera. This is because light rays from a DirectionalLight are parallel.
See further the documentation for DirectionalLight:
A common point of confusion for directional lights is that setting the rotation has no effect. This is because three.js's DirectionalLight is the equivalent to what is often called a 'Target Direct Light' in other applications.
This means that its direction is calculated as pointing from the light's position to the target's position (as opposed to a 'Free Direct Light' that just has a rotation component).
The reason for this is to allow the light to cast shadows - the shadow camera needs a position to calculate shadows from.
This means that the area affected by the shadow is defined by the position and the camera of the light source (DirectionalLight).
Set up the camera for the mainLight and define its orthographic projection for your needs:
mainLight.shadow.camera = new THREE.OrthographicCamera( -100, 100, 100, -100, 0.5, 1000 );
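Equivalently, you can keep the shadow camera the light already has and just widen its orthographic frustum so the whole ground plane falls inside it (the bounds below are placeholders - tune them to your scene). A CameraHelper makes it easy to see what the shadow camera actually covers:
// Widen the directional light's orthographic shadow frustum.
mainLight.shadow.camera.left = -100;
mainLight.shadow.camera.right = 100;
mainLight.shadow.camera.top = 100;
mainLight.shadow.camera.bottom = -100;
mainLight.shadow.camera.near = 0.5;
mainLight.shadow.camera.far = 1000;
mainLight.shadow.camera.updateProjectionMatrix();

// The light shines from its position toward its target (the origin by default).
mainLight.target.position.set( 0, 0, 0 );
scene.add( mainLight.target );

// Visualize the shadow camera's frustum while debugging.
scene.add( new THREE.CameraHelper( mainLight.shadow.camera ) );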

three.js - Invert camera rotation matrix

I have a scene with objects and a camera controlled by a trackball. When I add a new object to the root object, I want it in the orientation it would have had before the camera moved around. For example, if you don't rotate the camera, a torus will show up with the hole facing the screen, the ring in the x,y screen plane.
I tried to apply the inverse matrix of the camera, but that doesn't work.
var m = THREE.Matrix4()
m.getInverse(camera.matrixWorld)
obj.setRotationFromMatrix(m)
What am I missing?
The solution was simply to apply the camera rotation:
obj.setRotationFromMatrix(camera.matrixWorld)
The object is then facing the camera.
You need to construct the matrix with new, and change how the rotation is applied. Try this:
var m = new THREE.Matrix4();
m.getInverse( camera.matrixWorld );
obj.rotation.setFromRotationMatrix(m);
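Note that in newer three.js releases Matrix4.getInverse() has been removed in favour of .invert(), so the equivalent there is:
var m = new THREE.Matrix4();
m.copy( camera.matrixWorld ).invert();
obj.rotation.setFromRotationMatrix( m );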

three.js rotate facing object

I've been doing vector math for 13 years now, but I'm still struggling with the way three.js handles things. So what I'd like to do is best described as a fan that always faces the camera and, of course, rotates.
This is how I achieved this in another language:
// Create a vector facing in camera direction.
temp.x = 0;
temp.y = 1;
temp.z = 0;
vec_rotate(temp, camera.pan);
// Apply vector direction to scene object.
vec_to_angle(entity.pan, temp);
// Rotate scene object's angle around another angle / imaginary line from camera to object.
ang_rotate(
entity.pan,
vector(random(360), 0, 0)
);
Thus, after applying entity.lookAt(camera.position) I am still missing a rotation based on the current angle (the last function call of the example).
One way to model a fan is like so:
var fan = new THREE.Mesh( ... );
var pivot = new THREE.Object3D();
scene.add( pivot );
pivot.lookAt( camera.position );
pivot.add( fan );
Then in the animation loop (assuming the fan mesh by default faces the positive-z axis),
fan.rotation.z += 0.01;
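Putting it together, a minimal sketch (the geometry and material are just placeholders) - re-aim the pivot every frame in case the camera moves, and spin the fan about its own z-axis:
// A flat "blade" that lies in the xy-plane and therefore faces the positive z-axis.
var fan = new THREE.Mesh(
    new THREE.CircleGeometry( 5, 4 ),
    new THREE.MeshBasicMaterial( { color: 0x4488ff, side: THREE.DoubleSide } )
);

var pivot = new THREE.Object3D();
pivot.add( fan );
scene.add( pivot );

function animate() {
    requestAnimationFrame( animate );
    pivot.lookAt( camera.position ); // keep the fan facing the camera
    fan.rotation.z += 0.01;          // spin it about the view axis
    renderer.render( scene, camera );
}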
three.js r.68

Three.js reuse geometry for faceColors and vertexColors

I want the same object to be rendered twice, once on-screen and once off-screen. The on-screen mesh has a geometry and a MeshLambertMaterial. This material has vertexColors: THREE.VertexColors. The off-screen mesh has the same geometry and a MeshBasicMaterial with vertexColors: THREE.FaceColors. During initial setup, each face color is set to a unique color, and each vertex color is set to a single color (later these vertex colors can change by "painting" on the object).
Then I want to render both objects. In this fiddle you see how that looks with two scenes rendered side by side. The object with the MeshLambertMaterial is now half red to make things clearer. As you can see, both scenes seem to use the same material. Also, when I switch the order I get the following error:
[.WebGLRenderingContext]GL ERROR :GL_INVALID_OPERATION : glDrawElements: attempt to access out of range vertices in attribute 1.
To make things even weirder, when I run the fiddle I see the object rendered with the MeshBasicMaterial twice. However, when I run the exact same code locally, I see the object rendered with the MeshLambertMaterial twice.
Eventually I want the object with the MeshBasicMaterial to render to a render target, but with that I run into the same problems. I want to show the object with the MeshLambertMaterial on screen and, when I hover over the object with the mouse, read the color at that position from the renderTarget where the MeshBasicMaterial is rendered.
I hope it is clear what the problem is, if not, please let me know.
fiddle
EDIT: Issue resolved.
When using WebGLRenderer, two meshes having different materials can now share a geometry.
three.js r.88
First of all, always make sure you try the latest three.js everywhere. In particular, do not use the jsfiddle's r54; instead use an up-to-date external resource like http://threejs.org/build/three.min.js - this solves one of your problems.
The main thing is that you need to update the vertex colors between the render calls (and if they are changed, you have to do that anyway). Here's the render function that does the trick:
function render () {
var SCREEN_WIDTH = WIDTH, SCREEN_HEIGHT = HEIGHT;
camera.aspect = 0.5 * SCREEN_WIDTH / SCREEN_HEIGHT;
camera.updateProjectionMatrix();
renderer.clear();
geometry.colorsNeedUpdate = true;
// left side
renderer.setViewport( 0, 0, 0.5 * SCREEN_WIDTH, SCREEN_HEIGHT );
renderer.render( scene, camera );
geometry.colorsNeedUpdate = true;
// right side
renderer.setViewport( 0.5 * SCREEN_WIDTH, 0, 0.5 * SCREEN_WIDTH, SCREEN_HEIGHT );
renderer.render( pickRenderScene, camera );
}
http://jsfiddle.net/E3F4f/4/
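For reference, the setup described in the question looks roughly like this with the old THREE.Geometry API of that era (the geometry, colors and variable names are just illustrative):
var geometry = new THREE.BoxGeometry( 10, 10, 10 );

geometry.faces.forEach( function ( face, i ) {
    // A unique color per face, read by the picking material (THREE.FaceColors).
    face.color.setHex( i + 1 );
    // A single uniform "paint" color per vertex, read on screen (THREE.VertexColors).
    face.vertexColors = [ new THREE.Color( 0xff0000 ), new THREE.Color( 0xff0000 ), new THREE.Color( 0xff0000 ) ];
} );

// Both meshes share the same geometry but read different color channels.
var screenMesh = new THREE.Mesh( geometry, new THREE.MeshLambertMaterial( { vertexColors: THREE.VertexColors } ) );
var pickMesh = new THREE.Mesh( geometry, new THREE.MeshBasicMaterial( { vertexColors: THREE.FaceColors } ) );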

Three.js Child of camera isn't visible

I'm trying to attach an object to the camera so that it can be used more or less as a GUI element.
My camera is defined as follows:
camera = new THREE.PerspectiveCamera( 45, windowWidth / windowHeight, 1, 2000 );
camera.position.z = 100;
In my init(), I define the object to be added:
obj = new THREE.Mesh(new THREE.CubeGeometry(5, 5, 5, 1, 1, 1),
new THREE.MeshBasicMaterial({ color: 0xFFFFFF } ));
obj.position.set( 0, 0, -50);
camera.add(obj);
But the block does not show up. I have tried adding this object directly to the scene, and then it is visible. I added a loop to animate() that slides the object's z position between (-50, 50), but I still can't see it.
I tried using camera.lookAt(obj) and logging the world position of obj (obj position + camera position), and they behave as expected. World position seems to be what I'd expect, and camera.lookAt flips the camera when the z position crosses 0.
I apologize for not providing more clear example code, but I will do my best to cooperate with anyone trying to help me. Thanks!
Did you add the camera to the scene?
scene.add( camera );
The camera does not usually have to be added to the scene, but in this case, the object is a child of the camera, so you must.
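A minimal working version of this setup, reusing the variables from the question (the positions and sizes are placeholders):
camera = new THREE.PerspectiveCamera( 45, windowWidth / windowHeight, 1, 2000 );
camera.position.z = 100;
scene.add( camera ); // required here because the cube is a child of the camera

obj = new THREE.Mesh(
    new THREE.CubeGeometry( 5, 5, 5, 1, 1, 1 ),
    new THREE.MeshBasicMaterial( { color: 0xFFFFFF } )
);
obj.position.set( 0, 0, -50 ); // 50 units in front of the camera, inside its near/far range
camera.add( obj );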
three.js r.58
