I am building a Three.js app (React template, if that matters). I have a 3D model that should act as the planet Earth in the app, and I've got this space station model. I want to rotate the station around the globe by giving it specific coordinates every other second. My questions are:
How can I place the space station above London, for example, if I have these coordinates:
long: 45.926013877299 and lat: 46.524648101056 (random values)
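(A minimal sketch of the usual latitude/longitude-to-Cartesian conversion, assuming the globe is a sphere centered at the origin; the helper name and the altitude parameter are illustrative, not from the post:)

// Convert latitude/longitude in degrees to a point on (or above) a
// sphere of the given radius centered at the origin.
function latLongToVector3(lat, lon, radius, altitude = 0) {
  const phi = THREE.MathUtils.degToRad(90 - lat);    // polar angle
  const theta = THREE.MathUtils.degToRad(lon + 180); // azimuthal angle
  const r = radius + altitude;
  return new THREE.Vector3(
    -r * Math.sin(phi) * Math.cos(theta),
    r * Math.cos(phi),
    r * Math.sin(phi) * Math.sin(theta)
  );
}

// e.g. hover 0.5 units above a globe of radius 5:
// station.position.copy(latLongToVector3(lat, long, 5, 0.5));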
How can I use my animate function now, given that this is how I create the mesh:
const loader = new GLTFLoader();
loader.load("path/to/model", gltf => {
  scene.add(gltf.scene);
});
instead of:
const earthMesh = new THREE.Mesh(earthGeometry, earthMaterial);
scene.add(earthMesh);
My animate() function:
const animate = () => {
  requestAnimationFrame(animate);
  earthMesh.rotation.y -= 0.0015;
  renderer.render(scene, camera);
};
Since loader.load() returns nothing, I can't store the model in a variable and use it the way I use earthMesh (e.g. earthMesh.position.y -= 0.0015).
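(One common workaround, as a sketch; stationMesh is an illustrative name: declare a variable in the enclosing scope, assign it inside the load callback, and guard for it in animate until loading finishes:)

let stationMesh; // assigned once the model finishes loading

const loader = new GLTFLoader();
loader.load("path/to/model", gltf => {
  stationMesh = gltf.scene;
  scene.add(stationMesh);
});

const animate = () => {
  requestAnimationFrame(animate);
  earthMesh.rotation.y -= 0.0015;
  if (stationMesh) { // the model may not have loaded yet
    stationMesh.rotation.y -= 0.0015;
  }
  renderer.render(scene, camera);
};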
I'm using an augmented reality library that does some fancy image tracking stuff. After learning a whole lot about this project, I'm now beyond my current ability and could use some help. For our purposes, the library creates an (empty) anchor point at the center of an IRL image target in-camera, then moves the virtual world around the IRL camera.
My goal is to drive plane.rotation to always face the camera, while keeping plane.position locked to the anchor point. Additionally, plane.rotation values will be referenced later in development.
const THREE = window.MINDAR.IMAGE.THREE;

document.addEventListener('DOMContentLoaded', () => {
  const start = async () => {
    // initialize MindAR
    const mindarThree = new window.MINDAR.IMAGE.MindARThree({
      container: document.body,
      imageTargetSrc: '../../assets/targets/testQR.mind',
    });
    const {renderer, scene, camera} = mindarThree;

    // create AR object
    const geometry = new THREE.PlaneGeometry(1, 1.25);
    const material = new THREE.MeshBasicMaterial({color: 0x00ffff, transparent: true, opacity: 0.5});
    const plane = new THREE.Mesh(geometry, material);

    // create anchor
    const anchor = mindarThree.addAnchor(0);
    anchor.group.add(plane);

    // start AR
    await mindarThree.start();
    renderer.setAnimationLoop(() => {
      renderer.render(scene, camera);
    });
  };
  start();
});
Everything I've tried so far has already been folded into the (functioning) draft code above. I have, however, done some research and found a couple of avenues that might or might not work; I'm tossing them out to see what sticks or inspires another solution. Skill-wise I'm still a beginner, so any help figuring this out is much appreciated.
identify the plane object by its group index number;
drive (override the library?) the object's rotation (x, y, z) to face the camera;
Possible solutions from the dev:
"You can get those values through the anchor object, e.g. anchor.group.position. Meaning that you can use the current three.js API and get those values but without using it for rendering i.e. don't append the renderer.domElement to document."
"You can hack into the source code of mindar (it's open source)."
"Another way might be easier for you to try is to just create another camera yourself. I believe you can have multiple cameras, and just render another layer on top using your new camera."
I think it may be as simple as calling lookAt in the animation loop function:
// start AR
await mindarThree.start();
renderer.setAnimationLoop(() => {
  plane.lookAt(new THREE.Vector3());
  renderer.render(scene, camera);
});
This assumes the camera is always located at (0,0,0) (i.e., new THREE.Vector3()), which seems to be true from my limited testing. I found it helpful to debug by copy-pasting the MindAR three.js example into a CodePen and printing some relevant values to the console.
Also note that, internally, MindAR's three.js module seems to directly modify the world matrix of the anchor.group object without modifying the position/rotation/scale parameters.
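(If you need those rotation values later in development, one option — a sketch, not from the original answer — is to decompose the anchor's world matrix back into position/rotation/scale inside the animation loop, after matrices have been updated:)

// Recover the anchor's effective transform from its world matrix,
// since .position/.rotation/.scale are never written by MindAR.
const pos = new THREE.Vector3();
const quat = new THREE.Quaternion();
const scale = new THREE.Vector3();
anchor.group.matrixWorld.decompose(pos, quat, scale);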
How does one traverse a mesh loaded with GLTFLoader properly to walk through all layers?
I am trying to do a simple selective bloom pass on a model by traversing all of the model's parts, setting them to the bloom layer, and then rendering the combined original + bloomed layers. However, as the images below show, only the yellow outer part of the model is actually found during the traversal. Does anyone know how to extract the rest of the model for layer setting?
For reproduction, the model can be downloaded from here:
https://github.com/whatsmycode/Models/blob/master/PrimaryIonDrive.glb
This is the code I currently use:
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

let BLOOM_LAYER = 1;

new GLTFLoader().load('models/PrimaryIonDrive.glb', function (gltf) {
  const model = gltf.scene;
  model.traverse(function (child) {
    child.layers.enable(BLOOM_LAYER);
  });
  scene.add(model);
});
This is the resulting image; bloom is applied to the yellow outer rings only.
This is the bloom mask only.
The issue was that I had not added the point and ambient lights to both layers. The bloomed object has materials that require light to show color for all parts except the emitting yellow rings. To fix the problem, I simply enabled the lights for both layers before adding them to the scene.
const ENTIRE_LAYER = 0, BLOOM_LAYER = 1; // default layer and bloom layer

const pointLight = new THREE.PointLight(0xffffff);
pointLight.layers.enable(ENTIRE_LAYER);
pointLight.layers.enable(BLOOM_LAYER);

const ambientLight = new THREE.AmbientLight(0xffffff);
ambientLight.layers.enable(ENTIRE_LAYER);
ambientLight.layers.enable(BLOOM_LAYER);

scene.add(pointLight, ambientLight);
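(For the "combined original + bloomed layers" render mentioned in the question, one common layers-based pattern looks roughly like the sketch below; bloomComposer and finalComposer are assumed EffectComposer instances set up elsewhere, as in the official three.js selective-bloom example, and are not part of this answer:)

function render() {
  camera.layers.set(BLOOM_LAYER);  // draw only bloom-layer objects
  bloomComposer.render();
  camera.layers.set(ENTIRE_LAYER); // restore the full scene
  finalComposer.render();
}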
I've been trying to make the face texture on a 2D canvas/plane move only along the X/Y axes, following the movements of the face without rotating, with the 2D background camera texture reflected accurately on top. Right now, when I connect the canvas to the face tracker, I get a distorted scale, and the 2D plane rotates in 3D space. See below for the current canvas/camera-texture/face-tracker setup. Manual scaling results in poor tracking.
Here is my code:
const FaceTracking = require('FaceTracking');
const Scene = require('Scene');
export const Diagnostics = require('Diagnostics');

// Enable async/await in JS [part 1]
(async function () {
  // Locate the plane in the scene
  const [plane] = await Promise.all([
    Scene.root.findFirst('blur_plane')
  ]);

  // Store a reference to a detected face
  const face = FaceTracking.face(0);

  // To access scene objects
  const planeTransform = plane.transform;
  const faceTransform = face.cameraTransform;
  // const blurCanvas = Scene.root.find('canvas0');

  // To access class properties
  planeTransform.rotationX = faceTransform.rotationX;
  planeTransform.rotationY = faceTransform.rotationY;
  planeTransform.rotationZ = faceTransform.rotationZ;
})();
This is the current look:
I want the 375x667px canvas to look exactly like the camera layer beneath it, so that without adjustments to the camera texture the layer would not be visible.
Turns out Facebook has an example that deals with 2D movement but not scale:
https://sparkar.facebook.com/ar-studio/learn/reference/classes/facetrackingmodule
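(Building on that example, a minimal sketch of binding only the position signals, so rotation is never copied and the plane stays flat; treat the exact property names as assumptions about the Spark AR reactive API:)

const FaceTracking = require('FaceTracking');
const Scene = require('Scene');

(async function () {
  const plane = await Scene.root.findFirst('blur_plane');
  const faceTransform = FaceTracking.face(0).cameraTransform;

  // Bind only the x/y position signals; no rotation bindings,
  // so the plane follows the face without rotating in 3D.
  plane.transform.x = faceTransform.x;
  plane.transform.y = faceTransform.y;
})();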
I am using the latest (as of this post) version of three.js. I am trying to import a rigged, animated Blender model. All the tutorials I have come across online mention either THREE.AnimationHandler or THREE.Animation, but I get errors saying no such constructor exists.
When looking through the documentation online I can see them:
Animation
AnimationHandler
Neither says it is deprecated. Yet when I look through the src files, I don't see them there either.
Am I missing something here?
I faced the same problem a couple of days ago. It turns out a new animation system has been implemented in recent releases; this article helped me: New skinned mesh animation system in three.js. It seems the docs just haven't been updated yet.
So in my case I needed to import the model as JSON and launch the animation; the code looked like this:
var loader = new THREE.ObjectLoader(),
    clock = new THREE.Clock(),
    mixer;

loader.load('models.json', function (object) {
  // Get the object's animation clip
  var sceneAnimationClip = object.animations[0];
  // Create an animation mixer and pass the object to it
  mixer = new THREE.AnimationMixer(object);
  // Create an animation action and start it
  var sceneAnimation = mixer.clipAction(sceneAnimationClip);
  sceneAnimation.play();
  scene.add(object);
  render();
});
function render() {
  requestAnimationFrame(render);
  // Update the animation
  var delta = clock.getDelta();
  if (mixer) {
    mixer.update(delta);
  }
  renderer.render(scene, camera);
}
I've exported an animated model from Blender that instantiates without any issues. I'm able to create the THREE.Animation and the model, but I found there was no animation playing. I realized I needed to set skinning to true on each material, but when I do that, the entire mesh goes missing.
Below is my (quick and messy) code trying to get everything to work.
function loadModel() {
  var loader = new THREE.JSONLoader();
  loader.load('assets/models/Robot.js', function (geom, mat) {
    _mesh = new THREE.Object3D();
    _scene.add(_mesh);
    geom.computeBoundingBox();
    ensureLoop(geom.animation);
    THREE.AnimationHandler.add(geom.animation);
    for (var i = 0; i < mat.length; i++) {
      var m = mat[i];
      //m.skinning = true; <-- Uncommenting this makes the model disappear
      //m.morphTargets = true; <-- This causes all sorts of WebGL warnings
      m.wrapAround = true;
    }
    var mesh = new THREE.SkinnedMesh(geom, new THREE.MeshFaceMaterial(mat));
    mesh.scale.set(400, 400, 400);
    mesh.position.set(0, -200, 0);
    mesh.rotation.set(Utils.toRadians(-90), 0, 0);
    _mesh.add(mesh);
    _robot = mesh;
    Render.startRender(loop);
    var animation = new THREE.Animation(mesh, geom.animation.name);
    animation.JITCompile = false;
    animation.interpolationType = THREE.AnimationHandler.LINEAR;
    animation.play();
  });
}
I believe I'm updating the AnimationHandler correctly in my loop:
function loop() {
  _mesh.rotation.y += 0.01;
  var delta = 0.75 * _clock.getDelta();
  THREE.AnimationHandler.update(delta);
}
In the metadata section of the exported JSON file, are the numbers of morphTargets and bones both greater than 0?
I think you followed this example:
http://threejs.org/examples/#webgl_animation_skinning_morph
in which the animated model uses both morph-target and skeletal animation (see Wikipedia for the theoretical concepts).
If the animated model uses only skeletal animation, as in this example: http://alteredqualia.com/three/examples/webgl_animation_skinning_tf2.html
you have to instantiate a THREE.SkinnedMesh object and then set only the m.skinning property to true.
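(In code, for the legacy API used above, the skeletal-only case might look like this sketch, reusing the geom/mat callback variables from the question:)

// Skeletal-animation-only setup: enable skinning, leave morph targets off.
for (var i = 0; i < mat.length; i++) {
  mat[i].skinning = true; // required for bone animation to deform the mesh
  // mat[i].morphTargets stays false: no morph targets in this model
}
var mesh = new THREE.SkinnedMesh(geom, new THREE.MeshFaceMaterial(mat));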
I was having the same problem just now. What worked for me was to remake the model with its scale applied and with keyframes for LocRotScale, not just location.
Lately I encountered a similar issue of a mesh disappearing when exporting a Blender skinning animation to JSON. It turned out the mesh I was using had duplicate vertices (one vertex hiding another). Everything looks fine while creating the vertex groups and animations in Blender, but when I imported the mesh via three.js, it kept disappearing as soon as the animation started. In other words, if one vertex of your mesh is omitted from the vertex groups, you will see this disappearing behavior. To prevent the issue, I now use Blender's "Remove Doubles" function to validate mesh integrity before exporting to JSON. You might have hit the same issue, and redoing your mesh may fix it. The question is pretty old, but the topic is still valid today, so I hope this fresh info helps someone out there.