I have made a simple plugin for the game Rust that dumps the color information for the in-game map and the NPC coordinates to a data file on an interval.
The map ranges from -2000 to 2000 on the X and Z axes, so the NPC X and Z coordinates also range from -2000 to 2000.
In three.js I have a PlaneBufferGeometry representing the map that is set up like this:
const mapGeometry = new THREE.PlaneBufferGeometry( 2, 2, 2000, 2000 ); // width,height,width segments,height segments
mapGeometry.rotateX( - Math.PI / 2 ); // rotate the geometry to match the scene
const customUniforms = {
bumpTexture: { value: heightTexture },
bumpScale: { type: "f", value: 0.02 },
colorTexture: { value: colorTexture }
};
const mapMaterial = new THREE.ShaderMaterial({
uniforms: customUniforms,
vertexShader: document.getElementById( 'vertexShader' ).textContent,
fragmentShader: document.getElementById( 'fragmentShader' ).textContent,
wireframe:true
});
const mapMesh = new THREE.Mesh( mapGeometry, mapMaterial );
scene.add( mapMesh );
The webpage is served by an Express server with Socket.IO integration.
The server emits updated coordinates to the connected clients on an interval.
socket.on('PositionData', function(data) {
storeNPCPositions(data);
});
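For completeness, the emitting side looks roughly like this; just a minimal sketch, assuming Express with Socket.IO and a hypothetical readNpcPositions() helper that parses the data file written by the plugin:
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');
const app = express();
const server = http.createServer(app);
const io = new Server(server);
app.use(express.static('public')); // serves the three.js viewer
setInterval(() => {
  // readNpcPositions() is hypothetical: it should return { npcName: { x, y, z }, ... }
  io.emit('PositionData', { npcPositions: readNpcPositions() });
}, 5000);
server.listen(3000);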
I'm iterating over the NPC data and trying to remap the coordinates to correspond with the setup in three.js, like this:
function storeNPCPositions(data) {
let npcs = [];
for (const npc in data.npcPositions) {
npcs.push({
name: npc,
position: {
x: remapPosition(data.npcPositions[npc].x, -2000, 2000, -1, 1), // i am uncertain about the -1 to 1 range, maybe 0 to 2?
y: remapPosition(data.npcPositions[npc].y, heightData.min, heightData.max, 0, .02),
z: remapPosition(data.npcPositions[npc].z, -2000, 2000, -1, 1), // i am uncertain about the -1 to 1 range, maybe 0 to 2?
}
});
}
window.murkymap.positionData.npcs = npcs;
}
function remapPosition(value, from1, to1, from2, to2)
{
return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}
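For reference, remapPosition is a plain linear interpolation. Since PlaneBufferGeometry( 2, 2, ... ) is centered on the origin, its vertices span -1 to 1 on each axis, so the -1 to 1 target range should match the geometry. A few sample values for illustration:
remapPosition(-2000, -2000, 2000, -1, 1); // -1, one edge of the map
remapPosition(0, -2000, 2000, -1, 1);     //  0, centre of the map
remapPosition(2000, -2000, 2000, -1, 1);  //  1, opposite edge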
As you can see in the storeNPCPositions function above, I have commented on my uncertainty regarding the remapping, but either way the placement in the end result is wrong.
The image below is what I get right now; the NPCs are not in the correct positions.
I hope someone can spot the error in my code, I've been at it for many hours now.
The problem was that the NPC positions were flipped on the X axis. I made a THREE.Object3D(), added all the NPCs to it and then flipped it like this:
let npcContainer = new THREE.Object3D();
npcContainer.position.set(0,0,0);
npcContainer.rotateX(Math.PI);
let npcs = [];
const npcLineMaterial = new THREE.LineBasicMaterial({color: 0xff0000});
for (let i = 0; i < window.murkymap.positionData.npcs.length; i++) {
const npc = window.murkymap.positionData.npcs[i];
const npcPoints = [];
npcPoints.push(new THREE.Vector3(npc.position.x, 1000, npc.position.z));
npcPoints.push(new THREE.Vector3(npc.position.x,200,npc.position.z));
npcPoints.push(new THREE.Vector3(npc.position.x,-50,npc.position.z));
const npcLineGeometry = new THREE.BufferGeometry().setFromPoints( npcPoints );
const npcLine = new THREE.Line(npcLineGeometry, npcLineMaterial);
npcLine.position.y = -750;
npcLine.name = "npc";
npcLine.userData.prefab = npc.name;
npcs.push(npcLine);
}
npcContainer.remove(...npcContainer.children);
npcContainer.add(...npcs);
scene.add(npcContainer);
I have a particle emitter which emits multiple duplicates of the same image, as usual. However, I'd like some of the particles to be flipped, either completely at random, or split down the middle, so that particles falling to the left would be flipped and particles falling to the right won't be.
However, I couldn't find anything about flipping particles without flipping ALL of them. I'd only like some to be flipped. Is this possible in any way?
There are several ways; I think the "fastest" would be just to use the scaleX property of the emitter.
This code flips about 10% of the particles (when Math.random() > 0.9) by multiplying the scale by -1 when it should be flipped.
Example Code:
this.add.particles('sparkle').createEmitter({
x: 200,
y: 100,
scaleX: {
onEmit: function () {
return ( Math.random() > 0.9 ) ? -1 : 1;
}
},
speed: { min: -100, max: 100 },
quantity: 0.1,
frequency: 1,
});
But I assume from an earlier question that you have an emitter with a "random scale" property. In that case you would have to do something like this:
Example Code, for random scaled particles:
gameState.splash = this.add.particles('droplet').createEmitter({
x: gameState.height/2,
y: gameState.width/2,
scale: {
onEmit: function () {
// create random new scale
let newRandomScale = Phaser.Math.FloatBetween(0.05, 0.3);
return ( Math.random() > 0.9 ) ? -1 * newRandomScale : newRandomScale;
}
},
speed: { min: -100, max: 100 },
...
});
UPDATE (Slower Fix): Example code for random-scaled particles:
What the update does: it reuses the scale generated in the scaleX onEmit callback inside the scaleY onEmit callback, keeping the Y scale positive so the particle is only mirrored horizontally. (It is hacky, but should work; I will see if there is a cleaner solution.)
gameState.splash = this.add.particles('droplet').createEmitter({
x: gameState.height/2,
y: gameState.width/2,
scaleY:{
onEmit: function(particle){
// keep scale value positive
return Math.abs(particle.scaleX);
}
},
scaleX:{
onEmit: function(p){
let scale = Phaser.Math.FloatBetween(.2, .5);
return Math.random() > .9 ? scale * -1 : scale;
}
},
speed: { min: -100, max: 100 },
...
});
Thanks for taking the time to review my post. I hope that this post will not only yield results for myself but perhaps help others too!
Introduction
Currently I am working on a project involving pointclouds generated with photogrammetry. It consists of photos combined with laser scans. The software used to make the pointcloud is Reality Capture. Besides the pointcloud export, one can export "Internal/External camera parameters", providing the ability to retrieve the photos that were used to make up a certain 3D point in the pointcloud. Reality Capture isn't that well documented online, and I have also posted in their forum regarding the camera variables; perhaps it can be of use in solving the issue at hand?
Only a few of the variables listed in the camera parameters file are relevant (for now) for referencing the camera positioning: the filename, x, y and alt for the location, and heading, pitch and roll for the rotation.
Currently the generated pointcloud is loaded into the browser-compatible THREE.JS viewer, after which the camera parameters .csv file is loaded and for each known photo a 'PerspectiveCamera' is spawned together with a green cube. An example is shown below:
The challenge
As a matter of fact, you might already know what the issue is based on the previous image (or the title of this post, of course ;P). Just in case you haven't spotted it: the direction of the cameras is all wrong. Let me visualize it for you with shabby self-drawn vectors that rudimentarily show in which direction each camera should be facing (marked in red) and how it is currently oriented (green).
Row 37, DJI_0176.jpg, is the rightmost camera with a red reference line; row 38 is DJI_0177, etc. The last picture (row 48, DJI_0189.jpg) corresponds with the leftmost image of the clustered images (as I didn't draw the other two camera references within the image above, I did not include the others).
When you copy the data below into an Excel sheet it should display correctly ^^
#name x y alt heading pitch roll f px py k1 k2 k3 k4 t1 t2
DJI_0174.JPG 3.116820957 -44.25690188 14.05258109 -26.86297007 66.43104338 1.912026354 30.35179628 7.25E-03 1.45E-03 -4.02E-03 -2.04E-02 3.94E-02 0 0 0
DJI_0175.JPG -5.22E-02 -46.97266554 14.18056658 -16.2033133 66.11532302 3.552072396 30.28063771 4.93E-03 4.21E-04 1.38E-02 -0.108013599 0.183136287 0 0 0
DJI_0176.JPG -3.056586953 -49.00754998 14.3474763 4.270483155 65.35247679 5.816970677 30.50596933 -5.05E-03 -3.53E-03 -4.94E-03 3.24E-02 -3.84E-02 0 0 0
DJI_0177.JPG -6.909437337 -50.15910066 14.38391206 19.4459053 64.26828897 6.685020944 30.6994734 -1.40E-02 4.72E-03 -5.33E-04 1.90E-02 -1.74E-02 0 0 0
DJI_0178.JPG -11.23696688 -50.36025313 14.56924433 19.19192622 64.40188316 6.265995184 30.7665397 -1.26E-02 2.41E-03 1.24E-04 -4.63E-03 2.84E-02 0 0 0
DJI_0179.JPG -16.04060554 -49.92320365 14.69721478 19.39979452 64.85507307 6.224929846 30.93772566 -1.19E-02 -4.31E-03 -1.27E-02 4.62E-02 -4.48E-02 0 0 0
DJI_0180.JPG -20.95614556 -49.22915437 14.92273203 20.39327092 65.02028543 6.164031482 30.99807237 -1.02E-02 -7.70E-03 1.44E-03 -2.22E-02 3.94E-02 0 0 0
DJI_0181.JPG -25.9335097 -48.45330177 15.37330388 34.24388008 64.82707628 6.979877709 31.3534556 -1.06E-02 -1.19E-02 -5.44E-03 2.39E-02 -2.38E-02 0 0 0
DJI_0182.JPG -30.40507957 -47.21269946 15.67804925 49.98858409 64.29238807 7.449650513 31.6699868 -8.75E-03 -1.31E-02 -4.57E-03 2.31E-02 2.68E-03 0 0 0
DJI_0183.JPG -34.64277285 -44.84034207 15.89229254 65.84203906 62.9109777 7.065942792 31.78292476 -8.39E-03 -2.94E-03 -1.40E-02 8.96E-02 -0.11801932 0 0 0
DJI_0184.JPG -39.17179024 -40.22577764 16.28164396 65.53938063 63.2592604 6.676581293 31.79546988 -9.81E-03 -8.13E-03 1.01E-02 -8.44E-02 0.179931606 0 0 0
DJI_0185.JPG -43.549378 -33.09364534 16.64130671 68.61427166 63.15205908 6.258411625 31.75339036 -9.78E-03 -7.12E-03 4.75E-03 -6.25E-02 0.1541638 0 0 0
DJI_0186.JPG -46.5381556 -24.2992233 17.2286956 74.42382577 63.75110346 6.279208736 31.88862443 -1.01E-02 -1.73E-02 1.02E-02 -6.15E-02 4.89E-02 0 0 0
DJI_0187.JPG -48.18737751 -14.67333218 17.85446854 79.54477952 63.0503902 5.980759013 31.69602914 -8.83E-03 -1.01E-02 -7.63E-03 -7.49E-03 2.71E-02 0 0 0
DJI_0188.JPG -48.48581505 -13.79840485 17.84756621 93.43316271 61.87561678 5.110113503 31.6671977 1.99E-03 -9.40E-04 2.40E-02 -0.180515731 0.32814456 0 0 0
DJI_0189.JPG -48.32815991 -13.88055437 17.77818573 106.3277582 60.87171036 4.039469869 31.50757712 2.84E-03 4.12E-03 8.54E-03 -1.32E-02 3.89E-02 0 0 0
Things tried so far
Something we discovered was that the exported model was mirrored from reality; however, this did not affect the placement of the camera references, as they aligned perfectly. We attempted to mirror the referenced cameras, the pointcloud and the viewport camera, but this did not seem to fix the issue at hand (hence the camera.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));).
So far we have attempted to load Euler angles, set angles directly, or convert and apply a Quaternion, sadly without any good results. The camera reference file is being parsed with the following logic:
// Await the .csv file being parsed from the server
await new Promise((resolve) => {
(file as Blob).text().then((csvStr) => {
const rows = csvStr.split('\n');
for (const row of rows) {
const col = row.split(',');
if (col.length > 1) {
const suffixes = col[0].split('.');
const extension = suffixes[suffixes.length - 1].toLowerCase();
const validExtensions = ['jpeg', 'jpg', 'png'];
if (!validExtensions.includes(extension)) {
continue;
}
// == Parameter index by .csv column names ==
// 0: #name; 1: x; 2: y; 3: alt; 4: heading; 5: pitch; 6: roll; 7:f (focal);
// == Non .csv param ==
// 8: bool isRadianFormat default false
this.createCamera(col[0], parseFloat(col[1]), parseFloat(col[2]), parseFloat(col[3]), parseFloat(col[4]), parseFloat(col[5]), parseFloat(col[6]), parseFloat(col[7]));
}
}
resolve(true);
});
});
}
Below you will find the code snippet for instantiating a camera with its position and rotation. I left some additional comments in to elaborate on it somewhat more, and I left the commented-out code lines in as well so you can see what else we have been trying:
private createCamera(fileName: string, xPos: number, yPos: number, zPos: number, xDeg: number, yDeg: number, zDeg: number, f: number, isRadianFormat = false) : void {
// Set radials as THREE.JS explicitly only works in radians
const xRad = isRadianFormat ? xDeg : THREE.MathUtils.degToRad(xDeg);
const yRad = isRadianFormat ? yDeg : THREE.MathUtils.degToRad(yDeg)
const zRad = isRadianFormat ? zDeg : THREE.MathUtils.degToRad(zDeg)
// Create camera reference and extract frustum
// Statically set the FOV and aspect ratio; near is set to 0.1 by default and far is dynamically set whenever a point is clicked in 3D space.
const camera = new THREE.PerspectiveCamera(67, 5280 / 2970, 0.1, 1);
const pos = new THREE.Vector3(xPos, yPos, zPos); // Reality capture z = up; THREE y = up;
/* ===
In order to set an Euler angle one must provide the heading (x), pitch (y) and roll (z) as well as the order (the fourth argument, 'XYZ') in which the rotations will be applied.
As a last resort we even tried switching the x, y and zRad variables as well as switching the rotation orders.
Possible orders:
XYZ
XZY
YZX
YXZ
ZYX
ZXY
=== */
const rot = new THREE.Euler(xRad, yRad, zRad, 'XYZ');
//camera.setRotationFromAxisAngle(new THREE.Vector3(0,))
//camera.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));
// const rot = new THREE.Quaternion();
// rot.setFromAxisAngle(new THREE.Vector3(1, 0, 0), zRad);
// rot.setFromAxisAngle(new THREE.Vector3(0, 1, 0), xRad);
// rot.setFromAxisAngle(new THREE.Vector3(0, 0, 1), yRad);
// XYZ
// === Update camera frustum ===
camera.position.copy(pos);
// camera.applyQuaternion(rot);
camera.rotation.copy(rot);
camera.setRotationFromEuler(rot);
camera.updateProjectionMatrix(); // TODO: Assert whether projection update is required here
/* ===
The camera.applyMatrix listed below was an attempt in rotating several aspects of the 3D viewer.
An attempt was made to rotate each individual photo camera position, the pointcloud itself as well as the viewport camera, both separately and in combination. It made no difference, however.
=== */
//camera.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));
// Instantiate CameraPosition instance and push to array
const photo: PhotoPosition = {
file: fileName,
camera,
position: pos,
rotation: rot,
focal: f,
width: 5120, // Statically set for now
height: 5120, // Statically set for now
};
this.photos.push(photo);
}
The cameras created in the snippet above are then grabbed by the next piece of code, which passes them to the camera manager and draws a CameraHelper (displayed in both 3D viewer pictures above). It is written as an async function that awaits the csv file being loaded before proceeding to initialize the cameras.
private initializeCameraPoses(url: string, csvLoader: CSVLoader) {
const absoluteUrl = url + '\\references.csv';
(async (scene, csvLoader, url, renderer) => {
await csvLoader.init(url);
const photos = csvLoader.getPhotos(); // The cameras created by the createCamera() method
this.inspectionRenderer = new InspectionRenderer(scene); // InspectionRenderer manages all further camera operations
this.inspectionRenderer.populateCameras(photos);
for (const photoData of photos) {
// Draw the green cube
const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);
cube.position.copy(photoData.position);
photoData.camera.updateProjectionMatrix();
// Draws the yellow camera viewport to the scene
const helper = new CameraHelper(photoData.camera);
renderer.render(scene, photoData.camera);
scene.add(helper);
}
})(this.scene, csvLoader, absoluteUrl, this.renderer);
}
Marquizzo's code snippet
The code snippet Marquizzo posted below seems to bring us a lot closer to a solution. The cameras seem to be oriented in the correct direction. However, the pitch seems to be a little off somehow. Below I will include the source image of DJI_0189.jpg. Note that for this example the FOV is currently not being set, as it looks chaotic when a camera helper is rendered for every camera position. For this example I have rendered only the DJI_0189 camera helper.
The edit #Marquizzo provided, inverting the pitch (const rotX = deg2rad(photo.pitch * -1);), results in the midpoint intersection always being slightly lower than expected:
When the pitch is adjusted to const rotX = deg2rad(photo.pitch * -.5); you'll see that the midpoint intersection is closer to that of the source image:
Somehow I think that a solution is within reach and that in the end it'll come down to some very small detail that has been overlooked. I'm really looking forward to seeing a reply. If something is still unclear, please say so and I'll provide the necessary details ^^
Thanks for reading this post so far!
At first glance, I see three possibilities:
It's hard to see where the issue is without seeing how you're using the createCamera() method. You could be swapping pitch with heading or something like that. In Three.js, heading is rotation around the Y-axis, pitch around the X-axis, and roll around the Z-axis.
Secondly, do you know in what order the heading, pitch, roll measurements were taken by your sensor? That will affect the way in which you initiate your THREE.Euler(xRad, yRad, zRad, 'XYZ'), since the order in which to apply rotations could also be 'YZX', 'ZXY', 'XZY', 'YXZ' or 'ZYX'.
Finally, you have to think "What does heading: 0 mean to the sensor?" It could mean different things between real-world and Three.js coordinate system. A camera with no rotation in Three.js is looking straight down towards -Z axis, but your sensor might have it pointing towards +Z, or +X, etc.
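For illustration, a minimal sketch of that mapping, assuming the angles are in degrees and picking 'YXZ' as the rotation order (which is only a guess at this point):
const euler = new THREE.Euler(
  THREE.MathUtils.degToRad(pitch),   // X: pitch
  THREE.MathUtils.degToRad(heading), // Y: heading
  THREE.MathUtils.degToRad(roll),    // Z: roll
  'YXZ'
);
camera.quaternion.setFromEuler(euler);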
Edit:
I added a demo below, I think this is what you needed from the screenshots. Notice I multiplied pitch * -1 so the cameras "Look down", and added +180 to the heading so they're pointing in the right... heading.
const DATA = [
{name: "DJI_0174.JPG", x: 3.116820957, y: -44.25690188, alt: 14.05258109, heading: -26.86297007, pitch: 66.43104338, roll: 1.912026354},
{name: "DJI_0175.JPG", x: -5.22E-02, y: -46.97266554, alt: 14.18056658, heading: -16.2033133, pitch: 66.11532302, roll: 3.552072396},
{name: "DJI_0176.JPG", x: -3.056586953, y: -49.00754998, alt: 14.3474763, heading: 4.270483155, pitch: 65.35247679, roll: 5.816970677},
{name: "DJI_0177.JPG", x: -6.909437337, y: -50.15910066, alt: 14.38391206, heading: 19.4459053, pitch: 64.26828897, roll: 6.685020944},
{name: "DJI_0178.JPG", x: -11.23696688, y: -50.36025313, alt: 14.56924433, heading: 19.19192622, pitch: 64.40188316, roll: 6.265995184},
{name: "DJI_0179.JPG", x: -16.04060554, y: -49.92320365, alt: 14.69721478, heading: 19.39979452, pitch: 64.85507307, roll: 6.224929846},
{name: "DJI_0180.JPG", x: -20.95614556, y: -49.22915437, alt: 14.92273203, heading: 20.39327092, pitch: 65.02028543, roll: 6.164031482},
{name: "DJI_0181.JPG", x: -25.9335097, y: -48.45330177, alt: 15.37330388, heading: 34.24388008, pitch: 64.82707628, roll: 6.979877709},
{name: "DJI_0182.JPG", x: -30.40507957, y: -47.21269946, alt: 15.67804925, heading: 49.98858409, pitch: 64.29238807, roll: 7.449650513},
{name: "DJI_0183.JPG", x: -34.64277285, y: -44.84034207, alt: 15.89229254, heading: 65.84203906, pitch: 62.9109777, roll: 7.065942792},
{name: "DJI_0184.JPG", x: -39.17179024, y: -40.22577764, alt: 16.28164396, heading: 65.53938063, pitch: 63.2592604, roll: 6.676581293},
{name: "DJI_0185.JPG", x: -43.549378, y: -33.09364534, alt: 16.64130671, heading: 68.61427166, pitch: 63.15205908, roll: 6.258411625},
{name: "DJI_0186.JPG", x: -46.5381556, y: -24.2992233, alt: 17.2286956, heading: 74.42382577, pitch: 63.75110346, roll: 6.279208736},
{name: "DJI_0187.JPG", x: -48.18737751, y: -14.67333218, alt: 17.85446854, heading: 79.54477952, pitch: 63.0503902, roll: 5.980759013},
{name: "DJI_0188.JPG", x: -48.48581505, y: -13.79840485, alt: 17.84756621, heading: 93.43316271, pitch: 61.87561678, roll: 5.110113503},
{name: "DJI_0189.JPG", x: -48.32815991, y: -13.88055437, alt: 17.77818573, heading: 106.3277582, pitch: 60.87171036, roll: 4.039469869},
];
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
45,
window.innerWidth / window.innerHeight,
1,
1000
);
camera.position.z = 100;
const renderer = new THREE.WebGLRenderer({
antialias: true,
canvas: document.querySelector("#canvas")
});
renderer.setSize(window.innerWidth, window.innerHeight);
const controls = new THREE.OrbitControls( camera, renderer.domElement );
// Helpers
const axesHelper = new THREE.AxesHelper( 20 );
scene.add(axesHelper);
const plane = new THREE.Plane( new THREE.Vector3( 0, 1, 0 ), 0 );
const planeHelper = new THREE.PlaneHelper( plane, 50, 0xffff00 );
scene.add(planeHelper);
let deg2rad = THREE.MathUtils.degToRad;
function createCam(photo) {
let tempCam = new THREE.PerspectiveCamera(10, 2.0, 1, 30);
// Altitude is actually y-axis,
// "y" is actually z-axis
tempCam.position.set(photo.x, photo.alt, photo.y);
// Modify pitch & heading so it matches Three.js coordinates
const rotX = deg2rad(photo.pitch * -1);
const rotY = deg2rad(photo.heading + 180);
const rotZ = deg2rad(photo.roll);
tempCam.rotation.set(rotX, rotY, rotZ, "YXZ");
let helper = new THREE.CameraHelper(tempCam);
scene.add(tempCam);
scene.add(helper);
}
for(let i = 0; i < DATA.length; i++) {
createCam(DATA[i]);
}
function animate() {
renderer.render(scene, camera);
requestAnimationFrame(animate);
}
animate();
html, body { margin:0; padding:0;}
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js"></script><script src="https://rawgit.com/mrdoob/three.js/dev/examples/js/controls/OrbitControls.js"></script>
<canvas id="canvas"></canvas>
I'm trying to apply a scale animation to my mesh[0] using TweenMax. I'm not having any problems with certain animations, such as rotation, or even scaling when I use 'mesh[0].set.scale' as the first argument. However, in this situation I'm getting 'Uncaught TypeError: Cannot assign to read only property 'scale' of object '#'' errors in the console.
I'm guessing that this has to do with the combination of using GSAP and Three.js, because I've tried out the same code in plain JavaScript and it works OK.
I've tried to include minimal code, so please let me know if more is needed!
const geometry = new THREE.IcosahedronBufferGeometry( 1, 0 );
materialRed = new THREE.MeshStandardMaterial({
color: 0xFF0000
});
mesh[0] = new THREE.Mesh( geometry, materialRed );
scene.add(mesh[0]);
TweenMax.to(mesh[0], 1,
{
scale: 2,
ease: Elastic.easeOut,
yoyo: true,
repeat: -1,
yoyoEase: Bounce.easeOut,
delay: 1,
}
);
Figured out my issue:
TweenMax.to(mesh[0].scale, 1,
{ x: 1.2,
y: 1.2,
z: 1.2,
yoyo: true,
repeat: -1,
});
It seems as if I was trying to manipulate the whole mesh, when I should have been targeting the scale of the mesh. From here I can scale up and manipulate it however I like.
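If you are on GSAP 3, the equivalent would look roughly like this; just a sketch, assuming the global gsap object is loaded (in GSAP 3 the duration moves into the vars object):
gsap.to(mesh[0].scale, {
  duration: 1,
  x: 1.2,
  y: 1.2,
  z: 1.2,
  yoyo: true,
  repeat: -1,
  ease: "elastic.out(1, 0.3)"
});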
Repeat method call has been updated:
https://greensock.com/docs/TimelineMax/repeat()
var t = Math.random() * 0.6 + 0.3;
TweenMax.to( box.scale, t, {
x: 1 + Math.random() * 3,
y: 1 + Math.random() * 20,
z: 1 + Math.random() * 3,
ease: Power2.easeInOut
} ).repeat(-1);
Demo:
https://codepen.io/MAKIO135/pen/vmBzMv?editors=0010
In three.js I have an Object3D and want to do simple keyframed animations with it: move, rotate, scale.
There is a simple example here: https://threejs.org/examples/#misc_animation_keys but it does not work anymore, since the animation system changed completely (rotation switched to quaternions) in three.js some time ago.
I am searching for a very simple example like that, but working with the new animation system. I already googled it and found nothing. There is no documentation on the three.js page.
Using Blender or Collada to create the animation is not an option, since I have imported the model from a STEP file, which is supported by neither one.
EDIT: I have solved the problem with the example, but I still have problems, since I want to animate a nested Object3D, but only the root Object3D, so I specified keys only for the root object, not the whole hierarchy. It throws an error because the animation keys hierarchy does not have the same structure as the root Object3D hierarchy. But that is another problem and needs another question.
The problem with the example was that rotation in the animation keys is now specified as a quaternion, not as an Euler rotation like in the example. So adding a fourth value (1) to the rotation param made it work.
Finally found one good example with setting desired values in key frames:
Misc animation keys
Full source can be found by inspecting that page.
Here is the essential part:
// create a keyframe track (i.e. a timed sequence of keyframes) for each animated property
// Note: the keyframe track type should correspond to the type of the property being animated
// POSITION
var positionKF = new THREE.VectorKeyframeTrack( '.position', [ 0, 1, 2 ], [ 0, 0, 0, 30, 0, 0, 0, 0, 0 ] );
// SCALE
var scaleKF = new THREE.VectorKeyframeTrack( '.scale', [ 0, 1, 2 ], [ 1, 1, 1, 2, 2, 2, 1, 1, 1 ] );
// ROTATION
// Rotation should be performed using quaternions, using a QuaternionKeyframeTrack
// Interpolating Euler angles (.rotation property) can be problematic and is currently not supported
// set up rotation about x axis
var xAxis = new THREE.Vector3( 1, 0, 0 );
var qInitial = new THREE.Quaternion().setFromAxisAngle( xAxis, 0 );
var qFinal = new THREE.Quaternion().setFromAxisAngle( xAxis, Math.PI );
var quaternionKF = new THREE.QuaternionKeyframeTrack( '.quaternion', [ 0, 1, 2 ], [ qInitial.x, qInitial.y, qInitial.z, qInitial.w, qFinal.x, qFinal.y, qFinal.z, qFinal.w, qInitial.x, qInitial.y, qInitial.z, qInitial.w ] );
// COLOR
var colorKF = new THREE.ColorKeyframeTrack( '.material.color', [ 0, 1, 2 ], [ 1, 0, 0, 0, 1, 0, 0, 0, 1 ], THREE.InterpolateDiscrete );
// OPACITY
var opacityKF = new THREE.NumberKeyframeTrack( '.material.opacity', [ 0, 1, 2 ], [ 1, 0, 1 ] );
// create an animation sequence with the tracks
// If a negative time value is passed, the duration will be calculated from the times of the passed tracks array
var clip = new THREE.AnimationClip( 'Action', 3, [ scaleKF, positionKF, quaternionKF, colorKF, opacityKF ] );
// setup the AnimationMixer
mixer = new THREE.AnimationMixer( mesh );
// create a ClipAction and set it to play
var clipAction = mixer.clipAction( clip );
clipAction.play();
The animation has 3 keyframes at times [0, 1, 2] = [initial, final, initial].
The position array [ 0, 0, 0, 30, 0, 0, 0, 0, 0 ] means (0,0,0) -> (30,0,0) -> (0,0,0).
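To actually see the clip play, the mixer has to be advanced every frame. A minimal render-loop sketch, assuming renderer, scene and camera already exist:
const clock = new THREE.Clock();
function animate() {
  requestAnimationFrame( animate );
  mixer.update( clock.getDelta() ); // advance the animation by the elapsed time
  renderer.render( scene, camera );
}
animate();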
I found only this one:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_animation_scene.html
Also, I was able to write one myself:
//Let's create a mesh
this.mesh = new THREE.Mesh( geometry, material );
this.clock = new THREE.Clock();
//Save this mixer somewhere
this.mixer = new THREE.AnimationMixer( this.mesh );
// Note: AnimationClipCreator is not part of the three.js core build; it comes from the examples and has to be included separately
let animation = THREE.AnimationClipCreator.CreateRotationAnimation(100, "y");
this.mixer.clipAction(animation ).play();
//In the animation block of your scene:
var delta = 0.75 * this.clock.getDelta();
this.mixer.update( delta );
This is going to rotate the given mesh around the y axis.