Ground shadows look inaccurate and muddy - javascript

I have made a Three.js project where I dynamically create a ground from Perlin noise. Here is the code:
createGround() {
    const resolutionX = 100
    const resolutionY = 100
    const actualResolutionX = resolutionX + 1 // the plane adds one vertex per axis
    const actualResolutionY = resolutionY + 1
    const geometryPlane = new THREE.PlaneGeometry(this.sizeX, this.sizeY, resolutionX, resolutionY)
    const noise = perlin.generatePerlinNoise(actualResolutionX, actualResolutionY)
    let i = 0
    for (let x = 0; x < actualResolutionX; x++) {
        for (let y = 0; y < actualResolutionY; y++) {
            const h = noise[i]
            geometryPlane.vertices[i].z = h
            i++
        }
    }
    geometryPlane.verticesNeedUpdate = true
    geometryPlane.computeFaceNormals()
    const materialPlane = new THREE.MeshStandardMaterial({
        color: 0xffff00,
        side: THREE.FrontSide,
        roughness: 1,
        metalness: 0, // was misspelled "metallness", which three.js silently ignores
    })
    const ground = new THREE.Mesh(geometryPlane, materialPlane)
    geometryPlane.computeVertexNormals()
    ground.name = GROUND_NAME
    ground.receiveShadow = true
    scene.add(ground)
}
I am happy with the geometry that is generated, but the problem is the shadows look really inaccurate.
Here is my code for the light:
const light = new THREE.DirectionalLight('white', 1)
light.shadow.mapSize.width = 512 * 12 // default is 512; raising this alone doesn't seem to change anything
light.shadow.mapSize.height = 512 * 12 // default is 512
light.castShadow = true
light.position.set(100, 100, 100)
scene.add(light)
light.shadowCameraVisible = true // note: this property was removed from three.js; use new THREE.CameraHelper(light.shadow.camera) instead
My question is: how can I make the ground's shadow look more accurate and defined, and show off the ground's geometry?
The rest of the code can be found here: https://github.com/Waltari10/workwork

This shadow doesn't look so bad given that you didn't set up any lights :).
I suppose this render works with some default lighting, or lighting you added before thinking about shadows.
Try adding a directional light that simulates the sun's direction and color and see if it helps, or tell us more about your lighting setup.
If you already have a light that renders a shadow map, check its resolution; you may need to make it bigger.
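For reference, here is a minimal sketch of a directional-light shadow setup that usually makes terrain shadows sharper; the renderer/scene/ground names and the frustum numbers are assumptions you would adapt to your project:

    // Enable shadow maps on the renderer and pick a softer filter.
    renderer.shadowMap.enabled = true
    renderer.shadowMap.type = THREE.PCFSoftShadowMap

    const sun = new THREE.DirectionalLight(0xffffff, 1)
    sun.position.set(100, 100, 100)
    sun.castShadow = true
    sun.shadow.mapSize.set(2048, 2048)
    // Tighten the orthographic shadow camera around the terrain so shadow-map
    // texels aren't wasted on empty space; this matters more than raw map size.
    sun.shadow.camera.left = -60
    sun.shadow.camera.right = 60
    sun.shadow.camera.top = 60
    sun.shadow.camera.bottom = -60
    sun.shadow.camera.near = 1
    sun.shadow.camera.far = 400
    sun.shadow.bias = -0.0005 // small bias to reduce shadow acne
    scene.add(sun)

    // The terrain must cast as well as receive shadows for its bumps to self-shadow.
    ground.castShadow = true
    ground.receiveShadow = true

    // Visualize the shadow camera while tuning (shadowCameraVisible no longer exists).
    scene.add(new THREE.CameraHelper(sun.shadow.camera))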

Related

Making a visualization of ingame map and NPC positions with Three.js involving some math such as remapping of positions but the result is incorrect

I have made a simple plugin for the game Rust that dumps the color information for the in-game map and the NPC coordinates to a data file on an interval.
The map size ranges from -2000 to 2000 on the X and Z axes, so the NPC X and Z coordinates also range from -2000 to 2000.
In Three.js I have a PlaneBufferGeometry representing the map, set up like this:
const mapGeometry = new THREE.PlaneBufferGeometry( 2, 2, 2000, 2000 ); // width, height, width segments, height segments
mapGeometry.rotateX( - Math.PI / 2 ); // rotate the geometry to match the scene

const customUniforms = {
    bumpTexture: { value: heightTexture },
    bumpScale: { type: "f", value: 0.02 },
    colorTexture: { value: colorTexture }
};

const mapMaterial = new THREE.ShaderMaterial({
    uniforms: customUniforms,
    vertexShader: document.getElementById( 'vertexShader' ).textContent,
    fragmentShader: document.getElementById( 'fragmentShader' ).textContent,
    wireframe: true
});

const mapMesh = new THREE.Mesh( mapGeometry, mapMaterial );
scene.add( mapMesh );
The webpage is served with an Express server with Socket.IO integration.
The server emits updated coordinates to the connected clients on an interval.
socket.on('PositionData', function(data) {
    storeNPCPositions(data);
});
I'm iterating over the NPC data and trying to remap the coordinates to correspond with the Three.js setup, like this:
function storeNPCPositions(data) {
    let npcs = [];
    for (const npc in data.npcPositions) {
        npcs.push({
            name: npc,
            position: {
                x: remapPosition(data.npcPositions[npc].x, -2000, 2000, -1, 1), // i am uncertain about the -1 to 1 range, maybe 0 to 2?
                y: remapPosition(data.npcPositions[npc].y, heightData.min, heightData.max, 0, .02),
                z: remapPosition(data.npcPositions[npc].z, -2000, 2000, -1, 1), // i am uncertain about the -1 to 1 range, maybe 0 to 2?
            }
        });
    }
    window.murkymap.positionData.npcs = npcs;
}

function remapPosition(value, from1, to1, from2, to2) {
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}
As you can see in the storeNPCPositions function above, I have commented on some uncertainty regarding the remapping, but either way the placement is wrong in the end result.
The image below is what I have right now; the NPCs are not in the correct positions.
I hope that anyone can see the error in my code, I've been at it for many hours now.
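For what it's worth, a PlaneBufferGeometry of width/height 2 is centered on the origin, so after the rotateX it spans -1 to 1 in local X and Z; a quick sanity check of remapPosition under that assumption:

    // Assumes remapPosition from the question; the -1..1 target range matches
    // a plane of width/height 2 centered on the origin.
    console.log(remapPosition(-2000, -2000, 2000, -1, 1)); // -1 (map edge)
    console.log(remapPosition(    0, -2000, 2000, -1, 1)); //  0 (map center)
    console.log(remapPosition( 2000, -2000, 2000, -1, 1)); //  1 (opposite edge)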
The problem was that the NPC positions were flipped on the X axis. I made a THREE.Object3D(), added all the NPCs to it, and then flipped it like this:
let npcContainer = new THREE.Object3D();
npcContainer.position.set(0, 0, 0);
npcContainer.rotateX(Math.PI);

let npcs = [];
const npcLineMaterial = new THREE.LineBasicMaterial({ color: 0xff0000 });

for (let i = 0; i < window.murkymap.positionData.npcs.length; i++) {
    const npc = window.murkymap.positionData.npcs[i];
    const npcPoints = [];
    npcPoints.push(new THREE.Vector3(npc.position.x, 1000, npc.position.z));
    npcPoints.push(new THREE.Vector3(npc.position.x, 200, npc.position.z));
    npcPoints.push(new THREE.Vector3(npc.position.x, -50, npc.position.z));
    const npcLineGeometry = new THREE.BufferGeometry().setFromPoints( npcPoints );
    const npcLine = new THREE.Line(npcLineGeometry, npcLineMaterial);
    npcLine.position.y = -750;
    npcLine.name = "npc";
    npcLine.userData.prefab = npc.name;
    npcs.push(npcLine);
}

npcContainer.remove(...npcContainer.children);
npcContainer.add(...npcs);
scene.add(npcContainer);

Showing highlighted silhouette of mesh ONLY when it is occluded

So I'm trying to create an online game using Babylon.js but have run into a problem that's got me a little stumped, so I'm hoping someone here would be willing to help me out. Please bear with me on this one, I'm a complete newbie with Babylon as I've only ever worked with THREE.js.
Right now my game consists of a scene comprising multiple meshes, with multiple users represented as avatars (created from basic circle geometry for the moment) loaded into an environment. What I want to do is highlight the outline of these avatars ONLY when they are occluded by another object, meaning that when they are not occluded they look normal with no highlight, but when behind an object their highlighted silhouette can be seen by others (including yourself, as you can see your own avatar). This is very akin to effects used in many other video games (see example below).
Example of Effect
Thus far, based on some googling and forum browsing (Babylonjs outline through walls & https://forum.babylonjs.com/t/highlight-through-objects/8002/4), I've figured out how to highlight the outline of objects using BABYLON.HighlightLayer, and I know that I can render objects above others via rendering groups, but I can't seem to figure out how to use them in conjunction to create the effect I want. The best I've managed is to get the highlighted avatar to render above everything, but I need just the silhouette, not the entire mesh. I'm also constrained by the fact that my scene has many meshes that are loaded dynamically, and I'm trying to keep things as optimal as possible; I can't afford very computationally expensive procedures.
Does anybody know of the best way to approach this? I would greatly appreciate any advice or assistance you can provide. Thanks!
So I asked the same question on the Babylon forums, which helped me find a solution. All credit goes to the guys that helped me out over there, but just in case someone else comes across this question seeking an answer, here is a link to that forum thread: https://forum.babylonjs.com/t/showing-highlighted-silhouette-of-mesh-only-when-it-is-occluded/27783/7
Edit:
OK, I thought I'd include the two possible solutions here properly, as well as their Babylon playgrounds. All credit goes to roland & evgeni_popov, who came up with these solutions on the forum linked above.
The first solution is easier to implement but slightly less performant than the second.
Clone Solution: https://playground.babylonjs.com/#JXYGLT%235
// roland#babylonjs.xyz, 2022
const createScene = function () {
    const scene = new BABYLON.Scene(engine);
    const camera = new BABYLON.ArcRotateCamera('camera', -Math.PI / 2, Math.PI / 2, 20, new BABYLON.Vector3(0, 0, 0), scene)
    camera.attachControl(canvas, true);
    const light = new BABYLON.HemisphericLight("light", new BABYLON.Vector3(0, 1, 0), scene);
    light.intensity = 0.7;

    const wall = BABYLON.MeshBuilder.CreateBox('wall', { width: 5, height: 5, depth: 0.5 }, scene)
    wall.position.y = 1
    wall.position.z = -2

    const sphere = BABYLON.MeshBuilder.CreateSphere('sphere', { diameter: 2, segments: 32 }, scene)
    sphere.position.y = 1

    const sphereClone = sphere.clone('sphereClone')
    sphereClone.setEnabled(false)

    const matc = new BABYLON.StandardMaterial("matc", scene);
    matc.depthFunction = BABYLON.Constants.ALWAYS;
    matc.disableColorWrite = true;
    matc.disableDepthWrite = true;
    sphereClone.material = matc;

    sphere.occlusionQueryAlgorithmType = BABYLON.AbstractMesh.OCCLUSION_ALGORITHM_TYPE_ACCURATE
    sphere.occlusionType = BABYLON.AbstractMesh.OCCLUSION_TYPE_STRICT

    const hl = new BABYLON.HighlightLayer('hl1', scene, { camera: camera })
    hl.addMesh(sphereClone, BABYLON.Color3.Green())
    hl.addExcludedMesh(wall);

    let t = 0;
    scene.onBeforeRenderObservable.add(() => {
        sphere.position.x = 10 * Math.cos(t);
        sphere.position.z = 100 + 104 * Math.sin(t);
        if (sphere.isOccluded) {
            sphereClone.setEnabled(true)
            sphereClone.position.copyFrom(sphere.position);
        } else {
            sphereClone.setEnabled(false)
        }
        t += 0.03;
    })

    return scene;
};
This second solution is slightly more performant than the one above because you don't need a clone, but it involves overriding the AbstractMesh._checkOcclusionQuery function, which is the function that updates the isOccluded property for meshes, so that the mesh is always rendered even when occluded. There's no overhead if you are using the occlusion queries only for the purpose of drawing silhouettes; however, if you are also using them to avoid drawing occluded meshes, then there's an overhead because the meshes will be drawn even when they are occluded, in which case you're probably best off going with the first solution.
Non-Clone solution: https://playground.babylonjs.com/#JXYGLT#14
// roland#babylonjs.xyz, 2022
const createScene = function () {
    const scene = new BABYLON.Scene(engine);
    const camera = new BABYLON.ArcRotateCamera('camera', -Math.PI / 2, Math.PI / 2, 20, new BABYLON.Vector3(0, 0, 0), scene)
    camera.attachControl(canvas, true);
    const light = new BABYLON.HemisphericLight("light", new BABYLON.Vector3(0, 1, 0), scene);
    light.intensity = 0.7;

    const wall = BABYLON.MeshBuilder.CreateBox('wall', { width: 5, height: 5, depth: 0.5 }, scene)
    wall.position.y = 1
    wall.position.z = -2

    const sphere = BABYLON.MeshBuilder.CreateSphere('sphere', { diameter: 2, segments: 32 }, scene)
    sphere.position.y = 1
    sphere.occlusionQueryAlgorithmType = BABYLON.AbstractMesh.OCCLUSION_ALGORITHM_TYPE_ACCURATE
    sphere.occlusionType = BABYLON.AbstractMesh.OCCLUSION_TYPE_STRICT

    const mats = new BABYLON.StandardMaterial("mats", scene);
    sphere.material = mats;

    const hl = new BABYLON.HighlightLayer('hl1', scene, { camera: camera })
    hl.addExcludedMesh(wall);

    let t = 0;

    const cur = BABYLON.AbstractMesh.prototype._checkOcclusionQuery;
    scene.onDisposeObservable.add(() => {
        BABYLON.AbstractMesh.prototype._checkOcclusionQuery = cur;
    });
    BABYLON.AbstractMesh.prototype._checkOcclusionQuery = function () {
        cur.apply(this);
        return false;
    }

    scene.onBeforeRenderObservable.add(() => {
        sphere.position.x = 10 * Math.cos(t);
        sphere.position.z = 100 + 104 * Math.sin(t);
        if (sphere.isOccluded) {
            hl.addMesh(sphere, BABYLON.Color3.Green())
            mats.depthFunction = BABYLON.Constants.ALWAYS;
            mats.disableColorWrite = true;
        } else {
            hl.removeMesh(sphere);
            mats.depthFunction = BABYLON.Constants.LESS;
            mats.disableColorWrite = false;
        }
        t += 0.03;
    })

    return scene;
};

Render plane from 3 vertices in three.js

I'm trying to render a plane from a set of 3 vertices (as shown). However, every method I've tried (mostly from SO or the official three.js forum) doesn't work for me.
// example vertices
const vert1 = new THREE.Vector3(768, -512, 40)
const vert2 = new THREE.Vector3(768, -496, 40)
const vert3 = new THREE.Vector3(616, -496, 40)
I already tried the following code for calculating the width and height of the plane, but I think it's way over-complicated (as I only calculate the X and Y coordinates, and I think my code would grow exponentially if I also added the Z coordinate and the plane's position to this logic).
const width = vert1.x !== vert2.x ? Math.abs(vert1.x - vert2.x) : Math.abs(vert1.x - vert3.x)
const height = vert1.y !== vert2.y ? Math.abs(vert1.y - vert2.y) : Math.abs(vert1.y - vert3.y)
Example:
I want to create a plane with 3 corners of points A, B and C and a plane with 3 corners of points D, E and F.
Example Video
You can use THREE.Plane.setFromCoplanarPoints() to create a plane from three coplanar points. However, an instance of THREE.Plane is just a mathematical representation of an infinite plane dividing 3D space into two half-spaces. If you want to visualize it, consider using THREE.PlaneHelper, or use the approach from the following thread to derive a plane mesh from your instance of THREE.Plane.
Three.js - PlaneGeometry from Math.Plane
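For illustration, a minimal sketch of the PlaneHelper route, reusing the example vertices from the question (the helper's size and color are arbitrary):

    // Build the mathematical plane from the three coplanar points, then visualize it.
    const plane = new THREE.Plane().setFromCoplanarPoints(vert1, vert2, vert3);
    const helper = new THREE.PlaneHelper(plane, 200, 0xffff00);
    scene.add(helper);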
I created an algorithm which computes the midpoint of the longest edge of the triangle. It then computes the vector from the point which isn't on the longest edge to that midpoint. Finally, adding the computed vector to the midpoint gives you the coordinates of the fourth point.
In the end, just create a PlaneGeometry from these points and create the mesh. The code is in TypeScript.
Code here:
import { Vector3, PlaneGeometry, TextureLoader, RepeatWrapping, MeshBasicMaterial, DoubleSide, Mesh } from 'three';

type Line = {
    startPoint: Vector3;
    startPointIdx: number;
    endPoint: Vector3;
    endPointIdx: number;
    vector: Vector3;
    length: number;
}

function createTestPlaneWithTexture(): void {
    const pointsIn = [new Vector3(28, 3, 3), new Vector3(20, 15, 20), new Vector3(1, 13, 3)]
    const lines = Array<Line>();
    for (let i = 0; i < pointsIn.length; i++) {
        let length, distVect;
        if (i <= pointsIn.length - 2) {
            distVect = new Vector3().subVectors(pointsIn[i], pointsIn[i + 1]);
            length = distVect.length()
            lines.push({ vector: distVect, startPoint: pointsIn[i], startPointIdx: i, endPoint: pointsIn[i + 1], endPointIdx: i + 1, length: length })
        } else {
            const distVect = new Vector3().subVectors(pointsIn[i], pointsIn[0]);
            length = distVect.length()
            lines.push({ vector: distVect, startPoint: pointsIn[i], startPointIdx: i, endPoint: pointsIn[0], endPointIdx: 0, length: length })
        }
    }

    // find the longest edge of the triangle
    let maxLine: Line;
    lines.forEach(line => {
        if (maxLine) {
            if (line.length > maxLine.length)
                maxLine = line;
        } else {
            maxLine = line;
        }
    })

    // get the midpoint of the longest edge
    const midPoint = maxLine.endPoint.clone().add(maxLine.vector.clone().multiplyScalar(0.5));
    // get the index of the unused point
    const idx = [0, 1, 2].filter(value => value !== maxLine.endPointIdx && value !== maxLine.startPointIdx)[0];
    // diagonal point one
    const thirdPoint = pointsIn[idx];
    const vec = new Vector3().subVectors(midPoint, thirdPoint);
    // diagonal point two (the longer diagonal of the rectangle)
    const fourthPoint = midPoint.clone().add(vec);
    const edge1 = thirdPoint.clone().sub(maxLine.endPoint).length();
    const edge2 = fourthPoint.clone().sub(maxLine.endPoint).length();

    const points = [thirdPoint, maxLine.startPoint, maxLine.endPoint, fourthPoint];
    const geo = new PlaneGeometry().setFromPoints(points)
    const texture = new TextureLoader().load(textureImage);
    texture.wrapS = RepeatWrapping;
    texture.wrapT = RepeatWrapping;
    texture.repeat.set(edge2, edge1);
    const mat = new MeshBasicMaterial({ color: 0xffffff, side: DoubleSide, map: texture });
    const plane = new Mesh(geo, mat);
}
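Note that the function builds the mesh but never adds it to a scene; assuming a scene object is in scope, a likely final step inside the function would be:

    scene.add(plane); // make the generated plane visible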

GPU picking on loaded GLTF objects

I have tried a lot of ways to approach this topic before asking, and now I really have no clue how to accomplish object picking with the GPU on a glTF-loaded file, so I'm hoping for any help that I can get :(
I've loaded a huge glTF file with a lot of little objects in it. Due to the object count it's not possible to achieve a good FPS if I just add them to the scene, so I have managed to reach 60 fps by merging the glTF's children into chunks. But when I try to implement the webgl_interactive_cubes_gpu example it doesn't seem to work for me; I always get the same object when I'm clicking.
To debug, I have tried rendering the pickingScene and everything seems to be in place graphically speaking, but the picking itself doesn't work as I expected, unless I'm doing something wrong.
Raycast picking is not a suitable option for me, as there are a lot of objects and adding and rendering them individually would kill the FPS (55k objects).
Below is the code once the gltf is loaded:
var child = gltf.scene.children[i];
var childGeomCopy = child.geometry.clone();
childGeomCopy.translate(geomPosition.x, geomPosition.y, geomPosition.z);
childGeomCopy.scale(child.scale.x * Scalar, child.scale.y * Scalar, child.scale.z * Scalar);
childGeomCopy.computeBoundingBox();
childGeomCopy.computeBoundingSphere();
childGeomCopy.applyMatrix(new THREE.Matrix4());
geometriesPicking.push(childGeomCopy);

var individualObj = new THREE.Mesh(childGeomCopy, IndividualObjMat);
individualObj.name = "individual_" + child.name;

pickingData[childCounter] = {
    object: individualObj,
    position: individualObj.position.clone(),
    rotation: individualObj.rotation.clone(),
    scale: individualObj.scale.clone()
};
childCounter++;
Edit:
gltf.scene.traverse(function (child) {
    //console.log(child.type);
    if (child.isMesh) {
        let geometry = child.geometry.clone();
        let position = new THREE.Vector3();
        position.x = child.position.x;
        position.y = child.position.y;
        position.z = child.position.z;
        let rotation = new THREE.Euler();
        rotation.x = child.rotation.x;
        rotation.y = child.rotation.y;
        rotation.z = child.rotation.z;
        let scale = new THREE.Vector3();
        scale.x = child.scale.x;
        scale.y = child.scale.y;
        scale.z = child.scale.z;

        quaternion.setFromEuler(rotation);
        matrix.compose(position.multiplyScalar(Scalar), quaternion, scale.multiplyScalar(Scalar));
        geometry.applyMatrix(matrix);

        applyVertexColors(geometry, color.setHex(Math.random() * 0xffffff));
        geometriesDrawn.push(geometry);

        geometry = geometry.clone();
        applyVertexColors(geometry, color.setHex(childCounter));
        geometriesPicking.push(geometry);

        pickingData[childCounter] = {
            object: new THREE.Mesh(geometry.clone(), new THREE.MeshBasicMaterial({ color: 0xffff00, blending: THREE.AdditiveBlending, transparent: true, opacity: 0.8 })),
            id: childCounter,
            position: position,
            rotation: rotation,
            scale: scale
        };
        childCounter++;
        //console.log("%c [childCounter] :", "", childCounter);
    }
});
...
var pickingGeom = THREE.BufferGeometryUtils.mergeBufferGeometries(geometriesPicking);
pickingGeom.rotateX(THREE.Math.degToRad(90));
pickingScene.add(new THREE.Mesh(pickingGeom, pickingMaterial));
Then in my MouseUp function I call pick(mouse) and pass in the mouse information:
function pick(mouse) {
    camera.setViewOffset(renderer.domElement.width, renderer.domElement.height, mouse.x * window.devicePixelRatio | 0, mouse.y * window.devicePixelRatio | 0, 1, 1);
    renderer.setRenderTarget(pickingTexture);
    renderer.render(pickingScene, camera);
    camera.clearViewOffset();

    var pixelBuffer = new Uint8Array(4);
    renderer.readRenderTargetPixels(pickingTexture, 0, 0, 1, 1, pixelBuffer);

    // Reassemble the object id that was encoded into the pixel's RGB channels.
    var id = (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | (pixelBuffer[2]);
    var data = pickingData[id];
    if (data) {
        console.log(data.object.name, ":", data.position); // always returns the same object
    }
}
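The snippets above call an applyVertexColors helper that isn't shown; here is a sketch of what such a helper typically does, modeled on the three.js webgl_interactive_cubes_gpu example (the exact implementation in your project may differ):

    // Fill a per-vertex 'color' attribute with one flat color so the picking
    // shader renders each merged chunk in the color that encodes its id.
    function applyVertexColors(geometry, color) {
        const position = geometry.attributes.position;
        const colors = new Float32Array(position.count * 3);
        for (let i = 0; i < position.count; i++) {
            colors[i * 3 + 0] = color.r;
            colors[i * 3 + 1] = color.g;
            colors[i * 3 + 2] = color.b;
        }
        geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3)); // addAttribute on older three.js releases
    }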

Drawing/Rendering 3D objects with epicycles and fourier transformations [Animation]

First note: they won't let me embed images until I have more reputation points (sorry), but all the links are images posted on Imgur! :) Thanks.
I have replicated a method to animate any single path (1 closed path) using Fourier transforms. This creates an animation of epicycles (rotating circles) which rotate around each other and follow the input points, tracing the path as a continuous loop/function.
I would like to adapt this system to 3D. The two methods I can think of to achieve this are to use a spherical coordinate system (two complex planes) or 3 epicycles, one for each axis (x, y, z), with their individual parametric equations. This is probably the best way to start!
2 Cycles, One for X and one for Y:
Picture: One Cycle --> Complex Numbers --> For X and Y
Fourier Transformation Background!!!:
• Euler's formula allows us to decompose each point in the complex plane into an angle (the argument to the exponential function) and an amplitude (the c_n coefficients)
• In this sense, there is a connection to imagining each term in the infinite series above as representing a point on a circle with radius c_n, offset by 2πnt/T radians
• The image below shows how a sum of complex numbers in terms of phases/amplitudes can be visualized as a set of concatenated circles in the complex plane. Each red line is a vector representing a term in the sequence of sums: c_n e^(2πi(n/T)t)
• Adding the summands corresponds to simply concatenating each of these red vectors in complex space:
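For completeness, the series the bullets refer to (shown only as an image in the original post) is the standard complex Fourier series; in LaTeX notation, with c_n the coefficients and T the period:

    f(t) = \sum_{n=-\infty}^{\infty} c_n \, e^{2\pi i n t / T},
    \qquad
    c_n = \frac{1}{T} \int_{0}^{T} f(t)\, e^{-2\pi i n t / T}\, dt

Each term c_n e^{2πi n t / T} traces a circle of radius |c_n| rotating at frequency n/T, which is exactly what each epicycle draws.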
Animated Rotating Circles:
Circles to Animated Drawings:
• If you have a line drawing in 2D (x-y) space, you can describe this path mathematically as a parametric function (two separate single-variable functions, both in terms of an auxiliary variable, T in this case):
• For example, below is a simple line drawing of a horse, a parametric path through the black pixels in the image, and that path then separated into its X and Y components:
• At this point, we need to calculate the Fourier approximations of these two paths, and use the coefficients from this approximation to determine the phases and amplitudes of the circles needed for the final visualization.
Python Code:
The Python code used for this example can be found here on GitHub.
I have successfully animated this process in 2D, but I would like to adapt it to 3D.
The following code represents the animation in 2D, something I already have working:
[Using JavaScript & the p5.js library]
The Fourier Algorithm (fourier.js):
// a + bi
class Complex {
    constructor(a, b) {
        this.re = a;
        this.im = b;
    }
    add(c) {
        this.re += c.re;
        this.im += c.im;
    }
    mult(c) {
        const re = this.re * c.re - this.im * c.im;
        const im = this.re * c.im + this.im * c.re;
        return new Complex(re, im);
    }
}

function dft(x) {
    const X = [];
    const Values = [];
    const N = x.length;
    for (let k = 0; k < N; k++) {
        let sum = new Complex(0, 0);
        for (let n = 0; n < N; n++) {
            const phi = (TWO_PI * k * n) / N;
            const c = new Complex(cos(phi), -sin(phi));
            sum.add(x[n].mult(c));
        }
        sum.re = sum.re / N;
        sum.im = sum.im / N;

        let freq = k;
        let amp = sqrt(sum.re * sum.re + sum.im * sum.im);
        let phase = atan2(sum.im, sum.re);
        X[k] = { re: sum.re, im: sum.im, freq, amp, phase };
        Values[k] = { phase };
        console.log(Values[k]);
    }
    return X;
}
The Sketch Function/ Animations (Sketch.js):
let x = [];
let fourierX;
let time = 0;
let path = [];

function setup() {
    createCanvas(800, 600);
    const skip = 1;
    for (let i = 0; i < drawing.length; i += skip) {
        const c = new Complex(drawing[i].x, drawing[i].y);
        x.push(c);
    }
    fourierX = dft(x);
    fourierX.sort((a, b) => b.amp - a.amp);
}

function epicycles(x, y, rotation, fourier) {
    for (let i = 0; i < fourier.length; i++) {
        let prevx = x;
        let prevy = y;
        let freq = fourier[i].freq;
        let radius = fourier[i].amp;
        let phase = fourier[i].phase;
        x += radius * cos(freq * time + phase + rotation);
        y += radius * sin(freq * time + phase + rotation);

        stroke(255, 100);
        noFill();
        ellipse(prevx, prevy, radius * 2);
        stroke(255);
        line(prevx, prevy, x, y);
    }
    return createVector(x, y);
}

function draw() {
    background(0);
    let v = epicycles(width / 2, height / 2, 0, fourierX);
    path.unshift(v);

    beginShape();
    noFill();
    for (let i = 0; i < path.length; i++) {
        vertex(path[i].x, path[i].y);
    }
    endShape();

    const dt = TWO_PI / fourierX.length;
    time += dt;
}
And Most Importantly! THE PATH / COORDINATES:
(this one is a triangle)
let drawing = [
    { y: -8.001009734, x: -50 },
    { y: -7.680969345, x: -49 },
    { y: -7.360928956, x: -48 },
    { y: -7.040888566, x: -47 },
    { y: -6.720848177, x: -46 },
    { y: -6.400807788, x: -45 },
    { y: -6.080767398, x: -44 },
    { y: -5.760727009, x: -43 },
    { y: -5.440686619, x: -42 },
    { y: -5.12064623, x: -41 },
    { y: -4.800605841, x: -40 },
    ...
    ...
    { y: -8.001009734, x: -47 },
    { y: -8.001009734, x: -48 },
    { y: -8.001009734, x: -49 },
];
This answer is in response to: "Do you think [three.js] can replicate what i have in 2D but in 3D? with the rotating circles and stuff?"
I'm not sure whether you're looking to learn 3D modeling from scratch (i.e., creating your own library of vector routines, homogeneous coordinate transformations, rendering perspective, etc.) or whether you're simply looking to produce a final product. In the case of the latter, three.js is a powerful graphics library built on WebGL that in my estimation is simple enough for a beginner to dabble with, but has a lot of depth to produce very sophisticated 3D effects. (Peruse the examples at https://threejs.org/examples/ and you'll see for yourself.)
I happen to be working on a three.js project of my own, and whipped up a quick example of epicyclic circles as a warm-up exercise. This involved pulling pieces and parts from the following references...
https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene
https://threejs.org/examples/#misc_controls_orbit
https://threejs.org/examples/#webgl_geometry_shapes (This three.js example is a great resource showing a variety of ways that a shape can be rendered.)
The result is a simple scene with one circle running around the other, permitting mouse controls to orbit around the scene, viewing it from different angles and distances.
<html>
  <head>
    <title>Epicyclic Circles</title>
    <style>
      body { margin: 0; }
      canvas { width: 100%; height: 100% }
    </style>
  </head>
  <body>
    <script src="https://rawgit.com/mrdoob/three.js/dev/build/three.js"></script>
    <script src="https://rawgit.com/mrdoob/three.js/dev/examples/js/controls/OrbitControls.js"></script>
    <script>
      // Set up the basic scene, camera, and lights.
      var scene = new THREE.Scene();
      scene.background = new THREE.Color( 0xf0f0f0 );
      var camera = new THREE.PerspectiveCamera( 75, window.innerWidth/window.innerHeight, 0.1, 1000 );
      scene.add(camera)
      var light = new THREE.PointLight( 0xffffff, 0.8 );
      camera.add( light );
      camera.position.z = 50;
      var renderer = new THREE.WebGLRenderer();
      renderer.setSize( window.innerWidth, window.innerHeight );
      document.body.appendChild( renderer.domElement );

      // Add the orbit controls to permit viewing the scene from different angles via the mouse.
      controls = new THREE.OrbitControls( camera, renderer.domElement );
      controls.enableDamping = true; // an animation loop is required when either damping or auto-rotation are enabled
      controls.dampingFactor = 0.25;
      controls.screenSpacePanning = false;
      controls.minDistance = 0;
      controls.maxDistance = 500;

      // Create center and epicyclic circles, extruding them to give them some depth.
      var extrudeSettings = { depth: 2, bevelEnabled: true, bevelSegments: 2, steps: 2, bevelSize: .25, bevelThickness: .25 };

      var arcShape1 = new THREE.Shape();
      arcShape1.moveTo( 0, 0 );
      arcShape1.absarc( 0, 0, 15, 0, Math.PI * 2, false );
      var holePath1 = new THREE.Path();
      holePath1.moveTo( 0, 10 );
      holePath1.absarc( 0, 10, 2, 0, Math.PI * 2, true );
      arcShape1.holes.push( holePath1 );
      var geometry1 = new THREE.ExtrudeBufferGeometry( arcShape1, extrudeSettings );
      var mesh1 = new THREE.Mesh( geometry1, new THREE.MeshPhongMaterial( { color: 0x804000 } ) );
      scene.add( mesh1 );

      var arcShape2 = new THREE.Shape();
      arcShape2.moveTo( 0, 0 );
      arcShape2.absarc( 0, 0, 15, 0, Math.PI * 2, false );
      var holePath2 = new THREE.Path();
      holePath2.moveTo( 0, 10 );
      holePath2.absarc( 0, 10, 2, 0, Math.PI * 2, true );
      arcShape2.holes.push( holePath2 );
      var geometry2 = new THREE.ExtrudeGeometry( arcShape2, extrudeSettings );
      var mesh2 = new THREE.Mesh( geometry2, new THREE.MeshPhongMaterial( { color: 0x00ff00 } ) );
      scene.add( mesh2 );

      // Define variables to hold the current epicyclic radius and current angle.
      var mesh2AxisRadius = 30;
      var mesh2AxisAngle = 0;

      var animate = function () {
        requestAnimationFrame( animate );

        // During each animation frame, let's rotate the objects on their center axis,
        // and also set the position of the epicyclic circle.
        mesh1.rotation.z -= 0.02;
        mesh2.rotation.z += 0.02;
        mesh2AxisAngle += 0.01;
        mesh2.position.set ( mesh2AxisRadius * Math.cos(mesh2AxisAngle), mesh2AxisRadius * Math.sin(mesh2AxisAngle), 0 );

        renderer.render( scene, camera );
      };

      animate();
    </script>
  </body>
</html>
Note that I've used basic trigonometry within the animate function to position the epicyclic circle around the center circle, and fudged the rate of rotation for the circles (rather than doing the precise math), but there's probably a better "three.js" way of doing this via matrices or built-in functions. Given that you obviously have a strong math background, I don't think you'll have any issues translating your 2D model of multi-epicyclic circles using basic trigonometry when porting it to 3D.
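One such built-in approach (a sketch, not part of the original example) is to parent the epicyclic circle to a pivot Object3D at the center and rotate the pivot each frame instead of computing cos/sin by hand:

    // Hypothetical setup: a pivot at the center carries mesh2 at a fixed offset,
    // so rotating the pivot moves mesh2 along the epicyclic path.
    var pivot = new THREE.Object3D();
    scene.add( pivot );
    pivot.add( mesh2 ); // re-parents mesh2 from the scene to the pivot
    mesh2.position.set( mesh2AxisRadius, 0, 0 );

    // Inside the animation loop, this replaces the manual mesh2.position.set(...) call:
    pivot.rotation.z += 0.01;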
Hope this helps in your decision making process on how to proceed with a 3D version of your program.
The method that I would suggest is as follows. Start with a parametrized path v(t) = (v_x(t), v_y(t), v_z(t)). Consider the following projection onto the X-Y plane: v1(t) = (v_x(t)/2, v_y(t), 0). And the corresponding projection onto the X-Z plane: v2(t) = (v_x(t)/2, 0, v_z(t)).
When we add these projections together we get the original curve. But each projection is now a closed 2-D curve, and you have solutions for arbitrary closed 2-D curves. So solve each problem. And then interleave them to get a projection where your first circle goes in the X-Y plane, your second one in the X-Z plane, your third one in the X-Y plane, your fourth one in the X-Z plane ... and they sum up to your answer!
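A small sketch of that decomposition in the question's JavaScript setup, reusing the Complex class and dft function from above (v is assumed to be an array of {x, y, z} samples of the closed 3D path):

    // Split each 3D sample into two planar samples that sum back to the original:
    // half of x goes with y (X-Y plane), the other half with z (X-Z plane).
    function splitPath(v) {
        const xy = []; // projection driving the X-Y epicycles
        const xz = []; // projection driving the X-Z epicycles
        for (let i = 0; i < v.length; i++) {
            xy.push(new Complex(v[i].x / 2, v[i].y));
            xz.push(new Complex(v[i].x / 2, v[i].z));
        }
        return { fourierXY: dft(xy), fourierXZ: dft(xz) };
    }
    // Each result drives its own set of epicycles: the fourierXY circles rotate in the
    // X-Y plane, the fourierXZ circles in the X-Z plane, and their sum traces the 3D path.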
