Extrude Geometry in three.js - javascript

I'm currently loading an STL object into my three.js scene.
For some reason, it takes a lot of GPU resources to render/animate, slowing the entire scene down, so I've been considering alternatives.
As it's quite a simple shape, I thought I could create the 2D shape and extrude it.
The 3D shape is a square frame (it's a photo frame), no curves or any other clever geometry.
Initially, I thought about creating four 3D oblongs, rotating each one by 90 degrees and placing them just right in the scene to make it look like a frame, but that's not ideal.
So as an alternative to loading the STL model into the scene, how can I create this shape in three.js (with empty space in the centre) and then extrude it to give it some depth?

Basic extrusion example: Shape -> ExtrudeGeometry -> Mesh
const { renderer, scene, camera } = initThree();
//Create a frame shape..
var frame = new THREE.Shape();
frame.moveTo(-4, -3);
frame.lineTo( 4, -3);
frame.lineTo( 4, 3);
frame.lineTo(-4, 3);
//..with a hole:
var hole = new THREE.Path();
hole.moveTo(-3, -2);
hole.lineTo( 3, -2);
hole.lineTo( 3, 2);
hole.lineTo(-3, 2);
frame.holes.push(hole);
//Extrude the shape into a geometry, and create a mesh from it:
var extrudeSettings = {
    steps: 1,
    depth: 1,
    bevelEnabled: false,
};
var geom = new THREE.ExtrudeGeometry(frame, extrudeSettings);
var mesh = new THREE.Mesh(geom, new THREE.MeshPhongMaterial({ color: 0xffaaaa }));
scene.add(mesh);
renderer.render(scene, camera);
body {
    margin: 0;
    overflow: hidden;
}
canvas {
    display: block;
}
<script src="//cdnjs.cloudflare.com/ajax/libs/three.js/102/three.min.js"></script>
<script src="//cdn.rawgit.com/Sphinxxxx/298702f070e34a5df30326cd9943260a/raw/16afc701da1ed8ed267a896907692d8acdce9b7d/init-three.js"></script>

Related

scaling down and repeating a texture in threejs

I have quite a large plane with a set displacement map and scale which I do not want changed. I simply want the loaded texture to apply to that mesh without it being scaled up so much.
Currently, a floor texture doesn't look like a floor as it has been upscaled to suit the large plane.
How would I be able to scale down the texture and multiply it across the plane so it looks more like actual terrain?
const tilesNormalMap = textureLoader.load(
    "./textures/Stylized_Stone_Floor_005_normal.jpg"
);

// (textureLoader and tilesBaseColor are defined elsewhere in my code)
function createGround() {
    let disMap = new THREE.TextureLoader().load("./models/Heightmap.png");
    disMap.wrapS = disMap.wrapT = THREE.RepeatWrapping;
    disMap.repeat.set(4, 2);

    const groundMat = new THREE.MeshStandardMaterial({
        map: tilesBaseColor,
        normalMap: tilesNormalMap,
        displacementMap: disMap,
        displacementScale: 2
    });

    const groundGeo = new THREE.PlaneGeometry(300, 300, 800, 800);
    let groundMesh = new THREE.Mesh(groundGeo, groundMat);
    scene.add(groundMesh);
    groundMesh.rotation.x = -Math.PI / 2;
    groundMesh.position.y -= 1.5;
}
I tried using the .repeat method as shown below, but I can't figure out how this should be implemented:
tilesBaseColor.repeat.set(0.9, 0.9);
tilesBaseColor.offset.set(0.001, 0.001);
[Screenshot: the current ground texture, stretched to cover the large plane]
First of all, what you want to achieve does not currently work with three.js, since it's only possible to have a single uv transform for all textures (except for the light and ao map). The color map has priority, so in your case you can't have different repeat settings for the displacement map. Related issue at GitHub: https://github.com/mrdoob/three.js/issues/9457
Currently, a floor texture doesn't look like a floor as it has been upscaled to suit the large plane. How would I be able to scale down the texture and multiply it across the plane so it looks more like actual terrain?
In this case, you have to use repeat values greater than 1, otherwise you zoom into the texture. Do it like in the following live example:
let camera, scene, renderer;

init().then(render);

async function init() {
    camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.01, 10);
    camera.position.z = 1;

    scene = new THREE.Scene();

    const loader = new THREE.TextureLoader();
    const texture = await loader.loadAsync('https://threejs.org/examples/textures/uv_grid_opengl.jpg');

    // repeat settings
    texture.wrapS = THREE.RepeatWrapping;
    texture.wrapT = THREE.RepeatWrapping;
    texture.repeat.set(2, 2);

    const geometry = new THREE.PlaneGeometry();
    const material = new THREE.MeshBasicMaterial({map: texture});

    const mesh = new THREE.Mesh(geometry, material);
    scene.add(mesh);

    renderer = new THREE.WebGLRenderer({antialias: true});
    renderer.setPixelRatio(window.devicePixelRatio);
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
}

function render() {
    renderer.render(scene, camera);
}
body {
    margin: 0;
}
<script src="https://cdn.jsdelivr.net/npm/three#0.148/build/three.min.js"></script>

How do I detect if X and Z position is intersecting a certain mesh (not using mouse)? - Three.js

Background of Question
I am working on a game that is a mix between Europa Universalis 4 and Age of Empires 3. The game is written in JavaScript and uses the Three.js (r109) library. As of right now, I have randomly generated low-poly terrain with trees and reflective water. At the start, I want the game to spawn a navy, represented by a galleon. When it is called to spawn, it should pick a random location within the bounds of the water. The water mesh is a semi-opaque plane spanning the size of the map, with a THREE.Reflector object underneath it. The terrain is also a plane, but has been altered using a SimplexNoise heightmap.
The Question
How do I detect if an x and z position intersects with the water mesh and not the terrain mesh? THREE.Raycaster seems useful for what I am trying to do, but I want to know if there is a better solution. If using THREE.Raycaster is the best option, how would I go about implementing it for this purpose? Should I create an individual THREE.Raycaster for every object I am doing this with? Keep in mind I'm not placing this object with the mouse; I want to place it with a method that checks the position as stated above.
It's difficult to give specific advice without knowing anything at all about your code, but it sounds like all you need to do is create a collision list for your valid water surfaces and then check that when you want to spawn something.
A very simple jsfiddle is here. It creates a "land" mesh (green) and a "water" mesh (blue), adds the "water" mesh to a variable called collisionList. It then calls a spawn function for coordinates diagonally across both surfaces. The function uses a raycaster to check if the coordinates are over the "water" mesh and spawns a red cube if it is.
Here's the code:
window.onload = function() {
    var camera = null, land = null, water = null, renderer = null, lights;
    var collisionList;
    var d, n, scene = null, animID;

    n = document.getElementById('canvas');

    function load() {
        var height = 600, width = 800;

        scene = new THREE.Scene();

        camera = new THREE.PerspectiveCamera(60, width / height, 1, 1000);
        camera.position.set(0, 0, -10);
        camera.lookAt(new THREE.Vector3(0, 0, 0));
        scene.add(camera);

        lights = [];
        lights[0] = new THREE.PointLight(0xffffff, 1, 0);
        lights[1] = new THREE.PointLight(0xffffff, 1, 0);
        lights[2] = new THREE.PointLight(0xffffff, 1, 0);
        lights[0].position.set(0, 200, 0);
        lights[1].position.set(100, 200, 100);
        lights[2].position.set(-100, -200, -100);
        scene.add(lights[0]);
        scene.add(lights[1]);
        scene.add(lights[2]);

        // The "water" surface - the only mesh spawning is allowed on
        water = new THREE.Mesh(new THREE.PlaneGeometry(7, 7, 10),
            new THREE.MeshStandardMaterial({
                color: 0x0000ff,
                side: THREE.DoubleSide,
            }));
        water.position.set(0, 0, 0);
        scene.add(water);

        // The "land" surface, sitting just behind the water
        land = new THREE.Mesh(new THREE.PlaneGeometry(12, 12, 10),
            new THREE.MeshStandardMaterial({
                color: 0x00ff00,
                side: THREE.DoubleSide,
            }));
        land.position.set(0, 0, 1);
        scene.add(land);

        renderer = new THREE.WebGLRenderer();
        renderer.setSize(width, height);
        n.appendChild(renderer.domElement);

        // Only meshes in this list count as valid spawn surfaces
        collisionList = [ water ];

        // Try to spawn cubes diagonally across both surfaces
        for (var i = -6; i < 6; i++)
            spawn(i);

        animate();
    }

    function spawn(x) {
        var dir, intersect, mesh, ray, v;

        // Cast a ray from (x, x, -1) along +z and see what it hits
        v = new THREE.Vector3(x, x, -1);
        dir = new THREE.Vector3(0, 0, 1);
        ray = new THREE.Raycaster(v, dir.normalize(), 0, 100);
        intersect = ray.intersectObjects(collisionList);

        // Not over water - don't spawn anything
        if (intersect.length <= 0)
            return;

        mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1, 1, 1, 1),
            new THREE.MeshStandardMaterial({ color: 0xff0000 }));
        mesh.position.set(x, x, 0);
        scene.add(mesh);
    }

    function animate() {
        if (!scene) return;
        animID = requestAnimationFrame(animate);
        render();
        update();
    }

    function render() {
        if (!scene || !camera || !renderer) return;
        renderer.render(scene, camera);
    }

    function update() {
        if (!scene || !camera) return;
    }

    load();
};
As for whether this is a smart way to do it, that really depends on the design of the rest of your game.
If your world is procedurally generated, it may be more efficient and less error-prone to generate the spawn points (and any other "functional" parts of the world) first and use those to generate the geography, rather than the other way around.
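For instance, a rough sketch of that idea with hypothetical names (mapSize, heightAt, waterLevel and galleon are placeholders standing in for whatever your terrain generator already produces):
// Collect valid water positions while the terrain is being generated...
var waterSpawnPoints = [];
for (var x = 0; x < mapSize; x++) {
    for (var z = 0; z < mapSize; z++) {
        if (heightAt(x, z) < waterLevel) { // assumed helper around the SimplexNoise heightmap
            waterSpawnPoints.push(new THREE.Vector3(x, waterLevel, z));
        }
    }
}

// ...then spawning the navy is just a random pick, with no raycast needed at spawn time.
function spawnNavy() {
    var p = waterSpawnPoints[Math.floor(Math.random() * waterSpawnPoints.length)];
    galleon.position.copy(p);
    scene.add(galleon);
}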

Rotating icosahedron with circles located at every vertex in three.js

I have an icosahedron mesh which I am rotating and then adding circle geometries and setting their location to each vertex at every frame in the animation loop.
geometry = new THREE.IcosahedronGeometry(isoRadius, 1);
var material = new THREE.MeshBasicMaterial({
    color: wireframeColor,
    wireframe: true
});
isoMesh = new THREE.Mesh(geometry, material);
scene.add(isoMesh);
Set each circle geometry's location as the icosahedron mesh rotates:
function animate() {
    isoMesh.rotation.x += 0.005;
    isoMesh.rotation.y += 0.002;

    // update vertices
    isoMesh.updateMatrix();
    isoMesh.geometry.applyMatrix(isoMesh.matrix);
    isoMesh.rotation.set(0, 0, 0);

    for (var i = 0; i < geometry.vertices.length; i++) {
        nodes[i].position.copy(geometry.vertices[i]);
        nodes[i].lookAt(camera.position);
    }
}
Where var geometry is the geometry of the icosahedron. If I remove the line "isoMesh.rotation.set(0, 0, 0);", the icosahedron rotates correctly, but the rotation of the nodes compounds and spins way too quickly. If I add that line, the nodes rotate correctly, but the icosahedron does not move at all.
I do not know three.js well enough yet to understand what is happening. Why would adding and removing this line affect the nodes' and icosahedron's rotations separately? I believe it has something to do with the difference between the mesh and the geometry, since I am using the geometry to position the nodes, but the rotation of the mesh is what shows visually. Any idea what is happening here?
The solution is multi-layered.
Your Icosahedron:
You were half-way there with rotating your icosahedron and its vertices. Rather than applying the rotation to all the vertices (which would actually cause some pretty extreme rotation), apply the rotation to the mesh only. But that doesn't update the vertices, right? Right. More on that in a moment.
Your Circles:
You have the right idea of placing them at each vertex, but as WestLangley said, you can't use lookAt for objects with rotated/translated parents, so you'll need to add them directly to the scene. Also, if you can't get the new positions of the vertices for the rotated icosahedron, the circles will simply remain in place. So let's get those updated vertices.
Getting Updated Vertex Positions:
Like I said above, rotating the mesh updates its transformation matrix, not the vertices. But we can USE that updated transformation matrix to get the updated matrix positions for the circles. Object3D.localToWorld allows us to transform a local THREE.Vector3 (like your icosahedron's vertices) into world coordinates. (Also note that I did a clone of each vertex, because localToWorld overwrites the given THREE.Vector3).
Takeaways:
I've tried to isolate the parts relative to your question into the JavaScript portion of the snippet below.
Try not to update geometry unless you have to.
Only use lookAt with objects in the world coordinate system
Use localToWorld and worldToLocal to transform vectors between coordinate systems (a short worldToLocal example follows the snippet below).
// You already had this part
var geometry = new THREE.IcosahedronGeometry(10, 1);
var material = new THREE.MeshBasicMaterial({
    color: "blue",
    wireframe: true
});
var isoMesh = new THREE.Mesh(geometry, material);
scene.add(isoMesh);

// Add your circles directly to the scene
var nodes = [];
for (var i = 0, l = geometry.vertices.length; i < l; ++i) {
    nodes.push(new THREE.Mesh(new THREE.CircleGeometry(1, 32), material));
    scene.add(nodes[nodes.length - 1]);
}

// This is called in render. Get the world positions of the vertices and apply them to the circles.
var tempVector = new THREE.Vector3();
function updateVertices() {
    if (typeof isoMesh !== "undefined" && typeof nodes !== "undefined" && nodes.length === isoMesh.geometry.vertices.length) {
        isoMesh.rotation.x += 0.005;
        isoMesh.rotation.y += 0.002;

        for (var i = 0, l = nodes.length; i < l; ++i) {
            tempVector.copy(isoMesh.geometry.vertices[i]);
            nodes[i].position.copy(isoMesh.localToWorld(tempVector));
            nodes[i].lookAt(camera.position);
        }
    }
}
html * {
    padding: 0;
    margin: 0;
    width: 100%;
    overflow: hidden;
}
#host {
    width: 100%;
    height: 100%;
}
<script src="http://threejs.org/build/three.js"></script>
<script src="http://threejs.org/examples/js/controls/TrackballControls.js"></script>
<script src="http://threejs.org/examples/js/libs/stats.min.js"></script>
<div id="host"></div>
<script>
    // INITIALIZE
    var WIDTH = window.innerWidth,
        HEIGHT = window.innerHeight,
        FOV = 35,
        NEAR = 1,
        FAR = 1000;

    var renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(WIDTH, HEIGHT);
    document.getElementById('host').appendChild(renderer.domElement);

    var stats = new Stats();
    stats.domElement.style.position = 'absolute';
    stats.domElement.style.top = '0';
    document.body.appendChild(stats.domElement);

    var camera = new THREE.PerspectiveCamera(FOV, WIDTH / HEIGHT, NEAR, FAR);
    camera.position.z = 50;

    var trackballControl = new THREE.TrackballControls(camera, renderer.domElement);
    trackballControl.rotateSpeed = 5.0; // need to speed it up a little

    var scene = new THREE.Scene();

    var light = new THREE.PointLight(0xffffff, 1, Infinity);
    camera.add(light);
    scene.add(light);

    function render() {
        if (typeof updateVertices !== "undefined") {
            updateVertices();
        }
        renderer.render(scene, camera);
        stats.update();
    }

    function animate() {
        requestAnimationFrame(animate);
        trackballControl.update();
        render();
    }

    animate();
</script>
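Since the snippet only exercises localToWorld, here is the inverse direction for completeness, a hedged one-liner (the point itself is arbitrary):
// worldToLocal maps a world-space point into isoMesh's local coordinate system.
// It mutates its argument, hence the clone.
var worldPoint = new THREE.Vector3(0, 10, 0); // arbitrary example point
var localPoint = isoMesh.worldToLocal(worldPoint.clone());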

Does three.js renderer clone the objects positions?

I created a small scene with 3 spheres and a triangle connecting the 3 centers of the spheres, i.e. the triangle vertex positions are the same variables as the sphere positions.
Now I expected that if I change the position of one of the spheres, the triangle vertex should move together with it (since it's the same position object) and therefore still connect the three spheres.
However, if I make this coordinate change AFTER the renderer was called, the triangle is NOT changed. (Though it does change if I move the sphere BEFORE the renderer is called.)
This seems to indicate that the renderer doesn't use the original position objects but a clone of them.
Q: Is there a way to avoid this cloning behaviour (or whatever is the reason for the independent positions) so I can still change two objects with one variable change? Or am I doing something wrong?
The code:
var width = window.innerWidth;
var height = window.innerHeight;
var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(width, height);
document.body.appendChild(renderer.domElement);
var scene = new THREE.Scene;
var camera = new THREE.PerspectiveCamera(30, width / height, 0.1, 10000);
camera.position=new THREE.Vector3(50,50,50);
camera.lookAt(new THREE.Vector3(0,0,0));
scene.add(camera);
var pointLight = new THREE.PointLight(0xffffff);
pointLight.position=camera.position;
scene.add(pointLight);
var sphere=[];
var sphereGeometry = new THREE.SphereGeometry(1,8,8);
var sphereMaterial = new THREE.MeshLambertMaterial({ color: 0xff0000 });
var triGeom = new THREE.Geometry();
for (var i=0; i<3; i++) {
    sphere[i] = new THREE.Mesh(sphereGeometry, sphereMaterial);
    sphere[i].position=new THREE.Vector3(10*i,20+5*(i-1)^2,0);
    scene.add(sphere[i]);
    triGeom.vertices.push(sphere[i].position);
}
triGeom.faces.push( new THREE.Face3( 0, 1, 2 ) );
triGeom.computeFaceNormals();
var tri= new THREE.Mesh( triGeom, new THREE.MeshLambertMaterial({side:THREE.DoubleSide, color: 0x00ff00}) );
scene.add(tri);
sphere[0].position.x+=10; // this changes both sphere and triangle vertex
renderer.render(scene, camera);
sphere[1].position.x+=10; // this changes only the sphere
renderer.render(scene, camera);
This is probably because of the geometry caching feature. You will have to set triGeom.verticesNeedUpdate = true every time you change a vertex position.
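Applied to the code above, a minimal sketch of that fix:
sphere[1].position.x += 10;        // moves the sphere and the shared vertex object
triGeom.verticesNeedUpdate = true; // flag the cached vertices as stale so they get re-uploaded
renderer.render(scene, camera);    // the triangle now follows the sphere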

Using multiple textures on a sphere [Three.js]

Is it possible to load multiple textures on a sphere?
I mean to say, is there any way in Three.js to split a sphere into n pieces, texture them separately, and render those pieces once again as a whole sphere?
I do not want to load the entire texture onto the sphere at once; instead, only the parts the user will first see on screen should be rendered, and the rest of the texture should be loaded as the user rotates the sphere.
Moreover, when I use a single image on a sphere, it seems to converge at the poles, which makes it look worse.
This should help: https://open.bekk.no/procedural-planet-in-webgl-and-three-js
Instead of using a sphere, try using a cube and expanding it into a sphere. Working with cube logic on the cube-sphere will save you a good amount of time.
// 'radius' and 'sphereColor' are assumed to be defined elsewhere
var geometry = new THREE.BoxGeometry( 1, 1, 1, 8, 8, 8 );
for ( var i in geometry.vertices ) {
    var vertex = geometry.vertices[ i ];
    vertex.normalize().multiplyScalar(radius);
}

var materialArray = [];
var faceMaterial = new THREE.MeshLambertMaterial({
    color: sphereColor,
    transparent: true,
    opacity: 0.4
});
for (var i = 0; i < 6; i++) {
    materialArray.push(faceMaterial);
}
var material = new THREE.MeshFaceMaterial(materialArray);
var sphere = new THREE.Mesh( geometry, material );
scene.add( sphere );
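Building on that, each of the cube's six faces can take its own material, which gives you six independently textured pieces on the cube-sphere. A hedged sketch (the texture file names are placeholders):
// One material per cube face - the file names below are hypothetical placeholders.
var loader = new THREE.TextureLoader();
var materialArray = [];
for (var i = 0; i < 6; i++) {
    materialArray.push(new THREE.MeshLambertMaterial({
        map: loader.load('sphere_face_' + i + '.jpg')
    }));
}
// Same construction as above; newer three.js versions also accept the array
// directly in THREE.Mesh instead of wrapping it in MeshFaceMaterial.
var material = new THREE.MeshFaceMaterial(materialArray);
var sphere = new THREE.Mesh(geometry, material);
scene.add(sphere);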
