three js - how to make a decagon - javascript

var vector = new THREE.Vector3();
var spherical = new THREE.Spherical();
for ( var i = 0, l = objects.length; i < l; i ++ ) {
var phi = Math.acos( -1 + ( 2 * i ) / l );
var theta = Math.sqrt( l * Math.PI ) * phi;
var object = new THREE.Object3D();
spherical.set(1000, phi, theta );
object.position.setFromSpherical( spherical );
vector.copy( object.position ).multiplyScalar( 2 );
object.lookAt( vector );
}
This code distributes objects over a round three.js sphere. How can I make a decagon out of it? Thanks
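A minimal sketch of one possible approach (this assumes the goal is to place the same objects at the 10 vertices of a regular decagon of radius 1000 instead of over the sphere; objects and vector are the variables from the snippet above):
var radius = 1000;
var sides = 10; // decagon
for ( var i = 0, l = objects.length; i < l; i ++ ) {
// angle of the decagon vertex this object is assigned to, in the XZ plane
var angle = ( i % sides ) * ( 2 * Math.PI / sides );
var object = new THREE.Object3D();
object.position.set( radius * Math.cos( angle ), 0, radius * Math.sin( angle ) );
// face outward from the center, as in the spherical version
vector.copy( object.position ).multiplyScalar( 2 );
object.lookAt( vector );
}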

Related

Porting 3D Rose written by Wolfram Language into JavaScript

I'd like to get help from Geometry / Wolfram Mathematica people.
I want to visualize this 3D Rose in JavaScript (p5.js) environment.
This figure was originally generated in the Wolfram Language by Paul Nylander in 2004-2006, and below is the code:
Rose[x_, theta_] := Module[{
phi = (Pi/2)Exp[-theta/(8 Pi)],
X = 1 - (1/2)((5/4)(1 - Mod[3.6 theta, 2 Pi]/Pi)^2 - 1/4)^2},
y = 1.95653 x^2 (1.27689 x - 1)^2 Sin[phi];
r = X(x Sin[phi] + y Cos[phi]);
{r Sin[theta], r Cos[theta], X(x Cos[phi] - y Sin[phi]), EdgeForm[]
}];
ParametricPlot3D[
Rose[x, theta], {x, 0, 1}, {theta, -2 Pi, 15 Pi},
PlotPoints -> {25, 576}, LightSources -> {{{0, 0, 1}, RGBColor[1, 0, 0]}},
Compiled -> False
]
I tried to implement that code in JavaScript like this:
function rose(){
for(let theta = 0; theta < 2700; theta += 3){
beginShape(POINTS);
for(let x = 2.3; x < 3.3; x += 0.02){
let phi = (180/2) * Math.exp(- theta / (8*180));
let X = 1 - (1/2) * pow(((5/4) * pow((1 - (3.6 * theta % 360)/180), 2) - 1/4), 2);
let y = 1.95653 * pow(x, 2) * pow((1.27689*x - 1), 2) * sin(phi);
let r = X * (x*sin(phi) + y*cos(phi));
let pX = r * sin(theta);
let pY = r * cos(theta);
let pZ = (-X * (x * cos(phi) - y * sin(phi)))-200;
vertex(pX, pY, pZ);
}
endShape();
}
}
But I got the result below.
Unlike the original, the petal at the top is too stretched.
I suspected that the
let y = 1.95653 * pow(x, 2) * pow((1.27689*x - 1), 2) * sin(phi);
should perhaps be like the following instead...
let y = pow(1.95653*x, 2*pow(1.27689*x - 1, 2*sin(theta)));
But that went even further away from the original.
Maybe I'm asking a dumb question, but I've been stuck for several days.
If you see a mistake, please let me know.
Thank you in advance🙏
Update:
I changed the x range to 0~1 as defined by the original one.
Also, I simplified the JS code as below to find the error.
function rose_debug(){
for(let theta = 0; theta < 15*PI; theta += PI/60){
beginShape(POINTS);
for(let x = 0.0; x < 1.0; x += 0.005){
let phi = (PI/2) * Math.exp(- theta / (8*PI));
let y = pow(x, 4) * sin(phi);
let r = (x * sin(phi) + y * cos(phi));
let pX = r * sin(theta);
let pY = r * cos(theta);
let pZ = x * cos(phi) - y * sin(phi);
vertex(pX, pY, pZ);
}
endShape();
}
}
But the result still has the wrong proportions↓↓↓
Also, when I remove the term "sin(phi)" in the line "let y =..." like below
let y = pow(x, 4);
then I got a figure that somewhat resembles the original, like below🤣
At this point I was starting to suspect a mistake in the original equation, but I found another article by Jorge García Tíscar (in Spanish) that implemented the exact same 3D rose in the Wolfram Language successfully.
So, now I really don't know how the original is formed by the equation😇
Update2: Solved
I followed a suggestion by Trentium (Answer No.2 below) to stick to 0 ~ 1 as the range of x, and then multiply r and X by an arbitrary factor.
for(let x = 0; x < 1; x += 0.05){
r = r * 200;
X = X * 200;
Then I got this correct result that looks exactly the same as the original🥳
Simplified final code:
function rose_debug3(){
for(let x = 0; x <= 1; x += 0.05){
beginShape(POINTS);
for(let theta = -2*PI; theta <= 15*PI; theta += 17*PI/2000){
let phi = (PI / 2) * Math.exp(- theta / (8 * PI));
let X = 1 - (1/2) * ((5/4) * (1 - ((3.6 * theta) % (2*PI))/PI) ** 2 - 1/4) ** 2;
let y = 1.95653 * (x ** 2) * ((1.27689*x - 1) ** 2) * sin(phi);
let r = X * (x * sin(phi) + y * cos(phi));
if(0 < r){
const factor = 200;
let pX = r * sin(theta)*factor;
let pY = r * cos(theta)*factor;
let pZ = X * (x * cos(phi) - y * sin(phi))*factor;
vertex(pX, pY, pZ);
}
}
endShape();
}
}
The reason I got the vertically stretched figure at first was the range of x. I thought that changing the range of x just affected the overall size of the figure. But actually, the range affects the shape like this:
(1): 0 ~ x ~ 1, (2): 0 ~ x ~ 1.2
(3): 0 ~ x ~ 1.5, (4): 0 ~ x ~ 2.0
(5): flipped the (4)
Until then I had only seen results like (5) above, and didn't realize that the correct shape was hiding inside that figure.
Thank you Trentium so much for kindly helping me a lot!
Since this response is a significant departure from my earlier response, I am adding a new answer...
In rendering the rose algorithm in three.js (sorry, I'm not a P5 guy) it became apparent that, when generating the points, only the points with a positive radius should be rendered. Otherwise, superfluous points are rendered far outside the rose petals.
(Note: When running the code snippet, use the mouse to zoom and rotate the rendering of the rose.)
<script type="module">
import * as THREE from 'https://cdn.jsdelivr.net/npm/three@0.115.0/build/three.module.js';
import { OrbitControls } from 'https://cdn.jsdelivr.net/npm/three@0.115.0/examples/jsm/controls/OrbitControls.js';
//
// Set up the ThreeJS environment.
//
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
var camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 500 );
camera.position.set( 0, 0, 100 );
camera.lookAt( 0, 0, 0 );
var scene = new THREE.Scene();
let controls = new OrbitControls(camera, renderer.domElement);
//
// Create the points.
//
function rose( xLo, xHi, xCount, thetaLo, thetaHi, thetaCount ){
let vertex = [];
let colors = [];
let radius = [];
for( let x = xLo; x <= xHi; x += ( xHi - xLo ) / xCount ) {
for( let theta = thetaLo; theta <= thetaHi; theta += ( thetaHi - thetaLo ) / thetaCount ) {
let phi = ( Math.PI / 2 ) * Math.exp( -theta / ( 8 * Math.PI ) );
let X = 1 - ( 1 / 2 ) * ( ( 5 / 4 ) * ( 1 - ( ( 3.6 * theta ) % ( 2 * Math.PI ) ) / Math.PI ) ** 2 - 1 / 4 ) ** 2;
let y = 1.95653 * ( x ** 2 ) * ( (1.27689 * x - 1) ** 2 ) * Math.sin( phi );
let r = X * ( x * Math.sin( phi ) + y * Math.cos( phi ) );
//
// Fix: Ensure radius is positive, and scale up accordingly...
//
if ( 0 < r ) {
const factor = 20;
r = r * factor;
radius.push( r );
X = X * factor;
vertex.push( r * Math.sin( theta ), r * Math.cos( theta ), X * ( x * Math.cos( phi ) - y * Math.sin( phi ) ) );
}
}
}
//
// For the fun of it, let's adjust the color of the points based on the radius
// of the point such that the larger the radius, the deeper the red.
//
let rLo = Math.min( ...radius );
let rHi = Math.max( ...radius );
for ( let i = 0; i < radius.length; i++ ) {
let clr = new THREE.Color( Math.floor( 0x22 + ( 0xff - 0x22 ) * ( ( radius[ i ] - rLo ) / ( rHi - rLo ) ) ) * 0x10000 + 0x002222 );
colors.push( clr.r, clr.g, clr.b );
}
return [ vertex, colors, radius ];
}
//
// Create the geometry and mesh, and add to the THREE scene.
//
const geometry = new THREE.BufferGeometry();
let [ positions, colors, radius ] = rose( 0, 1, 20, -2 * Math.PI, 15 * Math.PI, 2000 );
geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) );
geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) );
const material = new THREE.PointsMaterial( { size: 4, vertexColors: true, depthTest: false, sizeAttenuation: false } );
const mesh = new THREE.Points( geometry, material );
scene.add( mesh );
//
// Render...
//
var animate = function () {
requestAnimationFrame( animate );
renderer.render( scene, camera );
};
animate();
</script>
Couple of notables:
When calling rose( xLo, xHi, xCount, thetaLo, thetaHi, thetaCount ), the upper range thetaHi can vary from Math.PI to 15 * Math.PI, which varies the number of petals.
Both xCount and thetaCount vary the density of the points. The Wolfram example uses 25 and 576, respectively, but this is to create a geometry mesh, whereas if creating a point field the density of points needs to be increased. Hence, in the code the values are 20 and 2000.
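For example, a usage sketch varying those arguments (this reuses the rose() function above; the specific numbers are only illustrative):
// Fewer petals: sweep theta only up to 5 * PI instead of 15 * PI.
let [ fewPositions, fewColors ] = rose( 0, 1, 20, -2 * Math.PI, 5 * Math.PI, 2000 );
// Denser point field: raise both xCount and thetaCount.
let [ densePositions, denseColors ] = rose( 0, 1, 40, -2 * Math.PI, 15 * Math.PI, 4000 );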
Enjoy!
Presumably the algorithm above is using cos() and sin() functions that expect angles in degrees rather than radians, but wherever an angle also passes through non-trigonometric transformations (such as the exponential or the modulo), the result will be incorrect.
For example, the following formula using radians...
phi = (Pi/2)Exp[-theta/(8 Pi)]
...has been incorrectly translated to...
phi = ( 180 / 2 ) * Math.exp( -theta / ( 8 * 180 ) )
To test, let's assume theta = 2. Using the original formula in radians...
phi = ( Math.PI / 2 ) * Math.exp( -2 / ( 8 * Math.PI ) )
= 1.451 rad
= 83.12 deg
...and now the incorrect version using degrees, which returns a different angle...
phi = ( 180 / 2 ) * Math.exp( -2 / ( 8 * 180 ) )
= 89.88 deg
= 1.569 rad
A similar issue will occur with the incorrectly translated expression...
pow( ( 1 - ( 3.6 * theta % 360 ) / 180 ), 2 )
Bottom line: Stick to radians.
P.S. Note that there might be other issues, but using radians rather than degrees needs to be corrected foremost...
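For instance, here are the two expressions from the question rewritten in radians (this mirrors the corrected code in the question's final update; theta is assumed to be in radians as well):
// phi: (Pi/2) * Exp[-theta/(8 Pi)], everything in radians
let phi = ( Math.PI / 2 ) * Math.exp( -theta / ( 8 * Math.PI ) );
// X: the modulo wraps at 2*PI and the division uses PI, not 180
let X = 1 - ( 1 / 2 ) * ( ( 5 / 4 ) * ( 1 - ( ( 3.6 * theta ) % ( 2 * Math.PI ) ) / Math.PI ) ** 2 - 1 / 4 ) ** 2;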

Pins position for Cloth in Three.js

Can someone help? I am simulating a cloth attached at its 4 corners. I am trying to re-locate the 4 pins (0, 10, 88, 98) of the cloth, which uses a 10x10 array. I want to be able to place each pin at a different position in x, y, z.
For this simulation I am using Three.js and Cloth.js.
Something similar to this example:
https://threejs.org/examples/#webgl_animation_cloth
Here is my Code and also the Cloth code I am using.
var pinsFormation = [];
pinsFormation.push( pins );
pins = [ 0, 10, 88, 98 ];
var container, stats;
var camera, scene, renderer, clothGeometry, object;
init();
animate();
function init() {
container = document.createElement( 'div' );
document.body.appendChild( container );
scene = new THREE.Scene();
scene.background = new THREE.Color( 0xFFFFFF );
camera = new THREE.PerspectiveCamera( 30, window.innerWidth / window.innerHeight, 1, 10000 );
camera.position.set( 1000, 50, 1000 );
// cloth
var material_wire = new THREE.MeshBasicMaterial( { color : 0x000000, side: THREE.DoubleSide, wireframe: true } );
clothGeometry = new THREE.ParametricGeometry( clothFunction, cloth.w, cloth.h );
object = new THREE.Mesh( clothGeometry, material_wire ); // clothMaterial
object.position.set( 0, 0, 0 );
scene.add( object );
// renderer
renderer = new THREE.WebGLRenderer( { antialias: true } );
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
container.appendChild( renderer.domElement );
var controls = new THREE.OrbitControls( camera, renderer.domElement );
controls.maxPolarAngle = Math.PI * 1.5;
window.addEventListener( 'resize', onWindowResize, false );
}
function onWindowResize() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize( window.innerWidth, window.innerHeight );
}
function animate() {
requestAnimationFrame( animate );
var time = Date.now();
var windStrength = Math.cos( time / 7000 ) * 20 + 40;
windForce.set( Math.sin( time / 2000 ), Math.cos( time / 3000 ), Math.sin( time / 1000 ) )
windForce.normalize()
windForce.multiplyScalar( windStrength );
simulate( time );
render();
}
function render() {
var p = cloth.particles;
for ( var i = 0, il = p.length; i < il; i ++ ) {
clothGeometry.vertices[ i ].copy( p[ i ].position );
}
clothGeometry.verticesNeedUpdate = true;
clothGeometry.computeFaceNormals();
clothGeometry.computeVertexNormals();
renderer.render( scene, camera );
}
// cloth.js
var DAMPING = 0.03;
var DRAG = 1 - DAMPING;
var MASS = 0.1;
var restDistance = 25;
var xSegs = 10;
var ySegs = 10;
var clothFunction = plane( restDistance * xSegs, restDistance * ySegs );
var cloth = new Cloth( xSegs, ySegs );
var GRAVITY = 981 * 1.4;
var gravity = new THREE.Vector3( 0, - GRAVITY, 0 ).multiplyScalar( MASS );
var TIMESTEP = 18 / 1000;
var TIMESTEP_SQ = TIMESTEP * TIMESTEP;
var pins = [];
var wind = true;
var windStrength = 2;
var windForce = new THREE.Vector3( 0, 0, 0 );
var tmpForce = new THREE.Vector3();
var lastTime;
function plane( width, height ) {
return function( u, v ) {
var x = ( u - 0.5 ) * width;
var y = ( v - 0.1 ) * height;
var z = 0;
return new THREE.Vector3( x, y, z );
};
}
function Particle( x, y, z, mass ) {
this.position = clothFunction( x, y ); // position
this.previous = clothFunction( x, y ); // previous
this.original = clothFunction( x, y );
this.a = new THREE.Vector3( 0, 0, 0 ); // acceleration
this.mass = mass;
this.invMass = 1 / mass;
this.tmp = new THREE.Vector3();
this.tmp2 = new THREE.Vector3();
}
// Force -> Acceleration
Particle.prototype.addForce = function( force ) {
this.a.add(
this.tmp2.copy( force ).multiplyScalar( this.invMass )
);
};
// Performs Verlet integration
Particle.prototype.integrate = function( timesq ) {
var newPos = this.tmp.subVectors( this.position, this.previous );
newPos.multiplyScalar( DRAG ).add( this.position );
newPos.add( this.a.multiplyScalar( timesq ) );
this.tmp = this.previous;
this.previous = this.position;
this.position = newPos;
this.a.set( 0, 0, 0 );
};
var diff = new THREE.Vector3();
function satisfyConstraints( p1, p2, distance ) {
diff.subVectors( p2.position, p1.position );
var currentDist = diff.length();
if ( currentDist === 0 ) return;
var correction = diff.multiplyScalar( 1 - distance / currentDist );
var correctionHalf = correction.multiplyScalar( 0.5 );
p1.position.add( correctionHalf );
p2.position.sub( correctionHalf );
}
function Cloth( w, h ) {
w = w || 20;
h = h || 20;
this.w = w;
this.h = h;
var particles = [];
var constraints = [];
var u, v;
// Create particles
for ( v = 0; v <= h; v ++ ) {
for ( u = 0; u <= w; u ++ ) {
particles.push(
new Particle( u / w, v / h, 0, MASS )
);
}
}
// Structural
for ( v = 0; v < h; v ++ ) {
for ( u = 0; u < w; u ++ ) {
constraints.push( [
particles[ index( u, v ) ],
particles[ index( u, v + 1 ) ],
restDistance
] );
constraints.push( [
particles[ index( u, v ) ],
particles[ index( u + 1, v ) ],
restDistance
] );
}
}
for ( u = w, v = 0; v < h; v ++ ) {
constraints.push( [
particles[ index( u, v ) ],
particles[ index( u, v + 1 ) ],
restDistance
] );
}
for ( v = h, u = 0; u < w; u ++ ) {
constraints.push( [
particles[ index( u, v ) ],
particles[ index( u + 1, v ) ],
restDistance
] );
}
this.particles = particles;
this.constraints = constraints;
function index( u, v ) {
return u + v * ( w + 1 );
}
this.index = index;
}
function simulate( time ) {
if ( ! lastTime ) {
lastTime = time;
return;
}
var i, il, particles, particle, pt, constraints, constraint;
// Aerodynamics forces
if ( wind ) {
var face, faces = clothGeometry.faces, normal;
particles = cloth.particles;
for ( i = 0, il = faces.length; i < il; i ++ ) {
face = faces[ i ];
normal = face.normal;
tmpForce.copy( normal ).normalize().multiplyScalar( normal.dot( windForce ) );
particles[ face.a ].addForce( tmpForce );
particles[ face.b ].addForce( tmpForce );
particles[ face.c ].addForce( tmpForce );
}
}
for ( particles = cloth.particles, i = 0, il = particles.length; i < il; i ++ ) {
particle = particles[ i ];
particle.addForce( gravity );
particle.integrate( TIMESTEP_SQ );
}
// Start Constraints
constraints = cloth.constraints;
il = constraints.length;
for ( i = 0; i < il; i ++ ) {
constraint = constraints[ i ];
satisfyConstraints( constraint[ 0 ], constraint[ 1 ], constraint[ 2 ] );
}
// Pin Constraints
for ( i = 0, il = pins.length; i < il; i ++ ) {
var xy = pins[ i ];
var p = particles[ xy ];
p.position.copy( p.original );
p.previous.copy( p.original );
}
}
The "pin" is just the index of one of the vertices.. so what you'll have to do is identify the vertex corresponding to the spot you want to pin.. you can get that from a raycast when the user clicks the mesh, or figure it our analytically.

How to get LatLng by clicking on rotating sphere (three.js)?

1) create earth object
self.earth = new THREE.Mesh(new THREE.SphereGeometry(50, 32, 32), new THREE.MeshBasicMaterial({map: tex}));
THREE.ImageUtils.crossOrigin = '';
self.obj = new THREE.Object3D();
self.obj.add(self.earth);
// self.obj.rotation.y = 34.3;
2) intersects
var mouse3D = new THREE.Vector3( );
var raycaster = new THREE.Raycaster();
mouse3D.set( ( event.clientX / window.innerWidth ) * 2 - 1, -( event.clientY / window.innerHeight ) * 2 + 1, 0.5 ).unproject( self.camera );
raycaster.set( self.camera.position, mouse3D.sub( self.camera.position ).normalize() );
var intersects = raycaster.intersectObject( self.earth );
if (intersects.length > 0) {
object = intersects[0];
r = 50; // radius
x = object.point.x ;
y = object.point.y ;
z = object.point.z ;
3) coords
var lat = ( 90 - (Math.acos(y / r)) * 180 / Math.PI ) - 10;
var lon = ((270 + (Math.atan2(x , z)) * 180 / Math.PI) % 360) - 10;
It works, but rotating the object (self.obj.rotation.y = 34.3;) breaks the calculation. Why?
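Most likely because intersects[0].point is in world coordinates, while the lat/lon math assumes the sphere's own un-rotated frame. A hedged sketch of one fix is to convert the hit point into the mesh's local space first (the -10 offsets from above are left out here, since the exact offsets depend on how the texture is aligned):
// Convert the world-space hit point into the earth mesh's local space,
// so any rotation of self.obj (or self.earth) is accounted for.
var local = self.earth.worldToLocal( intersects[ 0 ].point.clone() );
var r = 50; // sphere radius
var lat = 90 - Math.acos( local.y / r ) * 180 / Math.PI;
var lon = ( ( 270 + Math.atan2( local.x, local.z ) * 180 / Math.PI ) % 360 ) - 180;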

Add offset to DeviceOrientationControls in three.js

I'm trying to add an offset to the camera after the deviceControls.update(); call. I used DeviceOrientationControls as shown in this first example.
The offset will be the result of a drag gesture, as presented in this example.
When I multiply the 2 quaternions (I have tried a x b and b x a), the final result is not correct.
Here is my operation :
const m1 = new THREE.Matrix4();
m1.lookAt(new THREE.Vector3(), camera.target, THREE.Object3D.DefaultUp.clone());
const quater = new THREE.Quaternion();
quater.setFromRotationMatrix(m1);
const finalQuater = new THREE.Quaternion();
finalQuater.multiplyQuaternions(quater, camera.quaternion);
camera.quaternion.copy(finalQuater);
camera.target is my final drag target (Vector3), and camera.quaternion has been set by deviceControls.update() and is equal to the camera orientation, according to the device gyroscope.
Thanks for your help
Update: I have tried to change the rotation order; same problem. I think it is due to the origin change after the device orientation update, but I can't find how to solve it.
DeviceOrientationControls now has a property alphaOffsetAngle, and a method
controls.updateAlphaOffsetAngle( angle ); // angle is in radians
that will rotate the scene around the three.js 'Y' axis.
three.js r.77
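A minimal usage sketch based on that property/method (dragOffsetY is a hypothetical value, in radians, computed by your drag handler):
var controls = new THREE.DeviceOrientationControls( camera );
// ... in the drag handler, after computing the horizontal drag amount:
controls.updateAlphaOffsetAngle( dragOffsetY ); // rotates the view around the Y axis
// ... in the render loop:
controls.update();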
var rotY = 0;
var rotX = 0;
function setObjectQuaternion(quaternion, alpha, beta, gamma, orient) {
var zee = new THREE.Vector3( 0, 0, 1 );
var euler = new THREE.Euler();
var q0 = new THREE.Quaternion();
var q1 = new THREE.Quaternion( -Math.sqrt( 0.5 ), 0, 0, Math.sqrt( 0.5 ) ); // - PI/2 around the x-axis
if (screenOrientation == 0) {
var vectorFingerY = new THREE.Vector3( 1, 0, 0 );
var fingerQY = new THREE.Quaternion();
fingerQY.setFromAxisAngle ( vectorFingerY, -rotX );
}else if (screenOrientation == 180) {
var vectorFingerY = new THREE.Vector3( 1, 0, 0 );
var fingerQY = new THREE.Quaternion();
fingerQY.setFromAxisAngle ( vectorFingerY, rotX );
}else if (screenOrientation == 90) {
var vectorFingerY = new THREE.Vector3( 0, 1, 0 );
var fingerQY = new THREE.Quaternion();
fingerQY.setFromAxisAngle ( vectorFingerY, rotX );
}else if (screenOrientation == -90) {
var vectorFingerY = new THREE.Vector3( 0, 1, 0 );
var fingerQY = new THREE.Quaternion();
fingerQY.setFromAxisAngle ( vectorFingerY, -rotX );
}
q1.multiply( fingerQY );
euler.set( beta, alpha, - gamma, 'YXZ' ); // 'ZXY' for the device, but 'YXZ' for us
quaternion.setFromEuler( euler ); // orient the device
quaternion.multiply( q1 ); // camera looks out the back of the device, not the top
quaternion.multiply( q0.setFromAxisAngle( zee, - orient ) ); // adjust for screen orientation
};
function update(camera) {
if (window.orientation !== undefined && window.orientation !== null) screenOrientation = window.orientation;
var alpha = deviceOrientation.alpha ? THREE.Math.degToRad( deviceOrientation.alpha ) : 0; // Z
var beta = deviceOrientation.beta ? THREE.Math.degToRad( deviceOrientation.beta ) : 0; // X'
var gamma = deviceOrientation.gamma ? THREE.Math.degToRad( deviceOrientation.gamma ) : 0; // Y''
var orient = screenOrientation ? THREE.Math.degToRad( screenOrientation ) : 0; // O
setObjectQuaternion( camera.quaternion, alpha, beta, gamma, orient );
};
add this to your init
container.appendChild( renderer.domElement );
renderer.domElement.addEventListener( 'touchstart', function (e) {
if (controls) {
e.preventDefault();
e.stopPropagation();
tempX = e.touches[ 0 ].pageX;
tempY = e.touches[ 0 ].pageY;
}
}, false );
renderer.domElement.addEventListener( 'touchmove', function (e) {
if (controls) {
e.preventDefault();
e.stopPropagation();
rotY += THREE.Math.degToRad((tempX - e.touches[ 0 ].pageX)/4);
rotX += THREE.Math.degToRad((tempY - e.touches[ 0 ].pageY)/4);
mesh.quaternion.copy(MeshStartQY);
var vectorFingerY = new THREE.Vector3( 0, 1, 0 );
var fingerQY = new THREE.Quaternion();
fingerQY.setFromAxisAngle ( vectorFingerY, rotY );
mesh.quaternion.multiply(fingerQY);
tempX = e.touches[ 0 ].pageX;
tempY = e.touches[ 0 ].pageY;
}
}, false );
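For completeness, a hedged sketch of the extra module-level state these snippets assume (names are taken from the code above; mesh is whatever object you rotate in the touchmove handler, and MeshStartQY should be copied from it once it exists):
var deviceOrientation = {};                       // filled by the 'deviceorientation' event
var screenOrientation = window.orientation || 0;  // refreshed inside update(camera)
var tempX = 0, tempY = 0;                         // last touch position, used for the drag delta
var MeshStartQY = new THREE.Quaternion();         // e.g. MeshStartQY.copy( mesh.quaternion ) after creating the mesh
window.addEventListener( 'deviceorientation', function ( e ) { deviceOrientation = e; }, false );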

How to add faces to THREE.BufferGeometry?

I have created programmatically a simple mesh:
var CreateSimpleMesh = new function () {
var xy = [],
maxX = 7,
maxY = 10,
river = [[0, 5], [0, 4], [1, 3], [2, 2], [3, 2], [4, 1], [5, 1], [6, 0]],
grassGeometry = new THREE.BufferGeometry(),
grassVertexPositions = []
this.init = function () {
for (i = 0; i < maxX; i++) {
for (j = 0; j < maxY; j++) {
xy.push([i, j])
}
}
for (var i = 0; i < xy.length; i++) {
grassVertexPositions.push([xy[i][0], xy[i][1], 0])
grassVertexPositions.push([xy[i][0] + 1, xy[i][1], 0])
grassVertexPositions.push([xy[i][0], xy[i][1] + 1, 0])
grassVertexPositions.push([xy[i][0] + 1, xy[i][1] + 1, 0])
grassVertexPositions.push([xy[i][0], xy[i][1] + 1, 0])
grassVertexPositions.push([xy[i][0] + 1, xy[i][1], 0])
}
for (var i = 0; i < grassVertexPositions.length; i++) {
for (var j = 0; j < river.length; j++) {
if (river[j][0] == grassVertexPositions[i][0] && river[j][1] == grassVertexPositions[i][1]) {
grassVertexPositions[i][2] = -0.5
}
}
}
var grassVertices = new Float32Array(grassVertexPositions.length * 3)
for (var i = 0; i < grassVertexPositions.length; i++) {
grassVertices[i * 3 + 0] = grassVertexPositions[i][0];
grassVertices[i * 3 + 1] = grassVertexPositions[i][1];
grassVertices[i * 3 + 2] = grassVertexPositions[i][2];
}
grassGeometry.addAttribute('position', new THREE.BufferAttribute(grassVertices, 3))
var grassMaterial = new THREE.MeshLambertMaterial({color: 0x00ff00}),
grassMesh = new THREE.Mesh(grassGeometry, grassMaterial)
grassMesh.rotation.x = -Math.PI / 2
Test.getScene().add(grassMesh);
}
}
The problem is that this mesh has only vertices. I have tried to add faces to it like in this question, using THREE.Shape.Utils.triangulateShape, but BufferGeometry is different from normal geometry and it does not work. Is it possible to add faces to BufferGeometry?
EDIT:
Working fiddle
Here is how to create a mesh using BufferGeometry. This is the simpler "non-indexed" BufferGeometry, where vertices are not shared.
// non-indexed buffer geometry
var geometry = new THREE.BufferGeometry();
// number of triangles
var NUM_TRIANGLES = 10;
// attributes
var positions = new Float32Array( NUM_TRIANGLES * 3 * 3 );
var normals = new Float32Array( NUM_TRIANGLES * 3 * 3 );
var colors = new Float32Array( NUM_TRIANGLES * 3 * 3 );
var uvs = new Float32Array( NUM_TRIANGLES * 3 * 2 );
var color = new THREE.Color();
var scale = 15;
var size = 5;
var x, y, z;
for ( var i = 0, l = NUM_TRIANGLES * 3; i < l; i ++ ) {
if ( i % 3 === 0 ) {
x = ( Math.random() - 0.5 ) * scale;
y = ( Math.random() - 0.5 ) * scale;
z = ( Math.random() - 0.5 ) * scale;
} else {
x = x + size * ( Math.random() - 0.5 );
y = y + size * ( Math.random() - 0.5 );
z = z + size * ( Math.random() - 0.5 );
}
var index = 3 * i;
// positions
positions[ index ] = x;
positions[ index + 1 ] = y;
positions[ index + 2 ] = z;
//normals -- we will set normals later
// colors
color.setHSL( i / l, 1.0, 0.5 );
colors[ index ] = color.r;
colors[ index + 1 ] = color.g;
colors[ index + 2 ] = color.b;
// uvs
uvs[ index ] = Math.random(); // just something...
uvs[ index + 1 ] = Math.random();
}
geometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
geometry.addAttribute( 'normal', new THREE.BufferAttribute( normals, 3 ) );
geometry.addAttribute( 'color', new THREE.BufferAttribute( colors, 3 ) );
geometry.addAttribute( 'uv', new THREE.BufferAttribute( uvs, 2 ) );
// optional
geometry.computeBoundingBox();
geometry.computeBoundingSphere();
// set the normals
geometry.computeVertexNormals(); // computed vertex normals are orthogonal to the face for non-indexed BufferGeometry
See the three.js examples for many additional examples of creating BufferGeometry. Also check out the source code for PlaneGeometry and SphereGeometry, which are reasonably easy to understand.
three.js r.143
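For reference, a minimal sketch of the indexed variant, where vertices are shared and the faces are supplied explicitly through an index buffer (this assumes a three.js release with BufferGeometry.setAttribute/setIndex, such as the r143 noted above):
// Indexed BufferGeometry: 4 shared vertices, 2 triangular faces defined by the index buffer.
const geometry = new THREE.BufferGeometry();
const vertices = new Float32Array( [
0, 0, 0,   // vertex 0
1, 0, 0,   // vertex 1
1, 1, 0,   // vertex 2
0, 1, 0    // vertex 3
] );
const indices = [ 0, 1, 2,  2, 3, 0 ]; // each triple of indices is one face
geometry.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );
geometry.setIndex( indices );
geometry.computeVertexNormals();
const quad = new THREE.Mesh( geometry, new THREE.MeshLambertMaterial( { color: 0x00ff00, side: THREE.DoubleSide } ) );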
You can add faces using the built-in three.js method Geometry.fromBufferGeometry() (note that this relies on THREE.Geometry, which was removed from core in r125, so it only applies to older releases). In your case it would be something like this:
var directGeo = new THREE.Geometry();
directGeo.fromBufferGeometry(grassGeometry);
Then use directGeo to build your mesh, and it will have faces.
