I'm trying to use part of a video as a texture in a Three.js mesh.
The video is here: http://video-processing.s3.amazonaws.com/example.MP4. It's from a fisheye lens, and I only want to use the part with actual content, i.e. the circle in the middle.
I want to somehow mask, crop or position and stretch the video on the mesh so that only this part shows and the black part is ignored.
Video code
var video = document.createElement( 'video' );
video.loop = true;
video.crossOrigin = 'anonymous';
video.preload = 'auto';
video.src = "http://video-processing.s3.amazonaws.com/example.MP4";
video.play();
var texture = new THREE.VideoTexture( video );
texture.minFilter = THREE.NearestFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
var material = new THREE.MeshBasicMaterial( { map : texture } );
The video is then projected onto a 220-degree sphere to give the VR impression.
var geometry = new THREE.SphereGeometry( 200,100,100, 0, 220 * Math.PI / 180, 0, Math.PI);
Here is a code pen
http://codepen.io/bknill/pen/vXBWGv
Can anyone let me know the best way to do this?
You can use texture.repeat to scale the texture
http://threejs.org/docs/#Reference/Textures/Texture
For example, to scale the texture 2x on both axes:
texture.repeat.set(0.5, 0.5);
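For this particular video that means zooming into the content circle: repeat values below 1 scale the texture up, and offset re-centers the visible window. A rough sketch (the 0.5/0.25 values are guesses you would tune to the actual image circle):
// Show only the middle 50% of the frame, keeping the crop centered.
texture.repeat.set(0.5, 0.5);
texture.offset.set(0.25, 0.25);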
In short, you need to update the UV-Map of the sphere so that the relevant area of your texture is assigned to the corresponding vertices of the sphere.
The UV coordinates for each vertex define the coordinates within the texture that are assigned to that vertex (in the range [0..1]; with three.js' default texture orientation, (0, 0) is the bottom-left corner and (1, 1) the top-right corner of your video). This example should give you an idea of what this is about.
Those UV-coordinates are stored in your geometry as geometry.faceVertexUvs[0] such that every vertex of every face has a THREE.Vector2 value for the UV-coordinate. This is a two-dimensional array, the first index is the face-index and the second one the vertex-index for the face (see example).
As for generating the UV map, there are at least two ways to do this. The probably easier way (ymmv, but I'd always go this route) would be to create the UV map using 3D editing software like Blender and export the resulting object using the three.js exporter plugin.
The other way is to compute the values by hand. I would suggest you first try to simply use an orthographic projection of the sphere. So basically, if you have a unit-sphere at the origin, simply drop the z-coordinate of the vertices and use u = x/2 + 0.5 and v = y/2 + 0.5 as UV-coordinates.
In JS that would be something like this:
// create the geometry (note that for simplicity, we're
// a) using a unit-sphere and
// b) using an exact half-sphere)
const geometry = new THREE.SphereGeometry(1, 18, 18, Math.PI, Math.PI);
const uvs = geometry.faceVertexUvs[0];
const vertices = geometry.vertices;

// compute the UVs from the vertices of the sphere. You will probably need
// something a bit more elaborate than this for the 220-degree FOV, maybe also
// some lens distortion, but it will boil down to something like this:
for (let i = 0; i < geometry.faces.length; i++) {
  const face = geometry.faces[i];
  const faceVertices = [vertices[face.a], vertices[face.b], vertices[face.c]];

  for (let j = 0; j < 3; j++) {
    const vertex = faceVertices[j];
    uvs[i][j].set(vertex.x / 2 + 0.5, vertex.y / 2 + 0.5);
  }
}
geometry.uvsNeedUpdate = true;
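For the 220° lens specifically, an equidistant fisheye model is a common first approximation (a sketch under that assumption, not calibrated to your camera, continuing from the variables above): the distance of a point from the image center grows linearly with the angle between its view ray and the optical axis.
// Sketch: equidistant fisheye UVs, assuming the lens' optical axis is -z
// and the image circle exactly fills the frame.
const FOV = 220 * Math.PI / 180;
for (let i = 0; i < geometry.faces.length; i++) {
  const face = geometry.faces[i];
  const faceVertices = [vertices[face.a], vertices[face.b], vertices[face.c]];
  for (let j = 0; j < 3; j++) {
    const v = faceVertices[j].clone().normalize();
    const theta = Math.acos(-v.z);     // angle from the optical axis
    const r = theta / (FOV / 2);       // 0 at the center, 1 at the image circle
    const phi = Math.atan2(v.y, v.x);  // direction around the axis
    uvs[i][j].set(0.5 + 0.5 * r * Math.cos(phi), 0.5 + 0.5 * r * Math.sin(phi));
  }
}
geometry.uvsNeedUpdate = true;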
(if you need more information in either direction, drop a comment and I will elaborate)
Related
I'd like to be able to set the rotation of a Three.js sphere to an absolute value, but whenever I call rotateY, the value I apply is added to or subtracted from the last rotation rather than becoming the new absolute rotation setting.
In a related answer about a cube (Three.js Set absolute local rotation), the cube has a rotation attribute, and cube.rotation.x = someValue results in the kind of absolute rotation that I'm looking for.
But the SphereGeometry object that I'm using (with a world map as its texture) has no rotation attribute.
I suppose I could keep track of previous rotations, and apply only the difference, but I'd think that would suffer eventually from cumulative round-off errors.
Is there another way to do this? A reset method of some sort?
async orient(lon: number, lat: number): Promise<void> {
  if (Globe.mapFailed)
    throw new Error('Map not available');
  else if (!Globe.mapImage)
    await new Promise<void>((resolve, reject) => Globe.waitList.push({ resolve, reject }));

  if (!this.initialized) {
    this.camera = new PerspectiveCamera(FIELD_OF_VIEW, 1);
    this.scene = new Scene();
    this.globe = new SphereGeometry(GLOBE_RADIUS, 50, 50);

    const mesh = new Mesh(
      this.globe,
      new MeshBasicMaterial({
        map: new CanvasTexture(Globe.mapCanvas)
      })
    );

    this.renderer = new WebGLRenderer({ alpha: true });
    this.renderer.setSize(GLOBE_PIXEL_SIZE, GLOBE_PIXEL_SIZE);
    this.rendererHost.appendChild(this.renderer.domElement);
    this.scene.add(mesh);
    this.camera.position.z = VIEW_DISTANCE;
    this.camera.rotation.order = 'YXZ';
    this.initialized = true;
  }

  this.globe.rotateY(PI / 20); // Just a sample value I experimented with
  this.camera.rotation.z = (lat >= 0 ? PI : 0);
  requestAnimationFrame(() => this.renderer.render(this.scene, this.camera));
}
Update:
My workaround for now is this:
this.globe.rotateX(-this.lat);
this.globe.rotateY(this.lon);
this.lon = to_radian(lon);
this.lat = to_radian(lat);
this.globe.rotateY(-this.lon);
this.globe.rotateX(this.lat);
I'm saving the previous rotations so that I can undo them before applying the new ones. (Degree/radian conversions, and the reversed sign of the longitude rotation, obscure the process a bit.)
I think you're confusing geometry.rotateY(rot) with mesh.rotation.y = rot. As explained in the docs:
.rotateY(): Rotate the geometry about the Y axis. This is typically done as a one time operation, and not during a loop. Use Object3D.rotation for typical real-time mesh rotation.
geometry.rotateY(rot) should only be used once because it updates the values of all the vertex positions, so it has to iterate through every vertex and update it. This is useful if you need to modify the "original state" of your geometry, for example a character model that needs to start facing down the z-axis.
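For instance, a one-time bake like this permanently rewrites every vertex position (a sketch; characterGeometry is a made-up name):
// One-time operation at load: rotate the geometry itself so the
// character model starts out facing down the z-axis.
characterGeometry.rotateY(Math.PI);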
mesh.rotation.y = rot; is what you're probably looking for. This is what you use during realtime rotations, so the intrinsic vertex positions are left untouched, you're just rotating the mesh as a whole. For example, when your character is running all over the map.
this.mesh = new Mesh(geometry, material);
// Set rotation to an absolute rotation value
this.mesh.rotation.y = Math.PI / 20;
// Increment rotation a relative amount (like once per frame):
this.mesh.rotation.y += Math.PI / 20;
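Applied to the orient() method from the question, the workaround then collapses to absolute assignments (a sketch; it assumes you keep a reference to the Mesh, here called globeMesh, instead of only the SphereGeometry, and reuses the question's own to_radian helper):
// Rotate the mesh, not the geometry: absolute values, nothing to undo.
this.globeMesh.rotation.order = 'YXZ';       // longitude spin first, then latitude tilt
this.globeMesh.rotation.y = -to_radian(lon);
this.globeMesh.rotation.x = to_radian(lat);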
I am currently trying to project an image onto the inside of a half-sphere in a three.js project. The half-sphere is created via
const geometry = new THREE.SphereGeometry(Component.radius, this.resolution, this.resolution,
Math.PI, Math.PI, 0, Math.PI);
this.material = new THREE.MeshStandardMaterial({color: 0xffffff});
this.material.side = THREE.DoubleSide;
this.sphere = new THREE.Mesh(geometry, this.material);
// The image of the texture is set later dynamically via
// this.material.map = textureLoader.load(...);
With radius and resolution being constants. This works fine, except for one issue: The image becomes distorted around the "top" and "bottom" of the sphere, like this:
Simple example of distorted texture:
The texture originally had the value (0, 0) in the bottom-left corner and (1, 0) in the bottom-right, and with the camera facing down from the center of the hemisphere, the bottom-left and bottom-right corners are squished onto the "bottom" point of the sphere.
I want to change this behavior so that the texture corners instead sit where they would if you placed the square texture inside the circle, with the corners touching it, and then stretched the edges between the corners to match the circle. Simple mock of what I mean:
I have tried playing with the mapping attribute of my texture, but from my understanding that doesn't change the behaviour.
After changing the UV coordinates, my half-sphere texture isn't stretching at the border any more:
this.sphereGeometry = new THREE.SphereGeometry(
10,
32,
24,
0,
Math.PI,
0,
Math.PI
);
const {
uv: uvAttribute,
normal
} = this.sphereGeometry.attributes;
for (let i = 0; i < normal.count; i += 1) {
  let u = normal.getX(i);
  let v = normal.getY(i);
  u = u / 2 + 0.5;
  v = v / 2 + 0.5;
  uvAttribute.setXY(i, u, v);
}
const texture = new THREE.TextureLoader().load(
'https://i.imgur.com/CslEXIS.jpg'
);
texture.flipY = false;
texture.mapping = THREE.CubeUVRefractionMapping;
texture.needsUpdate = true;
const basicMaterial = new THREE.MeshBasicMaterial({
map: texture,
side: THREE.DoubleSide,
});
this.sphere = new THREE.Mesh(this.sphereGeometry, basicMaterial);
this.scene.add(this.sphere);
I am new to Three.js, so perhaps I am not going about this optimally.
I have geometry which I create as follows,
const geo = new THREE.PlaneBufferGeometry(10,0);
I then apply a rotation to it
geo.applyMatrix( new THREE.Matrix4().makeRotationX( Math.PI * 0.5 ) );
then I create a Mesh from it
const open = new THREE.Mesh( geo, materialNormal);
I then apply a bunch of operations to the mesh to position it correctly, as follows:
open.position.copy(v2(10, 20));
open.position.z = 0.5 * 10;
open.position.x -= 20;
open.position.y -= 10;
open.rotation.z = angle;
Now what is the best way to get the vertices of the mesh, both before and after its position is changed? I was surprised to discover that a mesh's vertices are not directly accessible in three.js.
Any hints and code samples would be greatly appreciated.
I think you're getting tripped-up by some semantics regarding three.js objects.
1) A Mesh does not have vertices. A Mesh contains references to Geometry/BufferGeometry, and Material(s). The vertices are contained in the Mesh's geometry property/object.
2) You're using PlaneBufferGeometry, which means an implementation of a BufferGeometry object. BufferGeometry keeps its vertices in the position attribute (mesh.geometry.attributes.position). Keep in mind that the vertex order may be affected by the index property (mesh.geometry.index).
Now to your question: the geometry's origin is also its parent Mesh's origin, so your "before mesh transformation" vertex positions are exactly the same as when you created the mesh. Just read them out as-is.
To get the "after mesh transformation" vertex positions, you'll need to take each vertex, and convert it from the Mesh's local space, into world space. Luckily, three.js has a convenient function to do this:
var tempVertex = new THREE.Vector3();
// set tempVertex based on information from mesh.geometry.attributes.position
mesh.localToWorld(tempVertex);
// tempVertex is converted from local coordinates into world coordinates,
// which is its "after mesh transformation" position
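Putting that together, a minimal sketch for a BufferGeometry-based mesh might look like this:
mesh.updateMatrixWorld(true); // make sure the world matrix is current
var positions = mesh.geometry.attributes.position.array;
var worldVertices = [];
for (var i = 0; i < positions.length; i += 3) {
  // local-space position, i.e. the "before transformation" value
  var vertex = new THREE.Vector3(positions[i], positions[i + 1], positions[i + 2]);
  // world-space position, i.e. the "after transformation" value
  worldVertices.push(mesh.localToWorld(vertex));
}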
Here's an example written in TypeScript.
It gets the object's vertices in the world coordinate system.
GetObjectVertices(obj: THREE.Object3D): { pts: Array<THREE.Vector3>, faces: Array<THREE.Face3> }
{
    let pts: Array<THREE.Vector3> = [];
    let rs = { pts: pts, faces: null };

    if (obj.hasOwnProperty("geometry"))
    {
        let geo = obj["geometry"];

        if (geo instanceof THREE.Geometry)
        {
            for (let pt of geo.vertices)
            {
                pts.push(pt.clone().applyMatrix4(obj.matrix));
            }
            rs.faces = geo.faces;
        }
        else if (geo instanceof THREE.BufferGeometry)
        {
            let tempGeo = new THREE.Geometry().fromBufferGeometry(geo);
            for (let pt of tempGeo.vertices)
            {
                pts.push(pt.applyMatrix4(obj.matrix));
            }
            rs.faces = tempGeo.faces;
            tempGeo.dispose();
        }
    }
    return rs;
}
or
if (geo instanceof THREE.BufferGeometry)
{
    let positions: Float32Array = geo.attributes["position"].array;
    let ptCount = positions.length / 3;

    for (let i = 0; i < ptCount; i++)
    {
        let p = new THREE.Vector3(positions[i * 3], positions[i * 3 + 1], positions[i * 3 + 2]);
        pts.push(p.applyMatrix4(obj.matrix)); // apply the object's transform, as in the first variant
    }
}
My display has a resolution of 7680x4320 pixels. I want to display up to 4 million differently colored squares, and I want to change the number of squares with a slider. I currently have two versions. One uses canvas fillRect and looks something like this:
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
for (var i = 0; i < num_squares; i++) {
  ctx.fillStyle = someColor;
  ctx.fillRect(pos_x, pos_y, square_width, square_height); // fillRect takes (x, y, width, height)
  // set pos_x and pos_y for next square
}
And one with webGL and three.js. Same loop, but I create a box geometry and a mesh for every square:
var geometry = new THREE.BoxGeometry( width_height, width_height, 0);
for (var i = 0; i < num_squares; i++) {
  var material = new THREE.MeshLambertMaterial( { color: Math.random() * 0xffffff } );
  material.emissive = new THREE.Color( Math.random(), Math.random(), Math.random() );
  var object = new THREE.Mesh( geometry, material );
}
They both work quite fine for a few thousand squares. The first version can do up to one million squares, but everything over a million is just awfully slow. I want to update the color and the number of squares dynamically.
Does anyone have tips on how to be more efficient with three.js/WebGL/canvas?
EDIT1: Second version: This is what I do at the beginning and when the slider has changed:
// Remove all objects from scene
var obj, i;
for ( i = scene.children.length - 1; i >= 0; i-- ) {
  obj = scene.children[ i ];
  if ( obj !== camera ) {
    scene.remove( obj );
  }
}
// Fill scene with new objects
num_squares = gui_dat.squareNum;
var window_pixel = window.innerWidth * window.innerHeight;
var pixel_per_square = window_pixel / num_squares;
var width_height = Math.floor(Math.sqrt(pixel_per_square));
var geometry = new THREE.BoxGeometry( width_height, width_height, 0);
var pos_x = width_height/2;
var pos_y = width_height/2;
for (var i = 0; i < num_squares; i++) {
  var material = new THREE.MeshLambertMaterial( { color: Math.random() * 0xffffff } );
  material.emissive = new THREE.Color( Math.random(), Math.random(), Math.random() );
  var object = new THREE.Mesh( geometry, material );
  object.position.x = pos_x;
  object.position.y = pos_y;
  pos_x += width_height;
  if (pos_x > window.innerWidth) {
    pos_x = width_height / 2;
    pos_y += width_height;
  }
  scene.add( object );
}
The fastest way to draw squares is to use the gl.POINTS primitive and then set gl_PointSize to the pixel size.
In three.js, gl.POINTS is wrapped inside the THREE.PointCloud object.
You'll have to create a geometry object with one position for each point and pass that to the PointCloud constructor.
Here is an example of THREE.PointCloud in action:
http://codepen.io/seanseansean/pen/EaBZEY
geometry = new THREE.Geometry();
for (i = 0; i < particleCount; i++) {
  var vertex = new THREE.Vector3();
  vertex.x = Math.random() * 2000 - 1000;
  vertex.y = Math.random() * 2000 - 1000;
  vertex.z = Math.random() * 2000 - 1000;
  geometry.vertices.push(vertex);
}
...
materials[i] = new THREE.PointCloudMaterial({size:size});
particles = new THREE.PointCloud(geometry, materials[i]);
I didn't dig through all the code, but I set the particle count to 2M, and from my understanding five point clouds are generated, so 2M x 5 = 10M particles, and I'm getting around 30fps.
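Since each square needs its own color, per-vertex colors fit the same setup. A sketch against the old Geometry/PointCloud API the demo uses (particleCount and size as above):
var geometry = new THREE.Geometry();
for (var i = 0; i < particleCount; i++) {
  geometry.vertices.push(new THREE.Vector3(
    Math.random() * 2000 - 1000,
    Math.random() * 2000 - 1000,
    Math.random() * 2000 - 1000
  ));
  // one entry in geometry.colors per vertex, enabled via vertexColors below
  geometry.colors.push(new THREE.Color(Math.random(), Math.random(), Math.random()));
}
var material = new THREE.PointCloudMaterial({
  size: size,
  vertexColors: THREE.VertexColors
});
scene.add(new THREE.PointCloud(geometry, material));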
The highest number of individual points I've seen so far was with potree.
http://potree.org/, https://github.com/potree
Try one of the demos; I was able to view 5 million points in 3D at 20-30fps. I believe this is also close to the current technological limit.
I haven't tested potree myself, so I can't say much about the tech. But there is a data converter and a viewer (three.js based), so you should only need to figure out how to convert your data.
Briefly about your question
The best way to handle large data is to group it as a quad-tree (2D) or oct-tree (3D). This will allow the program to skip the parts that are too far from the camera or not visible at all.
On the other hand, the program doesn't like it when you make too many WebGL calls. Think of it like this: you want to create ~60 images each second, but each time you set some parameter on the GPU, the program must do a sync. Splitting the data means more setup work, so the tree must not be too detailed.
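A minimal flavor of that idea (a sketch, not a real oct-tree: it just splits the points into a coarse grid so three.js can frustum-cull whole cells; GRID, SIZE and numPoints are made-up parameters):
// Split the points into an 8x8 grid of separate PointClouds so whole
// cells can be culled instead of testing every point.
var GRID = 8, SIZE = 2000, numPoints = 4000000;
var cells = [];
for (var c = 0; c < GRID * GRID; c++) {
  cells.push(new THREE.Geometry());
}
for (var p = 0; p < numPoints; p++) {
  var v = new THREE.Vector3(Math.random() * SIZE, Math.random() * SIZE, 0);
  var cx = Math.min(GRID - 1, Math.floor(v.x / (SIZE / GRID)));
  var cy = Math.min(GRID - 1, Math.floor(v.y / (SIZE / GRID)));
  cells[cy * GRID + cx].vertices.push(v);
}
cells.forEach(function (geo) {
  scene.add(new THREE.PointCloud(geo, new THREE.PointCloudMaterial({ size: 2 })));
});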
Last thing, someone said:
You'll probably want to pass an array of values as one of the shader uniforms
I don't suggest it; bad idea. Texture lookups are quite fast, but attributes are always faster. If we are talking about 4M points, you can't afford to read that data from uniforms.
Sorry I can't help you with the code; I could do it without three.js, but I'm not a three.js expert :)
I would recommend trying the Pixi framework (as mentioned in the comments above).
It has a WebGL renderer, and some benchmarks are very promising.
http://www.goodboydigital.com/pixijs/bunnymark_v3/
It can handle a lot of animated sprites.
If your app only displays the squares without animating them, and they are very simple sprites (only one color), then it would give even better performance than the demo linked above.
I've been having the linewidth problem (something to do with ANGLE on Windows). I have resorted to using cylinders between 2 points (in 3D space). I have already solved the issue of getting the length of the cylinder from the two points via the 3D distance formula.
However, I have been having trouble getting the angle. I want the cylinder to rotate so that the angle found makes the cylinder connect the two points.
Essentially I am trying to find the angle between (x1,y1,z1) and (x2,y2,z2), and to use it to modify a cylinder (cylinder.rotation.x, cylinder.rotation.y, and cylinder.rotation.z).
You can use a transformation matrix to do that. Here's some example code:
function createCylinderFromEnds( material, radiusTop, radiusBottom, top, bottom, segmentsWidth, openEnded )
{
    // defaults
    segmentsWidth = (segmentsWidth === undefined) ? 32 : segmentsWidth;
    openEnded = (openEnded === undefined) ? false : openEnded;

    // Dummy settings, replace with proper code:
    var length = 100;
    var cylAxis = new THREE.Vector3( 100, 100, -100 );
    var center = new THREE.Vector3( -100, 100, 100 );
    ////////////////////

    var cylGeom = new THREE.CylinderGeometry( radiusTop, radiusBottom, length, segmentsWidth, 1, openEnded );
    var cyl = new THREE.Mesh( cylGeom, material );

    // pass in the cylinder itself, its desired axis, and the place to move the center.
    makeLengthAngleAxisTransform( cyl, cylAxis, center );

    return cyl;
}

// Transform cylinder to align with given axis and then move to center
function makeLengthAngleAxisTransform( cyl, cylAxis, center )
{
    cyl.matrixAutoUpdate = false;

    // From left to right using frames: translate, then rotate; TR.
    // So translate is first.
    cyl.matrix.makeTranslation( center.x, center.y, center.z );

    // take cross product of cylAxis and up vector to get axis of rotation
    var yAxis = new THREE.Vector3( 0, 1, 0 );

    // Needed later for dot product, just do it now;
    // a little lazy, should really copy it to a local Vector3.
    cylAxis.normalize();

    var rotationAxis = new THREE.Vector3();
    rotationAxis.crossVectors( cylAxis, yAxis );
    if ( rotationAxis.length() < 0.000001 )
    {
        // Special case: if rotationAxis is just about zero, set to X axis,
        // so that the angle can be given as 0 or PI. This works ONLY
        // because we know one of the two axes is +Y.
        rotationAxis.set( 1, 0, 0 );
    }
    rotationAxis.normalize();

    // take dot product of cylAxis and up vector to get cosine of angle of rotation
    var theta = -Math.acos( cylAxis.dot( yAxis ) );

    //cyl.matrix.makeRotationAxis( rotationAxis, theta );
    var rotMatrix = new THREE.Matrix4();
    rotMatrix.makeRotationAxis( rotationAxis, theta );
    cyl.matrix.multiply( rotMatrix );
}
I didn't write this. Find the full working sample here.
It's part of Chapter 5: Matrices from this awesome free Interactive 3D Graphics course taught using three.js.
I warmly recommend it. If you didn't have a chance to play with transformations, you might want to start with Chapter 4.
As a side note, you can also cheat a bit and use Matrix4's lookAt() to solve the rotation, offset the translation so the pivot is at the tip of the cylinder, and then apply the matrix to the cylinder.
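A minimal sketch of that lookAt() variant (it uses the old-style geometry.applyMatrix seen elsewhere on this page; pointA and pointB are hypothetical endpoint vectors):
function cylinderBetweenPoints( pointA, pointB, radius, material )
{
    var length = pointA.distanceTo( pointB );
    var geometry = new THREE.CylinderGeometry( radius, radius, length, 32 );

    // Move the pivot from the cylinder's center to one tip...
    geometry.applyMatrix( new THREE.Matrix4().makeTranslation( 0, length / 2, 0 ) );
    // ...and make its length run along +Z so lookAt() can aim it.
    geometry.applyMatrix( new THREE.Matrix4().makeRotationX( Math.PI / 2 ) );

    var cyl = new THREE.Mesh( geometry, material );
    cyl.position.copy( pointA );
    cyl.lookAt( pointB ); // Object3D.lookAt points the local +Z axis at the target
    return cyl;
}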