Strange behaviour of shadowing in ThreeJS - javascript

So I have a three.js scene with some spheres (multi-material) added, and I have also added a directional light:
this.light = new THREE.DirectionalLight( 0xFFFFFF, 1 );
this.light.position.set( 2, 10, 2 );
this.light.castShadow = true;
this.light.shadowMapSoft = true;
this.light.shadowCameraVisible = true;
this.light.shadowCameraNear = 1;
this.light.shadowCameraFar = 10;
this.light.shadowBias = 0.0009;
this.light.shadowDarkness = 0.3;
this.light.shadowMapWidth = 1024;
this.light.shadowMapHeight = 1024;
this.light.shadowCameraLeft = -8;
this.light.shadowCameraRight = 8;
this.light.shadowCameraTop = 8;
this.light.shadowCameraBottom = -8;
When the user adds or removes spheres, a function executes that "reforms" the shadow camera frustum like this:
this.light.position.set( posV.x, posV.y, posV.z);
this.light.shadowCamera.far = l2*3;
this.light.shadowCamera.left = -l2;
this.light.shadowCamera.right = l2;
this.light.shadowCamera.bottom = -l2;
this.light.shadowCamera.top = l2;
this.light.shadowCamera.updateProjectionMatrix();
A possible result of the above code, and a second view of the same situation: [screenshots omitted]
I have turned the shadow camera's frustum visibility on, so it is shown. The problem is the shadowing that is generated for no reason (pointed out by the red arrows). No other objects are in the scene at that moment, and the spheres are fully inside the camera frustum.
Shadowing problems like these are common after updating the spheres (add/remove). Does anybody have any idea what causes this?
I am using three.js r72.
Thanks!

Based on your comments, what you are seeing is self-shadowing artifacts.
You need to adjust the shadowBias ( now called shadow.bias ) parameter.
Varying the shadow bias results in a trade-off between "peter-panning" (too much positive bias) and self-shadowing artifacts (too much negative bias).
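For example, a minimal sketch of tuning the bias (the value here is a hypothetical starting point; the right one is scene-dependent and found by trial and error):
// r72 and earlier: the property lives directly on the light
this.light.shadowBias = 0.005; // nudge positive to remove self-shadowing, but watch for peter-panning
// r73 and later: it moved onto light.shadow
// this.light.shadow.bias = 0.005;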
three.js r.73

Related

Dynamically update plane / cube vertices using threejs - issue while updating matrix

I have a plane which is added to the scene using three.js. I am adjusting its position and rotation using user-interface controls. If there is a change in position or rotation, I run the code below to get the updated plane / cube vertices and matrix values:
plane.updateMatrixWorld();
plane.updateMatrix();
plane.geometry.applyMatrix( plane.matrix );
plane.matrix.identity();
console.log(plane.matrix); // using this to get matrix values
console.log(plane.geometry.vertices); //using this to get plane vertices.
When I run the above code, I face a position shift of the plane / cube / mesh in the scene.
I tried adding the code below to make the vertex updates dynamic, but it did not work:
plane.verticesNeedUpdate = true;
plane.elementsNeedUpdate = true;
plane.morphTargetsNeedUpdate = true;
plane.uvsNeedUpdate = true;
plane.normalsNeedUpdate = true;
plane.colorsNeedUpdate = true;
plane.tangentsNeedUpdate = true;
Got the solution. When there is a change in position/rotation, call the code below:
var plane_vector = []; // must be initialized as an array before indexing into it
function updateVertices() {
    plane.updateMatrixWorld();
    console.log("plane Vertices: ");
    for (var i = 0; i < plane.geometry.vertices.length; i++) {
        // clone each local-space vertex and transform it into world space
        plane_vector[i] = plane.geometry.vertices[i].clone();
        plane_vector[i].applyMatrix4(plane.matrixWorld);
    }
    console.log(plane_vector); // the updated, world-space vertex positions
}

Three.js part of video as texture

I'm trying to use part of a video as a texture in a Three.js mesh.
The video is here: http://video-processing.s3.amazonaws.com/example.MP4 . It comes from a fisheye lens, and I want to use only the part with actual content, i.e. the circle in the middle.
I want to somehow mask, crop or position and stretch the video on the mesh so that only this part shows and the black part is ignored.
Video code
var video = document.createElement( 'video' );
video.loop = true;
video.crossOrigin = 'anonymous';
video.preload = 'auto';
video.src = "http://video-processing.s3.amazonaws.com/example.MP4";
video.play();
var texture = new THREE.VideoTexture( video );
texture.minFilter = THREE.NearestFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
var material = new THREE.MeshBasicMaterial( { map : texture } );
The video is then projected onto a 220 degree sphere, to give the VR impression.
var geometry = new THREE.SphereGeometry( 200,100,100, 0, 220 * Math.PI / 180, 0, Math.PI);
Here is a CodePen:
http://codepen.io/bknill/pen/vXBWGv
Can anyone let me know the best way to do this?
You can use texture.repeat to scale the texture
http://threejs.org/docs/#Reference/Textures/Texture
For example, to scale 2x on both axes:
texture.repeat.set(0.5, 0.5);
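To crop rather than just scale, texture.repeat is usually combined with texture.offset (both standard three.js Texture properties). A sketch with hypothetical crop values; you would measure the actual content circle in your footage:
// show only the central 50% of the video in both axes
texture.repeat.set(0.5, 0.5);   // sample half the texture in u and v
texture.offset.set(0.25, 0.25); // start a quarter of the way in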
In short, you need to update the UV-Map of the sphere so that the relevant area of your texture is assigned to the corresponding vertices of the sphere.
The UV-coordinates for each vertex define the coordinates within the texture that are assigned to that vertex (in a range [0..1], so coordinates (0, 0) are the bottom-left corner and (1, 1) the top-right corner of your video). This example should give you an idea of what this is about.
Those UV-coordinates are stored in your geometry as geometry.faceVertexUvs[0] such that every vertex of every face has a THREE.Vector2 value for its UV-coordinate. This is a two-dimensional array: the first index is the face-index and the second the vertex-index within the face (see example).
As for generating the UV-map, there are at least two ways to do this. The probably easier way (YMMV, but I'd always go this route) would be to create the UV-map using 3D-editing software like Blender and export the resulting object using the three.js exporter plugin.
The other way is to compute the values by hand. I would suggest you first try to simply use an orthographic projection of the sphere. So basically, if you have a unit-sphere at the origin, simply drop the z-coordinate of the vertices and use u = x/2 + 0.5 and v = y/2 + 0.5 as UV-coordinates.
In JS that would be something like this:
// create the geometry (note that for simplicity, we're
// a) using a unit-sphere and
// b) using an exact half-sphere)
const geometry = new THREE.SphereGeometry(1, 18, 18, Math.PI, Math.PI);
const uvs = geometry.faceVertexUvs[0];
const vertices = geometry.vertices;
// compute the UVs from the vertices of the sphere. You will probably need
// something a bit more elaborate than this for the 220-degree FOV, maybe
// also some lens-distortion correction, but it will boil down to something like this:
for (let i = 0; i < geometry.faces.length; i++) {
    const face = geometry.faces[i];
    const faceVertices = [vertices[face.a], vertices[face.b], vertices[face.c]];
    for (let j = 0; j < 3; j++) {
        const vertex = faceVertices[j];
        uvs[i][j].set(vertex.x / 2 + 0.5, vertex.y / 2 + 0.5);
    }
}
geometry.uvsNeedUpdate = true;
(If you need more information in either direction, drop a comment and I will elaborate.)

What is the most efficient way to display 4 million 2D squares in a browser?

My display has a resolution of 7680x4320 pixels. I want to display up to 4 million differently colored squares, and I want to change the number of squares with a slider. I currently have two versions. One uses canvas fillRect and looks something like this:
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
for (var i = 0; i < num_squares; i++) {
    ctx.fillStyle = someColor;
    // fillRect takes (x, y, width, height), not a second corner
    ctx.fillRect(pos_x, pos_y, square_width, square_height);
    // set pos_x and pos_y for next square
}
And one with WebGL and three.js. Same loop, but I create a box geometry and a mesh for every square:
var geometry = new THREE.BoxGeometry( width_height, width_height, 0 );
for (var i = 0; i < num_squares; i++) {
    var material = new THREE.MeshLambertMaterial( { color: Math.random() * 0xffffff } );
    material.emissive = new THREE.Color( Math.random(), Math.random(), Math.random() );
    var object = new THREE.Mesh( geometry, material );
}
They both work quite well for a few thousand squares. The first version can do up to one million squares, but anything over a million is awfully slow. I want to update the color and the number of squares dynamically.
Does anyone have tips on how to be more efficient with three.js / WebGL / canvas?
EDIT 1: Second version: this is what I do at the beginning and whenever the slider changes:
// Remove all objects from the scene
var obj, i;
for (i = scene.children.length - 1; i >= 0; i--) {
    obj = scene.children[i];
    if (obj !== camera) {
        scene.remove(obj);
    }
}
// Fill the scene with new objects
num_squares = gui_dat.squareNum;
var window_pixel = window.innerWidth * window.innerHeight;
var pixel_per_square = window_pixel / num_squares;
var width_height = Math.floor(Math.sqrt(pixel_per_square));
var geometry = new THREE.BoxGeometry( width_height, width_height, 0 );
var pos_x = width_height / 2;
var pos_y = width_height / 2;
for (var i = 0; i < num_squares; i++) {
    // THREE.Material is abstract; use a concrete material as in the first snippet
    var material = new THREE.MeshLambertMaterial( { color: Math.random() * 0xffffff } );
    material.emissive = new THREE.Color( Math.random(), Math.random(), Math.random() );
    var object = new THREE.Mesh( geometry, material );
    object.position.x = pos_x;
    object.position.y = pos_y;
    pos_x += width_height;
    if (pos_x > window.innerWidth) {
        pos_x = width_height / 2;
        pos_y += width_height;
    }
    scene.add( object );
}
The fastest way to draw squares is to use the gl.POINTS primitive and then setting gl_PointSize to the pixel size.
In three.js, gl.POINTS is wrapped inside the THREE.PointCloud object.
You'll have to create a geometry object with one position for each point and pass that to the PointCloud constructor.
Here is an example of THREE.PointCloud in action:
http://codepen.io/seanseansean/pen/EaBZEY
geometry = new THREE.Geometry();
for (i = 0; i < particleCount; i++) {
    var vertex = new THREE.Vector3();
    vertex.x = Math.random() * 2000 - 1000;
    vertex.y = Math.random() * 2000 - 1000;
    vertex.z = Math.random() * 2000 - 1000;
    geometry.vertices.push(vertex);
}
...
materials[i] = new THREE.PointCloudMaterial({ size: size });
particles = new THREE.PointCloud(geometry, materials[i]);
I didn't dig through all the code, but I set the particle count to 2M, and from my understanding five point clouds are generated, so 2M * 5 = 10M particles, and I'm getting around 30 fps.
The highest number of individual points I've seen so far was with potree.
http://potree.org/, https://github.com/potree
Try some of the demos; I was able to view 5 million points in 3D at 20-30 fps. I believe this is also close to the current technological limit.
I haven't tested potree myself, so I can't say much about the tech. But there is a data converter and a viewer (three.js based), so you should only need to figure out how to convert your data.
Briefly, about your question:
The best way to handle large data sets is to group them in a quad-tree (2D) or oct-tree (3D). This lets the program skip the parts that are too far from the camera or not visible at all; see the sketch below.
On the other hand, GPUs don't like too many WebGL calls. Think of it like this: you want to create ~60 images each second, but every time you set some parameter for the GPU, the program must do some synchronization. Splitting the data means more setup work, so the tree must not be too detailed.
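A minimal sketch of the quad-tree idea (a hypothetical plain-JS helper, not potree's actual implementation; the capacity constant controls how detailed the tree gets):
// Each node covers a square region; it stores points directly until it
// holds too many, then splits into four children and re-inserts them.
function QuadNode(x, y, size) {
    this.x = x; this.y = y; this.size = size;
    this.points = [];
    this.children = null;
}
QuadNode.prototype.insert = function (p) {
    var half = this.size / 2;
    if (this.children) {
        // route the point into the matching quadrant
        var col = p.x < this.x + half ? 0 : 1;
        var row = p.y < this.y + half ? 0 : 2;
        this.children[col + row].insert(p);
    } else if (this.points.length < 1000) { // capacity keeps the tree shallow
        this.points.push(p);
    } else {
        this.children = [
            new QuadNode(this.x,        this.y,        half),
            new QuadNode(this.x + half, this.y,        half),
            new QuadNode(this.x,        this.y + half, half),
            new QuadNode(this.x + half, this.y + half, half)
        ];
        var old = this.points;
        this.points = [];
        for (var i = 0; i < old.length; i++) this.insert(old[i]);
        this.insert(p);
    }
};
// At render time, walk the tree and draw only the nodes whose square
// intersects the camera's visible region.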
One last thing. Someone said:
"You'll probably want to pass an array of values as one of the shader uniforms"
I don't suggest that; it's a bad idea. Texture lookups are quite fast, but attributes are always faster. If we are talking about 4M points, you can't afford to read per-point data from uniforms.
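For instance, per-point colors belong in a vertex attribute rather than a uniform array. A sketch using the BufferGeometry API of that era (r7x names such as addAttribute and PointCloudMaterial; filling the arrays is left out):
var num_points = 4000000;
var positions = new Float32Array(num_points * 3); // x, y, z per point
var colors = new Float32Array(num_points * 3);    // r, g, b per point
// ... fill positions and colors here ...
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.addAttribute('color', new THREE.BufferAttribute(colors, 3));
var material = new THREE.PointCloudMaterial({ size: 2, vertexColors: THREE.VertexColors });
scene.add(new THREE.PointCloud(geometry, material));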
Sorry I can't help you with the code; I could do it without three.js, but I'm no three.js expert :)
I would recommend trying the Pixi framework (as mentioned in the comments above).
It has a WebGL renderer and some benchmarks are very promising.
http://www.goodboydigital.com/pixijs/bunnymark_v3/
It can handle a lot of animated sprites.
If your app only displays the squares without animating them, and they are very simple sprites (only one color), then it would give even better performance than the demo linked above.

Three.js touching faces artifacts

I've created two transparent boxes, one nested inside the other. This works great unless the boxes' faces touch.
// inner object
var mesh2 = new THREE.Mesh(geometry, material);
mesh2.position.x = 0;
mesh2.position.y = 0;
mesh2.position.z = 0;
mesh2.scale.x = 100;
mesh2.scale.y = 50;
mesh2.scale.z = 100;
scene.add( mesh2 );
// outer object
var mesh1 = new THREE.Mesh(geometry, material);
mesh1.position.x = 0;
mesh1.position.y = 0;
mesh1.position.z = 0;
mesh1.scale.x = 100;
mesh1.scale.y = 100;
mesh1.scale.z = 100;
scene.add( mesh1 );
Here's the code:
http://jsfiddle.net/unkya/14/
How do I get rid of these artifacts and still have the faces touch?
Also, is there a way to add the boxes to the scene without having to insert the innermost ones first?
Many thanks!
This is called z-fighting.
There are two ways around this.
The first is simply to offset the values by a small amount. Even 0.01 might do it. The important part here is to ensure your camera's near plane and far plane are within ranges that are reasonable.
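For example (a hypothetical tweak to the fiddle's inner box, shrinking it just enough to separate the coincident faces):
// pull the inner box's touching faces 0.01 units inward
mesh2.scale.set(99.99, 50, 99.99);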
The second way is to use the polygonOffset property of three.js materials. This lets you force an object to render above or below other objects, similar to z-index ordering. I believe transparency also needs to be enabled, so you should put this on your semi-transparent cube if possible.
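A minimal sketch (polygonOffset, polygonOffsetFactor and polygonOffsetUnits are standard three.js Material properties; the color/opacity and the factor/units values are assumptions to tune per scene):
var material = new THREE.MeshLambertMaterial({
    color: 0x88ccff,
    transparent: true,
    opacity: 0.5,
    polygonOffset: true,
    polygonOffsetFactor: -1, // negative values pull the surface toward the camera
    polygonOffsetUnits: -1
});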

Strange shaking while rotating the camera with Orbit Controls in Three.js

I'm making a model of the Solar System. This is my current metric:
scale = 0.001;
// 1 unit - 1 kilometer
var AU = 149597871 * scale;
This is how I define the camera, renderer and controls:
camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1 * scale, 0.1 * AU);
renderer = new THREE.WebGLRenderer({ alpha: true, antialias: true });
controls = new THREE.OrbitControls(camera, renderer.domElement);
Then I give the user the option to jump between objects, so this is how I set the camera after the user selects a planet/moon:
function cameraGoTo() {
    for (var i = scene.children.length - 1; i >= 0; i--) {
        var obj = scene.children[i];
        if (obj.name == parameters.selected) {
            controls.target = obj.position;
            camera.position.copy(obj.position);
            camera.position.y += obj.radius * 2;
        }
    }
}
The problem is that for small planets/moons (<= 1000 km in radius) the camera shakes while rotating around the object. I have only basic knowledge of computer graphics, so I don't know whether this is a problem with OrbitControls or something to do with the renderer itself... I've tried setting logarithmicDepthBuffer = true, but it didn't help. Trying a different scale didn't change anything either.
Thanks in advance for any help/clues.
EDIT:
Here's the fiddle:
http://jsfiddle.net/twxyz/8kxcdkjj/
You can see that the shaking increases:
the smaller the object is,
the further the object is from the point of origin.
What is the cause of this? It clearly seems to have nothing to do with the camera's near/far plane values, but is related to the distance of the objects from the center of the scene.
I've come up with a solution.
My problem was floating-point precision errors when dealing with objects far from the point of origin. This turns out to be a well-known problem, and there are various solutions. I used this one:
http://answers.unity3d.com/questions/54739/any-solution-for-extreamly-large-gameworlds-single.html
Basically, instead of moving the camera/player, we transform the whole scene relative to the camera/player, which always stays at the point of origin. In this case, the OrbitControls target is always the point of origin.
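A minimal sketch of that idea, assuming all solar-system bodies are direct children of scene (recenterOn is a hypothetical helper, not OrbitControls API):
function recenterOn(selected) {
    var offset = selected.position.clone();
    // shift every top-level object so the selected body lands at the origin,
    // where floating-point precision is highest
    for (var i = 0; i < scene.children.length; i++) {
        scene.children[i].position.sub(offset);
    }
    camera.position.sub(offset);
    controls.target.set(0, 0, 0); // always orbit the origin
}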
