Water/Mirrored surface in WebGL using ThreeJS

I am trying to make a water surface in WebGL using Three.js. I will start with just a mirror, since I think I already know how to add displacement for basic ripple effects.
This is what I know: reflection is usually done by rendering a vertically (y-axis) flipped scene into an FBO, using the water plane as a culling plane. This FBO is then used as a texture for the water plane. Using a displacement map (or a noise texture), the image can be displaced and a water effect achieved.
The problems: First off, I can't find a way to flip the scene in ThreeJS. In OpenGL you can just use glScale with -1 for Y, but I don't think this is possible in WebGL (or GLES, on which it is based); at least I found no such thing in ThreeJS. There is a scale parameter for geometry, but there is none for the scene. One solution could be changing the .matrixWorldInverse of the Camera, but I am not sure how I would do that. Any ideas?
The second hurdle is the clipping/culling plane. Again, the old way was to use glClipPlane, but it's not supported even in the newest OpenGL standard as far as I know, so it's also not in WebGL. I read somewhere that you can do this in the vertex shader, but in ThreeJS I only know how to add shaders as materials, and I need this during the render to the FBO.
And third, rendering the FBO onto the water plane with correct texture coordinates, which I think basically means projecting from the camera position.
I can't find any more information on this on the internet. There are very few WebGL reflection examples, and the only thing close was here, and it used an "Oblique View Frustum" method for culling. Is this really the best way to do it nowadays? Instead of one function call, we now have to code this ourselves in software (to be run on the CPU, not the GPU)? Also, the cube reflections provided in ThreeJS are of course not applicable to a plane, and yes, I have tried those.
If someone could put together as simple an example as possible of how to do this, I would greatly appreciate it.

Check this three.js example out.
Out of the box and ready to use, straight from the source:
water = new THREE.Water( renderer, camera, scene, {
    textureWidth: 512,
    textureHeight: 512,
    waterNormals: waterNormals,
    alpha: 1.0,
    sunDirection: light.position.clone().normalize(),
    sunColor: 0xffffff,
    waterColor: 0x001e0f,
    distortionScale: 50.0
} );

mirrorMesh = new THREE.Mesh(
    new THREE.PlaneBufferGeometry( parameters.width * 500, parameters.height * 500 ),
    water.material
);

mirrorMesh.add( water );
mirrorMesh.rotation.x = - Math.PI * 0.5;
scene.add( mirrorMesh );
Seems to look like an ocean to me :)
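For completeness, the same example drives the water from the render loop, roughly like this (a sketch based on that example's source; the time uniform and the Water API can differ between three.js releases):

water.material.uniforms.time.value += 1.0 / 60.0; // advance the wave animation
water.render();                                   // update the reflection render target
renderer.render( scene, camera );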

Check out this presentation: http://29a.ch/slides/2012/webglwater/
This fiddle may also be useful for you: jsfiddle.net/ahmedadel/44tjE

This only addresses the scaling part of your question: the matrix attached to an Object3D has a makeScale method.
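For example, a minimal sketch of a Y-flip using that method (reflectedScene is a hypothetical root object you would render into the FBO, not anything from the question's code):

var flip = new THREE.Matrix4().makeScale( 1, -1, 1 );
reflectedScene.matrixAutoUpdate = false; // keep the hand-built matrix
reflectedScene.matrix.copy( flip );      // mirror everything across the XZ plane
// The shortcut reflectedScene.scale.y = -1 is equivalent. Note that a
// negative scale reverses triangle winding, so the reflected pass may
// also need flipped face culling (material.side).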

Related

What are the properties of three.js emissive materials

Coming from a background in the Unity game engine, I'm working on a simple demonstration in three.js and am confused by the behaviour of THREE.MeshPhongMaterial.
create_ring() {
    // creates a ring mesh from the class's input data
    const material = new THREE.MeshPhongMaterial({
        color: this.color,
        emissive: this.color,
        emissiveIntensity: 1.6
    });
    const ring_geo = new THREE.TorusGeometry(this.radius, this.thickness, 16, 100);
    // translate in space
    ring_geo.translate(5, 5, 0);
    // attach the material to the mesh and return it
    const ring_mesh = new THREE.Mesh(ring_geo, material);
    ring_mesh.receiveShadow = true;
    ring_mesh.castShadow = true;
    ring_mesh.name = "ring";
    return ring_mesh;
}
I was under the impression that the material would create a nice gentle pool of light on the floor geometry, but having researched the problem I now need advice: should this be implemented as a shader feature, or am I misunderstanding the limits and behaviour of materials in three.js? Below is an example of what is possible with a material's emissive option in Unity.
There's more than just an emissive material at work in the Unity screenshot above: the objects around the light were probably also marked as static, which Unity uses to "bake" the glow effect onto them while compiling the application. There could also be a "bloom" post-processing effect creating the dynamic glow seen by the camera around the object.
Because three.js runs on the web and does not have an offline compilation step, these additional effects have to be configured manually. You can see the three.js bloom example for help adding a bloom effect to your scene. Baking the light onto surrounding objects would generally be done in Blender, then loaded into three.js as part of the base color texture or a lightmap.
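As a starting point, here is a minimal sketch of wiring that up with EffectComposer and UnrealBloomPass (renderer, scene, and camera come from your existing setup; the pass parameters are scene-dependent guesses, and the import paths assume the examples/jsm layout):

import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
// arguments: resolution, strength, radius, threshold; tune for your scene
composer.addPass(new UnrealBloomPass(
    new THREE.Vector2(window.innerWidth, window.innerHeight),
    1.5, 0.4, 0.85
));

// render through the composer instead of calling renderer.render directly
function animate() {
    requestAnimationFrame(animate);
    composer.render();
}
animate();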

How to improve merging by computing new faces in ThreeJS

I've been learning ThreeJS for 4 months, applying it to a personal project.
Yesterday I managed to build a stronghold using most of the ThreeJS geometries and some CSG tricks. The result looks fine, but I like precision, and my geometry is kind of a mess (mostly after the CSG subtractions).
[Question] I wonder if there is a known way to merge two geometries, replacing their old faces with newly computed faces? There is a JSFiddle to illustrate my question.
[Edit: Updated the fiddle with a fourth and a fifth mesh]
// FIGURE 1 : basic merged geometry
var figure1 = new THREE.Geometry();
figure1.merge(box1Geometry);
figure1.merge(box2Geometry);
figure1.merge(box3Geometry);
figure1.computeFaceNormals();
figure1.computeVertexNormals();
var mesh = new THREE.Mesh(figure1, material);
scene.add(mesh);

// FIGURE 2 : merged geometry with merged vertices
var figure2 = figure1.clone();
figure2.mergeVertices();
figure2.computeFaceNormals();
figure2.computeVertexNormals();
mesh = new THREE.Mesh(figure2, material);

// FIGURE 3 : expected merged geometry (fewer faces)
var figure3 = new THREE.Geometry();
figure3.vertices.push(
    // manually create the vertices here
);
figure3.faces.push(
    // manually create the faces here
);
figure3.computeBoundingSphere();
figure3.computeFaceNormals();
figure3.computeVertexNormals();
mesh = new THREE.Mesh(figure3, material);
scene.add(mesh);
Three ways to get the same mesh
The first mesh, on the left, is a basic merged geometry composed of three BoxGeometry instances.
The second mesh, in the middle, is exactly the same mesh after calling mergeVertices(). This saves 4 vertices. But the faces inside the mesh are still there. The result not only looks bad (to me), it also causes issues when texturing or lighting these parts (face normals aren't where they should be).
The last mesh, on the right, is the mesh I would expect after merging. Look at the faces below the middle box: they only cover what they should.
The fact that this leads to texture and lighting issues (look at the JSFiddle: it lights the inner parts of the mesh) makes me think there must be a simple and well-known way to solve it, and I'm just feeling like a big noob.
This issue is directly linked to another question I'll ask if I don't find (or understand) any answer here (and maybe it will help you understand why I want to do this): is there a way to apply a texture to this merged geometry without creating a unique material for each face of each geometry (because of the different UV mappings and mesh sizes)? I can't imagine doing it manually for each face of my huge stronghold...
[EDIT] Writing my question, I just realized that ThreeCSG and its union() function do the trick. But I don't like the mess of vertices it creates. Even for basic geometries like these boxes, ThreeCSG creates strange vertices and faces on parts of the geometry where everything was already fine.
I updated the JSFiddle with a fourth mesh (CSG). In this simple use case, we can see that there are 2 vertices and 2 faces more than expected. It seems that it kept the old faces (look at the wireframe!).
Is ThreeCSG union the best option for now?
[EDIT 2] Fiddle updated with a native CSG geometry. It gives the result I expected, with only 20 vertices and 32 faces. Thanks to Wilt for the idea. The problem is that hard-coding the polygons takes too long (take a look at the code for only three boxes). I have no JSON file to load and generate the polygons from; I only have ThreeJS geometries. So I'll look at the conversion between ThreeJS and ThreeCSG geometries, and I hope to understand why the conversion gives a bad result.
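For reference, the conversion round trip in question looks roughly like this (a sketch using Chandler Prall's ThreeCSG.js API; box1Mesh, box2Mesh and box3Mesh stand in for the meshes from the fiddle, and method names may vary between forks):

// wrap the Three.js meshes in BSP trees
var bsp1 = new ThreeBSP(box1Mesh);
var bsp2 = new ThreeBSP(box2Mesh);
var bsp3 = new ThreeBSP(box3Mesh);
// union them, then convert back to a Three.js mesh
var unionMesh = bsp1.union(bsp2).union(bsp3).toMesh(material);
unionMesh.geometry.computeFaceNormals();
unionMesh.geometry.computeVertexNormals();
scene.add(unionMesh);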

How to hide parts of 3D objects that stick out of the back of other (complex) 3D objects?

I'm rendering a complex 3D mesh with Three.js (an iliac bone). Then I'm rendering some simple spheres along with this mesh to mark certain points on the surface (where muscles would attach):
The problem is, the mesh is quite thin in some areas, and the markers will stick out the back.
Assume that the marker coordinates are always closer to the front face of the mesh than the back face, and that the spheres always show more surface area / volume on the front of the mesh than on the back:
How could I hide the parts that extrude out the back without manually intervening for specific markers?
Edit: Here's a (naive?) way I might do it. I would like feedback on the feasibility of the idea, and (some pointers toward writing) actual code to do it:
for each marker sphere:
find all faces of the mesh that intersect with the sphere
compute all outward-facing normal vectors of those faces (vertex-normals? face-normals?)
compute all distances from the center of the face to the center of the sphere
add all those normal vectors, weighed by their respective distances
given the (normalized?) result vector, hide the hemisphere pointing in that direction
I'm not sure how to code any of those steps. Nor am I sure if this is even a sensible approach.
Draw hemispheres instead of full spheres.
Use phiStart and phiLength parameters of the SphereGeometry constructor.
The centers of the spheres will still be on the surface of the bone (a vertex).
The orientation of one sphere will be given by the normal calculated in the sphere origin.
Three.js already calculates the normals for a mesh in order to determine how light will bounce from the mesh. You can use the VertexNormalsHelper to display normals for your mesh:
var bone = ...;  // bone mesh
var scene = ...; // your THREE.Scene
scene.add(new THREE.VertexNormalsHelper(bone));
The source code for VertexNormalsHelper can be found here: VertexNormalsHelper
You have to calculate the difference angles between the normal vector and the oZ axis to obtain difX and difY. These are the amounts you must rotate your sphere in the X and Y directions to make it perpendicular to the local surface of the bone.
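Putting both ideas together, here is a hedged sketch (point and normal are illustrative names for the marker position on the bone surface and the mesh normal there; instead of computing difX/difY by hand, it aligns the axes with a quaternion, available in newer three.js builds):

// phiLength = PI sweeps only half the horizontal circle, producing a
// hemisphere with a flat cut face and a dome that bulges toward +Z
var hemiGeo = new THREE.SphereGeometry(0.5, 16, 12, 0, Math.PI);
var hemi = new THREE.Mesh(hemiGeo, markerMaterial);
hemi.position.copy(point);
// rotate the dome's +Z axis onto the local surface normal
hemi.quaternion.setFromUnitVectors(
    new THREE.Vector3(0, 0, 1),
    normal.clone().normalize()
);
scene.add(hemi);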

Double sided transparent shader looks buggy

I have made a little test that allows you to experiment with shaders in a 3D environment using three.js.
There's a sphere in the scene that shows the shader.
The demo shader I have created is a very simple shader that uses a 2D noise implementation. A big part of the sphere remains black, which I made transparent. I want the other side of the sphere to be visible too. So I have enabled transparency and set rendering side to double-sided.
material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    fragmentShader: $('textarea#input-fragment').val(),
    vertexShader: $('textarea#input-vertex').val()
});
material.side = THREE.DoubleSide;
material.transparent = true;
In this example, the bugginess is easier to notice.
When the sphere is viewed from the top, you only see the shader from the outer side. When viewed from the side there seems to be a bit of choppiness, and when viewed from the bottom it seems to work.
These are the different angles (top - side - bottom):
Here's the important bit of my fragment shader:
void main() {
    float r = cnoise(vNormal.yz * 2.0 + t);
    float g = cnoise(vNormal.xz * -1.0 + t);
    float b = cnoise(vNormal.xy * -2.0 + t);
    // opacity presumably ranges from 0 to 3, which is OK
    gl_FragColor = vec4(r, g, b, r + g + b);
}
So why am I seeing the choppy edges, and why does the viewing angle matter?
There is nothing wrong with your shader. You can also see the effect if you set:
gl_FragColor = vec4( 1.0, 1.0, 1.0, 0.5 );
Self-transparency is tricky in three.js.
For performance reasons in WebGLRenderer, depth sorting works only between objects (based on their position), not within a single object.
The rendering order of the individual faces within an object cannot be controlled.
This is why from some viewing angles your scene looks better than from others.
One work-around is to explode the geometry into individual meshes of one face each.
Another work-around (your best bet, IMO) is to replace your transparent, double-sided sphere with two transparent spheres in the same location -- a front-sided one and a back-sided one.
three.js r.56
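A hedged sketch of that two-sphere work-around (sphereGeometry and material come from your own setup; renderOrder is a later three.js addition that forces the back faces to draw first):

var backMat = material.clone();
backMat.side = THREE.BackSide;   // interior surface, drawn first
var frontMat = material.clone();
frontMat.side = THREE.FrontSide; // exterior surface, drawn second

var backSphere = new THREE.Mesh(sphereGeometry, backMat);
var frontSphere = new THREE.Mesh(sphereGeometry, frontMat);
backSphere.renderOrder = 0;
frontSphere.renderOrder = 1;
scene.add(backSphere);
scene.add(frontSphere);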
Very similar to what I ran into. The WHY behind this is best explained in Three.js Transparency fundamentals.
Without more details on your code or goals, here is an alternate solution as of version r128. Just add one more line for your material:
material.depthTest = false;
In a nutshell, your shader is fine, as @WestLangley mentioned, but when rendering transparency the depth of pixels relative to one another is taken into account as well, and certain pixels end up not being rendered. This is where your "bugginess" came from. It is not really a bug, but the way your scene is rendered by default until told to do otherwise. There are a lot of *issues you can run into that compete with your expectations, so I recommend reading up on the link I posted.
*One such issue: if there are other objects in your scene, then of course, since you turned off depthTest, you can get incorrect object placement, as an object that should be in the background can get rendered in the foreground.

Three.js outlines

Is it possible to have a black outline on my 3D models with three.js?
I would like graphics that look like Borderlands 2 (toon shading + black outlines).
I'm sure I'm coming in late, but let's hope this solves someone's question later.
Here's the deal: you don't need to render everything twice; the overhead is actually not substantial. All you need to do is duplicate the mesh and set the duplicate mesh's material side to the back side. No double passes. You will be rendering two meshes instead, with most of the outline's geometry culled by WebGL's back-face culling.
Here's an example:
var scene = new THREE.Scene();

// create the main object
var mesh_geo = new THREE.BoxGeometry(1, 1, 1);
var mesh_mat = new THREE.MeshBasicMaterial({ color: 0xff0000 });
var mesh = new THREE.Mesh(mesh_geo, mesh_mat);
scene.add(mesh);

// create the outline object
var outline_geo = new THREE.BoxGeometry(1, 1, 1);
// notice the second parameter of the material
var outline_mat = new THREE.MeshBasicMaterial({ color: 0x00ff00, side: THREE.BackSide });
var outline = new THREE.Mesh(outline_geo, outline_mat);
// scale the object up to get an outline (as discussed in the previous answer)
outline.scale.multiplyScalar(1.5);
scene.add(outline);
For more details on backface culling, check out: http://en.wikipedia.org/wiki/Back-face_culling
The above approach works well if you want to add an outline to objects without adding a toon shader (and thus losing "realism").
Toon shading by itself supports edge detection; the Borderlands developers built a 'cel' shader to achieve this effect.
In cel shading, devs can either use the object-duplication method (done at the [low] pipeline level) or use image-processing filters for edge detection. This is where the performance trade-off between the two techniques comes in.
More info on cel: http://en.wikipedia.org/wiki/Cel_shading
Cheers!
Yes, it is possible, but not in a simple out-of-the-box way. For toon shading there are even shaders included in /examples/js/ShaderToon.js.
For the outlines, I think the most commonly suggested method is to render in two passes. The first pass renders the models in black, at a slightly larger scale; the second pass is at normal scale and with the toon shaders. This way you'll see the larger black models as an outline. It's not perfect, but I don't think there's an easy way out. You might have more success searching for "three.js hidden line rendering": while the look is different, a somewhat similar method is used to achieve it.
It's an old question, but here is what I did.
I created an outlined cel-shader for my CG course. Unfortunately it takes 3 rendering passes; I'm currently trying to figure out how to remove one.
Here's the idea:
1) Render a normal-depth image to a texture.
In the vertex shader you do what you normally do: transform the position to screen space and the normal to screen space.
In the fragment shader you calculate the depth of the pixel and then output the normal as the color, with the depth as the alpha value:
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
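The shader would then end with something like this (a hedged completion of that step: the screen-space normal packed into RGB with the depth as alpha; vNormal is assumed to be the interpolated normal from the vertex shader):

gl_FragColor = vec4( normalize( vNormal ) * 0.5 + 0.5, clipDepth );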
2) Render the scene onto a texture with cel shading. I changed the scene's override material for this.
3) Make a quad, render both textures onto it, and have an orthographic camera look at it. The cel-shaded texture is simply rendered on the quad, while on the normal-depth texture you run some edge detection; wherever an edge is found, you know the pixel needs to be black.
