Double-sided transparent shader looks buggy - javascript

I have made a little test that allows you to experiment with shaders in a 3D environment using three.js.
There's a sphere in the scene that shows the shader.
The demo shader I have created is very simple and uses a 2D noise implementation. A big part of the sphere remains black, which I made transparent. I want the other side of the sphere to be visible too, so I have enabled transparency and set the rendering side to double-sided.
material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    fragmentShader: $('textarea#input-fragment').val(),
    vertexShader: $('textarea#input-vertex').val()
});
material.side = THREE.DoubleSide;
material.transparent = true;
In this example, the bugginess is easier to notice.
When the sphere is viewed from the top, you only see the shader on the outer side. When viewed from the side there seems to be a bit of choppiness, and when viewed from the bottom it seems to be working.
These are the different angles (top - side - bottom):
Here's the important bit of my fragment shader:
void main() {
    float r = cnoise(vNormal.yz * 2.0 + t);
    float g = cnoise(vNormal.xz * -1.0 + t);
    float b = cnoise(vNormal.xy * -2.0 + t);
    // opacity presumably ranges from 0 to 3, which is OK
    gl_FragColor = vec4(r, g, b, r + g + b);
}
So why am I seeing the choppy edges, and why does the viewing angle matter?

There is nothing wrong with your shader. You can also see the effect if you set:
gl_FragColor = vec4( 1.0, 1.0, 1.0, 0.5 );
Self-transparency is tricky in three.js.
For performance reasons in WebGLRenderer, depth sorting works only between objects (based on their position), not within a single object.
The rendering order of the individual faces within an object cannot be controlled.
This is why from some viewing angles your scene looks better than from others.
One work-around is to explode the geometry into individual meshes of one face each.
Another work-around (your best bet, IMO) is to replace your transparent, double-sided sphere with two transparent spheres in the same location -- a front-sided one and a back-sided one.
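A minimal sketch of that second workaround, assuming the geometry, uniforms, and textarea shader sources from the question (sphereGeometry and scene are illustrative names):
// Shared geometry and uniforms; two materials, one per rendering side.
var shaderOptions = {
    uniforms: uniforms,
    vertexShader: $('textarea#input-vertex').val(),
    fragmentShader: $('textarea#input-fragment').val()
};

var backMaterial = new THREE.ShaderMaterial(shaderOptions);
backMaterial.transparent = true;
backMaterial.side = THREE.BackSide;    // inner surface

var frontMaterial = new THREE.ShaderMaterial(shaderOptions);
frontMaterial.transparent = true;
frontMaterial.side = THREE.FrontSide;  // outer surface

// Same geometry twice: back faces in one mesh, front faces in the other,
// so the renderer can sort the two objects against each other.
scene.add(new THREE.Mesh(sphereGeometry, backMaterial));
scene.add(new THREE.Mesh(sphereGeometry, frontMaterial));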
three.js r.56

Very similar to what I ran into. The why behind this is best explained in Three.js Transparency fundamentals.
Without more details on your code or goals, here is an alternate solution as of version r128. Just add one more line to your material:
material.depthTest = false;
In a nutshell, your shader is fine, as @WestLangley mentioned, but when rendering transparency the depth of pixels relative to one another is taken into account as well, which ends up with certain pixels not rendering. This is where your "buggy-ness" came from. It's not really a bug, but the way your scene is rendered by default until told to do otherwise. There are a lot of *issues you can run into that compete with your expectations, so I recommend reading up on the link I posted.
*One such issue: if there are other objects in your scene, then of course, since you turned off depthTest, you can get incorrect object placement, as an object that should be in the background can get rendered in the foreground.

Related

3D model in HTML/CSS; Calculate Euler rotation of triangle

TLDR; Given a set of triangle vertices and a normal vector (all in unit space), how do I calculate X, Y, Z Euler rotation angles of the triangle in world space?
I am attempting to display a 3D model in HTML - with actual HTML tags and CSS transforms. I've already loaded an OBJ file into a JavaScript class instance.
The model is triangulated. My first aim is just to display the triangles as planes (HTML elements are rectangular) - I'll be 'cutting out' the triangle shapes with CSS clip-path later on.
I am really struggling to understand and get the triangles of the model rotated correctly.
I thought a rotation matrix could help me out, but my only experience with those is where I already have the rotation vector and I need to convert and send that to WebGL. This time there is no WebGL (or tutorials) to make things easier.
The following excerpt shows the creation/'rendering' of the faces. I'm using the face normal as the rotation, but I know this is wrong.
for (const face of _obj.faces) {
    const vertices = face.vertices.map(_index => _obj.vertices[_index]);
    const center = [
        (vertices[0][0] + vertices[1][0] + vertices[2][0]) / 3,
        (vertices[0][1] + vertices[1][1] + vertices[2][1]) / 3,
        (vertices[0][2] + vertices[1][2] + vertices[2][2]) / 3
    ];

    // Each vertex has a normal but I am just picking the first vertex' normal
    // to use as the 'face normal'.
    const normals = face.normals.map(_index => _obj.normals[_index]);
    const normal = normals[0];

    // HTML element creation code goes here; reference is 'element'.

    // Set face position (unit space)
    element.style.setProperty('--posX', center[0]);
    element.style.setProperty('--posY', center[1]);
    element.style.setProperty('--posZ', center[2]);

    // Set face rotation, converting to degrees also.
    const rotation = [
        normal[0] * toDeg,
        normal[1] * toDeg,
        normal[2] * toDeg,
    ];
    element.style.setProperty('--rotX', rotation[0]);
    element.style.setProperty('--rotY', rotation[1]);
    element.style.setProperty('--rotZ', rotation[2]);
}
The CSS first translates the face on X,Y,Z, then rotates it on X,Y,Z in that order.
I think I need to 'decompose' my triangles' rotation into separate axis rotations - i.e. rotate on X, then on Y, then on Z to get the correct rotation as per the model face.
I realise that the normal vector gives me an orientation but not a rotation around itself - I need to calculate that. I think I have to determine a vector along one triangle side and cross it with the normal, but this is something I am not clear on.
I have spent hours looking at similar questions on SO but I'm not smart enough to understand or make them work for me.
Is it possible to describe what steps to take without Latex equations? I'm good with pseudo code but my Math skills are severely lacking.
The full code is here: https://whoshotdk.co.uk/cssfps/ (view HTML source)
The mesh building function is at line 422.
The OBJ file is here: https://whoshotdk.co.uk/cssfps/data/model/test.obj
The Blender file is here: https://whoshotdk.co.uk/cssfps/data/model/test.blend
The mesh is just a single plane at an angle, displayed in my example (wrongly) in pink.
The world is setup so that -X is left, -Y is up, -Z is into the screen.
Thank You!
If you have a plane and want to rotate it to be in the same direction as some normal, you need to figure out the angles between that plane's normal vector and the normal vector you want. The Euler angles between two 3D vectors can be complicated, but in this case the initial plane normal should always be the same, so I'll assume the plane normal starts pointing towards positive X to make the maths simpler.
You also probably want to rotate before you translate, so that everything is easier since you'll be rotating around the origin of the coordinate system.
By taking the general 3D rotation matrix (all three 3D rotation matrices multiplied together, you can find it on the Wikipedia page) and applying it to the vector (1,0,0) you can then get the equations for the three angles a, b, and c needed to rotate that initial vector to the vector (x,y,z). This results in:
x = cos(a)*cos(b)
y = sin(a)*cos(b)
z = -sin(b)
Then rearranging these equations to find a, b and c, which will be the three angles you need (the three values of the rotation array, respectively):
a = atan(y/x)
b = asin(-z)
c = 0
So in your code this would look like:
const rotation = [
    Math.atan2(normal[1], normal[0]) * toDeg,
    Math.asin(-normal[2]) * toDeg,
    0
];
It may be that you need to use a different rotation matrix (if the order of the rotations is not what you expected) or a different starting vector (although you can just use this method and then do an extra 90 degree rotation if each plane actually starts in the positive Y direction, for example).
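A quick sanity check of those formulas, wrapped in a hypothetical helper (the function name is mine, not from the question's code):
const toDeg = 180 / Math.PI;

// Helper wrapping the formulas above; returns the angles in degrees.
function eulerFromNormal(normal) {
    return [
        Math.atan2(normal[1], normal[0]) * toDeg, // a
        Math.asin(-normal[2]) * toDeg,            // b
        0                                         // c
    ];
}

console.log(eulerFromNormal([1, 0, 0]));  // [0, 0, 0]  - already points along +X
console.log(eulerFromNormal([0, 1, 0]));  // [90, 0, 0] - quarter turn of angle a
console.log(eulerFromNormal([0, 0, -1])); // [0, 90, 0] - quarter turn of angle b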

How to position an object for drawing in webgl? and why

I've managed to make a webgl example all in one file with no included libraries, and only functions that are being used: https://jsfiddle.net/vmLab6jr/
I'm drawing a square made of 2 triangles and I'm making it move farther away and closer to the camera. I want to understand how this part works:
// Now move the drawing position a bit to where we want to start
// drawing the square.
mvMatrix = [
[1,0,0,0],
[0,1,0,0],
[0,0,1,-12+Math.sin(g.loops/6)*4],
[0,0,0,1]
];
var mvUniform = gl.getUniformLocation(g.shaderProgram, "uMVMatrix");
gl.uniformMatrix4fv(mvUniform, false, g.float32(mvMatrix));
Why does webgl want a 4x4 matrix to set the position for drawing an object? Or is there a way to use 1x3, like [x,y,z]? Is it because the shaders I'm using we're arbitrarily set to 4x4?
I cannot find information on what uniformMatrix4fv() does and when and why it's used and what the alternatives are.
Why does the element [2][3] control the z of the object?
I know it has something to do with the frustum matrix being 4x4, and that same spot in the frustum matrix has D, where var D = -2*zfar*znear/(zfar-znear); But to change the x of the object I'm drawing I need to change [0][3], and that slot in the frustum matrix just has a 0.
function makeFrustum(left, right, bottom, top, znear, zfar)
{
    var X = 2*znear/(right-left);
    var Y = 2*znear/(top-bottom);
    var A = (right+left)/(right-left);
    var B = (top+bottom)/(top-bottom);
    var C = -(zfar+znear)/(zfar-znear);
    var D = -2*zfar*znear/(zfar-znear);
    return [
        [X, 0, A, 0],
        [0, Y, B, 0],
        [0, 0, C, D],
        [0, 0, -1, 0]
    ];
}
I've been using this tutorial: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Getting_started_with_WebGL
WebGL does not want a 4x4 matrix. WebGL is just a rasterization library.
All it cares about is that you provide a vertex shader that fills in a special variable called gl_Position with a clip space coordinate, and a fragment shader that sets the special variable gl_FragColor with a color.
No matrices are required to do that. Any matrices you use are yours, provided by you to code you supply. There are no required matrices in WebGL.
That said, if you follow these tutorials they will eventually lead you to how to use matrices and how the frustum function works.
There's also this Q&A: Trying to understand the math behind the perspective matrix in WebGL
As for your multiple questions
Why does webgl want a 4x4 matrix to set the position for drawing an object?
It doesn't. The shader you provided does.
Or is there a way to use 1x3, like [x,y,z]?
Yes, provide a shader that uses 1x3 math (see the sketch below).
Is it because the shaders I'm using were arbitrarily set to 4x4?
Yes
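For example, here is a minimal sketch of a vertex shader that positions geometry with a plain vec3 instead of a matrix (the uniform name uOffset is made up; without a projection matrix the result is interpreted directly as clip space):
attribute vec3 aVertexPosition;
uniform vec3 uOffset; // plain x, y, z offset, no matrix

void main() {
    // gl_Position still needs 4 components; w = 1.0 means no perspective divide,
    // so this gives a simple offset in clip space rather than a 3D perspective view.
    gl_Position = vec4(aVertexPosition + uOffset, 1.0);
}
The matching JavaScript call would then be gl.uniform3f(location, x, y, z) instead of gl.uniformMatrix4fv.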
I cannot find information on what uniformMatrix4fv() does and when and why it's used and what the alternatives are.
WebGL 1.0 is based on OpenGL ES 2.0 and so the WebGL spec basically says "look at the OpenGL ES 2.0 spec". Specifically it says
1.1 Conventions
...
The remaining sections of this document are intended to be read in conjunction with the OpenGL ES 2.0 specification (2.0.25 at the time of this writing, available from the Khronos OpenGL ES API Registry). Unless otherwise specified, the behavior of each method is defined by the OpenGL ES 2.0 specification.
As for uniformMatrix4fv, the various uniform functions are used to set global variables you declared inside the shaders you provided. These global variables are called uniforms because they keep a uniform value from iteration to iteration of your shaders. That's in contrast to two other kinds of shader inputs: attributes, which generally pull the next set of values out of buffers during each iteration of your vertex shader, and varyings, which you set in your vertex shader and which are interpolated for each iteration of your fragment shader.
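As a concrete example of the uniform flavour, here is roughly how the uMVMatrix upload from the question is wired up (variable names are illustrative; note that with the transpose argument false, which WebGL 1 requires, the 16 values are read in column-major order, so the translation sits in elements 12-14):
// Look the uniform up once after linking the program...
var mvUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");

// ...then upload 16 floats whenever the matrix changes (column-major order).
var mv = new Float32Array([
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, -12, 1  // translation: 12 units along -Z, i.e. away from the camera
]);
gl.uniformMatrix4fv(mvUniform, false, mv);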

Passing PointLight Info to a custom Shader with three.js

I want to create an effect like the undulating sphere described in the Aerotwist Tutorial. However, in the tutorial Paul creates a fake GLSL hard-coded light in the fragment shader - instead I want to pass info from a three.js PointLight instance to my shaders, manipulate vertices/normals, then perform Phong shading.
My understanding of the various levels of GPU consideration when shading a scene in three.js is as follows (sticking with Phong, for example):
No GPU consideration: Use a MeshPhongMaterial and don't worry about shaders. This is super easy but doesn't let you mess around on the GPU side.
Some GPU consideration: Use a ShaderLib Phong shader. This allows you to push shading calculations to the GPU, but since they're pre-written you can't do any custom modification of vertex positions, normals, or illumination calculations.
Full GPU management: Use a ShaderMesh and write your shaders from scratch. This gives you full customization, but also forces you to explicitly pass the attributes and uniforms your shaders will need.
Q1: Is the above understanding accurate?
Q2: Is there a way to do something between levels 2 and 3? I want the ability to customize the shaders to mess with vertex positions/normals, but I don't want to write my own Phong shader when a perfectly good one is included with three.js.
Q3: If there is no such middle ground between levels 2 and 3, and I need to just go for level 3, whats the best way to go about it? Do I pass the light's position, intensity, etc. as uniforms, do my vertex/normal modifications, then finally explicitly write the Phong shading calculations?
It's very straightforward to do what you are asking with three.js.
I'm not sure where it falls in your Q[].
Q1
You are still using the shaders; someone else wrote them for you. You only have access to the interface. Under the hood, calling something like MeshBasicMaterial can actually compile a different shader based on what you feed into it. For example, it may not process any UVs and not include them in the shader if no map is assigned, etc. You still have the power to impact the GPU depending on what you call.
If you are referring to the shader chunks, it's possible to hack stuff here, but it's pretty cumbersome. My advice is to study the code, for example the Phong shading, and start building your own piece by piece, using the chunks. Look at what goes in, what goes out.
No need to pass attributes. THREE.ShaderMaterial is not entirely built from scratch. It still provides you with quite a bit of stuff, and has a bunch of properties that you can set to get more. The basic attributes, for one, are set up for you, i.e. you don't declare "attribute vec3 position". You can get an array containing all the lights in the scene if you tick the lighting flag as West illustrated, but you can ignore this if, for example, you are building a particle shader or some screen effect. Pretty much every shader is set up to read some basic attributes like 'position', 'uv' and 'normal'. You can easily add your own on a procedural mesh, but on an actual model it's not trivial. You get some uniforms by default, and you get the entire set of MVP matrices, 'cameraPosition', etc. Writing a Phong shader from there is straightforward.
Now, for how you would do this. Say that you are following this tutorial and you have this shader:
// same name and type as VS
varying vec3 vNormal;

void main() {
    // this is hardcoded, you want to pass it from your environment
    vec3 light = vec3(0.5, 0.2, 1.0); // it needs to be a uniform

    // ensure it's normalized
    light = normalize(light); // you can normalize it outside of the shader, since it's a directional light

    // calculate the dot product of
    // the light to the vertex normal
    float dProd = max(0.0, dot(vNormal, light));

    // feed into our frag colour
    gl_FragColor = vec4(dProd, // R
                        dProd, // G
                        dProd, // B
                        1.0);  // A
}
Here's what you need to do:
GLSL
uniform vec3 myLightPos;//comes in
void main(){
vec3 light = normalize(myLightPos);//but you better do this in javascript and just pass the normalized vec3
}
Javascript
new THREE.ShaderMaterial({
    uniforms: {
        myLightPos: {
            type: "v3",
            value: new THREE.Vector3()
        }
    },
    vertexShader: yourVertShader,
    fragmentShader: yourFragmentShader
});
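Then, each frame, copy the light's position into that uniform before rendering. A minimal sketch, assuming the ShaderMaterial above is assigned to a variable named material, and that a renderer, scene and camera already exist:
var pointLight = new THREE.PointLight(0xffffff, 1);
scene.add(pointLight);

function animate() {
    requestAnimationFrame(animate);
    // Copy the light's position into the shader uniform each frame.
    material.uniforms.myLightPos.value.copy(pointLight.position);
    renderer.render(scene, camera);
}
animate();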
Q1: Correct. Although, some users on this board have posted work-arounds for hacking MeshPhongMaterial, but that is not the original intent.
Q2 and Q3: Look at ShaderLib.js and you will see the "Normal Map Shader". This is a perfect template for you. Yes, you can duplicate/rename it and modify it to your liking.
It uses a Phong-based lighting model, and even accesses the scene lights for you. You call it like so:
var shader = THREE.ShaderLib[ "normalmap" ];
var uniforms = THREE.UniformsUtils.clone( shader.uniforms );

. . .

var parameters = {
    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    uniforms: uniforms,
    lights: true // set this flag and you have access to scene lights
};

var material = new THREE.ShaderMaterial( parameters );
See these examples: http://threejs.org/examples/webgl_materials_normalmap.html and http://threejs.org/examples/webgl_materials_normalmap2.html.
For coding patterns to follow, see ShaderLib.js and ShaderChunk.js.
three.js r.67

Water/Mirrored surface in WebGL using ThreeJS

I am trying to make a water surface in WebGL using Three.js. I think I will start with just a mirror as I think I know how to add displacement to make basic ripple effects.
This is what I know: Reflection is usually made by rendering a vertically (y-axis) flipped scene on a FBO using the water plane as a culling plane. Then this FBO is used as a texture for the water plane. Using a displacement map (or a noise texture) the image can be displaced and a water effect achieved.
The problems: First off, I can't find a way to flip the scene in ThreeJS. In OpenGL you can just use glScale and put -1 for Y, but I don't think this is possible in WebGL (or GLES on which it is based). At least I found no such thing in ThreeJS. There is a scale parameter for geometry, but there is none for scene. One solution could be changing the .matrixWorldInverse in Camera, but I am not sure how I could do that. Any ideas?
The second hurdle is the clipping/culling plane. Again, the old way was using glClipPlane, but it's not supported even in the newest OpenGL standard as far as I know, so it's also not in WebGL. I read somewhere that you can do that in the vertex shader, but in ThreeJS I only know how to add shaders as materials, and I need this during the render to the FBO.
And third, rendering the FBO to water plane with correct texture coordinates, so I think basically projecting from the camera position.
I can't find any more information on this on the internet. There are very few WebGL reflection examples, and the only thing close was here, and it used some "Oblique View Frustum" method for culling. Is this really the best way to do it nowadays? Instead of one function we now must code this ourselves in software (to be run on the CPU, not the GPU)? Also, the cube reflections provided in ThreeJS are of course not applicable to a plane, so yes, I tried those.
If someone can make as easy as possible example on how to do this I would greatly appreciate it.
Check this three.js example out.
Out of the box and ready to use, straight from the source:
water = new THREE.Water( renderer, camera, scene, {
    textureWidth: 512,
    textureHeight: 512,
    waterNormals: waterNormals,
    alpha: 1.0,
    sunDirection: light.position.clone().normalize(),
    sunColor: 0xffffff,
    waterColor: 0x001e0f,
    distortionScale: 50.0,
} );

mirrorMesh = new THREE.Mesh(
    new THREE.PlaneBufferGeometry( parameters.width * 500, parameters.height * 500 ),
    water.material
);

mirrorMesh.add( water );
mirrorMesh.rotation.x = - Math.PI * 0.5;
scene.add( mirrorMesh );
Seems to look like an ocean to me :)
You can see this presentation: http://29a.ch/slides/2012/webglwater/
And this fiddle may be useful for you: jsfiddle.net/ahmedadel/44tjE
This only addresses the scaling part of your question. The matrix that is attached to the Object3D has a makeScale method.
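For instance, a minimal sketch of flipping an object on Y that way (reflectedRoot is an illustrative name for whatever holds your mirrored scene):
// Override the object's matrix with a Y-flip; disable auto-updates so the
// renderer keeps the hand-written matrix instead of rebuilding it.
var flip = new THREE.Matrix4().makeScale(1, -1, 1);
reflectedRoot.matrixAutoUpdate = false;
reflectedRoot.matrix.copy(flip);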

Three.js outlines

Is it possible to have a black outline on my 3D models with three.js?
I would like graphics which look like Borderlands 2 (toon shading + black outlines).
I'm sure I came in late. Let's hope this solves someone's question later.
Here's the deal: you don't need to render everything twice, and the overhead actually is not substantial. All you need to do is duplicate the mesh and set the duplicate mesh's material side to "backside". No double passes. You will be rendering two meshes instead, with most of the outline's geometry culled by WebGL's "backface culling".
Here's an example:
var scene = new THREE.Scene();
//Create main object
var mesh_geo = new THREE.BoxGeometry(1, 1, 1);
var mesh_mat = new THREE.MeshBasicMaterial({color : 0xff0000});
var mesh = new THREE.Mesh(mesh_geo, mesh_mat);
scene.add(mesh);
//Create outline object
var outline_geo = new THREE.BoxGeometry(1, 1, 1);
//Notice the second parameter of the material
var outline_mat = new THREE.MeshBasicMaterial({color : 0x00ff00, side: THREE.BackSide});
var outline = new THREE.Mesh(outline_geo, outline_mat);
//Scale the object up to have an outline (as discussed in previous answer)
outline.scale.multiplyScalar(1.5);
scene.add(outline);
For more details on backface culling, check out: http://en.wikipedia.org/wiki/Back-face_culling
The above approach works well if you want to add an outline to objects, without adding a toon shader, and thus losing "realism".
Toon shading by itself supports edge detection. They've developed the 'cel' shader in Borderlands to achieve this effect.
In cel shading, devs can either use the object duplication method (done at the [low] pipeline level) or use image processing filters for edge detection. This is where the performance tradeoff between the two techniques is weighed.
More info on cel: http://en.wikipedia.org/wiki/Cel_shading
Cheers!
Yes, it is possible, but not in a simple out-of-the-box way. For toon shading there are even shaders included in /examples/js/ShaderToon.js.
For the outlines, I think the most commonly suggested method is to render in two passes. The first pass renders the models in black and at a slightly larger scale; the second pass is at normal scale and with the toon shaders. This way you'll see the larger black models as an outline. It's not perfect, but I don't think there's an easy way out. You might have more success searching for "three.js hidden line rendering", as, while it's a different look, a somewhat similar method is used to achieve it.
It's an old question, but here is what I did.
I created an outlined cel-shader for my CG course. Unfortunately it takes 3 rendering passes. I'm currently trying to figure out how to remove one pass.
Here's the idea:
1) Render a normal-depth image to a texture.
In the vertex shader you do what you normally do: position to screen space and normal to screen space.
In the fragment shader you calculate the depth of the pixel and then create the normal color with the depth as the alpha value:
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
2) Render the scene onto a texture with cel shading. I changed the scene override material.
3) Make a quad, render both textures onto it, and have an orthographic camera look at it. The cel-shaded texture is just rendered onto the quad, but on the normal-depth texture you run some edge detection, and with that you know when a pixel needs to be black (an edge).
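A rough sketch of the fragment shader for that final quad pass, assuming the two render targets come in as samplers and the vertex shader passes a vUv varying (uniform names and the edge threshold are illustrative):
uniform sampler2D uCelTexture;         // pass 2: cel-shaded scene
uniform sampler2D uNormalDepthTexture; // pass 1: normal in rgb, depth in a
uniform vec2 uTexelSize;               // 1.0 / render target resolution
varying vec2 vUv;

void main() {
    vec4 center = texture2D(uNormalDepthTexture, vUv);
    vec4 right  = texture2D(uNormalDepthTexture, vUv + vec2(uTexelSize.x, 0.0));
    vec4 up     = texture2D(uNormalDepthTexture, vUv + vec2(0.0, uTexelSize.y));

    // An edge is where the normal or the depth changes abruptly between neighbours.
    float normalDiff = distance(center.rgb, right.rgb) + distance(center.rgb, up.rgb);
    float depthDiff  = abs(center.a - right.a) + abs(center.a - up.a);
    float edge = step(0.3, normalDiff + depthDiff); // 0.3 is an arbitrary threshold

    vec3 celColor = texture2D(uCelTexture, vUv).rgb;
    gl_FragColor = vec4(mix(celColor, vec3(0.0), edge), 1.0);
}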
