Complex shape character outline - javascript

Say I have this character and I want to allow the user to select it: when it's selected, I want to show an outline around it.
The character is an Object3D with several meshes.
I tried to clone it and set a back-side material, but it did NOT work: each cube in the shape was rendered with its back side separately, so the outline was wrong.
Do I need to create another mesh for the outline, or is there an easier way?

What #spassvolgel wrote is correct;
What I suspect needs to be done is something like this:
1. First the background needs to be rendered.
2. Then, on a separate transparent layer, the character model with a flat color, slightly bigger than the original.
3. On another transparent layer, the character with its normal material / texture.
4. Finally, the character layer goes on top of the outline layer, and the combined result is placed over the background.
You just create multiple scenes and combine them with sequential render passes:
renderer.autoClear = false;
. . .
renderer.render(scene, camera); // the entire scene
renderer.clearDepth();
renderer.render(scene2, camera); // just the selected item, larger, in a flat color
renderer.render(scene3, camera); // the selected item again
(three.js r129)
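
A minimal sketch of that three-pass setup (the names character and outlineMaterial are illustrative, not from the original answer):

// scene already holds the full world; scene2 and scene3 hold only the selection
const outlineMaterial = new THREE.MeshBasicMaterial( { color: 0xffff00 } );

const scene2 = new THREE.Scene();
const outline = character.clone();
outline.traverse( ( node ) => {
    if ( node.isMesh ) node.material = outlineMaterial; // one flat color for every mesh
} );
outline.scale.multiplyScalar( 1.05 ); // slightly larger than the original
scene2.add( outline );

const scene3 = new THREE.Scene();
scene3.add( character.clone() ); // the selected item with its normal materials

Because every enlarged mesh in scene2 shares one flat color, overlapping cubes merge into a single silhouette, and rendering scene3 on top leaves only the outline's rim visible.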

A generic solution that applies to geometries of any complexity might be to apply a fragment shader via the ShaderMaterial class in three.js. Not sure what your experience level is, but if you need it, an introduction to shaders can be found here.
A good example where shaders are used to highlight geometries can be found here. In their vertex shader, they calculate the normal for a vertex and a parameter used to express the intensity of a glow effect:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;

void main() {
    vec3 vNormal = normalize( normalMatrix * normal );
    vec3 vNormel = normalize( normalMatrix * viewVector );
    intensity = pow( c - dot( vNormal, vNormel ), p );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
These parameters are passed to the fragment shader where they are used to modify the color values of pixels surrounding the geometry:
uniform vec3 glowColor;
varying float intensity;

void main() {
    vec3 glow = glowColor * intensity;
    gl_FragColor = vec4( glow, 1.0 );
}
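
For context, these shaders get wired into a THREE.ShaderMaterial along these lines (a hedged sketch: the uniform values, script IDs, and additive blending are typical glow-effect choices, not copied from the linked example):

var glowMaterial = new THREE.ShaderMaterial( {
    uniforms: {
        c: { value: 1.0 },
        p: { value: 1.4 },
        glowColor: { value: new THREE.Color( 0x00ffff ) },
        viewVector: { value: camera.position }
    },
    vertexShader: document.getElementById( 'glowVertexShader' ).textContent,
    fragmentShader: document.getElementById( 'glowFragmentShader' ).textContent,
    side: THREE.BackSide,
    blending: THREE.AdditiveBlending,
    transparent: true
} );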

I found something on gamedev.stackexchange.com that could be useful. They talk about a stencil buffer. I have no idea how to apply this to THREE.js, though...
https://gamedev.stackexchange.com/questions/59361/opengl-get-the-outline-of-multiple-overlapping-objects

You can get good results by rendering your outlined object(s) to a texture that is (ideally) the size of your destination framebuffer, then render a framebuffer-sized quad using that texture and have the fragment shader blur or do other image transforms. I have an example here that uses raw WebGL, but you can make a custom ShaderMaterial without too much trouble.
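
In three.js terms that approach looks roughly like this (a sketch under assumed names: outlineScene holds only the outlined objects, and the blur pass is left as a plain copy):

var target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );

var quadScene = new THREE.Scene();
var quadCamera = new THREE.OrthographicCamera( -1, 1, 1, -1, 0, 1 );
var quad = new THREE.Mesh(
    new THREE.PlaneGeometry( 2, 2 ),
    new THREE.ShaderMaterial( {
        uniforms: { tOutline: { value: target.texture } },
        vertexShader: 'varying vec2 vUv; void main() { vUv = uv; gl_Position = vec4( position, 1.0 ); }',
        // a real implementation would blur or dilate here instead of copying
        fragmentShader: 'uniform sampler2D tOutline; varying vec2 vUv; void main() { gl_FragColor = texture2D( tOutline, vUv ); }'
    } )
);
quadScene.add( quad );

// each frame: render the outlined object(s) to the texture, then composite
renderer.setRenderTarget( target );
renderer.render( outlineScene, camera );
renderer.setRenderTarget( null );
renderer.render( quadScene, quadCamera );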

I haven't found the answer yet but I wanted to demonstrate what happens when I create multiple meshes, and put another mesh behind each of these meshes with
side: THREE.BackSide
http://jsfiddle.net/GwS9c/8/
As you can see, it's not the desired effect. I would like a clean outline behind ALL three meshes that doesn't overlap. My shader programming experience is practically non-existent, but most online resources recommend this approach of cloning the meshes.

Related

ThreeJS Softer/Gradient Shadow Casting With Alpha Texture

Using ThreeJS I've made a setup that displays a model with a complex material. I'm using an alpha texture to make semi-transparent surfaces, like glass. However, when shadows are cast, the shadow is removed completely instead of using the alpha texture as a gradient.
Basically I want an effect similar to this BabylonJS example: https://playground.babylonjs.com/#P1RZV0
When casting shadows I'm using the MeshDepthMaterial in order to let the light cast shadows through transparent surfaces, like this:
var customDepthMaterial = new THREE.MeshDepthMaterial( {
    depthPacking: THREE.RGBADepthPacking,
    alphaMap: alphaTex,
    alphaTest: 1,
    skinning: true
} );
This works fine; however, as mentioned, shadows should not disappear completely but rather use the alpha texture as an indicator of how strong the shadow should be: the more opaque a surface is, the stronger the shadow, and vice versa.
You can take a look at the project here: https://jsfiddle.net/miger/2c7no6mj/
Apparently BabylonJS makes it as simple as setting the parameter .transparencyShadow = true; I've not found anything similar in ThreeJS. It seems that no official solution has been implemented in ThreeJS yet; the only thing I found was this discussion about it: https://github.com/mrdoob/three.js/issues/10600
I've already implemented some GLSL code to get selective bloom working. I'm guessing that to implement the shadow effect I'm looking for, I need to do something with the shaders, but I don't know what.
These are the shaders I'm using so far:
<script type="x-shader/x-vertex" id="vertexshader">
    varying vec2 vUv;

    void main() {
        vUv = uv;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
    uniform sampler2D baseTexture;
    uniform sampler2D bloomTexture;
    varying vec2 vUv;

    void main() {
        gl_FragColor = ( texture2D( baseTexture, vUv ) + vec4( 1.0 ) * texture2D( bloomTexture, vUv ) );
    }
</script>
And this is how the selective bloom is implemented using .onBeforeCompile:
child.material.onBeforeCompile = shader => {
    shader.uniforms.globalBloom = uniforms.globalBloom;
    shader.fragmentShader = `
        uniform float globalBloom;
        ${shader.fragmentShader}
    `.replace(
        `#include <dithering_fragment>`,
        `#include <dithering_fragment>
        if ( globalBloom > 0.5 ) {
            gl_FragColor = texture2D( emissiveMap, vUv );
        }
        `
    );
};
I'm really bad with shaders. There have been some proposed solutions in the above-mentioned discussion, but I have way too little experience with shaders and GLSL to know what I need to do.
How can I implement soft/gradient shadow casting using an alpha texture as strength indicator?

Threejs: make custom shader match the standard rendering

For performance reasons I need to display hundreds of moving tetrahedrons in a scene. Therefore I use InstancedBufferGeometry, which requires a custom shader.
The scene also contains objects with regular geometry (non-buffer) and some lights (I prepared boiled-down snippet: https://jsfiddle.net/negan/xs3k1yz4/ ).
My problem is that the tetrahedrons are not shaded in a way that plausibly fits the lighting of the rest of the scene. The reason is probably the primitive shader I built:
<script id="vertexShader" type="x-shader/x-vertex">
    attribute vec3 offset;
    attribute vec4 color;
    varying vec4 vColor;
    varying vec3 vNormal;

    void main() {
        vColor = color;
        vNormal = normalMatrix * normal;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position + offset, 1.0 );
    }
</script>
<script id="fragmentShader" type="x-shader/x-fragment">
    varying vec4 vColor;
    varying vec3 vNormal;

    void main() {
        float di = 0.4 * dot( vNormal, normalize( vec3( 1.0, 1.0, 0.0 ) ) )
                 + 0.4 * dot( vNormal, normalize( vec3( 1.0, 0.0, 1.0 ) ) );
        di = di + 0.2;
        vec4 vColor2 = vColor * vec4( 1.0, 1.0, 1.0, 0.2 ) * di; // adjust the alpha
        gl_FragColor = vColor2;
    }
</script>
Is there a way to make the custom shader respond to the lights I defined in the scene? The shader also renders the faces in a way that doesn't give the impression of directed light. I'd rather have each face lit evenly than have the color interpolated from the vertices, but I was unable to achieve that.
Any pointer or help is appreciated.
Here's a gist of the full fragment and vertex shader source for a Three.js MeshPhongMaterial shader, as of r83.
Three.js builds shaders using a string concatenation system, so figuring out the source of a shader from looking at the Three.js source will be almost impossible.
The above Gist was generated by installing the Shader Editor Chrome extension, going to a Three.js example page that has a MeshPhongMaterial (like this one), and using the shader editor to inspect the full source of a running shader program.
Three.js passes all default uniforms, like lighting data, to all shader programs, so if you create a custom shader with the above Gist code, lights, bones, etc, will all work automatically.
I would take the full source code and add your logic in manually, literally adding the result of your calculations to the existing gl_FragColor = ...
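
If wading through the Gist feels heavy, the same starting point can be reached through three.js's built-in shader library (a sketch; lights: true is what tells three.js to bind its lighting uniforms to the program):

var phong = THREE.ShaderLib.phong;
var material = new THREE.ShaderMaterial( {
    uniforms: THREE.UniformsUtils.clone( phong.uniforms ),
    vertexShader: phong.vertexShader,
    fragmentShader: phong.fragmentShader,
    lights: true // receive the scene's light data automatically
} );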
In the future, my tool ShaderFrog will let you automatically add any Three.js shader feature (lights, bones, morph targets, etc) to any custom shader. ShaderFrog can already combine any shaders together automatically, but I haven't done the manual work needed to fully support Three features yet.
There's an InstancedMesh module for three.js. It allows mesh instancing without using shader materials; it patches the existing material.
https://github.com/pailhead/three-instanced-mesh
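
(Aside, not from the original answer: three.js r109 and later also ship a built-in THREE.InstancedMesh that works with the standard lit materials, so no custom shader is needed at all:)

var count = 500;
var mesh = new THREE.InstancedMesh(
    new THREE.TetrahedronGeometry( 1 ),
    new THREE.MeshPhongMaterial( { color: 0x2194ce } ),
    count
);

var dummy = new THREE.Object3D();
for ( var i = 0; i < count; i++ ) {
    dummy.position.set( Math.random() * 100 - 50, Math.random() * 100 - 50, Math.random() * 100 - 50 );
    dummy.updateMatrix();
    mesh.setMatrixAt( i, dummy.matrix ); // per-instance offset, like the offset attribute above
}
scene.add( mesh );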

Projecting FBO value to screen-space to read from depth texture

EDIT: Updated the JSFiddle link as it wasn't rendering correctly in Chrome on Windows 7.
Context
I'm playing around with particles in THREE.JS and using a frame buffer / render target (double buffered) to write positions to a texture. This texture is affected by its own ShaderMaterial, and then read by the PointCloud's ShaderMaterial to position the particles. All well and good so far; everything works as expected.
What I'm trying to do now is use my scene's depth texture to see if any of the particles are intersecting my scene's geometry.
The first thing I did was to reference my depth texture in the PointCloud's fragment shader, using gl_FragCoord.xy / screenResolution.xy to generate my uv for the depth texture lookup.
There's a JSFiddle of this here. It's working well - when a particle is behind something in the scene, I tell the particle to be rendered red, not white.
My issue arises when I try to do the same depth comparison in the position texture shader. In the draw fragment shader, I can use the value of gl_FragCoord to get the particle's position in screen space and use that for the depth uv lookup, since in the draw vertex shader I use the modelViewMatrix and projectionMatrix to set the value of gl_Position.
I've tried doing this in the position fragment shader, but to no avail. By the way, what I'm aiming to do with this is particle collision with the scene on the GPU.
So... the question (finally!):
Given a texture where each pixel/texel is a world-space 3d vector representing a particle's position, how can I project this vector to screen-space in the fragment shader, with the end goal of using the .xy properties of this vector as a uv lookup in the depth texture?
What I've tried
In the position texture shader, using the same transformations as the draw shader to transform a particle's position to (what I think is) screen-space using the model-view and projection matrices:
// Position texture's fragment shader:
void main() {
    vec2 uv = gl_FragCoord.xy / textureResolution.xy;
    vec4 particlePosition = texture2D( tPosition, uv );
    vec2 screenspacePosition = modelViewMatrix * projectionMatrix * vec4( particlePosition, 1.0 );
    vec2 depthUV = vec2( screenspacePosition.xy / screenResolution.xy );
    float depth = texture2D( tDepth, depthUV ).x;

    if ( depth < screenspacePosition.z ) {
        // Particle is behind something in the scene,
        // so do something...
    }

    gl_FragColor = vec4( particlePosition.xyz, 1.0 );
}
Variations on a theme of the above:
Offsetting the depth's uv by doing 0.5 - depthUV
Using the tPosition texture resolution instead of the screen resolution to scale the depthUV.
Another depth uv variation: doing depthUV = (depthUV - 1.0) * 2.0;. This helps a little, but the scale is completely off.
Help! And thanks in advance.
After a lot of experimentation and research, I narrowed the issue down to the values of modelViewMatrix and projectionMatrix that THREE.js automatically assigns when one creates an instance of THREE.ShaderMaterial.
What I wanted to do worked absolutely fine in my 'draw' shaders, where THREE.js sets the modelViewMatrix to:
new THREE.Matrix4().multiplyMatrices( camera.matrixWorldInverse, object.matrixWorld)
It appears that when one creates a ShaderMaterial to render values to a texture (and thus not attached to an object in the scene/world), the object.matrixWorld is essentially an identity matrix. What I needed to do was to make my position texture shaders have the same modelViewMatrix value as my draw shaders (which were attached to an object in the scene/world).
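
In practice that means computing the matrix on the CPU and passing it to the offscreen material yourself (a sketch; positionMaterial and the uniform name uModelViewMatrix are illustrative):

// GLSL side declares: uniform mat4 uModelViewMatrix;
// JS side, each frame before rendering the position texture:
positionMaterial.uniforms.uModelViewMatrix.value =
    new THREE.Matrix4().multiplyMatrices( camera.matrixWorldInverse, pointCloud.matrixWorld );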
Once that was in place, the only other thing to do was make sure I was transforming a particle's position to screen-space correctly. I wrote some helper functions in GLSL to do this:
// Transform a worldspace coordinate to a clipspace coordinate.
// Note that `mvpMatrix` is: `projectionMatrix * modelViewMatrix`.
vec4 worldToClip( vec3 v, mat4 mvpMatrix ) {
    return ( mvpMatrix * vec4( v, 1.0 ) );
}

// Transform a clipspace coordinate to a screenspace one.
vec3 clipToScreen( vec4 v ) {
    return ( vec3( v.xyz ) / ( v.w * 2.0 ) );
}

// Transform a screenspace coordinate to a 2d vector for
// use as a texture UV lookup.
vec2 screenToUV( vec2 v ) {
    return 0.5 - vec2( v.xy ) * -1.0;
}
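
Chained together in the position texture's fragment shader, they are used something like this (uMvpMatrix and tDepth are assumed uniform names, following the fiddle's setup):

vec4 clipPos = worldToClip( particlePosition.xyz, uMvpMatrix );
vec2 depthUV = screenToUV( clipToScreen( clipPos ).xy );
float sceneDepth = texture2D( tDepth, depthUV ).x;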
I've made a JSFiddle to show this in action, here. I've commented it (probably too much) so hopefully it explains what is going on well enough for people that aren't familiar with this kind of stuff to understand.
Quick note about the fiddle: it doesn't look all that impressive, as all I'm doing is emulating what depthTest: true would do were that property set on the PointCloud. The one difference is that I'm setting the y position of particles that have collided with scene geometry to 70.0, which is the white band near the top of the render. Eventually I'll do this calculation in a velocity texture shader, so I can do proper collision response.
Hope this helps someone :)
EDIT: Here's a version of this implemented with a (possibly buggy) collision response.

Create animation of transparency, revealing the Mesh in threejs

I need to create a linear animation (something like jQuery's slideUp on a 2D object) that reveals a really complex mesh (a 3D building model) from the bottom to the top.
I was looking for an opacity channel / opacity map or something like that, and now I know that is not possible.
Using texture sprites and changing the offset is not the best idea because my UV map is too complicated.
Is there any way to create that effect in THREE.JS?
1. Render the entire scene into the first framebuffer (texture).
2. Render only the mesh into the second framebuffer (texture).
3. Render a fullscreen rectangle that uses the two textures above, with some version of the code below:
uniform sampler2D texScene;
uniform sampler2D texMesh;
uniform vec2 uResolution;
uniform float time;
uniform float endAnim; // time at which the animation ends (assuming it starts at time = 0)

void main() {
    vec2 uv = gl_FragCoord.xy / uResolution;
    vec3 s = texture2D( texScene, uv ).xyz;
    vec4 m = texture2D( texMesh, uv );

    // slide-up effect
    float percent = clamp( time, 0.0, endAnim ) / endAnim;
    vec3 color = s;

    if ( uv.y > ( 1.0 - percent ) ) {
        color = s * ( 1.0 - m.a ) + m.xyz * m.a;
    }

    gl_FragColor = vec4( color, 1.0 );
}
The code should be fairly intuitive: depending on the elapsed time, it computes what percentage of the animation has completed and, based on that, decides whether to blend in the mesh's color or just output the background color.
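For completeness, the JavaScript side of steps 1 and 2 might look like this (a sketch with assumed names; the fullscreen rectangle quad and its camera are set up the usual way):

var texScene = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
var texMesh = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );

renderer.setRenderTarget( texScene );
renderer.render( scene, camera );          // 1. the entire scene
renderer.setRenderTarget( texMesh );
renderer.render( meshScene, camera );      // 2. only the building mesh
renderer.setRenderTarget( null );

quad.material.uniforms.texScene.value = texScene.texture;
quad.material.uniforms.texMesh.value = texMesh.texture;
quad.material.uniforms.time.value = clock.getElapsedTime();
renderer.render( quadScene, quadCamera );  // 3. the fullscreen rectangle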
Hope it helps.
Alternatively you can draw the building to the screen using gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA), and give your building an alpha gradient (from top to bottom).
The way drawing to any output works is that WebGL evaluates the source information and the destination information (the stuff that has already been drawn to that output) and combines the two, but you can dictate how it does that.
The equation for drawing to an output can be loosely described as:
SOURCE_VALUE * [SOURCE_FACTOR] [BLEND EQUATION] DESTINATION_VALUE * [DESTINATION_FACTOR];
By default this is:
SOURCE_VALUE * 1 + DESTINATION_VALUE * 0;
This equation discards all existing information in the buffer, and draws over it with the new information.
What we want to do is to tell WebGL to keep the existing information where we're not drawing onto the buffer, and take the new information where we are going to draw, so the equation becomes:
SOURCE_VALUE * SRC_ALPHA + DESTINATION_VALUE * ONE_MINUS_SRC_ALPHA;
If your building is 20% transparent in one fragment, then the fragment will be 20% the colour of the building, and 80% of the colour of whatever's behind the building.
This method of drawing semitransparent objects honours the depth buffer.
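
In raw WebGL that setup is just the following (three.js does the equivalent internally when a material has transparent: true with its default normal blending):

gl.enable( gl.BLEND );
gl.blendFunc( gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA );
// draw the opaque scene first, then the alpha-gradient building on top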
I figured out another solution.
I use one texture for the whole building (no repeated pattern).
I lay the UVs out progressively in the vertical direction (faces at the bottom of the building are at the bottom of the texture, etc.) and animate the texture by filling it with a transparent rectangle (canvas texture).
// x - current step
// steps - number of steps
var canvas = document.getElementById('canvas-texture'),
    ctx = canvas.getContext('2d');

ctx.drawImage(image, 0, 0);
ctx.globalCompositeOperation = 'destination-out';
// cut a growing transparent strip (canvas.width/height assumed for the undefined width/height)
ctx.fillRect(0, 0, canvas.width, canvas.height / steps * x);
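One step the snippet leaves out: the three.js texture wrapping this canvas has to be flagged for re-upload after each redraw, otherwise the GPU keeps showing the old pixels (assuming the canvas is wrapped in a THREE.Texture, which isn't shown in the post):

var texture = new THREE.Texture( canvas ); // created once, assigned to the building's material
// ...after each fillRect step:
texture.needsUpdate = true;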
I needed it ASAP, so if I find some time at the weekend I'll try your ideas, and if you want I can create a fiddle with my solution.
Anyway, thanks for your help guys.

How to use texture, and color also in WebGL?

I'm learning WebGL, and I would like to make a program where there are both colored and textured objects. I tried to modify the fragment shader a bit: if the object doesn't have a texture, it should be colored instead.
precision mediump float;

varying vec4 vColor;
varying vec2 vTextureCoord;

uniform sampler2D uSampler;

void main(void) {
    gl_FragColor = vColor;
    gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
}
It doesn't work. I get an error: Could not initialize shaders.
This may or may not make sense, but whether you have a texture or not requires different shaders. To get around this, many people use a single-pixel white texture when they don't want textures. That way they only need one shader to cover both cases. The shader might look something like this:
uniform vec4 u_color;
uniform sampler2D u_texture;
varying vec2 v_texCoord;

void main(void) {
    gl_FragColor = texture2D(u_texture, v_texCoord) * u_color;
}
Now, in your code you can draw textured like this
// at init time.
var whiteColor = new Float32Array([1, 1, 1, 1]);
// at draw time
gl.uniform4fv(u_colorLocation, whiteColor); // use white color
gl.bindTexture(gl.TEXTURE_2D, someTexture); // and some texture
... draw ...
And to draw un-textured you do this
// at init time.
var whiteTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, whiteTexture);
var whitePixel = new Uint8Array([255, 255, 255, 255]);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0,
gl.RGBA, gl.UNSIGNED_BYTE, whitePixel);
// at draw time
gl.uniform4fv(u_colorLocation, someColor); // use the color I want
gl.bindTexture(gl.TEXTURE_2D, whiteTexture); // use the white texture
... draw ...
That also gives you the ability to tint your textures by using a non white color and a texture together.
I think we may need to see some more code before we can help you figure out the exact issue. For example, what does your vertex shader look like, and how are you linking them?
In the meantime there are a couple of pointers I can give you about the shader you posted. For one, it will not give you the effect that you described. What will actually happen is that it will always show the texture color or, if no texture is given, display black. The vertex color will never be shown. This is because you are assigning to the final fragment color twice, with the second assignment always overriding the first.
What you might want to try is blending the two values together by either adding or multiplying the color and texture values. Try something like so:
gl_FragColor = vColor * texture2D(uSampler, vTextureCoord);
The only problem there is that if the texture is empty you'll still get black (anything multiplied by 0 is still 0). To avoid that, always use a solid white texture if you don't have an actual texture available.
If you want the exact effect you described (vertex color only if no texture) you may need to use a conditional statement. Those are typically best avoided but I doubt your usage will be anything terribly heavy.
Also, as a style tip, the following lines are equivalent:
vec2(vTextureCoord.s, vTextureCoord.t)
vTextureCoord.st
The second may be faster and should be preferred. Beyond that, you could just pass in vTextureCoord directly (as I showed above), because it already is a vec2.
