EDIT: Updated the JSFiddle link as it wasn't rendering correctly in Chrome on Windows 7.
Context
I'm playing around with particles in THREE.JS and using a frame buffer / render target (double buffered) to write positions to a texture. This texture is affected by its own ShaderMaterial, and then read by the PointCloud's ShaderMaterial to position the particles. All well and good so far; everything works as expected.
What I'm trying to do now is use my scene's depth texture to see if any of the particles are intersecting my scene's geometry.
The first thing I did was to reference my depth texture in the PointCloud's fragment shader, using gl_FragCoord.xy / screenResolution.xy to generate my uv for the depth texture lookup.
There's a JSFiddle of this here. It's working well - when a particle is behind something in the scene, I tell the particle to be rendered red, not white.
My issue arises when I try to do the same depth comparison in the position texture shader. In the draw fragment shader, I can use the value of gl_FragCoord to get the particle's position in screen space and use that for the depth uv lookup, since in the draw vertex shader I use the modelViewMatrix and projectionMatrix to set the value of gl_Position.
I've tried doing this in the position fragment shader, but to no avail. By the way, what I'm aiming to do with this is particle collision with the scene on the GPU.
So... the question (finally!):
Given a texture where each pixel/texel is a world-space 3d vector representing a particle's position, how can I project this vector to screen-space in the fragment shader, with the end goal of using the .xy properties of this vector as a uv lookup in the depth texture?
What I've tried
In the position texture shader, using the same transformations as the draw shader to transform a particle's position to (what I think is) screen-space using the model-view and projection matrices:
// Position texture's fragment shader:
void main() {
    vec2 uv = gl_FragCoord.xy / textureResolution.xy;
    vec4 particlePosition = texture2D( tPosition, uv );

    vec4 screenspacePosition = projectionMatrix * modelViewMatrix * vec4( particlePosition.xyz, 1.0 );
    vec2 depthUV = vec2( screenspacePosition.xy / screenResolution.xy );
    float depth = texture2D( tDepth, depthUV ).x;

    if( depth < screenspacePosition.z ) {
        // Particle is behind something in the scene,
        // so do something...
    }

    gl_FragColor = vec4( particlePosition.xyz, 1.0 );
}
Variations on a theme of the above:
Offsetting the depth's uv by doing 0.5 - depthUV
Using the tPosition texture resolution instead of the screen resolution to scale the depthUV.
Another depth uv variation: doing depthUV = (depthUV - 1.0) * 2.0;. This helps a little, but the scale is completely off.
Help! And thanks in advance.
After a lot of experimentation and research, I narrowed the issue down to the values of modelViewMatrix and projectionMatrix that THREE.js automatically assigns when one creates an instance of THREE.ShaderMaterial.
What I wanted to do worked absolutely fine in my 'draw' shaders, where the modelViewMatrix was set (by THREE.js) to:
new THREE.Matrix4().multiplyMatrices( camera.matrixWorldInverse, object.matrixWorld)
It appears that when one creates a ShaderMaterial to render values to a texture (and thus not attached to an object in the scene/world), the object.matrixWorld is essentially an identity matrix. What I needed to do was to make my position texture shaders have the same modelViewMatrix value as my draw shaders (which were attached to an object in the scene/world).
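To make that concrete, here is a minimal sketch of what "the same modelViewMatrix value" can look like on the JavaScript side. All names (positionMaterial, particleMesh, uModelViewMatrix, the shader source variables) are illustrative, and the exact uniform declaration syntax varies slightly between three.js versions:

```js
// Hedged sketch: give the off-screen position shader the same modelViewMatrix
// that three.js computes automatically for the on-screen draw shader.
var positionMaterial = new THREE.ShaderMaterial({
    uniforms: {
        tPosition: { value: null },
        uModelViewMatrix: { value: new THREE.Matrix4() }
        // ...other simulation uniforms...
    },
    vertexShader: positionVertexShader,
    fragmentShader: positionFragmentShader
});

// Each frame, before rendering into the position render target:
positionMaterial.uniforms.uModelViewMatrix.value.multiplyMatrices(
    camera.matrixWorldInverse,  // the view matrix (kept up to date by the renderer)
    particleMesh.matrixWorld    // the draw object's world matrix
);
```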
Once that was in place, the only other thing to do was make sure I was transforming a particle's position to screen-space correctly. I wrote some helper functions in GLSL to do this:
// Transform a worldspace coordinate to a clipspace coordinate.
// Note that `mvpMatrix` is: `projectionMatrix * modelViewMatrix`.
vec4 worldToClip( vec3 v, mat4 mvpMatrix ) {
    return ( mvpMatrix * vec4( v, 1.0 ) );
}

// Transform a clipspace coordinate to a screenspace one.
vec3 clipToScreen( vec4 v ) {
    return ( vec3( v.xyz ) / ( v.w * 2.0 ) );
}

// Transform a screenspace coordinate to a 2d vector for
// use as a texture UV lookup.
vec2 screenToUV( vec2 v ) {
    return 0.5 - vec2( v.xy ) * -1.0;
}
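For completeness, here is a rough sketch of how those helpers chain together inside the position texture's fragment shader. This is not copied from the fiddle; vUv, tPosition, tDepth and uProjModelView (assumed to be projectionMatrix * modelViewMatrix, uploaded from JavaScript as described above) are illustrative names:

```glsl
vec4 particlePosition = texture2D( tPosition, vUv );

vec4 clipPos   = worldToClip( particlePosition.xyz, uProjModelView );
vec3 screenPos = clipToScreen( clipPos );
vec2 depthUV   = screenToUV( screenPos.xy );

float sceneDepth = texture2D( tDepth, depthUV ).x;
// ...compare sceneDepth against the particle's own depth and react accordingly...
```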
I've made a JSFiddle to show this in action, here. I've commented it (probably too much) so hopefully it explains what is going on well enough for people that aren't familiar with this kind of stuff to understand.
Quick note about the fiddle: it doesn't look all that impressive, since all I'm doing is emulating what depthTest: true would do were that property set on the PointCloud. The one difference is that I set the y position of particles that have collided with scene geometry to 70.0, which is what the white band near the top of the render is. Eventually I'll do this calculation in a velocity texture shader, so I can implement proper collision response.
Hope this helps someone :)
EDIT: Here's a version of this implemented with a (possibly buggy) collision response.
Related
I'm maintaining a vertex shader encapsulated in a custom material class (previously inherited from ShaderMaterial, now from MeshStandardMaterial) which converts 3D coordinates to NDC as usual:
vec4 ndcPos = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
ndcPos /= ndcPos.w;
//...other transformations to ndcPos
Then a set of transformations is applied to ndcPos; as you can see, they are applied in NDC space. I need to take the resulting coordinate back to camera (eye) space, so I guess I need to invert the steps, something like this:
vec4 mvPos = ndcPos * ndcPos.w;
mvPos *= inverse of projectionMatrix;
Expected result: mvPos has only the modelView transformation applied.
Questions:
Is that correct?
How do I compute the inverse of projectionMatrix? It would be easy and cheap to pass camera.projectionMatrixInverse as a uniform to the vertex shader, but I didn't find a way to do so in three.js: neither the ancestor material classes nor onBeforeCompile can access the camera.
The inverse operation to compute the view space position is
vec4 p = inverse(projectionMatrix) * ndc;
vec4 mvPos = p / p.w;
and the inverse operation to compute the object position is
vec4 p = inverse(projectionMatrix * modelViewMatrix) * ndc;
vec4 pos = p / p.w;
(Note that in the above code pos corresponds to the attribute position and mvPos corresponds to modelViewMatrix * position.)
Note: If you transform a Cartesian coordinate with a perspective projection matrix (or inverse perspective projection matrix), the result is a Homogeneous coordinate. To convert a Homogeneous coordinate into a Cartesian coordinate, you must perform a Perspective Divide (after the Perspective Divide, the component w is 1).
It should be mentioned that the inverse function exists only since GLSL ES 3.00 (WebGL 2.0) and is not available in GLSL ES 1.00 (WebGL 1.0). So it may be necessary to calculate the inverse matrix in Javascript and pass it as a uniform variable to the shader.
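For the WebGL 1.0 case, here is a minimal sketch of that fallback. The uniform name uProjectionMatrixInverse is illustrative; note that Object3D.onBeforeRender does receive the camera, which sidesteps the access problem described in the question:

```js
// Hedged sketch: copy the inverse projection matrix on the CPU side and pass it
// in as a uniform, updated just before the object is rendered.
material.uniforms.uProjectionMatrixInverse = { value: new THREE.Matrix4() };

mesh.onBeforeRender = function ( renderer, scene, camera ) {
    // camera.projectionMatrixInverse is maintained by three.js; on very old
    // versions you may need to invert camera.projectionMatrix yourself.
    material.uniforms.uProjectionMatrixInverse.value.copy( camera.projectionMatrixInverse );
};
```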
I am plotting WebGL points on a map and at present it works fine. Now I want to add another layer to the map, and I am trying to work out the best way to do this. Because of the way my code is written, I am sending the gl draw function one long array with the following format:
[lat, lng, r, g, b, a, id, lat, lng, r, g, b, a, id, etc...] //where id is used for selecting the marker.
The points are drawn using:
this.delegate.gl.drawArrays(this.delegate.gl.POINTS, 0, numPoints);
When adding the extra layer I want one layer to show as circles and the other as squares. My idea was to add another element to the array which encodes whether to draw a circle or a square (i.e. 0 or 1), so the array stride would now be eight:
[lat, lng, r, g, b, a, id, code, lat, lng, r, g, b, a, id, code etc...]
The shader code then decides whether to draw a circle or a square. Is this possible? I am unsure how to pass the shape code attribute to the shader to determine which shape to draw.
Here is the shader code; at present there are two fragment shader programs, one that draws circles and one that draws squares.
<script id="vshader" type="x-shader/x-vertex">
uniform mat4 u_matrix;
attribute vec4 a_vertex;
attribute float a_pointSize;
attribute vec4 a_color;
varying vec4 v_color;
void main() {
gl_PointSize = a_pointSize;
gl_Position = u_matrix * a_vertex;
v_color = a_color;
}
</script>
<script id="fshader" type="x-shader/x-fragment">
precision mediump float;
varying vec4 v_color;
void main() {
float border = 0.05;
float radius = 0.5;
vec2 m = gl_PointCoord.xy - vec2(0.5, 0.5);
float dist = radius - sqrt(m.x * m.x + m.y * m.y);
float t = 0.0;
if (dist > border)
t = 1.0;
else if (dist > 0.0)
t = dist / border;
gl_FragColor = mix(vec4(0), v_color, t);
}
</script>
<script id="fshader-square" type="x-shader/x-fragment">
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
</script>
My attribute pointers are setup like this:
this.gl.vertexAttribPointer(vertLoc, 2, this.gl.FLOAT, false, fsize*7, 0); //vertex
this.gl.vertexAttribPointer(colorLoc, 4, this.gl.FLOAT, true, fsize*7, fsize*2); //color
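For reference, the stride-8 setup I have in mind would look roughly like this (untested; codeLoc and the a_shapeCode attribute it points at are placeholder names, and the code value would be forwarded to the fragment shader as a varying to branch on):

```js
var fsize = Float32Array.BYTES_PER_ELEMENT;
var stride = fsize * 8; // lat, lng, r, g, b, a, id, code

this.gl.vertexAttribPointer(vertLoc,  2, this.gl.FLOAT, false, stride, 0);         // vertex
this.gl.vertexAttribPointer(colorLoc, 4, this.gl.FLOAT, true,  stride, fsize * 2); // color
this.gl.vertexAttribPointer(codeLoc,  1, this.gl.FLOAT, false, stride, fsize * 7); // shape code
this.gl.enableVertexAttribArray(codeLoc);
```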
The most common way to draw points with different shapes is to use a texture, that way your designers can make markers etc..
It's also common not to draw POINTS but instead to draw quads made from TRIANGLES. Neither Google Maps nor Mapbox uses POINTS (which you can verify yourself).
POINTS have a few issues:
the spec says the largest size you can draw a POINT is implementation dependent and can be just 1 pixel
Whether points immediately disappear when their centers go outside the screen is implementation dependent (that is not part of the spec but it is unfortunately true)
POINTS can only be aligned squares.
If the shape you want to draw is tall and thin, you waste a bunch of texture space and/or overdraw by drawing a square large enough to hold the tall thin rectangle you wanted to draw. Similarly, if you want to rotate the image, it's much easier to do with triangles than with points.
As for implementations, that's all up to you. Some random ideas:
Use POINTS, add an imageId per point. Use imageId and gl_PointCoord to choose an image from a texture atlas
This assumes all the images are the same size:
uniform vec2 textureAtlasSize; // eg 64x32
uniform vec2 imageSize; // eg 16x16
float imagesAcross = floor(textureAtlasSize.x / imageSize.x);
vec2 imageCoord = vec2(mod(imageId, imagesAcross), floor(imageId / imagesAcross));
vec2 uv = (imageCoord + gl_PointCoord) * imageSize / textureAtlasSize;
gl_FragColor = texture2D(textureAtlas, uv);
Note that if you make your imageIds a vec2 instead of a float and pass the id in directly as an imageCoord, then you don't need the imageCoord math in the shader.
Use POINTS, a texture atlas, and vec2 offset, vec2 range for each point
Now the images don't need to be the same size, but you need to set offset and range appropriately for each point:
gl_FragColor = texture2D(textureAtlas, offset + range * gl_PointCoord);
Use TRIANGLES and instanced drawing
This is really no different than the above except you create a single 2-triangle quad and use drawArraysInstanced or drawElementsInstanced. You need to replace references to gl_PointCoord with your own texture coordinates, and you need to compute the point's position in the vertex shader:
attribute vec2 reusedPosition;  // the 6 corners of a unit quad, components in (-1, +1)
... all the attributes you had before ...
uniform vec2 outputResolution;  // gl.canvas.width, gl.canvas.height
varying vec2 ourPointCoord;

void main() {
    ... -- insert the code you had before above this line -- ...

    // now take gl_Position and expand it into a point-sized quad
    float ourPointSize = ???
    gl_Position.xy += reusedPosition * ourPointSize / outputResolution * gl_Position.w;
    ourPointCoord = reusedPosition * 0.5 + 0.5;
}
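And a hedged sketch of the draw-call side (WebGL2 syntax; with WebGL1 the same calls come from the ANGLE_instanced_arrays extension, e.g. ext.vertexAttribDivisorANGLE). Buffer names, attribute locations, stride and numPoints are illustrative and assumed to match your existing setup:

```js
// reusedPosition: the 6 vertices of a unit quad, shared by every point
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.enableVertexAttribArray(reusedPositionLoc);
gl.vertexAttribPointer(reusedPositionLoc, 2, gl.FLOAT, false, 0, 0);

// per-point data advances once per instance rather than once per vertex
gl.bindBuffer(gl.ARRAY_BUFFER, pointDataBuffer);
gl.enableVertexAttribArray(vertLoc);
gl.vertexAttribPointer(vertLoc, 2, gl.FLOAT, false, stride, 0);
gl.vertexAttribDivisor(vertLoc, 1);

gl.drawArraysInstanced(gl.TRIANGLES, 0, 6, numPoints);
```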
Use TRIANGLES with merged geometry.
This just means instead of one vertex per point you need 4 (if indexed) or 6.
Use TRIANGLES with only an id, put data in textures.
If updating 4 to 6 vertices to move a point is too much work (hint: it's probably not), then you can put your data in a texture and look up the data for each point based on an id. So you put 4 point ids plus a vertex id per point in some buffer (i.e. point ids 0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4 and vertex ids 0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3). You can then use those to compute quad coordinates, texture coordinates, and UVs to look up per-point data in a texture. The advantage is that you only have to update one value per point instead of 4 to 6 values per point if you want to move a point.
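A hedged sketch of building those id buffers (4 vertices per point, 6 indices per quad; variable names are illustrative):

```js
const pointIds  = [];
const vertexIds = [];
const indices   = [];

for (let i = 0; i < numPoints; ++i) {
    pointIds.push(i, i, i, i);   // which point this vertex belongs to
    vertexIds.push(0, 1, 2, 3);  // which corner of the quad it is
    const o = i * 4;
    indices.push(o, o + 1, o + 2, o, o + 2, o + 3);
}
```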
Note: all of the above assumes you want to draw thousands of points in a single draw call. If you're drawing 250 or fewer points, maybe even 1000-2000 points, drawing them one point per draw call the normal way may be just fine, e.g.
for each point
setup uniforms
gl.drawXXX
Not points, but just as an example: the WebGL Aquarium uses that kind of loop. It is not using instancing or merging geometry in any way. Here's another example drawing just one quad per draw call.
I am drawing circles/ellipses in WebGL using a single quad and a fragment shader, in order to draw them in a resolution-independent manner (edge-distance anti-aliasing).
Here is my fragment shader currently:
#extension GL_OES_standard_derivatives : enable
precision mediump float;

varying vec2 coord;

vec4 circleColor = vec4(1.0, 0.5, 0.0, 1.0);
vec4 outlineColor = vec4(0.0, 0.0, 0.0, 1.0);

uniform float strokeWidth;
float outerEdgeCenter = 0.5 - strokeWidth;

void main(void) {
    float dx = 0.5 - coord.x;
    float dy = 0.5 - coord.y;
    float distance = sqrt(dx * dx + dy * dy);
    float delta = fwidth(distance);

    float alpha = 1.0 - smoothstep(0.45 - delta, 0.45, distance);
    float stroke = 1.0 - smoothstep(outerEdgeCenter - delta, outerEdgeCenter + delta, distance);

    gl_FragColor = vec4(mix(outlineColor.rgb, circleColor.rgb, stroke), alpha);
}
This creates an orange circle with a black outline that is perfectly antialiased whatever the size.
However, as soon as I transform the quad (scale it) in order to turn the circle into an ellipse, the distance calculation transforms along with it, causing the outline to also scale. My understanding is that I would somehow need to account for the quad's transform by inverting it.
What I would like is for the distance to remain uniform even when the quad is transformed, in effect producing a constant width outline around the whole circle/ellipse.
Any Help would be greatly appreciated.
The problem is that your fragment shader (FS) is something of a black box right now, because the code lacks the information it needs.
Your FS is written to work in a square space, so it always renders a circle in a space where x and y cover the same range (the -1.0 to 1.0 interval).
Meanwhile the quad is transformed outside the FS (in the vertex shader or elsewhere), and there is currently no way for the FS to know about that transformation.
To solve the problem, I suggest pushing additional information about the scaling into the FS. Something like what Shadertoy provides in its shader inputs:
uniform vec3 iResolution; // viewport resolution (in pixels)
except this won't be the screen resolution, but information about the quad's transformation, so something like:
varying vec2 trans;
// where value (1.0, 1.0) mean no transformation
Then you can use this value to calculate a different stroke. Instead of inverting the transformation for the current stroke, I would rather calculate unique dx and dy values from it.
There are more ways to achieve a working solution, and it depends on how you want to use it later (what kinds of transformations should be possible, etc.), so I present only the most basic and easiest one.
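A minimal sketch of handing that extra information to the shader as a uniform (u_quadScale, program, and the scale values are illustrative; a varying works the same way if the value is produced in the vertex shader instead):

```js
var quadScaleLoc = gl.getUniformLocation(program, 'u_quadScale');
gl.useProgram(program);
gl.uniform2f(quadScaleLoc, scaleX, scaleY); // (1.0, 1.0) means no transformation
```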
I need to create a linear animation (something like jQuery's slideUp on a 2D object) that reveals a really complex mesh (a 3D building model) from the bottom to the top.
I was looking for an opacity channel / opacity map or something like that, and now I know that is not possible.
Using texture sprites and changing the offset is not the best idea because my UV map is too complicated.
Is there any way to create that effect in THREE.JS?
Render the entire scene into a first framebuffer (texture).
Render only the mesh into a second framebuffer (texture).
Then render a fullscreen rectangle that uses the two previously mentioned textures, with some version of the code below:
uniform sampler2D texScene;
uniform sampler2D texMesh;
uniform vec2 uResolution;
uniform float time;
uniform float endAnim; // time at which the animation ends (assuming it starts at time = 0)

void main() {
    vec2 uv = gl_FragCoord.xy / uResolution;
    vec3 s = texture2D( texScene, uv ).xyz;
    vec4 m = texture2D( texMesh, uv );

    // slide up effect
    float percent = clamp( time, 0.0, endAnim ) / endAnim;

    vec3 color = s;
    if( uv.y > (1.0 - percent) ) {
        color = s * (1.0 - m.a) + m.xyz * m.a;
    }

    gl_FragColor = vec4( color, 1.0 );
}
The code should be fairly intuitive: based on the elapsed time it works out how far along the animation is, and from that it decides whether to blend in the mesh's colour or just output the background colour.
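For the three.js side, a hedged sketch of feeding the two render targets into that fragment shader. All object and uniform-holder names are illustrative, uResolution would be set the same way, and the render-target API differs slightly between older and newer three.js releases:

```js
var rtScene = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
var rtMesh  = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);

// 1. the scene without the building, 2. the building only
renderer.setRenderTarget(rtScene);
renderer.render(sceneWithoutBuilding, camera);
renderer.setRenderTarget(rtMesh);
renderer.render(buildingOnlyScene, camera);
renderer.setRenderTarget(null);

// 3. fullscreen quad using the shader above (clock is a THREE.Clock)
quadMaterial.uniforms.texScene.value = rtScene.texture;
quadMaterial.uniforms.texMesh.value  = rtMesh.texture;
quadMaterial.uniforms.time.value     = clock.getElapsedTime();
renderer.render(quadScene, orthoCamera);
```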
Hope it helps.
Alternatively you can draw the building to the screen using gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA), and give your building an alpha gradient (from top to bottom).
The way drawing to any output works is that WebGL evaluates the source information and the destination information (the stuff that has already been drawn to that output) and combines the two, and you can dictate how it does that.
The equation for drawing to an output can be loosely described as:
SOURCE_VALUE * [SOURCE_FACTOR] [BLEND EQUATION] DESTINATION_VALUE * [DESTINATION_FACTOR];
By default this is:
SOURCE_VALUE * 1 + DESTINATION_VALUE * 0;
This equation discards all existing information in the buffer, and draws over it with the new information.
What we want to do is to tell WebGL to keep the existing information where we're not drawing onto the buffer, and take the new information where we are going to draw, so the equation becomes:
SOURCE_VALUE * SRC_ALPHA + DESTINATION_VALUE * ONE_MINUS_SRC_ALPHA;
If your building is 20% transparent in one fragment, then the fragment will be 20% the colour of the building, and 80% of the colour of whatever's behind the building.
This method of drawing semitransparent objects honours the depth buffer.
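In raw WebGL terms, that setup is just the following (in three.js, setting transparent: true on the building's material with the default NormalBlending is roughly equivalent):

```js
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
// draw the building after the rest of the scene so there is something to blend against
```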
I figured out another solution.
I use one texture for the whole building (no repeated pattern).
I laid the UVs out progressively in the vertical direction (faces at the bottom of the building are at the bottom of the texture, and so on) and I animate the texture by filling it with a transparent rectangle (canvas texture).
// x - current step
// steps - number of steps
var canvas = document.getElementById('canvas-texture'),
    ctx = canvas.getContext('2d');

ctx.beginPath();
ctx.drawImage(image, 0, 0);
ctx.globalCompositeOperation = 'destination-out';
ctx.fillRect(0, 0, width, height / steps * x);
ctx.closePath();
I needed it ASAP, so if I find some time at the weekend I'll try your ideas, and if you want I can create a fiddle with my solution.
Anyway, thanks for your help guys.
Say I have this character and I want to allow the user to select it, so that when it's selected I show an outline around it.
The character is an Object3D containing several meshes.
I tried cloning it and setting a back-side material, but it did NOT work: each cube in the shape was rendered with its back side separately, so the outline was wrong.
Do I need to create another mesh for the outline, or is there an easier way?
What #spassvolgel wrote is correct;
What I suspect needs to be done is something like this:
1. First the background needs to be rendered
2. Then, on a separate transparent layer, the character model with a flat color, slightly bigger than the original
3. On another transparent layer the character with its normal material / texture
4. Finally, the character layer needs to go on top of the outline layer and the two combined need to be placed on the bg
You just create multiple scenes and combine them with sequential render passes:
renderer.autoClear = false;
. . .
renderer.render(scene, camera); // the entire scene
renderer.clearDepth();
renderer.render(scene2, camera); // just the selected item, larger, in a flat color
renderer.render(scene3, camera); // the selected item again
three.js r.129
A generic solution that applies to geometries of any complexity might be to apply a fragment shader via the ShaderMaterial class in three.js. Not sure what your experience level is, but if you need it, an introduction to shaders can be found here.
A good example where shaders are used to highlight geometries can be found here. In their vertex shader, they calculate the normal for a vertex and a parameter used to express intensity of a glow effect:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;

void main()
{
    vec3 vNormal = normalize( normalMatrix * normal );
    vec3 vNormel = normalize( normalMatrix * viewVector );
    intensity = pow( c - dot(vNormal, vNormel), p );

    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
These parameters are passed to the fragment shader where they are used to modify the color values of pixels surrounding the geometry:
uniform vec3 glowColor;
varying float intensity;

void main()
{
    vec3 glow = glowColor * intensity;
    gl_FragColor = vec4( glow, 1.0 );
}
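A hedged sketch of wiring those two shaders up with a ShaderMaterial. The c/p values, the additive blending, the 1.1 scale factor, and the script element ids are typical for this glow technique but illustrative rather than mandatory, as is characterGeometry:

```js
var glowMaterial = new THREE.ShaderMaterial({
    uniforms: {
        c: { value: 0.6 },
        p: { value: 4.0 },
        glowColor:  { value: new THREE.Color(0xffff00) },
        // typically updated each frame, e.g. camera.position minus the mesh position
        viewVector: { value: new THREE.Vector3() }
    },
    vertexShader:   document.getElementById('glowVertexShader').textContent,
    fragmentShader: document.getElementById('glowFragmentShader').textContent,
    blending: THREE.AdditiveBlending,
    transparent: true
});

// a slightly larger copy of the character, rendered behind/around the original
var glowMesh = new THREE.Mesh(characterGeometry.clone(), glowMaterial);
glowMesh.scale.multiplyScalar(1.1);
scene.add(glowMesh);
```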
I found something on gamedev.stackexchange.com that could be useful. They talk about using a stencil buffer. I have no idea how to apply this to THREE.js, though:
https://gamedev.stackexchange.com/questions/59361/opengl-get-the-outline-of-multiple-overlapping-objects
You can get good results by rendering your outlined object(s) to a texture that is (ideally) the size of your destination framebuffer, then render a framebuffer-sized quad using that texture and have the fragment shader blur or do other image transforms. I have an example here that uses raw WebGL, but you can make a custom ShaderMaterial without too much trouble.
I haven't found the answer yet but I wanted to demonstrate what happens when I create multiple meshes, and put another mesh behind each of these meshes with
side: THREE.BackSide
http://jsfiddle.net/GwS9c/8/
As you can see, it's not the desired effect. I would like a clean outline behind ALL three meshes that doesn't overlap. My shader-programming experience is basically non-existent, but most online resources say to use this approach of cloning the meshes.