Two layers with different point shapes - javascript

I am plotting WebGL points on a map and at present it works fine. Now I want to add another layer to the map, and I am trying to work out the best way to do this. Because of the way my code is written I am sending the gl draw function one long array with the following format:
[lat, lng, r, g, b, a, id, lat, lng, r, g, b, a, id, etc...] //where id is used for selecting the marker.
The points are drawn using:
this.delegate.gl.drawArrays(this.delegate.gl.POINTS, 0, numPoints);
When adding the extra layer I want one layer to show as circles and the other as squares. My idea was to add another element to the array that codes whether to draw a circle or a square (i.e. 0 or 1), so the array stride would now be eight:
[lat, lng, r, g, b, a, id, code, lat, lng, r, g, b, a, id, code etc...]
The shader code then decides whether to draw a circle or a square. Is this possible? I am unsure how to pass the shape-code attribute to the shader to determine which shape to draw.
Here is the shader code. At present there are two fragment shader programs: one draws circles, one draws squares.
<script id="vshader" type="x-shader/x-vertex">
uniform mat4 u_matrix;
attribute vec4 a_vertex;
attribute float a_pointSize;
attribute vec4 a_color;
varying vec4 v_color;
void main() {
gl_PointSize = a_pointSize;
gl_Position = u_matrix * a_vertex;
v_color = a_color;
}
</script>
<script id="fshader" type="x-shader/x-fragment">
precision mediump float;
varying vec4 v_color;
void main() {
float border = 0.05;
float radius = 0.5;
vec2 m = gl_PointCoord.xy - vec2(0.5, 0.5);
float dist = radius - sqrt(m.x * m.x + m.y * m.y);
float t = 0.0;
if (dist > border)
t = 1.0;
else if (dist > 0.0)
t = dist / border;
gl_FragColor = mix(vec4(0), v_color, t);
}
</script>
<script id="fshader-square" type="x-shader/x-fragment">
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
</script>
My attribute pointers are setup like this:
this.gl.vertexAttribPointer(vertLoc, 2, this.gl.FLOAT, false, fsize*7, 0); //vertex
this.gl.vertexAttribPointer(colorLoc, 4, this.gl.FLOAT, true, fsize*7, fsize*2); //color
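For reference, a sketch of how the pointers might be set up with the stride-8 layout (`shapeLoc` is a hypothetical location for the new shape-code attribute; byte offsets assume 32-bit floats):

```javascript
// Hypothetical sketch for the stride-8 layout [lat, lng, r, g, b, a, id, code].
// `gl`, `vertLoc`, `colorLoc` are as in the question; `shapeLoc` is an
// assumed location for the new shape-code attribute.
const FSIZE = Float32Array.BYTES_PER_ELEMENT; // 4 bytes per float
const STRIDE = FSIZE * 8;
const OFFSET = { vertex: 0, color: FSIZE * 2, id: FSIZE * 6, shape: FSIZE * 7 };

function setupPointers(gl, vertLoc, colorLoc, shapeLoc) {
  gl.vertexAttribPointer(vertLoc, 2, gl.FLOAT, false, STRIDE, OFFSET.vertex); // lat, lng
  gl.vertexAttribPointer(colorLoc, 4, gl.FLOAT, true, STRIDE, OFFSET.color);  // r, g, b, a
  gl.vertexAttribPointer(shapeLoc, 1, gl.FLOAT, false, STRIDE, OFFSET.shape); // 0.0 = circle, 1.0 = square
}
```

The shape code would then be read in the vertex shader as a float attribute and passed to the fragment shader in a varying.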

The most common way to draw points with different shapes is to use a texture; that way your designers can make markers, etc.
It's also common not to draw POINTS but instead to draw quads made from TRIANGLES. Neither Google Maps nor Mapbox uses POINTS (which you can verify yourself).
POINTS have a few issues:
the spec says the largest size you can draw a POINT is implementation dependent and can be just 1 pixel
Whether points immediately disappear when their centers go outside the screen is implementation dependent (that is not part of the spec but it is unfortunately true)
POINTS can only be aligned squares.
If the shape you want to draw is tall and thin, you waste a bunch of texture space and/or overdraw, since you must draw a square large enough to hold the tall thin rectangle you wanted. Similarly, if you want to rotate the image, it's much easier to do with triangles than with points.
As for implementations, that's all up to you. Some random ideas:
Use POINTS, add an imageId per point. Use imageId and gl_PointCoord to choose an image from a texture atlas
assumes all the images are the same size
uniform vec2 textureAtlasSize; // eg 64x32
uniform vec2 imageSize; // eg 16x16
float imagesAcross = floor(textureAtlasSize.x / imageSize.x);
vec2 imageCoord = vec2(mod(imageId, imagesAcross), floor(imageId / imagesAcross));
vec2 uv = (imageCoord + gl_PointCoord) * imageSize / textureAtlasSize;
gl_FragColor = texture2D(textureAtlas, uv);
Note that if you make your imageIds a vec2 instead of a float and just pass in the id as an imageCoord, then you don't need the imageCoord math in the shader.
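To sanity-check the atlas math on the CPU, here is the same lookup written in plain JavaScript (sizes in pixels, matching the example values in the comments above; `pointCoord` plays the role of gl_PointCoord):

```javascript
// Same atlas lookup as the shader snippet above, in plain JavaScript.
// atlasSize and imageSize are [width, height] in pixels.
function atlasUV(imageId, atlasSize, imageSize, pointCoord) {
  const imagesAcross = Math.floor(atlasSize[0] / imageSize[0]);
  const imageCoord = [imageId % imagesAcross, Math.floor(imageId / imagesAcross)];
  return [
    (imageCoord[0] + pointCoord[0]) * imageSize[0] / atlasSize[0],
    (imageCoord[1] + pointCoord[1]) * imageSize[1] / atlasSize[1],
  ];
}
```

With a 64x32 atlas of 16x16 images there are 4 images across, so imageId 5 lands at image column 1, row 1, whose top-left corner is uv (0.25, 0.5).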
Use POINTS, a texture atlas, and vec2 offset, vec2 range for each point
now the images don't need to be the same size but you need to set offset and range appropriately for each point
gl_FragColor = texture2D(textureAtlas, offset + range * gl_PointCoord);
Use TRIANGLES and instanced drawing
This is really no different from the above, except you create a single 2-triangle quad and use drawArraysInstanced or drawElementsInstanced. You need to replace references to gl_PointCoord with your own texture coordinates, and you need to compute the points in the vertex shader.
attribute vec2 reusedPosition; // the 6 corner points of the quad (±1, ±1)
... all the attributes you had before ...
uniform vec2 outputResolution; // gl.canvas.width, gl.canvas.height
varying vec2 ourPointCoord;
void main() {
... -- insert code that you had before above this line -- ...
// now take gl_Position and convert to point
float ourPointSize = ???
gl_Position.xy += reusedPosition * ourPointSize / outputResolution * gl_Position.w;
ourPointCoord = reusedPosition * 0.5 + 0.5;
}
Use TRIANGLES with merged geometry.
This just means instead of one vertex per point you need 4 (if indexed) or 6.
Use TRIANGLES with only an id, put data in textures.
If updating 4 to 6 vertices to move a point is too much work (hint: it's probably not), you can put your data in a texture and look up each point's data by an id. So you put 4 point ids plus a vertex id per point in a buffer (i.e. point ids 0,0,0,0, 1,1,1,1, 2,2,2,2, 3,3,3,3, 4,4,4,4 and vertex ids 0,1,2,3, 0,1,2,3, 0,1,2,3, ...). You can then use those to compute quad coordinates, texture coordinates, and the uvs to look up per-point data in the texture. Advantage: you only have to update one value per point instead of 4 to 6 values per point when you want to move a point.
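A sketch of generating those id buffers in JavaScript (4 vertices per point, assuming indexed quads; `makeIdBuffers` is a made-up helper name):

```javascript
// Build the per-vertex point ids and vertex ids described above:
// pointIds:  0,0,0,0, 1,1,1,1, 2,2,2,2, ...
// vertexIds: 0,1,2,3, 0,1,2,3, 0,1,2,3, ...
function makeIdBuffers(numPoints) {
  const pointIds = new Float32Array(numPoints * 4);
  const vertexIds = new Float32Array(numPoints * 4);
  for (let i = 0; i < numPoints; i++) {
    for (let v = 0; v < 4; v++) {
      pointIds[i * 4 + v] = i;   // which point this vertex belongs to
      vertexIds[i * 4 + v] = v;  // which corner of the quad it is
    }
  }
  return { pointIds, vertexIds };
}
```

Both arrays would be uploaded once; afterwards only the data texture changes per frame.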
Note: all of the above assumes you want to draw thousands of points in a single draw call. If you're drawing 250 or fewer points, maybe even 1000-2000 points, drawing them one point per draw call the normal way may be just fine, e.g.:
for each point
setup uniforms
gl.drawXXX
Not points, but just as an example: the WebGL Aquarium uses that loop. It is not using instancing or merged geometry in any way. Here's another example just drawing 1 quad per draw call.

Related

How to convert coordinate from NDC to camera space in vertex shader?

I'm maintaining a vertex shader encapsulated in a custom material class (originally inherited from ShaderMaterial, now from MeshStandardMaterial) which converts a 3D coordinate to NDC as usual:
vec4 ndcPos = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
ndcPos /= ndcPos.w;
//...other transformations to ndcPos
then a set of transformations is applied to ndcPos. As you can see, they are applied in NDC space. I need to take the resulting coordinate back to camera (eye) space, so I guess we need to invert the steps, something like this:
vec4 mvPos = ndcPos * ndcPos.w;
mvPos *= inverse of projectionMatrix;
Expected result: mvPos has only the modelView transformation applied.
Questions:
Is that correct?
How do I compute the inverse of projectionMatrix? It would be easy and low-cost to pass camera.projectionMatrixInverse as a uniform to the vertex shader, but I didn't find a way to do so in three.js: neither the ancestor material classes nor onBeforeCompile can access the camera.
The inverse operation to compute the view space position is
vec4 p = inverse(projectionMatrix) * ndc;
vec4 mvPos = p / p.w;
and the inverse operation to compute the object position is
vec4 p = inverse(projectionMatrix * modelViewMatrix) * ndc;
vec4 pos = p / p.w;
(Note that in the above code pos corresponds to the attribute position and mvPos corresponds to position * modelViewMatrix.)
Note: If you transform a Cartesian coordinate with a perspective projection matrix (or inverse perspective projection matrix), the result is a Homogeneous coordinate. To convert a Homogeneous coordinate into a Cartesian coordinate, you must perform a Perspective Divide (after the Perspective Divide, the component w is 1).
It should be mentioned that the inverse function has only existed since GLSL ES 3.00 (WebGL 2.0) and is not available in GLSL ES 1.00 (WebGL 1.0). So it may be necessary to calculate the inverse matrix in JavaScript and pass it to the shader as a uniform variable.
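If you do need to stay on WebGL 1.0, here is a minimal plain-JavaScript sketch of that idea (no three.js; `perspective` and `perspectiveInverse` are hypothetical helpers for a standard OpenGL-style perspective matrix, stored column-major as WebGL and three.js do). The inverse here is the analytic inverse of this particular matrix, i.e. the value you would upload as a `projectionMatrixInverse` uniform:

```javascript
// Column-major 4x4 times vec4 (WebGL convention: m[col * 4 + row]).
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// Standard OpenGL-style perspective projection matrix.
function perspective(fovy, aspect, near, far) {
  const f = 1 / Math.tan(fovy / 2);
  const A = (far + near) / (near - far);
  const B = (2 * far * near) / (near - far);
  return [
    f / aspect, 0, 0,  0,
    0,          f, 0,  0,
    0,          0, A, -1,
    0,          0, B,  0,
  ];
}

// Analytic inverse of the matrix above: what you'd pass to the
// shader as a `projectionMatrixInverse` uniform on WebGL 1.0.
function perspectiveInverse(fovy, aspect, near, far) {
  const f = 1 / Math.tan(fovy / 2);
  const A = (far + near) / (near - far);
  const B = (2 * far * near) / (near - far);
  return [
    aspect / f, 0,     0,  0,
    0,          1 / f, 0,  0,
    0,          0,     0,  1 / B,
    0,          0,    -1,  A / B,
  ];
}

// Round trip: view space -> clip -> NDC -> back to view space.
const P = perspective(Math.PI / 2, 1, 1, 100);
const Pinv = perspectiveInverse(Math.PI / 2, 1, 1, 100);
const viewPos = [1, 2, -5, 1];
const clip = transform(P, viewPos);
const ndc = clip.map((c) => c / clip[3]); // perspective divide
const p = transform(Pinv, ndc);
const mvPos = p.map((c) => c / p[3]);     // perspective divide again
// mvPos is now (approximately) viewPos again
```

This mirrors the two shader snippets above: the forward path is `projectionMatrix * mvPos` plus a divide, the reverse path is `inverse(projectionMatrix) * ndc` plus a divide.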

Converting and using Shadertoy's iResolution variable in a Three.js script

I'm converting a Shadertoy to a local Three.js project, and can't get it to render. You can try out the full snippet here.
I think the problem may lie in how I'm converting the iResolution variable. As I understand it, the built-in Shadertoy global variable iResolution contains the pixel dimensions of the window. Here is how iResolution is used in the original Shadertoy:
vec2 uv = fragCoord.xy / iResolution.y;
vec2 ak = abs(fragCoord.xy / iResolution.xy-0.5);
In converting this Shadertoy into a local Three.js-based script I have tried two approaches to converting iResolution:
1) Loading the window dimensions as a Vector2 and sending them into the shader as the uniform vec2 uResolution:
vec2 uv = gl_FragCoord.xy / uResolution.y;
vec2 ak = abs(gl_FragCoord.xy / uResolution.xy-0.5);
This solution sticks closest to the design of the original Shadertoy, but alas nothing renders.
2) The second approach comes from this SO Answer and converts the uv coordinates to xy absolute coordinates:
vec2 uvCustom = -1.0 + 2.0 *vUv;
vec2 ak = abs(gl_FragCoord.xy / uvCustom.xy-0.5);
In this one, I admit I don't fully understand how it works, and my use of the uvCustom in the second line may not be correct.
In the end, nothing renders onscreen except a Three.js CameraHelper I'm using. Otherwise, the screen is black and the console shows no errors for the Javascript or WebGL. Thanks for taking a look!
For starters, you don't even need to do this division. If you are using a full-screen quad (PlaneBufferGeometry), you can render it with just the UVs:
vec2 uv = gl_FragCoord.xy / uResolution.y;
vec2 vUv = varyingUV;
uv == vUv; //sort of
Your vertex shader can look something like this:
varying vec2 varyingUV;
void main(){
varyingUV = uv;
gl_Position = vec4( position.xy , 0. , 1.);
}
If you make a new THREE.PlaneGeometry(2, 2, 1, 1), this should render as a full-screen quad.

How can I add a uniform width outline to WebGL shader drawn circles/ellipses (drawn using edge/distance antialiasing)

I am drawing circles/ellipses in WebGL using a single Quad and a fragment shader, in order to draw them in a resolution independent manner (Edge distance anti-aliasing)
Here is my fragment shader currently:
'#extension GL_OES_standard_derivatives : enable',
'precision mediump float;',
'varying vec2 coord;',
'vec4 circleColor = vec4(1.0, 0.5, 0.0, 1.0);',
'vec4 outlineColor = vec4(0.0, 0.0, 0.0, 1.0);',
'uniform float strokeWidth;',
'float outerEdgeCenter = 0.5 - strokeWidth;',
'void main(void){',
'float dx = 0.5 - coord.x;',
'float dy = 0.5 - coord.y;',
'float distance = sqrt(dx*dx + dy*dy);',
'float delta = fwidth(distance);',
'float alpha = 1.0 - smoothstep(0.45 - delta, 0.45, distance);',
'float stroke = 1.0 - smoothstep(outerEdgeCenter - delta, outerEdgeCenter + delta, distance);',
'gl_FragColor = vec4( mix(outlineColor.rgb, circleColor.rgb, stroke), alpha );',
'}'
This creates an orange circle with a black outline that is perfectly antialiased whatever the size.
However, as soon as I transform the quad (scale it) in order to turn the circle into an ellipse, the distance calculation transforms along with it, causing the outline to also scale. My understanding is that I would somehow need to account for the quad's transform by inverting it.
What I would like is for the distance to remain uniform even when the quad is transformed, in effect producing a constant width outline around the whole circle/ellipse.
Any Help would be greatly appreciated.
The problem is that your fragment shader (FS) is kind of a black box right now, because it lacks the necessary information.
Your FS is written to work in square space, so it always renders the circle in a space where x and y have the same extent (the -1.0 to 1.0 interval).
Meanwhile the quad is transformed outside the FS (in the VS or anywhere else), and there is currently no way to reflect that transformation in the FS.
To solve the problem, I suggest pushing additional information about the scaling into the FS. Something like what Shadertoy provides in its shader inputs:
uniform vec3 iResolution; // viewport resolution (in pixels)
except this won't be the resolution of the screen; it will be information about the quad's transformation, so something like:
varying vec2 trans;
// where value (1.0, 1.0) mean no transformation
Then you can use this value to calculate a different stroke. Instead of inverting the transformation for the current stroke, I would rather calculate unique dx and dy values for it.
There are more ways to achieve a working solution, and it depends on how you want to use it later (what kinds of transformations should be possible, etc.). So I present only the basic, easiest solution.
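To make the idea concrete, here is a rough plain-JavaScript sketch of the corrected distance (an assumption about what "unique dx and dy values" could look like, with `trans` holding the quad's x/y scale as suggested above; in the real shader this math would live in the FS):

```javascript
// Distance from the center with the offsets scaled by the quad's
// transform, so a stroke width expressed in screen units stays uniform.
// trans of [1, 1] means no transformation, matching the varying above.
function scaledDistance(coord, trans) {
  const dx = (0.5 - coord[0]) * trans[0];
  const dy = (0.5 - coord[1]) * trans[1];
  return Math.sqrt(dx * dx + dy * dy);
}
```

With trans [2, 1] (quad stretched 2x horizontally), a point a quarter of the way in from the left edge is as far from the center, in screen units, as a point on the edge of the unstretched quad.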

Projecting FBO value to screen-space to read from depth texture

EDIT: Updated the JSFiddle link as it wasn't rendering correctly in Chrome on Windows 7.
Context
I'm playing around with particles in THREE.JS and using a frame buffer / render target (double buffered) to write positions to a texture. This texture is affected by its own ShaderMaterial, and then read by the PointCloud's ShaderMaterial to position the particles. All well and good so far; everything works as expected.
What I'm trying to do now is use my scene's depth texture to see if any of the particles are intersecting my scene's geometry.
The first thing I did was to reference my depth texture in the PointCloud's fragment shader, using gl_FragCoord.xy / screenResolution.xy to generate my uv for the depth texture lookup.
There's a JSFiddle of this here. It's working well - when a particle is behind something in the scene, I tell the particle to be rendered red, not white.
My issue arises when I try to do the same depth comparison in the position texture shader. In the draw fragment shader, I can use the value of gl_FragCoord to get the particle's position in screen space and use that for the depth uv lookup, since in the draw vertex shader I use the modelViewMatrix and projectionMatrix to set the value of gl_Position.
I've tried doing this in the position fragment shader, but to no avail. By the way, what I'm aiming to do with this is particle collision with the scene on the GPU.
So... the question (finally!):
Given a texture where each pixel/texel is a world-space 3d vector representing a particle's position, how can I project this vector to screen-space in the fragment shader, with the end goal of using the .xy properties of this vector as a uv lookup in the depth texture?
What I've tried
In the position texture shader, using the same transformations as the draw shader to transform a particle's position to (what I think is) screen-space using the model-view and projection matrices:
// Position texture's fragment shader:
void main() {
vec2 uv = gl_FragCoord.xy / textureResolution.xy;
vec4 particlePosition = texture2D( tPosition, uv );
vec4 screenspacePosition = projectionMatrix * modelViewMatrix * vec4( particlePosition.xyz, 1.0 );
vec2 depthUV = vec2( screenspacePosition.xy / screenResolution.xy );
float depth = texture2D( tDepth, depthUV ).x;
if( depth < screenspacePosition.z ) {
// Particle is behind something in the scene,
// so do something...
}
gl_FragColor = vec4( particlePosition.xyz, 1.0 );
}
Variations on a theme of the above:
Offsetting the depth's uv by doing 0.5 - depthUV
Using the tPosition texture resolution instead of the screen resolution to scale the depthUV.
Another depth uv variation: doing depthUV = (depthUV - 1.0) * 2.0;. This helps a little, but the scale is completely off.
Help! And thanks in advance.
After a lot of experimentation and research, I narrowed the issue down to the values of modelViewMatrix and projectionMatrix that THREE.js automatically assigns when one creates an instance of THREE.ShaderMaterial.
What I wanted to do worked absolutely fine in my 'draw' shaders, where the modelViewMatrix for those shaders was set (by THREE.js) to:
new THREE.Matrix4().multiplyMatrices( camera.matrixWorldInverse, object.matrixWorld)
It appears that when one creates a ShaderMaterial to render values to a texture (and thus not attached to an object in the scene/world), the object.matrixWorld is essentially an identity matrix. What I needed to do was to make my position texture shaders have the same modelViewMatrix value as my draw shaders (which were attached to an object in the scene/world).
Once that was in place, the only other thing to do was make sure I was transforming a particle's position to screen-space correctly. I wrote some helper functions in GLSL to do this:
// Transform a worldspace coordinate to a clipspace coordinate
// Note that `mvpMatrix` is: `projectionMatrix * modelViewMatrix`
vec4 worldToClip( vec3 v, mat4 mvpMatrix ) {
return ( mvpMatrix * vec4( v, 1.0 ) );
}
// Transform a clipspace coordinate to a screenspace one.
vec3 clipToScreen( vec4 v ) {
return ( vec3( v.xyz ) / ( v.w * 2.0 ) );
}
// Transform a screenspace coordinate to a 2d vector for
// use as a texture UV lookup.
vec2 screenToUV( vec2 v ) {
return v.xy + 0.5; // equivalent to 0.5 - v.xy * -1.0
}
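Note that the three helpers compose to the usual clip-space-to-UV mapping: perspective divide, then remap NDC [-1, 1] to [0, 1]. In plain JavaScript the combined transform for the xy components is:

```javascript
// Equivalent of worldToClip -> clipToScreen -> screenToUV for the xy
// components: perspective divide, then remap NDC [-1, 1] to UV [0, 1].
function clipToDepthUV(clip) {
  const ndcX = clip[0] / clip[3]; // perspective divide
  const ndcY = clip[1] / clip[3];
  return [ndcX * 0.5 + 0.5, ndcY * 0.5 + 0.5];
}
```

(The `/ (v.w * 2.0)` in clipToScreen and the `+ 0.5` in screenToUV together are just the `* 0.5 + 0.5` remap.)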
I've made a JSFiddle to show this in action, here. I've commented it (probably too much) so hopefully it explains what is going on well enough for people that aren't familiar with this kind of stuff to understand.
Quick note about the fiddle: it doesn't look all that impressive, as all I'm doing is emulating what depthTest: true would do were that property set on the PointCloud, albeit in this example I'm setting the y position of particles that have collided with scene geometry to 70.0, so that's what the white band is near the top of the rendering screen. Eventually, I'll do this calculation in a velocity texture shader, so I can do proper collision response.
Hope this helps someone :)
EDIT: Here's a version of this implemented with a (possibly buggy) collision response.

Create animation of transparency, revealing the Mesh in threejs

I need to create linear animation (something like slideUp in 2d jquery object) with revealing a really complex mesh (building 3d model) - form the bottom to the top.
I was looking for opacity channel / opacity map or something like that and now I know that is not possible.
Using sprites of textures and changing offset is not the best idea because my UVs map is too complicated.
Is there any way to create that effect in THREE.JS?
Render the entire scene into a first framebuffer (texture).
Render only the mesh into a second framebuffer (texture).
Render a full-screen rectangle that uses the two previously mentioned textures, with some version of the code below:
uniform sampler2D texScene;
uniform sampler2D texMesh;
uniform vec2 uResolution;
uniform float time;
uniform float endAnim; // time at which the animation ends (assuming it starts at time = 0.0)
void main() {
vec2 uv = gl_FragCoord.xy / uResolution;
vec3 s = texture2D( texScene, uv ).xyz;
vec4 m = texture2D( texMesh, uv );
// slide up effect
float percent = clamp( time, 0.0, endAnim ) / endAnim; // fraction of the animation completed
vec3 color = s;
if( uv.y > (1.0 - percent) ) {
color = s * (1.0 - m.a) + m.xyz * m.a;
}
gl_FragColor = vec4( color, 1.0 );
}
The code should be fairly intuitive: depending on the elapsed time, it computes what percentage of the animation has played and, based on that, decides whether to blend in the mesh's color or just output the background color.
Hope it helps.
Alternatively you can draw the building to the screen using gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA), and give your building an alpha gradient (from top to bottom).
The way drawing to any output works is that WebGL evaluates the source information and the destination information (the stuff that has already been drawn to that output) and combines the two; you can dictate how it does that.
The equation for drawing to an output can be loosely described as:
SOURCE_VALUE * [SOURCE_FACTOR] [BLEND EQUATION] DESTINATION_VALUE * [DESTINATION_FACTOR];
By default this is:
SOURCE_VALUE * 1 + DESTINATION_VALUE * 0;
This equation discards all existing information in the buffer, and draws over it with the new information.
What we want to do is to tell WebGL to keep the existing information where we're not drawing onto the buffer, and take the new information where we are going to draw, so the equation becomes:
SOURCE_VALUE * SRC_ALPHA + DESTINATION_VALUE * ONE_MINUS_SRC_ALPHA;
If your building's alpha is 0.2 in one fragment, then the fragment will be 20% the colour of the building and 80% the colour of whatever's behind the building.
This method of drawing semitransparent objects honours the depth buffer.
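Evaluated per channel, the blend function above is just a linear interpolation. As a plain-JavaScript illustration:

```javascript
// SOURCE * SRC_ALPHA + DESTINATION * ONE_MINUS_SRC_ALPHA for one channel,
// i.e. what gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA) computes.
function blendOver(src, dst, srcAlpha) {
  return src * srcAlpha + dst * (1 - srcAlpha);
}
```

A fragment with alpha 0.2 contributes 20% of its own colour and lets 80% of the destination colour through, which is exactly the gradient reveal described above.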
I figured out another solution.
I use one texture for the whole building (no repeated pattern).
I lay out the UVs progressively vertically (faces at the bottom of the building at the bottom of the texture, etc.) and animate the texture by filling it with a transparent rectangle (canvas texture).
// image  - the building texture image
// width, height - dimensions of the texture
// x      - current step
// steps  - number of steps
var canvas = document.getElementById('canvas-texture'),
    ctx = canvas.getContext('2d');
ctx.drawImage(image, 0, 0);
// erase the top portion of the texture to make it transparent
ctx.globalCompositeOperation = 'destination-out';
ctx.fillRect(0, 0, width, height / steps * x);
ctx.globalCompositeOperation = 'source-over'; // reset for later draws
I needed it ASAP, so if I find some time at the weekend I'll try your ideas, and if you want I can create a fiddle with my solution.
Anyway, thanks for your help guys.
