I'm converting a Shadertoy shader to a local Three.js project and can't get it to render. You can try out the full snippet here.
I think the problem may lie in how I'm converting the iResolution variable. As I understand it, the built-in Shadertoy global variable iResolution contains the pixel dimensions of the window. Here is how iResolution is used in the original Shadertoy:
vec2 uv = fragCoord.xy / iResolution.y;
vec2 ak = abs(fragCoord.xy / iResolution.xy-0.5);
In porting this Shadertoy to a local Three.js-based script, I have tried two approaches to replacing iResolution:
1) Loading the window dimensions as a Vector2 and sending them into the shader as the uniform vec2 uResolution:
vec2 uv = gl_FragCoord.xy / uResolution.y;
vec2 ak = abs(gl_FragCoord.xy / uResolution.xy-0.5);
This solution sticks closest to the design of the original Shadertoy, but alas nothing renders.
2) The second approach comes from this SO Answer and converts the uv coordinates to xy absolute coordinates:
vec2 uvCustom = -1.0 + 2.0 *vUv;
vec2 ak = abs(gl_FragCoord.xy / uvCustom.xy-0.5);
In this one, I admit I don't fully understand how it works, and my use of uvCustom in the second line may not be correct.
In the end, nothing renders onscreen except a Three.js CameraHelper I'm using. Otherwise, the screen is black, and the console shows no errors for the JavaScript or WebGL. Thanks for taking a look!
For starters, you don't even need to do this division. If you are using a full-screen quad (PlaneBufferGeometry), you can render it with just the uvs:
vec2 uv = gl_FragCoord.xy / uResolution.xy;
vec2 vUv = varyingUV;
uv == vUv; // sort of
Your vertex shader can look something like this:
varying vec2 varyingUV;

void main(){
    varyingUV = uv;
    gl_Position = vec4( position.xy, 0., 1. );
}
If you make a new THREE.PlaneGeometry(2,2,1,1), this should render as a full-screen quad.
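In other words, the pass-through vertex shader works because the 2x2 plane's corner positions already span clip space, and the uvs follow from those positions as uv = position * 0.5 + 0.5. A quick plain-JavaScript sanity check of that relationship (no Three.js required):

```javascript
// PlaneGeometry(2,2) has xy corner positions spanning [-1, 1], i.e. they are
// already clip-space coordinates, so no projection/modelView multiply is needed.
// The quad's uvs relate to those positions by uv = position * 0.5 + 0.5.
const corners = [[-1, 1], [1, 1], [-1, -1], [1, -1]]; // quad corners, clip space
const uvs = corners.map(([x, y]) => [x * 0.5 + 0.5, y * 0.5 + 0.5]);
// e.g. the top-left corner (-1, 1) gets uv (0, 1),
// and the bottom-right corner (1, -1) gets uv (1, 0)
```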
Related
Using ThreeJS I've made a setup that displays a model with a complex material. I'm using an alpha texture to make semi-transparent surfaces, like glass. However, when shadows are cast, the shadow is removed completely instead of using the alpha texture as a gradient.
Basically I want a similar effect like this BabylonJS example: https://playground.babylonjs.com/#P1RZV0
When casting shadows I'm using the MeshDepthMaterial in order to let the light cast shadows through transparent surfaces, like this:
var customDepthMaterial = new THREE.MeshDepthMaterial( {
    depthPacking: THREE.RGBADepthPacking,
    alphaMap: alphaTex,
    alphaTest: 1,
    skinning: true
} )
This works fine; however, as mentioned, shadows should not disappear completely but rather use the alpha texture as an indicator of how strong the shadow should be. The more opaque a surface is, the stronger the shadow should be, and vice versa.
You can take a look at the project here: https://jsfiddle.net/miger/2c7no6mj/
Apparently BabylonJS makes it as simple as setting the parameter .transparencyShadow = true;, and I've not found anything similar in ThreeJS. It seems that an official solution has not been implemented in ThreeJS yet. The only thing I found was this discussion about it: https://github.com/mrdoob/three.js/issues/10600
I've already implemented some GLSL code to get selective bloom working. I'm guessing that to implement the shadow effect I'm looking for, I need to do something in the shaders, but I don't know what.
These are the shaders I'm using so far:
<script type="x-shader/x-vertex" id="vertexshader">
    varying vec2 vUv;

    void main() {
        vUv = uv;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
    uniform sampler2D baseTexture;
    uniform sampler2D bloomTexture;
    varying vec2 vUv;

    void main() {
        gl_FragColor = ( texture2D( baseTexture, vUv ) + vec4( 1.0 ) * texture2D( bloomTexture, vUv ) );
    }
</script>
And this is how the selective bloom is implemented using .onBeforeCompile:
child.material.onBeforeCompile = shader => {
    shader.uniforms.globalBloom = uniforms.globalBloom;
    shader.fragmentShader = `
        uniform float globalBloom;
        ${shader.fragmentShader}
    `.replace(
        `#include <dithering_fragment>`,
        `#include <dithering_fragment>
        if (globalBloom > 0.5){
            gl_FragColor = texture2D( emissiveMap, vUv );
        }
        `
    );
}
I'm really bad with shaders; there have been some solutions in the above-mentioned discussion, but I have way too little experience with shaders and GLSL to know what I need to do.
How can I implement soft/gradient shadow casting using an alpha texture as strength indicator?
I am plotting WebGL points on a map, and at present it works fine. Now I want to add another layer to the map, and I am trying to work out the best way to do this. Because of the way my code is written, I am sending the gl draw function one long array with the following format:
[lat, lng, r, g, b, a, id, lat, lng, r, g, b, a, id, etc...] //where id is used for selecting the marker.
The points are drawn using:
this.delegate.gl.drawArrays(this.delegate.gl.POINTS, 0, numPoints);
When adding the extra layer, I want one layer to show as circles and the other as squares. My idea was to add another element to the array that encodes whether to draw a circle or a square (i.e. 0 or 1), so the array stride would then be eight:
[lat, lng, r, g, b, a, id, code, lat, lng, r, g, b, a, id, code etc...]
The shader code then decides whether to draw a circle or a square. Is this possible? I am unsure how to pass the shape code attribute to the shader to determine which shape to draw.
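For what it's worth, building such a stride-8 interleaved array could look something like this (a sketch; buildPointArray and the field names are mine, not existing code):

```javascript
// Sketch: pack marker records into one interleaved Float32Array with stride 8:
// [lat, lng, r, g, b, a, id, code, ...]  (code: 0 = circle, 1 = square)
function buildPointArray(points) {
  const STRIDE = 8;
  const data = new Float32Array(points.length * STRIDE);
  points.forEach((p, i) => {
    data.set([p.lat, p.lng, p.r, p.g, p.b, p.a, p.id, p.code], i * STRIDE);
  });
  return data;
}
```

The per-point `code` then becomes just another float attribute, set up with the same stride as the others.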
Here is the shader code; at present there are two fragment shader programs. One draws circles, one draws squares.
<script id="vshader" type="x-shader/x-vertex">
    uniform mat4 u_matrix;
    attribute vec4 a_vertex;
    attribute float a_pointSize;
    attribute vec4 a_color;
    varying vec4 v_color;

    void main() {
        gl_PointSize = a_pointSize;
        gl_Position = u_matrix * a_vertex;
        v_color = a_color;
    }
</script>
<script id="fshader" type="x-shader/x-fragment">
    precision mediump float;
    varying vec4 v_color;

    void main() {
        float border = 0.05;
        float radius = 0.5;
        vec2 m = gl_PointCoord.xy - vec2(0.5, 0.5);
        float dist = radius - sqrt(m.x * m.x + m.y * m.y);

        float t = 0.0;
        if (dist > border)
            t = 1.0;
        else if (dist > 0.0)
            t = dist / border;

        gl_FragColor = mix(vec4(0), v_color, t);
    }
</script>
<script id="fshader-square" type="x-shader/x-fragment">
    precision mediump float;
    varying vec4 v_color;

    void main() {
        gl_FragColor = v_color;
    }
</script>
My attribute pointers are setup like this:
this.gl.vertexAttribPointer(vertLoc, 2, this.gl.FLOAT, false, fsize*7, 0); //vertex
this.gl.vertexAttribPointer(colorLoc, 4, this.gl.FLOAT, true, fsize*7, fsize*2); //color
The most common way to draw points with different shapes is to use a texture; that way your designers can make markers, etc.
It's also common not to draw POINTS but instead to draw quads made from TRIANGLES. Neither Google Maps nor Mapbox uses POINTS (which you can verify yourself).
POINTS have a few issues:
The spec says the largest size you can draw a POINT is implementation dependent and can be just 1 pixel.
Whether points immediately disappear when their centers go outside the screen is also implementation dependent (that is not part of the spec, but it is unfortunately true).
POINTS can only be axis-aligned squares.
If the shape you want to draw is tall and thin, you need to waste a bunch of texture space and/or overdraw, drawing a square large enough to hold the tall thin rectangle you wanted. Similarly, if you want to rotate the image, it's much easier to do with triangles than points.
As for implementations, that's all up to you. Some random ideas:
Use POINTS, add an imageId per point. Use imageId and gl_PointCoord to choose an image from a texture atlas
assumes all the images are the same size
uniform vec2 textureAtlasSize;  // eg 64x32
uniform vec2 imageSize;         // eg 16x16

float imagesAcross = floor(textureAtlasSize.x / imageSize.x);
vec2 imageCoord = vec2(mod(imageId, imagesAcross), floor(imageId / imagesAcross));
vec2 uv = (imageCoord + gl_PointCoord) * imageSize / textureAtlasSize;
gl_FragColor = texture2D(textureAtlas, uv);
Note that if you make your imageIds a vec2 instead of a float and pass the id in as an imageCoord directly, you don't need the imageCoord math in the shader.
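The intended atlas lookup can be sanity-checked on the CPU. Here's a JavaScript mirror of that math (atlasUV is a made-up helper; it assumes same-size images):

```javascript
// CPU mirror of the shader's atlas lookup: given an imageId and a
// point-local coordinate in [0, 1] (gl_PointCoord in the shader),
// compute the uv into the atlas texture.
function atlasUV(imageId, pointCoord, atlasSize, imageSize) {
  const imagesAcross = Math.floor(atlasSize[0] / imageSize[0]);
  const imageCoord = [imageId % imagesAcross, Math.floor(imageId / imagesAcross)];
  return [
    (imageCoord[0] + pointCoord[0]) * imageSize[0] / atlasSize[0],
    (imageCoord[1] + pointCoord[1]) * imageSize[1] / atlasSize[1],
  ];
}
// e.g. a 64x32 atlas of 16x16 images has 4 images per row,
// so imageId 5 is the cell at column 1, row 1
```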
Use POINTS, a texture atlas, and vec2 offset, vec2 range for each point
Now the images don't need to be the same size, but you need to set offset and range appropriately for each point:
gl_FragColor = texture2D(textureAtlas, offset + range * gl_PointCoord);
Use TRIANGLES and instanced drawing
This is really no different from the above except you create a single 2-triangle quad and use drawArraysInstanced or drawElementsInstanced. You need to replace references to gl_PointCoord with your own texture coordinates, and you need to compute the quad corners in the vertex shader:
attribute vec2 reusedPosition;  // the 6 corner positions, each component ±1
... all the attributes you had before ...
uniform vec2 outputResolution;  // gl.canvas.width, gl.canvas.height
varying vec2 ourPointCoord;

void main() {
    ... -- insert the code you had before above this line -- ...

    // now take gl_Position and expand it into a point-sized quad
    float ourPointSize = ???;
    gl_Position.xy += reusedPosition * ourPointSize / outputResolution * gl_Position.w;
    ourPointCoord = reusedPosition * 0.5 + 0.5;
}
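To convince yourself of the clip-space math, here is the same corner expansion as a plain JavaScript function (the names are mine). Multiplying by gl_Position.w cancels the later perspective divide, so the quad stays a fixed pixel size at any depth:

```javascript
// Expand a projected point (clip space, before the perspective divide) into
// one corner of a screen-aligned quad. corner is (±1, ±1); pointSize and
// resolution are in pixels.
function quadCorner(clipPos, corner, pointSize, resolution) {
  const [x, y, z, w] = clipPos;
  return [
    x + corner[0] * pointSize / resolution[0] * w,
    y + corner[1] * pointSize / resolution[1] * w,
    z,
    w,
  ];
}
// a 10px point on a 100x100 canvas at the center (w = 1) moves each corner
// 0.1 clip units; after the divide that's 0.1 * 50px = 5px per side, i.e.
// a 10px-wide quad, matching the intended point size
```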
Use TRIANGLES with merged geometry.
This just means instead of one vertex per point you need 4 (if indexed) or 6.
Use TRIANGLES with only an id, put data in textures.
If updating 4 to 6 vertices to move a point is too much work (hint: it's probably not), you can put your data in a texture and look up the data for each point based on an id. So you put 4 ids per point plus a vertex id in a buffer (ie, ids 0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4 and vertex ids 0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3). You can then use those to compute quad coordinates, texture coordinates, and uvs to look up per-point data in a texture. Advantage: you only have to update one value per point instead of 4 to 6 values per point when you want to move a point.
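Generating those id buffers is mechanical; here's a sketch (makeQuadIds is a hypothetical name, assuming 4 vertices per point with indexed quads):

```javascript
// Per point: 4 vertices, each carrying the point's id plus a corner id 0..3.
// The vertex shader uses the point id to fetch per-point data from a texture
// and the corner id to pick which quad corner this vertex is.
function makeQuadIds(numPoints) {
  const pointIds = new Float32Array(numPoints * 4);
  const cornerIds = new Float32Array(numPoints * 4);
  for (let i = 0; i < numPoints; i++) {
    for (let c = 0; c < 4; c++) {
      pointIds[i * 4 + c] = i;   // ids: 0,0,0,0,1,1,1,1,...
      cornerIds[i * 4 + c] = c;  // vertex ids: 0,1,2,3,0,1,2,3,...
    }
  }
  return { pointIds, cornerIds };
}
```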
Note: all of the above assumes you want to draw 1000s of points in a single draw call. If you're drawing 250 or fewer points, maybe even 1000-2000 points, drawing them one point per draw call the normal way may be just fine, e.g.
for each point
setup uniforms
gl.drawXXX
It's not drawing points, but just as an example, the WebGL Aquarium uses that loop. It does not use instancing or merged geometry in any way. Here's another example that just draws 1 quad per draw call.
Given a WebGL scene (created from THREE.js), how would you go about accessing the floating-point values (as an array of data outside of the WebGL context) from the DEPTH_ATTACHMENT, given the framebuffer has been bound to a texture using framebufferTexture2D?
I've gathered one solution thus far, which is to render the scene to a texture target using a custom shader override that accesses the depth texture information and then encodes it to RGB format. The code used is very similar to this THREE.js example: Depth-Texture-Example.
#include <packing>

varying vec2 vUv;
uniform sampler2D tDiffuse;
uniform sampler2D tDepth;
uniform float cameraNear;
uniform float cameraFar;

float readDepth (sampler2D depthSampler, vec2 coord) {
    float fragCoordZ = texture2D(depthSampler, coord).x;
    float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
    return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
}

void main() {
    vec3 diffuse = texture2D(tDiffuse, vUv).rgb;
    float depth = readDepth(tDepth, vUv);
    gl_FragColor.rgb = vec3(depth);
    gl_FragColor.a = 1.0;
}
Once this has rendered, I can then use readPixels to read the specific pixels into an array. However, this option has incredibly low precision, restricted to 256 discrete values, since vec3(depth) writes the same 8-bit value into all three channels. Is there a way to get higher precision out of this specific method, or is there an alternative?
Ultimately what I want is access to the depth buffer as an array of floating point values outside of the WebGL context and in an efficient manner. I have a custom rasterizer that can create a rather good depth buffer but I don't want to waste any time redoing steps that are already done.
One possibility would be to encode the 24 significant bits of a 32-bit IEEE 754 floating-point value into a vec3:
vec3 PackDepth(float depth)
{
    float depthVal = depth * (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    vec4 encode = fract(depthVal * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0));
    return encode.xyz - encode.yzw / 256.0 + 1.0/512.0;
}
The R, G and B color channels can be decoded to a depth in range [0.0, 1.0] like this:
depth = (R*256.0*256.0 + G*256.0 + B) / (256.0*256.0*256.0 - 1.0);
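On the JavaScript side, the bytes read back with readPixels can then be decoded with that same formula; a sketch (decodeDepth is a made-up helper, assuming an RGBA8 pixel buffer):

```javascript
// Decode one depth value from an RGBA8 readPixels buffer whose RGB channels
// were written by the PackDepth shader above. r, g, b are bytes in [0, 255];
// together they reconstruct 24 bits of depth in [0.0, 1.0].
function decodeDepth(pixels, pixelIndex) {
  const r = pixels[pixelIndex * 4 + 0];
  const g = pixels[pixelIndex * 4 + 1];
  const b = pixels[pixelIndex * 4 + 2];
  return (r * 256.0 * 256.0 + g * 256.0 + b) / (256.0 * 256.0 * 256.0 - 1.0);
}
```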
For performance reasons I need to display hundreds of moving tetrahedrons in a scene. Therefore I use InstancedBufferGeometry, which requires a custom shader.
The scene also contains objects with regular geometry (non-buffer) and some lights (I prepared a boiled-down snippet: https://jsfiddle.net/negan/xs3k1yz4/ ).
My problem is that the tetrahedrons are not shaded such that their lighting plausibly fits to the rest of the scene. The reason is probably the primitive shader I built:
<script id="vertexShader" type="x-shader/x-vertex">
    attribute vec3 offset;
    attribute vec4 color;
    varying vec4 vColor;
    varying vec3 vNormal;

    void main() {
        vColor = color;
        vNormal = normalMatrix * vec3(normal);
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position + offset, 1.0);
    }
</script>
<script id="fragmentShader" type="x-shader/x-fragment">
    varying vec4 vColor;
    varying vec3 vNormal;

    void main() {
        float di = 0.4*dot(vNormal, normalize(vec3(1., 1., 0.))) + 0.4*dot(vNormal, normalize(vec3(1., 0., 1.)));
        di = di + 0.2;
        vec4 vColor2 = vColor * vec4(1., 1., 1., 0.2) * di;  // adjust the alpha
        gl_FragColor = vColor2;
    }
</script>
Is there a way to make the custom shader fit the lights I defined in the scene? The shader also renders the faces in a way that does not give the impression of directed light. I'd rather have single faces lit evenly than have the color interpolated from the vertices, but I was unable to achieve that.
Any pointer or help is appreciated.
Here's a gist of the full fragment and vertex shader source for a Three.js MeshPhongMaterial shader, as of r83.
Three.js builds shaders using a string concatenation system, so figuring out the source of a shader from looking at the Three.js source will be almost impossible.
The above Gist was generated by installing the shader editor Chrome extension, going to a Three.js example page that has a MeshPhongMaterial, like this one, and using the shader editor to inspect the full source of a running shader program.
Three.js passes all default uniforms, like lighting data, to all shader programs, so if you create a custom shader with the above Gist code, lights, bones, etc, will all work automatically.
I would take the full source code and add your logic in manually, literally adding the result of your calculations to the existing gl_FragColor = ...
In the future, my tool ShaderFrog will let you automatically add any Three.js shader feature (lights, bones, morph targets, etc) to any custom shader. ShaderFrog can already combine any shaders together automatically, but I haven't done the manual work needed to fully support Three features yet.
There's an InstancedMesh module for three.js. It allows mesh instancing without using shader materials; it patches the existing material.
https://github.com/pailhead/three-instanced-mesh
It seems that every second triangle of a 3D object is missing. For example, when a cube is drawn, there are only 6 triangles, one on each face. Here is a picture of the problem. The problem persists even if only one object is drawn.
Here are parts of my code where I think the problem might be:
//start up
gl = canvas.getContext('webgl');
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
gl.clearColor(0.0,0.0,0.0,0.0);
gl.clearDepth(1.0);
//shaders
//vertexShader
'
attribute vec3 vertexPosition;
uniform mat4 transformationMatrix;
attribute vec2 uv;
varying vec2 vUV;

void main()
{
    gl_Position = transformationMatrix * vec4(vertexPosition, 1);
    vUV = uv;
}'
//fragmentShader
//problem is still there even when not using a texture
'
precision highp float;
uniform sampler2D sampler;
varying vec2 vUV;

void main()
{
    gl_FragColor = texture2D(sampler, vUV);
}'
//after shaders are compiled and some getAttribLocation/enableVertexAttribArray/etc. calls are made, models and textures are loaded
//each model has a vertexBuffer, indexBuffer, and nIndices, among some other things
//models are retrieved in a for loop
//models are exported to json format with a vertex format of [X,Y,Z,Nx,Ny,Nz,U,V]
var retrievedModel = retrieveResourceFile(resourcesToLoad.models[m].source);
//make vertex buffer
resourcesToLoad.models[m].path.vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER,resourcesToLoad.models[m].path.vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER,new Float32Array(retrievedModel.vertices),gl.STATIC_DRAW);
//make index buffer
resourcesToLoad.models[m].path.indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER,resourcesToLoad.models[m].path.indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER,new Uint16Array(retrievedModel.indices),gl.STATIC_DRAW);
resourcesToLoad.models[m].path.nIndices = retrievedModel.indices.length;
//now in draw loop, objects that need to be drawn are stored in Draw.objectsToRender array
gl.bindBuffer(gl.ARRAY_BUFFER,Draw.objectsToRender[i].model.vertexBuffer);
gl.vertexAttribPointer(Shaders.modelShaderProgram.vertexPosition,3,gl.FLOAT,false,4*(3+3+2),0);
gl.vertexAttribPointer(Shaders.modelShaderProgram.uv,2,gl.FLOAT,false,4*(3+3+2),(3+3)*4);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER,Draw.objectsToRender[i].model.indexBuffer);
gl.drawElements(gl.TRIANGLES,Draw.objectsToRender[i].model.nIndices,gl.UNSIGNED_SHORT,0);
I've been comparing my code to the tutorial I used, http://www.webglacademy.com/#6 (particularly part 6, section 8), and I can't spot anything different that would cause this problem. The models I am using were exported from Blender using the export script provided in part 6 of the tutorial, but I don't think the problem is there: I tried using the dragon model data that works fine in the tutorial, and it has the missing-triangles problem when used with my code. Thank you very much.