computeTangents in rev. 72 (JavaScript)

In my initial question thread I asked why my lighting wasn't showing and got an answer based on the information I provided, but I realize I wasn't asking the right question. I think I know my problem now: the newest revision doesn't support tangents the way the original script uses them, since it seems to have removed the .computeTangents() call entirely. I thought that didn't matter, but I just noticed my shaders rely heavily on it to map the light to the texture as it rotates around my object.
This is what my shader scripts look like. Is there any way to update them so they work with the newest revision?
<script id="vertexShader" type="x-shader/x-vertex">
attribute vec4 tangent;
uniform vec2 uvScale;
uniform vec3 lightPosition;
varying vec2 vUv;
varying mat3 tbn;
varying vec3 vLightVector;
void main() {
vUv = uvScale * uv;
/** Create tangent-binormal-normal matrix used to transform
coordinates from object space to tangent space */
vec3 vNormal = normalize(normalMatrix * normal);
vec3 vTangent = normalize( normalMatrix * tangent.xyz );
vec3 vBinormal = normalize(cross( vNormal, vTangent ) * tangent.w);
tbn = mat3(vTangent, vBinormal, vNormal);
/** Calculate the vertex-to-light vector */
vec4 lightVector = viewMatrix * vec4(lightPosition, 1.0);
vec4 modelViewPosition = modelViewMatrix * vec4(position, 1.0);
vLightVector = normalize(lightVector.xyz - modelViewPosition.xyz);
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<!-- GLSL fragment shader for the moon -->
<script id="fragmentShader" type="x-shader/x-fragment">
uniform sampler2D textureMap;
uniform sampler2D normalMap;
varying vec2 vUv;
varying mat3 tbn;
varying vec3 vLightVector;
void main() {
/** Transform texture coordinate of normal map to a range (-1, 1) */
vec3 normalCoordinate = texture2D(normalMap, vUv).xyz * 2.0 - 1.0;
/** Transform the normal vector in the RGB channels to tangent space */
vec3 normal = normalize(tbn * normalCoordinate.rgb);
/** Lighting intensity is calculated as dot of normal vector and
the vertex-to-light vector */
float intensity = max(0.07, dot(normal, vLightVector));
vec4 lighting = vec4(intensity, intensity, intensity, 1.0);
/** Final color is calculated with the lighting applied */
gl_FragColor = texture2D(textureMap, vUv) * lighting;
}
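For what it's worth, a possible migration sketch (an assumption, not tested against every revision): newer three.js builds expose BufferGeometry.computeTangents(), which requires an indexed geometry with position, normal, and uv attributes. The geometry, textures, and uniform values below are placeholders, not the asker's actual scene.

// Hedged sketch: supply the vec4 "tangent" attribute yourself now that
// Geometry.computeTangents() is gone. Assumes a three.js build where
// BufferGeometry.computeTangents() exists.
var geometry = new THREE.SphereBufferGeometry(1, 64, 64); // placeholder geometry
geometry.computeTangents(); // adds a "tangent" BufferAttribute the shader can read

var material = new THREE.ShaderMaterial({
    uniforms: {
        uvScale:       { value: new THREE.Vector2(1, 1) },
        lightPosition: { value: new THREE.Vector3(10, 0, 0) },
        textureMap:    { value: colorTexture },   // assumed preloaded textures
        normalMap:     { value: normalTexture }
    },
    vertexShader: document.getElementById('vertexShader').textContent,
    fragmentShader: document.getElementById('fragmentShader').textContent
});

scene.add(new THREE.Mesh(geometry, material));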

Related

Scale video texture with mask over image texture background

Referencing this previous question, Scaling Video texture mixed with image texture:
I have the webcam video texture partially scaling with a mask provided by BodyPix data. However, I can't figure out how to remove the white background produced by mixing the two. I'm not sure where the extra alpha is coming from.
Updated fiddle https://jsfiddle.net/danrossi303/q8gz5cun/10/
The fragment shader for this:
precision mediump float;

uniform sampler2D background;
uniform sampler2D frame;
uniform sampler2D mask;
uniform float texWidth;
uniform float texHeight;

void main(void) {
    vec2 texCoord = gl_FragCoord.xy / vec2(texWidth, texHeight);
    vec2 frameuv = texCoord * vec2(texWidth, texHeight) / vec2(200.0, 200.0);
    vec4 texel0 = texture2D(background, texCoord);
    vec4 frameTex = texture2D(frame, frameuv.xy);
    vec4 maskTex = texture2D(mask, frameuv.xy);
    vec4 texel1 = vec4(frameTex.rgb, maskTex.a * 255.);
    gl_FragColor = mix(texture2D(background, texCoord), texel1, step(frameuv.x, 1.0) * step(frameuv.y, 1.0));
}
I need to position the scaled video in any corner of the canvas, and somehow provide a dynamic uniform to change the texture coordinates back to the viewport size. Currently the size is hardcoded to 200, but it should be able to revert.
Unscaled example with masked bodypix video over image.
https://jsfiddle.net/danrossi303/q8gz5cun/6/
The alpha channel is in the range [0.0, 1.0] (like the color channels), so the multiplication by 255. is wrong:

vec4 texel1 = vec4(frameTex.rgb, maskTex.a * 255.);

It should simply be:

vec4 texel1 = vec4(frameTex.rgb, maskTex.a);
Mixing the textures will also likely need to depend on the mask:

gl_FragColor = mix(texture2D(background, texCoord), texel1,
                   step(frameuv.x, 1.0) * step(frameuv.y, 1.0) * maskTex.a);
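On the hardcoded 200: a minimal sketch of how that could become dynamic, assuming a new uniform (called frameSize here; my name, not from the fiddle) that JavaScript sets to the scaled size, or back to (texWidth, texHeight) to fill the viewport again:

uniform vec2 frameSize; // hypothetical new uniform, e.g. (200.0, 200.0)

void main(void) {
    vec2 texCoord = gl_FragCoord.xy / vec2(texWidth, texHeight);
    // Dividing by the uniform instead of a constant lets JavaScript change
    // the frame size, or set it to (texWidth, texHeight) to revert.
    vec2 frameuv = texCoord * vec2(texWidth, texHeight) / frameSize;
    vec4 frameTex = texture2D(frame, frameuv);
    vec4 maskTex = texture2D(mask, frameuv);
    vec4 texel1 = vec4(frameTex.rgb, maskTex.a);
    gl_FragColor = mix(texture2D(background, texCoord), texel1,
                       step(frameuv.x, 1.0) * step(frameuv.y, 1.0) * maskTex.a);
}

// in JavaScript, per frame or on resize:
// gl.uniform2f(gl.getUniformLocation(program, "frameSize"), 200.0, 200.0);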

Determining the center of the screen in a WebGL fragment shader

I started learning WebGL a couple of weeks ago, and since I am trying to learn by practice, I stumbled upon a simple example of a shader that I could implement using p5.js.
In this example I am creating concentric circles starting from the center of the screen, using the fragment shader below, where u_resolution and u_time are uniforms passed down from the p5 script as [windowWidth, windowHeight] and frameCount respectively:
void main(void) {
    float maxAxis = max(u_resolution.x, u_resolution.y);
    vec2 uv = gl_FragCoord.xy / maxAxis;
    vec2 center = u_resolution / maxAxis;
    gl_FragColor = vec4(
        vec3(sin(u_time + distance(uv, center) * 255.0)),
        1.0);
}
Using this example, I can achieve what I want, but I cannot understand why I cannot calculate the center of the fragment using the formula:
vec2 center = vec2(u_resolution.x * 0.5, u_resolution.y * 0.5);
If I do this, then it will mess up the whole rendering.
Is there a coordinate system mismatch that I am missing, or something else?
For a better explanation, I included a snippet of the original experiment that I am doing in CodePen right here.
uv and center are both scaled by 1.0 / maxAxis, so they are in normalized units in the range [0.0, 1.0] rather than in pixels. Therefore the center of the viewport is not the pixel coordinate

vec2 center = vec2(u_resolution.x * 0.5, u_resolution.y * 0.5);

but the normalized coordinate

vec2 center = 0.5 * u_resolution / maxAxis;

The snippet below demonstrates this:
let myShaderIn, myShaderOut;
let isPlaying = true;

const vertexShader = document.getElementById("vert-shader").textContent;
const fragmentShaderStyleIn = document.getElementById("frag-shader-style-in").textContent;

function setup() {
    const canvas = createCanvas(windowWidth, windowHeight, WEBGL);
    // canvas.mousePressed(toggleSound);
    rectMode(CENTER);
    // shaders
    myShaderIn = createShader(vertexShader, fragmentShaderStyleIn);
    // register shaders
    shader(myShaderIn);
    // shapes setup
    noStroke();
}

function draw() {
    background(0);
    drawEllipse();
}

function drawEllipse() {
    myShaderIn.setUniform("u_resolution", [float(width), float(height)]);
    myShaderIn.setUniform("u_time", float(frameCount));
    shader(myShaderIn);
    ellipse(0, 0, width / 2);
}

function windowResized() {
    resizeCanvas(windowWidth, windowHeight);
    clear();
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.3.1/p5.min.js"></script>
<!-- vertex shader -->
<script type="x-shader/x-vertex" id="vert-shader">
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
attribute vec3 aPosition;
uniform mat4 uProjectionMatrix;
uniform mat4 uModelViewMatrix;
void main() {
vec4 newPosition = vec4(aPosition, 1.0);
gl_Position = uProjectionMatrix * uModelViewMatrix * newPosition;
}
</script>
<!-- fragment shader -->
<script type="x-shader/x-fragment" id="frag-shader-style-in">
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
uniform vec2 u_resolution; // canvas size (width, height)
uniform float u_time; // time in seconds since load
void main(void) {
float maxAxis = max(u_resolution.x, u_resolution.y);
// If you want to map the pixel coordinate values to the range 0 to 1, you divide by resolution.
/*With vec4 gl_FragCoord, we know where a thread is working inside the billboard.
In this case we don't call it uniform because it will be different from thread to thread,
instead gl_FragCoord is called a varying. */
vec2 uv = gl_FragCoord.xy / maxAxis;
vec2 center = 0.5 * u_resolution / maxAxis;
gl_FragColor = vec4(
vec3(sin(u_time * 0.1 + distance(uv, center) * 255.0)),
1.0);
}
</script>
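An equivalent way to look at it (a sketch of my own, using the same maxAxis normalization): subtract the pixel-space center first, so the distance is simply the length of the centered coordinate:

void main(void) {
    float maxAxis = max(u_resolution.x, u_resolution.y);
    // (0, 0) is now the middle of the viewport, and both axes share
    // the same scale, so the rings stay circular.
    vec2 p = (gl_FragCoord.xy - 0.5 * u_resolution) / maxAxis;
    gl_FragColor = vec4(vec3(sin(u_time * 0.1 + length(p) * 255.0)), 1.0);
}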

WebGL shader flickering

I'm a beginner with WebGL and shaders, so if you spot upgrades/optimizations for my code, I would love to see them.
My problem:
I have two WebGL instances (with curtainsjs) on two separate pages, one working and one not. They have basically the same code; the only difference is the size of the output canvas. For the buggy one it's the size of an image (not the full page width and height), and the working one takes the whole viewport.
The buggy one flickers with a stroboscopic white effect (see here: https://youtu.be/Yxkmm0HgNlE), but only on iOS devices (it works fine on Windows, macOS, and Android, but not on iPad and iPhone).
Things I have already tried:
lowp, mediump, and highp precision qualifiers
gl.flush()
changing the "translate" value of the vertex positions
Nothing worked.
Here are the shaders:
Vertex
#ifdef GL_ES
precision highp float;
#endif

attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform float uTime;
uniform float uMouseMoveStrength;
uniform vec2 uMousePosition;
uniform mat4 uTextureMatrix0;

varying vec3 vVertexPosition;
varying vec2 vTextureCoord;

void main() {
    vec3 vertexPosition = aVertexPosition;

    float distanceFromMouse = distance(uMousePosition, vec2(vertexPosition.x, vertexPosition.y));
    float waveSinusoid = cos(5.0 * (distanceFromMouse - (uTime / 75.0)));
    float distanceStrength = (0.4 / (distanceFromMouse + 0.4));
    float distortionEffect = distanceStrength * waveSinusoid * uMouseMoveStrength;

    vertexPosition.z += distortionEffect / 30.0;
    vertexPosition.x += (distortionEffect / 10.0 * (uMousePosition.x - vertexPosition.x));
    vertexPosition.y += distortionEffect / 30.0 * (uMousePosition.y - vertexPosition.y);

    gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.05);

    vTextureCoord = (uTextureMatrix0 * vec4(aTextureCoord, 0.0, 1.0)).xy;
    vVertexPosition = vertexPosition;
}
Fragment:
#ifdef GL_ES
precision highp float;
#endif

varying vec3 vVertexPosition;
varying vec2 vTextureCoord;

uniform sampler2D uSampler0;

void main() {
    vec2 textureCoord = vTextureCoord;
    gl_FragColor = texture2D(uSampler0, textureCoord);
}

How to alternate between two textures?

I'm trying to draw to a texture so that each render keeps getting compounded on top of the previous ones.
I'm using two textures and two framebuffers.
texture[0] is attached to framebuffer[0], and texture[1] is attached to framebuffer[1].
var gl = webgl.context;

gl.useProgram(this.program);

gl.bindTexture(gl.TEXTURE_2D, this.texture[this.pingpong]);

this.pingpong = (this.pingpong == 0 ? 1 : 0);
var pp = this.pingpong;

gl.bindFramebuffer(gl.FRAMEBUFFER, this.frameBuffer[pp]);
gl.drawArrays(gl.TRIANGLES, 0, 6); // primitiveType, offset, count

gl.bindTexture(gl.TEXTURE_2D, this.texture[pp]); // bind to the texture we just drew to
gl.bindFramebuffer(gl.FRAMEBUFFER, null);        // render the above texture to the canvas
gl.drawArrays(gl.TRIANGLES, 0, 6);
The issue is that I'm not seeing the previous renders get saved into the textures.
I thought that bindFramebuffer() would make it render to the texture.
My fragment shader:
precision mediump float; // fragment shaders don't have a default precision, so pick one; mediump is a good default

varying vec2 v_texCoord; // the texCoords passed in from the vertex shader

uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform sampler2D u_image; // not set explicitly, so it defaults to 0 (the current active texture)

void main() {
    vec4 texColor = texture2D(u_image, v_texCoord); // look up a color from the texture
    vec2 coord = vec2(gl_FragCoord.x, u_resolution.y - gl_FragCoord.y);
    vec4 color = step(distance(coord, u_mouse), 100.0) * vec4(1, 0, 0, 1)
               + step(100.0, distance(coord, u_mouse)) * texColor;
    gl_FragColor = color; // gl_FragColor is a special variable a fragment shader is responsible for setting
}
u_image is left at its default of 0, the active texture.
Is there something I'm overlooking? Why won't the previous renders get compounded on top of each other? It just shows the latest render, as if the textures haven't been altered.
Here is the vertex shader:

precision mediump float;

attribute vec2 a_position; // an attribute will receive data from a buffer
attribute vec2 a_texCoord;

varying vec2 v_texCoord;   // a varying

uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_flip;

// all shaders have a main function
void main() {
    v_texCoord = a_texCoord; // pass the texCoord to the fragment shader; the GPU will interpolate it between points
    vec2 zeroToOne = a_position / u_resolution; // convert the position from pixels to 0.0 to 1.0
    vec2 zeroToTwo = zeroToOne * 2.0;           // convert from 0->1 to 0->2
    vec2 clipSpace = zeroToTwo - 1.0;           // convert from 0->2 to -1->+1 (clip space)
    // gl_Position is a special variable a vertex shader is responsible for setting
    gl_Position = vec4(clipSpace * vec2(1, u_flip), 0, 1);
}
I tried to emulate what you are doing:
loop.flexStep = function () {
    gl.clear(gl.COLOR_BUFFER_BIT);

    // pingPong() swaps the FBOs and returns the FBO currently being drawn to
    pingPong.pingPong().applyPass(
        "outColor += src0 * 0.98;",
        pingPong.otherTexture() // the src0 texture
    );

    points.drawPoint([mouse.x, mouse.y, 0]);
    points.renderAll(camera);

    screenBuffer.applyPass(
        "outColor = src0;",
        pingPong.resultFBO // src0
    );
};
(The original answer included a screenshot of the result, drawn with gl.POINTS instead of a circle, and a capture of the recorded gl commands.)
You should first check that your framebuffers are set up correctly; since that part isn't shown, it's hard to say.
Also consider separating your shaders: one shader in the ping-pong stage to copy/alter the previous texture into the current ping-pong FBO texture, and another shader to draw the new content into the current ping-pong FBO texture.
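For reference, a minimal sketch of the kind of framebuffer setup that advice refers to (names and structure are assumptions, not the asker's actual code):

function createFBO(gl, width, height) {
    // Texture that will receive the rendering.
    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    // Framebuffer with the texture as its color attachment.
    const framebuffer = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, texture, 0);
    if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
        console.warn("framebuffer incomplete");
    }
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    return { framebuffer, texture };
}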
The issue seemed to be getting the wrong texture coordinates and also flipping the texture.
It's working as expected now.
Here are the updated vertex/fragment shaders:
this.vertex = `
    precision mediump float;

    attribute vec2 a_position; // an attribute will receive data from a buffer
    attribute vec2 a_texCoord;

    varying vec2 v_texCoord;   // a varying

    uniform vec2 u_resolution;
    uniform vec2 u_mouse;
    uniform float u_flip;

    // all shaders have a main function
    void main() {
        v_texCoord = a_texCoord / u_resolution; // pass the texCoord to the fragment shader; the GPU will interpolate it between points
        vec2 zeroToOne = a_position / u_resolution; // convert the position from pixels to 0.0 to 1.0
        vec2 zeroToTwo = zeroToOne * 2.0;           // convert from 0->1 to 0->2
        vec2 clipSpace = zeroToTwo - 1.0;           // convert from 0->2 to -1->+1 (clip space)
        // gl_Position is a special variable a vertex shader is responsible for setting
        gl_Position = vec4(clipSpace * vec2(1, u_flip), 0, 1);
    }
`;
this.fragment = `
    precision mediump float; // fragment shaders don't have a default precision, so pick one; mediump is a good default

    varying vec2 v_texCoord; // the texCoords passed in from the vertex shader

    uniform vec2 u_resolution;
    uniform vec2 u_mouse;
    uniform float u_flip;
    uniform sampler2D u_image; // not set explicitly, so it defaults to 0 (the current active texture)
    uniform sampler2D u_texture0;
    uniform sampler2D u_texture1;

    void main() {
        vec4 texColor = texture2D(u_image, v_texCoord); // look up a color from the texture
        vec2 coord = vec2(gl_FragCoord.x, step(0.0, u_flip) * gl_FragCoord.y + step(u_flip, 0.0) * (u_resolution.y - gl_FragCoord.y));
        vec4 color = step(distance(coord, u_mouse), 100.0) * vec4(1, 0, 0, 1)
                   + step(100.0, distance(coord, u_mouse)) * texColor;
        gl_FragColor = color; // gl_FragColor is a special variable a fragment shader is responsible for setting
    }
`;

How can I improve this WebGL / GLSL image downsampling shader

I am using WebGL to resize images client-side very quickly within an app I am working on. I have written a GLSL shader that performs simple bilinear filtering on the images I am downsizing.
It works fine for the most part, but there are many occasions where the resize is huge, e.g. from a 2048x2048 image down to 110x110 in order to generate a thumbnail. In these instances the quality is poor and far too blurry.
My current GLSL shader is as follows:
uniform float textureSizeWidth;
uniform float textureSizeHeight;
uniform float texelSizeX;
uniform float texelSizeY;

varying mediump vec2 texCoord;

uniform sampler2D texture;

vec4 tex2DBiLinear( sampler2D textureSampler_i, vec2 texCoord_i )
{
    vec4 p0q0 = texture2D(textureSampler_i, texCoord_i);
    vec4 p1q0 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX, 0));

    vec4 p0q1 = texture2D(textureSampler_i, texCoord_i + vec2(0, texelSizeY));
    vec4 p1q1 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX, texelSizeY));

    float a = fract( texCoord_i.x * textureSizeWidth );

    vec4 pInterp_q0 = mix( p0q0, p1q0, a );
    vec4 pInterp_q1 = mix( p0q1, p1q1, a );

    float b = fract( texCoord_i.y * textureSizeHeight );
    return mix( pInterp_q0, pInterp_q1, b );
}

void main() {
    gl_FragColor = tex2DBiLinear(texture, texCoord);
}
texelSizeX and texelSizeY are simply 1.0 / textureWidth and 1.0 / textureHeight respectively.
I would like to implement a higher-quality filtering technique, ideally a Lanczos filter, which should produce far better results, but I cannot seem to get my head around how to implement the algorithm in GLSL, as I am very new to WebGL and GLSL in general.
Could anybody point me in the right direction?
Thanks in advance.
If you're looking for Lanczos resampling, the following is the shader program I use in my open source GPUImage library:
Vertex shader:
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
uniform float texelWidthOffset;
uniform float texelHeightOffset;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepLeftTextureCoordinate;
varying vec2 twoStepsLeftTextureCoordinate;
varying vec2 threeStepsLeftTextureCoordinate;
varying vec2 fourStepsLeftTextureCoordinate;
varying vec2 oneStepRightTextureCoordinate;
varying vec2 twoStepsRightTextureCoordinate;
varying vec2 threeStepsRightTextureCoordinate;
varying vec2 fourStepsRightTextureCoordinate;
void main()
{
    gl_Position = position;

    vec2 firstOffset = vec2(texelWidthOffset, texelHeightOffset);
    vec2 secondOffset = vec2(2.0 * texelWidthOffset, 2.0 * texelHeightOffset);
    vec2 thirdOffset = vec2(3.0 * texelWidthOffset, 3.0 * texelHeightOffset);
    vec2 fourthOffset = vec2(4.0 * texelWidthOffset, 4.0 * texelHeightOffset);

    centerTextureCoordinate = inputTextureCoordinate;
    oneStepLeftTextureCoordinate = inputTextureCoordinate - firstOffset;
    twoStepsLeftTextureCoordinate = inputTextureCoordinate - secondOffset;
    threeStepsLeftTextureCoordinate = inputTextureCoordinate - thirdOffset;
    fourStepsLeftTextureCoordinate = inputTextureCoordinate - fourthOffset;
    oneStepRightTextureCoordinate = inputTextureCoordinate + firstOffset;
    twoStepsRightTextureCoordinate = inputTextureCoordinate + secondOffset;
    threeStepsRightTextureCoordinate = inputTextureCoordinate + thirdOffset;
    fourStepsRightTextureCoordinate = inputTextureCoordinate + fourthOffset;
}
Fragment shader:
precision highp float;
uniform sampler2D inputImageTexture;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepLeftTextureCoordinate;
varying vec2 twoStepsLeftTextureCoordinate;
varying vec2 threeStepsLeftTextureCoordinate;
varying vec2 fourStepsLeftTextureCoordinate;
varying vec2 oneStepRightTextureCoordinate;
varying vec2 twoStepsRightTextureCoordinate;
varying vec2 threeStepsRightTextureCoordinate;
varying vec2 fourStepsRightTextureCoordinate;
// sinc(x) * sinc(x/a) = (a * sin(pi * x) * sin(pi * x / a)) / (pi^2 * x^2)
// Assuming a Lanczos constant of 2.0, and scaling values to max out at x = +/- 1.5
void main()
{
    lowp vec4 fragmentColor = texture2D(inputImageTexture, centerTextureCoordinate) * 0.38026;

    fragmentColor += texture2D(inputImageTexture, oneStepLeftTextureCoordinate) * 0.27667;
    fragmentColor += texture2D(inputImageTexture, oneStepRightTextureCoordinate) * 0.27667;
    fragmentColor += texture2D(inputImageTexture, twoStepsLeftTextureCoordinate) * 0.08074;
    fragmentColor += texture2D(inputImageTexture, twoStepsRightTextureCoordinate) * 0.08074;
    fragmentColor += texture2D(inputImageTexture, threeStepsLeftTextureCoordinate) * -0.02612;
    fragmentColor += texture2D(inputImageTexture, threeStepsRightTextureCoordinate) * -0.02612;
    fragmentColor += texture2D(inputImageTexture, fourStepsLeftTextureCoordinate) * -0.02143;
    fragmentColor += texture2D(inputImageTexture, fourStepsRightTextureCoordinate) * -0.02143;

    gl_FragColor = fragmentColor;
}
This is applied in two passes, with the first performing a horizontal downsampling and the second a vertical downsampling. The texelWidthOffset and texelHeightOffset uniforms are alternately set to 0.0 and the width fraction or height fraction of a single pixel in the image.
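As a sketch of how that two-pass wiring might look on the host side (the uniform locations and the drawQuadInto helper are assumptions, not GPUImage's actual code):

// Pass 1: horizontal. The offsets step one source texel in x only.
gl.uniform1f(texelWidthOffsetLoc, 1.0 / sourceWidth);
gl.uniform1f(texelHeightOffsetLoc, 0.0);
drawQuadInto(intermediateFramebuffer); // hypothetical helper: draws a fullscreen quad

// Pass 2: vertical. The offsets step one intermediate texel in y only.
gl.uniform1f(texelWidthOffsetLoc, 0.0);
gl.uniform1f(texelHeightOffsetLoc, 1.0 / intermediateHeight);
drawQuadInto(targetFramebuffer); // hypothetical helper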
I hard-calculate the texel offsets in the vertex shader because this avoids dependent texture reads on the mobile devices I'm targeting with this, leading to significantly better performance there. It is a little verbose, though.
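As a side note, a weight table like the one above can be derived from the kernel formula in the fragment shader's comment. A small sketch of my own (not from GPUImage); it lands near, though not exactly on, the hard-coded values, since the exact sampling scale used to generate them isn't spelled out:

// Lanczos kernel: L(x) = a * sin(pi*x) * sin(pi*x/a) / (pi^2 * x^2) for |x| < a.
function lanczos(x, a) {
    if (x === 0) return 1;
    if (Math.abs(x) >= a) return 0;
    const pix = Math.PI * x;
    return (a * Math.sin(pix) * Math.sin(pix / a)) / (pix * pix);
}

// Nine taps at offsets -4..4, scaled so the outermost taps sit at
// x = +/- 1.5 (4 * 0.375), then normalized so the weights sum to 1.
const offsets = [-4, -3, -2, -1, 0, 1, 2, 3, 4];
const raw = offsets.map(o => lanczos(o * 0.375, 2.0));
const sum = raw.reduce((s, w) => s + w, 0);
console.log(raw.map(w => w / sum)); // approximately the table above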
(The original answer included comparison images: the Lanczos resampling result, normal bilinear downsampling, and nearest-neighbor downsampling.)
