How to apply custom shader to sprite in THREE.js

I want to be able to apply procedural textures to faces. The first task where I ran into this need was creating a billboard showing a nuclear blast in open space. I hoped to render it as an animated radial gradient, and I have partly succeeded.
The key requirement is for the fragment shader to have access to the UV coordinates (passed in as a varying).
For rendering sprites, the key seems to be accessing the camera projection matrix in the vertex shader.
Here's an example: http://goo.gl/A7pY01
Now I want to draw this onto a billboard sprite. I tried to use THREE.Sprite with THREE.ShaderMaterial for this, but had no luck. It seems THREE.SpriteMaterial is the only material suitable for sprites, and after inspecting some source code I saw why: sprites are drawn in a special way, using plugins.
So, before I found myself reinventing the wheel, I felt the need to ask: how can I apply my own custom shader to my own custom sprite without hacking THREE.js?

So.
After a bit of research and work, I concluded that THREE.ShaderMaterial is the best option for this little task. Thanks to /extras/renderers/plugins/SpritePlugin, I figured out how to form and position sprites using vertex shaders. I still have some questions, but I found one good solution.
To accomplish the task, I first create a simple plane geometry:
var geometry = new THREE.PlaneGeometry( 1, 1 );
And use it in a mesh with a ShaderMaterial:
var uniforms = {
    cur_time: { type: "f", value: 1.0 },
    beg_time: { type: "f", value: 1.0 },
    scale: { type: "v3", value: new THREE.Vector3() }
};

var material = new THREE.ShaderMaterial( {
    uniforms: uniforms,
    vertexShader: document.getElementById( 'vertexShader' ).textContent,
    fragmentShader: document.getElementById( 'fragmentShader' ).textContent,
    transparent: true,
    blending: THREE.AdditiveBlending // It looks like a real blast with additive blending!
} );
var mesh = new THREE.Mesh( geometry, material );
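For completeness, here is a minimal sketch of how I drive these uniforms from the render loop (the renderer, scene and camera boilerplate is omitted; the scale value is an assumption):

// beg_time is set once when the blast starts; cur_time advances every frame,
// so the fragment shader can compute the elapsed time of the explosion.
uniforms.beg_time.value = Date.now();
uniforms.scale.value.set( 30, 30, 30 ); // assumed world-space billboard size

function animate() {
    requestAnimationFrame( animate );
    uniforms.cur_time.value = Date.now();
    renderer.render( scene, camera );
}
animate();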
Here are my shaders:
Vertex shader:
varying vec2 vUv;
uniform vec3 scale;

void main() {
    vUv = uv;

    float rotation = 0.0;

    vec3 alignedPosition = vec3( position.x * scale.x, position.y * scale.y, position.z * scale.z );

    vec2 rotatedPosition;
    rotatedPosition.x = cos( rotation ) * alignedPosition.x - sin( rotation ) * alignedPosition.y;
    rotatedPosition.y = sin( rotation ) * alignedPosition.x + cos( rotation ) * alignedPosition.y;

    vec4 finalPosition;
    finalPosition = modelViewMatrix * vec4( 0.0, 0.0, 0.0, 1.0 );
    finalPosition.xy += rotatedPosition;
    finalPosition = projectionMatrix * finalPosition;

    gl_Position = finalPosition;
}
I took the vertex shader from the original SpritePlugin source code and changed it slightly.
By the way, changing += to = on the finalPosition.xy line makes the sprite screen-sticky: it discards the sprite's view-space center, so the quad stays glued to the middle of the screen. This detail cost me a lot of time.
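To illustrate the difference:

// billboard anchored at the mesh's world position (what we want):
finalPosition.xy += rotatedPosition;

// "screen-sticky" variant: the view-space center is discarded,
// so the quad stays fixed relative to the screen:
finalPosition.xy = rotatedPosition;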
And this is my fragment shader:
uniform float cur_time;
uniform float beg_time;
varying vec2 vUv;

void main() {
    float full_time = 5000.;
    float time_left = cur_time - beg_time;

    float expl_step0 = 0.;
    float expl_step1 = 0.3;
    float expl_max = 1.;

    float as0 = 0.;
    float as1 = 1.;
    float as2 = 0.;

    float time_perc = clamp( time_left / full_time, 0., 1. );

    float alphap;
    alphap = mix( as0, as1, smoothstep( expl_step0, expl_step1, time_perc ) );
    alphap = mix( alphap, as2, smoothstep( expl_step1, expl_max, time_perc ) );

    vec2 p = vUv;
    vec2 c = vec2( 0.5, 0.5 );
    float max_g = 1.;
    float dist = length( p - c ) * 2.;

    float step1 = 0.;
    float step2 = 0.2;
    float step3 = 0.3;

    float a0 = 1.;
    float a1 = 1.;
    float a2 = 0.7;

    vec4 c0 = vec4( 1., 1., 1., a0 * alphap );
    vec4 c1 = vec4( 0.9, 0.9, 1., a1 * alphap );
    vec4 c2 = vec4( 0.7, 0.7, 1., a2 * alphap );
    vec4 c3 = vec4( 0., 0., 0., 0. );

    vec4 color;
    color = mix( c0, c1, smoothstep( step1, step2, dist ) );
    color = mix( color, c2, smoothstep( step2, step3, dist ) );
    color = mix( color, c3, smoothstep( step3, max_g, dist ) );

    gl_FragColor = color;
}
This is an example of how to make a multi-point gradient animated over time. There's a lot left to optimize, and I have several ideas for making it even more beautiful.
But this is almost exactly what I wanted.

Related

Three.js : Modify the UV of my texture inside a custom ShaderMaterial

I have a plane geometry and I'm creating a custom ShaderMaterial for it. It will receive some textures as uniforms. I'd like the textures to cover my plane perfectly (like the background-size: cover CSS property).
I managed to do it with a utility function when I used my textures with a MeshBasicMaterial:
cover( texture, aspect ) {
    var imageAspect = texture.image.width / texture.image.height;
    if ( aspect < imageAspect ) {
        texture.matrix.setUvTransform( 0, 0, aspect / imageAspect, 1, 0, 0.5, 0.5 );
    } else {
        texture.matrix.setUvTransform( 0, 0, 1, imageAspect / aspect, 0, 0.5, 0.5 );
    }
}
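For context, a function like this only takes effect if the texture's automatic matrix update is disabled; a minimal usage sketch (the texture variable and aspect value are assumed):

// Manual UV transforms require turning off the automatic matrix update,
// otherwise three.js rebuilds texture.matrix from offset/repeat/rotation/center.
texture.matrixAutoUpdate = false;
cover( texture, window.innerWidth / window.innerHeight );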
But unfortunately, since I'm using a ShaderMaterial, my cover function no longer applies. Am I forced to do it inside my fragment shader? If so, how can I reproduce this behavior?
Here's my code:
const vertexShader = `
    precision highp float;

    uniform mat3 uUvTransform;

    varying vec2 vUv;

    void main() {
        vUv = ( uUvTransform * vec3( uv, 1 ) ).xy;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }`;
const fragmentShader = `
    precision highp float;

    uniform sampler2D uText1;

    varying vec2 vUv;

    void main() {
        vec4 color = texture2D( uText1, vUv );
        gl_FragColor = color;
    }`;
And here's my current result:
Thanks a lot
You could simply use a custom uniform, e.g.:
uniform sampler2D uText1;
uniform vec2 uUvScale;

varying vec2 vUv;

void main() {
    vec2 uv = ( vUv - 0.5 ) * uUvScale + 0.5;
    gl_FragColor = texture2D( uText1, uv );
}
And:
var imageAspect = texture.image.width / texture.image.height;
if ( aspect < imageAspect ) {
    material.uniforms.uUvScale.value.set( aspect / imageAspect, 1 );
} else {
    material.uniforms.uUvScale.value.set( 1, imageAspect / aspect );
}
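For this to work, uUvScale must be declared as a Vector2 uniform when the material is created; a minimal sketch (texture loading is assumed to happen elsewhere):

var material = new THREE.ShaderMaterial( {
    uniforms: {
        uText1: { value: texture },
        uUvScale: { value: new THREE.Vector2( 1, 1 ) }
    },
    vertexShader: vertexShader,
    fragmentShader: fragmentShader
} );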
The way Three.js handles texture transformations like .offset, .repeat, .rotation, .center is via a Matrix3 that gets passed as a uniform into the vertex shader. The vertex shader performs the matrix multiplication, then passes the modified UVs as a varying to the fragment shader.
You can see that uniform being declared in the uv_pars_vertex.glsl.js file
You can see the transform being applied in the uv_vertex.glsl.js file
You could copy those lines of GLSL code into your ShaderMaterial's vertex shader, and I think the texture properties will come through in the Matrix3 automatically. If for some reason they don't, you can recreate the Matrix3 by copying it from the source and passing it as a uniform manually.
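For reference, the relevant lines in those chunks look roughly like this (paraphrased from the three.js source; the exact contents vary between releases):

// from uv_pars_vertex.glsl.js: the uniform three.js fills from texture.matrix
varying vec2 vUv;
uniform mat3 uvTransform;

// from uv_vertex.glsl.js: applied inside main()
vUv = ( uvTransform * vec3( uv, 1 ) ).xy;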

Three.js: correctly combine modified vUv and gl_FragCoord

I'd like to reproduce this effect in my three.js scene: https://www.shadertoy.com/view/3ljfzV
To do so, I'm using a ShaderMaterial(). I first made sure that my textures fit my scene perfectly, based on this solution.
Then I got rid of gl_FragCoord, since I have modified UVs, and replaced it with this formula: gl_FragCoord = modifiedUVs * uResolution.
You can see my current result here. Here's the related fragment shader ↓
precision highp float;

uniform sampler2D uText1;   // texture 1
uniform sampler2D uText2;   // texture 2
uniform vec3 uResolution;   // width and height of my scene
uniform float uTime;
uniform vec2 uUvScale;      // UV scale calculated from the texture and viewport resolutions

varying vec2 vUv;           // UVs from my vertex shader

// parameters for the effect
float freq = 3.2, period = 8.0, speed = 2., fade = 4., displacement = 0.2;

void main()
{
    // make my textures fit like the CSS background-size: cover property
    vec2 uv = (vUv - 0.5) * uUvScale + 0.5;

    vec2 R = uResolution.xy,
         U = (2. * (uv * uResolution.xy) - R) / min(R.x, R.y),
         T = (uv * uResolution.xy) / R.y;

    float D = length(U);

    float frame_time = mod(uTime * speed, period);
    float pixel_time = max(0.0, frame_time - D);
    float wave_height = (cos(pixel_time * freq) + 1.0) / 2.0;
    float wave_scale = (1.0 - min(1.0, pixel_time / fade));
    float frac = wave_height * wave_scale;

    if (mod(uTime * speed, period * 2.0) > period)
    {
        frac = 1. - frac;
    }

    vec2 tc = T + ((U / D) * -((sin(pixel_time * freq) / fade) * wave_scale) * displacement);

    gl_FragColor = mix(
        texture2D(uText1, tc),
        texture2D(uText2, tc),
        frac);
}
As you can see, the displacement works great, but I'm having trouble making my textures fit the whole scene.
I think I'm pretty close to making it fully work, because when I swap the displaced texture coordinates for the modified UVs, my textures display correctly ↓ In that case only the displacement is missing, as you can see here.
gl_FragColor = mix(
    texture2D(uText1, uv),
    texture2D(uText2, uv),
    frac);
Does anyone know how I can correctly combine my modified UVs with the gl_FragCoord value? Should I replace gl_FragCoord with another formula so that I keep both the displacement and the positioning of my textures?
Thank you very much
EDIT:
I've been told that I can add this line:
tc.x *= uResolution.y / uResolution.x;
It fixed the texture positions, but now I don't have a perfectly circular displacement, as you can see here.
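One direction worth trying (an untested sketch, not a confirmed fix): derive the radial field U from the raw, unscaled vUv so the wave stays circular in screen space, and use the cover-corrected uv only as the base texture coordinate:

// untested sketch: circular field from the raw vUv, cover uv for sampling
vec2 R = uResolution.xy;
vec2 U = (2. * (vUv * R) - R) / min(R.x, R.y); // aspect-corrected, stays circular
float D = length(U);
// ...same wave math as above...
vec2 tc = uv + ((U / D) * -((sin(pixel_time * freq) / fade) * wave_scale) * displacement);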

Creating an efficient early instance clip WebGL2 vertex shader

Discarding instances in the vertex shader
I am using instanced geometry to display content with WebGL2. Each instance has a color attribute, which for some instances may have an alpha value of zero.
Rather than passing such instances on to the fragment shader to be discarded, I check the alpha in the vertex shader. If it is zero, I output each vertex as vec4(-2) to put it outside the clip volume, or at worst have it rendered as a one-pixel point.
I can not find information on how this is handled by the rendering pipeline.
Is this the best strategy for removing instances from the pipeline?
The alternative is to remove the instances from the buffer, which in JS is a CPU-intensive operation when dealing with thousands of instances.
The shaders
const vertexSrc = `#version 300 es

#define alphaCut 0.0

in vec2 verts;
in vec2 pos;
in vec2 scale;
in vec2 offset;
in float rotate;
in float zIdx;     // z index for zBuf clip only
in vec4 color;     // RGBA; color.a == 0.0 marks the instance for removal, not rendering
in uint spriteIdx;

uniform vec4 sheetLayout[128];
uniform vec2 sheetSize;
uniform mat2 view;
uniform vec2 origin;

out vec2 spr;
out vec4 col;

void main() {
    if (color.a <= alphaCut) {
        gl_Position = vec4(-2); // put this instance outside clip
        return;
    }
    col = color;
    vec4 sprite = sheetLayout[spriteIdx];
    spr = sprite.xy + verts * sprite.zw / sheetSize;
    vec2 loc = (verts - offset) * scale * sprite.zw;
    float xdx = cos(rotate);
    float xdy = sin(rotate);
    loc = view * (vec2(loc.x * xdx - loc.y * xdy, loc.x * xdy + loc.y * xdx) + pos - origin);
    gl_Position = vec4(loc, zIdx, 1);
}`;
const fragmentSrc = `#version 300 es

precision mediump float;

#define alphaCut 0.0  // note: no trailing semicolon, or the macro breaks the if statement below

uniform sampler2D tex;

in vec2 spr;
in vec4 col;

out vec4 pixel;

void main() {
    pixel = texture(tex, spr) * col;
    if (pixel.a <= alphaCut) { discard; }
}`;
As pointed out in the question's comments and the deleted answer, moving the vertex outside clip space requires gl_Position = vec4(vec3(-2), 1).
Setting gl_Position = vec4(-2) actually puts the vertex at vec3(1), the top-right corner at the far plane. That is inside the clip volume, so the instance geometry still ends up in the fragment shader.
But why?
Perspective division
When the vertex leaves the vertex shader as gl_Position, it is a 4D vector. To clip the vertex, we need the resulting 3D vector to be outside the 3D clip space.
The normalization of the vertex from 4D to 3D is called perspective division, and is performed automatically by dividing the vector's x, y and z components by its w component.
Thus, to correctly clip the instance geometry and ensure it does not reach the fragment shader:
#define alphaCut 0.1
//...
void main() {
    if (color.a <= alphaCut) {
        gl_Position = vec4(vec3(2), 1);
        return;
    }
    //... rest of code
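To make the arithmetic concrete, here is the perspective division for both values discussed above (a worked example added for illustration):

// gl_Position = vec4(-2.0)           -> (x, y, z, w) = (-2, -2, -2, -2)
// NDC = (x/w, y/w, z/w) = (1, 1, 1)  -> inside the clip volume, still rendered

// gl_Position = vec4(vec3(2.0), 1.0) -> (x, y, z, w) = (2, 2, 2, 1)
// NDC = (2, 2, 2)                    -> outside the clip volume, clipped away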

Three.js Verlet Cloth Simulation on GPU: Can't follow my logic for finding bug

I have a problem understanding the logic I am trying to implement with Three.js and the GPUComputationRenderer by yomboprime.
(https://github.com/yomboprime/GPGPU-threejs-demos/blob/gh-pages/js/GPUComputationRenderer.js)
I want to make a simple Verlet cloth simulation. Here is the logic I was already able to implement (short version):
1) Position-Fragment-Shader: takes the old and current position textures and computes the new position like this:
vec3 position = texture2D( texturePosition, uv ).xyz;
vec3 oldPosition = texture2D( textureOldPosition, uv ).xyz;

position = position * 2.0 - oldPosition + acceleration * delta * delta;
vec3 t = checkConstraints( position );
position += t;

gl_FragColor = vec4( position, 1 );
2) Old-Position-Shader: simply copies the current position so it is available in the next step.
vec3 position = texture2D( texturePosition, uv ).xyz;
gl_FragColor = vec4(position,1);
This works fine, but with that pattern it's not possible to apply the constraints more than once per step, because each vertex is processed separately and cannot see the position changes other pixels would have made in the first iteration.
What I am trying to do is to separate the constraints from the Verlet integration. At the moment it looks something like this:
1) Position-Shader (texturePosition)
vec3 position = texture2D( textureConstraints, uv ).xyz;
vec3 oldPosition = texture2D( textureOldPosition, uv ).xyz;

position = position * 2.0 - oldPosition + acceleration * delta * delta;

gl_FragColor = vec4( position, 1 );
2) Constraint-Shader (textureConstraints)
vec3 position = texture2D( texturePosition, uv ).xyz;

vec3 t = checkConstraints( position );
position += t;

gl_FragColor = vec4( position, 1 );
3) Old-Position-Shader (textureOldPosition)
vec3 position = texture2D( textureConstraints, uv ).xyz;
gl_FragColor = vec4(position,1);
This logic is not working, even if I don't calculate constraints at all and just pass the values through unchanged. As soon as some acceleration is added in the position shader, the position values explode into nowhere.
What am I doing wrong?
This example is not Verlet cloth, but I think the basic premise may help you. I have a fiddle that uses the GPUComputationRenderer to do some spring physics on a point cloud. I think you could adapt it to your needs.
What you need is more information. You'll need fixed references to the cloth's original shape (as if it were a flat board), as well as the force currently being exerted on each of those points (by gravity + wind + structural integrity, or whatever else), which then gives you that point's current position. Those references to the original shape, in combination with the forces, are what will give your cloth a memory instead of letting it fly apart as it has been.
Here, for example, is the spring-physics shader the GPUComputationRenderer uses to compute the point positions in my visualization. The tOffsets in this case are the coordinates that give the cloud a permanent memory of its original shape; they never change. It is a DataTexture I add to the uniforms at the beginning of the program. Constants like the mass, springConstant, gravity, and damping also remain consistent and live in the shader. tPositions are the vec4 coordinates that change (two components record the current position, the other two the current velocity).
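Setting up such an offsets texture looks roughly like this (a sketch with assumed names, not the fiddle's exact code):

// A DataTexture holding each point's fixed "home" coordinates, passed once
// as a uniform so the simulation never loses the original shape.
const data = new Float32Array( WIDTH * WIDTH * 4 );
// ...fill data with the original x/y position of each point...
const tOffsets = new THREE.DataTexture( data, WIDTH, WIDTH, THREE.RGBAFormat, THREE.FloatType );
tOffsets.needsUpdate = true;

positionVariable.material.uniforms.tOffsets = { value: tOffsets };
positionVariable.material.uniforms.uTime = { value: 0 };

With that in place, here is the position shader itself: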
<script id="position_fragment_shader" type="x-shader/x-fragment">
    // This shader handles only the math to move the various points.
    // Adding the sprites and point opacity comes in the following shader.
    // NOTE: the `resolution` define and the `tPositions` sampler are injected
    // automatically by GPUComputationRenderer, so they are not declared here.
    uniform sampler2D tOffsets;
    uniform float uTime;

    varying vec2 vUv;

    float hash(float n) { return fract(sin(n) * 1e4); }

    float noise(float x) {
        float i = floor(x);
        float f = fract(x);
        float u = f * f * (3.0 - 2.0 * f);
        return mix(hash(i), hash(i + 1.0), u);
    }

    void main() {
        vec2 uv = gl_FragCoord.xy / resolution.xy;

        float damping = 0.98;

        vec4 nowPos = texture2D( tPositions, uv ).xyzw;
        vec4 offsets = texture2D( tOffsets, uv ).xyzw;
        vec2 velocity = vec2(nowPos.z, nowPos.w);

        float anchorHeight = 100.0;
        float yAnchor = anchorHeight;
        vec2 anchor = vec2( -(uTime * 50.0) + offsets.x, yAnchor + (noise(uTime) * 30.0) );

        // Newton's law: F = M * A
        float mass = 24.0;
        vec2 acceleration = vec2(0.0, 0.0);

        // 1. apply gravity's force:
        vec2 gravity = vec2(0.0, 2.0);
        gravity /= mass;
        acceleration += gravity;

        // 2. apply the spring force
        float restLength = yAnchor - offsets.y;
        float springConstant = 0.2;

        // vector pointing from the anchor to the point's position
        vec2 springForce = vec2(nowPos.x - anchor.x, nowPos.y - anchor.y);
        // length of the vector
        float distance = length( springForce );
        // stretch is the difference between the current distance and restLength
        float stretch = distance - restLength;
        // calculate springForce according to Hooke's law
        springForce = normalize(springForce);
        springForce *= (springConstant * stretch);
        springForce /= mass;
        acceleration += springForce;

        velocity += acceleration;
        velocity *= damping;

        vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);

        // write the new position out
        gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
    }
</script>
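And for orientation, here is an untested sketch (names assumed from the question) of how the three passes from the question would be wired up with GPUComputationRenderer:

var gpuCompute = new GPUComputationRenderer( WIDTH, WIDTH, renderer );

var positionTex = gpuCompute.createTexture();
var constraintTex = gpuCompute.createTexture();
var oldPositionTex = gpuCompute.createTexture();
// ...fill all three textures with the initial cloth positions...

var posVar = gpuCompute.addVariable( 'texturePosition', positionShaderSrc, positionTex );
var conVar = gpuCompute.addVariable( 'textureConstraints', constraintShaderSrc, constraintTex );
var oldVar = gpuCompute.addVariable( 'textureOldPosition', oldPositionShaderSrc, oldPositionTex );

// Declare which textures each shader samples, so the renderer injects the
// matching sampler uniforms and keeps the ping-pong render targets in sync.
gpuCompute.setVariableDependencies( posVar, [ posVar, conVar, oldVar ] );
gpuCompute.setVariableDependencies( conVar, [ posVar, conVar, oldVar ] );
gpuCompute.setVariableDependencies( oldVar, [ posVar, conVar, oldVar ] );

var error = gpuCompute.init();
if ( error !== null ) console.error( error );

// then, once per frame:
gpuCompute.compute();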

Three js Shader Material - Pixelated glitching when transparency is set to true

I'm playing around with shaders for the first time, using THREE.RawShaderMaterial on a few meshes.
I'm getting some strange artifacts with my very simple shaders:
Vertex shader:
precision mediump float;
precision mediump int;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

attribute vec3 position;
attribute vec4 color;

varying vec3 vPosition;

void main() {
    vPosition = position;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Fragment shader:
precision mediump float;
precision mediump int;

varying vec3 vPosition;

void main() {
    gl_FragColor.r = vPosition.x;
    gl_FragColor.g = vPosition.y;
    gl_FragColor.b = 1.0;
}
which I use on a whole bunch of objects that are created like so:
asdfobject = new THREE.Object3D();
scene.add(asdfobject);

var geometry = new THREE.SphereGeometry(1, 4, 4);
var material = new THREE.RawShaderMaterial({
    uniforms: {
        time: { type: "f", value: 1.0 }
    },
    vertexShader: document.getElementById( 'vertexShader' ).textContent,
    fragmentShader: document.getElementById( 'fragmentShader' ).textContent,
    side: THREE.DoubleSide,
    transparent: true,
});

for (var i = 0; i < 80; i++) {
    var mesh = new THREE.Mesh(geometry, material);
    mesh.position.set(Math.random() - 0.5, Math.random() - 0.5, Math.random() - 0.5);
    mesh.position.multiplyScalar(400);
    mesh.rotation.set(Math.random() * 2, Math.random() * 2, Math.random() * 2);
    mesh.scale.x = mesh.scale.y = mesh.scale.z = Math.random() * 50;
    asdfobject.add(mesh);
}
The colour on the objects should be completely smooth, but sometimes they look a bit "glitchy" and pixelated, as shown in this fiddle:
https://jsfiddle.net/weqqv5z5/
This happens especially when resizing the window.
I cannot figure out why this happens; I only know that the effect does not occur when transparent in the material is set to false (on line 23 in the fiddle).
I haven't been able to test this on any other devices yet, so it may be a graphics-card-specific problem. I am running a 64-bit Arch Linux laptop with Intel HD 4000 graphics.
Thanks in advance for any help!
You need to set the alpha value of your fragment color. Because the shader never writes gl_FragColor.a, its value is undefined; with transparent: true, that undefined alpha is fed into the blending equation, which produces the glitchy, GPU-dependent artifacts. For example:
gl_FragColor.a = 0.5;
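In context, the question's fragment shader with the fix applied:

precision mediump float;
precision mediump int;

varying vec3 vPosition;

void main() {
    gl_FragColor.r = vPosition.x;
    gl_FragColor.g = vPosition.y;
    gl_FragColor.b = 1.0;
    gl_FragColor.a = 0.5; // any defined value works; use 1.0 for fully opaque
}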
fiddle: https://jsfiddle.net/weqqv5z5/1/
three.js r.71
