Apply a vertex shader on a mesh, independently of its position - javascript

I'd like to apply some movement to my meshes using a vertex shader. I noticed that when I translate my meshes in my scene, it also shifts my simple sine wave.
I'd like to have the same sine wave on both my meshes even when I translate them in my scene.
I had a first lead with this post : Keep movement of vertexShader despite of its mesh rotation
I tried to reproduce this solution but I'm missing something. Here's my shader :
uniform float uTime;
void main()
{
mat4 translate = mat4(1.0);
vec4 modelPosition = modelMatrix * vec4(position, 1.0);
translate[3].y = sin(uTime * 2. + modelPosition.x *1.);
vec4 modelPosition2 = modelMatrix * translate * vec4(position, 1.0);
gl_Position = projectionMatrix * viewMatrix * modelPosition2;
}
I think I shouldn't apply my modelPosition.x on the translate[3].y line, but I don't know what to use instead. As you can see on this codepen, my planes look different: https://codepen.io/michaelgrc/pen/GRMxYBm
Does anyone see what I'm missing? Thanks a lot.

I needed to use position.x instead of modelPosition.x to solve my issue:
translate[3].y = sin(uTime * 2. + position.x);
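This makes sense: position.x is the vertex's local (pre-transform) coordinate, so the wave phase no longer depends on where the mesh sits in the world. Since the wave depends only on uTime and the local position.x, it can be checked outside the shader; a minimal JS sketch (the `material`/`clock` names in the comment are assumptions, not from the question):

```javascript
// Pure-JS version of the shader's wave term:
//   translate[3].y = sin(uTime * 2. + position.x)
// Because it uses the local position.x, two meshes with the same geometry
// get the same wave no matter how each mesh is translated in the scene.
function waveOffsetY(uTime, localPositionX) {
  return Math.sin(uTime * 2.0 + localPositionX);
}

// In a three.js render loop the uniform would be updated each frame, e.g.
// (assuming `material` is the ShaderMaterial and `clock` a THREE.Clock):
//   material.uniforms.uTime.value = clock.getElapsedTime();
```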

how to make a point cloud viewer with webgl

I am trying to make a point cloud viewer with WebGL. Every tutorial I have found explains how the projection works with objects like a cube,
but that projection/perspective is not working with points, because the point size is not changing.
How do I change my point size based on the Z axis?
var vertCode =
'attribute vec3 coordinates;' +
'uniform mat4 rMatrix;' +
'void main(void) {' +
' gl_Position = rMatrix * vec4(coordinates, 1.0);' +
'gl_PointSize = 5.0;'+
'}';
[...]
const matrix = glMatrix.mat4.create();
const projectionMatrix = glMatrix.mat4.create();
const finalMatrix = glMatrix.mat4.create();
glMatrix.mat4.perspective(projectionMatrix,
75 * Math.PI/180, // vertical field of view ( angle in radian)
canvas.width/canvas.height, // aspect ratio
1e-4, // near cull distance
1e4 // far cull distance
);
//glMatrix.mat4.translate(matrix, matrix,[-.5,-.5,-.5]);
function animate(){
requestAnimationFrame(animate);
glMatrix.mat4.rotateY(matrix, matrix, Math.PI/6)
glMatrix.mat4.multiply(finalMatrix, projectionMatrix, matrix);
gl.uniformMatrix4fv(uniformLocations.matrix, false, finalMatrix); // upload the combined matrix, not just the rotation
gl.drawArrays(gl.POINTS, 0, vertices.length/3);
}
Sub-question 1:
Right now my code is a constant rotation. Any resource on how to handle mouse events and build one or two rotation matrices from a mouse drag? (Not quite sure if I can encode X and Y rotation in the same matrix.)
Sub-question 2:
Any idea how to get round points instead of square ones? Do I have to create small spheres?
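On the main question: perspective projection alone never changes gl_PointSize — it is specified in pixels, so the attenuation has to be done by hand from the view-space depth. A sketch of the usual formula (the uniform and variable names here are my own, not from the question):

```javascript
// Perspective point-size attenuation: points farther from the camera
// (larger -z in view space) get a smaller gl_PointSize. In GLSL this
// would look like:
//   vec4 mvPos = viewMatrix * modelMatrix * vec4(coordinates, 1.0);
//   gl_PointSize = basePointSize * (attenuationScale / -mvPos.z);
// The same formula in JS, to check the behaviour:
function pointSize(basePointSize, attenuationScale, viewSpaceZ) {
  return basePointSize * (attenuationScale / -viewSpaceZ);
}
```

For sub-question 2, round points don't need sphere geometry: in the fragment shader you can `discard` fragments where `length(gl_PointCoord - 0.5) > 0.5`, which clips the square point sprite to a disc.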

Three.js: correctly combine modified vUv and gl_FragCoord

I'd like to reproduce this effect on my three.js scene : https://www.shadertoy.com/view/3ljfzV
To do so, I'm using a ShaderMaterial(). I first made sure that my textures fit my scene perfectly, based on this solution.
Then I got rid of gl_FragCoord since I have modified UVs, replacing it with this formula: gl_FragCoord = modifiedUVs * uResolution
You can see here my current result. Here's the related fragment shader ↓
precision highp float;
uniform sampler2D uText1; // texture 1
uniform sampler2D uText2; // texture 2
uniform vec3 uResolution; // width and height of my scene
uniform float uTime;
uniform vec2 uUvScale; // UV Scale calculated with the resolution of my texture and the viewport
varying vec2 vUv; // uvs from my vertex shader
// parameters for the effect
float freq = 3.2, period = 8.0, speed = 2., fade = 4., displacement = 0.2;
void main()
{
// make my textures fits like the css background-size:cover property
vec2 uv = (vUv - 0.5) * uUvScale + 0.5;
vec2 R = uResolution.xy,
U = (2. * (uv * uResolution.xy) - R) / min(R.x, R.y), //2.
T = ((uv * uResolution.xy)) / R.y;
float D = length(U);
float frame_time = mod(uTime * speed, period);
float pixel_time = max(0.0, frame_time - D);
float wave_height = (cos(pixel_time * freq) + 1.0) / 2.0;
float wave_scale = (1.0 - min(1.0, pixel_time / fade));
float frac = wave_height * wave_scale;
if (mod(uTime * speed, period * 2.0) > period)
{
frac = 1. - frac;
}
vec2 tc = T + ((U / D) * -((sin(pixel_time * freq) / fade) * wave_scale) * displacement);
gl_FragColor = mix(
texture2D(uText1,tc),
texture2D(uText2,tc),
frac);
}
As you can see, the displacement works great, but I have trouble making my textures fit the whole scene.
I think I'm pretty close to making it fully work, because when I replace the texture coordinates with the modified UVs my textures display correctly ↓ In this case only the displacement is missing, as you can see here.
gl_FragColor = mix(
texture2D(uText1,uv),
texture2D(uText2,uv),
frac);
Does anyone know how I can correctly combine my modified UVs with the gl_FragCoord value? Should I replace gl_FragCoord with another formula to keep both the displacement and the position of my textures?
Thank you very much
EDIT:
I've been told that I can add this line:
tc.x *= uResolution.y/uResolution.x;
It fixed the texture positions, but now I don't have a perfectly circular displacement, as you can see here.
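That fix rescales the final sample position but also squashes the displacement vector, which is why the ripple turns elliptical. A common approach (a sketch under my assumptions, not the original code) is to do the displacement math entirely in centered, aspect-corrected space — the shader's U — and convert back to 0..1 UV space only for the texture lookup. The two mappings, transcribed to JS for checking:

```javascript
// Map a 0..1 UV to centered, aspect-corrected coordinates (the shader's
// U = (2*uv*R - R) / min(R.x, R.y)), and the exact inverse. Distances are
// isotropic in this space, so a displacement computed here stays circular;
// only the final sample coordinate goes back through toUv().
function toCentered(uv, res) {
  const m = Math.min(res.w, res.h);
  return {
    x: (2 * uv.x * res.w - res.w) / m,
    y: (2 * uv.y * res.h - res.h) / m,
  };
}
function toUv(p, res) {
  const m = Math.min(res.w, res.h);
  return {
    x: (p.x * m + res.w) / (2 * res.w),
    y: (p.y * m + res.h) / (2 * res.h),
  };
}
```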

Three.js Verlet Cloth Simulation on GPU: Can't follow my logic for finding bug

I have a problem understanding the logic I am trying to implement with Three.js and the GPUComputationRenderer by yomboprime.
(https://github.com/yomboprime/GPGPU-threejs-demos/blob/gh-pages/js/GPUComputationRenderer.js)
I want to make a simple Verlet-Cloth-Simulation. Here is the logic I was already able to implement (short version):
1) Position-Fragment-Shader: This shader takes the old and current position textures and computes the new position like this:
vec3 position = texture2D( texturePosition, uv ).xyz;
vec3 oldPosition = texture2D( textureOldPosition, uv ).xyz;
position = (position * 2.0 - oldPosition + acceleration * delta * delta);
vec3 t = checkConstraints(position);
position += t;
gl_FragColor = vec4(position,1);
2) Old-Position-Shader: This shader just saves the current position for the next step.
vec3 position = texture2D( texturePosition, uv ).xyz;
gl_FragColor = vec4(position,1);
This works fine, but with that pattern it's not possible to apply the constraints more than once per step, because each vertex is processed separately and cannot see the position changes the other pixels made in the first iteration.
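The core Verlet update itself is easy to verify off-GPU; a minimal 1D JS version (my own naming) mirroring the shader line:

```javascript
// One position-Verlet integration step, as in the position shader:
//   newPos = pos * 2 - oldPos + acceleration * delta^2
// Velocity is implicit in (pos - oldPos), which is what makes the method
// sensitive to which "old" position each pass actually reads.
function verletStep(pos, oldPos, acceleration, delta) {
  return pos * 2 - oldPos + acceleration * delta * delta;
}
```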
What I am trying to do is to separate the constraints from the verlet. At the moment it looks somehow like this:
1) Position-Shader (texturePosition)
vec3 position = texture2D( textureConstraints, uv ).xyz;
vec3 oldPosition = texture2D( textureOldPosition, uv ).xyz;
position = (position * 2.0 - oldPosition + acceleration * delta *delta );
gl_FragColor = vec4(position, 1 );
2) Constraint-Shader (textureConstraints)
vec3 position = texture2D( texturePosition, uv ).xyz;
vec3 t = checkConstraints(position);
position += t;
gl_FragColor = vec4(position,1);
3) Old-Position-Shader (textureOldPosition)
vec3 position = texture2D( textureConstraints, uv ).xyz;
gl_FragColor = vec4(position,1);
This logic is not working, even if I don't calculate constraints at all and just pass the values through unchanged. As soon as some acceleration is added in the Position-Shader, the position values explode into nowhere.
What am I doing wrong?
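One way to sanity-check the pass ordering is to mock the three passes on the CPU (a sketch, assuming each pass samples only the previous frame's textures, which is how GPUComputationRenderer's ping-pong buffers behave within one compute() call). Running it shows the Verlet pass ends up integrating against values that are one frame stale, so position(n+1) effectively depends on position(n-1), and the sequence jitters instead of integrating smoothly — which could explain the blow-up. One fix would be to apply the constraint correction inside the same pass as the Verlet update, or otherwise ensure the constraint output reaches the Verlet pass within the same frame.

```javascript
// CPU mock of the three passes. Each "frame", every pass reads only the
// previous frame's values; all outputs are then swapped in at once.
function stepFrame(prev, acceleration, delta) {
  return {
    // 1) Position pass: Verlet from last frame's constrained position
    position: prev.constraints * 2 - prev.oldPosition + acceleration * delta * delta,
    // 2) Constraint pass: identity here (no constraints at all)
    constraints: prev.position,
    // 3) Old-position pass: saves last frame's constrained position
    oldPosition: prev.constraints,
  };
}

// With constant acceleration 1 and delta 1 this yields 1, 1, 3, 2, 6, ...
// (non-monotonic jitter), whereas a single-pass Verlet gives the smooth
// quadratic 1, 3, 6, 10, 15.
```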
This example is not verlet cloth, but I think the basic premise may help you. I have a fiddle that uses the GPUComputationRender to accomplish some spring physics on a point cloud. I think you could adapt it to your needs.
What you need is more information. You'll need fixed references to the cloth's original shape (as if it were a flat board) as well as the force currently being exerted on any of those points (by gravity + wind + structural integrity or whatever else), which then gives you that point's current position. Those point references to its original shape in combination with the forces are what will give your cloth a memory instead of flinging apart as it has been.
Here, for example, is my spring physics shader which the GPUComputationRenderer uses to compute the point positions in my visualization. The tOffsets in this case are the coordinates that give the cloud a permanent memory of its original shape - they never change. It is a DataTexture I add to the uniforms at the beginning of the program. Various constants like the mass, springConstant, gravity, and damping also remain consistent and live in the shader. tPositions are the vec4 coords that change (two of the coords record current position, the other two record current velocity):
<script id="position_fragment_shader" type="x-shader/x-fragment">
// This shader handles only the math to move the various points. Adding the sprites and point opacity comes in the following shader.
uniform sampler2D tOffsets;
uniform float uTime;
varying vec2 vUv;
float hash(float n) { return fract(sin(n) * 1e4); }
float noise(float x) {
float i = floor(x);
float f = fract(x);
float u = f * f * (3.0 - 2.0 * f);
return mix(hash(i), hash(i + 1.0), u);
}
void main() {
vec2 uv = gl_FragCoord.xy / resolution.xy;
float damping = 0.98;
vec4 nowPos = texture2D( tPositions, uv ).xyzw;
vec4 offsets = texture2D( tOffsets, uv ).xyzw;
vec2 velocity = vec2(nowPos.z, nowPos.w);
float anchorHeight = 100.0;
float yAnchor = anchorHeight;
vec2 anchor = vec2( -(uTime * 50.0) + offsets.x, yAnchor + (noise(uTime) * 30.0) );
// Newton's law: F = M * A
float mass = 24.0;
vec2 acceleration = vec2(0.0, 0.0);
// 1. apply gravity's force:
vec2 gravity = vec2(0.0, 2.0);
gravity /= mass;
acceleration += gravity;
// 2. apply the spring force
float restLength = yAnchor - offsets.y;
float springConstant = 0.2;
// Vector pointing from anchor to point position
vec2 springForce = vec2(nowPos.x - anchor.x, nowPos.y - anchor.y);
// length of the vector
float distance = length( springForce );
// stretch is the difference between the current distance and restLength
float stretch = distance - restLength;
// Calculate springForce according to Hooke's Law
springForce = normalize(springForce);
springForce *= (springConstant * stretch);
springForce /= mass;
acceleration += springForce;
velocity += acceleration;
velocity *= damping;
vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);
// Write new position out
gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
}
</script>
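The spring math in that shader is plain Hooke's law. Here it is as a pure JS function (my own naming) that mirrors the shader's normalize / scale / divide-by-mass sequence, which can help when tuning springConstant and mass:

```javascript
// Hooke's-law spring contribution to acceleration, as in the shader:
//   springForce = normalize(pos - anchor) * (k * stretch) / mass
// where stretch = |pos - anchor| - restLength. A positive stretch points
// away from the anchor; the shader's position update subtracts velocity,
// so the net effect pulls the point back toward rest length.
function springAcceleration(pos, anchor, restLength, k, mass) {
  const dx = pos.x - anchor.x;
  const dy = pos.y - anchor.y;
  const dist = Math.hypot(dx, dy);
  const stretch = dist - restLength;
  const f = (k * stretch) / mass;
  return { x: (dx / dist) * f, y: (dy / dist) * f };
}
```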

How to apply custom shader to sprite in THREE.js

I want to be able to apply procedural textures to faces. The first task where I ran into this need was creating a billboard showing a nuclear blast in open space. I hoped to render it as an animated radial gradient, and I have partly succeeded.
The key requirement is for the fragment shader to have access to the UV coordinates.
It seems the main thing about rendering sprites is accessing the camera projection matrix in the vertex shader.
Here's example http://goo.gl/A7pY01!
Now I want to draw this onto a billboard sprite. I intended to use THREE.Sprite with THREE.ShaderMaterial, but had no luck: it seemed that THREE.SpriteMaterial was the only material that worked with sprites. After inspecting some source code I saw why sprites are drawn in a special way using plugins.
So, before reinventing the wheel, I felt the need to ask: how can I put my own custom shader on my own custom sprite without hacking THREE.js?
So.
After a little research and work, I concluded that THREE.ShaderMaterial is the best option for this little task. Thanks to /extras/renderers/plugins/SpritePlugin, I figured out how to shape and position sprites using vertex shaders. I still have some questions, but I found one good solution.
To accomplish my task, firstly I create a simple plane geometry:
var geometry = new THREE.PlaneGeometry( 1, 1 );
And use it in a mesh with a ShaderMaterial:
uniforms = {
cur_time: {type:"f", value:1.0},
beg_time:{type:"f", value:1.0},
scale:{type: "v3", value:new THREE.Vector3()}
};
var material = new THREE.ShaderMaterial( {
uniforms: uniforms,
vertexShader: document.getElementById( 'vertexShader' ).textContent,
fragmentShader: document.getElementById( 'fragmentShader' ).textContent,
transparent: true,
blending:THREE.AdditiveBlending // It looks like real blast with Additive blending!!!
} );
var mesh = new THREE.Mesh( geometry, material );
Here's my shaders:
Vertex shader:
varying vec2 vUv;
uniform vec3 scale;
void main() {
vUv = uv;
float rotation = 0.0;
vec3 alignedPosition = vec3(position.x * scale.x, position.y * scale.y, position.z*scale.z);
vec2 pos = alignedPosition.xy;
vec2 rotatedPosition;
rotatedPosition.x = cos( rotation ) * alignedPosition.x - sin( rotation ) * alignedPosition.y;
rotatedPosition.y = sin( rotation ) * alignedPosition.x + cos( rotation ) * alignedPosition.y;
vec4 finalPosition;
finalPosition = modelViewMatrix * vec4( 0.0, 0.0, 0.0, 1.0 );
finalPosition.xy += rotatedPosition;
finalPosition = projectionMatrix * finalPosition;
gl_Position = finalPosition;
}
I took the vertex shader from the original Sprite Plugin source code and changed it slightly.
BTW, changing += to = makes the sprite screen-sticky. This detail wasted a lot of my time.
And this is my fragment shader:
uniform float cur_time;
uniform float beg_time;
varying vec2 vUv;
void main() {
float full_time = 5000.;
float time_left = cur_time - beg_time;
float expl_step0 = 0.;
float expl_step1 = 0.3;
float expl_max = 1.;
float as0 = 0.;
float as1 = 1.;
float as2 = 0.;
float time_perc = clamp( (time_left / full_time), 0., 1. ) ;
float alphap;
alphap = mix(as0,as1, smoothstep(expl_step0, expl_step1, time_perc));
alphap = mix(alphap,as2, smoothstep(expl_step1, expl_max, time_perc));
vec2 p = vUv;
vec2 c = vec2(0.5, 0.5);
float max_g = 1.;
float dist = length(p - c) * 2. ;
float step1 = 0.;
float step2 = 0.2;
float step3 = 0.3;
vec4 color;
float a0 = 1.;
float a1 = 1.;
float a2 = 0.7;
float a3 = 0.0;
vec4 c0 = vec4(1., 1., 1., a0 * alphap);
vec4 c1 = vec4(0.9, 0.9, 1., a1 * alphap);
vec4 c2 = vec4(0.7, 0.7, 1., a2 * alphap);
vec4 c3 = vec4(0., 0., 0., 0.);
color = mix(c0, c1, smoothstep(step1, step2, dist));
color = mix(color, c2, smoothstep(step2, step3, dist));
color = mix(color, c3, smoothstep(step3, max_g, dist));
gl_FragColor = color;
}
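The alpha envelope in this fragment shader is just two chained smoothstep ramps; a JS transcription (with smoothstep reimplemented to match GLSL) makes the fade-in over [0, 0.3] and fade-out over [0.3, 1] of the normalized time easy to verify:

```javascript
// GLSL-style smoothstep: clamp then cubic Hermite interpolation.
function smoothstep(e0, e1, x) {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
}

// The shader's alphap with as0=0, as1=1, as2=0:
//   alphap = mix(0, 1, smoothstep(0, 0.3, t))  -> fade in
//   alphap = mix(alphap, 0, smoothstep(0.3, 1, t))  -> fade out
function blastAlpha(timePerc) {
  let a = smoothstep(0.0, 0.3, timePerc);
  a = a * (1 - smoothstep(0.3, 1.0, timePerc));
  return a;
}
```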
This is an example of a multi-point gradient animated over time. There's a lot to optimize, and I have several ideas for making it even more beautiful,
but this is almost what I wanted.

Is there a way to create a 3D cylinder of the canvas element in CSS?

I'd like to present the canvas as a cylindrical cone, which you can spin like a wheel in both directions. Is this at all possible with JS/CSS3?
You should take a look at this new CSS3 feature: the Custom filters / CSS shaders.
Here are some really nice presentations which describe the whole thing better than I could (how to enable it in Chrome, how to get started, how it works, etc.):
HTML5rocks.com - Introduction to Custom Filters (aka CSS Shaders) by Paul Lewis (#aerotwist)
Alteredqualia.com - Getting started with CSS custom filters by #alteredq
Basically, if you're already familiar with shaders and CSS3 transforms, it's all done...
Advantages
WebGl-like GPU/3D acceleration
JS-free (only GL shaders and CSS)
Possibility to combine it with CSS3-transitions
Drawbacks
New feature, only supported by recent versions of some browsers (sometimes behind a flag)
Separate files for the shaders (I'm unsure about that - maybe it's somehow possible to inline them in the HTML page, as with WebGL)
While implementing the example described below, I ran into a strange behavior: if the size of the canvas (not the CSS size but the "JS" one, defining the pixel density) gets too big, the shaders no longer seem to be applied, regardless of what you draw on the canvas. I'm quite curious why, so I will try to investigate.
Example
To answer your more precise requirement (canvas as a cylindrical cone), I made this small example: http://aldream.net/various/css-shader-demo/cylindricalConeTransformDemo.html. Hover over the canvas to make it wrap into a cone.
It doesn't spin, I just applied a simple transition effect I took from one example in the articles given above, but you should get the idea.
The vertex shader and fragment shader used can be found here:
"ConeTransform" vertex-shader
Computes the wrapping into a cone and the 3D transformation of the canvas.
"ConeTransform" fragment-shader
Does almost nothing. Just allows to display the unmodified DOM texture (the content of the canvas).
Simplified & commented Code
CSS
canvas {
/* ... Prettify it as you wish */
width: 640px;
height: 560px;
-webkit-filter: custom(url(cylindricalConeTransform.vs) /* Vertex-shader */
mix(url(cylindricalConeTransform.fs) normal source-atop /* Fragment-shader and color-mixing properties */),
36 2 /* Numbers of vertices */,
/* Passing the values to the shaders uniforms: */
amount 0,
cylinderRadius 0.35,
cylinderLength 250,
transform rotateY(0deg) rotateX(0deg));
-webkit-transition: -webkit-filter linear 1s; /* Transition on the filter for animation. */
}
canvas:hover {
/* Same as above, but with different values for some uniforms. With the CSS-transition, those values will be tweened. */
-webkit-filter: custom(url(cylindricalConeTransform.vs) mix(url(cylindricalConeTransform.fs) normal source-atop), 36 2,
amount 1,
cylinderRadius 0.35,
cylinderLength 250,
transform rotateY(60deg) rotateX(60deg));
}
Vertex-Shader
precision mediump float;
// Built-in attributes
attribute vec4 a_position;
attribute vec2 a_texCoord;
// Built-in uniforms
uniform mat4 u_projectionMatrix;
// Uniforms passed in from CSS
uniform float amount;
uniform float cylinderRadius;
uniform float cylinderLength;
uniform mat4 transform;
// Constants
const float PI = 3.1415;
// Cone function
vec3 computeCylindricalConePosition( vec2 uv, float r, float l ) {
vec3 p;
float fi = uv.x * PI * 2.0;
p.x = r * cos( fi ) * uv.y;
p.y = r * sin( fi ) * uv.y;
p.z = (uv.y - 0.5) * l;
return p;
}
// Main
void main() {
vec4 position = a_position;
// Map plane to cone using UV coordinates
vec3 cone = computeCylindricalConePosition( a_texCoord, cylinderRadius, cylinderLength );
// Blend plane and cone
position.xyz = mix( position.xyz, cone, amount );
// Set vertex position
gl_Position = u_projectionMatrix * transform * position;
}
Fragment-Shader
/** spec: css */
precision mediump float;
void main() {
css_ColorMatrix = mat4(
1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0
);
}
HTML
<!doctype html>
<html>
<head>
... your meta, css, ...
<body>
<canvas></canvas>
<script>
// Draw what you want in your canvas.
</script>
</body>
</html>
EDIT: I'm actually not sure whether you're asking for a cone or a cylinder, but the difference is small here. If you want the latter, modify the computeCylindricalConePosition() function in the vertex shader, evaluating p.x and p.y like this instead:
p.x = r * cos( fi ) /* * uv.y */;
p.y = r * sin( fi ) /* * uv.y */;
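For reference, the same mapping as a plain JS function (my own naming); the taper flag toggles between the cone (radius scaled by uv.y) and the cylinder variant just described:

```javascript
// Wraps a unit plane into a cone or cylinder, like the vertex shader:
// uv.x sweeps the angle around the axis, uv.y runs along the length.
function conePosition(uv, r, l, taper) {
  const fi = uv.x * Math.PI * 2;
  const radius = taper ? r * uv.y : r; // cone tapers to a point at uv.y = 0
  return {
    x: radius * Math.cos(fi),
    y: radius * Math.sin(fi),
    z: (uv.y - 0.5) * l, // centered along the axis
  };
}
```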
I hope it helps. I will try to develop my answer once I clarify some points.
The HTML canvas is currently only 2D, but the Three.js framework seems to be a pretty good solution for 3D rendering within the canvas element.
