Converting a 3D-Scene to 2D-image using raytracing (webgl, three.js) - javascript

As the title says, I would like to render a 3D scene onto a 2D plane with raytracing. Eventually I would like to use this for volume rendering, but I'm struggling with the basics here. I have a three.js scene with the viewing plane attached to the camera (in front of it, of course).
The Setup:
Then (in the shader) I'm shooting a ray from the camera through each point (250x250) of the plane. Behind the plane is a 41x41x41 volume (essentially a cube). If a ray goes through the cube, the point in the viewing plane that the ray crossed will be rendered red; otherwise the point will be black. Unfortunately this only works if you look at the cube from the front. Here's the example: http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html
If you try to look at the cube from different angles (you can move the camera with the mouse), the viewing plane does not show a cube as it should, but a square with some weird pixels along the sides.
Here is the raytracing code:
Vertex Shader:
// `camera`, `STEPS` and the varying `PointIntensity` used below are declared
// elsewhere in the full source (see the pastebin link further down).
bool inside(vec3 posVec){
    bool value = false;
    if(posVec.x < 0.0 || posVec.x > 41.0){
        value = false;
    }
    else if(posVec.y < 0.0 || posVec.y > 41.0){
        value = false;
    }
    else if(posVec.z < 0.0 || posVec.z > 41.0){
        value = false;
    }
    else{
        value = true;
    }
    return value;
}
float getDensity(vec3 PointPos){
    float stepsize = 1.0;
    float emptyStep = 15.0;
    vec3 leap;
    bool hit = false;
    float density = 0.000;
    // Ray direction from the camera through the current point in the plane
    vec3 dir = PointPos - camera;
    vec3 RayDirection = normalize(dir);
    vec3 start = PointPos;
    for(int i = 0; i < STEPS; i++){
        vec3 alteredPosition = start;
        alteredPosition.x += 20.5;
        alteredPosition.y += 20.5;
        alteredPosition.z += 20.5;
        bool insideTest = inside(alteredPosition);
        if(insideTest){
            // advance from the start position
            start = start + RayDirection * stepsize;
            hit = true;
        }else{
            leap = start + RayDirection * emptyStep;
            bool tooFar = inside(leap);
            if(tooFar){
                start = start + RayDirection * stepsize;
            }else{
                start = leap;
            }
        }
    }
    if(hit){
        density = 1.000;
    }
    return density;
}
void main() {
    PointIntensity = getDensity(position);
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
}
Fragment Shader:
varying float PointIntensity;

void main() {
    // Rays that have traversed the volume (cube) should leave a red point on the viewplane, rays that just went through empty space a black point
    gl_FragColor = vec4(PointIntensity, 0.0, 0.0, 1.0);
}
Full Code:
http://pastebin.com/4YmWL0u1
Same Code but Running:
http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html
I would be very glad if somebody had any tips on what I did wrong here.
EDIT:
I updated the example with the changes that Mark Lundin proposed, but unfortunately I still only get a red square when moving the camera (no weird pixels on the side, though):
mat4 uInvMVProjMatrix = modelViewMatrix * inverseProjectionMatrix;
inverseProjectionMatrix being the Three.js camera property projectionMatrixInverse, passed to the shader as a uniform. The unproject function is then called for every point in the viewing plane with its uv coordinates.
The new code is here:
http://pastebin.com/Dxh5C9XX
and running here:
http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html
To see that the camera actually moves, you can press x, y or z to print the camera's current x, y and z coordinates.

The reason you're seeing a square, rather than a 3D volume, is that your raytracing method doesn't take the camera orientation or projection into account. As you move the camera with the trackball, its orientation changes, so this should be included in your calculation. Secondly, the camera's projection matrix should also be used to project the coordinates of the plane into 3D space. You can achieve this with something like the following:
vec3 unproject( vec2 coord ){
    vec4 screen = vec4( coord, 0, 1.0 );
    vec4 homogenous = uInvMVProjMatrix * 2.0 * ( screen - vec4( 0.5 ) );
    return homogenous.xyz / homogenous.w;
}
where coord is the 2D coordinate of your plane and uInvMVProjMatrix is the inverse of the model-view-projection matrix. This will return a vec3 that you can use to test for intersection.
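For reference, here is a minimal sketch (not from the original answer; updateInvMVP and the uniform name are assumptions) of how that inverse model-view-projection matrix could be built with a recent three.js (older versions use getInverse instead of invert) and passed in as a uniform. Note that it inverts the whole product, rather than multiplying the model-view matrix by the inverse projection:
const invMVP = new THREE.Matrix4();
const modelView = new THREE.Matrix4();

function updateInvMVP(mesh, camera) {
    camera.updateMatrixWorld();  // also refreshes camera.matrixWorldInverse
    mesh.updateMatrixWorld();
    // modelView = view * model
    modelView.multiplyMatrices(camera.matrixWorldInverse, mesh.matrixWorld);
    // invert the full product projection * modelView in one go
    invMVP.multiplyMatrices(camera.projectionMatrix, modelView).invert();
    mesh.material.uniforms.uInvMVProjMatrix.value.copy(invMVP);
}
Calling this once per frame before rendering keeps the unprojection in sync with whatever the trackball controls do to the camera.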

Related

how to make a point cloud viewer with webgl

I am trying to make a point cloud viewer with WebGL. Every tutorial I have found explains how projection works with objects like a cube, but that projection/perspective is not working with points because the point size does not change.
How do I change my point size based on the Z axis?
var vertCode =
    'attribute vec3 coordinates;' +
    'uniform mat4 rMatrix;' +
    'void main(void) {' +
    '  gl_Position = rMatrix * vec4(coordinates, 1.0);' +
    '  gl_PointSize = 5.0;' +
    '}';
[...]
const matrix = glMatrix.mat4.create();
const projectionMatrix = glMatrix.mat4.create();
const finalMatrix = glMatrix.mat4.create();

glMatrix.mat4.perspective(projectionMatrix,
    75 * Math.PI / 180,          // vertical field of view (angle in radians)
    canvas.width / canvas.width, // aspect ratio
    1e-4,                        // near cull distance (near plane)
    1e4                          // far cull distance (far plane)
);
//glMatrix.mat4.translate(matrix, matrix, [-.5, -.5, -.5]);

function animate(){
    requestAnimationFrame(animate);
    glMatrix.mat4.rotateY(matrix, matrix, Math.PI/6);
    glMatrix.mat4.multiply(finalMatrix, projectionMatrix, matrix);
    gl.uniformMatrix4fv(uniformLocations.matrix, false, matrix);
    gl.drawArrays(gl.POINTS, 0, vertices.length / 3);
}
Sub-question: right now my code is a constant rotation. Any resource on where to look for handling the mouse events and building one or two rotation matrices from a mouse drag? (Not quite sure if I can encode the x and y rotation in the same matrix; see the sketch below.)
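Not from the original post, but a minimal sketch of drag-based rotation for the code above: yes, the x and y rotations can live in the same matrix, they are simply applied in sequence while the mouse is dragged.
let dragging = false, lastX = 0, lastY = 0;

canvas.addEventListener('mousedown', (e) => {
    dragging = true;
    lastX = e.clientX;
    lastY = e.clientY;
});
window.addEventListener('mouseup', () => { dragging = false; });
window.addEventListener('mousemove', (e) => {
    if (!dragging) return;
    const dx = e.clientX - lastX;
    const dy = e.clientY - lastY;
    lastX = e.clientX;
    lastY = e.clientY;
    // yaw from horizontal drag, pitch from vertical drag, both accumulated in `matrix`
    glMatrix.mat4.rotateY(matrix, matrix, dx * 0.01);
    glMatrix.mat4.rotateX(matrix, matrix, dy * 0.01);
});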
Sub-question 2: any idea how to get round points instead of square ones? Do I have to create small spheres? (See the sketch below.)
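Not from the original post, but a common approach to both questions, sketched in the same style as the snippet above: divide gl_PointSize by the clip-space w so points shrink with distance (this only has an effect if the uploaded matrix actually contains the projection, i.e. finalMatrix rather than matrix), and use gl_PointCoord in the fragment shader to discard everything outside the inscribed circle so the points appear round. The 100.0 scale factor is an arbitrary assumption.
var vertCode =
    'attribute vec3 coordinates;' +
    'uniform mat4 rMatrix;' +
    'void main(void) {' +
    '  gl_Position = rMatrix * vec4(coordinates, 1.0);' +
    // perspective attenuation: nearer points (smaller w) are drawn larger
    '  gl_PointSize = 100.0 / gl_Position.w;' +
    '}';

var fragCode =
    'precision mediump float;' +
    'void main(void) {' +
    // gl_PointCoord runs 0..1 across the point sprite; keep only the inscribed circle
    '  vec2 d = gl_PointCoord - vec2(0.5);' +
    '  if (dot(d, d) > 0.25) discard;' +
    '  gl_FragColor = vec4(1.0);' +
    '}';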

How to prevent WebGL from clipping outside bounds when drawing a wavy circle?

I have a shader that draws a bunch of instanced circles, and it works great! It works by drawing a rectangle at every given location and then, in the fragment shader, discarding pixels outside the radius, which leaves a circle.
I'm trying to update the shader now to make it draw "wavy" circles. That is, having a sin curve trace the entire outer edge of the circle. But the issue I'm running into now is that this curve will clip outside the bounds of the rectangle, and as a result, edges will be cut off. I drew a (crude) picture of what I think is happening:
As you can see, making a circle by hollowing out a quad works fine in the easy case. But when you add waves to the circle, portions of it clip outside of the unit space, causing those portions to not be rendered, so the rendered circle gets cut off at those parts. Here is what it looks like in my application (notice it gets cut off on the top, bottom, right, and left edges):
Here is where I believe the clip is occurring:
Here are my current vertex and fragment shaders for drawing these wavy circles. Is there any way I can modify them to prevent this clipping from occurring? Or maybe there is some WebGL setting I could use to fix this?
Vertex Shader:
in vec2 a_unit;            // unit quad
in vec4 u_transform;       // x, y, r, alpha
uniform mat3 u_projection; // camera
out float v_tint;
out vec2 v_pos;

void main() {
    float r = u_transform.z;
    float x = u_transform.x - r;
    float y = u_transform.y - r;
    float w = r * 2.0;
    float h = r * 2.0;
    mat3 world = mat3(
        w, 0, 0,
        0, h, 0,
        x, y, 1
    );
    gl_Position = vec4(u_projection * world * vec3(a_unit, 1), 1);
    v_tint = u_transform.w;
    v_pos = a_unit;
}
Fragment Shader:
in vec2 v_pos;
in float v_tint;
uniform vec4 u_color;
uniform mat3 u_projection;
uniform float u_time;
out vec4 outputColor;

void main() {
    vec2 cxy = 2.0 * v_pos - 1.0;                         // convert to clip space
    float r = cxy.x * cxy.x + cxy.y * cxy.y;
    float theta = 3.1415926 - atan(cxy.y, cxy.x) * 10.0;  // current angle
    r += 0.3 * sin(theta);                                // add waves
    float delta = fwidth(r);                              // anti-aliasing
    float alpha = 1.0 - smoothstep(1.0 - delta, 1.0 + delta, r);
    outputColor = u_color * alpha * vec4(1, 1, 1, v_tint);
}

How to draw a circle instead of an ellipse when your monitor screen resolution isn't square?

I'm working with WebGL and I'm trying to clip away what I'm drawing to draw a circle, but currently it's drawing an ellipse instead. Here is my fragment shader:
void main() {
    vec4 c = vec4((u_matrix * vec3(u_center, 1)).xy, 0, 1); // center
    float r = .25;                                          // radius
    bool withinRadius = pow(v_pos.x - c.x, 2.) + pow(v_pos.y - c.y, 2.) < r * r;
    if (!withinRadius) { discard; }
    gl_FragColor = vec4(1, 1, 1, 1);
}
I think the issue is that because my screen size is 1920x1200, the horizontal clip space that goes from -1.0 to +1.0 is wider than the vertical clip space that goes from -1.0 to +1.0. I think the solution might involve somehow normalizing the clip-space such that it is square, but I'm not exactly sure how to do that or what the recommended way to handle that is. How do you normally handle this scenario?
You have to scale either the x or the y component of the vector from the center of the circle to the fragment. Add a uniform variable or constant to the fragment shader which holds the aspect ratio (aspect = width/height) or the resolution of the canvas, and scale the x component of the vector by aspect:
uniform vec2 u_resolution;

void main()
{
    float aspect = u_resolution.x / u_resolution.y;
    vec4 c = vec4((u_matrix * vec3(u_center, 1)).xy, 0, 1); // center
    float r = .25;                                          // radius
    vec2 dist_vec = (v_pos.xy - c.xy) * vec2(aspect, 1.0);
    if (dot(dist_vec, dist_vec) > r*r)
        discard;
    gl_FragColor = vec4(1, 1, 1, 1);
}
Note, I've used the Dot product to compute the square of the Euclidean distance:
dot(va, vb) == va.x*vb.x + va.y*vb.y
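For completeness (not part of the original answer; program is assumed to be the linked shader program), the u_resolution uniform would be kept in sync with the drawing buffer from JavaScript, for example on startup and on every resize:
function syncResolution(gl, program) {
    const loc = gl.getUniformLocation(program, 'u_resolution');
    gl.useProgram(program);
    gl.uniform2f(loc, gl.drawingBufferWidth, gl.drawingBufferHeight);
}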
fragCoord - the built-in pixel coordinate on the screen (gl_FragCoord, https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/gl_FragCoord.xhtml)
iResolution - the screen resolution, which you have to pass in yourself
center - 0.5
vec2 uv = fragCoord / iResolution.xy;
// aspect-ratio fix: scale x around the center
uv.x -= center;
uv.x *= iResolution.x / iResolution.y;
uv.x += center;
float color = length(uv - vec2(0.5));
color = smoothstep(0.46, 0.47, color);

Post Effects and Transparent background in three.js

I'm trying to use a transparent background with some post effects like the Unreal Bloom, SMAA and tone mapping provided in the examples, but they seem to break the transparency of my render.
renderer = new THREE.WebGLRenderer({ canvas, alpha: true });
renderer.setClearColor(0xFF0000, 0);
composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
// Bloom pass
canvasSize = new THREE.Vector2(canvas.width, canvas.height);
pass = new UnrealBloomPass(canvasSize, strength, radius, threshhold);
composer.addPass(pass);
// SMAA pass
size = canvasSize.multiplyScalar(this.renderer.getPixelRatio());
pass = new SMAAPass(size.x, size.y);
pass.renderToScreen = true
composer.addPass(pass);
// Tonemapping
renderer.toneMappingExposure = exposure;
renderer.toneMappingWhitePoint = whitePoint;
renderer.toneMapping = type;
composer.render();
If I deactivate the bloom pass I get a correct transparent background, but when it is activated I get a black background. I looked at the sources and it seems that the alpha texture channel should be handled correctly, since the format is set to THREE.RGBAFormat.
Edit: After some research, I found where this comes from: getSeperableBlurMaterial in js\postprocessing\UnrealBloomPass.js.
The fragment's alpha channel is always set to 1.0 which results in a complete removal of the previous alpha values when doing the additive blending at the end.
The cool thing would be to find a proper way to apply the alpha inside the Gaussian blur. Any idea how?
I found a solution; it can be sorted like this:
https://github.com/mrdoob/three.js/issues/14104
void main()
{
    vec2 invSize = 1.0 / texSize;
    float fSigma = float(SIGMA);
    float weightSum = gaussianPdf(0.0, fSigma);
    float alphaSum = 0.0;
    vec3 diffuseSum = texture2D(colorTexture, vUv).rgb * weightSum;
    for( int i = 1; i < KERNEL_RADIUS; i ++ )
    {
        float x = float(i);
        float weight = gaussianPdf(x, fSigma);
        vec2 uvOffset = direction * invSize * x;
        vec4 sample1 = texture2D( colorTexture, vUv + uvOffset);
        float weightAlpha = sample1.a * weight;
        diffuseSum += sample1.rgb * weightAlpha;
        alphaSum += weightAlpha;
        weightSum += weight;
        vec4 sample2 = texture2D( colorTexture, vUv - uvOffset);
        weightAlpha = sample2.a * weight;
        diffuseSum += sample2.rgb * weightAlpha;
        alphaSum += weightAlpha;
        weightSum += weight;
    }
    alphaSum /= weightSum;
    diffuseSum /= alphaSum; // Should apply discard here if alphaSum is 0
    gl_FragColor = vec4(diffuseSum.rgb, alphaSum);
}
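As the inline comment above suggests, the final division needs a guard for the case where alphaSum ends up at zero; a minimal sketch of that last step (not from the linked issue):
alphaSum /= weightSum;
if (alphaSum <= 0.0) {
    discard; // nothing visible contributed to this texel
}
gl_FragColor = vec4(diffuseSum / alphaSum, alphaSum);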

Three.js Verlet Cloth Simulation on GPU: Can't follow my logic for finding bug

I have a problem understanding the logic I am trying to implement with Three.js and the GPUComputationRenderer by yomboprime.
(https://github.com/yomboprime/GPGPU-threejs-demos/blob/gh-pages/js/GPUComputationRenderer.js)
I want to make a simple Verlet cloth simulation. Here is the logic I was already able to implement (short version):
1) Position-Fragment-Shader: This shader takes the old and current position texture and computes the new position like this:
vec3 position = texture2D( texturePosition, uv ).xyz;
vec3 oldPosition = texture2D( textureOldPosition, uv ).xyz;
position = (position * 2.0 - oldPosition + acceleration * delta * delta);
vec3 t = checkConstraints(position);
position += t;
gl_FragColor = vec4(position, 1);
2) Old-Position-Shader: This shader just takes the current position and saves it for the next step.
vec3 position = texture2D( texturePosition, uv ).xyz;
gl_FragColor = vec4(position,1);
This works fine, but with that pattern it's not possible to apply the constraints more than once per step, because each vertex is processed separately and cannot see the position changes that the other pixels made in the first iteration.
What I am trying to do is separate the constraint pass from the Verlet integration. At the moment it looks something like this:
1) Position-Shader (texturePosition)
vec3 position = texture2D( textureConstraints, uv ).xyz;
vec3 oldPosition = texture2D( textureOldPosition, uv ).xyz;
position = (position * 2.0 - oldPosition + acceleration * delta * delta);
gl_FragColor = vec4(position, 1);
2) Constraint-Shader (textureConstraints)
vec3 position = texture2D( texturePosition, uv ).xyz;
vec3 t = checkConstraints(position);
position += t;
gl_FragColor = vec4(position, 1);
3) Old-Position-Shader (textureOldPosition)
vec3 position = texture2D( textureConstraints, uv ).xyz;
gl_FragColor = vec4(position,1);
This logic is not working, even if I don't calculate constraints at all and just pass the values through unchanged. As soon as some acceleration is added in the Position-Shader, the position values explode off to nowhere.
What am I doing wrong?
This example is not Verlet cloth, but I think the basic premise may help you. I have a fiddle that uses the GPUComputationRenderer to accomplish some spring physics on a point cloud. I think you could adapt it to your needs.
What you need is more information. You'll need fixed references to the cloth's original shape (as if it were a flat board) as well as the force currently being exerted on any of those points (by gravity + wind + structural integrity or whatever else), which then gives you that point's current position. Those point references to its original shape in combination with the forces are what will give your cloth a memory instead of flinging apart as it has been.
Here, for example, is my spring physics shader which the GPUComputationRenderer uses to compute the point positions in my visualization. The tOffsets in this case are the coordinates that give the cloud a permanent memory of its original shape - they never change. It is a DataTexture I add to the uniforms at the beginning of the program. Various values like the mass, springConstant, gravity, and damping also remain constant and live in the shader. tPositions are the vec4 coords that change (two of the coords record current position, the other two record current velocity):
<script id="position_fragment_shader" type="x-shader/x-fragment">
    // This shader handles only the math to move the various points. Adding the sprites and point opacity comes in the following shader.
    uniform sampler2D tOffsets;
    uniform float uTime;
    varying vec2 vUv;

    float hash(float n) { return fract(sin(n) * 1e4); }

    float noise(float x) {
        float i = floor(x);
        float f = fract(x);
        float u = f * f * (3.0 - 2.0 * f);
        return mix(hash(i), hash(i + 1.0), u);
    }

    void main() {
        vec2 uv = gl_FragCoord.xy / resolution.xy;
        float damping = 0.98;
        vec4 nowPos = texture2D( tPositions, uv ).xyzw;
        vec4 offsets = texture2D( tOffsets, uv ).xyzw;
        vec2 velocity = vec2(nowPos.z, nowPos.w);

        float anchorHeight = 100.0;
        float yAnchor = anchorHeight;
        vec2 anchor = vec2( -(uTime * 50.0) + offsets.x, yAnchor + (noise(uTime) * 30.0) );

        // Newton's law: F = M * A
        float mass = 24.0;
        vec2 acceleration = vec2(0.0, 0.0);

        // 1. apply gravity's force:
        vec2 gravity = vec2(0.0, 2.0);
        gravity /= mass;
        acceleration += gravity;

        // 2. apply the spring force
        float restLength = yAnchor - offsets.y;
        float springConstant = 0.2;
        // Vector pointing from anchor to point position
        vec2 springForce = vec2(nowPos.x - anchor.x, nowPos.y - anchor.y);
        // length of the vector
        float distance = length( springForce );
        // stretch is the difference between the current distance and restLength
        float stretch = distance - restLength;
        // Calculate springForce according to Hooke's Law
        springForce = normalize(springForce);
        springForce *= (springConstant * stretch);
        springForce /= mass;
        acceleration += springForce;

        velocity += acceleration;
        velocity *= damping;
        vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);

        // Write new position out
        gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
    }
</script>
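For context (not from the original answer; WIDTH, offsetsTexture and pointsMaterial are assumptions), wiring a shader like the one above into the GPUComputationRenderer typically looks like this, with tPositions registered as the computed variable and tOffsets added as an extra uniform:
const gpuCompute = new GPUComputationRenderer(WIDTH, WIDTH, renderer);

const positionTexture = gpuCompute.createTexture(); // fill with initial x, y, vx, vy per point
const positionVariable = gpuCompute.addVariable(
    'tPositions',
    document.getElementById('position_fragment_shader').textContent,
    positionTexture
);
gpuCompute.setVariableDependencies(positionVariable, [positionVariable]);

positionVariable.material.uniforms.tOffsets = { value: offsetsTexture };
positionVariable.material.uniforms.uTime = { value: 0.0 };

const error = gpuCompute.init();
if (error !== null) console.error(error);

// per frame:
positionVariable.material.uniforms.uTime.value = performance.now() / 1000;
gpuCompute.compute();
pointsMaterial.uniforms.tPositions.value =
    gpuCompute.getCurrentRenderTarget(positionVariable).texture;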
