Post Effects and Transparent background in three.js

I'm trying to use a transparent background together with post effects like the Unreal Bloom, SMAA and tone mapping passes provided in the examples, but they seem to break the transparency of my render.
renderer = new THREE.WebGLRenderer({ canvas, alpha: true });
renderer.setClearColor(0xFF0000, 0);
composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
// Bloom pass
canvasSize = new THREE.Vector2(canvas.width, canvas.height);
pass = new UnrealBloomPass(canvasSize, strength, radius, threshold);
composer.addPass(pass);
// SMAA pass
size = canvasSize.multiplyScalar(renderer.getPixelRatio());
pass = new SMAAPass(size.x, size.y);
pass.renderToScreen = true;
composer.addPass(pass);
// Tonemapping
renderer.toneMappingExposure = exposure;
renderer.toneMappingWhitePoint = whitePoint;
renderer.toneMapping = type;
composer.render();
If I deactivate the bloom pass I get a correct transparent background, but when it is activated I get a black background instead. I looked at the sources and it seems the pass should handle the alpha channel correctly, since its render targets are set to THREE.RGBAFormat.
Edit: After some research, I found where this comes from: getSeperableBlurMaterial in js\postprocessing\UnrealBloomPass.js.
The fragment's alpha channel is always set to 1.0 which results in a complete removal of the previous alpha values when doing the additive blending at the end.
The clean fix would be to find a proper way to apply the alpha inside the Gaussian blur. Any idea how?

I found a solution; it can be sorted out as described in this issue:
https://github.com/mrdoob/three.js/issues/14104
void main()
{
    vec2 invSize = 1.0 / texSize;
    float fSigma = float(SIGMA);
    float weightSum = gaussianPdf(0.0, fSigma);
    float alphaSum = 0.0;
    vec3 diffuseSum = texture2D(colorTexture, vUv).rgb * weightSum;

    for (int i = 1; i < KERNEL_RADIUS; i++)
    {
        float x = float(i);
        float weight = gaussianPdf(x, fSigma);
        vec2 uvOffset = direction * invSize * x;

        vec4 sample1 = texture2D( colorTexture, vUv + uvOffset );
        float weightAlpha = sample1.a * weight;
        diffuseSum += sample1.rgb * weightAlpha;
        alphaSum += weightAlpha;
        weightSum += weight;

        vec4 sample2 = texture2D( colorTexture, vUv - uvOffset );
        weightAlpha = sample2.a * weight;
        diffuseSum += sample2.rgb * weightAlpha;
        alphaSum += weightAlpha;
        weightSum += weight;
    }

    alphaSum /= weightSum;
    diffuseSum /= alphaSum; // Should apply discard here if alphaSum is 0
    gl_FragColor = vec4(diffuseSum.rgb, alphaSum);
}
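If you'd rather not edit UnrealBloomPass.js directly, one way to apply the patch is to override the pass's blur-material factory before the pass is constructed. Treat the sketch below as an untested outline: it assumes a build where getSeperableBlurMaterial lives on UnrealBloomPass.prototype and receives the kernel radius, and that the fixed fragment shader above (with its uniform declarations and the gaussianPdf helper prepended) is stored in a variable I'm calling alphaAwareBlurFragment.

// Hedged sketch: swap in the alpha-aware blur shader by overriding the material factory.
// alphaAwareBlurFragment is a hypothetical string holding the fixed fragment shader above.
UnrealBloomPass.prototype.getSeperableBlurMaterial = function (kernelRadius) {
    return new THREE.ShaderMaterial({
        defines: {
            KERNEL_RADIUS: kernelRadius,
            SIGMA: kernelRadius
        },
        uniforms: {
            colorTexture: { value: null },
            texSize: { value: new THREE.Vector2(0.5, 0.5) },
            direction: { value: new THREE.Vector2(0.5, 0.5) }
        },
        vertexShader: [
            'varying vec2 vUv;',
            'void main() {',
            '    vUv = uv;',
            '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
            '}'
        ].join('\n'),
        fragmentShader: alphaAwareBlurFragment
    });
};

// Construct the pass afterwards so it picks up the patched material:
pass = new UnrealBloomPass(canvasSize, strength, radius, threshold);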

Related

Addressing Texel in ThreeJS DataTexture

I'm looking to compute texel references for a THREE.DataTexture in Javascript for use in a fragment shader. I've succeeded in computing screen space coordinates of points and passing them to a shader in a uniform float array of x and y values, and then referencing those points by indices in my shader. I now want to render too many points to pass the coordinates in a uniform float array so I'd like to use a DataTexture and write the coordinates in the RG values of RGBA texels.
Referencing this question I am using the following method:
var tDataWidth = points.length;
var tData = new Uint8Array( Math.pow(tDataWidth, 2) );
var texelSize = 1.0 / tDataWidth;
var texelOffset = new THREE.Vector2(0.5 * texelSize, 0.5 * texelSize);

for (var i = 0; i < points.length; i++) {
    // convert data to 0-1, then to 0-255
    // inverse is to divide by 255 then multiply by width, height respectively
    tData[i * 4] = Math.round(255 * (points[i].x / window.innerWidth));
    tData[i * 4 + 1] = Math.round(255 * ((window.innerHeight - points[i].y) / window.innerHeight));
    tData[i * 4 + 2] = 0;
    tData[i * 4 + 3] = 0;

    // calculate UV texel coordinates here
    // Correct after edit
    var u = ((i % tDataWidth) / tDataWidth) + texelOffset;
    var v = (Math.floor(i / tDataWidth) + texelOffset);
    var vUV = new THREE.Vector2(u, v);

    // this function inserts the reference to the texel at the index into the shader
    // referenced in the frag shader:
    //   cvec = texture2D(tData, index);
    shaderInsert += ShaderInsert(vUV, screenPos.x, window.innerHeight - screenPos.y);
}

var dTexture = new THREE.DataTexture( sdfUItData, tDataWidth, tDataWidth, THREE.RGBAFormat, THREE.UnsignedByteType );
// I think this is necessary
dTexture.magFilter = THREE.NearestFilter;
dTexture.needsUpdate = true;

// update uniforms of shader to get this DataTexture
renderer.getUniforms("circles")["tData"].value = dTexture;

// return string insert of circle
// I'm editing the shader through javascript then recompiling it
// There's more to it in the calling function, but this is the relevant part I think
...
ShaderInsert(index) {
    var circle = "\n\tvIndex = vec2(" + String(index.x) + ", " + String(index.y) + ");\n";
    circle += "\tcvec = texture2D(tData, vIndex);\n";
    circle += "\tcpos = vec2( (cvec.r / 255.0) * resolution.x, (cvec.y / 255.0) * resolution.y);\n";
    circle += "\tc = circleDist(translate(p, cpos), 7.0);\n";
    circle += "\tm = merge(m, c);";
    return(circle);
}
Any help on where I'm going wrong? Right now output is all in the lower left corner, so (0, window.innerHeight) as far as I can tell. Thanks!
So the answer is actually straightforward: in the fragment shader, rgba values are already in the 0.0 - 1.0 range, so there's no need to divide by 255 as I was doing.
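For illustration, the corrected line produced by my ShaderInsert function would read roughly like this:

// cvec components are already 0.0 - 1.0 in the shader, so scale straight up to the resolution
circle += "\tcpos = vec2( cvec.r * resolution.x, cvec.g * resolution.y);\n";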
I'd also like to mention that I discovered the Spector.js Chrome extension, which lets you inspect all WebGL calls and buffers. Pretty cool!
If anyone wants to learn more about how the drawing functions work in the fragment shader, it's all in this awesome shader which I did not write:
https://www.shadertoy.com/view/4dfXDn
<3

Three.js Verlet Cloth Simulation on GPU: Can't follow my logic for finding bug

I have a problem with the logic I am trying to implement using Three.js and the GPUComputationRenderer by yomboprime.
(https://github.com/yomboprime/GPGPU-threejs-demos/blob/gh-pages/js/GPUComputationRenderer.js)
I want to make a simple Verlet cloth simulation. Here is the logic I was already able to implement (short version):
1) Position-Fragment-Shader: This shader takes the old and current position texture and computes the new position like this:
vec3 position = texture2D( texturePosition, uv ).xyz;
vec3 oldPosition = texture2D( textureOldPosition, uv ).xyz;
position = (position * 2.0 - oldPosition + acceleration * delta * delta);
t = checkConstraints(position);
position += t;
gl_FragColor = vec4(position,1);
2) Old-Position-Shader: This shader just copies the current position and keeps it for the next step.
vec3 position = texture2D( texturePosition, uv ).xyz;
gl_FragColor = vec4(position,1);
This works fine, but with that pattern it's not possible to calculate the constraints more than once per step, because each vertex is processed separately and cannot see the position changes the other pixels made in the first iteration.
What I am trying to do is separate the constraints from the Verlet integration. At the moment it looks something like this:
1) Position-Shader (texturePosition)
vec3 position = texture2D( textureConstraints, uv ).xyz;
vec3 oldPosition = texture2D( textureOldPosition, uv ).xyz;
position = (position * 2.0 - oldPosition + acceleration * delta * delta);
gl_FragColor = vec4(position, 1 );
2) Constraint-Shader (textureConstraints)
vec3 position = texture2D( texturePosition, uv ).xyz;
t = checkConstraints(position);
position += t;
gl_FragColor = vec4(position,1);
3) Old-Position-Shader (textureOldPosition)
vec3 position = texture2D( textureConstraints, uv ).xyz;
gl_FragColor = vec4(position,1);
This logic is not working, even if I don't calculate constraints at all and just pass the values through unchanged. As soon as some acceleration is added in the Position-Shader, the position values explode into nowhere.
What am I doing wrong?
This example is not Verlet cloth, but I think the basic premise may help you. I have a fiddle that uses the GPUComputationRenderer to accomplish some spring physics on a point cloud. I think you could adapt it to your needs.
What you need is more information. You'll need fixed references to the cloth's original shape (as if it were a flat board), as well as the force currently being exerted on each of those points (by gravity + wind + structural integrity or whatever else), which then gives you that point's current position. Those references to the original shape, combined with the forces, are what give your cloth a memory instead of letting it fling apart as it has been.
Here, for example, is my spring physics shader which the GPUComputationRenderer uses to compute the point positions in my visualization. The tOffsets in this case are the coordinates that give the cloud a permanent memory of its original shape - they never change. It is a DataTexture I add to the uniforms at the beginning of the program. Values like the mass, springConstant, gravity, and damping also remain constant and live in the shader. tPositions are the vec4 coords that change (two of the coords record the current position, the other two record the current velocity):
<script id="position_fragment_shader" type="x-shader/x-fragment">
    // This shader handles only the math to move the various points.
    // Adding the sprites and point opacity comes in the following shader.
    uniform sampler2D tOffsets;
    uniform float uTime;
    varying vec2 vUv;

    float hash(float n) { return fract(sin(n) * 1e4); }

    float noise(float x) {
        float i = floor(x);
        float f = fract(x);
        float u = f * f * (3.0 - 2.0 * f);
        return mix(hash(i), hash(i + 1.0), u);
    }

    void main() {
        vec2 uv = gl_FragCoord.xy / resolution.xy;
        float damping = 0.98;
        vec4 nowPos = texture2D( tPositions, uv ).xyzw;
        vec4 offsets = texture2D( tOffsets, uv ).xyzw;
        vec2 velocity = vec2(nowPos.z, nowPos.w);

        float anchorHeight = 100.0;
        float yAnchor = anchorHeight;
        vec2 anchor = vec2( -(uTime * 50.0) + offsets.x, yAnchor + (noise(uTime) * 30.0) );

        // Newton's law: F = M * A
        float mass = 24.0;
        vec2 acceleration = vec2(0.0, 0.0);

        // 1. apply gravity's force:
        vec2 gravity = vec2(0.0, 2.0);
        gravity /= mass;
        acceleration += gravity;

        // 2. apply the spring force
        float restLength = yAnchor - offsets.y;
        float springConstant = 0.2;

        // Vector pointing from anchor to point position
        vec2 springForce = vec2(nowPos.x - anchor.x, nowPos.y - anchor.y);
        // length of the vector
        float distance = length( springForce );
        // stretch is the difference between the current distance and restLength
        float stretch = distance - restLength;

        // Calculate springForce according to Hooke's Law
        springForce = normalize(springForce);
        springForce *= (springConstant * stretch);
        springForce /= mass;
        acceleration += springForce;

        velocity += acceleration;
        velocity *= damping;

        vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);
        // Write new position out
        gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
    }
</script>
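In case it helps, here is a rough sketch of how a shader like that gets wired up with yomboprime's GPUComputationRenderer. Treat it as a sketch only: WIDTH, offsetsTexture and renderUniforms are placeholders for your own names, not code from above.

// Hedged sketch of the JavaScript side, using yomboprime's GPUComputationRenderer.
var gpuCompute = new GPUComputationRenderer(WIDTH, WIDTH, renderer); // WIDTH = side length of the data texture

// Seed the position texture (fill dtPositions.image.data with initial x, y, vx, vy values).
var dtPositions = gpuCompute.createTexture();

// Register the variable that the fragment shader above computes.
var positionVariable = gpuCompute.addVariable(
    'tPositions',
    document.getElementById('position_fragment_shader').textContent,
    dtPositions
);
gpuCompute.setVariableDependencies(positionVariable, [positionVariable]);

// Extra, never-changing inputs live on the variable's material uniforms.
positionVariable.material.uniforms.tOffsets = { value: offsetsTexture }; // offsetsTexture: the DataTexture built once
positionVariable.material.uniforms.uTime = { value: 0.0 };

var error = gpuCompute.init();
if (error !== null) console.error(error);

// Per frame: advance the simulation and feed the result to whatever renders the points.
function animate(time) {
    positionVariable.material.uniforms.uTime.value = time * 0.001;
    gpuCompute.compute();
    renderUniforms.tPositions.value = gpuCompute.getCurrentRenderTarget(positionVariable).texture;
    requestAnimationFrame(animate);
}
requestAnimationFrame(animate);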

Chroma key Fragment Shader fails to find the color

I'm trying to write a fragment shader that functions as a chroma-key filter for a specific color (for example, making all pixels with a specific green transparent).
The shader I'm writing is for use in WebGL through PIXI.js.
JSFiddle: https://jsfiddle.net/IbeVanmeenen/hexec6eg/14/
So far I've written the following code for the shader, based on one I found here.
varying vec2 vTextureCoord;
uniform float thresholdSensitivity;
uniform float smoothing;
uniform vec3 colorToReplace;
uniform sampler2D uSampler;

void main() {
    vec4 textureColor = texture2D(uSampler, vTextureCoord);
    float maskY = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;
    float maskCr = 0.7132 * (colorToReplace.r - maskY);
    float maskCb = 0.5647 * (colorToReplace.b - maskY);
    float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
    float Cr = 0.7132 * (textureColor.r - Y);
    float Cb = 0.5647 * (textureColor.b - Y);
    float blendValue = smoothstep(thresholdSensitivity, thresholdSensitivity + smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
    gl_FragColor = vec4(textureColor.rgb, textureColor.a * blendValue);
}
Now, when I define and test this, nothing happens.
The problem lies with the shader, because the other filters I tried work.
The color I use for the test is rgb(85, 249, 44).
The Full code for the shader with PIXI is below:
function ChromaFilter() {
    const vertexShader = null;
    const fragmentShader = [
        "varying vec2 vTextureCoord;",
        "uniform float thresholdSensitivity;",
        "uniform float smoothing;",
        "uniform vec3 colorToReplace;",
        "uniform sampler2D uSampler;",
        "void main() {",
        "vec4 textureColor = texture2D(uSampler, vTextureCoord);",
        "float maskY = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;",
        "float maskCr = 0.7132 * (colorToReplace.r - maskY);",
        "float maskCb = 0.5647 * (colorToReplace.b - maskY);",
        "float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;",
        "float Cr = 0.7132 * (textureColor.r - Y);",
        "float Cb = 0.5647 * (textureColor.b - Y);",
        "float blendValue = smoothstep(thresholdSensitivity, thresholdSensitivity + smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));",
        "gl_FragColor = vec4(textureColor.rgb, textureColor.a * blendValue);",
        "}"
    ].join('\n');

    let uniforms = {};

    PIXI.Filter.call(this,
        vertexShader,
        fragmentShader,
        uniforms
    );

    this.uniforms.thresholdSensitivity = 0.4;
    this.uniforms.smoothing = 0.1;
    this.uniforms.colorToReplace = [0.33, 0.97, 0.17];
    this.glShaderKey = 'chromakey';
}
ChromaFilter.prototype = Object.create(PIXI.Filter.prototype);
ChromaFilter.prototype.constructor = ChromaFilter;
This is applied to the video-sprite like this:
videoBase = new PIXI.VideoBaseTexture(videoLoaderVid);
videoBase.on('loaded', () => {
    video = videoBase.source;
    video.volume = 0;
    video.pause();
    video.currentTime = 0;

    videoTexture = new PIXI.Texture(videoBase);
    videoSprite = new PIXI.Sprite(videoTexture);

    const filter = new ChromaFilter();
    videoSprite.filters = [filter];
    resolve();
});
And PIXI is set up like this:
stage = new PIXI.Container();
renderer = PIXI.autoDetectRenderer(720, 720, {
    preserveDrawingBuffer: true,
    clearBeforeRender: true
});
canvasContainer.appendChild(renderer.view);
The video sprite sits in its own DisplayObjectContainer and is displayed above another DisplayObjectContainer (hence the need for a chroma filter).
UPDATE:
The fixed shader can be found here:
https://gist.github.com/IbeVanmeenen/d4f5225ad7d2fa54fabcc38d740ba30e
And a fixed demo can be found here:
https://jsfiddle.net/IbeVanmeenen/hexec6eg/17/
The shader is fine; the problem is that the uniforms (colorToReplace, thresholdSensitivity and smoothing) aren't being passed, so they're all zero. By blind luck I found that the fix is to remove the third parameter you're passing to the PIXI.Filter constructor:
/* ... */
PIXI.Filter.call(this, vertexShader, fragmentShader) // no uniforms param here
/* ... */
PS: You haven't answered in chat, so I'm posting my findings here.

initiate a number of vertices/triangles for vertex shader to use

I've been playing around with vertexshaderart.com and I'd like to use what I learned on a separate website. While I have used shaders before, some effects achieved on the site depend on having access to vertices/lines/triangles. Passing vertices is easy enough (at least it was with THREE.js, though it is kind of overkill for simple shaders, and in some cases I need shader materials too), but creating triangles seems a bit more complex.
I can't figure out from the source how exactly the triangles are created there when you switch the mode here.
I'd like to replicate that behavior but I honestly have no idea how to approach it. I could just create a number of triangles through THREE.js, but with so many individual objects performance takes a hit rapidly. Are the triangles created here separate entities, or are they part of one geometry?
vertexshaderart.com is more of a puzzle, toy, art box, creative coding experiment than an example of good WebGL practice. The same is true of shadertoy.com. An example like this is beautiful, but it runs at 20fps in its tiny window and about 1fps fullscreen on my 2014 MacBook Pro, and yet my MBP can play beautiful games with huge worlds rendered fullscreen at 60fps. In other words, the techniques are more for art/fun/play/mental exercise and for the fun of trying to make things happen within extreme limits than to actually be good techniques.
The point I'm trying to make is both vertexshaderart and shadertoy are fun but impractical.
The way vertexshaderart works is that it provides a counter, vertexId, that counts vertices from 0 to N, where N is the count set at the top of the UI. For each count you output a gl_Position and a v_color (color).
So, if you want to draw something, you need to provide the math to generate vertex positions based on that count. For example, let's do it using Canvas 2D first.
Here's a fake vertex shader written in JavaScript that, given nothing but vertexId, will draw a grid 1 unit high and N units long, where N = the number of vertices (vertexCount) / 6.
function ourPseudoVertexShader(vertexId, time) {
    // let's compute an infinite grid of points based off vertexId
    var x = Math.floor(vertexId / 6) + (vertexId % 2);
    var y = (Math.floor(vertexId / 2) + Math.floor(vertexId / 3)) % 2;
    // color every other triangle red or green
    var triangleId = Math.floor(vertexId / 3);
    var color = triangleId % 2 ? "#F00" : "#0F0";
    return {
        x: x * 0.2,
        y: y * 0.2,
        color: color,
    };
}
We call it from a loop, supplying vertexId:
for (var count = 0; count < vertexCount; count += 3) {
    // get 3 points
    var position0 = ourPseudoVertexShader(count + 0, time);
    var position1 = ourPseudoVertexShader(count + 1, time);
    var position2 = ourPseudoVertexShader(count + 2, time);
    // draw triangle
    ctx.beginPath();
    ctx.moveTo(position0.x, position0.y);
    ctx.lineTo(position1.x, position1.y);
    ctx.lineTo(position2.x, position2.y);
    ctx.fillStyle = position0.color;
    ctx.fill();
}
If you run it here you'll see a grid 1 unit high and N units long. I've set the canvas origin so 0,0 is in the center, just like WebGL, and the canvas is addressed +1 to -1 across and +1 to -1 down.
var vertexCount = 100;

function ourPseudoVertexShader(vertexId, time) {
    // let's compute an infinite grid of points based off vertexId
    var x = Math.floor(vertexId / 6) + (vertexId % 2);
    var y = (Math.floor(vertexId / 2) + Math.floor(vertexId / 3)) % 2;
    // color every other triangle red or green
    var triangleId = Math.floor(vertexId / 3);
    var color = triangleId % 2 ? "#F00" : "#0F0";
    return {
        x: x * 0.2,
        y: y * 0.2,
        color: color,
    };
}

var ctx = document.querySelector("canvas").getContext("2d");
requestAnimationFrame(render);

function render(time) {
    time *= 0.001;
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.save();
    ctx.translate(ctx.canvas.width / 2, ctx.canvas.height / 2);
    ctx.scale(ctx.canvas.width / 2, -ctx.canvas.height / 2);
    // lets assume triangles
    for (var count = 0; count < vertexCount; count += 3) {
        // get 3 points
        var position0 = ourPseudoVertexShader(count + 0, time);
        var position1 = ourPseudoVertexShader(count + 1, time);
        var position2 = ourPseudoVertexShader(count + 2, time);
        // draw triangle
        ctx.beginPath();
        ctx.moveTo(position0.x, position0.y);
        ctx.lineTo(position1.x, position1.y);
        ctx.lineTo(position2.x, position2.y);
        ctx.fillStyle = position0.color;
        ctx.fill();
    }
    ctx.restore();
    requestAnimationFrame(render);
}

canvas { border: 1px solid black; }

<canvas width="500" height="200"></canvas>
Doing the same thing in WebGL means making a buffer with the count:
var count = [];
for (var i = 0; i < vertexCount; ++i) {
    count.push(i);
}
Then putting that count in a buffer and using that as an attribute for a shader.
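In plain WebGL that step looks roughly like the sketch below (the twgl example further down wraps the same calls); program here stands in for your compiled and linked shader program.

// upload the count array and wire it up as the vertexId attribute
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(count), gl.STATIC_DRAW);

var loc = gl.getAttribLocation(program, "vertexId");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 1, gl.FLOAT, false, 0, 0);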
Here's the equivalent shader to the fake shader above
attribute float vertexId;
uniform float time;
varying vec4 v_color;

void main() {
    // let's compute an infinite grid of points based off vertexId
    float x = floor(vertexId / 6.) + mod(vertexId, 2.);
    float y = mod(floor(vertexId / 2.) + floor(vertexId / 3.), 2.);
    // color every other triangle red or green
    float triangleId = floor(vertexId / 3.);
    v_color = mix(vec4(0, 1, 0, 1), vec4(1, 0, 0, 1), mod(triangleId, 2.));
    gl_Position = vec4(x * 0.2, y * 0.2, 0, 1);
}
If we run that we'll get the same result
var vs = `
attribute float vertexId;
uniform float vertexCount;
uniform float time;
varying vec4 v_color;

void main() {
    // let's compute an infinite grid of points based off vertexId
    float x = floor(vertexId / 6.) + mod(vertexId, 2.);
    float y = mod(floor(vertexId / 2.) + floor(vertexId / 3.), 2.);
    // color every other triangle red or green
    float triangleId = floor(vertexId / 3.);
    v_color = mix(vec4(0, 1, 0, 1), vec4(1, 0, 0, 1), mod(triangleId, 2.));
    gl_Position = vec4(x * 0.2, y * 0.2, 0, 1);
}
`;

var fs = `
precision mediump float;
varying vec4 v_color;
void main() {
    gl_FragColor = v_color;
}
`;

var vertexCount = 100;
var gl = document.querySelector("canvas").getContext("webgl");

var count = [];
for (var i = 0; i < vertexCount; ++i) {
    count.push(i);
}

var bufferInfo = twgl.createBufferInfoFromArrays(gl, {
    vertexId: { numComponents: 1, data: count, },
});
var programInfo = twgl.createProgramInfo(gl, [vs, fs]);
var uniforms = {
    time: 0,
    vertexCount: vertexCount,
};

requestAnimationFrame(render);

function render(time) {
    uniforms.time = time * 0.001;

    gl.useProgram(programInfo.program);
    twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
    twgl.setUniforms(programInfo, uniforms);
    twgl.drawBufferInfo(gl, gl.TRIANGLES, bufferInfo);

    requestAnimationFrame(render);
}

canvas { border: 1px solid black; }

<script src="https://twgljs.org/dist/twgl.min.js"></script>
<canvas width="500" height="200"></canvas>
Everything else on vertexshaderart is just creative math to make interesting patterns. You can use time for animation; a texture with sound data is also provided.
There are some tutorials here.
So, in answer to your question, when you switch modes (triangles/lines/points) on vertexshaderart.com all that does is change what's passed to gl.drawArrays (gl.POINTS, gl.LINES, gl.TRIANGLES). The points themselves are generated in the vertex shader like the example above.
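In code the mode switch boils down to something like this (a sketch of the idea, not vertexshaderart's actual source):

// the UI only changes which primitive type gets passed to drawArrays;
// the vertex shader still computes every position from vertexId
var mode = gl.TRIANGLES; // or gl.LINES, or gl.POINTS
gl.drawArrays(mode, 0, vertexCount);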
So that leaves the question: what specific effect are you trying to achieve? Then we can know what to suggest to achieve it. You might want to ask a new question for that (so that this answer still matches the question above).

Converting a 3D scene to a 2D image using raytracing (WebGL, three.js)

As the title says, I would like to render a 3D scene onto a 2D plane with raytracing. Eventually I would like to use it for volume rendering, but I'm struggling with the basics here. I have a three.js scene with the viewing plane attached to the camera (in front of it, of course).
The Setup:
Then (in the shader) I'm shooting a ray from the camera through each point (250x250) in the plane. Behind the plane is a 41x41x41 volume (essentially a cube). If a ray goes through the cube, the point in the viewing plane the ray crossed will be rendered red; otherwise the point will be black. Unfortunately this only works if you look at the cube from the front. Here's the example: http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html
If you try to look at the cube from different angles (you can move the camera with the mouse), then we don't get a cube rendered onto the viewing plane as we would like, but a square with some weird pixels on the side.
Here is the raytracing code:
Vertex Shader:
bool inside(vec3 posVec) {
    bool value = false;
    if (posVec.x < 0.0 || posVec.x > 41.0) {
        value = false;
    }
    else if (posVec.y < 0.0 || posVec.y > 41.0) {
        value = false;
    }
    else if (posVec.z < 0.0 || posVec.z > 41.0) {
        value = false;
    }
    else {
        value = true;
    }
    return value;
}

float getDensity(vec3 PointPos) {
    float stepsize = 1.0;
    float emptyStep = 15.0;
    vec3 leap;
    bool hit = false;
    float density = 0.000;

    // Ray direction from the camera through the current point in the Plane
    vec3 dir = PointPos - camera;
    vec3 RayDirection = normalize(dir);
    vec3 start = PointPos;

    for (int i = 0; i < STEPS; i++) {
        vec3 alteredPosition = start;
        alteredPosition.x += 20.5;
        alteredPosition.y += 20.5;
        alteredPosition.z += 20.5;
        bool insideTest = inside(alteredPosition);
        if (insideTest) {
            // advance from the start position
            start = start + RayDirection * stepsize;
            hit = true;
        } else {
            leap = start + RayDirection * emptyStep;
            bool tooFar = inside(leap);
            if (tooFar) {
                start = start + RayDirection * stepsize;
            } else {
                start = leap;
            }
        }
    }

    if (hit) {
        density = 1.000;
    }
    return density;
}

void main() {
    PointIntensity = getDensity(position);
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
}
Fragment Shader:
varying float PointIntensity;

void main() {
    // Rays that have traversed the volume (cube) should leave a red point on the
    // viewplane; rays that just went through empty space leave a black point.
    gl_FragColor = vec4(PointIntensity, 0.0, 0.0, 1.0);
}
Full Code:
http://pastebin.com/4YmWL0u1
Same Code but Running:
http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html
I would be very glad if somebody had any tips on what I did wrong here.
EDIT:
I updated the example with the changes that Mark Lundin proposed but unfortunately I still only get a red square when moving the camera (no weird pixels on the side though):
mat4 uInvMVProjMatrix = modelViewMatrix * inverseProjectionMatrix;
inverseProjectionMatrix being the Three.js camera property projectionMatrixInverse passed to the shader as a uniform. Then the unproject function gets called for every point in the viewplane with its uv-coordinates.
The new code is here:
http://pastebin.com/Dxh5C9XX
and running here:
http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html
To verify that the camera actually moves, you can press x, y or z to get the current camera x, y, z coordinates.
The reason you're seeing a square, rather than a 3D volume, is that your raytracing method doesn't take into account the camera orientation or projection. As you move the camera with the trackball its orientation changes, so this should be included in your calculation. Secondly, the camera's projection matrix should also be used to project the coordinates of the plane into 3D space. You can achieve this with something like the following:
vec3 unproject( vec2 coord ) {
    vec4 screen = vec4( coord, 0, 1.0 );
    vec4 homogenous = uInvMVProjMatrix * 2.0 * ( screen - vec4( 0.5 ) );
    return homogenous.xyz / homogenous.w;
}
where coord is the 2D coordinate of your plane and uInvMVProjMatrix is the inverse of the model-view-projection matrix. This will return a vec3 that you can use to test for intersection.
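On the JavaScript side, one way to build that inverse model-view-projection matrix and upload it is sketched below. The uniform name matches the answer above; planeMesh is a placeholder for your viewing-plane mesh, and invert() is the recent three.js API (older builds use getInverse() instead).

// Hedged sketch: compute inverse(projection * view * model) each frame and pass it to the shader.
const invMVP = new THREE.Matrix4()
    .multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse) // projection * view
    .multiply(planeMesh.matrixWorld)                                      // ... * model
    .invert();
planeMesh.material.uniforms.uInvMVProjMatrix.value.copy(invMVP);          // uniform declared as { value: new THREE.Matrix4() }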
