Accumulation shader with Three.js - javascript

I'm trying to implement a path tracer using THREE.js. I'm basically rendering a fullscreen quad and the path tracing happens in the pixel shader.
I want a higher sampling rate and one way to do so is to sample one path for each pixel and accumulate the resulting images. (i.e. averaging out the images obtained at each shader pass)
As it is I am able to generate the images I need but I have no idea how to accumulate them. My guess would be that I have to use two render targets; one would contain the "latest" sampled image and one would contain the average of all the images displayed so far.
I just don't know how to get the data from a WebGLRenderTarget and use it to manipulate the data contained in another render target. Is this even possible with Three.js? I've been looking into framebuffer objects, which seem like a promising approach, and I am combing through MrDoob's FBO example (http://www.mrdoob.com/lab/javascript/webgl/particles/particles_zz85.html), but I'm not sure I'm headed down the right path.

I think the issue is that you can't read from and write to the same buffer. Say you render something one frame: you need a pass that outputs to the accumulation buffer. The next frame, you need to do your calculations and save to the same buffer, but if I understand correctly this is not possible in WebGL.
What you can do instead is have two buffers. In the shader where you do your calculations and write your output, just add another texture sampler, read the buffer from the previous frame, write to the other one, and then alternate every frame. You will always have your accumulated values; you can use whatever math you want for the accumulation, but you need to make sure that you are reading the right buffer on the right frame.
three.js has a plugin for post processing and this should be very handy for doing stuff like this.
var flipFlop = true;
var buffer1 = new THREE.WebGLRenderTarget(BUFFERSIZE, BUFFERSIZE, {minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat, type: THREE.FloatType});
var buffer2 = new THREE.WebGLRenderTarget(BUFFERSIZE, BUFFERSIZE, {minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat, type: THREE.FloatType});

function render() {
    // Feed the buffer written last frame into this frame's computation
    yourComputeShaderMaterial.uniforms._accumulationBuffer.value = flipFlop ? buffer2 : buffer1;

    if (flipFlop) // Frame 0
        renderer.render(scene, camera, buffer1);
    else          // Frame 1
        renderer.render(scene, camera, buffer2);

    // Use whatever was just rendered this frame
    yourEndShader.uniforms._accumulationBuffer.value = !flipFlop ? buffer2 : buffer1;

    // Switch frames
    flipFlop = !flipFlop;
}
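For the accumulation math itself, a running average works well: if the buffer already holds the average of N samples, blend the new sample in with weight 1/(N+1). Here is a minimal fragment-shader sketch for yourComputeShaderMaterial, assuming a _sampleCount uniform that you increment on the JS side and a hypothetical tracePath() routine standing in for your path-tracing code (both names are illustrative, not part of any three.js API):

uniform sampler2D _accumulationBuffer; // average of the previous N samples (the "other" buffer)
uniform float _sampleCount;            // N, incremented each frame in JS
varying vec2 vUv;

void main() {
    vec3 newSample = tracePath(vUv);   // placeholder for your path tracer
    vec3 previous  = texture2D(_accumulationBuffer, vUv).rgb;
    // running average: (previous * N + newSample) / (N + 1)
    vec3 average = mix(newSample, previous, _sampleCount / (_sampleCount + 1.0));
    gl_FragColor = vec4(average, 1.0);
}

Reset _sampleCount to zero whenever the camera moves so stale samples don't bleed into the new view.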

Related

Why we should call webgl.bindBuffer before putting some data to this buffer?

I'm trying to figure out how exactly buffers work in WebGL and I'm a little stuck here. Below are my guesses - please confirm or correct them.
const positions = new Float32Array([
-1, 1,
-0.5, 0,
-0.25, 0.25,
]);
let buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
1. We create an array of floats in RAM via JS.
2. WebGL creates an empty buffer directly on the GPU and returns a reference to it to JS. Now the variable buffer is a pointer.
3. We point gl.ARRAY_BUFFER at the buffer.
4. Now we copy data from RAM into the GPU buffer.
5. Unbind the buffer from gl.ARRAY_BUFFER (but the buffer is still available on the GPU and we can rebind it later).

So why can't we just call createBuffer() with positions instead of using ARRAY_BUFFER as a bridge between JS and the GPU? Is this just a limitation of the OpenGL API, or is there a strong reason not to do it? Correct me if I'm wrong, but allocating memory of a known size should be faster than allocating some memory and then reallocating it to the size of positions once we call bufferData.
"Because that's the API" is the only real answer.
Many people agree with you that a different API would be better; it's one reason why there are newer APIs (DirectX 11/12, Vulkan, Metal, WebGPU).
But the description in the question isn't technically correct:
1. We create an array of floats in RAM via JS.
2. WebGL creates an object that represents a GPU buffer (nothing is allocated on the GPU yet).
3. We point gl.ARRAY_BUFFER at the buffer.
4. Now we allocate the buffer and copy data from RAM into it.
5. Unbind the buffer from gl.ARRAY_BUFFER.

Step 5 is not needed. There is no reason to unbind the buffer.
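To make the correction concrete, here is the original snippet annotated with what each call actually does (this is the typical behaviour; as noted at the end of this answer, a driver is free to defer the real GPU upload until draw time):

const positions = new Float32Array([
  -1, 1,
  -0.5, 0,
  -0.25, 0.25,
]);

// Creates a handle only; nothing is allocated on the GPU yet.
let buffer = gl.createBuffer();

// Attaches the handle to the ARRAY_BUFFER bind point; still no storage.
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);

// Allocates storage for whatever buffer is bound to ARRAY_BUFFER
// and copies the contents of positions into it.
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

// Optional: unbinding is purely a style choice, the API doesn't need it.
// gl.bindBuffer(gl.ARRAY_BUFFER, null);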
You can think of it like this. Imagine you had a JavaScript function that drew an image to a canvas, but the image was passed in the same way as the buffers in your example. Here's the code:
class Context {
    constructor(canvas) {
        this.ctx = canvas.getContext('2d');
    }
    bindImage(img) {
        this.img = img;
    }
    drawImage(x, y) {
        this.ctx.drawImage(this.img, x, y);
    }
}
Now let's say you want to draw 3 images:
const ctx = new Context(someCanvas);
ctx.bindImage(image1);
ctx.drawImage(0, 0);
ctx.bindImage(image2);
ctx.drawImage(10, 10);
ctx.bindImage(image3);
ctx.drawImage(20, 20);
will work just fine. There's no reason to do
const ctx = new Context(someCanvas);
ctx.bindImage(image1);
ctx.drawImage(0, 0);
ctx.bindImage(null); // not needed
ctx.bindImage(image2);
ctx.drawImage(10, 10);
ctx.bindImage(null); // not needed
ctx.bindImage(image3);
ctx.drawImage(20, 20);
ctx.bindImage(null); // not needed
It's the same in WebGL. There are times to bind null to something, for example
gl.bindFramebuffer(gl.FRAMEBUFFER, null); // start drawing to the canvas
but most of the time unbinding is just a programmer's personal preference, not something the API itself requires.
references:
https://webglfundamentals.org/webgl/lessons/webgl-attributes.html
https://webglfundamentals.org/webgl/lessons/resources/webgl-state-diagram.html
https://stackoverflow.com/a/28641368/128511
Note that even my description above isn't technically correct. Whether or not step 4 copies data to the GPU is undefined. It could just copy the data to RAM, and only at draw time, if the buffer is used and hasn't yet been copied to the GPU, copy it then. Plenty of drivers do that. For a more concrete example of a driver not copying data to the GPU when it seems like it would, see this answer and this one.

How to swap low resolution with high resolution textures dynamically and improve texture loading in general

In my ThreeJS project I want to swap out one texture with another but as soon as I do this the UV's are completely broken. You can see how I do it in the code below:
var texture = new THREE.TextureLoader().load( "T_Test_1k_BaseColor.jpg" );

function loadDraco6141012( src ) {
    var loader = new THREE.GLTFLoader().setPath('');
    loader.setDRACOLoader( new THREE.DRACOLoader() );
    loader.load( src, function( gltf ) {
        gltf.scene.traverse( function ( child ) {
            if ( child.isMesh ) {
                child.material.envMap = envMap;
                child.position.y -= 0.6;
                mesh = child;
                // This needs to trigger only after the texture has fully loaded
                setTimeout(function(){
                    mesh.material.map = texture;
                }, 5000);
                // either way the UVs don't seem to be correct anymore, what happened..?
            }
        } );
        scene.add( gltf.scene );
    } );
}
You can see the whole thing in action here: www.dev.openstring-studios.com. As you can see, there are several things very wrong with this example.
As said before, the loading time is still pretty slow; how could this be improved? Would using a database like MySQL improve performance?
Why are the UVs broken? This looks horrible; what could be the problem? And just to be clear, the green texture map is the same as the blue one, they only differ in color.
Here's the Three.js documentation on MeshStandardMaterial and how applying maps should work. I cannot explain why it doesn't work out here. Why are the UVs suddenly broken?
You shouldn't ask multiple questions in a single post, but I'll try:
You can improve loading times by drastically reducing the polygon count of your pot. Looking at the network tab in the dev tools, I noticed your .gltf is 3.67MB, which is unnecessarily large for a simple pot. You probably don't need this level of detail; you could remove two-thirds of the vertices and your pot would still look good.
It also looks like you're exporting textures bundled in the GLTF, which is helping make your filesize that big. Maybe it's auto-exporting textures in really large sizes (4096x4096)? You should try exporting your textures separately, convert them to a compressed format (JPG), and make sure they're not unnecessarily large (1024x1024 could work). Then you can load them separately.
There is no way to load a texture in that way. You'd have to load them manually in incrementally larger sizes (256, 512, 1024, etc...). TextureLoader has a callback that lets you know when the texture has been loaded.
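For the question's setTimeout workaround specifically: rather than guessing at a delay, assign the map from TextureLoader's onLoad callback, which only fires once the image has actually loaded. A small sketch using the mesh and texture file from the question:

// Swap the map only once the texture has really loaded, no arbitrary 5-second wait.
new THREE.TextureLoader().load( "T_Test_1k_BaseColor.jpg", function ( texture ) {
    mesh.material.map = texture;
    mesh.material.needsUpdate = true;
} );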
UVs aren't broken, you're just loading a second texture that doesn't follow the same layout as the first texture. Make sure this image https://www.dev.openstring-studios.com/T_Test_1k_BaseColor.jpg follows the same layout as your original (green) texture.
Lastly, is there a reason why you separated the pot into 5 different meshes? Whenever possible, you should try making it just one mesh to reduce the number of WebGL drawcalls and get a bit of a performance boost.

three.js: capture video frame for later use

I am using three.js r73. I have a fragment shader that display video frames with a line like this:
gl_FragColor = texture2D( iChannel0, uv);
I would like another uniform (say iChannel1) to contain the video frame that was displayed at some earlier point. (Then I'll do some masking between iChannel0 and iChannel1 to do motion capture/edge detection etc).
I've tried various approaches to this, but no luck.
I figure that I need to clone the video texture somehow in javascript and then assign it to iChannel1, but I don't know how to capture a frame.
I guess I could capture the canvas content but there might be other noise on the canvas that I don't want. I truly want the video frame not the canvas. Also going via canvas capture seems very roundabout. I feel like I'm just missing some API call to capture the current video frame in a way that I can make a texture that iChannel1 will use.
I looked at UniformsUtils but that doesn't seem to do the trick either.
I found a way to do this shortly after posting my question. D'oh.
Here is the way, for future reference:
if (timeToCaptureFrame) {
    that.uniforms.iChannel1.value.image = that.uniforms.iChannel0.value.image;
    that.uniforms.iChannel1.value.needsUpdate = true;
}
Note that iChannel1 needs to be initialized correctly for this to work. Something like this works:
var pathToSubtractionTexture = 'aStillImage.jpg';
(new THREE.TextureLoader()).load(pathToSubtractionTexture, function ( texture ) {
    that.uniforms.iChannel1.value = texture;
});

Saving canvas to image via canvas.toDataURL results in black rectangle

I'm using Pixi.js and trying to save a frame of the animation to an image. canvas.toDataURL should work, but all I get is a black rectangle. See the live example here.
the code I use to extract the image data and set the image is:
var canvas = $('canvas')[0];
var context = canvas.getContext('2d');
$('button').click(function() {
var data = renderer.view.toDataURL("image/png", 1);
//tried var data = canvas.toDataURL();
$('img').attr('src', data);
})
I know this has been answered at least 5 other times on SO but ...
What Kaiido mentioned will work but the real issue is that canvas, when used with WebGL, by default has 2 buffers. The buffer you are drawing to and the buffer being displayed.
When you start drawing into a WebGL canvas, as soon as you exit the current event (for example your requestAnimationFrame callback), the canvas is marked for swapping those 2 buffers. When the browser re-draws the page it does the swap: the buffer you were drawing to is swapped with the one that was being displayed, and you're now drawing to the other buffer. That buffer is cleared.
The reason it's cleared instead of just left alone is that whether the browser actually swaps buffers or does something else is up to the browser. For example, if antialiasing is on (which is the default) it doesn't actually do a swap; it does a "resolve", converting the high-res buffer you just drew into a normal-res, anti-aliased copy in the display buffer.
So, to make things consistent regardless of which path the browser takes, it just always clears whatever buffer you're about to draw to. Otherwise you'd have no idea whether it held 1-frame-old or 2-frame-old data.
Setting preserveDrawingBuffer: true tells the browser "always copy, never swap". In this case it doesn't have to clear the drawing buffer because what's in the drawing buffer is always known. No swapping.
What is the point of all that? The point is, if you want to call toDataURL or gl.readPixels you need to call it IN THE SAME EVENT.
So for example your code could work something like this
var capture = false;
$('button').click(function() {
capture = true;
});
function render() {
renderer.render(...);
if (capture) {
capture = false;
var data = renderer.view.toDataURL("image/png", 1);
$('img').attr('src', data);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
In that case, because you call toDataURL in the same JavaScript event in which you rendered, you'll always get the correct results regardless of whether preserveDrawingBuffer is true or false.
If you're writing an app that is not constantly rendering, you could also do something like:
$('button').click(function() {
    // render right now
    renderer.render(...);
    // capture immediately
    var data = renderer.view.toDataURL("image/png", 1);
    $('img').attr('src', data);
});
The reason preserveDrawingBuffer is false by default is that swapping is faster than copying, so this allows the browser to go as fast as possible.
Also see this answer for one other detail
[NOTE]
While this answer is the accepted one, please do read the one by @gman just above; it describes a much better way of doing this.
Your problem is that you are using a WebGL context, so you need to set the preserveDrawingBuffer property of the WebGL context to true in order to be able to call the toDataURL() method.
Alternatively, you can force Pixi to use the 2D context by using the CanvasRenderer class.
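A minimal sketch of the first option; the Pixi constructor shown here is the older width/height/options form, so check the renderer options for whichever Pixi version you're on (the raw-WebGL equivalent is the standard preserveDrawingBuffer context attribute):

// Pixi (v3-era API): ask for a drawing buffer that survives compositing
var renderer = new PIXI.WebGLRenderer(800, 600, { preserveDrawingBuffer: true });

// Raw WebGL equivalent when you create the context yourself
var gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });

With that set, toDataURL works even outside the rendering event, at the cost of the extra copy described above.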

Trying to clear a WebGL Texture to a solid color

I have a series of textures which stack together (using a mip-map-style resolution pyramid) to arrive at a final image.
Because of the way they stack, it's necessary to initialize them to an unsigned int value of 128 when they have not been populated by meaningful image data. IE, 128 is zero because the render shader will subtract .5 from each texture value, which allows subsequent pyramidal layers to offset the final image by positive or negative values.
I cannot seem to figure out how to initialize a (single-channel GL_LUMINANCE) texture to a value!
I've tried setting it as a renderbuffer target and rendering polys to it, but the FBO seems to be marked as incomplete. I've also tried targeting it as a renderbuffer target and using gl.clear(gl.COLOR_BUFFER_BIT) but again it's considered incomplete.
The obvious thing would be to copy values in using gl.texSubImage2D() but that seems really slow... maybe that's the only way? I was hoping for something more elegant that doesn't require allocating so much memory (at least a frame's worth of a single value) and so slow (because all that data must be written to the buffer when it's all a single value).
The only way to set a texture to a (default) value in WebGL seems to be to resize it, or allocate it, which sets it to all zeroes.
Also, there doesn't seem to be a copy mode like gl.SIGNED_BYTE which would allow zero to be zero (IE, signed values coming in)... but this also doesn't solve the problem of initializing the texture to a single value (in this case, zero).
Any ideas? How does one initialize a WebGL texture to a value aside from just plain old copying the value into it?
Being able to render to a particular type of texture is unfortunately not guaranteed by the OpenGL ES 2.0 spec on which WebGL is based. The only way to tell if it works is to create the texture, attach it to a framebuffer, and then call checkFramebufferStatus and see if it returns FRAMEBUFFER_COMPLETE. Unfortunately it won't in a lot of cases.
Only 3 combinations of attachments are guaranteed to work:
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture + DEPTH_ATTACHMENT = DEPTH_COMPONENT16 renderbuffer
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture + DEPTH_STENCIL_ATTACHMENT = DEPTH_STENCIL renderbuffer
So your options are
use an RGBA texture and render 128,128,128,??? to it in an fbo (or gl.clear it)
use a LUMINANCE texture and call texImage2D (sketched just after this list)
use a LUMINANCE texture and call copyTexImage2D using the backbuffer or an fbo of the correct size cleared to the color you want (though there is no guarantee this is fast AFAIK)
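A minimal sketch of the second option: fill a Uint8Array with 128 and upload it once with texImage2D (the size here is just illustrative). It does allocate one frame's worth of data on the CPU side, but only once per texture:

// Initialize a single-channel LUMINANCE texture to a constant value of 128.
var width = 256, height = 256;               // illustrative size
var pixels = new Uint8Array(width * height);
pixels.fill(128);                             // every texel starts at "zero" (128)

var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, width, height, 0,
              gl.LUMINANCE, gl.UNSIGNED_BYTE, pixels);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);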
In my experience (mainly with Chrome on OSX), using a Canvas to initialise a texture is fast. I guess it is because the browser allocates the Canvas and draws to it entirely on the GPU, and its WebGL implementation uses the same GL context as Canvas, so there is no huge CPU-to-GPU memory transfer.
// Quickly init texture to (128,128,128)
var canvas = this._canvas = document.createElement("canvas");
canvas.width = wTex;
canvas.height = hTex;
var ctx = this._ctx = canvas.getContext("2d");
ctx.fillStyle = "#808080";
ctx.fillRect(0, 0, wTex, hTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, canvas);
