three.js: capture video frame for later use - javascript

I am using three.js r73. I have a fragment shader that displays video frames with a line like this:
gl_FragColor = texture2D( iChannel0, uv);
I would like another uniform (say iChannel1) to contain the video frame that was displayed at some earlier point. (Then I'll do some masking between iChannel0 and iChannel1 for motion capture, edge detection, etc.)
I've tried various approaches to this, but no luck.
I figure that I need to clone the video texture somehow in JavaScript and then assign it to iChannel1, but I don't know how to capture a frame.
I guess I could capture the canvas content, but there might be other noise on the canvas that I don't want. I truly want the video frame, not the canvas. Also, going via canvas capture seems very roundabout. I feel like I'm just missing some API call that captures the current video frame in a way that lets me make a texture for iChannel1 to use.
I looked at UniformsUtils but that doesn't seem to do the trick either.

I found a way to do this shortly after posting my question. D'oh.
Here is the way, for future reference:
if (timeToCaptureFrame) {
    // Point iChannel1's texture at the current video element and force one
    // GPU upload; the captured frame then stays frozen in iChannel1 until
    // the next time its needsUpdate is set.
    that.uniforms.iChannel1.value.image = that.uniforms.iChannel0.value.image;
    that.uniforms.iChannel1.value.needsUpdate = true;
}
Note that iChannel1 needs to be initialized correctly for this to work. Something like this works:
var pathToSubtractionTexture = 'aStillImage.jpg';
(new THREE.TextureLoader()).load(pathToSubtractionTexture, function (texture) {
    that.uniforms.iChannel1.value = texture;
});
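For future readers, here is a fuller sketch of how the pieces fit together. This is hedged: video, renderer, scene, and camera are assumed to exist elsewhere, and in r73 a plain THREE.Texture wrapped around the video element is enough for iChannel0:
var videoTexture = new THREE.Texture(video); // video is a playing <video> element
videoTexture.minFilter = THREE.LinearFilter;
videoTexture.magFilter = THREE.LinearFilter;
that.uniforms.iChannel0.value = videoTexture;

function animate() {
    requestAnimationFrame(animate);
    videoTexture.needsUpdate = true; // re-upload the current video frame every tick
    if (timeToCaptureFrame) {
        // One-off upload: iChannel1 keeps this frame until its next needsUpdate.
        that.uniforms.iChannel1.value.image = video;
        that.uniforms.iChannel1.value.needsUpdate = true;
    }
    renderer.render(scene, camera);
}
animate();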

Related

How to swap low resolution with high resolution textures dynamically and improve texture loading in general

In my Three.js project I want to swap out one texture for another, but as soon as I do, the UVs are completely broken. You can see how I do it in the code below:
var texture = new THREE.TextureLoader().load( "T_Test_1k_BaseColor.jpg" );

function loadDraco6141012( src ) {
    var loader = new THREE.GLTFLoader().setPath('');
    loader.setDRACOLoader( new THREE.DRACOLoader() );
    loader.load( src, function( gltf ) {
        gltf.scene.traverse( function ( child ) {
            if ( child.isMesh ) {
                child.material.envMap = envMap;
                child.position.y -= 0.6;
                mesh = child;
                // This needs to trigger only after the texture has fully loaded
                setTimeout( function () {
                    mesh.material.map = texture;
                }, 5000 );
                // either way the UVs don't seem to be correct anymore, what happened..?
            }
        } );
        scene.add( gltf.scene );
    } );
}
You can see the whole thing in action here: www.dev.openstring-studios.com. As you can see, there are several things very wrong with this example.
As said before, the loading time is still pretty slow. How could this be improved? Would using a database like MySQL improve performance?
Why are the UVs broken? This looks horrible; what could be the problem? And just to be clear, the green texture map is the same as the blue one; they differ only in color.
Here is the Three.js documentation for MeshStandardMaterial, which covers how applying maps should work. I cannot explain why it doesn't work here. Why are the UVs suddenly broken?
You shouldn't ask multiple questions in a single post, but I'll try:
You can improve loading times by drastically reducing the polygon count of your pot. Looking at the network tab in the dev tools, I noticed your .gltf is 3.67 MB, which is unnecessarily large for a simple pot. You probably don't need this level of detail; you could remove two-thirds of the vertices and your pot would still look good.
It also looks like you're exporting textures bundled into the glTF, which helps make your file size that big. Maybe it's auto-exporting textures at really large sizes (4096x4096)? You should try exporting your textures separately, converting them to a compressed format (JPG), and making sure they're not unnecessarily large (1024x1024 could work). Then you can load them separately.
There is no built-in way to swap texture resolutions like that automatically. You'd have to load them manually in incrementally larger sizes (256, 512, 1024, etc.). TextureLoader has a callback that lets you know when a texture has finished loading, as sketched below.
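A minimal sketch of that progressive swap, assuming mesh is the glTF child captured in your traverse and that the smaller file exists at the illustrative path:
var loader = new THREE.TextureLoader();
loader.load('T_Test_256_BaseColor.jpg', function (lowRes) {
    lowRes.flipY = false; // glTF expects flipY disabled on externally loaded textures
    mesh.material.map = lowRes;
    mesh.material.needsUpdate = true;
    // Only fetch (or at least only apply) the big one once the small one is in place.
    loader.load('T_Test_1k_BaseColor.jpg', function (highRes) {
        highRes.flipY = false;
        mesh.material.map = highRes;
        mesh.material.needsUpdate = true;
    });
});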
UVs aren't broken, you're just loading a second texture that doesn't follow the same layout as the first texture. Make sure this image https://www.dev.openstring-studios.com/T_Test_1k_BaseColor.jpg follows the same layout as your original (green) texture.
Lastly, is there a reason why you separated the pot into 5 different meshes? Whenever possible, you should try making it just one mesh to reduce the number of WebGL draw calls and get a bit of a performance boost.
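If you do merge them, here is a hedged sketch using BufferGeometryUtils from the three.js examples (it assumes all five meshes share one material, called sharedMaterial here, and that the BufferGeometryUtils script is loaded):
var geometries = [];
gltf.scene.traverse(function (child) {
    if (child.isMesh) {
        child.updateMatrixWorld();
        var g = child.geometry.clone();
        g.applyMatrix(child.matrixWorld); // bake the world transform into the vertices
        geometries.push(g);
    }
});
var merged = THREE.BufferGeometryUtils.mergeBufferGeometries(geometries);
scene.add(new THREE.Mesh(merged, sharedMaterial)); // one mesh, one draw call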

Saving canvas to image via canvas.toDataURL results in black rectangle

I'm using Pixi.js and trying to save a frame of the animation to an image. canvas.toDataURL should work, but all I get is a black rectangle. See the live example here.
the code I use to extract the image data and set the image is:
var canvas = $('canvas')[0];
var context = canvas.getContext('2d');

$('button').click(function() {
    var data = renderer.view.toDataURL("image/png", 1);
    // also tried: var data = canvas.toDataURL();
    $('img').attr('src', data);
});
I know this has been answered at least 5 other times on SO but ...
What Kaiido mentioned will work, but the real issue is that a canvas, when used with WebGL, by default has 2 buffers: the buffer you are drawing to and the buffer being displayed.
When you start drawing into a WebGL canvas, as soon as you exit the current event (for example, your requestAnimationFrame callback), the canvas is marked for swapping those 2 buffers. When the browser re-draws the page, it does the swap: the buffer you were drawing to is swapped with the one that was being displayed, and you're now drawing to the other buffer. That buffer is cleared.
The reason it's cleared instead of just left alone is that whether the browser actually swaps buffers or does something else is up to the browser. For example, if antialiasing is on (which is the default), it doesn't actually do a swap; it does a "resolve", converting the high-res buffer you just drew into a normal-res, anti-aliased copy in the display buffer.
So, to keep things consistent regardless of which path the browser takes, it just always clears whatever buffer you're about to draw to. Otherwise you'd have no idea whether it held 1-frame-old or 2-frame-old data.
Setting preserveDrawingBuffer: true tells the browser "always copy, never swap". In this case it doesn't have to clear the drawing buffer because what's in the drawing buffer is always known. No swapping.
What is the point of all that? The point is, if you want to call toDataURL or gl.readPixels you need to call it IN THE SAME EVENT.
So, for example, your code could work something like this:
var capture = false;

$('button').click(function() {
    capture = true;
});

function render() {
    renderer.render(...);
    if (capture) {
        capture = false;
        var data = renderer.view.toDataURL("image/png", 1);
        $('img').attr('src', data);
    }
    requestAnimationFrame(render);
}
requestAnimationFrame(render);
In that case, because you call toDataURL in the same JavaScript event in which you rendered, you'll always get the correct results regardless of whether preserveDrawingBuffer is true or false.
If you're writing an app that is not constantly rendering, you could also do something like this:
$('button').click(function() {
    // render right now
    renderer.render(...);
    // capture immediately
    var data = renderer.view.toDataURL("image/png", 1);
    $('img').attr('src', data);
});
The reason preserveDrawingBuffer is false by default is that swapping is faster than copying, so the default lets the browser run as fast as possible.
Also see this answer for one other detail
[NOTE]
While this answer is the accepted one, please do read the one by @gman just below; it contains a much better way of doing this.
Your problem is that you are using a WebGL context, so you need to set the preserveDrawingBuffer property of the WebGL context to true in order to be able to call the toDataURL() method.
Alternatively, you can force Pixi to use the 2D context by using the CanvasRenderer class. Both options are sketched below.
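A minimal sketch of both options; this assumes a Pixi version whose renderer constructors take an options object (the exact signature varies between Pixi releases):
// Option 1: keep WebGL but tell it "always copy, never swap",
// so toDataURL() also works outside the render event.
var webglRenderer = new PIXI.WebGLRenderer(800, 600, { preserveDrawingBuffer: true });

// Option 2: force the 2D context instead; no buffer swapping, no black frames.
var canvasRenderer = new PIXI.CanvasRenderer(800, 600);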

Accumulation shader with Three.js

I'm trying to implement a path tracer using THREE.js. I'm basically rendering a fullscreen quad and the path tracing happens in the pixel shader.
I want a higher sampling rate, and one way to get it is to sample one path per pixel and accumulate the resulting images (i.e., average the images obtained at each shader pass).
As it is, I am able to generate the images I need, but I have no idea how to accumulate them. My guess is that I need two render targets: one containing the "latest" sampled image and one containing the average of all the images rendered so far.
I just don't know how to get the data out of a WebGLRenderTarget and use it to manipulate the data contained in another render target. Is this even possible with Three.js? I've been looking into framebuffer objects, and MrDoob's FBO example (http://www.mrdoob.com/lab/javascript/webgl/particles/particles_zz85.html) appears promising, but I'm not sure I'm headed down the right path.
I think the issue is that you can't read and write from the same buffer. Say you render something one frame: you need a pass that outputs to the accumulation buffer. The next frame, you need to do your calculations and save to the same buffer, but if I understand correctly this is not currently possible in WebGL.
What you can do instead is have two buffers. In the shader where you write your output and do your calculations, just add another texture sampler, read from the buffer of the previous frame, save to the other one, and then alternate. You will always have your accumulated values; you can use whatever math you want for the addition, but you need to make sure you are reading the right buffer on the right frame.
three.js has a post-processing plugin, and it should be very handy for doing stuff like this.
var flipFlop = true;
var buffer1 = new THREE.WebGLRenderTarget(BUFFERSIZE, BUFFERSIZE, { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat, type: THREE.FloatType });
var buffer2 = new THREE.WebGLRenderTarget(BUFFERSIZE, BUFFERSIZE, { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat, type: THREE.FloatType });

function render() {
    // If we are in frame 1, read the buffer from frame 0 and add it to whatever you compute
    yourComputeShaderMaterial.uniforms._accumulationBuffer.value = flipFlop ? buffer2 : buffer1;
    if (flipFlop) {
        // Frame 0
        renderer.render(scene, camera, buffer1);
    } else {
        // Frame 1
        renderer.render(scene, camera, buffer2);
    }
    // Get whatever just rendered in this frame and use it
    yourEndShader.uniforms._accumulationBuffer.value = !flipFlop ? buffer2 : buffer1;
    // Switch frames
    flipFlop = !flipFlop;
}
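For what it's worth, here is a hedged sketch of what the accumulation fragment shader could look like (_newSample and _frameCount are illustrative uniform names; only _accumulationBuffer comes from the code above):
var accumulateFragment = [
    'uniform sampler2D _accumulationBuffer; // running average from the previous frame',
    'uniform sampler2D _newSample;          // this frame\'s path-traced image',
    'uniform float _frameCount;             // number of samples including this one',
    'varying vec2 vUv;',
    'void main() {',
    '    vec4 prev = texture2D(_accumulationBuffer, vUv);',
    '    vec4 curr = texture2D(_newSample, vUv);',
    '    // incremental mean: new = prev + (curr - prev) / N',
    '    gl_FragColor = mix(prev, curr, 1.0 / _frameCount);',
    '}'
].join('\n');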

Collision detection using a collision array inside html5 canvas

I am trying to detect whether a character and an object inside an image collide. I am using one function that parses the image and creates a collision array, and another function that detects whether there is a collision at a specific location. My problem is that the isCollision function is never executed. Here is my jsfiddle: http://jsfiddle.net/AbdiasSoftware/UNWWq/1/
if (isCollision(character.x, character.y)) {
alert("Collision");
}
Please help me to fix my problem.
Add this at the top of your init() method and it should work:
FieldImg.crossOrigin = '';
As you are loading the image from a different origin, CORS kicks in, and you need to request cross-origin usage when using getImageData() (or toDataURL()).
See modified fiddle here.
Note: in your final code this is probably not going to be necessary, as you'll probably host the images on the same domain as the page itself. In that case you need to remove the cross-origin request unless your server is set up to handle it. Just something to keep in mind for later if what worked suddenly doesn't...
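For reference, a minimal sketch of the ordering that matters (the URL is illustrative, and ctx/canvas are assumed to be your drawing context and canvas):
var FieldImg = new Image();
FieldImg.crossOrigin = '';   // must be set before src to request a CORS-enabled fetch
FieldImg.onload = function () {
    ctx.drawImage(FieldImg, 0, 0);
    // no SecurityError now, provided the server sends Access-Control-Allow-Origin
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
};
FieldImg.src = 'http://example.com/background.png';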
OK, I found it:
You are loading a huge background image, and you draw part of it on a much smaller canvas.
You do visual collision detection using a binary view on the display canvas that you build in the process() function.
The issue comes when you want to test: to compute the player's pixel position within the binary view, you are using y * width + x, but with the wrong width, that of the background image, when it should be that of the view (cw).
function isCollision(x, y) {
    return (buffer8[y * cw + x] === 1); // index with the view's width (cw), not the image's
}
Move right and watch the console in this fiddle:
http://jsfiddle.net/gamealchemist/UNWWq/9/
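For reference, a hedged sketch of how such a binary view can be built from the display canvas (the helper name is illustrative; the fiddle's process() does something equivalent):
function buildCollisionBuffer(ctx, cw, ch) {
    var data = ctx.getImageData(0, 0, cw, ch).data; // RGBA bytes of the view
    var buffer8 = new Uint8Array(cw * ch);          // one byte per pixel
    for (var i = 0; i < cw * ch; i++) {
        // mark a pixel as solid when its alpha channel is set
        buffer8[i] = data[i * 4 + 3] > 0 ? 1 : 0;
    }
    return buffer8;
}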

How to drop Frames in HTML5 Canvas

I am making a small game in HTML5 with the canvas element. It runs great on most computers, but it lags on others. However, it doesn't skip frames; it continues to render every frame and the game slows down. I am trying to write a function to skip frames, but I can't come up with a formula to do it.
I've tried searching around, but I have found nothing.
I have a function that renders the game called render and it is on a loop like this:
var renderTimer = setInterval("render(ctx)", 1000 / CANVAS_FPS);

function render(ctx) {
    /* render code here */
}
Thank you for any help,
Brandon Pfeifer
This pattern will allow you to skip frames on computers known to be slow
var isSlowComputer = true;
var FrameSkipper = 5;

function render() {
    // if this is a slow computer
    // just draw every 5th frame
    if (isSlowComputer && --FrameSkipper > 0) { return; }
    // reset the frame skipper
    FrameSkipper = 5;
    // draw your frame now
}
If your target market is people with HTML5-capable browsers, you can just use window.requestAnimationFrame. This keeps all of your rendering logic bound in one place, and it slows down only if it has to. It tries hard to stay within the 16 ms per-frame budget, which gets you to 60 fps.
var canvas = document.getElementById("canvas"); // note: no '#' with getElementById

(function drawFrame() {
    window.requestAnimationFrame(drawFrame, canvas);
    // your main code would fire off here
}());
As long as you let the browser figure out the frame rate you're golden.
I've written some different experiences using the canvas before, and until I used requestAnimationFrame things were a little choppy.
One other thing to keep in mind is double buffering. If you are going to write a lot of things to the screen at any given moment, I find it is easier to draw them all to a second, hidden canvas element and then just use context.drawImage(buffer, 0, 0); that gets rid of a lot of the chop. As long as you have thought your code through, the canvas shouldn't get choppy even under a lot of strain.
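A minimal sketch of that double-buffer pattern, assuming a visible canvas with id "canvas":
var screen = document.getElementById('canvas');
var screenCtx = screen.getContext('2d');

var buffer = document.createElement('canvas'); // hidden back buffer, never in the DOM
buffer.width = screen.width;
buffer.height = screen.height;
var bufferCtx = buffer.getContext('2d');

function render() {
    bufferCtx.clearRect(0, 0, buffer.width, buffer.height);
    // ... draw the whole frame into bufferCtx here ...
    screenCtx.drawImage(buffer, 0, 0); // one blit to the visible canvas
}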
Good Luck
