depth of field with webgl - javascript

I would like to simulate the "depth of field" effect in WebGL by moving the camera on a circle: https://en.wikibooks.org/wiki/OpenGL_Programming/Depth_of_Field
In OpenGL I would use the accumulation buffer, but unfortunately WebGL doesn't have such a buffer.
Is it possible to use blending to simulate such an effect?

A simple way of simulating depth of field is:
Render the scene to a texture.
Blur that rendered-scene texture into another texture.
Mix the two textures (the in-focus scene texture + the blurred scene texture) using the depth information.
There's an example here. Click the tiny * and adjust the "dof" slider. Press d a few times to see the different textures.
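Here is a minimal sketch of the third step (mixing the sharp and blurred textures by depth), written as a GLSL fragment shader in a JavaScript string; the uniform names and the focus parameters are invented for this example:
const dofMixFragmentShader = `
  precision mediump float;
  varying vec2 vUv;
  uniform sampler2D tSharp;   // scene rendered normally
  uniform sampler2D tBlurred; // the same scene, blurred
  uniform sampler2D tDepth;   // scene depth (0..1)
  uniform float uFocusDepth;  // depth value that should stay sharp
  uniform float uFocusRange;  // how quickly things go out of focus

  void main() {
    float depth = texture2D(tDepth, vUv).r;
    // 0 = fully in focus, 1 = fully blurred
    float blurAmount = clamp(abs(depth - uFocusDepth) / uFocusRange, 0.0, 1.0);
    gl_FragColor = mix(texture2D(tSharp, vUv), texture2D(tBlurred, vUv), blurAmount);
  }
`;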

You can also render the scene to several different framebuffers, then bind those framebuffers as textures and accumulate the color from all of them in one final post-processing gathering pass. So that is more or less the manual way of doing accumulation.
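As a rough sketch of that gathering pass (assuming, for illustration, four jittered samples and invented uniform names tSample0..tSample3):
const gatherFragmentShader = `
  precision mediump float;
  varying vec2 vUv;
  uniform sampler2D tSample0;
  uniform sampler2D tSample1;
  uniform sampler2D tSample2;
  uniform sampler2D tSample3;
  void main() {
    gl_FragColor = 0.25 * (
      texture2D(tSample0, vUv) + texture2D(tSample1, vUv) +
      texture2D(tSample2, vUv) + texture2D(tSample3, vUv));
  }
`;
// Per frame: move the camera to each point on the aperture circle, render the scene
// once per point into its own framebuffer, then draw a full-screen quad with this
// shader to average the results on screen.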

Related

Copy mesh thousand times and animate without big performance hit?

I have a demo where I use hundreds of cubes that all have exactly the same geometry and texture, for example:
texture  = THREE.ImageUtils.loadTexture ...
material = new THREE.MeshLambertMaterial( map: texture )
geometry = new THREE.BoxGeometry( 1, 1, 1 )
cubes = []
for i in [0..1000]
  cubes.push new THREE.Mesh geometry, material

# ... on every frame
for cube in cubes
  # do something with each cube
Once all the cubes are created I start moving them around the screen.
All of them have the same texture and the same size; they just change position and rotation. The problem is that once I start using many hundreds of cubes, the computer starts to struggle to render them.
Is there any way I could tell Three.js / WebGL that all those objects are the same object, just identical copies in different positions?
I read something about BufferGeometry and Geometry2 being able to boost performance for this sort of situation, but I'm not exactly sure what would be best in this case.
Thank you
Is there any way I could tell Three.js / WebGL that all those objects are the same object, just identical copies in different positions?
Unfortunately there's nothing that can automatically determine and optimize rendercalls in that regard. That would be pretty awesome.
I read something about BufferGeometry and Geometry2 being able to boost performance for this sort of situation but I'm not exactly sure what would be the best in this case.
So, the deal here is this: the normal THREE.Geometry class that three.js provides is built for developer convenience, but is a bit removed from how WebGL handles data. This is what DirectGeometry (earlier called Geometry2) and BufferGeometry are for. A BufferGeometry is a representation of how WebGL expects data for drawcalls to be laid out: it contains a typed array for every attribute of the geometry. The conversion from Geometry to BufferGeometry happens automatically every time geometry.verticesNeedUpdate is set to true.
If you don't change any of the attributes, this conversion will happen once per geometry (of which you have 1) so this is completely ok and moving to a buffer-geometry won't help (simply because you are already using it).
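For illustration, a minimal sketch of what a BufferGeometry holds, one typed array per attribute (positions for a single triangle; normals and uvs would be added the same way, and older three.js versions call this method addAttribute instead of setAttribute):
const geometry = new THREE.BufferGeometry();
const positions = new Float32Array([
  0, 0, 0,   // vertex 0
  1, 0, 0,   // vertex 1
  0, 1, 0    // vertex 2
]);
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));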
The main problem you face with several hundred geometries is the number of drawcalls required to render the scene. Generally speaking, every instance of THREE.Mesh represents a single drawcall, and those drawcalls are expensive: a single drawcall that outputs hundreds of thousands of triangles is no problem at all, but thousands of drawcalls with 100 triangles each will very quickly become a serious performance problem.
Now, there are different ways to reduce the number of drawcalls with three.js. The first is (as already mentioned in the comments) to combine multiple meshes/geometries into a single one (in the end, meshes are just a collection of triangles, so there's no requirement that they form a single "body" or anything like that). This isn't too practical in your case, as it would involve applying the position and rotation of each of your cubes via JS and updating the vertex arrays accordingly on each frame.
What you are really looking for is a WebGL-feature called geometry instancing.
This is not as easy to use as regular meshes and geometries, but not too complicated either.
With instancing, you can create a huge number of objects in a single drawcall. All of the rendered objects will share a single geometry (your cube geometry with its vertices, normals and uv-coordinates). The instancing happens when you add special attributes of type InstancedBufferAttribute, which can contain an independent value for each instance. So you could add two per-instance attributes for position and rotation (or a single per-instance transformation matrix if you like).
These examples should pretty much be what you are looking for:
http://threejs.org/examples/?q=instancing
The only difficulty with instancing as of now is the material: you will need to provide a custom vertex-shader that knows how to apply your per-instance-attributes to the vertex-positions from the original geometry (this can also be seen in the code of the examples).
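Here is a rough sketch of that setup, assuming a three.js version of roughly the same era as the linked examples (addAttribute is called setAttribute in newer releases, which also offer THREE.InstancedMesh as a simpler alternative); the attribute name instanceOffset and the surrounding variables are invented for this example:
// One shared cube geometry...
const boxGeometry = new THREE.BoxBufferGeometry(1, 1, 1);

// ...wrapped in an InstancedBufferGeometry so it can carry per-instance attributes.
const instancedGeometry = new THREE.InstancedBufferGeometry();
instancedGeometry.setIndex(boxGeometry.index);
instancedGeometry.addAttribute('position', boxGeometry.attributes.position);

// One vec3 offset per instance.
const count = 1000;
const offsets = new Float32Array(count * 3);
for (let i = 0; i < count; i++) {
  offsets[i * 3 + 0] = (Math.random() - 0.5) * 100;
  offsets[i * 3 + 1] = (Math.random() - 0.5) * 100;
  offsets[i * 3 + 2] = (Math.random() - 0.5) * 100;
}
instancedGeometry.addAttribute('instanceOffset',
  new THREE.InstancedBufferAttribute(offsets, 3));

// The custom vertex shader applies the per-instance offset to every vertex.
const material = new THREE.RawShaderMaterial({
  vertexShader: `
    precision highp float;
    uniform mat4 modelViewMatrix;
    uniform mat4 projectionMatrix;
    attribute vec3 position;
    attribute vec3 instanceOffset;
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix *
                    vec4(position + instanceOffset, 1.0);
    }
  `,
  fragmentShader: `
    precision highp float;
    void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }
  `
});

// All 1000 cubes render in a single drawcall; add this mesh to your scene as usual.
const cubes = new THREE.Mesh(instancedGeometry, material);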
You have a webgl tag so I'm going to give a non-three.js answer.
The best way to handle this is to allocate a float texture holding the model transform matrix data (or just vec3 positions if that's all you need). Then you allocate a mesh chunk containing all your cube data. You need to add an additional attribute, which I refer to as the modelTransform index. For each "cube instance" in the mesh chunk, write the modelTransform index value corresponding to the correct offset in the model transform data texture.
On each frame, you calculate the correct model transform data for all the cubes and write it to the model transform data texture at the correct offsets. Upload the texture to the GPU on each frame.
In the vertex shader, access the model transform data via the modelTransform index attribute and the float texture. The rest is the same.
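A rough sketch of that vertex shader (plain WebGL GLSL, embedded as a JavaScript string; the attribute and uniform names are invented here, and fetching a texture in the vertex shader requires vertex-texture support, which most desktop GPUs have):
const cubeVertexShader = `
  attribute vec3 aPosition;        // cube vertex position
  attribute float aModelIndex;     // which cube this vertex belongs to
  uniform sampler2D uTransformTex; // float texture, one texel = one cube position
  uniform float uTexSize;          // width/height of the (square) data texture
  uniform mat4 uViewProjection;

  void main() {
    // Convert the 1D model index into a 2D texel coordinate.
    float u = (mod(aModelIndex, uTexSize) + 0.5) / uTexSize;
    float v = (floor(aModelIndex / uTexSize) + 0.5) / uTexSize;
    vec3 cubePosition = texture2D(uTransformTex, vec2(u, v)).rgb;

    gl_Position = uViewProjection * vec4(aPosition + cubePosition, 1.0);
  }
`;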
This is what I am using in my engine and it works well for smallish objects such as cubes. Note, however, that updating 150000 cubes at 60 FPS will likely take most of your CPU resources in JS. This is unavoidable regardless of which instancing scheme you take.
If the motion/animation of each cube is fixed, then an even better way to do it is to upload a velocity attribute and an initial creation timestamp attribute for each cube instance. On each frame, send the current time as a uniform and calculate the position as "pos += attr_velocity * getDeltaTime(attr_initTime, unif_currentTime);". This skips the CPU work altogether and allows you to render a much higher number of cubes.
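A sketch of that fully GPU-driven variant (again with invented attribute and uniform names):
const animatedCubeVertexShader = `
  attribute vec3 aPosition;       // cube vertex position
  attribute vec3 aStartPosition;  // where this cube was created
  attribute vec3 aVelocity;       // units per second
  attribute float aInitTime;      // creation timestamp in seconds
  uniform float uCurrentTime;     // seconds, updated once per frame
  uniform mat4 uViewProjection;

  void main() {
    float dt = uCurrentTime - aInitTime;
    vec3 cubePosition = aStartPosition + aVelocity * dt;
    gl_Position = uViewProjection * vec4(aPosition + cubePosition, 1.0);
  }
`;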

THREE.js Render target texture will not draw in a different scene

So if any THREE.js pros can see why I can't get a WebGLRenderTarget to be used as a material for a plane in another scene, I'd be pretty happy.
How it works right now is I create a scene with a perspective camera which renders a simple plane. This happens in the Application object.
I also have a WaveMap object that uses another scene and an orthographic camera and using a fragment shader draws the cos(x) * sin(y) function on another quad that takes up the entire screen. I render this to a texture and then I create a material that uses this texture.
I then pass that Material to be used in the Application object to draw the texture on the first plane I mentioned.
Problem is for some reason I could get this to work in the scene with the orthographic camera inside the WaveMap object but not in the scene with the perspective camera in the Application object after passing the material over. :(
I've tried passing a simple material with a solid color and that works, but when I try to pass over a material which uses a WebGLRenderTarget as a texture it doesn't show up anymore.
https://github.com/ArminTaheri/rendertotexture-threejs
You need to clone any material/texture that you want to render with two different renderers.
var materialOnOther = originalMaterial.clone();
Prior to r72 you needed to force the image buffers for the textures to be updated like so:
materialOnOther.uniforms.exampleOfATexture.value.needsUpdate = true;

Using a cubemap texture as diffuse source in place of a 2D one

I'm trying to project massive, 32k-resolution equirectangular maps onto spheres.
Since a 32k texture is hardly accessible for older graphics cards that only support 1k-2k sized textures, and scaling a 32k image down to 1k loses a tremendous amount of detail, I've resorted to splitting each source map by projecting it into 6 cube faces to make up a cubemap, so that more detail can be displayed on older cards.
However, I'm having trouble actually displaying these cubemapped spheres with three.js. I can set the MeshPhongMaterial.envMap to my cubemap, but of course this makes the sphere mesh reflect the texture instead.
An acceptable result can be produced by using ShaderMaterial along with ShaderLib['cube'] to "fake" a skysphere of some sort. But this drops all ability for lighting, normal mapping and all the other handy (and pretty) things possible with MeshPhongMaterial.
If at all possible, I'd like to not have to write an entire shader from scratch for such a simple tweak (switching one texture2D call to textureCube). Is there a way to coerce three.js to read the diffuse term from a cubemap instead of a 2D texture, or a simple way to edit the shader three.js uses internally?

Rounded Plane In THREE JS

THREE JS can often seem angular and straight-edged. I haven't used it for very long and thus am struggling to understand how to curve the world, so to speak. I would imagine a renderer or something must be changed, but the idea is to take a 2D map and turn it into a simple three-lane running game. However, if you look at the picture below from another similar game, how can I achieve the fish-eye effect?
I would do that kind of effect on a per-vertex basis, depending on the distance from the camera.
Also, a slightly tweaked perspective camera with a bigger vertical FOV would boost the effect of the "curviness".
It's just a simple distortion effect that has been simulated in some way; it probably isn't really curved. Hope this helps.
I'm sure there are many possible different approaches... Here's one that creates nice barrel distortion effect.
You can do something like that by rendering a normal wide-angle camera to a texture, then projecting it onto a lens-shaped plane (even a sphere), and then doing the actual on-screen render from a camera pointing at that.
I don't have the code available at the moment, but I should be able to dig it up in a few days if you're interested. Or you can just adapt it from the three.js examples. Three.js includes some postprocessing examples where the scene is first rendered into a texture, that texture is applied to a quad, and the quad is then photographed with an orthographic camera. You can modify such an example by changing the orthographic camera to a perspective one, then distorting/changing the quad to something more appropriately shaped.
Taken to extremes, this approach can produce some pixelization / blocky artifacts.
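A rough sketch of that two-pass idea in three.js (the names gameScene, renderer and the lens geometry are invented here; older three.js versions pass the render target directly to renderer.render instead of using setRenderTarget):
// Pass 1 target: the game scene shot with a wide-angle camera.
const renderTarget = new THREE.WebGLRenderTarget(1024, 1024);
const rtCamera = new THREE.PerspectiveCamera(90, 1, 0.1, 1000);

// Pass 2: a curved "lens" surface textured with pass 1,
// photographed by the camera that actually renders to the screen.
const lensGeometry = new THREE.SphereGeometry(10, 32, 32, 0, Math.PI * 0.5);
const lensMaterial = new THREE.MeshBasicMaterial({ map: renderTarget.texture });
const lens = new THREE.Mesh(lensGeometry, lensMaterial);

const screenScene = new THREE.Scene();
screenScene.add(lens);
const screenCamera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
screenCamera.position.z = 15;

function render() {
  renderer.setRenderTarget(renderTarget);
  renderer.render(gameScene, rtCamera);
  renderer.setRenderTarget(null);
  renderer.render(screenScene, screenCamera);
  requestAnimationFrame(render);
}
render();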

webgl shadow mapping gl.DEPTH_COMPONENT

Hey, I'm trying to implement shadow mapping in WebGL using this example:
tutorial
What I'm trying to do is:
initialize the depth texture and framebuffer.
draw the scene to that framebuffer with a simple shader, then draw a new scene with a box that has the depth texture as its texture, so I can see the depth map using another shader.
I think it looks OK with the color texture, but I can't get it to work with the depth texture; it's all white.
I put the code on Dropbox:
source code
Most of it is in the files index.html, webgl_all.js and objects.js. There are also some light shaders I'm not using at the moment.
Really hope somebody can help me.
Greetings from Denmark
This could have several causes:
For common setups of the near and far planes, normalized depth values will be high enough to appear all white for most of the scene, even though they are not actually identical (remember that a depth texture has an accuracy of at least 16 bits, while your screen output has only 8 bits per color channel, so a depth texture can look uniformly white even when its values differ). A quick way to check is to linearize the depth before displaying it; see the sketch after this list.
On some setups (e.g. desktop OpenGL), a texture may appear all white when it is incomplete, that is, when texture filtering is set to use mipmaps but not all mipmap levels have been created. This may be the same with WebGL.
You may have hit a browser WebGL implementation bug.
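As a sketch of the first point, here is a fragment shader (as a JavaScript string) that linearizes a perspective depth value before displaying it; uNear/uFar are assumed to match the projection used when rendering the depth texture, and all names are invented for this example:
const depthDebugFragmentShader = `
  precision mediump float;
  varying vec2 vUv;
  uniform sampler2D uDepthMap;
  uniform float uNear;
  uniform float uFar;

  // Convert non-linear [0,1] depth back to a linear eye-space distance.
  float linearizeDepth(float d) {
    float z = d * 2.0 - 1.0; // back to NDC
    return (2.0 * uNear * uFar) / (uFar + uNear - z * (uFar - uNear));
  }

  void main() {
    float d = texture2D(uDepthMap, vUv).r;
    float linear = linearizeDepth(d) / uFar; // 0 near .. 1 far
    gl_FragColor = vec4(vec3(linear), 1.0);
  }
`;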
