webgl shadow mapping gl.DEPTH_COMPONENT - javascript

Hey, I'm trying to implement shadow mapping in WebGL using this example:
tutorial
What I'm trying to do is:
initialize the depth texture and framebuffer,
draw a scene to that framebuffer with a simple shader, then draw a new scene with a box that uses the depth texture as its texture, so I can inspect the depth map with another shader.
It looks OK with the color texture, but I can't get it to work with the depth texture; it's all white.
I put the code on Dropbox:
source code
Most of it is in the files
index.html
webgl_all.js
objects.js
There are also some light shaders I'm not using at the moment.
Really hope somebody can help me.
Greetings from Denmark

This could have several causes:
For common setups of the near and far planes, normalized depth values will be high enough to appear all white across most of the scene, even though they are not actually identical. Remember that a depth texture has a precision of at least 16 bits, while your screen output has only 8 bits per color channel, so a depth texture may appear all white even when its values are not all identical.
On some setups (e.g. desktop OpenGL), a texture may appear all white when it is incomplete, that is, when texture filtering is set to use mipmaps but not all mipmap levels have been created. The same may apply to WebGL.
You may have hit a bug in the browser's WebGL implementation.
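The first cause is easy to check numerically: for a standard perspective projection, most of the scene maps to depth values very close to 1.0, which an 8-bit display rounds to white. A minimal sketch in plain JavaScript (the helper names are made up, and near = 0.1, far = 100 are example values, not taken from the question's code):

```javascript
// Depth-buffer value in [0,1] for a point at eye-space distance z,
// using the standard perspective projection depth formula:
function depthForDistance(z, near, far) {
  var ndc = (far + near) / (far - near) - (2 * far * near) / ((far - near) * z);
  return ndc * 0.5 + 0.5;
}

// Inverse mapping: recover the eye-space distance from a stored depth,
// useful for visualizing a depth texture as shades of gray:
function linearizeDepth(d, near, far) {
  var ndc = d * 2 - 1;
  return (2 * far * near) / (far + near - ndc * (far - near));
}

// With near = 0.1 and far = 100, even nearby points store values
// close to 1.0 -- indistinguishable from white on an 8-bit screen:
console.log(depthForDistance(1, 0.1, 100).toFixed(3));  // ~0.901
console.log(depthForDistance(10, 0.1, 100).toFixed(3)); // ~0.991
```

In a fragment shader you would apply the same linearization to the sampled depth before writing it to gl_FragColor, dividing by the far plane to bring it back into a visible [0,1] range.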

Related

How to make THREE.Mesh look volumetric with WebVR?

I'm working on porting an existing three.js project to WebVR + Oculus Rift. Basically, this app takes an STL file as input, creates a THREE.Mesh based on it, and renders it on an empty scene. I managed to make it work in Firefox Nightly with the VREffect plugin to three.js and VRControls. The problem is that models rendered in VR aren't really 3D. Namely, when I move the HMD back and forth, an active 3D model doesn't get closer/farther, and I can't see different sides of the model. It looks as if the model were a flat background image stuck in its position. If I add a THREE.AxisHelper to the scene, it is transformed correctly when the HMD is moved.
Originally, THREE.OrbitControls were used in the app and models were rotated and moved properly.
There's quite some amount of source code so I'll post some snippets on demand.
It turned out that technically there was no problem. The issue was essentially the different scales of my models and the Oculus movements. When VRControls is used with default settings, it reports the position of the HMD as it reads it from the Oculus, in meters. So the range of movement of my head could barely exceed 1 m, whereas the average size of my models is a few dozen of their own units. Used together in the same scene, it was as if the viewer were an ant looking at a giant model. Naturally, the ant has to walk a while to see another side of the model. That's why it didn't seem like a 3D body.
Fortunately, there's a scale property of VRControls that should be used for adjusting scale of HMD movements. When I set it to about 30, everything works pretty well.
Thanks to #brianpeiris's comment, I decided to check the coordinates of the model and camera once again to make sure they weren't tied to each other. And that led me to the solution.
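The effect of the scale factor can be sketched in plain JavaScript. This is a hypothetical stand-in for what the VRControls update step does with its scale property, not the actual plugin code:

```javascript
// VRControls copies the HMD pose into the camera each frame, multiplied
// by `scale`. With scale = 1, a 0.3 m head movement is negligible next
// to a model tens of units across. (Hypothetical helper for illustration.)
function applyHmdPosition(cameraPosition, hmdPositionMeters, scale) {
  return {
    x: cameraPosition.x + hmdPositionMeters.x * scale,
    y: cameraPosition.y + hmdPositionMeters.y * scale,
    z: cameraPosition.z + hmdPositionMeters.z * scale,
  };
}

// A 0.3 m sideways head move, scaled up to model units:
var p = applyHmdPosition({ x: 0, y: 0, z: 0 }, { x: 0.3, y: 0, z: 0 }, 30);
// p.x is now ~9 model units -- enough to actually see around the model
```

In the app itself the equivalent one-liner is setting `controls.scale` (here to 30) after constructing VRControls.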

Three.js running out of texture units

I have written an app using Three.js (r73) that allows the user to load multiple .dae files using the ColladaLoader.
If the user selects a sufficient number of objects the texture will not show for any of the objects...at this point I get this:
WebGLRenderer: trying to use 26 texture units while this GPU supports only 16
The error message seems fairly self-explanatory - does this mean I can only load 16 textures at any one time? Is there a way around this? Can I render my scene with half my objects - clear the texture units - and then render the other half?
Quite new to Three.js - so sorry if it's a stupid question.
This number is based on what your GPU supports, you can see it listed here at WebGL Report, under Max Texture Image Units: 16.
Many people confuse this number with how many textures you can have in a single scene; that is not what it means. It represents how many textures you can use for a single object (i.e. in a single draw call).
So if you have an extremely complicated object with hundreds of separate textures, you'll have to find a way to either merge the textures together or split the object into multiple objects that can be drawn separately.
However, if you draw 1000 separate objects, each with a different texture, this shouldn't be a problem.
The warning comes from exceeding the maximum number of "total" texture units, and not the Vertex texture units. Refer to WebGLRenderer.js, function getTextureUnit() for the reasoning behind this and the printing of this error message. (ex, https://searchcode.com/codesearch/view/96702746/, line 4730)
To avoid the warning, analyse the shaders, and reduce the count of texture units required in the shader, for the rendering.
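A quick way to see these limits on your own GPU is to query them from the context. A minimal sketch (the helper name is made up; `gl` is any WebGL rendering context):

```javascript
// Query the per-shader-stage and combined texture unit limits.
// (Hypothetical helper name; the getParameter calls are standard WebGL.)
function textureUnitLimits(gl) {
  return {
    fragment: gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS),          // e.g. 16
    vertex:   gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS),   // can be 0 on old GPUs
    combined: gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS), // total across both stages
  };
}

// In a browser: textureUnitLimits(canvas.getContext('webgl'))
```

The three.js warning above fires when a single draw call asks for more samplers than the fragment limit allows, which is why splitting the material across several meshes avoids it.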

Using a cubemap texture as diffuse source in place of a 2D one

I'm trying to project massive, 32k-resolution equirectangular maps on spheres.
Since a 32k texture is hardly accessible for older graphics cards supporting 1k-2k sized textures, and scaling a 32k image to 1k loses a tremendous amount of detail, I've resolved to splitting each source map by projecting each into 6 cube faces to make up a cubemap, so that more detail can be displayed on older cards.
However, I'm having trouble actually displaying these cubemapped spheres with three.js. I can set the MeshPhongMaterial.envMap to my cubemap, but of course this makes the sphere mesh reflect the texture instead.
An acceptable result can be produced by using ShaderMaterial along with ShaderLib['cube'] to "fake" a skysphere of some sort. But this drops all ability for lighting, normal mapping and all the other handy (and pretty) things possible with MeshPhongMaterial.
If at all possible, I'd like to not have to write an entire shader from scratch for such a simple tweak (switching one texture2D call to textureCube). Is there a way to coerce three.js to read the diffuse term from a cubemap instead of a 2D texture, or a simple way to edit the shader three.js uses internally?

Why is my simple webgl demo so slow

I've been trying to learn WebGL using these awesome tutorials. My goal is to make a very simple 2D game framework to replace the canvas-based jawsJS.
I basically just want to be able to create a bunch of sprites and move them around, and then maybe some tiles later.
I put together a basic demo that does this, but I hit a performance problem that I can't track down. Once I get to ~2000 or so sprites on screen, the frame rate tanks and I can't work out why. Compared to this demo of the pixi.js WebGL framework, which starts losing frames at about ~30000 bunnies or so (on my machine), I'm a bit disappointed.
My demo (framework source) has 5002 sprites, two of which are moving, and the frame rate is in the toilet.
I've tried working through the pixi.js framework to try to work out what they do differently, but it's 500kloc and does so much more than mine that I can't work it out.
I found this answer that basically confirmed that what I'm doing is roughly right - my algorithm is pretty much the same as the one in the answer, but there must be more to it.
I have so far tried a few things - using just a single 'frame buffer' with a single shape defined, which then gets translated 5000 times, once for each sprite. This did help the frame rate a little bit, but nothing close to the pixi demo (and it meant that all sprites had to be the same shape!). I cut out all of the matrix maths for anything that doesn't move, so it's not that either. It all seems to come down to the drawArrays() call - it's just going really slowly for me, but only in my demo!
I've also tried removing all of the texture based stuff, replacing the fragment shader with a simple block colour for everything instead. It made virtually no difference so I eliminated dodgy texture handling as a culprit.
I'd really appreciate some help in tracking down what incredibly stupid thing I've done!
Edit: I'm definitely misunderstanding something key here. I stripped the whole thing right back to basics, changing the vertex and fragment shaders to super simple:
attribute vec2 a_position;

void main() {
    gl_Position = vec4(a_position, 0, 1);
}
and:
void main() {
    gl_FragColor = vec4(0, 1, 0, 1); // green
}
then set the sprites up to draw to (0,0), (1,1).
With 5000 sprites, it takes about 5 seconds to draw a single frame. What is going on here?
A look at the frame calls using WebGL Inspector or the experimental canvas inspector in Chrome reveals a completely unoptimized rendering loop.
You can and should use one and the same vertex buffer to render all your geometry;
this way you can save the bindBuffer as well as the vertexAttribPointer calls.
You can also save 99% of your texture binds, as you're repeatedly rebinding the same texture. A texture remains bound as long as you do not bind something else to the same texture unit.
Having a state cache is helpful to avoid binding data that is already bound.
Take a look at my answer here about the gpu as a statemachine.
Once your rendering loop is optimized you can go ahead and consider the following things:
Use ANGLE_instanced_arrays extension
Avoid constructing data in your render loop.
Use an interleaved vertex buffer.
In some cases not using an index buffer also increases performance.
Check if you can shave off a few GPU cycles in your shaders
Break up your objects into chunks and do view frustum culling on the CPU side.
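The state-cache idea mentioned above can be sketched like this (hypothetical helper, not from the question's framework): remember what is bound to each unit and skip the GL call when nothing would change.

```javascript
// Returns a bind function that only issues GL calls when the requested
// texture is not already bound to the requested unit.
function makeTextureCache(gl) {
  var bound = {}; // unit -> currently bound texture
  return function bindTexture(unit, texture) {
    if (bound[unit] === texture) return false; // already bound, no GL call
    gl.activeTexture(gl.TEXTURE0 + unit);
    gl.bindTexture(gl.TEXTURE_2D, texture);
    bound[unit] = texture;
    return true;
  };
}
```

The same pattern applies to buffers, shader programs, and blend state: wrap each state-changing call so that redundant changes never reach the driver.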
The problem is probably this line in render: glixl.context.uniformMatrix3fv(glixl.matrix, false, this.matrix);.
In my experience, passing uniforms for each model is very slow in WebGL, and I was unable to maintain 60 FPS after ~1,000 unique models. Unfortunately there are no uniform buffers in WebGL 1 to alleviate this problem.
I solved my problem by calculating all the vertex positions on the CPU and drawing them all with a single drawArrays call. This works if the vertex count isn't overwhelming. I can draw 2k moving + rotating cubes at 60 FPS. I don't recall exactly how many cubes you can draw at 60 FPS, but it is quite a bit higher than 2k. If that isn't fast enough, then you have to look into instanced drawing (drawArraysInstancedANGLE via the ANGLE_instanced_arrays extension). Basically, store all the matrices in an array buffer and draw all your models with one instanced call using the correct offsets and such.
EDIT: also to the OP, if you want to see how PIXI does the vertex update rendering (NOT uniform instancing), see https://github.com/GoodBoyDigital/pixi.js/blob/master/src/pixi/renderers/webgl/utils/WebGLFastSpriteBatch.js.
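The CPU-side batching described in this answer can be sketched as follows (hypothetical names; assumes 2D sprites where each sprite carries a flat array of x,y vertices and a column-major 3x3 transform matrix):

```javascript
// Transform every sprite's vertices on the CPU and pack the results
// into one flat array, so a single drawArrays call can draw them all
// with no per-sprite uniform uploads. (Illustrative sketch.)
function fillBatchedPositions(sprites, out) {
  var o = 0;
  for (var i = 0; i < sprites.length; i++) {
    var m = sprites[i].matrix;   // column-major 3x3
    var v = sprites[i].vertices; // [x0, y0, x1, y1, ...]
    for (var j = 0; j < v.length; j += 2) {
      out[o++] = m[0] * v[j] + m[3] * v[j + 1] + m[6]; // x' = ax + cy + tx
      out[o++] = m[1] * v[j] + m[4] * v[j + 1] + m[7]; // y' = bx + dy + ty
    }
  }
  return o; // number of floats written
}

// Upload `out` once with gl.bufferData(..., gl.DYNAMIC_DRAW), then issue
// a single gl.drawArrays(gl.TRIANGLES, 0, floatsWritten / 2).
```

This trades some CPU work per frame for the elimination of thousands of uniformMatrix3fv and drawArrays calls, which is usually a large net win at sprite counts like the question's.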

Rounded Plane In THREE JS

THREE.js can often seem angular and straight-edged. I haven't used it for very long and thus am struggling to understand how to curve the world, so to speak. I would imagine a renderer or something must be changed, but the idea is to take a 2D map and turn it into a simple three-lane running game. However, if you look at the picture below from another similar game, how can I achieve the fisheye effect?
I would do that kind of effect on per-vertex base depending on the distance from the camera.
Also, maybe a bit tweaked perspective camera with bigger vertical fov would boost up the effect of the "curviness".
It's just a simple distortion effect that has been simulated in some way, it probably isn't really curved. Hope this helps.
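The per-vertex approach can be sketched as a simple displacement that grows with distance from the camera (the constant k and the quadratic falloff are just one possible choice, not taken from any particular game):

```javascript
// Bend the world downwards with distance so far-away geometry falls
// away below the horizon, giving the "curved world" look.
// (Hypothetical helper; in three.js this would live in a custom
// vertex shader or run over geometry.vertices each frame.)
function curveVertex(vertex, cameraZ, k) {
  var dz = vertex.z - cameraZ; // distance along the view axis
  return {
    x: vertex.x,
    y: vertex.y - k * dz * dz, // quadratic drop-off with distance
    z: vertex.z,
  };
}
```

Because the offset is quadratic, nearby geometry is essentially untouched while the far end of the track visibly curves away, which matches the effect in the screenshot.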
I'm sure there are many possible different approaches... Here's one that creates nice barrel distortion effect.
You can do something like that by rendering normal wide angle camera to a texture, then project it to a lens-shaped plane (a sphere even), then the actual on-screen render is from a camera pointing to that.
I don't have the code available ATM, but I should be able to dig it up in a few days if you're interested. Or you can just adapt from the three.js examples. Three.js includes some postprocessing examples where the scene is first rendered into a texture, and that texture is applied to a quad then rendered with an orthographic camera. You can modify such an example by changing the orthographic camera to a perspective one, then distorting/changing the quad to something more appropriately shaped.
Taken to extremes, this approach can produce some pixelization / blocky artifacts.
