I want to render lines of arbitrary thickness in WebGL. From looking around, the best way appears to be to generate geometry for a TRIANGLE_STRIP.
My lines are updated every frame (they're basically simulated ropes), and I am already heavily CPU-bound, so I want as much work as possible on the GPU and as little as possible on the CPU.
So, from what I understand, the least amount of work is to push a buffer with each point twice plus an index, so the vertex shader can push the duplicates apart.
I have that working with miter-joints.
But I want round joints. Everything I've found on Google, however, talks about generating extra triangles in the joint region on the CPU and pushing them to the GPU. Since WebGL doesn't have geometry shaders, there is no obvious way to move that concept onto the GPU.
The fact that vertices are shared between lines in a TRIANGLE_STRIP doesn't make this easier.
My current solution is to make it so that, for a fragment that results from a line between A and B, the fragment shader has one varying that interpolates from A to B, another varying that interpolates from B to A, and a way to tell where in the interpolation it is. That gives me two unknown values and two equations; solving them gives me A and B in the fragment shader. It's more complex to get right than it may sound at first, because a TRIANGLE_STRIP reuses vertices, and it has small (acceptable to me, though) accuracy issues around the middle and the edges of the interpolation.
It's a pretty convoluted solution.
Is there any "common" solution to rendering round line joints via shaders?
How do big libraries like three.js handle this? I've tried to search their source, but it's a big project ;)
Here is something to try: send the GPU the center vertex position, a polarity (+1 or -1) and the direction of the vertex, e.g. [A, +1, dir(B-A)], [A, -1, dir(B-A)], [B, +1, dir(C-B)], ...
The cross product of the camera viewing direction and the vertex direction, multiplied by the polarity and added to the vertex center (cross(camDir, a_vertexDir) * a_polarity + a_vertexCenter), gives the geometry of the line. Do this in the vertex shader. Send the interpolated center position to the fragment shader and use the distance between the fragment and that center to modulate the line.
The idea is similar to this: http://codeflow.org/entries/2012/aug/05/webgl-rendering-of-solid-trails/.
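A minimal sketch of that vertex shader, with hypothetical attribute/uniform names and an added u_halfWidth uniform for the line thickness (not part of the description above):

const extrudeVert = `
attribute vec3 a_vertexCenter;   // the point on the line (A, B, C, ...)
attribute vec3 a_vertexDir;      // normalized segment direction at this point
attribute float a_polarity;      // +1 or -1: which side this vertex is pushed to
uniform vec3 u_camDir;           // camera viewing direction
uniform float u_halfWidth;       // assumed uniform: half the line thickness
uniform mat4 u_mvp;              // assumed combined model-view-projection matrix
varying vec3 v_center;           // interpolated center position
varying vec3 v_pos;              // interpolated extruded position

void main() {
  vec3 offset = cross(u_camDir, a_vertexDir) * a_polarity * u_halfWidth;
  vec3 pos = a_vertexCenter + offset;
  v_center = a_vertexCenter;
  v_pos = pos;
  gl_Position = u_mvp * vec4(pos, 1.0);
}
`;

In the fragment shader, length(v_pos - v_center) then tells you how far the fragment is from the center line, which is what you use to modulate (fade or round off) the line.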
I've since implemented a solution I am happy with now.
The key points are:
use miter joints generated in the vertex shader
use a bunch of varyings, alternating between two sets of them based on the point index, so the fragment shader can find the segment it belongs to
I've written a blog post about the details here: http://nanodesu.info/oldstuff/2D-lines-with-round-joints-using-WebGL-shaders/
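As a rough illustration of why recovering the segment endpoints is worth the trouble: once the fragment shader knows A and B, a simple distance-to-segment test gives round joints and caps for free. The varying/uniform names below are hypothetical, not the exact ones from the post.

const roundJointFrag = `
precision mediump float;
varying vec2 v_a;          // segment start, recovered per fragment
varying vec2 v_b;          // segment end, recovered per fragment
varying vec2 v_pos;        // this fragment's position in the same space
uniform float u_halfWidth; // half the line thickness

// distance from point p to the segment a-b
float segDist(vec2 p, vec2 a, vec2 b) {
  vec2 ab = b - a;
  float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
  return length(p - a - ab * t);
}

void main() {
  // keeping only fragments within u_halfWidth of the segment rounds off
  // the joints and caps, because the distance field is circular there
  if (segDist(v_pos, v_a, v_b) > u_halfWidth) discard;
  gl_FragColor = vec4(1.0);
}
`;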
I'm searching for one (or more) best practices for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: Voxels (Volumetric Pixels), forming a cube, with coordinates x,y,z and a color attached.
Goal: Use OpenGL to display this data, as you move through it from different sides.
Question: What's the best practice for rendering those voxels, depending on the viewpoint? How (in which type of object) should the data be stored?
Consider the following:
The cube of data can be considered as z layers of x/y data. It should be possible to view in between layers; the displayed color should then be interpolated from the closest matching voxels.
For my application, I have data sets of (x,y,z) = (512,512,128) and larger, containing medical data (scans of hearts, brains, ...).
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are two major ways to represent / render 3D datasets: rasterization and ray-tracing.
One fair rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section. You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second for thousands of vertices.
However, unless you simply want to represent cubes (which I doubt) instead of a surface, you will also need more information associated with each of your voxels than just positions and colors.
The other way is ray-casting. Unless you find a really efficient ray-casting algorithm, a naive implementation will take a serious performance hit.
You can cast rays from your camera position through your data structure, stop marching when you reach a surface, and project the intersection point back to screen space with the desired color.
You can draw the resulting pixels into a texture and map it onto a full-screen quad with a simple shader.
In both cases, you need more information than just colors and cubes. For example, you need at least density values at each corner of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for ray-casting: you need at least some density information to figure out where the surface lies.
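A rough sketch of that marching loop, assuming WebGL2 so the densities can live in a 3D texture; all names are hypothetical and the fixed step size is arbitrary.

const volumeMarchFrag = `#version 300 es
precision highp float;
precision highp sampler3D;

uniform sampler3D u_volume;   // density per voxel, volume mapped to [0,1]^3
uniform vec3 u_rayOrigin;     // ray start in volume space (assumed inside [0,1]^3)
uniform float u_isoLevel;     // density threshold that defines the surface
in vec3 v_rayDir;             // per-fragment ray direction from a full-screen pass
out vec4 outColor;

void main() {
  vec3 p = u_rayOrigin;
  vec3 delta = normalize(v_rayDir) * 0.005;   // arbitrary fixed step size
  outColor = vec4(0.0);                       // background if nothing is hit
  for (int i = 0; i < 512; i++) {
    p += delta;
    if (any(lessThan(p, vec3(0.0))) || any(greaterThan(p, vec3(1.0)))) break;
    float density = texture(u_volume, p).r;
    if (density > u_isoLevel) {               // reached the surface: shade and stop
      outColor = vec4(vec3(density), 1.0);
      break;
    }
  }
}
`;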
One of the keys is also how you organize the data in your structure, especially for out-of-core access.
I'm trying to make custom filters with Phaser, but I don't get how the uniforms, and vTextureCoord in particular, are specified. Here's a JSFiddle (EDIT: ignore the image, the minimal case lies in the square gradient):
Why isn't the top-right corner white? I've set both the filter resolution and the sprite size to 256, yet vTextureCoord only goes from [0,0] to [.5,.5] (or so it seems)
Try dragging the sprite: it seems to be blocked by a wall at the top and left borders. It's only shader-related though, as the game object itself is correctly dragged. How come?
I pulled my hair out over this one during the last Ludum Dare, trying to figure out the pixel position within the sprite (i.e. [0,0] in the bottom-left corner and [sprite.w, sprite.h] in the top-right one)... But I couldn't find any reliable way to compute that, whatever the sprite position and size are.
Thanks for your help!
EDIT: As emackey pointed out, it seems like either Phaser or Pixi (I'm not sure at which level it's handled) uses an intermediate texture. Because of this, the uSampler I get is not the original texture but a modified one that is, for example, shifted/cropped if the sprite is beyond the top-left corner of the screen. The uSampler and vTextureCoord work well together, so as long as I'm doing simple things like color tweaks all seems well, but for toying with texture coordinates it's simply not reliable.
Can a Phaser/Pixi guru explain why it works that way, and what I'm supposed to do to get clear coordinates and work with my actual source texture? I managed to hack a shader by "fixing vTextureCoord" and plugging my texture in iChannel0, but this feels a bit hacky.
Thanks.
I'm not too familiar with Phaser, but we can shed a little light on what that fragment shader is really doing. Load your jsFiddle and replace the GLSL main body with this:
void main() {
  gl_FragColor = vec4(vTextureCoord.x * 2., vTextureCoord.y * 2., 1., 1.);
  gl_FragColor *= texture2D(uSampler, vTextureCoord) * 0.6 + 0.4;
}
The above filter shader is a combination of the original texture (with some gray added) and your colors, so you can see both the texture and the UVs at the same time.
You're correct that vTextureCoord only goes to 0.5, hence the * 2. above, but that's not the whole story: Try dragging your sprite off the top-left. The texture slides but the texture coordinates don't move!
How is that even possible? My guess is that the original sprite texture is being rendered to an intermediate texture, using some of the sprite's location info for the transform. By the time your custom filter runs, your filter GLSL code is running on what's now the transformed intermediate texture, and the texture coordinates no longer have a known relation to the original sprite texture.
If you run the Chrome Canvas Inspector you can see that indeed there are multiple passes, including a render-to-texture pass. You can also see that the filter pass is using coordinates that appear to be the ratio of the filter area size to the game area size, which in this case is 0.5 on both dimensions.
I don't know Phaser well enough to know if there's a quick fix for any of this. Maybe you can add some uniforms to the filter that would give the shader the extra transform it needs, if you can figure out where that comes from exactly. Or perhaps there's a way to attach a shader directly on the sprite itself (there's a null field of the same name) so you could possibly run your GLSL code there instead of in the filter. I hope this answer has at least explained the "why" of your two questions above.
I'm learning WebGL. I've managed to draw stuff and hopefully understood the pipeline. Now, every tutorial I see explains matrices before even loading a mesh. While that may be fine for most people, I think I need to concentrate on the process of loading external geometry, maybe through a JSON file. I've read that OpenGL by default displays things orthogonally, so I ask: is it possible to display a 3D mesh without any kind of transformation?
Now, every tutorial I see explains matrices before even loading a mesh.
Yes. Because understanding transformations is essential, and you will need to work with them. They're not hard to understand, and the sooner you wrap your head around them, the better. In the case of OpenGL, the model-view transformation part is actually rather simple:
The transformation matrix is just a bunch of vectors (in columns) placed within a "parent" coordinate system. The first three columns define how the X, Y and Z axes of the "embedded" coordinate system are aligned within the "parent"; the W column moves it around. By varying the lengths of the base vectors you can stretch, i.e. scale, things.
That's it, there's nothing more to it (in the modelview) than that. Learn the rules of matrix-matrix multiplication. Matrix-vector multiplication is just a special case of matrix-matrix multiplication.
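As a concrete (hypothetical) illustration of those columns, here is a model-view matrix built by hand in the column-major layout WebGL expects:

// xAxis, yAxis, zAxis: base vectors of the model's coordinate system expressed
// in the parent's coordinates; pos: the W column that moves the model around.
function makeModelView(xAxis, yAxis, zAxis, pos) {
  return new Float32Array([
    xAxis[0], xAxis[1], xAxis[2], 0,  // column 0: X axis
    yAxis[0], yAxis[1], yAxis[2], 0,  // column 1: Y axis
    zAxis[0], zAxis[1], zAxis[2], 0,  // column 2: Z axis
    pos[0],   pos[1],   pos[2],   1,  // column 3: translation (W)
  ]);
}

// Identity orientation, Y axis stretched to length 2 (a 2x scale in Y),
// moved 3 units along X:
const modelView = makeModelView([1, 0, 0], [0, 2, 0], [0, 0, 1], [3, 0, 0]);
// gl.uniformMatrix4fv(modelViewLocation, false, modelView);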
The projection matrix is a little bit trickier, but I suggest you don't bother too much with it; just use GLM, Eigen::3D or linmath.h to build it. The best analogy for the projection matrix is that it is the "lens" of OpenGL, i.e. this is where you apply zoom (aka field of view), tilt and shift. The placement of the "camera", however, is defined through the modelview matrix.
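If you do want to see what those libraries build for you, this is the standard OpenGL-style perspective matrix in the same column-major layout (a sketch; fovY, aspect, near and far are the usual parameters):

function perspective(fovYRadians, aspect, near, far) {
  const f = 1 / Math.tan(fovYRadians / 2);   // cotangent of half the vertical field of view
  return new Float32Array([
    f / aspect, 0, 0,                                0,
    0,          f, 0,                                0,
    0,          0, (far + near) / (near - far),     -1,
    0,          0, (2 * far * near) / (near - far),  0,
  ]);
}

// e.g. perspective(Math.PI / 3, viewportWidth / viewportHeight, 0.1, 100.0)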
is it possible to display a 3d mesh without any kind of transformation?
No, because the mesh coordinates have to be transformed into screen coordinates. However, an identity transform is perfectly possible, which, yes, looks like a dead-on orthographic projection where the coordinate range [-1, 1] in each dimension is mapped to fill the viewport.
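A minimal sketch of that identity case: the vertex shader passes positions straight through, so whatever falls in the [-1, 1] range per axis fills the viewport (the attribute name is hypothetical).

const passthroughVert = `
attribute vec3 a_position;   // mesh coordinates, assumed to already be in [-1, 1]
void main() {
  // no matrices at all
  gl_Position = vec4(a_position, 1.0);
}
`;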
Hey, I'm trying to implement shadow mapping in WebGL using this example:
tutorial
What I'm trying to do is:
initialize the depth texture and framebuffer.
draw the scene to that framebuffer with a simple shader, then draw a new scene with a box that uses the depth texture as its texture, so I can see the depth map using another shader.
I think it looks OK with the color texture, but I can't get it to work with the depth texture; it's all white.
I put the code on Dropbox:
source code
Most of it is in the files index.html, webgl_all.js and objects.js. There are some light shaders I'm not using at the moment.
Really hope somebody can help me.
Greetings from Denmark.
This could have several causes:
For common setups of the near and far planes, normalized depth values will be high enough to appear all white for most of the scene, even though they are not actually identical. (Remember that a depth texture has a precision of at least 16 bits, while your screen output has only 8 bits per color channel, so a depth texture may appear all white even when its values are not all identical.) One way to check is to linearize the depth values before displaying them; see the sketch after this list.
On some setups (e.g. desktop OpenGL), a texture may appear all white when it is incomplete, that is, when texture filtering is set to use mipmaps but not all mipmap levels have been created. This may be the same with WebGL.
You may have hit a browser WebGL implementation bug.
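For the first cause, a hedged sketch of how to check whether the depth texture really varies: linearize the stored depth with the near/far planes that were used when rendering it, before displaying it (the uniform and varying names are assumptions, not from the linked code).

const showDepthFrag = `
precision mediump float;
uniform sampler2D u_depthTexture;
uniform float u_near;   // near plane used when rendering the depth texture
uniform float u_far;    // far plane used when rendering the depth texture
varying vec2 v_uv;

void main() {
  float z = texture2D(u_depthTexture, v_uv).r;  // non-linear depth in [0, 1]
  float ndc = z * 2.0 - 1.0;                    // back to NDC [-1, 1]
  float linearDepth = (2.0 * u_near * u_far) / (u_far + u_near - ndc * (u_far - u_near));
  gl_FragColor = vec4(vec3(linearDepth / u_far), 1.0);  // scaled so differences show up
}
`;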
I am still working on my "JavaScript 3D engine" (link inside Stack Overflow).
At first, all my polygons were faces of cubes, so sorting them by average Z worked fine.
But now I've "evolved" and I want to draw my polygons (which may contain more than 4 vertices) in the right order, namely so that those closest to the camera are drawn last.
Basically, I know how to rotate them and "perspective"-ize them into 2D, but I don't know how to draw them in the right order.
just to clarify:
//my 3d shape = array of polygons
//polygon = array of vertices
//vertex = point with x,y,z
//rotation is around (0,0,0) and my view point is (0,0,something) I guess.
can anyone help?
p.s: some "catch phrases" I came up with, looking for the solution: z-buffering, ray casting (?!), plane equations, view vector, and so on - guess I need a simple to understand answer so that's why I asked this one. thanks.
p.s2: i don't mind too much about overlapping or intersecting polygons... so maybe the painter's algorthm indeed might be good. but: what is it exactly? how do I decide the distance of a polygon?? a polygon has many points.
The approach of sorting polygons and then drawing them back-to-front is called the "painter's algorithm". Unfortunately, the sorting step is in general an unsolvable problem, because three polygons can each partially overlap the others in a cycle.
Thus there is not necessarily any polygon that is "on top". Alternative approaches such as using a Z-buffer or a BSP tree (which involves splitting polygons) don't suffer from this problem.
how do I decide the distance of a polygon?? a polygon has many points.
Painter's algorithm is the simplest to implement, but it works only in very simple cases because it assumes that there is only a single "distance" or z-value for each polygon (which you could approximate to be the average of z-values of all points in the polygon). Of course, this will produce wrong results if two polygons intersect each other.
In reality, there isn't a single distance value for a polygon -- each point on the surface of a polygon can be at a different distance from the viewer, so each point has its own "distance" or depth.
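A minimal sketch of that approximation, matching the data layout from the question (a shape is an array of polygons, a polygon is an array of {x, y, z} vertices, and the viewer sits at positive z looking toward the origin; drawPolygon is a hypothetical 2D drawing routine):

// Approximate each polygon's distance by the average z of its vertices and
// draw the farthest polygons first, the closest ones last.
function averageZ(polygon) {
  let sum = 0;
  for (const v of polygon) sum += v.z;
  return sum / polygon.length;
}

function paintersSort(polygons) {
  // Viewer at (0, 0, +something): larger z means closer to the camera,
  // so sort ascending (smallest average z, i.e. farthest, comes first).
  return polygons.slice().sort((a, b) => averageZ(a) - averageZ(b));
}

// for (const polygon of paintersSort(shape)) drawPolygon(polygon);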
You already mentioned Z-buffering, and that is one way of doing this. I don't think you can implement it efficiently on an HTML canvas, but here's the general idea (a rough sketch follows after these two points):
You need to maintain an additional canvas, the "z-buffer", where each pixel's colour represents the z-depth of the corresponding pixel on the main canvas.
To draw a polygon, you go through each point on its surface and draw only those points which are closer to the viewer than any previous objects, as indicated by the z-buffer.
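A rough sketch of that per-pixel test, using a plain Float32Array for the depth values instead of a second canvas (encoding depth as a pixel color works too, it is just harder to read back); the canvas size and ImageData are assumed to come from your main canvas:

const width = 640, height = 480;   // assumed canvas size
const zbuf = new Float32Array(width * height).fill(Infinity);

// depth: distance from the viewer (smaller = closer); color: [r, g, b]
function plotPixel(x, y, depth, color, imageData) {
  const i = y * width + x;
  if (depth >= zbuf[i]) return;    // something closer was already drawn here
  zbuf[i] = depth;
  const p = i * 4;
  imageData.data[p]     = color[0];
  imageData.data[p + 1] = color[1];
  imageData.data[p + 2] = color[2];
  imageData.data[p + 3] = 255;     // opaque
}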
I think you will get some ideas by investigating BSP trees (binary space partitioning trees), even if the algorithm will require splitting some of your polygons in two.
An example can be found here: http://www.devmaster.net/articles/bsp-trees/, or by googling for BSP trees. Posting code as a reply is, in my opinion, not serious, since it is a complex topic.