Why is it that the light is moving in this WebGL example - javascript

Here is an example of Gouraud interpolation and a Lambertian reflection model from a textbook.
https://jsfiddle.net/zhenghaohe/r73knp0h/6/
However, there is an error in the textbook: it says the code should contain the following line, when in fact it does not.
vec3 light = vec3(uModelViewMatrix * vec4(uLightDirection, 0.0));
The weird thing is the example still seems to work.
I am aware that the sphere is rotating because of
mat4.rotate(modelViewMatrix, modelViewMatrix, angle * Math.PI / 180, [0, 1, 0]);
However, it seems to me that the light is also moving with the sphere, and I cannot find anywhere in the code where the light is being moved.
Can someone please point me to the code where we also rotate the light?

The light does not rotate; it is fixed in a static position and direction. The problem here is that you do not seem to understand what a normal is and how it is used in computer graphics.
A computer model is a series of "vertices" that connect to form "faces" (usually triangles). When "realistic" light is introduced into a scene, an additional piece of information is needed to determine how it should interact with each face of the model; this is called a "normal." A normal is a directional vector that is generally perpendicular to a face, but it does not have to be, which will become important for your problem. This normal is used to compute how light interacts with that surface.
So you have three sets of data: the vertices, the indices (how the vertices come together to form faces), and the normals (computed automatically in your example). The problem arises when you transform the model (for example by rotating it) but do not apply a corresponding transformation to the normals that were computed before the transformation.
Let's visualize this: say we have the following pyramid with one of its normals drawn to illustrate the problem:
Now when we start to rotate the pyramid but leave the normals' directions unchanged, we see that the angle between each normal and its face begins to change.
For things to work as expected we need to also rotate the normals so that the angle relative to the face does not change.
The angle of the light relative to the surface normal is what dictates how the surface is shaded by the light. When you rotate the model without rotating the normals, they end up pointing in "random" directions; this throws off the light computation and makes it appear as if the light is rotating, when it is not.
Obviously this is a very watered down explanation of what is happening, but it should give you a basic understanding of what a normal is and why you need to apply transformations to them as well.
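If you want to see what that transformation typically looks like in a gl-matrix/WebGL setup, here is a minimal sketch. The identifier names (uNormalMatrix, uNormalMatrixLocation, aVertexNormal, uLightDirection) are illustrative and may not match the fiddle exactly:

// Minimal sketch, not the fiddle's exact code: rebuild the normal matrix from the
// model-view matrix each frame so the normals rotate together with the sphere.
// The normal matrix is the transpose of the inverse of the model-view matrix.
const normalMatrix = mat4.create();
mat4.invert(normalMatrix, modelViewMatrix);
mat4.transpose(normalMatrix, normalMatrix);
gl.uniformMatrix4fv(uNormalMatrixLocation, false, normalMatrix);

// In the vertex shader the normal is transformed but the light direction is not,
// which is why the light stays fixed while the geometry rotates:
const vertexShaderSnippet = `
  vec3 N = normalize(vec3(uNormalMatrix * vec4(aVertexNormal, 0.0)));
  vec3 L = normalize(uLightDirection);   // light direction left untouched
  float lambertTerm = max(dot(N, -L), 0.0);
`;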

Related

P5.js camera not working with multiple rotation matrices

I've been working for some time on making a 3D first person camera in p5.js for games and random projects, but I've been having some trouble.
Until now I've been using a single y-rotation matrix to let the player look around, but I recently decided to upgrade to x and y rotation matrices for my camera code. I was able to botch together a system that kind of worked by dividing both calculated z values, but it had problems, and that's not really how rotation matrices work anyway. I recently tried a proper implementation, but I've come across some issues.
I've been using this: camera(0, 0, 0, -200*sin(rot.y), -200*sin(rot.x), (200*cos(rot.x)) + (200*cos(rot.y)), 0, 1, 0); as my test camera code, which in theory should work, but in practice it doesn't for some reason, as you can see here. Right now, if you look around too far, the view suddenly jumps and messes up the direction you're looking.
I can also confirm that I am using the correct formulas, as shown here. I used (almost) exactly the same code for calculating the values, and it looks completely fine.
Is there any weird trick to using the p5.js camera or is this some error that needs to be fixed?
You actually don't have the correct formulas. The example you showed uses orbitControl(), not camera(). It also doesn't rotate through two different angles.
The middle 3 coordinates of camera() define the point toward which the camera is pointing. That means that you want that point to move the same way you want the focus of the camera to move. It might help to draw a box at that point like this (in your original):
// draw a marker box at the exact point the camera is aimed at
push();
translate(-200*sin(rot.y), -200*sin(rot.x), (200*cos(rot.x)) + (200*cos(rot.y)));
box(50);
pop();
You'll notice that the box is not always the same distance from the camera. It stays on a torus whose major and minor radii are both 200. What you want is a sphere with radius 200 (really it can have any radius).
The way you define these three coordinates depends on how you want the user's interactions to be. Here's one way:
camera(0, 0, 0,
       cos(rot.x) * cos(rot.y),
       cos(rot.x) * sin(rot.y),
       sin(rot.x),
       0, 0, 1);
This points the camera based on latitude and longitude, with the north pole on the positive Z axis. Moving the mouse right and left affects the longitude, and up and down affects the latitude.
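If it helps, here is a minimal runnable sketch of that camera. The way rot.x and rot.y are mapped from the mouse below is just an illustrative choice, not something taken from your code:

// Minimal p5.js sketch (WEBGL mode) illustrating the latitude/longitude camera.
// The mouse-to-rotation mapping is only an example.
let rot = { x: 0, y: 0 };

function setup() {
  createCanvas(600, 400, WEBGL);
}

function draw() {
  background(220);

  // horizontal mouse position -> longitude, vertical -> latitude
  rot.y = map(mouseX, 0, width, -PI, PI);
  rot.x = map(mouseY, 0, height, -HALF_PI + 0.01, HALF_PI - 0.01);

  camera(0, 0, 0,
         cos(rot.x) * cos(rot.y),   // x of the look-at point
         cos(rot.x) * sin(rot.y),   // y of the look-at point
         sin(rot.x),                // z of the look-at point
         0, 0, 1);                  // up vector: positive Z is "north"

  // some reference geometry arranged around the camera
  for (let i = 0; i < 8; i++) {
    push();
    rotateZ(i * QUARTER_PI);
    translate(200, 0, 0);
    box(30);
    pop();
  }
}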

Compute UV coordinates for threecsg mesh

For a university project I created a ThreeCSG subtract mesh in three.js. I want to apply a texture to this mesh, but the UV coordinates are missing after the CSG processing, which is causing me some trouble. It needs to be done with ThreeCSG, because that is a project requirement.
This is what the mesh looks like now: screenshot link
I found some code here: THREE.js generate UV coordinate
And it did get me closer to the solution. The side faces now display the applied texture the right way: screenshot link
The top side has many weird faces. I tried to use THREE.SimplifyModifier to get fewer faces so I might be able to calculate and set the UV coordinates myself, but I failed.
I thought one solution might be to "just iterate" over the top and bottom sides and kind of "cut off" the texture at the border, like it would be if the mesh were a cube. The mesh has about 350 faces; I could probably set the corner vertices myself, but it would be nice if the UV coordinates of the vertices in between could be calculated - I just have no idea how to do this. I don't care about the side where the cylinder is cut off, because it will not be visible at the end.
thank you so much!
The CSG library I'm maintaining preserves UV coordinates from cut geometry.
https://github.com/manthrax/THREE-CSGMesh
http://vectorslave.com/csg/CSGDemo.html
http://vectorslave.com/csg/CSGShinyDemo.html
#JonasOe
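Roughly, using it looks like this (a sketch rather than code copied from the repo; `texture` and `scene` are assumed to already exist, and you should double-check the CSG.fromMesh / CSG.toMesh names against the README):

// Rough sketch of a CSG subtract with https://github.com/manthrax/THREE-CSGMesh.
// Function names (CSG.fromMesh, subtract, CSG.toMesh) are assumed; verify them first.
const material = new THREE.MeshStandardMaterial({ map: texture });
const boxMesh = new THREE.Mesh(new THREE.BoxGeometry(2, 2, 2), material);
const cylMesh = new THREE.Mesh(new THREE.CylinderGeometry(0.6, 0.6, 3, 32), material);
boxMesh.updateMatrix();
cylMesh.updateMatrix();

const bspBox = CSG.fromMesh(boxMesh);
const bspCyl = CSG.fromMesh(cylMesh);
const bspResult = bspBox.subtract(bspCyl);   // box minus cylinder

// The cut faces keep the UVs of the source geometry, so the texture applied
// through `material` maps without recomputing UV coordinates by hand.
const resultMesh = CSG.toMesh(bspResult, boxMesh.matrix, material);
scene.add(resultMesh);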

WebGL: round line joints via shader in 2D

I want to render lines of arbitrary thickness in WebGL. From looking around the best way appears to be to generate geometry for a TRIANGLE_STRIP.
My lines are updated every frame (they're basically simulated ropes) and I am already heavily CPU-bound, so I want to do as much work as possible on the GPU and as little as possible on the CPU.
So from what I understand, the least amount of work is to push a buffer with each point twice plus an index, so the vertex shader can push them apart.
I have that working with miter-joints.
But I want round joints. Everything I've found on Google, however, talks about generating extra triangles in the joint region on the CPU and uploading them. Since WebGL doesn't have geometry shaders, there is no obvious way to move that approach onto the GPU.
The fact that vertices are shared between lines in a TRIANGLE_STRIP doesn't make this easier.
My current solution is to make it so that, for a fragment on the line between A and B, the fragment shader has one varying that interpolates from A to B, another varying that interpolates from B to A, and a value that tells it where in the interpolation it is. That gives me two unknowns and two equations; solving them gives me A and B in the fragment shader. It's trickier to get right than it may sound, since a TRIANGLE_STRIP reuses vertices, and it has small (acceptable to me) accuracy issues around the middle and the edges of the interpolation.
It's a pretty convoluted solution.
Is there any "common" solution to rendering round line joints via shaders?
How do big libraries like three.js handle this? I've tried to search their source, but it's a big project ;)
Here is something to try: send to the GPU the center vertex position, a polarity (+1 or -1), and the direction of the segment at that vertex. E.g. [A, +1, dir(B-A)], [A, -1, dir(B-A)], [B, +1, dir(C-B)], ...
The cross product of the camera viewing direction and the vertex direction, multiplied by the polarity and added to the vertex center (cross(camDir, a_vertexDir) * a_polarity + a_vertexCenter), gives the geometry of the line. Do this in the vertex shader. Send the interpolated center positions to the fragment shader and use the distance between the fragment and the interpolated center position to modulate the line.
The idea is similar to this: http://codeflow.org/entries/2012/aug/05/webgl-rendering-of-solid-trails/.
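A rough GLSL sketch of that idea follows. Every attribute/uniform name here is invented, and the half-width uniform plus the smoothstep fade are assumptions added for illustration:

// Sketch only; names are invented and the width/fade handling is an assumption.
const roundJointVertexShader = `
  attribute vec3 a_vertexCenter;  // segment end point, duplicated for both sides
  attribute vec3 a_vertexDir;     // normalized direction of the segment at this vertex
  attribute float a_polarity;     // +1.0 or -1.0: which side of the center line
  uniform vec3 u_camDir;          // camera viewing direction
  uniform float u_halfWidth;      // half of the desired line thickness
  uniform mat4 u_mvpMatrix;
  varying vec3 v_center;          // interpolated center-line position
  varying vec3 v_pos;             // interpolated extruded position

  void main() {
    vec3 side = normalize(cross(u_camDir, a_vertexDir));
    vec3 pos = a_vertexCenter + side * a_polarity * u_halfWidth;
    v_center = a_vertexCenter;
    v_pos = pos;
    gl_Position = u_mvpMatrix * vec4(pos, 1.0);
  }
`;

const roundJointFragmentShader = `
  precision mediump float;
  uniform float u_halfWidth;
  varying vec3 v_center;
  varying vec3 v_pos;

  void main() {
    // Fade out fragments farther from the interpolated center than the half width;
    // around shared joint vertices this produces a rounded cap.
    float d = distance(v_pos, v_center);
    float alpha = 1.0 - smoothstep(u_halfWidth * 0.85, u_halfWidth, d);
    gl_FragColor = vec4(vec3(0.1), alpha);
  }
`;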
I've since implemented a solution I am happy with.
The key points are:
use miter joints generated in the vertex shader
use a bunch of varyings and the point index to alternate between two sets of them to find the segment in the fragment shader
I've written a blog post about the details here: http://nanodesu.info/oldstuff/2D-lines-with-round-joints-using-WebGL-shaders/

Writing fragment shaders: cannot make sense of how the uniforms are defined

I'm trying to make custom filters with Phaser, but I don't get how the uniforms, and vTextureCoord in particular, are specified. Here's a JSFiddle (EDIT: ignore the image, the minimal case lies in the square gradient):
Why isn't the top-right corner white? I've set both the filter resolution and the sprite size to 256, yet vTextureCoord only goes from [0,0] to [.5,.5] (or so it seems).
Try dragging the sprite: it seems to be blocked by a wall at the top and left borders. It's only shader-related though, as the game object itself is correctly dragged. How come?
I pulled my hair out over this one during the last Ludum Dare, trying to figure out the pixel position within the sprite (i.e. [0,0] on the bottom left corner and [sprite.w, sprite.h] on the top right one)... But I couldn't find any reliable way to compute it regardless of the sprite's position and size.
Thanks for your help!
EDIT: As emackey pointed out, it seems like either Phaser or Pixi (I'm not sure at which level it's handled) uses an intermediate texture. Because of this, the uSampler I get is not the original texture but a modified one that is, for example, shifted/cropped if the sprite is beyond the top-left corner of the screen. The uSampler and vTextureCoord work well together, so as long as I'm making simple things like color tweaks all seems well, but for toying with texture coordinates it's simply not reliable.
Can a Phaser/Pixi guru explain why it works that way, and what I'm supposed to do to get clear coordinates and work with my actual source texture? I managed to hack a shader by "fixing vTextureCoord" and plugging my texture in iChannel0, but this feels a bit hacky.
Thanks.
I'm not too familiar with Phaser, but we can shed a little light on what that fragment shader is really doing. Load your jsFiddle and replace the GLSL main body with this:
void main() {
    gl_FragColor = vec4(vTextureCoord.x * 2., vTextureCoord.y * 2., 1., 1.);
    gl_FragColor *= texture2D(uSampler, vTextureCoord) * 0.6 + 0.4;
}
The above filter shader is a combination of the original texture (with some gray added) and your colors, so you can see both the texture and the UVs at the same time.
You're correct that vTextureCoord only goes to 0.5, hence the * 2. above, but that's not the whole story: Try dragging your sprite off the top-left. The texture slides but the texture coordinates don't move!
How is that even possible? My guess is that the original sprite texture is being rendered to an intermediate texture, using some of the sprite's location info for the transform. By the time your custom filter runs, your filter GLSL code is running on what's now the transformed intermediate texture, and the texture coordinates no longer have a known relation to the original sprite texture.
If you run the Chrome Canvas Inspector you can see that indeed there are multiple passes, including a render-to-texture pass. You can also see that the filter pass is using coordinates that appear to be the ratio of the filter area size to the game area size, which in this case is 0.5 on both dimensions.
I don't know Phaser well enough to know if there's a quick fix for any of this. Maybe you can add some uniforms to the filter that would give the shader the extra transform it needs, if you can figure out where that comes from exactly. Or perhaps there's a way to attach a shader directly to the sprite itself (there's a null field of the same name), so you could possibly run your GLSL code there instead of in the filter. I hope this answer has at least explained the "why" of your two questions above.
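For the extra-uniforms idea, something along these lines could be a starting point. It assumes Phaser 2's Phaser.Filter(game, uniforms, fragmentSrc) constructor and Pixi-style uniform declarations; the uSpriteSize/uSpritePos names and the coordinate math are guesses to experiment with, not a verified fix:

// Assumes Phaser 2.x; uniform names and the exact transform are guesses to tweak.
var customUniforms = {
  uSpriteSize: { type: '2f', value: { x: 256.0, y: 256.0 } },
  uSpritePos:  { type: '2f', value: { x: 0.0, y: 0.0 } }
};

var fragmentSrc = [
  "precision mediump float;",
  "varying vec2 vTextureCoord;",
  "uniform sampler2D uSampler;",
  "uniform vec2 uSpriteSize;",
  "uniform vec2 uSpritePos;",
  "void main() {",
  "    // guess at sprite-local pixel coordinates; visualize them to iterate on the transform",
  "    vec2 pixelPos = vTextureCoord * uSpriteSize * 2.0 - uSpritePos;",
  "    gl_FragColor = vec4(fract(pixelPos / uSpriteSize), 0.0, 1.0);",
  "}"
];

var filter = new Phaser.Filter(game, customUniforms, fragmentSrc);
sprite.filters = [filter];

// keep the uniforms in sync with the sprite every frame
function update() {
  filter.uniforms.uSpritePos.value.x = sprite.x;
  filter.uniforms.uSpritePos.value.y = sprite.y;
}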

How do I adjust axes for camera rotated using THREE.DeviceOrientationControls?

The short story: I am trying to use THREE.TrackballControls to move the camera, but the (upside-down) x-z plane is where the x-y plane should be. Can anyone help?
The long story: I've been trying to add device orientation controls to a project. I have already used the THREE.TrackballControls to move the camera when mouse and touch are being used, and the direction the camera points feeds into other functionality. I am using v69 of three.js.
So, I have been looking into using THREE.DeviceOrientationControls to enable device orientation. Specifically, what I'm after is for rotation to be in the x-y plane when the device is upright in front of me and I turn around. In other words, when the device is face up on the table it is looking in the negative z-direction, and when upside down it is looking in the positive z-direction. Sounds fairly straightforward, right?
There are plenty of examples around to follow, but I seem to be stuck with incorrectly oriented axes, i.e. what should be my x-y plane is coming out as the x-z plane, but upside down. I created a test page based on an example I found with a BoxGeometry cube, then added red, yellow and blue spheres to the middle of the faces corresponding to the positive x-, y-, and z-directions respectively, and pale versions of the same coloured spheres for the corresponding negative directions. Testing this on an iPad confirmed that the scene axes and the real-world axes were not lining up.
I have spent a bit of time trying to get to grips with how this object works, and the main sticking point is in the function returned by setObjectQuaternion(), which does the tricky bit:
...
return function (quaternion, alpha, beta, gamma, orient) {
    euler.set(beta, alpha, -gamma, 'YXZ'); // 'ZXY' for the device, but 'YXZ' for us
    quaternion.setFromEuler(euler); // orient the device
    quaternion.multiply(q1); // camera looks out the back of the device, not the top
    quaternion.multiply(q0.setFromAxisAngle(zee, -orient)); // adjust for screen orientation
}
...
where q1 is a quaternion for a -pi/2 rotation around the x-axis, and zee is a unit z-axis vector.
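For reference, those helpers are set up in the three.js source essentially like this (paraphrased; check your own copy of DeviceOrientationControls):

// Paraphrased from THREE.DeviceOrientationControls; verify against your three.js version.
var zee = new THREE.Vector3(0, 0, 1);
var euler = new THREE.Euler();
var q0 = new THREE.Quaternion();
// -PI/2 rotation around the x-axis: "camera looks out the back of the device, not the top"
var q1 = new THREE.Quaternion(-Math.sqrt(0.5), 0, 0, Math.sqrt(0.5));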
I set up a jsfiddle here to help me debug this, but it wasn't rendering correctly on the iPad itself, so I had to add some faking of orientation events, plus plenty of logging, and continue on a normal desktop + console. This jsfiddle goes through each of the 6 basic orientations and checks whether the camera is looking in the direction I expect.
(Initially it would seem that a pi/2 rotation around the x-axis is what is required, but removing the quaternion.multiply(q1) doesn't fix it - I haven't even started looking at non-zero screen orientations yet.)
Ultimately, I'd like to make this more like the TrackballControls/OrbitControls with a target point that the camera always looks at (unless panned) and rotates around, once I've figured this "simple" stuff out.
Anybody have any ideas how I can orientate my camera properly?
