How to make wireframes for 3D HTML5 shapes - JavaScript

I'm currently learning how to draw shapes in HTML5 and I am beginning to move into 3D shapes. I know that in order to make shapes like spheres and toruses, you need to first make a cylinder wireframe, and then apply an equation to transform the wireframe cylinder into the desired shape. For example, you would start out with a cylinder wireframe, then apply the parametric equations x = cos(φ)·cos(θ), y = cos(φ)·sin(θ), and z = sin(φ), where θ = 2πu and φ = πv − π/2, which would create a sphere.
The problem is that I'm not really sure how to make the wireframe in the first place. I've seen one implementation where you essentially make a cube and then extrude the sides outward to form a sphere, but I'm not sure whether that is the optimal way, or even how to implement it. I haven't found much information online about doing this in HTML5 without a library like Three.js or WebGL, and have only seen outdated posts like this one. What is the process/algorithm needed to create one of these shapes?
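For a concrete starting point, here is a minimal sketch, not a definitive implementation: sample the parametric equations above on a regular (u, v) grid, project each 3D point orthographically, and stroke lines between neighbouring samples. The canvas id "view", the grid resolution, and the scale factor are all illustrative assumptions.
// A sketch: wireframe sphere on a 2D canvas from a (u, v) parameter grid.
const canvas = document.getElementById('view'); // assumed <canvas id="view">
const ctx = canvas.getContext('2d');
const ROWS = 16, COLS = 32, SCALE = 100; // arbitrary choices

// Sample the parametric sphere: theta = 2*pi*u, phi = pi*v - pi/2.
const points = [];
for (let i = 0; i <= ROWS; i++) {
  const v = i / ROWS, phi = Math.PI * v - Math.PI / 2;
  for (let j = 0; j <= COLS; j++) {
    const u = j / COLS, theta = 2 * Math.PI * u;
    points.push([
      Math.cos(phi) * Math.cos(theta),
      Math.cos(phi) * Math.sin(theta),
      Math.sin(phi)
    ]);
  }
}

// Trivial orthographic projection: drop the depth (y) coordinate.
function project(p) {
  return [canvas.width / 2 + p[0] * SCALE, canvas.height / 2 - p[2] * SCALE];
}

function strokeLine(a, b) {
  ctx.beginPath();
  ctx.moveTo(...project(a));
  ctx.lineTo(...project(b));
  ctx.stroke();
}

// Connect each sample to its right and lower neighbours to form the wireframe.
for (let i = 0; i <= ROWS; i++) {
  for (let j = 0; j <= COLS; j++) {
    const k = i * (COLS + 1) + j;
    if (j < COLS) strokeLine(points[k], points[k + 1]);
    if (i < ROWS) strokeLine(points[k], points[k + COLS + 1]);
  }
}
The same grid doubles as the "cylinder" wireframe: mapping (u, v) through a different set of parametric equations (a torus, for instance) reuses the identical connectivity loop.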

Related

3D model in HTML/CSS; Calculate Euler rotation of triangle

TL;DR: Given a set of triangle vertices and a normal vector (all in unit space), how do I calculate the X, Y, Z Euler rotation angles of the triangle in world space?
I am attempting to display a 3D model in HTML - with actual HTML tags and CSS transforms. I've already loaded an OBJ file into a JavaScript class instance.
The model is triangulated. My first aim is just to display the triangles as planes (HTML elements are rectangular) - I'll be 'cutting out' the triangle shapes with CSS clip-path later on.
I am really struggling to understand and get the triangles of the model rotated correctly.
I thought a rotation matrix could help me out, but my only experience with those is where I already have the rotation vector and I need to convert and send that to WebGL. This time there is no WebGL (or tutorials) to make things easier.
The following excerpt shows the creation/'rendering' of the faces. I'm using the face normal as the rotation, but I know this is wrong.
for (const face of _obj.faces) {
  const vertices = face.vertices.map(_index => _obj.vertices[_index]);
  const center = [
    (vertices[0][0] + vertices[1][0] + vertices[2][0]) / 3,
    (vertices[0][1] + vertices[1][1] + vertices[2][1]) / 3,
    (vertices[0][2] + vertices[1][2] + vertices[2][2]) / 3
  ];
  // Each vertex has a normal but I am just picking the first vertex's normal
  // to use as the 'face normal'.
  const normals = face.normals.map(_index => _obj.normals[_index]);
  const normal = normals[0];
  // HTML element creation code goes here; reference is 'element'.
  // Set face position (unit space)
  element.style.setProperty('--posX', center[0]);
  element.style.setProperty('--posY', center[1]);
  element.style.setProperty('--posZ', center[2]);
  // Set face rotation, converting to degrees also.
  const rotation = [
    normal[0] * toDeg,
    normal[1] * toDeg,
    normal[2] * toDeg,
  ];
  element.style.setProperty('--rotX', rotation[0]);
  element.style.setProperty('--rotY', rotation[1]);
  element.style.setProperty('--rotZ', rotation[2]);
}
The CSS first translates the face on X,Y,Z, then rotates it on X,Y,Z in that order.
I think I need to 'decompose' my triangles' rotation into separate axis rotations - i.e. rotate on X, then on Y, then on Z to get the correct rotation as per the model face.
I realise that the normal vector gives me an orientation but not a rotation around itself - I need to calculate that. I think I have to determine a vector along one triangle side and cross it with the normal, but this is something I am not clear on.
I have spent hours looking at similar questions on SO but I'm not smart enough to understand or make them work for me.
Is it possible to describe what steps to take without LaTeX equations? I'm good with pseudo-code, but my math skills are severely lacking.
The full code is here: https://whoshotdk.co.uk/cssfps/ (view HTML source)
The mesh building function is at line 422.
The OBJ file is here: https://whoshotdk.co.uk/cssfps/data/model/test.obj
The Blender file is here: https://whoshotdk.co.uk/cssfps/data/model/test.blend
The mesh is just a single plane at an angle, displayed in my example (wrongly) in pink.
The world is set up so that -X is left, -Y is up, -Z is into the screen.
Thank You!
If you have a plane and want to rotate it to be in the same direction as some normal, you need to figure out the angles between that plane's normal vector and the normal vector you want. The Euler angles between two 3D vectors can be complicated, but in this case the initial plane normal should always be the same, so I'll assume the plane normal starts pointing towards positive X to make the maths simpler.
You also probably want to rotate before you translate, so that everything is easier since you'll be rotating around the origin of the coordinate system.
By taking the general 3D rotation matrix (all three 3D rotation matrices multiplied together; you can find it on the Wikipedia page) and applying it to the vector (1,0,0), you can get the equations for the three angles a, b, and c needed to rotate that initial vector to the vector (x,y,z). This results in:
x = cos(a)*cos(b)
y = sin(a)*cos(b)
z = -sin(b)
Then rearranging these equations to find a, b and c, which will be the three angles you need (the three values of the rotation array, respectively):
a = atan(y/x)
b = asin(-z)
c = 0
So in your code this would look like:
const rotation = [
  Math.atan2(normal[1], normal[0]) * toDeg,
  Math.asin(-normal[2]) * toDeg,
  0
];
It may be that you need to use a different rotation matrix (if the order of the rotations is not what you expected) or a different starting vector (although you can just use this method and then do an extra 90 degree rotation if each plane actually starts in the positive Y direction, for example).
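As a quick sanity check, here is a hedged worked example with an illustrative normal (chosen here for demonstration, not taken from the question's model):
// Example: a face normal pointing halfway between +X and +Y, level with the XY plane.
const toDeg = 180 / Math.PI;
const normal = [Math.SQRT1_2, Math.SQRT1_2, 0];
const rotation = [
  Math.atan2(normal[1], normal[0]) * toDeg, // 45: swing from +X towards +Y
  Math.asin(-normal[2]) * toDeg,            // 0: no tilt out of the XY plane
  0                                         // roll around the normal, left at 0
];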

How can I set Z up coordinate system in three.js?

In three.js, the Y axis represents up and down and the Z axis represents forward and backward. But I want the Z axis to represent up and down and the Y axis forward and backward. Here is an image showing what I want.
I want to change the entire coordinate system in such a way that, if I rotate a mesh around y axis, it follows the new coordinate system not the traditional one.
Now, I have searched Stack Overflow and found these links:
Three.JS rotate projection so that the y axis becomes the z-axis. It doesn't work.
THREEJS: Matrix from Z-Up Coordinate System to Y-Up Coordinate System. This method just changes the object's or mesh's Y and Z vertices, but if I rotate it around the Y axis it rotates around the traditional Y axis. I would have to apply the matrix to the rotation matrix as well to make it rotate like the new coordinate system.
Changing a matrix from right-handed to left-handed coordinate system
Reorienting axes in three.js fails when webpage is refreshed. This doesn't work either.
Is there any way I can make three.js work with a Z-up coordinate system?
You can set the up vector of the camera using
camera.up.set(0,0,1);
Then it will work as you expect.
The answer above works in simple cases, but if you wish, for example, to use the editor, you had better set the following before doing anything:
THREE.Object3D.DefaultUp = new THREE.Vector3(0,0,1);
So any new object will also use this convention.
Using the previous answer, I struggled in the editor with all the implications around the controls, saving the objects, etc.
Please note that if you use a grid, you still have to rotate it so that it covers the XY plane instead of the XZ plane:
var grid = new THREE.GridHelper( 30, 30, 0x444444, 0x888888 );
grid.rotateX(Math.PI / 2);
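Putting the pieces above together, a hedged setup sketch (assuming a three.js build where THREE.Object3D.DefaultUp still exists - newer releases renamed it to Object3D.DEFAULT_UP - and with an illustrative camera position):
// Do this before creating any objects so they all default to Z-up.
THREE.Object3D.DefaultUp = new THREE.Vector3(0, 0, 1);

const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.up.set(0, 0, 1);                    // Z is now "up" for the camera
camera.position.set(10, -10, 10);          // illustrative position
camera.lookAt(new THREE.Vector3(0, 0, 0));

// GridHelper spans the XZ plane by default; rotate it onto XY.
const grid = new THREE.GridHelper(30, 30, 0x444444, 0x888888);
grid.rotateX(Math.PI / 2);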

XTK toolkit: the cube moves when it should only rotate

I'm a newbie in 3D computer graphics and I've seen an odd thing.
I used the XTK toolkit, which is great with DICOM. I added a cube to the scene and translated it far from the center (http://jsfiddle.net/64L47wtd/2/).
When the cube rotates, it looks like it is moving.
Is this a bug in XTK, or a fundamental problem with 3D rendering?
window.onload = function() {
  // create and initialize a 3D renderer
  var r = new X.renderer3D();
  r.init();
  // create a cube
  cube = new X.cube();
  // skin it..
  cube.texture.file = 'http://x.babymri.org/?xtk.png';
  cube.transform.translateX(250);
  cube.transform.translateY(200);
  cube.transform.translateX(270);
  r.add(cube); // add the cube to the renderer
  r.render(); // ..and render it
  // add some animation
  r.onRender = function() {
    // rotation by 1 degree in X and Y directions
    cube.transform.rotateX(1);
    cube.transform.rotateY(1);
  };
};
You are missing that the cube is a compound object consisting of several vertices, edges and/or faces. As a compound object, it uses a local coordinate system with axes X, Y, Z. The cube itself is described internally using vertex coordinates relative to that local coordinate system.
By "translating" it, you adjust those relative vertex coordinates inside that local coordinate system. Rotation then still works on the axes of that local coordinate system.
Thus, this isn't an error in the X toolkit.
You might need to put the cube into another (probably fully transparent) container object, so that you translate/move the container but keep rotating the cube itself.
I tried to extend your fiddle accordingly but didn't succeed at all. Taking the obvious intentions of the X toolkit into account, this might be an intended limitation, since it doesn't obviously support programmatic construction of complex scenes with multi-level object hierarchies through its API alone.
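A generic illustration of why the order of transforms matters (plain JavaScript, 2D for brevity; this is a sketch of the principle, not XTK's API):
// Rotating after translating spins a point around the world origin ("orbit");
// rotating before translating spins it in place around the object's own center.
function rotate(p, a) {
  return [p[0] * Math.cos(a) - p[1] * Math.sin(a),
          p[0] * Math.sin(a) + p[1] * Math.cos(a)];
}
function translate(p, t) { return [p[0] + t[0], p[1] + t[1]]; }

const corner = [1, 1];       // a cube corner in its local coordinates
const offset = [250, 200];   // the translation from the fiddle
const a = Math.PI / 4;

// translate, then rotate: the corner orbits the origin (what the fiddle shows)
console.log(rotate(translate(corner, offset), a));
// rotate, then translate: the corner stays near the cube's center
console.log(translate(rotate(corner, a), offset));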

How to hide parts of 3D objects that stick out of the back of other (complex) 3D objects?

I'm rendering a complex 3D mesh with Three.js (an iliac bone). Then I'm rendering some simple spheres along with this mesh to mark certain points on the surface (where muscles would attach):
The problem is, the mesh is quite thin in some areas, and the markers will stick out the back.
Assume that the marker coordinates are always closer to the front face of the mesh than the back face, and that the spheres always show more surface area / volume on the front of the mesh than on the back:
How could I hide the parts that extrude out the back without manually intervening for specific markers?
Edit: Here's a (naive?) way I might do it. I would like feedback on the feasibility of the idea, and (some pointers to writing) actual code to do it:
for each marker sphere:
    find all faces of the mesh that intersect with the sphere
    compute all outward-facing normal vectors of those faces (vertex normals? face normals?)
    compute all distances from the center of each face to the center of the sphere
    add all those normal vectors, weighted by their respective distances
    given the (normalized?) result vector, hide the hemisphere pointing in that direction
I'm not sure how to code any of those steps. Nor am I sure if this is even a sensible approach.
Draw hemispheres instead of full spheres.
Use the phiStart and phiLength parameters of the SphereGeometry constructor.
The centers of the spheres will still be on the surface of the bone (at a vertex).
The orientation of each sphere is given by the normal calculated at the sphere's origin.
Three.js already calculates the normals for a mesh in order to determine how light will bounce off it. You can use the VertexNormalsHelper to display the normals of your mesh:
var bone = ...; // bone mesh
var scene = ...; //your THREE.Scene
scene.add(new THREE.VertexNormalsHelper(bone));
The source code for VertexNormalsHelper can be found here: VertexNormalsHelper
You have to calculate the angles between the normal vector and the OZ axis to obtain difX and difY. These are the amounts you must rotate your sphere around the X and Y axes to make it perpendicular to the local surface of the bone.
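A hedged sketch of the hemisphere idea in three.js. The answer mentions phiStart/phiLength; the sketch below uses the theta parameters instead, which cut the sphere into a top dome whose pole points along +Y - either pair can work depending on which half you want. The radius, segment counts, and the 'point'/'normal' variables are assumptions, not the answerer's code:
// Hemisphere marker: only the top half of a sphere (pole along +Y).
const markerGeo = new THREE.SphereGeometry(2, 16, 8, 0, Math.PI * 2, 0, Math.PI / 2);
const marker = new THREE.Mesh(markerGeo, new THREE.MeshPhongMaterial({ color: 0xff0000 }));

// 'point' and 'normal' are assumed to come from the bone mesh (the attachment
// vertex and its vertex normal, respectively).
marker.position.copy(point);
marker.quaternion.setFromUnitVectors(
  new THREE.Vector3(0, 1, 0),        // the dome's pole direction
  normal.clone().normalize()         // rotate the pole onto the surface normal
);
scene.add(marker);
Rotating the dome's pole onto the normal replaces the difX/difY angle computation: quaternion.setFromUnitVectors does the equivalent alignment in one step.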

Three.js outlines

Is it possible to have a black outline on my 3D models with three.js?
I would like graphics that look like Borderlands 2 (toon shading + black outlines).
I'm sure I came in late, but let's hope this solves someone's question later.
Here's the deal: you don't need to render everything twice, and the overhead actually is not substantial. All you need to do is duplicate the mesh and set the duplicate mesh's material side to BackSide. No double passes. You will be rendering two meshes instead, with most of the outline's geometry culled by WebGL's back-face culling.
Here's an example:
var scene = new THREE.Scene();
//Create main object
var mesh_geo = new THREE.BoxGeometry(1, 1, 1);
var mesh_mat = new THREE.MeshBasicMaterial({color : 0xff0000});
var mesh = new THREE.Mesh(mesh_geo, mesh_mat);
scene.add(mesh);
//Create outline object
var outline_geo = new THREE.BoxGeometry(1, 1, 1);
//Notice the second parameter of the material
var outline_mat = new THREE.MeshBasicMaterial({color : 0x00ff00, side: THREE.BackSide});
var outline = new THREE.Mesh(outline_geo, outline_mat);
//Scale the object up to have an outline (as discussed in previous answer)
outline.scale.multiplyScalar(1.5);
scene.add(outline);
For more details on backface culling, check out: http://en.wikipedia.org/wiki/Back-face_culling
The above approach works well if you want to add an outline to objects, without adding a toon shader, and thus losing "realism".
Toon shading by itself supports edge detection; the 'cel' shader developed for Borderlands achieves exactly this effect.
In cel shading, devs can either use the object-duplication method (done at the [low] pipeline level) or use image-processing filters for edge detection. This is where the performance trade-off between the two techniques comes in.
More info on cel: http://en.wikipedia.org/wiki/Cel_shading
Cheers!
Yes, it is possible, but not in a simple out-of-the-box way. For toon shading there are even shaders included in /examples/js/ShaderToon.js.
For the outlines, I think the most commonly suggested method is to render in two passes. The first pass renders the models in black at a slightly larger scale; the second pass is at normal scale and with the toon shaders. This way you'll see the larger black models as an outline. It's not perfect, but I don't think there's an easy way out. You might have more success searching for "three.js hidden line rendering": while it's a different look, a somewhat similar method is used to achieve it.
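A minimal sketch of that two-pass idea using scene.overrideMaterial. The 'renderer', 'scene', 'camera' and 'mesh' objects, and the 1.05 scale factor, are assumptions for illustration; a real toon setup would swap the toon shader material in for the second pass:
// Pass 1: draw the mesh enlarged with a flat black back-side material;
// pass 2: draw it normally on top.
const outlineMat = new THREE.MeshBasicMaterial({ color: 0x000000, side: THREE.BackSide });
renderer.autoClear = false;

function render() {
  renderer.clear();

  scene.overrideMaterial = outlineMat;  // pass 1: everything black, enlarged
  mesh.scale.multiplyScalar(1.05);
  renderer.render(scene, camera);
  mesh.scale.multiplyScalar(1 / 1.05);  // undo the enlargement

  scene.overrideMaterial = null;        // pass 2: normal (or toon) materials
  renderer.render(scene, camera);

  requestAnimationFrame(render);
}
render();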
It's an old question, but here is what I did.
I created an outlined cel-shader for my CG course. Unfortunately it takes 3 rendering passes. I'm currently trying to figure out how to remove one pass.
Here's the idea:
1) Render a normal-depth image to a texture.
In the vertex shader you do what you normally do: transform the position to screen space and the normal to screen space.
In the fragment shader you calculate the depth of the pixel and then create the normal color with the depth as the alpha value:
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
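// The fragment output then packs both together, e.g. (assuming the screen-space
// normal arrives in a varying - 'vNormal' is a hypothetical name):
// gl_FragColor = vec4(normalize(vNormal) * 0.5 + 0.5, clipDepth);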
2) Render the scene to a texture with cel shading. I changed the scene override material.
3) Make a quad, render both textures onto it, and have an orthographic camera look at it. The cel-shaded texture is just rendered on the quad, but on the normal-depth texture you run some edge detection, and with that you know when a pixel needs to be black (an edge).
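A hedged sketch of that final composite step in three.js. The render targets, uniform names, and the very crude neighbour-difference edge test are all illustrative assumptions, not the answerer's code:
// Composite pass: draw both render targets onto a full-screen quad.
// 'renderer', 'width' and 'height' are assumed to exist.
const rtCel = new THREE.WebGLRenderTarget(width, height);
const rtNormalDepth = new THREE.WebGLRenderTarget(width, height);
const postCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const postMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tCel: { value: rtCel.texture },
    tNormalDepth: { value: rtNormalDepth.texture },
    texel: { value: new THREE.Vector2(1 / width, 1 / height) }
  },
  vertexShader: `
    varying vec2 vUv;
    void main() { vUv = uv; gl_Position = vec4(position.xy, 0.0, 1.0); }`,
  fragmentShader: `
    uniform sampler2D tCel;
    uniform sampler2D tNormalDepth;
    uniform vec2 texel;
    varying vec2 vUv;
    void main() {
      // crude edge test: compare each normal-depth sample with its
      // right and top neighbours; a big jump means an edge
      vec4 c = texture2D(tNormalDepth, vUv);
      vec4 dx = texture2D(tNormalDepth, vUv + vec2(texel.x, 0.0)) - c;
      vec4 dy = texture2D(tNormalDepth, vUv + vec2(0.0, texel.y)) - c;
      float edge = step(0.1, length(dx) + length(dy));
      gl_FragColor = mix(texture2D(tCel, vUv), vec4(0.0, 0.0, 0.0, 1.0), edge);
    }`
});
const postScene = new THREE.Scene();
postScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), postMaterial));

// Each frame: render the cel pass into rtCel and the normal-depth pass into
// rtNormalDepth (e.g. via scene.overrideMaterial), then composite:
// renderer.setRenderTarget(null);
// renderer.render(postScene, postCamera);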
