WebGL: calculating the vertices on the GPU for many identical objects? - javascript

Let's say I have a simple 2d x-y graph. I want to draw a really complicated shape centered at (5,5), (3,4), (-1,9), etc.
I know where the vertices of the shape will be relative to a center (n,m). Is it possible to calculate all the vertices on the GPU instead of in JavaScript? I would just need to upload the relationship of the vertices to the center once, and after that just the individual points.
For example if the shape was a square, the relationship would be:
At the point (n,m), there are vertices (n-1, m-1), (n+1, m-1), (n-1, m+1), (n+1, m+1).
That way I could just upload (5,5), (3,4), (-1,9) to the GPU instead of calculating and uploading all 12 vertices.
Questions:
Is this possible?
Would this be faster than calculating the vertices in JavaScript?

These are some solutions for OpenGL, which WebGL apparently does not support:
http://www.opengl.org/wiki/Vertex_Rendering
http://www.opengl.org/wiki/GLAPI/glDrawArraysInstanced
edit: Apparently WebGL doesn't have drawArraysInstanced
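In fact, WebGL 1 exposes instanced drawing through the widely supported ANGLE_instanced_arrays extension (WebGL 2 later added gl.drawArraysInstanced to the core API), and doing the center + offset addition in the vertex shader is generally faster than building every vertex in JavaScript, especially for many shapes. A minimal sketch of the idea from the question; the attribute and uniform names (a_offset, a_center, u_scale) are illustrative and error handling is omitted:

// Assumes a browser with WebGL 1 and the ANGLE_instanced_arrays extension.
const canvas = document.createElement('canvas');
document.body.appendChild(canvas);
const gl = canvas.getContext('webgl');
const ext = gl.getExtension('ANGLE_instanced_arrays');

const vsSource = `
  attribute vec2 a_offset; // vertex position relative to the center, uploaded once
  attribute vec2 a_center; // per-instance center: (5,5), (3,4), (-1,9), ...
  uniform vec2 u_scale;    // maps graph units to clip space
  void main() {
    gl_Position = vec4((a_center + a_offset) * u_scale, 0.0, 1.0);
  }`;
const fsSource = `
  precision mediump float;
  void main() { gl_FragColor = vec4(0.2, 0.6, 1.0, 1.0); }`;

function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}
const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);
gl.useProgram(program);

// The shape relative to its center -- uploaded once (a square as a triangle strip).
const offsetLoc = gl.getAttribLocation(program, 'a_offset');
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]), gl.STATIC_DRAW);
gl.enableVertexAttribArray(offsetLoc);
gl.vertexAttribPointer(offsetLoc, 2, gl.FLOAT, false, 0, 0);

// The centers -- the only data re-uploaded when the points change.
const centerLoc = gl.getAttribLocation(program, 'a_center');
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([5, 5, 3, 4, -1, 9]), gl.DYNAMIC_DRAW);
gl.enableVertexAttribArray(centerLoc);
gl.vertexAttribPointer(centerLoc, 2, gl.FLOAT, false, 0, 0);
ext.vertexAttribDivisorANGLE(centerLoc, 1); // advance a_center once per instance

gl.uniform2f(gl.getUniformLocation(program, 'u_scale'), 0.1, 0.1);
ext.drawArraysInstancedANGLE(gl.TRIANGLE_STRIP, 0, 4, 3); // 4 verts x 3 squares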

Related

Konva-JS: how do you get the updated vertices coordinates for custom shape after translation, scale or rotation?

I'm using React-Konva (the React version of KonvaJS) to draw custom shapes, mostly irregular polygons, and apply transformations to them: moving them around, scaling, and rotating.
Now, once the polygons are in place, I need the coordinates of the vertices for another feature. But even though I move them around and transform them, the shape appears correctly modified while the vertex coordinates remain the initial ones.
For instance, if I have a triangle at (0,0), (1,0), (0.5,2) and then drag it all the way to the right, after the drag ends the triangle will appear in the new position on the canvas, but printing the vertices will still output (0,0), (1,0), (0.5,2).
How do you get the updated coordinates of all the vertices? I'm using the Shape class for the polygons with draggable set to true for the translation, and the Transformer class for scaling and rotating.
Canvas, and therefore Konva, which is a wrapper & enhancer of canvas functionality, uses vector graphics. An important part of vector graphics is the concept of 'transform'-ing your shapes when you rotate or scale them. Essentially, the shape will tell you its position is unchanged when rotated or scaled; the important fact is that its transform is what does the rotation and scaling.
Long story short: without needing to understand the matrix math, you can 'get' the transform that is applied to your shape, give it the x,y position of each of your shape's vertices/corners, and it will return the x,y of that point with the transform applied.
Here is an earlier answer to the same question but regarding rectangles. https://stackoverflow.com/a/65645262/7073944
This is vanilla JS but hopefully you can react-ify it.
The critical methods are node.getTransform and its close relation node.getAbsoluteTransform, which retrieve the transform applied to the node (shape).
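For instance, a minimal sketch, assuming localVertices holds the same local-space points you used when defining the shape's sceneFunc:

function getTransformedVertices(shape, localVertices) {
  const transform = shape.getAbsoluteTransform();
  // transform.point() applies the node's translation, scale and rotation:
  return localVertices.map(v => transform.point({ x: v.x, y: v.y }));
}

// Usage once dragging / transforming ends, e.g. for the triangle above
// (`triangle` is your Shape instance):
triangle.on('dragend transformend', () => {
  const corners = [{ x: 0, y: 0 }, { x: 1, y: 0 }, { x: 0.5, y: 2 }];
  console.log(getTransformedVertices(triangle, corners));
});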

jsc3d: is there any way to detect collision between 2 meshes?

I have a project using jsc3d to render a 3D object.
The project needs to place new accessories onto the current figure. I need to check whether an accessory collides with the main part so that the output 3D model can be printed on a 3D printer.
Is there any way to detect collision in jsc3d?
There isn't any easy method to check for 3D mesh collision.
To get an exact result for complex and/or concave 3D shapes, you will need to check every triangle of both shapes for intersection. This may be somewhat slow, depending on the number of vertices, but some optimization is possible.
There are some approximation techniques that are faster than the N*M intersection check of all triangle pairs:
intersection of axis-aligned bounding boxes
intersection of the bounding spheres
intersection of the rotated bounding boxes
intersection of the bounding cylinders
...or any combination of the shapes
JSC3D already has a built-in AABB structure.
For simple 3D meshes, maybe you can use that. The check for 3D AABB intersection is really easy; see the sketch below, and also this answer: Intersection between two boxes in 3D space
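A minimal sketch of that test. The property names (minX ... maxZ) are assumed to match jsc3d's JSC3D.AABB fields; verify them against your jsc3d version:

// Two AABBs overlap exactly when their intervals overlap on all three axes.
function aabbIntersect(a, b) {
  return a.minX <= b.maxX && a.maxX >= b.minX &&
         a.minY <= b.maxY && a.maxY >= b.minY &&
         a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

// Usage with two jsc3d meshes (assuming their aabb fields are up to date):
// if (aabbIntersect(accessoryMesh.aabb, mainMesh.aabb)) { /* possible collision */ }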

Generating triangles from a random set of points

I have randomly generated some points on a JavaScript canvas, and I was wondering what the most efficient method would be to draw triangles connecting the points in a uniform fashion. The goal is to have the triangles fill the entire canvas without overlapping.
For a visual representation, here is an image of the points I have randomly generated across a canvas. As you can see, I may have to modify the way I randomly place the points on the canvas.
And this is how I wish to draw the triangles.
Thanks to @Phorgz & @GabeRogan for pointing me in the right direction. Delaunay triangulation was definitely the way to go, and it ended up being very fast, even when updating the canvas as an animation.
I ended up using the npm package faster-delaunay, which uses a divide-and-conquer algorithm to triangulate the randomly generated points.
Here is a result of what I have drawn on the canvas, updating as the points move around the plane.
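For reference, a minimal sketch of the drawing step. It uses the more widely known delaunator package rather than faster-delaunay (whose API differs slightly; check its README), and assumes an existing <canvas> element:

import Delaunator from 'delaunator';

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

// Random points as [x, y] pairs across the canvas:
const points = Array.from({ length: 100 }, () =>
  [Math.random() * canvas.width, Math.random() * canvas.height]);

// `triangles` is a flat array of point indices, three per triangle:
const { triangles } = Delaunator.from(points);
for (let i = 0; i < triangles.length; i += 3) {
  const a = points[triangles[i]];
  const b = points[triangles[i + 1]];
  const c = points[triangles[i + 2]];
  ctx.beginPath();
  ctx.moveTo(a[0], a[1]);
  ctx.lineTo(b[0], b[1]);
  ctx.lineTo(c[0], c[1]);
  ctx.closePath();
  ctx.stroke();
}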

Best practice: Rendering volume (voxel) based data in WebGL

I'm searching for one (or more) best practice(s) for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: Voxels (Volumetric Pixels), forming a cube, with coordinates x,y,z and a color attached.
Goal: Use OpenGL to display this data, as you move through it from different sides.
Question: What's the best practice to render those voxels, depending on the viewpoint? How (in which type of object) can I store the data?
Consider the following:
The cube of data can be considered as z layers of x-y data. It should
be possible to view in between layers; the displayed color should then
be interpolated from the closest matching voxels.
For my application, I have data sets of (x,y,z) = (512,512,128) and
more, containing medical data (scans of hearts, brains, ...).
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are 2 major ways to represent / render 3D datasets: rasterization and ray-tracing.
One fair rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section. You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second for thousands of vertices.
However, unless you simply want to represent cubes (which I doubt) instead of a surface, you will also need more info associated with each of your voxels than just positions and colors.
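As a starting point, here is a hedged sketch of the three.js example object. The module path and constructor details vary between three.js versions, and density() plus `scene` stand in for your own voxel sampling and scene setup:

import * as THREE from 'three';
import { MarchingCubes } from 'three/examples/jsm/objects/MarchingCubes.js';

const resolution = 64; // the object allocates a resolution^3 scalar field
const mc = new MarchingCubes(resolution, new THREE.MeshNormalMaterial());

// density(x, y, z) is a placeholder for sampling your voxel data (0..1 here):
const density = (x, y, z) => (x + y + z) / (3 * resolution);

// Fill the scalar field (x varies fastest in the flat array):
for (let z = 0; z < resolution; z++)
  for (let y = 0; y < resolution; y++)
    for (let x = 0; x < resolution; x++)
      mc.field[x + y * resolution + z * resolution * resolution] = density(x, y, z);

mc.isolation = 0.5; // the iso-surface threshold within your density range
scene.add(mc);      // `scene` is your existing THREE.Scene; meshing happens on render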
The other way is ray-casting. Unless you find a really efficient ray-casting algorithm, you will take a serious performance hit with a naive implementation.
You can cast rays from your camera position through your data structure, stop marching when you reach a surface, and project the intersection point back to screen space with the desired color.
You may draw the resulting pixels into a texture buffer and map it onto a full-screen quad with a simple shader.
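In plain JavaScript, the core marching loop looks roughly like this (a real implementation would live in a fragment shader; sampleDensity() stands in for trilinear interpolation of your 512x512x128 grid):

// A toy CPU-side sketch of one ray marching through a density volume.
function castRay(origin, dir, sampleDensity, threshold, maxDist, step) {
  for (let t = 0; t < maxDist; t += step) {
    const p = [origin[0] + dir[0] * t,
               origin[1] + dir[1] * t,
               origin[2] + dir[2] * t];
    if (sampleDensity(p) >= threshold) {
      return p; // first point on/inside the surface: shade it, project to screen
    }
  }
  return null; // the ray left the volume without hitting the surface
}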
In both cases, you need more information than just colors and cubes. For example, you need at least density values at each corner of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for ray-casting: you need at least some density information to figure out where the surface lies.
One of the keys is also how you organize the data in your structure, especially for out-of-core access.

how to "sort" polygons 3d?

I am still working on my "javascript 3d engine" (link inside stackoverflow).
At first, all my polygons were faces of cubes, so sorting them by average z worked fine.
But now I've "evolved" and I want to draw my polygons (which may contain more than 4 vertices)
in the right order: those that are close to the camera should be drawn last.
Basically, I know how to rotate them and "perspective"-ize them into 2D,
but I don't know how to draw them in the right order.
Just to clarify:
//my 3d shape = array of polygons
//polygon = array of vertices
//vertex = point with x,y,z
//rotation is around (0,0,0) and my view point is (0,0,something) I guess.
can anyone help?
P.S.: some "catch phrases" I came across while looking for the solution: z-buffering, ray casting (?!), plane equations, view vector, and so on. I guess I need a simple-to-understand answer, which is why I asked this one. Thanks.
P.S. 2: I don't mind too much about overlapping or intersecting polygons... so maybe the painter's algorithm might indeed be good. But what is it exactly? How do I decide the distance of a polygon? A polygon has many points.
The approach of sorting polygons and then drawing them back-to-front is called the "painter's algorithm". Unfortunately, the sorting step is in general an unsolvable problem, because it's possible for 3 polygons to overlap each other cyclically, each partly in front of the next.
Thus there is not necessarily any polygon that is "on top". Alternative approaches such as using a Z-buffer or a BSP tree (which involves splitting polygons) don't suffer from this problem.
how do I decide the distance of a polygon?? a polygon has many points.
The painter's algorithm is the simplest to implement, but it works only in very simple cases because it assumes that there is only a single "distance" or z-value for each polygon (which you could approximate as the average of the z-values of all points in the polygon). Of course, this will produce wrong results if two polygons intersect each other.
In reality, there isn't a single distance value for a polygon -- each point on the surface of a polygon can be at a different distance from the viewer, so each point has its own "distance" or depth.
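As a concrete illustration, here is a minimal sketch of that sort, using the polygon representation from the question (an array of {x, y, z} vertices). drawPolygon() is a placeholder for your own projection and fill code:

// Average z of a polygon's vertices -- the crude per-polygon "distance".
function averageZ(polygon) {
  return polygon.reduce((sum, v) => sum + v.z, 0) / polygon.length;
}

// Sort back-to-front: with the viewer at (0, 0, something) looking toward the
// origin, smaller z means farther away, so draw those first. Flip the
// comparator if your camera convention is the opposite.
function sortPainters(polygons) {
  return polygons.slice().sort((a, b) => averageZ(a) - averageZ(b));
}

// for (const poly of sortPainters(shape)) drawPolygon(poly);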
You already mentioned Z-buffering, and that is one way of doing this. I don't think you can implement this efficiently on an HTML canvas, but here's the general idea:
You need to maintain an additional canvas, the "z-buffer", where each pixel's colour represents the z-depth of the corresponding pixel on the main canvas.
To draw a polygon, you go through each point on its surface and draw only those points which are closer to the viewer than any previous objects, as indicated by the z-buffer.
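A toy sketch of that idea, using a plain Float32Array as the z-buffer rather than a second canvas (reading pixel colours back from a canvas per point would be very slow). plotPixel() is a hypothetical helper you would call for every rasterized point of every polygon, in any order:

const width = 800, height = 600;
// One depth value per pixel; Infinity means "nothing drawn here yet".
const zbuf = new Float32Array(width * height).fill(Infinity);

function plotPixel(imageData, x, y, depth, r, g, b) {
  const i = y * width + x;
  if (depth < zbuf[i]) {   // closer than anything drawn at this pixel so far?
    zbuf[i] = depth;
    const o = i * 4;
    imageData.data[o] = r;
    imageData.data[o + 1] = g;
    imageData.data[o + 2] = b;
    imageData.data[o + 3] = 255;
  }
}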
I think you will get some ideas by investigating BSP trees (binary space partitioning trees), even if the algorithm requires splitting some of your polygons in two.
Some examples can be found at http://www.devmaster.net/articles/bsp-trees/ or by googling for BSP trees. Posting some code as a reply is, in my opinion, not serious, since this is a complex topic.
