Collision detection of 3d objects in p5.js - javascript

I am working on an infinite runner game involving 3D objects: a rover and some obstacles that move on a terrain. The game is made using p5.js WebGL functionality. I have almost completed it, but the game should end when the rover hits any obstacle. I just want to know if I can detect the collision between the two 3D objects (the rover is a plane, and the obstacle is a custom loaded model) and end the game. Simply put, I want to know whether collision detection in WebGL is feasible, and if so, how?
Please help me out on the same.
Thank you.

Ideally, indeed, you'd post a minimal sketch of your attempt at detecting the collision: something to make it easier for others to contribute (without making too many wild guesses on your behalf).
One idea is to check whether the bounding box of the 3D model (defined by its minX, minY, minZ, maxX, maxY, maxZ values) intersects with the plane (or the bounding box of the plane, if it's simpler to keep things consistent). It won't be 100% accurate, depending on the loaded model (we don't even know what that is), but it's a decent initial step.
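A minimal sketch of that test, assuming you update the min/max values of both objects yourself every frame (the box object shape and the variable names here are just for illustration):

```javascript
// Axis-aligned bounding box (AABB) overlap test.
// Each box is assumed to be an object like
// { minX, minY, minZ, maxX, maxY, maxZ } refreshed every frame
// from the rover's and the obstacle's current positions.
function boxesIntersect(a, b) {
  return a.minX <= b.maxX && a.maxX >= b.minX &&
         a.minY <= b.maxY && a.maxY >= b.minY &&
         a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

// In draw(), something like:
// if (boxesIntersect(roverBox, obstacleBox)) { gameOver = true; }
```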
For more accuracy a convex hull would be handy. Computing one from scratch in p5.js might prove difficult, so perhaps you could use the same 3D editor that exported the model to also generate a convex hull of the original model and export that to be used as a simpler collision mesh.
Additionally, even though more advanced, you can look into using a physics engine such as ammo.js to handle the heavy collision math (and more) for you (check out the vehicle demo).

Related

Can glsl be used instead of webgl?

This may be a bit of a naive question so please go easy on me. But I was looking at shaders at shadertoy.com and I'm amazed at how small the glsl code is for the 3d scenes. Digging deeper I noticed how most of the shaders use a technique called ray marching.
This technique makes it possible to avoid using vertices/triangles altogether and just employ the pixel shader and some math to create some pretty complex scenes.
So I was wondering: why do 3D scenes often use triangle meshes with WebGL instead of just pixel shaders? Can't we just render the entire scene with GLSL and pixel shaders (a.k.a. fragment shaders)?
The simple answer is that the techniques on Shadertoy are probably 10, 100, or 1000 times slower than using vertices and triangles.
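To see why, here's a rough idea of what ray marching asks of every single pixel, sketched in plain JavaScript on a 2D canvas rather than GLSL (the scene, a lone sphere's distance function, is just an assumption for illustration):

```javascript
// Rough CPU sketch of sphere tracing: every pixel marches a ray through a
// signed distance field until it hits a surface or gives up.
const canvas = document.querySelector('canvas'); // assumes a <canvas> exists
const ctx = canvas.getContext('2d');
const img = ctx.createImageData(canvas.width, canvas.height);

// Signed distance to a unit sphere centred 3 units in front of the camera.
const sdf = (x, y, z) => Math.hypot(x, y, z - 3) - 1;

for (let py = 0; py < canvas.height; py++) {
  for (let px = 0; px < canvas.width; px++) {
    // Turn the pixel coordinate into a (roughly normalised) ray direction.
    const dx = (px / canvas.width) * 2 - 1;
    const dy = 1 - (py / canvas.height) * 2;
    const len = Math.hypot(dx, dy, 1);
    let t = 0, hit = false;
    // The expensive part: an iterative march, repeated for *every* pixel.
    for (let i = 0; i < 64 && t < 10; i++) {
      const d = sdf((dx / len) * t, (dy / len) * t, (1 / len) * t);
      if (d < 0.001) { hit = true; break; }
      t += d;
    }
    const o = (py * canvas.width + px) * 4;
    img.data[o] = img.data[o + 1] = img.data[o + 2] = hit ? 255 : 40;
    img.data[o + 3] = 255;
  }
}
ctx.putImageData(img, 0, 0);
```

A triangle-based renderer only pays per-vertex and per-covered-pixel costs, while this loop runs dozens of scene evaluations for every pixel on screen, every frame.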
Compare this Shadertoy forest, which runs at 1fps at best fullscreen on my laptop
https://www.shadertoy.com/view/4ttSWf
To this Skyrim forest which runs at 30 to 60fps
https://www.youtube.com/watch?v=PjqsYzBrP-M
Compare this Shadertoy city which runs at 5fps on my laptop
https://www.shadertoy.com/view/XtsSWs
To this Cities:Skylines city which runs at 60fps
https://www.youtube.com/watch?v=0gI2N10QyRA
Compare this Shadertoy Journey clone, which runs at 1fps fullscreen on my laptop
https://www.shadertoy.com/view/ldlcRf
To the actual Journey game on PS3, a machine with an arguably slower GPU than my laptop given the PS3 came out in 2006, and yet it runs at 60fps
https://www.youtube.com/watch?v=61DZC-60x20#t=0m46s
There are plenty of other reasons. A typical 3D world uses gigabytes of data for textures, characters, animations, collisions, etc.; none of that is available in just GLSL. Another is that they often use fractal techniques, so there's no easy way to actually design anything; instead they just search the math for something interesting. That would not be a good way to design game levels, for example. In other words, using vertex data makes things far more flexible and editable.
Compare the Journey examples above. The Shadertoy example is a single scene, vs the game, which is a vast designed world with buildings and ruins and puzzles etc...
There's a reason it's called ShaderTOY. It's meant as a fun challenge: given a single function whose only input is which pixel is currently being drawn, write code to draw something. As such, the images people have managed to draw given that limit are amazing!
But they aren't generally the techniques used to write real apps. If you want your app to run fast and be flexible, you use the more traditional techniques of vertices and triangles: the techniques used by GTA5, Red Dead Redemption 2, Call of Duty, Apex Legends, Fortnite, etc...

JavaScript Canvas Complex Shape Collision

I am trying to make a very simple game with the HTML canvas and JavaScript. I have found many tutorials and questions about detecting collisions of basic shapes on a canvas (such as rectangles and circles). But I am wondering: is it possible to detect if a complex shape (a shape that is made up of many basic shapes) is colliding with another shape, or even if two complex shapes are colliding? If so, how could this be done? Thanks in advance!
A general algorithm will not provide a better solution than one based specifically on the knowledge of each shape type.
For complex (i.e. compound) shapes, step #1 is usually an "exit early" test: for optimization reasons, you want to eliminate false positives as early in the process as possible.
A simple step #1 is to test for collisions on the "bounding boxes" of each compound shape. If the bounding boxes are NOT overlapping, you can quit early and assume no collision, because the compound shapes could not possibly be colliding (see https://gamedevelopment.tutsplus.com/tutorials/collision-detection-using-the-separating-axis-theorem--gamedev-169)
If the bounding-box test cannot eliminate the pair early, you will need to test each sub-shape in turn with the algorithm most suitable to the shapes involved (circle-circle, circle-rect, etc.), leaving the most "expensive" tests, like polygon-polygon, to last.
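As a rough sketch of that two-phase approach (the way compound shapes are represented here, as a bounding box plus lists of circles and rectangles, is purely an assumption for illustration):

```javascript
// Broad phase: cheap axis-aligned bounding box overlap test.
function aabbOverlap(a, b) {
  return a.x < b.x + b.w && a.x + a.w > b.x &&
         a.y < b.y + b.h && a.y + a.h > b.y;
}

// Narrow phase: cheap per-shape tests first.
function circleCircle(c1, c2) {
  const dx = c1.x - c2.x, dy = c1.y - c2.y;
  return dx * dx + dy * dy <= (c1.r + c2.r) * (c1.r + c2.r);
}

function circleRect(c, r) {
  // Clamp the circle centre to the rectangle, then compare distances.
  const cx = Math.max(r.x, Math.min(c.x, r.x + r.w));
  const cy = Math.max(r.y, Math.min(c.y, r.y + r.h));
  const dx = c.x - cx, dy = c.y - cy;
  return dx * dx + dy * dy <= c.r * c.r;
}

// Compound test: exit early on the bounding boxes, then test sub-shapes.
function compoundCollide(shapeA, shapeB) {
  if (!aabbOverlap(shapeA.bounds, shapeB.bounds)) return false; // exit early
  for (const a of shapeA.circles) {
    for (const b of shapeB.circles) {
      if (circleCircle(a, b)) return true;
    }
    for (const r of shapeB.rects) {
      if (circleRect(a, r)) return true;
    }
  }
  // ...continue with rect-rect, and polygon-polygon (e.g. SAT) last.
  return false;
}
```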
You might also want to look at this question: How do I determine if two convex polygons intersect?

Best practice: Rendering volume (voxel) based data in WebGL

I'm searching for one (or more) best practice(s) for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: Voxels (Volumetric Pixels), forming a cube, with coordinates x,y,z and a color attached.
Goal: Use OpenGL to display this data, as you move through it from different sides.
Question: What's the best practice to render those voxels, depending on the viewpoint? How (in which type of object) can I store the data?
Consider the following:
The cube of data can be considered as z layers of x-y data. It should be possible to view in-between layers; the displayed color should then be interpolated from the closest matching voxels.
For my application, I have data sets of (x, y, z) = (512, 512, 128) and more, containing medical data (scans of hearts, brains, ...).
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are two major ways to represent/render 3D datasets: rasterization and ray tracing.
One fair rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section. You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second, for thousands of vertices.
However, unless you simply want to represent cubes (which I doubt) instead of a surface, you will also need more info associated with each of your voxels than just positions and colors.
The other way is raycasting. Unless you find a really efficient raycasting algorithm, you will take a serious performance hit with a naive implementation.
You can cast rays from your camera position through your data structure, stop marching when you reach a surface, and project the intersection point back to screen space with the desired color.
You can draw the resulting pixels into a texture buffer and map it onto a full-screen quad with a simple shader.
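A very rough sketch of that marching loop in plain JavaScript, leaving the camera math aside (the flat density-array layout, the step size, and the threshold parameter are assumptions for illustration):

```javascript
// March a single ray front-to-back through a voxel volume stored as a
// flat array of densities, stopping at the first sample above a threshold.
function castRay(volume, dims, origin, dir, threshold, maxSteps, stepSize) {
  let [x, y, z] = origin;
  for (let i = 0; i < maxSteps; i++) {
    const xi = Math.floor(x), yi = Math.floor(y), zi = Math.floor(z);
    if (xi >= 0 && yi >= 0 && zi >= 0 &&
        xi < dims[0] && yi < dims[1] && zi < dims[2]) {
      const density = volume[xi + yi * dims[0] + zi * dims[0] * dims[1]];
      if (density > threshold) {
        // Surface hit: return the intersection point so it can be shaded
        // and projected back to screen space.
        return [x, y, z];
      }
    }
    x += dir[0] * stepSize;
    y += dir[1] * stepSize;
    z += dir[2] * stepSize;
  }
  return null; // ray left the volume without hitting a surface
}
```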
In both cases, you need more information than just colors and cubes. For example, you need at least density values at each corner of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for raycasting: you need at least some density information to figure out where the surface lies.
One of the keys is also how you organize the data in your structure, especially for out-of-core access.

Rounded Plane In THREE JS

THREE.js can often seem angular and straight-edged. I haven't used it for very long and thus am struggling to understand how to curve the world, so to speak. I would imagine a renderer or something must be changed, but the idea is to take a 2D map and turn it into a simple three-lane running game. However, if you look at the picture below from another similar game, how can I achieve the fish-eye effect?
I would do that kind of effect on a per-vertex basis, depending on the distance from the camera.
Also, maybe a slightly tweaked perspective camera with a bigger vertical FOV would boost the effect of the "curviness".
It's just a simple distortion effect that has been simulated in some way; the world probably isn't really curved. Hope this helps.
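As a rough sketch of that per-vertex idea, here's one way to bend a plane's geometry down the further its vertices are from the camera (the function, the quadratic falloff, and the assumption that the plane's vertices are already in world-aligned coordinates are mine, not part of three.js):

```javascript
// Bend a plane's vertices downward based on distance from the camera,
// faking a curved-world / fish-eye horizon. Call this once on the laid-out
// geometry (or keep a copy of the original positions if the camera moves).
function curveGeometry(geometry, camera, strength = 0.02) {
  const pos = geometry.attributes.position;
  const v = new THREE.Vector3();
  for (let i = 0; i < pos.count; i++) {
    v.fromBufferAttribute(pos, i);
    const dist = camera.position.distanceTo(v);
    // Quadratic falloff: nearby vertices barely move, distant ones drop away.
    pos.setY(i, v.y - dist * dist * strength);
  }
  pos.needsUpdate = true;
  geometry.computeVertexNormals();
}
```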
I'm sure there are many different possible approaches... Here's one that creates a nice barrel distortion effect.
You can do something like that by rendering a normal wide-angle camera to a texture, then projecting it onto a lens-shaped plane (or even a sphere); the actual on-screen render is then from a camera pointing at that.
I don't have the code available at the moment, but I should be able to dig it up in a few days if you're interested. Or you can just adapt from the three.js examples. Three.js includes some postprocessing examples where the scene is first rendered into a texture, that texture is applied to a quad, and the quad is then photographed with an orthographic camera. You can modify such an example by changing the orthographic camera to a perspective one, then distorting/changing the quad to something more appropriately shaped.
Taken to extremes, this approach can produce some pixelization / blocky artifacts.

Is HTML5's Canvas API Rotate or Translate function better?

After a short discussion with a friend on a Canvas project, we realized that there is no clear-cut answer on whether the Canvas rotate or translate functions are better to use. Mainly we want to know which is better for rendering performance and why.
P.S. The save-and-restore method for rotating images seen all across the net is terrible...
Since they both boil down to matrix transforms, and "translate" is performed as merely an offset, presumably translation would yield better performance in general than most types of rotation, because there are simply more operations involved in a rotation than in a translation. See Transformation matrix, Translation matrix and Rotation matrix for some background.
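For what it's worth, since rotation and translation end up in the same matrix anyway, you can skip the save()/restore() dance by setting the whole transform yourself; a small sketch (the drawRotated name and the already-loaded img are assumptions):

```javascript
// Draw an image rotated about its own centre without save()/restore():
// bake the translation and rotation into one matrix, draw, then reset.
function drawRotated(ctx, img, x, y, angle) {
  const cos = Math.cos(angle), sin = Math.sin(angle);
  // setTransform(a, b, c, d, e, f) replaces the current matrix outright.
  ctx.setTransform(cos, sin, -sin, cos, x, y);
  ctx.drawImage(img, -img.width / 2, -img.height / 2);
  ctx.setTransform(1, 0, 0, 1, 0, 0); // back to the identity matrix
}
```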
But since the two aren't interchangeable in any way, shape or form, I fail to see the fruit of the debate.
Also, I should add that since there is a general movement toward hardware acceleration for canvas-related features, the cost of any transformation may end up being constant (e.g. no actual cost) due to co-processor parallelism in the rendering pipeline...
