Augmented Reality, superimposing 3D models - JavaScript

How would I use a library like this https://code.google.com/p/js-handtracking/ to superimpose a 3D model over a tracked hand?
How would you use a 3D model file with such a library?
What format would the model need to be in? I've never dealt with 3D model superimposition.

The 3D object you want to superimpose can be in any common format (e.g. OBJ, 3DS, PLY, VRML). The model representation is usually not an issue, since you can (usually) convert one 3D representation to another; what matters is the browser/player that renders the scene.
In order to overlay a 3D model on the detected hand you need to know the position of the hand (relative to your scene, of course). The detection algorithm should give you some kind of transformation matrix, which can be decomposed into rotation, translation and scale. You can then use these values to place your 3D object at the right position in the scene.
You should first check whether the hand-tracking algorithm has an API, or at least how you can intercept its output data (if any). Otherwise you would have to search through the algorithm's source code for where the detection is made, extract the transformation matrix, and apply it to your 3D object.
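For illustration, here is a minimal sketch with three.js of what that last step could look like. js-handtracking itself only reports detected hand candidates, so the per-frame matrix callback below is a hypothetical hook you would have to wire up yourself:

var model; // a THREE.Object3D loaded earlier (e.g. from an OBJ file)
function onHandFrame(matrixArray) { // 16 numbers, column-major, from the tracker
  var m = new THREE.Matrix4().fromArray(matrixArray);
  // decompose into translation, rotation and scale, as described above
  m.decompose(model.position, model.quaternion, model.scale);
}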

Related

How to implement rolling a ball on a sphere in terms of matrices?

Goal:
Create two spheres, one of which can be rolled over the surface of the other with the mouse, and implement a camera that can be moved around these spheres using the keyboard.
Implementation:
I keep a matrix that stores the current rotation state of the rolling ball. When the user drags, I get a series of mouse-move events, and for each one I calculate how many degrees the rotation has changed around the current X and Y axes as the user sees them. Then I build a matrix representing these two rotations and multiply the sphere's rotation matrix by it in reverse order; the reverse order is necessary because the rotation happens from the camera's point of view, not from model space.
Problem:
With this implementation, however, the second sphere never changes its point of contact with the first sphere (it effectively slides along it). How can the rotation of the point of contact between the balls be implemented analytically in terms of matrices?
Here is the code if anyone is interested: https://github.com/AndrewStrizh/spheres-with-webGL
What you need is the ability to control the rotation of your sphere around two (or more) different rotation pivots.
A proper way to deal with complex transformations is to implement hierarchical transformations:
http://web.cse.ohio-state.edu/~wang.3602/courses/cse3541-2019-fall/05-Hierarchical.pdf
In this case, you can control the rotation of sphereB around sphereA by making sphereB a child of a third, invisible object - call it Locator - located at the center of sphereA. With a proper implementation of hierarchical transformations, rotating the Locator will also rotate sphereB around the Locator (and therefore around sphereA). At the same time, you can apply a rotation to sphereB around its own center, making it spin.
In practice, implementing true hierarchical transformations requires a scene graph, with proper node traversal, etc. But the main idea is that every object has what is called a local transform matrix and a world transform matrix. The local transform matrix holds only the object's own transformation (relative to its own origin), while the world transform matrix is the final matrix, the combined result of all the hierarchical transformations (from its parents) applied to the object.
The world transform matrix is the one used as the "model" matrix, to be multiplied with the view and projection matrices. World and local transform matrices of nodes are computed like this (pseudocode, row-vector convention):
node.worldMatrix = node.localMatrix * node.parent.worldMatrix;
Knowing that, and since you only need three objects and two hierarchical transformations, you don't have to implement a whole scene graph; you only need to simulate this principle by multiplying the proper matrices to reproduce the desired behavior.
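For example, here is a minimal sketch of that principle in plain JavaScript with the gl-matrix library (column-major, so world = parent * local, the reverse order of the row-vector pseudocode above); centerA, the radii and the angles are assumptions standing in for your scene's values:

const { mat4 } = glMatrix; // from the gl-matrix library

const locatorLocal = mat4.create();                   // invisible pivot at sphereA's center
mat4.fromTranslation(locatorLocal, centerA);
mat4.rotateY(locatorLocal, locatorLocal, orbitAngle); // rolls sphereB around sphereA

const sphereBLocal = mat4.create();                   // sphereB relative to the Locator
mat4.fromTranslation(sphereBLocal, [radiusA + radiusB, 0, 0]);
mat4.rotateY(sphereBLocal, sphereBLocal, spinAngle);  // spin around its own center

const sphereBWorld = mat4.create();                   // final "model" matrix for sphereB
mat4.multiply(sphereBWorld, locatorLocal, sphereBLocal); // world = parent * local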

How to rotate a single element of a 3D model with JavaScript

I am a three.js beginner and am really impressed by its functionality.
I want to load a 3D model with three.js and rotate one element of the model with JavaScript (e.g. rotate the hand of a human model or the door of a car model). Is this possible with three.js? Can you provide examples? Which 3D format/loader do you suggest for this purpose?
I tried to move parts of, e.g., this model; however, the object representing the model after loading it with
loader.load( 'models/3ds/portalgun/portalgun.3ds', function ( object ) {..
does not seem to have a list of sub-elements that I can rotate, so I see no way to select a sub-element of this model to transform (e.g. rotate). I am only able to transform (rotate) the complete model.
According to the documentation, Object3D has a list of children, but in my case it is always empty. Can you provide an example?
Animation examples such as this one do not seem to use a loader.
Edit: For example, how can I move the hand of this model in JS?
Thank you
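For reference, the usual way to inspect and rotate sub-elements in three.js looks like the sketch below; whether it finds anything depends on whether the exporter kept the parts as separate named meshes or baked everything into a single geometry (which would explain the empty children list). The node name here is hypothetical:

loader.load( 'models/3ds/portalgun/portalgun.3ds', function ( object ) {
  object.traverse( function ( child ) {
    console.log( child.name, child.type ); // inspect what the file actually contains
  } );
  var part = object.getObjectByName( 'Trigger' ); // hypothetical sub-mesh name
  if ( part ) part.rotation.y += 0.5;             // rotate only that part
} );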

Volume ray tracing using three.js without textures

I'm trying to visualize hydrogen wave functions and would like to do this using volume ray tracing/casting. All the guides online for volume rendering are based on 2D textures from medical imaging. In my case I don't have the data as images; the 3D data is already in memory (I'm currently using it to generate particles).
Do I really need to convert all my 3D data to 2D textures, only to load them in again and fake a 3D texture? If not, how can it be done without textures?
From your link I understand that you have a function that takes a 3D coordinate and returns a probability between 0 and 1. You don't need textures: you can use this function directly during the evaluation of each ray:
for each ray:
    for each distance ∆ along the ray:
        calculate the coordinate at distance ∆ from the camera
        calculate the probability at that coordinate using your function
        add the probability to the ray's accumulated color
Using this method, you skip the particle positions that you rendered in the linked example and use the function directly.
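A minimal CPU-side sketch of that loop in JavaScript, assuming a function density(x, y, z) that returns your probability in [0, 1]:

function traceRay(origin, dir, near, far, step) {
  var accumulated = 0;
  for (var t = near; t < far; t += step) {
    var x = origin[0] + dir[0] * t; // point at distance t along the ray
    var y = origin[1] + dir[1] * t;
    var z = origin[2] + dir[2] * t;
    accumulated += density(x, y, z) * step; // weight each sample by the step size
  }
  return Math.min(accumulated, 1); // clamp to a displayable intensity
}

In practice you would run this once per pixel (or port it to a fragment shader), but the structure stays the same.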

WebGL display loaded model without matrix

I'm learning WebGL. I've managed to draw things and, hopefully, understood the pipeline. Now, every tutorial I see explains matrices before even loading a mesh. While that may be good for most people, I think I need to concentrate on the process of loading external geometry, maybe through a JSON file. I've read that OpenGL displays things orthogonally by default, so I ask: is it possible to display a 3D mesh without any kind of transformation?
Now, every tutorial I see explains matrices before even loading a mesh.
Yes, because understanding transformations is essential and you will need to work with them. They're not hard to understand, and the sooner you wrap your head around them, the better. In the case of OpenGL, the model-view transformation is actually rather simple:
The transformation matrix is just a bunch of vectors (in columns) placed within a "parent" coordinate system. The first three columns define how the X, Y and Z axes of the "embedded" coordinate system are aligned within the "parent"; the W column moves it around. By varying the lengths of the basis vectors you can stretch, i.e. scale, things.
That's it; there's nothing more to it (in the model-view) than that. Learn the rules of matrix-matrix multiplication; matrix-vector multiplication is just a special case of matrix-matrix multiplication.
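As a small illustration (the uniform name is made up; gl and program are assumed to be an initialized WebGL context and a linked shader program):

var modelMatrix = new Float32Array([
  1, 0, 0, 0,  // column 0: local X axis (a length != 1 would scale along X)
  0, 1, 0, 0,  // column 1: local Y axis
  0, 0, 1, 0,  // column 2: local Z axis
  2, 0, -5, 1, // column 3 (W): translate 2 right and 5 away from the camera
]);
var loc = gl.getUniformLocation(program, 'uModelView'); // illustrative uniform name
gl.uniformMatrix4fv(loc, false, modelMatrix);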
The projection matrix is a little trickier, but I suggest you don't bother too much with it; just use a library like GLM, Eigen (with its geometry module) or linmath.h to build the matrix. The best analogy for the projection matrix is the "lens" of OpenGL, i.e. it is where you apply zoom (a.k.a. field of view), tilt and shift. The placement of the "camera", however, is defined through the model-view matrix.
is it possible to display a 3d mesh without any kind of transformation?
No, because the mesh coordinates have to be transformed into screen coordinates. However, an identity transform is perfectly possible, and it does indeed look like a dead-on orthographic projection in which the coordinate range [-1, 1] in each dimension is mapped to fill the viewport.
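Concretely, "no transformation" just means the vertex shader forwards the mesh coordinates as clip-space coordinates, so only geometry inside [-1, 1] is visible:

var vertexShaderSource =
  'attribute vec3 aPosition;\n' +
  'void main() {\n' +
  '    gl_Position = vec4(aPosition, 1.0); // identity: positions used as-is\n' +
  '}\n';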

Use a 3D model as an icon without scaling in the Google Earth Plugin

There are two types of objects we can place on Google Earth.
The first is 3D models: they have real-world size, and their apparent size scales with the camera position.
The second is icons and labels: they overlay the map and do not scale as the camera moves.
So, is it possible to use 3D models like icons? That is, I want to replace my PNG icons with nice 3D models that do not scale and that behave like icons.
I know there is access to the camera and object positions, and we could rescale a 3D object based on the distance every time the camera or the object moves, but I believe there is a simpler way without all these calculations and observers.
I would say no, there is no simple way to achieve this.
As you say, an icon doesn't have any geometry other than a location, while a 3D model is specifically defined by its location and its length, width, height, etc. Yes, you could calculate the scaling and attempt to redraw the model based on the current view, but that wouldn't be trivial, and I really doubt the results would be very pleasing.
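For completeness, the brute-force rescaling mentioned in the question would look roughly like the sketch below; getCameraDistanceTo and setModelScale are hypothetical stand-ins for the actual Earth API calls, and the base distance is arbitrary:

var BASE_DISTANCE = 1000; // distance (m) at which the model renders at scale 1 (arbitrary)

function keepIconSized(model) {
  var d = getCameraDistanceTo(model);      // hypothetical: camera-to-model distance
  setModelScale(model, d / BASE_DISTANCE); // hypothetical: grow with distance to cancel perspective shrinking
}
// re-run whenever the camera or the model moves, e.g. on every rendered frame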
