How to rotate a single element of a 3D model with JavaScript

I am a three.js beginner and am really impressed by its functionality.
I want to load a 3D model with three.js and rotate one element of that model with JavaScript (e.g. the hand of a human model or the door of a car model). Is this possible with three.js? Can you provide examples? Which 3D format/loader do you suggest for this purpose?
I tried to move parts of, e.g., this model; however, the object representing the model after loading it with
loader.load( 'models/3ds/portalgun/portalgun.3ds', function ( object ) {..
does not seem to have a list of sub-elements that I can rotate, so I see no way to select a sub-element of the model to transform (e.g. rotate). I am only able to transform (rotate) the complete model.
According to the documentation, Object3D has a list of children, but in my case it is always empty. Can you provide me with an example?
The animation examples, such as this one, do not seem to use a loader.
Edit: For example, how can I move the hand of this model in JavaScript?
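For illustration, here is a minimal sketch of the kind of access I am looking for, assuming the loaded file actually contains separately named parts (the name 'hand' and the scene variable are placeholders, not taken from the real asset):

loader.load( 'models/3ds/portalgun/portalgun.3ds', function ( object ) {
    scene.add( object );
    // List whatever sub-objects the file defines; children only exist if the
    // asset was exported as separate parts.
    object.traverse( function ( child ) {
        console.log( child.name );
    } );
    // Rotate one named part, e.g. a hand mesh (the name depends on the asset):
    var hand = object.getObjectByName( 'hand' );
    if ( hand ) {
        hand.rotation.z += Math.PI / 4;
    }
} );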
Thank you

Related

Creating 3D text above an object in three.js

I have a 3D model which I created in three.js. Based on some data, I want to create text above each object in my scene.
It seems I have one option, which is to create a 3D text model and position it above my object, but this doesn't work for me; I don't know whether I have committed an error or something like that.
I'm wondering how to go about this. What is the "correct" way to do it? Any suggestions and example code are very welcome!
here is my code
here is my problem
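One common approach, sketched below with placeholder names and sizes (none of this comes from the original post), is to draw each label on a 2D canvas and show it as a THREE.Sprite positioned above the object:

function makeLabel( text ) {
    var canvas = document.createElement( 'canvas' );
    canvas.width = 256;
    canvas.height = 64;
    var ctx = canvas.getContext( '2d' );
    ctx.font = '32px sans-serif';
    ctx.fillStyle = 'white';
    ctx.fillText( text, 8, 40 );
    var texture = new THREE.Texture( canvas );
    texture.needsUpdate = true;                      // upload the canvas pixels to the GPU
    var material = new THREE.SpriteMaterial( { map: texture, transparent: true } );
    var sprite = new THREE.Sprite( material );
    sprite.scale.set( 2, 0.5, 1 );                   // world-space size of the label; tune to your scene
    return sprite;
}

// Usage: position the label just above the target object.
var label = makeLabel( 'Door' );
label.position.copy( targetObject.position );
label.position.y += 1.5;                             // offset above the object; depends on its size
scene.add( label );

A sprite always faces the camera, which is usually what you want for labels; a real 3D TextGeometry mesh would also work, but it needs a font to be loaded first and will not face the viewer automatically.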

Three.js - How can I set up a callback on a clickable region

I started with this tutorial and successfully managed to export my 3D model from Blender to a JSON file, which in turn was displayed on my HTML page using three.js. The 3D model is basically a simple human. Now I would like to define some clickable regions on the model (these clickable regions might be hands, legs, chest, etc.). I would like to know the best way to do that so that I know which region was clicked. Will I have to define these regions in Blender? Are there any other approaches for accomplishing this task?
Look at this example showing object highlighting based on mouse position.
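The usual way to do this in three.js is a ray cast from the mouse position; the sketch below assumes the exported model keeps its parts as separately named meshes (all variable names here are placeholders):

var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();

renderer.domElement.addEventListener( 'click', function ( event ) {
    var rect = renderer.domElement.getBoundingClientRect();
    // Convert the click position to normalized device coordinates (-1..+1).
    mouse.x = ( ( event.clientX - rect.left ) / rect.width ) * 2 - 1;
    mouse.y = - ( ( event.clientY - rect.top ) / rect.height ) * 2 + 1;
    raycaster.setFromCamera( mouse, camera );
    var hits = raycaster.intersectObjects( scene.children, true );
    if ( hits.length > 0 ) {
        // hits[ 0 ].object is the closest mesh under the cursor; if the model was
        // exported with named parts ( 'hand', 'leg', ... ) the name identifies the region.
        console.log( 'clicked region:', hits[ 0 ].object.name );
    }
} );

If the whole model is a single mesh, you would instead have to define the regions yourself, for example by splitting the mesh into named parts in Blender or by mapping the face index of the hit back to a region.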

XML3D: Camera controls & XML3D tools

What is the suggested approach for handling user input and camera controls in XML3D?
Basic interactivity can be added using DOM events, but I'm not sure whether that would be enough to provide rotation gizmos (for example).
Does the library provide an API to handle user input and camera controls?
I've noticed that there is an xml3d toolkit that was developed a year ago.
It seems, however, that it is a loose collection of demos rather than a library for handling user input, and there is no decent usage documentation for it.
I need to provide basic functionality like rotation/translation/scaling of models and controlling the camera.
xml3d.js doesn't provide any cameras or gizmos by itself. They're usually application-specific (there are dozens of ways to implement a camera for instance) so it doesn't really make sense to include them as part of the core library. A very basic camera is provided alongside xml3d.js but it's quite limited.
The xml3d-toolkit did include transformation gizmos and various camera controllers but it's not in active development anymore since the creator has moved on to other things. Still, it might be a good place to start, or at least to use as a reference in building your own camera or gizmo.
For example, a simple way to allow the user to change the transformation of a model would be to:
Add an onclick listener to each model that toggles the editing mode
Show 3 buttons somewhere in your UI to let the user switch between editing rotation, translation or scale
Add onmouseup and onmousedown listeners to the <xml3d> element to record click+drag movements
As part of those listeners, convert the change in mouse position to a change in transformation depending on what editing mode the user is in
Apply those transformation changes to the model either by changing its CSS transform, or through the relevant attribute on a <transform> element that's referenced by a <group> around your model.
Exit the editing mode if the user clicks the canvas to deselect the object (rather than a click+drag operation)
To keep it from conflicting with camera interaction you could use the right mouse button for editing, or simply disable the camera while the user is editing a transformation.
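A minimal sketch of those listener steps, assuming a <transform> element with id "t1" referenced by the <group> around the model (the id, the initial translation attribute, and the pixel-to-world factor are all placeholders, and real code would also branch on the current editing mode):

// Assumed markup: <transform id="t1" translation="0 0 0"> in <defs>, and <group transform="#t1"> around the model.
var xml3dEl = document.querySelector( 'xml3d' );
var transformEl = document.getElementById( 't1' );
var dragging = false, lastX = 0, lastY = 0;
var DRAG_SCALE = 0.01;   // placeholder pixel-to-world factor; see the scaling advice further down

xml3dEl.addEventListener( 'mousedown', function ( e ) {
    dragging = true; lastX = e.clientX; lastY = e.clientY;
} );
xml3dEl.addEventListener( 'mousemove', function ( e ) {
    if ( !dragging ) return;
    var dx = ( e.clientX - lastX ) * DRAG_SCALE;
    var dy = ( e.clientY - lastY ) * DRAG_SCALE;
    lastX = e.clientX; lastY = e.clientY;
    // 'translation' editing mode: move the model roughly parallel to the screen
    // by rewriting the space-separated "x y z" translation attribute.
    var t = transformEl.getAttribute( 'translation' ).split( ' ' ).map( Number );
    transformEl.setAttribute( 'translation', ( t[0] + dx ) + ' ' + ( t[1] - dy ) + ' ' + t[2] );
} );
document.addEventListener( 'mouseup', function () { dragging = false; } );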
A 3D gizmo is a bit trickier because it needs to be drawn on top of the model while still being clickable, and currently there is no way to do this. You could use the RenderInterface to draw the gizmo in a second pass after clearing the depth buffer, but this would not be done in the internal object-picking pass that's required to find out which object a user has clicked on.
As a workaround, the toolkit library used a second XML3D canvas with a transparent background positioned over the first that intercepted and relayed all mouse events. When an object was selected its transformation was mirrored into the second canvas where the gizmo was drawn. Changes in the gizmo's transformation were then mirrored back to the object in the main canvas.
Have a look at the classes in the toolkit's xml3doverlay and widgets folders.
Some advice for people implementing draggable objects with XML3D:
Use the ray-picking method of the XML3D element to get both the object and the point where the ray intersects the model (the getElementByRay function).
Convert the mouse movements from screen coordinates to world coordinates.
You must scale the transform by the distance from the camera to the picked point relative to the distance from the camera to the projection plane, so that the moving object tracks your cursor.
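A sketch of that scaling idea: using the camera's vertical field of view you can work out how many world units one pixel of mouse movement covers at the depth of the picked point (the function and parameter names are made up for illustration; the picked point would come from the getElementByRay result mentioned above):

function worldDeltaFromMouseDelta( dxPixels, dyPixels, pickedPoint, cameraPos, fovYRadians, canvasHeight ) {
    // Distance from the camera to the point that was picked on the model.
    var dist = Math.sqrt(
        Math.pow( pickedPoint.x - cameraPos.x, 2 ) +
        Math.pow( pickedPoint.y - cameraPos.y, 2 ) +
        Math.pow( pickedPoint.z - cameraPos.z, 2 ) );
    // Height of the view frustum at that distance, in world units.
    var worldHeight = 2 * dist * Math.tan( fovYRadians / 2 );
    // One pixel of mouse movement corresponds to this many world units at that depth.
    var unitsPerPixel = worldHeight / canvasHeight;
    return { x: dxPixels * unitsPerPixel, y: -dyPixels * unitsPerPixel };
}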

How to correctly apply matrix transformations on multiple models?

I am trying to create an animation of a tree growing in WebGL.
In this project, I have two or three models that form the building blocks of the tree:
Branch model
Leaf model
Trunk model
Now I apply various matrix transformations to those models: translation, scaling, rotation, etc.
My question is how to organize these things efficiently in the project.
I'm not using any library; I'm handling everything myself. I have custom Scene and Camera functions which are working pretty well.
What I'm stuck on is that I cannot apply specific transformations to a specific individual model.
Also, I cannot dynamically change the camera clip space as more and more models get loaded.
Can anyone help me out with the basic organization of the project?
Look at the Procedural maple tree at http://www.ibiblio.org/e-notes/Splines/tree/maple.htm and the notes at the bottom.
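Beyond that reference, one common way to organize per-model transforms (a hedged sketch, not taken from the linked page; the mat4 calls stand in for whatever matrix helpers you already have, e.g. gl-matrix) is to give every instance its own local matrix and compose parent * local while drawing:

function Node( mesh ) {
    this.mesh = mesh;             // whatever your renderer draws (buffers, draw count, ...)
    this.local = mat4.create();   // per-instance translate/rotate/scale
    this.children = [];
}

function drawNode( node, parentMatrix, drawMesh ) {
    var world = mat4.create();
    mat4.multiply( world, parentMatrix, node.local );   // world = parent * local
    if ( node.mesh ) {
        drawMesh( node.mesh, world );                   // upload `world` as the model-matrix uniform
    }
    for ( var i = 0; i < node.children.length; i++ ) {
        drawNode( node.children[ i ], world, drawMesh );
    }
}

Each branch, leaf or trunk segment then gets its own Node, so growing the tree is just animating the local matrices. For the clip-space problem, the near/far planes of the projection can be recomputed from the scene's bounding box whenever new models are added.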

Augmented Reality, superimposing 3d models

How would I use a library like this https://code.google.com/p/js-handtracking/ with a 3D model in order to superimpose the model on a tracked hand?
How would you use a 3D model file with such a thing?
What format would the model need to be in? I've never dealt with 3D model superimposition.
The 3D object you want to superimpose may be any 3D object (e.g. OBJ, 3DS, PLY, VRML). The model representation is not really an issue, as you can (usually) convert one 3D representation into any other. It is up to your browser/player doing the rendering of the scene.
In order to overlay a 3D model on the detected hand you need to know the position of the hand (relative to your scene, of course). The detection algorithm should give you some kind of transformation matrix, which can be decomposed into rotation, translation and scale. You can then use these values to place your 3D object at the right position in the 3D scene.
You should first check whether the hand-tracking algorithm has an API, or at least how you can intercept its output data (if any). Otherwise you will have to search through the algorithm's source code for where the detection is made, get the transformation matrix, and apply it to your 3D object.
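If the scene is rendered with three.js, applying such a matrix could look roughly like the sketch below (note the assumption: js-handtracking itself only reports detected hand candidates, so the 4x4 matrix here stands in for whatever pose data you manage to extract from the tracker):

// `model` is a loaded THREE.Object3D; `matrixArray16` is a column-major 4x4 from the tracker.
function placeModelFromTracker( model, matrixArray16 ) {
    var m = new THREE.Matrix4();
    m.fromArray( matrixArray16 );
    model.matrixAutoUpdate = false;   // we drive the matrix directly each frame
    model.matrix.copy( m );
    // Alternatively, decompose it into the usual position / rotation / scale:
    // m.decompose( model.position, model.quaternion, model.scale );
}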
