I work with Autodesk Forge and display a 3D building in the viewer.
I would like to know if there is a way to get a node's properties (in the viewer).
I have the node number of a floor, and now I want to get the floor's position and rotation values inside the viewer. Since it is a plane surface, I suppose it must have some local coordinates saved somewhere. This post seems to confirm it.
By default, each node (component) has no rotation and a null translation applied to it. What you need is to access the vertices of a specific node in order to determine an accurate extent in 3D space. Alternatively, you can access the bounding box of a node to get an approximation (a short sketch follows the links below). Take a look at the following articles:
Accessing mesh information
Getting bounding boxes of each component in the viewer
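For reference, a minimal sketch of the entry point those articles use, assuming viewer is an initialized Forge viewer instance and fragId is a fragment id of the node; getRenderProxy lives on viewer.impl, the viewer's internal (unofficial) API, so treat it as version-dependent:

// Unofficial but widely used: obtain a THREE.Mesh-like render proxy
// for one fragment of the model via the viewer's internal impl API.
const fragProxy = viewer.impl.getRenderProxy(viewer.model, fragId);
console.log(fragProxy.geometry);    // BufferGeometry with the fragment's vertices
console.log(fragProxy.matrixWorld); // world transform of the fragment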
Thanks.
The following articles helped me a lot:
Get THREE.Mesh elements in Autodesk Forge Viewer
How Autodesk Forge viewer manages multiple scenes to select multiple elements
Getting bounding boxes of each component in the viewer
In the end, I used the fragIds to access the bounding box of a sub-element of the 3D object.
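A sketch of that approach, assuming viewer is an initialized Forge viewer and dbId is the node number (instanceTree, enumNodeFragments and getWorldBounds are part of the viewer's API):

// Union the world-space bounding boxes of all fragments of a node.
function getNodeBoundingBox(viewer, dbId) {
  const tree = viewer.model.getData().instanceTree;
  const fragList = viewer.model.getFragmentList();
  const nodeBox = new THREE.Box3();
  tree.enumNodeFragments(dbId, function (fragId) {
    const fragBox = new THREE.Box3();
    fragList.getWorldBounds(fragId, fragBox);
    nodeBox.union(fragBox);
  }, true); // true = include fragments of child nodes
  return nodeBox;
}

The center of the returned box gives an approximate position for the node.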
I'm trying to build a web-based app which overlays assets like spectacles onto your face.
While I'm able to use a tfjs model to get a face mesh, how can I position a PNG file appropriately? Assuming I manually find the location of the centre of the spectacles sample, I can use this point as a reference and add it to the appropriate location in the face mesh. Is there a better way to do this?
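One hedged sketch of a landmark-based alternative: anchor the overlay to two stable keypoints instead of a hand-measured centre. This assumes the MediaPipe FaceMesh keypoint numbering used by @tensorflow-models/face-landmarks-detection, where indices 33 and 263 are taken here to be the outer eye corners (worth verifying against the model card), and a 2D canvas overlay:

// Position, scale and roll a spectacles PNG from two eye landmarks.
// `prediction` is one face from model.estimateFaces(); scaledMesh
// holds [x, y, z] keypoints in canvas pixel coordinates.
function drawSpectacles(ctx, prediction, img) {
  const [rx, ry] = prediction.scaledMesh[33];  // assumed right-eye outer corner
  const [lx, ly] = prediction.scaledMesh[263]; // assumed left-eye outer corner
  const cx = (rx + lx) / 2;                    // midpoint between the eyes
  const cy = (ry + ly) / 2;
  const eyeDist = Math.hypot(lx - rx, ly - ry);
  const angle = Math.atan2(ly - ry, lx - rx);  // head roll
  const width = eyeDist * 1.6;                 // hand-tuned scale factor
  const height = width * (img.height / img.width);
  ctx.save();
  ctx.translate(cx, cy);
  ctx.rotate(angle);
  ctx.drawImage(img, -width / 2, -height / 2, width, height);
  ctx.restore();
}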
I am a three.js beginner and really impressed by its functionality.
I want to load a 3D model with three.js and rotate one element of this model with JavaScript (e.g. to rotate the hand of a human model or the door of a car model). Is this possible with three.js? Can you provide examples? Which 3D format/loader do you suggest for that purpose?
I tried to move parts of e.g. this model; however, the object representing the model after loading it with
loader.load( 'models/3ds/portalgun/portalgun.3ds', function ( object ) {..
seems not to have a list of sub-elements which I can rotate... So I see no way to select a sub-element of this model that I can transform (e.g. rotate). I am only able to transform (rotate) the complete model.
According to the documentation, Object3D has a list of children, but in my case it is always empty... Can you provide me with an example?
The animation examples such as this one seem not to use a loader...
Edit: E.g. how can I move the hand of this model in JS?
Thank you
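For reference, a sketch of the kind of sub-object access being asked about, assuming a glTF model whose parts are exported as named nodes (many 3DS exports collapse everything into a single mesh, which would explain the empty children list); 'Hand' is a hypothetical node name:

// Assumes three.js and its GLTFLoader example script are loaded.
const loader = new THREE.GLTFLoader();
loader.load('models/model.gltf', function (gltf) {
  // Print the hierarchy to see which named sub-objects exist.
  gltf.scene.traverse(function (child) {
    console.log(child.name, child.type);
  });
  // Rotate one named part independently of the rest of the model.
  const hand = gltf.scene.getObjectByName('Hand');
  if (hand) hand.rotation.z = Math.PI / 4;
  scene.add(gltf.scene); // assumes an existing THREE.Scene named `scene`
});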
I'm creating a WebVR application using three.js. Currently I'm trying to set the resolution of the window to the resolution of the HMD in order to get a clearer picture. I've been trying to get this information using the VREyeParameters renderWidth/renderHeight components. However, with the Vive I'm using, I keep getting a resolution much larger than the supposed 2160x1200 (I got the width to be 3448). Am I grabbing the wrong information, and is there somewhere else I need to be getting these values from?
The values look correct.
You are rendering to a temporary texture that is projected with barrel distortion onto the destination LCD. For best quality, the temporary texture is rendered at a higher resolution.
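For reference, a sketch of the usual sizing pattern with the (now deprecated) WebVR API; renderWidth/renderHeight describe one eye of the pre-distortion texture, hence the larger-than-panel numbers:

// Size the renderer to the HMD's recommended render target.
// Assumes an existing THREE.WebGLRenderer named `renderer`.
navigator.getVRDisplays().then(function (displays) {
  const vrDisplay = displays[0];
  const leftEye = vrDisplay.getEyeParameters('left');
  const rightEye = vrDisplay.getEyeParameters('right');
  renderer.setSize(
    Math.max(leftEye.renderWidth, rightEye.renderWidth) * 2, // both eyes side by side
    Math.max(leftEye.renderHeight, rightEye.renderHeight)
  );
});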
I'm working on porting an existing three.js project to WebVR + Oculus Rift. Basically, this app takes an STL file as input, creates a THREE.Mesh based on it and renders it on an empty scene. I managed to make it work in Firefox Nightly with the VREffect plugin for three.js and VRControls. A problem I have is that models rendered in VR aren't really 3D. Namely, when I move the HMD back and forth, an active 3D model doesn't get closer/farther, and I can't see different sides of the model. It looks like the model is rather a flat background image stuck in its position. If I add a THREE.AxisHelper to the scene, it is transformed correctly when the HMD is moved.
Originally, THREE.OrbitControls were used in the app, and models were rotated and moved properly.
There's quite a lot of source code, so I'll post snippets on demand.
It turned out that technically there was no problem. The issue was essentially with the different scales of my models and of the Oculus movements. When VRControls is used with default settings, it reports the position of the HMD as it reads it from the Oculus, in meters. So the range of movement of my head could barely exceed 1 m, whereas the average size of my models is a few dozen of their own units. When I used them together in the same scene, it was as if the viewer were an ant looking at a giant model. Naturally, the ant has to walk a while to see another side of the model. That's why it didn't seem like a 3D body.
Fortunately, there's a scale property of VRControls that can be used to adjust the scale of HMD movements. When I set it to about 30, everything works pretty well.
Thanks to @brianpeiris's comment, I decided to check the coordinates of the model and the camera once again to make sure they weren't tied to each other, and that led me to the solution.
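A sketch of the fix, assuming the VRControls class from the three.js examples (examples/js/controls/VRControls.js):

// Amplify the reported HMD translation so head movement (meters)
// is meaningful next to models that are dozens of units across.
const controls = new THREE.VRControls(camera);
controls.scale = 30; // tune to your model's units-per-meter
// Remember to call controls.update() once per frame in the render loop.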
What is the suggested approach for handling user input and camera controls in XML3D?
Basic interactivity can be added using the DOM tree events, but I'm not sure if that would be enough to provide rotation gizmos (for example).
Does the library provide some API to handle user input and camera controls?
I've noticed that there is an xml3d toolkit that was developed a year ago.
It seems, however, that this is rather a loose collection of demos than a library for handling user input, and there is no decent usage documentation for it.
I need to provide basic functionality like rotation/translation/scaling of models and controlling the camera.
xml3d.js doesn't provide any cameras or gizmos by itself. They're usually application-specific (there are dozens of ways to implement a camera for instance) so it doesn't really make sense to include them as part of the core library. A very basic camera is provided alongside xml3d.js but it's quite limited.
The xml3d-toolkit did include transformation gizmos and various camera controllers but it's not in active development anymore since the creator has moved on to other things. Still, it might be a good place to start, or at least to use as a reference in building your own camera or gizmo.
For example, a simple way to allow the user to change the transformation of a model would be to:
Add an onclick listener to each model that toggles the editing mode
Show 3 buttons somewhere in your UI to let the user switch between editing rotation, translation or scale
Add onmouseup and onmousedown listeners to the <xml3d> element to record click+drag movements
As part of those listeners, convert the change in mouse position to a change in transformation depending on what editing mode the user is in
Apply those transformation changes to the model, either by changing its CSS transform or through the relevant attribute on a <transform> element that's referenced by a <group> around your model (see the sketch after this list).
Exit the editing mode if the user clicks the canvas to deselect the object (rather than a click+drag operation)
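As a concrete illustration of the transform-editing step above, a sketch only: the element id and markup are hypothetical, and the rotation attribute is assumed to take XML3D's axis-angle form.

// Assumed markup:
//   <defs>
//     <transform id="modelTransform" rotation="0 1 0 0"></transform>
//   </defs>
//   <group transform="#modelTransform"> ...your model's mesh... </group>
let angle = 0;
function onDrag(deltaX) {
  // Map horizontal mouse movement to a rotation about the Y axis.
  angle += deltaX * 0.01;
  const t = document.getElementById('modelTransform');
  t.setAttribute('rotation', '0 1 0 ' + angle); // axis-angle: x y z angle
}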
To keep it from conflicting with camera interaction you could use the right mouse button for editing, or simply disable the camera while the user is editing a transformation.
A 3D gizmo is a bit trickier because it needs to be drawn on top of the model while still being clickable, and currently there is no way to do this. You could use the RenderInterface to draw the gizmo in a second pass after clearing the depth buffer, but this would not be reflected in the internal object-picking pass that's required to find out which object a user has clicked on.
As a workaround, the toolkit library used a second XML3D canvas with a transparent background positioned over the first that intercepted and relayed all mouse events. When an object was selected its transformation was mirrored into the second canvas where the gizmo was drawn. Changes in the gizmo's transformation were then mirrored back to the object in the main canvas.
Have a look at the classes in the toolkit's xml3doverlay and widgets folders.
Some advice for people implementing draggable objects with XML3D:
Use the ray-picking method of the XML3D element to get both the object and the point of intersection between the ray and the model (the getElementByRay function; see the sketch after this list).
Convert the mouse movements from screen coordinates to world coordinates.
You must scale the transformation by the ratio of the picked-point-to-camera distance to the camera-to-projection-plane distance, so that the moving object tracks your cursor.
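A sketch of the picking step, assuming the generateRay and getElementByRay helpers from the XML3D specification (their availability and exact signatures may vary between xml3d.js versions):

const xml3d = document.querySelector('xml3d');

xml3d.addEventListener('mousedown', function (ev) {
  // Build a ray from the click position, then pick along it.
  const ray = xml3d.generateRay(ev.clientX, ev.clientY);
  const hitPoint = new XML3DVec3();
  const hitNormal = new XML3DVec3();
  const picked = xml3d.getElementByRay(ray, hitPoint, hitNormal);
  if (picked) {
    // hitPoint is the intersection in world coordinates; keep it to
    // compute the point-to-camera distance used to scale the drag.
    console.log('picked', picked.id, 'at', hitPoint.x, hitPoint.y, hitPoint.z);
  }
});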