I am trying to add a skybox to a model in Forge Viewer.
So far I have managed to create and add the skybox to the model via an extension.
The problem is that the skybox needs to be big, and the camera's far clipping plane ends up too short, so the skybox is only partially visible or hidden.
I did not manage to modify the camera settings to change the clipping plane and was therefore thinking of another way:
I was wondering if it would be better to keep the skybox in a separate Three.js scene, but so far I have not been able to figure out how the extra scene should be added to Autodesk's Viewer3D, nor how it should be kept in sync with the main camera's rotation.
Any pointers and examples would be appreciated
Loading an extra scene can be overkill for such a feature; the easiest workaround would be to load a second model that has slightly larger extents than your skybox, so the viewer will automatically update its clipping planes.
What I suggest is to translate a model that contains only tiny spheres or cubes defining the desired extents of the skybox scene. Then load that model either by its URN or by downloading its resources and loading it locally, even if your other model is loaded from the cloud.
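For illustration, a rough sketch of loading such a second "extents" model alongside the main one (the URN and paths are placeholders; loadDocumentNode with keepCurrentModels is the viewer v7 style API, older viewer versions load the second model with viewer.load/loadModel directly):

// Sketch: load a tiny "extents" model next to the main model so the viewer
// recomputes its clipping planes to include the skybox volume.
// 'skyboxExtentsUrn' is a placeholder for your own translated model.
Autodesk.Viewing.Document.load('urn:' + skyboxExtentsUrn, function (doc) {
  var viewable = doc.getRoot().getDefaultGeometry();
  viewer.loadDocumentNode(doc, viewable, { keepCurrentModels: true });
});

// ...or, if the resources were extracted and are served locally:
// viewer.loadModel('/resources/skybox-extents/0.svf', {}, onLoadSuccess, onLoadError);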
See the links below for more details about extracting viewable resources and running local vs. cloud-based:
https://extract.autodesk.io
Working seamlessly online/offline when developing your Web applications with the Forge Viewer
Hope that helps
Related
I'm trying to build a web based app which overlays assets like spectacles onto your face.
While I'm able to use a tfjs model to get a face mesh, how can I position a png file appropriately? Assuming I manually find the location of the centre of the spectacles sample, I can use this point as a reference and add it to the appropriate location in the face mesh. Is there a better way to do this?
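For what it's worth, a minimal sketch of the reference-point approach described above, drawing the PNG onto a canvas at a face-mesh landmark (the landmark index, overlay size and image path are assumptions, not values from the question):

// Draw the spectacles PNG over the video frame, anchored at a face-mesh landmark.
// `predictions` comes from the tfjs facemesh / face-landmarks-detection model.
const specs = new Image();
specs.src = 'spectacles.png'; // placeholder asset

function drawOverlay(ctx, predictions) {
  if (!predictions.length) return;
  // Landmark 168 sits roughly between the eyes in the MediaPipe mesh (assumption).
  const [x, y] = predictions[0].scaledMesh[168];
  const width = 180; // tune to the detected face size
  const height = width * (specs.height / specs.width);
  ctx.drawImage(specs, x - width / 2, y - height / 2, width, height);
}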
I'm simply trying to add a model with diffuse and bump textures to a simple scene in react-three-fiber.
I have literally no clue what I'm doing wrong.
Here's the sandbox: https://codesandbox.io/s/three-point-lighting-in-react-three-fiber-forked-qeqlx?file=/src/index.js
The model is a GLTF moon, that has the textures baked in. The moon is just a sphere but I want to use the GLTF model. Currently, the scene displays the background and lights, but no model.
If you have any insight about this I would appreciate it immensely!
At first glance, it looks like you're importing GLTFLoader and moon, but you're never using them. That's what those squiggly yellow lines mean:
Make sure you implement them as outlined in the documentation:
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader';

const loader = new GLTFLoader();
loader.load(
  moon, // URL of the imported .gltf asset
  function (gltf) {
    scene.add(gltf.scene);
  }
);
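Since the question uses react-three-fiber, the idiomatic equivalent there would be the useLoader hook; a sketch, assuming the .gltf lives in the public folder (depending on the version, the package is react-three-fiber or @react-three/fiber):

import React from 'react';
import { useLoader } from '@react-three/fiber';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader';

function Moon() {
  // '/moon.gltf' is an assumed path to the asset in the public folder.
  const gltf = useLoader(GLTFLoader, '/moon.gltf');
  return <primitive object={gltf.scene} />;
}

// useLoader suspends while loading, so render it inside a <Suspense> boundary:
// <Suspense fallback={null}><Moon /></Suspense>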
Alright, I figured it out. Thanks to @Marquizzo
Issues:
The .gltf file was not in the correct directory. It needed to be in the public directory since I am using the "npx @react-three/gltfjsx" script.
The .gltf was incredibly large relative to the scene. In Cinema 4D it is perfectly sized, but in three.js it was around 99x too large.
The position of the mesh was behind the camera. I had no clue that you could use OrbitControls to move the camera around and manually find the object. I also positioned the object at [0,0,0].
The template scene I was using was from a tutorial almost a year old, so there had been some major changes since then that caused simple bugs at runtime. I updated the dependencies.
Issues I still have:
Textures aren't loading from the baked-in .gltf file.
I increased the lighting and it seems that the lighting isn't the issue.
Here's the fixed sandbox: fixed
What I learned:
OrbitControls (see the sketch after this list)
Use of OBJLoader
useHelpers like CameraHelper
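For reference, a small sketch of how those pieces fit together in a react-three-fiber Canvas, using OrbitControls from @react-three/drei and scaling the oversized model down (the 0.01 factor and the Moon component from the earlier sketch are assumptions, not values from the sandbox):

import React, { Suspense } from 'react';
import { Canvas } from '@react-three/fiber';
import { OrbitControls } from '@react-three/drei';

export default function App() {
  return (
    <Canvas camera={{ position: [0, 0, 5] }}>
      <ambientLight intensity={0.8} />
      <Suspense fallback={null}>
        {/* Moon is the useLoader-based component sketched earlier */}
        <group scale={[0.01, 0.01, 0.01]}>
          <Moon />
        </group>
      </Suspense>
      {/* OrbitControls lets you orbit the camera to find a misplaced object */}
      <OrbitControls />
    </Canvas>
  );
}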
I'm working on porting an existing three.js project to WebVR + Oculus Rift. Basically, this app takes an STL file as input, creates a THREE.Mesh based on it and renders it on an empty scene. I managed to make it work in Firefox Nightly with VREffect plugin to three.js and VRControls. A problem I have is models rendered in VR aren't really 3D. Namely, when I move the HMD back and forth an active 3D model doesn't get closer/farther, and I can't see different sides of the model. It looks like the model is rather a flat background image stuck to its position. If I add THREE.AxisHelper to the scene, it is transformed correctly when HMD is moved.
Originally, THREE.OrbitControls were used in the app and models were rotated and moved properly.
There's quite some amount of source code so I'll post some snippets on demand.
It turned out that technically there was no problem. The issue was essentially the different scales of my models and the Oculus movements. When VRControls is used with default settings, it reports the HMD position as it reads it from the Oculus, in meters. So the range of movement of my head could barely exceed 1 m, whereas the average size of my models is a few dozen of their own units. When I used them together in the same scene, it was as if the viewer were an ant looking at a giant model. Naturally, the ant has to walk a while to see another side of the model. That's why it didn't seem like a 3D body.
Fortunately, there's a scale property on VRControls that can be used to adjust the scale of HMD movements. When I set it to about 30, everything works pretty well.
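For reference, a minimal sketch of that adjustment, assuming the standard VRControls setup from the three.js examples of that era:

// `camera` is the scene camera also passed to VREffect.
var controls = new THREE.VRControls(camera);
controls.scale = 30; // scale HMD movement up to roughly match the model's units

// in the render loop:
// controls.update();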
Thanks to @brianpeiris's comment, I decided to check the coordinates of the model and camera once again to make sure they weren't tied to each other, and that led me to the solution.
I have a JSON loaded mirror that I would like to hook up to a webcam. The mirror's reflection will be updated with video coming from a canvas. I was able to follow this source http://threejs.org/examples/#canvas_materials_video
to display a video on a canvas.
However, I need the video texture to run on a specific face of the object. I've tried targeting the object via geometry.faces[i].materialIndex to no avail. The animation is moving, so having a plane to emulate that the texture is on the model is also not optimal. Is there any advice on what to do?
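As a rough sketch of the materialIndex approach mentioned above, assuming a three.js version with THREE.VideoTexture and the legacy Geometry face API (older versions instead draw the video to a canvas-backed THREE.Texture, as in the linked example), and assuming you know which face indices form the mirror surface:

// `video` is a playing <video> element fed by the webcam, `mesh` is the loaded mirror.
var videoTexture = new THREE.VideoTexture(video);

var materials = [
  mesh.material,                                      // original material (index 0)
  new THREE.MeshBasicMaterial({ map: videoTexture })  // video material (index 1)
];

// Point the faces that make up the mirror surface at the video material.
// `mirrorFaceIndices` is a placeholder for the indices of those faces.
mirrorFaceIndices.forEach(function (i) {
  mesh.geometry.faces[i].materialIndex = 1;
});
mesh.geometry.groupsNeedUpdate = true;

// Depending on the three.js version this is either a plain material array
// or new THREE.MeshFaceMaterial(materials).
mesh.material = materials;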
How can I get the coordinates of each corner of my perspective camera's view in a 3D scene?
I'm using the Three.js library.
To be more precise, I have marked the coordinates I need to know in the image below:
I need this because I'm creating a real-world map engine with movement through the 3D scene via the mouse cursor.
What I'm trying to achieve is available here:
http://www.zephyrosanemos.com/windstorm/current/live-demo.html
As you can see, in this sample new terrain is loaded when the camera reaches a new location (and terrain that is no longer visible is garbage-collected when the camera leaves the old viewport):
Now, here is a screenshot from my three.js application:
As you can see, I'm loading my scene statically; only one plane with buildings is available (the building data is loaded from my server, and I took the data from some OSM services).
It can only be controlled with the keyboard (e.g. the 3D scene moves to a new location when the arrow keys are pressed; you can also see empty space in the map :) this is only because I cut down the data in the DB for testing purposes, when the application is ready it won't be empty, it's just much easier to work with a small number of records in the DB). All meshes are deleted, and with each new movement the new data is loaded and the new buildings are rendered.
But I want them to load dynamically with camera movement, as in the dynamic terrain generation example. I suppose I should prepare a big matrix of planes that loads data only for the 8 surrounding planes (as in the terrain generation sample) and add logic for the camera intersecting/leaving the old view to make this dynamic behaviour work.
So... I would appreciate your help with this hard task :)
To get the field-of-view angle, simply read the value of this field:
THREE.PerspectiveCamera.fov
With that angle you can construct an imaginary viewing frustum and test it for collisions. For the collision part, refer to this question:
How to detect collision in three.js?
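As a rough sketch of that idea (assuming a recent three.js; older releases name the method setFromMatrix instead of setFromProjectionMatrix, and `tileMesh` is a placeholder for one of your map tiles):

// Build the camera's view frustum and test whether a tile/mesh is inside it.
var frustum = new THREE.Frustum();
var matrix = new THREE.Matrix4().multiplyMatrices(
  camera.projectionMatrix,
  camera.matrixWorldInverse
);
frustum.setFromProjectionMatrix(matrix); // setFromMatrix(matrix) in older three.js

if (frustum.intersectsObject(tileMesh)) {
  // the tile is at least partially visible: load / keep its data
} else {
  // the tile has left the view: it can be unloaded
}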