How can I get the value of each angle of my perspective camera in a 3D scene?
I'm using the Three.js library.
To be more precise, I have marked what I want to find out with the following sign:
These are the coordinates I need to know:
I need this because I'm creating a real-world map engine with mouse-driven movement through the 3D scene.
What I'm trying to achieve is available here:
http://www.zephyrosanemos.com/windstorm/current/live-demo.html
As you can see, in this sample new terrain is loaded as the camera reaches a new location (and terrain that is no longer visible is garbage-collected once the camera leaves the old viewport):
Now, here is a screenshot from my three.js application:
As you can see, I'm loading my scene statically: only a single plane with buildings is available (the building data is loaded from my server and was originally taken from some OSM services).
It can only be controlled with the keyboard (e.g. the 3D scene moves to a new location when an arrow key is pressed; the empty space you can see in the map :) is only because I trimmed the data in the DB for testing purposes. It won't be empty when the application is ready; it's just much easier to work with a small number of records in the DB). All meshes are deleted, and with each movement the new data is loaded and the new buildings are rendered.
But I want them to load dynamically as the camera moves, just like in the dynamic terrain generation example. My idea is to prepare a big matrix of planes that loads data only for the 8 planes around the camera (as in the terrain generation sample), and to implement the logic for the camera entering/leaving the old view to drive this dynamic loading.
So... I'd appreciate your help with this hard piece of the task :)
To get the field of view angle simply get the value of this field:
THREE.PerspectiveCamera.fov
With that angle you can construct an imaginary view frustum (the cone-like volume the camera sees) and test it for collisions. For the collision part, refer to this question:
How to detect collision in three.js?
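As a rough sketch of how that frustum test could drive the dynamic loading described in the question (the tiles array, loadTile() and unloadTile() are assumptions, not three.js APIs; older three.js releases name the method setFromMatrix() instead of setFromProjectionMatrix()):

    // Decide which tile planes to load based on the camera's view frustum.
    // `tiles` is a hypothetical array of { box: THREE.Box3, loaded: boolean }
    // entries describing the plane grid around the camera.
    const frustum = new THREE.Frustum();
    const projScreenMatrix = new THREE.Matrix4();

    function updateVisibleTiles(camera, tiles) {
      camera.updateMatrixWorld();
      projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
      frustum.setFromProjectionMatrix(projScreenMatrix); // setFromMatrix() in older releases

      for (const tile of tiles) {
        const visible = frustum.intersectsBox(tile.box);
        if (visible && !tile.loaded) {
          loadTile(tile);   // hypothetical: fetch building data from the server
        } else if (!visible && tile.loaded) {
          unloadTile(tile); // hypothetical: dispose meshes that left the view
        }
      }
    }

Calling this on each camera move gives you the enter/leave-view logic the question describes.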
Related
I'm trying to visualize hydrogen wave functions, and would like to do this using volume ray tracing/casting. All the guides online for volume rendering are based on having 2D textures from some medical imaging. In my case, I don't have any data as images; instead, the 3D data is already in memory (I'm using it to generate particles right now).
Do I really need to convert all my 3D data to 2D textures, only to load them in again and fake a 3D texture? If not, how can it be done without textures?
Yes, from your link I understand that you have a function that takes a 3D coordinate and returns a probability between 0 and 1. You can use this directly during the evaluation of each ray.
For each ray,
for each distance ∆ along the ray
calculate the coordinates at distance ∆ from the camera
calculate the probability at those coordinates using your function
add the probability to the ray's accumulated color
Using this method, you skip the particle positions that you have rendered in the linked example, and use the function directly.
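A minimal CPU-side sketch of that loop, assuming probabilityAt(x, y, z) stands in for your existing wave-function evaluator (in a real renderer you would typically run this per fragment in a shader):

    // March along one ray and accumulate the probability sampled at each step.
    function accumulateRay(origin, direction, maxDistance, step) {
      let accumulated = 0;
      for (let t = 0; t < maxDistance; t += step) {
        // Coordinates at distance t from the camera along the (normalized) ray.
        const x = origin.x + direction.x * t;
        const y = origin.y + direction.y * t;
        const z = origin.z + direction.z * t;
        // Sample the wave function directly instead of reading a 3D texture.
        accumulated += probabilityAt(x, y, z) * step;
      }
      return Math.min(accumulated, 1); // clamp so it can be used as a colour intensity
    }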
I'm working on porting an existing three.js project to WebVR + Oculus Rift. Basically, this app takes an STL file as input, creates a THREE.Mesh based on it and renders it on an empty scene. I managed to make it work in Firefox Nightly with VREffect plugin to three.js and VRControls. A problem I have is models rendered in VR aren't really 3D. Namely, when I move the HMD back and forth an active 3D model doesn't get closer/farther, and I can't see different sides of the model. It looks like the model is rather a flat background image stuck to its position. If I add THREE.AxisHelper to the scene, it is transformed correctly when HMD is moved.
Originally, THREE.OrbitControls were used in the app and models were rotated and moved properly.
There's quite some amount of source code so I'll post some snippets on demand.
It turned out that technically there was no problem. The issue was essentially the different scales of my models and of the Oculus movements. When VRControls is used with default settings, it reports the position of the HMD as read from the Oculus, in meters. So the range of movement of my head could barely exceed 1 m, whereas the average size of my models is a few dozen of their own units. Used together in the same scene, it was as if the viewer were an ant looking at a giant model. Naturally, the ant has to walk a while to see another side of the model. That's why it didn't seem like a 3D body.
Fortunately, VRControls has a scale property that can be used to adjust the scale of HMD movements. When I set it to about 30, everything worked pretty well.
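A minimal sketch of that adjustment (the value 30 is specific to my models' units, so treat it as a starting point):

    // VRControls' scale property multiplies the HMD position read from the Oculus.
    var controls = new THREE.VRControls(camera);
    controls.scale = 30; // ~30 model units per metre of real head movement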
Thanks to #brianpeiris's comment, I decided to check the coordinates of the model and the camera once again to make sure they weren't tied to each other. And that led me to the solution.
I am writing a simple minecraft clone in THREE.js.
However, the result is very laggy.
I am using box geometry for the voxels, but I need to remove blocks when they are broken and need to use mouse picking.
I have heard that joining geometries speeds it up but as far as I am aware, this means that you cannot remove any of the voxels or use mouse picking.
What are the other ways of speeding up in THREE.js?
Using a box per voxel (making one draw call per voxel) will be too slow on any machine even if you wrote it in assembly language.
You need to build a mesh for each section of your world. That is, divide the world into 48x48x48-unit chunks and build one mesh that contains all the voxels in that area. When the user edits a box, you edit the mesh (its vertices) rather than removing a Box object.
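A hedged sketch of that idea, one mesh per chunk, rebuilt whenever a voxel inside it changes (CHUNK_SIZE, the voxel store, buildChunkGeometry(), scene and chunkMaterial are placeholders for your own code, not three.js APIs):

    const CHUNK_SIZE = 48;
    const chunks = new Map(); // "cx,cy,cz" -> THREE.Mesh

    function chunkKey(x, y, z) {
      return [Math.floor(x / CHUNK_SIZE),
              Math.floor(y / CHUNK_SIZE),
              Math.floor(z / CHUNK_SIZE)].join(',');
    }

    function setVoxel(world, x, y, z, value) {
      world.set(`${x},${y},${z}`, value);     // hypothetical voxel store
      rebuildChunk(world, chunkKey(x, y, z)); // re-mesh only the affected chunk
    }

    function rebuildChunk(world, key) {
      const old = chunks.get(key);
      if (old) {
        scene.remove(old);
        old.geometry.dispose();
      }
      // buildChunkGeometry() is a placeholder for your meshing code: it should emit
      // faces only where a solid voxel borders an empty one.
      const mesh = new THREE.Mesh(buildChunkGeometry(world, key), chunkMaterial);
      chunks.set(key, mesh);
      scene.add(mesh);
    }

Mouse picking still works: raycast against the chunk meshes and derive the voxel coordinates from the intersection point and face normal.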
There are two types of objects we can place on Google Earth.
The first is 3D models: they have a real size and they scale depending on the camera position.
The second is icons and labels: they overlay the map and do not scale while the camera moves.
So is it possible to use 3D models like icons? That is, I want to replace my PNG icons with nice 3D models that do not scale and that behave like icons.
I know that there's access to the camera and object positions, and that we could rescale a 3D object based on distance every time the camera or an object moves, but I believe there's a simpler way that avoids all these calculations and observables.
I would say no, there is no simple way to achieve this.
As you say, an icon doesn't have any geometry other than a location, but a 3D model is specifically defined by its location and its length, width, height, etc. Yes, you could calculate the scaling and attempt to redraw the model based on the current view, but that wouldn't be trivial, and I really doubt the results would be very pleasing.
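If you do attempt it, the core of the calculation is just keeping the model's scale proportional to its distance from the camera; a generic sketch (distanceTo() and setModelScale() are placeholders for whatever your API exposes):

    const REFERENCE_DISTANCE = 1000; // distance at which the model is shown at scale 1

    function keepApparentSize(cameraPosition, modelPosition, setModelScale) {
      const d = distanceTo(cameraPosition, modelPosition); // placeholder distance helper
      setModelScale(d / REFERENCE_DISTANCE);               // scale linear in distance ≈ constant screen size
    }

You would have to call this on every camera or object move, which is exactly the bookkeeping the question was hoping to avoid.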
I'm trying to implement a seemingly infinite, dynamic map of divs.
My main issues are how to generate new tiles in any direction when the user drags the board,
and how the map should be stored in the database.
Here is the quick start I have.
I suspect the grid is just large as opposed to truly infinite.
You only store the tiles that are placed.
The 'view' of the board is limited, even the minimap version is only about 256x256.
The 'empty' board can simply be drawn, derived from the top left (or another single point) and the width and height of the screen.
You could also use a pseudo random number to procedurally vary the appearance of each blank square.
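For that pseudo-random variation, a deterministic hash of the tile coordinates works well: a blank square always looks the same without storing anything for it. A sketch (the hash constants are arbitrary):

    function tileVariant(x, y, variantCount) {
      // Simple integer hash of the tile coordinates.
      let h = Math.imul(x, 374761393) ^ Math.imul(y, 668265263);
      h = Math.imul(h ^ (h >>> 13), 1274126177);
      h ^= h >>> 16;
      return (h >>> 0) % variantCount; // index into your set of blank-square styles
    }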
One way of doing it would be to store the coordinates of each word in an R-Tree. You would then use the R-Tree to find all the words within the coordinate boundaries you'd like to see. This could be done in your backend (many database systems support indexing spatial coordinates).
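A small sketch of the client-side equivalent using rbush, a common JavaScript R-Tree library (its API is an assumption to verify; the same bounding-box query maps directly onto a spatial index in the database):

    import RBush from 'rbush';

    const tree = new RBush();

    // Index each placed word by its bounding box in board coordinates.
    tree.insert({ minX: 10, minY: 4, maxX: 14, maxY: 4, word: 'hello' });

    // Fetch everything intersecting the viewport the user has dragged to.
    function wordsInView(viewport) {
      return tree.search({
        minX: viewport.left,
        minY: viewport.top,
        maxX: viewport.right,
        maxY: viewport.bottom,
      });
    }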