Use 3d model as icon without scaling in Google Earth Plugin - javascript

There are two types of objects we can place on Google Earth.
The first one is 3D models - they have real-world size and they scale depending on the camera position.
The second one is icons and labels - they overlay the map and do not scale as the camera moves.
So is it possible to use 3D models like icons? That is, I want to replace my PNG icons with nice 3D models that do not scale and that behave like icons.
I know there is access to the camera and object positions, and we could rescale a 3D model based on the distance every time the camera or an object moves, but I hope there is a simpler way without all these calculations and listeners.

I would say no, there is no simple way to achieve this.
As you say, an icon doesn't have any geometry other than a location, but a 3D model is specifically defined by its location and its length, width, height, etc. Yes you could calculate the scaling and attempt to redraw the model based on the current view, but that wouldn't be trivial and I really doubt that the results would be very pleasing.
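For what it's worth, a rough sketch of that distance-based rescaling follows, assuming the classic (now retired) Google Earth Plugin API and a KmlModel already in the scene. The names modelLat, modelLon, modelAlt, baseScale and baseDistance are placeholders you would supply yourself; treat the API calls as an approximation to check against the plugin docs rather than a definitive implementation.

var EARTH_RADIUS = 6371000; // metres

// Approximate 3D distance between the current camera and the model.
function distanceToCamera(ge, modelLat, modelLon, modelAlt) {
  var cam = ge.getView().copyAsCamera(ge.ALTITUDE_ABSOLUTE);
  var toRad = Math.PI / 180;
  var dLat = (cam.getLatitude() - modelLat) * toRad;
  var dLon = (cam.getLongitude() - modelLon) * toRad;
  // Haversine formula for the ground distance
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(modelLat * toRad) * Math.cos(cam.getLatitude() * toRad) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  var ground = 2 * EARTH_RADIUS * Math.asin(Math.sqrt(a));
  var dAlt = cam.getAltitude() - modelAlt;
  return Math.sqrt(ground * ground + dAlt * dAlt);
}

// Rescale the model on every view change so its on-screen size stays
// roughly constant: the scale grows linearly with camera distance.
function keepApparentSize(ge, model, modelLat, modelLon, modelAlt, baseScale, baseDistance) {
  google.earth.addEventListener(ge.getView(), 'viewchange', function () {
    var s = baseScale * distanceToCamera(ge, modelLat, modelLon, modelAlt) / baseDistance;
    var scale = model.getScale();
    scale.setX(s);
    scale.setY(s);
    scale.setZ(s);
  });
}

As the answer notes, this recomputes the scale on every view change, which is exactly the bookkeeping the question was hoping to avoid.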

Related

How to implement rolling a ball on a sphere in terms of matrices?

Target:
It is necessary to create two spheres, one of which can be rolled over the surface of the other with the mouse, and implement a camera that can be moved around these balls using the keyboard.
Implementation:
I keep a matrix that stores the current rotation of the rolling ball. When the user drags, I get a series of mouse move events, and for each move I calculate how many degrees the rotation has changed around the current X and Y axes as the user sees them. I then build a matrix representing these two rotations and multiply the sphere's existing rotation matrix by it in reverse order - the reverse order is necessary because the rotation happens from the camera's point of view, not from model space (a sketch of this step appears after the code link below).
Problem:
With this implementation, however, the second sphere never changes its point of contact with the first sphere (it effectively slides along it). How can the motion of the contact point be implemented analytically in terms of matrices?
Here is the code if anyone is interested: https://github.com/AndrewStrizh/spheres-with-webGL
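For reference, the accumulation step described under "Implementation" might look roughly like this, using the gl-matrix library (an assumption; the linked repository may structure things differently). dx and dy are the mouse deltas in pixels and SENSITIVITY is an arbitrary constant.

import { mat4 } from 'gl-matrix';

let ballRotation = mat4.create(); // current orientation of the rolling ball

function onDrag(dx, dy) {
  const SENSITIVITY = 0.01; // radians per pixel of mouse movement
  // Rotation around the camera's X and Y axes, as the user sees them
  const drag = mat4.create();
  mat4.rotateX(drag, drag, dy * SENSITIVITY);
  mat4.rotateY(drag, drag, dx * SENSITIVITY);
  // Pre-multiply so the new rotation is applied in camera space rather
  // than in the ball's own model space ("in reverse order")
  mat4.multiply(ballRotation, drag, ballRotation);
}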
What you need is to be able to control the rotation of your sphere around two (or more) different rotation pivots.
A proper way to deal with complex transformations is to implement hierarchical transformations:
http://web.cse.ohio-state.edu/~wang.3602/courses/cse3541-2019-fall/05-Hierarchical.pdf
In this case, you can control the rotation of sphereB around sphereA by making sphereB a child of a third, invisible object - call it Locator - located at the center of sphereA. With a proper implementation of hierarchical transformations, rotating the Locator will also rotate sphereB around that Locator (and therefore around sphereA). At the same time, you can also apply a rotation of sphereB around its own center, making it spin.
In practice, implementing true hierarchical transformations requires implementing a scene graph, with proper node traversal, etc. But the main idea is that every object has what is called a local transform matrix and a world transform matrix. The local transform matrix holds only that object's own transformation (local to its own origin), while the world transform matrix is the final matrix, the combined result of all the hierarchical transformations (from parents) applied to the object.
The world transform matrix is the one used as "model" matrix, to be multiplied with the view and projection matrices. World and local transform matrices of nodes are computed like this (pseudocode):
node.worldMatrix = node.localMatrix * node.parent.worldMatrix;
Knowing that, and since you only need three objects and two hierarchical transformations, you don't have to implement a whole scene graph; you only need to simulate this principle by multiplying the proper matrices to reproduce the desired behavior.
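As a concrete illustration, here is a minimal sketch of that idea using the gl-matrix library (an assumption; any matrix library will do). Each node carries a local and a world matrix, a Locator sits at the centre of sphereA, and sphereB is parented to the Locator; radiusA, radiusB, orbitAngle and spinAngle are placeholder values.

import { mat4 } from 'gl-matrix';

const radiusA = 1.0, radiusB = 0.25; // example radii

function makeNode(parent) {
  return { parent: parent, localMatrix: mat4.create(), worldMatrix: mat4.create() };
}

function updateWorldMatrix(node) {
  if (node.parent) {
    // With gl-matrix (column vectors) the parent's world matrix goes on the
    // left; with a row-vector convention the order is reversed, as in the
    // pseudocode above.
    mat4.multiply(node.worldMatrix, node.parent.worldMatrix, node.localMatrix);
  } else {
    mat4.copy(node.worldMatrix, node.localMatrix);
  }
}

const sphereA = makeNode(null);
const locator = makeNode(sphereA);   // invisible pivot at sphereA's centre
const sphereB = makeNode(locator);   // child of the locator

function update(orbitAngle, spinAngle) {
  // Rotating the locator carries sphereB around sphereA...
  mat4.fromYRotation(locator.localMatrix, orbitAngle);
  // ...while sphereB is offset by the sum of the radii and spins on its own axis.
  mat4.fromTranslation(sphereB.localMatrix, [radiusA + radiusB, 0, 0]);
  mat4.rotateY(sphereB.localMatrix, sphereB.localMatrix, spinAngle);

  updateWorldMatrix(sphereA);
  updateWorldMatrix(locator);
  updateWorldMatrix(sphereB);
  // sphereB.worldMatrix is then used as the "model" matrix when drawing.
}

To make the contact point roll rather than slide, spinAngle would then be tied to orbitAngle scaled by the ratio of the radii.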

Volume ray tracing using three.js without textures

I'm trying to visualize hydrogen wave functions, and would like to do this using volume ray tracing/casting. All the guides online for volume rendering are based on having 2D textures from some medical imaging. In my case I don't have the data as images; instead, the 3D data is already in memory (I'm using it to generate particles right now).
Do I really need to convert all my 3D data to 2D textures, only to load them in again, and fake a 3D texture? If not, how can it be done without textures?
Yes; from your link I understand that you have a function that takes a 3D coordinate and returns a probability between 0 and 1. You can use this directly during the evaluation of each ray.
For each ray,
for each distance ∆ along the ray
calculate the coordinates at distance ∆ from the camera
calculate the probability at those coordinates using your function
add the probability to the ray's accumulated color
Using this method, you skip the particle positions that you have rendered in the linked example, and use the function directly.
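A CPU-side sketch of that loop might look like the following, where density() stands in for your wave-function probability and the step parameters are placeholders:

// Placeholder for the hydrogen wave-function probability at a 3D point,
// returning a value between 0 and 1.
function density(p) {
  const r = Math.hypot(p[0], p[1], p[2]);
  return Math.exp(-r); // fake radial falloff, for illustration only
}

function marchRay(origin, dir, steps, stepSize) {
  let accumulated = 0;
  for (let i = 0; i < steps; i++) {
    const t = i * stepSize;
    // Coordinates at distance t from the camera along this ray
    const p = [origin[0] + dir[0] * t,
               origin[1] + dir[1] * t,
               origin[2] + dir[2] * t];
    // Add the probability at this sample to the ray's accumulated value
    accumulated += density(p) * stepSize;
  }
  return Math.min(accumulated, 1); // clamp to a displayable intensity
}

For interactive frame rates you would evaluate the same loop per fragment in a shader, but the structure is identical: no texture is needed as long as the probability can be evaluated at an arbitrary coordinate.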

How do I make the Three.js camera look at the face of an object?

I have a sphere made of hexagons and pentagons and I am trying to make the camera look directly at a particular hexagon - so that the centre of the user's view is the hex and it appears flat (face-on).
The hexagons are made using the hexasphere.js plugin (https://github.com/arscan/hexasphere.js/tree/master). I am able to extract information from a mesh object which makes up a hex. But I don't know how to take the object info and tell the camera where to go.
I have tried using the normal matrix element of the mesh and finding the euler angles - but I don't know what to then do with them.
OK, I've found a solution. The hexasphere plugin provides the centre point of a face via hexasphereObj.tiles[i].centrePoint, which is a point object with a method project(radius, percent) that returns the coordinates of a point projected out from the centre of the hexasphere, through the centre of the face.
I was then able to move the camera to this projected point and have it lookAt the centre of the hexasphere.
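A short sketch of that approach, assuming three.js and hexasphere.js with the hexasphere centred at the origin (the tiles/centrePoint/project names are taken from the answer above; distanceFactor is a placeholder for how far out to place the camera):

function lookAtTile(camera, hexasphere, tileIndex, radius, distanceFactor) {
  const centre = hexasphere.tiles[tileIndex].centrePoint;
  // Project outward from the sphere's centre, through the face centre,
  // to get a point for the camera to sit at.
  const p = centre.project(radius, distanceFactor);
  camera.position.set(p.x, p.y, p.z);
  // Looking back at the sphere's centre keeps the chosen hex face flat
  // and centred in the view.
  camera.lookAt(new THREE.Vector3(0, 0, 0));
}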

Approach with <canvas> for a large 'map'

I'm looking to build a tile-based javascript game that has a top-down view. Think of: Sim City Classic.
I'm wondering what the right approach is for a map that's larger than the user's viewport. Should I create a large canvas for the entire map and let the user rely on the browser's scrolling, or should I make the canvas the size of the browser window and implement scrolling manually?
In both cases I can see that there might be performance considerations: the big map means a big canvas, which can be slow, and for the small one I will need to scroll manually (and redraw a lot during scrolling?).
The best approach for maps is to keep an offscreen canvas that is slightly larger than the display. Render the map onto that, then render that canvas onto the display canvas. While there is no movement you don't need to update the map; when there are only small pixel movements you only need to render the new content in the direction of movement. This is done by copying what is already rendered, shifting it in the direction opposite to the movement, and then rendering only what is new along the two edges in the direction of travel. If zooming in, you can zoom the background map and re-render it in parts rather than in one go. The same goes for zooming out: just render the edges, then re-render the map in parts.
Look at Google Maps - this is how they do it. Of course your map images will not have to wait to come from the server, so the user will not see blank areas while the map waits for content.
This method is the best way to handle large regular and irregular tiled maps.
If the map is animated then you will just have to brute-force it and re-render the map each animation frame, but still use the offscreen canvas. It may pay to reduce the map's animation rate to half the frame rate; that way you can interlace the animations, spreading half of them across the odd frames and the other half across the even frames.
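A rough sketch of the copy-and-patch scrolling described above, where drawTile() and forEachTileIn() are hypothetical helpers for your own tile data, and the buffer margin is an arbitrary example value:

const view = document.getElementById('view');        // visible canvas
const viewCtx = view.getContext('2d');
const buffer = document.createElement('canvas');      // offscreen map buffer
buffer.width = view.width + 256;                       // slightly larger than
buffer.height = view.height + 256;                     // the display
const bufCtx = buffer.getContext('2d');

function scrollBuffer(dx, dy) {
  // Shift what is already rendered opposite to the direction of travel
  // (some older browsers may need a scratch canvas for this self-copy).
  bufCtx.drawImage(buffer, -dx, -dy);
  // Then render only the newly exposed strips along the leading edges.
  if (dx > 0) renderRegion(buffer.width - dx, 0, dx, buffer.height);
  if (dx < 0) renderRegion(0, 0, -dx, buffer.height);
  if (dy > 0) renderRegion(0, buffer.height - dy, buffer.width, dy);
  if (dy < 0) renderRegion(0, 0, buffer.width, -dy);
}

function renderRegion(x, y, w, h) {
  // Hypothetical: walk the tiles overlapping the strip and draw each one.
  forEachTileIn(x, y, w, h, function (tile, tx, ty) {
    drawTile(bufCtx, tile, tx, ty);
  });
}

function present(offsetX, offsetY) {
  // Blit the relevant window of the offscreen buffer onto the display canvas.
  viewCtx.drawImage(buffer, offsetX, offsetY, view.width, view.height,
                    0, 0, view.width, view.height);
}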

How to get angles value of perspective camera in Three.js?

How can I get the values at each corner of my perspective camera's view in a 3D scene?
I'm using the Three.js library.
To be more precise, I have marked what I want to find out: the coordinates I need to know.
I need this because I'm creating a real-world map engine with movement through the 3D scene via the mouse cursor.
What I'm trying to achieve can be seen here:
http://www.zephyrosanemos.com/windstorm/current/live-demo.html
As you can see, in this sample new terrain is loaded as the camera reaches a new location (terrain that has left the old viewport is garbage-collected once the camera moves away):
Here is a screenshot from my three.js application:
As you can see, I'm loading my scene statically: only a single plane with buildings is available (the building data is loaded from my server, and was originally taken from some OSM services).
The scene can only be controlled with the keyboard (e.g. it moves to a new location when the arrow keys are pressed; you can also see empty space in the map :) this is only because I cut down the data in the DB for testing purposes - when the application is ready it won't be empty, and it's much easier to work with a small number of records in the DB). All meshes are deleted, and with each new movement the new data is loaded and the new buildings are rendered.
But I want the buildings to load dynamically as the camera moves, as in the dynamic terrain generation example. I suspect I should prepare a large matrix of planes which loads data only for the 8 surrounding planes (as in the terrain generation sample), plus logic for the camera intersecting/leaving the old view to drive the dynamic loading.
So... I'd appreciate some help with this hard task :)
To get the field of view angle, simply read this property:
THREE.PerspectiveCamera.fov
With that angle you can construct an imaginary viewing cone (the camera's frustum) and test it for collisions. For the collision part, refer to this question:
How to detect collision in three.js?
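If what is actually needed is the ground area covered by the view (for deciding which map planes to load), a small sketch using standard three.js calls (Raycaster, Plane) is to cast rays through the four screen corners and intersect them with the ground; this assumes the ground is the plane y = 0.

const raycaster = new THREE.Raycaster();
const ground = new THREE.Plane(new THREE.Vector3(0, 1, 0), 0); // plane y = 0

function visibleGroundCorners(camera) {
  // Normalised device coordinates of the four screen corners
  const corners = [
    new THREE.Vector2(-1, -1), new THREE.Vector2(1, -1),
    new THREE.Vector2(1, 1),  new THREE.Vector2(-1, 1)
  ];
  return corners.map(function (ndc) {
    raycaster.setFromCamera(ndc, camera);
    const hit = new THREE.Vector3();
    // Returns null when a corner ray points above the horizon
    return raycaster.ray.intersectPlane(ground, hit);
  });
}

camera.fov gives only the vertical field-of-view angle (with camera.aspect for the horizontal one); the corner-ray approach above avoids reconstructing the frustum from those angles by hand.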
