How to find point by input search in WebGL? - javascript

I'm going to create a 3D Earth with a search input. Could someone guide me on how to write code that finds a point (an exact place) from the search input, using WebGL?

I think your question is really vague, but I can imagine that what you want to do is rotate your 3D Earth so the point you queried for appears in the center of the view (or, what is the same thing, on the view axis of the camera).
To do it you need to:
1. Assign every landmark a set of spherical coordinates. Given that you are locating points on the surface of a sphere, you can forget about the radius and only assign an elevation and an azimuth to each point.
2. Write the code for the user to input the point of interest. Say "Rome".
3. Look this point up in a JavaScript array and recover its elevation and azimuth values (a lookup sketch follows below).
4. Apply the corresponding rotations to your Model-View matrix. Assuming you are using glMatrix (version 2+, which takes the output matrix as the first argument), you should have something like this:

var M = mat4.create();
var Y_axis = [0, 1, 0];
var X_axis = [1, 0, 0];
mat4.rotate(M, M, azimuth, Y_axis);
mat4.rotate(M, M, elevation, X_axis);

The point of interest should now be displayed.
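For the lookup step, here is a minimal sketch of what the landmark table and rotation code could look like (the place names and angle values are purely illustrative, and the glMatrix 2+ API is assumed):

// Hypothetical landmark table: azimuth and elevation in radians.
// These values are made up for illustration.
var landmarks = {
  rome:   { azimuth: 0.218, elevation: 0.731 },
  tokyo:  { azimuth: 2.439, elevation: 0.623 },
  sydney: { azimuth: 2.640, elevation: -0.591 }
};

function modelViewFor(placeName) {
  var place = landmarks[placeName.toLowerCase()];
  if (!place) return null; // unknown landmark

  // Rotate the globe so the landmark lands on the camera's view axis.
  var M = mat4.create(); // identity
  mat4.rotate(M, M, place.azimuth, [0, 1, 0]);   // azimuth around Y
  mat4.rotate(M, M, place.elevation, [1, 0, 0]); // elevation around X
  return M;
}

Whether the angles need to be negated depends on how your camera and globe are oriented, so treat the rotation order and signs as a starting point rather than a recipe.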

Related

Map (vrm) animated humanoid model based on skeleton coordinates in three.js

I'm really new to three.js and animation in general, and currently pretty confused by concepts like what rotation angles are, what exactly a VRM is and how it interacts with three.js, what humanoid animation is, and so on, but I will try to be as explicit as I can about my question below.
So I have a sequence of frames, where each frame has a set of coordinates (x, y, z; imagine x goes from left to right on your screen, y from top to bottom, and z comes out of the screen) for human joints (e.g. left foot, right foot, left shoulder, etc.). I would like a 3D animated model to move based on the provided coordinates.
From what I have seen people do so far (e.g. the VRM motion-capture demo using pixiv three-vrm), it seems they modify the rotation (z) of the humanoid bone node (returned by getBoneNode) in order to map the human action onto the animated model.
My questions are:
The author of the above link could get away with computing only the rotation around the z-axis, since the input is a 2D video; but in my case the input is 3D coordinates, so how can I calculate the rotation values? From the documentation on Object3D in three.js, it looks like the rotations are Euler angles.
i. But how can one calculate these Euler angles given, e.g., the coordinate of the left shoulder?
ii. And the angles of which humanoid body/bone parts do I need to do this calculation for? E.g. does it even make sense to talk about the rotation of LeftShoulder, or of the nose?
iii. This is probably silly, but just thinking out loud here: why can't I just supply the xyz coordinate value as the position attribute of these humanoid bone nodes? E.g. something like:
currentVrm.humanoid.getBoneNode(THREE.VRMSchema.HumanoidBoneName.Neck).position.set(10, -2.5, 1);
Would this not get the animated model moving the same way as the person in the frames with the coordinates provided?
What exactly does a humanoid bone node look like, and how is it represented? The three.js docs only say it's an Object3D; it can't just be a vector, right? From my limited understanding of Euler angles, it doesn't make complete sense to have all three Euler angles for a vector (since it can't rotate like a cylinder). The reason I'm asking is that I'm confused about which angles need to be calculated, and how, for each humanoid bone node. E.g. if I have leftShoulder = (3, 11.2, -8.72), do I just calculate its angle to each of the xyz axes and supply those angles to the rotation attributes of the bone node?
I can't tell you much about three.js, but I can tell you something about VRM.
Basically you have a bone hierarchy. That is root - hips - spine - chest - neck, etc.;
from the chest you have left/right_shoulder - l/r_upper_arm - l/r_lower_arm - l/r_hand, etc., and from the hips you have the legs and feet.
Every bone has 3 position coordinates (X, Y, Z) and a quaternion (X, Y, Z, W). This means that if you want to find the position of some bone in the world coordinate system, you have to go through the whole hierarchy (starting from the root), applying quaternions and adding positions.
For example, if I want to find the 'neck' bone position I have to:
take the 'root' coordinates and apply the 'root' quaternion;
take the 'hips' position, apply the 'hips' quaternion, and add the resulting coordinates to the 'root' coordinates;
take the 'spine' coordinates, apply the 'spine' quaternion, and add the resulting coordinates to the 'hips' coordinates;
take the 'chest' coordinates, apply the 'chest' quaternion, and add the resulting coordinates to the 'spine' coordinates;
take the 'neck' coordinates, apply the 'neck' quaternion, and add the resulting coordinates to the 'chest' coordinates.
Also, 'applying a quaternion' means that you also keep the previous quaternions in mind (you do that by multiplication); that is, the resulting quaternion for the neck would be:
q_neck_res = q_neck · q_chest · q_spine · q_hips · q_root
There is a standard procedure to convert between Euler angles and quaternions if needed.
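To make the traversal concrete, here is a minimal sketch using three.js math classes (the bone array, field names, and ordering are assumptions; this is not tied to any particular VRM runtime):

// bones = [root, hips, spine, chest, neck], each with a local
// .position (THREE.Vector3) and .quaternion (THREE.Quaternion).
function worldPositionOfLastBone(bones) {
  var worldPos = new THREE.Vector3(0, 0, 0);
  var worldQuat = new THREE.Quaternion(); // identity

  bones.forEach(function (bone) {
    // Rotate this bone's local offset by everything accumulated so far,
    // then add it to the running world position.
    worldPos.add(bone.position.clone().applyQuaternion(worldQuat));
    // Accumulate rotations: q_result = q_parent_chain * q_local.
    worldQuat.multiply(bone.quaternion);
  });

  return worldPos;
}

In practice, if the bones are ordinary three.js Object3D nodes, bone.getWorldPosition(new THREE.Vector3()) performs this hierarchy walk for you, and new THREE.Euler().setFromQuaternion(q) covers the quaternion-to-Euler conversion mentioned above.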

What is the Ideal way to create a Triangle-Coordinates in Google Map API with two location points consisting of latitude & longitude

I have a scenario in my JavaScript application where I have the coordinates of a starting point, consisting of latitude and longitude, and similarly an ending point with its respective coordinates.
Now I need to search for a location (which again yields a set of coordinates) and find out whether this newly entered location lies between the previously mentioned starting and ending points. The location does not need to fall exactly on the path between the start and end point: even if it lies within, say, 2-3 km of the derived path, it should count as a match.
I believe we can create a triangle by providing three coordinates, i.e. the start point, the end point, and a third point. Once the triangle is formed, we can use the google.maps.geometry.poly.containsLocation method to find out whether the searched location is inside it.
So my question is: how can we get a third point such that the triangle covers locations within 2-3 km of the path from the start point to the end point?
Or is there an alternate approach for my use case?
Use Google Maps' geometry library, specifically the isLocationOnEdge function.
Here's an example. The tolerance is in degrees, where a value of 0.001 corresponds to roughly 100 m (the example below uses 0.00001, i.e. about 1 m):
var isLocationNear = google.maps.geometry.poly.isLocationOnEdge(
  yourLatLng,
  new google.maps.Polyline({
    path: [
      new google.maps.LatLng(point1Lat, point1Long),
      new google.maps.LatLng(point2Lat, point2Long),
    ]
  }),
  0.00001);
Please note that the following answer assumes plane geometry, where you should really be using spherical geometry. It will still be fine for less accurate purposes (like approximate distances, etc.).
This seems more of a geometry question than a programming question. A triangle like the one you mentioned won't be able to cover a straight-line path in a uniform way. The situation is better thought of as a point-to-line distance problem (the original answer referred to a diagram of a point C and a line segment AB, not reproduced here).
You can simply find the distance between point C and line AB and check whether it's below 2.5 km (I've omitted all the units and conversions for simplicity).
Please note that you will also need to convert the distances from radians to the units you need, using the haversine formula etc., which is not a trivial task (https://www.movable-type.co.uk/scripts/latlong.html).
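For reference, here is a hedged sketch of that computation in spherical geometry: the cross-track distance of point C from the great-circle path A-B, following the formulas on the Movable Type page linked above (the lat/lng field names are assumptions; spherical Earth with R = 6371 km):

// All inputs in degrees; returns the distance from C to the
// great-circle path A -> B, in kilometers.
function crossTrackDistanceKm(A, B, C) {
  var R = 6371; // mean Earth radius, km
  function toRad(d) { return d * Math.PI / 180; }

  // Angular distance (haversine), in radians.
  function angular(p, q) {
    var dPhi = toRad(q.lat - p.lat), dLam = toRad(q.lng - p.lng);
    var a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2) +
            Math.cos(toRad(p.lat)) * Math.cos(toRad(q.lat)) *
            Math.sin(dLam / 2) * Math.sin(dLam / 2);
    return 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
  }

  // Initial bearing from p to q, in radians.
  function bearing(p, q) {
    var phi1 = toRad(p.lat), phi2 = toRad(q.lat), dLam = toRad(q.lng - p.lng);
    return Math.atan2(Math.sin(dLam) * Math.cos(phi2),
                      Math.cos(phi1) * Math.sin(phi2) -
                      Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLam));
  }

  var d13 = angular(A, C);
  return Math.abs(Math.asin(Math.sin(d13) *
                            Math.sin(bearing(A, C) - bearing(A, B)))) * R;
}

// A searched location c then "matches" when
// crossTrackDistanceKm(start, end, c) <= 2.5.

Note that the cross-track distance measures against the infinite great circle, so you would still need to check that C falls between A and B along the path (the along-track formulas on the same page handle that).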

CesiumJS - Distance Between Two Points

My goal is to calculate the distance between two Cesium entities in kilometers. As a bonus, I eventually want to be able to measure their distance in pixels.
I have a bunch of placemarks in KML format like this:
<Placemark>
<name>Place</name>
<Point><coordinates>48.655,-31.175</coordinates></Point>
<styleUrl>#style</styleUrl>
<ExtendedData>
...
</ExtendedData>
</Placemark>
I am importing them into Cesium like so:
viewer.dataSources.add(Cesium.KmlDataSource.load('./data.kml', options)).then(function(dataSource) {
  var entities = dataSource.entities._entities._array;
  // ...
});
I have attempted to create new Cartesian3 objects of entities I care about, but the x, y, and z values I get from the entity object are in the hundreds of thousands. The latitude and longitude from my KML are nowhere to be found in the entity objects.
If I do create Cartesian3 objects and compute the distance like so:
var distance = Cesium.Cartesian3.distance(firstPoint, secondPoint);
it returns numbers in the millions. I have evaluated the distance between multiple points this way, and when I compare the returned values to an online calculator that gives the actual distance in kilometers, the differences are not even consistent (some of the distances returned by Cesium are 900 times the actual distance, others 700 times).
I hope that is enough to receive help. I am not sure where to start fixing this. Any help would be appreciated. Thank you.
A couple of things are going on here. The Cesium.Cartesian3 class holds meters, so it is correct to divide by 1000 to get km, but that's not the full story. Cartesian3s are positions on a 3D globe, and if you compute a simple Cartesian.distance between two of them on opposite sides of that globe, you'll get the Cartesian linear distance, as in the length of a line that cuts through the middle of the globe to get from one to the other, rather than traveling around the surface of the globe to get to the far side.
To get the distance you actually want -- the distance of a line that follows the curvature of the surface of the Earth -- check out the answer to Cesium JS Line Length on GIS SE.
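For illustration, here is a minimal sketch of that surface-distance computation with Cesium's EllipsoidGeodesic (the entity variables and the use of the current clock time are assumptions about your setup):

// Positions come back as Cartesian3, in meters.
var time = viewer.clock.currentTime;
var p1 = firstEntity.position.getValue(time);
var p2 = secondEntity.position.getValue(time);

// EllipsoidGeodesic takes Cartographic (lon/lat/height) endpoints.
var geodesic = new Cesium.EllipsoidGeodesic(
    Cesium.Cartographic.fromCartesian(p1),
    Cesium.Cartographic.fromCartesian(p2));

var distanceKm = geodesic.surfaceDistance / 1000; // meters -> km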

Geolocation, Accuracy and Affine Transformation: What is causing my inaccurate conversion from a lat/long location into a point on my image

I've implemented some code to treat an image of a relatively small location like a plane, for converting between locations on the image I have stored and incoming lat/long information.
Using the formulas provided at https://msdn.microsoft.com/en-us/library/jj635757(v=vs.85).aspx, I wrote these lines of code, among others:
var vector = math.matrix(
  [[x1],
   [y1],
   [x2],
   [y2]]);
var matrix = math.matrix(
  [[lat1,   long1, 1, 0],
   [-long1, lat1,  0, 1],
   [lat2,   long2, 1, 0],
   [-long2, lat2,  0, 1]]);
var solution = math.multiply(math.inv(matrix), vector);
The solution vector is implicitly converted into conversiondata, as I put it into and take it back out of my database:
a = parseFloat(conversiondata['A']);
b = parseFloat(conversiondata['B']);
c = parseFloat(conversiondata['C']);
d = parseFloat(conversiondata['D']);
var long = position.coords.longitude;
var lat = position.coords.latitude;
var x = a * lat + b * long + c;
var y = b * lat - a * long + d;
The values x1, x2, y1, y2 are supplied by getting user click data.
The values lat1, lat2, long1, long2 are supplied by the user in response to two clicks on the map image.
When putting x,y back onto the map, it's not quite in the right position; the position seems to be almost on the opposite side of the line defined by (x1,y1) and (x2,y2). I'm trying to tell what the reason for the inaccuracy is. (I am, however, assuming for the time being that the apparent reflection is a coincidence.)
If someone could help me narrow down what could be going wrong, here are the things I've considered (for reference, the map doesn't span even a mile in any direction):
The affine transformation simply doesn't work - but according to the link provided it includes scaling, so that shouldn't be the cause of the problem.
There is a problem with my setting of variables - I've been looking at my code too long to see it if there is.
I am losing too much accuracy moving the data to MySQL as a float or to PHP as a string.
I am not giving accurate enough information from the click data / lat-long input - though I zoomed in significantly when clicking on the map and getting the lat/long from Google Maps.
SVG isn't accurate enough - though looking at the XML data, it keeps the decimals.
The area that I'm working with is too big to simplify by assuming that the local map is a flat plane.
Any help is appreciated, thanks for reading this far.
For further reference, I put the lat/long data that JavaScript gave me into Google Maps, and I'm comparing accuracy to that rather than to my actual location.
Additional reference: I found "landmarks" on the east and west edges of my image and calculated the longitude difference to be 0.02695, with the width of the image being at least twice the height.
Sample values from a full run-through:
Reference Points
Point 1 (x,y) = (619,564)
Point 1 (lat,long) = (X.099546,-Y.465179)
Point 2 (x,y) = (1181,190)
Point 2 (lat,long) = (X.10365341,-Y.457014)
Geolocation
Predicted coordinate (x,y) = (975,262)
Given coordinate(lat,long) = (X.102851,-Y.459996)
Real Blip (x,y) = (1022.7498707999475,351.02335709985346)
Real blip (approximate lat,long) = (X.101964, -Y.459340)
(Real blip lat long is approximate as it is in a body of water with no good landmarks)
For safety's sake I've taken the digits before the decimal point out of the lat/long coordinates, but I can confirm that all the X's are equal and all the Y's are equal.
Additionally, I played with the lat/long values in Chrome's developer tools; it seems like the axes are rotated approximately 30 degrees from where they should be.
After sufficient poking around, I figured out that I had ordered lat and long incorrectly. On my map, which has not been rotated from N at the top, the following code brings me within just a few feet, more than explainable by the lack of precision that comes from relying on user input and the pixel grid.
var matrix = math.matrix(
  [[long1, lat1,  1, 0],
   [-lat1, long1, 0, 1],
   [long2, lat2,  1, 0],
   [-lat2, long2, 0, 1]]);
And
var x = a * long + b * lat + c;
var y = b * long - a * lat + d;
For anyone else interested in pursuing this as a potential way to simplify the math in their app:
the drift that occurred was less than 40 feet over a map with a diagonal of 8,000 feet and a distance between reference points of around 3,000 feet. That puts the drift at a little over 1% of the distance between the reference points, and it includes the effect of human error.
This error should decrease as you work on smaller maps and increase as you work on bigger maps.
I tested it again on a map with a ~90 degree rotation and the code held up
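Putting the corrected pieces together, here is an end-to-end sketch with math.js (untested; variable names follow the question, and solution.get() is used to pull the fitted coefficients out of the result matrix):

// Fit: solve for [a, b, c, d] from two reference points, with
// pixel positions (x1,y1)/(x2,y2) and geographic positions
// (lat1,long1)/(lat2,long2).
var vector = math.matrix([[x1], [y1], [x2], [y2]]);
var matrix = math.matrix(
  [[long1,  lat1, 1, 0],
   [-lat1, long1, 0, 1],
   [long2,  lat2, 1, 0],
   [-lat2, long2, 0, 1]]);
var solution = math.multiply(math.inv(matrix), vector);

var a = solution.get([0, 0]),
    b = solution.get([1, 0]),
    c = solution.get([2, 0]),
    d = solution.get([3, 0]);

// Apply: map any later geolocation fix onto the image.
function toPixel(lat, long) {
  return { x: a * long + b * lat + c,
           y: b * long - a * lat + d };
}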

Three.JS: Get position of rotated object

In Three.JS, I am capable of rotating an object about its origin. If I do this with a line, for instance, the line rotates, but the positions of its vertices are not updated to their new locations. Is there some way to apply the rotation matrix to the positions of the vertices to find their new positions? Say I rotate a line with points at (0,0,0) and (0,100,100) by 45° on the x-axis, 20° on the y-axis, and 100° on the z-axis. How would I go about finding the actual positions of the vertices with respect to the entire scene?
Thanks
Yes, 'entire scene' means world position.
THREE.Vector3() has an applyMatrix4() method, so you can do the same thing the shader does. To project a vertex into world space:
yourPoint.applyMatrix4(yourObject.matrixWorld);
To transform that into camera space, apply the inverse of the camera's world matrix next (the inverse, not camera.matrixWorld itself, which would apply the camera's transform rather than undo it):
yourPoint.applyMatrix4(camera.matrixWorldInverse);
To get an actual screen position in the -1 to 1 range, apply the projection matrix:
yourPoint.applyMatrix4(camera.projectionMatrix);
You would access your point like this:
var yourPoint = yourObject.geometry.vertices[0]; // first vertex
Also, rather than doing this three times, you can combine the matrices into one. Something along these lines (remember that multiplyMatrices(a, b) computes a * b, so the model matrix is applied first):
var neededPVMmatrix = new THREE.Matrix4().multiplyMatrices(camera.matrixWorldInverse, yourObject.matrixWorld);
neededPVMmatrix.multiplyMatrices(camera.projectionMatrix, neededPVMmatrix);
If you need a good tutorial on what this does under the hood, I recommend the write-up where Alteredq posted everything there is to know about three.js matrices.
Edit:
One thing to note: if you want just the rotation, not the translation, you need to use the upper 3x3 portion (the rotation matrix) of the model's world matrix. This might be slightly more complicated. I forget exactly what three.js gives you, but I think the normalMatrix would do the trick; or you can convert your THREE.Vector3() to a THREE.Vector4() and set .w to 0, which prevents any translation from being applied.
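For example, one way to apply only the rotational part is to extract the upper 3x3 into a fresh matrix first (a sketch; the variable names are placeholders):

// extractRotation copies just the rotation part of the source matrix,
// so the subsequent applyMatrix4 ignores the object's translation.
var rotationOnly = new THREE.Matrix4().extractRotation(yourObject.matrixWorld);
yourDirection.applyMatrix4(rotationOnly);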
Edit 2:
If you want to move the line point in your example, instead of applying it to the particle, apply it to:
var yourVertexWorldPosition = geo.vertices[1].clone(); // your second line point, whatever you set it to in your init function
yourVertexWorldPosition.applyMatrix4(line.matrixWorld); // transforms the vector into world space using the line's world matrix
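Putting it all together for the line in the original question, here is a sketch against the legacy Geometry API used above (untested; assumes an existing scene):

// Build the line from the question's two points.
var geometry = new THREE.Geometry();
geometry.vertices.push(new THREE.Vector3(0, 0, 0),
                       new THREE.Vector3(0, 100, 100));
var line = new THREE.Line(geometry, new THREE.LineBasicMaterial());
scene.add(line);

// Rotate 45° on x, 20° on y, 100° on z.
line.rotation.set(THREE.Math.degToRad(45),
                  THREE.Math.degToRad(20),
                  THREE.Math.degToRad(100));
line.updateMatrixWorld(true); // make sure matrixWorld is current

// World-space position of the second vertex; clone so the
// geometry itself is left untouched.
var worldPos = line.geometry.vertices[1].clone()
                   .applyMatrix4(line.matrixWorld);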
