I'm adding a plane entity to Cesium as follows:
let position = Cesium.Cartesian3.fromDegrees(long, lat, alt);
let planeEntity = this.viewer.entities.add({
  position: position,
  model: {
    uri: './assets/cesium/Cesium_Air.glb',
    minimumPixelSize: 64
  }
});
I receive plane locations in real time; each time a new location arrives I do:
planeEntity.position = Cesium.Cartesian3.fromDegrees(long, lat, alt);
and move the plane to that location.
I want to rotate the plane's nose to point the right way (if the plane flies upward on the map, the nose can't stay pointing to the left).
How can I calculate the bearing from two positions (the current position and the next position)?
I found a solution here: Calculate bearing between 2 points with javascript
// Converts from degrees to radians.
toRadians(degrees) {
  return degrees * Math.PI / 180;
}

// Converts from radians to degrees.
toDegrees(radians) {
  return radians * 180 / Math.PI;
}

bearing(startLat, startLng, destLat, destLng) {
  startLat = this.toRadians(startLat);
  startLng = this.toRadians(startLng);
  destLat = this.toRadians(destLat);
  destLng = this.toRadians(destLng);

  let y = Math.sin(destLng - startLng) * Math.cos(destLat);
  let x = Math.cos(startLat) * Math.sin(destLat) -
    Math.sin(startLat) * Math.cos(destLat) * Math.cos(destLng - startLng);
  let brng = Math.atan2(y, x);
  let brngDgr = this.toDegrees(brng);

  return (brngDgr + 360) % 360;
}
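As a quick sanity check, a destination due north of the start should give a bearing of about 0° (called from inside the same class, since these are methods):

// Due north: same longitude, higher latitude
console.log(this.bearing(45.0, -122.7, 45.5, -122.7)); // ≈ 0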
A Cesium way (ES6), based on larry ckey's answer:
import * as Cesium from 'cesium';

const calculateBearing = (startPoint, endPoint) => {
  const start = Cesium.Cartographic.fromCartesian(startPoint);
  const end = Cesium.Cartographic.fromCartesian(endPoint);
  const y = Math.sin(end.longitude - start.longitude) * Math.cos(end.latitude);
  const x =
    Math.cos(start.latitude) * Math.sin(end.latitude) -
    Math.sin(start.latitude) * Math.cos(end.latitude) *
    Math.cos(end.longitude - start.longitude);
  const bearing = Math.atan2(y, x);
  return Cesium.Math.toDegrees(bearing);
};
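To actually rotate the model, you can feed this bearing into the entity's orientation as a heading each time a new position arrives. A minimal sketch, assuming the calculateBearing helper above (depending on how the glTF model was authored, you may also need a fixed heading offset):

const updateOrientation = (entity, startPoint, endPoint) => {
  // Heading is measured from north; calculateBearing returns degrees
  const heading = Cesium.Math.toRadians(calculateBearing(startPoint, endPoint));
  const hpr = new Cesium.HeadingPitchRoll(heading, 0, 0);
  entity.position = endPoint;
  entity.orientation = Cesium.Transforms.headingPitchRollQuaternion(endPoint, hpr);
};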
I would like to generate coordinates within a given radius of given coordinates.
I wrote a little script that generates valid values most of the time:
const coordinate = [Math.random() * 180 - 90, Math.random() * 360 - 180]; // [lat, long]
const angleRadians = Math.PI / 4; // "random" angle in radians
const totalDistance = 100; // max distance in km

// First calculate the new latitude
const kmPerDegreeLatitude = 111; // in km/°
const distanceLatitude = totalDistance / kmPerDegreeLatitude; // in °
const offsetLatitude = Math.sin(angleRadians) * distanceLatitude;
const newLatitude = coordinate[0] + offsetLatitude;

// This looks wrong to me, but reduces the likelihood of errors
const remainingDistance = Math.sqrt(
  Math.pow(totalDistance, 2) -
    Math.pow(offsetLatitude * kmPerDegreeLatitude, 2)
);

// Now calculate the longitude
const kmPerDegreeLongitude =
  Math.abs(Math.cos(degreesToRadians(newLatitude))) * 111; // in km/°
const distanceLongitude = remainingDistance / kmPerDegreeLongitude; // in °
const offsetLongitude = Math.cos(angleRadians) * distanceLongitude;
const newLongitude = coordinate[1] + offsetLongitude;

// Done
const newCoordinate: [latitude: number, longitude: number] = [
  newLatitude,
  newLongitude,
];
But when I use the following code to check the distance I sometimes end up above the allowed distance:
function haversine(
  latitude1: number,
  longitude1: number,
  latitude2: number,
  longitude2: number,
) {
  const EQUATORIAL_EARTH_RADIUS = 6378.137;
  const distanceLatitude = degreesToRadians(latitude2 - latitude1);
  const distanceLongitude = degreesToRadians(longitude2 - longitude1);
  const a =
    Math.sin(distanceLatitude / 2) * Math.sin(distanceLatitude / 2) +
    Math.cos(degreesToRadians(latitude1)) *
      Math.cos(degreesToRadians(latitude2)) *
      Math.sin(distanceLongitude / 2) *
      Math.sin(distanceLongitude / 2);
  const distance =
    EQUATORIAL_EARTH_RADIUS * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
  return distance;
}
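For instance, the check that occasionally fails looks like this (reusing coordinate, newCoordinate and totalDistance from the snippet above):

const distance = haversine(
  coordinate[0], coordinate[1],
  newCoordinate[0], newCoordinate[1],
);
console.log(distance <= totalDistance); // occasionally false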
Any idea where I messed up? I assume it's in the coordinate generation, but it might be in the verification step.
Or are there maybe far easier ways to generate the nearby coordinates?
Please note that I'm not able to use existing libraries for this, because the generated coordinates need to be reproducible; I omitted the related code for simplicity.
Assuming that you can locally approximate the surface of the earth as a plane, the following function gives you the latitude and longitude of a point P1, given the latitude and longitude of a point P0, the linear distance |P1-P0|, and the angle formed by P1-P0 with the local parallel. The local earth radius R is also required as input.
function incrementCoordinates(long0, lat0, dist, angle, R) {
  // Distance component along the parallel (east-west)
  const dist_x = dist * Math.cos(angle / 180 * Math.PI);
  // Distance component along the meridian (north-south)
  const dist_y = dist * Math.sin(angle / 180 * Math.PI);
  // New longitude: a degree of longitude spans R * cos(latitude),
  // so scale by the radius of the local parallel
  const long1 = long0 + dist_x / (R * Math.cos(lat0 / 180 * Math.PI)) / Math.PI * 180;
  // New latitude
  const lat1 = lat0 + dist_y / R / Math.PI * 180;
  return [long1, lat1];
}
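For example, to move 100 km at 45° from the local parallel, starting at longitude 10° and latitude 50°, with a mean earth radius of about 6371 km:

const [long1, lat1] = incrementCoordinates(10, 50, 100, 45, 6371);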
Well, I tried to overcomplicate things, and the floating-point imprecision in JS sometimes stacked up to 20% across repeated calls.
I have now simplified the calculation to assume we live on a perfect sphere:
const angleRadians = randomAngle(); // in radians
const errorCorrection = 0.995; // avoid float issues
const distanceInKm = randomDistance(100) * errorCorrection; // in km
const kmPerDegree = 111; // km per degree of a great circle
const distanceInDegree = distanceInKm / kmPerDegree; // in °
const newCoordinate: [latitude: number, longitude: number] = [
  coordinate[0] + Math.sin(angleRadians) * distanceInDegree,
  coordinate[1] + Math.cos(angleRadians) * distanceInDegree,
];

// Box latitude [-90°, 90°]
newCoordinate[0] = newCoordinate[0] % 180;
if (newCoordinate[0] < -90 || newCoordinate[0] > 90) {
  newCoordinate[0] = Math.sign(newCoordinate[0]) * 180 - newCoordinate[0];
  newCoordinate[1] += 180;
}
// Box longitude [-180°, 180°]
newCoordinate[1] = (((newCoordinate[1] % 360) + 540) % 360) - 180;
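The snippets above reference a few helpers I omitted; hypothetical implementations might look like this (the real versions are seeded, since the coordinates need to be reproducible):

// Hypothetical helpers assumed by the snippets above
function degreesToRadians(degrees) {
  return (degrees * Math.PI) / 180;
}

function randomAngle() {
  return Math.random() * 2 * Math.PI; // in radians
}

function randomDistance(max) {
  return Math.random() * max; // in km
}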
I'm building a simple app that places a marker on your screen at the top of certain landmarks in the real world; the markers are going to be overlaid on the camera's view.
I have the latitude/longitude/altitude for both the viewing device and the world landmarks, and I convert those to ECEF coordinates. But I am having trouble with the 3D projection math. The point always seems to get placed in the middle of the screen... maybe my scaling is wrong somewhere, so it looks like it's hardly moving from the center?
Viewing device GPS coordinates:
GPS:
lat: 45.492132
lon: -122.721062
alt: 124 (meters)
ECEF:
x: -2421034.078421273
y: -3768100.560012433
z: 4525944.676268726
Landmark GPS coordinates:
GPS:
lat: 45.499278
lon: -122.708417
alt: 479 (meters)
ECEF:
x: -2420030.781624382
y: -3768367.5284123267
z: 4526754.604333807
I tried following the math from here to build a function to get me screen coordinates from 3D point coordinates.
When I put those ECEF points into my projection function, with a viewport of 1440x335 I get: x: 721, y: 167
Here is my function:
function projectionCoordinates(origin, destination) {
  const relativeX = destination.x - origin.x;
  const relativeY = destination.y - origin.y;
  const relativeZ = destination.z - origin.z;
  const xPerspective = relativeX / relativeZ;
  const yPerspective = relativeY / relativeZ;
  const xNormalized = (xPerspective + viewPort.width / 2) / viewPort.width;
  const yNormalized = (yPerspective + viewPort.height / 2) / viewPort.height;
  const xRaster = Math.floor(xNormalized * viewPort.width);
  const yRaster = Math.floor((1 - yNormalized) * viewPort.height);
  return { x: xRaster, y: yRaster };
}
I believe the point should be placed much higher on the screen. That article I linked mentions 3x4 matrices which I couldn't follow along with (not sure how to build the 3x4 matrices from the 3D points). Maybe those are important, especially since I will eventually have to take the device's tilt into consideration (looking up or down with phone).
If it's needed, here is my function to convert latitude/longitude/altitude coordinates to ECEF (copy/pasted from another SO answer):
function llaToCartesion({ lat, lon, alt }) {
  const cosLat = Math.cos((lat * Math.PI) / 180.0);
  const sinLat = Math.sin((lat * Math.PI) / 180.0);
  const cosLon = Math.cos((lon * Math.PI) / 180.0);
  const sinLon = Math.sin((lon * Math.PI) / 180.0);
  const rad = 6378137.0;
  const f = 1.0 / 298.257224;
  const C =
    1.0 / Math.sqrt(cosLat * cosLat + (1 - f) * (1 - f) * sinLat * sinLat);
  const S = (1.0 - f) * (1.0 - f) * C;
  const h = alt;
  const x = (rad * C + h) * cosLat * cosLon;
  const y = (rad * C + h) * cosLat * sinLon;
  const z = (rad * S + h) * sinLat;
  return { x, y, z };
}
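As a sanity check, feeding the viewing device's GPS coordinates from above into this function should reproduce the listed ECEF values:

console.log(llaToCartesion({ lat: 45.492132, lon: -122.721062, alt: 124 }));
// { x: -2421034.078..., y: -3768100.560..., z: 4525944.676... }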
Your normalise and raster steps are cancelling out the view port scaling you need. Multiplying out this:
const xNormalized = (xPerspective + viewPort.width / 2) / viewPort.width;
gives you:
const xNormalized = xPerspective / viewPort.width + 0.5;
And applying this line:
const xRaster = Math.floor(xNormalized * viewPort.width);
gives you:
const xRaster = Math.floor(xPerspective + viewPort.width * 0.5);
Your calculation of xPerspective is correct (but see the comment below); however, looking at your numbers, the value is going to be around 1, which is why the point lands near the centre of the screen.
The correct way to do this is:
const xRaster = Math.floor(xPerspective * viewPort.width /2 + viewPort.width /2);
You can simplify that. The idea is that xPerspective is the tan of the angle that xRelative subtends at the eye. Multiplying the tan by half the width of the screen gives you the x distance from the centre of the screen. You then add the x position of the centre of the screen to get the screen coordinate.
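Applied to both axes, a corrected version of your function might look like this (the field of view is still left implicit, as in your original):

function projectionCoordinates(origin, destination) {
  const relativeX = destination.x - origin.x;
  const relativeY = destination.y - origin.y;
  const relativeZ = destination.z - origin.z;
  // tan of the angles subtended at the eye
  const xPerspective = relativeX / relativeZ;
  const yPerspective = relativeY / relativeZ;
  // Scale by half the screen size, then shift to the screen centre
  const xRaster = Math.floor(xPerspective * viewPort.width / 2 + viewPort.width / 2);
  const yRaster = Math.floor(viewPort.height / 2 - yPerspective * viewPort.height / 2);
  return { x: xRaster, y: yRaster };
}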
Your maths uses an implicit camera view which is aligned with the x, y, z axes. To move the view around you need to calculate xRelative etc relative to the camera before doing the perspective divide step (division by zRelative). An easy way to do this is to represent your camera as 3 vectors which are the X,Y,Z of the camera view. You then calculate the projection of the your 3D point on your camera by taking the dot product of the vector [xRelative, yRelative, zRelative] with each of X,Y and Z. This gives you a new [xCamera, yCamera, zCamera] which will change as you move your camera. You can also do this with matrices.
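A minimal sketch of that dot-product step (the basis vectors X, Y, Z here are assumptions for the example; each is a unit vector for the camera's right, up and forward directions):

function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function worldToCamera(relative, X, Y, Z) {
  return {
    xCamera: dot(relative, X),
    yCamera: dot(relative, Y),
    zCamera: dot(relative, Z) // divide xCamera and yCamera by this in the perspective step
  };
}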
A typical random walk does not care about direction changes: each iteration generates a new direction, so if you animate a point along the walk it mostly jumps around. The goal is a smoother curve that depends on the previously calculated points.
How can a random walk function be adjusted to produce smoother directional changes?
My main idea is to have a method that generates a new point with x and y coordinates, but looks at the previous step and decreases the size of the next step (const radius) as the rotation (directional change) approaches 180°.
I am using D3.js to randomly take a new step in any x and y direction. At the end I'll get an array of all past steps, limited by the maximum number of steps. The radius gives an orientation for how long an average step should be on the x and y axes.
const history = [];
const steps = 10;
const radius = 1;
let point = {
  x: 0,
  y: 0,
  radians: null
};

for (let i = 0; i < steps; i++) {
  console.log(point);
  // Push a copy; pushing `point` itself would fill history with
  // references to the same object, which keeps being mutated
  history.push(Object.assign({}, point));
  const previousPoint = Object.assign({}, point);
  point.x += radius * d3.randomNormal(0, 1)();
  point.y += radius * d3.randomNormal(0, 1)();
  point.radians = Math.atan2(
    point.y - previousPoint.y,
    point.x - previousPoint.x
  );
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.8.0/d3.js"></script>
Instead of using a coordinate-based random walk, I decided to randomly generate a new angle (in radians) on each iteration. The new and previous angles can then be compared to decide what velocity the new point gets: the velocity is set from the minimum difference between the two angles. Afterwards, a simple sine and cosine calculation generates the coordinates of the new point.
At least I've achieved my final goal: https://beta.observablehq.com/#nextlevelshit/gentlemans-random-walk-part-3
const steps = 10;
const stepSize = 10;
let point = {
  x: 0,
  y: 0,
  radians: randomRadians(),
  velocity: 0
};

for (let i = 0; i < steps; i++) {
  console.log(point);
  const radians = randomRadians();
  const velocity = 1 - minimumDifference(radians, point.radians) / Math.PI;
  point = {
    // Coordinates calculated from the random angle and the velocity;
    // `radians` is already in radians, so it is used directly
    x: Math.sin(radians) * stepSize * velocity + point.x,
    y: Math.cos(radians) * stepSize * velocity + point.y,
    radians: radians, // Randomly generated angle in radians
    velocity: velocity // Velocity relative to the previous point
  };
}

function randomRadians() {
  return randomFloat(-Math.PI, Math.PI);
}

function randomFloat(min, max) {
  return Math.random() * (max - min) + min;
}

function minimumDifference(x, y) {
  return Math.min((2 * Math.PI) - Math.abs(x - y), Math.abs(x - y));
}
I have been trying for the past two days to convert a single fisheye image from a 360 degree camera into an equirectangular view in Node.js. On Stack Overflow the same question has been asked and answered in pseudocode, and I have been converting that pseudocode to Node.js and cleared some errors. Now the project runs without errors, but the output image is blank.
From that pseudocode I don't know the polar_w, polar_h, geo_w, geo_h, geo and polar values, so I gave them static values to show the output. Here is the link I followed to convert the pseudocode to Node.js:
How to convert spherical coordinates to equirectangular projection coordinates?
Here is the code I tried for converting the spherical image to an equirectangular view:
exports.sphereImage = (request, response) => {
  var Jimp = require('jimp');
  // Photo resolution
  var img_w_px = 1280;
  var img_h_px = 720;
  var polar_w = 1280;
  var polar_h = 720;
  var geo_w = 1280;
  var geo_h = 720;
  var img_h_deg = 70;
  var img_w_deg = 30;
  // Camera field-of-view angles
  var img_ha_deg = 70;
  var img_va_deg = 40;
  // Camera rotation angles
  var hcam_deg = 230;
  var vcam_deg = 60;
  // Camera rotation angles in radians
  var hcam_rad = hcam_deg / 180.0 * Math.PI;
  var vcam_rad = vcam_deg / 180.0 * Math.PI; // was "vcam_rad/180.0*Math.PI", a self-reference that yields NaN
  // Rotation around y-axis for vertical rotation of camera
  var rot_y = [
    [Math.cos(vcam_rad), 0, Math.sin(vcam_rad)],
    [0, 1, 0],
    [-Math.sin(vcam_rad), 0, Math.cos(vcam_rad)]
  ];
  // Rotation around z-axis for horizontal rotation of camera
  var rot_z = [
    [Math.cos(hcam_rad), -Math.sin(hcam_rad), 0],
    [Math.sin(hcam_rad), Math.cos(hcam_rad), 0],
    [0, 0, 1]
  ];
  Jimp.read('./public/images/4-18-2-42.jpg', (err, lenna) => {
    polar = new Jimp(img_w_px, img_h_px);
    geo = new Jimp(img_w_px, img_h_px);
    for (var i = 0; i < img_h_px; ++i) {
      for (var j = 0; j < img_w_px; ++j) {
        // var p = img.getPixelAt(i, j);
        var p = lenna.getPixelColor(j, i); // getPixelColor expects (x, y); i is the row
        // var p = getPixels(img, { x: i, y: j })
        // Calculate relative position to center in degrees
        var p_theta = (j - img_w_px / 2.0) / img_w_px * img_w_deg / 180.0 * Math.PI;
        var p_phi = -(i - img_h_px / 2.0) / img_h_px * img_h_deg / 180.0 * Math.PI;
        // Transform into cartesian coordinates
        var p_x = Math.cos(p_phi) * Math.cos(p_theta);
        var p_y = Math.cos(p_phi) * Math.sin(p_theta);
        var p_z = Math.sin(p_phi);
        var p0 = {p_x, p_y, p_z};
        // Apply rotation matrices (note, z-axis is the vertical one)
        // First vertically
        // BUG: this is not a matrix multiplication; rot_y[1][2][3] indexes a
        // single (undefined) number, so p1 and p2 end up as NaN
        var p1 = rot_y[1][2][3] * p0;
        var p2 = rot_z[1][2][3] * p1;
        // Transform back into spherical coordinates
        var theta = Math.atan2(p2[1], p2[0]);
        var phi = Math.asin(p2[2]);
        // Retrieve longitude, latitude
        var longitude = theta / Math.PI * 180.0;
        var latitude = phi / Math.PI * 180.0;
        // Now we can use longitude,latitude coordinates in many different
        // projections, such as:
        // Polar projection
        {
          var polar_x_px = (0.5 * Math.PI + phi) * 0.5 * Math.cos(theta) / Math.PI * 180.0 * polar_w;
          var polar_y_px = (0.5 * Math.PI + phi) * 0.5 * Math.sin(theta) / Math.PI * 180.0 * polar_h;
          polar.setPixelColor(p, polar_x_px, polar_y_px);
        }
        // Geographical (=equirectangular) projection
        {
          // BUG: missing the division by the 360/180 degree ranges, so these
          // pixel coordinates run far outside the image
          var geo_x_px = (longitude + 180) * geo_w;
          var geo_y_px = (latitude + 90) * geo_h;
          // geo.setPixel(geo_x_px, geo_y_px, p.getRGB());
          geo.setPixelColor(p, geo_x_px, geo_y_px);
        }
        // ...
      }
    }
    geo.write('./public/images/4-18-2-42-00001.jpg');
    polar.write('./public/images/4-18-2-42-00002.jpg');
  });
}
I also tried another approach: slicing the image into four parts and running car detection on each. The image is sliced with the image-to-slices module, and Jimp is used for reading and writing. But unfortunately the cars are not detected properly.
Here is the code I used for slicing the image:
exports.sliceImage = (request, response) => {
  var imageToSlices = require('image-to-slices');
  var lineXArray = [540, 540];
  var lineYArray = [960, 960];
  var source = './public/images/4-18-2-42.jpg'; // width: 300, height: 300
  imageToSlices(source, lineXArray, lineYArray, {
    saveToDir: './public/images/',
    clipperOptions: {
      canvas: require('canvas')
    }
  }, function() {
    console.log('the source image has been sliced into 9 sections!');
  });
}; // sliceImage
To detect cars in the image I used opencv4nodejs, but the cars are not detected properly. Here is the code I used to detect cars:
function runDetectCarExample(img = null) {
  if (img == null) {
    img = cv.imread('./public/images/section-1.jpg');
  } else {
    img = cv.imread(img);
  }
  const minConfidence = 0.06;
  const predictions = classifyImg(img).filter(res => res.confidence > minConfidence && res.className == 'car');
  const drawClassDetections = makeDrawClassDetections(predictions);
  const getRandomColor = () => new cv.Vec(Math.random() * 255, Math.random() * 255, 255);
  drawClassDetections(img, 'car', getRandomColor);
  cv.imwrite('./public/images/section-' + Math.random() + '.jpg', img);
  var name = "distanceFromCamera";
  var focalLen = 1.6;          // Focal length in mm
  var realObjHeight = 254;     // Real height of object in mm
  var cameraFrameHeight = 960; // Height of frame in px
  var imgHeight = 960;         // Image height in px
  var sensorHeight = 10;       // Sensor height in mm
  var R = 6378.1;              // Radius of the Earth in km
  var brng = 1.57;             // Bearing: 90 degrees converted to radians
  var hc = (200 / 100);        // Camera height in m
  predictions.forEach((data) => {
    // imgHeight = img.rows; // Image height in px
    // realObjHeight = data.rect.height;
    // data.rect[name] = ((focalLen) * (realObjHeight) * (cameraFrameHeight)) / ((imgHeight) * (sensorHeight));
    var dc = (((data.rect.width * focalLen) / img.cols) * 2.54) * 100; // meters
    console.log(Math.floor(parseInt(data.rect.width)));
    // var dc = ((Math.floor(parseInt(data.rect.width) * 0.264583) * focalLen) / img.cols); // mm
    var lat1 = 13.0002855; // 13.000356;
    var lon1 = 80.2046441; // 80.204632;
    // Gate 13.0002855,80.2046441
    // Brazil Polsec: -19.860566, -43.969436
    // var d = Math.sqrt((dc * dc) + (hc * hc));
    // d = (data.rect[name]) / 1000;
    var d = data.rect[name] = dc / 1000;
    lat1 = toRadians(lat1);
    lon1 = toRadians(lon1);
    brng = toRadians(90);
    // lat2 = Math.asin(Math.sin(lat1) * Math.cos(d / R) +
    //   Math.cos(lat1) * Math.sin(d / R) * Math.cos(brng));
    // lon2 = lon1 + Math.atan2(Math.sin(brng) * Math.sin(d / R) * Math.cos(lat1),
    //   Math.cos(d / R) - Math.sin(lat1) * Math.sin(lat2));
    var lat2 = Math.asin(Math.sin(lat1) * Math.cos(d / 6371) +
      Math.cos(lat1) * Math.sin(d / 6371) * Math.cos(brng));
    var lon2 = lon1 + Math.atan2(Math.sin(brng) * Math.sin(d / 6371) * Math.cos(lat1),
      Math.cos(d / 6371) - Math.sin(lat1) * Math.sin(lat2));
    lat2 = toDegrees(lat2);
    lon2 = toDegrees(lon2);
    data.rect['latLong'] = lat2 + ',' + lon2;
    // console.log(brng);
  });
  response.send(predictions);
  cv.imshowWait('img', img);
}
Here is the fisheye image which needs to be converted to equirectangular.
Any help is much appreciated.
You are asking how to convert a 360deg fish-eye projection to an equirectangular projection.
In order to do this, for every pixel on the fish-eye image you need to know where to place it on the output image.
Your input image is 1920x1080, let us assume you want to output it to an equirectangular projection of the same size.
The input circle mapping is defined as:
cx = 960; // center of circle on X-axis
cy = 540; // center of circle on Y-axis
radius = 540; // radius of circle
If you have a pixel at (x,y) in the input image, then we can calculate the spherical coordinates using:
dx = (x - cx) * 1.0 / radius;
dy = (y - cy) * 1.0 / radius;
theta_deg = atan2(dy, dx) / MATH_PI * 180;
phi_deg = acos(sqrt(dx*dx + dy*dy)) / MATH_PI * 180;
outputx = (theta_deg + 180) / 360.0 * outputwidth_px;
outputy = (phi_deg + 90) / 180.0 * outputheight_px;
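As a quick sanity check of those formulas, take the topmost pixel of the input circle:

// (x, y) = (960, 0)  ->  dx = 0, dy = -1
// theta_deg = atan2(-1, 0) / PI * 180 = -90
// phi_deg   = acos(sqrt(0 + 1)) / PI * 180 = 0
// outputx   = (-90 + 180) / 360 * 1920 = 480
// outputy   = (0 + 90) / 180 * 1080 = 540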
So there we translated (x,y) from the fish-eye image to the (outputx,outputy) in the equirectangular image. In order to not leave the implementation as the dreaded "exercise to the reader", here is some sample Javascript-code using the Jimp-library as used by the OP:
var jimp = require('jimp');

var inputfile = 'input.png';
jimp.read(inputfile, function(err, inputimage)
{
  var cx = 960;
  var cy = 540;
  var radius = 540;
  var inputwidth = 1920;
  var inputheight = 1080;
  var outputwidth = 1920;
  var outputheight = 1080;
  new jimp(outputwidth, outputheight, 0x000000ff, function(err, outputimage)
  {
    for(var y = 0; y < inputheight; ++y)
    {
      for(var x = 0; x < inputwidth; ++x)
      {
        var color = inputimage.getPixelColor(x, y);
        var dx = (x - cx) * 1.0 / radius;
        var dy = (y - cy) * 1.0 / radius;
        var theta_deg = Math.atan2(dy, dx) / Math.PI * 180;
        var phi_deg = Math.acos(Math.sqrt(dx*dx + dy*dy)) / Math.PI * 180;
        var outputx = Math.round((theta_deg + 180) / 360.0 * outputwidth);
        var outputy = Math.round((phi_deg + 90) / 180.0 * outputheight);
        outputimage.setPixelColor(color, outputx, outputy);
      }
    }
    outputimage.write('output.png');
  });
});
Note that you will still need to do blending of the pixel with neighbouring pixels (for the same reason as when you're resizing the image).
Additionally, in your case, you only have half of the sphere (you can't see the sun in the sky). So you would need to use var outputy = Math.round(phi_deg / 90.0 * outputheight). In order to keep the right aspect ratio, you might want to change the height to 540.
Also note that the given implementation may not be efficient at all, it's better to use the buffer directly.
Anyway, without blending I came up with the result as demonstrated here:
So in order to do blending, you could use the simplest method which is the nearest neighbour approach. In that case, you should invert the formulas in the above example. Instead of moving the pixels from the input image to the right place in the output image, you can go through every pixel in the output image and ask which input pixel we can use for that. This will avoid the black pixels, but may still show artifacts:
var jimp = require('jimp');

var inputfile = 'input.png';
jimp.read(inputfile, function(err, inputimage)
{
  var cx = 960;
  var cy = 540;
  var radius = 540;
  var inputwidth = 1920;
  var inputheight = 1080;
  var outputwidth = 1920;
  var outputheight = 1080 / 2;
  var blendmap = {};
  new jimp(outputwidth, outputheight, 0x000000ff, function(err, outputimage)
  {
    for(var y = 0; y < outputheight; ++y)
    {
      for(var x = 0; x < outputwidth; ++x)
      {
        var theta_deg = 360 - x * 360.0 / outputwidth - 180;
        var phi_deg = 90 - y * 90.0 / outputheight;
        var r = Math.sin(phi_deg * Math.PI / 180);
        var dx = Math.cos(theta_deg * Math.PI / 180) * r;
        var dy = Math.sin(theta_deg * Math.PI / 180) * r;
        var inputx = Math.round(dx * radius + cx);
        var inputy = Math.round(dy * radius + cy);
        outputimage.setPixelColor(inputimage.getPixelColor(inputx, inputy), x, y);
      }
    }
    outputimage.write('output.png');
  });
});
For reference, here are the formulas for converting between Cartesian and spherical coordinate systems (taken from here). Note that the radius is in your case just 1, a so-called "unit" sphere, so you can leave it out of the equations. You should also understand that since the camera is actually taking a picture in three dimensions, you need formulas that work in three dimensions.
Here is the generated output image:
Since I don't see your original input image in your question anymore, in order for anyone to test the code from this answer, you can use the following image:
Run the code with:
mkdir /tmp/test
cd /tmp/test
npm install jimp
cat <<EOF >/tmp/test/main.js
... paste the javascript code from above ...
EOF
curl https://i.stack.imgur.com/0zWt6.png > input.png
node main.js
Note: in order to further improve the blending, you should remove the Math.round. So, for instance, if you need to grab a pixel at x = 0.75, and the pixel on the left at x = 0 is white while the pixel on the right at x = 1 is black, then you want to mix both colors into a dark grey (using ratio 0.75). You would have to do this for both dimensions simultaneously if you want a nice result. But this should really be a new question, imho.
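A minimal sketch of that blend for a single 8-bit channel, in one dimension only (lerp is a hypothetical helper):

// Linear interpolation of one channel value
function lerp(a, b, t) {
  return a * (1 - t) + b * t;
}

const left = 255;  // white pixel at x = 0
const right = 0;   // black pixel at x = 1
const blended = Math.round(lerp(left, right, 0.75)); // 64, a dark grey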
I just recently started delving into Three.js. Currently I'm trying to plot a point on a sphere, but it appears to be plotted in the southern hemisphere instead of the northern hemisphere. Vertically, it looks to be in the correct spot, just on the bottom of the sphere instead of the top. I grabbed some code from this answer: https://stackoverflow.com/a/8982005/738201
In the image the yellow line with the red box is my plotted point. It should be in the upstate NY region instead of where it is now.
And lastly, the code.
function xyz(lat, lon) {
  var cosLat = Math.cos(lat * Math.PI / 180.00);
  var sinLat = Math.sin(lat * Math.PI / 180.00);
  var cosLon = Math.cos(lon * Math.PI / 180.00);
  var sinLon = Math.sin(lon * Math.PI / 180.00);
  var r = sphere.geometry.radius; // 50
  var coords = {};
  coords.x = r * cosLat * cosLon;
  coords.y = r * cosLat * sinLon;
  coords.z = r * sinLat;
  console.log(coords);
  return coords;
}
// Lat/Lon from GoogleMaps
var coords = xyz(42.654162, -73.699830);
// returns {x: 10.321018160637124, y: -35.29474079777381, z: 33.878575178802286}
I suspect that the issue may be using 2D coords on a 3D sphere but if so, I'm not quite sure how to rectify that.
Try this; it will help you use lat and long directly on the map:
function calcPosFromLatLon(lat, lon) {
  // Angle from the north pole and angle around the vertical axis, in radians
  const phi = (90 - lat) * (Math.PI / 180);
  const theta = (lon + 180) * (Math.PI / 180);
  const x = -(Math.sin(phi) * Math.cos(theta));
  const z = Math.sin(phi) * Math.sin(theta);
  const y = Math.cos(phi);
  // Multiply each component by your sphere's radius to land on its surface
  return { x, y, z };
}

calcPosFromLatLon(42.654162, -73.699830); // the upstate NY coordinates from the question