Server-side marker clustering with PHP & MongoDB - javascript

I have implemented server-side marker clustering by following this article:
http://www.appelsiini.net/2008/introduction-to-marker-clustering-with-google-maps
This works perfectly for fewer than 5,000 markers, but when my markers increased to 17,000 it exhausted all the memory, since some very big loops are running.
I am using MongoDB to store all my records with lat/long. Can I make use of MongoDB's spatial query features for clustering? I want the server load to be as low as possible when recalculating the clusters each time the user drags the map.
So far I'm doing the clustering as follows:
while (count($markers)) {
    $marker  = array_pop($markers);
    $cluster = array();
    /* Compare against all markers which are left. */
    foreach ($markers as $key => $target) {
        $pixels = $this->pixelDistance($marker['lat'], $marker['long'],
                                       $target['lat'], $target['long'],
                                       $zoom);
        /* If the two markers are closer than $distance pixels, move the */
        /* target out of the marker list and into the current cluster.   */
        if ($distance > $pixels && $zoom < 18) {
            unset($markers[$key]);
            $cluster[] = $target;
        }
    }
    /* If anything was clustered, add the marker we compared against too; */
    /* otherwise keep it as a single, unclustered marker.                 */
    if (count($cluster) > 0) {
        $cluster[] = $marker;
        $clustered[] = $cluster;
    } else {
        $clustered[] = $marker;
    }
}
foreach ($clustered as $key => $cluster) {
    $centroid = array('lat' => 0, 'long' => 0, 'count' => 0);
    if (isset($cluster[0]) && is_array($cluster[0])) {
        foreach ($cluster as $marker) {
            $centroid['lat']  += $marker['lat'];  // Sum up the lats
            $centroid['long'] += $marker['long']; // Sum up the longs
            $centroid['count']++;
        }
        $centroid['lat']  /= $centroid['count']; // Average lat
        $centroid['long'] /= $centroid['count']; // Average long
        $clustered[$key] = $centroid; // Overwrite the cluster with its centroid point.
    }
}
return $clustered;
Any help is greatly appreciated.
Thanks in advance

You can use a bounding box to narrow the search. One degree of longitude is roughly 111 km: http://en.m.wikipedia.org/wiki/Longitude. A tile is identified by three variables x, y, z: a two-dimensional grid with an x and y axis, plus the zoom level z. Read here: http://msdn.microsoft.com/en-us/library/bb259689.aspx. Basically you need to convert the lat/lng pair to pixel coordinates; then you can derive the tile number from it. The maximum pixel coordinate is a power of 2, hence the big numbers at the maximum zoom level. Since you asked, here is the relevant part of the Bing tile system documentation:
To optimize the indexing and storage of tiles, the two-dimensional tile XY coordinates are combined into one-dimensional strings called quadtree keys, or “quadkeys” for short. Each quadkey uniquely identifies a single tile at a particular level of detail, and it can be used as a key in common database B-tree indexes. To convert tile coordinates into a quadkey, the bits of the Y and X coordinates are interleaved, and the result is interpreted as a base-4 number (with leading zeros maintained) and converted into a string. For instance, given tile XY coordinates of (3, 5) at level 3, the quadkey is determined as follows:
tileX = 3 = 011₂
tileY = 5 = 101₂
quadkey = 100111₂ = 213₄ = "213"
Quadkeys have several interesting properties. First, the length of a quadkey (the number of digits) equals the level of detail of the corresponding tile. Second, the quadkey of any tile starts with the quadkey of its parent tile (the containing tile at the previous level).
It's very similar to a quadtree or R-tree, and implementing it is a good exercise for the reader, but you already have the Bing tile code above.
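To make that concrete, here is a hedged sketch (my own, not from the original answer) of converting a marker's lat/long into a tile quadkey and grouping markers by that key; the 256-pixel tile size and the Web Mercator math follow the Bing article linked above, and clusterByQuadkey is a name invented for illustration.
function latLngToQuadkey(lat, lng, zoom) {
  // Sketch only: lat/long -> pixel -> tile XY -> quadkey.
  // Assumes lat is within the Web Mercator bounds (roughly ±85.05°).
  const mapSize = 256 * Math.pow(2, zoom); // world size in pixels at this zoom
  const sinLat = Math.sin(lat * Math.PI / 180);
  const pixelX = (lng + 180) / 360 * mapSize;
  const pixelY = (0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI)) * mapSize;
  const tileX = Math.floor(pixelX / 256);
  const tileY = Math.floor(pixelY / 256);
  let quadkey = '';
  for (let i = zoom; i > 0; i--) { // interleave the bits of Y and X
    const mask = 1 << (i - 1);
    let digit = 0;
    if (tileX & mask) digit += 1;
    if (tileY & mask) digit += 2;
    quadkey += digit;
  }
  return quadkey;
}
// Group markers per occupied tile; averaging lat/long per group gives each cluster's centroid.
function clusterByQuadkey(markers, zoom) {
  const clusters = new Map();
  for (const m of markers) {
    const key = latLngToQuadkey(m.lat, m.long, zoom);
    if (!clusters.has(key)) clusters.set(key, []);
    clusters.get(key).push(m);
  }
  return clusters;
}
Because a tile's quadkey is a prefix of all its children's quadkeys, storing the key with each marker also lets an ordinary B-tree index (for example a MongoDB index on that field) answer per-tile queries with a simple prefix match, instead of looping over every marker pair.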

Related

Reverse gravity / anti-gravity? What elements of a gravitational force algorithm do I need to change to reverse it?

I want to reverse my gravitational force algorithm to produce locations in the "past" of multiple interacting bodies. It's trivial to produce locations in the future by running the algorithm repeatedly on the set of bodies, but reversing this to write out the bodies' previous positions has stumped me. I don't want to store the past positions, and since this is deterministic it should be possible to somehow run the algorithm backwards, but I'm not sure how.
In the snippet, element is each of the bodies tested against universe in the loop, and ticks is the delta time.
function forces(other) {
  if (element === other) {
    return;
  }
  var distancePoint = element.point.sub(other.point);
  var normal = Math.sqrt(100.0 + distancePoint.lengthSq());
  var mag = GravatationalConstant / Math.pow(normal, 3);
  var distPointMulOtherMass = distancePoint.mul(mag * other.mass);
  element.acceleration = element.acceleration.sub(distPointMulOtherMass);
  other.acceleration = other.acceleration.add(distancePoint.mul(mag * element.mass));
}

element.acceleration = new Point(0,0,0,0);
universe.forEach(forces);
element.velocity = element.velocity.add(element.acceleration.mul(ticks));
element.point = element.point.add(element.velocity.mul(0.5 * ticks));
I tried sending a negative tick as well as a negative gravitational constant, but the positions it produced for the "past" didn't seem to follow what the elements appeared to do in the real past.
I don't know much about physics but I was wondering if there is a small change that could be done to reverse this algorithm.
Update
Thanks to Graeme Niedermayer, I've updated my gravity algorithm to use the inverse-square law, and with negative time it appears to produce positions in the past!
function forces(other) {
  if (element === other) {
    return;
  }
  var distancePoint = element.point.sub(other.point);
  const forceElementMass = GravatationalConstant * element.mass * other.mass /
    Math.pow(element.mass, 2);
  const forceOtherMass = GravatationalConstant * element.mass * other.mass /
    Math.pow(other.mass, 2);
  element.acceleration = element.acceleration
    .sub(distancePoint.mul(forceOtherMass));
  other.acceleration = other.acceleration
    .add(distancePoint.mul(forceElementMass));
}

const ticks = forwards ? dt : -dt;
element.acceleration = new Point(0,0,0,0);
universe.forEach(forces);
element.velocity = element.velocity.add(element.acceleration.mul(ticks));
element.point = element.point.add(element.velocity.mul(0.5 * ticks));
Outlined circles are at the current position, and the "past" positions are the other circles, fading out to zero opacity.
Update 2
Realised that I used the wrong equation in Update 1 (both force constants used the same mass object). I looked into a few more examples and have updated the code, but now I'm not sure where I should add the delta time ticks, which is currently just set to 1 for forwards and -1 for backwards. Below is an image of what it looks like if I multiply the acceleration by ticks before adding it to the velocity each frame, body.velocity = body.velocity.add(body.acceleration.mul(ticks)), or if I make one of the masses negative, const force = G * body.mass * (forward ? other.mass : -other.mass) / d ** 2.
As you can see, the "past" positions (red outline) of the green body go over to the left and above. I was hoping to have them appear to "follow" the current position, but I'm not sure how to reverse or invert the equation to show the "past" positions, basically as if the body were travelling in the opposite direction. Is there a way to do this?
In this next image I have multiplied the velocity by the delta time ticks before adding it to the position, body.point = body.point.add(body.velocity.mul(ticks)). This results in a path similar to the recorded path the body travelled (obtained by writing each position to an array and drawing a line between those positions), but it is slightly off. This solution is similar to what I was seeing in Update 1. Is there a reason that this is "almost" correct?
Code below is without any additions to reverse the position.
function forces(other, ticks) {
  if (body === other) {
    return;
  }
  // Calculate direction of force
  var distanceVector = other.point.sub(body.point)
  // Distance between objects
  var d = distanceVector.mag()
  // Normalize vector (distance doesn't matter here, we just want this vector for direction)
  const forceNormalized = distanceVector.normalized()
  // Calculate gravitational force magnitude
  const G = 6.674
  const force = G * body.mass * other.mass / d ** 2
  // Get force vector --> magnitude * direction
  const magDirection = forceNormalized.mul(force)
  const f = magDirection.div(body.mass)
  body.acceleration = body.acceleration.add(f)
}

body.acceleration = body.acceleration.mul(0)
universe.forEach(body => forces(body, ticks))
body.velocity = body.velocity.add(body.acceleration)
body.point = body.point.add(body.velocity)
Update 3
I ended up removing the negative mass and the velocity multiplied by ticks and just reversed the way the acceleration is applied to the position:
if (forward) {
  universe.forEach(body => forces(body, ticks));
  body.velocity = body.velocity.add(body.acceleration)
  body.point = body.point.add(body.velocity)
} else {
  body.point = body.point.sub(body.velocity)
  universe.forEach(body => forces(body, ticks));
  body.velocity = body.velocity.sub(body.acceleration)
}
This results in being able to generate positions forwards and backwards in time from the current position. In the image, the "past" positions appear to follow the recorded trail of the current position.
To generate a step in the "past", it subtracts the current velocity from the current position, putting the body in the last position it was in. Next it gets the acceleration by checking the forces from the other bodies, then subtracts the new acceleration (using negative mass would do the same) from the velocity, so the next position in the "past" will be correct.
You should be able to make one of the masses negative.
The reason negative time doesn't work is that you are implicitly using Euler's method, and Euler's method is unstable when using negative steps.
Also, the physics you're using is a little weird: gravity is usually an inverse-square law.
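As an illustration of that point, here is a sketch of mine (not code from the thread): instead of feeding a negative dt into the forward update, flip the velocities, take an ordinary forward step with the same force routine, and flip them back. computeAccelerations is a hypothetical stand-in for the forces() loop above, and bodies are assumed to expose point/velocity/acceleration vectors with add/sub/mul.
// Sketch only: step the whole universe one tick "into the past" by reversing
// velocities, integrating forward as usual, then reversing them again.
function stepBackwards(universe, dt) {
  universe.forEach(body => { body.velocity = body.velocity.mul(-1); });
  computeAccelerations(universe); // hypothetical: fills body.acceleration like forces() above
  universe.forEach(body => {
    body.velocity = body.velocity.add(body.acceleration.mul(dt));
    body.point = body.point.add(body.velocity.mul(dt));
  });
  universe.forEach(body => { body.velocity = body.velocity.mul(-1); });
}
Flipping velocities keeps each step a normal forward integration, which sidesteps the negative-step instability of Euler's method mentioned above; for exact retracing you would want a time-symmetric integrator such as leapfrog.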

Rotating SVG path points for better morphing

I am using a couple of functions from Snap.svg, mainly path2curve and the functions around it, to build an SVG morph plugin.
I've set up a demo on Codepen to better illustrate the issue. Morphing shapes from simple to complex and the other way around works properly as far as the JavaScript goes; however, the visual result isn't very pleasing.
The first shape morph looks awful, the second looks a little better because I changed/rotated its points a bit, but the last example is perfect.
So I need either a better path2curve or a function to prepare the path string before the other function builds the curves array. Snap.svg has a function called getClosest that I think may be useful, but it's not documented.
There isn't any documentation available on this topic, so I would appreciate any suggestions/input from RaphaelJS / Snap.svg / d3.js / three.js developers.
I've provided a runnable code snippet below that uses Snap.svg and that I believe demonstrates one solution to your problem. With respect to trying to find the best way to morph a starting shape into an ending shape, this algorithm essentially rotates the points of the starting shape one position at a time, sums the squares of the distances between corresponding points on the (rotated) starting shape and the (unchanged) ending shape, and finds the minimum of all those sums. i.e. It's basically a least squares approach. The minimum value identifies the rotation that, as a first guess, will provide the "shortest" morph trajectories. In spite of these coordinate reassignments, however, all 'rotations' should result in visually identical starting shapes, as required.
This is, of course, a "blind" mathematical approach, but it might help provide you with a starting point before doing manual visual analysis. As a bonus, even if you don't like the rotation that the algorithm chose, it also provides the path 'd' attribute strings for all the other rotations, so some of that work has already been done for you.
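Stripped of the Snap.svg plumbing, the core of that least-squares comparison might look like the following sketch (my own illustration; startPts and endPts are assumed to be arrays of corresponding {x, y} anchor points, one per path segment):
// Sketch only: pick the rotation (index offset) of the start points that
// minimises the sum of squared distances to the matching end points.
function bestRotation(startPts, endPts) {
  const n = startPts.length;
  let bestRot = 0;
  let bestSum = Infinity;
  for (let rot = 0; rot < n; rot++) {
    let sum = 0;
    for (let i = 0; i < n; i++) {
      const a = startPts[(rot + i) % n];
      const b = endPts[i];
      sum += (a.x - b.x) ** 2 + (a.y - b.y) ** 2;
    }
    if (sum < bestSum) { bestSum = sum; bestRot = rot; }
  }
  return bestRot; // rebuild the start path's 'd' string beginning at this point index
}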
You can modify the snippet to provide any starting and ending shapes you want. The limitations are as follows:
Each shape should have the same number of points (although the point types, e.g. 'lineto', 'cubic bezier curve', 'horizontal lineto', etc., can completely vary)
Each shape should be closed, i.e. end with "Z"
The morph desired should involve only translation. If scaling or rotation is desired, those should be applied after calculating the morph based only on translation.
By the way, in response to some of your comments, while I find Snap.svg intriguing, I also find its documentation to be somewhat lacking.
Update: The code snippet below works in Firefox (Mac or Windows) and Safari. However, Chrome seems to have trouble accessing the Snap.svg library from its external web site as written (<script...github...>). Opera and Internet Explorer also have problems. So, try the snippet in the working browsers, or try copying the snippet code as well as the Snap library code to your own computer. (Is this an issue of accessing third party libraries from within the code snippet? And why browser differences? Insightful comments would be appreciated.)
var
s = Snap(),
colors = ["red", "blue", "green", "orange"], // colour list can be any length
staPath = s.path("M25,35 l-15,-25 C35,20 25,0 40,0 L80,40Z"), // create the "start" shape
endPath = s.path("M10,110 h30 l30,20 C30,120 35,135 25,135Z"), // create the "end" shape
staSegs = getSegs(staPath), // convert the paths to absolute values, using only cubic bezier
endSegs = getSegs(endPath), // segments, & extract the pt coordinates & segment strings
numSegs = staSegs.length, // note: the # of pts is one less than the # of path segments
numPts = numSegs - 1, // b/c the path's initial 'moveto' pt is also the 'close' pt
linePaths = [],
minSumLensSqrd = Infinity,
rotNumOfMin,
rotNum = 0;
document.querySelector('button').addEventListener('click', function() {
if (rotNum < numPts) {
linePaths.forEach(function(linePath) {linePath.remove();}); // erase any previous coloured lines
var sumLensSqrd = 0;
for (var ptNum = 0; ptNum < numPts; ptNum += 1) { // draw new lines, point-to-point
var linePt1 = staSegs[(rotNum + ptNum) % numPts]; // the new line begins on the 'start' shape
var linePt2 = endSegs[ ptNum % numPts]; // and finished on the 'end' shape
var linePathStr = "M" + linePt1.x + "," + linePt1.y + "L" + linePt2.x + "," + linePt2.y;
var linePath = s.path(linePathStr).attr({stroke: colors[ptNum % colors.length]}); // draw it
var lineLen = Snap.path.getTotalLength(linePath); // calculate its length
sumLensSqrd += lineLen * lineLen; // square the length, and add it to the accumulating total
linePaths[ptNum] = linePath; // remember the path to facilitate erasing it later
}
if (sumLensSqrd < minSumLensSqrd) { // keep track of which rotation has the lowest value
minSumLensSqrd = sumLensSqrd; // of the sum of lengths squared (the 'lsq sum')
rotNumOfMin = rotNum; // as well as the corresponding rotation number
}
show("ROTATION OF POINTS #" + rotNum + ":"); // display info about this rotation
var rotInfo = getRotInfo(rotNum);
show(" point coordinates: " + rotInfo.ptsStr); // show point coordinates
show(" path 'd' string: " + rotInfo.dStr); // show 'd' string needed to draw it
show(" sum of (coloured line lengths squared) = " + sumLensSqrd); // the 'lsq sum'
rotNum += 1; // analyze the next rotation of points
} else { // once all the rotations have been analyzed individually...
linePaths.forEach(function(linePath) {linePath.remove();}); // erase any coloured lines
show(" ");
show("BEST ROTATION, i.e. rotation with lowest sum of (lengths squared): #" + rotNumOfMin);
// show which rotation to use
show("Use the shape based on this rotation of points for morphing");
$("button").off("click");
}
});
function getSegs(path) {
var absCubDStr = Snap.path.toCubic(Snap.path.toAbsolute(path.attr("d")));
return Snap.parsePathString(absCubDStr).map(function(seg, segNum) {
return {x: seg[segNum ? 5 : 1], y: seg[segNum ? 6 : 2], seg: seg.toString()};
});
}
function getRotInfo(rotNum) {
var ptsStr = "";
for (var segNum = 0; segNum < numSegs; segNum += 1) {
var oldSegNum = rotNum + segNum;
if (segNum === 0) {
var dStr = "M" + staSegs[oldSegNum].x + "," + staSegs[oldSegNum].y;
} else {
if (oldSegNum >= numSegs) oldSegNum -= numPts;
dStr += staSegs[oldSegNum].seg;
}
if (segNum !== (numSegs - 1)) {
ptsStr += "(" + staSegs[oldSegNum].x + "," + staSegs[oldSegNum].y + "), ";
}
}
ptsStr = ptsStr.slice(0, ptsStr.length - 2);
return {ptsStr: ptsStr, dStr: dStr};
}
function show(msg) {
var m = document.createElement('pre');
m.innerHTML = msg;
document.body.appendChild(m);
}
pre {
margin: 0;
padding: 0;
}
<script src="//cdn.jsdelivr.net/snap.svg/0.4.1/snap.svg-min.js"></script>
<p>Best viewed on full page</p>
<p>Coloured lines show morph trajectories for the points for that particular rotation of points. The algorithm seeks to optimize those trajectories, essentially trying to find the "shortest" cumulative routes.</p>
<p>The order of points can be seen by following the colour of the lines: red, blue, green, orange (at least when this was originally written), repeating if there are more than 4 points.</p>
<p><button>Click to show rotation of points on top shape</button></p>

mouse position to isometric tile including height

I'm struggling to translate the position of the mouse to the location of the tiles in my grid. When it's all flat, the math looks like this:
this.position.x = Math.floor(((pos.y - 240) / 24) + ((pos.x - 320) / 48));
this.position.y = Math.floor(((pos.y - 240) / 24) - ((pos.x - 320) / 48));
where pos.x and pos.y are the mouse position, 240 and 320 are the offsets, and 24 and 48 are the tile dimensions. position then contains the grid coordinate of the tile I'm hovering over. This works reasonably well on a flat surface.
Now I'm adding height, which the math does not take into account.
This grid is a 2D grid containing noise, that's being translated to height and tile type. Height is really just an adjustment to the 'Y' position of the tile, so it's possible for two tiles to be drawn in the same spot.
I don't know how to determine which tile I'm hovering over.
edit:
Made some headway... Before, I was depending on the mouseover event to calculate the grid position. I just changed this to do the calculation in the draw loop itself and check whether the coordinates are within the limits of the tile currently being drawn. It creates some overhead though; I'm not sure I'm super happy with it, but I'll confirm whether it works.
edit 2018:
I have no answer, but since this has an open bounty, help yourself to some code and a demo.
The grid itself is, simplified:
let grid = [[10,15],[12,23]];
which leads to a drawing like:
for (var i = 0; i < grid.length; i++) {
  for (var j = 0; j < grid[0].length; j++) {
    let x = (j - i) * resourceWidth;
    let y = ((i + j) * resourceHeight) + (grid[i][j] * -resourceHeight);
    // the "+" bit is the adjustment for height according to perlin noise values
  }
}
edit post-bounty:
See GIF. The accepted answer works. The delay is my fault, the screen doesn't update on mousemove (yet) and the frame rate is low-ish. It's clearly bringing back the right tile.
Source
Interesting task.
Let's try to simplify it and resolve this concrete case.
Solution
Working version is here: https://github.com/amuzalevskiy/perlin-landscape (changes https://github.com/jorgt/perlin-landscape/pull/1 )
Explanation
The first thing that came to mind is just two steps:
find the vertical column which matches some set of tiles
iterate over the tiles in that set from bottom to top, checking whether the cursor is placed lower than each tile's top line
Step 1
We need two functions here:
Detects column:
function getColumn(mouseX, firstTileXShiftAtScreen, columnWidth) {
return (mouseX - firstTileXShiftAtScreen) / columnWidth;
}
A function which extracts the array of tiles corresponding to this column.
Rotate the image 45° in your mind: the red numbers are columnNo, column 3 is highlighted, and the X axis is horizontal.
function tileExists(x, y, width, height) {
  return x >= 0 & y >= 0 & x < width & y < height;
}

function getTilesInColumn(columnNo, width, height) {
  let startTileX = 0, startTileY = 0;
  let xShift = true;
  for (let i = 0; i < columnNo; i++) {
    if (tileExists(startTileX + 1, startTileY, width, height)) {
      startTileX++;
    } else {
      if (xShift) {
        xShift = false;
      } else {
        startTileY++;
      }
    }
  }
  let tilesInColumn = [];
  while (tileExists(startTileX, startTileY, width, height)) {
    tilesInColumn.push({x: startTileX, y: startTileY, isLeft: xShift});
    if (xShift) {
      startTileX--;
    } else {
      startTileY++;
    }
    xShift = !xShift;
  }
  return tilesInColumn;
}
Step 2
The list of tiles to check is ready. Now for each tile we need to find its top line. We also have two types of tiles, left and right; we already stored this info while building the set of matching tiles.
function getTileYIncrementByTileZ(tileZ) {
  // implement here
  return 0;
}

function findExactTile(mouseX, mouseY, tilesInColumn, tiles2d,
                       firstTileXShiftAtScreen, firstTileYShiftAtScreenAt0Height,
                       tileWidth, tileHeight) {
  // we built a set of tiles where bottom ones come first
  // iterate tiles from bottom to top
  for (var i = 0; i < tilesInColumn.length; i++) {
    let tileInfo = tilesInColumn[i];
    let lineAB = findABForTopLineOfTile(tileInfo.x, tileInfo.y, tiles2d[tileInfo.x][tileInfo.y],
                                        tileInfo.isLeft, tileWidth, tileHeight);
    if ((mouseY - firstTileYShiftAtScreenAt0Height) >
        (mouseX - firstTileXShiftAtScreen) * lineAB.a + lineAB.b) {
      // WOHOO !!!
      return tileInfo;
    }
  }
}
function findABForTopLineOfTile(tileX, tileY, tileZ, isLeftTopLine, tileWidth, tileHeight) {
  // find a top line ~~~ a,b
  // y = a * x + b;
  let a = tileWidth / tileHeight;
  if (isLeftTopLine) {
    a = -a;
  }
  let b = isLeftTopLine ?
    tileY * 2 * tileHeight :
    -(tileX + 1) * 2 * tileHeight;
  b -= getTileYIncrementByTileZ(tileZ);
  return {a: a, b: b};
}
Please don't judge me for not posting any code. I am just suggesting an algorithm that can solve it without high memory usage.
The Algorithm:
Actually, to determine which tile the mouse is hovering over, we don't need to check all the tiles. At first we pretend the surface is flat and find which tile the mouse pointer is over using the formula the OP posted. This is the farthest probable tile the cursor can be pointing at for this cursor position.
This tile can receive the mouse pointer if it's at height 0; by checking its current height we can verify whether it is really at the right height to receive the pointer. If so, we mark it and move on.
Then we find the next probable tile, which is closer to the screen, by incrementing or decrementing the x,y grid values depending on the cursor position.
We keep moving forward in a zigzag fashion until we reach a tile which cannot receive the pointer even at its maximum height.
When we reach this point, the last tile found that was at a height to receive the pointer is the tile we are looking for.
In this case we only checked 8 tiles to determine which tile is currently receiving the pointer. This is very memory efficient in comparison to checking all the tiles in the grid, and it yields a faster result.
One way to solve this would be to follow the ray that goes from the clicked pixel on the screen into the map. For that, just determine the camera position in relation to the map and the direction it is looking at:
const camPos = {x: -5, y: -5, z: -5}
const camDirection = { x: 1, y:1, z:1}
The next step is to get the touch position in the 3D world. In this particular perspective that is quite simple:
const touchPos = {
x: camPos.x + touch.x / Math.sqrt(2),
y: camPos.y - touch.x / Math.sqrt(2),
z: camPos.z - touch.y / Math.sqrt(2)
};
Now you just need to follow the ray into the map (scale the direction so that each step is smaller than one of your tile's dimensions):
for (let delta = 0; delta < 100; delta++) {
  const x = touchPos.x + camDirection.x * delta;
  const y = touchPos.y + camDirection.y * delta;
  const z = touchPos.z + camDirection.z * delta;
Now just take the tile at x, z and check if y is smaller than its height:
  const absX = ~~( x / 24 );
  const absZ = ~~( z / 24 );
  if (tiles[absX][absZ].height >= y) {
    // handle the hover event
    break; // stop following the ray once it has hit a tile
  }
}
I had the same situation in a game. First I tried a mathematical approach, but when I found out that the client wanted to change the map type every day, I switched to a graphical solution and passed it to the team's designer. I captured the mouse position by listening for clicks on the SVG elements.
The main graphic is used directly to capture and translate the mouse position to the pixel I need.
https://blog.lavrton.com/hit-region-detection-for-html5-canvas-and-how-to-listen-to-click-events-on-canvas-shapes-815034d7e9f8
https://code.sololearn.com/Wq2bwzSxSnjl/#html
Here is the grid input I would define for the sake of this discussion. The output should be some tile (coordinate_1, coordinate_2) based on what is visible under the mouse on the user's screen:
I can offer two solutions from different perspectives, but you will need to convert them back into your problem domain. The first methodology is based on coloring tiles and can be more useful if the map changes dynamically. The second is based on drawing coordinate bounding boxes, relying on the fact that tiles closer to the viewer, like (0, 0), can never be occluded by tiles behind them, like (1, 1).
Approach 1: Transparently Colored Tiles
The first approach is based on drawing and elaborated on here. I must give the credit to #haldagan for a particularly beautiful solution. In summary it relies on drawing a perfectly opaque layer on top of the original canvas and coloring every tile with a different color. This top layer should be subject to the same height transformations as the underlying layer. When the mouse hovers over a particular layer you can detect the color through canvas and thus the tile itself. This is the solution I would probably go with and this seems to be a not so rare issue in computer visualization and graphics (finding positions in a 3d isometric world).
Approach 2: Finding the Bounding Tile
This is based on the conjecture that the "front" row can never be occluded by "back" rows behind it. Furthermore, "closer to the screen" tiles cannot be occluded by tiles "farther from the screen". To make precise the meaning of "front", "back", "closer to the screen" and "farther from the screen", take a look at the following:
Based on this principle the approach is to build a set of polygons for each tile. So firstly we determine the coordinates on the canvas of just box (0, 0) after height scaling. Note that the height scale operation is simply a trapezoid stretched vertically based on height.
Then we determine the coordinates on the canvas of boxes (1, 0), (0, 1), (1, 1) after height scaling (we would need to subtract anything from those polygons which overlap with the polygon (0, 0)).
Proceed to build each boxes bounding coordinates by subtracting any occlusions from polygons closer to the screen, to eventually get coordinates of polygons for all boxes.
With these coordinates and some care, you can ultimately determine which tile is pointed at, binary-search style, by searching through the overlapping polygons from the bottom rows up.
It also matters what else is on the screen. Mathematical approaches work if your tiles are pretty much uniform. However, if you are displaying various objects and want the user to pick them, it is far easier to keep a canvas-sized map of identifiers.
function poly(ctx){var a=arguments;ctx.beginPath();ctx.moveTo(a[1],a[2]);
for(var i=3;i<a.length;i+=2)ctx.lineTo(a[i],a[i+1]);ctx.closePath();ctx.fill();ctx.stroke();}
function circle(ctx,x,y,r){ctx.beginPath();ctx.arc(x,y,r,0,2*Math.PI);ctx.fill();ctx.stroke();}
function Tile(h,c,f){
var cnv=document.createElement("canvas");cnv.width=100;cnv.height=h;
var ctx=cnv.getContext("2d");ctx.lineWidth=3;ctx.lineStyle="black";
ctx.fillStyle=c;poly(ctx,2,h-50,50,h-75,98,h-50,50,h-25);
poly(ctx,50,h-25,2,h-50,2,h-25,50,h-2);
poly(ctx,50,h-25,98,h-50,98,h-25,50,h-2);
f(ctx);return ctx.getImageData(0,0,100,h);
}
function put(x,y,tile,image,id,map){
var iw=image.width,tw=tile.width,th=tile.height,bdat=image.data,fdat=tile.data;
for(var i=0;i<tw;i++)
for(var j=0;j<th;j++){
var ijtw4=(i+j*tw)*4,a=fdat[ijtw4+3];
if(a!==0){
var xiyjiw=x+i+(y+j)*iw;
for(var k=0;k<3;k++)bdat[xiyjiw*4+k]=(bdat[xiyjiw*4+k]*(255-a)+fdat[ijtw4+k]*a)/255;
bdat[xiyjiw*4+3]=255;
map[xiyjiw]=id;
}
}
}
var cleanimage;
var pickmap;
function startup(){
var water=Tile(77,"blue",function(){});
var field=Tile(77,"lime",function(){});
var tree=Tile(200,"lime",function(ctx){
ctx.fillStyle="brown";poly(ctx,50,50,70,150,30,150);
ctx.fillStyle="forestgreen";circle(ctx,60,40,30);circle(ctx,68,70,30);circle(ctx,32,60,30);
});
var sheep=Tile(200,"lime",function(ctx){
ctx.fillStyle="white";poly(ctx,25,155,25,100);poly(ctx,75,155,75,100);
circle(ctx,50,100,45);circle(ctx,50,80,30);
poly(ctx,40,70,35,80);poly(ctx,60,70,65,80);
});
var cnv=document.getElementById("scape");
cnv.width=500;cnv.height=400;
var ctx=cnv.getContext("2d");
cleanimage=ctx.getImageData(0,0,500,400);
pickmap=new Uint8Array(500*400);
var tiles=[water,field,tree,sheep];
var map=[[[0,0],[1,1],[1,1],[1,1],[1,1]],
[[0,0],[1,1],[1,2],[3,2],[1,1]],
[[0,0],[1,1],[2,2],[3,2],[1,1]],
[[0,0],[1,1],[1,1],[1,1],[1,1]],
[[0,0],[0,0],[0,0],[0,0],[0,0]]];
for(var x=0;x<5;x++)
for(var y=0;y<5;y++){
var desc=map[y][x],tile=tiles[desc[0]];
put(200+x*50-y*50,200+x*25+y*25-tile.height-desc[1]*20,
tile,cleanimage,x+1+(y+1)*10,pickmap);
}
ctx.putImageData(cleanimage,0,0);
}
var mx,my,pick;
function mmove(event){
mx=Math.round(event.offsetX);
my=Math.round(event.offsetY);
if(mx>=0 && my>=0 && mx<cleanimage.width && my<cleanimage.height && pick!==pickmap[mx+my*cleanimage.width])
requestAnimationFrame(redraw);
}
function redraw(){
pick=pickmap[mx+my*cleanimage.width];
document.getElementById("pick").innerHTML=pick;
var ctx=document.getElementById("scape").getContext("2d");
ctx.putImageData(cleanimage,0,0);
if(pick!==0){
var temp=ctx.getImageData(0,0,cleanimage.width,cleanimage.height);
for(var i=0;i<pickmap.length;i++)
if(pickmap[i]===pick)
temp.data[i*4]=255;
ctx.putImageData(temp,0,0);
}
}
startup(); // in place of body.onload
<div id="pick">Move around</div>
<canvas id="scape" onmousemove="mmove(event)"></canvas>
Here the "id" is a simple x+1+(y+1)*10 (so it is nice when displayed) and fits into a byte (Uint8Array), which could go up to 15x15 display grid already, and there are wider types available too.
(Tried to draw it small, and it looked ok on the snippet editor screen but apparently it is still too large here)
Computer graphics is fun, right?
This is a special case of the more standard computational geometry "point location problem". You could also express it as a nearest neighbour search.
To make this look like a point location problem you just need to express your tiles as non-overlapping polygons in a 2D plane. If you want to keep your shapes in a 3D space (e.g. with a z buffer) this becomes the related "ray casting problem".
One source of good geometry algorithms is W. Randolf Franklin's website and turf.js contains an implementation of his PNPOLY algorithm.
For this special case we can be even faster than the general algorithms by treating our prior knowledge about the shape of the tiles as a coarse R-tree (a type of spatial index).
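For reference, here is a small sketch of the even-odd test PNPOLY uses (my own transcription, not taken from turf.js); polyX and polyY are parallel arrays of a tile polygon's vertex coordinates:
// Sketch: even-odd (crossing number) point-in-polygon test.
// Returns true if (testX, testY) lies inside the polygon.
function pointInPolygon(polyX, polyY, testX, testY) {
  const n = polyX.length;
  let inside = false;
  for (let i = 0, j = n - 1; i < n; j = i++) {
    const crosses =
      (polyY[i] > testY) !== (polyY[j] > testY) &&
      testX < (polyX[j] - polyX[i]) * (testY - polyY[i]) / (polyY[j] - polyY[i]) + polyX[i];
    if (crosses) inside = !inside; // toggle on every edge the ray crosses
  }
  return inside;
}
Picking then amounts to testing the cursor against each tile's drawn polygon, front-most tiles first, and returning the first hit.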

Tools or libraries for getting cross-section of 2D polygons into 1D geometry?

I am an amateur pilot working on some airspace modeling experiments. What I am trying to achieve is a tool for easily creating cross-sections of airspace, i.e. 3D airspace to 2D. So in the end I would like to have an image similar to this created: airspace example. Accuracy is not important as these cross-sections would not be used for navigational purposes, but for training/visualization only. Hence coordinate geometry is enough, and no geodesic calculations are needed.
Currently I am storing GeoJSON 2D geometries (all polygons) in my database with additional metadata containing lower and upper altitude limits of each airspace element. Therefore I am effectively only showing 2D data on my OpenLayers and Leaflet.js maps.
I want the user to be able to draw a linestring over the map (see the green linestring in the picture below, stroke-width dramatically increased for demonstration purposes). This I can do with OpenLayers or Leaflet. The outcome should be a 1-dimensional cross-section of the 2D elements intersecting with the user-drawn line, as in this very artistic illustration by me:
Clarification: if the length of the output 1-dimensional cross-section is for example 1L, then the contents of the cross-section in the example should be a set of the following geometries: 1) black line between 0.1L and 0.5L and 2) a red line between 0.7L and 0.825L.
The user interface part is doable, and it would be running on top of OpenLayers or Leaflet. I have also found several algorithms in various languages for determining if two lines intersect and even to find out the intersection point. I can use Raphael.js then to draw the cross-section.
I should be able to do all this in a day or two... But I was wondering if there was an easier path to take? For example, does anyone know of a software library that would enable calculation of such cross-sections that I am trying to achieve? Oh, please don't mention those $10,000 GIS packages :-).
As this will be a web application, I am mostly looking into Javascript, Perl or PHP solutions. GeoJSON Utilities for JavaScript looks quite promising for calculating intersections, but I wonder if there are others?
Posting an answer to my own question for future reference. I was able to come up with a solution for creating the airspace cross-sections. It is based on coordinate geometry.
1) Input data is the start and end coordinates of the cross-section.
2) Loop through all areas and check if the start and/or end coordinates fall inside any of the airspace areas (polygons). PHP code slightly modified from another SO answer to check if a point is inside a polygon:
// Point class, storage of lat/long pairs
class Point {
    public $lat;
    public $long;
    function __construct($lat, $long) {
        $this->lat = $lat;
        $this->long = $long;
    }
}

function pointInPolygon($p, $polygon) {
    $c = 0;
    $p1 = $polygon[0];
    $n = count($polygon);
    for ($i = 1; $i <= $n; $i++) {
        $p2 = $polygon[$i % $n];
        if ($p->long > min($p1->long, $p2->long)
            && $p->long <= max($p1->long, $p2->long)
            && $p->lat <= max($p1->lat, $p2->lat)
            && $p1->long != $p2->long) {
            $xinters = ($p->long - $p1->long) * ($p2->lat - $p1->lat) / ($p2->long - $p1->long) + $p1->lat;
            if ($p1->lat == $p2->lat || $p->lat <= $xinters) {
                $c++;
            }
        }
        $p1 = $p2;
    }
    // if the number of edges we passed through is even, then it's not in the polygon
    return $c % 2 != 0;
}
3) Loop through all the line segments of each and every area (polygon) containing the airspace data. PHP code slightly modified from another SO answer to return the intersection point of two line segments:
function Det2($x1, $x2, $y1, $y2) {
    return ($x1 * $y2 - $y1 * $x2);
}

function lineIntersection($v1Y, $v1X, $v2Y, $v2X, $v3Y, $v3X, $v4Y, $v4X) {
    $tolerance = 0.000001;
    $a = Det2($v1X - $v2X, $v1Y - $v2Y, $v3X - $v4X, $v3Y - $v4Y);
    if (abs($a) < $tolerance) return null; // Lines are parallel
    $d1 = Det2($v1X, $v1Y, $v2X, $v2Y);
    $d2 = Det2($v3X, $v3Y, $v4X, $v4Y);
    $x = Det2($d1, $v1X - $v2X, $d2, $v3X - $v4X) / $a;
    $y = Det2($d1, $v1Y - $v2Y, $d2, $v3Y - $v4Y) / $a;
    if ($x < min($v1X, $v2X) - $tolerance || $x > max($v1X, $v2X) + $tolerance) return null;
    if ($y < min($v1Y, $v2Y) - $tolerance || $y > max($v1Y, $v2Y) + $tolerance) return null;
    if ($x < min($v3X, $v4X) - $tolerance || $x > max($v3X, $v4X) + $tolerance) return null;
    if ($y < min($v3Y, $v4Y) - $tolerance || $y > max($v3Y, $v4Y) + $tolerance) return null;
    return array($x, $y);
}
4) If there are intersecting segments, figure out the distance of the intersection point (the result of the previous step) from the start coordinates of the cross-section. Divide this by the overall length of the cross-section to get the relative location of the airspace area boundary on the cross-section line.
5) Based on results of points 2 and 4, draw SVG polygons. Relative intersection locations are translated to X coordinates of polygons and altitude data (lower and upper limit) becomes the Y coordinates of the polygon.
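As a small illustration of step 4 (a sketch of mine, written in JavaScript since the drawing happens client-side; start, end and intersection are assumed to be {lat, long} objects):
// Sketch: relative location (0 at the start, 1 at the end) of an intersection
// point along the cross-section line, using plain planar geometry as above.
function relativePosition(start, end, intersection) {
  const totalLength = Math.hypot(end.long - start.long, end.lat - start.lat);
  const partLength = Math.hypot(intersection.long - start.long,
                                intersection.lat - start.lat);
  return partLength / totalLength;
}
Multiplying this ratio by the drawn width of the cross-section gives the X coordinate of the airspace boundary in the SVG, while the lower and upper altitude limits give the Y coordinates (step 5).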

ArcGIS Javascript - Zoom to show all points

I am trying to add some functionality that will zoom the map in/out depending on the points that are returned from a query. So for example, say we're zoomed in on the state of Texas. If I execute a query and the service returns back points that are in Texas AND some located in California, I would like the map to then zoom out and then display both California and Texas. I have been looking through the ArcGIS JS API to see how I could implement it but I'm having trouble figuring out what properties and/or methods to use to accomplish this.
The FeatureSet provided to the QueryTask's onComplete callback has the property features that is an array of Graphics.
The javascript api provides the esri.graphicsExtent(graphics) function that can accept that array of Graphics and calculate their extent. Once the extent has been calculated, map.setExtent(extent) can be used to zoom the map to that extent.
It should be noted that the documentation for esri.graphicsExtent(...) specifies that 'If the extent height and width are 0, null is returned.' This case will occur if the returned Graphics array only has a single point in it, so you'll want to check for it.
Here's an example QueryTask onComplete callback that could be used to zoom the map to the extents of points returned by the query:
function onQueryComplete(returnedPointFeatureSet){
  var featureSet = returnedPointFeatureSet || {};
  var features = featureSet.features || [];
  var extent = esri.graphicsExtent(features);
  if(!extent && features.length == 1) {
    // esri.graphicsExtent returns null for a single point, so we'll build the extent by hand by subtracting/adding 1 to create x and y min/max values
    var point = features[0];
    extent = new esri.geometry.Extent(point.x - 1, point.y - 1, point.x + 1, point.y + 1, point.spatialReference);
  }
  if(extent) {
    // assumes the esri map object is stored in the globally-scoped variable 'map'
    map.setExtent(extent);
  }
}
I agree, map.setExtent(extent, true) is the way to go here. Another observation: In case we have only a single point it's worth considering simply using map.centerAndZoom(point, ZOOM_LEVEL) instead of creating an extent. Then, we could just have this:
function onQueryComplete(returnedPointFeatureSet){
var featureSet = returnedPointFeatureSet || {};
var features = featureSet.features || [];
var extent = esri.graphicsExtent(features);
if(!extent && features.length == 1) {
var point = features[0];
map.centerAndZoom(point, 12);
}
else {
map.setExtent(extent, true);
}
}
Not a good idea to create an extent from a point that way. If the units are in degrees you could get a huge extent. Instead, you could do a buffer around the point using the geometryEngine
function onQueryComplete(featureSet){
if (featureSet.features.length) {
var extent = esri.graphicsUtils.graphicsExtent(featureSet.features);
if(!extent && featureSet.features.length == 1 && featureSet.features[0].geometry.type == "point") {
var point = featureSet.features[0];
var extent = esri.geometry.geometryEngine.buffer(point.geometry, 1, "meters").getExtent();
}
// assumes the esri map object is stored in the globally-scoped variable 'map'
map.setExtent(extent)
}
}
