I have an array of coordinates that make up a polyline. I would now like to get a lat/lng point every 5 km from the start point to the end point along that polyline. Is there a function in Leaflet.js for this, or some other way of doing it?
Thanks!
As requested, here's an explanation of how to use Leaflet GeometryUtil (GeoUtil from here on out) to do what you want.
First, let's get the total length of your polyline using GeoUtil:
const lengths = GeoUtil.accumulatedLengths(latlngs);
const totalLength = lengths[lengths.length - 1];
Assuming latlngs is an array of L.LatLng objects, GeoUtil.accumulatedLengths returns an array of cumulative distances along those latlngs, so the last entry is the total length of the line.
Next, we figure out how many points you're going to have along your line based on the interval you want (5km), and the total distance:
const interval = 5000; // 5km
const totalPoints = Math.floor(totalLength / interval);
So we will have totalPoints points along your line. Now we need to find out how far along the line each point is as a ratio:
const ratios = [];
for (let i = 0; i <= totalPoints; i++) {
  const ratio = i / totalPoints;
  ratios.push(ratio);
}
So ratios will be an array of numbers between 0 and 1, with the same length as the number of points we expect to have on the line. If this part confuses you, ask for more explanations in a comment.
Now that we have that array of ratios, we can use it with GeoUtil.interpolateOnLine:
const points = ratios.map((ratio) =>
  GeoUtil.interpolateOnLine(map, latlngs, ratio)
);

points.forEach((point) => {
  L.marker(point.latLng).addTo(map);
});
points is an array of objects, each with a latLng property, at equal intervals along your line, which I believe is what you are looking to achieve.
Working codesandbox
Note: the use of Math.floor is crucial to get an integer when calculating how many points you'll have along the line. This may make it feel like we're "leaving off" the last point, but unless your total distance is an exact multiple of your interval distance, this is required to get the math to work.
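If you want to see the idea without Leaflet, here is a plain-JavaScript sketch of the same interpolation on {x, y} points with Euclidean distances (the function name and shapes are my own; GeoUtil does the equivalent with geodesic distances on L.LatLng objects):

```javascript
// Walk a polyline of {x, y} points and return a point every `interval` units.
// Plain Euclidean distances here; Leaflet's GeometryUtil does the same idea
// with geodesic distances on L.LatLng objects.
function pointsAtInterval(line, interval) {
  const dist = (a, b) => Math.hypot(b.x - a.x, b.y - a.y);
  const result = [{ ...line[0] }]; // always include the start point
  let carried = 0; // distance walked since the last emitted point
  for (let i = 1; i < line.length; i++) {
    let a = line[i - 1];
    const b = line[i];
    let segLen = dist(a, b);
    while (carried + segLen >= interval) {
      const t = (interval - carried) / segLen; // fraction into this segment
      a = { x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t };
      result.push(a);
      segLen = dist(a, b);
      carried = 0;
    }
    carried += segLen;
  }
  return result;
}

// a straight line 10 units long, sampled every 5 units:
const line = [{ x: 0, y: 0 }, { x: 10, y: 0 }];
console.log(pointsAtInterval(line, 5).map((p) => p.x)); // [0, 5, 10]
```

Unlike the ratio approach above, this walks the line segment by segment, so the spacing is exactly the interval rather than totalLength / totalPoints.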
If you want to set markers along the line, you can use the leaflet-distance-marker library.
var line = L.polyline(coords, {
  distanceMarkers: { offset: 5000 }
});
Demo
Otherwise, take a look at its code and copy what you need.
I have the code below, but there's a problem with the current implementation of the function entriesToDel: it will not always produce n distinct coordinates to be turned blank in a puzzle. It should always produce an array of n elements, each a two-element array storing the coordinates of an entry, and every element of the output should hold distinct coordinates. I'm not sure how to do that, as I'm quite new to this.
// a function to randomly select n (row, column) entries of a 2d array
function entriesToDel(n) {
  var array = [];
  for (var i = 0; i < n; i++) {
    var row = Math.round(3 * Math.random());
    var col = Math.round(3 * Math.random());
    array.push([row, col]);
  }
  return array;
}
A simple solution would be to use a set instead of an array and keep adding random numbers until you have enough unique entries.
In case you don't know what a set is: a set works similarly to an array, but it does not allow duplicates (duplicates are ignored). It also does not give elements a fixed position; you can't access an entry of a set with set[i].
const size = 3 // define size of grid in variable

function entriesToDel(n) {
  // create set of positions
  const positions = new Set()
  while (positions.size < n) { // repeat until set has enough entries
    positions.add(Math.floor((size ** 2) * Math.random())) // add random position (duplicates will be ignored by the set)
  }
  // convert set of positions to array of coordinates
  const cords = []
  for (const position of positions) { // iterate through positions
    // convert position to row and column and add to array
    const row = Math.floor(position / size)
    const column = position % size
    cords.push([row, column])
  }
  return cords
}
This solution is nice and simple. One issue is that if you're really unlucky, the random number generator will keep on generating the same number, resulting in a possibly very long run time. If you want to exclude this (very unlikely) possibility, it would be reasonable to use the more complex solution provided by James.
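For completeness, a bounded-time alternative (my own sketch, not the solution referenced above) is to partially shuffle the list of all cell positions with Fisher-Yates and take the first n, which avoids the retry loop entirely:

```javascript
const size = 3; // grid size, as in the answer above

// Return n distinct [row, column] pairs by partially shuffling
// the list of all size*size cell indices (Fisher-Yates).
function entriesToDel(n) {
  const cells = Array.from({ length: size * size }, (_, i) => i);
  for (let i = 0; i < n; i++) {
    // pick a random index from the not-yet-chosen tail and swap it forward
    const j = i + Math.floor(Math.random() * (cells.length - i));
    [cells[i], cells[j]] = [cells[j], cells[i]];
  }
  return cells.slice(0, n).map((p) => [Math.floor(p / size), p % size]);
}

console.log(entriesToDel(4)); // e.g. [[2, 1], [0, 0], [1, 2], [2, 0]]; always 4 distinct cells
```

Unlike the retry loop, this always finishes in a fixed number of steps, regardless of how unlucky the random draws are.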
I am trying to implement a function as follows but really lack the math skills, any help would be greatly appreciated.
The function should take a number of data points x and return an array of size x containing exponentially increasing values from 0 to 100 (for example). Ideally it should also accept a lambda value to modify the curve.
function exponentialCurve(x, max = 100, lambda = 4) {
  // returns an array of size x where each entry represents a point on an exponential curve between 0 and max
}
This is for applying exponential decay to audio PCM data.
Again anything to help point me in the right direction would be really great, thanks for reading.
Is this what you're looking for (where 1 <= lambda <=10)?
function exponentialCurve(x, max = 100, lambda = 4) {
  // returns an array of size x where each entry represents a point on an exponential curve between 0 and max
  const base = Math.log(x) / Math.log(lambda);
  const points = Array(x).fill(max);
  return points.map((point, n) => point / Math.pow(base, n));
}
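As a quick sanity check (my own usage example, repeating the answer's function so it is self-contained): the curve starts at max and strictly decays toward 0 whenever x > lambda (so that base > 1), which fits the exponential-decay use case:

```javascript
function exponentialCurve(x, max = 100, lambda = 4) {
  const base = Math.log(x) / Math.log(lambda);
  const points = Array(x).fill(max);
  return points.map((point, n) => point / Math.pow(base, n));
}

const curve = exponentialCurve(8);
console.log(curve[0]); // 100: the curve always starts at max
console.log(curve.every((v, i) => i === 0 || v < curve[i - 1])); // true: strictly decreasing
```

Note that if x <= lambda, base <= 1 and the values grow instead of decaying, so the lambda parameter only behaves as a decay control when x > lambda.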
GOAL
Finding a good way to check if 2 images are similar by comparing their hash profiles. The hash is a simple array containing 0 and 1 values.
INTRO
I have 2 images. They are the same image, but with some small differences: one has a different brightness, rotation and framing.
What I want to do is create a JavaScript method to compare the 2 images and calculate a percentage value that tells how similar they are.
WHAT I'VE DONE
After uploading the 2 images into an HTML5 canvas to get their image data, I've used the pHash algorithm (www.phash.org) to obtain their hash representation.
The hash is an array containing 0 and 1 values that recreates the image in a "simplified" form.
I've also created a JS script that generates an HTML table with black cells where the array contains a 1. The result is the following screenshot (the image is a Van Gogh picture):
Screenshot
Now, what I need to do is compare the 2 arrays to obtain a percentage value for "how much" they are similar.
Most of the hash JavaScript algorithms I've found by googling already include a compare algorithm: the Hamming distance. It's very simple and fast, but not very precise. In fact, the Hamming distance says that the 2 images in my screenshot are 67% similar.
THE QUESTION
Starting with 2 simple arrays of the same length, filled with 0 and 1 values: what would be a good algorithm to determine similarity more precisely?
NOTES
- Pure JavaScript development, no third-party plugins or frameworks.
- No need for a complex algorithm that finds the right similarity when the 2 images are the same but heavily transformed (strong rotation, totally different colors, etc.).
Thanks!
PHASH CODE
// size is the width/height of the (square) image in pixels (for example 128)
var pixels = [];
for (var i = 0; i < imgData.data.length; i += 4) {
  var j = i / 4;
  var y = Math.floor(j / size);
  var x = j - (y * size);
  var pixelPos = x + (y * size);
  var r = imgData.data[i];
  var g = imgData.data[i + 1];
  var b = imgData.data[i + 2];
  // greyscale via the BT.601 luma weights
  var gs = Math.floor((r * 0.299) + (g * 0.587) + (b * 0.114));
  pixels[pixelPos] = gs;
}
var sum = pixels.reduce(function (a, b) { return a + b; }, 0);
var avg = Math.floor(sum / pixels.length);
var hash = [];
pixels.forEach(function (px, i) {
  hash[i] = (px > avg) ? 1 : 0;
});
return hash;
HAMMING DISTANCE CODE
// hash1 and hash2 are the arrays of the "coded" images.
var similarity = hash1.length;
hash1.forEach(function (val, key) {
  if (hash1[key] != hash2[key]) {
    similarity--;
  }
});
var percentage = (similarity / hash1.length * 100).toFixed(2);
NOTE: the original snippets used a non-standard array.forEach(arr, fn) helper; the code above uses the standard Array.prototype.forEach instead (equivalent to for (var i = 0; i < arr.length; i++)).
I'm using blockhash, and it seems pretty good so far; the only false positives I get are when half of the pictures are the same background color, which is to be expected =/
http://blockhash.io/
BlockHash may be slower than yours, but it should be more accurate.
What you do is just calculate the greyscale of EACH pixel and compare it to the average to create your hash.
What BlockHash does is split the picture into small rectangles of equal size, average the sum of the RGB values of the pixels inside each one, and compare them to 4 horizontal medians.
So it is normal that it takes longer, but it is still pretty efficient and accurate.
I'm doing it with pictures of a good resolution, at minimum 1000x800, and use 16 bits. This gives a 64-character hexadecimal hash. When using the Hamming distance provided by the same library, I see good results with a similarity threshold of 10.
Your idea of using greyscale isn't bad at all. But you should average out portions of the image instead of comparing each pixel. That way you can compare a thumbnail version to its original and get pretty much the same pHash!
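To illustrate the block-averaging idea (a simplified sketch of my own, not blockhash's actual algorithm, which thresholds against medians): split a greyscale pixel array into equal square blocks, average each block, and threshold the block means:

```javascript
// Sketch of a block-average hash over a square greyscale image.
// `pixels` is a flat array of greyscale values, `size` the image width/height,
// `blocks` the number of blocks per side (blockhash uses 16).
function blockAverageHash(pixels, size, blocks) {
  const blockSize = size / blocks; // assumes size is divisible by blocks
  const means = [];
  for (let by = 0; by < blocks; by++) {
    for (let bx = 0; bx < blocks; bx++) {
      let sum = 0;
      for (let y = 0; y < blockSize; y++) {
        for (let x = 0; x < blockSize; x++) {
          sum += pixels[(by * blockSize + y) * size + (bx * blockSize + x)];
        }
      }
      means.push(sum / (blockSize * blockSize));
    }
  }
  // threshold each block mean against the overall mean of the block means
  const avg = means.reduce((a, b) => a + b, 0) / means.length;
  return means.map((m) => (m > avg ? 1 : 0));
}

// left half black (0), right half white (255), 4x4 image, 2x2 blocks:
const px = [];
for (let y = 0; y < 4; y++) for (let x = 0; x < 4; x++) px.push(x < 2 ? 0 : 255);
console.log(blockAverageHash(px, 4, 2)); // [0, 1, 0, 1]
```

Because each bit now summarizes a region instead of a single pixel, the hash is much more robust to resizing and small shifts, which is why a thumbnail and its original hash nearly the same.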
I don't know if this will do the trick, but you can simply compare the 0-and-1 similarity between the arrays:
const arr1 = [1,1,1,1,1,1,1,1,1,1],
      arr2 = [0,0,0,0,0,0,0,0,0,0],
      arr3 = [0,1,0,1,0,1,0,1,0,1],
      arr4 = [1,1,1,0,1,1,1,0,1,1]

const howSimilar = (a1, a2) => {
  let similarity = 0
  a1.forEach((elem, index) => {
    if (a2[index] == elem) similarity++
  })
  let percentage = parseInt(similarity / a1.length * 100) + "%"
  console.log(percentage)
}

howSimilar(arr1, arr2) // 0%
howSimilar(arr1, arr3) // 50%
howSimilar(arr1, arr4) // 80%
I have a polygon; it has five points, like this:
Then I add another point to the polygon (the red one):
What is an algorithm to determine whether two polygons are the same (not just the same angles/lengths; the coordinates must be the same too)?
As your "same" means the same shape, size, orientation and position,
it is really simple.
You have 2 polygons defined as sets of points
A = { a0, a1, ..., a(n-1) } and B = { b0, b1, ..., b(m-1) }
For starters I assume you have no oversampling (a line is always 2 points, not more).
Compare m, n
If they are not equal, the shapes are different, so stop.
Otherwise m == n, so I will use just n from now on.
Find (a(i) == b(j)) where i, j are in <0, n)
This is needed in case the polygons do not start from the same point;
otherwise i = 0, j = 0.
For complicated (self-intersecting) shapes you need to find unique points
(not duplicates, or the same count of duplicates with the same next point);
otherwise just set i = 0 and find j with a single O(n) loop.
If no common point is found, stop (they are not the same polygons).
Compare the points:
for (k = 0; k < n; k++) {
  if (a(i) != b(j)) return false; // not the same
  i++; if (i >= n) i = 0;
  j++; if (j >= n) j = 0;
}
return true; // they are the same
The point comparison can be done like this: if (|a(i) - b(j)| >= max_difference_threshold)
There is no need to compare sqrt-ed distances; the threshold can be squared instead.
I usually use values like 1e-6 or 1e-10.
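The steps above can be sketched in JavaScript like this (my own translation of the pseudocode, using the squared-threshold comparison; for simplicity it tries every starting offset, O(n^2) worst case, instead of locating a unique start point first as the answer suggests):

```javascript
// Compare two polygons (arrays of {x, y}) for identical shape, size,
// orientation and position, allowing the vertex lists to start at
// different points. eps is the allowed per-point distance.
function samePolygon(A, B, eps = 1e-6) {
  const n = A.length;
  if (B.length !== n) return false; // different vertex counts
  const close = (p, q) =>
    (p.x - q.x) ** 2 + (p.y - q.y) ** 2 < eps * eps; // squared threshold, no sqrt
  // try each possible starting offset of B relative to A
  for (let offset = 0; offset < n; offset++) {
    let match = true;
    for (let k = 0; k < n; k++) {
      if (!close(A[k], B[(offset + k) % n])) { match = false; break; }
    }
    if (match) return true;
  }
  return false;
}

const square = [{x:0,y:0},{x:1,y:0},{x:1,y:1},{x:0,y:1}];
const rotated = [{x:1,y:1},{x:0,y:1},{x:0,y:0},{x:1,y:0}]; // same ring, shifted start
console.log(samePolygon(square, rotated)); // true
```

Like the pseudocode, this treats polygons traversed in opposite directions as different; if you also want to match reversed rings, run the check a second time against B reversed.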
For an oversampled polygon you need to resample the points of both A and B first:
take 3 neighboring points p(i-1), p(i), p(i+1)
compute dx, dy between the 2 pairs
d1 = p(i) - p(i-1); dx1 = d1.x; dy1 = d1.y;
d2 = p(i+1) - p(i); dx2 = d2.x; dy2 = d2.y;
if (dx1*dy2 == dx2*dy1) then delete p(i) from the set // the cross product is zero, so the 3 points are collinear
you should handle the zero cases (any dx, dy is zero) separately prior to this
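A sketch of that collinear-point removal (my own helper name; note a zero cross product also matches spike points, so per the answer you may want to handle degenerate cases separately):

```javascript
// Remove vertices that lie on the straight line between their neighbours,
// so an oversampled polygon ring is reduced to its corner points.
function removeCollinear(points, eps = 1e-10) {
  const out = [];
  const n = points.length;
  for (let i = 0; i < n; i++) {
    const prev = points[(i - 1 + n) % n];
    const cur = points[i];
    const next = points[(i + 1) % n];
    // cross product of the two edge vectors: zero means collinear
    const cross =
      (cur.x - prev.x) * (next.y - cur.y) -
      (next.x - cur.x) * (cur.y - prev.y);
    if (Math.abs(cross) > eps) out.push(cur); // keep corners only
  }
  return out;
}

// a square with an extra point in the middle of its bottom edge:
const ring = [{x:0,y:0},{x:1,y:0},{x:2,y:0},{x:2,y:2},{x:0,y:2}];
console.log(removeCollinear(ring).length); // 4
```

Running this on both A and B before the comparison makes the check insensitive to how densely each polygon's edges were sampled.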
// import the turf library
var turf = require('@turf/turf');

// create polygons using turf, or you can directly use GeoJSON
var polygon1 = turf.polygon(firstPolygonCoordinates);
var polygon2 = turf.polygon(secondPolygonCoordinates);

// compare the two polygons using turf's booleanEqual method
if (turf.booleanEqual(polygon1, polygon2)) {
  // add your logic here
}
Depending on your point of view, two rectangles may be the same independently of their position.
If the position is a requirement, you will need to compare the vertex coordinates.
If the position is NOT a requirement, you should compare the rectangle sizes. In that case you need to obtain the distances between the vertices; my advice is to compare the rectangles by their base and height.
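A minimal sketch of the size-only comparison (the base/height field names are my own): sort the two side lengths so that which one you call the base doesn't matter:

```javascript
// Compare two rectangles by size only, ignoring position and orientation:
// sort the side lengths so base/height labelling doesn't matter.
function sameSizeRect(r1, r2, eps = 1e-9) {
  const sides = (r) => [r.base, r.height].sort((a, b) => a - b);
  const [a1, b1] = sides(r1);
  const [a2, b2] = sides(r2);
  return Math.abs(a1 - a2) < eps && Math.abs(b1 - b2) < eps;
}

console.log(sameSizeRect({ base: 2, height: 3 }, { base: 3, height: 2 })); // true
```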
I'm using a library (JavaScript-Voronoi) which produces an array of line segments that represent a closed polygon. These segments appear unordered, both the order in which the segments appear as well as the ordering of the points for each end of the segment.
(Edit: As noted in a comment below, I was wrong: the segments from the library are well-ordered. However, the question stands as written: let's assume that the segments do not have any ordering, as this makes it more generally useful.)
For example:
var p1 = {x:13.6,y:13.1}, p2 = {x:37.2,y:35.8}, p3 = {x:99.9,y:14.6},
    p4 = {x:99.9,y:45.5}, p5 = {x:33.7,y:66.7};
var segments = [
  { va:p1, vb:p2 },
  { va:p3, vb:p4 },
  { va:p5, vb:p4 },
  { va:p3, vb:p2 },
  { va:p1, vb:p5 } ];
Notice how the first segment links to the last (they share a common point), and to the next-to-last. It is guaranteed that each endpoint of a segment is shared with exactly one other segment.
I would like to convert this into a list of points to generate a proper SVG polygon:
console.log( orderedPoints(segments) );
// [
// {"x":33.7,"y":66.7},
// {"x":13.6,"y":13.1},
// {"x":37.2,"y":35.8},
// {"x":99.9,"y":14.6},
// {"x":99.9,"y":45.5}
// ]
It doesn't matter whether the points are in clockwise or counter-clockwise order.
The following code is what I've come up with, but in the worst-case scenario it will take n^2+n point comparisons. Is there a more efficient algorithm for joining all these together?
function orderedPoints(segs) {
  segs = segs.concat(); // make a mutable copy
  var seg = segs.pop(), pts = [seg.va], link = seg.vb;
  for (var ct = segs.length; ct--;) {
    for (var i = segs.length; i--;) {
      if (segs[i].va == link) {
        seg = segs.splice(i, 1)[0];
        pts.push(seg.va);
        link = seg.vb;
        break;
      } else if (segs[i].vb == link) {
        seg = segs.splice(i, 1)[0];
        pts.push(seg.vb);
        link = seg.va;
        break;
      }
    }
  }
  return pts;
}
If your polygon is convex, you can pick the middle point of each line segment, then use a convex hull algorithm to order those middle points; since you know which middle belongs to which segment, you can recover an ordering of the original array.
If you just want to find a convex hull, use a convex hull algorithm directly: it's O(n log n), which is fast enough. You can also find a Quickhull implementation in JavaScript here; Quickhull is O(n log n) on average (O(n^2) in the worst case), but fast in practice thanks to a smaller constant factor.
But in the case of a general algorithm:
Set one end of each segment as first and the other end as second (arbitrarily).
Sort your segments by their first x and put them in array First; then, within each group of equal first x, sort by first y, and store two extra ints per group recording the start and end positions of the items sharing that first x.
Then sort your segments by their second x value in the same way to make array Second.
Both of the above steps are O(n log n).
Now pick the first segment in array First and search for its second x value in both arrays First and Second; when you find matching values, search for the y value in the related subarray (you have the start and end positions of items with the same x). You know there is only one other segment with this endpoint, so finding the next segment takes O(log n); since there are n-1 next segments in all, the walk is O(n log n) (including preprocessing), which is far faster than O(n^2).
It should be possible to turn the points into a (double, unordered?) linked list in linear time:
for (var i=0; i<segments.length; i++) {
var a = segments[i].va,
b = segments[i].vb;
// nexts being the two adjacent points (in unknown order)
if (a.nexts) a.nexts.push(b); else a.nexts = [b];
if (b.nexts) b.nexts.push(a); else b.nexts = [a];
}
Now you can iterate it to build the array:
var prev = segments[0].va,
start = segments[0].vb, // start somewhere, in some direction
points = [],
cur = start;
do {
points.push(cur);
var nexts = cur.nexts,
next = nexts[0] == prev ? nexts[1] : nexts[0];
delete cur.nexts; // un-modify the object
prev = cur;
cur = next;
} while (cur && cur != start)
return points;
If you do not want to modify the objects, an EcmaScript 6 Map (with object keys) would come in handy. As a workaround, you could use a JSON serialisation of your point coordinates as keys of a normal object; however, you are then limited to polygons that do not contain a coordinate twice. Or just use the unique voronoiId property that your library adds to the vertices to identify them.
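A sketch of that Map-based variant (my own code, same walk as the answer above; it assumes each shared endpoint is the same object reference in both segments, as in the question's example):

```javascript
// Build an adjacency Map (point object -> its two neighbours), then walk it,
// without storing any extra properties on the point objects themselves.
function orderedPoints(segments) {
  const nexts = new Map();
  for (const { va, vb } of segments) {
    if (!nexts.has(va)) nexts.set(va, []);
    if (!nexts.has(vb)) nexts.set(vb, []);
    nexts.get(va).push(vb);
    nexts.get(vb).push(va);
  }
  const points = [];
  let prev = segments[0].va;
  const start = segments[0].vb; // start somewhere, in some direction
  let cur = start;
  do {
    points.push(cur);
    const [a, b] = nexts.get(cur);
    const next = a === prev ? b : a; // the neighbour we didn't come from
    prev = cur;
    cur = next;
  } while (cur !== start);
  return points;
}
```

Both the Map construction and the walk are linear, so the whole thing stays O(n), and the input objects are left untouched.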
For a convex polygon, you don't even need to know the side segments; you just need a bunch of vertices. The procedure to order the vertices is pretty simple.
1. Average all the vertices together to get a point inside the polygon. Note that this doesn't even need to be the centroid; it just needs to be a point inside the polygon. Call this point C.
2. For each vertex V[i], compute the angle that the line segment from V[i] to C forms with the line segment from V[i] to V[i] + (1, 0). Call this a[i].
3. Sort the angles, using the vertices as satellite data.
The sorted vertices are in order around the polygon; there are some redundancies that you can remove. Step 1 runs in linear time, step 2 runs in linear time, and step 3 runs in O(n log n).
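The three steps can be sketched like this (my own implementation; it uses Math.atan2 for the angle, which orders the vertices the same way around the polygon):

```javascript
// Order the vertices of a convex polygon by angle around an interior point.
function orderConvexVertices(vertices) {
  // step 1: any interior point will do; the average of the vertices works
  const cx = vertices.reduce((s, v) => s + v.x, 0) / vertices.length;
  const cy = vertices.reduce((s, v) => s + v.y, 0) / vertices.length;
  // steps 2 + 3: sort by the angle of each vertex around that point
  return [...vertices].sort(
    (a, b) => Math.atan2(a.y - cy, a.x - cx) - Math.atan2(b.y - cy, b.x - cx)
  );
}

// a unit square given in scrambled order:
const scrambled = [{x:1,y:1},{x:0,y:0},{x:0,y:1},{x:1,y:0}];
console.log(orderConvexVertices(scrambled).map((v) => `${v.x},${v.y}`));
// counter-clockwise ring: 0,0  1,0  1,1  0,1
```

This only works for convex polygons: for a non-convex ring, sorting by angle around an interior point can swap vertices out of their true boundary order.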