In the Leaflet example (for a non-geographic image), they set "bounds". I am trying to understand how they computed these values:
var bounds = [[-26.5,-25], [1021.5,1023]];
The origin is at the bottom-left, with y increasing upwards and x to the right. How did negative numbers turn up here? Also, after experimenting, I see that the actual pixel coordinates change if I specify different coordinates for the bounds. I have a custom PNG map which I would like to use, but I am unable to proceed because of this.
Oh, you mean this image:
If you open the full file (available at https://github.com/Leaflet/Leaflet/blob/v1.4.0/docs/examples/crs-simple/uqm_map_full.png ) with an image editor, you'll see that it measures 2315x2315 pixels. Now, the pixel that represents the (0,0) coordinate is not at a corner of the image, but rather 56 pixels away from the lower-left corner of the image:
Similarly, the (1000, 1000) coordinate is about 48 pixels from the top-right corner of the image:
Therefore, if we measure pixel coordinates of the grid corners:
Game coordinate (0, 0) → Pixel coordinate (59, 56)
Game coordinate (1000, 1000) → Pixel coordinate (2264, 2267)
The problem here is finding the bounds (measured in game coordinates) of the image. Or, in other words:
Pixel coordinate (0, 0) → Game coordinate (?, ?)
Pixel coordinate (2315, 2315) → Game coordinate (?, ?)
We know that the pixel-to-game-coordinate ratio is constant, we know the image size and the distance to the coordinates grid, so we can infer stuff:
1000 horizontal game units = image width - left margin - right margin
or
1000 horizontal game units = 2315px - 59px - 51px = 2205px
therefore the pixel/game unit ratio is
2205px / 1000 game units = 2.205 px/unit
therefore the left margin is...
~59px / (2.205 px/unit) ≈ 26.76 game units
...therefore the left edge of the image is at ~ -26.76 game units. Idem for the right margin...
~51px / (2.205 px/unit) ≈ 23.13 game units
...therefore the right edge of the image is at ~1023.13 game units
Repeating that for the top and bottom margins we can fill up all the numbers:
Pixel coordinate (0, 0) → Game coordinate (-26.76, -25.33)
Pixel coordinate (2315, 2315) → Game coordinate (1023.13, 1021.71)
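For your own map you can wrap the same arithmetic in a small helper. This is just a sketch of the calculation above; the function and parameter names are mine. px0 and px1 are the measured pixel positions (counted from the lower-left corner of the image) of two known game coordinates, here (0, 0) and (units, units):

// px0/px1: pixel positions of game coordinates (0, 0) and (units, units)
// size: image size in pixels
function imageEdges(px0, px1, size, units) {
  var ratioX = (px1.x - px0.x) / units;   // px per game unit, horizontally
  var ratioY = (px1.y - px0.y) / units;   // px per game unit, vertically
  return {
    left:   -px0.x / ratioX,
    right:  units + (size.w - px1.x) / ratioX,
    bottom: -px0.y / ratioY,
    top:    units + (size.h - px1.y) / ratioY
  };
}
// imageEdges({x: 59, y: 56}, {x: 2264, y: 2267}, {w: 2315, h: 2315}, 1000)
// → { left: ≈-26.76, right: ≈1023.13, bottom: ≈-25.33, top: ≈1021.71 }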
Why don't these numbers match the ones in the example exactly? Because I might have used a different pixel for measurement when I wrote that Leaflet tutorial. Still, the error is negligible.
Let me highlight a sentence from that tutorial:
One common mistake when using CRS.Simple is assuming that the map units equal image pixels. In this case, the map covers 1000x1000 units, but the image is 2315x2315 pixels big. Different cases will call for one pixel = one map unit, or 64 pixels = one map unit, or anything. Think in map units in a grid, and then add your layers (L.ImageOverlays, L.Markers and so on) accordingly.
If you have your own game map (or anything else), you should ask yourself: Where is the (0,0) coordinate? What are the coordinates of the image edges in the units I'm gonna use?
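Once you have answered those questions, wiring the image up is the easy part. A minimal sketch (the file name, the minZoom and the bounds values are placeholders for your own map):

var map = L.map('map', { crs: L.CRS.Simple, minZoom: -2 });
// bounds are [[y, x], [y, x]] in map units: one corner, then the opposite corner
var bounds = [[-26.5, -25], [1021.5, 1023]];
L.imageOverlay('my_map.png', bounds).addTo(map);
map.fitBounds(bounds);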
I'm currently working on a mini map for a game, which keeps track of different items of importance on and off the screen. When I first created the mini map, through a secondary camera rendered onto a texture and displayed on screen in a miniature display, it was rectangular. I was able to ensure that when an item of importance left the view of the map, an arrow pointing to the target showed up and remained on the edge of the map. It was basically clamping the x & y positions of the arrow to half the camera view's width and height (with some suitable margin space).
Anyway. Now I am trying to make the mini map circular, and while I have the proper render mask on to guarantee the shape of the mini map, I am having difficulty clamping the arrows to the shape of the new mini-map. In the rectangular mini map, the arrows stayed in the corners while clamped, but obviously circles don't have corners.
I am thinking that clamping the arrow's x & y positions has to do with the radius of the circle (half of the height of the screen/minimap), but because I'm a little weak on the math side, I am kindly requesting some help. How would I clamp the arrows to the edge of the new circle shape?
The code I have now is as follows:
let {width: canvasWidth, height: canvasHeight} = cc.Canvas.instance.node, // 960, 640
targetScreenPoint = cc.Camera.main.getWorldToScreenPoint(this.targetNode.position)
// other code for rotation of arrow, etc...
// FIXME: clamp to the edge of the minimap mask, which is circular
// This is the old clamping code for a rectangle shape.
let arrowPoint = targetScreenPoint;
arrowPoint.x = utils.clamp(arrowPoint.x, (-canvasWidth / 2) + this.arrowMargin,
(canvasWidth / 2) - this.arrowMargin);
arrowPoint.y = utils.clamp(arrowPoint.y, (-canvasHeight / 2) + this.arrowMargin,
(canvasHeight /2) - this.arrowMargin);
this.node.position = cc.v2(arrowPoint.x, arrowPoint.y);
I should probably also note that all the mini-map symbols and arrows are technically on screen, but are only displayed by the secondary camera through a culling mask... you know, just in case it helps.
Just for anyone else looking to do the same: I basically normalized the direction from the player to the target node that the arrow points at, and multiplied it by the radius of the image mask (minus an appropriate margin).
Since the player node and the centre of the mask are at the origin, I just took the difference from the player. The (640/2) is the mask radius (640 px is its diameter), which of course shouldn't be hardcoded, but meh for now. Thanks to those who commented and got me thinking in the right direction.
// unit vector from the player (the mask centre) towards the target
let direction = this.targetNode.position.sub(this.playerNode.position).normalize();
// push the arrow out to the mask radius minus the margin (640 = mask diameter, hardcoded for now)
let arrowPos = direction.mul((640 / 2) - this.arrowMargin);
this.node.position = arrowPos;
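If you also want the arrow to disappear while the target is still inside the minimap, a small extension of the same idea could look like this. The maskRadius name is mine (a stand-in for half the mask node's height), and node.active assumes Cocos Creator:

// distance and direction from the player (the mask centre) to the target
let offset = this.targetNode.position.sub(this.playerNode.position);
let maxDistance = maskRadius - this.arrowMargin;   // maskRadius ≈ 640 / 2 here
if (offset.mag() > maxDistance) {
    // target is off the minimap: pin the arrow to the circular edge
    this.node.position = offset.normalize().mul(maxDistance);
    this.node.active = true;
} else {
    // target is visible on the minimap: hide the edge arrow
    this.node.active = false;
}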
I have played around with d3.js (v5) maps.
I'm trying to generate this map (the screenshot was taken from a random website).
For my particular case there is no need to present Antarctica.
I have read the documentation here: https://github.com/d3/d3-geo#projections,
I followed the instructions and used geoMercator, but got a flat map which gets cut off in the north for some reason.
What is the correct approach for getting the first map's layout?
Any suggestions?
The projection you are looking at is a Mercator projection.
With d3.geoMercator(), the scale value is derived from the circumference of the cylinder that forms the projection surface. The scale value is the number of pixels per radian. The default value anticipates stretching the 360 degrees of the cylinder over 960 pixels: 960/Math.PI/2.
For vertical distances there is no such fixed factor: as one moves to extreme latitudes, the projected distance between points of equal angular separation is increasingly exaggerated, so that the poles end up at ± infinity on the y axis. Because of this, Mercator maps, especially web Mercator maps, are often truncated at roughly ±85 degrees. With an extent from [-180,85] to [180,-85], a Mercator map is square.
This limit is incorporated into d3-geoMercator, which "Defines a default projection.clipExtent such that the world is projected to a square, clipped to approximately ±85° latitude. (docs)"
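You can verify that cutoff yourself: the Mercator y value for latitude φ is log(tan(π/4 + φ/2)) (working in radians), and it reaches π, i.e. half the height of the square world, at roughly 85.05°:

var lat = 85.0511 * Math.PI / 180;                  // ≈ 85.05° in radians
var y = Math.log(Math.tan(Math.PI / 4 + lat / 2));  // Mercator y
console.log(y, Math.PI);                            // ≈ 3.1416 ≈ π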
This means that if we want to show the full extent of a d3-geoMercator, across 960 x 960 pixels, we can use:
d3.geoMercator()
.scale(960/Math.PI/2) // 960 pixels over 2 π radians
.translate([480,480]) // the center of the SVG/canvas
Which gives us:
The default center of d3-geoMercator is [0°,0°], so if we want [0°,0°] to be in the middle of the SVG/canvas, we translate the center so that it is in the middle, with a translate of [width/2,height/2]
Now that we are showing the whole world, we can refine to show only the portion we want. The simplest method might just be lopping off pixels from the bottom of the svg/canvas. Using the above code with a canvas/svg height of 700 pixels (and keeping 960 pixels across, using the same scale and translate) I get:
I did not remove Antarctica from this image - it just happens that it is cut off without having to filter it out (this is not necessarily ideal practice: it is still drawn).
So, an SVG/Canvas with width 960, height 700, with a projection scale of 960/Math.PI/2 and a translate of [480,480] appears to be ok. These values will scale together for different view port sizes.
With maps, there is often a lot of eyeballing to get the visual effect desired, tweaking projection.translate() or projection.center() can help shift the map to the desired location. But we can do this computationally. I'll speak to one method here, using projection.fitSize() (though this won't solve the required aspect ratio without extra steps).
projection.fitSize([width, height], geojson) takes an array specifying the dimensions of the SVG/canvas and a geojson object, and tweaks the projection's scale and translate values so that the geojson feature is contained in the SVG/canvas. The geojson feature could be a bounding box of the part of the world you want to show, so you could use:
projection.fitSize([width,height], {
type: "Polygon",
coordinates: [[
[-179.999,84] ,
[-179.999,-57] ,
[179.999,-57] ,
[179.999,84],
[-179.999,84]
]]
})
Where ~84 degrees north is the north end of Greenland and ~56 degrees south is roughly the tip of South America. This will ensure that the entire portion of the world you want to see is visible. However, as noted above, this doesn't consider aspect, so if you constrain the above extent to square dimensions, you'll still be showing the full extent of the Mercator.
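Putting it together, a minimal setup could look like the following sketch. The world variable (a loaded GeoJSON FeatureCollection) and the existing svg element are assumptions on my part, not part of the question's code:

var width = 960, height = 700;

var projection = d3.geoMercator()
    .fitSize([width, height], {
      type: "Polygon",
      coordinates: [[[-179.999, 84], [-179.999, -57], [179.999, -57], [179.999, 84], [-179.999, 84]]]
    });

var path = d3.geoPath(projection);

d3.select("svg")
    .attr("width", width)
    .attr("height", height)
  .selectAll("path")
  .data(world.features)   // world: your GeoJSON FeatureCollection
  .enter().append("path")
  .attr("d", path);

As noted above, fitSize alone doesn't force a particular aspect ratio; it just guarantees the chosen extent fits inside the 960x700 viewport.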
I use getImageData and putImageData to draw on canvas from a buffer canvas. I use these methods because I have a large number of particles and these proved to provide the best performance.
Now I'd like to add rotation of particles but I'm having problems with that.
Here is a jsfiddle which uses a transformation matrix for rotation. As you can see in the picture (or fiddle), there are holes in the resulting image, which I kind of expected from using this matrix.
nx = ~~ (xx * Math.cos(angle) + yy * Math.sin(angle) + cx);
ny = ~~ (xx * Math.sin(angle) - yy * Math.cos(angle) + cy);
But I don't know how to do this better, especially since I'm looking for a performance-efficient solution.
jsfiddle demo
Image - square after rotation (square is used as a simple body):
Currently my fallback is a procedurally generated sprite animation, prepared in advance with the standard canvas state operations: save -> translate -> rotate -> restore.
Thank you very much for any directions you can give me.
The problem is that you are trying to map a single pixel to a single pixel. When you rotate an image, each pixel in the original can influence any of the surrounding pixels in the new image. You are effectively mapping the top-left corner of each pixel to its location in the new image, but you need to map the center of each pixel to its location in the new image and then check the overlap of this rotated pixel with that location and the 8 surrounding pixels in the new image.
Here you can see the effect. The yellow dots are the centers of the pixels, which find the "home" location for each pixel (i.e. where the majority of the influence will be placed). You then need to figure out what percentage of that cell (the underlying blue/white grid) is covered by the original pixel (the black box surrounding the yellow dot). Once you have figured out the home location's influence, you repeat the process for the 8 surrounding pixels with respect to the current pixel in the original image. In your current code you are using the top-left corner of each pixel to find the home pixel in the new image; you should use the center of the pixel.
Since multiple iterations might affect the same pixel, you'll need to calculate the transformation in a buffer before drawing it to the final image. For pixels in the transformation that are not fully covered by pixels in the original image, figure out the percentage of the pixel that is covered and use that to influence the alpha channel. You'll have to take care when applying the pixels to the final image that you account for the alpha portion and blend with what's already there.
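A simpler route that also avoids the holes (not the coverage-based approach described above, but a common alternative) is to iterate over the destination pixels and rotate each one back into the source image, so every destination pixel receives exactly one sample. A rough sketch, assuming 32-bit pixel access via Uint32Array, same-sized ImageData buffers, and rotation about the pivot (cx, cy); nearest-neighbour sampling stays jagged compared to proper coverage weighting, but it is cheap and leaves no gaps:

// src, dst: ImageData of equal size; angle: rotation in radians; cx, cy: pivot
function rotateNearest(src, dst, angle, cx, cy) {
    var s = new Uint32Array(src.data.buffer),
        d = new Uint32Array(dst.data.buffer),
        cos = Math.cos(-angle), sin = Math.sin(-angle), // inverse rotation
        w = src.width, h = src.height;
    for (var y = 0; y < h; y++) {
        for (var x = 0; x < w; x++) {
            // rotate the destination pixel back into source space
            var dx = x - cx, dy = y - cy,
                sx = ~~(dx * cos - dy * sin + cx),   // ~~ truncates like the original code
                sy = ~~(dx * sin + dy * cos + cy);
            // copy the nearest source pixel if it falls inside the sprite, else transparent
            d[y * w + x] = (sx >= 0 && sx < w && sy >= 0 && sy < h) ? s[sy * w + sx] : 0;
        }
    }
}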
I have a bit of code (involving "canvas"), which generates a graph on a four-quadrant cartesian plane. (Please see the JsFiddle link in the comment below.)
I want to create a bit of code that adds a point to a specific position on the plane. However, I want the point to get plotted based on the intervals on the x- & y-axes rather than pixels. In other words, I don't want to have to guess and check where each coordinate is on the graph and then adjust accordingly. If I move the graph 200 pixels down on the page, I want the point to likewise move 200 pixels down.
Coding novice, here (if you couldn't tell already). It took me forever to get to this point, so I would greatly appreciate any help anyone is willing to offer.
Thanks!
The 2D canvas context applies a transformation to all rendering.
You can set the matrix with ctx.setTransform, and you can multiply the existing transformation with ctx.transform, ctx.scale, ctx.rotate and ctx.translate.
Personally I am a big fan of ctx.setTransform(a,b,c,d,e,f); where
a,b is the unit length and direction of the X axis in pixels
c,d is the unit length and direction of the Y axis in pixels
e,f is the location of the origin relative to the top left, in pixels.
Basically: two vectors defining the size (scale) and direction of the x and y axes in pixels, and a coordinate defining where on the canvas the origin is. The origin coordinate is not affected by the scale or rotation.
So if you want the X axis to point down and the scale to be two, then
a = 0, b = 2; the Y axis, 90deg clockwise from the X axis, is then c = -2, d = 0.
If you want the axes to keep their default directions but scale by scale = 2, then
a = scale, b = 0, c = 0, d = scale. To put the origin at the center of the canvas, e = canvas.width/2, f = canvas.height/2.
Now if you draw an arc with ctx.arc(0,0,100,0,Math.PI*2), you will see a circle in the center of the canvas with a radius of 100 * scale pixels.
Hope that makes sense....
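For the four-quadrant graph in the question, that boils down to something like the sketch below. The canvas id and the 20-pixels-per-unit scale are just assumptions; the point is that you plot in graph units and let the transform do the pixel math:

var canvas = document.getElementById('graph');      // assumed canvas id
var ctx = canvas.getContext('2d');
var scale = 20;                                     // pixels per graph unit

// x axis right, y axis up, origin in the middle of the canvas
ctx.setTransform(scale, 0, 0, -scale, canvas.width / 2, canvas.height / 2);

// plot the point (3, 2) in graph units, not pixels
ctx.beginPath();
ctx.arc(3, 2, 0.2, 0, Math.PI * 2);                 // radius of 0.2 graph units
ctx.fill();

ctx.setTransform(1, 0, 0, 1, 0, 0);                 // reset before drawing labels in pixel space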
Here is the graphic:
http://snag.gy/aVFGA.jpg
The big rectangle is the canvas element; the small rectangle is the image object on the canvas. I want to find the image's real distance from the left edge.
The values I see in the console are:
regX: 564.256
regY: 41.4
scaleX: 0.4491319444444445
scaleY: 0.4491319444444445
x: 363.3333333333333
y: 409.77777777777777
So as far as I can see, x is not the real position. It somehow relates to regX and scaleX, but I can't figure out how. From the image I think the x should be about 100 - 150 px.
The bigger x is, the further the rectangle moves to the right.
But the bigger regX is, the further the rectangle moves to the left.
So if I just take the difference, 564.256 - 363.333 ≈ 200, the left corner of the rectangle should be in the middle of the canvas, because the canvas is 400 px wide. But it is not, so simple subtraction does not help. So how do I get the real number of pixels from the left?
You can do this by using the localToGlobal method (see here).
It depends to which object the given attributes belong.
If they belong to the shape and your rectangle inside the image / shape starts at (0,0):
var point = shape.localToGlobal(0, 0);
// this will calculate the global point of the shape's local point (0,0)
If they belong to the stage:
var point = stage.localToGlobal(yourRectObject.x, yourRectObject.y);
// point.x should contain the position on the canvas
You should use these methods in general because your method might work for the current situation but will probably break as soon as you scale the stage itself or put the shape in a scaled / positioned container.
I guess I found it by experimenting with the values:
distanceFromLeft = x - scaleX * regX;
which gives 109.90793888888885 px.
If someone has worked more with this library, they could confirm that it's not accidental.
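It isn't accidental: assuming the object has no rotation, no skew and no transformed parent container, its matrix places its local (0, 0) point at x - regX * scaleX globally, which is exactly what localToGlobal(0, 0) from the answer above returns:

var topLeft = yourRectObject.localToGlobal(0, 0);
// with the values above: 363.333 - 0.4491319 * 564.256 ≈ 109.908
// topLeft.x ≈ yourRectObject.x - yourRectObject.scaleX * yourRectObject.regX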