Here is the graphic:
http://snag.gy/aVFGA.jpg
The big rectangle is the canvas element; the small rectangle is the image object inside the canvas. I want to find the real distance from the left.
These are the values I see in the console:
regX: 564.256
regY: 41.4
scaleX: 0.4491319444444445
scaleY: 0.4491319444444445
x: 363.3333333333333
y: 409.77777777777777
So as I see it, x is not the real distance. It somehow relates to regX and scaleX, but I cannot find how. From the image, I think x should be about 100-150 px.
The bigger the x, the more it is to the right.
But the bigger regX is, the more it makes the rectangle go to the left.
So if I just take the difference, 564.256 - 363.333 = ~200, the left corner of the rectangle should be in the middle of the canvas, because the canvas is 400 px wide. But it is not, so subtraction alone does not help. So how do I get the real number of pixels from the left?
You can do this by using the localToGlobal method (see here).
It depends on which object the given attributes belong to.
If they belong to the shape and your rectangle inside the image / shape starts at (0,0):
var point = shape.localToGlobal(0, 0);
// this will calculate the global point of the shape's local point (0,0)
If they belong to the stage:
var point = stage.localToGlobal(yourRectObject.x, yourRectObject.y);
// point.x should contain the position on the canvas
You should use these methods in general because your method might work for the current situation but will probably break as soon as you scale the stage itself or put the shape in a scaled / positioned container.
I guess I found it by experimenting with the values:
distanceFromLeft = x - scaleX * regX;
which gives 109.90793888888885 px
If someone has worked more with this library, perhaps they can confirm that it's not accidental.
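For what it's worth, that formula is consistent with how EaselJS applies the registration point: regX is subtracted in local (pre-scale) coordinates, so it is effectively multiplied by scaleX on canvas. A minimal check with the logged values (assuming no rotation and no transformed parent container):
// Checking the formula with the logged values; assumes rotation = 0
// and no transformed parent container.
var x = 363.3333333333333;
var regX = 564.256;
var scaleX = 0.4491319444444445;

var distanceFromLeft = x - scaleX * regX;
console.log(distanceFromLeft); // ≈ 109.908 px

// The answer above gets the same number without manual math:
// var point = shape.localToGlobal(0, 0); // point.x ≈ 109.908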
Related
In the Leaflet example for a non-geographic image, they set "bounds". I am trying to understand how they computed these values:
var bounds = [[-26.5,-25], [1021.5,1023]];
The origin is at the bottom-left, with y increasing upwards and x to the right. How did negative numbers turn up here? Also, after experimenting, I see that the actual pixel coordinates change if you specify different coordinates for bounds. I have a custom PNG map which I would like to use, but I am unable to proceed because of this.
Oh, you mean this image:
If you open the full file (available at https://github.com/Leaflet/Leaflet/blob/v1.4.0/docs/examples/crs-simple/uqm_map_full.png ) with an image editor, you'll see that it measures 2315x2315 pixels. Now, the pixel that represents the (0,0) coordinate is not at a corner of the image, but rather 56 pixels away from the lower-left corner of the image:
Similarly, the (1000, 1000) coordinate is about 48 pixels from the top-right corner of the image:
Therefore, if we measure pixel coordinates of the grid corners:
Game coordinate (0, 0) → Pixel coordinate (59, 56)
Game coordinate (1000, 1000) → Pixel coordinate (2264, 2267)
The problem here is finding the bounds (measured in game coordinates) of the image. Or, in other words:
Pixel coordinate (0, 0) → Game coordinate (?, ?)
Pixel coordinate (2315, 2315) → Game coordinate (?, ?)
We know that the pixel-to-game-coordinate ratio is constant, we know the image size and the distance to the coordinates grid, so we can infer stuff:
1000 horizontal game units = image width - left margin - right margin
or
1000 horizontal game units = 2315px - 56px - 48px = 2213px
therefore the pixel/game unit ratio is
2213px / 1000 game units = 2.213 px/unit
therefore the left margin is...
~59px / (2.213 px/unit) ≈ 26.66 game units
...therefore the left edge of the image is at ~ -26.66 game units. Idem for the right margin...
~51px / (2.213 px/unit) ≈ 23.04 game units
...therefore the right edge of the image is at ~1023.04 game units
Repeating that for the top and bottom margins we can fill up all the numbers:
Pixel coordinate (0, 0) → Game coordinate (-26.66, -25)
Pixel coordinate (2315, 2315) → Game coordinate (1023.04, 1025)
Why don't these numbers match the ones in the example exactly? Because I might have used a different pixel for measurement when I wrote that Leaflet tutorial. Still, the error is negligible.
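To make the same arithmetic reusable for your own map, here is a sketch of the computation (all variable names are mine; the reference-pixel measurements are the ones from the figures above):
// Derive the image-corner game coordinates from two measured
// reference points (names are illustrative, not Leaflet API).
var imageSize = 2315;                // the image is 2315x2315 px
var px0 = { x: 59, y: 56 };          // pixel position of game (0, 0)
var px1000 = { x: 2264, y: 2267 };   // pixel position of game (1000, 1000)

var ratioX = (px1000.x - px0.x) / 1000;  // px per game unit, horizontally
var ratioY = (px1000.y - px0.y) / 1000;  // px per game unit, vertically

var minX = -px0.x / ratioX;               // ≈ -26.8
var minY = -px0.y / ratioY;               // ≈ -25.3
var maxX = (imageSize - px0.x) / ratioX;  // ≈ 1023.1
var maxY = (imageSize - px0.y) / ratioY;  // ≈ 1021.7
// Close to the tutorial's [[-26.5, -25], [1021.5, 1023]]; mind which
// axis order (y/x vs x/y) your bounds array is expected to use.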
Let me highlight a sentence from that tutorial:
One common mistake when using CRS.Simple is assuming that the map units equal image pixels. In this case, the map covers 1000x1000 units, but the image is 2315x2315 pixels big. Different cases will call for one pixel = one map unit, or 64 pixels = one map unit, or anything. Think in map units in a grid, and then add your layers (L.ImageOverlays, L.Markers and so on) accordingly.
If you have your own game map (or anything else), you should ask yourself: Where is the (0,0) coordinate? What are the coordinates of the image edges in the units I'm gonna use?
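For completeness, plugging the tutorial's bounds into a map looks roughly like this (a sketch; the 'map' element id and the minZoom value are my assumptions):
// Sketch wiring the tutorial's bounds into Leaflet; the 'map'
// element id and the minZoom value are assumptions.
var map = L.map('map', { crs: L.CRS.Simple, minZoom: -3 });
var bounds = [[-26.5, -25], [1021.5, 1023]];
L.imageOverlay('uqm_map_full.png', bounds).addTo(map);
map.fitBounds(bounds);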
I'm trying to scale and then rotate a triangle and then translate it to a given point in Snap SVG.
I want to rotate the triangle around its top, not its center, so I can build something like a pie.
So I thought I'd scale first, then rotate, and finally translate.
var t = new Snap.Matrix();
t.scale(0.5);
t.rotate(45, bbox.cx, (bbox.cy-(bbox.h/2)));
But the scale and rotation are somehow always a bit off.
I reused a jsfiddle I found and updated it, so you can see what I'm trying to do:
http://jsfiddle.net/AGq9X/477/
Somehow the bbox.cx and bbox.cy are not in the center of the triangle.
On my local setup they are.
The strange thing is, rotation alone works fine,
but scaling and then rotating always seems to be a bit off on the y axis; the triangle doesn't stay at the rotation point.
Any ideas how I can fix that?
EDIT:
OK, I found the solution. Thanks to Ian, you were right: the center of scaling is important.
I thought it was using the center of the object, but it was the upper left corner. I adjusted it
and now it works great:
var bbox = obj.getBBox(); //get coords etc. of triangle object
var t = new Snap.Matrix();
var offset = (bbox.cy+(bbox.h)) - centerY; //translate Y to center,
//depends on scaling factor (0.5 = bbox.h, 0.25 = bbox.h*2)
t.scale(0.5, 0.5, bbox.cx, (bbox.cy+(bbox.h/2))); //scale object
t.translate(0,-offset); //translate to center
t.rotate(45, bbox.cx, (bbox.cy+(bbox.h/2))); //rotate object
obj.transform(t); //apply transformation to object
EDIT2:
I wanted to know how to save transformations, so you don't need to re-apply them every time you add a new one. Ian recommended using element.transform() like so to get the old transformations:
element.transform( element.transform() + 's2,2' )
This is slightly more complicated than one would expect, but you would be animating a matrix, which does some odd things sometimes.
Personally I would use Snap's alternative animation method, Snap.animate(), and not use a matrix. Set the scale first and then build your animation string.
Something like...
var triangle2 = p.select("#myShape2").transform('s0.5');
...
Snap.animate(0, 90, function(val) {
    triangle2.transform('r' + val + ',' + bbox.cx + ',' + (bbox.cy - (bbox.h / 2)) + 's0.5');
}, 2000);
jsfiddle
I use getImageData and putImageData to draw on the canvas from a buffer canvas. I use these methods because I have a large number of particles, and these proved to give the best performance.
Now I'd like to add rotation of particles but I'm having problems with that.
Here is a jsfiddle which uses a transformation matrix for rotation. As you can see in the picture (or fiddle), there are holes in the resulting image, which I kind of expected from using this matrix.
nx = ~~ (xx * Math.cos(angle) + yy * Math.sin(angle) + cx);
ny = ~~ (xx * Math.sin(angle) - yy * Math.cos(angle) + cy);
But I don't know how to improve this, especially since I'm looking for a performance-efficient solution.
jsfiddle demo
Image - square after rotation (square is used as a simple body):
Currently my backup is a procedurally generated sprite animation which is prepared in advance with standard canvas state calls: save -> translate -> rotate -> restore (roughly as sketched below).
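A minimal sketch of that pre-rendering idea (prerenderRotations and the strip layout are my own illustration):
// Draw each rotation step once into an offscreen strip, then blit
// frames with drawImage instead of rotating per particle per frame.
function prerenderRotations(sprite, steps) {
    var strip = document.createElement('canvas');
    strip.width = sprite.width * steps;
    strip.height = sprite.height;
    var ctx = strip.getContext('2d');
    for (var i = 0; i < steps; i++) {
        ctx.save();
        ctx.translate(i * sprite.width + sprite.width / 2, sprite.height / 2);
        ctx.rotate(i * 2 * Math.PI / steps);
        ctx.drawImage(sprite, -sprite.width / 2, -sprite.height / 2);
        ctx.restore();
    }
    return strip; // frame i is at x = i * sprite.width
}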
Thank you very much for any directions you can give me.
The problem is that you are trying to map a single pixel to a single pixel. When you rotate an image, each pixel in the original can influence any of the surrounding pixels in the new image. You are effectively mapping the top left corner of each pixel to its location in the new image, but you need to map the center of each pixel to its location in the new image, and then check the overlap of this rotated pixel with that location and the 8 surrounding pixels in the new image.
Here you can see the effect. The yellow dots are the centers of the pixels; they find the "home" location for the pixel (i.e. where the majority of the influence will be placed). You then need to figure out what percentage of that cell (in the underlying blue/white grid) is covered by the original pixel (the black box surrounding the yellow dot). Once you figure out the home location's influence, you repeat that process for the 8 surrounding pixels with respect to the current pixel in the original image. In your current code, you are using the top left corner of each pixel to find the home pixel in the new image; you should use the center of the pixel.
Since multiple iterations might affect the same pixel, you'll need to calculate the transformation in a buffer before drawing it to the final image. For pixels in the transformation that are not fully covered by pixels in the original image, figure out the percentage of the pixel that is covered and use that to influence the alpha channel. You'll have to take care when applying the pixels to the final image that you account for the alpha portion and blend with what's already there.
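If full coverage-based blending turns out to be too slow, a cheaper technique that also avoids the holes is to invert the mapping: loop over destination pixels and sample the source, so every destination pixel is written exactly once. A nearest-neighbour sketch (my own illustration, not the coverage algorithm described above):
// Inverse mapping: rotate each destination pixel's center back into
// source space and copy the pixel it lands on; no destination pixel
// is skipped, so no holes appear.
function rotateImageData(src, dst, angle, cx, cy) {
    var cos = Math.cos(-angle), sin = Math.sin(-angle); // inverse rotation
    for (var y = 0; y < dst.height; y++) {
        for (var x = 0; x < dst.width; x++) {
            var dx = x + 0.5 - cx, dy = y + 0.5 - cy;
            var sx = Math.floor(cx + dx * cos - dy * sin);
            var sy = Math.floor(cy + dx * sin + dy * cos);
            if (sx >= 0 && sx < src.width && sy >= 0 && sy < src.height) {
                var si = (sy * src.width + sx) * 4;
                var di = (y * dst.width + x) * 4;
                dst.data[di]     = src.data[si];
                dst.data[di + 1] = src.data[si + 1];
                dst.data[di + 2] = src.data[si + 2];
                dst.data[di + 3] = src.data[si + 3];
            }
        }
    }
}
Replacing Math.floor with bilinear sampling of the four neighbouring source pixels would smooth the edges further, at some extra cost per pixel.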
I'm drawing to the canvas using the x/y coords of the mouse, but the line I'm drawing is always slightly off. Try drawing on http://zachrip.net/widgets/onlineedit/index.html (top left) to see what I mean. There is no offset that I need to account for, so I don't know what the issue is.
The problem here is that you are setting the Canvas Element Size through your CSS, but you do not set the Drawing Surface Size.
The default size of the Drawing Surface is 300px by 150px. Since you do not set it, but do set the Element Size, the browser scales the drawing surface to fit the element. The x and y coordinates you get through the mouse event correspond to the Element Size, not the actual Drawing Surface Size, which is why you get the offset.
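A minimal sketch of that first fix, making the drawing surface match the element's CSS size (selecting the canvas this way is my assumption):
// Match the drawing surface to the element size so mouse
// coordinates line up 1:1.
var canvas = document.querySelector('canvas');
canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;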
Now, the fiddle that I posted earlier simply had you set the size of the Drawing Surface instead of the Element. That works, but if you'd rather have different Element and Drawing Surface sizes, then you can also do:
function scaleCoords(canvas, x, y) {
    // canvas.width/height is the drawing surface size;
    // canvas.clientWidth/clientHeight is the element size
    x = x * canvas.width / canvas.clientWidth;
    y = y * canvas.height / canvas.clientHeight;
    return {x: x, y: y};
}
Example for second method.
When I move the slider all the way to the end, the circle that represents the star disappears or moves in an unexpected way. See: jsfiddle.net/NxNXJ/13. Unlike this: astro.unl.edu/naap/hr/animations/hrExplorer.html
Can you help me? Thanks
When you supply a big luminosity, you're rendering a circle which is millions of pixels tall. The browser might not render it because it's so big.
However, you are really only interested in a small slice of that big circle - namely, the bit that fits in your tiny window.
At some point, it doesn't make sense to increase the size of the circle, since you can't observe a change in the curvature of the circle - it just looks like a straight vertical line.
This apparent verticality occurs around the point where x^2 + y^2 = R^2, where R is the radius of the star, y is half the height of your window, and x is R - 1. Substituting gives (R - 1)^2 + y^2 = R^2, which solves to R = (y^2 + 1) / 2:
function maximumNecessaryRadius(windowHeight) {
    var y = windowHeight / 2;
    // from (R - 1)^2 + y^2 = R^2, so R = (y^2 + 1) / 2
    var maxRadius = (y * y + 1) / 2;
    return Math.round(maxRadius);
}
When resizing the star, check to make sure that its radius doesn't exceed the maximum necessary radius. Rendering it any larger than that is overkill.
Example Implementation
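As a usage sketch of that clamp (drawStar, starRadius, ctx and the coordinates are hypothetical names, not from the linked example):
// Hypothetical usage: clamp the rendered radius to what is visible.
var renderedRadius = Math.min(starRadius, maximumNecessaryRadius(canvas.height));
drawStar(ctx, centerX, centerY, renderedRadius); // drawStar is hypothetical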