I have written a function that resizes an SVG path, or any shape. However, when I use it the path does get resized, but unfortunately it also changes position within my SVG canvas.
This is my function:
function output()
{
    var transformw = prompt("Enter your new width");
    var transformh = prompt("Enter your new height");
    var lastw = svg_1.getBoundingClientRect().width;
    var lasth = svg_1.getBoundingClientRect().height;
    var newW = transformw / lastw;
    var newH = transformh / lasth;
    alert(newH);
    alert(newW);
    svgCanvas.changeSelectedAttribute("transform",
        "matrix(" + newW + ", 0, 0, " + newH + ", 0, 0)");
    svgCanvas.recalculateAllSelectedDimensions();
}
I only want the shapes to be positioned at the top corner of my canvas once they get transformed. Ideally I would want them to keep the same x,y position they had before the transformation; however, I wouldn't mind a fixed point if the original x,y position is difficult to achieve.
I am answering my own question.
When we resize an SVG element using a transform, the element also gets moved along the x and y axes, in proportion to the transformation we applied.
To counteract this effect we just need to apply an additional translation to the element with the same parameters as that shift, but negated (it moves the element in the opposite direction to the one the transformation pushed it in).
This way we cancel out the positioning effect of the transformation and keep only the resizing effect.
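A minimal sketch of that idea (assuming the selected element is available as svgNode and that sx and sy are the scale factors computed as in the function above; the svg-edit helpers are the ones already used there): the compensating translation is baked into the same matrix so the element keeps its original top-left corner.

function resizeInPlace(svgNode, sx, sy) {
    var bbox = svgNode.getBBox();   // position of the element before scaling
    var tx = bbox.x * (1 - sx);     // cancels the horizontal shift caused by the scale
    var ty = bbox.y * (1 - sy);     // cancels the vertical shift caused by the scale
    svgCanvas.changeSelectedAttribute("transform",
        "matrix(" + sx + ", 0, 0, " + sy + ", " + tx + ", " + ty + ")");
    svgCanvas.recalculateAllSelectedDimensions();
}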
Related
I want to write a JavaScript function that moves CSS3-transformed (scaled, skewed, rotated) div elements to an absolute position in their container box. We are developing a canvas-like application. I am using getBoundingClientRect() to find the absolute position and bounding rect of the element. However, when I move the element to the specified position it does not work, because the x and y position has been shifted by the transform.
Move to (0, 0):
// pseudocode I tried!
const matrix = getTransformationMatrix(elm); // get the element's transformation matrix
const pos = matrix.applyToPoint(0, 0); // find the transformed coordinate of the zero position by applying the element's matrix
translatePosition(elm, pos.x, pos.y);
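Not the asker's code, just a minimal sketch of one way this can be done with standard DOM APIs, assuming the element sits inside a positioned container (its offsetParent):

// Move a CSS-transformed element so that its visual (bounding-box) top-left
// lands at (targetX, targetY), measured in the offsetParent's coordinate space.
function moveTransformedTo(elm, targetX, targetY) {
  const parentRect = elm.offsetParent.getBoundingClientRect();
  const rect = elm.getBoundingClientRect();   // box as actually rendered, transform included
  const dx = targetX - (rect.left - parentRect.left);
  const dy = targetY - (rect.top - parentRect.top);

  // Prepend a translate so the shift happens in the parent's coordinate space:
  // the element moves by exactly (dx, dy) on screen, while the existing
  // scale/skew/rotation stays intact.
  const current = getComputedStyle(elm).transform;
  const matrix = current === 'none' ? new DOMMatrix() : new DOMMatrix(current);
  elm.style.transform = new DOMMatrix().translate(dx, dy).multiply(matrix).toString();
}

// e.g. move to the container's top-left corner:
// moveTransformedTo(elm, 0, 0);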
I would like to scale animate an SVG element to fit (preserving aspect ratio) a given area of the SVG.
I know about animate, which performs relative animations:
var s = Snap("#myelement");
s.animate({ 'transform' : 't100,100s5,5,165,175' },1000);
In principle it should be possible to achieve what I want by computing the parameters of the translation and the scaling.
The problem is that I cannot find accurate documentation of these parameters.
The arguments of t seem to be the relative x,y position and that of s the scale factors and the coordinates of the scale center.
However, how does the combined translation and scaling work? Does the relative translation position scale with the scaling, etc.?
In other words: How do I compute the relative translation and scaling parameters from the coordinates of the upper left and the lower right corner of the animation target element?
Alternatively: Is there a more suitable animate function in Snap?
You show a transform with several parts, and the order of those parts matters. A translation written before the scale is applied in the original, unscaled coordinates, while a translation written after the scale is expressed in the already scaled coordinate system, so its distances are effectively multiplied by the scale factor.
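To make that concrete, here is a small illustration with invented numbers: an element whose untransformed position is (10, 10), scaled by 5 about the origin.

var el = Snap("#myelement");

// translation written before the scale: the 100,100 offset stays 100,100 in user
// units, so the element ends up at (5*10 + 100, 5*10 + 100) = (150, 150)
el.transform('t100,100s5,5,0,0');

// translation written after the scale: the 100,100 offset lives in the scaled
// coordinate system, so the element ends up at (5*(10 + 100), 5*(10 + 100)) = (550, 550)
el.transform('s5,5,0,0t100,100');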
The animation call you use in Snap.svg is the one I also use. (However, I am considering migrating to svg.js, since Snap.svg does not play well with Electron, for example. I still have to do some testing first, though.)
Since Snap uses SVG syntax, to solve the problem one needs to understand SVG transformations (see here for an introduction: https://sarasoueidan.com/blog/svg-transformations/). In order to set up a combined SVG transformation it is important to understand that each transformation changes the coordinate system (rather than just the properties of an element in an absolute coordinate frame).
If you combine two transformations, scaling and translation, this means that the parameters of the second transformation depends on the first one.
To achieve a translation and scaling of an element to a given location and size in the coordinates of the viewBox of an SVG, one can first perform the scaling to the new size, choosing the center of the element as the center of the scaling. The considerations for the subsequent translation then simplify as follows:
function startAnimation() {
    var svg = Snap("#baseSVG");
    /* get the bounding box of the svg */
    var bboxSvg = svg.getBBox();
    var s = Snap("#element");
    /* get the bounding box of the element */
    var bbox = s.getBBox();
    /* get the required scale factor (assuming that we want to fit the element inside the svg bounding box) */
    var scale = Math.min(bboxSvg.width / bbox.width, bboxSvg.height / bbox.height) * 0.8;
    /* compute the translation needed to bring the center of the element to the center of the svg
       (here assumed to be at 200,200 in viewBox coordinates); the scale factor must be taken into
       account since the translation is based on the coordinate system obtained after the previous scaling */
    var tx = (200 - bbox.cx) / scale;
    var ty = (200 - bbox.cy) / scale;
    /* perform the animation (make the center of scaling the center of the element) */
    s.animate({ 'transform': 's' + scale + ',' + scale + ',' + bbox.cx + ',' + bbox.cy + 't' + tx + ',' + ty }, 1000, mina.bounce);
    s.drag();
}
This assumes that your SVG object has the id baseSVG and the element you want to transform has the id element. The element is transformed so that it fits the SVG (adjust the factor 0.8 if you want it larger or smaller). If you know only the coordinates of the corners of the animation target, you must first compute the center coordinates of the target (to replace bbox.cx and bbox.cy) and the scale in order to apply this code snippet. Everything works in the coordinate frame of baseSVG.
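For reference, invented markup that matches these assumptions could look like this (a 400×400 viewBox, so the 200,200 used in the translation above is the center of the SVG):

<svg id="baseSVG" viewBox="0 0 400 400" width="400" height="400">
  <g id="element">
    <rect x="20" y="30" width="60" height="40" fill="teal"/>
  </g>
</svg>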
I wish to set a boundary for two rectangles in my SVG.
I found this example: http://bl.ocks.org/mbostock/1557377
In the example the boundaries get worked out from the position of the object that is dragged. Every circle in the example can only move a certain distance from where it started. What I wish to do is to create one drag function and use it on multiple shapes. This drag function will stop the shapes from going out of a certain area.
For example: I have a rectangle on the left side of the screen and one on the right, but I don't want either of them to be able to go off screen. I started working on this but realised it works relative to the position of the object being dragged. So it works for the left-hand rectangle, but the right-hand rectangle can go off screen to the right while only being able to move so far to the left:
.on("drag", function(d) {
g = this;
translate = d3.transform(g.getAttribute("transform")).translate;
x = d3.event.dx + translate[0],
y = d3.event.dy + translate[1];
if(x<-10){x=-10}
if(x>width-10){width-10}
if(y<-10){y=-10}
if(y>height-10){y=height-10}
d3.select(g).attr("transform", "translate(" + x + "," + y + ")");
d3.event.sourceEvent.stopPropagation();
My question is: how do I impose the same boundary on anything that is dragged, i.e. stop it going off screen? I have variables width and height, which are the screen width and screen height respectively.
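A sketch of one way to do this (d3 v3 syntax, matching the snippet above; it assumes width and height are the screen dimensions mentioned, and uses each shape's own bounding box instead of hard-coded offsets so the same behaviour works for any dragged element):

var drag = d3.behavior.drag()
    .on("drag", function(d) {
        var translate = d3.transform(this.getAttribute("transform")).translate;
        var box = this.getBBox();               // untransformed geometry of whatever is dragged
        var x = d3.event.dx + translate[0],
            y = d3.event.dy + translate[1];
        // clamp the translation so the whole shape stays inside [0, width] x [0, height]
        x = Math.max(-box.x, Math.min(width - box.x - box.width, x));
        y = Math.max(-box.y, Math.min(height - box.y - box.height, y));
        d3.select(this).attr("transform", "translate(" + x + "," + y + ")");
        d3.event.sourceEvent.stopPropagation();
    });

// the same behaviour can then be attached to every shape, e.g.
// d3.selectAll("rect").call(drag);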
I am doing a modification to svg-edit. I am using a function to make a path element bigger or smaller based on width and height inputs from the user. The user selects an element and clicks a button to fire the function, which takes the last known width and height measurements and then asks the user for the new width and height values. From these it computes scale factors that it uses to build a transform matrix on the element, to make it as big as the user wants.
The problem is that when the matrix transform is applied, the element also changes position.
When the user is asked for a width and height, I also want to ask for an x,y position on the canvas and then move the selected element to that position.
Is there a way of repositioning an SVG element?
function changeDimensions()
{
    var svgNode = svgCanvas.getSelectedElems()[0];
    var dims = Raphael.pathBBox(svgNode.getAttribute('d'));
    var lasth = parseInt(dims.height);
    var lastw = parseInt(dims.width);
    var transformw = prompt("Enter your new width");
    var transformh = prompt("Enter your new height");
    var newW = transformw / lastw;
    var newH = transformh / lasth;
    svgCanvas.changeSelectedAttribute("transform", "matrix(" + newW + ", 0, 0, " + newH + ", 0, 0)");
    svgCanvas.recalculateAllSelectedDimensions();
}
Different SVG elements use different attributes to position themselves. For example, rects have x and y attributes, but circles have cx and cy. Paths do not have separate position attributes; their coordinates are embedded in the d attribute.
However, you can probably get what you need from a transform! Most SVG elements accept a transform attribute in which you can assign a translation, e.g.
<path d="M10,10L20,100" transform="translate(30,40)"/>
In fact you can probably scale your path with the same transform attribute.
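A hedged sketch of that, extending the changeDimensions() function from the question: it also asks for a target x,y and folds a translation into the same transform attribute so the scaled path ends up at that position (it assumes the same svg-edit and Raphael helpers the question already uses).

function changeDimensionsAndPosition() {
    var svgNode = svgCanvas.getSelectedElems()[0];
    var dims = Raphael.pathBBox(svgNode.getAttribute('d'));
    var newW = prompt("Enter your new width") / dims.width;
    var newH = prompt("Enter your new height") / dims.height;
    var newX = parseFloat(prompt("Enter the new x position"));
    var newY = parseFloat(prompt("Enter the new y position"));
    // translate is written before scale, so newX/newY are plain canvas coordinates;
    // subtracting the scaled bounding-box origin puts the path's top-left corner there
    svgCanvas.changeSelectedAttribute("transform",
        "translate(" + (newX - newW * dims.x) + "," + (newY - newH * dims.y) + ") " +
        "scale(" + newW + "," + newH + ")");
    svgCanvas.recalculateAllSelectedDimensions();
}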
I have used the canvas code provided elsewhere on this site to create a screen where I have several overlapping transparent PNGs whose non-transparent parts are irregular shapes. I can get the color under the cursor, and that is great. But my shapes are all the same color, and I need a way to get the ID of the particular shape as well, so I know which one was clicked on. Imagine a map made of overlapping PNGs, one for each country, where you want to detect which country was clicked. From what I can tell, ID detection only applies to rectangular regions. Any suggestions?
$('#myCanvas').click(function(e){
    var position = findPos(this);
    var x = e.pageX - position.x;
    var y = e.pageY - position.y;
    var coordinate = "x=" + x + ", y=" + y;
    var context = this.getContext('2d');
    var p = context.getImageData(x, y, 1, 1).data;
    var hex = "#" + ("000000" + rgbToHex(p[0], p[1], p[2])).slice(-6);
    alert(hex);
});
This code gets and displays the color (findPos and rgbToHex are separate functions left off for clarity). I need an id! Help!
Even with transparency, the images are all rectangles. You can therefore find out which images lie under a clicked point by rectangle intersection: check your array of images, with their x,y positions and width,height, for containment of the point. That gives you an array of possibly clicked images. If there is only one in the list, you are done.
The images have an implied Z-order, the reverse of the order in which you drew them, meaning an image is overdrawn by any later image that overlaps it. You can use that to decide the order in which to hit-test them if more than one image is under the clicked point. The only trick is to detect whether an image pixel is transparent or not.
To detect transparency at the clicked point within a single image, you can keep a second, hidden canvas element. Clear it, draw the candidate image onto it at the same position, and use the same getImageData code to check whether the clicked pixel in that hidden canvas is transparent (alpha of 0). If it is, repeat the process with the next image down in the Z-order until you find the image where a non-transparent pixel was clicked.
A small but important optimization: check the clicked pixel on the main canvas first; if it is fully transparent there, you already know that none of the images was clicked on a non-transparent point.
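A minimal sketch of that approach (the images array and its fields are assumed bookkeeping, i.e. whatever structure you already use to draw the PNGs onto the main canvas, in drawing order):

var hitCanvas = document.createElement('canvas');
var hitContext = hitCanvas.getContext('2d');

// images: [{ id, img, x, y, width, height }, ...] in the order they were drawn
function findClickedImage(images, x, y) {
    // walk the implied Z-order from topmost (drawn last) to bottom
    for (var i = images.length - 1; i >= 0; i--) {
        var item = images[i];
        // quick rectangle test first
        if (x < item.x || y < item.y ||
            x >= item.x + item.width || y >= item.y + item.height) continue;
        // redraw just this image on the hidden canvas and inspect the pixel's alpha
        hitCanvas.width = item.width;      // resizing also clears the hidden canvas
        hitCanvas.height = item.height;
        hitContext.drawImage(item.img, 0, 0, item.width, item.height);
        var alpha = hitContext.getImageData(x - item.x, y - item.y, 1, 1).data[3];
        if (alpha > 0) return item.id;     // hit a non-transparent pixel of this image
    }
    return null;                           // only transparent pixels under the cursor
}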