I have an SVG visualization of the distribution of CSS4 color keywords in HSL space here: https://meyerweb.com/eric/css/colors/hsl-dist.html
I recently added zooming via the mouse wheel, and panning via mouse click-and-drag. I’m able to convert a point from screen space to SVG coordinate space using matrixTransform, .getScreenCTM(), and .inverse() thanks to example code I found online, but how do I convert mouse movements during dragging? Right now I’m just shifting the viewBox coordinates by the X and Y values from the event, which means the image drags faster than the mouse movement when zoomed in.
As an example, suppose I’m zoomed in on the image and am dragging to pan, and I jerk the mouse leftwards and slightly downwards. event.movementX returns -37 and event.movementY returns 6. How do I determine how far that equates to in SVG coordinates, so that the viewBox coordinates are shifted properly?
(Note: I’m aware that there are libraries for this sort of thing, but I’m intentionally writing vanilla JS code in order to learn more about both SVG and JS. So please, don’t post “lol just use library X” and leave it at that. Thanks!)
Edited to add: I was asked to post code. Posting the entire JS seems overlong, but this is the function that fires on mousemove events:
function dragger(event) {
  var target = document.getElementById('color-wheel');
  var coords = parseViewBox(target);
  coords.x -= event.movementX;
  coords.y -= event.movementY;
  changeViewBox(target, coords);
}
If more is needed, then view source on the linked page; all the JS is at the top of the page. Nothing is external except for a file that just contains all the HSL values and color names for the visualization.
My recommendation:
Don't worry about the movementX/Y properties on the event.
Just worry about where the mouse started and where it is now.
(This has the additional benefit that you get the same result even if you miss some events: maybe because the mouse moved out of the window, or maybe because you want to group events so you only run the code once per animation frame.)
For where the mouse started, you measure that on the mousedown event.
Convert it to a position in the SVG coordinates, using the method you were using,
with .getScreenCTM().inverse() and .matrixTransform().
After this conversion, you don't care where on the screen this point is. You only care about where it is in the picture. That's the point in the picture that you're always going to move to be underneath the mouse.
On the mousemove events, you use that same conversion method to find out where the mouse currently is within the current SVG coordinate system. Then you figure out how far that is from the point (again, in SVG coordinates) that you want underneath the mouse. That's the amount that you use to transform the graphic. I've followed your example and am doing the transform by shifting the x and y parts of the viewBox:
function move(e) {
  var targetPoint = svgCoords(e, svg);
  shiftViewBox(anchorPoint.x - targetPoint.x,
               anchorPoint.y - targetPoint.y);
}
You can also shift the graphic around with a transform on a group (<g> element) within the SVG; just be sure to use that same group element for the getScreenCTM() call that converts from the clientX/Y event coordinates.
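Here's a rough sketch of that group-based variant, in case it helps (untested; the pan-group id and the way I re-compose the transform are just one way to do it, and the listener wiring mirrors the demo below):

// Sketch: panning by translating a <g> instead of the viewBox.
// Assumes <g id="pan-group"> wraps the picture content.
var panGroup = document.getElementById("pan-group");
var groupAnchor;

function svgCoordsInGroup(event) {
  var pt = panGroup.ownerSVGElement.createSVGPoint();
  pt.x = event.clientX;
  pt.y = event.clientY;
  // The group's CTM includes its current transform, so this returns
  // coordinates in the space the group's children are drawn in.
  return pt.matrixTransform(panGroup.getScreenCTM().inverse());
}

function shiftGroup(deltaX, deltaY) {
  // Compose the pan with whatever transform the group already carries
  // (e.g. a zoom scale), rather than overwriting it.
  var owner = panGroup.ownerSVGElement;
  var consolidated = panGroup.transform.baseVal.consolidate();
  var matrix = consolidated ? consolidated.matrix : owner.createSVGMatrix();
  panGroup.transform.baseVal.initialize(
    owner.createSVGTransformFromMatrix(matrix.translate(deltaX, deltaY)));
}

function moveGroup(e) {
  var targetPoint = svgCoordsInGroup(e);
  // Note the sign is the opposite of the viewBox version: here we move
  // the content toward the mouse instead of moving the viewport.
  shiftGroup(targetPoint.x - groupAnchor.x, targetPoint.y - groupAnchor.y);
}

panGroup.ownerSVGElement.addEventListener("mousedown", function(e) {
  groupAnchor = svgCoordsInGroup(e);
  window.addEventListener("mousemove", moveGroup);
  window.addEventListener("mouseup", function stop() {
    window.removeEventListener("mousemove", moveGroup);
    window.removeEventListener("mouseup", stop);
  });
});

Composing with the existing matrix means any zoom scale you later apply to the same group survives the pan.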
Full demo for the drag to pan. I've skipped all your drawing code and the zooming effect.
But the zoom should still work, because the only position you're saving in global values is already converted into SVG coordinates.
var svg = document.querySelector("svg");
var anchorPoint;

function shiftViewBox(deltaX, deltaY) {
  svg.viewBox.baseVal.x += deltaX;
  svg.viewBox.baseVal.y += deltaY;
}

function svgCoords(event, elem) {
  var ctm = elem.getScreenCTM();
  var pt = svg.createSVGPoint();
  // Note: rest of method could work with another element,
  // if you don't want to listen to drags on the entire svg.
  // But createSVGPoint only exists on <svg> elements.
  pt.x = event.clientX;
  pt.y = event.clientY;
  return pt.matrixTransform(ctm.inverse());
}

svg.addEventListener("mousedown", function(e) {
  anchorPoint = svgCoords(e, svg);
  window.addEventListener("mousemove", move);
  window.addEventListener("mouseup", cancelMove);
});

function cancelMove(e) {
  window.removeEventListener("mousemove", move);
  window.removeEventListener("mouseup", cancelMove);
  anchorPoint = undefined;
}

function move(e) {
  var targetPoint = svgCoords(e, svg);
  shiftViewBox(anchorPoint.x - targetPoint.x,
               anchorPoint.y - targetPoint.y);
}
body {
  display: grid;
  margin: 0;
  min-height: 100vh;
}
svg {
  margin: auto;
  width: 70vmin;
  height: 70vmin;
  border: thin solid gray;
  cursor: move;
}
<svg viewBox="-40 -40 80 80">
  <polygon fill="skyBlue"
           points="0 -40, 40 0, 0 40, -40 0" />
</svg>
So the script needs a way to relate the distance the SVG content moves to the distance the mouse moves on screen. Even though the event fires on your target, your SVG, the MouseEvent properties are measured purely in screen pixels.
The movementX read-only property of the MouseEvent interface provides the difference in the X coordinate of the mouse pointer between the given event and the previous mousemove event. In other words, the value of the property is computed like this: currentEvent.movementX = currentEvent.screenX - previousEvent.screenX.
From https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent/movementX
The screenX read-only property of the MouseEvent interface provides the horizontal coordinate (offset) of the mouse pointer in global (screen) coordinates.
So what you're measuring, and to the best of my knowledge the only thing you can measure directly without additional libraries or complication, is the movement of the pointer in pixels across the screen. The only way to turn that into a movement vector for your SVG is to translate the on-screen movement into the dimensions that are relevant to your scaled SVG.
My initial thinking was that you could work out the scaling of the SVG object from some combination of its viewBox and its actual width on the screen. Naturally, what initially appears sensible turns out not to be: that approach won't work, and if it ever seems to, it's purely by chance.
But it turns out that the solution is essentially to use the same type of code you've used in your scaling when you approach your mouse movements. The .getScreenCTM() and .inverse() functions are exactly what you'll need again. But instead of trying to find a single point in the SVG to work from, you need to find out what the on-screen distance translates to in the SVG by comparing two points in the SVG.
What I provide here isn't necessarily the most optimal solution but hopefully helps explain and gives you something to work further from...
function dragger(event) {
  var target = document.getElementById('color-wheel');
  var coords = parseViewBox(target);

  // Get an initial point in the SVG to start measuring from
  var start_pt = target.createSVGPoint();
  start_pt.x = 0;
  start_pt.y = 0;
  var svgcoord = start_pt.matrixTransform(target.getScreenCTM().inverse());

  // Create a point within the same SVG that is equivalent to
  // the px movement by the pointer
  var comparison_pt = target.createSVGPoint();
  comparison_pt.x = event.movementX;
  comparison_pt.y = event.movementY;
  var svgcoord_plus_movement = comparison_pt.matrixTransform(target.getScreenCTM().inverse());

  // Use the two SVG points created from screen position values to determine
  // the in-SVG distance to change coordinates
  coords.x -= (svgcoord_plus_movement.x - svgcoord.x);
  // Repeat the above, but for the Y axis
  coords.y -= (svgcoord_plus_movement.y - svgcoord.y);

  // Deliver the changes to the SVG to update the view
  changeViewBox(target, coords);
}
Sorry for the long-winded answer, but hopefully it explains things from the beginning well enough that anyone else looking for an answer can get the whole picture, even if they've not come as far as you have with this script.
From MouseEvent, we have clientX and movementX. Taken together, we can deduce our last location. We can then take the transform of our current location and subtract it from the transform of our last location:
element.onpointermove = e => {
  const { clientX, clientY, movementX, movementY } = e;
  const DOM_pt = svg.createSVGPoint();
  DOM_pt.x = clientX;
  DOM_pt.y = clientY;
  // Current pointer position, converted into SVG coordinates.
  const { x, y } = DOM_pt.matrixTransform(svg.getScreenCTM().inverse());
  // Back out the movement to get the previous pointer position,
  // and convert that into SVG coordinates as well.
  DOM_pt.x -= movementX;
  DOM_pt.y -= movementY;
  const { x: last_x, y: last_y } = DOM_pt.matrixTransform(svg.getScreenCTM().inverse());
  const dx = last_x - x;
  const dy = last_y - y;
  // TODO: use dx & dy
};
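One way to use dx and dy, if you're panning by shifting the viewBox as in the answers above (a sketch that could replace the TODO line; it assumes svg is the <svg> being panned):

// Shift the viewBox by the SVG-space delta of the pointer movement.
svg.viewBox.baseVal.x += dx;
svg.viewBox.baseVal.y += dy;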
Related
I’m having some trouble incorporating pan/zoom behaviour with the ability to also drag-move some shapes around on the canvas, using EaselJS.
I want to be able to move the shape ONLY if I mousedown on it, but if I mousedown on the stage (i.e. not on a shape), then I want to be able to pan the stage.
This behaviour needs to be consistent regardless of the ‘zoom’ level (which is altered by the mousewheel).
I have read this: How to stop the event bubble in easeljs? It suggests that stage mousedown events will fire regardless of whether I have clicked on a shape or on empty space, so it would be better to create a ‘background’ shape to capture my mousedown events that are not on a ‘proper’ shape.
This fiddle is how I have set it up: https://jsfiddle.net/hmcleay/mzheuLbg/
var stage = new createjs.Stage("myCanvas");
// the canvas element backing the stage (used by the listeners below)
var canvas = stage.canvas;
console.log('stage.scaleX: ', stage.scaleX);
console.log('stage.scaleY: ', stage.scaleY);
function addCircle(r, x, y) {
  var g = new createjs.Graphics().beginFill("#ff0000").drawCircle(0, 0, r);
  var s = new createjs.Shape(g);
  s.x = x;
  s.y = y;
  s.on('pressmove', function(ev) {
    var localpos = stage.globalToLocal(ev.stageX, ev.stageY);
    s.x = localpos.x;
    s.y = localpos.y;
    stage.update();
  });
  stage.addChild(s);
  stage.update();
}
// create a rectangle 'background' Shape object to cover the stage
// (to allow for capturing mouse drags on anything except other shapes).
bg = new createjs.Shape();
bg.graphics.beginFill("LightGray").drawRect(10, 10, stage.canvas.width - 20, stage.canvas.height - 20); // deliberately smaller for debugging purposes (easier to see if it moves).
bg.x = 0;
bg.y = 0;
stage.addChild(bg);
stage.update();

// create a rectangle frame to represent the position of the stage.
stageborder = new createjs.Shape();
stageborder.graphics.beginStroke("Black").drawRect(0, 0, stage.canvas.width, stage.canvas.height);
stageborder.x = 0;
stageborder.y = 0;
stage.addChild(stageborder);
stage.update();
// MOUSEWHEEL ZOOM LISTENER - anywhere on canvas.
var factor;
canvas.addEventListener("wheel", function(e) {
  if (Math.max(-1, Math.min(1, (e.wheelDelta || -e.detail))) > 0) {
    factor = 1.1;
  } else {
    factor = 1 / 1.1;
  }
  var local = stage.globalToLocal(stage.mouseX, stage.mouseY);
  stage.regX = local.x;
  stage.regY = local.y;
  stage.x = stage.mouseX;
  stage.y = stage.mouseY;
  stage.scaleX = stage.scaleX * factor;
  stage.scaleY = stage.scaleY * factor;

  // re-size the 'background' shape to be the same as the canvas size.
  bg.graphics.command.w = bg.graphics.command.w / factor;
  bg.graphics.command.h = bg.graphics.command.h / factor;

  // re-position the 'background' shape to its original position of (0,0) in the global space.
  var localzero = stage.globalToLocal(0, 0);
  bg.x = localzero.x;
  bg.y = localzero.y;
  stage.update();
});
// listener to add circles to the canvas.
canvas.addEventListener('dblclick', function() {
  var localpos = stage.globalToLocal(stage.mouseX, stage.mouseY);
  addCircle(10, localpos.x, localpos.y);
});
bg.addEventListener("mousedown", function(ev1){
// purpose of this listener is to be able to capture drag events on the 'background' to pan the whole stage.
// it needs to be a separate 'shape' object (rather than the stage itself), so that it doesn't fire when other shape objects are drag-moved around on the stage.
// get the initial positions of the stage, background, and mousedown.
var mousedownPos0 = {'x': ev1.stageX, 'y': ev1.stageY};
var stagePos0 = {'x': stage.x, 'y': stage.y};
var bgPos0 = {'x': bg.x, 'y': bg.y};
bg.addEventListener('pressmove', function(ev2){
//logic is to pan the stage, which will automatically pan all of it's children (shapes).
// except we want the 'background' shape to stay where it is, so we need to offset it in the opposite direction to the stage movement so that it stays where it is.
stageDelta = {'x': ev2.stageX - mousedownPos0.x, 'y': ev2.stageY - mousedownPos0.y};
//adjust the stage position
stage.x = stagePos0.x + stageDelta.x;
stage.y = stagePos0.y + stageDelta.y;
// return the 'background' shape to global(0,0), so that it doesn't move with the stage.
var localzero = stage.globalToLocal(0,0);
bg.x = localzero.x;
bg.y = localzero.y;
stage.update();
});
});
The grey box is my background shape. I have deliberately made it slightly smaller than the canvas, so that I can see where it is (useful for debugging).
Double click anywhere on the canvas to add some red circles.
If you drag a circle, it only moves that circle.
If you drag on the grey ‘background’ area in between circles, it moves the whole stage (and therefore all the child shapes belonging to the stage).
Because the grey background is also a child of the stage, it wants to move with it. So I have included some code to always return that grey box back to where it started.
The black border represents the position of the ‘stage’, I just added it to help visualise where the stage is.
The mousewheel zoom control is based on the answer to this question: EaselJS - broken panning on zoomed image
Similar to drag-panning, when zooming I have to adjust the size and position of the grey ‘background’ box so that it renders in the same position on the canvas.
However, it doesn’t stay exactly where I want it to… it seems to creep up towards the top left corner of the canvas when I zoom out.
I’ve spent quite some time trying to diagnose this behaviour and can’t find out why it’s happening. I suspect it may have something to do with rounding.. but I’m really not sure.
Can anyone explain why my grey box isn't staying stationary when I zoom in and out?
An alternative method would be to scrap the ‘background’ shape used for capturing mousedown events that aren’t on a ‘proper’ shape.
Instead, it might be possible to use the ‘stage’ mousedown event, but prevent it from moving the stage if the mouse is over a ‘shape’.
Would this be a better way of handling this behaviour? Any suggestions how to prevent it from moving the stage?
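I'm imagining something along these lines, though I haven't tested it, and the getObjectUnderPoint hit-test is a guess on my part:

// Rough idea: only pan if the press didn't land on an existing shape.
stage.on("stagemousedown", function(ev) {
  var local = stage.globalToLocal(ev.stageX, ev.stageY);
  // If something is under the pointer (e.g. a circle), let its own
  // pressmove handler deal with it and don't pan.
  if (stage.getObjectUnderPoint(local.x, local.y)) {
    return;
  }
  var stagePos0 = {x: stage.x, y: stage.y};
  var mousePos0 = {x: ev.stageX, y: ev.stageY};
  var moveListener = stage.on("stagemousemove", function(ev2) {
    stage.x = stagePos0.x + (ev2.stageX - mousePos0.x);
    stage.y = stagePos0.y + (ev2.stageY - mousePos0.y);
    stage.update();
  });
  stage.on("stagemouseup", function() {
    stage.off("stagemousemove", moveListener);
  }, null, true); // once = true
});

That would remove the need for the grey ‘background’ shape entirely.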
Thanks in advance,
Hugh.
Ok,
So as usually happens, after finally asking for help, I managed to work out the problem.
The issue was caused by making the background shape (grey rectangle) 10px smaller than the canvas, so that I could see its position more clearly (to assist with debugging). How ironic that this offset was causing the issue.
The 10px offset was not being converted into the 'local' space when the zoom was applied.
By making the grey rectangle's graphic position at (0,0) with width and height equal to that of the canvas, the problem went away!
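For reference, the corrected background setup is just the original lines with the full canvas dimensions:

// Background graphic drawn at (0,0) and sized to the full canvas, so the
// zoom/pan adjustments line up with the stage coordinates.
bg = new createjs.Shape();
bg.graphics.beginFill("LightGray").drawRect(0, 0, stage.canvas.width, stage.canvas.height);
bg.x = 0;
bg.y = 0;
stage.addChild(bg);
stage.update();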
Hope this is of use to someone at some point in time.
Cheers,
Hugh.
I am creating a web-based annotation application for annotating images via the HTML canvas element and Javascript. I would like the user to mouse down to indicate the start of the rectangle, drag to the desired end coordinate and let go to indicate the opposite end of the rectangle.
Currently, I am able to take the starting coordinates and end coordinates to create a rectangle on the image with the context.rect() function. However, as I am uncertain how to resize a specific rectangle on the canvas, that leaves me with the rectangle only being drawn after the user has released the mouse click.
How would I be able to resize a specific rectangle created onmousedown while dragging?
The following is the code snippet that performs the function:
var isMouseDown = false;
var startX;
var startY;

canvas.onmousedown = function(e) {
  if (annMode) {
    isMouseDown = true;
    var offset = $(this).offset();
    startX = parseInt(e.pageX - offset.left);
    startY = parseInt(e.pageY - offset.top);
  }
};

canvas.onmousemove = function(e) {
  if (isMouseDown) {
    var offset = $(this).offset();
    var intermediateX = parseInt(e.pageX - offset.left);
    var intermediateY = parseInt(e.pageY - offset.top);
    console.log(intermediateX);
  }
};

canvas.onmouseup = function(e) {
  if (annMode && isMouseDown) {
    isMouseDown = true;
    var offset = $(this).offset();
    var endX = parseInt(e.pageX - offset.left);
    var endY = parseInt(e.pageY - offset.top);
    var width = endX - startX;
    var height = endY - startY;
    context.strokeStyle = "#FF0000";
    context.rect(startX, startY, width, height);
    context.stroke();
  }
  isMouseDown = false;
};
Here my handy-front-end scripts come in handy!
As I understood the question, you wanted to be able to move your mouse to any point on the canvas, hold the left mouse button, and drag in any direction to make a rectangle between the starting point and any new mouse position. And when you release the mouse button it will stay.
Scripts that will help you accomplish what you are trying to do:
https://github.com/GustavGenberg/handy-front-end/blob/master/README.md#canvasjs
https://github.com/GustavGenberg/handy-front-end/blob/master/README.md#pointerjs
Both scripts just make the code a lot cleaner and easier to understand, so I used those.
Here is a fiddle, about as simple as you can make it really, using
const canvas = new Canvas([]);
and
const mouse = new Pointer();
https://jsfiddle.net/0y8cbao3/
Did I understand your question correctly?
Do you want a version with comments describing every line and what it does?
There are still some bugs at the moment, but I'm going to fix those soon!
EDIT
After reading your question again, I reacted to: "...however as I am uncertain on how to resize a specific rectangle on the canvas...".
Canvas is like an image. Once you have drawn to it, you can NOT "resize" different shapes. You can only clear the whole canvas and start over (of course you can clear small portions too).
That's why the Canvas helper is so helpful. To be able to "animate" the canvas, you have to create a loop that redraws the canvas with a new frame every 16 ms (60 fps).
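As a rough sketch of that idea in plain canvas code (without my helper scripts, and assuming the startX/startY and intermediateX/intermediateY values from your handlers are kept in outer-scope variables):

// Redraw loop: clear and redraw the in-progress rectangle every frame.
function drawFrame() {
  context.clearRect(0, 0, canvas.width, canvas.height);
  // ...redraw the image and any previously saved rectangles here...
  if (isMouseDown) {
    context.strokeStyle = "#FF0000";
    context.strokeRect(startX, startY, intermediateX - startX, intermediateY - startY);
  }
  requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);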
The canvas API does not preserve references to specific shapes drawn with it (unlike SVG). The canvas API simply provides convenient functions to apply operations to the individual pixels of the canvas element.
You have a couple options to achieve a draggable rectangle:
You can position a styled div over your canvas while the user is dragging. Create a container for your canvas and the div, and update the position and size of the div as the mouse moves. When the user releases, draw your rectangle. Your container needs to have position: relative and the div needs to be absolutely positioned. Ensure the div has a higher z-index than the canvas.
In your mouse down method, set div.style.display to block. Then update the position (style.left, style.top, style.width, and style.height) as the mouse is dragged. When the mouse is released, hide it again (style.display = 'none').
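A minimal sketch of that overlay approach (the element ids, the markup, and the pointer-events: none rule that stops the div from stealing mouse events are my own assumptions):

// Assumes markup like:
// <div id="container" style="position: relative">
//   <canvas id="canvas"></canvas>
//   <div id="selection" style="position: absolute; display: none;
//        border: 1px dashed red; pointer-events: none; z-index: 1"></div>
// </div>
// (the canvas sits at the container's top-left, so offsetX/offsetY line up)
var selection = document.getElementById('selection');
var dragStartX, dragStartY, dragging = false;

canvas.onmousedown = function(e) {
  dragging = true;
  dragStartX = e.offsetX;
  dragStartY = e.offsetY;
  selection.style.display = 'block';
};

canvas.onmousemove = function(e) {
  if (!dragging) return;
  // Normalize so dragging up/left still gives a positive width/height.
  selection.style.left = Math.min(dragStartX, e.offsetX) + 'px';
  selection.style.top = Math.min(dragStartY, e.offsetY) + 'px';
  selection.style.width = Math.abs(e.offsetX - dragStartX) + 'px';
  selection.style.height = Math.abs(e.offsetY - dragStartY) + 'px';
};

canvas.onmouseup = function(e) {
  dragging = false;
  selection.style.display = 'none';
  // Draw the final rectangle onto the canvas here, as in the original code.
};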
You can manually store references to each item you want to draw, clear the canvas (context.clearRect), and redraw each item on the canvas each frame. This kind of setup is usually achieved through recursive usage of the window.requestAnimationFrame method. This method takes a callback and executes on the next draw cycle of the browser.
The first option is probably easier to achieve in your case. If you plan to expand the capabilities of your app further, the second will provide more versatility. A basic loop would be implemented like so:
// setup code, create canvas & context
function mainLoop() {
  context.clearRect(0, 0, canvas.width, canvas.height);
  /** do your logic here and re-draw **/
  requestAnimationFrame(mainLoop);
}

function startApp() {
  requestAnimationFrame(mainLoop);
}
This tutorial has detailed explanation of event loops for HTML canvas: http://www.isaacsukin.com/news/2015/01/detailed-explanation-javascript-game-loops-and-timing
I also have a fully featured implementation on my GitHub that's part of a rendering engine I wrote: https://github.com/thunder033/mallet/blob/master/src/mallet/webgl/webgl-app.ts#L115
I have a pixi.js html canvas with thousands of objects on it and I want the user to be able to zoom into it with the usual rectangular selection area. The brute force way to implement this would be to draw the rectangle on each mouse move and rerender the whole stage. But this seems like a waste of CPU. Plus this is so common in user interfaces, that I suspect that there is already some function in pixi.js or a plugin that solves this.
If there is no plugin: If I could save the whole buffer to some 2nd buffer when the user presses the mouse button, I could draw the rectangle on top, and on every mouse move, copy back the 2nd buffer to the primary buffer before drawing the rectangle. This would mean that I didn't have to redraw everything on every mouse move. But I don't think that one can clone the current buffer to some named secondary buffer.
Another alternative would be to move a rectangular DOM object on top of the canvas, but then I am afraid that the current pixel position will be hard to relate to the pixi.js / html5 canvas pixels.
Is there a better way? Or some plugin / search engine keyword that I'm missing? How would you implement a rubber band in html canvas or pixi.js ?
I ended up solving this with a separate DOM object that is moved over the canvas. The solution also requires the new interaction manager in PIXI 4, that offers a single callback for any mouse movement over the canvas.
In the following, I assume that the canvas is placed at canvasLeft and canvasTop pixels with CSS.
$(document.body).append("<div style='position:absolute; display:none; border: 1px solid black' id='tpSelectBox'></div>");

renderer = new PIXI.CanvasRenderer(0, 0, opt);

// setup the mouse zooming callbacks
renderer.plugins.interaction.on('mousedown', function(ev) {
  mouseDownX = ev.data.global.x;
  mouseDownY = ev.data.global.y;
  $("#tpSelectBox").css({left: mouseDownX + canvasLeft, top: mouseDownY + canvasTop}).show();
});

renderer.plugins.interaction.on('mousemove', function(ev) {
  if (mouseDownX == null)
    return;
  var x = ev.data.global.x;
  var y = ev.data.global.y;
  var selectWidth = Math.abs(x - mouseDownX);
  var selectHeight = Math.abs(y - mouseDownY);
  var minX = Math.min(x, mouseDownX);
  var minY = Math.min(y, mouseDownY);
  var posCss = {
    "left": minX + canvasLeft,
    "top": minY + canvasTop,
    "width": selectWidth,
    "height": selectHeight
  };
  $("#tpSelectBox").css(posCss);
});

renderer.plugins.interaction.on('mouseup', function(ev) {
  $("#tpSelectBox").hide();
  mouseDownX = null;
  mouseDownY = null;
  $("#tpSelectBox").css({"width": 0, "height": 0});
});
For older versions of PIXI, here is an example of pan/zoom without a rectangle:
https://github.com/Arduinology/Pixi-Pan-and-Zoom/blob/master/js/functions.js
In May 2015, the Interaction Manager got extended to allow easier pan/zoom handling (https://github.com/pixijs/pixi.js/issues/1825), which is what I'm using here.
I need to track mouse position relative to a <canvas> element in my app. Currently, I have a mousemove event listener attached to the <canvas> that updates my mouse position whenever it fires, using offsetX/offsetY when available, or layerX/layerY when the offsetX/Y is not available. Using offsetX/Y or layerX/Y gives me mouse coordinates relative to my <canvas>, which is exactly what I want. As my app works its magic, various CSS 3d transformations get applied to the <canvas>, and even when <canvas> is very transformed, offsetX/Y still gives me accurate coordinates within the <canvas>'s local, transformed coordinate-space.
That's kind of confusing, so I'll try stating an example. If my <canvas> is 100px in both width and height, and is located at (0,0) relative to the browser viewport, and I click at (50,50) (in viewport coords), that corresponds to (50,50) in my <canvas>, and 50 is the value that is (correctly) returned via offsetX and offsetY. If I then apply transform: translate3d(20px,20px,0px) to my <canvas> and click at (50,50) (in viewport coords), since my canvas has been shifted 20px down and 20px to the right, that actually corresponds to (30,30) relative to the <canvas>, and 30 is the value that is (correctly) returned via offsetX and offsetY.
The problem I'm facing is what to do when the user is not physically moving the mouse, yet the <canvas> is being transformed. I'm only updating the position of the mouse on mousemove events, so what do I do when there is no mousemove?
For example. My mouse is positioned at (50,50) and no transformations are applied to the <canvas>. My this.mouseX and this.mouseY are both equal to 50; they were saved at the final mousemove event when I moved the mouse to (50,50). Without moving the mouse at all, I apply the above transformation (transform: translate3d(20px,20px,0px)) to my <canvas>. Now, I need this.mouseX and this.mouseY to each be equal to 30, as that is my mouse's new position relative to the current transformation of <canvas>. But this.mouseX and this.mouseY are still equal to 50. Since I never moved the mouse, there was no mousemove event fired, and those saved coords never got updated.
How can I deal with this? I thought about creating a new jQuery event, manually assigning some properties (pageX and pageY?) based on my old/previous mouse position, and then triggering that event, but I don't think that's going to cause the browser to recalculate the offsetX and offsetY properties. I've also been thinking about taking the known old/previous mouse position and multiplying it by my transformation matrix, but that's going to get real complicated since my mouse coordinates are in 2d-space, but the transformations I'm applying to <canvas> are all 3d transformations.
I guess really, what I want to do is take my known 2d page position and raycast it into the 3d space and find out where I'm hitting the transformed <canvas>, all in javascript (jQuery is available).
Is this possible? Does this even make sense?
Works in all browsers
var mouseX = 0;
var mouseY = 0;
var lastClientX = 0;
var lastClientY = 0;
var canvas = document.querySelector('#canvas');

document.onmousemove = function(e) {
  // remember the raw viewport coordinates so updateCoords can re-derive
  // the canvas-relative position even when no new event fires
  lastClientX = e.clientX;
  lastClientY = e.clientY;
};

function updateCoords() {
  // re-read the canvas position in case it has moved or been transformed
  var rect = canvas.getBoundingClientRect();
  mouseX = lastClientX - rect.left;
  mouseY = lastClientY - rect.top;
  setTimeout(updateCoords, 10);
}
Now we can call the updateCoords() function once to start repeatedly checking for the new position.
updateCoords();
You can add your code inside the updateCoords() function and it will be executed every 10 milliseconds.
Concept: mouseX and mouseY variables get updated on mousemove event, and also get updated when there is any change in the canvas position.
It looks like you want to refresh your mouse-position values even if you don't move your mouse. You should try something like this:
var event = '';
var counter = 1;

$(function(e) {
  event = e;
  window.setInterval(refresh, 10);
});

$(document).mousemove(function(e) {
  event = e;
  refresh();
});

function refresh() {
  counter++;
  $('#mousepos').val("event.pageX: " + event.pageX + ", event.pageY: " + event.pageY + ", counter: " + counter);
}
The counter is just for visualisation of the refresh. You can set the interval to anything you want (10 = 10 ms = 0.01 s). Just move everything from your .mousemove() event into this refresh() function and call it properly, and your mouse position should update even if you don't move your mouse.
Look at this fiddle for a live example: http://jsfiddle.net/82cmxw8L/1
EDIT:
Because my fiddle didn't work for the asker, I updated it: http://jsfiddle.net/82cmxw8L/8/
What's new is that the mouse position is now set every 0.1 seconds, no matter what, rather than being updated only when the mouse moves.
I am working on some image viewing tools in KineticJS. I have a rotate tool. When you move the mouse over an image, a line appears from the centre of the image to the mouse position, and then when you click and move, the line follows and the image rotates around that point, in sync with the line. This works great.
My issue is, I have the following set up:
Canvas->
Stage->
Layer->
GroupA->
GroupB->
Image
This is because I draw tabs for options on GroupA and regard it as a container for the image. GroupB is used because I flip GroupB to flip the image (and, down the track, any objects like Text and Paths that I add to the image), so that it flips but stays in place. This all works well. I am also hoping, when I offer zoom, to zoom GroupB and thus zoom anything drawn on the image, while GroupA provides clipping and continues to support drag buttons, etc.
The object I am rotating is GroupA. Here is the method I call to set up rotation:
this.init = function(control)
{
  console.log("Initing rotate for : " + control.id());
  RotateTool.isMouseDown = false;
  RotateTool.startRot = isNaN(control.getRotationDeg()) ? 0 : control.getRotationDeg();
  RotateTool.lastAngle = control.parent.rotation() / RotateTool.factor;
  RotateTool.startAngle = control.parent.rotation();
  this.image = control.parent;
  var center = this.getCentrePoint();
  RotateTool.middleX = this.image.getAbsolutePosition().x + center.x;
  RotateTool.middleY = this.image.getAbsolutePosition().y + center.y;
  this.image.x(this.image.x() + center.x - this.image.offsetX());
  this.image.y(this.image.y() + center.y - this.image.offsetY());
  this.image.offsetX(center.x);
  this.image.offsetY(center.y);
}
getCentrePoint is a method that uses trig to get the size of the image, based on the rotation. As I draw a line to the centre of the image, I can tell it's working well, to start with. I've also stepped into it, and it always returns values only slightly higher than the actual width and height; they always look like about what I'd expect for the angle of the image.
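For reference, the basic trig it uses is along the lines of the axis-aligned bounding size of the rotated image (simplified here; the real method may differ):

// Axis-aligned bounding size of a w x h image rotated by `deg` degrees.
// For small angles this comes out only slightly larger than w and h.
function rotatedSize(w, h, deg) {
  var rad = deg * Math.PI / 180;
  var cos = Math.abs(Math.cos(rad));
  var sin = Math.abs(Math.sin(rad));
  return {
    width: w * cos + h * sin,
    height: w * sin + h * cos
  };
}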
Here is the code I use on mouse move to rotate the image:
this.layerMouseMove = function (evt, layer)
{
  if (RotateTool.isRotating == false)
    return;

  if (!Util.isNull(this.image) && !Util.isNull(this.line))
  {
    if (Item.moving && !RotateTool.isRotating)
    {
      console.log("layer mousemove item moving");
      RotateTool.layerMouseUp(evt, layer);
    }
    else
    {
      var pt = this.translatePoint(evt.x, evt.y);
      var x = pt.x;
      var y = pt.y;
      var end = this.getPoint(x, y, .8);
      RotateTool.line.points([this.middleX, this.middleY, end.x, end.y]);
      RotateTool.line.parent.draw();
      RotateTool.sign.x(x - 20);
      RotateTool.sign.y(y - 20);
      var angle = Util.findAngle({ x: RotateTool.startX, y: RotateTool.startY }, { x: x, y: y }, { x: RotateTool.middleX, y: RotateTool.middleY });
      var newRot = (angle) + RotateTool.startAngle;
      RotateTool.image.rotation(newRot);
      console.log(newRot);
    }
  }
}
Much of this code is ephemeral; it's maintaining the line (which is 80% of the length from the centre to my mouse, as I also show a rotate icon over the mouse).
Sorry for the long-windedness; I'm trying to make sure I am clear, and that it's obvious that I've done a lot of work before asking for help.
So, here is my issue. After I've rotated a few times, when I click again, the 'centre' point that the line draws to is way off the bottom right of my screen, and if I set a break point, sure enough, the absolute positions of my groups are no longer in sync. This seems to me like my rotation has moved the image in the manner I hoped, but moved my group off screen. When I set offsetX and offsetY, do I need to also set them on all the children? But it's the bottom child I can see, and the top group I set those things on, so I don't really understand how this is happening.
I do notice my image jumps a few pixels when I move the mouse over it (which is when the init method is called), so I feel like perhaps I am just out slightly somewhere, and it's causing this flow-on effect. I've done some more testing: my image always jumps slightly up and to the right when I move the mouse over it, and the rotate tool continues to work reliably so long as I don't move the mouse over the image again, which causes my init method to be called. It seems like every time this is called is when it breaks. So, I could just call it once, but then I'd have to associate the data with the image, for the simple reason that once I have many images, I'll need to change my data as my selected image changes. Either way, I'd prefer to understand and resolve the issue rather than just hide it.
Any help appreciated.