If I create a text element and add a drag handler it does not move on a touch device. I can see that the move event gets triggered, but the element does not drag. If I create a path element and attach the same drag handlers that path element moves just fine on a touch device. All elements move fine on both Chrome and Firefox on my laptop.
Is there anything special that I have to do to get text elements to drag on a touch device?
Here's a snippet of what I'm doing. I did remove some pieces of code that were unrelated, so it may look a little strange.
var word = paper.text(xVal, yVal, "text");
word.drag(window.move, window.start, window.up);
// Raphael passes cumulative dx/dy measured from the drag start,
// so track the last values and apply only the difference each move.
window.start = function() {
    this.lastDx = 0;
    this.lastDy = 0;
};
window.move = function(dx, dy) {
    this.transform("...T" + (dx - this.lastDx) + "," + (dy - this.lastDy));
    this.lastDx = dx;
    this.lastDy = dy;
    return this;
};
window.up = function() {
    this.lastDx = 0;
    this.lastDy = 0;
};
After some trial and error I was able to get around this by creating a rectangle that "wraps" the text. I gave the rectangle a fill but with an opacity of 0, then grouped the rectangle and the text together and made the set the draggable piece.
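Roughly, that workaround looks like this (a simplified sketch; the 5px padding and the inline handlers are illustrative, reusing the same delta logic as above):
// The invisible rectangle provides the touch target; dragging it moves the whole set.
var word = paper.text(xVal, yVal, "text");
var box = word.getBBox();
var hit = paper.rect(box.x - 5, box.y - 5, box.width + 10, box.height + 10)
               .attr({ fill: "#fff", "fill-opacity": 0, stroke: "none" });
var group = paper.set(word, hit);
hit.drag(
    function(dx, dy) {                 // move: translate both elements by the delta
        group.transform("...T" + (dx - this.lastDx) + "," + (dy - this.lastDy));
        this.lastDx = dx;
        this.lastDy = dy;
    },
    function() {                       // start: reset the accumulators
        this.lastDx = 0;
        this.lastDy = 0;
    }
);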
There is a fix for this issue in Raphael 2.1.1. Prior versions reported NaN for dx/dy on touch devices, so if you were using those values for dragging it wouldn't work.
See https://github.com/DmitryBaranovskiy/raphael/issues/328
I’m having some trouble incorporating pan/zoom behaviour with the ability to also drag-move some shapes around on the canvas, using EaselJS.
I want to be able to move the shape ONLY if I mousedown on it, but if I mousedown on the stage (i.e. not on a shape), then I want to be able to pan the stage.
This behaviour needs to be consistent regardless of the ‘zoom’ level (which is altered by the mousewheel).
I have read this: How to stop the event bubble in easljs? Which suggests that the stage mousedown events will fire regardless of whether I have clicked on a shape or empty space, so it would be better to create a ‘background’ shape to capture my mousedown events that are not on a ‘proper’ shape.
This fiddle is how I have set it up: https://jsfiddle.net/hmcleay/mzheuLbg/
var stage = new createjs.Stage("myCanvas");
console.log('stage.scaleX: ', stage.scaleX);
console.log('stage.scaleY: ', stage.scaleY);
function addCircle(r, x, y) {
    var g = new createjs.Graphics().beginFill("#ff0000").drawCircle(0, 0, r);
    var s = new createjs.Shape(g);
    s.x = x;
    s.y = y;
    s.on('pressmove', function(ev) {
        var localpos = stage.globalToLocal(ev.stageX, ev.stageY);
        s.x = localpos.x;
        s.y = localpos.y;
        stage.update();
    });
    stage.addChild(s);
    stage.update();
}
// create a rectangle 'background' Shape object to cover the stage (to allow for capturing mouse drags on anything except other shapes).
bg = new createjs.Shape();
bg.graphics.beginFill("LightGray").drawRect(10, 10, stage.canvas.width - 20, stage.canvas.height - 20); //deliberately smaller for debugging purposes (easier to see if it moves).
bg.x = 0;
bg.y = 0;
stage.addChild(bg);
stage.update();
//create a rectangle frame to represent the position of the stage.
stageborder = new createjs.Shape();
stageborder.graphics.beginStroke("Black").drawRect(0, 0, stage.canvas.width, stage.canvas.height);
stageborder.x = 0;
stageborder.y = 0;
stage.addChild(stageborder);
stage.update();
// MOUSEWHEEL ZOOM LISTENER - anywhere on canvas.
var factor;
canvas.addEventListener("wheel", function(e) {
    if (Math.max(-1, Math.min(1, (e.wheelDelta || -e.detail))) > 0) {
        factor = 1.1;
    } else {
        factor = 1 / 1.1;
    }
    var local = stage.globalToLocal(stage.mouseX, stage.mouseY);
    stage.regX = local.x;
    stage.regY = local.y;
    stage.x = stage.mouseX;
    stage.y = stage.mouseY;
    stage.scaleX = stage.scaleX * factor;
    stage.scaleY = stage.scaleY * factor;
    // re-size the 'background' shape to be the same as the canvas size.
    bg.graphics.command.w = bg.graphics.command.w / factor;
    bg.graphics.command.h = bg.graphics.command.h / factor;
    // re-position the 'background' shape to its original position of (0,0) in the global space.
    var localzero = stage.globalToLocal(0, 0);
    bg.x = localzero.x;
    bg.y = localzero.y;
    stage.update();
});
// listener to add circles to the canvas.
canvas.addEventListener('dblclick', function() {
    var localpos = stage.globalToLocal(stage.mouseX, stage.mouseY);
    addCircle(10, localpos.x, localpos.y);
});
bg.addEventListener("mousedown", function(ev1) {
    // purpose of this listener is to be able to capture drag events on the 'background' to pan the whole stage.
    // it needs to be a separate 'shape' object (rather than the stage itself), so that it doesn't fire when other shape objects are drag-moved around on the stage.
    // get the initial positions of the stage, background, and mousedown.
    var mousedownPos0 = {'x': ev1.stageX, 'y': ev1.stageY};
    var stagePos0 = {'x': stage.x, 'y': stage.y};
    var bgPos0 = {'x': bg.x, 'y': bg.y};
    bg.addEventListener('pressmove', function(ev2) {
        // logic is to pan the stage, which will automatically pan all of its children (shapes).
        // except we want the 'background' shape to stay where it is, so we offset it in the opposite direction to the stage movement.
        var stageDelta = {'x': ev2.stageX - mousedownPos0.x, 'y': ev2.stageY - mousedownPos0.y};
        // adjust the stage position
        stage.x = stagePos0.x + stageDelta.x;
        stage.y = stagePos0.y + stageDelta.y;
        // return the 'background' shape to global (0,0), so that it doesn't move with the stage.
        var localzero = stage.globalToLocal(0, 0);
        bg.x = localzero.x;
        bg.y = localzero.y;
        stage.update();
    });
});
The grey box is my background shape. I have deliberately made it slightly smaller than the canvas, so that I can see where it is (useful for debugging).
Double click anywhere on the canvas to add some red circles.
If you drag a circle, it only moves that circle.
If you drag on the grey ‘background’ area in between circles, it moves the whole stage (and therefore all the child shapes belonging to the stage).
Because the grey background is also a child of the stage, it wants to move with it. So I have included some code to always return that grey box back to where it started.
The black border represents the position of the ‘stage’; I just added it to help visualise where the stage is.
The mousewheel zoom control is based on the answer to this question: EaselJS - broken panning on zoomed image
Similar to drag-panning, when zooming I have to adjust the size and position of the grey ‘background’ box so that it renders in the same position on the canvas.
However, it doesn’t stay exactly where I want it to… it seems to creep up towards the top left corner of the canvas when I zoom out.
I’ve spent quite some time trying to diagnose this behaviour and can’t work out why it’s happening. I suspect it may have something to do with rounding, but I’m really not sure.
Can anyone explain why my grey box isn't staying stationary when I zoom in and out?
An alternative method would be to scrap the ‘background’ shape used for capturing mousedown events that aren’t on a ‘proper’ shape.
Instead, it might be possible to use the ‘stage’ mousedown event, but prevent it from moving the stage if the mouse is over a ‘shape’.
Would this be a better way of handling this behaviour? Any suggestions how to prevent it from moving the stage?
Thanks in advance,
Hugh.
OK, so as usually happens, after finally asking for help I managed to work out the problem.
The issue was caused by making the background shape (grey rectangle) 10px smaller than the canvas, so that I could see its position more clearly (to assist with debugging). How ironic that this offset was causing the issue.
The 10px offset was not being converted into the 'local' space when the zoom was applied.
By making the grey rectangle's graphic position at (0,0) with width and height equal to that of the canvas, the problem went away!
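For the record, the corrected background setup looks roughly like this (same variable names as the snippet above):
// The background graphic is drawn at (0,0) and exactly covers the canvas,
// so the existing pan/zoom offset logic keeps it rendered in place.
bg = new createjs.Shape();
bg.graphics.beginFill("LightGray").drawRect(0, 0, stage.canvas.width, stage.canvas.height);
bg.x = 0;
bg.y = 0;
stage.addChild(bg);
stage.update();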
Hope this is of use to someone at some point in time.
Cheers,
Hugh.
I have an SVG visualization of the distribution of CSS4 color keywords in HSL space here: https://meyerweb.com/eric/css/colors/hsl-dist.html
I recently added zooming via the mouse wheel, and panning via mouse click-and-drag. I’m able to convert a point from screen space to SVG coordinate space using matrixTransform, .getScreenCTM(), and .inverse() thanks to example code I found online, but how do I convert mouse movements during dragging? Right now I’m just shifting the viewBox coordinates by the X and Y values from the event, which means the image drag is faster than the mouse movement when zoomed in.
As an example, suppose I’m zoomed in on the image and am dragging to pan, and I jerk the mouse leftwards and slightly downwards. event.movementX returns -37 and event.movementY returns 6. How do I determine how far that equates to in SVG coordinates, so that the viewBox coordinates are shifted properly?
(Note: I’m aware that there are libraries for this sort of thing, but I’m intentionally writing vanilla JS code in order to learn more about both SVG and JS. So please, don’t post “lol just use library X” and leave it at that. Thanks!)
Edited to add: I was asked to post code. Posting the entire JS seems overlong, but this is the function that fires on mousemove events:
function dragger(event) {
    var target = document.getElementById('color-wheel');
    var coords = parseViewBox(target);
    coords.x -= event.movementX;
    coords.y -= event.movementY;
    changeViewBox(target, coords);
}
If more is needed, then view source on the linked page; all the JS is at the top of the page. Nothing is external except for a file that just contains all the HSL values and color names for the visualization.
My recommendation:
Don't worry about the movementX/Y properties on the event.
Just worry about where the mouse started and where it is now.
(This has the additional benefit that you get the same result even if you miss some events: maybe because the mouse moved out of the window, or maybe because you want to group events so you only run the code once per animation frame.)
For where the mouse started, you measure that on the mousedown event.
Convert it to a position in the SVG coordinates, using the method you were using,
with .getScreenCTM().inverse() and .matrixTransform().
After this conversion, you don't care where on the screen this point is. You only care about where it is in the picture. That's the point in the picture that you're always going to move to be underneath the mouse.
On the mousemove events, you use that same conversion method to find out where the mouse currently is within the current SVG coordinate system. Then you figure out how far that is from the point (again, in SVG coordinates) that you want underneath the mouse. That's the amount that you use to transform the graphic. I've followed your example and am doing the transform by shifting the x and y parts of the viewBox:
function move(e) {
    var targetPoint = svgCoords(e, svg);
    shiftViewBox(anchorPoint.x - targetPoint.x,
                 anchorPoint.y - targetPoint.y);
}
You can also shift the graphic around with a transform on a group (<g> element) within the SVG; just be sure to use that same group element for the getScreenCTM() call that converts from the clientX/Y event coordinates.
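As a rough sketch of that variant (assuming the drawing sits inside a group such as <g id="pan-group">, which is not in the markup below), the translation replaces the viewBox shift:
// Hypothetical alternative to shiftViewBox(): pan by translating a <g> element.
var panGroup = document.getElementById("pan-group");
var tx = 0, ty = 0;
function shiftGroup(deltaX, deltaY) {
    // Moving the content is the opposite of moving the view,
    // so subtract the same deltas that shiftViewBox() would add.
    tx -= deltaX;
    ty -= deltaY;
    panGroup.setAttribute("transform", "translate(" + tx + "," + ty + ")");
}
// In svgCoords(), pass panGroup as elem so the conversion uses the group's CTM.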
Full demo for the drag to pan. I've skipped all your drawing code and the zooming effect.
But the zoom should still work, because the only position you're saving in global values is already converted into SVG coordinates.
var svg = document.querySelector("svg");
var anchorPoint;

function shiftViewBox(deltaX, deltaY) {
    svg.viewBox.baseVal.x += deltaX;
    svg.viewBox.baseVal.y += deltaY;
}

function svgCoords(event, elem) {
    var ctm = elem.getScreenCTM();
    var pt = svg.createSVGPoint();
    // Note: rest of method could work with another element,
    // if you don't want to listen to drags on the entire svg.
    // But createSVGPoint only exists on <svg> elements.
    pt.x = event.clientX;
    pt.y = event.clientY;
    return pt.matrixTransform(ctm.inverse());
}

svg.addEventListener("mousedown", function(e) {
    anchorPoint = svgCoords(e, svg);
    window.addEventListener("mousemove", move);
    window.addEventListener("mouseup", cancelMove);
});

function cancelMove(e) {
    window.removeEventListener("mousemove", move);
    window.removeEventListener("mouseup", cancelMove);
    anchorPoint = undefined;
}

function move(e) {
    var targetPoint = svgCoords(e, svg);
    shiftViewBox(anchorPoint.x - targetPoint.x,
                 anchorPoint.y - targetPoint.y);
}
body {
    display: grid;
    margin: 0;
    min-height: 100vh;
}
svg {
    margin: auto;
    width: 70vmin;
    height: 70vmin;
    border: thin solid gray;
    cursor: move;
}
<svg viewBox="-40 -40 80 80">
    <polygon fill="skyBlue"
             points="0 -40, 40 0, 0 40, -40 0" />
</svg>
So the script needs a way to coordinate the distance the SVG moves with the distance the mouse moves on screen. Even though the event fires on your target, the SVG, the MouseEvent properties relate to the screen alone.
The movementX read-only property of the MouseEvent interface provides the difference in the X coordinate of the mouse pointer between the given event and the previous mousemove event. In other words, the value of the property is computed like this: currentEvent.movementX = currentEvent.screenX - previousEvent.screenX.
From https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent/movementX
The screenX read-only property of the MouseEvent interface provides the horizontal coordinate (offset) of the mouse pointer in global (screen) coordinates.
So what you're measuring, and to the best of my knowledge the only thing you can measure directly without additional libraries or complication, is the movement of the pointer in pixel terms across the screen. The only way to make this work as a movement vector for your SVG is to translate that on-screen movement into the dimensions that are relevant to your scaled SVG.
My initial thinking was that you could work out the scaling of the SVG object from some combination of its viewBox and its actual width on screen. Naturally, what initially appears sensible is not: that approach won't work, and if it appears to, it would be purely by chance.
But it turns out that the solution is essentially to use the same type of code you've used in your scaling when you approach your mouse movements. The .getScreenCTM() and .inverse() functions are exactly what you'll need again. But instead of trying to find a single point on the SVG to work from, you need to find out what the on-screen distance translates to in the SVG by comparing two points on the SVG instead.
What I provide here isn't necessarily the most optimal solution but hopefully helps explain and gives you something to work further from...
function dragger(event) {
    var target = document.getElementById('color-wheel');
    var coords = parseViewBox(target);

    // Get an initial point in the SVG to start measuring from
    var start_pt = target.createSVGPoint();
    start_pt.x = 0;
    start_pt.y = 0;
    var svgcoord = start_pt.matrixTransform(target.getScreenCTM().inverse());

    // Create a point within the same SVG that is equivalent to
    // the px movement by the pointer
    var comparison_pt = target.createSVGPoint();
    comparison_pt.x = event.movementX;
    comparison_pt.y = event.movementY;
    var svgcoord_plus_movement = comparison_pt.matrixTransform(target.getScreenCTM().inverse());

    // Use the two SVG points created from screen position values to determine
    // the in-SVG distance to change coordinates
    coords.x -= (svgcoord_plus_movement.x - svgcoord.x);
    // Repeat the above, but for the Y axis
    coords.y -= (svgcoord_plus_movement.y - svgcoord.y);

    // Deliver the changes to the SVG to update the view
    changeViewBox(target, coords);
}
Sorry for the long-winded answer, but hopefully it explains things from the beginning well enough that anyone else looking for an answer can get the whole picture, even if they've not come as far as you have in this script.
From MouseEvent, we have clientX and movementX. Taken together, we can deduce our last location. We can then take the transform of our current location and subtract it from the transform of our last location:
element.onpointermove = e => {
    const { clientX, clientY, movementX, movementY } = e;
    const DOM_pt = svg.createSVGPoint();

    // Current pointer location in SVG coordinates
    DOM_pt.x = clientX;
    DOM_pt.y = clientY;
    const { x, y } = DOM_pt.matrixTransform(svg.getScreenCTM().inverse());

    // Previous pointer location (clientX/Y minus movementX/Y), also in SVG coordinates
    DOM_pt.x -= movementX;
    DOM_pt.y -= movementY;
    const { x: last_x, y: last_y } = DOM_pt.matrixTransform(svg.getScreenCTM().inverse());

    const dx = last_x - x;
    const dy = last_y - y;
    // TODO: use dx & dy
};
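For instance, if the goal is the same viewBox panning as above, dx and dy could plausibly be applied like this (an assumption on my part, since the snippet leaves the usage as a TODO):
// Possible usage: shift the viewBox by the SVG-space delta so the image follows the pointer.
const vb = svg.viewBox.baseVal;
vb.x += dx;
vb.y += dy;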
I have figured out a nice effect for an application that needs several UI elements but also requires as much screen space as possible.
The idea of the effect is that the UI buttons almost disappear as soon as you move the mouse too far away from them.
I have made a jsFiddle in case you'd like to see it.
It's quite simple:
window.addEventListener("mousemove", function(e) {
    var rect = element.getBoundingClientRect();
    // Centre of the div
    var top = rect.top + (rect.bottom - rect.top) / 2;
    var left = rect.left + (rect.right - rect.left) / 2;
    // Mouse position
    var x = e.clientX;
    var y = e.clientY;
    // Thank Pythagoras for this
    var distance = Math.sqrt(Math.pow(x - left, 2) + Math.pow(y - top, 2));
    // Brightness in the interval [0.1, 1]
    var brightness = Math.min(1, Math.max(0.1, 100 / distance));
    element.style.opacity = brightness + "";
});
This jsFiddle was also supposed to demonstrate the problem I have - but it runs unexpectedly smoothly.
The problem is that browsers seem to buffer the CSS changes if they are too frequent. This is a very smart performance strategy, but in my case it quite breaks the effect.
I have also uploaded a test script here. In Google Chrome, the buffering appeared to be so strong (and unsynchronized) that the buttons sometimes flickered.
Should I implement some kind of frame skipping so that the animation effect doesn't trigger the browser's buffering?
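Something like this rough sketch is what I have in mind (untested; computeBrightness is a hypothetical helper wrapping the distance maths above):
// Keep recording the latest value on mousemove, but only write it to the style
// once per animation frame.
var pendingOpacity = null;
var frameScheduled = false;
window.addEventListener("mousemove", function(e) {
    pendingOpacity = computeBrightness(e);
    if (!frameScheduled) {
        frameScheduled = true;
        requestAnimationFrame(function() {
            element.style.opacity = pendingOpacity + "";
            frameScheduled = false;
        });
    }
});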
My buttons have their bottom border cut off. If you know why this happens, please let me know in the comments.
I've made a drawing tool using emberjs and raphaeljs and recently started supporting touchscreens.
The application allows users to connect dots with lines.
My problem:
my dots are too small for touchscreens, and I don't want to resize them because I like the aesthetic. I've also disabled user scaling so users can't pinch to make the dots bigger, which makes the web application feel more like a native application.
Here's my code for the event handlers:
startpoints.mousedown(this.startCurve);
this is my startCurve function:
startCurve: function (evt, x, y) {
    var me = this.data('view');
    me.set('dragStartedPoint', this);
    if (evt.target === undefined) {
        // IE
        x = evt.srcElement.offsetLeft + 10;
        y = evt.srcElement.offsetTop + 10;
    } else {
        x = evt.target.cx.baseVal.value;
        y = evt.target.cy.baseVal.value;
    }
    me.set('mouseStartX', x);
    me.set('mouseStartY', y);
    me.set('mouseStopX', x);
    me.set('mouseStopY', y);
    currentCurve = me.curve(me.makePath(x, y, x, y));
    if (this.data('type') == 'answer') {
        currentCurve.data('answer', this.data('answer'));
    } else {
        currentCurve.data('subQuestion', this.data('subQuestion'));
    }
    me.set('isDragging', true);
    me.set('currentCurve', currentCurve);
    evt.preventDefault();
    return false;
}
After this I go on to drawing a line as long as the mouse button is held down.
When the mouse button is released on a point in my set of endpoints, the line stays in place.
When it is released on anything else, I remove the line.
My question:
Is there a way for me to make the clickable area around my dots bigger, in an invisible way for the users, without resizing the dots?
Thanks
Draw a second, larger transparent dot on top of each real dot and attach the mousedown event handler to the transparent dot. You may need to adjust the pointer-events property to ensure that the transparent dot gets mouse events.
There are various ways to draw a transparent dot, e.g. fill="none" or fill-opacity="0".
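A minimal sketch of that idea in Raphael (the radius and the startCurve wiring are assumptions based on the question's code):
// An invisible, larger circle on top of each start point acts as the touch target.
var dot = paper.circle(cx, cy, 5).attr({ fill: "#000" });
var hitArea = paper.circle(cx, cy, 20).attr({
    fill: "#fff",
    "fill-opacity": 0,   // invisible, but still painted, so it receives events
    stroke: "none",
    cursor: "pointer"
});
hitArea.data('view', dot.data('view'));   // copy over whatever data the handler expects
hitArea.mousedown(this.startCurve);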
Recently, I have started dabbling with HTML5 and the mighty canvas. However, I am trying to accomplish something and I am not sure what the best way would be to handle it.
I am trying to make a randomly generated set of buildings with windows, as you can see in the following example that uses divs:
Example Using Divs
The issue that I am coming up with is that I want to be able to randomly generate images/content in the windows of these buildings, and be able to easily capture when a window is clicked, but I am not sure how to go about handling it using the canvas.
Example Building:
function Building()
{
    this.floors = Math.floor((Math.random()+1)*7); //Number of Floors
    this.windows = Math.floor((Math.random()+1)*3); //Windows per Floor
    this.height = (this.floors*12); //1px window padding
    this.width = (this.windows*12); //1px window padding
    //Do I need to build an array with all of my windows and their locations
    //to determine if a click occurs?
}
Example Window:
function Window(x,y)
{
    this.x = x; //X Coordinate
    this.y = y; //Y Coordinate
    this.color = null;      //Random color from a range
    this.hasPerson = false; //Determines if a person is in the window
    this.hasObject = false; //Determines if an object is in the window
}
Summary: I am trying to generate random buildings with windows, however I am unsure how to go about building them, displaying them and tracking window locations using the canvas.
UPDATE:
I was finally able to get the buildings to generate as I was looking for, however now all I need to do is generate the windows within the buildings and keep track of their locations.
Building Generation Demo
I guess if you are drawing the windows, you already have the function that creates their canvas path. So you can call isPointInPath for each window to determine whether the user clicked on it.
canvasContext.beginPath();
// (functions corresponding to your window path)
canvasContext.closePath();
var isInWindow = canvasContext.isPointInPath(clicPosX, clicPosY);
Actually, you have to check where the mouse is clicked and, if it's in a window, call some function. And yes, you will need to have an array of locations.
Take a look here
Draw your squares using fillRect, store their north-western point coordinates into an array. You'll also need these rectangles' dimensions, but since they are all equal squares — no need to store them in the array.
Then add a click listener to the canvas element, in which detect the pointer's position via pageX/pageY minus the position of the canvas element.
Then on each click traverse the array of rectangles and see if they contain the pointer's coordinates:
if (((pageX > rectX && pageX < rectX + rectWidth) || (pageX < rectX && pageX > rectX + rectWidth)) &&
    ((pageY > rectY && pageY < rectY + rectHeight) || (pageY < rectY && pageY > rectY + rectHeight))) {
    /* clicked on a window */
}
Demo.
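Putting those pieces together, a rough sketch might look like this (the canvas id, colour, and windowSize are placeholders, and getBoundingClientRect() is used here as one way to get the pointer position relative to the canvas):
// Draw windows, remember their top-left corners, and hit-test them on click.
var canvas = document.getElementById("city");
var ctx = canvas.getContext("2d");
var windowSize = 10;      // all windows are equal squares
var windowRects = [];     // top-left corner of every drawn window
function drawWindow(x, y) {
    ctx.fillStyle = "#ffd24a";
    ctx.fillRect(x, y, windowSize, windowSize);
    windowRects.push({ x: x, y: y });
}
canvas.addEventListener("click", function(e) {
    var bounds = canvas.getBoundingClientRect();
    var px = e.clientX - bounds.left;
    var py = e.clientY - bounds.top;
    windowRects.forEach(function(r) {
        if (px > r.x && px < r.x + windowSize &&
            py > r.y && py < r.y + windowSize) {
            /* clicked on this window */
        }
    });
});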