Bigger clickable area for raphaeljs objects - javascript

I've made a drawing tool using emberjs and raphaeljs and recently started supporting touchscreens.
The application allows users to connect dots with lines.
My problem:
My dots are too small for touchscreens, and I don't want to resize them because I like the aesthetic. I've also disabled user scaling, so users can't pinch-zoom to make the dots bigger and the web application feels more like a native application.
Here's my code for the event handlers:
startpoints.mousedown(this.startCurve);
This is my startCurve function:
startCurve: function (evt, x, y) {
    var me = this.data('view');
    me.set('dragStartedPoint', this);
    if (evt.target === undefined) {
        // IE exposes the element as srcElement instead of target
        x = evt.srcElement.offsetLeft + 10;
        y = evt.srcElement.offsetTop + 10;
    } else {
        x = evt.target.cx.baseVal.value;
        y = evt.target.cy.baseVal.value;
    }
    me.set('mouseStartX', x);
    me.set('mouseStartY', y);
    me.set('mouseStopX', x);
    me.set('mouseStopY', y);
    var currentCurve = me.curve(me.makePath(x, y, x, y));
    if (this.data('type') == 'answer') {
        currentCurve.data('answer', this.data('answer'));
    } else {
        currentCurve.data('subQuestion', this.data('subQuestion'));
    }
    me.set('isDragging', true);
    me.set('currentCurve', currentCurve);
    evt.preventDefault();
    return false;
}
After this I go on drawing the line for as long as the mouse button is held down.
When the button is released on a point in my set of endpoints, the line stays in place.
When the button is released on anything else, I remove the line.
My question:
Is there a way for me to make the clickable area around my dots bigger, in an invisible way for the users, without resizing the dots?
Thanks

Draw a second, transparent dot on top of each real dot and attach the mousedown event handler to the transparent dot. You may need to adjust the pointer-events property to ensure that the transparent dot receives mouse events.
There are various ways to draw a transparent dot, e.g. fill="none" or fill-opacity="0".
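As a minimal sketch of that idea in Raphael (assuming the paper object and startpoints set from the question; the 20px radius and the copied data keys are illustrative, not taken from the original code):
// Sketch: add a larger, invisible hit circle over each dot and attach
// the handler there instead of on the dot itself.
startpoints.forEach(function (dot) {
    var hit = paper.circle(dot.attr('cx'), dot.attr('cy'), 20); // touch-friendly radius
    // fill-opacity 0 keeps the circle invisible but still hit-testable;
    // fill 'none' would need pointer-events set explicitly, so set it anyway
    hit.attr({ fill: '#fff', 'fill-opacity': 0, 'stroke-width': 0 });
    hit.node.setAttribute('pointer-events', 'all');
    // copy across whatever data the mousedown handler reads from the real dot
    hit.data('view', dot.data('view'));
    hit.data('type', dot.data('type'));
    hit.data('answer', dot.data('answer'));
    hit.data('subQuestion', dot.data('subQuestion'));
    hit.mousedown(startCurve); // same handler as on the original dots
});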

Related

How to trace / track user input on touch screen to match image on screen

I want to have an image, let's say a square, and I want the user to 'trace' the image with their finger. I need to track this and determine whether the user has done it correctly.
The use case is educational software where users trace shapes to learn how to draw them.
My thinking was an SVG object and then mouse-hold events, but because an SVG path has beginning and end points, I am not sure whether the finger could be tracked all the way over the image.
Also, how? If I have interaction on the SVG, is it a matter of if statements and some tolerance around the line, so that if the user gets too far away from the original line the trace stops / breaks?
Sorry if this is explained badly; I could find almost nothing on this, so I'm also sorry if it turns out to be a duplicate.
Found this article: https://www.smashingmagazine.com/2018/05/svg-interaction-pointer-events-property/
And: https://gist.github.com/elidupuis/11325438 / http://bl.ocks.org/elidupuis/11325438
So I could maybe cobble something together, but any direction would be appreciated.
var xmlns = "http://www.w3.org/2000/svg";
var scribble = null;

function setMouseCoordinates(event) {
    // Bounding box of the element with the svg's id and its position relative
    // to the viewport; the svg width/height equal the image width and height
    var boundary = document.getElementById('<id_of_image_svg>').getBoundingClientRect();
    // x position of the mouse relative to the svg
    var auth_mouseX = event.clientX - boundary.left;
    // y position of the mouse relative to the svg
    var auth_mouseY = event.clientY - boundary.top;
    // return an object (not an array) so the caller can destructure by name
    return { auth_mouseX: auth_mouseX, auth_mouseY: auth_mouseY };
}

// call this from mousedown or the corresponding touch event
function drawPath(event) {
    var { auth_mouseX, auth_mouseY } = setMouseCoordinates(event);
    // creates an element with the specified namespace URI and qualified name
    scribble = document.createElementNS(xmlns, 'path');
    // stroke color of the drawing drawn by the scribble drawing tool
    scribble.style.stroke = 'red';
    // stroke width of the scribble drawing
    scribble.style.strokeWidth = '2';
    scribble.style.fill = 'none';
    // start the path at the pointer position
    scribble.setAttributeNS(null, 'd', 'M' + auth_mouseX + ' ' + auth_mouseY);
    // the path must be appended to the svg to become visible
    document.getElementById('<id_of_image_svg>').appendChild(scribble);
}
Now you can add this function to the mousedown or corresponding touch event and build the rest of the logic around it. You have to track whether the user has started or stopped drawing with a variable, changing its value accordingly.
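As a rough sketch of that missing piece (the isDrawing flag and the handler wiring are assumptions, not code from above), the mousemove / touchmove handler could append a line-to segment to the path created in drawPath:
// Sketch: extend the current scribble path while the pointer moves.
// Assumes scribble was created in drawPath() and isDrawing is the state
// variable toggled in your mousedown/mouseup (or touch) handlers.
function extendPath(event) {
    if (!isDrawing || !scribble) return;
    var { auth_mouseX, auth_mouseY } = setMouseCoordinates(event);
    // append an "L x y" (line-to) segment to the existing path data
    var d = scribble.getAttribute('d');
    scribble.setAttribute('d', d + ' L' + auth_mouseX + ' ' + auth_mouseY);
}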
For evaluating whether the traced shape stays within a given area, you would have to use some shape-detection API or AI service (there are many of them, e.g. AWS Rekognition).
Hope it helps anyhow. :)

How do I translate mouse movement distances to SVG coordinate space?

I have an SVG visualization of the distribution of CSS4 color keywords in HSL space here: https://meyerweb.com/eric/css/colors/hsl-dist.html
I recently added zooming via the mouse wheel, and panning via mouse click-and-drag. I’m able to convert a point from screen space to SVG coordinate space using matrixTransform, .getScreenCTM(), and .inverse() thanks to example code I found online, but how do I convert mouse movements during dragging? Right now I’m just shifting the viewBox coordinates by the X and Y values from the event, which means the image drag is faster than the mouse movement when zoomed in.
As an example, suppose I’m zoomed in on the image and am dragging to pan, and I jerk the mouse leftwards and slightly downwards. event.movementX returns -37 and event.movementY returns 6. How do I determine how far that equates to in SVG coordinates, so that the viewBox coordinates are shifted properly?
(Note: I’m aware that there are libraries for this sort of thing, but I’m intentionally writing vanilla JS code in order to learn more about both SVG and JS. So please, don’t post “lol just use library X” and leave it at that. Thanks!)
Edited to add: I was asked to post code. Posting the entire JS seems overlong, but this is the function that fires on mousemove events:
function dragger(event) {
    var target = document.getElementById('color-wheel');
    var coords = parseViewBox(target);
    coords.x -= event.movementX;
    coords.y -= event.movementY;
    changeViewBox(target, coords);
}
If more is needed, then view source on the linked page; all the JS is at the top of the page. Nothing is external except for a file that just contains all the HSL values and color names for the visualization.
My recommendation:
Don't worry about the movementX/Y properties on the event.
Just worry about where the mouse started and where it is now.
(This has the additional benefit that you get the same result even if you miss some events: maybe because the mouse moved out of the window, or maybe because you want to group events so you only run the code once per animation frame.)
For where the mouse started, you measure that on the mousedown event.
Convert it to a position in the SVG coordinates, using the method you were using,
with .getScreenCTM().inverse() and .matrixTransform().
After this conversion, you don't care where on the screen this point is. You only care about where it is in the picture. That's the point in the picture that you're always going to move to be underneath the mouse.
On the mousemove events, you use that same conversion method to find out where the mouse currently is within the current SVG coordinate system. Then you figure out how far that is from the point (again, in SVG coordinates) that you want underneath the mouse. That's the amount that you use to transform the graphic. I've followed your example and am doing the transform by shifting the x and y parts of the viewBox:
function move(e) {
    var targetPoint = svgCoords(e, svg);
    shiftViewBox(anchorPoint.x - targetPoint.x,
                 anchorPoint.y - targetPoint.y);
}
You can also shift the graphic around with a transform on a group (<g> element) within the SVG; just be sure to use that same group element for the getScreenCTM() call that converts from the clientX/Y event coordinates.
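A minimal sketch of that variant, assuming a hypothetical <g id="pan-group"> wrapper around the drawing (not part of the demo below):
// Sketch: pan by translating a group instead of shifting the viewBox.
// Pass this same group to svgCoords()/getScreenCTM() so the anchor and
// target points are measured in the group's own coordinate system.
var panGroup = document.getElementById("pan-group");
var offset = { x: 0, y: 0 };

function shiftGroup(deltaX, deltaY) {
    // same convention as shiftViewBox: a positive delta shifts the view,
    // which means the content itself moves the opposite way
    offset.x -= deltaX;
    offset.y -= deltaY;
    panGroup.setAttribute("transform",
        "translate(" + offset.x + " " + offset.y + ")");
}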
Full demo for the drag to pan. I've skipped all your drawing code and the zooming effect.
But the zoom should still work, because the only position you're saving in global values is already converted into SVG coordinates.
var svg = document.querySelector("svg");
var anchorPoint;

function shiftViewBox(deltaX, deltaY) {
    svg.viewBox.baseVal.x += deltaX;
    svg.viewBox.baseVal.y += deltaY;
}

function svgCoords(event, elem) {
    var ctm = elem.getScreenCTM();
    var pt = svg.createSVGPoint();
    // Note: the rest of this method could work with another element,
    // if you don't want to listen to drags on the entire svg.
    // But createSVGPoint only exists on <svg> elements.
    pt.x = event.clientX;
    pt.y = event.clientY;
    return pt.matrixTransform(ctm.inverse());
}

svg.addEventListener("mousedown", function(e) {
    anchorPoint = svgCoords(e, svg);
    window.addEventListener("mousemove", move);
    window.addEventListener("mouseup", cancelMove);
});

function cancelMove(e) {
    window.removeEventListener("mousemove", move);
    window.removeEventListener("mouseup", cancelMove);
    anchorPoint = undefined;
}

function move(e) {
    var targetPoint = svgCoords(e, svg);
    shiftViewBox(anchorPoint.x - targetPoint.x,
                 anchorPoint.y - targetPoint.y);
}
body {
    display: grid;
    margin: 0;
    min-height: 100vh;
}
svg {
    margin: auto;
    width: 70vmin;
    height: 70vmin;
    border: thin solid gray;
    cursor: move;
}
<svg viewBox="-40 -40 80 80">
    <polygon fill="skyBlue"
             points="0 -40, 40 0, 0 40, -40 0" />
</svg>
So the script needs a way to coordinate the distances moved in the SVG with the distances moved by the mouse on screen. Even though the event fires on your target, the SVG, the MouseEvent properties relate to the screen alone.
The movementX read-only property of the MouseEvent interface provides the difference in the X coordinate of the mouse pointer between the given event and the previous mousemove event. In other words, the value of the property is computed like this: currentEvent.movementX = currentEvent.screenX - previousEvent.screenX.
From https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent/movementX
The screenX read-only property of the MouseEvent interface provides the horizontal coordinate (offset) of the mouse pointer in global (screen) coordinates.
So what you're measuring, and to the best of my knowledge the only thing you can measure directly without additional libraries or complications, is the movement of the pointer in pixel terms across the screen. The only way to make this work as a movement vector for your SVG is to translate the on-screen movement into the dimensions that are relevant to your scaled SVG.
My initial thinking was that you could work out the scaling of the SVG object from some combination of its viewBox and its actual width on the screen. Naturally, what initially appears sensible is not: this approach won't work, and if it ever appears to, that would be purely by chance.
But it turns out that the solution is essentially to use the same type of code you've used in your scaling when you approach your mouse movements. The .getScreenCTM() and .inverse() functions are exactly what you'll need again. But instead of trying to find a single point on the SVG to work from, you need to find out what the on-screen distance translates to in the SVG by comparing two points on the SVG instead.
What I provide here isn't necessarily the most optimal solution but hopefully helps explain and gives you something to work further from...
function dragger(event) {
    var target = document.getElementById('color-wheel');
    var coords = parseViewBox(target);

    // Get an initial point in the SVG to start measuring from
    var start_pt = target.createSVGPoint();
    start_pt.x = 0;
    start_pt.y = 0;
    var svgcoord = start_pt.matrixTransform(target.getScreenCTM().inverse());

    // Create a point within the same SVG that is equivalent to
    // the px movement by the pointer
    var comparison_pt = target.createSVGPoint();
    comparison_pt.x = event.movementX;
    comparison_pt.y = event.movementY;
    var svgcoord_plus_movement = comparison_pt.matrixTransform(target.getScreenCTM().inverse());

    // Use the two SVG points created from screen position values to
    // determine the in-SVG distance to change coordinates
    coords.x -= (svgcoord_plus_movement.x - svgcoord.x);
    // Repeat the above, but for the Y axis
    coords.y -= (svgcoord_plus_movement.y - svgcoord.y);

    // Deliver the changes to the SVG to update the view
    changeViewBox(target, coords);
}
Sorry for the long-winded answer, but hopefully it explains things from the beginning well enough that anyone else looking for an answer can get the whole picture, even if they've not come as far as you have with this script.
From MouseEvent, we have clientX and movementX. Taken together, we can deduce our last location. We can then subtract the transform of our last location from the transform of our current location:
element.onpointermove = e => {
    const { clientX, clientY, movementX, movementY } = e;
    const DOM_pt = svg.createSVGPoint();
    DOM_pt.x = clientX;
    DOM_pt.y = clientY;
    // current pointer position, transformed into SVG coordinates
    const { x, y } = DOM_pt.matrixTransform(svg.getScreenCTM().inverse());
    // previous pointer position: client coordinates minus the movement deltas
    DOM_pt.x -= movementX;
    DOM_pt.y -= movementY;
    const { x: last_x, y: last_y } = DOM_pt.matrixTransform(svg.getScreenCTM().inverse());
    // the distance moved, in SVG units
    const dx = x - last_x;
    const dy = y - last_y;
    // TODO: use dx & dy
};
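To act on the TODO, the SVG-space deltas can then be applied just as in the viewBox approach above, for example:
// pan the viewBox by the movement measured in SVG units
svg.viewBox.baseVal.x -= dx;
svg.viewBox.baseVal.y -= dy;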

Drawing mouse selection area (rubber band) with Pixi.js / html Canvas

I have a pixi.js html canvas with thousands of objects on it, and I want the user to be able to zoom into it with the usual rectangular selection area. The brute-force way to implement this would be to draw the rectangle on each mouse move and rerender the whole stage. But this seems like a waste of CPU. Plus, this is so common in user interfaces that I suspect there is already some function in pixi.js or a plugin that solves this.
If there is no plugin: If I could save the whole buffer to some 2nd buffer when the user presses the mouse button, I could draw the rectangle on top, and on every mouse move, copy back the 2nd buffer to the primary buffer before drawing the rectangle. This would mean that I didn't have to redraw everything on every mouse move. But I don't think that one can clone the current buffer to some named secondary buffer.
Another alternative would be to move a rectangular DOM object on top of the canvas, but then I am afraid that the current pixel position will be hard to relate to the pixi.js / html5 canvas pixels.
Is there a better way? Or some plugin / search engine keyword that I'm missing? How would you implement a rubber band in html canvas or pixi.js ?
I ended up solving this with a separate DOM object that is moved over the canvas. The solution also requires the new interaction manager in PIXI 4, which offers a single callback for any mouse movement over the canvas.
In the following, I assume that the canvas is placed at canvasLeft and canvasTop pixels with CSS.
$(document.body).append("<div style='position:absolute; display:none; border: 1px solid black' id='tpSelectBox'></div>");

renderer = new PIXI.CanvasRenderer(0, 0, opt);

// setup the mouse zooming callbacks
renderer.plugins.interaction.on('mousedown', function(ev) {
    mouseDownX = ev.data.global.x;
    mouseDownY = ev.data.global.y;
    $("#tpSelectBox").css({left: mouseDownX + canvasLeft, top: mouseDownY + canvasTop}).show();
});

renderer.plugins.interaction.on('mousemove', function(ev) {
    if (mouseDownX == null)
        return;
    var x = ev.data.global.x;
    var y = ev.data.global.y;
    var selectWidth = Math.abs(x - mouseDownX);
    var selectHeight = Math.abs(y - mouseDownY);
    var minX = Math.min(x, mouseDownX);
    var minY = Math.min(y, mouseDownY);
    var posCss = {
        "left": minX + canvasLeft,
        "top": minY + canvasTop,
        "width": selectWidth,
        "height": selectHeight
    };
    $("#tpSelectBox").css(posCss);
});

renderer.plugins.interaction.on('mouseup', function(ev) {
    $("#tpSelectBox").hide();
    mouseDownX = null;
    mouseDownY = null;
    $("#tpSelectBox").css({"width": 0, "height": 0});
});
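The canvasLeft/canvasTop offsets assumed above can be derived from the canvas element itself; a sketch, using renderer.view (PIXI's underlying canvas element):
// derive the canvas offsets once the canvas has been added to the DOM
var canvasRect = renderer.view.getBoundingClientRect();
var canvasLeft = canvasRect.left + window.pageXOffset;
var canvasTop = canvasRect.top + window.pageYOffset;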
For older versions of PIXI, here is an example of pan/zoom without a rectangle:
https://github.com/Arduinology/Pixi-Pan-and-Zoom/blob/master/js/functions.js
In May 2015, the Interaction Manager got extended to allow easier pan/zoom handling https://github.com/pixijs/pixi.js/issues/1825 which is what I'm using here.

Rotating a shape in KineticJS seems to move it out of sync with its group

I am working on some image viewing tools in KineticJS. I have a rotate tool. When you move the mouse over an image, a line appears from the centre of the image to the mouse position, and then when you click and move, the line follows and the image rotates around that point, in sync with the line. This works great.
My issue is, I have the following set up:
Canvas ->
  Stage ->
    Layer ->
      GroupA ->
        GroupB ->
          Image
This is because I draw tabs for options on GroupA and regard it as a container for the image. GroupB is used because I flip GroupB to flip the image (and, down the track, any objects like Text and Paths that I add to the image), so that it flips but stays in place. This all works well. I am also hoping, when I offer zoom, to zoom GroupB and thus zoom anything drawn on the image, but for GroupA to provide clipping and continue to support drag buttons, etc.
The object I am rotating is GroupA. Here is the method I call to set up rotation:
this.init = function(control)
{
    console.log("Initing rotate for: " + control.id());
    RotateTool.isMouseDown = false;
    RotateTool.startRot = isNaN(control.getRotationDeg()) ? 0 : control.getRotationDeg();
    RotateTool.lastAngle = control.parent.rotation() / RotateTool.factor;
    RotateTool.startAngle = control.parent.rotation();
    this.image = control.parent;
    var center = this.getCentrePoint();
    RotateTool.middleX = this.image.getAbsolutePosition().x + center.x;
    RotateTool.middleY = this.image.getAbsolutePosition().y + center.y;
    this.image.x(this.image.x() + center.x - this.image.offsetX());
    this.image.y(this.image.y() + center.y - this.image.offsetY());
    this.image.offsetX(center.x);
    this.image.offsetY(center.y);
}
getCentrePoint is a method that uses trigonometry to get the size of the image based on its rotation. As I draw a line to the centre of the image, I can tell it works well to start with. I've also stepped into it, and it always returns values only slightly higher than the actual width and height; they always look about what I'd expect for the angle of the image.
Here is the code I use on mouse move to rotate the image:
this.layerMouseMove = function (evt, layer)
{
    if (RotateTool.isRotating == false)
        return;
    if (!Util.isNull(this.image) && !Util.isNull(this.line))
    {
        if (Item.moving && !RotateTool.isRotating)
        {
            console.log("layer mousemove item moving");
            RotateTool.layerMouseUp(evt, layer);
        }
        else
        {
            var pt = this.translatePoint(evt.x, evt.y);
            var x = pt.x;
            var y = pt.y;
            var end = this.getPoint(x, y, .8);
            RotateTool.line.points([this.middleX, this.middleY, end.x, end.y]);
            RotateTool.line.parent.draw();
            RotateTool.sign.x(x - 20);
            RotateTool.sign.y(y - 20);
            var angle = Util.findAngle({ x: RotateTool.startX, y: RotateTool.startY },
                                       { x: x, y: y },
                                       { x: RotateTool.middleX, y: RotateTool.middleY });
            var newRot = angle + RotateTool.startAngle;
            RotateTool.image.rotation(newRot);
            console.log(newRot);
        }
    }
}
Much of this code is ephemeral; it maintains the line (which runs 80% of the way from the centre to my mouse, since I also show a rotate icon over the mouse).
Sorry for the long-windedness; I'm trying to make sure I am clear, and that it's obvious that I've done a lot of work before asking for help.
So, here is my issue. After I've rotated a few times, when I click again, the 'centre' point that the line draws to is way off the bottom right of my screen, and if I set a breakpoint, sure enough, the absolute positions of my groups are no longer in sync. It seems like my rotation has moved the image in the manner I hoped, but moved my group off screen. When I set offsetX and offsetY, do I also need to set them on all the children? But it's the bottom child I can see, and the top group I set those things on, so I don't really understand how this is happening.
I do notice my image jumps a few pixels when I move the mouse over it (which is when the init method is called), so I feel like perhaps I am just out slightly somewhere, and it's causing this flow-on effect. I've done some more testing: my image always jumps slightly up and to the right when I move the mouse over it, and the rotate tool continues to work reliably as long as I don't move the mouse over the image again, causing my init method to be called. It seems like it breaks every time this method runs. So I could just call it once, but then I'd have to associate the data with the image, for the simple reason that once I have many images, I'll need to change my data as the selected image changes. Either way, I'd prefer to understand and resolve the issue than just hide it.
Any help appreciated.

HTML5 Canvas Question | Click Tracking

Recently, I have started dabbling with HTML5 and the mighty canvas. However, I am trying to accomplish something and I am not sure what the best way would be to handle it.
I am trying to make a randomly generated set of buildings with windows, as you can see in the following example that uses divs:
Example Using Divs
The issue that I am coming up with is that I want to be able to randomly generate images/content in the windows of these buildings, and be able to easily capture when a window is clicked, but I am not sure how to go about handling it using the canvas.
Example Building:
function Building()
{
    this.floors = Math.floor((Math.random() + 1) * 7);  // Number of floors
    this.windows = Math.floor((Math.random() + 1) * 3); // Windows per floor
    this.height = (this.floors * 12); // 1px window padding
    this.width = (this.windows * 12); // 1px window padding

    // Do I need to build an array with all of my windows and their
    // locations to determine if a click occurs?
}
Example Window:
function Window(x, y)
{
    this.x = x; // X coordinate
    this.y = y; // Y coordinate
    this.color = null;      // Random color from a range
    this.hasPerson = false; // Determines if a person is in the window
    this.hasObject = false; // Determines if an object is in the window
}
Summary: I am trying to generate random buildings with windows; however, I am unsure how to go about building them, displaying them, and tracking window locations using the canvas.
UPDATE:
I was finally able to get the buildings to generate as I was looking for, however now all I need to do is generate the windows within the buildings and keep track of their locations.
Building Generation Demo
If you are drawing the windows, you already have the functions that create their canvas paths. So you can apply the isPointInPath function to each window's path to determine whether the user clicked on a window:
canvasContext.beginPath();
// ...functions corresponding to your window path...
canvasContext.closePath();
var isInWindow = canvasContext.isPointInPath(clickPosX, clickPosY);
Actually, you have to check where the mouse was clicked, and if it's in a window, call some function. And yes, you will need to have an array of locations.
Take a look here
Draw your squares using fillRect, store their north-western point coordinates into an array. You'll also need these rectangles' dimensions, but since they are all equal squares — no need to store them in the array.
Then add a click listener to the canvas element, in which detect the pointer's position via pageX/pageY minus the position of the canvas element.
Then on each click traverse the array of rectangles and see if they contain the pointer's coordinates:
if (((pageX > rectX && pageX < rectX + rectWidth) ||
     (pageX < rectX && pageX > rectX + rectWidth)) &&
    ((pageY > rectY && pageY < rectY + rectHeight) ||
     (pageY < rectY && pageY > rectY + rectHeight))) {
    /* clicked on a window */
}
Demo.
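Wired together, the click handling might look like this (a sketch; the windows array and WINDOW_SIZE are assumed bookkeeping from your drawing code, not existing names):
// Sketch: full click wiring for the containment check above.
// `windows` holds the stored {x, y} corner of each drawn window,
// and WINDOW_SIZE is the shared square dimension.
canvas.addEventListener('click', function (e) {
    // position of the pointer relative to the canvas
    var bounds = canvas.getBoundingClientRect();
    var clickX = e.clientX - bounds.left;
    var clickY = e.clientY - bounds.top;
    // test each stored window rectangle for containment
    windows.forEach(function (w) {
        if (clickX > w.x && clickX < w.x + WINDOW_SIZE &&
            clickY > w.y && clickY < w.y + WINDOW_SIZE) {
            /* clicked on this window */
        }
    });
});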
