I am building an Alloy app where I need to authenticate a user based on where they tap on an image loaded with an ImageView. The image has a few marked authentication points; tapping any of them should show whether the tap location was right or wrong. How can I build this?
So far, I am only able to get the tapped location in a touch event listener and the image's properties via the ImageView's rect property. How can I pinpoint the marked locations in the image so I can identify a tap over those specific positions?
Here is the code I am using (derived from this: How to convert coordinates of the image view to the coordinates of the bitmap?):
// eventX, eventY are x and y coordinates of the tapped location
function getScaledCoordinates(imageView, eventX, eventY) {
    // Original height and width of the image
    var originalImageBounds = imageView.toBlob();
    var intrinsicHeight = originalImageBounds.height;
    var intrinsicWidth = originalImageBounds.width;

    // Height and width of the visible (scaled) image
    var imageBounds = imageView.getRect();
    var scaledHeight = imageBounds.height;
    var scaledWidth = imageBounds.width;

    // Ratio of the original image to the scaled image
    var heightRatio = intrinsicHeight / scaledHeight;
    var widthRatio = intrinsicWidth / scaledWidth;

    // Distance of the tap from the left and top of the image bounds
    var scaledImageOffsetX = eventX - imageBounds.x;
    var scaledImageOffsetY = eventY - imageBounds.y;

    // Get the units in device pixels
    // getUnitsInDevicePixel(scaledImageOffsetX, scaledImageOffsetY)

    // Scale these distances according to the scaling ratio
    var originalImageOffsetX = scaledImageOffsetX * widthRatio;
    var originalImageOffsetY = scaledImageOffsetY * heightRatio;

    // Coordinates in the original image, adjusted for the scale factor
    return [originalImageOffsetX, originalImageOffsetY];
}
I have coordinates of the original marked points stored in a data structure.
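The lookup against those stored points is roughly this (a simplified sketch; markedPoints and tolerance stand in for my actual data structure and threshold):
function isTapOnMarkedPoint(imageView, eventX, eventY, markedPoints, tolerance) {
    // Map the tap into original-image coordinates first
    var coords = getScaledCoordinates(imageView, eventX, eventY);
    // Accept the tap if it lands within `tolerance` pixels of any marked point
    return markedPoints.some(function (point) {
        var dx = coords[0] - point.x;
        var dy = coords[1] - point.y;
        return Math.sqrt(dx * dx + dy * dy) <= tolerance;
    });
}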
I can even get this to work, but the solution lacks accuracy: sometimes the tapped location is mapped correctly to the original image, other times it is not. Any suggestions to improve on this?
I was looking for something along the lines of a clickable image area, but I can't find enough documentation to implement that in Titanium. I also read about the idea of placing invisible buttons at the marked locations, but I'm not sure how to get that working in Titanium.
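For the invisible-button idea, I picture something like the following, though I don't know if it's the right approach (sketch only; imageContainer, the hotspot size, and point.viewX/viewY, which would be view coordinates rather than image coordinates, are placeholders):
markedPoints.forEach(function (point) {
    // Transparent 40x40 view centered on the marked point
    var hotspot = Ti.UI.createView({
        left: point.viewX - 20,
        top: point.viewY - 20,
        width: 40,
        height: 40,
        backgroundColor: 'transparent'
    });
    hotspot.addEventListener('click', function () {
        alert('Correct spot!');
    });
    imageContainer.add(hotspot);
});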
Any help will be appreciated. Thanks in advance!
Related
So I want to have an image, let's say a square, and I want the user to 'trace' the image with their finger; I need to track this and know whether the user has done it correctly.
The use case is educational software where users trace shapes to learn how to draw them.
My thinking was an SVG object and then mouse hold events, but because SVG paths have beginning and end points I am not sure whether the tracing could be tracked all the way over the image.
Also, how? If I have interaction on the SVG, is it a matter of if statements and some kind of variance on the line, so that if the user gets too far away from the original line the trace stops/breaks?
Sorry if this is explained badly; I also couldn't find almost anything on this, so I'm sorry if it's a duplicate.
Found this article: https://www.smashingmagazine.com/2018/05/svg-interaction-pointer-events-property/
And: https://gist.github.com/elidupuis/11325438 / http://bl.ocks.org/elidupuis/11325438
So I could maybe cobble something together, but any direction would be appreciated.
var xmlns = "http://www.w3.org/2000/svg";
var auth_mouseX, auth_mouseY;
var scribble;

function setMouseCoordinates(event) {
    // Size of the element with id '<id_of_image_svg>' and its position relative
    // to the viewport; the svg's width/height equal the image's width/height
    var boundary = document.getElementById('<id_of_image_svg>').getBoundingClientRect();
    // x position of the mouse relative to the svg
    auth_mouseX = event.clientX - boundary.left;
    // y position of the mouse relative to the svg
    auth_mouseY = event.clientY - boundary.top;
    return [auth_mouseX, auth_mouseY];
}

// Call this from mousedown (or the corresponding touch event)
function drawPath(event) {
    var coords = setMouseCoordinates(event); // returns [x, y]
    // Creates an element with the SVG namespace URI and the qualified name 'path'
    scribble = document.createElementNS(xmlns, 'path');
    // Stroke color and width of the line drawn by the scribble tool
    scribble.style.stroke = 'red';
    scribble.style.strokeWidth = '2';
    scribble.style.fill = 'none';
    // Start the path at the mouse position
    scribble.setAttributeNS(null, 'd', 'M' + coords[0] + ' ' + coords[1]);
    // The path must be attached to the svg to become visible
    document.getElementById('<id_of_image_svg>').appendChild(scribble);
}
Now you can wire this function into the touch/mouse down event and build the logic around it. You have to track whether the user has started or stopped drawing with a variable, changing its value accordingly.
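A rough sketch of that bookkeeping, building on the functions above (untested; the element id is a placeholder):
var isDrawing = false;
var svg = document.getElementById('<id_of_image_svg>');

svg.addEventListener('mousedown', function (event) {
    isDrawing = true;
    drawPath(event); // start a new path at the pointer position
});
svg.addEventListener('mousemove', function (event) {
    if (!isDrawing) return;
    // Extend the current path with a line segment to the new point
    var coords = setMouseCoordinates(event);
    scribble.setAttributeNS(null, 'd',
        scribble.getAttribute('d') + ' L' + coords[0] + ' ' + coords[1]);
});
svg.addEventListener('mouseup', function () {
    isDrawing = false;
});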
To evaluate whether the trace stays within a given area, you could use a shape detection API or an AI service (there are many; AWS Rekognition is one example).
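A lighter-weight alternative is to check the distance between each traced point and the target path directly, since SVG paths expose getTotalLength() and getPointAtLength(). A rough sketch (assumes the mouse coordinates and the path share the same coordinate system, i.e. no viewBox scaling; the 'target-shape' id and the tolerance are placeholders):
function isNearTargetPath(x, y, tolerance) {
    var target = document.getElementById('target-shape'); // the shape to trace
    var total = target.getTotalLength();
    // Walk the path in small steps and accept the point if any sample is close enough
    for (var len = 0; len <= total; len += 2) {
        var p = target.getPointAtLength(len);
        var dx = p.x - x, dy = p.y - y;
        if (Math.sqrt(dx * dx + dy * dy) <= tolerance) return true;
    }
    return false;
}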
Hope it helps anyhow. :)
I am using three.js to display a video that the user can move through with the mouse; example here:
https://threejs.org/examples/?q=video#webgl_video_panorama_equirectangular
the code: https://github.com/mrdoob/three.js/blob/master/examples/webgl_video_panorama_equirectangular.html
Is there a way to display where the user is looking (position and direction) inside the container, something like this:
http://ignitersworld.com/lab/imageViewer.html
In the top left corner there is a little square showing the current position. I would like to know the position and the direction the view is facing (all in 2D).
How could I achieve that?
edit:
Orientation has been solved.
I am still looking for the position on the video's 2D layout; is this possible? Like in this picture: https://imgur.com/a/xsNYM
You can get the camera direction and calculate the angles. These angles will be your 2D orientation on a sphere:
var dir = camera.getWorldDirection(new THREE.Vector3()); // newer three.js versions require a target vector
var groundProjection = dir.clone().projectOnPlane(new THREE.Vector3(0, 1, 0));
var longitudeRadians = Math.atan2(groundProjection.x, groundProjection.z);
var latitudeRadians = Math.atan2(groundProjection.length(), dir.y);
// longitudeRadians is now an angle between -3.14 and 3.14
// latitudeRadians is now an angle between 0 and 3.14
Here is a running example: https://jsfiddle.net/holgerl/bqvdotps/
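To get the position on the video's 2D layout, those angles map straight onto an equirectangular frame: longitude spans the full width and latitude the full height. A minimal sketch (assumes a <canvas id="minimap"> overlay; the element id and marker size are illustrative):
var minimap = document.getElementById('minimap');
var ctx = minimap.getContext('2d');

function drawViewMarker(longitudeRadians, latitudeRadians) {
    // Map longitude (-PI..PI) to 0..width and latitude (0..PI) to 0..height
    var x = (longitudeRadians / (2 * Math.PI) + 0.5) * minimap.width;
    var y = (latitudeRadians / Math.PI) * minimap.height;
    ctx.clearRect(0, 0, minimap.width, minimap.height);
    ctx.fillStyle = 'red';
    ctx.fillRect(x - 3, y - 3, 6, 6); // small square marking the view center
}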
I am using the ariutta svg-pan-zoom library (https://github.com/ariutta/svg-pan-zoom), together with jquery.layout.js panes and jquery-ui.js.
I would like to save the pan and zoom values of an svg-pan-zoom svg and use those values to jump to the same location in a browser with a different sized window.
Currently I am using getPan() and getZoom() to save the values, then zoom(zoom) and pan(pan) in the other browser to zoom and pan to the same location. That does not work when the browser window size is different.
I found this article, but it does not address a window that is a different size:
Pan to specific X and Y coordinates and center image svg-pan-zoom
Pan and zoom are relative to the current container size. So what you want is to compute the center point of the visible part of the SVG, then, in the new window, compute the pan needed to put that same SVG point back at the center.
As for the zoom, it depends on how you want it to behave, but you could use the real zoom.
To get the container sizes and the real zoom, use instance.getSizes().
So, for example, to compute the x axis do:
var s = instance.getSizes();
var p = instance.getPan();
// Horizontal center of the viewport as a fraction of the SVG content's right edge
var relativeX = s.width / 2 / (p.x + s.viewBox.width * s.realZoom);

// After resize
var s = instance.getSizes();
var p = instance.getPan();
var x = (s.width / 2 / relativeX) - s.viewBox.width * s.realZoom;
instance.pan({x: x, y: 0});
Do the same for the Y axis.
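A rough sketch tying both axes together (assumes `instance` is the svg-pan-zoom instance; how the saved object is transported to the other browser is up to you):
function saveView(instance) {
    var s = instance.getSizes();
    var p = instance.getPan();
    return {
        relX: s.width / 2 / (p.x + s.viewBox.width * s.realZoom),
        relY: s.height / 2 / (p.y + s.viewBox.height * s.realZoom),
        zoom: instance.getZoom()
    };
}

function restoreView(instance, saved) {
    instance.zoom(saved.zoom); // or derive from realZoom, depending on the desired behavior
    var s = instance.getSizes();
    instance.pan({
        x: s.width / 2 / saved.relX - s.viewBox.width * s.realZoom,
        y: s.height / 2 / saved.relY - s.viewBox.height * s.realZoom
    });
}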
I have the following code, which is part of a larger program. I am trying to insert an image from my Google Drive into a Google Doc and have it resized and centered. So far I am able to get the program to insert the image and resize it, but I do not know how to center an InlineImage. I am new to Google Apps Script and have basically been copying other people's examples and modifying them. Your help would be appreciated; let me know if I need to clarify. Again, I am trying to CENTER the InlineImage (var inlineI). Thanks!
var GDoc = DocumentApp.openByUrl("URL"); // I deleted my actual URL

function insertImageFromDrive() {
    var img = DriveApp.getFileById(myImageFileID).getBlob(); // I deleted my actual image ID
    var inlineI = GDoc.getBody().appendImage(img); // insert image (appendImage lives on the document body)

    // Resize the image, preserving its aspect ratio
    var width = inlineI.getWidth();
    var newW = width;
    var height = inlineI.getHeight();
    var newH = height;
    var ratio = width / height;
    Logger.log('w=' + width + ' h=' + height + ' ratio=' + ratio);
    if (width > 320) {
        // max width of image
        newW = 320;
        newH = parseInt(newW / ratio);
    }
    inlineI.setWidth(newW).setHeight(newH); // resizes the image, but it also needs to be centered
}
You need to center-align the paragraph that contains your image. Add this code to do it:
var styles = {};
styles[DocumentApp.Attribute.HORIZONTAL_ALIGNMENT] = DocumentApp.HorizontalAlignment.CENTER;
inlineI.getParent().setAttributes(styles);
The getParent() method gets the container element (the paragraph) holding your image, and the setAttributes() method applies the custom style attributes (center alignment in this case) to that element.
Recently I have started dabbling with HTML5 and the mighty canvas. However, I am trying to accomplish something and I am not sure of the best way to handle it.
I am trying to make a randomly generated set of buildings with windows, as you can see in the following example that uses divs:
Example Using Divs
The issue I am running into is that I want to randomly generate images/content in the windows of these buildings and easily capture when a window is clicked, but I am not sure how to go about handling that with the canvas.
Example Building:
function Building()
{
    this.floors = Math.floor((Math.random() + 1) * 7);  // number of floors (7-13)
    this.windows = Math.floor((Math.random() + 1) * 3); // windows per floor (3-5)
    this.height = this.floors * 12;  // 12px per floor (window plus 1px padding)
    this.width = this.windows * 12;  // 12px per window column

    // Do I need to build an array with all of my windows and their locations
    // to determine if a click occurs?
}
Example Window:
function Window(x, y)
{
    this.x = x;             // x coordinate
    this.y = y;             // y coordinate
    this.color = null;      // TODO: random color from a range
    this.hasPerson = false; // determines if a person is in the window
    this.hasObject = false; // determines if an object is in the window
}
Summary: I am trying to generate random buildings with windows, but I am unsure how to go about building them, displaying them, and tracking window locations using the canvas.
UPDATE:
I was finally able to get the buildings to generate as I was looking for; now all I need to do is generate the windows within the buildings and keep track of their locations.
Building Generation Demo
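For reference, window generation could look roughly like this (a sketch under the 12px grid assumption above; function and parameter names are illustrative):
function generateWindows(building, originX, originY) {
    var windowList = [];
    for (var floor = 0; floor < building.floors; floor++) {
        for (var col = 0; col < building.windows; col++) {
            // 12px grid cell: 10px window with 1px padding on each side
            windowList.push(new Window(originX + col * 12 + 1, originY + floor * 12 + 1));
        }
    }
    return windowList;
}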
I guess if you are drawing the windows, you already have the function that creates each window's canvas path. So you can apply the isPointInPath function to each window's path to determine whether the user clicked on a window.
canvasContext.beginPath();
// (functions corresponding to your window path, e.g. rect/moveTo/lineTo calls)
canvasContext.closePath();
var isInWindow = canvasContext.isPointInPath(clicPosX, clicPosY);
Actually, you have to check where the mouse is clicked, and if it's in a window, call some function. And yes, you will need to have an array of locations.
Take a look here
Draw your squares using fillRect and store their north-western point coordinates in an array. You'll also need the rectangles' dimensions, but since they are all equal squares, there is no need to store those in the array.
Then add a click listener to the canvas element, and in it detect the pointer's position via pageX/pageY minus the position of the canvas element.
Then on each click traverse the array of rectangles and see if any of them contains the pointer's coordinates:
if (pageX > rectX && pageX < rectX + rectWidth &&
    pageY > rectY && pageY < rectY + rectHeight) {
    /* clicked on a window */
}
Demo.
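A minimal sketch of the whole flow (assumes an array `windows` of the question's Window objects and a shared square size; names are illustrative):
var WINDOW_SIZE = 10; // side of every window square
canvas.addEventListener('click', function (event) {
    // Pointer position relative to the canvas element
    var bounds = canvas.getBoundingClientRect();
    var px = event.clientX - bounds.left;
    var py = event.clientY - bounds.top;
    windows.forEach(function (w) {
        if (px > w.x && px < w.x + WINDOW_SIZE &&
            py > w.y && py < w.y + WINDOW_SIZE) {
            /* clicked on this window */
        }
    });
});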