Detect multi-touch finger positions in IE10/IE11 - javascript

I'm developing in JavaScript for IE10/IE11, and am trying to get custom multi-touch events to work. I'm using the MSGesture API, and it provides a nice abstraction layer, but I would like access to the underlying positions of the touched points. Is there a way to get this?
I guess it could be calculated from the initial offset, using the scale/translation information, but I was thinking there would probably be a cleaner way?
Example code:
container.addEventListener("MSGestureChange", function (e) {
    // We have access to e.g.
    // e.scale / e.translationX / e.translationY
    // Now, this is what we need, but it's only for one finger
    // (or is it an average somehow?)
    // e.offsetX
    // e.offsetY
    // Is it possible to get offsetX/Y for each finger (pointer)?
}, false);
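For reference, the raw pointer events that feed MSGesture do carry per-pointer coordinates, so one idea (an untested sketch; the variable names are mine, and IE11 uses the unprefixed pointerdown/pointermove/pointerup names) would be to track each finger by its pointerId alongside the gesture:
var gesture = new MSGesture();
gesture.target = container;

var activePointers = {}; // pointerId -> last known position

container.addEventListener("MSPointerDown", function (e) {
    gesture.addPointer(e.pointerId); // feed the gesture recognizer as usual
    activePointers[e.pointerId] = { x: e.offsetX, y: e.offsetY };
}, false);

container.addEventListener("MSPointerMove", function (e) {
    if (activePointers[e.pointerId]) {
        activePointers[e.pointerId] = { x: e.offsetX, y: e.offsetY };
    }
}, false);

container.addEventListener("MSPointerUp", function (e) {
    delete activePointers[e.pointerId];
}, false);

// Inside the MSGestureChange handler, activePointers then holds the
// current offsetX/offsetY of every finger that is still down.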

Related

collect data from usb mouse macintosh os x

I would like to collect the data from my USB mouse on OS X Sierra, using JavaScript / AngularJS.
Any idea where I can find the inputs?
I would like to get the raw bit packets. More precisely, I would then like to calculate the position, speed, etc. of the cursor.
With some help from https://www.w3schools.com/js/js_events_examples.asp (seriously, this SHOULD have everything you need), you can create a function that calculates the current mouse position and speed.
The event you are looking for is onmousemove. Try assigning a handler to that property on the window object. Your function will be called with an event object that contains clientX and clientY data; use those to track the current position of the mouse.
Of course, to calculate the mouse speed, we just need to know the difference between the last position it was in, and the current position. So this should work:
var Mx = 0; // Mouse X position
var My = 0; // Mouse Y position
var lastSpeedx = 0; // Last movement by mouse on x axis
var lastSpeedy = 0; // Last movement by mouse on y axis
window.onmousemove = function (e) {
    lastSpeedx = e.clientX - Mx;
    lastSpeedy = e.clientY - My;
    Mx = e.clientX;
    My = e.clientY;
};
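If you want an absolute speed rather than the raw per-event deltas, you could also factor in the time between events. A variant of the snippet above (untested sketch, variable names are mine):
var lastX = 0, lastY = 0, lastTime = 0;

window.onmousemove = function (e) {
    var now = e.timeStamp || Date.now();
    if (lastTime !== 0) {
        var dt = (now - lastTime) / 1000;   // seconds since the last event
        var dx = e.clientX - lastX;
        var dy = e.clientY - lastY;
        var speed = dt > 0 ? Math.sqrt(dx * dx + dy * dy) / dt : 0; // pixels per second
        // do something with "speed" here
    }
    lastX = e.clientX;
    lastY = e.clientY;
    lastTime = now;
};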
Although I hear you want to see exactly what comes through that USB port, I'm afraid it's not that simple. Even if JavaScript had some kind of extension that could do that, it wouldn't be fun to deal with the hundreds of different interfaces mice use to talk to your computer. Drivers exist partly to simplify this, the OS simplifies it further, and by the time it reaches your JavaScript engine it has been reduced to something quite basic.
I actually tried to accomplish low-level USB input in C++ a couple of years back (just for fun), but I just couldn't find what I needed.
I'll look into a direct solution for you, along with an AngularJS one (I wrote this answer before I saw the angularjs tag), although I'm not sure there is anything as low-level as you want it to be.

Detect when user reaches maxBounds using Leaflet

I am using Leaflet to show an interactive map to our users.
We want to let them browse through a limited area, and inform them that they have to subscribe if they want to see something farther away (using a popup or equivalent).
So far I have seen that Leaflet supports a maxBounds option.
This is a good start that lets me prevent users from seeing larger areas.
Now I would like to be able to detect a maxBounds 'event' to show the user a pop up.
I have been looking into the Leaflet source code, but couldn't find an obvious way to do it.
So far I have found that the maxBounds option is fed into the setView method.
This method itself uses the _limitCenter method to define the center.
This goes a few levels deeper, down to the _getBoundsOffset method that finally uses the bounds.
_getBoundsOffset: function (pxBounds, maxBounds, zoom) {
    var projectedMaxBounds = toBounds(
            this.project(maxBounds.getNorthEast(), zoom),
            this.project(maxBounds.getSouthWest(), zoom)
        ),
        minOffset = projectedMaxBounds.min.subtract(pxBounds.min),
        maxOffset = projectedMaxBounds.max.subtract(pxBounds.max),
        dx = this._rebound(minOffset.x, -maxOffset.x),
        dy = this._rebound(minOffset.y, -maxOffset.y);
    return new Point(dx, dy);
},
The closest I could find so far would be to hook into the moveend event and check whether the center is out of my bounds manually.
However, it seems like this would be redundant with what leaflet is already doing.
Is there a better way to leverage Leaflet to achieve this?
Thanks
Just check if your defined bounds contain the map bounds. As long as the map bounds are inside the defined bounds, this will do nothing:
var myBounds = L.latLngBounds(...);

map.on('move moveend zoomend', function () {
    if (!myBounds.contains(map.getBounds())) {
        // Display popup or whatever
    }
});
"it seems like this would be redundant with what leaflet is already doing."
Don't worry about that. The overhead is negligible for this use case.
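If you go with that check, showing the subscription prompt can be a standard Leaflet popup. A minimal sketch building on the snippet above (the flag and the message text are just placeholders):
var alreadyWarned = false;

map.on('move moveend zoomend', function () {
    if (!myBounds.contains(map.getBounds()) && !alreadyWarned) {
        alreadyWarned = true;                // only show the prompt once
        L.popup()
            .setLatLng(map.getCenter())
            .setContent('Subscribe to explore beyond this area.')
            .openOn(map);
    }
});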

How do I keep Transform Control from moving your object if there is a collision, using raycasting?

So I'm using Three.js and I have some cubes inside of a box. I'm using the Transform Control to move the cubes around inside of the box with my mouse. I'd like to use raycasting in order to check for collisions. The question is: how do I prevent the transform controller from moving the object if there is a collision? I'd like to stop it if it hits the wall. By the way, I'm on version r81 of Three.js.
UPDATE: I've used the size of the room to constrain the cubes from moving outside of the room. This seems to work well. Is there a way to use cannon.js just for collisions? I don't want the momentum or gravity or any other feature, JUST the collision check, and to stop it dead in its tracks when there is a collision.
I know this post is from a long time ago, but hopefully a googler finds this helpful. I wasn't able to stop the user from moving my object, but I was able to move it back to its proper position immediately afterward by adding some logic to the render method.
For the original poster's problem with collisions, you could attach an event listener to the transform controls and request the object to be repositioned if it is in an illegal state.
transformControls.addEventListener('objectChange', (e) => {
    if (illegalPosition(attachedObject.position)) {
        needsReset = true;
    } else {
        lastPosition = attachedObject.position.clone();
    }
});
and then in your render function
if (needsReset) {
    attachedObject.position.set(lastPosition.x, lastPosition.y, lastPosition.z);
    needsReset = false;
}
If this feels a little hacky, that's because it is. But for those of us who don't have the time or skill to read and modify TransformControls.js, I think it may prove helpful.
You could create a helper raycaster and place all colliders in a separate container. After movement is applied to the object, move the raycaster to its position and test whether the ray intersects any of the other objects in the container. If yes, reset that object to its previous position. In the case of cube colliders you may want to raycast from the cube's center in multiple directions, using half the side length as the ray length.
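A rough sketch of that idea (untested; the group and object names are placeholders, not from your scene):
// Assumed setup: "colliders" is a THREE.Group holding the walls/obstacles
// and "movedObject" is the mesh being dragged; adjust names to your scene.
var raycaster = new THREE.Raycaster();
var directions = [
    new THREE.Vector3( 1, 0, 0), new THREE.Vector3(-1,  0,  0),
    new THREE.Vector3( 0, 1, 0), new THREE.Vector3( 0, -1,  0),
    new THREE.Vector3( 0, 0, 1), new THREE.Vector3( 0,  0, -1)
];
var halfSide = 0.5; // half of the cube's side length

function hitsSomething(movedObject, colliders) {
    for (var i = 0; i < directions.length; i++) {
        raycaster.set(movedObject.position, directions[i]);
        raycaster.far = halfSide;
        if (raycaster.intersectObjects(colliders.children, true).length > 0) {
            return true;
        }
    }
    return false;
}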
Ben S has the best and most painless way to implement collision detection with transform controls: within an event listener.
But I don't know whether, at the time he wrote his answer, he knew about requestAnimationFrame, or whether it even existed yet. Instead of simply resetting the model's position, all you have to do for collision detection is set up your render call within a loop (60 fps) by adding requestAnimationFrame to your render function (I call it animate since that is more descriptive).
Since it is in a loop and is called every frame the scene is drawn, it simply will not allow the object to move past the point of collision.
function animate() {
    // Called to draw onto the screen every frame (60 fps).
    requestAnimationFrame(animate);
    renderer.render(scene, camera);
}
And your event listener would just look like this.
control.addEventListener('objectChange', (e) => {
    // Collision detection code here. Set the colliding model's position here;
    // no need to set it in render.
});
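For the "collision detection code here" part, a simple bounding-box test is often enough. A rough sketch (untested; the mesh and array names are placeholders):
// Rough axis-aligned bounding-box check; "movedMesh" and "obstacles" are placeholder names.
var movedBox = new THREE.Box3();
var otherBox = new THREE.Box3();

function collides(movedMesh, obstacles) {
    movedBox.setFromObject(movedMesh);
    for (var i = 0; i < obstacles.length; i++) {
        otherBox.setFromObject(obstacles[i]);
        if (movedBox.intersectsBox(otherBox)) {
            return true;
        }
    }
    return false;
}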
Old post, I know. But here is a method that is still fairly simple but does not flicker or use ray casting. The biggest catch here is that you have a little bit of a bounce if you move the Transform control really quickly. But otherwise it seems to work fairly well. You can control the precision of the collision by adjusting the step value.
let transStart = null;

// Capture the object's position when dragging starts.
control.addEventListener('mouseDown', function () {
    transStart = control.object.position.clone();
});

// You'll have to provide your own collision function.
control.addEventListener('objectChange', function (e) {
    if (collision(sphere, cube)) { stopControls(); }
});

function stopControls() {
    if (control.dragging && transStart) {
        // Calculate the direction the object was moving at the time of collision.
        const s = transStart;
        const e = control.object.position.clone();
        const n = e.clone().sub(s).negate().normalize();
        // Janky hack nonsense that stops the transform control from
        // continuing without making the camera controller go nuts.
        control.pointerUp({ button: 0 });
        control.dragging = true;
        // Translate back the direction it came by the step amount and do not
        // stop until the objects are no longer colliding.
        // Increase the step size if you do not need super precise collision
        // detection. It will save calculations.
        let step = 0.00005;
        while (collision(sphere, cube)) {
            sphere.translateOnAxis(n, step);
            sphere.updateMatrix();
        }
    }
}
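For completeness, here is one possible (untested) implementation of the collision() helper used above, assuming the sphere mesh was built from a SphereGeometry and the cube is box-shaped; adapt it to however your geometry is actually set up:
// Sphere-vs-box check using the cube's bounding box.
const cubeBox = new THREE.Box3();

function collision(sphere, cube) {
    cubeBox.setFromObject(cube);
    const radius = sphere.geometry.parameters.radius;
    // Colliding when the box is closer to the sphere's center than the radius.
    return cubeBox.distanceToPoint(sphere.position) <= radius;
}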

Avoid cleaning canvas in updateOptions call

I am working on a signal-plotting program, trying to simulate the 'persistence' feature available on many oscilloscopes.
I would like to prevent the dygraphs canvas from being cleared on every updateOptions call. Instead, the plot should be preserved until an explicit call to clear it. This feature would allow me to check whether a signal preserves its phase over a certain amount of time.
I tried setting the block_redraw parameter to false in the updateOptions call, without success.
Any ideas?
This isn't really something dygraphs is designed to do. You're asking it to render the full history of its data source, rather than the current state of its data source.
That being said, here's the code that clears the plotting canvas:
DygraphCanvasRenderer.prototype.clear = function() {
    this.elementContext.clearRect(0, 0, this.width, this.height);
};
So if you override that, it might do what you want:
DygraphCanvasRenderer.prototype.clear = function() {};
Be warned, though: this is liable to break lots of things (like zooming and panning) in addition to giving you the behavior you want. You can see this if you visit the live random data demo page and copy that snippet into the JS console.
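If you still want an explicit way to wipe the accumulated traces later, one (untested) idea is to keep a reference to the original method and restore it only for that one redraw; wipeAndRedraw and newData below are just placeholder names:
// Keep the original clear() around, disable automatic clearing, and restore
// it only when you really want the canvas wiped.
var originalClear = DygraphCanvasRenderer.prototype.clear;
DygraphCanvasRenderer.prototype.clear = function() {};         // persistence on

function wipeAndRedraw(graph, newData) {
    DygraphCanvasRenderer.prototype.clear = originalClear;     // allow clearing again
    graph.updateOptions({ file: newData });                    // normal redraw wipes the canvas
    DygraphCanvasRenderer.prototype.clear = function() {};     // persistence back on
}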
Good luck!

Touch swipe coordinates using quo.js

I'm building a web app using quo.js to handle touch events.
When the user swipes (drags) with one finger, I can use the following quo.js code to recognise this and apply a css transformation:
$$('element').swipe(function () {
    $$('element').vendor('transform', 'translate(-100px,-100px)');
});
Now what I want to do is apply the translate amount based on the amount of swipe. In other words, I need to get the X/Y coordinates of the swipe. Does anyone know if this is possible using quo.js or do I need to use a different js library?
I tried this to get coordinates but it returns 'undefined':
$$('element').swipe(function (e) {
    alert(e.pageX);
});
The event object quo.js passes to the callback contains a currentTouch object holding the x and y coordinates: http://jsfiddle.net/marionebl/UupmU/1/
$$('selector').swipe(function (e) {
    console.log('swipe');
    console.log(e.currentTouch); // Gives position when gesture is cancelled
});
Note that the swipe event only fires when a swipe gesture is completed. As far as I understand your use case, it would be more convenient to use the swiping event, which fires as soon as a swipe gesture is detected and during all movements until release:
var $swipeable = $$('.swipeable'),
    swipeableHeight = $swipeable.height(),
    swipeableWidth = $swipeable.width();

$swipeable.swiping(function (e) {
    var x = e.currentTouch.x - swipeableWidth / 2;
    var y = e.currentTouch.y - swipeableHeight / 2;
    $swipeable.vendor('transform', 'translate(' + x + 'px,' + y + 'px)');
});
