DeviceOrientation Compass Android - javascript

I'm trying to implement a compass using the deviceorientation event. I want to use the rotation to rotate an arrow pointing in the viewing direction, so I simply rotate the image whenever deviceorientation fires.
On iOS this works like a charm by doing the following:
if (typeof DeviceOrientationEvent.requestPermission === "function") {
  // @ts-ignore
  DeviceOrientationEvent.requestPermission()
    .then(permissionState => {
      if (permissionState === "granted") {
        this.compassActive = true;
        window.addEventListener(
          "deviceorientation",
          eventData =>
            this.zone.run(() => {
              if (!this.compassActive) {
                return false;
              }
              // @ts-ignore
              this.rotatePlayerIcon(eventData.webkitCompassHeading);
            }),
          false
        );
      }
    })
    .catch(console.error);
}
DeviceOrientationEvent.webkitCompassHeading gives me a clockwise, world-referenced heading (0 = north) for the rotation of the device.
I tried the same on Android, but I can't find a world-referenced solution. webkitCompassHeading does not exist on Android, so I tried plain eventData.alpha, but alpha is 0 at whatever direction the device was pointing when the listener was registered, not at world north.
All the guides I can find seem outdated.
How do I get a clockwise compass on android like I get on iOS?

Below you'll see the compass of vtm-mapsforge; it also includes an arrow. Apart from the usual magnetometer/accelerometer sensors, I recommend using the location bearing as well, since those sensors are not very accurate and get easily disrupted. The nice thing about this compass is that it includes code to make the rotation smooth, because the raw output from the sensors is often noisy.
For the Java code look at this; it is the original version of the vtm-mapsforge compass.

The problem is that alpha is not absolute, meaning 0 is not north; instead, 0 is wherever the device was pointing when the sensor was activated.
The best fix for Chromium-based browsers, which covers most stock Android browsers, is to use an AbsoluteOrientationSensor from the W3C Generic Sensor API polyfills (available on GitHub).
The project I used it in is TypeScript-based, so here is a TypeScript example:
import { AbsoluteOrientationSensor } from 'motion-sensors-polyfill'

const options = { frequency: 60, referenceFrame: 'device' };
const sensor = new AbsoluteOrientationSensor(options);
sensor.addEventListener('reading', e => {
  this.zone.run(() => {
    // convert the orientation quaternion to a yaw angle in degrees
    const q = e.target.quaternion;
    let alpha = Math.atan2(2 * q[0] * q[1] + 2 * q[2] * q[3],
                           1 - 2 * q[1] * q[1] - 2 * q[2] * q[2]) * (180 / Math.PI);
    if (alpha < 0) alpha = 360 + alpha;
    this.alpha = 360 - alpha;
    this.rotatePlayerIcon(360 - alpha)
  })
});
sensor.start();
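On newer Chromium browsers you may not even need the polyfill: Chrome also fires a deviceorientationabsolute event whose alpha is referenced to world north. A minimal plain-JavaScript sketch (the 360 - alpha conversion mirrors the quaternion math above; rotatePlayerIcon stands in for your own callback):

// a sketch, assuming a Chromium-based browser that supports deviceorientationabsolute
window.addEventListener('deviceorientationabsolute', (eventData) => {
  if (eventData.alpha === null) return;   // no sensor data available
  const heading = 360 - eventData.alpha;  // alpha increases counterclockwise from north
  rotatePlayerIcon(heading);              // your own rotation callback
});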

Related

Touch move is really slow

I'm building a paint-like feature where the user can draw a line, but the touchmove event is emitted really slowly on my device (an Android phone), so the line becomes jagged. As soon as I connect the device to my PC and open the Chrome DevTools via USB debugging, everything works fine. In the phone emulator in desktop Chrome there are no problems either.
Here is a screenshot. The inner circle was drawn with the slow touch events; for the outer one, I connected the device to my PC.
Here is another screenshot showing the durations between individual "touchmove" event calls. The top part (green values) occurred while the DevTools were open, the bottom part (red values) while they were closed.
The code:
function DrawingCanvas(/* ... */) {
  // ...
  const handleTouchMove = (event) => {
    handleMouseMove(event.touches[0])
  }
  const handleMouseMove = ({ clientX, clientY }) => {
    if (!isDrawing) {
      return
    }
    const canvasRect = canvas.getBoundingClientRect()
    const x = clientX - canvasRect.x
    const y = clientY - canvasRect.y
    currentPath.current.addPoint([x, y])
    update()
  }
  const update = () => {
    clearCanvas()
    drawPath()
  }
  // ...
  useEffect(() => {
    const drawingCanvas = drawingCanvasRef.current
    // ...
    drawingCanvas.addEventListener("touchstart", handleDrawStart)
    drawingCanvas.addEventListener("touchend", handleDrawEnd)
    drawingCanvas.addEventListener("touchcancel", handleDrawEnd)
    drawingCanvas.addEventListener("touchmove", handleTouchMove)
    drawingCanvas.addEventListener("mousedown", handleDrawStart)
    drawingCanvas.addEventListener("mouseup", handleDrawEnd)
    drawingCanvas.addEventListener("mousemove", handleMouseMove)
    return () => {
      drawingCanvas.removeEventListener("touchstart", handleDrawStart)
      drawingCanvas.removeEventListener("touchmove", handleTouchMove)
      drawingCanvas.removeEventListener("touchend", handleDrawEnd)
      drawingCanvas.removeEventListener("touchcancel", handleDrawEnd)
      drawingCanvas.removeEventListener("mousedown", handleDrawStart)
      drawingCanvas.removeEventListener("mouseup", handleDrawEnd)
      drawingCanvas.removeEventListener("mousemove", handleMouseMove)
    }
  })
  return <canvas /* ... */ />
}
Does anyone have an idea on how to fix this?
You can test it by yourself on the website: https://www.easymeme69.com/editor
Somehow calling event.preventDefault() on the touchmove event fixed it.
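One plausible explanation for why this helps: Chrome treats some touch listeners as passive, and preventDefault() is ignored inside a passive listener. A minimal sketch (reusing handleTouchMove and drawingCanvas from the question) that makes the listener explicitly non-passive:

// register touchmove as non-passive so event.preventDefault() is honored
// and the browser's own scroll/zoom gesture handling is suppressed
drawingCanvas.addEventListener("touchmove", (event) => {
  event.preventDefault()
  handleTouchMove(event)
}, { passive: false })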
I'm facing exactly the same situation: I'm developing a React app with some touch features that implement actions on touchmove.
All my tests are done in Chrome on the Debian-based Raspberry Pi OS distro.
It results in a terribly laggy UI with a real touch screen... except (and this is where it gets very interesting!) when the console is open with the Chrome mobile emulator enabled; then everything is smooth, even if I use my finger on the real touch screen at that moment.
The touch-action: none and event.stopPropagation hacks were already in my code and didn't change the game.
Two conclusions from that:
The touch screen (and its driver) is fine
The CPU is quite able to handle the load
For now, the mystery remains opaque to me.
My feeling is that Chrome somehow deliberately decreases or increases the touch event rate depending on whether we're in a real use case or in the emulator. I created a simple fiddle to validate this hypothesis: https://jsfiddle.net/ncgtjesh/20/show
It seems to be the case, since I can clearly see that the emulator-enabled mode outputs 240 events/second while the real, non-emulated interface is stuck at 120.
I'm quite surprised that the fixes in the answers above work, since this seems to be a browser implementation choice.
I had this exact same thing happen to me, down to not being able to reproduce it with USB debugging open. Besides the e.preventDefault() hack, you can also set touch-action: none; on the touchable element in CSS.
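In case it helps, the same rule can also be applied from JavaScript via the CSSOM (drawingCanvas is the element from the question):

// equivalent to the CSS rule touch-action: none;
drawingCanvas.style.touchAction = "none"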
I've had the same problem. I had no freezes on mobile or Firefox, only on Chromium. Either disabling touchpad-overscroll-history-navigation in chrome://flags or calling e.preventDefault() solves the problem.

Fabric.js - how to use custom cursors without drawing mode

I have no idea how to set a cursor image for drawing on the canvas. I have noticed that I can set it only when
FABRICCANVAS.isDrawingMode = true;
However, the problem is that I have created dedicated drawing tools and I don't want to use the ones built into Fabric.js.
A sample of my code (which doesn't work properly):
const FABRICCANVAS = new fabric.Canvas('canvas-draft');
const DRAFT = document.querySelector(".upper-canvas");

button.addEventListener('click', () => {
  DRAFT.style.cursor = 'url(img/cursors/image.png) 0 34, auto';
});
But when I set isDrawingMode to true, it works. Unfortunately, I don't want to use the built-in drawing tools because they leave paths (which can then be selected and moved later, when FABRICCANVAS.selection = true).
Do you know any solution for this problem?
For the canvas you can set different cursors, e.g.:
canvas.hoverCursor
canvas.defaultCursor
canvas.moveCursor
You can use absolute or relative paths to your cursor image:
canvas.moveCursor = 'url("...") 10 10, crosshair';
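For example, to apply the cursor image from the question both to empty canvas areas and when hovering over objects (a sketch; the URL and hotspot offsets are taken from your snippet):

const FABRICCANVAS = new fabric.Canvas('canvas-draft');
// cursor shown over empty canvas areas
FABRICCANVAS.defaultCursor = 'url(img/cursors/image.png) 0 34, auto';
// cursor shown while hovering over objects
FABRICCANVAS.hoverCursor = 'url(img/cursors/image.png) 0 34, auto';

Setting these properties on the fabric.Canvas instance instead of styling the .upper-canvas element directly means Fabric won't overwrite your cursor on the next render.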

How to render multiple scenes with THREE.js VREffect

I have written a Three.js application using StereoEffect, with 3 scenes for overlaying purposes, by clearing the renderer's depth buffer (using this approach: Three.js - Geometry on top of another).
However, I now need to use VREffect for better compatibility with headsets such as the Gear VR, using the WebVR polyfill.
The following are snippets from the code, to show how it's set up:
const renderer = new THREE.WebGLRenderer({ antialias: false })
renderer.setSize(window.innerWidth, window.innerHeight)
renderer.autoClear = false;
document.body.appendChild(renderer.domElement)

effect = new THREE.VREffect(renderer)
effect.separation = 0
effect.setSize(window.innerWidth, window.innerHeight)

let vrDisplay
navigator.getVRDisplays().then(displays => {
  if (displays.length > 0)
    vrDisplay = displays[0]
})

// Add button to enable the VR mode (display stereo)
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/vr/cardboard64.png", () => {
  vrDisplay.requestPresent([{ source: renderer.domElement }])
})
... The rest of the code ...
Inside my animation loop:
renderer.clear()
effect.render(scene, camera)
renderer.clearDepth()
effect.render(scene2, camera)
effect.render(scene3, camera)
However, this approach doesn't seem to work when using VREffect (only when entering VR mode; e.g. viewing it on my desktop works fine). I think the issue is that renderer.clear() or renderer.clearDepth() is not taking effect, as the canvas is pitch black apart from some elements of scene3.
Furthermore, when I comment out the rendering of scene2 and scene3, I can see everything in the first scene rendered correctly.
Looking through the code in VREffect, and StereoEffect, I couldn't figure out which part rendered my changes useless.
Any help/hints would be greatly appreciated.
I never did find out how to fix that issue, but I got around it by using StereoEffect instead of VREffect for browsers other than the Gear VR's: VREffect worked perfectly fine there, but not in a normal phone browser, where Cardboard might be used. Here's what I did, in case anyone else runs into this issue:
From the example above, I turned the vrButton bit into this:
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/images/cardboard64.png", () => {
  if (navigator.userAgent.includes("Mobile VR")) {
    vrDisplay.requestPresent([{ source: renderer.domElement }])
  } else {
    effect = new THREE.StereoEffect(renderer)
    effect.separation = 0
    effect.setSize(window.innerWidth, window.innerHeight)
    document.getElementById("vr-sample-button-container").style.display = "none"
  }
})
where I switched over from VREffect to StereoEffect when the 'View in VR' button is clicked.
With this approach, however, the content will not be fullscreen and the device will eventually go to sleep. To fix both issues, you can have the user tap the screen to turn on fullscreen manually:
renderer.domElement.addEventListener("click", () => {
  const el = renderer.domElement
  if (document.fullscreenEnabled) {
    el.requestFullscreen()
  } else if (document.webkitFullscreenEnabled) {
    el.webkitRequestFullscreen()
  } else if (document.mozFullScreenEnabled) {
    el.mozRequestFullScreen()
  } else if (document.msFullscreenEnabled) {
    el.msRequestFullscreen()
  }
})
Obviously, this is not as good a user experience, and you don't get the nice UI, so if someone finds this and knows of an actual fix, please leave an answer/comment. I'll update this if I find anything myself.
Some final thoughts: afaik the Gear VR browser has some sort of native WebVR implementation, whereas elsewhere the polyfill is used, so that could be part of the issue.

Modernizr: Testing for WebGL vs WebGL Extensions

I'm using THREE.js scenes and graphics objects on my webpage. I know that, at the least, THREE.js utilizes WebGL.
I'd like to use Modernizr to check the current browser for WebGL compatibility and, if the browser doesn't have it, show a message to the user.
When selecting the browser features for Modernizr to test, I see two that relate to my goal:
WebGL: Detects WebGL support in the browser.
WebGL Extensions: Detects support for OpenGL extensions in WebGL. True if the WebGL extensions API is supported, in which case the supported extensions are exposed as subproperties.
So in order for THREE.js to work, do I need to test for both WebGL and WebGL Extensions, or just WebGL?
It depends on whether you're using features that require extensions. Three.js itself doesn't need any extensions. Certain things like shadows probably run faster if you have the WEBGL_depth_texture extension.
If you don't know which extensions you personally need, consider inserting some code to hide them and see whether your app still runs.
Example:
// disable all extensions
WebGLRenderingContext.prototype.getExtension = function() {
  return null;
};
WebGLRenderingContext.prototype.getSupportedExtensions = function() {
  return [];
};
// now init three.js
If you want to allow specific extensions you could do something like this
var allowedExtensions = [
  "webgl_depth_texture",
  "oes_texture_float",
];

WebGLRenderingContext.prototype.getExtension = function(origFn) {
  return function(name) {
    if (allowedExtensions.indexOf(name.toLowerCase()) >= 0) {
      return origFn.call(this, name);
    }
    return null;
  };
}(WebGLRenderingContext.prototype.getExtension);

WebGLRenderingContext.prototype.getSupportedExtensions = function(origFn) {
  return function() {
    return origFn.call(this).filter(function(name) {
      return allowedExtensions.indexOf(name.toLowerCase()) >= 0;
    });
  };
}(WebGLRenderingContext.prototype.getSupportedExtensions);
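As for the Modernizr side of the question, a minimal gate might look like the sketch below. Modernizr.webgl is the standard detect; the warning element is hypothetical:

// assuming a Modernizr build that includes the 'webgl' detect
if (!Modernizr.webgl) {
  // hypothetical element used to show the user-facing warning
  document.getElementById('webgl-warning').textContent =
    'Your browser does not support WebGL, so the 3D content cannot be shown.';
}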

Cesium - why is scene.pickPositionSupported false

I'm ultimately trying to draw a polygon on top of my house. I can do that.
The problem is that on zoom-out, zoom-in, and rotation (or camera movement) the polygon doesn't stick to the top of my house. I received great help from this answer, so now I'm working through the sample code, but there are a lot of Cesium methods and features that I still need to learn.
The sample code I am trying to follow is located in the gold standard that appears to be baked into the existing camera controller here.
I call testMe with the mousePosition as Cartesian3 and the SceneMode is 3D, so pickGlobe is executed.
Here is my code:
var pickedPosition;
var scratchZoomPickRay = new Cesium.Ray();
var scratchPickCartesian = new Cesium.Cartesian3();

function testMe(mousePosition) {
  if (Cesium.defined(scene.globe)) {
    if (scene.mode !== Cesium.SceneMode.SCENE2D) {
      pickedPosition = pickGlobe(viewer, mousePosition, scratchPickCartesian);
    } else {
      pickedPosition = camera.getPickRay(mousePosition, scratchZoomPickRay).origin;
    }
  }
}

var pickGlobeScratchRay = new Cesium.Ray();
var scratchDepthIntersection = new Cesium.Cartesian3();
var scratchRayIntersection = new Cesium.Cartesian3();

function pickGlobe(viewer, mousePosition, result) {
  var globe = scene.globe;
  var camera = scene.camera;
  if (!Cesium.defined(globe)) {
    return undefined;
  }
  var depthIntersection;
  if (scene.pickPositionSupported) {
    depthIntersection = scene.pickPosition(mousePosition, scratchDepthIntersection);
  }
  var ray = camera.getPickRay(mousePosition, pickGlobeScratchRay);
  var rayIntersection = globe.pick(ray, scene, scratchRayIntersection);
  var pickDistance;
  if (Cesium.defined(depthIntersection)) {
    pickDistance = Cesium.Cartesian3.distance(depthIntersection, camera.positionWC);
  } else {
    pickDistance = Number.POSITIVE_INFINITY;
  }
  var rayDistance;
  if (Cesium.defined(rayIntersection)) {
    rayDistance = Cesium.Cartesian3.distance(rayIntersection, camera.positionWC);
  } else {
    rayDistance = Number.POSITIVE_INFINITY;
  }
  if (pickDistance < rayDistance) {
    return Cesium.Cartesian3.clone(depthIntersection, result);
  }
  return Cesium.Cartesian3.clone(rayIntersection, result);
}
Here are my questions to get this code working:
1. How do I get scene.pickPositionSupported to be true? I'm using Chrome on Windows 10. I can't find anything about this in the sample code, and I haven't had much luck with the documentation or Google.
2. Why is rayIntersection not getting set? ray and scene have values, and scratchRayIntersection is an empty Cartesian3.
I think if I can get those two statements working, I can probably get the rest of the pickGlobe method working.
WebGL Report: I clicked on "Get WebGL" and the cube is spinning!
Picking positions requires that the underlying WebGL implementation support depth textures, either through the WEBGL_depth_texture or WEBKIT_WEBGL_depth_texture extension. scene.pickPositionSupported is returning false because this extension is missing. You can verify this by going to http://webglreport.com/ and looking at the list of extensions; I have both of the above listed there. There is nothing you can do in your code to make it suddenly return true; it's a reflection of the underlying browser.
That being said, I know for a fact that Chrome supports depth textures and that this works on Windows 10, so this sounds like a video card driver issue. I fully expect that downloading and installing the latest drivers for your system will solve the problem.
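If you'd rather check for the extension programmatically than via webglreport.com, a quick console sketch (standard WebGL API calls only):

// check for the depth texture extension that pickPosition depends on
var gl = document.createElement('canvas').getContext('webgl');
var hasDepthTexture = !!(gl && (gl.getExtension('WEBGL_depth_texture') ||
                                gl.getExtension('WEBKIT_WEBGL_depth_texture')));
console.log('depth texture supported:', hasDepthTexture);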
As for rayIntersection, from a quick look at your code I only expect it to be defined if the mouse is actually over the globe, which may not always be the case. If you can reduce this to a runnable Sandcastle example, it would be easier for me to debug.
OK. So it turned out that I had a totally messed up Cesium environment. I had to delete it and reinstall it in my project (npm install cesium --save-dev). Then I had to fix a few paths and VOILA! It worked. Thanks to both of you for all your help.
