I have written a Three.js application using StereoEffect, with 3 scenes for overlaying purposes, clearing the renderer's depth buffer between renders
(using the approach from Three.js - Geometry on top of another).
However, I now need to use VREffect, for better compatibility with headsets such as the Gear VR, using the WebVR Polyfill.
The following are snippets from the code, to show how it's set up:
const renderer = new THREE.WebGLRenderer({antialias: false})
renderer.setSize(window.innerWidth, window.innerHeight)
renderer.autoClear = false // clear manually so several scenes can be layered in one frame
document.body.appendChild(renderer.domElement)
effect = new THREE.VREffect(renderer)
effect.separation = 0
effect.setSize(window.innerWidth, window.innerHeight)
let vrDisplay
navigator.getVRDisplays().then(displays => {
  if (displays.length > 0)
    vrDisplay = displays[0]
})
// Add button to enable the VR mode (display stereo)
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/vr/cardboard64.png", () => {
  vrDisplay.requestPresent([{source: renderer.domElement}])
})
... The rest of the code ...
Inside my animation loop:
renderer.clear()
effect.render(scene, camera)
renderer.clearDepth() // drop the depth buffer so the next scenes draw on top
effect.render(scene2, camera)
effect.render(scene3, camera)
However, this approach doesn't seem to work with VREffect, though only when entering VR mode (viewing it on my desktop works fine). I think the issue is that renderer.clear() or renderer.clearDepth() is not taking effect, as the canvas is pitch black apart from some elements of scene3.
Furthermore, when I comment out the rendering of scene2 and scene3, everything in the first scene renders correctly.
Looking through the code of VREffect and StereoEffect, I couldn't figure out which part renders my changes useless.
Any help/hints would be greatly appreciated.
I never did find out how to fix that issue, but I got around it by using StereoEffect instead of VREffect for browsers other than the Gear VR's: VREffect worked perfectly fine there, but not in a normal phone browser where Cardboard might be used. Here's what I did, if anyone else runs into this issue:
From the example above, I turned the vrButton bit into this:
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/images/cardboard64.png", () => {
  if (navigator.userAgent.includes("Mobile VR")) {
    vrDisplay.requestPresent([{source: renderer.domElement}])
  } else {
    effect = new THREE.StereoEffect(renderer)
    effect.separation = 0
    effect.setSize(window.innerWidth, window.innerHeight)
    document.getElementById("vr-sample-button-container").style.display = "none"
  }
})
where I switched over from VREffect to StereoEffect when the 'View in VR' button is clicked.
With this approach, however, the content will not be fullscreen and the device will eventually go to sleep. To fix both issues, you can have the user tap the screen to manually turn on fullscreen with this:
renderer.domElement.addEventListener("click", () => {
  // Short-circuit evaluation: the first supported fullscreen variant actually runs
  if (document.fullscreenEnabled && renderer.domElement.requestFullscreen() ||
      document.webkitFullscreenEnabled && renderer.domElement.webkitRequestFullScreen() ||
      document.mozFullScreenEnabled && renderer.domElement.mozRequestFullScreen() ||
      document.msFullscreenEnabled && renderer.domElement.msRequestFullscreen()) {}
})
Obviously, this is not as good of a user experience, and you don't get the nice UI, so if someone finds this and knows of an actual fix, please leave an answer/comment. I'll update this if I find anything myself.
Some final thoughts: AFAIK the Gear VR browser has some sort of native WebVR implementation, whereas elsewhere the polyfill is used, so that could be part of the issue.
I'm using an augmented reality library that does some fancy image-tracking stuff. After learning a whole lot about this project, I'm now beyond my current ability and could use some help. For our purposes, the library creates an (empty) anchor point at the center of an IRL image target in-camera, then moves the virtual world around the IRL camera.
My goal is to drive plane.rotation to always face the camera, while keeping plane.position locked to the anchor point. Additionally, plane.rotation values will be referenced later in development.
const THREE = window.MINDAR.IMAGE.THREE;

document.addEventListener('DOMContentLoaded', () => {
  const start = async () => {
    // initialize MindAR
    const mindarThree = new window.MINDAR.IMAGE.MindARThree({
      container: document.body,
      imageTargetSrc: '../../assets/targets/testQR.mind',
    });
    const {renderer, scene, camera} = mindarThree;

    // create AR object
    const geometry = new THREE.PlaneGeometry(1, 1.25);
    const material = new THREE.MeshBasicMaterial({color: 0x00ffff, transparent: true, opacity: 0.5});
    const plane = new THREE.Mesh(geometry, material);

    // create anchor
    const anchor = mindarThree.addAnchor(0);
    anchor.group.add(plane);

    // start AR
    await mindarThree.start();
    renderer.setAnimationLoop(() => {
      renderer.render(scene, camera);
    });
  };
  start();
});
Everything I've tried so far went into the solutions already massaged into the (functioning draft) code above. I have, however, done some research and found a couple of avenues that might or might not work; just tossing them out to see what might stick or inspire another solution. Skill-wise, I'm still in the beginner category, so any help figuring this out is much appreciated.
identify plane object by its group index number;
drive (override lib?) object rotation (x, y, z) to face camera;
possible solutions from dev:
"You can get those values through the anchor object, e.g. anchor.group.position. Meaning that you can use the current three.js API and get those values but without using it for rendering i.e. don't append the renderer.domElement to document."
"You can hack into the source code of mindar (it's open source)."
"Another way might be easier for you to try is to just create another camera yourself. I believe you can have multiple cameras, and just render another layer on top using your new camera."
I think it may be as simple as calling lookAt in the animation loop function:
// start AR
await mindarThree.start();
renderer.setAnimationLoop(() => {
  plane.lookAt(new THREE.Vector3());
  renderer.render(scene, camera);
});
This assumes the camera is always located at (0,0,0) (i.e., new THREE.Vector3()). This seems to be true from my limited testing. I found it helpful to debug by copy-pasting the MindAR three.js example into this codepen and printing some relevant values to the console.
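If the camera can't be assumed to sit exactly at the origin, a slightly more defensive variant (my sketch, same idea) reads the camera's world position every frame:
const cameraWorldPosition = new THREE.Vector3();
renderer.setAnimationLoop(() => {
  camera.getWorldPosition(cameraWorldPosition); // robust even if the camera moves
  plane.lookAt(cameraWorldPosition);
  renderer.render(scene, camera);
});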
Also note that, internally, MindAR's three.js module seems to directly modify the world matrix of the anchor.group object without modifying the position/rotation/scale parameters.
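If you need usable rotation values later in development despite that, one option is to decompose the world matrix yourself; a sketch:
// Extract position/rotation/scale from the matrix MindAR writes directly
const position = new THREE.Vector3();
const quaternion = new THREE.Quaternion();
const scale = new THREE.Vector3();
anchor.group.matrixWorld.decompose(position, quaternion, scale);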
I'm building a paint-like feature where the user can draw a line, but the touchmove event is emitted really slowly on my device (an Android phone), so the line becomes jagged. As soon as I connect the device to my PC and open the Chrome DevTools via USB debugging, everything works fine. In the phone emulator in desktop Chrome there aren't any problems.
Here is a screenshot. The inner circle was drawn with the slow touch events, and for the outer one I connected the device to my PC.
Here is another screenshot showing the durations between individual "touchmove" event calls. The top part (green values) occurred while the devtools were open, the bottom part (red values) while they were closed.
The code:
function DrawingCanvas(/* ... */) {
  // ...
  const handleTouchMove = (event) => {
    handleMouseMove(event.touches[0])
  }

  const handleMouseMove = ({ clientX, clientY }) => {
    if (!isDrawing) {
      return
    }
    const canvasRect = canvas.getBoundingClientRect()
    const x = clientX - canvasRect.x
    const y = clientY - canvasRect.y
    currentPath.current.addPoint([x, y])
    update()
  }

  const update = () => {
    clearCanvas()
    drawPath()
  }

  // ...

  useEffect(() => {
    const drawingCanvas = drawingCanvasRef.current
    // ...
    drawingCanvas.addEventListener("touchstart", handleDrawStart)
    drawingCanvas.addEventListener("touchend", handleDrawEnd)
    drawingCanvas.addEventListener("touchcancel", handleDrawEnd)
    drawingCanvas.addEventListener("touchmove", handleTouchMove)
    drawingCanvas.addEventListener("mousedown", handleDrawStart)
    drawingCanvas.addEventListener("mouseup", handleDrawEnd)
    drawingCanvas.addEventListener("mousemove", handleMouseMove)
    return () => {
      drawingCanvas.removeEventListener("touchstart", handleDrawStart)
      drawingCanvas.removeEventListener("touchmove", handleTouchMove)
      drawingCanvas.removeEventListener("touchend", handleDrawEnd)
      drawingCanvas.removeEventListener("touchcancel", handleDrawEnd)
      drawingCanvas.removeEventListener("mousedown", handleDrawStart)
      drawingCanvas.removeEventListener("mouseup", handleDrawEnd)
      drawingCanvas.removeEventListener("mousemove", handleMouseMove)
    }
  })

  return <canvas /* ... */ />
}
Does anyone have an idea on how to fix this?
You can test it by yourself on the website: https://www.easymeme69.com/editor
Somehow calling event.preventDefault() on the touchmove event fixed it.
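In the code above that would mean something like the following; note that some browsers register touch listeners as passive by default, in which case preventDefault() is silently ignored, so passing { passive: false } makes sure it takes effect:
const handleTouchMove = (event) => {
  event.preventDefault() // stop the browser's default touch handling, which appears to throttle the events
  handleMouseMove(event.touches[0])
}

drawingCanvas.addEventListener("touchmove", handleTouchMove, { passive: false })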
I'm facing exactly the same situation: I'm developing a React app with some touch features that implement actions on touchmove.
All my tests are done inside Chrome on the Debian-based Raspberry Pi OS distro.
It results in a deadly laggy UI with a real touch screen... except (this is when it becomes very interesting!) if the console is opened with the Chrome mobile emulator enabled; then everything is fine, even when I use my finger on the real touch screen.
The touch-action: none and event.stopPropagation() hacks already existed in my code and didn't change the game.
Two conclusions from that:
The touch screen (and its driver) is fine
The CPU is quite able to handle the load
For now, the mystery remains opaque to me.
My feeling is that Chrome somehow deliberately decreases or increases the touch-event rate depending on whether we're in a real use case or in the emulator. I created a simple fiddle to validate this hypothesis: https://jsfiddle.net/ncgtjesh/20/show
It seems to be the case, since I can clearly see that the emulator-enabled mode outputs 240 events/second while the real, non-emulated interface is stuck at 120.
I'm quite surprised that the fixes in the responses above work, since this seems to be a browser implementation choice.
I had this exact same thing happen to me, down to not being able to reproduce with USB debugging open. Besides the e.preventDefault() hack, you can also set touch-action: none; on the touchable element in CSS.
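If you'd rather do it from JavaScript than in a stylesheet, the equivalent is a one-liner (canvas standing in for whichever element receives the touches):
canvas.style.touchAction = "none" // same effect as the CSS rule touch-action: none;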
I've had the same problem. I had no freezes on mobile or Firefox, only on Chromium. Either disabling touchpad-overscroll-history-navigation in chrome://flags or e.preventDefault() can solve the problem.
I'm trying to implement a compass using the deviceorientation event. I want to use the rotation to rotate an arrow facing in the viewing direction; I'm simply rotating the image when the deviceorientation changes.
On iOS this works like a charm by doing the following:
if (typeof DeviceOrientationEvent.requestPermission === "function") {
  // @ts-ignore
  DeviceOrientationEvent.requestPermission()
    .then(permissionState => {
      if (permissionState === "granted") {
        this.compassActive = true;
        window.addEventListener(
          "deviceorientation",
          eventData =>
            this.zone.run(() => {
              if (!this.compassActive) {
                return false;
              }
              // @ts-ignore
              this.rotatePlayerIcon(eventData.webkitCompassHeading);
            }),
          false
        );
      }
    })
    .catch(console.error);
}
DeviceOrientationEvent.webkitCompassHeading gives me a clockwise, world-based representation of the device's rotation.
I've tried the same on Android, but I can't find a world-based solution. webkitCompassHeading does not exist on Android, so I tried using just eventData.alpha. But that gives 0 based on the device's rotation when the event first fired, not on world north.
All the guides I find seem outdated.
How do I get a clockwise compass on android like I get on iOS?
Below you'll see a compass from vtm-mapsforge; it also includes an arrow. Apart from the usual magnetometer/accelerometer sensors, I recommend using the location bearing, as the sensors are not at all accurate and are easily disrupted. The nice thing about this compass is that it includes code to make it rotate smoothly, since the raw output from the sensors is often noisy.
For the Java code, look at this; it is the original version of the vtm-mapsforge compass.
The problem is that alpha is not absolute, meaning 0 is not north; instead, 0 is wherever the device was pointing on activation.
The best fix for Chrome-based browsers, like most standard Android browsers, is using an AbsoluteOrientationSensor from the W3C Generic Sensor API polyfill.
GitHub
The project I used it in is TypeScript-based, so here is a TypeScript example:
import { AbsoluteOrientationSensor } from 'motion-sensors-polyfill';

const options = { frequency: 60, referenceFrame: 'device' };
const sensor = new AbsoluteOrientationSensor(options);

sensor.addEventListener('reading', e => {
  this.zone.run(() => {
    const q = e.target.quaternion;
    // yaw (rotation around the vertical axis) from the quaternion, in degrees
    let alpha = Math.atan2(2 * q[0] * q[1] + 2 * q[2] * q[3],
                           1 - 2 * q[1] * q[1] - 2 * q[2] * q[2]) * (180 / Math.PI);
    if (alpha < 0) alpha = 360 + alpha;
    this.alpha = 360 - alpha;
    this.rotatePlayerIcon(360 - alpha); // convert counter-clockwise alpha to a clockwise heading
  });
});
sensor.start();
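As an aside (my addition, not part of the original answer): Chromium-based browsers also fire a separate deviceorientationabsolute event whose alpha is referenced to world north, which can serve as a polyfill-free fallback:
// deviceorientationabsolute is Chromium-specific; alpha is counter-clockwise from north
window.addEventListener("deviceorientationabsolute", eventData => {
  if (eventData.alpha !== null) {
    this.rotatePlayerIcon(360 - eventData.alpha); // convert to a clockwise compass heading
  }
});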
For some reason having a tap operator is altering the output of a stream (actually providing my expected result). When I remove the tap the subsequent filter no longer appears to work.
I have run the code below on codesandbox with the
tap(([prevMouseEvent, currentMouseEvent]) =>
  console.log(directionChange(prevMouseEvent, currentMouseEvent))
),
included as well as commented out. Without the tap the subsequent filter doesn't appear to work and I get a constant stream of paired mouse events.
// Returns true if the predominant movement in the mouse has changed direction
const directionChange = (prevMouseEvent, currentMouseEvent) => {
const dominantAxisMovement =
Math.abs(currentMouseEvent.movementX) >
Math.abs(currentMouseEvent.movementY) ? "X" : "Y";
if (dominantAxisMovement === "X") {
return (
Math.sign(currentMouseEvent.movementX) !==
Math.sign(prevMouseEvent.movementX)
);
} else {
return (
Math.sign(currentMouseEvent.movementY) !==
Math.sign(prevMouseEvent.movementY)
);
}
};
import { fromEvent } from "rxjs";
import { pairwise, tap, filter } from "rxjs/operators";

const mouseDirectionSwitch$ = fromEvent(document, "mousemove").pipe(
  pairwise(),
  tap(([prevMouseEvent, currentMouseEvent]) =>
    console.log(directionChange(prevMouseEvent, currentMouseEvent))
  ),
  filter(([prevMouseEvent, currentMouseEvent]) =>
    directionChange(prevMouseEvent, currentMouseEvent)
  )
);

mouseDirectionSwitch$.subscribe(passed => console.log(passed));
What I am trying to achieve is an observable that only emits when the user changes the direction of a mouse movement (and by change movement I mean up to down and left to right, not subtle changes). With the tap it works but can anyone explain why the tap operator is required here to get the desired output? I thought the tap operator returned an identical observable and would therefore have no effect on the output of the stream it is in.
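For what it's worth, tap only runs side effects and forwards each value unchanged, so it shouldn't be able to change what filter receives; a minimal check (both pipelines log exactly 2 and 4):
import { of } from "rxjs";
import { tap, filter } from "rxjs/operators";

of(1, 2, 3, 4).pipe(
  filter(x => x % 2 === 0)
).subscribe(x => console.log("without tap:", x));

of(1, 2, 3, 4).pipe(
  tap(x => console.log("tap saw:", x)), // side effect only
  filter(x => x % 2 === 0)
).subscribe(x => console.log("with tap:", x));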
Thanks for the suggestions. Jacob - your suggestion made me go and recreate this in its own independent project... and it worked fine. So I shut everything down, reopened the original project and, lo and behold, it worked. I'm really not sure why it wasn't working before (despite full page refreshes to flush any remnant event handlers), but the age-old "have you tried restarting your computer" was all I really needed. Thanks for your ideas!
I've been trying to experiment with box2d and threejs.
So box2d has a series of JS iterations, and I've been successful at using them in projects so far, as well as three.js in others. But I'm finding that, when including the latest three.js and box2dweb together, three.js seems to misperform just by being near box2dweb. Maybe I'm missing something really simple, like a better way to load them in together, or a way to section them off from one another?
I've tried a few iterations of the box2d JS code now, and I always seem to run into the same problem with later versions of three.js and box2d together - currently three.js r91.
The problem I'm seeing is very weird.
I'm really hoping someone from either the box2d camp or the threejs camp can help me out with this one, please!
Below is a very simple example where I don't initialize anything to do with box2d; just by having the library included there are problems, and you can test this by removing that resource, after which it behaves like it should.
The demo below uses three.js r91 and box2dweb. Every couple of seconds it is supposed to create a box or a simple sphere, each with a random colour. Very simple demo. You will see the mesh type never changes, and the colour seems to propagate across all mesh instances. However, if you remove the box2dweb resource from the left tab, then it functions absolutely fine. Very odd :/
jsfiddle link here
class Main {
  constructor() {
    this._container = null;
    this._scene = null;
    this._camera = null;
    this._renderer = null;
    console.log('| Main |');
    this.init();
  }

  init() {
    this.initScene();
    this.addBox(0, 0, 0);
    this.animate();
  }

  initScene() {
    this._container = document.getElementById('viewport');
    this._scene = new THREE.Scene();
    this._camera = new THREE.PerspectiveCamera(75, 600 / 400, 0.1, 1000);
    this._camera.position.z = 15;
    this._camera.position.y = -100;
    this._camera.lookAt(new THREE.Vector3());
    this._renderer = new THREE.WebGLRenderer({antialias: true});
    this._renderer.setPixelRatio(1);
    this._renderer.setSize(600, 400);
    this._renderer.setClearColor(0x000000, 1);
    this._container.appendChild(this._renderer.domElement);
  }

  addBox(x, y, z) {
    var boxGeom = new THREE.BoxGeometry(5, 5, 5);
    var sphereGeom = new THREE.SphereGeometry(2, 5, 5);
    var colour = parseInt(Math.random() * 999999);
    var boxMat = new THREE.MeshBasicMaterial({color: colour});
    var rand = parseInt(Math.random() * 2);
    var mesh = null;
    if (rand == 1) {
      mesh = new THREE.Mesh(boxGeom, boxMat);
    } else {
      mesh = new THREE.Mesh(sphereGeom, boxMat);
    }
    this._scene.add(mesh);
    mesh.position.x = x;
    mesh.position.y = y;
    mesh.position.z = z;
  }

  animate() {
    requestAnimationFrame(this.animate.bind(this));
    this._renderer.render(this._scene, this._camera);
  }
}

// The constructor already calls init(), so no separate onload hook is needed
var main = new Main();

// add either a box or a sphere with a random colour every now and again
setInterval(function() {
  main.addBox((Math.random() * 100) - 50, (Math.random() * 100) - 50, (Math.random() * 100) - 50);
}, 4000);
The way I'm including the library locally is just a simple:
<script src="js/vendor/box2dweb.js"></script>
So just by including the box2d library, three.js starts to act weird. I have tested this across multiple computers, and with multiple versions of both box2d (mainly box2dweb) and three.js.
So later versions of three.js seem to have some conflicts with box2d.
I found from research that most of the box2d conversions to JS are marked as not thread-safe.
I'm not sure if this could be the cause.
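Since in-page JavaScript runs on a single thread, literal thread conflicts seem unlikely; globals leaked by a non-module script are a more plausible suspect. One way to check (a debugging sketch, not something from the original demo) is to diff the window object's properties before and after the include:
// Run this before the box2dweb <script> tag executes:
window.__globalsBefore = new Set(Object.getOwnPropertyNames(window));

// ...and this after it has loaded:
const added = Object.getOwnPropertyNames(window)
  .filter(name => !window.__globalsBefore.has(name));
console.log("globals added by box2dweb:", added); // look for names three.js also relies on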
I also found examples where people have successfully used box2d with three.js, but always with quite an old three.js version, and you can see exactly the same problems occurring in my example when I update them.
So below is a demo I found (I wish I could credit the author); here is a copy of the fiddle using three.js r49:
jsfiddle here
...and then below, just swapping the three.js resource from r49 to r91:
jsfiddle here
It's quite an odd one, and maybe the two libraries just don't play together anymore, but it would be great if someone could help or has a working example of them working together on the latest three.js version.
I have tried a lot of different box2d versions but always found the same problem. Could this be a problem with conflicting libraries or unsafe threads?
I have also tried linking to the resources included in the fiddles provided.
Any help really appreciated!!