I'm building a paint-like feature where the user can draw a line, but the touchmove event is emitted really slowly on my device (an Android phone), so the line comes out jagged. As soon as I connect the device to my PC and open the Chrome DevTools via USB debugging, everything works fine. In the phone emulator in desktop Chrome there are no problems either.
Here is a screenshot. The inner circle was drawn with the slow touch events, and for the outer one I connected the device to my PC.
Here is another screenshot showing the durations between individual "touchmove" event calls. The top part (green values) occurred while the DevTools were open, the bottom part (red values) while they were closed.
The code:
function DrawingCanvas(/* ... */) {
  // ...
  const handleTouchMove = (event) => {
    handleMouseMove(event.touches[0])
  }

  const handleMouseMove = ({ clientX, clientY }) => {
    if (!isDrawing) {
      return
    }
    const canvasRect = canvas.getBoundingClientRect()
    const x = clientX - canvasRect.x
    const y = clientY - canvasRect.y
    currentPath.current.addPoint([x, y])
    update()
  }

  const update = () => {
    clearCanvas()
    drawPath()
  }

  // ...

  useEffect(() => {
    const drawingCanvas = drawingCanvasRef.current
    // ...
    drawingCanvas.addEventListener("touchstart", handleDrawStart)
    drawingCanvas.addEventListener("touchend", handleDrawEnd)
    drawingCanvas.addEventListener("touchcancel", handleDrawEnd)
    drawingCanvas.addEventListener("touchmove", handleTouchMove)
    drawingCanvas.addEventListener("mousedown", handleDrawStart)
    drawingCanvas.addEventListener("mouseup", handleDrawEnd)
    drawingCanvas.addEventListener("mousemove", handleMouseMove)
    return () => {
      drawingCanvas.removeEventListener("touchstart", handleDrawStart)
      drawingCanvas.removeEventListener("touchmove", handleTouchMove)
      drawingCanvas.removeEventListener("touchend", handleDrawEnd)
      drawingCanvas.removeEventListener("touchcancel", handleDrawEnd)
      drawingCanvas.removeEventListener("mousedown", handleDrawStart)
      drawingCanvas.removeEventListener("mouseup", handleDrawEnd)
      drawingCanvas.removeEventListener("mousemove", handleMouseMove)
    }
  })

  return <canvas /* ... */ />
}
Does anyone have an idea of how to fix this?
You can test it yourself on the website: https://www.easymeme69.com/editor
Somehow calling event.preventDefault() on the touchmove event fixed it.
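For reference, a minimal sketch of where that call goes, based on the handlers from the question. The { passive: false } option is my assumption, since Chrome treats some touch listeners as passive by default and ignores preventDefault() inside passive listeners:

const handleTouchMove = (event) => {
  // Stop the browser's default touch handling (scrolling, pull-to-refresh),
  // which also appears to stop the event-rate throttling described above
  event.preventDefault()
  handleMouseMove(event.touches[0])
}

// Register as non-passive so preventDefault() actually takes effect
drawingCanvas.addEventListener("touchmove", handleTouchMove, { passive: false })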
I'm facing exactly the same situation: I'm developing a React app with some touch features that implement actions on touchmove.
All my tests are done inside Chrome on the Debian-based Raspberry Pi OS distro.
It results in a deadly laggy UI with a real touch screen... except (and this is where it gets very interesting!) when the console is open with the Chrome mobile emulator enabled - then it stays smooth even if I use my finger on the real touch screen at that moment.
The touch-action: none and event.stopPropagation() hacks already existed in my code and didn't change the game.
Two conclusions from that:
The touch screen (and its driver) is fine
The CPU is quite able to handle the load
For now, the mystery remains opaque to me.
My feeling is that Chrome deliberately decreases or increases the touch event rate depending on whether we're in a real use case or in the emulator. I created a simple fiddle to validate this hypothesis: https://jsfiddle.net/ncgtjesh/20/show
It seems to be the case, since I can clearly see that the emulator-enabled mode outputs 240 events/second while the real, non-emulated interface is stuck at 120.
I'm quite surprised that the fixes suggested in the responses above work, since this seems to be a browser implementation choice.
I had this exact same thing happen to me, down to not being able to reproduce with USB debugging open. Besides the e.preventDefault() hack, you can also set the touchable element's touch-action: none; in CSS.
I've had the same problem. I had no freezes on mobile or on Firefox, only on Chromium. Either disabling touchpad-overscroll-history-navigation in chrome://flags or calling e.preventDefault() solves the problem.
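For completeness, a sketch of the CSS route mentioned above, applied from JavaScript so it stays close to the question's setup (drawingCanvas is the element from the question's code):

// Declares that this element handles all gestures itself, so the browser
// skips its scroll/zoom heuristics for touches starting on it
drawingCanvas.style.touchAction = "none"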
Related
I was trying to implement dynamic re-rendering of my react application using this function I found here:
https://stackoverflow.com/a/19014495/7838374
function useWindowSize() {
  const [size, setSize] = useState([0, 0]);
  useLayoutEffect(() => {
    function updateSize() {
      setSize([window.innerWidth, window.innerHeight]);
    }
    window.addEventListener("resize", updateSize);
    updateSize();
    return () => window.removeEventListener("resize", updateSize);
  }, []);
  return size;
}

function ShowWindowDimensions(props) {
  const [width, height] = useWindowSize();
  return (
    <span>
      Window size: {width} x {height}
    </span>
  );
}
Link to my app:
https://github.com/Intetra/aubreyw
I was able to get everything working perfectly when displaying my app in the browser on my desktop. I'm using Expo to run the app. The problem came when I tried to run the app on my Android phone: I was getting an error at launch.
COMPONENT EXCEPTION: window.addEventListener is not a function
I was able to get it working with a solution I found here:
https://stackoverflow.com/a/61470685/7838374
That solution says that the event listener for window doesn't exist in React Native, so we have to mock it. I don't understand what that means. I still don't know why the solution I found worked. I would like to understand. Can someone enlighten me?
The browser environment is different from the React Native environment in some ways, despite the fact that both use JavaScript. This means that while the language is the same, some of the underlying features may be different: they could behave differently, or exist in one environment and not the other.
window.addEventListener is an example of something we can expect to exist in the browser world but that is not implemented in React Native. This gets slightly more complicated, of course, with Expo, which allows running React Native code on the web by shimming certain features, trying to bridge some of the differences between the two worlds.
Because of the dynamic nature of JavaScript, even though window.addEventListener isn't provided by React Native on iOS/Android, we can just add it to our environment by defining it ourselves. That's what the solution you found (window.addEventListener = x => x) does -- it just adds a function that doesn't actually do anything (it takes x as a parameter and returns x as a result). This is sometimes referred to as a mock -- you'll often see this in the testing world.
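Concretely, the mock just has to exist before the hook runs. A minimal sketch of what the linked solution amounts to (guarded so it stays harmless in a real browser):

// Shim browser globals that React Native doesn't provide; the no-op
// functions exist only to prevent the "is not a function" crash
if (typeof window.addEventListener !== "function") {
  window.addEventListener = () => {}
  window.removeEventListener = () => {}
}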
On React Native on the device, in my testing, your solution wouldn't produce an error, but it also wouldn't actually give you the dimensions. Luckily, you can use Dimensions to get the screen size, which Expo also exposes in the web version. So, this looks to return the correct size on both the native app and the Expo web version:
import { Dimensions } from "react-native"; // needed for Dimensions.get below

function useWindowSize() {
  const [size, setSize] = React.useState([0, 0]);
  React.useLayoutEffect(() => {
    console.log("Layout effect")
    function updateSize() {
      setSize([Dimensions.get('window').width, Dimensions.get('window').height])
    }
    // window.addEventListener is the mock on native (see note below)
    window.addEventListener("resize", updateSize);
    updateSize();
    return () => window.removeEventListener("resize", updateSize);
  }, []);
  return size;
}
Note, you should do some testing to see what happens when rotating, resizing, etc -- I only made sure the basic functionality works.
Here's a snack with the code in context: https://snack.expo.io/Mofut1jHa
Also note that window.removeEventListener has to be mocked as well, which you can see in the Snack.
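As an aside, newer React Native versions expose a change event on Dimensions itself, which would avoid the window mock entirely. A sketch under that assumption (i.e. a version where Dimensions.addEventListener returns a removable subscription):

import React from "react"
import { Dimensions } from "react-native"

function useWindowSize() {
  const [size, setSize] = React.useState([0, 0])
  React.useLayoutEffect(() => {
    const updateSize = ({ window }) => {
      setSize([window.width, window.height])
    }
    // Fires on rotation and other window-dimension changes
    const subscription = Dimensions.addEventListener("change", updateSize)
    updateSize({ window: Dimensions.get("window") })
    return () => subscription.remove()
  }, [])
  return size
}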
Problem
An application requires the inner size of the window. React patterns suggest registering an event listener within a one-time effect hook. The call to window.addEventListener appears to occur only once, yet event listeners pile up and negatively affect performance.
Code
Here's the pared-down source code that reproduces this issue:
import React, { useState, useEffect } from 'react';

const getWindowRect = () => {
  return new DOMRect(0, 0, window.innerWidth, window.innerHeight);
}

// custom hook to track the window dimensions
const useWindowRect = () => {
  // use state for the aspect ratio
  let [rect, setRect] = useState(getWindowRect);
  // useEffect w/o deps should only be called once
  useEffect(() => {
    const resizeHandler = () => { setRect(getWindowRect()); };
    window.addEventListener('resize', resizeHandler);
    console.log('added resize listener');
    // return the cleanup function
    return () => {
      window.removeEventListener('resize', resizeHandler);
      console.log('removed resize listener');
    }
  }, []);
  // return the up-to-date window rect
  return rect;
}

const App = () => {
  const window_rect = useWindowRect();
  return <div>
    {window_rect.width / window_rect.height}
  </div>
};

export default App;
Testing
The relevant console output reads:
added resize listener
This is the expected result, where the listener is added only once, no matter how much the app is re-rendered. Performance profiling, however, shows listeners accumulating:
- reference, window not resized: max listeners 56
- resizing, hundreds of listeners accumulate: max listeners 900+
- resizing with window.addEventListener commented out: max listeners 49
Environment
React 16.13.1
TypeScript 4.0.3
WebPack 4.44.2
Babel Loader 8.1.0
Chrome 86.0.4240.111 (Official Build) (x86_64)
Demo
Assuming it would be difficult to run performance metrics on a JSFiddle or CodePen, I've provided a full demo at this repo: oclyke-exploration/resize-handler-performance. You can easily run the demo as long as you have node and yarn installed.
General Discussion
this approach has worked before without these symptoms, however the environment was slightly different and did not include TypeScript (could this be caused by the cross-compilation?)
I've briefly looked into whether the function reference provided to window.removeEventListener is the same as the one provided to window.addEventListener (see the sketch after this list), though reference identity shouldn't even come into play when the effect only runs once
there are many possible ways to work around this issue - this question is intended to ask why this method, which is expected to work, does not
reproduced this issue on a fresh create-react-app project using react-scripts 4.0.0
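On the second bullet, the reference-identity rule is easy to demonstrate in isolation (a standalone sketch, separate from the React code above):

const handler = () => console.log("resized")
window.addEventListener("resize", handler)

window.removeEventListener("resize", handler)  // removed: same reference
window.removeEventListener("resize", () => {}) // no-op: a different function object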
Ask
Does anyone have an explanation for this issue? I'm stumped!
(related: can others reproduce this issue?)
As pointed out by Patrick Roberts and Aleksey L. in comments on the question, the issue is not actually an issue, because:
the event handlers are registered by invokeGuardedCallbackDev in react-dom.development.js
the event handlers are cleaned up periodically
this does not affect production builds
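One way to verify this yourself: the Chrome DevTools console exposes getEventListeners (a Command Line API helper, not standard JavaScript), so you can compare the listener count in a development build against a production build:

// Paste into the Chrome DevTools console; won't run in page scripts
(getEventListeners(window).resize || []).length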
I'm trying to implement a compass using the deviceorientation event. I want to use the rotation to rotate an arrow pointing in the viewing direction, simply rotating the image whenever deviceorientation changes.
On iOS this works like a charm by doing the following:
if (typeof DeviceOrientationEvent.requestPermission === "function") {
  // @ts-ignore
  DeviceOrientationEvent.requestPermission()
    .then(permissionState => {
      if (permissionState === "granted") {
        this.compassActive = true;
        window.addEventListener(
          "deviceorientation",
          eventData =>
            this.zone.run(() => {
              if (!this.compassActive) {
                return false;
              }
              // @ts-ignore
              this.rotatePlayerIcon(eventData.webkitCompassHeading);
            }),
          false
        );
      }
    })
    .catch(console.error);
}
DeviceOrientationEvent.webkitCompassHeading gives me a clockwise, world-based representation of the device's rotation.
I tried the same on Android, but I can't find a world-based solution. webkitCompassHeading does not exist on Android, so I tried using just eventData.alpha. But alpha is relative: 0 is wherever the device was pointing when the event was first fired, not world north.
All the guides I find seem outdated.
How do I get a clockwise compass on android like I get on iOS?
Below you'll see a compass from vtm-mapsforge; it also includes an arrow. Apart from the usual magnetometer/accelerometer sensors, I recommend using the location bearing, as the sensors are not at all accurate and get easily disrupted. The nice thing about this compass is that it includes code to make it rotate smoothly, since the raw output from the sensors is often noisy.
For the Java code look at this; it is the original version of the vtm-mapsforge compass.
The problem is that alpha is not absolute, meaning 0 is not north; instead, 0 is wherever the device was pointing on activation.
The best fix for Chrome-based browsers, which covers most standard Android browsers, is using an AbsoluteOrientationSensor from the W3C Generic Sensor API polyfills.
GitHub
The project I used it in is TypeScript-based, so here is a TypeScript example:
import { AbsoluteOrientationSensor } from 'motion-sensors-polyfill'

const options = { frequency: 60, referenceFrame: 'device' };
const sensor = new AbsoluteOrientationSensor(options);
sensor.addEventListener('reading', e => {
  this.zone.run(() => {
    var q = e.target.quaternion;
    let alpha = Math.atan2(2 * q[0] * q[1] + 2 * q[2] * q[3], 1 - 2 * q[1] * q[1] - 2 * q[2] * q[2]) * (180 / Math.PI);
    if (alpha < 0) alpha = 360 + alpha;
    this.alpha = 360 - alpha;
    this.rotatePlayerIcon(360 - alpha)
  })
});
sensor.start();
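The atan2 line is just extracting the rotation around the vertical axis from the quaternion. Pulled out into a helper (same math as the snippet above, nothing new), it reads:

// Converts the sensor quaternion [x, y, z, w] into a clockwise
// compass heading in degrees (0 = north), same formula as above
function quaternionToHeading(q) {
  let alpha = Math.atan2(
    2 * q[0] * q[1] + 2 * q[2] * q[3],
    1 - 2 * q[1] * q[1] - 2 * q[2] * q[2]
  ) * (180 / Math.PI)
  if (alpha < 0) alpha += 360 // normalize into [0, 360)
  return 360 - alpha          // flip counter-clockwise alpha to a clockwise heading
}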
IntersectionObserver is a fairly new, experimental API, and at this moment it is not fully supported by all browsers.
It will have many uses, but for now the most prominent one is lazy-loading your images, especially if you have plenty of them on your website. It is recommended by Google if you audit your website with Lighthouse.
Now, there are several snippets around the web suggesting its usage, but I think none of them are 100% vetted. For example, I'm trying to use this one. It works like a charm on Chrome, Firefox and Opera, but it doesn't work on IE and Edge.
const images = document.querySelectorAll('img[data-src]');
const config = {
  rootMargin: '50px 0px',
  threshold: 0.01
};

let observer;
if ('IntersectionObserver' in window) {
  observer = new IntersectionObserver(onChange, config);
  images.forEach(img => observer.observe(img));
} else {
  console.log('%cIntersection Observers not supported', 'color: red');
  images.forEach(image => loadImage(image));
}

const loadImage = image => {
  image.classList.add('fade-in');
  image.src = image.dataset.src;
}

function onChange(changes, observer) {
  changes.forEach(change => {
    if (change.intersectionRatio > 0) {
      // Stop watching and load the image
      loadImage(change.target);
      observer.unobserve(change.target);
    }
  });
}
To be more precise, the code should detect whether the browser supports IntersectionObserver, and if NOT, it should immediately load all images without utilizing the API and log to the console that IntersectionObserver is not supported. The snippet above fails to do that.
As far as my investigation goes, when testing with IE 11 and Edge 15, they spit an error to the console saying they don't recognize forEach, despite the fact that they should support it.
I've tried to shim forEach, and even to replace forEach with a good old for loop (along the lines of the sketch below), but I can't get this snippet to work on IE and Edge.
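The kind of fallback I mean, for the record (same images collection and loadImage as in the snippet above, converted to a real array since IE/Edge lack NodeList.prototype.forEach):

// Array.prototype.slice turns the static NodeList into a real array,
// whose forEach is supported everywhere
Array.prototype.slice.call(images).forEach(image => loadImage(image))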
Any thoughts?
After some tests, I found the reason.
First I let the observer observe document.body, and it worked. Then I guessed that the observer can't observe empty elements, so I set a 1px border on the element I wanted to observe, and then it worked.
This may be a bug in Edge, because Chrome and Firefox can both observe empty elements.
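A sketch of that workaround, assuming an otherwise empty placeholder element (the selector is hypothetical) and the observer from the question's snippet:

// Hypothetical empty placeholder that Edge refuses to report intersections for
const placeholder = document.querySelector('.lazy-placeholder')

// Give it a minimal, invisible box so Edge has something to intersect
placeholder.style.border = '1px solid transparent'

observer.observe(placeholder)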
I have written a Three.js application using StereoEffect, with 3 scenes for overlaying purposes, clearing the renderer's depth buffer between them.
(Using this approach Three.js - Geometry on top of another).
However, I now need to use VREffect, for better compatibility with headsets such as the Gear VR, using the WebVR Polyfill.
The following are snippets from the code, to show how it's set up:
const renderer = new THREE.WebGLRenderer({ antialias: false })
renderer.setSize(window.innerWidth, window.innerHeight)
renderer.autoClear = false;
document.body.appendChild(renderer.domElement)

effect = new THREE.VREffect(renderer)
effect.separation = 0
effect.setSize(window.innerWidth, window.innerHeight)

let vrDisplay
navigator.getVRDisplays().then(displays => {
  if (displays.length > 0)
    vrDisplay = displays[0]
})

// Add button to enable the VR mode (display stereo)
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/vr/cardboard64.png", () => {
  vrDisplay.requestPresent([{ source: renderer.domElement }])
})
... The rest of the code ...
Inside my animation loop:
renderer.clear()
effect.render(scene, camera)
renderer.clearDepth() // clear only the depth buffer so scene2/scene3 draw on top
effect.render(scene2, camera)
effect.render(scene3, camera)
However, this approach doesn't seem to work when using VREffect, but only when entering VR mode (e.g. viewing it on my desktop works fine). I think the issue is that renderer.clear() or renderer.clearDepth() is not taking effect, as the canvas is pitch black apart from some elements in scene3.
Furthermore, when commenting out the rendering of scene2 and scene3, I can perfectly well see everything in the first scene, rendered correctly.
Looking through the code of VREffect and StereoEffect, I couldn't figure out which part rendered my changes useless.
Any help/hints would be greatly appreciated.
I never did find out how to fix that issue, but I got around it by using StereoEffect instead of VREffect for browsers other than the Gear VR's: VREffect worked perfectly fine there, but not in a normal phone browser, where Cardboard might be used. Here's what I did, in case anyone else runs into this issue:
From the example above, I turned the vrButton bit into this:
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/images/cardboard64.png", () => {
  if (navigator.userAgent.includes("Mobile VR")) {
    vrDisplay.requestPresent([{ source: renderer.domElement }])
  } else {
    effect = new THREE.StereoEffect(renderer)
    effect.separation = 0
    effect.setSize(window.innerWidth, window.innerHeight)
    document.getElementById("vr-sample-button-container").style.display = "none"
  }
})
where I switch over from VREffect to StereoEffect when the "Enter VR" button is clicked.
With this approach, however, the content will not be fullscreen and the device will eventually go to sleep. To fix both issues, you can have the user tap the screen to turn on fullscreen manually with this:
renderer.domElement.addEventListener("click", () => {
  // Try the standard API first, then the vendor-prefixed variants;
  // the || chain stops at the first one that exists and runs
  if (document.fullscreenEnabled && renderer.domElement.requestFullscreen() ||
      document.webkitFullscreenEnabled && renderer.domElement.webkitRequestFullscreen() ||
      document.mozFullScreenEnabled && renderer.domElement.mozRequestFullScreen() ||
      document.msFullscreenEnabled && renderer.domElement.msRequestFullscreen()) {}
})
Obviously, this is not as good of a user experience, and you don't get the nice UI, so if someone finds this and knows of an actual fix, please leave an answer/comment. I'll update this if I find anything myself.
Some final thoughts: afaik the Gear VR browser has some sort of native WebVR implementation, whereas elsewhere the polyfill is used, so that could be part of the issue.