Why is the tap operator impacting stream flow?

For some reason, having a tap operator alters the output of a stream (it actually produces my expected result). When I remove the tap, the subsequent filter no longer appears to work.
I have run the code below on CodeSandbox, both with
tap(([prevMouseEvent, currentMouseEvent]) =>
  console.log(directionChange(prevMouseEvent, currentMouseEvent))
),
included and with it commented out. Without the tap, the subsequent filter doesn't appear to work and I get a constant stream of paired mouse events.
// Returns true if the predominant movement in the mouse has changed direction
const directionChange = (prevMouseEvent, currentMouseEvent) => {
  const dominantAxisMovement =
    Math.abs(currentMouseEvent.movementX) >
    Math.abs(currentMouseEvent.movementY) ? "X" : "Y";
  if (dominantAxisMovement === "X") {
    return (
      Math.sign(currentMouseEvent.movementX) !==
      Math.sign(prevMouseEvent.movementX)
    );
  } else {
    return (
      Math.sign(currentMouseEvent.movementY) !==
      Math.sign(prevMouseEvent.movementY)
    );
  }
};
const mouseDirectionSwitch$ = fromEvent(document, "mousemove").pipe(
  pairwise(),
  tap(([prevMouseEvent, currentMouseEvent]) =>
    console.log(directionChange(prevMouseEvent, currentMouseEvent))
  ),
  filter(([prevMouseEvent, currentMouseEvent]) =>
    directionChange(prevMouseEvent, currentMouseEvent)
  )
);

mouseDirectionSwitch$.subscribe(passed => console.log(passed));
What I am trying to achieve is an observable that only emits when the user changes the direction of a mouse movement (by a change of direction I mean up to down or left to right, not subtle variations). With the tap it works, but can anyone explain why the tap operator is required here to get the desired output? I thought the tap operator returned an identical observable and would therefore have no effect on the output of the stream it is in.
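As a sanity check, a minimal stream like this (assuming the usual imports from rxjs and rxjs/operators) behaves exactly as I'd expect, with the tap passing every value through untouched:
import { of } from "rxjs";
import { tap, filter } from "rxjs/operators";

of(1, 2, 3).pipe(
  tap(v => console.log("tap sees", v)),  // side effect only; the value passes through unchanged
  filter(v => v % 2 === 1)
).subscribe(v => console.log("filtered", v));
// logs: tap sees 1, filtered 1, tap sees 2, tap sees 3, filtered 3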

Thanks for the suggestions. Jacob - your suggestion made me go and recreate this in its own independent project... and it worked fine. So I shut everything down, reopened the original project and, lo and behold, it worked. I'm really not sure why it wasn't working before (despite full page refreshes to flush any remnant event handlers), but the age-old "have you tried restarting your computer?" was all I really needed. Thanks for your ideas!

Related

Touch move is really slow

I'm building a paint-like feature where the user can draw a line, but the touchmove event fires really slowly on my device (an Android phone), so the line comes out jagged. As soon as I connect the device to my PC and open the Chrome DevTools via USB debugging, everything works fine. In the phone emulator in desktop Chrome there aren't any problems either.
Here is a screenshot. The inner circle was drawn with the slow touch events; for the outer one, I connected the device to my PC.
Here is another screenshot showing the durations between individual "touchmove" event calls. The top part (green values) occurred while the DevTools were open, the bottom part (red values) while they were closed.
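For reference, the durations were logged with a quick sketch along these lines (el is a placeholder for the drawing canvas):
// rough sketch of how the intervals between touchmove events can be measured
let last = performance.now()
el.addEventListener("touchmove", () => {
  const now = performance.now()
  console.log(`${(now - last).toFixed(1)} ms since last touchmove`)
  last = now
})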
The code:
function DrawingCanvas(/* ... */) {
  // ...
  const handleTouchMove = (event) => {
    handleMouseMove(event.touches[0])
  }

  const handleMouseMove = ({ clientX, clientY }) => {
    if (!isDrawing) {
      return
    }
    const canvasRect = canvas.getBoundingClientRect()
    const x = clientX - canvasRect.x
    const y = clientY - canvasRect.y
    currentPath.current.addPoint([x, y])
    update()
  }

  const update = () => {
    clearCanvas()
    drawPath()
  }

  // ...
  useEffect(() => {
    const drawingCanvas = drawingCanvasRef.current
    // ...
    drawingCanvas.addEventListener("touchstart", handleDrawStart)
    drawingCanvas.addEventListener("touchend", handleDrawEnd)
    drawingCanvas.addEventListener("touchcancel", handleDrawEnd)
    drawingCanvas.addEventListener("touchmove", handleTouchMove)
    drawingCanvas.addEventListener("mousedown", handleDrawStart)
    drawingCanvas.addEventListener("mouseup", handleDrawEnd)
    drawingCanvas.addEventListener("mousemove", handleMouseMove)
    return () => {
      drawingCanvas.removeEventListener("touchstart", handleDrawStart)
      drawingCanvas.removeEventListener("touchmove", handleTouchMove)
      drawingCanvas.removeEventListener("touchend", handleDrawEnd)
      drawingCanvas.removeEventListener("touchcancel", handleDrawEnd)
      drawingCanvas.removeEventListener("mousedown", handleDrawStart)
      drawingCanvas.removeEventListener("mouseup", handleDrawEnd)
      drawingCanvas.removeEventListener("mousemove", handleMouseMove)
    }
  })

  return <canvas /* ... */ />
}
Does anyone have an idea on how to fix this?
You can test it by yourself on the website: https://www.easymeme69.com/editor
Somehow calling event.preventDefault() on the touchmove event fixed it.
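A sketch of what that looks like in the handler from the question; the { passive: false } option is my addition, since browsers may register touchmove listeners as passive by default and would then ignore preventDefault():
const handleTouchMove = (event) => {
  event.preventDefault() // stops the browser from treating the gesture as a scroll/zoom
  handleMouseMove(event.touches[0])
}
// passive: false makes sure preventDefault() is actually honored
drawingCanvas.addEventListener("touchmove", handleTouchMove, { passive: false })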
I'm facing exactly the same situation: I'm developing a React app with some touch features that implement actions on touchmove.
All my tests are done in Chrome on the Debian-based Raspberry Pi OS distro.
It results in a deadly laggy UI with a real touch screen... except (and this is where it becomes very interesting!) if the console is opened with the Chrome mobile emulator - then it's fine, even if I play with my finger on the real touch screen at that moment.
The touch-action: none and event.stopPropagation() hacks already existed in my code and didn't change the game.
Two conclusions from that:
The touch screen (and its driver) is fine
The CPU is quite able to handle the load
For now, the mystery remains opaque to me.
My feeling is that, somehow, Chrome is deliberately decreasing/increasing the touch event rate depending on whether we're in a real use case or in the emulator. I created a simple fiddle to validate this hypothesis: https://jsfiddle.net/ncgtjesh/20/show
It seems to be the case, since I can clearly see that the emulator-enabled mode outputs 240 events/second while the real, non-emulated interface is stuck at 120.
I'm quite surprised that the fixes in the responses above worked, since it seems to be a browser implementation choice.
I had this exact same thing happen to me, down to not being able to reproduce it with USB debugging open. Besides the e.preventDefault() hack, you can also set touch-action: none; on the touchable element in CSS.
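If you'd rather set it from the component than from a stylesheet, the equivalent style assignment would be (a sketch, using the ref from the question):
// same effect as `touch-action: none;` in CSS, applied to the canvas element
drawingCanvasRef.current.style.touchAction = "none"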
I've had the same problem. I had no freezes on mobile or Firefox, only on Chromium. Either disabling touchpad-overscroll-history-navigation in chrome://flags or e.preventDefault() can solve the problem.

Angular 4 weird error: query doesn't end when -not- showing a message toast

Here it is: the weirdest error of my whole programming career. I've been struggling with this, yet I can't find out what's going on in this code. It just doesn't seem to make sense in any way.
I'm using the following tools:
Ionic 3
Angular 4
Typescript / ES6
I'm trying to write a method, assignChat(user), which assigns a chat to a user. It has to use several APIs, geolocation... it's a big method, actually. That's why I've split it into two parts connected by promises, and used them afterwards, so my method looks pretty much like this:
assignChat(user){
  const getLocationName = () => {
    return new Promise((resolve, reject) => {
      // 30 lines of code
    });
  }

  const assignOrCreateChat = (area) => {
    return new Promise((resolve, reject) => {
      // 40 lines of code
    });
  }

  // then I use the inner functions here and write an extra 60-70 lines of code
}
Ok! This works neatly. I didn't have many problems with this algorithm after quite a bit of testing, although it is heavy and takes ~0.5 s to execute properly, finish its queries, and show the result.
Thing is... I had some toasts displaying information, like where you're located. I wanted to remove them, and started with this one, in the inner function getLocationName(). This is the code I want to talk to you about:
const getLocationName = () => {
  return new Promise((resolve, reject) => {
    const ADDRESS_LEVEL = 2;
    this.reverseGeocode(ADDRESS_LEVEL).then(address => {
 ---> this.toastify("You have been located at: " + address, 1500);
      let query = new Parse.Query("PoliticalArea");
      // more code
The line I marked with an arrow is the one giving me problems. I mean, you probably think the code fails because of that line, but it's totally the opposite! If I remove it, the algorithm suddenly stops working and fails to display any result.
The "toastify" method is a quick way I did for myself for displaying toasts. It works well, actually! This is the implementation:
toastify(message, duration){
  this.toastCtrl.create({
    message: message,
    duration: duration
  }).present();
}
Not exactly the most dangerous method. Well, in fact, it seems the code won't work without it. If I comment the line out, or erase it, I never get any result, or any error, from the big algorithm I showed you before. I've got every possible exception caught (although the API connectors have no timeout), but it's like it gets stuck every time it doesn't display the toast.
I just don't understand what's going on. Seems like a very serious thing the Angular team should look into, in my very honest opinion.
Any idea of what kind of black magic is going there?
UPDATE:
Some further info: when I navigate through the "bugged" view (without the toastify line, and therefore not displaying the chat result) and, for example, click on another chat (which pushes a view onto the Navigation Controller), it somehow starts showing the chat result I expected. When I pop the new view from the navCtrl and get back to the page, the expected result is now visible.
Is this some problem with Angular watches?
Ok, the solution was not obvious.
It seems the view was being rendered before the task completed. It is a heavy task, so maybe that's why Angular's change detection didn't kick in properly. I tried executing it both in the constructor and in ionViewDidEnter(), but neither worked.
My final solution was to force the component's re-rendering through ApplicationRef, calling its .tick() method at the very end of my method.
That fixed it all!
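In outline, it looks something like this (a sketch, not the original code: the injection and the this.chat field are placeholder names, and the two helpers are the inner functions from the question):
import { ApplicationRef } from "@angular/core";

// inject ApplicationRef alongside the existing dependencies
constructor(private appRef: ApplicationRef /* , ... */) {}

assignChat(user) {
  // getLocationName and assignOrCreateChat are the inner promise helpers shown above
  getLocationName()
    .then(area => assignOrCreateChat(area))
    .then(result => {
      this.chat = result;  // hypothetical field the view binds to
      this.appRef.tick();  // force change detection now that the heavy async work is done
    })
    .catch(err => console.error(err));
}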

How to render multiple scenes with THREE.js VREffect

I have written a Three.js application using StereoEffect, with 3 scenes for overlay purposes, by clearing the renderer depth between them.
(Using this approach Three.js - Geometry on top of another).
However, I now need to use VREffect, for better compatibility with headsets such as the Gear VR, using the WebVR Polyfill.
The following are snippets from the code, to show how it's set up:
const renderer = new THREE.WebGLRenderer({antialias: false})
renderer.setSize(window.innerWidth, window.innerHeight)
renderer.autoClear = false;
document.body.appendChild(renderer.domElement)

effect = new THREE.VREffect(renderer)
effect.separation = 0
effect.setSize(window.innerWidth, window.innerHeight)

let vrDisplay
navigator.getVRDisplays().then(displays => {
  if (displays.length > 0)
    vrDisplay = displays[0]
})

// Add button to enable the VR mode (display stereo)
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/vr/cardboard64.png", () => {
  vrDisplay.requestPresent([{source: renderer.domElement}])
})
... The rest of the code ...
Inside my animation loop:
renderer.clear()
effect.render(scene, camera)
renderer.clearDepth()
effect.render(scene2, camera)
effect.render(scene3, camera)
However, this approach doesn't seem to work when using VREffect - only when entering VR mode; e.g. viewing it on my desktop works fine. I think the issue is that the renderer.clear() or renderer.clearDepth() is not taking effect, as the canvas is pitch black apart from some elements in scene3.
Furthermore, when commenting out the rendering of scene2 and scene3, I can see everything in the first scene perfectly well, rendered correctly.
Looking through the code in VREffect and StereoEffect, I couldn't figure out which part rendered my changes useless.
Any help/hints would be greatly appreciated.
I never did find out how to fix that issue, but I worked around it by using StereoEffect instead of VREffect for browsers other than the Gear VR's: VREffect worked perfectly fine there, but not in a normal phone browser, where Cardboard might be used. Here's what I did, if anyone else runs into this issue:
From the example above, I turned the vrButton bit into this:
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/images/cardboard64.png", () => {
  if(navigator.userAgent.includes("Mobile VR")){
    vrDisplay.requestPresent([{source: renderer.domElement}])
  }else {
    effect = new THREE.StereoEffect(renderer)
    effect.separation = 0
    effect.setSize(window.innerWidth, window.innerHeight)
    document.getElementById("vr-sample-button-container").style.display = "none"
  }
})
where I switched over from VREffect to StereoEffect when the 'View in VR' button is clicked.
With this approach, however, the content will not be fullscreen and the device will eventually go to sleep. To fix both issues, you can have the user tap the screen to manually turn on fullscreen with this:
renderer.domElement.addEventListener("click", () => {
  if(document.fullscreenEnabled && renderer.domElement.requestFullscreen() ||
     document.webkitFullscreenEnabled && renderer.domElement.webkitRequestFullscreen() ||
     document.mozFullScreenEnabled && renderer.domElement.mozRequestFullScreen() ||
     document.msFullscreenEnabled && renderer.domElement.msRequestFullscreen()){}
})
Obviously, this is not as good of a user experience, and you don't get the nice UI, so if someone finds this and knows of an actual fix, please leave an answer/comment. I'll update this if I find anything myself.
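For what it's worth, the unprefixed Fullscreen API has since been standardized and returns a Promise, so in current browsers a simpler version of the above should do (a sketch):
renderer.domElement.addEventListener("click", () => {
  // only request fullscreen when it's available and not already active
  if (document.fullscreenEnabled && !document.fullscreenElement) {
    renderer.domElement.requestFullscreen().catch(err => console.warn(err))
  }
})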
Some final thoughts: afaik the Gear VR browser has some sort of native WebVR implementation, whereas elsewhere the polyfill is used, so that could be part of the issue.

How do reactive streams in JS work?

I'm a novice with reactive streams and am now trying to understand them. The idea looks pretty clear and simple, but in practice I can't figure out what's really going on.
For now I'm playing with most.js, trying to implement a simple dispatcher. The scan method seems to be exactly what I need for this.
My code:
var dispatch;
// expose method for pushing events to stream:
var events = require("most").create(add => dispatch = add);
// initialize stream, so callback in `create` above is actually called
events.drain();

events.observe(v => console.log("new event", v));
dispatch(1);

var scaner = events.scan(
  (state, patch) => {
    console.log("scaner", patch);
    // update state here
    return state;
  },
  { foo: 0 }
);
scaner.observe(v => console.log("scaner state", v));
dispatch(2);
As I understand it, the first observer should be called twice (once per event), and the scaner callback and second observer once each (because they were added after the first event was triggered).
On practice, however, console shows this:
new event 1
new event 2
scaner state { foo: 0 }
The scaner is never called, no matter how many events I push into the stream.
But if I remove the first dispatch call (before creating the scaner), everything works just as I expected.
Why is this? I'm reading docs, reading articles, but so far I haven't found anything even similar to this problem. Where am I wrong in my assumptions?
Most probably, you have studied examples like this from the API:
most.from(['a', 'b', 'c', 'd'])
  .scan(function(string, letter) {
    return string + letter;
  }, '')
  .forEach(console.log.bind(console));
They are suggesting a step-by-step execution like this:
Get an array ['a', 'b', 'c', 'd'] and feed its values into the stream.
The values fed are transformed by scan().
... and consumed by forEach().
But this is not entirely true. This is why your code doesn't work.
Here in the most.js source code, you see at line 1340 ff.:
exports.from = from;
function from(a) {
  if(Array.isArray(a) || isArrayLike(a)) {
    return fromArray(a);
  }
  ...
So from() is forwarding to some fromArray(). Then, fromArray() (below in the code) is creating a new Stream:
...
function fromArray (a) {
  return new Stream(new ArraySource(a));
}
...
If you follow it through, you will get from Stream to sink.event(0, array[i]);, with 0 as the timeout millis. There is no setTimeout in the code, but if you search the code further for .event = function, you will find a lot of additional code that uncovers more. In particular, around line 4692 there is the Scheduler with delay() and timestamps.
To sum it up: the array in the example above is fed into the stream asynchronously, after some time, even if that time seems to be 0 millis.
Which means you have to assume that, somehow, the stream is first built and then used, even if the program code doesn't look that way. But hey, isn't hiding complexity always the goal :-) ?
Now you can check this with your own code. Here is a fiddle based on your snippet:
https://jsfiddle.net/aak18y0m/1/
Look at your dispatch() calls in the fiddle. I have wrapped them with setTimeout():
setTimeout( function() { dispatch( 1 /* or 2 */); }, 0);
By doing so, I force them to be asynchronous calls as well, just like the array values in the example actually are.
In order to run the fiddle, you need to open the browser debugger (to see the console) and then press the run button above. The console output shows that your scanner is now called three times:
doc ready
(index):61 Most loaded: [object Object]
(index):82 scanner state Object {foo: 0}
(index):75 scanner! 1
(index):82 scanner state Object {foo: 0}
(index):75 scanner! 2
(index):82 scanner state Object {foo: 0}
First for drain(), then for each event.
You can also reach a valid result (though it's not the same behind the scenes) if you call dispatch() synchronously, adding the calls at the end, after JavaScript has been able to build the whole stream. Just uncomment the lines after // Alternative solution, run again and watch the result.
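Spelled out, the alternative is just your snippet with every dispatch() moved below the stream construction (a sketch along the lines of the fiddle's commented-out block):
var dispatch;
var events = require("most").create(add => dispatch = add);
events.drain(); // activates the stream, so `dispatch` gets assigned

events.observe(v => console.log("new event", v));
var scaner = events.scan((state, patch) => {
  console.log("scaner", patch);
  return state;
}, { foo: 0 });
scaner.observe(v => console.log("scaner state", v));

// Alternative solution: only dispatch after the whole pipeline is wired up
dispatch(1);
dispatch(2);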
Well, my question appears to be not as general as it sounds; it's just a lib-specific one.
First, the approach from my question is simply not valid for most.js: its authors argue that you should 'take a declarative, rather than imperative, approach'.
Second, I tried the Kefir.js lib, and with it the code from my question works perfectly. It just works. Even more, the very approach that is not supported in most.js is explicitly recommended for Kefir.js.
So, the problem is in a particular lib's implementation, not in my head.
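For comparison, here is roughly what the same dispatcher looks like in Kefir.js; a sketch from my reading of its docs (Kefir.stream hands the subscriber an emitter, whose emit plays the role that most.js's add played):
var dispatch;
var events = Kefir.stream(emitter => { dispatch = emitter.emit; });

events.observe(v => console.log("new event", v)); // activation assigns `dispatch`
dispatch(1);

var scaner = events.scan((state, patch) => {
  console.log("scaner", patch);
  // update state here
  return state;
}, { foo: 0 });
scaner.observe(v => console.log("scaner state", v));
dispatch(2);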

What to do when one stream depends on the value from another?

I'm new to Rx.js and looking for some basic help with FRP concepts. I have one stream, using generateWithRelativeTime, that keeps a game loop running, and another stream that keeps track of the screen resolution. The problem with my current code is that my game loop only runs when the screen resolution changes.
The stream generated by my model function requires the latest value from res$.
function model(res$) {
  return res$.map(res => {
    const rows = ceil(res[1] / CELL.HEIGHT) + 1
    const cols = ceil(res[0] / CELL.WIDTH) + 1
    return Observable.generateWithRelativeTime(
      initWorld(rows, cols, 1.618),
      noOp,
      world => step(world),
      noOp,
      _ => 100,
      Rx.Scheduler.requestAnimationFrame
    ).timeInterval()
  });
}
function intent() {
  return Observable.fromEvent(window, 'resize')
    .map(e => [e.currentTarget.innerWidth, e.currentTarget.innerHeight])
    .debounce(100, Rx.Scheduler.requestAnimationFrame)
    .startWith([window.innerWidth, window.innerHeight])
}
model(intent())
Welcome to the joy of FRP. As it stands, it is difficult to understand your question. How is "What to do when one stream depends on the value from another?" connected to "my game loop only runs when my screen resolution changes"? You give the actual behavior; what is your expected behavior?
From my limited understanding of the question, I think you need to replace the line return res$.map(res => { with return res$.flatMapLatest(res => {, i.e. use the flatMapLatest operator instead of the map operator.
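Applied to the code in the question, the change is a single operator; with flatMapLatest, each new resolution disposes the previous inner game loop and subscribes to the new one, so model(intent()) emits world states instead of inner Observables:
function model(res$) {
  return res$.flatMapLatest(res => {  // was: res$.map(res => {
    const rows = ceil(res[1] / CELL.HEIGHT) + 1
    const cols = ceil(res[0] / CELL.WIDTH) + 1
    return Observable.generateWithRelativeTime(
      initWorld(rows, cols, 1.618),
      noOp,
      world => step(world),
      noOp,
      _ => 100,
      Rx.Scheduler.requestAnimationFrame
    ).timeInterval()
  })
}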
To understand the flatMap operator, I recommend reviewing the following resources: The introduction to reactive programming you've been missing, the illustrated marbles and the official documentation.
PS: I see that you seem to follow a MODEL-VIEW-INTENT architecture. Out of curiosity, are you experimenting with cyclejs?

Categories

Resources