I am building a web application using React where users can enter a group call. I have a Node.js server that runs Socket.IO to manage the client events, and the users are connected through peer-to-peer connections using simple-peer (WebRTC).
Everything is functional: joining the call, seeing other users, and leaving. The call is "always open", similar to Discord, and users can come and go as they please. However, if you leave and then try to rejoin the call without refreshing the page, the call breaks on all sides. The user leaving and rejoining gets the following error:
Error: cannot signal after peer is destroyed
For the other users in the call, the "user joined" event is logged multiple times for the one user that tried to rejoin. It used to add multiple peers as well, but I now make sure duplicates cannot exist.
However, the strangest part to me is that when a user leaves, they send a call out to all other users in the room. The other users successfully destroy the peer connection and then remove the user from their peer array. The user who left, in turn, also destroys each connection and resets their peer array to an empty array. So I'm very confused as to which peer-to-peer connection it is trying to re-establish.
const [roomSize, setRoomSize] = useState(0);
const socketRef = useRef();
const userVideo = useRef();
const peersRef = useRef([]);
const roomId = 'meeting_' + props.zoneId;
useEffect(async () => {
  socketRef.current = io.connect(SERVER_URI, {
    jsonp: false,
    forceNew: true,
    extraHeaders: {
      "x-access-token": window.localStorage.getItem('accessToken'),
      "zone-id": props.zoneId
    }
  });
}, []);
useEffect(async () => {
  if (props.active) {
    try {
      const stream = await navigator.mediaDevices.getUserMedia({
        video: {
          height: window.innerHeight / 2,
          width: window.innerWidth / 2
        },
        audio: true
      });
      userVideo.current.srcObject = stream;
      console.log(`%cJoined socket at ${SERVER_URI}, connected=${socketRef.current.connected}`, 'color: pink');
      socketRef.current.emit("join room", roomId);
      socketRef.current.on("all users", users => {
        users.forEach(userID => {
          const peer = createPeer(userID, socketRef.current.id, stream);
          const peerObj = {
            peerID: userID,
            peer,
          };
          if (!peersRef.current.find(p => p.peerID === userID))
            peersRef.current.push(peerObj);
        });
        setRoomSize(peersRef.current.length);
        console.log(`%cNew Room Members: ${peersRef.current.length}; %o`, 'color: cyan', {peersRef: peersRef.current});
      });
      socketRef.current.on("user joined", payload => {
        const peer = addPeer(payload.signal, payload.callerID, stream);
        const peerObj = {
          peerID: payload.callerID,
          peer,
        };
        if (!peersRef.current.find(p => p.peerID === payload.callerID))
          peersRef.current.push(peerObj);
        setRoomSize(peersRef.current.length);
        console.log(`%cSomeone Joined. Members: ${peersRef.current.length}; %o`, 'color: cyan', {peersRef: peersRef.current});
      });
      socketRef.current.on("receiving returned signal", payload => {
        /** @type {Peer} */
        const item = peersRef.current.find(p => p.peerID === payload.id);
        item.peer.signal(payload.signal);
        console.log("%creceiving return signal", 'color: lightgreen');
      });
      socketRef.current.on('user left', id => {
        const peerObj = peersRef.current.find(p => p.peerID === id);
        // console.log("user left", { peerObj });
        if (peerObj)
          peerObj.peer.destroy();
        const peers = peersRef.current.filter(p => p.peerID !== id);
        peersRef.current = peers;
        setRoomSize(peersRef.current.length);
        console.log(`%cSomeone Left. Members: ${peersRef.current.length}`, 'color: cyan');
      });
    } catch (err) {
      console.trace(err);
    }
  }
  else if (socketRef.current && socketRef.current.connected) {
    socketRef.current.emit("leave room");
    peersRef.current.forEach(peerObj => {
      peerObj.peer.destroy();
    });
    peersRef.current = [];
    setRoomSize(peersRef.current.length);
  }
}, [props.active, peersRef.current]);
const createPeer = (userToSignal, callerID, stream) => {
  const peer = new Peer({
    initiator: true,
    trickle: false,
    stream,
  });
  peer.on("signal", signal => {
    socketRef.current.emit("sending signal", { userToSignal, callerID, signal });
  });
  return peer;
};

const addPeer = (incomingSignal, callerID, stream) => {
  const peer = new Peer({
    initiator: false,
    trickle: false,
    stream,
  });
  peer.on("signal", signal => {
    socketRef.current.emit("returning signal", { signal, callerID });
  });
  peer.signal(incomingSignal);
  return peer;
};
Quick Edit: The above code is part of a React component that renders a video element for each peer.
props.active becomes false when the user leaves the call. This is handled at the end of the second useEffect hook, where the client who left should have removed all their peer objects after destroying them. Why does this user receive the above error on a reconnect? And how do I keep this error from occurring?
Edit: I just noticed that when both users leave the call and both try to rejoin without refreshing, the error does not occur. So my best guess is that something is different between removing a peer when another user leaves and when you leave yourself.
TL;DR: Put all refs you use in the useEffect body in the useEffect deps array.
I'd be sure to first check the useEffect deps array. It looks like socketRef is used in multiple places throughout that hook body, but it doesn't appear to be in the deps array. This can cause the hook to use stale data.
Because of this, the socketRef ref object may never actually update. That means the client may correctly remove the user from peers, since peersRef is in the useEffect deps array, but the internal session (the room) may not recognize this; the room's internal representation of the user still exists.
Repeating myself, but just to make it clear, you mentioned:
So my best guess is that something is different between removing a peer when another user leaves and when you leave yourself.
This is for the same reason as listed above. The reason it happens when a peer leaves is that the peersRef ref object IS in the useEffect deps array, so the effect you're describing is just 'perfect timing', if you will, since the application's state (all the refs) is correctly synced up.
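A minimal sketch of the suggested change against the hook above (everything else unchanged; whether you list the ref object or its .current is a style choice, shown here matching the question's existing deps):

useEffect(async () => {
  // ... join / leave logic from the question, unchanged ...
}, [props.active, peersRef.current, socketRef.current]); // socketRef was missing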
Let's imagine I have a function fetchUser which takes a userId as parameter and returns an observable of user.
As I am calling this method often, I want to batch the ids to perform one request with multiple ids instead!
Here is where my troubles began...
I can't find a solution to do that without sharing an observable between the different calls of fetchUser.
import { Subject, from } from "rxjs"
import { bufferTime, mergeMap, map, toArray, filter, take, share } from "rxjs/operators"

const functionThatSimulateAFetch = (userIds: string[]) => from(userIds).pipe(
  map((userId) => ({ id: userId, name: "George" })),
  toArray(),
)

const userToFetch$ = new Subject<string>()

const fetchedUser$ = userToFetch$.pipe(
  bufferTime(1000),
  mergeMap((userIds) => functionThatSimulateAFetch(userIds)),
  share(),
)

const fetchUser = (userId: string) => {
  const observable = fetchedUser$.pipe(
    map((users) => users.find((user) => user.id === userId)),
    filter((user) => !!user),
    take(1),
  )
  userToFetch$.next(userId)
  return observable
}
But that's ugly and it has multiple problems:
If I unsubscribe from the observable returned by fetchUser before the timer of bufferTime has finished, it doesn't prevent the fetch of the user.
If I unsubscribe from all the observables returned by fetchUser before the fetch of the batch has finished, it doesn't cancel the request.
Error handling is more complex.
Etc.
More generally: I don't know how to solve problems that require sharing resources using RxJS. It's difficult to find advanced examples of RxJS.
I think @Buggy is right.
This is the way I understand the problem and what you want to achieve:
There are different places in your app where you want to fetch users.
You do not want to fire a fetch request all the time; rather, you want to buffer them and send them at a certain interval of time, let's say 1 second.
You want to be able to cancel a certain buffer and avoid that, for that 1-second interval, a request to fetch a batch of users is fired.
At the same time, if somebody, let's call it Code at Position X, has asked for a User and just a few milliseconds later somebody else, i.e. Code at Position Y, cancels the entire batch of requests, then Code at Position X has to receive some sort of answer, let's say a null.
Moreover, you may want to be able to ask to fetch a User and then change your mind within the interval of the buffer time, and avoid this User being fetched (I am far from sure this is really something you want, but it seems somehow to emerge from your question).
If this is all true, then you probably have to have some sort of queuing mechanism, as Buggy suggested; a bare-bones skeleton of one is sketched below.
There may be many implementations of such a mechanism.
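A bare-bones skeleton of that idea (a sketch only, assuming RxJS 6 imports and reusing functionThatSimulateAFetch from the question; the answers below give complete versions):

import { Subject } from "rxjs"
import { bufferTime, filter, map, mergeMap, share } from "rxjs/operators"

const ids$ = new Subject<string>()
const cancelled = new Set<string>()  // ids withdrawn before the buffer fires

const batched$ = ids$.pipe(
  bufferTime(1000),                                  // collect ids for 1 second
  map(ids => ids.filter(id => !cancelled.has(id))),  // drop cancelled ids
  filter(ids => ids.length > 0),                     // skip empty buffers
  mergeMap(ids => functionThatSimulateAFetch(ids)),  // one request per batch
  share(),
)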
What you have is good, but as with everything RxJS, the devil is in the details.
Issues
The mergeMap
mergeMap((userIds) => functionThatSimulateAFetch(userIds)),
This is where you first go wrong. By using a mergeMap here, you are making it impossible to tell apart the "stream of requests" from the "stream returned by a single request":
You are making it near impossible to unsubscribe from an individual request (to cancel it)
You are making it impossible to handle errors
It falls apart if your inner observable emits more than once.
Rather, what you want is to emit individual BatchEvents, via a normal map (producing an observable of observable), and switchMap/mergeMap those after the filtering.
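In sketch form, the shape is (reusing the question's names; the full solution below fleshes this out):

const batches$ = userToFetch$.pipe(
  bufferTime(1000),
  // map, not mergeMap: each batch stays a value that carries its own
  // (not yet subscribed) inner observable, so callers can filter first
  map(userIds => ({
    userIds,
    users$: functionThatSimulateAFetch(userIds).pipe(share()),
  })),
  share(),
)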
Side effects when creating an observable & Emitting before subscribing
userToFetch$.next(userId)
return observable
Don't do this. An observable by itself does not actually do anything. It's a "blueprint" for a sequence of actions to happen when you subscribe to it. By doing this, you'll only create a batch action on observable creation, and you're screwed if you get multiple or delayed subscriptions.
Rather, you want to create an observable from defer that emits to userToFetch$ on every subscription.
Even then, you'll want to subscribe to your observable before emitting to userToFetch$: if you aren't subscribed, your observable is not listening to the subject, and the event will be lost. You can do this in a defer-like observable.
Solution
Short, and not very different from your code, but structure it like this.
const BUFFER_TIME = 1000;

type BatchEvent = { keys: Set<string>, values: Observable<Users> };

/** The incoming keys */
const keySubject = new Subject<string>();

const requests: Observable<BatchEvent> = keySubject.asObservable().pipe(
  bufferTime(BUFFER_TIME),
  map(keys => fetchBatch(keys)),
  share(),
);

/** Returns a single User from an ID. Batches the request */
function get(userId: string): Observable<User> {
  console.log("Creating observable for:", userId);
  // The money observable. See "defer":
  // triggers a new subject event on subscription
  const observable = new Observable<BatchEvent>(observer => {
    requests.subscribe(observer);
    // Emit *after* the subscription
    keySubject.next(userId);
  });
  return observable.pipe(
    first(v => v.keys.has(userId)),
    // There is only 1 item, so any *Map will do here
    switchMap(v => v.values),
    map(v => v[userId]),
  );
}

function fetchBatch(args: string[]): BatchEvent {
  const keys = new Set(args); // Do not batch duplicates
  const values = userService.get(Array.from(keys)).pipe(
    share(),
  );
  return { keys, values };
}
This does exactly what you were asking, including:
Errors are propagated to the recipients of the batch call, but nobody else
If everybody unsubscribes from a batch, the observable is canceled
If everybody unsubscribes from a batch before the request is even fired, it never fires
The observable behaves like HttpClient: subscribing to the observable fires a new (batched) request for data. Callers are free to pipe shareReplay or whatever though. So no surprises there.
Here is a working stackblitz Angular demo: https://stackblitz.com/edit/angular-rxjs-batch-request
In particular, notice the behavior when you "toggle" the display: You’ll notice that re-subscribing to existing observables will fire new batch requests, and that those requests will cancel (or outright not fire) if you re-toggle fast enough.
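For instance, a caller that wants caching can opt in per call site (hypothetical usage; shareReplay comes from rxjs/operators):

// Memoize one row's user so re-subscriptions (e.g. re-renders)
// replay the cached value instead of firing a new batch.
const user$ = get("42").pipe(shareReplay(1));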
Use case
In our project, we use this for Angular Tables, where each row needs to individually fetch additional data to render. This allows us to:
chunk all the requests for a "single page", without needing any special knowledge of pagination
Potentially fetch multiple pages at once if the user paginates fast
re-use existing results even if page size changes
Limitations
I would not add chunking or rate limiting into this. Because the source observable is a dumb bufferTime, you run into issues:
The "chunking" will happen before the deduping. So if you have 100 requests for a single userId, you'll end up firing several requests with only 1 element each.
If you rate limit, you'll not be able to inspect your queue. So you may end up with a very long queue containing multiple identical requests.
This is a pessimistic point of view though. Fixing it would mean going full out with a stateful queue/batch mechanism, which is an order of magnitude more complex.
I'm not sure if this is the best way to solve this problem (at least it needs tests), but I will try to explain my point of view.
We have 2 queues: one for pending and one for future requests.
A Result store helps deliver the response/error to subscribers.
A worker takes tasks from the queue on a schedule and performs the request.
If I unsubscribe from the observable returned by fetchUser before the timer of bufferTime is finished, it doesn't prevent the fetch of the user.
Unsubscribing from fetchUser will clean up the request queue, and the worker will do nothing.
If I unsubscribe from all the observables returned by fetchUser before the fetch of the batch is finished, it doesn't cancel the request.
The worker subscribes until isNothingRemain$ emits.
const functionThatSimulateAFetch = (userIds: string[]) => from(userIds).pipe(
  map((userId) => ({ id: userId, name: "George" })),
  toArray(),
  tap(() => console.log('API_CALL', userIds)),
  delay(200),
)
class Queue {
  queue$ = new BehaviorSubject(new Map());

  private get currentQueue() {
    return new Map(this.queue$.getValue());
  }

  add(...ids) {
    const newMap = ids.reduce((acc, id) => {
      acc.set(id, (acc.get(id) || 0) + 1);
      return acc;
    }, this.currentQueue);
    this.queue$.next(newMap);
  }

  addMap(idmap: Map<any, any>) {
    const newMap = (Array.from(idmap.keys()))
      .reduce((acc, id) => {
        acc.set(id, (acc.get(id) || 0) + idmap.get(id));
        return acc;
      }, this.currentQueue);
    this.queue$.next(newMap);
  }

  remove(...ids) {
    const newMap = ids.reduce((acc, id) => {
      acc.get(id) > 1 ? acc.set(id, acc.get(id) - 1) : acc.delete(id);
      return acc;
    }, this.currentQueue);
    this.queue$.next(newMap);
  }

  removeMap(idmap: Map<any, any>) {
    const newMap = (Array.from(idmap.keys()))
      .reduce((acc, id) => {
        acc.get(id) > idmap.get(id) ? acc.set(id, acc.get(id) - idmap.get(id)) : acc.delete(id);
        return acc;
      }, this.currentQueue);
    this.queue$.next(newMap);
  }

  has(id) {
    return this.queue$.getValue().has(id);
  }

  asObservable() {
    return this.queue$.asObservable();
  }
}
class Result {
  result$ = new BehaviorSubject({ ids: new Map(), isError: null, value: null });

  select(id) {
    return this.result$.pipe(
      filter(({ ids }) => ids.has(id)),
      switchMap(({ isError, value }) => isError ? throwError(value) : of(value.find(x => x.id === id)))
    );
  }

  add({ isError, value, ids }) {
    this.result$.next({ ids, isError, value });
  }

  clear() {
    this.result$.next({ ids: new Map(), isError: null, value: null });
  }
}
const result = new Result();
const queueToSend = new Queue();
const queuePending = new Queue();
const doRequest = new Subject();

const fetchUser = (id: string) => {
  return Observable.create(observer => {
    queueToSend.add(id);
    doRequest.next();
    const subscription = result
      .select(id)
      .pipe(take(1))
      .subscribe(observer);
    // cleanup queue after we got a response or on unsubscribe
    return () => {
      (queueToSend.has(id) ? queueToSend : queuePending).remove(id);
      subscription.unsubscribe();
    };
  });
}
// some kind of worker that takes tasks from the queue and sends requests
doRequest.asObservable().pipe(
  auditTime(1000),
  // clear outdated results
  tap(() => result.clear()),
  withLatestFrom(queueToSend.asObservable()),
  map(([_, queue]) => queue),
  filter(ids => !!ids.size),
  mergeMap(ids => {
    // abort the request if it has no subscribers
    const isNothingRemain$ = combineLatest(queueToSend.asObservable(), queuePending.asObservable()).pipe(
      map(([queueToSendIds, queuePendingIds]) => Array.from(ids.keys()).some(k => queueToSendIds.has(k) || queuePendingIds.has(k))),
      filter(hasSameKey => !hasSameKey)
    );
    // prevent requesting the same ids while the previous request is not complete
    queueToSend.removeMap(ids);
    queuePending.addMap(ids);
    return functionThatSimulateAFetch(Array.from(ids.keys())).pipe(
      map(res => ({ isError: false, value: res, ids })),
      takeUntil(isNothingRemain$),
      catchError(error => of({ isError: true, value: error, ids }))
    );
  }),
).subscribe(res => result.add(res));
fetchUser('1').subscribe(console.log);

const subs = fetchUser('2').subscribe(console.log);
subs.unsubscribe();

fetchUser('3').subscribe(console.log);

setTimeout(() => {
  const subs1 = fetchUser('10').subscribe(console.log);
  subs1.unsubscribe();
  const subs2 = fetchUser('11').subscribe(console.log);
  subs2.unsubscribe();
}, 2000);

setTimeout(() => {
  const subs1 = fetchUser('20').subscribe(console.log);
  subs1.unsubscribe();
  const subs21 = fetchUser('20').subscribe(console.log);
  const subs22 = fetchUser('20').subscribe(console.log);
}, 4000);
// API_CALL
// ["1", "3"]
// {id: "1", name: "George"}
// {id: "3", name: "George"}
// API_CALL
// ["20"]
// {id: "20", name: "George"}
// {id: "20", name: "George"}
stackblitz example
FYI, I tried to create a generic batched task queue using the answers of @Buggy & @Picci:
import { Observable, Subject, BehaviorSubject, from, timer } from "rxjs"
import { catchError, share, mergeMap, map, filter, takeUntil, take, bufferTime, timeout, concatMap } from "rxjs/operators"

export interface Task<TInput> {
  uid: number
  input: TInput
}

interface ErroredTask<TInput> extends Task<TInput> {
  error: any
}

interface SucceededTask<TInput, TOutput> extends Task<TInput> {
  output: TOutput
}

export type FinishedTask<TInput, TOutput> = ErroredTask<TInput> | SucceededTask<TInput, TOutput>

const taskErrored = <TInput, TOutput>(
  taskFinished: FinishedTask<TInput, TOutput>,
): taskFinished is ErroredTask<TInput> => !!(taskFinished as ErroredTask<TInput>).error

type BatchedWorker<TInput, TOutput> = (tasks: Array<Task<TInput>>) => Observable<FinishedTask<TInput, TOutput>>

export const createSimpleBatchedWorker = <TInput, TOutput>(
  work: (inputs: TInput[]) => Observable<TOutput[]>,
  workTimeout: number,
): BatchedWorker<TInput, TOutput> => (
  tasks: Array<Task<TInput>>,
) => work(
  tasks.map((task) => task.input),
).pipe(
  mergeMap((outputs) => from(tasks.map((task, index) => ({
    ...task,
    output: outputs[index],
  })))),
  timeout(workTimeout),
  catchError((error) => from(tasks.map((task) => ({
    ...task,
    error,
  })))),
)
export const createBatchedTaskQueue = <TInput, TOutput>(
  worker: BatchedWorker<TInput, TOutput>,
  concurrencyLimit: number = 1,
  batchTimeout: number = 0,
  maxBatchSize: number = Number.POSITIVE_INFINITY,
) => {
  const taskSubject = new Subject<Task<TInput>>()

  const cancelTaskSubject = new BehaviorSubject<Set<number>>(new Set())
  const cancelTask = (task: Task<TInput>) => {
    const cancelledUids = cancelTaskSubject.getValue()
    const newCancelledUids = new Set(cancelledUids)
    newCancelledUids.add(task.uid)
    cancelTaskSubject.next(newCancelledUids)
  }

  const output$: Observable<FinishedTask<TInput, TOutput>> = taskSubject.pipe(
    bufferTime(batchTimeout, undefined, maxBatchSize),
    map((tasks) => {
      const cancelledUids = cancelTaskSubject.getValue()
      return tasks.filter((task) => !cancelledUids.has(task.uid))
    }),
    filter((tasks) => tasks.length > 0),
    mergeMap(
      (tasks) => worker(tasks).pipe(
        takeUntil(cancelTaskSubject.pipe(
          filter((uids) => {
            for (const task of tasks) {
              if (!uids.has(task.uid)) {
                return false
              }
            }
            return true
          }),
        )),
      ),
      undefined,
      concurrencyLimit,
    ),
    share(),
  )

  let nextUid = 0

  return (input$: Observable<TInput>): Observable<TOutput> => input$.pipe(
    concatMap((input) => new Observable<TOutput>((observer) => {
      const task = {
        uid: nextUid++,
        input,
      }
      const subscription = output$.pipe(
        filter((taskFinished) => taskFinished.uid === task.uid),
        take(1),
        map((taskFinished) => {
          if (taskErrored(taskFinished)) {
            throw taskFinished.error
          }
          return taskFinished.output
        }),
      ).subscribe(observer)
      subscription.add(
        timer(0).subscribe(() => taskSubject.next(task)),
      )
      return () => {
        subscription.unsubscribe()
        cancelTask(task)
      }
    })),
  )
}
With our example:
import { from, of } from "rxjs"
import { map, toArray } from "rxjs/operators"
import { createBatchedTaskQueue, createSimpleBatchedWorker } from "mmr/components/rxjs/batched-task-queue"

const functionThatSimulateAFetch = (userIds: string[]) => from(userIds).pipe(
  map((userId) => ({ id: userId, name: "George" })),
  toArray(),
)

const userFetchQueue = createBatchedTaskQueue(
  createSimpleBatchedWorker(
    functionThatSimulateAFetch,
    10000,
  ),
)

const fetchUser = (userId: string) => {
  return of(userId).pipe( // of(), not from(): from("42") would emit '4' and '2'
    userFetchQueue,
  )
}
I am open to any improvement suggestions
I have a connectable observable that gets connected when the other stream executes:
const source1 = Rx.Observable.of([1,2,3,4,5])
  .do(() => console.log('Do Something!'))
  .map(() => "Always connected.")
  .publish();

source1.subscribe((v) => console.log(v));

const connect = () => {
  let c = false;
  return () => {
    if (!c) {
      console.log("Get connected");
      source1.connect();
      c = true;
    }
  }
}

const source2 = Rx.Observable.fromEvent(document, 'click')
  .do(() => console.log("execute!"))
  .do(connect())
  .switchMapTo(source1);

source2.subscribe((v) => console.log(v));
The output
"execute!"
"Get connected"
"Do Something!"
"Always connected."
On further clicks on the document, source1 will not be subscribed anymore, and my question is: why?
You've encountered exactly this situation: Rx.Subject loses events
If you update the first part of your example you'll see that the Subject receives a complete notification:
const source1 = Rx.Observable.of([1,2,3,4,5])
  .do(() => console.log('Do Something!'), null, () => console.log('source1 complete'))
  .map(() => "Always connected.")
  .publish();
This marks that Subject as closed, and it won't reemit any values any further.
See my linked answer above for more detailed information. Also you might have a look at On The Subject Of Subjects (in RxJS) (paragraph "Subjects Are Not Reusable") by Ben Lesh (lead developer of RxJS 5).
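One way around it (a sketch only, not the only fix): make source1 a factory so each click subscribes to a fresh, cold sequence instead of a completed Subject:

// Each click re-runs the whole sequence; there is no shared,
// already-completed Subject to get stuck on.
const makeSource1 = () => Rx.Observable.of([1, 2, 3, 4, 5])
  .do(() => console.log('Do Something!'))
  .map(() => 'Always connected.');

const source2 = Rx.Observable.fromEvent(document, 'click')
  .do(() => console.log('execute!'))
  .switchMap(makeSource1);

source2.subscribe((v) => console.log(v));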
I use RxJS to handle a WebSocket connection:
var socket = Rx.Observable.webSocket('wss://echo.websocket.org')
socket.resultSelector = (e) => e.data
I want to periodically (every 5s) send a ping message, wait 3s to receive a pong response, and subscribe to another stream if no response has been received.
I tried that without success. I admit I'm a bit lost with all the operators available to handle timeout, debounce, or throttle.
// periodically send a ping message
const ping$ = Rx.Observable.interval(2000)
  .timeInterval()
  .do(() => socket.next('ping'))

const pong$ = socket
  .filter(m => /^ping$/.test(`${m}`))
  .mergeMap(
    ping$.throttle(2000).map(() => Observable.throw('pong timeout'))
  )

pong$.subscribe(
  (msg) => console.log(`end ${msg}`),
  (err) => console.log(`err ${err}`),
  () => console.log(`complete`)
)
But unfortunately, no ping is sent.
I've also tried to achieve that as follows, without success:
const ping$ = Rx.Observable.interval(2000)
  .timeInterval()
  .do(() => socket.next('ping'))

const pong$ = socket
  .filter(m => /^ping$/.test(`${m}`))

const heartbeat$ = ping$
  .debounceTime(5000)
  .mergeMap(() => Rx.Observable.timer(5000).takeUntil(pong$))

heartbeat$.subscribe(
  (msg) => console.log(`end ${msg}`),
  (err) => console.log(`err ${err}`),
  () => console.log(`complete`)
)
Any help appreciated.
You can use the race() operator to always connect only to the Observable that emits first:
function sendMockPing() {
  // random 0 - 5s delay
  return Observable.of('pong').delay(Math.random() * 10000 / 2);
}

Observable.timer(0, 5000)
  .map(i => 'ping')
  .concatMap(val => {
    return Observable.race(
      Observable.of('timeout').delay(3000),
      sendMockPing()
    );
  })
  //.filter(response => response === 'timeout') // remove all successful responses
  .subscribe(val => console.log(val));
See live demo: https://jsbin.com/lavinah/6/edit?js,console
This randomly simulates a response taking 0 - 5s. When the response takes more than 3s, then Observable.of('timeout').delay(3000) completes first and the timeout string is passed to its observer by concatMap().
I finally found a solution based on mergeMap and takeUntil.
My initial mistake was to use ping$ as an input for my heartbeat$ where I should have used pong$.
// define the pong$
const pong$ = socket
  .filter(m => /^ping$/.test(`${m}`))
  .share() // use share() because pong$ is used twice

const heartbeat$ = pong$
  .startWith('pong') // to bootstrap the stream
  .debounceTime(5000) // wait for 5s after the last received pong$ value
  .do(() => socket.next('ping')) // send a ping
  .mergeMap(() => Observable.timer(3000).takeUntil(pong$))
  // we merge the current stream with another one that will
  // not produce a value while a pong is received before the
  // end of the timer

heartbeat$.subscribe(
  (msg) => console.log(`handle pong timeout`),
)
The heartbeat$ function below returns an observable which you can continuously listen to. It emits either:
1) the latency value of each round trip (time of socket.receive - socket.send) every 5000ms, or
2) -1 if the round trip goes beyond the threshold (e.g. 3000ms).
You will keep receiving latency values (or -1) even after a -1 has been emitted, which gives you the flexibility to decide what to do ^.^
heartbeat$(pingInterval: number, pongTimeout: number) {
  let start = 0;

  const timer$ = timer(0, pingInterval).pipe(share());
  const unsub = timer$.subscribe(() => {
    start = Date.now();
    this.ws.next('ping');
  });

  const ping$ = this.ws$.pipe(
    switchMap(ws =>
      ws.pipe(
        filter(m => /^ping$/.test(`${m}`)),
        map(() => Date.now() - start),
      ),
    ),
    share(),
  );

  const dead$ = timer$.pipe(
    switchMap(() =>
      of(-1).pipe(
        delay(pongTimeout),
        takeUntil(ping$),
      ),
    ),
  );

  return merge(ping$, dead$).pipe(finalize(() => unsub.unsubscribe()));
}
heartbeat$(5000, 3000).subscribe(
  (latency) => console.log(latency) // 82 83 82 -1 101 82 -1 -1 333 ...etc
)
This little test script shows my problem. It will send messages, close all sockets, and then just wait, never exiting. Setting ZMQ_LINGER to 0 is supposed to make the socket discard all queued messages immediately, so why isn't this allowing my Node.js process to exit?
const zmq = require('zmq')
const bindUrl = 'tcp://127.0.0.1:4000'

let timer

let publisher = zmq.socket('pub')
publisher.monitor(500, 0)
publisher.setsockopt(zmq.ZMQ_LINGER, 0)
publisher.bind(bindUrl)

let subscriber = zmq.socket('sub')
subscriber.monitor(500, 0)
subscriber.setsockopt(zmq.ZMQ_LINGER, 0)
subscriber.connect(bindUrl)

subscriber.on('connect_error', () => {
  console.log('connect error')
})

subscriber.on('connect', () => {
  subscriber.subscribe('some topic')
})

publisher.on('bind', function () {
  console.log('bound')
  timer = setInterval(() => publisher.send(['some topic', 'blah']), 1000)
})

publisher.on('bind_error', function () {
  console.log('bind error')
})

subscriber.on('disconnect', function () {
  console.log('subscriber disconnected')
  subscriber.close()
})

subscriber.on('close', function () {
  console.log('subscriber closed')
  subscriber.removeAllListeners()
  subscriber = null
})

publisher.on('unbind', function () {
  console.log('publisher unbound')
  publisher.close()
})

publisher.on('close', function () {
  console.log('publisher closed')
  publisher.removeAllListeners()
  publisher = null
  subscriber.disconnect(bindUrl)
})

subscriber.on('message', function (topic, message) {
  console.log(topic.toString(), message.toString())
  clearInterval(timer)
  subscriber.unsubscribe('some topic')
  publisher.unbind(bindUrl)
})
Output is the following, and the process never exits.
erin#titania:~/$ node test-disconnect.js
bound
some topic blah
publisher unbound
publisher closed
subscriber disconnected
subscriber closed
The fact that I am explicitly monitoring the sockets is what caused this behavior. I have to explicitly call socket.unmonitor when I'm ready for the process to exit.
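Concretely, a sketch against the script above (unmonitor is the counterpart of the monitor calls at the top):

// Stop the monitor timers before closing, so they no longer keep
// the event loop alive.
subscriber.on('disconnect', function () {
  console.log('subscriber disconnected')
  subscriber.unmonitor()
  subscriber.close()
})

publisher.on('unbind', function () {
  console.log('publisher unbound')
  publisher.unmonitor()
  publisher.close()
})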