Angular 8 - handling SSE reconnect on error - javascript

I'm working on an Angular 8 project (with Electron 6 and Ionic 4), and right now we are in an evaluation phase where we are deciding whether to replace polling with SSE (Server-Sent Events) or WebSockets. My part of the job is to research SSE.
I created a small Express application which generates random numbers, and it all works fine. The only thing that bugs me is the correct way to reconnect on a server error.
My implementation looks like this:
private createSseSource(): Observable<MessageEvent> {
  return Observable.create(observer => {
    this.eventSource = new EventSource(SSE_URL);

    this.eventSource.onmessage = (event) => {
      this.zone.run(() => observer.next(event));
    };

    this.eventSource.onopen = (event) => {
      console.log('connection open');
    };

    this.eventSource.onerror = (error) => {
      console.log('looks like the best thing to do is to do nothing');
      // this.zone.run(() => observer.error(error));
      // this.closeSseConnection();
      // this.reconnectOnError();
    };
  });
}
I tried to implement the reconnectOnError() function following this answer, but I just wasn't able to make it work. Then I ditched the reconnectOnError() function, and that seems like the better thing to do: don't try to close and reconnect, and don't propagate the error to the observable. Just sit and wait, and when the server is running again the EventSource will reconnect automatically.
The question is: is this really the best thing to do? An important thing to mention is that the FE application communicates with its own server, which can't be accessed by another instance of the app (built-in device).
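For context, the Express endpoint I'm testing against looks roughly like this (a simplified sketch, not my exact code; the /sse path and the port are placeholders). It also shows the retry: field, which lets the server tune how long the EventSource waits before its automatic reconnect:

import express from 'express';

const app = express();

app.get('/sse', (req, res) => {
  // SSE is just a text/event-stream response that is kept open
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  });
  res.flushHeaders();

  // Optional: ask the browser to wait 5 s before each automatic reconnect
  res.write('retry: 5000\n\n');

  // Push a random number every second
  const timer = setInterval(() => {
    res.write(`data: ${Math.random()}\n\n`);
  }, 1000);

  req.on('close', () => clearInterval(timer));
});

app.listen(3000);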

I see that my question is getting some attention, so I decided to post my solution. To answer my own question, "Is it really best to omit the reconnect function?": I don't know :). But this solution works for me, and it has proven in production that it offers a way to actually control the SSE reconnect to some extent.
Here's what I did:
Rewrote the createSseSource function so the return type is void
Instead of returning an observable, data from SSE are fed to subjects/NgRx actions
Added a public openSseChannel and a private reconnectOnError function for better control
Added a private processSseEvent function to handle custom message types
Since I'm using NgRx on this project, every SSE message dispatches a corresponding action, but this can be replaced by a ReplaySubject and exposed as an observable.
// Public function, initializes connection, returns true if successful
openSseChannel(): boolean {
  this.createSseEventSource();
  return !!this.eventSource;
}

// Creates SSE event source, handles SSE events
protected createSseEventSource(): void {
  // Close event source if current instance of SSE service has some
  if (this.eventSource) {
    this.closeSseConnection();
    this.eventSource = null;
  }

  // Open new channel, create new EventSource
  this.eventSource = new EventSource(this.sseChannelUrl);

  // Process default event
  this.eventSource.onmessage = (event: MessageEvent) => {
    this.zone.run(() => this.processSseEvent(event));
  };

  // Add custom events
  Object.keys(SSE_EVENTS).forEach(key => {
    this.eventSource.addEventListener(SSE_EVENTS[key], event => {
      this.zone.run(() => this.processSseEvent(event));
    });
  });

  // Process connection opened
  this.eventSource.onopen = () => {
    this.reconnectFrequencySec = 1;
  };

  // Process error
  this.eventSource.onerror = (error: any) => {
    this.reconnectOnError();
  };
}
// Processes custom event types
private processSseEvent(sseEvent: MessageEvent): void {
  const parsed = sseEvent.data ? JSON.parse(sseEvent.data) : {};
  switch (sseEvent.type) {
    case SSE_EVENTS.STATUS: {
      this.store.dispatch(StatusActions.setStatus({ status: parsed }));
      // or
      // this.someReplaySubject.next(parsed);
      break;
    }
    // Add others if necessary
    default: {
      console.error('Unknown event:', sseEvent.type);
      break;
    }
  }
}
// Handles reconnect attempts when the connection fails for some reason.
// const SSE_RECONNECT_UPPER_LIMIT = 64;
private reconnectOnError(): void {
  const self = this;
  this.closeSseConnection();
  clearTimeout(this.reconnectTimeout);
  this.reconnectTimeout = setTimeout(() => {
    self.openSseChannel();
    self.reconnectFrequencySec *= 2;
    if (self.reconnectFrequencySec >= SSE_RECONNECT_UPPER_LIMIT) {
      self.reconnectFrequencySec = SSE_RECONNECT_UPPER_LIMIT;
    }
  }, this.reconnectFrequencySec * 1000);
}
Since the SSE events are fed to subjects/actions, it doesn't matter if the connection is lost, because at least the last event is preserved within the subject or store. Reconnect attempts can then happen silently, and when new data are sent they are processed seamlessly.
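For anyone not using NgRx: the same processSseEvent hook can feed a ReplaySubject and be exposed as an observable instead of dispatching actions. A minimal sketch (the class and property names here are illustrative, not from my actual service):

import { ReplaySubject } from 'rxjs';

export class SseStatusFeed {
  // Keeps the last received status so late subscribers still get a value
  private statusSubject = new ReplaySubject<unknown>(1);
  readonly status$ = this.statusSubject.asObservable();

  // Call this from processSseEvent instead of (or alongside) store.dispatch(...)
  pushStatus(status: unknown): void {
    this.statusSubject.next(status);
  }
}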

Related

Calling a class function from within another class function - JS newb

This is probably more of a JavaScript question, but I don't really know JavaScript in general, so I'm hoping it's a simple fix.
Basically, I am using two libraries that I want to work together. The beacon class listens for a Bluetooth LE signal on a Raspberry Pi, at which point the onAdvertisement function is called from the node-beacon-scanner library.
When this happens I want to publish a message to a message broker using MQTT (the other library), which should be a case of just calling client.publish(topic, message).
The publish function works when I use it outside of the scanner functions; however, if I call it inside one of the scanner functions such as onAdvertisement, it doesn't work, but it also doesn't throw any errors.
I think it is an issue of me not understanding encapsulation in Node.js regarding modules.
Basically, I want to know: is there something I'm missing when calling the client.publish function from within the scanner.onadvertisement function? In the code below the publish call has been put in a function called pubBe.
Thank you in advance!
const noble = require('@abandonware/noble');
const BeaconScanner = require("node-beacon-scanner");
const mqtt = require('mqtt');

var client = mqtt.connect('http://localhost:1883');
var scanner = new BeaconScanner();
var ledOn = false

client.on('connect', function () {
  client.subscribe('led', function (err) {
    if (!err) {
    }
  })
})

client.on('message', function (topic, message) {
  // message is Buffer
  console.log(message.toString())
  client.end()
})

scanner.onadvertisement = (advertisement) => {
  var beacon = advertisement["rssi"]
}

scanner.startScan().then(() => {
  console.log("Scanning has started...")
  pubBe(-10)
}).catch(error => {
  console.error(error);
})

function pubBe(rssi) {
  if (rssi > -40 && !ledOn) {
    console.log('Turn LED on')
    ledOn = true
    payload = {"level": 100}
    client.publish('led', JSON.stringify(payload))
  } else if (rssi < -40 && ledOn) {
    console.log('Turn LED off')
    ledOn = false
    payload = {"level": 0}
    client.publish('led', JSON.stringify(payload))
  }
}

Cannot dispose/destroy WebRTC RTCPeerConnection

I have an issue where RTCPeerConnection instances are not being garbage collected after being created via user interaction. The instances remain in chrome://webrtc-internals/ indefinitely, and eventually users see an error about not being able to create new instances. I've reduced this down to a simple test page, and this is what I have:
<script>
  let pc

  function createPeerConnection() {
    console.log("CREATE CONNECTION")
    pc = new RTCPeerConnection()
  }

  function destroyPeerConnection() {
    console.log("DISPOSE CONNECTION")
    if (pc) {
      pc.close()
      pc = null
    }
  }

  setTimeout(() => {
    createPeerConnection()
    setTimeout(destroyPeerConnection, 2000)
  }, 1000)
</script>
<div>
  <button onClick="javascript:createPeerConnection()">CREATE</button>
  <button onClick="javascript:destroyPeerConnection()">DESTROY</button>
</div>
Interesting points to note:
On page load the timeout fires and creates, then destroys an instance, and this seems to work (instance disappears from webrtc internals after 5-10 seconds)
When clicking CREATE and DESTROY buttons, instances are not removed (they appear in webrtc internals indefinitely)
What am I doing wrong? It feels like it must be something very simple. Any pointers much appreciated!
EDIT
In webrtc-internals, for the connection that is not being disposed, I can see a media stream is still attached...
However, I am running the following code when closing the call:
console.log("DISPOSE CONNECTION")
pc.onsignalingstatechange = null
pc.onconnectionstatechange = null
pc.onnegotiationneeded = null
pc.onicecandidate = null
pc.ontrack = null
pc.getSenders().forEach(sender => {
console.log("STOP SENDER", sender)
pc.removeTrack(sender)
sender.setStreams()
sender.track?.stop()
})
pc.getReceivers().forEach(receiver => {
receiver.track?.stop()
})
pc.getTransceivers().forEach(transceiver => {
pc.removeTrack(transceiver.sender)
transceiver.sender.setStreams()
transceiver.sender.track?.stop()
transceiver.stop()
})
pc.close()

How do I stream live audio from the browser to Google Cloud Speech via socket.io?

I have a situation with a React-based app where I have an input for which I wanted to allow voice input as well. I'm okay making this compatible with Chrome and Firefox only, so I was thinking of using getUserMedia. I know I'll be using Google Cloud's Speech to Text API. However, I have a few caveats:
I want this to stream my audio data live, not just when I'm done recording. This means that a lot of solutions I've found won't work very well, because it's not sufficient to save the file and then send it out to Google Cloud Speech.
I don't trust my front end with my Google Cloud API information. Instead, I already have a service running on the back end which has my credentials, and I want to stream the audio (live) to that back end, then from that back end stream to Google Cloud, and then emit transcript updates back to the front end as they come in.
I already connect to that back end service using socket.io, and I want to manage this entirely via sockets, without having to use Binary.js or anything similar.
I can't find a good tutorial anywhere on how to do this. What do I do?
First, credit where credit is due: a huge amount of my solution here was created by referencing vin-ni's Google-Cloud-Speech-Node-Socket-Playground project. I had to adapt this some for my React app, however, so I'm sharing a few of the changes I made.
My solution here was composed of four parts, two on the front end and two on the back end.
My front end solution was of two parts:
A utility file to access my microphone, stream audio to the back
end, retrieve data from the back end, run a callback function each
time that data was received from the back end, and then clean up
after itself either when done streaming or when the back end threw
an error.
A microphone component which wrapped my React
functionality.
My back end solution was of two parts:
A utility file to handle the actual speech recognize stream
My main.js file
(These don't need to be separated by any means; our main.js file is just already a behemoth without it.)
Most of my code will just be excerpted, but my utilities will be shown in full because I had a lot of problems with all of the stages involved. My front end utility file looked like this:
// Stream Audio
let bufferSize = 2048,
  AudioContext,
  context,
  processor,
  input,
  globalStream;

// audioStream constraints
const constraints = {
  audio: true,
  video: false
};

let AudioStreamer = {
  /**
   * @param {function} onData Callback to run on data each time it's received
   * @param {function} onError Callback to run on an error if one is emitted.
   */
  initRecording: function(onData, onError) {
    socket.emit('startGoogleCloudStream', {
      config: {
        encoding: 'LINEAR16',
        sampleRateHertz: 16000,
        languageCode: 'en-US',
        profanityFilter: false,
        enableWordTimeOffsets: true
      },
      interimResults: true // If you want interim results, set this to true
    }); // init socket Google Speech Connection

    AudioContext = window.AudioContext || window.webkitAudioContext;
    context = new AudioContext();
    processor = context.createScriptProcessor(bufferSize, 1, 1);
    processor.connect(context.destination);
    context.resume();

    var handleSuccess = function (stream) {
      globalStream = stream;
      input = context.createMediaStreamSource(stream);
      input.connect(processor);

      processor.onaudioprocess = function (e) {
        microphoneProcess(e);
      };
    };

    navigator.mediaDevices.getUserMedia(constraints)
      .then(handleSuccess);

    // Bind the data handler callback
    if (onData) {
      socket.on('speechData', (data) => {
        onData(data);
      });
    }

    socket.on('googleCloudStreamError', (error) => {
      if (onError) {
        onError('error');
      }
      // We don't want to emit another end stream event
      closeAll();
    });
  },

  stopRecording: function() {
    socket.emit('endGoogleCloudStream', '');
    closeAll();
  }
}

export default AudioStreamer;

// Helper functions
/**
 * Processes microphone data into a data stream
 *
 * @param {object} e Input from the microphone
 */
function microphoneProcess(e) {
  var left = e.inputBuffer.getChannelData(0);
  var left16 = convertFloat32ToInt16(left);
  socket.emit('binaryAudioData', left16);
}

/**
 * Converts a buffer from float32 to int16. Necessary for streaming.
 * sampleRateHertz of 16000.
 *
 * @param {object} buffer Buffer being converted
 */
function convertFloat32ToInt16(buffer) {
  let l = buffer.length;
  let buf = new Int16Array(l / 3);
  while (l--) {
    if (l % 3 === 0) {
      buf[l / 3] = buffer[l] * 0xFFFF;
    }
  }
  return buf.buffer
}

/**
 * Stops recording and closes everything down. Runs on error or on stop.
 */
function closeAll() {
  // Clear the listeners (prevents issue if opening and closing repeatedly)
  socket.off('speechData');
  socket.off('googleCloudStreamError');

  let tracks = globalStream ? globalStream.getTracks() : null;
  let track = tracks ? tracks[0] : null;
  if (track) {
    track.stop();
  }

  if (processor) {
    if (input) {
      try {
        input.disconnect(processor);
      } catch (error) {
        console.warn('Attempt to disconnect input failed.')
      }
    }
    processor.disconnect(context.destination);
  }

  if (context) {
    context.close().then(function () {
      input = null;
      processor = null;
      context = null;
      AudioContext = null;
    });
  }
}
The main salient point of this code (aside from the getUserMedia configuration, which was in and of itself a bit dicey) is that the onaudioprocess callback for the processor emits binaryAudioData events to the socket after converting the audio to Int16. My main changes here from my linked reference above were to replace all of the functionality that actually updated the DOM with callback functions (used by my React component) and to add some error handling that wasn't included in the source.
I was then able to access this in my React Component by just using:
onStart() {
  this.setState({
    recording: true
  });
  if (this.props.onStart) {
    this.props.onStart();
  }
  speechToTextUtils.initRecording((data) => {
    if (this.props.onUpdate) {
      this.props.onUpdate(data);
    }
  }, (error) => {
    console.error('Error when recording', error);
    this.setState({recording: false});
    // No further action needed, as this already closes itself on error
  });
}

onStop() {
  this.setState({recording: false});
  speechToTextUtils.stopRecording();
  if (this.props.onStop) {
    this.props.onStop();
  }
}
(I passed in my actual data handler as a prop to this component).
Then on the back end, my service handled three main events in main.js:
// Start the stream
socket.on('startGoogleCloudStream', function(request) {
  speechToTextUtils.startRecognitionStream(socket, GCSServiceAccount, request);
});

// Receive audio data
socket.on('binaryAudioData', function(data) {
  speechToTextUtils.receiveData(data);
});

// End the audio stream
socket.on('endGoogleCloudStream', function() {
  speechToTextUtils.stopRecognitionStream();
});
My speechToTextUtils then looked like:
// Google Cloud
const speech = require('@google-cloud/speech');

let speechClient = null;
let recognizeStream = null;

module.exports = {
  /**
   * @param {object} client A socket client on which to emit events
   * @param {object} GCSServiceAccount The credentials for our google cloud API access
   * @param {object} request A request object of the form expected by streamingRecognize. Variable keys and setup.
   */
  startRecognitionStream: function (client, GCSServiceAccount, request) {
    if (!speechClient) {
      speechClient = new speech.SpeechClient({
        projectId: 'Insert your project ID here',
        credentials: GCSServiceAccount
      }); // Creates a client
    }
    recognizeStream = speechClient.streamingRecognize(request)
      .on('error', (err) => {
        console.error('Error when processing audio: ' + (err && err.code ? 'Code: ' + err.code + ' ' : '') + (err && err.details ? err.details : ''));
        client.emit('googleCloudStreamError', err);
        this.stopRecognitionStream();
      })
      .on('data', (data) => {
        client.emit('speechData', data);
        // if end of utterance, let's restart stream
        // this is a small hack. After 65 seconds of silence, the stream will still throw an error for speech length limit
        if (data.results[0] && data.results[0].isFinal) {
          this.stopRecognitionStream();
          this.startRecognitionStream(client, GCSServiceAccount, request);
          // console.log('restarted stream serverside');
        }
      });
  },
  /**
   * Closes the recognize stream and wipes it
   */
  stopRecognitionStream: function () {
    if (recognizeStream) {
      recognizeStream.end();
    }
    recognizeStream = null;
  },
  /**
   * Receives streaming data and writes it to the recognizeStream for transcription
   *
   * @param {Buffer} data A section of audio data
   */
  receiveData: function (data) {
    if (recognizeStream) {
      recognizeStream.write(data);
    }
  }
};
(Again, you don't strictly need this util file, and you could certainly put the speechClient as a const on top of the file depending on how you get your credentials; this is just how I implemented it.)
And that, finally, should be enough to get you started on this. I encourage you to do your best to understand this code before you reuse or modify it, as it may not work 'out of the box' for you, but unlike all other sources I have found, this should get you at least started on all involved stages of the project. It is my hope that this answer will prevent others from suffering like I have suffered.

What should I do with the redundant state of a ServiceWorker?

I've got a companion script for a service worker that I'm trialling right now.
The script works like so:
((n, d) => {
  if (!(n.serviceWorker && (typeof Cache !== 'undefined' && Cache.prototype.addAll))) return;

  n.serviceWorker.register('/serviceworker.js', { scope: './book/' })
    .then(function(reg) {
      if (!n.serviceWorker.controller) return;

      reg.onupdatefound = () => {
        let installingWorker = reg.installing;
        installingWorker.onstatechange = () => {
          switch (installingWorker.state) {
            case 'installed':
              if (navigator.serviceWorker.controller) {
                updateReady(reg.waiting);
              } else {
                // This is the initial serviceworker…
                console.log('May be skipwaiting here?');
              }
              break;
            case 'waiting':
              updateReady(reg.waiting);
              break;
            case 'redundant':
              // Something went wrong?
              console.log('[Companion] new SW could not install…')
              break;
          }
        };
      };
    }).catch((err) => {
      //console.log('[Companion] Something went wrong…', err);
    });

  function updateReady(worker) {
    d.getElementById('swNotifier').classList.remove('hidden');

    λ('refreshServiceWorkerButton').on('click', function(event) {
      event.preventDefault();
      worker.postMessage({ 'refreshServiceWorker': true });
    });

    λ('cancelRefresh').on('click', function(event) {
      event.preventDefault();
      d.getElementById('swNotifier').classList.add('hidden');
    });
  }

  function λ(selector) {
    let self = {};
    self.selector = selector;
    self.element = d.getElementById(self.selector);
    self.on = function(type, callback) {
      self.element['on' + type] = callback;
    };
    return self;
  }

  let refreshing;
  n.serviceWorker.addEventListener('controllerchange', function() {
    if (refreshing) return;
    window.location.reload();
    refreshing = true;
  });
})(navigator, document);
I'm a bit overwhelmed right now by the enormity of the service worker API, and I'm unable to "see" what one would do when reg.installing ends up in the redundant state.
Apologies if this seems like a dumb question, but I'm new to service workers.
It's kinda difficult to work out what your intent is here so I'll try and answer the question generally.
A service worker will become redundant if it fails to install or if it's superseded by a newer service worker.
What you do when this happens is up to you. What do you want to do in these cases?
Based on the definition here, https://www.w3.org/TR/service-workers/#service-worker-state-attribute, I am guessing you would just log it in case it comes up during debugging, and otherwise do nothing.
You should remove any UI prompts you created that ask the user to do something in order to activate the latest service worker, and be patient a little longer.
You have 3 service workers, as you can see on the registration:
active: the one that is running
waiting: the one that was downloaded, and is ready to become active
installing: the one that we just found, being downloaded, after which it becomes waiting
When a service worker reaches #2, you may display a prompt to the user about the new version of the app being just a click away. Let's say they don't act on it.
Then you publish a new version. Your app detects the new version, and starts to download it. At this point, you have 3 service workers. The one at #2 changes to redundant. The one at #3 is not ready yet. You should remove that prompt.
Once #3 is downloaded, it takes the place of #2, and you can show that prompt again.
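In the companion script from the question, that could look roughly like this (a sketch only; it reuses the swNotifier element and the 'hidden' class from the question's updateReady code):

navigator.serviceWorker.register('/serviceworker.js', { scope: './book/' }).then(reg => {
  reg.onupdatefound = () => {
    const installingWorker = reg.installing;
    if (!installingWorker) return;
    installingWorker.onstatechange = () => {
      if (installingWorker.state === 'installed' && navigator.serviceWorker.controller) {
        // A new version is waiting: show the update prompt
        document.getElementById('swNotifier')?.classList.remove('hidden');
      }
      if (installingWorker.state === 'redundant') {
        // The candidate worker failed or was superseded: hide the prompt
        // and simply wait for the next updatefound event
        document.getElementById('swNotifier')?.classList.add('hidden');
      }
    };
  };
});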
Write a catch function to see the error. It could be an SSL issue.
/* In main.js */
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('./sw.js')
    .then(function(registration) {
      console.log("Service Worker Registered", registration);
    })
    .catch(function(err) {
      console.log("Service Worker Failed to Register", err);
    })
}

What is the correct way to manipulate RxJS streams and publish an Observable of the results?

I have a websocket connection that is generating internal message events with a ReplaySubject. I process these events and add a delay to certain messages. Internally I use publish().refCount() twice: once on the internal ReplaySubject and again on the published output stream.
Should the internal subject have both 'publish' and 'refCount' called on it? I use 'publish' because I have multiple subscribers but I'm not entirely sure when to use 'refCount'.
Is it okay to just dispose of the internal subject? Will that clean up everything else?
Whoever subscribes to 'eventStream' should get the latest revision but the connection shouldn't wait for any subscribers
Example code:
function Connection(...) {
  var messageSubject = new Rx.ReplaySubject(1);
  var messageStream = messageSubject.publish().refCount();

  // please ignore that we're not using rxdom's websocket.
  var ws = new WebSocket(...);
  ws.onmessage = function(messageEvent) {
    var message = JSON.parse(messageEvent.data);
    messageSubject.onNext(message);
  }
  ws.onclose = function(closeEvent) {
    messageSubject.dispose(); // is this all I need to dispose?
  }

  var immediateRevisions = messageStream
    .filter((e) => e[0] === "immediate")
    .map((e) => ["revision", e[1]]);

  var delayedRevisions = messageStream
    .filter((e) => e[0] === "delayed")
    .map((e) => ["revision", e[1]]).delay(1000);

  var eventStream = Rx.Observable.merge(immediateRevisions, delayedRevisions).publish().refCount();

  Object.defineProperties(this, {
    "eventStream": { get: function() { return eventStream; }},
  });
}

// using the eventStream
var cxn = new Connection(...)
cxn.eventStream.subscribe((e) => {
  if (e[0] === "revision") {
    // ...
  }
});
publish plus refCount is basically what shareReplay does in RxJS 4. Honestly though, you should just let your observable be "warm" and then use a ReplaySubject as a subscriber if you really want to guarantee that the last message gets pushed to new subscribers even if the subscription count falls below one, e.g.:
const wsStream = Observable.create(observer => {
  ws.onmessage = message => observer.next(message);
  ws.onclose = () => observer.complete();
});

const latestWsMessages = new ReplaySubject(1);
wsStream.subscribe(latestWsMessages);
Make sure you review how Observables work: normally each subscriber triggers its own execution of the subscription function (a cold observable), but in this case you probably want a hot observable so that multiple subscribers share a single subscription. See Andre's video here and the RxJS docs on creating observables for some more info.
Also, as useful as classes can be, it looks like in this case you just want a function of the form makeWebsocketObservable(WebsocketConfig): Observable<WebsocketEvent>.
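Something like this (a rough sketch in RxJS 5+ style to match the snippet above; the URL and the JSON parsing are placeholders):

import { Observable, ReplaySubject } from 'rxjs';

// Illustrative only: wraps a WebSocket as an Observable of parsed messages
function makeWebsocketObservable(url: string): Observable<unknown> {
  return new Observable<unknown>(observer => {
    const ws = new WebSocket(url);
    ws.onmessage = event => observer.next(JSON.parse(event.data));
    ws.onerror = () => observer.error(new Error('websocket error'));
    ws.onclose = () => observer.complete();
    // Teardown runs when the last subscriber unsubscribes
    return () => ws.close();
  });
}

// Usage: keep the latest message warm for late subscribers
const latestWsMessages = new ReplaySubject<unknown>(1);
makeWebsocketObservable('wss://example.invalid/feed').subscribe(latestWsMessages);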
