I'm attempting to set up a Node.js server to stream some events via SSE, and I cannot get it to work unless I start the stream with the data field.
When I try to include other fields, for example id or event, they don't show in the Chrome inspector.
If I put these fields before data, no events occur other than the connection opening.
Here is the route that I am playing around with.
router.get('/stream', function * (){
let stream = new PassThrough();
let send = (message, id) => stream.write(`id: ${JSON.stringify(id)}\n data: ${JSON.stringify(message)}\n\n`);
let finish = () => dispatcher.removeListener('message', send);
this.socket.setTimeout(Number.MAX_VALUE);
this.type = 'text/event-stream;charset=utf-8';
this.set('Cache-Control', 'no-cache');
this.set('Connection', 'keep-alive');
this.body = stream;
stream.write(': open stream\n\n');
dispatcher.on('message', send);
this.req.on('close', finish);
this.req.on('finish', finish);
this.req.on('error', finish);
setInterval(function(){
dispatcher.emit('message', {date: Date.now()}, '12345')
}, 5000)
});
I am not too certain whether I am writing to the stream correctly.
Thanks
Silly me, it was indeed a malformed event stream. There isn't supposed to be a space between the fields. It should instead be:
id: ${JSON.stringify(id)}\ndata: ${JSON.stringify(message)}\n\n
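For reference, the corrected write can be factored into a tiny helper (a sketch only; the function name is mine, not from the route above):

```javascript
// Sketch of a well-formed SSE frame builder. Note that each field must start
// at the beginning of a line: there is no space after the preceding "\n".
function sseFrame(message, id) {
  return `id: ${JSON.stringify(id)}\ndata: ${JSON.stringify(message)}\n\n`;
}
```

A frame built this way, e.g. `sseFrame({ date: Date.now() }, '12345')`, shows up in Chrome's inspector with both the id and data fields populated.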
Related
I have created a real-time voice chat application for a game I am making. I got it working completely fine using the audioContext.createScriptProcessor() method.
Here's the code; I left out the parts that weren't relevant:
//establish websocket connection
const audioData = []
//websocket connection.onMessage (data) =>
audioData.push(decodeBase64(data)) //push audio data coming from another player into array
//on get user media (stream) =>
const audioCtx = new AudioContext({latencyHint: "interactive", sampleRate: 22050,})
const inputNode = audioCtx.createMediaStreamSource(stream)
var processor = audioCtx.createScriptProcessor(2048, 1, 1);
var outputNode = audioCtx.destination
inputNode.connect(processor)
processor.connect(outputNode)
processor.onaudioprocess = function (e) {
var input = e.inputBuffer.getChannelData(0);
webSocketSend(input) //send microphone input to the other sockets via a function set up in a different file; all it does is base64 encode then send
//if there is data from the server, play it; else play silence
var output
if(audioData.length > 0){
output = audioData[0]
audioData.splice(0,1)
}else output = new Array(2048).fill(0)
e.outputBuffer.getChannelData(0).set(output) //copy the dequeued block to the speakers
};
The only issue is that the createScriptProcessor() method is deprecated. As recommended, I attempted to do this using an AudioWorkletNode. However, I quickly ran into a problem: I can't access the user's microphone input, or set the output, from the main file where the WebSocket connection is.
Here is my code for main.js:
document.getElementById('btn').onclick = () => {createVoiceChatSession()}
//establish websocket connection
const audioData = []
//webSocket connection.onMessage (data) =>
audioData.push(data) //how do I get this data to the worklet Node???
var voiceChatContext
function createVoiceChatSession(){
voiceChatContext = new AudioContext()
navigator.mediaDevices.getUserMedia({audio: true}).then( async stream => {
await voiceChatContext.audioWorklet.addModule('module.js')
const microphone = voiceChatContext.createMediaStreamSource(stream)
const processor = new AudioWorkletNode(voiceChatContext, 'processor')
microphone.connect(processor).connect(voiceChatContext.destination)
}).catch(err => console.log(err))
}
Here is my code for module.js:
class processor extends AudioWorkletProcessor {
constructor() {
super()
}
//copies the input to the output
process(inputList, outputList) { // how do I get the input list data (the data from my microphone) to the main file so I can send it via websocket ???
for(var i = 0; i < inputList[0][0].length; i++){
outputList[0][0][i] = inputList[0][0][i]
outputList[0][1][i] = inputList[0][1][i]
}
return true;
}
}
registerProcessor("processor", processor);
So I can record and process the input, but I can't send the input via WebSocket, or pass data coming from the server to the worklet node, because I can't access the inputList or outputList from the main file where the WebSocket connection is. Does anyone know a way to work around this? Or is there a better solution that doesn't use AudioWorkletNodes?
Thank you to all who can help!
I figured it out: all I needed to do was use the port.onmessage method to exchange data between the worklet and the main file.
processor.port.onmessage = (e) => { /* do something with e.data */ }
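In case it helps others, the pattern can be sketched as two small pieces (the helper names are mine; `port` stands for the MessagePort that both the AudioWorkletProcessor and the AudioWorkletNode expose):

```javascript
// Worklet side (called from inside process()): post each input block to the
// main thread over the processor's port.
function forwardInputToMain(port, inputList) {
  port.postMessage(inputList[0][0]); // channel 0 of the first input
}

// Main-thread side: forward worklet blocks to the websocket, and return a
// function that pushes server audio back down to the worklet.
function wirePort(processorNode, sendToSocket) {
  processorNode.port.onmessage = (e) => sendToSocket(e.data);
  return (serverAudio) => processorNode.port.postMessage(serverAudio);
}
```

With this wiring, the main file never touches inputList or outputList directly; everything crosses the boundary as port messages.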
I'm trying to compose a stream with a send event and an undo event. After sending the message, there is a 3 s delay during which you can undo the message and return it to the text field. If you have started composing a new message, the sent message should be prepended.
So far I've managed to create the delayed send and undo functionality. The problem occurs when I send a message, undo it, and then send it again without touching the input: I need to change the value of the input to be able to re-send, so I cannot resend the restored message as-is.
I tried a few workarounds, such as dispatching an input event on the textarea, or calling next on the message observable, both in the restore function. Neither worked.
textarea.addEventListener('input', event => message$.next(event.target.value))
send.addEventListener('click', () => textarea.value = '')
const sendEvent$ = fromEvent(send, 'click')
const undoEvent$ = fromEvent(undo, 'click')
const message$ = new Subject()
let cache = []
sendEvent$
.pipe(
withLatestFrom(message$, (_, m) => m),
tap(m => cache.push(m)),
delay(3000),
takeUntil(undoEvent$.pipe(
tap(restore)
)),
repeatWhen(complete => complete)
)
.subscribe(x => {
console.log(x)
cache = []
})
function restore() {
if (!textarea.value) {
const message = cache.join('\n')
textarea.value = message
cache = []
}
}
Link to the example: https://stackblitz.com/edit/rxjs-undo-message
The problem is pretty much in the restore function. When you restore, the 'input' event doesn't get triggered, so your message queue is never enqueued with the restored item. The suggestion given by @webber, to call message$.next(message), is right, and that's what you need to do.
The problem is how exactly you do it. You can emit the value from a setTimeout in restore(), so that your takeUntil() completes first and the value is then enqueued in the Subject:
function restore() {
if (!textarea.value) {
const message = cache.join('\n')
textarea.value = message
cache = []
setTimeout(function(){
message$.next(message)
},3000)
}
}
Alternatively, you can remove the line
textarea.addEventListener('input', event => message$.next(event.target.value))
and change your send event handler to the following:
send.addEventListener('click', () => {
message$.next(textarea.value);
textarea.value =''
})
The subscriber works fine; it's just that your message$ doesn't get updated when undoEvent$ fires, even though the textarea's value does get restored.
If you undo, then type, and then send again, you will see that it works the first time as well.
What you have to do is push the textarea's value into message$, and then it works.
So, I'm creating a bot for my Discord channel. I created a special system based on requests. For example, a user sends a request to be added to the chat they want. Each request is paired with a unique ID. The request is formed and sent to a service channel where a moderator can see it. Then, once the request is resolved, the moderator types something like .resolveRequest <ID> and the request is copied and posted to the 'resolved requests' channel.
Here is some code I wrote.
Generating the request:
if (command === "join-chat") {
const data = fs.readFileSync('./requestID.txt');
let requestID = parseInt(data, 10);
const emb = new Discord.RichEmbed()
.setTitle('New request')
.setDescription('Request to add to chat')
.addField('Who?', `**User ${msg.author.tag}**`)
.addField('Which chat?', `**Chat: ${args[0]}**`)
.setFooter('Request\'s ID: ' + requestID)
.setColor('#fffb3a');
let chan = client.channels.get('567959560900313108');
chan.send(emb);
requestID++;
fs.writeFileSync('./requestID.txt', requestID.toString(10));
}
Now the .resolveRequest <ID> handler:
if (command === '.resolveRequest') {
msg.channel.fetchMessages({limit : 100}) //getting last 100 messages
.then((messages) => messages.forEach(element => { //for each message get an embed
element.embeds.forEach(element => {
msg.channel.send(element.fields.find('value', args[0].toString(10))); //send a message containing the ID mentioned in 'args[0]' that was taken from the message
})
}));
}
.join-chat <chat_name> works flawlessly, but .resolveRequest <ID> doesn't work at all, and throws no errors.
Any way to fix it?
Using .find('value', 'key') is deprecated; use .find(thing => thing.value === 'key') instead.
Also, you should use a database to store these requests. That said, your code is not actually broken; it's just that you check for command === '.resolveRequest'. Since the prefix gets cut off the command variable, that means you would have to type ..resolveRequest. Change it to command === 'resolveRequest'.
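The predicate form works the same way on a plain array of embed fields; as a minimal sketch (the helper name is mine):

```javascript
// Find an embed field whose value matches the requested ID, using the
// predicate form of .find() instead of the deprecated .find('value', key).
function findFieldByValue(fields, id) {
  return fields.find(field => field.value === id);
}
```

It returns the matching field object, or undefined if no field carries that ID, so you can guard before calling msg.channel.send.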
I have been trying to get data chat working using WebRTC. It was working previously in Google Chrome and suddenly stopped working. I have narrowed the issue down to the 'ondatachannel' callback function not getting triggered. The exact same code works fine in Mozilla Firefox.
Here's the overall code:
app.pc_config =
{'iceServers': [
{'url': 'stun:stun.l.google.com:19302'}
]};
app.pc_constraints = {
'optional': [
/* {'DtlsSrtpKeyAgreement': true},*/
{'RtpDataChannels': true}
]};
var localConnection = null, remoteConnection = null;
try {
localConnection = new app.RTCPeerConnection(app.pc_config, app.pc_constraints);
localConnection.onicecandidate = app.setIceCandidate;
localConnection.onaddstream = app.handleRemoteStreamAdded;
localConnection.onremovestream = app.handleRemoteStreamRemoved;
}
catch (e) {
console.log('Failed to create PeerConnection, exception: ' + e.message);
return;
}
isStarted = true;
In the channel-creation code that follows this:
var localConnection = app.localConnection;
var sendChannel = null;
try {
sendChannel = localConnection.createDataChannel(app.currentchannel,
{reliable: false});
sendChannel.onopen = app.handleOpenState;
sendChannel.onclose = app.handleCloseState;
sendChannel.onerror = app.handleErrorState;
sendChannel.onmessage = app.handleMessage;
console.log('created send channel');
} catch (e) {
console.log('channel creation failed ' + e.message);
}
if (!app.isInitiator){
localConnection.ondatachannel = app.gotReceiveChannel;
}
app.sendChannel = sendChannel;
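For what it's worth, the non-initiator wiring above can be isolated into a small helper so the handler is easy to inspect (a sketch; the names are mine, and `pc` stands in for the RTCPeerConnection):

```javascript
// Wire the receiving side: the remote peer's channel arrives in the
// ondatachannel event once negotiation succeeds.
function wireReceiveChannel(pc, onMessage) {
  pc.ondatachannel = (event) => {
    const channel = event.channel;
    channel.onmessage = onMessage; // incoming chat payloads land here
    pc.receiveChannel = channel;   // keep a reference for later use
  };
}
```

If this handler never fires while the offer/answer and candidates all go through, the data channel was likely not negotiated into the SDP at all.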
Then I create the offer:
app.localConnection.createOffer(app.gotLocalDescription, app.handleError);
and the answer:
app.localConnection.createAnswer(app.gotLocalDescription, app.handleError);
The offer and answer get created successfully, candidates are exchanged, and the onicecandidate event is triggered at both ends. The local and remote descriptions are set on both respective ends.
I have a Pusher server for signalling, and I am able to send and receive messages through it successfully.
The same WebRTC code works for audio/video = true; the only issue is when I try to create a data channel. The only step that does not get executed is the callback, i.e. gotReceiveChannel.
I'm starting to think it's the version of Chrome. I am not able to get the codelab example working in Chrome either (step 4, for data chat):
https://bitbucket.org/webrtc/codelab
while the same code works in Firefox.
The sendChannel on the offerer's side has a readyState of 'connecting'.
Any help much appreciated.
I'm trying to implement WebRTC with the APIs that are available in the browser. I'm using this tutorial as my guide: https://www.webrtc-experiment.com/docs/WebRTC-PeerConnection.html
Here's what I'm currently doing. First I get the audio element on the page. I also have an audioStream variable for storing the stream that I get from navigator.webkitGetUserMedia, either when the call button is clicked by the initiating user or when the answer button is clicked by the receiving user. Then there is a variable for storing the current call:
var audio = document.querySelector('audio');
var audioStream;
var call = {};
Then I have the following settings:
var iceServers = [
{ url: 'stun:stun1.l.google.com:19302' },
{ url: 'turn:numb.viagenie.ca', credential: 'muazkh', username: 'webrtc@live.com' }
];
var sdpConstraints = {
optional: [],
mandatory: {
OfferToReceiveAudio: true,
OfferToReceiveVideo: false
}
};
var DtlsSrtpKeyAgreement = {
DtlsSrtpKeyAgreement: true
};
On page load I create a new peer:
var peer = new webkitRTCPeerConnection({
'iceServers': iceServers
});
On the add stream event I just assign the event to the call variable.
peer.onaddstream = function(event){
call = event;
};
On ice candidate event, I send the candidate to the peer.
peer.onicecandidate = function(event){
var candidate = event.candidate;
if(candidate){
SocketService.emit('message', {
'conversation_id': me.conversation_id,
'targetUser': to,
'candidate': candidate
});
}
if(typeof candidate == 'undefined'){
send_SDP();
}
};
Once the gathering state is completed, I send the SDP.
peer.ongatheringchange = function(e){
if(e.currentTarget && e.currentTarget.iceGatheringState === 'complete'){
send_SDP();
}
};
What the send_SDP method does is send the local description to the peer.
function send_SDP(){
SocketService.emit('message', {
'conversation_id': me.conversation_id,
'targetUser': to,
'sdp': peer.localDescription
});
}
Here's what I have inside the event listener for the CALL button.
First it gets the audio, then adds the stream to the current peer object. Then it creates a new offer; if that's successful, the local description is set, and once that's done, it sends the offer to the other peer:
getAudio(
function(stream){
peer.addStream(stream);
audioStream = stream;
peer.createOffer(function(offerSDP){
peer.setLocalDescription(offerSDP, function(){
SocketService.emit('message', {
'conversation_id': me.conversation_id,
'targetUser': to,
'offerSDP': offerSDP
});
},
function(){});
},
function(){},
sdpConstraints
);
},
function(err){});
On the receiving peer, the offer is captured, so it shows a modal that somebody is calling. The receiving peer can then click the ANSWER button. But here I'm setting the remote description using the offer even before the ANSWER button is clicked:
SocketService.on('message', function(msg){
if(msg.offerSDP){
//show calling modal on the receiving peer
var remoteDescription = new RTCSessionDescription(msg.offerSDP);
peer.setRemoteDescription(remoteDescription, function(){
createAnswer(msg.offerSDP);
},
function(){});
}
});
Once the remote description is set, the answer is created: first by getting the audio, then adding the stream to the local peer object. Next, a remote session description is created from the offerSDP and set on the local peer object. After that, the answer is created, the local description is set on the local peer, and the answerSDP is sent to the peer who initiated the call:
function createAnswer(offerSDP) {
getAudio(
function(stream){
peer.addStream(stream);
audioStream = stream;
var remoteDescription = new RTCSessionDescription(offerSDP);
peer.setRemoteDescription(remoteDescription);
peer.createAnswer(function(answerSDP) {
peer.setLocalDescription(answerSDP, function(){
SocketService.emit('message', {
'conversation_id': me.conversation_id,
'targetUser': to,
'answerSDP': answerSDP
});
},
function(){});
}, function(err){}, sdpConstraints);
},
function(err){
}
);
};
The peer who initiated the call receives the answerSDP. Once it does, it creates a remote description from the answerSDP and uses it to set the remote description on its local peer object:
if(msg.answerSDP){
var remoteDescription = new RTCSessionDescription(msg.answerSDP);
peer.setRemoteDescription(remoteDescription, function(){
},
function(){});
}
After that, I'm not really sure what happens next. Based on what I understand, the onicecandidate event fires on the initiating peer (the caller), which sends a candidate to the receiving peer, which then executes the following code:
if(msg.candidate){
var candidate = msg.candidate.candidate;
var sdpMLineIndex = msg.candidate.sdpMLineIndex;
peer.addIceCandidate(new RTCIceCandidate({
sdpMLineIndex: sdpMLineIndex,
candidate: candidate
}));
}
Now, once the ANSWER button is clicked, a message is sent to the initiating peer that the receiving peer picked up. It then uses the call stream as the source for the audio element and, once all the metadata is loaded, plays the audio:
SocketService.emit('message', {
'answerCall': true,
'conversation_id': me.conversation_id,
'targetUser': to
});
audio.src = window.URL.createObjectURL(call.stream);
audio.onloadedmetadata = function(e){
audio.play();
}
Something might be wrong here, because the audio is only one-way. Only the user who initiated the call can hear input from the receiving user. Sounds produced by the initiating user can also be heard on their own end, so it's like an echo of what you're saying. Any ideas?
If you know of any tutorial or book that shows how to implement WebRTC using the native APIs, that would also help. Thanks in advance.
The simplest example I can recommend is the WebRTC samples Peer Connection demo for a basic peer connection.
As for the echo, you can set audio.muted on the local peer's audio element to prevent it from playing and causing an echo (a user doesn't need to hear their own audio, so you can mute that element).
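A minimal sketch of that fix, assuming a separate audio element is used for local monitoring (the helper name is mine, and this uses the srcObject property; with the older createObjectURL approach the same muted flag applies):

```javascript
// Attach the local stream to its own element, muted, so the caller never
// hears their own microphone played back.
function attachLocalPreview(audioEl, stream) {
  audioEl.srcObject = stream;
  audioEl.muted = true; // mute local playback to stop the echo
  return audioEl;
}
```

The remote stream's element stays unmuted, so each side hears only the other.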