Live Audio Streaming over HTML5/NodeJS

I'm trying to make a website that will serve as a VoIP recorder app. It will take audio from the microphone, transmit the audio to the server and the server only, and then the server will handle redistributing the audio to its connected clients.
Here's what I've tried already:
WebRTC (from what I can tell, it's peer-to-peer only)
MediaRecorder with a timeslice, streamed to Socket.IO (only the first chunk is playable, because it alone contains the container header)
MediaRecorder, stopping every few milliseconds, transmitting the audio, and starting again (extremely choppy)
The stack I'm set on is NodeJS with Express, but I'm extremely open to any packages that will help.
As for whether this is possible: I know it is, because Discord wrote in their own blog that they explicitly do not send packets peer-to-peer, since they have large numbers of connected users.
Below is the way I imagine it being set up:
Anyways, hope someone can help - I've been stuck on this for a while. Thanks!

WebRTC is NOT only P2P. You can put a WebRTC peer on a server (and then have it do fan-out). This is what all major conferencing solutions do. An SFU (Selective Forwarding Unit) is a very popular deployment style; mesh isn't the only thing you can do.
You can go down the MediaRecorder path, but you are going to hit issues with congestion control/backpressure.
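That said, the header problem from the question can be worked around server side: only the first MediaRecorder chunk carries the WebM container header, so the server can cache it and replay it to late joiners before relaying live chunks. A minimal sketch of that fan-out with Express and Socket.IO (the 'audio-chunk' event name is my own convention, not a standard):

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

let headerChunk = null; // the first MediaRecorder chunk holds the WebM header

io.on('connection', (socket) => {
  // late joiners need the header chunk before later chunks are decodable
  if (headerChunk) socket.emit('audio-chunk', headerChunk);

  socket.on('audio-chunk', (chunk) => {
    if (!headerChunk) headerChunk = chunk; // cache the init segment
    socket.broadcast.emit('audio-chunk', chunk); // fan out to everyone else
  });
});

server.listen(3000);

Listeners then append the chunks to a MediaSource SourceBuffer rather than playing each one on its own. This sidesteps the header issue, but as noted above you still have no congestion control, which is why an SFU is the more robust route.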

Related

Streaming video from browser to Amazon Kinesis Video

I'm developing a web application that captures video from a webcam and saves the stream to Amazon Kinesis.
The first approach I came up with is getUserMedia / MediaRecorder / XMLHttpRequest, which posts chunked MKV to my Unix server (not AWS), where a simple PHP backend proxies that traffic to Kinesis with PutMedia.
This should work, but all media streams from users will go through my server, which could become a bottleneck. As far as I know, it's not possible to post chunked MKV to Amazon directly from the browser due to cross-origin problems. Correct me if I'm wrong or there's a solution for this.
Another thing that I feel I'm missing is WebRTC. XHR feels a little bit legacy in 2019 for streaming media. But if I want this to work, I will need a stack of three servers: a WebRTC server to establish the connection, a WebRTC-to-RTSP proxy, and the Kinesis GStreamer plugin, which grabs the RTSP stream and pushes it to Kinesis. It looks a bit overcomplicated, and the media traffic still runs through my server. Or maybe there is a better approach?
I need a suggestion on how to build a better architecture for my app. I feel the best solution would be a direct WebRTC connection to some Amazon service, which proxies the stream to Kinesis. Is that possible?
Thanks!
I was looking into this also, for general education/research purposes. The closest example is featured on the AWS blog.
And this is the GitHub repo. From the README.md:
If the source is a sequence of buffered webcam frames, the browser client posts frame data to an API Gateway - Lambda Proxy endpoint, triggering the lambda/WebApi/frame-converter function. This function uses FFmpeg to construct a short MKV fragment out of the image frame sequence. For details on how this API request is executed, see the function-specific documentation.
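For a feel of what "posts frame data" means in practice, here is a rough sketch of the browser side; the endpoint URL is a placeholder for whatever your own API Gateway deployment exposes:

const video = document.querySelector('video'); // assumed to be already playing
const canvas = document.createElement('canvas');

function postFrame() {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  canvas.toBlob((blob) => {
    // hypothetical endpoint; use the URL from your own API Gateway stage
    fetch('https://example.execute-api.us-east-1.amazonaws.com/prod/frames', {
      method: 'POST',
      headers: { 'Content-Type': 'image/jpeg' },
      body: blob,
    });
  }, 'image/jpeg');
}

setInterval(postFrame, 200); // roughly 5 frames per second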

WebRTC video streaming through a server

I want to run the stream from the client side, then join from the server to the client. How can I stream the video through a server to other viewers? Is this possible?
I would like to try and point you in the right direction.
First, let's understand a little more about how WebRTC works.
In WebRTC you have a websocket that is called the bridge; the bridge's role is to help broker a connection between two or more peers.
Generally speaking, the bridge uses STUN/TURN servers along with the SDP protocol to help establish the connections between peers.
STUN servers are used to establish p2p UDP connections by punching holes through NAT.
If STUN fails to punch a hole (i.e. there is a firewall), a TURN server is used as a hub & spoke (i.e. data is relayed through the TURN server).
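In browser code this boils down to the iceServers list you pass to RTCPeerConnection; the browser tries STUN first and falls back to TURN. (The TURN URL and credentials below are placeholders.)

const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' }, // a public STUN server
    // fallback relay for when hole punching fails:
    { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'pass' },
  ],
});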
The full WebRTC stack includes video/audio streaming with the VP8/VP9/H264 codecs, and data is packaged using RTP.
Luckily for you, there is a Node.js library that implements almost the entire stack.
https://github.com/js-platform/node-webrtc
The library essentially provides you a WebRTC data channel.
There is no support for "Media Streams" and thus I assume you need to build the encoding/decoding and RTP packaging yourself.
However, there is a discussion here on how to stream audio/video with the data channel:
https://github.com/js-platform/node-webrtc/issues/156
Now, to your specific question: how do you stream from a "server"?
Well, WebRTC is generally p2p, but you could set up a "server peer" and designate it as having a source channel only (i.e. there is no input channel).
This peer then becomes the "server", and all the other peers can view its contents when they connect.
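A minimal sketch of such a "server peer" with node-webrtc; how you transport the offer/answer (your signalling layer) is up to you:

const { RTCPeerConnection } = require('wrtc');

const serverPeer = new RTCPeerConnection();
// source channel only: the server pushes, connected peers just listen
const channel = serverPeer.createDataChannel('broadcast');

channel.onopen = () => {
  // push your own encoded/packaged media bytes at some interval
  setInterval(() => channel.send('frame-bytes-here'), 100);
};

serverPeer.createOffer()
  .then((offer) => serverPeer.setLocalDescription(offer))
  .then(() => {
    // hand serverPeer.localDescription to the browser over your signalling
    // channel, then pass the browser's answer to setRemoteDescription()
  });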
Hope that helps.
Cheers!

WebRTC live video stream node.js

I am searching for a way to stream video with WebRTC. I am very new to WebRTC. I have seen a lot of applications on the web that have p2p video chat. The tutorials I follow explain how WebRTC works for the client, but they do not show what to use as a backend script. And that's exactly what I'm looking for: a backend script (preferably Node.js) that ensures that I can stream live video (getUserMedia) to the client.
It's really simple to get started; check out a simple demo here.
1. You need a WebRTC-supported browser. Chrome and Firefox are the best right now.
2. A signalling server to exchange media options, e.g. Socket.IO with Node.js (see the sketch below).
3. TURN and STUN servers to solve NAT and symmetric NAT (only if you go public).
4. An MCU, if you want to limit bandwidth usage. It gives you the flexibility of a star network rather than the mesh network of normal p2p.
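The signalling server from point 2 can be as small as a relay that forwards SDP offers/answers and ICE candidates between the two peers. A sketch with Socket.IO (the event names are my own convention):

const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', (socket) => {
  // blindly forward SDP offers/answers and ICE candidates to the other peer
  socket.on('offer', (sdp) => socket.broadcast.emit('offer', sdp));
  socket.on('answer', (sdp) => socket.broadcast.emit('answer', sdp));
  socket.on('ice-candidate', (c) => socket.broadcast.emit('ice-candidate', c));
});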

Sending a MediaStream to host Server with WebRTC after it is captured by getUserMedia

I am capturing audio data using getUserMedia() and I want to send it to my server so I can save it as a Blob in a MySQL field.
This is all I am trying to do. I have made several attempts to do this using WebRTC, but I don't even know at this point if this is right or even the best way to do this.
Can anybody help me?
Here is the code I am using to capture audio from the microphone:
// create an AudioContext to route the captured audio through
var audioContext = new (window.AudioContext || window.webkitAudioContext)();

navigator.mediaDevices.getUserMedia({
  video: false,
  audio: true,
}).then(function (mediaStream) {
  // output mediaStream to speakers:
  var mediaStreamSource = audioContext.createMediaStreamSource(mediaStream);
  mediaStreamSource.connect(audioContext.destination);
  // send mediaStream to server:
  // WebRTC code? not sure about this...
  var RTCconfig = {};
  var conn = new RTCPeerConnection(RTCconfig);
  // ???
}).catch(function (error) {
  console.log('getUserMedia() fail.');
  console.log(error);
});
How can I send this mediaStream up to the server?
After Googling around I've been looking into WebRTC, but this seems to be just for peer-to-peer communication. Actually, now that I'm looking into it more, I think this is the way to go. It seems to be the way to communicate from the client's browser up to the host webserver, but nothing I try even comes close to working.
I've been going through the W3C documentation (which I am finding way too abstract), and I've been going through this article on HTML5 Rocks (which is bringing up more questions than answers). Apparently I need a signalling method; can anyone advise which signalling method is best for sending MediaStreams: XHR, XMPP, SIP, Socket.IO, or something else?
What will I need on the server to support receiving WebRTC? My web server is running a basic LAMP stack.
Also, is it best to wait until the MediaStream is finished recording before I send it up to the server, or is it better to send it as it's being recorded? I want to know if I am going about doing this the right way. I have written file uploaders in JavaScript and HTML5, but uploading one of these MediaStreams seems hellishly more complicated and I'm not sure if I am approaching it right.
Any help on this would be greatly appreciated.
You cannot upload the live stream itself while it is running. This is because it is a LIVE stream.
So, this leaves you with a handful of options:
1. Record the audio stream using one of the many recorders out there; RecordRTC works fairly well. Wait until the stream is completed and then upload the file.
2. Send smaller chunks of recorded audio with a timer and merge them again server side. This is an example of this.
3. Send the audio packets as they occur over websockets to your server so that you can manipulate and merge them there. My version of RecordRTC does this. (See the sketch at the end of this answer.)
4. Make an actual peer connection with your server so it can grab the raw RTP stream, and you can record the stream using some lower-level code. This can easily be done with the Janus-Gateway.
As for waiting to send the stream vs sending it in chunks, it all depends on how long you are recording. If it is for a longer period of time, I would say sending the recording in chunks or actively sending audio packets over websockets is a better solution as uploading and storing larger audio files from the client side can be arduous for the client.
Firefox actually has its own solution for recording, but it is not supported in Chrome, so it may not work in your situation.
As an aside, the signalling method mentioned is for session setup/teardown and really has nothing to do with the media itself. You would only really worry about it if you were using solution number 4 above.
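A rough sketch of option 3 from the list above: ship MediaRecorder chunks over a plain WebSocket as they are produced (the URL and the server-side handling are up to you):

const ws = new WebSocket('wss://example.com/audio'); // placeholder URL

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
      ws.send(event.data); // one small Blob per timeslice
    }
  };
  recorder.start(250); // emit a chunk roughly every 250 ms
});

Server side, append the chunks in arrival order; only the concatenated sequence is a valid WebM/Opus file, not each chunk on its own.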
A good API for you would be the MediaRecorder API, but it is less supported than the Web Audio API, so you can do it using a ScriptProcessorNode, or use Recorder.js (or build your own ScriptProcessorNode based on it).
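If you take the Web Audio route, a ScriptProcessorNode hands you raw PCM you can ship anywhere. A sketch under that assumption (ScriptProcessorNode is deprecated in favour of AudioWorklet nowadays, but it matches this answer's era):

const ws = new WebSocket('wss://example.com/pcm'); // placeholder URL
const audioContext = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const source = audioContext.createMediaStreamSource(stream);
  const processor = audioContext.createScriptProcessor(4096, 1, 1);

  processor.onaudioprocess = (event) => {
    const pcm = event.inputBuffer.getChannelData(0); // Float32Array of samples
    ws.send(new Float32Array(pcm).buffer); // send a copy; the buffer is reused
  };

  source.connect(processor);
  processor.connect(audioContext.destination); // keeps the node processing
});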
WebRTC is designed as peer-to-peer, but the peers could be a browser and a server. So it's definitely possible to push the stream by WebRTC to a server, then record the stream as a file.
The stream flow is:
Chrome ----WebRTC---> Server ---record---> FLV/MP4
There are lots of servers, like SRS, Janus, or mediasoup, that accept a WebRTC stream. Please note that you might need to convert the WebRTC stream (H.264+Opus) to MP4 (H.264+AAC), or just choose SRS, which supports this feature.
Yes, it is possible to send a MediaStream's data to your server, but the only way you can achieve it is through a WebSocket, which enables the client browser to send data to your server over a real-time connection. So I recommend you use WebSockets.

Is WebRTC Conference possible in Star Topology with html5-JS client?

To connect 10 people in a WebRTC audio/video conference, it requires 90 calls in a mesh topology (each peer has to connect to every other peer in the conference). The more participants there are, the more bandwidth each user consumes.
Is there any way to make a WebRTC conference in a star topology (i.e. conferencing 10 people with 10 calls) from a browser client, without any hardware like an MCU?
My requirement is to initiate an audio conference of n people with n calls:
The moderator initiated 3 calls from a WebRTC browser client to different users (A, B, C)
with 3 different peer connections. Now the moderator is able to hear and speak with all three. Now the moderator wants to conference all three (A's remote audio tracks to B & C, B's audio tracks to A & C, C's audio tracks to A & B), without any new peer connections from A, B or C.
Is it possible to mix audio tracks in Chrome/Firefox?
Yes, exactly what you described is possible in Firefox by using the WebAudio API. At the current time it is not possible in Chrome because it cannot create a MediaStreamAudioSourceNode from WebRTC streams (I hope this limitation will go away soon). Thus, the moderator's browser must be Firefox. The other peers can use other browsers.
This way you can set up a conference call with 10 peers, all of them only connecting to the moderator, thus using only 10 WebRTC connections.
What you have forgotten to mention is that you also have to mix in the moderator's audio for each peer.
With the WebAudio API you also could do some fancy things like per-peer audio visualization, muting, volume control, mixing in of audio files, etc.
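A sketch of the mixing step on the moderator's side, assuming remoteStreamB, remoteStreamC and localMicStream are MediaStreams you already hold, and peerConnectionToA is the RTCPeerConnection to peer A (all four names are mine, for illustration):

const ctx = new AudioContext();
const mix = ctx.createMediaStreamDestination();

// feed both remote peers plus the moderator's own mic into one mix
ctx.createMediaStreamSource(remoteStreamB).connect(mix);
ctx.createMediaStreamSource(remoteStreamC).connect(mix);
ctx.createMediaStreamSource(localMicStream).connect(mix);

// mix.stream is an ordinary MediaStream; send its audio track to A
peerConnectionToA.addTrack(mix.stream.getAudioTracks()[0], mix.stream);

Repeat with a separate destination per peer, leaving out that peer's own stream, so each participant hears everyone but themselves.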
WebRTC gives you the possibility to stream audio to somewhere. Whether that "somewhere" is a server or another client is up to you and does not depend on the protocol. For your scenario, what needs to be built, is a central box that implements moderator control and executes audio mixing.
I am answering my old question (posted when I was an amateur WebRTC programmer).
The solution is using the WebAudio API, available in both Chrome and Firefox.
Here is the great post by webrtcHacks: Mixing Audio in the browser.
It explains everything step by step.
