Broadcast Live Streaming Audio Server using Node.js

My goal is to create a web-radio station hosted on localhost using JavaScript, with a Node.js server for the backend and HTML/CSS/JavaScript for the frontend. My radio is going to be simple: the goal is to create a server which constantly broadcasts a single song (or many songs, in the .mp3 format) so they can be consumed by each connected client.
The streaming part is not so hard. The part I am struggling with so far is achieving the "broadcast" transmission simultaneously to all consumers. The image below explains my thinking better:
Does anyone have a code example so I can easily understand it and at the same time implement it for my project's use?
Has anyone faced a similar situation like this?

This is exactly what SHOUTcast/Icecast and compatible servers do. You basically just need to copy the audio data to each client.
You can use normal HTTP for the streaming protocol. The clients don't need to know or care that they're streaming live. A simple audio element works:
<audio src="https://stream.example.com/some-stream" preload="none" controls></audio>
On the server-side, you do need to ensure that you're sending a stream aligned in such a way that the client can just pick right up and play it. The good news is that for raw MP3 and ADTS (which normally wraps AAC) streams, all the data they need for playback is in each frame, so you can just "needle drop", stream from an arbitrary position, and the client will figure it out.
Inside your Node.js app, it will look a bit like this:
Input Stream -> Buffer(s) -> Clients' HTTP Responses
And in fact, you can even drop the buffer part if you want. As data comes in from the codec, you can just write it to all the clients. The buffer is useful for ensuring playback starts quickly: most clients need some buffered data to sniff the stream type and check compatibility, as well as to fill their own playback buffer for smooth streaming.
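A minimal sketch of that fan-out in Node.js, assuming "source" is a readable stream of MP3 data (for example FFmpeg's stdout, or a file read back at real-time pace); the port and names are just placeholders:

const http = require('http');

const clients = new Set();

// Each listener gets an open HTTP response that we keep writing audio to.
http.createServer(function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'audio/mpeg',
    'Cache-Control': 'no-cache'
  });
  clients.add(res);
  req.on('close', function () { clients.delete(res); });
}).listen(8000);

// Call this with every chunk the MP3 source produces, e.g. source.on('data', broadcast);
function broadcast(chunk) {
  for (const res of clients) {
    res.write(chunk); // every connected client receives the same live data
  }
}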
That's really all there is to it!

Related

How to stream two videos concurrently using js or react or node js?

I am trying to create a video streaming website which can play multiple videos concurrently (something similar to a video management system). The requirement for now is just to be able to play two videos at the same time.
All the tutorials I have seen so far stream only one video. I am a bit lost and helpless since I am still learning JS as well. So, here, I am hoping that any of you could make some suggestions on the libraries I could use (I learned about ffmpeg), or even the process of how to make it work or how to kick-start it.
Any help would be very much appreciated!
I can already stream a video using Node.js and I am hoping to play two videos at the same time.
The way video streaming typically works is that the client requests chunks of the video at a time using an HTTP request (with the Range header), and the server sends each chunk of the video as requested. The client then uses its own logic to decide when it needs to request the next chunk.
So, for a server to support multiple clients streaming video, all it has to do is be able to respond to multiple http requests in a timely manner and have enough server bandwidth for the data being sent to the clients.
Pretty much any basic web server with enough bandwidth to the internet should be able to stream two videos, and this should work just fine with Node.js and its built-in HTTP server. You will, of course, have to write your own request handlers that support the Range header appropriately, so that your server can inform clients that it supports ranges and can properly fulfill requests for a particular byte range of the video.
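As an illustration, here is a rough sketch of such a Range-aware handler in Node.js; the ./videos directory, the MP4 content type, and the port are assumptions made for the example:

const http = require('http');
const fs = require('fs');
const path = require('path');

http.createServer(function (req, res) {
  const filePath = path.join(__dirname, 'videos', path.basename(req.url));
  fs.stat(filePath, function (err, stats) {
    if (err) { res.writeHead(404); return res.end(); }

    const range = req.headers.range;
    if (!range) {
      // No Range header: send the whole file and advertise range support.
      res.writeHead(200, {
        'Content-Type': 'video/mp4',
        'Content-Length': stats.size,
        'Accept-Ranges': 'bytes'
      });
      return fs.createReadStream(filePath).pipe(res);
    }

    // Parse "bytes=start-end" and answer with 206 Partial Content.
    const parts = range.replace(/bytes=/, '').split('-');
    const start = parseInt(parts[0], 10);
    const end = parts[1] ? parseInt(parts[1], 10) : stats.size - 1;

    res.writeHead(206, {
      'Content-Type': 'video/mp4',
      'Content-Range': 'bytes ' + start + '-' + end + '/' + stats.size,
      'Content-Length': end - start + 1,
      'Accept-Ranges': 'bytes'
    });
    fs.createReadStream(filePath, { start: start, end: end }).pipe(res);
  });
}).listen(8000);

Two (or more) clients streaming at once is just two of these request/response cycles happening concurrently; nothing extra is needed on the server.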
As for ffmpeg, that's more typically used for converting videos to different formats. Ideally, your server would store the videos in a format that is already ready for streaming and you would not need to use ffmpeg in the actual streaming process.

How do I play a stream of data with custom but partially supported MimeType in browser?

Bringing this over from softwareengineering. Was told this question may be better for stackoverflow.
I am sending a video stream of data to another peer and want to reassemble that data and make that stream the source of a video element. I record the data using the npm package RecordRTC and am getting a Blob of data every 1s.
I send it over a WebRTC data channel and initially tried to reassemble the data using the MediaSource API, but it turns out that MediaSource doesn't support data with a MIME type of video/webm;codecs=vp8,pcm. Are there any thoughts on how to reassemble this stream? Is it possible to modify the MediaSource API?
My only requirement for this stream of data is that the audio be encoded with PCM, but if you have any thoughts or questions please let me know!
P.S. I thought opinion-based questions weren't for Stack Overflow, so that's why I posted there first.
The easiest way to handle this is to proxy the stream through a server where you can return the stream as an HTTP response. Then, you can do something as simple as:
<video src="https://example.com/your-stream"></video>
The downside of course is that now you have to cover the bandwidth cost, since the connection is no longer peer-to-peer.
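As a rough sketch of that proxy idea (the endpoint names and the third-party "ws" package are assumptions): the sender uploads its recorded chunks over a WebSocket, and the server writes them straight through to the HTTP response that viewers request. Note that a viewer joining mid-stream would still need the WebM initialization segment; handling that is omitted here.

const http = require('http');
const WebSocket = require('ws'); // npm package "ws"

const viewers = new Set();

const server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'video/webm' });
  viewers.add(res);
  req.on('close', function () { viewers.delete(res); });
});

// The sender connects here and sends each recorded Blob as a binary message.
new WebSocket.Server({ server: server, path: '/ingest' }).on('connection', function (ws) {
  ws.on('message', function (chunk) {
    for (const res of viewers) res.write(chunk);
  });
});

server.listen(8000);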
What would be nice is if you could use a Service Worker and have it return a faked HTTP response from the data you're receiving from the peer. Unfortunately, the browser developers have crippled the Service Worker standards by disabling it if the user reloads the page, or uses privacy modes. (It seems that they assumed Service Workers were only useful for caching.)
Also, a note on WebRTC... what you're doing is fine. You don't want to use the normal WebRTC media streams, as not only are they lossy compressed, but they will drop segments to prioritize staying realtime over quality. This doesn't sound like what you want.
I've been wondering this: what format is the raw MediaStream returned from something like getUserMedia() actually in?
The MediaStream is the raw data, but it isn't accessible directly. If you attach the MediaStream to a Web Audio API graph, whatever format the sound card captured in is converted to 32-bit floating-point PCM. At that point, you can use a ScriptProcessorNode to capture the raw PCM data.
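A brief sketch of that capture path, using a ScriptProcessorNode (deprecated but widely supported; AudioWorklet is the modern replacement):

const audioContext = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (mediaStream) {
  const source = audioContext.createMediaStreamSource(mediaStream);
  const processor = audioContext.createScriptProcessor(4096, 1, 1);

  processor.onaudioprocess = function (event) {
    // Float32Array of raw PCM samples for this block.
    const samples = event.inputBuffer.getChannelData(0);
    // ... copy the samples somewhere if you need them past this callback
  };

  source.connect(processor);
  processor.connect(audioContext.destination); // some browsers only run connected nodes
});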

Sending a MediaStream to host Server with WebRTC after it is captured by getUserMedia

I am capturing audio data using getUserMedia() and I want to send it to my server so I can save it as a Blob in a MySQL field.
This is all I am trying to do. I have made several attempts to do this using WebRTC, but I don't even know at this point if this is right or even the best way to do this.
Can anybody help me?
Here is the code I am using to capture audio from the microphone:
var audioContext = new AudioContext();

navigator.getUserMedia({
    video: false,
    audio: true
}, function (mediaStream) {
    // output mediaStream to speakers:
    var mediaStreamSource = audioContext.createMediaStreamSource(mediaStream);
    mediaStreamSource.connect(audioContext.destination);

    // send mediaStream to server:
    // WebRTC code? not sure about this...
    var RTCconfig = {};
    var conn = new RTCPeerConnection(RTCconfig);
    // ???
}, function (error) {
    console.log('getUserMedia() fail.');
    console.log(error);
});
How can I send this mediaStream up to the server?
After Googling around I've been looking into WebRTC, but this seems to be just for peer-to-peer communication. Actually, now that I'm looking into this more, I think this is the way to go: it seems to be the way to communicate from the client's browser up to the host web server, but nothing I try even comes close to working.
I've been going through the W3C documentation (which I am finding way too abstract), and I've been going through this article on HTML5 Rocks (which is bringing up more questions than answers). Apparently I need a signalling method; can anyone advise which signalling method is best for sending MediaStreams: XHR, XMPP, SIP, Socket.io, or something else?
What will I need on the server to support the receiving of WebRTC? My web server is running a basic LAMP stack.
Also, is it best to wait until the MediaStream has finished recording before I send it up to the server, or is it better to send it as it's being recorded? I want to know if I am going about doing this the right way. I have written file uploaders in JavaScript and HTML5, but uploading one of these MediaStreams seems hellishly more complicated, and I'm not sure if I am approaching it right.
Any help on this would be greatly appreciated.
You cannot upload the live stream itself while it is running. This is because it is a LIVE stream.
So, this leaves you with a handful of options:
1) Record the audio stream using one of the many recorders out there; RecordRTC works fairly well. Wait until the stream is completed and then upload the file.
2) Send smaller chunks of recorded audio with a timer and merge them again server-side. This is an example of that approach.
3) Send the audio packets as they occur over WebSockets to your server so that you can manipulate and merge them there. My version of RecordRTC does this.
4) Make an actual peer connection with your server so it can grab the raw RTP stream, and record the stream using some lower-level code. This can easily be done with the Janus-Gateway.
As for waiting to send the stream vs. sending it in chunks, it all depends on how long you are recording. If it is for a longer period of time, I would say sending the recording in chunks or actively sending audio packets over WebSockets is the better solution, as uploading and storing larger audio files from the client side can be arduous for the client.
Firefox actually has its own solution for recording, but it is not supported in Chrome, so it may not work in your situation.
As an aside, the signalling method mentioned is for session build/destroy and really has nothing to do with the media itself. You would only really worry about it if you were using solution number 4 shown above.
A good API for this would be the MediaRecorder API, but it is less widely supported than the Web Audio API, so you can do it using a ScriptProcessorNode, or use Recorder.js (or build your own script node based on it).
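A minimal sketch of the MediaRecorder route, assuming you ship each recorded chunk to the server over a WebSocket (the endpoint URL and the one-second timeslice are placeholders):

const socket = new WebSocket('wss://example.com/audio-upload');

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  const recorder = new MediaRecorder(stream);

  recorder.ondataavailable = function (event) {
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data); // a Blob of encoded audio, e.g. audio/webm;codecs=opus
    }
  };

  recorder.start(1000); // emit a chunk roughly every second
});

Server-side you would append the received chunks in order, or merge them once the recording ends.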
WebRTC is designed as peer-to-peer, but the peer could be a browser or a server. So it's definitely possible to push the stream by WebRTC to a server, then record the stream as a file.
The stream flow is:
Chrome ----WebRTC---> Server ---record---> FLV/MP4
There are lots of servers, like SRS, Janus, or mediasoup, that accept a WebRTC stream. Please note that you might need to convert the WebRTC stream (H.264+Opus) to MP4 (H.264+AAC), or just choose SRS, which supports this feature.
Yes, it is possible to send a MediaStream's data to your server, but the only way you can achieve it is by going through a WebSocket, which enables the client browser to send data to your server over a real-time connection. So I recommend you use WebSockets.

Can I use WebRTC to receive a non-standard RTP stream?

I have a piece of software running on a node in my network that generates RTP streams carried over UDP/IP. Those streams contain streaming data, but not in any standard audio/video format (like H.264, etc.). I would like to have a simple Web app that can hook into these streams, decode the payloads appropriately, and display their contents. I understand that it isn't possible to have direct access to a UDP socket from a browser.
Is there a way, using JavaScript/HTML5, to read an arbitrary RTP stream (i.e. given a UDP port number to receive the data from)? The software that sends the stream does not implement any of the signaling protocols specified by WebRTC, and I'm unable to change it. I would like to just be able to get at the RTP packet contents; I can handle the decoding and display without much issue.
As far as I know, there is nothing in the set of WebRTC APIs that will allow you to do this. As you have pointed out, there also isn't a direct programmatic way to handle UDP packets in-browser.
You can use Canvas and the Web Audio API to effectively play back arbitrary video, but this takes a ton of CPU. The Media Source Extensions can be used to run data through the browser's codecs, but you still have to get the data there somehow.
I think the best solution in your case is to make these connections server-side and use something like FFmpeg to output a stream in a codec and container that your browser can handle, and simply play back in a video element. Then, you can connect to whatever you want. I have done similar projects with Node.js which make it very easy to pipe streams through, and on out to the browser.
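For illustration, a hedged sketch of that server-side approach in Node.js, spawning FFmpeg and piping its output straight into the HTTP response; the SDP file, codec, container, and port are assumptions, and FFmpeg generally needs an SDP description to read raw RTP:

const http = require('http');
const { spawn } = require('child_process');

http.createServer(function (req, res) {
  // Re-encode the incoming stream into something the browser can play natively.
  const ffmpeg = spawn('ffmpeg', [
    '-protocol_whitelist', 'file,udp,rtp',
    '-i', 'stream.sdp',        // SDP file describing your RTP source (assumption)
    '-c:a', 'libopus',
    '-f', 'webm',
    'pipe:1'                   // write the WebM output to stdout
  ]);

  res.writeHead(200, { 'Content-Type': 'audio/webm' });
  ffmpeg.stdout.pipe(res);
  req.on('close', function () { ffmpeg.kill('SIGKILL'); });
}).listen(8000);

On the page, a plain audio or video element pointed at that endpoint is then enough to play it back.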
Another alternative is to use WASM and create your own player for your stream. It's pretty incredible technology from recent years (post-2014). Also, as stated by @Brad, WebRTC doesn't support what you need, even as of this year (2020).

Best way to record audio with mic and send to server

I know there are similar questions but there is no answer for me.
What is the best option for recording audio with the microphone on a website and sending it to the server for some processing?
1) java/javascript
2) red5
3) flash/flex
4) silverlight
5) other(pls specify)
I want to create something like this : http://wami.csail.mit.edu/examples/jsapi/calculator.html
Well, your question isn't exactly a good one. There is no 'best technology', only what's best for your project, which I know nothing about.
With that being said, there's also the fact that you're bundling front-end and back-end technologies together, which doesn't work. And what kind of 'work' do you need done on the audio?
If it were me, I'd use Flash on the front end to record the microphone, since it has the most market penetration compared to, say, Silverlight. JavaScript cannot record the microphone. From there, I can then send the audio (streamed or not) to the server, which in this case is really up in the air. It could be any technology and it wouldn't matter all that much, unless one language has a better audio library than the other. If you just want to store the recording, you can use something extremely simple like PHP, but if you need something a bit more robust, you'll probably have a better time using Java.
How Flash sends the audio to the server is up to you. There are several options but if it doesn't need to be streamed, I'd say just upload using http.
The technology you refer to in your example is open-source. It uses a hidden Flash app to perform an HTTP post from client to server. Streaming is simulated by chunking the audio into multiple POSTs. Here's the link:
https://code.google.com/p/wami-recorder/
