read ICY meta data reactJS - javascript

Hi, I am wondering how I would read data from a streaming station in JavaScript or ReactJS.
I have googled but sadly had no luck, and I was wondering if anyone knows of a script that can read Icecast ICY metadata.

Please note that web browsers don't support ICY metadata, so you'd have to implement quite a few things manually and consume the whole stream just for the metadata. I do NOT recommend this.
Since you indicate Icecast, the recommended way to get metadata is to query the JSON endpoint: /status-json.xsl. It's documented.
It sounds like you are custom building for a certain server, so this should be a good approach. Note that you must be running a recent Icecast version (at the very least 2.4.1, but for security reasons preferably the latest).
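For illustration, a minimal sketch of polling that endpoint from the browser; the server base URL and mount path are placeholders, and the exact JSON shape can vary between Icecast versions and mounts:

```javascript
// Query Icecast's status JSON and pull out the current title for one mount.
// Assumes the server is same-origin (or has CORS enabled) and that
// "/stream" is the mount you care about (hypothetical name).
async function getNowPlaying(serverBase, mountPath) {
  const res = await fetch(`${serverBase}/status-json.xsl`);
  if (!res.ok) throw new Error(`Icecast status request failed: ${res.status}`);
  const status = await res.json();

  // icestats.source is a single object when one mount is active,
  // or an array when there are several.
  const sources = [].concat(status.icestats.source || []);
  const mount = sources.find((s) => s.listenurl && s.listenurl.endsWith(mountPath));
  return mount ? mount.title : null;
}

// Example usage, e.g. polled every few seconds from a React effect:
getNowPlaying("https://radio.example.com", "/stream")
  .then((title) => console.log("Now playing:", title))
  .catch(console.error);
```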
If you are wondering about accessing random Icecast servers that you have no control over, it becomes complicated: https://stackoverflow.com/a/57353140/2648865
If you want to play a stream and then display its ICY metadata, look at miknik's answer. (It applies to legacy ICY streams; it won't work with WebM- or Ogg-encapsulated Opus, Vorbis, etc.)

I wrote a script that does exactly this.
It implements a service worker and uses the Fetch API and the Readable Streams API to intercept network requests from your page to your streaming server. It adds the necessary header to the request to enable in-stream metadata from your streaming server, then extracts the metadata from the response while the MP3 plays via the audio element on your page.
Due to restrictions on service workers and the Fetch API, my script will only work if your site is served over SSL and your streaming server and website are on the same domain.
You can find the code on GitHub and a very basic demo of it in action here (open the console window to view the data being passed from the service worker).
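For orientation, here is a rough sketch of the same technique (not the actual GitHub code): the service worker adds the Icy-MetaData: 1 request header, reads the icy-metaint response header, strips the interleaved metadata blocks out of the audio before passing it to the audio element, and logs the titles. The "/stream" path check is an assumption for the example.

```javascript
// sw.js — rough sketch of the technique, not the linked repo's code.
// Adds the ICY metadata request header, then splits the response into
// clean audio (passed on to the <audio> element) and metadata (logged).
self.addEventListener("fetch", (event) => {
  if (!event.request.url.includes("/stream")) return; // hypothetical stream path

  event.respondWith((async () => {
    const headers = new Headers(event.request.headers);
    headers.set("Icy-MetaData", "1"); // ask the server to interleave metadata
    const res = await fetch(event.request.url, { headers });

    const metaInt = parseInt(res.headers.get("icy-metaint"), 10);
    if (!metaInt || !res.body) return res; // server did not enable metadata

    const reader = res.body.getReader();
    let audioLeft = metaInt; // audio bytes before the next metadata length byte
    let meta = null;         // { bytes, filled } while a metadata block is being read

    const audioStream = new ReadableStream({
      async pull(controller) {
        const { done, value } = await reader.read();
        if (done) { controller.close(); return; }
        let chunk = value;
        while (chunk.length) {
          if (meta) {
            // Continue filling the current metadata block.
            const take = Math.min(chunk.length, meta.bytes.length - meta.filled);
            meta.bytes.set(chunk.subarray(0, take), meta.filled);
            meta.filled += take;
            chunk = chunk.subarray(take);
            if (meta.filled === meta.bytes.length) {
              const text = new TextDecoder().decode(meta.bytes).replace(/\0+$/, "");
              if (text) console.log("ICY metadata:", text); // e.g. StreamTitle='...';
              meta = null;
              audioLeft = metaInt;
            }
          } else if (chunk.length <= audioLeft) {
            // Pure audio: pass it straight through.
            audioLeft -= chunk.length;
            controller.enqueue(chunk);
            break;
          } else {
            // Audio up to the boundary, then the metadata length byte.
            controller.enqueue(chunk.subarray(0, audioLeft));
            const metaLength = chunk[audioLeft] * 16;
            chunk = chunk.subarray(audioLeft + 1);
            if (metaLength > 0) meta = { bytes: new Uint8Array(metaLength), filled: 0 };
            else audioLeft = metaInt; // empty metadata block, back to audio
          }
        }
      },
    });
    return new Response(audioStream, { headers: res.headers });
  })());
});
```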

I don't know much about streams, but I've found some stuff googling:
https://www.npmjs.com/package/icy-metadata
https://living-sun.com/es/audio/85978-how-do-i-obtain-shoutcast-ldquonow-playingrdquo-metadata-from-the-stream-audio-stream-metadata-shoutcast-internet-radio.html
Also this:
Developing the client for the icecast server
It's for PHP, but maybe you can translate it to JS.

Related

How to stream two videos concurrently using js or react or node js?

I am trying to create a video streaming website which can play multiple videos concurrently (something similar to a video management system). The requirement for now is just to be able to play two videos at the same time.
All the tutorials I have seen so far stream only one video. I am a bit lost since I am still learning JS, so I am hoping some of you could suggest libraries I could use (I have learned about FFmpeg), or even the process of how to make it work or how to kick-start it.
Any help would be very much appreciated!
I can already stream one video using Node.js, and I am hoping to play two videos at the same time.
The way video streaming typically works is that the client requests chunks of the video at a time using an HTTP request (with the Range header) and the server sends each chunk of the video as requested. The client then uses its own logic to decide when it needs to request the next chunk.
So, for a server to support multiple clients streaming video, all it has to do is be able to respond to multiple http requests in a timely manner and have enough server bandwidth for the data being sent to the clients.
Pretty much any basic web server with enough bandwidth to the internet should be able to stream two videos, and this should work just fine with Node.js and its built-in http server. You will, of course, have to write your own request handlers that advertise and honor the Range header, so that your server can tell clients it supports ranges and properly fulfill requests for a particular byte range of the video.
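As a rough illustration (not the only way to structure it), a minimal Range-aware handler using only Node's built-in http and fs modules might look like this; the file path and port are placeholders:

```javascript
// Minimal Range-aware video handler using Node's built-in modules.
// "video.mp4" and port 8080 are placeholders for your own setup.
const http = require("http");
const fs = require("fs");

const VIDEO_PATH = "video.mp4";

http.createServer((req, res) => {
  const { size } = fs.statSync(VIDEO_PATH);
  const range = req.headers.range; // e.g. "bytes=0-" or "bytes=1000-2000"

  if (!range) {
    // No Range header: send the whole file and advertise range support.
    res.writeHead(200, {
      "Content-Length": size,
      "Content-Type": "video/mp4",
      "Accept-Ranges": "bytes",
    });
    fs.createReadStream(VIDEO_PATH).pipe(res);
    return;
  }

  const [startStr, endStr] = range.replace("bytes=", "").split("-");
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;

  res.writeHead(206, {
    "Content-Range": `bytes ${start}-${end}/${size}`,
    "Accept-Ranges": "bytes",
    "Content-Length": end - start + 1,
    "Content-Type": "video/mp4",
  });
  fs.createReadStream(VIDEO_PATH, { start, end }).pipe(res);
}).listen(8080);
```

Two <video> elements on the page, each pointing at a URL served this way, is then all the client side needs in order to play two videos at once.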
As for ffmpeg, that's more typically used for converting videos to different formats. Ideally, your server would store the videos in a format that is already ready for streaming and you would not need to use ffmpeg in the actual streaming process.

How does YouTube/Facebook live streaming from the web browser work?

I'm looking at a way to implement a video encoder using the web browser. YouTube and Facebook already allow you to go live directly from the web browser. I'm wondering how they do that.
There are a couple of solutions I've researched:
Using WebSockets: use the web browser to encode the video (with the MediaRecorder API) and push the encoded video to the server to be broadcast.
Using WebRTC: the web browser acts as a WebRTC peer and a server acts as the other end, receiving the stream and re-broadcasting (transcoding) it using other means (RTMP, HLS).
Is there any other tech those guys (YouTube, Facebook) are using to implement this? Or do they also use one of these approaches?
Thanks
WebRTCHacks has a "how does YouTube use WebRTC" post here which examines some of the technical details of their implementation.
In addition, one of their engineers gave a talk at WebRTC Boston describing the system, which is available on YouTube.
Correct, you've hit on two ways to do this. (Note that for the MediaRecorder method, you can use any other method to get the data to the server. Web Sockets is one way... so is a regular HTTP PUT of segments. Or, you could even use a data channel of a WebRTC connection to the server.)
Pretty much everyone uses the WebRTC method, as there are some nice built-in benefits:
Low latency (at the cost of some quality)
Dynamic bitrate
Well-optimized on the client
Able to automatically scale output if there are not enough system resources to continue encoding at a higher frame size
The downsides of the WebRTC method:
Ridiculously complicated stack to maintain server-side.
Lower quality (due to emphasis on low latency, BUT you can tweak this by fiddling with the SDP yourself)
If you go the WebRTC route, consider gstreamer. If you want to go the Web Socket route, I've written a proxy to receive the data and send it off to FFmpeg to be copied over to RTMP. You can find it here: https://github.com/fbsamples/Canvas-Streaming-Example
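For the Web Socket route, the browser side can stay fairly small. The sketch below (not the linked proxy's exact protocol) captures the camera with getUserMedia, encodes with MediaRecorder, and forwards each encoded chunk over a WebSocket; the endpoint URL, codec string, and timeslice are assumptions you would tune for your own server:

```javascript
// Browser-side sketch: MediaRecorder chunks over a WebSocket.
// wss://example.com/ingest is a placeholder endpoint; the WebM/Opus codec
// string and 1-second timeslice are assumptions, not the proxy's contract.
async function startStreaming() {
  const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const ws = new WebSocket("wss://example.com/ingest");

  ws.onopen = () => {
    const recorder = new MediaRecorder(media, {
      mimeType: "video/webm;codecs=vp8,opus",
      videoBitsPerSecond: 2_500_000,
    });
    // Every second MediaRecorder hands us an encoded chunk; forward it as-is.
    recorder.ondataavailable = (e) => {
      if (e.data.size > 0 && ws.readyState === WebSocket.OPEN) ws.send(e.data);
    };
    recorder.start(1000); // timeslice in ms
  };
}
```

The server end of that socket then remuxes or re-encodes the chunks (e.g. with FFmpeg) before pushing them to RTMP, which is essentially what the proxy linked above does.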

Streaming video from browser to Amazon Kinesis Video

I'm developing a web application that captures video from a webcam and saves the stream to Amazon Kinesis.
The first approach I came up with is getUserMedia / MediaRecorder / XMLHttpRequest, which posts chunked MKV to my Unix server (not AWS), where a simple PHP backend proxies that traffic to Kinesis with PutMedia.
This should work, but all media streams from users would go through my server, which could become a bottleneck. As far as I know, it's not possible to post chunked MKV to Amazon directly from the browser due to cross-origin problems. Correct me if I'm wrong or if there's a solution for this.
Another thing I feel I'm missing is WebRTC. XHR feels a bit legacy in 2019 for streaming media, but to make WebRTC work I would need a stack of three servers: a WebRTC server to establish the connection, a WebRTC-to-RTSP proxy, and the Kinesis GStreamer plugin, which grabs the RTSP stream and pushes it to Kinesis. That looks a bit overcomplicated, and the media traffic still runs through my server. Or maybe there is a better approach?
I need a suggestion on how to build a better architecture for my app. I feel the best solution would be a direct WebRTC connection to some Amazon service that proxies the stream to Kinesis. Is that possible?
Thanks!
I was looking into this too, for general education/research purposes. The closest example is featured on the AWS blog.
And this is the GitHub repo. From the README.md:
If the source is a sequence of buffered webcam frames, the browser client posts frame data to an API Gateway - Lambda Proxy endpoint, triggering the lambda/WebApi/frame-converter function. This function uses FFmpeg to construct a short MKV fragment out of the image frame sequence. For details on how this API request is executed, see the function-specific documentation.
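To make the frame-posting flow concrete, here is a hedged sketch of the browser side of that description; the endpoint URL, JSON payload shape, and frame rate are placeholders rather than the repo's actual contract:

```javascript
// Sketch only: buffer webcam frames and post them to an API Gateway endpoint.
// The URL, the JSON payload shape, and the ~5 fps rate are placeholders.
async function captureAndUpload() {
  const video = document.createElement("video");
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d");

  const frames = [];
  const timer = setInterval(() => {
    ctx.drawImage(video, 0, 0);
    frames.push(canvas.toDataURL("image/jpeg", 0.8)); // base64 JPEG frame
  }, 200); // ~5 fps

  // After a couple of seconds, send the buffered frames as one fragment.
  setTimeout(async () => {
    clearInterval(timer);
    await fetch("https://abc123.execute-api.us-east-1.amazonaws.com/prod/frames", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ frames }),
    });
  }, 2000);
}
```

In the repo, the Lambda behind that kind of endpoint then assembles the frames into an MKV fragment with FFmpeg before the data reaches Kinesis, as quoted above.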

Javascript real-time voice streaming and processing it in django backend

Hi, I'm currently working on a project where I want to stream users' voices in real time using JS; from the user's perspective, think Google's speech recognition API demo.
So far I have tried a few jQuery libraries, but they don't seem to work as I expected: there was either no browser compatibility, they couldn't detect the microphone, or sending to the server failed.
Recently, I was exploring WebRTC and it seems it could do the job, but I'm not sure if it's possible to stream from the web browser to a Django backend.
I don't want to use either Node.js or Java applets.
I would appreciate any help with the JS side as well as with receiving the voice stream in Django. Thank you!
There are two separate parts here to consider: signaling and media.
The signaling part (as well as the application logic) can be handled by django. The media part can't.
In order to handle the media part you will need to use a media server that receives and processes that data - the low-level media processing parts are usually implemented in C/C++. See http://kurento.org for a media server framework that can fit your needs (though it isn't written in Python).
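To illustrate the split, the browser end of the signaling could be as simple as the sketch below, exchanging SDP and ICE over a WebSocket that a Django backend (e.g. Django Channels) could serve; the URL and message format are made up for the example:

```javascript
// Browser-side signaling sketch: SDP/ICE exchanged over a WebSocket that a
// Django backend could relay. The URL and JSON message shapes are made up.
async function startCall() {
  const signaling = new WebSocket("wss://example.com/ws/signaling/");
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  mic.getTracks().forEach((track) => pc.addTrack(track, mic));

  pc.onicecandidate = (e) => {
    if (e.candidate) signaling.send(JSON.stringify({ type: "ice", candidate: e.candidate }));
  };

  signaling.onmessage = async (e) => {
    const msg = JSON.parse(e.data);
    if (msg.type === "answer") await pc.setRemoteDescription(msg.sdp);
    else if (msg.type === "ice") await pc.addIceCandidate(msg.candidate);
  };

  signaling.onopen = async () => {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signaling.send(JSON.stringify({ type: "offer", sdp: pc.localDescription }));
  };
}
```

Django's job here is only to relay those JSON messages between the browser and the media server; the audio itself flows over the peer connection to something like Kurento.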

How to capture HTML5 microphone input to icecast?

What are the steps and means of capturing a microphone audio stream through HTML5 / JavaScript (no Flash) and then sending it to an already set up Icecast server?
The solution must be solely browser/web-based, no additional software.
The server is on Rails 5.0.0.1.
Where should I start?
I'm struggling to find any relevant info online, as everything talks about uploading/recording audio as complete files, not streams.
The solution must be solely browser/web-based, no additional software.
That's not possible at the moment, because there is no way to make a streaming HTTP PUT request. That is, to make an HTTP request either via XHR or Fetch, the request body has to be immutable.
There is a new standard of ReadableStream which will be available any day now, allowing you to make a ReadableStream (your encoded audio) and PUT it to a server. Once that is available, this is possible.
The server is on Rails 5.0.0.1.
Not sure why you mention that since you said it has to all live in the browser. In any case, let me tell you how I implemented this, with a server-side proxy.
Client-side, you need to capture the audio with getUserMedia(). (Be sure to use adapter.js, as the spec has recently changed and you don't want to deal with the constraints object changes manually.) Once you have that, you can either send PCM samples (I'd recommend reducing from 32-bit float to 16-bit signed first), or you can use a codec like AAC or MP3 client-side. If you do the codec in the client, you're only going to be able to send one or two streams, due to CPU requirements. If you do the codec server-side, you can derive as many as you want, but you will use more bandwidth depending on the streams you're deriving.
For the codec on the client, you can either use MediaRecorder, or a codec compiled to JavaScript via emscripten (or similar). Aurora.js has some codecs for decoding, but I believe at least one of the codecs in there also had encoding.
To get the data to the server, you need a binary web socket.
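A sketch of that client-side path, assuming raw PCM is sent and the codec stays server-side; the WebSocket URL is a placeholder, and ScriptProcessorNode is deprecated but kept here because it is the shortest thing to show:

```javascript
// Capture the microphone, downconvert 32-bit float samples to 16-bit signed
// PCM, and push them over a binary WebSocket. The URL is a placeholder and
// the server is assumed to handle the encoding (e.g. with FFmpeg).
async function startMicStream() {
  const ws = new WebSocket("wss://example.com/pcm-ingest");
  ws.binaryType = "arraybuffer";

  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(mic);
  const processor = ctx.createScriptProcessor(4096, 1, 1); // deprecated, but simple

  processor.onaudioprocess = (e) => {
    const float32 = e.inputBuffer.getChannelData(0);
    const int16 = new Int16Array(float32.length);
    for (let i = 0; i < float32.length; i++) {
      const s = Math.max(-1, Math.min(1, float32[i])); // clamp
      int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;      // scale to 16-bit signed
    }
    if (ws.readyState === WebSocket.OPEN) ws.send(int16.buffer);
  };

  source.connect(processor);
  processor.connect(ctx.destination); // needed in some browsers for onaudioprocess to fire
}
```

The server receiving this socket would then feed the PCM into the encoder and PUT the result to Icecast, as described below.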
On the server side, if you're keeping the codecs there, FFmpeg makes this easy.
Once you have the encoded audio, you need to make your HTTP PUT request to Icecast or similar, as well as update the metadata either out-of-band, or muxed into the container that you're PUT-ing to the server.
Self Promotion: If you don't want to do all of that yourself, I have some code you can license from me called the AudioPump Web Encoder, which does exactly what you ask for. You can modify it for your needs, take components and embed them into your project, etc. E-mail me at brad#audiopump.co if you're interested.
