How to combine webcam and screen share video without using canvas? - javascript

I have a WebRTC React app where users can simulcast their stream to YouTube, Facebook, etc. (like restream.io).
I want to send both streams (screen share and webcam) as one video (half screen share and half webcam, webcam overlaid on the screen share, captions on top of the video), like studio.restream.io.
Everything works fine when I draw the streams on a canvas and pipe the data over a WebSocket to the backend, where it is transcoded to RTMP and sent to Facebook, YouTube, etc. (This method only works on a high-end PC.)
The problem with this method is that drawing the streams on the canvas takes a lot of CPU and the browser hangs (it only works when you have a GPU).
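For reference, the canvas compositing I'm doing looks roughly like this (a simplified sketch; the canvas size, overlay position, bitrate, and the micStream / websocket variables are illustrative):

    // Draw the screen share full-frame and the webcam as a small overlay.
    const canvas = document.getElementById('composite');     // e.g. a 1280x720 canvas
    const ctx = canvas.getContext('2d');

    function drawFrame(screenVideo, camVideo) {
      ctx.drawImage(screenVideo, 0, 0, canvas.width, canvas.height);               // background
      ctx.drawImage(camVideo, canvas.width - 340, canvas.height - 200, 320, 180);  // overlay
      requestAnimationFrame(() => drawFrame(screenVideo, camVideo));
    }

    // Capture the canvas at 30 fps, add the mic track, and pipe chunks to the backend.
    const mixed = canvas.captureStream(30);
    micStream.getAudioTracks().forEach(track => mixed.addTrack(track));

    const recorder = new MediaRecorder(mixed, { mimeType: 'video/webm;codecs=vp8,opus' });
    recorder.ondataavailable = e => websocket.send(e.data);  // backend transcodes to RTMP
    recorder.start(1000);                                     // emit a chunk every second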
The question is: how do I optimize this?
Do we need a back-end service to merge the videos using ffmpeg, or
is there a way to do it in the browser?

In general, the canvas operations (and a lot of other drawing-related operations) in the browser assume a GPU is available and are very slow when they have to run on the CPU.
For what you're doing, you probably do need to run the browser on hardware that has a GPU.
You're right that you can do this kind of compositing more flexibly using ffmpeg or GStreamer. We've used both ffmpeg and GStreamer pretty extensively at Daily.co.
For our production live streaming workers, we use GStreamer running on AWS instances without GPUs. Our media servers forward the WebRTC RTP tracks as raw RTP to a GStreamer process, which decodes the tracks, composites the video tracks, mixes the audio tracks, and encodes to RTMP. GStreamer has a steep learning curve and is a totally different toolkit from the browser, but it's also efficient and flexible in ways that running in the browser can't quite be.
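To give a rough idea of what that kind of server-side compositing looks like with ffmpeg (just an illustrative sketch, not our actual pipeline; the input files, overlay geometry, bitrates, and RTMP URL are placeholders), this scales the webcam down, overlays it on the screen share, and pushes the result to an RTMP endpoint:

    ffmpeg \
      -i screen.webm -i webcam.webm \
      -filter_complex "[0:v]scale=1280:720[bg];[1:v]scale=320:180[cam];[bg][cam]overlay=W-w-20:H-h-20[out]" \
      -map "[out]" -map 1:a \
      -c:v libx264 -preset veryfast -b:v 3000k \
      -c:a aac -b:a 128k \
      -f flv rtmp://a.rtmp.youtube.com/live2/<stream_key>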

Related

How to allow multiple audio streams to multiple listeners

How can I allow multiple users to stream audio coming from their machine over the network to multiple listeners? I mean taking all the sound from their sound card to the network.
I know this can be accomplished using Icecast, Edcast, etc., but that only works when the user installs these programs on their device and starts making configurations, and that is a lot of work.
What I need to know is whether it's possible to do this without Icecast, using just JavaScript. If we use WebRTC it will be more like a voice call, I guess,
but I need that audio streamed from device A to device B as if it were already on device B. I'm talking about playing music on the device and sound from the mic at the same time. Is this possible with JavaScript? And can multiple users do this stream at the same time?
taking all the sound from their soundcard to the network [...] without icecast just javascript
There is no Web API that I know of for capturing all the sound the computer plays.
You could maybe make something like this with WebRTC or other web APIs if the user's sound drivers expose a recording device (like the "Stereo Mix" recording device of olden days) that the user selects for your web app to use.
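If such a device exists, selecting it would look roughly like this (a sketch; the label matching is illustrative, and device labels are only populated after the user has granted microphone permission):

    // List audio inputs and look for a loopback-style device such as "Stereo Mix".
    const devices = await navigator.mediaDevices.enumerateDevices();
    const loopback = devices.find(
      d => d.kind === 'audioinput' && /stereo mix|loopback/i.test(d.label)
    );

    if (loopback) {
      // Capture that device; the resulting track could then be sent over an RTCPeerConnection.
      const stream = await navigator.mediaDevices.getUserMedia({
        audio: { deviceId: { exact: loopback.deviceId } }
      });
      // pc.addTrack(stream.getAudioTracks()[0], stream);  // assuming an existing RTCPeerConnection `pc`
    }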
(As an aside, Icecast itself doesn't care where the audio comes from, it just accepts and redistributes OGG or MP3 streams. It's up to the casting client to figure out where the audio comes from.)

How to stream video from my WebRTC app to a Facebook RTMP server directly?

I'm trying to develop a web application with WebRTC. I'm getting video from my webcam through WebRTC, and I want to live stream it to Facebook and YouTube from my browser. I have searched Python and Node.js libraries but haven't found one for this. I want to build an application like streamyard.com.
I have looked at ffmpeg.
You can do this using Pion WebRTC and ffmpeg!
I have created a demo here. If you have ffmpeg and the Go compiler installed, this should just work!
This takes audio/video from the browser and constructs a WebM in memory. It then passes this WebM to ffmpeg via a stdin pipe, which transcodes it and sends it to Twitch!
There are a lot of optimizations we could make here (like taking H264 from the browser directly), but H264 isn't supported everywhere, so this just makes the sample easier to reason about.
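The ffmpeg end of that stdin pipe is conceptually something like this (a sketch; the bitrates and the Twitch ingest URL/stream key are placeholders):

    ffmpeg -i pipe:0 \
      -c:v libx264 -preset veryfast -b:v 2500k \
      -c:a aac -b:a 128k \
      -f flv rtmp://live.twitch.tv/app/<stream_key>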

Uncompressed, unencrypted, unaltered, raw transfer of real-time PCM audio data through WebRTC stream

I'm transferring a live audio stream between 2 Electron window processes using WebRTC. There are no ICE or STUN servers or anything like that; the connection is established manually through Electron IPC communication (based on this code).
Note: from a technical point of view regarding the audio streams themselves, this is very similar (if not identical) to streaming between 2 browser tabs on the same domain, so this is primarily not a question about Electron itself, although Electron IPC would obviously be substituted with a browser equivalent.
The audio stream works, I can transmit audio from one window to another in real-time, as it is generated. That is, I can generate audio (Web Audio API) in window "A" and listen to it through an <audio> element in window "B", or do processing on it using a separate AudioContext in window "B" (although there is some latency).
However, the audio data is vastly altered during streaming: it becomes mono, its quality drops, and there is significant latency. After fiddling around, I've learned that WebRTC does pretty much everything I don't need, including encoding the audio stream with an audio codec, encrypting the transfer, running echo cancellation, and so on.
I don't need these. I need to simply transfer raw audio data through local WebRTC without altering the audio in any way. It needs to be float32 accurate to the sample.
How can I do this with WebRTC?
Why use WebRTC then?
I need to do custom audio processing inside the Web Audio API.
The only way to do this is using a ScriptProcessorNode, which is unusable in production code when there's essentially anything else going on on the page, because it is broken by design (it processes audio on the UI thread and causes audio glitches from even slight UI interactions).
So basically, because of this (and to the best of my knowledge), my only option is to transfer audio with WebRTC streams to another window process, perform ScriptProcessorNode processing there (nothing more is happening in that window, empty DOM, so the processing is always nice and smooth), then send the results back.
This works, but the audio is altered during streaming, which I want to avoid (see above).
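For context, the processing in window "B" is essentially a ScriptProcessorNode hooked up to the received stream, roughly like this (a simplified sketch; the buffer size and channel counts are illustrative):

    // Window "B": process the remote WebRTC audio stream sample-by-sample.
    const ctx = new AudioContext();

    function processRemoteStream(remoteStream) {
      const source = ctx.createMediaStreamSource(remoteStream);
      // 1024-sample buffers, 2 input channels, 2 output channels (illustrative values).
      const processor = ctx.createScriptProcessor(1024, 2, 2);

      processor.onaudioprocess = (event) => {
        for (let ch = 0; ch < event.inputBuffer.numberOfChannels; ch++) {
          const input = event.inputBuffer.getChannelData(ch);   // Float32Array
          const output = event.outputBuffer.getChannelData(ch);
          output.set(input);  // custom per-sample processing goes here
        }
      };

      source.connect(processor);
      processor.connect(ctx.destination);
    }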
Why not use AudioWorklet?
Because Electron is unfortunately 5 versions behind Chrome (version 59 at the moment) and simply does not ship AudioWorklet yet.

Live stream a single / static audio file using Azure Media Services

I'm struggling to figure out how to implement live streaming of an audio file through Azure Media Services. What I'm trying to do is have a single/static audio file that live streams and repeats when it reaches the end of the file.
The idea is to have a radio-station-type experience: when the user starts listening, the audio begins playing at wherever the file currently is in the live stream.
I have very limited knowledge with codecs, streaming types, and encoding. That said, I was able to upload my mp3 file to Azure Media Services, encode it using "AAC Good Quality Audio" and am able to play the audio clip. However, I want to enable streaming to ensure the experience I described above.
The last piece of this will be enabled through a responsive website, so I would like to enable the stream using HTML5 so it's playable on all devices that support it (desktop, mobile, tablet, etc.). Is there an HTML5/JavaScript player that is able to do this? Flash/Silverlight is not an option since it won't render on mobile or tablets.
If I can provide any further information please let me know. Most/all of the articles I see about live streaming are about video, and I'm struggling to find how to do this with audio. Any help would be greatly appreciated.
Thanks!
There is no such thing as live streaming a single audio file. Live streaming implies a live event: something that is happening and, while it is happening, is being streamed. It doesn't matter whether it is audio, audio + video, or video only.
Using only Azure Media Services you cannot achieve this goal. You need a process that plays the media on repeat and streams it to an Azure Media Services live streaming channel.
But that would be a rather expensive exercise! For your needs, a more cost-effective way would be to use a streaming server on a Linux VM, like http://icecast.org/
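To make "a process that plays the media on repeat and streams it" concrete, one possible shape for that process is an ffmpeg loop pushing to the channel's RTMP ingest (a sketch; the file name and ingest URL are placeholders):

    # Read the file at its native rate, loop it forever, and push it as an audio-only RTMP stream.
    ffmpeg -re -stream_loop -1 -i radio-show.mp3 \
      -vn -c:a aac -b:a 128k \
      -f flv rtmp://<channel-ingest-endpoint>/live/stream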
You can also send one file, and Azure can transcode it to various formats for you. I have a full list of tutorials I wrote on Azure Media Services:
Intro to HTML5 video
Intro to Azure Media Services, AES, and PlayReady DRM
Live streaming HTML5 video using Azure Media Services
Using Azure Blob Storage to store & serve your audio and video files
Use this Azure Media Player for streaming Media Service video to all devices
Uploading video to Azure Media Services
In terms of the encoding, Azure can do that for you. Give it one file type, and it can create multiple copies of various formats for you. I have a short post and a 10 minute video on how to do that.
There is also an Azure Media Player, and the selling point behind this is that it adapts the video stream based on the device it detects the player is running on, which is nice for pairing with the format changes listed above. So it saves you the trouble of having to write the fallback conditions yourself. (Ex: Running on an iOS device, so use HLS).
You can use any of this for audio as well. Your best bet is to set the loop attribute on the player, as both the HTML5 audio and video elements support it.
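For example, with a plain HTML5 audio element (the URL here is just a placeholder for wherever the encoded asset ends up):

    // Minimal sketch: play a single encoded audio asset and restart it when it ends.
    const player = new Audio('https://example-streaming-endpoint.net/my-show.mp3');
    player.loop = true;   // the HTML5 `loop` attribute/property
    player.play();        // may require a user gesture due to autoplay policies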
Let me know if you need more info.

icecast audio.js play without buffering

I have an Icecast setup running on a server. The clients that will be connecting to it are <audio> tags in web pages, either through HTML5 or Flash. I am currently using audio.js to achieve this (specifically, the Flash fallback).
The problem is, the audio is being played concurrently but separately with a stream of images. (It's a 10-fps jpeg stream.) I need the audio to match up as much as possible with the images. Unfortunately, the audio is sometimes as much as 7 seconds delayed before it starts playing.
Some information:
The image stream cannot be delayed to match the audio. The audio must speed up to match the images.
The icecast server config has <burst-on-connect> set to 0 to minimize latency.
There is essentially no lag when playing via VLC (perhaps a few hundred ms, which is acceptable).
Put another way, when viewing the images and playing the audio via vlc, everything is sufficiently aligned. Unfortunately, using VLC is not an option in the endgame.
Since VLC has no lag, that tells me that the web browser (Chrome, firefox, IE) is buffering the audio before playing it.
The question: How do I prevent the web browser from buffering the audio? I want it to play immediately as soon as it has anything available. I'm currently using audio.js, but other similar technologies are acceptable.
Additional information: I've set audio.js to autoplay and preload=none.
Thanks for your help!
A buffer is always necessary. Networks are packet switched. Data comes in chunks, not continuously. In fact, there are many buffers:
Capture buffer (at the sound card)
Codec buffer (codecs work on a chunk of samples at a time)
Network buffer to server
Server-side buffer (typically very large, 10+ seconds)
Network buffer to client
Client buffer (typically 2-3 seconds)
Client codec buffer
Client sound device buffer
Each buffer adds latency, as you have noticed. The only buffer you really have control over is the server-side buffer, which is configured by the <burst-on-connect> and <burst-size> settings. By making that burst larger, you can fill all downstream buffers very quickly, enabling an extremely fast start to playback. You have disabled it, which means the downstream buffers can only fill as fast as the data comes in from the encoder.
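For reference, the relevant part of icecast.xml looks roughly like this (values are illustrative):

    <!-- icecast.xml: enable the burst and give it a generous size (in bytes) -->
    <limits>
        <burst-on-connect>1</burst-on-connect>
        <burst-size>262144</burst-size>
    </limits>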
Client-side, you have absolutely no control over the buffering, nor should you. Clients are free to implement a codec in whatever way they choose. Some codecs can begin streaming right away, and others can't. Some devices have to re-sample your audio to fit within their playback, and others don't.
What it sounds like you really want to do is synchronize a video stream and an audio stream. For that, you should just be streaming a video stream to begin with; video formats are designed to keep audio and video in sync. Icecast even supports streaming video in a few formats.
