Stream video from a canvas element using WebRTC - javascript

I am using WebRTC for peer-to-peer video communication, and I would like to apply video filters to local webcam video before sending it to a remote peer.
The approach I am considering is to send the local webcam video to a canvas element, where I would apply JavaScript filters to the video. I would then like to stream the video from the canvas element to the peer using WebRTC, but it is not clear to me whether this is possible.
Is it possible to stream video from a canvas element using WebRTC? If so, how can this be done? Alternatively, are there any other approaches that I might consider to accomplish my objective?

It's April 2020; you can achieve this with the canvas.captureStream() method.
There is an excellent article on how to use it, along with several demos on GitHub. See the following links:
Capture Stream
Stream from a canvas element to peer connection
So, basically, you can apply all the transformations on the canvas and stream from the canvas to the remote peer.
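A minimal sketch of that flow, assuming your own element IDs and a placeholder applyFilter function: draw the webcam into the canvas, filter each frame, then send the captured canvas stream to the peer.

const video = document.createElement('video');
const canvas = document.getElementById('filterCanvas'); // assumed canvas element
const ctx = canvas.getContext('2d');

navigator.mediaDevices.getUserMedia({ video: true }).then((webcamStream) => {
  video.srcObject = webcamStream;
  video.play();

  function drawFrame() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    // applyFilter(ctx); // placeholder for your own pixel manipulation
    requestAnimationFrame(drawFrame);
  }
  drawFrame();

  // Stream the filtered canvas instead of the raw webcam track.
  const filteredStream = canvas.captureStream(30); // 30 fps
  const pc = new RTCPeerConnection();
  filteredStream.getVideoTracks().forEach((track) => pc.addTrack(track, filteredStream));
  // ...then proceed with the usual offer/answer signalling.
});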

My solution would be: send the normal stream to the peer, and also transmit how it has to be modified. On the other side, instead of showing it in a video element directly (play the video but hide the element), you would keep drawing the processed frames onto a canvas with setTimeout/requestAnimationFrame, roughly as sketched below.
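A rough sketch of the receiving side under that approach (pc is assumed to be an already established RTCPeerConnection, and modifyFrame is a placeholder for whatever transformation was signalled):

pc.ontrack = (evt) => {
  const hiddenVideo = document.createElement('video');
  hiddenVideo.srcObject = evt.streams[0]; // play the incoming stream, but never show this element
  hiddenVideo.play();

  const canvas = document.getElementById('remoteCanvas'); // assumed element
  const ctx = canvas.getContext('2d');

  function render() {
    ctx.drawImage(hiddenVideo, 0, 0, canvas.width, canvas.height);
    // modifyFrame(ctx); // placeholder: apply the transformation the sender described
    requestAnimationFrame(render);
  }
  render();
};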

mozCaptureStreamUntilEnded is supported in Firefox, but the resulting stream can't be attached to a peer connection.
Playing over <canvas> is easier; however, streaming media from a <video> element requires the Media Processing API (capture-stream-until-ended) along with RTCPeerConnection (with full feature support).
We can get images from a <canvas>, but I'm not sure whether we can generate a MediaStream from a <canvas>.
So mozCaptureStreamUntilEnded is useful only with pre-recorded media streaming.
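For what it's worth, the pre-recorded case looks roughly like this; it is a Firefox-only API and largely superseded by captureStream(), so treat it as illustration only:

const recorded = document.getElementById('recordedVideo'); // assumed <video> with a file source
const stream = recorded.mozCaptureStreamUntilEnded(); // captures until playback ends
// Per the answer above, attaching this stream to an RTCPeerConnection did not work at the time.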

Related

Build a volume meter for an HLS video managed with Hls.js

I am using Hls.js to manage a video in my HTML page. I need to build a volume meter that informs the user about the audio level of the video. Since I need to keep video.muted = true, I am wondering whether there is any way with Hls.js to extract the audio information from the stream and build a volume meter from it. The goal is to give users feedback without having the video's volume on.
You can do this easily with the Web Audio API.
Specifically, you'll want a couple nodes:
MediaElementAudioSourceNode: you will use this to route the audio from your media element (i.e. the video element HLS.js is playing in) to the audio graph.
AnalyserNode: this node analyzes the audio in chunks, giving you frequency data (via FFT) and time-domain data. The time-domain data is simplified from the main stream. You can run a min/max on it to get a value (generally -1.0 to +1.0). You can use that value in your visualization.
You also need to connect the AnalyserNode to the AudioContext's destination to output the audio in the end, since it will be re-routed away from the video element.
Note that this solution isn't particular to HLS. The same method works on any audio/video element, provided that the source data isn't tainted by cross-origin restrictions. Given how HLS.js works, you won't have to worry about that, since the CORS problem is already solved or it wouldn't work at all.
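A hedged sketch of that node graph; the element ID is an assumption, and the meter value here is just a peak over the latest time-domain chunk:

const audioCtx = new AudioContext();
const videoEl = document.getElementById('hlsVideo'); // the element Hls.js is attached to
const source = audioCtx.createMediaElementSource(videoEl);
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;

source.connect(analyser);
analyser.connect(audioCtx.destination); // re-route the audio back out

const samples = new Float32Array(analyser.fftSize);
function meter() {
  analyser.getFloatTimeDomainData(samples);
  let peak = 0;
  for (const s of samples) peak = Math.max(peak, Math.abs(s));
  // 'peak' is roughly 0.0 to 1.0; drive your meter UI with it
  requestAnimationFrame(meter);
}
meter();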

Send Canvas to UDP Multicast address - multicast canvas live stream

I'm currently working on the following:
On one computer, I have a browser with a white canvas, where you can draw in.
On many other computers, you should be able to receive that canvas as a video stream. The plan would be to somehow convert the canvas surface to a video stream and send it via UDP to the other computers.
What I have achieved so far is that the canvas is redrawn on other computers with node.js and socket.io (so I basically just send the drawing information, like the coordinates). I also use the canvas captureStream() method to show the canvas surface in a video tag. So "visually" it's working: I draw on one computer, and on the other computers I can just set the video to fullscreen and it seems to work.
But that's not yet what I want and need. I need it as a real video stream, so that it can be received with MPV, for example. So the question is: how can I send the canvas surface as a UDP live video stream? Probably I would also need to send it through FFmpeg or something to transcode it.
I have read a lot so far, but basically haven't completely figured out what to do...
I had a look at the MediaStream you get back from captureStream(), but that doesn't seem to help a lot, as getTracks() isn't working when capturing from a canvas.
Also, when talking about WebRTC, I'm not sure if it works here; isn't it P2P? Or can I somehow broadcast it and send packets to a UDP address? What I read here
is that it is not directly possible. But even if it were, what would I send then? So how can I send the canvas surface as a video?
So there are basically two questions: 1. What would I have to send, i.e. how can I get the canvas into a video stream, and 2. How can I send it as a stream to other clients?
Any approaches or tips are welcome.
The timetocode.org site is an example of streaming from an HTML5 canvas (on the host computer) to a video element (on a client computer).
There's help in the "More on the demos" link on the main page; read the topic on the multiplayer features there. But basically you just check the "Multiplayer" option, name a "room", connect to that room (which makes you the host of that room), follow one of the links to the client page, then connect the client to the room that you set up. You should shortly see the canvas video streaming out to the client.
It uses socket.io for signaling in establishing WebRTC (P2P) connections. Note that the client side sends mouse and keyboard data back to the host via a WebRTC datachannel.
Key parts of the host-side code for the video stream are the captureStream method of the canvas element,
var hostCanvas = document.getElementById('hostCanvas');
videoStream = hostCanvas.captureStream(); // optionally pass a frame rate, e.g. captureStream(60)
and the addTrack method of the WebRTC peer connection object,
pc.addTrack(videoStream.getVideoTracks()[0], videoStream);
and on the client-side code, the ontrack handler that directs the stream to the srcObject of the video element:
pc.ontrack = function (evt) {
  videoMirror.srcObject = evt.streams[0];
};

How do I play a stream of H.264 NAL units in a video tag with MSE?

The situation is pretty straight-forward; I am receiving a stream of NAL units via WebSockets. How do I feed them into an HTML5 video tag using MSE?
Research indicates that I should mux the data into a fragmented mp4, but I haven't found any specifics on how to accomplish that. Does anyone have specifics?
If you receive stream data such as HLS or H.264 NAL units, you can transmux it into fragmented MP4. Then set up the HTML5 video tag with MSE: create a MediaSource, call mediaSource.addSourceBuffer, and feed segments with sourceBuffer.appendBuffer. The video will play as long as valid fMP4 segments keep being fed into the buffer.
You may check out https://github.com/ChihChengYang/wfs.js which demonstrates transmuxing H.264 NAL unit streams from a WebSocket. It works directly on top of a standard HTML5 <video> element and MSE.
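A generic sketch of the MSE side (not wfs.js-specific): the WebSocket URL and the mime/codec string are assumptions, and the codec string has to match whatever your muxer actually produces.

const videoEl = document.querySelector('video');
const mediaSource = new MediaSource();
videoEl.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  const queue = [];

  // appendBuffer is asynchronous, so queue segments while the buffer is busy.
  sourceBuffer.addEventListener('updateend', () => {
    if (queue.length && !sourceBuffer.updating) sourceBuffer.appendBuffer(queue.shift());
  });

  const ws = new WebSocket('wss://example.com/stream'); // placeholder URL
  ws.binaryType = 'arraybuffer';
  ws.onmessage = (msg) => {
    // msg.data is assumed to already be a fragmented-MP4 segment (i.e. after transmuxing the NAL units)
    if (sourceBuffer.updating || queue.length) queue.push(msg.data);
    else sourceBuffer.appendBuffer(msg.data);
  };
});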

manipulating binaryjs video with node-canvas

Is it possible to modify a live video stream with node-canvas?
I'm using BinaryJS to send binary frames over a web socket, and I want to modify the video live with node-canvas.
This is the code I'm using: Webcam Binary.JS Demo
and this is the library I want to use to modify the video with a canvas: node-canvas
Is that possible? If it is not, what is the best solution to combine multiple video streams (with Node.js) into one video?

Streaming video with transparent pixels using webrtc

I am trying to capture an HTML5 canvas that has drawings on it using the captureStream API and play it with an HTML5 video tag.
The problem I am facing is that when I capture the stream and play it locally in a video tag, it plays exactly the same. But when I send that stream to another peer (WebRTC streaming using the Licode MCU) and play it there,
it gets played with a black background, i.e. the video is not transparent anymore. Has anyone encountered this before?
What could be the issue:
Is it the issue with the webrtc channel, may be its not able to handle transparent pixels?
OR
It can be something to do with the media server? Or something else?
It sounds like you're sending your canvas as video data. WebRTC usually uses either VP8 or H.264 to transmit video, and neither supports alpha channels. So if you send it as video, you cannot keep the transparency.
You could, however, send it using the data channel part of WebRTC. You'd have to serialize and deserialize it yourself, but since it's just transmitting bytes, you can keep your alpha channel.
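A rough sketch of that data-channel alternative, assuming an established RTCPeerConnection pc, a source canvas context ctx, and frame dimensions agreed on both sides; raw RGBA frames are large, so real code would compress and chunk them.

const width = 640, height = 480; // assumed dimensions, must match on both peers

// Sender side
const dc = pc.createDataChannel('canvas-frames');
function sendFrame() {
  const pixels = ctx.getImageData(0, 0, width, height); // RGBA, alpha preserved
  dc.send(pixels.data.buffer);                          // just bytes; serialize however you like
}

// Receiver side: the channel arrives via ondatachannel
const remoteCtx = document.getElementById('remoteCanvas').getContext('2d'); // assumed transparent canvas
pc.ondatachannel = (evt) => {
  evt.channel.binaryType = 'arraybuffer';
  evt.channel.onmessage = (msg) => {
    const data = new Uint8ClampedArray(msg.data);
    remoteCtx.putImageData(new ImageData(data, width, height), 0, 0);
  };
};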
