I'm currently working on the following:
On one computer, I have a browser with a white canvas that you can draw on.
On many other computers, you should be able to receive that canvas as a video stream. The plan would be to somehow convert the canvas surface to a video stream and send it via UDP to the other computers.
What I have achieved so far is that the canvas is redrawn on the other computers with node.js and socket.io (so I basically just send the drawing information, like the coordinates). I also use the canvas's captureStream() method to feed the canvas surface into a video tag. So "visually" it's working: I draw on one computer, and on the other computers I can set the video to fullscreen and it seems to work.
But that's not yet what I want and need. I need a real video stream, something I could then receive with MPV, for example. So the question is: how can I send the canvas surface as a UDP live video stream? Probably I would also need to pipe it through FFmpeg or something to transcode it.
I have read a lot so far, but haven't really figured out what to do...
I had a look at the MediaStream you get back from captureStream(), but that doesn't seem to help much, as getTracks() isn't working when capturing from a canvas.
Also, when it comes to WebRTC, I'm not sure if it would work here; isn't it P2P? Or can I somehow broadcast it and send packets to a UDP address? What I have read here is that it is not directly possible. But even if it were, what would I send then? So how can I send the canvas surface as a video?
So there are basically two questions: 1. What would I have to send, i.e. how can I turn the canvas into a video stream? 2. How can I send it as a stream to other clients?
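To make that second question more concrete, here is the kind of server-side plumbing I imagine (a completely untested sketch; all names, ports, and the multicast address are my own guesses): a small Node.js relay receives canvas frames over a WebSocket and pipes them into FFmpeg, which transcodes them and pushes an MPEG-TS stream over UDP.

// receive-and-stream.js -- hypothetical sketch; names and addresses are guesses
const WebSocket = require('ws');            // assumed dependency
const { spawn } = require('child_process');

// FFmpeg reads JPEG frames from stdin and pushes an MPEG-TS stream over UDP.
const ffmpeg = spawn('ffmpeg', [
    '-f', 'image2pipe',       // frames arrive as individual images on stdin
    '-framerate', '30',
    '-i', '-',
    '-c:v', 'libx264',
    '-preset', 'ultrafast',
    '-tune', 'zerolatency',
    '-f', 'mpegts',
    'udp://239.0.0.1:1234'    // placeholder multicast address/port
]);

// The browser would send canvas.toDataURL('image/jpeg') frames over a WebSocket.
const wss = new WebSocket.Server({ port: 8081 });
wss.on('connection', (socket) => {
    socket.on('message', (dataUrl) => {
        const base64 = String(dataUrl).split(',')[1];       // strip the "data:image/jpeg;base64," prefix
        ffmpeg.stdin.write(Buffer.from(base64, 'base64'));  // feed the JPEG bytes to FFmpeg
    });
});

If something like that works, the other computers could then just run mpv udp://239.0.0.1:1234 to pick up the stream. But I'm not sure this is the right approach at all.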
Any approaches or tips are welcome.
The timetocode.org site is an example of streaming from an HTML5 canvas (on the host computer) to a video element (on a client computer).
There's help in the "More on the demos" link on the main page. Read the topic on the multiplayer stuff there. But basically you just check the "Multiplayer" option, name a "room", connect to that room (that makes you the host of that room), follow one of the links to the client page, then connect the client to the room that you set up. You should shortly see the canvas video streaming out to the client.
It uses socket.io for signaling in establishing WebRTC (P2P) connections. Note that the client side sends mouse and keyboard data back to the host via a WebRTC datachannel.
Key parts of the host-side code for the video stream are the captureStream method of the canvas element,
var hostCanvas = document.getElementById('hostCanvas');
videoStream = hostCanvas.captureStream(); // optionally pass a frame rate, e.g. captureStream(60)
and the addTrack method of the WebRTC peer connection object,
pc.addTrack( videoStream.getVideoTracks()[0], videoStream);
and, in the client-side code, the ontrack handler that directs the stream to the srcObject of the video element:
pc.ontrack = function (evt) {
videoMirror.srcObject = evt.streams[0];
};
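For completeness, the socket.io signaling that glues these two ends together looks roughly like this (a minimal sketch; signalingSocket and the event names are placeholders of mine, not the actual timetocode code):

// Host side: create an offer and relay it through socket.io (placeholder event names).
const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

pc.onicecandidate = (evt) => {
    if (evt.candidate) signalingSocket.emit('ice-candidate', evt.candidate);
};

async function startCall() {
    pc.addTrack(videoStream.getVideoTracks()[0], videoStream);
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signalingSocket.emit('offer', pc.localDescription);
}

// Client side: answer the offer that arrives over socket.io.
signalingSocket.on('offer', async (offer) => {
    await pc.setRemoteDescription(offer);
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    signalingSocket.emit('answer', pc.localDescription);
});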
I set up a pretty simple audio call test utilizing WebRTC, based off of another one of my projects, a video chat (also using WebRTC). I thought it would be easy, but once I got it set up, the audio isn't played back to the user. Both peers receive the respective offer/answer SDP WebSocket event, and the SDP is present, but I cannot hear my voice echoed back at me when I talk or make any noise. There is nothing in my console (I catch all errors, too).
Is there a cause for this?
I based my code on Amir Sanni's video chat, located here. I basically just requested audio only instead of the full media stream, and deleted the lines where it added a video element.
You will need to play the audio back somehow. I would recommend using audio tags instead of video ones and hiding them with display: none.
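For example, something along these lines in the ontrack handler (a minimal sketch; pc is assumed to be your RTCPeerConnection):

pc.ontrack = (evt) => {
    // Create a hidden <audio> element and feed it the remote stream.
    const audio = document.createElement('audio');
    audio.style.display = 'none';
    audio.autoplay = true;
    audio.srcObject = evt.streams[0];
    document.body.appendChild(audio);
};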
I am trying to capture an HTML5 canvas that has drawings on it using the captureStream API, and play it using an HTML5 video tag.
The problem I am facing is that when I capture the stream and play it in a local video tag, it plays back exactly the same. But when I send that stream to another peer (WebRTC streaming using the Licode MCU) and play it there, it gets played with a black background, i.e. the video is not transparent anymore. Has anyone encountered this before?
What could be the issue?
Is it an issue with the WebRTC channel, maybe it's not able to handle transparent pixels?
OR
Could it be something to do with the media server? Or something else?
It sounds like you're sending your canvas as video data. WebRTC usually uses either VP8 or H264 to transmit video, and neither supports alpha channels. So if you want to send it as video, it is not possible to keep the transparency.
You could, however, send it using the data channel part of WebRTC. You'd have to serialize and deserialize it yourself, but since it's just transmitting bytes, you can keep your alpha channel.
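A rough sketch of that idea (untested; the channel name and canvas variables are placeholders): grab the RGBA pixels from the canvas, push them over a data channel, and paint them back with putImageData on the receiver.

// Sender: ship raw RGBA pixels (alpha included) over a data channel.
const channel = pc.createDataChannel('canvas-frames');
const ctx = canvas.getContext('2d');

function sendFrame() {
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    channel.send(frame.data.buffer);   // ArrayBuffer of RGBA bytes
    requestAnimationFrame(sendFrame);
}
channel.onopen = sendFrame;

// Receiver: paint the bytes back, keeping the alpha channel.
pc.ondatachannel = (evt) => {
    const remoteCtx = remoteCanvas.getContext('2d');
    evt.channel.binaryType = 'arraybuffer';
    evt.channel.onmessage = (msg) => {
        const pixels = new Uint8ClampedArray(msg.data);
        remoteCtx.putImageData(new ImageData(pixels, remoteCanvas.width, remoteCanvas.height), 0, 0);
    };
};

Keep in mind that an uncompressed RGBA frame for anything but a tiny canvas will exceed the data channel's per-message size limits, so in practice you would also need to compress and chunk the frames.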
I am using WebRTC for peer-to-peer video communication, and I would like to apply video filters to local webcam video before sending it to a remote peer.
The approach that I am considering is to send the local webcam video to a canvas element, where I will apply javascript filters to the video. Then I would like to stream the video from the canvas element to the peer using WebRTC. However, it is not clear to me if this is possible.
Is it possible to stream video from a canvas element using WebRTC? If so, how can this be done? Alternatively, are there any other approaches that I might consider to accomplish my objective?
It's April 2020; you can achieve this with the canvas.captureStream() method.
There is an excellent article on how to use it, along with several demos on GitHub. See the following links:
Capture Stream
Stream from a canvas element to peer connection
So, basically, you can apply all the transformations on the canvas and stream from the canvas to the remote peer.
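As a minimal sketch of that pipeline (the element IDs, frame rate, and filter are arbitrary choices of mine): draw the webcam video onto a canvas with a filter applied, then stream the canvas instead of the raw camera.

const video = document.getElementById('localVideo');    // hidden <video> that plays the raw camera
const canvas = document.getElementById('filterCanvas');
const ctx = canvas.getContext('2d');

navigator.mediaDevices.getUserMedia({ video: true }).then((camStream) => {
    video.srcObject = camStream;
    video.play();
    drawFrame();
});

function drawFrame() {
    ctx.filter = 'grayscale(100%)';   // any 2D-context filter
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(drawFrame);
}

// Send the filtered canvas, not the raw camera, to the peer.
const filteredStream = canvas.captureStream(30);   // 30 fps
filteredStream.getTracks().forEach((track) => pc.addTrack(track, filteredStream));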
My solution would be: send the normal stream to the peer, and also transmit how it has to be modified. Then, on the other side, instead of showing it in a video element directly (play the video and hide the element), you keep drawing it onto a canvas (after processing) with setTimeout/requestAnimationFrame, as in the sketch below.
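A minimal receiver-side sketch of that approach (hiddenVideo, the canvas context, and applyRequestedFilter are placeholders I made up, not real APIs):

// Play the incoming stream in a hidden <video>, then keep re-drawing it
// onto a visible canvas, applying whatever processing the sender requested.
pc.ontrack = (evt) => {
    hiddenVideo.srcObject = evt.streams[0];   // <video style="display:none">
    hiddenVideo.play();
    requestAnimationFrame(paint);
};

function paint() {
    ctx.drawImage(hiddenVideo, 0, 0, canvas.width, canvas.height);
    applyRequestedFilter(ctx);                // hypothetical: applies the modifications sent alongside the stream
    requestAnimationFrame(paint);
}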
mozCaptureStreamUntilEnded is supported in Firefox, but the resulting stream can't be attached to a peer connection.
Playing over <canvas> is easier; however, streaming media from a <video> element requires the Media Processing API (capture-stream-until-ended) along with RTCPeerConnection (with full feature support).
We can get images from a <canvas>; however, I'm not sure if we can generate a MediaStream from a <canvas>.
So, mozCaptureStreamUntilEnded is useful only with pre-recorded media streaming.
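For reference, the Firefox-only call looks roughly like this (a sketch; as noted above, the resulting stream could not be attached to a peer connection at the time):

// Firefox-only: capture a MediaStream from a pre-recorded <video> element.
const sourceVideo = document.getElementById('prerecordedVideo');
const stream = sourceVideo.mozCaptureStreamUntilEnded();

// Playing it back locally in another element works...
const mirror = document.getElementById('mirrorVideo');
mirror.srcObject = stream;
mirror.play();

// ...but adding its tracks to an RTCPeerConnection did not.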
I'm currently experiencing a problem where a client who has audio but no video can't receive the remote client's video (even though the remote client is capturing both audio and video).
Video and audio constraints are set to true on both clients. The application runs correctly if both clients have audio and video.
Does anyone know a solution for this?
Simply make sure that the client who has audio/video creates the offer, and the other client creates the answer. Then it will be one-way streaming, and it will work!
// The client that captures media creates the offer...
userWhoHasMedia.createOffer(sdp_success_callback, sdp_failure_callback, sdp_constraints);
// ...and the client without media creates the answer.
userWhoDontHaveMedia.createAnswer(sdp_success_callback, sdp_failure_callback, sdp_constraints);
Also, if you want, you can set "OfferToReceiveAudio" and "OfferToReceiveVideo" to false for the client that doesn't capture media. Though it is useless in your case, because the non-media client is the receiver.
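For reference, the sdp_constraints used with the legacy callback API above look roughly like this (a sketch; in the current promise-based API the equivalent options are offerToReceiveAudio/offerToReceiveVideo on RTCOfferOptions):

// Legacy (callback-style) SDP constraints. A client that only wants to SEND
// could set both to false; a client that needs to RECEIVE the remote media
// must leave them true.
var sdp_constraints = {
    mandatory: {
        OfferToReceiveAudio: true,
        OfferToReceiveVideo: true
    }
};

userWhoHasMedia.createOffer(sdp_success_callback, sdp_failure_callback, sdp_constraints);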
I'm currently working on an interactive web application in JavaScript that renders, in real time, a video received on a webpage, and lets you send keyboard inputs.
The fact is that I can only receive VP8 video streams (not WebM, just raw VP8 video without the Matroska container). I've managed to decode the video on the client side using the Dixie decoder (https://github.com/dominikhlbg/vp8-webm-javascript-decoder/), but the problem is that it adds buffering or something, because there is a lag of almost 2 seconds between when I receive the stream and when I render it. Is there a way I can decode the stream natively? That would speed up the performance.
I thought of adding a Matroska container to the received VP8 stream and feeding it to the video tag, but I don't know how to create such a container.
OK, after days of trying to figure out how to solve this, I finally found the bug. It wasn't in the Dixie decoder; it was the server, which needed a flag to stop buffering the video.