No remote video if local stream has no video (WebRTC / JavaScript)

I'm currently experiencing a problem where a client who has audio but no video can't receive the remote client's video (even though the remote client is capturing both audio and video).
Video and audio constraints are set to true on both clients. The application runs correctly if both clients have audio and video.
Does anyone know a solution for this?

Simply make sure that the client who has audio/video creates the offer, and the other client creates the answer. Then it will be one-way streaming, and it will work!
userWhoHasMedia.createOffer(sdp_success_callback, sdp_failure_callback, sdp_constraints);
userWhoDoesntHaveMedia.createAnswer(sdp_success_callback, sdp_failure_callback, sdp_constraints);
Also, if you want, you can set offerToReceiveAudio and offerToReceiveVideo to false for the client that doesn't capture media. Though it is useless in your case, because the non-media client is the receiver.
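In the modern promise-based API, those flags go in an RTCOfferOptions dictionary passed to createOffer(). A minimal sketch of picking them for a one-way call (the helper name offerOptionsFor is hypothetical, not from the answer):

```javascript
// Hypothetical helper: choose RTCOfferOptions for a one-way call.
// A client that captures media and only sends doesn't need to receive;
// a client with no media of its own still wants the remote tracks.
function offerOptionsFor(capturesMedia) {
  return capturesMedia
    ? { offerToReceiveAudio: false, offerToReceiveVideo: false }
    : { offerToReceiveAudio: true, offerToReceiveVideo: true };
}

// Browser-only usage (pc is an RTCPeerConnection on the capturing client):
// const offer = await pc.createOffer(offerOptionsFor(true));
// await pc.setLocalDescription(offer);
```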

Related

Microphone Input with JavaScript

I have seen a lot of questions and articles about getting user's microphone input, but what I want is actually the opposite.
Is it possible to send a sound through the microphone as if the user themselves had spoken? It would be something like Soundpad, using JS.
Here's an idea:
When the user wants a MediaStream with the microphone audio, they make a call to navigator.getUserMedia({video:false, audio:true});. We can redefine navigator.getUserMedia to our own function (keeping the original in a separate global variable so we can still get the microphone data) that will return a MediaStream which plays audio from a file. We can even return a combined MediaStream that will combine audio from the microphone and a file using the Web Audio API to do the combining.
I've been trying to do this with video so I can replace my video in Google Meet, but Google Meet seems to automatically do things to the MediaStream (like muting and pausing) that I haven't handled, so that project doesn't work yet. Google Meet is very secure, so that might be the problem, but I think this trick might work for you!
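The redefinition idea can be sketched as below. wrapGetUserMedia and buildFakeStream are hypothetical names; in a real browser, buildFakeStream would use the Web Audio API (an AudioContext routing a file, and optionally the real microphone, into a MediaStreamAudioDestinationNode) and return destination.stream.

```javascript
// Sketch of the "wrap getUserMedia" idea. Keeps the original function
// around so real microphone capture is still possible.
function wrapGetUserMedia(mediaDevices, buildFakeStream) {
  const original = mediaDevices.getUserMedia.bind(mediaDevices);
  mediaDevices.getUserMedia = async function (constraints) {
    if (!constraints || !constraints.audio) {
      return original(constraints); // leave non-audio requests untouched
    }
    // In a browser, buildFakeStream would create an AudioContext, mix the
    // file (and optionally the mic stream from `original`) into a
    // MediaStreamAudioDestinationNode, and return its .stream.
    return buildFakeStream(original, constraints);
  };
  return original; // handle to the unwrapped function
}

// Browser-only usage:
// wrapGetUserMedia(navigator.mediaDevices, mixFileWithMic);
```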

Can't hear peer's voice in a simple audio only WebRTC call with Node.js (WebSockets) and Javascript (WebRTC)

I set up a pretty simple audio call test utilizing WebRTC, based off of another one of my projects, a video chat (also using WebRTC). I thought it would be easy, but once I got it set up, the audio isn't played to the user. Both peers receive the respective offer/answer SDP WebSocket events, and the SDP is present, but I cannot hear my voice echoed back at me when I talk or make any noise. There is nothing in my console (I catch all errors, too).
Is there a cause for this?
I based my code off of Amir Sanni's video chat located here. I basically just requested audio only from getUserMedia instead of audio and video, and deleted the lines where it added a video.
You will need to play the audio back somehow. I would recommend using audio tags instead of video ones and hiding them with display: none.
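A minimal sketch of that, assuming a standard RTCPeerConnection setup; attachRemoteAudio is a hypothetical helper name:

```javascript
// Hypothetical helper: given a document and a remote MediaStream, create a
// hidden, autoplaying audio element and attach the stream to it.
function attachRemoteAudio(doc, stream) {
  const audio = doc.createElement('audio');
  audio.autoplay = true;          // start playback as soon as data arrives
  audio.style.display = 'none';   // keep default controls out of the layout
  audio.srcObject = stream;       // WebRTC streams attach via srcObject, not src
  doc.body.appendChild(audio);
  return audio;
}

// Browser-only usage, in the peer connection's track handler:
// pc.ontrack = (evt) => attachRemoteAudio(document, evt.streams[0]);
```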

Build a volume meter for an HLS video managed with Hls.js

I am using Hls.js to manage a video in my HTML page. I need to build a volume meter that informs the user about the audio level of the video. Since I need to keep video.muted = true, I am wondering if there is any way with Hls.js to extract the audio information from the stream and build a volume meter from it. The goal is to give users feedback without having the video's volume on.
You can do this easily with the Web Audio API.
Specifically, you'll want a couple nodes:
MediaElementAudioSourceNode — You will use this to route the audio from your media element (i.e. the video element HLS.js is playing in) into the audio graph.
AnalyserNode — This node analyzes the audio in chunks, giving you frequency-domain data (via FFT) and time-domain data. The time-domain data is what you want for a meter: you can run a min/max over it to get a value (generally between -1.0 and +1.0) and use that value in your visualization.
You also need to connect the AnalyserNode to the AudioContext's destination to output the audio in the end, since it has been re-routed away from the video element.
Note that this solution isn't particular to HLS. The same method works on any audio/video element, provided that the source data isn't tainted by cross-origin restrictions. Given how HLS.js works, you won't have to worry about that, since the CORS problem is already solved or it wouldn't work at all.
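A sketch of that graph, assuming HLS.js is already attached to the video element; the element ids and the meter element are hypothetical. The peakLevel helper reduces one analyser frame to a 0-to-1 value:

```javascript
// Reduce one frame of time-domain data to a simple peak-meter value.
// getFloatTimeDomainData fills the array with samples in [-1, 1];
// the largest absolute sample in the frame is the peak level.
function peakLevel(timeDomainData) {
  let peak = 0;
  for (const sample of timeDomainData) {
    peak = Math.max(peak, Math.abs(sample));
  }
  return peak;
}

// Browser-only wiring (element ids are assumptions):
// const ctx = new AudioContext();
// const source = ctx.createMediaElementSource(document.getElementById('video'));
// const analyser = ctx.createAnalyser();
// source.connect(analyser);
// analyser.connect(ctx.destination); // audio must still reach the output
// const frame = new Float32Array(analyser.fftSize);
// (function draw() {
//   analyser.getFloatTimeDomainData(frame);
//   meterElement.style.width = (peakLevel(frame) * 100) + '%'; // hypothetical meter
//   requestAnimationFrame(draw);
// })();
```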

Send Canvas to UDP Multicast adress - multicast canvas live stream

I'm currently working on the following:
On one computer, I have a browser with a white canvas, where you can draw in.
On many other computers, you should be able to receive that canvas as a video stream. The plan would be to somehow convert the canvas surface to a video stream and send it via UDP to other computers.
What I've achieved so far is that the canvas is redrawn on the other computers with Node.js and socket.io (so I basically just send the drawing information, like the coordinates). I also use WebRTC's captureStream() method to turn the canvas surface into a stream for a video tag. So "visually" it's working: I draw on one computer, and on the other computers I can just set the video to fullscreen and it seems to work.
But that's not yet what I want and need. I need it as a real video stream, so I can then receive it with e.g. MPV. So the question is: how can I send the canvas surface as a UDP live video stream? Probably I would also need to send it through FFmpeg or something to transcode it.
I read a lot so far, but basically didn't completely figure out what to do...
I had a look at the MediaStream you get back from captureStream(), but that doesn't seem to help a lot, as getTracks() isn't working when capturing from a canvas.
Also, when talking about WebRTC, I'm not sure if it's workable; isn't it P2P? Or can I somehow broadcast it and send packets to a UDP address? What I read here
is that it is not directly possible. But even if it were, what should I send then? So how can I send the canvas surface as a video?
So there are basically two questions: 1. What would I have to send, i.e. how can I get the canvas into a video stream? And 2. How can I send it as a stream to other clients?
Any approaches or tips are welcome.
The timetocode.org site is an example of streaming from an HTML5 canvas (on the host computer) to a video element (on a client computer).
There's help in the "More on the demos" link on the main page. Read the topic on the multiplayer stuff there. But basically you just check the "Multiplayer" option, name a "room", connect to that room (which makes you the host of that room), follow one of the links to the client page, then connect the client to the room you set up. You should shortly see the canvas video streaming out to the client.
It uses socket.io for signaling in establishing WebRTC (P2P) connections. Note that the client side sends mouse and keyboard data back to the host via a WebRTC datachannel.
Key parts of the host-side code for the video stream are the captureStream method of the canvas element,
var hostCanvas = document.getElementById('hostCanvas');
videoStream = hostCanvas.captureStream(); // optionally pass a frame rate, e.g. captureStream(60)
and the addTrack method of the WebRTC peer connection object,
pc.addTrack(videoStream.getVideoTracks()[0], videoStream);
and on the client-side code, the ontrack handler that directs the stream to the srcObject of the video element:
pc.ontrack = function (evt) {
  videoMirror.srcObject = evt.streams[0];
};
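Put together, the host and client pieces above amount to something like the following sketch (signaling and ICE exchange omitted; the function names are illustrative, not from timetocode.org):

```javascript
// Host side: capture the canvas and add its video track to the peer connection.
function streamCanvasTo(pc, canvas, frameRate) {
  const stream = canvas.captureStream(frameRate); // e.g. 60 fps
  pc.addTrack(stream.getVideoTracks()[0], stream);
  return stream;
}

// Client side: when the remote track arrives, show it in a video element.
function showRemoteStream(pc, videoElement) {
  pc.ontrack = function (evt) {
    videoElement.srcObject = evt.streams[0];
  };
}

// Browser-only usage, after the usual RTCPeerConnection + signaling setup:
// streamCanvasTo(hostPc, document.getElementById('hostCanvas'), 60);
// showRemoteStream(clientPc, document.getElementById('videoMirror'));
```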

Web Audio API and Audio Download and Protection

I'm reading a book about Web Audio API.
In the book it states that to load and play a sound using the Web Audio API, there are 4 steps that need to be taken:
1.) Load the sound file with XHR and decode it. (You will end up with a 'buffer'.)
2.) Connect the buffer to audio effects nodes.
3.) To hear the sound, connect the last node in the effects chain to the destination.
4.) Start the sound.
My question is: given these 4 steps, is there a way for a user of a website that uses the Web Audio API to download the audio played on the website?
If so, how does one prevent this?
Or does its being 'buffered' prevent it from being illegally downloaded?
I would like to find a way to protect the audio files I use inside my game/app that I put up on the webpage and play with the Web Audio API.
Thank you.
EASILY save it? No. But: 1) if it's being transferred as an MP3 or similar file, the user can go into their browser cache and copy it; there's no inherent DRM or anything. 2) Even if the sound were generated completely from scratch (e.g. mathematically), the user could use a virtual audio device like Soundflower to save the output.
So no, it's not really possible to prevent the user from saving audio files.
