I've got the following code, which loops through all videoinput devices on a machine and should display a stream from each input device.
$.each(devices, function (index, value) {
  if (value.kind == 'videoinput') {
    console.log(value);
    navigator.mediaDevices.getUserMedia({video: { exact: value.deviceId }}).then(function (stream) {
      console.log(stream);
      var video = document.createElement('video');
      video.srcObject = stream;
      video.autoplay = true;
      var elem = '\
        <div>\
          <div class="view_camera_' + index + ' uk-card uk-card-default uk-card-body uk-card-small"></div>\
        </div>\
      ';
      outputs.append(elem);
      $('.view_camera_' + index).append(video);
    }).catch(function (err) {
      console.log(err);
    });
  }
});
Notice in my selector, I used {video: { exact: value.deviceId }}, which, according to the documentation, should "require the specific camera".
I was originally using { video: { deviceId: value.deviceId } } which actually worked the way I wanted it to, but the documentation says "The above [using deviceId instead of exact] will return the camera you requested, or a different camera if that specific camera is no longer available". I do not want it to "return a different camera if that specific camera is no longer available", so I switched over to using the exact keyword.
The problem is that this is not working properly. Even though I am passing 2 different deviceIds, it is creating 2 separate streams for the same device.
Here is a picture of my console logs from when the function is running. You can see that there are 2 camera devices with different deviceIds, and that 2 different streams are created; however, the 2 video streams displayed on the page are from the same camera.
Why is getUserMedia with the exact keyword creating 2 separate but identical streams from the same camera, instead of 2 separate streams from the 2 separate cameras?
Your selector format is wrong. You are missing the deviceId key in between.
It should be: { video: { deviceId: { exact: value.deviceId } } }
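For reference, a minimal sketch of the corrected shape (the buildConstraints helper name is just for illustration). With exact, getUserMedia rejects with an OverconstrainedError instead of silently falling back to another camera:

```javascript
// The `exact` clause must be nested under the `deviceId` key of the
// video constraints, not placed directly under `video`.
function buildConstraints(deviceId) {
  return { video: { deviceId: { exact: deviceId } } };
}

// Inside the question's loop (browser only):
// navigator.mediaDevices.getUserMedia(buildConstraints(value.deviceId))
//   .then(stream => { /* attach the stream to a <video> element */ })
//   .catch(err => console.log(err)); // OverconstrainedError if the device is gone
```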
I am trying to create a web application with some video chat functionality, but trying to get it to work on mobile (specifically, Chrome for iOS) is giving me fits.
What I would like to do is have users be able to join a game, and join a team within that game. There are two tabs on the page for players - a "Team" tab and a "Game" tab. When the player selects the game tab, they may talk to all participants in the entire game (e.g. to ask the host/moderator a question). When the team tab is selected, the player's stream to the game is muted, and only the player's team can hear them talk. As a result, I believe I need two MediaStream objects for each player - one to stream to the game, and one to stream to the player's team - this way, I can mute one while keeping the other unmuted.
There is an iOS quirk where you can only call the getUserMedia() function once, so I need to clone the stream using MediaStream.clone(). addVideoStream is a function that just adds the video to the appropriate grid of videos, and it appears to work properly.
The problem is - when I use my iPhone 12 to connect to the game, I can see my video just fine, but when I click over to the "game" tab and look at the second stream, the stream works for a second, and then freezes. The weird thing is, if I open a new tab in Chrome, and then go back to the game tab, both videos seem to run smoothly.
Has anyone ever tried something similar, and figured out why this behavior occurs?
const myPeer = new Peer(undefined);
myPeer.on('open', (userId) => {
  myUserId = userId;
  console.log(`UserId: ${myUserId}`);
  socket.emit('set-peer-id', {
    id: userId,
  });
});

const myVideo = document.createElement('video');
myVideo.setAttribute('playsinline', true);
myVideo.muted = true;

const myTeamVideo = document.createElement('video');
myTeamVideo.setAttribute('playsinline', true);
myTeamVideo.muted = true;

const myStream =
  // (navigator.mediaDevices ? navigator.mediaDevices.getUserMedia : undefined) ||
  navigator.mediaDevices ||
  navigator.webkitGetUserMedia ||
  navigator.mozGetUserMedia;

let myVideoStream;
let myTeamStream;

if (myStream) {
  myStream
    .getUserMedia({
      video: true,
      audio: true,
    })
    .then((stream) => {
      myVideoStream = stream;
      myTeamStream = stream.clone();
      addVideoStream(myTeamVideo, myTeamStream, myUserId, teamVideoGrid);
      addVideoStream(myVideo, myVideoStream, myUserId, videoGrid);
      myPeer.on('call', (call) => {
        call.answer(stream);
        const video = document.createElement('video');
        video.setAttribute('playsinline', true);
        call.on('stream', (userVideoStream) => {
          const teammate = teammates.find((t) => {
            return t.peerId === call.peer;
          });
          if (teammate) {
            addVideoStream(
              video,
              userVideoStream,
              call.peer,
              teamVideoGrid,
              teammate.name
            );
          } else {
            addVideoStream(video, userVideoStream, call.peer, videoGrid);
          }
        });
        call.on('close', () => {
          console.log(`Call with ${call.peer} closed`);
        });
      });
      socket.on('player-joined', (data) => {
        addMessage({
          name: 'System',
          isHost: false,
          message: `${data.name} has joined the game.`,
        });
        if (data.id !== myUserId) {
          if (data.teamId !== teamId) {
            connectToNewUser(data.peerId, myVideoStream, videoGrid, data.name);
          } else {
            connectToNewUser(
              data.peerId,
              myTeamStream,
              teamVideoGrid,
              data.name
            );
          }
        }
      });
    });
}
I am trying to get my laptop's speaker level shown in my application. I am new to WebRTC and the Web Audio API, so I just wanted to confirm whether this feature is possible. The application is an Electron application and has a calling feature, so when the user at the other end of the call speaks, the application should display an output level that varies according to the sound. I have tried using WebRTC and the Web Audio API, and have even seen a sample. I am able to log values, but those change when I speak into the microphone, while I need only the values of the speaker, not the microphone.
export class OutputLevelsComponent implements OnInit {
  constructor() { }

  ngOnInit(): void {
    this.getAudioLevel();
  }

  getAudioLevel() {
    try {
      navigator.mediaDevices.enumerateDevices().then(devices => {
        console.log("device:", devices);
        let constraints = {
          audio: {
            deviceId: devices[3].deviceId
          }
        };
        navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
          console.log("stream test: ", stream);
          this.handleSuccess(stream);
        });
      });
    } catch (e) {
      console.log("error getting media devices: ", e);
    }
  }

  handleSuccess(stream: any) {
    console.log("stream: ", stream);
    var context = new AudioContext();
    var analyser = context.createScriptProcessor(1024, 1, 1);
    var source = context.createMediaStreamSource(stream);
    source.connect(analyser);
    // source.connect(context.destination);
    analyser.connect(context.destination);
    opacify();
    function opacify() {
      analyser.onaudioprocess = function (e) {
        // no need to get the output buffer anymore
        var int = e.inputBuffer.getChannelData(0);
        var max = 0;
        for (var i = 0; i < int.length; i++) {
          max = int[i] > max ? int[i] : max;
        }
        if (max > 0.01) {
          console.log("max: ", max);
        }
      }
    }
  }
}
I have tried the above code, where I use enumerateDevices() and getUserMedia() to get a set of devices; for demo purposes I am taking the last device which has 'audiooutput' as the value of its kind property and accessing that device's stream.
Please let me know if this is even possible with Web Audio API. If not, is there any other tool that can help me implement this feature?
Thanks in advance.
You would need to use your handleSuccess() function with the stream that you get from the remote end. That stream usually gets exposed as part of the track event.
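A rough sketch of that wiring (assuming pc is your RTCPeerConnection and handleSuccess is the method from the question; pickRemoteStream is a hypothetical helper):

```javascript
// The remote stream arrives in the RTCPeerConnection `track` event;
// event.streams holds the MediaStream(s) the remote side associated
// with the incoming track.
function pickRemoteStream(event) {
  return (event.streams && event.streams[0]) || null;
}

// Browser-only usage:
// pc.addEventListener('track', (e) => {
//   const remote = pickRemoteStream(e);
//   if (remote) this.handleSuccess(remote); // analyse remote audio, not the mic
// });
```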
The problem is likely linked to the machine you are running on. On macOS, there is no way to capture the system audio output from browser APIs, as it requires a signed kernel extension. Potential workarounds are virtual audio drivers such as BlackHole or Soundflower. On Windows, the code should work fine, though.
I have an HTML5 video element whose volume I'm trying to increase.
I'm using the code I found in this answer
However, there is no sound coming out of the speakers. If I disable the amplification code, the sound is fine.
videoEl.muted = true // tried with this disabled or enabled

if (!window.audio)
  window.audio = amplify(vol)
else
  window.audio.amplify(vol)
...
export function amplify(multiplier) {
  const media = document.getElementById('videoEl')
  // @ts-ignore
  var context = new (window.AudioContext || window.webkitAudioContext),
    result = {
      context: context,
      source: context.createMediaElementSource(media),
      gain: context.createGain(),
      media,
      amplify: function (multiplier) {
        result.gain.gain.value = multiplier;
      },
      getAmpLevel: function () {
        return result.gain.gain.value;
      }
    };
  result.source.connect(result.gain)
  result.gain.connect(context.destination)
  result.amplify(multiplier)
  return result;
}
That value is set to 3 for testing.
Any idea why I'm getting no sound?
I also have Howler running for other audio files, could it be blocking the web audio API?
I have this piece of code, that activates my webcam:
var video = document.getElementById('video');

// Get access to the camera!
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  // Not adding `{ audio: true }` since we only want video now
  navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
    video.srcObject = stream;
    video.play();
  });
}
When running the code above, the browser asks for permission to use the webcam. Let's assume that I allow it. The webcam is active now.
What I want now, is to write some code that checks if the webcam is active/being used. So I want to do something like this:
if (navigator.mediaDevices.getUserMedia == true) {
  alert("The camera is active");
}
I found a similar post which has a solution, but I guess I am doing something wrong, even though I tried to follow the same solution. The post is here: How to check with JavaScript that webcam is being used in Chrome
Here is what I tried:
function isCamAlreadyActive() {
  if (navigator.getUserMedia) {
    navigator.getUserMedia({
      video: true
    }), function (stream) {
      // returns true if any tracks have active state of true
      var result = stream.getVideoTracks().some(function (track) {
        return track.enabled && track.readyState === 'live';
      });
    }
    if (result) {
      alert("Already active");
      return true;
    }
  }
  alert("Not active");
  return false;
}
My solution always returns false, even when the webcam is active.
I am not too sure as to what you are trying to detect exactly.
If you want to know whether the webcam is already used by some other program, some other page, or some other script on the same page, then to my knowledge there is no bulletproof solution: different devices and different OSes will have different abilities with regard to requesting the same device simultaneously. So yes, after requesting the device and keeping the stream alive, you could do function isActive() { return true; }, but...
But if I read correctly between the lines of your question, I feel that what you want to know is whether you have been granted authorization by the user, and thus whether they will see the prompt or not.
In this case, you can make use of the MediaDevices.enumerateDevices method, which will (unfortunately, in my opinion) not request user permission, even though it needs it to return the full information about the user's devices.
Indeed, the label property of the MediaDeviceInfo objects should remain anonymized to avoid fingerprinting. So if, when you request this information, the MediaDeviceInfo objects all return an empty string (""), it means that you don't have the user's authorization, and thus that they will get a prompt.
function checkIsApproved() {
  return navigator.mediaDevices.enumerateDevices()
    .then(infos =>
      // if any of the MediaDeviceInfo has a label, we're good to go
      [...infos].some(info => info.label !== "")
    );
}

check.onclick = e => {
  checkIsApproved().then(console.log);
};

req.onclick = e => {
  navigator.mediaDevices.getUserMedia({ video: true }).catch(e => console.error('you might be better using the jsfiddle'));
};
<button id="check">check</button>
<button id="req">request approval</button>
But since StackSnippets don't work well with getUserMedia, here is a jsfiddle demonstrating this.
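As a side note, Chromium-based browsers also expose this state through the Permissions API with a 'camera' permission name; support varies (Firefox notably does not accept that name), so treat the sketch below as a progressive enhancement on top of the label check, not a replacement:

```javascript
// Query the camera permission state where supported. The permissions
// object is taken as a parameter so the fallbacks are testable; pass
// navigator.permissions in the browser.
function queryCameraPermission(permissions) {
  if (!permissions || typeof permissions.query !== 'function') {
    return Promise.resolve(null); // API unavailable: use the label check instead
  }
  return permissions
    .query({ name: 'camera' })
    .then(status => status.state) // 'granted', 'denied' or 'prompt'
    .catch(() => null); // 'camera' not recognized (e.g. Firefox): fall back too
}

// Browser usage: queryCameraPermission(navigator.permissions).then(console.log);
```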
In the code which you have shared, there is a problem with the syntax of calling navigator.getUserMedia(). navigator.getUserMedia() expects 3 parameters. You can check here for more details.
You can modify your function to:
function isCamAlreadyActive() {
  if (navigator.getUserMedia) {
    navigator.getUserMedia({
      video: true
    }, function (stream) {
      // returns true if any tracks have active state of true
      var result = stream.getVideoTracks().some(function (track) {
        return track.enabled && track.readyState === 'live';
      });
      if (result) {
        alert("Already active");
      } else {
        alert("No");
      }
    },
    function (e) {
      alert("Error: " + e.name);
    });
  }
}
PS: Since navigator.getUserMedia() is deprecated, you can use navigator.mediaDevices.getUserMedia instead. You can check out here for more details.
You can use navigator.mediaDevices.getUserMedia as follows:
function isCamAlreadyActive() {
  navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
    var result = stream.getVideoTracks().some(function (track) {
      return track.enabled && track.readyState === 'live';
    });
    if (result) {
      alert("On");
    } else {
      alert("Off");
    }
  }).catch(function (err) {
    console.log(err.name + ": " + err.message);
  });
}
Hope it helps.
I am building a project similar to this example with jsartoolkit5, and I would like to be able to select the back camera of my device instead of letting Chrome on Android select the front one as default.
According to the example in this demo, I have added the code below to switch camera automatically if the device has a back camera.
var videoElement = document.querySelector('canvas');

function successCallback(stream) {
  window.stream = stream; // make stream available to console
  videoElement.src = window.URL.createObjectURL(stream);
  videoElement.play();
}

function errorCallback(error) {
  console.log('navigator.getUserMedia error: ', error);
}

navigator.mediaDevices.enumerateDevices().then(
  function (devices) {
    for (var i = 0; i < devices.length; i++) {
      if (devices[i].kind == 'videoinput' && devices[i].label.indexOf('back') !== -1) {
        if (window.stream) {
          videoElement.src = null;
          window.stream.stop();
        }
        var constraints = {
          video: {
            optional: [{
              sourceId: devices[i].deviceId
            }]
          }
        };
        navigator.getUserMedia(constraints, successCallback, errorCallback);
      }
    }
  }
);
The issue is that it works perfectly with a <video> tag, but unfortunately jsartoolkit renders the content inside a canvas, and it consequently throws an error.
I have also tried to follow the instructions in this closed issue in the Github repository, but this time I get the following error: DOMException: play() can only be initiated by a user gesture.
Do you know or have any suggestion on how to solve this issue?
Thanks in advance for your replies!
Main problem:
You are mixing the old and new getUserMedia syntax.
navigator.getUserMedia is deprecated, and navigator.mediaDevices.getUserMedia should be preferred.
Also, I think that optional is not part of the constraints dictionary anymore.
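To illustrate (a sketch; the modernVideoConstraints helper is hypothetical): plain values in the new syntax are treated as "ideal", while an exact clause makes the constraint mandatory, replacing the old optional/mandatory arrays:

```javascript
// Old (deprecated): { video: { optional: [{ sourceId: id }] } }
// New syntax: a bare value is a preference; `exact` is a requirement.
function modernVideoConstraints(id, required) {
  return {
    video: required
      ? { deviceId: { exact: id } } // reject if this device is unavailable
      : { deviceId: id }            // prefer it, but allow another camera
  };
}
```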
Default Solution
This part is almost a duplicate of this answer: https://stackoverflow.com/a/32364912/3702797
You should be able to call directly
navigator.mediaDevices.getUserMedia({
  video: {
    facingMode: {
      exact: 'environment'
    }
  }
})
But Chrome still has this bug, and even though @jib's answer states that it should work with the adapter.js polyfill, I myself was unable to make it work in Chrome for Android.
So the previous syntax will currently work only on Firefox for Android.
For Chrome, you'll indeed need to use enumerateDevices, along with adapter.js, to make it work; just don't mix up the syntax, and everything should be fine:
let handleStream = s => {
  document.body.append(
    Object.assign(document.createElement('video'), {
      autoplay: true,
      srcObject: s
    })
  );
};

navigator.mediaDevices.enumerateDevices().then(devices => {
  let sourceId = null;
  // enumerate all devices
  for (var device of devices) {
    // if there is still no video input, or if this is the rear camera
    if (device.kind == 'videoinput' &&
        (!sourceId || device.label.indexOf('back') !== -1)) {
      sourceId = device.deviceId;
    }
  }
  // we didn't find any video input
  if (!sourceId) {
    throw 'no video input';
  }
  let constraints = {
    video: {
      deviceId: { exact: sourceId }
    }
  };
  navigator.mediaDevices.getUserMedia(constraints)
    .then(handleStream);
});
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
Fiddle for Chrome, which needs https.
Make it work with jsartoolkit
You'll have to fork jsartoolkit project and edit artoolkit.api.js.
The main project currently disables mediaDevices.getUserMedia(), so you'll need to enable it again, and you'll also have to add a check for a sourceId option, which we'll add later in the ARController.getUserMediaThreeScene() call.
You can find a rough and ugly implementation of these edits in this fork.
So once that is done, you'll have to rebuild the js files, and then remember to include the adapter.js polyfill in your code.
Here is a working fiddle that uses one of the project's demo.