I am developing a web application for video recording.
I have tried the code below; the problem is that it works fine with an external camera, but with the laptop's built-in camera in the same browser it gives a "no object found" error.
I am using Firefox 60.6.1.
if (navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia(constraints)
    .then(function (stream) {
      // Load the stream into the video element
      video.srcObject = stream;
      // Keep a reference to the stream so access can be revoked later
      revokeAccess = stream;
      // Start playback
      video.play();
      // Optional: mute playback to avoid the dual-audio disturbance
      video.muted = true;

      // Pick the best supported codec, falling back to browser defaults
      var options = {};
      if (MediaRecorder.isTypeSupported('video/webm;codecs=vp9')) {
        options = { mimeType: 'video/webm;codecs=vp9' };
        console.log('using vp9');
      } else if (MediaRecorder.isTypeSupported('video/webm;codecs=h264')) {
        options = { mimeType: 'video/webm;codecs=h264' };
        console.log('using h264');
      } else if (MediaRecorder.isTypeSupported('video/webm;codecs=vp8')) {
        options = {
          mimeType: 'video/webm;codecs=vp8',
          videoBitsPerSecond: 1500000,
          audioBitsPerSecond: 160000
        };
        console.log('using vp8');
      } else {
        console.log('isTypeSupported is not supported, using default codecs for browser');
      }

      // Create the recorder from the stream and the chosen options
      mediaRecorder = new MediaRecorder(stream, options);
      // Handle data availability
      mediaRecorder.ondataavailable = handleDataAvailable;
      // Start the recording
      mediaRecorder.start();
      alert("Started Recording");

      // Push the data into the recordedChunks array
      function handleDataAvailable(event) {
        if (event.data.size > 0) {
          recordedChunks.push(event.data);
          console.log(recordedChunks);
        } else {
          alert(event);
        }
      }

      // Disable the Start Recording button
      document.getElementById("startRecording").disabled = true;
    })
    .catch(function (error) {
      // Handle the device-not-found exception
      alert("Camera not Found !! Please connect camera properly");
      console.log(error);
    });
}
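(For reference: constraints, recordedChunks and the other top-level variables are declared elsewhere in my code; a rough sketch with hypothetical values would be:)
// Hypothetical declarations assumed by the snippet above
var constraints = { video: true, audio: true };
var recordedChunks = [];
var mediaRecorder;
var revokeAccess;
var video = document.getElementById('video');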
I want the application to work on every platform.
Hi all developers, thanks for your support and quick response.
I fixed the issue using the adapter addition below in the code.
var getUserMedia = navigator.getUserMedia ||
navigator.mozGetUserMedia ||
navigator.webkitGetUserMedia;
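Note that these prefixed variants are callback-based and must be invoked on navigator, so one common way to wire the shim in (a sketch following the well-known MDN polyfill pattern, not something my fix strictly requires) is to wrap it in a promise so the rest of the code can keep calling navigator.mediaDevices.getUserMedia:
if (navigator.mediaDevices === undefined) {
  navigator.mediaDevices = {};
}
if (navigator.mediaDevices.getUserMedia === undefined) {
  navigator.mediaDevices.getUserMedia = function (constraints) {
    // Fall back to the prefixed, callback-based API where necessary
    var getUserMedia = navigator.getUserMedia ||
                       navigator.mozGetUserMedia ||
                       navigator.webkitGetUserMedia;
    if (!getUserMedia) {
      return Promise.reject(new Error('getUserMedia is not implemented in this browser'));
    }
    // Wrap the legacy call in a promise; it must be called on navigator
    return new Promise(function (resolve, reject) {
      getUserMedia.call(navigator, constraints, resolve, reject);
    });
  };
}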
If you want to support legacy browsers, check the following:
https://github.com/webrtcHacks/adapter
WebRTC adapter
adapter.js is a shim to insulate apps from spec changes and prefix
differences. In fact, the standards and protocols used for WebRTC
implementations are highly stable, and there are only a few prefixed
names. For full interop information, see webrtc.org/web-apis/interop.
This repository used to be part of the WebRTC organisation on github
but moved. We aim to keep the old repository updated with new
releases.
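Using adapter.js typically amounts to loading the shim before any of your WebRTC code runs; a minimal sketch (the script URL below is the one the project documents, so verify it against the current README before relying on it):
<!-- Load the shim first; it patches prefixed/legacy APIs in place -->
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
<script>
  // After adapter.js has loaded, the standard promise-based API is available
  navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(function (stream) { /* use the stream */ })
    .catch(function (error) { console.log(error); });
</script>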
Also, you can check your hardware environment from Firefox.
Type the following in the browser's address bar:
about:support
Can someone help me with this line of code?
recorder = new MediaRecorder(stream, {mimeType: 'video/webm'});
When I use this on iOS, catch(err) answers with "ReferenceError: Can't find variable: MediaRecorder".
Please, can you help? If you need it, this is the complete function:
startBtn.addEventListener('click', function () {
  navigator.mediaDevices.getUserMedia(constraint).then(function (stream) {
    recorder = new MediaRecorder(stream, {
      mimeType: 'video/webm'
    });
    recorder.start();
  }).catch(function (err) {
    alert('impossible ' + err);
  });
});
Unfortunately, on iOS getUserMedia is only supported in the Safari browser.
In any other browser on an iOS device, getUserMedia is not supported.
First check the availability of getUserMedia (and of MediaRecorder, which is what your error is actually about) before you assign it to recorder, as in the sketch below.
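A minimal feature-detection sketch along those lines, reusing the startBtn, constraint and recorder names from your question:
startBtn.addEventListener('click', function () {
  // Bail out early if either API is missing (e.g. MediaRecorder on older iOS Safari)
  if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    alert('getUserMedia is not supported in this browser');
    return;
  }
  if (typeof MediaRecorder === 'undefined') {
    alert('MediaRecorder is not supported in this browser');
    return;
  }
  navigator.mediaDevices.getUserMedia(constraint).then(function (stream) {
    recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
    recorder.start();
  }).catch(function (err) {
    alert('impossible ' + err);
  });
});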
Good Luck
I'm developing an application that allows voice recording in the web browser. This is working fine on most browsers but I have some issues with iOS Safari.
Below is an extract of the code; it is not complete, but it gives an idea of what's going on.
// Triggered when the user clicks a button that starts the recording
function startRecording() {
  // Create a new audio context
  let audioContext = new (window.AudioContext || window.webkitAudioContext)();
  // Hack: make the polyfilled media recorder re-use this audioContext
  window.NewMediaRecorder.changeAudioContext(audioContext);
  navigator.mediaDevices.enumerateDevices().then(function (devices) {
    console.log('loaded audio devices');
    console.log(devices);
    devices = devices.filter((d) => d.kind === 'audioinput');
    console.log(devices);
    console.log('chosen device: ' + devices[0].deviceId);
    navigator.mediaDevices.getUserMedia({
      audio: {
        deviceId: {
          exact: devices[0].deviceId
        }
      }
    }).then(function (stream) {
      console.log(stream);
      let recorder = new NewMediaRecorder(stream);
      recorder.addEventListener('dataavailable', function (e) {
        document.getElementById('ctrlAudio').src = URL.createObjectURL(e.data);
      });
      recorder.start();
      console.log('stop listening after 15 seconds');
      setTimeout(function () {
        console.log('15 seconds passed');
        console.log('Force stop listening');
        recorder.stop();
        recorder.stream.getTracks()[0].stop();
      }, 15000);
    });
  });
}
For the record, I'm using audio-recorder-polyfill (https://ai.github.io/audio-recorder-polyfill/) in order to achieve recording, as MediaRecorder is not yet available in Safari.
The recorder works fine in all browsers (including OS X Safari), yet on iOS Safari it only records noise. If I set the volume of my speakers to the maximum level, I can hear myself speak, but as if from "very far away".
All the online dictaphones/recorders that I found have the same issue: they always record noise (tested with an iPhone 5S, 5SE and X, all up to date).
I'm a bit desperate, because I have already done a lot of research but didn't find any solution for this issue.
As required, the AudioContext is created on a user event (in this case a touch on a button).
I even tried changing the gain, but that didn't help.
Trying to access the audio without specifying a media device doesn't help either:
navigator.mediaDevices.getUserMedia({audio: true})
I want to capture video with the web camera.
Here is the solution I arrived at:
window.onload = function () {
  var video = document.getElementById('video');
  var videoStreamUrl = false;
  navigator.getUserMedia({ video: true }, function (stream) {
    videoStreamUrl = window.URL.createObjectURL(stream);
    video.src = videoStreamUrl;
  }, function () {
    console.log('error');
  });
};
but it produces a deprecation warning in the browser:
[Deprecation] URL.createObjectURL with media streams is deprecated and will be removed in M68, around July 2018. Please use HTMLMediaElement.srcObject instead. See https://www.chromestatus.com/features/5618491470118912 for more details.
How do I use HTMLMediaElement.srcObject for my purposes? Thanks for your time!
MediaElement.srcObject should allow Blobs, MediaSources and MediaStreams to be played in the MediaElement, without the need to bind these sources in memory for the lifetime of the document the way blob URIs do.
(Currently no browser supports anything other than MediaStream, though...)
Indeed, when you do URL.createObjectURL(MediaStream), you are telling the browser that it should keep this source alive until you revoke the blob URI, or until the document dies.
In the case of a LocalMediaStream served from a capturing device (camera or microphone), this also means that the browser has to keep the connection to this device open.
Firefox initiated the deprecation of this feature a year or so ago, since srcObject can provide the same result in better ways that are easier to handle for everyone, and Chrome now seems to finally be following suit (not sure what the spec's status is on this).
So to use it, simply do
MediaElement.srcObject = MediaStream;
Also note that the API you are using is itself deprecated (and not only in FF), and you shouldn't use it anymore. The correct API for capturing MediaStreams from user media is MediaDevices.getUserMedia.
This API returns a Promise that resolves to the MediaStream.
So a complete correction of your code would be:
var video = document.getElementById('video');
navigator.mediaDevices.getUserMedia({
  video: true
})
  .then(function (stream) {
    video.srcObject = stream;
  })
  .catch(function (error) {
    console.log('error', error);
  });
<video id="video"></video>
Or as a fiddle, since StackSnippets' overprotected iframe may not deal well with gUM.
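If you also need to keep older browsers without srcObject working, a hedged fallback sketch is to feature-detect and only fall back to the deprecated createObjectURL path when necessary:
navigator.mediaDevices.getUserMedia({ video: true })
  .then(function (stream) {
    if ('srcObject' in video) {
      video.srcObject = stream; // modern path
    } else {
      video.src = URL.createObjectURL(stream); // deprecated fallback for old browsers
    }
  })
  .catch(function (error) {
    console.log('error', error);
  });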
As far as I know, the only way to capture audio from the user's microphone in real time is by using the Flash plugin (which needs the user's permission) or Java.
Does someone know any other way, like HTML5 or JavaScript? My whole program is built with HTML5 and I don't want to use another technology.
You can use navigator.getUserMedia.
For example:
(function () {
  var getMedia = (navigator.getUserMedia ||
                  navigator.webkitGetUserMedia ||
                  navigator.mozGetUserMedia ||
                  navigator.msGetUserMedia);
  // getUserMedia must be invoked on navigator, otherwise it throws
  // an "Illegal invocation" error in some browsers
  getMedia.call(navigator,
    {
      video: false,
      audio: true
    },
    function (localMediaStream) {
      var video = document.createElement('video');
      video.src = window.URL.createObjectURL(localMediaStream);
      video.onloadedmetadata = function (e) {
        // deal with data
      };
    },
    function (err) {
      console.log("The following error occurred: " + err);
    }
  );
})();
See Mozilla Developer Network for more info.
Note: This only works in Chrome, recent Firefox (20+), and Opera (12+), but not in Internet Explorer.
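If the goal is to process the microphone samples in real time rather than just attach them to an element, one option (a sketch using the Web Audio API, not shown in the answer above) is to feed the stream into an AnalyserNode:
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then(function (stream) {
    // Route the microphone stream into the Web Audio graph
    var source = audioCtx.createMediaStreamSource(stream);
    var analyser = audioCtx.createAnalyser();
    analyser.fftSize = 2048;
    source.connect(analyser);
    var data = new Uint8Array(analyser.frequencyBinCount);
    (function poll() {
      analyser.getByteTimeDomainData(data); // raw waveform samples, 0-255
      // ...visualize or inspect `data` here...
      requestAnimationFrame(poll);
    })();
  })
  .catch(function (err) {
    console.log('The following error occurred: ' + err);
  });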
I'm on Chrome 25, successfully using getUserMedia and RTCPeerConnection to connect audio from a web page to another party, but I'm unable to get the API to stop the blinking red icon in the Chrome tab that indicates media is being used on that page. My question is essentially a duplicate of Stop/Close webcam which is opened by navigator.getUserMedia, except that the resolution there isn't working. If I have a page that just uses getUserMedia with no remote media (no peer), then stopping the camera turns off the blinking tab indicator. Adding remote streams seems to be a/the issue. Here's what I currently have for my "close" code:
if (localStream) {
  if (peerConnection && peerConnection.removeStream) {
    peerConnection.removeStream(localStream);
  }
  if (localStream.stop) {
    localStream.stop();
  }
  localStream.onended = null;
  localStream = null;
}
if (localElement) {
  localElement.onerror = null;
  localElement.pause();
  localElement.src = undefined;
  localElement = null;
}
if (remoteStream) {
  if (peerConnection && peerConnection.removeStream) {
    peerConnection.removeStream(remoteStream);
  }
  if (remoteStream.stop) {
    remoteStream.stop();
  }
  remoteStream.onended = null;
  remoteStream = null;
}
if (remoteElement) {
  remoteElement.onerror = null;
  remoteElement.pause();
  remoteElement.src = undefined;
  remoteElement = null;
}
if (peerConnection) {
  peerConnection.close();
  peerConnection = null;
}
I've tried with and without the removeStream() call, and with and without the stop() call; I've tried element.src = '' and element.src = null. I'm running out of ideas. Does anyone know if this is a bug, or my error in the use of the API?
EDIT: I set my default device (on Windows) to a camera that has a light when it's in use, and upon stopping, the camera light goes off, so perhaps this is a Chrome bug. I also discovered that if I use chrome://settings/content to change the microphone device to anything other than "Default", Chrome audio fails altogether. And finally, I realized that using element.src = undefined resulted in Chrome attempting to load a resource and throwing a 404, so that's clearly not correct... so it's back to element.src = '' on that.
Ended up being my fault (yes, shocking). It turns out I wasn't saving localStream correctly in the onUserMediaSuccess callback of getUserMedia... once that was set, Chrome turns off the blinking recording icon. That doesn't explain the other anomalies, but it closes the main point of the question.
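For anyone hitting the same thing, a sketch of the fix (onUserMediaSuccess and localElement follow the naming in the question; note that MediaStream.stop() from that era has since been deprecated in favour of stopping the individual tracks):
var localStream = null;

function onUserMediaSuccess(stream) {
  localStream = stream; // keep the reference so it can actually be stopped later
  // era-appropriate; modern code would use localElement.srcObject = stream
  localElement.src = window.URL.createObjectURL(stream);
}

function stopCapture() {
  if (localStream) {
    if (localStream.getTracks) {
      // Modern API: stop every track to release the camera/microphone
      localStream.getTracks().forEach(function (track) { track.stop(); });
    } else if (localStream.stop) {
      localStream.stop(); // legacy API from the Chrome 25 era
    }
    localStream = null;
  }
}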
I just got this working yesterday after trawling through the WebRTC specification. I don't know if this is the "right" way to do it, but I found that renegotiating the PeerConnection with a new offer after removing the stream did the trick.
var pc = peerConnections[socketId];
pc.removeStream(stream);
pc.createOffer(
  function (session_description) {
    pc.setLocalDescription(session_description);
    _socket.send(JSON.stringify({
      "eventName": "send_offer",
      "data": {
        "socketId": socketId,
        "sdp": session_description
      }
    }));
  },
  function (error) {},
  defaultConstraints
);