I am using react-webcam to capture photos through the webcam. I am able to stop the webcam stream, but the camera indicator light does not turn off even after the stream has been stopped; it only turns off when I reload the page. Here's the function I am using to stop the stream.
function stopStreamedVideo() {
  let videoElem = document.querySelector("#camera-content > video");
  const stream = videoElem.srcObject;
  const tracks = stream.getTracks();
  tracks.forEach(function (track) {
    track.stop();
  });
  videoElem.srcObject = null;
}
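For reference, here is a minimal sketch of an alternative stop handler. It assumes a recent react-webcam version where the component instance exposes its MediaStream as stream through a ref, which is an assumption worth checking against your installed version. Stopping every track on the stream that the component itself holds, rather than only the one attached to the video element, is usually what turns the indicator off, since the component may hold tracks that the video element's srcObject does not.

// Hedged sketch, not a confirmed fix: stop all tracks on react-webcam's own stream.
// Assumes the instance exposes `stream` (true in recent react-webcam versions).
const webcamRef = React.useRef(null);

function stopWebcamStream() {
  const stream = webcamRef.current && webcamRef.current.stream;
  if (stream) {
    stream.getTracks().forEach((track) => track.stop());
  }
}

// Rendered as: <Webcam ref={webcamRef} audio={false} />

If the light still stays on, check that nothing re-mounts the Webcam component afterwards, since a re-mount re-requests the camera.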
const context = new AudioContext();
let o = null,
    g = null;

function play() {
  o = context.createOscillator();
  g = context.createGain();
  o.type = "sine";
  o.connect(g);
  g.connect(context.destination);
  o.start();
}

function stop() {
  o.stop();
  // DO SOMETHING TO SAVE AUDIO IN AN HTML <AUDIO> TAG
}
When the play() function is called, a sine wave is played, which is then stopped by calling the stop() function. I want to send this audio to an HTML <audio> tag. Is that possible?
Curious myself about how this could be done, I stumbled on an MDN article that does this exact thing.
It uses the MediaRecorder interface and a MediaStreamDestinationNode. To record a sound wave created by your oscillator, you must pass the sound to the MediaStreamDestinationNode to turn it into a stream. This stream is then used by the MediaRecorder, which captures the data streamed to the node whenever the sound is playing. When playback is stopped, all the data that was sent is converted into a Blob. The Blob's type property should be set to the MIME type of the recorded data (for example audio/webm, which is what Chrome's MediaRecorder produces by default); it labels the data but does not convert it.
With URL.createObjectURL() you can create a URL that references this blob. This URL can then be used as the src of the <audio> tag. Now the audio element has a source to play from, which is your recorded sound.
Down below I've made an example, based on the code in the article, which records your sine wave and allows it to be replayed in the <audio> element. Note: whenever you re-record, the previous recording is lost.
// Select the button and audio elements.
const button = document.querySelector('button');
const audio = document.querySelector('audio');

// Define global variables for the oscillator, gain and object URL.
let oscillator = null;
let gain = null;
let source = null;

// Create the context, stream destination and recorder.
const ctx = new AudioContext();
const mediaStreamDestination = ctx.createMediaStreamDestination();
const recorder = new MediaRecorder(mediaStreamDestination.stream);

// Store the chunks of audio data in an array.
let chunks = [];

// Dump the previously stored blob from memory and clear the chunks array.
// Otherwise, all recorded data will be stored until the page is closed.
recorder.addEventListener('start', function(event) {
  if (source !== null) {
    URL.revokeObjectURL(source);
  }
  chunks.length = 0;
});

// Whenever a chunk of sound has been recorded, store the recorded data
// in the chunks array. The chunks will later be converted into
// a workable file for the audio element.
recorder.addEventListener('dataavailable', function(event) {
  const { data } = event;
  chunks.push(data);
});

// Whenever the recorder has stopped recording, create a Blob
// out of the chunks that you've recorded, then create an object URL
// to the Blob and pass that URL to the audio src property.
recorder.addEventListener('stop', function(event) {
  // Label the Blob with the recorder's actual MIME type
  // (e.g. audio/webm in Chrome, audio/ogg in Firefox).
  const blob = new Blob(chunks, { type: recorder.mimeType || 'audio/webm' });
  source = URL.createObjectURL(blob);
  audio.src = source;
});
// Click on the button to start and stop the recording.
button.addEventListener('click', function(event) {
  if (recorder.state !== 'recording') {
    // Create a new oscillator and gain.
    oscillator = ctx.createOscillator();
    gain = ctx.createGain();

    // Connect the oscillator to the gain, and the gain to both the
    // speakers and the MediaStreamDestination, so the sound is
    // played and recorded at the same time.
    oscillator.connect(gain);
    gain.connect(ctx.destination);
    gain.connect(mediaStreamDestination);

    // Start recording and playing.
    recorder.start();
    oscillator.start();
    event.target.textContent = 'Stop recording';
  } else {
    // Stop recording and playing.
    recorder.stop();
    oscillator.stop();
    event.target.textContent = 'Record sine wave';
  }
});

<button>Record sine wave</button>
<audio controls></audio>
If you have any questions regarding the code above, or if I haven't explained it properly, let me know.
I want to record voice, split the recorded voice (or the audio blob) automatically into 1-second chunks, export each chunk to a WAV file, and send it to the back end. This should happen asynchronously while the user speaks.
I currently use the following recorder.js library to do the above tasks:
https://cdn.rawgit.com/mattdiamond/Recorderjs/08e7abd9/dist/recorder.js
My problem is that the blob/WAV file grows over time. I think the data accumulates and makes each chunk bigger, so after a while I am not actually sending sequential 1-second chunks but accumulated ones.
I can't figure out where in my code this issue is caused. Maybe it happens inside the recorder.js library. If you have used recorder.js, or any other JavaScript method for a similar task, I'd appreciate it if you could go through this code and let me know where it breaks.
This is my JS code:
var gumStream;           // Stream from getUserMedia()
var rec;                 // Recorder.js object
var input;               // MediaStreamAudioSourceNode we'll be recording
var recordingNotStopped; // True while the user keeps talking and hasn't pressed the stop button

const trackLengthInMS = 1000; // Length of an audio chunk in milliseconds
const maxNumOfSecs = 1000;    // Maximum number of 1-second chunks we support per recording

// Shim for AudioContext when it's not available.
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContext; // Audio context to help us record

var recordButton = document.getElementById("recordButton");
var stopButton = document.getElementById("stopButton");

// Event handlers for the above 2 buttons
recordButton.addEventListener("click", startRecording);
stopButton.addEventListener("click", stopRecording);

// Asynchronous function to stop the recording every second and export the blob to a WAV file
const sleep = time => new Promise(resolve => setTimeout(resolve, time));
const asyncFn = async () => {
  for (let i = 0; i < maxNumOfSecs; i++) {
    if (recordingNotStopped) {
      rec.record();
      await sleep(trackLengthInMS);
      rec.stop();
      // Create the WAV blob and pass it on to createWaveBlob
      rec.exportWAV(createWaveBlob);
    }
  }
}
function startRecording() {
  console.log("recordButton clicked");
  recordingNotStopped = true;

  var constraints = {
    audio: true,
    video: false
  }

  recordButton.disabled = true;
  stopButton.disabled = false;

  // Using the standard promise-based getUserMedia()
  navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
    // Create an audio context after getUserMedia is called
    audioContext = new AudioContext();
    // Assign to gumStream for later use
    gumStream = stream;
    // Use the stream
    input = audioContext.createMediaStreamSource(stream);
    // Create the Recorder object and configure it to record mono sound (1 channel)
    rec = new Recorder(input, {
      numChannels: 1
    });
    // Call the asynchronous function to split and export audio
    asyncFn();
    console.log("Recording started");
  }).catch(function(err) {
    // Re-enable the record button if getUserMedia() fails
    recordButton.disabled = false;
    stopButton.disabled = true;
  });
}
function stopRecording() {
  console.log("stopButton clicked");
  recordingNotStopped = false;

  // Disable the stop button and enable the record button to allow for new recordings
  stopButton.disabled = true;
  recordButton.disabled = false;

  // Set the recorder to stop the recording
  rec.stop();

  // Stop microphone access
  gumStream.getAudioTracks()[0].stop();
}

function createWaveBlob(blob) {
  // Wrap the blob in a WAV file and call the sendBlob function to send it to the server
  var convertedfile = new File([blob], 'filename.wav');
  sendBlob(convertedfile);
}
Recorder.js keeps a record buffer of the audio that it records. When exportWAV is called, the record buffer is encoded but not cleared. You'd need to call clear on the recorder before calling record again so that the previous chunk of audio is cleared from the record buffer.
This is how it was fixed in the above code.
// Extend the Recorder class with a step() method that clears the record buffer
Recorder.prototype.step = function () {
  this.clear();
};

// After calling exportWAV(), call the step() method
rec.exportWAV(createWaveBlob);
rec.step();
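As a side note, if recorder.js is not a hard requirement, the MediaRecorder API can deliver timed chunks natively by passing a timeslice to start(). Here is a minimal sketch, with the caveat that the chunks are compressed audio (e.g. WebM/Opus in Chrome) rather than WAV, and that each chunk is a fragment of one continuous stream, so the server generally needs to concatenate them rather than treat each one as a standalone file. sendBlob() below refers to the question's own helper.

navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
  const recorder = new MediaRecorder(stream);
  // With a timeslice, dataavailable fires roughly every 1000 ms.
  recorder.ondataavailable = function(event) {
    // event.data is a Blob holding about one second of compressed audio.
    sendBlob(event.data);
  };
  recorder.start(1000); // timeslice in milliseconds
});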
Looking for experience working with media devices:
I'm working on recording to cache and playing back from a microphone source, in Firefox & Chrome using HTML5.
This is what I have so far:
var constraints = {audio: true, video: false};
var promise = navigator.mediaDevices.getUserMedia(constraints);
I've been checking the official MDN documentation on getUserMedia, but found nothing related to storing the audio from the stream in cache.
No such question has been asked previously on Stack Overflow; I'm wondering if it's possible.
Thank you.
You can simply use the MediaRecorder API for such a task.
In order to record only the audio from your video+audio gUM stream, you will need to create a new MediaStream from the gUM stream's audio track:
// using async for brevity
async function doit() {
  // first request both mic and camera
  const gUMStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  // create a new MediaStream with only the audioTrack
  const audioStream = new MediaStream(gUMStream.getAudioTracks());
  // to save recorded data
  const chunks = [];
  const recorder = new MediaRecorder(audioStream);
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.start();
  // when user decides to stop
  stop_btn.onclick = e => {
    recorder.stop();
    // kill all tracks to free the devices
    gUMStream.getTracks().forEach(t => t.stop());
    audioStream.getTracks().forEach(t => t.stop());
  };
  // export all the saved data as one Blob
  recorder.onstop = e => exportMedia(new Blob(chunks));
  // play current gUM stream
  vid.srcObject = gUMStream;
  stop_btn.disabled = false;
}

function exportMedia(blob) {
  // here blob is your recorded audio file, you can do whatever you want with it
  const aud = new Audio(URL.createObjectURL(blob));
  aud.controls = true;
  document.body.appendChild(aud);
  document.body.removeChild(vid);
}

doit()
  .then(e => console.log("recording"))
  .catch(e => {
    console.error(e);
    console.log('you may want to try from jsfiddle: https://jsfiddle.net/5s2zabb2/');
  });
<video id="vid" controls autoplay></video>
<button id="stop_btn" disabled>stop</button>
And here it is as a fiddle, since Stack Snippets don't work very well with gUM...
I want to know how to set the volume in WebRTC.
I'm creating the audio element like this:
audio = document.createElement('audio');
audio.controls = true;
audio.autoplay = true;
audio.src = window.URL.createObjectURL(stream);
div.appendChild(audio);
I want to build my own custom audio UI, so I will use an HTML slider:
<input type="range">
But I don't know how to set the volume on a WebRTC stream. How can I do it?
For the output (speaker) volume, you can use the volume property of the audio/video element.

var audio = document.getElementById('audioId');
audio.volume = 0.9; // 0.0 (silent) -> 1.0 (loudest)

You can change audio.volume based on your slider position, as shown below.
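For example, a minimal wiring, assuming the range input from the question and the audioId element above (the slider id is hypothetical):

// Hypothetical markup: <input type="range" id="volumeSlider" min="0" max="1" step="0.01" value="1">
var volumeSlider = document.getElementById('volumeSlider');
volumeSlider.addEventListener('input', function () {
  audio.volume = parseFloat(volumeSlider.value); // 0.0 (silent) to 1.0 (loudest)
});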
To change the input (microphone) volume, there is no direct method available on a WebRTC AudioTrack/MediaStream.
We can use the Web Audio API to handle the volume at the stream/track level, and connect the Web Audio output to the PeerConnection as follows:
var audioContext = new AudioContext();
var gainNode = audioContext.createGain();

navigator.mediaDevices.getUserMedia({ audio: true })
  .then((stream) => {
    console.log('got stream', stream);
    window.originalStream = stream;
    return stream;
  })
  .then((stream) => {
    var audioSource = audioContext.createMediaStreamSource(stream);
    var audioDestination = audioContext.createMediaStreamDestination();
    audioSource.connect(gainNode);
    gainNode.connect(audioDestination);
    gainNode.gain.value = 1;
    window.localStream = audioDestination.stream;
    //audioElement.srcObject = window.localStream; // for playback
    //you can add this stream to the pc object
    //pc.addStream(window.localStream);
  })
  .catch((err) => {
    console.error('Something wrong in capture stream', err);
  });
Now we can easily control the microphone volume with the function below:
function changeMicrophoneLevel(value) {
  if (value && value >= 0 && value <= 2) {
    gainNode.gain.value = value;
  }
}
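Wiring this to a second slider could look like the following sketch (the markup is hypothetical, matching the 0-2 range the function accepts):

// Hypothetical markup: <input type="range" id="micSlider" min="0" max="2" step="0.1" value="1">
var micSlider = document.getElementById('micSlider');
micSlider.addEventListener('input', function () {
  changeMicrophoneLevel(parseFloat(micSlider.value));
});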
For more info, have a look at my demo.
I'm trying to create an audio stream in the browser and send it to the server.
Here is the code:
let recording = false;
let localStream = null;

const session = {
  audio: true,
  video: false
};

function start () {
  recording = true;
  navigator.webkitGetUserMedia(session, initializeRecorder, onError);
}

function stop () {
  recording = false;
  localStream.getAudioTracks()[0].stop();
}

function initializeRecorder (stream) {
  localStream = stream;
  const audioContext = window.AudioContext;
  const context = new audioContext();
  const audioInput = context.createMediaStreamSource(localStream);
  const bufferSize = 2048;
  // create a javascript node
  const recorder = context.createScriptProcessor(bufferSize, 1, 1);
  // specify the processing function
  recorder.onaudioprocess = recorderProcess;
  // connect stream to our recorder
  audioInput.connect(recorder);
  // connect our recorder to the previous destination
  recorder.connect(context.destination);
}

function onError (e) {
  console.log('error:', e);
}

function recorderProcess (e) {
  if (!recording) return;
  const left = e.inputBuffer.getChannelData(0);
  // send left to server here (socket.io can do the job). We don't need stereo.
}
When the start function is fired, the samples can be caught in recorderProcess.
When the stop function is fired, the mic icon in the browser disappears, but unless I put if (!recording) return at the beginning of recorderProcess, it still processes samples.
Unfortunately that's not a solution at all: the samples are still being received by recorderProcess, and if I fire the start function once more, it will get all the samples from the previous stream as well as the new one.
My question is:
How can I stop/start recording without this issue?
Or, if that's not the best solution:
How can I totally remove the stream in the stop function, so I can safely initialize it again at any time?
recorder.disconnect() should help.
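For instance, a rough sketch of a fuller teardown, assuming audioInput and recorder are hoisted out of initializeRecorder so that stop can reach them (in the code above they are local variables):

function stop () {
  recording = false;
  // Detach the processing graph so onaudioprocess stops firing.
  audioInput.disconnect();
  recorder.disconnect();
  // Release the microphone so the browser indicator goes away.
  localStream.getAudioTracks()[0].stop();
}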
You might want to consider the new MediaRecorder functionality in Chrome Canary shown at https://webrtc.github.io/samples/src/content/getusermedia/record/ (currently video-only, I think) instead of the Web Audio API.