I am trying to analyse the audio output from the browser, but I don't want the getUserMedia prompt to appear (which asks for microphone permission).
The sound sources are SpeechSynthesis and an MP3 file.
Here's my code:
return navigator.mediaDevices.getUserMedia({
    audio: true
  })
  .then(stream => new Promise(resolve => {
    const track = stream.getAudioTracks()[0];
    this.mediaStream_.addTrack(track);
    this._source = this.audioContext.createMediaStreamSource(this.mediaStream_);
    this._source.connect(this.analyser);
    this.draw(this);
    resolve(); // without this, the returned promise never settles
  }));
This code works fine, but it asks for permission to use the microphone! I am not interested in the microphone at all; I only need to gauge the audio output. If I list all available devices:
navigator.mediaDevices.enumerateDevices()
  .then(function(devices) {
    devices.forEach(function(device) {
      console.log(device.kind + ": " + device.label +
        " id = " + device.deviceId);
    });
  })
I get a list of available devices in the browser, including 'audiooutput'.
So, is there a way to route the audio output into a media stream that can then be passed to the createMediaStreamSource function?
I have checked the Web Audio API documentation but could not find anything.
Thanks to anyone who can help!
There are various ways to get a MediaStream that does not originate from gUM, but you won't be able to catch all possible audio output...
But for your mp3 file, if you read it through a MediaElement (<audio> or <video>), and if this file is served without breaking CORS, then you can use MediaElement.captureStream.
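For instance, a minimal sketch of that route, assuming a hypothetical <audio id="player"> element and the audioContext/analyser from your code (Firefox still prefixes the method as mozCaptureStream):

const audioEl = document.getElementById('player'); // hypothetical element
// captureStream() yields a MediaStream of whatever the element plays
const stream = audioEl.captureStream
  ? audioEl.captureStream()
  : audioEl.mozCaptureStream();
const source = audioContext.createMediaStreamSource(stream);
source.connect(analyser);
audioEl.play();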
If you read it through the Web Audio API, or if you target browsers that don't support captureStream, then you can use AudioContext.createMediaStreamDestination.
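A rough sketch of that second route, again assuming the audioContext/analyser from the question and a hypothetical file.mp3 URL:

const dest = audioContext.createMediaStreamDestination();

fetch('file.mp3') // hypothetical URL
  .then(r => r.arrayBuffer())
  .then(buf => audioContext.decodeAudioData(buf))
  .then(audioBuffer => {
    const src = audioContext.createBufferSource();
    src.buffer = audioBuffer;
    src.connect(dest);                     // feeds dest.stream
    src.connect(audioContext.destination); // keep it audible
    src.start();
  });

// dest.stream is a plain MediaStream, no gUM prompt involved
const source = audioContext.createMediaStreamSource(dest.stream);
source.connect(analyser);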
For SpeechSynthesis, unfortunately, you will need gUM... and a Virtual Audio Device: you would first have to set your default output to VAB_out, then route VAB_out to VAB_in, and finally grab VAB_in from gUM...
Not an easy nor universally doable task, especially since, IIRC, SpeechSynthesis doesn't have any setSinkId method.
I have developed an Electron app and, for the first time ever, my antivirus (ESET) has raised "Webcam access attempt" when the app loads. Has anyone else experienced this?
My app does not use the webcam and I have no code that requires the webcam.
I do have code that accesses audio for audio recording. I have denied access to the webcam in the antivirus and the app does function as designed. However, the antivirus warning appears on every load of the app. As you can imagine, this is not cool.
This surfaced immediately after updating ESET (v14.2.10.0), so some new rule of theirs is getting triggered. I have to assume that this is not ESET over-sensitivity (I have no idea how AVs function, and 'blaming' the antivirus doesn't seem like a sound response to give users), so I am left questioning my use of web APIs in my code.
My audio access uses the native web APIs: AudioContext, Navigator, MediaDevices, MediaRecorder. The key lines of code are below:
// getting list of all AUDIO devices:
// const audioSources = await navigator.mediaDevices.enumerateDevices({ audio: true });
// ^ above does NOT filter by audio only
const audioSources = await navigator.mediaDevices.enumerateDevices();

// creating a recorder object:
const audioContext = new AudioContext();
const dest = audioContext.createMediaStreamDestination();
const audioParams = {
  deviceId: "6e5fc2d7ffa5c6c04e06d282a5aa743e983e585a7e12118c80c0cd8646cce4b7", // this ID is from the audioSources object
};
const mediaStream = await navigator.mediaDevices.getUserMedia({ audio: audioParams });
const audioIn = audioContext.createMediaStreamSource(mediaStream);
audioIn.connect(dest);
const audioOptions = {
  bitsPerSecond: 128000,
  mimeType: 'audio/webm; codecs=opus',
};
const recorder = new MediaRecorder(dest.stream, audioOptions);
Because navigator.mediaDevices.enumerateDevices() does not take parameters such as { audio: true }, the enumerateDevices() call seems to be what triggers the camera request.
I use the results of enumerateDevices() to access the device ID, which is then passed into .getUserMedia() to select the specific device. This allows users to select one or more audio inputs for the same recording.
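Concretely, the selection flow looks something like this (filtering on device.kind client-side is the closest thing to an audio-only query I have found, but it does not seem to stop the enumeration itself from touching the camera):

// list everything, then keep only the microphones
const devices = await navigator.mediaDevices.enumerateDevices();
const audioSources = devices.filter(d => d.kind === 'audioinput');

// pass the chosen deviceId on to getUserMedia, as in the code above
const mediaStream = await navigator.mediaDevices.getUserMedia({
  audio: { deviceId: audioSources[0].deviceId }, // hypothetical pick
});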
Is there a way of just querying available media for audio devices / excluding video?
Is there another way of identifying all available audio devices?
How else can I select what device .getUserMedia() returns as a stream?
The only existing information I could find on this was on the shut-down Atom Community forum:
Electron keeps accessing my webcam for NO REASON - two developers discovering the same behaviour in Sept '20 with different antivirus software. No resolution.
Originally seen using Electron 8.5.0. Issue remains after updating to 13.1.2
Software versions: Electron 13.1.2, ESET 14.2.10.0
I'm trying to stream a video directly to a browser using node.js as the backend. I would like the video to be streamed from a specific time and would also like it to be partially streamed since it is a pretty large file. Right now I am doing this with fluent-ffmpeg, like this:
const ffmpeg = require('fluent-ffmpeg');

app.get('/clock/', (req, res) => {
  const videoPath = 'video.mp4';
  const now = new Date();
  ffmpeg(videoPath)
    .videoCodec('libx264')
    .withAudioCodec('aac')
    .setStartTime(`${(now.getHours() - 10) % 24}:${now.getMinutes() - 1}:${now.getSeconds()}`)
    .format('mp4')
    .outputOptions(['-frag_duration 100', '-movflags frag_keyframe+faststart', '-pix_fmt yuv420p'])
    .on('end', () => {
      console.log('File has been converted successfully');
    })
    .on('error', (err) => {
      if (err.message.toLowerCase().includes('output stream closed')) return;
      console.log('An error occurred', err);
    })
    .pipe(res, { end: true });
});
This works in Chrome, but Safari just doesn't want to stream it.
I know the reason it doesn't work on Safari is that Safari needs the Range header. I've therefore tried to handle that, but:
I can't get it to work with fluent-ffmpeg.
When I try to do it the "normal" way, without fluent-ffmpeg, it needs to load the whole video file before it plays.
The video doesn't need to start at the specific timestamp. It would be nice though, but I have a workaround for that if it's not possible :)
So my question is: how can I get the code above to work with Safari? And if that is impossible: how can I write something that doesn't need to be fully loaded before it can be played in Safari browsers, i.e. partial video streaming?
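For reference, here is a minimal sketch of the usual Range handling over a static mp4 (plain fs, no ffmpeg), assuming the same Express app; this is the direction I mean by the "normal" way:

const fs = require('fs');

app.get('/clock/', (req, res) => {
  const videoPath = 'video.mp4';
  const { size } = fs.statSync(videoPath);
  const range = req.headers.range;

  if (!range) {
    res.writeHead(200, { 'Content-Length': size, 'Content-Type': 'video/mp4' });
    return fs.createReadStream(videoPath).pipe(res);
  }

  // e.g. "bytes=0-1" (Safari probes with this first and expects a 206)
  const [startStr, endStr] = range.replace('bytes=', '').split('-');
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;

  res.writeHead(206, {
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
    'Content-Type': 'video/mp4',
  });
  fs.createReadStream(videoPath, { start, end }).pipe(res);
});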
I've recently been playing around with the Web Audio API a little bit. I managed to "read" a microphone and play it through my speakers, which worked quite seamlessly.
Using the Web Audio API, I would now like to resample an incoming audio stream (i.e. the microphone) from 44.1kHz to 16kHz. 16kHz, because I am using some tools which require 16kHz. Since 44.1kHz divided by 16kHz is not an integer, I believe I cannot simply use a low-pass filter and "skip samples", right?
I also saw that some people suggested using .createScriptProcessor(), but since it is deprecated I feel kind of bad using it, so I'm looking for a different approach now. Also, I don't necessarily need audioContext.destination to hear it! It is still fine if I just get the "raw" data of the resampled output.
My approaches so far
Creating an AudioContext({sampleRate: 16000}) --> throws an error: "Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported."
Using an OfflineAudioContext --> but it seems to have no option for streams (only for buffers)
Using an AudioWorkletProcessor to resample. In this case, I think I could use the processor to actually resample the input and output the resampled source. But I couldn't really figure out how to resample it.
main.js
...
microphoneGranted: async function(stream) {
  audioContext = new AudioContext();
  var microphone = audioContext.createMediaStreamSource(stream);
  await audioContext.audioWorklet.addModule('resample_proc.js');
  const resampleNode = new AudioWorkletNode(audioContext, 'resample_proc');
  microphone.connect(resampleNode).connect(audioContext.destination);
}
...
resample_proc.js (assuming only one input and output channel)
class ResampleProcessor extends AudioWorkletProcessor {
  ...
  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    if (input.length > 0) {
      const inputChannel0 = input[0];
      const outputChannel0 = output[0];
      for (let i = 0; i < inputChannel0.length; ++i) {
        // do something with resample here?
      }
    }
    return true; // return unconditionally so the processor stays alive
  }
}
registerProcessor('resample_proc', ResampleProcessor);
Thank you!
Your general idea looks good. While I can't provide the code to do the resampling, I can point out that you might want to start with Sample-rate conversion. Method 1 would work here with L/M = 160/441. Designing the filters takes a bit of work but only needs to be done once. You can also search for polyphase filtering for hints on how to do this effectively.
What Chrome does in various places is use a windowed-sinc function to resample between arbitrary sets of rates. This is method 2 in the Wikipedia link.
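To illustrate the plumbing only (not the filter design), here is a rough sketch of the question's worklet using naive linear interpolation; a real implementation should low-pass first as described above, or it will alias. Since an AudioWorkletNode's output always runs at the context rate, this sketch posts the 16 kHz samples to the main thread through the processor's message port instead of writing them to the output:

class LinearResampleProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.targetRate = 16000;
    this.ratio = sampleRate / this.targetRate; // `sampleRate` is a worklet-scope global
    this.readPos = 0;                          // fractional read position into `buffer`
    this.buffer = new Float32Array(0);         // unconsumed input samples
  }

  process(inputs, outputs) {
    const input = inputs[0];
    if (input.length > 0) {
      const chunk = input[0];

      // append the new 128-frame block to whatever was left over
      const merged = new Float32Array(this.buffer.length + chunk.length);
      merged.set(this.buffer);
      merged.set(chunk, this.buffer.length);

      // how many output samples can we safely interpolate?
      const outLen = Math.max(0, Math.floor((merged.length - 1 - this.readPos) / this.ratio));
      const out = new Float32Array(outLen);
      for (let i = 0; i < outLen; i++) {
        const pos = this.readPos + i * this.ratio;
        const idx = Math.floor(pos);
        const frac = pos - idx;
        out[i] = merged[idx] * (1 - frac) + merged[idx + 1] * frac;
      }

      // keep the tail we haven't consumed yet
      this.readPos += outLen * this.ratio;
      const keep = Math.floor(this.readPos);
      this.buffer = merged.slice(keep);
      this.readPos -= keep;

      if (outLen > 0) this.port.postMessage(out, [out.buffer]);
    }
    return true; // keep the processor alive
  }
}
registerProcessor('linear_resample_proc', LinearResampleProcessor);

On the main thread you would then collect the chunks via resampleNode.port.onmessage rather than connecting the node to audioContext.destination.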
The Web Audio API now allows resampling by passing the sample rate to the constructor. This code works in Chrome and Safari:
const audioStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
const audioContext = new AudioContext({ sampleRate: 16000 });
const audioStreamSource = audioContext.createMediaStreamSource(audioStream);
audioStreamSource.connect(audioContext.destination);
But it fails in Firefox, which throws a NotSupportedError exception: "AudioContext.createMediaStreamSource: Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported."
In the example below, I've downsampled the audio coming from the microphone to 8kHz and added a one second delay so we can clearly hear the effect of downsampling:
https://codesandbox.io/s/magical-rain-xr4g80
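The gist of the sandbox, assuming Chrome or Safari, is roughly this (the DelayNode makes the downsampling easy to hear against your live voice):

const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const ctx = new AudioContext({ sampleRate: 8000 }); // the context itself runs at 8 kHz
const src = ctx.createMediaStreamSource(stream);
const delay = ctx.createDelay(1.0);                 // max delay: 1 second
delay.delayTime.value = 1.0;
src.connect(delay).connect(ctx.destination);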
I want to capture video with the webcamera.
And here is the approach I found:
window.onload = function() {
  var video = document.getElementById('video');
  var videoStreamUrl = false;
  navigator.getUserMedia({ video: true }, function(stream) {
    videoStreamUrl = window.URL.createObjectURL(stream);
    video.src = videoStreamUrl;
  }, function() {
    console.log('error');
  });
};
but produces an error in the browser:
[Deprecation] URL.createObjectURL with media streams is deprecated and will be removed in M68, around July 2018. Please use HTMLMediaElement.srcObject instead. See https://www.chromestatus.com/features/5618491470118912 for more details.
How do I use HTMLMediaElement.srcObject for my purposes? Thanks for your time!
MediaElement.srcObject should allow Blobs, MediaSources and MediaStreams to be played in the MediaElement without the need to keep these sources in memory for the lifetime of the document the way blobURIs do.
(Currently no browser supports anything other than MediaStream, though...)
Indeed, when you do URL.createObjectURL(MediaStream), you are telling the browser that it should keep this source alive until you revoke the blobURI, or until the document dies.
In the case of a LocalMediaStream served from a capturing device (camera or microphone), this also means that the browser has to keep the connection to this device open.
Firefox initiated the deprecation of this feature a year or so ago, since srcObject can provide the same result in better ways that are easier to handle for everyone, and Chrome now seems to finally be following (not sure what the spec's status is on this).
So to use it, simply do
MediaElement.srcObject = MediaStream;
Also note that the API you are using is itself deprecated (and not only in FF), and you shouldn't use it anymore. Indeed, the correct API to capture MediaStreams from user media devices is MediaDevices.getUserMedia.
This API now returns a Promise which gets resolved to the MediaStream.
So a complete correction of your code would be
var video = document.getElementById('video');

navigator.mediaDevices.getUserMedia({
    video: true
  })
  .then(function(stream) {
    video.srcObject = stream;
  })
  .catch(function(error) {
    console.log('error', error);
  });
<video id="video" autoplay></video>
Or as a fiddle, since StackSnippets' overprotected iframe may not deal well with gUM.
I use WebRTC (getUserMedia) for recording sound and uploading it to a backend server. All works well, except I am unable to determine the microphone type (is it a built-in mic, a USB mic, a headset mic, something else?).
Does anybody know how I can detect the type?
You can use navigator.mediaDevices.enumerateDevices() to list the user's cameras and microphones, and try to infer types from their labels (there's no mic-type field unfortunately).
The following code works in Firefox 39 and Chrome 45 *:
var stream;
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(s => (stream = s), e => console.log(e.message))
  .then(() => navigator.mediaDevices.enumerateDevices())
  .then(devices => {
    // stop the tracks once we have the labels (MediaStream.stop() is gone)
    stream && stream.getTracks().forEach(track => track.stop());
    console.log(devices.length + " devices.");
    devices.forEach(d => console.log(d.kind + ": " + d.label));
  })
  .catch(e => console.log(e));

var console = { log: msg => div.innerHTML += msg + "<br>" };
<div id="div"></div>
In Firefox on my system, this produces:
5 devices.
videoinput: Logitech Camera
videoinput: FaceTime HD Camera (Built-in)
audioinput: default (Logitech Camera)
audioinput: Built-in Microphone
audioinput: Logitech Camera
Now, there are some caveats: by spec, the labels only show if device access is granted, which is why the snippet asks for it (try it both ways).
Furthermore, Chrome 45 requires persistent permissions (a bug?), which are not available over insecure HTTP, so you may need to reload this question over HTTPS first to see the labels. If you do that, don't forget to revoke access in the URL bar afterwards, or Chrome will persist it, which is probably a bad idea on Stack Overflow!
Alternatively, try https://webrtc.github.io/samples/src/content/devices/input-output which works in regular Chrome thanks to the adapter.js polyfill, but requires you to grant persistent permission and reload the page before you see labels (because of how it was written).
(*) EDIT: Apparently, enumerateDevices just got put back under an experimental flag in Chrome 45, so you need to enable it as explained here. Sorry about that. Hopefully it won't stay that way for long.