Web Audio API - Stereo to Mono - javascript

I need to convert a stereo input stream (channelCount: 2) coming from chrome.tabCapture.capture to a mono stream and send it to a server, while keeping the original audio unchanged.
I've tried several things but the destination.stream always has 2 channels.
const context = new AudioContext()
const splitter = context.createChannelSplitter(1)
const merger = context.createChannelMerger(1)
const source = context.createMediaStreamSource(stream)
const dest = context.createMediaStreamDestination()
splitter.connect(merger)
source.connect(splitter)
source.connect(context.destination) // audio unchanged
merger.connect(dest) // mono audio sent to "dest"
console.log(dest.stream.getAudioTracks()[0].getSettings()) // channelCount: 2
I've also tried this:
const context = new AudioContext()
const merger = context.createChannelMerger(1)
const source = context.createMediaStreamSource(stream)
const dest = context.createMediaStreamDestination()
source.connect(context.destination)
source.connect(merger)
merger.connect(dest)
console.log(dest.stream.getAudioTracks()[0].getSettings()) // channelCount: 2
and this:
const context = new AudioContext()
const source = context.createMediaStreamSource(stream)
const dest = context.createMediaStreamDestination({
  channelCount: 1,
  channelCountMode: 'explicit'
})
source.connect(context.destination)
source.connect(dest)
console.log(dest.stream.getAudioTracks()[0].getSettings()) // channelCount: 2
There has to be an easy way to achieve this...
Thanks!

There is a bug in Chrome that requires audio to actually flow through the graph before the reported channelCount gets updated; it stays at the default of 2 until then.
The following example assumes that the AudioContext is running. If it isn't allowed to start on its own, calling resume() in response to a user action should work.
const audioContext = new AudioContext();
const sourceNode = new MediaStreamAudioSourceNode(
  audioContext,
  { mediaStream }
);
// Force the destination to a single channel; the stereo input gets downmixed.
const destinationNode = new MediaStreamAudioDestinationNode(
  audioContext,
  { channelCount: 1 }
);
sourceNode.connect(destinationNode);
// Give the audio a moment to flow before reading the track settings.
setTimeout(() => {
  console.log(destinationNode.stream.getAudioTracks()[0].getSettings());
}, 100);
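If the context starts out suspended because of the autoplay policy, a minimal sketch of the resume() call mentioned above, tied to a hypothetical button click as the user gesture, could look like this:
// Hypothetical #start button; any user gesture works to resume the context.
document.querySelector('#start').addEventListener('click', async () => {
  if (audioContext.state === 'suspended') {
    await audioContext.resume();
  }
  // Once audio is flowing, the track should report channelCount: 1.
  console.log(destinationNode.stream.getAudioTracks()[0].getSettings());
});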

Related

Procedural Audio using MediaStreamTrack

I want to encode a video (from a canvas) and add procedural audio to it.
The encoding can be accomplished with MediaRecorder that takes a MediaStream.
For the stream, I want to obtain the video part from a canvas, using the canvas.captureStream() call.
I want to add an audio track to the stream, but instead of microphone input I want to generate the samples on the fly; for simplicity's sake, let's assume it writes out a sine wave.
How can I create a MediaStreamTrack that generates procedural audio?
The Web Audio API has a createMediaStreamDestination() method, which returns a MediaStreamAudioDestinationNode. Connect your audio graph to that node, and its stream property gives you a MediaStream fed by the audio context's output.
document.querySelector("button").onclick = (evt) => {
const duration = 5;
evt.target.remove();
const audioContext = new AudioContext();
const osc = audioContext.createOscillator();
const destNode = audioContext.createMediaStreamDestination();
const { stream } = destNode;
osc.connect(destNode);
osc.connect(audioContext.destination);
osc.start(0);
osc.frequency.value = 80;
osc.frequency.exponentialRampToValueAtTime(440, audioContext.currentTime+10);
osc.stop(duration);
// stream.addTrack(canvasStream.getVideoTracks()[0]);
const recorder = new MediaRecorder(stream);
const chunks = [];
recorder.ondataavailable = ({data}) => chunks.push(data);
recorder.onstop = (evt) => {
const el = new Audio();
const [{ type }] = chunks; // for Safari
el.src = URL.createObjectURL(new Blob(chunks, { type }));
el.controls = true;
document.body.append(el);
};
recorder.start();
setTimeout(() => recorder.stop(), duration * 1000);
console.log(`Started recording, please wait ${duration}s`);
};
<button>begin</button>
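To pair this with the canvas video the question asks about, the commented-out addTrack line above is the hook; a rough sketch, assuming a canvas that something else is already drawing into, would be to insert this just before the MediaRecorder is created:
// Hypothetical canvas that is being drawn to elsewhere in the page.
const canvasStream = canvas.captureStream(30); // 30 fps
stream.addTrack(canvasStream.getVideoTracks()[0]);
// The MediaRecorder created from `stream` afterwards then records audio + video.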

Getting "No audio tracks in MediaStream" issue in Firefox

I am capturing the user's screen and audio using getDisplayMedia and getUserMedia and am able to record the complete screen capture. But this works only on Chrome, not on Firefox. When I run my application on Firefox it throws 'DOMException: AudioContext.createMediaStreamSource: No audio tracks in MediaStream'. Below is my code snippet. I have the latest version of both browsers installed. Any help would be appreciated. Thanks in advance.
Note: it throws the error on the line context.createMediaStreamSource(desktopStream).
async function captureScreen() {
  desktopStream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
  microPhoneStream = await navigator.mediaDevices.getUserMedia({ video: false, audio: true });
  const tracks = [
    ...desktopStream.getVideoTracks(),
    ...mergeAudioStreams(desktopStream, microPhoneStream)
  ];
  stream = new MediaStream(tracks);
  var options = { mimeType: "video/webm; codecs=opus,vp8" };
  startRecording(stream, options);
  ....
  ....
  ....
}
// merges two audio streams into one
const mergeAudioStreams = (desktopStream, microPhoneStream) => {
  const context = new AudioContext();
  try {
    const source1 = context.createMediaStreamSource(desktopStream);
    const source2 = context.createMediaStreamSource(microPhoneStream);
    const destination = context.createMediaStreamDestination();
    const desktopGain = context.createGain();
    const voiceGain = context.createGain();
    desktopGain.gain.value = 0.7;
    voiceGain.gain.value = 0.7;
    source1.connect(desktopGain).connect(destination);
    source2.connect(voiceGain).connect(destination);
    return destination.stream.getAudioTracks();
  }
  catch (err) {
    console.log(err);
  }
};
Firefox doesn't currently support capturing audio using getDisplayMedia. There's a feature request for it.
What you could do is check whether your streams have any audio tracks before creating the audio node, like this:
const destination = context.createMediaStreamDestination();
if (desktopStream.getAudioTracks().length) {
  const source1 = context.createMediaStreamSource(desktopStream);
  const desktopGain = context.createGain();
  desktopGain.gain.value = 0.7;
  source1.connect(desktopGain).connect(destination);
}
if (microPhoneStream.getAudioTracks().length) {
  const source2 = context.createMediaStreamSource(microPhoneStream);
  const voiceGain = context.createGain();
  voiceGain.gain.value = 0.7;
  source2.connect(voiceGain).connect(destination);
}
return destination.stream.getAudioTracks();

Trying to make an audio frequency analyser in JavaScript

Adobe Audition has a frequency analyser, but someone has to read it by eye.
I want to build a similar function in JavaScript: load an .mp3 file, run it, and get back the frequency that occurs most (the dominant frequency).
Below, I analysed a 600 Hz pure tone in Adobe Audition.
[screenshot of the Adobe Audition frequency analysis]
const AudioContext = window.AudioContext || window.webkitAudioContext
const audioCtx = new AudioContext()
const gainNode = audioCtx.createGain()
const analyser = audioCtx.createAnalyser()
const audio = new Audio('./600hz.mp3')
const source = audioCtx.createMediaElementSource(audio)
source.connect(analyser)
analyser.connect(audioCtx.destination)
gainNode.gain.value = 1
analyser.fftSize = 1024
analyser.connect(gainNode)
// getByteFrequencyData fills frequencyBinCount (= fftSize / 2) bins
const fftArray = new Uint8Array(analyser.frequencyBinCount)
analyser.getByteFrequencyData(fftArray)
audio.play()
const timer = setInterval(() => {
  const fftArray = new Uint8Array(analyser.frequencyBinCount)
  analyser.getByteFrequencyData(fftArray)
  console.log(fftArray)
}, 100)
setTimeout(() => {
  clearInterval(timer)
}, 2000)
In fact, I don't know much about audio.
I hope someone can give me some advice.
Thanks.
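To extract the dominant frequency from that data, a minimal sketch (assuming the analyser and audioCtx set up above) is to find the loudest bin and convert its index to Hz; each bin spans sampleRate / fftSize Hz:
// Find the loudest FFT bin and convert its index to a frequency in Hz
const data = new Uint8Array(analyser.frequencyBinCount)
analyser.getByteFrequencyData(data)
let peakIndex = 0
for (let i = 1; i < data.length; i++) {
  if (data[i] > data[peakIndex]) peakIndex = i
}
const peakHz = peakIndex * audioCtx.sampleRate / analyser.fftSize
console.log(`dominant frequency ≈ ${peakHz} Hz`) // should land near 600 Hz for a 600 Hz tone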

WebRTC transmit high audio stream sample rate

Given a WebRTC PeerConnection between two clients, one client is trying to send an audio MediaStream to another.
If this MediaStream is an oscillator at 440 Hz, everything works fine: the audio is very crisp and the transmission goes through correctly.
However, if the audio is at 20000 Hz, the audio is very noisy and crackly - I expect to hear nothing, but I hear a lot of noise instead.
I believe this might be a problem with the sample rate used in the connection; maybe it's not sending the audio at 48000 samples/second like I expect.
Is there a way for me to increase the sample rate?
Here is a fiddle to reproduce the issue:
https://jsfiddle.net/mb3c5gw1/9/
Minimal reproduction code including a visualizer:
<button id="btn">start</button>
<canvas id="canvas"></canvas>
<script>class OscilloMeter{constructor(a){this.ctx=a.getContext("2d")}listen(a,b){function c(){g.getByteTimeDomainData(j),d.clearRect(0,0,e,f),d.beginPath();let a=0;for(let c=0;c<h;c++){const e=j[c]/128;var b=e*f/2;d.lineTo(a,b),a+=k}d.lineTo(canvas.width,canvas.height/2),d.stroke(),requestAnimationFrame(c)}const d=this.ctx,e=d.canvas.width,f=d.canvas.height,g=b.createAnalyser(),h=g.fftSize=256,j=new Uint8Array(h),k=e/h;d.lineWidth=2,a.connect(g),c()}}</script>
btn.onclick = e => {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamDestination();
  const oscillator = ctx.createOscillator();
  oscillator.type = 'sine';
  oscillator.frequency.setValueAtTime(20000, ctx.currentTime); // value in hertz
  oscillator.connect(source);
  oscillator.start();
  // a visual cue of AudioNode out (uses an AnalyserNode)
  const meter = new OscilloMeter(canvas);
  const pc1 = new RTCPeerConnection(),
    pc2 = new RTCPeerConnection();
  pc2.ontrack = ({ track }) => {
    const endStream = new MediaStream([track]);
    const src = ctx.createMediaStreamSource(endStream);
    const audio = new Audio();
    audio.srcObject = endStream;
    meter.listen(src, ctx);
    audio.play();
  };
  pc1.onicecandidate = e => pc2.addIceCandidate(e.candidate);
  pc2.onicecandidate = e => pc1.addIceCandidate(e.candidate);
  pc1.oniceconnectionstatechange = e => console.log(pc1.iceConnectionState);
  pc1.onnegotiationneeded = async e => {
    try {
      await pc1.setLocalDescription(await pc1.createOffer());
      await pc2.setRemoteDescription(pc1.localDescription);
      await pc2.setLocalDescription(await pc2.createAnswer());
      await pc1.setRemoteDescription(pc2.localDescription);
    } catch (e) {
      console.error(e);
    }
  };
  const stream = source.stream;
  pc1.addTrack(stream.getAudioTracks()[0], stream);
};
Looking around in the WebRTC demos I found this: https://webrtc.github.io/samples/src/content/peerconnection/audio/. In that example they show a dropdown where you can set the audio codec. I think this is your solution.
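The same codec selection the demo's dropdown does can also be sketched in code with setCodecPreferences, here assuming you want Opus listed first (call it on pc1's transceiver after addTrack, before the offer is created):
// Reorder the negotiated codec list so the preferred codec comes first.
const { codecs } = RTCRtpReceiver.getCapabilities('audio');
const opus = codecs.filter(c => c.mimeType === 'audio/opus');
const others = codecs.filter(c => c.mimeType !== 'audio/opus');
const [transceiver] = pc1.getTransceivers();
transceiver.setCodecPreferences([...opus, ...others]);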

I am implementing Screen Capture API and I want to combine both primary and extended monitor for streaming and taking screenshots. How can I do that?

I am implementing the Screen Capture API and I want to combine both the primary and the extended monitor for streaming and taking screenshots. Here is how I capture one screen and save a screenshot; I want to enable the same for the extended display as well.
screenshot = async () => {
  const mediaDevices = navigator.mediaDevices as any;
  let displayMediaOptions = {
    video: {
      mediaSource: 'screen'
    }
  }
  const stream = await mediaDevices.getDisplayMedia(displayMediaOptions);
  console.log('stream', stream.getTracks())
  const track = stream.getVideoTracks()[0]
  // init Image Capture and not Video stream
  console.log(track)
  const imageCapture = new ImageCapture(track)
  const bitmap = await imageCapture.grabFrame()
  // destroy video track to prevent more recording / mem leak
  track.stop()
  const canvas = document.getElementById('fake')
  // this could be a document.createElement('canvas') if you want
  // draw weird image type to canvas so we can get a useful image
  canvas.width = bitmap.width
  canvas.height = bitmap.height
  const context = canvas.getContext('2d')
  context.drawImage(bitmap, 0, 0, bitmap.width, bitmap.height)
  const image = canvas.toDataURL('image/jpeg') // encode as JPEG to match the File type below
  // this turns the base64 string into a [File] object
  const res = await fetch(image)
  const buff = await res.arrayBuffer()
  // clone so we can rename, and put into array for easy processing
  const file = [
    new File([buff], `photo_${new Date()}.jpg`, {
      type: 'image/jpeg',
    }),
  ]
  return file
}
I can get the screenshot from a single screen and attach it to a canvas. How can I do it for the extended display as well?
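Current browsers only let the user pick a single surface per getDisplayMedia() call, so one possible approach (a sketch, not a confirmed solution) is to call it twice, once per monitor, and compose the two frames side by side onto one canvas:
// Sketch: grab one frame per getDisplayMedia() call (the user picks the
// monitor in each picker), then stitch the frames onto a single canvas.
const grabMonitor = async () => {
  const mediaDevices = navigator.mediaDevices as any;
  const stream = await mediaDevices.getDisplayMedia({ video: true });
  const track = stream.getVideoTracks()[0];
  const bitmap = await new ImageCapture(track).grabFrame();
  track.stop(); // stop capture once we have the frame
  return bitmap;
};
combinedScreenshot = async () => {
  const primary = await grabMonitor();  // user selects the primary monitor
  const extended = await grabMonitor(); // user selects the extended monitor
  const canvas = document.createElement('canvas');
  canvas.width = primary.width + extended.width;
  canvas.height = Math.max(primary.height, extended.height);
  const context = canvas.getContext('2d');
  context.drawImage(primary, 0, 0);
  context.drawImage(extended, primary.width, 0);
  return canvas.toDataURL('image/jpeg');
};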
