Get MediaStreamTrack(audio) from Video - javascript

I want to record audio from video element alongside recording from canvas.
I have
var stream = canvas.captureStream(29);
Now I am adding the video's audio track to the stream:
var vStream = video.captureStream();
stream.addTrack(vStream.getAudioTracks()[0]);
But this slows performance down with every video added, since captureStream() is very heavy on a video element, and it also requires a flag to be switched on in Chrome. Is there a way of creating an audio-only MediaStream from a video element without using captureStream()?

Yes, you can use the Web Audio API's createMediaElementSource method, which will grab the audio from your media element, and then the createMediaStreamDestination method, which will create a MediaStreamDestination node containing a MediaStream.
You then just have to connect it all, and you've got your MediaStream with your MediaElement's audio.
// wait for the video to start playing
vid.play().then(_ => {
  var ctx = new AudioContext();
  // create a source node from the <video>
  var source = ctx.createMediaElementSource(vid);
  // now a MediaStream destination node
  var stream_dest = ctx.createMediaStreamDestination();
  // connect the source to the MediaStream destination
  source.connect(stream_dest);
  // grab the real MediaStream
  out.srcObject = stream_dest.stream;
  out.play();
});
The video's audio will be streamed to this audio element:
<audio id="out" controls></audio>
The original video element:
<video id="vid" crossOrigin="anonymous" src="https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4?dl=0" autoplay></video>
Note that you could also connect more sources to this stream, and that you can combine it with another video stream using the new MediaStream([stream1, stream2]) constructor (it's currently the only way to combine different streams in Firefox, until this bug is fixed, which should be soon).
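Applied to the original question, a combined stream could be built roughly like this. This is only a minimal sketch, assuming the canvas and vid variables from the snippets above; the MediaRecorder mention at the end is just illustrative.
// combine the canvas capture stream with the <video>'s audio,
// routed through the Web Audio API instead of video.captureStream()
var canvasStream = canvas.captureStream(30);       // video track only
var ctx = new AudioContext();
var source = ctx.createMediaElementSource(vid);    // grabs the <video>'s audio
var dest = ctx.createMediaStreamDestination();
source.connect(dest);                              // feed the MediaStream destination
source.connect(ctx.destination);                   // optional: keep the audio audible
var combined = new MediaStream(
  canvasStream.getVideoTracks().concat(dest.stream.getAudioTracks())
);
// `combined` can now be handed to a MediaRecorder or an RTCPeerConnection.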

Related

Is there a way I can play video audio out of either the left or right audio channels?

I am creating a custom video player using JavaScript, HTML and CSS. The video player needs a feature that lets the user swap between the left and right audio channels. However, the video element's properties do not seem to support L-R channel switching. Is there a possible way I can work around this?
UPDATE
I had originally not phrased the question as I should have (my bad). What I am trying to do is access the sound on one channel and redirect it to play through both speakers. E.g. if I had a single audio file in which rain-forest noises and bird cheeps play through the left channel and frog croaks play through the right channel, I would want to play only what is on the right channel, i.e. the frog croaks.
UPDATE
I am attempting to split the channels with audio nodes, but I am struggling with the implementation; the example I found is for an AudioBuffer and does not use a file. How do I get the audio context to use the audio file, and route the manipulated output back so the same audio element remains playable? Ideally I need functions that can switch off either channel, e.g. playFrogNoises() and playBirdNoises().
<body>
<audio id="myAudio" src="audio/Squirrel Nut Zippers - Trou Macacq.mp3"></audio>
</body>
<script>
var myAudio = document.getElementById('myAudio')
var context = new AudioContext();
var audioSource = context.createMediaElementSource(myAudio)
var splitter = context.createChannelSplitter(2);
audioSource.connect(splitter);
var merger = context.createChannelMerger(2)
//REDUCE VOLUME OF LEFT CHANNEL ONLY
var gainNode = context.createGain();
gainNode.gain.setValueAtTime(0.5, context.currentTime);
splitter.connect(gainNode, 0);
//CONNECT SPLITTER BACK TO SECOND INPUT OF THE MERGER
gainNode.connect(merger, 0, 1);
splitter.connect(merger, 1, 0);
var destination = context.createMediaStreamDestination();
merger.connect(destination)
myAudio.play()
</script>
You have to use the Web Audio API with a panner node (or whichever node you need; basically you can do anything with your audio channels that way).
PannerNode
<video id="my-video" controls
src="myvideo.mp4" type="video/mp4">
</video>
const context = new AudioContext(),
audioSource = context.createMediaElementSource(document.getElementById("my-video")),
panner = context.createStereoPanner();
audioSource.connect(panner);
panner.connect(context.destination);
// Configure the panner: -1 is full left, 1 is full right.
panner.pan.setValueAtTime(-1, context.currentTime);
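If the goal from the update is specifically to play only one channel through both speakers (and to toggle it with functions like playFrogNoises()/playBirdNoises()), a channel splitter/merger outline along these lines could work. This is an untested sketch; the function names simply mirror the ones proposed in the question.
const context = new AudioContext();
const source = context.createMediaElementSource(myAudio);
const splitter = context.createChannelSplitter(2);
const leftGain = context.createGain();   // left channel (bird cheeps)
const rightGain = context.createGain();  // right channel (frog croaks)
const merger = context.createChannelMerger(2);
source.connect(splitter);
splitter.connect(leftGain, 0);           // splitter output 0 = left
splitter.connect(rightGain, 1);          // splitter output 1 = right
// feed each selected channel into BOTH merger inputs so it plays on both speakers
leftGain.connect(merger, 0, 0);
leftGain.connect(merger, 0, 1);
rightGain.connect(merger, 0, 0);
rightGain.connect(merger, 0, 1);
merger.connect(context.destination);
function playFrogNoises() {   // keep only the right channel
  leftGain.gain.value = 0;
  rightGain.gain.value = 1;
}
function playBirdNoises() {   // keep only the left channel
  leftGain.gain.value = 1;
  rightGain.gain.value = 0;
}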

Web Audio audiocontext createMediaStreamSource stuttering

I want to mix several audio media streams into one stream. I've been doing this with the Web Audio AudioContext and createMediaStreamSource.
But the final mixed audio is stuttering.
Does anyone have an idea how to optimize this to avoid the stuttering?
// init audio context
var audioContext = new AudioContext({ latencyHint: 0 });
var audioDestination = audioContext.createMediaStreamDestination();
// add audio streams
audioContext.createMediaStreamSource(audioStream1).connect(audioDestination);
audioContext.createMediaStreamSource(audioStream2).connect(audioDestination);
audioContext.createMediaStreamSource(audioStream3).connect(audioDestination);
audioContext.createMediaStreamSource(audioStream4).connect(audioDestination);
// get mixed audio stream tracks
var audioTrack = audioDestination.stream.getTracks()[0];
// get video track
var videoTrack = videoStream.getTracks()[0];
// combine video and audio tracks into single stream.
var finalStream = new MediaStream([videoTrack, audioTrack]);
// assign to video element
el_video.srcObject = finalStream;
You could try setting the latencyHint to 'playback' like this:
const audioContext = new AudioContext({ latencyHint: 'playback' });
This allows the browser to add a bit of latency to the audio graph which can help on underpowered devices. Setting the latencyHint to 0 on the other hand will tell the browser that it should do things as fast as possible which increases the risk of dropouts.
Having said that, the latencyHint is only a hint. The browser may very well ignore it. You can check what the browser is actually doing by inspecting the baseLatency property.
console.log(audioContext.baseLatency);
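If you want to see which actual latency each hint produces on a given machine, a quick check like this (just a sketch) can help:
['interactive', 'balanced', 'playback'].forEach(hint => {
  const ctx = new AudioContext({ latencyHint: hint });
  console.log(hint, ctx.baseLatency); // seconds of processing latency the browser chose
  ctx.close();
});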

WebRTC MediaRecorder on remote stream cuts when the stream hangs

The Problem:
During a WebRTC unicast video conference, I can successfully stream video from a mobile device's webcam to a laptop/desktop. I would like to record the remote stream on the laptop/desktop side. (The setup is that a mobile device streams to a laptop/desktop).
However, it is usual for the video stream to hang from time to time. That's not a problem, for the "viewer" side will catch up. However, the recording of the remote stream will stop at the first hang.
Minimal, Stripped-Down Implementation (Local Recording):
I can successfully record the local stream from navigator.mediaDevices.getUserMedia() as follows:
const recordedChunks = [];
navigator.mediaDevices.getUserMedia({
  video: true,
  audio: false
}).then(stream => {
  const localVideoElement = document.getElementById('local-video');
  localVideoElement.srcObject = stream;
  return stream;
}).then(stream => {
  // mimeType belongs in the MediaRecorder constructor; start() only takes a timeslice
  const mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp9' });
  mediaRecorder.ondataavailable = (event) => {
    if (event.data && event.data.size > 0) {
      recordedChunks.push(event.data);
    }
  };
  mediaRecorder.start(10);
});
I can download this quite easily as follows:
const blob = new Blob(recordedChunks, { type: 'video/webm' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
document.body.appendChild(a);
a.style = 'display: none';
a.href = url;
a.download = 'test.webm';
a.click();
window.URL.revokeObjectURL(url);
Minimal, Stripped-Down Implementation (Remote Recording):
The setup I am using requires recording the remote stream, not the local stream, because iOS Safari does not support the MediaRecorder API. I included the above to show that recording works on the local side. The implementation of the remote-stream recording is no different, except that I manually add a 0 Hz audio track to the video, because Chrome appears to have a bug where it won't record without an audio track.
const mediaStream = new MediaStream();
const audioContext = new AudioContext();
const destinationNode = audioContext.createMediaStreamDestination();
const oscillatorNode = audioContext.createOscillator();
oscillatorNode.frequency.setValueAtTime(0, audioContext.currentTime);
oscillatorNode.connect(destinationNode);
oscillatorNode.start(); // start the (silent) oscillator so the audio track is live
const audioTrack = destinationNode.stream.getAudioTracks()[0];
const videoTrack = remoteStream.getVideoTracks()[0]; // Defined somewhere else.
mediaStream.addTrack(videoTrack);
mediaStream.addTrack(audioTrack);
And then I perform the exact same operations that I do on the local stream example above to record the mediaStream variable.
As mentioned, at the first point where the remote stream hangs (due to network latency, perhaps), the remote recording ceases, so that on download the .webm file (converted to .mp4 via ffmpeg) only runs up to the point where the first hang occurred.
Attempts to Mitigate:
One mitigation I have tried is, rather than recording the remote stream obtained in the WebRTC ontrack callback, to record the stream from the remote video element instead, via remoteVideoElement.captureStream(). This does not fix the issue.
Any help would be much appreciated. Thank you.
Hopefully, someone is able to post an actual fix for you. In the meantime, here is a nasty, inefficient, totally-not-recommended workaround:
1. Route the incoming MediaStream to a video element.
2. Use requestAnimationFrame() to schedule drawing frames to a canvas. (Note that this removes any sense of genlock from the original video, and is not something you want to do. Unfortunately, we don't have a way of knowing when incoming frames occur, as far as I know.)
3. Use the CanvasCaptureMediaStream as the video source.
4. Recombine the video track from the CanvasCaptureMediaStream with the audio track from the original MediaStream in a new MediaStream.
5. Use this new MediaStream for MediaRecorder. (A rough sketch follows below.)
I've done this in past projects where I needed to programmatically manipulate the audio and video. It works!
One big caveat is that there's a bug in Chrome where, even though a capture stream is attached to the canvas, the canvas won't be updated if the tab isn't active/visible. And, of course, requestAnimationFrame is severely throttled at best if the tab isn't active, so you need another frame clock source. (I used audio processors, ha!)
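For reference, a rough, untested sketch of that canvas-relay pipeline; remoteStream and the element id are assumptions for illustration:
const remoteVideo = document.getElementById('remote-video');
remoteVideo.srcObject = remoteStream;
remoteVideo.play();
const canvas = document.createElement('canvas');
const ctx2d = canvas.getContext('2d');
remoteVideo.addEventListener('loadedmetadata', () => {
  canvas.width = remoteVideo.videoWidth;
  canvas.height = remoteVideo.videoHeight;
});
function draw() {
  if (canvas.width) {
    ctx2d.drawImage(remoteVideo, 0, 0, canvas.width, canvas.height);
  }
  requestAnimationFrame(draw); // throttled in background tabs, see the caveat above
}
requestAnimationFrame(draw);
const canvasStream = canvas.captureStream(30);
const recordStream = new MediaStream([
  ...canvasStream.getVideoTracks(),
  ...remoteStream.getAudioTracks() // or the silent oscillator track from the question
]);
const recorder = new MediaRecorder(recordStream); // record this instead of the raw remote stream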

AudioContext createMediaElementSource from video fetching data from blob

In JavaScript, how can I connect an audio context to a video that fetches its data from a blob (the video uses the MediaStream capabilities)? No matter what I do, the audio context returns an empty buffer. Is there any way to connect the two?
createMediaElementSource is probably not the right kind of processing node for this use case.
Rather, you are better off using a createMediaStreamSource node from the Web Audio API if you are trying to handle a live audio stream, not a fixed media source.
The createMediaStreamSource() method of the AudioContext Interface is used to create a new MediaStreamAudioSourceNode object, given a media stream (say, from a navigator.getUserMedia instance), the audio from which can then be played and manipulated.
The link has a more detailed example. However, the main difference with this MediaStreamAudioSourceNode is that it can only be created from a MediaStream that you get from a media server or locally (through getUserMedia). In my experience, I couldn't find any way to do it using only the blob URL from the <video> tag.
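A minimal sketch of that createMediaStreamSource approach, using getUserMedia as the stream source purely for illustration:
const audioContext = new AudioContext();
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
  const source = audioContext.createMediaStreamSource(stream);
  const analyser = audioContext.createAnalyser();
  source.connect(analyser);                    // analyse the live stream
  // source.connect(audioContext.destination); // uncomment if you also want playback
});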
While this is an old question, I've searched for something similar and found a solution I want to share.
To connect the Blob, you may use a new Response instance. Here is an example for creating a waveform visualizer.
// (run inside an async function, since await is used below)
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();
var dataArray = new Uint8Array(analyser.frequencyBinCount);
var arrayBuffer = await new Response(yourBlob).arrayBuffer();
var audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
var source = audioContext.createBufferSource();
source.buffer = audioBuffer;
source.connect(analyser);
source.start(0);
Note: yourBlob needs to be a Blob instance.
You may find this fiddle useful; it records video and audio for 5 seconds, turns the recording into a Blob and then plays it back, including the audio wave visualization.
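For the visualization itself, a simple polling loop over the analyser set up above could look like this (sketch only; the actual canvas drawing is left out):
function drawFrame() {
  analyser.getByteFrequencyData(dataArray); // fills dataArray with the current spectrum
  // ...render dataArray to a canvas here...
  requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);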

chrome audio analyzer breaking on audio switch

I'm creating an audio visualizer with WebGL and have been integrating SoundCloud tracks into it. I now want to be able to switch tracks, but I can either get my visualizer to work and the audio to break, or get the audio to work and the visualizer to break.
The two ways that I've been able to make it work are:
Audio working:
1. delete the audio element
2. append a new audio element to the body
3. trigger play
Visualizer working:
1. stop the audio
2. change the source
3. trigger play
When I have the visualizer working, the audio is totally messed up. The buffers just sound wrong, and the audio has artifacts in it (noise, beeps and bloops).
When I have the audio working and I call analyser.getByteFrequencyData, I get an array of 0's. I presume this is because the analyser is not hooked up correctly.
The code for the audio-working approach looks like this:
$('#music').trigger("pause");
currentTrackNum = currentTrackNum + 1;
var tracks = $("#tracks").data("tracks")
var currentTrack = tracks[parseInt(currentTrackNum)%tracks.length];
// Begin audio switching
analyser.disconnect();
$('#music').remove();
$('body').append('<audio id="music" preload="auto" src="'+ currentTrack["download"].toString() + '?client_id=4c6187aeda01c8ad86e556555621074f"></audio>');
startWebAudio();
(I don't think I need the pause call. Do I?)
When I want the visualizer to work, I use this code:
currentTrackNum = currentTrackNum + 1;
var tracks = $("#tracks").data("tracks")
var currentTrack = tracks[parseInt(currentTrackNum)%tracks.length];
// Begin audio switching
$("#music").attr("src", currentTrack["download"].toString() + "?client_id=4c6187aeda01c8ad86e556555621074f");
$("#songTitle").text(currentTrack["title"]);
$('#music').trigger("play");
The startWebAudio function looks like this.
function startWebAudio() {
  // Get our <audio> element
  var audio = document.getElementById('music');
  // Create a new audio context (that allows us to do all the Web Audio stuff)
  var audioContext = new webkitAudioContext();
  // Create a new analyser
  analyser = audioContext.createAnalyser();
  // Create a new audio source from the <audio> element
  var source = audioContext.createMediaElementSource(audio);
  // Connect the output from the audio source to the input of the analyser
  source.connect(analyser);
  // Connect the audio output of the analyser to the audioContext destination, i.e. the speakers.
  // (The analyser takes the output of the <audio> element and swallows it. If we want to hear
  // the sound of the <audio> element then we need to re-route the analyser's output to the speakers.)
  analyser.connect(audioContext.destination);
  // Get the <audio> element started
  audio.play();
  var freqByteData = new Uint8Array(analyser.frequencyBinCount);
}
My suspicion is that the analyser isn't hooked up correctly, but I can't figure out what to look at to confirm it. I have looked at the freqByteData output, and it seems to indicate that something isn't hooked up right. The analyser variable is global. If you would like more context, the code is on GitHub.
You can only create a single AudioContext per window. You should also be disconnecting the MediaElementSource when you're finished using it.
Here's an example that I used to answer a similar question: http://jsbin.com/acolet/1/
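A sketch of what that single-AudioContext setup might look like, creating the context, analyser and MediaElementSource once and only swapping src when changing tracks:
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();
var audio = document.getElementById('music');
var source = audioContext.createMediaElementSource(audio); // create this ONCE per element
source.connect(analyser);
analyser.connect(audioContext.destination);
function switchTrack(url) {
  audio.pause();
  audio.src = url;   // reuse the same element, source node and analyser
  audio.play();
}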
