chrome audio analyzer breaking on audio switch - javascript

I'm creating an audio visualizer with WebGL, and have been integrating SoundCloud tracks into it. I now want to be able to switch tracks, but I can either get my visualizer to work and the audio to break, or I can get the audio to work and the visualizer to break.
The two ways that I've been able to make it work are:

Audio working:
- delete audio element
- append new audio element to body
- trigger play

Visualizer working:
- stop audio
- change source
- trigger play
When I have the visualizer working, the audio is totally messed up. The buffers just sound wrong, and the audio has artifacts in it (noise, beeps and bloops).
When I have the audio working, when I call analyser.getByteFrequencyData, I get an array of 0's. I presume this is because the analyser is not hooked up correctly.
The code for the audio working looks like
$('#music').trigger("pause");
currentTrackNum = currentTrackNum + 1;
var tracks = $("#tracks").data("tracks")
var currentTrack = tracks[parseInt(currentTrackNum)%tracks.length];
// Begin audio switching
analyser.disconnect();
$('#music').remove();
$('body').append('<audio id="music" preload="auto" src="'+ currentTrack["download"].toString() + '?client_id=4c6187aeda01c8ad86e556555621074f"></audio>');
startWebAudio();
(I don't think I need the pause call. Do I?)
When I want the visualizer to work, I use this code:
currentTrackNum = currentTrackNum + 1;
var tracks = $("#tracks").data("tracks")
var currentTrack = tracks[parseInt(currentTrackNum)%tracks.length];
// Begin audio switching
$("#music").attr("src", currentTrack["download"].toString() + "?client_id=4c6187aeda01c8ad86e556555621074f");
$("#songTitle").text(currentTrack["title"]);
$('#music').trigger("play");
The startWebAudio function looks like this.
function startWebAudio() {
  // Get our <audio> element
  var audio = document.getElementById('music');
  // Create a new audio context (that allows us to do all the Web Audio stuff)
  var audioContext = new webkitAudioContext();
  // Create a new analyser
  analyser = audioContext.createAnalyser();
  // Create a new audio source from the <audio> element
  var source = audioContext.createMediaElementSource(audio);
  // Connect up the output from the audio source to the input of the analyser
  source.connect(analyser);
  // Connect up the audio output of the analyser to the audioContext destination, i.e. the speakers.
  // (The analyser takes the output of the <audio> element and swallows it. If we want to hear the
  // sound of the <audio> element then we need to re-route the analyser's output to the speakers.)
  analyser.connect(audioContext.destination);
  // Get the <audio> element started
  audio.play();
  var freqByteData = new Uint8Array(analyser.frequencyBinCount);
}
My suspicion is that the analyser isn't hooked up correctly, but I can't figure out what to look at to work it out. I have looked at the frequencyByteData output, and that seems to indicate that something isn't hooked up right. The analyser variable is global. If you would like more context for the code, here's where it is on GitHub

You can only create a single AudioContext per window. You should also be disconnecting the MediaElementSource when you're finished using it.
Here's an example that I used to answer a similar question: http://jsbin.com/acolet/1/
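In practice that means creating the AudioContext, analyser and MediaElementSource once, and only swapping the src of the same <audio> element when the track changes. Here's a minimal sketch of that idea against the code above (the switchTrack helper and its trackUrl parameter are illustrative, not part of the original code):

// Created exactly once, e.g. on page load or on the first user interaction.
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();
var source = audioContext.createMediaElementSource(document.getElementById('music'));
source.connect(analyser);
analyser.connect(audioContext.destination);

// When switching tracks, keep the same <audio> element and just swap its src,
// so the existing MediaElementSource keeps feeding the analyser.
function switchTrack(trackUrl) {
  var audio = document.getElementById('music');
  audio.pause();
  audio.src = trackUrl;
  audio.play();
}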

Related

Web Audio API: createMediaStreamDestination().stream - no sound

I'm stuck on a problem where, whenever I pass the stream from createMediaStreamDestination to an audio element's srcObject, no audio is played. My implementation is based on the response posted here: Combine setSinkId with stereoPanner?
Initially, I have an audio element in which I isolate the sound so that it only plays from the left speaker:
const audio = document.createElement('audio');
audio.src = audioUrl;
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
let destination = audioContext.destination;
panner.pan.value = -1;
source.connect(panner).connect(destination);
The above plays sound fine when I add audio.play(), but I want to be able to choose specifically which speakers the audio plays out of while keeping the panner changes. Since AudioContext doesn't offer any way of setting the sinkId yet, I created a new audio element and a MediaStreamDestination, and passed the MediaStream into the srcObject:
const audio = document.createElement('audio');
audio.src = audioUrl;
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
let destination = audioContext.createMediaStreamDestination();
panner.pan.value = -1;
source.connect(panner).connect(destination);
const outputAudio = new Audio();
outputAudio.srcObject = destination.stream;
outputAudio.setSinkId(audioSpeakerId);
outputAudio.play();
With the new code, however, when I start up my application, the outputAudio doesn't play any sound at all. Is there anything wrong with my code that is causing the outputAudio element not to play sound? I'm fairly new to the Web Audio API and I tried implementing the code from the mentioned Stack Overflow thread, but it doesn't seem to be working for me. Any help would be appreciated!
In the description of your first code block you mention that you also call audio.play() to start the audio. That's necessary for the second code block to work, too: you need to start both audio elements.
Generally calling play() on an audio element and creating a new AudioContext should ideally happen in response to a user action to make sure the browser's autoplay policy doesn't block the audio.
If all goes well the state of your AudioContext should be "running".
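Here's a minimal sketch of that, reusing the audio element and audioSpeakerId from the second code block (the start button and its id are assumptions made for the example, since play() should happen inside a user gesture):

document.getElementById('startButton').addEventListener('click', async () => {
  const audioContext = new AudioContext();
  const source = audioContext.createMediaElementSource(audio);
  const panner = audioContext.createStereoPanner();
  const destination = audioContext.createMediaStreamDestination();
  panner.pan.value = -1;
  source.connect(panner).connect(destination);

  const outputAudio = new Audio();
  outputAudio.srcObject = destination.stream;
  await outputAudio.setSinkId(audioSpeakerId);

  // Both elements have to be started: the source element feeds the graph,
  // the output element renders the MediaStream on the chosen device.
  await audio.play();
  await outputAudio.play();

  console.log(audioContext.state); // should log "running"
});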

Web Audio audiocontext createMediaStreamSource stuttering

I want to mix different audio media streams into one stream. I've been doing this with the Web Audio AudioContext and createMediaStreamSource.
But the final mixed audio is stuttering.
Does anyone have an idea how to optimize this to avoid the stuttering?
// init audio context
var audioContext = new AudioContext({ latencyHint: 0 });
var audioDestination = audioContext.createMediaStreamDestination();
// add audio streams
audioContext.createMediaStreamSource(audioStream1).connect(audioDestination);
audioContext.createMediaStreamSource(audioStream2).connect(audioDestination);
audioContext.createMediaStreamSource(audioStream3).connect(audioDestination);
audioContext.createMediaStreamSource(audioStream4).connect(audioDestination);
// get mixed audio stream tracks
var audioTrack = audioDestination.stream.getTracks()[0];
// get video track
var videoTrack = videoStream.getTracks()[0];
// combine video and audio tracks into single stream.
var finalStream = new MediaStream([videoTrack, audioTrack]);
// assign to video element
el_video.srcObject = finalStream;
You could try setting the latencyHint to 'playback' like this:
const audioContext = new AudioContext({ latencyHint: 'playback' });
This allows the browser to add a bit of latency to the audio graph which can help on underpowered devices. Setting the latencyHint to 0 on the other hand will tell the browser that it should do things as fast as possible which increases the risk of dropouts.
Having said that, the latencyHint is only a hint. The browser may very well ignore it. You can check what the browser is actually doing by inspecting the baseLatency property.
console.log(audioContext.baseLatency);

Get MediaStreamTrack(audio) from Video

I want to record audio from a video element alongside recording from a canvas.
I have
var stream = canvas.captureStream(29);
Now I am adding the audio track of the video to the stream.
var vStream = video.captureStream();
stream.addTrack(vStream.getAudioTracks()[0]);
But this slows down performance with every video added, as captureStream() is very heavy on video and it also requires a flag to be enabled in Chrome. Is there a way of creating an audio-only MediaStream from a video element without using captureStream()?
Yes, you can use the Web Audio API's createMediaElementSource method, which will grab the audio from your media element, and then the createMediaStreamDestination method, which will create a MediaStreamAudioDestinationNode that contains a MediaStream.
You then just have to connect it all, and you've got your MediaStream with your MediaElement's audio.
// wait for the video to start playing
vid.play().then(_ => {
  var ctx = new AudioContext();
  // create a source node from the <video>
  var source = ctx.createMediaElementSource(vid);
  // now a MediaStream destination node
  var stream_dest = ctx.createMediaStreamDestination();
  // connect the source to the MediaStream destination
  source.connect(stream_dest);
  // grab the real MediaStream
  out.srcObject = stream_dest.stream;
  out.play();
});
The video's audio will be streamed to this audio element: <br>
<audio id="out" controls></audio><br>
The original video element: <br>
<video id="vid" crossOrigin="anonymous" src="https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4?dl=0" autoplay></video>
Note that you could also connect more sources to this stream, and that you can combine it with another video stream by passing their tracks to the new MediaStream() constructor (it's currently the only way to combine different streams on FF, until this bug is fixed; that should be soon, though).
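As an illustration, here's a minimal sketch of combining the extracted audio track with a video track from another stream; the canvas element and the MediaRecorder usage are assumptions made for the example, not part of the snippet above:

var canvasStream = canvas.captureStream(30);   // any other MediaStream would do
var combined = new MediaStream([
  canvasStream.getVideoTracks()[0],            // video track from the canvas
  stream_dest.stream.getAudioTracks()[0]       // audio track extracted from the <video>
]);
var recorder = new MediaRecorder(combined);    // e.g. record the combined stream
recorder.start();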

Concerning Web Audio nodes, what does .connect() do?

Trying to follow the example here, which is basically a c&p of this
Think I got most of the parts down, except all the node.connect()'s
From what I understand, this sequence of code is needed to provide the audio analyzer with an audio stream:
var source = audioCtx.createMediaStreamSource(stream);
source.connect(analyser);
analyser.connect(audioCtx.destination);
I can't seem to make sense of it as it looks rather ouroboros-y to me.
And unfortunately, I can't seem to find any documentation on .connect(), so I'm quite lost and would appreciate any clarification!
Oh, and I'm loading an .mp3 via pure JavaScript (new Audio('db.mp3').play();) and am trying to use that as the source without creating an <audio> element.
Can a mediaStream object be created from this to feed into .createMediaStreamSource(stream)?
connect simply defines where a node's output goes.
In this case, your source loads the stream into its buffer and writes it to the input of the next node, which is whatever you pass to connect. The same is then done for your analyser node.
Think of it as pipes.
Here is a sample code snippet that I wrote a few years back using the Web Audio API.
this.scriptProcessor = this.audioContext.createScriptProcessor(this.scriptProcessorBufferSize,
                                                               this.scriptProcessorInputChannels,
                                                               this.scriptProcessorOutputChannels);
this.scriptProcessor.connect(this.audioContext.destination);
this.scriptProcessor.onaudioprocess = updateMediaControl.bind(this);

// Set up the Gain Node with a default value of 1 (max volume).
this.gainNode = this.audioContext.createGain();
this.gainNode.connect(this.audioContext.destination);
this.gainNode.gain.value = 1;

sewi.AudioResourceViewer.prototype.playAudio = function() {
  if (this.audioBuffer) {
    this.source = this.audioContext.createBufferSource();
    this.source.buffer = this.audioBuffer;
    this.source.connect(this.gainNode);
    this.source.connect(this.scriptProcessor);
    this.beginTime = Date.now();
    this.source.start(0, this.offset);
    this.isPlaying = true;
    this.controls.update({playing: this.isPlaying});
    updateGraphPlaybackPosition.call(this, this.offset);
  }
};
So as you can see, my source is connected to a gainNode and to a scriptProcessor, both of which are connected to the destination. When the audio starts playing, the data is passed from source->gainNode->destination and source->scriptProcessor->destination, flowing through the "pipes" that connect them, which are defined by connect(). When the audio data passes through the gainNode, the volume can be adjusted by changing the amplitude of the audio wave. It is also passed through the script processor so that events can be attached and triggered while the audio is being processed.
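For context, here's a minimal sketch of what a handler like updateMediaControl might do; this is an illustration rather than the original implementation, and it reuses the beginTime, offset, isPlaying and controls members from the code above:

// The script processor is used purely as a periodic callback while audio is
// flowing; the audible signal travels through the gainNode path.
function updateMediaControl(event) {
  if (this.isPlaying) {
    var elapsed = this.offset + (Date.now() - this.beginTime) / 1000; // seconds played so far
    this.controls.update({ position: elapsed });                      // hypothetical UI update
  }
}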

Chrome analyse all audio from web page

Does anyone know how to create a MediaElementSource or any other object that can be used to send ALL sound data that is being played on a webpage through an Analyser from createAnalyser()? I want to be able to use the Analyser without knowing where exactly the sound is coming from.
EDIT: I have accomplished what I wanted but not by capturing all audio. The following block gets you an analyser on a Google Play Music player page (only tested from my library, not the store).
ctx = new (window.AudioContext || window.webkitAudioContext)();
source = ctx.createMediaElementSource($('audio')[0]);
analyser = ctx.createAnalyser();
The audio elements are not supposed to be playing at the same time, but if you still want to hook all of them up, here is a code sample. The for loop below runs once for every audio file you have: it creates an audio element with the appropriate source, creates a source node for that element (createMediaElementSource), and connects that source node to the analyser.
onload = function () { // this will be executed when the page is ready
  window.audioFiles = ['audio1.mp3', 'audio2.mp3',...]; // the array with all audio files
  window.AudioContext = window.AudioContext || window.webkitAudioContext;
  context = new AudioContext();
  analyser = context.createAnalyser();
  analyser.connect(context.destination);
  // now we take all the files and create a button for every file
  sources = []; // we create an array where we store all the created sources in
  for (var x in audioFiles) {
    var elem = document.createElement('audio'); // create an audio element
    elem.src = audioFiles[x];                    // append the specific source to it
    sources[x] = context.createMediaElementSource(elem); // create a media source for it
    sources[x].connect(analyser);                // connect that to the analyser
  }
}
