Does anyone know how to create a MediaElementSource or any other object that can be used to send ALL sound data that is being played on a webpage through an Analyser from createAnalyser()? I want to be able to use the Analyser without knowing where exactly the sound is coming from.
EDIT: I have accomplished what I wanted but not by capturing all audio. The following block gets you an analyser on a Google Play Music player page (only tested from my library, not the store).
ctx = new (window.AudioContext || window.webkitAudioContext)();
source = ctx.createMediaElementSource($('audio')[0]); // first <audio> element on the page
analyser = ctx.createAnalyser();
The audio elements are not supposed to be playing at the same time, but if you still want to hook up all of them, here is a code sample that does it. The loop below runs once for every audio file: it creates an audio element with the appropriate source, creates a source node for it with createMediaElementSource, and connects that source node to the analyser.
onload = function () { //this will be executed when the page is ready
    window.audioFiles = ['audio1.mp3', 'audio2.mp3',...]; //the array with all audio files
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    context = new AudioContext();
    analyser = context.createAnalyser();
    analyser.connect(context.destination);
    //now we take all the files and create an audio element and a source node for every file
    sources = []; //the array where we store all the created sources
    for (var x in audioFiles) {
        var elem = document.createElement('audio'); //create an audio element
        elem.src = audioFiles[x]; //assign the specific source to it
        sources[x] = context.createMediaElementSource(elem); //create a media element source for it
        sources[x].connect(analyser); //connect that to the analyser
    }
}
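Once the sources are connected, you can poll the analyser on every animation frame. A minimal sketch, assuming the analyser from the snippet above is in scope:
// Read the current spectrum from the analyser on every animation frame.
var data = new Uint8Array(analyser.frequencyBinCount);

function draw() {
    analyser.getByteFrequencyData(data); // fills the array with the current frequency data
    // ...render `data` however you like, e.g. draw bars on a canvas...
    requestAnimationFrame(draw);
}

requestAnimationFrame(draw);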
I'm stuck with a problem: whenever I pass the stream from createMediaStreamDestination to an audio element's srcObject, no audio is played. My implementation is based on the response posted here: Combine setSinkId with stereoPanner?
Initially, I have an audio element whose sound I isolate so that it only plays from the left speaker:
const audio = document.createElement('audio');
audio.src = audioUrl;
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
let destination = audioContext.destination;
panner.pan.value = -1;
source.connect(panner).connect(destination);
The above plays sound fine when I add audio.play(), but I want to be able to choose specifically which speakers the audio plays out of while keeping the panner changes. Since AudioContext doesn't offer any way to set the sinkId yet, I created a new audio element and a MediaStreamAudioDestinationNode, and passed its stream into the element's srcObject:
const audio = document.createElement('audio');
audio.src = audioUrl;
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
let destination = audioContext.createMediaStreamDestination();
panner.pan.value = -1;
source.connect(panner).connect(destination);
const outputAudio = new Audio();
outputAudio.srcObject = destination.stream;
outputAudio.setSinkId(audioSpeakerId);
outputAudio.play();
With the new code, however, the outputAudio doesn't play any sound at all when I start up my application. Is there anything wrong with my code that is causing the outputAudio element not to play? I'm fairly new to the Web Audio API, and I tried implementing the code from the mentioned Stack Overflow thread, but it doesn't seem to be working for me. Any help would be appreciated!
In the description of your first code block you mention that you additionally call audio.play() to start the audio. That's also necessary for the second code block to work: you need to start both audio elements.
Generally calling play() on an audio element and creating a new AudioContext should ideally happen in response to a user action to make sure the browser's autoplay policy doesn't block the audio.
If all goes well the state of your AudioContext should be "running".
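A minimal sketch of what that could look like, assuming audioUrl and audioSpeakerId are already defined as in the question; the #start button is illustrative, and any user gesture would do:
// Hypothetical button that kicks everything off inside a user gesture.
document.querySelector('#start').addEventListener('click', async () => {
    const audioContext = new AudioContext(); // created in response to the gesture

    const audio = document.createElement('audio');
    audio.src = audioUrl;

    const source = audioContext.createMediaElementSource(audio);
    const panner = audioContext.createStereoPanner();
    const destination = audioContext.createMediaStreamDestination();
    panner.pan.value = -1;
    source.connect(panner).connect(destination);

    const outputAudio = new Audio();
    outputAudio.srcObject = destination.stream;
    await outputAudio.setSinkId(audioSpeakerId);

    audio.play();        // start the source element
    outputAudio.play();  // start the element that actually reaches the speakers

    console.log(audioContext.state); // should log "running"
});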
I am trying to create a visualizer for a music player, using the native audio API. Everything is working well, except when I attach an analyzer, the music stops playing.
See it here, just upload an audiofile to start.
https://codepen.io/jane-fox/pen/RgjgJN
audioSource = audioCtx.createMediaElementSource(audio);
audioSource.connect(analyser);
Comment out these lines to see that music plays fine until the analyzer is connected.
How can I stop the analyzer / visual effects from disrupting the music?
I've made an analyser not too long ago:
https://codepen.io/Cooorsin/pen/zKPbEm
and
http://simple-music-player.corsins.space/
If you want the entire code of the second link I can put it on GitHub for you.
I've used the following code to initialize the audio. The key point is that createMediaElementSource reroutes the element's output into the audio graph, so the source node has to be connected back to audioContext.destination as well as to the analyser, otherwise the music goes silent:
function initAudio(src) {
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    audioContext = new AudioContext();
    analyser = audioContext.createAnalyser();
    //analyser.smoothingTimeConstant = 1;
    analyser.fftSize = barAmount;
    audio = new Audio();
    audio.src = src;
    audio.addEventListener('canplay', function () {
        sourceNode = audioContext.createMediaElementSource(audio);
        sourceNode.connect(analyser);
        sourceNode.connect(audioContext.destination);
        audio.play();
    });
}
In JavaScript, how can I connect an audio context to a video element that fetches its data from a blob (the video uses the MediaStream capabilities)? No matter what I do, the audio context returns an empty buffer. Is there any way to connect the two?
Probably, createMediaElementSource is not the right kind of processing node for this use-case.
Rather, you're better off using a createMediaStreamSource node from the Web Audio API if you are trying to handle a live audio stream rather than a fixed media source.
The createMediaStreamSource() method of the AudioContext Interface is used to create a new MediaStreamAudioSourceNode object, given a media stream (say, from a navigator.getUserMedia instance), the audio from which can then be played and manipulated.
The link has a more detailed example. However, the main difference is that a MediaStreamAudioSourceNode can only be created from a MediaStream that you get from a media server or locally (through getUserMedia). In my experience, I couldn't find any way to do it using only the blob URL from the <video> tag.
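As a rough sketch of that approach, assuming you have a live MediaStream (here from getUserMedia, though it could also come from WebRTC or, in some browsers, from videoElement.captureStream()):
// Sketch: analyse audio from a live MediaStream instead of a media element.
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var streamSource = audioContext.createMediaStreamSource(stream);
    streamSource.connect(analyser);
    // Note: a MediaStreamAudioSourceNode is not routed to the speakers unless
    // you also connect it (or the analyser) to audioContext.destination.
});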
While this is an old question, I've searched for something similar and found a solution I want to share.
To connect the Blob, you may use a new Response instance. Here is an example for creating a waveform visualizer (since it uses await, it needs to run inside an async function or a JavaScript module).
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();
var dataArray = new Uint8Array(analyser.frequencyBinCount);
var arrayBuffer = await new Response(yourBlob).arrayBuffer();
var audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
var source = audioContext.createBufferSource();
source.buffer = audioBuffer;
source.connect(analyser);
source.start(0);
Note: yourBlob needs to be a Blob instance.
You may find this fiddle useful: it records video and audio for 5 seconds, turns the recording into a Blob, and then plays it back with an audio waveform visualization.
I'm playing a bit with the Web Audio API and there is some behaviour I can't understand.
var audio = document.querySelector('audio');
var context = new AudioContext();
var source = context.createMediaElementSource(audio);
var analyser = context.createAnalyser();
source.connect(analyser);
source.connect(context.destination);
setInterval(function () {
    var freqDomain = new Float32Array(analyser.frequencyBinCount);
    analyser.getFloatFrequencyData(freqDomain);
    console.log(freqDomain);
}, 1000);
When I pause the audio element, the console keeps showing me data from the analyser (and the data is changing). Why does it keep sending data when the sound is paused?
I think this is probably because of the smoothingTimeConstant of your AnalyserNode, which defaults to 0.8.
My guess is that because of this averaging over time, when you pause the <audio> element, the values will gradually decay toward -Infinity.
Anyway, that's just a guess, but I'd say I'm about 95% sure. You could verify it pretty easily by setting analyser.smoothingTimeConstant = 0 and seeing if the behavior persists.
Oh, and here's a link to the relevant portion of the spec.: https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#dfn-smoothingTimeConstant
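A quick way to test that guess, reusing the audio element and analyser from the question's snippet:
// With smoothing disabled, the analyser reports the instantaneous spectrum,
// so once playback is paused the values should stop changing (and drop toward -Infinity).
analyser.smoothingTimeConstant = 0;

setInterval(function () {
    var freqDomain = new Float32Array(analyser.frequencyBinCount);
    analyser.getFloatFrequencyData(freqDomain);
    console.log(audio.paused, freqDomain[0]);
}, 1000);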
I'm creating an audio visualizer with WebGL and have been integrating SoundCloud tracks into it. I now want to be able to switch tracks, but I can either get my visualizer to work and the audio to break, or I can get the audio to work and the visualizer to break.
The two ways that I've been able to make it work are
Audio working
delete audio element
append new audio element to body
trigger play
Visualizer working
stop audio
change source
trigger play
When I have the visualizer working, the audio is totally messed up. The buffers just sound wrong, and the audio has artifacts in it (noise, beeps and bloops).
When I have the audio working, when I call analyser.getByteFrequencyData, I get an array of 0's. I presume this is because the analyser is not hooked up correctly.
The code for the audio working looks like
$('#music').trigger("pause");
currentTrackNum = currentTrackNum + 1;
var tracks = $("#tracks").data("tracks")
var currentTrack = tracks[parseInt(currentTrackNum)%tracks.length];
// Begin audio switching
analyser.disconnect();
$('#music').remove();
$('body').append('<audio id="music" preload="auto" src="'+ currentTrack["download"].toString() + '?client_id=4c6187aeda01c8ad86e556555621074f"></audio>');
startWebAudio();
(I don't think I need the pause call. Do I?)
When I want the visualizer to work, I use this code:
currentTrackNum = currentTrackNum + 1;
var tracks = $("#tracks").data("tracks")
var currentTrack = tracks[parseInt(currentTrackNum)%tracks.length];
// Begin audio switching
$("#music").attr("src", currentTrack["download"].toString() + "?client_id=4c6187aeda01c8ad86e556555621074f");
$("#songTitle").text(currentTrack["title"]);
$('#music').trigger("play");
The startWebAudio function looks like this.
function startWebAudio() {
    // Get our <audio> element
    var audio = document.getElementById('music');
    // Create a new audio context (that allows us to do all the Web Audio stuff)
    var audioContext = new webkitAudioContext();
    // Create a new analyser
    analyser = audioContext.createAnalyser();
    // Create a new audio source from the <audio> element
    var source = audioContext.createMediaElementSource(audio);
    // Connect up the output from the audio source to the input of the analyser
    source.connect(analyser);
    // Connect up the audio output of the analyser to the audioContext destination, i.e. the speakers.
    // (The analyser takes the output of the <audio> element and swallows it. If we want to hear the
    // sound of the <audio> element then we need to re-route the analyser's output to the speakers.)
    analyser.connect(audioContext.destination);
    // Get the <audio> element started
    audio.play();
    var freqByteData = new Uint8Array(analyser.frequencyBinCount);
}
My suspicion is that the analyser isn't hooked up correctly, but I can't figure out what to look at to debug it. I have looked at the freqByteData output, and it seems to indicate that something isn't hooked up right. The analyser variable is global. If you would like more reference to the code, here's where it is on GitHub
You can only create a single AudioContext per window. You should also be disconnecting the MediaElementSource when you're finished using it.
Here's an example that I used to answer a similar question: http://jsbin.com/acolet/1/
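A rough sketch of how that could look for the track-switching case, assuming the context, analyser, and MediaElementSource are created once and only the element's src changes (the element id and function name come from the question's code and are otherwise illustrative):
// Create the audio graph exactly once, then reuse it for every track.
var audio = document.getElementById('music');
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();
var source = audioContext.createMediaElementSource(audio);

source.connect(analyser);
analyser.connect(audioContext.destination);

function switchTrack(url) {
    audio.pause();
    audio.src = url;   // only the element's source changes
    audio.play();      // the existing graph keeps analysing the new track
}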