Volume changing when playing overlapping wav files with Web Audio API - javascript

In Google Chrome:
One .wav file is played, looping. Another .wav file is played from time to time as a sound effect.
When the sound effect plays, the volume of the looping sound automatically decreases. The volume gradually increases again over about 15 seconds.
(I guess it's automatic ducking: http://en.wikipedia.org/wiki/Ducking )
I don't want the volume of the loop to decrease when the sound effect plays. How can I prevent this behaviour?
Example: http://www.matthewgatland.com/games/takedown/play/web/audiofail.html
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var play = function (buffer, loop) {
  var source = context.createBufferSource();
  source.buffer = buffer;
  if (loop) source.loop = true;
  source.connect(context.destination);
  source.start(0);
};
var load = function (url, callback) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = function () {
    context.decodeAudioData(request.response, function (buffer) {
      callback(buffer);
    }, null);
  };
  request.send();
};
var musicSound;
var thudSound;
load("res/snd/music0.wav", function (buffer) {
musicSound = buffer;
});
load("res/snd/thud0.wav", function (buffer) {
thudSound = buffer;
});
Once the sounds have loaded, call:
play(musicSound, true); //start the music looping
//each time you call this, the music becomes quiet for a few seconds
play(thudSound, false);

You might have to do some sound design before you put this on your website. Whatever editor you use, consider editing the sounds so that the effect's overall level is closer to the level of the looping sound. That way there won't be as dramatic a difference in levels triggering the automatic gain reduction. The combination of both sounds is too loud, so the louder of the two brings down the level of the softer one. If you bring them closer together in level, the overall difference shouldn't be as drastic when or if the gain reduction kicks in.
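If editing the files isn't convenient, you may also be able to do the level-matching in code: give each source its own GainNode and play the effect quieter, so the summed output never gets loud enough to trigger the reduction. Here is a minimal sketch reusing the play function above; the 0.4 gain value is an assumption to tune by ear:

var play = function (buffer, loop, gainValue) {
  var source = context.createBufferSource();
  source.buffer = buffer;
  if (loop) source.loop = true;
  // Route each source through its own gain node instead of
  // connecting straight to the destination.
  var gain = context.createGain();
  gain.gain.value = gainValue;
  source.connect(gain);
  gain.connect(context.destination);
  source.start(0);
};

play(musicSound, true, 1.0); // music at full level
play(thudSound, false, 0.4); // effect quieter, keeping the summed level down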

Related

HTML Video image lags behind audio (chrome mobile)

I'm playing a video that weighs 31.6 MB on a website. The video plays fine on the latest versions of Chrome on desktop, but on mobile the frames seem to lag behind the audio, even after looping the video a couple of times, by which point it should have been loaded completely.
My best guess is that it's because of the size of the video. Having played a much smaller file (3 MB) and checked that the image was in sync with the audio kind of confirms it. But the fact that I preload the video, and that the problem persists even after it has played through more than once (after the first play-through there shouldn't be anything left to load), makes me believe it's something else.
I'm leaving the code that I use to preload the file here in case it's needed.
HTML
<video id="videoId" src="public/videos/0620_ShakeHandam_VP9.webm" playsinline muted loop="true"></video>
JS
document.addEventListener("DOMContentLoaded", function (event) {
  const video = document.getElementById("videoId");
  const url = video.dataset.src;
  const xhr = new XMLHttpRequest();
  const loadingStartedDate = Date.now();
  document.body.classList.add("loading");
  xhr.open("GET", url, true);
  xhr.responseType = "arraybuffer";
  xhr.onload = function (oEvent) {
    // The file above is WebM, so use its MIME type here.
    const blob = new Blob([oEvent.target.response], { type: "video/webm" });
    const loadingTime = Date.now() - loadingStartedDate;
    video.src = URL.createObjectURL(blob);
    alert(`VIDEO LOADED: ${loadingTime}ms`);
    console.log(`Video loaded after ${loadingTime}ms`);
    document.body.classList.remove("loading");
    document.body.classList.add("loaded");
    // video.play() if you want it to play on load
  };
  xhr.onprogress = function (oEvent) {
    if (oEvent.lengthComputable) {
      const percentComplete = oEvent.loaded / oEvent.total * 100;
      console.log("PROGRESS", percentComplete);
      document.getElementById("load-percentage").textContent = `${percentComplete.toFixed(2)}%`;
      // do something with this
    }
  };
  xhr.send();
});
EDIT #1
The video in question has a transparent background. After further testing, I believe that this may be the cause of the problem. It doesn't seem to happen with videos that have no transparent background (mp4 or webm).

Javascript Web Audio API - AudioContext to base64

In the project I'm working on, we have about 30 audio tracks to which we apply filters and then play back. Originally this was done server-side, which returned a base64 string for each track that I then loaded with new Audio().
This worked well if you had fast internet speeds, but on slow speeds it could take up to an hour for the tracks to be returned from the server, so now we're applying the filters client-side.
Applying the filters is no problem, but I'm trying not to rewrite my entire playback algorithm (it's much more involved than just pause, play, stop) and am wondering if I can encode an AudioContext to base64.
I've tried creating a new Audio and passing the AudioContext, creating a new Audio and passing the AudioBuffer, and something based on this example. But none of it works, and I can't find any examples of what I'm trying to do on the internet.
If someone could take a look at my code and help me out, I'd greatly appreciate it. Thanks in advance!
var audioCtx = new AudioContext();
var source = audioCtx.createBufferSource();
var request = new XMLHttpRequest();
request.open("GET", "/path/to/audio", true);
request.responseType = "arraybuffer";
request.onload = function () {
  audioCtx.decodeAudioData(request.response, function (buffer) {
    source.buffer = buffer;
    // Apply filters to the audio
    // Here I would like to convert the audio to Base64
    callback(source);
  }, function (error) {
    console.error("decodeAudioData error", error);
  });
};
request.send();
It's a bit hard to know exactly what you want from the snippet you give, but based on it, you might be able to use an OfflineAudioContext if you know how long your audio files are. The offline context will return an AudioBuffer, which you can then use to get a base64-encoded audio result.
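For what it's worth, here is a minimal sketch of that suggestion. It renders the decoded buffer through an OfflineAudioContext, encodes the result as 16-bit PCM WAV with a hand-rolled encoder (the encoder and the renderToBase64 name are my own assumptions, not Web Audio API calls), and reads it back as a base64 data URL that new Audio() can load:

function renderToBase64(buffer, callback) {
  // Offline context with the same shape as the decoded buffer.
  var offlineCtx = new OfflineAudioContext(
    buffer.numberOfChannels, buffer.length, buffer.sampleRate);
  var src = offlineCtx.createBufferSource();
  src.buffer = buffer;
  // ...insert your filter nodes between src and the destination here...
  src.connect(offlineCtx.destination);
  src.start(0);
  // Promise-based startRendering; older WebKit uses the oncomplete event.
  offlineCtx.startRendering().then(function (rendered) {
    var blob = new Blob([audioBufferToWav(rendered)], { type: "audio/wav" });
    var reader = new FileReader();
    reader.onload = function () { callback(reader.result); }; // "data:audio/wav;base64,..."
    reader.readAsDataURL(blob);
  });
}

// Hand-rolled 16-bit PCM WAV encoder.
function audioBufferToWav(buf) {
  var numCh = buf.numberOfChannels, len = buf.length, rate = buf.sampleRate;
  var blockAlign = numCh * 2;
  var ab = new ArrayBuffer(44 + len * blockAlign);
  var view = new DataView(ab);
  var writeStr = function (off, s) {
    for (var i = 0; i < s.length; i++) view.setUint8(off + i, s.charCodeAt(i));
  };
  writeStr(0, "RIFF"); view.setUint32(4, 36 + len * blockAlign, true); writeStr(8, "WAVE");
  writeStr(12, "fmt "); view.setUint32(16, 16, true); view.setUint16(20, 1, true);
  view.setUint16(22, numCh, true); view.setUint32(24, rate, true);
  view.setUint32(28, rate * blockAlign, true); view.setUint16(32, blockAlign, true);
  view.setUint16(34, 16, true); writeStr(36, "data"); view.setUint32(40, len * blockAlign, true);
  for (var i = 0, off = 44; i < len; i++) {
    for (var ch = 0; ch < numCh; ch++, off += 2) {
      var s = Math.max(-1, Math.min(1, buf.getChannelData(ch)[i]));
      view.setInt16(off, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
    }
  }
  return ab;
}

The resulting data URL can then be handed to new Audio(), which should let you keep your existing playback algorithm unchanged.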

Can I record the output of an <audio> without use of the microphone?

I have an <audio> element and I'm changing the speed, start/end bounds, and pitch. I want to see if it's possible to record the audio I hear in the browser. However I don't want to just record with the microphone because of the lower quality.
I could do the same effects server-side but I'd rather not since I'd be basically duplicating the same functionality with two different technologies.
In response to a flag vote since it's "unclear what I'm asking", I'll rephrase.
I have an <audio> element playing on the page. I have some javascript manipulating the play-rate, volume, etc. I then want my browser to record the audio as I hear it. This is not the microphone. I want to create a new audio file that is as close as possible to the one playing. If it's at 75%, then the new file will be at 75% volume.
In supporting browsers, you could use the MediaElement.captureStream() method along with the MediaRecorder API.
But note that these technologies are still in active development and that current implementations are still full of bugs.
E.g., in your case, current stable Firefox will stop rendering the original media's audio if you change its volume while recording... I didn't have time to search for a bug report on it, but this is just one of the many bugs you'll find.
// here we will save all the chunks of our recording
const chunks = [];
// wait until the original media is ready
audio.oncanplay = function () {
  audio.volume = 0.5; // just for your example
  // FF still prefixes this unstable method
  var stream = audio.captureStream ? audio.captureStream() : audio.mozCaptureStream();
  // create a MediaRecorder from our stream
  var rec = new MediaRecorder(stream);
  // every time we've got a bit of data, store it
  rec.ondataavailable = e => chunks.push(e.data);
  // once everything is done
  rec.onstop = e => {
    audio.pause();
    // concatenate our chunks into one file
    let final = new Blob(chunks);
    let a = new Audio(URL.createObjectURL(final));
    a.controls = true;
    document.body.append(a);
  };
  rec.start();
  // record for 6 seconds
  setTimeout(() => rec.stop(), 6000);
  // for the demo, change volume at half-time
  setTimeout(() => audio.volume = 1, 3000);
};
// FF will "taint" the stream, even if the media is served with correct CORS...
fetch("https://dl.dropboxusercontent.com/s/8c9m92u1euqnkaz/GershwinWhiteman-RhapsodyInBluePart1.mp3").then(resp => resp.blob()).then(b => audio.src = URL.createObjectURL(b));
<audio id="audio" autoplay controls></audio>
For older browsers, you could use the Web Audio API's createMediaElementSource method to pass your audio element's media through the API.
From there, you'd be able to extract raw PCM data to ArrayBuffers and save it.
In the following demo, I'll use the recorder.js library, which greatly helps with the extraction and save-to-wav process.
audio.oncanplay = function () {
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaElementSource(audio);
  var gainNode = audioCtx.createGain();
  gainNode.gain.value = 0.5;
  source.connect(gainNode);
  gainNode.connect(audioCtx.destination);
  var rec = new Recorder(gainNode);
  rec.record();
  setTimeout(function () {
    gainNode.gain.value = 1;
  }, 3000);
  setTimeout(function () {
    rec.stop();
    audio.pause();
    rec.exportWAV(function (blob) {
      var a = new Audio(URL.createObjectURL(blob));
      a.controls = true;
      document.body.appendChild(a);
    });
  }, 6000);
};
<script src="https://rawgit.com/mattdiamond/Recorderjs/master/dist/recorder.js"></script>
<audio id="audio" crossOrigin="anonymous" controls src="https://dl.dropboxusercontent.com/s/8c9m92u1euqnkaz/GershwinWhiteman-RhapsodyInBluePart1.mp3" autoplay></audio>
As Kaiido mentions in his answer, captureStream() is one way of doing it. However, it is not yet fully supported in Chrome and Firefox. MediaRecorder also does not allow for track-set changes during a recording, and a MediaStream coming from captureStream() might have those (depending on the application), thus ending the recording prematurely.
If you need a supported way of recording only audio from a media element, you can use a MediaElementAudioSourceNode, pipe that to a MediaStreamAudioDestinationNode, and pipe the stream attribute of that to MediaRecorder.
Here's an example you can use on a page with an existing audio element:
const a = document.getElementsByTagName("audio")[0];
const ac = new AudioContext();
const source = ac.createMediaElementSource(a);
// The media element source stops audio playout of the audio element.
// Hook it up to the speakers again.
source.connect(ac.destination);
// Hook up the audio element to a MediaStream.
const dest = ac.createMediaStreamDestination();
source.connect(dest);
// Record 10s of audio with MediaRecorder.
const recorder = new MediaRecorder(dest.stream);
recorder.start();
recorder.ondataavailable = ev => {
  console.info("Finished recording. Got blob:", ev.data);
  a.src = URL.createObjectURL(ev.data);
  a.play();
};
setTimeout(() => recorder.stop(), 10 * 1000);
Note that neither approach works with cross-origin audio sources without a proper CORS setup, as both WebAudio and recordings would give the application the possibility to inspect audio data.
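For reference, here is a minimal sketch of what a "proper CORS setup" looks like here, with made-up example origins: the media element opts in on the client, and the media server must answer with a matching header. Without both halves, the stream is tainted and recording fails.

<!-- Client (e.g. https://app.example.com): opt in to CORS for the media -->
<audio id="audio" crossorigin="anonymous" controls
  src="https://media.example.com/track.mp3"></audio>

Server response header on media.example.com (for example):
Access-Control-Allow-Origin: https://app.example.com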

Decode Html5 Audio fast without using "createMediaElementSource"

I am using the Web Audio API's createMediaElementSource, which works fine in Firefox (Gecko) and Chrome (Blink) but not Safari (WebKit). This is a big problem for me, since I prefer getting the audio from my HTML5 audio players rather than using XMLHttpRequests, the latter being too slow.
My first attempt was to get the source as a string from the audio tag and serve it as a URL in an XMLHttpRequest. As expected it works, but the decoding is very slow and I can't pause the audio with stop(), as resuming triggers another full decode of the entire file before anything can be heard.
A Stack Overflow user named Kevin Ennis gave me an important piece of advice, which is a really great idea:
You could break the audio up into a number of smaller files. Like,
maybe break it up into 4 separate 1MB audio files and load them in
order. Then you can start playback after the first one loads, and
while that's playing, you load the other ones.
My question is, how do I do this technically? I am not aware of any function that signals that an audio file has finished loading and decoding.
I imagine it would look something like this:
var source = document.getElementsByTagName("audio")[0].src;
var fileExt = source.indexOf('.');
var currentFile = 1;
if (decodeCurrentData == complete) {
  currentFile += 1;
  source = source.slice(0, fileExt) + "_part" + currentFile.toString() + ".mp3";
  loadAudioFile();
}
var loadAudioFile = function () {
  var request = new XMLHttpRequest();
  request.open("GET", source, true);
  request.responseType = "arraybuffer";
  request.onload = function () {
    context.decodeAudioData(request.response, function (buffer) {
      convolver.buffer = buffer;
    });
  };
  request.send();
};
loadAudioFile();
Will my idea work or would it utterly fail? What would you suggest I do about the long decoding time?
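The idea can work, for what it's worth: the "finished" signal you're looking for is effectively decodeAudioData's success callback, so when it fires for one part you can schedule that part and request the next. A minimal sketch under stated assumptions (the files are pre-split as track_part1.mp3 ... track_part4.mp3, and partCount is known in advance):

var context = new AudioContext();
var partCount = 4;     // assumed number of pre-split files
var nextStartTime = 0; // where the next chunk should begin playing

var loadPart = function (n) {
  if (n > partCount) return;
  var request = new XMLHttpRequest();
  request.open("GET", "track_part" + n + ".mp3", true);
  request.responseType = "arraybuffer";
  request.onload = function () {
    context.decodeAudioData(request.response, function (buffer) {
      var source = context.createBufferSource();
      source.buffer = buffer;
      source.connect(context.destination);
      // Schedule this chunk to start exactly where the previous one ends.
      if (nextStartTime < context.currentTime) nextStartTime = context.currentTime;
      source.start(nextStartTime);
      nextStartTime += buffer.duration;
      loadPart(n + 1); // fetch the next part while this one plays
    });
  };
  request.send();
};

loadPart(1);

One caveat: MP3 encoders add padding at file boundaries, so independently encoded parts can produce small gaps or clicks at the seams; losslessly split files or WAV parts avoid this. As for the long decoding time itself, decoding smaller files in sequence like this is the usual mitigation, since decodeAudioData has no progressive mode.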

Web Audio API delay in playback when using socket.io to receive ArrayBuffer from server

Let me start with the fact that I am fairly new to the web development game and have hardly used socket.io for more than a week or two. I attempted to play audio from an ArrayBuffer, received over socket.io and corresponding to an MP3 file transfer, using the Web Audio API. The ArrayBuffer gets successfully decoded by Web Audio, but the only issue I am having is that it takes about 10 seconds after receiving the initial chunk of the ArrayBuffer to start playing the song.
My understanding is that it waits for the entire file to be streamed and then starts playback? Is there a better way to start playing the track as soon as the first set of chunks arrives?
This is how I am using it currently:
socket.on('browser:buffer', function (data) {
  console.log(data.buffer);
  source = audioContext.createBufferSource();
  audioContext.decodeAudioData(data.buffer, function (decodedData) {
    source.buffer = decodedData;
    source.loop = true;
    source.connect(audioContext.destination);
    source.start(0);
  }, function (error) {
    console.error("decodeAudioData error", error);
  });
});
Yes, you can use the <audio> element and createMediaElementSource(audio). This has the benefit of handling the downloading for you :)
var audio = new Audio("shoot.mp3");
var context = new AudioContext();
var source = context.createMediaElementSource(audio);
audio.loop = true;
source.connect(context.destination);
audio.play();
https://jsfiddle.net/u8j4h4L4/1/
The other option is the simplest, if you don't actually need Web Audio:
var sound = new Audio("myaudio.mp3");
sound.play();
