I have an HTML5 video element I'm trying to increase the volume of.
I'm using the code I found in this answer.
However, there is no sound coming out of the speakers. If I disable it, the sound is fine.
videoEl.muted = true // tried with this disabled or enabled
if (!window.audio)
  window.audio = amplify(vol)
else
  window.audio.amplify(vol)
...
export function amplify(multiplier) {
  const media = document.getElementById('videoEl')
  // @ts-ignore
  var context = new (window.AudioContext || window.webkitAudioContext)(),
    result = {
      context: context,
      source: context.createMediaElementSource(media),
      gain: context.createGain(),
      media,
      amplify: function(multiplier) {
        result.gain.gain.value = multiplier;
      },
      getAmpLevel: function() {
        return result.gain.gain.value;
      }
    };
  result.source.connect(result.gain)
  result.gain.connect(context.destination)
  result.amplify(multiplier)
  return result;
}
That value is set to 3 for testing.
Any idea why I'm getting no sound?
I also have Howler running for other audio files; could it be blocking the Web Audio API?
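For comparison, here is a minimal gain-boost graph as a hedged sketch (the 'videoEl' id and the 3x test value are taken from the code above). Two things worth ruling out: the element's muted flag, since in Chrome muting the element also silences the MediaElementSource output, and a suspended AudioContext, which outputs nothing until resumed:
const video = document.getElementById('videoEl');
video.muted = false; // a muted element also mutes its source node

const ctx = new (window.AudioContext || window.webkitAudioContext)();
const source = ctx.createMediaElementSource(video);
const gain = ctx.createGain();

gain.gain.value = 3; // the 3x test value from above
source.connect(gain);
gain.connect(ctx.destination);

// Contexts created outside a user gesture often start suspended:
if (ctx.state === 'suspended') ctx.resume();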
I am trying to write a small library for convenient audio manipulation. I know about the autoplay policy for media elements, and I play audio after a user interaction:
const contextClass = window.AudioContext || window.webkitAudioContext;
const context = this.audioContext = new contextClass();

if (context.state === 'suspended') {
  const clickCb = () => {
    this.playSoundsAfterInteraction();
    window.removeEventListener('touchend', clickCb);
    this.usingAudios.forEach((audio) => {
      if (audio.playAfterInteraction) {
        const promise = audio.play();
        if (promise !== undefined) {
          promise.then(_ => {
          }).catch(error => {
            // If playing isn't allowed
            console.log(error);
          });
        }
      }
    });
  };
  window.addEventListener('touchend', clickCb);
}
On Android Chrome and on desktop browsers everything is OK, but on mobile Safari I get this error from the promise:
the request is not allowed by the user agent or the platform in the current context
I have tried creating the audio elements after an interaction and changing their "src" property. In every case, I get the same error.
I just create the audio in JS:
const audio = new Audio(base64);
add it to the array, and try to play it. But nothing...
I also tried creating and playing a few seconds after the interaction - nothing.
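For what it's worth, iOS Safari only allows playback that starts synchronously inside the gesture handler itself; deferring play() behind a promise chain or a timer loses the user activation. A minimal unlock sketch, assuming the context and usingAudios array from the code above:
const unlock = () => {
  context.resume(); // resume the suspended AudioContext while still in the gesture
  this.usingAudios.forEach((audio) => {
    // "Prime" each element synchronously: play, then pause right away.
    audio.play()
      .then(() => audio.pause())
      .catch((error) => console.log(error));
  });
  window.removeEventListener('touchend', unlock);
};
window.addEventListener('touchend', unlock);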
I have a multi-peer WebRTC stream using simple-peer, and I'm playing the received stream like this:
peer.on("stream", data => {
  let audio = document.createElement("audio") as HTMLAudioElement;
  audio.src = URL.createObjectURL(data);
  audio.play();
});
This works fine on desktop, but for Chrome on Android there is no sound:
Unhandled Promise rejection: play() can only be initiated by a user gesture.
I couldn't find any documentation on how to correctly play the received stream. Do I really have to show a button when the stream is ready?
I have also tried to work around this issue by playing the stream from getUserMedia, but that only worked as long as I didn't set audioTag.muted = true, which is no solution either, because leaving it unmuted creates a feedback loop.
let audioTag = document.createElement("audio") as HTMLAudioElement;
audioTag.autoplay = true;
navigator.getUserMedia({video: false, audio: true}, async stream => {
  audioTag.src = window.URL.createObjectURL(stream);
  audioTag.muted = true;
  // ...
}, error => console.log(error));
Sites like http://talky.io seem to have found a way around this problem though, so what do I have to do?
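One pattern that generally satisfies the gesture requirement is to create and "unlock" the audio element during an earlier user gesture (for example, the click that starts the call), then reuse it when the remote stream arrives. Note also that current browsers expect srcObject for a MediaStream; createObjectURL(stream) has been removed. A sketch, where startCallButton is an assumed UI element:
const audio = document.createElement("audio") as HTMLAudioElement;

startCallButton.addEventListener("click", () => {
  // Playing inside the gesture (even with no source yet) marks the
  // element as user-activated in most mobile browsers.
  audio.play().catch(() => { /* no source yet, that's fine */ });
});

peer.on("stream", stream => {
  audio.srcObject = stream; // modern replacement for createObjectURL
  audio.play().catch(err => console.log(err));
});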
Check out: https://www.chromium.org/audio-video/autoplay
var promise = document.querySelector('video').play();

if (promise !== undefined) {
  promise.then(_ => {
    // Autoplay started!
  }).catch(error => {
    // Autoplay was prevented.
    // Show a "Play" button so that user can start playback.
  });
}
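The same Chromium policy always allows muted autoplay, so a common fallback in the catch branch is to retry muted and surface an unmute control. A sketch (showUnmuteButton is a hypothetical UI hook):
var video = document.querySelector('video');
var promise = video.play();

if (promise !== undefined) {
  promise.catch(error => {
    // Autoplay with sound was blocked; muted autoplay is still allowed.
    video.muted = true;
    video.play();
    showUnmuteButton(); // hypothetical: let the user opt into sound
  });
}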
I'm trying to create an audio stream in the browser and send it to the server.
Here is the code:
let recording = false;
let localStream = null;
const session = {
  audio: true,
  video: false
};

function start () {
  recording = true;
  navigator.webkitGetUserMedia(session, initializeRecorder, onError);
}

function stop () {
  recording = false;
  localStream.getAudioTracks()[0].stop();
}

function initializeRecorder (stream) {
  localStream = stream;
  const audioContext = window.AudioContext;
  const context = new audioContext();
  const audioInput = context.createMediaStreamSource(localStream);
  const bufferSize = 2048;
  // create a javascript node
  const recorder = context.createScriptProcessor(bufferSize, 1, 1);
  // specify the processing function
  recorder.onaudioprocess = recorderProcess;
  // connect stream to our recorder
  audioInput.connect(recorder);
  // connect our recorder to the previous destination
  recorder.connect(context.destination);
}

function onError (e) {
  console.log('error:', e);
}

function recorderProcess (e) {
  if (!recording) return;
  const left = e.inputBuffer.getChannelData(0);
  // send left to server here (socket.io can do the job). We don't need stereo.
}
When the start function fires, samples can be captured in recorderProcess.
When the stop function fires, the mic icon in the browser disappears, but...
unless I put if (!recording) return at the beginning of recorderProcess, it still processes samples.
Unfortunately that's not a solution at all - samples are still being received by recorderProcess, and if I fire the start function once more, it will get all the samples from the previous stream as well as from the new one.
My question is:
How can I stop/start recording without such issue?
or, if that's not the best solution:
How can I totally remove the stream in the stop function, so I can safely initialize it again at any time?
recorder.disconnect() should help.
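Spelling that out: if audioInput and recorder are kept in an outer scope (in the original they are local to initializeRecorder), stop() can tear the whole graph down and start() can rebuild it. A sketch:
function stop () {
  recording = false;
  audioInput.disconnect();                // detach the mic from the recorder
  recorder.disconnect();                  // detach the recorder from the destination
  recorder.onaudioprocess = null;         // drop the processing callback
  localStream.getAudioTracks()[0].stop(); // release the microphone
}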
You might want to consider the new MediaRecorder functionality in Chrome Canary shown at https://webrtc.github.io/samples/src/content/getusermedia/record/ (currently video-only I think) instead of the WebAudio API.
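MediaRecorder has since shipped with audio support in all major browsers; a minimal sketch of recording audio with it:
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
  const recorder = new MediaRecorder(stream);
  const chunks = [];

  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: recorder.mimeType });
    // send the blob to the server, e.g. via fetch or socket.io
  };

  recorder.start();
  // later: recorder.stop(); stream.getTracks().forEach(t => t.stop());
});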
I often read that it's not possible to pause/resume audio files with the Web Audio API.
But then I saw an example where they actually made it possible to pause and resume, and I tried to figure out how they did it. I thought maybe source.looping = false was the key, but it wasn't.
For now, my audio always replays from the start.
This is my current code:
var context = new (window.AudioContext || window.webkitAudioContext)();

function AudioPlayer() {
  this.source = context.createBufferSource();
  this.analyser = context.createAnalyser();
  this.stopped = true;
}

AudioPlayer.prototype.setBuffer = function(buffer) {
  this.source.buffer = buffer;
  this.source.looping = false;
};

AudioPlayer.prototype.play = function() {
  this.source.connect(this.analyser);
  this.analyser.connect(context.destination);
  this.source.noteOn(0);
  this.stopped = false;
};

AudioPlayer.prototype.stop = function() {
  this.analyser.disconnect();
  this.source.disconnect();
  this.stopped = true;
};
Does anybody know what to do to get it to work?
Oskar's answer and ayke's comment are very helpful, but I was missing a code example. So I wrote one: http://jsfiddle.net/v3syS/2/ I hope it helps.
var url = 'http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3';

var ctx = new webkitAudioContext();
var buffer;
var sourceNode;

var startedAt;
var pausedAt;
var paused;

function load(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    ctx.decodeAudioData(request.response, onBufferLoad, onBufferError);
  };
  request.send();
};

function play() {
  sourceNode = ctx.createBufferSource();
  sourceNode.connect(ctx.destination);
  sourceNode.buffer = buffer;
  paused = false;

  if (pausedAt) {
    startedAt = Date.now() - pausedAt;
    sourceNode.start(0, pausedAt / 1000);
  }
  else {
    startedAt = Date.now();
    sourceNode.start(0);
  }
};

function stop() {
  sourceNode.stop(0);
  pausedAt = Date.now() - startedAt;
  paused = true;
};

function onBufferLoad(b) {
  buffer = b;
  play();
};

function onBufferError(e) {
  console.log('onBufferError', e);
};

document.getElementById("toggle").onclick = function() {
  if (paused) play();
  else stop();
};

load(url);
In current browsers (Chrome 43, Firefox 40) there are now 'suspend' and 'resume' methods available for AudioContext:
var audioCtx = new AudioContext();

susresBtn.onclick = function() {
  if (audioCtx.state === 'running') {
    audioCtx.suspend().then(function() {
      susresBtn.textContent = 'Resume context';
    });
  } else if (audioCtx.state === 'suspended') {
    audioCtx.resume().then(function() {
      susresBtn.textContent = 'Suspend context';
    });
  }
};
(modified example code from https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/suspend)
Actually, the Web Audio API can do the pause and play task for you. It knows the current state of the audio context (running or suspended), so you can do it this easy way:
susresBtn.onclick = function() {
  if (audioCtx.state === 'running') {
    audioCtx.suspend();
  } else if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }
};
I hope this can help.
Without spending any time checking the source of your example, I'd say you'll want to use the noteGrainOn method of the AudioBufferSourceNode (https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#methodsandparams-AudioBufferSourceNode)
Just keep track of how far into the buffer you were when you called noteOff, and then do noteGrainOn from there when resuming on a new AudioBufferSourceNode.
Did that make sense?
EDIT:
See comments below for updated API calls.
EDIT 2, 2019: See MDN for updated API calls; https://developer.mozilla.org/en-US/docs/Web/API/AudioBufferSourceNode/start
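In the current API the same idea uses start(when, offset) on a fresh AudioBufferSourceNode, since source nodes are single-use. A minimal sketch:
// Resume playback `offset` seconds into `buffer` on a new source node.
function resumeFrom(ctx, buffer, offset) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start(0, offset); // the modern replacement for noteGrainOn
  return src;
}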
As a Chrome fix, every time you want to play a sound, do it like this:
if (audioCtx.state === 'suspended') {
  audioCtx.resume().then(function() {
    audio.play();
  });
} else {
  audio.play();
}
The lack of a built-in pause functionality in the Web Audio API seems like a major oversight to me. Possibly, in the future it will be possible to do this using the planned MediaElementSource, which will let you hook up an element (which supports pausing) to Web Audio. For now, most workarounds seem to be based on remembering playback time (such as described in imbrizi's answer). Such a workaround has issues when looping sounds (does the implementation loop gaplessly or not?), and when you allow dynamically changing the playbackRate of sounds (as both affect timing).
Another, equally hack-ish and technically incorrect, but much simpler workaround you can use is:
source.playbackRate.value = paused ? 0.0000001 : 1;
Unfortunately, 0 is not a valid value for playbackRate (which would actually pause the sound). However, for many practical purposes, some very low value, like 0.0000001, is close enough, and it won't produce any audible output.
UPDATE: This is only valid for Chrome. Firefox (v29) does not yet implement the MediaElementAudioSourceNode.mediaElement property.
Assuming that you already have the AudioContext reference and your media source (e.g. via the AudioContext.createMediaElementSource() method call), you can call MediaElement.play() and MediaElement.pause() on your source, e.g.
source.mediaElement.pause();
source.mediaElement.play();
No need for hacks and workarounds, it's supported.
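For context, a fuller sketch of that setup (the element id is assumed):
var ctx = new AudioContext();
var element = document.getElementById('player'); // assumed id
var source = ctx.createMediaElementSource(element);
source.connect(ctx.destination);

// Pause/resume through the node's back-reference to the element:
source.mediaElement.pause();
source.mediaElement.play();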
If you are working with an <audio> tag as your source, you should not call pause directly on the audio element in your JavaScript; that will stop playback.
In 2017, using ctx.currentTime works well for keeping track of the point in the song. The code below uses one button (songStartPause) that toggles between play and pause. I used global variables for simplicity's sake. The variable musicStartPoint keeps track of where you are in the song; the audio API keeps track of time in seconds.
Set your initial musicStartPoint to 0 (the beginning of the song):
var ctx = new webkitAudioContext();
var buff, src;
var musicLoaded = false;
var musicStartPoint = 0;
var songOnTime, songEndTime;
var songOn = false;

songStartPause.onclick = function() {
  if (!songOn) {
    if (!musicLoaded) {
      loadAndPlay();
      musicLoaded = true;
    } else {
      play();
    }
    songOn = true;
    songStartPause.innerHTML = "||"; // a fancy Pause symbol
  } else {
    songOn = false;
    src.stop();
    setPausePoint();
    songStartPause.innerHTML = ">"; // a fancy Play symbol
  }
};
Use ctx.currentTime to subtract the time the song started from the time it paused, and add this elapsed time to however far into the song you were initially.
function setPausePoint() {
  songEndTime = ctx.currentTime;
  musicStartPoint += (songEndTime - songOnTime);
}
Load/play functions.
function loadAndPlay() {
  var req = new XMLHttpRequest();
  req.open("GET", "//mymusic.com/unity.mp3");
  req.responseType = "arraybuffer";
  req.onload = function() {
    ctx.decodeAudioData(req.response, function(buffer) {
      buff = buffer;
      play();
    });
  };
  req.send();
}

function createBuffer() {
  src = ctx.createBufferSource();
  src.buffer = buff;
}

function connectNodes() {
  src.connect(ctx.destination);
}
Lastly, the play function tells the song to start at the specified musicStartPoint (and to play it immediately), and also sets the songOnTime variable.
function play() {
  createBuffer();
  connectNodes();
  songOnTime = ctx.currentTime;
  src.start(0, musicStartPoint);
}
*Sidenote: I know it might look cleaner to set songOnTime up in the click function, but I figure it makes sense to grab the time code as close as possible to src.start, just as we grab the pause time as close as possible to src.stop.
I didn't follow the full discussion, but I will soon. I simply headed over to the HAL demo to understand it. For those who do the same, I would like to explain:
1 - how to make this code work now.
2 - a trick to get pause/play from this code.
1: Replace noteOn(xx) with start(xx) and put any valid URL in sound.load(). I think that's all I did. You will get a few errors in the console that are quite instructive; follow them. Or not: sometimes you can ignore them and it still works. They relate to the -webkit prefix on some functions, and the new names are given.
2: At some point, when it works, you may want to pause the sound.
It will work. But, as everybody knows, pressing play again raises an error. As a result, the code in this.play() after the failing source_.start(0) is not executed.
I simply enclosed that line in a try/catch:
this.play = function() {
  analyser_ = context_.createAnalyser();
  // Connect the processing graph: source -> analyser -> destination
  source_.connect(analyser_);
  analyser_.connect(context_.destination);
  try {
    source_.start(0);
  }
  catch (e) {
    this.playing = true;
    (function callback(time) {
      processAudio_(time);
      reqId_ = window.webkitRequestAnimationFrame(callback);
    })();
  }
};
And it works: you can use play/pause.
I would like to mention that this HAL simulation is really incredible. Follow those simple steps; it's worth it!