I wrote a simple application that streams from one master to several clients. Since the master may use something like an IP webcam (which has ~1 s of latency) together with the internal microphone (no latency), I wanted to add a delay to the audio track. Unfortunately, the delay does not seem to work on Firefox at all, and on Chrome it automatically synchronizes all tracks to the highest playoutDelayHint that was set, so everything becomes delayed by one second. I checked the RTP receiver values of both consumers: only the audio receiver has playoutDelayHint set to one second, and that value doesn't change over time, yet after a few seconds of streaming the video becomes delayed by one second too.
const stream = new MediaStream;
[...]
let el = document.querySelector('#remote_video');
[...]
function addVideoAudio(consumer) {
  if (consumer.kind === 'video') {
    el.setAttribute('playsinline', true);
    consumer._rtpReceiver.playoutDelayHint = 0;
  } else {
    el.setAttribute('playsinline', true);
    el.setAttribute('autoplay', true);
    consumer._rtpReceiver.playoutDelayHint = 1;
  }
  stream.addTrack(consumer.track.clone());
  el.srcObject = stream;
  el.consumer = consumer;
}
Even when I add another video element and another MediaStream, so that every stream (consumer) gets its own HTML element, I still get the same effect:
const stream1 = new MediaStream;
const stream2 = new MediaStream;
[...]
let el1 = document.querySelector('#remote_video');
let el2 = document.querySelector('#remote_audio');
[...]
function addVideoAudio(consumer) {
  if (consumer.kind === 'video') {
    el1.setAttribute('playsinline', true);
    consumer._rtpReceiver.playoutDelayHint = 0;
    stream1.addTrack(consumer.track);
    el1.srcObject = stream1;
    el1.consumer = consumer;
  } else {
    el2.setAttribute('playsinline', true);
    el2.setAttribute('autoplay', true);
    consumer._rtpReceiver.playoutDelayHint = 1;
    stream2.addTrack(consumer.track);
    el2.srcObject = stream2;
    el2.consumer = consumer;
  }
}
Is it possible to delay only one track, and why does the delay only (kind of) work on Chrome?
Thanks in advance. :)
You can use jitterBufferDelayHint to delay the audio.
Weirdly enough, playoutDelayHint on a video receiver delays both the video and the audio.
But to delay only the audio, jitterBufferDelayHint seems to fix it:
audioReceiver.playoutDelayHint = 1;
audioReceiver.jitterBufferDelayHint = 1;
This behavior might change over time.
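For example, a minimal sketch along the lines of the question's own addVideoAudio function (assuming, as in the snippets above, that the consumer exposes its RTCRtpReceiver as _rtpReceiver; both hints are non-standard Chrome-only properties, so feature-detect them before use):

function addVideoAudio(consumer) {
  const receiver = consumer._rtpReceiver;
  if (consumer.kind === 'audio' && 'jitterBufferDelayHint' in receiver) {
    // delay only the audio track; leave the video receiver untouched
    receiver.jitterBufferDelayHint = 1; // seconds
  }
  stream.addTrack(consumer.track.clone());
  el.srcObject = stream;
}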
I'm trying to write a small audio library for a specific web application. To work around the problem of Web Audio buffer sources requiring long load times, I'm trying to switch from an HTML5 Audio source (via MediaElementSourceNode) to a buffer source once the buffer source is ready to play. With a 20-minute track, it takes Web Audio's buffer source roughly 5 seconds to decode and start playing.
Using MediaElementSourceNode is required for using the PannerNode in Web Audio.
First, I thought it was a JS main-thread latency issue that was throwing off the recorded start time. I thought I could solve it by making sure the code that disables the MediaElementSource and enables the BufferSourceNode runs as close together as possible.
Then, I thought it must be the HTML5 audio having a small delay when it starts playing, causing the recorded startTime to be off; to get around this, I used an event handler listening for 'play'.
I searched around and discovered that Gapless 5 apparently does this without issue, but looking at its source code I could not work out how it switches sources seamlessly.
play(offset) {
  this.createNodes();
  this.connectNodes();
  // if Web Audio's buffer source is not ready, start playing with HTML5
  if (!this.audioClip.isWebAudioReady() &&
      this.audioClip.playType > 0) {
    this.playHTML5();
  }
  // isWebAudioReady() returns true if buffer != null
  if (!this.audioClip.isWebAudioReady()) {
    this.audioClip.addDecodeListener(this.play.bind(this));
  }
  if (this.audioClip.isWebAudioReady()) {
    this.playBufferSource();
  }
}

playHTML5() {
  var context = AudioManager.context();
  if (this.audioClip.isHTML5Ready()) {
    this.createHTMLSourceNode();
    console.log("playing HTML5");
    this.mediaElementSourceNode.connect(this.gainNode);
    this.mediaElementSourceNode.source.play();
    this.startTime = context.currentTime;
  }
  else {
    console.log('not ready yet');
    this.audioClip.addLoadListener(this.playHTML5.bind(this));
  }
}

playBufferSource() {
  var context = AudioManager.context();
  var offset = context.currentTime - this.startTime;
  if (!this.bufferSourceNode) {
    this.createBufferSourceNode();
  }
  this.bufferSourceNode.connect(this.gainNode);
  // hopelessly attempt to make up for thread latency
  offset = context.currentTime - this.startTime;
  if (this.audioClip.playType > 0) {
    this.mediaElementSourceNode.disconnect();
    this.mediaElementSourceNode = null;
  }
  if (this.audioClip.playType == 0) {
    offset = 0;
    this.bufferSourceNode.start(0, offset);
  }
  else {
    offset = context.currentTime - this.startTime;
    this.bufferSourceNode.start(0, offset);
  }
  // console.log("starting web audio at " + offset);
}

createBufferSourceNode() {
  var context = AudioManager.context();
  if (!this.audioClip.webAudioReady) {
    console.log('Web Audio not ready! Something went wrong!');
    return;
  }
  var buffer = this.audioClip.buffer;
  this.bufferSourceNode = context.createBufferSource();
  // When using anything other than Buffer,
  // we want to disable pitching.
  if (this.audioClip.playType == NS.PlayTypes.Buffer) {
    this.bufferSourceNode.playbackRate.setValueAtTime(this._pitch,
        context.currentTime);
  }
  this.bufferSourceNode.buffer = buffer;
}

createHTMLSourceNode() {
  var context = AudioManager.context();
  var HTMLAudio = this.audioClip.mediaElement.cloneNode(false);
  //HTMLAudio.addEventListener('ended', onHTML5Ended.bind(this), false);
  HTMLAudio.addEventListener('play', this.onHTML5Play.bind(this), false);
  var sourceNode = context.createMediaElementSource(HTMLAudio);
  sourceNode.source = HTMLAudio;
  this.mediaElementSourceNode = sourceNode;
}

onHTML5Play() {
  this.startTime = AudioManager.context().currentTime;
  console.log("HTML5 started playing");
}
Since I'm starting the second source as close as possible in time to the first, I should in theory not hear any clicks if the waveforms line up closely enough, but the resulting clicks are very audible; sometimes two clicks are audible.
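One thing that would likely mask small misalignments (a sketch of a common approach, not something verified against Gapless 5) is to crossfade between the two sources over a few milliseconds instead of switching instantly, so any discontinuity is smeared out rather than heard as a click. This assumes both sources are routed through their own gain nodes (htmlGain and bufferGain, names mine):

// crossfade from the HTML5 source to the buffer source over ~30 ms
function crossfadeToBuffer(context, htmlGain, bufferGain, fadeTime = 0.03) {
  const now = context.currentTime;
  // anchor current values so the ramps start from the actual gain
  htmlGain.gain.setValueAtTime(htmlGain.gain.value, now);
  bufferGain.gain.setValueAtTime(0, now);
  htmlGain.gain.linearRampToValueAtTime(0, now + fadeTime);
  bufferGain.gain.linearRampToValueAtTime(1, now + fadeTime);
}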
I am doing a POC, and my requirement is to implement a feature like "OK Google" or "Hey Siri" in the browser.
I am using the Chrome browser's Web Speech API. The thing I noticed is that I can't run the recognition continuously: it terminates automatically after a certain period of time, and I know that's probably for security reasons. I added a hack where, when the SpeechRecognition terminates, I start it again from its end event, but that is not the best way to implement such a solution. For example, if I am using two instances of the same application in different browser tabs, or another application in my browser that also uses speech recognition, neither application behaves as expected. I am looking for the best approach to solve this problem.
Thanks in advance.
Since your problem is that you can't run the SpeechRecognition continuously for long periods of time, one way would be to start the SpeechRecognition only when you get some input on the mic.
This way, you start the SR only when there is some input, and listen for your magic_word.
If the magic_word is found, then you will be able to use the SR normally for your other tasks.
Sound input can be detected with the Web Audio API, which is not tied to the time restriction SR suffers from. You can feed it with a LocalMediaStream from MediaDevices.getUserMedia.
For more info on the detection script below, see this answer.
Here is how you could attach it to a SpeechRecognition:
const magic_word = ##YOUR_MAGIC_WORD##;

// initialize our SpeechRecognition object
let recognition = new webkitSpeechRecognition();
recognition.lang = 'en-US';
recognition.interimResults = false;
recognition.maxAlternatives = 1;
recognition.continuous = true;

// detect the magic word
recognition.onresult = e => {
  // extract all the transcripts
  var transcripts = [].concat.apply([], [...e.results]
    .map(res => [...res]
      .map(alt => alt.transcript)
    )
  );
  if (transcripts.some(t => t.indexOf(magic_word) > -1)) {
    // do something awesome, like starting your own command listeners
  }
  else {
    // didn't understand...
  }
}

// called when we detect silence
function stopSpeech() {
  recognition.stop();
}

// called when we detect sound
function startSpeech() {
  try { // calling it twice will throw...
    recognition.start();
  }
  catch (e) {}
}
// request a LocalMediaStream
navigator.mediaDevices.getUserMedia({ audio: true })
  // add our listeners
  .then(stream => detectSilence(stream, stopSpeech, startSpeech))
  .catch(e => log(e.message));

function detectSilence(
  stream,
  onSoundEnd = _ => {},
  onSoundStart = _ => {},
  silence_delay = 500,
  min_decibels = -80
) {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  const streamNode = ctx.createMediaStreamSource(stream);
  streamNode.connect(analyser);
  analyser.minDecibels = min_decibels;

  const data = new Uint8Array(analyser.frequencyBinCount); // will hold our data
  let silence_start = performance.now();
  let triggered = false; // trigger only once per silence event

  function loop(time) {
    requestAnimationFrame(loop); // we'll loop every 60th of a second to check
    analyser.getByteFrequencyData(data); // get current data
    if (data.some(v => v)) { // if there is data above the given db limit
      if (triggered) {
        triggered = false;
        onSoundStart();
      }
      silence_start = time; // set it to now
    }
    if (!triggered && time - silence_start > silence_delay) {
      onSoundEnd();
      triggered = true;
    }
  }
  loop();
}
As a plunker, since neither StackSnippets nor jsfiddle's iframes will allow gUM in two versions...
The Web Audio API furnishes the method stop() to stop a sound.
I want my sound to decrease in volume before stopping. To do so I used a gain node. However, I'm facing weird issues where some sounds just don't play, and I can't figure out why.
Here is a dumbed down version of what I do:
https://jsfiddle.net/01p1t09n/1/
You'll hear that if you remove the line with setTimeout(), every sound plays. When the setTimeout is there, not every sound plays. What really confuses me is that I use push and shift accordingly to find the correct source for the sound, yet it seems like a different one stops playing. The only way I can see this happening is if AudioContext.decodeAudioData isn't synchronous. Just try the jsfiddle to get a better understanding, and put your headset on, obviously.
Here is the code of the jsfiddle:
let url = "https://raw.githubusercontent.com/gleitz/midi-js-soundfonts/gh-pages/MusyngKite/acoustic_guitar_steel-mp3/A4.mp3";
let soundContainer = {};
let notesMap = { "A4": [] };
let _AudioContext_ = AudioContext || webkitAudioContext;
let audioContext = new _AudioContext_();

var oReq = new XMLHttpRequest();
oReq.open("GET", url, true);
oReq.responseType = "arraybuffer";
oReq.onload = function (oEvent) {
  var arrayBuffer = oReq.response;
  makeLoop(arrayBuffer);
};
oReq.send(null);

function makeLoop(arrayBuffer) {
  soundContainer["A4"] = arrayBuffer;
  let currentTime = audioContext.currentTime;
  for (let i = 0; i < 10; i++) {
    // playing at same intervals
    play("A4", currentTime + i * 0.5);
    setTimeout(() => stop("A4"), 500 + i * 500); // remove this line and you will hear all the sounds
  }
}
function play(notePlayed, start) {
  audioContext.decodeAudioData(soundContainer[notePlayed], (buffer) => {
    let source;
    let gainNode;
    source = audioContext.createBufferSource();
    gainNode = audioContext.createGain();
    // pushing notes in note map
    notesMap[notePlayed].push({ source, gainNode });
    source.buffer = buffer;
    source.connect(gainNode);
    gainNode.connect(audioContext.destination);
    gainNode.gain.value = 1;
    source.start(start);
  });
}

function stop(notePlayed) {
  let note = notesMap[notePlayed].shift();
  note.source.stop();
}
The following is just to explain why I do it like this and why I don't simply use stop(); you can skip it.
The reason I'm doing all this is that I want to stop the sound gracefully, so if there is a way to do so without using setTimeout I'd gladly take it.
Basically I have a map at the top containing my sounds (notes like A1, A#1, B1,...).
soundMap = {"A": [], "lot": [], "of": [], "sounds": []};
and a play() function where I populate the arrays once I play the sounds:
play(sound) {
  // sound is just { soundName, velocity, start }
  let source;
  let gainNode;
  // sound container is just a map from soundName to the sound data
  this.audioContext.decodeAudioData(this.soundContainer[sound.soundName], (buffer) => {
    source = this.audioContext.createBufferSource();
    gainNode = this.audioContext.createGain();
    gainNode.gain.value = sound.velocity;
    // pushing sound into the sound map
    this.soundMap[sound.soundName].push({ source, gainNode });
    source.buffer = buffer;
    source.connect(gainNode);
    gainNode.connect(this.audioContext.destination);
    source.start(sound.start);
  });
}
And now the part that stops the sounds :
stop(sound) {
  // remember above, soundMap is a map from "soundName" to { gainNode, source }
  let dasound = this.soundMap[sound.soundName].shift();
  let gain = dasound.gainNode.gain.value - 0.1;
  // we lower the gain via incremental values so the sound doesn't stop abruptly
  let i = 0;
  for (; gain > 0; i++, gain -= 0.1) { // watch out, funky syntax
    ((gain, i) => {
      setTimeout(() => dasound.gainNode.gain.value = gain, 50 * i);
    })(gain, i);
  }
  // we stop the source after the gain has reached 0; stop is in seconds
  setTimeout(() => dasound.source.stop(), i * 50);
}
Aaah, yes, yes, yes! I finally found a lot of things by eventually bothering to read "everything" in the doc (diagonally). And let me tell you, this API is a diamond in the rough. Anyway, it actually has what I wanted with AudioParam:
The AudioParam interface represents an audio-related parameter, usually a parameter of an AudioNode (such as GainNode.gain). An AudioParam can be set to a specific value or a change in value, and can be scheduled to happen at a specific time and following a specific pattern.
It has the function linearRampToValueAtTime(), and they even have an example of just what I asked for!
// create audio context
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext();

// set basic variables for example
var myAudio = document.querySelector('audio');
var pre = document.querySelector('pre');
var myScript = document.querySelector('script');

pre.innerHTML = myScript.innerHTML;

var linearRampPlus = document.querySelector('.linear-ramp-plus');
var linearRampMinus = document.querySelector('.linear-ramp-minus');

// Create a MediaElementAudioSourceNode
// Feed the HTMLMediaElement into it
var source = audioCtx.createMediaElementSource(myAudio);

// Create a gain node and set its initial gain value to 0
var gainNode = audioCtx.createGain();
gainNode.gain.setValueAtTime(0, audioCtx.currentTime);

// connect the MediaElementAudioSourceNode to the gainNode
// and the gainNode to the destination
source.connect(gainNode);
gainNode.connect(audioCtx.destination);

// set buttons to do something onclick
linearRampPlus.onclick = function() {
  gainNode.gain.linearRampToValueAtTime(1.0, audioCtx.currentTime + 2);
}
linearRampMinus.onclick = function() {
  gainNode.gain.linearRampToValueAtTime(0, audioCtx.currentTime + 2);
}
Working example here
They also have different types of timing, like an exponential ramp instead of a linear one, which I guess would fit this scenario even better.
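For the original goal of fading out before stopping, a minimal sketch (my adaptation, not code from the docs) can schedule the ramp and the stop together, which avoids the setTimeout chain entirely. Note that exponentialRampToValueAtTime cannot ramp to exactly 0, so a tiny target value is used:

function fadeOutAndStop(audioContext, source, gainNode, fadeTime = 0.5) {
  const now = audioContext.currentTime;
  // anchor the current gain, then ramp down; exponential ramps need a non-zero target
  gainNode.gain.setValueAtTime(gainNode.gain.value, now);
  gainNode.gain.exponentialRampToValueAtTime(0.0001, now + fadeTime);
  // schedule the stop for when the ramp is done
  source.stop(now + fadeTime);
}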
I have the following JS code for a canvas-based game.
var EXPLOSION = "sounds/explosion.wav";

function playSound(str, vol) {
  var snd = new Audio();
  snd.src = str;
  snd.volume = vol;
  snd.play();
}

function createExplosion() {
  playSound(EXPLOSION, 0.5);
}
This works; however, it sends a server request to download the sound file every time it is called. Alternatively, if I declare the Audio object beforehand:
var snd = new Audio();
snd.src = EXPLOSION;
snd.volume = 0.5;

function createExplosion() {
  snd.play();
}
This works; however, if the createExplosion function is called before the sound has finished playing, it does not play the sound at all. This means that only a single playthrough of the sound file is allowed at a time, and in scenarios where multiple explosions are taking place it doesn't work at all.
Is there any way to properly play an audio file multiple times overlapping with itself?
I was looking for this for ages in a Tetris game I'm building, and I think this solution is the best:
function playSoundMove() {
  var sound = document.getElementById("move");
  sound.load();
  sound.play();
}
Just have it loaded and ready to go.
You could just duplicate the node with cloneNode() and play() that duplicate node.
My audio element looks like this:
<audio id="knight-audio" src="knight.ogg" preload="auto"></audio>
and I have an onClick listener that does just that:
function click() {
  const origAudio = document.getElementById("knight-audio");
  const newAudio = origAudio.cloneNode();
  newAudio.play();
}
And since the audio element isn't going to be displayed, you don't actually have to attach the node to anything.
I verified client-side and server-side that Chrome only tries to download the audio file once.
Caveats: I'm not sure about the performance impact, since on my site this clip doesn't get played more than ~40 times per page. You might have to clean up the audio nodes if you're doing something much larger than that.
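If cleanup does become necessary, one possibility (my suggestion, not part of the original approach) is to keep a reference to each clone only while it plays and forget it once it finishes, so it can be garbage-collected:

const liveClones = new Set();

function click() {
  const origAudio = document.getElementById("knight-audio");
  const newAudio = origAudio.cloneNode();
  liveClones.add(newAudio); // keep a reference while it plays
  // forget the clone once it has finished, so it can be garbage-collected
  newAudio.addEventListener('ended', () => liveClones.delete(newAudio));
  newAudio.play();
}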
Try this:
(function() {
  var snds = {};
  window.playSound = function(str, vol) {
    if (!snds[str]) (snds[str] = new Audio()).src = str;
    snds[str].volume = vol;
    snds[str].play();
  };
})();
Then the first time you call it, it will fetch the sound, but every time after that it will reuse the same sound object.
EDIT: You can also preload with duplicates to allow the sound to play more than once at a time:
(function() {
  var snds = {};
  window.playSound = function(str, vol) {
    if (!snds[str]) {
      snds[str] = [new Audio()];
      snds[str][0].src = str;
    }
    var snd = snds[str], pointer = 0;
    // find a copy that isn't currently playing; HTMLAudioElement has no
    // .playing property, so check .paused instead
    while (!snd[pointer].paused) {
      pointer++;
      if (pointer >= snd.length) {
        snd.push(new Audio());
        snd[pointer].src = str;
      }
    }
    snd[pointer].volume = vol;
    snd[pointer].play();
  };
})();
Note that this will send multiple requests if you play the sound overlapping itself too much, but those should return Not Modified very quickly, and they only happen when you play more overlapping copies than you have before.
In my game I use preloading, but only after a sound is first triggered (it's not so smart to skip preloading entirely, nor to preload everything on page load: some sounds never get played in a given session, so why load them?).
const audio = {};
audio.dataload = { 'entity': false, 'entityes': [], 'n': 0 };
audio.dataload.ordernum = function() {
  audio.dataload.n = (audio.dataload.n + 1) % 10;
  return audio.dataload.n;
};
audio.dataload.play = function() {
  // create the Audio element and its clones only once, on first play
  if (!audio.dataload.entity) {
    audio.dataload.entity = new Audio('/some.mp3');
    for (let i = 0; i < 10; i++) {
      audio.dataload.entityes.push(audio.dataload.entity.cloneNode());
    }
  }
  audio.dataload.entityes[audio.dataload.ordernum()].play();
};

audio.dataload.play(); // plays the sound, preloading the copies first if they aren't in memory yet
I've created a class that allows for layered audio. It is very similar to other answers, in that it creates another node with the same src, but this class will only do so if necessary; if it has already created a node that has finished playing, it will replay that existing node.
Another tweak is that I initially fetch the audio and use a blob URL for it. I do this for efficiency, so the src doesn't have to be fetched externally every single time a new node is created.
class LayeredAudio {
  url;
  samples = [];

  constructor(src) {
    fetch(src)
      .then(response => response.blob())
      .then((blob) => {
        this.url = URL.createObjectURL(blob);
        this.samples[0] = new Audio(this.url);
      });
  }

  play() {
    // reuse the first paused sample, or add a new one if all are playing
    if (!this.samples.find(e => e.paused)?.play()) {
      this.samples.push(new Audio(this.url));
      this.samples[this.samples.length - 1].play();
    }
  }
}

const aud = new LayeredAudio("URL");
aud.play();
Relying more on memory than processing time, we can make an array of multiple clones of the Audio and then play them in order:
function gameSnd() {
  var tick_wav = new Audio('sounds/tick.wav');
  var victory_wav = new Audio('sounds/victory.wav');
  var counter = 0;
  var ticks = [];
  for (var i = 0; i < 10; i++)
    ticks.push(tick_wav.cloneNode());
  // assigned without var so they stay callable from outside
  tick = function() {
    counter = (counter + 1) % 10;
    ticks[counter].play();
  };
  victory = function() {
    victory_wav.play();
  };
}
When I tried some of the other solutions there was some delay, but I may have found a better alternative. This will plow through a good chunk of memory if you make the audio array long. I doubt you will need to play the same audio more than 10 times simultaneously, but if you do, just make the array longer.
var audio = new Array(10);
// The length of the audio array is how many times
// the audio can overlap
var audioIndex = 0;
for (var i = 0; i < audio.length; i++) {
  audio[i] = new Audio("your audio");
}

function PlayAudio() {
  // Whenever you want to play it, call this function
  audio[audioIndex].play();
  audioIndex++;
  if (audioIndex > audio.length - 1) {
    audioIndex = 0;
  }
}
I have found this to be the simplest way to overlap the same audio with itself:
<button id="btn" onclick="clickMe()">ding</button>
<script>
  function clickMe() {
    const newAudio = new Audio("./ding.mp3");
    newAudio.play();
  }
</script>
I often read that it's not possible to pause/resume audio files with the Web Audio API.
But now I've seen an example where they actually made it possible to pause and resume audio, and I tried to figure out how they did it. I thought maybe source.looping = false was the key, but it wasn't.
For now, my audio always replays from the start.
This is my current code
var context = new (window.AudioContext || window.webkitAudioContext)();

function AudioPlayer() {
  this.source = context.createBufferSource();
  this.analyser = context.createAnalyser();
  this.stopped = true;
}

AudioPlayer.prototype.setBuffer = function(buffer) {
  this.source.buffer = buffer;
  this.source.looping = false;
};

AudioPlayer.prototype.play = function() {
  this.source.connect(this.analyser);
  this.analyser.connect(context.destination);
  this.source.noteOn(0);
  this.stopped = false;
};

AudioPlayer.prototype.stop = function() {
  this.analyser.disconnect();
  this.source.disconnect();
  this.stopped = true;
};
Does anybody know what to do to get it to work?
Oskar's answer and ayke's comment are very helpful, but I was missing a code example. So I wrote one: http://jsfiddle.net/v3syS/2/ I hope it helps.
var url = 'http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3';

var ctx = new webkitAudioContext();
var buffer;
var sourceNode;
var startedAt;
var pausedAt;
var paused;

function load(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    ctx.decodeAudioData(request.response, onBufferLoad, onBufferError);
  };
  request.send();
}

function play() {
  sourceNode = ctx.createBufferSource();
  sourceNode.connect(ctx.destination);
  sourceNode.buffer = buffer;
  paused = false;
  if (pausedAt) {
    startedAt = Date.now() - pausedAt;
    sourceNode.start(0, pausedAt / 1000);
  }
  else {
    startedAt = Date.now();
    sourceNode.start(0);
  }
}

function stop() {
  sourceNode.stop(0);
  pausedAt = Date.now() - startedAt;
  paused = true;
}

function onBufferLoad(b) {
  buffer = b;
  play();
}

function onBufferError(e) {
  console.log('onBufferError', e);
}

document.getElementById("toggle").onclick = function() {
  if (paused) play();
  else stop();
};

load(url);
In current browsers (Chrome 43, Firefox 40) there are now 'suspend' and 'resume' methods available for AudioContext:
var audioCtx = new AudioContext();

susresBtn.onclick = function() {
  if (audioCtx.state === 'running') {
    audioCtx.suspend().then(function() {
      susresBtn.textContent = 'Resume context';
    });
  } else if (audioCtx.state === 'suspended') {
    audioCtx.resume().then(function() {
      susresBtn.textContent = 'Suspend context';
    });
  }
}
(modified example code from https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/suspend)
Actually the Web Audio API can do the pause and play task for you. It knows the current state of the AudioContext (running or suspended), so you can do it this easy way:
susresBtn.onclick = function() {
  if (audioCtx.state === 'running') {
    audioCtx.suspend();
  } else if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }
}
I hope this can help.
Without spending any time checking the source of your example, I'd say you'll want to use the noteGrainOn method of the AudioBufferSourceNode (https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#methodsandparams-AudioBufferSourceNode)
Just keep track of how far into the buffer you were when you called noteOff, and then do noteGrainOn from there when resuming on a new AudioBufferSourceNode.
Did that make sense?
EDIT: See comments below for updated API calls.
EDIT 2, 2019: See MDN for updated API calls: https://developer.mozilla.org/en-US/docs/Web/API/AudioBufferSourceNode/start
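With today's method names, the same idea maps onto start(when, offset) and stop(). A minimal sketch of the bookkeeping (my adaptation of this answer to the current API, assuming ctx and a decoded buffer already exist):

var startedAt = 0; // context time when playback (re)started
var pausedAt = 0;  // how far into the buffer we were when pausing
var sourceNode = null;

function resume() {
  // a source node can only be started once, so create a fresh one
  sourceNode = ctx.createBufferSource();
  sourceNode.buffer = buffer;
  sourceNode.connect(ctx.destination);
  startedAt = ctx.currentTime - pausedAt;
  sourceNode.start(0, pausedAt); // start at the remembered offset
}

function pause() {
  pausedAt = ctx.currentTime - startedAt;
  sourceNode.stop(0);
}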
For a Chrome fix, every time you want to play a sound, set it up like this:
if (audioCtx.state === 'suspended') {
  audioCtx.resume().then(function() {
    audio.play();
  });
} else {
  audio.play();
}
The lack of built-in pause functionality in the Web Audio API seems like a major oversight to me. Possibly, in the future it will be possible to do this using the planned MediaElementSource, which will let you hook up an <audio> element (which supports pausing) to Web Audio. For now, most workarounds seem to be based on remembering the playback time (such as described in imbrizi's answer). Such a workaround has issues when looping sounds (does the implementation loop gaplessly or not?) and when you dynamically change the playbackRate of sounds (as both affect timing). Another, equally hack-ish and technically incorrect, but much simpler workaround is:
source.playbackRate.value = paused ? 0.0000001 : 1;
Unfortunately, 0 is not a valid value for playbackRate (which would actually pause the sound). However, for many practical purposes some very low value, like 0.0000001, is close enough, and it won't produce any audible output.
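Wrapped into a toggle, the hack might look like this (a sketch under the same caveats; pausedRate is an arbitrary tiny constant, not a meaningful value):

var paused = false;
var pausedRate = 0.0000001; // close enough to zero to be inaudible

function togglePause(source) {
  paused = !paused;
  // playbackRate is an AudioParam, so set its .value
  source.playbackRate.value = paused ? pausedRate : 1;
}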
UPDATE: This is only valid for Chrome. Firefox (v29) does not yet implement the MediaElementAudioSourceNode.mediaElement property.
Assuming that you already have the AudioContext reference and your media source (e.g. via an AudioContext.createMediaElementSource() method call), you can call MediaElement.play() and MediaElement.pause() on your source, e.g.
source.mediaElement.pause();
source.mediaElement.play();
No need for hacks and workarounds, it's supported.
If you are working with an <audio> tag as your source, you should not call pause directly on the audio element in your JavaScript; that will stop playback.
In 2017, using ctx.currentTime works well for keeping track of the current point in the song. The code below uses one button (songStartPause) that toggles between play and pause. I used global variables for simplicity's sake. The variable musicStartPoint keeps track of where you are in the song; the audio API tracks time in seconds.
Set your initial musicStartPoint to 0 (the beginning of the song):
var ctx = new webkitAudioContext();
var buff, src;
var musicLoaded = false;
var musicStartPoint = 0;
var songOnTime, songEndTime;
var songOn = false;

songStartPause.onclick = function() {
  if (!songOn) {
    if (!musicLoaded) {
      loadAndPlay();
      musicLoaded = true;
    } else {
      play();
    }
    songOn = true;
    songStartPause.innerHTML = "||" // a fancy Pause symbol
  } else {
    songOn = false;
    src.stop();
    setPausePoint();
    songStartPause.innerHTML = ">" // a fancy Play symbol
  }
}
Use ctx.currentTime to subtract the time the song started playing from the time it stopped, and add this duration to however far into the song you already were.
function setPausePoint() {
  songEndTime = ctx.currentTime;
  musicStartPoint += (songEndTime - songOnTime);
}
Load/play functions.
function loadAndPlay() {
  var req = new XMLHttpRequest();
  req.open("GET", "//mymusic.com/unity.mp3");
  req.responseType = "arraybuffer";
  req.onload = function() {
    ctx.decodeAudioData(req.response, function(buffer) {
      buff = buffer;
      play();
    })
  }
  req.send();
}

function createBuffer() {
  src = ctx.createBufferSource();
  src.buffer = buff;
}

function connectNodes() {
  src.connect(ctx.destination);
}
Lastly, the play function tells the song to start at the specified musicStartPoint (and to play it immediately), and also sets the songOnTime variable.
function play() {
  createBuffer();
  connectNodes();
  songOnTime = ctx.currentTime;
  src.start(0, musicStartPoint);
}
*Sidenote: I know it might look cleaner to set songOnTime up in the click function, but I figure it makes sense to grab the time code as close as possible to src.start, just like how we grab the pause time as close as possible to src.stop.
I didn't follow the full discussion, but I will soon. I simply headed over to the HAL demo to understand. For those who are now doing the same as me, I would like to explain:
1 - how to make this code work now.
2 - a trick to get pause/play working, from this code.
1: Replace noteOn(xx) with start(xx) and put any valid URL in sound.load(). I think that's all I've done. You will get a few errors in the console that are pretty instructive: follow them. Or not: sometimes you can ignore them, and it works now. They are related to the -webkit prefix on some functions; the new names are given.
2: At some point, when it works, you may want to pause the sound.
It will work. But, as everybody knows, pressing play again would raise an error. As a result, the code in this.play() after the faulty source_.start(0) is not executed.
I simply enclosed those lines in a try/catch:
this.play = function() {
  analyser_ = context_.createAnalyser();
  // Connect the processing graph: source -> analyser -> destination
  source_.connect(analyser_);
  analyser_.connect(context_.destination);
  try {
    source_.start(0);
  }
  catch(e) {
    this.playing = true;
    (function callback(time) {
      processAudio_(time);
      reqId_ = window.webkitRequestAnimationFrame(callback);
    })();
  }
};
And it works: you can use play/pause.
I would like to mention that this HAL simulation is really incredible. Follow those simple steps, it's worth it!