I'm trying to play a video in a web browser, the original video comes with two or more audio streams, each in a different language. I want to give the user the option to switch which audio track they're listening to.
I tried using audioTracks on the video element, but despite it supposedly being supported behind a flag in most browsers, I wouldn't say it's working at all in Firefox or Chrome (Firefox only shows the first track and its metadata is wrong, and in Chrome the video pauses as soon as you mute the main track and you have to seek to get it to actually continue playing).
I tried using ffmpeg to save the individual audio tracks separately and playing them in sync with the video (setting audio.currentTime = video.currentTime in response to several events on the video such as play, playing, pause, seeked and stalled). Both audio tracks play through <audio> elements connected to GainNodes using the Web Audio API, and switching audio tracks sets the gain to 1 for the track you want and 0 for the rest. This seems to work flawlessly in Chrome, but Firefox is all over the place: even after syncing the currentTime properties the actual audio is off by a second or more.
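For reference, this is roughly what that setup looks like (the selectors and track count here are just illustrative):
const ctx = new AudioContext();
const video = document.querySelector("video");
// one <audio> element per language track, kept in sync with the video
const tracks = [...document.querySelectorAll("audio.lang-track")];
const gains = tracks.map((el) => {
  const gain = ctx.createGain();
  ctx.createMediaElementSource(el).connect(gain);
  gain.connect(ctx.destination);
  return gain;
});
// switching tracks: gain 1 for the selected track, 0 for the rest
function selectTrack(index) {
  gains.forEach((g, i) => { g.gain.value = i === index ? 1 : 0; });
}
// try to keep the <audio> elements in step with the <video>
["play", "playing", "pause", "seeked", "stalled"].forEach((evt) =>
  video.addEventListener(evt, () => {
    tracks.forEach((el) => {
      el.currentTime = video.currentTime;
      video.paused ? el.pause() : el.play();
    });
  })
);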
I saw other people complaining about similar timing issues with MP3, but I'm using AAC. The suggested fix in those cases was to avoid variable bitrate for the audio, but that didn't seem to improve anything (ffmpeg -i video.mkv -map 0:a:0 -acodec aac -b:a 128k track-0.aac).
Is there any good strategy for doing this? I'd rather not have to have duplicate video files for each audio track if I can avoid it.
The best option in your case is probably to use the Media Source Extensions (MSE) API.
This lets you switch only the audio source while the original video keeps playing.
Since we replace the whole content of the audio SourceBuffer with the other audio source, there are no sync issues; for the player it is just as if there were a single audio source.
(async () => {
  const vid = document.querySelector( "video" );
  const check = document.querySelector( "input" );
  // video track as ArrayBuffer
  const bufvid = await getFileBuffer( "525d5ltprednwh1/test.webm" );
  // audio track one
  const buf300 = await getFileBuffer( "p56kvhwku7pdzd9/beep300hz.webm" );
  // audio track two
  const buf800 = await getFileBuffer( "me3y69ekxyxabhi/beep800hz.webm" );

  const source = new MediaSource();
  // load our MediaSource into the video
  vid.src = URL.createObjectURL( source );
  // when the MediaSource becomes open
  await waitForEvent( source, "sourceopen" );
  // append the video track
  const vid_buffer = source.addSourceBuffer( "video/webm;codecs=vp8" );
  vid_buffer.appendBuffer( bufvid );
  // append one of the audio tracks
  const aud_buffer = source.addSourceBuffer( "audio/webm;codecs=opus" );
  aud_buffer.appendBuffer( check.checked ? buf300 : buf800 );
  // wait for both SourceBuffers to be ready
  await Promise.all( [
    waitForEvent( aud_buffer, "updateend" ),
    waitForEvent( vid_buffer, "updateend" )
  ] );
  // signal that the whole stream has been appended (so that 'ended' can fire)
  source.endOfStream();

  check.onchange = async (evt) => {
    // remove all the data we had in the audio track's buffer
    aud_buffer.remove( 0, source.duration );
    // it is async, so we need to wait until it's done
    await waitForEvent( aud_buffer, "updateend" );
    // now we append the data of the other track
    aud_buffer.appendBuffer( check.checked ? buf300 : buf800 );
    // also async
    await waitForEvent( aud_buffer, "updateend" );
    // so 'ended' can fire again
    source.endOfStream();
  };
})();

// helpers
function getFileBuffer( filename ) {
  return fetch( "https://dl.dropboxusercontent.com/s/" + filename )
    .then( (resp) => resp.arrayBuffer() );
}
function waitForEvent( target, event ) {
  return new Promise( res => {
    target.addEventListener( event, res, { once: true } );
  } );
}
video { max-width: 100%; max-height: 100% }
<label>Use 300Hz audio track instead of 800Hz <input type="checkbox"></label><br>
<video controls></video>
I want to get a track from WebRTC and then play it, but my implementation doesn't work (the track's state is "live" but I don't hear any audio).
How can I do it without an HTMLAudioElement / new Audio?
pc.ontrack = e => addTrack(e.track);
// ...
function addTrack(track) {
  const context = new AudioContext();
  const source = context.createMediaStreamSource(new MediaStream([track]));
  source.connect(context.destination);
}
But there is no sound. The following code does work, though:
document.getElementById('audio').srcObject = stream;
This too:
function gotStream(stream) {
  const audioContext = new AudioContext();
  const mediaStreamSource = audioContext.createMediaStreamSource( stream );
  mediaStreamSource.connect( audioContext.destination );
}

const mediaStream = await navigator.mediaDevices.getUserMedia({ audio: true });
gotStream(mediaStream);
From what you describe I assume you're testing your code in Chrome. Unfortunately this is a long-standing issue in Chrome which hasn't been fixed yet.
One of the bugs mentioning it in Chromium's bug tracker is, for example, this one: https://bugs.chromium.org/p/chromium/issues/detail?id=933677#c4
It is a known issue that remote streams have to be assigned to a media element before they will play through Web Audio.
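A common workaround looks like the sketch below (the media element isn't used for output, it just keeps the remote stream "alive" for Web Audio):
pc.ontrack = (e) => {
  const stream = new MediaStream([e.track]);
  // work around the Chrome bug: attach the remote stream to a muted media element
  // so that Chrome actually pulls audio from it
  const keepAlive = new Audio();
  keepAlive.muted = true;
  keepAlive.srcObject = stream;
  // then do the actual routing / volume control in Web Audio
  const context = new AudioContext();
  const source = context.createMediaStreamSource(stream);
  const gain = context.createGain();
  source.connect(gain);
  gain.connect(context.destination);
};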
Safari on iOS puts a scrubber on its lock screen for simple HTMLAudioElements. For example:
const a = new Audio();
a.src = 'https://example.com/audio.m4a'
a.play();
JSFiddle: https://jsfiddle.net/0seckLfd/
The lock screen will allow me to choose a position in the currently playing audio file.
How can I disable the ability for the user to scrub the file on the lock screen? The metadata showing is fine, and being able to pause/play is also acceptable, but I'm also fine with disabling it all if I need to.
DISABLE Player on lock screen completely
If you want to completely remove the lock screen player, you could do something like this:
const a = new Audio();
document.querySelector('button').addEventListener('click', (e) => {
  a.src = 'http://sprott.physics.wisc.edu/wop/sounds/Bicycle%20Race-Full.m4a';
  a.play();
});
document.addEventListener('visibilitychange', () => {
  if (document.hidden) a.src = undefined;
});
https://jsfiddle.net/5s8c9eL0/3/
That stops the player when changing tabs or locking the screen.
(Code to be cleaned up and improved depending on your needs.)
From my understanding you can't block/hide the scrubbing commands unless you can tag the audio as a live stream. That being said, you can use js to refuse scrubbing server-side. Reference the answer here. Although that answer speaks of video, it also works with audio.
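For completeness, a sketch of that refuse-the-seek idea adapted to audio (whether the lock-screen scrubber actually routes through the element's seeking events is not guaranteed; a is the playing HTMLAudioElement):
let lastTime = 0;
a.addEventListener('timeupdate', () => {
  if (!a.seeking) lastTime = a.currentTime;   // remember the last legitimate position
});
a.addEventListener('seeking', () => {
  if (Math.abs(a.currentTime - lastTime) > 0.01) {
    a.currentTime = lastTime;                 // snap back, effectively refusing the scrub
  }
});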
The lock screen / control center scrubber can also be avoided by using Web Audio API.
This is an example of preloading a sound and playing it, with commentary and error handling:
try {
  // <audio> element is simpler for sound effects,
  // but in iOS/iPad it shows up in the Control Center, as if it's music you'd want to play/pause/etc.
  // Also, on subsequent plays, it only plays part of the sound.
  // And Web Audio API is better for playing sound effects anyway because it can play a sound
  // overlapping with itself, without maintaining a pool of <audio> elements.
  window.audioContext = window.audioContext || new AudioContext(); // Interoperate with other things using Web Audio API, assuming they use the same global & pattern.
  const audio_buffer_promise =
    fetch("audio/sound.wav")
      .then(response => response.arrayBuffer())
      .then(array_buffer => audioContext.decodeAudioData(array_buffer));
  var play_sound = async function () {
    audioContext.resume(); // in case it was not allowed to start until a user interaction
    // Note that this should be before waiting for the audio buffer,
    // so that it works the first time (it would no longer be "within a user gesture").
    // This only works if play_sound is called during a user gesture (at least once),
    // otherwise audioContext.resume() needs to be called externally.
    const audio_buffer = await audio_buffer_promise; // Promises can be awaited any number of times. This waits for the fetch the first time, and is instant the next time.
    // Note that if the fetch failed, it will not retry. One could instead rely on HTTP caching
    // and just fetch() each time, but that would be a little less efficient as it would need to
    // decode the audio file each time, so the best option might be custom caching with request error handling.
    const source = audioContext.createBufferSource();
    source.buffer = audio_buffer;
    source.connect(audioContext.destination);
    source.start();
  };
} catch (error) {
  console.log("AudioContext not supported", error);
  play_sound = function () {
    // no-op
    // console.log("SFX disabled because AudioContext setup failed.");
  };
}
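A hypothetical usage, assuming a #sfx-button element exists and that the first call happens inside a user gesture:
// Hypothetical usage: call play_sound from a user gesture at least once
// so that audioContext.resume() is allowed to actually start the context.
document.querySelector("#sfx-button").addEventListener("click", () => play_sound());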
I did some searching for a way to help you, but I did not find an effective way to disable the commands. I did, however, find a way to customize them, which may help you; follow the Apple tutorial link.
I think what's left to do now is wait and see if iOS 13 brings an option that does what you want.
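If the goal is only to customize which commands show up, a minimal sketch using the Media Session API could look like the following (assuming the browser exposes navigator.mediaSession; this may or may not be what the linked Apple tutorial covers, and a is the audio element from the earlier snippet):
if ('mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'My sound',
    artist: 'Me'
  });
  // only register handlers for the actions you want to expose
  navigator.mediaSession.setActionHandler('play', () => a.play());
  navigator.mediaSession.setActionHandler('pause', () => a.pause());
}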
I am building a project that captures an image from the webcam in the browser. After the image is taken, I no longer need to use the camera, so I am trying to stop it with the following function:
function stopCamera(container) {
  console.log("Stopping the camera.");
  video = container.querySelector('.video-streamer');
  console.log(video);
  video.srcObject = null;
  navigator.mediaDevices.getUserMedia({ video: true }).then(
    function (stream) {
      console.log(stream.getTracks().length);
      stream.getTracks().forEach(function (track) {
        console.log("Found a stream that needs to be stopped.");
        track.stop();
      });
      console.log(stream.getTracks().length);
    }).catch(
      function (error) {
        console.log('getUserMedia() error', error);
      });
}
However, even after the function is called, the webcam access light stays on, and I see that the browser (both Firefox and Chrome) still show that the page is using the camera.
What is missing in the code above?
navigator.mediaDevices.getUserMedia returns a new stream (a clone), not the existing stream.
You have to stop all tracks from all stream clones returned from different calls to getUserMedia, before the light goes out.
In your case, that includes the tracks of the stream you're already playing. Use the following:
function stopCamera(container) {
  const video = container.querySelector('.video-streamer');
  for (const track of video.srcObject.getTracks()) {
    track.stop();
  }
  video.srcObject = null;
}
Once all tracks are stopped, the light should go out instantly.
If you neglect to do this, the light should still go out 3-10 seconds after video.srcObject = null thanks to garbage collection (assuming it was the lone held reference to the stream).
If you've created any track clones, you need to stop them too.
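A hypothetical illustration of that last point:
// Hypothetical illustration: a cloned track keeps the camera alive until it is stopped as well.
const [track] = video.srcObject.getTracks();
const clone = track.clone(); // e.g. for a second preview element
track.stop();                // camera light stays on...
clone.stop();                // ...until the clone is stopped too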
I want to create a seamless loop of an audio file. But in all approaches I used so far, there was a noticeable gap between end & start.
This is what I tried so far:
My first approach was to use the <audio> element in the HTML; it loops, but there is still a noticeable delay when going from the end of the track back to the beginning.
<audio loop autoplay>
  <source src="audio.mp3" type="audio/mpeg">
</audio>
Then I tried it from JavaScript with the same result:
let myAudio = new Audio(file);
myAudio.loop = true;
myAudio.play();
After that I tried this (according to this answer)
myAudio.addEventListener(
  'timeupdate',
  function() {
    var buffer = .44;
    if (this.currentTime > this.duration - buffer) {
      this.currentTime = 0;
      this.play();
    }
  },
  false
);
I played around with the buffer value, but I only managed to reduce the gap, not eliminate it entirely.
I then turned to the SeamlessLoop library (GitHub) and got it to loop seamlessly in Chromium browsers (but not in the latest Safari; I didn't test other browsers). This is the code I used:
let loop = new SeamlessLoop();
// My file is 58 seconds long. Btw, there aren't any gaps in the file.
loop.addUri(file, 58000, 'sound1');
loop.callback(soundsLoaded);

function soundsLoaded() {
  let n = 1;
  loop.start('sound' + n);
}
EDIT: I tried another approach: looping it through two different audio elements:
var current_player = "a";
var player_a = document.createElement("audio");
var player_b = document.createElement("audio");
player_a.src = "sounds/back_music.ogg";
player_b.src = player_a.src;

function loopIt() {
  var player = null;
  if (current_player == "a") {
    player = player_b;
    current_player = "b";
  } else {
    player = player_a;
    current_player = "a";
  }
  player.play();
  /*
    3104.897 is the length of the audio clip in milliseconds.
    Received from player.duration.
    This is a different file than the first one.
  */
  setTimeout(loopIt, 3104.897);
}
loopIt();
But since timers in browsers are not consistent or granular enough, this doesn't work too well either, although it does work much better than the audio element's normal loop property.
Can anyone point me in the right direction to loop the audio seamlessly?
You can use the Web Audio API instead. There are a couple of caveats with this, but it will allow you to loop accurately down to the single sample level.
The first caveat is that you have to load the entire file into memory. This may not be practical with large files, but if the files are only a few seconds long it should not be a problem.
The second is that you have to wire up control buttons manually (if needed), as the API takes a low-level approach. This means play, pause/stop, mute, volume etc. Seeking and possibly pausing can be a challenge of their own.
And lastly, not all browsers support the Web Audio API; in that case you will have to fall back to the regular Audio API or even Flash, but if your target is modern browsers this should not be a major problem nowadays.
Example
This will load a 4 bar drum-loop and play without any gap when looped. The main steps are:
It loads the audio from a CORS-enabled source (this is important: either use the same domain as your page, or set up the external server to allow cross-origin usage, as Dropbox does for us in this example).
AudioContext then decodes the loaded file
The decoded file is used for the source node
The source node is connected to an output
Looping is enabled and the buffer is played from memory.
var actx = new (window.AudioContext || window.webkitAudioContext)(),
    src = "https://dl.dropboxusercontent.com/s/fdcf2lwsa748qav/drum44.wav",
    audioData, srcNode;  // global so we can access them from handlers

// Load some audio (CORS needs to be allowed or we won't be able to decode the data)
fetch(src, { mode: "cors" }).then(function(resp) { return resp.arrayBuffer(); }).then(decode);

// Decode the audio file, then start the show
function decode(buffer) {
  actx.decodeAudioData(buffer, playLoop);
}

// Sets up a new source node as needed, as stopping will render the current one invalid
function playLoop(abuffer) {
  if (!audioData) audioData = abuffer;  // create a reference for control buttons
  srcNode = actx.createBufferSource();  // create audio source
  srcNode.buffer = abuffer;             // use decoded buffer
  srcNode.connect(actx.destination);    // create output
  srcNode.loop = true;                  // takes care of perfect looping
  srcNode.start();                      // play...
}

// Simple example control
document.querySelector("button").onclick = function() {
  if (srcNode) {
    srcNode.stop();
    srcNode = null;
    this.innerText = "Play";
  } else {
    playLoop(audioData);
    this.innerText = "Stop";
  }
};
<button>Stop</button>
There is a very simple solution for that: just use loopify. It makes use of the HTML5 Web Audio API and works perfectly well with many formats, not only WAV as the dev says.
<script src="loopify.js" type="text/javascript"></script>
<script>
  loopify("yourfile.mp3|ogg|webm|flac", ready);

  function ready(err, loop) {
    if (err) {
      console.warn(err);
      return;
    }
    loop.play();
  }
</script>
This will automatically play the file; if you want start and stop buttons, for example, take a look at the dev's demo.
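For instance, a rough sketch of wiring such buttons yourself (the button IDs are assumed here, and it relies on loopify's documented play()/stop() methods):
// Rough sketch: hook the loop object up to play/stop buttons.
// Assumes <button id="play"> and <button id="stop"> exist in the page.
loopify("yourfile.mp3|ogg", function(err, loop) {
  if (err) return console.warn(err);
  document.getElementById("play").onclick = function() { loop.play(); };
  document.getElementById("stop").onclick = function() { loop.stop(); };
});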