WebRTC with Web Audio API - JavaScript

I want to get a track from WebRTC and then play it, but my implementation doesn't work. (The track's state is "live", but I don't hear any audio.)
How can I do this without an HTMLAudioElement or new Audio?
pc.ontrack = e => addTrack(e.track);
// ...
function addTrack(track) {
  const context = new AudioContext();
  const source = context.createMediaStreamSource(new MediaStream([track]));
  source.connect(context.destination);
}
No volume, though. But the following code does work:
document.getElementById('audio').srcObject = stream;
This works too:
function gotStream(stream) {
  const audioContext = new AudioContext();
  const mediaStreamSource = audioContext.createMediaStreamSource(stream);
  mediaStreamSource.connect(audioContext.destination);
}
const mediaStream = await navigator.mediaDevices.getUserMedia({ audio: true });
gotStream(mediaStream);

From what you describe, I assume you're testing your code in Chrome. Unfortunately, this is a long-standing issue in Chrome which hasn't been fixed yet.
One of the bugs mentioning it in Chromium's bug tracker is, for example, this one: https://bugs.chromium.org/p/chromium/issues/detail?id=933677#c4
It is a known issue that remote streams have to be assigned to a media element before they will play through Web Audio.
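Until that is fixed, the usual workaround is to attach the remote stream to a (muted) media element first, which makes Chrome actually start pulling samples from it, and only then route it through the Web Audio graph. A minimal sketch of that idea applied to the code above:
pc.ontrack = (e) => {
  const stream = new MediaStream([e.track]);
  // Workaround: Chrome only feeds remote audio into Web Audio once the
  // stream has been attached to a media element, so attach it to a muted one.
  const dummy = new Audio();
  dummy.muted = true;
  dummy.srcObject = stream;
  // Now the Web Audio route from the question works as intended.
  const context = new AudioContext();
  const source = context.createMediaStreamSource(stream);
  source.connect(context.destination);
};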

Videos recorded with JavaScript MediaRecorder have no duration (not just from Chrome)

Before you say "this is just a Chrome bug" (the way every similar question from years ago was answered): I tried it in Firefox and the same thing happened.
I wrote a little program to save pictures and videos because my phone's camera app is broken and I don't have the space to download a new one. The pictures work fine, but when I try to save a video it has no duration. This stops it from playing on my phone and in my video editing software, and when I play it on my computer it has no time information besides how much has elapsed (so it'll say, for example, 7 seconds, but still have the dot at the beginning of the progress bar on an 8-second video).
Here's the main code for the video recording:
async function getCam(facingMode = 'user') {
  media = await navigator.mediaDevices.getUserMedia({ video: { facingMode }, audio: true });
  mediaRecorder = new MediaRecorder(media);
  mediaRecorder.ondataavailable = (e) => {
    vidChunks.push(e.data);
  };
  mediaRecorder.onstop = () => {
    const vid = new Blob(vidChunks, { type: 'video/mp4' });
    vidChunks = [];
    let a = document.createElement('a');
    a.href = URL.createObjectURL(vid);
    a.download = `${Date.now()}vid.mp4`;
    a.click();
  };
  video.srcObject = media;
  video.play();
  facing = facingMode;
}
The full code can be found here.
Am I missing something? Or is this now a problem with all browsers? I tried Brave (Chromium) on desktop and mobile, Edge (also Chromium now, AFAIK), and Firefox (not Chromium, so this is not just a problem with Chrome!!!)
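For what it's worth, this behaviour has a known cause: MediaRecorder streams its output without knowing the final length, so it never writes a duration into the container header. (Also note that Chrome and Firefox emit WebM by default; typing the Blob as video/mp4 only relabels it, it doesn't change the container.) Libraries such as fix-webm-duration or ts-ebml can patch the duration into the saved file afterwards. For in-browser playback only, a common trick is to seek far past the end once, which forces the browser to compute the real duration; a sketch, assuming a hypothetical preview element for the recorded blob:
const preview = document.createElement('video');
preview.src = URL.createObjectURL(vid);
preview.onloadedmetadata = () => {
  if (preview.duration === Infinity) {
    // Seek far past the end; the browser then scans the file
    // and fills in the real duration.
    preview.currentTime = Number.MAX_SAFE_INTEGER;
    preview.ontimeupdate = () => {
      preview.ontimeupdate = null;
      preview.currentTime = 0; // duration is now populated
    };
  }
};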

How to manipulate webcam video before streaming over WebRTC even when the tab is inactive?

I have been through many similar Q&As and articles. Overall, it seems the only solution is to connect the userMedia stream to a video element, then use a timer function to take snapshots from the video and put them on a canvas. You then manipulate the canvas contents and stream that.
The problem is the timer function. If I use setTimeout or requestAnimationFrame, the video stream becomes choppy or stops completely when the user moves away from the tab. I somehow worked around that using a Web Worker (with the code below).
(function () {
  let blob = new Blob([`
    onmessage = function (oEvent) {
      setTimeout(() => postMessage('tick'), oEvent.data[1]);
    };
  `], { type: 'text/javascript' });
  window.myTimeout = function (cb, t) {
    const worker = new Worker(window.URL.createObjectURL(blob));
    worker.onmessage = function (oEvent) {
      cb();
      worker.terminate();
    };
    worker.postMessage(['start', t]);
  };
}());
However, this works well in Chrome, but in Safari (macOS) it fails completely. Safari seems to pause the JS engine entirely. This issue does not happen if I stream userMedia directly.
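For completeness: in Chromium-based browsers there is now a way to manipulate frames off the main thread entirely, without a canvas-plus-timer loop, via the Insertable Streams ("breakout box") API. It won't help with Safari, which doesn't support it, but a minimal sketch (inside an async function) looks like this:
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const [track] = stream.getVideoTracks();
// Exposes the track's raw VideoFrames as a ReadableStream.
const processor = new MediaStreamTrackProcessor({ track });
// A sink that turns VideoFrames back into a regular video track.
const generator = new MediaStreamTrackGenerator({ kind: 'video' });
const transformer = new TransformStream({
  transform(frame, controller) {
    // Manipulate the frame here (e.g. draw it onto an OffscreenCanvas
    // and enqueue a new VideoFrame built from it).
    controller.enqueue(frame);
  }
});
processor.readable.pipeThrough(transformer).pipeTo(generator.writable);
// generator is itself a MediaStreamTrack, ready for RTCPeerConnection.addTrack().
const processedStream = new MediaStream([generator]);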

How to stop Safari clipping fractions of time off audio on second play

So it turns out that when you use JavaScript to trigger audio in the latest version of Safari, if you call obj.play() twice, the second time part of the audio is cut off (on Mac at least). This problem does not occur in Chrome, only in Safari.
Is anyone aware of a work around for this?
<button onclick="playWord()">Play</button>
<audio id="t" src="https://biblicaltext.com/audio/%e1%bc%a4.mp3"></audio>
<script>
  function playWord(word) {
    a = document.getElementById('t');
    //a.currentTime = 0
    a.play();
    //a.src = a.src
  }
</script>
https://jsfiddle.net/8fbt7rgc/1/
Redownloading the file using a.src = a.src works, but it is not ideal.
That sounds like a bug they should be made aware of.
For the time being, if you have access to the file you play in a same-origin way, you can use the Web Audio API and its AudioBufferSourceNode interface to play your media with high precision and less latency than through HTMLMediaElements:
(async () => {
  const ctx = new (window.AudioContext || window.webkitAudioContext)();
  // Using a different file because biblicaltext.com
  // doesn't allow cross-origin requests
  const data_buf = await fetch("https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3")
    .then( (resp) => resp.arrayBuffer() );
  const audio_buf = await ctx.decodeAudioData(data_buf);
  document.querySelector("a").onclick = (evt) => {
    evt.preventDefault();
    const source = ctx.createBufferSource();
    source.buffer = audio_buf;
    source.connect(ctx.destination);
    source.start(0);
  };
})();
<!-- Promising decodeAudioData for old Safari https://github.com/mohayonao/promise-decode-audio-data/ [MIT] -->
<script src="https://cdn.rawgit.com/mohayonao/promise-decode-audio-data/eb4b1322/build/promise-decode-audio-data.min.js"></script>
<a href="#">Play</a>

Playing one of multiple audio tracks in sync with a video

I'm trying to play a video in a web browser, the original video comes with two or more audio streams, each in a different language. I want to give the user the option to switch which audio track they're listening to.
I tried using audioTracks on the video element, but despite saying it's supported behind a flag in most browsers, at least in Firefox and Chrome I wouldn't say it's working at all (in Firefox it only shows the first track and the metadata was wrong, and in Chrome the video would pause as soon as you muted the main track, and you had to seek the video to get it to actually continue playing).
I tried using ffmpeg to save the individual audio tracks separately and tried playing them in sync with the video (setting audio.currentTime = video.currentTime in response to several events on the video like play, playing, pause, seeked, stalled), playing both audio tracks in <audio> elements connected to GainNodes using the Web Audio API (switching audio tracks sets the gain to 1 for the track you want and 0 for the rest). This seems to be working flawlessly in Chrome, but Firefox is all over the place and even after syncing the currentTime properties the actual audio is off by a second or more.
I saw other people complaining about similar issues with the timing being off with MP3, but I'm using AAC. The solution in those cases was to not use variable bitrates for the audio but that didn't seem to improve it (ffmpeg -i video.mkv -map 0:a:0 -acodec aac -b:a 128k track-0.aac)
Is there any good strategy for doing this? I'd rather not have to have duplicate video files for each audio track if I can avoid it.
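For reference, the GainNode-based switching described above might look something like this sketch (the element IDs are hypothetical):
const actx = new AudioContext();
// One <audio> element per language track.
const gains = ['track-en', 'track-es'].map((id) => {
  const el = document.getElementById(id);
  const gain = actx.createGain();
  actx.createMediaElementSource(el).connect(gain).connect(actx.destination);
  return gain;
});
// Make exactly one track audible.
function switchTrack(index) {
  gains.forEach((g, i) => { g.gain.value = i === index ? 1 : 0; });
}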
The best option in your case is probably to use the Media Source Extensions (MSE) API.
This will allow you to switch only the audio source while the original video keeps playing.
Since we replace the whole content of the audio SourceBuffer with the other audio source, there are no sync issues: for the player, it is just as if there were a single audio source.
(async () => {
  const vid = document.querySelector( "video" );
  const check = document.querySelector( "input" );
  // video track as ArrayBuffer
  const bufvid = await getFileBuffer( "525d5ltprednwh1/test.webm" );
  // audio track one
  const buf300 = await getFileBuffer( "p56kvhwku7pdzd9/beep300hz.webm" );
  // audio track two
  const buf800 = await getFileBuffer( "me3y69ekxyxabhi/beep800hz.webm" );
  const source = new MediaSource();
  // load our MediaSource into the video
  vid.src = URL.createObjectURL( source );
  // wait until the MediaSource becomes open
  await waitForEvent( source, "sourceopen" );
  // append the video track
  const vid_buffer = source.addSourceBuffer( "video/webm;codecs=vp8" );
  vid_buffer.appendBuffer( bufvid );
  // append one of the audio tracks
  const aud_buffer = source.addSourceBuffer( "audio/webm;codecs=opus" );
  aud_buffer.appendBuffer( check.checked ? buf300 : buf800 );
  // wait for both SourceBuffers to be ready
  await Promise.all( [
    waitForEvent( aud_buffer, "updateend" ),
    waitForEvent( vid_buffer, "updateend" )
  ] );
  // tell the UA the stream is ended (so that 'ended' can fire)
  source.endOfStream();
  check.onchange = async (evt) => {
    // remove all the data we had in the audio track's buffer
    aud_buffer.remove( 0, source.duration );
    // it is async, so we need to wait until it's done
    await waitForEvent( aud_buffer, "updateend" );
    // now we append the data of the other track
    aud_buffer.appendBuffer( check.checked ? buf300 : buf800 );
    // also async
    await waitForEvent( aud_buffer, "updateend" );
    // for 'ended' to fire
    source.endOfStream();
  };
})();
// helpers
function getFileBuffer( filename ) {
  return fetch( "https://dl.dropboxusercontent.com/s/" + filename )
    .then( (resp) => resp.arrayBuffer() );
}
function waitForEvent( target, event ) {
  return new Promise( (res) => {
    target.addEventListener( event, res, { once: true } );
  } );
}
video { max-width: 100%; max-height: 100% }
<label>Use 300Hz audio track instead of 800Hz <input type="checkbox"></label><br>
<video controls></video>

Simple audio recorder in HTML5: Requested Range Not Satisfiable error

I am trying to create a really simple voice recorder page in HTML5 - all it needs to do is record a short snippet of audio from the device microphone and then allow the user to play it back (no persistent storage required).
I started with the sample code on this page, then replaced the oscillator with the stream from a call to getUserMedia. The result worked in Android/Chrome, but in Chrome on my Mac I get the following error:
GET blob:https://sashaforce.github.io/10801957-0b71-4bcd-9ad6-9b33db4a48d7 416 (Requested Range Not Satisfiable)
A friend tried it on their iPhone and also found that it didn't work, although I wasn't able to get more details than that.
A commenter on this question mentions having a stream closed while recording. I'm not sure how that could happen here, but I did wonder if the blob was going out of scope, so I first declared it outside of the success handler, then tried saving it to sessionStorage, but got a different error. Any ideas? The code is below:
<html>
<body>
  <h1>Voice Record Demo</h1>
  <p>Record a short voice memo and play it back.</p>
  <p>
    <button>Start recording</button>
  </p>
  <p>
    <audio controls></audio>
  </p>
  <script>
    var blob;
    var handleSuccess = function (stream) {
      console.log("getUserMedia succeeded");
      var button = document.querySelector("button");
      var clicked = false;
      var chunks = [];
      var audioContext = new AudioContext();
      var mediaRecorder = new MediaRecorder(stream);
      button.addEventListener("click", function (element) {
        if (!clicked) {
          mediaRecorder.start();
          element.target.innerHTML = "Stop recording";
          clicked = true;
        } else {
          mediaRecorder.stop();
          element.target.disabled = true;
        }
      });
      mediaRecorder.ondataavailable = function (evt) {
        // push each chunk (blob) into an array
        console.log("pushing blob");
        chunks.push(evt.data);
      };
      mediaRecorder.onstop = function (evt) {
        // make a single blob out of our chunks, and point the audio element at it
        console.log("mediaRecorder.onstop");
        blob = new Blob(chunks, { 'type': 'audio/ogg; codecs=opus' });
        console.log("starting set audio source");
        document.querySelector("audio").src = URL.createObjectURL(blob);
        console.log("finished set audio source");
      };
    };
    navigator.mediaDevices.getUserMedia({ audio: true, video: false })
      .then(handleSuccess);
  </script>
</body>
</html>
I found the fix for this. I'm not sure it's the perfect solution, but I'm assuming it works because Google is still testing things out:
Go to chrome://flags/ and enable "Experimental Web Platform features". This re-enables audio recording via mediaDevices/MediaRecorder.
I'm assuming that Google is A/B testing this feature, so the default setting of disabled or enabled will vary from browser to browser and update to update.
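Given that support evidently varies between builds, it is worth guarding the recording path with a feature check instead of relying on the flag; a minimal sketch:
if (typeof MediaRecorder === 'undefined' ||
    !navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
  // MediaRecorder (or getUserMedia) is unavailable, or still behind a flag.
  alert('Audio recording is not supported in this browser.');
} else {
  navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(handleSuccess)
    .catch(function (err) { console.error('getUserMedia failed:', err); });
}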
