Whatever I do, I get an error message while trying to play a sound:
Uncaught (in promise) DOMException.
After searching on Google, I found that this error appears when the audio is autoplayed before the user has interacted with the page, but that's not the case for me. I even tried this:
componentDidMount() {
  let audio = new Audio('sounds/beep.wav');
  audio.load();
  audio.muted = true;
  document.addEventListener('click', () => {
    audio.muted = false;
    audio.play();
  });
}
But the message still appears and the sound doesn't play. What should I do?
The audio is an HTMLMediaElement, and calling play() returns a Promise, so it needs to be handled. Depending on the size of the file, it's usually ready to go, but if it is not loaded (e.g. the promise is still pending), it will throw the "AbortError" DOMException.
You can check whether it's loaded first, then catch the error to silence the message. For example:
componentDidMount() {
  this.audio = new Audio('sounds/beep.wav')
  this.audio.load()
  this.playAudio()
}

playAudio() {
  const audioPromise = this.audio.play()
  if (audioPromise !== undefined) {
    audioPromise
      .then(_ => {
        // autoplay started
      })
      .catch(err => {
        // catch dom exception
        console.info(err)
      })
  }
}
Another pattern that has worked well without triggering that error is to wrap an HTML audio element with the autoPlay attribute in a component and render it where needed. For example:
const Sound = ({ soundFileName, ...rest }) => (
  <audio autoPlay src={`sounds/${soundFileName}`} {...rest} />
)

const ComponentToAutoPlaySoundIn = () => (
  <>
    ...
    <Sound soundFileName="beep.wav" />
    ...
  </>
)
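Because the sound plays when the element mounts, you can trigger it on demand by conditionally rendering the component; a minimal sketch, where hasError is a hypothetical piece of state:
const FormWithErrorBeep = ({ hasError }) => (
  <>
    {/* The beep only mounts, and therefore only plays, while hasError is true */}
    {hasError && <Sound soundFileName="beep.wav" />}
  </>
)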
Simple error tone
If you just want to play a simple error tone (for non-visual feedback in a barcode-scanner environment, for instance) and don't want to install dependencies, it can be pretty straightforward. Just import your audio file:
import ErrorAudio from './error.mp3'
And in the code, reference it, and play it:
var AudioPlay = new Audio(ErrorAudio);
AudioPlay.play();
Only discovered this after messing around with more complicated options.
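Note that play() here still returns a promise that can reject under autoplay restrictions, so you may want to catch it as in the first answer; a minimal sketch:
AudioPlay.play().catch(err => {
  // Autoplay was blocked or the file failed to load
  console.info(err);
});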
I think it would be better to use this component (https://github.com/justinmc/react-audio-player) instead of direct DOM manipulation.
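A minimal usage sketch, following that project's README (the file path is a placeholder):
import ReactAudioPlayer from 'react-audio-player';

// Renders an <audio> element; autoPlay starts playback on mount
const Player = () => (
  <ReactAudioPlayer src="sounds/beep.wav" autoPlay controls />
);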
It is very straightforward indeed:
// inside a function component, with useState/useEffect imported from 'react'
const [, setMuted] = useState(true)

useEffect(() => {
  const player = new Audio('./sound.mp3');
  const playPromise = player.play();
  if (playPromise !== undefined) {
    playPromise.then(() => setMuted(false)).catch(() => setMuted(false));
  }
}, [])
I hope it works now :)
Related
I'm trying to set up audio playback, which I cannot get working on Safari 14.0.3 but which works fine in Chrome 88.0.4324.146. I have a function that returns an AudioContext or webkitAudioContext. I followed this answer: https://stackoverflow.com/a/29373891/586006
var sounds;
var audioContext

window.onload = function() {
  audioContext = returnAudioContext()
  sounds = {
    drop : new Audio('sounds/drop.mp3'),
    slide : new Audio('sounds/slide.mp3'),
    win : new Audio('sounds/win.mp3'),
    lose : new Audio('sounds/lose.mp3'),
  }
  playSound(sounds.drop)
}

function returnAudioContext(){
  var AudioContext = window.AudioContext // Default
    || window.webkitAudioContext // Safari and old versions of Chrome
    || false;
  if (AudioContext) {
    return new AudioContext();
  }
}

function playSound(sound){
  audioContext.resume().then(() => {
    console.log("playing sound")
    sound.play();
  });
}
Live example: http://www.mysterysystem.com/stuff/test.html
I've done my very best to make an example that uses solely the Web Audio API, but alas, Safari is not very compatible with that API. It is possible to use it in conjunction with an HTMLAudioElement, but unless you want to manipulate the audio, you won't need it.
The example below will play the drop sound whenever you click anywhere in the document. It might need two clicks, as browsers can be very strict about when audio is allowed to play.
The playSound function checks whether the play() method returns a promise. If it does, that promise needs a .catch() block; otherwise it will throw an Unhandled Promise Rejection error in Safari.
const sounds = {
  drop: new Audio('sounds/drop.mp3'),
  slide: new Audio('sounds/slide.mp3'),
  win: new Audio('sounds/win.mp3'),
  lose: new Audio('sounds/lose.mp3'),
};

function playSound(audio) {
  let promise = audio.play();
  if (promise !== undefined) {
    promise.catch(error => {
      console.log(error);
    });
  }
}

document.addEventListener('click', () => {
  playSound(sounds.drop);
});
If you do need to use the Web Audio API to do some stuff, please let me know.
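For reference, the usual way to combine the two is to route an existing HTMLAudioElement through the Web Audio graph with createMediaElementSource; a minimal sketch, reusing the sounds object above (the context creation mirrors the question's returnAudioContext):
const ctx = new (window.AudioContext || window.webkitAudioContext)();
// The element's output now flows through the graph instead of straight to the speakers
const source = ctx.createMediaElementSource(sounds.drop);
source.connect(ctx.destination); // insert gain/filter/analyser nodes here to manipulate the audio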
I'm writing functional tests for a video chat app.
I want to make sure that when the user leaves the meeting, the camera turns off, so I'm trying to check whether the camera is in use.
Is there a way to do that programmatically? I couldn't find any method on navigator.mediaDevices that says "hey, your camera is being used".
Here is how I solved it in TestCafe by "spying" on getUserMedia:
const overWriteGetUserMedia = ClientFunction(() => {
  const realGetUserMedia = navigator.mediaDevices.getUserMedia;
  const allRequestedTracks = [];
  navigator.mediaDevices.getUserMedia = constraints =>
    realGetUserMedia(constraints).then(stream => {
      stream.getTracks().forEach(track => {
        allRequestedTracks.push(track);
      });
      return stream;
    });
  return allRequestedTracks;
});
test('leaving a meeting should end streams', async t => {
  const allRequestedTracks = await overWriteGetUserMedia();
  await t.wait(5000); // wait for streams to start
  await t.click(screen.getByLabelText(/leave/i));
  await t.click(screen.getByLabelText(/yes, leave the meeting/i));
  await t.wait(1000); // wait for navigation
  // a stopped MediaStreamTrack reports readyState === 'ended'
  const actual = allRequestedTracks.every(track => track.readyState === 'ended');
  const expected = true;
  await t.expect(actual).eql(expected);
});
You can use the navigator.mediaDevices.getUserMedia method to get access to the user's camera, and the stream's active value to check whether the camera is currently in use.
If the user has blocked permission to the camera, you will receive an error.
Hope this works for you.
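A minimal sketch of that idea (the video-only constraints and logging are illustrative):
navigator.mediaDevices.getUserMedia({ video: true })
  .then(stream => {
    // stream.active stays true while at least one of its tracks is live
    console.log('camera in use:', stream.active);

    // Stopping every track releases the camera and flips active to false
    stream.getTracks().forEach(track => track.stop());
    console.log('camera in use:', stream.active);
  })
  .catch(err => console.log('permission blocked or no camera:', err));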
I need to stream audio from an mp3 endpoint URL.
I am new to audio streaming, so I could really use some help. Thanks in advance.
This is what I have tried:
// useRef imported from 'react'
const SongRow = ({ track }) => {
  const player = useRef();

  const playSong = () => {
    player.src = track.preview_url;
    player.play();
  };

  return (
    <div className="songRow" onClick={() => playSong()}>
      <audio ref={player} />
      <div className="songRow__info">
        <h1>{track.name}</h1>
      </div>
    </div>
  );
};
Error : TypeError: Cannot add property src, object is not extensible
I am not sure how I am supposed to proceed with this, and I couldn't find any relevant docs covering this with functional components.
You need to access the current property on the ref.
From the docs:
a reference to the node becomes accessible at the current attribute of the ref.
So it would be player.current.src.
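Applied to the component above, a minimal sketch of the corrected handler:
const playSong = () => {
  // The <audio> DOM node lives on player.current, not on the ref object itself
  player.current.src = track.preview_url;
  player.current.play();
};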
I am trying to write a small library for convenient audio manipulation. I know about the autoplay policy for media elements, and I only play audio after a user interaction:
const contextClass = window.AudioContext || window.webkitAudioContext;
const context = this.audioContext = new contextClass();
if (context.state === 'suspended') {
  const clickCb = () => {
    this.playSoundsAfterInteraction();
    window.removeEventListener('touchend', clickCb);
    this.usingAudios.forEach((audio) => {
      if (audio.playAfterInteraction) {
        const promise = audio.play();
        if (promise !== undefined) {
          promise.then(_ => {
          }).catch(error => {
            // If playing isn't allowed
            console.log(error);
          });
        }
      }
    });
  };
  window.addEventListener('touchend', clickCb);
}
Everything is OK on Android Chrome and on desktop browsers, but on mobile Safari I get this error in the promise:
the request is not allowed by the user agent or the platform in the current context safari
I have tried creating the audio elements after an interaction and changing their "src" property. In every case, I get this error.
I just create the audio in JS:
const audio = new Audio(base64);
add it to an array, and try to play it. But nothing...
I also tried creating and playing a few seconds after the interaction. Nothing.
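One pattern that often helps on iOS Safari (an assumption worth testing, not something confirmed here) is to "unlock" every element by calling play() synchronously inside the gesture handler itself, pausing immediately, so that later programmatic calls are allowed; usingAudios below stands in for the this.usingAudios array above:
// Hypothetical unlock sketch: play() must be reached synchronously in the handler,
// not from inside a later promise callback, for Safari to treat it as user-initiated
window.addEventListener('touchend', function unlock() {
  window.removeEventListener('touchend', unlock);
  usingAudios.forEach((audio) => {
    audio.muted = true;
    const promise = audio.play(); // synchronous call within the gesture
    if (promise !== undefined) {
      promise.then(() => {
        audio.pause();
        audio.currentTime = 0;
        audio.muted = false; // the element is now unlocked for later play() calls
      }).catch(console.log);
    }
  });
});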
I'm trying to cast a live MediaStream (eventually from the camera) from peerA to peerB, and I want peerB to receive the live stream in real time and then replay it with an added delay. Unfortunately it isn't possible to simply pause the stream and resume with play, since it jumps forward to the live moment.
So I have figured out that I can use MediaRecorder + SourceBuffer to rewatch the live stream: record the stream, append the buffers to MSE (SourceBuffer), and play it 5 seconds later.
This works great on the local device (stream). But when I try to use MediaRecorder on the receiver's MediaStream (from pc.onaddstream), it looks like it gets some data and is able to append the buffer to the SourceBuffer; however, it does not replay. Sometimes I get just one frame.
const [pc1, pc2] = localPeerConnectionLoop()
const canvasStream = canvas.captureStream(200)

videoA.srcObject = canvasStream
videoA.play()

// Note: using two MediaRecorders at the same time seems problematic
// But this one works
// stream2mediaSorce(canvasStream, videoB)
// setTimeout(videoB.play.bind(videoB), 5000)

pc1.addTransceiver(canvasStream.getTracks()[0], {
  streams: [ canvasStream ]
})

pc2.onaddstream = (evt) => {
  videoC.srcObject = evt.stream
  videoC.play()

  // Note: using two MediaRecorders at the same time seems problematic
  // THIS DOES NOT WORK
  stream2mediaSorce(evt.stream, videoD)
  setTimeout(() => videoD.play(), 2000)
}
/**
 * Turn a MediaStream into a SourceBuffer
 *
 * @param {MediaStream} stream Live stream to record
 * @param {HTMLVideoElement} videoElm Video element to play the recorded video in
 * @return {undefined}
 */
function stream2mediaSorce (stream, videoElm) {
  const RECORDER_MIME_TYPE = 'video/webm;codecs=vp9'
  const recorder = new MediaRecorder(stream, { mimeType: RECORDER_MIME_TYPE })

  const mediaSource = new MediaSource()
  videoElm.src = URL.createObjectURL(mediaSource)
  mediaSource.onsourceopen = (e) => {
    const sourceBuffer = mediaSource.addSourceBuffer(RECORDER_MIME_TYPE)

    const fr = new FileReader()
    fr.onerror = console.log
    fr.onload = ({ target }) => {
      console.log(target.result)
      sourceBuffer.appendBuffer(target.result)
    }
    recorder.ondataavailable = ({ data }) => {
      console.log(data)
      fr.readAsArrayBuffer(data)
    }
    setInterval(recorder.requestData.bind(recorder), 1000)
  }

  console.log('Recorder created')
  recorder.start()
}
Do you know why it won't play the video?
I have created a fiddle with all the necessary code to try it out; the JavaScript tab has the same code as above (the HTML is mostly irrelevant and does not need to be changed).
Some people try to reduce the latency, but I actually want to increase it to ~10 seconds, to rewatch something you did wrong in a golf swing for instance, and if possible to avoid MediaRecorder altogether.
EDIT:
I found something called "playout-delay" in an RTC extension that allows the sender to control the minimum and maximum latency from capture to render time:
https://webrtc.org/experiments/rtp-hdrext/playout-delay/
How can I use it? Will it be of any help to me?
Update: there is a new feature that will enable this, called playoutDelayHint.
We want to provide means for javascript applications to set their preferences on how fast they want to render audio or video data. As fast as possible might be beneficial for applications which concentrate on real time experience. For others additional data buffering may provide a smoother experience in case of network issues.
Refs:
https://discourse.wicg.io/t/hint-attribute-in-webrtc-to-influence-underlying-audio-video-buffering/4038
https://bugs.chromium.org/p/webrtc/issues/detail?id=10287
Demo: https://jsfiddle.net/rvekxns5/
Though I was only able to set a max of 10s in my browser, it's up to the UA vendor to do the best it can with the resources available.
import('https://jimmy.warting.se/packages/dummycontent/canvas-clock.js')
.then(({AnalogClock}) => {
  const {canvas} = new AnalogClock(100)
  document.querySelector('canvas').replaceWith(canvas)

  const [pc1, pc2] = localPeerConnectionLoop()
  const canvasStream = canvas.captureStream(200)

  videoA.srcObject = canvasStream
  videoA.play()

  pc1.addTransceiver(canvasStream.getTracks()[0], {
    streams: [ canvasStream ]
  })

  pc2.onaddstream = (evt) => {
    videoC.srcObject = evt.stream
    videoC.play()
  }

  $dur.onchange = () => {
    pc2.getReceivers()[0].playoutDelayHint = $dur.valueAsNumber
  }
})
<!-- all the irrelevant parts that you don't need to know anything about -->
<h3 style="border-bottom: 1px solid">Original canvas</h3>
<canvas id="canvas" width="100" height="100"></canvas>
<script>
function localPeerConnectionLoop(cfg = {sdpSemantics: 'unified-plan'}) {
  const setD = (d, a, b) => Promise.all([a.setLocalDescription(d), b.setRemoteDescription(d)]);
  return [0, 1].map(() => new RTCPeerConnection(cfg)).map((pc, i, pcs) => Object.assign(pc, {
    onicecandidate: e => e.candidate && pcs[i ^ 1].addIceCandidate(e.candidate),
    onnegotiationneeded: async e => {
      try {
        await setD(await pc.createOffer(), pc, pcs[i ^ 1]);
        await setD(await pcs[i ^ 1].createAnswer(), pcs[i ^ 1], pc);
      } catch (e) {
        console.log(e);
      }
    }
  }));
}
</script>
<h3 style="border-bottom: 1px solid">Local peer (PC1)</h3>
<video id="videoA" muted width="100" height="100"></video>
<h3 style="border-bottom: 1px solid">Remote peer (PC2)</h3>
<video id="videoC" muted width="100" height="100"></video>
<label> Change playoutDelayHint
<input type="number" value="1" id="$dur">
</label>