Unfortunately, from time to time when making a one-on-one video call using react-native-webrtc, one of the two video streams freezes or goes black. Is there a way to detect when that happens programmatically? Thanks in advance!
It looks like each video track has a listener that fires as soon as the stream freezes.
In React Native it's the onmute listener:
stream.getVideoTracks().forEach(videoTrack => {
  videoTrack.onmute = () => {
    console.log("Frozen video stream detected!");
  };
});
Note that in React Native detecting frozen streams with this method only seems to work for remote tracks!
To detect if a stream is currently frozen I use the muted property on the video track:
console.log(videoTrack.muted); // true when frozen
Another way I've found but haven't explored further is the getStats() method on the RTCPeerConnection. It returns a promise that resolves with a large amount of statistics, which can be used to detect frozen video streams and a lot more, I suppose.
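A rough sketch of that getStats() idea: sample framesDecoded for the inbound video twice and treat "no new frames" as frozen. The field names follow the standard WebRTC stats; react-native-webrtc's report may be shaped slightly differently.
async function isRemoteVideoFrozen(peerConnection, intervalMs = 2000) {
  const sampleFramesDecoded = async () => {
    const report = await peerConnection.getStats();
    let frames = 0;
    report.forEach(stat => {
      if (stat.type === 'inbound-rtp' && stat.kind === 'video') {
        frames = stat.framesDecoded;
      }
    });
    return frames;
  };

  const before = await sampleFramesDecoded();
  await new Promise(resolve => setTimeout(resolve, intervalMs));
  const after = await sampleFramesDecoded();

  return after === before; // no frames decoded during the interval => likely frozen
}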
I am creating a demo application in which the user can record a video using ReactJS. I am able to get the list of devices and record the video.
I am stuck and couldn't find a solution to let the user switch the camera while recording is on.
As of now, this is how I am getting the video and audio stream:
async function getStream() {
  const stream = await navigator.mediaDevices.getUserMedia(
    { audio: { deviceId: 'audioDeviceId' }, video: { deviceId: 'videoDeviceId' } }
  );
  return stream;
}
By using the above I am able to get the stream for the selected devices when recording has not started.
The problem is when the user switches the audio or video device while recording: I get a new stream every time, so I lose the previous stream and only the newly generated stream gets recorded.
I would really appreciate it if anyone could help me handle the scenario where the user switches devices while the recording is on, or suggest another approach for switching devices.
Thanks for the help anyway
Did you try adding/removing tracks? I stumbled upon a similar situation. In my scenario, if the user switches to a different video device, I get the new stream and extract the video track from it. I then remove the older video tracks from the old stream and add this newly obtained track. If this makes sense, I can post my code snippet here.
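A minimal sketch of what that could look like (not the answerer's actual snippet; the device ID is a placeholder, and whether MediaRecorder keeps recording across a swapped track varies by browser):
async function switchVideoDevice(recordedStream, newVideoDeviceId) {
  // Grab a stream from the newly selected camera.
  const newStream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: newVideoDeviceId } },
  });
  const newTrack = newStream.getVideoTracks()[0];

  // Stop and remove the old video track(s) from the stream being recorded...
  recordedStream.getVideoTracks().forEach(oldTrack => {
    oldTrack.stop();
    recordedStream.removeTrack(oldTrack);
  });

  // ...and add the freshly obtained track in its place.
  recordedStream.addTrack(newTrack);
}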
I'm trying to play a sound over itself on keydown.
In order to do so, I saw the solution is to clone the sound and play the new instance instead:
var promise = sound.cloneNode(true).play();
Reproduction online:
https://jsfiddle.net/alvarotrigo/up4c6m95/13/
This seems to be working fine in Chrome and Firefox. However in Safari this results in bad performance.
Try typing very fast with both hands to reproduce the error.
Note I added a gif image that lags when typing very quickly.
This can of course also be observed in the Safari dev tools, as can be seen in the picture below:
Whole code here:
var sound = new Audio('https://www.w3schools.com/html/horse.mp3');
document.addEventListener('keydown', playSound);
function playSound() {
  // in order to play the same sound over itself
  var promise = sound.cloneNode(true).play();
  // we just don't want to show the console error when autoplay is disabled :)
  if (promise !== undefined) {
    promise.then(function() {
      // Autoplay started!
    }).catch(function(error) {
      // Autoplay was prevented; ignore the error
    });
  }
}
Safari issues a request for the audio file every time it is played, whereas Firefox and Chrome load it only once.
In Safari on iOS (for all devices, including iPad) ... no data is loaded until the user initiates it. This means the JavaScript play() and load() methods are also inactive until the user initiates playback, unless the play() or load() method is triggered by user action.
I don't think you can get around the slow performance that results from Safari issuing a request on each keydown.
Since Apple disabled the ability to autoplay audio via HTMLMediaElement.play() in JavaScript without user interaction, I am not sure how I should play a sound when a user gets a chat message before they have interacted with the DOM after the page loads.
socket.on("receive message", data => {
const receiveSound = new Audio("1.mp3");
messages.push(data);
receiveSound.play();
});
I tried playing the audio element on a mousemove event. I also tried to fake a click() through an element on a React ref to activate it initially. Neither solution worked.
Is there a way to autoplay an audio element if there is a message coming in? It must be possible since YouTube can autoplay videos without interaction.
Every time I try to play the audio, I get this error:
Unhandled Rejection (NotAllowedError): The request is not allowed by the user agent or the platform in the current context, possibly because the user denied permission.
There is a way to make Safari play a sound without necessarily having to allow it in the site settings.
The WebKit documentation is explicit that the user needs to interact (in this case, by clicking) before audio/video can be played. What it doesn't say is that once you have played some audio, you can then play any other audio without a problem. With this in mind, you can, for example, run a script in your index.html that plays your notification audio inaudibly (play, then immediately pause and rewind), after which you can play it again without any problem, as in the example below:
function unlockAudio() {
  const sound = new Audio('path/to/your/sound/notification.mp3');
  sound.play();
  sound.pause();
  sound.currentTime = 0;
  document.body.removeEventListener('click', unlockAudio);
  document.body.removeEventListener('touchstart', unlockAudio);
}
document.body.addEventListener('click', unlockAudio);
document.body.addEventListener('touchstart', unlockAudio);
To run your code after this workaround, just do it this way:
function soundNotification() {
  const sound = new Audio('path/to/your/sound/notification.mp3');
  const promise = sound.play();
  if (promise !== undefined) {
    promise.then(() => {}).catch(error => console.error(error));
  }
}
Remember that the above example is just to show you one alternative; there are several ways you can solve this problem. Just keep in mind that you will need to play some sound beforehand...
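With audio unlocked by that earlier gesture, the notification can then be triggered from the chat handler in the question (socket and messages are the names used there, not mine):
socket.on("receive message", data => {
  messages.push(data);
  soundNotification(); // should play now, since audio was already unlocked by a user gesture
});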
What solved it for me was using HowlerJS
From the docs:
howler.js is an audio library for the modern web. It defaults to Web Audio API and falls back to HTML5 Audio. This makes working with audio in JavaScript easy and reliable across all platforms.
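For reference, a minimal sketch of what that could look like (the file path is a placeholder, and the socket handler from the question is assumed):
import { Howl } from 'howler';

const notification = new Howl({
  src: ['path/to/your/sound/notification.mp3'],
});

socket.on("receive message", data => {
  messages.push(data);
  notification.play(); // Howler plays through the Web Audio API where available
});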
I'm trying to remove a track from a MediaStream. MediaStream.removeTrack() removes the track from the stream, but the camera light is left on indicating that the camera is still active.
https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack?redirectlocale=en-US&redirectslug=DOM%2FMediaStreamTrack
This references a stop() method, which I suppose would stop the camera completely. In Chrome, however, I get "Object MediaStreamTrack has no method 'stop'".
Is there a way around this, or do I have to stop the whole stream and then recreate it with the tracks I don't want gone? As an example, I want to remove the video track while the audio track is still there.
MediaStreamTrack.stop() has now been added to Chrome.
MediaStream.stop() is deprecated in Chrome 45.
You should use MediaStream.getVideoTracks() to get the video tracks and stop each one using MediaStreamTrack.stop().
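For example (assuming stream is the MediaStream returned by getUserMedia):
stream.getVideoTracks().forEach(track => track.stop()); // stops each video track so the camera is released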
For stopping a specific media stream, maybe this helps: (Link)
function stopStreamedVideo(videoElem) {
  const stream = videoElem.srcObject;
  const tracks = stream.getTracks();
  tracks.forEach(function(track) {
    track.stop();
  });
  videoElem.srcObject = null;
}
You need to call stop() on the MediaStream, not a MediaStreamTrack.
Take a look at simpl.info/gum. From the console, call stream.stop(): recording stops and the video camera light goes off.
It looks like the proper way to deal with this issue is to stop your MediaStream, recreate (and reattach) it as an audio-only one and then renegotiate the PeerConnection session. Unfortunately, Firefox currently doesn't support renegotiation mid-session.
The only viable hack is thus to also recreate the PeerConnection with the new MediaStream as suggested here (see "Adding video mid-call").
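As an illustration only, here is roughly what dropping the video and renegotiating could look like with the modern sender-based API (rather than the legacy stream-based flow the answers above refer to); the signalling exchange with the remote peer is omitted:
async function switchToAudioOnly(pc) {
  // Stop and remove every video sender so the camera is released.
  pc.getSenders()
    .filter(sender => sender.track && sender.track.kind === 'video')
    .forEach(sender => {
      sender.track.stop();
      pc.removeTrack(sender);
    });

  // Renegotiate the session with the remaining audio-only media.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // ...send the offer to the remote peer over your signalling channel...
}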
I've been evaluating HTML5 audio on iOS 4 and have been trying to understand its limitations. From what I can tell...
It is possible to play audio in the background
It is not possible to fire JavaScript events in the background upon track completion
It is possible to fire JavaScript events while the screen is off, but Safari must be in the foreground (before turning the screen off)
My goal for this current project is to create a dynamic playlist that will continue to fire events and move to the next track even while Safari is not in the foreground. Is this possible with the current way HTML5 audio works on iOS?
I am curious about how the chaining of JavaScript events works on iOS if anyone has additional information. It seems that you are allowed to queue back to back sounds, but it must happen shortly after a "human" function happens (for example, tapping an element). Anything else that tries to queue a sound outside of this human function is denied the ability to play.
Also...
Is it even possible to have events that fire to move a real iOS application to the next track? It seems as if the application is only allowed to finish its current audio stream and then it goes into an idle state. Just trying to figure out all the angles here!
This is quite an old question, so I'm not sure if you've found an answer already or not.
One thing I know is that an audio clip cannot be played via JavaScript on mobile Safari.
Autoplay audio files on an iPad with HTML5
The only way to make audio play is through a click event. This wasn't the case on 3.x, but on 4.x it is. This is because Apple doesn't want the webapp to download audio on a 3G connection programmatically, so they force the user to initiate it.
I would think that if all of the tracks were started downloading (cached), then it may be possible. I would try forcing the user to start one track, and at the same time call .load() on all of the other tracks (in the same click handler). This will force the iOS device to start downloading the audio tracks, and you may be able to play the next track (not sure though).
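A rough sketch of that "prime everything in one tap" idea (the track URLs and the start button are placeholders, and I haven't verified this on every iOS version):
var tracks = ['track1.mp3', 'track2.mp3', 'track3.mp3'].map(function(src) {
  return new Audio(src);
});

document.getElementById('start').addEventListener('click', function() {
  // Start the first track directly from the user's tap...
  tracks[0].play();
  // ...and ask iOS to begin fetching the rest inside the same click handler.
  tracks.slice(1).forEach(function(audio) {
    audio.load();
  });
});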