JavaScript: programmatically check if the camera is being used

I'm writing functional tests for a video chat app.
I want to make sure that when the user leaves the meeting, the camera turns off. So I'm trying to check whether the camera is in use or not.
Is there a way to do that programmatically? I couldn't find any method on navigator.mediaDevices that says "hey, your camera is being used".

Here is how I solved it in TestCafe by "spying" on getUserMedia:
const overWriteGetUserMedia = ClientFunction(() => {
    const realGetUserMedia = navigator.mediaDevices.getUserMedia;
    const allRequestedTracks = [];

    navigator.mediaDevices.getUserMedia = constraints =>
        // Invoke the real method with the correct `this` (a bare call throws
        // "Illegal invocation" in Chrome), then record every track it hands out.
        realGetUserMedia.call(navigator.mediaDevices, constraints).then(stream => {
            stream.getTracks().forEach(track => {
                allRequestedTracks.push(track);
            });
            return stream;
        });

    return allRequestedTracks;
});
test('leaving a meeting should end streams', async t => {
    const allRequestedTracks = await overWriteGetUserMedia();
    await t.wait(5000); // wait for streams to start
    await t.click(screen.getByLabelText(/leave/i));
    await t.click(screen.getByLabelText(/yes, leave the meeting/i));
    await t.wait(1000); // wait for navigation
    // A MediaStreamTrack reports whether it is still capturing via
    // readyState ('live' | 'ended'), not an `ended` property.
    const actual = allRequestedTracks.every(track => track.readyState === 'ended');
    const expected = true;
    await t.expect(actual).eql(expected);
});

You can use the navigator.mediaDevices.getUserMedia method to get access to the user's camera, and then use the stream's active property to check whether the camera is currently in use.
If the user has blocked permission to the camera, the call will reject with an error.
Hope this works for you.
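For example, a minimal sketch of that idea, run from inside an async function:
async function isCameraReleased() {
    // Request the camera; this rejects (e.g. NotAllowedError) if permission is blocked.
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    console.log(stream.active); // true while at least one track is live

    // Stopping every track is what a "leave meeting" handler should do.
    stream.getTracks().forEach(track => track.stop());

    console.log(stream.active); // false once all tracks are stopped
    return stream.getTracks().every(track => track.readyState === 'ended');
}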


Is there any way to change the gain individually in real time when playing a node that is a composite of two AudioBuffers?

I was having the following problem: when I call AudioBufferSourceNode.start() with multiple tracks, I sometimes get a delay between them.
Then, per chrisguttandin's answer, I tried the method using OfflineAudioContext (thanks, chrisguttandin).
I wanted to play two different mp3 files completely simultaneously, so I used an OfflineAudioContext to render them into a single AudioBuffer.
And I succeeded in playing the synthesized node.
The following is a demo of it:
CodeSandBox
The code in the demo is based on the code on the following page:
OfflineAudioContext - Web APIs | MDN
However, the demo does not allow you to change the gain of each of the two audio files, because the mix is rendered ahead of time.
Is there any way to change the gain of the two audio files during playback?
What I would like to do is as follows:
I want to play two pieces of audio perfectly simultaneously.
I want to change the gain of each of the two audio files in real time.
So if the above can be achieved some other way, I don't need to use OfflineAudioContext at all.
The only way I can think of is to re-run startRendering on every input event of the type="range" sliders, but I don't think that is practical from a performance standpoint.
I also looked for a solution to this problem, but could not find one.
Code:
import { useState, useEffect } from "react";

let ctx = new AudioContext(),
  offlineCtx,
  tr1,
  tr2,
  renderedBuffer,
  renderedTrack,
  tr1gain,
  tr2gain,
  start = false;

const trackArray = ["track1", "track2"];

const App = () => {
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    (async () => {
      // Fetch and decode both mp3 files.
      const bufferArray = trackArray.map(async (track) => {
        const res = await fetch("/" + track + ".mp3");
        const arrayBuffer = await res.arrayBuffer();
        return await ctx.decodeAudioData(arrayBuffer);
      });
      const audioBufferArray = await Promise.all(bufferArray);

      const source = audioBufferArray[0];
      offlineCtx = new OfflineAudioContext(
        source.numberOfChannels,
        source.length,
        source.sampleRate
      );

      tr1 = offlineCtx.createBufferSource();
      tr2 = offlineCtx.createBufferSource();
      tr1gain = offlineCtx.createGain();
      tr2gain = offlineCtx.createGain();
      tr1.buffer = audioBufferArray[0];
      tr2.buffer = audioBufferArray[1];
      tr1.connect(tr1gain);
      tr1gain.connect(offlineCtx.destination);
      tr2.connect(tr2gain); // route track 2 through its own gain node
      tr2gain.connect(offlineCtx.destination);
      tr1.start();
      tr2.start();

      // Render both tracks into a single buffer ahead of time.
      offlineCtx.startRendering().then((buffer) => {
        renderedBuffer = buffer;
        renderedTrack = ctx.createBufferSource();
        renderedTrack.buffer = renderedBuffer;
        setLoading(false);
      });
    })();

    return () => {
      ctx.close();
    };
  }, []);

  const [playing, setPlaying] = useState(false);

  const playAudio = () => {
    if (!start) {
      renderedTrack = ctx.createBufferSource();
      renderedTrack.buffer = renderedBuffer;
      renderedTrack.connect(ctx.destination);
      renderedTrack.start();
      setPlaying(true);
      start = true;
      return;
    }
    ctx.resume();
    setPlaying(true);
  };

  const pauseAudio = () => {
    ctx.suspend();
    setPlaying(false);
  };

  const stopAudio = () => {
    renderedTrack.disconnect();
    start = false;
    setPlaying(false);
  };

  const changeVolume = (e) => {
    const target = e.target.ariaLabel;
    target === "track1"
      ? (tr1gain.gain.value = e.target.value)
      : (tr2gain.gain.value = e.target.value);
  };

  const Inputs = trackArray.map((track, index) => (
    <div key={index}>
      <span>{track}</span>
      <input
        type="range"
        onChange={changeVolume}
        step="any"
        max="1"
        aria-label={track}
        disabled={loading}
      />
    </div>
  ));

  return (
    <>
      <button onClick={playing ? pauseAudio : playAudio} disabled={loading}>
        {playing ? "pause" : "play"}
      </button>
      <button onClick={stopAudio} disabled={loading}>
        stop
      </button>
      {Inputs}
    </>
  );
};
As a test, I'd go back to your original solution, but instead of
tr1.start();
tr2.start();
try something like
let t = ctx.currentTime;
tr1.start(t + 0.1);
tr2.start(t + 0.1);
There will be a delay of about 100 ms before the audio starts, but the two tracks should be synchronized precisely. If this works, reduce the 0.1 to something smaller, but not zero. Once this is working, you can connect a separate gain node to each track and control the gain of each in real time.
Oh, one other thing: instead of resuming the context after calling start, you might want to do something like
ctx.resume()
    .then(() => {
        let t = ctx.currentTime;
        tr1.start(t + 0.1);
        tr2.start(t + 0.1);
    });
The clock isn't running while the context is suspended, and resuming doesn't happen instantly; it may take some time to restart the audio hardware.
Oh, another approach, since I see that the buffer you created with the offline context has two channels in it.
Let s be the AudioBufferSourceNode you created in the offline context.
let splitter = new ChannelSplitterNode(ctx, { numberOfOutputs: 2 });
s.connect(splitter);
let g1 = new GainNode(ctx);
let g2 = new GainNode(ctx);
splitter.connect(g1, 0, 0); // channel 0 of the rendered buffer -> g1
splitter.connect(g2, 1, 0); // channel 1 of the rendered buffer -> g2
let merger = new ChannelMergerNode(ctx, { numberOfInputs: 2 });
g1.connect(merger, 0, 0);
g2.connect(merger, 0, 1);
// Connect merger to the downstream nodes or the destination.
You can now start s and modify g1 and g2 as desired to produce the output you want.
You can remove the gain nodes created in the offline context; they're not needed unless you really want to bake some kind of gain into the offline render.
But if I were doing this, I'd prefer not to use the offline context unless absolutely necessary.
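To make that last point concrete, here is a minimal sketch of the live-context approach, assuming audioBufferArray holds the two decoded buffers as in the question's code; the 0.1 s offset is the same trick as above:
const ctx = new AudioContext();

// One source and one gain node per track, all in the live context.
const tr1 = new AudioBufferSourceNode(ctx, { buffer: audioBufferArray[0] });
const tr2 = new AudioBufferSourceNode(ctx, { buffer: audioBufferArray[1] });
const tr1gain = new GainNode(ctx);
const tr2gain = new GainNode(ctx);

tr1.connect(tr1gain).connect(ctx.destination);
tr2.connect(tr2gain).connect(ctx.destination);

// Schedule both sources on the same clock so they start in sync.
const t = ctx.currentTime;
tr1.start(t + 0.1);
tr2.start(t + 0.1);

// The range sliders can now drive these gains in real time, e.g.:
// tr1gain.gain.value = 0.5;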

How do I continuously listen for a new item while scraping a website

I am using puppeteer to scrape a website that is being live-updated, to report the latest item elsewhere.
Currently, the way I was thinking of accomplishing this is to run my async scrape in a setInterval call and compare whether the last item has changed, checking every 30 seconds. I assume there has to be a better way of doing this than that.
Here is my current code:
const puppeteer = require('puppeteer');

const playtracker = async () => {
    console.log('loading');
    const browser = await puppeteer.launch({});
    const page = await browser.newPage();
    await page.goto('URL-Being-Scraped');
    await page.waitForSelector('.playlist-tracklist-view');
    let html = await page.$$eval('.playlist-tracklist-view > .playlist-track', tracks => {
        tracks = tracks.filter(track => track.querySelector('.playlist-trackname').textContent);
        tracks = tracks.map(el => el.querySelector('.playlist-trackname').textContent);
        return tracks;
    });
    console.log('logging', html[html.length - 1]);
    await browser.close(); // close the browser so each interval run doesn't leak an instance
};

setInterval(playtracker, 30000);
There is an API called MutationObserver. You can check it out on MDN: https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver
What it does is basically run whatever callback you give it whenever the specific element you watch changes. Let's say you have a list you want to listen to. What you would do is:
const listElement = document.querySelector(/* your list selector */);

const callbackFunc = function foo() {
    // do something
};

const yourMutationObserver = new MutationObserver(callbackFunc);
// observe() requires an options object saying what kinds of changes to watch for.
yourMutationObserver.observe(listElement, { childList: true, subtree: true });
You can disconnect your MutationObserver with the yourMutationObserver.disconnect() method whenever you want.
This could help too if you're confused about how to implement it: https://stackoverflow.com/a/48145840/14138428
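Since the scraping runs in puppeteer, here is a minimal sketch of wiring a MutationObserver inside the page to a Node-side callback; the onNewTrack name is just illustrative, and the selectors are taken from the question's markup:
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch({});
    const page = await browser.newPage();
    await page.goto('URL-Being-Scraped');
    await page.waitForSelector('.playlist-tracklist-view');

    // Expose a Node function the page can call when the list changes.
    await page.exposeFunction('onNewTrack', latest => {
        console.log('latest track:', latest);
    });

    // Install the observer inside the page context.
    await page.evaluate(() => {
        const list = document.querySelector('.playlist-tracklist-view');
        const observer = new MutationObserver(() => {
            const tracks = list.querySelectorAll('.playlist-trackname');
            if (tracks.length) {
                window.onNewTrack(tracks[tracks.length - 1].textContent);
            }
        });
        observer.observe(list, { childList: true, subtree: true });
    });
})();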

How to find out which microphone device user gave permission to?

I ask the user for permission to use the camera and microphone:
await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
And in Firefox, I get a prompt asking which camera and microphone to share.
Once the user has given permission, how can I tell which camera and microphone were selected? The return value of getUserMedia doesn't provide much info.
Once getUserMedia (gUM) has given you a stream object, do something like this:
async function getAudioDeviceLabel(stream) {
    let audioDeviceLabel = 'unknown'
    const tracks = stream.getAudioTracks()
    if (tracks && tracks.length >= 1 && tracks[0]) {
        const settings = tracks[0].getSettings()
        const chosenDeviceId = settings.deviceId
        if (chosenDeviceId) {
            let deviceList = await navigator.mediaDevices.enumerateDevices()
            deviceList = deviceList.filter(device => device.deviceId === chosenDeviceId)
            if (deviceList && deviceList.length >= 1) audioDeviceLabel = deviceList[0].label
        }
    }
    return audioDeviceLabel
}
This gets the deviceId of your stream's audio track from its settings, then looks through the list of enumerated devices to retrieve the label associated with that deviceId.
It is kind of a pain in the xxx neck to get this information.
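For completeness, a quick usage sketch; note that enumerateDevices only exposes non-empty labels after permission has been granted, and the same approach works for the camera via getVideoTracks:
// Inside an async function:
const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
console.log('microphone in use:', await getAudioDeviceLabel(stream));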

How to play multiple audio sequentially with ionic Media plugin

I am trying to play multiple audio files with the ionic media plugin (https://ionicframework.com/docs/native/media), but I am having a hard time making it work as a playlist without using a timeout function.
Here is what I have tried:
playOne(track: AudioFile): Promise<any> {
    return new Promise(async resolve => {
        const AudFile = await this.media.create(this.file.externalDataDirectory + track.trackUrl);
        await resolve(AudFile.play());
    });
}
Then to play them all, I have this:
async playAll(tracks: AudioFile[]): Promise<any> {
    let player = (acc, track: AudioFile) => acc.then(() =>
        this.playOne(track)
    );
    tracks.reduce(player, Promise.resolve());
}
This way they all play at the same time.
But if the playOne method is wrapped in a timeout function, the interval set on the timeout spaces out the playlist, yet one track does not necessarily finish before the next starts, and sometimes it waits a long time before the subsequent file is played.
The timeout implementation looks like this:
playOne(track: AudioFile): Promise<any> {
    return new Promise(async resolve => {
        setTimeout(async () => {
            const AudFile = await this.media.create(this.file.externalDataDirectory + track.trackUrl);
            await resolve(AudFile.play());
        }, 3000);
    });
}
Digging into the ionic wrapper of the plugin, the create method looks like this:
/**
 * Open a media file
 * @param src {string} A URI containing the audio content.
 * @return {MediaObject}
 */
Media.prototype.create = function (src) {
    var instance;
    if (checkAvailability(Media.getPluginRef(), null, Media.getPluginName()) === true) {
        // Creates a new media object
        instance = new (Media.getPlugin())(src);
    }
    return new MediaObject(instance);
};
Media.pluginName = "Media";
Media.repo = "https://github.com/apache/cordova-plugin-media";
Media.plugin = "cordova-plugin-media";
Media.pluginRef = "Media";
Media.platforms = ["Android", "Browser", "iOS", "Windows"];
Media = __decorate([
    Injectable()
], Media);
return Media;
}(IonicNativePlugin));
Any suggestion would be appreciated.
You may get it working by looping over your tracks and awaiting playOne on each one:
async playAll(tracks: AudioFile[]): Promise<any> {
    for (const track of tracks) {
        await this.playOne(track);
    }
}
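Note that this only plays the tracks sequentially if playOne resolves when a track finishes rather than when it starts. A sketch of such a playOne, using the plugin's onSuccess observable (which emits when playback completes), might look like this:
playOne(track: AudioFile): Promise<any> {
    return new Promise(async resolve => {
        const audFile = await this.media.create(this.file.externalDataDirectory + track.trackUrl);
        // onSuccess emits once the file has finished playing.
        const sub = audFile.onSuccess.subscribe(() => {
            sub.unsubscribe();
            resolve();
        });
        audFile.play();
    });
}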
If I'm not mistaken, the play function doesn't block until playing of the audio file is finished, and it doesn't return a promise either. A workaround would be to use a setTimeout for the duration of the track:
playOne(track: AudioFile): Promise<any> {
    return new Promise(async (resolve, reject) => {
        const audFile = await this.media.create(this.file.externalDataDirectory + track.trackUrl);
        const duration = audFile.getDuration(); // duration in seconds
        audFile.play();
        setTimeout(
            () => resolve(),
            duration * 1000 // setTimeout expects milliseconds
        );
    });
}
I eventually got it to work with a recursive function. This works as expected:
PlayAllList(i, tracks: AudioFile[]) {
    var self = this;
    this.Audiofile = this.media.create(this.file.externalDataDirectory + tracks[i].trackUrl);
    this.Audiofile.play();
    // When the current file finishes, play the next one, unless it was the last.
    this.Audiofile.onSuccess.subscribe(() => {
        if ((i + 1) < tracks.length) {
            self.PlayAllList(i + 1, tracks);
        }
    });
}
Then:
this.PlayAllList(0, tracks);
Any improvement on this would be appreciated.
I think you will be better off with the Web Audio API. I have used it before, and the possibilities are endless.
Apparently it can be used in Ionic without issues:
https://www.airpair.com/ionic-framework/posts/using-web-audio-api-for-precision-audio-in-ionic
I have used it on http://chordoracle.com to play multiple audio samples at the same time (up to 6 simultaneous samples, one for each string of the guitar). In this case I also alter their pitch to get different notes.
In order to play multiple samples, you just need to create multiple bufferSources:
https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/createBufferSource
Some links to get you started:
https://www.w3.org/TR/webaudio/
https://medium.com/better-programming/all-you-need-to-know-about-the-web-audio-api-3df170559378
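Since the question is about sequential playback, a minimal Web Audio sketch of chaining tracks with the onended event (the file names are just illustrative) might look like this:
const ctx = new AudioContext();

async function loadBuffer(url) {
    // Fetch and decode one audio file.
    const res = await fetch(url);
    return ctx.decodeAudioData(await res.arrayBuffer());
}

async function playSequentially(urls) {
    const buffers = await Promise.all(urls.map(loadBuffer));
    let i = 0;
    const playNext = () => {
        if (i >= buffers.length) return;
        const source = new AudioBufferSourceNode(ctx, { buffer: buffers[i++] });
        source.connect(ctx.destination);
        source.onended = playNext; // chain the next track when this one finishes
        source.start();
    };
    playNext();
}

playSequentially(['track1.mp3', 'track2.mp3']);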

Play audio with React

Whatever I do, I get an error message while trying to play a sound:
Uncaught (in promise) DOMException.
After searching on Google, I found that it should appear if I autoplay the audio before any action on the page from the user, but that's not the case for me. I even did this:
componentDidMount() {
    let audio = new Audio('sounds/beep.wav');
    audio.load();
    audio.muted = true;
    document.addEventListener('click', () => {
        audio.muted = false;
        audio.play();
    });
}
But the message still appears and the sound doesn't play. What should I do?
The audio element is an HTMLMediaElement, and calling play() returns a Promise, so it needs to be handled. Depending on the size of the file, it's usually ready to go, but if it is not loaded (e.g. a pending promise), it will throw an "AbortError" DOMException.
You can check whether it's loaded first, then catch the error to turn off the message. For example:
componentDidMount() {
    this.audio = new Audio('sounds/beep.wav')
    this.audio.load()
    this.playAudio()
}

playAudio() {
    const audioPromise = this.audio.play()
    if (audioPromise !== undefined) {
        audioPromise
            .then(_ => {
                // autoplay started
            })
            .catch(err => {
                // catch the DOMException
                console.info(err)
            })
    }
}
Another pattern that has worked well without showing that error is creating the component as an HTML audio element with the autoPlay attribute and then rendering it as a component where needed. For example:
const Sound = ({ soundFileName, ...rest }) => (
    <audio autoPlay src={`sounds/${soundFileName}`} {...rest} />
)

const ComponentToAutoPlaySoundIn = () => (
    <>
        ...
        <Sound soundFileName="beep.wav" />
        ...
    </>
)
Simple error tone
If you want something as simple as playing a simple error tone (for non-visual feedback in a barcode-scanner environment, for instance) and don't want to install dependencies, it can be pretty simple. Just import your audio file:
import ErrorAudio from './error.mp3'
And in the code, reference it and play it:
var AudioPlay = new Audio(ErrorAudio);
AudioPlay.play();
I only discovered this after messing around with more complicated options.
I think it would be better to use this component (https://github.com/justinmc/react-audio-player) instead of direct DOM manipulation.
It is very straightforward indeed:
const [, setMuted] = useState(true)

useEffect(() => {
    const player = new Audio('./sound.mp3');
    const playPromise = player.play();
    if (playPromise !== undefined) {
        playPromise.then(() => setMuted(false)).catch(() => setMuted(false));
    }
}, [])
I hope it works now :)
