I use WebRTC (getUserMedia) for recording sound and uploading it to a backend server. All works well, except I am unable to determine the microphone type (is it a built-in mic, USB mic, headset mic, something else?).
Does anybody know how I can detect the type?
You can use navigator.mediaDevices.enumerateDevices() to list the user's cameras and microphones, and try to infer types from their labels (there's no mic-type field unfortunately).
The following code works in Firefox 39 and Chrome 45 *:
var stream;
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(s => (stream = s), e => console.log(e.message))
  .then(() => navigator.mediaDevices.enumerateDevices())
  .then(devices => {
    // Stop capturing once we have the device list (stream.stop() was the API of
    // the day; modern code would do stream.getTracks().forEach(t => t.stop())).
    stream && stream.stop();
    console.log(devices.length + " devices.");
    devices.forEach(d => console.log(d.kind + ": " + d.label));
  })
  .catch(e => console.log(e));

// Log to the page instead of the real console (snippet convenience).
var console = { log: msg => div.innerHTML += msg + "<br>" };
<div id="div"></div>
In Firefox on my system, this produces:
5 devices.
videoinput: Logitech Camera
videoinput: FaceTime HD Camera (Built-in)
audioinput: default (Logitech Camera)
audioinput: Built-in Microphone
audioinput: Logitech Camera
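Since there's no mic-type field, any classification has to come from string-matching those labels. A rough sketch of such a heuristic (the keyword list is just a guess; labels vary between operating systems, drivers and languages):
// Heuristic only: guess a mic "type" from its human-readable label.
// The keywords are assumptions; real-world labels differ per OS/driver/locale.
function guessMicType(label) {
  const l = (label || "").toLowerCase();
  if (l.includes("usb")) return "usb";
  if (l.includes("headset") || l.includes("headphone")) return "headset";
  if (l.includes("built-in") || l.includes("internal")) return "built-in";
  if (l.includes("bluetooth")) return "bluetooth";
  return "unknown";
}

navigator.mediaDevices.enumerateDevices().then(devices =>
  devices
    .filter(d => d.kind === "audioinput")
    .forEach(d => console.log(d.label + " -> " + guessMicType(d.label)))
);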
Now, there are some caveats: By spec the labels only show if device access is granted, which is why the snippet asks for it (try it both ways).
Furthermore, Chrome 45 requires persistent permissions (a bug?), which are not available over insecure HTTP, so you may need to reload this question in HTTPS first to see labels. If you do that, don't forget to revoke access in the URL bar afterwards, or Chrome will persist it, which is probably a bad idea on Stack Overflow!
Alternatively, try https://webrtc.github.io/samples/src/content/devices/input-output which works in regular Chrome thanks to the adapter.js polyfill, but requires you to grant persistent permission and reload the page before you see labels (because of how it was written).
(*) EDIT: Apparently, enumerateDevices just got put back under an experimental flag in Chrome 45, so you need to enable it as explained here. Sorry about that. Shouldn't be long I hope.
I have a webpage served via HTTPS with the following script:
const userMediaConstraints = {
  video: true,
  audio: true
};

navigator.mediaDevices.getUserMedia(userMediaConstraints)
  .then(stream => {
    console.log("Media stream captured.");
  });
When I open it in Firefox or Safari, a prompt asking for permission to use the camera and microphone appears, but not in Chrome or Opera. There the access is blocked by default and I have to go to the site settings and manually allow access, which is set to Default (ask).
window.isSecureContext is true.
navigator.permissions.query({name:'camera'}) resolves to name: "video_capture", onchange: null, state: "prompt".
It looks like Chrome and Opera should show a prompt, but they do not. I tested it on a different machine with a different user that had no prior history with the website, with the same result.
What could be wrong?
You've said you've tried Safari, which suggests to me you're on a Mac.
This issue on the Chrome issues list fits your description really well. (I got there from this on the webrtc/samples GitHub project.) It comes down to a macOS security setting. Quoting comment #3:
Please check in Mac system preferences > Security & Privacy > Privacy > Microphone, that Chrome is checked.
(In your case since you want video, you'd also probably have to tick the box under Security & Privacy > Privacy > Camera or similar.)
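As a quick sanity check before blaming the OS, you can also query both permission states from the console. A small sketch; the 'camera' and 'microphone' permission names are the ones Chromium-based browsers accept, other browsers may reject them:
// Sketch: log the current permission state for camera and microphone.
Promise.all([
  navigator.permissions.query({ name: 'camera' }),
  navigator.permissions.query({ name: 'microphone' })
]).then(([cam, mic]) => {
  console.log('camera: ' + cam.state + ', microphone: ' + mic.state);
});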
It turns out that having a video with the autoplay attribute in the HTML, or the equivalent JavaScript running on page load, prevents Chrome from showing the prompt.
HTML that causes this problem:
<video autoplay="true"></video>
Javascript that causes this problem:
localVideo = document.createElement('video');
videoContainer.append(localVideo);
localVideo.setAttribute('id','localVideo');
localVideo.play();
My guess is that the issue has to do with Chrome's autoplay policy. Perhaps Chrome treats my website as providing a bad user experience and blocks the prompt?
I removed the <video> from the HTML and altered the JavaScript to create the relevant DOM element only inside the getUserMedia callback:
let localStream = new MediaStream();
let localAudioTrack;
let localVideoTrack;
let localVideo;

const userMediaConstraints = {
  video: true,
  audio: true
};

navigator.mediaDevices.getUserMedia(userMediaConstraints)
  .then(stream => {
    localAudioTrack = stream.getAudioTracks()[0];
    localAudioTrack.enabled = true;
    localStream.addTrack(localAudioTrack);

    localVideoTrack = stream.getVideoTracks()[0];
    localVideoTrack.enabled = true;
    localStream.addTrack(localVideoTrack);

    localVideo = document.createElement('video');
    videoContainer.append(localVideo);
    localVideo.setAttribute('id', 'localVideo');
    localVideo.srcObject = localStream;
    localVideo.muted = true;
    localVideo.play();
  });
And now I get the prompt.
We have a website with a background video, and we have an issue when a user is in Low Power Mode on Apple devices (iPhone, Mac, etc.).
Is it possible to detect just Low Power Mode and show a fallback image with a play button instead of the video?
I saw a variant using the suspend event, but it also fired when the video had fully loaded, so it's not a correct solution for us.
You can use the JavaScript Battery Status API to track the battery status.
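A minimal sketch of that API, for what it's worth; note it only reports charge level and charging state, does not expose Low Power Mode, and is not available in Safari, so it may not help on iOS:
// Sketch: Battery Status API. Reports level/charging only; it does not
// expose Low Power Mode and is not supported in Safari.
if ('getBattery' in navigator) {
  navigator.getBattery().then(battery => {
    console.log('Level: ' + battery.level * 100 + '%, charging: ' + battery.charging);
    battery.addEventListener('levelchange', () => {
      console.log('Battery level changed: ' + battery.level);
    });
  });
}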
Seems I found a solution:
The most correct way, in my opinion, is to just handle the thrown error (in the catch block) and set a fallback image instead of the video.
FYI: this solution is relevant only when the video is in the background (from a UX point of view).
So the solution is:
useEffect(() => {
  if (videoRef.current) {
    videoRef.current
      .play()
      .then(() => {})
      .catch((error) => {
        // Autoplay was blocked (e.g. Low Power Mode): show the fallback image
        setVideoPaused(true);
        return error;
      });
  }
}, []);
Check this link in Low Power Mode: https://video-dev.github.io/can-autoplay/
You will see that the error you get is "NotAllowedError", because the request is not allowed in the current context. That really goes for any situation where autoplay is prevented by the user agent or the system settings.
So handle it in the catch block and set state for the fallback, as mentioned above:
.catch((error) => {
  if (error.name === "NotAllowedError") {
    // Low Power Mode (or another autoplay block): show the fallback
  }
});
Seems to work pretty reliably.
I'm trying to get Web NFC working through the Web NFC API, but I can't get past the error NotAllowedError: NFC permission request denied.
I'm using this on Chrome 89 Dev on a Windows 10 computer, and the source code is being run locally.
I have also tried the examples posted on the Internet, including the Google sample, but they return the same error. I'm not concerned with it being experimental at this point, as this shows it has successfully passed the necessary tests, including permissions.
The HTML/JS code I'm using is below. I've read point 9.3 of the specification, but I can't make sense of it enough to write it as code, so is there a guideline algorithm that would help resolve this?
async function readTag() {
  if ("NDEFReader" in window) {
    const reader = new NDEFReader();
    try {
      await reader.scan();
      reader.onreading = event => {
        const decoder = new TextDecoder();
        for (const record of event.message.records) {
          consoleLog("Record type: " + record.recordType);
          consoleLog("MIME type: " + record.mediaType);
          consoleLog("=== data ===\n" + decoder.decode(record.data));
        }
      };
    } catch (error) {
      consoleLog(error);
    }
  } else {
    consoleLog("Web NFC is not supported.");
  }
}

async function writeTag() {
  if ("NDEFWriter" in window) {
    const writer = new NDEFWriter();
    try {
      await writer.write("helloworld");
      consoleLog("NDEF message written!");
    } catch (error) {
      consoleLog(error);
    }
  } else {
    consoleLog("Web NFC is not supported.");
  }
}

function consoleLog(data) {
  var logElement = document.getElementById('log');
  logElement.innerHTML += data + '\n';
}
<!DOCTYPE html>
<html>
  <head>
    <script src="webnfc.js"></script>
  </head>
  <body>
    <p>
      <button onclick="readTag()">Test NFC Read</button>
      <button onclick="writeTag()">Test NFC Write</button>
    </p>
    <pre id="log"></pre>
  </body>
</html>
From https://web.dev/nfc/#security-and-permissions
Web NFC is only available to top-level frames and secure browsing contexts (HTTPS only). Origins must first request the "nfc" permission while handling a user gesture (e.g a button click). The NDEFReader scan() and write() methods trigger a user prompt, if access was not previously granted.
I guess you are running from a file:// URL, since you said "locally", which is not supported.
You need to host it from a local web server using an https:// URL.
Once in the right scope, trying to scan or write should trigger a user prompt.
You can also check permissions; see https://web.dev/nfc/#check-for-permission.
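For reference, that permission check is a one-liner. A sketch; "nfc" is the permission name Chrome uses:
// Sketch: check whether NFC access was already granted before calling scan().
navigator.permissions.query({ name: "nfc" }).then(status => {
  consoleLog("NFC permission state: " + status.state); // "granted", "prompt" or "denied"
});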
Update:
So I tried the sample page https://googlechrome.github.io/samples/web-nfc/
and this works for me on Android Chrome 87 with "Experimental Web Platform features" enabled.
When you hit the scan button, a dialog asking for permission pops up.
Comparing the code in this sample to yours, I notice that it does:
ndef.addEventListener("reading" , ({ message, serialNumber }) => { ...
whereas yours does:
ndef.onreading = event => { ...
I don't know if it is the style of setting what happens on the event, or something else (hey, this is all experimental).
Update 2:
To answer the question from the comments about desktop support:
You should be OK on some desktop/browser combinations at the moment, and maybe in the future there will be wider support once this is no longer an experimental standard. Obviously, as your test link suggests, Chrome on a Linux desktop should work, as this is really similar to the Android support: all the NFC device handling is done by libnfc, and the browser just has to know about this library instead of every type of USB or other device that can do NFC.
From what I've seen of NFC support on Windows, most of it is focused on directly controlling the NFC reader via USB as just another USB device; while there is a libnfc equivalent in the Windows.Networking.Proximity APIs, I've not come across any NFC reader saying it supports this, or anybody using it.
For the Mac desktop, given that Apple is behind the curve with NFC support in iOS, I feel their desktop support will be even further behind, even though it could be similar to Linux.
As you can read at https://web.dev/nfc/#browser-support, Web NFC is only supported on Android for now, which is why you get the "NotAllowedError: NFC permission request denied." error on Windows.
I'm developing a web app that has to transmit files over Bluetooth. Is this possible, and if so, how would I go about doing it? Example code would be much appreciated; I can't find any good documentation online. Also, it must be able to run on mobile devices. I'm very new to JavaScript. Thanks.
Although I would strongly advise against using Bluetooth as a beginner (or in general at this time, since it is a work in progress for many browsers):
Web Bluetooth is NOT available for any mobile browser except Chrome & Opera for Android and Samsung Browser
The best resource is probably MDN and the specification.
Something along the lines of:
// Discovery options match any devices advertising:
// . The standard heart rate service.
// . Both 16-bit service IDs 0x1802 and 0x1803.
// . A proprietary 128-bit UUID service c48e6067-5295-48d3-8d5c-0395f61792b1.
// . Devices with name "ExampleName".
// . Devices with name starting with "Prefix".
//
// And enables access to the battery service if devices
// include it, even if devices do not advertise that service.
let options = {
filters: [
{services: ['<Your Device UUID>']}
]
}
navigator.bluetooth.requestDevice(options).then(function(device) {
console.log('Name: ' + device.name);
return device.gatt.getPrimaryService();
})
.then(function(service) {
return service.getCharacheteristic('<Your Charachteristic UUID>');
})
.then(function(characteristic) {
// Do something with the characteristic
})
.catch(function(error) {
console.log("Something went wrong. " + error);
});
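If you do get as far as a writable characteristic, sending data would look roughly like the sketch below (the helper name is made up; note GATT writes only carry small payloads, so a real file transfer would need its own chunking scheme on top):
// Sketch: write a small text payload to a writable characteristic.
// GATT writes are limited to small packets (roughly 20-512 bytes depending on MTU),
// so transferring a whole file would require splitting it into chunks.
function sendText(characteristic, text) {
  const data = new TextEncoder().encode(text);
  return characteristic.writeValue(data); // resolves when the write completes
}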
I am trying to analyse the audio output from the browser, but I don't want the getUserMedia prompt to appear (which asks for microphone permission).
The sound sources are SpeechSynthesis and an MP3 file.
Here's my code:
return navigator.mediaDevices.getUserMedia({
  audio: true
})
.then(stream => new Promise(resolve => {
  const track = stream.getAudioTracks()[0];
  this.mediaStream_.addTrack(track);
  this._source = this.audioContext.createMediaStreamSource(this.mediaStream_);
  this._source.connect(this.analyser);
  this.draw(this);
}));
This code is working fine, but it's asking for permission to use the microphone! I am not interested in the microphone at all; I only need to gauge the audio output. If I check all available devices:
navigator.mediaDevices.enumerateDevices()
  .then(function(devices) {
    devices.forEach(function(device) {
      console.log(device.kind + ": " + device.label +
        " id = " + device.deviceId);
    });
  });
I get a list of available devices in the browser, including 'audiooutput'.
So, is there a way to route the audio output into a MediaStream that can then be used inside the createMediaStreamSource function?
I have checked all the documentation for the audio API but could not find it.
Thanks to anyone who can help!
There are various ways to get a MediaStream that does not originate from gUM, but you won't be able to catch all possible audio output...
But, for your MP3 file, if you play it through a MediaElement (<audio> or <video>), and if this file is served without breaking CORS, then you can use MediaElement.captureStream().
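For example, something along these lines (a sketch; the element id and file are placeholders, and Firefox exposes the method as mozCaptureStream):
// Sketch: analyse an MP3 played by a media element, without getUserMedia.
const player = document.getElementById('player'); // e.g. <audio id="player" src="file.mp3">
const stream = player.captureStream();            // mozCaptureStream() in Firefox

const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();
audioContext.createMediaStreamSource(stream).connect(analyser);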
If you play it through the Web Audio API, or if you target browsers that don't support captureStream, then you can use AudioContext.createMediaStreamDestination().
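With that route, the idea is roughly this (a sketch; the file name is a placeholder):
// Sketch: decode and play the MP3 through the Web Audio API, and expose the
// output as a MediaStream via createMediaStreamDestination().
const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();
const destination = audioContext.createMediaStreamDestination();

fetch('file.mp3')
  .then(response => response.arrayBuffer())
  .then(buffer => audioContext.decodeAudioData(buffer))
  .then(audioBuffer => {
    const source = audioContext.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(analyser);                  // feed the analyser
    source.connect(destination);               // destination.stream is a MediaStream
    source.connect(audioContext.destination);  // keep it audible
    source.start();
  });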
For SpeechSynthesis, unfortunately you will need gUM... and a virtual audio device: first you would have to set your default output to VAB_out, then route VAB_out to VAB_in, and finally grab VAB_in from gUM...
Neither an easy nor a universally doable task, especially since, IIRC, SpeechSynthesis doesn't have any setSinkId method.