I am trying to turn on the flash (torch) in a web application running in Chrome on Windows 10 on a Panasonic tablet.
According to tutorials I should get torch: true when calling track.getCapabilities().
However, I don't get torch at all in the returned object.
Native apps are able to turn on the flashlight.
See code:
video.onloadedmetadata = function(e) {
  video.play();
  setTimeout(() => {
    const track = mediaStream.getVideoTracks()[0];
    const capabilities = typeof track.getCapabilities === 'function' && track.getCapabilities() || {};
    if (capabilities.torch) {
      hasTorchMode = true;
    }
  }, 250);
};
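For completeness, my understanding from the tutorials is that once torch shows up in the capabilities, it gets enabled with applyConstraints() on the same track:

if (capabilities.torch) {
  // advanced constraints are applied best-effort by the browser
  track.applyConstraints({ advanced: [{ torch: true }] })
    .catch(e => console.log('applyConstraints failed: ', e));
}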
Any clue how to solve this?
I am trying to get my laptop's speaker level shown in my application. I am new to WebRTC and the Web Audio API, so I just wanted to confirm whether this feature is possible. The application is an Electron application and has a calling feature, so when the user at the other end of the call speaks, the application should display an output level that varies according to the sound. I have tried using WebRTC and the Web Audio API, and have even seen a sample. I am able to log values, but those change when I speak into the microphone, while I need only the values of the speaker, not the microphone.
export class OutputLevelsComponent implements OnInit {

  constructor() { }

  ngOnInit(): void {
    this.getAudioLevel()
  }

  getAudioLevel() {
    try {
      navigator.mediaDevices.enumerateDevices().then(devices => {
        console.log("device:", devices);
        let constraints = {
          audio: {
            deviceId: devices[3].deviceId
          }
        }
        navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
          console.log("stream test: ", stream);
          this.handleSuccess(stream)
        });
      });
    } catch (e) {
      console.log("error getting media devices: ", e);
    }
  }

  handleSuccess(stream: any) {
    console.log("stream: ", stream);
    var context = new AudioContext();
    var analyser = context.createScriptProcessor(1024, 1, 1);
    var source = context.createMediaStreamSource(stream);
    source.connect(analyser);
    // source.connect(context.destination);
    analyser.connect(context.destination);
    opacify();
    function opacify() {
      analyser.onaudioprocess = function(e) {
        // no need to get the output buffer anymore
        var int = e.inputBuffer.getChannelData(0);
        var max = 0;
        for (var i = 0; i < int.length; i++) {
          max = int[i] > max ? int[i] : max;
        }
        if (max > 0.01) {
          console.log("max: ", max);
        }
      }
    }
  }
}
I have tried the above code, where I use enumerateDevices() and getUserMedia(). enumerateDevices() gives a set of devices; for demo purposes I am taking the last device which has 'audiooutput' as the value of its kind property and accessing the stream of that device.
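As an aside, a variant that picks the device by its kind property instead of a hard-coded index would look like this (same component, so this.handleSuccess is the method above):

// Sketch: select an 'audiooutput' device without hard-coding devices[3].
navigator.mediaDevices.enumerateDevices().then(devices => {
  const output = devices.find(d => d.kind === 'audiooutput');
  if (output) {
    navigator.mediaDevices.getUserMedia({ audio: { deviceId: output.deviceId } })
      .then(stream => this.handleSuccess(stream));
  }
});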
Please let me know if this is even possible with Web Audio API. If not, is there any other tool that can help me implement this feature?
Thanks in advance.
You would need to use your handleSuccess() function with the stream that you get from the remote end. That stream usually gets exposed as part of the track event.
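A minimal sketch of that wiring, assuming an existing RTCPeerConnection named pc (the question doesn't show the calling code, so pc is a placeholder):

// Feed the remote stream, not the microphone, into the analyser.
pc.addEventListener('track', (event) => {
  const [remoteStream] = event.streams;
  this.handleSuccess(remoteStream); // the same function as in the question
});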
The problem is likely linked to the machine you are running on. On macOS, there is no way to capture system audio output with browser APIs, as doing so requires a signed kernel extension. Potential workarounds are loopback drivers such as BlackHole or Soundflower. On Windows, the code should work fine, though.
I have an HTML5 video element whose volume I'm trying to increase.
I'm using the code I found in this answer
However, there is no sound coming out of the speakers. If I disable it, the sound is fine.
videoEl.muted = true // tried with this disabled or enabled

if (!window.audio)
  window.audio = amplify(vol)
else
  window.audio.amplify(vol)
...
export function amplify(multiplier) {
  const media = document.getElementById('videoEl')
  //@ts-ignore
  var context = new (window.AudioContext || window.webkitAudioContext),
    result = {
      context: context,
      source: context.createMediaElementSource(media),
      gain: context.createGain(),
      media,
      amplify: function(multiplier) {
        result.gain.gain.value = multiplier;
      },
      getAmpLevel: function() {
        return result.gain.gain.value;
      }
    };
  result.source.connect(result.gain)
  result.gain.connect(context.destination)
  result.amplify(multiplier)
  return result;
}
That value is set to 3 for testing.
Any idea why I'm getting no sound?
I also have Howler running for other audio files; could it be blocking the Web Audio API?
I'm trying to set up audio playback, which I cannot get working on Safari 14.0.3 but which works fine in Chrome 88.0.4324.146. I have a function that returns an AudioContext or webkitAudioContext. I followed this answer: https://stackoverflow.com/a/29373891/586006
var sounds;
var audioContext;

window.onload = function() {
  audioContext = returnAudioContext()
  sounds = {
    drop: new Audio('sounds/drop.mp3'),
    slide: new Audio('sounds/slide.mp3'),
    win: new Audio('sounds/win.mp3'),
    lose: new Audio('sounds/lose.mp3'),
  }
  playSound(sounds.drop)
}

function returnAudioContext() {
  var AudioContext = window.AudioContext // Default
    || window.webkitAudioContext // Safari and old versions of Chrome
    || false;
  if (AudioContext) {
    return new AudioContext;
  }
}

function playSound(sound) {
  audioContext.resume().then(() => {
    console.log("playing sound")
    sound.play();
  });
}
Live example: http://www.mysterysystem.com/stuff/test.html
I've done my very best to make an example that uses solely the Web Audio API, but alas, Safari's support for this API is limited. It is possible to use it in conjunction with an HTMLAudioElement, but unless you want to manipulate the audio, you won't need it.
The example below will play the drop sound whenever you click anywhere in the document. It might need two clicks, as browsers can be very strict about when audio is allowed to play.
The playSound function checks if the play() method returns a promise. If it does then that promise should have a .catch() block. Otherwise it will throw the Unhandled Promise Rejection error in Safari.
const sounds = {
  drop: new Audio('sounds/drop.mp3'),
  slide: new Audio('sounds/slide.mp3'),
  win: new Audio('sounds/win.mp3'),
  lose: new Audio('sounds/lose.mp3'),
};

function playSound(audio) {
  let promise = audio.play();
  if (promise !== undefined) {
    promise.catch(error => {
      console.log(error);
    });
  }
}

document.addEventListener('click', () => {
  playSound(sounds.drop);
});
If you do need to use the Web Audio API to do some stuff, please let me know.
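In case it helps, a minimal Web-Audio-only sketch would resume the (initially suspended) context inside the click handler and play a decoded buffer. The URL reuses the drop sound from above; the callback form of decodeAudioData is used because older Safari versions don't return a promise from it:

const AudioCtx = window.AudioContext || window.webkitAudioContext;
const ctx = new AudioCtx();

document.addEventListener('click', async () => {
  // Safari creates contexts in the 'suspended' state; resume inside the gesture.
  await ctx.resume();
  const response = await fetch('sounds/drop.mp3');
  const data = await response.arrayBuffer();
  ctx.decodeAudioData(data, (buffer) => {
    const source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start(0);
  });
});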
I am trying to write a small library for convenient manipulations with audio. I know about the autoplay policy for media elements, and I play audio after a user interaction:
const contextClass = window.AudioContext || window.webkitAudioContext;
const context = this.audioContext = new contextClass();
if (context.state === 'suspended') {
  const clickCb = () => {
    this.playSoundsAfterInteraction();
    window.removeEventListener('touchend', clickCb);
    this.usingAudios.forEach((audio) => {
      if (audio.playAfterInteraction) {
        const promise = audio.play();
        if (promise !== undefined) {
          promise.then(_ => {
          }).catch(error => {
            // If playing isn't allowed
            console.log(error);
          });
        }
      }
    });
  };
  window.addEventListener('touchend', clickCb);
}
On Android Chrome and on desktop browsers everything is OK. But on mobile Safari I get this error from the promise:
The request is not allowed by the user agent or the platform in the current context
I have tried creating the audios after an interaction and changing their src property. In every case I get this error.
I just create the audio in JS:
const audio = new Audio(base64);
add it to an array, and try to play. But nothing...
I also tried creating and playing a few seconds after the interaction; nothing.
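From what I've read about Safari's user-activation rules, play() should be reached synchronously inside the gesture handler; deferring it (setTimeout, awaited promises) can lose the activation. A sketch of that variant, with base64 being the same data URL as above:

const clickCb = () => {
  window.removeEventListener('touchend', clickCb);
  const audio = new Audio(base64);
  const promise = audio.play(); // started directly from the user gesture
  if (promise !== undefined) {
    promise.catch(error => console.log(error));
  }
};
window.addEventListener('touchend', clickCb);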
I am building a project similar to this example with jsartoolkit5, and I would like to be able to select the back camera of my device instead of letting Chrome on Android select the front one by default.
Following the example in this demo, I have added the code below to switch cameras automatically if the device has a back camera.
var videoElement = document.querySelector('canvas');

function successCallback(stream) {
  window.stream = stream; // make stream available to console
  videoElement.src = window.URL.createObjectURL(stream);
  videoElement.play();
}

function errorCallback(error) {
  console.log('navigator.getUserMedia error: ', error);
}

navigator.mediaDevices.enumerateDevices().then(
  function(devices) {
    for (var i = 0; i < devices.length; i++) {
      if (devices[i].kind == 'videoinput' && devices[i].label.indexOf('back') !== -1) {
        if (window.stream) {
          videoElement.src = null;
          window.stream.stop();
        }
        var constraints = {
          video: {
            optional: [{
              sourceId: devices[i].deviceId
            }]
          }
        };
        navigator.getUserMedia(constraints, successCallback, errorCallback);
      }
    }
  }
);
The issue is that it works perfectly with a <video> tag, but unfortunately jsartoolkit renders the content inside a canvas, and it consequently throws an error.
I have also tried to follow the instructions in this closed issue in the GitHub repository, but this time I get the following error: DOMException: play() can only be initiated by a user gesture.
Do you have any suggestions on how to solve this issue?
Thanks in advance for your replies!
Main problem:
You are mixing old and new getUserMedia syntax.
navigator.getUserMedia is deprecated, and navigator.mediaDevices.getUserMedia should be preferred.
Also, I think that optional is not part of the constraints dictionary anymore.
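To illustrate the difference, here is the same device selection in both syntaxes; id stands for a deviceId obtained from enumerateDevices(), and only the second form is current:

// Old, callback-based syntax with `optional` (what the question mixes in):
navigator.getUserMedia(
  { video: { optional: [{ sourceId: id }] } },
  successCallback,
  errorCallback
);

// Current promise-based syntax:
navigator.mediaDevices.getUserMedia(
  { video: { deviceId: { exact: id } } }
).then(successCallback).catch(errorCallback);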
Default Solution
This part is almost a duplicate of this answer: https://stackoverflow.com/a/32364912/3702797
You should be able to call directly:
navigator.mediaDevices.getUserMedia({
  video: {
    facingMode: {
      exact: 'environment'
    }
  }
})
But Chrome still has this bug, and even though @jib's answer states that it should work with the adapter.js polyfill, I myself was unable to make it work in Chrome for Android.
So the previous syntax will currently work only on Firefox for Android.
For Chrome, you'll indeed need to use enumerateDevices, along with adapter.js, to make it work; but don't mix up the syntax, and everything should be fine:
let handleStream = s => {
  document.body.append(
    Object.assign(document.createElement('video'), {
      autoplay: true,
      srcObject: s
    })
  );
}

navigator.mediaDevices.enumerateDevices().then(devices => {
  let sourceId = null;
  // enumerate all devices
  for (var device of devices) {
    // if there is still no video input, or if this is the rear camera
    if (device.kind == 'videoinput' &&
      (!sourceId || device.label.indexOf('back') !== -1)) {
      sourceId = device.deviceId;
    }
  }
  // we didn't find any video input
  if (!sourceId) {
    throw 'no video input';
  }
  let constraints = {
    video: {
      sourceId: sourceId
    }
  };
  navigator.mediaDevices.getUserMedia(constraints)
    .then(handleStream);
});
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
Fiddle for Chrome, which needs https.
Make it work with jsartoolkit
You'll have to fork the jsartoolkit project and edit artoolkit.api.js.
The main project currently disables mediaDevices.getUserMedia(), so you'll need to enable it again, and you'll also have to add a check for a sourceId option, which we'll pass later in the ARController.getUserMediaThreeScene() call.
You can find a rough and ugly implementation of these edits in this fork.
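With those edits in place, the call could look roughly like this; the sourceId option is the fork's addition, rearCameraId is a placeholder for a deviceId found via enumerateDevices(), and the other options follow jsartoolkit's three.js demos:

ARController.getUserMediaThreeScene({
  sourceId: rearCameraId,
  cameraParam: 'Data/camera_para.dat',
  onSuccess: function(arScene, arController, arCamera) {
    // set up rendering as in the project's demos
  }
});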
Once that is done, you'll have to rebuild the js files, and then remember to include the adapter.js polyfill in your code.
Here is a working fiddle that uses one of the project's demo.