When I establish a WebRTC connection with output and input (video and microphone) on Chrome for Android, controlling the volume with the hardware keys shows the slider for the STREAM_VOICE_CALL stream, which is not the right stream for WebRTC. This makes it impossible for the user to effectively control the volume.
I'm guessing this happens because when I turn on the microphone (with getUserMedia), the system thinks I'm in a call.
Any ideas on how to fix this? Is this expected behavior or a Chrome bug?
Thanks
Is that how it is built now? At least Chromium issue 243506 says the following:
Enable audio volume control on Android.
This is done by setting stream to VOICE which is the only stream type hooking up volume control automatically. It's required to use matching audio mode and stream type. Otherwise, the volume control doesn't work. That means we have to use same mode for both WebAudio and WebRTC. This leads to the change in audio_manager_android.cc which sets audio to communication mode so that OpenSL stream can use VOICE stream type to adjust volume.
Bold is added by me.
(https://code.google.com/p/chromium/issues/detail?id=243506)
Looks like it's already fixed in the latest Chrome version.
I am currently experimenting with the getDisplayMedia browser API: https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia
I am able to successfully create a MediaStream that captures an application's video, but it fails at capturing the audio (say, for example, I'm capturing a running VLC video: there will never be an audio track in the MediaStream).
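For illustration, a stripped-down version of the kind of request involved looks roughly like this (a sketch only; real code would add error handling):

```javascript
// Sketch: ask getDisplayMedia for both a video and an audio track.
async function startCapture() {
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: true // requested, but an audio track often never shows up
  });
  console.log('video tracks:', stream.getVideoTracks().length);
  console.log('audio tracks:', stream.getAudioTracks().length); // usually 0
  return stream;
}
```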
I have tested this with both Chrome (83+) and Firefox, on Linux Mint and Windows, and it seems to fail every time.
Also, when I try to record audio only in Chrome (Firefox not tested), it throws:
TypeError: Failed to execute 'getDisplayMedia' on 'MediaDevices': Audio only requests are not supported
I have read multiple dev threads, and it seems it is not mandatory for the created MediaStream to contain audio, even though it is requested in the options.
So my questions are:
- Is the audio option just decoration? Because it seems pretty useless.
- Is there another way to capture audio, for example with the Web Audio API? I couldn't find any resource on that. Keep in mind that I want to capture system or window audio, not the microphone.
Thanks for your answers
EDIT: Alright, I found that there's one possibility to capture audio: it has to be with Chrome, on Windows, and capture a screen (not an app). According to the doc, it should work with Edge too. I wonder if there are other means to capture the audio without some loopback device.
I am developing a simple web application that only records voice from the microphone, but I'm having some trouble.
HTML5 voice recording works well in Chrome and Firefox on desktop and Android, but it doesn't work in mobile browsers on iPhone, not even Chrome or Firefox.
I tried recorder.js and the result did not change.
Is it possible to record voice in Safari, or is it a missing feature of Safari/iOS?
May 2018 Update (because figuring this out has been tough with all the outdated info).
I found this demo that proves it is possible: https://kaliatech.github.io/web-audio-recording-tests/dist/#/test1
As far as I knew, even on the latest iOS (iOS 10), recording voice on iOS using HTML5 was still impossible. Since all browsers on iOS are limited to using UIWebView, which Safari on iOS uses as well, Chrome on iOS is not able to support any API that can be used for media recording.
For example, recorder.js, which you used, is built on the Media Capture API. If you check caniuse.com, you will find this API is not supported on iOS. (Also check the issue here.)
MediaRecorder API is also a promising API but still not supported by Apple's browser.
Check answers below for more information.
1. Record voice from IPhone using HTML5
2. Audio Recording html5 on iOS
It's now possible and "easy" to do on iOS 11 Safari! The MediaStream API is supported now; the MediaRecorder API, however, is not, which is why existing examples out there don't work. So you'll have to implement your own MediaRecorder functionality by connecting the media stream to a webkitAudioContext ScriptProcessorNode and collecting the stream buffer in the node's onaudioprocess event. You can then collect the iOS microphone's streaming audio data and do with it what you want, most likely merging it into a WAV file for upload/download. This works for any browser that supports the MediaStream API.
Two gotchas:
- iOS Safari likes to deallocate any AudioContext that wasn't created on the main thread (on a tap), so you can't initialize it in the callback that fires when device media access is accepted.
- The ScriptProcessorNode won't fire any audioprocess events unless both the input AND the output are connected, for some reason.
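A rough sketch of that approach, assuming a hypothetical #record button and a 4096-sample buffer (both illustrative); real code would also merge recordedBuffers into a WAV and handle errors:

```javascript
// Sketch: capture mic samples on iOS 11 Safari via ScriptProcessorNode.
let audioCtx;
const recordedBuffers = []; // raw Float32 chunks, to be merged into a WAV later

// Gotcha #1: create the AudioContext from a user tap, not from a callback.
document.querySelector('#record').addEventListener('click', async () => {
  audioCtx = new (window.AudioContext || window.webkitAudioContext)();

  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = audioCtx.createMediaStreamSource(stream);

  // 4096-sample buffer, 1 input channel, 1 output channel.
  const processor = audioCtx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (e) => {
    // Copy the data, since the underlying buffer is reused between callbacks.
    recordedBuffers.push(new Float32Array(e.inputBuffer.getChannelData(0)));
  };

  source.connect(processor);
  processor.connect(audioCtx.destination); // Gotcha #2: output must be connected too.
});
```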
Since iOS 11, Safari supports the Media Capture API:
New in Safari 11.0 – Camera and microphone access.
Added support for the Media Capture API.
Added ability for websites to access camera and microphone streams from a user's device (user permission is required).
Announcement by Apple - broken link as of Jul 2018
A copy of the announcement on someone's blog
Therefore recorder.js will work now.
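A minimal sketch of checking for that support before handing the stream to recorder.js (the recorder wiring itself is omitted):

```javascript
// Sketch: feature-detect getUserMedia before starting any recording code.
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ audio: true })
    .then((stream) => {
      // hand the stream (or a Web Audio source built from it) to recorder.js
    })
    .catch((err) => console.error('Microphone unavailable or permission denied:', err));
} else {
  console.warn('getUserMedia is not supported in this browser');
}
```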
I'm using the new iPhone 11 Pro Max with iOS 13.3 and have been trying to build a web app with voice recognition services via HTML5 that works in Safari or any browser on my iPhone. It seems Apple has blocked audio/video recording at the OS level. There is a limited workaround, however, that might be useful for someone coming here as I did.
https://blog.addpipe.com/safari-technology-preview-73-adds-limited-mediastream-recorder-api-support/
Basically, if you go into Safari's advanced settings you can enable MediaRecorder. Their demo works with video capture; I have not seen it with pure audio yet.
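A sketch of how one could test that experimental support for audio only (not verified against the preview build; the 5-second cutoff is just for illustration):

```javascript
// Sketch: check for MediaRecorder and try a short audio-only recording.
if (typeof MediaRecorder === 'undefined') {
  console.warn('MediaRecorder is not enabled in this browser');
} else {
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = () => {
      const blob = new Blob(chunks, { type: recorder.mimeType });
      // play back or upload the blob here
    };
    recorder.start();
    setTimeout(() => recorder.stop(), 5000); // stop after ~5 seconds
  });
}
```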
Safari on iOS 11 does NOT support the 2 standards which would make audio (only) recording possible (and easy to implement):
HTML Media Capture for audio (spec, correct syntax) - the audio recording is handed off to a native app, which passes the result back to the browser for upload (it works for video and pictures); see the markup sketch after this list
MediaStream Recording API (spec, demo) - allows you to record to a blob directly in the browser. The recording can be downloaded or uploaded to a web server.
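For the first one, the markup is essentially a file input with the capture attribute (a sketch; the accept value can be adjusted):

```html
<!-- HTML Media Capture: recording is delegated to a native app,
     and the result comes back to the page as a file to upload -->
<input type="file" accept="audio/*" capture>
```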
I have built a webRTC application that streams audio.
The application works as intended on all devices and the client is able to hear the audio stream. At a very high level, the RTC stream is simply attached to an audio element which works great.
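Roughly, that attachment looks like the sketch below, where peerConnection stands in for the app's existing RTCPeerConnection:

```javascript
// Sketch: attach the remote WebRTC stream to an <audio> element.
peerConnection.ontrack = (event) => {
  const audioEl = document.querySelector('audio');
  audioEl.srcObject = event.streams[0];
  audioEl.play(); // may require a prior user gesture on mobile
};
```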
The problem: I am trying to utilize Android Chrome's background audio feature. At the moment the stream keeps playing in the background (even when Chrome is minimized), but about 5 seconds after screen timeout/lock, the peer connection is closed. This is not a memory issue (I have several test devices, including a Galaxy S7).
In contrast, if I simply point to the URL of an mp3 file, the audio keeps playing indefinitely. Is there a way to achieve this indefinite background playback with a WebRTC stream?
Cheers in advance!
Looks like this old bug made its way back into Chromium:
https://bugs.chromium.org/p/chromium/issues/detail?id=951418
Verified resolved in issue 513633 with no background logic required: https://bugs.chromium.org/p/chromium/issues/detail?id=513633
I need to obtain frequency/pitch data from the microphone of an android device on the fly using JavaScript.
I have done this for desktop/laptop browsers with getUserMedia and Web Audio API, but these are not supported on the vast majority of Android devices.
I have tried using cordova-plugin-media-capture, however this opens an audio recorder which the user can then save or discard; after saving you can use cordova-plugin-file to obtain the data, as shown here: https://stackoverflow.com/a/32097634/5674976. But I need it not to open the audio recorder (perhaps just a record button instead) and, once it is recording, to provide the audio data immediately, so that the frequency data can be detected in real time.
I have seen this kind of recording functionality in place, e.g. in WhatsApp, Facebook Messenger, etc., so as a last resort (since I do not know Java), would it be possible to create a plugin for Cordova in Java?
edit: I have also looked at cordova-plugin-media https://github.com/apache/cordova-plugin-media which seems to provide amplitude data and current position data. I'm thinking I could figure out frequency by looking at the amplitude over time, or am I being naive?
I managed to record audio and also analyze the frequency without either getUserMedia or the Web Audio API on Android.
First, I installed the cordova-plugin-audioinput plugin, which outputs a stream of audio samples from the microphone, with custom configuration such as buffer size and sample rate. You can then use this data to detect specific frequencies.
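A sketch of the shape of that code; the option names follow the plugin's README (check it for the exact/current API), and analysePitch is a hypothetical helper standing in for whatever frequency detection you use:

```javascript
// Sketch: stream raw microphone samples with cordova-plugin-audioinput.
const captureCfg = {
  sampleRate: 44100, // Hz
  bufferSize: 8192,  // samples per chunk
  channels: 1        // mono
};

// Each 'audioinput' event carries a chunk of normalized samples in event.data.
window.addEventListener('audioinput', (event) => {
  const samples = event.data;
  // e.g. run an FFT or autocorrelation over the chunk to estimate pitch
  analysePitch(samples, captureCfg.sampleRate); // hypothetical helper
});

document.addEventListener('deviceready', () => {
  audioinput.start(captureCfg); // requests mic permission and begins streaming
});
```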
I've been looking for a solution that detects the difference between the default speakers and headphones on a computer. I understand that with Web Audio API, AudioDestinationNode represents the output device, where users hear audio.
My question (to be specific) is whether or not it is possible to detect a change in the users' audio output device (wired/wireless headphones). If this is not possible, is there a way to use phonegap to do so, for computers as well as mobile devices?
My goal is to initiate an event only when the AudioDestinationNode maps to headphones or external speakers.
There's nothing in the Web Audio API spec for this.
It might be possible in Phonegap (at least if you were willing to write your own Phonegap plugin) – but that's only going to help on mobile. As far as I know, there's no way to determine the audio output device in any of the major desktop browsers.
Just out of curiosity, what are you hoping to do as a result of the user switching between built-in speakers and an external device?