I'm getting an error in desktop Chrome that utils.device.checkHasPositionalTracking() is "not a function".
If it is obsolete, where can I find an updated list of utils.device methods for device detection? The official documentation seems to be outdated and lists deprecated methods; the browser doesn't seem to recognize this one in particular at all.
let mobile = AFRAME.utils.device.isMobile();
// isOculusGo and isGearVR have been replaced with isMobileVR
//let gearVR = AFRAME.utils.device.isGearVR();
//let oculusGo = AFRAME.utils.device.isOculusGo();
let mobileVR = AFRAME.utils.device.isMobileVR;
//let tracking = AFRAME.utils.device.checkHasPositionalTracking(); // not working
let headset = AFRAME.utils.device.checkHeadsetConnected();
if (mobile) {
  console.log("Viewer is mobile.");
}
if (mobileVR) {
  console.log("Viewer is MobileVR.");
}
/*if (tracking) {
  console.log("Viewer has positional tracking.");
}*/
if (headset) {
  console.log("Headset Connected.");
}
The previous code results in "Viewer is MobileVR" even though I'm testing on a desktop computer.
Yes, it's gone; I couldn't find it in the A-Frame source code either.
My pull request removing it from the documentation was just approved:
https://github.com/aframevr/aframe/pull/4255
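With checkHasPositionalTracking() removed, the calls from the snippet above that still exist look like this. Note that they are functions and must be invoked; isMobileVR without parentheses evaluates to the function object itself, which is always truthy, which would explain why the desktop test logged "Viewer is MobileVR":
let mobile = AFRAME.utils.device.isMobile();
let mobileVR = AFRAME.utils.device.isMobileVR(); // the parentheses matter
let headset = AFRAME.utils.device.checkHeadsetConnected();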
I've recently been playing around with the Web Audio API a bit. I managed to "read" a microphone and play it through my speakers, which worked quite seamlessly.
Using the Web Audio API, I would now like to resample an incoming audio stream (i.e. the microphone) from 44.1 kHz to 16 kHz, because I am using some tools which require 16 kHz. Since 44.1 kHz divided by 16 kHz is not an integer, I believe I cannot simply apply a low-pass filter and "skip samples", right?
I also saw that some people suggest using .createScriptProcessor(), but since it is deprecated I feel kind of bad using it, so I'm looking for a different approach. Also, I don't necessarily need audioContext.destination to hear it! It is still fine if I just get the "raw" data of the resampled output.
My approaches so far
Creating an AudioContext({sampleRate: 16000}) --> throws an error: "Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported."
Using an OfflineAudioContext --> but it seems to have no option for streams (only for buffers)
Using an AudioWorkletProcessor to resample. In this case, I think I could use the processor to actually resample the input and output the "resampled" source. But I couldn't really figure out how to do the resampling.
main.js
...
microphoneGranted: async function (stream) {
  audioContext = new AudioContext();
  const microphone = audioContext.createMediaStreamSource(stream);
  await audioContext.audioWorklet.addModule('resample_proc.js');
  const resampleNode = new AudioWorkletNode(audioContext, 'resample_proc');
  microphone.connect(resampleNode).connect(audioContext.destination);
}
...
resample_proc.js (assuming only one input and output channel)
class ResampleProcessor extends AudioWorkletProcessor {
...
  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    if (input.length > 0) {
      const inputChannel0 = input[0];
      const outputChannel0 = output[0];
      for (let i = 0; i < inputChannel0.length; ++i) {
        //do something with resample here?
      }
    }
    return true; // return unconditionally so the processor stays alive
  }
}
registerProcessor('resample_proc', ResampleProcessor);
Thank you!
Your general idea looks good. While I can't provide the code to do the resampling, I can point out that you might want to start with Sample-rate conversion. Method 1 would work here with L/M = 160/441. Designing the filters takes a bit of work but only needs to be done once. You can also search for polyphase filtering for hints on how to do this effectively.
What Chrome does in various places is use a windowed-sinc function to resample between arbitrary rates. This is method 2 in the Wikipedia link.
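For illustration only, a deliberately naive resampler might look like the sketch below. It is plain linear interpolation with no anti-aliasing filter, so it is not the polyphase or windowed-sinc approach described above, and a real worklet would also have to carry the fractional read position across its 128-frame blocks:
function resampleLinear(input, fromRate, toRate) {
  const ratio = fromRate / toRate; // 44100 / 16000 = 2.75625
  const outLength = Math.floor(input.length / ratio);
  const output = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio; // fractional read position in the input
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, input.length - 1);
    output[i] = input[i0] + (input[i1] - input[i0]) * (pos - i0);
  }
  return output;
}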
The Web Audio API now allows you to resample by passing the sample rate to the constructor. This code works in Chrome and Safari:
const audioStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false })
const audioContext = new AudioContext({ sampleRate: 16000 })
const audioStreamSource = audioContext.createMediaStreamSource(audioStream);
audioStreamSource.connect(audioContext.destination)
But it fails in Firefox, which throws a NotSupportedError exception: AudioContext.createMediaStreamSource: Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported.
In the example below, I've downsampled the audio coming from the microphone to 8kHz and added a one second delay so we can clearly hear the effect of downsampling:
https://codesandbox.io/s/magical-rain-xr4g80
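The core of such a demo presumably looks something like this (a sketch, not code copied from the sandbox): an 8 kHz context plus a DelayNode providing the one-second delay.
const ctx = new AudioContext({ sampleRate: 8000 }); // downsampling happens here
const src = ctx.createMediaStreamSource(audioStream);
const delay = ctx.createDelay(1.0); // maximum delay of one second
delay.delayTime.value = 1.0;
src.connect(delay).connect(ctx.destination);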
I am creating an automation tool for the Edge browser using Selenium, where I need to open three Edge windows with three different URLs in parallel.
The first Edge browser window launches successfully, but when calling the function openEdgeBrowser for the second URL, it throws Exception: WebDriverError: Unknown error.
const webdriver = require('selenium-webdriver');
const edgedriver = require('edgedriver');
const edge = require('selenium-webdriver/edge');
var openEdgeBrowser = async function (url) {
  try {
    let edgeService = new edge.ServiceBuilder(edgedriver.path); // ServiceBuilder is synchronous, no await needed
    let browser = await new webdriver.Builder()
      .forBrowser('MicrosoftEdge')
      .setEdgeService(edgeService)
      .build();
    await browser.get(url);
    console.log('Browser launched successfully with url: ' + url);
  } catch (e) {
    console.log(`Error in launching edge browser, Exception: ${e}`); // was console.log.end, which is not a function
  }
};
I expect to run three instances of Edge together.
The problem is that Edge does not support multiple instances:
Hi, this is a known issue. I just checked the Feedback Hub and I only see a Microsoft-internal posting for this issue. Will you add this to the Feedback Hub? Open the Feedback Hub app by using the Search Bar (Win + S) and typing "feedback hub".
The only workaround I am aware of is to use Selenium Grid with multiple Windows clients. The clients can be Hyper-V instances.
Appreciate you reporting this issue and wish I had a better answer for you. :-/ Steve
https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/17754737/
The same has also been noted on Twitter.
https://twitter.com/instylevii/status/783480823445987329
I can't find any indication that this bug has been fixed, so I'm going to assume it's still outstanding. It was definitely still present in version 41.16299.15.0, and I can't find anything in the release notes mentioning a fix in version 42.
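If you try the Selenium Grid workaround, the Builder can point at a hub instead of a local driver service. A rough sketch (the hub URL is a placeholder, and the grid must already be running with Windows nodes):
// inside openEdgeBrowser, for example
let browser = await new webdriver.Builder()
  .forBrowser('MicrosoftEdge')
  .usingServer('http://grid-hub.example.com:4444/wd/hub') // placeholder hub URL
  .build();
await browser.get(url);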
I am using the Web Audio API to stream audio to my remote server, using OfflineAudioContext. The code works fine in Chrome and Firefox, but in Safari it gives the above-mentioned error when trying to use OfflineAudioContext. I have tried adding the webkit prefix to OfflineAudioContext; then it gives me this error:
SyntaxError: The string did not match the expected pattern.
I have tried passing different values to the OfflineAudioContext constructor, but it always gives me the same error.
I went through the Mozilla developer page for browser compatibility, and it mentions that compatibility of the OfflineAudioContext constructor is unknown for Edge and Safari. So, is this the reason why I am not able to use OfflineAudioContext in Safari? Is it not supported yet? Am I doing it wrong? Or is there another way to solve this in Safari?
This is the first time I am using the Web Audio API, so I hope somebody can clear up my doubts if I missed something. Thank you.
The OfflineAudioContext code is added below:
let sourceAudioBuffer = e.inputBuffer; // directly received by the audioprocess event from the microphone in the browser
let TARGET_SAMPLE_RATE = 16000;
let OfflineAudioContext =
window.OfflineAudioContext || window.webkitOfflineAudioContext;
let offlineCtx = new OfflineAudioContext(
sourceAudioBuffer.numberOfChannels,
sourceAudioBuffer.duration *
sourceAudioBuffer.numberOfChannels *
TARGET_SAMPLE_RATE,
TARGET_SAMPLE_RATE
);
(If more of the JS file is needed to understand the problem better, just comment and I will add it; I thought the snippet was enough.)
It's a bit confusing, but a SyntaxError is what Safari throws if it doesn't like the arguments. And unfortunately Safari doesn't like a wide range of arguments which should normally be supported.
As far as I know Safari only accepts a first parameter from 1 to 10. That's the parameter for numberOfChannels.
The second parameter (the length) just needs to be positive.
The sampleRate can only be a number between 44100 and 96000.
However it is possible to translate all the computations from 16kHz to another sampleRate which then works in Safari. Let's say this is the computation you would like to do at 16kHz:
const oac = new OfflineAudioContext(1, 10, 16000);
const oscillator = oac.createOscillator();
oscillator.frequency.value = 400;
oscillator.connect(oac.destination);
oscillator.start(0);
oac.startRendering()
    .then((renderedBuffer) => {
        console.log(renderedBuffer.sampleRate);
        console.log(renderedBuffer.getChannelData(0));
    });
You can do almost the same at 48kHz. Only the sampleRate will be different but the channelData of the rendered AudioBuffer will be the same.
const oac = new webkitOfflineAudioContext(1, 10, 48000);
const oscillator = oac.createOscillator();
oscillator.frequency.value = 1200;
oscillator.connect(oac.destination);
oscillator.start(0);
oac.oncomplete = (event) => {
    console.log(event.renderedBuffer.sampleRate);
    console.log(event.renderedBuffer.getChannelData(0));
};
oac.startRendering();
Aside: since I'm the author of standardized-audio-context, a library that tries to smooth out inconsistencies between browser implementations, I have to mention it here. :-) It won't help with the parameter restrictions in Safari, but it will at least throw the expected error if a parameter is out of range.
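A minimal sketch of what that looks like, assuming the package's spec-style options constructor (16 kHz is below Safari's supported range, so in Safari this should throw the spec-compliant error rather than a SyntaxError):
import { OfflineAudioContext } from 'standardized-audio-context';
try {
  const oac = new OfflineAudioContext({ numberOfChannels: 1, length: 10, sampleRate: 16000 });
} catch (err) {
  console.log(err.name); // expected to be NotSupportedError in Safari
}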
Also please note that the length is independent of the numberOfChannels. If sourceAudioBuffer.duration in your example is the duration in seconds, then you just have to multiply it by TARGET_SAMPLE_RATE to get the desired length.
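Applied to the constructor call from the question, that means dropping the numberOfChannels factor from the length:
let offlineCtx = new OfflineAudioContext(
  sourceAudioBuffer.numberOfChannels,
  sourceAudioBuffer.duration * TARGET_SAMPLE_RATE, // length is in frames per channel
  TARGET_SAMPLE_RATE
);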
I've seen the following:
chrome://webrtc-internals
However, I'm looking for a way to let users click a button from within the web app to either download or (preferably) POST WebRTC logs to an endpoint baked into the app. The idea is to enable non-technical users to share technical logs with me at the click of a UI button.
How can this be achieved?
Note: This should not be dependent on Chrome; Chromium will also be used as the app will be wrapped up in Electron.
You need to write a JavaScript equivalent that captures all RTCPeerConnection API calls. rtcstats.js does that, but sends all the data to a server. If you replace that behaviour with storing the data in memory, you should be good.
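A minimal sketch of that idea follows; the method list and log shape are my own assumptions, not rtcstats.js's actual implementation. Entries accumulate in memory and could later be POSTed from your button handler:
const apiLog = []; // e.g. fetch('/logs', { method: 'POST', body: JSON.stringify(apiLog) })
const NativePC = window.RTCPeerConnection;
window.RTCPeerConnection = function (config) {
  apiLog.push({ t: Date.now(), method: 'constructor', args: [config] });
  const pc = new NativePC(config);
  ['createOffer', 'createAnswer', 'setLocalDescription',
   'setRemoteDescription', 'addIceCandidate'].forEach((name) => {
    const original = pc[name].bind(pc);
    pc[name] = (...args) => {
      apiLog.push({ t: Date.now(), method: name, args });
      return original(...args);
    };
  });
  return pc;
};
window.RTCPeerConnection.prototype = NativePC.prototype; // keep instanceof working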
This is what I ended up using (replace Knockout with Underscore or whatever):
connectionReport.signalingState = connection.signalingState;
connectionReport.stats = [];
connection.getStats(function (stats) {
const reportCollection = stats.result();
ko.utils.arrayForEach(reportCollection, function (innerReport) {
const statReport = {};
statReport.id = innerReport.id;
statReport.type = innerReport.type;
const keys = innerReport.names();
ko.utils.arrayForEach(keys, function (reportKey) {
statReport[reportKey] = innerReport.stat(reportKey);
})
connectionReport.stats.push(statReport);
});
connectionStats.push(connectionReport);
});
UPDATE:
It appears that this getStats mechanism is soon to be deprecated.
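The standardized replacement is promise-based and returns a maplike RTCStatsReport. A rough equivalent of the loop above might look like this (note the modern stat fields differ from the legacy names):
// inside an async function
const stats = await connection.getStats(null);
const statReports = [];
stats.forEach(function (report) {
  // each report already carries id, type and timestamp
  statReports.push(report);
});
connectionReport.stats = statReports;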
Reading through the JS source of chrome://webrtc-internals, I noticed that the page uses a method called chrome.send() to send messages like chrome.send('enableEventLogRecordings'); to execute logging commands.
According to this:
chrome.send() is a private function only available to internal chrome pages.
so the function is sandboxed, which makes it impossible to access.
In my hybrid app (PhoneGap), I am trying to write to localStorage in a very standard way:
window.localStorage.setItem("proDB", JSON.stringify(data));
or
window.localStorage["proDB"] = JSON.stringify(data);
But it doesn't work on Safari on an iPad 2 (iOS 7.1).
It doesn't work, and the whole app stops.
Here's the userAgent of this iPad:
Can you help me?
Thanks
Please check whether you have Private Browsing enabled in Safari. In Safari's Private Browsing mode you get a quota of zero, so all calls to localStorage.setItem will throw a quota-exceeded error. Personally I think this is a huge mistake by Safari (as so many sites break), but it is what it is, so we have to find a way around it. We can do this by:
Detecting whether we have a functional localStorage
Falling back to some replacement if not.
Read on if you want the details :)
1. Detecting a functional localStorage
I am currently using this code to detect whether localStorage is available, and fall back to a shim if not:
var DB;
try {
var x = '_localstorage_test_' + Date.now();
localStorage.setItem(x, x);
var y = localStorage.getItem(x);
localStorage.removeItem(x);
if (x !== y) {throw new Error();} // check we get back what we stored
DB = localStorage; // all fine
}
catch(e) {
// no localstorage available, use shim
DB = new MemoryStorage('my-app');
}
EDIT: Since writing this I have packaged up the feature detecting code. If you are using NPM you can install storage-available like so:
npm install --save storage-available
then you can use it in your code like this:
if (require('storage-available')('localStorage')) {
// Yay!
}
else {
// Awwww.....
}
2. Falling back to a shim
The easiest way to deal with the issue once we have detected the problem is to fall back to some other object that does not throw errors on every write.
memorystorage is a little library I wrote that follows the Web Storage API but just stores everything in memory. Because it uses the same API, you can use it as a drop-in replacement for localStorage and everything will function fine (though no data will survive page reload). It's Open Source so use as you please.
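For example, assuming the npm package (the API mirrors Web Storage):
var MemoryStorage = require('memorystorage');
var store = new MemoryStorage('my-app'); // namespaced in-memory store
store.setItem('proDB', JSON.stringify(data));
var restored = JSON.parse(store.getItem('proDB'));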
Background info
For more information on MemoryStorage and this issue in general, read my blog post on this topic: Introducing MemoryStorage.
I set localStorage key values with the logic below, using Swift 2.2:
let jsString = "localStorage.setItem('Key', 'value')"
self.webView.stringByEvaluatingJavaScriptFromString(jsString)
Your first setItem example is correct. I don't believe you can do the second option (localStorage["someKey"] = "someValue"), though. Stick with the first one.
You mention hybrid: is it PhoneGap or some other framework? Where in the app are you calling localStorage.setItem? If PhoneGap, be sure that everything has loaded via onDeviceReady before trying to access localStorage:
<script type="text/javascript">
// Wait for PhoneGap to load
document.addEventListener("deviceready", onDeviceReady, false);
// PhoneGap is ready
function onDeviceReady() {
window.localStorage.setItem("key", "value");
}
</script>
Also, if the app freezes/stops working, in my experience it's because somewhere in the code you are accessing an object that is undefined. Perhaps try some debugging by checking whether localStorage is undefined and logging it? Are you 100% sure that the setItem line is where it fails? console.log is your friend; prove it! :)
if (localStorage === undefined) {
console.log("oops, localStorage not initialized yet.");
}
else {
window.localStorage.setItem("proDB", JSON.stringify(data));
console.log("localStorage available.");
}