Can't find variable: OfflineAudioContext in Safari - JavaScript

I am using the Web Audio API to stream audio to my remote server, via an OfflineAudioContext. The code works fine in Chrome and Firefox, but in Safari it gives the above-mentioned error when trying to use OfflineAudioContext. I tried adding the webkit prefix to OfflineAudioContext, and then it gives me this error:
SyntaxError: The string did not match the expected pattern.
I have tried passing different values to the OfflineAudioContext constructor, but it always gives me the same error.
I went through the Mozilla developer page on browser compatibility, and it mentions that compatibility of the OfflineAudioContext constructor is unknown for Edge and Safari. Is this the reason why I am not able to use OfflineAudioContext in Safari? Is it not supported yet? Am I doing it wrong? Or is there another way to solve this in Safari?
This is the first time I am using the Web Audio API, so I hope somebody can clear up my doubt if I missed something. Thank you.
The OfflineAudioContext code is added below:
let sourceAudioBuffer = e.inputBuffer; // directly received by the audioprocess event from the microphone in the browser
let TARGET_SAMPLE_RATE = 16000;
let OfflineAudioContext =
  window.OfflineAudioContext || window.webkitOfflineAudioContext;
let offlineCtx = new OfflineAudioContext(
  sourceAudioBuffer.numberOfChannels,
  sourceAudioBuffer.duration *
    sourceAudioBuffer.numberOfChannels *
    TARGET_SAMPLE_RATE,
  TARGET_SAMPLE_RATE
);
(If more code from the JS file is needed to understand the problem better, just comment and I will add it. I thought the snippet was enough to understand the problem.)

It's a bit confusing, but a SyntaxError is what Safari throws if it doesn't like the arguments. And unfortunately Safari doesn't like a wide range of arguments which should normally be supported.
As far as I know Safari only accepts a first parameter from 1 to 10. That's the parameter for numberOfChannels.
The second parameter (the length) just needs to be positive.
The sampleRate can only be a number between 44100 and 96000.
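A quick way to see these restrictions in action is to probe the constructor directly. This is only a diagnostic sketch based on the ranges described above; the helper name is made up:
const OfflineCtor = window.OfflineAudioContext || window.webkitOfflineAudioContext;

// Hypothetical helper: returns true if the browser accepts these arguments.
function canCreateOfflineContext(numberOfChannels, length, sampleRate) {
  try {
    new OfflineCtor(numberOfChannels, length, sampleRate);
    return true;
  } catch (err) {
    // Safari reports a SyntaxError here rather than a NotSupportedError.
    return false;
  }
}

canCreateOfflineContext(1, 10, 16000); // false in Safari: sampleRate below 44100
canCreateOfflineContext(1, 10, 44100); // true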
However it is possible to translate all the computations from 16kHz to another sampleRate which then works in Safari. Let's say this is the computation you would like to do at 16kHz:
const oac = new OfflineAudioContext(1, 10, 16000);
const oscillator = oac.createOscillator();
oscillator.frequency.value = 400;
oscillator.connect(oac.destination);
oscillator.start(0);
oac.startRendering()
  .then((renderedBuffer) => {
    console.log(renderedBuffer.sampleRate);
    console.log(renderedBuffer.getChannelData(0));
  });
You can do almost the same at 48kHz. Only the sampleRate will be different, but the channelData of the rendered AudioBuffer will be the same, because the oscillator frequency is scaled by the same factor: 1200 Hz at 48 kHz advances the same phase per sample as 400 Hz at 16 kHz.
const oac = new webkitOfflineAudioContext(1, 10, 48000);
const oscillator = oac.createOscillator();
oscillator.frequency.value = 1200;
oscillator.connect(oac.destination);
oscillator.start(0);
oac.oncomplete = (event) => {
  console.log(event.renderedBuffer.sampleRate);
  console.log(event.renderedBuffer.getChannelData(0));
};
oac.startRendering();
Aside: since I'm the author of standardized-audio-context, a library that tries to smooth out inconsistencies between browser implementations, I have to mention it here. :-) It won't help with the parameter restrictions in Safari, but it will at least throw the expected error if a parameter is out of range.
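For illustration, usage could look like this (a sketch assuming the package is installed and that the constructor mirrors the native three-argument form):
import { OfflineAudioContext } from 'standardized-audio-context';

try {
  const offlineCtx = new OfflineAudioContext(1, 10, 16000);
} catch (err) {
  // Still fails in Safari, but with a spec-compliant error
  // instead of the confusing SyntaxError.
  console.error(err.name, err.message);
}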
Also please note that the length is independent of the numberOfChannels. If sourceAudioBuffer.duration in your example is the duration in seconds, then you just have to multiply it by the TARGET_SAMPLE_RATE to get the desired length.
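Applied to the snippet from the question, the computation would become something like this sketch (note the numberOfChannels factor is gone from the length; Safari will still reject the 16000 sampleRate):
const length = Math.ceil(sourceAudioBuffer.duration * TARGET_SAMPLE_RATE);
const offlineCtx = new OfflineAudioContext(
  sourceAudioBuffer.numberOfChannels,
  length,
  TARGET_SAMPLE_RATE
);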

Related

Is there a way to resample an audio stream using the Web Audio API?

I have been playing around with the Web Audio API a little bit. I managed to "read" a microphone and play it to my speakers, which worked quite seamlessly.
Using the Web Audio API, I now would like to resample an incoming audio stream (i.e. the microphone) from 44.1kHz to 16kHz. 16kHz, because I am using some tools which require 16kHz. Since 44.1kHz divided by 16kHz is not an integer, I believe I cannot just simply use a low-pass filter and "skip samples", right?
I also saw that some people suggested using .createScriptProcessor(), but since it is deprecated I feel kind of bad using it, so I'm searching for a different approach now. Also, I don't necessarily need audioContext.destination to hear it! It is still fine if I just get the "raw" data of the resampled output.
My approaches so far
Creating an AudioContext({sampleRate: 16000}) --> throws an error: "Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported."
Using an OfflineAudioContext --> but it seems to have no option for streams (only for buffers)
Using an AudioWorkletProcessor to resample. In this case, I think I could use the processor to actually resample the input and output the "resampled" source, but I couldn't really figure out how to resample it.
main.js
...
microphoneGranted: async function (stream) {
  audioContext = new AudioContext();
  var microphone = audioContext.createMediaStreamSource(stream);
  await audioContext.audioWorklet.addModule('resample_proc.js');
  const resampleNode = new AudioWorkletNode(audioContext, 'resample_proc');
  microphone.connect(resampleNode).connect(audioContext.destination);
}
...
resample_proc.js (assuming only one input and output channel)
class ResampleProcessor extends AudioWorkletProcessor {
  ...
  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    if (input.length > 0) {
      const inputChannel0 = input[0];
      const outputChannel0 = output[0];
      for (let i = 0; i < inputChannel0.length; ++i) {
        // do something with resample here?
      }
    }
    return true; // return outside the if, so the processor stays alive even without input
  }
}
registerProcessor('resample_proc', ResampleProcessor);
Thank you!
Your general idea looks good. While I can't provide the code to do the resampling, I can point out that you might want to start with Sample-rate conversion. Method 1 would work here with L/M = 160/441. Designing the filters takes a bit of work but only needs to be done once. You can also search for polyphase filtering for hints on how to do this effectively.
What Chrome does in various parts is to use a windowed-sinc function to resample between any pair of rates. This is method 2 in the Wikipedia link.
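Neither filter design is shown here, but to illustrate where the resampling would live, here is a minimal sketch that plugs naive linear interpolation into the worklet skeleton from the question. This is a crude stand-in for the windowed-sinc method described above, not a replacement for it; the class name and the 16000 target rate are assumptions, and sampleRate is the global the AudioWorkletGlobalScope provides:
class LinearResampleProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.ratio = sampleRate / 16000; // e.g. 44100 / 16000 = 2.75625
    this.pos = 0;                    // fractional read index into the block
  }
  process(inputs, outputs, parameters) {
    const input = inputs[0][0]; // first channel of the first input
    if (!input) {
      return true;
    }
    const out = [];
    // Walk the input at `ratio` steps, interpolating between neighbours.
    while (this.pos + 1 < input.length) {
      const i = Math.floor(this.pos);
      const frac = this.pos - i;
      out.push(input[i] + (input[i + 1] - input[i]) * frac);
      this.pos += this.ratio;
    }
    // Carry the fractional remainder into the next 128-sample block.
    // (The clamp drops a fraction of a sample per block boundary; a real
    // resampler would also keep the boundary samples for interpolation.)
    this.pos = Math.max(0, this.pos - input.length);
    // Hand the 16 kHz samples to the main thread instead of playing them.
    this.port.postMessage(Float32Array.from(out));
    return true;
  }
}
registerProcessor('linear_resample_proc', LinearResampleProcessor);
On the main thread you would then listen on resampleNode.port.onmessage instead of connecting the node to audioContext.destination.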
The Web Audio API now allows resampling by passing the sample rate to the constructor. This code works in Chrome and Safari:
const audioStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
const audioContext = new AudioContext({ sampleRate: 16000 });
const audioStreamSource = audioContext.createMediaStreamSource(audioStream);
audioStreamSource.connect(audioContext.destination);
But it fails in Firefox, which throws a NotSupportedError exception: "AudioContext.createMediaStreamSource: Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported."
In the example below, I've downsampled the audio coming from the microphone to 8kHz and added a one second delay so we can clearly hear the effect of downsampling:
https://codesandbox.io/s/magical-rain-xr4g80

How can I properly end or destroy a Speaker instance without getting `illegal hardware instruction` error?

I have a script for playing a remote mp3 source through the Speaker module, which is working fine. However, if I want to stop playing the mp3 stream, I am encountering two issues:
If I stop streaming the remote source, e.g. by calling stream.pause() as in line 11 of the code below, then stdout is flooded with a warning:
[../deps/mpg123/src/output/coreaudio.c:81] warning: Didn't have any audio data in callback (buffer underflow)
The warning in itself makes sense because I'm not providing it with any data anymore, but it is output very frequently, which is a big issue because I want to use this in a CLI app.
If I attempt to end the speaker by calling speaker.end(), as in line 13 of the code below, then I get the following error:
[1] 8950 illegal hardware instruction node index.js
I have been unable to find anything regarding this error besides some Raspberry Pi threads, and I'm quite confused as to what is causing the illegal hardware instruction.
How can I properly handle this issue? Can I pipe the buffer-underflow warning to something like /dev/null, or is that a poor way of handling it? Or can I end or destroy the speaker instance in another way?
I'm running Node v7.2.0, npm v4.0.3 and Speaker v0.3.0 on macOS v10.12.1.
const request = require('request')
const lame = require('lame')
const Speaker = require('speaker')

var decoder = new lame.Decoder()
var speaker = new Speaker()
decoder.pipe(speaker)
var req = request.get(url_to_remote_mp3)
var stream = req.pipe(decoder)
setTimeout(() => {
  stream.pause()
  stream.unpipe()
  speaker.end()
}, 1000)
I think the problem might be in coreaudio.c (guesswork here). It seems like the amount of written-but-unplayed data in the buffer gets large when you flush it. My best guess is that the usleep() time in write_coreaudio() is miscalculated after the buffer is flushed. I think the program may be sleeping too long, so the buffer gets too big to play.
It is unclear to me why the usleep time is being calculated the way it is.
Anyway... this worked for me:
Change node_modules/speaker/deps/mpg123/src/output/coreaudio.c:258
- usleep( (FIFO_DURATION/2) * 1000000 );
+ usleep( (FIFO_DURATION/2) * 100000 );
Then in the node_modules/speaker/ directory, run:
node-gyp build
And you should be able to use:
const request = require('request')
const lame = require('lame')
const Speaker = require('speaker')

const url = 'http://aasr-stream.iha.dk:9870/dtr.mp3'
const decoder = new lame.Decoder()
let speaker = new Speaker()
decoder.pipe(speaker)
const req = request.get(url)
var stream = req.pipe(decoder)
setTimeout(() => {
  console.log('Closing Speaker')
  speaker.close()
}, 2000)
setTimeout(() => {
  console.log('Closing Node')
}, 4000)
... works for me (no errors, stops when expected).
I hope it works for you too.
You will want to use GitHub issues for that module and the library it uses: first check whether someone else has already solved the problem, and if not, open a bug describing it so other users of the module can reference it, since that is the first place they will look.
A "good" way of handling it will probably involve modifying the native part of the speaker module, unless you just have a sound library version mismatch.

Javascript Web Audio API error: Failed to set the 'value' property on 'AudioParam': The provided float value is non-finite

I have no idea what this means. I have a feeling something in the Web Audio API has changed recently and the browsers have implemented the change, as my application was working fine with no errors the last time I checked it about two weeks ago. I have not changed anything in my code since the last time it was working.
The error I am getting is:
Uncaught TypeError: Failed to set the 'value' property on
'AudioParam': The provided float value is non-finite.
The line where the error occurs is this line:
gainNode.gain.value = volume;
The application can be viewed here:
http://aceroinc.ca/harmanKardon/
When the power button is hit, the app should turn on and a radio station should begin streaming. (In Chrome only; the .aac format will not work in Firefox, and I am aware of that.)
I initialize the Web Audio API after the DOM loads...
window.AudioContext = window.AudioContext || window.webkitAudioContext;
context = new AudioContext();
source = context.createMediaElementSource(document.getElementById('audio'));
gainNode = context.createGain();
var gainDefault = gainNode.gain.defaultValue;
Then down inside the powerOn function I have:
gainNode.gain.value = currVolume;
which is causing the error.
When I check it in Safari on my iPhone, the application works. So this seems to be a Chrome issue.
On my desktop computer I have Chrome Version 42.0.2311.90 (64-bit), and it does not work.
On my laptop I have Chrome Version 41.0.2272.118 m and it does work.
On my iPhone I have Chrome Version 40.0.2214.73 and it does work.
It's because float values can be finite or non-finite, so in order to prevent this error, check whether the parsed float value is finite or not.
For more details, read the parseFloat() documentation.
function changeVolume(volume) {
  // parse first; redeclaring the parameter (as in the original) is a SyntaxError
  var parsed = parseFloat(volume);
  if (isFinite(parsed)) {
    gainNode.gain.value = parsed;
  }
}
"volume" is your DIV element containing the volume knob, not a value. I think you meant "currVolume".

JavaScript: formatting console.log

I'm working on a big angular.js project.
Now I have to profile some pages for performance.
I'm using the project's console.log output to look for the performance problems.
I miss that console.log cannot output the current time (yes, I know you can set this up in Chrome).
Is there a way (like in log4java) to format the output?
Thanks in advance,
Be careful: the console object should not be used in production, mainly because it can break the code for some users (for example, users of IE8 or earlier).
But if you want to test on a tool that you know, you can use the Console Web API, which provides some useful methods on this object:
https://developer.mozilla.org/en-US/docs/Web/API/Console
(This doc is from the Mozilla Developer Network (MDN) and therefore mainly applies to Firefox, but at the end of that page you can find links to the corresponding docs for the IE, Chrome and Safari DevTools as well as Firebug).
For example, you can use the time() method for your use case:
console.time("answer time");
// ... some other code
console.timeEnd("answer time");
This outputs the elapsed time between the two calls, something like: answer time: 1234.567ms
You could try something like this:
console = window.console ? console : {};
native_log = console.log;
window.console.log = function myConsole() {
  // Cannot log if console.log is not present natively,
  // so fail silently (in IE without the debugger opened)
  if (!native_log)
    return;
  var args = Array.prototype.slice.call(arguments);
  args.unshift(new Date());
  native_log.apply(console, args);
}
console.log('hello'); // Date 2014-12-02T14:14:50.138Z "hello"
Of course, you would not leave the new Date() as is, but it gives you the idea.
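For example, a one-line change gives a readable ISO timestamp prefix (just a sketch, assuming that format suits you):
// Replace args.unshift(new Date()) with a formatted prefix:
args.unshift('[' + new Date().toISOString() + ']');
// console.log('hello'); // [2014-12-02T14:14:50.138Z] hello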

How to avoid playing back the sound that is being recorded

I'm talking about feedback: when you make a simple JavaScript application that opens a stream from the user and reads the frequency analysis (or whatever it is), it throws all received data back to the headphones in both Google Chrome and Opera. Firefox is silent most of the time and randomly creates a huge mess of unstable feedback; it also closes the stream after a few seconds. Generally, the thing doesn't work in Firefox yet.
I created a fiddle. If your browser doesn't support it, you'll just get an error in the console, I assume.
The critical part of the code is the function that is called when user accepts the request for the microphone access:
// Not sure why I do this
var inputPoint = context.createGain();
// Create an AudioNode from the stream.
var source = context.createMediaStreamSource(stream);
source.connect(inputPoint);
// Analyser - this converts raw data into spectral analysis
window.analyser = context.createAnalyser();
// More stuff I know nothing about
analyser.fftSize = 2048;
// Sounds much like connecting nodes in MatLab, doesn't it?
inputPoint.connect(analyser);
analyser.connect(context.destination);
// THIS should probably make the sound silent (gain: 0) but it doesn't
var zeroGain = context.createGain();
zeroGain.gain.value = 0.0;
// More connecting... are you already lost which node is which? Because I am.
inputPoint.connect(zeroGain);
zeroGain.connect(context.destination);
The zero-gain idea is not mine; I have stolen it from a simple sound recorder demo. But what works for them doesn't work for me.
The demo also has none of the problems I have in Firefox.
In the function mediaGranted(stream) {..., comment out fiddle line #46:
//analyser.connect(context.destination);
More info: https://mdn.mozillademos.org/files/5081/WebAudioBasics.png
Nice demo: http://mdn.github.io/voice-change-o-matic/
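Put differently, the graph should analyse the microphone without any audible path to the output. Here is a sketch of the corrected routing, based on the question's code with the offending connection removed:
var inputPoint = context.createGain();
var source = context.createMediaStreamSource(stream);
source.connect(inputPoint);

window.analyser = context.createAnalyser();
analyser.fftSize = 2048;
inputPoint.connect(analyser);
// analyser.connect(context.destination); // <-- removed: this played the
// microphone back out and caused the feedback

// The silent zero-gain path can stay; it keeps the graph connected to the
// destination without producing any audible output.
var zeroGain = context.createGain();
zeroGain.gain.value = 0.0;
inputPoint.connect(zeroGain);
zeroGain.connect(context.destination);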
