How to play only specific bitrate with dash js audio player? - javascript

I am using the dash.js library to get adaptive bitrate streaming over the DASH protocol for my audio player.
I am facing an issue in one case: instead of switching the bitrate adaptively, I want to pin it to a specific value, i.e. 320 kbps. I am using the methods provided by dash.js as shown below, but I am not able to get segments at a fixed bitrate for the whole audio file.
(function () {
    var url = "https://xxxxxxxxxxxx.xxxxxxxx.net/myplaylist.mpd";
    var player = dashjs.MediaPlayer().create();
    player.initialize(document.querySelector("#audioPlayer"), url, true);
    player.setInitialBitrateFor('audio', 320);
    player.setQualityFor('audio', 320);
    player.setAutoSwitchQualityFor('audio', false);
    player.getDebug().setLogToBrowserConsole(false);
})();
So basically there are two options: auto and 320 kbps.
Auto allows adaptive bitrate, but once 320 kbps is selected, the player should only fetch segments for that bitrate from then on.
It is the latter scenario where I am facing the issue.
Is there any method to do that? Am I missing something here?

It was not setting the bitrate because dash.js does an exact match on the bitrate value.
As of now, this is how the bitrate is resolved in dash.js:
When you call player.setInitialBitrateFor('audio', 320);, it first reads the bandwidth from the MPD file.
An internal mechanism then divides that bandwidth by 1000 and rounds off the value. So if your MPD contains values like bandwidth="320000",
then player.setInitialBitrateFor('audio', 320); will work.
The bandwidth may vary, e.g. 321684, which produces a bitrate value of 321; in that case you have to call player.setInitialBitrateFor('audio', 321); instead.
Also, the setQualityFor method takes an index as its second parameter, so one can do
player.setQualityFor('audio', indexValue);
where, assuming there are three representations in the adaptation set:
low bitrate (64 kbps) ==> 0 (indexValue)
medium bitrate (128 kbps) ==> 1 (indexValue)
high bitrate (320 kbps) ==> 2 (indexValue)
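Since setQualityFor expects an index rather than a bitrate, one option is to look that index up from the player at runtime. Below is a minimal sketch, assuming a dash.js version that exposes getBitrateInfoListFor (returning entries with bitrate and qualityIndex properties); if your version differs, map the index by hand as in the list above.

// Hypothetical helper: lock the audio track to the representation whose
// bitrate is closest to the target, then disable automatic switching.
function lockAudioBitrate(player, targetKbps) {
    var infos = player.getBitrateInfoListFor('audio'); // assumed: [{ bitrate, qualityIndex, ... }]
    var best = infos.reduce(function (a, b) {
        return Math.abs(b.bitrate / 1000 - targetKbps) < Math.abs(a.bitrate / 1000 - targetKbps) ? b : a;
    });
    player.setAutoSwitchQualityFor('audio', false); // stop ABR from switching away
    player.setQualityFor('audio', best.qualityIndex);
}

lockAudioBitrate(player, 320);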

Related

Isolate ultrasounds with Web Audio API

Is there any algorithm I can use with the Web Audio API to isolate ultrasounds?
I've tried 'highpass' filters, but I need to isolate sounds that are ONLY ultrasonic (horizontal lines) and ignore noises that also sound at lower, audible frequencies (vertical lines).
var highpass = audioContext.createBiquadFilter();
highpass.type = 'highpass';
highpass.frequency.value = 17500;
highpass.gain.value = -1;
Here's a test, using a nice snippet from http://rtoy.github.io/webaudio-hacks/more/filter-design/filter-design.html, of how the spectrum of audible noise interferes with the filtered ultrasound (there are two canvases, one without the filter and one with it): https://jsfiddle.net/6gnyhvrk/3
Without filters:
With a 17,500 Hz highpass filter:
A highpass filter is what you want, but there are a few things to consider. First, the audio context has to have a high enough sample rate. Second, you have to decide what "ultrasound" means; many people can hear frequencies above 15 kHz (as in your example). A single highpass filter may not have a sharp enough cutoff for you, so you may need a more elaborate filter setup.
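A minimal sketch of that kind of setup follows; the 96 kHz context rate, the 18 kHz cutoff, and the four cascaded stages are illustrative assumptions, not recommendations, and the sampleRate option requires a browser that supports it in the AudioContext constructor.

// Cascade several highpass biquads for a steeper combined roll-off than a
// single filter gives. All numbers below are illustrative only.
const audioContext = new AudioContext({ sampleRate: 96000 }); // keep ultrasound below Nyquist
const stages = [];
for (let i = 0; i < 4; i++) {
    const hp = audioContext.createBiquadFilter();
    hp.type = 'highpass';
    hp.frequency.value = 18000; // treat everything below ~18 kHz as audible noise
    if (i > 0) stages[i - 1].connect(hp);
    stages.push(hp);
}
// someSource.connect(stages[0]);
// stages[stages.length - 1].connect(analyserOrDestination);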

In the Web Audio API, is there a way to set a maximum volume?

I know you can boost or reduce volume with gain. I was wondering if there was a way (perhaps via a node) to cap the maximum volume of the output - not reducing any audio below that max value. It is acceptable if there is distortion for audio that gets capped like this.
An alternative that might be simpler is to use a WaveShaperNode. I think a curve equal to [-1, 0, 1] will do what you want, clamping values to +/-1. If you don't oversample, there won't be any additional delay.
Note that I'm pretty sure all browsers implement this kind of clamping before sending audio to the speakers.
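A minimal sketch of that approach (the source and context names are placeholders):

// Clamp the signal to [-1, 1] with a 3-point linear waveshaper curve.
const shaper = audioContext.createWaveShaper();
shaper.curve = new Float32Array([-1, 0, 1]); // linear through the origin, clamped beyond +/-1
shaper.oversample = 'none'; // no oversampling, so no added delay

sourceNode.connect(shaper);
shaper.connect(audioContext.destination);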
This is not possible with any of the built-in AudioNodes. But it can be achieved with a custom AudioWorklet. I recently published a package which does exactly that. It's called limiter-audio-worklet.
It exports two functions:
import { addLimiterAudioWorkletModule, createLimiterAudioWorkletNode } from 'limiter-audio-worklet';
The first function can be used to add the AudioWorklet to a particular AudioContext.
await addLimiterAudioWorkletModule((url) =>
    audioContext.audioWorklet.addModule(url)
);
Once that is done the actual AudioWorkletNode can be created like this:
const limiterAudioWorkletNode = createLimiterAudioWorkletNode(
    AudioWorkletNode,
    audioContext,
    { attack: 0 }
);
If you set the attack to zero, you get the desired effect: everything above +1/-1 gets clamped. If you increase the attack, it transitions more smoothly using a look-ahead. This introduces a small delay (of the same size) but sounds much nicer.
Of course it's also necessary to connect the previously created limiterAudioWorkletNode close to the end of your audio graph.
yourLastAudioNode
    .connect(limiterAudioWorkletNode)
    .connect(audioContext.destination);

Volume velocity to gain web audio

I'm trying to map a velocity value, a value that comes in a MIDI signal, to gain. The velocity ranges from 0 to 127.
The documentation on the Web Audio API, albeit well done, doesn't really say anything about this.
At the moment I have this to play sounds:
play(key, startTime) {
    this.audioContext.decodeAudioData(this.soundContainer[key], (buffer) => {
        let source = this.audioContext.createBufferSource();
        source.buffer = buffer;
        source.connect(this.audioContext.destination);
        source.start(startTime);
    });
}
I didn't find anything that uses velocity values in the 0–127 range. However, I found the gain node, which applies a gain.
So my function is now this:
play(key: string, startTime, velocity) {
    this.audioContext.decodeAudioData(this.soundContainer[key], (buffer) => {
        let source = this.audioContext.createBufferSource();
        source.buffer = buffer;
        source.connect(this.gainNode);
        this.gainNode.connect(this.audioContext.destination);
        this.gainNode.gain.value = velocity;
        source.start(startTime);
    });
}
Eehhh... if I apply the MIDI velocity value to the gain directly, I obviously get a sound that is insanely loud. So I'd like an answer to either of these two questions:
Can I somehow use the velocity value directly?
How can I convert the velocity value to gain?
The MIDI specification says:
Interpretation of the Velocity byte is left up to the receiving instrument. Generally, the larger the numeric value of the message, the stronger the velocity-controlled effect. If velocity is applied to volume (output level) for instance, then higher Velocity values will generate louder notes. A value of 64 (40H) would correspond to a mezzo-forte note […] Preferably, application of velocity to volume should be an exponential function.
The General MIDI specifications are not any more concrete.
The DLS Level 1 specification says:
The MIDI Note Velocity value is converted to attenuation in dB by the Concave Transform according to the following formula:
attenuation (dB) = 20 × log10(127² / Velocity²)
and fed to control either the volume or envelope generator peak level.
You then have to map this attenuation to the gain factor, i.e., gain = velocity² / 127².
Many hardware synthesizers also allow you to select different curves for mapping velocity to volume.
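A minimal sketch of that mapping, applied to the play() function from the question (the helper name is made up; the squared-velocity curve is the DLS formula quoted above):

// Convert MIDI velocity (0-127) to a linear gain in [0, 1] using the
// DLS concave curve: gain = velocity^2 / 127^2.
function velocityToGain(velocity) {
    return (velocity * velocity) / (127 * 127);
}

// e.g. inside play(key, startTime, velocity):
//   this.gainNode.gain.value = velocityToGain(velocity);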
I don't know if it is correct, because I don't know that much about sound, but this seems to work:
this.gainNode.gain.value = velocity / 100;
So a velocity of 127 gives a gain of 1.27.
Ultimately I think it would be better to divide 1 into 127 steps so that each step corresponds to its MIDI value. However, the code is easier this way, so yeah, it works.

Recorder.js calculate and offset recording for latency

I'm using Recorder.js to record audio from Google Chrome desktop and mobile browsers. In my specific use case I need to record exactly 3 seconds of audio, starting and ending at a specific time.
Now I know that when recording audio, your soundcard cannot work in realtime due to hardware delays, so there is always a memory buffer which allows you to keep up recording without hearing jumps/stutters.
Recorder.js allows you to configure the bufferLen variable exactly for this, while sampleRate is taken automatically from the audio context object. Here is a simplified version of how it works:
var context = new AudioContext();
var recorder;

navigator.getUserMedia({audio: true}, function(stream) {
    recorder = new Recorder(context.createMediaStreamSource(stream), {
        bufferLen: 4096
    });
});

function recordLoop() {
    recorder.record();
    window.setTimeout(function () {
        recorder.stop();
    }, 3000);
}
The issue I'm facing is that record() does not offset for the buffer latency, and neither does stop(). So instead of getting a three-second sound, it's 2.97 seconds and the start is cut off.
This means my recordings don't start in the same place, and when I loop them, the loops have different lengths depending on your device latency!
There are two potential solutions I see here:
Adjust the Recorder.js code to offset the buffer automatically against your start/stop times (maybe add new startSync/stopSync functions).
Calculate the latency and create two offset timers to start and stop Recorder.js at the correct points in time.
I'm trying solution 2, because solution 1 requires knowledge of buffer arrays which I don't have :( I believe the calculation for latency is:
var bufferSize = 4096;
var sampleRate = 44100;
var latency = (bufferSize / sampleRate) * 2; // 0.18575963718820862 secs
However when I run these calculations in a real test I get:
var duration = 2.972154195011338 secs
var latency = 0.18575963718820862 secs
var total = duration + latency // 3.1579138321995464 secs
Something isn't right: it doesn't add up to 3 seconds, and it's beginning to confuse me now! I've created a working fork of the Recorder.js demo with a log:
http://kmturley.github.io/Recorderjs/
Any help would be greatly appreciated. Thanks!
I'm a bit confused by your concern for the latency. Yes, it's true that the minimum possible latency is going to be related to the length of the buffer, but there are many other latencies involved. In any case, the latency has nothing to do with the recording duration, which seems to me to be what your question is about.
If you want to record an exactly 3-second-long buffer at 44100 Hz, that is 44100 * 3 = 132,300 samples. The buffer size is 4096 samples, and the system is only going to record a whole multiple of that number. Given that, the closest you are going to get is to record either 32 or 33 complete buffers, which gives either 131,072 samples (2.97 seconds) or 135,168 samples (3.065 seconds).
You have a couple options here.
Choose a buffer length that evenly divides the sample rate, e.g. 11025; you can then record exactly 12 buffers for 3 seconds.
Record slightly longer than the 3.0 seconds you need and then throw away the extra 2868 samples (a sketch of this is below).
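A minimal sketch of the second option, assuming Recorder.js exposes getBuffer() delivering an array of Float32Array channel buffers (as in Matt Diamond's original Recorder.js; adjust if your fork's API differs):

// Trim a slightly-too-long recording down to exactly 3 seconds of samples.
var targetSamples = Math.round(context.sampleRate * 3); // 132300 at 44100 Hz

recorder.getBuffer(function (channels) {
    var trimmed = channels.map(function (channel) {
        return channel.subarray(0, targetSamples); // drop the extra ~2868 samples
    });
    // use `trimmed` here, e.g. copy into an AudioBuffer or re-encode to WAV
});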

Web Audio frequency limitation?

My goal is to generate an audio at a certain frequency and then check at what frequency it is using the result of FFT.
function speak() {
    gb.src = gb.ctx.createOscillator();
    gb.src.connect(gb.ctx.destination);
    gb.src.start(gb.ctx.currentTime);
    gb.src.frequency.value = 1000;
}
function listen() {
    navigator.getUserMedia = (navigator.getUserMedia
        || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);
    navigator.getUserMedia({
        audio: true,
        video: false
    }, function(stream) {
        gb.stream = stream;
        var input = gb.ctx.createMediaStreamSource(stream);
        gb.analyser = gb.ctx.createAnalyser();
        gb.analyser.fftSize = gb.FFT_SIZE;
        input.connect(gb.analyser);
        gb.freqs = new Uint8Array(gb.analyser.frequencyBinCount);
        setInterval(detect, gb.BIT_RATE / 2);
    }, function(err) {
        console.log('The following gUM error occured: ' + err);
    });
}
See the working example at http://codepen.io/Ovilia/full/hFtrA/ . You may need to put your microphone near the speaker to see the effect.
The problem is that when the frequency is somewhere above 15000 (e.g. 16000), there no longer seems to be any response in the high-frequency area.
Is there any limit of frequency with Web Audio, or is it the limit of my device?
What is the unit of each element when I get from getByteFrequencyData?
Is there any limit of frequency with Web Audio, or is it the limit of my device?
I don't think the Web Audio framework itself limits this. As the other answers have mentioned, the limit probably comes from the physical limits of the microphone and loudspeaker.
I tried this with my current bookshelf loudspeaker (Kurzweil KS40A) along with a decent microphone (Zoom H4). The microphone was about 1 cm from the tweeter.
As you can see, these loudspeakers and microphones aren't able to efficiently generate/capture sounds at those frequencies.
This is more obvious when you look at the Zoom H4's frequency response. Unfortunately I couldn't find a frequency response for the KS40A.
You can also do something similar using non-browser tools to check whether you see similar results.
What is the unit of each element when I get from getByteFrequencyData?
Each element from getByteFrequencyData is a normalized magnitude from the FFT, scaled to fit the dBFS range between the minDecibels and maxDecibels attributes on the AnalyserNode. So a byte value of 0 implies minDecibels (default -100 dBFS) or lower, and a byte value of 255 implies maxDecibels (default -30 dBFS) or higher.
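A minimal sketch of mapping those byte values back to decibels, using the analyser set up in listen() above (the linear interpolation between minDecibels and maxDecibels mirrors the scaling described here):

// Convert a byte value (0-255) from getByteFrequencyData back to an
// approximate level in dBFS, using the analyser's configured range.
function byteToDecibels(analyser, byteValue) {
    var range = analyser.maxDecibels - analyser.minDecibels; // default: -30 - (-100) = 70 dB
    return analyser.minDecibels + (byteValue / 255) * range;
}

// e.g. after gb.analyser.getByteFrequencyData(gb.freqs):
//   var dbAtBin10 = byteToDecibels(gb.analyser, gb.freqs[10]);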
Look up the concept of the Nyquist frequency. The default sampling rate of Web Audio is 44.1 kHz, which means the theoretical maximum frequency would be 22050 Hz, given perfect hardware such as the microphone and the analog-to-digital converter inside your computer. @Ovilia, on that same computer, using the same microphone, record the same input sound and then examine the audio file with a utility like Audacity, where you can view the output of its FFT analysis: in Audacity, open the audio file and go to Analyze -> Plot Spectrum. To see a very nice FFT view, click the down arrow near the left side of the waveform subwindow and pick Spectrogram. Another excellent FFT-capable audio tool is Sonic Visualiser. Are you now seeing power at frequencies you are not seeing with the FFT in Web Audio?
I think most microphones only work well in the voice frequency range, somewhere around 80 Hz to 1100 Hz.
So you probably have a hardware limitation; check with the manufacturer or the manual for your device's frequency response!
There is probably an anti-aliasing low-pass filter (between the microphone and the ADC) with a cut-off below Fs/2, to make sure everything is rolled off by that frequency (given a finite filter transition width).
There may also be nulls in the room's acoustics. At frequencies above 2 kHz, it might be only inches from a peak to a null for the microphone placement.
