Web Audio API - Removing filter - javascript

I'm building a visualiser with multiple graphic modes. For a few of them I need to detect the beat of the track being played, and as I understand it I then need to apply a lowpass filter like the following, to emphasise the frequencies most likely to contain drum sounds:
var filter = context.createBiquadFilter();
source.connect(filter);
filter.connect(context.destination);
filter.type = 'lowpass';
But what if I want to turn the filter off? Do I have to re-connect the source every time I need to remove the filter? Would this have any negative effect on performance?
Related question: how much performance loss would I experience if I have two sources, from the same audio source, and apply the filter to one of them?

how much performance loss would I experience if I have two sources, from the same audio source, and apply the filter to one of them
You can connect a single audio node to multiple destinations, so you never need a duplicate source just to fan the signal out. If you need filtered and raw audio simultaneously, you can simply set up your connections accordingly:
var filter = context.createBiquadFilter();
source.connect(filter);
source.connect(context.destination);
filter.connect(context.destination);
filter.type = "lowpass";
Anyway, setting the type property of the BiquadFilterNode to "allpass" effectively disables the filtering without having to reconnect anything (an allpass filter passes all frequencies at unchanged amplitude, altering only their phase):
filter.type = "allpass";

According to the WebAudio intro article on html5rocks, you would toggle the filter on and off by disconnecting the source and the filter, then rewiring the graph, like so:
this.source.disconnect(0);
this.filter.disconnect(0);
// Check if we want to enable the filter.
if (filterShouldBeEnabled) {
    // Connect through the filter.
    this.source.connect(this.filter);
    this.filter.connect(context.destination);
} else {
    // Otherwise, connect directly.
    this.source.connect(context.destination);
}

Related

Isolate ultrasounds with Web Audio API

Is there any algorithm I can use with the Web Audio API to isolate ultrasounds?
I've tried 'highpass' filters, but I need to isolate sounds that are ONLY ultrasounds (horizontal lines in the spectrogram) and ignore noises that also have energy at lower, audible frequencies (vertical lines).
var highpass = audioContext.createBiquadFilter();
highpass.type = 'highpass';
highpass.frequency.value = 17500;
highpass.gain.value = -1;
Here's a test with a nice snippet from http://rtoy.github.io/webaudio-hacks/more/filter-design/filter-design.html showing how the spectrum of audible noise interferes with the filtered ultrasound (there are two canvases, one without the filter and one with it): https://jsfiddle.net/6gnyhvrk/3
(Screenshots: the spectrum without filters, and with the 17,500 Hz highpass filter.)
A highpass filter is what you want, but there are a few things to consider. First, the audio context has to have a high enough sample rate: you can only represent frequencies up to half the sample rate (the Nyquist frequency). Second, you have to decide what "ultrasound" means; many people can hear frequencies above 15 kHz (as in your example). Finally, a single highpass filter may not have a sharp enough cutoff for you, so you'll need a more complicated filter setup.
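As a rough sketch of that last point - not a tuned design - you could cascade several highpass biquads from the same context for a steeper roll-off than a single second-order section gives you; the cutoff and stage count here are illustrative:
var input = source; // whatever node currently feeds the filter
for (var i = 0; i < 4; i++) {
    var hp = audioContext.createBiquadFilter();
    hp.type = 'highpass';
    hp.frequency.value = 17500;
    input.connect(hp);
    input = hp;
}
input.connect(audioContext.destination); // last stage in the chain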

In the Web Audio API, is there a way to set a maximum volume?

I know you can boost or reduce volume with gain. I was wondering if there was a way (perhaps via a node) to cap the maximum volume of the output - not reducing any audio below that max value. It is acceptable if there is distortion for audio that gets capped like this.
An alternative that might be simpler is to use a WaveShaperNode. I think a curve equal to [-1, 0, 1] will do what you want, clamping values to +/-1. If you don't oversample, there won't be any additional delay.
Note that I'm pretty sure all browsers implement this kind of clamping before sending audio to the speakers.
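A minimal sketch of that idea, assuming sourceNode is whatever node currently feeds your output:
var shaper = audioContext.createWaveShaper();
// Three-point curve: linear inside [-1, 1], clamped at the endpoints outside it.
shaper.curve = new Float32Array([-1, 0, 1]);
// shaper.oversample stays at 'none', so no extra latency is introduced.
sourceNode.connect(shaper);
shaper.connect(audioContext.destination);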
This is not possible with any of the built-in AudioNodes. But it can be achieved with a custom AudioWorklet. I recently published a package which does exactly that. It's called limiter-audio-worklet.
It exports two functions:
import { addLimiterAudioWorkletModule, createLimiterAudioWorkletNode } from 'limiter-audio-worklet';
The first function can be used to add the AudioWorklet to a particular AudioContext.
await addLimiterAudioWorkletModule((url) =>
    audioContext.audioWorklet.addModule(url)
);
Once that is done the actual AudioWorkletNode can be created like this:
const limiterAudioWorkletNode = createLimiterAudioWorkletNode(
    AudioWorkletNode,
    audioContext,
    { attack: 0 }
);
If you set the attack to zero you get the desired effect: everything above +1/-1 gets clamped. If you increase the attack, the limiter transitions more smoothly using a look-ahead. This introduces a small delay (of the same size) but sounds much nicer.
Of course it's also necessary to connect the previously created limiterAudioWorkletNode close to the end of your audio graph.
yourLastAudioNode
    .connect(limiterAudioWorkletNode)
    .connect(audioContext.destination);

JavaScript WebAudio generate sound with multiple frequencies

Is there a way to create sound with multiple frequencies in JavaScript?
I can use WebAudio and create something with one frequency.
var osc = audioContext.createOscillator();
osc.frequency.value = 440; //Some frequency
But I need to create a sound signal with many frequencies on it.
Is it possible?
Pang's answer isn't bad - just add more oscillators, connect() them all to the AudioContext's destination, and start() them all - but you can also use different types of oscillators. The different waveforms (sawtooth, square, triangle) have different harmonic content, and in fact you can use a PeriodicWave object with Oscillator.setPeriodicWave() to specify custom harmonic components.
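A quick sketch of both approaches; the frequencies and harmonic amplitudes are just examples:
var ctx = new AudioContext();

// 1) Mix several oscillators into one gain node.
var mix = ctx.createGain();
mix.gain.value = 0.25; // keep the summed signal below clipping
[440, 550, 660].forEach(function (f) {
    var osc = ctx.createOscillator();
    osc.frequency.value = f;
    osc.connect(mix);
    osc.start();
});
mix.connect(ctx.destination);

// 2) Or bake the harmonics into a single oscillator with a PeriodicWave.
var real = new Float32Array([0, 1, 0.5, 0.25]); // cosine terms: fundamental plus two harmonics
var imag = new Float32Array(real.length);       // sine terms left at zero
var osc2 = ctx.createOscillator();
osc2.setPeriodicWave(ctx.createPeriodicWave(real, imag));
osc2.frequency.value = 440;
osc2.connect(ctx.destination);
osc2.start();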

best way to sync data with html5 video

I am building an application that takes data from an Android app and replays it in a browser. The Android app lets the user record a video, and while recording it logs data every 100ms, such as GPS position, speed and accelerometer readings, to a database. I want the user to be able to play the video back in their browser and have charts, a Google map etc. show a real-time representation of the data synced to the video.

I have already achieved this functionality, but it's far from perfect and I can't help thinking there must be a better way. What I am doing at the moment is getting all of the data from the database ordered by datetime ascending and outputting it as a JSON-encoded array. Here is the process, in pseudocode:
- Use a video event listener to find the current datetime of the video
- Run a while loop from the current item in the data array
- For each iteration, check whether the datetime for that row is less than the current datetime from the video
- If it is, update the dials from the data
- Increment the array key
Here is my code:
var points = <?php echo json_encode($gps); ?>;
var start_time = <?php echo $gps[0]->milli; ?>;
var current_time = start_time;

$(document).ready(function() {
    top_speed = 240;
    min_angle = -210;
    max_angle = 30;
    total_angle = 0 - ((0 - max_angle) + min_angle);
    multiplier = top_speed / total_angle;
    speed_i = 0;

    video.addEventListener('timeupdate', function() {
        current_time = start_time + parseInt((video.currentTime * 1000).toFixed(0));
        while (typeof points[speed_i] !== 'undefined' && current_time > points[speed_i].milli) {
            newpos = new google.maps.LatLng(points[speed_i].latitude, points[speed_i].longitude);
            marker.setPosition(newpos);
            map.setCenter(newpos);
            angle = min_angle + (points[speed_i].speed * multiplier);
            $("#needle").rotate({
                animateTo: angle,
                center: [13, 11],
                duration: 100
            });
            speed_i++;
        }
    });
});
Here are the issues I seem to have encountered:
- Having to load thousands of rows into a JSON array, which can't be good for performance
- Having to run the while loop on every video callback - again, not great for performance
- Playback is always a bit behind
Can anyone think of any ways this can be improved or a better way completely to do it?
There are a few reasons why this may be running slowly. First, the timeupdate event only runs about every 250ms. So, if you're going to refresh at that rate, dandavis is right and you don't need that much data. But if you want animation that's that smooth, I suggest using requestAnimationFrame to update every 16ms or so.
As it is, if you update every 250ms, you're cycling through 2 or 3 data points and updating the map and needle three times in a row, which is unnecessary.
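A sketch of the requestAnimationFrame approach, reusing the points/speed_i bookkeeping from the question; updateDials is a hypothetical helper wrapping the existing map and needle code:
var rafId;
function tick() {
    var current_time = start_time + Math.round(video.currentTime * 1000);
    while (typeof points[speed_i] !== 'undefined' && current_time > points[speed_i].milli) {
        updateDials(points[speed_i]); // hypothetical: move the marker, rotate the needle
        speed_i++;
    }
    rafId = requestAnimationFrame(tick);
}
video.addEventListener('play', function () { rafId = requestAnimationFrame(tick); });
video.addEventListener('pause', function () { cancelAnimationFrame(rafId); });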
I recommend looking into Popcorn.js, which is built exactly for this kind of thing and will take care of this for you. It will also handle seeking or playing backwards. You'll want to pre-process the data so each point has a start time and an end time in the video.
There are also some things you can do to make the data transfer more efficient. Take out any extra properties that you don't need on every point. You can store each data point as an array, so the property names don't have to be included in your JSON blob, and then you can clean that up with a few lines of JS code on the client side.
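For example - the field order here is an assumption for illustration - the server could send bare arrays and the client could rebuild the named properties in a couple of lines:
// Server sends: [[milli, latitude, longitude, speed], ...]
var points = rawPoints.map(function (p) {
    return { milli: p[0], latitude: p[1], longitude: p[2], speed: p[3] };
});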
Finally, separate your data file from your script. Save it as a static JSON file (maybe even gzipped if your server configuration can handle it) and fetch it with XMLHttpRequest. That way, you can at least display the page sooner while waiting for the code to download. Better yet, look into using a JSON streaming tool like Oboe.js to start displaying data points even before the whole file is loaded.
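A minimal sketch of fetching the static file, with '/track-data.json' as a placeholder URL:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/track-data.json');
xhr.responseType = 'json';
xhr.onload = function () {
    points = xhr.response;
    // Safe to start syncing against the video from here.
};
xhr.send();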

Applying a filter to AudioContext

I'm attempting to apply a low-pass filter to a sound I load and play through SoundJS.
Right now I'm attempting it like this:
var audio = createjs.Sound.activePlugin;
var source = audio.context.createBufferSource();
// Create the filter
var filter = audio.context.createBiquadFilter();
// Create the audio graph.
source.connect(filter);
filter.connect(audio.context.destination);
// Create and specify parameters for the low-pass filter.
filter.type = 0; // Low-pass filter. See BiquadFilterNode docs
filter.frequency.value = 440; // Set cutoff to 440 HZ
// Playback the sound.
createjs.Sound.play("Song");
But I'm not having much luck. Could someone point me in the right direction?
Thanks
When I built the MusicVisualizer demo, one of the interesting limitations I found was that all of the audio nodes have to be built using the same context in order to work together. You can access the SoundJS context via createjs.WebAudioPlugin.context.
You'll also need to connect your filter into the existing node flow if you want it to work as expected. You can see the source of the MusicVisualizer demo on GitHub, which does exactly this, and the online documentation may also be helpful.
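As a rough sketch under those constraints - treat the wiring as an assumption to verify against your SoundJS version, since which output node the plugin exposes has varied between builds:
// Build the filter from the same context SoundJS uses.
var context = createjs.WebAudioPlugin.context;
var filter = context.createBiquadFilter();
filter.type = 'lowpass';       // string form; the numeric type constants are deprecated
filter.frequency.value = 440;

// Hypothetical wiring: re-route the plugin's final node through the filter.
// Some builds expose e.g. createjs.WebAudioPlugin.dynamicsCompressorNode.
var output = createjs.WebAudioPlugin.dynamicsCompressorNode; // assumption
output.disconnect();
output.connect(filter);
filter.connect(context.destination);

createjs.Sound.play("Song");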
Hope that helps.
