JavaScript WebAudio: generate sound with multiple frequencies

Is there a way to create sound with multiple frequencies in JavaScript?
I can use WebAudio and create something with one frequency.
var osc = audioContext.createOscillator();
osc.frequency.value = 440; //Some frequency
But I need to create a sound signal with many frequencies on it.
Is it possible?

Pang's answer wasn't bad - just add more oscillators, connect() them all to AudioContext.destination, and start() them all - but you can also use different oscillator types. The different waveforms (sawtooth, square, triangle) have different harmonic components, and in fact you can use a PeriodicWave object with Oscillator.setPeriodicWave() to specify custom harmonic components.
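As a minimal sketch of both approaches (the helper names, frequencies, and gain value below are my own illustrations, not from the answer):

```javascript
// Sketch: one oscillator per frequency, mixed through a shared gain node.
function playFrequencies(ctx, freqs) {
  var mix = ctx.createGain();
  mix.gain.value = 1 / freqs.length; // scale down to avoid clipping when summing
  mix.connect(ctx.destination);
  return freqs.map(function (f) {
    var osc = ctx.createOscillator();
    osc.frequency.value = f;
    osc.connect(mix);
    osc.start();
    return osc;
  });
}

// Sketch: a single oscillator whose harmonic content is set via PeriodicWave.
// amplitudes[i] is the relative strength of harmonic i+1.
function playHarmonics(ctx, fundamental, amplitudes) {
  var real = new Float32Array(amplitudes.length + 1);  // cosine terms, all zero
  var imag = new Float32Array([0].concat(amplitudes)); // sine terms
  var osc = ctx.createOscillator();
  osc.setPeriodicWave(ctx.createPeriodicWave(real, imag));
  osc.frequency.value = fundamental;
  osc.connect(ctx.destination);
  osc.start();
  return osc;
}
```

Usage would look like playFrequencies(audioContext, [440, 550, 660]) for a chord of sines, or playHarmonics(audioContext, 220, [1, 0.5, 0.25]) for one tone with three harmonics.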

Related

Isolate ultrasounds with Web Audio API

Is there any algorithm I can use with the Web Audio API to isolate ultrasounds?
I've tried a 'highpass' filter, but I need to isolate sounds that are ONLY ultrasounds (horizontal lines in the spectrogram) and ignore noises that also have energy at lower, audible frequencies (vertical lines).
var highpass = audioContext.createBiquadFilter();
highpass.type = 'highpass';
highpass.frequency.value = 17500;
highpass.gain.value = -1;
Here's a test, based on a nice snippet from http://rtoy.github.io/webaudio-hacks/more/filter-design/filter-design.html, showing how the spectrum of audible noise interferes with the filtered ultrasound (there are two canvases, one without the filter and one with it): https://jsfiddle.net/6gnyhvrk/3
Without filters:
With a 17,500 Hz highpass filter:
A highpass filter is what you want, but there are a few things to consider. First, the audio context has to have a high enough sample rate (to capture a frequency at all, the sample rate must be at least twice that frequency). Second, you have to decide what "ultrasound" means - many people can hear frequencies above 15 kHz (your example cutoff is 17.5 kHz). Finally, a single highpass filter may not have a sharp enough cutoff for you, so you may need a more complicated filter setup.
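One common way to get a sharper cutoff is to cascade several biquad stages; each 2nd-order highpass adds roughly 12 dB/octave of roll-off. This is a sketch under that assumption - the helper name and stage count are mine:

```javascript
// Sketch: chain several highpass biquads for a steeper combined cutoff.
function makeSteepHighpass(ctx, cutoffHz, stages) {
  var filters = [];
  for (var i = 0; i < stages; i++) {
    var f = ctx.createBiquadFilter();
    f.type = 'highpass';
    f.frequency.value = cutoffHz;
    if (i > 0) filters[i - 1].connect(f); // chain each stage into the next
    filters.push(f);
  }
  return { input: filters[0], output: filters[filters.length - 1] };
}
```

Wiring would then be: source.connect(chain.input); chain.output.connect(ctx.destination); with e.g. makeSteepHighpass(ctx, 17500, 4).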

Webaudio FM Synthesis with two modulators

F is the carrier, and E and D are modulators.
Simple FM Synthesis with only one modulator, is pretty straightforward in webaudio.
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var out = ctx.destination;
// Instantiating
var E = ctx.createOscillator(); // Modulator
var F = ctx.createOscillator(); // Carrier
// Setting frequencies
E.frequency.value = 440;
F.frequency.value = 440;
// Modulation depth
var E_gain = ctx.createGain();
E_gain.gain.value = 3000;
// Wiring everything up
E.connect(E_gain);
E_gain.connect(F.frequency);
F.connect(out);
// Start making sound
E.start();
F.start();
But now I would like to make something like this.
Two modulators, that is. How can this be implemented in Web Audio?
You can connect two nodes into the same input. Just call the connect() method twice. For example (using your diagram and naming convention):
E.connect(E_gain);
D_gain.connect(E_gain);
Each time E_gain produces an output sample, its input value will be determined by summing one sample from E with one sample from D_gain.
Whether you want to connect to the frequency parameter or to the detune parameter depends on whether you want to implement Linear FM or Exponential FM: frequency is measured in Hertz (a linear scale), whereas detune is measured in cents (an exponential one). If you do connect to frequency, you'll most probably want to adjust the gain every time the frequency of the carrier changes. For example, you'd set the gain to 440 * d for some constant modulation depth d when using a 440 Hz carrier, but change it to 220 * d when you play the note an octave lower. Keeping the gain constant can generate some interesting dissonant effects too.
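A sketch of one common parallel wiring, extending the single-modulator code from the question - both modulator gains feed the same AudioParam, where their outputs are summed. The frequencies and depths are illustrative assumptions:

```javascript
// Sketch: two modulators (D, E) in parallel, both summed into the
// carrier F's frequency AudioParam.
function twoModulatorFM(ctx) {
  var D = ctx.createOscillator(); // Modulator 2
  var E = ctx.createOscillator(); // Modulator 1
  var F = ctx.createOscillator(); // Carrier
  D.frequency.value = 110;
  E.frequency.value = 220;
  F.frequency.value = 440;

  var D_gain = ctx.createGain(); // modulation depth of D
  var E_gain = ctx.createGain(); // modulation depth of E
  D_gain.gain.value = 1500;
  E_gain.gain.value = 3000;

  D.connect(D_gain);
  E.connect(E_gain);
  D_gain.connect(F.frequency); // both connections to the same AudioParam
  E_gain.connect(F.frequency); // are summed sample by sample
  F.connect(ctx.destination);

  [D, E, F].forEach(function (o) { o.start(); });
  return { D: D, E: E, F: F };
}
```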
Response:
You need to connect to detune, not to frequency.
Example:
Hey, I have an example on my site for you:
http://gtube.de/
Go to the Publish area in the header and select the FM synth.
There you can see the connections and you can try it live (use the keyboard A-L)! :-)
Example object:
[{"name":"connection","Name":"Connection at Pos6","ConnectFrom":"1_#_MOD 1_#_object","ConnectTo":"3_#_GAIN MOD1_#_object"},
{"name":"connection","Name":"Connection at Pos7","ConnectFrom":"3_#_GAIN MOD1_#_object","ConnectTo":"0_#_OSC_#_detune"},
{"name":"connection","Name":"Connection at Pos8","ConnectFrom":"2_#_MOD 2_#_object","ConnectTo":"4_#_GAIN MOD2_#_object"},
{"name":"connection","Name":"Connection at Pos9","ConnectFrom":"4_#_GAIN MOD2_#_object","ConnectTo":"0_#_OSC_#_detune"},
{"name":"connection","Name":"Connection at Pos10","ConnectFrom":"0_#_OSC_#_object","ConnectTo":"5_#_GAIN OSC_#_object"},
{"name":"connection","Name":"Connection at Pos11","ConnectFrom":"5_#_GAIN OSC_#_object","ConnectTo":"context.destination"}]

Web Audio API - Removing filter

I'm building a visualiser with multiple graphic modes. For a few of them I need to calculate the beat of the track being played, and as I understand I then need to apply a lowpass filter like the following, to enhance frequencies that are most probable to hold drum sounds:
var filter = context.createBiquadFilter();
source.connect(filter);
filter.connect(context.destination);
filter.type = 'lowpass';
But what if I want to turn the filter off? Do I have to re-connect the source every time I need to remove the filter? Would this have any negative effect on performance?
Related question: how much performance loss would I experience if I have two sources from the same audio source, and apply the filter to only one of them?

how much performance loss would I experience if I have two sources from the same audio source, and apply the filter to only one of them
You can connect a single audio node to multiple destinations, so you never need a duplicate source just to fan the signal out. If you need filtered and raw audio simultaneously, you can just set up your connections accordingly:
var filter = context.createBiquadFilter();
source.connect(filter);
source.connect(context.destination);
filter.connect(context.destination);
filter.type = "lowpass";
Alternatively, setting the type property of a BiquadFilterNode to "allpass" effectively disables the filtering (the magnitude response becomes flat, though the phase is still altered), without having to reconnect anything:
filter.type = "allpass";
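Building on the parallel routing above, another option is to keep both paths permanently connected and toggle between them with two gain nodes, so the graph is never rewired. This is a sketch; the helper name and gain values are my own:

```javascript
// Sketch: filtered ("wet") and unfiltered ("dry") paths always connected;
// enabling/disabling the filter just swaps which path is audible.
function makeBypassableFilter(ctx, source) {
  var filter = ctx.createBiquadFilter();
  filter.type = 'lowpass';
  var wet = ctx.createGain(); // filtered path
  var dry = ctx.createGain(); // unfiltered path
  source.connect(filter);
  filter.connect(wet);
  source.connect(dry);
  wet.connect(ctx.destination);
  dry.connect(ctx.destination);
  // Returns a toggle: on -> only the filtered path is heard.
  return function setEnabled(on) {
    wet.gain.value = on ? 1 : 0;
    dry.gain.value = on ? 0 : 1;
  };
}
```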
According to the Web Audio intro article on HTML5 Rocks, I would have to toggle the filter on and off by disconnecting the source and the filter, like so:
this.source.disconnect(0);
this.filter.disconnect(0);
// Check if we want to enable the filter.
if (filterShouldBeEnabled) {
  // Connect through the filter.
  this.source.connect(this.filter);
  this.filter.connect(context.destination);
} else {
  // Otherwise, connect directly.
  this.source.connect(context.destination);
}

Web Audio synthesis: how to handle changing the filter cutoff during the attack or release phase?

I'm building an emulation of the Roland Juno-106 synthesizer using WebAudio. The live WIP version is here.
I'm hung up on how to deal with updating the filter if the cutoff frequency or envelope modulation amount are changed during the attack or release while the filter is simultaneously being modulated by the envelope. That code is located around here. The current implementation doesn't respond the way an analog synth would, but I can't quite figure out how to calculate it.
On a real synth the filter changes immediately as determined by the frequency cutoff, envelope modulation amount, and current stage in the envelope, but the ramp up or down also continues smoothly.
How would I model this behavior?
Brilliant project!
You don't need to sum these yourself - Web Audio AudioParams sum their inputs, so if you have a potentially audio-rate modulation source like an LFO (an OscillatorNode connected to a GainNode), you simply connect() it to the AudioParam.
This is the key here - that AudioParams are able to be connect()ed to - and multiple input connections to a node or AudioParam are summed. So you generally want a model of
filter cutoff = (cutoff from envelope) + (cutoff from mod/LFO) + (cutoff from cutoff knob)
Since cutoff is a frequency, and thus on a log scale not a linear one, you want to do this addition logarithmically (otherwise, an envelope that boosts the cutoff up an octave at 440Hz will only boost it half an octave at 880Hz, etc.) - which, luckily, is easy to do via the "detune" parameter on a BiquadFilter.
Detune is in cents (1200/octave), so you have to use gain nodes to adjust values (e.g. if you want your modulation to have a +1/-1 octave range, make sure the oscillator output is going between -1200 and +1200). You can see how I do this bit in my Web Audio synthesizer (https://github.com/cwilso/midi-synth): in particular, check out synth.js starting around line 500: https://github.com/cwilso/midi-synth/blob/master/js/synth.js#L497-L519. Note the modFilterGain.connect(this.filter1.detune); in particular.
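As a sketch of that LFO-into-detune scaling (the function and parameter names here are my own, not from midi-synth):

```javascript
// Hypothetical helper: convert an octave range to cents (1200 cents/octave).
function octavesToCents(octaves) {
  return octaves * 1200;
}

// Sketch: an LFO (oscillator -> gain) scaled into cents and summed
// into the filter's detune AudioParam. Rate and range are assumptions.
function connectLfoToDetune(ctx, filter, lfoRateHz, octaveRange) {
  var lfo = ctx.createOscillator();
  lfo.frequency.value = lfoRateHz;
  var depth = ctx.createGain();
  depth.gain.value = octavesToCents(octaveRange); // oscillator output is +/-1
  lfo.connect(depth);
  depth.connect(filter.detune); // sums with the other inputs to detune
  lfo.start();
  return lfo;
}
```

For a +1/-1 octave sweep, octaveRange would be 1, so the detune input swings between -1200 and +1200 cents.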
You don't want to be setting ANY values directly for modulation, since the actual value will change at a potentially fast rate - you want to use the parameter scheduler and input summing from an LFO. You can set the knob value as needed in terms of time, but it turns out that setting .value will interact poorly with setting scheduled values on the same AudioParam - so you'll need to have a separate (summed) input into the AudioParam. This is the tricky bit, and to be honest, my synth does NOT do this well today (I should change it to the approach described below).
The right way to handle the knob setting is to create an audio channel that varies based on your knob setting - that is, it's an AudioNode that you can connect() to the filter.detune, although the sample values produced by that AudioNode are only positive, and only change values when the knob is changed. To do this, you need a DC offset source - that is, an AudioNode that produces a stream of constant sample values. The simplest way I can think of to do this is to use an AudioBufferSourceNode with a generated buffer of 1:
function createDCOffset() {
  var buffer = audioContext.createBuffer(1, 1, audioContext.sampleRate);
  var data = buffer.getChannelData(0);
  data[0] = 1;
  var bufferSource = audioContext.createBufferSource();
  bufferSource.buffer = buffer;
  bufferSource.loop = true;
  bufferSource.start(0);
  return bufferSource;
}
Then connect that DC offset into a gain node, and set the gain's .value from your knob to scale the output (remember, there are 1200 cents in an octave, so if you want your knob to represent a six-octave cutoff range, the .value should go between zero and 7200). Then connect() that DC-offset gain node into the filter's .detune: it sums with, rather than replaces, the connection from the LFO, and also sums with any values scheduled on the AudioParam (remember you'll need to scale the scheduled values into cents, too). This approach, by the way, makes it easy to flip the envelope polarity too (the VCF ENV switch on the Juno-106): just invert the values you set in the scheduler.
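As a side note, newer browsers provide ConstantSourceNode, which does the same job as the looped-buffer trick with a built-in DC source; this sketch assumes your target browsers support it:

```javascript
// Sketch: a DC offset source using ConstantSourceNode (newer API),
// equivalent to the looped one-sample buffer above.
function createDCOffsetModern(audioContext) {
  var src = audioContext.createConstantSource();
  src.offset.value = 1; // emits a constant stream of 1s
  src.start();
  return src;
}
```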
Hope this helps. I'm a bit jetlagged at the moment, so hopefully this was lucid. :)

Generate a non-sinusoidal tone

Is it possible to generate a tone based on a specific formula? I've tried googling it, but the only things I could find were about normal sine waves, such as this other SO question. So I was wondering if it is possible to generate tones based on other kinds of formulas?
On that other SO question, I did find a link to this demo page, but it seems like that page just downloads sound files and uses them to just alter the pitch of the sounds.
I've already tried combining sine waves by using multiple oscillators, based on this answer, which works just as expected:
window.ctx = new (window.AudioContext || window.webkitAudioContext)();
window.osc = [];
function startTones() {
  osc[0] = ctx.createOscillator();
  osc[1] = ctx.createOscillator();
  osc[0].frequency.value = 120;
  osc[1].frequency.value = 240;
  osc[0].connect(ctx.destination);
  osc[1].connect(ctx.destination);
  osc[0].start(0);
  osc[1].start(0);
}
function stopTones() {
  osc[0].stop(0);
  osc[1].stop(0);
}
<button onclick="startTones();">Start</button>
<button onclick="stopTones();">Stop</button>
So now I was wondering, is it possible to make a wave that's not based on adding sine waves like this, such as a sawtooth wave (x - floor(x)), or a multiplication of sine waves (sin(PI*440*x)*sin(PI*220*x))?
PS: I'm okay with not supporting some browsers - as long as it still works in at least one (although more is better).
All (periodic) waves can be expressed as the addition of sine waves, and WebAudio has a function for synthesising a wave form based on a harmonic series, context.createPeriodicWave(real, imag).
The successive elements of the supplied real and imag input arrays specify the relative amplitude and phase of each harmonic.
Should you want to create a wave procedurally, then in theory you could populate an array with the desired waveform, take the FFT of that, and then pass the resulting FFT components to the above function.
(WebAudio happens to support the sawtooth waveform natively, BTW)
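To make that concrete, here is a sketch of approximating a sawtooth from its Fourier series and handing the coefficients to createPeriodicWave. The harmonic count is an arbitrary choice; more harmonics give a sharper ramp:

```javascript
// Sketch: Fourier coefficients of a sawtooth wave.
// Sine-term amplitudes fall off as 2/(pi*k) for harmonic k; cosine terms are 0.
function sawtoothCoefficients(nHarmonics) {
  var real = new Float32Array(nHarmonics + 1); // cosine terms, all zero
  var imag = new Float32Array(nHarmonics + 1); // sine terms
  for (var k = 1; k <= nHarmonics; k++) {
    imag[k] = 2 / (Math.PI * k);
  }
  return { real: real, imag: imag };
}

// Sketch: play the approximated sawtooth through a PeriodicWave oscillator.
function playSawtooth(ctx, frequency) {
  var c = sawtoothCoefficients(32);
  var osc = ctx.createOscillator();
  osc.setPeriodicWave(ctx.createPeriodicWave(c.real, c.imag));
  osc.frequency.value = frequency;
  osc.connect(ctx.destination);
  osc.start();
  return osc;
}
```

The same recipe works for any formula you can expand into a harmonic series; for a product like sin(PI*440*x)*sin(PI*220*x), the product-to-sum identity turns it into two sine components you could instead play on two oscillators.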
