I've been working in WebAudio, trying to ramp an oscillator node on and off. I've been trying to verify the ramping on the signal with an oscilloscope, and I'm noticing that the linear ramp (from the linearRampToValueAtTime() function) is not working as anticipated.
First, I'm unable to change the duration of the ramp, and when I do manage to get a ramp (using the setValueAtTime() function), it is not linear.
So I guess my question is this - has anyone been able to validate using external equipment that the ramps are working in Chrome?
I tried verifying the example listed on the MDN documentation site:
https://mdn.github.io/webaudio-examples/audio-param/
but those demos don't ramp either - is this an issue with my audio driver, my web browser, or something else? I'm assuming something is wrong in the implementation, because it's not working anywhere, yet others don't seem to have this issue - so any help I can get would be appreciated!
snippets of the code below:
Creating nodes
for (let i = 0; i < 2; i++) {
  oscCarrier[i] = audioCtx.createOscillator();
  panOsc[i] = audioCtx.createStereoPanner();
  gainOsc[i] = audioCtx.createGain();
  oscCarrier[i].frequency.value = 1000;
  gainOsc[i].gain.setValueAtTime(0, audioCtx.currentTime);
  oscCarrier[i].connect(panOsc[i]);
  panOsc[i].connect(gainOsc[i]);
  gainOsc[i].connect(audioCtx.destination);
  oscCarrier[i].start();
}
For ramping off:
let rampTime = 0.025; // 25 ms ramp off
gainOsc[channel].gain.linearRampToValueAtTime(0, audioCtx.currentTime + rampTime);
For ramping on:
let rampTime = 0.025; // 25 ms ramp on
gainOsc[channel].gain.linearRampToValueAtTime(1, audioCtx.currentTime + rampTime);
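One thing worth checking while debugging this: linearRampToValueAtTime() ramps from the *previous event* in the param's automation timeline, not from "now". If no event was scheduled recently, the ramp's start point (and therefore its apparent duration) can be surprising. A minimal sketch of an anchored ramp, assuming the audioCtx/gainOsc variables from the snippets above (rampGain is a hypothetical helper):

```javascript
// Hypothetical helper: anchor the automation timeline before ramping.
// linearRampToValueAtTime() ramps FROM the previous scheduled event,
// so we first pin the current value at `now` with setValueAtTime().
function rampGain(param, target, now, rampTime) {
  param.setValueAtTime(param.value, now);                // anchor at "now"
  param.linearRampToValueAtTime(target, now + rampTime); // linear ramp to target
}

// Usage (with the variables assumed from the question):
// rampGain(gainOsc[channel].gain, 0, audioCtx.currentTime, 0.025); // ramp off
// rampGain(gainOsc[channel].gain, 1, audioCtx.currentTime, 0.025); // ramp on
```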
Related
I have read and implemented in the past something similar to what Chris Wilson described in his article "A Tale of Two Clocks": https://web.dev/audio-scheduling/
I recently found WAAClock which in theory implements the same principle:
https://github.com/sebpiq/WAAClock
I'm working on a MIDI Web App and I want to send MIDI Clock messages, which requires precise scheduling. I wrote this post in the WebMidiJS forum (an amazing lib I'm using in my project):
https://github.com/djipco/webmidi/discussions/333
Essentially this is my code:
const PULSES_PER_QUARTER_NOTE = 24;
const BPM_ONE_SECOND = 60;
let context = new AudioContext();
let clock = new WAAClock(context);
const calculateClockDelay = bpm => BPM_ONE_SECOND / bpm / PULSES_PER_QUARTER_NOTE;
const startMidiClock = bpm => {
  clock.start();
  clock.callbackAtTime(function () {
    WebMidi.outputs.forEach(outputPort => outputPort.sendClock({}));
  }, 0).repeat(calculateClockDelay(bpm));
};
const stopMidiClock = () => clock.stop();
As described in that post, I CANNOT get the event to fire with high precision. I can see the BPM meter slightly DRIFT. I tried sending MIDI Clock from a DAW and the timing is perfect.
I'm using ZERO as the tolerance in the clock.callbackAtTime function.
Why do I see this drift / slight scheduling error?
Is there any other way to schedule a precise repeating MIDI event with WAAClock?
Is WAAClock capable of scheduling as precisely as Chris Wilson's technique?
Thanks a lot!
Danny Bullo
Eh. I'm not deeply familiar with that library - but at a quick glance, I'm doubtful that it can do what it claims to. This snippet from its code makes me VERY suspicious:
this._clockNode.onaudioprocess = function () {
  setTimeout(function() { self._tick() }, 0)
}
If I understand it correctly, this is trying to use a ScriptProcessorNode to get a high-stability, low-latency clock. That's not, unfortunately, something ScriptProcessorNode can do. You COULD do something closer to this with AudioWorklets, except you wouldn't be able to call back out to the main thread to fire the MIDI calls.
I'm not sure you've fully grasped how to apply the TOTC approach to MIDI - the key is that Web MIDI has a scheduler built in, too: instead of trying to call your JavaScript code and outputPort.sendClock() at PRECISELY the right time, you need to schedule the call ahead with (essentially) outputPort.sendClock(time) - that is, some small amount of time before the clock message needs to be sent, call MIDIOutput.send() with the timestamp parameter set to precisely when the message needs to go out.
This is gone over in more detail (for audio, though, not MIDI) in https://web.dev/audio-scheduling/#obtaining-rock-solid-timing-by-looking-ahead.
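To make the look-ahead idea concrete, here is a minimal sketch using the raw Web MIDI API (output.send() with a timestamp) rather than WebMidi.js. The constants and function names are my own, and the window sizes are illustrative guesses:

```javascript
// Look-ahead scheduling sketch: a sloppy JS timer wakes up every
// LOOKAHEAD_MS and schedules every MIDI clock pulse that falls within
// the next SCHEDULE_AHEAD_MS, passing an explicit timestamp so the
// browser's MIDI subsystem sends each message at precisely that time.
const PULSES_PER_QUARTER_NOTE = 24;
const MIDI_CLOCK = 0xF8;       // MIDI real-time Timing Clock status byte
const LOOKAHEAD_MS = 25;       // how often the JS timer wakes up (guess)
const SCHEDULE_AHEAD_MS = 100; // how far ahead to schedule (guess)

function clockIntervalMs(bpm) {
  return 60000 / bpm / PULSES_PER_QUARTER_NOTE;
}

function startMidiClock(output, bpm) {
  let nextPulseTime = performance.now();
  const timer = setInterval(() => {
    // The timer itself can jitter freely; only the timestamps matter.
    while (nextPulseTime < performance.now() + SCHEDULE_AHEAD_MS) {
      output.send([MIDI_CLOCK], nextPulseTime); // sent on time by the browser
      nextPulseTime += clockIntervalMs(bpm);
    }
  }, LOOKAHEAD_MS);
  return () => clearInterval(timer); // call this to stop the clock
}
```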
I'm currently working on adapting this web audio API demo for a project that I am working on, but there is no sound when I test on an iPhone. It works fine on the iPad.
I've searched for solutions and found this thread on StackOverflow with the following snippet of one of the answers:
Safari on iOS 6 effectively starts with the Web Audio API muted. It will not unmute until you attempt to play a sound in a user input event (create a buffer source, connect it to destination, and call noteOn()). After this, it unmutes and audio plays unrestricted and as it ought to. This is an undocumented aspect of how the Web Audio API works on iOS 6 (Apple's doc is here, hopefully they update it with a mention of this soon!)
The user input event should be the onclick event on the play button, but changing to use noteOn() instead of start() still doesn't fix it.
Update: I've also tried binding the play button with the touchend event but to no avail.
Here is the function that uses noteOn():
function playNote(buffer, pan, x, y, z, sendGain, mainGain, playbackRate, noteTime) {
  // Create the note
  var voice = context.createBufferSource();
  voice.buffer = buffer;
  voice.playbackRate.value = playbackRate;

  // Optionally, connect to a panner
  var finalNode;
  if (pan) {
    var panner = context.createPanner();
    panner.panningModel = "HRTF";
    panner.setPosition(x, y, z);
    voice.connect(panner);
    finalNode = panner;
  } else {
    finalNode = voice;
  }

  // Connect to dry mix
  var dryGainNode = context.createGain();
  dryGainNode.gain.value = mainGain * effectDryMix;
  finalNode.connect(dryGainNode);
  dryGainNode.connect(masterGainNode);

  // Connect to wet mix
  var wetGainNode = context.createGain();
  wetGainNode.gain.value = sendGain;
  finalNode.connect(wetGainNode);
  wetGainNode.connect(convolver);

  if (iOS) {
    voice.noteOn(noteTime);
  } else {
    voice.start(noteTime);
  }
}
Any suggestions would be greatly appreciated. Thanks.
I feel really stupid. Apparently, if you have your iPhone on vibrate mode, the sound doesn't play.
The start() method should work fine on iOS without the if/else statements, as long as you call the function from a user interaction event. Also, double-check the order of the arguments you pass to the panner - setPosition() takes (x, y, z).
Here's a working example; change things in it to fit what you need - most of it isn't needed, and I've got others somewhere that use the DOM to add event listeners.
<script>
  var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
  var oscillator = audioCtx.createOscillator();
  var gainNode = audioCtx.createGain();

  oscillator.connect(gainNode);
  gainNode.connect(audioCtx.destination);

  oscillator.type = 'sine';
  oscillator.frequency.value = 440;
  gainNode.gain.value = 1;
</script>
<button onclick="oscillator.start();">play</button>
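One caveat to add to the example above (an assumption on my part, based on current autoplay policies rather than anything in the original answer): on recent iOS Safari and Chrome, the AudioContext often starts out in the "suspended" state, and resuming it inside the gesture handler is the reliable pattern. A sketch, where playFromGesture is a hypothetical helper:

```javascript
// Hypothetical helper: resume a suspended context from a user gesture,
// then start the oscillator. Modern autoplay policies require resume()
// (or context creation) to happen inside a user-gesture handler.
function playFromGesture(ctx, osc) {
  if (ctx.state === 'suspended') {
    ctx.resume(); // returns a Promise; fire-and-forget is fine here
  }
  osc.start();
}

// Usage with the example above:
// <button onclick="playFromGesture(audioCtx, oscillator)">play</button>
```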
My own experience has been that sometimes the Web Audio API works on iPhones, sometimes it doesn't. Here is a page that worked 5 minutes ago on my iPhone 6s; 1 minute ago it didn't work; now it does again!
http://www.stephenandrewtaylor.net/dna-sonification/
Here is another one that works intermittently; it worked 2 minutes ago and now it doesn't (the animations work, there is just no audio).
http://www.stephenandrewtaylor.net/lie-sonification/
It might have to do with how many tabs are open in Safari; you could try closing some of your open tabs (right now I have 5 tabs open, including the lie-sonification page, which worked 2 minutes ago but now doesn't). I am also a novice programmer, and I'm sure there are much better ways I could be writing the code.
I'm running into an issue with WebAudio on Chrome for Android.
I'm experiencing this on a Samsung Galaxy S3 (GT-I9300) with:
Chrome version 44.0.2403.133
Android 4.3.0
Here is the code I'm using to try and isolate the issue:
var audioContext;
if (window.AudioContext) {
  audioContext = new AudioContext();
}

var startTime = Date.now();
var lastTrigger;

var gain = audioContext.createGain();
gain.gain.value = 1;
gain.connect(audioContext.destination);

var buttonTrigger = document.getElementById('trigger');
buttonTrigger.addEventListener('click', function(event) {
  var oscillator = audioContext.createOscillator();
  oscillator.type = "square";
  oscillator.frequency.value = 100 + (Math.cos(audioContext.currentTime) * 100);
  oscillator.connect(gain);
  oscillator.start(0);
  oscillator.stop(audioContext.currentTime + 0.1);
  lastTrigger = Date.now();
});

var timer = document.getElementById('timer');
setInterval(function() {
  if (lastTrigger) { timer.textContent = Date.now() - lastTrigger; }
}, 1000);
And here it is on jsfiddle
This simply creates an oscillator node and plays it when the button is clicked. On my phone, if I don't click the button for about a minute and a half to two minutes, I no longer get any sound.
There are no errors thrown.
Does anyone have any experience of this issue and a possible workaround?
This issue originally appeared in a much larger app that uses Phaser to play sounds from an m4a file, so it is not solely to do with the oscillator.
UPDATE
According to the Chromium bug ticket, this issue has now been fixed.
After experiencing the same problem on Android, I found a better solution than playing a "dummy sound" every 30 seconds.
Just remember the time when you last played through your context:
var lastPlayed = new Date().getTime();
var audioSource = context.createBufferSource();
audioSource.connect( context.destination );
audioSource.buffer = sb;
audioSource.start( 0 );
The next time you play a sample/sound, just check how much time has passed and reset the AudioContext if necessary (remember to update lastPlayed whenever you play something):
if (new Date().getTime() - lastPlayed > 30000) { // more than 30 s since the last playback
  context.close();
  context = new AudioContext();
}
For Android, this works like a charm.
I think what you're seeing is an auto-shutdown of Web Audio when there's no sound for a while. What happens if you click the button a second time, a second or so after the first? (Web Audio can take some time (order of tens of milliseconds, at least) to restart.)
The suspend()/resume() methods, and looking at the context.state, would be helpful here.
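A small sketch of that suggestion (ensureRunning is a hypothetical name): check context.state before triggering a sound, and resume() if the browser has suspended the context:

```javascript
// Hypothetical helper: make sure the context is running before playing.
// context.state is one of "suspended", "running", or "closed".
function ensureRunning(ctx) {
  if (ctx.state === 'suspended') {
    return ctx.resume(); // resume() returns a Promise
  }
  return Promise.resolve();
}

// Usage (playSomething is a stand-in for your own trigger code):
// ensureRunning(audioContext).then(() => playSomething());
```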
@RaymondToy's comment answers this question. There is a bug in Chrome on Android (at least for the Samsung Galaxy S3/S4): WebAudio stops playing sounds after a period of inactivity, where inactivity is essentially silence.
The only workaround I can find is to play some kind of sound at intervals. I experimented with playing a sound every 30 seconds, and that stopped the problem.
I also tried playing various kinds of silent noise (a silent audio buffer, the silent part of an m4a audio file, a muted sound); none of these solved the problem.
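For reference, the "play a sound every 30 seconds" workaround could be sketched like this. The function names and the gain/interval values are my own guesses; since fully silent audio reportedly didn't help, the blip here is very quiet but nonzero:

```javascript
// Hypothetical keep-alive: play a short, very quiet (but nonzero) blip
// so the browser never sees a long stretch of silence.
function playBlip(ctx) {
  const osc = ctx.createOscillator();
  const g = ctx.createGain();
  g.gain.value = 0.001;             // barely audible, but not silent
  osc.connect(g);
  g.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.05); // 50 ms blip
}

function startKeepAlive(ctx, intervalMs = 30000) {
  return setInterval(() => playBlip(ctx), intervalMs);
}
// Stop later with clearInterval() on the returned timer.
```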
I'm trying to apply a pitch vibrato to an AudioBufferSource using an oscillator.
var source = context.createBufferSource();
source.connect(context.destination);
source.buffer = buffer;
source.loop = true;
source.start(0);
// None of the below seems to have any effect in Chrome
var osc = context.createOscillator();
osc.type = "sine";
osc.frequency.value = 0.5;
osc.start(0);
var osc_gain = context.createGain();
osc_gain.connect(source.playbackRate);
osc.connect(osc_gain);
// osc_gain.gain.value = 0.1 doesn't work
osc_gain.gain.setValueAtTime(0.1, 0);
Here's a fiddle: http://jsfiddle.net/HRkcE/12/
The oscillator doesn't seem to have any effect in Chrome, but is working in Firefox (once I figured out that setting osc_gain.gain.value directly doesn't work).
Am I doing anything wrong to make it not work in Chrome?
No, you're not doing anything wrong. Blink has a bug where we do not support this, which was reported to me by someone else just last week, and I filed: https://code.google.com/p/chromium/issues/detail?id=311284. We will get that fixed.
In the meantime, it's actually relatively easy to do a vibrato effect on ANY audio connection (not just BufferSourceNodes) by using an LFO to drive oscillations on the delayTime of a DelayNode - check out the "Vibrato" effect I added to the end of http://webaudiodemos.appspot.com/input/index.html; the node chain I set up to do it is the vibrato subgraph creation routine at https://github.com/cwilso/Audio-Input-Effects/blob/master/js/effects.js#L478.
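A rough JavaScript sketch of the subgraph described there (createVibrato is my own name, and the rate/depth/base-delay values are illustrative guesses):

```javascript
// LFO -> gain (depth) -> delay.delayTime: the oscillator swings the
// delay time around its base value, producing a pitch vibrato on any
// audio routed through the delay node.
function createVibrato(ctx, rateHz = 5, depthSec = 0.005) {
  const delay = ctx.createDelay();
  delay.delayTime.value = 0.03;   // base delay the modulation swings around

  const lfo = ctx.createOscillator();
  lfo.frequency.value = rateHz;   // vibrato rate

  const depth = ctx.createGain();
  depth.gain.value = depthSec;    // scales the LFO's -1..1 output

  lfo.connect(depth);
  depth.connect(delay.delayTime); // AudioParam modulation
  lfo.start();

  return delay; // route: source -> delay -> destination
}
```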
Every time I try to run my code with a feedback delay, my Chrome browser crashes and I get that blue screen saying:
"Aw, Snap!
Something went wrong while displaying this webpage. To continue, reload or go to another page."
My code uses this kind of structure:
//Create any kind of input (only to test if it works or not);
var oscillator = context.createOscillator();
//Create the delay node and the gain node used on the feedback
var delayNode = context.createDelay();
var feedback = context.createGain();
//Setting the feedback gain
feedback.gain.value = 0.5;
//Make the connections
oscillator.connect(context.destination);
oscillator.connect(delayNode);
delayNode.connect(feedback);
feedback.connect(delayNode);
delayNode.connect(context.destination);//This is where it crashes
Did you put panner nodes after the delay node?
I had a similar problem. In my case, it turned out to be a panner node bug.
After debugging for hours, I found this page:
http://lists.w3.org/Archives/Public/public-audio-dev/2013Oct/0000.html
It says that connecting panner nodes after a delay node causes the problem.
If your code is actually like this, it will crash:
var pannerNode = context.createPanner();
delayNode.connect(pannerNode);
pannerNode.connect(context.destination);
My program was like this code. When I removed the panner node, it worked fine.
So if you're in the same situation, you can avoid the problem by writing the panner yourself.
Here is a sample I wrote for my program (in CoffeeScript):
class @Panner
  constructor: (@ctx) ->
    @in = @ctx.createChannelSplitter(2)
    @out = @ctx.createChannelMerger(2)
    @l = @ctx.createGain()
    @r = @ctx.createGain()
    @in.connect(@l, 0)
    @in.connect(@r, 1)
    @l.connect(@out, 0, 0)
    @r.connect(@out, 0, 1)
    @setPosition(0.5)

  connect: (dst) -> @out.connect(dst)

  setPosition: (@pos) ->
    @l.gain.value = @pos
    @r.gain.value = 1.0 - @pos
I hope this helps you.
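For anyone not using CoffeeScript, here's a rough JavaScript port of the same workaround (createSimplePanner is my own name; the position semantics match the original sample):

```javascript
// Manual stereo panner: a channel splitter feeds two gain nodes (left
// and right) that are merged back together, avoiding PannerNode entirely.
function createSimplePanner(ctx) {
  const input = ctx.createChannelSplitter(2);
  const output = ctx.createChannelMerger(2);
  const l = ctx.createGain();
  const r = ctx.createGain();

  input.connect(l, 0);     // splitter output 0 (left) -> left gain
  input.connect(r, 1);     // splitter output 1 (right) -> right gain
  l.connect(output, 0, 0); // left gain -> merger input 0
  r.connect(output, 0, 1); // right gain -> merger input 1

  const panner = {
    input,
    connect(dst) { output.connect(dst); },
    setPosition(pos) {     // pos in [0, 1], as in the original
      l.gain.value = pos;
      r.gain.value = 1.0 - pos;
    },
  };
  panner.setPosition(0.5);
  return panner;
}
```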