Calculating the AnalyserNode's smoothingTimeConstant - javascript

I am using the Web Audio API to display a visualization of the audio being played. I have an <audio> element that is controlling the playback; I then hook it up to the Web Audio API by creating a MediaElementSource node from the <audio> element. That is then connected to a GainNode and an AnalyserNode. The AnalyserNode's smoothingTimeConstant is set to 0.6, and the GainNode is connected to the AudioContext.destination.
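In skeleton form the setup looks roughly like this (the element lookup and the exact connection order are illustrative, not the exact code):

var audioCtx = new AudioContext();
var audioEl = document.querySelector('audio'); // assumed lookup
var source = audioCtx.createMediaElementSource(audioEl);
var gainNode = audioCtx.createGain();
var analyser = audioCtx.createAnalyser();
analyser.smoothingTimeConstant = 0.6;
source.connect(gainNode); // wiring assumed; the text above only
source.connect(analyser); // says which nodes are connected
gainNode.connect(audioCtx.destination);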
I then call my audio processing function: onAudioProcess(). That function will continually call itself using:
audioAnimation = requestAnimationFrame(onAudioProcess);
The function uses the AnalyserNode's getByteFrequencyData to sample the audio, then loops through the (now populated) Uint8Array and draws each frequency magnitude onto the <canvas> element's 2d context. This all works fine.
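In outline, onAudioProcess looks something like this (the canvas names and bar-drawing math are illustrative, not the exact code):

var freqData = new Uint8Array(analyser.frequencyBinCount);

function onAudioProcess() {
  audioAnimation = requestAnimationFrame(onAudioProcess);
  analyser.getByteFrequencyData(freqData);

  canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
  var barWidth = canvas.width / freqData.length;
  for (var i = 0; i < freqData.length; i++) {
    var barHeight = (freqData[i] / 255) * canvas.height;
    canvasCtx.fillRect(i * barWidth, canvas.height - barHeight, barWidth, barHeight);
  }
}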
My issue is that when you pause the <audio> element, my onAudioProcess function continues to loop (by requesting animation frames on itself) which is needlessly eating up CPU cycles. I can cancelAnimationFrame(audioAnimation) but that leaves the last-drawn frequencies on the canvas. I can resolve that by also calling clearRect on the canvas's 2d context, but it looks very odd compared to just letting the audio processing loop continue (which slowly lowers each bar to the bottom of the canvas because of the smoothingTimeConstant).
So what I ended up doing was setting a timeout when the <audio> is paused, prior to canceling the animation frame. Doing this I was able to save CPU cycles when no audio was playing AND I was still able to maintain the smooth lowering of the frequency bars drawn on the <canvas>.
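Roughly like this (decayMs is the unknown; calculating it is exactly the question below):

var pauseTimeout;

audioEl.addEventListener('pause', function () {
  // Let the bars fall for decayMs before stopping the loop
  pauseTimeout = setTimeout(function () {
    cancelAnimationFrame(audioAnimation);
  }, decayMs);
});

audioEl.addEventListener('play', function () {
  clearTimeout(pauseTimeout);
  audioAnimation = requestAnimationFrame(onAudioProcess);
});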
MY QUESTION: How do I accurately calculate the number of milliseconds it takes for a frequency magnitude of 255 to hit 0 (the range is 0-255) based on the AnalyserNode's smoothingTimeConstant value so that I can properly set the timeout to cancel the animation frame?

Based on my reading of the spec, I'd think you'd figure it out like this:
var val = 255
  , smooth = 0.6
  , sampl = 48000
  , i = 0
  , ms;

for ( ; val > 0.001; i++ ) {
  val = ( val + val * smooth ) / 2;
}

ms = ( i / sampl * 1000 );
The problem is that with this kind of averaging, you never really get all the way down to zero - so the loop condition is kind of arbitrary. You can make that number smaller and as you'd expect, the value for ms gets larger.
Anyway, I could be completely off-base here. But a quick look through the actual Chromium source code seems to sort of confirm that this is how it works. Although I'll be the first to admit my C++ is pretty bad.
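For what it's worth, a closed-form estimate is possible under two assumptions (which is all they are): the spec's smoothing (smoothed = tau * previous + (1 - tau) * current magnitude) is applied once per getByteFrequencyData() call, i.e. once per animation frame, and byte values map linearly onto [minDecibels, maxDecibels]:

// With silent input, the linear magnitude decays by a factor of tau per
// frame, i.e. by -20 * log10(tau) dB per frame. Byte values are linear in
// dB, so a full-scale bar (255) has to traverse the whole dB range.
function estimateDecayMs(analyser, fps) {
  fps = fps || 60; // assumed animation frame rate
  var tau = analyser.smoothingTimeConstant; // 0.6 in the question
  var dBPerFrame = -20 * Math.log10(tau); // ~4.44 dB at tau = 0.6
  var dBRange = analyser.maxDecibels - analyser.minDecibels; // 70 dB by default
  var frames = dBRange / dBPerFrame; // ~15.8 frames
  return Math.ceil(frames) * (1000 / fps); // ~267 ms at 60 fps
}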

Related

PixiJS Fixed-step loop - Stutter / Jitter

I've been facing a strange issue that I don't understand how to fix.
I'm trying to create the following multiplayer game structure:
The server running at 20-30 fps
The client logic loop at the same FPS as the server
The client render loop
I'm using PixiJS for UI and here is where I got stuck.
(I've opened a thread here as well)
And I have a demo here: https://playcode.io/1045459
Ok, now let's explain the issue!
private update = () => {
  let elapsed = PIXI.Ticker.shared.elapsedMS
  if (elapsed > 1000) elapsed = this.frameDuration
  this.lag += elapsed

  // Update the frame if the lag counter is greater than or
  // equal to the frame duration
  while (this.lag >= this.frameDuration) {
    // Update the logic
    console.log(`[Update] FPS ${Math.round(PIXI.Ticker.shared.FPS)}`)
    this.updateInputs(now())
    // Reduce the lag counter by the frame duration
    this.lag -= this.frameDuration
  }

  // Render elements in-between states
  const lagOffset = this.lag / this.frameDuration
  this.interpolateSpaceships(lagOffset)
}
In the client loop I keep track of both logic & render parts, limiting the logic one at 20 FPS. It all works "cookies and clouds" until the browser has a sudden frame-rate drop from 120fps to 60fps. Based on my investigation and a nice & confusing spreadsheet that I've put together, when the frame rate drops the "player" moves 2x as far (e.g. 3.3 instead of 1.66). On paper it's normal and the math is correct, BUT this creates a small bounce / jitter / stutter or whatever this thing is called.
In the demo that I've created in playcode it's not visible. My assumption is that the code is too basic and the framerate never drops.
Considering that the math and the algorithm are correct (of which I'm not yet sure), I've turned my eyes to other parts that might affect this. I'm using pixi-viewport to follow the character. Could it be that the camera following the character creates this bounce?
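For reference, the follow setup is just a call to pixi-viewport's follow plugin, roughly like this (the option values here are illustrative):

viewport.follow(playerSprite, {
  speed: 0 // 0 locks the camera directly to the target
});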
Does anyone have experience writing such a game loop?
Update:
Okkkk, mindblowing result. I just found out that this happens even with the simplest version of the game loop ever. Just updating x = x + speed * delta every frame.
For the same reason. Sudden drops in FPS.
Ok, I've found the solution. Will post it here as there is not a lot of info about it. The solution is to smooth out sudden fps drops over multiple frames. Easy right? 😅
const ticker = new PIXI.Ticker();

// The number of frames to use for smoothing
const smoothingFrames = 10;

// The smoothed frame duration
let smoothedFrameDuration = 0;

ticker.add((deltaTime) => {
  // Fold the current frame duration into the running average
  smoothedFrameDuration = (smoothedFrameDuration * (smoothingFrames - 1) + deltaTime) / smoothingFrames;

  // Update the game logic here
  // Use the smoothed frame duration instead of the raw deltaTime value
});

ticker.start();
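For what it's worth, this is an exponential moving average (a one-pole low-pass on the frame duration): each new deltaTime only contributes 1/smoothingFrames of its value, so a sudden 2x frame-duration spike gets spread across roughly the next ten frames instead of landing in a single logic step.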

How do we schedule a series of oscillator nodes that play for a fixed duration, with a smooth transition from one node's ending to the next?

We are trying to map an array of numbers to sound and are following the approach mentioned in this ‘A Tale of 2 Clocks’ article to schedule oscillator nodes to play in the future. Each oscillator node exponentially ramps to a frequency value corresponding to the data in a fixed duration (this.pointSonificationLength). However, there’s a clicking noise as each node stops, as referenced in this article by Chris Wilson. The example there talks about smoothly stopping a single oscillator, but we are unable to directly use this approach to smooth the transition from one oscillator node to the next.
To clarify some of the values, pointTime refers to the node’s number in the order starting from 0, i.e. as the nodes were scheduled, they’d have pointTime = 0, 1, 2, and so forth. this.pointSonificationLength is the constant used to indicate how long the node should play for.
The first general approach was to decrease the gain at the end of the node so the change is almost imperceptible, as documented in the article above. We tried both methods: setTargetAtTime, and a combination of exponentialRampToValueAtTime and setValueAtTime, but neither removed the click.
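Roughly, the two fade-outs we tried look like this (the amp gain node and stopTime variable are placeholders; the 100 ms window matches the attempt described below):

// 1) Time-constant approach: let the gain decay toward 0 shortly before the stop
amp.gain.setTargetAtTime(0, stopTime - 0.1, 0.015);

// 2) Explicit ramp: pin the current value, then ramp down (exponential ramps
// cannot reach 0 exactly, hence the small positive target)
amp.gain.setValueAtTime(amp.gain.value, stopTime - 0.1);
amp.gain.exponentialRampToValueAtTime(0.0001, stopTime);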
We were able to remove the click by changing our approach: we scheduled the gain to start transitioning to 0 at 100 ms before the node ends.
However, when we scheduled more than one node, there was now a pause between each node. If we changed the function to start transitioning at 10ms, the gap was removed, but there was still a quiet click.
Our next approach was to have each node fade in as well as fade out. We added this.delay as a constant for the amount of time each transition in and out takes.
Below is where we’re currently at in the method to schedule an oscillator node for a given time with a given data point. The actual node scheduling is contained in another method inside the class.
private scheduleOscillatorNode(dataPoint: number, pointTime: number) {
  let osc = this.audioCtx.createOscillator()
  let amp = this.audioCtx.createGain()

  // Ramp the frequency from the previous point's value to this point's value
  osc.frequency.value = this.previousFrequencyOfset
  osc.frequency.exponentialRampToValueAtTime(dataPoint, pointTime + this.pointSonificationLength)
  osc.onended = () => this.handleOnEnded()
  osc.connect(amp).connect(this.audioCtx.destination)

  let nodeStart = pointTime + this.delay * this.numNode;

  // Fade in from near-silence, hold at full gain, then fade back out
  amp.gain.setValueAtTime(0.00001, nodeStart);
  amp.gain.exponentialRampToValueAtTime(1, nodeStart + this.delay);
  amp.gain.setValueAtTime(1, nodeStart + this.delay + this.pointSonificationLength);
  amp.gain.exponentialRampToValueAtTime(0.00001, nodeStart + this.delay * 2 + this.pointSonificationLength);

  osc.start(nodeStart)
  osc.stop(nodeStart + this.delay * 2 + this.pointSonificationLength)

  this.numNode++;
  this.audioQueue.enqueue(osc)
  // code to keep track of playback
}
We notice a slight difference between the values we calculate manually and the values we see when we log the time values with console.log, but the difference is too small to be perceivable. For example, instead of ending at 6.7 seconds, a node would end at 6.699999999999999 seconds, or a node meant to end at 5.6 seconds would actually end at 5.6000000000000005 seconds. As a result, we don't believe this floating-point drift is what causes the clicking noise.
Is there a way to account for these delays and schedule nodes such that the transition occurs smoothly? Alternatively, is there a different approach that we need to use to make these transitions smooth? Any suggestions and pointers to code samples or other helpful resources would be of great help!

Animating HTML Video with requestAnimationFrame

I would like to use requestAnimationFrame to play an HTML <video> element. This is useful because it offers greater control over the playback (e.g. playing certain sections, controlling the speed, etc.). However, I'm running into an issue with the following approach:
function playAnimation() {
  window.cancelAnimationFrame(animationFrame);
  var duration = video.seekable.end(0);
  var start = null;
  var step = function (timestamp) {
    if (!start) start = timestamp;
    const progress = timestamp - start;
    const time = progress / 1000;
    video.currentTime = time;
    console.log(video.currentTime);
    if (time > duration) {
      start = null;
    }
    animationFrame = window.requestAnimationFrame(step);
  };
  animationFrame = window.requestAnimationFrame(step);
}
In Google Chrome, the video plays a little bit but then freezes. In Firefox it freezes even more. The console shows that the video's currentTime is being updated as expected, but it's not rendering the new time. Additionally, in the instances when the video is frozen, the ontimeupdate event does not fire, even though the currentTime is being updated.
A simple demo can be found here: https://codepen.io/TGordon18/pen/bGVQaXM
Any idea what's breaking?
Update:
Interestingly, controlling/throttling the animationFrame actually helps in Firefox.
setTimeout(() => {
  animationFrame = window.requestAnimationFrame(step);
}, 1000 / FPS);
This doesn't seem like the right approach, though.
Seeking the video is usually slower than one frame of requestAnimationFrame. One ideal frame of requestAnimationFrame is about 16.6 ms (60 FPS), but the duration of the seek depends on how the video is encoded and where in the video you want to seek. When you set video.currentTime in the step function and then do the same thing on the next frame, the previous seek operation most likely has not finished yet. As you keep setting video.currentTime over and over, the browser still tries to execute the old tasks, until it starts freezing because it is overwhelmed by the number of tasks. This may also influence how it fires events like timeupdate.
The solution might be to explicitly wait for the seek to finish and only then request the next animation frame.
video.onseeked = () => {
  window.requestAnimationFrame(step);
};
https://codepen.io/mradionov/pen/vYNvyym?editors=0010
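A minimal sketch of that change (the 1/30-second step size is an assumption, reusing video and step from the question):

var FRAME_STEP = 1 / 30; // seconds to advance per seek (assumed)

function step() {
  if (video.currentTime + FRAME_STEP <= video.seekable.end(0)) {
    video.currentTime += FRAME_STEP; // kicks off an asynchronous seek
  }
}

video.onseeked = () => {
  // Only schedule the next step once the previous seek has finished
  window.requestAnimationFrame(step);
};

step(); // start the loop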
Nevertheless, you most likely won't be able to achieve the same smooth real-time playback as the video tag itself, because of how the seeking operation works, unless you are willing to drop some frames when the previous one is still not ready.
Basically, storing an entire image for each video frame is very expensive. One of the core video compression techniques is to store the full video frame only at some interval, like every 1 second (these are called key-frames or I-frames). The rest of the frames in between store only the difference from the previous frame (P-frames), which is pretty small compared to an entire image. When the video plays as usual, it already has the previous frame in the buffer; the only thing it needs to do is apply the difference for the next frame. But when you make a seek operation, there is no previous frame to calculate the difference from. The video decoder has to find the nearest key-frame with the full image and then apply the differences for all of the following frames, up until the point it finally reaches the frame you wanted to seek to.
If you use my suggestion to wait for the previous seek operation to complete before requesting the next one, you will see that the video starts smooth, but as it gets closer to 2.5 seconds it will stutter more and more, until it reaches 2.5s+ and becomes smooth again. Then it will again start stuttering up to the 5s point, and become smooth after 5s+. That's because the key-frame interval for this video is 2.5 seconds, and the farther the timestamp you seek to is from a key-frame, the longer the seek takes, because more frames need to be decoded.

Canvas animation: Benefits of separating update and render loop?

I am creating some simple user controlled simulations with JavaScript and the canvas element.
Currently I have a separate update loop (using setTimeout) and render loop (using requestAnimationFrame).
Updates are scaled using a time delta, so consistency is not critical in that sense. The reason is rather that I don't want any hiccups in the render loop to swallow user input or otherwise make the simulation less responsive.
The update loop will likely run at a lower (but hopefully fixed) frame rate.
Is this a good practice in JavaScript, or are there any obvious pitfalls? My hope is that the update loop will receive priority, but my understanding of the event loop might be a bit simplistic. (In the worst case, the behaviour differs between VM implementations.)
Example code:
function update() {
  // Update simulation and process input
  setTimeout(update, 1000 / UPDATE_RATE);
}

function render() {
  // Render simulation onto canvas
  requestAnimationFrame(render);
}

function init() {
  update();
  render();
}
These concerns have been addressed in Game Development with Three.js by Isaac Sukin. It covers both the case of low rendering frame rates, which was the primary concern of this question:
[...] at low frame rates and high speeds, your object will be moving large distances every frame, which can cause it to do strange things such as move through walls.
It also covers the converse case, with high rendering frame rates, and relatively slow physics computations:
At high frame rates, computing your physics might take longer than the amount of time between frames, which will cause your application to freeze or crash.
In addition, it also addresses the concept of determinism, which becomes important in multiplayer games, and games that rely on it for things like replays or anti-cheat mechanisms:
Additionally, we would like perfect reproducibility. That is, every time we run the application with the same input, we would like exactly the same output. If we have variable frame deltas, our output will diverge the longer the program runs due to accumulated rounding errors, even at normal frame rates.
The practice of running multiple loops is advised against, as it can have severe and hard-to-debug performance implications. Instead, an approach is taken where time deltas are accumulated in the rendering loop until a fixed, preset size is reached, at which point they are passed to the physics loop for processing:
A better solution is to separate physics update time-steps from frame refresh time-steps. The physics engine should receive fixed-size time deltas, while the rendering engine should determine how many physics updates should occur per frame.
Here's some example code, showing a minimum implementation in JavaScript:
var INVERSE_MAX_FPS = 1 / 60; // fixed physics time-step, in seconds
var frameDelta = 0;
var lastUpdate = Date.now();

function render() {
  // Update and render simulation onto canvas
  requestAnimationFrame(render);

  // Accumulate elapsed time since the last frame (Date.now() is in
  // milliseconds, so convert to seconds to match INVERSE_MAX_FPS)
  var now = Date.now();
  frameDelta += (now - lastUpdate) / 1000;
  lastUpdate = now;

  // Run as many physics updates as we have missed
  while (frameDelta >= INVERSE_MAX_FPS) {
    update();
    frameDelta -= INVERSE_MAX_FPS;
  }
}

function init() {
  render();
}
With this code, no matter how long it has been since the last rendered frame, as many physics updates as required will be processed. Any residual time delta is carried over to the next frame.
Note that the target maximum FPS might need to be adjusted depending on how slow the simulation runs.
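For example, if a single physics update regularly takes longer than 1/60 of a second, raising INVERSE_MAX_FPS to 1 / 30 halves the number of updates per rendered second, at the cost of a coarser simulation step.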

Recorder.js calculate and offset recording for latency

I'm using Recorder.js to record audio from Google Chrome desktop and mobile browsers. In my specific use case I need to record exactly 3 seconds of audio, starting and ending at a specific time.
Now I know that when recording audio, your soundcard cannot work in realtime due to hardware delays, so there is always a memory buffer which allows you to keep up recording without hearing jumps/stutters.
Recorder.js allows you to configure the bufferLen variable exactly for this, while sampleRate is taken automatically from the audio context object. Here is a simplified version of how it works:
var context = new AudioContext();
var recorder;

navigator.getUserMedia({ audio: true }, function (stream) {
  recorder = new Recorder(context.createMediaStreamSource(stream), {
    bufferLen: 4096
  });
});
function recordLoop() {
  recorder.record();
  window.setTimeout(function () {
    recorder.stop();
  }, 3000);
}
The issue I'm facing is that record() does not offset for the buffer latency, and neither does stop(). So instead of getting a three-second sound, it's 2.97 seconds and the start is cut off.
This means my recordings don't start in the same place, and also when I loop them, the loops are different lengths depending on your device latency!!
There are two potential solutions I see here:
Adjust Recorder.js code to offset the buffer automatically against your start/stop times (maybe add new startSync/stopSync functions)
Calculate the latency and create two offset timers to start and stop Recorder.js at the correct points in time.
I'm trying solution 2, because solution 1 requires knowledge of buffer arrays which I don't have :( I believe the calculation for latency is:
var bufferSize = 4096;
var sampleRate = 44100;
var latency = (bufferSize / sampleRate) * 2; // 0.18575963718820862 secs
However when I run these calculations in a real test I get:
var duration = 2.972154195011338;  // secs
var latency = 0.18575963718820862; // secs
var total = duration + latency;    // 3.1579138321995464 secs
Something isn't right, it doesn't make 3 seconds, and it's beginning to confuse me now! I've created a working fork of the Recorder.js demo with a log:
http://kmturley.github.io/Recorderjs/
Any help would be greatly appreciated. Thanks!
I'm a bit confused by your concern for the latency. Yes, it's true that the minimum possible latency is going to be related to the length of the buffer, but there are many other latencies involved. In any case, latency has nothing to do with the recording duration, which seems to me to be what your question is about.
If you want to record an exactly 3-second-long buffer at 44100, that is 44100 * 3 = 132,300 samples. The buffer size is 4096 samples, and the system is only going to record a whole multiple of that number. Given that, the closest you are going to get is to record either 32 or 33 complete buffers. This gives either 131072 samples (2.97 seconds) or 135168 samples (3.065 seconds).
You have a couple options here.
Choose a buffer length that evenly divides the sample rate, e.g. 11025. You can then record exactly 12 buffers (12 * 11025 = 132,300 samples).
Record slightly longer than the 3.0 seconds you need and then throw the extra 2868 samples away.
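A sketch of option 2, assuming Recorder.js's getBuffer callback delivers one Float32Array per channel:

var wanted = 3 * context.sampleRate; // 132,300 samples at 44.1 kHz

recorder.getBuffer(function (buffers) {
  var trimmed = buffers.map(function (channel) {
    return channel.subarray(0, wanted); // drop the extra 2,868 samples
  });
  // ...encode or play back `trimmed` as needed
});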
