How to record HTML canvas only when stream.requestFrame() gets called, and save result? [duplicate] - javascript

I want to record a video from an HTML <canvas> element at a specific frame rate.
I am using CanvasCaptureMediaStream with canvas.captureStream(fps) and also have access to the video track via const track = stream.getVideoTracks()[0], so I can call track.requestFrame() to write frames to the output video buffer via MediaRecorder.
I want to precisely capture one frame at a time and then change the canvas content. Changing the canvas content can take some time (as images need to be loaded, etc.), so I cannot capture the canvas in real time.
Some changes on the canvas would take 500ms of real time, so this also needs to be adjusted to render one frame at a time.

The MediaRecorder API is meant to record live streams; editing is not what it was designed for, and it doesn't do it very well, to be honest...
The MediaRecorder itself has no concept of frame rate; this is normally defined by the MediaStreamTrack. However, the CanvasCaptureMediaStreamTrack doesn't really make it clear what its frame rate is.
We can pass a parameter to HTMLCanvas.captureStream(), but this only tells the maximum number of frames we want per second; it's not really an fps parameter.
Also, even if we stop drawing on the canvas, the recorder will still continue to extend the duration of the recorded video in real time (I think that technically only a single long frame is recorded in this case).
So... we're gonna have to hack around...
One thing we can do with the MediaRecorder is to pause() and resume() it.
It then sounds quite easy to pause before doing the long drawing operation and to resume right after it's been made? Yes... and no, not that easy either...
Once again, the frame rate is dictated by the MediaStreamTrack, but this MediaStreamTrack cannot be paused.
Well, actually there is one way to pause a special kind of MediaStreamTrack, and luckily I'm talking about CanvasCaptureMediaStreamTracks.
When we call our capture-stream with a parameter of 0, we basically have manual control over when new frames are added to the stream.
So here we can synchronize both our MediaRecorder and our MediaStreamTrack to whatever frame rate we want.
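For instance, a minimal sketch of that manual control (just the capture side, not the recorder):
const stream = canvas.captureStream(0); // 0 = no automatic frames
const track = stream.getVideoTracks()[0];
// later, whenever the canvas content is ready:
track.requestFrame(); // pushes exactly one new frame into the stream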
The basic workflow is
await the_long_drawing_task;
resumeTheRecorder();
writeTheFrameToStream(); // track.requestFrame();
await wait( time_per_frame );
pauseTheRecorder();
Doing so, the recorder is awake only for the per-frame time we decided, and a single frame is passed to the MediaStream during this time, effectively mocking a constant-FPS drawing as far as the MediaRecorder is concerned.
But as always, hacks in this still-experimental area come with a lot of browser weirdness, and the following demo actually only works in current Chrome...
For whatever reason, Firefox will always generate files with twice as many frames as requested, and it will also occasionally prepend a long first frame...
Also to be noted, Chrome has a bug where it will update the canvas stream at drawing time, even though we initiated this stream with a frameRequestRate of 0. This means that if you start drawing before everything is ready, or if the drawing on your canvas itself takes a long time, then our recorder will record half-baked frames that we didn't ask for.
To work around this bug, we thus need to use a second canvas, used only for the streaming. All we'll do on that canvas is drawImage the source one, which will always be a fast enough operation not to face that bug.
class FrameByFrameCanvasRecorder {
constructor(source_canvas, FPS = 30) {
this.FPS = FPS;
this.source = source_canvas;
const canvas = this.canvas = source_canvas.cloneNode();
const ctx = this.drawingContext = canvas.getContext('2d');
// we need to draw something on our canvas
ctx.drawImage(source_canvas, 0, 0);
const stream = this.stream = canvas.captureStream(0);
const track = this.track = stream.getVideoTracks()[0];
// Firefox still uses a non-standard CanvasCaptureMediaStream
// instead of CanvasCaptureMediaStreamTrack
if (!track.requestFrame) {
track.requestFrame = () => stream.requestFrame();
}
// prepare our MediaRecorder
const rec = this.recorder = new MediaRecorder(stream);
const chunks = this.chunks = [];
rec.ondataavailable = (evt) => chunks.push(evt.data);
rec.start();
// we need to be in 'paused' state
waitForEvent(rec, 'start')
.then((evt) => rec.pause());
// expose a Promise that resolves once the recorder is paused
this._init = waitForEvent(rec, 'pause');
}
async recordFrame() {
await this._init; // we have to wait for the recorder to be paused
const rec = this.recorder;
const canvas = this.canvas;
const source = this.source;
const ctx = this.drawingContext;
if (canvas.width !== source.width ||
canvas.height !== source.height) {
canvas.width = source.width;
canvas.height = source.height;
}
// start our timer now so whatever happens in between is not taken into account
const timer = wait(1000 / this.FPS);
// wake up the recorder
rec.resume();
await waitForEvent(rec, 'resume');
// draw the current state of source on our internal canvas (triggers requestFrame in Chrome)
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.drawImage(source, 0, 0);
// force write the frame
this.track.requestFrame();
// wait until our frame-time elapsed
await timer;
// sleep recorder
rec.pause();
await waitForEvent(rec, 'pause');
}
async export() {
this.recorder.stop();
this.stream.getTracks().forEach((track) => track.stop());
await waitForEvent(this.recorder, "stop");
return new Blob(this.chunks);
}
}
///////////////////
// how to use:
(async() => {
const FPS = 30;
const duration = 5; // seconds
let x = 0;
let frame = 0;
const ctx = canvas.getContext('2d');
ctx.textAlign = 'right';
draw(); // we must have drawn on our canvas context before creating the recorder
const recorder = new FrameByFrameCanvasRecorder(canvas, FPS);
// draw one frame at a time
while (frame++ < FPS * duration) {
await longDraw(); // do the long drawing
await recorder.recordFrame(); // record at constant FPS
}
// now all the frames have been drawn
const recorded = await recorder.export(); // we can get our final video file
vid.src = URL.createObjectURL(recorded);
vid.onloadedmetadata = (evt) => vid.currentTime = 1e100; // workaround https://crbug.com/642012
download(vid.src, 'movie.webm');
// Fake long drawing operations that make real-time recording impossible
function longDraw() {
x = (x + 1) % canvas.width;
draw(); // this triggers a bug in Chrome
return wait(Math.random() * 300)
.then(draw);
}
function draw() {
ctx.fillStyle = 'white';
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.fillStyle = 'black';
ctx.fillRect(x, 0, 50, 50);
ctx.fillText(frame + " / " + FPS * duration, 290, 140);
};
})().catch(console.error);
<canvas id="canvas"></canvas>
<video id="vid" controls></video>
<script>
// Some helpers
// Promise based timer
function wait(ms) {
return new Promise(res => setTimeout(res, ms));
}
// implements a sub-optimal monkey-patch for requestPostAnimationFrame
// see https://stackoverflow.com/a/57549862/3702797 for details
if (!window.requestPostAnimationFrame) {
window.requestPostAnimationFrame = function monkey(fn) {
const channel = new MessageChannel();
channel.port2.onmessage = evt => fn(evt.data);
requestAnimationFrame((t) => channel.port1.postMessage(t));
};
}
// Promisifies EventTarget.addEventListener
function waitForEvent(target, type) {
return new Promise((res) => target.addEventListener(type, res, {
once: true
}));
}
// creates a downloadable anchor from url
function download(url, filename = "file.ext") {
const a = document.createElement('a');
a.textContent = a.download = filename;
a.href = url;
document.body.append(a);
return a;
}
</script>

I asked a similar question which has been linked to this one. In the meantime I came up with a solution which overlaps Kaiido's and which I think is worth reading.
I added two tricks:
I deferred the next render (see code), which fixes the problem of Firefox generating twice the number of frames
I stored an accumulated timing error to correct setTimeout's inaccuracies. I personally used it to tweak the progression of my render, for example to skip frames if there is a sudden latency, keeping the duration of the video close to the target duration. It is not enough to smooth out setTimeout, though.
const recordFrames = (onstop, canvas, fps=30) => {
const chunks = [];
// get Firefox to initialise the canvas
canvas.getContext('2d').fillRect(0, 0, 0, 0);
const stream = canvas.captureStream();
const recorder = new MediaRecorder(stream);
recorder.addEventListener('dataavailable', ({data}) => chunks.push(data));
recorder.addEventListener('stop', () => onstop(new Blob(chunks)));
const frameDuration = 1000 / fps;
const frame = (next, start) => {
recorder.pause();
api.error += Date.now() - start - frameDuration;
setTimeout(next, 0); // helps Firefox record the right frame duration
};
const api = {
error: 0,
init() {
recorder.start();
recorder.pause();
},
step(next) {
recorder.resume();
setTimeout(frame, frameDuration, next, Date.now());
},
stop: () => recorder.stop()
};
return api;
}
How to use:
const fps = 30;
const duration = 5000;
const animation = Something;
const videoOutput = blob => {
const video = document.createElement('video');
video.src = URL.createObjectURL(blob);
document.body.appendChild(video);
}
const recording = recordFrames(videoOutput, canvas, fps);
const startRecording = () => {
recording.init();
animation.play();
};
// I am assuming you can call these from your library
const onAnimationRender = nextFrame => recording.step(nextFrame);
const onAnimationEnd = () => recording.step(recording.stop);
let now = 0;
const progression = () => {
    now = now + 1 + recording.error * fps / 1000;
    recording.error = 0;
    return now * 1000 / fps / duration;
}
I found this solution to be satisfying at 30fps in both Chrome and Firefox. I didn't experience the Chrome bugs mentioned by Kaiido and thus didn't implement anything to deal with them.

Related

timestamp of requestAnimationFrame is not reliable

I think the timestamp argument passed by requestAnimationFrame is computed wrongly (tested in Chrome and Firefox).
In the snippet below, I have a loop which takes approx. 300ms (you may have to tweak the number of loop iterations).
The calculated delta should always be larger than the printed 'duration' of the loop.
The weird thing is, sometimes it is and sometimes it isn't. Why?
let timeElapsed = 0;
let animationID;
const loop = timestamp => {
const delta = timestamp - timeElapsed;
timeElapsed = timestamp;
console.log('delta', delta);
// some heavy load for the frame
const start = performance.now();
let sum = 0;
for (let i = 0; i < 10000000; i++) {
sum += i ** i;
}
console.warn('duration', performance.now() - start);
animationID = requestAnimationFrame(loop)
}
animationID = requestAnimationFrame(loop);
setTimeout(() => {
cancelAnimationFrame(animationID);
}, 2000);
jsFiddle: https://jsfiddle.net/Kritten/ohd1ysmg/53/
Please note that the snippet stops after two seconds.
At least in Blink and Gecko, the timestamp passed to the rAF callback is that of the last VSync pulse.
In the snippet, the CPU and the event loop are locked for about 300ms, but the monitor still emits its VSync pulse at the same rate, in parallel.
When the browser is done doing this 300ms computation, it has to schedule a new animation frame.
At the next event-loop iteration it will check if the monitor has sent a new VSync pulse, and since it did (about 18 times on a 60Hz monitor), it will execute the new rAF callbacks almost instantly.
The timestamp passed to the rAF callback may thus indeed be that of a time prior to when your last callback ended, because the event loop got freed after the last VSync pulse.
One way to force this is to make your computation last just a bit more than a frame's duration: for instance, on a 60Hz monitor, VSync pulses will happen every 16.67ms, so if we lock the event loop for 16.7ms we are quite sure to get a timestamp delta less than the actual computation time:
let stopped = false;
let perf_elapsed = performance.now();
let timestamp_elapsed = 0;
let computation_time = 0;
let raf_id;
const loop = timestamp => {
const perf_now = performance.now();
const timestamp_delta = +(timestamp - timestamp_elapsed).toFixed(2);
timestamp_elapsed = timestamp;
const perf_delta = +(perf_now - perf_elapsed).toFixed(2);
perf_elapsed = perf_now;
const ERROR = timestamp_delta < computation_time;
if (computation_time) {
console.log({
computation_time,
timestamp_delta,
perf_delta,
ERROR
});
}
// some heavy load for the frame
const computation_start = performance.now();
const frame_duration = 1000 / frequency.value;
const computation_duration = (Math.ceil(frame_duration * 10) + 1) / 10; // add 0.1 ms
while (performance.now() - computation_start < computation_duration) {}
computation_time = performance.now() - computation_start;
raf_id = requestAnimationFrame(loop)
}
frequency.oninput = evt => {
cancelAnimationFrame( raf_id );
console.clear();
raf_id = requestAnimationFrame(loop);
setTimeout(() => {
cancelAnimationFrame( raf_id );
}, 2000);
};
frequency.oninput();
In case your monitor has a different frame rate than the common 60Hz, you can insert it here:
<input type="number" id="frequency" value="60" step="0.1">
So what to use between this timestamp and performance.now() is your call, I guess. The timestamp tells you when the frame began; performance.now() will tell you when your code executes; you could use both if needed. Even without such a big computation spanning over frames, you can very well have another task scheduled before yours that took a few ms to complete, or even a big CSS composition that gets performed after, and you have no real way to know.
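For illustration, a quick sketch logging both clocks side by side inside a single callback:
requestAnimationFrame((timestamp) => {
    const now = performance.now();
    console.log('frame began at      ', timestamp); // last VSync pulse
    console.log('callback executes at', now);       // actual execution time
    console.log('delay inside frame  ', now - timestamp);
});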

web audio analyser's getFloatTimeDomainData buffer offset wrt buffers at other times and wrt buffer of 'complete file'

(question rewritten integrating bits of information from answers, plus making it more concise.)
I use analyser=audioContext.createAnalyser() in order to process audio data, and I'm trying to understand the details better.
I choose an fftSize, say 2048, then I create an array buffer of 2048 floats with Float32Array, and then, in an animation loop
(called 60 times per second on most machines, via window.requestAnimationFrame), I do
analyser.getFloatTimeDomainData(buffer);
which will fill my buffer with 2048 floating point sample data points.
When the handler is called the next time, 1/60 second has passed. To calculate how much that is in units of samples,
we have to divide it by the duration of 1 sample, and get (1/60)/(1/44100) = 735.
So the next handler call takes place (on average) 735 samples later.
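In code, that back-of-the-envelope calculation is:
const sampleRate = 44100;            // audioContext.sampleRate
const sampleDuration = 1 / sampleRate;
const rafInterval = 1 / 60;          // seconds between handler calls
console.log(Math.round(rafInterval / sampleDuration)); // 735 samples per call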
So there is overlap between subsequent buffers: each new buffer repeats the tail of the previous one, with roughly that many new samples appended at the end.
We know from the spec (search for 'render quantum') that everything happens in "chunk sizes" which are multiples of 128.
So (in terms of audio processing), one would expect that the next handler call will usually be either 5*128 = 640 samples later,
or else 6*128 = 768 samples later - those being the multiples of 128 closest to 735 samples = (1/60) second.
Calling this amount "Δ-samples", how do I find out what it is (during each handler call), 640 or 768 or something else?
Reliably, like this:
Consider the 'old buffer' (from previous handler call). If you delete "Δ-samples" many samples at the beginning, copy the remainder, and then append "Δ-samples" many new samples, that should be the current buffer. And indeed, I tried that,
and that is the case. It turns out "Δ-samples" often is 384, 512, 896. It is trivial but time consuming to determine
"Δ-samples" in a loop.
I would like to compute "Δ-samples" without performing that loop.
One would think the following would work:
(audioContext.currentTime - (value of audioContext.currentTime during the last time the handler ran)) / (duration of 1 sample)
I tried that (see code below, where I also "stitch together" the various buffers, trying to reconstruct the original buffer),
and - surprise - it works about 99.9% of the time in Chrome, and about 95% of the time in Firefox.
I also tried audioContext.getOutputTimestamp().contextTime, which does not work in Chrome, and works 9?% in Firefox.
Is there any way to find "Δ-samples" (without looking at the buffers), which works reliably?
Second question: the "reconstructed" buffer (all the buffers from callbacks stitched together) and the original sound buffer are not exactly the same; there is some (small, but noticeable, more than the usual "rounding error") difference, and it is bigger in Firefox.
Where does that come from? As I understand the spec, those should be the same.
var soundFile = 'https://mathheadinclouds.github.io/audio/sounds/la.mp3';
var audioContext = null;
var isPlaying = false;
var sourceNode = null;
var analyser = null;
var theBuffer = null;
var reconstructedBuffer = null;
var soundRequest = null;
var loopCounter = -1;
var FFT_SIZE = 2048;
var rafID = null;
var buffers = [];
var timesSamples = [];
var timeSampleDiffs = [];
var leadingWaste = 0;
window.addEventListener('load', function() {
soundRequest = new XMLHttpRequest();
soundRequest.open("GET", soundFile, true);
soundRequest.responseType = "arraybuffer";
//soundRequest.onload = function(evt) {}
soundRequest.send();
var btn = document.createElement('button');
btn.textContent = 'go';
btn.addEventListener('click', function(evt) {
goButtonClick(this, evt)
});
document.body.appendChild(btn);
});
function goButtonClick(elt, evt) {
initAudioContext(togglePlayback);
elt.parentElement.removeChild(elt);
}
function initAudioContext(callback) {
audioContext = new AudioContext();
audioContext.decodeAudioData(soundRequest.response, function(buffer) {
theBuffer = buffer;
callback();
});
}
function createAnalyser() {
analyser = audioContext.createAnalyser();
analyser.fftSize = FFT_SIZE;
}
function startWithSourceNode() {
sourceNode.connect(analyser);
analyser.connect(audioContext.destination);
sourceNode.start(0);
isPlaying = true;
sourceNode.addEventListener('ended', function(evt) {
sourceNode = null;
analyser = null;
isPlaying = false;
loopCounter = -1;
window.cancelAnimationFrame(rafID);
console.log('buffer length', theBuffer.length);
console.log('reconstructedBuffer length', reconstructedBuffer.length);
console.log('audio callback called counter', buffers.length);
console.log('root mean square error', Math.sqrt(checkResult() / theBuffer.length));
console.log('lengths of time between requestAnimationFrame callbacks, measured in audio samples:');
console.log(timeSampleDiffs);
function countOf(val) {
    return timeSampleDiffs.filter(function(v) {
        return v === val;
    }).length;
}
console.log(
    countOf(384),
    countOf(512),
    countOf(640),
    countOf(768),
    countOf(896),
    '*',
    timeSampleDiffs.filter(function(val) {
        return val > 896;
    }).length,
    timeSampleDiffs.filter(function(val) {
        return val < 384;
    }).length
);
console.log(
    countOf(384) + countOf(512) + countOf(640) + countOf(768) + countOf(896)
)
});
myAudioCallback();
}
function togglePlayback() {
sourceNode = audioContext.createBufferSource();
sourceNode.buffer = theBuffer;
createAnalyser();
startWithSourceNode();
}
function myAudioCallback(time) {
++loopCounter;
if (!buffers[loopCounter]) {
buffers[loopCounter] = new Float32Array(FFT_SIZE);
}
var buf = buffers[loopCounter];
analyser.getFloatTimeDomainData(buf);
var now = audioContext.currentTime;
var nowSamp = Math.round(audioContext.sampleRate * now);
timesSamples[loopCounter] = nowSamp;
var j, sampDiff;
if (loopCounter === 0) {
console.log('start sample: ', nowSamp);
reconstructedBuffer = new Float32Array(theBuffer.length + FFT_SIZE + nowSamp);
leadingWaste = nowSamp;
for (j = 0; j < FFT_SIZE; j++) {
reconstructedBuffer[nowSamp + j] = buf[j];
}
} else {
sampDiff = nowSamp - timesSamples[loopCounter - 1];
timeSampleDiffs.push(sampDiff);
var expectedEqual = FFT_SIZE - sampDiff;
for (j = 0; j < expectedEqual; j++) {
if (reconstructedBuffer[nowSamp + j] !== buf[j]) {
console.error('unexpected error', loopCounter, j);
// debugger;
}
}
for (j = expectedEqual; j < FFT_SIZE; j++) {
reconstructedBuffer[nowSamp + j] = buf[j];
}
//console.log(loopCounter, nowSamp, sampDiff);
}
rafID = window.requestAnimationFrame(myAudioCallback);
}
function checkResult() {
var ch0 = theBuffer.getChannelData(0);
var ch1 = theBuffer.getChannelData(1);
var sum = 0;
var idxDelta = leadingWaste + FFT_SIZE;
for (var i = 0; i < theBuffer.length; i++) {
var samp0 = ch0[i];
var samp1 = ch1[i];
var samp = (samp0 + samp1) / 2;
var check = reconstructedBuffer[i + idxDelta];
var diff = samp - check;
var sqDiff = diff * diff;
sum += sqDiff;
}
return sum;
}
In the above snippet, I do the following: I load, with XMLHttpRequest, a 1-second mp3 audio file from my github.io page (I sing 'la' for 1 second). After it has loaded, a button is shown saying 'go', and after pressing that, the audio is played back by putting it into a bufferSource node and then calling .start on that. The bufferSource is then fed to our analyser, et cetera.
related question
I also have the snippet code on my github.io page - makes reading the console easier.
I think the AnalyserNode is not what you want in this situation. You want to grab the data and keep it synchronized with raf. Use a ScriptProcessorNode or AudioWorkletNode to grab the data. Then you'll get all the data as it comes. No problems with overlap, or missing data or anything.
Note also that the clocks for raf and audio may be different and hence things may drift over time. You'll have to compensate for that yourself if you need to.
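A minimal AudioWorklet sketch of that idea (the file name capture-processor.js and the processor name are placeholders I'm assuming here, not from the original answer):
// capture-processor.js — posts every 128-sample render quantum to the main thread
class CaptureProcessor extends AudioWorkletProcessor {
    process(inputs) {
        const channel = inputs[0][0];
        if (channel) this.port.postMessage(channel.slice(0)); // copy: the engine reuses the buffer
        return true; // keep the processor alive
    }
}
registerProcessor('capture-processor', CaptureProcessor);

// main thread (inside an async function)
await audioContext.audioWorklet.addModule('capture-processor.js');
const capture = new AudioWorkletNode(audioContext, 'capture-processor');
source.connect(capture);
capture.port.onmessage = ({ data }) => {
    // `data` is a Float32Array of 128 samples, in order, with no overlap
};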
Unfortunately there is no way to find out the exact point in time at which the data returned by an AnalyserNode was captured. But you might be on the right track with your current approach.
All the values returned by the AnalyserNode are based on the "current-time-domain-data". This is basically the internal buffer of the AnalyserNode at a certain point in time. Since the Web Audio API has a fixed render quantum of 128 samples I would expect this buffer to evolve in steps of 128 samples as well. But currentTime usually evolves in steps of 128 samples already.
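You can check that quantization yourself with a quick sketch:
const ctx = new AudioContext();
setInterval(() => {
    const samples = Math.round(ctx.currentTime * ctx.sampleRate);
    console.log(samples, samples % 128); // the remainder stays 0 if time moves in 128-sample steps
}, 250);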
Furthermore the AnalyserNode has a smoothingTimeConstant property. It is responsible for "blurring" the returned values. The default value is 0.8. For your use case you probably want to set this to 0.
EDIT: As Raymond Toy pointed out in the comments, the smoothingTimeConstant only has an effect on the frequency data. Since the question is about getFloatTimeDomainData() it will have no effect on the returned values.
I hope this helps but I think it would be easier to get all the samples of your audio signal by using an AudioWorklet. It would definitely be more reliable.
I'm not really following your math, so I can't tell exactly what you had wrong, but you seem to be looking at this in an overly complicated manner.
The fftSize doesn't really matter here; what you want to calculate is how many samples have passed since the last frame.
To calculate this, you just need to
Measure the time elapsed from last frame.
Divide this time by the time of a single frame.
The time of a single frame is simply 1 / context.sampleRate.
So really all you need is (currentTime - previousTime) / (1 / sampleRate), and you'll find the index in the last frame where the data starts being repeated in the new one.
And only then, if you want the index in the new frame you'd subtract this index from the fftSize.
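As a short sketch of that calculation (currentTime and previousTime being the clock values from two consecutive frames):
const elapsed = currentTime - previousTime;                   // seconds between the two calls
const deltaSamples = Math.round(elapsed / (1 / sampleRate));  // == elapsed * sampleRate
const indexInOldBuffer = deltaSamples;            // where the repeated region starts in the old frame
const indexInNewBuffer = fftSize - deltaSamples;  // where genuinely new data starts in the new frame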
Now for why you sometimes have gaps, it's because AudioContext.prototype.currentTime returns the timestamp of the beginning of the next block to be passed to the graph.
The one we want here is AudioContext.prototype.getOutputTimestamp().contextTime, which represents the timestamp of now, on the same base as currentTime (i.e. the creation of the context).
(function loop(){requestAnimationFrame(loop);})();
(async()=>{
const ctx = new AudioContext();
const buf = await fetch("https://upload.wikimedia.org/wikipedia/en/d/d3/Beach_Boys_-_Good_Vibrations.ogg").then(r=>r.arrayBuffer());
const aud_buf = await ctx.decodeAudioData(buf);
const source = ctx.createBufferSource();
source.buffer = aud_buf;
source.loop = true;
const analyser = ctx.createAnalyser();
const fftSize = analyser.fftSize = 2048;
source.connect( analyser );
source.start(0);
// for debugging we use two different buffers
const arr1 = new Float32Array( fftSize );
const arr2 = new Float32Array( fftSize );
const single_sample_dur = (1 / ctx.sampleRate);
console.log( 'single sample duration (ms)', single_sample_dur * 1000);
onclick = e => {
if( ctx.state === "suspended" ) {
ctx.resume();
return console.log( 'starting context, please try again' );
}
console.log( '-------------' );
requestAnimationFrame( () => {
// first frame
const time1 = ctx.getOutputTimestamp().contextTime;
analyser.getFloatTimeDomainData( arr1 );
requestAnimationFrame( () => {
// second frame
const time2 = ctx.getOutputTimestamp().contextTime;
analyser.getFloatTimeDomainData( arr2 );
const elapsed_time = time2 - time1;
console.log( 'elapsed time between two frame (ms)', elapsed_time * 1000 );
const calculated_index = fftSize - Math.round( elapsed_time / single_sample_dur );
console.log( 'calculated index of new data', calculated_index );
// for debugging we can just search for the first index where the data repeats
const real_time = fftSize - arr1.indexOf( arr2[ 0 ] );
console.log( 'real index', real_time > fftSize ? 0 : real_time );
if( calculated_index !== ( real_time > fftSize ? 0 : real_time ) ) {
console.error( 'different' );
}
});
});
};
document.body.classList.add('ready');
})().catch( console.error );
body:not(.ready) pre { display: none; }
<pre>click to record two new frames</pre>

Determine if there is a pause in speech using Web Audio API AudioContext

Trying to understand the Web Audio API better. We're using it to create an AudioContext and then sending audio to be transcribed. I want to be able to determine when there is a natural pause in speech or when the user stopped speaking.
Is there some data in onaudioprocess callback that can be accessed to determine pauses/breaks in speech?
let context = new AudioContext();
context.onstatechange = () => {};
this.setState({ context: context });
let source = context.createMediaStreamSource(stream);
let processor = context.createScriptProcessor(4096, 1, 1);
source.connect(processor);
processor.connect(context.destination);
processor.onaudioprocess = (event) => {
// Do some magic here
}
I tried a solution that is suggested on this post but did not achieve the results I need. Post: HTML Audio recording until silence?
When I parse for silence as the post suggests, I get the same result - either 0 or 128
let context = new AudioContext();
let source = context.createMediaStreamSource(stream);
let processor = context.createScriptProcessor(4096, 1, 1);
source.connect(processor);
processor.connect(context.destination);
/***
* Create analyser
*
**/
let analyser = context.createAnalyser();
analyser.smoothingTimeConstant = 0;
analyser.fftSize = 2048;
let buffLength = analyser.frequencyBinCount;
let arrayFreqDomain = new Uint8Array(buffLength);
let arrayTimeDomain = new Uint8Array(buffLength);
processor.connect(analyser);
processor.onaudioprocess = (event) => {
/**
*
* Parse live real-time buffer looking for silence
*
**/
let f, t;
analyser.getByteFrequencyData(arrayFreqDomain);
analyser.getByteTimeDomainData(arrayTimeDomain);
for (var i = 0; i < buffLength; i++) {
arrayFreqDomain[i]; <---- gives 0 value always
arrayTimeDomain[i]; <---- gives 128 value always
}
}
Looking at the documentation for the getByteFrequencyData method I can see how it is supposed to be giving a different value (in the documentation example it will give a different barHeight), but it isn't working for me. https://developer.mozilla.org/en-US/docs/Web/API/AnalyserNode/getByteFrequencyData#Example
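For reference, one common approach (not from the original post) is to skip the AnalyserNode entirely and compute the RMS of each block straight from the ScriptProcessor's input buffer; the threshold and timing values below are assumptions to tune:
const SILENCE_THRESHOLD = 0.01; // assumed value; tune for your microphone
const PAUSE_SECONDS = 1;        // how long "quiet" must last to count as a pause
let silentSince = null;
processor.onaudioprocess = (event) => {
    const samples = event.inputBuffer.getChannelData(0);
    let sum = 0;
    for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
    const rms = Math.sqrt(sum / samples.length);
    if (rms < SILENCE_THRESHOLD) {
        if (silentSince === null) silentSince = event.playbackTime;
        else if (event.playbackTime - silentSince > PAUSE_SECONDS) console.log('pause in speech');
    } else {
        silentSince = null; // speech resumed
    }
};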

Reducing sample rate of a Web Audio spectrum analyzer using mic input

I'm using the Web Audio API to create a simple spectrum analyzer using the computer microphone as the input signal. The basic functionality of my current implementation works fine, using the default sampling rate (usually 48KHz, but could be 44.1KHz depending on the browser).
For some applications, I would like to use a lower sampling rate (~8KHz) for the FFT.
It looks like the Web Audio API is adding support for customizing the sample rate, currently only available in Firefox (https://developer.mozilla.org/en-US/docs/Web/API/AudioContextOptions/sampleRate).
Adding sample rate to the context constructor:
// create AudioContext object named 'audioCtx'
var audioCtx = new (AudioContext || webkitAudioContext)({sampleRate: 8000,});
console.log(audioCtx.sampleRate)
The console outputs '8000' (in Firefox), so it appears to be working up to this point.
The microphone is turned on by the user using a pull-down menu. This is the function servicing that pull-down:
var microphone;
function getMicInputState()
{
let selectedValue = document.getElementById("micOffOn").value;
if (selectedValue === "on") {
navigator.mediaDevices.getUserMedia({audio: true})
.then(stream => {
microphone = audioCtx.createMediaStreamSource(stream);
microphone.connect(analyserNode);
})
.catch(err => { alert("Microphone is required."); });
} else {
microphone.disconnect();
}
}
In Firefox, using the pulldown to activate the microphone displays a popup requesting access to the microphone (as normally expected). After clicking to allow the microphone, the console displays:
"Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported".
The display of the spectrum analyzer remains blank.
Any ideas how to overcome this error? If we can get past this, any guidance on how to specify sampleRate when the user's soundcard sampling rate is unknown?
One approach to overcome this is passing audio packets captured from microphone to analyzer node via a script processor node that re-samples the audio packets passing through it.
Brief overview of script processor node
Every script processor node has an input buffer and an output buffer.
When audio enters the input buffer, the script processor node fires
onaudioprocess event.
Whatever is placed in the output buffer of script processor node becomes its output.
For detailed specs, refer to: Script processor node
Here is the pseudo-code:
1. Create live media source, script processor node and analyzer node
2. Connect live media source to analyzer node via script processor node
3. Whenever an audio packet enters the script processor node, the onaudioprocess event is fired
4. When the onaudioprocess event is fired:
4.1) Extract audio data from input buffer
4.2) Re-sample audio data
4.3) Place re-sampled data in output buffer
The following code snippet implements the above pseudocode:
var microphone;
// *** 1) create a script processor node
var scriptProcessorNode = audioCtx.createScriptProcessor(4096, 1, 1);
function getMicInputState()
{
let selectedValue = document.getElementById("micOffOn").value;
if (selectedValue === "on") {
navigator.mediaDevices.getUserMedia({audio: true})
.then(stream => {
microphone = audioCtx.createMediaStreamSource(stream);
// *** 2) connect live media source to analyserNode via script processor node
microphone.connect(scriptProcessorNode);
scriptProcessorNode.connect(analyserNode);
})
.catch(err => { alert("Microphone is required."); });
} else {
microphone.disconnect();
}
}
// *** 3) Whenever an audio packet passes through script processor node, resample it
scriptProcessorNode.onaudioprocess = function(event){
var inputBuffer = event.inputBuffer;
var outputBuffer = event.outputBuffer;
for(var channel = 0; channel < outputBuffer.numberOfChannels; channel++){
var inputData = inputBuffer.getChannelData(channel);
var outputData = outputBuffer.getChannelData(channel);
// *** 3.1) Resample inputData
var fromSampleRate = audioCtx.sampleRate;
var toSampleRate = 8000;
var resampledAudio = downsample(inputData, fromSampleRate, toSampleRate);
// *** 3.2) make output equal to the resampled audio
for (var sample = 0; sample < outputData.length; sample++) {
outputData[sample] = resampledAudio[sample];
}
}
}
function downsample(buffer, fromSampleRate, toSampleRate) {
// buffer is a Float32Array
var sampleRateRatio = Math.round(fromSampleRate / toSampleRate);
var newLength = Math.round(buffer.length / sampleRateRatio);
var result = new Float32Array(newLength);
var offsetResult = 0;
var offsetBuffer = 0;
while (offsetResult < result.length) {
var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio);
var accum = 0, count = 0;
for (var i = offsetBuffer; i < nextOffsetBuffer && i < buffer.length; i++) {
accum += buffer[i];
count++;
}
result[offsetResult] = accum / count;
offsetResult++;
offsetBuffer = nextOffsetBuffer;
}
return result;
}
Update - 03 Nov, 2020
Script Processor Node is being deprecated and replaced with AudioWorklets.
The approach to changing the sample rate remains the same.
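A sketch of what that could look like with an AudioWorklet (the file and processor names are placeholders, and this naive decimator ignores remainder samples at block boundaries):
// downsample-processor.js — averages groups of samples and posts the
// decimated audio to the main thread via the port.
class DownsampleProcessor extends AudioWorkletProcessor {
    constructor() {
        super();
        this.ratio = Math.round(sampleRate / 8000); // `sampleRate` is a worklet-scope global
    }
    process(inputs) {
        const input = inputs[0][0];
        if (input) {
            const out = new Float32Array(Math.floor(input.length / this.ratio));
            for (let i = 0; i < out.length; i++) {
                let sum = 0;
                for (let j = 0; j < this.ratio; j++) sum += input[i * this.ratio + j];
                out[i] = sum / this.ratio; // average of `ratio` input samples
            }
            this.port.postMessage(out);
        }
        return true;
    }
}
registerProcessor('downsample-processor', DownsampleProcessor);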
Downsampling from the constructor and connecting an AnalyserNode is now possible in Chrome and Safari.
So the following code, taken from the corresponding MDN documentation, would work:
const audioContext = new (window.AudioContext || window.webkitAudioContext)({
sampleRate: 8000
});
const mediaStream = await navigator.mediaDevices.getUserMedia({
audio: true,
video: false
});
const mediaStreamSource = audioContext.createMediaStreamSource(mediaStream);
const analyser = audioContext.createAnalyser();
analyser.fftSize = 256;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
analyser.getByteFrequencyData(dataArray);
mediaStreamSource.connect(analyser);
const title = document.createElement("div");
title.innerText = `Sampling frequency 8kHz:`;
const wrapper = document.createElement("div");
const canvas = document.createElement("canvas");
wrapper.appendChild(canvas);
document.body.appendChild(title);
document.body.appendChild(wrapper);
const canvasCtx = canvas.getContext("2d");
function draw() {
requestAnimationFrame(draw);
analyser.getByteFrequencyData(dataArray);
canvasCtx.fillStyle = "rgb(0, 0, 0)";
canvasCtx.fillRect(0, 0, canvas.width, canvas.height);
var barWidth = canvas.width / bufferLength;
var barHeight = 0;
var x = 0;
for (var i = 0; i < bufferLength; i++) {
barHeight = dataArray[i] / 2;
canvasCtx.fillStyle = "rgb(" + (2 * barHeight + 100) + ",50,50)";
canvasCtx.fillRect(x, canvas.height - barHeight / 2, barWidth, barHeight);
x += barWidth + 1;
}
}
draw();
See here for a demo where both 48kHz and 8kHz sampled signal frequencies are displayed: https://codesandbox.io/s/vibrant-moser-cfex33

Controlling fps with requestAnimationFrame?

It seems like requestAnimationFrame is the de facto way to animate things now. It worked pretty well for me for the most part, but right now I'm trying to do some canvas animations and I was wondering: Is there any way to make sure it runs at a certain fps? I understand that the purpose of rAF is for consistently smooth animations, and I might run the risk of making my animation choppy, but right now it seems to run at drastically different speeds pretty arbitrarily, and I'm wondering if there's a way to combat that somehow.
I'd use setInterval, but I want the optimizations that rAF offers (especially automatically stopping when the tab loses focus).
In case someone wants to look at my code, it's pretty much:
animateFlash: function() {
ctx_fg.clearRect(0,0,canvasWidth,canvasHeight);
ctx_fg.fillStyle = 'rgba(177,39,116,1)';
ctx_fg.strokeStyle = 'none';
ctx_fg.beginPath();
for(var i in nodes) {
nodes[i].drawFlash();
}
ctx_fg.fill();
ctx_fg.closePath();
var instance = this;
var rafID = requestAnimationFrame(function(){
instance.animateFlash();
})
var unfinishedNodes = nodes.filter(function(elem){
return elem.timer < timerMax;
});
if(unfinishedNodes.length === 0) {
console.log("done");
cancelAnimationFrame(rafID);
instance.animate();
}
}
Where Node.drawFlash() is just some code that determines a radius based on a counter variable and then draws a circle.
How to throttle requestAnimationFrame to a specific frame rate
Demo throttling at 5 FPS: http://jsfiddle.net/m1erickson/CtsY3/
This method works by testing the elapsed time since executing the last frame loop.
Your drawing code executes only when your specified FPS interval has elapsed.
The first part of the code sets some variables used to calculate elapsed time.
var stop = false;
var frameCount = 0;
var $results = $("#results");
var fps, fpsInterval, startTime, now, then, elapsed;
// initialize the timer variables and start the animation
function startAnimating(fps) {
fpsInterval = 1000 / fps;
then = Date.now();
startTime = then;
animate();
}
And this code is the actual requestAnimationFrame loop which draws at your specified FPS.
// the animation loop calculates time elapsed since the last loop
// and only draws if your specified fps interval is achieved
function animate() {
// request another frame
requestAnimationFrame(animate);
// calc elapsed time since last loop
now = Date.now();
elapsed = now - then;
// if enough time has elapsed, draw the next frame
if (elapsed > fpsInterval) {
// Get ready for next frame by setting then=now, but also adjust for your
// specified fpsInterval not being a multiple of RAF's interval (16.7ms)
then = now - (elapsed % fpsInterval);
// Put your drawing code here
}
}
I suggest wrapping your call to requestAnimationFrame in a setTimeout:
const fps = 25;
function animate() {
// perform some animation task here
setTimeout(() => {
requestAnimationFrame(animate);
}, 1000 / fps);
}
animate();
You need to call requestAnimationFrame from within setTimeout, rather than the other way around, because requestAnimationFrame schedules your function to run right before the next repaint, and if you delay your update further using setTimeout you will have missed that time window. However, doing the reverse is sound, since you’re simply waiting a period of time before making the request.
Update 2016/6
The problem with throttling the frame rate is that the screen has a constant update rate, typically 60 FPS.
If we want 24 FPS we will never get a true 24 fps on the screen; we can time it as such, but we can't show it, as the monitor can only show synced frames at 15 fps, 30 fps or 60 fps (some monitors also 120 fps).
However, for timing purposes we can calculate and update when possible.
You can build all the logic for controlling the frame-rate by encapsulating calculations and callbacks into an object:
function FpsCtrl(fps, callback) {
var delay = 1000 / fps, // calc. time per frame
time = null, // start time
frame = -1, // frame count
tref; // rAF time reference
function loop(timestamp) {
if (time === null) time = timestamp; // init start time
var seg = Math.floor((timestamp - time) / delay); // calc frame no.
if (seg > frame) { // moved to next frame?
frame = seg; // update
callback({ // callback function
time: timestamp,
frame: frame
})
}
tref = requestAnimationFrame(loop)
}
}
Then add some controller and configuration code:
// play status
this.isPlaying = false;
// set frame-rate
this.frameRate = function(newfps) {
if (!arguments.length) return fps;
fps = newfps;
delay = 1000 / fps;
frame = -1;
time = null;
};
// enable starting/pausing of the object
this.start = function() {
if (!this.isPlaying) {
this.isPlaying = true;
tref = requestAnimationFrame(loop);
}
};
this.pause = function() {
if (this.isPlaying) {
cancelAnimationFrame(tref);
this.isPlaying = false;
time = null;
frame = -1;
}
};
Usage
It becomes very simple - now, all that we have to do is to create an instance by setting callback function and desired frame rate just like this:
var fc = new FpsCtrl(24, function(e) {
// render each frame here
});
Then start (which could be the default behavior if desired):
fc.start();
That's it, all the logic is handled internally.
Demo
var ctx = c.getContext("2d"), pTime = 0, mTime = 0, x = 0;
ctx.font = "20px sans-serif";
// update canvas with some information and animation
var fps = new FpsCtrl(12, function(e) {
ctx.clearRect(0, 0, c.width, c.height);
ctx.fillText("FPS: " + fps.frameRate() +
" Frame: " + e.frame +
" Time: " + (e.time - pTime).toFixed(1), 4, 30);
pTime = e.time;
var x = (pTime - mTime) * 0.1;
if (x > c.width) mTime = pTime;
ctx.fillRect(x, 50, 10, 10)
})
// start the loop
fps.start();
// UI
bState.onclick = function() {
fps.isPlaying ? fps.pause() : fps.start();
};
sFPS.onchange = function() {
fps.frameRate(+this.value)
};
function FpsCtrl(fps, callback) {
var delay = 1000 / fps,
time = null,
frame = -1,
tref;
function loop(timestamp) {
if (time === null) time = timestamp;
var seg = Math.floor((timestamp - time) / delay);
if (seg > frame) {
frame = seg;
callback({
time: timestamp,
frame: frame
})
}
tref = requestAnimationFrame(loop)
}
this.isPlaying = false;
this.frameRate = function(newfps) {
if (!arguments.length) return fps;
fps = newfps;
delay = 1000 / fps;
frame = -1;
time = null;
};
this.start = function() {
if (!this.isPlaying) {
this.isPlaying = true;
tref = requestAnimationFrame(loop);
}
};
this.pause = function() {
if (this.isPlaying) {
cancelAnimationFrame(tref);
this.isPlaying = false;
time = null;
frame = -1;
}
};
}
body {font:16px sans-serif}
<label>Framerate: <select id=sFPS>
<option>12</option>
<option>15</option>
<option>24</option>
<option>25</option>
<option>29.97</option>
<option>30</option>
<option>60</option>
</select></label><br>
<canvas id=c height=60></canvas><br>
<button id=bState>Start/Stop</button>
Old answer
The main purpose of requestAnimationFrame is to sync updates to the monitor's refresh rate. This will require you to animate at the FPS of the monitor or a factor of it (i.e. 60, 30, 15 FPS for a typical refresh rate @ 60 Hz).
If you want a more arbitrary FPS then there is no point using rAF, as the frame rate will never match the monitor's update frequency anyway (just a frame here and there), which simply cannot give you a smooth animation (as with all frame re-timings), and you might as well use setTimeout or setInterval instead.
This is also a well-known problem in the professional video industry when you want to play back a video at a different FPS than the device showing it refreshes at. Many techniques have been used, such as frame blending and complex re-timing that re-builds intermediate frames based on motion vectors, but with canvas these techniques are not available and the result will always be jerky video.
var FPS = 24; /// "silver screen"
var isPlaying = true;
function loop() {
if (isPlaying) setTimeout(loop, 1000 / FPS);
... code for frame here
}
The reason why we place setTimeout first (and why some place rAF first when a poly-fill is used) is that this will be more accurate, as the setTimeout will queue an event immediately when the loop starts, so that no matter how much time the remaining code uses (provided it doesn't exceed the timeout interval) the next call will be at the interval it represents (for pure rAF this is not essential as rAF will try to jump onto the next frame in any case).
Also worth noting is that placing it first will also risk calls stacking up, as with setInterval. setInterval may be slightly more accurate for this use.
And you can use setInterval instead outside the loop to do the same.
var FPS = 29.97; /// NTSC
var rememberMe = setInterval(loop, 1000 / FPS);
function loop() {
... code for frame here
}
And to stop the loop:
clearInterval(rememberMe);
In order to reduce frame rate when the tab gets blurred you can add a factor like this:
var isFocus = 1;
var FPS = 25;
function loop() {
setTimeout(loop, 1000 / (isFocus * FPS)); /// note the change here
... code for frame here
}
window.onblur = function() {
isFocus = 0.5; /// reduce FPS to half
}
window.onfocus = function() {
isFocus = 1; /// full FPS
}
This way you can reduce the FPS to 1/4 etc.
These are all good ideas in theory, until you go deep. The problem is you can't throttle a rAF without de-synchronizing it, defeating its very purpose for existing. So you let it run at full speed, and update your data in a separate loop, or even a separate thread!
Yes, I said it. You can do multi-threaded JavaScript in the browser!
There are two methods I know that work extremely well without jank, using far less juice and creating less heat. Accurate human-scale timing and machine efficiency are the net result.
Apologies if this is a little wordy, but here goes...
Method 1: Update data via setInterval, and graphics via RAF.
Use a separate setInterval for updating translation and rotation values, physics, collisions, etc. Keep those values in an object for each animated element. Assign the transform string to a variable in the object each setInterval 'frame'. Keep these objects in an array. Set your interval to your desired fps in ms: ms=(1000/fps). This keeps a steady clock that allows the same fps on any device, regardless of RAF speed. Do not assign the transforms to the elements here!
In a requestAnimationFrame loop, iterate through your array with an old-school for loop-- do not use the newer forms here, they are slow!
for(var i = 0; i < sprite.length; i++){ rafUpdate(sprite[i]); }
In your rafUpdate function, get the transform string from your js object in the array, and its elements id. You should already have your 'sprite' elements attached to a variable or easily accessible through other means so you don't lose time 'get'-ing them in the RAF. Keeping them in an object named after their html id's works pretty good. Set that part up before it even goes into your SI or RAF.
Use the RAF to update your transforms only, use only 3D transforms (even for 2d), and set css "will-change: transform;" on elements that will change. This keeps your transforms synced to the native refresh rate as much as possible, kicks in the GPU, and tells the browser where to concentrate most.
So you should have something like this pseudocode...
// refs to elements to be transformed, kept in an object keyed by id
var element = {
    mario: document.getElementById('mario'),
    luigi: document.getElementById('luigi')
    //...etc.
};
var sprite = [ // read/write this with SI. read-only from RAF
    { id: 'mario' /* ...physics data and updated transform string (from SI) here */ },
    { id: 'luigi' /* ...same */ }
    //...and so forth
]; // kept in an array (for efficient iteration)
//update one sprite js object
//data manipulation, CPU tasks for each sprite object
//(physics, collisions, and transform-string updates here.)
//pass the object (by reference).
var SIupdate = function(object){
// get pos/rot and update with movement
object.pos.x += object.mov.pos.x; // example, motion along x axis
// and so on for y and z movement
// and xyz rotational motion, scripted scaling etc
// build the transform string, i.e.
// (note: CSS uses rotateZ/rotateY/rotateX, and lengths/angles need units)
object.transform =
    'translate3d(' +
        object.pos.x + 'px,' +
        object.pos.y + 'px,' +
        object.pos.z + 'px' +
    ') ' +
    // assign rotations; order depends on purpose and set-up.
    'rotateZ(' + object.rot.z + 'deg) ' +
    'rotateY(' + object.rot.y + 'deg) ' +
    'rotateX(' + object.rot.x + 'deg) ' +
    'scale3d('.... if desired
; //...etc.
}
var fps = 30; //desired controlled frame-rate
// CPU TASKS - SI pseudo-frame data manipulation
setInterval(function(){
// update each objects data
for(var i = 0; i < sprite.length; i++){ SIupdate(sprite[i]); }
},1000/fps); // note ms = 1000/fps
// GPU TASKS - RAF callback, real frame graphics updates only
var rAF = function(){
    // update each object's graphics
    for(var i = 0; i < sprite.length; i++){ rAF.update(sprite[i]); }
    window.requestAnimationFrame(rAF); // loop
}
// assign new transform to the sprite's element, only if its transform has changed.
rAF.update = function(object){
    if(object.old_transform !== object.transform){
        element[object.id].style.transform = object.transform;
        object.old_transform = object.transform;
    }
}
window.requestAnimationFrame(rAF); // begin RAF
This keeps your updates to the data objects and transform strings synced to desired 'frame' rate in the SI, and the actual transform assignments in the RAF synced to GPU refresh rate. So the actual graphics updates are only in the RAF, but the changes to the data, and building the transform string are in the SI, thus no jankies but 'time' flows at desired frame-rate.
Flow:
[setup js sprite objects and html element object references]
[setup RAF and SI single-object update functions]
[start SI at perceived/ideal frame-rate]
[iterate through js objects, update data transform string for each]
[loop back to SI]
[start RAF loop]
[iterate through js objects, read object's transform string and assign it to it's html element]
[loop back to RAF]
Method 2. Put the SI in a web-worker. This one is FAAAST and smooth!
Same as method 1, but put the SI in web-worker. It'll run on a totally separate thread then, leaving the page to deal only with the RAF and UI. Pass the sprite array back and forth as a 'transferable object'. This is buko fast. It does not take time to clone or serialize, but it's not like passing by reference in that the reference from the other side is destroyed, so you will need to have both sides pass to the other side, and only update them when present, sort of like passing a note back and forth with your girlfriend in high-school.
Only one can read and write at a time. This is fine so long as they check if it's not undefined to avoid an error. The RAF is FAST and will kick it back immediately, then go through a bunch of GPU frames just checking if it's been sent back yet. The SI in the web-worker will have the sprite array most of the time, and will update positional, movement and physics data, as well as creating the new transform string, then pass it back to the RAF in the page.
This is the fastest way I know to animate elements via script. The two functions will be running as two separate programs, on two separate threads, taking advantage of multi-core CPU's in a way that a single js script does not. Multi-threaded javascript animation.
And it will do so smoothly without jank, but at the actual specified frame-rate, with very little divergence.
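A minimal sketch of that note-passing with a transferable buffer (the worker file name physics-worker.js is hypothetical):
// main thread
const worker = new Worker('physics-worker.js');
let sharedData = new Float32Array(1024);               // sprite data slots
worker.postMessage(sharedData, [sharedData.buffer]);   // transfer: this side loses it
worker.onmessage = (evt) => { sharedData = evt.data; }; // worker hands it back

function rAF() {
    window.requestAnimationFrame(rAF);
    if (sharedData.byteLength === 0) return; // buffer is currently on the worker's side
    // ...read transform data from sharedData and assign styles here...
    worker.postMessage(sharedData, [sharedData.buffer]); // pass the note back
}
window.requestAnimationFrame(rAF);
// physics-worker.js would mirror this: onmessage → update data → postMessage back with transfer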
Result:
Either of these two methods will ensure your script will run at the same speed on any PC, phone, tablet, etc (within the capabilities of the device and the browser, of course).
How to easily throttle to a specific FPS:
// timestamps are ms passed since document creation.
// lastTimestamp can be initialized to 0, if main loop is executed immediately
var lastTimestamp = 0,
maxFPS = 30,
timestep = 1000 / maxFPS; // ms for each frame
function main(timestamp) {
window.requestAnimationFrame(main);
// skip if timestep ms hasn't passed since last frame
if (timestamp - lastTimestamp < timestep) return;
lastTimestamp = timestamp;
// draw frame here
}
window.requestAnimationFrame(main);
Source: A Detailed Explanation of JavaScript Game Loops and Timing by Isaac Sukin
The simplest way
note: It might behave differently on different screens with different frame rates.
const FPS = 30;
let lastTimestamp = 0;
function update(timestamp) {
requestAnimationFrame(update);
if (timestamp - lastTimestamp < 1000 / FPS) return;
/* <<< PUT YOUR CODE HERE >>> */
lastTimestamp = timestamp;
}
update();
var time = 0;
var time_framerate = 1000; //in milliseconds
function animate(timestamp) {
if(timestamp > time + time_framerate) {
time = timestamp;
//your code
}
window.requestAnimationFrame(animate);
}
A simple solution to this problem is to return from the render loop if the frame is not required to render:
const FPS = 60;
let prevTick = 0;
function render()
{
requestAnimationFrame(render);
// clamp to fixed framerate
let now = Math.round(FPS * Date.now() / 1000);
if (now == prevTick) return;
prevTick = now;
// otherwise, do your stuff ...
}
It's important to know that requestAnimationFrame depends on the users monitor refresh rate (vsync). So, relying on requestAnimationFrame for game speed for example will make it unplayable on 200Hz monitors if you're not using a separate timer mechanism in your simulation.
Skipping frames in requestAnimationFrame causes animation that is not as smooth as desired at a custom fps.
// Input/output DOM elements
var $results = $("#results");
var $fps = $("#fps");
var $period = $("#period");
// Array of FPS samples for graphing
// Animation state/parameters
var fpsInterval, lastDrawTime, frameCount_timed, frameCount, lastSampleTime,
currentFps=0, currentFps_timed=0;
var intervalID, requestID;
// Setup canvas being animated
var canvas = document.getElementById("c");
var canvas_timed = document.getElementById("c2");
canvas_timed.width = canvas.width = 300;
canvas_timed.height = canvas.height = 300;
var ctx = canvas.getContext("2d");
var ctx2 = canvas_timed.getContext("2d");
// Setup input event handlers
$fps.on('click change keyup', function() {
if (this.value > 0) {
fpsInterval = 1000 / +this.value;
}
});
$period.on('click change keyup', function() {
if (this.value > 0) {
if (intervalID) {
clearInterval(intervalID);
}
intervalID = setInterval(sampleFps, +this.value);
}
});
function startAnimating(fps, sampleFreq) {
ctx.fillStyle = ctx2.fillStyle = "#000";
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx2.fillRect(0, 0, canvas.width, canvas.height);
ctx2.font = ctx.font = "32px sans";
fpsInterval = 1000 / fps;
lastDrawTime = performance.now();
lastSampleTime = lastDrawTime;
frameCount = 0;
frameCount_timed = 0;
animate();
intervalID = setInterval(sampleFps, sampleFreq);
animate_timed()
}
function sampleFps() {
// sample FPS
var now = performance.now();
if (frameCount > 0) {
currentFps =
(frameCount / (now - lastSampleTime) * 1000).toFixed(2);
currentFps_timed =
(frameCount_timed / (now - lastSampleTime) * 1000).toFixed(2);
$results.text(currentFps + " | " + currentFps_timed);
frameCount = 0;
frameCount_timed = 0;
}
lastSampleTime = now;
}
function drawNextFrame(now, canvas, ctx, fpsCount) {
// Just draw an oscillating seconds-hand
var length = Math.min(canvas.width, canvas.height) / 2.1;
var step = 15000;
var theta = (now % step) / step * 2 * Math.PI;
var xCenter = canvas.width / 2;
var yCenter = canvas.height / 2;
var x = xCenter + length * Math.cos(theta);
var y = yCenter + length * Math.sin(theta);
ctx.beginPath();
ctx.moveTo(xCenter, yCenter);
ctx.lineTo(x, y);
ctx.fillStyle = ctx.strokeStyle = 'white';
ctx.stroke();
var theta2 = theta + 3.14/6;
ctx.beginPath();
ctx.moveTo(xCenter, yCenter);
ctx.lineTo(x, y);
ctx.arc(xCenter, yCenter, length*2, theta, theta2);
ctx.fillStyle = "rgba(0,0,0,.1)"
ctx.fill();
ctx.fillStyle = "#000";
ctx.fillRect(0,0,100,30);
ctx.fillStyle = "#080";
ctx.fillText(fpsCount,10,30);
}
// redraw second canvas each fpsInterval (1000/fps)
function animate_timed() {
frameCount_timed++;
drawNextFrame( performance.now(), canvas_timed, ctx2, currentFps_timed);
setTimeout(animate_timed, fpsInterval);
}
function animate(now) {
// request another frame
requestAnimationFrame(animate);
// calc elapsed time since last loop
var elapsed = now - lastDrawTime;
// if enough time has elapsed, draw the next frame
if (elapsed > fpsInterval) {
// Get ready for next frame by setting lastDrawTime=now, but...
// Also, adjust for fpsInterval not being multiple of 16.67
lastDrawTime = now - (elapsed % fpsInterval);
frameCount++;
drawNextFrame(now, canvas, ctx, currentFps);
}
}
startAnimating(+$fps.val(), +$period.val());
input{
width:100px;
}
#tvs{
color:red;
padding:0px 25px;
}
H3{
font-weight:400;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<h3>requestAnimationFrame skipping <span id="tvs">vs.</span> setTimeout() redraw</h3>
<div>
<input id="fps" type="number" value="33"/> FPS:
<span id="results"></span>
</div>
<div>
<input id="period" type="number" value="1000"/> Sample period (fps, ms)
</div>
<canvas id="c"></canvas><canvas id="c2"></canvas>
Original code by #tavnab.
For throttling FPS to any value, please see jdmayfield's answer.
However, for a very quick and easy solution to halve your frame rate, you can simply do your computations only every 2nd frame by:
requestAnimationFrame(render);
function render() {
// ... computations ...
requestAnimationFrame(skipFrame);
}
function skipFrame() { requestAnimationFrame(render); }
Similarly you could always call render, but use a variable to control whether you do computations this time or not, allowing you to also cut FPS to a third or a fourth (in my case, for a schematic WebGL animation, 20fps is still enough while considerably lowering the computational load on the clients).
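That variable-based variant might look like this (a sketch; the divider value is an assumption):
const DIVIDER = 3; // render every 3rd frame, ~20fps on a 60Hz screen
let frameCount = 0;
function render() {
    requestAnimationFrame(render);
    if (++frameCount % DIVIDER !== 0) return; // skip this frame
    // ... computations ...
}
requestAnimationFrame(render);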
I always do it this very simple way without messing with timestamps:
let fps, eachNthFrame, frameCount;
fps = 30;
//This variable specifies how many frames should be skipped.
//If it is 1 then no frames are skipped. If it is 2, one frame
//is skipped so "eachSecondFrame" is rendered.
eachNthFrame = Math.round((1000 / fps) / 16.66);
//This variable is the number of the current frame. It is set to eachNthFrame so that the
//first frame will be rendered.
frameCount = eachNthFrame;
requestAnimationFrame(frame);
//The rest is self-explanatory; animate() stands for your own drawing function
function frame() {
if (frameCount === eachNthFrame) {
frameCount = 0;
animate();
}
frameCount++;
requestAnimationFrame(frame);
}
Here is an idea to reach the desired fps:
detect the browser's animationFrameRate (typically 60fps)
build a bitSet, according to animationFrameRate and your desiredFrameRate (say 24fps)
look up the bitSet and conditionally "continue" the animation frame loop
It uses requestAnimationFrame, so the actual frame rate won't be greater than animationFrameRate. You may adjust desiredFrameRate according to animationFrameRate.
I wrote a mini lib, and a canvas animation demo.
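// filterNums: given an ascending list of frame intervals, keep the contiguous
// run around the median whose values stay within the jitter tolerance of the
// running average, discarding outliers caused by e.g. dropped or delayed frames.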
function filterNums(nums, jitter = 0.2, downJitter = 1 - 1 / (1 + jitter)) {
let len = nums.length;
let mid = Math.floor(len % 2 === 0 ? len / 2 : (len - 1) / 2), low = mid, high = mid;
let lower = true, higher = true;
let sum = nums[mid], count = 1;
for (let i = 1, j, num; i <= mid; i += 1) {
if (higher) {
j = mid + i;
if (j === len)
break;
num = nums[j];
if (num < (sum / count) * (1 + jitter)) {
sum += num;
count += 1;
high = j;
} else {
higher = false;
}
}
if (lower) {
j = mid - i;
num = nums[j];
if (num > (sum / count) * (1 - downJitter)) {
sum += num;
count += 1;
low = j;
} else {
lower = false;
}
}
}
return nums.slice(low, high + 1);
}
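// snapToOrRound: snap n to the nearest entry in `values` when it is within
// `distance` of that entry, otherwise just round it (so e.g. 59.3 becomes 60).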
function snapToOrRound(n, values, distance = 3) {
for (let i = 0, v; i < values.length; i += 1) {
v = values[i];
if (n >= v - distance && n <= v + distance) {
return v;
}
}
return Math.round(n);
}
function detectAnimationFrameRate(numIntervals = 6) {
if (typeof numIntervals !== 'number' || !isFinite(numIntervals) || numIntervals < 2) {
throw new RangeError('Argument numIntervals should be a number not less than 2');
}
return new Promise((resolve) => {
let num = Math.floor(numIntervals);
let numFrames = num + 1;
let last;
let intervals = [];
let i = 0;
let tick = () => {
let now = performance.now();
i += 1;
if (i < numFrames) {
requestAnimationFrame(tick);
}
if (i === 1) {
last = now;
} else {
intervals.push(now - last);
last = now;
if (i === numFrames) {
let compareFn = (a, b) => a < b ? -1 : a > b ? 1 : 0;
let sortedIntervals = intervals.slice().sort(compareFn);
let selectedIntervals = filterNums(sortedIntervals, 0.2, 0.1);
let selectedDuration = selectedIntervals.reduce((s, n) => s + n, 0);
let selectedFrameRate = 1000 / (selectedDuration / selectedIntervals.length);
let finalFrameRate = snapToOrRound(selectedFrameRate, [60, 120, 90, 30], 5);
resolve(finalFrameRate);
}
}
};
requestAnimationFrame(() => {
requestAnimationFrame(tick);
});
});
}
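// buildFrameBitSet: one slot per animation frame in a second; a 1 marks a
// frame that should be drawn, spreading ~desiredFrameRate draws evenly.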
function buildFrameBitSet(animationFrameRate, desiredFrameRate){
let bitSet = new Uint8Array(animationFrameRate);
let ratio = desiredFrameRate / animationFrameRate;
if(ratio >= 1)
return bitSet.fill(1);
for(let i = 0, prev = -1, curr; i < animationFrameRate; i += 1, prev = curr){
curr = Math.floor(i * ratio);
bitSet[i] = (curr !== prev) ? 1 : 0;
}
return bitSet;
}
let $ = (s, c = document) => c.querySelector(s);
let $$ = (s, c = document) => Array.prototype.slice.call(c.querySelectorAll(s));
async function main(){
let canvas = $('#digitalClock');
let context2d = canvas.getContext('2d');
await new Promise((resolve) => {
if(window.requestIdleCallback){
requestIdleCallback(resolve, {timeout:3000});
}else{
setTimeout(resolve, 0, {didTimeout: false});
}
});
let animationFrameRate = await detectAnimationFrameRate(10); // 1. detect animation frame rate
let desiredFrameRate = 24;
let frameBits = buildFrameBitSet(animationFrameRate, desiredFrameRate); // 2. build a bit set
let handle;
let i = 0;
let count = 0, then, actualFrameRate = $('#actualFrameRate'); // debug-only
let draw = () => {
if(++i >= animationFrameRate){ // should use === if frameBits doesn't change dynamically
i = 0;
/* debug-only */
let now = performance.now();
let deltaT = now - then;
let fps = 1000 / (deltaT / count);
actualFrameRate.textContent = fps;
then = now;
count = 0;
}
if(frameBits[i] === 0){ // 3. lookup the bit set
handle = requestAnimationFrame(draw);
return;
}
count += 1; // debug-only
let d = new Date();
let text = d.getHours().toString().padStart(2, '0') + ':' +
d.getMinutes().toString().padStart(2, '0') + ':' +
d.getSeconds().toString().padStart(2, '0') + '.' +
(d.getMilliseconds() / 10).toFixed(0).padStart(2, '0');
context2d.fillStyle = '#000000';
context2d.fillRect(0, 0, canvas.width, canvas.height);
context2d.font = '36px monospace';
context2d.fillStyle = '#ffffff';
context2d.fillText(text, 0, 36);
handle = requestAnimationFrame(draw);
};
handle = requestAnimationFrame(() => {
then = performance.now();
handle = requestAnimationFrame(draw);
});
/* debug-only */
$('#animationFrameRate').textContent = animationFrameRate;
let frameRateInput = $('#frameRateInput');
let frameRateOutput = $('#frameRateOutput');
frameRateInput.addEventListener('input', (e) => {
frameRateOutput.value = e.target.value;
});
frameRateInput.max = animationFrameRate;
frameRateInput.value = frameRateOutput.value = desiredFrameRate;
frameRateInput.addEventListener('change', (e) => {
desiredFrameRate = +e.target.value;
frameBits = buildFrameBitSet(animationFrameRate, desiredFrameRate);
});
}
document.addEventListener('DOMContentLoaded', main);
<div>
Animation Frame Rate: <span id="animationFrameRate">--</span>
</div>
<div>
Desired Frame Rate: <input id="frameRateInput" type="range" min="1" max="60" step="1" list="frameRates" />
<output id="frameRateOutput"></output>
<datalist id="frameRates">
<option>15</option>
<option>24</option>
<option>30</option>
<option>48</option>
<option>60</option>
</datalist>
</div>
<div>
Actual Frame Rate: <span id="actualFrameRate">--</span>
</div>
<canvas id="digitalClock" width="240" height="48"></canvas>
Simplified explanation of an earlier answer, for when you want real-time, accurate throttling without the jank and without dropping frames like bombs. GPU and CPU friendly.
setInterval and setTimeout are both CPU-oriented, not GPU.
requestAnimationFrame is purely GPU-oriented.
Run them separately. It's simple and not janky. In your setInterval, update your math and build a little CSS script in a string. In your RAF loop, use that script only to update the new coordinates of your elements. Don't do anything else in the RAF loop.
The RAF is tied inherently to the GPU. Whenever the script does not change (e.g. because the setInterval runs far less often), Chromium-based browsers know they do not need to do anything, because there are no changes. So the on-the-fly script created on each "frame", say 60 times per second, may stay the same for, say, 1000 RAF GPU frames, but the browser knows nothing has changed, and the net result is that it wastes no energy on them. If you check in DevTools, you will see your GPU frame rate registers at the rate delineated by the setInterval.
Truly, it is just that simple. Separate them, and they will cooperate.
No jankies.
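As a rough sketch of that separation (the element, the motion math, and the 30-updates-per-second interval are placeholder choices of mine, not part of the answer):
const el = document.querySelector('#ball'); // hypothetical element
let css = ''; // the "little CSS script in a string"

// CPU side: setInterval does the math and rebuilds the string on its own clock.
setInterval(() => {
  const t = performance.now() / 1000;
  const x = 100 + 100 * Math.cos(t);
  const y = 100 + 100 * Math.sin(t);
  css = `translate(${x}px, ${y}px)`;
}, 1000 / 30); // the effective frame rate is set here

// GPU side: the RAF loop only applies the latest string, nothing else.
(function raf() {
  el.style.transform = css;
  requestAnimationFrame(raf);
})();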
I tried multiple solutions provided in this question. Even though they work as expected, the output doesn't look professional.
Based on my personal experience, I would highly recommend not controlling FPS on the browser side, especially with requestAnimationFrame. Doing so makes the frame rendering very choppy: users will clearly see the frames jumping, and in the end it won't look real or professional at all.
So my advice would be to control the FPS from the server side, at the time of sending, and simply render the frames as soon as you receive them on the browser side.
Note: if you still want to control it on the client side, avoid using setTimeout or the Date object in your FPS-control logic, because at high FPS these introduce their own delays through event-loop scheduling and object creation.
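For the server-driven approach, the client side could be as simple as the following sketch (the WebSocket endpoint and binary frame format are hypothetical; the point is only that the server paces the stream and the browser draws on arrival):
const canvas = document.querySelector('#remote'); // hypothetical canvas
const ctx = canvas.getContext('2d');
const ws = new WebSocket('wss://example.com/frames'); // hypothetical endpoint
ws.onmessage = async (e) => {
  // No client-side throttling: draw each frame as soon as it arrives.
  const frame = await createImageBitmap(e.data); // assumes e.data is an image Blob
  ctx.drawImage(frame, 0, 0);
};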
Here's a good explanation I found on CreativeJS.com: wrap a setTimeout() call inside the function passed to requestAnimationFrame. My concern with a "plain" requestAnimationFrame would be: "what if I only want it to animate three times a second?" Even with requestAnimationFrame (as opposed to setTimeout), it still wastes some amount of "energy" (meaning the browser code is doing something, and possibly slowing the system down) 60 or 120 or however many times a second, as opposed to only two or three times a second (as you might want).
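The pattern described there looks roughly like this (fps being whatever low rate you want):
const fps = 3; // e.g. animate only three times per second

function draw() {
  setTimeout(() => {
    requestAnimationFrame(draw);
    // ... drawing code goes here ...
  }, 1000 / fps);
}
draw();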
Most of the time I run my browsers with JavaScript intentionally off for just this reason. But I'm using Yosemite 10.10.3, and I think there's some kind of timer problem with it - at least on my old system (relatively old, meaning 2011).