Get animation status in microseconds - javascript

I'm animating a transition through 36,000 colours over 18 seconds using JavaScript. When the user presses a button, they should be told which colour the animation was on at the moment the button was clicked. However, JavaScript measures time in milliseconds, which basically means that if the user presses the button at t = 10 ms the animation will be on colour 20, and if they press it at t = 11 ms it will already be on colour 22.
Is there a way to measure time more accurately in JavaScript, so that, for example, I can tell the button was pressed at t = 10.5 ms and the animation was therefore on colour 21?

Newer browser versions support performance.now, which gives a time measured in milliseconds, accurate to a fraction of a millisecond. performance.now returns a DOMHighResTimeStamp as the time value, which has the following property:
The unit is milliseconds and should be accurate to 5 µs (microseconds).
var t1 = performance.now();
// ... the code whose duration you want to measure ...
var t2 = performance.now();
console.log('passed ' + (t2 - t1) * 1000.0 + ' microseconds');
A polyfill can be found here and the list of the supported browsers can be found here.
Here's a good article on performance.now by Paul Irish
One thing to note here is that the browser usually renders the screen at 60 frames per second, or once every ~16.67 milliseconds, regardless of whether the page is performing animations. This means that what you're trying to do probably isn't possible: although JS code can run in well under a millisecond, rendered frames will always be separated by at least those ~16.67 ms. Since you're trying to display 2 colours per millisecond, only roughly every 33rd colour will actually be shown on screen.
To update the colors at the frame rate of your browser, use requestAnimationFrame.
Note: The 60fps is most common but the browser will adjust to the refresh rate of the screen.
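As a minimal sketch of how this could fit together (colours, showColour and button are hypothetical placeholders, not from the question), both the animation and the click handler can be driven from the elapsed high-resolution time:
var start = performance.now();
var button = document.querySelector('button'); // placeholder element

function frame(now) {
  var elapsedMs = now - start;                            // rAF passes a DOMHighResTimeStamp
  var index = Math.min(35999, Math.floor(elapsedMs * 2)); // 36,000 colours / 18,000 ms = 2 per ms
  showColour(colours[index]);                             // placeholder for the actual drawing code
  if (elapsedMs < 18000) requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

button.addEventListener('click', function () {
  var elapsedMs = performance.now() - start;
  console.log('colour index at click: ' + Math.min(35999, Math.floor(elapsedMs * 2)));
});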

Related

How do we schedule a series of oscillator nodes that play for a fixed duration with a smooth transition from one node’s ending to the other?

We are trying to map an array of numbers to sound, and are following the approach from the ‘A Tale of 2 Clocks’ article to schedule oscillator nodes to play in the future. Each oscillator node exponentially ramps to a frequency value corresponding to the data over a fixed duration (this.pointSonificationLength). However, there is a clicking noise as each node stops, as described in this article by Chris Wilson. That example covers smoothly stopping a single oscillator, but we haven't been able to adapt the approach to smooth the transition from one oscillator node to the next.
To clarify some of the values: pointTime refers to the node's position in the scheduling order, starting from 0, i.e. as the nodes are scheduled they get pointTime = 0, 1, 2, and so forth. this.pointSonificationLength is the constant that determines how long each node plays for.
The first general approach was to decrease the gain towards the end of each node so that the change is almost imperceptible, as documented in the article above. We tried implementing both methods, setTargetAtTime as well as a combination of exponentialRampToValueAtTime and setValueAtTime, but neither removed the click.
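For concreteness, the two fade-out shapes we tried looked roughly like this (a simplified sketch rather than our exact code; amp is the GainNode, stopTime is when the oscillator stops, and fadeTime and the 0.015 time constant are placeholder values):
// Attempt 1: exponential decay toward zero via setTargetAtTime
amp.gain.setTargetAtTime(0, stopTime - fadeTime, 0.015)
// Attempt 2: hold full gain until the fade starts, then ramp it down explicitly
amp.gain.setValueAtTime(1, stopTime - fadeTime)
amp.gain.exponentialRampToValueAtTime(0.0001, stopTime)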
We were able to remove the click by changing the approach and scheduling the gain to start ramping to 0 at 100 ms before the node ends.
However, when we scheduled more than one node, there was now a pause between nodes. Starting the transition 10 ms before the end removed the gap, but there was still a quiet click.
Our next approach was to have each node fade in as well as fade out. We added this.delay as a constant for how long each fade in and fade out takes.
Below is where we’re currently at in the method to schedule an oscillator node for a given time with a given data point. The actual node scheduling is contained in another method inside the class.
private scheduleOscillatorNode(dataPoint: number, pointTime: number) {
  let osc = this.audioCtx.createOscillator()
  let amp = this.audioCtx.createGain()

  // Start at the previous frequency and ramp exponentially to this data point's frequency.
  osc.frequency.value = this.previousFrequencyOfset
  osc.frequency.exponentialRampToValueAtTime(dataPoint, pointTime + this.pointSonificationLength)
  osc.onended = () => this.handleOnEnded()
  osc.connect(amp).connect(this.audioCtx.destination)

  // Fade in over this.delay, hold for the sonification length, then fade out over this.delay.
  let nodeStart = pointTime + this.delay * this.numNode;
  amp.gain.setValueAtTime(0.00001, nodeStart);
  amp.gain.exponentialRampToValueAtTime(1, nodeStart + this.delay);
  amp.gain.setValueAtTime(1, nodeStart + this.delay + this.pointSonificationLength);
  amp.gain.exponentialRampToValueAtTime(0.00001, nodeStart + this.delay * 2 + this.pointSonificationLength);

  osc.start(nodeStart)
  osc.stop(nodeStart + this.delay * 2 + this.pointSonificationLength)

  this.numNode++;
  this.audioQueue.enqueue(osc)
  // code to keep track of playback
}
We notice a slight difference between the times we compute by hand and the times we see when logging them, but the difference is too small to be perceivable, so we don't believe it is what causes the click. For example, instead of ending at 6.7 seconds a node would end at 6.699999999999999 seconds, and a node meant to end at 5.6 seconds would actually end at 5.6000000000000005 seconds.
Is there a way to account for these delays and schedule nodes such that the transition occurs smoothly? Alternatively, is there a different approach that we need to use to make these transitions smooth? Any suggestions and pointers to code samples or other helpful resources would be of great help!

setInterval is not run at exact interval

If you create a very simple program that has a setInterval with 1 second delay, and you log the times its function is called, you will notice that the interval 'drifts'.
Basically, it actually takes (1,000ms + some amount of time) between each call.
For this program, it actually takes ~1,005ms between each call.
What causes the drift?
Is it taking 5ms to requeue setInterval?
Is it the length of the time it takes to run the function? (I doubt this, but having trouble concluding.)
Why does setInterval behave this way, and not just base itself on some clock time? (e.g. if you have 1,000ms delay and you started at time 3... just check if 1,003 then 2,003 and so on has elapsed?)
Example:
const startTime = new Date().valueOf();

function printElapsedTime(startTime) {
  console.log(new Date().valueOf() - startTime);
}

let intervalObj = setInterval(printElapsedTime, 1000, startTime);
Output:
1005
2010
3015
4020
So you are not sync'd to 1 second anymore. Since it drifts by about 5, after 100 runs it will be running a half second 'later' than expected.
This question discusses how to avoid this drift, but does not explain WHY this drift is happening. (As in it does not say that setInterval is recursively adding itself to the event queue after each call - which takes 3ms ... which is just a guess at the drift cause).
While no JavaScript running in a standard browser claims to be real-time (as pointed out in several comments), there are steps you can take to keep things from getting as out of hand as they do in the question's example (where the error is cumulative).
To start with an experiment, I ran this on Windows 10 Chrome:
const startTime = new Date().valueOf();

function printElapsedTime(startTime) {
  let curTime = new Date().valueOf();
  console.log(curTime - startTime);
}

let intervalObj = setInterval(printElapsedTime, 1000, startTime);
<div id="show">0</div>
This gave a fairly consistent error each second, and around the one-minute mark there was no cumulative drift.
However, using Firefox on the same system, the drift was cumulative and already quite significant by the one-minute mark.
So the question is, can anything be done to make it a bit better across browsers?
This snippet ditches setInterval and instead uses setTimeout on each invocation:
const startTime = new Date().valueOf();
let nextExpected = startTime + 1000;

function printElapsedTime(startTime) {
  let curTime = new Date().valueOf();
  console.log(curTime - startTime);
  // Schedule the next call relative to when this one *should* have fired,
  // so individual timing errors don't accumulate.
  let nextInterval = 1000 + nextExpected - curTime;
  setTimeout(printElapsedTime, nextInterval, startTime);
  nextExpected = curTime + nextInterval;
}

let intervalObj = setTimeout(printElapsedTime, 1000, startTime);
<div id="show">0</div>
On Firefox this gave no cumulative drift, and the error around the one-minute mark was no worse than before.
So, in an attempt to actually answer the question:
Computers have other duties to attend to and cannot guarantee to run a timeout callback at an exact time (though the spec does require that it not run before the interval has elapsed). In the code above, console.log takes time and setting up a new timeout (in the final example) takes time, but the laptop or phone is also dealing with lots of other things at the same time: housekeeping in the background, listening for interrupts, and so on.
Different browsers seem to treat setInterval differently; the spec doesn't seem to say what, if anything, they should do about cumulative drift. From the experiments here, Chrome/Edge (at least on my Windows 10 laptop) do some mitigation, so the drift isn't cumulative, whereas Firefox doesn't seem to adjust and the drift can become significant.
It would be interesting to know whether others on different systems get equivalent results. Anyway, the basic message is: don't rely on such timeouts; this is not a real-time system.
Long story short: no desktop operating system is a real-time OS.
https://en.m.wikipedia.org/wiki/Real-time_operating_system
Thus, executing a task such as calling your callback function is not guaranteed to happen at an exact time. The OS does its best to juggle all the tasks and to manage power and resource constraints so as to optimize performance as a whole. As a result, timings float around a little.
Interestingly, you get a consistent 5 ms shift; I have no explanation for that.

Javascript Date.now() function [duplicate]

I got this code over here:
var date = new Date();

setTimeout(function(e) {
  var currentDate = new Date();
  if (currentDate - date >= 1000) {
    console.log(currentDate, date);
    console.log(currentDate - date);
  } else {
    console.log("It was less than a second!");
    console.log(currentDate - date);
  }
}, 1000);
On my computer it always executes correctly, logging 1000 to the console. Interestingly, on another computer the same code fires the timeout callback in less than a second, and the difference currentDate - date is between 980 and 998.
I know the existence of libraries that solve this inaccuracy (for example, Tock).
Basically, my question is: what are the reasons setTimeout does not fire at exactly the given delay? Could it be that the computer is too slow and the browser automatically tries to adapt to the slowness and fires the event early?
It's not supposed to be particularly accurate. There are a number of factors limiting how soon the browser can execute the code; quoting from MDN:
In addition to "clamping", the timeout can also fire later when the page (or the OS/browser itself) is busy with other tasks.
In other words, setTimeout as usually implemented is only meant to execute after at least the given delay, once the browser's thread is free to run it.
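A quick way to see that last point in action (just an illustration, not from the original answer): deliberately blocking the main thread pushes a 100 ms timeout far past its nominal delay.
var scheduled = Date.now();
setTimeout(function () {
  console.log('fired after ' + (Date.now() - scheduled) + ' ms'); // ~500 ms here, not 100
}, 100);

var busyUntil = Date.now() + 500;
while (Date.now() < busyUntil) { /* block the event loop for ~500 ms */ }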
However, different browsers may implement it in different ways. Here are some tests I did:
var date = new Date();

setTimeout(function(e) {
  var currentDate = new Date();
  console.log(currentDate - date);
}, 1000);

// Browser   Test1   Test2   Test3   Test4
// Chrome     998    1014     998     998
// Firefox   1000    1001    1047    1000
// IE 11     1006    1013    1007    1005
Perhaps the sub-1000 times from Chrome could be attributed to inaccuracy in the Date type, or perhaps Chrome uses a different strategy for deciding when to execute the code: maybe it tries to fit the callback into the nearest time slot, even if the timeout delay hasn't quite completed.
In short, you shouldn't use setTimeout if you expect reliable, consistent, millisecond-scale timing.
In general, computer programs are highly unreliable when trying to execute things with higher precision than 50 ms. The reason for this is that even on an octacore hyperthreaded processor the OS is usually juggling several hundreds of processes and threads, sometimes thousands or more. The OS makes all that multitasking work by scheduling all of them to get a slice of CPU time one after another, meaning they get 'a few milliseconds of time at most to do their thing'.
Implicitly this means that if you set a timeout for 1000 ms, chances are far from small that the current browser process won't even be running at that exact point in time, so it's perfectly normal for the browser not to notice until 1005, 1010 or even 1050 milliseconds that it should be executing the given callback.
Usually this is not a problem, it happens, and it's rarely of utmost importance. If it is, all operating systems supply kernel level timers that are far more precise than 1 ms, and allow a developer to execute code at precisely the correct point in time. JavaScript however, as a heavily sandboxed environment, doesn't have access to kernel objects like that, and browsers refrain from using them since it could theoretically allow someone to attack the OS stability from inside a web page, by carefully constructing code that starves other threads by swamping it with a lot of dangerous timers.
As for why the test yields 980 I'm not sure - that would depend on exactly which browser you're using and which JavaScript engine. I can however fully understand if the browser just manually corrects a bit downwards for system load and/or speed, ensuring that "on average the delay is still about the correct time" - it would make a lot of sense from the sandboxing principle to just approximate the amount of time required without potentially burdening the rest of the system.
Someone please correct me if I am misinterpreting this information:
According to a post from John Resig regarding the inaccuracy of performance tests across platforms (emphasis mine)
With the system times constantly being rounded down to the last queried time (each about 15 ms apart) the quality of performance results is seriously compromised.
So there is up to a 15 ms fudge on either end when comparing to the system time.
I had a similar experience.
I was using something like this:
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000));

setTimeout(function () {
  CountDownClock(ElementID, RelativeTime);
}, iMillSecondsTillNextWholeSecond); // Wait until the next whole second to start.
I noticed it would skip a second every couple of seconds; sometimes it would go longer without skipping. However, I'd still catch it skipping after 10 or 20 seconds, and it just looked rickety.
I thought, "Maybe the timeout is too slow or waiting for something else?"
Then I realized, "Maybe it's too fast, and the timers the browser is managing are off by a few milliseconds?"
After adding +1 millisecond to my variable I only saw it skip once.
I ended up adding +50 ms, just to be on the safe side.
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000) + 50);
I know it's a bit hacky, but my timer is running smoothly now. :)
JavaScript timers aren't exact, but you can measure elapsed time precisely. Here's one approach: save Date.now() when you start waiting, create an interval with a short update period, and check the difference between the dates on each tick.
Example:
const startDate = Date.now()

const intervalId = setInterval(() => {
  const currentDate = Date.now()
  if (currentDate - startDate >= 1000) {
    // at least a second has passed
    clearInterval(intervalId)
    return
  }
  // less than a second has passed
}, 50)

Do we get all the data with createMediaStreamSource in webaudio?

I am using Web Audio with JavaScript, and in this simple example (to be used with Google Chrome),
https://www-fourier.ujf-grenoble.fr/~faure/enseignement/javascript/code/web_audio/ex_microphone_to_array/
the data are collected from the microphone into an array in real time.
Then we compare the true time (t1) with the time represented by the data (t2), and they differ by a fixed ratio t2/t1 = 1.4.
Remark: here, true time t1 means the duration measured by the clock, i.e. obtained with new Date().getTime(), whereas
time t2 = N*Dt, where N is the number of data samples obtained from the microphone and Dt = 1/(sample rate) = 1/44100 s is the time between two samples.
My question is: does this mean that the sample rate is not 44100 Hz but 30700 Hz * 2 (i.e. with two channels)?
Or are there some repetitions within the data?
Another related question please: is there a way to check that during such a real time acquisition process, we have not lost any data?
From a quick glance at your test code, you are using an AnalyserNode to determine t2, and you call the function F3() via rAF. That happens about every 16.6 ms, or 732 samples at 44.1 kHz, but you increment t2 by N = 1024 frames each time. Hence your t2 value is about 1.4 times larger than the actual number of frames (1024 / 732 ≈ 1.4), which is exactly what you're seeing.
If you really want to measure how many samples you've received, you have to do it synchronously in the audio graph, so use either a ScriptProcessorNode or an AudioWorklet to count how many samples have been processed. You can then increment t2 by the correct amount, and it should match your t1 values more closely. Note, though, that the clock driving t1 is very likely different from the audio clock driving the audio system; they will drift apart over time, although the drift is probably small as long as you don't run this for days at a time.
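As a sketch of the counting approach (the processor name, file name, and micSource are placeholders, not part of the linked example), an AudioWorklet can count the frames it actually receives and report them back to the main thread:
// sample-counter.js -- runs on the audio thread and counts frames actually processed.
class SampleCounter extends AudioWorkletProcessor {
  constructor() {
    super();
    this.frames = 0;
  }
  process(inputs) {
    const input = inputs[0];
    if (input.length > 0) {
      this.frames += input[0].length; // typically 128 frames per render quantum
      this.port.postMessage(this.frames);
    }
    return true; // keep the processor alive
  }
}
registerProcessor('sample-counter', SampleCounter);

// Main thread: derive t2 from the frames actually processed, not from how often rAF fires.
audioCtx.audioWorklet.addModule('sample-counter.js').then(() => {
  const counter = new AudioWorkletNode(audioCtx, 'sample-counter');
  micSource.connect(counter);
  counter.connect(audioCtx.destination); // keep the node pulled; it outputs silence
  counter.port.onmessage = (e) => {
    const t2 = e.data / audioCtx.sampleRate; // seconds of audio received so far
  };
});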

Is it safe to assume 60 fps for browser rendering?

I want to make a JavaScript animation take 5 seconds to complete using requestAnimationFrame().
I don't want a strict and precise timing, so anything close to 5 seconds is OK and I want my code to be simple and readable, so solutions like this won't work for me.
My question is, is it safe to assume most browsers render the page at 60 fps? i.e. if I want my animation to take 5 seconds to complete, I'll divide it to 60 * 5 = 300 steps and with each call of function draw() using requestAnimationFrame(), draw the next step of animation. (Given the fact the animation is pretty simple, just moving a colored div around.)
By the way, I can't use jQuery.
Edit: Let me rephrase the question this way: Do all browsers 'normally' try to render the page at 60 fps? I want to know if Chrome for example renders at 75 fps or Firefox renders at 70 fps.
(Normal condition: CPU isn't highly loaded, RAM is not full, there are no storage failures, room is properly ventilated and nobody tries to throw my laptop out the window.)
Relying on 60 frames per second is very unsafe, because the browser isn't always running under the same conditions; even if it tries to render the page at the highest frame rate possible, there's always a chance of the CPU or GPU being busy with something else, causing the frame rate to drop.
If you want to rely on FPS (although I wouldn't suggest it), you should first detect the current fps and adjust the speed of your animation frame by frame. Here's an example:
var lastCall, fps;

function detectFps() {
  var delta;
  if (lastCall) {
    delta = (Date.now() - lastCall) / 1000;
    lastCall = Date.now();
    fps = 1 / delta;
  } else {
    lastCall = Date.now();
    fps = 0;
  }
}

function myFunc() {
  detectFps();
  // Calculate the speed using the var fps
  // Animate...
  requestAnimationFrame(myFunc);
}

detectFps(); // Initialize fps
requestAnimationFrame(myFunc); // Start your animation
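For instance (a sketch of how the measured value might be used; box and basePixelsPerFrame are placeholders, not part of the answer above), the per-frame step can be scaled by 60 / fps so the on-screen speed stays roughly constant when the real frame rate differs from 60 fps:
var box = document.getElementById('box'); // placeholder element to animate
var basePixelsPerFrame = 2;               // intended movement per frame at 60 fps
var x = 0;

function animate() {
  detectFps();
  // Fall back to the base step on the first frame, when fps is still 0.
  var step = fps > 0 ? basePixelsPerFrame * (60 / fps) : basePixelsPerFrame;
  x += step;
  box.style.transform = 'translateX(' + x + 'px)';
  requestAnimationFrame(animate);
}

requestAnimationFrame(animate);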
It depends on the GPU and monitor combination. I have a good GPU and a 120 Hz monitor, so it renders at 120 fps. If, mid-render, I move the window to a 60 Hz monitor, it maxes out at 60 fps.
Another factor, on some browser/OS combinations, is the integrated GPU being used instead of the discrete GPU.
As already stated by others, it isn't.
But if your animation needs to end in approximately 5 seconds and it's not crucial to render every frame, you can use the old setTimeout() way. You may miss the target by a few milliseconds, and some frames of your animation will be skipped (not rendered) because of the fps mismatch, but this can be a "good enough" solution; especially with an animation as simple as you describe, there's a chance users won't even notice the glitch.
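A sketch of that idea (draw(i) is a hypothetical function that renders step i of the animation; nothing here comes from the question itself):
var steps = 300;     // 60 fps * 5 s
var duration = 5000; // target length in ms
var i = 0;

function tick() {
  draw(i); // placeholder: render step i of the 300-step animation
  i++;
  if (i < steps) {
    setTimeout(tick, duration / steps); // ~16.7 ms per step; overhead makes the total run slightly long
  }
}

tick();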
It's not safe to assume everyone can handle animation.
People will have different needs.
A lot of common animations, and common web design practices, give me awful migraines, so I set my browser to 1 frame per second to kill the animation without causing too much fast flashing.
