I am using webaudio with javascript, and this simple example (to be used with google-chrome),
https://www-fourier.ujf-grenoble.fr/~faure/enseignement/javascript/code/web_audio/ex_microphone_to_array/
the data are collected from the microphone into an array, in real time.
Then we compare the true elapsed time (t1) with the time represented by the collected data (t2), and they differ by a fixed ratio t2/t1 = 1.4.
Remark: here, true time t1 means the duration measured by the wall clock, i.e. obtained from new Date().getTime(), whereas
time t2 = N*Dt, where N is the number of samples obtained from the microphone and Dt = 1/(sample rate) = 1/44100 s is the time between two consecutive samples.
My question is: does this mean that the sample rate is not 44100 Hz but 30700 Hz * 2 (i.e. with two channels)?
Or are there repetitions within the data?
Another related question: is there a way to check that, during such a real-time acquisition process, we have not lost any data?
From a quick glance at your test code, you are using an AnalyserNode to determine t2, and you call the function F3() via requestAnimationFrame (rAF). This happens about every 16.6 ms, or roughly 732 samples at 44.1 kHz. But you increment t2 by N = 1024 frames each time, so your t2 value is about 1024/732 ≈ 1.4 times larger than the actual number of frames elapsed, which is exactly the ratio you are seeing.
If you really want to measure how many samples you've received, you have to do it synchronously in the audio graph: use either a ScriptProcessorNode or an AudioWorklet to count how many samples have actually been processed. You can then increment t2 by the correct amount, and it should match your t1 values much more closely. But note that the clock driving t1 is very likely different from the audio clock driving the audio system. They will drift apart over time, although the drift is probably quite small as long as you don't run this for days at a time.
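For illustration, here is a minimal sketch of the AudioWorklet approach. The processor name 'sample-counter', the file name, and the message-per-render-quantum reporting are illustrative choices, not part of your code; a real implementation would probably batch the messages.

// sample-counter-processor.js (illustrative file name) -- runs on the audio rendering thread
class SampleCounterProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.frames = 0;
  }
  process(inputs, outputs, parameters) {
    const input = inputs[0];
    if (input.length > 0) {
      this.frames += input[0].length;        // 128 frames per render quantum
      this.port.postMessage(this.frames);    // report the running total to the main thread
    }
    return true;                             // keep the processor alive
  }
}
registerProcessor('sample-counter', SampleCounterProcessor);

// main thread: route the microphone through the counter
async function startCounting() {
  const ctx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  await ctx.audioWorklet.addModule('sample-counter-processor.js');
  const counter = new AudioWorkletNode(ctx, 'sample-counter');
  counter.port.onmessage = (e) => {
    const t2 = e.data / ctx.sampleRate;      // seconds of audio actually processed
  };
  ctx.createMediaStreamSource(stream).connect(counter);
}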
How can I create an accurate timer in Node.js? I am trying to make a chess website where you can play timed games against other players.
I am currently using setInterval() and am not sure how accurate that is. I also need a more accurate timer on the server that can tell, with a precision of a hundredth of a second, whether a move was made in time and when the game has ended.
Thanks in advance
For ordinary time-of-day, Date.now() gives you the date and time in milliseconds as a Javascript number. It has millisecond resolution. Its precision depends on your underlying operating system, but is typically between 10 and 50 milliseconds.
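Since the use case is a chess clock, one way to sidestep setInterval inaccuracy is to derive every decision from timestamps rather than from counting ticks. The following is only a sketch of that idea; the class and method names are made up.

// Hypothetical sketch: track each player's clock with timestamps instead of counting ticks.
class MoveClock {
  constructor(totalMs) {
    this.remainingMs = totalMs;
    this.startedAt = null;
  }
  start() {                              // call when it becomes this player's turn
    this.startedAt = Date.now();
  }
  stop() {                               // call when the move arrives; true if it was in time
    this.remainingMs -= Date.now() - this.startedAt;
    this.startedAt = null;
    return this.remainingMs > 0;
  }
}

// setInterval is then only needed to refresh the display or detect a flag fall;
// any jitter in its callbacks no longer accumulates into the measured move times.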
You can use process.hrtime.bigint(), described here, to retrieve elapsed time in nanoseconds.
Like this:
const then = process.hrtime.bigint()
/* do something you want to measure */
const now = process.hrtime.bigint()
// convert to Number before dividing; BigInt cannot be mixed with Number in arithmetic
const elapsedTimeInSeconds = Number(now - then) / 1_000_000_000
But be aware of this. Date.now() gives you a number of milliseconds since the UNIX epoch so it can be used to represent calendar dates and clock times. process.hrtime.bigint() gives you the number of nanoseconds since some arbitrary start time in the recent past. So it's only really useful for measuring elapsed times within nodejs processes.
And, as I'm sure you're aware, Javascript is single-threaded, so elapsed (wall-clock) time doesn't equal CPU time, particularly if the code you're measuring performs any await operations.
You could also try process.cpuUsage(), described here. Something like this:
const then = process.cpuUsage()
/* do something you want to measure */
// passing the previous value returns the delta directly
const diff = process.cpuUsage(then)
const userTimeInSeconds = diff.user / 1_000_000
const systemTimeInSeconds = diff.system / 1_000_000
Explaining the difference between user and system CPU time is beyond the scope of a Stack Overflow answer, but you can read about it.
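To illustrate the wall-clock vs. CPU-time point above, here is a small sketch (assuming Node 16+ for the promise-based timers): it measures both clocks around an awaited delay, and the wall-clock time comes out around 500 ms while the CPU time stays near zero.

const { setTimeout: sleep } = require('node:timers/promises')

async function demo() {
  const wallStart = process.hrtime.bigint()
  const cpuStart = process.cpuUsage()

  await sleep(500)                                    // waits ~500 ms but burns almost no CPU

  const wallMs = Number(process.hrtime.bigint() - wallStart) / 1e6
  const cpu = process.cpuUsage(cpuStart)              // delta since cpuStart
  const cpuMs = (cpu.user + cpu.system) / 1000
  console.log(`wall: ~${wallMs.toFixed(1)} ms, cpu: ~${cpuMs.toFixed(1)} ms`)
}

demo()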
Look at this code:
function wait(time) {
  let i = 0;
  let a = Date.now();
  let x = a + (time || 0);
  let b;
  while ((b = Date.now()) <= x) ++i;
  return i;
}
If I run it in a browser (specifically Google Chrome, but I don't think it matters) by calling wait(1000), the page will obviously freeze for a second and then return the accumulated value of i.
Let's say it is 10 000 000 (I'm getting values close to that). The value varies every run, so let's take an average.
Did I just get the current number of operations per second of my machine's processor?
Not at all.
What you get is the number of loop cycles completed by the Javascript process in a certain time. Each loop cycle consists of:
A call to Date.now(), which queries the system clock
A comparison of two numbers
An increment of the counter i
Incrementing i is probably the least expensive of these, so the function is not really reporting how long the increment takes.
Aside from that, note that the machine is doing a lot more than running a Javascript process. You will see interference from all sorts of activity going on in the computer at the same time.
When running inside a Javascript process, you're simply too far away from the processor (in terms of software layers) to make that measurement. Beneath Javascript, there's the browser and the operating system, each of which can (and will) make decisions that affect this result.
No. You can get the number of language operations per second, though the actual number of machine operations per second on a whole processor is more complicated.
Firstly, the processor is not wholly dedicated to the browser; it is almost certainly switching back and forth between prioritized processes. On top of that, memory access is opaque to you, and the processor spends extra operations managing memory (page flushing, etc.). And physical factors mean that the real clock rate of the processor is dynamic... You can see it's pretty complicated already ;)
To really calculate the number of machine operations per second you need to measure the clock rate of the processor and multiply it by the number of instructions per cycle the processor can perform. Again this varies, but the manufacturer's specs will likely be a good enough estimate :P
If you wanted to use a program to measure this, you'd need to somehow dedicate 100% of the processor to your program and have it run a predictable set of instructions with no other hangups (like memory management). Then you need to include the number of instructions it takes to load the program instructions into the code caches. This is not really feasible however.
As others have pointed out, this will not tell you the number of operations the processor performs per second, for the reasons given in the other answers. I do, however, think a similar experiment could be set up to estimate the number of operations executed by the JavaScript interpreter running in your browser. For example, given a function factorial(n) that runs in O(n), you could execute factorial(100) repeatedly over the course of a minute.
function factorial(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) result *= i;   // ~n multiplications, i.e. O(n)
  return result;
}

function test() {
  let start = Date.now();
  let end = start + 60 * 1000;                // run for one minute
  let numberOfExecutions = 0;
  while (Date.now() < end) {
    factorial(100);
    numberOfExecutions++;
  }
  // each call is ~100 operations, spread over 60 seconds
  return (numberOfExecutions * 100) / 60;
}
The idea here is that factorial is by far the most time consuming function in the code. And since factorial runs in O(n) we know factorial(100) is approximately 100 operations. Note that this will not be exact and that larger numbers will make for better approximations. Also remember that this will estimate the number of operations executed by your interpreter and not your processor.
There is a lot of truth in all the previous answers, but I want to invert the reasoning a little, because I believe it is easier to understand that way.
I believe the fairest way to measure it is with the most basic loop possible, not relying on any dates or extra function calls inside the loop, and calculating the values afterwards.
You will see that the shorter the run, the bigger the relative overhead is. It takes a small, fixed amount of time to start and finish each call, but past a certain size they all converge on a number that can reasonably be taken as how many operations per second JavaScript can run.
My example:
const oneMillion = 1_000_000;
const tenMillion = 10_000_000;
const oneHundredMillion = 100_000_000;
const oneBillion = 1_000_000_000;
const tenBillion = 10_000_000_000;
const oneHundredBillion = 100_000_000_000;
const oneTrillion = 1_000_000_000_000;
function runABunchOfTimes(times) {
  console.time('timer')
  for (let i = 0; i < times; ++i) {}
  console.timeEnd('timer')
}
I tried this on a 2020 MacBook that already had a lot of load on it, with many processes running; these were my results:
At the end I take the time the console reports for the run and divide the number of iterations by it. The oneTrillion and oneBillion runs come out virtually the same, but when you get down to oneMillion and 1000 you can see they are not as performant, because the fixed overhead of setting up the for loop dominates.
We usually try to stay away from O(n^2) and slower algorithms precisely because we do not want to get anywhere near that maximum. If you were to perform a find inside a map over an array of all the cities in the world (around 10_000 according to Google, I haven't counted), you would already be at 100_000_000 iterations, and they would certainly not be as cheap as iterating over an empty body like in my example. Your code would then take minutes to run, but I am sure you are aware of this, and that is why you posted the question in the first place.
Calculating how long it would take is tricky not only because of the above, but also because you cannot predict which device will run your function. Nowadays a page can be opened on my TV, my watch, or a Raspberry Pi, and none of them would be nearly as fast as the computer I wrote these functions on. But if I were to benchmark a device, I would use something like the function above, since it is the simplest loop operation I could think of.
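If you prefer a number you can use directly instead of reading the console.time output, a small variation using performance.now() (available in browsers and recent Node versions) could look like the sketch below; keep in mind the JIT may optimize an empty loop aggressively, so treat the result as a rough upper bound.

function iterationsPerSecond(times) {
  const start = performance.now();
  for (let i = 0; i < times; ++i) {}            // same empty loop as runABunchOfTimes
  const elapsedSeconds = (performance.now() - start) / 1000;
  return times / elapsedSeconds;
}

console.log(iterationsPerSecond(oneBillion));   // uses the constant defined above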
I'm using Recorder.js to record audio from Google Chrome desktop and mobile browsers. In my specific use case I need to record exactly 3 seconds of audio, starting and ending at a specific time.
Now I know that when recording audio, your soundcard cannot work in realtime due to hardware delays, so there is always a memory buffer which allows you to keep up recording without hearing jumps/stutters.
Recorder.js allows you to configure the bufferLen variable exactly for this, while sampleRate is taken automatically from the audio context object. Here is a simplified version of how it works:
var context = new AudioContext();
var recorder;

navigator.getUserMedia({audio: true}, function (stream) {
  recorder = new Recorder(context.createMediaStreamSource(stream), {
    bufferLen: 4096
  });
});

function recordLoop() {
  recorder.record();
  window.setTimeout(function () {
    recorder.stop();
  }, 3000);
}
The issue I'm facing is that record() does not compensate for the buffer latency, and neither does stop(). So instead of getting a three-second sound, I get 2.97 seconds, and the start is cut off.
This means my recordings don't start at the same point, and when I loop them, the loops are different lengths depending on the device's latency!
There are two potential solutions I see here:
Adjust Recorder.js code to offset the buffer automatically against your start/stop times (maybe add new startSync/stopSync functions)
Calculate the latency and create two offset timers to start and stop Recorder.js at the correct points in time.
I'm trying solution 2, because solution 1 requires knowledge of buffer arrays which I don't have :( I believe the calculation for latency is:
var bufferSize = 4096;
var sampleRate = 44100;
var latency = (bufferSize / sampleRate) * 2; // 0.18575963718820862 secs
However when I run these calculations in a real test I get:
var duration = 2.972154195011338 secs
var latency = 0.18575963718820862 secs
var total = duration + latency // 3.1579138321995464 secs
Something isn't right, it doesn't make 3 seconds and it's beginning to confuse me now! I've created a working fork of Recorder.js demo with a log:
http://kmturley.github.io/Recorderjs/
Any help would be greatly appreciated. Thanks!
I'm a bit confused by your concern for the latency. Yes, it's true that the minimum possible latency is going to be related to the length of the buffer, but there are many other latencies involved. In any case, the latency has nothing to do with the recording duration, which seems to me to be what your question is about.
If you want to record an exactly 3-second-long buffer at 44100 Hz, that is 44100*3 = 132,300 samples. The buffer size is 4096 samples, and the system is only going to record a whole multiple of that number. Given that, the closest you are going to get is to record either 32 or 33 complete buffers, which gives either 131072 samples (about 2.97 seconds) or 135168 samples (about 3.065 seconds).
You have a couple options here.
Choose a buffer length that evenly divides the sample rate. e.g. 11025. You can then record exactly 12 buffers.
Record slightly longer than the 3.0 seconds you need and then throw the extra 2868 samples away.
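For what it's worth, option 2 can be a simple slice once you have the raw per-channel samples. This is only a sketch; the getBuffer callback shown in the comment may differ between Recorder.js forks.

var WANTED_SAMPLES = 3 * 44100;                        // exactly 132300 samples

function trimTo3Seconds(channelBuffers) {              // array of Float32Arrays, one per channel
  return channelBuffers.map(function (samples) {
    return samples.subarray(0, WANTED_SAMPLES);        // drop the extra ~2868 samples
  });
}

// e.g. recorder.getBuffer(function (buffers) {
//   var trimmed = trimTo3Seconds(buffers);            // [left, right]
// });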
I am looking for a way to calculate the transmitted bytes per second inside a repeatedly invoked function (below). I do not want to simply divide the total transmitted bytes by the overall elapsed time: after running for a few minutes, that average can no longer reflect rapid speed changes.
The preset (invoked approximately every 50ms):
function uploadProgress(loaded, total) {
  var bps = ?;
  $('#elem').html(bps + ' bytes per second');
}
How to obtain the average bytes per second for (only) the last n seconds and is it a good idea?
What other practices for calculating a non-flickering but precise bps value are available?
Your first idea is not bad: it's called a moving average, and provided you call your update function at regular intervals, you only need to keep a queue (a FIFO buffer) of constant length:
var WINDOW_SIZE = 10;
var queue = [];

function updateQueue(newValue) {
  // FIFO with a fixed length
  queue.push(newValue);
  if (queue.length > WINDOW_SIZE)
    queue.shift();
}

function getAverageValue() {
  // if the queue has less than 10 items, decide if you want to calculate
  // the average anyway, or return an invalid value to indicate "insufficient data"
  if (queue.length < WINDOW_SIZE) {
    // you probably don't want to throw if the queue is empty,
    // but at least consider returning an 'invalid' value in order to
    // display something like "calculating..."
    return null;
  }

  // calculate the average value
  var sum = 0;
  for (var i = 0; i < queue.length; i++) {
    sum += queue[i];
  }
  return sum / queue.length;
}

// calculate the speed and call `updateQueue` every second or so
var updateTimer = setInterval(..., 1000);
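For completeness, here is one way to fill in the setInterval placeholder above. The variable names are illustrative, and it assumes uploadProgress is the only place where the current loaded byte count is visible.

var lastLoaded = 0;
var currentLoaded = 0;

function uploadProgress(loaded, total) {
  currentLoaded = loaded;                              // just remember the latest value
  var bps = getAverageValue();
  $('#elem').html(bps === null ? 'calculating...' : Math.round(bps) + ' bytes per second');
}

var updateTimer = setInterval(function () {
  updateQueue(currentLoaded - lastLoaded);             // bytes in the last second = bytes/sec sample
  lastLoaded = currentLoaded;
}, 1000);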
An even simpler way to avoid sudden changes in the calculated speed would be to use a low-pass filter. A simple discrete approximation of a PT1 filter would be:
y[k] = y[k-1] + (u[k] - y[k-1]) / T
where u[k] is the input (or actual value) at sample k, y[k] is the output (or filtered value) at sample k, and T is the time constant (a larger T means that y will follow u more slowly).
That would be translated to something like:
var speed = null;
var TIME_CONSTANT = 5;

function updateSpeed(newValue) {
  if (speed === null) {
    speed = newValue;
  } else {
    speed += (newValue - speed) / TIME_CONSTANT;
  }
}

function getFilteredValue() {
  return speed;
}
Both solutions will give similar results (for your purpose at least), and the latter one seems a bit simpler (and needs less memory).
Also, I wouldn't update the value that fast. Filtering will only turn "flickering" into "swinging" at a refresh rate of 50ms. I don't think anybody expects to have an upload speed shown at a refresh rate of more than once per second (or even a couple of seconds).
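As a sketch of that wiring (names illustrative): feed the filter with an instantaneous rate on every progress event, but refresh the display only once per second.

var prevLoaded = 0;
var prevTime = Date.now();

function uploadProgress(loaded, total) {
  var now = Date.now();
  var dt = (now - prevTime) / 1000;                    // ~0.05 s between events
  if (dt > 0) {
    updateSpeed((loaded - prevLoaded) / dt);           // instantaneous bytes/sec sample
  }
  prevLoaded = loaded;
  prevTime = now;
}

setInterval(function () {
  var bps = getFilteredValue();
  if (bps !== null) {
    $('#elem').html(Math.round(bps) + ' bytes per second');
  }
}, 1000);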
A simple low-pass filter is ok for just making sure that inaccuracies don't build up. But if you think a little deeper about measuring transfer rates, you get into maintaining separate integer counters to do it right.
If you want an exact count, note that there is a simplification available. First, when dealing with rates, the arithmetic mean is the wrong thing to apply to bytes/sec (sec/byte is more correct, which leads to the harmonic mean). The other problem is that the samples should be weighted. Because of this, simply keeping int64 running totals of bytes and of observation time actually does the right thing, as simplistic as it sounds. Normally you would weight each sample by w = 1/n; look at the neat simplification that happens when you weight by time instead:
(w0*(b0/t0) + w1*(b1/t1) + w2*(b2/t2) + ...) / (w0 + w1 + w2 + ...)
= (b0 + b1 + b2 + ...) / (t0 + t1 + t2 + ...)      [setting each weight wi = ti]
= totalBytes / totalTime
So just keep separate (int64!) totals of bytes and milliseconds. And only divide them as a rendering step to visualize the rate. Note that if you instead used the harmonic mean (which you should do for rates - because you are really averaging sec/byte), then that's the same as the time it takes to send a byte, weighted by how many bytes there were.
1 / ((w0*(t0/b0) + w1*(t1/b1) + ...) / (w0 + w1 + ...))
= (b0 + b1 + ...) / (t0 + t1 + ...)                [setting each weight wi = bi]
= totalBytes / totalTime
So the arithmetic mean weighted by time is the same as the harmonic mean weighted by bytes. Just keep a running total of bytes in one variable and of time in another. There is a deeper reason why this simplistic count is actually the right one: think of integrals. Assuming no concurrency, this is literally just total bytes transferred divided by total observation time. If you assume the computer takes one step per millisecond, only sends whole bytes, and you observe the entire interval without gaps, there are no approximations.
Notice that if you think about an integral with (msec, byte/msec) as the units for (x,y), the area under the curve is the bytes sent during the observation period (exactly). You will get the same answer no matter how the observations got cut up. (ie: reported 2x as often).
So by simply reporting (size_byte, start_ms, stop_ms) per transfer, you accumulate (stop_ms - start_ms) into the time total and size_byte into the byte total. If you want to partition these rates into minute buckets for graphing, just maintain a (byte, ms) pair per minute of observation.
Note that these are rates experienced for individual transfers. The individual transfers may experience 1MB/s (user point of view). These are the rates that you guarantee to end users.
You can leave it here for simple cases. But doing this counting right, allows for more interesting things.
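A minimal sketch of the simple case described above (names illustrative): two running totals, divided only when rendering.

var totalBytes = 0;                                    // consider BigInt for very long-running processes
var totalMs = 0;

function recordTransfer(sizeByte, startMs, stopMs) {
  totalBytes += sizeByte;
  totalMs += (stopMs - startMs);
}

function averageBytesPerSecond() {
  return totalMs > 0 ? (totalBytes / totalMs) * 1000 : null;
}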
From the server point of view, load matters. Presume that there were two users experiencing 1MB/s simultaneously. For that statistic, you need to subtract out the double-counted time. If 2 users do 1MB/s simultaneously for 1s, then that's 2MB/s for 1s. You need to effectively reconstruct time overlaps, and subtract out the double-counting of time periods. Explicitly logging at the end of a transfer (size_byte,start_ms,stop_ms) allows you to measure interesting things:
The number of outstanding transfers at any given time (queue length distribution - ie: "am I going to run out of memory?")
The throughput as a function of the number of transfers (throughput for a queue length - ie: "does the website collapse when our ad shows on TV?")
Utilization - ie: "are we overpaying our cloud provider?"
In this situation, all of the accumulated counters are exact integer arithmetic. Subtracting out the double-counted time suddenly gets you into more complicated algorithms (when computed efficiently and in real-time).
Use a decaying average, then you won't have to keep the old values around.
UPDATE: Basically it's a formula like this:
average = new_value * factor + old_average * (1 - factor);
You don't have to keep any old values around; they're all in there at smaller and smaller proportions. You have to choose a factor between 0 and 1 that gives the mix of new and old values you want, bearing in mind how often the average gets updated.
This is how the Unix "load average" is calculated I believe.
So, I know I can get current time in milliseconds using JavaScript. But, is it possible to get the current time in nanoseconds instead?
Achieve microsecond accuracy in most browsers using:
window.performance.now()
See also:
https://developer.mozilla.org/en-US/docs/Web/API/Performance.now()
http://www.w3.org/TR/hr-time/
https://caniuse.com/high-resolution-time
Building on Jeffery's answer, to get an absolute time-stamp (as the OP wanted) the code would be:
var TS = window.performance.timing.navigationStart + window.performance.now();
The result is in milliseconds, but it is a floating-point value reportedly "accurate to one thousandth of a millisecond".
In server-side environments like Node.js, you can use the following function to get the time in nanoseconds:
function getNanoSecTime() {
  var hrTime = process.hrtime();
  return hrTime[0] * 1000000000 + hrTime[1];
}
You can also get microseconds in a similar way:
function getMicSecTime() {
  var hrTime = process.hrtime();
  return hrTime[0] * 1000000 + Math.floor(hrTime[1] / 1000);
}
Milliseconds since the UNIX epoch, with microsecond resolution.
performance.timing.navigationStart has been deprecated! Use the following instead:
(performance.now() + performance.timeOrigin)
Relevant quotes from the specification
This specification defines an API that provides the time origin, and current time in sub-millisecond resolution, such that it is not subject to system clock skew or adjustments.
The timeOrigin attribute MUST return a DOMHighResTimeStamp representing the high resolution time of the time origin timestamp for the relevant global object of the Performance object.
The time origin timestamp is the high resolution time value at which time origin is zero.
The time origin is the time value from which time is measured
The now() method MUST return the current high resolution time.
The current high resolution time is the high resolution time from the time origin to the present time (typically called “now”).
Note that it is actually not that accurate, for security reasons (to prevent timing side-channel attacks):
This specification defines an API that provides sub-millisecond time resolution, which is more accurate than the previously available millisecond resolution exposed by DOMTimeStamp. However, even without this new API an attacker may be able to obtain high-resolution estimates through repeat execution and statistical analysis. To ensure that the new API does not significantly improve the accuracy or speed of such attacks, the minimum resolution of the DOMHighResTimeStamp type should be inaccurate enough to prevent attacks: the current minimum recommended resolution is no less than 5 microseconds and, where necessary, should be set higher by the User Agent to address privacy and security concerns due to architecture or software constraints, or other considerations.
Yes! Try sazze's excellent nano-time:
let now = require('nano-time');
now(); // '1476742925219947761' (returned as a string, since the value exceeds JavaScript's safe integer range)
No. There is not a chance you will get nanosecond accuracy at the JavaScript layer.
If you're trying to benchmark some very quick operation, put it in a loop that runs it a few thousand times.
JavaScript records time in milliseconds, so you won't be able to get time to that precision. The smart-aleck answer is to "multiply by 1,000,000".