How can I create an accurate Node.js timer? I am trying to make a chess website where you can play timed games against other players.
I am currently using setInterval() and am not sure how accurate that is. I also need a more accurate timer for the server, precise to a hundredth of a second, so it can tell whether a move arrived in time and when the game has ended.
Thanks in advance
For ordinary time-of-day, Date.now() gives you the date and time in milliseconds as a Javascript number. It has millisecond resolution. Its precision depends on your underlying operating system, but is typically between 10 and 50 milliseconds.
You can use process.hrtime.bigint(), described here, to retrieve elapsed time in nanoseconds.
Like this:
const then = process.hrtime.bigint()
/* do something you want to measure */
const now = process.hrtime.bigint()
// now and then are BigInts, so convert before dividing by a Number
const elapsedTimeInSeconds = Number(now - then) / 1_000_000_000
But be aware of this. Date.now() gives you a number of milliseconds since the UNIX epoch so it can be used to represent calendar dates and clock times. process.hrtime.bigint() gives you the number of nanoseconds since some arbitrary start time in the recent past. So it's only really useful for measuring elapsed times within nodejs processes.
And, as I'm sure you're aware, JavaScript is single-threaded, so elapsed time only equals CPU time if the code you're measuring never performs any sort of await operation.
You could also try process.cpuUsage(), described here. Something like this:
const then = process.cpuUsage()
/* do something you want to measure */
const now = process.cpuUsage() // no argument here: the difference is taken manually below
const userTimeInSeconds = (now.user - then.user) / 1_000_000
const systemTimeInSeconds = (now.system - then.system) / 1_000_000
Explaining the difference between user and system CPU time is beyond the scope of a Stack Overflow answer, but you can read about it.
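Coming back to the chess clock itself, here is a minimal sketch of a server-side move-deadline check built on the monotonic clock; the function names and the per-player bookkeeping are illustrations of the idea, not part of any library:
function startMoveTimer() {
  return process.hrtime.bigint() // opaque monotonic start point, in nanoseconds
}

function moveIsInTime(moveStart, remainingNs) {
  const elapsedNs = process.hrtime.bigint() - moveStart
  return elapsedNs <= remainingNs // comfortably finer than 1/100 s precision
}

// usage sketch: the player has 5 minutes left on the clock
const moveStart = startMoveTimer()
// ... later, when the move arrives:
const inTime = moveIsInTime(moveStart, 5n * 60n * 1_000_000_000n)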
Related
I have written a JavaScript function that I simply copy-paste into a browser console and run; it works great and does exactly what I want.
Looks like:
function test(d) {
// ....
}
test(num);
I'm looking to wrap this function in something like a "while" statement. Please do keep in mind I'm not the greatest with JavaScript, yet.
Basically, what I'm looking for is: while it's NOT 6:30 PM EST, keep waiting and check again. The second it hits 6:30 PM EST, execute the script.
Can anyone help me with what the syntax would look like? I found a lot on Stack Overflow but the syntax isn't really making sense to me.
Any help would be greatly appreciated.
Okay, a couple of notes. If you're not required to run a script in the browser itself but simply want to run some JavaScript at specific intervals, you should check out a scheduler like cron or, more specifically for JavaScript, node-cron.
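If you go the node-cron route, a minimal sketch could look like this (treat it as untested; the five-field cron expression and the timezone option follow node-cron's documented API, but verify against the library's docs):
const cron = require('node-cron');

// minute 30, hour 18, every day; the timezone option avoids manual EST/UTC math
cron.schedule('30 18 * * *', () => {
  test(num); // the function you already have
}, { timezone: 'America/New_York' });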
A solution that revolves around checking "is it time yet, is it time yet, ..." every second or so in a loop is a pretty bad way to do it:
It is highly wasteful and performs poorly.
It will block your script execution, so you would have to run it in a separate process, such as a web worker.
The simplest way, without using any dependencies, is to schedule your work manually using a combination of setTimeout and setInterval. Here is a pretty basic and unpolished solution which should get you going.
const msInSecond = 1000;
const msInMinute = 60 * msInSecond;
const msInHour = 60 * msInMinute;
const msInDay = 24 * msInHour;
const desiredTimeInHoursInUTC = 18; // fill out your desired hour in UTC!
const desiredTimeInMinutesInUTC = 30; // fill out your desired minutes in UTC!
const desiredTimeInSecondsInUTC = 0; // fill out your desired seconds in UTC!
const currentDate = new Date();
// build the control date with Date.UTC; without it, the desired time would be interpreted in the machine's local timezone
const controlDate = new Date(Date.UTC(currentDate.getUTCFullYear(), currentDate.getUTCMonth(), currentDate.getUTCDate(), desiredTimeInHoursInUTC, desiredTimeInMinutesInUTC, desiredTimeInSecondsInUTC));
let desiredDate;
if (currentDate.getTime() <= controlDate.getTime()) {
desiredDate = controlDate;
}
else {
desiredDate = new Date(controlDate.getTime() + msInDay);
}
const msDelta = desiredDate.getTime() - currentDate.getTime();
setTimeout(setupInterval, msDelta);
function setupInterval() {
actualJob();
setInterval(actualJob, msInDay);
}
function actualJob() {
console.log('test');
}
In short, it calculates the difference between the current time and the next upcoming time slot for execution. Then we use this time difference to delay the first execution of the desired task, and schedule further executions every 24 hours after that.
You need to provide values for desiredTimeInHoursInUTC, desiredTimeInMinutesInUTC and desiredTimeInSecondsInUTC. All of them should be in UTC (note that EST is UTC-5, so 6:30 PM EST is 23:30 UTC; during daylight saving, EDT is UTC-4). You can (and actually should) improve this solution to handle timezones and the time difference calculations more elegantly.
A caveat here is that it won't handle daylight saving for you, so keep that in mind. You can tackle this easily by using a dedicated library like moment.js; a sketch of that approach follows below. Also, the code is simple and in this version doesn't support cases like skipping specific days. You can always extend it to handle those, though.
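As a hedged illustration of the library approach, here is a sketch using moment-timezone (msUntilNext is a hypothetical helper of mine; verify the API against the library's documentation):
const moment = require('moment-timezone');

// milliseconds until the next occurrence of hour:minute in the given timezone,
// with daylight saving handled by the library
function msUntilNext(hour, minute, tz) {
  const now = moment.tz(tz);
  const next = now.clone().hour(hour).minute(minute).second(0).millisecond(0);
  if (!next.isAfter(now)) next.add(1, 'day');
  return next.diff(now);
}

setTimeout(actualJob, msUntilNext(18, 30, 'America/New_York'));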
Lastly, every time you restart the script it will schedule the function execution times properly, so you don't need to keep a tab open at all times. As an added benefit, the solution is quite performant: it doesn't continuously check whether the time for execution has come, but schedules everything in advance and does no further checks after that.
I got this code over here:
var date = new Date();
setTimeout(function(e) {
var currentDate = new Date();
if(currentDate - date >= 1000) {
console.log(currentDate, date);
console.log(currentDate-date);
}
else {
console.log("It was less than a second!");
console.log(currentDate-date);
}
}, 1000);
On my computer it always executes correctly, with 1000 in the console output. Interestingly, on another computer running the same code, the timeout callback starts in less than a second and the difference currentDate - date is between 980 and 998.
I know the existence of libraries that solve this inaccuracy (for example, Tock).
Basically, my question is: what are the reasons setTimeout does not fire at the given delay? Could it be that the computer is too slow and the browser automatically tries to adapt to the slowness and fires the event early?
PS: A screenshot of the code and the results from the Chrome JavaScript console accompanied this question (not reproduced here).
It's not supposed to be particularly accurate. There are a number of factors limiting how soon the browser can execute the code; quoting from MDN:
In addition to "clamping", the timeout can also fire later when the page (or the OS/browser itself) is busy with other tasks.
In other words, the way that setTimeout is usually implemented, it is just meant to execute after a given delay, and once the browser's thread is free to execute it.
However, different browsers may implement it in different ways. Here are some tests I did:
var date = new Date();
setTimeout(function(e) {
var currentDate = new Date();
console.log(currentDate-date);
}, 1000);
// Browser Test1 Test2 Test3 Test4
// Chrome 998 1014 998 998
// Firefox 1000 1001 1047 1000
// IE 11 1006 1013 1007 1005
Perhaps the < 1000 times from Chrome could be attributed to inaccuracy in the Date type, or perhaps Chrome uses a different strategy for deciding when to execute the code; maybe it's trying to fit it into the nearest time slot, even if the timeout delay hasn't completed yet.
In short, you shouldn't use setTimeout if you expect reliable, consistent, millisecond-scale timing.
In general, computer programs are highly unreliable when trying to execute things with higher precision than 50 ms. The reason for this is that even on an octa-core hyperthreaded processor, the OS is usually juggling several hundred processes and threads, sometimes thousands or more. The OS makes all that multitasking work by scheduling all of them to get a slice of CPU time one after another, meaning each gets at most a few milliseconds at a time to do its thing.
Implicitly this means that if you set a timeout for 1000 ms, chances are far from small that the current browser process won't even be running at that point in time, so it's perfectly normal for the browser not to notice until 1005, 1010 or even 1050 milliseconds have passed that it should be executing the given callback.
Usually this is not a problem, it happens, and it's rarely of utmost importance. If it is, all operating systems supply kernel level timers that are far more precise than 1 ms, and allow a developer to execute code at precisely the correct point in time. JavaScript however, as a heavily sandboxed environment, doesn't have access to kernel objects like that, and browsers refrain from using them since it could theoretically allow someone to attack the OS stability from inside a web page, by carefully constructing code that starves other threads by swamping it with a lot of dangerous timers.
As for why the test yields 980 I'm not sure - that would depend on exactly which browser you're using and which JavaScript engine. I can however fully understand if the browser just manually corrects a bit downwards for system load and/or speed, ensuring that "on average the delay is still about the correct time" - it would make a lot of sense from the sandboxing principle to just approximate the amount of time required without potentially burdening the rest of the system.
Someone please correct me if I am misinterpreting this information:
According to a post from John Resig regarding the inaccuracy of performance tests across platforms (emphasis mine)
With the system times constantly being rounded down to the last queried time (each about 15 ms apart) the quality of performance results is seriously compromised.
So there is up to a 15 ms fudge on either end when comparing to the system time.
I had a similar experience.
I was using something like this:
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000));
setTimeout(function ()
{
CountDownClock(ElementID, RelativeTime);
}, iMillSecondsTillNextWholeSecond);//Wait until the next whole second to start.
I noticed it would skip a second every couple of seconds, though sometimes it would go longer without skipping.
However, I'd still catch it skipping after 10 or 20 seconds, and it just looked rickety.
I thought, "Maybe the timeout is too slow, or waiting for something else?"
Then I realized, "Maybe it's too fast, and the timers the browser is managing are off by a few milliseconds?"
After adding +1 millisecond to my variable, I only saw it skip once.
I ended up adding +50 ms, just to be on the safe side.
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000) + 50);
I know it's a bit hacky, but my timer is running smoothly now. :)
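For what it's worth, a less hacky variant (my sketch, not part of the original answer) recomputes the alignment on every tick and re-waits if the timer fired early, so small errors never accumulate:
// tick is whatever you want to run once per whole second
function scheduleNextTick(tick) {
  const target = Math.floor(Date.now() / 1000) * 1000 + 1000; // next whole second
  setTimeout(function fire() {
    if (Date.now() < target) {            // woke up a few ms early: wait again
      setTimeout(fire, target - Date.now());
      return;
    }
    tick();
    scheduleNextTick(tick);                // re-align instead of padding by +50 ms
  }, target - Date.now());
}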
JavaScript has a way of dealing with exact time frames. Here's one approach:
You could save Date.now() when you start waiting, create an interval with a short update period in milliseconds, and calculate the difference between the dates.
Example:
const startDate = Date.now()
const intervalId = setInterval(() => {
  const currentDate = Date.now()
  if (currentDate - startDate >= 1000) {
    // at least a second has passed; stop checking
    clearInterval(intervalId)
    return
  }
  // less than a second has passed so far
}, 50)
Let's start with a note from www.w3.org that includes two important links to compare.
The PerformanceTiming interface was defined in [NAVIGATION-TIMING] and
is now considered obsolete. The use of names from the
PerformanceTiming interface is supported to remain backwards
compatible, but there are no plans to extend this functionality to
names in the PerformanceNavigationTiming interface defined in
[NAVIGATION-TIMING-2] (or other interfaces) in the future.
I have made a function to get a Navigation Timing value that should be both backward and forward compatible, because we are in the middle of the transition to Level 2. This function, which gets a time from an event name, works in Chrome but not Firefox:
function nav(eventName) {
var lev1 = performance.timing; //deprecated unix epoch time in ms since 1970
var lev2 = performance.getEntriesByType("navigation")[0]; //ms since page started to load. (since performance.timing.navigationStart)
var nav = lev2 || lev1; //if lev2 is undefined then use lev1
return nav[eventName]
}
Explanation: When there is no "navigation" entry, this falls back to the deprecated way of doing navigation timing, based on Unix epoch time in milliseconds since 1970 (lev1), while the new way (lev2) is HR time in milliseconds since the current document started to load, which is useful together with User Timing, which has always had the HR time format.
How can we get the function to return HR time in all cases?
When I see a number with more than 10 digits without a period I know it is a time got from the deprecated Navigation Timing level 1. All other test cases give decimal point numbers meaning it is HR times with higher precision. The biggest issue is that they have different time origin.
I have gone through confusion, trial and error, and frustrated searching (MDN has not been updated to Level 2) to confirm and state that:
Navigation Timing Level 1 uses Unix epoch time, and the rest...
Navigation Timing Level 2 uses HR time
User Timing Level 1 uses HR time
User Timing Level 2 uses HR time
Also performance.now() has HR time both in Chrome and Firefox.
How to convert unix epoch time to HR time?
SOLVED:
The code was corrected with help from Amadan.
See the comments in the accepted answer.
function nav(eventName, fallback) {
var lev1 = performance.timing; //deprecated unix epoch time in ms since 1970
var lev2 = performance.getEntriesByType("navigation")[0]; //ms since page started to load
var nav = lev2 || lev1; //if lev2 is undefined then use lev1
if (!nav[eventName] && fallback) eventName = fallback
// approximate t microseconds it takes to execute performance.now()
var i = 10000, t = performance.now()
while(--i) performance.now()
t = (performance.now() - t)/10000 // < 10 microseconds
var oldTime = new Date().getTime(),
newTime = performance.now(),
timeOrigin = performance.timeOrigin?
performance.timeOrigin:
oldTime - newTime - t; // approximate
return nav[eventName] - (lev2? 0: timeOrigin);
// return nav[eventName] - (lev2? 0: lev1.navigationStart); //alternative?
}
The performance.timeOrigin is subtracted in the case where the old timing (lev1) is used.
If the browser does not have it, then approximate timeOrigin by subtracting performance.now() (the time since timeOrigin) from new Date().getTime() (the time since the Unix epoch), which leaves the time from the Unix epoch to timeOrigin. Apparently that is the definition, though the link was a bit vague about it. I confirmed it by testing, and I trust the answer. Hopefully w3c has a better definition of timeOrigin than: the high resolution timestamp of the start time of the performance measurement.
The function's returned value represents the time elapsed since the time origin.
It may be insignificant in most cases, but the measured time t it takes to execute performance.now() is subtracted to approximate simultaneous execution.
I measured t at almost 10 microseconds on my Raspberry Pi, and it was fairly stable across various loop sizes. But my Lenovo was not as precise, rounding off decimals and giving shorter times for t with bigger loop sizes.
An alternative solution is commented out in the last line of code.
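A hedged usage sketch (the event names come from the Navigation Timing spec; the values will differ per page load): once nav() returns HR time in all cases, its results can be compared directly with performance.now():
// all three values are now on the same HR timeline (ms since the time origin)
console.log('fetchStart:', nav('fetchStart'));
console.log('domComplete:', nav('domComplete', 'fetchStart')); // with a fallback name
console.log('now:', performance.now());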
The deprecated performance.timing.navigationStart:
representing the moment, in milliseconds since the UNIX epoch, right
after the prompt for unload terminates on the previous document in the
same browsing context. If there is no previous document, this value
will be the same as PerformanceTiming.fetchStart
So, to check current document (ignoring any previous) then use the deprecated performance.timing.fetchStart:
representing the moment, in milliseconds since the UNIX epoch, the
browser is ready to fetch the document using an HTTP request. This
moment is before the check to any application cache.
It is of course correct to use a deprecated property if it is the only one the browser understands. It is used when "navigation" is not defined in getEntriesByType, which otherwise has good browser support.
A quick check with this line, placed just before the return, confirmed that the values agree:
console.log(performance.timeOrigin + '\n' + lev1.navigationStart + '\n' + lev1.fetchStart)
With a result that looks like this in my Chrome
1560807558225.1611
1560807558225
1560807558241
It is only possible if the browser supports HR time 2:
let unixTime = hrTime + performance.timeOrigin;
let hrTime = unixTime - performance.timeOrigin;
However, performance is generally used for time diffs, which do not care what the origin of absolute timestamps is.
For the browsers that do not support HR time 2, or those that "support" it half-heartedly, you can fake it this way:
const hrSyncPoint = performance.now();
const unixSyncPoint = new Date().getTime();
const timeOrigin = unixSyncPoint - hrSyncPoint;
It's not super-exact, but should be good enough for most purposes (on my system, performance.timeOrigin - timeOrigin is sub-millisecond).
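A small usage sketch tying this back to the conversion formulas above (timeOrigin here is the faked value just computed):
const someHrTimestamp = performance.now();
const asUnixMs = someHrTimestamp + timeOrigin; // epoch milliseconds
const backToHr = asUnixMs - timeOrigin;        // and back to HR time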
Look at this code:
function wait(time) {
let i = 0;
let a = Date.now();
let x = a + (time || 0);
let b;
while ((b = Date.now()) <= x) ++i;
return i;
}
If I run it in a browser (particularly Google Chrome, but I don't think it matters) as wait(1000), the machine will obviously freeze for a second and then return the recalculated value of i.
Let's say it is 10,000,000 (I'm getting values close to that one). The value varies every time, so let's take an average.
Did I just get the current number of operations per second of my machine's processor?
Not at all.
What you get is the number of loop cycles completed by the Javascript process in a certain time. Each loop cycle consists of:
Creating a new Date object
Comparing two Date objects
Incrementing a Number
Incrementing the Number variable i is probably the least expensive of these, so the function is not really reporting how long the increment itself takes.
Aside from that, note that the machine is doing a lot more than running a Javascript process. You will see interference from all sorts of activity going on in the computer at the same time.
When running inside a Javascript process, you're simply too far away from the processor (in terms of software layers) to make that measurement. Beneath Javascript, there's the browser and the operating system, each of which can (and will) make decisions that affect this result.
No. You can get the number of language operations per second, though the actual number of machine operations per second on a whole processor is more complicated.
Firstly, the processor is not wholly dedicated to the browser, so it is likely switching back and forth between prioritized processes. On top of that, memory access is hidden behind layers of abstraction, and the processor uses extra operations to manage memory (page flushing, etc.), which won't be transparent to you at any given time. And on top of that, physical properties mean that the real clock rate of the processor is dynamic. You can see it's pretty complicated already ;)
To really calculate the number of machine operations per second, you need to measure the clock rate of the processor and multiply it by the number of instructions per cycle the processor can perform (for example, a 3 GHz core retiring 4 instructions per cycle peaks at around 1.2 × 10^10 instructions per second). Again this varies, but the manufacturer's specs will likely be a good enough estimate :P
If you wanted to use a program to measure this, you'd need to somehow dedicate 100% of the processor to your program and have it run a predictable set of instructions with no other hangups (like memory management). Then you would need to include the number of instructions it takes to load the program into the code caches. This is not really feasible, however.
As others have pointed out, this will not help you determine the number of operations the processor does per second, for the reasons given in the prior answers. I do, however, think that a similar experiment could be set up to estimate the number of operations executed by the JavaScript interpreter running in your browser. For example, given a function factorial(n), an operation that runs in O(n), you could execute factorial(100) repeatedly over the course of a minute.
// a simple O(n) workload: factorial(100) is roughly 100 multiply-and-loop operations
function factorial(n) {
    let result = 1;
    for (let i = 2; i <= n; i++) result *= i;
    return result;
}

function test() {
    let start = Date.now();
    let end = start + 60 * 1000;
    let numberOfExecutions = 0;
    while (Date.now() < end) {
        factorial(100);
        numberOfExecutions++;
    }
    // ~100 operations per execution, spread over 60 seconds
    return numberOfExecutions * 100 / 60;
}
The idea here is that factorial is by far the most time-consuming function in the code. And since factorial runs in O(n), we know factorial(100) is approximately 100 operations. Note that this will not be exact, and that larger numbers will make for better approximations. Also remember that this estimates the number of operations executed by your interpreter, not by your processor.
There is a lot of truth in all the previous comments, but I want to invert the reasoning a little bit, because I believe it is easier to understand that way.
I believe that the fairest way to calculate it is with the most basic loop, not relying on any dates or functions, and instead calculating the values afterwards.
You will see that the smaller the run, the bigger the relative initial overhead. It takes a small amount of time to start and finish each function call, but at a certain point they all start approaching a number that can fairly be considered close enough to how many operations per second JavaScript can run.
My example:
const oneMillion = 1_000_000;
const tenMillion = 10_000_000;
const oneHundredMillion = 100_000_000;
const oneBillion = 1_000_000_000;
const tenBillion = 10_000_000_000;
const oneHundredBillion = 100_000_000_000;
const oneTrillion = 1_000_000_000_000;
function runABunchOfTimes(times) {
console.time('timer')
for (let i = 0; i < times; ++i) {}
console.timeEnd('timer')
}
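The constants above are never actually passed in, so for completeness, a usage sketch (the timings in the comments are illustrative and will vary by machine):
runABunchOfTimes(oneMillion);  // timer: a few ms
runABunchOfTimes(oneBillion);  // timer: on the order of a second
runABunchOfTimes(oneTrillion); // timer: minutes; ops/sec ≈ times / seconds elapsed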
I tried this on a machine that already has a lot of load on it, with many processes running (a 2020 MacBook); these were my results (console output not reproduced here):
At the very end I take the time the console showed me it took to run, and I divide the number of runs by it. The oneTrillion and oneBillion runs are virtually the same; however, when you go down to oneMillion and 1000 you can see that they are not as performant, due to the initial cost of creating the for loop in the first place.
We usually try to stay away from O(n^2) and slower functions exactly because we do not want to hit that maximum. If you were to perform a find inside a map over an array of all the cities in the world (around 10_000 according to Google; I haven't counted), we would already reach 100_000_000 iterations, and they would certainly not be as simple as iterating over nothing like in my example. Your code would then take minutes to run, but I am sure you are aware of this, and that is why you posted the question in the first place.
Calculating how long it would take is tricky, not only because of the above, but also because you cannot predict which device will run your function. Nowadays I can open it on my TV, my watch, or a Raspberry Pi, and none of them would be nearly as fast as the computer I was using when creating these functions. If I were to benchmark a device, though, I would use something like the function above, since it is the simplest loop operation I could think of.
So, I know I can get current time in milliseconds using JavaScript. But, is it possible to get the current time in nanoseconds instead?
Achieve microsecond accuracy in most browsers using:
window.performance.now()
See also:
https://developer.mozilla.org/en-US/docs/Web/API/Performance.now()
http://www.w3.org/TR/hr-time/
https://caniuse.com/high-resolution-time
Building on Jeffery's answer, to get an absolute time-stamp (as the OP wanted) the code would be:
var TS = window.performance.timing.navigationStart + window.performance.now();
The result is in millisecond units but is a floating-point value, reportedly "accurate to one thousandth of a millisecond".
In server-side environments like Node.js you can use the following function to get the time in nanoseconds:
function getNanoSecTime() {
var hrTime = process.hrtime();
return hrTime[0] * 1000000000 + hrTime[1];
}
You can also get microseconds in a similar way:
function getMicSecTime() {
var hrTime = process.hrtime();
return hrTime[0] * 1000000 + parseInt(hrTime[1] / 1000);
}
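Note that hrTime[0] * 1000000000 can exceed Number.MAX_SAFE_INTEGER once the clock's arbitrary origin is far enough in the past, so on Node 10.7+ a safer sketch is the BigInt form:
function getNanoSecTimeBigInt() {
  // returns nanoseconds as a BigInt, avoiding Number precision loss
  return process.hrtime.bigint();
}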
Milliseconds since the UNIX epoch, with microsecond resolution.
performance.timing.navigationStart has been deprecated! Use the following instead:
(performance.now() + performance.timeOrigin)
Relevant quotes from the specification
This specification defines an API that provides the time origin, and current time in sub-millisecond resolution, such that it is not subject to system clock skew or adjustments.
The timeOrigin attribute MUST return a DOMHighResTimeStamp representing the high resolution time of the time origin timestamp for the relevant global object of the Performance object.
The time origin timestamp is the high resolution time value at which time origin is zero.
The time origin is the time value from which time is measured
The now() method MUST return the current high resolution time.
The current high resolution time is the high resolution time from the time origin to the present time (typically called “now”).
Note that actually it is not that accurate for security reasons (to prevent side-channel attacks)
This specification defines an API that provides sub-millisecond time resolution, which is more accurate than the previously available millisecond resolution exposed by DOMTimeStamp. However, even without this new API an attacker may be able to obtain high-resolution estimates through repeat execution and statistical analysis. To ensure that the new API does not significantly improve the accuracy or speed of such attacks, the minimum resolution of the DOMHighResTimeStamp type should be inaccurate enough to prevent attacks: the current minimum recommended resolution is no less than 5 microseconds and, where necessary, should be set higher by the User Agent to address privacy and security concerns due to architecture or software constraints, or other considerations.
Yes! Try sazze's excellent nano-time:
let now = require('nano-time');
now(); // '1476742925219947761' (returns as string due to JS limitation)
No. There is not a chance you will get nanosecond accuracy at the JavaScript layer.
If you're trying to benchmark some very quick operation, put it in a loop that runs it a few thousand times.
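A minimal sketch of that looping approach (benchmark and quickOperation are illustrative names of mine; performance.now() is available in browsers and as a global in modern Node):
function benchmark(quickOperation, iterations = 100_000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) quickOperation();
  const totalMs = performance.now() - start;
  return (totalMs / iterations) * 1_000_000; // average nanoseconds per call
}

// usage sketch: average cost of a Math.sqrt call, in nanoseconds
console.log(benchmark(() => Math.sqrt(2)));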
JavaScript records time in milliseconds, so you won't be able to get time to that precision. The smart-aleck answer is to "multiply by 1,000,000".