Is there any way to get the current time in nanoseconds using JavaScript?

So, I know I can get current time in milliseconds using JavaScript. But, is it possible to get the current time in nanoseconds instead?

Achieve microsecond accuracy in most browsers using:
window.performance.now()
See also:
https://developer.mozilla.org/en-US/docs/Web/API/Performance.now()
http://www.w3.org/TR/hr-time/
https://caniuse.com/high-resolution-time
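For instance, a minimal sketch timing a short operation with fractional-millisecond precision:

const t0 = performance.now();
// ... some work to measure ...
const t1 = performance.now();
console.log(`elapsed: ${(t1 - t0).toFixed(3)} ms`); // fractional milliseconds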

Building on Jeffery's answer, to get an absolute timestamp (as the OP wanted), the code would be:
var TS = window.performance.timing.navigationStart + window.performance.now();
The result is in millisecond units, but it is a floating-point value reportedly "accurate to one thousandth of a millisecond".

In server-side environments like Node.js, you can use the following function to get the time in nanoseconds:
function getNanoSecTime() {
    var hrTime = process.hrtime(); // [seconds, nanoseconds] since an arbitrary origin
    return hrTime[0] * 1000000000 + hrTime[1];
}
You can also get microseconds in a similar way:
function getMicSecTime() {
    var hrTime = process.hrtime();
    return hrTime[0] * 1000000 + Math.floor(hrTime[1] / 1000);
}
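A quick usage sketch (note that process.hrtime() counts from an arbitrary point in the past, so these values are suited to measuring intervals, not wall-clock time):

const start = getNanoSecTime();
// ... some work to measure ...
const elapsedNs = getNanoSecTime() - start;
console.log(`elapsed: ${elapsedNs} ns`);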

Milliseconds since the UNIX epoch, with microsecond resolution.
performance.timing.navigationStart has been deprecated! Use the following instead:
(performance.now() + performance.timeOrigin)
Relevant quotes from the specification
This specification defines an API that provides the time origin, and current time in sub-millisecond resolution, such that it is not subject to system clock skew or adjustments.
The timeOrigin attribute MUST return a DOMHighResTimeStamp representing the high resolution time of the time origin timestamp for the relevant global object of the Performance object.
The time origin timestamp is the high resolution time value at which time origin is zero.
The time origin is the time value from which time is measured
The now() method MUST return the current high resolution time.
The current high resolution time is the high resolution time from the time origin to the present time (typically called “now”).
Note that, for security reasons (to prevent side-channel attacks), it is deliberately not that accurate:
This specification defines an API that provides sub-millisecond time resolution, which is more accurate than the previously available millisecond resolution exposed by DOMTimeStamp. However, even without this new API an attacker may be able to obtain high-resolution estimates through repeat execution and statistical analysis. To ensure that the new API does not significantly improve the accuracy or speed of such attacks, the minimum resolution of the DOMHighResTimeStamp type should be inaccurate enough to prevent attacks: the current minimum recommended resolution is no less than 5 microseconds and, where necessary, should be set higher by the User Agent to address privacy and security concerns due to architecture or software constraints, or other considerations.
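Putting the above together, a minimal sketch of an epoch timestamp with sub-millisecond resolution (epochNowMs is my name, not a standard API):

// Epoch milliseconds as a float with sub-ms precision.
// Resolution is clamped by the browser (to 5 µs or coarser) for security.
function epochNowMs() {
    return performance.timeOrigin + performance.now();
}

console.log(epochNowMs()); // e.g. 1560807558225.4
console.log(Date.now());   // same instant, whole milliseconds only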

Yes! Try sazze's excellent nano-time:
let now = require('nano-time');
now(); // '1476742925219947761' (a string, since the value exceeds Number.MAX_SAFE_INTEGER)

No. There is not a chance you will get nanosecond accuracy at the JavaScript layer.
If you're trying to benchmark some very quick operation, put it in a loop that runs it a few thousand times.
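For instance, a rough sketch of that approach (the loop count and sample operation are arbitrary and should be tuned to what you are measuring):

// Time N runs of a quick operation and report the average per call.
function timeIt(fn, iterations = 100000) {
    const start = performance.now();
    for (let i = 0; i < iterations; i++) fn();
    const totalMs = performance.now() - start;
    return (totalMs / iterations) * 1e6; // average nanoseconds per call
}

console.log(timeIt(() => Math.sqrt(12345)).toFixed(1) + ' ns per call (approx.)');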

JavaScript records time in milliseconds, so you won't be able to get time to that precision. The smart-aleck answer is to "multiply by 1,000,000".

Related

Node.js accurate timer

How can I create an accurate Node.js timer? I am trying to make a chess website where you can play against other players on the clock.
I am currently using setInterval() and am not sure how accurate that is. I also need an extra-accurate timer on the server that can tell, with hundredth-of-a-second precision, whether a move was made in time and when the game has ended.
Thanks in advance
For ordinary time-of-day, Date.now() gives you the date and time in milliseconds as a Javascript number. It has millisecond resolution. Its precision depends on your underlying operating system, but is typically between 10 and 50 milliseconds.
You can use process.hrtime.bigint(), described here, to retrieve elapsed time in nanoseconds.
Like this:
const then = process.hrtime.bigint()
/* do something you want to measure */
const now = process.hrtime.bigint()
// BigInts can't be mixed with Numbers in arithmetic, so convert first
// (or divide by 1_000_000_000n for integer seconds):
const elapsedTimeInSeconds = Number(now - then) / 1_000_000_000
But be aware of this: Date.now() gives you the number of milliseconds since the UNIX epoch, so it can be used to represent calendar dates and clock times. process.hrtime.bigint() gives you the number of nanoseconds since some arbitrary start time in the recent past, so it's only really useful for measuring elapsed times within Node.js processes.
And, I'm sure you're aware of single threading in Javascript, so elapsed time doesn't equal CPU time unless the code you're measuring performs no await operations.
You could also try process.cpuUsage(), described here. Something like this:
const then = process.cpuUsage()
/* do something you want to measure */
// Passing the previous reading makes cpuUsage() return the delta directly,
// so no further subtraction is needed.
const diff = process.cpuUsage(then)
const userTimeInSeconds = diff.user / 1_000_000
const systemTimeInSeconds = diff.system / 1_000_000
Explaining the difference between user and system CPU time is beyond the scope of a Stack Overflow answer, but you can read about it.

Javascript Date.now() function [duplicate]

I got this code over here:
var date = new Date();
setTimeout(function(e) {
    var currentDate = new Date();
    if (currentDate - date >= 1000) {
        console.log(currentDate, date);
        console.log(currentDate - date);
    } else {
        console.log("It was less than a second!");
        console.log(currentDate - date);
    }
}, 1000);
On my computer, it always executes correctly, logging 1000 to the console. Interestingly, on another computer running the same code, the timeout callback starts in less than a second, and the difference currentDate - date is between 980 and 998.
I know there are libraries that address this inaccuracy (for example, Tock).
Basically, my question is: what are the reasons why setTimeout does not fire at the given delay? Could it be that the computer is too slow and the browser automatically tries to adapt to the slowness and fires the event early?
It's not supposed to be particularly accurate. There are a number of factors limiting how soon the browser can execute the code; quoting from MDN:
In addition to "clamping", the timeout can also fire later when the page (or the OS/browser itself) is busy with other tasks.
In other words, the way that setTimeout is usually implemented, it is just meant to execute after a given delay, and once the browser's thread is free to execute it.
However, different browsers may implement it in different ways. Here are some tests I did:
var date = new Date();
setTimeout(function(e) {
    var currentDate = new Date();
    console.log(currentDate - date);
}, 1000);
// Browser Test1 Test2 Test3 Test4
// Chrome 998 1014 998 998
// Firefox 1000 1001 1047 1000
// IE 11 1006 1013 1007 1005
Perhaps the < 1000 times from Chrome could be attributed to inaccuracy in the Date type, or perhaps Chrome uses a different strategy for deciding when to execute the code: maybe it's trying to fit it into the nearest time slot, even if the timeout delay hasn't completed yet.
In short, you shouldn't use setTimeout if you expect reliable, consistent, millisecond-scale timing.
In general, computer programs are highly unreliable when trying to execute things with higher precision than 50 ms. The reason for this is that even on an octa-core hyperthreaded processor, the OS is usually juggling several hundred processes and threads, sometimes thousands or more. The OS makes all that multitasking work by scheduling them to get a slice of CPU time one after another, meaning each gets at most a few milliseconds at a time to do its thing.
Implicitly this means that if you set a timeout for 1000 ms, chances are far from small that the current browser process won't even be running at that point in time, so it's perfectly normal for the browser not to notice until 1005, 1010 or even 1050 milliseconds that it should be executing the given callback.
Usually this is not a problem, it happens, and it's rarely of utmost importance. If it is, all operating systems supply kernel level timers that are far more precise than 1 ms, and allow a developer to execute code at precisely the correct point in time. JavaScript however, as a heavily sandboxed environment, doesn't have access to kernel objects like that, and browsers refrain from using them since it could theoretically allow someone to attack the OS stability from inside a web page, by carefully constructing code that starves other threads by swamping it with a lot of dangerous timers.
As for why the test yields 980 I'm not sure - that would depend on exactly which browser you're using and which JavaScript engine. I can however fully understand if the browser just manually corrects a bit downwards for system load and/or speed, ensuring that "on average the delay is still about the correct time" - it would make a lot of sense from the sandboxing principle to just approximate the amount of time required without potentially burdening the rest of the system.
Someone please correct me if I am misinterpreting this information:
According to a post from John Resig regarding the inaccuracy of performance tests across platforms (emphasis mine)
With the system times constantly being rounded down to the last queried time (each about 15 ms apart) the quality of performance results is seriously compromised.
So there is up to a 15 ms fudge on either end when comparing to the system time.
I had a similar experience.
I was using something like this:
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000));
setTimeout(function () {
    CountDownClock(ElementID, RelativeTime);
}, iMillSecondsTillNextWholeSecond); // Wait until the next whole second to start.
I noticed it would skip a second every couple of seconds, and sometimes it would go longer.
However, I'd still catch it skipping after 10 or 20 seconds, and it just looked rickety.
I thought, "Maybe the timeout is too slow or waiting for something else?"
Then I realized, "Maybe it's too fast, and the timers the browser is managing are off by a few milliseconds?"
After adding +1 millisecond to my variable, I only saw it skip once.
I ended up adding +50ms, just to be on the safe side.
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000) + 50);
I know, it's a bit hacky, but my timer is running smoothly now. :)
Javascript has a way of dealing with exact time frames. Here’s one approach:
You could just save Date.now() when you start waiting, create an interval with a short update period, and calculate the difference between the dates.
Example:
const startDate = Date.now()
const timer = setInterval(() => {
    const currentDate = Date.now()
    if (currentDate - startDate >= 1000) {
        // at least a second has passed
        clearInterval(timer) // clearInterval needs the interval's id
        return
    }
    // a second has not passed yet
}, 50)
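If the goal is a clock that stays aligned over many ticks (as in the chess-timer question above), a common pattern is a self-correcting timeout that recomputes each delay from Date.now(); a sketch, not production code:

// Self-correcting one-second ticker: each timeout is scheduled against
// the ideal schedule, so per-tick jitter does not accumulate.
function startTicker(onTick, intervalMs = 1000) {
    let expected = Date.now() + intervalMs;
    function step() {
        const drift = Date.now() - expected; // how late this tick fired
        onTick(drift);
        expected += intervalMs;
        setTimeout(step, Math.max(0, intervalMs - drift));
    }
    setTimeout(step, intervalMs);
}

startTicker(drift => console.log(`tick (late by ${drift} ms)`));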

Get Navigation Timing backward/forward compatible - Convert from epoch to HR time

Let's introduce this with a note from www.w3.org, including two important links to compare.
The PerformanceTiming interface was defined in [NAVIGATION-TIMING] and
is now considered obsolete. The use of names from the
PerformanceTiming interface is supported to remain backwards
compatible, but there are no plans to extend this functionality to
names in the PerformanceNavigationTiming interface defined in
[NAVIGATION-TIMING-2] (or other interfaces) in the future.
I have made a function to get a navigation time that should be both backward and forward compatible, because we are in the middle of the transition to Level 2. This function to get a time from an event name works in Chrome but not in Firefox:
function nav(eventName) {
    var lev1 = performance.timing; // deprecated: Unix epoch time in ms since 1970
    var lev2 = performance.getEntriesByType("navigation")[0]; // ms since the page started to load (since performance.timing.navigationStart)
    var nav = lev2 || lev1; // if lev2 is undefined then use lev1
    return nav[eventName]
}
Explanation: when there is no "navigation" entry, this falls back to the deprecated way to do navigation timing, based on Unix epoch time in milliseconds since 1970 (lev1), while the new way (lev2) is HR time in milliseconds since the current document started to load. The new format is useful together with User Timing, which has always had the HR time format.
How can we make the function return HR time in all cases?
When I see a number with more than 10 digits and no decimal point, I know it is a time from the deprecated Navigation Timing Level 1. All other test cases give numbers with a decimal point, meaning they are HR times with higher precision. The biggest issue is that they have different time origins.
I have gone through confusion, trial and error, and frustrated searching (MDN has not been updated to Level 2) to confirm and state that:
Navigation Timing Level 1 uses Unix epoch time, and the rest...
Navigation Timing Level 2 uses HR time
User Timing Level 1 uses HR time
User Timing Level 2 uses HR time
Also, performance.now() returns HR time in both Chrome and Firefox.
How do I convert Unix epoch time to HR time?
SOLVED:
The code was corrected with help from Amadan.
See the comments on the accepted answer.
function nav(eventName, fallback) {
    var lev1 = performance.timing; // deprecated: Unix epoch time in ms since 1970
    var lev2 = performance.getEntriesByType("navigation")[0]; // ms since the page started to load
    var nav = lev2 || lev1; // if lev2 is undefined then use lev1
    if (!nav[eventName] && fallback) eventName = fallback
    // approximate t, the microseconds it takes to execute performance.now()
    var i = 10000, t = performance.now()
    while (--i) performance.now()
    t = (performance.now() - t) / 10000 // < 10 microseconds
    var oldTime = new Date().getTime(),
        newTime = performance.now(),
        timeOrigin = performance.timeOrigin ?
            performance.timeOrigin :
            oldTime - newTime - t; // approximate
    return nav[eventName] - (lev2 ? 0 : timeOrigin);
    // return nav[eventName] - (lev2 ? 0 : lev1.navigationStart); // alternative?
}
performance.timeOrigin is subtracted in the case where the old timing (lev1) is used.
If the browser does not have it, then approximate timeOrigin by subtracting performance.now() (the time since timeOrigin) from new Date().getTime() (the time since the Unix epoch), which yields the time of timeOrigin since the Unix epoch. Apparently that is the definition, though the linked page was a bit vague about it. I confirmed it by testing, and I trust the answer. Hopefully the W3C has a better definition of timeOrigin than "the high resolution timestamp of the start time of the performance measurement".
The function's returned value represents the time elapsed since the time origin.
It may be insignificant in most cases, but the measured time t that it takes to execute performance.now() is subtracted to approximate simultaneous execution.
I measured t at almost 10 microseconds on my Raspberry Pi, and it was fairly stable across various loop sizes. But my Lenovo was not as precise, rounding off decimals and giving shorter times for t with bigger loop sizes.
An alternative solution is commented away in the last line of code.
The deprecated performance.timing.navigationStart:
representing the moment, in milliseconds since the UNIX epoch, right
after the prompt for unload terminates on the previous document in the
same browsing context. If there is no previous document, this value
will be the same as PerformanceTiming.fetchStart
So, to check the current document (ignoring any previous one), use the deprecated performance.timing.fetchStart:
representing the moment, in milliseconds since the UNIX epoch, the
browser is ready to fetch the document using an HTTP request. This
moment is before the check to any application cache.
It is of course correct to use a deprecated property if it is the only one the browser understands. It is used when "navigation" is not defined in getEntriesByType, which otherwise has good browser support.
A quick check confirmed they agree, via this line just before the return:
console.log(performance.timeOrigin + '\n' + lev1.navigationStart + '\n' + lev1.fetchStart)
With a result that looks like this in my Chrome:
1560807558225.1611
1560807558225
1560807558241
It is only possible if the browser supports HR time 2:
let unixTime = hrTime + performance.timeOrigin;
let hrTime = unixTime - performance.timeOrigin;
However, performance is generally used for time diffs, which do not care what the origin of absolute timestamps is.
For the browsers that do not support HR time 2, or those that "support" it half-heartedly, you can fake it this way:
const hrSyncPoint = performance.now();
const unixSyncPoint = new Date().getTime();
const timeOrigin = unixSyncPoint - hrSyncPoint;
It's not super-exact, but should be good enough for most purposes (on my system, performance.timeOrigin - timeOrigin is sub-millisecond).
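With that synthesized timeOrigin, the same conversions as above apply:

const unixTime = timeOrigin + performance.now(); // approximate epoch milliseconds
const backToHr = unixTime - timeOrigin;          // back to a performance.now()-style value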

Accurately measure a Javascript Function performance while displaying the output to user

As you can see in the code below, when I increase the size of the string it still leads to a 0-millisecond difference. Moreover, there is an inconsistency as the string length increases.
Am I doing something wrong here?
let stringIn = document.getElementById('str');
let button = document.querySelector('button');
button.addEventListener('click', () => {
    let t1 = performance.now();
    functionToTest(stringIn.value);
    let t2 = performance.now();
    console.log(`time taken is ${t2 - t1}`);
});
function functionToTest(str) {
    let total = 0;
    for (const i of str) { // 'const' keeps i from leaking as an implicit global
        total++;
    }
    return total;
}
<input id="str">
<button type="button">Test string</button>
I tried using await too, but the result is the same (see the code snippet below). The function enclosing this code is async:
let stringArr = this.inputString.split(' ');
let longest = '';
const t1 = performance.now();
let length = await new Promise(resolve => {
    stringArr.map((item, i) => {
        longest = longest.length < item.length ? longest : item;
        i === stringArr.length - 1 ? resolve(longest) : '';
    });
});
const diff = performance.now() - t1;
console.log(diff);
this.result = `The time taken in mili seconds is ${diff}`;
I've also tried this answer, but it is also inconsistent.
As a workaround I tried the console.time feature, but it doesn't allow the time to be rendered and isn't accurate either.
Update: I want to build an interface like jsPerf, which will be quite similar to it but for a different purpose. Mostly I would like to compare different functions that will depend on user inputs.
There are three things which may help you understand what's happening:
Browsers reduce performance.now() precision to prevent Meltdown and Spectre attacks, so Chrome gives at most 0.1 ms precision, Firefox 1 ms, etc. This makes it impossible to measure small timeframes; if a function is extremely quick, 0 ms is an understandable result. (Thanks #kaiido) Source: paper, additional links here
Any code (including JS) in a multithreaded environment will not execute with constant performance (at least due to OS thread switching), so getting consistent values across single runs is an unreachable goal. To get a precise number, the function should be executed multiple times and the average taken. This works even with a low-precision performance.now(). (Boring explanation: if a function is much faster than 0.1 ms, the browser will usually report 0 ms, but from time to time a run will win the lottery and the browser will return 0.1 ms; longer functions win this lottery more often.)
There is an "optimizing compiler" in most JS engines. It optimizes frequently used functions. Optimization is expensive, so JS engines only optimize hot functions. This explains the performance increase after several runs: at first the function is executed in the slowest way; after several executions it is optimized, and performance increases. (Consider adding warm-up runs, as in the sketch below.)
I was able to get non-zero numbers in your code snippet by copy-pasting a 70 KB file into the input. After the 3rd run the function was optimized, but even then the performance was not constant:
time taken is 11.49999990593642
time taken is 5.100000067614019
time taken is 2.3999999975785613
time taken is 2.199999988079071
time taken is 2.199999988079071
time taken is 2.099999925121665
time taken is 2.3999999975785613
time taken is 1.7999999690800905
time taken is 1.3000000035390258
time taken is 2.099999925121665
time taken is 1.9000000320374966
time taken is 2.300000051036477
Explanation of the 1st point
Let's say two events happened, and the goal is to find the time between them.
The 1st event happened at time A and the second at time B. The browser rounds the precise values A and B before returning them.
Several cases to look at:
A        B        B-A      floor(A)  floor(B)  floor(B)-floor(A)
12.001   12.003   0.002    12        12        0
11.999   12.001   0.002    11        12        1
Browsers are smarter than we think: there are lots of improvements and caching techniques in place for memory allocation, repeated code execution, on-demand CPU allocation and so on. For instance, V8, the JavaScript engine that powers Chrome and Node.js, caches code execution cycles and results. Furthermore, your results may be affected by the resources your browser utilizes during code execution, so your results may vary even across multiple execution cycles.
As you have mentioned that you are trying to create a jsPerf clone, take a look at Benchmark.js; this library is used by the jsPerf development team.
Running performance analysis tests is really hard and I would suggest running them on a Node.js environment with predefined and preallocated resources in order to obtain your results.
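A minimal Benchmark.js sketch (Node.js; assumes npm install benchmark):

const Benchmark = require('benchmark');

new Benchmark.Suite()
    .add('for...of counter', () => {
        let total = 0;
        for (const ch of 'some test string') total++;
    })
    .on('cycle', event => console.log(String(event.target))) // e.g. "for...of counter x 12,345,678 ops/sec ±0.50%"
    .on('complete', function () {
        console.log('Fastest is ' + this.filter('fastest').map('name'));
    })
    .run();

Benchmark.js handles warm-up, sampling, and statistical analysis for you, which is exactly the hard part of what jsPerf does.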
You might want to take a look at https://github.com/anywhichway/benchtest, which just reuses Mocha unit tests. Note that performance testing at the unit level should only be one part of your performance testing; you should also use simulators that emulate real-world conditions and test your code at the application level to assess network impacts, module interactions, etc.

Did I get the number of operations per second this way?

Look at this code:
function wait(time) {
    let i = 0;
    let a = Date.now();
    let x = a + (time || 0);
    let b;
    while ((b = Date.now()) <= x) ++i;
    return i;
}
If I run it in a browser (particularly Google Chrome, but I don't think it matters) as wait(1000), the machine will obviously freeze for a second and then return the counted value of i.
Let it be 10,000,000 (I'm getting values close to this one). This value varies every time, so let's take an average number.
Did I just get the current number of operations per second of the processor in my machine?
Not at all.
What you get is the number of loop cycles completed by the Javascript process in a certain time. Each loop cycle consists of:
Creating a new Date object
Comparing two Date objects
Incrementing a Number
Incrementing the Number variable i is probably the least expensive of these, so the function is not really reporting how long the increment alone takes.
Aside from that, note that the machine is doing a lot more than running a Javascript process. You will see interference from all sorts of activity going on in the computer at the same time.
When running inside a Javascript process, you're simply too far away from the processor (in terms of software layers) to make that measurement. Beneath Javascript, there's the browser and the operating system, each of which can (and will) make decisions that affect this result.
No. You can get the number of language operations per second, though the actual number of machine operations per second on a whole processor is more complicated.
Firstly, the processor is not wholly dedicated to the browser, so it is actually likely switching back and forth between prioritized processes. On top of that, memory access is obscured and the processor uses extra operations to manage memory (page flushing, etc.), and this is not going to be very transparent to you at any given time. On top of that, physical properties mean that the real clock rate of the processor is dynamic... You can see it's pretty complicated already ;)
To really calculate the number of machine operations per second, you need to measure the clock rate of the processor and multiply it by the number of instructions per cycle the processor can perform. For example, a core running at 3 GHz that retires 4 instructions per cycle peaks at roughly 12 billion instructions per second. Again this varies, but the manufacturer's specs will likely be a good enough estimate :P.
If you wanted to use a program to measure this, you'd need to somehow dedicate 100% of the processor to your program and have it run a predictable set of instructions with no other hangups (like memory management). Then you would need to account for the number of instructions it takes to load the program's instructions into the code caches. This is not really feasible, however.
As others have pointed out, this will not help you determine the number of operations the processor does per second, for the reasons given in the prior answers. I do however think that a similar experiment could be set up to estimate the number of operations executed by your JavaScript interpreter running in your browser. For example, given a function factorial(n), an operation that runs in O(n), you could execute factorial(100) repeatedly over the course of a minute.
// O(n) helper assumed by the answer: roughly n multiplications per call.
function factorial(n) {
    let result = 1;
    for (let i = 2; i <= n; i++) result *= i;
    return result;
}

function test() {
    let start = Date.now();
    let end = start + 60 * 1000;
    let numberOfExecutions = 0;
    while (Date.now() < end) {
        factorial(100);
        numberOfExecutions++;
    }
    // Each factorial(100) call is ~100 operations, over 60 seconds:
    return numberOfExecutions * 100 / 60; // operations per second
}
The idea here is that factorial is by far the most time-consuming function in the code, and since factorial runs in O(n), we know factorial(100) is approximately 100 operations. Note that this will not be exact, and larger numbers will make for better approximations. Also remember that this estimates the number of operations executed by your interpreter, not by your processor.
There is a lot of truth to all the previous comments, but I want to invert the reasoning a little, because I believe it is easier to understand that way.
I believe the fairest way to calculate it is with the most basic loop, not relying on any dates or functions, and calculating the values afterwards.
You will see that the smaller the run, the bigger the initial overhead is. That means it takes a small amount of time to start and finish each run, but at a certain point they all approach a number that can fairly be considered how many operations per second JavaScript can run.
My example:
const oneMillion = 1_000_000;
const tenMillion = 10_000_000;
const oneHundredMillion = 100_000_000;
const oneBillion = 1_000_000_000;
const tenBillion = 10_000_000_000;
const oneHundredBillion = 100_000_000_000;
const oneTrillion = 1_000_000_000_000;

function runABunchOfTimes(times) {
    console.time('timer')
    for (let i = 0; i < times; ++i) {}
    console.timeEnd('timer')
}
I tried this on a 2020 MacBook that already had a lot of load on it, with many processes running.
At the very end I take the time the console showed it took to run and divide the number of runs by it. The oneTrillion and oneBillion runs are virtually the same; however, when you get down to oneMillion and 1000 you can see that they are not as performant, due to the initial cost of creating the for loop in the first place.
We usually try to stay away from O(n^2) and slower functions exactly because we do not want to reach that maximum. If you were to perform a find inside of a map over an array of all the cities in the world (around 10_000 according to Google; I haven't counted), you would already reach 100_000_000 iterations, and they would certainly not be as simple as iterating over an empty loop body like in my example. Your code would then take minutes to run, but I am sure you are aware of this, and that is why you posted the question in the first place.
Calculating how long it would take is tricky, not only because of the above but also because you cannot predict which device will run your function. Nowadays I can open a page on my TV, my watch, or a Raspberry Pi, and none of them would be nearly as fast as the computer I was using when writing these functions. If I were to benchmark a device, I would use something like the function above, since it is the simplest loop operation I could think of.
