Is it safe to use performance.now() to calibrate time?

I'm wondering if I'm not foreseeing something that might be a problem with my approach.
class RDate {
  constructor(serverIsoDate) {
    this.serverDate = new Date(serverIsoDate);
    this.localDate = new Date();
    this.localTick = performance.now();
  }
  getCompensatedDate() {
    return this.serverDate.getTime() + (performance.now() - this.localTick);
  }
}
I get serverDate periodically, but even if the user leaves the webapp running for a long time, I can guarantee the correct time, even if the user spoofs their clock.
Does performance.now() overflow or get paused when the tab gets suspended?
I can detect how much the clock drifted or got spoofed by calculating the difference between localDate and localTick:
_spoofed() {
  return Math.abs(this.localDate.getTime() - this.localTick - (Date.now() - performance.now())) > 1000 * 60 * 60;
}

The MDN documentation on the Performance API says:
Since a platform's system clock is subject to various skews (such as NTP adjustments), the interfaces support a monotonic clock i.e. a clock that is always increasing.
This seems to say that the clock used for performance.now() should be independent of the system clock.
I don't think its clock can be paused. The description of performance.now() says:
The returned value represents the time elapsed since the time origin.
The time origin doesn't change when a tab is suspended, so it should include the suspension time.
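For what it's worth, here is a minimal sketch of how the class above might be wired up, assuming the server returns an ISO 8601 timestamp (the /api/time endpoint and the "now" field are illustrative, not from the question):

// Hypothetical usage of the RDate class from the question
fetch('/api/time')                       // e.g. responds { "now": "2024-01-01T12:00:00Z" }
  .then(res => res.json())
  .then(({ now }) => {
    const rdate = new RDate(now);
    setInterval(() => {
      // compensated server time, unaffected by system-clock changes
      console.log(new Date(rdate.getCompensatedDate()).toISOString());
    }, 1000);
  });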

Related

setInterval is not run at exact interval

If you create a very simple program that has a setInterval with 1 second delay, and you log the times its function is called, you will notice that the interval 'drifts'.
Basically, it actually takes (1,000ms + some amount of time) between each call.
For this program, it actually takes ~1,005ms between each call.
What causes the drift?
Is it taking 5ms to requeue setInterval?
Is it the length of the time it takes to run the function? (I doubt this, but having trouble concluding.)
Why does setInterval behave this way, and not just base itself on some clock time? (e.g. if you have 1,000ms delay and you started at time 3... just check if 1,003 then 2,003 and so on has elapsed?)
Example:
const startTime = new Date().valueOf();
function printElapsedTime(startTime) {
  console.log(new Date().valueOf() - startTime);
}
let intervalObj = setInterval(printElapsedTime, 1000, startTime);
Output:
1005
2010
3015
4020
So you are not sync'd to 1 second anymore. Since it drifts by about 5 ms per tick, after 100 runs it will be running a half second 'later' than expected.
This question discusses how to avoid the drift, but does not explain WHY the drift is happening. (For instance, it does not say whether setInterval re-queues itself on the event queue after each call and that re-queuing takes a few milliseconds - which is just a guess at the cause.)
While no JavaScript running in a standard browser claims to be real-time (as pointed out in several comments), there are steps you can take to keep things from getting as out of hand as the example in the question, where the errors are cumulative.
To start with an experiment, I ran this in Chrome on my Windows 10 machine:
const startTime = new Date().valueOf();
function printElapsedTime(startTime) {
  let curTime = new Date().valueOf();
  console.log(curTime - startTime);
}
let intervalObj = setInterval(printElapsedTime, 1000, startTime);
This gave a fairly consistent error each second, and around the one-minute mark there was no cumulative drift.
However, using Firefox on the same system there was cumulative drift, and it was already significant by the one-minute mark.
So the question is, can anything be done to make it a bit better across browsers?
This snippet ditches setInterval and instead uses setTimeout on each invocation:
const startTime = new Date().valueOf();
let nextExpected = startTime + 1000;
function printElapsedTime(startTime) {
  let curTime = new Date().valueOf();
  console.log(curTime - startTime);
  let nextInterval = 1000 + nextExpected - curTime;
  setTimeout(printElapsedTime, nextInterval, startTime);
  nextExpected = curTime + nextInterval;
}
let intervalObj = setTimeout(printElapsedTime, 1000, startTime);
On Firefox there was no cumulative drift, and the error around the one-minute mark was no worse than earlier.
So, in an attempt to actually answer the question:
Computers have other duties to attend to and cannot guarantee to process a timeout function at an exact time (though the spec requires them not to run it before the interval has elapsed). In the given code in particular, console.log takes time, setting up a new timeout (in the final example) takes time, and the laptop/phone is also dealing with lots of other things at the same time: housekeeping in the background, listening for interrupts, and so on.
Different browsers seem to treat setInterval differently - the spec doesn't seem to say what, if anything, they should do about cumulative drift. From the experiments here it seems that Chrome/Edge, at least on my Windows 10 laptop, does some mitigation so the drift isn't cumulative, whereas Firefox doesn't seem to adjust and the drift can be significant.
It would be interesting to know if others on different systems get equivalent results. Anyway, the basic message is: don't rely on such timeouts, it is not a real-time system.
Long story short, no desktop operating system is a real-time OS:
https://en.m.wikipedia.org/wiki/Real-time_operating_system
Thus, executing a task like calling the callback function is not guaranteed to happen at an exact time. The OS does its best to juggle all the tasks and take care of power/resource constraints to optimize performance as a whole. As a result, timings float around a little.
Interestingly, you get a consistent 5 ms shift. I have no explanation for that.
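As a small follow-up experiment, a probe like the following (a sketch of mine, runnable in any modern browser console) logs the per-tick error, so you can see for yourself whether your browser compensates the drift:

// Log how late each setInterval tick is relative to the ideal schedule
const t0 = performance.now();
let tick = 0;
setInterval(() => {
  tick += 1;
  const error = performance.now() - t0 - tick * 1000;
  console.log('tick ' + tick + ': ' + error.toFixed(1) + ' ms late');
}, 1000);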

Javascript Date.now() function [duplicate]

I got this code over here:
var date = new Date();
setTimeout(function(e) {
  var currentDate = new Date();
  if (currentDate - date >= 1000) {
    console.log(currentDate, date);
    console.log(currentDate - date);
  } else {
    console.log("It was less than a second!");
    console.log(currentDate - date);
  }
}, 1000);
On my computer, it always executes correctly, with 1000 in the console output. Interestingly, on another computer running the same code, the timeout callback starts in less than a second, and the difference currentDate - date is between 980 and 998.
I know there are libraries that solve this inaccuracy (for example, Tock).
Basically, my question is: What are the reasons because setTimeout does not fire in the given delay? Could it be the computer that is too slow and the browser automatically tries to adapt to the slowness and fires the event before?
It's not supposed to be particularly accurate. There are a number of factors limiting how soon the browser can execute the code; quoting from MDN:
In addition to "clamping", the timeout can also fire later when the page (or the OS/browser itself) is busy with other tasks.
In other words, the way that setTimeout is usually implemented, it is just meant to execute after a given delay, and once the browser's thread is free to execute it.
However, different browsers may implement it in different ways. Here are some tests I did:
var date = new Date();
setTimeout(function(e) {
  var currentDate = new Date();
  console.log(currentDate - date);
}, 1000);

// Browser   Test1  Test2  Test3  Test4
// Chrome      998   1014    998    998
// Firefox    1000   1001   1047   1000
// IE 11      1006   1013   1007   1005
Perhaps the < 1000 times from Chrome could be attributed to inaccuracy in the Date type, or perhaps Chrome uses a different strategy for deciding when to execute the code - maybe it's trying to fit it into the nearest time slot, even if the timeout delay hasn't fully elapsed.
In short, you shouldn't use setTimeout if you expect reliable, consistent, millisecond-scale timing.
In general, computer programs are highly unreliable when trying to execute things with higher precision than 50 ms. The reason for this is that even on an octacore hyperthreaded processor the OS is usually juggling several hundreds of processes and threads, sometimes thousands or more. The OS makes all that multitasking work by scheduling all of them to get a slice of CPU time one after another, meaning they get 'a few milliseconds of time at most to do their thing'.
Implicitly this means that if you set a timeout for 1000 ms, chances are far from small that the current browser process won't even be running at that point in time, so it's perfectly normal for the browser not to notice until 1005, 1010 or even 1050 milliseconds that it should be executing the given callback.
Usually this is not a problem, it happens, and it's rarely of utmost importance. If it is, all operating systems supply kernel level timers that are far more precise than 1 ms, and allow a developer to execute code at precisely the correct point in time. JavaScript however, as a heavily sandboxed environment, doesn't have access to kernel objects like that, and browsers refrain from using them since it could theoretically allow someone to attack the OS stability from inside a web page, by carefully constructing code that starves other threads by swamping it with a lot of dangerous timers.
As for why the test yields 980 I'm not sure - that would depend on exactly which browser you're using and which JavaScript engine. I can however fully understand if the browser just manually corrects a bit downwards for system load and/or speed, ensuring that "on average the delay is still about the correct time" - it would make a lot of sense from the sandboxing principle to just approximate the amount of time required without potentially burdening the rest of the system.
Someone please correct me if I am misinterpreting this information:
According to a post from John Resig regarding the inaccuracy of performance tests across platforms (emphasis mine)
With the system times constantly being rounded down to the last queried time (each about 15 ms apart) the quality of performance results is seriously compromised.
So there is up to a 15 ms fudge on either end when comparing to the system time.
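To see that granularity yourself, a quick probe (my sketch, not from the original post) chains 0 ms timeouts and logs the gap between callbacks; on clamped or low-resolution systems the gaps cluster around the effective timer resolution:

// Measure the gaps between nominally 0 ms timeouts
let last = performance.now();
let count = 0;
function probe() {
  const now = performance.now();
  console.log('gap: ' + (now - last).toFixed(2) + ' ms');
  last = now;
  if (++count < 10) setTimeout(probe, 0); // browsers clamp nested 0 ms timeouts
}
setTimeout(probe, 0);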
I had a similar experience.
I was using something like this:
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000));
setTimeout(function () {
  CountDownClock(ElementID, RelativeTime);
}, iMillSecondsTillNextWholeSecond); // wait until the next whole second to start
I noticed it would skip a second every couple of seconds, and sometimes it would go longer between skips.
However, I'd still catch it skipping after 10 or 20 seconds, and it just looked rickety.
I thought, "Maybe the timeout is too slow, or waiting for something else?"
Then I realized, "Maybe it's too fast, and the timers the browser is managing are off by a few milliseconds?"
After adding +1 millisecond to my variable I only saw it skip once.
I ended up adding +50 ms, just to be on the safe side.
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000) + 50);
I know, it's a bit hacky, but my Timer is running smooth now. :)
JavaScript has a way of dealing with exact time frames. Here's one approach:
You could save Date.now() when you start to wait, create an interval with a short update period, and compute the difference between the dates.
Example (note: the comparison must be >= rather than ===, since a 50 ms tick will rarely land on exactly 1000, and clearInterval needs the interval's id):
const startDate = Date.now()
const intervalId = setInterval(() => {
  const currentDate = Date.now()
  if (currentDate - startDate >= 1000) {
    // at least a second has passed
    clearInterval(intervalId)
    return
  }
  // it was not a second yet
}, 50)

Get Navigation Timing backward/forward compatible - Convert from epoch to HR time

Let's begin with a note from www.w3.org that includes two important links to compare.
The PerformanceTiming interface was defined in [NAVIGATION-TIMING] and
is now considered obsolete. The use of names from the
PerformanceTiming interface is supported to remain backwards
compatible, but there are no plans to extend this functionality to
names in the PerformanceNavigationTiming interface defined in
[NAVIGATION-TIMING-2] (or other interfaces) in the future.
I have made a function to get a navigation time that should be both backward and forward compatible, because we are in the middle of the transition to Level 2. This function to get a time from an event name works in Chrome but not Firefox:
function nav(eventName) {
  var lev1 = performance.timing; // deprecated: Unix epoch time in ms since 1970
  var lev2 = performance.getEntriesByType("navigation")[0]; // ms since the page started to load (since performance.timing.navigationStart)
  var nav = lev2 || lev1; // if lev2 is undefined then use lev1
  return nav[eventName];
}
Explanation: When there is no "navigation" entry, this falls back to the deprecated way of doing navigation timing, based on Unix epoch time in milliseconds since 1970 (lev1). The new way (lev2) is HR time in milliseconds since the current document started to load, which is useful together with User Timing, which has always had the HR time format.
How can we get the function to return HR time in all cases?
When I see a number with more than 10 digits and no decimal point, I know it is a time obtained from the deprecated Navigation Timing Level 1. All other test cases give decimal numbers, meaning they are HR times with higher precision. The biggest issue is that they have different time origins.
I have gone through confusion, trial and error, and frustrated searching (MDN has not been updated to Level 2) to confirm and state that:
Navigation Timing Level 1 uses Unix epoch time, while the rest use HR time:
Navigation Timing Level 2 uses HR time
User Timing Level 1 uses HR time
User Timing Level 2 uses HR time
Also performance.now() has HR time both in Chrome and Firefox.
How to convert unix epoch time to HR time?
SOLVED:
The code was corrected with help from Amadan; see the comments in the accepted answer.
function nav(eventName, fallback) {
  var lev1 = performance.timing; // deprecated: Unix epoch time in ms since 1970
  var lev2 = performance.getEntriesByType("navigation")[0]; // ms since the page started to load
  var nav = lev2 || lev1; // if lev2 is undefined then use lev1
  if (!nav[eventName] && fallback) eventName = fallback;
  // approximate t, the microseconds it takes to execute performance.now()
  var i = 10000, t = performance.now();
  while (--i) performance.now();
  t = (performance.now() - t) / 10000; // < 10 microseconds
  var oldTime = new Date().getTime(),
      newTime = performance.now(),
      timeOrigin = performance.timeOrigin ?
        performance.timeOrigin :
        oldTime - newTime - t; // approximate
  return nav[eventName] - (lev2 ? 0 : timeOrigin);
  // return nav[eventName] - (lev2 ? 0 : lev1.navigationStart); // alternative?
}
performance.timeOrigin is subtracted in the case where the old timing (lev1) is used.
If the browser does not have it, approximate timeOrigin by subtracting performance.now() (the time since timeOrigin) from new Date().getTime() (the time since the Unix epoch), which yields the Unix-epoch timestamp of timeOrigin. Apparently that is the definition, though the link was a bit vague about it. I confirmed it by testing and I trust the answer. Hopefully w3c has a better definition of timeOrigin than: the high resolution timestamp of the start time of the performance measurement.
The returned value of performance.now() represents the time elapsed since the time origin.
It may be insignificant in most cases, but the measured time t it took to execute performance.now() is subtracted to approximate simultaneous execution.
I measured t at almost 10 microseconds on my Raspberry Pi, and it was fairly stable across various loop sizes. But my Lenovo was not as precise: it rounded off decimals and produced shorter times for t as I tested bigger loop sizes.
An alternative solution is commented out in the last line of the code.
The deprecated performance.timing.navigationStart:
representing the moment, in milliseconds since the UNIX epoch, right after the prompt for unload terminates on the previous document in the same browsing context. If there is no previous document, this value will be the same as PerformanceTiming.fetchStart
So, to check the current document (ignoring any previous one), use the deprecated performance.timing.fetchStart:
representing the moment, in milliseconds since the UNIX epoch, the browser is ready to fetch the document using an HTTP request. This moment is before the check to any application cache.
It is of course correct to use a deprecated property if it is the only one the browser understands. It is used when "navigation" is not defined in getEntriesByType, which otherwise has good browser support.
A quick check confirmed that they agree, via this line just before the return:
console.log(performance.timeOrigin + '\n' + lev1.navigationStart + '\n' + lev1.fetchStart)
With a result that looks like this in my Chrome
1560807558225.1611
1560807558225
1560807558241
It is only possible if the browser supports HR time 2:
let unixTime = hrTime + performance.timeOrigin; // HR time → Unix epoch time
let hrTime = unixTime - performance.timeOrigin; // Unix epoch time → HR time
However, performance is generally used for time diffs, which do not care what the origin of absolute timestamps is.
For the browsers that do not support HR time 2, or those that "support" it half-heartedly, you can fake it this way:
const hrSyncPoint = performance.now();
const unixSyncPoint = new Date().getTime();
const timeOrigin = unixSyncPoint - hrSyncPoint;
It's not super-exact, but should be good enough for most purposes (on my system, performance.timeOrigin - timeOrigin is sub-millisecond).
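Putting the two cases together, a small helper pair (my sketch, building on the fallback above) converts in both directions, using the real timeOrigin when available and the synthesized one otherwise:

// Use the native origin if present, else synthesize it as described above
const timeOrigin = (typeof performance.timeOrigin === 'number')
  ? performance.timeOrigin
  : new Date().getTime() - performance.now(); // approximate fallback

const hrToUnix = (hr) => hr + timeOrigin;     // HR timestamp → Unix epoch ms
const unixToHr = (unix) => unix - timeOrigin; // Unix epoch ms → HR timestamp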

The javascript timing resolution in my browsers seems to be ~8ms. How can I increase it? [duplicate]

Something that has always bugged me is how unpredictable the setTimeout() method in Javascript is.
In my experience, the timer is horribly inaccurate in a lot of situations. By inaccurate, I mean the actual delay time seems to vary by 250-500ms more or less. Although this isn't a huge amount of time, when using it to hide/show UI elements the time can be visibly noticeable.
Are there any tricks that can be done to ensure that setTimeout() performs accurately (without resorting to an external API) or is this a lost cause?
Are there any tricks that can be done to ensure that setTimeout() performs accurately (without resorting to an external API) or is this a lost cause?
No and no. You're not going to get anything close to a perfectly accurate timer with setTimeout() - browsers aren't set up for that. However, you don't need to rely on it for timing things either. Most animation libraries figured this out years ago: you set up a callback with setTimeout(), but determine what needs to be done based on the value of (new Date()).getTime() (or equivalent). This allows you to take advantage of more reliable timer support in newer browsers, while still behaving appropriately on older browsers.
It also allows you to avoid using too many timers! This is important: each timer is a callback. Each callback executes JS code. While JS code is executing, browser events - including other callbacks - are delayed or dropped. When the callback finishes, additional callbacks must compete with other browser events for a chance to execute. Therefore, one timer that handles all pending tasks for that interval will perform better than two timers with coinciding intervals, and (for short timeouts) better than two timers with overlapping timeouts!
Summary: stop using setTimeout() to implement "one timer / one task" designs, and use the real-time clock to smooth out UI actions.
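To illustrate the "one timer, many tasks" point, here is a sketch (my own illustration, not from the answer) in which a single interval dispatches every task that has come due, scheduling each against its ideal time rather than against the moment it actually ran:

// One shared coarse timer drives all periodic tasks
const tasks = [];
function every(ms, fn) {
  tasks.push({ ms, fn, next: performance.now() + ms });
}

setInterval(() => {
  const now = performance.now();
  for (const t of tasks) {
    if (now >= t.next) {
      t.fn();
      t.next += t.ms; // advance from the ideal time, so errors don't accumulate
    }
  }
}, 25);

// usage (illustrative):
every(100, () => console.log('fast task'));
every(1000, () => console.log('slow task'));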
REF: http://www.sitepoint.com/creating-accurate-timers-in-javascript/
This site bailed me out on a major scale.
You can use the system clock to compensate for timer inaccuracy. If you run a timing function as a series of setTimeout calls — each instance calling the next — then all you have to do to keep it accurate is work out exactly how inaccurate it is, and subtract that difference from the next iteration:
var start = new Date().getTime(),
    time = 0,
    elapsed = '0.0';

function instance() {
  time += 100;
  elapsed = Math.floor(time / 100) / 10;
  if (Math.round(elapsed) == elapsed) { elapsed += '.0'; }
  document.title = elapsed;
  var diff = (new Date().getTime() - start) - time;
  window.setTimeout(instance, (100 - diff));
}

window.setTimeout(instance, 100);
This method will minimize drift and reduce the inaccuracies by more than 90%.
It fixed my issues, hope it helps
I had a similar problem not long ago and came up with an approach that combines requestAnimationFrame with performance.now(), which works very effectively.
I'm now able to make timers accurate to approximately 12 decimal places:
window.performance = window.performance || {};
performance.now = (function() {
  return performance.now ||
         performance.mozNow ||
         performance.msNow ||
         performance.oNow ||
         performance.webkitNow ||
         function() {
           // Doh! Crap browser!
           return new Date().getTime();
         };
})();
http://jsfiddle.net/CGWGreen/9pg9L/
If you need to get an accurate callback on a given interval, this gist may help you:
https://gist.github.com/1185904
function interval(duration, fn) {
  var _this = this;
  this.baseline = undefined;

  this.run = function() {
    if (_this.baseline === undefined) {
      _this.baseline = new Date().getTime();
    }
    fn();
    var end = new Date().getTime();
    _this.baseline += duration;

    // schedule the next tick relative to the ideal baseline, not to "now"
    var nextTick = duration - (end - _this.baseline);
    if (nextTick < 0) {
      nextTick = 0;
    }

    _this.timer = setTimeout(function() {
      _this.run(end);
    }, nextTick);
  };

  this.stop = function() {
    clearTimeout(_this.timer);
  };
}
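A usage sketch for the helper above (the callback body is illustrative):

var ticker = new interval(1000, function () {
  console.log('tick at', new Date().toISOString());
});
ticker.run();  // start ticking; lateness is corrected against the baseline
// ...later: ticker.stop();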
shog9's answer is pretty much what I'd say, although I'd add the following about UI animation/events:
If you've got a box that's supposed to slide onto the screen, expand downwards, then fade in its contents, don't try to make all three events separate with delays timed to make them fire one after another - use callbacks, so once the first event is done sliding it calls the expander, once that's done it calls the fader. jQuery can do it easily, and I'm sure other libraries can as well.
If you're using setTimeout() to yield quickly to the browser so its UI thread can catch up with any tasks it needs to do (such as updating a tab, or to not show the Long Running Script dialog), there is a new API called Efficient Script Yielding, aka setImmediate(), that may work a bit better for you.
setImmediate() operates very similarly to setTimeout(), yet it may run immediately if the browser has nothing else to do. In many situations where you are using setTimeout(..., 16) or setTimeout(..., 4) or setTimeout(..., 0) (i.e. you want the browser to run any outstanding UI thread tasks and not show a Long Running Script dialog), you can simply replace your setTimeout() with setImmediate(), dropping the second (millisecond) argument.
The difference with setImmediate() is that it is basically a yield; if the browser has something to do on the UI thread (e.g., update a tab), it will do so before returning to your callback. However, if the browser is already all caught up with its work, the callback specified in setImmediate() will essentially run without delay.
Unfortunately it is only currently supported in IE9+, as there is some push back from the other browser vendors.
There is a good polyfill available though, if you want to use it and hope the other browsers implement it at some point.
If you are using setTimeout() for animation, requestAnimationFrame is your best bet as your code will run in-sync with the monitor's refresh rate.
If you are using setTimeout() on a slower cadence, e.g. once every 300 milliseconds, you could use a solution similar to what user1213320 suggests, where you monitor how long it has been since the last time your timer ran and compensate for any delay. One improvement is that you could use the new High Resolution Time interface (aka window.performance.now()) instead of Date.now() to get greater-than-millisecond resolution for the current time.
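A sketch of that compensation idea (the 300 ms cadence is just an example; performance.now() supplies the sub-millisecond timestamps):

const cadence = 300;
let expected = performance.now() + cadence;
function step() {
  const drift = performance.now() - expected; // how late this invocation is
  // ...do the periodic work here...
  expected += cadence;
  setTimeout(step, Math.max(0, cadence - drift)); // re-aim at the ideal time
}
setTimeout(step, cadence);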
You need to "creep up" on the target time. Some trial and error will be necessary, but in essence:
Set a timeout to complete around 100 ms before the required time, then make the timeout handler function like this:

calculate_remaining_time
if remaining_time > 20ms // maybe as much as 50
  re-queue the handler for 10ms time
else {
  while (remaining_time > 0) calculate_remaining_time;
  do_your_thing();
  re-queue the handler for 100ms before the next required time
}
But your while loop can still get interrupted by other processes so it's still not perfect.
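In JavaScript, the pseudocode might come out roughly like this (my sketch; the thresholds are the ones guessed at above, and targetMs is a performance.now()-based timestamp):

function runAt(targetMs, doYourThing) {
  function handler() {
    const remaining = targetMs - performance.now();
    if (remaining > 20) {            // maybe as much as 50
      setTimeout(handler, 10);       // creep closer in small hops
    } else {
      while (performance.now() < targetMs) { /* spin out the last stretch */ }
      doYourThing();
    }
  }
  setTimeout(handler, Math.max(0, targetMs - performance.now() - 100));
}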
Here's an example demoing Shog9's suggestion. This fills a jquery progress bar smoothly over 6 seconds, then redirects to a different page once it's filled:
var TOTAL_SEC = 6;
var FRAMES_PER_SEC = 60;
var percent = 0;
var startTime = new Date().getTime();

setTimeout(updateProgress, 1000 / FRAMES_PER_SEC);

function updateProgress() {
  var currentTime = new Date().getTime();
  // 1000 to convert to milliseconds, and 100 to convert to percentage
  percent = (currentTime - startTime) / (TOTAL_SEC * 1000) * 100;
  $("#progressbar").progressbar({ value: percent });
  if (percent >= 100) {
    window.location = "newLocation.html";
  } else {
    setTimeout(updateProgress, 1000 / FRAMES_PER_SEC);
  }
}
This is a timer I made for a music project of mine which does exactly this - a timer that stays accurate across devices.
var Timer = function() {
  var msSinceInitialized = 0;
  var timeAtLastInterval = new Date().getTime();

  // accumulate elapsed wall-clock time as often as the browser allows
  setInterval(function() {
    var frametime = new Date().getTime();
    var timeElapsed = frametime - timeAtLastInterval;
    msSinceInitialized += timeElapsed;
    timeAtLastInterval = frametime;
  }, 1);

  this.setInterval = function(callback, timeout, args) {
    var timeStarted = msSinceInitialized;
    var interval = setInterval(function() {
      var totaltimepassed = msSinceInitialized - timeStarted;
      if (totaltimepassed >= timeout) {
        callback(args);
        timeStarted = msSinceInitialized;
      }
    }, 1);
    return interval;
  };
};

var timer = new Timer();
timer.setInterval(function() { console.log("This timer will not drift."); }, 1000);
Hate to say it, but I don't think there is a way to alleviate this. I do think that it depends on the client system, though, so a faster javascript engine or machine may make it slightly more accurate.
In my experience it is a lost effort; the smallest reasonable amount of time I've ever seen JS act within is around 32-33 ms. ...
There is definitely a limitation here. To give you some perspective, the Chrome browser Google just released is fast enough that it can execute setTimeout(function() {}, 0) in 15-20 ms whereas older Javascript engines took hundreds of milliseconds to execute that function. Although setTimeout uses milliseconds, no javascript virtual machine at this point in time can execute code with that precision.
Dan, from my experience (which includes an implementation of the SMIL 2.1 language in JavaScript, where time management is central) I can assure you that you never actually need high precision from setTimeout or setInterval.
What does however matter is the order in which setTimeout/setInterval gets executed when queued - and that always works perfectly.
JavaScript timeouts have a de facto limit of 10-15 ms (I'm not sure what you're doing to get 200 ms, unless you're doing 185 ms of actual JS execution). This is due to Windows having a standard timer resolution of 15 ms; the only way to do better is to use Windows' higher-resolution timers, which is a system-wide setting, so it can interfere with other applications on the system and also chews battery life (Chrome has a bug from Intel on this issue).
The de facto standard of 10-15 ms is due to people using 0 ms timeouts on websites but then coding in a way that assumes a 10-15 ms timeout (e.g. JS games which assume 60 fps but ask for 0 ms/frame with no delta logic, so the game/site/animation would run a few orders of magnitude faster than intended). To account for that, even on platforms that don't have Windows' timer problems, browsers limit timer resolution to 10 ms.
Here is what I use. Since it's JavaScript, I will post both my frontend and Node.js solutions:
For both, I use the same decimal-rounding function that I highly recommend you keep within arm's reach, because reasons:
const round = (places, number) => +(Math.round(number + `e+${places}`) + `e-${places}`)
places - the number of decimal places at which to round. This should be safe and should avoid any issues with floats (some numbers, like 1.0000000000005~, can be problematic). I spent time researching the best way to round decimals provided by high-resolution timers converted to milliseconds.
that + symbol - a unary operator that converts an operand into a number, virtually identical to Number().
Browser
const start = performance.now()
// I wonder how long this comment takes to parse
const end = performance.now()
const elapsed = round(2, end - start) // round the number first (see above)
const result = elapsed + ' ms'
node.js
// Start timer
const startTimer = () => process.hrtime()
// End timer
const endTimer = (time) => {
  const diff = process.hrtime(time)
  const NS_PER_SEC = 1e9
  const result = (diff[0] * NS_PER_SEC + diff[1])
  const elapsed = Math.round(result * 0.000001) // nanoseconds → milliseconds
  return elapsed
}
// This end timer converts the number from nanoseconds into milliseconds;
// you can find the nanosecond version if you need some seriously high-resolution timers.
const start = startTimer()
// I wonder how long this comment takes to parse
const end = endTimer(start)
console.log(end + ' ms')
You could consider using the HTML5 Web Audio clock (AudioContext.currentTime), which runs on the audio subsystem's clock, for better accuracy.
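For example (a sketch; AudioContext.currentTime is a seconds-based clock that starts at 0 when the context is created):

const ctx = new AudioContext();
// note: some browsers keep the context suspended until a user gesture resumes it
const t0 = ctx.currentTime;
setInterval(() => {
  console.log('audio clock elapsed: ' + (ctx.currentTime - t0).toFixed(3) + ' s');
}, 250);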

Displaying another computer's time on a web page using Javascript? [duplicate]

I am working on a very time-sensitive web application. One of the business rules given to me is that the application's behavior must always depend on the time on the web server, regardless of what time is on the client's clock. To make this clear to the user, I was asked to display the server's time in the web application.
To this end, I wrote the following Javascript code:
clock = (function () {
  var hours, minutes, seconds;

  function setupClock(updateDisplayCallback) {
    getTimeAsync(getTimeCallback);

    function getTimeCallback(p_hours, p_minutes, p_seconds) {
      hours = p_hours;
      minutes = p_minutes;
      seconds = p_seconds;
      setInterval(incrementSecondsAndDisplay, 1000);
    }

    function incrementSecondsAndDisplay() {
      seconds++;
      if (seconds === 60) {
        seconds = 0;
        minutes++;
        if (minutes === 60) {
          minutes = 0;
          hours++;
          if (hours === 24) {
            hours = 0;
          }
        }
      }
      updateDisplayCallback(hours, minutes, seconds);
    }
  }

  // a function that makes an AJAX call and invokes callback, passing hours, minutes, and seconds
  function getTimeAsync(callback) {
    $.ajax({
      type: "POST",
      url: "Default.aspx/GetLocalTime",
      contentType: "application/json; charset=utf-8",
      dataType: "json",
      success: function (response) {
        var date, serverHours, serverMinutes, serverSeconds;
        date = GetDateFromResponse(response);
        serverHours = date.getHours();
        serverMinutes = date.getMinutes();
        serverSeconds = date.getSeconds();
        callback(serverHours, serverMinutes, serverSeconds);
      }
    });
  }

  return {
    setup: setupClock
  };
})();
The function passed in for updateDisplayCallback is a simple function to display the date on the web page.
The basic idea is that the Javascript makes an asynchronous call to look up the server's time, store it on the client, and then update it once per second.
At first, this appears to work, but as time goes by, the displayed time gets behind a few seconds every minute. I left it running overnight, and when I came in the next morning, it was off by more than an hour! This is entirely unacceptable because the web application may be kept open for days at a time.
How can I modify this code so that the web browser will continuously and accurately display the server's time?
Javascript's setInterval is not accurate enough to allow you to keep the time like this.
My solution would be:
Periodically get the server's time in milliseconds (it does not need to be very often as the two clocks will hardly deviate that much)
Get the client time in milliseconds
Calculate the clock deviation between server and client (client-server)
Periodically update the display of the clock by getting the client time and adding the clock deviation
Edit:
To be more accurate, you could measure the round trip time of the server's request, divide it by 2 and factor that delay into the clock deviation. Assuming round trips are symmetrical in their duration, this would give a more accurate calculation.
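A sketch of that recipe (the /api/time endpoint and the epochMs field are placeholders for whatever your server actually returns):

// Estimate the server/client clock offset, halving the round trip
// to approximate one-way latency (assumes a symmetrical round trip)
async function getClockOffset() {
  const sent = Date.now();
  const res = await fetch('/api/time');        // hypothetical endpoint
  const received = Date.now();
  const serverMs = (await res.json()).epochMs; // hypothetical field
  const oneWay = (received - sent) / 2;
  return (serverMs + oneWay) - received;       // add this to client time
}

// usage: display the server's time once per second
getClockOffset().then(offset => {
  setInterval(() => {
    console.log(new Date(Date.now() + offset).toLocaleTimeString());
  }, 1000);
});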
setInterval is not a reliable way to schedule time-critical events. It may take less or more than 1000 ms to run your callback, depending on how busy the JavaScript thread is at the moment.
A better approach would be to take a shorter interval and use new Date().getTime() to check if a second has passed.
The minimum interval browsers allow is as high as 10 ms.
Thanks for the answers. I have up-voted both answers so far as they contain useful information. However, I am not using the exact answer prescribed in either answer.
What I finally decided on is a bit different.
I wrote about what I learned on my personal web page.
First of all, I now understand that using setInterval(..., 1000) is not good enough to have something done once per second for a long time. However, 'polling' the time with a much shorter interval looking for the second to change seems very inefficient to me.
I decided that it does make sense to keep track of the 'offset' between the server time and the client time.
My final solution is to do the following:
(1) Do an AJAX call to the server to get the time. The function also checks the client time and computes the difference between the server time and the client time, in milliseconds. Due to network latency and other factors, this initial fetch may be off by a few seconds. For my purposes, this is okay.
(2) Execute a tick function. Each time tick executes, it checks how long it has been since the last time tick executed. It will use this time to compute an argument to be passed to setTimeout so that the time display is updated approximately once per second.
(3) Each time the tick function computes the time to be displayed, it takes the client time and adds the difference that was computed in step (1). This way, I don't depend upon the client to have the time set correctly, but I do depend upon the client to accurately measure elapsed time. For my purposes, this is okay. The most important thing is that regardless of how setTimeout may be inaccurate or interrupted by other processes (such as a modal dialog, for instance), the time displayed should always be accurate.
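A condensed sketch of steps (1)-(3) (offset is the server-minus-client difference from step (1); the display callback is illustrative):

function startClock(offset, display) {
  function tick() {
    const serverNow = Date.now() + offset;    // step (3): client time + offset
    display(new Date(serverNow));
    const delay = 1000 - (serverNow % 1000);  // step (2): aim at the next whole second
    setTimeout(tick, delay);
  }
  tick();
}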
