I'm trying to develop a simple LiveDateTime object that's reliable, accurate and somewhat independent of the client system date as far as the updating of the time goes.
I've tried several solutions:
Taking the server date and the difference from the client system date to calculate the elapsed time, updating the time accordingly every 150 ms.
Taking the client system UTC date and incrementing it by one second every 1000 ms with setInterval() or setTimeout(). I also tried a self-correcting delay loop.
Some similar variants of the above.
None of the above met the requirements: either changing the client system date threw the live time off as well, or the loop's delay was inaccurate and drifted over time.
So, I decided to just go online and search for some popular websites that have what I want already implemented and working. I stumbled upon this Greenwich Mean Time example that's reliable, accurate and somewhat independent of the client system. If I change the date of my system, the date that they display is unaffected.
Upon inspecting their code, I found it hard to understand what they're doing well enough to replicate it; either their code is badly written or I'm not advanced enough to follow it.
I do know that they use AJAX calls (request.js) to the server for something (maybe the server time), and that they're using a setInterval(func, 150) plus a bunch of presentational methods which I don't think are the core of the live time's functionality.
Can anyone help me figure out the core of their accurate live time's functionality? If I can work that out, it would be easy to replicate from there.
The example is combined with a server-side language to get the server time. The time is written directly into the JS by PHP, in this line for example:
var ServerDSTCheck = new Date(parseFloat(1426849548940.7)).getTime();
That float 1426849548940.7 comes from the server, which is why it's not affected by the client machine's time. The AJAX call returns the same thing at intervals to maintain the server time, so it can't be tampered with on the client, and to correct for client-side lag.
That's the base of what you need. The rest of the code is about daylight savings, timezone compensation and presentational stuff.
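Here's a minimal sketch of that base technique; the /servertime endpoint and the 60-second refresh cadence are assumptions for illustration, not part of the original example:

var serverBase = 1426849548940.7; // UTC ms written into the page by the server (as in the PHP line above)
var clientBase = Date.now();      // client time at the moment the page loaded

function liveServerTime() {
    // Apply the elapsed client time to the server-provided base
    return new Date(serverBase + (Date.now() - clientBase));
}

// Re-sync periodically so timer drift (or clock tampering) stays bounded
setInterval(function () {
    fetch('/servertime') // hypothetical endpoint returning UTC milliseconds as text
        .then(function (res) { return res.text(); })
        .then(function (text) {
            serverBase = parseFloat(text);
            clientBase = Date.now();
        });
}, 60000);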
As I understand it, you want an independent clock in JS, right? It's pretty simple:
Make an AJAX request to your server when the script loads to get the current time (a UNIX timestamp).
Set an interval, for example 1 second, and increment the time you got in step 1 by that interval on every tick.
When the user changes the time on their local machine it won't affect the script, and refreshing the page will load the time from your server again; see the sketch below.
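A minimal sketch of those two steps, assuming a hypothetical /time endpoint that returns a UNIX timestamp in seconds:

var serverSeconds = null;

// Step 1: fetch the current UNIX time once, when the script loads
fetch('/time') // hypothetical endpoint
    .then(function (res) { return res.text(); })
    .then(function (text) {
        serverSeconds = parseInt(text, 10);
        // Step 2: increment by the interval length on every tick
        setInterval(function () {
            serverSeconds += 1; // note: setInterval drifts slightly over time
        }, 1000);
    });

function currentTime() {
    return serverSeconds === null ? null : new Date(serverSeconds * 1000);
}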
Since JS timers are extremely unreliable by default, I'd say you need two parallel intervals.
One which corrects the frontend's time drift against the server time, and another which fills in the blanks on the frontend (so as not to strain the server and the user's connection).
var time = Date.now(); // Or initial server time
var oldTime = Date.now();
var newTime = oldTime;

var feInterval = setInterval( function() {
    // Increment "time" by the frontend's measured delta time
    time += ( newTime = Date.now() ) - oldTime;
    oldTime = newTime;
}, 40 );

var beInterval = setInterval( function() {
    // Fetch server UTC milliseconds and overwrite "time".
    // The interval above will just use that as a new base time.
    var serverTime = Math.random() * 10000000000; // Placeholder: use Ajax to get the server time.
    time = serverTime;
}, 10000 );

setInterval( function() {
    // Do something with "time".
    // This interval is just for the demo; use your own "time" consumer.
    console.log( 'Fairly correct time (random each 10 seconds):', new Date( time ).toString() );
}, 40 );
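In the beInterval, "use Ajax to get server time" could look something like this sketch; /servertime is a hypothetical endpoint returning UTC milliseconds as plain text:

function fetchServerTime(callback) {
    fetch('/servertime') // hypothetical endpoint
        .then(function (res) { return res.text(); })
        .then(function (text) { callback(parseFloat(text)); });
}

// Replacing the Math.random() placeholder inside beInterval:
fetchServerTime(function (serverTime) {
    time = serverTime; // feInterval keeps ticking from this new base
});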
I guess someone else was trying to solve the same problem; check out the page below:
show server time including timezone offset
I have a websocket client that receives 200-300 messages per second from a websocket stream. My JavaScript client that receives the messages is doing a few DOM manipulations with each message received. I'm afraid that after a few minutes of running the app in my browser, the processing of the messages could fall behind. Like a chat application that gets overwhelmed with incoming messages and after a while the latency between receiving the websocket message and displaying it to the user grows and grows... Another example would be a real-time stock market page, but after being open for a few minutes, the real-time prices aren't real-time anymore...
How do I determine if my browser is keeping up with the incoming websocket messages?
UPDATE
I ended up having every message update a clock on the page to see if it ever fell behind:
function onMessage(evt) {
    var dt = new Date();
    $("#clock").text( dt.toLocaleTimeString() ); // "8:43:55 PM"
    // other DOM manipulations related to each websocket message ...
}
If you mean how do you determine it when debugging, then you can look at your CPU utilization to see if you're swamping your CPU.
If you mean how do you determine it live from within your JavaScript in the browser, then I can think of some ideas:
Put a server time stamp in each message. Then, when you start processing messages, calculate the diff between current system time and the time stamp in the message. If that diff is going up and up, then the client is getting behind. If it's staying relatively constant, then the client is keeping up.
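For that first idea, here's a sketch of what it could look like; the JSON message shape with a ts field holding server milliseconds is an assumption:

var lagSamples = [];

function onMessage(evt) {
    var msg = JSON.parse(evt.data); // assumes JSON messages carrying a server timestamp in ms
    var lag = Date.now() - msg.ts;  // includes clock offset, but only the trend matters
    lagSamples.push(lag);
    if (lagSamples.length > 100) lagSamples.shift();

    // If lag climbs relative to the oldest sample, the client is falling behind
    var trend = lagSamples[lagSamples.length - 1] - lagSamples[0];
    if (trend > 2000) {
        console.warn('Client has fallen ~' + trend + ' ms further behind');
    }
}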
You can give yourself some idea of how backed up your client's event queue is with something like this:
let start = Date.now();
setTimeout(function() {
    console.log(Date.now() - start);
}, 0);
If that outputs a large number, then your event queue is backed up by roughly that many milliseconds.
I built an HTML5 multiplayer game that depends on having a reasonably accurate time sync between server and client. For the most part, the algorithm I use is very accurate -- all it does is estimate what the client-server time delta is, i.e. the difference between the current time on the server and the current time on client. For example, if the server time is exactly 5 seconds ahead of the client time, the time delta is 5000 ms.
The client and server (node.js) are both written in Javascript. The algorithm works as follows:
Record time on the client:
var clientTime = Date.now();
Ping the server. When the server receives the message, it immediately sends a response containing just one thing: the time on the server when the message was received.
var serverTime = Date.now();
// Send serverTime to the client
When the client receives the server response, immediately record the time:
var clientTime2 = Date.now();
Now, we know that when the server received the message, the client time must have been somewhere between clientTime and clientTime2.
If the server received the message when client time was clientTime (i.e. client->server request took 0ms somehow), then the time delta is
var delta1 = (serverTime - clientTime);
If the server received the message when the client time was clientTime2 (i.e. the server->client response somehow took 0ms), then the time delta is
var delta2 = (serverTime - clientTime2);
Thus we can safely say that the time delta is somewhere between delta1 and delta2. Now, repeat this process a bunch of times, each time narrowing the range based on whatever results you got, and you can get a pretty good estimate of the time delta.
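A condensed sketch of one cycle of that narrowing; sendPing() is an assumed helper that resolves with the server's Date.now():

var deltaMin = -Infinity; // tightest lower bound on (server time - client time)
var deltaMax = Infinity;  // tightest upper bound

function syncCycle(sendPing) {
    var clientTime = Date.now();
    return sendPing().then(function (serverTime) {
        var clientTime2 = Date.now();
        var delta1 = serverTime - clientTime;  // bound if the request leg took 0 ms
        var delta2 = serverTime - clientTime2; // bound if the response leg took 0 ms
        // The true delta lies in [delta2, delta1]; intersect with the running range
        deltaMin = Math.max(deltaMin, delta2);
        deltaMax = Math.min(deltaMax, delta1);
    });
}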
I've tested this hundreds of times on 7 different browsers and multiple machines and have never had any issue with it. It's never been inconsistent.
The issue, though, is that my server logs show that, every now and then, a few people will get wildly inconsistent time sync results. Here is an actual example of one player's time sync:
The client went through 74 cycles of the above algorithm and successfully narrowed the range of possible time deltas to [-186460, -186431] without a single inconsistency: 29 ms accuracy.
On the 75th cycle, possibly a few seconds after the 74th, the client calculated the range of possible time deltas to be [-601, -596]. That's 5 ms accuracy, but it's wildly inconsistent with the previous 74 cycles: it's 3 minutes off!
I would blame this on crazy edge cases, except it happens almost 100 times a day... how could this happen? Is there any possible error when using Date.now()?
The fix: use performance.now() instead of Date.now(), because performance.now() is monotonically increasing and not subject to system clock adjustments. See the comments, and thanks to everyone for their help!
Your difficulty is that you depend on estimating round-trip times to the server, over an Internet that has variance in round-trip times. Sometimes that variance will be unexpectedly substantial, as in cases where temporary congestion and large router buffers delay a single message far longer than normal. (Cf "bufferbloat".)
Your second difficulty is that you are using self-reported client times, which means that if a client has a weird clock it will look to you like a round-trip estimation gone wrong. As another poster noted, internet time protocols will sometimes slew a clock rapidly to correct for local timekeeping anomalies.
It sounds like you need some filtering in your code that takes previous results into account, so that when you get an anomalous result you don't immediately accept it as true.
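One possible shape for that filtering, sketched as a median over recent estimates so a single anomalous cycle can't move the accepted value:

var recentDeltas = [];

function acceptDelta(newDelta) {
    recentDeltas.push(newDelta);
    if (recentDeltas.length > 15) recentDeltas.shift();

    // The median ignores outliers (a clock slew or one bufferbloat-delayed
    // round trip) until they dominate the recent history
    var sorted = recentDeltas.slice().sort(function (a, b) { return a - b; });
    return sorted[Math.floor(sorted.length / 2)];
}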
I'm developing an offline web application, and I'm trying to find any datetime available on the device running the app that is not editable by the user. I have to be able to access it through JavaScript. The reason is that I want my application to be insensitive to time-change hacks on mobiles. Any ideas?
You can't through JavaScript. Maybe you can limit the hack by launching a JavaScript timer at the launch of the web app and using it to check the time instead of the device's time? It depends on the accuracy you want, but something like this will be updated every second:
var time = new Date();
function loop() {
    // setTimeout, not setInterval: combining both would spawn a new interval on every tick
    setTimeout(function () {
        time = new Date(time.getTime() + 1000);
        loop();
    }, 1000);
}
loop();
So the time variable tracks the real time from the moment your app launched, independent of later changes to the device clock.
Downside of this: it will use resources...
I'm playing with realtime whiteboards with Meteor. My first attempt worked very well: if you open 2 browsers and draw in one of them, the other updates in a few milliseconds ( http://pen.meteor.com/stackoverflow ).
Now, my second project is to make an infinite realtime whiteboard. The main change is that all lines are grouped by zones, and the viewer only subscribes to the lines in the visible zones. Now there is a delay of 5 seconds (!) between doing something in one browser and seeing it happen in the other ( http://carve.meteor.com/love ).
I've tried to add indexes in the mongo database for the fields determining the zones.
I've tried updating the Collection only for a full line (and not each time I push a new point, like in my first project).
I've tried adding a timeout not to subscribe too often when scrolling or zooming the board.
Nothing changes; there is always a 5-second delay.
I don't have this delay when working locally.
Here is the piece of code responsible for subscribing to the lines in the visible area:
var subscribeTimeout = false;

Deps.autorun(function () {
    // Read the reactive sources so this autorun re-runs when they change
    var vT = Session.get("visible_tiles");
    var board_key = Session.get("board_key");
    // Debounce: wait 500 ms after the last pan/zoom before resubscribing
    if (subscribeTimeout) Meteor.clearTimeout(subscribeTimeout);
    subscribeTimeout = Meteor.setTimeout(subscribeLines, 500);
});

function subscribeLines() {
    subscribeTimeout = false;
    var vT = Session.get("visible_tiles");
    console.log("SUBSCRIBE");
    Meteor.subscribe("board_lines", Session.get("board_key"),
            vT.left, vT.right, vT.top, vT.bottom, function () {
        console.log("subscribe board_lines " + Session.get("board_key"));
    });
}
I've been a SysAdmin for 15 years. Without running the code, it sounds like an imposed limitation of the meteor.com server. They probably add delays on resources so everyone gets a fair share. I'd publish to another server, like Heroku for an easy deploy, or manually to a host like Linode or my favorite, Joyent. Alternatively, you could contact meteor.com directly and ask if/how they limit resource usage.
Since the code runs fast/instantly locally, you should see sub-second response times from a good server over a good network.
Here is my question:
I have a website that works only during the night (from 21:00 until 24:00).
I have a button that says "Enter", but I want that button to alert() a message such as 'The website is not available yet'.
To do so, it must check the time. In pseudocode:
if (time is at or after 21:00 and before 24:00) {
    return true;
} else {
    alert('the website is not available yet');
    e.preventDefault();
    return false;
}
But I don't understand how to express that time comparison so it works on any day.
Any hints?
thank you guys!
new Date().getHours()
will return the current hour (13 when I am writing this at 13:42). However, this solution has several drawbacks:
it uses system time; changing the computer's time will fool your script
it uses system time zone, consider getUTCHours()
it can be easily bypassed by disabling JavaScript or modifying the script on the fly
Thus, consider fetching the time from the server when rendering the page, and repeat the test on the server side when the user enters (to make sure the check was not bypassed).
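A minimal sketch of the client-side part; the serverHour variable rendered into the page by the server and the button id "enter" are illustrative assumptions, and the authoritative check still belongs on the server:

document.getElementById('enter').addEventListener('click', function (e) {
    // serverHour is assumed to be written into the page server-side,
    // e.g. var serverHour = <?php echo (int)date('G'); ?>;
    var hour = typeof serverHour !== 'undefined' ? serverHour : new Date().getHours();
    if (hour >= 21 && hour < 24) { // getHours() runs 0-23, so < 24 is implicit
        return; // open: let the click proceed
    }
    alert('The website is not available yet');
    e.preventDefault();
});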