Right now I'm getting a different number on my Desktop than I am on my laptop.
getTime() uses the local time settings. The time settings can be changed in one of two places:
Your OS (such as Windows) may manage time; double-click the clock on the taskbar to change it
Your OS relies on the system BIOS; this is one of the reasons motherboards have a battery installed (to keep time and settings in case of system failure). Modifications to the time in your BIOS should be reflected in the OS, and consequently in JavaScript
An alternative is to write your own getTime function which pulls the time from a single source, such as a server. If you want to minimize network calls, it might be worthwhile to pull this server time once and record the local getTime() at the same moment. Then, later, when the time is needed, call getTime() again, take the difference from the recorded value, and add it to the server time, as sketched below. Note: if time is important, I advise against this approach, since a user can easily alter their system clock
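A minimal sketch of that idea, using fetch() for brevity and a hypothetical /servertime endpoint that returns epoch milliseconds (any request mechanism and URL will do):

// Fetch the server time once and remember what the local clock said at that moment
var serverTimeAtSync = null;
var localTimeAtSync = null;

fetch('/servertime')                              // hypothetical endpoint
  .then(function (res) { return res.text(); })
  .then(function (text) {
    serverTimeAtSync = parseInt(text, 10);        // server's ms since the epoch
    localTimeAtSync = new Date().getTime();       // local ms at the same moment
  });

// Later, estimate the current server time without another network call
function getServerTime() {
  var elapsed = new Date().getTime() - localTimeAtSync;
  return serverTimeAtSync + elapsed;
}

Note that the elapsed portion still comes from the local clock, so this is convenient rather than tamper-proof.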
Otherwise, if your computers are on the same network, you can use a scheduler and a batch process to sync the times - it won't be perfect, but it'll be close enough
Rather than use the JavaScript getTime() method, you should rethink your use case.
If you're looking to timestamp a form submission (e.g. for a comment or form post) you should have the server generate the timestamp when handling the POST
If you want a consistent time client-side, consider making an AJAX call to a simple web application that returns a timestamp. You could easily write one yourself, or you could use Yahoo's time service
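Writing such a service yourself really is only a few lines. As a rough sketch (the Node.js service, the port and the /time path are all assumptions, not anything this answer prescribes):

// Hypothetical Node.js service that returns its own clock as epoch milliseconds
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(String(Date.now()));
}).listen(8080);

// Client side: the AJAX call that fetches the server's timestamp
var xhr = new XMLHttpRequest();
xhr.open('GET', '/time', true);   // assumes the service is reachable on the same origin
xhr.onload = function () {
  var serverNow = parseInt(xhr.responseText, 10);   // ms since the epoch, per the server
};
xhr.send();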
Sync the clocks.
The OS should have an option to use a network time service. There could still be a slight difference, but it should not be more than a second or two, which is good enough for most purposes. How close do you need them to be?
getTime() gets the system time of the computer. However, if you really need to sync, it's best to sync with a web server that you have. Cheers!
You can't, at least not with JavaScript alone. JavaScript uses the system information on the client machine to determine what the time is. Now, if they're in different time zones, you can use the UTC() function to get them on a UTC basis (milliseconds since midnight, Jan 1, 1970 UTC), but it's most likely that they just aren't synchronized.
That could be for a variety of reasons. The most likely causes:
The clocks might not be set identically. One (or both) could be wrong.
The clocks could be using different time zones.
Is it a good practice to compare date and time using epoch (UTC) time?
I checked on the internet, but did not find an example of this. Does this approach have any negatives?
if (date_utc1 > date_utc2) {
  // do something
}
Here date_utc1 and date_utc2 are epoch times.
I gathered from the comments that one of the dates is a server-side generated date, but the other is a client-side generated date. Without fully understanding the logic involved here, I'd just like to make a short note (sorry, I don't have the rep for comments) that these two clocks may not fully agree on time (represented in epoch or not).
If possible, a better solution is to rely on only one clock (the server's). When the client initially receives data from the server, the client persists the server-side timestamp (it needs to be part of the response). Down the line, if the client wants to check whether the server has more data, the persisted value is sent back. This way we are sure that the server only returns things that have changed since the last fetch.
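A rough sketch of that flow on the client (the response shape and the /data URL are assumptions for illustration):

// Assume every data response carries the server's own timestamp,
// e.g. { serverTime: 1302162686000, items: [...] }
var lastServerTime = null;

function handleResponse(response) {
  lastServerTime = response.serverTime;   // persist the server's clock, never ours
  // ... render response.items ...
}

function fetchChangesSinceLastSync() {
  // Send the persisted server timestamp back; the server compares it only
  // against its own clock, so drift on the client never matters.
  return fetch('/data?since=' + encodeURIComponent(lastServerTime))
    .then(function (res) { return res.json(); })
    .then(handleResponse);
}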
You can use Date.now() to get the current time in all recent browsers. You could also use +new Date() to obtain the same number in older browsers if you need to.
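For older environments, a tiny feature test covers both cases (just a sketch):

// Date.now() where available, +new Date() as a fallback for older browsers
var nowMs = (typeof Date.now === 'function') ? Date.now() : +new Date();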
Since the data returned from the server is already a number in this milliseconds-since-epoch format, it makes sense to use it for comparisons, since no other calculation or parsing of the data coming from the server needs to be done.
I don't believe there are any negatives here.
I need to code myself a mini, locally running HTML5 + JavaScript app, which I will use as a timer to time a person performing squats.
The idea is simple: when I press A on the keyboard, it will store the current time, with seconds and milliseconds, into a local table as a repetition start. When I press B, it will store the current time as a repetition end.
What I'm not 100% sure about is how reliable the JavaScript timestamp really is. What is my best bet here? Here are a few ideas:
run it on the latest version of Chrome
disable the internet connection, so that the OS will not sync/change its current time
Is there anything else I should be careful about?
I don't need the time to be absolutely exact, only relatively; meaning that the last timestamp minus the first timestamp will yield the real time taken to perform the whole session. I don't care to know exactly at what time it started.
If you're retrieving the system time in Javascript with something like Date.now() in order to measure the time between two events, then that will be exactly as accurate as the system time is on the local computer. How exactly accurate that is will depend entirely upon the clock in the local system and whether there are any changes to the system time during the measurement period.
If there are no changes to the system time (such as a clock sync with an external source), then most system clocks are pretty darn accurate these days. Measuring an event that takes minutes would likely be accurate to within a few milliseconds, which is more accuracy than you can achieve by marking start and stop with a keypress anyway, since the precision of exactly when the key is pressed relative to the start and stop of the event is certainly no better than several hundred milliseconds.
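As a rough sketch of the timer itself (the key bindings and the in-memory table are assumptions, and performance.now() is mentioned only as a monotonic alternative, not something the question requires):

// Press "a" to mark a repetition start, "b" to mark its end
var reps = [];          // array of { start: ms, end: ms }
var current = null;

document.addEventListener('keydown', function (e) {
  if (e.key === 'a') {
    current = { start: Date.now(), end: null };
  } else if (e.key === 'b' && current) {
    current.end = Date.now();
    reps.push(current);
    console.log('Repetition took', (current.end - current.start) / 1000, 's');
    current = null;
  }
});

// performance.now() could replace Date.now() here: it is monotonic, so a
// system clock adjustment in the middle of a session cannot skew the result.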
Let's take as an example a JS "app" that basically does CRUD, so it creates, updates and deletes (not really) some "records".
In the most basic case, one does not need to resolve conflicts in such an application because the ACID properties of the DBMS are used to eliminate concurrent updates (I'm skimming over a ton of details here, I know). When there's no way to emulate serial execution of updates, one can use timestamps to determine which update "wins". Even then the client need not worry about timestamps, because they can be generated at request time on the server.
But what if we take it one step further and allow updates to queue up on the client for some unspecified amount of time (say, to allow the app to work when there's no network connectivity) and then be pushed to the server? Then the timestamp cannot be generated on the server, since the time when the update was pushed to the server and the actual time when the update was performed may differ greatly.
In an ideal world, where all the clocks are synchronized, this is not a problem - just generate a timestamp on the client at the time the update is performed. But in reality, client time often drifts from the "server" time (which is assumed to be perfect; after all, it's us configuring the server, what could ever go wrong with it?) or is just plain wrong by hours (possible when you don't set the time zone, but instead update the system's time/date to match). What would one do to account for reality in such a case?
Perhaps there's some other way of conflict resolution, that may be used in such a case?
Your question has two aspects:
Synchronizing/serializing at the server using timestamps via the ACID properties of the database.
Queues on the client (delays the server is not aware of).
If you are maintaining queues on the client which push to the server when the client sees fit, then it had better need only trivial synchronization, because such queuing defeats the purpose of the timestamps the server relies on.
The scope of ACID is limited here, because if the clients' updates are not real-time, the server cannot serialize them based on either the timestamp of request creation or the time of request arrival. It creates a scenario where a request R2, created later than request R1, arrives before R1.
Time is a relative concept; using local time on either the client or the server will cause drift for the other. It also does not scale (it is inefficient if you have several peer nodes, i.e. a distributed setup), and it introduces a single point of failure.
To resolve this, vector clocks were designed. They are logical clocks that atomically increment a counter when an event occurs on a machine. BASE databases (Basically Available, Soft state, Eventual consistency) use them.
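For illustration only, a minimal vector-clock sketch (not tied to any particular database):

// Each node keeps a map of nodeId -> event counter
function VectorClock(nodeId) {
  this.nodeId = nodeId;
  this.clock = {};                        // e.g. { clientA: 3, server: 7 }
}

// Local event (e.g. a queued update): bump our own counter
VectorClock.prototype.tick = function () {
  this.clock[this.nodeId] = (this.clock[this.nodeId] || 0) + 1;
};

// On receiving another clock: take the entry-wise maximum, then tick
VectorClock.prototype.merge = function (otherClock) {
  for (var id in otherClock) {
    this.clock[id] = Math.max(this.clock[id] || 0, otherClock[id]);
  }
  this.tick();
};

// a happened before b if a <= b entry-wise and they differ somewhere;
// if neither happened before the other, the updates are concurrent - a real conflict
function happenedBefore(a, b) {
  var strictlySmaller = false;
  var ids = Object.keys(a).concat(Object.keys(b));
  for (var i = 0; i < ids.length; i++) {
    var id = ids[i];
    if ((a[id] || 0) > (b[id] || 0)) return false;
    if ((a[id] || 0) < (b[id] || 0)) strictlySmaller = true;
  }
  return strictlySmaller;
}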
The result is that 1 and 2 will never work out. You should never queue requests which use timestamp for conflict resolution.
Nice challenge. While I appreciated user568109's answer, this is how I handled a similar situation in a CQRS/DDD application.
While in a DDD application I had a lot of different commands and queries, in a CRUD application we have, for each type of "record", CREATE, UPDATE and DELETE commands and a READ query.
In my system, on the client I kept track of the previous sync in a tuple containing the UTC time on the server and the time on the client (let's call this LastSync).
READ
Read queries won't participate in synchronization. Still, in some situations you might have to send the server a LogRead command to keep track of the information that was used to take decisions. Such commands contain the entity's type, the entity's identifier and LastSync.ServerTime.
CREATE
Create commands are idempotent by definition: they either succeed or fail (when a record with the same identity already exists). At sync time you will have to either notify the user of the conflict (so that he can handle the situation, e.g. by changing the identifier) or fix the timestamp as explained later.
UPDATE
Update commands are a bit trickier, since you should probably handle them differently for different types of records. To keep it simple, you should be able to impose on the users that the last update always wins, and design the command to carry only the properties that should be updated (just like a SQL UPDATE statement). Otherwise you'll have to handle automatic/manual merging (but believe me, it's a nightmare: most users won't ever understand it!). Initially my customer required this feature for most entities, but after a while they accepted that the last update wins, to avoid such complexity. Moreover, in the case of an Update on a Deleted object, you should notify the user of the situation and, according to the type of entity updated, apply the update or not.
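As an illustration, an update command that carries only the changed properties might look like this (all field names are invented):

// Only the properties the user actually touched travel with the command, so two
// offline edits to different fields of the same record need not conflict at all
var updateCommand = {
  type: 'UpdateCustomer',
  entityId: '42',
  changes: { phone: '555-0100' },          // not the whole record
  clientTime: Date.now(),                  // corrected later with the server offset
  lastSyncServerTime: lastSync.serverTime  // the LastSync tuple described above
};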
DELETE
Delete commands should be simple, unless you have to notify the user that an update occurred that could have led him to keep the record instead of deleting it.
You should carefully analyze how to handle each of these commands for each of your entity types (and in the case of UPDATEs you could be forced to handle them differently for different sets of properties to update).
SYNC PROCESS
The sync session should start by sending the server a message with:
Current Time on the Client
LastSync
This way the server can calculate the offset between its time and the client's time and apply that offset to every command it receives. Moreover, it can check whether the offset has changed since the LastSync and choose a strategy to handle such a change. Note that, this way, the server won't know when the client's clock was adjusted.
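A sketch of that handshake, with invented names (network latency is ignored here, as it is in the description above):

// Client: open the sync session
var syncRequest = {
  clientTime: Date.now(),
  lastSync: lastSync        // { serverTime: ..., clientTime: ... } from the previous sync
};

// Server: compute the offset once per session and reuse it for every queued command
function openSyncSession(syncRequest) {
  var offset = Date.now() - syncRequest.clientTime;   // server time minus client time
  // comparing this offset with the one implied by lastSync also reveals
  // whether the client's clock drifted or was adjusted since the previous sync
  return {
    toServerTime: function (clientTimestamp) {
      return clientTimestamp + offset;
    }
  };
}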
At the end of a successful sync (it's up to you to decide what successful means here), the client would update the LastSync tuple.
A final note
This is a quite complex solution. You should carefully ponder with the customer whether such complexity gives you enough value before starting to implement it.
I'm developing an online chess variant game. I want to create a javascript function that has the purpose of communicating to the server on behalf of the player the exact time spent on a move.
This message will be encrypted, of course, but in order to trust this function, I want to obfuscate it to the point that I can rely on the obfuscation algorithm.
I only know a few obfuscation algorithms, hieroglyphy being the most interesting. But it isn't unbreakable. Speed of execution and size are not critical: I can subtract the time spent by the function that sends the message within that same function, and the size can be up to 2 MB.
I'm pretty sure that there is no unbreakable algorithm because as long as it is required to run in a browser, anyone with enough patience can take it piece by piece and see what it does.
Do I have an alternative that would require more effort and time from a user with bad intentions?
Edit: I've done some tests in every browser on Windows XP, and it appears that in FF, IE, Opera and Chrome the setTimeout function will trigger after the delay passed as its second parameter, regardless of any changes to the system time during that delay. If no other information suggests otherwise, the logical conclusion would be that time can be measured client-side regardless of system time changes, using the setTimeout function but not the Date() object, up to a precision given by the setTimeout delay.
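A sketch of what such a setTimeout-based measurement could look like (the tick length is arbitrary, and browsers may throttle timers in background tabs, which limits the precision further):

// Count ticks instead of reading Date(), so a system clock change
// in the middle of a measurement does not affect the result
var TICK_MS = 100;
var ticks = 0;
var running = false;

function tick() {
  if (!running) return;
  ticks++;
  setTimeout(tick, TICK_MS);
}

function startTimer() { ticks = 0; running = true; setTimeout(tick, TICK_MS); }
function stopTimer() { running = false; return ticks * TICK_MS; }  // approximate elapsed ms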
Hamish mentioned in an answer below that modifying the browser's date/time APIs is trivial. In that case, the JavaScript code is vulnerable to a modification that will increase the real setTimeout delay. Some code should be put in place so that the server starts suspecting of cheating anyone who has an unreasonable lag time. This will always be a problem if lag time isn't included in thinking time.
There's a reason I can't use server side timing. The lag times would sometimes exceed a reasonable amount and that will leave users dissatisfied. And sometimes the lag can make all the difference.
Which brings me back to the original question. I'm looking for the best obfuscation method, where best is measured in the effort an attacker has to make to deobfuscate. Ideally, I would want to change the obfuscation algorithm faster than an attacker can deobfuscate, and then never to use that algorithm again or use it rarely, at a time the attacker won't expect.
I could set my computer's clock to three hours ago and your script would happily send -10800 seconds. NEVER rely on JavaScript to handle information in a trusted manner. Use your server-side code to time the difference between when the player's turn started and when they made their move, and absolutely keep a representation of the game on the server and make sure the move is valid.
Obfuscating your code doesn't help, for two reasons:
Users can still inspect the messages being sent from the browser to the server. You would also have to sign the message somehow, to prevent it being intercepted and modified. Generally, it will be even easier to unpack the message than the function used to generate it.
You're trying to measure the time taken on the move, which means your obfuscated function still has to trust the system clock and the browser date/time APIs. Both are trivial to modify.
A sensible solution would be to measure the time messages are sent and received on your server, and measure the latency of the connection to correct for transmission speeds (if you need to be very accurate).
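A rough sketch of that server-side approach (Node.js-flavoured, every name hypothetical):

// Server-side timing of a move, with a simple latency correction
var turnStartedAt = null;   // set when the server tells the player it is their move
var roundTripMs = 0;        // measured with periodic pings to this player

function onTurnSent() {
  turnStartedAt = Date.now();
}

function onPingMeasured(measuredRoundTripMs) {
  roundTripMs = measuredRoundTripMs;
}

function onMoveReceived(move) {
  // Thinking time as the server saw it, minus the estimated transmission time
  var thinkingMs = (Date.now() - turnStartedAt) - roundTripMs;
  if (thinkingMs < 0) thinkingMs = 0;
  // ...validate the move against the server-side board before accepting it...
}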
I have created a JavaScript timestamp and also a PHP timestamp. There is about a 170-second difference between them.
1302162686 PHP - time()
1302162517 JavaScript - Math.round(new Date().getTime() / 1000)
Can anyone please tell me why I'm having this issue?
PHP is executed on the server side, and in your example, JavaScript works on the client-side.
Both sides have their own time configuration. For the server, time zone settings etc. will stay the same (unless you change them), but the server has no idea which time zone the current visitor is in. There’s no way for you to control that.
If I change my system clock on my laptop, it will influence client-side JavaScript date/time, but your server timer won’t be affected.
PHP and JavaScript both look at the system time. Whose system? The one they are running on. The server could be located in another country with a different time, hence the difference.
Also, the client's (or less often, server's) clock could be incorrect.
One way, which I often use to counter this problem, is like this:
var referenceTime = new Date('<?php echo date("M j, Y H:i:s"); ?>');
// referenceTime is now the same as the server time (at the moment the page was rendered)
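Building on that, one possible follow-up (a sketch, not part of the original snippet) is to keep an offset so the server clock can still be approximated later in the page's life:

// Offset between the server's clock and the client's clock at page load
var clockOffset = referenceTime.getTime() - Date.now();

// An estimate of "now" in server time, usable at any later point
function serverNow() {
  return new Date(Date.now() + clockOffset);
}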
PHP looks at the system time, which is the server running it.
JavaScript looks at the client's system, which could be any time.
PHP uses the time on your server; JavaScript will use the time on the client's (user's) machine.
Mathias is correct. Generally this should not happen with such a big difference, because modern computers recognize that their clocks drift over time and employ protocols such as NTP to keep them in sync.
Nevertheless you should never assume the time at client and server is the same, for two reasons:
Some clients/servers don't have clock adjustments (such as NTP) and their clocks drift away over time
More importantly, many users/admins can be clueless or late in setting their time zone or adjusting for daylight saving time, so the time given to you may be accurate to the second but still several hours off.
When comparing/calculating times, I would rely on the server only. You have no control over the client.
If you are concerned about consistency for whatever purpose, I recommend using the server as your time source and doing timezone conversions if necessary:
This may be of interest: handling timezone conversion with php