How can I sync two javascript timers via websockets

I have a site with a js visual metronome (like a conductor) that can be triggered via a websocket on multiple machines.
How can I make the start of the metronomes as synchronized as possible?
Thanks.

What I would do is measure latency, and compensate accordingly.
For each connection to the server, the server should regularly send ping messages (not a real ICMP ping... just some message over your established websocket channel), and measure the amount of time it takes to get a response back from the client. Divide this latency by 2, and you have a decent guess as to the time it takes for your messages to get from the server to the client.
Once you know this, you can estimate the timing difference between the two clients, and adjust accordingly.
Note that this isn't perfect by any means. I would maintain your metronome clock entirely client-side, and only use the communication with the server to adjust. This will give smoother performance for the user.
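A minimal client-side sketch of that idea. The message shapes ('ping'/'pong'/'start'), the URL, and the startMetronome helper are assumptions made up for illustration:

    // Browser side: answer pings immediately so the server can measure RTT,
    // then start the metronome when told to, with latency already compensated.
    const socket = new WebSocket('wss://example.com/metronome'); // hypothetical URL

    socket.addEventListener('message', (event) => {
      const msg = JSON.parse(event.data);

      if (msg.type === 'ping') {
        // Echo straight back; the server timestamps send/receive and halves the RTT.
        socket.send(JSON.stringify({ type: 'pong', id: msg.id }));
      } else if (msg.type === 'start') {
        // The server has already subtracted its one-way-latency estimate for this
        // client, so startInMs should land at roughly the same moment everywhere.
        setTimeout(startMetronome, msg.startInMs);
      }
    });

    function startMetronome() {
      // Drive the visual metronome entirely client-side from here on.
    }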

This is a clock recovery problem.
The problem you will face is that wall-time is likely to be different on each system, and furthermore, will tend to drift apart over the longer term. In addition to this, audio word-clock is not driven from wall-time either and will have a drift of its own from it. This means you have no chance of achieving sample or phase accuracy. Both IEEE 1394 audio streaming and the MPEG transport stream layers pull this trick off by sending a time-stamp embedded with bit-accuracy into the data stream. You clearly don't get this luxury with the combination of Ethernet, a network stack, the Nagle algorithm and any transmit queue in WebSockets.
Tens of milliseconds to 100 ms might be more realistic.
You might like to look at the techniques the Network Time Protocol uses to solve an essentially similar problem.

Get the time from all the clients (UTC). Find each client's offset. Schedule a start time according to each client's offset.
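A rough client-side sketch of this offset approach, assuming a hypothetical /time endpoint that returns the server's current epoch milliseconds and a broadcast start time expressed on the server's clock (startMetronome is a placeholder for your own start routine):

    // offset ≈ serverClock - localClock, estimated with one round trip.
    async function estimateOffset() {
      const before = Date.now();
      const res = await fetch('/time');               // hypothetical endpoint
      const { serverNow } = await res.json();         // assumed response shape
      const rtt = Date.now() - before;
      return serverNow + rtt / 2 - Date.now();
    }

    // Convert a start time given on the server's clock into a local delay.
    async function scheduleStart(startAtServerMs) {
      const offset = await estimateOffset();
      const localStart = startAtServerMs - offset;
      setTimeout(startMetronome, Math.max(0, localStart - Date.now()));
    }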

Related

What are some good use cases for Server Sent Events

I discovered SSE (Server Sent Events) pretty late, but I can't seem to figure out use cases where it would be more efficient than using setInterval() and ajax.
I guess that if we had to update the data multiple times per second, then having one single connection open would produce less overhead. But apart from this case, when would one really choose SSE?
I was thinking of this scenario:
A new user comment from the website is added in the database
Server periodically queries DB for changes. If it finds new comment, send notification to client with SSE
Also, this SSE question came to mind after having to do a simple "live" website change (when someone posts a comment, notify everybody who is on the site). Is there really another way of doing this without periodically querying the database?
Nowadays web technologies are used to implement all sorts of applications, including those which need to fetch constant updates from the server.
As an example, imagine having a graph in your web page which displays real-time data. Your page must refresh the graph any time there is new data to display.
Before Server Sent Events, the only way to obtain new data from the server was to perform a new request every time.
Polling
As you pointed out in the question, one way to look for updates is to use setInterval() and an ajax request. With this technique, our client will perform a request once every X seconds, whether there is new data or not. This technique is known as polling.
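As a sketch (using fetch in place of a hand-rolled XMLHttpRequest; the /comments/latest endpoint and the renderComments helper are placeholders):

    // Ask the server every 5 seconds, whether or not anything has changed.
    setInterval(async () => {
      const res = await fetch('/comments/latest');    // hypothetical endpoint
      if (res.ok) {
        renderComments(await res.json());             // your own rendering code
      }
    }, 5000);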
Events
Server Sent Events, on the contrary, are asynchronous: the server itself notifies the client when there is new data available.
In the scenario of your example, you would implement SSE in such a way that the server sends an event immediately after adding the new comment, rather than by polling the DB.
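A minimal sketch of that, with a hypothetical /events endpoint and a placeholder renderComment helper; the server half uses Node's built-in http module, but any server that can keep the response open and write text/event-stream will do:

    // Client: one long-lived connection, events pushed by the server.
    const source = new EventSource('/events');
    source.addEventListener('comment', (event) => {
      renderComment(JSON.parse(event.data));          // your own rendering code
    });

    // Server (Node, no framework): remember open streams, push on new comments.
    const http = require('http');
    const clients = new Set();

    http.createServer((req, res) => {
      if (req.url === '/events') {
        res.writeHead(200, {
          'Content-Type': 'text/event-stream',
          'Cache-Control': 'no-cache',
          'Connection': 'keep-alive',
        });
        clients.add(res);
        req.on('close', () => clients.delete(res));
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(3000);

    // Call this right after the new comment is inserted into the DB.
    function notifyNewComment(comment) {
      const payload = `event: comment\ndata: ${JSON.stringify(comment)}\n\n`;
      for (const res of clients) res.write(payload);
    }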
Comparison
Now the question may be: when is it advisable to use polling vs. SSE? Aside from compatibility issues (not all browsers support SSE, although there are some polyfills which essentially emulate SSE via polling), you should focus on the frequency and regularity of the updates.
If you are uncertain about the frequency of the updates (how often new data should be available), SSE may be the solution because they avoid all the extra requests that polling would perform.
However, it is wrong to say in general that SSE produces less overhead than polling. That is because SSE requires an open TCP connection to work. This essentially means that some resources on the server (e.g. a worker and a network socket) are allocated to one client until the connection is over. With polling, instead, the connection may be closed after the request is answered.
Therefore, I would not recommend using SSE if the average number of connected clients is high, because this could create some overhead on the server.
In general, I advise using SSE only if your application requires real-time updates. As a real-life example, I developed data acquisition software in the past and had to provide a web interface for it. In this case, a lot of graphs were updated every time a new data point was collected. That was a good fit for SSE because the number of connected clients was low (essentially, only one), the user interface had to update in real time, and the server was not flooded with requests as it would have been with polling.
Many applications do not require real time updates, and thus it is perfectly acceptable to display the updates with some delay. In this case, polling with a long interval may be viable.

Designing Real Time Web Application

We have a real-time application where an app server (written in C/C++) broadcasts network event details (typically a few hundred to around a thousand rows, with around 40-50 columns, per second) to a client GUI app written in GTK over the network using XML-RPC, as well as writing to a DB (running on the same machine or LAN). 95% of the communication is from server to client.
It was working fine with no issues.
We have to port the application to the web. In the initial attempt we kept the C/C++ app server as it is. Now we send the data to a Java web application server through XML-RPC. The Java server keeps a circular queue (initially kept at a small size of 2000) and pushes data to the client through websockets. At the client we are using angular ui-grid to display the data.
Currently the problem is that the browser cannot handle this amount of data at this frequency and after some time (a few hours) becomes unresponsive. This is a network monitoring app supposed to run 24/7, and though we have very few users, we don't have much control over their machines' configuration (mostly low/medium end). The server (Tomcat 8) was running on 2*6 cores with 16 GB RAM.
Can you give suggestions for improving browser performance? We are even ready to implement a new solution from scratch, but I think browser performance will always be the bottleneck.
If the pushed data is persisted somewhere else, you should implement a worker to clean the browser data periodically.
For example, say you have been pushing 1000 records per minute and the browser crashes at around the one-hour mark: you could implement a worker that, every half an hour, cleans the browser DOM in order to keep the memory footprint at its lowest.
To access previous data, you should have an API implemented where a user could fetch data for a given time/date period.
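A sketch of that cleanup worker, assuming the grid is bound to a plain rows array (with angular ui-grid that is typically gridOptions.data) and that older records remain reachable through the history API mentioned above; the numbers are assumptions to tune:

    // Cap what the browser keeps in memory; older data stays on the server.
    const MAX_ROWS = 5000;                    // tune to what low-end machines tolerate

    setInterval(() => {
      const rows = gridOptions.data;          // the array the grid renders from
      if (rows.length > MAX_ROWS) {
        rows.splice(0, rows.length - MAX_ROWS);   // drop the oldest rows in place
      }
    }, 30 * 60 * 1000);                       // every half hour, as suggested above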
It's hard to identify what is causing the crashes, but IMO it is likely a heavy DOM which the browser is unable to handle after a certain time. It would help if you could provide a detailed report of the browser's state when it crashed.

Which is the best way to implement a timer in a turn-based multiplayer game?

I am implementing a multiplayer turn-based game using Node.js for server-side scripting.
The game is like Monopoly: there are multiple rooms, and a single room has multiple players.
I want to implement a timer for all rooms. I have gone through many articles and I have two options, as follows:
1) I emit the current time to each player once, and they run the timer locally and emit back to the server on their turn or when time is up.
2) I manage the timer on the server and emit every second, but this will create load on the server. I am also confused because Node.js is single threaded, so how will I manage multiple setInterval() calls for multiple rooms? They will be added to the queue and will create latency.
So please advise me on the best option.
A hybrid approach may suit best for this problem. You could start, initially, by receiving the current value of the timer from the server. Subsequently, the clients can run the timer down independently without requesting the server for the current time. If accuracy is important to this project, then you can request the server every 15 seconds or so to adjust for time drift that may cause discrepancies between the server and its clients.
Also, note that even though Node.js is single threaded it is nonetheless inherently asynchronous.
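A client-side sketch of that hybrid, assuming Socket.IO (the question already speaks in terms of emit); the event names and the renderTimer helper are made up for illustration:

    // Take the initial value from the server, count down locally,
    // and accept periodic corrections to cancel out drift.
    const socket = io();                      // Socket.IO client, assumed
    let remainingMs = 0;

    socket.on('turnStarted', (data) => { remainingMs = data.remainingMs; });
    socket.on('timeSync',    (data) => { remainingMs = data.remainingMs; });  // ~every 15 s

    setInterval(() => {
      if (remainingMs === 0) return;          // no turn running
      remainingMs = Math.max(0, remainingMs - 1000);
      renderTimer(remainingMs);               // your own UI update
      if (remainingMs === 0) socket.emit('timeUp');   // fires once, on the transition
    }, 1000);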

HTTP Long Polling - Timeout best practice

I am playing with JavaScript AJAX and long polling, and trying to find the best value for the server response timeout.
I read many docs but couldn't find a detailed explanation for the timeout.
Some people choose 20 seconds, others 30 seconds...
I use logic like in the diagram.
How can I choose a better value for the timeout?
Can I use 5 minutes?
Is that normal practice?
PS: Possible AJAX client internet connections: Ethernet (RJ-45), WiFi, 3G, 4G, possibly behind NAT or a proxy.
I worry that the connection can be dropped by a third party in some cases because of a long timeout.
Maybe it's your grasp of English which is the problem, but it's the lifetime of the connection (the time between the connection opening and closing) you need to worry about more than the timeout (the length of time with no activity after which the connection will be terminated).
Despite the existence of websockets, there is still a lot of deployed hardware which will drop connections regardless of activity (and some which will look for inactivity) where it thinks the traffic is HTTP or HTTPS - sometimes as a design fault, sometimes as a home-grown mitigation of Slowloris attacks. That you have 3G and 4G clients means you can probably expect problems with a 5 minute lifespan.
Unfortunately there's no magic solution to knowing what will work universally. The key thing is to know how widely distributed your users are. If they're all on your LAN and connecting directly to the server, then you should be able to use a relatively large value; however, setting the duration to unlimited will reveal any memory leaks in your app - sometimes it's better to do a refresh every now and again anyway.
Taking the case where there is infrastructure other than hubs and switches between your server and the clients, you need to provide a mechanism for detecting and re-establishing a dropped connection regardless of the length of time. When you have worked out how to do this, then:
1. dropped connections are only a minor performance glitch and do not have a significant effect on the functionality, and
2. it is trivial to then add the capability to log dropped connections and thereby determine the optimal connection time, eliminating the small problem described in (1).
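A sketch of such a client loop: every response (or failure) immediately triggers the next request, so a dropped connection is only a brief gap, and each drop can be logged (point 2 above) to work out the optimal lifetime. The /poll endpoint and the handleUpdate/logDroppedConnection helpers are placeholders:

    async function pollLoop() {
      for (;;) {
        try {
          const res = await fetch('/poll');   // server holds this open until data or its timeout
          if (res.status === 200) {
            handleUpdate(await res.json());   // your own handler
          }
          // A 204 / timed-out response simply falls through and re-polls at once.
        } catch (err) {
          logDroppedConnection(err);          // the data you need to pick an optimal lifetime
          await new Promise((resolve) => setTimeout(resolve, 2000));  // brief back-off
        }
      }
    }
    pollLoop();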
Your English is fine.
TL;DR - 5-30s depending on user experience.
I suggest long poll timeouts be 100x the server's "request" time. This makes a strong argument for 5-20s timeouts, depending on your urgency to detect dropped connections and disappeared clients.
Here's why:
Most examples use 20-30 seconds.
Most routers will silently drop connections that stay open too long.
Clients may "disappear" for reasons like network errors or going into a low-power state.
Servers cannot detect dropped connections. This makes 5 min timeouts a bad practice, as they will tie up sockets and resources. This would be an easy DoS attack on your server.
So, < 30 seconds would be "normal". How should you choose?
What is the cost-benefit of making the long-poll connections?
Let's say a regular request takes 100ms of server "request" time to open/close the connection, run a database query, and compute/send a response.
A 10 second timeout would be 10,000 ms, and your request time is 1% of the long-polling time. 100 / 10,000 = .01 = 1%
A 20 second timeout would be 100/20000 = 0.5%
A 30 second timeout = 0.33%, etc.
Beyond 30 seconds, the practical benefit of a longer timeout will always be less than a 0.33% performance improvement, so there is little reason to go above 30 s.
Conclusion
I suggest long poll timeouts be 100x the server's "request" time. This makes a strong argument for 5-20s timeouts, depending on your urgency to detect dropped connections and disappeared clients.
Best practice: Configure your client and server to abandon requests at the same timeout. Give the client extra network ping time for safety. E.g. server = 100x request time, client = 102x request time.
Best practice: Long polling is superior to websockets for many/most use cases because of the lack of complexity, more scalable architecture, and HTTP's well-known security attack surface area.
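As a sketch of the matched-timeout practice above, with assumed numbers (a 20 s server hold, a client that gives up at roughly 102% of that) using AbortController; the /poll endpoint is hypothetical:

    // The server is configured to release held requests after 20 s;
    // the client waits ~2% longer (the 100x vs 102x idea above)
    // so the server is normally the one that ends the exchange.
    const SERVER_TIMEOUT_MS = 20000;
    const CLIENT_TIMEOUT_MS = SERVER_TIMEOUT_MS * 1.02;

    async function longPollOnce() {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), CLIENT_TIMEOUT_MS);
      try {
        const res = await fetch('/poll', { signal: controller.signal });  // hypothetical endpoint
        return res.status === 200 ? await res.json() : null;              // null = no news, poll again
      } finally {
        clearTimeout(timer);
      }
    }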

webgame with simultaneous players

I have seen many web-browser-based games with players playing simultaneously.
Usually after waiting some time you can join a room where everyone is playing, or you can play against one other player.
All those games use Flash.
How do they achieve that?
Would it be very complex to accomplish without Flash?
Is there any toolkit (Rails, etc.) or plugin that provides this functionality?
Or is it just a matter of storing sessions and mixing them?
Just a quick edit:
I am not interested in Flash or Silverlight solutions
There are a couple of options for a JavaScript-only solution. They all involve AJAX of one form or another. (See my answer on AJAX and Client-Server Architecture with JavaScript for a larger breakdown.)
You have a choice between AJAX polling, long polling, Comet, or the upcoming Web Sockets. The first step is to understand AJAX. Once you are comfortable with it, you can set up a polling system (with setInterval) to poll the server for new data every n milliseconds.
It's possible to do it without Flash if you're comfortable with AJAX and your game doesn't require rapid interactions between users. But in either case, I believe you have to poll the server. You might also want to read about Comet: http://en.wikipedia.org/wiki/Comet_(programming)
Can you clarify what kind of game you would like to make? Turn based or real-time?
Since you're not interested in Flash or Silverlight solutions (which can use sockets and thus scale well with thousands of users), you can use JavaScript and AJAX to send and receive data.
Essentially you can use AJAX like a socket by sending out input and then letting the script "long poll" the server, with the server delaying its response until it has data to send. The only problem is that you can only keep a connection open for so long before it times out (~30 seconds). This isn't usually a problem, though, since you're passing data back and forth frequently.
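A server-side sketch of that "delay the response until there is data" idea in Node (just a modern illustration of the same pattern, not what the original answer used): pending responses are parked and answered when the game state changes, or released empty before the ~30 second limit bites.

    const http = require('http');
    const waiting = [];                       // parked long-poll responses

    http.createServer((req, res) => {
      if (req.url === '/poll') {
        waiting.push(res);
        // Let go of the request empty before proxies/browsers give up on it.
        setTimeout(() => {
          const i = waiting.indexOf(res);
          if (i !== -1) {
            waiting.splice(i, 1);
            res.writeHead(204);
            res.end();
          }
        }, 25000);
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);

    // Call this whenever the game state changes.
    function broadcast(state) {
      while (waiting.length) {
        const res = waiting.pop();
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(state));
      }
    }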
I'd research FastCGI (or so I believe it can work like this) and have a game server daemon respond to the requests directly. That way it can open a single database connection and process all of the clients quickly. While this isn't necessary, it would probably scale really well if implemented correctly.
At the moment I've been making a proof of concept that's kind of naive. (Less naive than using the database as state and just using PHP scripts to update and read the database's state. I should note, though, that for only a few users and your own database this works rather well. I had 20 clients walking around at 100 ms updates. Sure, it doesn't scale well and kills the database with 10 connections per client per second, but it "works".) Basically the idea is that I have JavaScript generate packets and send them to a PHP script. That PHP script opens a Unix domain socket and forwards the data to a C++ daemon. I haven't benchmarked it though, so it's hard to tell how well it'll scale.
If you feel comfortable with it, though, I really do recommend learning Flash and socket networking. Using epoll on Linux or IOCP on Windows you can host hundreds of clients. I've done tests of 100 clients on a C# socket server in the past and it took less than 5% CPU handling constant streams of small packets.
It depends on what technology you want to use. Flash can be used to create a game like that, and so can Silverlight. They both use JavaScript to send mouse movements and other user input asynchronously to the server so that the game state can be updated on the server.
An article on Flash game development:
http://www.brighthub.com/internet/web-development/articles/11010.aspx
Silverlight:
http://www.brighthub.com/internet/web-development/articles/14494.aspx
Java applets are able to communicate with JavaScript (e.g. if you want your UI to be HTML & CSS). So in theory you could implement your network code in a signed Java applet. In this case you would not be limited to the plain client-server model.
