Let's look at this problem as the process of searching for a game in Hearthstone.
I have two websocket connections. I tag websockets with user tokens so I am able to differentiate users easily.
On the client I can find a game by pressing a "Play" button. The client then sends a request to find a game, which on the backend side results in up to 10 repeated requests, one every 10 seconds.
If no game is found, an error 'No game found' is returned to the client.
The problem arises when the game WAS found.
The requests from the two different hosts are concurrent.
So if a game is found for the first user, the websocket sends a game id to both users. But the second user's find-game request is still inside its 10-second timeout.
After those 10 seconds the backend makes another request, finds a game again, and the websocket once more sends data to both users.
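To make the race concrete, here is a minimal sketch of the flow as described; findGameViaApi, sendGameIdToBoth and sendError are hypothetical helper names, not real APIs:

async function searchForGame(userToken) {
  for (let attempt = 0; attempt < 10; attempt++) {
    const game = await findGameViaApi(userToken); // HTTP request to the DB API
    if (game) {
      sendGameIdToBoth(game); // websocket push to both matched players
      return;
    }
    await new Promise((resolve) => setTimeout(resolve, 10000)); // wait 10 seconds
  }
  sendError(userToken, "No game found");
}
// the race: while user A's attempt has already matched both players,
// user B's loop is still sleeping and will find a game again on its next pass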
Should I handle this on the client side by simply ignoring the websocket data if the game is already on, or is it better to somehow handle it on the backend?
Forgot to add: I can access my database from the backend only via an API, so I have to make HTTP requests for that.
I have not been able to get an answer to this anywhere online. I want to remove possible jitter from my nodejs server. I am using socket.io to create connections to node.
If a user goes to a specific part of my website, a connection is started. However, if the user refreshes the site too quickly and often, the connection is created very frequently, and issues arise with my server.
While I realize this could be solved in a couple of different ways, I am hoping there is a server-side solution. Meaning: whenever a user connects, make sure the user stays connected for at least 5 seconds, then move on; otherwise, disconnect the user. Thanks for any insight!
First off, a little background. With a default configuration, when a socket.io connection starts, it first makes 2-5 http connections and then, once it has established the "logical" connection, it tries to establish a connection using the webSocket transport. If that is successful, it keeps that webSocket connection as a long-lasting connection and sends socket.io packets over it.
If the client refreshes in the middle of the transition to a webSocket connection, it creates a period of unknown state on the server: the server isn't sure whether the user is still mid-transition to a lasting webSocket connection, already gone entirely, having some sort of connection issue, or just doing a refresh. You can easily end up with a situation where the server thinks there are multiple connections, all from the same user, in the process of being confirmed. It can be a bit messy if your server is sensitive to that kind of thing.
The quickest thing you can do is to force the connection process to go immediately to the webSocket transport. You can do that in the client by adding an option to your connection code:
let socket = io(yourURL, {transports: ["websocket"]});
You can also configure the server to only accept webSocket connections if you're trying to protect against any other types of connections besides those from your own web pages.
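If you want to enforce that on the server too, socket.io accepts (in v4, as far as I know) the same transports option when you create the server; a minimal sketch, where httpServer is your existing http.Server:

const { Server } = require("socket.io");
// accept only the webSocket transport, skipping the initial http polling
const io = new Server(httpServer, { transports: ["websocket"] });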
This will then go through the usual webSocket connection process, which starts with a single http request that is then "upgraded" to the webSocket protocol. One connection, one socket. The server will know right away whether the user is or isn't connected. And once they've switched over to the webSocket protocol, the server will know immediately if the user hits refresh, because the browser will close the webSocket right away.
The "start with http first" feature in socket.io is largely present because in the early days of webSockets, there were some browsers that didn't yet support them and some network infrastructure (like corporate proxies) that didn't always support webSocket connections. The browser issue is completely gone now. All browsers in use support webSocket connections. I don't personally have any data on the corporate proxies issues, but I don't ever hear about any issues with people using webSockets these days so I don't think that's much of an issue any more either.
So, the above change will get you a quick, confirmed connection and get rid of the confusion around whether a user is or isn't connected early in the connection process.
Now, if you still have users who are messing things up by rapid refresh, you probably need to implement some protection on your server for that. If you cookie each user that arrives on your server, you could create some middleware that keeps track of how many page requests have come in some recent time interval from the browser with that cookie, and just returns an error page explaining they can't make requests that quickly. I would implement this at the web page level, not the webSocket level, as that gives users better feedback to stop hitting refresh. If it's really a refresh you're trying to protect against, and not general navigation on your site, then you can keep a record of the cookie/URL combination, and if you see even two of those within a few seconds, return the error page instead of the expected content. Redirecting to an error page forces a more conscious action to get back to the right page before they can reach the content.
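Here's a minimal sketch of that cookie-based throttle as Express-style middleware; visitorId, the 3-second window and the error text are all just illustrative choices, and it assumes cookie-parser has already populated req.cookies:

const recentHits = new Map(); // "cookieId|url" -> timestamp of last request

function refreshGuard(req, res, next) {
  const id = req.cookies.visitorId; // assumes a visitor cookie was already issued
  if (!id) return next();
  const key = id + "|" + req.path;
  const now = Date.now();
  const last = recentHits.get(key);
  recentHits.set(key, now); // a real version would also expire old entries
  if (last !== undefined && now - last < 3000) {
    // two hits on the same URL within a few seconds: likely a rapid refresh
    return res.status(429).send("You are refreshing too quickly - please slow down.");
  }
  next();
}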
In terms of efficiency, which is better: maintaining one single long connection with the server, or sending multiple instant requests?
I have a web application with multiple interfaces. On one of them, when the user submits data on a web page, it gets sent to the server by an ajax query asap, but the server cannot produce the required answer for some 3-5 seconds. I see two ways: the first is to keep querying the server for the answer every second; the second is to make the server side keep the connection alive until it can answer.
Which way is more efficient for the server, and which for the client? The design should allow for a large number of users.
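For reference, the second option is usually called long polling; here is a minimal Node sketch of the idea, where computeAnswer is a hypothetical async function that resolves in those 3-5 seconds:

const http = require("http");

http.createServer((req, res) => {
  let done = false;
  const timer = setTimeout(() => { // safety timeout if the answer never comes
    if (!done) { done = true; res.writeHead(204); res.end(); }
  }, 30000);
  computeAnswer(req).then((answer) => {
    if (done) return; // the timeout already replied
    done = true;
    clearTimeout(timer);
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(answer)); // the held connection finally answers
  });
}).listen(8080);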
What am I working on?
I am trying to establish communication between a PHP app and a telephony system's REST API.
Since my site is all written in PHP, I decided to build the communication in PHP by making cURL calls to the API.
To bring you up to speed: there are 2 types of communication between the user and the API, and I like to put them into two different categories.
Send Once / Receive Once: An example of this would be a user attempting to dial a new phone number ("dial 800-123-4567"). The API takes the request and returns an interaction id to allow the user to control the call (i.e. disconnect, mute, put on hold...).
Send Once / Receive Every Second: In this communication, I create a "persistent" connection between the user's session and the API. Then, every second, I check the API for new messages. After a message from the API is received, I must update the user's cache, read the latest user's cache, and finally send the browser the cached data.
Problem? HTTP is stateless.
Every request the user sends to the web server generates a new TCP connection. The issue with this is that every second I query the API for new messages, I get a new TCP connection. On average, about 200 TCP connections are needed at any given time per user. So if I have 300 users on the app/server, that is about 60,000 TCP connections open on the web server. As you can clearly see, the solution does not scale well, and it is a matter of time before the server blows up in my face... :(
Another issue is that PHP is not asynchronous, which causes problems if the communication with the API takes longer or returns errors.
FWIW, I have tried to use a JavaScript SharedWorker to eliminate some of the overhead. I even tried Server-Sent Events, but a user still generated too many TCP connections to the server. Nothing fixed the problem; I was only able to reduce the connections a little.
Can Nodejs help?
I was advised by a couple of people to use Nodejs instead of PHP for this task. Of course, I am not going to rewrite my PHP application in Nodejs, as that would be insane since my app is huge.
I would like to consider running a nodejs server as a middle man between the PHP server and the API service. The idea is to have a WebSocket server running on the node server. The client will then pass any communication to the websocket, and the websocket will send it on to the API. It does not sound bad at a high level, but once I dig deeper it seems to get trickier.
Nodejs Challenge
When a user logs into my PHP App, I validate their credentials, and once they are in I create a session which is stored in a MySQL database. For a session to be valid, the following must hold:
The IP address must match the IP which created the session.
The agent data must also match (I can live without this for nodejs).
The idle time of the session must be less than 900 seconds.
In order for Nodejs to start communication, it must first create a new connection to the API. After the connection is accepted, nodejs must keep track of the following data received from the API:
CSRF token
Session id
HTTP cookie
In order for Nodejs to make a connection to the API, it must pass a username, password, server name, port, and a station name. I have all the needed info stored in a MySQL database, and I can easily get it using PHP.
The challenge is that NodeJS has to take the PHP session, validate it, pull the needed API info from the database, and then establish a connection to the API.
Questions
Can nodejs use the PHP session to validate the user? If so, how?
How can nodejs use the TCP connection to prevent me from overloading the server?
This is your arrangement:
[User browser] -> [PHP] -> [Node.js] -> [API]
When your user's browser sends a request to your PHP server, the request includes a cookie; one of the values in this cookie is the session id, which PHP then uses to look up the session. The session id acts like a password: PHP will assume that if you've got the session id then you are the original user that was issued that id.
When your PHP script communicates with Node, it needs to pass along that session id as part of the request. In Node you then just need to look up the corresponding session in your sessions table in MySQL.
PHP session data is stored as the serialised $_SESSION array. To extract data from it you will need to unserialise it first. There are a number of libraries out there that provide this functionality (e.g. https://github.com/naholyr/js-php-unserialize, https://github.com/kvz/phpjs/blob/master/functions/var/unserialize.js). However, if the session data is simple and conforms to a known format, you could 'hand parse' the data.
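Putting the two steps together, here is a minimal Node sketch. It assumes a sessions table with session_id, ip, last_active and data columns (your schema will differ), the mysql2 package, and the js-php-unserialize library linked above (published on npm as php-unserialize):

const mysql = require("mysql2/promise");
const php = require("php-unserialize"); // npm build of js-php-unserialize

async function loadPhpSession(db, sessionId, clientIp) {
  const [rows] = await db.execute(
    "SELECT ip, last_active, data FROM sessions WHERE session_id = ?",
    [sessionId]
  );
  if (rows.length === 0) return null; // unknown session id
  const s = rows[0];
  if (s.ip !== clientIp) return null; // IP must match the session's creator
  if (Date.now() / 1000 - s.last_active > 900) return null; // idle too long
  return php.unserializeSession(s.data); // $_SESSION as a plain object
}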
I'm currently starting to work with node.js, and as a first little project I have decided to code a little chat application. I am using socket.io and have followed the chat tutorial at http://socket.io/get-started/chat/ so far. Now I want to expand on the script, but I am stuck.
I fail to understand how (or if at all) socket.io is managing all connected clients on the server side. I can use the io.emit function to send a message to all connected users. But what if I want to send a message to one specific user?
I could of course "fake" it by sending the message to all clients and then, on the client side, checking for an identifier etc. and only processing the message if there is a match. But this would still send the message to everyone, and it could easily be circumvented by users with minimal JS knowledge.
The "filtering"/"targeting" needs to happen on the server side for security reasons. But I fail to see/understand how (or if at all) socket.io actually manages all client connections. Is there a way to get a list of all current connected clients? Can you interate over all connected clients in a loop? Can you assign custom identifiers/name to connections? And - as the practical application - can you then emit/send a message to a single connected user?
Every socket connection is assigned a unique socket id the very first time the connection is established. You can emit data to that particular client using that id:
var socketId = socket.id;
// every socket automatically joins a room named after its own id,
// so emitting to that room reaches exactly that one client
io.to(socketId).emit("messageToReceiver", data);
Thus, you can send it to a particular person.
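You can also assign your own identifiers by keeping a map from user names to socket ids at connection time; a sketch, where the "register" event is something your client would have to send, not a built-in:

const userSockets = {}; // username -> socket.id

io.on("connection", (socket) => {
  socket.on("register", (username) => {
    userSockets[username] = socket.id;
  });
  socket.on("disconnect", () => {
    // drop any mapping that points at the closed socket
    for (const name of Object.keys(userSockets)) {
      if (userSockets[name] === socket.id) delete userSockets[name];
    }
  });
});

function sendTo(username, data) { // later: target a user by name
  const id = userSockets[username];
  if (id) io.to(id).emit("messageToReceiver", data);
}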
To get all the connected clients check this URL:
Socket.IO - how do I get a list of connected sockets/clients?
What would be the best mechanism for letting logged-in users receive messages generated by the server? As there is no way for the server to send information to a user the moment it has a new message to deliver, the user's browser would have to poll at some specific interval to receive new messages in response; additionally, there should be a way for the server to avoid re-sending messages already delivered to the user. You could draw a comparison with something like a public chat mechanism, but what I need is message delay as close to realtime as possible and the ability to handle about 100 users simultaneously while generating the least traffic possible. Additional note: data is needed only while the user is online; there is no need to store it on the server for other users to read as "history".
In my mind, there is one way of achieving this: a global "message box" where the server puts all messages, and the user's browser constantly polls the server to check whether the last received message ID equals the last message ID in the message box.
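The client side of that idea could look something like this; the /messages endpoint and the render handler are made up for illustration:

let lastId = 0; // id of the newest message we have already seen

async function poll() {
  const res = await fetch("/messages?after=" + lastId); // server returns only newer messages
  const messages = await res.json();
  for (const m of messages) {
    lastId = Math.max(lastId, m.id);
    render(m); // hypothetical UI handler
  }
}
setInterval(poll, 1000); // poll every second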
The question is whether this is the right way to do it, or whether there are other approaches for such tasks, as the need for realtime data can be found everywhere: sensor data, multiplayer games, chat, stock markets and more...
XEP-0124: Bidirectional-streams Over Synchronous HTTP (BOSH)
https://github.com/ssoper/jquery-bosh
Build a web-based notification tool with XMPP
Write real-time web applications with XMPP, PHP, and JavaScript
Hope this helps.
Isn't pushing a better strategy? Keep a TCP connection open between the server and the browser, and stream changes to the browser when new information is available.
Take a look at HTML5 WebSockets (which do exactly this).
Here's a demo.
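To make the push idea concrete, here is a minimal server sketch using Node's ws package (v8+); where the updates come from is up to your application:

const { WebSocketServer } = require("ws");
const wss = new WebSocketServer({ port: 8080 });

// stream a change to every connected browser as soon as it happens
function broadcast(update) {
  for (const client of wss.clients) {
    if (client.readyState === 1 /* OPEN */) {
      client.send(JSON.stringify(update));
    }
  }
}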
Have you looked at Comet?
Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it.
If you search stackoverflow there is plenty of info about its use.