Event broadcast between servers in node

I have several node servers running on AWS (but they might be running anywhere).
An admin user can trigger a database save on one server (via an Express endpoint) which will in turn require all servers to reload that particular record. So, a particular server needs to be able to trigger an event onto N other servers.
Since the node processes are potentially on different servers, I can't use a Unix socket. And since I don't know in advance how many servers there are, I assume I will need one specific server to "orchestrate" all of the other ones (i.e., each server "registers" with, and "pushes" events to, that particular server).
What is the easiest, simplest solution to achieve this? I am talking about minimal amount of code and moving parts.
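One common minimal answer here is a shared Redis pub/sub channel rather than a hand-rolled orchestrator server. A sketch, assuming the classic node_redis (v3) callback API; the channel name and the reloadRecord helper are illustrative:

```js
// A sketch of a shared Redis pub/sub event bus (node_redis v3 callback API).
// Every server runs this; any server can publish, all servers receive.
const redis = require('redis');

const sub = redis.createClient(); // assumes Redis reachable on localhost:6379
const pub = redis.createClient();

sub.subscribe('record-updated');
sub.on('message', (channel, recordId) => {
  // Every server (including the publisher) reloads the record.
  reloadRecord(recordId); // hypothetical: re-fetch the record from the db
});

// Called on whichever server handled the admin's save:
function broadcastRecordUpdate(recordId) {
  pub.publish('record-updated', String(recordId));
}
```

With Redis in the middle, no server needs to register with, or even know about, the others, which removes the need for the orchestrator server entirely.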

Related

Is it conflicting for multiple users on one backend server websockets

I'm planning on building some backend logic on a server for personal use. It's connected to a websocket from another server, and I've written code to handle data from that socket. I'm still fairly new to websockets, so the whole concept is still a little foreign to me.
If I allowed more users to use that backend, and the websocket has specific logic running, wouldn't it be conflicted by multiple users? Or would each user have their own instance of the script running?
Does what I'm trying to ask make any sense?
In node.js, there is only one copy of the script running (unless you use something like clustering to run a copy of the script for each core, which it does not sound like you are asking about). So, if you have multiple webSocket connections to the same server, they will all be running in the same server code with the same variables, etc... This is how node.js works. One running Javascript engine and one code base serves many connections.
node.js is an event-driven system so it will serve an incoming event from one webSocket, then return control back to the Javascript system and serve the next event in the event queue and so on. Whenever a request handler calls some asynchronous operation and waits for a response, that is an opportunity for another event to be pulled from the incoming event queue and another request handler can run. In this way, multiple requests handlers can be interleaved with all making progress toward completion, even though there is only one single thread of Javascript running.
What this architecture generally means is that you never want to put request-specific state in the global or module scope because those scopes are shared by all request handlers. Instead, the state should be in the request-specific scope or in a session that is bound to that particular user.
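A short sketch of that scoping advice, using the ws package purely for illustration: module-scope variables are shared by every connection, while variables declared inside the connection handler are per-connection.

```js
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

let lastMessage; // shared by ALL connections -- avoid request-specific state here

wss.on('connection', (socket) => {
  let messageCount = 0; // per-connection state, safe in this closure

  socket.on('message', (data) => {
    messageCount++;     // counts only this user's messages
    lastMessage = data; // clobbered by whichever user sent most recently
    socket.send(`You have sent ${messageCount} messages`);
  });
});
```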
Is it conflicting for multiple users on one backend server websockets
No, it will not conflict if you write your server code properly. Yes, it will conflict if you write your server code wrongly.

Nginx and Node.js server - multiple tasks

I have a few questions about combining Nginx and Node.js.
I've used Node.js to create my server, and now I'm facing an issue with the server blocking on certain actions (writing, removing, etc.).
We are using Redis to lock the server while a request is being processed. For example, if a new user is signing up, all the other requests wait until the process is done, and if another (longer) process is running, the other requests wait even longer.
We thought about creating a load balancer (using Nginx) that checks whether the server is locked, and if it is locked, opens a new task instead of waiting for the first process to finish.
I followed this tutorial and created a dummy server, but then I struggled with how to implement this idea of opening new ports.
I'm new to load-balancing implementation and would be happy to hear your thoughts.
Thank you.
The gist of it is that your server needs to not crash if more than one connection attempt is made to it. Even if you use Nginx as a load balancer and have five different instances of your server running... what happens when six clients try to access your app at once?
I think you are thinking about load balancers slightly wrong. There are different load balancing methods, but the simplest one to think about is "round robin" in which each connection gets forwarded to the next server in the list (the rest are just more robust and complicated versions of this one). When there are no more servers to forward to, the next connection gets forwarded to the first server again (whether or not it is done with its last connection) and the circle starts over. Thus, load balancers aren't supposed to manage "unique connections" from clients...they are supposed to distribute connections among servers.
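For reference, a minimal round-robin setup in Nginx looks something like this sketch (addresses and ports are illustrative; round-robin is Nginx's default method, so no extra directive is needed):

```
upstream node_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    location / {
        # each incoming connection is forwarded to the next server in the list
        proxy_pass http://node_backend;
    }
}
```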
Your server doesn't necessarily need to accept connections and handle them all at once. But it needs to at least allow connections to queue up without crashing, and then accept and deal with each one by one.
You can go the route you are discussing. That is, you can fire up a unique instance of your server (via Heroku or otherwise) for every single connection that is made to your app. But this is not efficient and will ultimately create more work for you in trying to architect a system that can do that well. Why not just fix your server?

Restarting a realtime node.js server without closing sockets?

We have a small set of multiplayer servers using node.js that are currently serving roughly 1 million messages a minute during peak usage. Is there a way to 'gracefully' restart the server without causing sockets to drop? Basically, I'm wondering what the best way is to handle restarts where they would normally be very disruptive to players.
When a process exits, the OS cleans up any sockets that belong to it by closing them. So, there's no way to just do a simple server restart and preserve your socket connections.
In some operating systems, you can pass ownership of a socket from one process to another, so it might be technically feasible for you to create a temporary process (or perhaps use a previously existing parent process), pass ownership of the sockets to that other process, restart your server, then transfer ownership back to the newly started process. I've never tried this (or heard about it being done), but it sounds like something that might be feasible.
Here's some information on transferring a socket to a child process using child.send() in node.js. It appears this can only be done for a node.js socket created by the net module and there are some caveats about doing it, but it is possible.
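A minimal sketch of that handoff (filenames and the port are illustrative; pausing the socket during the transfer is one of those caveats):

```js
// parent.js -- accepts connections, then hands each socket to a child process
const { fork } = require('child_process');
const net = require('net');

const child = fork('child.js');

const server = net.createServer({ pauseOnConnect: true }, (socket) => {
  // The socket arrives paused so no data is read before the child owns it.
  child.send('socket', socket); // a net.Socket may be sent as the handle
});
server.listen(8080);

// child.js -- receives the socket and takes over the connection
process.on('message', (msg, socket) => {
  if (msg === 'socket' && socket) {
    socket.resume();
    socket.end('Handled by the child process.\n');
  }
});
```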
If not, the usual work-around is have the clients automatically reconnect when their connection is closed. Done properly, this can be fairly transparent to the client (except for the momentary time when the server is not running).
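On the client side, that work-around can be as small as this browser-side sketch (the URL, retry delay, and handleMessage are illustrative):

```js
function connect() {
  const ws = new WebSocket('wss://game.example.com');
  ws.onmessage = (event) => handleMessage(event.data); // hypothetical handler
  // When the server restarts and the connection drops, quietly reconnect.
  ws.onclose = () => setTimeout(connect, 1000);
}
connect();
```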
Use Redis or some in-memory database for storing connection state, so that clients can easily reconnect after a server restart without losing any sessions. Try this if it suits your needs. Note that connections may drop during the restart, but because the state is persisted you will be reconnected again very easily.
socket.io-redis

Socket.io and Node.Js multiple servers

I'm new to Web Sockets in general, but get the main concept.
I am trying to build a simple multiplayer game and would like to have a server selection screen, where I can run sockets on multiple IPs and connect the client through whichever one is chosen, to spread connections out and improve performance. This is hypothetical, for the case where there are thousands of players at once, but I would like some insight into how this would work, and whether there are any resources I can use to integrate it beforehand, to prevent extra work at a later date. Is this at all possible? As I understand it, Node.js runs on a server and uses the Socket.io dependency to create sockets within that, so I can't think of a way to route it through another server unless I had multiple sites running it separately.
The first question I have is this:
Are you hosting on AWS or in a local datacenter?
The reason I ask is because Socket.io requires sticky sessions to work properly across multiple servers. Because Socket.io will attempt to upgrade each connection, and because that upgrade request must reach the original server that authorized the session, you'll need to route websocket (TCP) connections back to that original server via sticky sessions. Unfortunately, AWS makes this extremely tricky and will require you to learn how to:
A) Modify Elastic Load Balancer policies to forward protocol information
B) Split TCP connections apart from standard web requests using something like HAProxy or Nginx. This is necessary in order to handle websocket UPGRADE requests properly, as you will be setting TCP to sticky and web requests to round-robin.
C) Attach your Socket.io configuration to a common storage source, like Redis (ElastiCache).
Once you've figured out what's needed for AWS (or if you've got full control over request routing at your local datacenter), you'll want to architect your Socket.io application to use multicast rooms rather than direct socket messaging.
Example:
To send a message to users in game #4444, emit a message to room 'games:4444', rather than direct to the user's socket.
If your socket instance is configured using REDIS, REDIS will automatically take care of maintaining lists of people who are connected to your 'games:4444' channel. Otherwise you'll need to maintain the list yourself using a database or other shared mechanism.
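A sketch of that room-based pattern with the Redis adapter wired in (socket.io v2-era socket.io-redis API; the port, room prefix, and event names are illustrative):

```js
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// With the Redis adapter, room membership and broadcasts work across
// every server instance, not just this one.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('join-game', (gameId) => {
    socket.join('games:' + gameId); // subscribe this socket to the game's room
  });
});

// Reaches every player in game #4444, whichever server they landed on:
io.to('games:4444').emit('game-update', { /* game state */ });
```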
Other than that, there are plenty of resources online that can help you figure out each step along the way. I'd start by understanding something like HAProxy and how it can help split your socket traffic apart from your web requests.

Moving node.js server javascript processing to the client

I'd like some opinions on the practical implications of moving processing that would traditionally be done on the server to be handled instead by the client in a node.js web app.
Example case study:
The user uploads a CSV file containing a year's worth of their bank statement entries. We want to parse the file, categorise each entry, and calculate cumulative values for each category so that we can store the newly categorised statement in a db and display spending analysis to the user.
The entries are categorised by matching strings in the descriptions. There are many categories and many entries and it takes a fair amount of time to process.
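For concreteness, the string-matching step might look like this sketch (the keyword table is illustrative):

```js
// Map each category to description substrings that identify it.
const CATEGORIES = {
  groceries: ['TESCO', 'SAINSBURY'],
  transport: ['UBER', 'TFL'],
};

function categorise(entry) {
  const description = entry.description.toUpperCase();
  for (const [category, keywords] of Object.entries(CATEGORIES)) {
    if (keywords.some((keyword) => description.includes(keyword))) {
      return category;
    }
  }
  return 'uncategorised';
}
```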
In our node.js server, we can happily free up the event loop whilst waiting for network responses and so on, but if there is any data crunching or similar processing, the server will be blocked from responding to requests, and this seems unavoidable.
Traditionally, the CSV file would be passed to the server, the server would process, save in db, and send back the output of the processing.
It seems to make sense in our single threaded node.js server that this processing is handled by the browser, and the output displayed and sent to server to be stored. Of course the client will have to wait while this is done, but their processing will not be preventing the server from responding to requests from other clients.
I'm interested to see if anyone has had experience building apps using this model.
So, the question is.. are there any issues in getting browsers rather than the server to handle, wherever possible, any processing that will block the event loop? Is this a good/sensible/viable approach to node.js application development?
I don't think trusting client processed data is a good idea.
Instead you should look into creating a work queue that a separate process listens on, separating the CPU intensive tasks from your node.js process handling HTTP requests.
My proposed data flow would be (a sketch follows the list):
1. HTTP upload request
2. App server saves the raw file somewhere the worker process can access
3. Notification is pushed to the 'csv' work queue
4. A worker processes the uploaded CSV file
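A sketch of that flow using a Redis-backed job queue such as bull (assumed here; app is an Express app, and saveUploadedFile and processCsv are hypothetical helpers):

```js
// app-server.js -- accepts the upload and enqueues a job, never blocking
const Queue = require('bull');
const csvQueue = new Queue('csv'); // assumes Redis on localhost:6379

app.post('/upload', async (req, res) => {
  const filePath = await saveUploadedFile(req); // hypothetical helper
  await csvQueue.add({ filePath });
  res.status(202).send('Processing');
});

// worker.js -- a separate process that does the CPU-intensive crunching
const Queue = require('bull');
const csvQueue = new Queue('csv');

csvQueue.process(async (job) => {
  await processCsv(job.data.filePath); // hypothetical: parse, categorise, save
});
```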
Although perfectly possible, simply shifting the processing to the client machine does not solve the basic problem.
Now the client's event loop is blocked, preventing the user from interacting with the browser. Browsers tend to detect this problem and stop execution of the page's script altogether. Something your users will certainly hate.
There is no way around either delegating or splitting up the work-load.
Using a second process (for example a 2nd node instance) for doing the number crunching server-side has the added benefit of allowing the operating system to use a 2nd CPU core. Ideally you run as many Node instances as you have CPU cores in the server and balance your work-load between them. Have a look at the diode module for some inspiration on how to implement multi-process communication in node.
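For the multi-instance part, Node's built-in cluster module is one way to get a worker per core, as a sketch (the port is illustrative):

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core; the OS spreads them across cores.
  os.cpus().forEach(() => cluster.fork());
} else {
  // Workers share the same listening port; connections are distributed.
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```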
