I'm new to Web Sockets in general, but get the main concept.
I am trying to build a simple multiplayer game and would like to have a server selection screen, so that I can run sockets on multiple IPs and connect each client through one of them, spreading connections out to improve performance. This is hypothetical, for the case of there being thousands of players at once, but I would like some insight into how this would work and whether there are any resources I can use to integrate it beforehand, to avoid extra work later. Is this at all possible? As I understand it, Node.js runs on a server and uses the Socket.IO dependency to create sockets within it, so I can't think of a way to route connections through another server unless I had multiple sites running it separately.
The first question I have is this:
Are you hosting on AWS or in a local datacenter?
The reason I ask is because Socket.IO requires sticky sessions to work properly across multiple servers. Socket.IO will attempt to upgrade each connection, and because that upgrade request must reach the original server that authorized the session, you'll need to route WebSocket (TCP) connections back to that original server via sticky sessions. Unfortunately AWS makes this extremely tricky and will require you to learn how to:
A) Modify Elastic Load Balancer policies to forward protocol information
B) Split TCP connections apart from standard web requests using something like HAProxy or NGINX. This is necessary in order to handle WebSocket UPGRADE requests properly, since you will be setting the TCP traffic to sticky and the web requests to round-robin.
C) Attach your Socket.IO configuration to a common storage source, like Redis (ElastiCache).
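As a rough illustration of point C, here is a minimal sketch of attaching Socket.IO to a shared Redis instance with the socket.io-redis adapter (the port and Redis host are assumptions; adjust for your setup):

// minimal sketch: wire Socket.IO to a shared Redis instance (point C)
// every Socket.IO server process points at the same Redis (e.g. an ElastiCache endpoint)
var io = require('socket.io')(3000);
var redisAdapter = require('socket.io-redis');

io.adapter(redisAdapter({ host: 'your-redis-host', port: 6379 }));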
Once you've figured out what's needed for AWS (or if you've got full control over request routing at your local datacenter), you'll want to architect your Socket.IO application to use multicast rooms rather than direct socket messaging.
Example:
To send a message to users in game #4444, emit the message to room 'games:4444' rather than directly to each user's socket.
If your Socket.IO instance is configured to use Redis, Redis will automatically take care of maintaining the list of people connected to your 'games:4444' room. Otherwise you'll need to maintain that list yourself using a database or other shared mechanism.
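A minimal sketch of the room-based approach (the event names and game id are made up for illustration):

// when a player joins game #4444, put their socket in that room
io.on('connection', function (socket) {
  socket.on('join-game', function (gameId) {
    socket.join('games:' + gameId);
  });
});

// later, broadcast to everyone in the game rather than to individual sockets;
// with the Redis adapter this reaches players connected to any of your servers
io.to('games:4444').emit('game-update', { state: 'example' });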
Other than that, there are plenty of resources online that can help you figure out each step along the way. I'd start with understanding something like HAProxy and how it can help split your WebSocket traffic apart from your regular web requests.
Related
Everyone, this is a bit of a strange question.
I have a product (a Node.js backend) that I sell and that people deploy by themselves.
My goal now is to let those deployments send data to each other if they want.
They are deployed on private servers.
The data is simple text (write some text, send it to other servers).
It needs to be end-to-end encrypted.
It's not a continuous flow of data (like a chat room).
They should have the possibility to send to multiple different servers.
I don't want to use a third-party server to host everything and store the data (they each have their own local MongoDB).
And I don't really know where to start and what to use.
My first problem is: do I use regular HTTP requests, WebSockets, or some other technology?
Basically, I want to build a peer-to-peer architecture using JavaScript (Ionic).
Since browser JS cannot create raw sockets, a Node.js server has to be introduced between the clients, acting as the Socket.IO server between them.
The problem with this is that the Socket.IO (Node.js) server would need to be found automatically within the local network by the clients (instead of being hardcoded/configured).
Are there any ways to implement such a thing; or alternatives to this architecture?
Thanks for the help!
Are there any ways to implement such a thing; or alternatives to this architecture?
Currently your architecture is using a browser app plus a Node app that users need to have on their network just to create TCP connections.
What you can do instead is create an Electron app that combines a Node app, a browser app, and a browser itself. See:
https://electron.atom.io/
With Electron you can write your frontend code almost the same way as for a regular browser, but you can use the entire Node API, including TCP sockets, so there is no need to create a separate Node app or to search for that app on the network. This can greatly simplify your architecture.
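As a rough sketch of what that looks like (the file name, port, and IP address are just placeholders), the Electron main process is plain Node and can open TCP sockets directly:

// main.js - Electron main process: the full Node API is available here
const { app, BrowserWindow } = require('electron');
const net = require('net');

app.on('ready', function () {
  // the frontend is just a normal web page loaded into a window
  const win = new BrowserWindow({ width: 800, height: 600 });
  win.loadURL('file://' + __dirname + '/index.html');

  // and the same process can speak raw TCP, no separate Node app needed
  const socket = net.connect(9000, '192.168.1.50', function () {
    socket.write('hello from the Electron app\n');
  });
});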
Note: this is not an answer to the first part of the question ("How to detect a server on the network using JS?") but to the second part ("Are there any ways to implement such a thing, or alternatives to this architecture?"). Detecting servers on the local network with client-side JavaScript will not be easy, and in fact it shouldn't even be possible, because websites being able to scan your LAN for active services would be a serious privacy and security problem.
I would like to know if it is possible to create a socket connection using the TCP protocol between servers.
For example: I have two servers, one is an API and the other only runs services, so the API calls the service. How could I send messages between them if they aren't on the same machine?
I'm using ZeroMQ and I definitely need this separation.
The other important piece is security. Are TCP socket connections secure?
Thanks.
It is absolutely possible, and it is the basis of the internet. Your browser on your machine opens sockets to remote servers constantly.
There are many ways to lock down TCP connections, spanning the network level, OS level, and application level.
If you need to encrypt the data you are sending, TLS/SSL is the de facto way. If your servers are on your own private subnet with no access to the outside, unencrypted communication is often used.
I've never used ZeroMQ, but if you are using it as a central store or message bus across your services, you could bind it to an interface with the appropriate visibility on your network, then connect to it from any of your servers.
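As a rough sketch of that bind/connect split (assuming the Node zeromq package with its version 5-style API, and made-up addresses), a request/reply pair between two servers could look like this:

// on the services server - bind a reply socket to an interface the API server can reach
var zmq = require('zeromq');
var responder = zmq.socket('rep');
responder.bindSync('tcp://0.0.0.0:5555');
responder.on('message', function (msg) {
  responder.send('got: ' + msg.toString());
});

// on the API server - connect to the services server and send a request
var requester = zmq.socket('req');
requester.connect('tcp://10.0.0.5:5555');
requester.on('message', function (reply) {
  console.log(reply.toString());
});
requester.send('hello service');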
Since it is 100% possible, I feel like the issues become:
How should the servers expose their external interfaces, and at what level of abstraction? i.e.
Should they communicate through raw sockets?
Should they communicate through RPC?
Should they have a higher-level interface like REST?
Should all communication take place through ZeroMQ?
I would highly recommend, whatever course you decide to take, that you audit the security very carefully and make sure that nothing is exposed to the outside world that shouldn't be.
You can use Redis for this purpose. If the two servers (one is the API, the other you called "services") have to communicate outside of the client-side API calls, then Redis pub/sub is a good suggestion.
If they are not on the same server
Yes, they can communicate within one machine or remotely; it depends on how you configure it.
TCP secure sockets
Yes, Redis can be configured to use SSL; see Redis Encryption.
There is a great topic about how to communicate via Redis; see Real time communication over redis.
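A minimal sketch of that pub/sub pattern (assuming the ioredis client, a made-up channel name, and a reachable Redis host; any Redis client with pub/sub support would work the same way):

// on the services server - subscribe to a channel
var Redis = require('ioredis');
var sub = new Redis({ host: 'your-redis-host' });
sub.subscribe('api-to-services');
sub.on('message', function (channel, message) {
  console.log('received on ' + channel + ': ' + message);
});

// on the API server - publish to the same channel
var pub = new Redis({ host: 'your-redis-host' });
pub.publish('api-to-services', JSON.stringify({ action: 'example' }));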
I'd like to know if anyone has managed to set up a peer-to-peer app for the Windows Store using HTML5 and JavaScript. Basically I want app client A to be able to connect and send data to app client B via a TCP or UDP socket (the problem I'm facing seems to be independent of the socket type).
My main problem is that I am unsure how to obtain a suitable IP/port which the other client would be able to connect to. It seems like there would be issues with router firewalls and whatnot, but MS claims that peer-to-peer is possible.
Any tips would be greatly appreciated, thanks.
Edit: I cannot use a third party service for communicating the data, because I want my app to be able to connect with other applications besides the one I'm writing. So something standard like TCP sockets is necessary.
Take a look at TideSDK for desktop app development with web technologies.
I don't know if TideSDK provides an API for TCP or UDP communication; maybe you could try reading the docs. IMO the best approach is to create a server that handles the 1:1 connection between clients.
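As a very rough sketch of such a relay server (one possible shape of it, assuming the Node ws package and an arbitrary port), a single process can pair clients up and forward each message to the other side:

// tiny relay: pairs up clients and forwards messages between them
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8080 });

var waiting = null; // a client waiting for a partner

wss.on('connection', function (ws) {
  if (waiting === null) {
    waiting = ws;
  } else {
    var peer = waiting;
    waiting = null;
    // wire the two clients together
    ws.on('message', function (msg) { peer.send(msg.toString()); });
    peer.on('message', function (msg) { ws.send(msg.toString()); });
  }
});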
If you don't want to mess with running your own server, you should look into PubNub and Pusher.
This question follows a previous one: Shall I use Node.js Instead of Rails for Real-time WebApps?
The question:
What's the best way of communicating between a Rails app and a Node.js app in order to take advantage of both technologies?
Thanks
Why not open a TCP socket for communication between Node and RoR?
var net = require('net');

// create a TCP server that greets each client and then echoes back whatever it receives
var server = net.createServer(function (socket) {
  socket.write("Echo server\r\n");
  socket.pipe(socket);
});

// start the server listening on port 8124
server.listen(8124, "127.0.0.1");
And in RoR you can connect to the socket
require 'socket'        # Sockets are in the standard library

hostname = '127.0.0.1'
port = 8124

s = TCPSocket.open(hostname, port)
while line = s.gets     # Read lines from the socket
  puts line.chop        # And print with platform line terminator
end
s.close                 # Close the socket when done
Then just write an abstraction on top of this TCP socket to frame and synchronize your communication nicely, without requiring low-level fiddling.
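For example, a minimal sketch of such an abstraction on the Node side, using newline-delimited JSON messages (the framing scheme is just one assumption; anything both sides agree on works):

// wrap a socket so callers send/receive JSON objects instead of raw bytes
var net = require('net');

function jsonChannel(socket, onMessage) {
  var buffer = '';
  socket.on('data', function (chunk) {
    buffer += chunk.toString();
    var lines = buffer.split('\n');
    buffer = lines.pop(); // keep any partial line for the next chunk
    lines.forEach(function (line) {
      if (line.trim()) onMessage(JSON.parse(line));
    });
  });
  return {
    send: function (obj) { socket.write(JSON.stringify(obj) + '\n'); }
  };
}

// usage: the Rails side writes one JSON document per line over the same socket
var server = net.createServer(function (socket) {
  var channel = jsonChannel(socket, function (msg) {
    console.log('got message from Rails:', msg);
    channel.send({ ok: true });
  });
});
server.listen(8124, "127.0.0.1");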
Why do the apps need to communicate?
If you simply need a Rails app to get some realtime data into the browser, then using a node.js server app and Socket.IO would be sufficient.
You have to remember that any Rails app is actually two applications: one written in Ruby running on the server, and one written in JavaScript running in the client's browser. They usually communicate over HTTP, sometimes with AJAX and sometimes not. Which part of your app needs the functionality of Node.js?
If the app deals with login, then displays a web page, and then continually refreshes that page with real-time data, you only really benefit from Node.js for the realtime data refreshes, whether you do them with AJAX polling or with WebSockets. Shared databases are a nice way for apps to communicate, but not for realtime.
To make it clear: if you are an expert in Ruby with Rails, you will be more productive if you add a Node.js server app and only use it for high-volume data, such as realtime updates. You then have a hybrid web app that leverages the best of both platforms.
What about keeping Rails and using Faye?
The latest RailsCast on it is awesome: http://railscasts.com/episodes/260-messaging-with-faye
One way is to have a common back-end database or some kind of in-memory store which acts as an intermediary layer between the two technologies. A popular choice is a NoSQL database like Redis, which is fast, memory-based, and supports advanced data structures that are handy for this scenario. Node.js and RoR also both have good client libraries for communicating with Redis.
I would say the main problem is the initial authentication between the two separate systems, which needs to be kept in sync. There are similar questions/answers related to this topic which may be useful to read; they show possible ways to solve the authentication problem.
It depends on exactly why you're separating the functionality between the two. Rails supports REST-based separation without any extra work on your part; it's built around resources from the ground up. That means it would be very simple for you to use an http.Client (or something like Restler) to query against it. You can certainly do the exact same thing the other way around, using standard Node.js routing (or something like Express) and an HTTP client for Ruby (such as Typhoeus). This method does incur the overhead of a full HTTP request (not necessarily a problem on an internal network). If you are looking for a speedier way of communicating, you could go about it using a persistent socket as Raynos suggests.
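For instance, a minimal sketch of the REST approach from the Node side, using only the built-in http module (the Rails URL and resource path are made up for illustration):

// query a Rails resource endpoint from Node and parse the JSON response
var http = require('http');

http.get('http://localhost:3000/players/42.json', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var player = JSON.parse(body);
    console.log('player from Rails:', player);
  });
}).on('error', function (err) {
  console.error('request failed:', err.message);
});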
Depending on your needs, I would suggest that using two separate systems adds code complexity, and it may be best for you to stick to one framework/language. I'm all for service-oriented design, but Rails is pretty heavyweight and may slow down your overall response times, even with Node.js working alongside it.