This question follows a previous one: Shall I use Node.js Instead of Rails for Real-time WebApps?
The question:
What's the best way of communicating between a Rails app and a Node.js app in order to take advantage of both technologies?
Thanks
Why not open a TCP socket for communication between Node and RoR?
var net = require('net');

// create a TCP echo server
var server = net.createServer(function (socket) {
  // greet the client, then pipe its input straight back to it
  socket.write("Echo server\r\n");
  socket.pipe(socket);
});

// start the server listening on port 8124
server.listen(8124, "127.0.0.1");
And in RoR you can connect to that socket:
require 'socket'        # Sockets are in the standard library

hostname = '127.0.0.1'
port = 8124

s = TCPSocket.open(hostname, port)
while line = s.gets     # Read lines from the socket
  puts line.chop        # And print with platform line terminator
end
s.close                 # Close the socket when done
Then just write an abstraction on top of this TCP socket to synchronize your communication nicely, without requiring low-level fiddling.
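For example, a minimal sketch of such an abstraction on the Node side, assuming newline-delimited JSON as the (made-up) message format:

var net = require('net');

// Minimal framing layer: each message is one line of JSON.
var server = net.createServer(function (socket) {
  var buffer = '';
  socket.on('data', function (chunk) {
    buffer += chunk.toString();
    var lines = buffer.split('\n');
    buffer = lines.pop();                 // keep any partial line for the next chunk
    lines.forEach(function (line) {
      if (!line) return;
      var message = JSON.parse(line);     // e.g. { "type": "ping" }
      socket.write(JSON.stringify({ type: 'ack', received: message.type }) + '\n');
    });
  });
});

server.listen(8124, '127.0.0.1');

On the RoR side, the client above would then just s.puts({ type: 'ping' }.to_json) and JSON.parse(s.gets) each reply.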
Why do the apps need to communicate?
If you simply need a Rails app to get some realtime data into the browser, then using a node.js server app and Socket.IO would be sufficient.
You have to remember that any Rails app is actually two applications: one written in Ruby running on the server, and one written in JavaScript running on the client. They usually communicate over HTTP, sometimes with AJAX and sometimes not. Which part of your app needs the functionality of node.js?
If it is the case that the app deals with login, then displays a web page, and then continually refreshes that web page with real-time data, you only really get a benefit from node.js for the realtime data refreshes, whether you do it with AJAX polling or with WebSockets. Shared databases are a nice way for apps to communicate, but not for realtime.
To make it clear: if you are an expert in Ruby with Rails, you will be more productive if you add a node.js server app and only use it for high-volume data, such as realtime updates. You then have a hybrid web app that leverages the best of both platforms.
What about keeping Rails and using Faye?
The latest Railscast on it is awesome: http://railscasts.com/episodes/260-messaging-with-faye
One way is to have a common back-end database or some kind of in-memory store that acts as an intermediary layer between the two technologies. A popular choice is a NoSQL store like Redis, which is fast, memory-based, and supports advanced data structures that are handy for this scenario. Both node.js and RoR have good client libraries for communicating with Redis.
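For instance, a rough sketch of the Node side subscribing to a channel that the Rails app publishes to (the channel name is made up, and the API shown is the classic node redis client callback style, which differs in newer client versions):

var redis = require('redis');

// dedicated connection for pub/sub
var sub = redis.createClient();

sub.on('message', function (channel, message) {
  // Rails could publish JSON payloads, e.g. $redis.publish('rails-events', data.to_json)
  var event = JSON.parse(message);
  console.log('event from Rails on %s:', channel, event);
});

sub.subscribe('rails-events');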
I would say the main problem is the initial authentication between the two separate systems, which both need to stay synchronized. There are similar questions/answers on this topic that may be useful to read; for example, these two show possible ways to solve the authentication problem.
It depends on exactly why you're separating the functionality from one to the other. Rails supports REST-based separation without any extra work on your part; it's built around resources from the ground up. That means it would be very simple for you to use an http.Client (or something like Restler) to query against it. You can certainly do the exact same thing the other way around, using standard Node.js routing (or something like Express) and an HTTP client for Ruby (such as Typhoeus). This method does incur the overhead of a full HTTP request (not necessarily a problem on an internal network). If you are looking for a speedier way of communicating, I'd say you could use a persistent socket as Raynos suggests.
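As a rough illustration of the REST approach in the Node-to-Rails direction, using only the built-in http module (the /widgets.json path is just a placeholder for whatever resource the Rails app exposes):

var http = require('http');

// GET a JSON resource served by the Rails app
http.get({ host: 'localhost', port: 3000, path: '/widgets.json' }, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var widgets = JSON.parse(body);
    console.log('got %d widgets from Rails', widgets.length);
  });
});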
Depending on your needs, I would point out that using two separate systems adds code complexity, and it may be best to reduce things to one framework/language. I'm all for service-oriented design, but Rails is pretty heavyweight and may slow down your overall response times, even with Node.js working alongside it.
Related
Basically, I want to make a peer-to-peer architecture using JavaScript (Ionic).
Since JS cannot create sockets etc., a NodeJS server has to be introduced between the clients, acting as the Socket.IO server.
The problem with this is that the Socket.IO (NodeJS) server would need to be discovered automatically within the local network by the clients (instead of being hardcoded/configured).
Are there any ways to implement such a thing; or alternatives to this architecture?
Thanks for the help!
Are there any ways to implement such a thing; or alternatives to this architecture?
Currently your architecture is using a browser app plus a Node app that users need to have on their network just to create TCP connections.
What you can do instead is create an Electron app that combines a Node app, a browser app, and a browser itself. See:
https://electron.atom.io/
With Electron you can write your frontend code almost the same way as for the regular browser, but you can use the entire Node API, including TCP sockets, so there will be no need to create a separate Node app or to search for that app on the network. This can greatly simplify your architecture.
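A minimal sketch of the idea (index.html, the host and the port are placeholders; the renderer is your existing frontend code):

// main.js - Electron main process
const { app, BrowserWindow } = require('electron');
const net = require('net');

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 800, height: 600 });
  win.loadFile('index.html');   // your existing frontend

  // the full Node API is available here, so a plain TCP connection is possible
  const socket = net.connect(9000, '192.168.1.10', () => {
    socket.write('hello from the desktop client\n');
  });
  socket.on('data', (data) => win.webContents.send('peer-data', data.toString()));
});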
Note: this is not an answer to the first part of the question, "How to detect a server in the network using JS?", but to the second part: "Are there any ways to implement such a thing; or alternatives to this architecture?" Detecting servers on the local network with client-side JavaScript will not be easy - and in fact it shouldn't even be possible, because websites being able to scan your LAN for active services would be a serious privacy and security problem.
I'm new to Web Sockets in general, but get the main concept.
I am trying to build a simple multiplayer game and would like to have a server selection where I can run sockets on multiple IPs and connect each client through one of them, spreading connections out in order to improve performance. This is hypothetical, for the case of there being thousands of players at once, but I would like some insight into how this would work and whether there are any resources I can use to integrate it beforehand, to prevent extra work at a later date. Is this at all possible? As I understand it, Node.js runs on a server and uses the Socket.io dependency to create sockets within that, so I can't think of a possible solution to route it through another server unless I had multiple sites running it separately.
The first question I have is this:
Are you hosting on AWS or in a local datacenter?
The reason I ask is that Socket.IO requires sticky sessions to work properly across multiple servers. Because Socket.IO will attempt to upgrade each connection, and because that upgrade request must reach the original server that authorized the session, you'll need to route websocket (TCP) connections back to that original server via sticky sessions. Unfortunately AWS makes this extremely tricky and will require you to learn how to:
A) Modify elastic load balancer policies to forward protocol information
B) Split apart TCP connections from standard web requests using something like HAProxy or NGINX. This is necessary in order to handle WebSocket UPGRADE requests properly, as you will be setting TCP to sticky and web requests to round-robin.
C) Attach your Socket.IO configuration to a common storage source, like Redis (ElastiCache).
Once you've figured out what's needed for AWS (or if you've got full control over request routing at your local datacenter), you'll want to architect your Socket.IO application to use multicast rooms rather than direct socket messaging.
Example:
To send a message to users in game #4444, emit a message to room 'games:4444', rather than direct to the user's socket.
If your socket instance is configured to use Redis, Redis will automatically take care of maintaining the lists of people who are connected to your 'games:4444' channel. Otherwise you'll need to maintain the list yourself using a database or other shared mechanism.
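For example, a rough sketch of rooms combined with the socket.io-redis adapter (the exact setup varies between Socket.IO versions, and the event names here are made up):

const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// every Socket.IO instance pointing at the same Redis shares room membership
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('join-game', (gameId) => {
    socket.join('games:' + gameId);
  });

  socket.on('move', (gameId, move) => {
    // reaches everyone in the room, even if they're connected to another server
    io.to('games:' + gameId).emit('move', move);
  });
});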
Other than that, there are plenty of resources online that can help you figure out each step along the way. I'd start with understanding something like HAProxy and how it can help split apart your sockets from your web requests.
I'm fairly new to the world of JS and its abundance of libraries. I'm looking to get into a project that involves network communication (sockets) between clients and a server. In a world with tons of libraries, I cannot make a decision as to which to use. I'm looking for something that will bring efficiency and stability.
I've been told that Node.js is like the middleman between you, as the developer, and Socket.IO. I've been told it's a huge framework that you may not use at least half of. I've been told that to maximize efficiency, you're better off using Socket.IO to make your own functionalities. I've done some research on my own and found that Socket.IO NEEDS Node.js and Node.js DOESN'T NEED Socket.IO, which is the complete opposite of what I was told. Then I found that most developers use both Socket.IO and Node.js at the same time?
Like I said, I'm fairly new, but I cannot find the right resources that would help me accomplish a websocket communication between a client and a server with maximum efficiency, or at least explain the difference between Socket.IO and Node.js. If anyone here could, please let me know! I would greatly appreciate it.
node.js is a general-purpose javascript-based run-time environment (somewhat similar in scope to other language runtimes like python). You can create apps in it that don't even use the network. It is often used as a web server for creating web apps and has a great set of tools and a rich library of add-ons for doing so. It does not need socket.io.
socket.io is a specific library to enable web-socket-like communication between a client and a server (e.g. a chat room app is the canonical example). The server side of socket.io assumes a javascript run-time (because it's written in javascript) so that generally means node.js (though I'm not sure if a different JS runtime could perhaps be substituted).
You can think of node.js like the platform and socket.io like a specific tool to do a specific job that runs on that platform. You would use socket.io (on top of node.js) if you needed web socket connectivity between client and server.
You would use only node.js if you need any of the other things node is good at, but did not need websocket connectivity.
websockets themselves can be programmed on the server side without socket.io and without node.js. They could be programmed in straight C++ or in Java. But socket.io (running in node) provides a very easy way to set them up because the socket.io library covers both client and server in one library and one API and it's all in the same language (javascript). Look at the chat room app example on the socket.io site and you will be unlikely to find any other solution that can accomplish that in as few lines of code as it does and with the same interface on client and server.
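To give a feel for how little code that takes, here is a stripped-down sketch of the server side of such a chat app (modeled on, but not copied from, the official example):

const httpServer = require('http').createServer();
const io = require('socket.io')(httpServer);

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    io.emit('chat message', msg);   // relay to every connected client, sender included
  });
});

httpServer.listen(3000);

The browser side is just a socket.io client that emits 'chat message' events and listens for them to append lines to the page.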
If you were only setting up a websocket server (no web server or web app of any kind), you could still use node and socket.io and use it just for the websocket server and it would still be quite efficient. While node is capable of doing lots of other things, if you don't configure and install all those other things, they aren't costing you anything - they are just unused capabilities that aren't running.
I should add that one other thing the socket.io library does is it handles an auto-negotiation between client and server to find the best channel for the client and server to communicate on. If websockets are available, then socket.io will likely use them, but if web sockets are not available, socket.io has alternate methods that will work (even in older browsers). That functionality comes for free in socket.io without you even doing anything.
In case this isn't completely clear to you, websockets are typically used to provide real-time communication between client and server. While clients can ask for data from a server at any time with an ajax call or a web page request, what websockets allow is a two way real-time communication between client and server and the biggest advantage of websockets is that a server can send a client real-time data at any time while they are connected.
For example, I have a web page that receives real-time data from my server anytime the web page is open. The web page is served over the typical node.js web server installation, but the real-time data is sent from server to client over a websocket connection.
In addition, if there's a chatty conversation happening between client and server, websockets can be much more efficient than a series of ajax calls because with a websocket, a connection is opened once and used repeatedly whereas with ajax, each successive ajax call is like a new connection.
Node.js is a runtime environment. It's a javascript engine with a standard library built around asynchronous I/O. It plays the same role that Java, Python, Ruby, .NET, etc., play for many other web applications.
I've been told it's a huge framework that you may not use at least half of.
It might be true that most people never use most of the standard library, but I wouldn't think it's more true of Node.js than other runtimes. "Framework" isn't an accurate word to describe it.
I've been told that to maximize efficiency, you're better off using Socket.IO to make your own functionalities.
Whoever told you that was mistaken, or meant that to maximize efficiency, you're better off using [Node.js and] Socket.IO [instead of other solutions]. Many other non-Node.js solutions require a single thread or process per connection, which limits the number of simultaneous connections a server can handle. Node.js is built around asynchronous I/O which is better for keeping many connections open at once, and Socket.IO is a library for Node.js for using WebSockets.
TL;DR: Socket.IO can fire events in realtime between your client and server, so there is no need for you to reload the page to notice something changing. This can be used for "live" applications like collaborative drawing, live chats, online games and more!
Just started dealing with NodeJS web apps and have a fundamental question.
Since I came from the PHP realm, I know PHP has a built-in HTTP server, but nobody actually uses it; we used nginx, and in prehistoric projects Apache, as the HTTP server. When I came to ExpressJS I found that all the examples talk about listening on the HTTP server that ExpressJS opens (via the http NodeJS module, of course), but nobody talks about running it via FastCGI (nginx -> FastCGI (e.g. node-fastcgi) -> my ExpressJS app) like I used to do with PHP (nginx -> PHP-fpm -> my PHP env), and I wonder why?
As far as I understand, a NodeJS app is very fast, has non-blocking I/O and so on, but there is a security hole in using the app the way everybody shows: since the running service shares common resources in one JavaScript environment, one user can, by mistake (or not), share sensitive information with others. For instance, let's assume the developer made a mistake like this:
router.post('/set-user-cc', function (req, res) {
  // the mistake: storing per-user data in a process-wide global
  global.user = new User({
    creditCard: req.param('cc')
  });
  res.sendStatus(200);
});
And another user makes a request like this:
router.get('/get-user-cc', function (req, res) {
  res.json(global.user);
});
At this point, any user will get that user's CC info.
Using my ExpressJS app via FastCGI would open a clean JavaScript environment for each HTTP request, so users couldn't hurt each other.
It would be nice to hear from experienced NodeJS (web) app developers why no one suggests using the FastCGI solution (I searched on Google and found almost nothing) and, if so, why it's such a bad idea.
(P.S. The example is just to demonstrate the problem; it's not something that someone actually did, but as we know, a lot of stupid people exist in the universe :)
Thank you!
You won't make mistakes like that if you lint your code, run under strict mode, and don't use global variables like that.
Also, in nodejs web applications you generally want to make the server stateless and keep all the data in databases. This also makes for a more scalable architecture.
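For example, instead of the global from the question, the same handler could keep the data scoped to the request and persist it to the database (a hypothetical sketch: User is assumed to be an ORM/ODM model with a save method, and a JSON body parser is assumed to be configured):

router.post('/set-user-cc', function (req, res) {
  // the data never leaves this request's scope; it goes straight to the database
  var user = new User({ creditCard: req.body.cc });
  user.save(function (err) {
    if (err) return res.sendStatus(500);
    res.sendStatus(200);
  });
});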
In applications where security is super important, you can throw heavy fuzz testing at them to find problems like that too.
If you do all this, plus having a strict code review process, you won't have to worry about it at all.
FastCGI doesn't prevent this problem, as a single connection or a few connections are used to communicate with the server that processes the requests (node.js in this case), and HTTP connections are multiplexed through them. A node.js process will still handle multiple requests at a time.
You could potentially build a solution that launches a separate thread or process per request, but it would be a lot slower. If you are using node.js for things that require high reliability or can't afford small mistakes (for example, health-related devices), node.js is the wrong platform for it.
I'm looking at options to connect directly--without a web server or middleware--to a PostgreSQL server using JavaScript from a web browser client. On github, I found three projects:
node_postgres
node-postgres
postgres-js
They all appear to be in early but at least somewhat active development.
Do they all do roughly the same thing? Is what they do even what I'm looking for? Does anyone have experience with any of them that could recommend one over the others?
node-postgres was inspired by postgres-js and does roughly the same thing.
However, they both seem to be their own sort of middleware, because they require node.js, which is a server-side JavaScript runtime. So they would cut out a layer, but still not be the same thing as connecting directly to the PostgreSQL server.
There might be a way to combine the code in them with some HTML5 socket examples, though, to make connections directly from a web browser client.
If you are interested in CLIENT side JavaScript, as the OP's question implied, but you don't insist on owning the server, there is a commercial service that can help you.
The Rdbhost service makes PostgreSQL servers accessible from client-side JavaScript. There is a security system to prevent unauthorized queries, using a server-side white-list and an automated white-list populating system.
It uses plain old AJAX style http requests, provides a jQuery extension to facilitate the querying.
See https://www.rdbhost.com .
There is no secure solution today. One possible solution would be HTSQL:
http://htsql.org/
However, there you use web addresses to query; even with https, your queries will be plain text!
You should/could use a small web server to handle requests. Alternatively, you can write an app, or use a local postgres server to handle the connection (in this case you will still need some kind of web server).
The problem is very simple: your web browsers are limited in the protocols they can use to talk to the web, and postgres is not on that list. In fact, you should not try to overcome this issue; using a server-client architecture is a very good solution. Format your request with JS to make it as small as possible, and let your web-server scripts interpret it into functional SQL requests. The answer can be parsed into a shorter response than a raw SQL data transfer, and you just need to interpret it on your side. Since you will create interpreters on all sides, you will achieve a higher level of abstraction than in the case of a direct DB connection, and thus independence from the backend engines you use.
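For example, a minimal sketch of such a middle layer using Express and the node-postgres (pg) client - the route, table and connection string are made up:

const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool({ connectionString: 'postgres://user:pass@localhost/mydb' });

app.get('/items/:id', async (req, res) => {
  try {
    // the browser sends a tiny request; the SQL itself never leaves the server
    const { rows } = await pool.query('SELECT * FROM items WHERE id = $1', [req.params.id]);
    res.json(rows);
  } catch (err) {
    res.status(500).json({ error: 'query failed' });
  }
});

app.listen(3000);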