I'm confused. The main question I have is: when should I use plain Node.js, and when should I use a framework/MVC like Express or Connect?
I know that Express just adds a bunch of functionality to Connect, but what is it really good for? Let's say I want to do all my HTTP stuff against an Apache server and only handle some partial stuff with Node.js (like WebSocket connections, CouchDB, etc.).
Would it make sense in this scenario to use Express or Connect for some reason?
As far as I know, Socket.IO also handles HTTP requests as a fallback, so would it be enough to just use Socket.IO for those needs?
What else is the big advantage of using Express/Connect?
Express (or Connect) is an application framework for HTTP web applications. It's the entry point of your application. It provides some very common functionality, such as:
HTTP server
URL routing
Request arguments
Sessions
It also allows other functionality (called middleware) to be plugged in easily, such as authentication handling and templating.
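For a concrete picture, here is a minimal sketch of an Express app touching each of those points (the route, port, and session secret are made up for the example):

var express = require('express');
var session = require('express-session');
var app = express();

// Middleware: runs for every request before the routes.
app.use(session({ secret: 'keyboard cat', resave: false, saveUninitialized: false }));

// URL routing with request args.
app.get('/users/:id', function (req, res) {
  // Sessions: count this user's visits.
  req.session.views = (req.session.views || 0) + 1;
  res.send('User ' + req.params.id + ', visit #' + req.session.views);
});

// HTTP server.
app.listen(3000);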
If you just want to implement a pub/sub service through SocketIO, without any pages or other API, just use the Socket.io library (S.io homepage example) :
// Start Socket.io listening on port 80 directly; no Express involved.
var io = require('socket.io').listen(80);

io.sockets.on('connection', function (socket) {
  // Push a message to the client as soon as it connects...
  socket.emit('news', { hello: 'world' });
  // ...and log whatever it sends back on this event.
  socket.on('my other event', function (data) {
    console.log(data);
  });
});
If you want to use Socket.io within a more complex application, serving pages and an API, you can (must?) integrate it with Express (see How To Use).
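A rough sketch of that integration (following the Socket.io 1.x attach style; the port and route are arbitrary):

var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io')(server);

// Express keeps serving pages and API routes...
app.get('/', function (req, res) {
  res.sendFile(__dirname + '/index.html');
});

// ...while Socket.io shares the same HTTP server for websocket traffic.
io.on('connection', function (socket) {
  socket.emit('news', { hello: 'world' });
});

server.listen(80);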
I have been using Express.js for some time now and find it particularly useful for the Jade templating engine it provides by default. Jade is a fairly new templating language, and though I admit it takes some time to get familiar with, it's pretty useful: you can write conditionals and mixins, pass variables to your pages, use partials, etc. It makes client-side HTML really easy. Express also sets up your views, JavaScript, and stylesheets directory structure for you; if followed properly, catching requests and rendering HTML pages takes only a couple of lines of code. As discussed above, the HTTP middleware is also a lot easier to work with.
I'm starting to develop a new Node.js (with Express) web / CRUD / REST multi-page application and I would like to begin in the best way.
The application will have a lot of modules, for example:
User management at several levels, with policies defining the operations users can perform based on their level.
Simple forms to insert, update, and retrieve database data.
Screens displaying real-time sensors (I have already thought of using libraries such as Socket.io).
The basic application (the Node.js server-side API) will also be called by Android / iOS applications to fetch / edit data from mobile.
Considering the project as a multi-page application with many asynchronous calls, I have a doubt about view management and I would like to figure out which of the following approaches is the most convenient:
Conjecture 1 (pure API style?): for each request, write an Express route that returns only a JSON response, and compose the view client-side (initially with simple jQuery DOM editing, later with React or Angular) after downloading it (via socket or ajax call).
Server side:
Call 1: /getFooView
// Returns the raw HTML fragment for the view.
router.get('/getFooView', function(req, res, next) {
  //code
  res.send(htmlview);
});
Call 2: /getFooData
// Returns only the data as JSON; the client composes the view.
router.get('/getFooData', function(req, res, next) {
  //code
  res.json({"foo": "bar"});
});
Client side:
JavaScript / jQuery on the client will compose the getFooView response with the data, and the result is then shown to the user, as sketched below.
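A minimal jQuery sketch of that composition (the container id and the renderFoo helper are hypothetical):

// Fetch the view fragment, then the data, then compose client-side.
$.get('/getFooView', function (htmlview) {
  $('#foo-container').html(htmlview);
  $.getJSON('/getFooData', function (data) {
    renderFoo($('#foo-container'), data); // hypothetical DOM-filling helper
  });
});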
Conjecture 2: for each request, write an Express route that returns the view (usually a small HTML block, a list, or similar) already composed server-side (let's suppose by Handlebars). In this case, each controller route should interpret the request according to the requester.
Server side:
router.get('/getFoo', function(req, res, next) {
  // pseudo code: branch on who is asking
  if (request == "ANDROID" || request == "iOS")
    res.json({data});
  else
    res.render('index', {
      // Handlebars-parsed view
      title: blockTitle,
      data: blockData,
      sensors: sensorsData
    });
});
Client Side:
jQuery code appends the received block; for example:
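Sketched with jQuery (the container id is hypothetical):

// The server already rendered the block; just append it.
$.get('/getFoo', function (html) {
  $('#foo-container').append(html);
});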
Which approach is better suited to my purpose? What are the pros and cons?
Sorry for my bad English.
If we're talking about a single-page application, then you shouldn't send views from the server. You can use nginx to serve the client files (CSS, HTML, JS, etc.); nginx can also cache and proxy client requests to your REST API.
So my answer is: pure API + nginx for static files.
We want to build a Javascript/HTML gui for our gRPC-microservices. Since gRPC is not supported on the browser side, we thought of using web-sockets to connect to a node.js server, which calls the target service via grpc.
We struggle to find an elegant solution to do this. Especially, since we use gRPC streams to push events between our micro-services.
It seems that we need a second RPC system just to communicate between the front end and the Node.js server. This seems like a lot of overhead and additional code that must be maintained.
Does anyone have experience doing something like this or has an idea how this could be solved?
Edit: since Oct 23, 2018 the gRPC-Web project is GA, which might be the most official/standardized way to solve your problem. (Even if it's already 2018 now... ;))
From the GA-Blog: "gRPC-Web, just like gRPC, lets you define the service “contract” between client (web) and backend gRPC services using Protocol Buffers. The client can then be auto generated. [...]"
We recently built gRPC-Web (https://github.com/improbable-eng/grpc-web) - a browser client and server wrapper that follows the proposed gRPC-Web protocol. The example in that repo should provide a good starting point.
It requires either a standalone proxy or a wrapper for your gRPC server if you're using Golang. The proxy/wrapper modifies the response to package the trailers in the response body so that they can be read by the browser.
Disclosure: I'm a maintainer of the project.
Unfortunately, there isn't any good answer for you yet.
Supporting streaming RPCs from the browser fully requires HTTP2 trailers to be supported by the browsers, and at the time of the writing of this answer, they aren't.
See this issue for the discussion on the topic.
Otherwise, yes, you'd require a full translation system between WebSockets and gRPC. Maybe getting inspiration from grpc-gateway could be the start of such a project, but that's still a very long shot.
An official grpc-web (beta) implementation was released on 3/23/2018. You can find it at
https://github.com/grpc/grpc-web
The following instructions are taken from the README:
Define your gRPC service:
service EchoService {
rpc Echo(EchoRequest) returns (EchoResponse);
rpc ServerStreamingEcho(ServerStreamingEchoRequest)
returns (stream ServerStreamingEchoResponse);
}
Build the server in whatever language you want.
Create your JS client to make calls from the browser:
var echoService = new proto.grpc.gateway.testing.EchoServiceClient(
'http://localhost:8080');
Make a unary RPC call:
var unaryRequest = new proto.grpc.gateway.testing.EchoRequest();
unaryRequest.setMessage(msg);
echoService.echo(unaryRequest, {},
function(err, response) {
console.log(response.getMessage());
});
Streams from the server to the browser are supported:
// The request object is assumed here by analogy with the unary example above.
var streamRequest = new proto.grpc.gateway.testing.ServerStreamingEchoRequest();
streamRequest.setMessage(msg);

var stream = echoService.serverStreamingEcho(streamRequest, {});
stream.on('data', function(response) {
  console.log(response.getMessage());
});
Bidirectional streams are NOT supported:
This is a work in progress and on the grpc-web roadmap. While there is an example protobuf showing bidi streaming, this comment makes it clear that the example doesn't actually work yet.
Hopefully this will change soon. :)
https://github.com/tmc/grpc-websocket-proxy sounds like it may meet your needs. It translates JSON over WebSockets to gRPC (a layer on top of grpc-gateway).
The grpc people at https://github.com/grpc/ are currently building a js implementation.
The repo is at https://github.com/grpc/grpc-web (gives 404 ->), which is currently (2016-12-20) in early access, so you need to request access.
GRPC Bus WebSocket Proxy does exactly this by proxying all GRPC calls over a WebSocket connection to give you something that looks very similar to the Node GRPC API in the browser. Unlike GRPC-Gateway, it works with both streaming requests and streaming responses, as well as non-streaming calls.
There is both a server and client component.
The GRPC Bus WebSocket Proxy server can be run with Docker by doing docker run gabrielgrant/grpc-bus-websocket-proxy
On the browser side, you'll need to install the GRPC Bus WebSocket Proxy client with npm install grpc-bus-websocket-client
and then create a new GBC object with: new GBC(<grpc-bus-websocket-proxy address>, <protofile-url>, <service map>)
For example:
var GBC = require("grpc-bus-websocket-client");

// Connect to the proxy, pointing it at the .proto file and at each service's address.
new GBC("ws://localhost:8080/", 'helloworld.proto', {helloworld: {Greeter: 'localhost:50051'}})
  .connect()
  .then(function(gbc) {
    gbc.services.helloworld.Greeter.sayHello({name: 'Gabriel'}, function(err, res){
      console.log(res);
    });  // --> Hello Gabriel
  });
The client library expects to be able to download the .proto file with an AJAX request. The service-map provides the URLs of the different services defined in your proto file as seen by the proxy server.
For more details, see the GRPC Bus WebSocket Proxy client README
I see a lot of answers didn't point to a bidirectional solution over WebSocket, as the OP asked for browser support.
You may use JSON-RPC instead of gRPC, to get a bidirectional RPC over WebSocket, which supports a lot more, including WebRTC (browser to browser).
I guess it could be modified to support gRPC if you really need this type of serialization.
However, for browser tab to browser tab, request objects are not serialized and are transferred natively, and the same goes for Node.js cluster or thread workers, which offers a lot more performance.
Also, you can transfer "pointers" to SharedArrayBuffer, instead of serializing through the gRPC format.
JSON serialization and deserialization in V8 is also unbeatable.
https://github.com/bigstepinc/jsonrpc-bidirectional
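To make the idea concrete, here is a minimal, library-agnostic sketch of JSON-RPC 2.0 over a WebSocket in Node, using the ws package (the echo method is made up; the linked project adds the reverse direction and much more):

var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (ws) {
  ws.on('message', function (raw) {
    var req = JSON.parse(raw); // { jsonrpc: '2.0', id, method, params }
    if (req.method === 'echo') {
      ws.send(JSON.stringify({ jsonrpc: '2.0', id: req.id, result: req.params }));
    } else {
      ws.send(JSON.stringify({
        jsonrpc: '2.0', id: req.id,
        error: { code: -32601, message: 'Method not found' }
      }));
    }
  });
});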
Looking at the current solutions with gRPC over web, here is what's available out there at the time of writing this (and what I found):
gRPC-web: requires TypeScript for client
gRPC-web-proxy: requires Go
gRPC-gateway: requires .proto modification and decorations
gRPC-bus-websocket-proxy-server: as of writing this document it lacks tests and seems abandoned (edit: look at the comments by the original author!)
gRPC-dynamic-gateway: a bit of an overkill for simple gRPC services and authentication is awkward
gRPC-bus: requires something for the transport
I also want to shamelessly plug my own solution which I wrote for my company and it's being used in production to proxy requests to a gRPC service that only includes unary and server streaming calls:
gRPC-express
Every inch of the code is covered by tests. It's an Express middleware, so it needs no additional modifications to your gRPC setup. You can also delegate HTTP authentication to Express (e.g. with Passport).
I very much like Meteor's pub/sub. I wonder if there is a way to get a similar workflow, using sails.js or just a socket library in general.
In particular, what I would like to be able to do is something along the lines of:
// Server-side:
App.publish('myCollection', -> collection.find({}))
// Client-side:
let myCollection = App.subscribe('myCollection')
let bob = myCollection.find({name: 'Bob'})
myCollection.insert({name: 'Amelie'}, callback)
All interaction with the server should happen in the background.
I very much like Meteor's pub/sub. I wonder if there is a way to get a similar workflow, using sails.js or just a socket library in general
Basically yes, at least regarding realtime sync between backend and frontend. Let's review what Meteor has and answer point by point.
Pub/sub
The pub/sub concept, as stated by Sabbir, is also supported by Sails.js, though the basics are slightly different:
in Meteor, the client can subscribe to anything it wants, and the server controls what the client receives by publishing only what it chooses;
whereas in Sails.js, the server both subscribes certain client sockets and publishes to all bound sockets.
Note that, by default:
Meteor contains the autopublish package, which simply notifies every client without any kind of filtering. To achieve some filtering, you have to meteor remove autopublish; then you can control what your clients receive by adding a Mongo query to the publish function, as explained here (see the sketch after this list).
Sails, by default, in its automatic "select" blueprint actions, auto-subscribes the calling socket to events on the objects returned by the "select".
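For instance, the Meteor side of that filtering is just a publish function returning a filtered cursor (the collection and field names are made up):

// Meteor server: publish only the documents this client may see.
Meteor.publish('myCollection', function () {
  return MyCollection.find({ owner: this.userId });
});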
As a server-side conclusion:
Subscribe: just call the find or findOne blueprint default action, through a socket (attaching some where filters or not), and your socket will automatically be subscribed to every event concerning the returned objects => in most cases you don't have to code anything on the server for the subscribe logic.
Publish: every blueprint default action (create, update, destroy, add, remove) auto-publishes to subscribed sockets => in most cases you don't have to code anything on the server for the publish logic.
(Though, if you find yourself implementing some manual controller actions, the Sails API helps you publish and subscribe easily.)
Client handling
Therefore, with both Meteor and Sails, clients only receive what they're supposed to receive. Time for the front end now.
Philosophy
Meteor, on one hand, with its isomorphic dimension, provides a front-end connector by nature, exposing its data-bound collections.
Sails, on the other hand, is front-end agnostic and can be driven by any HTTP REST connector (JS or not), such as $http, $resource, or more advanced ones like Restangular.
Though, being aware of the complexity of using raw sockets with their API (when it comes to sessions, CORS, CSRF and such), they developed a JavaScript socket.io wrapper called sails.io.js, designed to be REST-like-over-socket, and it just works like a charm.
Basically, the main difference is that Meteor is one step higher-level than Sails, because it provides the logic for syncing collections and objects.
All interaction with the server should happen in the background.
sails.io.js, the official front-end component, is just not that high-level, particularly when it comes to Angular.js.
Though, you can find some community connectors that aim to, kind of, provide the same features as Mongo data-bound collections and objects. There are sails-resource, spinnaker and angular resource sails. I tried some of them, and I have to say I was disappointed: the abstraction level is so high that it just becomes annoying, IMHO. For example, with not-very-RESTful-friendly custom actions, like a login, it becomes very hard to adapt them to your needs.
==> I would advise using a low-level connector, such as angularSails or (my preferred) https://github.com/janpantel/angular-sails, or even raw sails.io.js if you're not using Angular.
Edit: just found a Backbone version, by the Sails creator.
It just works great, and believe me, the "keep my collection in sync with that socket" code is so ridiculously small that finding a module for it is just not worth it.
Some code please, stop talking
In particular, what I would like to be able to do is something along the lines of:
Server
Meteor
# Server-side:
App.publish('myCollection', -> collection.find({}))
Sails
//Nothing to do, just sails generate api myCollection
Client
Meteor
# Client-side:
myCollection = App.subscribe('myCollection')
Sails, with sails.io.js
(Here using lodash for convenience)
var myCollection;

// Initial fetch: this also auto-subscribes the socket to 'myCollection' events.
sails.io.get('/myCollection').then(
  function(res) {
    myCollection = res.data;
  },
  function(err) {
    // Handle error
  }
);

// Keep the local array in sync with server-side events.
sails.io.on('myCollection', function(msg) {
  switch (msg.verb) {
    case 'created':
      myCollection.push(msg.data);
      break;
    case 'updated':
      _.extend(_.find(myCollection, { id: msg.id }), msg.data);
      break;
    case 'destroyed':
      _.remove(myCollection, { id: msg.id });
      break;
  }
});
(I leave the find with where filters and the create to your imagination, with [the doc].)
All interaction with the server should happen in the background.
Well, for Sails, only with Angular, via sails resources.
I'm not very used to that process, so I'll leave you reading here or here, but once again I'd choose the manual .on() method.
Since I asked this question, I've learned a few things and some new projects have popped up. I decided against sails.io, because when developing with React.js, most of the community's weight is behind webpack, but sails.io uses gulp. I realize these can be used together and there is even an npm package for this, but I wasn't too keen on making my stack bigger than it had to be, so I went with a simple express.js server that I could tailor to my needs.
In order to sync my data, I'm using RethinkDB, which allows me to asynchronously watch the database for changes and then publish those changes to the clients through websockets.
I've set up a simple script where I keep an instance of a Baobab tree on both the client and the server.
When the tree gets modified on the server, it sends transaction data to the appropriate clients through the websocket.
The client merges the transaction into its own tree.
This method does not use local storage and keeps the data in memory in the Node.js process. The data in the transaction is also quite redundant.
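A stripped-down sketch of that server side, assuming the official rethinkdb driver and the ws package (the table name and ports are made up):

var r = require('rethinkdb');
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8080 });

r.connect({ host: 'localhost', port: 28015 }, function (err, conn) {
  if (err) throw err;
  // Watch the table and push every change to all connected clients.
  r.table('items').changes().run(conn, function (err, cursor) {
    if (err) throw err;
    cursor.each(function (err, change) {
      if (err) return;
      wss.clients.forEach(function (client) {
        if (client.readyState === WebSocket.OPEN) {
          client.send(JSON.stringify(change)); // { old_val, new_val }
        }
      });
    });
  });
});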
The future plan has always been to set something up using redis and local storage ...
... until yesterday when I found deepstream.io!
This is a tool that does exactly what I want and need! Nothing more, nothing less.
Another project worth mentioning is meatier: "like meteor, but meatier". It is composed of many other well-supported open source projects, so you could even pick and choose.
I am new to the MEAN stack, and I am trying to create a basic one-page application at the moment.
I am trying to connect to MongoDB and then list the values of a certain collection in a controller.
However, when I looked for the answer, I came across this answer
Using AngularJs and MongoDB/Mongoose
This then confuses me: what is the point of having the code below if you can't use it between Angular and Mongo? Or are there other interim steps that use it?
// Server-side (Node) code: connect to MongoDB through Mongoose.
var mongoose = require('mongoose');
// Note: MongoDB's default port is 27017; 3000 here looks like a mix-up with the app's port.
var db = mongoose.createConnection('mongodb://localhost:3000/database');

// Schema describing an Order document.
var orderSchema = new mongoose.Schema({
  routeFrom : String,
  routeTo : String,
  leaving: String
});

var Order = db.model('Order', orderSchema);
module.exports = Order;
Edit: The situation i am trying to use it in is such:
Geek.html
<div class="jumbotron text-center">
<h1>Geek City</h1>
<p>{{tagline}}</p>
<ul>
<li ng-repeat="value in dataValues">
{{value.name}}
</li>
</ul>
</div>
GeekController
angular.module('GeekCtrl', []).controller('GeekController', function($scope) {
  $scope.tagline = 'The square root of life is pi!';
  $scope.dataValues = function(){
    // Problem: require() and the server's db config don't exist in the browser.
    var mongo = require('../config/db.js');
    var collectionValues = mongo.myCollection.find();
    return collectionValues;
  };
});
You cannot require the db.js config file in Angular because it's not set up to be used on the client side. What you describe is the so-called "isomorphic" approach.
What I mean by that: Mongo is a database system (roughly speaking). To get data from the database, we usually don't trust the client, so we have some server-side code (PHP, Ruby, Node.js, Java, what have you) which authorizes the client, processes and filters the data, and returns it to the client (in this case Angular.js). Your Mongoose models are set up to be used by that server-side JavaScript part of the app. That server side should also serve data to Angular, so you'd connect from Angular to Node.js, not directly to Mongo. In other words, the same server that (usually) serves your Angular files will also serve the data it reads from Mongo, as sketched below.
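A hedged sketch of that middle layer, reusing the Order model from the question (the /api/orders path is made up):

var express = require('express');
var Order = require('./models/order'); // the Mongoose model from the question
var app = express();

// Angular calls this endpoint; only the server ever talks to Mongo.
app.get('/api/orders', function (req, res) {
  Order.find({}, function (err, orders) {
    if (err) return res.status(500).send(err);
    res.json(orders);
  });
});

app.listen(3000);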
If you want server-less data with Angular, you can take a look at Firebase. It's Angular-ready, and it could save you from messing around with Mongo, Mongoose, and the server-side code.
You could try a hybrid approach with something like Meteor.js or Backbone.js set up to work on both client and server, or take a look at this article for more info.
Or, for what it's worth, if you want to run your own Mongo, you could start mongod with --rest; then you'd be able to connect Angular directly to Mongo, at http://somehost:28017/mydatabase or something similar, depending on your setup.
Mongoose is a Node module, and as far as I know it doesn't have a front-end component, so you won't be using it directly in your front-end JS code. It's only going to help you on the server side of your app. If you're relatively new to Node, this stuff can get pretty confusing, since it's all end-to-end JavaScript and sometimes it's not clear which modules work on the server or the front end, especially since some can do both.
Node, MongoDB, Express, and Mongoose all live on the server.
Angular lives in the browser, and can't use any of the server-side components directly.
Using the MEAN stack, you will be building a Node app that uses Mongoose to talk to MongoDB and Express to expose an API to your front end. Then, in your HTML/JS code, you'll use Angular and its $http service to talk to that server to get and set data.
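For example, the Angular side of that conversation might look like this (a sketch: the /api/geeks endpoint is made up, the controller follows the question's code):

angular.module('GeekCtrl', []).controller('GeekController', function($scope, $http) {
  $scope.tagline = 'The square root of life is pi!';
  // Ask the Express API for the data; never touch Mongo directly.
  $http.get('/api/geeks').then(function(response) {
    $scope.dataValues = response.data;
  });
});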
There is a great tutorial that walks you through the entire process on scotch.io:
http://scotch.io/bar-talk/setting-up-a-mean-stack-single-page-application
I'm just about done reading "Node.js in Action", and I'm trying to put together the pieces of Node.js --> Connect --> Express. I have a question about the "servers" that we create in Node.
// Three different objects, each of which can handle HTTP requests:
node_server = http.createServer();
connect_app = Connect();
express_app = Express();
In the code above, is it true that connect_app is basically a "subclass" of node_server? (I know, this is JavaScript, so we don't really have subclassing, but I don't know what else to call it; extension?) And likewise, is express_app basically a "subclass" of connect_app? It's my understanding that all of these objects are servers which could be bound to a port and respond to requests, but that in practice we typically bind only ONE of them to a port and use it to proxy requests to the other server objects.
Am I on the right track in learning this?
First of all, shake off the idea that there are three running servers, because there's only one.
Express is a framework that relies on Connect, which is another framework / set of middlewares. Connect, in turn, relies on Node's API (the http module). Basically, it's one abstraction on top of another.
An analogy: Express is the car, Connect is the engine, and Node is the engine's parts. You only have one running car (one server in your case), but multiple components powering it.
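You can see the "one server" in code: an Express app is just a request-handler function that you hand to Node's HTTP server (a minimal sketch):

var http = require('http');
var express = require('express');

var app = express(); // app is a function(req, res) plus helper methods

// One actual server; Express is merely its request handler.
http.createServer(app).listen(3000);

// app.listen(3000) is shorthand for exactly the same thing.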
@josh3736 has commented a better explanation of how it works.