Socket.io: how to keep server running, despite errors - javascript

A short preface: I am currently learning Socket.io, coming from asyncio in Python, where I write multi-client socket servers. With asyncio I am used to putting each client in a separate class, so when an error occurs it stays contained in that client's instance and doesn't affect the rest of the server.
Now, I understand classes are sort of "looked down" upon in Node, so I am unsure how to structure this Socket.io chat program. So far, I have the following code:
server.on("connection", (socket) => {
clientLog(socket.handshake.address, "Client connected.");
socket.on("auth", (auth) => {
if(valid(auth['token']))
{
var clientBase = {
"index" : clientsIndex++,
"addr" : socket.handshake.address,
"token" : auth['token'],
}
clients[socket] = clientBase;
clientLog(clients[socket]['addr'], "Client verified.");
}
else
{
log(socket.handshake.address, "Client failed verification!");
socket.disconnect();
}
});
socket.on("message", (message) => {
handleMessage(socket, message);
});
socket.on("disconnect", () => {
if(socket in clients)
{
delete clients[socket];
}
console.log("Client Disconnected!");
});
});
There is more at the top of the file, but it is not very significant. Basically, this is the start of my simple chat messenger program. Now, there is an issue: when errors come up, the entire program dies. This is different from Python/Asyncio, because in Python, when errors show up, unless they are fatal, the program continues to run, and only the client is disconnected. In Socket.io, the entire program seems to die as soon as the smallest error comes up.
Errors are inevitable, and this chat messenger is planned to be used by a significant number of people. Therefore, I was wondering if I can restructure this program in a way where any unhandled errors are just logged and the program continues to run. I understand I can do this with process.on('uncaughtException'), but I also understand that uncaughtException is considered bad practice and should only be used as a last resort.
With that, I was wondering if my program has some fundamental problem in its structure. If so, how should I restructure this program to better handle multiple clients? This is just the base of it, and I can restructure if necessary.
It is just a simple Socket.io program, which passes around messages (and will eventually use MySQL to authenticate users).
What do I do?

I don't think your issue is directly related to Node.js or Socket.io. I'd say you should start using exception handling with try..catch blocks (optionally with a finally block if necessary). This is a generic technique present in every modern programming language.
See: https://nodejs.org/en/knowledge/errors/what-is-try-catch/
An alternative to try/catch suggested by the same page above:
But wait, isn't it Node.js convention to not use try-catch?
In the core Node.js libraries, the only place that one really needs to use a try-catch is around JSON.parse(). All of the other methods use either the standard Error object through the first parameter of the callback or emit an error event. Because of this, it is generally considered standard to return errors through the callback rather than to use the throw statement.
According to this alternative technique you should start your callbacks with:
if (error) {
    // error handling code goes here
}
This pattern is often seen in Node.js code.
I found a good example here:
https://stackify.com/node-js-error-handling/ at the Handling async (callback) errors section.
var request = require('request'); //http wrapped module

function requestWrapper(url, callback) {
    request.get(url, function (err, response) {
        if (err) {
            callback(err);
        } else {
            callback(null, response);
        }
    });
}
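Applied to the Socket.io code in the question, a minimal sketch of that idea (clientLog and handleMessage are the question's own helpers) could look like this, so that an exception while handling one client's message only affects that client:
server.on("connection", (socket) => {
    socket.on("message", (message) => {
        try {
            handleMessage(socket, message);
        } catch (err) {
            // Log the error and drop only the offending client; the server keeps running.
            clientLog(socket.handshake.address, "Error handling message: " + err.message);
            socket.disconnect();
        }
    });
});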

Can you check out nodemon? It's a package used in dev environments to keep the server running in case of runtime errors. Use it along with pm2 to run the server in the background.

Related

How to effectively .then chain on client with .catch?

I have the following .then example chain in my React Native client code, currently without a .catch because I am looking for advice on how to set it up:
await getUserInfo(userId, JSON.stringify(ratingsQueryRatingType), 1)
    .then(async userRatingData => {
        await findMatchHistory(userId, '', 3)
            .then(async matchHistoryData => {
These functions make calls to my NodeJS server. The NodeJS server then sends back the data.
I am trying to find out how I can effectively send back an error from the server to the client, and have the .catch part in the client handle that (e.g. with Alert.alert(error)).
I tried to throw an error on my server as follows, but then on my server I get Unhandled promise rejection, and it appears the error is not sent back to the client.
// Other code before this part
if (response === 'Success') {
    return res.status(200).json({'status': 'success'})
} else {
    throw 'Match record was not confirmed successfully'
}
Or is it common practice to send response objects from the server (instead of Errors) and then handling those on the client with some kind of if-statement, such as the following?
if (results['status'] === 'success') {
    // Code
} else if (results['status'] === 'failure') {
    // Code
}
I do read about .then chaining with .catch being an attractive option, so it feels like this would not be the correct solution.
I think you should send error codes to the client instead of (or in addition to) a message. You can set the status code based on the error that occurred on the backend -> https://developer.mozilla.org/en-US/docs/Web/HTTP/Status
Also, on the client side you can use an interceptor and create an error-handler service layer; based on the code the server sends, you can handle the error there. You can follow these steps to set one up for your app: https://bilot.group/articles/using-react-router-inside-axios-interceptors/
And on the backend side, if there is an error, try logging it to log files on your server.
// Other code before this part
if (response === 'Success') {
    return res.status(200).json({'status': 'success'})
} else {
    // pick the status code based on what happened on the server
    return res.status(500).json({'status': 'failure'})
}
Instead of .then chaining, you should use async/await. For a better understanding of how to use it and how to convert a chain to async/await, this doc contains a step-by-step guide: https://advancedweb.hu/how-to-refactor-a-promise-chain-to-async-functions/
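As a rough sketch of that refactor on the client side (getUserInfo and findMatchHistory are the question's own functions; the exact error shape depends on your HTTP client), the chain from the question could become:
try {
    const userRatingData = await getUserInfo(userId, JSON.stringify(ratingsQueryRatingType), 1);
    const matchHistoryData = await findMatchHistory(userId, '', 3);
    // use userRatingData and matchHistoryData here
} catch (error) {
    // A failure status sent by the server (or a network error) lands here.
    Alert.alert(String(error));
}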

Meteor method returns undefined to the client (asynchronous)

I've been working on integrating Google Recaptcha into a Meteor and AngularJS web application. Everything was smooth sailing until I had to validate the recaptcha response -- for some bizarre reason, I can't get an async response from the backend to the frontend.
I've tried a lot of different variations and have read many, many posts on SO and the internet in general, but with no luck -- so I opted to post my own question.
Here's what I'm doing:
Client:
Meteor.call('recaptcha.methods.validateRecaptcha', { 'response': this.recaptcha.getResponse(this.id) }, function (error, result) {
    // error and result are both undefined
    console.log('Do something with the ' + error + ' or ' + result + '.');
});
So, I'm calling a Meteor method and passing in a callback that is run when the method is done. However, the error and result parameters are both undefined.
Server:
run: function (data) {
    if (this.isSimulation) {
        /*
         * Client-side simulations won't have access to any of the
         * Meteor.settings.private variables, so we should just stop here.
         */
        return;
    }
    return Meteor.wrapAsync(HTTP.post)(_someUrl, _someOptions);
}
That last line is a shortened version of the sync/async structure that I've found in several Meteor guides (I also tried this version), namely:
var syncFunc = Meteor.wrapAsync(HTTP.post);
var result = syncFunc(Meteor.settings.private.grecaptcha.verifyUrl, _options);
return result;
I've also tried a version using Futures:
var Future = Npm.require( 'fibers/future' );
var future = new Future();
var callback = future.resolver();
HTTP.post(Meteor.settings.private.grecaptcha.verifyUrl, _options, callback);
return future.wait();
Now, the intention here is that I use Meteor.call() to call this method from the client, the client-side stub runs (to prevent simulation errors since we use private Meteor.settings variables in the real non-SO server-side code) and returns immediately (which happens), and the server hits Google's Recaptcha API (which happens and the server receives a response) before returning the result to the client (which doesn't happen -- the callback occurs but with no error/success data).
My thought is that one of two things are happening:
I'm just doing something wrong and I'm not properly sending the data back to the client.
The synchronous client stub (which returns immediately) is telling the client that the server response isn't important, so it never waits for the proper asynchronous response.
Could any of the Meteor gurus weigh in here and let me know what's going on and how to get async requests to play nicely in a Meteor application?
Thanks!
From the documentation for HTTP.call, which is the generic version of HTTP.post, it says
Optional callback. If passed, the method runs asynchronously, instead of synchronously, and calls asyncCallback. On the client, this callback is required.
So, on server, you can run it asynchronously like this
run: function (data) {
    if (this.isSimulation) {
        /*
         * Client-side simulations won't have access to any of the
         * Meteor.settings.private variables, so we should just stop here.
         */
        return;
    }
    // No need to pass callback on server.
    // Since this part is not executed on client, you can do this
    // Or you can use Meteor.isClient to run it asynchronously when the call is from client.
    return HTTP.post(Meteor.settings.private.grecaptcha.verifyUrl, _options);
}
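If the HTTP call itself can fail, one option (a sketch, not part of the quoted answer) is to catch the error on the server and rethrow it as a Meteor.Error, which Meteor serializes back into the error argument of the client's callback:
run: function (data) {
    if (this.isSimulation) {
        return;
    }
    try {
        return HTTP.post(Meteor.settings.private.grecaptcha.verifyUrl, _options);
    } catch (err) {
        // Plain exceptions are sanitized away; Meteor.Error instances reach the client.
        throw new Meteor.Error('recaptcha-failed', err.message);
    }
}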

In Meteor, how do I know on the client-side when the server-side operation is done?

I know Meteor does client-side caching of the database for better effective performance. In the client-side Meteor method call, is there any way to know when the server-side database operation actually finishes (or if it actually failed)? Are there events I can hook into to get notification when the full remote procedure call finishes? Is there some way to use subscribe() to know when this particular call "really" finishes?
For example, from the simple-todos tutorial, is there a way to get notification when the server-side deleteTask implementation is completely done (i.e. the server-side database has been updated successfully)?
Template.task.events({
    "click .delete": function () {
        Meteor.call("deleteTask", this._id);
    },
});
I know Meteor intentionally hides the server processing delay, but I'm curious about the net operation performance of the Meteor methods I'm writing.
Just include a callback with your Meteor.call. The callback will be run after the server has completed processing the request.
Template.task.events({
    'click .delete': function () {
        Meteor.call('deleteTask', this._id, function (err, result) {
            if (err) {
                // an error was thrown
            } else {
                // everything worked!
            }
        });
    }
});

error handling in asynchronous node.js calls

I'm new to node.js although I'm pretty familiar with JavaScript in general. My question is regarding "best practices" on how to handle errors in node.js.
Normally when programming web servers, FastCGI servers or web pages in various languages I'm using Exceptions with blocking handlers in a multi-threading environment. When a request comes in I usually do something like this:
function handleRequest(request, response) {
    try {
        if (request.url == "whatever")
            handleWhateverRequest(request, response);
        else
            throw new Error("404 not found");
    } catch (e) {
        response.writeHead(500, {'Content-Type': 'text/plain'});
        response.end("Server error: " + e.message);
    }
}

function handleWhateverRequest(request, response) {
    if (something)
        throw new Error("something bad happened");
    response.end("OK");
}
This way I can always handle internal errors and send a valid response to the user.
I understand that with node.js one is supposed to do non-blocking calls which obviously leads to various number of callbacks, like in this example:
var sys = require('sys'),
    fs = require('fs');

require("http").createServer(handleRequest).listen(8124);

function handleRequest(request, response) {
    fs.open("/proc/cpuinfo", "r",
        function (error, fd) {
            if (error)
                throw new Error("fs.open error: " + error.message);
            console.log("File open.");
            var buffer = new require('buffer').Buffer(10);
            fs.read(fd, buffer, 0, 10, null,
                function (error, bytesRead, buffer) {
                    buffer.dontTryThisAtHome(); // causes exception
                    response.end(buffer);
                }); //fs.read
        }); //fs.open
}
This example will kill the server completely because exceptions aren't being caught.
My problem here is that I can't use a single try/catch anymore and thus can't generally catch any error that may be raised during the handling of the request.
Of course I could add a try/catch in each callback, but I don't like that approach because then it's up to the programmer not to forget a try/catch. For a complex server with lots of different and complex handlers this isn't acceptable.
I could use a global exception handler (preventing the complete server crash) but then I can't send a response to the user since I don't know which request lead to the exception. This also means that the request remains unhandled/open and the browser is waiting forever for a response.
Does someone have a good, rock solid solution?
Node 0.8 introduces a new concept called "Domains". They are very roughly analogous to AppDomains in .NET and provide a way of encapsulating a group of IO operations. They basically allow you to wrap your request processing calls in a context-specific group. If this group throws any uncaught exceptions, they can be handled and dealt with in a manner which gives you access to all the scope and context-specific information you require in order to successfully recover from the error (if possible).
This feature is new and has only just been introduced, so use with caution, but from what I can tell it has been specifically introduced to deal with the problem which the OP is trying to tackle.
Documentation can be found at: http://nodejs.org/api/domain.html
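A minimal sketch of what that can look like for the question's HTTP server (handleRequest is the question's own function; error-handling details will vary):
var domain = require('domain');
var http = require('http');

http.createServer(function (req, res) {
    // Wrap each request in its own domain so an uncaught exception
    // can be answered and logged without killing the whole server.
    var d = domain.create();
    d.on('error', function (err) {
        // Unlike process.on('uncaughtException'), req and res are still in scope here.
        console.error('Error handling request:', err.stack);
        res.writeHead(500, {'Content-Type': 'text/plain'});
        res.end('Server error: ' + err.message);
    });
    d.run(function () {
        handleRequest(req, res); // the question's handler, including its async callbacks
    });
}).listen(8124);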
Checkout the uncaughtException handler in node.js. It captures the thrown errors that bubble up to the event loop.
http://nodejs.org/docs/v0.4.7/api/process.html#event_uncaughtException_
But not throwing errors is always a better solution. You could just do a return res.end('Unable to load file xxx');
This is one of the problems with Node right now. It's practically impossible to track down which request caused an error to be thrown inside a callback.
You're going to have to handle your errors within the callbacks themselves (where you still have a reference to the request and response objects), if possible. The uncaughtException handler will stop the node process from exiting, but the request that caused the exception in the first place will just hang there from the user point of view.
Very good question. I'm dealing with the same problem now. Probably the best way would be to use uncaughtException. The reference to the response and request objects is not the problem, because you can wrap them into your exception object that is passed to the uncaughtException event. Something like this:
var HttpException = function (request, response, message, code) {
    this.request = request;
    this.response = response;
    this.message = message;
    this.code = code || 500;
}
Throw it:
throw new HttpException(request, response, 'File not found', 404);
And handle the response:
process.on('uncaughtException', function (exception) {
    exception.response.writeHead(exception.code, {'Content-Type': 'text/html'});
    exception.response.end('Error ' + exception.code + ' - ' + exception.message);
});
I haven't tested this solution yet, but I don't see any reason why it couldn't work.
I give an answer to my own question... :)
It seems there is no way around manually catching errors. I now use a helper function that itself returns a function containing a try/catch block. Additionally, my own web server class checks that the request handling function either calls response.end() or the try/catch helper function waitfor() (raising an exception otherwise). This largely prevents requests from being mistakenly left unprotected by the developer. It isn't a 100% error-proof solution but works well enough for me.
handler.waitfor = function (callback) {
    var me = this;
    // avoid exception because response.end() won't be called immediately:
    this.waiting = true;
    return function () {
        me.waiting = false;
        try {
            callback.apply(this, arguments);
            if (!me.waiting && !me.finished)
                throw new Error("Response handler returned and did neither send a " +
                    "response nor did it call waitfor()");
        } catch (e) {
            me.handleException(e);
        }
    }
}
This way I just have to add an inline waitfor() call to be on the safe side.
function handleRequest(request, response, handler) {
    fs.read(fd, buffer, 0, 10, null, handler.waitfor(
        function (error, bytesRead, buffer) {
            buffer.unknownFunction(); // causes exception
            response.end(buffer);
        }
    )); //fs.read
}
The actual checking mechanism is a little more complex, but it should be clear how it works. If someone is interested I can post the full code here.
One idea: You could just use a helper method to create your call backs and make it your standard practice to use it. This does put the burden on the developer still, but at least you can have a "standard" way of handling your callbacks such that the chance of forgetting one is low:
var callWithHttpCatch = function (response, fn) {
    try {
        fn && fn();
    }
    catch (e) {
        response.writeHead(500, {'Content-Type': 'text/plain'}); //No
    }
}

<snipped>

    var buffer = new require('buffer').Buffer(10);
    fs.read(fd, buffer, 0, 10, null,
        function (error, bytesRead, buffer) {
            callWithHttpCatch(response, function () {
                buffer.dontTryThisAtHome(); // causes exception
            });
            response.end(buffer);
        }); //fs.read
}); //fs.open
I know that probably isn't the answer you were looking for, but one of the nice things about ECMAScript (or functional programming in general) is how easily you can roll your own tooling for things like this.
At the time of this writing, the approach I am seeing is to use "Promises".
http://howtonode.org/promises
https://www.promisejs.org/
These allow code and callbacks to be structured well for error management and also make the code more readable.
It primarily uses the .then() function.
someFunction().then(success_callback_func, failed_callback_func);
Here's a basic example:
var SomeModule = require('someModule');

var success = function (ret) {
    console.log('>>>>>>>> Success!');
}

var failed = function (err) {
    if (err instanceof SomeModule.errorName) {
        // Note: I've often seen the error definitions in SomeModule.errors.ErrorName
        console.log("FOUND SPECIFIC ERROR");
    }
    console.log('>>>>>>>> FAILED!');
}

someFunction().then(success, failed);

console.log("This line will appear instantly, since the last function was asynchronous.");
Two things have really helped me solve this problem in my code.
The 'longjohn' module, which lets you see the full stack trace (across multiple asynchronous callbacks).
A simple closure technique to keep exceptions within the standard callback(err, data) idiom (shown here in CoffeeScript).
ferry_errors = (callback, f) ->
  return (a...) ->
    try f(a...)
    catch err
      callback(err)
Now you can wrap unsafe code, and your callbacks all handle errors the same way: by checking the error argument.
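For reference, a rough plain-JavaScript equivalent of that CoffeeScript closure (the naming is mine) would be:
// Returns a wrapped version of f: any exception thrown inside f is passed
// to callback as its error argument instead of crashing the process.
function ferryErrors(callback, f) {
    return function () {
        try {
            f.apply(this, arguments);
        } catch (err) {
            callback(err);
        }
    };
}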
I've recently created a simple abstraction named WaitFor to call async functions in sync mode (based on Fibers): https://github.com/luciotato/waitfor
It's too new to be "rock solid".
Using wait.for you can call async functions as if they were sync, without blocking node's event loop. It's almost the same as what you're used to:
var wait = require('wait.for');

function handleRequest(request, response) {
    // launch fiber, keep node spinning
    wait.launchFiber(handleInFiber, request, response);
}

function handleInFiber(request, response) {
    try {
        if (request.url == "whatever")
            handleWhateverRequest(request, response);
        else
            throw new Error("404 not found");
    } catch (e) {
        response.writeHead(500, {'Content-Type': 'text/plain'});
        response.end("Server error: " + e.message);
    }
}

function handleWhateverRequest(request, response) {
    if (something)
        throw new Error("something bad happened");
    response.end("OK");
}
Since you're in a fiber, you can program sequentially, "blocking the fiber", but not node's event loop.
The other example:
var sys = require('sys'),
    fs = require('fs'),
    wait = require('wait.for');

require("http").createServer(function (req, res) {
    wait.launchFiber(handleRequest, req, res); // handle in a fiber
}).listen(8124);

function handleRequest(request, response) {
    try {
        var fd = wait.for(fs.open, "/proc/cpuinfo", "r");
        console.log("File open.");
        var buffer = new require('buffer').Buffer(10);
        var bytesRead = wait.for(fs.read, fd, buffer, 0, 10, null);
        buffer.dontTryThisAtHome(); // causes exception
        response.end(buffer);
    }
    catch (err) {
        response.end('ERROR: ' + err.message);
    }
}
As you can see, I used wait.for to call node's async functions in sync mode,
without (visible) callbacks, so I can have all the code inside one try-catch block.
wait.for will throw an exception if any of the async functions returns err!==null
more info at https://github.com/luciotato/waitfor
Even in synchronous multi-threaded programming (e.g. .NET, Java, PHP) you can't return any meaningful information to the client when an unknown custom exception is caught. You may just return HTTP 500 when you have no info regarding the exception.
Thus, the 'secret' lies in filling a descriptive Error object, so that your error handler can map from the meaningful error to the right HTTP status plus, optionally, a descriptive result. However, you must also catch the exception before it arrives at process.on('uncaughtException'):
Step 1: Define a meaningful error object
function appError(errorCode, description, isOperational) {
    Error.call(this);
    Error.captureStackTrace(this);
    this.errorCode = errorCode;
    // ...other properties assigned here
};

appError.prototype.__proto__ = Error.prototype;

module.exports.appError = appError;
Step 2: When throwing an exception, fill it with properties (see step 1) that allow the handler to convert it to a meaningful HTTP result:
throw new appError(errorManagement.commonErrors.resourceNotFound, "further explanation", true)
Step 3: When invoking potentially dangerous code, catch errors and re-throw them while filling additional contextual properties into the Error object.
Step 4: You must catch the exception during the request handling. This is easier if you use a leading promise library (Bluebird is great), which allows you to catch async errors. If you can't use promises, then any built-in Node library will return errors in its callback.
Step 5: Now that your error is caught and contains descriptive information about what happened, you only need to map it to a meaningful HTTP response. The nice part is that you can have a single, centralized error handler that receives all the errors and maps them to HTTP responses:
// this specific example is using the Express framework
res.status(getErrorHTTPCode(error))

function getErrorHTTPCode(error) {
    if (error.errorCode == commonErrors.InvalidInput)
        return 400;
    else if ...
}
You may find other related best practices here.
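As an illustration of that centralized handler (a sketch assuming Express; getErrorHTTPCode is the mapping function from step 5), it could live in a single error-handling middleware registered after all routes:
// Any error passed to next(err) inside a route ends up here.
app.use(function (err, req, res, next) {
    console.error(err); // or write to your log files
    res.status(getErrorHTTPCode(err)).json({
        errorCode: err.errorCode,
        description: err.message
    });
});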

What is the difference between dnode and nowjs?

How do the two compare to each other?
TL;DR
DNode
provides RMI;
remote functions can accept callbacks as arguments;
which is nice, since it is fully asynchronous;
runs stand-alone or through an existing http server;
can have browser and Node clients;
supports middleware, just like connect;
has been around longer than NowJS.
NowJS
goes beyond just RMI and implements a "shared scope" API. It's like
Dropbox, only with variables and functions instead of files;
remote functions also accept callbacks (thanks to Sridatta and Eric from NowJS
for the clarification);
depends on a listening http server to work;
can only have browser clients;
became public very recently;
is somewhat buggy right now.
Conclusion
NowJS is more of a toy right now -- but keep a watch as it matures. For
serious stuff, maybe go with DNode. For a more detailed review of these
libraries, read along.
DNode
DNode provides a Remote Method Invocation framework. Both the client and server
can expose functions to each other.
// On the server
var server = DNode(function () {
    this.echo = function (message) {
        console.log(message)
    }
}).listen(9999)

// On the client
dnode.connect(9999, function (server) {
    server.echo('Hello, world!')
})
The function that is passed to DNode() is a handler not unlike the one passed to
http.createServer. It has two parameters: client can be used to access the
functions exported by the client and connection can be used to handle
connection-related events:
// On the server
var server = DNode(function (client, connection) {
    this.echo = function (message) {
        console.log(message)
        connection.on('end', function () {
            console.log('The connection %s ended.', connection.id)
        })
    }
}).listen(9999)
The exported methods can be passed anything, including functions. They are properly
wrapped as proxies by DNode and can be called back at the other endpoint. This is
fundamental: DNode is fully asynchronous; it does not block while waiting
for a remote method to return:
// A contrived example, of course.
// On the server
var server = DNode(function (client) {
    this.echo = function (message) {
        console.log(message)
        return 'Hello you too.'
    }
}).listen(9999)

// On the client
dnode.connect(9999, function (server) {
    var ret = server.echo('Hello, world!')
    console.log(ret) // This won't work
})
Callbacks must be passed around in order to receive responses from the other
endpoint. Complicated conversations can become unreadable quite fast. This
question discusses possible solutions for this problem.
// On the server
var server = DNode(function (client, connection) {
    this.echo = function (message, callback) {
        console.log(message)
        callback('Hello you too.')
    }
    this.hello = function (callback) {
        callback('Hello, world!')
    }
}).listen(9999)

// On the client
dnode.connect(9999, function (server) {
    server.echo("I can't have enough nesting with DNode!", function (response) {
        console.log(response)
        server.hello(function (greeting) {
            console.log(greeting)
        })
    })
})
The DNode client can be a script running inside a Node instance or can be
embedded inside a webpage. In this case, it will only connect to the server that
served the webpage. Connect is of great assistance in this case. This scenario was tested with all modern browsers and with Internet Explorer 5.5 and 7.
DNode was started less than a year ago, in June 2010. It's as mature as a Node
library can be. In my tests, I found no obvious issues.
NowJS
NowJS provides a kind of magic API that borders on being cute. The server has an
everyone.now scope. Everything that is put inside everyone.now becomes
visible to every client through their now scope.
This code, on the server, will share an echo function with every client that
writes a message to the server console:
// Server-side:
everyone.now.echo = function (message) {
    console.log(message)
}

// So, on the client, one can write:
now.echo('This will be printed on the server console.')
When a server-side "shared" function runs, this will have a now attribute
that is specific to the client that made that call.
// Client-side
now.receiveResponse = function (response) {
    console.log('The server said: %s', response)
}

// We just touched "now" above and it must be synchronized
// with the server. Will things happen as we expect? Since
// the code is not multithreaded and NowJS talks through TCP,
// the synchronizing message will get to the server first.
// I still feel nervous about it, though.
now.echo('This will be printed on the server console.')

// Server-side:
everyone.now.echo = function (message) {
    console.log(message)
    this.now.receiveResponse('Thank you for using the "echo" service.')
}
Functions in NowJS can have return values. To get them, a callback must be
passed:
// On the client
now.twice(10, function (r) { console.log(r) })

// On the server
everyone.now.twice = function (n) {
    return 2 * n
}
This has an implication if you want to pass a callback as an honest argument (not
to collect a return value) -- one must always pass the return value collector, or
NowJS may get confused. According to the developers, this way of retrieving the
return value with an implicit callback will probably change in the future:
// On the client
now.crunchSomeNumbers('compute-primes',
    /* This will be called when our prime numbers are ready to be used. */
    function (data) { /* process the data */ },
    /* This will be called when the server function returns. Even if we
       didn't care about our place in the queue, we'd have to add at least
       an empty function. */
    function (queueLength) { alert('You are number ' + queueLength + ' on the queue.') }
)

// On the server
everyone.now.crunchSomeNumbers = function (task, dataCallback) {
    superComputer.enqueueTask(task, dataCallback)
    return superComputer.queueLength
}
And this is it for the NowJS API. Well, actually there are 3 more functions that
can be used to detect client connection and disconnection. I don't know why they
didn't expose these features using EventEmitter, though.
Unlike DNode, NowJS requires that the client be a script running inside a web browser.
The page containing the script must be served by the same Node that is running
the server.
On the server side, NowJS also needs an http server listening. It must be passed
when initializing NowJS:
var server = http.createServer(function (req, response) {
    fs.readFile(__dirname + '/now-client.html', function (err, data) {
        response.writeHead(200, {'Content-Type': 'text/html'})
        response.write(data)
        response.end()
    })
})
server.listen(8080)

var everyone = now.initialize(server)
NowJS's first commit is from a couple of weeks ago (March 2011). As such, expect it to
be buggy. I found issues myself while writing this answer. Also expect its
API to change a lot.
On the positive side, the developers are very accessible -- Eric even guided me
to making callbacks work. The source code is not documented, but is fortunately
simple and short and the user guide and examples are enough to get one started.
NowJS team member here. Correction to andref's answer:
NowJS fully supports "Remote Method Invocation". You can pass functions as arguments in remote calls and you can have functions as return values as well.
These functions are wrapped by NowJS just as they are in DNode so that they are executed on the machine on which the function was defined. This makes it easy to expose new functions to the remote end, just like in DNode.
P.S. Additionally, I don't know if andref meant to imply that remote calls are only asynchronous on DNode. Remote calls are also async on NowJS. They do not block your code.
I haven't tried DNode, so my answer is not a comparison. But I would like to put forth a few experiences using NowJS.
NowJS is based on socket.io, which is quite buggy. I frequently experience session time-outs, disconnects, and the now.ready event firing multiple times in a short duration. Check out this issue on the NowJS GitHub page.
Also, I found using websockets unviable on certain platforms; however, this can be circumvented by explicitly disabling websockets.
I had planned on creating a production app using NowJS, but it seems it's not mature enough to be relied upon. I will try DNode if it serves my purpose; otherwise I will switch to plain old Express.
Update:
NowJS seems to have been abandoned; there have been no commits in 8 months.
