In my application, a recursive function calls a remote peer for some data and then calls itself again. Is there any way to wait for the response from the remote peer and then continue the flow of execution?
I am using Simple-Peer for remote calls.
function foo() {
    data = getFromPeer();
    if (condition)
        foo();
    else
        return bar;
}
getFromPeer is a user-defined function which sends data to a remote peer using a SimplePeer connection. The remote peer responds back when it receives the request. There are no promises or callbacks defined as of now.
Looking at the documentation of https://github.com/feross/simple-peer, there does not seem to be a standard way of doing "Remote Procedure Calls" or receiving an acknowledgement (ACK) that previously sent data was actually received.
So, to achieve this with simple-peer, you need to add a custom "protocol" on top of the simple-peer channel. Since you cannot otherwise tell which request an incoming message is answering, you need to give each message an ID.
Something like this, in pseudo-code (assuming peer is a connected simple-peer instance):
pendingJobs = {}

function addJob(data, callback) {
    var id = uuid() // imagine a function giving a uuid
    pendingJobs[id] = callback
    peer.send({ id: id, payload: data }) // in practice you'd JSON.stringify this before sending
}

peer.on('data', function(info) {
    // in practice you'd JSON.parse the incoming data first
    var id = info.id
    var cb
    if (pendingJobs[id] && info.payload) {
        cb = pendingJobs[id]
        cb(info.payload)
    }
})
Of course this pseudo-code solution is very rough: you would probably need to add some garbage collection of pendingJobs, and error callbacks in case the remote peer never sends the ACK message. You could also add a type to each message ('rpc', 'ack', ...), as in the sketch below.
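To tie this back to the recursive flow in the question, the same pending-jobs idea can be wrapped in a Promise so the caller can simply await each response. This is only a minimal sketch under a few assumptions: peer is a connected simple-peer instance, both sides JSON-encode their messages, the remote peer answers with { id, type: 'ack', payload }, and uuid() and condition() are placeholders you provide.

var pending = {}

function sendRequest(data) {
    return new Promise(function (resolve) {
        var id = uuid()
        pending[id] = resolve
        peer.send(JSON.stringify({ id: id, type: 'rpc', payload: data }))
    })
}

peer.on('data', function (raw) {
    var msg = JSON.parse(raw)
    // resolve the promise that belongs to this response, then forget it
    if (msg.type === 'ack' && pending[msg.id]) {
        pending[msg.id](msg.payload)
        delete pending[msg.id]
    }
})

// The recursive function from the question can then wait for each answer:
async function foo() {
    var data = await sendRequest('give me data')
    if (condition(data)) return foo()
    return data
}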
Issue clarification
When we use .emit() or .send() and also want to confirm message reception (so-called acknowledgements), we simply write something like this:
socket.emit('someEvent', payload, callback);
This question is all about that callback part. It's a great feature, as it allows us to send some data back as a response without emitting any extra events. All the server needs to do is handle the request properly:
socket.on('someEvent', (payload, callback) => {
    doSomeStuff();
    callback(someData);
});
That works just fine when we deal with the success case. But what should we do in these cases:
1) The callback was not sent from the client side, or it is not a function, and we need to respond from the server side with something like 'Error: no callback is provided. Usage: ...'
Example:
Client side - socket.emit('someEvent'); or socket.emit('someEvent', 1);
Server side - socket.on('someEvent', callback => callback());
or
2) While handling the request something went wrong (e.g. validation failed) and we need to report it back, e.g. 'No payload is provided or it is invalid'
Example:
Server side -
socket.on('someEvent', (payload, callback) => {
    checkPayload();
    callback(someData);
});
Client side - socket.emit('someEvent', invalidPayload, callback);
Question: is there a mechanism to create a custom callback from the responder's side?
My workings and workarounds
1) As for the missing callback, or one that is not a function: I've concluded that I can only validate it and invoke it only if it is valid. So the server side undergoes some changes:
socket.on('someEvent', callback => callback instanceof Function && callback()); // check callback correctness
Pros: there won't be an internal error if the callback is not a function as expected.
Cons: in case of an invalid callback, the client won't be notified about it.
2) As for the case when we need to send some error back, I've only found a workaround: return a specific, agreed-in-advance falsy value like null, meaning that no data can be returned.
socket.on('someEvent', (payload, callback) => {
    checkPayload();
    callback(someData || null); // send a falsy, error-like value instead
});
Pros: the client will be notified about the error by receiving null.
Cons: on the server side there's no simple middleware function that validates the input data and returns an error before the main logic runs.
I've thought about middleware to get the needed functionality, but there are no 'event-level' middlewares yet, so to say, only namespace-level and socket-level ones. Should I try to filter events by their names on the socket level, attach the needed functionality there, and send errors in a way like next(new Error(...))? In that case I could work with the error event listener, I guess.
socket.io / socket.io-client versions used: 2.3.0
1) The callback was not sent from the client side, or it is not a function, and we need to respond from the server side with something like 'Error: no callback is provided. Usage: ...'
The client and server have to agree on how to do this. If the client doesn't provide a callback, then the corresponding argument on the server will be undefined, so you can detect that on the server.
So, the proper way to do it is this:
// client
socket.emit('someMsg', someData, function(response) {
    console.log(`Got ${response} from server`);
});
// server
io.on('connection', socket => {
    socket.on('someMsg', (data, fn) => {
        console.log(`Got data ${data} from client, sending response`);
        // if client wants a response, send the response
        if (fn) {
            fn("got your data");
        }
    });
});
So, if the client does not pass the callback, then fn on the server side will be undefined. So, you are correct to test for that before calling it.
2) As for the case when we need to send some error back, I've only found a workaround: return a specific, agreed-in-advance falsy value like null, meaning that no data can be returned.
Yes, you have to agree in advance how to send an error back. The cleanest way to send an error back would probably be to wrap your response in an object and use a .error property on that object.
// client
socket.emit('someMsg', someData, function(response) {
    if (response.error) {
        console.log(`Got error ${response.error} from server`);
    } else {
        console.log(`Got data ${response.data} from server`);
    }
});
// server
io.on('connection', socket => {
    socket.on('someMsg', (data, fn) => {
        console.log(`Got data ${data} from client, sending response`);
        // if client wants a response, send the response
        if (fn) {
            // no error here
            fn({error: null, data: "Got your message"});
        }
    });
});
What you're seeing here is that socket.io is not really a request/response protocol; it has shoehorned in a bit of a response mechanism, and you have to build your own structure around it.
Or, you can send an error object if there's an error:
// server
io.on('connection', socket => {
    socket.on('someMsg', (data, fn) => {
        console.log(`Got data ${data} from client, sending response`);
        // if client wants a response, send the response
        if (fn) {
            // send an error here; use a plain string or object, since an
            // Error instance would serialize to {} on its way to the client
            fn({error: "xxx Error"});
        }
    });
});
On the server side there's no simple middleware function that validates the input data and returns an error before the main logic runs.
I don't really understand what you're trying to use middleware for or what you want to validate. The only place this data is present is in your message handler, so any server-side validation you want to do on what the client sent needs to happen there. You can certainly do that validation before you send a response, as sketched below.
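For example, a rough sketch of validating inside the handler and reporting failures through the same ack callback; isValidPayload and doSomeStuff are placeholders for your own logic:

// server
io.on('connection', socket => {
    socket.on('someMsg', (data, fn) => {
        if (typeof fn !== "function") return; // client didn't ask for a response
        if (!isValidPayload(data)) {
            return fn({error: "No payload is provided or it is invalid", data: null});
        }
        fn({error: null, data: doSomeStuff(data)});
    });
});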
Should I try to filter events by their names on the socket level, attach the needed functionality there, and send errors in a way like next(new Error(...))? In that case I could work with the error event listener, I guess.
Socket.io doesn't work like Express and I don't really see why you'd try to make it work that way. There is no next() involved in receiving a socket.io message so I'm not sure what you're trying to do there. There is an option for middleware when the socket.io connection is first made, but not for subsequent messages sent over that connection.
Is there a way to send a response from the server even if no callback is provided from the client side?
If the client does not provide a callback, then the only way to send a response back to the client is to send another message. But the whole point of sending a response is that you have a cooperating client that is listening for and expecting a response, so the client may as well use the callback if it wants the response. If the client doesn't want the response and won't write any code to receive it, there's nothing you can do about that.
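If you still want to report the missing callback, the only option is a separate event that the client has to listen for explicitly; a sketch, where 'someMsgError' is an event name made up for this example:

// server (inside the connection handler)
socket.on('someMsg', (data, fn) => {
    if (typeof fn === "function") {
        fn({error: null, data: "got your data"});
    } else {
        // the client only sees this if it registered socket.on('someMsgError', ...)
        socket.emit('someMsgError', {error: "no callback was provided"});
    }
});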
websocketServer.on('connection', function(socket, req) {
    socket.on('message', onMessage);

    sub.subscribe('chat'); // sub: Redis subscription connection
    sub.on('message', onSubMessage);
});

function onMessage(message) {
    pub.publish('chat', message); // pub: Redis publishing connection
}

function onSubMessage(channel, message) {
    // how to access 'socket' from here?
    if (channel === 'chat') socket.send(message);
}
I'm trying to get away with as little state and as few bindings as possible, to keep the WS server efficient and to be able to just add more machines if I need to scale; state would only make that harder. I'm also still not clear on everything about Node memory management and garbage collection.
What would be the recommended solution in this case? Move onSubMessage into the connection callback to access socket? But then the function would be created on every connection?
What other choices do I have?
A little background about this:
The user opens a WebSocket connection with the server. If the user sends a message, it is published to a Redis channel (you might also know it as a Pub/Sub topic), which broadcasts it to every subscribed client (onSubMessage). Redis Pub/Sub acts as a centralized broadcaster: I don't have to worry about different servers or state; Redis sends the message to everybody who is interested. Carefree scaling.
You can use bind() to pre-define an extra argument to the callback function:
...
sub.on('message', onSubMessage.bind(sub, socket));
...

function onSubMessage(socket, channel, message) {
    if (channel === 'chat') socket.send(message);
}
This does create a new function instance for every new connection, so compared to a wrapping function there is probably no real upside in terms of memory usage.
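For comparison, the wrapping-function (closure) approach the question mentions would look roughly like this; the handler is defined inside the connection callback so it captures socket directly, and it likewise creates one function per connection:

websocketServer.on('connection', function(socket, req) {
    socket.on('message', onMessage);

    sub.subscribe('chat');
    // closure over `socket`: no bind() needed
    sub.on('message', function onSubMessage(channel, message) {
        if (channel === 'chat') socket.send(message);
    });
});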
I'm currently building a web app that has two clear use cases.
1. Traditional: the client requests data from the server.
2. The client requests a stream from the server, after which the server starts pushing data to the client.
Currently I'm implementing both 1 and 2 using JSON message passing over a WebSocket. However, this has proven hard, since I need to hand-code lots of error handling because the client is not waiting for the response; it just sends the message hoping it will get a reply at some point.
I'm using JS and React on the frontend and Clojure on the backend.
I have two questions regarding this.
Given the current design, what alternatives are there for error handling over a WebSocket?
Would it be smarter to split the two use cases, using REST for UC1 and WebSockets for UC2? Then I could use something like fetch on the frontend for the REST calls.
Update:
The current problem is that I don't know how to build an async send function over WebSockets that can match sent messages with their response messages.
Here's a scheme for doing request/response over socket.io. You could do this over a plain webSocket, but you'd have to build a little more of the infrastructure yourself. The same library code can be used on both client and server:
function initRequestResponseSocket(socket, requestHandler) {
    var cntr = 0;
    var openResponses = {};

    // send a request
    socket.sendRequestResponse = function(data, fn) {
        // put this data in a wrapper object that contains the request id
        // save the callback function for this id
        var id = cntr++;
        openResponses[id] = fn;
        socket.emit('requestMsg', {id: id, data: data});
    }

    // process a response message that comes back from a request
    socket.on('responseMsg', function(wrapper) {
        var id = wrapper.id, fn;
        if (typeof id === "number" && typeof openResponses[id] === "function") {
            fn = openResponses[id];
            delete openResponses[id];
            // error-first convention, matching the usage examples below
            fn(wrapper.err || null, wrapper.data);
        }
    });

    // process a requestMsg
    socket.on('requestMsg', function(wrapper) {
        if (requestHandler && typeof wrapper.id === "number") {
            // the respond callback is also error-first
            requestHandler(wrapper.data, function(err, responseToSend) {
                socket.emit('responseMsg', {id: wrapper.id, err: err, data: responseToSend});
            });
        }
    });
}
This works by wrapping every message sent in a wrapper object that contains a unique id value. Then, when the other end sends its response, it includes that same id value. That id value can then be matched up with the callback response handler for that specific message. It works both ways, client to server and server to client.
You use this by calling initRequestResponseSocket(socket, requestHandler) once on a socket.io socket connection on each end. If you wish to receive requests, then you pass a requestHandler function which gets called each time there is a request. If you are only sending requests and receiving responses, then you don't have to pass in a requestHandler on that end of the connection.
To send a message and wait for a response, you do this:
socket.sendRequestResponse(data, function(err, response) {
    if (!err) {
        // response is here
    }
});
If you're receiving requests and sending back responses, then you do this:
initRequestResponseSocket(socket, function(data, respondCallback) {
    // process the data here

    // send response
    respondCallback(null, yourResponseData);
});
As for error handling, you can monitor for a loss of connection and you could build a timeout into this code so that if a response doesn't arrive in a certain amount of time, then you'd get an error back.
Here's an expanded version of the above code that implements a timeout for a response that does not come within some time period:
function initRequestResponseSocket(socket, requestHandler, timeout) {
    var cntr = 0;
    var openResponses = {};

    // send a request
    socket.sendRequestResponse = function(data, fn) {
        // put this data in a wrapper object that contains the request id
        // save the callback function for this id
        var id = cntr++;
        openResponses[id] = {fn: fn};
        socket.emit('requestMsg', {id: id, data: data});
        if (timeout) {
            openResponses[id].timer = setTimeout(function() {
                delete openResponses[id];
                if (fn) {
                    fn("timeout");
                }
            }, timeout);
        }
    }

    // process a response message that comes back from a request
    socket.on('responseMsg', function(wrapper) {
        var id = wrapper.id, requestInfo;
        if (typeof id === "number" && typeof openResponses[id] === "object") {
            requestInfo = openResponses[id];
            delete openResponses[id];
            if (requestInfo) {
                if (requestInfo.timer) {
                    clearTimeout(requestInfo.timer);
                }
                if (requestInfo.fn) {
                    requestInfo.fn(wrapper.err || null, wrapper.data);
                }
            }
        }
    });

    // process a requestMsg
    socket.on('requestMsg', function(wrapper) {
        if (requestHandler && typeof wrapper.id === "number") {
            requestHandler(wrapper.data, function(err, responseToSend) {
                socket.emit('responseMsg', {id: wrapper.id, err: err, data: responseToSend});
            });
        }
    });
}
There are a couple of interesting things in your question and your design; I prefer to ignore the implementation details and look at the high-level architecture.
You state that you are looking at a client that requests data and a server that responds with some stream of data. Two things to note here:
HTTP 1.1 has options for sending streaming responses (chunked transfer encoding). If your use case is only the sending of streaming responses, this might be a better fit for you. This does not hold when you e.g. want to push messages to the client that are not a response to some request (sometimes referred to as server-sent events).
WebSockets, contrary to HTTP, do not natively implement any sort of request-response cycle. You can still get one by implementing your own mechanism on top, which is what e.g. the WAMP subprotocol does.
As you have found out, implementing your own mechanism comes with its pitfalls; that is where HTTP has the clear advantage. Given the requirements stated in your question, I would opt for the HTTP streaming method instead of implementing your own request/response mechanism.
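To illustrate the chunked-response option above, here is a minimal sketch of a streaming HTTP response. It's written for Node, since the rest of this thread uses JavaScript, but the same idea applies to a Clojure backend:

var http = require('http');

http.createServer(function (req, res) {
    // no Content-Length is set, so the response uses chunked transfer encoding
    res.writeHead(200, {'Content-Type': 'text/plain'});
    var timer = setInterval(function () {
        res.write('tick ' + Date.now() + '\n'); // each write reaches the client as a chunk
    }, 1000);
    req.on('close', function () { clearInterval(timer); }); // stop when the client disconnects
}).listen(8080);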
I've been working on integrating Google Recaptcha into a Meteor and AngularJS web application. Everything was smooth sailing until I had to validate the recaptcha response -- for some bizarre reason, I can't get an async response from the backend to the frontend.
I've tried a lot of different variations and have read many, many posts on SO and the internet in general, but with no luck -- so I opted to post my own question.
Here's what I'm doing:
Client:
Meteor.call('recaptcha.methods.validateRecaptcha',
    { 'response' : this.recaptcha.getResponse(this.id) },
    function(error, result) {
        // error and result are both undefined
        console.log('Do something with the ' + error + ' or ' + result + '.');
    });
So, I'm calling a Meteor method and passing in a callback that is run when the method is done. However, the error and result parameters are both undefined.
Server:
run: function(data) {
    if (this.isSimulation) {
        /*
         * Client-side simulations won't have access to any of the
         * Meteor.settings.private variables, so we should just stop here.
         */
        return;
    }

    return Meteor.wrapAsync(HTTP.post)(_someUrl, _someOptions);
}
That last line is a shortened version of the sync/async structure that I've found in several Meteor guides (I also tried this version), namely:
var syncFunc = Meteor.wrapAsync(HTTP.post);
var result = syncFunc(Meteor.settings.private.grecaptcha.verifyUrl, _options);
return result;
I've also tried a version using Futures:
var Future = Npm.require( 'fibers/future' );
var future = new Future();
var callback = future.resolver();
HTTP.post(Meteor.settings.private.grecaptcha.verifyUrl, _options, callback);
return future.wait();
Now, the intention here is this: I use Meteor.call() to invoke this method from the client; the client-side stub runs (to prevent simulation errors, since we use private Meteor.settings variables in the real, non-SO server-side code) and returns immediately (which happens); the server hits Google's reCAPTCHA API (which happens, and the server receives a response) and then returns the result to the client (which doesn't happen: the callback fires, but with no error/success data).
My thought is that one of two things is happening:
I'm just doing something wrong and I'm not properly sending the data back to the client.
The synchronous client stub (which returns immediately) is telling the client that the server response isn't important, so it never waits for the proper asynchronous response.
Could any of the Meteor gurus weigh in here and let me know what's going on and how to get async requests to play nicely in a Meteor application?
Thanks!
From the documentation for HTTP.call, which is the generic version of HTTP.post, it says
Optional callback. If passed, the method runs asynchronously, instead of synchronously, and calls asyncCallback. On the client, this callback is required.
So, on the server, you can run it synchronously, like this:
run: function(data) {
    if (this.isSimulation) {
        /*
         * Client-side simulations won't have access to any of the
         * Meteor.settings.private variables, so we should just stop here.
         */
        return;
    }

    // No need to pass a callback on the server.
    // Since this part is not executed on the client, you can do this.
    // (Or use Meteor.isClient to run it asynchronously when the call comes from the client.)
    return HTTP.post(Meteor.settings.private.grecaptcha.verifyUrl, _options);
}
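With that change, the client-side call from the question should receive the HTTP result in its callback. Roughly (the result object of HTTP.post carries statusCode, content and data fields; grecaptchaResponse stands in for this.recaptcha.getResponse(this.id)):

Meteor.call('recaptcha.methods.validateRecaptcha',
    { 'response' : grecaptchaResponse },   // placeholder for this.recaptcha.getResponse(this.id)
    function(error, result) {
        if (error) {
            console.log('Validation call failed: ' + error);
        } else {
            console.log('Google replied with status ' + result.statusCode + ':', result.data);
        }
    });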
How do the two compare to each other?
TL;DR
DNode
provides RMI;
remote functions can accept callbacks as arguments;
which is nice, since it is fully asynchronous;
runs stand-alone or through an existing http server;
can have browser and Node clients;
supports middleware, just like connect;
has been around longer than NowJS.
NowJS
goes beyond just RMI and implements a "shared scope" API. It's like
Dropbox, only with variables and functions instead of files;
remote functions also accept callbacks (thanks to Sridatta and Eric from NowJS
for the clarification);
depends on a listening http server to work;
can only have browser clients;
became public very recently;
is somewhat buggy right now.
Conclusion
NowJS is more of a toy right now -- but keep a watch as it matures. For
serious stuff, maybe go with DNode. For a more detailed review of these
libraries, read along.
DNode
DNode provides a Remote Method Invocation framework. Both the client and server
can expose functions to each other.
// On the server
var server = DNode(function () {
    this.echo = function (message) {
        console.log(message)
    }
}).listen(9999)

// On the client
dnode.connect(9999, function (server) {
    server.echo('Hello, world!')
})
The function that is passed to DNode() is a handler not unlike the one passed to http.createServer. It has two parameters: client, which can be used to access the functions exported by the client, and connection, which can be used to handle connection-related events:
// On the server
var server = DNode(function (client, connection) {
    this.echo = function (message) {
        console.log(message)
        connection.on('end', function () {
            console.log('The connection %s ended.', connection.id)
        })
    }
}).listen(9999)
The exported methods can be passed anything, including functions. They are properly
wrapped as proxies by DNode and can be called back at the other endpoint. This is
fundamental: DNode is fully asynchronous; it does not block while waiting
for a remote method to return:
// A contrived example, of course.
// On the server
var server = DNode(function (client) {
    this.echo = function (message) {
        console.log(message)
        return 'Hello you too.'
    }
}).listen(9999)

// On the client
dnode.connect(9999, function (server) {
    var ret = server.echo('Hello, world!')
    console.log(ret) // This won't work
})
Callbacks must be passed around in order to receive responses from the other
endpoint. Complicated conversations can become unreadable quite fast. This
question discusses possible solutions for this problem.
// On the server
var server = DNode(function (client, connection) {
    this.echo = function (message, callback) {
        console.log(message)
        callback('Hello you too.')
    }
    this.hello = function (callback) {
        callback('Hello, world!')
    }
}).listen(9999)

// On the client
dnode.connect(9999, function (server) {
    server.echo("I can't have enough nesting with DNode!", function (response) {
        console.log(response)
        server.hello(function (greeting) {
            console.log(greeting)
        })
    })
})
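One way to tame that nesting on current Node versions (not the 2011-era ones this answer was written against) is to wrap each remote call in a Promise; a rough sketch:

// On the client
function call(remoteFn) {
    var args = Array.prototype.slice.call(arguments, 1)
    return new Promise(function (resolve) {
        // append the resolver as the callback DNode expects
        remoteFn.apply(null, args.concat(resolve))
    })
}

dnode.connect(9999, async function (server) {
    console.log(await call(server.echo, 'Much flatter!'))
    console.log(await call(server.hello))
})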
The DNode client can be a script running inside a Node instance or it can be embedded in a webpage, in which case it will only connect to the server that served the page; Connect is of great assistance here. This scenario was tested with all modern browsers and with Internet Explorer 5.5 and 7.
DNode was started less than a year ago, in June 2010. It's about as mature as a Node library can be. In my tests, I found no obvious issues.
NowJS
NowJS provides a kind of magic API that borders on being cute. The server has an
everyone.now scope. Everything that is put inside everyone.now becomes
visible to every client through their now scope.
This code, on the server, will share an echo function with every client that
writes a message to the server console:
// Server-side:
everyone.now.echo = function (message) {
    console.log(message)
}

// So, on the client, one can write:
now.echo('This will be printed on the server console.')
When a server-side "shared" function runs, this will have a now attribute
that is specific to the client that made that call.
// Client-side
now.receiveResponse = function (response) {
    console.log('The server said: %s', response)
}

// We just touched "now" above and it must be synchronized
// with the server. Will things happen as we expect? Since
// the code is not multithreaded and NowJS talks through TCP,
// the synchronizing message will get to the server first.
// I still feel nervous about it, though.
now.echo('This will be printed on the server console.')

// Server-side:
everyone.now.echo = function (message) {
    console.log(message)
    this.now.receiveResponse('Thank you for using the "echo" service.')
}
Functions in NowJS can have return values. To get them, a callback must be
passed:
// On the client
now.twice(10, function (r) { console.log(r) })

// On the server
everyone.now.twice = function(n) {
    return 2 * n
}
This has an implication if you want to pass a callback as an honest argument (not to collect a return value): one must always pass the return-value collector as well, or NowJS may get confused. According to the developers, this way of retrieving the return value with an implicit callback will probably change in the future:
// On the client
now.crunchSomeNumbers('compute-primes',
    /* This will be called when our prime numbers are ready to be used. */
    function (data) { /* process the data */ },
    /* This will be called when the server function returns. Even if we
       didn't care about our place in the queue, we'd have to add at least
       an empty function. */
    function (queueLength) { alert('You are number ' + queueLength + ' on the queue.') }
)

// On the server
everyone.now.crunchSomeNumbers = function(task, dataCallback) {
    superComputer.enqueueTask(task, dataCallback)
    return superComputer.queueLength
}
And this is it for the NowJS API. Well, actually there are 3 more functions, which can be used to detect client connections and disconnections. I don't know why they didn't expose these through an EventEmitter, though.
Unlike DNode, NowJS requires that the client be a script running inside a web browser.
The page containing the script must be served by the same Node that is running
the server.
On the server side, NowJS also needs an http server listening. It must be passed
when initializing NowJS:
var http = require('http')
var fs = require('fs')
var now = require('now')

var server = http.createServer(function (req, response) {
    fs.readFile(__dirname + '/now-client.html', function (err, data) {
        response.writeHead(200, {'Content-Type': 'text/html'})
        response.write(data)
        response.end()
    })
})
server.listen(8080)

var everyone = now.initialize(server)
NowJS's first commit is from just a couple of weeks ago (March 2011). As such, expect it to be buggy; I found issues myself while writing this answer. Also expect its API to change a lot.
On the positive side, the developers are very accessible -- Eric even guided me to making callbacks work. The source code is not documented, but it is fortunately simple and short, and the user guide and examples are enough to get one started.
NowJS team member here. Correction to andref's answer:
NowJS fully supports "Remote Method Invocation". You can pass functions as arguments in remote calls and you can have functions as return values as well.
These functions are wrapped by NowJS just as they are in DNode so that they are executed on the machine on which the function was defined. This makes it easy to expose new functions to the remote end, just like in DNode.
P.S. Additionally, I don't know if andref meant to imply that remote calls are only asynchronous in DNode. Remote calls are also async in NowJS; they do not block your code.
I haven't tried DNode, so my answer is not a comparison. But I would like to share a few experiences of using NowJS.
NowJS is based on socket.io, which is quite buggy. I frequently experience session time-outs, disconnects, and the now.ready event firing multiple times in a short period. Check out this issue on the NowJS GitHub page.
I also found using WebSockets unviable on certain platforms, though this can be circumvented by explicitly disabling WebSockets.
I had planned to create a production app using NowJS, but it seems it's not mature enough to be relied upon. I will try DNode if it serves my purpose; otherwise I will switch to plain old Express.
Update:
NowJS seems to have been abandoned: no commits for 8 months.