I'm working on an online, turn-based game to teach myself Node.js and Socket.IO. Some aspects of the game are resolved server-side. At one point during one of these functions, the server may require input from the clients. Is there a way I can "pause" the resolution of the server's function in order to wait for the clients to respond (via something like var x = window.prompt())?
Here's an idea of the code I'm working with:
Server:
for (/* some loop */) {
    if (/* some condition */) {
        // request input from the client
        io.sockets.socket(userSocket[i]).emit('requestInput', data);
    }
}
Client:
socket.on('requestInput', function (data) {
    var input = window.prompt('What is your input regarding ' + data + '?');
    // send the input back to the server
    socket.emit('refresh', input);
});
Any thoughts?
I don't think that is possible.
for (/* some loop */) {
    if (/* some condition */) {
        // request input via io.sockets.socket(userSocket[i]).emit('requestInput', data)
        /* Even if you were able to pause the execution here, there is no way
           to resume it when the client emits the 'refresh' event with user input */
    }
}
What you can do instead is emit all the 'requestInput' events without pausing, save every response you receive in a socket.on('refresh', function () {}) handler into an array, and then process that array later. I don't know what your exact requirement is, but let me know if that works.
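A minimal sketch of that approach, reusing the userSocket array from your question; expectedResponses and processResponses are placeholder names for whatever your game logic needs:

var responses = [];

io.sockets.on('connection', function (socket) {
    socket.on('refresh', function (input) {
        responses.push(input);
        // resume the game logic once every client has answered
        if (responses.length === expectedResponses) {
            processResponses(responses);
        }
    });
});

// the loop fires off all requests without waiting for any reply
for (var i = 0; i < userSocket.length; i++) {
    io.sockets.socket(userSocket[i]).emit('requestInput', data);
}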
Since you are emitting socket.emit('refresh', input) on the client side, you just need to set up a socket event listener on the server side as well. For example:
io.sockets.on('connection', function (socket) {
    socket.on('refresh', function (data) {
        console.log(data); // the input
    });
});
I will also point out, so that you don't run into trouble down the line, that indefinite loops are a big no-no in Node. Node.js runs on a single thread, so you are actually blocking ALL clients for as long as your loop is running.
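If you do end up needing a long-running loop, a common workaround is to process the work in chunks and yield back to the event loop in between. A sketch (resolveCombat and the chunk size of 100 are arbitrary placeholder choices):

function resolveCombat(items, index) {
    index = index || 0;
    var end = Math.min(index + 100, items.length);
    for (var i = index; i < end; i++) {
        // ... resolve one step of game logic ...
    }
    if (end < items.length) {
        // yield so other clients' events can be handled first
        setImmediate(function () {
            resolveCombat(items, end);
        });
    }
}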
Related
I want to implement Server-Sent Events with Grails. What I want is that my DataTable should refresh only when there is a change in the database, and I want to use HTML5 Server-Sent Events for this. My first question: while using SSE I observed that the client keeps making requests to the server and pulls data whenever it is available. That is similar to an Ajax call being sent every 3-4 seconds (the interval can be changed), but what I really want is a refresh of the DataTable only and exactly when the data in the database changes. I also want to send JSON data to the client, but I am unable to send it in the right format. Below is my controller code.
def test() {
    def dataArr = []
    def action
    something.each() {
        action = "<a href=\"javascript:fetchDetails('" + it.id + "','comments'" + ")\" class='btn btn-primary'><span class='glyphicon glyphicon-upload'></span></a>"
        dataArr << [
            it.Number,
            it.product,
            it.description,
            OrgChart.findById(it.Owner)?.displayName,
            OrgChart.findById(it.Coordinator)?.displayName,
            startDate,
            endDate,
            it.status.status,
            action
        ]
    }
    println dataArr
    response.setContentType("text/event-stream;charset=UTF-8")
    response << "data: ${dataArr}\n\n"
    render "Hi"
}
Below is the GSP (client-side) code:
console.log("Starting eventSource");
var eventSource = new EventSource("/ops/test");
console.log("Started eventSource");
eventSource.onmessage = function(event) {
var data = JSON.stringify(event.data)
console.log("Message received: " + JSON.parse(data));
changeRequestsTable.clear().rows.add(JSON.parse(event.data)).draw()
};
eventSource.onopen = function(event) { console.log("Open " + event); };
eventSource.onerror = function(event) { console.log("Error " + event); };
console.log("eventState: " + eventSource.readyState);
// Stop trying after 10 seconds of errors
setTimeout(function() {eventSource.close();}, 10000);
I know I am a long way from implementing what I intend to, but any help would be really appreciated.
Going to answer this since it is getting to be a long conversation. As it stands the question is too broad to give a proper answer, since the way you could do this varies dramatically: from events triggered upon record save that then make client socket connections through the application, through to a direct socket client connection at the point of save that triggers something to be sent to clients. These methods are all more complex and more entangled than they need to be; in short, it can all be done in a much easier way.
As users load their interface, get them to make a WebSocket connection to a backend listener. It can be the same location for everyone, as in no rooms/separation (that would only add complexity if you need it).
As they join, declare a static concurrent map at the top of your websocket listener that collects each session and userId. I wouldn't do it the way the chat example does; keep the collection injected through a service instead, changing RunnableFuture to WebSocket sessions as seen in the service link example.
Once you have this, you can simply call a broadcast that gets hold of your static concurrent map and, for each session, either broadcasts the entire new list as JSON (and the user processes an HTML update with it) or sends a trigger saying "update the page", at which point the client goes off and does an Ajax call to refresh the list. A client-side sketch of the trigger variant follows.
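On the client, the trigger variant could look like this sketch (the /ws/updates endpoint and the 'refresh' payload are assumptions, and changeRequestsTable.ajax.reload() assumes your DataTable is configured with an Ajax data source):

var ws = new WebSocket('ws://' + location.host + '/ws/updates');

ws.onmessage = function (event) {
    // the server only sends a trigger; fetch the fresh list via Ajax
    if (event.data === 'refresh') {
        changeRequestsTable.ajax.reload();
    }
};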
See the following guide for a tutorial on how to implement Server Sent Events with Grails 3:
http://guides.grails.org/server-sent-events/guide/index.html
I'm using the spawn-command npm package to spawn a shell where I run a binary file that was originally built in C++. I write stdin to the binary, and the binary then writes to stdout constantly, once every second. On the Node side, once I start receiving stdout from the binary, I have an on-listener that looks something like stdout.on('data', function (data) {}), where I forward these data events to the SSE channel.
Everything is working fine, but the major concern is the constant memory growth of the Node process that I see every time I hit the binary with new stdin. I have outlined how my code looks; is there an elegant way to control this memory growth? If so, please share.
var sseChannel = require('sse-channel'),
    spawnCommand = require('spawn-command'),
    cmd = 'path to the binary file',
    globalArray = {}, // responses keyed by session id
    uuid = require('uuid');

module.exports = function(app) {
    var child = spawnCommand(cmd),
        privateChannel = new sseChannel({
            historySize: 0,
            cors: {
                origins: ['*']
            },
            pingInterval: 15 * 1000,
            jsonEncode: false
        });

    var srvc = {
        get: function(req, res) {
            globalArray[uuid.v4()] = res;
            child.stdin.write('a json object in a format that is expected by binary' + '\n'); // req.query.<queryVal>
            child.stdout.on('data', function(data) {
                privateChannel.send(JSON.stringify(data));
            });
        },
        delete: function(sessionID) {
            var response = globalArray[sessionID];
            privateChannel.removeClient(response);
            response.end();
            delete globalArray[sessionID];
        }
    };
}
This code is just to illustrate how it looks in the app; hitting the "Run code snippet" button would not work in this case.
I collected heap dumps at 2 different intervals and compared the statistics: there is a tremendous increase in the Typed Array value. What could be done to contain or suppress the growth of the Typed Array?
The problem is that you're spawning a process once and then adding a new 'data' event handler for every request to your HTTP server, and those handlers never get removed. That would explain why the memory usage never drops, even after gc.
Another (unrelated) problem is that if you are using your single child process to process multiple incoming requests, you can run into the problem of mixing responses for different requests (you cannot assume that one data event will contain only the data for a particular request). If the child process is node.js-based, you could set up an ipc channel with it and then just pass regular JavaScript values back and forth instead of setting up stdout handling/parsing. If the child isn't node.js-based or you want an alternative (no-ipc) solution, you could set up a queue that all requests get pushed onto and then have a function that processes the queue and responds to each request serially (only moving onto the next request once you have somehow determined you have received all output from the child process for the current request).
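A rough sketch of that queue idea, staying with the single shared child from your code (everything here is illustrative; in particular, looksComplete() is a stub for however you determine that the child has finished responding to one request):

var queue = [];   // pending { input, res } jobs
var busy = false;

function processNext() {
    if (busy || queue.length === 0) return;
    busy = true;
    var job = queue.shift();
    var output = '';

    function onData(chunk) {
        output += chunk;
        if (looksComplete(output)) { // stub: detect end-of-response for your binary
            child.stdout.removeListener('data', onData); // no handler leak
            job.res.end(output);
            busy = false;
            processNext(); // serve the next queued request
        }
    }

    child.stdout.on('data', onData);
    child.stdin.write(job.input + '\n');
}

// each incoming request only enqueues its work:
function enqueue(input, res) {
    queue.push({ input: input, res: res });
    processNext();
}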
If you instead meant for the child process to only be used for a single request, you will need to tweak your code to spawn once per request instead (moving spawn() inside get()).
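If the per-request route fits your use case, the change is small. A sketch of just the get() handler under that assumption (the stdin payload stays whatever your binary expects):

get: function (req, res) {
    // a fresh child per request; its 'data' handler dies with it
    var child = spawnCommand(cmd);
    child.stdout.on('data', function (data) {
        privateChannel.send(JSON.stringify(data.toString()));
    });
    child.stdin.write('a json object in a format that is expected by binary' + '\n');
    child.stdin.end(); // signal no more input so the child can exit
}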
I'm developing a text-based adventure game with Meteor and I'm running into an issue with how to handle certain elements, namely how to emit data from the server to the client without any input from the client.
The idea is that when a player is engaged in combat with a monster, the combat damage and updating the Player and Monster objects will be occurring in a loop on the server. When the player takes damage it should accordingly update the client UI. Is something like this possible with Publish / Subscribe?
I basically want something that sits and listens for events from the server to update the combat log accordingly.
In pseudo-code, this is something along the lines of what I'm looking for:
// Client Side:
Meteor.call("runCommand", "attack monster");
// Server Side
Meteor.methods({
    runCommand: function (input) {
        // Take input, run the loop to begin combat;
        // whenever the user takes damage, update the
        // client UI and output a line saying how much
        // damage the player just received and by whom
    }
});
I understand that you can publish a collection to the client, but that's not really as specific a function as I'm looking for. I don't want to publish the entire Player object to the client; I just want to tell the client to write a line to a textbox saying something like "You were hit for 12 damage by a monster!".
I was hoping there was a function similar to SocketIO where I could, if I wanted to, just emit an event to the client telling it to update the UI. I think I can use SocketIO for this if I need to, but people seemed adamant that something like this was doable entirely with Meteor, without SocketIO; I just don't really understand how.
The only outs I see in this scenario are: writing all of the game logic client-side, which feels like a bad idea; writing all of the combat logs to a collection, which seems extremely excessive (but maybe it's not?); or using some SocketIO-type tool to just emit messages to the client telling it to write a new line to the text box.
Using Meteor, creating a combat log collection seems to be the simplest option you have.
You can then listen only for the 'added' event and clear the collection when the combat is over.
It should be something like this :
var cursor = Combat_Log.find();
var handleCombatLog = cursor.observe({
    added: function (doc) {
        // do your stuff
    }
});
I asked a similar question here; hope this helps ^^
Here's how I did it without a collection. I think you are right to be concerned about creating one; that would not be a good idea. First, install Streamy:
https://atmospherejs.com/yuukan/streamy
Then on the server
//find active sockets for the user by id
var sockets = Streamy.socketsForUsers(USER_ID_HERE)._sockets
if (!Array.isArray(sockets) || !sockets.length) {
//user is not logged in
} else {
//send event to all open browser windows for the user
sockets.forEach((socket) => {
Streamy.emit('ddpEvent', { yourKey:yourValue }, socket);
})
}
Then in the client, respond to it like this:
Streamy.on('ddpEvent', function(data) {
console.log("data is ", data); //prints out {yourKey:yourValue}
});
I have a node.js server with socket.io. My clients use socket.io to connect to the node.js server.
Data is transmitted from clients to server in the following way:
On the client
var Data = {'data1':'somedata1', 'data2':'somedata2'};
socket.emit('SendToServer', Data);
On the server
socket.on('SendToServer', function (Data) {
    for (var key in Data) {
        // Do some work with Data[key]
    }
});
Suppose that somebody modifies his client and emits to the server a really big chunk of data. For example:
var Data = {'data1':'somedata1', 'data2':'somedata2', ...and so on until he reaches, for example, 'data100000':'data100000'};
socket.emit('SendToServer', Data);
Because of this loop on the server...
for (var key in Data) {
    // Do some work with Data[key]
}
... the server would take a very long time to loop through all this data.
So, what is the best solution to prevent such scenarios?
Thanks
EDIT:
I used this function to validate the object:
function ValidateObject(obj) {
    var i = 0;
    for (var key in obj) {
        i++;
        if (i > 10) { // object is too big
            return false;
        }
    }
    return true; // object is within the size limit
}
So the easiest thing to do is just check the size of the data before doing anything with it.
socket.on('someevent', function (data) {
    if (JSON.stringify(data).length > 10000) // roughly 10 kilobytes
        return;
    console.log('valid data: ' + data);
});
To be honest, this is a little inefficient. Your client sends the message, socket.io parses the message into an object, and then you get the event and turn it back into a String.
If you want to be even more efficient then on the client side you should be enforcing max lengths of messages.
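On the client that could be as simple as this sketch (safeEmit is just an illustrative wrapper name):

function safeEmit(socket, event, payload) {
    // refuse to send anything over ~10 kB of serialized JSON
    if (JSON.stringify(payload).length > 10000) return false;
    socket.emit(event, payload);
    return true;
}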
For even more efficiency (and to protect against malicious users), you should discard packets as they come into Socket.io if their length gets too long. You'll either need to figure out a way to extend the prototypes to do what you want, or you'll need to pull the source and modify it yourself. Also, I haven't looked into the socket.io protocol, but I'm sure you'll have to do more than just "discard" the packet. Some packets are ack-backs and nack-backs, so you don't want to mess with those, either.
Side note: If you ONLY care about the number of keys then you can use Object.keys(obj) which returns an array of keys:
if (Object.keys(obj).length > 10)
    return;
You may also consider switching to socket.io-stream and handling the input stream directly.
This way you have to join the chunks and parse the JSON input manually, but you get the chance to close the connection as soon as the input length exceeds whatever threshold you decide on.
Otherwise (staying with the plain socket.io approach) your callback won't be called until the whole data stream has been received. That doesn't stop your JS main thread execution, but it wastes memory, CPU, and bandwidth.
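A sketch of what that could look like with socket.io-stream, reusing the 'SendToServer' event from the question (the 10000-byte threshold is arbitrary, and the client would have to emit a stream via ss(socket).emit(...) as well):

var ss = require('socket.io-stream');

io.sockets.on('connection', function (socket) {
    ss(socket).on('SendToServer', function (stream) {
        var received = 0;
        var chunks = [];
        stream.on('data', function (chunk) {
            received += chunk.length;
            if (received > 10000) {
                socket.disconnect(); // drop the offender before reading it all
                return;
            }
            chunks.push(chunk);
        });
        stream.on('end', function () {
            var data = JSON.parse(Buffer.concat(chunks).toString());
            // Do some work with data
        });
    });
});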
On the other hand, if your only goal is to avoid overloading your processing algorithm, you can keep limiting it by counting the elements in the received object. For instance:
if (Object.keys(data).length > n) return; // Where n is your maximum acceptable number of elements.
// But, anyway, this doesn't control the actual size of each element.
EDIT: Because the question is about how to handle server overload, you should check out load balancing with nginx (http://nginx.com/blog/nginx-nodejs-websockets-socketio/) - you could have additional servers available in case one client creates a bottleneck. And even if you solve this problem, there are still others, like a client sending several small packets, and so on.
The Socket.IO library seems to be a bit problematic here; rejecting overly large messages is not available at the WebSocket layer. There was a pull request three years ago that gives an idea of how it might be solved:
https://github.com/Automattic/socket.io/issues/886
However, because the WebSocket protocol does have a finite packet size, it would allow you to stop processing packets once a certain size has been reached. The most effective place to do this would be before the packet is transferred to the JavaScript heap. This means that you would have to handle the WebSocket transform manually - this is what socket.io does for you, but it does not take the packet size into account.
If you want to implement your own websocket layer, this WebSocket-Node implementation might be useful:
https://github.com/theturtle32/WebSocket-Node
If you do not need to support older browsers, this pure-WebSockets approach might be a suitable solution.
Well, I'll go with the JavaScript side of things... let's say you don't want to allow users to go over a certain limit of data. You can just:
var allowedSize = 10;
Object.keys(Data).forEach(function (key, idx) {
    // skip elements beyond the allowed count
    // (note: this still walks every key; it only skips the work)
    if (idx >= allowedSize) return;
    // Do some work with Data[key]
});
This not only lets you cycle through the elements of your object properly, it also lets you impose the limit easily (though obviously it can truncate your own legitimate requests too).
Maybe 'destroy buffer size' is what you need.
From the wiki:
destroy buffer size defaults to 10E7
Used by the HTTP transports. The Socket.IO server buffers HTTP request bodies up to this limit. This limit is not applied to websocket or flashsockets.
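If you're on a socket.io version that still supports the old 0.9-style configuration API (an assumption; check your version), setting it would look something like:

io.configure(function () {
    // bodies larger than this (in bytes) get discarded by the HTTP transports
    io.set('destroy buffer size', 1024);
});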
I am facing a strange issue when calling socket.on from a JavaScript client. Consider the code below:
for (var i = 0; i < 2; i++) {
    var socket = io.connect('http://localhost:5000/');
    socket.emit('getLoad');
    socket.on('cpuUsage', function (data) {
        document.write(data);
    });
}
Here I am basically listening for the cpuUsage event emitted by the socket server, but for each iteration I get the same value. This is the output:
0.03549148310035006
0.03549148310035006
0.03549148310035006
0.03549148310035006
Edit: Here is the server-side code. Basically I am using the node-usage library to calculate CPU usage:
socket.on('getLoad', function (data) {
    usage.lookup(pid, function (err, result) {
        cpuUsage = result.cpu;
        memUsage = result.memory;
        console.log("Cpu Usage1: " + cpuUsage);
        console.log("Cpu Usage2: " + memUsage);
        /*socket.emit('cpuUsage', result.cpu);
        socket.emit('memUsage', result.memory);*/
        socket.emit('cpuUsage', cpuUsage);
        socket.emit('memUsage', memUsage);
    });
});
Whereas on the server side, I am getting different values for each emit and socket.on. I find it very strange that this is happening. I tried setting data = null after each socket.on call, but it still prints the same value. I don't know what phrase to search for, so I am posting here. Can anyone please guide me?
Please note: I am basically a Java developer and have less experience on the JavaScript side.
You are making the assumption that when you use .emit(), a subsequent .on() will wait for a reply, but that's not how socket.io works.
Your code basically does this:
it emits two getLoad messages directly after each other (which is probably why the returning value is the same);
it installs two handlers for a returning cpuUsage message being sent by the server;
This also means that each time you run your loop, you're installing more and more handlers for the same message.
Now I'm not sure what exactly it is you want. If you want to periodically request the CPU load, use setInterval or setTimeout. If you want to send a message to the server and want to 'wait' for a response, you may want to use acknowledgement functions (not very well documented, but see this blog post).
But you should assume that, for each type of message, you should call socket.on('MESSAGETYPE', handler) only once during the runtime of your code.
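For completeness, a sketch of the acknowledgement style applied to your events (socket.io passes the client's callback as the last argument to the server-side handler):

// Client: pass a callback as the last argument to emit()
socket.emit('getLoad', null, function (cpuUsage) {
    document.write(cpuUsage);
});

// Server: call the acknowledgement with the reply
socket.on('getLoad', function (data, ack) {
    usage.lookup(pid, function (err, result) {
        ack(result.cpu);
    });
});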
EDIT: here's an example client-side setup for a periodic poll of the data:
var socket = io.connect(...);
socket.on('connect', function() {
// Handle the server response:
socket.on('cpuUsage', function(data) {
document.write(data);
});
// Start an interval to query the server for the load every 30 seconds:
setInterval(function() {
socket.emit('getLoad');
}, 30 * 1000); // milliseconds
});
Use this line instead:
var socket = io.connect('iptoserver', {'force new connection': true});
Replace iptoserver with the actual IP of the server, of course - in this case, localhost.
Edit: That is, if you want to create multiple clients. Otherwise you have to move the initialization of the socket variable to before the for loop.
I suspected the call returns the average CPU usage since process startup, which seems to be the case here. Checking the node-usage documentation page (average-cpu-usage-vs-current-cpu-usage), I found:
By default CPU Percentage provided is an average from the starting time of the process. It does not correctly reflect the current CPU usage. (this is also a problem with linux ps utility)
But If you call usage.lookup() continuously for a given pid, you can turn on keepHistory flag and you'll get the CPU usage since last time you track the usage. This reflects the current CPU usage.
It also gives an example of how to use it:
var pid = process.pid;
var options = { keepHistory: true };
usage.lookup(pid, options, function (err, result) {
    // result.cpu now reflects usage since the previous lookup
});