I was thinking about extending the functionality of the Node.js server running Socket.io that I currently use to let a client (an iOS app) communicate with the server, so that it could keep persistent session data between connections.
On the initial connection the server passes the client a session ID, which the client stores and passes back to the server if it reconnects after disconnecting. This would allow the client to resume its session without having to re-provide the server with certain information about its current state (obviously the actual implementation will be more secure than this).
I want the session to eventually expire: it should have a maximum lifetime, and it should time out if it hasn't been resumed within a certain period. To do this I was thinking of using a timer for each session. I'm not actually sure how Node.js or JavaScript timers (setTimeout) work in the background, and I'm concerned that having thousands of session timers could lead to a lot of memory/CPU usage. Could this be a potential issue? Should I instead have a sweeper that runs every minute or so and deletes expired session data? What is the approach with the least impact on performance, or are timers already exactly that?
Timers are used frequently for timeouts, and they are very cheap in CPU terms.
// ten_thousand_timeouts.js
for (var i = 1; i <= 10000; i++) {
  (function (i) {
    setTimeout(function () {
      console.log(i);
    }, 1000);
  })(i);
}
With 10,000 timers, the run took only 0.336 seconds beyond the 1-second delay, and the act of logging to the console took most of that time.
// shell session
$ time node ten_thousand_timeouts.js
1
...
9999
10000
real 0m1.336s
user 0m0.275s
sys 0m0.146s
I cannot imagine this being an issue for your use case.
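A minimal sketch of the one-timer-per-session approach might look like this (the in-memory store, the 30-minute lifetime, and the refresh-on-reconnect behaviour are assumptions, not from the question):

// session_expiry_sketch.js (illustrative only)
var sessions = {};                    // sessionId -> { data: ..., timer: ... }
var SESSION_TTL = 30 * 60 * 1000;     // assumed 30-minute lifetime

function touchSession(sessionId, data) {
  // Reset the expiry timer whenever the client (re)connects.
  if (sessions[sessionId]) {
    clearTimeout(sessions[sessionId].timer);
  }
  sessions[sessionId] = {
    data: data,
    timer: setTimeout(function () {
      delete sessions[sessionId];     // expire the session
    }, SESSION_TTL)
  };
}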
I have a query related to a Firebase presence app using the JavaScript SDK. I have seen that there is roughly a 60-second buffer after the internet is disconnected, after which the entries are removed from the Firebase Realtime Database. Is it possible to configure this time from 60 seconds down to, say, 30 seconds? I basically want the entries to be removed from presence as soon as the internet is disconnected, or if not immediately, then at least sooner than a minute. Does anybody have any idea on this?
There are two ways an onDisconnect handler can be triggered on the Firebase servers:
A clean disconnect is when the client has time to inform the server that it is about to disconnect, and in that case the server executes the onDisconnect handlers immediately.
A dirty disconnect is when the client disconnects without informing the server. In this case the server detects that the client is gone when the socket from that client times out, and then it executes the onDisconnect handlers.
Detecting dirty disconnects takes some time (up to a few minutes), and there's no way for you to configure it.
If you want more granular presence detection, the common approach is to periodically write a timestamp into the database from the client. This indicates when the client was last active, and can be used by the other clients to then indicate the liveliness of that user/client.
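For example, a minimal heartbeat with the namespaced (v8-style) Firebase JavaScript SDK might look like this; the status/<uid>/lastActive path and the 15-second interval are assumptions:

// presence_heartbeat.js (illustrative sketch)
var firebase = require('firebase/app');
require('firebase/database');

firebase.initializeApp({ /* your Firebase config */ });

var uid = 'some-user-id'; // hypothetical user id
var lastActiveRef = firebase.database().ref('status/' + uid + '/lastActive');

// Write a server timestamp periodically while the client is alive;
// other clients treat a stale timestamp as "offline".
setInterval(function () {
  lastActiveRef.set(firebase.database.ServerValue.TIMESTAMP);
}, 15000);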
I'm working on a Node application that monitors users' online status. It uses socket.io to update the online status of the users we "observe" (as in, the users we are aware of on the page we're at). What I would like to introduce now is an idle status, which would basically mean that after X time of inactivity (as in, no request) the status changes from online to idle.
I monitor all the sockets, so I know when each connection was made, and I thought of using this.
My idea is to set a setTimeout on every connection for this purpose (clearing the previous one if it exists); in the timeout callback I would simply change the user's status to idle and emit that status change to the observers (a rough sketch of this pattern follows below).
What I'm concerned about is the performance and scalability of setting and clearing the timeout on every connection. So the question is, are there any issues with this approach in those two respects? Is there a better way of doing it, perhaps a library that is better at handling such things?
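A rough sketch of the pattern described in this question, assuming Socket.IO, a hypothetical "activity" event fired on each client request, and an assumed 5-minute idle window:

// idle_status_sketch.js (illustrative only)
var Server = require('socket.io').Server;
var io = new Server(3000);

var IDLE_MS = 5 * 60 * 1000; // assumed 5-minute idle window
var idleTimers = {};         // socket.id -> timer handle

io.on('connection', function (socket) {
  function resetIdleTimer() {
    clearTimeout(idleTimers[socket.id]);
    idleTimers[socket.id] = setTimeout(function () {
      // Mark the user idle and notify the observers.
      io.emit('status-change', { user: socket.id, status: 'idle' });
    }, IDLE_MS);
  }

  resetIdleTimer();
  socket.on('activity', resetIdleTimer); // any request counts as activity
  socket.on('disconnect', function () {
    clearTimeout(idleTimers[socket.id]);
    delete idleTimers[socket.id];
  });
});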
We have a small set of multiplayer servers using node.js that are currently serving roughly 1 million messages a minute during peak usage. Is there a way to 'gracefully' restart the server without causing sockets to drop? Basically, I'm wondering what the best way is to handle restarts where they would normally be very disruptive to players.
When a process exits, the OS cleans up any sockets that belong to it by closing them. So, there's no way to just do a simple server restart and preserve your socket connections.
In some operating systems, you can pass ownership of a socket from one process to another, so it might be technically feasible for you to create a temporary process (or perhaps use a previously existing parent process), pass ownership of the sockets to that other process, restart your server, then transfer ownership back to the newly started process. I've never tried this (or heard of it being done), but it sounds like something that might be feasible.
Here's some information on transferring a socket to a child process using child.send() in node.js. It appears this can only be done for a node.js socket created by the net module and there are some caveats about doing it, but it is possible.
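A rough sketch of that hand-off, based on the child_process documentation's 'socket' example (the file names and port are hypothetical):

// parent.js (illustrative sketch)
var fork = require('child_process').fork;
var net = require('net');

var child = fork('child.js');
var server = net.createServer(function (socket) {
  // Hand the raw net socket over to the child process.
  child.send('socket', socket);
});
server.listen(1337);

// child.js (illustrative sketch)
process.on('message', function (msg, socket) {
  if (msg === 'socket' && socket) {
    socket.end('Handled by the child process\n');
  }
});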
If not, the usual work-around is have the clients automatically reconnect when their connection is closed. Done properly, this can be fairly transparent to the client (except for the momentary time when the server is not running).
Use Redis or some in-memory database for storing connection state, so that clients can easily reconnect even after a server restart without losing any sessions. Try this if it suits your needs. Also note that during a restart the connection may drop, but because the state is persisted you will be reconnected again very easily.
socket.io-redis
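For reference, wiring the adapter typically looks something like this (the host and port are assumptions):

// socket_io_redis_adapter.js (illustrative sketch)
var io = require('socket.io')(3000);
var redisAdapter = require('socket.io-redis');

// Route Socket.IO traffic through Redis so multiple server processes
// (or a restarted process) share the same pub/sub channel.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));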
I have a WebSockets-based chat application (HTML5).
The browser opens a socket connection to a Java-based WebSocket server over wss.
When the browser connects to the server directly (without any proxy), everything works well.
But when the browser is behind an enterprise proxy, the socket connection closes automatically after approximately 2 minutes of inactivity.
Browser console shows "Socket closed".
In my test environment I have a Squid-Dansguardian proxy server.
IMP: this behaviour is not observed if the browser is connected without any proxy.
To keep some activity going, I embedded a simple jQuery script that makes an HTTP GET request to another server every 60 seconds. But it did not help; I still get "Socket closed" in my browser console after about 2 minutes of inactivity.
Any help or pointers are welcome.
Thanks
This seems to me to be a feature, not a bug.
In production applications there is an issue related with what is known as "half-open" sockets - see this great blog post about it.
It happens that connections are lost abruptly, causing the TCP/IP connection to drop without informing the other party to the connection. This can happen for many different reasons - wifi signals or cellular signals are lost, routers crash, modems disconnect, batteries die, power outages...
The only way to detect if the socket is actually open is to try and send data... BUT, your proxy might not be able to safely send data without interfering with your application's logic*.
After two minutes, your proxy assumes that the connection was lost and closes the socket on its end to save resources and allow new connections to be established.
If your proxy didn't take this precaution, on a long enough timeline all your available resources would be taken by dropped connections that would never close, preventing access to your application.
Two minutes is a lot. On Heroku they set the proxy to 50 seconds (more reasonable). For HTTP connections, these timeouts are often much shorter.
The best option for you is to keep sending websocket data within the 2 minute timeframe.
The WebSocket protocol resolves this issue with a built-in ping mechanism; use it. These pings should be sent by the server, and the browser responds to them with a pong automatically (without involving the JavaScript application).
The JavaScript API (at least in the browser) doesn't let you send ping frames (it's a security measure, I guess, that prevents people from using browsers for DoS attacks).
A common practice by some developers (which I think is misguided) is to implement a JSON ping message that is either ignored by the server or results in a JSON pong.
Since you are using Java on the server, you have access to the Ping mechanism and I suggest you implement it.
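The question's server is Java, but as a rough sketch of the same idea in Node's ws package (an assumption, not the asker's stack), a server-driven ping might look like this:

// ws_server_ping_sketch.js (illustrative only)
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (ws) {
  // Send a protocol-level ping well inside the proxy's 2-minute timeout;
  // the browser answers with a pong automatically.
  var interval = setInterval(function () {
    ws.ping();
  }, 50000);

  ws.on('close', function () {
    clearInterval(interval);
  });
});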
I would also recommend (if you have control of the Proxy) that you lower the timeout to a more reasonable 50 seconds limit.
* The situation during production is actually even worse...
Because there is a long chain of intermediaries (home router/modem, NAT, ISP, Gateways, Routers, Load Balancers, Proxies...) it's very likely that your application can send data successfully because it's still "connected" to one of the intermediaries.
This should start a chain reaction that only reaches your application after a while, and again ONLY if it attempts to send data.
This is why ping frames expect pong frames to be returned (meaning the whole chain of connections is intact).
P.S.
You should probably also complain about the Java application not closing the connection after a certain timeout. During production, this oversight might force you to restart your server every so often, or leave you in a DoS-like situation (all available file handles get used by inactive old connections, and there is no room for new connections).
Check squid.conf for the request_timeout value; you can raise it there. This will affect more than just WebSockets. For instance, in an environment I frequently work in, a Perl script is hit to generate various configurations, and execution can take upwards of 5-10 minutes to complete. The timeout value on both our httpd and the Squid server had to be raised to compensate for this.
Also look at the connect_timeout value, which defaults to one minute.
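For example, the relevant squid.conf directives look like this (the values are illustrative, not recommendations):

# squid.conf (illustrative values)
request_timeout 5 minutes
connect_timeout 1 minute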
I'm developing an application that displays real-time data (charts, etc.) from Redis. Updated data arrives in Redis very quickly (within milliseconds), so it would make sense to show updates as often as possible (as long as the human eye can notice them).
Technology stack:
Node.js as a web server
Redis that holds the data
JavaScript/HTML (AngularJS) as a client
Right now I have client-side polling (GET requests to the Node.js server every second, which queries Redis for updates).
Is there an advantage to doing server-side polling instead, exposing updates through a WebSocket? Every WebSocket connection would require a separate Node.js poll (setInterval), though, since client queries may differ. But no more than about 100 WebSocket connections are expected.
Any pros/cons between these two approaches?
If I understood your question correctly: you have fewer than 100 users who are going to use your resource simultaneously, and you want to find out which is the better way to give them updates:
clients ask for updates through polling requests (one per second), or
the server keeps track of clients and, whenever there is an update, pushes it to them.
I think the best solution depends on the data that you have and how important it is for users to get this data promptly.
I would go with client-side polling if:
people do not care if their data is a little bit stale
there would typically be more than one update during that 1-second interval
you do not have time to modify the code
I would go with server-side push if:
it is important to have up-to-date data and users cannot tolerate lag
updates are infrequent (if, for example, there is an update only once per minute, only 1 in 60 client-side requests would be useful, whereas the server would push just that one update)
One good thing is that node.js already has an excellent socket.io library for this purpose.
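A minimal sketch of the server-side variant, assuming the socket.io package and a callback-style node_redis client; the latest-chart-data key is hypothetical, and the 1-second interval mirrors the question's setup:

// server_side_poll_sketch.js (illustrative only)
var io = require('socket.io')(3000);
var redis = require('redis').createClient();

io.on('connection', function (socket) {
  // Poll Redis once a second for this client and push changes over the socket.
  var timer = setInterval(function () {
    redis.get('latest-chart-data', function (err, data) { // hypothetical key
      if (!err) {
        socket.emit('update', data);
      }
    });
  }, 1000);

  // Stop polling when the client goes away.
  socket.on('disconnect', function () {
    clearInterval(timer);
  });
});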