What causes process.hrtime() to hang in Node.js? - javascript

Here is the code:
var process = require('process');
var c = 0;
while (true) {
    var t = process.hrtime();
    console.log(++c);
}
Here is my environment:
Node.js v4.2.4, Ubuntu 14.04 LTS on Oracle VM VirtualBox v5.0.4 r102546, running on Windows 7
This loop only runs about 60k to 80k iterations before it hangs; nothing happens after that. On my colleague's computer it manages maybe 40k to 60k. But shouldn't this loop continue forever?
I was originally running a benchmark that measures the average execution time of setting up connections, so I can't just record the start time once and the end time after everything has finished.
Is this related to the OS that I use?
Thanks if anyone knows the problem.
==========================================================
update 2016.4.13:
The day after I raised this question, I realized what a stupid question it was, and it was not what I really wanted to do. So I'm going to explain it further.
Here is the testing structure:
I have a Node server which handles connections. The client sends a 'setup' event on the 'connect' event. On the server side, a Redis subscription channel is created and some database queries are made, then the client's 'setup' callback is invoked. The client disconnects the socket in the 'setup' callback and reconnects on the 'disconnect' event.
The client code uses socket.io-client to run in the backend and cluster to simulate high concurrency.
The code looks like this:
(some of the functions are not listed here)
[server]
socket.on('setup', function(data, callback) {
    queryFromDB();
    subscribeToRedis();
    callback();
});
[client]
var cluster = require('cluster');
var io = require('socket.io-client');

var requests = 1000;
if (cluster.isMaster) {
    for (var i = 0; i < 100; ++i) {
        cluster.fork();
    }
} else {
    var count = 0;
    var startTime = process.hrtime();
    var socket = io.connect(...);
    socket.on('connect', function() {
        socket.emit('setup', {arg1: '...', arg2: '...'}, function() {
            var setupEndTime = process.hrtime();
            calculateSetupTime(startTime, setupEndTime);
            socket.disconnect();
        });
    });
    socket.on('disconnect', function() {
        if (count++ < requests) {
            var disconnectEndTime = process.hrtime();
            calculateSetupTime(startTime, disconnectEndTime);
            socket.connect();
        } else {
            process.exit();
        }
    });
}
At first the connections could only be made 500 or 600 times. After I removed all the hrtime() calls, it made it to 1000. But when I later raised the number of requests to around 2000 (still without the hrtime() calls), it could not finish again.
I was totally confused. Yesterday I thought it was related to hrtime, but of course it wasn't; any infinite loop would hang. I was misled by hrtime.
But what's the problem now?
===================================================================
update 2016.4.19
I solved this problem.
The reason is that my client code uses socket.disconnect and socket.connect to simulate a new user. This is wrong.
In this case the server may not recognize that the old socket disconnected. You have to throw away the old socket object and create a new one.
As a result the connection count does not equal the disconnection count, which prevents the server code from unsubscribing from Redis, and the whole loop hangs because Redis stops responding.
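For illustration, a minimal sketch of that fix on the client side (the forceNew option, URL, and helper name are assumptions for illustration, not the actual test code):
// Hypothetical sketch: give each simulated user a brand-new socket
// instead of reusing the old one via disconnect()/connect().
function startConnection() {
    // forceNew asks socket.io-client not to reuse a cached connection
    // (assumes a socket.io-client version that supports this option).
    var socket = io.connect('http://localhost:3000', { forceNew: true });
    socket.on('connect', function() {
        socket.emit('setup', { arg1: '...', arg2: '...' }, function() {
            socket.disconnect();
        });
    });
    socket.on('disconnect', function() {
        socket.removeAllListeners(); // drop all references to the old socket
        if (count++ < requests) {    // count/requests as in the code above
            startConnection();       // fresh socket for the next simulated user
        } else {
            process.exit();
        }
    });
}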

Your code is a synchronous infinite loop: it never yields back to the event loop, so nothing else (timers, I/O, flushing the console.log output) ever gets a chance to run, and at some point it will always exhaust system resources and cause your application to hang.
Other than causing your application to hang, the code you have posted does very little else. Essentially, it could be described like this:
For the rest of eternity, or until my app hangs (whichever happens first):
Get the current high-resolution real time, then ignore it without doing anything with it.
Increment a number and log it.
Repeat as quickly as possible.
If this is really what you wanted to do, you have achieved it, but it will always hang at some point. Otherwise, you may want to explain your desired result further.
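If the goal was just to keep a loop like this running without hanging, a minimal sketch is to yield to the event loop between iterations (this rewrite is an illustration, not the original code):
// Yield to the event loop between passes so queued work
// (I/O, timers, stdout flushing) can run; this version will not hang.
var c = 0;

function tick() {
    var t = process.hrtime(); // still unused, as in the original
    console.log(++c);
    setImmediate(tick); // schedule the next pass instead of looping
}

tick();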

Related

node js clustering is repeating the same task on all 8 processes

I've been trying to enable clustering in my node js app. Currently I use this snippet to enable it:
var cluster = require('cluster');
if (cluster.isMaster) {
    // Count the machine's CPUs
    var cpuCount = require('os').cpus().length;
    // Create a worker for each CPU
    for (var i = 0; i < cpuCount; i += 1) {
        cluster.fork();
    }
    // Listen for dying workers
    cluster.on('exit', function () {
        cluster.fork();
    });
}
And basically my code performs writes to a Firebase database based on conditions. The problem is that the writes occur 8 times each: rather than one worker taking care of one write task, all workers seem to perform every task. Is there a way to avoid this? If so, can someone point me in the direction of some resources on this? I can't find anything on Google about using Firebase with Node.js clustering. Here is an example of how one of my functions works (ref is my Firebase reference):
var payload = {};
ref.child('user-sent').on('child_added', function(snapshot) {
    var message = snapshot.child('message');
    payload['user-received/'] = message;
    ref.update(payload); // this occurs once for each fork, so it updates 8 times
});
If you're spawning 8 worker processes and each one attaches a listener to the same location (user-sent), then each worker will fire the child_added event for each child under that location. This is the expected behavior.
If you want to implement a worker queue, where each node under user-sent is handled by only one worker, you'll have to use a work-distribution mechanism that ensures only one worker can claim each node.
The firebase-queue library implements such a work-claim mechanism using Firebase Database transactions. It's been used to scale to a small to medium number of workers (think < 10, not dozens).
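To make the claim idea concrete, here is a hedged sketch of a transaction-based claim (the _claimedBy child and processTask helper are illustrative assumptions, not firebase-queue's actual implementation):
// Every worker sees every child_added, but only the worker whose
// transaction commits first gets to process the task.
ref.child('user-sent').on('child_added', function(snapshot) {
    // snapshot.ref is a property in the 3.x SDK; older SDKs used snapshot.ref()
    snapshot.ref.child('_claimedBy').transaction(function(current) {
        if (current === null) {
            return process.pid; // unclaimed: try to claim it for this worker
        }
        return; // returning undefined aborts: another worker got there first
    }, function(error, committed) {
        if (error) {
            console.error(error);
        } else if (committed) {
            processTask(snapshot); // hypothetical task handler
        }
    });
});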

Server saturation with Ajax calls

I'm using PHP over IIS 7.5 on Windows Server 2008.
My web application repeatedly requests 3 different JSON pages with Ajax in the background:
page 1: every 6 seconds
page 2: every 30 seconds
page 3: every 60 seconds
They retrieve data related to the current state of some tables; this way I keep the view updated.
Usually I don't have much trouble with it, but lately I have seen my server saturated with hundreds of unanswered requests, and I believe the problem may be due to a delay in one of the requests.
If page 1, which is requested every 6 seconds, needs 45 seconds to respond (due to slow database queries or whatever), it seems to me that the requests start piling up one after another.
If I have multiple users connected to the web application at the same time (or with multiple tabs), things can turn bad.
Any suggestion about how to avoid this kind of problem?
I was thinking about using something such as ZMQ together with Socket.io on the client side, but as the data I'm requesting isn't triggered by any user action, I don't see how this could be triggered from the server side.
I was thinking about using something such as ZMQ together with Socket.io on the client side...
This is almost definitely the best option for long-running requests.
...but as the data I'm requesting isn't triggered by any user action, I don't see how this could be triggered from the server side.
In this case, the 'user action' in question is connecting to the socket.io server. This cut-down example is taken from one of the socket.io getting started docs:
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);

io.on('connection', function(socket) {
    console.log('a user connected');
});
When the 'connection' event is fired, you could start listening for messages on your ZMQ message queue. If necessary, you could also start the long-running queries.
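For example, here is a hedged sketch of wiring the two together, assuming the classic zmq npm package and a publisher at tcp://127.0.0.1:5555 (both assumptions):
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);
var zmq = require('zmq');

// Subscribe to the message queue once, at server startup.
var sub = zmq.socket('sub');
sub.connect('tcp://127.0.0.1:5555');
sub.subscribe(''); // empty prefix = all topics

io.on('connection', function(socket) {
    console.log('a user connected');
});

// Push each queue message to every connected client; no client polling needed.
sub.on('message', function(msg) {
    io.emit('table-update', msg.toString());
});

http.listen(3000);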
I ended up solving the problem by following the recommendation of @epascarello, improving it a bit so that a new request is also sent if no response arrives within X time:
If the request has not come back, do not send another. But fix the server-side code and speed it up.
Basically I did something like the following:
var ELAPSED_TIME_LIMIT = 5; // 5 minutes
var responseAnswered = true;
var prevTime = new Date().getTime();

setInterval(function() {
    // if the last request was answered, or more than X minutes passed since the last call
    if (responseAnswered || elapsedTime() > ELAPSED_TIME_LIMIT) {
        getData();
        updateElapsedTime();
    }
}, 6000);

function getData() {
    responseAnswered = false;
    $.post("http://whatever.com/action.json", function(result) {
        responseAnswered = true;
    });
}

// Returns the elapsed time since the last time prevTime was updated.
function elapsedTime() {
    var curTime = new Date().getTime();
    // time difference between the last request and now
    var timeDiff = curTime - prevTime;
    // time in minutes
    return (timeDiff / 1000) / 60;
}

// updates prevTime with the current time
function updateElapsedTime() {
    prevTime = new Date().getTime();
}
This is a very bad setup. You should always avoid polling if possible. Instead of sending a request from client to server every 6 seconds, send data from the server to the clients: check on the server side whether the data has changed, and if so push it to the clients over websockets. You can use Node.js on the server side to monitor changes in the data.
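A hedged sketch of that approach, assuming a standalone socket.io server and a hypothetical checkForChanges() helper that runs the database query server-side:
var io = require('socket.io')(3000); // standalone socket.io server on port 3000

io.on('connection', function(socket) {
    console.log('client connected');
});

// One server-side check per interval, instead of one request per client.
setInterval(function() {
    checkForChanges(function(changed, data) { // hypothetical DB-polling helper
        if (changed) {
            io.emit('table-update', data); // push only when something changed
        }
    });
}, 6000);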

socket.io stop re-emitting event after x seconds/first failed attempt to get a response

I noticed that whenever my server is offline and I switch it back online, it receives a ton of socket events that were fired while the server was down (events that are by now outdated).
Is there a way to stop socket.io from re-emitting events after they have not received a response for x seconds?
When all else fails with open source libraries, you go study the code and see what you can figure out. After spending some time doing that with the socket.io source code...
The crux of the issue seems to be this code in socket.emit():
if (this.connected) {
    this.packet(packet);
} else {
    this.sendBuffer.push(packet);
}
If the socket is not connected, all data sent via .emit() is buffered in the sendBuffer. Then, when the socket connects again, we see this:
Socket.prototype.onconnect = function() {
    this.connected = true;
    this.disconnected = false;
    this.emit('connect');
    this.emitBuffered();
};

Socket.prototype.emitBuffered = function() {
    var i;
    for (i = 0; i < this.receiveBuffer.length; i++) {
        emit.apply(this, this.receiveBuffer[i]);
    }
    this.receiveBuffer = [];

    for (i = 0; i < this.sendBuffer.length; i++) {
        this.packet(this.sendBuffer[i]);
    }
    this.sendBuffer = [];
};
So, this fully explains why it buffers all data sent while the connection is down and then sends it all upon reconnect.
Now, as to how to prevent it from sending this buffered data, here's a theory that I will try to test later tonight when I have more time.
Two things look like they present an opportunity: the socket notifies of the connect event before it sends the buffered data, and the sendBuffer is a public property of the socket. So, it looks like you can just do this in the client code (clear the buffer upon connect):
// clear previously buffered data when reconnecting
socket.on('connect', function() {
    socket.sendBuffer = [];
});
I just tested it, and it works just fine. I have a client socket that sends an increasing counter message to the server every second. When I take the server down for 5 seconds and bring it back up without this code, all the queued-up messages arrive on the server; no counts are missed.
When I then add the three lines of code above, any messages sent while the server is down are not sent to the server (technically, they are cleared from the send buffer before being sent). It works.
FYI, another possibility would be to just not call .emit() when the socket is not connected. So, you could just create your own function or method that would only try to .emit() when the socket is actually connected, thus nothing would ever get into the sendBuffer.
Socket.prototype.emitWhenConnected = function(msg, data) {
    if (this.connected) {
        return this.emit(msg, data);
    } else {
        // do nothing?
        return this;
    }
};
Or, more dangerously, you could override .emit() to make it work this way (not my recommendation).
Volatile events are events that will not be sent if the underlying connection is not ready (a bit like UDP, in terms of reliability).
https://socket.io/docs/v4/emitting-events/#volatile-events
socket.volatile.emit("hello", "might or might not be received");

Creating an index with ElasticSearch javascript client, promise does not return

Here is something that's been driving me crazy for an hour now. I'm working on a side project which involves accessing ElasticSearch with JavaScript. As part of the tests, I wanted to create an index. Here is a very simple snippet that, in my mind, should do this and print the messages returned by the ElasticSearch server:
var es = require('elasticsearch');
var es_client = new es.Client({host: "localhost:9200"});
var breaker = Math.floor((Math.random() * 100) + 1);
var create_promise = es_client.indices.create({index: "test-index-" + breaker});
create_promise.then(function(x) {
    console.log(x);
}, function(err) {
    console.log(err);
});
What happens when I go to a directory, run npm install elasticsearch, and then run this code with Node.js, is that the request is made, but the promise does not seem to return for some reason. I would expect this code to run to the end and finish once the response from the ES server comes back. Instead, the process just hangs. Any ideas why?
I know that an index can be created just by adding a document to it, but this weird behavior just bugged me, and I couldn't figure out the reason or the sense behind it.
By default the client keeps persistent connections to ElasticSearch so that subsequent requests to the same node are much faster. This has the side effect of preventing Node from closing normally until client.close() is called. You could either add that call to your callback, or disable keep-alive connections by adding keepAlive: false to your client config.
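For example, a sketch of both options applied to the snippet above (assuming the legacy elasticsearch npm client; either option alone is enough):
var es = require('elasticsearch');

// Option 1: disable persistent connections in the client config.
var es_client = new es.Client({host: "localhost:9200", keepAlive: false});

// Option 2: keep the default, but close the client once you are done,
// which lets the Node process exit normally.
es_client.indices.create({index: "test-index-1"}).then(function(resp) {
    console.log(resp);
    es_client.close();
}, function(err) {
    console.log(err);
    es_client.close();
});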

EventSource permanent auto reconnection

I am using JavaScript EventSource in my project front-end.
Sometimes the connection between the browser and the server fails, or the server crashes. In these cases, EventSource tries to reconnect after 3 seconds, as described in the documentation.
But it tries only once. If there is still no connection, the EventSource stops trying to reconnect, and the user has to refresh the browser window in order to be connected again.
How can I prevent this behavior? I need the EventSource to try reconnecting forever, not only once.
The browser is Firefox.
I deal with this by implementing a keep-alive system; if the browser reconnects for me that is all well and good, but I assume sometimes it won't work, and also that different browsers might behave differently.
I spend a fair few pages on this in chapter five of my book (Blatant plug, find it at O'Reilly here: Data Push Applications Using HTML5 SSE), but if you want a very simple solution that does not require any back-end changes, set up a global timer that will trigger after, say, 30 seconds. If it triggers then it will kill the EventSource object and create another one. The last piece of the puzzle is in your event listener(s): each time you get data from the back-end, kill the timer and recreate it. I.e. as long as you get fresh data at least every 30 seconds, the timer will never trigger.
Here is some minimal code to show this:
var keepAliveTimer = null;
var es = null;

function gotActivity() {
    if (keepAliveTimer != null) clearTimeout(keepAliveTimer);
    keepAliveTimer = setTimeout(connect, 30 * 1000);
}

function connect() {
    gotActivity();
    if (es != null) es.close(); // kill the old connection before replacing it
    es = new EventSource("/somewhere/");
    es.addEventListener('message', function(e) {
        gotActivity();
    }, false);
}

...

connect();
Also note that I call gotActivity() just before connecting. Otherwise a connection that fails, or dies before it gets a chance to deliver any data, would go unnoticed.
By the way, if you are able to change the back-end too, it is worth sending out a blank message (a "heartbeat") after 25-30 seconds of quiet. Otherwise the front-end will have to assume the back-end has died. No need to do anything, of course, if your server is sending out regular messages that are never more than 25-30 seconds apart.
If your application relies on the Last-Event-ID header, realize your keep-alive system has to simulate this; that gets a bit more involved.
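As an illustration of the heartbeat suggestion, a minimal back-end sketch in plain Node (the port and 25-second interval are assumptions):
var http = require('http');

http.createServer(function(req, res) {
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache'
    });
    // A throwaway SSE message every 25 seconds; it fires the client's
    // 'message' listener, which resets the keep-alive timer shown above.
    var heartbeat = setInterval(function() {
        res.write('data: heartbeat\n\n');
    }, 25 * 1000);
    req.on('close', function() {
        clearInterval(heartbeat); // stop when the client disconnects
    });
}).listen(8000);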
In my experience, browsers will usually reconnect if there's a network-level error but not if the server responds with an HTTP error (e.g. status 500).
Our team made a simple wrapper library to reconnect in all cases: reconnecting-eventsource. Maybe it's helpful.
Below, I demonstrate an approach that reconnects at a reasonable rate, forever.
This code uses a debounce function along with doubling of the reconnect interval. In my testing it works well: it reconnects after 1 second, then 2, 4, 8, and so on, up to a maximum of 64 seconds, at which point it keeps retrying at that rate.
function isFunction(functionToCheck) {
    return functionToCheck && {}.toString.call(functionToCheck) === '[object Function]';
}

function debounce(func, wait) {
    var timeout;
    var waitFunc;
    return function() {
        if (isFunction(wait)) {
            waitFunc = wait;
        } else {
            waitFunc = function() { return wait; };
        }
        var context = this, args = arguments;
        var later = function() {
            timeout = null;
            func.apply(context, args);
        };
        clearTimeout(timeout);
        timeout = setTimeout(later, waitFunc());
    };
}

// reconnectFrequencySeconds doubles every retry
var reconnectFrequencySeconds = 1;
var evtSource;

var reconnectFunc = debounce(function() {
    setupEventSource();
    // Double every attempt to avoid overwhelming the server
    reconnectFrequencySeconds *= 2;
    // Max out at ~1 minute as a compromise between user experience and server load
    if (reconnectFrequencySeconds >= 64) {
        reconnectFrequencySeconds = 64;
    }
}, function() { return reconnectFrequencySeconds * 1000; });

function setupEventSource() {
    evtSource = new EventSource(/* URL here */);
    evtSource.onmessage = function(e) {
        // Handle event here
    };
    evtSource.onopen = function(e) {
        // Reset reconnect frequency upon successful connection
        reconnectFrequencySeconds = 1;
    };
    evtSource.onerror = function(e) {
        evtSource.close();
        reconnectFunc();
    };
}

setupEventSource();
setupEventSource();
