Performance testing on node.js "net" - javascript

Does anyone have any recommendations on how to get started with node.js "net" performance testing?
I want to see how my app will scale and want to test 10,000+ concurrent connections!
EDIT: I want to know so I can see if my Ubuntu server configs are correct, etc.

Professional performance-testing tools are agnostic to your underlying technology (Node.js, .NET, etc.); they see only the output (HTTP requests and responses), so any such tool will do.
There's HP's LoadRunner and a lot of others. I have used WebLOAD, which is more cost-effective and a bit easier to use.
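If you'd rather roll your own before committing to a tool, a client that simply dials a large number of TCP sockets gets you most of the way. A minimal sketch in plain Node with no external packages; the host, port, connection count, and pacing below are placeholder values:

    // loadtest.js - open many concurrent TCP connections against a "net" server.
    // HOST, PORT and TOTAL are placeholders; run several copies of this script
    // (or several client machines) to push past per-process limits.
    var net = require('net');

    var HOST = '127.0.0.1';
    var PORT = 8000;
    var TOTAL = 10000;       // target concurrent connections
    var open = 0;

    function connect() {
      var socket = net.connect(PORT, HOST, function () {
        open++;
        if (open % 1000 === 0) console.log(open + ' connections open');
      });
      socket.on('error', function (err) {
        console.error('failed at ' + open + ' sockets: ' + err.message);
      });
    }

    // stagger the dials so the server's accept backlog isn't exhausted at once
    var dialed = 0;
    var timer = setInterval(function () {
      for (var i = 0; i < 100 && dialed < TOTAL; i++, dialed++) connect();
      if (dialed >= TOTAL) clearInterval(timer);
    }, 100);

Remember to raise the file-descriptor limit (ulimit -n) on both the client and the server, or you will hit EMFILE long before 10,000.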

10,000 concurrent connections. Hmmm. I would think that such a load would have to be tied to a user population for your app somewhere in the 500,000-2,000,000 range, at a 2% to 0.5% level of concurrency respectively. If this were an internal-facing corporate app, then your user population expectations would be somewhere in the 83,333 (12%) to 125,000 (8%) range. These concurrency models come from 15 years of observing levels of concurrency versus the defined user population for a given application-facing model (internal corporate vs. public internet).
The reason I bring this up is that you may be over-stressing your component relative to its defined use, and as a result you could end up chasing engineering ghosts. That can eat into your budget and your availability to address other issues that may show up in production use.
Just food for thought,
James Pulley

From the video it seems that memory usage doesn't budge because Node doesn't spawn new processes, which is precisely why it has picked up such a huge following. That's what event-driven, non-blocking I/O can do.


Node.js and fragmentation

Background: I come from the Microsoft world, where I used to host websites on IIS. Experience taught me to recycle my application pool once a day in order to eliminate weird problems caused by fragmentation. Recycling the app pool basically means restarting your application without restarting IIS itself. I also watched a lecture explaining how Microsoft greatly reduced fragmentation in .NET 4.5.
Now I'm deploying a Node.js application to a production environment and I have to make sure it works flawlessly all the time. I originally planned to have my app restart once a day. Then I did some research looking for clues about fragmentation problems in Node.js. The only thing I've found is a short paragraph from an article describing GC in V8:
To ensure fast object allocation, short garbage collection pauses, and no memory fragmentation, V8 employs a stop-the-world, generational, accurate garbage collector.
This statement is really not enough for me to give up on building a restart mechanism for my app, but on the other hand I don't want to do the work if there is no real problem.
So my question is:
Should I or shouldn't I restart my app every now and then in order to prevent fragmentation?
Implementing a server restart before you know that memory consumption is indeed a problem is a premature optimization. As such, I don't think you should do it until you actually find that it is a problem. You will likely find more important things to optimize than memory consumption.
To figure out if you need a server restart, I recommend doing the following:
Set up a monitoring tool like https://newrelic.com/ that lets you monitor your performance.
Monitor your memory continuously. Try to see if there is a steady increase in the amount of memory consumed, or if it levels off.
Decide upon an acceptable threshold before you need to act. For example, once your app consumes 60% of system memory you need to start thinking about a restart and decide upon the restart interval (see the sketch after this list).
Decide whether you are OK with having "downtime" while restarting the server or not. If you don't want downtime, you may need to build a proxy layer to direct traffic.
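As a minimal sketch of points 2 and 3, assuming the 60% figure from above and plain Node (the alert action is a placeholder):

    // memwatch.js - compare this process's RSS to total system memory once
    // a minute and warn when it crosses a threshold.
    var os = require('os');

    var THRESHOLD = 0.6;  // 60% of system memory, per the example above

    setInterval(function () {
      var rss = process.memoryUsage().rss;
      var ratio = rss / os.totalmem();
      console.log('rss: ' + (rss / 1048576).toFixed(1) + ' MB (' +
                  (ratio * 100).toFixed(1) + '% of system memory)');
      if (ratio > THRESHOLD) {
        // placeholder: alert someone, or schedule a graceful restart
        console.warn('memory threshold exceeded - consider a restart');
      }
    }, 60 * 1000);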
In general, I'd recommend server restarts for all dynamic, garbage collected languages. This is fairly common in those types of large applications. It is almost inevitable that a small mistake somewhere in your code base, or one of the libraries you depend on will leak memory. Even if you fix one leak, you'll get another one eventually. This may frustrate your team, which will basically lead to a server restart policy, and a definition of what is acceptable in regards to memory consumption for your application.
I agree with @Parris. You should probably figure out whether you actually need a restart policy first. I would suggest using pm2 (docs here). Even if you don't want to sign up for Keymetrics, it's a pretty good little process manager and really quick to set up. You can get a report of memory usage from the command line.
Also, if you start in cluster mode, you can call pm2 restart my_app and the first worker will probably be back up before the last one is taken offline (this is an added benefit; the real reason for having 8 processes is to utilize all 8 cores). If you are adamant about avoiding downtime, you could restart them one by one according to their id.
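For reference, a minimal sketch of such a cluster-mode setup as a pm2 config file (the app name and script path are made up; check the pm2 docs for the config format your version expects):

    // ecosystem.config.js
    module.exports = {
      apps: [{
        name: 'my_app',
        script: './server.js',
        instances: 8,          // one worker per core
        exec_mode: 'cluster'   // required for the rolling-restart behaviour
      }]
    };

Start it with pm2 start ecosystem.config.js; pm2 also documents pm2 reload my_app for zero-downtime cycling, if downtime matters to you.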
I agree with @Parris that this seems like a premature optimization. Also, restarting is not a solution to the underlying problem; it's a treatment for the symptoms.
If memory errors are a prevalent issue for your Node application, then some thought as to why this fragmentation occurs in your program in the first place would be a valuable effort. Understanding why memory errors occur after a program has been running for a long period, and refactoring the architecture of your program to solve the root of the problem, is a better solution in my eyes than just addressing the symptoms.
I believe two things will benefit you.
Immutable objects will help a lot. They are much more predictable than mutable objects and are not affected by how long the project has been live. Also, since immutable objects are read-only blocks of memory, they can be faster than mutable objects, where the engine has to spend resources deciding whether to read or write the memory block that stores the object. I currently use the library Immutable.js and it works well for me. There are others as well, like deep-freeze; however, I have never used it.
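As a quick sketch of the difference, assuming the library meant above is Immutable.js (npm install immutable):

    var Immutable = require('immutable');

    // every "update" returns a new map; the original is never touched
    var user = Immutable.Map({ name: 'Ada', visits: 1 });
    var updated = user.set('visits', 2);

    console.log(user.get('visits'));     // 1 - unchanged
    console.log(updated.get('visits'));  // 2 - a new object

    // plain objects can be made read-only without a library, too
    var config = Object.freeze({ port: 8000 });
    config.port = 9000;                  // ignored (throws in strict mode)
    console.log(config.port);            // 8000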
Make sure to manage your application's processes correctly. Memory leaks are the second big contributor to this problem that I have experienced. Again, this is solved by thinking about how your application is structured and how user events are handled: make sure that once something is no longer used by the client it is properly removed from the heap; if it is not, the heap keeps growing until all memory is consumed, causing the application to crash. Node is a C++ program driven by Google's V8 JavaScript engine, and the heap lives inside V8's managed memory.
You can use Node.js's process.memoryUsage() to monitor memory usage. As for managing the heap, V8 uses two garbage collection strategies: Scavenge, which is very quick but incomplete, and Mark-Sweep, which is slower but frees all non-referenced memory.
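You can actually watch both collectors at work by running any allocation-heavy script under V8's --trace-gc flag; a minimal sketch:

    // gc-demo.js - run as `node --trace-gc gc-demo.js`. V8 logs each
    // collection: short Scavenge pauses on the young generation, and the
    // slower Mark-sweep passes that free all non-referenced memory.
    var garbage = [];

    setInterval(function () {
      for (var i = 0; i < 10000; i++) garbage.push({ filler: new Array(100) });
      if (garbage.length > 100000) garbage = [];  // drop references for the GC
      var heap = process.memoryUsage();
      console.log('heapUsed ' + (heap.heapUsed / 1048576).toFixed(1) + ' MB / ' +
                  'heapTotal ' + (heap.heapTotal / 1048576).toFixed(1) + ' MB');
    }, 500);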
Refer to this blog post for more info on how to manage your heap and your memory in V8, which runs Node.js.
So the responsible approach to your implementation is to keep a close eye on open processes, develop a deep understanding of the heap, and know how to free non-referenced memory blocks. Building your project with this in mind also makes it a lot more scalable.

High CPU Utilization for Meteor.js

A Meteor.js 0.8.2 app is running on an Ubuntu 14.04 server with 2 GB of memory and 2 CPU cores. It was deployed using mup. However, CPU utilization is very high; htop reports a load average of 2.72.
Question: how do I find out which part of the app is causing such high CPU utilization? I used Kadira, but as far as I can tell it does not reveal anything taking up a lot of CPU load.
Does Meteor only use a single core?
I had a similar problem before with Meteor 0.8.2-0.8.3. Here is what I did to reduce the CPU usage; hope you find it useful.
double-check your functions: ensure every function has a proper return and properly catches errors
use a replica set and Mongo oplog tailing (convert your standalone mongod to a replica set)
write scripts that automatically kill and respawn a Node process if it exceeds 100% CPU usage
utilize multi-core capability by starting 2 processes (edit: you have done this already) and set up load balancing and a reverse proxy
review your publications and subscriptions and limit what data is sent to the client; simply avoid something like Collection.find(); (see the sketch after this list)
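For the last point, a minimal sketch of a trimmed-down publication (the collection, field names, and limit are made up for illustration):

    // server-side: send only the fields and rows the client actually needs,
    // instead of the whole collection via Messages.find()
    Meteor.publish('recentMessages', function (roomId) {
      check(roomId, String);
      return Messages.find(
        { roomId: roomId },
        {
          fields: { text: 1, author: 1, createdAt: 1 },  // omit heavy fields
          sort: { createdAt: -1 },
          limit: 50                                      // cap the result set
        }
      );
    });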
Personally I recommend Phusion Passenger; it makes deploying Meteor applications easy, and I have used it for several projects without any major problems.
One more thing: avoid running the processes as root (or a privileged user); you should run your apps as another user like www-data, for obvious security reasons.
P.S. The multiple mongo processes showing in htop are threads under a master process; you can view them in tree mode by pressing F5.

lots of simultaneous connections

I'm working on a chat application. The underlying server uses Node.js, and client/server communication goes over WebSockets.
So the question is: how many simultaneous connections can such a server handle (without visible lag)? Approximately, of course, and assuming the server is a very powerful machine. I know this is not an easy question to answer, but I just want ideas, some approximations... or at least upper and lower bounds. Of course I'm going to do some practical tests, but the theory may help me a bit.
Also, a second question related to the first: is it possible to split a Node.js application across multiple machines? Keep in mind that most of the data is held in the machines' memory rather than in a database.
Waiting for replies. :)
You'll want to make sure you run Node.js on top of epoll/kqueue and tune your OS for high TCP connection counts.
Here are a couple of measured numbers for a publish/subscribe system based on Autobahn WebSockets:
180k concurrent, active WebSocket connections
12k/s dispatched pubsub messages
4k/s WebSocket opening handshakes
<8kB per WebSocket connection
This is on a FreeBSD i386 virtual machine configured with 2 cores and 2GB RAM.
Autobahn WebSockets is Python/Twisted based and runs on a kqueue reactor.
I've made a simple benchmark of a multiroom chat application. Just remember to monitor the CPU% when running it; it's possible that the benchmark app will drain the CPU faster than the server does!
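The link to that benchmark hasn't survived here, but for orientation, this is roughly the shape of the server under test; a minimal sketch assuming the npm ws package:

    // chat-server.js - a bare broadcast server (npm install ws);
    // the port and protocol are made up for illustration.
    var WebSocket = require('ws');
    var wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', function (socket) {
      socket.on('message', function (message) {
        // relay to every other open connection
        wss.clients.forEach(function (client) {
          if (client !== socket && client.readyState === WebSocket.OPEN) {
            client.send(message);
          }
        });
      });
    });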

How much does it cost in terms of performance to use console.log in nodejs and in browsers?

Let's say you log certain things in your Node.js app or in a browser.
How much does this affect performance / CPU usage versus removing all these logs in production?
I'm not asking just out of curiosity; I want to know how much "faster" things would run without it, so I can take that into account when developing.
It can cost a lot, especially if your application is heavily based on a loop, like a game or a GUI app that updates in real time.
I once developed an educational physics app using <canvas>, and with logging active within the main application loop the frame rate easily dropped from 60 fps to 28 fps! That was quite catastrophic for the user experience.
The overall tip for browser applications: do not use console.log() in production for loop-based applications, especially ones that need to update a graphical interface within the loop.
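You can measure the cost on your own workload with a crude micro-benchmark; absolute numbers will vary wildly depending on where stdout goes (terminal, file, or pipe):

    function work(log) {
      var sum = 0;
      for (var i = 0; i < 10000; i++) {
        sum += i;
        if (log) console.log('i = ' + i);  // the line under test
      }
      return sum;
    }

    console.time('without log');
    work(false);
    console.timeEnd('without log');

    console.time('with log');
    work(true);
    console.timeEnd('with log');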
For Node: is node.js' console.log asynchronous?
I imagine it's implemented similarly in some of the browsers.
I'm not familiar with Node.js; however, it's typically not a good idea to log anything except critical errors in a production environment. Unless Node.js offers a logging utility like log4j, you should look at something like log4js (I haven't used it; it was just the first Google result).
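The appeal of a level-based logger is that debug output can be switched off in production without deleting any code. A sketch of basic log4js usage (check its docs, since configuration details vary by version):

    var log4js = require('log4js');
    var logger = log4js.getLogger();

    // only errors get through in production; everything logs in development
    logger.level = process.env.NODE_ENV === 'production' ? 'error' : 'debug';

    logger.debug('request payload received');  // suppressed in production
    logger.error('db connection lost');        // always logged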

Adobe AIR, memory leaks

We all know how web browsers (such as Firefox) tend to balloon in memory consumption because we continuously execute JavaScript code (from websites) that is prone to memory leaks.
I am debating developing a desktop app, and given my experience with JavaScript/CSS/HTML, I thought I would give AIR a try; this way I don't have to use Java (for example) and deal with learning all its Swing GUI stuff.
The problem is that I worry about memory leaks in AIR, since AIR is essentially a web browser with an API layer for interacting with the operating system.
Is it plausible to worry about memory leakage in AIR? What should I do about it?
My name is Rob Christensen and I am product manager on Adobe AIR. First, let me say that it is quite easy to build a desktop application, regardless of underlying technology, that consumes a large amount of memory and/or does not free up memory.
In the next release of AIR, we are looking at providing some additional capabilities to the AIR runtime to make it easier to identify memory leaks for JavaScript-based applications. Developers that are building Flash or Flex based applications can already take advantage of the memory profiler included in Flex Builder to track this down. We are hoping to do something similar for JavaScript developers as well.
In my experience talking to developers, memory leaks often occur when objects in memory are never cleaned up. For example, imagine a Twitter client that lists tweets from users based on a search keyword. Over time, more results show up and the list becomes longer. If there is no limit on the maximum number of tweets visible, memory will, of course, go up over time. Instead, the application should impose a reasonable limit on the number of items that appear in that list.
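In code, that limit can be as simple as capping the backing array so old entries become collectible; a sketch with made-up numbers:

    var MAX_TWEETS = 200;  // an arbitrary cap for this example
    var tweets = [];

    function addTweet(tweet) {
      tweets.push(tweet);
      if (tweets.length > MAX_TWEETS) {
        tweets.shift();  // drop the oldest item (and its DOM row, in a real app)
      }
    }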
There are some resources available that describe best practices around handling memory in AIR. Though the examples in the article below are mostly written in ActionScript, the same concepts apply to JavaScript as well.
Performance-Tuning AIR applications
http://www.adobe.com/devnet/air/articles/air_performance.html
If there are memory leaks in the runtime, we jump on these as quickly as we can. We encourage developers to let us know about such issues by sending them to our team using the following feedback form (www.adobe.com/go/wish).
If you are using an Ajax framework, you may want to look into whether there are known issues with memory leaks for that particular framework.
So, to summarize: yes, you should always worry about memory when building a desktop application, whether with AIR or C++. As you develop your application, you should monitor its memory usage so that you can identify any issues sooner rather than later. One way to do this is to run longevity tests: keep your application open overnight and see if memory is creeping up.
In general, the tools available for browsers are very limited as well. I expect this will change soon as browser vendors also start providing more hooks into their browsers for identifying memory usage. Hope this helps.
Thank you!
-Rob
Product Manager, Adobe AIR
Sure. I've seen AIR apps on Linux swallow gigabytes of memory over time. It's a real blocker for me and stops me using them.
That said, other people on other platforms have no issue with it. Ultimately you need to decide what most of your market will be using and how affected they'll be by any issues in AIR (or other).
If it's not that important (but it's still an issue), submit bug reports and hope Adobe fixes things.
