I have an issue with the memory limit on my machine, so I have to keep my Node.js memory usage below 150 MB. Therefore I run my application with the --expose-gc flag and call global.gc() in the application on each cycle.
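Roughly, the setup looks like this (a minimal sketch; the 10-second interval and the logging are just examples, not exactly what my app does):

// started with: node --expose-gc app.js
// --expose-gc makes global.gc available so the app can trigger a full GC itself
setInterval(function () {
  if (global.gc) {
    global.gc(); // forces a full garbage collection
  }
  var mem = process.memoryUsage();
  console.log('rss: ' + Math.round(mem.rss / 1048576) + ' MB, heapUsed: ' + Math.round(mem.heapUsed / 1048576) + ' MB');
}, 10000);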
So my question is: are there any bad effects or drawbacks if my app runs for a long time? FYI, I started this app with PM2.
Thanks
Node.js 10.5 introduced the new worker threads feature, which makes Node.js a multi-threaded environment.
Previously, with only one thread in Node.js, there was no CPU time slicing because of its event-driven nature (if I understand correctly).
So now, with multiple threads in Node.js on one physical CPU core, how do they share the CPU? Does the OS scheduler give each thread a slice of time to run, or how does it work?
Worker threads are advertised as
The Worker class represents an independent JavaScript execution thread.
So something like starting another NodeJS instance but within the same process and with a bare minimum of a communication channel.
Worker threads in NodeJS mimic the Worker API in modern browsers (not a coincidence, NodeJS is basically a browser without UI and with a few extra JS APIs), and in that context worker threads are really native threads, scheduled by the OS.
The description quoted above seems to imply that in NodeJS too, worker threads are implemented with native threads rather than with a scheduling managed by NodeJS.
The latter would be useless, as that is exactly what the JS event loop coupled with async methods already does.
So basically a worker thread is just another "instance" (context) of NodeJS run by another native thread in the same process.
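For example, a minimal sketch with the built-in worker_threads module (file names and the workload are made up for illustration):

// main.js
const { Worker } = require('worker_threads');

const worker = new Worker('./worker.js');        // spawns a new native thread with its own JS context
worker.on('message', (msg) => console.log('from worker:', msg));
worker.postMessage(1e8);                         // the "bare minimum" communication channel

// worker.js
const { parentPort } = require('worker_threads');

parentPort.on('message', (n) => {
  // CPU-heavy work here runs on the worker's native thread,
  // so the main thread's event loop stays responsive
  let sum = 0;
  for (let i = 0; i < n; i++) sum += i;
  parentPort.postMessage(sum);
});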
Being a native thread, it is managed and scheduled by the OS. And just like you can run more than one program on a single CPU, you can do that with threads (fun fact: in many OSes, threads are the only schedulable entities; programs are just a group of threads with a common address space and other attributes).
As NodeJS is open source, it is easy to confirm this; see the Worker::StartThread and Worker::Run functions.
The new thread will execute JS code just like the main one but it has been limited in the way it can interact with the environment (particularly the process itself).
This is in line with the JS approach to multithreading where it is more of "two or more message loops" than real multithreading (where threads are free to interact with each other with all the implication at the architectural level).
So, I'm trying to conduct a test to see how much WebWorker Threads (https://github.com/audreyt/node-webworker-threads) can improve CPU-intensive tasks with NodeJS on a multi-core system.
I actually got this working on a VM with a single core assigned at work, but when I tried it on my home VM with 4 cores, I'm getting a Segmentation Fault after 15-20 requests.
I've got my project up at https://github.com/WakeskaterX/NodeThreading.git
I have tried eliminating pieces to see why I'm getting the SegFault, but even just returning static numbers throws the SegFault after 15-20 requests.
For the loadtest command I'm running:
loadtest -c 4 -t 20 http://localhost:3030/fib?num=30
It runs just fine when it calculates the Fibonacci sequence synchronously, but as soon as it hits a web worker it segfaults and core dumps. Perhaps this is related to the WebWorker-Threads code on the back end, but I'm mainly wondering why it's happening and how I can debug it further or fix it so I can test background threading in NodeJS.
This is a variable lifetime issue. In general, a long-running worker needs to be assigned into an object (or otherwise kept reachable), instead of a lexical variable; the latter is garbage-collected away when the scope exits.
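Roughly, the broken vs. fixed pattern looks like this (a sketch only; the handler and worker bodies are made up, using the browser-style Worker API that webworker-threads exposes):

var Worker = require('webworker-threads').Worker;

// Broken: `worker` is only referenced by a lexical variable inside the handler,
// so once the handler's scope is gone the worker can be garbage-collected
// while the background thread is still computing.
function handleRequest(req, res) {
  var worker = new Worker(function () {
    this.onmessage = function (event) {
      postMessage(event.data * 2);   // placeholder for the real fib work
    };
  });
  worker.onmessage = function (event) { res.end(String(event.data)); };
  worker.postMessage(30);
}

// Fixed: keep each worker reachable from a longer-lived object and
// drop the reference only after the result has come back.
var workers = {};
var nextId = 0;

function handleRequestFixed(req, res) {
  var id = nextId++;
  workers[id] = new Worker(function () {
    this.onmessage = function (event) {
      postMessage(event.data * 2);   // placeholder for the real fib work
    };
  });
  workers[id].onmessage = function (event) {
    res.end(String(event.data));
    delete workers[id];
  };
  workers[id].postMessage(30);
}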
See https://github.com/WakeskaterX/NodeThreading/pull/1 for the pull request that fixes the issue.
Is there a way to have JavaScript code that runs as an automated test and measures a web app's memory consumption?
What I am looking for is a way to prevent memory leaks in an Angular app by having automated tests, as part of the CI build process, inform me about memory issues as soon as they arise. I already have many JavaScript tests running in PhantomJS via Jasmine.
I would get that information from the operating system by grepping ps aux for the phantom process.
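A rough sketch of how that could be wired into a Node-based CI step (the process name, column parsing, and the memory budget are assumptions):

var execSync = require('child_process').execSync;

// RSS (resident set size, in KB) is column 6 of `ps aux`; the [p] trick
// keeps the grep from matching itself.
function phantomMemoryMB() {
  var out = execSync("ps aux | grep '[p]hantomjs' | awk '{print $6}'").toString();
  var totalKb = out.trim().split('\n').filter(Boolean)
    .reduce(function (sum, line) { return sum + parseInt(line, 10); }, 0);
  return totalKb / 1024;
}

// Fail the build if the test run pushed PhantomJS past an agreed budget.
var budgetMB = 300; // assumption, tune for your app
if (phantomMemoryMB() > budgetMB) {
  console.error('PhantomJS memory usage exceeded ' + budgetMB + ' MB');
  process.exit(1);
}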
A Meteor 0.8.2 app is running on an Ubuntu 14.04 server with 2 GB of memory and 2 CPU cores. It was deployed using mup. However, CPU utilization is very high; htop reports a load average of 2.72.
Question: How do I find out which part of the app is causing such high CPU utilization? I used Kadira but it does not reveal anything taking up a lot of CPU load, AFAIK.
Does Meteor only use a single core?
I had a similar problem before with Meteor 0.8.2-0.8.3. Here is what I did to reduce the CPU usage; hope you find it useful.
double-check your functions; make sure every function has a proper return and properly catches errors
try to use a replica set and oplog tailing (convert your standalone mongo to a replica set)
write scripts to auto-kill and respawn a node process if it exceeds 100% CPU usage
utilize multi-core capability by starting 2 processes (edit: you have done this already) and configure and set up load balancing and a reverse proxy
make sure to review your publications and subscriptions and limit what data is sent to the client (simply avoid something like Collection.find() with no selector); see the sketch after this list
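For that last point, a minimal sketch of a restricted publication (the collection, field names and the limit are made up):

// server: publish only the fields the client actually needs, and cap the
// result set, instead of shipping the whole collection with Posts.find()
Meteor.publish('recentPosts', function () {
  return Posts.find(
    { authorId: this.userId },              // only this user's documents
    {
      fields: { title: 1, createdAt: 1 },   // only the fields the UI shows
      sort: { createdAt: -1 },
      limit: 50
    }
  );
});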
Personally I recommend Phusion Passenger; it makes deploying Meteor applications easy, and I have used it for several projects without any major problems.
One more thing: avoid running the processes as root (or a privileged user); you should be running your apps as another user, such as www-data, for obvious security reasons.
P.S. The multiple mongo processes showing up in htop are threads under a master process; you can view this in tree mode by pressing F5.
I'm writing an application that makes heavy use of the http.request method.
In particular, I've found that sending 16+ ~30 KB requests simultaneously really bogs down a Node.js instance on a machine with 512 MB of RAM.
I'm wondering if this is to be expected, or if Node.js is just the wrong platform for outbound requests.
Yes, this behavior seems perfectly reasonable.
I would be more concerned if it was doing the work you described without any noticeable load on the system (in which case it would take a very long time). Remember that node is just an evented I/O runtime, so you can have faith that it is scheduling your I/O requests (about) as quickly as the underlying system can; hence it's using the system to its (nearly) maximum potential, hence the system being "really bogged down".
One thing you should be aware of is the fact that http.request does not create a new socket for each call. Each request occurs on an object called an "agent", which contains a pool of up to 5 sockets. If you are using the v0.6 branch, then you can raise this limit by setting:
http.globalAgent.maxSockets = Infinity
Try that and see if it helps.
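For example, a minimal sketch of raising the limit before firing the parallel requests (the URLs and counts are placeholders):

var http = require('http');

// Lift the per-host socket cap so the outbound requests are not
// queued behind the default agent pool of 5 sockets.
http.globalAgent.maxSockets = Infinity;

for (var i = 0; i < 16; i++) {
  http.get('http://example.com/item/' + i, function (res) {
    var size = 0;
    res.on('data', function (chunk) { size += chunk.length; });   // ~30 KB bodies stream in here
    res.on('end', function () { console.log(res.statusCode, size + ' bytes'); });
  }).on('error', function (err) { console.error(err.message); });
}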