Assume you cluster your Node app on a 4-CPU system into 4 workers (each child process is a new V8 instance), and each worker starts with about 10 MB of memory (the default).
Is there a way to start them with more memory, e.g.
--max-old-space-size=...
And how can I pass in more V8-settings to workers?
( + how do strongloop and PM2 handle it? ;) )
You can use cluster.setupMaster() to set the arguments passed to worker processes. Specifically, there is an undocumented execArgv setting that defaults to process.execArgv; you should be able to pass any array of Node/V8-specific flags there.
Application arguments are passed via the args setting.
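A minimal sketch of what that could look like; worker.js, the memory value, and the application flag are just placeholder examples:

const cluster = require('cluster');

if (cluster.isMaster) {
  cluster.setupMaster({
    exec: 'worker.js',                       // worker entry point (example path)
    execArgv: ['--max-old-space-size=2048'], // node/V8 flags passed to each worker
    args: ['--some-app-flag']                // plain application arguments
  });
  for (let i = 0; i < 4; i++) {
    cluster.fork(); // each fork is a new V8 instance started with the flags above
  }
}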
If you are using PM2, it can make use of all of your CPUs on demand, and it provides a lot of configuration options for load balancing and performance.
If you want to utilize more CPUs with it, just increase the number of instances:
pm2 start app.js -i 2
where -i sets the number of instances you want to start.
While using PM2, the following steps are important:
pm2 stop all
pm2 delete all
pm2 start app.js -i 2
Always use pm2 delete all to fully unregister the processes; if you only stop them, they stay registered and still reserve the CPU.
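To tie this back to the original question: as far as I know, PM2 can also forward node/V8 options to each instance via its --node-args option (the memory value below is just an example):
pm2 start app.js -i 4 --node-args="--max-old-space-size=2048"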
Related
I'm using the Node.js package "chokidar" for this use case:
I'm just watching a single directory on Linux, not recursively
I only need it to watch for add events, for when files are atomically moved into the watched directory (they're moved from another directory on the same filesystem once all changes are done)
I don't need it to look at pre-existing files in the dir at all, just watch for new ones since the app started
Problem:
When there is a large number of pre-existing files (like 80k) in the directory when I start my node.js app, Chokidar seems to put a watch on all of them, even though I don't care about watching existing files at all
Initially this was hitting the kernel watch limit, but I've already solved that with:
...
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
The remaining issue is that all these pre-existing files are being watched, which uses a massive amount of RAM for no reason, as I don't need it watching those files at all.
Is there any way to have Chokidar just entirely ignore all these pre-existing files, and do nothing but watch this single directory for add events for new files only?
I know about increasing Node's RAM limits (which didn't work in this case anyway), but I feel like it shouldn't be using all this RAM in the first place, and I want it to be efficient on a small VPS. I'd like to solve the RAM usage issue rather than just give it more RAM than it should need in the first place.
I'm using this code:
chokidar.watch("/my-watched-dir", {
  ignoreInitial: true,
})
.on('add', (filepath) => { ... });
I've also tried setting the depth option to 0 and 1 too.
The memory usage climbs very high as soon as the app starts (even before the 1st new file appears triggering the add event for the first time).
And there's no problem when the number of pre-existing files is smaller, so it's not an issue related to the throughput of new files after the app starts.
As far as I know, libraries like chokidar on Linux platforms will directly use fs.watch and fs.watchFile provided by Node.js.
To be cross-platform, these two APIs always listen for all events related to paths, so the answer is that you can't use chokidar for your purposes.
If you wish to use less memory, either poll manually or use a native Linux module that has direct access to inotify.
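If a dedicated inotify module feels like overkill, a bare fs.watch on the single directory is one low-memory option, since it puts one watch on the directory rather than one per file. This is only a sketch: the path is a placeholder, and fs.watch's usual caveats apply (filename can be null on some platforms, and 'rename' also fires on deletes, hence the stat check):

const fs = require('fs');
const path = require('path');

const watchedDir = '/my-watched-dir'; // placeholder path

fs.watch(watchedDir, (eventType, filename) => {
  if (eventType !== 'rename' || !filename) return;
  const fullPath = path.join(watchedDir, filename);
  // 'rename' fires for both adds and removes, so confirm the file exists
  fs.stat(fullPath, (err, stats) => {
    if (err || !stats.isFile()) return;
    console.log('new file:', fullPath); // handle the newly added file here
  });
});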
So I am running a service in Node that already uses the Node.js cluster module, meaning the service forks its own workers... now I want to use PM2, and PM2 has its own cluster mode.
I wonder if it's a good idea to use both of them at the same time, or should I use only one of them for better performance and so on...
Any help would be appreciated
To take the complexity out of your architecture, I would recommend using PM2. It lets you efficiently manage multiple processes. It has many features, including:
Auto-restarting an app when the code changes, via Watch & Reload.
Easy log management for processes.
Monitoring capabilities for each process.
An auto restart if a process reaches its memory limit or crashes.
Keymetrics monitoring over the web.
As the processes are separated, you can now start/stop/restart them with your pm2.config.js, i.e.
pm2 start pm2.config.js // start all processes
pm2 stop app // stop app processes
pm2 restart smsWorker // restart smsWorker
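A minimal pm2.config.js along those lines might look like the sketch below; the app names mirror the commands above, and the script paths are placeholders:

module.exports = {
  apps: [
    {
      name: 'app',
      script: './app.js',
      instances: 'max',     // one instance per CPU core
      exec_mode: 'cluster'  // let PM2 handle clustering and load balancing
    },
    {
      name: 'smsWorker',
      script: './workers/smsWorker.js', // placeholder path
      instances: 1
    }
  ]
};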
I want to deploy my Node application as a single executable file. Is it possible by using systemd or containers? I don't have enough knowledge of systemd and containers. Please help me if anybody knows about it.
Where I work, we use pm2 to run Node.js applications.
It allows you to run multiple instances on one server, monitors them, and restarts them if needed (on failure, or when a memory limit you provide is exceeded).
If you insist on going with systemd, you will have to create a unit - a file that describes the execution path of your application for systemd.
It would usually be in /usr/lib/systemd/system
You would have to create a file ending with ".service"
[Unit]
Description=My NodeJS App
[Service]
ExecStart=/usr/bin/node /path/to/my/app/index.js
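To have the unit start at boot you would typically also add an [Install] section (for example WantedBy=multi-user.target), then reload systemd and enable it; the unit file name below is just an example:
sudo systemctl daemon-reload
sudo systemctl enable --now my-node-app.service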
For some backstory and reference, here are some quotes from a few Heroku documentation pages.
From the Heroku Node.js Support > Activation:
The Heroku Node.js buildpack is employed when the application has a package.json file in the root directory.
From Heroku Node.js Support > Default web process type:
First, Heroku looks for a Procfile specifying your process types.
If no Procfile is present in the root directory of your app during the build process, your web process will be started by running npm start, [...]
From Process Types and the Procfile > Process types as templates:
A Procfile contains a number of process type declarations, each on a new line. Each process type is a declaration of a command that is executed when a dyno of that process type is started.
For example, if a web process type is declared, then when a dyno of this type is started, the command associated with the web process type, will be executed. This could mean starting a web server, for example.
I have a package.json file in the root (which will activate the Node.js buildpack), and I've also included a Procfile in the root with the following contents:
service: npm start
I would assume that not defining a web dyno would cause it to not be created; only the service dyno should be created, following the configuration declared in the Procfile.
Instead, what happened is that an active web dyno was automatically created using npm start and an inactive service dyno was created using the definition in Procfile. I then had to:
heroku ps:scale web=0
heroku ps:scale service=1
I can definitely imagine wanting to run a Node.js "service" application on Heroku that does not accept any incoming connections, only making outgoing ones. Is there a way to configure the Node.js buildpack to not automatically create a web dyno when one is not defined? I've looked through lots of documentation looking for a way to either: (1) define it as such or (2) remove the automatically generated web dyno; but, I haven't found anything.
Thanks for the help!
I ended up opening a helpdesk ticket with Heroku on this one. Got a response from them, so I'll post it here. Thanks Heroku support!
The short answer is that, no, currently you'll need to heroku scale web=0 service=1 in order to run a service without a public web process. For a longer explanation:
Early on, the Node.js Buildpack checked for the presence of a Procfile and, if missing, created a default one with web: npm start. This made it easy to create apps without a web process, since you could just provide a Procfile that defined some processes, omitting web from the list.
However, as more and more users needed arrays of buildpacks instead of a single one, that solution created issues. Node is the most popular first buildpack, since it's frequently used by Java, Python, PHP, and Ruby apps to build front-end assets. Whenever an app without a Procfile ran Node first, and another buildpack second, Node would inject its own default Procfile (web: npm start), and the second buildpack would then not create its default Procfile as one already existed in the filesystem. So injecting a default Procfile when one is missing from the app creates problems downstream for multilingual apps.
So, we stopped creating a default Procfile and instead used default_process_types in bin/release. This fixes the issue of subsequent buildpacks inheriting incorrect default Procfiles, but since default_process_types is extended rather than replaced by the Procfile process list, apps without a web process defined in their Procfile will get the default web process merged in. This is why web appears even without a web entry in Procfile.
We also don't want to surprise any customers with unexpected bills. Some apps have many process types, some of which are only to be run occasionally, some limited to a single instance, some which need to be scaled up and down, etc, so defaulting everything to 1 rather than 0 could cause extra billing as well as app malfunctions. This is why non-web processes are scaled to zero by default.
I just ran into the same problem and worked around it by doing this in my Procfile, after reading Shibumi's answer:
web: echo "useless"
service: node index.js
Is it possible to update a route, model, or controller.js file without restarting the Node.js server?
I'm currently dealing with a client who wants constant, very frequent changes to the application, and the application deals with user sessions etc. Whenever we make any changes to the application, it requires a restart for the update to be reflected, which is very expensive in a high-traffic situation.
I have seen some server applications providing a feature called Rolling Restart, but again I'm not sure whether it is a good way to maintain the user session across the restart event. Or do we have any other solution to deal with this kind of situation?
You can restart a server without downtime, yes. I recommend you take a look at PM2: https://github.com/Unitech/pm2
You can have multiple instances of Node running, and when you trigger a restart it does it gradually, so that you don't have downtime. It also distributes load to the different instances running, so it speeds up your app. Hope this helps :-)
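For reference, a zero-downtime reload with PM2 usually looks something like this; the app name api is just an example:
pm2 start app.js -i max --name api   // run the app in cluster mode, one instance per CPU
pm2 reload api                       // restart the instances one by one, without downtime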
Nodemon is what I have used before and I was very happy with it.
Install
npm install -g nodemon
then run your app with
nodemon [your node app]
Done
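If you only want nodemon to react to changes in specific folders (for example your routes and controllers), the --watch option can be passed more than once; the folder names here are just placeholders:
nodemon --watch routes --watch controllers [your node app]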