I am looking for a way to investigate one of our Node.js services that keeps failing with the error
"FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory"
during performance testing.
I am looking for a package or program that will warn me when there is a memory leak, and ideally tell me what exactly is leaking. I know about heap-dump snapshots, but most packages save the snapshot to a file on the pod, which disappears when the pod crashes, and taking the snapshot itself consumes a lot of memory.
Most of the articles I found are very old and use packages that are already deprecated, or they just take snapshots and open them in the Chrome DevTools, and nobody seems to explain what the columns in the snapshot actually mean.
Long story short: do you have any suggestions for how to investigate memory issues, or which program/package to use, in a Node.js service that runs on Kubernetes and receives most of its input via a message queue rather than a REST API?
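One low-overhead starting point, regardless of whether the traffic arrives over a message queue or REST, is to have the service sample its own memory statistics and log a warning on sustained heap growth. Below is a minimal sketch using only the built-in process.memoryUsage() and v8.getHeapStatistics(); the interval and streak threshold are arbitrary illustrative choices, not tuned recommendations:

    // memory-watch.js: warn when the heap keeps growing between samples.
    // Interval and streak threshold are illustrative assumptions only.
    const v8 = require('v8');

    const SAMPLE_INTERVAL_MS = 30000;
    const GROWTH_SAMPLES = 10; // consecutive growing samples before warning

    let lastHeapUsed = 0;
    let growthStreak = 0;

    setInterval(() => {
      const { heapUsed, rss } = process.memoryUsage();
      const limit = v8.getHeapStatistics().heap_size_limit;

      growthStreak = heapUsed > lastHeapUsed ? growthStreak + 1 : 0;
      lastHeapUsed = heapUsed;

      if (growthStreak >= GROWTH_SAMPLES) {
        console.warn(
          'possible leak: heapUsed grew for ' + growthStreak + ' samples; ' +
          'heapUsed=' + Math.round(heapUsed / 1048576) + 'MB ' +
          'rss=' + Math.round(rss / 1048576) + 'MB ' +
          'limit=' + Math.round(limit / 1048576) + 'MB'
        );
      }
    }, SAMPLE_INTERVAL_MS).unref(); // don't keep the process alive just for sampling

Shipping that warning to your log aggregator instead of stdout means the evidence survives the pod crash, which is the main weakness of file-based heap dumps here.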
I've encountered a problem whose solution I think I understand (paying for a Heroku plan with more memory), but I do not understand the origin of the problem, and it is driving me crazy. I would appreciate it if anyone could help me figure it out! Hopefully others in a similar situation can refer back to this in the future.
I'm encountering an error similar to the one described here:
Heroku server crashes with "JavaScript heap out of memory" when deploying 'react-admin' app
I understand that the free tier of Heroku limits me to around 512 MB of memory, and I've set my Heroku config node options like this:
NODE_OPTIONS: --max_old_space_size=5120
All said and done, I ran my application locally and inspected the memory heap, which uses at most about 20 MB. Weird. That's nowhere near the 512 MB limit Heroku imposes on me?!
What tidbit of information am I missing?
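One detail worth checking here (my note, not part of the original question): what the DevTools heap inspector shows is only V8's JS heap, while Heroku's limit applies to the whole process's resident set size (RSS), which also includes off-heap memory. A quick sketch to compare the two locally:

    // compare V8's heap numbers with the full process RSS, which is
    // what the platform's 512 MB limit actually applies to
    const { rss, heapTotal, heapUsed, external } = process.memoryUsage();
    const mb = (n) => (n / 1048576).toFixed(1) + ' MB';

    console.log('heapUsed :', mb(heapUsed));  // roughly what DevTools shows
    console.log('heapTotal:', mb(heapTotal));
    console.log('external :', mb(external));  // Buffers and other off-heap data
    console.log('rss      :', mb(rss));       // what counts against the dyno limit

Note also that --max_old_space_size=5120 tells V8 it may grow the old space to roughly 5 GB, far past a 512 MB dyno, so V8 has little pressure to collect garbage before the platform kills the process; setting it somewhat below the dyno's actual memory is the usual advice.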
I have a memory leak issue in my Node.js app, which runs in an OpenShift Docker solution.
When I monitor memory usage via the process RSS, I see the process memory increasing over time.
I'm trying to track memory usage in the process heap, but the memwatch and heapdump modules don't show anything.
The npm module showed the heap size, and the diff of that size stays under 50 MB,
but the process memory is still increasing and uses over 150 MB.
I thought it was an application-level leak, so I tried --expose-gc and called global.gc(), but it never helped.
How can I see where the process uses its memory? Or does Node.js use more OS memory than its max heap size?
(I have seen memory usage grow past 4 GB.)
I want to fix it, or at least see how Node.js is using that memory.
Thanks for reading and answering :)
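A hedged diagnostic sketch (my addition, not from the original post): when RSS grows while the V8 heap stays flat, the growth is usually off-heap, e.g. Buffers, native addons, or allocator fragmentation, which is exactly why heap-focused tools like memwatch and heapdump see nothing. process.memoryUsage() breaks this down without any extra modules:

    // log the heap vs. off-heap breakdown once a minute; if rss keeps
    // climbing while heapUsed stays flat, the growth is outside the V8
    // heap (Buffers, native addons, or allocator fragmentation)
    const toMB = (n) => Math.round(n / 1048576);
    setInterval(() => {
      const m = process.memoryUsage();
      console.log(
        'rss=' + toMB(m.rss) + 'MB',
        'heapTotal=' + toMB(m.heapTotal) + 'MB',
        'heapUsed=' + toMB(m.heapUsed) + 'MB',
        'external=' + toMB(m.external) + 'MB'
      );
    }, 60000).unref();

On newer Node versions the same call also reports an arrayBuffers field, which helps narrow down Buffer-heavy leaks.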
I am trying to analyze a memory leak in my Node.js app using a V8 heap snapshot and the Chrome Developer Tools.
Unfortunately, DevTools always crashes at "Building postorder index...".
The heap snapshot files are between 70 MB and 620 MB.
Is there any other way to analyze the heap snapshot file?
What can I do to find out the cause of the crash?
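One hedged alternative (my suggestion, not from the original question): a .heapsnapshot file is plain JSON, so when DevTools chokes you can aggregate it yourself in Node. The sketch below assumes the usual V8 snapshot layout (snapshot.meta.node_fields describing the flat nodes array) and sums self_size per node type, i.e. it reproduces the "Shallow Size" totals, not retained sizes. For the largest files you may need to run Node with a larger --max-old-space-size just to parse the JSON:

    // aggregate-heapsnapshot.js
    // usage: node --max-old-space-size=8192 aggregate-heapsnapshot.js dump.heapsnapshot
    const fs = require('fs');

    const snap = JSON.parse(fs.readFileSync(process.argv[2], 'utf8'));
    const fields = snap.snapshot.meta.node_fields;   // e.g. ["type","name","id","self_size",...]
    const types = snap.snapshot.meta.node_types[0];  // names for the "type" field
    const stride = fields.length;
    const typeIdx = fields.indexOf('type');
    const sizeIdx = fields.indexOf('self_size');

    // sum self_size per node type across the flat nodes array
    const totals = {};
    for (let i = 0; i < snap.nodes.length; i += stride) {
      const t = types[snap.nodes[i + typeIdx]];
      totals[t] = (totals[t] || 0) + snap.nodes[i + sizeIdx];
    }

    Object.entries(totals)
      .sort((a, b) => b[1] - a[1])
      .forEach(([t, bytes]) => console.log(t.padEnd(20), (bytes / 1048576).toFixed(1), 'MB'));

Comparing these per-type totals across two snapshots taken minutes apart is often enough to see which kind of object is accumulating.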
We inherited a fairly large JavaScript application and test suite and have recently started to have issues with memory usage during testing.
Whilst we attempt to fix the issues our test suite has, we'd like to stem the flow of new leaks into the application. Are there any tools that we can integrate with our CI build to get memory profiling? Even some basic memory allocation statistics would help us see whether a suite is eating through memory.
We're running Jasmine with PhantomJS. The closest I've been able to find is Chrome's window.performance.memory, but it's only for the whole of Chrome and seems like it might be quite volatile.
I am not aware of any third-party tools that provide automated memory statistics for JavaScript in CI. Take a look at Google's memory profiling post: https://developer.chrome.com/devtools/docs/javascript-memory-profiling
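That said, if the suite can also be run in Chrome (e.g. via a Chrome-based runner), window.performance.memory, non-standard and coarse as it is, is enough for basic per-suite statistics. A minimal sketch, assuming a Jasmine 2-style reporter interface; PhantomJS itself does not expose performance.memory, so the reporter simply stays silent there:

    // Chrome-only, non-standard API; numbers are coarse but good enough
    // to flag a suite that eats through memory
    jasmine.getEnv().addReporter({
      suiteStarted: function (result) {
        this._start = window.performance.memory
          ? window.performance.memory.usedJSHeapSize
          : 0;
      },
      suiteDone: function (result) {
        if (!window.performance.memory) return; // not running in Chrome
        var grewBy = window.performance.memory.usedJSHeapSize - this._start;
        console.log(result.fullName + ' heap delta: ' +
          (grewBy / 1048576).toFixed(1) + ' MB');
      }
    });

Logging the deltas in CI and alerting on a threshold gives you the "which suite is eating memory" signal even without a real profiler.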
We are using phantomjs to run our qunit test page on our TFS build server. Our test runner is built from the example below:
https://github.com/ariya/phantomjs/blob/master/examples/run-qunit.js
Over time the number of tests grew from hundreds to a couple of thousand, and one fine day phantomjs started crashing. It literally dies asking us to upload the dump, and when you look at the dump it is 0 KB!
When we took a closer look in Process Explorer, we found that phantomjs's memory consumption keeps climbing while it runs the tests, and it eventually crashes at around 833 MB.
Yes, the same amount of memory was being used by Chrome and IE, and yes, our tests were leaking memory :(. We did fix that; memory utilization dropped by 50% on Chrome and IE, and we expected phantomjs to cope now. But no, phantomjs kept crashing, and Process Explorer showed the same memory consumption.
http://phantomjs.org/api/webpage/method/close.html
According to the documentation above, phantomjs releases heap allocations only on close? Could that be the reason why our fixed tests consumed less memory on Chrome but not on phantomjs? And finally, how do we fix this? How do we make phantomjs keep garbage-collecting JavaScript objects so the heap allocation stays down?
Update 1 - 07/28
We found a workaround. I modified my script to execute the tests module by module: in a loop, after executing all tests for a module, I call page.close() so the memory is released per module and a dead heap of objects never builds up; a sketch of the loop is below. I'm not closing this question, since it's a workaround and not a solution. Hopefully the maintainers will fix this at some point.
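For reference, a minimal sketch of that workaround (the module list and runner URL are placeholders; the real runner was based on the run-qunit.js example above):

    // run each test module in a fresh page, closing it afterwards so
    // PhantomJS releases the page's heap before the next module starts
    var webpage = require('webpage');
    var modules = ['moduleA', 'moduleB']; // placeholder module list

    function runNext() {
      if (modules.length === 0) { phantom.exit(0); return; }
      var mod = modules.shift();
      var page = webpage.create();
      // placeholder URL: one runner page per module
      page.open('http://localhost/tests.html?module=' + mod, function () {
        // ... wait for QUnit to finish and collect results here ...
        page.close();  // frees this page's heap allocations
        runNext();     // then move on to the next module
      });
    }
    runNext();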
There is a static method, QWebPageSettings::clearMemoryCache, that invokes WebKit's garbage collection. However, it clears the memory cache of every instantiated QWebPage object, and is therefore currently unsuitable for inclusion as an option in PhantomJS.
The Github pull request is available here:
https://github.com/ariya/phantomjs/pull/11511
Here's the Google Groups discussion:
https://groups.google.com/forum/#!msg/phantomjs/wIDp9J7B-bE/v5U31_mTbswJ
Until a workaround is available, you might break your unit tests up into blocks on separate pages. A proper fix will require a change to QtWebKit's implementation of how memory/cache is handled across QWebPage objects.
Update September 2014:
https://github.com/ariya/phantomjs/commit/5768b705a0
It looks like support for clearing memory cache was added, but there is a note about my original comment in the commit.
I managed to work around it by setting the /LARGEADDRESSAWARE flag, which lets the 32-bit PhantomJS process address up to 4 GB instead of 2 GB on 64-bit Windows.
If you have Visual Studio installed, run this from a Visual Studio command prompt:
editbin /LARGEADDRESSAWARE <pathto>/PhantomJS.exe