Is it possible to see all sources of memory for JavaScript in Chrome? As far as I know, the three above are all that is available.
The heap holds your basic GC-able JS objects. "Native memory" is everything that is not part of the heap: DOM nodes, TypedArrays, 2D-context ImageData, and so on. WebGL is another source of memory.
I'd like to know how much memory my code is using. Chrome recently dropped its native memory profiler, and heap profiling alone is simply not sufficient for large-memory web apps.
Is there a way to get useful information on what percent of these memory sources my code is using?
I estimate native memory using the Chrome Task Manager (More Tools | Task Manager). It shows private memory, GPU memory and JavaScript memory; private memory minus JavaScript memory is a rough approximation of native memory. It can't, however, tell you how much memory is allocated by each kind of resource.
It also shows the GPU memory if the page has a canvas.
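If you need a number from script rather than from the Task Manager, Chrome also exposes the non-standard performance.memory object (and, on cross-origin-isolated pages, the newer performance.measureUserAgentSpecificMemory()). A minimal sketch; note that it only covers the JS heap, not native memory:

// Chrome-only, non-standard API; reports the V8 heap, not native allocations.
if (performance.memory) {
  const { usedJSHeapSize, totalJSHeapSize, jsHeapSizeLimit } = performance.memory;
  console.log(`JS heap: ${(usedJSHeapSize / 1048576).toFixed(1)} MB used of ` +
    `${(totalJSHeapSize / 1048576).toFixed(1)} MB (limit ${(jsHeapSizeLimit / 1048576).toFixed(1)} MB)`);
}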
Related
I have some JavaScript functions whose source code comes from larger JavaScript files. They are created like this:
const fn = new Function("foo", "...<large JS source code>...");
When looking at memory snapshots in Chrome, the source code of these functions is retained and creates a large memory overhead. Is it possible to "release"/"drop" the source code of functions in JavaScript?
EDIT
I am really keeping the function itself around and wonder whether it’s possible to tell the function to drop the source code from memory.
Internally, Chrome uses the V8 JavaScript engine. V8 is a just-in-time engine, which means it takes your JavaScript and compiles it to machine code when it needs it. This is expensive, so V8 caches the result of the compilation and (in most cases) reuses the previously compiled code if you call the function again.
I believe function memory reclamation is handled by the V8 garbage collector. When a variable (or function) falls out of scope, meaning there are no references left to it anywhere, including closures, the garbage collector is free to reclaim that memory, including the source code and the cached machine code. The garbage collector runs periodically and will clean up memory from anything that is no longer in scope. Generally speaking you shouldn't try to force garbage collection on your own; it should happen automatically. With Chrome, though, there is a way to force garbage collection using the developer tools.
If you remove any references to your function (remember this includes closures) and force garbage collection you should see the memory reclaimed. Use the Chrome Developer memory tools to see if your function has been reclaimed or not (look at "heap snapshots").
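As a minimal sketch of that advice: once the last reference is gone, the function object, its cached machine code, and its retained source string all become collectible.

let fn = new Function("foo", "return foo + 1; /* imagine a huge source string here */");
console.log(fn(41)); // 42
fn = null; // drop the last reference; the GC is now free to reclaim the
           // function, its compiled code, and the source text it retained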
There's one other caveat: even if the memory is reclaimed, it won't necessarily be released back to the operating system or even cleared. Many applications that handle large numbers of small allocations improve performance by reusing previously allocated memory before asking the operating system for more. So if you're using a low-level memory inspector, you may still see your code hanging around in memory even though it's been garbage collected and there are no useful references to it. Without diving deep into V8 internals, a memory dump probably can't tell you whether your code is still in memory because of a memory leak, or because Chrome has cleaned up its internal references but simply hasn't released the allocation back to the operating system.
I am building a JavaScript application on top of a library (dwv). I noticed that even though the JavaScript heap has a size of 128 MB, the private memory of the tab is ~1.2 GB! This is true even when the Chrome debugger is closed.
Is there a way I can identify what is causing this extreme private memory use?
In this related question it is suggested that Chrome only takes the memory when it is available and uses it for optimization. In my case, however, the tab crashes when it cannot allocate the ~1.2 GB of memory.
For anyone wondering: it turned out that dwv uses a separate canvas for every image, and each canvas has its own backing buffer taking up private memory.
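For illustration only (this is not dwv's actual API): one way to avoid a per-image canvas is to keep a single shared scratch canvas and retain only the compact pixel data per image.

// Hypothetical sketch: one shared scratch canvas instead of one per image.
const scratch = document.createElement('canvas');
const scratchCtx = scratch.getContext('2d');

function drawImageData(imageData, targetCtx) {
  scratch.width = imageData.width;   // resizing also clears the canvas
  scratch.height = imageData.height;
  scratchCtx.putImageData(imageData, 0, 0);
  targetCtx.drawImage(scratch, 0, 0);
}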
I am making an app which sometimes needs a lot of GPU memory (many huge textures). I could build a system where I keep frequently used textures in GPU memory and upload and delete the rest only when they are needed.
In general there are always more textures than GPU memory, and the more memory I can use, the faster my program is. I don't want to restrict myself to, say, 100 MB or 1 GB of memory when there could be four times as much free to use. But if I try to allocate too much, the browser will kill my program.
I see that in WebGL there is no direct way to tell how much memory is available. What would be your strategy for solving this?
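One common approach is exactly the cache described above: upload textures on demand and evict the least recently used ones under an assumed byte budget, since WebGL exposes no way to query free memory. A minimal sketch; the budget constant is a pure assumption you would tune per device (for example, backing off when a context-loss event suggests you overshot).

// Sketch of an LRU texture cache under an assumed byte budget.
const MAX_TEXTURE_BYTES = 512 * 1024 * 1024; // assumption: tune per device
const cache = new Map(); // id -> { texture, bytes }; Map preserves insertion order
let usedBytes = 0;

function getTexture(gl, id, image) {
  let entry = cache.get(id);
  if (entry) {
    cache.delete(id);   // re-insert to mark as most recently used
    cache.set(id, entry);
    return entry.texture;
  }
  const bytes = image.width * image.height * 4; // RGBA8 estimate
  while (usedBytes + bytes > MAX_TEXTURE_BYTES && cache.size > 0) {
    const [oldId, old] = cache.entries().next().value; // least recently used
    gl.deleteTexture(old.texture);
    usedBytes -= old.bytes;
    cache.delete(oldId);
  }
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  cache.set(id, { texture, bytes });
  usedBytes += bytes;
  return texture;
}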
Can someone please explain why my Node.js process is using more memory than I allocated?
I assigned 4 GB of memory to the Node.js process (the maximum supported on a 64-bit machine, per the Node.js docs), but I have seen the process touch 5.6 GB of RSS, way higher than the 4 GB I assigned.
This is how I am running the process
node --max-old-space-size=4096 processName.js
This is what my top command shows (RSS ~4.6g):
max-old-space-size controls only one aspect of Node.js memory usage: the storage of JavaScript objects inside the interpreter (sometimes referred to as the V8 heap). It does not cover the entire memory usage of the process; for example, max-old-space-size has nothing at all to do with how much memory the native-code portions of Node.js use.
So, total memory usage can always be more than max-old-space-size.
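You can see the breakdown from inside the process: only heapTotal/heapUsed fall under --max-old-space-size, while rss covers the whole process, including native ("external") allocations.

// process.memoryUsage() separates the V8 heap from whole-process RSS.
const usage = process.memoryUsage();
for (const [key, bytes] of Object.entries(usage)) {
  console.log(`${key}: ${(bytes / 1048576).toFixed(1)} MB`);
}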
I am troubleshooting what appears to be a memory leak in our configuration page. The page is used to change the configuration of our service and also displays health diagnostics. This means that we are querying the service periodically for configuration and instrumentation information (typically we use a query interval of 30sec, but to troubleshoot I am querying at 100ms intervals). We rely on knockoutjs, datajs, jquery and spinjs.
I've found that if I leave the page open overnight at the 100 ms query interval, the private bytes for the Chrome browser tab grow from about 50 MB to 335 MB. I have four pages with the issue, but am focused on one during my troubleshooting effort. Using chrome://memory-redirect/ I can see the page's (process id 26148) memory.
However, the JavaScript heap memory appears to be flat over the same period at 3.6 MB. The heap profiling tools in Chrome show that all of my object allocations are garbage collected.
In the above picture the gray allocations indicate that the objects have been cleaned up by the GC.
The memory timeline is also constant.
I also forced two GCs and confirmed that the number of documents, nodes and listeners was constant between the two GCs.
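(For reference, those GCs can also be triggered from script rather than the DevTools button if Chrome is launched with the V8 flag --js-flags="--expose-gc", which exposes a global gc() function; a small guarded sketch:)

// Only works if Chrome was started with: --js-flags="--expose-gc"
if (typeof window.gc === 'function') {
  window.gc(); // force a full garbage collection
}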
My questions are:
Where is the process memory being used that is not part of the JavaScript heap?
Given our JavaScript heap memory is flat, could that extra memory be a memory leak caused by our JavaScript code?
Thanks for all the help!
You are comparing apples and oranges: garbage-collected memory in a sub-heap versus whole-application memory.
You've used Chrome to inspect the JavaScript heap and found evidence that indicates the JavaScript part of your application is running OK.
You're also using a tool to monitor the global memory usage of Chrome itself. That is all the memory that Chrome is using for any task, including tasks not directly related to your application, but to the functioning of the browser itself.
Perhaps you've found a use case that triggers a memory leak in the Chrome internals?
Or perhaps it isn't a memory leak, but memory fragmentation in the non-garbage collected internal heaps used by Chrome?
According to this web page, Chrome is written in a mixture of C, C++, Java, JavaScript and Python. That means there are deterministic memory allocators for C and C++, plus three different kinds of garbage-collected heap for Java, JavaScript and Python. Bad news: Python's handling of integers isn't kind on memory use when it comes to garbage collection (last time I checked, which was a few years ago; maybe they've improved it).
But I've had Chrome sessions run for weeks without issue. So I do wonder what is happening.
You don't say which OS you are using, but if you are on Microsoft Windows you could use C++ Memory Validator to inspect where each allocation was made (full callstack, how many bytes, etc.) while Chrome is running. Launch Chrome from C++ Memory Validator, load your application, let it do its thing, then go to the Memory tab and click Refresh; it will display all the live allocations that can be tracked (any statically linked heaps won't be trackable, as you won't have the symbols needed to hook them). You won't have symbols to make the callstacks readable, but you can still identify allocations happening at the same place. That may give you a clue as to the cause of the leak/fragmentation, so you can report it to the Chrome devs for a closer look.
Do you get the same behaviour in Firefox? If you do, you could try the same approach with C++ Memory Validator, but on a build of Firefox you've compiled yourself: you'll have symbols and source, and will know exactly where the problem is.
Disclaimer: I am the designer of C++ Memory Validator.