What are the System objects in the Chrome JavaScript memory profiler - javascript

I'm profiling a JavaScript application using Chrome DevTools.
I see that the only memory area that grows is the System objects total.
I wonder how I could understand what causes this behavior, as the tool gives no details showing which system objects were leaked.
I've taken a look at the app's allocations, but they don't change much over time ...
When I'm using the Timeline feature, the heap grows over 500 MB.

According to the JSHeapSnapshot.js implementation in Chromium, as mentioned in a comment by wOxxOm, each node's distance is compared against 100000000 (distances[ordinal] >= WebInspector.HeapSnapshotCommon.baseSystemDistance, where WebInspector.HeapSnapshotCommon.baseSystemDistance = 100000000) and, if the check passes, the node's size is accumulated into the System segment of the pie chart.
The commit that last modified this value mentions:
Currently if a user object is retained by both a system-space object
(e.g. a debugger) and another user object, the system object might be
shown earlier in the retainers tree. This happens if its distance is
smaller than distances of other retaining user objects.
The patch treats links from system-space objects to user objects with
less priority, so these links are shown at the bottom of the retainers
tree.
This indicates that system-space objects on the JavaScript heap are used by debuggers and other internals of the browser (V8, WebKit, etc.). They are outside the direct control of script-allocated heap objects.
wOxxOm also mentioned that this category used to be named V8 heap; that is, objects that V8 allocates that are out of reach of the executing script.
It's highly likely that running the profiler and taking snapshots performs allocations in that category of heap objects as well, causing the pattern you see of system allocations building up over time.
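The categorization described above can be sketched as follows. This is a simplified, illustrative reconstruction of the logic quoted from JSHeapSnapshot.js, not the real DevTools code; apart from the baseSystemDistance constant and the distance comparison, all names are made up:

```javascript
// Distance threshold quoted from Chromium's JSHeapSnapshot.js: nodes at or
// beyond this distance are treated as system-space objects.
const baseSystemDistance = 100000000;

// Accumulate node self-sizes into the pie-chart segments. `nodes` and
// `distances` are illustrative stand-ins for the snapshot's internal data.
function accumulateSegments(nodes, distances) {
  const segments = { system: 0, user: 0 };
  for (const node of nodes) {
    if (distances[node.ordinal] >= baseSystemDistance) {
      segments.system += node.selfSize; // shown as "System objects"
    } else {
      segments.user += node.selfSize;
    }
  }
  return segments;
}
```

Anything whose distance was pushed past the threshold (debugger-retained objects, VM internals) ends up in the System slice, which is why the tool cannot break it down further by script-visible constructor.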

Related

Is there a way to get the address of a variable or object in javascript? [duplicate]


How does Chrome / V8 deal with large objects in javascript?

I'm trying to understand garbage collection behaviour I'm seeing with Chrome / V8 on Windows 10. The scenario is that I have a small program that receives ~1 MiB of image data from a WebSocket at a rate of about 60 Hz. I'm using Chrome Version 81.0.4044.113 (Official Build) (64-bit) and Windows 10 Pro 1903.
Minimal receiving code looks like this:
var connection = new WebSocket('ws://127.0.0.1:31333');
connection.onmessage = message => {
  var dataCopy = new Uint8Array(message.data, 0);
};
Profiling in Chrome shows a sawtooth of allocations rising until a major garbage collection occurs, repeating at regular intervals. The allocations are all exactly 176 bytes, which doesn't really match up with the expected 1 MiB.
[screenshot: profile heap graph]
I found an excellent overview of V8 GC here. If I understand correctly it seems a little surprising that I'm seeing major GC events when a minor scavenge type GC could probably pick up those allocations. Additionally, as mentioned above, the allocations seen while profiling don't have the expected size of 1MiB.
Further research indicates that there's a "large object space" as described in this SO question. Unfortunately the wiki mentioned has moved since the question was asked and I can't find any references to "large object space" at the new location. I suspect the 1MiB allocation is probably big enough to qualify as a large object and if so I would like to confirm what the actual behaviour around those is.
So my questions are:
Why do I see this behaviour with major GC's happening regularly?
Why are the allocations smaller than expected?
If it's related to large object handling are there any official resources that explain how large objects are handled in Chrome / V8 and what the limits around them are?
In the end I filed a bug for V8 here and the answer is that Major GCs are required because the message object is allocated on Blink's heap which requires V8 to perform a Major GC to cooperatively reclaim the memory. The 176 byte objects are likely pointers to the ArrayBuffer on the heap. There is an ongoing project to make Blink's GC generational which will eventually change this behavior.
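The 176-byte allocations are consistent with the answer's explanation that the profiler sees small wrapper objects rather than the payload itself. A minimal sketch (sizes illustrative, not taken from V8 internals) of the wrapper-versus-backing-store distinction:

```javascript
// A typed-array view is a small object on the V8 heap; the large payload
// lives once in the backing ArrayBuffer's store, not inside each wrapper.
const backing = new ArrayBuffer(1024 * 1024); // ~1 MiB payload, allocated once
const views = [];
for (let i = 0; i < 60; i++) {
  // 60 views create 60 small wrapper objects, not 60 MiB of copies.
  views.push(new Uint8Array(backing));
}
```

In the WebSocket scenario each message carries its own buffer, but the same split applies: the profiler's small fixed-size allocations are the view/pointer objects, while the megabyte-sized backing stores are accounted for elsewhere (on Blink's heap, per the V8 bug response).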

Why are strings passed to winston kept in memory, leaking and ultimately crashing the node.js process?

I'm inspecting a heap snapshot created by node-heapdump#0.3.14, running on Node.js 10.16.0 in Amazon Linux with kernel 4.14.123-86.109.amzn1.x86_64. Heap snapshot is 1GB and, good news, strings visibly consume most of it, using 750MB of both shallow and retained size.
Most of these strings are bound to be logged by winston (winston#3.2.1, winston-transport#4.3.0, winston-logsene#2.0.7), at a log level (silly) lower than my app's minimal level (debug). So, a few dozen times per second,
I build a log string.
I pass it to winston.log with a logLevel silly.
No log happens (as expected, silly < debug).
Expected: strings are GCed and life goes on.
Actual: strings accumulate in memory, are not GCed, node OOMs at max-heap-size (1.4GB).
I am positive strings do leak. What I'm describing is not nominal operation between two normal GCs because, looking at the contents of the strings in the snapshot, I see a great deal of variation that, in the case of my app, can only come from hours of running.
Also, DevTools sometimes reports huge sizes for these strings (23 MB for a string that is actually 1 KB), and the retainers tree is humongous, with >18,000 levels of a next method inside message and chunk objects (see screenshot below).
So, my two questions are:
Why does winston keep these strings in memory? An idea I had: maybe winston keeps the messages in a queue/buffer, doesn't log because the level is too low, and never flushes the queue?
What's going on in this heapsnapshot? How can my little 1KB string have a shallow/retained size of 23MB?! What's going on in this crazy retainers tree / any idea who these next / message / chunk objects belong to?
Available to provide additional information. Thank you!
This was a bug in Winston related to how it used streams. It was fixed in a pull request; you can read more about the issue and the fix here:
https://github.com/winstonjs/winston/issues/1871
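Independent of that fix, a defensive pattern is to check the level before building and submitting log strings, so messages the logger would discard never enter its internal streams at all. A sketch, assuming Winston 3's logger.isLevelEnabled method; the helper name is made up:

```javascript
// Only build and submit the message if the logger would actually emit it.
// `buildMessage` is a function so the (possibly expensive) string is never
// constructed for levels that are filtered out.
function logSilly(logger, buildMessage) {
  if (logger.isLevelEnabled && logger.isLevelEnabled('silly')) {
    logger.log('silly', buildMessage());
  }
}
```

With a few dozen calls per second, skipping rejected messages up front also avoids the CPU cost of formatting strings that go nowhere.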

How can I determine what objects are being collected by the garbage collector?

I have significant garbage collection pauses. I'd like to pinpoint the objects most responsible for this collection before I try to fix the problem. I've looked at the heap snapshot on Chrome, but (correct me if I am wrong) I cannot seem to find any indicator of what is being collected, only what is taking up the most memory. Is there a way to answer this empirically, or am I limited to educated guesses?
In Chrome DevTools, take two heap snapshots: one before the action you want to check and one after.
Now click on the second snapshot.
On the bottom bar you will see a select box with the option "Summary". Change it to "Comparison".
Then, in the select box next to it, select the snapshot you want to compare against (it should automatically select Snapshot 1).
As the result you will get a table with the data you need, i.e. "New" and "Deleted" objects.
With newer Chrome releases there is a new tool available that is handy for this kind of task:
The "Record Heap Allocations" profiling type. The regular "Heap Snapshot" comparison tool (as explained in Rafał Łużyński's answer) cannot give you that kind of information, because each time you take a heap snapshot a GC run is performed, so GCed objects are never part of the snapshots.
However, the "Record Heap Allocations" tool constantly records all allocations (which is why it may slow down your application a lot while recording). If you are experiencing frequent GC runs, this tool can help you identify the places in your code where lots of memory is allocated.
In conjunction with the Heap Snapshot comparison, you will see that most of the time a lot more memory is allocated between two snapshots than you can see from the comparison. In extreme cases the comparison will yield no difference at all, whereas the allocation tool will show you lots and lots of allocated memory (which obviously had to be garbage collected in the meantime).
Unfortunately the current version of the tool does not show you where the allocation took place, but it will show you what has been allocated and how it was retained at the time of the allocation. From the data (and possibly the constructors) you will be able to identify your objects, and thus the place where they are being allocated.
If you're trying to choose between a few likely culprits, you could modify the object definition to attach the instances to the global scope (as a list under document or something).
This will stop them from being collected, which may make the program faster (they're not being reclaimed) or slower (because they build up and get checked by the mark-and-sweep every time). So if you see a change in performance, you may have found the problem.
One alternative is to look at how many objects of each type are being created (set up a counter in the constructor). If they're getting collected a lot, they're also being created just as frequently.
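The two diagnostics above can be sketched like this (all names are illustrative):

```javascript
// 1) Pin suspect instances on a global list so they cannot be collected;
//    if GC pauses change noticeably, these objects were likely the churn.
globalThis.__retained = globalThis.__retained || [];

// 2) Count how many instances of each type are created.
const allocationCounts = {};

class Suspect {
  constructor() {
    allocationCounts.Suspect = (allocationCounts.Suspect || 0) + 1;
    globalThis.__retained.push(this); // comment out to re-enable collection
  }
}
```

Both techniques are temporary instrumentation: remove the global pinning once you've identified the culprit, since it is itself a deliberate leak.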
Take a look at https://developers.google.com/chrome-developer-tools/docs/heap-profiling
especially Containment View
The Containment view is essentially a "bird's eye view" of your application's objects structure. It allows you to peek inside function closures, to observe VM internal objects that together make up your JavaScript objects, and to understand how much memory your application uses at a very low level.
The view provides several entry points:
DOMWindow objects — these are objects considered as "global" objects for JavaScript code;
GC roots — actual GC roots used by VM's garbage collector;
Native objects — browser objects that are "pushed" inside the JavaScript virtual machine to allow automation, e.g. DOM nodes, CSS rules (see the next section for more details).
Below is the example of what the Containment view looks like:

How can I get the memory address of a JavaScript variable?

Is it possible to find the memory address of a JavaScript variable? The JavaScript code is part of (embedded into) a normal application where JavaScript is used as a front end to C++ and does not run on the browser. The JavaScript implementation used is SpiderMonkey.
If it were possible at all, it would be very dependent on the JavaScript engine. Modern JavaScript engines compile their code using a just-in-time compiler, and messing with their internal variables would be bad for either performance or stability.
If the engine allows it, why not make a function call interface to some native code to exchange the variable's values?
It's more or less impossible: JavaScript's evaluation strategy is always call by value, but in the case of Objects (including arrays) the value passed is a reference to the Object, which is not copied or cloned. If you reassign the Object itself in the function, the original won't be changed, but if you reassign one of the Object's properties, that will affect the original Object.
That said, what are you trying to accomplish? If it's just passing complex data between C++ and Javascript, you could use a JSON library to communicate. Send a JSON object to C++ for processing, and get a JSON object to replace the old one.
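The call-by-value behaviour described above can be demonstrated in a few lines:

```javascript
// Reassigning the parameter rebinds only the local variable; mutating a
// property reaches the caller's object through the shared reference.
function reassign(obj) {
  obj = { changed: true }; // local rebinding, invisible to the caller
}
function mutate(obj) {
  obj.changed = true; // mutation through the reference, visible to the caller
}

const target = { changed: false };
reassign(target); // target.changed is still false
mutate(target);   // target.changed is now true
```

This is why "getting the address of a variable" is meaningless at the language level: scripts only ever see values and references, never the engine's underlying pointers.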
You could use a side-channel, but you can't do anything useful with it other than attacking browser security!
The closest thing to virtual addresses are ArrayBuffers. If one virtual address within an ArrayBuffer is identified, the remaining addresses are also known, as both the addresses of the memory and the array indices are linear.
Although virtual addresses are not themselves physical memory addresses, there are ways to translate a virtual address into a physical memory address.
Browser engines always allocate ArrayBuffers page-aligned. The first byte of the ArrayBuffer is therefore at the beginning of a new physical page and has the least significant 12 bits set to '0'.
If a large chunk of memory is allocated, browser engines typically use mmap to allocate it, which is optimized to allocate 2 MB transparent huge pages (THP) instead of 4 KB pages.
As these physical pages are mapped on demand, i.e., as soon as the first access to the page occurs, iterating over the array indices results in page faults at the beginning of each new page. The time to resolve a page fault is significantly higher than a normal memory access. Thus, you can learn the index at which a new 2 MB page starts. At this array index, the underlying physical page has the 21 least significant bits set to '0'.
This answer does not try to provide a proof of concept because I don't have time for that, but I may be able to do so in the future. It is an attempt to point the person asking the question in the right direction.
Sources:
http://www.misc0110.net/files/jszero.pdf
https://download.vusec.net/papers/anc_ndss17.pdf
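The page-fault timing idea from the quoted papers can be sketched as a measurement loop. This is Node.js-flavoured (process.hrtime.bigint() for nanosecond timestamps) and only demonstrates the timing harness, not a working address-recovery attack; the buffer size and page stride are illustrative:

```javascript
// Walk a freshly allocated buffer one 4 KiB page at a time, timing the first
// touch of each page. Accesses that fault a page in take measurably longer,
// which is the signal the papers use to locate 2 MB huge-page boundaries.
const PAGE = 4096;
const buf = new Uint8Array(16 * 1024 * 1024); // large allocation, likely mmap'ed
const timings = [];
for (let offset = 0; offset < buf.length; offset += PAGE) {
  const start = process.hrtime.bigint();
  buf[offset] = 1; // first touch of this page
  const end = process.hrtime.bigint();
  timings.push(Number(end - start)); // nanoseconds for this access
}
```

In a real attack the outliers in timings would mark page starts; in practice the signal is noisy and needs repeated runs and statistical filtering, as the papers describe.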
I think it's possible, but you'd have to:
download the Node.js source code;
add in your function manually (e.g. returning the memory address of a pointer);
compile it and use it as your node executable.
