How does Chrome / V8 deal with large objects in JavaScript?

I'm trying to understand garbage collection behaviour I'm seeing with Chrome / V8 on Windows 10. The scenario is that I have a small program that receives ~1 MiB of image data over a websocket at about 60 Hz. I'm using Chrome Version 81.0.4044.113 (Official Build) (64-bit) and Windows 10 Pro 1903.
Minimal receiving code looks like this:
var connection = new WebSocket('ws://127.0.0.1:31333');
connection.binaryType = 'arraybuffer'; // without this, message.data is a Blob and the copy below is empty
connection.onmessage = message => {
    // copy the received ~1 MiB frame into a fresh typed array
    var dataCopy = new Uint8Array(message.data, 0);
};
Profiling in Chrome shows a sawtooth of allocations rising until a major garbage collection occurs, repeating at regular intervals. The allocations are all exactly 176 bytes, which doesn't really match up with the expected 1 MiB.
[screenshot: profiling heap graph showing the sawtooth allocation pattern]
I found an excellent overview of V8 GC here. If I understand correctly, it seems a little surprising that I'm seeing major GC events when a minor scavenge-type GC could probably pick up those allocations. Additionally, as mentioned above, the allocations seen while profiling don't have the expected size of 1 MiB.
Further research indicates that there's a "large object space" as described in this SO question. Unfortunately the wiki mentioned has moved since the question was asked and I can't find any references to "large object space" at the new location. I suspect the 1MiB allocation is probably big enough to qualify as a large object and if so I would like to confirm what the actual behaviour around those is.
So my questions are:
Why do I see this behaviour with major GCs happening regularly?
Why are the allocations smaller than expected?
If it's related to large object handling are there any official resources that explain how large objects are handled in Chrome / V8 and what the limits around them are?

In the end I filed a bug for V8 here. The answer is that major GCs are required because the message object is allocated on Blink's heap, and V8 must perform a major GC to cooperatively reclaim that memory. The 176-byte objects are likely pointers to the ArrayBuffer on the heap. There is an ongoing project to make Blink's GC generational, which will eventually change this behavior.
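Given that answer, one way to reduce churn is to avoid the per-message copy allocation. The sketch below is illustrative only (MAX_FRAME_BYTES and handleFrame are made-up names): it reuses one long-lived buffer, although the incoming ArrayBuffer itself is still allocated by Blink for every message.
// Sketch: reuse one preallocated buffer instead of allocating a fresh copy
// per message. Assumes frames never exceed MAX_FRAME_BYTES.
const MAX_FRAME_BYTES = 1024 * 1024;
const scratch = new Uint8Array(MAX_FRAME_BYTES);
const connection = new WebSocket('ws://127.0.0.1:31333');
connection.binaryType = 'arraybuffer';
connection.onmessage = message => {
    const frame = new Uint8Array(message.data); // small view object (plausibly the 176-byte allocations seen above)
    scratch.set(frame, 0);                      // copy into the long-lived buffer
    handleFrame(scratch, frame.length);         // hypothetical consumer
};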

Related

Chrome DevTools Memory Inconsistency

I'm building a game using JavaScript that runs at 60 FPS and have noticed a large amount of garbage collection happening (7.2 MB every 1.5 seconds, as shown in Chrome DevTools below), which has a noticeable impact on frame rate.
I've used the Allocation Timeline to see what is being added to memory every frame. As you can see below, there is 4.1 kB allocated every frame, so over 1.5 seconds (90 frames at 60 FPS) I would expect to see about 369 kB garbage collected.
Why is there an order of magnitude difference between the two? I've been using the Allocation Timeline to reduce the memory used (it was originally 18 kB every frame). However, this has had almost zero impact when looking at the Performance tab.
Is there any way to know what is being garbage collected, as the Allocation Timeline doesn't seem to be correct?
Additional Info
Here's a Heap Snapshot comparison between two moments in the game. This also doesn't seem consistent, and it includes lots of things which I would not expect to have changed every frame.
I tried reducing the number of objects (in an array) in my game and this did make a big impact in the Performance tab. But I really want to see this array listed somewhere in DevTools so I can work on optimising it.
I tried disabling rendering on the screen, but this didn't have an impact. So the GC is from game code.
Measuring in Safari shows similar results to Chrome's performance tab.
I think I found a way around this: I removed the loop in the engine and instead have each frame run when a key is pressed (see the sketch below).
This way I can take a heap snapshot, progress just one frame, then quickly take another snapshot (before GC kicks in). The "Objects allocated between snapshots" view seems to be more consistent, and shows a lot of memory used up by the JS engine's internal data, like function scope.
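A minimal sketch of that single-step workaround, with illustrative update/render names standing in for the real engine calls:
// Drive the game loop manually: each keypress advances exactly one frame,
// leaving time to take heap snapshots between frames.
function stepFrame() {
    update(); // hypothetical game logic
    render(); // hypothetical draw call
}
window.addEventListener('keydown', event => {
    if (event.key === ' ') stepFrame(); // press space to advance one frame
});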

How does the compressed pointer implementation in V8 differ from JVM's compressed Oops?

Background: V8 announced a feature called pointer compression (What's happening in V8? - Benedikt Meurer), which is intended to reduce the memory overhead of pointers for 64-bit processes.
The Java JVM has had a feature called CompressedOops since 2010 (since 6u23). At first glance it looks similar, but then I realized it is not quite the same.
Question:
What are the main differences between the pointer compression techniques (V8 vs JVM)?
The V8 implementation does not seem to be finalized yet, but I found at least some references:
JVM implementation:
Trick behind JVM's compressed Oops
V8 implementation:
Design document
Discussion about memory savings vs performance
I think the links you provided already contain the answer? In short:
JVM's "compressed Oops" save 3 bits via shifting and thereby make it possible to address 2³ * 4 GB using 32-bit pointers at 8-byte granularity. (At least that's what your link says; I know nothing about the JVM so I cannot confirm or deny that this is accurate information.)
V8's "compressed pointers" pick a base address somewhere in the 64 (well, 48 really) bit address space and then store all heap pointers as 32-bit offsets from that base address, so the maximum heap size that can be addressed in this mode is 4GB.
I would guess that the JVM also needs to employ some variant of a base address, otherwise the shifted pointers would be limited to a very small and fixed subset of the full address space.
V8's approach leaves the bits around that the JVM shifts away, which is nice for V8's purposes because it uses those bits to store other information (its pointers are tagged, and the tags are in those bits).
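To make the difference concrete, here is the decompression arithmetic restated as illustrative JavaScript (not actual VM code; base is a hypothetical heap base address):
// JVM compressed oops: the stored 32-bit value is shifted left by 3
// (multiplied by 8), so 32 bits address 32 GB at 8-byte granularity.
const jvmAddress = (base, oop32) => base + oop32 * 8;
// V8 compressed pointers: the stored 32-bit value is a plain offset from the
// base, limiting the heap to 4 GB but leaving the low bits free for tags.
const v8Address = (base, offset32) => base + offset32;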

Why are strings passed to winston kept in memory, leaking and ultimately crashing the Node.js process?

I'm inspecting a heap snapshot created by node-heapdump#0.3.14, running on Node.js 10.16.0 on Amazon Linux with kernel 4.14.123-86.109.amzn1.x86_64. The heap snapshot is 1 GB and, good news, strings visibly consume most of it, using 750 MB of both shallow and retained size.
Most of these strings are bound to be logged by winston (winston#3.2.1, winston-transport#4.3.0, winston-logsene#2.0.7), at a log level (silly) lower than my app's minimum level (debug). So, a few dozen times per second,
I build a log string.
I pass it to winston.log with a logLevel silly.
No log happens (as expected, silly < debug).
Expected: strings are GCed and life goes on.
Actual: strings accumulate in memory, are not GCed, and Node OOMs at max-heap-size (1.4 GB).
I am positive strings do leak. What I'm describing is not nominal operation between two normal GCs because, looking at the contents of the strings in the snapshot, I see a great deal of variation that, in the case of my app, can only come from hours of running.
Also, the devtools sometimes report huge sizes for these strings (23 MB for a string that is actually 1 KB), and the retainers tree is humongous, with >18000 levels of a next method inside message and chunk objects (see screenshot below).
So, my two questions are:
Why does winston keep these strings in memory? An idea I had: maybe winston keeps the messages in a queue/buffer, doesn't log because the level is too low, and never flushes the queue?
What's going on in this heap snapshot? How can my little 1 KB string have a shallow/retained size of 23 MB?! What's going on in this crazy retainers tree, and any idea who these next / message / chunk objects belong to?
Available to provide additional information. Thank you!
This was a bug in winston related to how it used streams. It was fixed in a pull request; you can read more about the issue and the fix here:
https://github.com/winstonjs/winston/issues/1871
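For reference, a minimal sketch of the usage pattern described in the question (buildLargeLogString is a made-up stand-in; versions as in the question, winston 3.2.1):
const winston = require('winston');
const logger = winston.createLogger({
    level: 'debug', // minimum level: anything below debug should be dropped
    transports: [new winston.transports.Console()],
});
setInterval(() => {
    // 'silly' is below 'debug', so nothing is printed, yet before the fix
    // the string was still retained by winston's internal stream machinery.
    logger.log('silly', buildLargeLogString());
}, 20); // a few dozen calls per second, as in the question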

What are the System objects in the Chrome JavaScript memory profiler?

I'm profiling a JavaScript application using the Chrome dev tools.
I see that the only memory area that grows is System objects total.
I wonder how I could work out what causes this behaviour, as the tool shows no details of which system objects leaked.
I've taken a look at the app's allocations, but they don't change much over time...
When I'm using the timeline feature, the heap grows past 500 MB.
According to the JSHeapSnapshot.js implementation in Chromium (as mentioned in a comment by wOxxOm), a given node's distance is compared against 100000000 (distances[ordinal] >= WebInspector.HeapSnapshotCommon.baseSystemDistance, where WebInspector.HeapSnapshotCommon.baseSystemDistance = 100000000), and if the check passes, the node's size is accumulated into the System segment of the pie chart.
The commit that last modified this value mentions:
Currently if a user object is retained by both a system-space object
(e.g. a debugger) and another user object, the system object might be
shown earlier in the retainers tree. This happens if its distance is
smaller than distances of other retaining user objects.
The patch treats links from system-space objects to user objects with
less priority, so these links are shown at the bottom of the retainers
tree.
This indicates that system-space objects on the JavaScript heap are used by debuggers and other browser internals (V8, WebKit, etc.). They are outside the direct control of script-allocated heap objects.
wOxxOm also mentioned that the name used to be V8 heap. That is, objects that V8 allocates that are out of reach of the executing script.
It's highly likely that running profiling and taking snapshots performs allocations in that category of heap objects as well, causing the pattern you see of building system allocations over time.
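An illustrative restatement of that classification in JavaScript (not the actual Chromium source; the node shapes are made up):
const baseSystemDistance = 100000000;
// Nodes whose distance was offset by baseSystemDistance are counted
// towards the "System objects total" slice of the pie chart.
function systemSegmentSize(nodes, distances) {
    let total = 0;
    for (const node of nodes) {
        if (distances[node.ordinal] >= baseSystemDistance)
            total += node.selfSize;
    }
    return total;
}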

Information about heap size

What information can I obtain from the performance.memory object in Chrome?
What do these numbers mean? (are they in kb's or characters)
What can I learn from these numbers?
Example values of performance.memory
MemoryInfo {
    jsHeapSizeLimit: 793000000,
    usedJSHeapSize: 10000000,
    totalJSHeapSize: 31200000
}
What information can I obtain from the performance.memory object in Chrome?
The property names should be pretty descriptive.
What do these numbers mean? (are they in kb's or characters)
The docs state:
The values are quantized as to not expose private information to
attackers.
See the WebKit Patch for how the quantized values are exposed. The
tests in particular help explain how it works.
What can I learn from these numbers?
You can identify problems with memory management. See http://www.html5rocks.com/en/tutorials/memory/effectivemanagement/ for how the performance.memory API was used in gmail.
The related API documentation does not say, but judging by the numbers you shared and what I see on my machine, my read is that the values are in bytes.
A quick review of the code Bergi linked to (regarding the values being quantized) seems to support this, e.g. float sizeOfNextBucket = 10000000.0; // First bucket size is roughly 10M.
The quantized MemoryInfo properties are mostly useful for monitoring rather than for determining the precise impact of operations on memory. A comment in the aforementioned linked code explains this well, I think:
// We quantize the sizes to make it more difficult for an attacker to see precise
// impact of operations on memory. The values are used for performance tuning,
// and hence don't need to be as refined when the value is large, so we threshold
// at a list of exponentially separated buckets.
Basically the values get less precise as they get bigger but are still sufficiently precise for monitoring memory usage.
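As a small usage sketch (a Chromium-only API; the polling interval and MiB formatting are my own choices), you can poll these quantized numbers to watch for trends:
// Sample the heap numbers periodically and report them in MiB.
const toMiB = bytes => (bytes / (1024 * 1024)).toFixed(1);
setInterval(() => {
    const m = performance.memory; // undefined outside Chromium-based browsers
    if (!m) return;
    console.log(
        'used ' + toMiB(m.usedJSHeapSize) + ' MiB / ' +
        'total ' + toMiB(m.totalJSHeapSize) + ' MiB / ' +
        'limit ' + toMiB(m.jsHeapSizeLimit) + ' MiB');
}, 5000);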
