I'm building a game in JavaScript that runs at 60 FPS, and I've noticed a large amount of garbage collection happening (7.2 MB every 1.5 seconds, see the Chrome DevTools screenshot below), which has a noticeable impact on frame rate.
I've used the Allocation Timeline to see what is being added to memory every frame. As you can see below, about 4.1 kB is allocated every frame, so at 60 FPS I would expect to see roughly 4.1 kB × 90 frames ≈ 369 kB garbage collected every 1.5 seconds.
Why is there an order of magnitude difference between the two? I've been using the Allocation Timeline to reduce the memory allocated (it was originally 18 kB every frame), yet this has had almost no impact on what the Performance tab reports.
Is there any way to know what is actually being garbage collected, since the Allocation Timeline doesn't seem to be accurate?
Additional Info
Here's a Heap Snapshot comparison between two moments in the game. This also doesn't seem consistent, and it includes lots of things that I would not expect to change every frame.
I tried reducing the number of objects (in an array) in my game, and this did make a big difference in the Performance tab. But I really want to see this array listed somewhere in DevTools so I can work on optimising it.
I tried disabling rendering to the screen, but this had no impact, so the GC must be coming from the game code.
Measuring in Safari shows similar results to Chrome's performance tab.
I think I found a way around this: I removed the loop in the engine and instead run one frame each time a key is pressed.
This way I can take a heap snapshot, advance just one frame, then quickly take another snapshot (before GC kicks in). The "Objects allocated between snapshots" view seems to be more consistent, and it shows a lot of memory used by the JS engine's internal data, such as function scopes.
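For reference, a minimal sketch of that keypress-driven stepping (step is a hypothetical function that advances the game by one frame's worth of simulation):

// Instead of a continuous requestAnimationFrame loop, advance exactly
// one frame per keypress, so heap snapshots can be taken immediately
// before and after a single frame's allocations.
document.addEventListener("keydown", (event) => {
  if (event.key === " ") {
    step(1 / 60); // hypothetical: runs one frame of game logic
  }
});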
Related
I want to measure the memory usage of my web SPA using performance.memory, with the purpose of detecting problems, i.e. memory leaks, during the webapp's lifetime.
For this reason I would need to call this API at some fixed time interval - it could be every 3 seconds, every 30 seconds, every minute, ... Then I have a question: to detect any issue quickly and effectively I would have to make the interval as short as I could, but then I run into a concern about performance. The measuring itself could affect the performance of the webapp if the measurement is an expensive task (hopefully that is not the case, though).
With this background above, I have the following questions:
Is performance.memory the kind of method that would affect the browser's main-thread performance, such that I should care about how frequently I call it?
Is there a right way or procedure to determine whether a (JavaScript) task is affecting the performance of a device? If question 1 has no clear answer, then I would have to find some other way to work out a proper interval for the memory measurement.
(V8 developer here.)
Calling performance.memory is pretty fast. You can easily verify that in a quick test yourself: just call it a thousand times in a loop and measure how long that takes.
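Such a loop might look like this (a rough sketch; performance.memory is Chrome-only, and as the edit below notes, microbenchmark numbers deserve skepticism):

let sum = 0;
const before = performance.now();
for (let i = 0; i < 1000; i++) {
  // Read the value each time; summing keeps the reads from being
  // optimized away entirely.
  sum += performance.memory.usedJSHeapSize;
}
const after = performance.now();
console.log(`1000 reads took ${after - before} ms (sum: ${sum})`);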
[EDIT: Thanks to @Kaiido for highlighting that this kind of microbenchmark can in general be very misleading; for example the first operation could be much more expensive; or the benchmark scenario could be so different from the real application's scenario that the results don't carry over. Do keep in mind that writing useful microbenchmarks always requires some understanding/inspection of what's happening under the hood!
In this particular case, knowing a bit about how performance.memory works internally, the results of such a simple test are broadly accurate; however, as I explain below, they also don't matter.
--End of edit]
However, that observation is not enough to solve your problem. The reason why performance.memory is fast is also the reason why calling it frequently is pointless: it just returns a cached value, it doesn't actually do any work to measure memory consumption. (If it did, then calling it would be super slow.) Here is a quick test to demonstrate both of these points:
function f() {
  if (!performance.memory) {
    console.error("unsupported browser");
    return;
  }
  let objects = [];
  for (let i = 0; i < 100; i++) {
    // We'd expect heap usage to increase by ~1MB per iteration.
    objects.push(new Array(256000));
    let before = performance.now();
    let memory = performance.memory.usedJSHeapSize;
    let after = performance.now();
    console.log(`Took ${after - before} ms, result: ${memory}`);
  }
}
f();
(You can also see that browsers clamp timer granularity for security reasons: it's not a coincidence that the reported time is either 0ms or 0.1ms, never anything in between.)
(Second) however, that's not as much of a problem as it may seem at first, because the premise "to detect any issue quickly and effectively I would have to make the interval as short as I could" is misguided: in garbage-collected languages, it is totally normal that memory usage goes up and down, possibly by hundreds of megabytes. That's because finding objects that can be freed is an expensive exercise, so garbage collectors are carefully tuned for a good compromise: they should free up memory as quickly as possible without wasting CPU cycles on useless busywork. As part of that balance they adapt to the given workload, so there are no general numbers to quote here.
Checking memory consumption of your app in the wild is a fine idea, you're not the first to do it, and performance.memory is the best tool for it (for now). Just keep in mind that what you're looking for is a long-term upwards trend, not short-term fluctuations. So measuring every 10 minutes or so is totally sufficient, and you'll still need lots of data points to see statistically-useful results, because any single measurement could have happened right before or right after a garbage collection cycle.
For example, if you determine that all of your users have higher memory consumption after 10 seconds than after 5 seconds, then that's just working as intended, and there's nothing to be done. Whereas if you notice that after 10 minutes, readings are in the 100-300 MB range, and after 20 minutes in the 200-400 MB range, and after an hour they're 500-1000 MB, then it's time to go looking for that leak.
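A minimal sketch of that kind of sampling (reportSample is a hypothetical stand-in for however you ship data to your analytics backend):

const startTime = Date.now();
setInterval(() => {
  if (!performance.memory) return; // Chrome-only API
  // Tag each sample with uptime so the backend can correlate memory
  // with how long the page has been open and look for a long-term trend.
  reportSample({
    uptimeMinutes: Math.round((Date.now() - startTime) / 60000),
    usedJSHeapSize: performance.memory.usedJSHeapSize,
  });
}, 10 * 60 * 1000); // one sample every 10 minutes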
I've got an application that reads through a folder full of .tar.gz files, one at a time, and writes processed data to disk. Everything operates on streams, and there shouldn't be more in memory than a couple of hundred queued promises.
However, over a couple of hours, the memory used grows from ~60 MB to 2 GB, at which point Node crashes.
I've created two heap snapshots of the application.
One was created after a while of running, when the Resident Set was about 80 MB, as reported by process.memoryUsage():
One after a much longer while, with RSS at ~120 MB:
While that's nowhere near the 2 GB it accumulated overnight, we can see that Typed Arrays (I think stream buffers count there?) and System Objects grew significantly.
First off, why is the heap so much smaller than the memory usage reported by the system (~12 MB vs ~150 MB)?
Second, based on that growth, we can extrapolate and assume that Typed Arrays and System Objects grew further and were among the things causing the excessive memory usage over time.
So: what exactly are Typed Arrays? How do they relate to streams? And what are System Objects?
process.memoryUsage() output from a third session, slightly longer than the second one:
{
  "rss": 149794816,
  "heapTotal": 51036160,
  "heapUsed": 12103496,
  "external": 1531733,
  "arrayBuffers": 329913
}
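(For reference, a periodic logger along these lines can produce readings like the above; this is a sketch, not code from the app:)

// Log Node's memory counters once a minute to watch how RSS and
// heapUsed diverge over time; large "external"/"arrayBuffers" values
// point at Buffers held outside the V8 heap.
const mb = (n) => (n / 1024 / 1024).toFixed(1) + " MB";
setInterval(() => {
  const m = process.memoryUsage();
  console.log(
    `rss=${mb(m.rss)} heapTotal=${mb(m.heapTotal)} ` +
    `heapUsed=${mb(m.heapUsed)} external=${mb(m.external)} ` +
    `arrayBuffers=${mb(m.arrayBuffers)}`
  );
}, 60 * 1000);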
Leaving a page open for 2 minutes and recording with Chrome DevTools, I get a sawtooth pattern, BUT the JS heap does not return to its original level - rather, after each garbage collection it remains a bit higher, until the page eventually crashes:
Conventional wisdom suggests taking 2 heap snapshots over a period of time and comparing them to isolate the problem. Before a heap snapshot is taken, a garbage collection automatically runs. The expected result would be that heap snapshot 1 shows a baseline of ~19 MB of heap, and snapshot 2 shows at least 22 MB after 2 minutes. Instead, snapshot 2 actually shows less heap.
What should I do now to find the leak?
It might have just been a fluke. Try taking multiple snapshots. Like, one every 10 seconds, ten times.
Try Allocation Timelines and Allocation Profiles, too. Allocation Timelines show you when memory is being allocated, in a real-time graph. Profiles show you which functions allocate the most memory.
I'm profiling a JavaScript application using Chrome DevTools.
I see that the only memory area that grows is "System objects total".
I wonder how I can work out what causes this behavior, as the tool shows no details about which system objects were leaked.
I've taken a look at the app's allocations, but they don't change much over time ...
When I'm using the Timeline feature, the heap grows to over 500 MB:
According to the JSHeapSnapshot.js implementation in Chromium, as mentioned in a comment by wOxxOm, each node's distance is compared against a threshold of 100000000 (distances[ordinal] >= WebInspector.HeapSnapshotCommon.baseSystemDistance, where WebInspector.HeapSnapshotCommon.baseSystemDistance = 100000000), and if the check passes, the node's size is accumulated into the System segment of the pie chart.
The commit that last modified this value mentions:
Currently if a user object is retained by both a system-space object
(e.g. a debugger) and another user object, the system object might be
shown earlier in the retainers tree. This happens if its distance is
smaller than distances of other retaining user objects.
The patch treats links from system-space objects to user objects with
less priority, so these links are shown at the bottom of the retainers
tree.
This indicates that system-space objects on the JavaScript heap are used by debuggers and other internals of the browser (V8, WebKit, etc.). They are outside the direct control of script-allocated heap objects.
wOxxOm also mentioned that this segment used to be named "V8 heap" - that is, objects that V8 allocates which are out of reach of the executing script.
It's highly likely that running the profiler and taking snapshots performs allocations in that category of heap objects as well, which would explain the pattern you see of system allocations building up over time.
I have significant garbage collection pauses. I'd like to pinpoint the objects most responsible for this collection before I try to fix the problem. I've looked at the heap snapshot on Chrome, but (correct me if I am wrong) I cannot seem to find any indicator of what is being collected, only what is taking up the most memory. Is there a way to answer this empirically, or am I limited to educated guesses?
In Chrome's Profiles panel, take two heap snapshots: one before performing the action you want to check, and one after.
Now click on the second snapshot.
In the select box on the bottom bar you will see the option "Summary". Change it to "Comparison".
Then, in the select box next to it, choose the snapshot you want to compare against (it should automatically select Snapshot 1).
As the result you will get a table with the data you need, i.e. the "New" and "Deleted" objects.
With newer Chrome releases there is a new tool available that is handy for this kind of task:
The "Record Heap Allocations" profiling type. The regular "Heap SnapShot" comparison tool (as explained in Rafał Łużyński answers) cannot give you that kind of information because each time you take a heap snapshot, a GC run is performed, so GCed objects are never part of the snapshots.
However with the "Record Heap Allocations" tool constantly all allocations are being recorded (that's why it may slow down your application a lot when it is recording). If you are experiencing frequent GC runs, this tool can help you identify places in your code where lots of memory is allocated.
In conjunction with the Heap SnapShot comparison you will see that most of the time a lot more memory is allocated between two snapshots, than you can see from the comparison. In extreme cases the comparison will yield no difference at all, whereas the allocation tool will show you lots and lots of allocated memory (which obviously had to be garbage collected in the meantime).
Unfortunately the current version of the tool does not show you where the allocation took place, but it will show you what has been allocated and how it is was retained at the time of the allocation. From the data (and possibly the constructors) you will however be able to identify your objects and thus the place where they are being allocated.
If you're trying to choose between a few likely culprits, you could modify the object definitions so that instances attach themselves to the global scope (e.g. in a list under document, or something similar).
This will stop them from being collected, which may make the program faster (they're no longer being reclaimed) or slower (because they build up and get traversed by the mark-and-sweep collector every time). So if you see a change in performance, you may have found the problem.
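A minimal sketch of that idea (Particle is a hypothetical stand-in for one of your suspect constructors, and the global list name is arbitrary):

class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    // Pin every instance to a global list so the GC can never reclaim
    // it, then watch whether performance changes as a result.
    window.__pinnedParticles = window.__pinnedParticles || [];
    window.__pinnedParticles.push(this);
  }
}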
An alternative is to look at how many objects of each type are being created (set up a counter in the constructor, as in the sketch below). If they're getting collected a lot, they're also being created just as frequently.
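For instance (again a sketch, with Particle standing in for your own constructor):

let particleCount = 0; // total instances ever created
class Particle {
  constructor(x, y) {
    particleCount++; // count every creation
    this.x = x;
    this.y = y;
  }
}
// Log the counter periodically; a steadily climbing number alongside
// flat memory usage means instances are being created and collected
// at a high rate.
setInterval(() => console.log(`Particles created: ${particleCount}`), 1000);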
Take a look at https://developers.google.com/chrome-developer-tools/docs/heap-profiling, especially the Containment view:
The Containment view is essentially a "bird's eye view" of your application's objects structure. It allows you to peek inside function closures, to observe VM internal objects that together make up your JavaScript objects, and to understand how much memory your application uses at a very low level.
The view provides several entry points:
- DOMWindow objects — these are objects considered as "global" objects for JavaScript code;
- GC roots — actual GC roots used by the VM's garbage collector;
- Native objects — browser objects that are "pushed" inside the JavaScript virtual machine to allow automation, e.g. DOM nodes, CSS rules (see the next section for more details).
Below is an example of what the Containment view looks like: