JS Garbage Collection - javascript

What triggers a JavaScript garbage collector to run? Obviously this varies by JS engine, but I'm trying to get a rough idea. Is it only when available memory is below a certain threshold?
Thanks.

It really varies very widely. SpiderMonkey, for example, will GC based on various heuristics about how much memory has been allocated, but the browser embedding also triggers GC in various situations: after enough DOM events have been processed, after a script has run for long enough, when tabs or windows are closed or loaded, and so on. The heuristics involved have also changed wildly between different Firefox releases, and will change again.
And that's all for just one browser.

It varies. Chrome (V8) is simply based on a timer and an activity monitor (it tries not to run when the engine is busy).

This varies per browser and as far as I know you have absolutely no control over it.
Likewise you have no control over when the DOM is getting rendered, which is really annoying if you want to show a loading bar :D
Why do you want to know this?

Related

Is there a way to know anything about hardware resources of 'platform' accessing webpage?

I'd like to be able to find out about a browser's hardware resources from a web page, or at least a rough estimation.
Even when you detect the presence of modern technology (such as csstransforms3d, csstransitions, requestAnimationFrame) in a browser via a tool like Modernizr, you cannot be sure whether to activate some performance-consuming option (such as fancy 3D animation) or to avoid it.
I'm asking because I have (a lot of) experience with situations where the browser is modern (latest Chrome or Firefox, supporting all the cool technologies) but the machine's CPU, GPU, and available memory are catastrophic (32-bit Windows XP with an integrated GPU), so a decision based purely on detected browser capabilities is no good.
While Nickolay gave a very good and extensive explanation, I'd like to suggest one very simple, but possibly effective solution - you could try measuring how long it took for the page to load and decide whether to go with the resource-hungry features or not (Gmail does something similar - if the loading goes on for too long, a suggestion to switch to the "basic HTML" version will show up).
The idea is that, for slow computers, loading any page, regardless of content, should be, on average, much slower than on modern computers. Getting the amount of time it took to load your page should be simple, but there are a couple of things to note:
You need to experiment a bit to determine where to put the "too slow" threshold.
You need to keep in mind that slow connections can cause the page to load slower, but this will probably make a difference in a very small number of cases (using DOM ready instead of the load event can also help here).
In addition, the first time a user loads your site will probably be much slower, because nothing is cached yet. One simple solution for this is to keep your result in a cookie or local storage and only take loading time into account when the user visits for the first time (a sketch of this approach follows these notes).
Don't forget to always, no matter what detection mechanism you use and how accurate it is, allow the user to choose between the regular, resource-hungry version and the faster, "uglier" one - some people prefer better-looking effects even if it means the website will be slower, while others value speed and snappiness more.
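For illustration, here's a minimal sketch of that idea. The key name, the threshold, and the 'fancy-effects' class are all made up for this example and would need tuning to your own pages:

// Minimal sketch: measure how long the page took to become usable on the first
// visit, remember the verdict, and reuse it on later visits.
// PERF_KEY, SLOW_THRESHOLD_MS and the 'fancy-effects' class are illustrative only.
var PERF_KEY = 'useFancyEffects';
var SLOW_THRESHOLD_MS = 3000; // tune this experimentally, as noted above

function shouldEnableFancyEffects() {
  var stored = localStorage.getItem(PERF_KEY);
  if (stored !== null) {
    return stored === 'true'; // reuse the decision made on the first visit
  }
  // Measure up to "DOM ready" rather than the full load event, as suggested above.
  var t = window.performance && performance.timing;
  var loadTime = t ? t.domContentLoadedEventEnd - t.navigationStart : 0;
  var fancy = loadTime > 0 && loadTime < SLOW_THRESHOLD_MS;
  localStorage.setItem(PERF_KEY, String(fancy));
  return fancy;
}

// Run after the load event so the timing values are fully populated.
window.addEventListener('load', function () {
  if (shouldEnableFancyEffects()) {
    document.documentElement.className += ' fancy-effects';
  }
});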
In general, the available (to web pages) information about the user's system is very limited.
I remember a discussion of adding one such API to the web platform (navigator.hardwareConcurrency - the number of available cores), where the opponents of the feature explained the reasons against it, in particular:
The number of cores available to your app depends on other workload, not just on the available hardware. It's not constant, and the user might not be willing to let your app use all (or whatever fixed portion you choose) of the available hardware resources;
Helps "fingerprinting" the client.
Too oriented on the specifics of today. The web is designed to work on many devices, some of which do not even exist today.
These arguments work as well for other APIs for querying the specific hardware resources. What specifically would you like to check to see if the user's system can afford running a "fancy 3D animation"?
As a user I'd rather you didn't use additional resources (such as fancy 3D animation) if it's not necessary for the core function of your site/app. It's sad really that I have to buy a new laptop every few years just to be able to continue with my current workflow without running very slowly due to lack of HW resources.
That said, here's what you can do:
Provide a fallback link for the users who are having trouble with the "full" version of the site.
If this is important enough to you, you could first run short benchmarks to check the performance and fall back to the less resource-hungry version of the site if you suspect that a system is short on resources.
You could target the specific high-end platforms by checking the OS, screen size, etc.
This article mentions this method on mobile: http://blog.scottlogic.com/2014/12/12/html5-android-optimisation.html
WebGL provides some information about the renderer via the rendering context's getParameter() method. See this page for an example: http://analyticalgraphicsinc.github.io/webglreport/
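As a rough sketch of reading the renderer string - note that the WEBGL_debug_renderer_info extension may be unavailable or masked in some browsers, so treat a missing or generic value as "unknown":

// Sketch: ask WebGL which GPU/renderer it reports (may be blocked or generic).
function getWebGLRendererInfo() {
  var canvas = document.createElement('canvas');
  var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl) return null; // no WebGL at all is itself a useful signal
  var info = gl.getExtension('WEBGL_debug_renderer_info');
  return {
    vendor: info ? gl.getParameter(info.UNMASKED_VENDOR_WEBGL) : gl.getParameter(gl.VENDOR),
    renderer: info ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL) : gl.getParameter(gl.RENDERER)
  };
}
console.log(getWebGLRendererInfo());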

Javascript memory and leak problems

My site is a pretty standard ecommerce site; it isn't a JS-backed standalone app or anything, it's just a site which uses JS for standard stuff, as well as some jQuery plugins to do a few things.
I'm trying to do some JS memory evaluation on my site. I've done this by looking at the Chrome Task Manager and through Heap Snapshots.
Initially, on first load, my site sits between 35MB (i.e. 35,000K) and 40MB in the task manager. This is the largest of any tab, if I have several tabs of other websites open at the same time.
If I refresh the page it jumps up to 55-60MB, and another refresh sees it jump to 65-70MB.
On a normal page in a workflow, it fluctuates between 45-65MB (sometimes 75MB depending on what you're doing). Clicking around and doing the workflow from page to page sees the memory jump to 85-100MB, and it keeps increasing as you continue through the site.
I've tried to do a few things like check for:
detached nodes
heap snapshots & looking at the deltas
amix's MemoryLeakChecker checking size of objects
I'd need a deeper dive to look for circular references or closure problems.
Heap snapshots don't reveal much; most of the top entries are (array), (string), (system). The snapshots come in at 4.8MB, 5.1MB, 5.8MB, 6.8MB and keep increasing.
I've got a few questions as result:
How do I understand the difference between the snapshot memory figures and the task manager memory figures?
Are there any good tutorials (apart from the ones on the Google Developers site)?
How much memory is considered acceptable, given that in the task manager my site is always the highest?
Do I have a memory leak? Apart from the steps I've described above (which I haven't found anything concrete from), are there any other ways I can find leaks?
Can you suggest any tools apart from the Chrome Dev Tools? (A lot of the tools mentioned on Google for Firefox are not compatible with the latest version, e.g. Leak Monitor for FF.)
As a side note, most of my functions are low key operations, and don't exceed 200ms (based on a CPU profile). What is a good benchmark I should be aiming for? Is 200ms high?
What you are describing is not a memory leak; it's garbage that Chrome knows about and will remove whenever it decides it's time to do so. To explain this, let's have a closer look at the scenario you have described.
Making memory 'leak'
First, let's open up a new incognito window (just to be sure that browser extensions are not affecting our results) and navigate to google.com.
Then, let's open the Task Manager and enable the "JavaScript Memory" column (by right-clicking in the Task Manager window). We need this column to be sure that the memory we will be 'leaking' is, in fact, being allocated by JavaScript.
Now, as you suggested, we reload the page a couple of times and observe the memory of our tab going up.
So far, so good - everything works exactly as you described it.
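Since the screenshots aren't reproduced here, you can also create the same effect without reloading at all: paste something like the snippet below into the console a few times and watch the "JavaScript Memory" column climb (the sizes are arbitrary):

// Churn through short-lived objects; nothing keeps a reference to them,
// so everything allocated here is garbage the GC is free to reclaim later.
(function makeGarbage() {
  for (var i = 0; i < 100; i++) {
    var junk = [];
    for (var j = 0; j < 20000; j++) {
      junk.push({ index: j, text: 'filler-' + j });
    }
  }
  // junk is unreachable once this function returns - watch the column rise,
  // then drop again after the GC kicks in while the tab is idle.
})();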
Wait a second...
However, leave your cursor inactive for half a minute, or go to another tab, and you will observe a huge memory usage drop on our 'Tab: google'. Why is that? What happened there? Who cleaned up our 'leaked' memory for us?
The Memory Usage Drop
To investigate that, let's repeat what we have done so far, so that 'Tab: google' uses a lot of memory again. Then, let's open Chrome Developer Tools and start recording on the 'Timeline' tab. After that, let's switch to another tab for a couple of seconds, and when the memory drops, stop recording on the 'Timeline'.
In the last couple of seconds of the recording, mysterious 'GC Events' appear, at exactly the same time the memory was released. Coincidence? Nope.
GC Events
GC stands for Garbage Collector. It's a mechanism that "attempts to reclaim garbage, or memory occupied by objects that are no longer in use by the program". So it turns out that our tab's memory was polluted by garbage, and the GC was capable of getting rid of it the whole time (you can even force garbage collection using the button at the bottom of the 'Timeline' tab). So why did it decide not to? Why did it wait for us to stop interacting with the page, or to change the tab?
Lazy Garbage Collector
The short answer is that garbage collection has to 'freeze' the execution of all scripts before any work can be done. It can also take a significant amount of CPU time to execute. This can result in lag, choppy animations, unresponsive controls, etc. That's why Chrome waits for the right moment to run the garbage collection. And the best moment to do it is when the user is not looking.
In addition, please note that 'GC Events' come in series; there are always a couple of them with short breaks in between. These breaks are meant for 'normal' JavaScript to execute, making the garbage collection less noticeable.
Live Objects
Take a look at the "JavaScript Memory" column in the Task Manager again. You will notice that this column contains two numbers. The first one is the memory "reserved for JavaScript VM heap"; the other one is "how much memory live (reachable) objects comprise" (source). When benchmarking your applications you should worry only about the second value; all the rest will be handled by the GC.
An example of a leak
A real JavaScript leak can happen, for example, in a web chat application. If, over time, it uses more and more 'live' memory while always displaying only the last 10 messages, then we can talk about a leak. Such a leak will eventually crash the tab (or the browser).
Conclusion
For scripts running on the page, reloading the page (or navigating to another location) is like restarting your computer while your ANSI C app is running. After that, you should think of all the memory allocated by your scripts as wiped out. The only reason why, in practice, this may not happen immediately after reloading the page is that the browser is waiting for the right moment to clean up. And you, as a web developer, should not be concerned about it.
If you still think that your page is leaking, you can use the answer from this question to track down the leaked objects.

Best practice: very long running polling processes in Javascript?

I have a touch screen kiosk application that I'm developing that will be deployed on the latest version of Chrome.
The application will need to make AJAX calls to a web service, every 10 minutes or so, to pull in any updated content.
As it's a kiosk application, the page is unlikely to be reloaded very often and theoretically, unless the kiosk is turned off, the application could run for days at a time.
I guess my concern is memory usage and whether or not a very long-running setTimeout loop would chew through a large amount of memory given sufficient time.
I'm currently considering the use of Web Workers and I'm also going to look into Web Sockets but I was wondering if anyone had any experience with this type of thing?
Cheers,
Terry
The browser has a garbage collector, so there are no problems on that front as long as you don't introduce memory leaks through bad code. Here's an article and another article about memory leak patterns; that should get you started on how to program efficiently and hunt down leaky code.
Also, you have to consider the DOM. Someone on SO once said that "things that are not on screen should be removed and not just hidden" - this not only removes the element visually, it actually removes it from the DOM, removes its handlers, and frees the memory it used.
As for the setTimeout, lengthen the interval between calls. If it fires too often, you will chew up memory fast (and render the page quite... laggy). I just tested code for timer-based "hashchange" detection, and even on Chrome it makes the page rather slow.
Also, research Chrome's known bugs and keep updated as well.
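For what it's worth, a setTimeout-based poller along these lines shouldn't accumulate memory by itself, as long as each response is dropped after it has been applied. In this sketch the URL, the interval, the #content element and applyContent are all placeholders:

// Sketch of a long-running poller for a kiosk page.
var POLL_INTERVAL_MS = 10 * 60 * 1000; // roughly every 10 minutes

function applyContent(data) {
  // Replace the old content rather than hiding it, so the old nodes
  // and their handlers can actually be collected.
  document.getElementById('content').textContent = JSON.stringify(data);
}

function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/content', true);
  xhr.onload = function () {
    if (xhr.status === 200) {
      applyContent(JSON.parse(xhr.responseText));
    }
    scheduleNext(); // the response goes out of scope after this call
  };
  xhr.onerror = scheduleNext; // don't let one failed request stop the loop
  xhr.send();
}

function scheduleNext() {
  setTimeout(poll, POLL_INTERVAL_MS);
}

poll();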

What's the best way to determine at runtime if a browser is too slow to gracefully handle complex JavaScript/CSS?

I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.
I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-)
Are there any examples of this sort of thing being done?
What would be the best ways to measure this, accounting for things like temporary slowdowns on busy CPUs?
Notes:
I'm not interested in browser/OS detection.
At the moment, I'm not interested in bandwidth measurements - only browser/cpu performance.
Things that might be interesting to measure:
Base JavaScript
DOM manipulation
DOM/CSS rendering
I'd like to do this in a way that affects the page's render-speed as little as possible.
BTW: In order to not confuse/irritate users with inconsistent behavior - this would, of course, require on-screen notifications to allow users to opt in/out of this whole performance-tuning process.
[Update: there's a related question that I missed: Disable JavaScript function based on user's computer's performance. Thanks Andrioid!]
Not to be a killjoy here, but this is not a feat that is currently possible in any meaningful way in my opinion.
There are several reasons for this, the main ones being:
Whatever measurement you do, if it is to have any meaning, will have to test the maximum potential of the browser/CPU, which you cannot do while maintaining any kind of reasonable user experience.
Even if you could, it would be a meaningless snapshot, since you have no idea what kind of load the CPU is under from applications other than the browser while your test is running, and whether or not that situation will continue while the user is visiting your website.
Even if you could do that, every browser has its own strengths and weaknesses, which means you'd have to test every DOM manipulation function to know how fast the browser would complete it. There is no "general" or "average" that makes sense here in my experience, and even if there were, the speed with which DOM manipulation commands execute depends on what is currently in the DOM, which changes as you manipulate it.
The best you can do is to either
Let your users decide what they want, and enable them to easily change that decision if they regret it
or better yet
Choose to give them something that you can be reasonably sure that the greater part of your target audience will be able to enjoy.
Slightly off topic, but following this train of thought: if your users are not tech leaders in their social circles (like most users here are, but most people in the world are not), don't give them too much choice, i.e. any choice that is not absolutely necessary - they don't want it, and they don't understand the technical consequences of their decision before it is too late.
A different approach, that does not need explicit benchmark, would be to progressively enable features.
You could apply features in prioritized order, and after each one, drop the rest if a certain amount of time has passed.
Ensuring that the most expensive features come last, you would present the user with a somewhat appropriate selection of features based on how speedy the browser is.
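A bare-bones sketch of that idea might look like this; the budget and the feature functions are purely illustrative:

// Apply features cheapest-first and stop once the time budget is spent.
var TIME_BUDGET_MS = 50; // illustrative value - tune against real devices

var features = [
  function enableTooltips()  { /* cheap enhancement */ },
  function enableMenuFades() { /* moderately expensive */ },
  function enable3dEffects() { /* most expensive, so it comes last */ }
];

var start = Date.now();
for (var i = 0; i < features.length; i++) {
  if (Date.now() - start > TIME_BUDGET_MS) {
    break; // this machine is too slow - skip the remaining, heavier features
  }
  features[i]();
}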
You could try timing some basic operations - have a look at Steve Souders' Episodes and Yahoo's boomerang for good ways of timing stuff browser-side. However, it's going to be rather complicated to work out how the metrics relate to an acceptable level of performance / a rewarding user experience.
If you're going to provide a UI to let users opt in / opt out, why not just let the user choose the level of eye candy in the app vs the rendering speed?
Take a look at some of Google's (copyrighted!) benchmarks for V8:
http://v8.googlecode.com/svn/data/benchmarks/v4/regexp.js
http://v8.googlecode.com/svn/data/benchmarks/v4/splay.js
I chose a couple of the simpler ones to give an idea of similar benchmarks you could create yourself to test feature sets. As long as you keep the run time of your tests - from the time logged at the start to the time logged at completion - under 100 ms on the slowest systems (these Google tests run for much longer than that), you should get the information you need without hurting the user experience. While the Google benchmarks care about granularity between the faster systems, you don't. All you need to know is which systems take longer than XX ms to complete.
Things you could test are regular expression operations (similar to the above), string concatenation, page scrolling, anything that causes a browser repaint or reflow, etc.
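As a rough illustration of that kind of test (the iteration counts and the threshold are guesses you would calibrate yourself so that the fastest machines finish well under 100 ms):

// Time a small, fixed workload and only ask "did it take longer than XX ms?".
function quickBenchmark() {
  var start = Date.now();
  var s = '';
  for (var i = 0; i < 5000; i++) {
    s += 'x' + (i % 10);              // string concatenation
  }
  var re = /(\d+)-(\d+)/;
  for (var j = 0; j < 5000; j++) {
    re.test(j + '-' + (j * 2));       // regular expression work
  }
  return Date.now() - start;          // elapsed milliseconds
}

var SLOW_LIMIT_MS = 40; // the "XX ms" above - pick it from your own measurements
if (quickBenchmark() > SLOW_LIMIT_MS) {
  // fall back to the lightweight version of the page
}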
You could run all the benchmarks you want using Web Workers and then, according to the results, store your verdict about the machine's performance in a cookie.
This only works in browsers with Web Worker support, of course.
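A rough outline of that idea, assuming the benchmark lives in a separate file; the file name 'benchmark-worker.js', the workload, and the threshold are all made up here:

// benchmark-worker.js (hypothetical file) - runs the workload off the main thread
self.onmessage = function () {
  var start = Date.now();
  var s = '';
  for (var i = 0; i < 200000; i++) { s += i % 10; }
  self.postMessage(Date.now() - start); // report elapsed milliseconds
};

// main page - run the worker once, then remember the verdict in a cookie
if (window.Worker && !/perfClass=/.test(document.cookie)) {
  var worker = new Worker('benchmark-worker.js');
  worker.onmessage = function (e) {
    var perfClass = e.data < 100 ? 'fast' : 'slow'; // threshold is a guess to tune
    document.cookie = 'perfClass=' + perfClass + '; max-age=2592000; path=/';
    worker.terminate();
  };
  worker.postMessage('start');
}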
Some Ideas:
Putting a time-limit on the tests seems like an obvious choice.
Storing test results in a cookie also seems obvious.
Poor performance on a test could pause further scripts and trigger the display of a non-blocking prompt UI (like the save-password prompts common in modern web browsers) that asks the user if they want to opt into further scripting effects - and store the answer in a cookie.
While the user hasn't answered the prompt, periodically repeat the tests and auto-accept the scripting prompt if consecutive tests finish faster than the first one.
On a side note, slow network speeds could also probably be tested by timing the download of external resources (like the page's own CSS or JavaScript files) and comparing that result with the JavaScript benchmark results. This may be useful on sites relying on lots of XHR effects and/or heavy use of <img/>s.
It seems that DOM rendering/manipulation benchmarks are difficult to perform before the page has started to render - and are thus likely to cause quite noticeable delays for all users.
I ran into a similar question and solved it this way; in fact, it helped me make some decisions.
After rendering the page, I do:
let now, finishTime, i = 0;
now = Date.now(); // returns the number of milliseconds since Jan 01 1970
finishTime = now + 200; // we add 200ms (1/5 of a second)
while (now < finishTime) { // count how many loop iterations fit into 200ms
  i++;
  now = Date.now();
}
console.log("I looped " + i + " times!!!");
After doing that, I tested it on several browsers with different OSes, and the i value gave me the following results:
Windows 10 - 8GB RAM:
1,500,000 approx. on Chrome
301,327 approx. on Internet Explorer
141,280 (on Firefox in a virtual machine running Lubuntu with 2GB assigned)
MacOS - 8GB RAM:
3,000,000 approx. on Safari
1,500,000 approx. on Firefox
70,000 (on Firefox 41 in a virtual machine running Windows XP with 2GB assigned)
Windows 10 - 4GB RAM (this is an old computer I have):
500,000 approx. on Google Chrome
I load a lot of divs in the form of a list; they are loaded dynamically according to the user's input. This helped me limit the number of elements I create according to the performance they have given. BUT...
The JS is not everything! Even though the Lubuntu OS running in a virtual machine gave poor results, it loaded 20,000 div elements in less than 2 seconds and you could scroll through the list with no problem, while it took more than 12 seconds on IE and the performance was terrible!
So that could be a good approach, but when it comes to rendering, that's another story. Still, this definitely could help you make some decisions.
Good luck, everyone!

jQuery or javascript to find memory usage of page

Is there a way to find out how much memory is being used by a web page, or by my jquery application?
Here's my situation:
I'm building a data heavy webapp using a jquery frontend and a restful backend that serves data in JSON. The page is loaded once, and then everything happens via ajax.
The UI provides users with a way to create multiple tabs within the UI, and each tab can contain lots and lots of data. I'm considering limiting the number of tabs they can create, but was thinking it would be nice to only limit them once memory usage has gone above a certain threshold.
Based on the answers, I'd like to make some clarifications:
I'm looking for a runtime solution (not just developer tools), so that my application can determine actions based on memory usage in a user's browser.
Counting DOM elements or document size might be a good estimation, but it could be quite inaccurate since it wouldn't include event binding, data(), plugins, and other in-memory data structures.
2015 Update
Back in 2012 this wasn't possible if you wanted to support all major browsers in use. Unfortunately, right now this is still a Chrome-only feature (a non-standard extension of window.performance).
window.performance.memory
Browser support: Chrome 6+
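Reading it is straightforward where it exists; note that the properties are non-standard and report sizes in bytes:

// Chrome-only, non-standard: window.performance.memory
if (window.performance && performance.memory) {
  var m = performance.memory;
  console.log('Heap size limit:      ' + m.jsHeapSizeLimit + ' bytes');
  console.log('Total allocated heap: ' + m.totalJSHeapSize + ' bytes');
  console.log('Heap currently used:  ' + m.usedJSHeapSize + ' bytes');
} else {
  console.log('performance.memory is not available in this browser');
}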
2012 Answer
Is there a way to find out how much memory is being used by a web page, or by my jquery application? I'm looking for a runtime solution (not just developer tools), so that my application can determine actions based on memory usage in a user's browser.
The simple but correct answer is no. Not all browsers expose such data to you, and I think you should drop the idea, simply because the complexity and inaccuracy of a "handmade" solution may introduce more problems than it solves.
Counting DOM elements or document size might be a good estimation, but it could be quite inaccurate since it wouldn't include event binding, data(), plugins, and other in-memory data structures.
If you really want to stick with your idea you should separate fixed and dynamic content.
Fixed content is not dependent on user actions (memory used by script files, plugins, etc.).
Everything else is considered dynamic and should be your main focus when determining your limit.
But there is no easy way to summarize them. You could implement a tracking system that gathers all this information. All operations should call the appropriate tracking methods, for example (a rough sketch follows this list):
Wrap or overwrite jQuery.data method to inform the tracking system about your data allocations.
Wrap html manipulations so that adding or removing content is also tracked (innerHTML.length is the best estimate).
If you keep large in-memory objects they should also be monitored.
As for event binding you should use event delegation and then it could also be considered a somewhat fixed factor.
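Very roughly, such a tracking layer could look like the sketch below. The 'tracker' object and its crude size accounting are made up for illustration, not a drop-in solution:

// Naive sketch of a hand-rolled tracking layer around jQuery.
var tracker = {
  chars: 0,
  add: function (n) { this.chars += n; },
  remove: function (n) { this.chars -= n; }
};

// Wrap jQuery.fn.data so writes are (very roughly) accounted for.
var originalData = jQuery.fn.data;
jQuery.fn.data = function (key, value) {
  if (typeof key === 'string' && value !== undefined) {
    var serialized = JSON.stringify(value);
    if (serialized) tracker.add(serialized.length); // crude size estimate
  }
  return originalData.apply(this, arguments);
};

// Wrap jQuery.fn.html so added and removed markup is accounted for too.
var originalHtml = jQuery.fn.html;
jQuery.fn.html = function (content) {
  if (content !== undefined) {
    this.each(function () {
      tracker.remove(this.innerHTML.length); // markup being replaced
    });
    tracker.add(String(content).length);     // markup being added
  }
  return originalHtml.apply(this, arguments);
};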
Another aspect that makes it hard to estimate your memory requirements correctly is that different browsers may allocate memory differently (for Javascript objects and DOM elements).
You can use the Navigation Timing API.
Navigation Timing is a JavaScript API for accurately measuring performance on the web. The API provides a simple way to get accurate and detailed timing statistics - natively - for page navigation and load events.
window.performance.memory (a non-standard Chrome extension exposed alongside it, not part of the Navigation Timing spec itself) gives access to JavaScript memory usage data.
Recommended reading
Measuring page load speed with Navigation Timing
This question is 5 years old, and both JavaScript and browsers have evolved incredibly in this time. Since this is now possible (in at least some browsers), and this question is the first result when you Google "javascript show memory usage", I thought I'd offer a modern solution.
memory-stats.js: https://github.com/paulirish/memory-stats.js/tree/master
This script (which you can run at any time on any page) will display the current memory usage of the page:
var script=document.createElement('script');
script.src='https://rawgit.com/paulirish/memory-stats.js/master/bookmarklet.js';
document.head.appendChild(script);
I don't know of any way that you could actually find out how much memory is being used by the browser, but you might be able to use a heuristic based on the number of elements on the page. Using jQuery, you could do $('*').length and it will give you the count of DOM elements. Honestly, though, it's probably easier just to do some usability testing and come up with a fixed number of tabs to support.
Use the Chrome Heap Snapshot tool
There's also a Firebug tool called MemoryBug, but it seems it's not very mature yet.
If you just want to check this during testing, there is a way in Chrome via the developer tools to track memory use, but I'm not sure how to do it in JavaScript directly.
I would like to suggest an entirely different solution from the other answers, namely to observe the speed of your application and once it drops below defined levels either show tips to the user to close tabs, or disable new tabs from opening. A simple class which provides this kind of information is for example https://github.com/mrdoob/stats.js .
Aside from that, it might not be wise for such an intensive application to keep all tabs in memory in the first place. For example, keeping only the user state (such as scroll position) for all but the last two tabs, and reloading their data whenever they are reopened, might be a safer option.
Lastly, webkit developers have been discussing adding memory information to javascript, but they have gotten in a number of arguments about what and what should not be exposed. Either way, it's not unlikely that this kind of information will be available in a few years (although that information isn't too useful right now).
Perfect question timing with me starting on a similar project!
There is no accurate way of monitoring JS memory usage in-app since it would require higher level privileges. As mentioned in comments, checking the number of all elements etc. would be a waste of time since it ignores bound events etc.
This would be an architecture issue if memory leaks manifest or unused elements persist. Making sure that closed tabs' content is deleted completely without lingering event handlers etc. would be perfect; assuming that it's done you could just simulate heavy usage in a browser and extrapolate the results from memory monitoring (type about:memory in the address bar)
Pro tip: if you open the same page in IE, FF, Safari... and Chrome, and then navigate to about:memory in Chrome, it will report memory usage across all the other browsers. Neat!
What you might want to do is have the server keep track of their bandwidth for that session (how many bytes of data have been sent to them). When they go over the limit, instead of sending data via ajax, the server should send an error code which javascript will use to tell the user they've used too much data.
You can get document.documentElement.innerHTML and check its length. It gives you the number of characters of markup in your page, which is only a rough proxy for memory use.
This may not work in all browsers, so you can enclose all your body elements in a giant div and call innerHTML on that div. Something like <body><div id="giantDiv">...</div></body>
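For what it's worth, the measurement itself is a one-liner; just keep in mind it counts characters of serialized markup, not actual memory:

// Very rough proxy: how many characters of markup does the page hold right now?
var wrapper = document.getElementById('giantDiv') || document.documentElement;
console.log('Approximate markup size: ' + wrapper.innerHTML.length + ' characters');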
