jQuery or JavaScript to find memory usage of page

Is there a way to find out how much memory is being used by a web page, or by my jquery application?
Here's my situation:
I'm building a data-heavy webapp using a jQuery frontend and a RESTful backend that serves data in JSON. The page is loaded once, and then everything happens via ajax.
The UI provides users with a way to create multiple tabs within the UI, and each tab can contain lots and lots of data. I'm considering limiting the number of tabs they can create, but was thinking it would be nice to only limit them once memory usage has gone above a certain threshold.
Based on the answers, I'd like to make some clarifications:
I'm looking for a runtime solution (not just developer tools), so that my application can determine actions based on memory usage in a user's browser.
Counting DOM elements or document size might be a good estimation, but it could be quite inaccurate since it wouldn't include event binding, data(), plugins, and other in-memory data structures.

2015 Update
Back in 2012 this wasn't possible if you wanted to support all major browsers in use. Unfortunately, right now this is still a Chrome-only feature (a non-standard extension of window.performance).
window.performance.memory
Browser support: Chrome 6+
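A minimal sketch of reading it, with a guard for browsers that don't expose it (the 90% threshold and the disableNewTabs() hook are illustrative, not part of the API):
// Non-standard and Chrome-only; all values are in bytes.
if (window.performance && window.performance.memory) {
    var mem = window.performance.memory;
    console.log('Used JS heap:  ' + mem.usedJSHeapSize);
    console.log('Total JS heap: ' + mem.totalJSHeapSize);
    console.log('Heap limit:    ' + mem.jsHeapSizeLimit);

    // e.g. stop offering new tabs once usage nears the limit
    if (mem.usedJSHeapSize > 0.9 * mem.jsHeapSizeLimit) {
        disableNewTabs(); // hypothetical application hook
    }
}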
2012 Answer
Is there a way to find out how much memory is being used by a web page, or by my jquery application? I'm looking for a runtime solution (not just developer tools), so that my application can determine actions based on memory usage in a user's browser.
The simple but correct answer is no. Not all browsers expose such data to you, and I think you should drop the idea, simply because the complexity and inaccuracy of a "handmade" solution may introduce more problems than it solves.
Counting DOM elements or document size might be a good estimation, but it could be quite inaccurate since it wouldn't include event binding, data(), plugins, and other in-memory data structures.
If you really want to stick with your idea you should separate fixed and dynamic content.
Fixed content is not dependent on user actions (memory used by script files, plugins, etc.)
Everything else is considered dynamic and should be your main focus when determining your limit.
But there is no easy way to summarize them. You could implement a tracking system that gathers all this information; all operations should call the appropriate tracking methods. For example (a rough sketch follows this list):
Wrap or overwrite jQuery.data method to inform the tracking system about your data allocations.
Wrap html manipulations so that adding or removing content is also tracked (innerHTML.length is the best estimate).
If you keep large in-memory objects they should also be monitored.
As for event binding you should use event delegation and then it could also be considered a somewhat fixed factor.
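A minimal sketch of such a tracker, assuming jQuery and using JSON/innerHTML lengths as very rough size proxies (the MemoryTracker object is illustrative, not an existing library):
var MemoryTracker = {
    bytes: 0,
    add: function (n) { this.bytes += n || 0; },
    remove: function (n) { this.bytes -= n || 0; }
};

// Wrap jQuery.fn.data so stored values are (roughly) accounted for.
var originalData = jQuery.fn.data;
jQuery.fn.data = function (key, value) {
    if (arguments.length >= 2) {
        try {
            MemoryTracker.add(JSON.stringify(value).length);
        } catch (e) { /* non-serializable value: not counted */ }
    }
    return originalData.apply(this, arguments);
};

// Wrap jQuery.fn.html so added and removed markup is tracked via innerHTML.length.
var originalHtml = jQuery.fn.html;
jQuery.fn.html = function (value) {
    if (value !== undefined) {
        this.each(function () { MemoryTracker.remove(this.innerHTML.length); });
    }
    var result = originalHtml.apply(this, arguments);
    if (value !== undefined) {
        this.each(function () { MemoryTracker.add(this.innerHTML.length); });
    }
    return result;
};
Your tab-creation code can then compare MemoryTracker.bytes against an experimentally chosen limit.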
Another aspect that makes it hard to estimate your memory requirements correctly is that different browsers may allocate memory differently (for JavaScript objects and DOM elements).

You can use the Navigation Timing API.
Navigation Timing is a JavaScript API for accurately measuring performance on the web. The API provides a simple way to get accurate and detailed timing statistics—natively—for page navigation and load events.
window.performance.memory (a non-standard extension currently available in Chrome) gives access to JavaScript memory usage data.
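For example, a quick sketch reading the (Level 1) timing data after the page has loaded:
window.addEventListener('load', function () {
    // loadEventEnd is only populated after the load handlers finish,
    // hence the setTimeout.
    setTimeout(function () {
        var t = window.performance.timing;
        console.log('Page load took ' + (t.loadEventEnd - t.navigationStart) + ' ms');
    }, 0);
});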
Recommended reading
Measuring page load speed with Navigation Timing

This question is 5 years old, and both JavaScript and browsers have evolved incredibly in this time. Since this is now possible (in at least some browsers), and this question is the first result when you Google "javascript show memory usage", I thought I'd offer a modern solution.
memory-stats.js: https://github.com/paulirish/memory-stats.js/tree/master
This script (which you can run at any time on any page) will display the current memory usage of the page:
// Inject the memory-stats.js bookmarklet into the current page:
var script = document.createElement('script');
script.src = 'https://rawgit.com/paulirish/memory-stats.js/master/bookmarklet.js';
document.head.appendChild(script);

I don't know of any way that you could actually find out how much memory is being used by the browser, but you might be able to use a heuristic based on the number of elements on the page. Using jQuery, you could do $('*').length to get a count of the DOM elements. Honestly, though, it's probably easier just to do some usability testing and settle on a fixed number of tabs to support.

Use the Chrome Heap Snapshot tool
There's also a Firebug tool called MemoryBug, but it doesn't seem very mature yet.

If you just want to see memory use while testing, Chrome's developer tools can track it, but I'm not sure how to do it from JavaScript directly.

I would like to suggest an entirely different solution from the other answers, namely to observe the speed of your application and, once it drops below defined levels, either show tips to the user to close tabs or disable new tabs from opening. A simple library that provides this kind of information is https://github.com/mrdoob/stats.js, for example.
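A rough sketch of that idea (stats.js API as of recent versions; the five-second sampling window, the 30 fps threshold, and the disableNewTabs() hook are all made up for illustration):
var stats = new Stats();
stats.showPanel(0); // panel 0 shows frames per second
document.body.appendChild(stats.dom);

var frames = 0;
var last = performance.now();

function loop() {
    stats.begin();
    // ... per-frame application work goes here ...
    stats.end();

    frames++;
    var now = performance.now();
    if (now - last >= 5000) { // sample every five seconds
        var fps = frames * 1000 / (now - last);
        if (fps < 30) {
            disableNewTabs(); // hypothetical application hook
        }
        frames = 0;
        last = now;
    }
    requestAnimationFrame(loop);
}
requestAnimationFrame(loop);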
Aside from that, it might not be wise for such an intensive application to keep all tabs in memory in the first place. For example, keeping only the user state (such as scroll position) for all but the most recent two tabs, and reloading a tab's data when it is reopened, might be a safer option.
Lastly, WebKit developers have been discussing adding memory information to JavaScript, but they have gotten into a number of arguments about what should and should not be exposed. Either way, it's not unlikely that this kind of information will become available in a few years (although that prospect isn't much use to you right now).

Perfect question timing with me starting on a similar project!
There is no accurate way of monitoring JS memory usage in-app, since it would require higher-level privileges. As mentioned in the comments, counting all the elements and so on would be a waste of time, since it ignores bound events and other in-memory state.
This becomes an architecture issue if memory leaks manifest or unused elements persist. Making sure that closed tabs' content is deleted completely, with no lingering event handlers, would be ideal; assuming that is done, you could simply simulate heavy usage in a browser and extrapolate the results from memory monitoring (type about:memory in the address bar).
Protip: if you open the same page in IE, FF, Safari... and Chrome, and then navigate to about:memory in Chrome, it will report memory usage across all the other browsers. Neat!

What you might want to do is have the server keep track of their bandwidth for that session (how many bytes of data have been sent to them). When they go over the limit, instead of sending data via ajax, the server should send an error code, which your JavaScript can use to tell the user they've used too much data.
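A client-side sketch of that protocol, assuming the server answers with HTTP 429 (or any agreed-upon code) once the session's data budget is exceeded; the URL and renderTab() are placeholders:
$.ajax({
    url: '/api/data', // placeholder endpoint
    dataType: 'json'
}).done(function (data) {
    renderTab(data); // hypothetical rendering function
}).fail(function (xhr) {
    if (xhr.status === 429) {
        alert("You've loaded too much data this session. Try closing some tabs.");
    }
});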

You can get document.documentElement.innerHTML and check its length. That gives you the number of characters in your page's serialized markup, which is a rough proxy for its size rather than its true memory use.
This may not work in all browsers, so you can enclose all your body elements in a giant div and call innerHTML on that div. Something like <body><div id="giantDiv">...</div></body>
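A quick sketch of both variants (giantDiv is the wrapper id suggested above):
// Length of the page's serialized markup, in characters:
var approxSize = document.documentElement.innerHTML.length;

// Or, if the content is wrapped as suggested:
var giant = document.getElementById('giantDiv');
if (giant) {
    approxSize = giant.innerHTML.length;
}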

Related

What is the most complete way to gather the largest contentful paint metric in the field?

I would like to gather the "Largest Contentful Paint" metric programmatically in the "field" on actual user page loads and report it to a data collection tool.
I know that I can get this metric for individual pages in a more artificial context by running lighthouse on particular URLs, but I want to gather it when actual users are running my app.
I know that there is a new Largest Contentful Paint API in JavaScript, but it looks to have very partial support: https://developer.mozilla.org/en-US/docs/Web/API/Largest_Contentful_Paint_API. Chrome seems to have robust support but Firefox has none.
Is there any library, tool, or technique that compensates in some way for this partial support?
Many metrics that have to do with the result of rendering must be collected by the browser that performs the rendering. The JavaScript API is a way to ask the browser for metrics that it collects during the render.
Is there any library, tool, or technique that compensates in some way for this partial support?
Basically, no. If the LCP API is not implemented by a browser, then it's not available (this is noted in the web-vitals package readme). However, this isn't the end of things. If you know which element triggers LCP on Chrome, then you could write a listener that fires when that element has rendered on the page (the implementation will be specific to how and where you generate your client HTML). This approach is decidedly cumbersome and not great to maintain, but it would yield complete support across browsers.
Of course, you could also cut your losses and accept that not all browsers support the metric right now, but having some signal is better than none, especially if it is coming from actual site visitors.
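For what it's worth, a minimal field-collection sketch using PerformanceObserver; browsers without this entry type simply never report anything, and sendToAnalytics() is a hypothetical reporting hook:
if ('PerformanceObserver' in window) {
    try {
        var po = new PerformanceObserver(function (list) {
            var entries = list.getEntries();
            // The last entry is the current largest-contentful-paint candidate.
            var lcp = entries[entries.length - 1];
            console.log('LCP candidate at ' + lcp.startTime + ' ms');
            // sendToAnalytics(lcp.startTime); // hypothetical reporting hook
        });
        po.observe({ type: 'largest-contentful-paint', buffered: true });
    } catch (e) {
        // Entry type not supported in this browser; no signal is collected.
    }
}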

Is there a way to know anything about hardware resources of 'platform' accessing webpage?

I'd like to be able to find out about a browser's hardware resources from a web page, or at least a rough estimation.
Even when you detect the presence of modern technology (such as csstransforms3d, csstransitions, requestAnimationFrame) in a browser via a tool like Modernizr, you cannot be sure whether to activate some performance-consuming option (such as fancy 3D animation) or to avoid it.
I'm asking because I have (a lot of) experience with situations where the browser is modern (latest Chrome or Firefox supporting all the cool technologies) but the machine's CPU, GPU, and available memory are just catastrophic (32-bit Windows XP with an integrated GPU), and thus a decision based purely on detected browser capabilities is no good.
While Nickolay gave a very good and extensive explanation, I'd like to suggest one very simple but possibly effective solution: measure how long it takes for the page to load and use that to decide whether to enable the resource-hungry features. (Gmail does something similar: if loading takes too long, it offers to switch to the "basic HTML" version.)
The idea is that, for slow computers, loading any page, regardless of content, should be, on average, much slower than on modern computers. Getting the amount of time it took to load your page should be simple, but there are a couple of things to note:
You need to experiment a bit to determine where to put the "too slow" threshold.
You need to keep in mind that slow connections can cause the page to load slower, but this will probably make a difference in a very small number of cases (using DOM ready instead of the load event can also help here).
In addition, the first time a user loads your site will probably be much slower, due to an empty cache. One simple solution is to store your result in a cookie or local storage and only take loading time into account on the user's first visit.
Don't forget to always, no matter what detection mechanism you use and how accurate it is, allow the user to choose between the regular, resource-hungry version and the faster, "uglier" one. Some people prefer better-looking effects even if the website will be slower, while others value speed and snappiness more.
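A sketch of the load-time heuristic described above; the 3000 ms threshold is arbitrary and enableLiteVersion() is a hypothetical application hook:
(function () {
    var stored = localStorage.getItem('useLiteVersion');
    if (stored !== null) {
        // A decision was already made on a previous visit.
        if (stored === 'true') { enableLiteVersion(); }
        return;
    }
    document.addEventListener('DOMContentLoaded', function () {
        // navigationStart is in epoch milliseconds, like Date.now().
        var elapsed = Date.now() - performance.timing.navigationStart;
        var useLite = elapsed > 3000;
        localStorage.setItem('useLiteVersion', String(useLite));
        if (useLite) { enableLiteVersion(); }
    });
})();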
In general, the available (to web pages) information about the user's system is very limited.
I remember a discussion of adding one such API to the web platform (navigator.hardwareConcurrency - the number of available cores), where the opponents of the feature explained the reasons against it, in particular:
The number of cores available to your app depends on other workload, not just on the available hardware. It's not constant, and the user might not be willing to let your app use all (or whatever fixed portion you choose) of the available hardware resources;
Helps "fingerprinting" the client.
Too oriented on the specifics of today. The web is designed to work on many devices, some of which do not even exist today.
These arguments work as well for other APIs for querying the specific hardware resources. What specifically would you like to check to see if the user's system can afford running a "fancy 3D animation"?
As a user I'd rather you didn't use additional resources (such as fancy 3D animation) if it's not necessary for the core function of your site/app. It's sad really that I have to buy a new laptop every few years just to be able to continue with my current workflow without running very slowly due to lack of HW resources.
That said, here's what you can do:
Provide a fallback link for the users who are having trouble with the "full" version of the site.
If this is important enough to you, you could first run short benchmarks to check the performance and fall back to the less resource-hungry version of the site if you suspect that a system is short on resources (see the sketch below).
You could target the specific high-end platforms by checking the OS, screen size, etc.
This article mentions this method on mobile: http://blog.scottlogic.com/2014/12/12/html5-android-optimisation.html
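As for the short-benchmark idea above, a tiny hypothetical sketch (the workload, the 50 ms threshold, and useLowEndVersion() are all made up):
function quickBenchmark() {
    var t0 = performance.now();
    var x = 0;
    for (var i = 0; i < 1e6; i++) { x += Math.sqrt(i); }
    return performance.now() - t0; // milliseconds for a fixed workload
}

if (quickBenchmark() > 50) {
    useLowEndVersion(); // hypothetical fallback hook
}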
WebGL provides some information about the renderer via webgl.getParameter(). See this page for example: http://analyticalgraphicsinc.github.io/webglreport/
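A sketch of that query; note that it depends on the WEBGL_debug_renderer_info extension, which not every browser exposes:
var canvas = document.createElement('canvas');
var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
if (gl) {
    var info = gl.getExtension('WEBGL_debug_renderer_info');
    if (info) {
        console.log('GPU vendor:   ' + gl.getParameter(info.UNMASKED_VENDOR_WEBGL));
        console.log('GPU renderer: ' + gl.getParameter(info.UNMASKED_RENDERER_WEBGL));
    }
}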

JavaScript: figuring out max memory that could be used in a program

JavaScript in Chrome (or any other browser for that matter, but I'd rather limit the discussion to Chrome to keep it simple) does not provide an API that can be used to observe memory-related information (e.g. how much memory is being used by the current tab where the JS is running).
I am looking for a creative solution for getting an estimation of how much bytes I can cache in a JavaScript object that my web page is running. The problem definition is that I would like to cache as much as possible.
Can anyone think of a decent way of estimating how much memory a tab can handle before it crashes or becomes unusable on a given machine? I guess a statistical approach could work out fine for some cases, but I'm looking for something more dynamic.

JavaScript instance vs prototype methods and heap snapshot data using Chrome Dev Tools

I have a heap profile taken in Chrome Dev Tools, but I am not sure how to interpret the data. As a test, I created 10,000 WidgetBuilder objects, each with their own methods. I would like to compare storing methods on instances versus the prototype and see how that affects memory and performance when my page loads.
Should I focus on Retained Size or Shallow Size?
Are the values listed in these columns in bytes?
What is considered a lot of memory?
You might want to start here:
https://developers.google.com/chrome-developer-tools/docs/heap-profiling
It goes into detail on how to understand what you're reading. As for what is considered a lot of memory, that's a tricky question. If your website is targeted at mobile devices, I would start there as a constraint. To come up with a good comparison, I'd suggest running the profiler against sites you use every day and observing the memory consumption there.
If you find you're using more memory than gmail you might want to rethink ;)
I also recommend checking out jsPerf:
http://jsperf.com/prototype-vs-instance-functions
There is a LOT of prior work done on that site in regards to performance testing. You might be able to save yourself some time.
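For reference, a sketch of the two patterns being compared (WidgetBuilder is the asker's hypothetical class):
// One function object is allocated per instance:
function WidgetBuilderInstance(name) {
    this.name = name;
    this.build = function () {
        return '<div>' + this.name + '</div>';
    };
}

// One function object is shared by all instances via the prototype:
function WidgetBuilderProto(name) {
    this.name = name;
}
WidgetBuilderProto.prototype.build = function () {
    return '<div>' + this.name + '</div>';
};
In a heap snapshot you would expect the per-instance version to retain one closure per object, which is where the memory difference shows up.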

JS Garbage Collection

What triggers a JavaScript garbage collector to run? Obviously this varies between JS engines, but I'm just trying to get a rough idea. Is it only when available memory is below a certain threshold?
Thanks.
It really varies very widely. SpiderMonkey, for example, will GC based on various heuristics about how much memory has been allocated, but the browser embedding also triggers GC in various situations like after enough DOM events have been processed, after a script has run for long enough, some things to do with tabs/windows being closed or loaded, etc, etc. And the heuristics involved have changed wildly between different Firefox releases, and will change again.
And that's all for just one browser.
It varies. Chrome (V8) is simply based on a timer and an activity monitor (it tries not to run when the engine is busy).
This varies per browser and as far as I know you have absolutely no control over it.
Likewise you have no control over when the DOM is getting rendered, which is really annoying if you want to show a loading bar :D
Why do you want to know this?
