I was hoping to set up some sort of script (preferably) or small program that would detect and automatically end/restart Chromium's GPU process when it exceeds a certain memory threshold.
Turning off hardware acceleration prevents the bloating of course, but I was interested in a way to use the feature while keeping it in check, without having to launch the Task Manager > End Process every hour or so.
A bit late to the question, but it's the top Google search hit for "chrome", "gpu process" and "script". There don't seem to be any answers out there, so here's mine.
At least on Windows, it looks as if every chromium-based product with GPU-assisted rendering spins up a process with --gpu-process in the command-line. This applies to any Electron apps as well, as I see this on my main PC for apps like Slack, Discord and Skype, as well as other downstream Chromium-based browsers. The actual process name will match the product's main image name, so you can differentiate that way.
This should allow you to use any programming language that can inspect the command lines of running processes, find their PIDs, and monitor each process's memory usage, killing the process when it exceeds some threshold. Search for the expected image name (such as chrome.exe) and check for --gpu-process in the arguments or full command line.
When you terminate it, it will be automatically restarted ... as long as you don't kill it too often. The Chromium code includes a check meant to protect against the GPU process "flapping" up and down when it actually crashes. It is unlikely to be able to tell the difference between an external tool killing the GPU process and the process crashing due to some issue. Based on what I'm seeing, I believe the control process will stop restarting the GPU process if it dies more than once an hour.
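To make that concrete, here is a minimal Node.js sketch of the approach (not something from the original question): it assumes Windows with the wmic utility still available, uses chrome.exe and a 1 GB working-set limit as placeholder values, and kills at most once per hour so it doesn't trip the crash-flapping protection described above.

// Hypothetical watchdog: poll for chrome.exe processes launched with
// --gpu-process and kill any whose working set exceeds a threshold.
const { execSync } = require("child_process");

const IMAGE_NAME = "chrome.exe";             // change for Edge, Electron apps, etc.
const LIMIT_BYTES = 1024 * 1024 * 1024;      // example threshold: 1 GB
const POLL_MS = 60 * 1000;                   // check once a minute
const MIN_KILL_GAP_MS = 60 * 60 * 1000;      // at most one kill per hour
let lastKill = 0;

function checkGpuProcess() {
  // WMIC returns CSV rows in the order: Node,CommandLine,ProcessId,WorkingSetSize
  const out = execSync(
    `wmic process where "name='${IMAGE_NAME}'" get CommandLine,ProcessId,WorkingSetSize /format:csv`,
    { encoding: "utf8" }
  );
  for (const line of out.split(/\r?\n/)) {
    if (!line.includes("--gpu-process")) continue;
    const cols = line.trim().split(",");
    const pid = Number(cols[cols.length - 2]);         // ProcessId column
    const workingSet = Number(cols[cols.length - 1]);  // WorkingSetSize column
    if (pid && workingSet > LIMIT_BYTES && Date.now() - lastKill > MIN_KILL_GAP_MS) {
      console.log(`GPU process ${pid} is using ${workingSet} bytes, terminating it`);
      process.kill(pid);        // Chromium's browser process should respawn it
      lastKill = Date.now();
    }
  }
}

setInterval(checkGpuProcess, POLL_MS);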
I'm making an online browser game with websockets and a node server and if I have around 20-30 players, the CPU is usually around 2% and RAM at 10-15%. I'm just using a cheap Digital Ocean droplet to host it.
However, every 20-30 minutes it seems, the server CPU usage will spike to 100% for 10 seconds, and then finally crash. Up until that moment, the CPU is usually hovering around 2% and the game is running very smoothly.
I can't tell for the life of me what is triggering this as there are no errors in the logs and nothing in the game that I can see causes it. Just seems to be a random event that brings the server down.
There are also some smaller spikes that don't bring the server down, but soon resolve themselves.
I don't think I'm blocking the event loop anywhere and I don't have any execution paths that seem to be long running. The packets to and from the server are usually two per second per user, so not much bandwidth used at all. And the server is mostly just a relay with little processing of packets other than validation so I'm not sure what code path could be so intensive.
What can I do to profile this and figure out where to begin investigating what is causing these spikes? I'd like to imagine there's some code path I forgot about that's surprisingly slow under load, or maybe there's a Node flag I'm missing that would resolve it, but I don't know.
I think I might have figured it out.
I'm using mostly websockets for my game and I was running htop and noticed that if someone sends large packets (performing a ton of actions in a short amount of time) then the CPU spikes to 100%. I was wondering why that was when I remembered I was using a binary-packer to reduce bandwidth usage.
I tried changing the parser to JSON instead, so the packets were no longer compressed and packed, and regardless of how large the packets were, the CPU usage stayed at 2% the entire time.
So I think the crash happened when one player would send a lot of data in a short amount of time and the server would be overwhelmed with having to pack all of it and send it out in time.
This may not be the actual answer but it's at least something that needs to be fixed. Thankfully the game uses very little bandwidth as it is and bandwidth is not the bottleneck so I may just leave it as JSON.
The only problem is that with JSON encoding users can read the packets in the Chrome developer console's network tab, which I don't like. It makes it a lot easier to find out how the game works and potentially find cheats/exploits.
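For anyone who wants to reproduce this kind of comparison, here is a rough micro-benchmark sketch. It assumes a msgpack-style packer (msgpack-lite here, purely as a stand-in; the question doesn't say which packer was actually used) and simply times packing versus JSON.stringify on one large burst of actions.

// Compare packing cost vs. plain JSON on a deliberately large payload.
// msgpack-lite is an illustrative stand-in for the actual binary packer.
const msgpack = require("msgpack-lite");   // npm install msgpack-lite

const payload = {
  actions: Array.from({ length: 50000 }, (_, i) => ({
    type: "move",
    x: i % 800,
    y: i % 600,
    t: Date.now() + i,
  })),
};

console.time("JSON.stringify");
const json = JSON.stringify(payload);
console.timeEnd("JSON.stringify");

console.time("msgpack encode");
const packed = msgpack.encode(payload);
console.timeEnd("msgpack encode");

console.log(`JSON bytes: ${Buffer.byteLength(json)}, packed bytes: ${packed.length}`);

If the packer's time grows much faster than JSON.stringify's as the payload size increases, that supports the theory that a single large burst from one player can pin the CPU.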
My site is a pretty standard ecommerce site; it isn't a JS-backed standalone app or anything, it's just a site that uses JS for standard stuff, as well as some jQuery plugins to do a few things.
I've used Chrome Dev Tools to take some CPU profiles.
Most of my functions are low-key operations and don't exceed 200ms.
What is a good benchmark I should be aiming for? Is 200ms high?
200ms isn't too bad, but you should try to keep your JavaScript interactions under 100ms. The browser only has a single thread both for updating the UI and for executing JavaScript (only one can happen at a time), so the longer your JavaScript runs, the worse the user experience.
0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
-Jakob Nielsen
There are several patterns for breaking up the work that needs to be done into chunks so you don't get the dreaded "Unresponsive script" error, such as using timers or web workers, but the implementation depends on what your code needs to do. You're on the right track using the Dev Tools to gain some insight into what's going on!
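As a hedged illustration of the timer-based pattern (not tied to the asker's actual code), here is a small sketch that processes a large array in short slices, yielding back to the browser between slices so no single run of JavaScript exceeds the ~100ms budget.

// Process a big list in small time slices so the UI thread can keep painting.
function processInChunks(items, handleItem, budgetMs, done) {
  let index = 0;

  function runSlice() {
    const sliceStart = performance.now();
    while (index < items.length && performance.now() - sliceStart < budgetMs) {
      handleItem(items[index++]);
    }
    if (index < items.length) {
      setTimeout(runSlice, 0);   // yield to the browser, then continue
    } else {
      done();
    }
  }

  runSlice();
}

// Example usage with placeholder work:
processInChunks(
  Array.from({ length: 100000 }, (_, i) => i),
  (n) => Math.sqrt(n),
  10,
  () => console.log("finished without blocking the UI")
);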
I have a touch screen kiosk application that I'm developing that will be deployed on the latest version of Chrome.
The application will need to make AJAX calls to a web service every 10 minutes or so to pull through any updated content.
As it's a kiosk application, the page is unlikely to be reloaded very often and theoretically, unless the kiosk is turned off, the application could run for days at a time.
I guess my concern is memory usage and whether or not a very long-running setTimeout loop would chew through a large amount of memory, given sufficient time.
I'm currently considering the use of Web Workers and I'm also going to look into Web Sockets but I was wondering if anyone had any experience with this type of thing?
Cheers,
Terry
The browser has a garbage collector, so no problems there, as long as you don't introduce memory leaks through bad code. There are articles on memory-leak patterns that should get you started on how to program efficiently and hunt down leaky code.
Also, you have to consider the DOM. Someone on SO once said that "things that are not on screen should be removed and not just hidden" - this not only removes the element visually, it actually removes it from the DOM along with its handlers, so the memory it used can be freed.
As for the setTimeout, lengthen the interval between calls. Too fast and you will chew up memory quickly (and make the page quite laggy). I just tested code for timer-based "hashchange" detection, and even on Chrome it makes the page rather slow.
Research known Chrome bugs and keep the browser updated as well.
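As a rough sketch of how such a long-running kiosk loop might look (the endpoint, interval and markup are placeholders, not from the question, and it is written with the modern fetch API for brevity): poll with setTimeout, replace the old content rather than accumulating it, and keep no references to removed nodes so the garbage collector can reclaim them.

// Hypothetical kiosk updater: fetch new content every 10 minutes and swap
// the container's children instead of piling new nodes on top of old ones.
const CONTENT_URL = "/api/kiosk-content";   // placeholder endpoint
const POLL_MS = 10 * 60 * 1000;             // every 10 minutes

function refreshContent() {
  fetch(CONTENT_URL)
    .then((response) => response.json())
    .then((items) => {
      const container = document.getElementById("content");
      // Remove old nodes entirely (not just hide them) so their memory
      // and event handlers can be reclaimed.
      container.replaceChildren();
      for (const item of items) {
        const el = document.createElement("div");
        el.textContent = item.title;
        container.appendChild(el);
      }
    })
    .catch((err) => console.error("update failed, will retry", err))
    .finally(() => setTimeout(refreshContent, POLL_MS));
}

refreshContent();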
I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.
I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-)
Are there any examples of this sort of thing being done?
What would be the best ways to measure this, accounting for things like temporary slowdowns on busy CPUs?
Notes:
I'm not interested in browser/OS detection.
At the moment, I'm not interested in bandwidth measurements - only browser/cpu performance.
Things that might be interesting to measure:
Base JavaScript
DOM manipulation
DOM/CSS rendering
I'd like to do this in a way that affects the page's render-speed as little as possible.
BTW: In order to not confuse/irritate users with inconsistent behavior - this would, of course, require on-screen notifications to allow users to opt in/out of this whole performance-tuning process.
[Update: there's a related question that I missed: Disable JavaScript function based on user's computer's performance. Thanks Andrioid!]
Not to be a killjoy here, but this is not a feat that is currently possible in any meaningful way in my opinion.
There are several reasons for this, the main ones being:
Whatever measurement you do, if it is to have any meaning, will have to test the maximum potential of the browser/CPU, which you cannot do while maintaining any kind of reasonable user experience.
Even if you could, it would be a meaningless snapshot, since you have no idea what kind of load the CPU is under from applications other than the browser while your test is running, and whether or not that situation will continue while the user is visiting your website.
Even if you could do that, every browser has its own strengths and weaknesses, which means you'd have to test every DOM manipulation function to know how fast the browser would complete it. There is no "general" or "average" that makes sense here in my experience, and even if there were, the speed with which DOM manipulation commands execute depends on what is currently in the DOM, which changes as you manipulate it.
The best you can do is to either
Let your users decide what they want, and enable them to easily change that decision if they regret it
or better yet
Choose to give them something that you can be reasonably sure that the greater part of your target audience will be able to enjoy.
Slightly off topic, but following this train of thought: if your users are not tech leaders in their social circles (like most users on here are, but most people in the world are not), don't give them too much choice, i.e. any choice that is not absolutely necessary - they don't want it and they don't understand the technical consequences of their decision before it is too late.
A different approach, one that does not need an explicit benchmark, would be to progressively enable features.
You could apply features in prioritized order, and after each one, drop the rest if a certain amount of time has passed.
Ensuring that the most expensive features come last, you would present the user with a somewhat appropriate selection of features based on how speedy the browser is.
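A minimal sketch of that idea, with invented feature names and an arbitrary 50ms budget: each feature is applied in priority order, and as soon as the elapsed time crosses the budget the remaining (more expensive) features are skipped.

// Stub feature implementations; real code would wire up the actual effects.
function enableBasicInteractions() { /* attach click handlers, etc. */ }
function enableAnimatedMenus()     { /* start menu animations */ }
function enableParallax()          { /* hook up scroll-linked effects */ }

// Features listed cheapest-first so the expensive ones are the first to be dropped.
const features = [
  { name: "basic interactions", apply: enableBasicInteractions },
  { name: "animated menus",     apply: enableAnimatedMenus },
  { name: "parallax effects",   apply: enableParallax },
];

const BUDGET_MS = 50;   // arbitrary example budget
const start = performance.now();

for (const feature of features) {
  if (performance.now() - start > BUDGET_MS) {
    console.log(`Skipping "${feature.name}" and everything after it`);
    break;
  }
  feature.apply();
}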
You could try timing some basic operations - have a look at Steve Souders' Episodes and Yahoo's boomerang for good ways of timing stuff browser-side. However, it's going to be rather complicated to work out how the metrics relate to an acceptable level of performance / a rewarding user experience.
If you're going to provide a UI to let users opt in / opt out, why not just let the user choose the level of eye candy in the app vs the rendering speed?
Take a look at some of Google's (copyrighted!) benchmarks for V8:
http://v8.googlecode.com/svn/data/benchmarks/v4/regexp.js
http://v8.googlecode.com/svn/data/benchmarks/v4/splay.js
I chose a couple of the simpler ones to give an idea of similar benchmarks you could create yourself to test feature sets. As long as you keep the run-time of your tests (from the time logged at the start to the time logged at completion) under 100 ms on the slowest systems (these Google tests run vastly longer than that), you should get the information you need without being detrimental to the user experience. While the Google benchmarks care about granularity among the faster systems, you don't. All you need to know is which systems take longer than XX ms to complete.
Things you could test are regular expression operations (similar to the above), string concatenation, page scrolling, anything that causes a browser repaint or reflow, etc.
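A hedged example of what such a self-made micro-test might look like in the browser (the specific operations and the 50ms cut-off are arbitrary choices, not recommendations):

// Tiny capability probe: time a regex pass and some string concatenation,
// then classify the machine as "slow" if the total exceeds a cut-off.
function quickPerfProbe(cutoffMs = 50) {
  const start = performance.now();

  // Regular expression work, loosely in the spirit of the V8 regexp benchmark.
  const text = "lorem ipsum dolor sit amet ".repeat(500);
  let matches = 0;
  for (let i = 0; i < 200; i++) {
    matches += (text.match(/\b\w{5}\b/g) || []).length;
  }

  // String concatenation work.
  let s = "";
  for (let i = 0; i < 5000; i++) {
    s += i.toString(36);
  }

  const elapsed = performance.now() - start;
  return { elapsed, slow: elapsed > cutoffMs, matches, length: s.length };
}

console.log(quickPerfProbe());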
You could run all the benchmarks you want using Web Workers, then, according to the results, store what you detect about the machine's performance in a cookie.
This only works in browsers that support HTML5, of course.
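A minimal sketch of that pattern, assuming an inline worker created from a Blob (so no separate file is needed) and a made-up cookie name:

// Run a busy-loop benchmark off the main thread, then remember the verdict
// in a cookie so the test only runs once per visitor.
const workerSource = `
  onmessage = () => {
    const start = Date.now();
    let i = 0;
    while (Date.now() - start < 200) i++;   // count iterations for 200ms
    postMessage(i);
  };
`;

function benchmarkInWorker(callback) {
  const blob = new Blob([workerSource], { type: "application/javascript" });
  const worker = new Worker(URL.createObjectURL(blob));
  worker.onmessage = (event) => {
    callback(event.data);
    worker.terminate();
  };
  worker.postMessage("start");
}

// Only benchmark if we have no stored result yet ("perfScore" is a made-up name).
if (!document.cookie.includes("perfScore=")) {
  benchmarkInWorker((iterations) => {
    document.cookie = `perfScore=${iterations}; max-age=${60 * 60 * 24 * 30}`;
  });
}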
Some Ideas:
Putting a time limit on the tests seems like an obvious choice.
Storing test results in a cookie also seems obvious.
Poor performance on a test could pause further scripts and trigger display of a non-blocking prompt UI (like the save-password prompts common in modern web browsers) that asks the user if they want to opt into further scripting effects, and store the answer in a cookie. While the user hasn't answered the prompt, periodically repeat the tests and auto-accept the scripting prompt if consecutive tests finish faster than the first one. (A sketch of this opt-in gate follows this list.)
On a sidenote, slow network speeds could also probably be tested by timing the download of external resources (like the page's own CSS or JavaScript files) and comparing that result with the JavaScript benchmark results. This may be useful on sites relying on lots of XHR effects and/or heavy use of <img/>s.
It seems that DOM rendering/manipulation benchmarks are difficult to perform before the page has started to render, and are thus likely to cause quite noticeable delays for all users.
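Here is a rough sketch of the opt-in gate referenced above (the cookie name, prompt wording and styling are all invented for the example): check for a stored answer, otherwise show a small non-blocking banner and remember the choice.

// Rough opt-in gate: honor a stored answer if one exists, otherwise prompt.
function getCookie(name) {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? match[1] : null;
}

function maybeEnableEffects(enableEffects) {
  const stored = getCookie("effectsOptIn");
  if (stored === "yes") return enableEffects();
  if (stored === "no") return;

  // Non-blocking prompt: a fixed banner with two buttons.
  const banner = document.createElement("div");
  banner.style.cssText = "position:fixed;bottom:0;left:0;right:0;padding:8px;background:#eee";
  banner.textContent = "Enable extra visual effects? ";
  for (const answer of ["yes", "no"]) {
    const btn = document.createElement("button");
    btn.textContent = answer;
    btn.onclick = () => {
      document.cookie = `effectsOptIn=${answer}; max-age=${60 * 60 * 24 * 365}`;
      banner.remove();
      if (answer === "yes") enableEffects();
    };
    banner.appendChild(btn);
  }
  document.body.appendChild(banner);
}

maybeEnableEffects(() => console.log("fancy effects would start here"));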
I had a similar question and I solved it this way; in fact, it helped me make some decisions.
After rendering the page I do:
let now, finishTime, i = 0;
now = Date.now();        // returns the number of milliseconds since Jan 01 1970
finishTime = now + 200;  // we add 200ms (1/5 of a second)
// Busy-loop for 200ms and count how many iterations the machine manages.
while (now < finishTime) {
    i++;
    now = Date.now();
}
console.log("I looped " + i + " times!!!");
After doing that, I tested it on several browsers with different OSes, and the i value gave me the following results:
Windows 10 - 8GB RAM:
approx. 1,500,000 on Chrome
approx. 301,327 on Internet Explorer
141,280 on Firefox in a virtual machine running Lubuntu (2GB given)
MacOS - 8GB RAM:
approx. 3,000,000 on Safari
approx. 1,500,000 on Firefox
70,000 on Firefox 41 in a virtual machine running Windows XP (2GB given)
Windows 10 - 4GB RAM (this is an old computer I have):
approx. 500,000 on Google Chrome
I load a lot of divs in the form of a list; they are loaded dynamically according to the user's input. This helped me limit the number of elements I create based on the performance each machine showed, BUT
the JS is not everything! Even though the Lubuntu OS running on a virtual machine gave poor results, it loaded 20,000 div elements in less than 2 seconds and you could scroll through the list with no problem, while IE took more than 12 seconds and the performance sucked!
So that could be a good way, but when it comes to rendering, that's another story; still, this definitely could help in making some decisions.
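If it helps anyone, a hedged sketch of how such a loop count could then be used to cap the number of elements created (the thresholds below are made-up examples, not measured breakpoints):

// Map the 200ms loop count to a cap on how many list rows get created.
function maxRowsForScore(iterations) {
  if (iterations > 1000000) return 20000;   // fast machine
  if (iterations > 300000) return 5000;     // mid-range
  return 1000;                              // slow machine or VM
}

console.log("row cap:", maxRowsForScore(i));   // i is the counter from the loop above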
Good luck, everyone!