There are lots of tools for debugging JavaScript code (like Firebug or the Chrome console), but is there a tool for debugging a running process? It would probably need to monitor resource (e.g. CPU) usage to find the bottleneck in the process.
I create JavaScript animations for moving an element (in a simpler case, opening/closing a menu), but the movement is not smooth. Different factors can cause the overload, e.g. heavy CSS3 gradients. But how do you detect the rate-limiting process?
This is indeed a problem on most websites. When opening a webpage, an overload of JavaScript processing kills the page load, and most animations and menu actions break.
When a JavaScript animation is not running smoothly, how do you debug the problem?
Or, a more general question: how do you monitor the resource usage of a running JS process to make a webpage lighter (and faster to load on computers with limited resources)?
I would use the Timeline -> Frames view in Chrome. Paul Irish has a lot of great talks about this; here is one: https://www.youtube.com/watch?v=Vp524yo0p44
Also, when doing animation, do not use setTimeout/setInterval; the precision is not good enough. Instead use requestAnimationFrame. More information about requestAnimationFrame can be found here: http://paulirish.com/2011/requestanimationframe-for-smart-animating/
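For illustration, here is a minimal sketch of a requestAnimationFrame loop, assuming a hypothetical draw(seconds) function that renders one frame; driving the animation by elapsed time rather than tick count keeps it smooth even when frames are dropped:

let start = null;

function step(timestamp) {
  if (start === null) start = timestamp;   // remember when the animation began
  const elapsed = timestamp - start;       // milliseconds elapsed so far
  draw(elapsed / 1000);                    // render based on time, not on tick count
  if (elapsed < 2000) {                    // run the animation for two seconds
    requestAnimationFrame(step);
  }
}

requestAnimationFrame(step);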
Edit: this talk by Paul is also really interesting regarding speed and debugging speed in the browser: https://www.youtube.com/watch?v=MllBwuHbWMY, and here is a quite recent one discussing 2D transforms vs absolute positioning: https://www.youtube.com/watch?v=NZelrwd_iRs
Different machines => different performance => different bottlenecks
If an animation isn't running smoothly, I try to scale back the graphics or the animation itself. Who says that users have machines as powerful as yours? They may hit the issue sooner than you do.
But I'd still suggest Process Explorer, as it can show the load of individual processes. In general it's a more insightful tool than the default Task Manager provided by Windows.
Related
I'd like to be able to find out about a browser's hardware resources from a web page, or at least a rough estimation.
Even when you detect the presence of modern technology (such as csstransforms3d, csstransitions, requestAnimationFrame) in a browser via a tool like Modernizr, you cannot be sure whether to activate some performance-consuming option (such as fancy 3D animation) or to avoid it.
I'm asking because I have (a lot of) experience with situations where the browser is modern (latest Chrome or Firefox, supporting all the cool technologies) but the OS's CPU, GPU, and available memory are just catastrophic (32-bit Windows XP with an integrated GPU), so a decision based purely on detected browser capabilities is no good.
While Nickolay gave a very good and extensive explanation, I'd like to suggest one very simple, but possibly effective solution - you could try measuring how long it took for the page to load and decide whether to go with the resource-hungry features or not (Gmail does something similar - if the loading goes on for too long, a suggestion to switch to the "basic HTML" version will show up).
The idea is that, on slow computers, loading any page, regardless of content, should be much slower on average than on modern computers. Getting the amount of time it took to load your page is simple (a short sketch follows the list below), but there are a couple of things to note:
You need to experiment a bit to determine where to put the "too slow" threshold.
You need to keep in mind that slow connections can also make the page load more slowly, though this will probably matter in only a small number of cases (using DOM ready instead of the load event can also help here).
In addition, the first time a user loads your site will probably be much slower, since nothing is cached yet. One simple solution is to keep your result in a cookie or local storage and only take loading time into account on the user's first visit.
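As a rough sketch of this idea (the 300 ms threshold and the localStorage key are assumptions to tune for your own site):

// Remember the verdict so only the first (uncached) visit decides.
document.addEventListener('DOMContentLoaded', function () {
  if (localStorage.getItem('useLightVersion') === null) {
    const elapsed = performance.now();  // ms since navigation started
    localStorage.setItem('useLightVersion', elapsed > 300 ? 'yes' : 'no');
  }
  if (localStorage.getItem('useLightVersion') === 'yes') {
    document.body.classList.add('light-version');  // hypothetical class that disables heavy effects
  }
});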
Don't forget to always, no matter what detection mechanism you use and however accurate it is, allow the user to choose between the regular, resource-hungry version and the faster, "uglier" one: some people prefer better-looking effects even if the website will be slower, while others value speed and snappiness more.
In general, the available (to web pages) information about the user's system is very limited.
I remember a discussion of adding one such API to the web platform (navigator.hardwareConcurrency - the number of available cores), where the opponents of the feature explained the reasons against it, in particular:
The number of cores available to your app depends on other workload, not just on the available hardware. It's not constant, and the user might not be willing to let your app use all (or whatever fixed portion you choose) of the available hardware resources;
Helps "fingerprinting" the client.
Too oriented on the specifics of today. The web is designed to work on many devices, some of which do not even exist today.
These arguments work as well for other APIs for querying the specific hardware resources. What specifically would you like to check to see if the user's system can afford running a "fancy 3D animation"?
As a user I'd rather you didn't use additional resources (such as fancy 3D animation) if it's not necessary for the core function of your site/app. It's sad really that I have to buy a new laptop every few years just to be able to continue with my current workflow without running very slowly due to lack of HW resources.
That said, here's what you can do:
Provide a fallback link for the users who are having trouble with the "full" version of the site.
If this is important enough to you, you could first run short benchmarks to check the performance and fall back to the less resource-hungry version of the site if you suspect that a system is short on resources.
You could target the specific high-end platforms by checking the OS, screen size, etc.
This article mentions this method on mobile: http://blog.scottlogic.com/2014/12/12/html5-android-optimisation.html
WebGL provides some information about the renderer via webgl.getParameter(). See this page for an example: http://analyticalgraphicsinc.github.io/webglreport/
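For instance, here is a hedged sketch of reading the GPU renderer string through the WEBGL_debug_renderer_info extension; not every browser exposes it, so the code has to cope with its absence:

const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');
if (gl) {
  const info = gl.getExtension('WEBGL_debug_renderer_info');
  if (info) {
    // e.g. 'ANGLE (Intel(R) HD Graphics ...)', a hint about the GPU class
    console.log(gl.getParameter(info.UNMASKED_RENDERER_WEBGL));
  }
}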
I made my first canvas, and you can see it here: My Canvas.
The main idea of this canvas is that when you move the cursor toward the points, they escape from it.
What I want to know now is how much of the user's PC resources my canvas will use: for example, RAM, CPU, or GPU.
In particular, my script has a function that is called every 7 ms:
setInterval(spiderFree, 7);
I wonder how expensive this can be for a computer. The question, then, is: how can I measure and control how much of the computer's resources my script consumes?
You should take a look at this article from Paul Irish on his requestAnimationFrame cross-browser shim.
It will first try to optimise frame timing based on browser capability, and it also has backwards compatibility for older, non-GPU-enabled browsers.
From the jQuery ticket:
Benefits:
let the browser choose the best 'animation tick' rate (instead of our arbitrary 13ms)
greatly reduce animation CPU usage when switching tab
helps keep animation synchronized
Full list of claimed benefits here
This is the 'industry standard' way of ensuring the best possible frame rate and resource utilisation of your animations.
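Applied to the asker's code, a minimal sketch of the conversion could look like this (spiderFree is the asker's own function; note that frames arrive roughly every 16 ms rather than every 7 ms, so the per-step movement may need rescaling):

function loop() {
  spiderFree();                  // one animation step, as before
  requestAnimationFrame(loop);   // the browser schedules the next frame and pauses it in background tabs
}
requestAnimationFrame(loop);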
In addition to Alex's good answer, bear in mind that you can use Firefox's developer tools (F12). You can use the Performance tab to see exactly how long your code is taking to execute, and which parts take the longest. You can also use the Canvas tab to analyze the frames. (You'll need to enable these features from the Settings tab).
I posted a question yesterday and I'd like to change tactics while still keeping the previous thread alive, if possible. (The previous question concerned variable frame rates in Three.js.)
Rather than address the question directly, I'd like to know what WebGL/Three.js developers use to diagnose their code (to find performance bottlenecks specifically).
I'm starting a large-ish, long-term project and I assume I'll run into all sorts of problems along the way. How do we peer behind the curtain?
I saw a related question and came across WebGL-Inspector, which I will look into. I'm just looking for all the options. I'm willing to spend money on professional diagnostic tools; whatever it takes.
Thanks.
Good day, sir.
I use:
chrome javascript profiler
chrome canvas inspection (http://www.html5rocks.com/en/tutorials/canvas/inspection/)
occasionally try tools like webgl-inspector but it doesn't seem quite as good as chrome's canvas inspector
Also:
standard profiling techniques for javascript, use unminified code to see what's going on everywhere during profiling
basic sanity checks: for frame rate issues, make sure your frame flipping run loop code is up to snuff. Standard practice is using requestAnimationFrame.
make sure your canvas is not being stretched
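A minimal sketch of that last check: compare the canvas drawing-buffer size to its CSS size, since a mismatch means the browser rescales every frame (assumes a single canvas on the page):

const canvas = document.querySelector('canvas');
if (canvas.width !== canvas.clientWidth || canvas.height !== canvas.clientHeight) {
  console.warn('Canvas is stretched: buffer ' + canvas.width + 'x' + canvas.height +
               ' vs CSS ' + canvas.clientWidth + 'x' + canvas.clientHeight);
}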
I have not yet tried applying a pure desktop OpenGL debugger (NVIDIA Nsight, for example) to WebGL running inside the browser.
I am having trouble articulating what I am trying to measure, but I will do my best in the hope of some assistance.
If I were to bind a lot of functions to the .resize() event, or add a massive number of listeners, I believe the CPU load would increase and that at a certain point the application would lag. (Do correct me if I am wrong.)
I am attributing this to CPU usage (please correct the term if it is wrong).
Is there any way to measure this lag (CPU usage)?
Thanks
In Google Chrome's Developer Tools there are timelines for recording memory usage and CPU profiling. There are lots of good examples on the web of how to use these tools.
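If you just want a quick number without opening the profiler, a rough sketch: wrap the suspect handler with performance.now() (onResize here is a hypothetical stand-in for your own handler):

window.addEventListener('resize', function () {
  const t0 = performance.now();
  onResize();                        // the work being measured
  const t1 = performance.now();
  console.log('resize handler took ' + (t1 - t0).toFixed(2) + ' ms');
});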
You can monitor your browser in TaskManager if you're using Windows.
JavaScript executes in the browser, so it will be part of the footprint of the browser itself.
Sometimes different browsers will run the same JavaScript with different performance; it just depends on how optimized the browser is for that particular code block.
Most browsers also give you profiling tools that let you pinpoint the specific JavaScript functions that are slow (e.g. the IE dev tools). This lets you take a more targeted approach when troubleshooting your performance issues.
This is a long shot: is there any way to detect poor vs. strong graphics card performance via a JS plugin?
We have built a parallax site for a client, and it stutters on lower-performance machines. We could tweak the performance to make it work better across the board, but that of course reduces the experience for users with higher-performance machines.
We could also detect the browser version, but the same browser can run on both low- and high-performance machines, so that doesn't help our situation.
Any ideas?
requestAnimationFrame (rAF) can help with this.
You can figure out your framerate using rAF. Details about that are here: calculate FPS in Canvas using requestAnimationFrame. In short, you figure out the time difference between frames, then divide 1 by it (e.g. 1 / 0.0159 s ~= 62 fps).
Note: with any method you choose, the threshold for "good" performance will be arbitrary. Perhaps anything over 24 frames per second could be considered "high performance."
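A minimal sketch of that calculation, logging the instantaneous value to the console (in practice you would smooth it over several frames):

let last = performance.now();

function measureFps(now) {
  const delta = (now - last) / 1000;              // seconds between frames
  last = now;
  console.log((1 / delta).toFixed(1) + ' fps');   // e.g. 1 / 0.0159 ~= 62 fps
  requestAnimationFrame(measureFps);
}

requestAnimationFrame(measureFps);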
Why not let the user decide? YouTube (and many other video-sharing sites) implements a selector for playback quality, now a gear icon with a list of resolutions to choose from. Would an HD | SD or Hi-Fi | Lo-Fi selector work (or even make sense) in the context of your application?
This is where "old school" loading screens came in useful, you could render something complex either in the foreground (or hidden away) that didn't matter if it looked odd or jurky -- and by the time you had loaded your resources you could decide on what effects to enable or disable.
Basically you would use what jcage mentioned for this: testing the frame rate (i.e. using a setInterval in conjunction with a timer). This isn't always 100% reliable, however, because if the machine decides in that instant to do a triple-helix-backward-somersault (or something more likely), you'd get a dodgy reading. It is possible, depending on the animations involved, to upgrade and downgrade the effects in realtime, but this is always trickier to code, and your own analysis of the situation can itself sometimes degrade performance.
Firefox has a built-in list of graphics cards that are not supported: https://wiki.mozilla.org/Blocklisting/Blocked_Graphics_Drivers (see also the related help article).
But you can only test for them indirectly, by accessing WebGL features...
http://get.webgl.org/troubleshooting/ leads you to the corresponding browser provider when there are problems. Looking at that page's JS code, you will see that support is tested along these lines:

if (!window.WebGLRenderingContext) {
  alert('Your browser does not support WebGL');
}

If the API is present but creating a context still fails, the problem is likely an outdated graphics card or driver rather than the browser.
You might consider checking to see if the browser supports window.requestAnimationFrame, which would indicate you are running in a newer browser. Or alternatively consider checking jQuery.fx.interval.
You could then implement a custom function to gauge the available processing power. You might try using a custom easing function which can then be run through a function like .slideDown() to get an idea of the computation power available.
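As a rough sketch of that idea (assumes jQuery and a hypothetical hidden #probe element): time a short effect and compare it to the requested duration; a large overshoot suggests the machine is struggling:

var t0 = performance.now();
$('#probe').slideDown(200, function () {
  var overshoot = performance.now() - t0 - 200;   // ms beyond the requested duration
  console.log('slideDown overshot by ' + overshoot.toFixed(0) + ' ms');
});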
See this answer to another question for more ideas on how to check performance in javascript.
If the browser is ultra-modern and supports requestAnimationFrame, you can calculate the animation frame rate in realtime and drop your animation settings for slower machines. That's IE 10, Firefox 4, Chrome 10, and Safari 6.
You would essentially create a 'tick' function that runs on a requestAnimationFrame() callback and tracks how many milliseconds pass between each tick. After enough ticks have been registered, you can average them to determine your overall frame rate. But there are two caveats to this:
When the tab is in the background, requestAnimationFrame callbacks are suspended, so a single, sudden delay of several seconds to many minutes between frames does not mean the animation is slow. And:
Simply having a JavaScript gauge running on each animation frame will cause the overall animation to slow down a bit; there's no way to measure something like that without negatively impacting it. Remember, Heisenberg's a bastard.
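A minimal sketch of such a tick function, averaging over 60 frames and subject to both caveats above:

var deltas = [];
var prev = null;

function tick(now) {
  if (prev !== null) deltas.push(now - prev);     // ms since the previous frame
  prev = now;
  if (deltas.length < 60) {
    requestAnimationFrame(tick);
  } else {
    var avg = deltas.reduce(function (a, b) { return a + b; }, 0) / deltas.length;
    console.log('~' + (1000 / avg).toFixed(0) + ' fps average');
  }
}
requestAnimationFrame(tick);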
For older browsers, I don't know of any way to reliably gauge the frame rate. You may be able to simulate it using setTimeout() but it won't be nearly as accurate -- and might have an even more negative impact on performance.
This may come down to a risk/benefit decision. I think you will have to make an important but tough call here.
1)
If you decide to have two versions, you will have to spend some time to:
figure out a fast, non-intrusive test
spend time implementing that
roll it into production
and you will most probably end up with an incorrect implementation. For example, at the moment my computer is running 70 Chrome tabs, VLC with 1080p anime, and IntelliJ IDEA.
The probability that my MBP (2012 model) will be detected as a "slow" computer is high, at least right now.
False positives are really hard to avoid.
2)
If you go with one version, you will have to choose between HD and Lo-Fi, as #Patrick mentioned, which is again a mistake in my opinion.
What I suggest is that you go to Google Analytics and figure out your browser distribution (yes, I know it can be misleading, but so can any other test). Based on that, if the majority of users are on Chrome plus modern IE/FF, go with the HD version, but spend some time figuring out an optimisation strategy.
There are always things that could be done better and faster. Get an older laptop and optimise until you get a decent FPS rate. And that's it: you as the developer need to make that decision; that is your duty.
3)
If, from the browser distribution, you figure out that you absolutely must go with the Lo-Fi version, then think about whether the "downgrade" is worth it, and implement it only as a last resort.