I entered a question yesterday and I'd like to change tactics while still keeping the previous thread alive, if possible. (The previous question was concerning variable frame rates in Three.js.)
Rather than address the question directly, I'd like to know what WebGL/Three.js developers use to diagnose their code (to find performance bottlenecks specifically).
I'm starting a large-ish, long-term project and I assume I'll run into all sorts of problems along the way. How do we peer behind the curtain?
I saw a related question and came across WebGL-Inspector, which I will look into. I'm just looking for all the options. I'm willing to spend money to get professional diagnostic tools. Whatever it takes.
Thanks.
Good day, sir.
I use:
the Chrome JavaScript profiler
Chrome canvas inspection (http://www.html5rocks.com/en/tutorials/canvas/inspection/)
occasionally, tools like WebGL-Inspector, though it doesn't seem quite as good as Chrome's canvas inspector
Also:
standard profiling techniques for JavaScript; use unminified code so you can see what's going on everywhere during profiling
basic sanity checks: for frame-rate issues, make sure your frame-flipping run-loop code is up to snuff. Standard practice is to use requestAnimationFrame (a minimal loop is sketched after this list).
make sure your canvas is not being stretched, i.e. its drawing-buffer size matches its displayed size
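Here is a rough sketch of both sanity checks (the element id "glcanvas" and the drawing code are placeholders, not taken from any particular project): drive frames with requestAnimationFrame and keep the drawing buffer matched to the displayed size so the canvas is not stretched.

    // Keep the drawing buffer in sync with the CSS (displayed) size.
    var canvas = document.getElementById('glcanvas');

    function resizeCanvasIfNeeded() {
      // canvas.width/height is the drawing-buffer size;
      // clientWidth/clientHeight is the size the canvas is displayed at.
      if (canvas.width !== canvas.clientWidth ||
          canvas.height !== canvas.clientHeight) {
        canvas.width = canvas.clientWidth;
        canvas.height = canvas.clientHeight;
      }
    }

    function render() {
      resizeCanvasIfNeeded();
      // ... draw the scene here (e.g. renderer.render(scene, camera) in Three.js) ...
      requestAnimationFrame(render);
    }

    requestAnimationFrame(render);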
I have not yet tried applying a pure desktop OpenGL-style debugger (NVIDIA Nsight, for example) to WebGL running inside the browser.
JavaScript in Chrome (or any other browser for that matter, but I'd rather limit the discussion to Chrome to keep things simple) does not provide an API which can be used to observe memory-related information (e.g. how much memory is being used by the current tab where the JS is running).
I am looking for a creative way to estimate how many bytes I can cache in a JavaScript object on the page my code is running in. The problem definition is that I would like to cache as much as possible.
Can anyone think of a decent way of estimating how much memory a tab can handle before it crashes / becomes unusable on a machine? I guess a statistical approach could work out fine for some cases, but I'm looking for something more dynamic.
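One partial answer, offered as a sketch rather than a solution: Chrome exposes a non-standard performance.memory object (its values are coarse unless Chrome is started with --enable-precise-memory-info), and you could poll it to estimate how much heap headroom remains before you stop adding to the cache. The function names below are made up for illustration.

    // Chrome-only, non-standard: estimate remaining JS heap headroom.
    function estimateHeapHeadroom() {
      if (!window.performance || !performance.memory) {
        return null;                                  // not available in this browser
      }
      var m = performance.memory;
      return m.jsHeapSizeLimit - m.usedJSHeapSize;    // bytes, approximate
    }

    function canCacheMore(safetyMarginBytes) {
      var headroom = estimateHeapHeadroom();
      if (headroom === null) {
        return true;               // unknown browser: fall back to a fixed cap
      }
      return headroom > safetyMarginBytes;
    }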
I'm currently testing JavaScript visualization toolkits and want to measure execution time, memory consumption, etc.
I know how to profile JavaScript with the Chrome dev tools, the Google speed analyzer and so on, but I want users to perform the tests on their own and have the results displayed (without using dev tools or installing an extension).
Is there a library or something that can be used to achieve this? Subtracting start and end time for each function does not seem like a good solution.
The best-case scenario would be a library that can profile individual functions.
Caveat: you will not be able to get a CPU profile or memory usage using a JS-based testing solution. If this is what you are after, a Chrome extension may very well be the way forward.
If, however, that doesn't bother you and you are after a ready-made solution, Benchmark.js may prove to be a good starting point.
The method it uses is akin to what you mentioned: taking time differences in execution. However, it does so many times (100 or more) in order to average out statistical error. This keeps your results largely free of random error (it does not, however, mean that your data will be meaningful).
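As a sketch of what that typically looks like (the two micro-benchmarks here are just placeholders, not your visualization code):

    var suite = new Benchmark.Suite();

    suite
      .add('String#indexOf', function () {
        'Hello World!'.indexOf('o') > -1;
      })
      .add('RegExp#test', function () {
        /o/.test('Hello World!');
      })
      .on('cycle', function (event) {
        console.log(String(event.target));   // e.g. "String#indexOf x 1,234,567 ops/sec ..."
      })
      .on('complete', function () {
        console.log('Fastest is ' + this.filter('fastest').map('name'));
      })
      .run({ async: true });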
There are lots of tools for debugging JavaScript code (like Firebug or the Chrome console), but is there a tool for debugging a process? This would probably need to monitor resource (e.g. CPU) usage to find the bottleneck in the process.
I create JavaScript animations for moving an element (in a simpler case, opening/closing a menu), but the movement is not smooth. Different factors can cause overload, e.g. heavy CSS3 gradients. But how do I detect the rate-limiting process?
This is indeed a problem on most websites: when a webpage opens, an overload of JavaScript processing kills the page load, and most animations/menu actions are broken.
When a JavaScript animation is not running smoothly, how do you debug the problem?
Or, a more general question: how do you monitor the resource usage of a running JS process to make a webpage lighter (and faster to load on computers with limited resources)?
I would use the Timeline -> Frames view in Chrome. Paul Irish has a lot of great talks about this; here is one: https://www.youtube.com/watch?v=Vp524yo0p44
Also, when doing animation, do not use setTimeout/setInterval; the precision is not good enough. Instead, use requestAnimationFrame. More information about requestAnimationFrame can be found here: http://paulirish.com/2011/requestanimationframe-for-smart-animating/
Edit: This talk by Paul is also really interesting regarding speed and debugging speed in the browser: https://www.youtube.com/watch?v=MllBwuHbWMY, and here is a fairly recent one discussing 2D transforms vs absolute positioning: https://www.youtube.com/watch?v=NZelrwd_iRs
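As a rough, hand-rolled complement to the Timeline (a sketch, not code from the linked talks): the timestamp passed to requestAnimationFrame callbacks can be used to log frames that blow the ~16.7 ms budget of 60 fps.

    var lastTime = null;

    function tick(now) {                 // "now" is a high-resolution timestamp
      if (lastTime !== null) {
        var delta = now - lastTime;      // time since the previous frame, in ms
        if (delta > 20) {                // anything well over ~16.7 ms means a dropped frame
          console.log('Slow frame: ' + delta.toFixed(1) + ' ms');
        }
      }
      lastTime = now;
      // ... update and draw the animation here ...
      requestAnimationFrame(tick);
    }

    requestAnimationFrame(tick);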
Different machines => different performance => different bottlenecks
If an animation isn't running smoothly, I try to scale back the graphics or the animation itself. Who says your users' machines are as powerful as yours? They may hit the issue sooner than you do.
But I'd still suggest Process Explorer, as it can show the load of each individual process. In general it's a more insightful tool than the default Task Manager provided by Windows.
So I'm tinkering around making an old-school game for fun using the canvas. Firefox is slow, but Chrome doesn't have Firebug, which I find almost a requirement when developing with JavaScript. So, first question: how are people developing these complex games without the aid of Firebug?
Second question: what are some performance tips that can help the draw functions (or just JavaScript in general) execute faster? It seems to me that this is where the bottleneck is (for Firefox at least).
Final question: from experimenting with profiling in Firebug, I can see performance gains from what some would call 'bad practices'. For example, I have organized code into a list of functions that each do one thing. This runs slower than if I just dump all the code between beginPath() and closePath(), but doing it that way leads to spaghetti code that is difficult to follow. How do you manage the balance?
I am using 100% Chrome for development and then testing other browsers later.
Chrome has a built in inspector that is (in my opinion) better than firebug. Much easier stack inspection, stepping, and object inspection.
Right-click on a page and click "Inspect element" (or press Ctrl+Shift+I).
Then click the "Scripts" tab. You'll see the call stack, scope variables, breakpoints, etc. on the right. Hovering over variables not only lets you see their value but lets you explore their nested values as well.
For your final question - there's nothing wrong with optimisation - the only thing that's bad is premature optimisation. If you've found there's a problem and the only way to solve it is to make your code less readable then you have to make a trade-off between readability/maintainability and performance. If performance is the number one factor then by all means turn your beautifully factored code into ugly spaghetti code. But only after you've exhausted the other options.
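For the specific beginPath()/closePath() case, one possible middle ground (a sketch with made-up function names, not a prescription): keep the small, readable functions, but have them only add sub-paths, and issue beginPath()/stroke() once per batch instead of once per function.

    function drawWalls(ctx)   { ctx.rect(10, 10, 100, 5); /* ... */ }
    function drawLadders(ctx) { ctx.moveTo(50, 10); ctx.lineTo(50, 80); /* ... */ }

    function drawLevel(ctx) {
      ctx.beginPath();        // one path for the whole batch
      drawWalls(ctx);         // helpers only append sub-paths
      drawLadders(ctx);
      ctx.stroke();           // one stroke call instead of one per helper
    }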
Consider a complex rich internet application with lots of user interaction. I'm thinking of extensive drag-drop support, server-side user input validation, custom drawn UI controls such as an Outlook-like calendar, real-time UI feedback, etc... Would such an application be debuggable? I mean, can you easily step through the source code, place breakpoints, view the contents of variables, see the current call stack, use a profiler to pinpoint performance issues, etc...
Yes, why wouldn't it be?
Complexity just means more code to dig through, but tools like console.trace() from Firebug make that easier.
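A tiny illustration (the functions are hypothetical): console.trace() prints the current call stack, which helps untangle deeply nested event handlers.

    function saveCalendarEntry() {
      console.trace();        // logs saveCalendarEntry <- onDrop <- (event dispatch)
    }

    function onDrop(event) {
      saveCalendarEntry();
    }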
Yes, it would be debuggable.
If you're using IE8 to test your site, you could use the Developer Tools to inspect individual HTML elements and change their CSS on the fly. There's also the ability to break into JavaScript from the same interface.
If you're using Firefox, Firebug has almost identical abilities with a different interface.
Safari also has developer tools installed by default, you just have to go through the hoops of enabling them.
When you are designing your application, design it with debuggability and testability in mind. Make sure that individual parts are independently testable, that you have enough test data, that you have appropriate debug/probe points in your program logic, etc. Essentially, if the complexity is properly managed, debuggability won't be an issue at all.
If your job depended on it, you would find a way! :)
Seriously... a passenger jet has literally millions of parts and yet there are regular routine maintenance checks and if it breaks down it gets fixed. It's a very rare piece of software that approaches that much complexity.
Web app front ends tend to be relatively simple. Essentially you're just pushing some text from the server to the browser and making it pretty; and you're using various parts of the in-browser display as controls, some of which initiate some more text conversations with the server. There are lots of little things that can go wrong, of course, but much of the hardship is simply getting the browser (all of them!) to Do What You Mean.
The only truly difficult problems are those that are intermittent and/or timing sensitive. Those can be a bear to reproduce and trace. That calls for in-depth logical analysis of your source code and/or some specialized testing methods.