JavaScript: detect if hardware acceleration is enabled

Is it possible to detect to see if a browser has support for hardware accelerated page rendering and is it also possible to see if it has been enabled? I am aware that Firefox 4, IE9 and Chrome support it, but it may or may not be enabled due to the version of the browser, the OS, and the computer hardware itself.
Is this possible using some form of DOM sniffing?

Like all other browser-specific capabilities, probably the best thing you can do is to devise some sort of feature test and actually measure the performance of what matters to you.
You can run it when the first page on your site is loaded, then store the result in a cookie so you don't have to repeat the test on every visit, only every once in a while when the cookie expires.
Depending upon what type of performance you care the most about, you can devise a test that's pretty specific to that. For example if you cared a lot about image scaling, you could devise a JS test that scales an image among a bunch of different sizes in a big enough loop to get a measurable time and then decide what your timing threshold is.
Not only would this be better than a single binary choice of acceleration on or off, it would also test the specific features you care about and tell you how fast they actually are.
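For example, a rough sketch of such an image-scaling test might look like this (the image URL, the loop count and the 50 ms threshold are placeholders you would tune against your own content and hardware):

function measureImageScaling(callback) {
  var img = new Image();
  img.onload = function () {
    document.body.appendChild(img);
    var start = Date.now();
    for (var i = 0; i < 200; i++) {
      img.style.width = (100 + (i % 50) * 10) + 'px'; // cycle through a range of widths
      img.offsetWidth; // read a layout property so the resize isn't skipped
    }
    var elapsed = Date.now() - start;
    document.body.removeChild(img);
    callback(elapsed);
  };
  img.src = '/img/test-photo.jpg'; // any image of your own
}

measureImageScaling(function (ms) {
  var fastEnough = ms < 50; // arbitrary threshold, measure and adjust
  document.cookie = 'fxOk=' + (fastEnough ? '1' : '0') + ';max-age=604800'; // re-test after a week
});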

I recently discovered a handy command-line switch for Chrome (I am using v. 16.0.912) that results in red borders being painted around HTML (CSS3) elements that are actually being hardware accelerated. You can read more details in a blog that I have published on this subject.

Related

Is there a way to know anything about hardware resources of 'platform' accessing webpage?

I'd like to be able to find out about a browser's hardware resources from a web page, or at least a rough estimation.
Even when you detect the presence of modern technology (such as csstransforms3d, csstransitions, requestAnimationFrame) in a browser via a tool like Modernizr, you cannot be sure whether to activate some performance-consuming option (such as fancy 3D animation) or to avoid it.
I'm asking because I have (a lot of) experience with situations where the browser is modern (latest Chrome or Firefox supporting all the cool technologies) but the machine's CPU, GPU, and available memory are just catastrophic (32-bit Windows XP with an integrated GPU), and thus a decision based purely on detected browser capabilities is no good.
While Nickolay gave a very good and extensive explanation, I'd like to suggest one very simple, but possibly effective solution - you could try measuring how long it took for the page to load and decide whether to go with the resource-hungry features or not (Gmail does something similar - if the loading goes on for too long, a suggestion to switch to the "basic HTML" version will show up).
The idea is that, for slow computers, loading any page, regardless of content, should be, on average, much slower than on modern computers. Getting the amount of time it took to load your page should be simple, but there are a couple of things to note:
You need to experiment a bit to determine where to put the "too slow" threshold.
You need to keep in mind that slow connections can cause the page to load slower, but this will probably make a difference in a very small number of cases (using DOM ready instead of the load event can also help here).
In addition, the first time a user loads your site will probably be much slower, due to caching. One simple solution for this is to keep your result in a cookie or local storage and only take loading time into account when the user visits for the first time.
Don't forget to always, no matter what detection mechanism you used and how accurate it is, allow the user to choose between the regular, resource-hungry and the faster, "uglier" version - some people prefer better looking effects even if it means the website will be slower, while other value speed and snappiness more.
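A minimal sketch of that load-time idea might look like this (the 3000 ms threshold and the 'perfVerdict' key are made up, and the script assumes it runs early, before DOMContentLoaded fires):

(function () {
  var stored = localStorage.getItem('perfVerdict');
  if (stored) {
    enableFancyFeatures(stored === 'fast');
    return;
  }
  var start = (window.performance && performance.timing)
    ? performance.timing.navigationStart // counts from the start of navigation where available
    : Date.now();
  document.addEventListener('DOMContentLoaded', function () {
    var verdict = (Date.now() - start) < 3000 ? 'fast' : 'slow';
    localStorage.setItem('perfVerdict', verdict); // only the first visit pays for the measurement
    enableFancyFeatures(verdict === 'fast');
  });
})();

function enableFancyFeatures(on) { /* toggle the resource-hungry bits here */ }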
In general, the available (to web pages) information about the user's system is very limited.
I remember a discussion of adding one such API to the web platform (navigator.hardwareConcurrency - the number of available cores), where the opponents of the feature explained the reasons against it, in particular:
The number of cores available to your app depends on other workload, not just on the available hardware. It's not constant, and the user might not be willing to let your app use all (or whatever fixed portion you choose) of the available hardware resources;
Helps "fingerprinting" the client.
Too oriented on the specifics of today. The web is designed to work on many devices, some of which do not even exist today.
These arguments work as well for other APIs for querying the specific hardware resources. What specifically would you like to check to see if the user's system can afford running a "fancy 3D animation"?
As a user I'd rather you didn't use additional resources (such as fancy 3D animation) if it's not necessary for the core function of your site/app. It's sad really that I have to buy a new laptop every few years just to be able to continue with my current workflow without running very slowly due to lack of HW resources.
That said, here's what you can do:
Provide a fallback link for the users who are having trouble with the "full" version of the site.
If this is important enough to you, you could first run short benchmarks to check the performance and fall back to the less resource-hungry version of the site if you suspect that a system is short on resources.
You could target the specific high-end platforms by checking the OS, screen size, etc.
This article mentions this method on mobile: http://blog.scottlogic.com/2014/12/12/html5-android-optimisation.html
WebGL provides some information about the renderer via webgl.getParameter(). See this page for example: http://analyticalgraphicsinc.github.io/webglreport/
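For example, something along these lines asks WebGL which GPU/driver it is talking to; the debug extension is not always available and the strings may be masked, so treat the result as a hint only:

function getGpuInfo() {
  var canvas = document.createElement('canvas');
  var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl) return null; // no WebGL at all
  var info = {
    vendor: gl.getParameter(gl.VENDOR),
    renderer: gl.getParameter(gl.RENDERER)
  };
  var dbg = gl.getExtension('WEBGL_debug_renderer_info');
  if (dbg) {
    info.vendor = gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL);
    info.renderer = gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL);
  }
  return info;
}

console.log(getGpuInfo()); // e.g. { vendor: "...", renderer: "..." }, or null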

Detect Graphics card performance - JS

This is a long shot - is there any way to detect poor vs. strong graphics card performance via a JS plugin?
We have built a parallax site for a client, it stutters on lower performance machines - we could tweak the performance to make it work better across the board - but this of course reduces the experience for users with higher performance machines.
We could detect browser version also - but the same browser could run on low and high performance machines - so that doesn't help our situation.
Any ideas?
requestAnimationFrame (rAF) can help with this.
You could figure out your framerate using rAF. Details about that here: calculate FPS in Canvas using requestAnimationFrame. In short, you figure out the time difference between frames then divide 1 by it (e.g. 1/.0159s ~= 62fps ).
Note: with any method you choose, performance will be arbitrarily decided. Perhaps anything over 24 frames per second could be considered "high performance."
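A small sketch of the 1/delta idea, sampling a few frames and averaging (the 30-frame sample and the 24 fps cut-off are arbitrary):

function measureFps(sampleFrames, done) {
  var last = null, deltas = [];
  function tick(timestamp) {
    if (last !== null) deltas.push(timestamp - last);
    last = timestamp;
    if (deltas.length < sampleFrames) {
      requestAnimationFrame(tick);
    } else {
      var avgMs = deltas.reduce(function (a, b) { return a + b; }, 0) / deltas.length;
      done(1000 / avgMs); // e.g. ~16 ms per frame is ~60 fps
    }
  }
  requestAnimationFrame(tick);
}

measureFps(30, function (fps) {
  console.log(fps.toFixed(1) + ' fps, high performance: ' + (fps > 24));
});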
Why not let the user decide? Youtube (and many other video sharing sites) implements a selector for quality of playback, now a gear icon with a list of resolutions you can choose from. Would such a HD | SD or Hi-Fi | Lo-Fi selector work (or even make sense) in the context of your application?
This is where "old school" loading screens came in useful, you could render something complex either in the foreground (or hidden away) that didn't matter if it looked odd or jurky -- and by the time you had loaded your resources you could decide on what effects to enable or disable.
Basically you would use what jcage mentioned for this, testing of frame-rate (i.e. using a setInterval in conjuction with a timer). This isn't always 100% reliable however because if their machine decides in that instance to do a triple-helix-backward-somersault (or something more likely) you'd get a dodgy reading. It is possible, depending on the animations involved, to upgrade and downgrade the effects in realtime — but this is always more tricky to code, plus your own analysis of the situation can actually sometimes cause dropped performance.
Firefox has a built-in list of graphics cards which are not supported: https://wiki.mozilla.org/Blocklisting/Blocked_Graphics_Drivers and the related help article.
But you can only test for them indirectly, by trying to use WebGL features...
http://get.webgl.org/troubleshooting/ leads you to the corresponding browser vendor's help page when there are problems. When checking the JS code you will see that they test along these lines:
if (!window.WebGLRenderingContext) {
    alert('Your browser does not support WebGL');
}
to distinguish a browser without any WebGL support from one where WebGL exists but cannot be used, for example because the graphics card or driver is not up to date.
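A slightly fuller sketch of the same idea: the constructor existing only tells you the browser knows about WebGL, while failing to actually create a context is what typically indicates a blocked card or driver:

function webglStatus() {
  if (!window.WebGLRenderingContext) {
    return 'browser has no WebGL support';
  }
  var canvas = document.createElement('canvas');
  var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  return gl
    ? 'WebGL works'
    : 'WebGL supported by the browser, but blocked (old graphics card or driver?)';
}

console.log(webglStatus());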
You might consider checking to see if the browser supports window.requestAnimationFrame, which would indicate you are running in a newer browser. Or alternatively consider checking jQuery.fx.interval.
You could then implement a custom function to gauge the available processing power. You might try using a custom easing function which can then be run through a function like .slideDown() to get an idea of the computation power available.
See this answer to another question for more ideas on how to check performance in javascript.
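One hedged sketch of that easing idea, assuming jQuery is loaded: run a short slideDown() and count how many animation steps jQuery manages within the fixed duration; a struggling machine drops ticks, so fewer steps means less headroom (the 400 ms duration and any threshold are up to you):

function gaugeWithSlideDown(done) {
  var steps = 0;
  var $probe = $('<div/>').css({ height: 200 }).hide().appendTo('body');
  $probe.slideDown({
    duration: 400,
    step: function () { steps++; }, // called on every animation tick (per property)
    complete: function () {
      $probe.remove();
      done(steps); // compare against a threshold you determine empirically
    }
  });
}

gaugeWithSlideDown(function (steps) {
  console.log('animation steps completed: ' + steps);
});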
If the browser is ultra-modern and supports requestAnimationFrame, you can calculate the animation frame rate in realtime and drop your animation settings for slower machines. That's IE 10, Firefox 4, Chrome 10, and Safari 6.
You would essentially create a 'tick' function that runs on a requestAnimationFrame() callback and tracks how many milliseconds pass between each tick. After so many ticks have been registered, you can average it out to determine your overall frame rate. But there are two caveats to this:
When the tab is in the background requestAnimationFrame callbacks will be suspended -- so a single, sudden delay between frames of several seconds to many minutes does not mean it's a slow animation. And:
Simply having a JavaScript gauge running on each animation frame will cause the overall animation to slow down a bit; there's no way to measure something like that without negatively impacting it. Remember, Heisenberg's a bastard.
For older browsers, I don't know of any way to reliably gauge the frame rate. You may be able to simulate it using setTimeout() but it won't be nearly as accurate -- and might have an even more negative impact on performance.
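For what it's worth, a sketch of that tick-and-average approach, ignoring the huge deltas you get when the tab was backgrounded (the 1-second cut-off, 100-tick window and 30 fps limit are all arbitrary):

var frameDeltas = [];
var lastTick = null;

function tick(now) {
  if (lastTick !== null) {
    var delta = now - lastTick;
    if (delta < 1000) frameDeltas.push(delta); // skip background-tab gaps
    if (frameDeltas.length >= 100) {
      var avg = frameDeltas.reduce(function (a, b) { return a + b; }, 0) / frameDeltas.length;
      if (1000 / avg < 30) reduceAnimationSettings(); // your own downgrade hook
      frameDeltas.length = 0; // start a fresh measurement window
    }
  }
  lastTick = now;
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);

function reduceAnimationSettings() { /* drop shadows, particle counts, etc. */ }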
This might be a risk/benefit-based decision. I think you will have to make an important but tough decision here.
1)
If you decide to have two versions, you will have to spend some time to:
figure out a fast, non-intrusive test
spend time implementing it
roll it into production
and you will most probably still end up with an incorrect implementation; for example, at the moment my computer is running 70 Chrome tabs, VLC with 1080p anime, and IntelliJ IDEA.
The probability that my MBP (2012 model) will be detected as a "slow" computer is high, at least right now.
False positives are really hard to figure out.
2)
If you go for one version you will have to choose between HD and Lo-Fi, as #Patrick mentioned, which is again a mistake in my opinion.
What I would suggest is that you go to Google Analytics and figure out your browser distribution (yes, I know it can be misleading, but so can any other test), and based on that (if the majority of users are on Chrome + modern IE/FF) go with the HD version, BUT spend some time figuring out an optimisation strategy.
There are always things that could be done better and faster. Get one older laptop and optimise until you get a decent FPS rate. And that's it; you as a developer need to make that decision, that is your duty.
3)
If from the browser distribution you figure out that you absolutely must go with the Lo-Fi version, well, try to think about whether the "downgrade" is worth it, and implement it only if that is your last resort.

What's the best way to determine at runtime if a browser is too slow to gracefully handle complex JavaScript/CSS?

I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.
I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-)
Are there any examples of this sort of thing being done?
What would be the best ways to measure this - accounting for things like temporary slowdowns on busy CPUs?
Notes:
I'm not interested in browser/OS detection.
At the moment, I'm not interested in bandwidth measurements - only browser/cpu performance.
Things that might be interesting to measure:
Base JavaScript
DOM manipulation
DOM/CSS rendering
I'd like to do this in a way that affects the page's render-speed as little as possible.
BTW: In order to not confuse/irritate users with inconsistent behavior - this would, of course, require on-screen notifications to allow users to opt in/out of this whole performance-tuning process.
[Update: there's a related question that I missed: Disable JavaScript function based on user's computer's performance. Thanks Andrioid!]
Not to be a killjoy here, but this is not a feat that is currently possible in any meaningful way in my opinion.
There are several reasons for this, the main ones being:
Whatever measurement you do, if it is to have any meaning, will have to test the maximum potential of the browser/cpu, which you cannot do and maintain any kind of reasonable user experience
Even if you could, it would be a meaningless snapshot, since you have no idea what kind of load the CPU is under from applications other than the browser while your test is running, or whether that situation will continue while the user is visiting your website.
Even if you could do that, every browser has its own strengths and weaknesses, which means you'd have to test every DOM manipulation function to know how fast the browser would complete it; there is no "general" or "average" that makes sense here in my experience. And even if there were, the speed with which DOM manipulation commands execute depends on the context of what is currently in the DOM, which changes when you manipulate it.
The best you can do is to either
Let your users decide what they want, and enable them to easily change that decision if they regret it
or better yet
Choose to give them something that you can be reasonably sure that the greater part of your target audience will be able to enjoy.
Slightly off topic, but following this train of thought: if your users are not tech leaders in their social circles (like most users in here are, but most people in the world are not), don't give them too much choice, i.e. any choice that is not absolutely necessary - they don't want it and they don't understand the technical consequences of their decision before it is too late.
A different approach, that does not need explicit benchmark, would be to progressively enable features.
You could apply features in prioritized order, and after each one, drop the rest if a certain amount of time has passed.
Ensuring that the most expensive features come last, you would present the user with a somewhat appropriate selection of features based on how speedy the browser is.
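As a rough sketch of that approach (the feature names and the 200 ms budget are invented for illustration):

var features = [enableBasicWidgets, enableTransitions, enableParallax, enable3dEffects];
var budgetMs = 200; // total time you are willing to spend enabling features
var start = Date.now();

for (var i = 0; i < features.length; i++) {
  features[i](); // cheapest / most important features first
  if (Date.now() - start > budgetMs) break; // slower machines simply get fewer features
}

function enableBasicWidgets() { /* ... */ }
function enableTransitions() { /* ... */ }
function enableParallax() { /* ... */ }
function enable3dEffects() { /* ... */ }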
You could try timing some basic operations - have a look at Steve Souders' Episodes and Yahoo's boomerang for good ways of timing stuff browser-side. However, it's going to be rather complicated to work out how the metrics relate to an acceptable level of performance / a rewarding user experience.
If you're going to provide a UI to let users opt in / opt out, why not just let the user choose the level of eye candy in the app vs the rendering speed?
Take a look at some of Google's (copyrighted!) benchmarks for V8:
http://v8.googlecode.com/svn/data/benchmarks/v4/regexp.js
http://v8.googlecode.com/svn/data/benchmarks/v4/splay.js
I chose a couple of the simpler ones to give an idea of similar benchmarks you could create yourself to test feature sets. As long as you keep the run-time of your tests, measured from the time logged at start to the time logged at completion, under 100 ms on the slowest systems (a limit these Google tests vastly exceed), you should get the information you need without being detrimental to the user experience. The Google benchmarks care about fine-grained differences between the faster systems; you don't. All you need to know is which systems take longer than XX ms to complete.
Things you could test are regular expression operations (similar to the above), string concatenation, page scrolling, anything that causes a browser repaint or reflow, etc.
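A toy benchmark in that spirit might look like this; the iteration count and the 20 ms threshold are guesses you'd calibrate on real hardware so the whole thing stays well under 100 ms even on slow machines:

function quickBenchmark() {
  var start = Date.now();
  var s = '';
  for (var i = 0; i < 5000; i++) {
    s += 'x'; // string concatenation
    /\d+foo/.test(i + 'foo' + i); // a little regexp work mixed in
  }
  return Date.now() - start; // milliseconds spent
}

var slowMachine = quickBenchmark() > 20; // calibrate this threshold yourself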
You could run all the benchmarks you want using Web Workers, then, according to results, store your detection about the performance of the machine in a cookie.
This only works in browsers with HTML5 Web Worker support, of course.
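A minimal sketch of that, building the worker from a Blob so no extra file is needed (the loop size and the 50 ms threshold are arbitrary):

var workerSrc =
  'var start = Date.now();' +
  'var x = 0;' +
  'for (var i = 0; i < 1e6; i++) { x += Math.sqrt(i); }' +
  'postMessage(Date.now() - start);';

var worker = new Worker(URL.createObjectURL(new Blob([workerSrc])));
worker.onmessage = function (e) {
  var verdict = e.data < 50 ? 'fast' : 'slow'; // e.data is the elapsed time in ms
  document.cookie = 'machinePerf=' + verdict + ';max-age=604800';
  worker.terminate();
};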
Some Ideas:
Putting a time-limit on the tests seems like an obvious choice.
Storing test results in a cookie also seems obvious.
Poor performance on a test could pause further scripts
and trigger display of a non-blocking prompt UI (like the save-password prompts common in modern web browsers)
that asks the user if they want to opt into further scripting effects - and store the answer in a cookie.
While the user hasn't answered the prompt, periodically repeat the tests and auto-accept the scripting prompt if consecutive tests finish faster than the first one.
On a side note: slow network speeds could also probably be tested
by timing the download of external resources (like the page's own CSS or JavaScript files)
and comparing that result with the JavaScript benchmark results.
This may be useful on sites relying on loads of XHR effects and/or heavy use of <img/>s.
It seems that DOM rendering/manipulation benchmarks are difficult to perform before the page has started to render - and are thus likely to cause quite noticeable delays for all users.
I came across a similar question and I solved it this way; in fact, it helped me make some decisions.
After rendering the page I do:
let now, finishTime, i = 0;
now = Date.now(); // returns the number of milliseconds since Jan 01 1970
finishTime = now + 200; // we add 200 ms (1/5 of a second)
while(now < finishTime){
i++;
now = Date.now();
}
console.log("I looped " + i + " times!!!");
After doing that I tested it on several browsers with different OSes, and the i value gave me the following results:
Windows 10 - 8GB RAM:
1,500,000 approx. on Chrome
301,327 approx. on Internet Explorer
141,280 (on Firefox in a virtual machine running Lubuntu, given 2 GB)
MacOS 8GB RAM:
3,000,000 approx. on Safari
1,500,000 approx. on Firefox
70,000 (on Firefox 41 in a virtual machine running Windows XP, given 2 GB)
Windows 10 - 4GB RAM (This is an old computer I have)
500,000 approx. on Google Chrome
I load a lot of divs in the form of a list; they are loaded dynamically according to the user's input. This helped me limit the number of elements I create according to the performance each browser has shown, BUT
the JS is not everything! Even though Lubuntu running on a virtual machine gave poor results, it loaded 20,000 div elements in less than 2 seconds and you could scroll through the list with no problem, while it took more than 12 seconds on IE and the performance sucked!
So that could be a good approach, though when it comes to rendering that's another story; but it definitely could help you make some decisions.
Good luck, everyone!

Cross-browser compatibility issues

I see that many people have problems with cross-browser compatibility issues.
My question is why do browsers render the html, css or js differently?
Is it due to the DOM? The box model?
Why are there cross-browser compatibility issues when there are standards from W3C etc?
Are there any differences in the way the major Internet browsers display HTML content? Why is it that Internet Explorer, Firefox (Mozilla), Opera may display the same content differently?
What should I keep in mind when building a cross-browser compatible web site?
There are a lot of reasons for incompatibility:
Specs are often written in response to the development of proprietary features by specific vendors,
Specs can sometimes be poorly written, be ambiguous, or fail to anticipate specific end-use cases,
Browser vendors occasionally ignore the specification for their own reasons.
Additional factors:
A lot of this stuff is hard to implement, let alone implement correctly,
It also has to be implemented to handle poorly formed HTML, backwards compatibility etc. Browser vendors sometimes sacrifice 'correctness' for 'interoperability',
History, politics & personalities.
I'm sure someone will answer this much better, but here's a start:
Yes, there are standards that are supposed to be adhered to with respect to CSS rendering. The problem is, some browser vendors (cough Microsoft cough) don't consider it a priority to implement the specifications correctly (or even fully, for that matter). And even when the layout engine is being worked on to try to ensure compatibility, it can get quite nasty to figure out how things should be rendered when mixing various CSS properties and units (see, for example, the Acid3 test: http://en.wikipedia.org/wiki/Acid3).
To have a cross-browser website, you'll basically have to check that all of your website's pages render correctly in the browsers your visitors use (or that you support). Various tools such as Selenium (seleniumhq.org) can help with this.
You basically have to decide what you're going to do
design for the lowest common denominator (if it's IE6, there isn't much you'll be able to do)
design using validating CSS and using hacks (e.g. clearfix) to correct erroneous behavior in certain browsers
decide not to support certain browsers (IE6 being a prime candidate)
sniff browser and adapt display accordingly (NOT the preferred way to do it)
With respect to differences in manipulating the DOM, libraries such as jQuery help a lot as they hide the implementation differences from you.
For reference, it's a good idea to test your website in at least the following:
WebKit-based browser (Chrome, Safari)
Gecko-based browser (Firefox)
IE
Opera
It is an aftermath of the Great Browser War. Netscape Communications now lies in ruins, but the quirks the opponents introduced to outperform each other still remain in browsers' codebases, and some of the same people are still on the development teams. Consider watching Crockford's lecture; he gives some insight into the subject (you will want to save the file instead of streaming it).
Everything that Hamish said, plus another major problem that he alluded to is how browsers handle incorrect HTML. For example, back in the days of IE4/NS4, the table element was very problematic. If you didn't close a tag, IE4 would close it for you; Netscape 4 would silently disregard the table completely.
This is still true today where one browser will fix incorrect markup differently than another. Yes, the markup should be corrected, but browsers will try their best to render something.
The standard specifies how HTML/CSS markup should be displayed as visual elements. It does not specify how the rendering should work specifically. Many different people and companies have created different ways of rendering markup visually. Since HTML rendering is an extremely complex task, of course they didn't all converge on the exact same solution. All rendering engines are aiming for the same goal, but often the spec is vague enough to allow for small differences (be it just at the pixel level), and bugs are inevitable as well.
Add to that that back in the days browser vendors cared less about standards and more about gaining market share quickly and that certain companies have been very slow in turning around to embrace standards (you know who you are).
In the end, specs, which are quite complex, and browsers, which are even more complex, are written by many different people; you can't expect absolute perfection to arise from this process. And perfection is not the goal of HTML either; it's meant to be a simple, vendor and platform independent markup language to present information on a variety of devices, something it does remarkably well. If you need pixel perfect results, you need to go with a technology that was meant to provide this, like Adobe Flash (which is neither platform nor vendor independent nor simple).
Try to look at it from the glass-half-full perspective: thousands of different people have written millions of lines of code doing vastly different things on many different platforms with many different goals and focuses, and yet in the end they all render HTML markup almost identically, with often only tiny, virtually irrelevant differences. There are of course weak spots in every engine, and if you happen to hit them all at once, your site will break in every browser in different ways. But this is possible to avoid with a bit of experience.

Progressive enhancement for javascript?

Most people talk about progressive enhancement right now as serving browsers with javascript (enhanced version), and browsers without javascript (simple version).
But there is such a big difference in JavaScript performance between browsers that it may be useful to apply that term to choosing which JavaScript-based features to enable in which browsers.
In a complex web app with numerous non-absolutely-essential features (and animations), is it worthwhile to start cordoning them off, saying these sets of features should work in all browsers, these only in Chrome and Safari, these in Firefox, Chrome, Safari and Opera, and so on, because enabling certain features in certain browsers would be too slow?
Sometimes I feel like the user experience would improve if they did not have access to certain non-essential features. For instance disallowing IE users from resizing certain panels that Chrome users would be able to resize.
I have not done this myself, but I can see that it makes a lot of sense if your budget allows for it (and you can't control your users' browser choice).
At the end of the day, IE users may be using a slow browser, but they are still your users. So if you want to give all your users the best possible user experience, it may be worth it to spend some time giving IE users a different version of the application to give them a higher level of performance.
An application that is fast for 99% of your users is undoubtedly better than an application that is fast for only 30% of your users. The only question is what's more important - the user experience, or your development time (and take into account that in a few years, the average user will be running faster browsers on faster computers).
Any such work should be driven by benchmarks though, since my experience is that you will often be surprised by what part of the code is slow and what part of the code is fast.
As an aside, Lombardi Blueprint has a very interesting approach, although likely impractical outside of GWT. They have layout algorithms written in java, written such that they can be run both on the client side (via GWT) and the server side (via a standard jvm). Consequently, based on the benchmarked performance of your browser, they are able to dynamically switch between doing the layout on the client side (for fast browsers) vs doing the layout on the server side (for slower browsers).
That sounds like a maintenance nightmare.
I realize that there are some web applications where it just doesn't make sense to have an html version. That said, if it's possible I would start by building an html version of every page first, then use JavaScript to enhance the user experience.
IE is less performant than Safari, Chrome and FF when it comes to JS - but have you really developed a page that is unusable in IE with JS turned on? I just haven't seen it - in the wild I think the various JS implementations are fast enough.
Two different issues with the browsers these days:
Speed. My experience has been that IE 7 works fine, just much slower than the rest. My fix is to give more frequent UI progress updates to the users. Since the UI updating takes time, I minimize the updates on the faster browsers. Eg on IE I update the screen with more feedback after processing another 50 events. For other browsers, after processing 200 events.
Lack of features, e.g. canvas. But it is a big expense to build multiple sites. And test them too. So I spend my budget on one version for all current desktop browsers. And make additional sites for mobile, esp. iPhone.
HTH,
Larry
What I do is write a basic javascript file that has the common functionality, going to the lowest common denominator (javascript 1.5). I then have other files for more recent versions of javascript, and those will replace functions in my javascript objects, so that I can progressively add more support.
If I want to use the canvas tag, then I can add that in a different file, since IE and Firefox/Opera/Safari differ in how they create the canvas element.
This is not a joy on maintenance, but if I want to use the new html/javascript features then this seemed to be the best model.
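A sketch of that override pattern (the object and method names are only illustrative):

// base file: lowest-common-denominator implementations
var fx = {
  fadeIn: function (el) { el.style.display = 'block'; } // no animation at all
};

// loaded only for capable browsers (e.g. a separate "fx-modern.js"):
if (window.requestAnimationFrame) {
  fx.fadeIn = function (el) {
    el.style.opacity = 0;
    el.style.display = 'block';
    (function step() {
      el.style.opacity = Math.min(1, parseFloat(el.style.opacity) + 0.05);
      if (parseFloat(el.style.opacity) < 1) requestAnimationFrame(step);
    })();
  };
}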
I concur with Andy. Providing different version of an application to different browsers is a potential maintenance problem down the road. I have always found it a better bet to provide one version of an app, that works in all browsers. For example, I try to avoid browser sniffers. The application might not be the coolest one, but it works for everyone and is easier to maintain.
This sort of stuff is easier now with all the nifty Javascript libraries that abstract some of the browser differences away. Besides, you can do a lot of stuff in the older browsers. It's just done "differently" ;)
So let's say that you build a decently-sized application. You have browser-sniffing galore to determine what features will be on and which will be off. You sniffed for Opera 9.x, and now (today actually) Opera 10 comes out. You have to go and update every sniffer on every page. And then another browser comes out soon... and another. You are going to spend all your time, determining what browsers you support and what features to support on them.
I use multiple browsers in a day. So when I go to your site, I am going to see three different interfaces. I will be confused, since the features I expected to be there, or the behaviors I expected will not be there. Eventually, I'll get frustrated and never go to your site again.
Also, there is more to how quickly some piece of JavaScript runs than just the browser. I still have an old Pentium running Firefox 3.5. Sometimes, it can be painfully slow.
I think the answer is that you need to categorize your code into speed categories, not just categorize into browser capabilities.
In other words, give your site tiers of features, first tier is basic html, second tier is javascript usability improvements, third tier is javascript animation eye candy.
Then do a combination of allowing your users to step down a tier whenever they want to, "Click to turn off animations!", "Click to turn on animations!", "Click to view in basic html", and choosing to default to certain speed categories based on browser for speed reasons (e.g. if IE7 seems to evince speed issues with the full animations on, make it default to the second "javascript usability improvements" tier).
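A small sketch of how those tiers might be wired up (the tier names, the detection rule and the storage key are all made up for illustration):

var TIERS = ['basic-html', 'js-usability', 'js-eye-candy'];

function defaultTier() {
  // crude capability check standing in for whatever per-browser rule you settle on
  return window.requestAnimationFrame ? 'js-eye-candy' : 'js-usability';
}

function currentTier() {
  return localStorage.getItem('uiTier') || defaultTier();
}

function setTier(tier) { // wire this to "Click to turn off animations!" etc.
  if (TIERS.indexOf(tier) !== -1) localStorage.setItem('uiTier', tier);
  applyTier(currentTier());
}

function applyTier(tier) { /* enable/disable each feature group for the chosen tier */ }

applyTier(currentTier());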
