Three.js progressive enhancement for devices - javascript

Does anyone have any advice regarding progressive enhancement for Three js projects on devices?
I have an app with lots of post-processing, which is fine on modern devices but a bit slow on older/cheaper phones. It would be nice to enable post-processing layers progressively on devices that can handle them.
Is there a reasonably reliable way to measure performance so as to automate this?

Welcome to the everyday world of real-time game development on heterogeneous devices: PCs, mobiles, or cross-platform consoles...
You might consider simply keeping a list of common hardware profiles. While there are many kinds of phone, there are actually very few different GPU architectures used in almost all of them. This is the method that Chrome on Android has used, for example, to determine which devices work with WebGL and which don't.
I admit that this might seem crude, and that there ought to be some clean way to determine which devices will perform well on your code, but frankly it will differ from device to device. That is why companies like NVIDIA have test labs that evaluate products against a wide variety of hardware configurations, and even then they still end up generating some sort of table of performance options for that single game title, rather than building a function that can figure out performance from first principles.
A few years ago I was scoping out a system for a game company I worked at that would measure a range of performance parameters for each individual user and attempt, via regression ("machine learning"), to determine the best performance profile for THAT user on THEIR machine. Even with a narrow range of hardware choices, you can never tell just how much else is going on on the player's computer. Are they running YouTube and TeamSpeak and who knows what else in the background? The at-home situation is hard to predict, even in a large testing lab. So an adaptive approach might be the best, but even then it is very numerically intensive and probably not a good fit (today) for JavaScript-based web apps using Three.js.
So if you want to attack this problem, choose your strategies (drop compositing layers? Opt for simpler geometry? Fewer or simpler shaders? It depends on your app!), try them yourself (experience will trump speculation every time!), and then start deploying to the public. If your users permit it, ask them to store a cookie. But expect a long iterative process if you really want it to work.
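If you do go the adaptive route, a minimal sketch might look like the following, assuming you already have a Three.js renderer, scene, camera and an EffectComposer, and that your optional post-processing passes (the bloomPass, dofPass and ssaoPass names are purely illustrative) are collected cheapest-first in an array. The frame-time thresholds are guesses you would tune yourself, and the chosen level is remembered in localStorage (in the spirit of the cookie suggestion above).

```javascript
// A minimal adaptive-quality sketch. `composer`, `bloomPass`, `dofPass` and
// `ssaoPass` are assumed to exist already; names and thresholds are illustrative.
var optionalPasses = [bloomPass, dofPass, ssaoPass]; // cheapest first
var qualityLevel = parseInt(localStorage.getItem('qualityLevel') || optionalPasses.length, 10);
var frameTimes = [];

function applyQuality(level) {
  optionalPasses.forEach(function (pass, i) {
    pass.enabled = i < level;            // enable only the first `level` passes
  });
  localStorage.setItem('qualityLevel', level);
}

function animate(now) {
  requestAnimationFrame(animate);
  frameTimes.push(now);
  if (frameTimes.length > 120) {                       // look at ~2 seconds of frames
    var avgFrame = (now - frameTimes[0]) / (frameTimes.length - 1);
    if (avgFrame > 33 && qualityLevel > 0) {           // slower than ~30 fps: drop a pass
      applyQuality(--qualityLevel);
    } else if (avgFrame < 14 && qualityLevel < optionalPasses.length) {
      applyQuality(++qualityLevel);                    // comfortably fast: add one back
    }
    frameTimes.length = 0;
  }
  composer.render();   // the composer is assumed to already contain a RenderPass
}

applyQuality(qualityLevel);
requestAnimationFrame(animate);
```

The main design choice here is to average over a window of frames rather than reacting to a single slow frame, which would otherwise make the quality level flap up and down.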

Related

Is there a way to know anything about hardware resources of 'platform' accessing webpage?

I'd like to be able to find out about a browser's hardware resources from a web page, or at least a rough estimation.
Even when you detect the presence of modern technology (such as csstransforms3d, csstransitions, requestAnimationFrame) in a browser via a tool like Modernizr, you cannot be sure whether to activate some performance-consuming option (such as fancy 3D animation) or to avoid it.
I'm asking because I have (a lot of) experience with situations where the browser is modern (latest Chrome or Firefox supporting all the cool technologies) but the machine's CPU, GPU, and available memory are just catastrophic (32-bit Windows XP with an integrated GPU), and thus a decision based purely on detected browser capabilities is no good.
While Nickolay gave a very good and extensive explanation, I'd like to suggest one very simple, but possibly effective solution - you could try measuring how long it took for the page to load and decide whether to go with the resource-hungry features or not (Gmail does something similar - if the loading goes on for too long, a suggestion to switch to the "basic HTML" version will show up).
The idea is that, for slow computers, loading any page, regardless of content, should on average be much slower than on modern computers. Getting the amount of time it took to load your page should be simple, but there are a couple of things to note (a rough sketch follows after them):
You need to experiment a bit to determine where to put the "too slow" threshold.
You need to keep in mind that slow connections can cause the page to load slower, but this will probably make a difference in a very small number of cases (using DOM ready instead of the load event can also help here).
In addition, the first time a user loads your site will probably be much slower, since nothing is cached yet. One simple solution for this is to keep your result in a cookie or local storage and only take loading time into account when the user visits for the first time.
Don't forget to always, no matter what detection mechanism you used and how accurate it is, allow the user to choose between the regular, resource-hungry version and the faster, "uglier" one - some people prefer better looking effects even if it means the website will be slower, while others value speed and snappiness more.
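Here is a rough sketch of the timing-plus-storage idea, assuming the Navigation Timing API is available and that a 3000 ms threshold (which you would have to tune) separates "fast enough" from "too slow"; enableLiteMode is a hypothetical function in your own code:

```javascript
// Rough sketch of the load-time heuristic. The threshold is an assumption
// you would have to tune for your own page.
var SLOW_THRESHOLD_MS = 3000;

function decideVersion() {
  var stored = localStorage.getItem('useLiteVersion');
  if (stored !== null) {
    return stored === 'true';        // reuse the earlier decision on repeat visits
  }
  // domContentLoadedEventEnd penalises slow networks less than the full load event
  var t = performance.timing;
  var loadTime = t.domContentLoadedEventEnd - t.navigationStart;
  var useLite = loadTime > SLOW_THRESHOLD_MS;
  localStorage.setItem('useLiteVersion', useLite);
  return useLite;
}

window.addEventListener('load', function () {
  if (decideVersion()) {
    enableLiteMode();   // hypothetical function that skips the heavy effects
  }
});
```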
In general, the available (to web pages) information about the user's system is very limited.
I remember a discussion of adding one such API to the web platform (navigator.hardwareConcurrency - the number of available cores), where the opponents of the feature explained the reasons against it, in particular:
The number of cores available to your app depends on other workload, not just on the available hardware. It's not constant, and the user might not be willing to let your app use all (or whatever fixed portion you choose) of the available hardware resources;
Helps "fingerprinting" the client.
Too oriented on the specifics of today. The web is designed to work on many devices, some of which do not even exist today.
These arguments work as well for other APIs for querying the specific hardware resources. What specifically would you like to check to see if the user's system can afford running a "fancy 3D animation"?
As a user I'd rather you didn't use additional resources (such as fancy 3D animation) if it's not necessary for the core function of your site/app. It's sad, really, that I have to buy a new laptop every few years just to be able to continue my current workflow without everything running very slowly due to a lack of hardware resources.
That said, here's what you can do:
Provide a fallback link for the users who are having trouble with the "full" version of the site.
If this is important enough to you, you could first run short benchmarks to check the performance and fall back to the less resource-hungry version of the site if you suspect that a system is short on resources.
You could target the specific high-end platforms by checking the OS, screen size, etc.
This article mentions this method on mobile: http://blog.scottlogic.com/2014/12/12/html5-android-optimisation.html
WebGL provides some information about the renderer via webgl.getParameter(). See this page for example: http://analyticalgraphicsinc.github.io/webglreport/
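For example, here is a hedged snippet for reading the vendor/renderer strings; note that the WEBGL_debug_renderer_info extension is not guaranteed to be exposed, so the null case must be handled:

```javascript
// Query the GPU vendor/renderer via WebGL; fall back to the masked values
// when the debug extension is unavailable.
var canvas = document.createElement('canvas');
var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
if (gl) {
  var ext = gl.getExtension('WEBGL_debug_renderer_info');
  var vendor = ext ? gl.getParameter(ext.UNMASKED_VENDOR_WEBGL) : gl.getParameter(gl.VENDOR);
  var renderer = ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : gl.getParameter(gl.RENDERER);
  console.log(vendor, renderer); // e.g. "Qualcomm", "Adreno (TM) 330" -- handy for a hardware profile table
}
```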

Could WebWorkers be used for supercomputer power?

This is a general question really, not sure if this is the place for it (it might be deleted as too general), so please don't heckle (I am just curious).
I have been reading up on WebWorkers API and had a thought.
WebWorkers can be limited to using only small amounts of processing power on each machine/user. This could be tailored so as not to affect the user experience, and might only slightly affect browser performance (if at all).
My question is, could they theoretically be used to turn a website/application into a highly distributed supercomputer?
Or is it more of an ethical question: even if it could be done, is it wrong if the user is not aware?
Yes, WebWorkers can be used for supercomputing a.k.a. distributed computing.
In fact, that's exactly what CrowdProcess does: http://crowdprocess.com/
DISCLAIMER: I work on CrowdProcess.
Websites can join the platform and supply it with processing power from the browsers that visit them, without disrupting the website visitors' experience in any way.
Developers can use the platform for their distributed computing jobs. Check the documentation to know how this happens: http://crowdprocess.com/doc-index
The website visitor can opt-in, opt-out or simply agree with the terms and conditions of the website that provides the platform with the browser's processing power.
We ask the website owners to tell the users what's going on in any way they find appropriate for their audience. CrowdProcess is aware that no one should power this platform against their consent and will. That's why we develop projects with a higher purpose: forest fire behavioural prediction, genetic sequence alignment and medical computer vision just to name a few.
Our vision is that one day soon we will have enough commercial applications running on the platform that allow us to pay websites for the processing power they provide.
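For readers wondering what the mechanics look like in general, here is a sketch (this is not CrowdProcess's actual API; the /tasks/next and /results endpoints and the work() function are placeholders):

```javascript
// worker.js -- runs off the main thread so the visitor's UI stays responsive
self.onmessage = function (e) {
  var task = e.data;
  var result = work(task.payload);          // placeholder for the actual computation
  self.postMessage({ id: task.id, result: result });
};

// main page -- fetch a task, hand it to the worker, send the result back
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  fetch('/results', {                       // hypothetical result-collection endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(e.data)
  });
};
fetch('/tasks/next')                        // hypothetical task queue endpoint
  .then(function (res) { return res.json(); })
  .then(function (task) { worker.postMessage(task); });
```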
It's possible, unethical and likely illegal.
It is certainly possible to do; in fact you don't even need to use web workers to do it. It is probably unethical if the user is not aware, but it may not actually degrade the user experience or even be noticeable. It may even be illegal, and you should get some legal advice.
For example, if you have an application where users are aware that they help fold proteins while playing your game, or something like that, then it may be a great application. If, on the other hand, you want to mine bitcoins using the processing power and electricity of your unsuspecting visitors, then you are asking for trouble.
I have found two projects...
Seti at home http://setiathome.berkeley.edu/
Gives the user the chance to give some processing power to help them analyse data from their telescope.
Folding at home http://folding.stanford.edu/English/About
Users can give processing power to research labs for all sorts of scientific research and study purposes (including protein strings).
It seems it is LEGAL (via WebSockets or Ajax) as long as you give details in your Terms and Conditions, but it is not recommended, as better ways to do heavy processing exist (see the two examples above).

JavaScript or Flash for multiplayer webgame?

I was recommended these two languages as good options for multiplayer web game development, but I don't know what the restrictions of these languages are in terms of game development. I personally like JavaScript, but I'm not sure about client-server communication and security.
The game is going to be a 2D turn-based strategy.
Shortest answer: Flash.
Short answer: Flash is your best option at the moment, but it's entirely possible to make a good quality game with JavaScript; there are libraries out there that make the drawing part really easy.
Long answer: For action heavy games, such as a fast paced platformer or a fighting game, Flash is your best option. Flash is also the only option if you want to be able to sell your game to various portals, since they have not yet opened up the ability to license HTML5/Javascript games.
However, JavaScript is a viable game development language. Angry Birds on Google+ (and in the Chrome store) uses JavaScript, and only uses Flash for audio. In fact, audio is currently the biggest challenge, and solutions for that are slowly emerging as well. Combine that with things like socket.io and the emerging WebGL, and you've achieved parity with most Flash and Unity games.
That said, there are drawbacks to JavaScript. Namely, it is much harder to protect the code from being stolen or hacked, and depending on various factors you're cutting out the subset of users who might be running IE8 and below. There are solutions to these problems, and there is one major drawback to Flash (it doesn't work on iOS), but generally these should be pointed out.
To be brief: it's too hard to draw via JavaScript. So if your game deals with graphics or animation, it is better to use ActionScript. IMHO, ActionScript is the best choice if you want to build a browser game; JavaScript will only be needed to deal with the browser itself (if that's needed at all).
UPD: take a look at Haxe. It's a multi-platform language; you can target both Flash and JS with Haxe.
I would recommend Flash, unless you care about the iOS audience. Having a socket running in your multiplayer game is extremely valuable, and easy to accomplish. Meanwhile, unless you want to demand a specific browser from your users, the never-ending headache of browser/platform incompatibility, especially in something as JavaScript-intensive as a multiplayer game, makes Flash the obvious choice. And of course, performing animations is what Flash is best at. Achieving similar results in HTML is in many cases impossible, and in all cases MUCH more work.
Oh, and welcome to Stack! :)
Go JavaScript. But you should look at http://socket.io/ if you go the JavaScript route.
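For a turn-based game the socket.io part can be quite small. A minimal sketch, where the 'join'/'move'/'state' event names and the applyMove/renderBoard functions are illustrative assumptions rather than anything socket.io prescribes:

```javascript
// server (node.js)
var io = require('socket.io')(3000);
io.on('connection', function (socket) {
  socket.on('join', function (gameId) {
    socket.join(gameId);                 // put the player in a room for their game
  });
  socket.on('move', function (data) {
    // validate the move server-side -- never trust the client in a multiplayer game
    io.to(data.gameId).emit('state', applyMove(data));   // applyMove is hypothetical
  });
});

// client (browser, with the socket.io client script loaded)
var socket = io('http://localhost:3000');
socket.emit('join', 'game-42');
socket.on('state', function (state) {
  renderBoard(state);                    // hypothetical render function
});
socket.emit('move', { gameId: 'game-42', from: 'e2', to: 'e4' });
```

The important design choice is that the server validates each move and broadcasts the resulting state, since nothing the client sends can be trusted.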

Progressive enhancement for javascript?

Most people talk about progressive enhancement right now as serving browsers with JavaScript (the enhanced version) and browsers without JavaScript (the simple version).
But there is such a big difference in JavaScript performance between browsers that it may be useful to apply that term to choosing between JavaScript-based features across browsers.
In a complex web app with numerous non-essential features (and animations), is it worthwhile to start cordoning them off, so that one set of features works in all browsers, another set only in Chrome and Safari, another in Firefox, Chrome, Safari and Opera, and so on, because enabling certain features in certain browsers would be too slow?
Sometimes I feel like the user experience would improve if some users did not have access to certain non-essential features. For instance, disallowing IE users from resizing certain panels that Chrome users would be able to resize.
I have not done this myself, but I can see that it makes a lot of sense if your budget allows for it (and you can't control your users' browser choice).
At the end of the day, IE users may be using a slow browser, but they are still your users. So if you want to give all your users the best possible user experience, it may be worth it to spend some time giving IE users a different version of the application to give them a higher level of performance.
An application that is fast for 99% of your users is undoubtedly better than an application that is fast for only 30% of your users. The only question is what's more important - the user experience, or your development time (and take into account that in a few years, the average user will be running faster browsers on faster computers).
Any such work should be driven by benchmarks though, since my experience is that you will often be surprised by what part of the code is slow and what part of the code is fast.
As an aside, Lombardi Blueprint has a very interesting approach, although it is likely impractical outside of GWT. They have layout algorithms written in Java, written such that they can run both on the client side (via GWT) and the server side (via a standard JVM). Consequently, based on the benchmarked performance of your browser, they are able to switch dynamically between doing the layout on the client side (for fast browsers) and on the server side (for slower browsers).
That sounds like a maintenance nightmare.
I realize that there are some web applications where it just doesn't make sense to have an HTML version. That said, if it's possible I would start by building an HTML version of every page first, then use JavaScript to enhance the user experience.
IE is less performant than Safari, Chrome and FF when it comes to JS - but have you really developed a page that is unusable in IE with JS turned on? I just haven't seen it - in the wild I think the various JS implementations are fast enough.
Two different issues with the browsers these days:
Speed. My experience has been that IE 7 works fine, just much slower than the rest. My fix is to give the user more frequent UI progress updates. Since updating the UI takes time, I minimize the updates on faster browsers: e.g. on IE I update the screen with feedback after processing another 50 events, and on other browsers after processing 200 events (a sketch of this chunked approach follows after these two points).
Lack of features, e.g. canvas. But it is a big expense to build multiple sites, and to test them too. So I spend my budget on one version for all current desktop browsers, and build additional sites for mobile, especially iPhone.
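Here is a sketch of that chunked approach, where the chunk sizes mirror the numbers above; handleEvent and the crude isIE test are placeholders for your own code:

```javascript
// Process a long queue in chunks so the UI can repaint and show progress in between.
var isIE = /MSIE/.test(navigator.userAgent);   // illustrative detection only
var CHUNK_SIZE = isIE ? 50 : 200;

function processQueue(events, onProgress, onDone) {
  var i = 0;
  function step() {
    var end = Math.min(i + CHUNK_SIZE, events.length);
    for (; i < end; i++) {
      handleEvent(events[i]);      // hypothetical per-event handler
    }
    onProgress(i, events.length);  // update the UI between chunks
    if (i < events.length) {
      setTimeout(step, 0);         // yield to the browser before the next chunk
    } else {
      onDone();
    }
  }
  step();
}
```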
HTH,
Larry
What I do is write a basic JavaScript file that has the common functionality, targeting the lowest common denominator (JavaScript 1.5). I then have other files for more recent versions of JavaScript, and those replace functions in my JavaScript objects, so that I can progressively add more support.
If I want to use the canvas tag, then I can add that in a different file, since IE and Firefox/Opera/Safari differ in how they create the canvas element.
This is not a joy to maintain, but if I want to use the new HTML/JavaScript features then this seemed to be the best model.
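As an illustration of that layering, here is a sketch with made-up Chart/buildHtmlTable/drawWithCanvas names; the later file simply overwrites the baseline method when canvas support is detected:

```javascript
// base.js -- lowest common denominator
var Chart = {
  draw: function (el, data) {
    el.innerHTML = buildHtmlTable(data);   // hypothetical plain-HTML fallback
  }
};

// canvas.js -- loaded after base.js; only takes effect when canvas is available
if (document.createElement('canvas').getContext) {
  Chart.draw = function (el, data) {
    var canvas = document.createElement('canvas');
    el.appendChild(canvas);
    drawWithCanvas(canvas.getContext('2d'), data);   // hypothetical canvas renderer
  };
}
```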
I concur with Andy. Providing different versions of an application to different browsers is a potential maintenance problem down the road. I have always found it a better bet to provide one version of an app that works in all browsers. For example, I try to avoid browser sniffing. The application might not be the coolest one, but it works for everyone and is easier to maintain.
This sort of stuff is easier now with all the nifty Javascript libraries that abstract some of the browser differences away. Besides, you can do a lot of stuff in the older browsers. It's just done "differently" ;)
So let's say that you build a decently sized application. You have browser sniffing galore to determine which features will be on and which will be off. You sniffed for Opera 9.x, and now (today, actually) Opera 10 comes out. You have to go and update every sniffer on every page. And then another browser comes out soon... and another. You are going to spend all your time determining which browsers you support and which features to support on them.
I use multiple browsers in a day. So when I go to your site, I am going to see three different interfaces. I will be confused, since the features I expected to be there, or the behaviors I expected will not be there. Eventually, I'll get frustrated and never go to your site again.
Also, there is more to how quickly some piece of JavaScript runs than just the browser. I still have an old Pentium running Firefox 3.5. Sometimes, it can be painfully slow.
I think the answer is that you need to categorize your code into speed categories, not just categorize into browser capabilities.
In other words, give your site tiers of features: the first tier is basic HTML, the second tier is JavaScript usability improvements, and the third tier is JavaScript animation eye candy.
Then do a combination of two things: allow your users to step down a tier whenever they want to ("Click to turn off animations!", "Click to turn on animations!", "Click to view in basic HTML"), and choose a default speed category per browser for speed reasons (e.g. if IE7 seems to show speed issues with the full animations on, make it default to the second, "JavaScript usability improvements" tier).
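A sketch of how those tiers might be wired up; the tier names, the tiny CPU probe, the 50 ms threshold and the 'tier-down' button are all illustrative assumptions:

```javascript
// Pick a default tier from a quick benchmark, let the user override it,
// and remember the choice in localStorage.
var TIERS = ['basic', 'usability', 'eye-candy'];

function benchmarkTier() {
  var start = Date.now();
  for (var i = 0, s = 0; i < 200000; i++) { s += Math.sqrt(i); } // tiny CPU probe
  return (Date.now() - start) > 50 ? 1 : 2;   // slow machines default to the middle tier
}

var tier = localStorage.getItem('uiTier');
tier = tier !== null ? parseInt(tier, 10) : benchmarkTier();

function setTier(t) {
  tier = t;
  localStorage.setItem('uiTier', t);
  document.documentElement.className = 'tier-' + TIERS[t];  // CSS/JS can key off this class
}

// "Click to turn off animations!" style control; #tier-down is a hypothetical button
document.getElementById('tier-down').onclick = function () {
  if (tier > 0) { setTier(tier - 1); }
};

setTier(tier);
```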

Adobe AIR, memory leaks

We all know how web browsers (such as Firefox) tend to see their memory consumption grow because we continuously execute JavaScript code (from websites) that is prone to memory leaks.
I am debating developing a desktop app, and given my experience with JavaScript/CSS/HTML, I thought I would give AIR a try; this way I don't have to use Java (for example) and deal with learning all its Swing GUI stuff.
The problem is that I worry about memory leakage in AIR, since AIR is simply a web browser with an API layer to interact with the Operating System.
Is it plausible to worry about memory leakage in AIR? What should I do about it?
My name is Rob Christensen and I am product manager on Adobe AIR. First, let me say that it is quite easy to build a desktop application, regardless of underlying technology, that consumes a large amount of memory and/or does not free up memory.
In the next release of AIR, we are looking at providing some additional capabilities to the AIR runtime to make it easier to identify memory leaks for JavaScript-based applications. Developers that are building Flash or Flex based applications can already take advantage of the memory profiler included in Flex Builder to track this down. We are hoping to do something similar for JavaScript developers as well.
In my experience talking to developers, memory leaks often occur when objects in memory are never cleaned up. For example, imagine a Twitter client that lists tweets from users based on a search keyword. Over time, more results show up and the list becomes longer. If there is no limit on the maximum number of tweets visible, memory will, of course, go up over time. Instead, the application should impose a reasonable limit on the number of items that appear in that list.
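As a hedged illustration of that advice in an HTML/JS AIR app (the element id and tweet fields are made up):

```javascript
// Keep at most MAX_TWEETS items in the list and drop the oldest DOM nodes
// so they can actually be garbage collected instead of growing forever.
var MAX_TWEETS = 200;
var list = document.getElementById('tweet-list');   // hypothetical <ul>

function addTweet(tweet) {
  var li = document.createElement('li');
  li.textContent = tweet.user + ': ' + tweet.text;
  list.insertBefore(li, list.firstChild);
  while (list.childNodes.length > MAX_TWEETS) {
    list.removeChild(list.lastChild);                // release old nodes
  }
}
```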
There are some resources available that describe best practices around handling memory in AIR. Though the examples in the article below are mostly written in ActionScript, the same concepts apply to JavaScript as well.
Performance-Tuning AIR applications
http://www.adobe.com/devnet/air/articles/air_performance.html
If there are memory leaks in the runtime, we jump on them as quickly as we can. We encourage developers to let us know about such issues by sending them to our team using the following feedback form (www.adobe.com/go/wish).
If you are using an Ajax framework, you may want to look into whether there are known issues with memory leaks for that particular framework.
So, to summarize: yes, you should always worry about memory when building a desktop application, whether with AIR or C++. As you are developing your application, you should monitor its memory usage so that you can identify any issues sooner rather than later. One way to do this is to run longevity tests: keep your application open overnight to see if memory is creeping up.
In general, the tools available for browsers are very limited as well. I expect this will change soon as browser vendors also start providing more hooks into their browsers for identifying memory usage. Hope this helps.
Thank you!
-Rob
Product Manager, Adobe AIR
Sure. I've seen AIR apps on Linux swallow gigabytes of memory over time. It's a real blocker for me and stops me using them.
That said, other people on other platforms have no issue with it. Ultimately you need to decide what most of your market will be using and how affected they'll be by any issues in AIR (or other).
If it's not that important (but it's still an issue), submit bug reports and hope Adobe fixes things.
