Adobe AIR, memory leaks - javascript

We all know how web browsers (such as Firefox) tend to fill up memory over time because they continuously execute JavaScript code (from websites) that is prone to memory leaks.
I am debating developing a desktop app, and given my experience with JavaScript/CSS/HTML, I thought I would give AIR a try; this way I don't have to use Java (for example) and deal with learning all of its Swing GUI stuff.
The problem is that I worry about memory leaks in AIR, since AIR is essentially a web browser with an API layer for interacting with the operating system.
Is it reasonable to worry about memory leaks in AIR? What should I do about it?

My name is Rob Christensen and I am a product manager on Adobe AIR. First, let me say that it is quite easy to build a desktop application, regardless of the underlying technology, that consumes a large amount of memory and/or does not free up memory.
In the next release of AIR, we are looking at providing some additional capabilities in the AIR runtime to make it easier to identify memory leaks in JavaScript-based applications. Developers who are building Flash- or Flex-based applications can already take advantage of the memory profiler included in Flex Builder to track this down. We are hoping to do something similar for JavaScript developers as well.
In my experience talking to developers, memory leaks often occur when objects in memory are never cleaned up. For example, imagine a Twitter client that lists tweets from users based on a search keyword. Over time, more results show up and the list becomes longer. If there is no limit on the maximum number of tweets visible, memory will, of course, go up over time. Instead, the application should impose a reasonable limit on the number of items that appear in that list.
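For illustration, a minimal sketch of that kind of cap in a JavaScript-based application (MAX_TWEETS, the 'tweet-list' element id, and addTweet are made-up names, not anything from a real Twitter client):

    // Keep the visible list bounded so old entries can be garbage collected.
    var MAX_TWEETS = 200; // arbitrary cap; tune for your app
    var list = document.getElementById('tweet-list'); // assumed <ul> in the page

    function addTweet(text) {
      var item = document.createElement('li');
      item.textContent = text;
      list.insertBefore(item, list.firstChild);

      // Trim the oldest entries so the DOM (and the memory behind it) stays bounded.
      while (list.children.length > MAX_TWEETS) {
        list.removeChild(list.lastChild);
      }
    }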
There are some resources available that describe best practices around handling memory in AIR. Though the examples in the article below are mostly written in ActionScript, the same concepts apply to JavaScript as well.
Performance-Tuning AIR applications
http://www.adobe.com/devnet/air/articles/air_performance.html
If there are memory leaks in the runtime, we jump on these as quickly as we can. We encourage developers to let us know about such issues by reporting them to our team using the following feedback form (www.adobe.com/go/wish).
If you are using an Ajax framework, you may want to look into whether there are known issues with memory leaks for that particular framework.
So, to summarize, yes, you should always worry about memory when building a desktop application -- whether with AIR or C++. As you develop your application, you should monitor its memory usage so that you can identify any issues sooner rather than later. One way to do this is to run longevity tests -- keep your application open overnight to see if memory is creeping up.
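One low-tech way to run such a longevity test in an AIR HTML application is to sample the runtime's memory figure on a timer and watch whether it keeps creeping up. This is only a sketch: it assumes AIRAliases.js is loaded (so air.trace is available) and reads flash.system.System.totalMemory through window.runtime.

    // Longevity-test sketch: log memory once a minute and leave the app running overnight.
    function logMemory() {
      // totalMemory is reported in bytes by the runtime.
      var bytes = window.runtime.flash.system.System.totalMemory;
      air.trace('memory: ' + Math.round(bytes / 1024) + ' KB');
    }

    setInterval(logMemory, 60 * 1000); // review the log afterwards for steady growth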
In general, the tools available for browsers are very limited as well. I expect this will change soon as browser vendors also start providing more hooks into their browsers for identifying memory usage. Hope this helps.
Thank you!
-Rob
Product Manager, Adobe AIR

Sure. I've seen AIR apps on Linux swallow gigabytes of memory over time. It's a real blocker for me and stops me from using them.
That said, other people on other platforms have no issue with it. Ultimately you need to decide what most of your market will be using and how affected they'll be by any issues in AIR (or other).
If it's not that important (but it's still an issue), submit bug reports and hope Adobe fixes things.

Is there a way to know anything about the hardware resources of the 'platform' accessing a web page?

I'd like to be able to find out about a browser's hardware resources from a web page, or at least a rough estimation.
Even when you detect the presence of modern technology (such as csstransforms3d, csstransitions, requestAnimationFrame) in a browser via a tool like Modernizr, you cannot be sure whether to activate some performance-consuming option (such as fancy 3D animation) or to avoid it.
I'm asking because I have (a lot of) experience with situations where the browser is modern (latest Chrome or Firefox supporting all the cool technologies) but the machine's CPU, GPU, and available memory are just catastrophic (32-bit Windows XP with an integrated GPU), and thus a decision based purely on detected browser capabilities is no good.
While Nickolay gave a very good and extensive explanation, I'd like to suggest one very simple, but possibly effective solution - you could try measuring how long it took for the page to load and decide whether to go with the resource-hungry features or not (Gmail does something similar - if the loading goes on for too long, a suggestion to switch to the "basic HTML" version will show up).
The idea is that, for slow computers, loading any page, regardless of content, should be, on average, much slower than on modern computers. Getting the amount of time it took to load your page should be simple, but there are a couple of things to note:
You need to experiment a bit to determine where to put the "too slow" threshold.
You need to keep in mind that slow connections can cause the page to load slower, but this will probably make a difference in a very small number of cases (using DOM ready instead of the load event can also help here).
In addition, the first time a user loads your site will probably be much slower because nothing is cached yet. One simple solution for this is to keep your result in a cookie or local storage and only take loading time into account when the user visits for the first time (the sketch below does exactly this).
Don't forget to always, no matter what detection mechanism you used and how accurate it is, allow the user to choose between the regular, resource-hungry version and the faster, "uglier" one - some people prefer better-looking effects even if it means the website will be slower, while others value speed and snappiness more.
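A rough sketch of the whole idea, assuming the older Navigation Timing API (performance.timing) and localStorage are available; the threshold and the applyMode helper are placeholders to tune and replace:

    var LOAD_THRESHOLD_MS = 4000; // the "too slow" cut-off -- determine it by experiment

    function applyMode(lite) {
      // Placeholder: toggle the heavy features off when `lite` is true.
      document.documentElement.className += lite ? ' lite' : '';
    }

    document.addEventListener('DOMContentLoaded', function () {
      var stored = localStorage.getItem('useLiteVersion');
      if (stored !== null) {
        applyMode(stored === 'true'); // returning visitor: reuse the earlier decision
        return;
      }
      // Only first-time visitors get timed, since their cache is empty.
      var elapsed = Date.now() - performance.timing.navigationStart;
      var useLite = elapsed > LOAD_THRESHOLD_MS;
      localStorage.setItem('useLiteVersion', String(useLite));
      applyMode(useLite);
    });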
In general, the available (to web pages) information about the user's system is very limited.
I remember a discussion of adding one such API to the web platform (navigator.hardwareConcurrency - the number of available cores), where the opponents of the feature explained the reasons against it, in particular:
The number of cores available to your app depends on other workload, not just on the available hardware. It's not constant, and the user might not be willing to let your app use all (or whatever fixed portion you choose) of the available hardware resources;
It helps with "fingerprinting" the client.
It is too oriented towards the specifics of today's hardware. The web is designed to work on many devices, some of which do not even exist yet.
These arguments apply just as well to other APIs for querying specific hardware resources. What specifically would you like to check to see if the user's system can afford to run a "fancy 3D animation"?
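(For what it's worth, navigator.hardwareConcurrency did eventually ship in most browsers. If you do read it, treat it only as a hint, not a guarantee -- a minimal sketch:)

    // Logical core count as a hint only; 2 is an arbitrary conservative fallback.
    var logicalCores = navigator.hardwareConcurrency || 2;
    console.log('Reported logical cores:', logicalCores);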
As a user I'd rather you didn't use additional resources (such as fancy 3D animation) if it's not necessary for the core function of your site/app. It's sad really that I have to buy a new laptop every few years just to be able to continue with my current workflow without running very slowly due to lack of HW resources.
That said, here's what you can do:
Provide a fallback link for the users who are having trouble with the "full" version of the site.
If this is important enough to you, you could first run short benchmarks to check the performance and fall back to the less resource-hungry version of the site if you suspect that a system is short on resources.
You could target specific high-end platforms by checking the OS, screen size, etc.
This article mentions this method on mobile: http://blog.scottlogic.com/2014/12/12/html5-android-optimisation.html
WebGL provides some information about the renderer via webgl.getParameter(). See this page for example: http://analyticalgraphicsinc.github.io/webglreport/
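For example, a hedged sketch of reading the version and renderer strings (the WEBGL_debug_renderer_info extension is real, but a browser may decline to expose it):

    var canvas = document.createElement('canvas');
    var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');

    if (gl) {
      console.log('Version:', gl.getParameter(gl.VERSION));
      // The unmasked vendor/renderer strings need the debug extension, which may be withheld.
      var ext = gl.getExtension('WEBGL_debug_renderer_info');
      if (ext) {
        console.log('Vendor:', gl.getParameter(ext.UNMASKED_VENDOR_WEBGL));
        console.log('Renderer:', gl.getParameter(ext.UNMASKED_RENDERER_WEBGL));
      }
    } else {
      console.log('WebGL not available');
    }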

Three js progressive enhancement for devices

Does anyone have any advice regarding progressive enhancement for Three js projects on devices?
I have an app with lots of post-processing which is fine on modern devices but a bit slow on older/cheaper phones. It would be nice to enable the post-processing layers progressively for devices that can handle them.
Is there a reasonably reliable way to measure performance so as to automate this?
Welcome to the everyday world of real-time game development on heterogeneous devices: PCs, mobiles, or cross-platform consoles...
You might consider simply keeping a list of common hardware profiles. While there are many kinds of phone, there are actually very few different GPU architectures used in almost all of them. This is the method that has been used by Chrome on Android, for example, to determine which devices work with WebGL and which don't.
I admit that this might seem crude, and that there ought to be some clean way to determine which devices will perform well with your code. But frankly it will be different for different devices, which is why companies like NVIDIA have test labs that evaluate products against a wide variety of hardware configurations, and even then they still end up generating some sort of table of performance options for a single game title, rather than building some function that can figure out performance from first principles.
A few years ago I was scoping out a system for a game company I worked at that would measure a range of performance parameters for each individual user and attempt, via regression ("machine learning"), to determine the best performance profile for THAT user on THEIR machine -- because even with a narrow range of hardware choices, you can never tell just how much other stuff is going on on the player's computer. Are they running YouTube and TeamSpeak and who knows what else in the background? The at-home situation is hard to predict, even in a large testing lab.
So an adaptive approach might be the best, but even then it is very numerically intensive and probably not a good fit (today) for JavaScript-based web apps using THREE.js. If you want to attack this problem, choose your strategies (drop compositing layers? Opt for simpler geometry? Fewer or simpler shaders? It depends on your app!), try them yourself (experience will trump speculation every time!), and then start deploying to the public. If your users permit it, ask them to store a cookie. But expect a long iterative process if you really want it to work.
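As a starting point for automating the decision, here is a hedged sketch: sample the frame rate for a few seconds, then pick a post-processing level. enablePostProcessing and the thresholds are placeholders you would replace and tune against your own scenes and target devices.

    // Measure average FPS over `durationMs`, then hand the result to a callback.
    function measureFps(durationMs, callback) {
      var frames = 0;
      var start = performance.now();

      function tick(now) {
        frames++;
        if (now - start < durationMs) {
          requestAnimationFrame(tick);
        } else {
          callback(frames / ((now - start) / 1000));
        }
      }
      requestAnimationFrame(tick);
    }

    measureFps(3000, function (fps) {
      if (fps > 50) {
        enablePostProcessing(2); // full effect stack (placeholder function)
      } else if (fps > 30) {
        enablePostProcessing(1); // reduced stack
      } else {
        enablePostProcessing(0); // plain render only
      }
    });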

Best practice: very long running polling processes in Javascript?

I have a touch screen kiosk application that I'm developing that will be deployed on the latest version of Chrome.
The application will need to make AJAX calls to a web service every 10 minutes or so to pull through any updated content.
As it's a kiosk application, the page is unlikely to be reloaded very often and theoretically, unless the kiosk is turned off, the application could run for days at a time.
I guess my concern is memory usage and whether or not a very long-running setTimeout loop would chew through a large amount of memory given sufficient time.
I'm currently considering the use of Web Workers and I'm also going to look into Web Sockets but I was wondering if anyone had any experience with this type of thing?
Cheers,
Terry
The browser has a garbage collector, so no problems there as long as you don't introduce memory leaks through bad code. Here's an article and another article about memory leak patterns. That should get you started on how to program efficiently and hunt down leaky code.
Also, you have to consider the DOM. Someone on SO once said that "things that are not on screen should be removed and not just hidden". This not only removes the element visually, it actually removes it from the DOM; detach its handlers as well, and the memory it used can be freed.
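A minimal sketch of that advice (panel and onPanelClick are illustrative names):

    // Remove the off-screen element outright instead of hiding it.
    var panel = document.getElementById('old-panel');
    if (panel) {
      panel.removeEventListener('click', onPanelClick); // detach handlers first
      panel.parentNode.removeChild(panel);              // then take it out of the DOM
      panel = null;                                     // drop the last reference so GC can reclaim it
    }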
As for setTimeout, lengthen the interval between calls. If it fires too frequently, you will chew up memory fast (and make the page quite laggy). I just tested code for timer-based "hashchange" detection, and even on Chrome it makes the page rather slow.
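For the 10-minute poll itself, a sketch that re-arms the timer only after each request finishes, so calls never pile up ('/updates' and applyUpdates are placeholders):

    var POLL_INTERVAL_MS = 10 * 60 * 1000; // every 10 minutes, per the question

    function poll() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/updates', true);
      xhr.onload = function () {
        if (xhr.status === 200) {
          applyUpdates(JSON.parse(xhr.responseText)); // placeholder for your update logic
        }
        setTimeout(poll, POLL_INTERVAL_MS); // schedule the next poll only when this one is done
      };
      xhr.onerror = function () {
        setTimeout(poll, POLL_INTERVAL_MS); // keep polling even after a failed request
      };
      xhr.send();
    }

    poll();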
Also, research known Chrome bugs and keep the browser updated.

How to do performance analysis of a heavy JavaScript web app?

I have a huge web app that's switching from an HTML-rendered-on-the-server-and-pushed-to-the-client approach to a let-the-client-decide-how-to-render-the-data-the-server-sends one, which means performance on the client mattered in the past, but it's critical now. So I'm wondering whether, in the current state of affairs, it's possible to profile web apps and extract the same data (like call stacks, "threads", event handlers, number of calls to certain functions, etc.) that we use for server-side performance work.
I know every browser implements some of these capabilities to some extent (the IE dev tools have an embedded profiler, so does Firefox [with Firebug], and Google Chrome has Speed Tracer), but I was wondering if it'd be possible to get, for example, stack traces of sessions. Is it advisable to instrument the code and have a knob to turn the instrumentation on/off? Or is it simply not that useful to go to that level when analyzing JavaScript performance?
Fireunit is decent and YUI also provides a profiler, but neither provides stack traces or call frames. Unfortunately, there aren't many JS profiling tools out right now, and none of them are particularly great.
I think it's very important to take performance analysis to that level, especially considering the user will be dealing directly with the JS app 90%+ of the time.
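If you do go the instrumentation route, here is a rough sketch of what an on/off "knob" might look like -- wrapping selected functions to count calls and accumulate time. All names are illustrative, not from any particular library.

    var PROFILING_ENABLED = true; // the "knob": flip to false for production builds
    var stats = {};

    function instrument(name, fn) {
      if (!PROFILING_ENABLED) return fn;
      stats[name] = { calls: 0, totalMs: 0 };
      return function () {
        var start = Date.now();
        try {
          return fn.apply(this, arguments);
        } finally {
          stats[name].calls++;
          stats[name].totalMs += Date.now() - start;
        }
      };
    }

    // Usage: renderList = instrument('renderList', renderList);
    // Later, dump `stats` to the console or ship it back to the server for analysis.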

Limitations and future of HTML+JavaScript for web applications

I am a non-web programmer, but I am getting now more interested in web technologies.
I know that HTML and JavaScript are today the fundamental technologies for web applications, but it also seems that they were not actually created strictly for that purpose (HTML was created for web pages, JavaScript to make them a bit dynamic). Does this have any significant negative impact on how advanced web applications are created today? What are the limitations?
Do you predict any new technology emerging in the next 5-10 years to replace HTML+JavaScript? If yes, what will it be like?
Though HTML and JavaScript may seem old, there is nothing inherently problematic about building complex applications with them. The larger "problems" web applications must deal with have to do with the nature of the world wide web: the inconsistency of network communication and the statelessness of HTTP.
In the first 10 years of the web, the differences between (and shortcomings of) the various web browsers were so vast as to confound attempts at building complex applications. Many technologies emerged for building so-called rich Internet applications which circumvented the browser entirely. These include (most notably) Java applets, Macromedia/Adobe Flash, and Microsoft Silverlight. Since they require browser plugins to be installed, they are not optimal for general-purpose web applications and, in my opinion, they will be long outlasted by HTML.
In the past five years, the lives of web developers have become significantly easier. Browsers are paying more attention to the W3C's recommendations, JavaScript is implemented consistently in all major browsers (more or less; the DOM is still frightening), and HTML5 promises many new features (font management, video/audio embedding, geolocation, asynchronous page updates, etc.) which will make pure HTML web app development even easier.
It seems unlikely that anything will emerge in the next decade that will "replace" HTML because there is nothing fundamentally wrong with it. JavaScript...very possible, but it's hard to know with what, and at the moment JavaScript is only getting stronger.
The biggest difference between web and non-web application programming is the constantly-changing platform on which web software runs. If anything is going to change dramatically in the next 10 years it is this. People are going to be accessing the web on everything in 10 years, and keeping up with the different platforms will always be a concern. This is simply the nature of web programming.
My advice: spend 2 hours per week reading about new web technologies. This will keep you in the loop so you know how to plan for what's coming. The web is pretty unpredictable. I don't think there's any such thing as an application that will last for 5 years without major changes. The best you can do is to stay informed, and react quickly.
Several technologies have already emerged that have tried to conquer this famous duo, including....
Java applets (RIP)
ActiveX (RIP)
Macromedia Shockwave (RIP)
Flash
Silverlight
If you're doing research about this on your own, the search phrase to look for is RIA - Rich Internet Application.
The problem with these new technologies is that ultimately they end up requiring some sort of platform-specific binary in order to view and interact with them. In the case of Flash and Silverlight, the developer also needs to learn an additional language to create them.
I think the fact remains that plain text wins again (except in edge cases).
On the horizon is the infamous 2022 HTML5. That should be interesting.
It is highly unlikely that anything will emerge in the next few years that will upset HTML + JavaScript as the base for RIAs, simply because of the trouble of getting everyone onto an alternate platform. If another technology were to supplant HTML + JavaScript, it would come about through one of the following (ordered from least to most likely):
Platform changes:
    Hardware (and software)
    Software [browser]
Browser updates
Introduction of a new technology within the main browser, a la IE's behaviors
Plugins for browsers
Platform changes are unlikely because they require a paradigm shift in either the "gadget" or the "browser" marketplace. Unless there is massive demand for something new that supplants everything that could be done the old way, backwards compatibility will need to be maintained.
As an example of a paradigm shift in the gadget and browser markets, consider what would happen if next year a hardware breakthrough made it possible to create completely immersive 3-D environments that turn your brain into a plug-and-play device, and the languages used to program its realspace browser were Lisp variants.
As for a paradigm shift in the browser -- this is also unlikely because it would require, for backwards compatibility, that browser manufacturers support two (or in IE's case, five) rendering engines until the new methods completely phased out the old ones. Unless the new method were vastly superior to the old one, it is doubtful that this would gain any traction.
Imagine, if you will, that IE decides that YAML is a better format for sending data over the wire and sets up a new standard for marking up data and events in YAML. Safari in the meantime figures out how to sandbox a C like language in their browser making it possible for developers to do literally anything they want to if their users will let them.
[Part II coming after work]
What you will find more and more obvious is the use of XML and JSON for data and communication between client and server applications. You hear about it somewhat in all the Ajax conversations, because that is what Ajax is. That involves JavaScript, which doesn't appear to be going away any time soon. Other languages output JS where needed, frequently to handle XML or JSON. HTML is there to drive the browser, so the next big thing that's already here is more JavaScript and XML/JSON, with HTML5 wrapped around it for the browser.
