I'm developing a game with THREE.js that will run inside a UIWebView in an iOS 8 app.
I've profiled this application in Chrome's developer tools and ensured that there are no memory leaks - that is - memory use gets up to a certain value and remains constant throughout.
Running this application in UIWebView, however, reveals that the memory use grows over time, as if no garbage collection takes place at all.
I've searched online, but cannot determine conclusively whether the iOS 8 UIWebView has garbage collection or not. Some articles seem to suggest it does, and some that it does not. If it does - how can I trigger it?
The only solution I can imagine at this point, if there is no garbage collection, is periodically killing/deallocating the UIWebView and recreating/restarting the app (the game at its menu screen).
UPDATE:
After spending a few more days looking for leaks here's what I found:
Deallocating the UIWebView doesn't work - the system never deallocates everything (even with all the suggested hacks), and the memory problems get compounded.
I still don't know whether UIWebView has mark/sweep garbage collection - the Profile/Instruments panel seems to suggest it does, but memory use rarely goes down. Common sense tells me that some garbage collection must take place, because temporary objects in code and things going out of scope do get cleaned up.
My THREE.js objects never seem to be collected - but this may be related to THREE.js's own requirement that you manually dispose of resources (in order to free GL-related handles etc.).
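For reference, this is roughly the kind of manual disposal THREE.js expects (a sketch - scene and mesh stand in for your own objects):

// Sketch: manually release GPU-side resources that THREE.js will not free for you.
// `scene` and `mesh` are placeholders for your own objects.
function disposeMesh(scene, mesh) {
    scene.remove(mesh);                                      // detach from the scene graph
    if (mesh.geometry) mesh.geometry.dispose();              // free vertex buffers
    if (mesh.material) {
        if (mesh.material.map) mesh.material.map.dispose();  // free the texture
        mesh.material.dispose();                             // free the material/shader program
    }
}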
There are mysterious leaks related to .bind(this) - for example, setTimeout(object.func.bind(object), 100) apparently never cleans up the function after the timeout fires - so I end up pre-binding and storing the bound function in a variable instead. The same goes for any event handlers passed to jQuery.
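What I ended up doing looks roughly like this (the names are illustrative):

// Leaky pattern in this environment: a fresh bound function on every call.
// setTimeout(player.update.bind(player), 100);

// Instead: bind once, keep the reference, reuse it.
function Player() {
    this.hp = 100;
    this.boundUpdate = this.update.bind(this); // pre-bound once, stored on the instance
}
Player.prototype.update = function () { /* ... */ };

var player = new Player();
setTimeout(player.boundUpdate, 100);          // reuse the stored reference
$(document).on('click', player.boundUpdate);  // jQuery receives the same function object
$(document).off('click', player.boundUpdate); // ...so it can actually be unbound later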
I ended up rewriting my two-scene game (menu scene and game scene) so that both scenes remain in memory (they are never removed). Instead of relying on GC, any objects and models I create in the game get recycled: when an object is removed from the scene, it goes into a pool of objects of the same type, to be re-initialized and re-added the next time such an object is needed. It seemed like overkill at first, but the benefits are no memory leaks (objects remain allocated) and faster creation/adding to the scene.
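The pool itself is nothing fancy - roughly this (createEnemy and enemy.init stand in for your own construction and re-initialization logic):

// Minimal object-pool sketch: nothing is released to GC, instances are re-initialized.
var enemyPool = [];

function acquireEnemy(scene, params) {
    var enemy = enemyPool.pop() || createEnemy(); // reuse if available, else allocate once
    enemy.init(params);                           // re-initialize instead of constructing
    scene.add(enemy.mesh);
    return enemy;
}

function releaseEnemy(scene, enemy) {
    scene.remove(enemy.mesh); // take it out of the scene, but keep it allocated
    enemyPool.push(enemy);    // available for the next acquire
}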
The memory grows because some resources are cached in memory as part of the HTTP caching strategy. So as long as memory does not grow indefinitely, don't worry about it.
What you are seeing is probably a bug in iOS8 where the garbage collector never frees up memory. I posted about it here: PhoneGap using way more memory in iOS8 than iOS7
If you can please file a bug with Apple so we can get this fixed in the next OS release.
Related
I'm making a rather extensive game using Javascript. It is a kind of online game maker that allows players to upload media files and use them to create worlds. Unfortunately, it is rather prone to crashing the browser tab at unpredictable moments. So far, I have found no pattern to this - sometimes it happens within a few minutes, other times it can run for hours without a problem.
I have tried enabling logging in Chrome, but the crashes don't seem to generate an error report in the chrome_debug file.
I thought it might be that the program was using too much memory: given the game's open-ended nature, some worlds involve downloading rather large data files. This seems unrelated to when the crash actually happens, though - while large worlds do seem to be more crash-prone, they do not always crash when the world's data is loaded.
I tried using Electron to turn it into an executable app, but the app still crashes. That shouldn't happen if it's a memory issue, right?
Is there any way of finding out what is causing the code to crash?
Most unpredictable crashes in Javascript are caused by memory leaks - objects that remain referenced and are therefore never picked up by the garbage collector. Every live object in Javascript is reachable from the global scope, either directly through a variable or through a chain of references from other objects that are themselves connected to the global scope. When a "branch" of that "tree" is cut off and can no longer be reached from the global scope, the garbage collector destroys it.
However, if an object is not being removed from the global scope when it should be, it remains in memory. This usually happens when objects are added to an array but are not removed from that array when they are no longer in use. Over time, these objects build up until the process crashes due to memory overload.
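A typical (simplified, illustrative) example of that pattern:

// Classic leak: objects keep getting pushed but never removed,
// so they stay reachable from the global scope forever.
var activeBullets = [];

function fireBullet() {
    activeBullets.push({ x: 0, y: 0, sprite: new Image() });
}

function removeBullet(bullet) {
    // Without this splice, the bullet stays in the array and can never be collected.
    var i = activeBullets.indexOf(bullet);
    if (i !== -1) activeBullets.splice(i, 1);
}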
To find memory leaks in Chrome, press F12 and open the Performance tab. By recording the page over time, you can view the amount of memory being used. The green line (nodes) is the most important here - it refers to the number of objects in memory. If nodes are constantly increasing over time (there will always be increases and decreases, but if the overall level is constantly rising) this generally means there's a memory leak.
To find which specific objects are causing the problem, open the Memory tab to take snapshots or timeline profiles of the memory heap. This gives you a count of the specific objects that are in memory at any given time. If there are more of some kind of object than there should be, that's where the leak is.
I am working with a quite large volume of data.
Mechanism:
JavaScript reads a WebSQL database, then assembles the data into an object with a tree structure.
knockout.js is then applied to the tree object (making its elements observable), the data is bound,
and jQuery Mobile UI is applied at the end.
The whole process takes an unacceptable amount of time.
I have already optimized the algorithm that builds the tree object out of the data,
and also optimised the conversion-to-observables mechanism by pushing items directly into the arrays behind the ko.observableArrays and calling valueHasMutated only once (see the sketch below).
I am applying knockout.js if bindings so that invisible tree nodes are not processed in the UI until their parent is opened.
Performance here is key.
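The batching optimization mentioned above looks roughly like this (a sketch; addChildren and the data shape are illustrative):

// Push into the underlying array directly, then notify subscribers once,
// instead of triggering one notification per push() on the observableArray.
var children = ko.observableArray([]);

function addChildren(items) {
    var underlying = children();          // the plain array behind the observable
    for (var i = 0; i < items.length; i++) {
        underlying.push(items[i]);        // no notification fired here
    }
    children.valueHasMutated();           // single notification for the whole batch
}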
After inspecting the page load in the Timeline of the Chrome developer tools, I have noticed that the garbage collector is running collections over and over while I am building the tree object.
Question: Is there a way to temporarily disable Chrome GC and then enable it again after I am done with page processing?
P.S. I know I could add a reference to the part that gets collected - basically introduce an object that dominates it and prevents the GC from collecting it - but this would require substantial changes throughout the code, I am not sure I could keep it alive long enough, and it is likely to introduce a memory leak. Surely there must be a better way.
No, there is no way to disable the garbage collector. There cannot be, because what is Chrome supposed to do when more memory is requested but none is available?
(Also, the garbage collector is very fine-grained and complicated; your screenshot is a bit too small to be readable, but in all likelihood what you're seeing are small steps of incremental work to keep up with allocations, and/or "minor GC" cycles that only operate on the relatively small area of the heap where new allocations happen.)
If you want to reduce time spent in GC, then the primary way to achieve that is to allocate fewer and/or smaller objects. Yes, that can mean changing your application's design so that objects are reused instead of being short-lived, or similar changes in strategy.
If you allocate a lot, you will see a lot of GC activity; there is just no way around that. This is true even in languages/runtimes that are not considered "garbage collected": in C/C++, using new/delete heavily also has a performance cost.
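As an illustration of what "reuse instead of allocate" can look like (a sketch; the function names are made up):

// Allocation-heavy: a new result object on every call, so the minor GC
// has to run constantly while you build the tree.
function midpoint(a, b) {
    return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}

// Reuse-friendly: the caller supplies a scratch object that is overwritten,
// so the hot loop allocates nothing.
function midpointInto(a, b, out) {
    out.x = (a.x + b.x) / 2;
    out.y = (a.y + b.y) / 2;
    return out;
}

var scratch = { x: 0, y: 0 };
// inside a hot loop: midpointInto(p, q, scratch);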
I'd like to know if it's possible (with any browser / dev tools) to pick a specific value or closure variable while debugging and "follow" or "watch" it somehow into future execution points on the page. Basically, a memory profiler attached to just a single value, which would show during debugging or snapshots whether that value is still being retained either directly or indirectly. Alternatively, I'd like to know if it's possible to look at references in the memory profiler/snapshot view in, say, Chrome, and tie those references to actual points in the source code.
My problem is that I am debugging a memory leak caused by rebuilding a DOM tree for a portion of a fairly complex page. Even taking a very controlled memory snapshot that just looks at a single redraw (removing the old DOM tree and adding a new one, where I know that I'm unintentionally retaining a reference to a small part of the old one), there are still hundreds of objects to look through, and to be quite honest I find the memory profiler in Chrome to be very confusing to navigate through. And even when I find references that might be of interest, I'm at a loss as to how to tie them to points in the code - it's great to know that I'm retaining an HTMLDivElement somewhere but that could be almost any of the files...
So basically, I'm unsure how to proceed, and the two solutions I'm asking about are the only things I can think of, if there is any way to do them. Sorry that this is such a vague question, I am open to other ways of tackling this as well.
For a few months now I have been developing an Android app using PhoneGap 2.8, and on the JavaScript side I have used Backbone and jQuery as my main frameworks. As my application has grown to a reasonable size, I have started to notice considerable memory consumption. Having read different articles that explain why PhoneGap requires a considerable amount of memory just to run, I still believe that I can do some optimization of how I use memory.
In Backbone we have a Router object that maps URIs to specific functions, which render something called a View object. Not only do my router functions create a view and render it, I also store a global reference to the view currently being displayed. So before a new view is created, I tell the old view to do some clean-up (this happens recursively, since views can contain "sub" views). During clean-up I currently tell the view to undelegate its events (I trust Backbone removes the event listeners); not much more is done. Once the new view is rendered, the global variable references the new view. I trust that the JavaScript GC will release the memory used by the old view. Alas, I don't think this is happening - the more I navigate around my app, the more memory is used. I am aware that some memory is leaking, but I can't figure out what it is that keeps taking memory. One thing I suspect is that old objects are not being garbage collected properly for some reason. I suspect that once I render new HTML (DOM) over some container, perhaps the old DOM is causing memory leaks, or perhaps some event handlers are being unnecessarily retained somewhere.
What I would like to know is whether there are any tools, commands or tips for debugging/tracing/measuring where memory is being allocated. Is there a way to access all event listeners and measure them somehow (same for the DOM)? Any article on smart, memory-efficient techniques would also be appreciated. Currently the only thing I can think of doing is recursively deleting all attributes of the objects (and eventually the objects themselves) that I am willing to destroy.
Any suggestion is very welcome!
Thank you in advance.
I faced similar issues with my first PhoneGap app. A few techniques we managed to apply were:
*old view = the view being navigated away from (see the sketch after this list)
Unbind all events associated with the old view
Remove all nodes attached to the view from the DOM, to make sure their events are also removed
Remove the old view object and its model/collection, so that no instances remain attached to the DOM
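In Backbone terms, that tear-down looks roughly like this (a sketch; currentView is whatever global reference you keep to the displayed view):

// Sketch of tearing down the old view before rendering the new one.
function cleanUpView(view) {
    view.undelegateEvents();  // unbind DOM events delegated by Backbone
    view.stopListening();     // unbind model/collection events bound via listenTo
    view.$el.off();           // remove handlers attached directly with jQuery
    view.remove();            // detach the element from the DOM
    view.model = null;        // drop references so nothing keeps the view alive
    view.collection = null;
}

// in the router, before rendering the next view:
if (currentView) cleanUpView(currentView);
currentView = newView;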
Moreover, try to use prototyping as much as possible, as functions created on the prototype occupy space in RAM only once. So if the view is created/initialized again, its associated/child functions aren't pushed into RAM again.
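For instance (illustrative):

// Methods on the prototype exist once and are shared by every instance;
// methods assigned inside the constructor are re-created per instance.
function ItemView(model) {
    this.model = model;
    // this.render = function () { ... };  // avoid: one copy per instance
}
ItemView.prototype.render = function () {
    // one shared copy, no matter how many views are created
};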
Most importantly, make sure the 'this' pointer isn't leaking anywhere between files. One of our apps at work used to get stuck after 1.5 hours of play, and after a week of debugging we found that a 'this' reference had leaked between two files/objects/views, which created a circular reference and made the DOM node count explode.
Also, you can try Google Chrome's profiling tools.
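A crude in-page sanity check while you navigate between views (performance.memory is Chrome-specific and non-standard, so it may not exist in your WebView):

// Rough leak watch: log the DOM node count and (where available) the JS heap size
// every few seconds while navigating around the app.
setInterval(function () {
    var nodeCount = document.getElementsByTagName('*').length;
    var heap = (window.performance && performance.memory)
        ? Math.round(performance.memory.usedJSHeapSize / 1048576) + ' MB'
        : 'n/a';
    console.log('DOM nodes:', nodeCount, 'JS heap:', heap);
}, 5000);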
A few useful links:
http://blog.socialcast.com/javascript-memory-management/
Backbone.js Memory Management, Rising DOM Node Count
I suspect that if I set div.innerHTML = "" instead of using while (div.firstChild) div.removeChild(div.firstChild), the memory will be hogged until the page refreshes or the browser closes.
My question is: how do we actually go about testing whether my hypothesis is true?
Firstly, whether memory is returned to the system or not depends on the Javascript garbage collector (gc) and therefore results will vary from browser to browser.
It's difficult to measure memory usage by looking at the process as there are several layers of memory management in place. To see how this can have an impact, consider that a huge javascript object might have been erased forever, but that memory might still not have yet been released back to the operating system because the web browser might keep hold of it in case you need to create more big objects. Another example is that most gc routines only run periodically, so it's possible that object is still in memory but will be reclaimed later.
However, it's pretty easy to determine if a particular operation is leaking memory as all you have to do is repeat it in an endless loop. e.g.
remove references to existing html elements
construct new html elements
add them to the page
Try this code:
var div = document.getElementById("test");

while (true) {
    // remove operation, change me
    while (div.firstChild) {
        div.removeChild(div.firstChild);
    }

    // create some new content
    for (var i = 0; i < 1000; i++) {
        var p = document.createElement('p');
        p.appendChild(document.createTextNode('text'));
        div.appendChild(p);
    }
}
My results in Chrome, using the Task Manager (shift+esc):
Leaving the loop running endlessly without deleting anything eventually results in the "Aw, Snap!" screen, which indicates memory has been exhausted
Using the removeChild technique leads to memory usage stabilizing at around 750MB
Using the innerHTML technique leads to memory usage stabilizing at around 750MB
If memory isn't leaking you'll notice a pattern similar to this: increases to 300, drops to 150, increases to 400, drops to 250, eventually stabilising. This is the memory management system triggering the gc whenever it runs low, reclaiming memory that is no longer reachable and increasing the available memory footprint each time, until it reaches a soft limit that has been set to avoid the process impacting others. This is a typical memory management scheme and more can be found by reading this wikipedia article: http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)
Since both results stabilize, I'd conclude that (for Chrome at least) both techniques work the same though you might get different results from other browsers.
As there doesn't seem to be a difference, removeChild should be preferred as innerHTML is not included in the W3C standard and is frowned upon by some developers. See more here: Alternative for innerHTML?
The reason innerHTML and removeChild show no difference is that, underneath, they both have to de-reference the elements to stop them being visible on the screen. Memory leaks are most likely to occur when you have circular references (A points to B, B points to A, nobody else points to either), but this is only a problem in older browsers. This link has some good guidelines on how to avoid js memory leaks: Javascript memory management pitfalls?
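The classic older-browser case (notably old IE) looks like this, purely as an illustration:

// DOM <-> JS circular reference: the element references the handler,
// and the handler's closure references the element.
function attachLeaky() {
    var div = document.createElement('div');
    div.onclick = function () {
        console.log(div.id); // closure keeps `div` alive, `div` keeps the closure alive
    };
    document.body.appendChild(div);
}

// A common fix was to break the cycle when done with the element:
// div.onclick = null;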