I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approx 40KB of JSON from the server and redraws a table on the page using it. It is cheaper to redraw the table because the data is almost always new. I attach a few events to the table, approx 5 per row, with 30 rows in the table. I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements being overwritten. I then also delete the large HTML string variables once they are sent to the DOM, using delete my_var.
I have checked a few times for circular references and for attached events that are never cleared, but I have never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet.
To give an idea of how much memory this is using: after ~4 hours, it is using about 420,000K in Chrome, and much more in Firefox or IE.
Thanks!
I'd suggest writing a test version of your script without events. DOM / JS circular references might be very hard to spot. By eliminating some variables from the equation, you might be able to narrow down your search a bit.
I have experienced the same thing. I had a piece of code that polled every 10 seconds and retrieved a count of the current user's errors (data entry/auditing), a simple integer. This was then used to replace the text inside a div (so the user knew when new errors were found in their work). If left overnight, the browser would end up using well over 1GB of memory!
The closest I came to solving the issue was reducing the polling to every 2 minutes instead and insisting the user close the browser down at the end of the day. A better solution still would have been to use Ajax Push Engine to push data to the page ONLY when an error was created. This would have resulted in data being sent less frequently and thus less memory being used.
Are you saving anything to an object / array? I've had this happen before with a Chrome plugin where an array just kept getting larger and larger. This sounds like it might be your problem, especially considering you're fetching 40KB each time.
A snippet would be great. It seems as if you are creating new variables each time and the old ones aren't going out of scope, and are therefore not being garbage collected.
Also, try to encapsulate more of your JS using constructors and instances of objects, etc. When the JS is just a list of functions and all the variables have global scope, rather than being properties of an instance, your JS can take up a lot of memory.
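For illustration, a minimal sketch of that kind of encapsulation (all names, including the buildRowsHtml helper, are hypothetical):

    // Before: loose globals that live for the lifetime of the page
    // var tableHtml, lastResponse, rowCount, ...

    // After: one instance owns its state and can be dropped as a whole
    function TableWidget(container) {
        this.container = container; // the DOM node this widget renders into
        this.lastResponse = null;   // the most recent JSON payload
    }

    TableWidget.prototype.render = function (data) {
        this.lastResponse = data;
        this.container.innerHTML = buildRowsHtml(data); // hypothetical helper
    };

    var widget = new TableWidget(document.getElementById('results'));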
Why not draw the table and attach events only once, and just replace the table data every 15 seconds?
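A rough sketch of that approach, assuming jQuery 1.7+ for .on() and made-up #data-table markup (buildRows is a hypothetical helper):

    // Attach ONE delegated handler once, instead of ~150 handlers per redraw
    $('#data-table').on('click', 'td.action', function () {
        var id = $(this).closest('tr').data('id');
        // handle the click for that row's id...
    });

    // Every 15 seconds, only swap the row markup; the handler above survives
    function refresh(json) {
        $('#data-table tbody').html(buildRows(json));
    }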
1) jQuery's Ajax wrapper, called recurrently, can lead to memory leaks, and the community is aware of that (although the issue with the Ajax wrapper is not by far as ugly as your case).
2) When it comes to optimization, you've done the first steps (using lightweight JSON calls and the delete method), but the problem is in the "event attaching" area and the html() method.
What I mean is that:
1) you are probably reattaching the listeners after each html() call
2) you re-draw the whole table on each Ajax call.
This indeed leads to memory leaks.
You have to:
1) draw the table (with the initial content) on the server side
2) on $(document).ready, attach listeners to the table's cells
3) call the JSON service with Ajax and parse the response
4) refill the table with the parsed array data (see the sketch after these steps)
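Roughly, the flow described in these steps might look like this (the URL and the cell layout are made up for illustration):

    $(document).ready(function () {
        // 2) listeners attached once, delegated so refills don't lose them
        $('#report tbody').on('click', 'td', function () { /* ... */ });

        setInterval(function () {
            // 3) call the JSON service and parse the response
            $.getJSON('/status.json', function (rows) {
                // 4) refill the existing cells instead of rebuilding the table
                $('#report tbody tr').each(function (i) {
                    if (!rows[i]) { return; }
                    $(this).children('td').eq(0).text(rows[i].name);
                    $(this).children('td').eq(1).text(rows[i].value);
                });
            });
        }, 15000);
    });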
Tell us what you've achieved meanwhile :)
I was having a similar problem just this week. It turned out I had a circular reference in my database. I had an inventory item ABC123 flagged as being replaced by XYZ321, and I also had XYZ321 flagged as being replaced by ABC123. Sometimes the circular reference is not in the PHP code.
Related
For a few months now I have been developing an Android app using PhoneGap 2.8, and on the JavaScript side I have used Backbone and jQuery as my main frameworks. As my application has grown to a reasonable size, I have started to notice considerable memory consumption. Having read different articles that explain why PhoneGap requires a considerable amount of memory even to run, I still believe that I can do some optimization to how I use memory.
In Backbone we have a Router object that maps URIs to specific functions, which render something called a View object. Not only do my router functions create a view and render it, but I also store a global reference to the view currently being displayed. So before a new view is created, I tell the old view to do some cleanup (this is done recursively, since views can contain more "sub" views). Under cleanup I currently tell the view to undelegate its events (I trust Backbone removes the event listeners). Not much more is done currently. Once the new view is rendered, the global variable references the new view. I trust that the JavaScript GC will release the memory used by the old view. Alas, I don't think this is happening: the more I navigate around my app, the more memory is used up. I am aware that there is some memory leaking going on, but I can't figure out what it is that takes the memory. One thing I suspect is that old objects are not being garbage collected properly for some reason. I suspect that once I render new HTML (DOM) over some container, perhaps the old DOM is causing memory leaks, or perhaps some event handlers are being unnecessarily stored somewhere.
What I would like to know is whether there are any tools, commands, or tips on how I can debug/trace/measure where memory is being allocated. Is there a way to access all event listeners and measure them somehow (same for the DOM)? Any article on smart, memory-efficient techniques would also be appreciated. Currently the only thing I can think of doing is to start recursively deleting all attributes of the objects (and, in the end, the objects as well) I am willing to destroy.
Any suggestion is very welcome!
Thank you in advance.
I faced similar issues with my first PhoneGap app. A few techniques we managed to apply were:
For the old view (the view being navigated away from):
- Unbind all events associated with the old view
- Remove all nodes attached to the view from the DOM, to make sure events are also removed
- Remove the old view object and its model/collection, so that no instances remain referenced from the DOM
A sketch of such a cleanup routine follows.
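As a sketch, a cleanup helper along those lines might look like this (subViews is an assumed convention, not a Backbone built-in):

    Backbone.View.prototype.close = function () {
        // recursively close sub-views first (assumed convention)
        if (this.subViews) {
            _.each(this.subViews, function (view) { view.close(); });
        }
        this.undelegateEvents();       // unbind the DOM events Backbone delegated
        this.$el.removeData().off();   // strip jQuery data and handlers
        this.remove();                 // take the element out of the DOM
        if (this.model) { this.model.off(null, null, this); } // unbind model events
    };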
Moreover, try to use prototyping as much as possible, as functions created via the prototype occupy space in RAM only once. So if the view is created/initialized again, its associated/child functions aren't going to be pushed into RAM again.
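For example (a trivial sketch):

    function GoodView() {}
    // created once, shared by every instance
    GoodView.prototype.formatRow = function (row) { /* ... */ };

    function WastefulView() {
        // a fresh closure per instance: each re-creation costs RAM again
        this.formatRow = function (row) { /* ... */ };
    }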
Most importantly, make sure the 'this' pointer isn't leaking anywhere between files. One of my workplace's apps used to get stuck after 1.5 hrs of use, and after a week of debugging we found out that there was a leaked 'this' pointer between 2 files/objects/views, which created a circular reference and made the DOM explode.
Also, you can try Google Chrome's profiling tools.
A few useful links:
http://blog.socialcast.com/javascript-memory-management/
Backbone.js Memory Management, Rising DOM Node Count
I am currently thinking about an architecture for one-page mobile web apps that lets them work offline with a lot of data.
My concern is that loading and keeping all the data in objects wastes too much memory. I am thinking about older Android phones, iPhones, etc.
Would it be a good idea to start with variables initialized with JSON strings of the data-model objects, which I load/parse into an object only when I need them?
I could free the loaded object as soon as the use of the model changes (of course, only if it's unlikely that the object is needed in the near future).
Or are those string variables held in memory anyway, so I don't save memory?
What is the difference in memory consumption between a JavaScript object and its (stringified JSON) string?
UPDATE:
I found the answer to the question in this article about JavaScript object size.
Comparing a JSON string and its corresponding loaded object shows that the string is smaller. That's what I expected.
Would it be better to retrieve the strings via Ajax and put them into localStorage? After the anonymous Ajax callback finishes, the GC could do its job...
Is this even the right direction? What is the best approach to keeping data like that?
I know all this is very vague, so any help is highly appreciated!
localStorage is stored on the actual disk, so reading the data will never be as quick as reading it from an object. localStorage is good for offline use. If a large piece of data doesn't need to be available offline and isn't read too often, just storing it in a hidden element will be better.
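To illustrate the trade-off (the key names are hypothetical):

    // Disk-backed: cheap on RAM, but each read means I/O plus a parse
    localStorage.setItem('model:42', jsonString);

    function loadModel(id) {
        var raw = localStorage.getItem('model:' + id);
        return raw ? JSON.parse(raw) : null; // parsed copy can be dropped after use
    }

    // In-memory: fastest repeated reads, but the object stays resident
    var cache = {};
    cache['model:42'] = JSON.parse(jsonString);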
I often use data-attributes to store configuration that I can't semantically mark up, so that the JS will behave in a certain way for those elements. Now this is fine for pages where the server renders them (dutifully filling out the data-attributes).
However, I've seen examples where the JavaScript writes data-attributes to save bits of data it may need later. For example, posting some data to the server: if it fails to send, the data is stored in a data-attribute and a retry button is provided. When the retry button is clicked, it finds the appropriate data-attribute and tries again.
To me this feels dirty and expensive as I have to delve into the DOM to then dig this bit of data out, but it's also very easy for me to do.
I can see 2 alternative approaches:
One would be to take advantage of the scoping of an anonymous JavaScript function to keep a handle on the original bit of data, although this may not be possible and could perhaps lead to too much "magic".
Two, keep an object lying around that keeps track of these things. Instead of asking the DOM for the contents of a certain data-attribute, I just query my object.
I guess my assumptions are that the DOM should not be used to store arbitrary bits of state, and that instead we should use simpler objects with a single purpose. On top of that, I assume that accessing the DOM is more expensive than using a simpler but specific object to keep track of things.
What do other people think with regards to, performance, clarity and ease of execution?
Your assumptions are very good! Although it's allowed and perfectly valid, it's not good practice to store data in the DOM. Sure, it's fine if you only have one input field, but as the application grows you end up with a jumbled mess of data everywhere... and as you mentioned, the DOM is SLOW.
The bigger the app, the more essential it is to separate your concerns:
DOM Events -> trigger JS functions -> access Data (JS object, JS API, or AJAX API) -> process results (API call or DOM Change)
I'm a big fan of creating an API to access JS data, so you can also trigger new events upon add, delete, get, change.
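As a sketch, such an API might look like this (all names are made up):

    var ticketStore = (function () {
        var data = {};      // the single place state lives, instead of the DOM
        var listeners = {}; // event name -> array of callbacks

        function trigger(event, payload) {
            (listeners[event] || []).forEach(function (fn) { fn(payload); });
        }

        return {
            on: function (event, fn) {
                (listeners[event] = listeners[event] || []).push(fn);
            },
            set: function (key, value) {
                data[key] = value;
                trigger('change', { key: key, value: value });
            },
            get: function (key) { return data[key]; }
        };
    })();

    // DOM event -> JS function -> data access, as in the flow above
    ticketStore.on('change', function (e) { /* update the DOM here */ });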
I've been getting more and more into high-level application development with JavaScript/jQuery. I've been trying to learn more about the JavaScript language and dive into some of its more advanced features. I was just reading an article on memory leaks when I read this section of the article:
JavaScript is a garbage collected language, meaning that memory is allocated to objects upon their creation and reclaimed by the browser when there are no more references to them. While there is nothing wrong with JavaScript's garbage collection mechanism, it is at odds with the way some browsers handle the allocation and recovery of memory for DOM objects.
This got me thinking about some of my coding habits. For some time now I have been very focused on minimizing the number of requests I send to the server, which I feel is just good practice. But I'm wondering if sometimes I take it too far. I am largely unaware of the efficiency issues/bottlenecks that come with the JavaScript language.
Example
I recently built an impound management application for a towing company. I used the jQuery UI dialog widget and populated a datagrid with specific ticket data. Now, this sounds very simple on the surface... but there is a LOT of data being passed around here.
(and now for the question... drumroll please...)
I'm wondering what the pros/cons are for each of the following options.
1) Make only one request for a given ticket and store it permanently in the DOM, simply showing/hiding the modal window. This means only one request is sent out per ticket.
2) Make a request every time a ticket is opened and destroy it when it's closed.
My natural inclination was to store the tickets in the DOM, but I'm concerned that this will eventually start to hog a ton of memory if the application goes a long time without being reset (which it will be).
I'm really just looking for pros/cons for both of those two options (or something neat I haven't even heard of =P).
The solution here depends on the specifics of your problem, as the 'right' answer will vary based on length of time the page is left open, size of DOM elements, and request latency. Here are a few more things to consider:
Keep only the newest n items in the cache. This works well if you are only likely to redisplay items within a short period of time (see the sketch after this list).
Store the data for each element instead of the DOM element, and reconstruct the DOM on each display.
Use HTML5 Storage to store the data instead of DOM or variable storage. This has the added advantage that data can be stored across page requests.
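For instance, the "newest n items" idea might be sketched like this (the endpoint and cache size are assumptions):

    var cache = { keys: [], items: {}, max: 20 };

    function getTicket(id, callback) {
        if (cache.items[id]) { return callback(cache.items[id]); }
        $.getJSON('/tickets/' + id, function (ticket) { // hypothetical endpoint
            cache.keys.push(id);
            cache.items[id] = ticket;
            if (cache.keys.length > cache.max) {
                delete cache.items[cache.keys.shift()]; // evict the oldest entry
            }
            callback(ticket);
        });
    }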
Any caching strategy will need to consider when to invalidate the cache and re-request updated data. Depending on your strategy, you will need to handle conflicts that result from multiple editors.
The best way is to get started using the simplest method, and add complexity to improve speed only where necessary.
The third path would be to store the data associated with a ticket in JS, and create and destroy DOM nodes as the modal window is summoned/dismissed (jQuery templates might be a natural solution here.)
That said, the primary reason you avoid network traffic seems to be user experience (the network is slower than RAM, always). But that experience might not actually be degraded by making a request every time, if it's something the user intuits involves loading data.
I would say number 2 would be best, because that way, if the ticket changes after you open it, that change will appear the second time the ticket is opened.
One important factor is the number of redraws/reflows that are triggered by DOM manipulation. It's much more efficient to build up your content changes and insert them in one go than to do it incrementally, since each increment causes a redraw/reflow.
See: http://www.youtube.com/watch?v=AKZ2fj8155I to better understand this.
I take a fat JSON array from the server via an AJAX call, then process it and render HTML with JavaScript. What I want is to make it as fast as humanly possible.
Chrome leads over FF in my tests but it can still take 5-8 seconds for the browser to render ~300 records.
I considered lazy-loading such as that implemented in Google Reader but that goes against my other use cases, such as being able to get instantaneous search results (simple search being done on the client side over all the records we got in the JSON array) and multiple filters.
One thing I have noticed is that both FF and Chrome do not render anything until they loop over all items in the JSON array, even though I explicitly insert the newly created elements into the DOM after every loop iteration (as soon as I have the HTML). What I'd like to achieve would be just that: force the browser to render as soon as it can.
I tried deferring the calls (every item from the array would be processed by a deferred function) but ran into additional issues there, as it seems that the order of execution isn't guaranteed anymore (some items further down the array would be processed before items earlier in it).
I'm looking for any hints and tips here.
try:
push rows into an array, then simply:

    el.innerHTML = array.join('');

or use document fragments, so the live DOM is only touched once:

    var frag = document.createDocumentFragment();
    for (var i = 0; i < array.length; i++) {
        var row = document.createElement('tr');
        row.innerHTML = array[i]; // each entry holds one row's cell markup
        frag.appendChild(row);
    }
    parent.appendChild(frag);
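If you also want the browser to paint before the whole array is processed, one ordered-chunking sketch (the chunk size and record shape are assumptions, and the container is assumed to be a <tbody> that accepts row markup):

    function renderInChunks(items, container, chunkSize) {
        var i = 0;
        (function next() {
            var html = [];
            for (var end = Math.min(i + chunkSize, items.length); i < end; i++) {
                html.push('<tr><td>' + items[i].name + '</td></tr>');
            }
            container.insertAdjacentHTML('beforeend', html.join(''));
            // schedule the next chunk only after this one, so order is preserved
            if (i < items.length) { setTimeout(next, 0); }
        })();
    }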
If you don't need to display all 300 records at once, you could paginate them 30 or 50 records at a time and only unroll the JSON array as those sub-parts are required to be displayed through a pager or a local search box. Once converted, you could cache the content for subsequent display as users navigate up and down the pages.
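A sketch of that slicing (the page size and record shape are arbitrary):

    var PAGE_SIZE = 50;
    var rendered = {}; // cache of already-built pages

    function showPage(records, page, container) {
        if (!rendered[page]) {
            var slice = records.slice(page * PAGE_SIZE, (page + 1) * PAGE_SIZE);
            rendered[page] = slice.map(function (r) {
                return '<tr><td>' + r.name + '</td></tr>';
            }).join('');
        }
        container.innerHTML = rendered[page];
    }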
Try creating the elements in a detached DOM node or a document fragment, then attaching the whole thing in one go.
300 isn't a lot.
I managed to create a tree of over 500 elements with data from JSON using jQuery, in a fraction of a second on Chrome.
300 isn't a big number.
If they are rendered so slowly, it might be due to a wrong way of doing it. Can you specify how you do it?
The slowest way would be to write HTML into a string in JavaScript, then assign it with the innerHTML property. But even that would still be fast as hell for 300 rows.
Google Web Toolkit has BulkTableRenderers that are designed to render large tables quickly. Even if you choose not to use GWT, you might be able to pick up some techniques by looking through the source code which are available under Apache License, Version 2.0.