In AngularJS, I keep a lot of data in my services so that server calls can be reduced and the data can be picked up from local variables. This leads to data persisting in the application until the user refreshes the screen or closes the tab. That is a hazard to both security and performance, since a lot of data is kept in browser memory.
I want to clean up all the services on a certain event (like logout) so that the cached data is removed from browser memory.
Note: I do not want to refresh the screen (window.location.reload), and I already have a lot of services in my application, so I need a solution with minimum effort.
Unfortunately, JavaScript doesn't provide any way to manually manage processes such as garbage collection.
And since you are in an AngularJS app and don't want to reload the page, browser reloading can't help you either.
There is one thing you can do to get better memory management: reset variables to primitive values (or null) when you don't need them anymore.
Keep in mind that JavaScript reclaims memory automatically when a value loses all its references; for example, variables defined in a function's scope are destroyed when the function call ends.
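Building on that, here is a minimal sketch of a low-effort cleanup pattern: broadcast a single event on logout and let each service null out its own cache. The module, service, and event names below are hypothetical, not from your app.

```javascript
// Hypothetical service: listens for a logout broadcast and drops its cache.
angular.module('app').factory('userDataService', function ($rootScope) {
  var cache = { profile: null, orders: null };

  $rootScope.$on('app:logout', function () {
    // Dropping the references lets the garbage collector reclaim the memory.
    cache.profile = null;
    cache.orders = null;
  });

  return {
    setProfile: function (p) { cache.profile = p; },
    getProfile: function () { return cache.profile; }
  };
});

// Wherever logout happens, fire the event once instead of touching every
// service by hand:
//   $rootScope.$broadcast('app:logout');
```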
There's one thing about Flux (or at least about its implementations) that I don't quite understand.
It's about internal data management of Stores. I'll try to explain my question by breaking it down into points.
Let's imagine I have some app with client-side routing.
As I understood Stores are singletons. Somewhere they store some data (e.g. an array)
The user clicks somewhere and navigates to some part of the app. The corresponding Store fetches some data.
Let's imagine that it's a really big amount of data. So big that it takes a lot of resources and even makes page laggy.
After a while user navigates to a different route. What happens with the internal data of the Store mentioned above?
As far as I understand, it remains intact, at least until the user navigates back to the original route and the Store changes its state.
And until that happens, the Store holds a big amount of data even when it's not needed.
Can someone clear this out for me? Thanks.
Stores just listen to the dispatcher (actions) and react to them somehow. In the example you mention, yes, the Store will hold the data (at least unless you use a WeakMap or something similar), but you can also listen for route-change actions and clear your data when they fire.
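For illustration, a hedged sketch of that idea with a generic Flux-style store; the action names and dispatcher registration are hypothetical, not from any specific Flux implementation.

```javascript
// Store holding a potentially huge payload, cleared on route change.
var BigDataStore = {
  items: [],

  handleAction: function (action) {
    switch (action.type) {
      case 'ITEMS_LOADED':
        this.items = action.payload; // the big array
        break;
      case 'ROUTE_CHANGED':
        if (action.route !== '/items') {
          this.items = []; // drop the reference so GC can reclaim it
        }
        break;
    }
  }
};

// dispatcher.register(BigDataStore.handleAction.bind(BigDataStore));
```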
When a cached value expires, or a new cache has to be generated for any reason, and we have huge traffic while no cache exists, there will be a heavy load on MongoDB and response time increases significantly. This is typically called the "dog-pile effect". Everything works well once the cache is created.
I know that it's a very common problem which applies to all web applications using a database & cache system.
What should one do to avoid the dog-pile effect on a Node.js & MongoDB & Redis stack? What are the best practices and common mistakes?
One fairly proven way to keep the dogs from piling up is to keep a "lock" (e.g. in Redis) that prevents the cache populating logic from firing up more than once. The first time that the fetcher is called (for a given piece of content), the lock is acquired (for it) and set to expire (e.g. with SET ... NX EX 60). Any subsequent invocation of the fetcher for that content will fail on getting the lock thus only one dog gets to the pile.
The other thing you may want to put in place is some kind of rate limiting on the fetcher, regardless of the content. That's also quite easily doable with Redis - feel free to look it up or ask another question :)
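To make the lock idea concrete, here is a rough sketch using the node-redis client; the key names, TTLs, and retry delay are illustrative assumptions, not recommendations.

```javascript
const redis = require('redis');
const client = redis.createClient();
// await client.connect(); // required once with node-redis v4+

async function fetchWithLock(key, fetchFromMongo) {
  const cached = await client.get(key);
  if (cached !== null) return JSON.parse(cached);

  // Try to acquire the regeneration lock: NX = only if absent, EX = TTL.
  const gotLock = await client.set('lock:' + key, '1', { NX: true, EX: 60 });
  if (gotLock) {
    const fresh = await fetchFromMongo();       // only one caller hits MongoDB
    await client.set(key, JSON.stringify(fresh), { EX: 300 });
    await client.del('lock:' + key);
    return fresh;
  }

  // Someone else holds the lock: wait briefly, then retry (or serve stale).
  await new Promise(function (resolve) { setTimeout(resolve, 100); });
  return fetchWithLock(key, fetchFromMongo);
}
```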
I'd just serve the expired content until the new content is done caching, so that the database won't get stampeded.
I have a Node server for loading certain scripts that can be written by anyone. I understand that when I fire up my Node server, modules load for the first time in the global scope. When someone requests a page, it gets handled by the "start server" callback, and I can use all the already-loaded modules per request. But I haven't encountered a script where global variables get changed at request time and affect every other request in the process (maybe there is one).
My question is: how safe is it, in terms of server crashing, to alter global data? Also, suppose that I have written a proper locking mechanism that will "pause" the server for all requests for a very short amount of time until the proper data is loaded.
Node.js is single threaded. So it's impossible for two separate requests to alter a global variable simultaneously. So in theory, it's safe.
However, if you're doing stuff like keeping user A's data temporarily in a variable, and then using that variable when user A later submits another request, be aware that user B may make a request in between, potentially altering user A's data.
For such cases, keeping global values in arrays or objects keyed per user is one way of separating user data, as sketched below. Another strategy is to use a closure, which is a common practice in callback-intensive or event/promise-oriented libraries such as socket.io.
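A minimal sketch of that per-user separation; the handler names and the user-id key are hypothetical.

```javascript
// Module-level state, shared by every request in this single process.
var pendingByUser = {};

function handleFirstRequest(userId, data) {
  // User B's request lands in its own slot and can't clobber user A's.
  pendingByUser[userId] = data;
}

function handleSecondRequest(userId) {
  var data = pendingByUser[userId];
  delete pendingByUser[userId]; // free the slot once consumed
  return data;
}
```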
When it comes to multithreading or multiprocessing, a message-passing style API like Node's built-in cluster module has the same guarantee of not clobbering globals, since each process has its own globals. There are several multithreading modules implemented similarly - one Node instance per thread. However, shared-memory style APIs can't make such guarantees, since each thread is then a real OS thread which may preempt the others and clobber their memory. So if you ever decide to try out one of the multithreading modules, be aware of this issue.
It is possible to implement fake shared memory using message passing though - sort of like how we do it with ajax or socket.io. So I'd personally avoid shared memory style multithreading unless I really, really need to cooperatively work on a very large dataset that would bog down message passing architectures.
Then again, remember: the web is a giant message-passing architecture, with the messages being HTML, XML, and JSON. So message passing scales to Google size.
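As a small illustration of the cluster point above (using the pre-v16 isMaster API): each process keeps its own globals and shares state only through explicit messages.

```javascript
const cluster = require('cluster');

if (cluster.isMaster) {
  const worker = cluster.fork();
  worker.on('message', function (msg) {
    console.log('from worker:', msg); // { counter: 2 }
  });
  worker.send({ counter: 1 });
} else {
  process.on('message', function (msg) {
    // Mutating msg here cannot clobber the master's copy:
    // it was serialized across the process boundary.
    process.send({ counter: msg.counter + 1 });
  });
}
```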
For a few months now I have been developing an Android app using PhoneGap 2.8, and on the JavaScript side I have used Backbone and jQuery as my main frameworks. As my application has grown to a reasonable size, I have started to notice considerable memory consumption. Having read different articles that explain why PhoneGap requires a considerable amount of memory even to run, I still believe I can do some optimization of how I use memory.
In Backbone we have a Router object that maps URIs to specific functions, which render something called a View object. Not only did I implement my router functions to create a view and render it, but I also store a global reference to the view currently being displayed. So before a new view is created, I tell the old view to do some cleanup (this is done recursively, since views can contain more "sub" views). During cleanup, I currently tell the view to undelegate its events (I trust Backbone removes the event listeners). Not much more is done currently. Once the new view is rendered, the global variable references the new view. I trust that the JavaScript GC will release the memory used by the old view. Alas, I don't think this is happening: the more I navigate around my app, the more memory is used up. I am aware that there is some memory leaking going on, but I can't figure out what it is that takes the memory. One thing I suspect is that old objects are not being garbage collected properly for some reason. I suspect that once I render new HTML (DOM) over some container, perhaps the old DOM is causing memory leaks, or perhaps some event handlers are being unnecessarily retained somewhere.
What I would like to know is whether there are any tools, commands, or tips for how I can debug/trace/measure where memory is being allocated. Is there a way to access all event listeners and measure them somehow (same for the DOM)? Any article on smart, memory-efficient techniques would also be appreciated. Currently the only thing I can think of doing is to start recursively deleting all attributes of the objects (and in the end the objects as well) I intend to destroy.
Any suggestion is very welcome!
Thank you in advance.
I faced similar issues with my first PhoneGap app. A few techniques we managed to apply were (a sketch of the cleanup follows the list):
(*old view: the view being navigated away from)
Unbind all events associated with the old view
Remove all nodes attached to the view from the DOM, to make sure their events are also removed
Remove the old view object and its model/collection, so that no instances remain referenced
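A hedged sketch of those three steps as a Backbone view `close` helper; the `close`/`onClose` names are a common community convention, not part of Backbone's API.

```javascript
Backbone.View.prototype.close = function () {
  // 1. Unbind all events associated with the old view.
  this.undelegateEvents(); // DOM events declared in the `events` hash
  this.stopListening();    // model/collection listeners bound via listenTo

  // 2. Remove the view's nodes from the DOM, which also drops
  //    the jQuery handlers bound to them.
  this.remove();

  // 3. Give sub-views a chance to clean up, then let callers drop
  //    their reference so the object can be garbage collected.
  if (this.onClose) this.onClose();
};
```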
Moreover, try to use prototyping as much as possible, since functions created via the prototype occupy space in RAM only once. So if the view is created/initialized again, its associated/child functions aren't pushed into RAM again.
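For illustration (with a hypothetical view name), the difference looks like this:

```javascript
function TicketView(model) {
  this.model = model; // per-instance data
  // Avoid: this.render = function () { ... } - that allocates a new
  // function object for every single instance.
}

// Shared by all instances: allocated once, no matter how many views exist.
TicketView.prototype.render = function () {
  return '<li>' + this.model.name + '</li>';
};
```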
Most importantly, make sure the 'this' pointer isn't leaking anywhere between files. One of our apps at work used to get stuck after 1.5 hrs of use, and after a week of debugging we found a leaked 'this' pointer between two files/objects/views, which created a circular reference and made the DOM node count explode.
Also, you can try Google Chrome's profiling tool.
A few useful links:
http://blog.socialcast.com/javascript-memory-management/
Backbone.js Memory Management, Rising DOM Node Count
I've been getting more and more into high-level application development with JavaScript/jQuery. I've been trying to learn more about the JavaScript language and dive into some of its more advanced features. I was just reading an article on memory leaks when I read this section of the article:
JavaScript is a garbage collected language, meaning that memory is allocated to objects upon their creation and reclaimed by the browser when there are no more references to them. While there is nothing wrong with JavaScript's garbage collection mechanism, it is at odds with the way some browsers handle the allocation and recovery of memory for DOM objects.
This got me thinking about some of my coding habits. For some time now I have been very focused on minimizing the number of requests I send to the server, which I feel is just good practice. But I'm wondering if sometimes I go too far. I am largely unaware of the efficiency issues/bottlenecks that come with the JavaScript language.
Example
I recently built an impound management application for a towing company. I used the jQuery UI dialog widget and populated a datagrid with specific ticket data. Now, this sounds very simple on the surface... but there is a LOT of data being passed around here.
(and now for the question... drumroll please...)
I'm wondering what the pros/cons are for each of the following options.
1) Make only one request for a given ticket and store it permanently in the DOM, simply showing/hiding the modal window; this means only one request is sent out per ticket.
2) Make a request every time a ticket is opened and destroy its DOM when it's closed.
My natural inclination was to store the tickets in the DOM - but I'm concerned that this will eventually start to hog a ton of memory if the application goes a long time without being reset (which it will).
I'm really just looking for pros/cons for both of those two options (or something neat I haven't even heard of =P).
The solution here depends on the specifics of your problem, as the 'right' answer will vary based on length of time the page is left open, size of DOM elements, and request latency. Here are a few more things to consider:
Keep only the newest n items in the cache (sketched after the list). This works well if you are only likely to redisplay items within a short period of time.
Store the data for each element instead of the DOM element, and reconstruct the DOM on each display.
Use HTML5 Storage to store the data instead of DOM or variable storage. This has the added advantage that data can be stored across page requests.
Any caching strategy will need to consider when to invalidate the cache and re-request updated data. Depending on your strategy, you will need to handle conflicts that result from multiple editors.
The best approach is to start with the simplest method, and add complexity to improve speed only where necessary.
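As an illustration of the first point, a minimal "newest n items" cache keyed by ticket id; the names and sizes are hypothetical.

```javascript
function TicketCache(maxItems) {
  this.maxItems = maxItems;
  this.keys = []; // insertion order, oldest first
  this.data = {};
}

TicketCache.prototype.put = function (id, ticket) {
  if (!(id in this.data)) this.keys.push(id);
  this.data[id] = ticket;
  while (this.keys.length > this.maxItems) {
    delete this.data[this.keys.shift()]; // evict the oldest entry
  }
};

TicketCache.prototype.get = function (id) {
  return this.data[id]; // undefined on a miss -> re-request from the server
};
```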
The third path would be to store the data associated with a ticket in JS, and create and destroy DOM nodes as the modal window is summoned/dismissed (jQuery templates might be a natural solution here.)
That said, the primary reason to avoid network traffic seems to be user experience (the network is always slower than RAM). But that experience might not actually be degraded by making a request every time, if it's something the user intuits involves loading data.
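A rough sketch of that third path, with hypothetical field names and dialog markup: keep plain ticket objects in JS and rebuild the dialog's contents on demand.

```javascript
var ticketData = {}; // id -> plain object; cheap to keep around

function openTicket(id) {
  var t = ticketData[id];
  var $body = $('<div/>')
    .append($('<h3/>').text(t.title))
    .append($('<p/>').text(t.details));
  // Rebuild the dialog's DOM from data each time it is shown.
  $('#ticket-dialog').empty().append($body).dialog('open');
}

function closeTicket() {
  // Drop the nodes but keep the lightweight data for the next open.
  $('#ticket-dialog').dialog('close').empty();
}
```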
I would say number 2 would be best, because that way, if the ticket changes after you open it, the change will appear the next time the ticket is opened.
One important factor is the number of redraws/reflows triggered by DOM manipulation. It's much more efficient to build up your content changes and insert them in one go than to do it incrementally, since each increment causes a redraw/reflow.
See: http://www.youtube.com/watch?v=AKZ2fj8155I to better understand this.
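For example (with sample data and an assumed container element), building the list off-DOM and inserting it once:

```javascript
var tickets = [{ name: 'A' }, { name: 'B' }]; // sample data

// Build everything detached from the document first...
var $list = $('<ul/>');
tickets.forEach(function (t) {
  $list.append($('<li/>').text(t.name)); // no reflow: $list is off-DOM
});

// ...then attach with a single insertion, triggering one reflow.
$('#container').append($list);
```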