Unsetting all custom JavaScript objects after initialization - javascript

The custom JavaScript of my site is namespaced, combined, and minified, resulting in a 12 KB file of custom JS. This is code for the entire site, and usually, once page load has been triggered, a large portion of it no longer needs to sit in memory.
My question:
Does a heap of custom script that only gets executed once, or not at all, affect a user's performance, especially if the user has multiple tabs open?
I was thinking of setting mynamespace = null, but I don't know whether this actually improves performance in the user's browser.

Nulling it out makes the object eligible for garbage collection, which should free up some memory. Unless the system was under enough memory pressure that it was swapping to disk, though, the user wouldn't notice a difference.

You don't have to unset JavaScript objects manually, because JavaScript has garbage collection.
Edit:
You can do
delete window.mynamespace;
And the "mynamespace" will be deleted

Related

Why browser becomes slow with large number of DOM elements

I know this question sounds very trivial, but I just want to know how a browser processes the DOM and what makes it slow down with a large number of DOM elements. Is it just about size? What if DOM elements are not high in number but JavaScript objects are? Would it still respond slowly?
I guess that if there are events attached to JavaScript objects and we don't dispose of them, the page responds slowly because the browser has to execute all the event handlers (sequentially), but other than that, what are the other ways a 'memory leak' slows down the browser? (Assume the browser has consumed lots of memory but enough memory is still available on the system.)
Update:
Surprisingly, CPU and memory usage are always under control while the browser responds slowly.
If a page is loaded with all its elements and it doesn't change, there is no reason why it should be slow, no matter the number of DOM elements. However, if you have a dynamic page, there are plenty of operations that cause the entire layout to be recalculated. This is called layout thrashing and can have dramatic effects on performance.
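A minimal sketch of the difference (items is a hypothetical array of elements):

// Thrashing: each iteration writes a style and then reads a layout
// property, forcing the browser to recalculate layout every time.
for (var i = 0; i < items.length; i++) {
    items[i].style.width = "100px";   // write
    var h = items[i].offsetHeight;    // read, forces a reflow
}

// Better: batch all reads, then all writes, so layout is
// recalculated once instead of once per iteration.
var heights = [];
for (var i = 0; i < items.length; i++) {
    heights.push(items[i].offsetHeight);  // reads only
}
for (var i = 0; i < items.length; i++) {
    items[i].style.width = "100px";       // writes only
}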
The most obvious reason is that your browser consumes all your memory, and when it comes time to render something (during scrolling, for example), it has none left.
If there are no memory issues, then JohanP is right: there is no reason.
Why do browsers slow down when loading a lot of data? Because they have to load a lot of data. Large images are the most obvious culprit in terms of load speed, but page-load time correlates directly with the number of kilobytes transmitted. If you have a lot of code, it's going to be a large file.
As for JavaScript, there are four main causes of leaks:
Accidental Global Variables -- Variables not explicitly declared end up in the global scope:
function foo(arg) {
    bar = "This is a global variable"; // no var/let/const, so this becomes window.bar
}
Forgotten Timers Or Callbacks -- Variables captured by a setInterval(function() {}) callback stay reachable for as long as the interval keeps running, even if nothing else uses them (see the sketch after this list).
Out Of DOM References -- When a variable references an element that is later removed from the document, the element is kept alive by that reference:
var button = document.getElementById('button');
document.body.removeChild(document.getElementById('button'));
// `button` still points at the detached element, so it can't be collected
Closures -- A closure keeps its enclosing scope alive; a large object captured by a long-lived closure (a pattern that often arises accidentally in loops) cannot be collected while the closure is reachable. See JavaScript closures.
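As a rough illustration of the timer case above (loadHugeDataset is a made-up helper):

// The interval callback closes over bigData, so bigData stays
// reachable for as long as the interval keeps running.
var bigData = loadHugeDataset();
var timerId = setInterval(function() {
    console.log(bigData.length);
}, 1000);

// Clearing the interval releases the closure, and with it the
// reference to bigData.
clearInterval(timerId);
bigData = null;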
See 4 Types of Memory Leaks in JavaScript and How to Get Rid Of Them for further information on JavaScript memory leaks.
Hope this helps!

Is there a way to control Chrome GC?

I am working with a rather large volume of data.
Mechanism:
JavaScript reads a WebSQL database, then assembles the data into an object with a tree structure.
Then it applies knockout.js to the tree object (making its elements observable), then data-binds,
and then applies the jQuery Mobile UI at the end.
The whole process takes an unacceptable amount of time.
I have already optimized the algorithm that builds the tree object out of the data,
and also optimized the conversion-to-observables mechanism by pushing items directly into the underlying arrays of ko.observableArrays and calling valueHasMutated only once.
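Simplified, that bulk-push pattern looks roughly like this (a sketch, not my exact code; rows and makeTreeNode are placeholders):

// Push into the underlying array directly, then notify subscribers
// once, instead of calling nodes.push() per item (which notifies
// every subscriber on every push).
var nodes = ko.observableArray([]);
var underlying = nodes();  // the plain array behind the observable
for (var i = 0; i < rows.length; i++) {
    underlying.push(makeTreeNode(rows[i]));
}
nodes.valueHasMutated();   // single notification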
I am applying knockout.js 'if' bindings so that invisible tree nodes are not processed in the UI until their parent is opened.
Performance here is key.
After inspecting the page load in the Timeline panel of the Chrome developer tools, I noticed that the garbage collector runs a collection on every successive call while I am building the tree object.
Question: Is there a way to temporarily disable Chrome GC and then enable it again after I am done with page processing?
P.S. I know I could add a reference to the part that gets collected (basically introduce an object that dominates it and prevents collection), but this would require substantial changes throughout the code, I am not sure I could keep the reference alive long enough, and it is likely to introduce a memory leak. Surely there must be a better way.
No, there is no way to disable the garbage collector. There cannot be: what would Chrome do when more memory is requested but none is available?
(Also, the garbage collector is very fine-grained and complicated; in all likelihood what you're seeing in the timeline are small steps of incremental work to keep up with allocations, and/or "minor GC" cycles that only operate on the relatively small area of the heap where new allocations happen.)
If you want to reduce time spent in GC, the primary way to achieve that is to allocate fewer and/or smaller objects. Yes, that can mean changing your application's design so that objects are reused instead of being short-lived, or making similar changes in strategy.
If you allocate a lot, you will see a lot of GC activity; there is just no way around that. This is true even in languages/runtimes that are not considered "garbage collected": in C/C++, using new/delete heavily also has a performance cost.
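For example, a very small object-pool sketch (illustrative only, the names are made up):

// Reuse node objects instead of allocating a fresh one per row.
var pool = [];

function acquireNode() {
    return pool.pop() || { id: 0, children: null, data: null };
}

function releaseNode(node) {
    node.children = null;  // drop references so the pool itself doesn't leak
    node.data = null;
    pool.push(node);
}

Whether pooling actually wins depends on your allocation pattern; short-lived small objects are often cheap for a generational GC anyway, so measure before and after.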

Identify javascript closures with developer tools

I am currently developing a website that is pure JavaScript and relies heavily on the jQuery and jQuery UI libraries (this site is not intended for use by the general public, hence progressive enhancement is not a strict requirement for this project). I am encountering a significant memory leak on executing the following code:
oDialogBox = $("<div>...</div>");
/* Add useful things to the dialog box here */
oDialogBox.appendTo("body");
oDialogBox.dialog({
    /* Other dialog box settings here */
    close: function(event, ui) {
        oDialogBox.dialog("destroy");
        oDialogBox.remove();
        oDialogBox = null;
    }
});
At any given time in this dialog box, I am creating, removing, and modifying a large number of instances of jQuery UI buttons, multiselects (per the multiselect widget created by Eric Hynds), and click event handlers. According to the jQuery UI documentation, calling .remove() on oDialogBox should result in all child widgets being unbound and deleted. Yet my detached DOM tree shows a significant number of garbage elements that the GC isn't collecting.
It is highly likely I have missed a large set of closures that need to be finished off safely. How do I do the following:
1) How do I identify which closures are keeping a given detached DOM object alive (either in Firefox or Chrome)?
2) Assuming the complete set of closures is identified, does anything beyond nulling the variable need to be done to ensure the DOM element can be garbage collected?
3) I have also noticed that my list of arrays stored by the page is giant and contains references to DOM elements not being collected by the GC. Is there a documented best practice for cleaning up arrays in JavaScript so that all their elements can be marked for deletion? (Note: this is currently a prime suspect for the source of the memory leak.)
I'm afraid that I don't have a great answer for #1. I haven't found any really good tools for this myself, even given how good the development tools have become over the last few years. The best advice I can give is to always keep things in the smallest scope you possibly can. If things don't escape, it's generally easier to simply figure out where the references must be.
As to #2, there can be further concerns. If the object referenced by variable v1 closes over the free variables of some function, removing v1 will not be enough to make them eligible for garbage collection if another variable v2 closes over v1 in some other function. So I guess if you really mean the "complete set of closures", then you should be all set. But this can get hairy. Again, if most objects have references only in narrow scopes, these problems are much less severe.
For #3, what sorts of arrays are you discussing? If it's jQuery collections, then perhaps you simply have too many of them around. The only reason I know of to keep them around for a long time is to bind event handlers to them, and that is almost always better handled by event delegation on parent elements. If it's your own custom arrays, do you really have a good reason to store references in arrays that live for any substantial length of time? I've rarely found one.
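For instance, instead of binding a handler to every row, one delegated handler on a stable parent holds no reference to the individual rows (a sketch, assuming jQuery; #list and handleRowClick are made up):

// One handler on the parent survives rows being added and removed.
$("#list").on("click", ".row", function() {
    handleRowClick($(this).data("id"));
});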

Can variables in JS function closures be accessed in any way (including devtools)? How?

In my previous question:
Securing javascript game timing
... it became clear that client-side timing in a JavaScript/Canvas game simply won't be secure. I know the mantra about not trusting the client - that is what is causing my struggle in the first place. :-)
So, if I do move all timing to the server and just deal with it, here is a follow-up question. The game obviously needs to be completed before being submitted. As the game puzzle is all JavaScript, this introduces the problem of manipulating the client-side code to fake the completion of the game.
I've created the game JS code in a separate class file. If I instantiate the game as such:
var game;
$(document).ready(function() {
    game = new Game();
});
... then, I can access the 'game' object and all of its methods and variables via the console.
However, if I do:
$(document).ready(function() {
    var game = new Game();
});
... then I cannot access the 'game' object through the console. This seems to help, but is there something I don't know: can this object still be accessed in some way I am not aware of, or does making it a private variable in that function make it a little more secure?
Thanks!
Note: there are many other security considerations and attack vectors in such a system. This answer just seeks to answer the specific question that was asked here.
It depends on the browser and what its devtools provide. Most browsers' devtools provide functionality to:
pause execution of JavaScript at any point in time and use a debugger interface.
variables that are in scope at the current point of execution where the debugger is paused can usually be accessed via devtools in various ways, in particular via the console, where anything one can do with that variable is fair game: query its fields, call its methods, etc. If the variable binding isn't const, one can even reassign the variable to a new user-created instance of the object.
navigate JS files and set breakpoints in them.
this is a vector to the above bullet point.
you can make this less attractive by using JS minification (i.e. obfuscation), but that's not going to stop someone who's determined.
String literals don't get minified and can usually help a lot in navigating and understanding minified code.
inspect event listeners on HTML elements and set breakpoints on them.
If a variable has a reference bound to it in a function closure that is known to be an event listener of a certain HTML element, or reachable (execution-wise) by such an event listener, this can be another vector to the first bullet point. This can be very common in web games. Just a keyboard event listener usually is an entry-point to reach functions that reference important game objects.
There are even tools to record the JS heap memory. It's a lot of data to sift through, but it's basically everything on the JS heap (read-only).
Given those browser features (and the fact that a user can use any browser they wish), it's impossible to "safeguard" anything 100% on the client-side. It's a losing battle. If you want to play it like a game, you can do your best.
Look into Object.freeze and friends.
freeze or seal anything that can be frozen or sealed, including class prototypes
make variables which can be const const
use assertions to assert in critical parts of the logic that the program state is consistent and try to detect tampering.
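A minimal sketch of the freezing idea (assuming a Game constructor like the one in the question):

function Game() {
    this.score = 0;
}
Game.prototype.addScore = function(points) {
    this.score += points;
};

// Prevent the prototype's methods from being swapped out at runtime.
Object.freeze(Game.prototype);

// Prevent new properties from being added to the instance; existing
// ones stay writable. Object.freeze(game) would lock those too, but
// it would also break addScore, since score could no longer change.
var game = Object.seal(new Game());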
Don't worry too much about the console. Yes, if there are global objects whose methods can easily be fired to "win" the game, that's a nice opportunity to cheat, but it can easily be prevented, as you demonstrated.
So the hacker would just watch (in the network pane) which requests are made to your server and fire them manually. If they were just simple URLs like /action=start and /action=end, they could easily fire them manually without any timing. So you will need to prevent that (although you can never make it really safe), e.g. by adding additional credential tokens. Or you could embed some "secret"(s) in the game code, which are revealed during gameplay and need to be sent to the server as proof. Of course they could be read out of your code, but you have to make it too complicated for the hacker. It's a bit like security by obscurity…
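For example (a sketch only; the endpoints and token scheme are made up):

var token;

// The server issues a one-time token when the round starts...
$.post("/game/start", function(data) {
    token = data.token;  // opaque, single-use value
});

// ...and the completion request must echo it back, so replaying a
// bare "end" request without a live token proves nothing.
function reportCompletion(score) {
    $.post("/game/end", { token: token, score: score });
}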

Save or destroy data/DOM elements? Which takes more resources?

I've been getting more and more into high-level application development with JavaScript/jQuery. I've been trying to learn more about the JavaScript language and dive into some of its more advanced features. I was just reading an article on memory leaks when I read this section:
JavaScript is a garbage collected language, meaning that memory is allocated to objects upon their creation and reclaimed by the browser when there are no more references to them. While there is nothing wrong with JavaScript's garbage collection mechanism, it is at odds with the way some browsers handle the allocation and recovery of memory for DOM objects.
This got me thinking about some of my coding habits. For some time now I have been very focused on minimizing the number of requests I send to the server, which I feel is just good practice. But I'm wondering if sometimes I go too far. I am largely unaware of the efficiency issues and bottlenecks that come with the JavaScript language.
Example
I recently built an impound-management application for a towing company. I used the jQuery UI dialog widget and populated a datagrid with specific ticket data. Now, this sounds very simple on the surface... but there is a LOT of data being passed around here.
(and now for the question... drumroll please...)
I'm wondering what the pros/cons are for each of the following options.
1) Make only one request per ticket and store the result permanently in the DOM, simply showing/hiding the modal window; this means only one request is sent out per ticket.
2) Make a request every time a ticket is opened and destroy it when it's closed.
My natural inclination was to store the tickets in the DOM, but I'm concerned that this will eventually start to hog a ton of memory if the application goes a long time without being reset (which it will).
I'm really just looking for pros/cons of those two options (or something neat I haven't even heard of =P).
The solution here depends on the specifics of your problem, as the 'right' answer will vary based on how long the page is left open, the size of the DOM elements, and request latency. Here are a few more things to consider:
Keep only the newest n items in the cache (see the sketch after this list). This works well if you are only likely to redisplay items within a short period of time.
Store the data for each element instead of the DOM element itself, and reconstruct the DOM on each display.
Use HTML5 Storage to store the data instead of the DOM or variables. This has the added advantage that the data can persist across page loads.
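For example, a tiny "newest n tickets" cache might look like this (a sketch; the endpoint and names are made up):

var MAX_CACHED = 20;
var cache = {};  // ticketId -> ticket data
var order = [];  // ids in fetch order, oldest first

function getTicket(id, callback) {
    if (cache[id]) { callback(cache[id]); return; }
    $.getJSON("/tickets/" + id, function(data) {
        cache[id] = data;
        order.push(id);
        if (order.length > MAX_CACHED) {
            delete cache[order.shift()];  // evict the oldest entry
        }
        callback(data);
    });
}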
Any caching strategy will need to consider when to invalidate the cache and re-request updated data. Depending on your strategy, you will need to handle conflicts that result from multiple editors.
The best way is to get started using the simplest method, and add complexity to improve speed only where necessary.
The third path would be to store the data associated with a ticket in JS and create and destroy DOM nodes as the modal window is summoned/dismissed (jQuery templates might be a natural solution here).
That said, the primary reason to avoid network traffic is user experience (the network is always slower than RAM). But that experience might not actually be degraded by making a request every time, if it's something the user intuits involves loading data.
I would say number 2 is best, because that way, if the ticket changes after you open it, the change will appear the second time the ticket is opened.
One important factor is the number of redraws/reflows that are triggered by DOM manipulation. It's much more efficient to build up your content changes and insert them in one go than to do it incrementally, since each increment causes a redraw/reflow.
See: http://www.youtube.com/watch?v=AKZ2fj8155I to better understand this.
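A quick sketch of that batching idea (the container id and rows data are made up):

// One reflow for the whole batch, instead of one per appended row.
var fragment = document.createDocumentFragment();
rows.forEach(function(row) {        // rows = your ticket data
    var div = document.createElement("div");
    div.textContent = row.title;
    fragment.appendChild(div);
});
document.getElementById("grid").appendChild(fragment);  // single insertion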
