Memory leakage in event handling - JavaScript

I've been reading about memory leaks lately and haven't yet wrapped my head around all of it, and I have some questions about my own style of writing code. Specifically, I'm not really sure whether the way I handle events might be a source of leaks. Consider the following code:
function Wrapper(text) {
    this.text = text;
    this.bindHandlers();
}

Wrapper.prototype.onClick = function (e) {
    alert(this.text);
};

Wrapper.prototype.bindHandlers = function () {
    var t = this, div = $('<div>' + this.text + '</div>');
    var reallyHugeArray = [1,2,3...]; // an array of 100000 elements for example
    div.click(function (e) {
        // all variables of the parent function are in scope for this function, including reallyHugeArray
        t.onClick(e);
    });
    $(document.body).append(div);
};
var a = new Wrapper('testString');
// had enough fun with the Wrapper, now let's nullify it
a = null;
As you can see, I like to use an anonymous function as the event handler so that it is more convenient to have access to instance-specific variables (in this case this.text in the onClick function) and functions. However, if I understood correctly, having an anonymous function inside a function (as the event handler is here), which has access to the local scope, prevents the garbage collector from removing the local variables, therefore creating a leak.
So my question is whether this method of event handling can create memory leaks, and if it does, is there any way to prevent it but still have a similarly convenient way to access the instance variables and functions?
(Off-topic: a function inside a function inside a function makes Javascript sound like Inception)

In your particular example, the anonymous click handler creates a function closure for the scope above it. That means that the values of t, div and reallyHugeArray are maintained for the life of your anonymous click handler function.
This is not really a memory "leak", but rather memory "usage". It doesn't get worse and worse over time, it just uses the fixed amount of memory that those local variables t, div and reallyHugeArray occupy. This is often an advantage in JavaScript programming because those variables are available to the inner function. But, as you wondered, it can occasionally cause problems if you expected that memory to be freed.
In the case of references to other things (DOM objects or other JS variables), since these outer variables continue on, everything that they refer to also continues on and cannot be freed by the garbage collector. In general, this is not a big problem. Things that tend to cause problems are things that are done over and over as the web page is used or things that are done in some large loop with lots of iterations. Something only executed once like this just uses a little more memory once and from then on the memory usage of the construct is constant.
If, for some reason, you were binding this event handler over and over again, creating a new function closure every time and never releasing them, then it could be a problem.
I find this construct in JavaScript very useful. I don't think of it as something to stay away from, but it is worth understanding in case you have references to really large things that you want freed, transient things that should be freed because you don't need them long term, or you're doing something over and over again. In that case, you can explicitly set local variables to null, if you won't need them in the inner function, to kill their references and allow the garbage collector to do its thing. But this is not something you generally need to do - just something to be aware of in certain circumstances.
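For example, a rough sketch (reusing the bindHandlers above, with hypothetical buildHugeArray and doSomethingWith helpers) of explicitly dropping the big reference once it is no longer needed:

Wrapper.prototype.bindHandlers = function () {
    var t = this, div = $('<div>' + this.text + '</div>');
    var reallyHugeArray = buildHugeArray(); // hypothetical helper that produces the large array

    doSomethingWith(reallyHugeArray); // hypothetical work that needs the array while binding
    reallyHugeArray = null; // drop the reference so the closure no longer pins the array

    div.click(function (e) {
        t.onClick(e); // only t and div remain reachable through this closure
    });
    $(document.body).append(div);
};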

Related

Is it possible to define an inline "static" lambda (arrow) function?

My code involves lots of event handlers, which do not require any context (scope) to execute. When I use inline lambda functions in contrast to statically declared (constant) lambda functions, a new event handler is created each time I assign the event handler.
Question: Is it possible to define an inline lambda function which does not create a new Function object each time the lambda is passed as a callback? (Given that no unique context/scope is required.)
Two examples illustrating the trade-off between notation and memory usage.
1. Inline lambda: (desired notation, unnecessary memory consumption)
for (const divId of ["div1", "div2", "div3"]) {
    document.getElementById(divId).addEventListener("click", e => console.log(e));
} // Creates a new Function object for each loop cycle.
Desired notation, but creates a new Function callback (e => console.log(e)) for each divId, despite the callback not depending on any context information (hence being functionally equivalent for each divId). It would be great if there was a way to just pass a pointer to this function.
2. Statically declared lambda: (undesired notation, desired and minimal memory consumption)
const staticLambda = e => console.log(e); // Function object created only once.
for (const divId of ["div1", "div2", "div3"]) {
    document.getElementById(divId).addEventListener("click", staticLambda);
}
Undesired notation (needs the extra constant), but on the up-side only creates the Function callback (staticLambda) once for all three divIds.
Imagine how this would look inside a class method; the lambda function needs to be declared outside of its respective method as a static property of the class, hence destroying the elegance of lambdas (which are so good at keeping the callback code at the location where it is passed).
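For illustration, a rough sketch of what I mean (hypothetical Widget class):

class Widget {
    // The handler has to be declared as a static property, away from where it is used...
    static logClick = e => console.log(e);

    attach(divIds) {
        for (const divId of divIds) {
            // ...so that this line can reuse a single Function object.
            document.getElementById(divId).addEventListener("click", Widget.logClick);
        }
    }
}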
Note: This is a simplified example. I realize that creating 2 (out of 3) unnecessary callbacks does not affect performance substantially, however, I am interested in properly handling cases with orders of magnitude more callbacks.
You might want to add the event listeners in a function, like this:
function addListeners(ids, listener) {
    for (const divId of ids) {
        document.getElementById(divId).addEventListener("click", listener);
    }
}
Now you can call your function like this:
addListeners(["div1", "div2", "div3"], e => console.log(e));
While these kinds of optimizations can help performance (see Does use of anonymous functions affect performance?), I don't think they have a very large impact and shouldn't be pursued just for performance reasons, but your case can of course be a valid one.
If you are managing many event listeners you should also take care to remove them when they become unused. Removing an event listener is easiest if you still have a reference to the original handler around, which is another reason to declare the handler somewhere outside the scope where it is attached.
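A small sketch of that idea (the names are made up):

const onDivClick = e => console.log(e); // keep a reference to the handler around

function attach(divId) {
    document.getElementById(divId).addEventListener("click", onDivClick);
}

function detach(divId) {
    // Removal only works because we pass the exact same function reference.
    document.getElementById(divId).removeEventListener("click", onDivClick);
}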
I would also argue that it is good practice to put 'simple' event handlers in their own functions and keep them in a separate module or file, this allows for easier refactoring. This also solves the performance issue.
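For example, a sketch with hypothetical file names, assuming ES modules:

// handlers.js: simple handlers kept together in their own module
export const logClick = e => console.log(e);

// main.js
import { logClick } from "./handlers.js";
document.getElementById("div1").addEventListener("click", logClick);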
Memory usage should not be the reason you choose one form over the other. For two reasons:
The usage is so small as to be almost unmeasurable in most cases.
JavaScript engines will most likely compile a single function where they can when optimizing bytecode or JIT-compiling to machine code.
However, you should choose one form or the other based on their properties:
Declare a named function when you have more than one place using the same logic. Always look for opportunities to refactor code, especially a refactor as easy as naming a function. It reduces future work when fixing bugs.
Use an inline (anonymous) function when you need to instantiate a new closure each time your function is called. Modern JavaScript engines may compile it to a single function, but they will create a new closure each time the function is "redeclared".
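As an illustration of that second point, a sketch where each handler genuinely needs its own closure because it captures the loop variable:

for (const divId of ["div1", "div2", "div3"]) {
    // A single shared, statically declared handler could not do this:
    // each iteration's closure captures its own divId.
    document.getElementById(divId).addEventListener("click", () => {
        console.log("clicked", divId);
    });
}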

Javascript variables and memory leaking?

Is it possible to create memory leaks when coding in JavaScript? And if so, is this dependent on the JavaScript engine, e.g. V8 or IE's Chakra?
I seem to be getting really slow performance when iterating through large loop constructs.
Should I "delete" the variables that I'm not using?
var myVar = 'very very long string';
delete myVar;
In the example you've shown, unless myVar is in the global scope it will simply be garbage collected at the end of the function.
In general, you don't need to worry about memory in JavaScript. Where you do need to worry about memory is when you unintentionally create references to objects and forget about them. For example:
function buttonClick() {
    var clicked = false;
    document.body.addEventListener('click', function (event) {
        if (clicked) return;
        if (event.target.nodeName !== 'BUTTON') return; // nodeName is uppercase for HTML elements
        clicked = foo(); // foo() stands in for some app-specific work
    }, true);
}
The above code is bad code to begin with (not cleaning up event listeners), but it illustrates the example of a "memory leak" in JavaScript. When buttonClick() is called, it binds a function to the click event on the <body>. Because removeEventListener is never called to unbind the listener, the memory used by the function is never reclaimed. That means that every time buttonClick() is called, a little bit of memory will "leak".
Even then, however, the amount of memory leaked is quite small and won't ever become a problem for the vast majority of use cases. Where it would likely be a problem is in server-side JavaScript, where the code is potentially run much more frequently and the process stays alive for much longer.
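One way to avoid the accumulation, sketched here with the placeholder foo() kept from above, is to hold on to the handler and unbind it once it has done its job:

function buttonClick() {
    var clicked = false;
    function handler(event) {
        if (clicked) return;
        if (event.target.nodeName !== 'BUTTON') return;
        clicked = foo(); // foo() is still a placeholder for app-specific work
        // The listener has served its purpose; unbinding it lets the closure be collected.
        document.body.removeEventListener('click', handler, true);
    }
    document.body.addEventListener('click', handler, true);
}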

Does the js event-loop mean you can use global variables for temp scratch-space?

Disclaimer: I'm not saying this is a good idea - as a matter of fact I'll explicitly say it is not - so take this question as a way of trying to understand what exactly the event loop means for coding style.
My rudimentary understanding of the JavaScript-has-no-threads mantra is that the runtime treats all of JavaScript as short "blocks of code" which are scheduled and executed one after the other, without ever shifting away from a block during execution. A block of code (I don't know the real terminology) in this case is basically the code that runs as a result of an event handler being triggered.
If my understanding is correct that would mean that it is technically 100% safe to use global variables if your use of them does not span more than one "block of code".
So for example, if I have a single global object window.workspace, I could have my event handlers (and any code that flows from there) store all their temporary variables in window.workspace rather than in closures. As long as I don't assume that workspace retains any state between calls to event handlers (even the same one), this should be perfectly safe.
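For concreteness, a rough sketch of the pattern I mean (the names are made up):

window.workspace = {}; // single global scratch object

document.getElementById("btn").addEventListener("click", function () {
    // Use workspace only within this one handler invocation...
    window.workspace.total = 0;
    for (var i = 0; i < 10; i++) {
        window.workspace.total += i;
    }
    alert(window.workspace.total);
    // ...and assume nothing about its contents the next time any handler runs.
});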
Is this accurate (though, once again, not advised)?
Exactly how a JavaScript event mechanism works is up to the container in which JavaScript runs. It would be possible to set up a system wherein event handlers always had some sort of persistent state object passed in on each call.
In browsers and systems like Node.js, however, the answer to your question (to the extent I understand it) is a guarded "yes", or maybe "yes but".
Because JavaScript has closures, a cleaner way to ensure that there's persistent (not like DB persistence; I mean persistent across invocations of an event handler) but private storage is to do something like this:
(function (global) {
    var persistentValue = 12;
    // set up an event handler
    global.whatever().handleEvent(function () {
        if (persistentValue > 12) { ... }
        else { persistentValue++; }
    });
})(this);
The idea is that the "persistentValue" variable remains "alive" in the closure around the event handler, so each time it's called it'll see that variable as it was the last time it ran. Now, of course if other event handlers are created in the same wrapper function, then they'll also have access to the variable. In that sense, it's like a relatively-global variable to those handlers.

Should I encapsulate blocks of functionality in anonymous JavaScript functions?

My intuition is that it's a good idea to encapsulate blocks of code in anonymous functions like this:
(function () {
    var aVar = {};
    aVar.func = function () { alert('ronk'); };
    aVar.mem = 5;
})();
Because I'm not going to need aVar again, so I assume that the garbage collector will then delete aVar when it goes out of scope. Is this right? Or are interpreters smart enough to see that I don't use the variable again and clean it up immediately? Are there any reasons such as style or readability that I should not use anonymous functions this way?
Also, if I name the function, like this:
var operations = function () {
    var aVar = {};
    aVar.func = function () { alert('ronk'); };
    aVar.mem = 5;
};
operations();
does operations then necessarily stick around until it goes out of scope? Or can the interpreter immediately tell when it's no longer needed?
A Better Example
I'd also like to clarify that I'm not necessarily talking about global scope. Consider a block that looks like
(function () {
    var date = new Date(); // I want to keep this around indefinitely
    // And even though date is private, it will be accessible via this HTML node
    // to other scripts.
    document.getElementById('someNode').date = date;

    // This function is private
    function someFunction() {
        var someFuncMember;
    }

    // I can still call this because I named it. someFunction remains available.
    // It has a someFuncMember that is instantiated whenever someFunction is
    // called, but then goes out of scope and is deleted.
    someFunction();

    // This function is anonymous, and its members should go out of scope and be
    // deleted
    (function () {
        var member;
    })(); // member is immediately deleted
    // ...and the function is also deleted, right? Because I never assigned it to a
    // variable. So for performance, this is preferable to the someFunction
    // example as long as I don't need to call the code again.
})();
Are my assumptions and conclusions in there correct? Whenever I'm not going to reuse a block, I should not only encapsulate it in a function, but encapsulate it in an anonymous function so that the function has no references and is deleted after it's called, right?
You're right that sticking variables inside an anonymous function is a good practice to avoid cluttering up the global object.
To answer your latter two questions: It's completely impossible for the interpreter to know that an object won't be used again as long as there's a globally visible reference to it. For all the interpreter knows, you could eval some code that depends on window['aVar'] or window['operations'] at any moment.
Essentially, remember two things:
As long as an object is around, none of its slots will be magically freed without your say-so.
Variables declared in the global context are slots of the global object (window in client-side Javascript).
Combined, these mean that objects in global variables last for the lifetime of your script (unless the variable is reassigned). This is why we declare anonymous functions — the variables get a new context object that disappears as soon as the function finishes execution. In addition to the efficiency wins, it also reduces the chance of name collisions.
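A quick sketch of the difference:

var kept = new Array(1000); // becomes a property of window and lives for the page's lifetime

(function () {
    var temporary = new Array(1000); // lives only in this function's context
})(); // once the function returns, `temporary` is eligible for collection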
Your second example (with the inner anonymous function) might be a little overzealous, though. I wouldn't worry about "helping the garbage collector" there; GC probably isn't going to run in the middle of that function anyway. Worry about things that will be kept around persistently, not just slightly longer than they otherwise would be. These self-executing anonymous functions are basically modules of code that naturally belong together, so a good guide is to think about whether that describes what you're doing.
There are reasons to use anonymous functions inside anonymous functions, though. For example, in this case:
(function () {
    var bfa = new Array(24 * 1024 * 1024);
    var calculation = calculationFor(bfa);
    $('.resultShowButton').click(function () {
        var text = "Result is " + eval(calculation);
        alert(text);
    });
})();
This results in that gigantic array being captured by the click callback so that it never goes away. You could avoid this by quarantining the array inside its own function.
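A sketch of that quarantining, keeping the hypothetical calculationFor from above:

(function () {
    // The huge array exists only inside this helper's scope...
    function buildCalculation() {
        var bfa = new Array(24 * 1024 * 1024);
        return calculationFor(bfa);
    }

    var calculation = buildCalculation(); // ...so after it returns, bfa can be collected

    $('.resultShowButton').click(function () {
        // The click closure now captures only `calculation`, not the array.
        alert("Result is " + eval(calculation));
    });
})();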
Anything that you add to the global scope will stay there until the page is unloaded (unless you specifically remove it).
It's generally a good idea to put variables and function that belong together either in a local scope or in an object, so that they add as little as possible to the global namespace. That way it's a lot easier to reuse code, as you can combine different scripts in a page with minimal risks for naming collisions.
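For instance, a small sketch of grouping related pieces under one namespace object (the name is made up) instead of scattering globals:

var myApp = {
    init: function () {
        document.getElementById('start').addEventListener('click', myApp.onStart);
    },
    onStart: function () {
        alert('started');
    }
};

myApp.init(); // only `myApp` is added to the global namespace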

Why does this Javascript object not go out of scope after $(document).ready?

I have some working JavaScript that manipulates some DOM elements. The problem is, I don't understand why it works, which is never a good thing. I am trying to learn more about object-oriented JavaScript and JavaScript best practices, so the organization may seem a little strange.
Basically, I wrap two methods that manipulate the DOM inside a CSContent object. I create an instance of that object, content, in $(document).ready and bind some events to the functions in content. However, I am confused as to how these functions can still be called after $(document).ready exits. Doesn't that mean that content has gone out of scope, and its functions are not available? Anyway, here is the code:
function CSContent() {
    var tweetTextArea = document.getElementById('cscontent-tweet'),
        tweetTextElement = document.getElementById('edit-cscontent-cs-content-tweet'),
        charCountElement = document.getElementById('cscontent-tweet-charactercount');

    this.toggleTweetTextarea = function () {
        $(tweetTextArea).slideToggle();
    };

    this.updateTweetCharacterCount = function () {
        var numOfCharsLeft = 140 - tweetTextElement.value.length;
        if (numOfCharsLeft < 0) {
            $(charCountElement).addClass('cscontent-negative-chars-left');
        } else {
            $(charCountElement).removeClass('cscontent-negative-chars-left');
        }
        charCountElement.innerHTML = '' + numOfCharsLeft + ' characters left.';
    };
}
$(document).ready(function () {
    var content = new CSContent();

    // If the twitter box starts out unchecked, then hide the text area
    if ($('#edit-cscontent-cs-content-twitter:checked').val() === undefined) {
        $('#cscontent-tweet').hide();
    }

    $('#edit-cscontent-cs-content-twitter').change(content.toggleTweetTextarea);
    // Seems wasteful, but we bind to both keyup and keypress to fix some weird miscounting behavior when deleting characters.
    $('#edit-cscontent-cs-content-tweet').keypress(content.updateTweetCharacterCount);
    $('#edit-cscontent-cs-content-tweet').keyup(content.updateTweetCharacterCount);

    content.updateTweetCharacterCount();
});
This, m'lord, is called a closure: the local variable content will remain in memory after $(document).ready exits. This is also a known cause of memory leaks.
In short, you bind this function to an event listener of a DOM element, and the JavaScript garbage collector then knows that it should keep the local variable intact. You can't call it directly (outside of the function) unless the event is triggered. With some events, you can do this "manually" if you really want to call the function afterward (e.g., using element.click() to simulate a click).
I assume you wonder why the event handlers like
$('#edit-cscontent-cs-content-twitter').change(content.toggleTweetTextarea);
work?
Well, you don't pass content as the event handler but the function that is stored in content.toggleTweetTextarea. And that reference will still exist after content does not exist anymore. There is nothing special about it: you just assigned an object (the function) to another variable. As long as at least one reference to an object exists, the object won't be garbage collected.
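A tiny sketch of that idea, separate from your actual code:

var content = new CSContent();
var handler = content.toggleTweetTextarea; // just another reference to the same function

content = null; // the variable no longer points at the instance,
                // but `handler` still keeps the function (and its closure) alive
$('#edit-cscontent-cs-content-twitter').change(handler);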
Now you may ask why those functions still have access to e.g. tweetTextArea? This is indeed a closure. When the functions are created via new CSContent(), the activation context of that constructor call is added to the scope chain of the inner functions toggleTweetTextarea and updateTweetCharacterCount. So even if you don't have a reference to content anymore, the scope of CSContent is still contained in the scope chain of the other functions.
You won't be able to access the object contained in content anymore after ready() is finished; this indeed goes out of scope.
My brain is off today, but shouldn't you be using closures in this situation?
$('#edit-cscontent-cs-content-twitter').change(
    function () {
        content.toggleTweetTextarea();
    }
);
