GSAP TweenLite/TweenMax garbage collection, references and performance - JavaScript

I'm trying to understand the best way to use TweenLite/TweenMax.
Is it useful to reference all my tweens with the same variable?
After killing a tween with its public kill() method, do I have to set the reference to null to help garbage collection?
Below is a well-commented example:
$(document).ready(function () {
    var elementOne = $('#elementOne');
    var elementTwo = $('#elementTwo');
    var myTween;

    // is it useful to overwrite the variable?
    myTween = TweenMax.to(elementOne, 1, {
        opacity: 0
    });
    myTween = TweenMax.to(elementTwo, 1, {
        left: 0,
        onComplete: destroy
    });

    function destroy () {
        // suggested in the TweenMax docs
        // the console.log still returns me the object
        myTween.kill();
        console.log(myTween);

        // is it required for garbage collecting?
        // now the console.log returns me null
        myTween = null;
        console.log(myTween);

        // and then...jQuery GC-friendly remove
        elementOne.remove();
        elementTwo.remove();
    }
});

You don't need to do anything special to make a tween (or timeline) available for GC other than what you'd normally do for any JS object. In other words, if you maintain a reference in your own code to an instance, it'll stick around (otherwise your code could break). But you do NOT need to specifically kill() a tween. A lot of effort has gone into GSAP to ensure that things are optimized and headache-free. The engine will automatically release completed tweens for garbage collection when necessary. And yet a tween will still work if you maintain a reference and restart() it, for example.
Just because you call kill() on a tween instance, that doesn't force the browser to run its garbage collection routine. It doesn't null your variable either. That's just how JavaScript works (and that's a good thing). It has nothing to do with TweenLite/Max specifically.
Also keep in mind that you don't need to store any tween instances in variables. The only time it's helpful is if you need to control the tween later (or insert it into a timeline or something like that). Typically it's fine to just call TweenMax.to(...) without storing the result in a variable.
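For illustration, a minimal sketch of both patterns (assuming jQuery and TweenMax are loaded; the element IDs are made up):
// Fire-and-forget: no variable needed; the engine releases the tween once it's done
TweenMax.to($('#banner'), 1, { opacity: 0 });

// Keep a reference only when you need to control the tween later
var intro = TweenMax.to($('#logo'), 2, { left: 100, paused: true });
intro.play();      // control it whenever you like
// intro.restart(); // still works, because we kept the reference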
Does that clear things up?


How do I create a memory leak in JavaScript?

I would like to understand what kind of code causes memory leaks in JavaScript and created the script below. However, when I run the script in Safari 6.0.4 on OS X the memory consumption shown in the Activity Monitor does not really increase.
Is something wrong with my script or is this no longer an issue with modern browsers?
<html>
<body>
</body>
<script>
    var i, el;

    function attachAlert(element) {
        element.onclick = function() { alert(element.innerHTML); };
    }

    for (i = 0; i < 1000000; i++) {
        el = document.createElement('div');
        el.innerHTML = i;
        attachAlert(el);
    }
</script>
</html>
The script is based on the Closure section of Google's JavaScript style guide:
http://google-styleguide.googlecode.com/svn/trunk/javascriptguide.xml?showone=Closures#Closures
EDIT: The bug that caused the above code to leak has apparently been fixed: http://jibbering.com/faq/notes/closures/#clMem
But my question remains: Would someone be able to provide a realistic example of JavaScript code that leaks memory in modern browsers?
There are many articles on the Internet that suggest memory leaks can be an issue for complex single-page applications, but I have a hard time finding an example that I can run in my browser.
You're not keeping the element you've created around and referenced anywhere - that's why you're not seeing the memory usage increase. Try attaching the element to the DOM, or store it in an object, or set the onclick to be a different element that sticks around. Then you'll see the memory usage skyrocket. The garbage collector will come through and clean up anything that can no longer be referenced.
Basically a walkthrough of your code:
create the element (el)
create a new function that references that element
set the function to be the onclick of that element
overwrite the element with a new element
Everything is centered around the element existing. Once there isn't a way to access the element, the onclick can't be accessed anymore. So, since the onclick can't be accessed, the function that was created is destroyed, and the function held the only reference to the element, so the element is cleaned up as well.
Someone might have a more technical example, but that's the basis of my understanding of the JavaScript garbage collector.
Edit: Here's one of many possibilities for a leaking version of your script:
<html>
<body>
</body>
<script>
    var i, el, event;
    var createdElements = {};
    var events = [];

    function attachAlert(element) {
        element.onclick = function() { alert(element.innerHTML); };
    }

    function reallyBadAttachAlert(element) {
        return function() { alert(element.innerHTML); };
    }

    for (i = 0; i < 1000000; i++) {
        el = document.createElement('div');
        el.innerHTML = i;

        /** possibility one: you're storing the element somewhere **/
        attachAlert(el);
        createdElements['div' + i] = el;

        /** possibility two: you're storing the callbacks somewhere **/
        event = reallyBadAttachAlert(el);
        events.push(event);
        el.onclick = event;
    }
</script>
</html>
So, for #1, you're simply storing a reference to that element somewhere. It doesn't matter that you'll never use it - because that reference is held in the object, the element and its callbacks will never go away (at least until you delete the element from the object). For possibility #2, you could be storing the events somewhere. Because the event handler can still be accessed (e.g. by doing events[10]();) even though the element is nowhere to be found, the element is still referenced by the handler, so the element stays in memory along with the event until it's removed from the array.
update: Here is a very simple example based on the caching scenario in the Google I/O presentation:
/*
    This is an example of a memory leak. A new property is added to the cache
    object 10 times/second. The value of performance.memory.usedJSHeapSize
    steadily increases.

    Since the value of cache[key] is easy to recalculate, we might want to free
    that memory if it becomes low. However, there is no way to do that...

    Another method to manually clear the cache could be added, but manually
    adding memory checks adds a lot of extra code and overhead. It would be
    nice if we could clear the cache automatically only when memory became low.
    Thus the solution presented at Google I/O!
*/
(function(w) {
    var cache = {};

    function getCachedThing(key) {
        if (!(key in cache)) {
            cache[key] = key;
        }
        return cache[key];
    }

    var i = 0;
    setInterval(function() {
        getCachedThing(i++);
    }, 100);

    w.getCachedThing = getCachedThing;
})(window);
Because usedJSHeapSize does not update when the page is opened from the local file system, you might not see the increasing memory usage. In that case, I have hosted this code for you here: https://memory-leak.surge.sh/example-for-waterfr
This Google I/O'19 presentation gives examples of real-world memory leaks as well as strategies for avoiding them:
Method getImageCached() returns a reference to an object while also caching a local reference. Even if this reference goes out of the method consumer's scope, the referenced memory cannot be garbage collected because there is still a strong reference inside the implementation of getImageCached(). Ideally, the cached reference would become eligible for garbage collection if memory got too low. (Not exactly a memory leak, but a situation where there is memory that could be freed at the cost of running the expensive operations again.)
Leak #1: the reference to the cached image. Solved by using weak references inside getImageCached().
Leak #2: the string keys inside the cache (Map object). Solved by using the new FinalizationGroup API.
Please see the linked video for JS code with line-by-line explanations; a rough sketch of the idea follows.
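This is not the talk's actual code, just a minimal sketch of the weak-reference caching idea. It assumes a browser that supports WeakRef and FinalizationRegistry (the shipped name of the FinalizationGroup proposal), and expensiveDecode is a made-up placeholder:
const imageCache = new Map();

// Leak #2 from the talk: the Map entries (keys + WeakRefs) stay behind.
// A FinalizationRegistry can delete them once the image has been collected.
const registry = new FinalizationRegistry(key => imageCache.delete(key));

function getImageCached(key) {
  const ref = imageCache.get(key);
  const cached = ref && ref.deref();      // undefined if the image was collected
  if (cached) return cached;
  const img = expensiveDecode(key);       // hypothetical expensive operation
  imageCache.set(key, new WeakRef(img));  // leak #1 fix: cache without blocking GC
  registry.register(img, key);            // leak #2 fix: clean up the entry later
  return img;
}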
More generally, "real" JS memory leaks are caused by unwanted references to objects that will never be used again. They are usually bugs in the JS code. This article explains four common ways memory leaks are introduced in JS (the first two are sketched in code after the list):
Accidental global variables
Forgotten timers/callbacks
Out of DOM references
Closures
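A minimal sketch of the first two patterns (the function and variable names are made up for illustration):
function startPolling() {
    results = [];                        // 1. accidental global: "var" is missing, so
                                         //    results becomes a property of window
    setInterval(function () {            // 2. forgotten timer: the interval is never
        results.push(document.title);    //    cleared, so the callback (and everything
    }, 1000);                            //    it closes over) stays reachable forever
}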
An interesting write-up documents how closures caused a memory leak in the popular MeteorJS framework.
2020 Update:
Most CPU-side memory-overflow tricks no longer work in modern V8-based browsers. However, we can overflow GPU-side memory by running this script:
// Initialize canvas and its context
window.reallyFatCanvas = document.createElement('canvas');
let context = window.reallyFatCanvas.getContext('2d');

// References a new context inside the context, in a loop.
function leakingLoop() {
    context.canvas.width = document.body.clientWidth;
    context.canvas.height = document.body.clientHeight;
    const newContext = document.createElement('canvas').getContext('2d');
    context.context = newContext;
    context.drawImage(newContext.canvas, 0, 0);
    // The new context will reference another context on the next loop
    context = newContext;
}

// Use interval instead of while(true) {...}
setInterval(leakingLoop, 1);
EDIT: I renamed all the variables (and constants) so the code makes more sense. Here is the explanation.
Based on my observation, a canvas context seems to be backed by video memory. So if we make a canvas context reference another canvas context, and so on, video RAM fills up a lot faster than DRAM; tested on Microsoft Edge and Chrome.
(Screenshot of the resulting memory usage omitted.) I have no idea why my laptop always freezes a few seconds after taking a screenshot while running this script. Please be careful if you want to try it.
I tried something like this and got an out-of-memory exception.
const test = (array) => {
    array.push((new Array(1000000)).fill('test'));
};

const testArray = [];
for (let i = 0; i <= 1000; i++) {
    test(testArray);
}
The Easiest Way Is:
while(true){}
A small example of code causing a 1MB memory leak:
Object.defineProperty(globalThis, Symbol(), {value: new Uint8Array(1<<20).slice(), writable: false, configurable: false})
After you run that code, the only way to free the leaked memory is to close the tab you ran it on.
If all you want is to create a memory leak, then the easiest way IMO is to instantiate a TypedArray, since they hog a fixed amount of memory and outlive any references. For example, creating a Float64Array with 2^27 elements consumes 1 GiB (gibibyte) of memory, since it needs 8 bytes per element.
Start the console and just write this:
new Float64Array(Math.pow(2, 27))

Measuring pollution of global namespace

Background
I'm trying to refactor some long, ugly Javascript (shamefully, it's my own). I started the project when I started learning Javascript; it was a great learning experience, but there is some total garbage in my code and I employ some rather bad practices, chief among them being heavy pollution of the global namespace / object (in my case, the window object). In my effort to mitigate said pollution, I think it would be helpful to measure it.
Approach
My gut instinct was to simply count the number of objects attached to the window object prior to loading any code, again after loading third-party libraries, and lastly after my code has been executed. Then, as I refactor, I would try to reduce the increase that corresponds to loading my code. To do this, I'm using:
console.log(Object.keys(window).length)
at various places in my code. This seems to work alright and I see the number grow, in particular after my own code is loaded. But...
Problem
Just from looking at the contents of the window object in the Chrome Developer console, I can see that it's not counting everything attached to the object. I suspect it's not including some more fundamental properties or object types, whether they belong to the browser, a library or my own code. Either way, can anyone think of a better and more accurate way to measure global namespace pollution that would help in refactoring?
Thanks in advance!
So after some of the comments left by Felix Kling and Lèse majesté, I have found a solution that works well. Prior to loading any libraries or my own code, I create the dashboard global object (my only intentional one) and store a list of objects attached to window via:
var dashboard = {
    cache: {
        load: Object.getOwnPropertyNames(window)
    }
};
Then, after I load all of the libraries but prior to loading any of my own code, I modify the dashboard object, adding the pollution method (within a new debug namespace):
dashboard.debug = {
    pollution: (function() {
        var pollution,
            base = dashboard.cache.load,  // snapshot of window at load
            filter = function(a, b) {     // difference of two arrays
                return a.filter(function(i) {
                    return !(b.indexOf(i) > -1);
                });
            },
            library = filter(Object.getOwnPropertyNames(window), base),
            custom = function() {
                return filter(Object.getOwnPropertyNames(window),
                    base.concat(library));
            };

        delete dashboard.cache.load;

        pollution = function() {
            console.log('Global namespace polluted with:\n ' +
                custom().length + ' custom objects \n ' +
                library.length + ' library objects');
            return {custom: custom().sort(), library: library.sort()};
        };

        return pollution;
    }())
};
At any point, I can call this method from the console and see
Global namespace polluted with:
53 custom objects
44 library objects
as well as two arrays listing the keys associated with those objects. The base and library snapshots are static, while the current custom measurement (via custom) is dynamic, so that if I were to load any custom JavaScript via AJAX, I could remeasure and see any new custom "pollution".
The general pattern you've selected works OK from experience. However, there are two things you might need to consider (as additions or alternatives):
Use JsLint.com or JSHint.com with your existing code and look at the errors produced. It should help you spot most if not all of the global variable usage quickly and easily (you'll see errors of 'undefined' variables for example). This is a great simple approach. So, the measurement in this case will be just looking at the total number of issues.
We've found that Chrome can make detecting leaked resources on the window object tricky (things are added during the course of running the page). We've needed, for example, to check whether certain properties are native by using a RegEx: /\s*function \w*\(\) {\s*\[native code\]\s*}\s*/ to spot native code. In some "leak detection" code we've written, we also try (in a try/catch) to obtain the value of a property to verify it's set to a value (and not just undefined). But that shouldn't be necessary in your case.
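For instance, a rough sketch of that kind of check (illustrative only, not our actual leak-detection code; the function name is made up):
var NATIVE_CODE_RE = /\s*function \w*\(\) {\s*\[native code\]\s*}\s*/;

function listSuspectGlobals() {
    return Object.getOwnPropertyNames(window).filter(function (name) {
        try {
            var value = window[name];
            // keep properties that have a value and don't look like native code
            return value !== undefined && !NATIVE_CODE_RE.test(String(value));
        } catch (e) {
            return false; // some properties throw on access; skip them
        }
    });
}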

Are $(document.body) and document.body the same? Cleaning up garbage and binding in a class - MooTools 1.3

I am building a MooTools class and I have this in my initialize function:
this.css = null;
window.addEvent('domready', function(){
    this.document = $(document);
    this.body = $(document.body);
    this.head = $(document.head);
}.bind(this));
Ok and now to the questions ...
Should I declare this.css = null or any other empty variable in the init:
this.css = null; // Maybe this.css = '' - empty string?
Next thing is about window and document ... Should I wrap them in $() or not? It works both ways, so I just want to know which way is preferred. So to summarize:
window.addEvent() // or should I use $(window).addEvent()
this.document = $(document) // or this.document = document
this.body = $(document.body) // or this.body = document.body
I stored these values in an object to avoid multiple DOM queries; is this OK? Or would it be better to call $(selector) / $$(selector) every time?
Two more things left ... About binding: is it OK to use .bind(this) every time, or would it be better to use .bind(this.myDiv) and inside the function write e.g. this.setStyle(...) instead of this.myDiv.setStyle(...)?
(function(){
    this.setStyle('overflow-y', 'visible');
}.bind(this.body)).delay(100);
or
(function(){
    this.body.setStyle('overflow-y', 'visible');
}.bind(this)).delay(100);
And the last thing is about garbage collection ... Do I have to collect garbage myself, and how do I do it (as far as I know MooTools does it on its own on unload)? The confusing part is that I found this function in the MooTools docs:
myElement.destroy();
They say: Empties an Element of all its children, removes and garbages the Element. Useful to clear memory before the pageUnload.
So do I have to collect garbage on my own? How do I do that? When should I use .destroy()? I am working on a huge project and I notice that IE gets slow over multiple executions of the script (so how do I handle that? probably some cleaning is needed, memory leaks?).
pff, this is a bit long.
first, initial variables. this.css = null... the only times i'd set 'empty' variables are: to typecast; when it's a property of an object i may reference and don't want undefined; when it's a string i will concatenate with or a number i will increment/decrement. null is not really useful at this point.
a common / good practice when writing a mootools class is to use the Options class as a mixin. this allows you to set default options object with your default settings that can be overridden upon instantiation. similarly, Object.merge({ var: val}, useroptions); can override a default val if supplied.
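for illustration, a rough sketch of the Options mixin (the class name and option names are made up):
var Widget = new Class({
    Implements: Options,
    options: {                        // the defaults
        duration: 500,
        color: '#fff'
    },
    initialize: function(element, options) {
        this.setOptions(options);     // user options override the defaults
        this.element = $(element);
    }
});

new Widget('myDiv', { duration: 250 });   // duration overridden, color kept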
now, iirc, there are times when you'd have to use $(document.body) and it's not because document.body does not work, it's because applying $() also applies Element prototypes in IE (since the Element prototype is not extended there, the methods are applied to the elements directly instead, which happens when you $ them). Also, $ assigns an internal UID to the target element and allows element storage to be used for that element. I don't see a point to using $(document) or $(window) - they are 'extended' as much as needed by default. In any case, even in IE, you only need to $(something) once and can continue using it as just 'something' afterwards. check my document.getElementById("foo").method() example - you can just run $("foo"); on its own and then try document.getElementById("foo").method() again - it will work in IE too.
window.addEvent(); // is fine.
document.body.adopt(new Element("div")); // not fine in IE.
new Element("div").inject(document.body); // fine.
and on their own:
$(document.body).adopt(new Element("div")); // fine.
document.body.adopt(new Element("span")); // now fine, after first $.
see this in ie8: http://www.jsfiddle.net/AePzD/1/ - first attempt to set the background fails but the second one works. subsequently, document.body.methods() calls are going to work fine.
http://www.jsfiddle.net/AePzD/2/ - this shows how the element (which $ also returns) can have methods in webkit/mozilla and not in trident. however, replace that with $("foo") and it will start working. rule of thumb: $() elements you didn't dynamically create before applying methods to them.
storing selectors can be a good performance practice, for sure. but it can also fill your scope chain with many variables so be careful. if you will use a selector two or more times, it's good to cache it. failing to do so is not a drama, selector engines like sizzle and slick are so fast these days it does not matter (unless you are animating at the time and it impacts your FPS).
as for binding, whichever way you like.
keep in mind delay has a second argument, BIND:
(function(){
    this.setStyle('background', 'blue');
}).delay(100, $("foo"));
so do quite a few functions. this particular bind is not very useful but in a class, you may want to do
(function(){
    this.element.setStyle('background', 'blue');
    this.someMethod();
}).delay(100, this);
GC. mootools does its own GC, sure. however, .destroy() is a very good practice, imo. if you don't need something in the DOM, use element.dispose(). if you won't attach it to the DOM again, use .destroy() - it removes all child nodes and cleans up. more memory \o/
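for example (the element id is made up):
var panel = $('sidePanel');   // hypothetical element
panel.dispose();              // just detaches it - you can inject() it again later
// or, if you will never reattach it:
panel.destroy();              // empties children, removes the element and garbages it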
advice on IE... dodgy. you can use Drip to trace memory leaks, and there are tools like dynaTrace that can be very good for profiling. in terms of practices... make sure you don't use inline js, always clean up what you don't need (events, elements) and generally be careful, esp. when you are stacking up events and dealing with ajax (new elements that need events - consider event delegation instead, see the sketch below...). use fewer dom nodes - that also helps...
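a rough sketch of delegation with mootools 1.3 (it needs Element.Delegation from MooTools More; the element id and selectors are made up):
// one click event on the container instead of one per row; rows added later via
// ajax are covered automatically and there is nothing extra to clean up per row
$('list').addEvent('click:relay(a.delete)', function(event, target) {
    target.getParent('li').destroy();
});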

Never store intermediate program state in DOM?

I managed to run into a funny bug the other day where too-quick modifications of the DOM caused all of Internet Explorer to crash. So I was thinking: why am I even setting these values if I'm going to change them later anyway? (The modifications are very unpredictable, so completely avoiding the scenario is impossible.)
Some background: my website is more like a game/application and has a lot of custom elements etc. Performance is key, and moving around and modifying objects should be fast, smooth and without flicker. In one iteration tons of objects can have their state modified.
The objects in my application right now follow something similar to this pattern. domElement is the actual element used in the DOM, created with document.createElement or getElementById.
function SetWidth(w) {
    this.width = w;
    this.domElement.style.width = w + "px";
}
Obviously an object has more styles than just width, but this is just to simplify things. This works pretty well right now, but what if I for some reason set the width of one object twice inside one "program loop"? That means the DOM will be modified twice, but only the second state should be displayed. In most new browsers this doesn't make a difference because the page is not rendered until all user JavaScript has completed, but in some browsers you can get updates unpredictably at any time. And even if there is no visible change, does it impact performance?
Another problem is that some elements depend on being attached to the DOM before you can set/get some properties on them, which you can more easily avoid with the pattern below.
What I was thinking of doing instead was going back to the old-school render-loop pattern.
So the above object would look like this:
function SetWidth(w) {
    this.width = w;
    this.stateChanged = true;
}

function Render() {
    if (this.stateChanged)
        this.domElement.style.width = this.width + "px";
}
Once the "program-loop" is done you loop through every object only to "render" them (or use some more sophisticated structure keeping track of all modified objects).
To me this seems like defeating the whole purpose of having the DOM, as you are basically reinventing what is already in place, but sometimes it feels like it doesn't work properly so you have to roll your own version.
Has anyone used something similar and is it worth it? What are the pros and cons? What else do I have to think of? Adding and removing objects and managing z-index should also be taken into consideration.
One of the things to watch out for is that if you request the value of a property that depends on the layout, the browser will recalculate the layout to get the value, which may take a significant amount of time. So "this.style.width = '80px'" will be quick, but "this.style.width = '80px'; var wid = this.clientWidth;" will take much, much longer. Of course, if you use the setWidth/render pattern above, you won't be able to get the value of clientWidth until after the render phase.
The setWidth/render pattern is a reasonable one for your use case, but I'd create a sub-object to hold the pending layout values rather than storing them directly on the 'this' object. If you empty/recreate the sub-object at the end of the render step, you won't need to store a separate stateChanged variable; the presence of the property in the sub-object can serve that purpose.
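A minimal sketch of that variation (the property names are illustrative):
function SetWidth(w) {
    this.width = w;
    this.pendingStyles.width = w + "px";   // queue the change, don't touch the DOM yet
}

function Render() {
    for (var prop in this.pendingStyles) {
        this.domElement.style[prop] = this.pendingStyles[prop];
    }
    this.pendingStyles = {};               // a fresh, empty object doubles as "nothing changed"
}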

Can't empty JS array

Yes, I am having issues with this very basic (or so it seems) thing. I am pretty new to JS and still trying to get my head around it, but I am pretty familiar with PHP and have never experienced anything like this. I just can not empty this damn array, and stuff keeps getting added to the end every time I run this.
I have no idea why, and I am starting to think it is somehow related to the way the checkbox IDs are named, but I may be mistaken...
id="alias[1321-213]",
id="alias[1128-397]",
id="alias[77-5467]" and so on.
I have tried sticking
checkboxes = []; and
checkboxes.length = 0;
in every place possible: right after the beginning of the function, at the end of the function, even outside, right before the function, but it does not help, and the only way to empty this array is to reload the page. Please tell me what I am doing wrong, or at least point me to a place where I can RTFM. I am completely out of ideas here.
function() {
    var checkboxes = new Array();
    checkboxes = $(':input[name="checkbox"]');
    $.each(checkboxes,
        function(key, value) {
            console.log(value.id);
            alert(value.id);
        }
    );
    checkboxes.length = 0;
}
I have also read Mastering Javascript Arrays 3 times to make sure I am not doing something wrong, but still can't figure it out....
I think there's a lot of confusion coming out of this because you are clearing the array -- just maybe not for the purpose you want, or at the wrong time, etc.
function () {
    var checkboxes = new Array();              // local-scope variable
    checkboxes = $(':input[name="checkbox"]'); // new instance
    $.each(checkboxes, /* ... */);             // (truncated for brevity)
    checkboxes.length = 0;                     // this clears the array
                                               // but, for what...?
                                               // nothing happens now
}
As per your snippet, every call to the function recreates and clears the array. However, all work with the array is done while the array is full (nothing happens after it's cleared).
Also, checkboxes is a private variable to the function. The variable only exists during execution of the function and is forgotten once the function is done.
So, what is the big picture? Why are you trying to clear the array?
To take a guess, it sounds like you're intending on clearing it for the next call of the function.
i.e. (using doSomething as the function name):
doSomething(); // logs all array elements and clears the array
doSomething(); // does nothing, since the array is already empty
To accomplish this, you need to define checkboxes in a single location outside of the function, either as a global variable or using a closure (the much more recommended, albeit more complex, option):
NOTE: If you haven't dealt with closures before, they may not make much sense after only a single example. But, there are thousands of resources available, including Stack Overflow, to help explain them better than I can here.
// closure (or instantly-called function)
// allows for defining private variables, since they are only known by
// code blocks within this function
(function () {
    // private (closure-scoped) variable(s)
    var checkboxes = $(':input[name="checkbox"]');

    function doSomething() {
        $.each(checkboxes, /* ... */);
        checkboxes.length = 0;
    }
})();
The closure will run once, defining checkboxes as the array of inputs while doSomething will iterate the array before clearing it.
Now, the last step is to expose doSomething, because, as with checkboxes, it is also private. You can expose it by passing the function reference from the closure to a variable outside the closure:
var doSomething = (function () {
    /* ... */
    return doSomething; // note that you DO NOT want parentheses here
})();
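Putting the pieces together, a minimal sketch (the $.each body is just a stand-in for your real logic):
var doSomething = (function () {
    var checkboxes = $(':input[name="checkbox"]');   // captured once by the closure

    function doSomething() {
        $.each(checkboxes, function (key, value) {
            console.log(value.id);                   // stand-in for "... stuff"
        });
        checkboxes.length = 0;                       // cleared for the next call
    }

    return doSomething;   // return the function itself, not the result of calling it
})();

doSomething();   // logs every checkbox id, then clears the jQuery set
doSomething();   // does nothing - the set is already empty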
Is setting length even possible? ;)
Anyway, checkboxes is a jQuery object, not an array.
When you do checkboxes = $(':input[name="checkbox"]'); it's not an array any more, and whatever was there before has no influence. It doesn't matter what was in a variable if you assign something new to it, in any language I know.
You are making some jQuery-related error. Please elaborate more so that I can help.
Are you sure you put name="checkbox" on all of them? It doesn't seem to make a lot of sense. Maybe you wanted $(':input[type="checkbox"]');?
Edit: that's funny, the above selector isn't too good either. It should be:
$('input:checkbox');
as for removing stuff:
delete varname
delete arrname[elem]
is the right way to do it.
assigning null does not change the length but just makes trouble.
What about doing this:
checkboxes = new Array();
You can also delete it.
delete checkboxes;
When you rerun the function it will search for all checkboxes again.
Consider this strategy:
function() {
    var checkboxes = $(':input[name=checkbox]:not(.examined)');
    checkboxes.addClass('examined');
    $.each(checkboxes, function(key, value) {
        // ... stuff
    });
}
You could also use .data() if you don't want to "pollute" the DOM.
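For example, with .data() instead of a class (the function and key names are made up):
function processNewCheckboxes() {
    $(':input[name="checkbox"]').each(function () {
        var $box = $(this);
        if ($box.data('examined')) return;   // skip boxes handled on a previous run
        $box.data('examined', true);
        // ... stuff
    });
}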
