How do I create a memory leak in JavaScript?

I would like to understand what kind of code causes memory leaks in JavaScript and created the script below. However, when I run the script in Safari 6.0.4 on OS X the memory consumption shown in the Activity Monitor does not really increase.
Is something wrong with my script or is this no longer an issue with modern browsers?
<html>
<body>
</body>
<script>
  var i, el;

  function attachAlert(element) {
    element.onclick = function() { alert(element.innerHTML); };
  }

  for (i = 0; i < 1000000; i++) {
    el = document.createElement('div');
    el.innerHTML = i;
    attachAlert(el);
  }
</script>
</html>
The script is based on the Closure section of Google's JavaScript style guide:
http://google-styleguide.googlecode.com/svn/trunk/javascriptguide.xml?showone=Closures#Closures
EDIT: The bug that caused the above code to leak has apparently been fixed: http://jibbering.com/faq/notes/closures/#clMem
But my question remains: Would someone be able to provide a realistic example of JavaScript code that leaks memory in modern browsers?
There are many articles on the Internet that suggest memory leaks can be an issue for complex single page applications, but I have a hard time finding an example that I can run in my browser.

You're not keeping the elements you create around or referencing them anywhere - that's why you're not seeing the memory usage increase. Try attaching the element to the DOM, storing it in an object, or setting it as the onclick handler of a different element that sticks around. Then you'll see the memory usage skyrocket. The garbage collector will come through and clean up anything that can no longer be referenced.
Basically, a walkthrough of your code:
create an element (el)
create a new function that references that element
set the function to be the onclick of that element
overwrite the element with a new element
Everything hinges on the element existing. Once there isn't a way to access the element, the onclick can't be accessed anymore. So, since the onclick can't be accessed, the function that was created is destroyed... and the function had the only reference to the element... so the element is cleaned up as well.
Someone might have a more technical example, but that's the basis of my understanding of the JavaScript garbage collector.
Edit: Here's one of many possibilities for a leaking version of your script:
<html>
<body>
</body>
<script>
  var i, el, event;
  var createdElements = {};
  var events = [];

  function attachAlert(element) {
    element.onclick = function() { alert(element.innerHTML); };
  }

  function reallyBadAttachAlert(element) {
    return function() { alert(element.innerHTML); };
  }

  for (i = 0; i < 1000000; i++) {
    el = document.createElement('div');
    el.innerHTML = i;

    /** possibility one: you're storing the element somewhere **/
    attachAlert(el);
    createdElements['div' + i] = el;

    /** possibility two: you're storing the callbacks somewhere **/
    event = reallyBadAttachAlert(el);
    events.push(event);
    el.onclick = event;
  }
</script>
</html>
So, for #1, you're simply storing a reference to that element somewhere. Doesn't matter that you'll never use it - because that reference is made in the object, the element and its callbacks will never go away (or at least until you delete the element from the object). For possibility #2, you could be storing the events somewhere. Because the event can be accessed (i.e. by doing events[10]();) even though the element is nowhere to be found, it's still referenced by the event.. so the element will stay in memory as well as the event, until it's removed from the array.
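To see the flip side, here is a minimal sketch (reusing the createdElements and events names from the snippet above) of how you would release those references so the garbage collector can reclaim everything:

// drop the stored element references so they become unreachable
createdElements = {};
// emptying the array releases the stored callbacks too
events.length = 0;

Once no reference chain from a GC root reaches the elements or their closures, both become eligible for collection.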

Update: here is a very simple example based on the caching scenario in the Google I/O presentation:
/*
  This is an example of a memory leak. A new property is added to the cache
  object 10 times/second. The value of performance.memory.usedJSHeapSize
  steadily increases.

  Since the value of cache[key] is easy to recalculate, we might want to free
  that memory if it becomes low. However, there is no way to do that...

  Another method to manually clear the cache could be added, but manually
  adding memory checks adds a lot of extra code and overhead. It would be
  nice if we could clear the cache automatically only when memory became low.
  Thus the solution presented at Google I/O!
*/
(function(w) {
  var cache = {};

  function getCachedThing(key) {
    if (!(key in cache)) {
      cache[key] = key;
    }
    return cache[key];
  }

  var i = 0;
  setInterval(function() {
    getCachedThing(i++);
  }, 100);

  w.getCachedThing = getCachedThing;
})(window);
Because usedJSHeapSize does not update when the page is opened from the local file system, you might not see the increasing memory usage. In that case, I have hosted this code for you here: https://memory-leak.surge.sh/example-for-waterfr
This Google I/O'19 presentation gives examples of real-world memory leaks as well as strategies for avoiding them:
Method getImageCached() returns a reference to an object, also caching a local reference. Even if this reference goes out of the method consumer's scope, the referenced memory cannot be garbage collected because there is still a strong reference inside the implementation of getImageCached(). Ideally, the cached reference would be eligible for garbage collection if memory got too low. (Not exactly a memory leak, but a situation where there is memory that could be freed at the cost of running the expensive operations again.)
Leak #1: the reference to the cached image. Solved by using weak references inside getImageCached().
Leak #2: the string keys inside the cache (Map object). Solved by using the new FinalizationGroup API.
Please see the linked video for JS code with line-by-line explanations.
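For a rough idea of the weak-reference approach (a sketch, not the presenter's exact code; note that the FinalizationGroup proposal later shipped in browsers under the name FinalizationRegistry, which is what is used here, and getImage() stands in for the expensive operation):

// Sketch of a weak-value cache
const cache = new Map(); // key -> WeakRef
const registry = new FinalizationRegistry(function(key) {
  cache.delete(key); // leak #2 fix: drop the key once its value is collected
});

function getImageCached(key) {
  const ref = cache.get(key);
  const hit = ref && ref.deref();
  if (hit !== undefined) return hit;
  const value = getImage(key); // hypothetical expensive call
  cache.set(key, new WeakRef(value)); // leak #1 fix: no strong reference kept
  registry.register(value, key);
  return value;
}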
More generally, "real" JS memory leaks are caused by unwanted references (to objects that will never be used again). They are usually bugs in the JS code. This article explains four common ways memory leaks are introduced in JS (the first two are sketched after the list):
Accidental global variables
Forgotten timers/callbacks
Out of DOM references
Closures
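As a quick illustration of the first two (a minimal sketch, not taken from the article; fetchHugeData is a hypothetical expensive call):

function leaky() {
  // accidental global: no var/let/const, so this lands on window
  hugeBuffer = new Array(1000000).fill('leak');
}
leaky();

// forgotten timer: the interval keeps its closure (and `data`) alive forever
var data = fetchHugeData(); // hypothetical
setInterval(function() {
  console.log(data.length);
}, 1000);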
The article "An interesting kind of JavaScript memory leak" documents how closures caused a memory leak in the popular MeteorJS framework.

2020 Update:
Most CPU-side memory overflows no longer work in modern V8-based browsers. However, we can overflow GPU-side memory by running this script:
// Initialize a canvas and its context
window.reallyFatCanvas = document.createElement('canvas');
let context = window.reallyFatCanvas.getContext('2d');

// Reference a new context inside the current context, in a loop
function leakingLoop() {
  context.canvas.width = document.body.clientWidth;
  context.canvas.height = document.body.clientHeight;
  const newContext = document.createElement('canvas').getContext('2d');
  context.context = newContext;
  context.drawImage(newContext.canvas, 0, 0);
  // The new context will reference another context on the next loop
  context = newContext;
}

// Use an interval instead of while(true) {...}
setInterval(leakingLoop, 1);
EDIT: I renamed all the variables (and constants) so the code makes more sense. Here is the explanation.
Based on my observation, a canvas context seems to be backed by video memory. So if we keep a reference to a canvas object which also references another canvas object, and so on, the video RAM fills up a lot faster than DRAM; tested on Microsoft Edge and Chrome.
(A screenshot of the resulting memory usage was included here.)
I have no idea why my laptop always freezes a few seconds after taking a screenshot while running this script. Please be careful if you want to try it.

I tried to do something like that and got an out-of-memory exception.
const test = (array) => {
  array.push((new Array(1000000)).fill('test'));
};

const testArray = [];
for (let i = 0; i <= 1000; i++) {
  test(testArray);
}

The Easiest Way Is:
while(true){}

A small example of code causing a 1MB memory leak:
Object.defineProperty(globalThis, Symbol(), {value: new Uint8Array(1<<20).slice(), writable: false, configurable: false})
After you run that code, the only way to free the leaked memory is to close the tab you ran it on: the Symbol key is never stored anywhere, so the property can never be looked up again, and since it is non-configurable it cannot be deleted either.

If all you want is to create a memory leak, then the easiest way IMO is to instantiate a TypedArray, since each one grabs a fixed block of memory up front. For example, creating a Float64Array with 2^27 elements consumes 1 GiB (1 gibibyte) of memory, since it needs 8 bytes per element.
Start the console and just write this:
new Float64Array(Math.pow(2, 27))
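If you want the allocations to stick around under your control rather than depending on the console keeping a reference, a trivial variation (a sketch, nothing more) is to pin each array in a global array:

var hog = [];
// each call pins another gibibyte until the page is closed
function leakGiB() {
  hog.push(new Float64Array(Math.pow(2, 27)));
}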


When and How JavaScript garbage collector works

I did read a few articles, like this one on MDN and this one, and I got the idea of how GC happens in JavaScript.
I still don't understand things like:
a) When does the garbage collector kick in (is it called at some interval, or do some conditions have to be met)?
b) Who is responsible for garbage collection (is it part of the JavaScript engine, or of the browser/Node)?
c) Does it run on the main thread or a separate thread?
d) Which of the following has higher peak memory usage?
// first case:
// variables become unreachable after each iteration
(function() {
  for (let i = 0; i < 10000; i++) {
    let name = 'this is name' + i;
    let index = i;
  }
})();

// second case:
// variables are created once
(function() {
  let i, name, index;
  for (i = 0; i < 10000; i++) {
    name = 'this is name' + i;
    index = i;
  }
})();
V8 developer here. The short answer is: it's complicated. In particular, different JavaScript engines, and different versions of the same engine, will do things differently.
To address your specific questions:
a) When does the garbage collector kick in (is it called at some interval, or do some conditions have to be met)?
Depends. Probably both. Modern garbage collectors often are generational: they have a relatively small "young generation", which gets collected whenever it is full. Additionally they have a much larger "old generation", where they typically do their work in many small steps, so as to never interrupt execution for too long. One common way to trigger such a small step is when N bytes (or objects) have been allocated since the last step. Another way, especially in modern tabbed browsers, is to trigger GC activity when a tab is inactive or in the background. There may well be additional triggers beyond these two.
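If you want to poke at this yourself, one way (a sketch, assuming Node.js started with the --expose-gc flag, which makes global.gc() available) is to watch heap usage around a forced collection:

// run with: node --expose-gc gc-demo.js
let junk = [];
for (let i = 0; i < 100000; i++) junk.push({ n: i });
console.log('before:', process.memoryUsage().heapUsed);
junk = null;  // make the objects unreachable
global.gc();  // force a full collection (only works with --expose-gc)
console.log('after:', process.memoryUsage().heapUsed);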
b) Who is responsible for garbage collection (is it part of the JavaScript engine, or of the browser/Node)?
The garbage collector is part of the JavaScript engine. That said, it must have certain interactions with the respective embedder to deal with embedder-managed objects (e.g. DOM nodes) whose lifetime is tied to JavaScript objects in one way or another.
c) Does it run on the main thread or a separate thread?
Depends. In a modern implementation, typically both: some work happens in the background (in one or more threads), some steps are more efficient to do on the main thread.
d) Which of the following has higher peak memory usage?
These two snippets will (probably) have the same peak memory usage: neither of them ever lets objects allocated by more than one iteration be reachable at the same time.
Edit: if you want to read more about recent GC-related work that V8 has been doing, you can find a series of blog posts here: https://v8.dev/blog/tags/memory

Can this code produce a memory leak?

It looks like I found a memory leak in my code, but I'm not sure, and I don't have much experience with Node.js memory leaks.
Can someone explain to me whether this code can produce a memory leak?
var tasks = [];

// each 10 seconds
tasks.push(function () {
  console.log('hello, world!');
});

// each minute
while (tasks.length) {
  var task = tasks.shift();
  task();
}
UPD: I had missed the while loop in my code; it's updated now.
My question is: will the scope of my anonymous function from the array be cleared from memory?
Well, it's not a memory leak, but you're putting new elements into your array six times faster than you are retrieving them.
As a result, you will actually be using only one out of every six pushed functions, and your array will keep growing.
If you let it run long enough, you'll end up with a massive array that can never be emptied.
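For concreteness, here is roughly what that pre-edit producer/consumer mismatch looks like with the timers spelled out (a sketch based on the comments in the question, not the asker's actual code):

var tasks = [];

// producer: six pushes per minute
setInterval(function() {
  tasks.push(function() { console.log('hello, world!'); });
}, 10 * 1000);

// consumer: a single task per minute
setInterval(function() {
  var task = tasks.shift();
  if (task) task();
}, 60 * 1000);

Five functions accumulate every minute; replacing the single shift with the question's while loop drains the queue completely, which is what the EDIT below addresses.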
EDIT: After you added the while loop, the array is not growing anymore, and there shouldn't be any memory leak coming from this part of your code. That does not mean there is none in your project. Make sure any value created in your pushed functions can properly be garbage collected (i.e. that you did not keep a reference to it somewhere).

gsap tweenlite/tweenmax garbage collecting, references and performances

I'm trying to understand what's the best way to use TweenLite/TweenMax.
Is it useful to reference all my tweens with the same variable?
After killing the tween with the corresponding public method, do I have to set the reference to null to help the garbage collector dispose of it?
Below there is a well commented example:
$(document).ready(function () {
  var elementOne = $('#elementOne');
  var elementTwo = $('#elementTwo');
  var myTween;

  // is it useful to overwrite the variable?
  myTween = TweenMax.to(elementOne, 1, {
    opacity: 0
  });
  myTween = TweenMax.to(elementTwo, 1, {
    left: 0,
    onComplete: destroy
  });

  function destroy() {
    // suggested in the TweenMax docs;
    // the console.log still returns the object
    myTween.kill();
    console.log(myTween);

    // is it required for garbage collecting?
    // now the console.log returns null
    myTween = null;
    console.log(myTween);

    // and then... jQuery GC-friendly remove
    elementOne.remove();
    elementTwo.remove();
  }
});
You don't need to do anything special to make a tween (or timeline) available for gc other than what you'd normally do for any JS object. In other words, if you maintain a reference in your own code to an instance, it'll stick around (otherwise your code could break). But you do NOT need to specifically kill() a tween. A lot of effort has gone into GSAP to ensure that things are optimized and headache-free. The engine will automatically release completed tweens for garbage collection when necessary. And yet a tween will still work if you maintain a reference and restart() it, for example.
Just because you call kill() on a tween instance, that doesn't force the browser to run its garbage collection routine. It doesn't null your variable either. That's just how JavaScript works (and that's a good thing). It has nothing to do with TweenLite/Max specifically.
Also keep in mind that you don't need to store any tween instances in variables. The only time it's helpful is if you need to control the tween later (or insert it into a timeline or something like that). Typically it's fine to just call TweenMax.to(...) without storing the result in a variable.
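In other words (a minimal sketch using the same jQuery-wrapped style as the question; TweenMax.to() is the standard GSAP call shown above):

// fire-and-forget: no variable, nothing for you to clean up
TweenMax.to($('#elementOne'), 1, { opacity: 0 });

// keep a reference only when you need to control the tween later
var tween = TweenMax.to($('#elementTwo'), 1, { left: 0 });
tween.restart(); // still works because we kept the reference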
Does that clear things up?

JavaScript Performance: Multiple variables or one object?

This is just a simple performance question, to help me understand the JavaScript engine.
I was wondering what is faster: declaring multiple variables for certain values, or using one object containing multiple values.
example:
var x = 15;
var y = 300;
vs.
var sizes = { x: 15, y: 300 };
this is just a very simple example; things could of course differ in a real project.
does this even matter?
A complete answer to that question would be really long, so I'll try to explain just a few things. First, and maybe most important: even if you declare a variable with var, it depends where you do that. In the global scope, you would implicitly also write that variable into an object; most browsers call it window. So for instance
// global scope
var x = 15;
console.log( window.x ); // 15
If we do the same thing within the context of a function, things change. There, we write the variable name into the function's so-called 'Activation Object', an internal object which the JS engine handles for you. All formal parameters, function declarations and variables are stored there.
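A minimal illustration of where the names end up:

var g = 1; // global scope: becomes a property of window

function f(p) {    // p lives in the function's activation object
  var local = 2;   // so does local
  return p + local; // both resolved without touching window
}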
Now to answer your actual question: within the context of a function, variables declared with var always give the fastest possible access. This again is not necessarily true in the global context. The global object is huge, and accessing anything within it is not really fast.
If we store things within an object, it's still very fast, but not as fast as variables declared with var. In particular, the access times increase. Nonetheless, we are talking about micro- and nanoseconds here (in modern browser implementations). Oldish browsers, especially IE6+7, have huge performance penalties when accessing object properties.
If you are really interested in stuff like this, I highly recommend the book 'High Performance JavaScript' by Nicholas C. Zakas. He measured lots of different techniques for accessing and storing data in ECMAScript.
Again, the performance difference between object lookups and variables declared with var is almost not measurable in modern browsers. Oldish browsers like FF3 or IE6 do show fundamentally slow performance for object lookups/access.
foo_bar is always faster than foo.bar in every modern browser (IE11+/Edge and any version of Chrome, FireFox, and Safari) and NodeJS so long as you see performance as holistic (which I recommend you should). After millions of iterations in a tight loop, foo.bar may approach (but never surpass) the same ops/s as foo_bar due to the wealth of correct branch predictions. Notwithstanding, foo.bar incurs a ton more overhead during both JIT compilation and execution because it is so much more complex of an operation. JavaScript that features no tight loops benefits an extra amount from using foo_bar because, in comparison, foo.bar would have a much higher overhead:savings ratio such that there was extra overhead involved in the JIT of foo.bar just to make foo.bar a little faster in a few places. Granted, all JIT engines intelligently try to guess how much effort should be put into optimizing what to minimize needless overhead, but there is still a baseline overhead incurred by processing foo.bar that can never be optimized away.
Why? JavaScript is a highly dynamic language, where there is costly overhead associated with every object. It was originally a tiny scripting language executed line-by-line and still exhibits line-by-line execution behavior (it's not executed line-by-line anymore but, for example, one can do something evil like var a=10;eval('a=20');console.log(a) to log the number 20). JIT compilation is highly constrained by this fact that JavaScript must observe line-by-line behavior. Not everything can be anticipated by JIT, so all code must be slow in order for extraneous code such as is shown below to run fine.
(function() {"use strict";
// chronological optimization is very poor because it is so complicated and volatile
var setTimeout=window.setTimeout;
var scope = {};
scope.count = 0;
scope.index = 0;
scope.length = 0;
function increment() {
// The code below is SLOW because JIT cannot assume that the scope object has not changed in the interum
for (scope.index=0, scope.length=17; scope.index<scope.length; scope.index=scope.index+1|0)
scope.count = scope.count + 1|0;
scope.count = scope.count - scope.index + 1|0;
}
setTimeout(function() {
console.log( scope );
}, 713);
for(var i=0;i<192;i=i+1|0)
for (scope.index=11, scope.length=712; scope.index<scope.length; scope.index=scope.index+1|0)
setTimeout(increment, scope.index);
})();
(function() {"use strict";
// chronological optimization is very poor because it is so complicated and volatile
var setTimeout=window.setTimeout;
var scope_count = 0;
var scope_index = 0;
var scope_length = 0;
function increment() {
// The code below is FAST because JIT does not have to use a property cache
for (scope_index=0, scope_length=17; scope_index<scope_length; scope_index=scope_index+1|0)
scope_count = scope_count + 1|0;
scope_count = scope_count - scope_index + 1|0;
}
setTimeout(function() {
console.log({
count: scope_count,
index: scope_index,
length: scope_length
});
}, 713);
for(var i=0;i<192;i=i+1|0)
for (scope_index=4, scope_length=712; scope_index<scope_length; scope_index=scope_index+1|0)
setTimeout(increment, scope_index);
})();
Performing a one-sample z-interval by running each code snippet above 30 times and seeing which one gave a higher count, I am 90% confident that the latter code snippet with pure variable names is faster than the first code snippet with object access between 76.5% and 96.9% of the time. As another way to analyze the data, there is a 0.0000003464% chance that the data I collected was a fluke and the first snippet is actually faster. Thus, I believe it is reasonable to infer that foo_bar is faster than foo.bar because there is less overhead.
Don't get me wrong. Hash maps are very fast because many engines feature advanced property caches, but there will still always be enough extra overhead when using hash maps. Observe.
(function(){"use strict"; // wrap in iife
// This is why you should not pack variables into objects
var performance = window.performance;
var iter = {};
iter.domino = -1; // Once removed, performance topples like a domino
iter.index=16384, iter.length=16384;
console.log(iter);
var startTime = performance.now();
// Warm it up and trick the JIT compiler into false optimizations
for (iter.index=0, iter.length=128; iter.index < iter.length; iter.index=iter.index+1|0)
if (recurse_until(iter, iter.index, 0) !== iter.domino)
throw Error('mismatch!');
// Now that its warmed up, drop the cache off cold and abruptly
for (iter.index=0, iter.length=16384; iter.index < iter.length; iter.index=iter.index+1|0)
if (recurse_until(iter, iter.index, 0) !== iter.domino)
throw Error('mismatch!');
// Now that we have shocked JIT, we should be running much slower now
for (iter.index=0, iter.length=16384; iter.index < iter.length; iter.index=iter.index+1|0)
if (recurse_until(iter, iter.index, 0) !== iter.domino)
throw Error('mismatch!');
var endTime=performance.now();
console.log(iter);
console.log('It took ' + (endTime-startTime));
function recurse_until(obj, _dec, _inc) {
var dec=_dec|0, inc=_inc|0;
var ret = (
dec > (inc<<1) ? recurse_until(null, dec-1|0, inc+1|0) :
inc < 384 ? recurse_until :
// Note: do not do this in production. Dynamic code evaluation is slow and
// can usually be avoided. The code below must be dynamically evaluated to
// ensure we fool the JIT compiler.
recurse_until.constructor(
'return function(obj,x,y){' +
// rotate the indices
'obj.domino=obj.domino+1&7;' +
'if(!obj.domino)' +
'for(var key in obj){' +
'var k=obj[key];' +
'delete obj[key];' +
'obj[key]=k;' +
'break' +
'}' +
'return obj.domino' +
'}'
)()
);
if (obj === null) return ret;
recurse_until = ret;
return obj.domino;
}
})();
For a performance comparison, observe pass-by-reference via an array and local variables.
// This is the correct way to write blazingly fast code
(function() { "use strict"; // wrap in IIFE
  var performance = window.performance;
  var iter_domino = [0, 0, 0]; // Now, domino is a pass-by-reference list
  var iter_index = 16384, iter_length = 16384;
  var startTime = performance.now();

  // Warm it up and trick the JIT compiler into false optimizations
  for (iter_index = 0, iter_length = 128; iter_index < iter_length; iter_index = iter_index + 1 | 0)
    if (recurse_until(iter_domino, iter_index, 0)[0] !== iter_domino[0])
      throw Error('mismatch!');

  // Now that it's warmed up, drop the cache off cold and abruptly
  for (iter_index = 0, iter_length = 16384; iter_index < iter_length; iter_index = iter_index + 1 | 0)
    if (recurse_until(iter_domino, iter_index, 0)[0] !== iter_domino[0])
      throw Error('mismatch!');

  // Now that we have shocked JIT, we should be running much slower now
  for (iter_index = 0, iter_length = 16384; iter_index < iter_length; iter_index = iter_index + 1 | 0)
    if (recurse_until(iter_domino, iter_index, 0)[0] !== iter_domino[0])
      throw Error('mismatch!');

  var endTime = performance.now();
  console.log('It took ' + (endTime - startTime));

  function recurse_until(iter_domino, _dec, _inc) {
    var dec = _dec | 0, inc = _inc | 0;
    var ret = (
      dec > (inc << 1) ? recurse_until(null, dec - 1 | 0, inc + 1 | 0) :
      inc < 384 ? recurse_until :
      // Note: do not do this in production. Dynamic code evaluation is slow and
      // can usually be avoided. The code below must be dynamically evaluated to
      // ensure we fool the JIT compiler.
      recurse_until.constructor(
        'return function(iter_domino, x,y){' +
          // rotate the indices
          'iter_domino[0]=iter_domino[0]+1&7;' +
          'if(!iter_domino[0])' +
            'iter_domino.push( iter_domino.shift() );' +
          'return iter_domino' +
        '}'
      )()
    );
    if (iter_domino === null) return ret;
    recurse_until = ret;
    return iter_domino;
  }
})();
JavaScript is very different from other languages in that benchmarks can easily be a performance-sin when misused. What really matters is what should in theory run the fastest accounting for everything in JavaScript. The browser you are running your benchmark in right now may fail to optimize for something that a later version of the browser will optimize for.
Further, browsers are guided in the direction that we program. If everyone used CodeA, which makes no performance sense via pure logic but is really fast (44Kops/s) only in a certain browser, other browsers will lean towards optimizing CodeA, and CodeA may eventually surpass 44Kops/s in all browsers. On the other hand, if CodeA were really slow in all browsers (9Kops/s) but very logical performance-wise, browsers would be able to take advantage of that logic, and CodeA might soon surpass 900Kops/s in all browsers. Ascertaining the logical performance of code is very simple and very difficult. One must put oneself in the shoes of the computer and imagine one has an infinite amount of paper, an infinite supply of pencils, an infinite amount of time, and no ability to interpret the purpose/intention of the code. How can you structure your code to fare the best under such hypothetical circumstances? For example, hypothetically, the hash map lookups incurred by foo.bar would be a bit slower than foo_bar because foo.bar would require looking at the table named foo and finding the property named bar. You could put your finger on the location of the bar property to cache it, but the overhead of looking through the table to find bar cost time.
You are definitely micro-optimizing. I wouldn't worry about it until there is a demonstrable performance bottleneck, and you have narrowed the issue to using multiple vars vs an object with properties.
Thinking about it logically, the object approach requires three variable creations - one for the object, and one for each property on the object - versus two for just declaring the variables. So the object approach has a higher memory footprint. However, it is probably more efficient to pass an object to a method than n > 1 variables, since you only need to copy one value (JavaScript is pass by value). This also has implications for keeping track of the lexical scoping of the objects; i.e. passing fewer things to methods will use less memory.
However, I doubt the performance differences will even be quantifiable by any profiler.
Theory, or questions like "What are you... hmm... doing, dude?", can of course appear here as answers. But I don't think that's a good approach.
I just created two test benches:
Specific, http://jsben.ch/SvNyw for the global scope
It shows, for example, that as of 07/2017, in Chromium-based browsers (Vivaldi, Opera, Google Chrome and others), it is preferable to use var to achieve maximum performance. It works about 25% faster for reading values and 10% faster for writing them.
Under Node.js the results are about the same - because of the same JS engine.
In Opera Presto (12.18) the test results show similar percentages to the Chromium-based browsers.
In (modern) Firefox the picture is different and strange. Reading a global scope var is around the same as reading an object property, and writing a global scope var is dramatically slower than writing obj.prop (around twice as slow). It seems like a bug.
For testing under IE/Edge or anything else, you are welcome.
Normal case, http://jsben.ch/5UvSZ for in-function local scope
In both Chromium-based browsers and Mozilla Firefox you can see simple var performance hugely dominating object property access. Local simple variables are several times (!) faster than dealing with object properties.
So,
if you need to maximize the performance of some critical JavaScript code:
in the browser - you may be forced to make different optimizations for different browsers. I don't recommend it! Or you can select some "favorite" browser, optimize your code for it, and ignore the freezes that happen in the others. Not very good, but it is a way.
in the browser, again - do you really need to optimize this way? Maybe something is wrong in your algorithm / code logic?
in a high-load Node.js module (or other high-load calculations) - well, try to minimize object "dots", with minimized damage to quality/readability of course - use var.
The safe optimization trick for any case: when you have too many operations with obj.subobj.*, you can do var subobj = obj.subobj; and operate with subobj.*. This can even improve readability; see the sketch below.
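A minimal sketch of that trick (the names are made up for illustration):

// instead of repeatedly walking the chain:
total = app.config.limits.max + app.config.limits.min;

// cache the subobject once and reuse it:
var limits = app.config.limits;
total = limits.max + limits.min;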
In any case, think about what you actually need to do, and make real benchmarks of your high-load code.

Using JavaScript with Internet Explorer, how do I clear memory without refreshing the page?

I have an AJAX-based website using JavaScript on the client. Certain operations on the site cache large result sets from service calls in the browser (i.e. hundreds of megabytes). These are throw-away results. They will be viewed for a short time and then need to be cleared from memory.
I've written a small test site that loads a bunch of junk in memory and then uses JavaScript's delete operator. This works great in Firefox (memory almost instantly gets returned). Internet Explorer 8 (haven't tried 7) doesn't free the memory until the page is refreshed or closed.
Does anyone know how to drop IE's memory usage using JavaScript/Ajax (no page refreshes)?
Below is my sample client code:
function load() {
  var x = ['dfjasdlfkjsa;dflkjsad;flkjsadf;lj'];
  for (var i = 0; i < 10000000; ++i) {
    x.push('asdfasfasfsfasdfkasjfslafkjslfjsalfjsaldfkjasl;dfkjsadfl;kjsdflskajflskfjslakfjaslfkjsaldfkjsaldfksdfjk');
  }
  alert('deleting'); // <--- memory usage around 500mb
  delete x; // <--- immediate results in Firefox 3.5 (not IE8)
  alert('done');
}
UPDATE: Setting the variable to null does not immediately clear the memory (that is left up to the garbage collector). Also, setting a variable to null only clears a single reference, where there might be multiple references.
IE (well, technically, JScript) has an undocumented CollectGarbage method, which supposedly forces garbage collector to run immediately. You might want to play with that, but from my experience, nulling references is enough most of the time.
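A guarded call (a sketch; CollectGarbage is undocumented and JScript-only, so feature-test before using it):

// only exists in IE's JScript engine
if (typeof CollectGarbage === 'function') {
  CollectGarbage();
}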
Instead of using the delete operator, set the variable to null. This seems to be the best way to clear up memory cross-browser (delete is notoriously flaky). It works the same as my other answer here.
alert('deleting'); // <--- memory usage around 500mb
x = null;
alert('done');
And if you try x = null; instead of delete?
Just assign null to the variable:
x = null;
I realize you mentioned the data is throw-away, but when you delete the nodes, even if just by setting x to null, you need to make certain that no event handlers are attached to any of the nodes; otherwise, those nodes cannot be garbage collected.
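For instance (a minimal sketch of the clean-up order, not the poster's code):

// detach handlers first, then drop the reference
node.onclick = null;
if (node.parentNode) node.parentNode.removeChild(node);
node = null; // now nothing pins it in older IE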
Here is a function I use in my own code:
http://javascript.crockford.com/memory/leak.html
