In JavaScript, we have two ways of making a "class" and giving it public functions.
Method 1:
function MyClass() {
    var privateInstanceVariable = 'foo';
    this.myFunc = function() { alert(privateInstanceVariable); }
}
Method 2:
function MyClass() { }
MyClass.prototype.myFunc = function() {
    alert("I can't use private instance variables. :(");
}
I've read numerous times people saying that using Method 2 is more efficient as all instances share the same copy of the function rather than each getting their own. Defining functions via the prototype has a huge disadvantage though - it makes it impossible to have private instance variables.
Even though, in theory, using Method 1 gives each instance of an object its own copy of the function (and thus uses far more memory, not to mention the time required for allocations), is that what actually happens in practice? It seems like an optimization web browsers could easily make would be to recognize this extremely common pattern and have all instances of the object reference the same copy of the functions defined via these "constructor functions", only giving an instance its own copy of the function if it is explicitly changed later on.
Any insight - or, even better, real world experience - about performance differences between the two, would be extremely helpful.
See http://jsperf.com/prototype-vs-this
Declaring your methods via the prototype is faster, but whether or not this is relevant is debatable.
If you have a performance bottleneck in your app it is unlikely to be this, unless you happen to be instantiating 10000+ objects on every step of some arbitrary animation, for example.
If performance is a serious concern, and you'd like to micro-optimise, then I would suggest declaring via prototype. Otherwise, just use the pattern that makes most sense to you.
I'll add that, in JavaScript, there is a convention of prefixing properties that are intended to be seen as private with an underscore (e.g. _process()). Most developers will understand and avoid these properties, unless they're willing to forgo the social contract - but in that case you might as well not cater to them. What I mean to say is: you probably don't really need true private variables...
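For illustration, a minimal sketch of that convention (the Account name and its members are hypothetical):

function Account(balance) {
    this._balance = balance;   // underscore: technically public, but "treat as private"
}

Account.prototype.deposit = function (amount) {
    this._balance += amount;
};

Account.prototype.getBalance = function () {
    return this._balance;
};

var acc = new Account(100);
acc.deposit(50);
console.log(acc.getBalance()); // 150 - nothing stops you reading acc._balance, the convention just asks you not to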
In the new version of Chrome, this.method is about 20% faster than prototype.method, but creating a new object is still slower.
If you can reuse the object instead of always creating a new one, this can be 50% - 90% faster than creating new objects. Plus the benefit of no garbage collection, which is huge:
http://jsperf.com/prototype-vs-this/59
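For a rough illustration of the reuse idea (this is only a sketch, not the linked benchmark; Point and reset are hypothetical names):

// Allocating a fresh object per call creates garbage for the GC to collect.
function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.reset = function (x, y) { this.x = x; this.y = y; return this; };

function allocating(i) {
    return new Point(i, i * 2);        // new object every call
}

var scratch = new Point(0, 0);         // one shared instance, mutated in place
function reusing(i) {
    return scratch.reset(i, i * 2);    // no allocation, no garbage
}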
It only makes a difference when you're creating lots of instances. Otherwise, the performance of calling the member function is exactly the same in both cases.
I've created a test case on jsperf to demonstrate this:
http://jsperf.com/prototype-vs-this/10
You might not have considered this, but putting the method directly on the object is actually better in one way:
Method invocations are very slightly faster (jsperf) since the prototype chain does not have to be consulted to resolve the method.
However, the speed difference is almost negligible. On top of that, putting a method on a prototype is better in two more impactful ways:
Faster to create instances (jsperf)
Uses less memory
Like James said, this difference can be important if you are instantiating thousands of instances of a class.
That said, I can certainly imagine a JavaScript engine that recognizes that the function you are attaching to each object does not change across instances and thus only keeps one copy of the function in memory, with all instance methods pointing to the shared function. In fact, it seems that Firefox is doing some special optimization like this but Chrome is not.
ASIDE:
You are right that it is impossible to access private instance variables from inside methods on prototypes. So I guess the question you must ask yourself is: do you value being able to make instance variables truly private over utilizing inheritance and prototyping? I personally think that making variables truly private is not that important and would just use the underscore prefix (e.g., "this._myVar") to signify that although the variable is public, it should be considered private. That said, in ES6, there is apparently a way to have the best of both worlds!
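One commonly cited ES6-era approach (a sketch of what that might look like, not necessarily what the answer had in mind) is to key a WeakMap by the instance, so the methods stay shared on the prototype/class while the state is unreachable as a property:

const _private = new WeakMap();        // holds per-instance "private" state

class Counter {
    constructor() {
        _private.set(this, { count: 0 });
    }
    increment() {
        _private.get(this).count++;
    }
    get value() {
        return _private.get(this).count;
    }
}

const c = new Counter();
c.increment();
console.log(c.value);   // 1
console.log(c.count);   // undefined - the state is not exposed as a property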
You may use this approach, and it will allow you to use the prototype and still access instance variables.
var Person = (function () {
    function Person(age, name) {
        this.age = age;
        this.name = name;
    }

    Person.prototype.showDetails = function () {
        alert('Age: ' + this.age + ' Name: ' + this.name);
    };

    return Person; // This is not referencing `var Person` but the inner Person function
}()); // See Note1 below
Note1:
The parentheses call the function immediately (a self-invoking function) and assign the result to the variable Person.
Usage
var p1 = new Person(40, 'George');
var p2 = new Person(55, 'Jerry');
p1.showDetails();
p2.showDetails();
In short, use method 2 for creating properties/methods that all instances will share. Those will be "global" and any change to them will be reflected across all instances. Use method 1 for creating instance-specific properties/methods.
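A minimal sketch of that split (the Widget example is hypothetical):

function Widget(id) {
    var clicks = 0;                     // instance-specific and private via the closure
    this.id = id;                       // instance-specific and public
    this.recordClick = function () {    // needs the closure, so defined per instance (method 1)
        clicks++;
        return clicks;
    };
}

// Shared by every instance (method 2); changing it affects them all.
Widget.prototype.describe = function () {
    return 'Widget ' + this.id;
};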
I wish I had a better reference but for now take a look at this. You can see how I used both methods in the same project for different purposes.
Hope this helps. :)
This answer should be considered an expansion of the rest of the answers filling in missing points. Both personal experience and benchmarks are incorporated.
As far as my experience goes, I use constructors to literally construct my objects religiously, whether methods are private or not. The main reason is that when I started, that was the easiest immediate approach for me, so it's not a special preference. It might be as simple as that I like visible encapsulation and prototypes feel a bit disembodied. My private methods are assigned as variables in that scope as well. Although this is my habit and keeps things nicely self-contained, it's not always the best habit and I do sometimes hit walls. Apart from wacky scenarios with highly dynamic self-assembly according to configuration objects and code layout, it tends to be the weaker approach in my opinion, particularly if performance is a concern. Knowing that the internals are private is useful, but you can achieve that by other means with the right discipline. Unless performance is a serious consideration, use whatever works best for the task at hand.
Using prototype inheritance and a convention to mark items as private does make debugging easier, as you can then traverse the object graph easily from the console or debugger. On the other hand, such a convention makes obfuscation somewhat harder and makes it easier for others to bolt their own scripts onto your site. This is one of the reasons the private scope approach gained popularity. It's not true security, but it adds resistance. Unfortunately a lot of people still think it's a genuine way to write secure JavaScript. Since debuggers have gotten really good, code obfuscation takes its place. If you're looking for security flaws where too much is on the client, it's a design pattern you might want to look out for.
A convention allows you to have protected properties with little fuss. That can be a blessing and a curse. It does ease some inheritance issues as it is less restrictive. You still have the risk of collision, or increased cognitive load in considering where else a property might be accessed. Self-assembling objects let you do some strange things where you can get around a number of inheritance problems, but they can be unconventional. My modules tend to have a rich inner structure where things don't get pulled out until the functionality is needed elsewhere (shared) or exposed unless needed externally. The constructor pattern tends to lead to creating self-contained sophisticated modules more so than simply piecemeal objects. If you want that, then it's fine. Otherwise, if you want a more traditional OOP structure and layout, then I would probably suggest regulating access by convention. In my usage scenarios complex OOP isn't often justified and modules do the trick.
All of the tests here are minimal. In real-world usage it is likely that modules will be more complex, making the hit a lot greater than the tests here indicate. It's quite common to have a private variable with multiple methods working on it, and each of those methods adds more overhead on initialisation that you won't get with prototype inheritance. In most cases it doesn't matter, because only a few instances of such objects float around, although cumulatively it might add up.
There is an assumption that prototype methods are slower to call because of the prototype lookup. It's not an unfair assumption; I made the same one myself until I tested it. In reality it's complex, and some tests suggest that aspect is trivial. Between prototype.m = f, this.m = f, and this.m = function..., the last performs significantly better than the first two, which perform around the same. If the prototype lookup alone were a significant issue, the last two instead would outperform the first significantly. Instead something else strange is going on, at least where Canary is concerned. It's possible functions are optimised according to what they are members of. A multitude of performance considerations come into play. You also have differences for parameter access and variable access.
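For what it's worth, here is a rough sketch of the kind of comparison being described (not the original test; timings vary a lot by engine and by how hot the code is):

function sharedFn() { return this.x; }

function ProtoStyle(x) { this.x = x; }                    // prototype.m = f
ProtoStyle.prototype.m = sharedFn;

function RefStyle(x) { this.x = x; this.m = sharedFn; }   // this.m = f

function ClosureStyle(x) {                                // this.m = function...
    this.x = x;
    this.m = function () { return this.x; };
}

function time(label, Ctor) {
    var obj = new Ctor(1), sum = 0, t0 = Date.now();
    for (var i = 0; i < 1e7; i++) sum += obj.m();
    console.log(label, Date.now() - t0, 'ms (checksum', sum, ')');
}

time('prototype.m = f  ', ProtoStyle);
time('this.m = f       ', RefStyle);
time('this.m = function', ClosureStyle);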
Memory Capacity. It's not well discussed here. An assumption you can make up front that's likely to be true is that prototype inheritance will usually be far more memory efficient, and according to my tests it is in general. When you build up your object in your constructor, you can assume that each object will probably have its own instance of each function rather than a shared one, a larger property map for its own personal properties, and likely some overhead to keep the constructor scope open as well. Functions that operate on the private scope are extremely and disproportionately demanding of memory. I find that in a lot of scenarios the proportionate difference in memory is much more significant than the proportionate difference in CPU cycles.
Memory Graph. You can also jam up the engine, making GC more expensive. Profilers do tend to show time spent in GC these days. It's not only a problem when it comes to allocating and freeing more; you'll also create a larger object graph to traverse and things like that, so the GC consumes more cycles. If you create a million objects and then hardly touch them, depending on the engine it might turn out to have more of an ambient performance impact than you expected. I have proven that this does at least make the GC run for longer when objects are disposed of. That is, there tends to be a correlation between the memory used and the time it takes to GC. However, there are cases where the time is the same regardless of the memory. This indicates that the graph makeup (layers of indirection, item count, etc.) has more impact. That's not something that is always easy to predict.
Not many people use chained prototypes extensively, myself included, I have to admit. Prototype chains can be expensive in theory. Someone will have measured the cost, but I've not. If you instead build your objects entirely in the constructor, and then have a chain of inheritance where each constructor calls a parent constructor upon itself, in theory method access should be much faster. On the other hand, you can accomplish the equivalent if it matters (such as flattening the prototypes down the ancestor chain), as long as you don't mind breaking things like hasOwnProperty, and perhaps instanceof, if you really need it. In either case things start to get complex once you go down this road when it comes to performance hacks. You'll probably end up doing things you shouldn't be doing.
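A hedged sketch of what "flattening" might look like (Base and Derived are hypothetical; note that, compared with normal prototype chaining, this breaks instanceof checks against Base):

function Base() {}
Base.prototype.greet = function () { return 'hello'; };

function Derived() {}
Derived.prototype.shout = function () { return this.greet().toUpperCase(); };

// Instead of Derived.prototype = Object.create(Base.prototype), copy the
// ancestor's methods onto one flat prototype so lookups never walk a chain.
Object.keys(Base.prototype).forEach(function (key) {
    if (!(key in Derived.prototype)) Derived.prototype[key] = Base.prototype[key];
});

console.log(new Derived().shout());   // "HELLO", resolved without a chain walk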
Many people don't directly use either approach you've presented. Instead they make their own thing using anonymous objects, allowing method sharing any which way (mixins, for example). There are a number of frameworks as well that implement their own strategies for organising modules and objects. These are heavily convention-based custom approaches. For most people, and for you, the first challenge should be organisation rather than performance. This is often complicated in that JavaScript gives many ways of achieving things, versus languages or platforms with more explicit OOP/namespace/module support. When it comes to performance, I would say to avoid major pitfalls first and foremost.
There's a new Symbol type that's supposed to work for private variables and methods. There are a number of ways to use this and it raises a host of questions related to performance and access. In my tests the performance of Symbols wasn't great compared to everything else but I never tested them thoroughly.
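For reference, a small sketch of the Symbol idea (a convention for hiding rather than true privacy, since Object.getOwnPropertySymbols can still reach it):

const _count = Symbol('count');        // the key is only reachable via this binding

function Clicker() {
    this[_count] = 0;
}
Clicker.prototype.click = function () {
    this[_count]++;
    return this[_count];
};

const clicker = new Clicker();
clicker.click();
console.log(clicker.count);            // undefined - not visible under a string key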
Disclaimers:
There are lots of discussions about performance, and there isn't always a permanently correct answer, as usage scenarios and engines change. Always profile, but also measure in more than one way, as profiles aren't always accurate or reliable. Avoid putting significant effort into optimisation unless there's definitely a demonstrable problem.
It's probably better instead to include performance checks for sensitive areas in automated testing and to run when browsers update.
Remember that sometimes battery life matters as well as perceptible performance. The slowest solution might turn out faster after running an optimising compiler on it (i.e., a compiler might have a better idea of when restricted-scope variables are accessed than of properties marked as private by convention). Consider backends such as Node.js; these can require better latency and throughput than you would often find in the browser. Most people won't need to worry about these things with something like validation for a registration form, but the number of diverse scenarios where such things might matter is growing.
You have to be careful with memory allocation tracking tools to persist the result. In some cases where I didn't return and persist the data, it was optimised out entirely, or the sample rate was not sufficient between instantiated/unreferenced, leaving me scratching my head as to how an array initialised and filled to a million entries registered as 3.4 KiB in the allocation profile.
In the real world, in most cases the only way to really optimise an application is to write it in the first place so you can measure it. There are dozens to hundreds of factors that can come into play, if not thousands, in any given scenario. Engines also do things that can lead to asymmetric or non-linear performance characteristics. If you define functions in a constructor, they might be arrow functions or traditional ones; each behaves differently in certain situations, and I have no idea about the other function types. Classes also don't behave the same, in terms of performance, as prototyped constructors that should be equivalent. You need to be really careful with benchmarks as well. Prototyped classes can have deferred initialisation in various ways, especially if you prototyped your properties as well (advice: don't). This means that you can understate initialisation cost and overstate access/property mutation cost.

I have also seen indications of progressive optimisation. In these cases I filled a large array with instances of objects that are identical, and as the number of instances increases, the objects appear to be incrementally optimised for memory, up to a point where the remainder is the same. It is also possible that those optimisations can impact CPU performance significantly. These things are heavily dependent not merely on the code you write but on what happens at runtime, such as the number of objects, the variance between objects, etc.
Nested helper functions can be useful for making your code more understandable. Google even recommends using nested functions in their style guide. I'm wondering about the instantiation of these nested functions and performance. For example,
work(1);
work(2);
function work(a) {
    // do some stuff
    log();
    // do some more stuff

    function log() {
        console.log(a);
    }
}
work is instantiated once, but is log instantiated twice?
If log is instantiated every time work is executed, would it generally be recommended not to nest functions? Instead, write code like the following
work(1);
work(2);
function work(a) {
    // do some stuff
    log(a);
    // do some more stuff
}

function log(a) {
    console.log(a);
}
These examples are overly trivial and the question is more about the general case.
work is instantiated once, but is log instantiated twice?
Yes, on each call to work.
would it generally be recommended not to nest functions?
Why not? I presume you're hinting at performance issues.
Whether a practice is good or bad depends on your reasons for using it. In the case of simple helpers, it's good to keep them local because it means you can make them suitable just for your special case and not worry about the extra cruft of a general function. E.g. to pad a number with a leading zero:
function pad(n) {
    return (n < 10 ? '0' : '') + n;
}
works very well as a helper where n is expected to always be in the range 0 to 99, but as a general function it is missing a lot of features (dealing with non-number n, negative numbers, etc.).
If you are concerned about performance, you can always use a closure so the helper is only instantiated once:
var work = (function() {
    // log is created once and shared by every call to work; it takes a
    // parameter because it cannot see the inner function's variables
    function log(a) {
        console.log(a);
    }

    return function (a) {
        // do some stuff
        log(a);
        // do some more stuff
    };
}());
Which can also make sense where log is used by more than one function within the closure.
Note that for a single case, this is very much a micro optimisation and not likely to deliver any noticeable difference in performance.
Nested function-objects are instantiated and added to the LexicalEnvironment created when an enclosing function is run. Each of these nested functions will also have a [[Scope]] property created on them. In addition when a nested function is run, a new LexicalEnvironment object will be created and the [[Scope]] copied to its outer property.
When the enclosing function completes, then the nested function-object and its associated memory will be eligible for garbage collection.
This process will repeat for every call to the outer function.
Contrast this with your second implementation, where the function-object need only be created once; likewise its garbage collection.
If this is a "hot" function (i.e. called many times) then the second implementation is infinitely preferable.
RobG is right. Performance is affected by instantiating functions for each call to work. Whether it is a noticeable problem or not really comes down to how many such calls are simultaneously active, as this affects memory consumption as well as execution speed.
If performance is a big issue on your application (e.g. a complex, heavy function) and you only want to use the function in one place, closures are the way to go.
If the function you're calling from "work" is to be used from several parts of your code, it's better to keep them separate instead of nesting them. This makes keeping the code updated simpler (you only update it in one place).
Most JS engines parse code only once (even for nested functions) so the work involved in instantiating functions is not a big issue.
Memory usage, on the other hand, can be an issue if you have many nesting levels as well as several simultaneous callbacks or event listeners, so nesting should be managed carefully in these cases (in large-scale applications).
Yes. Instantiation occurs each time you invoke the nested functions. Instantiating functions does use CPU time but it's not as important as parsing.
So, for the general case (not the case now that you mention that your functions will be invoked many times per second), parsing time is more relevant than instantiation.
In this case, nesting functions will use a lot of memory and CPU time, so it's best to use RobG's solution (closures), where functions are instantiated once and are simply called, causing less memory usage.
If you want more optimized code in this critical piece of code, you should try to use as few functions as possible as this will work faster, albeit at the expense of losing code readability and maintainability.
I have been programming JavaScript for a fair while. I haven't ever taken a course or read guides/books as to best practices, but I've seemed to figure things out pretty well as I've gone. I know that my code isn't always the shortest and cleanest, but it's made sense for me.
I recently got into object-oriented programming and think that it's a great time to jump head-first into the language. While I know that it would be in my best interest to spend a few days pouring over OOP, I have a specific question that would help me in my attempts.
Making Variables Objects
I have a PhoneGap application, for example, that includes many functions each serially listed in split-up .js files. Everything worked okay, but I would have to keep passing the same few functions between every function, and the potential for error was high. After reading into methods and objects a little, each page evolved to this usage:
var myPage = {
    myVar1: 'value',
    myVar2: 32,

    initialize: function() {
        // code that initializes the variables within this object
    },

    doSomething: function() {
        // function that does something with myPage vars
    }
    // ...
};
Using this methodology, each page ended up as a grand variable with many encompassed methods and variables. This cleaned up quite a bit and greatly reduced global vars, but I still don't feel as though it's perfect. My shared .js files are broken into types and many objects, each with their own functions. My first question is: is this a bad thing, and if so, why? If I could get an explanation for my specific use, it would benefit me much more than seeing optimal examples in textbooks and guides.
Making Prototypes and Classes with a Single Instance
Looking at examples provided here and across the web and dead-tree publications, I see that objects and prototypical behavior is defined as a class, then instances of the class are created. Nothing that I need to do, however, seems to fit this budget, and I'm nearly positive that it's because I'm simply not thinking in the correct paradigm. However, if I have a function that validates input, how does that fit into a better OOP methodology?
I simply have a variable/object named validate that contains the functions necessary, then put that variable in the shared file. If I were to create it as a class and create an instance of it each time that I needed to validate something, doesn't that create a lot more overhead? I don't see how it works for applications such as this.
I think that OOP is the way to go, and I want to be part of it. Thanks in advance for all answers and comments.
My first question is: is this a bad thing, and if so, why?
Competent developers avoid having functions defined in the global scope (what you do when you write function foo() {} at the top level of a script) because it pollutes the global scope, where many other scripts try to do the same. Think of it as many of us trying to make a better "share()" function on the same page - only one of us will win.
I see that objects and prototypical behavior is defined as a class, then instances of the class are created. Nothing that I need to do, however, seems to fit this budget, and I'm nearly positive that it's because I'm simply not thinking in the correct paradigm
There's not much you can do there. If you have a function that validates input, then it would be better as part of the element:
someElement.validate(); // as a prototype
instead of the conventional
validate(someElement); // as a global function
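A hedged sketch of the idea (the Field wrapper and the '#email' input are hypothetical, and this avoids touching native DOM prototypes): wrap the element in a small constructor so validation lives on a shared prototype method rather than a global function.

function Field(element) {
    this.element = element;
}

Field.prototype.validate = function () {
    return this.element.value.trim().length > 0;
};

// Usage: one lightweight wrapper per input; validate itself is shared.
var email = new Field(document.getElementById('email'));
if (!email.validate()) {
    console.log('email is required');
}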
I'm new-ish to JavaScript. I understand many of the concepts of the language, I've been reading up on the prototype inheritance model, and I'm whetting my whistle with more and more interactive front-end stuff. It's an interesting language, but I'm always a bit turned off by the callback spaghetti that is typical of many non-trivial interaction models.
Something that has always seemed strange to me is that in spite of the readability nightmare that is a nest of JavaScript nested callbacks, the one thing that I very rarely see in many examples and tutorials is the use of predefined named functions as callback arguments. I'm a Java programmer by day, and discarding the stereotypical jabs about Enterprise-y names for units of code one of the things I've come to enjoy about working in a language with a strong selection of featureful IDE's is that using meaningful, if long, names can make the intent and meaning of code much clearer without making it more difficult to actually be productive. So why not use the same approach when writing JavaScript code?
Giving it thought, I can come up with arguments that are both for and against this idea, but my naivety and newness to the language impairs me from reaching any conclusions as to why this would be good at a technical level.
Pros:
Flexibility. An asynchronous function with a callback parameter could be reached by one of many different code paths, and it could be a hassle to have to write a named function to account for every single possible edge case.
Speed. It plays heavily in to the hacker mentality. Bolt things on to it until it works.
Everyone else is doing it
Smaller file sizes, even if trivially so, but every bit counts on the web.
Simpler AST? I would assume that anonymous functions are generated at runtime and so the JIT won't muck about with mapping the name to instructions, but I'm just guessing at this point.
Quicker dispatching? Not sure about this one either. Guessing again.
Cons:
It's hideous and unreadable
It adds to the confusion when you're nested nuts deep in a swamp of callbacks (which, to be fair, probably means you're writing poorly constructed code to begin with, but it's quite common).
For someone without a functional background it can be a bizarre concept to grok
With so many modern browsers showing the ability to execute JavaScript code much faster than before, I'm failing to see how any trivial sort of performance gain one might get out of using anonymous callbacks would be a necessity. It seems that, if you are in a situation where using a named function is feasible (predictable behavior and path of execution), then there would be no reason not to.
So are there any technical reasons or gotchas that I'm not aware of that makes this practice so commonplace for a reason?
I use anonymous functions for three reasons:
If no name is needed because the function is only ever called in one place, then why add a name to whatever namespace you're in.
Anonymous functions are declared inline and inline functions have advantages in that they can access variables in the parent scopes. Yes, you can put a name on an anonymous function, but that's usually pointless if it's declared inline. So inline has a significant advantage and if you're doing inline, there's little reason to put a name on it.
The code seems more self-contained and readable when handlers are defined right inside the code that's calling them. You can read the code in almost sequential fashion rather than having to go find the function with that name.
I do try to avoid deep nesting of anonymous functions because that can be hairy to understand and read. Usually when that happens, there's a better way to structure the code (sometimes with a loop, sometimes with a data table, etc...) and named functions isn't usually the solution there either.
I guess I'd add that if a callback starts to get more than about 15-20 lines long and it doesn't need direct access to variables in the parent scope, I would be tempted to give it a name and break it out into its own named function declared elsewhere. There is definitely a readability point here where a non-trivial function that gets long is just more maintainable if it's put in its own named unit. But most callbacks I end up with are not that long, and I find it more readable to keep them inline.
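A small sketch of point 2 above (loadProfile and fakeFetch are hypothetical): the inline anonymous callback closes over userId from the enclosing scope, which a function declared elsewhere could not do without having it passed along.

function loadProfile(userId, fetchFn) {
    fetchFn(function (data) {
        console.log('profile for user', userId, data);   // userId captured from the parent scope
    });
}

// fakeFetch is a stand-in for an async API, just to make the example runnable.
function fakeFetch(callback) {
    setTimeout(function () { callback({ name: 'Ada' }); }, 0);
}

loadProfile(42, fakeFetch);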
I prefer named functions myself, but for me it comes down to one question:
Will I use this function anywhere else?
If the answer is yes, I name/define it. If not, pass it as an anonymous function.
If you only use it once, it doesn't make sense to crowd the global namespace with it. In today's complex front-ends, the number of named functions that could have been anonymous grows quickly (easily over 1000 on really intricate designs), resulting in (relatively) large performance gains by preferring anonymous functions.
However, code maintainability is also extremely important. Each situation is different. If you're not writing a lot of these functions to begin with, there's no harm in doing it either way. It's really up to your preference.
Another note about names. Getting in the habit of defining long names will really hurt your file size. Take the following example.
Assume both of these functions do the same thing:
function addTimes(time1, time2)
{
    // return time1 + time2;
}

function addTwoTimesIn24HourFormat(time1, time2)
{
    // return time1 + time2;
}
The second tells you exactly what it does in the name. The first is more ambiguous. However, there are 17 characters of difference in the name. Say the function is called 8 times throughout the code; counting the definition, that's 9 occurrences and 153 extra bytes your code didn't need to have. Not colossal, but if it's a habit, extrapolating that to 10s or even 100s of functions will easily mean a few KB of difference in the download.
Again however, maintainability needs to be weighed against the benefits of performance. This is the pain of dealing with a scripted language.
A bit late to the party, but some not yet mentioned aspects to functions, anonymous or otherwise...
Anon funcs are not easily referred to in humanoid conversations about code, amongst a team. E.g., "Joe, could you explain what the algorithm does, within that function. ... Which one? The 17th anonymous function within the fooApp function. ... No, not that one! The 17th one!"
Anon funcs are anonymous to the debugger as well. (duh!) Therefore, the debugger stack trace will generally just show a question mark or similar, making it less useful when you have set multiple breakpoints. You hit the breakpoint, but find yourself scrolling the debug window up/down to figure out where the hell you are in your program, because hey, question mark function just doesn't do it!
Concerns about polluting the global namespace are valid, but easily remedied by naming your functions as nodes within your own root object, like "myFooApp.happyFunc = function ( ... ) { ... }; ".
Functions that are available in the global namespace, or as nodes in your root object like above, can be invoked from the debugger directly, during development and debug. E.g., at the console command line, do "myFooApp.happyFunc(42)". This is an extremely powerful ability that does not exist (natively) in compiled programming languages. Try that with an anon func.
Anon funcs can be made more readable by assigning them to a var, and then passing the var as the callback (instead of inlining). E.g.:
var funky = function ( ... ) { ... };
jQuery('#otis').click(funky);
Using the above approach, you could potentially group several anon funcs at the top of the parental func, then below that, the meat of sequential statements becomes much tighter grouped, and easier to read.
Anonymous functions are useful because they help you control which functions are exposed.
More Detail: If there is no name, you can't reassign it or tamper with it anywhere but the exact place it was created. A good rule of thumb: if you don't need to re-use a function anywhere else, it's worth considering whether an anonymous function would be better, to prevent it from being tampered with.
Example:
If you're working on a big project with a lot of people, what if you have a function inside of a bigger function and you name it something? That means anyone working with you and also editing code in the bigger function can do stuff to that smaller function at any time. What if you named it "add" for instance, and someone reassigned "add" to a number instead inside the same scope? Then the whole thing breaks!
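A contrived sketch of that failure mode (names are hypothetical):

function bigFunction() {
    function add(a, b) { return a + b; }

    // ... much later, a teammate editing the same scope does this by mistake:
    add = 5;                            // the named binding is silently reassigned

    try {
        return add(1, 2);
    } catch (e) {
        return 'broken: ' + e.message;  // TypeError: add is not a function
    }
}

console.log(bigFunction());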
PS - I know this is a very old post, but there is a much simpler answer to this question and I wish someone had put it this way when I was looking for the answer myself as a beginner - I hope you're ok with reviving an old thread!
It's more readable using named functions, and they are also capable of self-referencing, as in the example below.
(function recursion(iteration) {
    if (iteration > 0) {
        console.log(iteration);
        recursion(--iteration);
    } else {
        console.log('done');
    }
})(20);
console.log('recursion defined? ' + (typeof recursion === 'function'));
http://jsfiddle.net/Yq2WD/
This is nice when you want to have an immediately invoked function that references itself but does not add to the global namespace. It's still readable but not polluting. Have your cake and eat it too.
Hi, my name is Jason OR hi, my name is ???? you pick.
Well, just to be clear for the sake of my arguments, the following are all anonymous functions/function expressions in my book:
var x = function(){ alert('hi'); },
    indexOfHandyMethods = {
        hi: function(){ alert('hi'); },
        high: function(){
            buyPotatoChips();
            playBobMarley();
        }
    };

someObject.someEventListenerHandlerAssigner(function(e){
    if (e.doIt === true) { doStuff(e.someId); }
});

(function namedButAnon(){ alert('name visible internally only'); })()
Pros:
It can reduce a bit of cruft, particularly in recursive functions (where you could, and actually should since arguments.callee is deprecated, still use a named reference per the last example internally), and it makes it clear the function only ever fires in this one place.
Code legibility win: in the example of the object literal with anon funcs assigned as methods, it would be silly to add more places to hunt and peck for logic in your code when the whole point of that object literal is to plop some related functionality in the same conveniently referenced spot. When declaring public methods in a constructor, however, I do tend to define labeled functions inline and then assign as references of this.sameFuncName. It lets me use the same methods internally without the 'this.' cruft and makes order of definition a non-concern when they call each other.
Useful for avoiding needless global namespace pollution - internal namespaces, however, shouldn't ever be that broadly filled or handled by multiple teams simultaneously so that argument seems a bit silly to me.
I agree with the inline callbacks when setting short event handlers. It's silly to have to hunt for a 1-5 line function, especially since with JS and function hoisting, the definitions could end up anywhere, not even within the same file. This could happen by accident without breaking anything and no, you don't always have control of that stuff. Events always result in a callback function being fired. There's no reason to add more links to the chain of names you need to scan through just to reverse engineer simple event-handlers in a large codebase and the stack trace concern can be addressed by abstracting event triggers themselves into methods that log useful info when debug mode is on and fire the triggers. I'm actually starting to build entire interfaces this way.
Useful when you WANT the order of function definition to matter. Sometimes you want to be certain a default function is what you think it is until a certain point in the code where it's okay to redefine it. Or you want breakage to be more obvious when dependencies get shuffled.
Cons:
Anon functions can't take advantage of function hoisting. This is a major difference (a small sketch follows after these points). I tend to take heavy advantage of hoisting to define my own explicitly named funcs and object constructors towards the bottom and get to the object definition and main-loop type stuff right up at the top. I find it makes the code easier to read when you name your vars well and get a broad view of what's going on before ctrl-Fing for details only when they matter to you. Hoisting can also be a huge benefit in heavily event-driven interfaces where imposing a strict order of what's available when can bite you in the butt. Hoisting has its own caveats (like circular reference potential) but it is a very useful tool for organizing and making code legible when used right.
Legibility/Debug. Absolutely they get used way too heavily at times and it can make debug and code legibility a hassle. Codebases that rely heavily on JQ, for instance, can be a serious PITA to read and debug if you don't encapsulate the near-inevitable anon-heavy and massively overloaded args of the $ soup in a sensible way. JQuery's hover method for instance, is a classic example of over-use of anon funcs when you drop two anon funcs into it, since it's easy for a first-timer to assume it's a standard event listener assignment method rather than one method overloaded to assign handlers for one or two events. $(this).hover(onMouseOver, onMouseOut) is a lot more clear than two anon funcs.
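Regarding the hoisting point, here is a small sketch of the difference (declared/expressed are hypothetical names): function declarations are hoisted whole, so they can be called before they appear in the source, while function expressions are not.

console.log(declared());                // "ok" - the declaration below was hoisted

function declared() {
    return 'ok';
}

try {
    console.log(expressed());           // throws: expressed is still undefined here
} catch (e) {
    console.log(e.message);
}

var expressed = function () {
    return 'ok';
};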
I've heard that it's not a good idea to make elements global in JavaScript. I don't understand why. Is it something IE can't handle?
For example:
div = document.getElementById('topbar');
I don't think that's an implementation issue, but more a good vs bad practice issue. Usually global * is bad practice and should be avoided (global variables and so on) since you never really know how the scope of the project will evolve and how your file will be included.
I'm not a big JS freak, so I won't be able to give you the specifics on exactly why making elements global in JS is bad, but Christian Heilmann talks about JS best practices here; you could take a look. Also try googling "JS best practices".
Edit: Wikipedia on global variables, which could also apply to your problem:
[Global variables] are usually considered bad practice precisely because of their nonlocality: a global variable can potentially be modified from anywhere (unless they reside in protected memory) and any part of the program may depend on it. A global variable therefore has an unlimited potential for creating mutual dependencies, and adding mutual dependencies increases complexity. See Action at a distance. However, in a few cases, global variables can be suitable for use. For example, they can be used to avoid having to pass frequently-used variables continuously throughout several functions.
via http://en.wikipedia.org/wiki/Global_variable
Is it something IE can't handle?
No it is not an IE thing. You can never assume that your code will be the only script used in the document. So it is important that you make sure your code does not have global function or variable names that other scripts can override.
Refer to Play Well With Others for examples.
I assume by "events" you mean the event-handling JavaScript (functions).
In general, it's bad to use more than one global variable in JS. (It's impossible not to use at least one if you're storing any data for future use.) That's because it runs into the same problem as all namespacing tries to solve - what if you wrote a method doSomething() and someone else wrote a method called doSomething()?
The best way to get around this is to make a global variable that is an object to hold all of your data and functions. For example:
var MyStuff = {};
MyStuff.counter = 0;
MyStuff.eventHandler = function() { ... };
MyStuff.doSomething = function() { ... };
// Later, when you want to call doSomething()...
MyStuff.doSomething();
This way, you're minimally polluting the global namespace; you only need to worry about someone else using your global variable name.
Of course, none of this is a problem if your code will never play with anyone else's... but this sort of thinking will bite you in the ass later if you ever do end up using someone else's code. As long as everyone plays nice in terms of JS global names, all code can get along.
There shouldn't be any problem using global variables in your code as long as you wrap them inside a unique namespace/object (to avoid collisions with scripts that are not yours).

The main advantage of using global variables in JavaScript derives from the fact that JavaScript is not a strongly typed language. Therefore, if you pass some complex objects as arguments to a function, you will probably lose all the IntelliSense for those objects (inside the function scope), whereas using global objects instead will preserve that IntelliSense.

I personally find that very useful, and it certainly has a place in my code. (Of course, one should always strike the right balance between local and global variables.)