I'm new-ish to JavaScript. I understand many of the concepts of the language, I've been reading up on the prototype inheritance model, and I'm whetting my appetite with more and more interactive front-end stuff. It's an interesting language, but I'm always a bit turned off by the callback spaghetti that is typical of many non-trivial interaction models.
Something that has always seemed strange to me is that, in spite of the readability nightmare that is a nest of JavaScript callbacks, the one thing I very rarely see in examples and tutorials is the use of predefined named functions as callback arguments. I'm a Java programmer by day, and discarding the stereotypical jabs about Enterprise-y names for units of code, one of the things I've come to enjoy about working in a language with a strong selection of featureful IDEs is that using meaningful, if long, names can make the intent and meaning of code much clearer without making it more difficult to actually be productive. So why not use the same approach when writing JavaScript code?
Giving it thought, I can come up with arguments that are both for and against this idea, but my naivety and newness to the language impairs me from reaching any conclusions as to why this would be good at a technical level.
Pros:
Flexibility. An asynchronous function with a callback parameter could be reached by one of many different code paths, and it could be a hassle to have to write a named function to account for every single possible edge case.
Speed. It plays heavily into the hacker mentality. Bolt things on to it until it works.
Everyone else is doing it
Smaller file sizes, even if trivially so, but every bit counts on the web.
Simpler AST? I would assume that anonymous functions are generated at runtime and so the JIT won't muck about with mapping the name to instructions, but I'm just guessing at this point.
Quicker dispatching? Not sure about this one either. Guessing again.
Cons:
It's hideous and unreadable
It adds to the confusion when you're nested nuts deep in a swamp of callbacks (which, to be fair, probably means you're writing poorly constructed code to begin with, but it's quite common).
For someone without a functional background it can be a bizarre concept to grok
With so many modern browsers able to execute JavaScript much faster than before, I'm failing to see how any trivial performance gain one might get out of using anonymous callbacks would be a necessity. It seems that, if you are in a situation where using a named function is feasible (predictable behavior and path of execution), then there would be no reason not to.
So are there any technical reasons or gotchas that I'm not aware of that makes this practice so commonplace for a reason?
I use anonymous functions for three reasons:
If no name is needed because the function is only ever called in one place, then why add a name to whatever namespace you're in?
Anonymous functions are declared inline and inline functions have advantages in that they can access variables in the parent scopes. Yes, you can put a name on an anonymous function, but that's usually pointless if it's declared inline. So inline has a significant advantage and if you're doing inline, there's little reason to put a name on it.
The code seems more self-contained and readable when handlers are defined right inside the code that's calling them. You can read the code in almost sequential fashion rather than having to go find the function with that name.
I do try to avoid deep nesting of anonymous functions because that can be hairy to understand and read. Usually when that happens there's a better way to structure the code (sometimes with a loop, sometimes with a data table, etc...), and named functions aren't usually the solution there either.
I guess I'd add that if a callback starts to get more than about 15-20 lines long and it doesn't need direct access to variables in the parent scope, I would be tempted to give it a name and break it out into its own named function declared elsewhere. There is definitely a readability point here where a non-trivial function that gets long is just more maintainable if it's put in its own named unit. But most callbacks I end up with are not that long, and I find it more readable to keep them inline.
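For what it's worth, here's a minimal sketch of that trade-off (fetchUser, render, and the rest are made-up names, not a real API):

function fetchUser(id, callback) {
    // Hypothetical async API, stubbed out so the sketch runs on its own.
    setTimeout(function () { callback({ id: id, name: 'Ann' }); }, 0);
}
function render(user) { console.log(user.name); }

var userId = 1;

// Inline anonymous callback: fine while it stays short and needs the
// surrounding scope.
fetchUser(userId, function (user) {
    render(user);
});

// Once a callback grows long and no longer needs the parent scope,
// breaking it out into its own named function keeps the call site tidy.
function handleUserLoaded(user) {
    // ...imagine fifteen or twenty lines of work here...
    render(user);
}
fetchUser(userId, handleUserLoaded);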
I prefer named functions myself, but for me it comes down to one question:
Will I use this function anywhere else?
If the answer is yes, I name/define it. If not, pass it as an anonymous function.
If you only use it once, it doesn't make sense to crowd the global namespace with it. In today's complex front-ends, the number of named functions that could have been anonymous grows quickly (easily over 1000 on really intricate designs), resulting in (relatively) large performance gains by preferring anonymous functions.
However, code maintainability is also extremely important. Each situation is different. If you're not writing a lot of these functions to begin with, there's no harm in doing it either way. It's really up to your preference.
Another note about names. Getting in the habit of defining long names will really hurt your file size. Take the following example.
Assume both of these functions do the same thing:
function addTimes(time1, time2)
{
// return time1 + time2;
}
function addTwoTimesIn24HourFormat(time1, time2)
{
// return time1 + time2;
}
The second tells you exactly what it does in the name. The first is more ambiguous. However, there are 17 characters of difference in the name. Say the function is called 8 times throughout the code, that's 153 extra bytes your code didn't need to have. Not colossal, but if it's a habit, extrapolating that to 10s or even 100s of functions will easily mean a few KB of difference in the download.
Again however, maintainability needs to be weighed against the benefits of performance. This is the pain of dealing with a scripted language.
A bit late to the party, but some not yet mentioned aspects to functions, anonymous or otherwise...
Anon funcs are not easily referred to in humanoid conversations about code, amongst a team. E.g., "Joe, could you explain what the algorithm does, within that function. ... Which one? The 17th anonymous function within the fooApp function. ... No, not that one! The 17th one!"
Anon funcs are anonymous to the debugger as well. (duh!) Therefore, the debugger stack trace will generally just show a question mark or similar, making it less useful when you have set multiple breakpoints. You hit the breakpoint, but find yourself scrolling the debug window up/down to figure out where the hell you are in your program, because hey, question mark function just doesn't do it!
Concerns about polluting the global namespace are valid, but easily remedied by naming your functions as nodes within your own root object, like "myFooApp.happyFunc = function ( ... ) { ... }; ".
Functions that are available in the global namespace, or as nodes in your root object like above, can be invoked from the debugger directly, during development and debug. E.g., at the console command line, do "myFooApp.happyFunc(42)". This is an extremely powerful ability that does not exist (natively) in compiled programming languages. Try that with an anon func.
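A small sketch of that namespacing idea, using the myFooApp name from above (the function body is made up):

// Only "myFooApp" itself is global; everything else hangs off it.
var myFooApp = myFooApp || {};

myFooApp.happyFunc = function (n) {
    return n * 2; // hypothetical body, just so there's something to call
};

// Because it is reachable by name, you can poke it from the browser
// console during development:
//   > myFooApp.happyFunc(42)
//   84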
Anon funcs can be made more readable by assigning them to a var, and then passing the var as the callback (instead of inlining). E.g.:
var funky = function ( ... ) { ... };
jQuery('#otis').click(funky);
Using the above approach, you could potentially group several anon funcs at the top of the parental func; below that, the meat of sequential statements becomes much more tightly grouped and easier to read.
Anonymous functions are useful because they help you control which functions are exposed.
More Detail: If there is no name, you can't reassign it or tamper with it anywhere but the exact place it was created. A good rule of thumb: if you don't need to reuse the function anywhere else, consider whether an anonymous function would be better, since it can't be tampered with from anywhere else.
Example:
If you're working on a big project with a lot of people, say you have a function inside of a bigger function and you give it a name. Anyone working with you and also editing code in the bigger function can then do things to that smaller function at any time. What if you named it "add", for instance, and someone reassigned "add" to a number inside the same scope? Then the whole thing breaks!
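A rough sketch of that hazard (all names are made up):

function biggerFunction() {
    // Named inner function: anyone editing biggerFunction can grab the
    // name and reassign it later in the same scope...
    function add(a, b) { return a + b; }
    add = 42;            // ...and now any later call would break:
    // add(1, 2);        // TypeError: add is not a function

    // Anonymous function passed straight to its consumer: there is no
    // name in this scope to reassign, so it can't be tampered with here.
    return [1, 2, 3].map(function (n) { return n + 1; });
}
biggerFunction(); // [2, 3, 4]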
PS -I know this is a very old post, but there is a much simpler answer to this question and I wish someone had put it this way when I was looking for the answer myself as a beginner- I hope you're ok with reviving an old thread!
It's more readable to use named functions, and they are also capable of self-reference, as in the example below.
(function recursion(iteration){
    if (iteration > 0) {
        console.log(iteration);
        recursion(--iteration);
    } else {
        console.log('done');
    }
})(20);
console.log('recursion defined? ' + (typeof recursion === 'function'));
http://jsfiddle.net/Yq2WD/
This is nice when you want to have an immediately invoked function that references itself but does not add to the global namespace. It's still readable but not polluting. Have your cake and eat it too.
Hi, my name is Jason OR hi, my name is ???? you pick.
Well, just to be clear for the sake of my arguments, the following are all anonymous functions/function expressions in my book:
var x = function(){ alert('hi'); },
    indexOfHandyMethods = {
        hi: function(){ alert('hi'); },
        high: function(){
            buyPotatoChips();
            playBobMarley();
        }
    };

someObject.someEventListenerHandlerAssigner( function(e){
    if(e.doIt === true){ doStuff(e.someId); }
} );

(function namedButAnon(){ alert('name visible internally only'); })();
Pros:
It can reduce a bit of cruft, particularly in recursive functions (where you could, and actually should since arguments.callee is deprecated, still use a named reference internally, per the last example), and it makes clear that the function only ever fires in this one place.
Code legibility win: in the example of the object literal with anon funcs assigned as methods, it would be silly to add more places to hunt and peck for logic in your code when the whole point of that object literal is to plop some related functionality in the same conveniently referenced spot. When declaring public methods in a constructor, however, I do tend to define labeled functions inline and then assign as references of this.sameFuncName. It lets me use the same methods internally without the 'this.' cruft and makes order of definition a non-concern when they call each other.
Useful for avoiding needless global namespace pollution - internal namespaces, however, shouldn't ever be that broadly filled or handled by multiple teams simultaneously so that argument seems a bit silly to me.
I agree with using inline callbacks when setting short event handlers. It's silly to have to hunt for a 1-5 line function, especially since with JS and function hoisting the definition could end up anywhere, not even within the same file. This could happen by accident without breaking anything, and no, you don't always have control of that stuff. Events always result in a callback function being fired, and there's no reason to add more links to the chain of names you need to scan through just to reverse-engineer simple event handlers in a large codebase. The stack trace concern can be addressed by abstracting the event triggers themselves into methods that log useful info when debug mode is on and then fire the triggers. I'm actually starting to build entire interfaces this way.
Useful when you WANT the order of function definition to matter. Sometimes you want to be certain a default function is what you think it is until a certain point in the code where it's okay to redefine it. Or you want breakage to be more obvious when dependencies get shuffled.
Cons:
Anon functions can't take advantage of function hoisting. This is a major difference. I tend to take heavy advantage of hoisting to define my own explicitly named funcs and object constructors towards the bottom and get to the object definition and main-loop type stuff right up at the top. I find it makes the code easier to read when you name your vars well and get a broad view of what's going on before ctrl-Fing for details only when they matter to you. Hoisting can also be a huge benefit in heavily event-driven interfaces where imposing a strict order of what's available when can bite you in the butt. Hoisting has its own caveats (like circular reference potential) but it is a very useful tool for organizing and making code legible when used right (see the sketch just after this list).
Legibility/Debug. Absolutely they get used way too heavily at times, and it can make debugging and code legibility a hassle. Codebases that rely heavily on jQuery, for instance, can be a serious PITA to read and debug if you don't encapsulate the near-inevitable anon-heavy and massively overloaded args of the $ soup in a sensible way. jQuery's hover method, for instance, is a classic example of over-use of anon funcs when you drop two anon funcs into it, since it's easy for a first-timer to assume it's a standard event listener assignment method rather than one method overloaded to assign handlers for one or two events. $(this).hover(onMouseOver, onMouseOut) is a lot clearer than two anon funcs.
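To illustrate the hoisting point from the first con above, a minimal sketch (the names are only illustrative):

// Function declarations are hoisted whole, so this call works even
// though the declaration appears later in the source:
startApp();

function startApp() {
    console.log('main-loop style code can live at the top');
}

// Function expressions (anonymous or otherwise) are not hoisted as
// callable values; only the var name is hoisted, initialized to undefined:
try {
    notYetDefined(); // TypeError: notYetDefined is not a function
} catch (e) {
    console.log(e.message);
}
var notYetDefined = function () {};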
I watched this: http://www.youtube.com/watch?v=mHtdZgou0qU yesterday, and I've been thinking about how to improve my JavaScript. I'm trying to keep everything he said in mind while re-writing an animation that looked really choppy in Firefox.
One of the things I'm wondering is if for loops add on to the scope chain. Zakas talked a lot about how closures add on to the scope chain, and accessing variables outside of the local scope tends to take longer. With a for loop, since you can declare a variable in the first statement, does this mean that it's adding another scope to the chain? I would assume not, because Zakas also said there is NO difference between do-while, while, and for loops, but it still seems like it would.
Part of the reason I ask is because I often see, in JS libraries, code like:
function foo(){
    var x = window.bar,
        i = 0,
        len = x.length;
    for(; i < len; i++){
        //
    }
}
If for loops did add another object onto the chain, then this would be very inefficient code, because all the operations inside the loop (assuming they use i) would be accessing an out-of-scope variable.
Again, if I were asked to bet on it, I would say that they don't, but why then wouldn't the variables used be accessible outside of the loop?
JavaScript does not have block scope for var declarations, and it has variable hoisting, so any variables that appear to be declared in a for loop are not actually declared there; they belong to the enclosing function's scope.
The reason you see code like provided in your example is because of the hoisting behaviour. The code's author knows about variable hoisting so has declared all the variables for the scope at the start, so it's clear what JavaScript is doing.
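A minimal sketch of what that means in practice (the function name is just illustrative):

function countThings() {
    for (var i = 0; i < 3; i++) {
        // i belongs to countThings' scope, not to a per-loop scope
    }
    console.log(i); // 3 -- still accessible after the loop ends
}
countThings();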
I am using Prototype 1.4 in our project, and I usually create a class in this manner:
1) Manner1
var Person = Class.create();
Person.prototype = {
    initialize: function(name, age) {
        this._check(name, age);
        this.name = name;
        this.age = age;
    },
    say: function() {
        console.log(this.name + ',' + this.age);
    },
    _check: function(name, age) {
        if (typeof(name) != 'string' || typeof(age) != 'number')
            throw new Error("Bad format...");
    }
};
However, in the above code, the Person method "_check" can be called from outside, which is not what I expected.
In a former post, thanks to 'T.J. Crowder', I was shown one solution to make the method totally private:
2) Manner2
var Person = (function() {
    var person_real = Class.create();
    person_real.prototype = {
        initialize: function(name, age) {
            _check(name, age);
            this.name = name;
            this.age = age;
        },
        say: function() {
            console.log(this.name + ',' + this.age);
        }
    };
    //some private method
    function _check(name, age) {
        //....the check code here
    }
    return person_real;
})();
Now "_check" is not exposed outside.
But what I am confused about now is whether this manner will cause performance problems, or whether it is the best practice.
After all, one of the reasons we create a class (the blueprint) is to reduce repeated code by defining things once and reusing them many times anywhere.
Now look at the "Manner1":
We create a class "Person", then we put all the instance methods on the prototype object of class Person.
Then each time we call the
var obj=new Person('xx',21);
the object "obj" will own the methods from Person.prototype. "obj" itself does not hold any code here.
However in "Manner2":
Each time we call:
var obj=new Person('xx',21);
A new blueprint will be created, and the private methods such as "_check" will also be created each time.
Is it a waste of memory?
Note: maybe I am wrong. But I am really confused. Can anyone give me an explanation?
You're getting caught with a couple common problems that people have with Javascript.
Second question first, as it's easier to answer. With prototypical inheritance you're only describing the differences between two related objects. If you create a function on the prototype of an object and then make 10000 clones of it, there's still only one function. Each individual object will call that method, and due to how "this" works in Javascript, the "this" in those function calls will point to the individual objects despite the function living in the singular prototype.
Of course when you have unique per-item properties, the unique values for each object, then yes, you can run into performance issues (I hesitate to use the term "instance" because it doesn't mean the same thing in prototypical inheritance compared to class based inheritance). Since you're no longer benefiting from one single shared function/dataset, you lose that benefit. Knowing how to minimize these inefficiencies is one key part of smart, efficient programming in Javascript (and any prototypal inheritance based language). There are a lot of non-obvious tricks to get functionality and data shared with minimal duplication.
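A minimal sketch of that sharing (the names are only illustrative):

function Person(name) {
    this.name = name;              // unique, per-object data
}
Person.prototype.say = function () {
    // "this" points at whichever object the method was called on,
    // even though the function itself exists once, on the prototype.
    console.log(this.name);
};

var a = new Person('Ann');
var b = new Person('Bob');
console.log(a.say === b.say);      // true -- a single shared function
a.say();                           // "Ann"
b.say();                           // "Bob"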
The first question is a common class of confusion in Javascript due to the near omnipotence of Function in Javascript. Functions in Javascript single-handedly do the jobs of many different constructs in other languages. Similarly, Objects in Javascript fill the role of a lot of data constructs in other languages. This concentrated responsibility brings a lot of powerful options to the table, but makes Javascript a kind of boogeyman: it looks really simple, and it is really simple. But it's simple in the way poker is simple. There's a small set of moving pieces, but the way they interact begets a much deeper metagame that rapidly becomes bewildering.
It's better to understand objects/functions/data in a Javascript application/script/whatever as a constantly evolving system, as this better helps capture prototypal inheritance. Perhaps like a football (American) game where the current state of the game can't really be captured without knowing how many downs there are, how many timeouts remain, what quarter you're in, etc. You have to move with the code instead of looking at it like most languages, even dynamic ones like PHP.
In your first example all you're doing, essentially, is using Class.create to get the initialize() function called automatically and to handle grandparent constructor stuff automatically, so it's otherwise similar to just doing:
var Person = function(){ this.initialize.apply(this, arguments); };
Person.prototype = { ... }
Without the grandparent stuff. Any properties set directly on an object are going to be public (unless you're using ES5 defineProperty stuff to specifically control it), so adding _ to the beginning of the name won't do anything beyond convention. There is no such thing as a private or protected member in javascript. There is ONLY public, so if you define a property on an object it is public. (Again discounting ES5, which does add some control here, but it's not really designed to fill the same role as private variables in other languages and it shouldn't be used or relied on in the same way.)
Instead of private members we have to use scope to create privacy. It's important to remember that Javascript only has function scoping. If you want to have private members in an Object then you need to wrap the whole object in a bigger scope (a function) and selectively make public what you want. That's an important basic rule that, for some reason, I rarely see succinctly explained. (Other schemes exist, like providing a public interface to register/add functions into a private context, but they all end with everything that's shared having access to a private context built by a function, and they're usually way more complicated than needed.)
Think of it this way. You want to share these members inside the object, so all the object members need to be within the same scope. But you don't want them shared on the outside, so they must be defined in their own scope. Since you can't put them as members ON the public object, Javascript having no concept of anything besides public variables (ES5 aside, which can still be screwed with), you're left with function scope as the only way to implement privacy.
Fortunately this isn't painful because you have a number of effective, even smooth ways to do this. Here comes in Javascript's power as a functional language. By returning a function from a function (or an object containing functions/data/references) you can easily control what gets let back out into the world. And since Javascript has lexical scoping, those newly unleashed public interfaces still have a pipe back into the private scope they were originally created in, where their hidden brethren still live.
The immediately executed function serves the role of gatekeeper here. The objects and scope aren't created until the function is run, and often the only purpose of the container function is to provide scope, so an anonymous immediately executing function fits the bill (neither anonymous nor immediately executing is required, though). It's important to recognize the flexibility of Javascript in terms of constructing an "Object" or whatever. In most languages you carefully construct your class structure and what members each thing has, and on and on and on. In Javascript, "if you dream it you can make it" is essentially the name of the game. Specifically, I can create an anonymous function with 20 different Objects/Classes/whatever built inside of it, then ad-hoc cherry-pick a method or two from each and return those as a single public Object, and then that IS the object/interface even though it didn't exist until two seconds before I did return { ...20 functions from 20 different objects... }.
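A minimal sketch of that gatekeeper pattern (all names are made up):

var counter = (function () {
    var count = 0;                          // private: lives only in this scope

    function increment() { count += 1; }    // private helper
    function current()   { return count; }  // private helper

    // cherry-pick what the outside world gets to see
    return {
        bump: function () { increment(); return current(); }
    };
})();

counter.bump();     // 1
counter.bump();     // 2
// counter.count    -> undefined: the private data never escaped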
So in your second example you're seeing this tactic being used. In this case a prototype is created and then a separate function is created in the same scope. As usual, only one copy of the prototype will exist no matter how many children come from it. And only one copy of that function exists, because the containing function (the scope) is only called once. The children created from the prototype will not have access to call the function directly, but it will seem like they do because they're getting backdoor access through functions that live on the prototype. If a child goes and implements its own initialize in that example, it will no longer be able to make use of _check because it never had direct access in the first place.
Instead of looking at it as the child doing stuff, look at it as the parent/prototype tending to the set of functions that live on it like a central phone operator that everything routes through, with children as thin clients. The children don't run the functions, the children ping the prototype to run them with this pointing at the caller child.
This is why stuff like Prototype's Class becomes useful. If you're implementing multilevel inheritance you start running into holes where prototypes up the chain miss out on some functionality. This is because while properties/functions in the prototype chain automagically show up on children, the constructor functions do not similarly cascade; the constructor property inherits the same as anything else, but they're all named constructor, so you only get the direct prototype's constructor for free. Also, constructors need to be executed to get their special sauce, whereas a prototype is simply a property. Constructors usually hold that key bit of functionality that sets up an object's own private data, which is often required for the rest of the functionality to work. Inheriting a buttload of functions from a prototype chain doesn't do any good if you don't manually run through the constructor chain as well (or use something like Class) to get set up.
This is also part of why you don't usually see deep inheritance trees in Javascript, and is usually the source of people's complaints about stuff like GWT with those deep Java-like inheritance patterns. It's not really good for Javascript and prototypical inheritance. You want shallow and wide, a few layers deep or less. And it's important to get a good grasp on where in the flow an object is living and which properties are where. Javascript does a good job of creating a facade that makes it look like X and Y functionality is implemented on an object, but when you peer behind it you realize that (ideally) most objects are mostly empty shells that a) hold a few bits of unique data and b) are packaged with pointers to the appropriate prototype objects and their functionality to make use of that data. If you're doing a lot of copying of functions and redundant data then you're doing it wrong (though don't feel bad, because it's more common than not).
Herein lies the difference between a shallow copy and a deep copy. jQuery's $.extend defaults to shallow (I would assume most/all do). A shallow copy is basically just passing around references without duplicating functions. This allows you to build objects like Legos and get a bit more direct control, letting you merge parts of multiple objects. A shallow copy should, similar to using 10000 objects built from a prototype, not hurt you on performance or memory. This is also a good reason to be very careful about haphazardly doing a deep copy, and to appreciate that there is a big difference between shallow and deep (especially when it comes to the DOM). This is also one of the places where Javascript libraries carry some heavy lifting: there are browser bugs that make browsers fail at handling situations that should be cheap, because they should be using references and garbage collecting efficiently. When that happens, it tends to require creative or annoying workarounds to ensure you're not leaking copies.
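For illustration, a small sketch of that difference with jQuery's $.extend (the objects themselves are made up; this assumes jQuery is loaded):

var defaults = { size: 10, colors: { bg: 'white', fg: 'black' } };

// Shallow copy (the default): nested objects are copied by reference.
var shallow = $.extend({}, defaults);
shallow.colors.bg = 'red';
console.log(defaults.colors.bg);   // 'red' -- the nested object is shared

// Deep copy: pass true as the first argument to recursively clone.
var deep = $.extend(true, {}, defaults);
deep.colors.fg = 'blue';
console.log(defaults.colors.fg);   // 'black' -- untouched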
A new blueprint will be created,the private methods such as "_check" will be created each time also. Is it a waste of memory?
You are wrong. In the second way, you execute the surrounding function only one time and then assign person_real to Person. The code is exactly the same as in the first way (apart from _check, of course). Consider this variation of the first way:
var Person = Class.create();
Person.prototype = {
    initialize: function(name, age) {
        _check(name, age);
        this.name = name;
        this.age = age;
    },
    say: function() {
        console.log(this.name + ',' + this.age);
    }
};
function _check(name, age) {
    //....the check code here
}
Would you still say that _check is created every time you call new Person? Probably not. The difference here is that _check is global and can be accessed from any other code. By putting this piece in a function and calling the function immediately, we make _check local to that function.
The methods initialize and say have access to _check, because they are closures.
Maybe it makes more sense to you when we replace the immediate function with a normal function call:
function getPersonClass() {
    var person_real = Class.create();
    person_real.prototype = {
        initialize: function(name, age) {
            _check(name, age);
            this.name = name;
            this.age = age;
        },
        say: function() {
            console.log(this.name + ',' + this.age);
        }
    };
    //some private method
    function _check(name, age) {
        //....the check code here
    }
    return person_real;
}
var Person = getPersonClass();
getPersonClass is only called once. As _check is created inside this function, it means that it is also only created once.
I am making a webapp. I have a fairly basic question about javascript performance. Sometimes I need a global variable to store information that is used the entire time the website is open.
An example is a variable called needs_saved. It is true or false to say whether the page needs to be saved. I might have another variable called is_ie, ie_version, or space_remaining.
These are all variables that I need in various functions throughout the app.
Now, I know global variables are bad because they require the browser to search each level of function scope. But, I don't know if there is any better way to store values that are needed throughout the program.
I know I could create a global object called 'program_status' and give it the properties is_ie, ie_version, etc... But is this any better since it would first have to find my program_status object (stored as a global variable), and then the internal property?
Maybe I'm overthinking this.
Thanks
You have nothing to worry about.
The performance impact of a global variable is minute.
Global variables are discouraged because they can make code harder to maintain.
In your case, they won't.
The reason global variable use should be kept to a minimum is because the global namespace gets polluted when there's a lot of them, and there's a good chance of a clash if your program needs to use some 3rd party libraries which also create their own globals.
Creating a single object to hold all of your global state is a good idea since it limits the number of identifiers you need to reserve at the global level.
To solve performance problems, you can then create a local reference to that object in any scope where you need to access it multiple times:
So instead of
if (globalState.isIe) { alert(globalState.ieMessage); }
you can do
var state = globalState;
if (state.isIe) { alert(state.ieMessage); }
You don't need to do this if you only access the state object once. In any case, even if you never do this, the performance penalties will be negligible.
If you're worried about performance, code something clean and then run a profiler on it to optimize. I know both Safari and Google Chrome have one, and I'm pretty sure Firebug includes one for Firefox too. Heck, even Internet Explorer 8 has one.
I've heard that it's not a good idea to make elements global in JavaScript. I don't understand why. Is it something IE can't handle?
For example:
div = document.getElementById('topbar');
I don't think that's an implementation issue, but more a good vs. bad practice issue. Usually anything global is bad practice and should be avoided (global variables and so on), since you never really know how the scope of the project will evolve or how your file will be included.
I'm not a big JS freak, so I won't be able to give you the specifics on exactly why JS events are bad, but Christian Heilmann talks about JS best practices here; you could take a look. Also try googling "JS best practices".
Edit: Wikipedia on global variables, which could also apply to your problem:
[global variables] are usually considered bad practice precisely because of their nonlocality: a global variable can potentially be modified from anywhere (unless they reside in protected memory) and any part of the program may depend on it. A global variable therefore has an unlimited potential for creating mutual dependencies, and adding mutual dependencies increases complexity. See Action at a distance. However, in a few cases, global variables can be suitable for use. For example, they can be used to avoid having to pass frequently-used variables continuously throughout several functions.
via http://en.wikipedia.org/wiki/Global_variable
Is it something IE can't handle?
No it is not an IE thing. You can never assume that your code will be the only script used in the document. So it is important that you make sure your code does not have global function or variable names that other scripts can override.
Refer to Play Well With Others for examples.
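One common way to do that, sketched minimally (the element id is taken from the question; the handler is made up), is to wrap your script in an immediately invoked function so nothing leaks into the global scope:

(function () {
    var topbar = document.getElementById('topbar'); // local, not global

    function handleClick() {
        topbar.className = 'active';
    }

    if (topbar) {
        topbar.onclick = handleClick; // nothing here collides with other scripts
    }
})();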
I assume by "events" you mean the event-handling JavaScript (functions).
In general, it's bad to use more than one global variable in JS. (It's impossible not to use at least one if you're storing any data for future use.) That's because it runs into the same problem as all namespacing tries to solve - what if you wrote a method doSomething() and someone else wrote a method called doSomething()?
The best way to get around this is to make a global variable that is an object to hold all of your data and functions. For example:
var MyStuff = {};
MyStuff.counter = 0;
MyStuff.eventHandler = function() { ... };
MyStuff.doSomething = function() { ... };
// Later, when you want to call doSomething()...
MyStuff.doSomething();
This way, you're minimally polluting the global namespace; you need only worry about someone else using your one global variable name.
Of course, none of this is a problem if your code will never play with anyone else's... but this sort of thinking will bite you in the ass later if you ever do end up using someone else's code. As long as everyone plays nice in terms of JS global names, all code can get along.
There shouldn't be any problem using global variables in your code as long as you are wrapping them inside a unique namespace/object (to avoid collisions with scripts that are not yours).
The main advantage of using global variables in JavaScript derives from the fact that JavaScript is not a strongly typed language. Therefore, if you pass some complex objects as arguments to a function, you will probably lose all the IntelliSense for those objects (inside the function scope),
while using global objects instead will preserve that IntelliSense.
I personally find that very useful, and it certainly has a place in my code.
(Of course, one should always strike the right balance between local and global variables.)