Is there a benefit to calling a function within a function? - javascript

I'm following a tutorial on MDN for JavaScript, and they add an onclick handler to a button and then call a function within a function:
myButton.onclick = function() {
    setUserName();
};
I tried assigning the event to the function directly and it still worked, so I wondered if there's any good reason to do it their way.

If the function doesn't care about the event object that gets passed as the first argument, no. It bloats the code, creates an extra object in memory and has no benefit.
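For example, a minimal sketch of the direct assignment the questioner tried (myButton and setUserName are from the question):
// The handler is setUserName itself; it will receive the click event
// as its first argument, which it simply ignores.
myButton.onclick = setUserName;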

Defining a lambda function for handling an event is a dirty way of coding, IMO. It's much cleaner and safer to separate the interface from the actual processing.
If you need to do something complicated in your handler, it's just as easy to define a properly named function to do the job and assign it as the handler.
Once you have done this terrible naming effort, you can do whatever you want with your handler, including testing it by feeding dummy events, reusing it for handling events from different buttons, etc.
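For instance, a rough sketch of that separation (the handler name and the second button are invented for illustration):
// Named handler: testable on its own and reusable across buttons.
function handleSetUserNameClick(event) {
    setUserName();
}

myButton.onclick = handleSetUserNameClick;
myOtherButton.onclick = handleSetUserNameClick; // hypothetical second button

// It can also be exercised directly with a dummy event:
handleSetUserNameClick({ type: 'click' });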
On the other hand, web pages that will probably not be around for more than a few weeks or months at most are not the best place for clean coding anyway...

Create Object without assigning it to var inside function scope [duplicate]

Is it bad javascript practice to not assign a newly created object to a variable if you're never going to access it?
For example:
for (var i = 0; i < links.length; i++) {
    new objectName(links[i]);
}
And again, I won't be accessing it, so there's no need for a variable to reference it.
If you're not accessing it but it's still useful, that suggests that the constructor itself has visible side effects. Generally speaking, that's a bad idea.
What would change if you didn't call the constructor at all?
If your constructor is doing something to the global state, that strikes me as very bad. On the other hand, you could be using it just for the sake of validation - i.e. if the constructor returns without throwing an exception, it's okay. That's not quite so bad, but a separate method for validation would make things a lot clearer if that's the case.
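As a rough sketch of that distinction (the Link constructor and the URL check are invented for illustration):
var someUrl = 'https://example.com';

// Validation hidden in a constructor: the object is thrown away and only
// the possible exception matters, which is opaque to a maintainer.
function Link(url) {
    if (!/^https?:/.test(url)) {
        throw new Error('Invalid link: ' + url);
    }
    this.url = url;
}
new Link(someUrl);

// Clearer: a separate function whose name says what it actually does.
function assertValidLink(url) {
    if (!/^https?:/.test(url)) {
        throw new Error('Invalid link: ' + url);
    }
}
assertValidLink(someUrl);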
That’s absolutely fine if you don’t need to use it again.
"Is it bad javascript practice to not
assign a newly created object to a
variable if you're never going to
access it?"
I feel it is bad practice to make an assignment that is not needed, and I would argue that holds not just for JavaScript but in general. If there are side effects you want, getting them from the act of an assignment is bad practice for the simple reason that it is fairly opaque from a maintenance point of view.
It seems like your constructor is doing something else besides creating/initializing an object.
It would be a cleaner solution to implement that extra functionality into a function or method.
Constructors should be used to create and initialize objects.
That’s absolutely fine if you don’t need to use it again. But we need to explain why.
An object can continue to do something even if you don't have a reference to it (as others have suggested: an animation, a listener, etc.).
Whoever thinks this is odd should reflect on the alternatives before saying it is not fine:
Maybe someone forgot to assign the object to a variable? Maybe they made an even worse mistake: they forgot to use it later. Is that reasonable? Is it odd?
Is new objectName().start() clearer? Maybe. But if you always need to call start() immediately after creation to make the object useful, it is better to include the start() call in the constructor. That way you can't forget to do it.
If you can do it with a static method, without constructing anything inside, then the real question is: why do you have an object at all? A simple function should be enough.
If you create a static method only to create a new object, on one hand it is perhaps clearer that it will do something (though you should assume in any case that a constructor can do something; that is a fact), but on the other hand it is less clear that something can continue acting after the call. If you are not in the previous case (no creation of objects), maybe it is better to emphasize the creation of something by using a constructor. But what is clearer is subjective, and it can be pointless to argue about it further.
A good object name can clarify or suggest that the object will do something after creation.
Examples:
new Baby() is supposed to start to cry, sometimes. And it will live on even if you don't tell it to do something.
new AutoRegisteringListener(); do you really need to explain what will happen when you create such an object? Are you really surprised that this listener will register itself for some events?
In my opinion, the point is to think in an object-oriented way instead of thinking only in a functional way: objects can have behaviour. That is what they are designed for, among other things.
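A minimal sketch of the AutoRegisteringListener idea (the element id and the handler body are invented for illustration):
function AutoRegisteringListener(element) {
    // The constructor wires the object up; no reference needs to be kept.
    element.addEventListener('click', function (event) {
        console.log('clicked', event.target);
    }, false);
}

// The listener keeps working even though we never store the object anywhere.
new AutoRegisteringListener(document.getElementById('myButton'));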

What are the benefits to using anonymous functions instead of named functions for callbacks and parameters in JavaScript event code?

I'm new-ish to JavaScript. I understand many of the concepts of the language, I've been reading up on the prototype inheritance model, and I'm whetting my whistle with more and more interactive front-end stuff. It's an interesting language, but I'm always a bit turned off by the callback spaghetti that is typical of many non-trivial interaction models.
Something that has always seemed strange to me is that in spite of the readability nightmare that is a nest of JavaScript nested callbacks, the one thing that I very rarely see in many examples and tutorials is the use of predefined named functions as callback arguments. I'm a Java programmer by day, and discarding the stereotypical jabs about Enterprise-y names for units of code one of the things I've come to enjoy about working in a language with a strong selection of featureful IDE's is that using meaningful, if long, names can make the intent and meaning of code much clearer without making it more difficult to actually be productive. So why not use the same approach when writing JavaScript code?
Giving it thought, I can come up with arguments that are both for and against this idea, but my naivety and newness to the language impairs me from reaching any conclusions as to why this would be good at a technical level.
Pros:
Flexibility. An asynchronous function with a callback parameter could be reached by one of many different code paths, and it could be a hassle to have to write a named function to account for every single possible edge case.
Speed. It plays heavily into the hacker mentality. Bolt things onto it until it works.
Everyone else is doing it
Smaller file sizes, even if trivially so, but every bit counts on the web.
Simpler AST? I would assume that anonymous functions are generated at runtime and so the JIT won't muck about with mapping the name to instructions, but I'm just guessing at this point.
Quicker dispatching? Not sure about this one either. Guessing again.
Cons:
It's hideous and unreadable
It adds to the confusion when you're nested nuts deep in a swamp of callbacks (which, to be fair, probably means you're writing poorly constructed code to begin with, but it's quite common).
For someone without a functional background it can be a bizarre concept to grok
With so many modern browsers showing the ability to execute JavaScript code much faster than before, I'm failing to see how any trivial sort of performance gain one might get out of using anonymous callbacks would be a necessity. It seems that, if you are in a situation where using a named function is feasible (predictable behavior and path of execution) then there would be no reason not to.
So are there any technical reasons or gotchas that I'm not aware of that makes this practice so commonplace for a reason?
I use anonymous functions for three reasons:
If no name is needed because the function is only ever called in one place, then why add a name to whatever namespace you're in?
Anonymous functions are declared inline and inline functions have advantages in that they can access variables in the parent scopes. Yes, you can put a name on an anonymous function, but that's usually pointless if it's declared inline. So inline has a significant advantage and if you're doing inline, there's little reason to put a name on it.
The code seems more self-contained and readable when handlers are defined right inside the code that's calling them. You can read the code in almost sequential fashion rather than having to go find the function with that name.
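For instance, a small sketch of point 2 (the element id and variable names are invented): the inline anonymous handler can read userName from the enclosing scope with no extra plumbing:
function wireGreetingButton(userName) {
    var button = document.getElementById('greetBtn');
    // The anonymous handler closes over userName from the parent scope.
    button.onclick = function () {
        alert('Hello, ' + userName);
    };
}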
I do try to avoid deep nesting of anonymous functions because that can be hairy to understand and read. Usually when that happens, there's a better way to structure the code (sometimes with a loop, sometimes with a data table, etc...) and named functions isn't usually the solution there either.
I guess I'd add that if a callback starts to get more than about 15-20 lines long and it doesn't need direct access to variables in the parent scope, I would be tempted to give it a name and break it out into its own named function declared elsewhere. There is definitely a readability point here where a non-trivial function that gets long is just more maintainable if it's put in its own named unit. But, most callbacks I end up with are not that long and I find it more readable to keep them inline.
I prefer named functions myself, but for me it comes down to one question:
Will I use this function anywhere else?
If the answer is yes, I name/define it. If not, pass it as an anonymous function.
If you only use it once, it doesn't make sense to crowd the global namespace with it. In today's complex front-ends, the number of named functions that could have been anonymous grows quickly (easily over 1000 on really intricate designs), resulting in (relatively) large performance gains by preferring anonymous functions.
However, code maintainability is also extremely important. Each situation is different. If you're not writing a lot of these functions to begin with, there's no harm in doing it either way. It's really up to your preference.
Another note about names. Getting in the habit of defining long names will really hurt your file size. Take the following example.
Assume both of these functions do the same thing:
function addTimes(time1, time2)
{
    // return time1 + time2;
}
function addTwoTimesIn24HourFormat(time1, time2)
{
    // return time1 + time2;
}
The second tells you exactly what it does in the name. The first is more ambiguous. However, there are 17 characters of difference in the name. Say the function is defined once and called 8 times throughout the code; that's 9 × 17 = 153 extra bytes your code didn't need to have. Not colossal, but if it's a habit, extrapolating that to tens or even hundreds of functions will easily mean a few KB of difference in the download.
Again however, maintainability needs to be weighed against the benefits of performance. This is the pain of dealing with a scripted language.
A bit late to the party, but some not yet mentioned aspects to functions, anonymous or otherwise...
Anon funcs are not easily referred to in humanoid conversations about code, amongst a team. E.g., "Joe, could you explain what the algorithm does, within that function. ... Which one? The 17th anonymous function within the fooApp function. ... No, not that one! The 17th one!"
Anon funcs are anonymous to the debugger as well. (duh!) Therefore, the debugger stack trace will generally just show a question mark or similar, making it less useful when you have set multiple breakpoints. You hit the breakpoint, but find yourself scrolling the debug window up/down to figure out where the hell you are in your program, because hey, question mark function just doesn't do it!
Concerns about polluting the global namespace are valid, but easily remedied by naming your functions as nodes within your own root object, like "myFooApp.happyFunc = function ( ... ) { ... }; ".
Functions that are available in the global namespace, or as nodes in your root object like above, can be invoked from the debugger directly, during development and debug. E.g., at the console command line, do "myFooApp.happyFunc(42)". This is an extremely powerful ability that does not exist (natively) in compiled programming languages. Try that with an anon func.
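A tiny sketch of that root-object namespacing (myFooApp and happyFunc are the answer's own example names; the body is invented):
var myFooApp = myFooApp || {};
myFooApp.happyFunc = function (n) {
    return n * 2;
};
// During development you can call it straight from the console:
// myFooApp.happyFunc(42); // -> 84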
Anon funcs can be made more readable by assigning them to a var, and then passing the var as the callback (instead of inlining). E.g.:
var funky = function ( ... ) { ... };
jQuery('#otis').click(funky);
Using the above approach, you could potentially group several anon funcs at the top of the parental func, then below that, the meat of sequential statements becomes much tighter grouped, and easier to read.
Anonymous functions are useful because they help you control which functions are exposed.
More detail: if there is no name, you can't reassign it or tamper with it anywhere but the exact place it was created. A good rule of thumb: if you don't need to reuse the function anywhere, consider whether an anonymous function would be better, to prevent it from being tampered with elsewhere.
Example:
If you're working on a big project with a lot of people, what if you have a function inside of a bigger function and you name it something? That means anyone working with you and also editing code in the bigger function can do stuff to that smaller function at any time. What if you named it "add" for instance, and someone reassigned "add" to a number instead inside the same scope? Then the whole thing breaks!
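A contrived sketch of that failure mode (names invented):
function biggerFunction() {
    function add(a, b) { return a + b; }

    // ...much later, someone else working in the same scope does this:
    add = 42;

    add(1, 2); // TypeError: add is not a function
}

// An anonymous function passed straight to its one caller leaves no
// name in scope to clobber:
[1, 2, 3].map(function (n) { return n + 1; });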
PS - I know this is a very old post, but there is a much simpler answer to this question, and I wish someone had put it this way when I was looking for the answer myself as a beginner. I hope you're OK with reviving an old thread!
It's more readable using named functions, and they are also capable of self-referencing, as in the example below.
(function recursion(iteration){
    if (iteration > 0) {
        console.log(iteration);
        recursion(--iteration);
    } else {
        console.log('done');
    }
})(20);
console.log('recursion defined? ' + (typeof recursion === 'function'));
http://jsfiddle.net/Yq2WD/
This is nice when you want to have an immediately invoked function that references itself but does not add to the global namespace. It's still readable but not polluting. Have your cake and eat it too.
Hi, my name is Jason OR hi, my name is ???? you pick.
Well, just to be clear for the sake of my arguments, the following are all anonymous functions/function expressions in my book:
var x = function(){ alert('hi'); },
    indexOfHandyMethods = {
        hi: function(){ alert('hi'); },
        high: function(){
            buyPotatoChips();
            playBobMarley();
        }
    };
someObject.someEventListenerHandlerAssigner( function(e){
    if(e.doIt === true){ doStuff(e.someId); }
} );
(function namedButAnon(){ alert('name visible internally only'); })()
Pros:
It can reduce a bit of cruft, particularly in recursive functions (where you could, and actually should since arguments.callee is deprecated, still use a named reference internally, as in the last example), and makes it clear the function only ever fires in this one place.
Code legibility win: in the example of the object literal with anon funcs assigned as methods, it would be silly to add more places to hunt and peck for logic in your code when the whole point of that object literal is to plop some related functionality in the same conveniently referenced spot. When declaring public methods in a constructor, however, I do tend to define labeled functions inline and then assign as references of this.sameFuncName. It lets me use the same methods internally without the 'this.' cruft and makes order of definition a non-concern when they call each other.
Useful for avoiding needless global namespace pollution - internal namespaces, however, shouldn't ever be that broadly filled or handled by multiple teams simultaneously, so that argument seems a bit silly to me.
I agree with the inline callbacks when setting short event handlers. It's silly to have to hunt for a 1-5 line function, especially since with JS and function hoisting, the definitions could end up anywhere, not even within the same file. This could happen by accident without breaking anything and no, you don't always have control of that stuff. Events always result in a callback function being fired. There's no reason to add more links to the chain of names you need to scan through just to reverse engineer simple event-handlers in a large codebase and the stack trace concern can be addressed by abstracting event triggers themselves into methods that log useful info when debug mode is on and fire the triggers. I'm actually starting to build entire interfaces this way.
Useful when you WANT the order of function definition to matter. Sometimes you want to be certain a default function is what you think it is until a certain point in the code where it's okay to redefine it. Or you want breakage to be more obvious when dependencies get shuffled.
Cons:
Anon functions can't take advantage of function hoisting. This is a major difference. I tend to take heavy advantage of hoisting to define my own explicitly named funcs and object constructors towards the bottom and get to the object definition and main-loop type stuff right up at the top. I find it makes the code easier to read when you name your vars well and get a broad view of what's going on before ctrl-Fing for details only when they matter to you. Hoisting can also be a huge benefit in heavily event-driven interfaces where imposing a strict order of what's available when can bite you in the butt. Hoisting has its own caveats (like circular reference potential) but it is a very useful tool for organizing and making code legible when used right.
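A tiny sketch of that hoisting difference (names invented):
main(); // works: function declarations are hoisted along with their bodies

function main() {
    console.log(addOne(41)); // addOne is also hoisted, even though it's defined below
}

function addOne(n) { return n + 1; }

// helper(); // would throw here: only the var is hoisted, not the function value
var helper = function () { console.log('too late'); };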
Legibility/Debug. Absolutely they get used way too heavily at times and it can make debug and code legibility a hassle. Codebases that rely heavily on JQ, for instance, can be a serious PITA to read and debug if you don't encapsulate the near-inevitable anon-heavy and massively overloaded args of the $ soup in a sensible way. JQuery's hover method for instance, is a classic example of over-use of anon funcs when you drop two anon funcs into it, since it's easy for a first-timer to assume it's a standard event listener assignment method rather than one method overloaded to assign handlers for one or two events. $(this).hover(onMouseOver, onMouseOut) is a lot more clear than two anon funcs.

Use case for jQuery Global Ajax Event Handlers?

I'm currently studying the jQuery ajax methods and trying to take a little more in-depth look into them. I was playing with the global event handlers: ajaxStart, ajaxSend, etc. I understand how they work, but I can't think of any good use cases for them.
I've seen examples where they are used for loggers, which seems feasible enough, but why make them methods and not standalone functions that can be called like $.ajax()? It seems that if I don't have any particular element to attach them to, I just attach them to $(document) anyway.
Also, being able to use $(this) inside of the handlers does not seem like much of a benefit over just doing $("#log").
Have these been a life saver for anyone, or are there any other use cases outside of a global logger?
The global event handlers are useful for showing indicators to the user as well. That way their experience is consistent (same indicators when saving/loading) and you don't have to write the same code over and over.
The ajaxError method is great for global ajax error handling. Instead of having an error callback on all of your ajax calls, you can use the global and have it log somewhere. You can access all the information from the original ajax call from ajaxError.
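For instance, a minimal sketch using jQuery's global ajax events (the element ids are invented):
// Show and hide a loading indicator for every ajax call on the page.
$(document).ajaxStart(function () {
    $('#spinner').show();
}).ajaxStop(function () {
    $('#spinner').hide();
});

// One place to log every failed request instead of an error callback per call.
$(document).ajaxError(function (event, jqXHR, settings, thrownError) {
    $('#log').append('Failed: ' + settings.url + ' (' + jqXHR.status + ')<br>');
});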
$(this) instead of $('div span.foo div[data-foo="foo"] > input.EVIL') or $('*') can be a life saver...
It depends on the exact scenario, but it's ALWAYS better and good practice...
And the case you want ajaxSetup for: passing the same options to a jQuery ajax call over and over.
Here is a guy whom this option helped out.
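A small sketch of $.ajaxSetup for that "same options over and over" case (the URL and option values are invented):
// Defaults applied to every subsequent $.ajax / $.get / $.post call.
$.ajaxSetup({
    dataType: 'json',
    timeout: 5000,
    error: function (jqXHR) {
        console.log('Request failed with status ' + jqXHR.status);
    }
});

// Later calls no longer need to repeat those options.
$.get('/api/items', function (items) { // hypothetical endpoint
    console.log(items);
});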

Theory for attaching javascript eventlistener to variables?

I was wondering whether there is a way to attach event listeners to variables. The idea is to do something like this:
someVar.addEventListener('change', someTodo, false);
So once someVar is changed, e.g. by someVar = 1, someTodo would be executed.
As I understand it, in theory event listeners can be added to everything in the DOM; the problem being that variables do not trigger those events, while HTML objects DO trigger them.
If that is indeed correct, the extended question would be: how do you train DOM objects to trigger events? I have read something about prototyping; is that the trick here?
Please note: I like to understand and write all of my code myself. So I'd rather be interested in the theory than in using some existing thing like jQuery, where all sorts of miracles are baked right in.
Marco
The safe and time-tested approach is to use getters and setters on your objects (i.e., you only allow access to the variable through object methods like getX()/setX()). You could then overload setX() to trigger callbacks. There are some languages like Lua and Python where access to an object's members can be caught with meta functions, but I do not believe JavaScript supports this in any way.
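A rough sketch of that getter/setter approach (all names invented): the "variable" becomes a property held inside an object, and the setter notifies any registered listeners.
function Observable(initialValue) {
    var value = initialValue;
    var listeners = [];

    this.getX = function () { return value; };

    this.setX = function (newValue) {
        var oldValue = value;
        value = newValue;
        // Notify everyone who "listened" to this variable.
        listeners.forEach(function (fn) { fn(oldValue, newValue); });
    };

    this.onChange = function (fn) { listeners.push(fn); };
}

var someVar = new Observable(0);
someVar.onChange(function (oldV, newV) {
    console.log('changed from ' + oldV + ' to ' + newV);
});
someVar.setX(1); // logs: changed from 0 to 1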

Custom events and event pooling in jQuery - What's the point?

I've been reading about custom events in jQuery and why they should be used but I'm still clearly missing the point. There is a very good article I read here that has the following code example;
function UpdateOutput() {
    var name = $('#txtName').val();
    var address = $('#txtAddress').val();
    var city = $('#txtCity').val();
    $('#output').html(name + ' ' + address + ' ' + city);
}
$(document).bind('NAME_CHANGE ADDRESS_CHANGE CITY_CHANGE', function() {
    UpdateOutput();
});
$('#txtAddress').keyup(function() {
    $(document).trigger('ADDRESS_CHANGE');
});
$('#txtCity').keyup(function() {
    $(document).trigger('CITY_CHANGE');
});
Can someone tell me why I don't just call the UpdateOutput() function directly? It would still work exactly the same way, i.e.
$('#txtAddress').keyup(function() {
    UpdateOutput();
});
$('#txtCity').keyup(function() {
    UpdateOutput();
});
Many thanks
As your article starts with:
As everyone knows, the more dependencies you have in a system, the harder maintaining that system is. Javascript is no exception- and orchestrating actions across complex user interfaces can be a nightmare if not done properly.
Using events removes (some of) these dependencies:
When something happens (a key is released), a notification of this event is sent, without knowing whether anyone is interested at the moment or not.
When someone is interested in a certain event, they can subscribe to it, without knowing why this event was triggered (a key was released).
Both bullets are independent: the first notifies and the second responds. Removing one does not affect the functionality of the other. It is also easy to have multiple instances firing or subscribing to events (again thanks to the missing dependencies).
Because you are just one client of that event, even though you have full control over the entire application. It helps decouple your code better.
As one of the clients who is interested in knowing when the name, address, or city changes, you are updating the values in some part of the screen. Some other client might want to do something else such as pull up all adjacent cities, reverse geocode the address, do a name lookup on namesdatabase.com, and so on.
You can still control everything without events and call multiple functions directly, or put everything in its own function, but adding events decouples the implementation from what needs to be done, based only on the type of event. UpdateOutput does not have to worry about pulling up names from a names database, and you don't have to worry about calling all the necessary functions yourself whenever a particular event happens, as long as there is a basic understanding of the events defined in the system and what they represent.
A second reason is abstraction. For example, deleting a user account might get triggered by a simple click, but just by translating that to a higher-level event such as DeleteAccount, things get simpler and more understandable when you consider that there could be tens, hundreds, or maybe even thousands of such events all across the application. Working at a higher level of abstraction than "keyups", "keydowns", "mouseovers", etc. (which are really meaningless anyway in the context of an application and the intent behind it), things can get a lot more manageable as the application size grows.
It's perfectly reasonable to wire up the UpdateOutput() call directly. Simple examples often make it hard to show the need for the abstraction that custom event pooling provides.
However, there are (arguably) two maintainability issues (again, depending on the use) with calling UpdateOutput directly. The first problem occurs by repeating the UpdateOutput() call in numerous places. This makes refactoring functions (especially those with parameters) extremely difficult when they change. With event pooling, you can prep data before passing it to a function, which is helpful when disparate code blocks call the same function. With ajax-heavy web apps around, controlling function calls is very important.
Secondly, it's possible you'll need to call multiple functions for the same event. Imagine that in addition to UpdateOutput() there's also some validation function which needs to be fired, or when a user updates the zip code some maps API is triggered to show something, or whatever other functionality could happen (the point being you may have numerous functions being called from a simple keyup() event). Having this wired up directly makes for very large blocks of code which simply call multiple functions, and if there's conditional logic required, they can get out of hand. Rewritten with event pooling, you get a lot more control over how a function like UpdateOutput is being called, so you don't need to chase down the method across numerous files. You can simply see what events that call is bound to.
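A small sketch of that second point, building on the article's code (the validation and maps functions are invented): several independent handlers listen for the same custom event, and the keyup code never needs to know about any of them.
$(document).bind('ADDRESS_CHANGE', function() {
    UpdateOutput();
});
$(document).bind('ADDRESS_CHANGE', function() {
    validateAddress($('#txtAddress').val());   // hypothetical validation
});
$(document).bind('ADDRESS_CHANGE', function() {
    refreshMapPreview($('#txtAddress').val()); // hypothetical maps call
});

// The trigger site stays exactly as in the article:
$('#txtAddress').keyup(function() {
    $(document).trigger('ADDRESS_CHANGE');
});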
