Use single line functions or repeat code - javascript

I was writing some JavaScript code in which I needed to show and hide some sections of a web page. I ended up with functions like these:
function hideBreakPanel() {
    $('section#break-panel').addClass('hide');
}

function hideTimerPanel() {
    $('section#timer-panel').addClass('hide');
}

function showBreakPanel() {
    resetInputValues();
    $('section#break-panel').removeClass('hide');
}

function showTimerPanel() {
    resetInputValues();
    $('section#timer-panel').removeClass('hide');
}
My question is related to code quality and refactoring. When is it better to have simple functions like these, and when is it better to invoke the JavaScript/jQuery function directly? I suppose the latter approach has better performance, but in this case performance is not a concern, as it is a really simple site.

I think you're fine with having functions like these; after all, hideBreakPanel might later involve more than just applying a class to an element. The only thing I'd point out is to try to minimize the amount of repeated code in those functions. Don't worry about the fact that you're adding function call overhead; unless you're doing this in a performance-critical scenario, the runtime interpreter couldn't care less.
One way you could arrange the functions to avoid repeating yourself:
function hidePanel(name) {
    $('section#' + name + '-panel').addClass('hide');
}

function showPanel(name) {
    resetInputValues();
    $('section#' + name + '-panel').removeClass('hide');
}
If you absolutely must have a shorthand, you can then do:
function hideBreakPanel() {
    hidePanel("break");
}
Or even
// bind's first argument supplies 'this' (unused by hidePanel); "break" is pre-filled
var hideBreakPanel = hidePanel.bind(hidePanel, "break");
This way you encapsulate common functionality in a function, and you won't have to update all your hide functions to amend the way hiding is done.

My question is related to code quality and refactoring. When is it better to have simple functions like these, and when is it better to invoke the JavaScript/jQuery function directly? I suppose the latter approach has better performance, but in this case performance is not a concern, as it is a really simple site.
Just from a general standpoint, you can get into a bit of trouble later if you have a lot of one-liner functions, multiple statements crammed into one line, and things like that, if the goal is merely syntactic sugar and a very personal definition of clarity (which can be quite transient and change like fashion trends).
That's because the quality that gives code longevity is often, above all, familiarity and, to a lesser extent, centralization (fewer branches of code to jump through). Being able to recognize, and not absolutely loathe, code you wrote years later (not finding it bizarre or alien, for example) often favors the qualities that reduce the number of concepts in the system and lean towards very idiomatic use of languages and libraries. There are human metrics here beyond formal SE metrics, like simply being motivated to keep maintaining the same code.
But it's a balancing act. If the motivation to seek these shorter and sweeter function calls has more to do with concepts beyond syntax, like having a central place to modify, extend, and instrument the behavior, or to improve safety in otherwise error-prone code, then even a bunch of one-liner functions could start to be of great aid in the future. The key in that case, to keep the familiarity, is to make sure you (and your team, if applicable) have plenty of reuse for such functions, and incorporate them into the daily practices and standards.
Idiomatic code tends to be quite safe here because we tend to be saturated by examples of it, keeping it familiar. Any time you go deep into establishing proprietary interfaces, you risk losing that quality. Yet proprietary interfaces are definitely needed, so the key is to make them count.
Another kind of esoteric view is that functions that depend on each other tend to age together. An image processing function that just operates on very simple types provided by a language tends to age well. We can find, for example, C functions of this sort that are still relevant and easily applicable today, dating all the way back to the 80s. Such code stays familiar. If it depends on a very exotic pixel and color library and math routines outside of the norm, then it tends to age a lot more quickly (it loses the familiarity/applicability), because that image processing routine now ages with everything it depends on. So again, always with an eye towards tightrope-balancing and trade-offs, it can sometimes be useful to resist the temptation to venture too far outside the norms, and to avoid coupling your code to too many exotic interfaces (especially ones that serve little more than sugar). Sometimes the slightly more verbose form of code that favors reducing the number of concepts and more directly uses what is already available in the system can be preferable.
Yet, as is often the case, it depends. But these might be some less frequently-mentioned qualities to keep in mind when making all of your decisions.

If the resetInputValues() method returns undefined (i.e., returns nothing) or any other falsy value, you could refactor it to:
function togglePanel(type, toHide) {
    // When toHide is true, the || short-circuits and skips the reset;
    // otherwise resetInputValues() runs and its falsy return shows the panel.
    $('section#' + type + '-panel').toggleClass('hide', toHide || resetInputValues());
}
Use, e.g., togglePanel('break') for showBreakPanel() and togglePanel('break', true) for hideBreakPanel().

Related

Handling callbacks in code that is like an inheritance tree

I am working on a JavaScript-based character sheet system for role-playing games. Within the specific type of RPG I am trying to implement there exists something of an inheritance tree: there are the basic RPG-system rules, there are more specific rules for a given setting within that RPG system, and then there might be differences between specific versions of that setting.
For a given user interaction the code should execute some actions for the overall RPG-system, and some other code for the specific setting and version that is being used.
Currently I am handling this by doing something in the form of:
CharacterSheet.prototype.handleSomeUserInteraction = function () {
    this.handleSomeUserInteractionForSystem();
    this.handleSomeUserInteractionForSetting();
    this.handleSomeUserInteractionForVersion();
};
// Empty implementations here to avoid errors; these will be overwritten with
// functions that do the actual handling in a separate file
CharacterSheet.prototype.handleSomeUserInteractionForSystem = function () {};
CharacterSheet.prototype.handleSomeUserInteractionForSetting = function () {};
CharacterSheet.prototype.handleSomeUserInteractionForVersion = function () {};
The above is of course inelegant and not efficient - some interactions don't require handling at all three levels, but I still have to call an empty callback. (It should be noted that, of course, I don't know in advance which of these callbacks are needed - it depends on the particular system/setting/version.)
Ideally I would like to achieve the following:
It should be lightweight - in rare circumstances these callbacks need to be called maybe a hundred times per second.
Only required/non-empty callbacks should be called.
The interface for registering a callback should be intuitive and clean, so that other developers who are otherwise unfamiliar with the rest of the code can easily use it to develop their own RPG system/setting/version code.
I.e. it should be obvious which parts of the code can be extended (e.g. where a callback can be registered) and how.
Related to the above, there should be as little juggling with e.g. 'this' arguments as possible.
How can I best achieve the above objectives?
NOTE: Since inheritance is a bit wonky for me in JavaScript I have not gone that path, I have also read that JS inheritance may decrease performance (though I suspect that at my required four levels of inheritance it would not be a problem).
NOTE: I am looking for a clean vanilla JS solution, I am not a big fan of all these frameworks that bloat the codebase needlessly.
This is what I have come up with so far - inspired by the event-driven solution suggested by @Nope.
CharacterSheet.prototype.onUserInteraction = [];

CharacterSheet.prototype.handleSomeUserInteraction = function ()
{
    // 'this.prototype' doesn't exist on an instance; the shared array is
    // reached through the prototype chain as 'this.onUserInteraction'
    var _this = this;
    this.onUserInteraction.forEach(callback => { callback.call(_this); });
};
// Implementers then first create an implementation of the callback:
CharacterSheet.prototype.handleOnUserInteraction = function ()
{
    // Do something here
};

// And then register the callback this way:
CharacterSheet.prototype.onUserInteraction.push(CharacterSheet.prototype.handleOnUserInteraction);
This is not as clean and intuitive as I would ideally like, but it ensures that the code written in the callback can be treated as if it was written directly for the CharacterSheet prototype, and I do not have to mess with true JS inheritance.
Feedback for this approach would be appreciated, and alternative solutions will be upvoted if you explain the advantages/disadvantages. If it's good enough I will gladly mark it as accepted also.
Please note that the above code was written from memory, so it might contain some minor mistakes.
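For comparison, here is a minimal sketch of a named-event registry (the method names on and trigger are my own invention, not from the original post); handlers are stored per instance, and only registered callbacks are ever invoked, which avoids the empty-callback problem:

function CharacterSheet() {
    this.handlers = {}; // event name -> array of callbacks
}

CharacterSheet.prototype.on = function (eventName, callback) {
    (this.handlers[eventName] = this.handlers[eventName] || []).push(callback);
};

CharacterSheet.prototype.trigger = function (eventName) {
    var callbacks = this.handlers[eventName] || [];
    for (var i = 0; i < callbacks.length; i++) {
        callbacks[i].call(this); // run each handler with the sheet as 'this'
    }
};

// A setting-specific file registers only what it needs:
// sheet.on('userInteraction', function () { /* setting rules here */ });
// sheet.trigger('userInteraction');

The trade-off versus the prototype-level array above is that registration happens per instance rather than being shared by every sheet.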

Is having a nested function inside a constructor memory inefficient? [duplicate]

In JavaScript, we have two ways of making a "class" and giving it public functions.
Method 1:
function MyClass() {
    var privateInstanceVariable = 'foo';
    this.myFunc = function() { alert(privateInstanceVariable); };
}
Method 2:
function MyClass() { }

MyClass.prototype.myFunc = function() {
    alert("I can't use private instance variables. :(");
};
I've read numerous times people saying that using Method 2 is more efficient as all instances share the same copy of the function rather than each getting their own. Defining functions via the prototype has a huge disadvantage though - it makes it impossible to have private instance variables.
Even though, in theory, using Method 1 gives each instance of an object its own copy of the function (and thus uses way more memory, not to mention the time required for allocations) - is that what actually happens in practice? It seems like an optimization web browsers could easily make is to recognize this extremely common pattern, and actually have all instances of the object reference the same copy of functions defined via these "constructor functions". Then it could only give an instance its own copy of the function if it is explicitly changed later on.
Any insight - or, even better, real world experience - about performance differences between the two, would be extremely helpful.
See http://jsperf.com/prototype-vs-this
Declaring your methods via the prototype is faster, but whether or not this is relevant is debatable.
If you have a performance bottleneck in your app it is unlikely to be this, unless you happen to be instantiating 10000+ objects on every step of some arbitrary animation, for example.
If performance is a serious concern, and you'd like to micro-optimise, then I would suggest declaring via prototype. Otherwise, just use the pattern that makes most sense to you.
I'll add that, in JavaScript, there is a convention of prefixing properties that are intended to be seen as private with an underscore (e.g. _process()). Most developers will understand and avoid these properties, unless they're willing to forgo the social contract, but in that case you might as well not cater to them. What I mean to say is that: you probably don't really need true private variables...
In the new version of Chrome, this.method is about 20% faster than prototype.method, but creating new object is still slower.
If you can reuse the object instead of always creating an new one, this can be 50% - 90% faster than creating new objects. Plus the benefit of no garbage collection, which is huge:
http://jsperf.com/prototype-vs-this/59
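To illustrate the reuse idea, a minimal sketch of an object pool (names are illustrative, not from the linked benchmark):

var pool = [];

function acquire() {
    // hand back a recycled object when one is available
    return pool.length > 0 ? pool.pop() : { x: 0, y: 0 };
}

function release(obj) {
    obj.x = 0; // reset state before reuse
    obj.y = 0;
    pool.push(obj);
}

var p = acquire();
p.x = 10;
release(p); // p returns to the pool instead of becoming garbage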
It only makes a difference when you're creating lots of instances. Otherwise, the performance of calling the member function is exactly the same in both cases.
I've created a test case on jsperf to demonstrate this:
http://jsperf.com/prototype-vs-this/10
You might not have considered this, but putting the method directly on the object is actually better in one way:
Method invocations are very slightly faster (jsperf) since the prototype chain does not have to be consulted to resolve the method.
However, the speed difference is almost negligible. On top of that, putting a method on a prototype is better in two more impactful ways:
Faster to create instances (jsperf)
Uses less memory
Like James said, this difference can be important if you are instantiating thousands of instances of a class.
That said, I can certainly imagine a JavaScript engine that recognizes that the function you are attaching to each object does not change across instances and thus only keeps one copy of the function in memory, with all instance methods pointing to the shared function. In fact, it seems that Firefox is doing some special optimization like this but Chrome is not.
ASIDE:
You are right that it is impossible to access private instance variables from inside methods on prototypes. So I guess the question you must ask yourself is: do you value being able to make instance variables truly private over utilizing inheritance and prototyping? I personally think that making variables truly private is not that important and would just use the underscore prefix (e.g., "this._myVar") to signify that although the variable is public, it should be considered private. That said, in ES6, there is apparently a way to have the best of both worlds!
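One common ES6-era technique (my assumption; the answer doesn't name the feature) is to keep private state in a WeakMap keyed by the instance, so prototype methods can still reach it while nothing leaks onto the object itself:

const privates = new WeakMap();

function MyClass() {
    privates.set(this, { secret: 'foo' }); // per-instance private state
}

MyClass.prototype.myFunc = function () {
    // a prototype method that can still read the private state
    alert(privates.get(this).secret);
};

new MyClass().myFunc(); // alerts 'foo'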
You may use this approach and it will allow you to use prototype and access instance variables.
var Person = (function () {
    function Person(age, name) {
        this.age = age;
        this.name = name;
    }

    Person.prototype.showDetails = function () {
        alert('Age: ' + this.age + ' Name: ' + this.name);
    };

    return Person; // This is not referencing `var Person` but the inner Person function
}()); // See Note1 below
Note1:
The parentheses call the function (a self-invoking function) and assign the result to var Person.
Usage
var p1 = new Person(40, 'George');
var p2 = new Person(55, 'Jerry');
p1.showDetails();
p2.showDetails();
In short, use method 2 for creating properties/methods that all instances will share. Those will be "global", and any change to them will be reflected across all instances. Use method 1 for creating instance-specific properties/methods.
I wish I had a better reference but for now take a look at this. You can see how I used both methods in the same project for different purposes.
Hope this helps. :)
This answer should be considered an expansion of the rest of the answers filling in missing points. Both personal experience and benchmarks are incorporated.
As far as my experience goes, I use constructors to literally construct my objects religiously, whether methods are private or not. The main reason is that when I started, that was the easiest immediate approach for me, so it's not a special preference. It might be as simple as that I like visible encapsulation, and prototypes are a bit disembodied. My private methods will be assigned as variables in the scope as well. Although this is my habit and keeps things nicely self-contained, it's not always the best habit, and I do sometimes hit walls. Apart from wacky scenarios with highly dynamic self-assembly according to configuration objects and code layout, it tends to be the weaker approach in my opinion, particularly if performance is a concern. Knowing that the internals are private is useful, but you can achieve that via other means with the right discipline. Unless performance is a serious consideration, use whatever works best for the task at hand.
Using prototype inheritance and a convention to mark items as private does make debugging easier, as you can then traverse the object graph easily from the console or debugger. On the other hand, such a convention makes obfuscation somewhat harder and makes it easier for others to bolt their own scripts onto your site. This is one of the reasons the private scope approach gained popularity. It's not true security but instead adds resistance. Unfortunately a lot of people do still think it's a genuine way to program secure JavaScript. Since debuggers have gotten really good, code obfuscation takes its place. If you're looking for security flaws where too much is on the client, it's a design pattern you might want to look out for.
A convention allows you to have protected properties with little fuss. That can be a blessing and a curse. It does ease some inheritance issues as it is less restrictive. You still do have the risk of collision or increased cognitive load in considering where else a property might be accessed. Self assembling objects let you do some strange things where you can get around a number of inheritance problems but they can be unconventional. My modules tend to have a rich inner structure where things don't get pulled out until the functionality is needed elsewhere (shared) or exposed unless needed externally. The constructor pattern tends to lead to creating self contained sophisticated modules more so than simply piecemeal objects. If you want that then it's fine. Otherwise if you want a more traditional OOP structure and layout then I would probably suggest regulating access by convention. In my usage scenarios complex OOP isn't often justified and modules do the trick.
All of the tests here are minimal. In real world usage it is likely that modules will be more complex, making the hit a lot greater than the tests here indicate. It's quite common to have a private variable with multiple methods working on it, and each of those methods will add more overhead on initialisation that you won't get with prototype inheritance. In most cases it doesn't matter, because only a few instances of such objects float around, although cumulatively it might add up.
There is an assumption that prototype methods are slower to call because of the prototype lookup. It's not an unfair assumption; I made the same one myself until I tested it. In reality it's complex, and some tests suggest that aspect is trivial. Between prototype.m = f, this.m = f, and this.m = function..., the latter performs significantly better than the first two, which perform around the same. If the prototype lookup alone were a significant issue, then the last two forms would instead outperform the first significantly. Instead, something else strange is going on, at least where Canary is concerned. It's possible functions are optimised according to what they are members of. A multitude of performance considerations come into play. You also have differences for parameter access and variable access.
Memory Capacity. It's not well discussed here. An assumption you can make up front that's likely to be true is that prototype inheritance will usually be far more memory efficient and according to my tests it is in general. When you build up your object in your constructor you can assume that each object will probably have its own instance of each function rather than shared, a larger property map for its own personal properties and likely some overhead to keep the constructor scope open as well. Functions that operate on the private scope are extremely and disproportionately demanding of memory. I find that in a lot of scenarios the proportionate difference in memory will be much more significant than the proportionate difference in CPU cycles.
Memory Graph. You can also jam up the engine, making GC more expensive. Profilers do tend to show time spent in GC these days. It's not only a problem when it comes to allocating and freeing more. You'll also create a larger object graph to traverse and things like that, so the GC consumes more cycles. If you create a million objects and then hardly touch them, depending on the engine, it might turn out to have more of an ambient performance impact than you expected. I have proven that this does at least make the GC run for longer when objects are disposed of. That is, there tends to be a correlation between memory used and the time it takes to GC. However there are cases where the time is the same regardless of the memory. This indicates that the graph makeup (layers of indirection, item count, etc.) has more impact. That's not something that is always easy to predict.
Not many people use chained prototypes extensively, myself included, I have to admit. Prototype chains can be expensive in theory; someone has probably measured the cost, but I have not. If you instead build your objects entirely in the constructor and then have a chain of inheritance as each constructor calls a parent constructor upon itself, in theory method access should be much faster. On the other hand you can accomplish the equivalent if it matters (such as flattening the prototypes down the ancestor chain), if you don't mind breaking things like hasOwnProperty, and perhaps instanceof, when you really need it. In either case things start to get complex once you go down this road when it comes to performance hacks. You'll probably end up doing things you shouldn't be doing.
Many people don't directly use either approach you've presented. Instead they make their own things using anonymous objects allowing method sharing any which way (mixins for example). There are a number of frameworks as well that implement their own strategies for organising modules and objects. These are heavily convention based custom approaches. For most people and for you your first challenge should be organisation rather than performance. This is often complicated in that Javascript gives many ways of achieving things versus languages or platforms with more explicit OOP/namespace/module support. When it comes to performance I would say instead to avoid major pitfalls first and foremost.
There's a new Symbol type that's supposed to work for private variables and methods. There are a number of ways to use this and it raises a host of questions related to performance and access. In my tests the performance of Symbols wasn't great compared to everything else but I never tested them thoroughly.
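For illustration, a minimal sketch of the Symbol approach (hypothetical names): a Symbol-keyed property is hidden from ordinary enumeration and name collisions, though it is not truly private, since Object.getOwnPropertySymbols can still reveal it:

const _count = Symbol('count');

function Counter() {
    this[_count] = 0; // not listed by Object.keys or for...in
}

Counter.prototype.increment = function () {
    return ++this[_count];
};

var c = new Counter();
c.increment(); // 1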
Disclaimers:
There are lots of discussions about performance and there isn't always a permanently correct answer for this as usage scenarios and engines change. Always profile but also always measure in more than one way as profiles aren't always accurate or reliable. Avoid significant effort into optimisation unless there's definitely a demonstrable problem.
It's probably better instead to include performance checks for sensitive areas in automated testing and to run when browsers update.
Remember that sometimes battery life matters as well as perceptible performance. The slowest solution might turn out faster after running an optimising compiler on it (i.e., a compiler might have a better idea of when restricted-scope variables are accessed than of properties marked as private by convention). Consider backends such as Node.js. These can require better latency and throughput than you would often find on the browser. Most people won't need to worry about these things with something like validation for a registration form, but the number of diverse scenarios where such things might matter is growing.
You have to be careful with memory allocation tracking tools to persist the result. In some cases where I didn't return and persist the data, it was optimised out entirely, or the sample rate was not sufficient between instantiated/unreferenced, leaving me scratching my head as to how an array initialised and filled to a million elements registered as 3.4 KiB in the allocation profile.
In the real world, in most cases the only way to really optimise an application is to write it in the first place so you can measure it. There are dozens to hundreds of factors that can come into play, if not thousands, in any given scenario. Engines also do things that can lead to asymmetric or non-linear performance characteristics. If you define functions in a constructor, they might be arrow functions or traditional ones; each behaves differently in certain situations, and I have no idea about the other function types. Classes also don't behave the same in terms of performance as prototyped constructors that should be equivalent. You need to be really careful with benchmarks as well. Prototyped classes can have deferred initialisation in various ways, especially if you prototyped your properties as well (advice: don't). This means that you can understate initialisation cost and overstate access/property-mutation cost. I have also seen indications of progressive optimisation. In these cases I have filled a large array with instances of objects that are identical, and as the number of instances increases, the objects appear to be incrementally optimised for memory up to a point where the remainder is the same. It is also possible that those optimisations can impact CPU performance significantly. These things are heavily dependent not merely on the code you write but on what happens at runtime, such as the number of objects, the variance between objects, etc.

What are the benefits to using anonymous functions instead of named functions for callbacks and parameters in JavaScript event code?

I'm new-ish to JavaScript. I understand many of the concepts of the language, I've been reading up on the prototype inheritance model, and I'm whetting my whistle with more and more interactive front-end stuff. It's an interesting language, but I'm always a bit turned off by the callback spaghetti that is typical of many non-trivial interaction models.
Something that has always seemed strange to me is that in spite of the readability nightmare that is a nest of JavaScript nested callbacks, the one thing that I very rarely see in many examples and tutorials is the use of predefined named functions as callback arguments. I'm a Java programmer by day, and, discarding the stereotypical jabs about Enterprise-y names for units of code, one of the things I've come to enjoy about working in a language with a strong selection of featureful IDEs is that using meaningful, if long, names can make the intent and meaning of code much clearer without making it more difficult to actually be productive. So why not use the same approach when writing JavaScript code?
Giving it thought, I can come up with arguments both for and against this idea, but my naivety and newness to the language keep me from reaching any conclusions as to why this would be good at a technical level.
Pros:
Flexibility. An asynchronous function with a callback parameter could be reached by one of many different code paths, and it could get hairy to have to write a named function to account for every single possible edge case.
Speed. It plays heavily into the hacker mentality. Bolt things onto it until it works.
Everyone else is doing it
Smaller file sizes, even if trivially so, but every bit counts on the web.
Simpler AST? I would assume that anonymous functions are generated at runtime and so the JIT won't muck about with mapping the name to instructions, but I'm just guessing at this point.
Quicker dispatching? Not sure about this one either. Guessing again.
Cons:
It's hideous and unreadable
It adds to the confusion when you're nested nuts deep in a swamp of callbacks (which, to be fair, probably means you're writing poorly constructed code to begin with, but it's quite common).
For someone without a functional background it can be a bizarre concept to grok
With so many modern browsers showing the ability to execute JavaScript code much faster than before, I'm failing to see how any trivial sort of performance gain one might get out of using anonymous callbacks would be a necessity. It seems that, if you are in a situation where using a named function is feasible (predictable behavior and path of execution), then there would be no reason not to.
So are there any technical reasons or gotchas that I'm not aware of that makes this practice so commonplace for a reason?
I use anonymous functions for three reasons:
If no name is needed because the function is only ever called in one place, then why add a name to whatever namespace you're in.
Anonymous functions are declared inline and inline functions have advantages in that they can access variables in the parent scopes. Yes, you can put a name on an anonymous function, but that's usually pointless if it's declared inline. So inline has a significant advantage and if you're doing inline, there's little reason to put a name on it.
The code seems more self-contained and readable when handlers are defined right inside the code that's calling them. You can read the code in almost sequential fashion rather than having to go find the function with that name.
I do try to avoid deep nesting of anonymous functions because that can be hairy to understand and read. Usually when that happens, there's a better way to structure the code (sometimes with a loop, sometimes with a data table, etc.), and named functions aren't usually the solution there either.
I guess I'd add that if a callback starts to get more than about 15-20 lines long and it doesn't need direct access to variables in the parent scope, I would be tempted to give it a name and break it out into its own named function declared elsewhere. There is definitely a readability point here where a non-trivial function that gets long is just more maintainable if it's put in its own named unit. But most callbacks I end up with are not that long, and I find it more readable to keep them inline.
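To illustrate the closure point above, a small sketch (the names, including showDetails, are made up): the inline callback can read row from the enclosing scope, which a named function declared elsewhere couldn't do without extra plumbing:

function attachRowHandlers(rows) {
    rows.forEach(function (row) {
        $('#' + row.id).click(function () {
            showDetails(row); // 'row' is captured from the parent scope
        });
    });
}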
I prefer named functions myself, but for me it comes down to one question:
Will I use this function anywhere else?
If the answer is yes, I name/define it. If not, pass it as an anonymous function.
If you only use it once, it doesn't make sense to crowd the global namespace with it. In today's complex front-ends, the number of named functions that could have been anonymous grows quickly (easily over 1000 on really intricate designs), resulting in (relatively) large performance gains by preferring anonymous functions.
However, code maintainability is also extremely important. Each situation is different. If you're not writing a lot of these functions to begin with, there's no harm in doing it either way. It's really up to your preference.
Another note about names. Getting in the habit of defining long names will really hurt your file size. Take the following example.
Assume both of these functions do the same thing:
function addTimes(time1, time2)
{
    // return time1 + time2;
}

function addTwoTimesIn24HourFormat(time1, time2)
{
    // return time1 + time2;
}
The second tells you exactly what it does in the name. The first is more ambiguous. However, there are 17 characters of difference in the name. Say the function is called 8 times throughout the code, that's 153 extra bytes your code didn't need to have. Not colossal, but if it's a habit, extrapolating that to 10s or even 100s of functions will easily mean a few KB of difference in the download.
Again however, maintainability needs to be weighed against the benefits of performance. This is the pain of dealing with a scripted language.
A bit late to the party, but some not yet mentioned aspects to functions, anonymous or otherwise...
Anon funcs are not easily referred to in humanoid conversations about code, amongst a team. E.g., "Joe, could you explain what the algorithm does, within that function. ... Which one? The 17th anonymous function within the fooApp function. ... No, not that one! The 17th one!"
Anon funcs are anonymous to the debugger as well. (duh!) Therefore, the debugger stack trace will generally just show a question mark or similar, making it less useful when you have set multiple breakpoints. You hit the breakpoint, but find yourself scrolling the debug window up/down to figure out where the hell you are in your program, because hey, question mark function just doesn't do it!
Concerns about polluting the global namespace are valid, but easily remedied by naming your functions as nodes within your own root object, like "myFooApp.happyFunc = function ( ... ) { ... }; ".
Functions that are available in the global namespace, or as nodes in your root object like above, can be invoked from the debugger directly, during development and debug. E.g., at the console command line, do "myFooApp.happyFunc(42)". This is an extremely powerful ability that does not exist (natively) in compiled programming languages. Try that with an anon func.
Anon funcs can be made more readable by assigning them to a var, and then passing the var as the callback (instead of inlining). E.g.:
var funky = function ( ... ) { ... };
jQuery('#otis').click(funky);
Using the above approach, you could potentially group several anon funcs at the top of the parental func, then below that, the meat of sequential statements becomes much tighter grouped, and easier to read.
Anonymous functions are useful because they help you control which functions are exposed.
More Detail: If there is no name, you can't reassign it or tamper with it anywhere but the exact place it was created. A good rule of thumb: if you don't need to reuse this function anywhere, it's a good idea to consider whether an anonymous function would be better to prevent it from being tampered with elsewhere.
Example:
If you're working on a big project with a lot of people, what if you have a function inside of a bigger function and you name it something? That means anyone working with you and also editing code in the bigger function can do stuff to that smaller function at any time. What if you named it "add" for instance, and someone reassigned "add" to a number instead inside the same scope? Then the whole thing breaks!
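A quick sketch of that hazard (illustrative names):

function bigger() {
    function add(a, b) { return a + b; }
    // ...hundreds of lines later, someone editing the same scope does:
    add = 5;
    add(1, 2); // TypeError: add is not a function
}

// Passing the logic as an anonymous function instead leaves no name
// in scope to reassign:
[1, 2, 3].map(function (n) { return n + 1; }); // [2, 3, 4]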
PS - I know this is a very old post, but there is a much simpler answer to this question, and I wish someone had put it this way when I was looking for the answer myself as a beginner. I hope you're OK with reviving an old thread!
It's more readable using named functions, and they are also capable of self-referencing, as in the example below.
(function recursion(iteration) {
    if (iteration > 0) {
        console.log(iteration);
        recursion(--iteration);
    } else {
        console.log('done');
    }
})(20);
console.log('recursion defined? ' + (typeof recursion === 'function'));
http://jsfiddle.net/Yq2WD/
This is nice when you want to have an immediately invoked function that references itself but does not add to the global namespace. It's still readable but not polluting. Have your cake and eat it too.
Hi, my name is Jason OR hi, my name is ???? you pick.
Well, just to be clear for the sake of my arguments, the following are all anonymous functions/function expressions in my book:
var x = function () { alert('hi'); },
    indexOfHandyMethods = {
        hi: function () { alert('hi'); },
        high: function () {
            buyPotatoChips();
            playBobMarley();
        }
    };

someObject.someEventListenerHandlerAssigner(function (e) {
    if (e.doIt === true) { doStuff(e.someId); }
});

(function namedButAnon() { alert('name visible internally only'); })();
Pros:
It can reduce a bit of cruft, particularly in recursive functions (where you could, and actually should since arguments.callee is deprecated, still use a named reference internally, per the last example), and makes it clear the function only ever fires in this one place.
Code legibility win: in the example of the object literal with anon funcs assigned as methods, it would be silly to add more places to hunt and peck for logic in your code when the whole point of that object literal is to plop some related functionality in the same conveniently referenced spot. When declaring public methods in a constructor, however, I do tend to define labeled functions inline and then assign as references of this.sameFuncName. It lets me use the same methods internally without the 'this.' cruft and makes order of definition a non-concern when they call each other.
Useful for avoiding needless global namespace pollution - internal namespaces, however, shouldn't ever be that broadly filled or handled by multiple teams simultaneously, so that argument seems a bit silly to me.
I agree with the inline callbacks when setting short event handlers. It's silly to have to hunt for a 1-5 line function, especially since with JS and function hoisting, the definitions could end up anywhere, not even within the same file. This could happen by accident without breaking anything and no, you don't always have control of that stuff. Events always result in a callback function being fired. There's no reason to add more links to the chain of names you need to scan through just to reverse engineer simple event-handlers in a large codebase and the stack trace concern can be addressed by abstracting event triggers themselves into methods that log useful info when debug mode is on and fire the triggers. I'm actually starting to build entire interfaces this way.
Useful when you WANT the order of function definition to matter. Sometimes you want to be certain a default function is what you think it is until a certain point in the code where it's okay to redefine it. Or you want breakage to be more obvious when dependencies get shuffled.
Cons:
Anon functions can't take advantage of function hoisting. This is a major difference. I tend to take heavy advantage of hoisting to define my own explicitly named funcs and object constructors towards the bottom and get to the object definition and main-loop type stuff right up at the top. I find it makes the code easier to read when you name your vars well and get a broad view of what's going on before ctrl-Fing for details only when they matter to you. Hoisting can also be a huge benefit in heavily event-driven interfaces where imposing a strict order of what's available when can bite you in the butt. Hoisting has its own caveats (like circular reference potential) but it is a very useful tool for organizing and making code legible when used right.
Legibility/Debug. Absolutely they get used way too heavily at times, and it can make debugging and code legibility a hassle. Codebases that rely heavily on JQ, for instance, can be a serious PITA to read and debug if you don't encapsulate the near-inevitable anon-heavy and massively overloaded args of the $ soup in a sensible way. jQuery's hover method, for instance, is a classic example of over-use of anon funcs when you drop two anon funcs into it, since it's easy for a first-timer to assume it's a standard event listener assignment method rather than one method overloaded to assign handlers for one or two events. $(this).hover(onMouseOver, onMouseOut) is a lot clearer than two anon funcs.

The disadvantages of JavaScript prototype inheritance, what are they?

I recently watched Douglas Crockford's JavaScript presentations, where he raves about JavaScript prototype inheritance as if it is the best thing since sliced white bread. Considering Crockford's reputation, it may very well be.
Can someone please tell me what is the downside of JavaScript prototype inheritance? (compared to class inheritance in C# or Java, for example)
In my experience, a significant disadvantage is that you can't mimic Java's "private" member variables by encapsulating a variable within a closure, but still have it accessible to methods subsequently added to the prototype.
i.e.:
function MyObject() {
    var foo = 1;
    this.bar = 2;
}

MyObject.prototype.getFoo = function() {
    // can't access "foo" here!
};

MyObject.prototype.getBar = function() {
    return this.bar; // OK!
};
This confuses OO programmers who are taught to make member variables private.
Things I miss when sub-classing an existing object in Javascript vs. inheriting from a class in C++:
No standard (built-into-the-language) way of writing it that looks the same no matter which developer wrote it.
Writing your code doesn't naturally produce an interface definition the way the class header file does in C++.
There's no standard way to do protected and private member variables or methods. There are some conventions for some things, but again different developers do it differently.
There's no compiler step to tell you when you've made foolish typing mistakes in your definition.
There's no type-safety when you want it.
Don't get me wrong, there are a zillion advantages to the way javascript prototype inheritance works vs C++, but these are some of the places where I find javascript works less smoothly.
4 and 5 are not strictly related to prototype inheritance, but they come into play when you have a significant-sized project with many modules, many classes and lots of files, and you wish to refactor some classes. In C++, you can change the classes, change as many callers as you can find, and then let the compiler find all the remaining references for you that need fixing. If you've added parameters, changed types, changed method names, moved methods, etc., the compiler will show you where you need to fix things.
In Javascript, there is no easy way to discover all possible pieces of code that need to be changed without literally executing every possible code path to see if you've missed something or made some typo. While this is a general disadvantage of javascript, I've found it particularly comes into play when refactoring existing classes in a significant-sized project. I've come near the end of a release cycle in a significant-sized JS project and decided that I should NOT do any refactoring to fix a problem (even though that was the better solution) because the risk of not finding all possible ramifications of that change was much higher in JS than C++.
So, consequently, I find it's riskier to make some types of OO-related changes in a JS project.
I think the main danger is that multiple parties can override one another's prototype methods, leading to unexpected behavior.
This is particularly dangerous because so many programmers get excited about prototype "inheritance" (I'd call it extension) and therefore start using it all over the place, adding methods left and right that may have ambiguous or subjective behavior. Ultimately, if left unchecked, this kind of "prototype method proliferation" can lead to very difficult-to-maintain code.
A popular example would be the trim method. It might be implemented something like this by one party:
String.prototype.trim = function() {
    // remove all ' ' characters from left & right
    return this.replace(/^ +| +$/g, '');
};
Then another party might create a new definition, with a completely different signature, taking an argument which specifies the character to trim. Suddenly all the code that passes nothing to trim has no effect.
Or another party reimplements the method to strip ' ' characters and other forms of white space (e.g., tabs, line breaks). This might go unnoticed for some time but lead to odd behavior down the road.
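For instance, the signature-changing redefinition might look something like this (my own sketch, not code from an actual library):

// The second party's version requires an explicit character argument,
// so existing calls that pass nothing silently stop trimming anything.
String.prototype.trim = function (character) {
    var s = String(this);
    while (s.charAt(0) === character) {
        s = s.slice(1);
    }
    while (s.charAt(s.length - 1) === character) {
        s = s.slice(0, -1);
    }
    return s;
};

'  hello  '.trim();    // unchanged: no argument matches no character
'xxhelloxx'.trim('x'); // 'hello'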
Depending on the project, these may be considered remote dangers. But they can happen, and from my understanding this is why libraries such as Underscore.js opt to keep all their methods within namespaces rather than add prototype methods.
(Update: Obviously, this is a judgment call. Other libraries--namely, the aptly-named Prototype--do go the prototype route. I'm not trying to say one way is right or wrong, only that this is the argument I've heard against using prototype methods too liberally.)
I miss being able to separate interface from implementation. In languages with an inheritance system that includes concepts like abstract or interface, you could e.g. declare your interface in your domain layer but put the implementation in your infrastructure layer. (Cf. onion architecture.) JavaScript's inheritance system has no way to do something like this.
I'd like to know if my intuitive answer matches up with what the experts think.
What concerns me is that if I have a function in C# (for the sake of discussion) that takes a parameter, any developer who writes code that calls my function immediately knows from the function signature what sort of parameters it takes and what type of value it returns.
With JavaScript "duck-typing", someone could inherit one of my objects and change its member functions and values (Yes, I know that functions are values in JavaScript) in almost any way imaginable so that the object they pass in to my function bears no resemblance to the object I expect my function to be passed.
I feel like there is no good way to make it obvious how a function is supposed to be called.

How can I get better at OOP? [closed]

This might come as a strange question to many of you, and I don't actually know if it is correct to say OOP in this context, because OOP (object-oriented programming) is usually associated with programming languages like C++ and Java, and not with lightweight or scripting languages. My question, however, is in the category of JavaScript, which is also object-oriented. I do know about objects, properties, methods, prototypes and constructors; I just can't seem to get it into my head when to use objects.
When I am writing my web-applications, I, for some reason, never use objects. This annoys me, because when I read about objects in a variety of books and online articles, objects make everything so much simpler and, just to put it out there, I HATE repeating myself, and this is why I wish I knew when to use objects.
I really want to become better at using objects and at knowing when to use them.
Can you please mention a few situations where objects would be good? It would be really nice to have something written down that I can go back and look at when I get confused about when to use these darn objects :)
I would love simple answers explaining why and when objects are preferable.
I would also like it if you could tell me whether I should use objects in some special situations generally suitable for objects, i.e. every time you want to _________ then you use an object...
I really hope you understand my question, and I hope you will consider that I'm somewhat new to this site and new to JavaScript.
Thanks!
You probably use objects without even realizing it.
If you're writing Javascript that interacts with the DOM, you're using objects.
If you're using any of the Javascript frameworks out there (jQuery, MooTools, etc.), you're using objects.
Using objects will be useful when you need to encapsulate some commonly used code so that it can be easily re-used (within a single application or across multiple applications like jQuery plugins...which are objects in and of themselves).
And to answer the question in the title of your post...the only way to really get better at OOP is to practice! Reading and studying the subject can only get you so far.
First, you don't need to use objects to avoid repeating yourself. If you need to do the same thing at two points in your code, you can write a plain vanilla non-OOP function to do that and call it twice.
To summarize the advantages of OOP without writing a book here, OOP basically does four things for you:
Group related data together. Non-OOP programs often have a whole bunch of variables floating around in the main program that are only loosely related. With OOP, you put related variables into an object.
Associate functions with data. By putting functions in an object with the data they operate on (purists will say they are then "members" rather than "functions"), you make it clear to the reader that these go together.
Combining #1 and #2 lets you hide implementation details from other objects. You create the "public interface" for a class, the set of functions that other objects should call and that represent the logical things that this class does, and then any other functions you need can be hidden. (More explicitly in some languages than in others, but that's not the point here.)
Classes can inherit and mutate. If you have two similar classes A and B that should be mostly the same but with some minor differences, you can make a superclass C with all the common stuff and then A and B inherit from that and each adds in its own unique stuff. This is what is usually advertised as the power of OOP. Frankly, yes, it's way cool, and in some situations can be very handy, but I only use its true power occasionally, and I suspect the same is true of most programmers. (OOP enthusiasts feel free to jump in with how and why you use inheritance all the time.)
When to OOP it? Any time you have several pieces of data that logically go together, it makes sense to create a class to hold them. Like X and Y coordinates; or customer name, address, and zip code; or phaser range and phaser power consumption; or whatever.
Any time you have functions that logically operate on this related data, put them in the class with the data. Like "capitalize customer name", "compute distance of this point from the origin", etc.
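For example, a minimal sketch along those lines:

// X and Y grouped in one object, with the function that operates
// on them attached alongside
function Point(x, y) {
    this.x = x;
    this.y = y;
}

Point.prototype.distanceFromOrigin = function () {
    return Math.sqrt(this.x * this.x + this.y * this.y);
};

var p = new Point(3, 4);
p.distanceFromOrigin(); // 5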
How and when to use inheritance is more complicated. I'll leave that for another time.
The first thing to remember, is that a lot of simple Javascript code really doesn't need to define objects (using them is inevitable, as all the stuff the DOM gives you are objects). Don't panic too much.
One of the good things about Javascript is that it supports a lot of different styles; OOP, imperative and functional.
One of the bad things about using Javascript is that because it supports a lot of different styles, it's hard to learn another style, at least until you are forced to an "a-ha" moment by something else.
Spending time in languages that are more inclined to force you into OOP (even when some would argue they shouldn't) is helpful here. C# and Java force one along OOP lines, though C++ does not (with the same strength and weakness here as with Javascript).
Try to think about the "things" in your program. Some of these things will already be modelled by objects (those the DOM gives you). Some really will just be numbers and strings and not worth composing beyond that (though learning how to add functionality to those types through prototype is a good idea too). Some will be "things" more complicated than a simple type and naturally suited to modelling as an object.
Do also look at functional programming in Javascript (e.g. passing a function as a parameter to another function is about the simplest example), as the interaction between this and OOP is one of the strongest elements in Javascript, and essential in defining methods on objects given the prototype-based OOP model it has.
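A tiny illustration of that simplest case (the names are mine):

function applyTwice(fn, value) {
    return fn(fn(value));
}

applyTwice(function (n) { return n * 2; }, 5); // 20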
My personal experience with OOP in JavaScript is a positive one, once I figured out how to get it to do what I wanted, of course. I usually use it in combination with jQuery, so the resulting code can seem a bit.... odd.
function BlogPost(id, title, content)
{
    this.id = id;
    this.title = title;
    this.content = content;

    var self = this;

    this.display = function ()
    {
        var post = $('<div class="blogpost"></div>');
        post.append('<h2>' + self.title + '</h2>');
        post.append('<p>' + self.content + '</p>');
        var deleteButton = $('<span class="deletePost">delete</span>');
        post.append(deleteButton);
        // 'delete' is a reserved word, so the method is named 'remove';
        // attaching it to 'this' makes it reachable from the click handler
        deleteButton.click(self.remove);
        $('#postcontainer').append(post);
    };

    this.remove = function ()
    {
        $.post('some/xhr/handeler', { id: self.id });
    };
}
This is a quick (untested) class that can be used to dynamically add blog posts to a div with id 'postcontainer'; it handles clicks on a delete button.
Think about how you can use objects to organize and simplify your application. I have found two metaphors useful:
A mechanical watch is made up of a number of gears, each of which serves a single purpose in the overall operation of the machine. If you think of your application as a watch, then objects are its gears.
A workgroup is made up of a number of people, each of whom performs a specific job. The people communicate with each other in performing their jobs, and the jobs fall along two lines--those that perform tasks (workers), and those that organize and direct the work of other people (managers).
You can use these metaphors to organize the work your application does. Start by dividing the app into functional layers; UI, business layer, and persistence, for example. Take each of your use cases, and distribute the work it does among these layers. That should give you a starting point for the classes you will need.
Make each class as self-contained as possible. You want to seal it off when it is done, like a .NET control. Classes should communicate with each other only through their interfaces, which are made up of properties and methods. These interfaces should have the smallest footprint (fewest public properties and methods) you can come up with. The idea is to isolate the side effects of a change to any class to that class alone.
If you do these things, you will be ahead of 80% of all programmers. You will find it much easier to develop and maintain even large applications, because you will be able to break complex problems down to simple components.
Javascript is just a terrible language to learn OOP in. I would recommend learning OOP in another language (like Java or C++) and then learn Object Oriented syntax in Javascript. At that moment you have all the ingredients.
That's when you can decide whether or not you want to be using an object for a task in Javascript or if it is enough to use just functions.
Personally, I mostly write non-object JavaScript and leave objects for when a task feels object-oriented to me. For example, I used an object-oriented design for a drag and drop script, in which you simply made a DragNDrop object with the correct parameters and the items in your page would be draggable from that moment on, or when I wanted to simplify some JavaScript XML handling functions, I wrote an object that wrapped the normal XML objects.
Justin Niessner said it and I can only add to his answer...
In addition to practice, I find reading other people's code very instructive. You need to be cautious because not all code is exemplary (to say the least) but developing critical skills is part of getting better.
In my opinion, it's better to think about OOP in the context of a particular domain or business problem. For example, JavaScript uses objects to model browser behavior and attributes, e.g., Window, Frame, History...
A domain model of a business problem will contain objects that will be reflected in the OOP code you write. For example, a university student application will have objects for students, professors, courses, curriculums, rooms and so on. Consequently, begin with your business problem and model the domain. Your OOP code should have objects modeled on your domain.
Source: http://csci.csusb.edu/dick/samples/uml0.html
You might be interested in design patterns (Book, Wikipedia), which tell you, how to solve common problems using OOP.
Many classical design patterns may not be so relevant for JavaScript, since JavaScript has other, non-OOP elements (e.g. functions) which can solve some problems even more elegantly.
Some simple design patterns I can recommend to start with:
Abstract factory: Defer the instantiation of objects. In JavaScript in most cases a function will do the job.
Decorator: Transparently add functionality to an object at runtime. Can also be nested. Usage example: Logging
Composite: Treat a tree/graph of object like a single object.
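As a sketch of the Decorator idea in JavaScript (illustrative names, not from any particular library), here is a wrapper that transparently adds logging to a method at runtime; wrapping the same method again nests another decorator:

function withLogging(obj, methodName) {
    var original = obj[methodName];
    obj[methodName] = function () {
        console.log('calling ' + methodName);
        return original.apply(this, arguments); // delegate to the wrapped method
    };
    return obj;
}

var calculator = {
    add: function (a, b) { return a + b; }
};

withLogging(calculator, 'add');
calculator.add(2, 3); // logs 'calling add', returns 5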
I think that using classes and OOP principles in general makes your code neater and more readable, and makes you more productive.
Recently I worked on a web application that would require heavy client side Javascript.
Coming from C#/Java background I realized that Javascript would require a change of thinking, however I still wanted to apply good OO principles if possible, in particular to control the likely complexity of the application.
After a bit of searching I found a great book called Object-Oriented JavaScript by Stoyan Stefanov. It truly opened my eyes to the power of this so-called "toy language". Some sections on functional programming concepts, variable scoping and closures may even be a bit more advanced than you want.
However reading and applying many of these concepts from this book (closures, anonymous functions etc) in Javascript, has actually even helped me back in C# land where these concepts are only now becoming more mainstream.
Given your stated situation and goal I can highly recommend this book as one of the best ways to learn about doing OO in Javascript.
Javascript is much, much less object oriented than C# or Java; don't worry if your Javascript doesn't look object oriented.
