Best Practices for onload Javascript

What is the best way to handle several different onload scripts spread across many pages?
For example, I have 50 different pages, and on each page I want to set a different button click handler when the dom is ready.
Is it best to set onclicks like this on each individual page,
<a id="link1" href="#" onclick="myFunc()" />
Or a very long document ready function in an external js file,
Element.observe(window, 'load', function() {
  if ($('link1')) {
    // set click handler
  }
  if ($('link2')) {
    // set click handler
  }
  ...
});
Or split each if ($('link')) {} section into script tags and place them on appropriate pages,
Or lastly, split each if ($('link')) {} section into its own separate js file and load appropriately per page?
Solution 1 seems like the least elegant and is relatively obtrusive, solution 2 will lead to a very lengthy load function, solution 3 is less obtrusive than 1 but still not great, and solution 4 will require the user to download a separate js file for each page he visits.
Are any of these best (or worst) or is there a solution 5 I'm not thinking of?
Edit: I am asking about the design pattern, not which onload function is the proper one to use.

Have you thought about making a class for each type of behavior you'd like to attach to an element? That way you could reuse functionality between pages, just in case there was overlap.
For example, let's say that on some of the pages you want to have a button that pops up some extra information on the page. Your html could look like this:
<a href="#" class="more-info">More info</a>
And your JavaScript could look like this:
jQuery(".more-info").click(function() { ... });
If you stuck to some kind of convention, you could also add multiple classes to a link and have it do a few different things if you needed (since jQuery will let you stack event handlers on an element).
Basically, you're focusing on the behaviors for each type of element you're attaching JavaScript to, rather than picking out specific ids of elements to attach functionality to.
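To make that concrete, here is a minimal sketch of the behavior-per-class idea (the class names and handler bodies are only illustrative, not taken from the original question):
// shared.js - one binding per behavior, applied on whatever page the class appears
jQuery(function($) {
    // any element with class "more-info" shows extra information when clicked
    $(".more-info").click(function() {
        $(this).next(".details").toggle();
        return false;
    });
    // any element with class "confirm-delete" asks before following its link
    $(".confirm-delete").click(function() {
        return confirm("Are you sure?");
    });
});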
I'd also suggest putting all of the JavaScript into one common file or a limited number of common files. The main reason being that, after the first page load, the JavaScript would be cached and won't need to load on each page. Another reason is that it would encourage you to develop common behaviors for buttons that are available throughout the site.
In any case, I would discourage attaching the onclick directly in the html (option #1). It's obtrusive and limits the flexibility you have with your JavaScript.
Edit: I didn't realize Diodeus had posted a very similar answer (which I agree with).

First of all, I don't understand why you think setting event listeners is obtrusive.
but ...
Solution one is a bad idea
<a id="link1" href="#" onclick="myFunc()" />
because you should keep your markup and your scripts separate.
Solution two is a bad idea
Element.observe(window, 'load', function() {
  if ($('link1')) {
    // set click handler
  }
  if ($('link2')) {
    // set click handler
  }
  ...
});
because you are using a lot of unneeded javascript for every page.
Solution three is a bad idea for the same reason I said solution one is a bad idea.
Solution 4 is the best idea. Yes, it's one extra load per page, but if each per-page file only contains its own if ($('link')) {} section, the file size can't be that large. Plus, if you take this code out of the global javascript, that file's load time will be reduced.

You could hack the class name and use it in a creative manner:
<a class="loadevent functionA" id="link1" href="#" onclick="myFunc()" />
... on another page...
<a class="loadevent functionB" id="link1" href="#" onclick="myFunc()" />
You could select by class name "loadevent" and grab the other class names for that tag, the other class name being the actual function name you want to hook into. This way one handler would be able to do every page and all you have to do is provide the corresponding class names.
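As a sketch of how that single handler might look in Prototype (the functionA/functionB names stand for whatever page-specific functions you define; resolving them via window[name] is just one possible convention):
Element.observe(window, 'load', function() {
    // find every element tagged with the "loadevent" marker class
    $$('.loadevent').each(function(el) {
        el.className.split(' ').each(function(name) {
            // any other class name is treated as the name of a global function to attach
            if (name !== 'loadevent' && typeof window[name] === 'function') {
                Element.observe(el, 'click', window[name]);
            }
        });
    });
});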

I would use JQuery's document ready if possible
$(document).ready(function() {
// jQuery goodness here.
});

While Chris is somewhat correct in that you can do this:
$(document).ready(function() {
  // A
});
$(document).ready(function() {
  // B
});
$(document).ready(function() {
  // C
});
and all functions will be called (in the order they are encountered), it's worth mentioning that the ready() event isn't quite the same as onload(). From the jQuery API docs:
Binds a function to be executed whenever the DOM is ready to be traversed and manipulated.
You may want the load() event instead:
$(window).load(function() {
  // do stuff
});
which will wait for images and the like to be loaded.
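A small illustration of the difference (purely a sketch):
$(document).ready(function() {
    // fires as soon as the DOM can be traversed; images may still be downloading
});

$(window).load(function() {
    // fires only after images, frames and other resources have finished loading
});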

Without resorting to the kneejerk jQuery, if the page-varying JS is relatively light I would include it in an inline header script (binding to the onload event trigger, yes) similar to #4, but I wouldn't do this as a separate JS script and download; I'd be looking to handle this with a server-side include - however you want to handle that (me? I'd go with XSLT includes).
That gives you both a high degree of modular separation and keeps the download as light as possible.

Having a lot of different pages, I would allow different handling of events for those pages ...
If, however, differencies were slight, I would try to find a pattern I could hang on to, and probably make a real simple algorithm to tell the pages apart ...
The last thing I would resort to, was to use a (big) library (jquery, mootools or whatever !-) if I wasn't going to use it in any other way ...
Now you're talking of best practices, best practice would always be what your users will experience as the lightest solution, and in that, users should be understood in the widest possible way, including developers and so on who are to maintain that site !o]

Related

Best way to execute Javascript on an anchor

Generally, there are 3 ways (that I am aware of) to execute javascript from an <a/> tag:
1) Use onclick():
<a href="#" onclick="window.alert('Hello'); return false;">hello</a>
2) Directly link:
<a href="javascript:window.alert('Hello');">hello</a>
3) Or attach externally:
// In an onload event or similar
document.getElementById('hello').onclick = function() {
  window.alert('Hello');
  return false;
};
<a id="hello" href="#">hello</a>
I am actually loading the link via AJAX, so #3 is basically out. So, is it better to do #1 or #2 or something completely different? Also, why? What are the pitfalls that I should be aware of?
Also of note, the anchor really doesn't link anywhere, hence the href="#". I am using an <a> so the styles conform, as this is still an object to be clicked and a button is inappropriate in the context.
Thanks
If you are loading the content via ajax and need to hook up event handlers, then you have these choices:
1. Put a javascript handler in your HTML with your option 1) or 2). In my mind option 1) is a cleaner way of specifying it, but I don't think there's a mountain of difference between 1) or 2) - they both do essentially the same thing. I'm not a fan of this option in general because I think there's value in keeping the markup and the code separate.
2. After loading the content with ajax, call some local code that will find and hook up all the links. This would be the same kind of code you would have in your page and execute on DOMReady if the HTML had been static HTML in your page. I would use addEventListener (falling back to attachEvent) to hook up this way as it more cleanly allows multiple listeners for a single object.
3. Call some code after you load the content with ajax that finds all the links and hooks up the clicks to some generic click handler that can then examine meta data in the link and figure out what should be done on that click based on the meta data. For example, this meta data could be attributes on the clicked link.
4. When you load the content, also load code that can find each link individually and hook up an appropriate event handler for each link much the way one would do it if the content was just being loaded in a regular page. This would meet the desire of separating HTML from JS as the JS would find each appropriate link and hook up an event handler for it with addEventListener or attachEvent.
5. Much like jQuery .live() works, hook up a generic event handler for unhandled clicks on links at the document level and dispatch each click based on some meta data in the link.
6. Run some code that uses an actual framework like jQuery's .live() capability rather than building your own capability.
Which I would use would depend a little on the circumstances.
First of all, of your three options for attaching an event handler, I'd use a new option #4. I'd use addEventListener (falling back to attachEvent for old versions of IE) rather than assigning to onclick because this more cleanly allows for multiple listeners on an item. If it were me, I'd be using a framework (jQuery or YUI) that makes the cross-browser compatibility invisible. This allows complete separation of HTML and JS (no JS inline with the HTML), which I think is desirable in any project involving more than one person and just seems cleaner to me.
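For reference, a common cross-browser helper along those lines (a sketch only; attachEvent is needed just for old versions of IE):
function addEvent(el, type, handler) {
    if (el.addEventListener) {
        el.addEventListener(type, handler, false); // standards browsers
    } else if (el.attachEvent) {
        el.attachEvent("on" + type, handler); // IE8 and older
    }
}

// usage:
// addEvent(document.getElementById("hello"), "click", function() { /* ... */ });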
Then, it's just a question for me for which of the options above I'd use to run the code that hooks up these event listeners.
If there were a lot of different snippets of HTML that I was dynamically loading and it would be cleaner if they were all "standalone" and separately maintainable, then I would want to load both HTML and relevant code at the same time and have the newly loaded code handle hooking up to its appropriate links.
If a generic standalone system wasn't really required because there were only a few snippets to be loaded and the code to handle them could be pre-included in the page, then I'd probably just make a function call after the HTML snippet was loaded via ajax to have the javascript hook up to the links in the snippet that had just been loaded. This would maintain the complete separation between HTML and JS, but be pretty easy to implement. You could put some sort of key object in each snippet that would identify which piece of JS to call or could be used as a parameter to pass to the JS or the JS could just examine the snippet to see which objects were available and hook up to whichever ones were present.
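A rough sketch of that "call a function after the snippet is loaded" approach, with the function and class names invented for the example:
// load the snippet, then hook up only the links inside the newly added content
$("#panel").load("/snippets/comments.html", function() {
    hookUpLinks(this);
});

function hookUpLinks(container) {
    $(container).find("a.show-comments").click(function() {
        // ... show the comments for this link ...
        return false;
    });
}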
Number 3 is not "out" if you want to load via AJAX.
var link = document.createElement("a");
//Add attributes (href, text, etc...)
link.onclick = function () { //This has to be a function, not a string
//Handle the click
return false; //to prevent following the link
};
parent.appendChild(link); //Add it to the DOM
Modern browsers support a Content Security Policy or CSP. This is the highest level of web security and strongly recommended if you can apply it because it completely blocks all XSS attacks.
The way that CSP does this is disabling all the vectors where a user could inject Javascript into a page - in your question that is both options 1 and 2 (especially 1).
For this reason best practice is always option 3, as any other option will break if CSP is enabled.
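For context, a minimal example of such a policy, sent as an HTTP response header ('self' simply means scripts may only come from your own origin):
Content-Security-Policy: script-src 'self'
With that header in place, the browser refuses inline onclick attributes and javascript: URLs unless the policy explicitly adds 'unsafe-inline', which is why externally attached handlers (option 3) are the CSP-friendly choice.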
I'm a firm believer in separating javascript from markup. There should be a distinct difference, IMHO, between what is for display purposes and what is for execution purposes. With that said, avoid using the onclick attribute and embedding javascript:* in an href attribute.
Alternatives?
You can include javascript library files using AJAX.
You can set up javascript to look for changes in the DOM (i.e. if it's a "standard task", make the anchor use a CSS class name that can be used to bind a specific mechanism when it's later added dynamically - jQuery does a great job at this with .delegate()).
Run your scripts POST-AJAX call. (Bring in the new content, then use javascript to [re]bind the functionality) e.g.:
function ajaxCallback(content){
// add content to dom
// search within newly added content for elements that need binding
}
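And, following the .delegate() idea mentioned above, a sketch that keeps working when content is swapped in via AJAX (the selector and class names are illustrative):
// bound once, up front; still fires for anchors added to #content later
$("#content").delegate("a.standard-task", "click", function() {
    // handle the click for any current or future .standard-task link
    return false;
});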

Custom View Engine to solve the Javascript/PartialView Issue?

I have seen many questions raised around PartialViews and Javascript: the problem is a PartialView that requires Javascript, e.g. a view that renders a jqGrid:
The partial View needs a <div id="myGrid"></div>
and then some script:
<script>
$(document).ready(function(){
  $('#myGrid').jqGrid({ // config params go here
  });
});
</script>
The issue is how to include the PartialView without littering the page with inline <script> tags and multiple $(document).ready blocks.
We would also like to combine the results from multiple RenderPartial calls into a single $(document).ready() call.
And lastly we have the issue of the Javascript library files such as JQuery and JQGrid.js which should ideally be included at the bottom of the page (right before the $.ready block) and ideally only included when the appropriate PartialViews are used on the page.
In scouring the WWW it does not appear that anyone has solved this issue. A potential way might be to implement a custom View Engine. I was wondering if anyone had any alternative suggestions I may have missed?
This is a good question and it is something my team struggled with when JQuery was first released. One colleague wrote a page base class that combined all of the document ready calls into one, but it was a complete waste of time and our client's money.
There is no need to combine the $(document).ready() calls into one as they will all be called, one after the other in the order that they appear on the page. This is due to the multi-cast delegate nature of the method and it won't have a significant effect on performance. You might find your page slightly more maintainable, but maintainability is seldom an issue with jQuery as it has such concise syntax.
Could you expand on the reasons for wanting to combine them? I find a lot of developers are perfectionists and want their markup to be absolutely perfect. Rather, I find that when it is good enough for the client, when it performs adequately and displays properly, then my time is better spent delivering the next requirement. I have wasted a lot of time in the past formatting HTML that no-one will ever look at.
Any script that you want to appear at the bottom of the page should go inside the ClientScriptManager.RegisterStartupScript Method as it renders at the bottom of the page.
http://msdn.microsoft.com/en-us/library/z9h4dk8y.aspx
Edit Just noticed that your question was specific to ASP.NET MVC. My answer is more of an ASP.NET answer but in terms of the rendered html, most of my comments are still relevant. Multiple document.ready functions are not a problem.
The standard jQuery approach is to write a single script that will add behaviour to multiple elements. So, add a class to the divs that you want to contain a grid and call a function on each one:
<script type="text/javascript">
$(document).ready(function(){
  $('.myGridClass').each(function(){
    // config params can be determined from attributes added to the div element
    var url = $(this).attr("data-url");
    $(this).jqGrid({
      url: url
      // other config params go here
    });
  });
});
</script>
You only need to add this script once on your page and in your partial views you just have:
<div class="myGridClass" data-url="http://whatever-url-to-be-used"></div>
Notice the data-url attribute. This is HTML5 syntax, which will fail HTML 4 validation. It will still work in HTML 4 browsers. It only matters if you have to run your pages through html validators. And I can see you already know about HTML5.
Not pretty, but as regards your last point, can you not send the appropriate script tags in a ViewData dictionary from the action that returns the partial?

Why is it bad practice to use links with the javascript: "protocol"?

In the 1990s, there was a fashion to put Javascript code directly into <a> href attributes, like this:
<a href="javascript:doSomething()">Press me!</a>
And then suddenly I stopped to see it. They were all replaced by things like:
<a href="#" onclick="doSomething()">Press me!</a>
For a link whose sole purpose is to trigger Javascript code, and has no real href target, why is it encouraged to use the onclick property instead of the href property?
The execution context is different. To see this, try these links instead:
<a href="javascript:alert(this.tagName)">Press me!</a> <!-- result: undefined -->
<a href="#" onclick="alert(this.tagName)">Press me!</a> <!-- result: A -->
javascript: is executed in the global context, not as a method of the element, which is usually what you want. In most cases you're doing something with or in relation to the element you acted on, so it's better to execute it in that context.
Also, it's just much cleaner, though I wouldn't use in-line script at all. Check out any framework for handling these things in a much cleaner way. Example in jQuery:
$('a').click(function() { alert(this.tagName); });
Actually, both methods are considered obsolete. Developers are instead encouraged to separate all JavaScript into an external JS file in order to separate logic and code from genuine markup:
http://www.alistapart.com/articles/behavioralseparation
http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
The reason for this is that it creates code that is easier to maintain and debug, and it also promotes web standards and accessibility. Think of it like this: Looking at your example, what if you had hundreds of links like that on a page and needed to change out the alert behavior for some other function using external JS references, you'd only need to change a single event binding in one JS file as opposed to copying and pasting a bunch of code over and over again or doing a find-and-replace.
Couple of reasons:
Bad code practice:
The href attribute is meant to indicate a hyperlink reference to another location. Using it for a javascript function which is not actually taking the user anywhere is bad programming practice.
SEO problems:
I think web crawlers use the href attribute to crawl throughout the web site and link all the connected parts. By putting in javascript, we break this functionality.
Breaks accessibility:
I think some screen readers will not be able to execute the javascript and might not know how to deal with it where they expect a hyperlink. Users will expect to see a link in the browser status bar when hovering over the link, but instead they will see a string like "javascript:", which might confuse them.
You are still in the 1990's:
The mainstream advice is to have your javascript in a separate file and not mingled with the HTML of the page, as was done in the 1990's.
HTH.
I open lots of links in new tabs - only to see javascript:void(0). So you annoy me, as well as yourself (because Google will see the same thing).
Another reason (also mentioned by others) is that different languages should be separated into different documents. Why? Well,
Mixed languages aren't well supported by most IDEs and validators. Embedding CSS and JS into HTML pages (or anything else for that matter) pretty much destroys opportunities to have the embedded language checked for correctness statically. Sometimes, the embedding language as well. (A PHP or ASP document isn't valid HTML.) You don't want syntax errors or inconsistencies to show up only at runtime.
Another reason is to have a cleaner separation between the kinds of things you need to specify: HTML for content, CSS for layout, JS usually for more layout and look-and-feel. These don't map one to one: you usually want to apply layout to whole categories of content elements (hence CSS), and look and feel as well (hence jQuery). They may be changed at different times than the content elements are changed (in fact the content is often generated on the fly) and by different people. So it makes sense to keep them in separate documents as well.
Using the javascript: protocol affects accessibility, and also hurts how SEO friendly your page is.
Take note that HTML stands for Hyper Text something something... Hyper Text denotes text with links and references in it, which is what an anchor element <a> is used for.
When you use the javascript: 'protocol' you're misusing the anchor element. Since you're misusing the <a> element, things like the Google Bot and the Jaws Screen reader will have trouble 'understanding' your page, since they don't care much about your JS but care plenty about the Hyper Text ML, taking special note of the anchor hrefs.
It also affects the usability of your page when a user who does not have JavaScript enabled visits your page; you're breaking the expected functionality and behavior of links for those users. It will look like a link, but it won't act like a link because it uses the javascript protocol.
You might think "but how many people have JavaScript disabled nowadays?" but I like to phrase that idea more along the lines of "How many potential customers am I willing to turn away just because of a checkbox in their browser settings?"
It boils down to how href is an HTML attribute, and as such it belongs to your site's information, not its behavior. The JavaScript defines the behavior, but you never want it to interfere with the data/information. The epitome of this idea would be the external JavaScript file; not using onclick as an attribute, but instead as an event handler in your JavaScript file.
Short Answer: Inline Javascript is bad for the reasons that inline CSS is bad.
The worst problem is probably that it breaks expected functionality.
For example, as others have pointed out, open in new window/tab = dead link = annoyed/confused users.
I always try to use onclick instead, and add something to the URL-hash of the page to indicate the desired function to trigger and add a check at pageload to check the hash and trigger the function.
This way you get the same behavior for clicks, new tab/window and even bookmarked/sent links, and things don't get too wacky if JS is off.
In other words, something like this (very simplified):
For the link:
onclick = "doStuff()"
href = "#dostuff"
For the page:
onLoad = if(hash="dostuff") doStuff();
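Spelled out slightly more (still just a sketch; doStuff stands in for whatever function the link triggers):
For the link:
<a href="#dostuff" onclick="doStuff(); return false;">Do stuff</a>
For the page:
window.onload = function() {
    // honour the hash for visitors arriving via a new tab, bookmark or sent link
    if (window.location.hash === "#dostuff") {
        doStuff();
    }
};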
Also, as long as we're talking about deprecation and semantics, it's probably worth pointing out that '<a>' doesn't mean 'clickable' - it means 'anchor,' and implies a link to another page. So it would make sense to use that tag to switch to a different 'view' in your application, but not to perform a computation. The fact that you don't have a URL in your href attribute should be a sign that you shouldn't be using an anchor tag.
You can, alternately, assign a click event action to nearly any html element - maybe an <h1>, an <img>, or a <p> would be more appropriate? At any rate, as other people have mentioned, add another attribute (an 'id' perhaps) that javascript can use as a 'hook' (document.getElementById) to get to the element and assign an onclick. That way you can keep your content (HTML) presentation (CSS) and interactivity (JavaScript) separated. And the world won't end.
I typically have a landing page called "EnableJavascript.htm" that has a big message on it saying "Javascript must be enabled for this feature to work". And then I setup my anchor tags like this...
<a href="EnableJavascript.htm" onclick="funcName(); return false;">
This way, the anchor has a legitimate destination that will get overwritten by your Javascript functionality whenever possible. This will degrade gracefully. Although, nowadays, I generally build web sites with complete functionality before I decide to sprinkle some Javascript into the mix (which altogether eliminates the need for anchors like this).
Using onclick attribute directly in the markup is a whole other topic, but I would recommend an unobtrusive approach with a library like jQuery.
I think it has to do with what the user sees in the status bar. Typically applications should be built to fail over gracefully in case javascript isn't enabled; however, this isn't always the case.
With all the spamming that is going on people are getting smarter and when an email looks 'phishy' more and more people are looking at the status bar to see where the link will actually take them.
Remember to add 'return false;' to the end of your link so the page doesn't jump to the top on the user (unless that's the behaviour you are looking for).

Wiring element events

I recently read a blog post. In it, the author told readers to wire up all their onclick events not inline, but when the DOM's ready, like this (jQuery example):
<script type="text/javascript">
$(document).ready(function() {
$("myElement").click(...
});
</script>
This, for all the elements on the page with events attached to them. And that script block, with all its wirings, should go at the end of the page.
He said that setting it in-line was more difficult to maintain:
<span id="myElement" onclick="...">moo</span>
But he didn't say why.
Is this true in others' experiences? Is it a better practice to do this? What are its advantages?
Well, it's generally regarded as good style to separate code and content from each other as much as possible. If you use that method, you have wonderfully clean HTML:
<span id="myElement">moo</span>
and a separate, central code repository that you can keep in one place, and could even put into an external Javascript file.
Editing your HTML layout becomes really fun then, and it looks great as well.
I don't always follow this rule myself to the letter, and I'm not as zealous about it as others. But I allow myself a maximum of a function call onclick='do_stuff()' when doing stuff inline. Anything more complex turns to code soup really fast.
Having all of your event handlers in one place makes the code much easier to maintain, because you don't need to hunt through the HTML looking for event handlers. It also allows you to have both single and double quotes in the handlers, and will fail on syntax errors when the script block is parsed instead of when the event is first fired.
Also, every inline event handler requires the browser to parse a separate expression (equivalent to an eval call), hurting performance.
It's just messy. It's best to keep your logic separate from your markup, just as you ought to keep your markup separate from your styling. If you keep clean separation, managing code is easier since you don't need to hunt through your markup to modify what happens when you hover over images, for instance.
# index.html
<img class="thumbnail" src="puppies.jpg" />
# index.js
$("img.thumbnail").fadeTo(0, 0.5).hover(
function () { $(this).fadeTo("fast", 1.0); },
function () { $(this).fadeTo("slow", 0.5); }
);
# index.css
img.thumbnail { border:1px dotted red; }
The advantage is that you won't have to search for events generated by html controls in different controls (ASCX) or the Page, in case your page has many controls (ASCX) embedded.
You can even debug if the event is registered using Firebug or some other debugger.
Read this article on behavioral separation which explains the graceful degradation when JavaScript is disabled. You can also find articles on unobtrusive JavaScript to understand the spirit of jQuery.
Also, if you use event bubbling you can avoid having multitudes of event handlers.
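A small sketch of that event-bubbling (delegation) idea - one listener on the document rather than one per element (the class name is only an example):
document.addEventListener("click", function(e) {
    var el = e.target;
    // walk up from the clicked node looking for one of our "behaviour" classes
    while (el && el !== document) {
        if (el.className && (" " + el.className + " ").indexOf(" comments-link ") !== -1) {
            e.preventDefault();
            // ... run the behaviour for this link ...
            return;
        }
        el = el.parentNode;
    }
}, false);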
An advantage for me of an inline event is you can see directly what will happen when you perform the action, just by inspecting the element.
This is particularly quick if you are maintaining old code or code written by someone else.
Another advantage is speed: attaching events later uses the DOM, and it is probably not as fast as inline generation.
And by using an unobtrusive templating engine like PURE you can keep your HTML clean of any JS logic.

Why Stackoverflow binds user actions dynamically with javascript?

Checking the HTML source of a question I see for instance:
<a id="comments-link-xxxxx" class="comments-link">add comment</a><noscript> JavaScript is needed to access comments.</noscript>
And then in the javascript source:
// Setup our click events..
$().ready(function() {
$("a[id^='comments-link-']").click(function() { comments.show($(this).attr("id").substr("comments-link-".length)); });
});
It seems that all the user click events are binded this way.
The downsides of this approach are obvious for people browsing the site with no javascript, but what are the advantages of adding events dynamically with javascript over declaring them directly?
You don't have to type the same string over and over again in the HTML (which if nothing else would increase the number of typos to debug)
You can hand over the HTML/CSS to a designer who need not have any javascript skills
You have programmatic control over what callbacks are called and when
It's more elegant because it fits the conceptual separation between layout and behaviour
It's easier to modify and refactor
On the last point, imagine if you wanted to add a "show comments" icon somewhere else in the template. It'd be very easy to bind the same callback to the icon.
Attaching events via the events API instead of in the mark-up is the core of unobtrusive javascript. You are welcome to read this wikipedia article for a complete overview of why unobtrusive javascripting is important.
The same way that you separate styles from mark-up you want to separate scripts from mark-up, including events.
I see this as one of the fundamental principals of good software development:
The separation of presentation and logic.
HTML/CSS is a presentation language essentially. Javascript is for creating logic. It is a good practice to separate any logic from your presentation if possible.
This way you can have a light-weight page where you can handle all your actions via javascript. Instead of having to use loads of different urls and actions embedded into the page, just write one javascript function that finds the link, and hooks it up, no matter where on the page you dump that 'comment' link.
This saves loads of repeating html :)
The only advantage I see is a reduction of the page size, and thus a lower bandwidth need.
Edit: As I'm being downvoted, let me explain my answer a bit more.
My point is that using a link as an empty anchor is just a bad practice, nothing else! Of course separation of JavaScript logic from HTML is great. Of course it's easier to refactor and debug. But here, it goes against the main principle of unobtrusive JavaScript: graceful degradation!
A good solution would be to have two possible calls for the comments: one through a REAL link that points to a simple page showing the comments, and another which returns only the comments (in JSON or a similar format) meant to be called through AJAX and injected directly into the main page.
Doing so, the AJAX method should also take care of cancelling the default navigation, to avoid the user being redirected to the simple page. That would be unobtrusive JavaScript. Here it's just JavaScript put on a misused anchor tag.
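A sketch of that graceful-degradation version (the URLs, class and container id are made up for the example):
// the markup keeps a real destination, so it still works without JavaScript:
// <a class="comments-link" href="/questions/123/comments">add comment</a>

$(function() {
    $("a.comments-link").click(function() {
        // with JavaScript available, fetch just the comments fragment instead
        $("#comments-container").load(this.href + "?format=fragment");
        return false; // cancel the normal navigation to the plain comments page
    });
});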
