Best way to execute Javascript on an anchor - javascript

Generally, there are 3 ways (that I am aware of) to execute javascript from an <a/> tag:
1) Use onclick():
<a href="#" onclick="alert('Hello'); return false;">hello</a>
2) Directly link:
<a href="javascript:alert('Hello');">hello</a>
3) Or attach externally:
// In an onload event or similar
document.getElementById('hello').onclick = function () {
    window.alert('Hello');
    return false;
};
<a id="hello" href="#">hello</a>
I am actually loading the link via AJAX, so #3 is basically out. So, is it better to do #1 or #2 or something completely different? Also, why? What are the pitfalls that I should be aware of?
Also of note, the anchor really doesn't link anywhere, hence the href="#". I am using an <a> so the styles conform, as this is still an object to be clicked and a button is inappropriate in the context.
Thanks

If you are loading the content via ajax and need to hook up event handlers, then you have these choices:
Put a javascript handler in your HTML with your option 1) or 2). In my mind option 1) is a cleaner way of specifying it, but I don't think there's a mountain of difference between 1) or 2) - they both do essentially the same thing. I'm not a fan of this option in general because I think there's value in keeping the markup and the code separate.
After loading the content with ajax, call some local code that will find and hook up all the links. This would be the same kind of code you would have in your page and execute on DOMReady if the HTML had been static HTML in your page. I would use addEventListener (falling back to attachEvent) to hook up this way as it more cleanly allows multiple listeners for a single object.
Call some code after you load the content with ajax that finds all the links and hooks up the clicks to some generic click handler that can then examine meta data in the link and figure out what should be done on that click based on the meta data. For example, this meta data could be attributes on the clicked link.
When you load the content, also load code that can find each link individually and hook up an appropriate event handler for each link much the way one would do it if the content was just being loaded in a regular page. This would meet the desire of separating HTML from JS as the JS would find each appropriate link and hook up an event handler for it with addEventListener or attachEvent.
Much like jQuery .live() works, hook up a generic event handler for unhandled clicks on links at the document level and dispatch each click based on some meta data in the link (a rough sketch of this follows the list).
Run some code that uses an actual framework like jQuery's .live() capability rather than building your own capability.
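For illustration, a rough sketch of that delegation approach, assuming the links carry a hypothetical data-action attribute as their meta data (modern-browser APIs used for brevity):
document.addEventListener('click', function (e) {
    var link = e.target.closest('a[data-action]'); // find the clicked link, if any
    if (!link) return;
    e.preventDefault();
    var action = link.getAttribute('data-action');
    if (action === 'say-hello') {
        alert('Hello');
    }
    // ...dispatch other actions here...
});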
Which I would use would depend a little on the circumstances.
First of all, of your three options for attaching an event handler, I'd use a new option #4. I'd use addEventListener (falling back to attachEvent for old versions of IE) rather than assigning to onclick, because this more cleanly allows for multiple listeners on an item. If it were me, I'd be using a framework (jQuery or YUI) that makes the cross-browser compatibility invisible. This allows complete separation of HTML and JS (no JS inline with the HTML), which I think is desirable in any project involving more than one person and just seems cleaner to me.
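As a minimal sketch of that hookup (no framework, element id taken from the question):
function addEvent(el, type, handler) {
    if (el.addEventListener) {
        el.addEventListener(type, handler, false); // standards browsers
    } else if (el.attachEvent) {
        el.attachEvent('on' + type, handler);      // old IE
    }
}

addEvent(document.getElementById('hello'), 'click', function (e) {
    e = e || window.event;
    if (e.preventDefault) { e.preventDefault(); } else { e.returnValue = false; }
    alert('Hello');
});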
Then, it's just a question for me for which of the options above I'd use to run the code that hooks up these event listeners.
If there were a lot of different snippets of HTML that I was dynamically loading and it would be cleaner if they were all "standalone" and separately maintainable, then I would want to load both the HTML and the relevant code at the same time and have the newly loaded code handle hooking up to its appropriate links.
If a generic standalone system wasn't really required because there were only a few snippets to be loaded and the code to handle them could be pre-included in the page, then I'd probably just make a function call after the HTML snippet was loaded via ajax to have the javascript hook up to the links in the snippet that had just been loaded. This would maintain the complete separation between HTML and JS, but be pretty easy to implement. You could put some sort of key object in each snippet that would identify which piece of JS to call or could be used as a parameter to pass to the JS or the JS could just examine the snippet to see which objects were available and hook up to whichever ones were present.
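As an illustration of that simpler route, assuming jQuery and made-up names for the snippet URL, container and binder function:
$('#panel').load('/snippets/orders.html', function () {
    bindOrderLinks(this); // 'this' is the container the snippet was loaded into
});

function bindOrderLinks(container) {
    $(container).find('a.cancel-order').on('click', function (e) {
        e.preventDefault();
        // handle the cancel action for this link
    });
}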

Number 3 is not "out" if you want to load via AJAX.
var link = document.createElement("a");
//Add attributes (href, text, etc...)
link.onclick = function () { //This has to be a function, not a string
//Handle the click
return false; //to prevent following the link
};
parent.appendChild(link); //Add it to the DOM

Modern browsers support Content Security Policy (CSP). It is one of the strongest protections available and is strongly recommended where you can apply it, because it blocks most cross-site scripting (XSS) attacks.
The way CSP does this is by refusing to execute inline JavaScript: in your question that covers both options 1 and 2 (inline onclick attributes and javascript: URLs), especially 1.
For this reason best practice is always option 3, as any other option will break if CSP is enabled.
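For reference, a minimal policy of this kind can be sent as a response header (or an equivalent <meta http-equiv="Content-Security-Policy"> tag); the exact directives depend on the site:
Content-Security-Policy: script-src 'self'
With a policy like this in place (and no 'unsafe-inline'), both onclick="..." attributes and href="javascript:..." links are refused by the browser, while externally attached handlers keep working.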

I'm a firm believer in separating javascript from markup. There should be a distinct difference, IMHO, between what is for display purposes and what is for execution purposes. With that said, avoid using the onclick attribute and avoid embedding javascript:* in an href attribute.
Alternatives?
You can include javascript library files using AJAX.
You can set up javascript to look for changes in the DOM (i.e. if it's a "standard task", give the anchor a CSS class name that can be used to bind a specific behavior when it's later added dynamically; jQuery does a great job at this with .delegate()).
Run your scripts POST-AJAX call. (Bring in the new content, then use javascript to [re]bind the functionality) e.g.:
function ajaxCallback(content){
// add content to dom
// search within newly added content for elements that need binding
}
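A rough version of that callback, assuming jQuery and a hypothetical js-action class on the links that need behavior, might look like:
function ajaxCallback(content){
    var $content = $(content).appendTo('#target');           // add content to the DOM
    $content.find('a.js-action').on('click', function (e) {  // bind only within the new content
        e.preventDefault();
        // run the behavior for this link
    });
}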

Related

Grouping plugin initializations and functions into a single script

I have a very basic question about grouping (jQuery) plugin initializations-- and really any type of script-- into a single script throughout a website: in my templates, I typically have a "tools.js" file that includes various plugin initializations, click functions and the like. For the sake of ease, neatness and number of server requests, I like to keep these functions/script calls centralized in a single file; however, on certain pages, various scripts won't apply-- say, a fitvids.js script initialization that might be used on one page with a video, and not on another. Thus, I'm wondering if this is problematic in any way, i.e. can this create problems if a certain library isn't included on a given page-- but its initialization is-- or a selector referenced in a click function is not present on a given page?
Thanks for any insight here.
In my design, I will have only shared code like plugin definitions or common utility methods shared between pages. The functional code like event handlers or plugin initialization for each page will be kept separate.
If there is a cross cutting concern in multiple pages then it will be either converted as a plugin or as a utility method which will be placed in a shared file but the actual usage will be done for each page separately.
If the selector is not present in the DOM and has a click handler in js, it should not be a problem. The reason is that the click handler is never triggered. It only becomes an issue if the attached handler does somehow execute and js can't find your selector.
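For example (selector and element names are made up):
// Binding to a selector that matches nothing is simply a no-op:
$('#button-only-on-some-pages').click(function () {
    // never runs on pages where the element is absent
});

// It only breaks if other code assumes the element exists, e.g.:
document.getElementById('button-only-on-some-pages').value = 'x'; // TypeError on pages without it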

Keep the javascript fully separated from markup on primefaces

I'm working on a web interface with the help of primefaces framework.
In that interface, one of the objectives is to have the code divided into javascript functions that do not share information with each other and cannot be invoked by other parts (that eases testing and reduces the number and complexity of possible use-cases).
All "parts" are encapsulated using:
(function (window, document, undefined){
var $ = window.jQuery;
// main content here
})(window,document);
The communication required between the parts is minimal, and what little there is happens through DOM events in which an object is passed along. (If an event is not caught, it's just a piece of functionality that didn't act; if it causes something to break, the rest of the js keeps working, among other benefits.)
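A rough sketch of that kind of event-based communication (the event name and payload are made up, and CustomEvent assumes a reasonably modern browser):
// One part listens for an application event on the document:
(function (window, document, undefined) {
    document.addEventListener('app:item-selected', function (e) {
        // react to the object carried in e.detail
        console.log('selected id:', e.detail.id);
    });
})(window, document);

// Another, fully independent part announces the event, passing an object along:
(function (window, document, undefined) {
    document.dispatchEvent(new CustomEvent('app:item-selected', { detail: { id: 42 } }));
})(window, document);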
This has been working for quite a while with minimal bugs found until I had to work with jsf+primefaces.
By reading the documentation, primefaces has many XML tags that do not map to HTML tags. One of the main ones I have to work with is <p:ajax>.
This tag has many on*-like attributes whose concept works much like HTML3's ideology of writing javascript in HTML's "on*" attributes. Still, those <p:ajax> tags are always attached to specific XML elements like <h:inputText> or <p:commandButton>, and that's where I started looking.
In the primefaces documentation, there's information about the inline on* attributes, but I was unable to find any information about jsf or primefaces' custom DOM events.
As things stand with primefaces, I'm forced to change the javascript code so that functions/methods can be called inline from the HTML. That would require a lot of work, also because, depending on the situation, the js code might not even be there (the feature it enables may not be required for that page).
How do I set things up in primefaces so that my javascript is fully detached from the jsf/primefaces XML (the HTML output itself I can manage)?
EDIT:
I ran out of ideas on where to look at, I'll work on looking at primefaces source code now. I may get better luck there.
EDIT:
Meanwhile I got some ideas for searching using different keywords and I found this (see "Client Side API"):
http://courses.coreservlets.com/Course-Materials/pdf/jsf/primefaces/users-guide/p-ajaxStatus.pdf
This is near what I wanted, but it seems like it does not exist for the elements I mentioned above. I'll keep searching for more.
After some testing, investigation, etc... I was finally able to understand the whole story of what was happening.
Primefaces was doing everything right after all! The <p:ajax> has the correct code to send all the events it should! The problem lies in jQuery itself.
jQuery's trigger() method (and its shortcuts) works in such a way that it handles all events directly inside jQuery, bubbling and calling the callbacks registered using on() (or any of its shorthands).
The main issue is that jQuery only re-sends the "click" event to the DOM, because all it does is try to call a method on the DOM element with the same name as the event, and (at the moment) "click" is the only one of my events for which such a method exists. That's why I was getting the click event and not the rest of the events.
With that, the mystery and the confusion were finally solved. uff!
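To illustrate the difference described above (the element id and the 'change' event are made up):
// A listener registered outside jQuery (e.g. by a component library or plain DOM code):
document.addEventListener('change', function () {
    console.log('native listener fired');
});

// jQuery-only: runs jQuery handlers, but the native listener above never fires,
// because jQuery does not re-dispatch a real DOM event for 'change':
$('#field').trigger('change');

// Dispatching a real DOM event reaches both jQuery and native listeners:
var evt = document.createEvent('HTMLEvents');
evt.initEvent('change', true, true);
document.getElementById('field').dispatchEvent(evt);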

How to use page-mod to modify element loaded by JavaScript

I'm creating a firefox addon to add an onclick event to a specific button (an "input" element).
The button is placed in http://example.com/welcome#_pg=compose
but when I open the page, the following error occurs:
TypeError: document.querySelector("#send_top") is null
#send_top is id of the button which I want to modify. So, the button is not found.
This error occurs because http://example.com/welcome and http://example.com/welcome#_pg=compose are completely different pages.
In this case, the addon seems to load http://example.com/welcome, but there is no button with the '#send_top' ID there.
When the #_pg=compose anchor is added, the button is loaded by JavaScript.
How can I load http://example.com/welcome#_pg=compose to modify the button?
Three thoughts to help you debug this:
to correctly match the url you should consider using a regular expression instead of the page-match syntax - this might allow you to react to the anchors in a more predictable way
I've found that when using content scripts with pages that are heavily modified by JS, you can run into timing issues. A hacky workaround might be to look for the element you want and, if it isn't there, do a setTimeout for 100 milliseconds or so and then re-check (a rough sketch follows this list). Ugly, yes, but it worked for some example code I used with the new twitter UI, for example.
You can use the unsafeWindow variable in your content script to directly access the page's window object - this object will contain any changes JS has made to the page and is not proxied. You should use unsafeWindow with great caution, however, as its use represents a possible security problem. In particular, you should never trust any data coming from unsafeWindow, ever.
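A rough sketch of the "wait and re-check" idea from the second point, using the button id from the question (the interval and retry count are arbitrary):
function waitForSendButton(attemptsLeft) {
    var button = document.querySelector('#send_top');
    if (button) {
        button.addEventListener('click', function () {
            // the behavior the addon wants to add
        }, false);
    } else if (attemptsLeft > 0) {
        setTimeout(function () { waitForSendButton(attemptsLeft - 1); }, 100);
    }
}
waitForSendButton(50);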

Why is it bad practice to use links with the javascript: "protocol"?

In the 1990s, there was a fashion to put Javascript code directly into <a> href attributes, like this:
<a href="javascript:alert('Hello')">Press me!</a>
And then suddenly I stopped seeing it. They were all replaced by things like:
<a href="#" onclick="alert('Hello')">Press me!</a>
For a link whose sole purpose is to trigger Javascript code, and has no real href target, why is it encouraged to use the onclick property instead of the href property?
The execution context is different. To see this, try these links instead:
<a href="javascript:alert(this.tagName)">Press me!</a> <!-- result: undefined -->
<a href="#" onclick="alert(this.tagName)">Press me!</a> <!-- result: A -->
javascript: is executed in the global context, not as a method of the element, which is usually what you want. In most cases you're doing something with or in relation to the element you acted on; it's better to execute it in that context.
Also, it's just much cleaner, though I wouldn't use in-line script at all. Check out any framework for handling these things in a much cleaner way. Example in jQuery:
$('a').click(function() { alert(this.tagName); });
Actually, both methods are considered obsolete. Developers are instead encouraged to separate all JavaScript into an external JS file in order to separate logic and code from genuine markup:
http://www.alistapart.com/articles/behavioralseparation
http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
The reason for this is that it creates code that is easier to maintain and debug, and it also promotes web standards and accessibility. Think of it like this: Looking at your example, what if you had hundreds of links like that on a page and needed to change out the alert behavior for some other function using external JS references, you'd only need to change a single event binding in one JS file as opposed to copying and pasting a bunch of code over and over again or doing a find-and-replace.
Couple of reasons:
Bad code practice:
The href attribute is meant to indicate a hyperlink reference to another location. Using it for a javascript function that doesn't actually take the user anywhere is bad programming practice.
SEO problems:
I think web crawlers use the href attribute to crawl throughout the web site & link all the connected parts. By putting javascript in it, we break this functionality.
Breaks accessibility:
I think some screen readers will not be able to execute the javascript & might not know how to deal with it where they expect a hyperlink. Users expect to see a URL in the browser status bar when hovering over a link, but instead they will see a string like "javascript:", which can confuse them.
You are still in 1990's:
The mainstream advice is to have your javascript in a separate file & not mingled with the HTML of the page, as was done in the 1990's.
HTH.
I open lots of links in new tabs - only to see javascript:void(0). So you annoy me, as well as yourself (because Google will see the same thing).
Another reason (also mentioned by others) is that different languages should be separated into different documents. Why? Well,
Mixed languages aren't well supported by most IDEs and validators. Embedding CSS and JS into HTML pages (or anything else for that matter) pretty much destroys opportunities to have the embedded language checked for correctness statically. Sometimes, the embedding language as well. (A PHP or ASP document isn't valid HTML.) You don't want syntax errors or inconsistencies to show up only at runtime.
Another reason is to have a cleaner separation between the kinds of things you need to specify: HTML for content, CSS for layout, JS usually for more layout and look-and-feel. These don't map one to one: you usually want to apply layout to whole categories of content elements (hence CSS) and look and feel as well (hence jQuery). They may be changed at different times than the content elements are changed (in fact the content is often generated on the fly) and by different people. So it makes sense to keep them in separate documents as well.
Using the javascript: protocol affects accessibility, and also hurts how SEO friendly your page is.
Take note that HTML stands for Hyper Text something something... Hyper Text denotes text with links and references in it, which is what an anchor element <a> is used for.
When you use the javascript: 'protocol' you're misusing the anchor element. Since you're misusing the <a> element, things like the Google Bot and the Jaws Screen reader will have trouble 'understanding' your page, since they don't care much about your JS but care plenty about the Hyper Text ML, taking special note of the anchor hrefs.
It also affects the usability of your page when a user who does not have JavaScript enabled visits your page; you're breaking the expected functionality and behavior of links for those users. It will look like a link, but it won't act like a link because it uses the javascript protocol.
You might think "but how many people have JavaScript disabled nowadays?" but I like to phrase that idea more along the lines of "How many potential customers am I willing to turn away just because of a checkbox in their browser settings?"
It boils down to how href is an HTML attribute, and as such it belongs to your site's information, not its behavior. The JavaScript defines the behavior, but you never want it to interfere with the data/information. The epitome of this idea would be the external JavaScript file; not using onclick as an attribute, but instead as an event handler in your JavaScript file.
Short Answer: Inline Javascript is bad for the reasons that inline CSS is bad.
The worst problem is probably that it breaks expected functionality.
For example, as others have pointed out, open in new window/tab = dead link = annoyed/confused users.
I always try to use onclick instead, add something to the URL hash of the page to indicate the desired function to trigger, and add a check at page load that inspects the hash and triggers the function.
This way you get the same behavior for clicks, new tab/window and even bookmarked/sent links, and things don't get too wacky if JS is off.
In other words, something like this (very simplified):
For the link:
onclick = "doStuff()"
href = "#dostuff"
For the page:
onLoad = if(hash="dostuff") doStuff();
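A slightly more concrete version of that sketch, with doStuff standing in for whatever the link should trigger:
<a href="#dostuff" onclick="doStuff(); return false;">Do stuff</a>

<script>
window.onload = function () {
    if (window.location.hash === '#dostuff') {
        doStuff(); // same behavior when the link was opened in a new tab or bookmarked
    }
};
</script>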
Also, as long as we're talking about deprecation and semantics, it's probably worth pointing out that <a> doesn't mean 'clickable' - it means 'anchor,' and implies a link to another page. So it would make sense to use that tag to switch to a different 'view' in your application, but not to perform a computation. The fact that you don't have a URL in your href attribute should be a sign that you shouldn't be using an anchor tag.
You can, alternately, assign a click event action to nearly any html element - maybe an <h1>, an <img>, or a <p> would be more appropriate? At any rate, as other people have mentioned, add another attribute (an 'id' perhaps) that javascript can use as a 'hook' (document.getElementById) to get to the element and assign an onclick. That way you can keep your content (HTML) presentation (CSS) and interactivity (JavaScript) separated. And the world won't end.
I typically have a landing page called "EnableJavascript.htm" that has a big message on it saying "Javascript must be enabled for this feature to work". And then I setup my anchor tags like this...
<a href="EnableJavascript.htm" onclick="funcName(); return false;">
This way, the anchor has a legitimate destination that will get overwritten by your Javascript functionality whenever possible. This will degrade gracefully. Although, nowadays, I generally build web sites with complete functionality before I decide to sprinkle some Javascript into the mix (which altogether eliminates the need for anchors like this).
Using onclick attribute directly in the markup is a whole other topic, but I would recommend an unobtrusive approach with a library like jQuery.
I think it has to do with what the user sees in the status bar. Typically applications should be built to fail over gracefully in case javascript isn't enabled; however, this isn't always the case.
With all the spamming that is going on people are getting smarter and when an email looks 'phishy' more and more people are looking at the status bar to see where the link will actually take them.
Remember to add 'return false;' to the end of your link so the page doesn't jump to the top on the user (unless that's the behaviour you are looking for).

Best Practices for onload Javascript

What is the best way to handle several different onload scripts spread across many pages?
For example, I have 50 different pages, and on each page I want to set a different button click handler when the dom is ready.
Is it best to set onclicks like this on each individual page,
<a id="link1" href="#" onclick="myFunc()" />
Or a very long document ready function in an external js file,
Element.observe(window, 'load', function() {
if ($('link1')) {
// set click handler
}
if ($('link2')) {
// set click handler
}
...
});
Or split each if ($('link')) {} section into script tags and place them on appropriate pages,
Or lastly, split each if ($('link')) {} section into its own separate js file and load appropriately per page?
Solution 1 seems like the least elegant and is relatively obtrusive, solution 2 will lead to a very lengthy load function, solution 3 is less obtrusive than 1 but still not great, and solution 4 will require the user to download a separate js file per page he visits.
Are any of these best (or worst) or is there a solution 5 I'm not thinking of?
Edit: I am asking about the design pattern, not which onload function is the proper one to use.
Have you thought about making a class for each type of behavior you'd like to attach to an element? That way you could reuse functionality between pages, just in case there was overlap.
For example, let's say that on some of the pages you want to have a button that pops up some extra information on the page. Your html could look like this:
<a href="#" class="more-info">More info</a>
And your JavaScript could look like this:
jQuery(".more-info").click(function() { ... });
If you stuck to some kind of convention, you could also add multiple classes to a link and have it do a few different things if you needed (since jQuery will let you stack event handlers on an element).
Basically, you're focusing on the behaviors for each type of element you're attaching JavaScript to, rather than picking out specific ids of elements to attach functionality to.
I'd also suggest putting all of the JavaScript into one common file or a limited number of common files. The main reason being that, after the first page load, the JavaScript would be cached and won't need to load on each page. Another reason is that it would encourage you do develop common behaviors for buttons that are available throughout the site.
In any case, I would discourage attaching the onlick directly in the html (option #1). It's obtrusive and limits the flexibility you have with your JavaScript.
Edit: I didn't realize Diodeus had posted a very similar answer (which I agree with).
First of all, I don't understand why you think setting event listeners is obtrusive.
but ...
Solution one is a bad idea
<a id="link1" href="#" onclick="myFunc()" />
because you should keep your markup and your scripts separate.
Solution two is a bad idea
Element.observe(window, 'load', function() {
if ($('link1')) {
// set click handler
}
if ($('link2')) {
// set click handler
}
...
});
because you are using a lot of unneeded javascript for every page.
Solution three is a bad idea for the same reason I said solution one is a bad idea.
Solution 4 is the best idea. Yeah, it's one extra load per page, but if for each page you just split out its if ($('link')) {} section, the file size cannot be that large. Plus, if you take this code out of the global javascript, the global file's load time will be reduced.
You could hack the class name and use it in a creative manner:
<a class="loadevent functionA" id="link1" href="#" />
... on another page...
<a class="loadevent functionB" id="link1" href="#" />
You could select by class name "loadevent" and grab the other class names for that tag, the other class name being the actual function name you want to hook into. This way one handler would be able to do every page and all you have to do is provide the corresponding class names.
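A rough sketch of that hookup, assuming the second class names a function available on window (querySelectorAll assumes a reasonably modern browser):
var links = document.querySelectorAll('a.loadevent');
for (var i = 0; i < links.length; i++) {
    (function (link) {
        // the remaining class is taken to be the name of a global function, e.g. "functionA"
        var fnName = link.className.replace('loadevent', '').replace(/\s+/g, '');
        if (typeof window[fnName] === 'function') {
            link.onclick = function () {
                window[fnName]();
                return false;
            };
        }
    })(links[i]);
}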
I would use jQuery's document ready if possible:
$(document).ready(function() {
// jQuery goodness here.
});
While Chris is somewhat correct in that you can do this:
$(document).ready(function() {
// A
});
$(document).ready(function() {
// B
});
$(document).ready(function() {
// C
});
and all functions will be called (in the order they are encountered), it's worth mentioning that the ready() event isn't quite the same as onload(). From the jQuery API docs:
Binds a function to be executed whenever the DOM is ready to be traversed and manipulated.
You may want the load() event instead:
$(window).load(function() {
// do stuff
});
which will wait for images and the like to be loaded.
Without resorting to the kneejerk jQuery, if the page-varying JS is relatively light I would include it in an inline header script (binding to the onload event trigger, yes), similar to #4, but I wouldn't do it as a separate JS file and download; I'd be looking to handle this with a server-side include - however you want to handle that (me? I'd go with XSLT includes).
That gives you both a high degree of modular separation and keeps the download as light as possible.
Having a lot of different pages, I would allow different handling of events for those pages ...
If, however, differencies were slight, I would try to find a pattern I could hang on to, and probably make a real simple algorithm to tell the pages apart ...
The last thing I would resort to would be using a (big) library (jquery, mootools or whatever !-) if I wasn't going to use it in any other way ...
Now, since you're talking of best practices, best practice is always what your users will experience as the lightest solution, and "users" should be understood in the widest possible way, including the developers and so on who are to maintain that site !o]
