Why does Stack Overflow bind user actions dynamically with JavaScript? - javascript

Checking the HTML source of a question I see for instance:
<a id="comments-link-xxxxx" class="comments-link">add comment</a><noscript> JavaScript is needed to access comments.</noscript>
And then in the javascript source:
// Setup our click events..
$().ready(function() {
    $("a[id^='comments-link-']").click(function() { comments.show($(this).attr("id").substr("comments-link-".length)); });
});
It seems that all the user click events are bound this way.
The downsides of this approach are obvious for people browsing the site with JavaScript disabled, but what are the advantages of adding events dynamically with JavaScript over declaring them directly?

You don't have to type the same string over and over again in the HTML (which if nothing else would increase the number of typos to debug)
You can hand over the HTML/CSS to a designer who need not have any javascript skills
You have programmatic control over what callbacks are called and when
It's more elegant because it fits the conceptual separation between layout and behaviour
It's easier to modify and refactor
On the last point, imagine if you wanted to add a "show comments" icon somewhere else in the template. It'd be very easy to bind the same callback to the icon.
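As a minimal sketch of that (hypothetical: it assumes the icon element also carries a comments-link-xxxxx id, which is not in the original markup), the same named function can serve as the handler in both places:
// shared handler; bound once to the links and once to a hypothetical icon
function showComments() {
    comments.show($(this).attr("id").substr("comments-link-".length));
}
$("a[id^='comments-link-']").click(showComments);
$("img[id^='comments-link-']").click(showComments);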

Attaching events via the events API instead of in the mark-up is the core of unobtrusive JavaScript. The Wikipedia article on unobtrusive JavaScript gives a complete overview of why this is important.
The same way that you separate styles from mark-up you want to separate scripts from mark-up, including events.

I see this as one of the fundamental principles of good software development:
The separation of presentation and logic.
HTML/CSS is a presentation language essentially. Javascript is for creating logic. It is a good practice to separate any logic from your presentation if possible.

This way you can have a light-weight page where you can handle all your actions via javascript. Instead of having to use loads of different urls and actions embedded into the page, just write one javascript function that finds the link, and hooks it up, no matter where on the page you dump that 'comment' link.
This saves loads of repeating html :)
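As a sketch of that idea (assuming jQuery 1.7+'s .on() for delegation, plus the comments.show() helper quoted in the question), a single delegated handler covers every comment link, wherever it lands in the page:
$(document).on('click', 'a.comments-link', function() {
    // works for any matching link, including ones added to the page later
    comments.show($(this).attr('id').substr('comments-link-'.length));
    return false;
});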

The only advantage I see is a reduction of the page size, and thus a lower bandwidth need.
Edit: As I'm being downvoted, let me explain my answer a bit more.
My point is that using a link as an empty anchor is just bad practice, nothing else! Of course separation of JavaScript logic from HTML is great. Of course it's easier to refactor and debug. But here it goes against the main principle of unobtrusive JavaScript: graceful degradation!
A good solution would be to have two possible ways of calling the comments: one through a REAL link that points to a simple page showing the comments, and another which returns only the comments (in JSON or a similar format) with the purpose of being called through AJAX and injected directly into the main page.
Doing so, the AJAX method should also take care of cancelling the other call, so that the user is not redirected to the simple page. That would be unobtrusive JavaScript. Here it's just JavaScript put on a misused anchor tag.
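A rough sketch of what that could look like (the URL, query parameter and markup here are invented for illustration):
<a href="/questions/123/comments" class="comments-link">add comment</a>

$('a.comments-link').click(function() {
    // ask the server for just the comments and inject them in place
    $.getJSON(this.href + '?format=json', function(data) {
        // ...render the comments into the page...
    });
    return false; // cancel the real navigation when JavaScript is available
});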

Javascript replacements for on(event) attributes

If this sounds like I'm asking for opinions, sorry; I'm not expressing myself well.
My question is: why is it necessary to replace attributes like onclick with anonymous functions? What is the advantage?
For example, I have a web page that needs to be brought up to date, so I need to replace
<input id="text" onfocus="this.blur()">
by
<input id="text">
.
.
<script>
    $('#text').focus(function(){ this.blur(); });
</script>
But what does this do? What is the specific advantage of this over the original? I searched, but I couldn't find any real reason, only opinions.
There are many reasons not to use DOM0 events.
You can find some of these under the topic "unobtrusive javascript".
http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
Decoupling the markup and the behaviour is the most obvious one
The main advantages are clarity and flexibility:
Limiting ad-hoc JavaScript code is considered a good practice, since all your JavaScript logic will be found in one place.
It allows you to write decoupled code and "unobtrusive" JavaScript, which basically means that you will be able to easily change the behavior of your webpage without touching the HTML.
You could say that using "onevent" attributes instead of event handlers is much the same as using the style attribute instead of an attached CSS file.
Using the on() function allows you to add as many event listeners as you see fit.
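For instance (assuming jQuery 1.7+, where .on() is available; the handler names here are placeholders):
$('#saveButton').on('click', validateForm);
$('#saveButton').on('click', logClick);
// both handlers fire, in the order they were attached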
One good advantage is that this keeps your HTML simpler and lighter, so it can be loaded and rendered faster while your JavaScript at the bottom is still being loaded. If you consider performance, it's better to load HTML (and CSS) at the very beginning and scripts last.
Also, it separates code from the markup, making everything less messed up, more systematic and easier to debug.
And you can't comfortably write bulky JavaScript code in attributes; it becomes inconvenient, and you'd have to move it into functions anyway for complex logic.
No one has mentioned that inline handlers are a vector for XSS attacks.
Keeping JavaScript out of HTML allows you to harden your site and set a Content-Security-Policy: script-src 'self' header that disables inline JavaScript in HTML.
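As a sketch, with that header set by the server, the browser refuses to run inline handlers and inline script blocks, while same-origin script files still run:
Content-Security-Policy: script-src 'self'

<button onclick="alert('xss')">Click</button>  <!-- blocked by the policy -->
<script src="/js/app.js"></script>             <!-- allowed: same origin -->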

Custom View Engine to solve the Javascript/PartialView Issue?

I have seen many questions raised around PartialViews and Javascript: the problem is a PartialView that requires Javascript, e.g. a view that renders a jqGrid:
The partial View needs a <div id="myGrid"></div>
and then some script:
<script>
$(document).ready(function(){
    $('#myGrid').jqGrid({
        // config params go here
    });
});
</script>
The issue is how to include the PartialView without littering the page with inline script tags and multiple $(document).ready blocks.
We would also like to combine the results from multiple RenderPartial calls into a single $(document).ready() call.
And lastly, we have the issue of the JavaScript library files such as jQuery and jqGrid.js, which should ideally be included at the bottom of the page (right before the ready block) and ideally only included when the appropriate PartialViews are used on the page.
In scouring the WWW it does not appear that anyone has solved this issue. A potential way might be to implement a custom View Engine. I was wondering if anyone had any alternative suggestions I may have missed?
This is a good question and it is something my team struggled with when JQuery was first released. One colleague wrote a page base class that combined all of the document ready calls into one, but it was a complete waste of time and our client's money.
There is no need to combine the $(document).ready() calls into one, as they will all be called, one after the other, in the order that they appear on the page. This is due to the multi-cast delegate nature of the event, and it won't have a significant effect on performance. You might find your page slightly more maintainable, but maintainability is seldom an issue with jQuery as it has such concise syntax.
Could you expand on the reasons for wanting to combine them? I find a lot of developers are perfectionists and want their markup to be absolutely perfect. Rather, I find that when it is good enough for the client, when it performs adequately and displays properly, then my time is better spent delivering the next requirement. I have wasted a lot of time in the past formatting HTML that no-one will ever look at.
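A quick illustration of that multi-cast behaviour: both blocks below run, in document order, so nothing is lost by leaving them separate.
$(document).ready(function() { console.log('first'); });
$(document).ready(function() { console.log('second'); });
// the console shows "first" then "second"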
Any script that you want to appear at the bottom of the page should go inside the ClientScriptManager.RegisterStartupScript Method as it renders at the bottom of the page.
http://msdn.microsoft.com/en-us/library/z9h4dk8y.aspx
Edit Just noticed that your question was specific to ASP.NET MVC. My answer is more of an ASP.NET answer but in terms of the rendered html, most of my comments are still relevant. Multiple document.ready functions are not a problem.
The standard jQuery approach is to write a single script that will add behaviour to multiple elements. So, add a class to the divs that you want to contain a grid and call a function on each one:
<script type="text/javascript">
$(document).ready(function(){
    $('.myGridClass').each(function(){
        // config params can be determined from
        // attributes added to the div element
        var url = $(this).attr("data-url");
        $(this).jqGrid({
            url: url
        });
    });
});
</script>
You only need to add this script once on your page and in your partial views you just have:
<div class="myGridClass" data-url="http://whatever-url-to-be-used"></div>
Notice the data-url attribute. This is HTML5 syntax, which will fail HTML 4 validation, but it will still work in HTML 4 browsers. It only matters if you have to run your pages through HTML validators. And I can see you already know about HTML5.
Not pretty, but as regards your last point, can you not send the appropriate tags as a ViewData dictionary in the action that returns the partial?

Why is it bad practice to use links with the javascript: "protocol"?

In the 1990s, there was a fashion to put JavaScript code directly into <a> href attributes, like this:
<a href="javascript:doSomething()">Press me!</a>
And then suddenly I stopped seeing it. They were all replaced by things like:
<a href="#" onclick="doSomething()">Press me!</a>
For a link whose sole purpose is to trigger Javascript code, and has no real href target, why is it encouraged to use the onclick property instead of the href property?
The execution context is different. To see this, try these links instead:
<a href="javascript:alert(this.tagName)">Press me!</a> <!-- result: undefined -->
<a href="#" onclick="alert(this.tagName)">Press me!</a> <!-- result: A -->
javascript: is executed in the global context, not as a method of the element, and the element context is usually what you want. In most cases you're doing something with or in relation to the element you acted on; it's better to execute the code in that context.
Also, it's just much cleaner, though I wouldn't use in-line script at all. Check out any framework for handling these things in a much cleaner way. Example in jQuery:
$('a').click(function() { alert(this.tagName); });
Actually, both methods are considered obsolete. Developers are instead encouraged to separate all JavaScript into an external JS file in order to separate logic and code from genuine markup
http://www.alistapart.com/articles/behavioralseparation
http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
The reason for this is that it creates code that is easier to maintain and debug, and it also promotes web standards and accessibility. Think of it like this: Looking at your example, what if you had hundreds of links like that on a page and needed to change out the alert behavior for some other function using external JS references, you'd only need to change a single event binding in one JS file as opposed to copying and pasting a bunch of code over and over again or doing a find-and-replace.
Couple of reasons:
Bad code practice:
The href attribute is meant to indicate a hyperlink reference to another location. Using it for a JavaScript function which is not actually taking the user anywhere is bad programming practice.
SEO problems:
I think web crawlers use the href attribute to crawl throughout the web site and link all the connected parts. By putting javascript in it, we break this functionality.
Breaks accessibility:
I think some screen readers will not be able to execute the JavaScript and might not know how to deal with it where they expect a hyperlink. Users expect to see a link in the browser status bar on hovering over a link, while they will instead see a string like "javascript:", which might confuse them.
You are still in the 1990s:
The mainstream advice is to keep your JavaScript in a separate file and not mingle it with the HTML of the page, as was done in the 1990s.
HTH.
I open lots of links in new tabs - only to see javascript:void(0). So you annoy me, as well as yourself (because Google will see the same thing).
Another reason (also mentioned by others) is that different languages should be separated into different documents. Why? Well,
Mixed languages aren't well supported by most IDEs and validators. Embedding CSS and JS into HTML pages (or anything else for that matter) pretty much destroys opportunities to have the embedded language checked for correctness statically. Sometimes, the embedding language as well. (A PHP or ASP document isn't valid HTML.) You don't want syntax errors or inconsistencies to show up only at runtime.
Another reason is to have a cleaner separation between the kinds of things you need to specify: HTML for content, CSS for layout, JS usually for more layout and look-and-feel. These don't map one to one: you usually want to apply layout to whole categories of content elements (hence CSS) and look-and-feel as well (hence jQuery). They may be changed at different times than the content elements are changed (in fact the content is often generated on the fly) and by different people. So it makes sense to keep them in separate documents as well.
Using the javascript: protocol affects accessibility, and also hurts how SEO-friendly your page is.
Take note that HTML stands for Hyper Text something something... Hyper Text denotes text with links and references in it, which is what an anchor element <a> is used for.
When you use the javascript: 'protocol' you're misusing the anchor element. Since you're misusing the <a> element, things like the Google bot and the JAWS screen reader will have trouble 'understanding' your page, since they don't care much about your JS but care plenty about the Hyper Text ML, taking special note of the anchor hrefs.
It also affects the usability of your page when a user who does not have JavaScript enabled visits your page; you're breaking the expected functionality and behavior of links for those users. It will look like a link, but it won't act like a link because it uses the javascript protocol.
You might think "but how many people have JavaScript disabled nowadays?" but I like to phrase that idea more along the lines of "How many potential customers am I willing to turn away just because of a checkbox in their browser settings?"
It boils down to how href is an HTML attribute, and as such it belongs to your site's information, not its behavior. The JavaScript defines the behavior, but you never want it to interfere with the data/information. The epitome of this idea would be the external JavaScript file; not using onclick as an attribute, but instead as an event handler in your JavaScript file.
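A minimal sketch of that (the element id, file name and handler body are invented here):
// in an external file such as behavior.js, not in the markup
document.getElementById('myLink').addEventListener('click', function(e) {
    e.preventDefault(); // stop the default navigation
    // ...do the real work here...
});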
Short Answer: Inline Javascript is bad for the reasons that inline CSS is bad.
The worst problem is probably that it breaks expected functionality.
For example, as others have pointed out, open in new window/tab = dead link = annoyed/confused users.
I always try to use onclick instead, and add something to the URL hash of the page to indicate the desired function to trigger, plus a check at page load that reads the hash and triggers the function.
This way you get the same behavior for clicks, new tab/window and even bookmarked/sent links, and things don't get too wacky if JS is off.
In other words, something like this (very simplified):
For the link:
<a href="#dostuff" onclick="doStuff(); return false;">Do stuff</a>
For the page:
window.onload = function() { if (location.hash === '#dostuff') doStuff(); };
Also, as long as we're talking about deprecation and semantics, it's probably worth pointing out that <a> doesn't mean 'clickable' - it means 'anchor,' and implies a link to another page. So it would make sense to use that tag to switch to a different 'view' in your application, but not to perform a computation. The fact that you don't have a URL in your href attribute should be a sign that you shouldn't be using an anchor tag.
You can, alternately, assign a click event action to nearly any html element - maybe an <h1>, an <img>, or a <p> would be more appropriate? At any rate, as other people have mentioned, add another attribute (an 'id' perhaps) that javascript can use as a 'hook' (document.getElementById) to get to the element and assign an onclick. That way you can keep your content (HTML) presentation (CSS) and interactivity (JavaScript) separated. And the world won't end.
I typically have a landing page called "EnableJavascript.htm" that has a big message on it saying "Javascript must be enabled for this feature to work". And then I setup my anchor tags like this...
<a href="EnableJavascript.htm" onclick="funcName(); return false;">
This way, the anchor has a legitimate destination that will get overwritten by your Javascript functionality whenever possible. This will degrade gracefully. Although, nowadays, I generally build web sites with complete functionality before I decide to sprinkle some Javascript into the mix (which altogether eliminates the need for anchors like this).
Using onclick attribute directly in the markup is a whole other topic, but I would recommend an unobtrusive approach with a library like jQuery.
I think it has to do with what the user sees in the status bar. Typically applications should be built to fail over in case JavaScript isn't enabled; however, this isn't always the case.
With all the spamming that is going on people are getting smarter and when an email looks 'phishy' more and more people are looking at the status bar to see where the link will actually take them.
Remember to add 'return false;' to the end of your link so the page doesn't jump to the top on the user (unless that's the behaviour you are looking for).

Designing a website for both JavaScript support and no support

Okay, I know that it's important for your website to work fine with JavaScript disabled.
In my opinion, one way to start thinking about how to design such websites is to detect JavaScript at the homepage and, if it's not enabled, redirect to another version of the website that does not use JavaScript and works with pure HTML (like Gmail does).
Another method I have in mind: think of an X (close button) on a dialog box on a webpage. Pressing the X without any JavaScript interference leads to sending a request to the server, and on the server side we hide that dialog the next time we render the page. We also bind a JavaScript function to the onclick of the link, and if JavaScript is enabled it will hide the dialog instantly.
What do you think of this? How would you design a website to support both?
One way to deal with this is to:
First, create the site without any JavaScript
Then, when everything works, add JavaScript enhancements where suitable
This way, if JS is disabled, the "first" version of the site still works.
You can do exactly the same with CSS, naturally -- there is even one "CSS Naked Day" each year, showing what websites look like without CSS ^^
One example?
You have a standard HTML form that POSTs data to your server when submitted, and the re-creation of the page by the server displays a message like "thanks for subscribing".
You then add some JS + Ajax stuff: instead of reloading the whole page when submitting the form, you do an Ajax request that only sends the data; and, in return, it displays "thanks for subscribing" without reloading the page.
In this case, if JavaScript is disabled, the first "standard" way of doing things still works.
This is (part of) what is called Progressive enhancement
The usual method is what's called progressive enhancement.
Basically you take a simple HTML website, with regular forms.
The next enhancement is CSS - you make it look good.
Then you can enhance it further with Javascript - you can add or remove elements, add effects and so on.
The basic HTML is always there for old browsers (or those with script blockers, for example).
For example a form to post a comment might look like this:
<form action="post-comment.php" method="post" id="myForm">
    <input type="text" name="comment">
</form>
Then you can enhance it with javascript to make it AJAXy
$('#myForm').submit(...);
Ideally the AJAX callback should use the same code as post-comment.php - either by calling the same file or via include, then you don't have to duplicate code.
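A hedged sketch of that enhancement, filling in the submit handler (it assumes post-comment.php can return just the confirmation fragment for Ajax requests):
$('#myForm').submit(function() {
    // POST to the same URL the plain form uses, so no logic is duplicated
    $.post('post-comment.php', $(this).serialize(), function(response) {
        // show the same "thanks" message without a full page reload
        $('#myForm').replaceWith(response);
    });
    return false; // suppress the normal form submission
});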
In truth, it is not important to make your site work with JavaScript disabled. People who disable JavaScript are people who want to hack bugs into your site; they don't deserve to navigate it correctly. Don't waste your efforts on them. Everybody knows the Web is unsurfable without JavaScript.
The only thing you have to be careful about is your forms: don't ever trust filters in JavaScript; always filter again on the server side, ALWAYS!
Use Progressive Enhancement, study jquery to understand it. It takes some time till you get your head around it. For example your idea:
to detect JavaScript at the homepage and if it's not enabled redirect to another version of the website that does not use JavaScript and works with pure HTML
how would you detect whether JavaScript is disabled? Not with JavaScript, obviously...
you're thinking the wrong way round: the most basic version has to be the default version, and then, if you detect more advanced capabilities, you can use them.
Try to avoid separate versions for different browsers/capabilities for as long as you can. It's so much work to keep all versions in sync and up-to-date.
Some good resources to get you started:
Understanding Progressive Enhancement
Progressive Enhancement with JavaScript
Test-Driven Progressive Enhancement
The best way is to design a page that works adequately without JS. Then add a <script> block at the bottom of the <body> section with code like this:
window.onload = function() {
    // Do DOM manipulations to add JS functionality here. For instance...
    document.getElementById('someInputField').onchange = function() {
        // Do stuff here that you can't do in HTML, like on-the-fly validation
    }
}
Study the jQuery examples. They show lots of things like this. This is called "unobtrusive JavaScript". Google for that to find more examples.
EDIT: The jQuery version of the above is:
$(document).ready(function() {
    // Do DOM manipulations to add JS functionality here. For instance...
    $('#someInputField').change(function() {
        // Do stuff here that you can't do in HTML, like on-the-fly validation
    });
});
I added this just to show the lower verbosity of jQuery vs. standard DOM manipulation. There is a minor difference between window.onload and document.ready, discussed in the jQuery docs and tutorials.
What you're aiming for is progressive enhancement. I'd go about this by first designing the site without the JavaScript and, once it works, start adding your JavaScript events via a library such as jQuery so that the behaviour of the site is completely separate from the presentation. This way you can provide a higher level of functionality and polish for those who have JavaScript enabled in their browsers and those who don't.

Best Practices for onload Javascript

What is the best way to handle several different onload scripts spread across many pages?
For example, I have 50 different pages, and on each page I want to set a different button click handler when the dom is ready.
Is it best to set onclicks like this on each individual page,
<a id="link1" href="#" onclick="myFunc()" />
Or a very long document ready function in an external js file,
Element.observe(window, 'load', function() {
    if ($('link1')) {
        // set click handler
    }
    if ($('link2')) {
        // set click handler
    }
    ...
});
Or split each if ($('link')) {} section into script tags and place them on the appropriate pages,
Or lastly, split each if ($('link')) {} section into its own separate js file and load it appropriately per page?
Solution 1 seems the least elegant and is relatively obtrusive, solution 2 will lead to a very lengthy load function, solution 3 is less obtrusive than 1 but still not great, and solution 4 will require the user to download a separate js file per page he visits.
Are any of these best (or worst) or is there a solution 5 I'm not thinking of?
Edit: I am asking about the design pattern, not which onload function is the proper one to use.
Have you thought about making a class for each type of behavior you'd like to attach to an element? That way you could reuse functionality between pages, just in case there was overlap.
For example, let's say that on some of the pages you want to have a button that pops up some extra information on the page. Your html could look like this:
<a href="#" class="more-info">More info</a>
And your JavaScript could look like this:
jQuery(".more-info").click(function() { ... });
If you stuck to some kind of convention, you could also add multiple classes to a link and have it do a few different things if you needed (since jQuery will let you stack event handlers on an element).
Basically, you're focusing on the behaviors for each type of element you're attaching JavaScript to, rather than picking out specific ids of elements to attach functionality to.
I'd also suggest putting all of the JavaScript into one common file or a limited number of common files. The main reason being that, after the first page load, the JavaScript would be cached and won't need to load on each page. Another reason is that it would encourage you do develop common behaviors for buttons that are available throughout the site.
In any case, I would discourage attaching the onclick directly in the html (option #1). It's obtrusive and limits the flexibility you have with your JavaScript.
Edit: I didn't realize Diodeus had posted a very similar answer (which I agree with).
First of all, I don't understand why you think setting event listeners is obtrusive.
but ...
Solution one is a bad idea
<a id="link1" href="#" onclick="myFunc()" />
because you should keep your markup and your scripts separate.
Solution two is a bad idea
Element.observe(window, 'load', function() {
    if ($('link1')) {
        // set click handler
    }
    if ($('link2')) {
        // set click handler
    }
    ...
});
because you are using a lot of unneeded javascript for every page.
Solution three is a bad idea for the same reason I said solution one is a bad idea.
Solution 4 is the best idea. Yeah, it's one extra load per page, but if for each page you just split out its if ($('link')) {} section, the file size can't be that large. Plus, if you take this code out of the global JavaScript, its load time will be reduced.
You could hack the class name and use it in a creative manner:
<a class="loadevent functionA" id="link1" href="#" onclick="myFunc()" />
... on another page...
<a class="loadevent functionB" id="link1" href="#" onclick="myFunc()" />
You could select by class name "loadevent" and grab the other class names for that tag, the other class name being the actual function name you want to hook into. This way one handler would be able to do every page and all you have to do is provide the corresponding class names.
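A sketch of that handler (it assumes the named functions are globals, so they can be looked up on window, and the class layout shown above):
$(document).ready(function() {
    $('.loadevent').each(function() {
        // take the class name that follows "loadevent", e.g. "functionA"
        var fnName = $.trim(this.className.replace('loadevent', '')).split(' ')[0];
        var fn = window[fnName];
        if (typeof fn === 'function') {
            $(this).click(fn);
        }
    });
});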
I would use JQuery's document ready if possible
$(document).ready(function() {
    // jQuery goodness here.
});
While Chris is somewhat correct in that you can do this:
$(document).ready(function() {
    // A
});
$(document).ready(function() {
    // B
});
$(document).ready(function() {
    // C
});
and all functions will be called (in the order they are encountered), it's worth mentioning that the ready() event isn't quite the same as onload(). From the jQuery API docs:
Binds a function to be executed whenever the DOM is ready to be traversed and manipulated.
You may want the load() event instead:
$(window).load(function() {
    // do stuff
});
which will wait for images and the like to be loaded.
Without resorting to the kneejerk jQuery, if the page-varying JS is relatively light I would include it in an inline header script (binding to the onload event trigger, yes) similar to #4, but I wouldn't do this as a separate JS script and download; I'd be looking to handle this with a server-side include - however you want to handle that (me? I'd go with XSLT includes).
That gives you both a high degree of modular separation and keeps the download as light as possible.
Having a lot of different pages, I would allow different handling of events for those pages...
If, however, the differences were slight, I would try to find a pattern I could hang on to, and probably make a really simple algorithm to tell the pages apart...
The last thing I would resort to would be a (big) library (jQuery, MooTools or whatever !-) if I wasn't going to use it in any other way...
Now that you're talking of best practices, best practice would always be what your users will experience as the lightest solution, and in that, users should be understood in the widest possible way, including the developers and so on who are to maintain that site !o]
