What is unobtrusive JavaScript? [duplicate] - javascript

What is the difference between obtrusive and unobtrusive JavaScript, in plain English? Brevity is appreciated. Short examples are also appreciated.

Having no JavaScript in the markup is unobtrusive:
Obtrusive:
<div onclick="alert('obtrusive')">Information</div>
Unobtrusive:
<div id="informationHeader">Information</div>
document.getElementById('informationHeader').addEventListener('click', () => alert('unobtrusive'));

I don't endorse this anymore as it was valid in 2011 but perhaps not in 2018 and beyond.
Separation of concerns. Your HTML and CSS aren't tied into your JS code. Your JS code isn't inline to some HTML element. Your code doesn't have one big function (or non-function) for everything. You have short, succinct functions.
Modular.
This happens when you correctly separate concerns. E.g., your awesome canvas animation doesn't need to know how vectors work in order to draw a box.
Don't kill the experience if users don't have JavaScript enabled, or aren't running the most recent browsers -- do what you can to gracefully degrade the experience (a small sketch follows this list).
Don't build mountains of useless code when you only need to do something small. People endlessly complicate their code by re-selecting DOM elements, goofing up semantic HTML and tossing numbered IDs in there, and other strange things that happen because they don't understand the document model or some other bit of technology-- so they rely on "magic" abstraction layers that slow everything down to garbage-speed and bring in mountains of overhead.
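A minimal sketch of that graceful-degradation point, assuming a hypothetical share button whose href already points to a working share page:
<a id="share-button" href="/share">Share</a>
// Only enhance when the browser actually supports the Web Share API.
const shareButton = document.getElementById('share-button');
if (shareButton && navigator.share) {
  shareButton.addEventListener('click', (e) => {
    e.preventDefault(); // skip the fallback page when the enhanced path is available
    navigator.share({ title: document.title, url: location.href });
  });
}
// With JS disabled (or an older browser) the link simply loads /share as usual.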

Separation of HTML and JavaScript (define your JavaScript in external JavaScript files)
Graceful degradation (important parts of the page still work with JavaScript disabled).
For a long-winded explanation, check out the Wikipedia page on the subject.

To expand on Mike's answer: using UJS behavior is added "later".
<div id="info">Information</div>
... etc ...
// In an included JS file etc, jQueryish.
$(function() {
$("#info").click(function() { alert("unobtrusive!"); }
});
UJS may also imply gentle degradation (my favorite kind), for example, another means to get to the #info click functionality, perhaps by providing an equivalent link. In other words, what happens if there's no JavaScript, or I'm using a screen reader, etc.
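A rough sketch of that fallback idea (the info.html URL is an assumption, not part of the answer):
<a id="info" href="info.html">Information</a>
// Included JS file: intercept the click when JS is available; otherwise the href still works.
$(function() {
  $("#info").click(function(e) {
    e.preventDefault();      // no full page load needed
    alert("unobtrusive!");   // or show the information in place
  });
});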

unobtrusive - "not obtrusive; inconspicuous, unassertive, or reticent."
obtrusive - "having or showing a disposition to obtrude, as by imposing oneself or one's opinions on others."
obtrude - "to thrust (something) forward or upon a person, especially without warrant or invitation"
So, speaking of imposing one's opinions, in my opinion the most important part of unobtrusive JavaScript is that from the user's point of view it doesn't get in the way. That is, the site will still work if JavaScript is turned off by browser settings. With or without JavaScript turned on the site will still be accessible to people using screen readers, a keyboard and no mouse, and other accessibility tools. Maybe (probably) the site won't be as "fancy" for such users, but it will still work.
If you think in terms of "progressive enhancement", your site's core functionality will work for everybody no matter how they access it. Then for users with JavaScript and CSS enabled (most users) you enhance it with more interactive elements.
The other key "unobtrusive" factor is "separation of concerns" - something programmers care about, not users, but it can help stop the JavaScript side of things from obtruding on the users' experience. From the programmer's point of view avoiding inline script does tend to make the markup a lot prettier and easier to maintain. It's generally a lot easier to debug script that isn't scattered across a bunch of inline event handlers.
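As a small illustration of that progressive-enhancement idea (the /search endpoint, field names and #results element are invented for the example):
<form id="search" action="/search" method="get">
  <input type="text" name="q">
  <button type="submit">Search</button>
</form>
<div id="results"></div>
// Without JS the form submits to /search and the server returns a full results page.
// With JS the same form is enhanced to fetch and insert the results in place.
document.getElementById('search').addEventListener('submit', function (e) {
  e.preventDefault();
  var q = this.elements.q.value;
  fetch('/search?q=' + encodeURIComponent(q))
    .then(function (res) { return res.text(); })
    .then(function (html) { document.getElementById('results').innerHTML = html; });
});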

Even if you don't do ruby on rails, these first few paragraphs still offer a great explanation of the benefits of unobtrusive javascript.
Here's a summary:
Organisation: the bulk of your javascript code will be separate from your HTML and CSS, hence you know exactly where to find it
DRY/Efficiency: since the JavaScript is stored outside of any particular page on your site, it's easy to reuse it on many pages. In other words, you don't have to copy/paste the same code into many different places (at least nowhere near as much as you would otherwise) -- see the sketch after this list.
User Experience: since your code is moved out into other files, those can be stored in the client-side cache and only downloaded once (on the first page of your site), rather than needing to fetch the JavaScript on every page load on your site
Ease of minimization, concatenation: since your javascript will not be scattered inside HTML, it will be very easy to make its file size smaller through tools that minimise and concatenate your javascript. Smaller javascript files means faster page loads.
Obfuscation: you may not care about this, but typically minifying and concatenating javascript will make it much more difficult to read, so if you didn't want people snooping through your javascript and figuring out what it does, and seeing the names of your functions and variables, that will help.
Serviceability: if you're using a framework, it will probably have established conventions around where to store javascript files, so if someone else works on your app, or if you work on someone else's, you'll be able to make educated guesses as to where certain javascript code is located
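For instance, a single external file included on every page can drive validation for any form that opts in, so nothing is copied between pages (a rough sketch; the class name and rule are made up):
// validate.js -- shared by every page that contains a form
$(function () {
  $("form.needs-validation").on("submit", function (e) {
    var ok = true;
    $(this).find("[required]").each(function () {
      if (!$.trim($(this).val())) {   // treat empty or whitespace-only as missing
        ok = false;
        $(this).addClass("error");
      }
    });
    if (!ok) { e.preventDefault(); }  // block submission only when something is missing
  });
});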

Related

Replace flash with jquery/html5

I have a project that uses Flash for most pages. Now the client wants to replace the Flash with jQuery/HTML. Where should I start?
The project has an SWF file, and it is embedded with swfobject (JavaScript).
Can someone give me an idea of the steps to convert the SWF to JavaScript/HTML?
If the Flash doesn't contain a huge amount of animation/dynamic interaction then I'd say you should follow a process along these lines:
Create HTML documents where the content is structured in a simple, consistent format. If you're not sure about how to do this then I'd suggest you find out some of the basic principles associated with semantic markup (Google is your friend here, there are plenty of great resources - but here's a starter I just came across). Don't worry about elements which involve animation or special user interaction at this stage - just set aside a part of your HTML where such elements need to go for now
Use CSS to layout the document structure with the desired appearance. Do this one step at a time - starting from the largest elements in the page and working your way down to the smallest ones (I find this the best way to build your UI reliably though others may approach it differently).
Once your basic pages are structured correctly and looking good it's time to focus on the animated/interactive aspects. The easiest way to do this is to use other people's work: jQuery plugins. Identify what functionality you need and find plugins that already provide that functionality (i.e. you mentioned a slideshow function - Google for "jQuery slideshow" and play around with the options - there will be plenty).
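If it helps to see the shape of such a replacement before committing to a plugin, a bare-bones slideshow in jQuery looks roughly like this (the markup and timing are invented for the sketch):
<div class="slideshow">
  <img src="slide1.jpg" alt="">
  <img src="slide2.jpg" alt="">
  <img src="slide3.jpg" alt="">
</div>
// Show one image at a time and rotate every few seconds.
$(function () {
  var $slides = $(".slideshow img");
  var current = 0;
  $slides.hide().eq(0).show();
  setInterval(function () {
    $slides.eq(current).fadeOut();
    current = (current + 1) % $slides.length;
    $slides.eq(current).fadeIn();
  }, 4000);
});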
Realistically, if you're not particularly familiar with HTML/CSS/JS this will not be a simple task for you. Here are a few other thoughts that might help you:
Focus initially on the content structure: if you get this right everything else builds on top of it with much less pain. In addition, Google really likes good document structure so this will go a very small way to getting the site better search result rankings.
Don't worry too much about HTML5: aside from the fact that you have to do extra work to make it fully browser compatible (at least, if you have need of a great deal of browser compatibility), it just isn't really necessary to take advantage of elements like nav or video yet (unless your client won't allow the use of Flash video - but that's for another discussion). Don't bite off more than you can chew.
Be consistent, this is related to the first point above - if you apply structure to your elements consistently across all of your documents you can then take advantage of the same CSS and JS much more easily.
As for the w3schools comments elsewhere on this page, I would say don't use them for tutorials - but they can be a useful reference source for learning HTML elements and attributes and for CSS rules (although there are many other sites who could help here).
Well, I hope this helps you out. Sorry I can't be more specific but I'd need a much more detailed description of your problem before I could give you much more... Good luck
1) Learn Flash (hopefully you've achieved that)
2) Learn Javascript. Learn html5. There are a lot of resources and tutorials.
3) Take the source of the Flash project. Read it and slowly rewrite it to JavaScript.
4) Once you have translated the project, it is probably suboptimal. Think in JavaScript to change some Flash-like constructions into JavaScript-like ones. Do minor optimizations until you're happy with the results.
Have you tried Wallaby or Swiffy? Adobe is reacting to the demise of excessive Flash usage in other ways too.

Javascript outside public_html?

Is there any way/reason to keep your js/jQuery outside public_html? Are there security benefits?
All javascript is readable no matter where you put it on your server. The best you can do is obfuscate it:
How can I obfuscate (protect) JavaScript?
Plus, jQuery is open source, so there is no benefit of storing it any place in particular.
JavaScript is code that is executed by your visitors' browsers, so it must be public. It is good practice to compress your JavaScript code because it will require less bandwidth to transfer and it will make it harder for others to read (i.e. figure out how to exploit it). Yahoo offers a free JavaScript compressor.
Is there any reasons to keep your JavaScript outside public_html?
It allows for a more optimized server-side and client-side caching of the resources.
It allows for a more parallelized download of the resources from the server by the client.
It makes it easier for you to separate your UI's presentational layer (your HTML code) from its controls and logic layer (your JavaScript code), which also means:
It makes it easier for you to manage.
It's easier to share across your team (you won't have designers and developers working on the same pieces of code), and you separate functionality into smaller resources (even if you compile them together for a production server).
As a result of this, the design of your JS code will most probably be more prone to testability.
It also allows you to not load some bits and pieces you may not need all the time. It thus also facilitates code reuse for some of these bits when they're needed somewhere else, without having to duplicate them in other huge files of stuff put together.
Is there any ways to keep your JavaScript outside public_html?
Sure. Just put the code in external JS files and include them normally using script tags. Or inline them with your build script or template engine. Be sure to decouple your HTML from your JS (no ugly JS injected directly with on<something>=javascript:doThisOrThat() in your tags) and to use more standard and robust methods instead (use a system compliant with the W3C DOM Level 2 Event model; jQuery is very practical for this anyway, as it's designed with progressive enhancement in mind).
Are there security benefits?
Short answer: no. But you get a clear increase in code quality, which relates directly to your overall security. Having your JS within the same file is not a security flaw, though. If what you meant is that you're concerned about your JS code being visible to visitors, as other answers already mentioned: you cannot do anything about this. You can make it harder and more painful to look at (via obfuscation), but this won't prevent someone who really wants to from digging into your code. My recommendation: don't waste your time and efforts on that.
Unless it's in a writable location (which it shouldn't be), no.

Javascript Ajax Graceful-degradation, with Different Pages?

I'm starting to give a little more attention to making my javascript and ajax degrade gracefully. Which is more recommended:
working on incorporating the graceful degradation into your existing code (can be tricky)
or
developing a different sets of pages for the non-js users.
I'm leaning towards the different sets of pages, because I feel it's easier and I get to deliver the best possible results for each user type (js-enabled or js-disabled). Do you agree with me, and if not, why do you disagree?
I'm also worrying about hacking attempts. For example hacker gets to the js-enabled version, then disables his js. Any thoughts on this point? I don't know much about hacking, but can this be a security concern if I go with the separate versions?
Thanks in advance
Though it doesn't work well for existing sites, often it's more useful to use the Progressive Enhancement paradigm: build the site so it works with no special add-ons, then start layering your awesomeness on top of that.
This way you can be sure it works from the ground up and everyone (including those who use screen readers, those who turn off images or stylesheets, and those who don't use javascript) can all access your site.
For an existing site, however, it will depend on what functionality the ajax is delivering. In general you should strive to mirror all the ajax functionality with js disabled. If you have security holes in your js version, then you probably will in your non-js version too. AJAX can't get to anything that can't be accessed via an ordinary URL.
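One hedged sketch of that point, using Express (the route, view names and loadArticle helper are assumptions): the same URL serves both the full page and the AJAX fragment, so turning JS off never reaches anything the ordinary URL wouldn't.
var express = require('express');
var app = express();

app.get('/articles/:id', function (req, res) {
  var article = loadArticle(req.params.id);   // hypothetical data-access helper
  if (req.xhr) {
    res.render('article-fragment', { article: article });  // partial for the AJAX version
  } else {
    res.render('article-page', { article: article });      // full page for non-JS users
  }
});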
Developing two separate sets of pages, one for JS enabled and one for non-JS, is obviously a lot of work, not only initially, but also as your application keeps evolving. If that doesn't bother you too much, I think that's the way to go. I think you are right about same-page graceful degradation being very tricky sometimes. Sometimes this is just because of the layout: With JS enabled, you can simply hide and show elements, where as without JS: where to put everything? Separate sets of pages can help keep page structure cleaner.
About hacking attempts: You can never, never, never rely on client-side JavaScript validation. Everything has to be checked (or re-checked) server-side, and your server-side code may make no assumptions whatsoever on the user input. Therefore, I think the scenario of someone de-activating JS while using the application is irrelevant. Try to keep the expected user input uniform for the non-JS and the JS versions, validate it properly, and you're good.
You'll probably want to check out jQuery Ajaxy. It lets you gracefully upgrade your website into a full featured ajax one without any server side modifications, so everything still works for javascript disabled users and search engines. It also supports hashes so your back and forward buttons still work.
It's been implemented on these two sites (which I know of) http://wbhomes.com.au and http://www.balupton.com

Using Javascript to render data onload

This post probably will need some modification. I'll do my best to explain...
Basically, as a tester, I have noticed that sometimes programmers who use template-based web back ends push a lot of stuff into onload handlers that then do stuff like load menu items, change display values in forms, etc.
For example, a page that displays your network configuration loads blank (or dummy values) for the IP info, then loads a block of variables in an onload function that sets the values when the page is rendered.
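For illustration, the pattern being described looks roughly like this (the ids and values are invented):
<span id="ip-address">0.0.0.0</span>  <!-- dummy value shipped in the markup -->
<script>
  // The real value only appears after the page has loaded and the handler has run.
  window.onload = function () {
    document.getElementById('ip-address').textContent = '192.168.1.10';
  };
</script>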
My experience (and gut feeling) is that this is a really bad practice, for a couple reasons.
1- If the page is displayed in an environment where Javascript is off (such as using "Send Page") the page will not display properly in that environment.
2- The HTML page becomes very hard to diagnose, because what is actually on screen needs to be pieced together by executing the JavaScript in your head (this problem is less prominent with Firefox because of Firebug).
3- Most of the time, this is not being done via a standard practice or feature of the environment. In other words, there isn't a service on the back-end; the back-end code looks just as spaghetti as the resulting HTML.
and, not really a reason, more a correlation:
I have noticed that most coders that do this are generally the coders that have a lot of code-related bugs or critical integration bugs.
So, I'm not saying we shouldn't use javascript, I think what I'm saying is, when you produce a page dynamically, the dynamic behavior should be isolated to the back-end, and you should avoid changing the displayed information after the page is loaded and rendered.
I think what you're saying is what we should be doing is Progressive Enhancement with JavaScript.
Also related: Progressive Enhancement with CSS, Understanding Progressive Enhancement and Test-Driven Progressive Enhancement.
So the actual question is "What are advantages/disadvantages" of javascript content generation?
Here's one: a lot of the things designers want are hard in straight HTML/CSS, or not fully supported. Using jQuery to do zebra-tables with ":odd", for instance. Sometimes the server-side framework doesn't have a good way to accomplish this, so the way to get the cleanest code is actually to split it up like that.
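For reference, that zebra-table enhancement is a one-liner in jQuery:
// Add an alternate class to every other table row.
$("table tr:odd").addClass("alt");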

When should I use Inline vs. External Javascript?

I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance.
What is the general practice for this?
Real-world-scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I:
write the bits of code that configure this script inline?
include all bits in one file that's shared among all these html pages?
include each bit in a separate external file, one for each html page?
Thanks.
At the time this answer was originally posted (2008), the rule was simple: All script should be external. Both for maintenance and performance.
(Why performance? Because if the code is separate, it can be cached more easily by browsers.)
JavaScript doesn't belong in the HTML code and if it contains special characters (such as <, >) it even creates problems.
Nowadays, web scalability has changed. Reducing the number of requests has become a valid consideration due to the latency of making multiple HTTP requests. This makes the answer more complex: in most cases, having JavaScript external is still recommended. But for certain cases, especially very small pieces of code, inlining them into the site’s HTML makes sense.
Maintainability is definitely a reason to keep them external, but if the configuration is a one-liner (or in general shorter than the HTTP overhead you would get for making those files external) it's performance-wise better to keep them inline. Always remember that each HTTP request generates some overhead in terms of execution time and traffic.
Naturally this all becomes irrelevant the moment your code is longer than a couple of lines and is not really specific to one single page. The moment you want to be able to reuse that code, make it external. If you don't, look at its size and decide then.
If you only care about performance, most of the advice in this thread is flat out wrong, and is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code. I've spent countless hours optimizing SPA page load times, and verifying these results with different browsers. Across the board, the performance increase from re-orchestrating your html can be quite dramatic.
To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to <head> and <body> phases, but think of them instead as <static> and <dynamic>. The static portion is basically a string constant which you shove down the response pipe as fast as you possibly can. This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending http content), but in principle it's just flushing the response buffer, hopefully before jumping into some templating code (razor, php, etc) on the server. This may sound difficult, but then I'm just explaining it wrong, because it's near trivial. As you may have guessed, this static portion should contain all javascript inlined and minified. It would look something like
<!DOCTYPE html>
<html>
<head>
<script>/*...inlined jquery, angular, your code*/</script>
<style>/* ditto css */</style>
</head>
<body>
<!-- inline all your templates, if applicable -->
<script type='template-mime' id='1'></script>
<script type='template-mime' id='2'></script>
<script type='template-mime' id='3'></script>
Since it costs you next to nothing to send this portion down the wire, you can expect that the client will start receiving this somewhere around 5ms + latency after connecting to your server. Assuming the server is reasonably close this latency could be between 20ms to 60ms. Browsers will start processing this section as soon as they get it, and the processing time will normally dominate transfer time by factor 20 or more, which is now your amortized window for server-side processing of the <dynamic> portion.
It takes about 50ms for the browser (chrome, rest maybe 20% slower) to process inline jquery + signalr + angular + ng animate + ng touch + ng routes + lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much, so we would win latency+100ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all js code and templates and we can start executing dom transforms.
You may object that this method is orthogonal to the inlining concept, but it isn't. If you, instead of inlining, link to cdns or your own servers the browser would have to open another connection(s) and delay execution. Since this execution is basically free (as the server side is talking to the database) it must be clear that all of these jumps would cost more than doing no jumps at all. If there were a browser quirk that said external js executes faster we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage.
I work a lot with optimization of SPA apps. It's common for people to think that data volume is a big deal, while in truth latency, and execution often dominate. The minified libraries I listed add up to 300kb of data, and that's just 68 kb gzipped, or 200ms download on a 2mbit 3g/4g phone, which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it was proxy cached, because the mobile latency tax (phone-to-tower-latency) still applies. Meanwhile, desktop connections that have lower first-hop latency typically have higher bandwidth anyway.
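A rough sketch of that two-stage flush in Node (the helper names are invented; the answer's own stack may differ):
var http = require('http');

// Stage one: a string constant with everything inlined (JS, CSS, templates).
var staticHead = '<!DOCTYPE html><html><head><script>/* inlined app code */</script></head><body>';

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.write(staticHead);                      // flush immediately; the browser starts parsing now
  fetchPageData(req, function (data) {        // hypothetical database / API work
    res.end(renderDynamicPart(data) + '</body></html>');  // hypothetical template render
  });
}).listen(3000);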
In short, right now (2014), it's best to inline all scripts, styles and templates.
EDIT (MAY 2016)
As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined.
Externalizing javascript is one of the yahoo performance rules:
http://developer.yahoo.com/performance/rules.html#external
While the hard-and-fast rule that you should always externalize scripts will generally be a good bet, in some cases you may want to inline some of the scripts and styles. You should however only inline things that you know will improve performance (because you've measured this).
I think a short script that is specific to one page is the only defensible case for inline script.
Actually, there's a pretty solid case to use inline javascript. If the js is small enough (one-liner), I tend to prefer the javascript inline because of two factors:
Locality. There's no need to navigate an external file to validate the behaviour of some javascript
AJAX. If you're refreshing some section of the page via AJAX, you may lose all of your DOM handlers (onclick, etc) for that section, depending on how you bound them. For example, using jQuery you can use the live or delegate methods to circumvent this, but I find that if the js is small enough it is preferable to just put it inline.
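A short sketch of that delegation idea using jQuery's .on() (the selectors are made up):
// Direct binding is lost when the contents of #results are replaced via AJAX:
$("#results a.item").click(function () { /* gone after a refresh */ });

// Delegated binding survives, because the handler lives on a container that isn't itself replaced:
$("#results").on("click", "a.item", function (e) {
  e.preventDefault();
  // handles clicks even for links inserted after the initial page load
});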
Another reason why you should always use external scripts is for easier transition to Content Security Policy (CSP). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.
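For example, a policy that only allows same-origin external scripts could be set like this (an Express-style sketch, not from the answer):
app.use(function (req, res, next) {
  // Inline <script> blocks and on*= handlers are ignored under this policy.
  res.setHeader("Content-Security-Policy", "script-src 'self'");
  next();
});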
I would take a look at the required code and divide it into as many separate files as needed. Every js file would only hold one "logical set" of functions etc. eg. one file for all login related functions.
Then during site developement on each html page you only include those that are needed.
When you go live with your site you can optimize by combining every js file a page needs into one file.
The only defense I can offer for inline javascript is that when using strongly typed views with .NET MVC you can refer to C# variables mid-JavaScript, which I've found useful.
On the point of keeping JavaScript external:
ASP.NET 3.5 SP1 recently introduced functionality to create a Composite Script Resource (merge a bunch of js files into one). Another benefit of this is that when web-server compression is turned on, downloading one slightly larger file will have a better compression ratio than many smaller files (also less http overhead, fewer roundtrips, etc.). I guess this saves on the initial page load, then browser caching kicks in as mentioned above.
ASP.NET aside, this screencast explains the benefits in more detail:
http://www.asp.net/learn/3.5-SP1/video-296.aspx
Three considerations:
How much code do you need (sometimes libraries are a first-class consumer)?
Specificity: is this code only functional in the context of this specific document or element?
Any code inside the document tends to make it longer and thus slower. Besides that, SEO considerations make it obvious that you should minimize internal scripting ...
External scripts are also easier to debug using Firebug. I like to unit test my JavaScript, and having it all external helps. I hate seeing JavaScript in PHP code and HTML; it looks like a big mess to me.
Another hidden benefit of external scripts is that you can easily run them through a syntax checker like jslint. That can save you from a lot of heartbreaking, hard-to-find, IE6 bugs.
In your scenario it sounds like writing the external stuff in one file shared among the pages would be good for you. I agree with everything said above.
During early prototyping keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production.
I'd even dare to say that if you can't place all your Javascript externally, then you have a bad design on your hands, and you should refactor your data and scripts.
Google has included load times in its page-ranking measurements. If you inline a lot, it will take longer for the spiders to crawl through your page, and this may influence your page ranking if you have too much included. In any case, different strategies may have an influence on your ranking.
Well, I think that you should use inline scripts when making single-page websites, as the scripts will not need to be shared across multiple pages.
Having internal JS pros:
It's easier to manage & debug
You can see what's happening
Internal JS cons:
People can change it around, which really can annoy you.
External JS pros:
no changing around
you can look more professional (or at least that's what I think)
External JS cons:
harder to manage
it's harder to know what's going on.
Always try to use external JS, as inline JS is difficult to maintain.
Moreover, most developers recommend keeping JS external, so it's the more professional convention.
I myself use external JS.
