For my current project, I need the ability to remove all functionality from a page, so that it becomes a completely and literally static page: removing the ability to follow any links, and disabling any JavaScript listeners that allow content on the page to be changed. Here is my attempt so far:
$("*").unbind().attr("href", "#");
But in the pursuit of a perfect script, and to allow it to work in every eventuality on any possible page (and with some uncertainty about whether a one-liner can be effective enough), I thought I'd consult the experts here at Stack Overflow.
In summary, my question is: 'Can this be (and has it been) done in a one-liner, and is there anything it could miss?' Please break this as best you can.
No. Nothing in this stops meta redirects, or timeouts or intervals already in flight, and it does nothing about same-origin iframes (or ones that can become same-origin via document.domain) that can reach back into the parent page to re-dynamize content.
EDIT:
The sheer number of ways scripts can stay submerged and pop up later is large, so unless you control all the code that can run before you want to do this, I would be inclined to say that it's impossible in practice to lock this down, short of having a team including some browser implementers work on it for some time.
Other possible sources of submarine scripts: XMLHttpRequests (and their onreadystatechange handlers), Flash objects that can script, web workers, and code embedded to run in things like Object.prototype.toString.
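For instance, a timer started before the one-liner runs keeps firing afterwards; a minimal illustration (not exhaustive, and assuming jQuery is loaded):

// Started by some page script before the "freeze" one-liner runs.
setInterval(function () {
    // This keeps mutating the page even after every handler has been unbound,
    // because unbind() removes handlers, not pending timers.
    document.body.appendChild(document.createTextNode("still alive "));
}, 1000);

// Later, the attempted lock-down:
$("*").unbind().attr("href", "#"); // the interval above is unaffected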
I did not want to write a lengthy comment so I'm posting this instead.
As @Felix Kling said, I don't think your code will remove the href attributes on every element, but rather remove every element and then set their href attributes.
You probably need to write:
$("*").attr("href", "#").detach() ;
to remove the attributes instead of the elements.
Other than that, I doubt that you could remove the event handlers in one line. For one thing, you would need to account for DOM Level 2 event registration (addEventListener, only settable with scripting) and DOM Level 0 event registration (via on* attributes or scripting).
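To illustrate the two registration styles (the element id and handlers are made up):

// DOM Level 0: handler wired up via a markup attribute or element property,
// e.g. <button id="save" onclick="save()">Save</button>
document.getElementById("save").onclick = function () { alert("level 0"); };

// DOM Level 2: handler attached with addEventListener; there is no standard
// way to enumerate these later, which is what makes a generic
// "remove every handler" one-liner so hard.
document.getElementById("save").addEventListener("click", function () {
    alert("level 2");
}, false);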
As far as I'm concerned, your best bet is to make a shallow document copy using an XML parser and replace the old document (which you could back up by saving it on the window).
First: Your code will remove everything from the page, leaving a blank page. I cannot see how it would make the page "static".
$('*').detach();
will remove every element from the DOM. Nothing left. So yes, you remove every functionality in a way, but you also remove all the content.
Update: Even with the change from detach to unbind, the below points are still valid.
Event listeners added in the markup via oneventname="foo()" won't be affected.
javascript: URLs e.g. in images might still be triggered.
Event listeners added to window and document will persist.
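A sketch that also tries to cover those cases might look like this (still incomplete, for the reasons given in the other answers; it only handles href-based javascript: URLs):

// Unbind jQuery-managed handlers everywhere, including window and document.
$(window).unbind();
$(document).unbind();
$("*").unbind();

$("*").each(function () {
    var el = this;
    // Collect attribute names first: el.attributes is a live collection.
    var names = [];
    for (var i = 0; i < el.attributes.length; i++) {
        names.push(el.attributes[i].name);
    }
    for (var j = 0; j < names.length; j++) {
        // Strip inline handlers such as onclick="..." / onmouseover="...".
        if (/^on/i.test(names[j])) {
            el.removeAttribute(names[j]);
        }
    }
    // Neutralize ordinary links and javascript: URLs alike.
    if (el.getAttribute("href")) {
        el.setAttribute("href", "#");
    }
});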
I'm trying to figure out if altering the DOM of a website will present any accessibility problems. I am placing all of the jQuery in a $(document).ready() function. Will this cause any accessibility issues with the altered elements?
We don't have access to the theme template HTML files, only CSS and JS files. So, for example, if I'm adding a div into our theme using $('[element name]').before('<div>[div content here]</div>'), will this content be as accessible as the rest of the DOM as long as I include all the appropriate ARIA attributes etc.?
In theory, you shouldn't rely on JavaScript to produce the whole HTML code of your site; it's generally considered bad practice.
However, it's exactly how big frameworks like Angular and React work.
Given that 99% of browsers support JavaScript, it's in fact no longer a problem nowadays.
The true answer is in fact both yes and no: it depends on the actual HTML code injected.
The key point is that, you must have the same care with generated code as with the code directly written in HTML by hand, i.e. be careful on headings, form labels, alt texts, ARIA attributes if you need them, etc. all the time and in particular each time you add/remove something in the DOM. Additionally, you must pay attention to where the focus is or might be and a few other things.
It's often overlooked precisely because some people assume that it isn't accessible anyway, which isn't true.
In order to be accessible, a site with dynamic contents must be accessible at any moment. If that isn't always the case, then you will lose users in need of accessibility at some point. In practice the loss of accessibility often happens at the most critical moment: checkout or payment, perhaps through no fault of your own if the payment site isn't accessible.
You might even improve accessibility by manipulating the DOM via JavaScript (JS). So no, per se, manipulating the DOM does not pose accessibility issues.
If you cannot control the HTML, and the theme is badly accessible, all you can do to improve that is using JavaScript. Think adding role attributes to generic <div> elements. Also, CSS-only solutions seem appealing, but are often not exposing the appropriate state via ARIA-attributes to assistive technology, which needs to be corrected via JS.
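For example (the class names are hypothetical), landmark roles can be bolted onto a theme's generic containers from a JS file:

$(function () {
    // Theme markup we cannot edit directly; only CSS/JS files are available.
    $("div.site-header").attr("role", "banner");
    $("div.site-nav").attr("role", "navigation");
    $("div.site-footer").attr("role", "contentinfo");
});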
Whether your manipulations produce problems or improve accessibility, therefore depends strongly on your implementation.
Here are some examples.
Adding or Removing Content
When adding content, like in your example, it depends on where that content is added in the document, and at which moment.
If it’s being added on DOM Ready, there should be no issue (One exception might be live regions added after the DOM loaded). But if it’s being added at arbitrary moments, it’s problematic.
The Web Content Accessibility Guidelines (WCAG) refer to this as a Change of Context, which must not happen on focus or on input, but only on user request.
See the WCAG glossary note: "A change of content is not always a change of context. Changes in content, such as an expanding outline, dynamic menu, or a tab control do not necessarily change the context, unless they also change one of the above (e.g., focus)."
If it’s being added after DOM Ready, it should happen on user request, or must be a status message and carry the appropriate role to be announced.
For example, in a Disclosure pattern, the aria-expanded of the trigger indicates that new content will become accessible right after the trigger on pressing it. It might just be added to the DOM, depending on the implementation.
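A minimal sketch of such a disclosure (the IDs and markup are made up):

// <button id="details-toggle" aria-expanded="false" aria-controls="details">More details</button>
$("#details-toggle").on("click", function () {
    var expanded = $(this).attr("aria-expanded") === "true";
    $(this).attr("aria-expanded", String(!expanded));
    if (!expanded) {
        // The content appears right after the trigger, on user request.
        $(this).after('<div id="details">Extra information.</div>');
    } else {
        $("#details").remove();
    }
});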
Lazy Loading Content
Another, very valid use case would be content that’s added asynchronously. This is tricky to get right, but basically aria-busy can render this more accessible.
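A rough sketch of that idea (the endpoint and container are placeholders):

var $results = $("#search-results");

// Tell assistive technology the region is currently being updated.
$results.attr("aria-busy", "true");

$.get("/api/results", function (html) {
    $results.html(html);
    // Clearing aria-busy signals that the updated content is ready to be announced.
    $results.attr("aria-busy", "false");
});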
Imagine that there's a button on a web page (not mine) and when it's clicked it performs some JavaScript. I want to have a button on my web page that does exactly the same thing. So I need to attach all the necessary JS files (but first I have to find them) to my HTML page, and sometimes add some JS to my HTML page as well.
What do I usually do in this case? I inspect the button's HTML element to see if it has an onclick attribute. If it does, I look at the function called when the button is clicked and then try to find that function in the current HTML page and in all the JS files attached to the page. I also need to find all the dependencies (like jQuery, fancybox, etc.).
If the button doesn't have an onclick attribute, I have to look for a direct getElementById or jQuery selector pointing to this button, with the rest of the code there. Sometimes there's no such selector and I have to find a nested selector instead, which is a really hard and annoying thing.
Is there any better, automated way of doing the things above? Ideally, after selecting the element in the DOM (the button in this case) and pressing some magic button, I would be able to see all the JS files involved in processing this click and also the JS code in the HTML page.
It's going to involve digging no matter what you do. But Chrome's Dev Tools can help with the attached event handlers, to an extent. When you right-click an element and inspect it, on the right-hand side there's a panel showing various tabs: [Styles] [Computed] [Event Listeners] [DOM Breakpoints] [Properties]. The [Event Listeners] one shows the listeners directly attached to that element. Of course, on a site using jQuery (which is more than half the sites using JavaScript at all), looking at the handler will dump you into the jQuery event handling code, but it's a start.
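As a small extra aid, the Chrome console also exposes a getEventListeners() utility (it works only in the DevTools console, not in page scripts); the selector below is just an example:

// Run in the DevTools console, not in page code.
getEventListeners(document.querySelector("#magic-button"));

// $0 refers to the element currently selected in the Elements panel.
getEventListeners($0);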
Just as a side point: While it's fine to look at the source of pages for inspiration, or to see how they solved a particular problem, or what plugins they're using to get an effect, etc., I assume you're not grabbing large sections of their actual code (as opposed to libraries and plugins with liberal licenses) without their permission, which is probably not cool.
I'm writing a Mozilla Firefox add-on that lets me comment on websites: when I open a website and click somewhere, the plugin creates a <div> box at that location, where I can enter a comment text. Later, when I open the website again, the plugin automatically puts my previously created comment boxes at the places they were before. (Similar to the comment feature in many PDF readers, etc.)
This leads to a security problem: A website could use an event listener to listen to the creation of new <div> elements and read their content, allowing it to read my private comments.
How can I solve this security issue? Basically, I want a Firefox addon to put private content in a website, while the website should not be able to access this content via JavaScript. (Unless I want it to.)
I could listen for listeners and detach them as soon as the website attaches them, but that doesn't sound like a solid solution.
Is there a security concept that would make my addon the authority over DOM changes, or let it control access to certain elements?
Alternatively, would it be possible to implement some sort of overlay which would not be an actual part of the website's DOM but only accessible by the addon?
Similar security problems should occur with other addons. How do they solve it?
If you inject DOM nodes into a document, the document will always be able to manipulate them; you can't really do much about that. You can, however:
1) Don't inject your comment directly into the document, but only a placeholder containing the first words of the comment, or an image version of the comment (you can generate that with canvas). Keep the full comments in your add-on's JavaScript scope, which is not accessible from the page: when you click to edit or add, you can open a panel instead and do the editing there.
2) Inject an iframe. If you host your page remotely on another domain, this shouldn't be a problem at all: the parent document can't access the iframe, but also vice versa, so you need to attach a content script to your iframe in order to talk with your add-on code, and then use your add-on code to send and receive messages from both the iframe and the parent document.
If you use a local resource:// document, I'm afraid you need a terrible workaround instead, and you need to use sandbox policies to prevent the parent document from communicating with the iframe itself. See my reply here: Firefox Addon SDK: Loading addon file into iframe
3) Use CSS: you can apply a stylesheet to a document via contentStyle and contentStyleFile in page-mod. A stylesheet attached in this way can't be inspected by the document itself, and you can use the content property to add your text to the page without actually adding DOM that can be inspected. So, your style could for instance be:
span#comment-12::after {
    content: 'Hello World';
}
Where the DOM you add could be:
<div><span id='comment-12'></span></div>
If the page tries to inspect the content of the span, it will get an empty text node; and because the stylesheet added in this way cannot be inspected from the page itself, the page cannot read the style rules to get the text.
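For example, a rough sketch with the (legacy) Add-on SDK; the include pattern and the comment id are placeholders:

var pageMod = require("sdk/page-mod");

pageMod.PageMod({
    include: "*",
    // A stylesheet attached this way is not visible to the page's own scripts.
    contentStyle: "span#comment-12::after { content: 'Hello World'; }"
});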
Not sure if there are alternatives, those are the solutions that pop to my mind.
Add-ons that do similar things implement some combination of a whitelist / blacklist feature where the add-on user either specifies which sites they want the action to happen on, or a range of sites they don't want it to happen on. As an add-on author, you would create this and perhaps provide a sensible default configuration. Adblock Plus does something similar.
Create an iframe and bind all your events to the new DOM. By giving it a different domain from the website, you will prevent the site from listening in on events and changes.
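A rough sketch of that approach from a content script (the overlay URL is a placeholder for a page hosted on a different origin):

var frame = document.createElement("iframe");
frame.src = "https://comments.example.net/overlay.html";
frame.style.cssText = "position: fixed; top: 10px; right: 10px;" +
    " width: 300px; height: 200px; border: 0; z-index: 2147483647;";
document.body.appendChild(frame);
// The host page cannot reach into the frame's document thanks to the
// same-origin policy; communicate with it via postMessage instead.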
Addons can use the anonymous content API that the devtools use to create their node highlighter overlays.
The operations supported on anonymous content are fairly limited, though, so it may or may not be sufficient for your use case.
Almost all web pages that I see designed to set the focus to an input box add the code into a body onload event. This causes the code to execute once the entire html document has loaded. In theory, this seems like good practice.
However, in my experience, what this usually causes is double work for the user: they have already entered data into two or three fields and are typing into another when their cursor is jumped back without their knowledge. I've seen a staggering number of users type the last 2/3 of their password into the beginning of a username field. As such, I've always placed the JS focus code immediately after the input to ensure there is no delay.
My question is: Is there any technical reason not to place this focus code inline? Is there an advantage to calling it at the end of the page, or within an onload event? I'm curious why it has become common practice considering the obvious practical drawbacks.
A couple thoughts:
I would use a framework like jQuery and have this type of code run on $(document).ready(.... window.onload doesn't run until everything on the page is fully loaded, which explains the delay you have experienced. $(document).ready(... runs when jQuery determines the DOM has been loaded. You could probably write the same sort of logic without jQuery, but it varies by browser.
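For example (the field id is hypothetical):

// Runs as soon as the DOM is parsed, without waiting for images etc.
$(document).ready(function () {
    $("#username").focus();
});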
I prefer to keep my Javascript separate from my HTML because it allows for a cleaner separation of concerns. Then your behavior is then kept separate from your document structure which is separate from your presentation in your CSS. This also allows you to more easily re-use logic and maintain that code — possibly across projects.
Google and Yahoo both suggest placing scripts at the bottom of the html page for performance reasons.
The Yahoo article: http://developer.yahoo.com/performance/rules.html#js_bottom
You should definitely place the script in the appropriate place if it means the correct user experience -- in fact I would load that part of the script (used for tabbing inputs) before the inputs to ensure it always works no matter how slow the connection.
The "document.ready" function allows you to ensure the elements you want to reference are in the dom and fires right when your whole document dom is loaded (This does not mean images are fully loaded).
If you want, you could have the inputs start out as disabled and then re-enable them on document ready. This would handle the rare case where the script is not ready yet when the inputs are displayed.
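Something along these lines (a sketch; it assumes the markup renders the inputs with the disabled attribute set):

$(document).ready(function () {
    // The inputs start out disabled in the markup, so nothing can be typed
    // into them before the focus logic has run.
    $("input, textarea").prop("disabled", false);
    $("#username").focus();
});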
Well, if you call it before the whole page has loaded, you don't really know whether the element has already been loaded when you make your call. And if you make your call beforehand, you should check that the element really exists, even if you know it always should.
Making the call inline might seem ideal. On the other hand, it's a really bad sign if a page takes so long to load that the user can fill in several inputs during the loading phase.
You could also check whether any input has already been made, etc.
It is also possible to check whether any input on the page has focus with if ($("input:focus, textarea:focus").length) ... and, if not, set focus on the desired input.
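For instance (the field id is made up), only move focus if the user hasn't already started typing somewhere:

if (!$("input:focus, textarea:focus").length) {
    $("#username").focus();
}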
Use the autofocus HTML attribute to specify which element should initially receive focus. This keeps the behavior out of JavaScript and degrades gracefully in older browsers.
I'm dynamically adding a lot of input fields through jQuery, but the page gets really slow when reaching 200+ inputs (think of the page like an HTML Excel sheet). This is fine, really, because this scenario is not very common. However, when I dynamically remove the input fields from the page using jQuery's htmlObj.remove() function, the page is still as slow as if there were hundreds of inputs still there. Is there any way to explicitly free memory in jQuery/JavaScript?
My experience with this is from using Firefox. When using Internet Explorer, the page is really slow from the start but that's a whole different story.
The technique I'm using is called event delegation, as it's supposed to be the approach that uses the least memory, compared to having all handlers explicitly bound to every object on the page.
Sadly, blur and focus events do not work with event delegation, and therefore I need to bind these to every input. This could possibly be the memory hog here. Also, in Firefox it seems I can't use event delegation for 'change' or 'key[down|up]' events on checkboxes, as these checkbox events do not bubble up to the document. So I'm binding those explicitly too.
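For reference, a stripped-down version of that setup (the selectors are made up):

// Delegated: one handler on the table serves every current and future cell input.
$("#sheet").delegate("input.cell", "change keyup", function () {
    // recalculate the affected row, etc.
});

// Directly bound: focus/blur attached to each input individually,
// which is the part that adds up with hundreds of fields.
$("input.cell").bind("focus blur", function () {
    // highlight / unhighlight the active cell
});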
Can anyone share some experience with this? I can't really show a demo right now as the site has not been launched yet.
Thx.
read this, I'm sure it will help.