Avoid FF JS automatic HTML encoding?

I'm trying to make simple templating for users on a site. I have a test line like this:
<div id="test">Test</div>
It will alert the HTML properly with the following JS in all browsers except FF:
alert( document.getElementById( 'test' ).innerHTML );
In FF it changes the curly braces to their HTML-encoded version. I don't want to just URL-decode, in case the user enters HTML with an actual URL instead of one of the templated ones. Any ideas for solving this, short of regexing the return value?
My fiddle is here: http://jsfiddle.net/davestein/ppWkT/
EDIT
Since it's seemingly impossible to avoid the difference in FF, and we're still early in development, we are just going to switch to using [] instead of {}. Marking #Quentin as the correct answer since it's what I'm going by.

When you get the innerHTML of something, you get a serialised representation of the DOM which will include any error recovery or replacing constructs with equivalents that the browser does.
There is no way to get the original source from the DOM.
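As a minimal sketch of what that means in practice (the {imageUrl} placeholder is made up for illustration), write markup containing a templated URL and then read it back:

var container = document.createElement('div');
container.innerHTML = '<img src="{imageUrl}">';

// What comes back is the browser's serialisation of the DOM. Some browsers
// (older Firefox among them) percent-encode the braces inside the src
// attribute, so this may log <img src="%7BimageUrl%7D"> rather than the
// original markup.
console.log(container.innerHTML);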

If your code won't contain %xx elsewhere, you can just run it through unescape().
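A quick sketch of that approach, assuming the templated placeholders are the only thing that can produce %xx sequences:

var html = document.getElementById('test').innerHTML;
// unescape() turns %7B / %7D back into { and }. Note that it also decodes
// any other %xx sequence, which is why this only works if genuine URLs in
// the content never contain them.
alert(unescape(html));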


How to link to text file in HTML for script use

What I'm after is this idea: the document contains some link to a text file (for load-order reasons; I have several excessively bulky script and image files as well as a huge wall of HTML, and I want this operation done within 5 seconds even at 5 kB/s), and a script can then read that text file (to avoid messy code), a bit like:
textFile = document.getElementById("textFileLink");
someText = textFile.read();
doSomething(someText);
Some ideas I have tried:
Use the link's toString method mentioned in passing in the living standard; this merely returns the URL itself.
Instead have a script which exists solely to dump a 10k character string into a global variable (definitely bad)
As above but into a display:none HTML element (maybe not quite as bad?)
As above but LocalStorage?
Is this possible, or do I have to do some kind of server-side black magic?
Try using the fetch API:
fetch('path/to/demo.txt')
  .then((res) => res.text())
  .then((data) => {
    // code
  });
Fetch should be relatively quick... let me know if it works or if you run into errors.
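If it helps, here is roughly the same thing with async/await and basic error handling; the path is just a placeholder and doSomething is your own function from the question:

async function loadTextFile(path) {
  const res = await fetch(path);
  if (!res.ok) {
    throw new Error('Failed to load ' + path + ': ' + res.status);
  }
  return res.text();
}

// Kick off the request and hand the text to your own code.
loadTextFile('path/to/demo.txt')
  .then((someText) => doSomething(someText))
  .catch((err) => console.error(err));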

Do or did javascript: URLs ever work in CSS, and if so, can it be prevented?

So I saw a code snippet today and was horrified:
<p style='background-image: url("javascript:alert(&apos;foo&apos;);");'>Hello</p>
Is it possible to execute javascript from within CSS this way? (It didn't work when I tested it on a clean Firefox profile, but maybe I made some stupid mistake and the concept does work.)
If so, what means are there to prevent this, either with an HTTP header or by declarations made by the HTML itself (e.g. when sourcing CSS files from another server)?
If not, was this never possible or has this changed?
The current CSS spec says only "valid image formats" can be used in a background-image:
In some cases, an image is invalid, such as a ‘<url>’ pointing to a resource that is not a valid image format. An invalid image is rendered as a solid-color ‘transparent’ image with no intrinsic dimensions. [...] If the UA cannot download, parse, or otherwise successfully display the contents at the URL as an image, it must be treated as an invalid image.
The spec is silent on whether or not a javascript: url that returns valid image data would work -- it'd be an interesting exercise to try to construct one! -- but I'd be pretty darn surprised if it did.
User agents may vary in how they handle invalid URIs or URIs that designate unavailable or inapplicable resources.
(As #Kaiido points out below, scripts within SVG will not run in this situation either, so I'd expect the whole javascript: protocol to be treated as an "inapplicable resource".)
IE supports CSS expressions:
width: expression(document.body.clientWidth > 955 ? "955px" : "100%");
but they are not standard and are not portable across browsers. Avoid them if possible. They have been deprecated since IE8.
Yes, in the past this attack vector worked (older browsers like IE6). I believe most modern browsers should protect against this kind of attack. That said, there can always be more complicated attacks that may get around current protections. If you are including any user-generated content anywhere, it is best to sanitize it before injecting it into your site.
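As a rough illustration of the "sanitize before injecting" point (userInput is a hypothetical variable holding user-generated content):

// Risky: the input is concatenated into markup and parsed as HTML/CSS,
// so it can smuggle in attributes, styles or tags.
// element.innerHTML = '<p style="' + userInput + '">Hello</p>';

// Safer: assigning via textContent treats the input purely as text.
var p = document.createElement('p');
p.textContent = userInput;
document.body.appendChild(p);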
It's possible to execute JavaScript where a URI is expected by prefixing it with javascript:. This is, in fact, how bookmarklets work. I don't think this works with CSS url(), however, but it does with href or window.location.
I think whoever wrote that bit of code was confused about it.
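A small sketch of that difference: a javascript: URL runs when it is navigated to, but not when it is merely fetched as a resource:

<!-- Clicking this navigates to a javascript: URL, so the alert fires;
     this is essentially how bookmarklets work. -->
<a href="javascript:alert('foo')">click me</a>

<!-- A CSS url() is fetched as an image rather than navigated, so in
     current browsers nothing runs here. -->
<p style='background-image: url("javascript:alert(&apos;foo&apos;);")'>Hello</p>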

Why does a JS script placed after the </body> and </html> tags get executed?

I was working on something in PHP and I wanted to include a file and insert something at the end. Without thinking about it, I did the include and then echoed out the material I wanted to insert, which was a JS script.
When I looked at the output, I realized I had forgotten about the closing </body> and </html> tags in the included file. The script was inserted after them, but surprisingly (at least to me) it was executed.
Had you asked me before I did this whether a script after the </body> and </html> tags would execute, I would have said "I don't think so." I would have said I thought it would not execute, because I had assumed, up to now, that anything after those closing tags is ignored by browsers.
So, had you asked, I would have given that answer, and I would have been quite wrong.
A script placed after the </body> and </html> tags does execute - why?
I've tried it with FF 3.6.24 and IE 8.0.7601.17514 and it behaves the same in both.
Any text after the </body> and </html> tags is displayed - why?
Does anyone have any thoughts on this? And, is this something I might be able to rely upon? If so, I can simplify some processing, here and there.
Here's the page I was playing with http://www.bobnovell.com/PastHtmlEndTesting.shtml - let me know if your particular browser does not execute the script and/or display the text that I've put after the script.
Bob
This is well specified behavior in HTML5 though it will be flagged by an HTML5 validator.
The "after after body" insertion mode defines what happens to content that appears after the </html> tag. The rule that handles this case is:
Anything else
↪ Parse error. Switch the insertion mode to "in body" and reprocess the token.
So technically, it's a parse error, but one with well-defined behavior. The <script> element is parsed and executed as if it had appeared in the body, and the element appears in the DOM inside the body.
Most browsers will not treat "parse errors" as fatal. The HTML 5 spec explains:
Certain points in the parsing algorithm are said to be parse errors. The error handling for parse errors is well-defined: user agents must either act as described below when encountering such problems, or must abort processing at the first error that they encounter for which they do not wish to apply the rules described below.
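For a concrete picture (a minimal test page, not the asker's actual file), the recovery works roughly like this:

<!DOCTYPE html>
<html>
  <body>
    <p>Page content</p>
  </body>
</html>
<!-- Technically a parse error, but the "after after body" rule switches the
     parser back to "in body", so the script ends up inside <body> in the
     DOM and runs normally. -->
<script>
  console.log('script after </html> executed');
</script>
<p>This paragraph is likewise reparented into the body and displayed.</p>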

view contents of large array?

I have a very large array I need to check for debugging purposes; the problem is that it crashes Firebug and the like if I try to view the data.
Is there a way I can dump the array to a text file or something?
Why not just dump it on the document itself? If you are using Firefox, try the following:
document.write(myBigArray.toSource());
Then copy and paste as you usually would on a normal web page.
P.S.: toSource() requires a browser that supports JavaScript 1.3 or above.
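On browsers without toSource(), JSON.stringify() is a rough substitute for plain data (it won't serialise functions, undefined entries or cyclic structures):

// Dump the array as indented JSON into the page so it can be copied out.
document.write('<pre>' + JSON.stringify(myBigArray, null, 2) + '</pre>');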
Opera has scrollable alerts, which are very useful for development.
EDIT: Tested successfully with messages of 500,000 lines. You can also copy text from the alert.
Post the array to the server (json/hidden field normal form post), and use your server-side language to save that array dump to a file.
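A sketch of that idea; the /debug-dump endpoint is hypothetical and would need server-side code to write the payload to a file:

// POST the array as JSON to a hypothetical endpoint that saves it to disk.
var xhr = new XMLHttpRequest();
xhr.open('POST', '/debug-dump');
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify(myBigArray));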
If you are using IE, you could try copying a string representation of the array to the clipboard.
There are some libraries that can help with writing to files. You can use ActiveX, but that ties your debugging to Internet Explorer, and it's somewhat outside the JavaScript world.

Make NSXMLParser skip an Element

I'm using NSXMLParser in an iPhone app to parse HTML files for an RSS or Atom feed link.
Everything works fine until the parser finds a <script> element that contains JavaScript code without a CDATA declaration, which causes a parse error.
Is it possible to tell the parser to skip all elements named <script>?
Why not just implement parser:parseErrorOccurred: and tell it to fail gracefully? I don't believe there's a way to say "skip this element".
It's not possible, to my knowledge, to just skip an element. However, you may be able to use a regex replacement to filter out the invalid content.
Another possibility would be to use Tidy to try to clean it up before parsing.
