How do I check in javascript if the html page was valid?

Is there a way to query the browser to find out whether the page that loaded the JavaScript was valid, at least as far as the browser is concerned? Obviously the browser doesn't so much validate the page as interpret it for display. Is there a way to query the list of errors and warnings the browser generated while processing the HTML?
This would be a neat way to generate warnings in Selenium about the syntax of the page.

Out of sheer necessity, browsers do not actually validate HTML in any way; they only parse it.
If you want to know whether the browser had any issues parsing it, you can take a stringified version of the original HTML and compare it to a stringified version of the HTML after the browser has parsed it.
If the browser encountered any parsing issues (no matter how small), it will have edited your HTML source so that the DOM tree could be generated properly.
Note, though, that even this method is not foolproof, because the browser only fixes problems it can recognise; for example, an unknown HTML tag has no effect as far as the browser is concerned when it parses your HTML.
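A minimal sketch of that comparison, assuming the page can simply be re-fetched from its own URL and that no scripts have modified the DOM since it loaded (both real caveats in practice):
// Re-fetch the raw source of the current page and compare it (loosely)
// with the browser's serialisation of what it actually parsed.
var xhr = new XMLHttpRequest();
xhr.open('GET', window.location.href, true);
xhr.onload = function () {
    var original = xhr.responseText;
    // What the browser built from that source, re-serialised (assumes an HTML5 doctype).
    var parsed = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;
    // Collapse whitespace so harmless formatting differences are ignored.
    var normalise = function (s) { return s.replace(/\s+/g, ' ').trim(); };
    if (normalise(original) !== normalise(parsed)) {
        console.warn('Parsed DOM differs from the original source; the parser may have repaired the markup.');
    }
};
xhr.send();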

You could AJAX in the W3C validator and interpret the results. Something like this:
jQuery(function ($) {
    // URL-encode the current page's address, send it to the W3C validator,
    // and load the validator's #results block into a container on this page.
    var yourURL = window.location.href;
    var urlencode = encodeURIComponent(yourURL);
    $('#your-results-container').load('http://validator.w3.org/check?uri=' + urlencode + ' #results');
});
This would take your current URL, urlencode it, run it through the W3C validator and load the results div of that page into your div "your-results-container". You could then parse through it and do whatever you wanted with IDs or classes of errors or warnings.
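A rough sketch of that post-processing step; the class names used to pick out errors and warnings are hypothetical, so inspect the validator's actual result markup before relying on them:
$('#your-results-container').load(
    'http://validator.w3.org/check?uri=' + urlencode + ' #results',
    function () {
        // '.msg_err' and '.msg_warn' are placeholder selectors, not the
        // validator's documented markup - adjust after inspecting the output.
        var errors = $(this).find('.msg_err').length;
        var warnings = $(this).find('.msg_warn').length;
        console.log('Validator reported ' + errors + ' errors and ' + warnings + ' warnings.');
    }
);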

Related

How can I use JSON.parse in a bookmarklet in IE9 if the page is not in standards mode?

I'm creating a bookmarklet that will accept a JSON string through a prompt like so:
prompt("Enter your JSON", "");
Once they enter their JSON string, I convert it into an object using JSON.parse. Unfortunately, as far as I can tell, JSON.parse is not available to me here in IE9. The aim of my bookmarklet is to loop through the available <input> elements on the page and fill their values with data from my JSON object.
What can I do in order to get this to work in IE9?
One possible lead: this answer, https://stackoverflow.com/a/7146404/556079, suggests that JSON.parse will work if my document is in standards mode. Since this is a bookmarklet, I have no control over the doctype of the page. Would it be possible to change the doctype using my bookmarklet?
I would prefer to avoid loading libraries in my bookmarklet like http://bestiejs.github.io/json3. If this can be done without resorting to that, that would be ideal.
Alternatively, the input doesn't even need to be JSON. I just need to get some user-submitted data into a loop where I can reference anywhere between 2 and 50 values.
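For reference, a minimal sketch of the flow described above (prompt, parse, fill inputs). It assumes JSON.parse is actually available, which is the very thing in question here, and it assumes the JSON keys match the input names, purely for illustration:
javascript:(function () {
    var text = prompt("Enter your JSON", "");
    if (!text) { return; }
    var data = JSON.parse(text); // the call that fails in quirks mode
    var inputs = document.getElementsByTagName('input');
    for (var i = 0; i < inputs.length; i++) {
        var name = inputs[i].name;
        // Assumed convention: each JSON key matches an input's name attribute.
        if (name && data.hasOwnProperty(name)) {
            inputs[i].value = data[name];
        }
    }
})();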

using decodeURIComponent within asp.net

I encoded an HTML text property using JavaScript and passed it into my database like that.
I mean, for a string like "Wales&PALS", the JavaScript
encodeURIComponent(e.value);
converted it to "Wales%26PALS".
I want to convert it back to "Wales&PALS" from ASP.NET. Any idea how to embed the equivalent of
decodeURIComponent(datatablevalues)
in my ASP.NET function to return the desired text?
To prevent SQL injection we use parameterized queries or stored procedures; encoding isn't really the right tool for that. HTML encoding is useful if you expect your users to add content to your website and you want to stop them injecting malicious JavaScript, for instance: because the string is encoded, the browser just prints out its contents. What you're doing is encoding the string, adding it to the database, then decoding it back to its original state and displaying it to clients. That way you're vulnerable to many kinds of JavaScript injection.
If that's what you intended, no problem, just be aware of the consequences. Know why and how every time you make a decision like this; it's somewhat dangerous.
For instance, if you wanted to let your users add HTML tags as a means of enhancing the inserted content, a more secure alternative would be to create your own set of tags (or use an existing one like BBCode), so the input never contains any real HTML markup; when you insert it into the database, simply parse it first to switch to real HTML tags. The ASP.NET engine will never allow malicious input during a request (unless you explicitly force it to), and because you already control the parsing of the input, you can be sure it's safe when you output it, so no additional processing is needed.
Just an idea for you :)
If you really insist on doing it your way (encode -> db -> decode -> output), there are a few options. I'll show you one example:
You could create a new get-only property that returns the decoded data (you still keep the original encoded data if you need it). Something like this:
public string DecodedData
{
    get
    {
        // HttpUtility lives in the System.Web namespace
        return HttpUtility.UrlDecode(originalData);
    }
}
http://msdn.microsoft.com/en-us/library/system.web.httputility.aspx
If you're trying to encode HTML input, maybe you'd be better off with a different encoding mechanism; I'm not sure JavaScript's encodeURIComponent handles HTML sensibly.
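For what it's worth, here is what encodeURIComponent actually produces for strings containing "&" and HTML markup (try it in any browser console):
// encodeURIComponent percent-encodes everything except A-Z a-z 0-9 - _ . ! ~ * ' ( )
console.log(encodeURIComponent("Wales&PALS"));       // "Wales%26PALS"
console.log(encodeURIComponent("<b>bold</b> text")); // "%3Cb%3Ebold%3C%2Fb%3E%20text"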
Try UrlDecode in HttpServerUtility (see its API reference page).

Parsing very large JSON strings in IE causing problems

I'm parsing a 2 MB JSON string in IE8. The JSON.parse call takes a little while to return, and IE8 shows a message asking the user if they want to abort the script.
Is there any way I can suppress this message? (or somehow speed up JSON.Parse)
I know about Microsoft KB175500, however this is not suitable as my target users will not have administrator access to make the registry modifications on their SOE machines.
I had this same question. Apparently there is no way to suppress the message, but there are tricks to make IE think the script is still responsive by using an asynchronous iteration pattern (the link originally given for this is now dead).
This comes from an answer to one of my questions:
loop is too slow for IE7/8
If the browser is unhappy with how long the JSON parser is taking, there are only four choices here that I know of:
1. Get a faster JSON parser that doesn't take so long.
2. Break your JSON data up into smaller pieces so you are only parsing smaller pieces at once.
3. Modify a JSON parser to work in chunks, so it can parse part of the data in one chunk, then on a short timeout parse the next chunk, and so on (a rough sketch of this pattern follows this list). This will prevent the browser prompt, but it is probably a lot of work to write or modify a JSON parser that works this way.
4. If you can be sure the content is safe, you could see whether using eval instead of a JSON parser works around the issue.
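A minimal sketch of that chunked, asynchronous pattern. It sidesteps the hardest part by assuming the payload can be split into independently parseable pieces (say, one JSON string per record), rather than modifying the parser itself:
// Parse the data in small slices, yielding to the browser between slices
// so IE does not decide the script has been running too long.
function parseInChunks(chunks, onDone) {
    var results = [];
    var i = 0;
    function step() {
        var stop = Math.min(i + 50, chunks.length); // ~50 records per slice
        while (i < stop) {
            results.push(JSON.parse(chunks[i]));
            i++;
        }
        if (i < chunks.length) {
            setTimeout(step, 0); // give control back to the browser briefly
        } else {
            onDone(results);
        }
    }
    step();
}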

Opera User-JS: how do I get the raw server response?

I'm writing some user-JS for Opera. It reacts to a request that doesn't have an extension, e.g. /stuff/code/MyFile, or has one not related to JavaScript, e.g. /stuff/code/load.do. The content-type of the response is set to text/html even though it returns pure JavaScript source (text/javascript). As I don't have access to the server code, I simply have to live with this.
The problem now is that I want to format the source with line numbers and such and display it inside Opera. Therefore, I wrote some user-JS to react on AfterEvent.DOMContentLoaded (also tried AfterEvent.load, same thing). It reads e.event.target.body.innerHTML to gain access to the body, i.e. the JavaScript-code.
That alone would work nicely if only the source didn't contain HTML tags or comparison operators (<, >). Since it does, I never get the output I want. Opera seems to have some internal logic for converting the text/html response into its own representation format. This means, for example, that a CRLF after an HTML tag is removed, and that code between two "matching" < and > (comparison operators!) is crunched together into a single line with ="" appended after each word in between.
And that's where the problem is.
If I request the same URL without my user-JS and then look at the source of the "page" I see a clean JavaScript-code identical to what the server sent out. And this is what I want to get access to.
If I use innerText instead of innerHTML, Opera strips out the HTML-tags making the file different to the original, too.
I also tried to look at outerHTML, outerText and textContent, but they all have the same problems.
I know that Opera doesn't do anything wrong here. The server says it's a text/html and Opera simply does what it usually does with a text/html-kind of response.
Therefore, my question is: is there any way to get the untouched response with a user-JS?
There isn't any way to access the pre-parsed markup from JS. The only way to do that would be to use XMLHttpRequest to request the content yourself.
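A minimal sketch of that approach: re-request the same URL from the user-JS and read the untouched bytes from responseText; formatting the source with line numbers and such is then up to you:
// Re-fetch the current document so we get the raw server response
// instead of Opera's parsed and normalised DOM representation.
var xhr = new XMLHttpRequest();
xhr.open('GET', window.location.href, true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        var rawSource = xhr.responseText; // exactly what the server sent
        // ...format with line numbers, highlighting, etc.
    }
};
xhr.send(null);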

XML and Javascript: the right tool for the job?

For years I've been reading about XML and I have just not quite grokked it. Most documents I see about it simply explain the syntax (extraordinarily easy to understand) and say that it's portable: I've worked with Unix my whole life so the idea of putting things in plain text to be portable is hardly revolutionary. My specific question is that I have a document (my CV) that I would like to present to web visitors in several formats: as a webpage, as a pdf, or even as plain text. Is XML and Javascript the right approach to take?
What I need is for the document to be easily editable, conversion easy and just easy general upkeep. For example, when I publish a paper, I'd like to take less than five minutes to add the info and then have everything go automatically from there.
Give me your opinions: I also use LaTeX compulsively, so my current approach has been to keep my CV in LaTeX and convert it to a web page using LaTeXML. However, with everybody jumping up and down about XML and JavaScript, I have the feeling there might be something good to learn there.
I would also like to simplify maintaining my homepage by not duplicating the same footer for every single page that I set up.
Thanks,
Joel
Edit: I'll also take any book recommendations!
I think this is a slight misunderstanding of the combination of JavaScript and XML.
XML, in and of itself, is an excellent means of representing data. It's largely human-readable and easily parsed with libraries in nearly every programming language. That is the main benefit of XML.
Using XML with JavaScript is certainly a solution, but I think it's a matter of the question you're asking. JavaScript can parse XML, and allow you to obtain and manipulate data from your XML document. If you want to grab data from a server without reloading your HTML page (synchronously or asynchronously), then using JavaScript and XML is a valid way to do that.
If you want to, however, display your XML as a webpage, you would likely be better off using XML and XSLT [wikipedia], or perhaps PHP and XPath, to transform the document into browser-readable HTML. On the other hand, you could use nearly any language to convert the XML to a plain-text file, rich text file, or store it in a normalized database.
To sum up, XML is a great way to store data, because it can be used in so many different ways, and by so many different languages. It's an answer to many different questions; you just have to figure out which questions you're asking.
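As a small illustration of the "JavaScript can parse XML" point above, browsers expose DOMParser for exactly this; the tiny CV document below is invented for the example:
// Parse an XML string in the browser and read values out of it.
var xmlText =
    '<cv>' +
    '  <name>Joel</name>' +
    '  <paper year="2012">Some publication title</paper>' +
    '</cv>';
var doc = new DOMParser().parseFromString(xmlText, 'application/xml');
var papers = doc.getElementsByTagName('paper');
for (var i = 0; i < papers.length; i++) {
    console.log(papers[i].getAttribute('year') + ': ' + papers[i].textContent);
}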
To elaborate on my comment
The transformation to whatever output you desire depends on how you store your CV on your server and whether you have the option of processing it there. If you store it as XML, you can transform it to the desired (binary) output using server-based tools; for PHP, that would be PDF and Word (on a Windows server platform), for example. XML is interesting from a markup point of view because it makes it clear where the table of contents, headers, lists of experience and so on are to be found.
JavaScript cannot transform anything into PDF or Word; that has to be done on the server. What JavaScript can do is fetch text from the server as XML or JSON using AJAX and turn it into what the user sees on the screen. For XML this can also be done with XSL(T). If, for self-education purposes, you want to use JavaScript, JSON is very nice since it is, in my opinion, more readable than XML, and it gives you a populated JavaScript object with the least work.
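A small sketch of that AJAX-plus-JSON route; the /cv.json URL, its fields and the target element are all invented for illustration:
// Fetch the CV data as JSON and render a simple publication list from it.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/cv.json', true); // hypothetical endpoint
xhr.onload = function () {
    var cv = JSON.parse(xhr.responseText);
    var list = document.getElementById('publications'); // assumed container element
    for (var i = 0; i < cv.papers.length; i++) {
        var li = document.createElement('li');
        li.textContent = cv.papers[i].year + ': ' + cv.papers[i].title;
        list.appendChild(li);
    }
};
xhr.send();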
Footer in JavaScript: in the page, have
<script type="text/javascript" src="footer.js"></script>
and in footer.js you can, for example, do
var footerText = 'Here goes whatever you want';
document.write(footerText);
Comparison between XML and JSON
I've had a web page with browser-side XSLT transformation up and running for years. It's a playground, with only a few words of German. See how easy it is to build something like this at heese.net/test. You can switch between "Beispiel" (= demo) and XSL. The source code of the page in the iframe is the XML. You can do this server-side with 3 lines of PHP code.
On JavaScript: you can use it alongside XSLT, and I show this on my site, but they can't interact. First the XSLT builds an HTML page out of your XML data, and only after that job is completely done does the JavaScript in the resulting HTML document begin to work.
Parsing XML with Javascript is a different task.
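For completeness, a browser-side sketch of driving such an XSLT transformation from JavaScript with the standard XSLTProcessor API; the file names are placeholders, and the files are assumed to be served with an XML content type:
// Load an XML document and an XSL stylesheet, transform, and show the result.
function loadXml(url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, false); // synchronous, for brevity only
    xhr.send(null);
    return xhr.responseXML;
}
var xml = loadXml('cv.xml'); // placeholder file names
var xsl = loadXml('cv.xsl');
var processor = new XSLTProcessor();
processor.importStylesheet(xsl);
var fragment = processor.transformToFragment(xml, document);
document.body.appendChild(fragment);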
