MDN explains how to use the window.screen object, but also says "DOM Level 0. Not part of specification."
W3Schools says that window.screen.* properties are supported in all major browsers.
If I understand this correctly... window.screen is completely non-standard, but is nonetheless universally supported. Is that right?
If this is the case, are there any cross-browser differences I need to be aware of, or can I just use it? I'm mostly interested in screen.availWidth, by the way.
Quirksmode compatibility tables to the rescue!
http://www.quirksmode.org/dom/w3c_cssom.html#screenview
Most, but not all, values are supported by the major browsers.
You should be fine with it.
The reason it is not part of a standard is that DOM Level 0 was introduced before standards were around. DOM Level 0 is also called the Legacy DOM, and it was created at the same time Netscape 2.0 made JavaScript in the browser a reality; in effect, DOM Level 0 was the de facto DOM before any formal spec existed.
The Legacy DOM will be around for a long time; removing it would break backward compatibility with a TON of very popular scripts already in existence.
EDIT: In other words, your understanding is completely correct. It is not "standardized" but it is completely universal and will remain so for a long time.
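For what it's worth, a minimal usage sketch (the 1024 fallback is just an arbitrary value for illustration):

// screen.availWidth reports the screen width minus OS chrome such as taskbars
var availWidth = (window.screen && window.screen.availWidth) || 1024; // arbitrary fallback
console.log("Available screen width: " + availWidth + "px");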
Case: e is of type HTMLElement and not a CSS selector.
I am talking about any attribute, not just standard allowed ones: atom-type or data-atom-type, whatever the name may be. Will it work without jQuery?
I suspect $(e).attr(name,value) is too slow. First of all, it creates an entire jQuery object ($(e) !== $(e), i.e. the two objects are not the same; jsPerf: http://jsperf.com/jquery-attr-vs-native-setattribute/28), then it runs certain checks, and only then sets the value, whereas most browsers support e.setAttribute directly.
Is there any problem with replacing $(e).attr(name,value) with e.setAttribute(name,value)?
IE8 supports setAttribute as per the MSDN documentation. Is there any mobile browser, or any browser at all, that does not support it?
Eventually I want to improve the performance of my JavaScript framework; initially we used jQuery extensively for its cross-browser DOM features.
We have now understood that unless you are using a CSS selector, most functions such as attr, val, and text are better called through their direct DOM counterparts when you have an instance of HTMLElement.
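For concreteness, this is the kind of substitution I mean (data-atom-type is just the made-up custom attribute from above):

// current approach, via jQuery
$(e).attr("data-atom-type", "container");
// proposed approach, assuming e is an HTMLElement
e.setAttribute("data-atom-type", "container");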
I suspect $(e).attr(name,value) is too slow: first of all it creates an entire jQuery object, then it runs certain checks, and only then sets the value, whereas most browsers support e.setAttribute directly.
If you measure it, you'll find that the difference in performance is large-ish in relative terms, but minuscule in absolute terms, and it's absolute terms we normally care about. It just doesn't matter in 99.999999% of cases. If you run into a specific performance problem, and trace it to using jQuery, then consider optimizing at that point.
What is the benefit of $(e).attr(name,value) vs. e.setAttribute(name,value)?
In the specific case you mention, where e is an HTMLElement, there are only a couple of benefits:
There are a couple of IE-specific bugs in setAttribute that jQuery works around for you
There are some "attributes" people set when they really should be setting a property, for instance checked or disabled; jQuery maps those (this is mostly a legacy feature these days, as people should be using prop)
It does some pre-processing on boolean values for you, letting you use $(e).attr("checked", true) when true really should be "checked"
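A rough sketch of that difference in practice (illustrative only, not jQuery's actual internals):

// jQuery normalizes the boolean for you
$(e).attr("checked", true);           // ends up as checked="checked"
// with the raw DOM you must spell out the attribute value yourself
e.setAttribute("checked", "checked");
// and for the live state you usually want the property (or $(e).prop)
e.checked = true;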
IE8 supports setAttribute as per MSDN documentation. Is there any mobile browser or any browser which will not support it?
All browsers support setAttribute. As I mentioned earlier, various versions of IE have had various bugs in it, but it's there and mostly works.
I often hear about "DOM level 1", "DOM level 2", "DOM level 3" and "DOM level 4" and realized that I don't know the difference between any of them or how they relate to each other.
I know the very basics: the DOM is the Document Object Model, and it is what lets scripting languages (particularly, though as far as I know not limited to, various versions of ECMAScript, such as ECMAScript 5.1) access the elements of an HTML document. (Some sites I read, such as the DOM introduction on quirksmode, say that it's for any XML document, with HTML effectively being a subset.)
The dates on w3c's DOM technical reports seem to imply that each subsequent DOM level supersedes the previous ones.
Sadly, the best reference I've found to provide clarification has been Wikipedia, which seems to say the same: the Standardization section says subsequent levels "added" extra functionality, while not mentioning removing anything.
Now, for my questions, which may be rapid fire, but hopefully express the general state of my ignorance:
What's the relation of one DOM level to another?
Are lower level DOMs complete subsets of higher level DOMs? Has any functionality been removed as the DOM level advances? When I see statements like The level 1 DOM will work fine on an HTML document and In the Level 1 DOM, each object, whatever it may be exactly, is a Node (both from the quirksmode intro), does this imply that such statements are true for levels 2, 3 and 4? (These are all kind of the same question, just asked different ways)
Is citing a DOM level really little more than a shorthand for how modern a user agent must be for a particular function to work?
Obviously, I can study each specification off of the w3c's DOM technical reports, but was hoping to get answers from those with first-hand experience. Just by glancing at the changes section of the spec for DOM level 3, I see that most of the changes from 2 to 3 were additions, though some of the key implementations in the Node interface have changed. Did these changes break anything?
I'd like to do more than just nod sagely next time someone tells me, "Oh, that's DOM level 2, so it's ok," so would welcome any references I have missed or firsthand information that I didn't glean from my research.
First, I'll relate a message from MDN's writeup of DOM levels (emphasis in original):
The DOM used to be written as a set of levels. That is no longer the case. These days it is maintained as the DOM Living Standard. This page provides an historical overview of the olden days.
This is confirmed in a W3C document called "W3C DOM4". We might take that to mean "DOM Level 4", and assume it adds an additional DOM level, but the text of the specification actually says:
This document is published as a snapshot of the DOM Living Specification.
So, this is a historical discussion, but still one worth having.
A "DOM Level" was a collection of specifications that described DOM objects, methods, and behaviors. Higher levels of the DOM specification built on the previous levels. Changes happened in two ways:
The addition of a totally new specification category (e.g., Level 3 adds "Validation" and "Load and Save" specifications, which did not exist in Level 2)
The modification of an existing specification category (e.g. updating the "Core" spec)
Obviously, the first type of change was purely additive, rather than subtractive. The second kind of change also seems to have been nearly exclusively additive, probably because the W3C was interested in preserving backward compatibility with previous versions.
Changes that are not backward-compatible tend to be rare and fairly minor. The Document.doctype change you cite, for example, was actually largely additive. Level 3 added the sentence:
For HTML documents, a DocumentType object may be returned, independently of the presence or absence of document type declaration in the HTML document.
This simply gave greater flexibility to allow DOM implementations to add in a doctype in HTML when the author omitted a <!DOCTYPE>. The only functionality this would break is the ability to programmatically detect the presence of an author-specified doctype, which doesn't seem to be particularly valuable.
Probably the reason you've heard someone say, "Oh, that's DOM level 2, so it's ok," is because DOM level 2 is more widely supported than DOM level 3. In some cases, this isn't even a question of old browser support: Firefox marked their lack of support for the DOM 3's "Load and Save" specification as WONTFIX. All Level 2 specifications, by contrast, are supported pretty well by modern browsers, and enjoy support from much older browsers (since Level 2 is four years older than level 3).
Just a couple of notes on DOM4 to add to apsillers' answer:
... In the Level 1 DOM, each object, whatever it may be exactly, is a Node ..., does this imply that such statements are true for levels 2, 3 and 4?
That's a definite no. Attributes in DOM4 are not Nodes.
DOM4 makes a number of significant non-backward-compatible changes. The attributes-are-not-Nodes change is a big one if you're not using JavaScript or another duck-typed language. Also, document.createElement() on an XML document will create the element in the http://www.w3.org/1999/xhtml namespace, where earlier levels create the element in no namespace. Browsers have long done this, but typical XML-oriented DOM implementations have used the DOM3-and-earlier way. That's a big shift if you migrate from a DOM3 implementation to a DOM4 one in a non-browser context.
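A quick way to observe the namespace behaviour from a browser console (a sketch; browsers follow the DOM4 behaviour described above, while a DOM3-style XML implementation would typically report no namespace):

var el = document.createElement("div");
// in browsers (DOM4 behaviour) the element lands in the XHTML namespace
console.log(el.namespaceURI); // "http://www.w3.org/1999/xhtml"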
When getting an element's style, we always use
if (document.defaultView && document.defaultView.getComputedStyle) to check whether the browser supports the method or not.
Why not use if (window.getComputedStyle)?
So in short, the reason we use document.defaultView && document.defaultView.getComputedStyle is that we want a cross-browser, works-on-every-element way of checking whether fetching computed styles is supported.
Simple if(window.getComputedStyle) would fail for iframes in Firefox 3.6 (according to article linked in comment by Alex K.).
According to MDN, defaultView is no longer required:
In many code samples, getComputedStyle is used from the document.defaultView object. In nearly all cases, this is needless, as getComputedStyle exists on the window object as well. It's likely the defaultView pattern was a combination of folks not wanting to write a testing spec for window and making an API that was also usable in Java.
There was a bug in Firefox 3.6 (2010/2011) that required the defaultView check.
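So in modern code something like this is usually enough (a sketch; "myElement" is a made-up id, and the feature test is only a safety net for the legacy cases discussed above):

var el = document.getElementById("myElement");
if (window.getComputedStyle) {
    // getComputedStyle lives on window in all current browsers;
    // document.defaultView was only needed for edge cases like old Firefox iframes
    console.log(window.getComputedStyle(el, null).width);
}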
There are lots of DOM/CSS inconsistencies between browsers. But how many core JS differences are there between browsers? One that recently tripped me up is that in Firefox, setTimeout callback functions get passed an extra parameter (https://developer.mozilla.org/en/window.setTimeout).
Also, now that browsers are implementing new functions (e.g. Array.map), it can get confusing to know what you can/can't use if you are trying to write code that must work on all browsers (even back to IE6).
Is there a website that cleanly organizes these types of differences?
I find QuirksMode and WebDevout to have the best tables regarding CSS and DOM quirks. You can bridge those incompatibilities with jQuery. There is also this great list started by Paul Irish which includes pretty much any polyfill you could ever need, including ones for ES5 methods such as Array.map.
There doesn't appear to be anything out there that clearly outlines all these issues (very surprising, actually). If you use jQuery, there is a nice browser compatibility section in the docs that outlines supported browsers and known issues. I just deal with issues as they come up (you should be browser-testing in all cases anyway), and you can document them if you want to make sure you are coding correctly or if you run into issues and need to know the fixes. It's easy to find issues when you do a quick search on a particular topic.
Well, I'm going to open up a CW:
Prior to Firefox 4, Function.apply only accepted an Array, not an array-like object. Ref MDC: Function.apply
Some engines (which ones?) promote this inside String.prototype methods from a primitive string to a String object. Ref a String.prototype's "this" doesn't return a string?
Firefox 4 may insert "event loops" into seemingly synchronous code. Ref Asynchronous timer event running synchronously ("buggy") in Firefox 4?
Earlier Firefox versions would accept a trailing , in object literals. Ref trailing comma problem, javascript (Seems "fixed" in FF6).
Firefox and IE both treat function-expression productions incorrectly (but differently).
Math.round / num.toFixed inconsistencies. Ref Math.round(num) vs num.toFixed(0) and browser inconsistencies
The IE vs. W3C Event Model -- both are missing events/features of the other.
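To illustrate the setTimeout point from the question, the non-standard extra argument Firefox passes can be observed with a sketch like this (other browsers simply pass nothing, so the parameter is undefined):

setTimeout(function (lateness) {
    // In Firefox, "lateness" is how many milliseconds late the callback fired;
    // elsewhere no extra argument is passed, so it is undefined.
    console.log("extra argument:", lateness);
}, 100);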
The web browser DOM has been around since the late '90s, but it remains one of the largest constraints in performance/speed.
We have some of the world's most brilliant minds from Google, Mozilla, Microsoft, Opera, W3C, and various other organizations working on web technologies for all of us, so obviously this isn't a simple "Oh, we didn't optimize it" issue.
My question is: if I were to work on the part of a web browser that deals specifically with this, why would I have such a hard time making it run faster?
My question is not asking what makes it slow, it's asking why hasn't it become faster?
This seems to be against the grain of what's going on elsewhere, such as JS engines with performance near that of C++ code.
Example of quick script:
var someString;
for (var i = 0; i <= 10000; i++) {
    someString = "foo";
}
Example of slow because of DOM:
// element is assumed to be an existing DOM node
for (var i = 0; i <= 10000; i++) {
    element.innerHTML = "foo";
}
Some details as per request:
After benchmarking, it looks like it's not an unsolvable slow issue, but often the wrong tool is used, and the tool used depends on what you're doing cross-browser.
It looks like DOM efficiency varies greatly between browsers, but my original presumption that the DOM is slow and unsolvable seems to be wrong.
I ran tests against Chrome, FF4, and IE 5-9; you can see the operations per second in this chart:
Chrome is lightning fast when you use the DOM API, but vastly slower using the .innerHTML property (roughly 1000-fold slower). FF, however, is worse than Chrome in some areas (for instance, its append test is much slower than Chrome's), but its innerHTML test runs much faster than Chrome's.
IE seems to actually be getting worse at DOM append and better at innerHTML as you progress through versions since 5.5 (i.e., 73 ops/sec in IE8 down to 51 ops/sec in IE9).
I have the test page over here:
http://jsperf.com/browser-dom-speed-tests2
What's interesting is that different browsers seem to face quite different challenges when generating the DOM. Why is there such disparity here?
When you change something in the DOM it can have myriad side-effects to do with recalculating layouts, style sheets etc.
This isn't the only reason: when you set element.innerHTML=x you are no longer dealing with ordinary "store a value here" variables, but with special objects which update a load of internal state in the browser when you set them.
The full implications of element.innerHTML=x are enormous. Rough overview:
parse x as HTML
ask browser extensions for permission
destroy existing child nodes of element
create child nodes
recompute styles which are defined in terms of parent-child relationships
recompute physical dimensions of page elements
notify browser extensions of the change
update Javascript variables which are handles to real DOM nodes
All these updates have to go through an API which bridges Javascript and the HTML engine. One reason that Javascript is so fast these days is that we compile it to some faster language or even machine code, and masses of optimisations happen because the behaviour of the values is well-defined. When working through the DOM API, none of this is possible. Speedups elsewhere have left the DOM behind.
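That bridging cost is why the usual advice is to touch the DOM as few times as possible; here is a sketch of the question's slow loop rewritten that way (element is assumed to be an existing DOM node):

var html = "";
for (var i = 0; i <= 10000; i++) {
    html += "foo";            // cheap: plain string work, no DOM involved
}
element.innerHTML = html;     // the expensive DOM update happens exactly once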
Firstly, anything you do to the DOM could be a user-visible change. If you change the DOM, the browser has to lay everything out again. It could be faster if the browser cached the changes and then only laid everything out every X ms (assuming it doesn't do this already), but perhaps there's not a huge demand for this kind of feature.
Second, innerHTML isn't a simple operation. It's a dirty hack that MS pushed, and other browsers adopted because it's so useful; but it's not part of the standard (IIRC). Using innerHTML, the browser has to parse the string, and convert it to a DOM. Parsing is hard.
Original test author is Hixie (http://nontroppo.org/timer/Hixie_DOM.html).
This issue has been discussed on Stack Overflow here and on Connect (bug tracker) as well. With IE10, the issue is resolved. By resolved, I mean they have partially moved on to another way of updating the DOM.
The IE team seems to handle DOM updates much like the Excel-macros team at Microsoft, where it's considered poor practice to update the live cells on the sheet. You, the developer, are supposed to take the heavy-lifting task offline and then update the live sheet in a batch. In IE you are supposed to do that using a document fragment (as opposed to the document). With newer emerging ECMA and W3C standards, document fragments are deprecated. So the IE team has done some work to contain the issue.
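A rough sketch of that offline-then-batch pattern with a DocumentFragment (illustrative only, not the IE team's actual code):

var frag = document.createDocumentFragment();
for (var i = 0; i < 1000; i++) {
    var div = document.createElement("div");
    div.textContent = "foo";
    frag.appendChild(div);        // work happens off-document, no reflow yet
}
document.body.appendChild(frag);  // one insertion, one layout pass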
It took them a few weeks to strip it down from ~42,000 ms in the IE10 Consumer Preview to ~600 ms in IE10 RTM. But it took a lot of arm twisting to convince them that this IS an issue. Their claim was that there is no real-world example which has 10,000 updates per element. Since the scope and nature of rich internet applications (RIAs) can't be predicted, it's vital to have performance close to the other browsers of the league. Here is another take on the DOM by the OP on MS Connect (in the comments):
When I browse to http://nontroppo.org/timer/Hixie_DOM.html, it takes ~680ms and if I save the page and run it locally, it takes ~350ms! Same thing happens if I use button-onclick event to run the script (instead of body-onload). Compare these two versions:
jsfiddle.net/uAySs/ <-- body onload
vs.
jsfiddle.net/8Kagz/ <-- button onclick
Almost 2x difference..
Apparently, the underlying behavior of onload and onclick varies as well. It may get even better in future updates.
Actually, innerHTML is less slow than createElement.
In an effort to optimize, I found that JS can parse enormous JSON effortlessly. JSON parsers can handle a huge number of nested function calls without issues. One can toggle thousands of elements between display:none and display:block without issues.
But if you try to create a few thousand elements (or even if you simply clone them), performance is terrible. You don't even have to add them to the document!
Then, once they are created, inserting them into and removing them from the page works super fast again.
It looks to me like the slowness has little to do with their relation to other elements of the page.
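A rough way to reproduce that observation (a sketch using console.time; absolute numbers will vary a lot by browser):

console.time("create");
var nodes = [];
for (var i = 0; i < 5000; i++) {
    nodes.push(document.createElement("div"));  // never touches the document
}
console.timeEnd("create");   // per the observation above, creation tends to dominate

console.time("insert");
for (var j = 0; j < nodes.length; j++) {
    document.body.appendChild(nodes[j]);
}
console.timeEnd("insert");   // insertion of already-created nodes is comparatively fast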