Why is 'ontouchstart' in window "supported by most browsers"? - javascript

I'm in the process of refactoring some code that someone else wrote. There is a function that uses:
!!('ontouchstart' in window)
I've seen this used in other projects: https://github.com/Modernizr/Modernizr/blob/master/feature-detects/touchevents.js#L40
And in a Stackoverflow answer: https://stackoverflow.com/a/4819886/1127635
But it seems like it could be slower than alternatives: http://jsperf.com/hasownproperty-vs-in-vs-undefined/12
So why use this possibly slower alternative? What browsers don't support other solutions?

Both of your alternative tests are flawed in some way:
window.ontouchstart !== null tests for a non-null listener. Testing the value of ontouchstart is risky because libraries or other code might have changed that value before your check runs; it would be much better to test for the existence of the property itself, which brings us to your next proposed test...
window.hasOwnProperty('ontouchstart') tests if the window object has its own ontouchstart property. In some browsers (I've just confirmed this on Chrome 37 and IE9), window doesn't have its own on-event properties; instead, they are properties of window.__proto__.
We shouldn't test for a value (because earlier code may have changed the value before ours runs), and we can't test for window's own property, because browsers differ in where on-event properties live in window's prototype chain. So our most consistent option is to test whether the property exists (regardless of its value) anywhere in window's prototype chain, which is exactly what the in operator does.
Of course, if someone else's code runs before our test, it could add an ontouchstart property where there originally wasn't one. Testing support for events with absolute rigour simply isn't possible, and it's an awful business.
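For reference, here is a minimal sketch contrasting the three checks discussed above; the comments summarise this answer's observations rather than guarantees for every browser:
// 1. Value test: fragile, since other code may have assigned to window.ontouchstart.
var byValue = window.ontouchstart !== null;
// 2. Own-property test: fragile, since in some browsers the on-event properties
//    live on window's prototype rather than on window itself.
var byOwnProperty = window.hasOwnProperty('ontouchstart');
// 3. `in` test: looks up the whole prototype chain, regardless of value.
var byInOperator = 'ontouchstart' in window;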

Related

JavaScript: Extending Element prototype

I have seen a lot of discussion regarding extending Element. As far as I can tell, these are the main issues:
It may conflict with other libraries,
It adds undocumented features to DOM routines,
It doesn’t work with legacy IE, and
It may conflict with future changes.
Given a project which references no other libraries, documents changes, and doesn’t give a damn for historical browsers:
Is there any technical reason not to extend the Element prototype? Here is an example of how this is useful:
Element.prototype.toggleAttribute = function(attribute, value) {
    if (value === undefined) value = true;
    if (this.hasAttribute(attribute)) this.removeAttribute(attribute);
    else this.setAttribute(attribute, value);
};
I’ve seen too many comments about the evils of extending prototypes without offering a reasonable explanation.
Note 1: The above example is possibly too obvious, as toggleAttribute is the sort of method which might be added in the future. For discussion, imagine that it’s called manngoToggleAttribute.
Note 2: I have removed a test for whether the method already exists. Even if such a method already exists, it is more predictable to override it. In any case, I am assuming that the method has not yet been defined, let alone implemented. That is the point here.
Note 3: I see that there is now a standard method called toggleAttribute which doesn’t behave exactly the same. With modification, the above would be a simple polyfill. This doesn’t change the point of the question.
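For reference, here is a rough sketch of what such a guarded polyfill might look like, following the standard toggleAttribute(name, force) signature; the guard and the exact return values are my approximation of the standard behaviour, not part of the original code:
if (!('toggleAttribute' in Element.prototype)) {
    Element.prototype.toggleAttribute = function(name, force) {
        // The standard method toggles the *presence* of the attribute and
        // returns true if the attribute is present afterwards.
        if (this.hasAttribute(name)) {
            if (force === true) return true;
            this.removeAttribute(name);
            return false;
        }
        if (force === false) return false;
        this.setAttribute(name, '');
        return true;
    };
}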
Is it ok? Technically, yes. Should you extend native APIs? As a rule of thumb, no. Unfortunately, the answer is more complex than that. If you are writing a large framework like Ember or Angular it may be a good idea to do so, because your consumers will benefit from the better API convenience. But if you're only doing this for yourself, then the rule of thumb is no.
The reasoning is that doing so destabilizes the trust in that object. In other words, by adding, changing, or modifying a native object, it no longer follows the well-understood and documented behavior that anyone else (including your future self) will expect.
This style hides implementation details that can go unnoticed. What is this new method? Is it a secret browser thing? What does it do? If I find a bug, do I report it to Google or Microsoft now? That's a bit exaggerated, but the point is that the truth of the API has changed, and that is unexpected in this one-off case. Maintainability now needs extra thought and understanding that would not be required if you just used your own function or wrapper object. It also makes changes harder.
Relevant post: Extending builtin natives. Evil or not?
Instead of trying to muck with someone else's (or standard) code, just use your own:
function toggleAttribute(el, attribute, value) {
    var _value = (value == null ? true : value);
    if (el.hasAttribute(attribute)) {
        el.removeAttribute(attribute);
    } else {
        el.setAttribute(attribute, _value);
    }
}
Now it is safe, composable, portable, and maintainable. Plus, other developers (including your future self) won't scratch their heads wondering where this magical method, documented in no standard or JS API, came from.
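Usage then stays explicit at the call site; for example (the element and attribute names here are just placeholders):
var menu = document.querySelector('#menu'); // hypothetical element
toggleAttribute(menu, 'hidden');            // no attribute yet, so it sets hidden="true"
toggleAttribute(menu, 'hidden');            // attribute present, so it removes it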
Do not modify objects you don't own.
Imagine a future standard defines Element.prototype.toggleAttribute. Your code checks if it has a truthy value before assigning your function. So you could end up with the future native function, which may behave differently than what you expected.
Even more, just reading Element.prototype.toggleAttribute might call a getter, which could run some code with undesired side effects. For example, see what happens when you get Element.prototype.id.
You could skip the check and assign your function directly. But that could run a setter, with some undesired side effects, and your function wouldn't be assigned as the property.
You could use a property definition instead of a property assignment. That should be safer... unless Element.prototype has some special [[DefineOwnProperty]] internal method (e.g. is a proxy).
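For concreteness, a minimal sketch of that property-definition approach, using the manngoToggleAttribute name from Note 1 and assuming Element.prototype has no exotic [[DefineOwnProperty]] behaviour:
Object.defineProperty(Element.prototype, 'manngoToggleAttribute', {
    configurable: true,
    writable: true,
    value: function(attribute, value) {
        // Same body as the question's example.
        if (value === undefined) value = true;
        if (this.hasAttribute(attribute)) this.removeAttribute(attribute);
        else this.setAttribute(attribute, value);
    }
});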
It might fail in lots of ways. Don't do this.
In my assessment: no
Massively overwriting Element.prototype can slow down performance and can conflict with standardization, but a technical reason not to do it does not exist.
I'm using several custom Element.prototype methods.
So far so good, until I observed a weird behaviour:
<!DOCTYPE html>
<html>
<body>
    <script>
        function doThis() {
            alert('window');
        }
        HTMLElement.prototype.doThis = function() {
            alert('HTMLElement.prototype');
        };
    </script>
    <button onclick="doThis()">Do this</button>
</body>
</html>
When the button is clicked, the prototype method is executed instead of the global one.
The browser seems to resolve the call as this.doThis(), which is weird; to overcome it, I have to use window.doThis() in the onclick.
It might be better if the W3C could come up with a different syntax for calling native vs. custom methods, e.g.
myElem.toggleAttribute() // call native method
myElem->toggleAttribute() // call custom method
Is there any technical reason not to extend the Element prototype?
Absolutely none!
pardon me:
ABSOLUTELY NONE!
In addition
the .__proto__ property was practically an illegal (Mozilla) prototype extension until yesterday. Today, it's a standard.
p.s.: You should avoid the if(!Element.prototype.toggleAttribute) form by all means; if("toggleAttribute" in Element.prototype) will do.

.hasOwnProperty('getComputedStyle') false in IE 11

So I did a little work on a colour picker module adding the ability to parse human readable colours. I leveraged .getComputedStyle() to perform the conversion.
I implemented detection of the feature (should be IE 9+) with:
window.hasOwnProperty('getComputedStyle')
This is when I noticed some strange behavior. In Chrome and FF this reported true as expected. However in IE 11 (which does support it) it reported false.
I'm a little stumped as to why this is happening. I've checked its support in other ways, but I still can't work out why IE reports false whilst it does support the method.
Not sure if this is overkill, but this fiddle simply logs the result so you can see for yourself: https://jsfiddle.net/xrgrgrhe/
Don't perform feature detection in this way; browsers aren't always consistent about where certain properties and methods are defined on the prototype chain. Instead, simply access the property:
if (window.getComputedStyle) {
    /* Proceed to use window.getComputedStyle */
}
Functions are truthy, while undefined is falsy. As a result, this test will pass if the method is defined anywhere on the prototype chain, rather than only directly on the window instance object.
For what it's worth, the original test in the question also returns true in Microsoft Edge.
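For completeness, here are checks that tolerate the method living further up the prototype chain; a quick sketch, not an exhaustive list:
// Same idea as the 'ontouchstart' in window detection discussed earlier on this page.
var supported = 'getComputedStyle' in window;
// Or, if you also want to be sure it's callable:
var callable = typeof window.getComputedStyle === 'function';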

Why is checking for an attribute using dot notation before removing faster than removing the attribute outright?

I asked this question, and it turned out that when removing an attribute from an element, checking whether the attribute exists first using elem.xxx !== undefined makes the runtime faster. Proof.
Why is it quicker? There's more code to go through and you'll have to encounter the removeAttribute() method whichever way you go about this.
Well, the first thing you need to know is that elem.xxx is not the same as elem.getAttribute() or any other method relating to attributes.
elem.xxx is a property of the DOM element object, while an attribute lives on the HTML element inside the DOM; the two are similar but different. For example, take this DOM element: <a href="#"> and this code:
// Let's say var a is the <a> tag
a.getAttribute('href'); // == "#"
a.href;                 // == "http://www.something.com/#" (i.e. the complete URL)
But let's take a custom attribute: <a custom="test">
// Let's say var a is the <a> tag
a.getAttribute('custom'); // == "test"
a.custom;                 // == undefined
So you can't really compare the speed of the two, since they don't achieve the same result. But one is clearly faster, since properties are fast-access data while attributes go through the getAttribute/hasAttribute DOM functions.
Now, why is it faster without the condition? Simply because removeAttribute doesn't care whether the attribute is missing; it checks that itself.
So using hasAttribute before removeAttribute is like doing the check twice, and the extra condition is a little slower since the engine needs to evaluate it before running the code.
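As a small illustration of that point (the attribute name here is just an example):
var el = document.createElement('div');
// removeAttribute on an attribute that isn't there is simply a no-op; no error is thrown,
// so guarding it with hasAttribute repeats a check the method already performs internally.
el.removeAttribute('data-missing');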
I have a suspicion that the reason for the speed boost is trace trees.
Trace trees were first introduced by Andreas Gal and Michael Franz of the University of California, Irvine, in their paper Incremental Dynamic Code Generation with Trace Trees.
In his blog post Tracing the Web, Andreas Gal (the co-author of the paper) explains how tracing just-in-time compilers work.
To explain tracing JIT compilers as succinctly as possible (since my knowledge of the subject isn't profound), a tracing JIT compiler does the following:
Initially all the code to be run is interpreted.
A count is kept for the number of times each code path is executed (e.g. the number of times the true branch of an if statement is executed).
When the number of times a code path is taken is greater than a predefined threshold, the code path is compiled into machine code to speed up execution (e.g. I believe SpiderMonkey compiles code paths executed more than once).
Now let's take a look at your code and understand what is causing the speed boost:
Test Case 1: Check
if (elem.hasAttribute("xxx")) {
    elem.removeAttribute("xxx");
}
This code has a code path (i.e. an if statement). Remember that tracing JITs only optimize code paths and not entire functions. This is what I believe is happening:
Since the code is being benchmarked by JSPerf it's being executed more than once (an understatement). Hence it is compiled into machine code.
However it still incurs the overhead of the extra function call to hasAttribute which is not JIT compiled because it's not a part of the conditional code path (the code between the curly braces).
Hence, although the code inside the curly braces is fast, the conditional check itself is slow because it's not compiled; it is interpreted. The result is that the code is slow.
Test Case 2: Remove
elem.removeAttribute("xxx");
In this test case we don't have any conditional code paths. Hence the JIT compiler never kicks in. Thus the code is slow.
Test Case 3: Check (Dot Notation)
if (elem.xxx !== undefined) {
    elem.removeAttribute("xxx");
}
This is the same as the first test case with one significant difference:
The conditional check is a simple non-equivalence check. Hence it doesn't incur the full overhead of a function call.
Most JavaScript interpreters optimize simple equivalence checks like this by assuming a fixed data type for both operands. Since the data types of elem.xxx and undefined do not change between iterations, this optimization makes the conditional check even faster.
The result is that the conditional check (although interpreted) does not slow down the compiled code path significantly. Hence this code is the fastest.
Of course, this is just speculation on my part. I don't know the internals of any JavaScript engine, and hence my answer is not canonical. However, I think it is a good educated guess.
Your proof is incorrect...
elem.class !== undefined always evaluates to false, and thus elem.removeAttribute("class") is never called; therefore, this test will always be quicker.
The correct property on elem to use is className, e.g.:
typeof elem.className !== "undefined"
As Karl-André Gagnon pointed out, accessing a [native] JavaScript property and invoking a DOM function/property are two different operations.
Some DOM properties are exposed as JavaScript properties via the DOM IDL; these are not the same as ad-hoc JS properties and require DOM access. Also, even though the DOM properties are exposed, there is no strict relation with DOM attributes!
For instance, inputElm.value = "x" will not update the DOM attribute, even though the element will display and report an updated value. If the goal is to deal with DOM attributes, the only correct method is to use hasAttribute/setAttribute, etc.
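A quick sketch of that property/attribute divergence (the element here is created only for illustration):
var input = document.createElement('input');
input.setAttribute('value', 'initial'); // sets the DOM attribute
input.value = 'x';                      // sets the IDL/JS property
input.getAttribute('value');            // "initial" (the attribute is untouched)
input.value;                            // "x" (the property reflects the change)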
I've been working on deriving a "fair" micro-benchmark for the different function calls, but it is fairly hard and there are a lot of different optimizations that occur. Here is my best result, which I will use to argue my case.
Note that there is no if or removeAttribute to muddle up the results and I am focusing only on the DOM/JS property access. Also, I attempt to rule out the claim that the speed difference is merely due to a function call and I assign the results to avoid blatant browser optimizations. YMMV.
Observations:
Access to a JS property is fast. This is to be expected [1][2].
Calling a function can incur a higher cost than direct property access [1], but is not nearly as slow as DOM properties or DOM functions. That is, it is not merely a "function call" that makes hasAttribute so much slower.
DOM property access is slower than native JS property access; however, performance differs widely between the DOM properties and browsers. My updated micro-benchmark shows a trend that DOM access, be it via DOM property or DOM function, may be slower than native JS property access [2].
And going back to the very top: Accessing a non-DOM [JS] property on an element is fundamentally different than accessing a DOM property, much less a DOM attribute, on the same element. It is this fundamental difference, and optimizations (or lack thereof) between the approaches across browsers, that accounts for the observed performance differences.
[1] IE 10 does some clever trick where the fake function call is very fast (and I suspect the call has been elided) even though it has abysmal JS property access. However, considering IE an outlier, or merely reinforcement that the function call is not what introduces the inherently slower behavior, doesn't detract from my primary argument: it is the DOM access that is fundamentally slower.
[2] I would love to say DOM property access is slower, but Firefox does some amazing optimization of input.value (but not img.src). There is some special magic that happens here. Firefox does not optimize the DOM attribute access.
And different browsers may exhibit entirely different results... However, I don't think that one has to consider any "magic" with the if or removeAttribute to at least isolate what I believe to be the "performance issue": actually using the DOM.

Javascript - Determining support for Element.children feature

I recently faced a problem with determining browsers' support for certain DOM features. One of them was the Element.children feature, which is still causing me a headache. I have the following line in my code:
var NATIVE_CHILDREN = Element.prototype.hasOwnProperty('children');
It is supposed to check if the browser supports the Element.children feature [https://developer.mozilla.org/en/DOM/Element.children].
According to MDN and quick testing, all the major browsers support this feature.
In Firebug on Firefox, the value of NATIVE_CHILDREN is true, as expected. Surprisingly, on Chrome, Safari and Opera the value is false (unfortunately I don't have access to a Windows machine to check what IE thinks about it).
According to DOM4 - Free Editor's Draft 5 April 2012 [http://dom.spec.whatwg.org/#element], children should be part of the Element object's prototype. Apparently Chrome's, Safari's and Opera's Element object doesn't contain such a property!
I have tried checking the prototypes of HTMLCollection and Node (I also tested HTMLParagraphElement and HTMLBodyElement), but none of them seem to contain a property called 'children' (except on Firefox). How can I make my test work cross-browser? I don't want to use any external libraries for this, because this is for my own little library.
I think the reason why this test might return false on Chrome is that you're checking on the prototype. This is not the best way, for several reasons:
Different browsers can (and do) use different implementations of the prototype; some prototypes are not accessible in IE, for instance. In this case, I'd say your issue is the result of Chrome relying on the (non-standard) __proto__ property rather than prototype. I can't remember when, but I had a similar issue with Chrome, and this was the source of the problem.
AFAIK all browsers have a children property for their elements, though they behave differently in some cases, so I have some doubt as to the use of checking the existence of such a property.
If you still want to check this, why not use document.body.hasOwnProperty('children')? Returns true on FF, Chrome, Safari and IE.
That's because some engines only slap on the children attribute on element creation. A quick test in the Chrome console shows that:
Element.prototype.hasOwnProperty( 'children' ); //false
//however,
document.createElement( 'foo' ).hasOwnProperty( 'children' ); //true
//or even
!!document.createElement( 'foo' ).children; //true
Non-function properties often don't appear on the prototype, for a simple reason: they aren't set yet, and it wouldn't make sense for them to be. Element.prototype doesn't have any children, because it's not an element; it's a prototype for elements.
It is safer to check if ('children' in document.body) than to mess around with prototypes. It's important to note the quotes; without them, a variable named children might be used or created...
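For instance, a sketch of the check performed on a live element instead of the prototype (document.documentElement is used just as an element that always exists):
// 'in' walks the element's whole prototype chain, so it doesn't matter
// where the engine chose to define `children`.
var NATIVE_CHILDREN = 'children' in document.documentElement;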
According to QuirksMode, all browsers support children except Firefox 3 (which is a surprise to me, since it worked when I tested in that browser...), so there should be no need to test for this property.

typeof for html plugin elements

When using ECMAScript's typeof on plugin elements (i.e. embed or object), Safari & Firefox return "function":
typeof(window.document['myPlugin']) // "function"
This can't be influenced on the plugin side, as the browser doesn't call the plugin here. Funnily enough, in IE the same line evaluates to "object".
Is that simply implementation-dependent behaviour as per ECMAScript §11.4.3, or am I missing something here?
The specs are all very vague when it comes to how typeof should behave with a plugin object, since ECMAScript wasn't written with plugins in mind. Hence IE, with an ActiveX control, tends to respond with "object" because that's how they decided to deal with it; Firefox and, I believe, Safari both respond with "function" because that's how they decided to deal with it.
Both answers make sense; remember that when you access the plugin with document.getElementById("myPlugin"), you aren't just getting a reference to the plugin, you're getting a reference to the HTML element that hosts the plugin, which happens to proxy calls to the plugin. Being an HTML element, it has other properties and methods that you don't even know about.
It does seem like object would make more sense in this case, but an object generally does not, and cannot, have a default function, so my guess is that Firefox decided to report it as a function because there is no way in the NPAPI to query whether a default function exists, short of calling InvokeDefault. While you can call a default method on an ActiveX IDispatch interface as well, it really seems more like an incidental side-effect than a design feature.
Not a terribly scientific answer, but one that might help.
