JS cross-browser inconsistencies/differences

There are lots of DOM/CSS inconsistencies between browsers. But how many core JS differences are there between browsers? One that recently tripped me up is that in Firefox, setTimeout callback functions get passed an extra parameter (https://developer.mozilla.org/en/window.setTimeout).
Also, now that browsers are implementing new functions (e.g. Array.map), it can get confusing to know what you can/can't use if you are trying to write code that must work on all browsers (even back to IE6).
Is there a website that cleanly organizes these types of differences?

I find QuirksMode and WebDevout to have the best tables regarding CSS and DOM quirks. You can bridge those incompatibilities with jQuery. There is also this great list started by Paul Irish which includes pretty much any polyfill you could ever need, including ones for ES5 methods such as Array.map.
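For example, a feature-detecting polyfill for Array.prototype.map might look like the sketch below (simplified; the polyfills in those lists follow the ES5 spec exactly, with extra type checks):

if (!Array.prototype.map) {
  // Only patch when the browser lacks a native implementation (e.g. IE6-8)
  Array.prototype.map = function (callback, thisArg) {
    var result = [];
    for (var i = 0; i < this.length; i++) {
      if (i in this) { // skip holes in sparse arrays
        result[i] = callback.call(thisArg, this[i], i, this);
      }
    }
    return result;
  };
}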

There doesn't appear to be anything out there that clearly outlines all these issues (very surprising, actually). If you use jQuery, there is a nice browser compatibility doc section that outlines supported browsers and known issues. I just deal with issues as they come up (you should be browser-testing in all cases anyway), and document them when I want to make sure I'm coding correctly or need to remember a fix. It's easy to find known issues with a quick search on a particular topic.

Well, I'm going to open up a CW (community wiki):
Prior to Firefox 4, Function.apply only accepted an Array as its second argument, not an array-like object (e.g. arguments). Ref MDC: Function.apply
Some engines (which ones?) promote this inside String.prototype methods from a primitive string to a String object. Ref: A String.prototype's "this" doesn't return a string?
Firefox 4 may insert "event loops" into seemingly synchronous code. Ref Asynchronous timer event running synchronously ("buggy") in Firefox 4?
Earlier Firefox versions would accept a trailing comma in object literals. Ref: trailing comma problem, javascript (seems "fixed" in FF6).
Firefox and IE both treat function-expression productions incorrectly (but differently).
Math.round/Math.toFixed (see the sketch after this list). Ref: Math.round(num) vs num.toFixed(0) and browser inconsistencies
The IE vs. W3C Event Model -- both are missing events/features of the other.
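To illustrate the Math.round/toFixed item above, a sketch (the exact toFixed output varies by engine and version, which is the inconsistency):

Math.round(0.5);     // 1 everywhere: round half up
(0.595).toFixed(2);  // "0.59" in some engines, "0.60" in others, because
                     // 0.595 has no exact binary representation and engines
                     // have differed in how toFixed rounds such values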

Related

Javascript: Where getter/setter values are stored? [duplicate]

I was thinking about this today and I realized I don't have a clear picture here.
Here are some statements I think to be true (please correct me if I'm wrong):
the DOM is a collection of interfaces specified by W3C.
when parsing HTML source code, the browser creates a DOM tree which has nodes that implement DOM interfaces.
the ECMAScript spec has no reference to browser host objects (DOM, BOM, HTML5 APIs etc.).
how the DOM is actually implemented depends on browser internals and is probably different among most of them.
modern JS interpreters use JIT to improve the code performance and translate it to bytecode
I am curious about what happens behind the scenes when I call document.getElementById('foo'). Does the call get delegated to browser native code by the interpreter or does the browser have JS implementations of all host objects? Do you know about any optimizations they do in regard to this?
I read this overview of browser internals but it didn't mention anything about this. I will look through the Chrome and FF source when I have time, but I thought about asking here first. :)
All of your bullet points are correct, except:
modern JS interpreters use JIT to improve the code performance and translate it to bytecode
should be "...and translate it to native code". SpiderMonkey (the JS engine in Firefox) worked as a bytecode interpreter for a long time before the current JS speed arms race.
On Mozilla's JS-to-DOM bridge:
The host objects are typically implemented in C++, though there is an experiment underway to implement DOM in JS. So when a web page calls document.getElementById('foo'), the actual work of retrieving the element by its ID is done in a C++ method, as hsivonen noted.
The specific way the underlying C++ implementation gets called depends on the API and also changed over time (note that I'm not involved in the development, so I might be wrong about some details; here's a blog post by jst, who was actually involved in creating much of this code):
At the lowest level every JS engine provides APIs to define host objects. For example, the browser can call JS_DefineFunctions (as demonstrated in the SpiderMonkey User Guide) to let the engine know that whenever script calls a function with the specified name, a provided C callback should be called. Same for other aspects of the host objects (e.g. enumeration, property getters/setters, etc.)
For the core ECMAScript functionality and in some tricky DOM cases the JS engine/the browser uses these APIs directly to define host objects and their behaviors, but it requires a lot of common boilerplate code for e.g. checking parameter types, converting them to the appropriate C++ types, error handling etc.
For reasons I won't go into, let's say historically, Mozilla made heavy use of XPCOM for many of its objects, including much of the DOM. One feature of XPCOM is its binding to JS called XPConnect. Among other things, XPConnect can take an interface definition in IDL (such as nsIDOMDocument; or more precisely its compiled representation), expose an object with the specified properties to the script, and later, when a script calls getElementById, perform the necessary parameter checks/conversions and route the call directly to a C++ method (nsDocument::GetElementById(const nsAString& aId, nsIDOMElement** aReturn))
The way XPConnect worked was quite inefficient: it registered generic functions as callbacks to be executed when a script accesses a host object, and these generic functions figured out what they needed to do in every particular case dynamically. This post about quickstubs walks you through one example.
"Quick stubs" mentioned in the previous link is a way to optimize JS->C++ calls time by trading some code size for it: instead of always using generic C++ functions that know how to make any kind of call, the specialized code is automatically generated at the Firefox build time for a pre-defined list of "hot" calls.
Later on the JIT (tracemonkey at that time) was taught to generate the code calling C++ methods as part of the native code generated for "hot" paths in JS. I'm not sure how the newer JITs (jaegermonkey) work in this regard.
With "paris bindings" the objects are exposed to webpage JS without any reliance on XPConnect, instead generating all the necessary glue JSClass code based on WebIDL (instead of XPCOM-era IDL). See also posts by developers who worked on this: jst and khuey. Also see How is the web-exposed DOM implemented?
I'm fuzzy on the details of the last three points in particular, so take them with a grain of salt.
The most recent improvements are listed as dependencies of bug 622298, but I don't follow them closely.
JS calls to DOM methods like getElementById cause the JS engine to call into the C++ code that implements the DOM. For example, in Firefox, the call ends up in nsDocument::GetElementById(const nsAString& aId, nsIDOMElement** aReturn).
As you can see, Firefox maintains a hashtable that maps ids to elements in C++ as an optimization in this case, so it doesn't walk the whole DOM tree looking for the id.
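Conceptually, that hashtable spares the engine a full tree walk. A JavaScript sketch of what the lookup would otherwise have to do (illustrative only, not actual browser code):

function slowGetElementById(node, id) {
  // Depth-first walk of the whole subtree: O(n) instead of a hash lookup
  if (node.id === id) return node;
  for (var child = node.firstElementChild; child; child = child.nextElementSibling) {
    var found = slowGetElementById(child, id);
    if (found) return found;
  }
  return null;
}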
The DOM is implemented as a language-independent library in pretty much all major browser implementations, which means it's in a different library from the JavaScript engine. For example, in IE the JS engine is implemented in jscript.dll while the DOM is implemented in mshtml.dll. Safari has Nitro (JS) and WebCore (DOM). Chrome has V8 (JS) and WebCore (DOM), and Firefox has SpiderMonkey/TraceMonkey (JS) and Gecko (DOM).
What this means is that any time your JS has to access the DOM, it has to reach over to the DOM library, which is inherently slow because of all the marshaling that has to take place. An analogy that has been used is two pieces of land connected by a toll bridge: any time you touch the DOM, you must cross over the bridge and back, paying a performance toll.
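The practical advice that follows from this (see the references below) is to batch DOM work instead of crossing the bridge inside a loop. A sketch, assuming a hypothetical element with id "list":

// Slow: a read and a write crossing on every iteration
for (var i = 0; i < 1000; i++) {
  document.getElementById('list').innerHTML += '<li>' + i + '</li>';
}

// Faster: do the string work in pure JS, then cross the bridge once
var html = '';
for (var j = 0; j < 1000; j++) {
  html += '<li>' + j + '</li>';
}
document.getElementById('list').innerHTML += html;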
References
Video: Building High Performance Web Applications and Sites
Book: High Performance Javascript (Chapter 3 on the DOM)

what is benefit of $(e).attr(name,value) vs e.setAttribute(name,value)?

Case: e is of type HTMLElement, not a CSS selector
I am talking about any attribute, not just the standard allowed ones (like atom-type or data-atom-type); whatever the name may be, will it work without jQuery?
I suspect $(e).attr(name,value) is too slow: first of all it creates an entire jQuery object ($(e) !== $(e); the two objects are not the same) (jsPerf: http://jsperf.com/jquery-attr-vs-native-setattribute/28), and then it runs various checks before setting the value, whereas most browsers support e.setAttribute directly.
Is there any problem with replacing $(e).attr(name,value) with e.setAttribute(name,value)?
IE8 supports setAttribute as per MSDN documentation. Is there any mobile browser or any browser which will not support it?
Eventually I want to improve the performance of my JavaScript framework; initially we used jQuery extensively for its cross-browser DOM features.
We have now understood that unless you are using a CSS selector, most functions such as attr, val and text are better replaced with their direct DOM counterparts when you have an instance of HTMLElement.
I suspect $(e).attr(name,value) is too slow: first of all it creates an entire jQuery object, and then it runs various checks before setting the value, whereas most browsers support e.setAttribute directly.
If you measure it, you'll find that the difference in performance is large-ish in relative terms, but minuscule in absolute terms, and it's absolute terms we normally care about. It just doesn't matter in 99.999999% of cases. If you run into a specific performance problem, and trace it to using jQuery, then consider optimizing at that point.
what is benefit of $(e).attr(name,value) vs e.setAttribute(name,value)?
In the specific case you mention, where e is an HTMLElement, there are only a couple of benefits:
There are a couple of IE-specific bugs in setAttribute that jQuery works around for you
There are some "attributes" people set when they really should be setting a property, for instance checked or disabled; jQuery maps those (this is mostly a legacy feature these days, as people should be using prop)
It does some pre-processing on boolean values for you, letting you use $(e).attr("checked", true) when true really should be "checked" (see the sketch below)
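A sketch of the attribute/property distinction behind those points:

var box = document.createElement('input');
box.type = 'checkbox';

// setAttribute writes the attribute, which reflects the *default* state...
box.setAttribute('checked', 'checked');
// ...while the live property is what the control actually uses:
box.checked = false;   // unchecked, despite the attribute

// With plain DOM, set the property for booleans like checked/disabled;
// that's roughly what $(box).prop('checked', true) does for you
box.checked = true;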
IE8 supports setAttribute as per MSDN documentation. Is there any mobile browser or any browser which will not support it?
All browsers support setAttribute. As I mentioned earlier, various versions of IE have had various bugs in it, but it's there and mostly works.

Is it safe to use window.screen?

MDN explains how to use the window.screen object, but also says "DOM Level 0. Not part of specification."
W3Schools says that window.screen.* properties are supported in all major browsers.
If I understand this correctly... window.screen is completely non-standard, but is nonetheless universally supported. Is that right?
If this is the case, are there any cross-browser differences I need to be aware of, or can I just use it? I'm mostly interested in screen.availWidth, by the way.
Quirksmode compatibility tables to the rescue!
http://www.quirksmode.org/dom/w3c_cssom.html#screenview
Most, but not all, values are supported by the major browsers.
You should be fine with it.
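For example (a defensive sketch; the feature test is almost certainly unnecessary in practice):

if (window.screen && typeof screen.availWidth === 'number') {
  // availWidth excludes OS chrome such as taskbars; width is the full screen
  console.log('Available width: ' + screen.availWidth + 'px');
  console.log('Total width: ' + screen.width + 'px');
}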
The reason that it is not part of a standard is that DOM Level 0 was introduced before standards were around. DOM Level 0 is also called the Legacy DOM, and it was created at the same time Netscape 2.0 made JavaScript in the browser a reality; in effect, DOM Level 0 was the very first DOM spec.
The Legacy DOM will be around for a long time; dropping it would break backward compatibility with a TON of very popular scripts already in existence.
EDIT: In other words, your understanding is completely correct. It is not "standardized" but it is completely universal and will remain so for a long time.

Is "monkey patching" really that bad? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
Some languages like Ruby and JavaScript have open classes which allow you to modify interfaces of even core classes like numbers, strings, arrays, etc. Obviously doing so could confuse others who are familiar with the API, but is there a good reason to avoid it otherwise, assuming that you are adding to the interface and not changing existing behavior?
For example, it might be nice to add an Array.map implementation to web browsers which don't implement ECMAScript 5th edition (and if you don't need all of jQuery). Or your Ruby arrays might benefit from a "sum" convenience method which uses "inject". As long as the changes are isolated to your systems (e.g. not part of a software package you release for distribution), is there a good reason not to take advantage of this language feature?
Monkey-patching, like many tools in the programming toolbox, can be used both for good and for evil. The question is where, on balance, such tools tend to be most used. In my experience with Ruby the balance weighs heavily on the "evil" side.
So what's an "evil" use of monkey-patching? Well, monkey-patching in general leaves you wide open to major, potentially undiagnosable clashes. I have a class A. I have some kind of monkey-patching module MB that patches A to include method1, method2 and method3. I have another monkey-patching module MC that also patches A to include a method2, method3 and method4. Now I'm in a bind. I call instance_of_A.method2: whose method gets called? The answer to that can depend on a lot of factors:
1. In which order did I bring in the patching modules?
2. Are the patches applied right off or in some kind of conditional circumstance?
3. AAAAAAARGH! THE SPIDERS ARE EATING MY EYEBALLS OUT FROM THE INSIDE!
OK, so #3 is perhaps a tad over-melodramatic....
Anyway, that's the problem with monkey-patching: horrible clashing problems. Given the highly dynamic nature of the languages that typically support it, you're already faced with a lot of potential "spooky action at a distance" problems; monkey-patching just adds to these.
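The same clash is easy to reproduce in JavaScript. A contrived sketch with two hypothetical libraries:

// Library A patches Array with a boolean contains()
Array.prototype.contains = function (x) { return this.indexOf(x) !== -1; };

// Library B, loaded later, silently clobbers it with an index-returning one
Array.prototype.contains = function (x) { return this.indexOf(x); };

[1, 2].contains(1); // 0 -- falsy! Code written against library A now breaks.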
Having monkey-patching available is nice if you're a responsible developer. Unfortunately, IME, what tends to happen is that someone sees monkey-patching and says, "Sweet! I'll just monkey-patch this in instead of checking to see if other mechanisms might not be more appropriate." This is a situation roughly analogous to Lisp code bases created by people who reach for macros before they think of just doing it as a function.
Wikipedia has a short summary of the pitfalls of monkey-patching:
http://en.wikipedia.org/wiki/Monkey_patch#Pitfalls
There's a time and place for everything, also for monkey-patching. Experienced developers have many techniques up their sleeves and learn when to use them. It's seldom a technique per se that's "evil", just inconsiderate use of it.
With regards to Javascript:
is there a good reason to avoid it otherwise, assuming that you are adding to the interface and not changing existing behavior?
Yes. Worst-case, even if you don't alter existing behavior, you could damage the future syntax of the language.
This is exactly what happened with Array.prototype.flatten and Array.prototype.contains. In short, the specification was written up for those methods, their proposals got to stage 3, and then browsers started shipping them. But, in both cases, it was found that there were ancient libraries which patched the built-in Array object with their own methods of the same name and different behavior; as a result, websites broke, the browsers had to back out their implementations of the new methods, and the specification had to be edited. (The methods were renamed: flatten became flat, and contains became includes.)
If you mutate a built-in object like Array on your own browser, on your own computer, that's fine. (This is a very useful technique for userscripts.) If you mutate a built-in object on your public-facing site, that's less fine - it may eventually result in problems like the above. If you happen to control a big site (like stackoverflow.com) and you mutate a built-in object, you can almost guarantee that browsers will refuse to implement new features/methods which break your site (because then users of that browser will not be able to use your site, and they will be more likely to migrate to a different browser). (see here for an explanation of these sorts of interactions between the specification writers and browser makers)
All that said, with regards to the specific example in your question:
For example, it might be nice to add a an Array.map implementation to web browsers which don't implement ECMAScript 5th edition
This is a very common and trustworthy technique, called a polyfill.
A polyfill is code that implements a feature on web browsers that do not support the feature. Most often, it refers to a JavaScript library that implements an HTML5 web standard, either an established standard (supported by some browsers) on older browsers, or a proposed standard (not supported by any browsers) on existing browsers.
For example, if you wrote a polyfill for Array.prototype.map (or, to take a newer example, for Array.prototype.flatMap) which was perfectly in line with the official Stage 4 specification, and then ran code that defined Array.prototype.flatMap on browsers which didn't have it already:
if (!Array.prototype.flatMap) {
  Array.prototype.flatMap = function (callback, thisArg) {
    // ... spec-compliant implementation here ...
  };
}
If your implementation is correct, this is perfectly fine, and is very commonly done all over the web so that obsolete browsers can understand newer methods. polyfill.io is a common service for this sort of thing.
As long as the changes are isolated to your systems (e.g. not part of a software package you release for distribution) is there a good reason not to take advantage of this language feature?
As a lone developer on an isolated problem, there are no issues with extending or altering native objects. On larger projects, this is a team choice that should be made.
Personally I dislike having native objects in JavaScript altered, but it's a common practice and a valid choice to make. If you're going to write a library or code that is meant to be used by others, I would heavily avoid it.
It is, however, a valid design choice to allow the user to set a config flag which says "please overwrite native objects with your convenience methods", because they're so convenient.
To illustrate a JavaScript-specific pitfall:
Array.prototype.map = function map() { ... };
var a = [2];
for (var k in a) {
  console.log(a[k]);
}
// logs: 2, then function map() { ... }
This issue can be avoided by using ES5, which allows you to inject non-enumerable properties into an object (as sketched below).
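A sketch of the ES5 approach; Object.defineProperty creates the property as non-enumerable by default, so for..in no longer sees it:

Object.defineProperty(Array.prototype, 'map', {
  value: function map(callback, thisArg) { /* ... */ },
  writable: true,
  configurable: true
});

var a = [2];
for (var k in a) {
  console.log(a[k]); // 2 only; the patched map stays hidden
}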
This is mainly a high-level design choice, and everyone needs to be aware of it and agree on it.
It's perfectly reasonable to use "monkey patching" to correct a specific, known problem where the alternative would be to wait for a patch to fix it. That means temporarily taking on responsibility for fixing something until there's a "proper", formally released fix that you can deploy.
A considered opinion by Gilad Bracha on Monkey Patching: http://gbracha.blogspot.com/2008/03/monkey-patching.html
The conditions you describe -- adding to (not changing) existing behavior, and not releasing your code to the outside world -- seem relatively safe. Problems could come up, however, if the next version of Ruby or JavaScript or Rails changes their API. For example, what if some future version of jQuery checks to see if Array.map is already defined, and assumes it's the ECMAScript 5 version of map when in actuality it's your monkey-patch?
Similarly, what if you define "sum" in Ruby, and one day you decide you want to use that Ruby code in Rails, or add the Active Support gem to your project? Active Support also defines a sum method (on Enumerable), so there's a clash.

Javascript trimAll() doesn't seem to work

When I executed this:
var a = trimAll(document.getElementById("txt_msg").value);
When I inspected it through the web developer toolbar, I got the error: trimAll is not defined.
Any suggestions?
As others have mentioned, the error is indicating that the trimAll() function has not been defined. As trimAll() is not a standard JavaScript function, you would need to write one named trimAll() in order to call it.
There are many ways to write a string trimming function. Some functions are compact, some are easy to read, others are blazing fast.
It's worth keeping in mind that native JavaScript trim is supported in ECMAScript 5. I suspect the intention of your trimAll() function call would be the same functionality as trim().
So, if you plan to write your own trim function, it might be worthwhile checking for the existence of a native trim and using it in preference to your own string-trimming method.
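For instance, a sketch that assumes trimAll was meant to trim the ends and collapse interior runs of whitespace (adjust if your intended semantics differ):

function trimAll(str) {
  // Prefer the native ES5 trim where it exists
  var trimmed = String.prototype.trim ? str.trim()
                                      : str.replace(/^\s+|\s+$/g, '');
  // Collapse any remaining whitespace runs to single spaces
  return trimmed.replace(/\s+/g, ' ');
}

trimAll('  hello   world  '); // "hello world"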
According to the latest edition of the ECMAScript spec, trimAll is not a standard JavaScript function. Either it is a browser-specific extension, or it comes from a third-party library.
Exercise due caution if you want your JavaScript / website to work in all browsers.
Here's a trimAll function that might help:
http://www.jslab.dk/library/String.trimAll
