Is isNaN() really necessary? - javascript

According to the MDN page1 regarding isNaN (emphasis mine):
Unlike all other possible values in JavaScript, it is not possible to rely on the equality operators (== and ===) to determine whether a value is NaN or not, because both NaN == NaN and NaN === NaN evaluate to false. Hence, the necessity of an isNaN function.
However, since NaN is the only value in JavaScript that will compare unequal to itself, it is in fact possible to rely on the strict equality operator to determine, unequivocally, whether a value is NaN or not:
const value = NaN;
console.log(value === value) // false, only NaN compares unequal to itself
That said, what's the explanation for that MDN sentence?
PS: I do understand that MDN is not an official documentation (that would be ECMA) and that much like a wiki anyone can edit it. Even so, it's a very reliable source.
1: That paragraph has been edited after this question was posted, as mentioned by the answerer.

No, having a built-in isNaN function isn't necessary. isNaN is just for convenience's sake. Sure, one could manually do:
const isNaN = arg => arg !== arg
for every script you wanted to perform such an operation in, but it'd be tedious.
Or, without the function, consider reading code like:
if (parsedNumber !== parsedNumber) {
  // It's NaN
}
That'd look really weird, and it's in programmers' best interests to have code be as readable as possible.
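For what it's worth, modern engines also provide Number.isNaN(), which reads even better than the self-comparison; a quick check shows the two agree:
const parsedNumber = Number("not a number"); // NaN
console.log(parsedNumber !== parsedNumber); // true, NaN is the only value unequal to itself
console.log(Number.isNaN(parsedNumber));    // true, same result, arguably more readable
console.log(Number.isNaN(42));              // false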

Related

Comparison operator giving idiosyncratic result in JavaScript

A friend of mine told me today to open the Chrome Console and look at the output of the three JavaScript commands I report below (with the corresponding output).
> Boolean([])
< true
> Boolean("")
< false
> [] == ""
< true
When I told him that it was probably a bug, he replied that it is a famous thing and a JavaScript developer should know about it.
Is it true? Is there any logic that justifies the output seen above, or is it just a bug in the language?
To compare [] and "", JavaScript tries to bring them to the same type, in this case: String.
You'll notice a similar result with this (and it makes sense to us):
[].toString() == "" // true
Wow! What a great question! This is such crazy behavior when coming to JavaScript from a different language right? Or heck, even if JavaScript is your first language it's still crazy. But it is indeed the language working as intended.
There's an amazing answer/explanation of this behavior and why it happens here. In a nutshell, JavaScript does a lot of type casting and interesting things under the hood when you use the equality operator (==). More often than not, you'll probably want to be using the strict equality operator (===), since it performs a stricter comparison and JavaScript doesn't attempt any type casting or magic beneath the surface when using it.
Double equals performs type coercion: it attempts to convert each operand to a common type before checking equality. This is why it's recommended that JavaScript developers always use triple equals (===), which performs a strict comparison without any coercion.
In this case, [] is converted to the empty string "", which then compares equal to the empty string it's being compared to. The same situation can be seen in the example below:
[5] == 5 // true
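To make the coercion explicit, here is a rough sketch of the conversions == performs in these two cases (the full Abstract Equality algorithm has more branches than this):
// [] == "" : the array is converted to a primitive first
String([]) === "";   // true, [].toString() is ""
// so [] == "" reduces to "" == "", which is true

// [5] == 5 : the array becomes "5", then the string becomes a number
String([5]) === "5"; // true
Number("5") === 5;   // true
// so [5] == 5 reduces to 5 == 5, which is true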

OK to use type coercion when checking for undefined/null?

Is it acceptable to use type coercion (== instead of ===) to check for undefined/null?
What are the downsides? Is there a better way to check for undefined/null?
Clarification: I am looking for a statement that will check for both undefined and null.
test(null);
test(undefined);

function test(myVar) {
  if (myVar != undefined) {
    alert('never gets called');
  }
}
I'm going to attempt to address this question as objectively as possible, but it deals with some mushy "best practices" opinion stuff that can trigger people to forget that there's more than one way to do things.
Is it acceptable to use type coercion (== instead of ===) to check for undefined/null?
It's acceptable to write code however you see fit regardless of anyone who tells you otherwise. Including me.
That's not really helpful here, so let me expand on that a bit, and I'll let you decide how you feel about it.
Code is a tool that can be used to solve problems, and == is a tool within JavaScript code to solve a specific problem. It has a well-defined algorithm which it follows for checking a type of equality between two parameters.
===, on the other hand, is a different tool that solves a different, specific problem. It also has a well-defined algorithm which it follows for checking a type of equality between two parameters.
If one tool or the other is appropriate for your use-case, then you shouldn't hesitate to use the tool.
If you need a hammer, use a hammer.
But if all you have is a hammer, then everything looks like a nail, and this is the core issue with ==. Developers coming from other languages often aren't aware of how == works and use it when === would be appropriate.
In fact, most use cases are such that === should be preferred over ==, but as you've asked in your question: that's not true in this instance.
What are the downsides?
If you relax your standards for === and allow developers to use ==, you may very well get bitten by misuse of ==, which is why sticking religiously to === may be acceptable to some and seem foolish to others.
if (val === null || val === undefined) {...
is not particularly difficult to write, and is very explicit about intent.
Alternatively,
if (val == null) {...
is much more concise, and not difficult to understand either.
Is there a better way to check for undefined/null?
"better" is subjective, but I'm going to say, "no" anyway because I'm not aware of any that improve the situation in any meaningful way.
At this point I will also note that there are other checks that could be performed instead of null/undefined. Truthy/falsey checks are common, but they are another tool used to solve yet another specific problem:
if (!val) {...
is much more concise than either of the previous options, however it comes with the baggage of swallowing other falsey values.
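For illustration, a quick sketch of the values a bare !val check would also treat as "missing":
function isMissing(val) {
  return !val; // truthy/falsy check, not a null/undefined check
}

isMissing(undefined); // true
isMissing(null);      // true
isMissing(0);         // true, swallowed
isMissing("");        // true, swallowed
isMissing(false);     // true, swallowed
isMissing(NaN);       // true, swallowed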
Additional notes on enforcement via linting:
If you follow the strictness of Crockford's JSLint, == is never tolerated, on the understanding that a tool that's "sometimes useful" but "mostly dangerous" isn't worth the risk.
ESLint, on the other hand, allows for a more liberal interpretation of the rule by providing an option to allow null as a specific exception.
== undefined and == null are both equivalent to checking if the value is either undefined or null, and nothing else.
As for "acceptable", it depends on who is looking at the code, and whether your linter is configured to ignore this special case.
I can tell you for sure that some minifiers already do this optimisation for you.
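A quick illustration of that equivalence:
function isNullish(val) {
  return val == null; // true only for null and undefined
}

isNullish(null);      // true
isNullish(undefined); // true
isNullish(0);         // false, not swallowed the way !val would swallow it
isNullish("");        // false
isNullish(false);     // false
isNullish(NaN);       // false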
The ideal way to test for undefined is using typeof. Note that you only need to test them separately if you care about the difference between undefined and null:
test(null);
test(undefined);

function test(myVar) {
  if (typeof myVar === "undefined") {
    // it's undefined
  }
  if (myVar === null) {
    // it's null
  }
}
If you don't care whether it's undefined or null, then just do:
test(null);
test(undefined);

function test(myVar) {
  if (!myVar) {
    // it's undefined or null, dunno which, don't care!
  }
}

SonarQube suggest a!==a instead of a===NaN

There is a SonarQube JavaScript Rule (javascript:S2688) which says that the use of a === NaN is a bug because it's always false.
I agree with that, but I think using a !== a instead (as suggested by SonarQube) is a very bad idea.
It's a funny JavaScript fact but certainly not a "best practice".
What about Number.isNaN(a)? Why is this not the suggested solution? Are there any differences or problems which I've missed?
the use of a === NaN is a bug because it's always false.
This behaviour is not a bug, because it is how NaN has been defined to work. But if you actually used a === NaN in a program then that would be a bug because of the always-false result.
a !== a instead ... is a very bad idea. It's a funny JavaScript fact but certainly not a "best practice".
I disagree with your "certainly". Due to problems with the original global isNaN() function (which I'll explain in a moment), a !== a was, historically, the best way to test for NaN. So in fact it is a very common practice to use that technique, and I would expect the vast majority of experienced JavaScript developers to be familiar with it.
NaN is the only value that tests as not equal to itself.
What about isNaN(a)? Why is this not the suggested solution? Are there any differences or problems which I've missed?
The original, global isNaN() function doesn't actually test whether its argument is NaN. Nor does it test if its argument is some other non-numeric value. What it does is first try to convert its argument to a number and then test if the result of that conversion is equal to NaN. This implicit conversion means that, e.g., isNaN("test") returns true even though a string is not equal to the value NaN. And isNaN("") returns false because an empty string can be coerced to 0. If that behaviour is what you're looking for then yes, use isNaN().
So all of that is why ECMAScript 6/2015 introduced a new function, Number.isNaN(), which does test specifically for the value NaN, giving an equivalent result to the old-school a !== a.
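A few examples that show the difference:
// The global isNaN coerces its argument first
isNaN("test");        // true, "test" coerces to NaN
isNaN("");            // false, "" coerces to 0
isNaN(NaN);           // true

// Number.isNaN only returns true for the actual value NaN
Number.isNaN("test"); // false, it's a string, no coercion
Number.isNaN("");     // false
Number.isNaN(NaN);    // true

// The old-school self-comparison agrees with Number.isNaN
const a = NaN;
a !== a;              // true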
As suggested in the comments, for older browsers (basically old IE) that don't support Number.isNaN(), if you want something clearer than a !== a and don't mind longer code you can do this:
typeof a == "number" && isNaN(a)
...which is one of the two Number.isNaN() polyfills suggested by MDN. (The other just uses a !== a.)
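For reference, a sketch of that typeof-based polyfill:
// Only define it if the environment doesn't already provide it
if (!Number.isNaN) {
  Number.isNaN = function (value) {
    // Only an actual number can be NaN, and the global isNaN
    // then confirms that it is indeed NaN
    return typeof value === "number" && isNaN(value);
  };
}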

What does undefined === undefined compare?

After reading through Lodash's code a bit and learning that it uses typeof comparisons more often than not (in its suite of _.is* tools), I ran some tests to confirm that it is faster, which it is, in fact (if marginally).
Discussing my confusion with a fellow developer, he pointed out that in case 1:
var a;
return a === undefined;
Two objects undergo comparison, whereas in case 2 (the faster case):
var a;
return typeof a === 'undefined';
is a far simpler, flat string comparison.
I was always of the thought that undefined inhabited a static place in memory, and all triple equals was doing was comparing that reference. Who's correct (if either)?
JSPerf Test:
http://jsperf.com/testing-for-undefined-lodash
In this case (with var a in scope), both of the two posted pieces of code have identical semantics, as x === undefined is only true when x is undefined and typeof x will only return "undefined" when x is undefined (or not resolved in the execution context).
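One practical difference worth noting: typeof never throws on an unresolved identifier, whereas a direct comparison does:
var a;
a === undefined;                    // true, a is declared but unassigned
typeof a === "undefined";           // true, same result

typeof notDeclared === "undefined"; // true, no error for an unresolved name
// notDeclared === undefined;       // would throw a ReferenceError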
That out of the way, we end up with:
undefined === undefined
vs.
"undefined" === "undefined"
So, even in a naive implementation case the "difference" between the two is SameObject(a,b) vs StringEquals(a,b) as per the rules for Strict Equality Comparison.
But modern implementations of JavaScript are quite competitive and, as can be seen, are very well optimized for this case. I don't know of the exact implementation details, but there are at least two different techniques that could allow the performance (in FF/Chrome/Webkit) to be the same for both cases:
ECMAScript implementations may intern strings, such that there is only one "undefined" string. This would make the StringEquals(a,b) effectively the same as SameObject(a,b) which would end up amounting to a "pointer comparison" in the implementation - this alone could explain why the performance is the same.
Additionally, because typeof x === "undefined" is a common idiom, it could be translated during the parsing to end up in a special call that performs the same, say TypeOfUndefined(x). That is, the implementation could bypass the === (and StringEquals) entirely if it so chose.
IE comes out as the "clear loser" and seems to be missing an applicable optimization here.
The implementation of the undefined value is outside the scope of ECMAScript; there are no mandates that it must be a single value/object in the implementation.
In practice, however, undefined (and other special values like null) is most likely implemented as either a singleton (e.g. "one object in memory") or as an immediate value or value flag (such that there is actually no undefined object).
According to ECMAScript (see http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf ), if both expressions are strings (typeof returns a string) it performs a string comparison (char by char); otherwise it just compares references (in the case where one of the sides is undefined). There is no way that a string comparison would be more efficient than a simple comparison of two numbers (i.e. addresses in memory). However, it is possible that it is highly optimized (browsers do not follow ECMAScript 100%), so it's really hard to say.
Proper benchmark:
http://jsperf.com/undefined-comparison-reference-vs-string
Note that in your test the initial value of a is undefined (may or may not matter, it seems that it does).
Conclusion: always use
return a === undefined;
It definitely won't be slower and it might be faster. Besides, it seems more natural to compare something to a special undefined object rather than to a string.

Primitives and Object Wrapper Equivalence in JavaScript

EDIT: Based on everyone's feedback, the original version of this question is more design-related, not standards-related. Making it more SO-friendly.
Original:
Should a JS primitive be considered "equivalent" to an object-wrapped version of that primitive according to the ECMA standards?
Revised Question
Is there a universal agreement on how to compare primitive-wrapped objects in current JavaScript?
var n = new Number(1),
    p = 1;
n === p;  // false
typeof n; // "object"
typeof p; // "number"
+n === p; // true, but you need coercion.
EDIT:
As #Pointy commented, ECMA spec (262, S15.1.2.4) describes a Number.isNaN() method that behaves as follows:
Number.isNaN(NaN); // true
Number.isNaN(new Number(NaN)); // false
Number.isNaN(+(new Number(NaN))); // true, but you need coercion.
Apparently, the justification for this behavior is that isNaN will return true IF the argument coerces to NaN. new Number(NaN) does not directly coerce based on how the native isNaN operates.
It seems that the performance hit and the trickiness in type conversion, etc., of directly using native Object Wrappers as opposed to primitives outweigh the semantic benefits for now.
See this JSPerf.
The short answer to your question is no, there is no consensus on how to compare values in JS because the question is too situational; it depends heavily on your particular circumstances.
But, to give some advice/provide a longer answer .... object versions of primitives are evil (in the "they will cause you lots of bugs sense", not in a moral sense), and should be avoided if possible. Therefore, unless you have a compelling reason to handle both, I'd suggest that you do not account for object-wrapped primitives, and simply stick to un-wrapped primitives in your code.
Plus, if you don't account for wrapped primitives it should eliminate any need for you to even have an equals method in the first place.
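To illustrate the kind of bugs they cause, here are a few of the surprises wrapped primitives produce:
var wrapped = new Number(1);
var primitive = 1;

typeof wrapped;                  // "object", not "number"
wrapped === primitive;           // false, different types
new Number(1) === new Number(1); // false, two distinct objects

// Even a wrapped false is truthy, because all objects are truthy
if (new Boolean(false)) {
  console.log("this runs");
}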
* Edit *
Just saw your latest comment, and if you need to compare arrays then the built-in == and === won't cut it. Even so, I'd recommend making an arrayEquals method rather than just an equals method, as you'll avoid lots of drama by keeping your function as focused as possible and using the built-in JS comparators as much as possible.
And if you do wrap that in some sort of general function, for convenience:
function equals(left, right) {
  if (left.slice && right.slice) { // lame array check
    return arrayEquals(left, right);
  }
  return left == right;
}
I'd still recommend against handling primitive-wrapped objects, unless by "handle" you mean making your function throw an error when it's passed a primitive-wrapped object. Again, because these objects will only cause you trouble, you should strive to avoid them as much as possible, and not leave yourself openings to introduce bad code.
