Reading through the ECMAScript 5.1 specification, +0 and -0 are distinguished.
Why then does +0 === -0 evaluate to true?
JavaScript uses the IEEE 754 standard to represent numbers. From Wikipedia:
Signed zero is zero with an associated sign. In ordinary arithmetic, −0 = +0 = 0. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero). This occurs in some signed number representations for integers, and in most floating point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.
The IEEE 754 standard for floating point arithmetic (presently used by most computers and programming languages that support floating point numbers) requires both +0 and −0. The zeroes can be considered as a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞, division by zero is only undefined for ±0/±0 and ±∞/±∞.
The article contains further information about the different representations.
So this is the reason why, technically, both zeros have to be distinguished.
However, +0 === -0 evaluates to true. Why is that (...) ?
This behaviour is explicitly defined in section 11.9.6, the Strict Equality Comparison Algorithm (emphasis partly mine):
The comparison x === y, where x and y are values, produces true or false. Such a comparison is performed as follows:
(...)
If Type(x) is Number, then
If x is NaN, return false.
If y is NaN, return false.
If x is the same Number value as y, return true.
If x is +0 and y is −0, return true.
If x is −0 and y is +0, return true.
Return false.
(...)
(The same holds for +0 == -0 btw.)
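A quick console check of the cases above (the NaN lines correspond to the first two steps, the zero lines to the +0/−0 steps):
NaN === NaN;  // false
+0 === -0;    // true
-0 === +0;    // true
+0 == -0;     // true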
It seems logical to treat +0 and -0 as equal. Otherwise we would have to take this into account in our code, and I, personally, don't want to do that ;)
Note:
ES2015 introduces a new comparison method, Object.is. Object.is explicitly distinguishes between -0 and +0:
Object.is(-0, +0); // false
I'll add this as an answer because I overlooked @user113716's comment.
You can test for -0 by doing this:
function isMinusZero(value) {
  // 1 / -0 is -Infinity, whereas 1 / +0 is +Infinity
  return 1 / value === -Infinity;
}
isMinusZero(0); // false
isMinusZero(-0); // true
I just came across an example where +0 and -0 behave very differently indeed:
Math.atan2(0, 0); //returns 0
Math.atan2(0, -0); //returns Pi
Be careful: when you use Math.round on a small negative number like -0.0001, the result will actually be -0, which can screw up some subsequent calculations, as shown above.
A quick and dirty way to fix this is to do something like:
if (x==0) x=0;
or just:
x+=0;
This converts the number to +0 in case it was -0.
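For example, a quick check in a console (x here just holds the rounded value):
var x = Math.round(-0.0001);
Object.is(x, -0);  // true  -> the rounding really did produce -0
1 / x;             // -Infinity
1 / (x + 0);       //  Infinity -> adding 0 turns it back into +0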
2021's answer
Are +0 and -0 the same?
Short answer: it depends on which comparison operation you use.
Long answer:
Basically, we've had four equality comparison types so far:
‘loose’ equality
console.log(+0 == -0); // true
‘strict’ equality
console.log(+0 === -0); // true
‘Same-value’ equality (ES2015's Object.is)
console.log(Object.is(+0, -0)); // false
‘Same-value-zero’ equality (ES2016)
console.log([+0].includes(-0)); // true
As a result, only Object.is(+0, -0) distinguishes them; the other three treat +0 and -0 as equal.
const x = +0, y = -0;
console.log(x == y); // true -> using ‘loose’ equality
console.log(x === y); // true -> using ‘strict’ equality
console.log([x].indexOf(y)); // 0 (true) -> using ‘strict’ equality
console.log(Object.is(x, y)); // false -> using ‘Same-value’ equality
console.log([x].includes(y)); // true -> using ‘Same-value-zero’ equality
In the IEEE 754 standard used to represent the Number type in JavaScript, the sign is represented by a bit (a 1 indicates a negative number).
As a result, there exists both a negative and a positive value for each representable number, including 0.
This is why both -0 and +0 exist.
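A small sketch of that bit-level difference, using a DataView to look at the raw 64-bit IEEE 754 encoding (buf and view are just local helper names):
var buf = new ArrayBuffer(8);
var view = new DataView(buf);
view.setFloat64(0, +0);
view.getBigUint64(0).toString(2).padStart(64, '0');  // "000…0" -> all 64 bits are 0
view.setFloat64(0, -0);
view.getBigUint64(0).toString(2).padStart(64, '0');  // "100…0" -> only the sign bit is set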
Answering the original title Are +0 and -0 the same?:
brainslugs83 (in the comments on Spudley's answer) pointed out an important case in which +0 and -0 in JS are not the same, implemented here as a function:
var sign = function(x) {
  // true when x carries a positive sign (including +0),
  // false when it carries a negative sign (including -0)
  return 1 / x === 1 / Math.abs(x);
}
Unlike the standard Math.sign, this takes the sign of +0 and -0 into account.
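For example (using the sign function just defined):
sign(5);    // true
sign(-5);   // false
sign(+0);   // true  -> positive sign
sign(-0);   // false -> negative sign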
We can use Object.is to distinguish +0 and -0, and, as a bonus, it treats NaN as equal to NaN (unlike ==).
Object.is(+0,-0) //false
Object.is(NaN,NaN) //true
I'd blame it on the Strict Equality Comparison algorithm ('===').
Look at step 4d; see 7.2.13 Strict Equality Comparison in the specification.
If you need a sign function that distinguishes -0 and +0:
var sign = x => 1/x > 0 ? +1 : -1;
It acts like Math.sign, except that sign(0) returns 1 and sign(-0) returns -1.
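For example (using the arrow-function sign just defined, not Math.sign):
sign(3);    //  1
sign(-3);   // -1
sign(+0);   //  1
sign(-0);   // -1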
There are two possible values (bit representations) for 0. This is not unique to JavaScript; it occurs especially in floating point numbers, because floating point numbers are actually stored as a kind of formula (sign, exponent and mantissa).
Integers can be stored in different ways too. You can have a numeric value with an additional sign bit, so in a 16-bit space you can store a 15-bit integer value and a sign bit. In this representation, the values 8000 (hex) and 0000 are both 0, but one of them is +0 and the other is -0.
This could be avoided by offsetting negative values by 1 so they ranged from -1 to -2^15, but this would be inconvenient.
A more common approach is to store integers in two's complement, but ECMAScript's Number type is a floating point type, so this doesn't apply to it. In the two's complement method, positive numbers range from 0000 to 7FFF; negative numbers run from FFFF (-1) down to 8000, so there is only a single zero.
Of course, the same rules apply to larger integers too, but I don't want my F to wear out. ;)
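As a side note, JavaScript's signed integer typed arrays do use two's complement, which has only one zero, so the sign of -0 does not survive a round trip through them (i16 is just a local name):
var i16 = new Int16Array(1);
i16[0] = -0;             // converted to the integer 0 on assignment
Object.is(i16[0], -0);   // false -> it comes back as +0
Object.is(i16[0], 0);    // true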
Wikipedia has a good article to explain this phenomenon: http://en.wikipedia.org/wiki/Signed_zero
In brief, both +0 and -0 are defined in the IEEE floating point specification. Both of them are technically distinct from an unsigned 0, which is an integer, but in practice they all evaluate to zero, so the distinction can be ignored for all practical purposes.
Related
I came across a statement "If -0 is subtracted from +0, the result is -0" in a JavaScript book published in year 2012.
However, when I compute +0 - (-0) in a browser, it returns 0 instead of -0. I would like to know whether there has been a change in ECMAScript since then, or whether it is simply an error/typo in the book.
If what the book mentioned is true, I would like to hear explanation and elaboration on this part.
Book: Professional JavaScript for Web Developers, 3rd Ed. by Nicholas C. Zakas (Chapter 3 - pg 63)
The book is incorrect. Maybe it meant -0 - +0. From 12.7.5:
The sum of two negative zeroes is −0. The sum of two positive zeroes, or of two zeroes of opposite sign, is +0.
Given numeric operands a and b, it is always the case that a–b produces the same result as a +(–b).
and 12.5.0:
The unary - operator converts its operand to Number type and then negates it. Negating +0 produces −0, and negating −0 produces +0.
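Putting those rules together, a quick sketch you can check in a console:
Object.is(+0 - -0, +0);  // true -> +0 minus -0 is +0, not -0
Object.is(-0 - +0, -0);  // true -> -0 minus +0 is -0
Object.is(-0 + -0, -0);  // true -> sum of two negative zeroes
Object.is(+0 + -0, +0);  // true -> zeroes of opposite sign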
Also, I skipped to another random page in the book and found this:
Comma Operator
The comma operator allows execution of more than one operation in a single statement, as illustrated here:
var num1=1, num2=2, num3=3;
which is not an instance of the comma operator. Two for two; get a refund.
According to the ECMAScript 6.0 specification:
...there is both a positive zero and a negative zero. For brevity, these values are also referred to for expository purposes by the symbols +0 and -0, respectively. (Note that these two different zero Number values are produced by the program expressions +0 (or simply 0) and -0.)
So, +0 and -0 are different Number values but they are considered equal.
I've checked that -0 === +0 evaluates to true.
I assume this is just an artifact of how numbers are stored in memory and that there is no benefit/purpose/use of these values.
Am I correct?
Also, Wikipedia states:
while the two zero representations behave as equal under numeric comparisons, they yield different results in some operations
Are there any such operations in JavaScript?
It's a matter of how numbers are stored and represented in memory and processed, especially for floating point arithmetic.
Signed zero is zero with an associated sign. In ordinary arithmetic, −0 = +0 = 0. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero). This occurs in some signed number representations for integers, and in most floating point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.
More in this previous answer
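And to answer the last question directly: yes, a few operations do yield different results for +0 and -0, for example:
1 / +0;             //  Infinity
1 / -0;             // -Infinity
Math.atan2(0, +0);  // 0
Math.atan2(0, -0);  // 3.141592653589793
Object.is(+0, -0);  // false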
I was reading the ECMAScript 5.1 spec. It says:
The slice method takes two arguments, start and end [...]. If start is negative, it is treated as length+start where length is the length of the array. If end is negative, it is treated as length+end where length is the length of the array.
What does "negative" mean? It makes sense that, like in math,
If num > 0, then num is positive
If num < 0, then num is negative.
But what about +0 and -0? In math there is a single 0, which is not positive nor negative. My guess was that, in ECMAScript,
+0 (a.k.a. positive zero) is positive.
-0 (a.k.a. negative zero) is negative.
But I tried using -0 with slice, and browsers treat it as non-negative.
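For example (the array here is just an arbitrary sample):
[1, 2, 3].slice(-0);  // [1, 2, 3] -> -0 is treated like 0, not as a negative index
[1, 2, 3].slice(-1);  // [3]       -> an actually negative index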
Then, are both +0 and -0 non-positive and non-negative, despite their names?
Where is the positiveness or negativeness of a number defined? I didn't find that defined in the ECMAScript spec. Is the definition inherited from IEEE 754?
Your confusion is in this part:
But what about +0 and -0? In math there is a single 0, which is not positive nor negative. My guess was that, in ECMAScript,
+0 (a.k.a. positive zero) is positive.
-0 (a.k.a. negative zero) is negative.
+0 is not positive; -0 is not negative. Conceptually they both represent the number zero or, when underflow occurs, any number with a magnitude too small to be represented with the finite number of bits available.
The decision to have +0 and -0 comes more from IEEE than from ECMA.
Things can be confusing if you don't distinguish between the literals +0 and -0, which represent the mathematical value 0, and the values +0 and -0, which are the in-memory representations, respectively, of:
Any mathematical value from 0 to the smallest positive real number that can be stored in the double precision 64-bit data format
Any mathematical value from the largest negative real number that can be stored in the double precision 64-bit data format to 0
If you have a variable containing the Number value -0, this could be representing the real number 0 (which obviously has no sign), or it could be representing a tiny negative real number such as -10^-10000.
If you see the literal -0 or +0 in code, this will be interpreted as the real number 0, which is stored (just like any sufficiently tiny but not actually 0 real number) as the Number -0 or +0, as the case may be.
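A small sketch of that underflow case: 1e-400 is far below the smallest representable double, so it collapses to a signed zero that keeps the sign of the original value:
Object.is(1e-400, +0);   // true
Object.is(-1e-400, -0);  // true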
Here are some relevant sections from the spec that will hopefully clarify things:
Numeric literals
The Number type
Algorithm conventions
Why -0===+0
The terms "positive number" and "negative number" are indeed defined in the ECMAScript spec:
8.5 The Number Type
The Number type has exactly 18437736874454810627 (that is, 2^64−2^53+3) values [...]
The 9007199254740990 (that is, 2^53−2) distinct “Not-a-Number” values of the IEEE Standard are represented in ECMAScript as a single special NaN value [...]
There are two other special values, called positive Infinity and negative Infinity [...]
The other 18437736874454810624 (that is, 2^64−2^53) values are called the finite numbers. Half of these are positive numbers and half are negative numbers; for every finite positive Number value there is a corresponding negative value having the same magnitude.
Therefore,
Since +0 and -0 are two of those finite numbers, each must either be positive or negative.
Since each finite positive number must have a negative counterpart, either +0 is positive and -0 is negative, or the opposite.
It would be too trollish if the positive zero was a negative number, and negative zero was a positive one. So we can (probably) assume that +0 is positive and -0 is negative.
However, according to the following, neither +0 nor -0 can be negative:
5.2 Algorithm Conventions
The mathematical function abs(x) yields the absolute value of x, which is −x if x is negative (less than zero) and otherwise is x itself.
In fact, in most cases the spec seems to differentiate the case when a variable is positive or negative from the case when it's zero. For example,
5.2 Algorithm Conventions
The mathematical function sign(x) yields 1 if x is positive and −1 if x is negative. The sign function is not used in this standard for cases when x is zero.
Therefore, the spec is contradictory.
In a recent post on http://wtfjs.com/, an author writes the following without explanation, which happens to be true:
0 === -0 //returns true
My understanding of the === operator is that it returns true if the operands point to the same object.
Also, the - operator returns the negative value of its operand. With this rule, 0 and -0 should not be the same.
So, why is 0 === -0 ?
=== does not always mean 'points to the same object'. It does for objects, but for value types it compares the values. Hence how this works:
var x = 0;
var y = 0;
var isTrue = (x === y);
document.write(isTrue); // true
JavaScript uses the IEEE floating point standard, in which 0 and -0 are two different values; however, the ECMAScript standard states that its algorithms treat the mathematical value of both as simply 0:
§5.2 (page 12)
Mathematical operations such as addition, subtraction, negation, multiplication, division, and the mathematical functions defined later in this clause should always be understood as computing exact mathematical results on mathematical real numbers, which do not include infinities and do not include a negative zero that is distinguished from positive zero. Algorithms in this standard that model floating-point arithmetic include explicit steps, where necessary, to handle infinities and signed zero and to perform rounding. If a mathematical operation or function is applied to a floating-point number, it should be understood as being applied to the exact mathematical value represented by that floating-point number; such a floating-point number must be finite, and if it is +0 or -0 then the corresponding mathematical value is simply 0.
In fact, 0 and -0 are not the same even at the bit level. However, there is a special case implemented for +/-0 so they compare as equal.
The === operator compares by value when applied to primitive numbers.
Primitive numbers are not objects. You're doing a value comparison of the numbers, not an identity comparison of objects.
Positive zero is equal to negative zero.
This is from the comparison algorithm for the === operator:
If Type(x) is Number, then
If x is NaN, return false.
If y is NaN, return false.
If x is the same Number value as y, return true.
If x is +0 and y is −0, return true.
If x is −0 and y is +0, return true.
Return false.