Result of -0 subtracted from +0 - javascript

I came across the statement "If -0 is subtracted from +0, the result is -0" in a JavaScript book published in 2012.
However, when I compute +0 - (-0) in a browser, it returns 0 rather than -0. I would like to know whether ECMAScript has changed since then or whether this is simply an error/typo in the book.
If what the book says is true, I would like an explanation and elaboration of this point.
Book: Professional JavaScript for Web Developers, 3rd Ed. by Nicholas C. Zakas (Chapter 3 - pg 63)

The book is incorrect. Maybe it meant -0 - +0. From 12.7.5:
The sum of two negative zeroes is −0. The sum of two positive zeroes, or of two zeroes of opposite sign, is +0.
Given numeric operands a and b, it is always the case that a - b produces the same result as a + (-b).
and from the spec's definition of the unary - operator (12.5):
The unary - operator converts its operand to Number type and then negates it. Negating +0 produces −0, and negating −0 produces +0.
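A quick console check of those rules (a sketch; Object.is is used because, unlike ===, it distinguishes the two zeros):
Object.is(+0 - -0, +0); // true: +0 + (+0) produces +0
Object.is(-0 - +0, -0); // true: -0 + (-0) produces -0
Object.is(-0 + -0, -0); // true: the sum of two negative zeroes is -0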
Also, I skipped to another random page in the book and found this:
Comma Operator
The comma operator allows execution of more than one operation in a single statement, as illustrated here:
var num1=1, num2=2, num3=3;
which is not an instance of the comma operator (those commas merely separate declarations in a single var statement). Two for two; get a refund.
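For contrast, a minimal example of the actual comma operator, which evaluates each operand and yields the value of the last:
var num = (1, 2, 3); // evaluates 1, then 2, then 3, and yields 3
console.log(num); // 3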

What is the purpose of having both positive and negative zero (+0, also written as 0, and -0)?

According to the ECMAScript 6.0 specification:
...there is both a positive zero and a negative zero. For brevity, these values are also referred to for expository purposes by the symbols +0 and -0, respectively. (Note that these two different zero Number values are produced by the program expressions +0 (or simply 0) and -0.)
So, +0 and -0 are different Number values but they are considered equal.
I've checked that -0 === +0 evaluates to true.
I assume this is just an artifact of how numbers are stored in memory and that there is no benefit/purpose/use of these values.
Am I correct?
Also, Wikipedia states:
while the two zero representations behave as equal under numeric comparisons, they yield different results in some operations
Are there any such operations in JavaScript?
It's a matter of how numbers are stored, represented in memory, and processed, especially for floating-point arithmetic.
Signed zero is zero with an associated sign. In ordinary arithmetic, −0 = +0 = 0. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero). This occurs in some signed number representations for integers, and in most floating point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.
More in this previous answer
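To make the second part concrete, here are a few JavaScript operations where the two zeros do yield different results (quick console checks):
1 / +0;            // Infinity
1 / -0;            // -Infinity
Math.atan2(0, +0); // 0
Math.atan2(0, -0); // Math.PI (3.141592653589793)
Object.is(+0, -0); // false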

Why does string to number comparison work in Javascript

I am trying to compare a value coming from an HTML text field with integers, and it works as expected.
Condition is -
x >= 1 && x <= 999;
Where x is the value of text field. Condition returns true whenever value is between 1-999 (inclusive), else false.
Problem is, the value coming from the text field is of string type and I'm comparing it with integer types. Is it okay to have this comparison like this, or should I use parseInt() to convert x to an integer?
Because JavaScript defines >= and <= (and several other operators) in a way that allows them to coerce their operands to different types. It's just part of the definition of the operator.
In the case of <, >, <=, and >=, the gory details are laid out in §11.8.5 of the specification. The short version is: If both operands are strings (after having been coerced from objects, if necessary), it does a string comparison. Otherwise, it coerces the operands to numbers and does a numeric comparison.
Consequently, you get fun results, like "90" > "100" being true (both are strings, so it's a string comparison) but "90" < 100 also being true (one of them is a number, so it's a numeric comparison). :-)
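You can see both behaviors side by side:
console.log("90" > "100"); // true  — two strings, compared character by character
console.log(90 > 100);     // false — plain numeric comparison
console.log("90" < 100);   // true  — mixed types, "90" is coerced to the number 90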
Is it okay to have this comparison like this, or should I use parseInt() to convert x to an integer?
That's a matter of opinion. Some people think it's totally fine to rely on the implicit coercion; others think it isn't. There are some objective arguments. For instance, suppose you relied on implicit conversion and it was fine because you had those numeric constants, but later you were comparing x to another value you got from an input field. Now you're comparing strings, but the code looks the same. But again, it's a matter of opinion and you should make your own choice.
If you do decide to explicitly convert to numbers first, parseInt may or may not be what you want, and it doesn't do the same thing as the implicit conversion. Here's a rundown of options:
parseInt(str[, radix]) - Converts as much of the beginning of the string as it can into a whole (integer) number, ignoring extra characters at the end. So parseInt("10x") is 10; the x is ignored. Supports an optional radix (number base) argument, so parseInt("15", 16) is 21 (15 in hex). If there's no radix, assumes decimal unless the string starts with 0x (or 0X), in which case it skips those and assumes hex. Does not look for the new 0b (binary) or 0o (new style octal) prefixes; both of those parse as 0. (Some browsers used to treat strings starting with 0 as octal; that behavior was never specified, and was specifically disallowed in the ES5 specification.) Returns NaN if no parseable digits are found.
Number.parseInt(str[, radix]) - Exactly the same function as parseInt above. (Literally, Number.parseInt === parseInt is true.)
parseFloat(str) - Like parseInt, but does floating-point numbers and only supports decimal. Again extra characters on the string are ignored, so parseFloat("10.5x") is 10.5 (the x is ignored). As only decimal is supported, parseFloat("0x15") is 0 (because parsing ends at the x). Returns NaN if no parseable digits are found.
Number.parseFloat(str) - Exactly the same function as parseFloat above.
Unary +, e.g. +str - (i.e., implicit conversion) Converts the entire string to a number using floating point and JavaScript's standard number notation (just digits and a decimal point = decimal; 0x prefix = hex; 0b = binary [ES2015+]; 0o prefix = octal [ES2015+]; some implementations extend it to treat a leading 0 as octal, but not in strict mode). +"10x" is NaN because the x is not ignored. +"10" is 10, +"10.5" is 10.5, +"0x15" is 21, +"0o10" is 8 [ES2015+], +"0b101" is 5 [ES2015+]. Has a gotcha: +"" is 0, not NaN as you might expect.
Number(str) - Exactly like implicit conversion (e.g., like the unary + above), but slower on some implementations. (Not that it's likely to matter.)
Bitwise OR with zero, e.g. str|0 - Implicit conversion, like +str, but then it also converts the number to a 32-bit integer (and converts NaN to 0 if the string cannot be converted to a valid number).
So if it's okay that extra characters at the end of the string are ignored, parseInt or parseFloat are fine. parseInt is quite handy when you need to specify a radix. Unary + is useful for ensuring the entire string is considered. Take your choice. :-)
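A few spot checks of the differences described above (each comment shows the result):
parseInt("10x");     // 10   — trailing junk ignored
parseInt("15", 16);  // 21   — explicit radix
parseFloat("10.5x"); // 10.5
+"10x";              // NaN  — the entire string must parse
+"0x15";             // 21
+"";                 // 0    — the empty-string gotcha
"10x" | 0;           // 0    — NaN from the conversion becomes 0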
For what it's worth, I tend to use this function:
const parseNumber = (str) => str ? +str : NaN;
(Or a variant that trims whitespace.) Note how it handles the issue with +"" being 0.
And finally: If you're converting to number and want to know whether the result is NaN, you might be tempted to do if (convertedValue === NaN). But that won't work, because as Rick points out below, comparisons involving NaN are always false. Instead, it's if (isNaN(convertedValue)).
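For instance:
const converted = +"abc"; // NaN
converted === NaN;        // false — NaN never compares equal, even to itself
isNaN(converted);         // true
Number.isNaN(converted);  // true  — ES2015 variant that doesn't coerce its argument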
MDN's docs on comparison operators state that the operands are converted to a common type before comparing (with the operator you're using):
The more commonly used abstract comparison (e.g. ==) converts the operands to the same Type before making the comparison. For relational abstract comparisons (e.g., <=), the operands are first converted to primitives, then to the same type, before comparison.
You'd only need to apply parseInt() if you were using a strict comparison, which does not perform the automatic coercion before comparing.
You should use parseInt if the var is a string:
parseInt(x, 10) >= 1 && parseInt(x, 10) <= 999;
(Note: >== and <== are not valid JavaScript operators; >= and <= are the comparison operators, and passing the radix 10 avoids surprises with leading zeros.)

What do "positive" and "negative" mean in ECMAScript? +0 and -0

I was reading the ECMAScript 5.1 spec. It says:
The slice method takes two arguments, start and end [...]. If start is negative, it is treated as length+start where length is the length of the array. If end is negative, it is treated as length+end where length is the length of the array.
What does "negative" mean? It makes sense that, like in math,
If num > 0, then num is positive
If num < 0, then num is negative.
But what about +0 and -0? In math there is a single 0, which is neither positive nor negative. My guess was that, in ECMAScript,
+0 (a.k.a. positive zero) is positive.
-0 (a.k.a. negative zero) is negative.
But I tried using -0 with slice, and browsers treat it as non-negative.
Then, are both +0 and -0 non-positive and non-negative, despite their names?
Where is the positiveness or negativeness of a number defined? I didn't find that defined in the ECMAScript spec. Is the definition inherited from IEEE 754?
Your confusion is in this part:
But what about +0 and -0? In math there is a single 0, which is neither positive nor negative. My guess was that, in ECMAScript,
+0 (a.k.a. positive zero) is positive.
-0 (a.k.a. negative zero) is negative.
+0 is not positive; -0 is not negative. Conceptually they both represent the number zero or, when underflow occurs, any number with a magnitude too small to be represented with the finite number of bits available.
The decision to have +0 and -0 comes more from IEEE than from ECMA.
Things can be confusing if you don't distinguish between the literals +0 and -0, which represent the mathematical value 0, and the values +0 and -0, which are the in-memory representations, respectively, for:
Any mathematical value from 0 to the smallest positive real number that can be stored in the double precision 64-bit data format
Any mathematical value from the largest negative real number that can be stored in the double precision 64-bit data format to 0
If you have a variable containing the Number instance -0, this could be representing the real number 0 (which obviously has no sign), or it could be representing the real number 10^-10000.
If you see the literal -0 or +0 in code, this will be interpreted as the real number 0, which is stored (just like any sufficiently tiny but not actually 0 real number) as the Number -0 or +0, as the case may be.
Here are some relevant sections from the spec that will hopefully clarify things:
Numeric literals
The Number type
Algorithm conventions
Why -0===+0
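A quick console check confirms that slice treats -0 as plain (non-negative) 0:
[1, 2, 3].slice(-0); // [1, 2, 3] — behaves like slice(0)
[1, 2, 3].slice(-1); // [3]       — a genuinely negative start counts from the end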
The terms "positive number" and "negative number" are indeed defined in the ECMAScript spec:
8.5 The Number Type
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values [...]
The 9007199254740990 (that is, 2^53 − 2) distinct “Not-a-Number” values of the IEEE Standard are represented in ECMAScript as a single special NaN value [...]
There are two other special values, called positive Infinity and negative Infinity [...]
The other 18437736874454810624 (that is, 2^64 − 2^53) values are called the finite numbers. Half of these are positive numbers and half are negative numbers; for every finite positive Number value there is a corresponding negative value having the same magnitude.
Therefore,
Since +0 and -0 are two of those finite numbers, each must either be positive or negative.
Since each finite positive number must have a negative counterpart, either +0 is positive and -0 is negative, or the opposite.
It would be too trollish if the positive zero was a negative number, and negative zero was a positive one. So we can (probably) assume that +0 is positive and -0 is negative.
However, according to the following, neither +0 nor -0 can be negative:
5.2 Algorithm Conventions
The mathematical function abs(x) yields the absolute value of x, which is −x if x is negative (less than zero) and otherwise is x itself.
In fact, in most cases the spec seems to differentiate the case when a variable is positive or negative from the case when it's zero. For example,
5.2 Algorithm Conventions
The mathematical function sign(x) yields 1 if x is positive and −1 if x is negative. The sign function is not used in this standard for cases when x is zero.
Therefore, the spec is contradictory.

Are +0 and -0 the same?

Reading through the ECMAScript 5.1 specification, +0 and -0 are distinguished.
Why then does +0 === -0 evaluate to true?
JavaScript uses the IEEE 754 standard to represent numbers. From Wikipedia:
Signed zero is zero with an associated sign. In ordinary arithmetic, −0 = +0 = 0. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero). This occurs in some signed number representations for integers, and in most floating point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.
The IEEE 754 standard for floating point arithmetic (presently used by most computers and programming languages that support floating point numbers) requires both +0 and −0. The zeroes can be considered as a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞, division by zero is only undefined for ±0/±0 and ±∞/±∞.
The article contains further information about the different representations.
So this is the reason why, technically, both zeros have to be distinguished.
However, +0 === -0 evaluates to true. Why is that (...) ?
This behaviour is explicitly defined in section 11.9.6, the Strict Equality Comparison Algorithm (emphasis partly mine):
The comparison x === y, where x and y are values, produces true or false. Such a comparison is performed as follows:
(...)
4. If Type(x) is Number, then
  a. If x is NaN, return false.
  b. If y is NaN, return false.
  c. If x is the same Number value as y, return true.
  d. If x is +0 and y is −0, return true.
  e. If x is −0 and y is +0, return true.
  f. Return false.
(...)
(The same holds for +0 == -0 btw.)
It seems logical to treat +0 and -0 as equal. Otherwise we would have to take this into account in our code, and I, personally, don't want to do that ;)
Note:
ES2015 introduces a new comparison method, Object.is. Object.is explicitly distinguishes between -0 and +0:
Object.is(-0, +0); // false
I'll add this as an answer because I overlooked @user113716's comment.
You can test for -0 by doing this:
function isMinusZero(value) {
  return 1 / value === -Infinity;
}
isMinusZero(0); // false
isMinusZero(-0); // true
I just came across an example where +0 and -0 behave very differently indeed:
Math.atan2(0, 0); //returns 0
Math.atan2(0, -0); //returns Pi
Be careful: even when you call Math.round on a small negative number like -0.0001, the result is actually -0, which can skew subsequent calculations, as shown above.
A quick and dirty way to fix this is to do something like:
if (x==0) x=0;
or just:
x+=0;
This converts the number to +0 in case it was -0.
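You can verify the normalization with Object.is:
let x = -0;
x += 0;           // -0 + (+0) produces +0
Object.is(x, 0);  // true
Object.is(x, -0); // false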
2021's answer
Are +0 and -0 the same?
Short answer: it depends on which comparison operation you use.
Long answer:
Basically, we've had four comparison types until now:
‘loose’ equality
console.log(+0 == -0); // true
‘strict’ equality
console.log(+0 === -0); // true
‘Same-value’ equality (ES2015's Object.is)
console.log(Object.is(+0, -0)); // false
‘Same-value-zero’ equality (ES2016)
console.log([+0].includes(-0)); // true
As a result, only Object.is(+0, -0) differs from the others.
const x = +0, y = -0;
console.log(x == y); // true -> using ‘loose’ equality
console.log(x === y); // true -> using ‘strict’ equality
console.log([x].indexOf(y)); // 0 (true) -> using ‘strict’ equality
console.log(Object.is(x, y)); // false -> using ‘Same-value’ equality
console.log([x].includes(y)); // true -> using ‘Same-value-zero’ equality
In the IEEE 754 standard used to represent the Number type in JavaScript, the sign is represented by a bit (a 1 indicates a negative number).
As a result, there exists both a negative and a positive value for each representable number, including 0.
This is why both -0 and +0 exist.
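A sketch that makes the sign bit visible, using a DataView over an 8-byte buffer (setFloat64 writes big-endian by default, so the first byte holds the sign bit):
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, -0);
view.getUint8(0).toString(2); // "10000000" — only the sign bit is set
view.setFloat64(0, +0);
view.getUint8(0).toString(2); // "0" — all bits clear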
Answering the original title Are +0 and -0 the same?:
brainslugs83 (in the comments on Spudley's answer) pointed out an important case in which +0 and -0 in JS are not the same, implemented as a function:
var sign = function(x) {
  return 1 / x === 1 / Math.abs(x);
};
Unlike the standard Math.sign, this distinguishes the two zeros: it returns true for +0 (and any positive number) and false for -0 (and any negative number).
We can use Object.is to distinguish +0 from -0, and, as a bonus, it treats NaN as equal to itself (unlike ==):
Object.is(+0, -0); // false
Object.is(NaN, NaN); // true
I'd blame it on the Strict Equality Comparison algorithm ('==='). Look at step 4d; see 7.2.13 Strict Equality Comparison in the specification.
If you need a sign function that supports -0 and +0:
var sign = x => 1/x > 0 ? +1 : -1;
It acts like Math.sign, except that sign(0) returns 1 and sign(-0) returns -1.
There are two possible values (bit representations) for 0. This is not unique to JavaScript; it can happen especially with floating-point numbers, because they are stored as a kind of formula (a sign, an exponent, and a mantissa).
Integers can be stored in separate ways too. You can have a numeric value with an additional sign bit, so in a 16-bit space you can store a 15-bit integer value and a sign bit. In this sign-magnitude representation, the values 8000 (hex) and 0000 both are 0, but one of them is +0 and the other is -0.
This could be avoided by offsetting the negative values by 1, so that they ranged from -1 down to -2^15, but this would be inconvenient.
A more common approach is to store integers in two's complement, but apparently ECMAScript has chosen not to for its Number type. In this method, positive numbers range from 0000 to 7FFF; negative numbers start at FFFF (-1) and run down to 8000 (-32768).
Of course, the same rules apply to larger integers too, but I don't want my F to wear out. ;)
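Two's complement has no negative zero at all, which is visible whenever JavaScript converts a Number to a 32-bit integer for its bitwise operators:
Object.is(-0 | 0, -0); // false — ToInt32 turns -0 into +0
Object.is(-0 | 0, 0);  // true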
Wikipedia has a good article to explain this phenomenon: http://en.wikipedia.org/wiki/Signed_zero
In brief, both +0 and -0 are defined in the IEEE floating-point specification. Both are technically distinct from an unsigned 0 (an integer), but in practice they all evaluate to zero, so the distinction can be ignored for most practical purposes.
