Why is Infinity / Infinity not 1? - javascript

If
Infinity === Infinity
>> true
and
typeof Infinity
>> "number"
then why is
Infinity / Infinity
>> NaN
and not 1?

Beware any assumptions you make about the arithmetic behaviour of infinity.
If ∞/∞ were 1, that would mean 1×∞ = ∞, which is fine on its own. But 2×∞ = ∞ as well, so by the same reasoning it must also be the case that ∞/∞ = 2. There is no single consistent answer, which is why the result is left undefined.
Since it has come up in discussion against another answer, I'd like to point out that the equation 2×∞ = ∞ does not imply that there are multiple infinities. All countably infinite sets have the same cardinality. I.e., the set of integers has the same cardinality as the set of odd numbers, even though the second set is missing half the elements from the first set. (OTOH, there are other kinds of "infinity", such as the cardinality of the set of reals, but doubling the countable infinity doesn't produce one of these. Nor does squaring it, for that matter.)
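You can see the same thing play out in the console (a quick demonstration of my own, not part of the answer above):
console.log(2 * Infinity === Infinity); // true: doubling infinity changes nothing
console.log(Infinity / Infinity);       // NaN: no single consistent quotient exists
console.log(Infinity - Infinity);       // NaN: the same reasoning applies to subtraction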

Because the specification says so:
Division of an infinity by an infinity results in NaN.
I'm not a mathematician, but even from that point of view, having 1 as the result does not make sense. Infinities can be different, and the fact that they compare as equal in JavaScript does not justify treating them as equal everywhere else (or letting the division return 1, for that matter). (edit: as I said, I'm not a mathematician ;)).

The result is mathematically undefined; it has nothing to do with JavaScript. ∞/∞ has no single consistent value.

It's recognizable from Calculus I: it's an indeterminate form!

Related

Why does Math.min() return -0 from [+0, 0, -0]

I know (-0 === 0) comes out to be true. I am curious to know why -0 < 0 seems to happen here.
When I run this code in the Stack Overflow execution context, it returns 0.
const arr = [+0, 0, -0];
console.log(Math.min(...arr));
But when I run the same code in the browser console, it returns -0. Why is that? I have tried searching on Google but didn't find anything useful. This question might not have much practical value; I just want to understand how JS calculates it.
const arr = [+0, 0, -0];
console.log(Math.min(...arr)); // -0
-0 is not less than 0 or +0; both -0 < 0 and -0 < +0 return false. You're mixing up the behavior of Math.min with the comparison of -0 against 0/+0.
The specification of Math.min is clear on this point:
b. If number is -0𝔽 and lowest is +0𝔽, set lowest to -0𝔽.
Without this exception, the behavior of Math.min and Math.max would depend on the order of arguments, which can be considered an odd behavior — you probably want Math.min(x, y) to always equal Math.min(y, x) — so that might be one possible justification.
Note: This exception was already present in the 1997 specification for Math.min(x, y), so that's not something that was added later on.
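To illustrate the commutativity argument (a sketch of my own; naiveMin is a hypothetical helper, and Object.is is used only to tell -0 from +0):
const naiveMin = (a, b) => (a < b ? a : b);   // order-dependent: returns b whenever a < b is false
console.log(Object.is(naiveMin(+0, -0), -0)); // true: the second argument, -0, wins
console.log(Object.is(naiveMin(-0, +0), -0)); // false: now the second argument is +0
console.log(Object.is(Math.min(+0, -0), -0)); // true
console.log(Object.is(Math.min(-0, +0), -0)); // true: same result regardless of argument order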
This is a specialty of Math.min, as specified:
21.3.2.25 Math.min ( ...args )
[...]
For each element number of coerced, do
a. If number is NaN, return NaN.
b. If number is -0𝔽 and lowest is +0𝔽, set lowest to -0𝔽.
c. If number < lowest, set lowest to number.
Return lowest.
Note that in most cases, +0 and -0 are treated equally, also in the ToString conversion, thus (-0).toString() evaluates to "0". That you can observe the difference in the browser console is an implementation detail of the browser.
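For example (a quick check of my own; Object.is and 1/x are two standard ways of observing the sign of zero):
const result = Math.min(+0, 0, -0);
console.log(result.toString());     // "0": ToString hides the sign
console.log(Object.is(result, -0)); // true: the value really is -0
console.log(1 / result);            // -Infinity: dividing by it reveals the sign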
The point of this answer is to explain why the language design choice of having Math.min be fully commutative makes sense.
I am curious to know why -0 < 0 happens?
It doesn't really; < is a separate operation from "minimum", and Math.min isn't based solely on IEEE < comparison like b<a ? b : a.
That would be non-commutative wrt. NaN as well as signed zero. (< is false if either operand is NaN, so b<a ? b : a would produce a.)
As far as principle of least surprise, it would be at least as surprising (if not moreso) if Math.min(-1,NaN) was NaN but Math.min(NaN, -1) was -1.
The JS language designers wanted Math.min to be NaN-propagating, so basing it just on < wasn't possible anyway. They chose to make it fully commutative including for signed zero, which seems like a sensible decision.
OTOH, most code doesn't care about signed zero, so this language design choice costs a bit of performance for everyone to cater to the rare cases where someone wants well-defined signed-zero semantics.
If you want a simple operation that ignores NaN in an array, iterate yourself with current_min = x < current_min ? x : current_min. That will ignore all NaN, and also ignore -0 for current_min <= +0.0 (IEEE comparison). Or if current_min starts out NaN, it will stay NaN. Many of those things are undesirable for a Math.min function, so it doesn't work that way.
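A minimal sketch of such a loop (my own illustration; naiveArrayMin is a hypothetical name):
function naiveArrayMin(arr) {
  let currentMin = Infinity;
  for (const x of arr) {
    currentMin = x < currentMin ? x : currentMin; // the comparison is false for NaN, so NaN is skipped
  }
  return currentMin;
}
console.log(naiveArrayMin([3, NaN, -1]));           // -1: NaN is ignored
console.log(Math.min(3, NaN, -1));                  // NaN: Math.min propagates it
console.log(Object.is(naiveArrayMin([0, -0]), -0)); // false: a later -0 never replaces +0
console.log(Object.is(Math.min(0, -0), -0));        // true: Math.min does prefer -0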
If you compare other languages, the C standard fmin function is commutative wrt. NaN (returning the non-NaN if there is one, opposite of JS), but is not required to be commutative wrt. signed zero. Some C implementations choose to work like JS for +-0.0 for fmin / fmax.
But C++ std::min is defined purely in terms of a < operation, so it does work that way. (It's intended to work generically, including on non-numeric types like strings; unlike std::fmin it doesn't have any FP-specific rules.) See What is the instruction that gives branchless FP min and max on x86? re: x86's minps instruction and C++ std::min which are both non-commutative wrt. NaN and signed zero.
IEEE 754 < doesn't give you a total order over distinct FP numbers. Math.min does, except for NaNs (e.g. if you built a sorting network with it and Math.max). Its ordering disagrees with Math.max's where NaN is concerned: they both return NaN if there is one, so a sorting network using min/max comparators would produce all NaNs if there were any in the input array.
Math.min alone wouldn't be sufficient for sorting without something like == to see which arg it returned, but that breaks down for signed zero as well as NaN.
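To make the sorting-network point concrete (a small sketch of my own; compareExchange is a hypothetical comparator stage):
function compareExchange(a, b) {
  return [Math.min(a, b), Math.max(a, b)]; // one comparator stage of a sorting network
}
console.log(compareExchange(5, 2));   // [2, 5]
console.log(compareExchange(5, NaN)); // [NaN, NaN]: a single NaN poisons both outputs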
The spec is curiously contradictory. The < comparison rule explicitly says that -0 is not less than +0. However, the spec for Math.min() says the opposite: if the current (while iterating through the arguments) value is -0, and the smallest value so far is +0, then the smallest value should be set to -0.
I would like for somebody to activate the T.J. Crowder signal for this one.
edit — it was suggested in some comments that a possible reason for the behavior is to make it possible to detect a -0 value, even though for almost all purposes in normal expressions the -0 is treated as being plain 0.

Infinity from zero division

In JavaScript, if you divide by 0 you get Infinity
typeof Infinity; //number
isNaN(Infinity); //false
This insinuates that Infinity is a number (of course, no argument there).
What I learned is that anything divided by zero is an indeterminate form, i.e. it has no value and is not a number.
That definition however is for arithmetic, and I know that in programming it can either yield Infinity, Not a Number, or just throw an exception.
So why throw Infinity? Does anybody have an explanation on that?
First off, resulting in Infinity is not due to some crazy math behind the scenes. The spec states that:
Division of a non-zero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
The logic of the spec authors goes along these lines:
2/1 = 2. Simple enough.
2/0.5 = 4. Halving the denominator doubles the result.
...and so on:
2/0.0000000000000005 = 4e+15. As the denominator trends toward zero, the result grows without bound. Thus, the spec authors decided that division by zero should default to Infinity (instead of some quasi-numeric state or a divide-by-zero exception), just like any other operation that results in a number too big for JavaScript to represent [0].
You can see this in action in the code of Google's V8 engine: https://github.com/v8/v8/blob/bd8c70f5fc9c57eeee478ed36f933d3139ee221a/src/hydrogen-instructions.cc#L4063
[0] "If the magnitude is too large to represent, the operation overflows; the result is then an infinity of appropriate sign."
JavaScript is a loosely typed language, which means that a function doesn't have to return the type you were expecting.
Infinity isn't actually an integer.
In a strongly typed language, if your function is supposed to return an int, the only thing you can do when you get a value that isn't an int is to throw an exception.
In a loosely typed language you have another option, which is to return a different type that represents the result better (in this case, Infinity).
Infinity is very different from indetermination.
If you compute x/0+ you get +Infinity, and for x/0- you get -Infinity (with x>0 in that example).
JavaScript uses it to note that you have exceeded the capacity of the underlying floating-point storage.
You can then handle it to direct your software towards either the exceptional cases or a big-number version of your computation.
Infinity is actually consistent in formulae. Without it, you have to break formulae into small pieces, and you end up with more complicated code.
Try this, and you get j as Infinity:
var i = Infinity;
var j = 2*i/5;
console.log("result = "+j);
This is because JavaScript uses floating-point arithmetic and its rules for handling division by zero:
Division by zero (an operation on finite operands gives an exact infinite result, e.g., 1/0 or log(0)) (returns ±infinity by default).
Wikipedia source
When x tends towards 0 in the formula y=1/x, y tends towards infinity. So it would make sense if something that would end up as a really high number (following that logic) were represented by Infinity. JavaScript cannot represent numbers above roughly 1.8×10^308 (Number.MAX_VALUE), so if a calculation would end up above that threshold, it just returns Infinity instead.
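A quick check of my own showing where that cut-off actually sits:
console.log(Number.MAX_VALUE); // 1.7976931348623157e+308, the largest finite Number
console.log(1e308 * 2);        // Infinity: the product overflows
console.log(1e309);            // Infinity: even the literal is already too large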
As determined by the ECMAScript language specification:
The sign of the result is positive if both operands have the same
sign, negative if the operands have different signs.
Division of an infinity by a zero results in an infinity. The sign is
determined by the rule already stated above.
Division of a nonzero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
As the denominator of an arithmetic fraction tends towards 0 (for a finite non-zero numerator) the result tends towards +Infinity or -Infinity depending on the signs of the operands. This can be seen by:
1/0.1 = 10
1/0.01 = 100
1/0.001 = 1000
1/0.0000000001 = 10000000000
1/1e-308 = 1e308
Taking this further then when you perform a division by zero then the JavaScript engine gives the result (as determined by the spec quoted above):
1/0 = Number.POSITIVE_INFINITY
-1/0 = Number.NEGATIVE_INFINITY
-1/-0 = Number.POSITIVE_INFINITY
1/-0 = Number.NEGATIVE_INFINITY
It is the same if you divide by a sufficiently small value, so that the result overflows:
1/1e-309 = Number.POSITIVE_INFINITY

In JavaScript, why does zero divided by zero return NaN, but any other divided by zero return Infinity?

It seems to me that the code
console.log(1 / 0)
should return NaN, but instead it returns Infinity. However this code:
console.log(0 / 0)
does return NaN. Can someone help me to understand the reasoning for this functionality? Not only does it seem to be inconsistent, it also seems to be wrong, in the case of x / 0 where x !== 0
Because that's how floating-point is defined (more generally than just Javascript). See for example:
http://en.wikipedia.org/wiki/Floating-point#Infinities
http://en.wikipedia.org/wiki/NaN#Creation
Crudely speaking, you could think of 1/0 as the limit of 1/x as x tends to zero (from the right). And 0/0 has no reasonable interpretation at all, hence NaN.
In addition to answers based on the mathematical concept of zero, there is a special consideration for floating point numbers. Every underflow result, every non-zero number whose absolute magnitude is too small to represent as a non-zero number, is represented as zero.
0/0 may really be 1e-500/1e-600, or 1e-600/1e-500, or many other ratios of very small values.
The actual ratio could be anything, so there is no meaningful numerical answer, and the result should be a NaN.
Now consider 1/0. It does not matter whether the 0 represents 1e-500 or 1e-600. Regardless, the division would overflow and the correct result is the value used to represent overflows, Infinity.
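A quick illustration of that underflow point (my own; the literals are arbitrary values far too small to represent):
console.log(1e-500);          // 0: far too small to represent, underflows to zero
console.log(1e-400 / 1e-500); // NaN: both operands have already become 0
console.log(1 / 1e-500);      // Infinity: a finite value divided by that same zero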
I realize this is old, but I think it's important to note that in JS there is also a -0, distinct from 0 or +0, which makes this behavior of JS much more logical than it seems at first glance.
1 / 0 -> Infinity
1 / -0 -> -Infinity
This logically makes sense since, in calculus, the reason dividing by 0 is undefined is that the left limit goes to negative infinity and the right limit to positive infinity. Since -0 and 0 are distinct values in JS, it makes sense for division by positive 0 to evaluate to positive Infinity and division by negative 0 to evaluate to negative Infinity.
This logic does not apply to 0/0, which is indeterminate. Unlike with 1/0, we can get two results taking limits by this method with 0/0
lim h->0(0/h) = 0
lim h->0(h/0) = Infinity
which of course is inconsistent, so it results in NaN
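Poking at those two limits numerically (a rough sketch of my own, not a rigorous limit):
console.log(0 / 1e-9); // 0: numerator pinned at 0 while the denominator shrinks
console.log(1e-9 / 0); // Infinity: numerator shrinks while the denominator is 0
console.log(0 / 0);    // NaN: the two paths disagree, so there is no single answer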

JavaScript Math.floor: how guarantee number will round down?

I want to normalize an array so that each value is
in [0-1) .. i.e. "the max will never be 1 but the min can be 0."
This is not unlike the random function returning numbers in the same range.
While looking at this, I found that .99999999999999999===1 is true!
Ditto (1-Number.MIN_VALUE) === 1. But Math.ceil(Number.MIN_VALUE) is 1, as it should be.
Some others: Math.floor(.999999999999) is 0
while Math.floor(.99999999999999999) is 1
OK so there are rounding problems in JS.
Is there any way I can normalize a set of numbers to lie in the range [0,1)?
It may help to examine the steps that JavaScript performs for each of your expressions.
In .99999999999999999===1:
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result. (The next closest Number is 0.99999999999999988897769753748434595763683319091796875, which is 1 - 2^-53.)
Then 1 is compared to 1. The result is true.
In (1-Number.MIN_VALUE) === 1:
Number.MIN_VALUE is 2^-1074, about 5e-324.
1 - 2^-1074 is extremely close to one. The exact value cannot be represented as a Number, so the nearest value is used. Again, the nearest value is 1.
Then 1 is compared to 1. The result is true.
In Math.ceil(Number.MIN_VALUE):
Number.MIN_VALUE is 2^-1074, about 5e-324.
The ceiling function of that value is 1.
In Math.floor(.999999999999):
The source text .999999999999 is converted to a Number. The closest Number is 0.99999999999900002212172012150404043495655059814453125, so that is the result.
The floor function of that value is 0.
In Math.floor(.99999999999999999):
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result.
The floor function of 1 is 1.
There are only two surprising things here, at most. One is that the numerals in the source text are converted to internal Number values. But this should not be surprising. Of course text has to be converted to internal representations of numbers, and the Number type cannot perfectly store all the infinitely many numbers. So it has to round. And of course numbers very near 1 round to 1.
The other possibly surprising thing is that 1-Number.MIN_VALUE is 1. But this is actually the same issue: The exact result is not representable, but it is very near 1, so 1 is used.
The Math.floor function works correctly. It never introduces any error, and you do not have to do anything to guarantee that it will round down. It always does.
However, since you want to normalize numbers, it seems likely you are going to divide numbers at some point. When you divide, there may be rounding problems, because many results of division are not exactly representable, so they must be rounded.
However, that is a separate problem, and you have not given enough information in this question to address the specific calculations you plan to do. You should open a separate question for it.
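That said, here is a minimal sketch of one way to do it (my own, not from the answer above; normalize is a hypothetical helper that scales min-to-max and then clamps at 1 - Number.EPSILON, a representable value just below 1):
function normalize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const range = max - min;
  if (range === 0) return values.map(() => 0);  // all values equal: map everything to 0
  // (v - min) / range lands in [0, 1]; clamp so the maximum stays strictly below 1
  return values.map(v => Math.min((v - min) / range, 1 - Number.EPSILON));
}
console.log(normalize([2, 4, 6])); // [0, 0.5, 0.9999999999999998]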
JavaScript will treat any number literal between roughly 0.999999999999999994 and 1 as 1, so just subtract .000000000000000006.
Of course that's not as easy as it sounds, since 1 - .000000000000000006 simply rounds back to 1 in JavaScript, so you could do something like:
function trueFloor(x)
{
    x = x * 100;                   // shift two decimal places
    if (x > .0000000000000006)
        x = x - .0000000000000006; // nudge the value down by a tiny amount
    x = Math.floor(x / 100);
    return x;
}
EDIT: Or at least you'd think you could. Apparently JS converts the literal .99999999999999999 to 1 before it is ever passed to a function, so you'd have to try something like:
trueFloor("0.99999999999999999")
function trueFloor(str)
{
    var x = str.substring(0, 9) + 0; // string concatenation: "0.9999999" + 0 gives "0.99999990"
    return Math.floor(x); //=> 0 (Math.floor coerces the string to a Number)
}
Not sure why you'd need that level of precision, but in theory, I guess it works. You can see a working fiddle here
As long as you cast your insanely precise float as a string, that's probably your best bet.
Please understand one thing: this...
.999999999999999999
... is just a Number literal. Just as
.999999999999999998
.999999999999999997
.999999999999999996
...
... you see the pattern.
How JavaScript treats these literals is completely another story. And yes, this treatment is limited by the number of bits that can be used to store a Number value.
The number of possible floating-point literals is infinite by definition, no matter how small the range set for them. For example, take the ones shown above: how many numbers very close to 1 can you express? Infinitely many: just keep appending 9s to the line.
But the container for each Number value is quite finite: it has 64 bits. That means it can store at most 2^64 different values (Infinity, -Infinity and NaN among them), and that's all.
You want to work with such literals anyway? Use Strings to store them, not Numbers - and some BigMath JS library (take your pick) to work with those values - as Strings, again.
But from your question it looks like you're not, as you talked about an array of Numbers - Number values, that is. And .999999999999999999 can never be stored there, as there is no such Number value in JavaScript.

Javascript Infinity Object

I'm calculating the mean value of a function's requests/sec. Apparently the resulting number is sometimes too large, so it displays as Infinity. Is there a way to round it so it shows a number only? Or to sleep()/wait() while it's at Infinity?
Well, to be exact, I'm monitoring req/sec on a graph; when it's Infinity the line goes up instead of towards zero.
It's not too long to display. If you get Infinity then you can't do anything with it other than know that it is something larger than the maximum representable value. This is the behavior of the IEEE floating-point numbers used in JavaScript.
Probably the cause for this Infinity is a division by zero, not a big number.
You are most likely unintentionally dividing by zero.
var num = 1/0;
console.log(num);
//>Infinity
Conditionally check that the divisor is not zero.
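For example (a hypothetical req/sec calculation of my own, just to show the guard; the variable names are made up):
var elapsedSeconds = 0; // e.g. the very first sample, before any time has passed
var requestCount = 42;
var reqPerSec = elapsedSeconds > 0 ? requestCount / elapsedSeconds : 0;
console.log(reqPerSec); // 0 instead of Infinity, so the graph stays usable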
You can check the maximum and minimum representable Number values as follows:
console.log([Number.MAX_VALUE, Number.MIN_VALUE]);
//>[1.7976931348623157e+308, 5e-324]
See also the official ECMA Description on Numbers
