To infinity and beyond in JavaScript

In JavaScript there is a global property named Infinity, and to the best of my knowledge the value of Infinity is 1.797693134862315E+308 (I may be wrong there).
I also understand that any number larger than 1.797693134862315E+308 is considered a "bad number". If that is the case, then why does my code (below) work perfectly fine?
This is my code:
// Largest number in JavaScript = "1.797693134862315E+308"
// Buzz = Infinity + "0.1"
var buzz = 1.897693134862315E+308;
// Why is no error thrown, even though the value of "buzz" is a bad number...
if (buzz >= Infinity) {
  console.log("To infinity and beyond.");
}
The output is:
=> "To infinity and beyond."
There is a working example of my code on Repl.it

The value of Infinity is Infinity. It is not the number you mention, which is Number.MAX_VALUE. Infinity is a constant that has meaning in the number system.
Adding a small number to a large floating-point value doesn't overflow, because the sum simply rounds back to the same large value; that's how floating point works. If you add a large enough number to a large number, as in
Number.MAX_VALUE + Number.MAX_VALUE
then it will overflow and you'll get Infinity.
You can read more about IEEE 754 Floating Point math on Wikipedia or various other sources.
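For instance, a quick check you can paste into a console (a sketch, not part of the original answer) shows that the literal from the question already overflows to Infinity when it is parsed, and that only a sufficiently large addend pushes Number.MAX_VALUE over the edge:
// A literal above ~1.7976931348623157e+308 rounds to Infinity at parse time
var buzz = 1.897693134862315E+308;
console.log(buzz);                                      // Infinity
console.log(buzz >= Infinity);                          // true
console.log(Number.MAX_VALUE + 1 === Number.MAX_VALUE); // true -- the +1 is lost in rounding
console.log(Number.MAX_VALUE + Number.MAX_VALUE);       // Infinity -- this sum really overflows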

Related

Why is 13596*0.1 different from 13596/10?

I just stumbled across something really odd in JavaScript.
My script was multiplying 13596 by 0.1, and the output was: 1359.6000000000001
We can agree that 0.1 = 1/10, so I tried it :
13596/10 = 1359.6
I tested it in Firefox and Chrome; the same result occurs.
I wondered if it was related to floating point, so I tried the following:
13596 * parseFloat(0.1) = 1359.6000000000001
Nope.
By the way, these are not equal:
(13596*0.1) === (13596/10) => false
Does anyone have an idea about this result?
(Here's the JSFiddle.)
Remember that, with floating points, a number you see on-screen is not necessarily exactly the number the computer is modelling.
For example, using node:
> 1359.6
1359.6
> (1359.6).toFixed(20)
'1359.59999999999990905053'
Node shows you 1359.6, but when you ask for more precision, you can see that the real number is not exactly what you saw -- it was rounded. The same is true of 0.1:
> (0.1).toFixed(20)
'0.10000000000000000555'
Generally when you work with floats you accept this imprecision and round numbers before displaying them.
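For example, a common pattern (a sketch, not from the original answer) is to keep the full float internally and round only when formatting output:
var result = 13596 * 0.1;                  // 1359.6000000000001 internally
console.log(result.toFixed(1));            // "1359.6" -- rounded to one decimal for display
console.log(Math.round(result * 10) / 10); // 1359.6 as a number, if you need to keep computing with it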
Some numbers are exactly representable, but it's safest to assume that floating point is imprecise. For example, 0.5 can be represented exactly:
> (0.5).toFixed(20)
'0.50000000000000000000'
Dividing by 10 gives the double closest to 1359.6 (the same value as the literal 1359.6), which is still not exactly 1359.6 -- and it is a different double from the one 13596 * 0.1 produces, which is why the === comparison above is false:
> 13596/10
1359.6
> (13596/10).toFixed(20)
'1359.59999999999990905053'
In other languages, division between integers results in an integer; in JavaScript, however, all numbers are modelled as floating point.
Generally whenever you need to precisely represent decimal numbers you should use a decimal number type, though this is not available natively in JavaScript.
Also, there is no point in writing parseFloat(0.1), as 0.1 is already a float.
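If you do need to compare such values, one common workaround (a sketch; the helper name approximatelyEqual is purely illustrative) is to test against a small relative tolerance instead of using ===:
// Treat two floats as equal if their difference is tiny relative to their magnitude
function approximatelyEqual(a, b, epsilon) {
  epsilon = epsilon || Number.EPSILON;
  return Math.abs(a - b) <= epsilon * Math.max(Math.abs(a), Math.abs(b));
}
console.log((13596 * 0.1) === (13596 / 10));              // false
console.log(approximatelyEqual(13596 * 0.1, 13596 / 10)); // true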

Infinite from zero division

In JavaScript, if you divide by 0 you get Infinity
typeof Infinity; //number
isNaN(Infinity); //false
This insinuates that Infinity is a number (of course, no argument there).
What I learned is that anything divided by zero is an indeterminate form: it has no value and is not a number.
That definition however is for arithmetic, and I know that in programming it can either yield Infinity, Not a Number, or just throw an exception.
So why return Infinity? Does anybody have an explanation for that?
First off, resulting in Infinity is not due to some crazy math behind the scenes. The spec states that:
Division of a non-zero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
The logic of the spec authors goes along these lines:
2/1 = 2. Simple enough.
2/0.5 = 4. Halving the denominator doubles the result.
...and so on:
2/0.0000000000000005 = 4e+15. As the denominator trends toward zero, the result grows. Thus, the spec authors decided that division by zero, like any other operation whose result is too big for JavaScript to represent [0], should default to Infinity (instead of some quasi-numeric state or a divide-by-zero exception).
You can see this in action in the code of Google's V8 engine: https://github.com/v8/v8/blob/bd8c70f5fc9c57eeee478ed36f933d3139ee221a/src/hydrogen-instructions.cc#L4063
[0] "If the magnitude is too large to represent, the operation overflows; the result is then an infinity of appropriate sign."
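You can see the progression that motivates this rule directly in a console (a quick illustration, not part of the spec text):
console.log(2 / 1);                  // 2
console.log(2 / 0.5);                // 4
console.log(2 / 0.0000000000000005); // approximately 4e+15 -- the result keeps growing
console.log(2 / 0);                  // Infinity
console.log(-2 / 0);                 // -Infinity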
Javascript is a loosely typed language which means that it doesn't have to return the type you were expecting from a function.
Infinity isn't actually an integer.
In a strongly typed language, if your function was supposed to return an int, then the only thing you can do when you get a value that isn't an int is to throw an exception.
In a loosely typed language you have another option, which is to return a new type that represents the result better (such as, in this case, Infinity).
Infinity is very different from indetermination.
If you compute x/0+ you get +Infinity, and for x/0- you get -Infinity (with x>0 in that example).
JavaScript uses it to note that you have exceeded the capacity of the underlying floating-point storage.
You can then handle it to direct your software towards either exceptional cases or a big-number version of your computation.
Infinity is actually consistent in formulae. Without it, you have to break formulae into small pieces, and you end up with more complicated code.
Try this, and you get j as Infinity:
var i = Infinity;
var j = 2*i/5;
console.log("result = "+j);
This is because JavaScript uses floating-point arithmetic, and returning ±infinity is how IEEE 754 handles the division-by-zero exception by default:
Division by zero (an operation on finite operands gives an exact infinite result, e.g., 1/0 or log(0)) (returns ±infinity by default).
(Source: Wikipedia)
When x tends towards 0 in the formula y=1/x, y tends towards infinity. So it makes sense that something which would end up as a really large number (following that logic) is represented by Infinity. Above roughly 1.8 × 10^308, JavaScript returns Infinity instead of the actual result, so if a calculation would otherwise end up above that threshold, it just returns Infinity instead.
As determined by the ECMAScript language specification:
The sign of the result is positive if both operands have the same sign, negative if the operands have different signs.
Division of an infinity by a zero results in an infinity. The sign is determined by the rule already stated above.
Division of a nonzero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
As the denominator of an arithmetic fraction tends towards 0 (for a finite non-zero numerator) the result tends towards +Infinity or -Infinity depending on the signs of the operands. This can be seen by:
1/0.1 = 10
1/0.01 = 100
1/0.001 = 1000
1/0.0000000001 = 10000000000
1/1e-308 = 1e308
Taking this further, when you perform a division by zero the JavaScript engine gives the following results (as determined by the spec quoted above):
1/0 = Number.POSITIVE_INFINITY
-1/0 = Number.NEGATIVE_INFINITY
-1/-0 = Number.POSITIVE_INFINITY
1/-0 = Number.NEGATIVE_INFINITY
It is the same if you divide by a sufficiently small value:
1/1e-309 = Number.POSITIVE_INFINITY
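These sign rules also interact with JavaScript's signed zero; here is a quick console sketch (not part of the spec quote above):
console.log(1 / 0);            // Infinity
console.log(1 / -0);           // -Infinity
console.log(Object.is(0, -0)); // false -- Object.is can distinguish the two zeros
console.log(1 / Infinity);     // 0 -- going the other way, dividing by Infinity gives zero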

Javascript Infinity Object

I'm calculating the mean value of a function's requests/sec; apparently the resulting number is sometimes too long, so it displays as Infinity. Is there a way to round it so it shows a number only? Or a way to sleep()/wait() while it's Infinity?
Well, to be exact, I'm monitoring req/sec on a graph; when it's Infinity the line goes up, not towards zero.
It's not too long to display. If you get Infinity then you can't do anything with it other than know that it is something larger than the maximum possible value. This is the behavior of the IEEE floating-point numbers that are used in JavaScript.
Probably the cause for this Infinity is a division by zero, not a big number.
You are most likely unintentionally dividing by zero.
var num = 1/0;
console.log(num);
//>Infinity
Conditionally check that the divisor is not zero before dividing.
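A minimal sketch of that check applied to the request-rate case from the question (the names requestCount and elapsedSeconds are purely illustrative):
// Guard against a zero (or missing) divisor before computing the rate
function requestsPerSecond(requestCount, elapsedSeconds) {
  if (!elapsedSeconds) {
    return 0; // or skip this data point entirely
  }
  return requestCount / elapsedSeconds;
}
console.log(requestsPerSecond(500, 2)); // 250
console.log(requestsPerSecond(500, 0)); // 0 instead of Infinity
Alternatively, test the result with Number.isFinite() before plotting it.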
You can check the largest and smallest positive representable Number values as follows:
console.log([Number.MAX_VALUE, Number.MIN_VALUE]);
//>[1.7976931348623157e+308, 5e-324]
See also the official ECMA Description on Numbers

Adding to Number.MAX_VALUE

The answer to this question may be painfully obvious but I can't find it in the Mozilla docs nor on Google from a cursory search.
If you have some code like this
Number.MAX_VALUE + 1; // Infinity, right?
Number.MIN_VALUE - 1; // -Infinity, right?
I would expect adding anything to Number.MAX_VALUE to push it over to Infinity, but the result is just Number.MAX_VALUE spat right back at me.
However, when playing around in the Chrome JS console, I noticed that it didn't actually become Infinity until I added/subtracted enough:
Number.MAX_VALUE + Math.pow(100,1000); // now we hit Infinity
Number.MIN_VALUE - Math.pow(100,1000); // -Infinity at last
What is the explanation for this "buffer" between Number.MAX_VALUE and Infinity?
Standardwise...
In ECMAScript, addition of two nonzero finite numbers is implemented as (ECMA-262 §11.6.3 "Applying the Additive Operators to Numbers"):
the sum is computed and rounded to the nearest representable value using IEEE 754 round-to-nearest mode. If the magnitude is too large to represent, the operation overflows and the result is then an infinity of appropriate sign.
IEEE-754's round-to-nearest mode specifies that (IEEE-754 2008 §4.3.1 "Rounding-direction attributes to nearest")
In the following two rounding-direction attributes, an infinitely precise result with magnitude at least b^emax × (b − ½ × b^(1−p)) shall round to ∞ with no change in sign; here emax and p are determined by the destination format (see 3.3). With:
roundTiesToEven, the floating-point number nearest to the infinitely precise result shall be delivered; if the two nearest floating-point numbers bracketing an unrepresentable infinitely precise result are equally near, the one with an even least significant digit shall be delivered
roundTiesToAway, the floating-point number nearest to the infinitely precise result shall be delivered; if the two nearest floating-point numbers bracketing an unrepresentable infinitely precise result are equally near, the one with larger magnitude shall be delivered.
ECMAScript does not specify which of the two round-to-nearest modes is used, but it doesn't matter here because both give the same result. The Number type in ECMAScript is "double", in which
b = 2
emax = 1023
p = 53,
so the result must be at least 2^1024 − 2^970 ≈ 1.7976931348623158 × 10^308 in order to round to infinity. Otherwise it will just round to MAX_VALUE, because that is closer than Infinity.
Notice that MAX_VALUE = 2^1024 − 2^971, so you need to add at least 2^971 − 2^970 = 2^970 ≈ 9.979202 × 10^291 in order to get Infinity. We could check:
>>> Number.MAX_VALUE + 9.979201e291
1.7976931348623157e+308
>>> Number.MAX_VALUE + 9.979202e291
Infinity
Meanwhile, your Math.pow(100,1000) ≈ 2^6643.9 is well beyond 2^1024 − 2^970. It is already infinity.
If you look at Number.MAX_VALUE.toString(2), you'll see that the binary representation of MAX_VALUE is 53 ones followed by 971 zeros. This is because IEEE 754 floating-point numbers consist of a mantissa (significand) multiplied by a power of 2 (the other half of the floating-point number is the exponent). With MAX_VALUE, both the mantissa and the exponent are maxed out, so you see a bunch of ones bit-shifted up a lot.
In short, you need to increase MAX_VALUE enough to actually affect the mantissa, otherwise your additional value gets lost and rounded out.
Math.pow(2, 969) is the highest power of 2 that will not tip MAX_VALUE into Infinity.
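A quick console check of that boundary (a sketch; the exact powers follow from the analysis above):
console.log(Number.MAX_VALUE + Math.pow(2, 969) === Number.MAX_VALUE); // true -- rounds back down to MAX_VALUE
console.log(Number.MAX_VALUE + Math.pow(2, 970));                      // Infinity -- crosses the rounding threshold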

2.9999999999999999 >> .5?

I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found out that the highest possible value which, when right-shifted by .5, still yields 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with 2^x. The highest possible values of the following numbers that still truncate properly:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
Actually, you're simply ending up doing a floor() on the first operand, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine is converting the two operands to integers first:
2.999999 >> 0.5
Becomes:
Math.floor(2.999999) >> Math.floor(0.5)
Which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer.
The SpiderMonkey source code has:
switch (op) {
  case JSOP_LSH:
  case JSOP_RSH:
    if (!js_DoubleToECMAInt32(cx, d, &i)) // Same as Math.floor()
      return JS_FALSE;
    if (!js_DoubleToECMAInt32(cx, d2, &j)) // Same as Math.floor()
      return JS_FALSE;
    j &= 31;
    d = (op == JSOP_LSH) ? i << j : i >> j;
    break;
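In plain JavaScript terms, both operands are converted to 32-bit integers before the shift, so the following all hold (a quick sketch for positive values):
console.log(2.999999 >> 0.5);                  // 2 -- same as 2 >> 0
console.log(2.999999 >> 1.9);                  // 1 -- the shift amount 1.9 is also truncated, to 1
console.log((10.7 >> 0) === Math.floor(10.7)); // true for positive numbers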
The "rounding up" you're seeing with certain numbers is due to the fact that the JavaScript engine can't handle decimal digits beyond a certain precision, and therefore your number ends up getting rounded up to the next integer. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.
This is possibly the single worst idea I have ever seen. Its only possible purpose for existing is for winning an obfuscated code contest. There's no significance to the long numbers you posted -- they're an artifact of the underlying floating-point implementation, filtered through god-knows how many intermediate layers. Bit-shifting by a fractional number of bits is insane and I'm surprised it doesn't raise an exception -- but that's JavaScript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple suspicions I had when reading the question:
Right-shifting any fractional number x by any fractional number y will simply truncate x, giving the same result as Math.floor() while thoroughly confusing the reader.
2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
If you wanna go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest a floating-point value can get to representing the second number is greater than or equal to 3, whereas the closest a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.
I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x = 2.9999999999999996 y=3
The shift right operator only operates on integers (both sides). So, shifting right by .5 bits should be exactly equivalent to shifting right by 0 bits. And, the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor().
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
Good guess, but no cigar.
As the double-precision FP number has 53 bits of precision, the last FP number before 3 is exactly:
2.999999999999999555910790149937383830547332763671875
But why is it
2.9999999999999997779553950749686919152736663818359375
(and this is exact, not ...49999!)
which is higher than the last representable value below 3? Rounding. The conversion routine (string to number) is simply, and correctly, programmed to round the input to the nearest floating-point number.
2.999999999999999555910790149937383830547332763671875
.......(values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375
....... (values between, increasing) -> round up to 3
3
The conversion input must use full precision. If the number is exactly halfway between those two FP numbers (which is the case for 2.9999999999999997779553950749686919152736663818359375), the rounding depends on the rounding mode in effect. The default is round-to-even, meaning that the tie is broken towards the neighbour whose last significand bit is even (0).
Now
3 = 11. (binary)
2.999... = 10.11111111111...... (binary)
All of its bits are set, so its last bit is odd. That means the exact halfway value is rounded up to 3, and that is why you get the strange ...49999 period: the number must be smaller than the exact halfway point to be distinguishable from 3.
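You can confirm both halves of that argument in a console (a quick sketch, not part of the original answer):
// The largest double below 3: "10." followed by 51 ones, so its last significand bit is odd
console.log((2.999999999999999555910790149937383830547332763671875).toString(2));
// The exact halfway string is therefore rounded up to the "even" neighbour, which is 3
console.log(Number("2.9999999999999997779553950749686919152736663818359375") === 3); // true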
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
I don't know if JavaScript uses floating-point numbers or some kind of infinite-precision library, but either way, you're going to get rounding errors on an operation like this -- even if it's pretty well defined.
It should be noted that the number ".0000000000000007779553950749686919152736663818359374" is quite possibly the Epsilon, defined as "the smallest number E such that (1+E) > 1."
