Rounding quirk in JavaScript or IEEE-754? - javascript

I've come across a curious issue in one of my unit tests where I'm getting unexpected rounding results in JavaScript:
(2.005).toFixed(2)
// produces "2.00"
(2.00501).toFixed(2)
// produces "2.01"
Initially I suspected this was a WebKit-only issue, but it repros in Gecko, which implies to me that it is an expected side effect of either ECMA-262 or IEEE-754. I'm assuming the binary representation of 2.005 is ever so slightly less? Or does ECMA-262 specify a round-to-even methodology for toFixed?
Anyone care to shed some insight as to what is happening under the hood just to give me peace of mind?
Update: thanks for the comments.
I should add, one of the things that made me a little nervous was the comments found in a quick search in Webkit dtoa.cpp which seemed to imply that there were multiple paths to rounding and the devs weren't really sure how it worked, including a related FIXME:
https://trac.webkit.org/browser/trunk/Source/WTF/wtf/dtoa.cpp#L1110
Also, not that it means much but IE9 rounds it as I expected, implying that it either isn't part of ECMA-262 or they have a bug.

If the specification hasn't changed since Rev. 6 of the ECMA 262 draft (edition 5.1, March 2011), (2.005).toFixed(2) must return the string "2.00", since a "Number value" is a
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
and the interpretation of numeric literals is specified in 7.8.3 and 8.5 to conform to IEEE 754 "round to nearest" mode (with ties rounded to even significand), which for 2.005 results in the value
x = 4514858626438922 * 2^(-51) = 2.00499999999999989341858963598497211933135986328125
In section 15.7.4.5, which deals with toFixed, the relevant step 8.a is:
Let n be an integer for which the exact mathematical value of n ÷ 10^f − x is as close to zero as possible. If there are two such n, pick the larger n.
and 2.00 - x is closer to zero than 2.01 - x, so n must be 200 here. The conversion to a string proceeds then in the natural way.
Also, not that it means much but IE9 rounds it as I expected, implying that it either isn't part of ECMA-262 or they have a bug.
A bug. Maybe they tried to go the easy way and multiply by 10^digits and round. x*100 is exactly 200.5, so that would produce a string of "2.01".
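Both halves of that explanation are easy to check in a console; toPrecision exposes more digits of the stored value than the default formatting does:
// The literal 2.005 is stored as the nearest double, which is just below 2.005:
console.log((2.005).toPrecision(21)); // "2.00499999999999989342"
// So step 8.a of toFixed correctly picks n = 200:
console.log((2.005).toFixed(2)); // "2.00"
// But x * 100 rounds to exactly 200.5, so a naive multiply-and-round yields 2.01:
console.log(2.005 * 100); // 200.5
console.log(Math.round(2.005 * 100) / 100); // 2.01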

Related

Where is the specification for conversion in JavaScript's left shift (<<) operator

I would like to determine where my lack of knowledge is with respect to JavaScript's number handling. The gap shows up in the way that JS handles the shift left operator.
It is my understanding that in in JavaScript "all numbers are floating point". But that doesn't fit with bit shifting operators, and indeed, the demonstrable behavior also tells me there's more to this than I'm aware of.
I checked the specification here: bitshift operators and all it says (at this point) is
The Left Shift Operator ( << ) Performs a bitwise left shift operation on the left operand by the amount specified by the right operand.
But if all numbers are in fact represented in a floating point format, then this, taken literally, would mean we'd be shifting the bit representations of mantissa and exponent, and would get total nonsense values as the exponent bits moved into the mantissa bits. So, my next thought was that perhaps it just takes the mantissa part, shifts that, and puts it back. But that doesn't appear to be the case either, since 2.5 << 1 is 4, not 5.
I also found that any number with a large exponent value seems to result in zeroes.
So, my guess, from this brief examination, is that the system actually takes the number, performs a conversion that would be described as "chop out a 32 bit two's complement integer value" and then does a shift on that, before converting the resulting int back to a floating representation. (Unless I was lied to about the "all numbers are floats" in the first place!)
Can anyone tell me:
a) was I lied to about the "all numbers are floats" thing?
b) if I'm right in my guess about the "convert to int, shift, then convert back" behavior, where is that documented? I have to believe it's in the spec, but it's a big one, and while I've done some searching, I've not read it all (nor do I particularly want to if I can get a hint!)
It can be found in the abstract operation Number::leftShift:
Let lnum be ! ToInt32(x).
Let rnum be ! ToUint32(y).
Whereas ToInt32 truncates the number to an integer int and basically performs:
Let int32bit be int modulo 2³². If int32bit ≥ 2³¹, return int32bit − 2³²; otherwise return int32bit.
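As a rough sketch (not the engine's actual code), the hypothetical helper below mirrors those steps, using | 0 and >>> 0 as the usual idioms for ToInt32 and ToUint32:
// Sketch of Number::leftShift; `x | 0` truncates toward zero and wraps modulo 2^32.
function leftShift(x, y) {
  const lnum = x | 0;   // ToInt32(x)
  const rnum = y >>> 0; // ToUint32(y)
  return lnum << (rnum & 31); // only the low 5 bits of the shift count matter
}
console.log(leftShift(2.5, 1)); // 4, because ToInt32(2.5) is 2
console.log(leftShift(1e80, 1)); // 0: a huge double is a multiple of 2^32, so it wraps to 0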

(Novice Programmer) mod(3^146, 293) among others returning the same incorrect values in Matlab and JS

First note that mod(3^146,293)=292. For some reason, inputting mod(3^146,293) in Matlab returns 275. Inputting Math.pow(3,146) % 293 in JS returns 275. This same error occurs (as far as I can tell) every time. This leads me to believe I am missing something obvious but cannot seem to tell what.
Any help is much appreciated.
As discussed in the answers to this related question, MATLAB uses double-precision floating point numbers by default, which have limits on their resolution (i.e. the floating point relative accuracy, eps). For example:
>> a = 3^146
a =
4.567759074507741e+69
>> eps(a)
ans =
7.662477704329444e+53
In this case, 3^146 is on the order of 10^69 and the relative accuracy is on the order of 10^53. With only about 16 digits of precision, a double can't store the exact integer representation of an arbitrary 70-digit integer.
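JavaScript numbers are the same IEEE 754 doubles, so the identical limit applies there; a quick check:
const a = Math.pow(3, 146); // about 4.5678e+69
console.log(Number.isSafeInteger(a)); // false: far beyond 2^53, adjacent integers are no longer representable
console.log(a % 293); // 275, the same wrong value MATLAB returns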
An alternative in MATLAB is to use the Symbolic Toolbox to create symbolic numbers with a greater resolution. This gives you the answer you expect:
>> a = sym('3^146')
a =
4567759074507740406477787437675267212178680251724974985372646979033929
>> mod(a, 293)
ans =
292
Math.pow(3, 146) is larger than the constant Number.MAX_SAFE_INTEGER in JavaScript, which represents the upper limit of numbers that can be represented without losing any accuracy. Therefore JavaScript cannot accurately represent Math.pow(3, 146) within the 64-bit limit.
MATLAB also has limits on its integer size but can represent a large number with the Symbolic Math Toolbox.
There are also algorithms you can implement to accomplish this without overflowing, such as square-and-multiply modular exponentiation, which reduces modulo 293 at every step; see the sketch below.
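A minimal JavaScript sketch (powMod is a hypothetical helper name, not a built-in); every intermediate product stays below 293², comfortably inside the safe-integer range:
function powMod(base, exp, mod) {
  let result = 1;
  base %= mod;
  while (exp > 0) {
    if (exp % 2 === 1) result = (result * base) % mod; // multiply in the current bit
    base = (base * base) % mod; // square for the next bit
    exp = Math.floor(exp / 2);
  }
  return result;
}
console.log(powMod(3, 146, 293)); // 292
// In engines with BigInt support, the exact computation is a one-liner:
console.log((3n ** 146n) % 293n); // 292n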

Javascript toFixed() is not working as expected

I am using toFixed but the method does not operate as expected
parseFloat(19373.315).toFixed(2);
//19373.31 Chrome
Expected Output : 19373.32
parseFloat(9373.315).toFixed(2);
// 9373.32 Working fine
Why does the first example round down, whereas the second example round up?
The problem is that the binary floating-point representation of most decimal fractions is not exact. The internal representation of 19373.315 may actually be something like 19373.314999999..., so toFixed rounds down, while 9373.315 might be something like 9373.315000001..., which rounds up.
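You can confirm both representations directly with toPrecision:
console.log((19373.315).toPrecision(20)); // "19373.314999999998690" -- just below .315, so toFixed(2) rounds down
console.log((9373.315).toPrecision(20)); // prints a value just above 9373.315, so toFixed(2) rounds up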
Why does the first example round down, whereas the second example round up?
Look at the binary representation of the two values in memory.
// Inspect the raw 64 bits of each double (assuming a little-endian platform,
// so uarr[1] and uarr[3] hold the high words):
const farr = new Float64Array(2);
farr[0] = 19373.315;
farr[1] = 9373.315;
const uarr = new Uint32Array(farr.buffer);
console.log(farr[0], uarr[1].toString(2).padStart(32, '0') + uarr[0].toString(2).padStart(32, '0'));
console.log(farr[1], uarr[3].toString(2).padStart(32, '0') + uarr[2].toString(2).padStart(32, '0'));
Without diving into the details, we can see that the second value has an additional '1' at the end, which is lost in the first larger value when it is fit into 64 bits.
Other answers have explained why; I would suggest using a library like numeral.js, which will round things as you would expect.
Assuming toFixed casts to a 32-bit float:
Check with this utility...
19373.315 is stored as 19373.314453125 (an error of -0.000546875) in 32-bit floating point format.
This is despite (19373.315).toFixed(4) coming out as 19373.3150.
Even if this is "expected" or "intended", I'd still report it as a bug.
It should use a double during the rounding check, and thus get proper rounding during the conversion to a fixed string.
I think the spec even says so. :\
In the V8 javascript engine source, the Number.prototype.toFixed function invokes DoubleToFixedCString in this file ...
There's probably some inappropriate optimization in there... (Looking into it.)
I'd suggest submitting an additional test case for V8 with 19373.315 specifically.
(19373.3150).toFixed(39) yields 19373.314999999998690327629446983337402343750.
Rounding occurs once to bring it up to 19373.315 - which is correct - but not at the right digit when rounding to 2 digits.
I think this should have a second pass on rounding here to catch edge cases like this. I think it might have to round to n+1 digits, then again to n digits. Maybe there's some other clever way to fix it though.
function toFixedFixed(a, n) {
  return (a | 0) + parseFloat((a % 1).toFixed(n + 1)).toFixed(n).substr(1);
}
console.log(toFixedFixed(19373.315,2)); // "19373.32"
console.log(toFixedFixed(19373.315,3)); // "19373.315"
console.log(toFixedFixed(19373.315,4)); // "19373.3150"
console.log(toFixedFixed(19373.315,37)); // "19373.3149999999986903276294469833374023438"
console.log(toFixedFixed(19373.315,38)); // "19373.31499999999869032762944698333740234375"
console.log(toFixedFixed(19373.315,39)); // "19373.314999999998690327629446983337402343750"
(Adopted from my comments on Vahid Rahmani's answer, who is correct.)

Midpoint 'rounding' when dealing with large numbers?

So I was trying to understand JavaScript's behavior when dealing with large numbers. Consider the following (tested in Firefox and Chrome):
console.log(9007199254740993) // 9007199254740992
console.log(9007199254740994) // 9007199254740994
console.log(9007199254740995) // 9007199254740996
console.log(9007199254740996) // 9007199254740996
console.log(9007199254740997) // 9007199254740996
console.log(9007199254740998) // 9007199254740998
console.log(9007199254740999) // 9007199254741000
Now, I'm aware of why it's outputting the 'wrong' numbers—it's trying to convert them to floating point representations and it's rounding off to the nearest possible floating point value—but I'm not entirely sure about why it picks these particular numbers. My guess is that it's trying to round to the nearest 'even' number, and since 9007199254740996 is divisible by 4 while 9007199254740994 is not, it considers 9007199254740996 to be more 'even'.
What algorithm is it using to determine the internal representation? My guess is that it's an extension of regular midpoint rounding (round to even is the default rounding mode in IEEE 754 functions).
Is this behavior specified as part of the ECMAScript standard, or is it implementation dependent?
As pointed out by Mark Dickinson in a comment on the question, the ECMA-262 ECMAScript Language Specification requires the use of IEEE 754 64-bit binary floating point to represent the Number Type. The relevant rounding rules are "Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen...".
These rules are general, applying to rounding results of arithmetic as well as the values of literals.
The following are all the numbers in the relevant range for the question that are exactly representable in IEEE 754 64-bit binary floating point. Each is shown as its decimal value, and also as a hexadecimal representation of its bit pattern. A number with an even significand has an even rightmost hexadecimal digit in its bit pattern.
9007199254740992 bit pattern 0x4340000000000000
9007199254740994 bit pattern 0x4340000000000001
9007199254740996 bit pattern 0x4340000000000002
9007199254740998 bit pattern 0x4340000000000003
9007199254741000 bit pattern 0x4340000000000004
Each of the even inputs is one of these numbers, and rounds to that number. Each of the odd inputs is exactly half way between two of them, and rounds to the one with the even significand. This results in rounding the odd inputs to 9007199254740992, 9007199254740996, and 9007199254741000.
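The same behavior is easy to reproduce with arithmetic instead of literals; each odd offset below lands exactly halfway between two adjacent doubles, and the tie goes to the even significand:
console.log(2 ** 53); // 9007199254740992 (even significand)
console.log(2 ** 53 + 1); // 9007199254740992: tie, rounds to the even neighbor below
console.log(2 ** 53 + 3); // 9007199254740996: tie, rounds to the even neighbor above
console.log(2 ** 53 + 5); // 9007199254740996: tie, rounds to the even neighbor below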
Patricia Shanahan's answer helped a lot and explained my primary question. However, as to the second part of the question (whether or not this behavior is implementation-dependent), it turns out that it is, but in a slightly different way than I originally thought. Quoting from ECMA-262
5.1 § 7.8.3:
… the rounded value must be the Number value for the MV (as specified in 8.5), unless the literal is a DecimalLiteral and the literal has more than 20 significant digits, in which case the Number value may be either the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit or the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit and then incrementing the literal at the 20th significant digit position.
In other words, an implementation may choose to ignore everything after the 20th significant digit. Consider this:
console.log(9007199254740993.00001)
Both Chrome and Firefox will output 9007199254740994; Internet Explorer, however, will output 9007199254740992 because it chooses to ignore the digits after the 20th. Interestingly, this doesn't appear to be standards-compliant behavior (at least as I read the standard): it should interpret this the same as 9007199254740993.0001, but it does not.
JavaScript represents numbers as 64-bit floating point values. This is defined in the standard.
http://en.wikipedia.org/wiki/Double-precision_floating-point_format
So there's nothing related to midpoint rounding going on there.
As a hint, every 32-bit integer has an exact representation in double-precision floating point format.
Ok, since you're asking for the exact algorithm, I checked how Chrome's V8 engine does it.
V8 defines a StringToDouble function, which calls InternalStringToDouble in the following file:
https://github.com/v8/v8/blob/master/src/conversions-inl.h#L415
And this, in turn, calls the Strtod function defined there:
https://github.com/v8/v8/blob/master/src/strtod.cc

2.9999999999999999 >> .5?

I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found that the highest value which, when right-shifted by .5, still yields 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with 2^x. These are the highest values that still truncate properly:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
Actually, you're simply ending up doing a floor() on the first operand, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine is converting the two operands to integers first:
2.999999 >> 0.5
Becomes:
Math.floor(2.999999) >> Math.floor(0.5)
Which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer.
The SpiderMonkey source code has:
switch (op) {
  case JSOP_LSH:
  case JSOP_RSH:
    if (!js_DoubleToECMAInt32(cx, d, &i)) // Same as Math.floor()
      return JS_FALSE;
    if (!js_DoubleToECMAInt32(cx, d2, &j)) // Same as Math.floor()
      return JS_FALSE;
    j &= 31;
    d = (op == JSOP_LSH) ? i << j : i >> j;
    break;
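A couple of quick checks confirm the conversion; one caveat to the "Same as Math.floor()" comments above is that ToInt32 truncates toward zero, so it differs from Math.floor for negative values:
console.log(2.999999 >> 0.5); // 2: both operands become integers before the shift
console.log(2.5 >> 0.5); // 2
console.log(-2.5 >> 0.5); // -2, whereas Math.floor(-2.5) is -3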
The "rounding up" you're seeing with certain numbers is due to the fact that the JavaScript engine can't represent decimal digits beyond a certain precision, so your number ends up getting rounded up to the next integer. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.
This is possibly the single worst idea I have ever seen. Its only possible purpose for existing is for winning an obfuscated code contest. There's no significance to the long numbers you posted -- they're an artifact of the underlying floating-point implementation, filtered through god-knows how many intermediate layers. Bit-shifting by a fractional number of bytes is insane and I'm surprised it doesn't raise an exception -- but that's Javascript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple suspicions I had when reading the question:
Right-shifting any fractional number x by any fractional number y will simply truncate x, giving the same result as Math.floor() while thoroughly confusing the reader.
2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
If you wanna go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest a floating point value can get to representing the second number is greater than or equal to 3, whereas the closest a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.
I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x=2.9999999999999996 y=3
The shift right operator only operates on integers (both sides). So, shifting right by .5 bits should be exactly equivalent to shifting right by 0 bits. And, the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor().
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
Good guess, but no cigar.
As the double precision FP number has 53 bits, the last FP number before 3 is actually
(exact): 2.999999999999999555910790149937383830547332763671875
But why is the cutoff
2.9999999999999997779553950749686919152736663818359375
(and this is exact, not 49999...!), which is higher than the largest representable double below 3? Rounding. The conversion routine (string to number) is simply programmed correctly to round the input to the nearest floating point number.
2.999999999999999555910790149937383830547332763671875
.......(values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375
....... (values between, increasing) -> round up to 3
3
The conversion input must use full precision. If the number is exactly halfway between those two fp numbers (which is 2.9999999999999997779553950749686919152736663818359375), the rounding depends on the rounding mode in effect. The default is round to nearest, ties to even, meaning the number will be rounded to the neighbor with an even significand.
Now
3 = 11. (binary)
2.999... = 10.11111111111...... (binary)
All the significand bits of the lower neighbor are set, so it is odd. That means the exact halfway value is rounded up to 3, and you get the strange ...49999 period because the input must be smaller than the exact halfway point to be distinguishable from 3.
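Both claims are easy to verify in a console (in engines that round the full decimal input correctly, as the major ones do):
// The exact halfway string converts to 3, because the tie prefers the even significand:
console.log(Number('2.9999999999999997779553950749686919152736663818359375') === 3); // true
// Lowering the last decimal digit puts the value below the halfway point,
// so it snaps to the largest double below 3:
console.log(Number('2.9999999999999997779553950749686919152736663818359374') < 3); // true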
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
I don't know if JavaScript uses floating-point numbers or some kind of infinite-precision library, but either way, you're going to get rounding errors on an operation like this -- even if it's pretty well defined.
It should be noted that the number ".0000000000000007779553950749686919152736663818359374" is quite possibly the Epsilon, defined as "the smallest number E such that (1+E) > 1."
