Rounding Mode in Number() method in Javascript

I'm rewriting some JavaScript code in Java, and I came across this line in the JavaScript code:
Number("<amount-values>").toFixed(0)
As the value is related to money, I'm using BigDecimal in Java in order to make accurate calculations and not lose any precision.
Could you please help me with the correct rounding mode in BigDecimal that is equivalent to the rounding done by the Number(...).toFixed(0) call in JavaScript, so that both produce the same result? Right now I have:
BigDecimal.setScale(0)
Both produce different results when there are decimals. I created a very small comparison table, and for the values I tried the toFixed(0) result aligns with HALF_DOWN, HALF_EVEN and HALF_UP, but that is a very small sample and there could be many other cases.
Yes, I don't want anything after the decimal point in my output.

Just to be clear, Number.prototype.toFixed() is not equivalent to Math.round:
(5.5).toFixed(0) === "6"
(5.4).toFixed(0) === "5"
(-5.5).toFixed(0) === "-6"
Math.round(5.5) === 6
Math.round(5.4) === 5
Math.round(-5.5) === -5
You should know this and decide which behavior you want if any negative values are involved.
If all your values are positive, the logic is simple: if the fractional part is greater than or equal to .5, round up (Math.ceil); otherwise round down (Math.floor). Math.ceil and Math.floor are the method names in both JS and Java.
Since you have decimals in your example, you need to be clear whether you actually want the behavior shown by (-5.5).toFixed(0) or that of Math.round(-5.5).
Going by the examples above, toFixed rounds halfway cases away from zero (5.5 becomes 6, -5.5 becomes -6), whereas Math.round rounds halfway cases toward positive infinity (5.5 becomes 6, but -5.5 becomes -5, the less negative value).
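If the goal is to reproduce this with BigDecimal, the tie cases are what separate the candidate rounding modes. A small sketch of those ties in JavaScript follows; the Java comparison in the comments is my reading of the RoundingMode documentation (HALF_UP rounds ties away from zero), not something stated in this answer, and it ignores the separate problem, covered in the toFixed questions further down, that a literal such as 10.005 may already be stored as a slightly smaller double and round down regardless of mode.

// Tie cases written out as runnable checks (toFixed returns a string).
console.log((2.5).toFixed(0));   // "3"  -- matches HALF_UP; HALF_EVEN and HALF_DOWN would give 2
console.log((5.5).toFixed(0));   // "6"
console.log((-5.5).toFixed(0));  // "-6" -- away from zero, like HALF_UP; Math.round(-5.5) is -5
console.log((5.4).toFixed(0));   // "5"  -- non-ties simply round to the nearest integer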

Related

Modulus not working correctly with Math.log10(n)

Math.log10(8)/Math.log10(2) // this is equal to exactly 3
However, when using the modulo operator, something doesn't add up!
Math.log10(8) % Math.log10(2) // this is not equal to zero (0)
I was expecting the modulo operation to equal zero.
Please explain this phenomenon and a way to find remainders of logarithms. Thanks
You are encountering the rounding error that is inherent in floating point arithmetic.
Math.log10(8) in JavaScript evaluates to 0.9030899869919435.
On WolframAlpha log10(8) is evaluated to 0.9030899869919435856412166841734790803045696443863256239312823833.
These are different numbers. Neither is wrong, but one is more precise than the other.
When doing math with imprecise decimal approximations of numbers, as you are here, you will get imprecise results. This is especially pronounced when you try to do modulo operations on such imprecise approximations. For example, 8 % 2 and 8 % 2.000000001 give very different results: the former is 0 and the latter is just shy of 2.
You can use a library like math.js which can avoid floating point math in these sorts of calculations, and where math.mod(math.log(8, 10), math.log(2, 10)) does indeed evaluate to 0.
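A quick way to see the effect in the console (my own sketch; the exact nonzero remainder depends on the engine's log10 results, so it is not hardcoded here):

// Mathematically log10(8) = 3 * log10(2), so the remainder "should" be 0,
// but both operands are 53-bit binary approximations.
const a = Math.log10(8);      // approximately 0.9030899869919435
const b = Math.log10(2);
console.log(a / b);           // prints 3 here, per the question
console.log(a % b);           // a small nonzero value, not 0

// The same effect with plain numbers: a tiny error in the divisor
// swings the remainder from 0 to just shy of 2.
console.log(8 % 2);           // 0
console.log(8 % 2.000000001); // roughly 1.999999997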

Javascript toFixed() is not working as expected

I am using toFixed but the method does not operate as expected
parseFloat(19373.315).toFixed(2);
// 19373.31 in Chrome
Expected output: 19373.32
parseFloat(9373.315).toFixed(2);
// 9373.32, working fine
Why does the first example round down, whereas the second example round up?
The problem is that the binary floating point representation of most decimal fractions is not exact. The internal representation of 19373.315 may actually be something like 19373.314999999..., so toFixed rounds down, while 9373.315 might be stored as something like 9373.315000001..., which rounds up.
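One way to see the stored values for yourself (a small check of my own; it just asks toFixed for more digits than the values actually carry):

// Printing a long decimal expansion exposes the doubles that are really stored.
console.log((19373.315).toFixed(20)); // a ...4999... expansion, hence the round down at 2 digits
console.log((9373.315).toFixed(20));  // an expansion at or just above .315, hence the round up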
Why does the first example round down, whereas the second example round up?
Look at the binary representation of the two values in memory.
// Store both values in a Float64Array and view the same bytes as 32-bit
// unsigned integers to print the raw 64-bit patterns (high word first,
// assuming a little-endian machine).
const farr = new Float64Array(2);
farr[0] = 19373.315;
farr[1] = 9373.315;
const uarr = new Uint32Array(farr.buffer);
console.log(farr[0], uarr[1].toString(2).padStart(32, '0') + uarr[0].toString(2).padStart(32, '0'));
console.log(farr[1], uarr[3].toString(2).padStart(32, '0') + uarr[2].toString(2).padStart(32, '0'));
Without diving into the details, we can see that the second value has an additional '1' at the end, which is lost in the first, larger value when it is fitted into 64 bits.
Other answers have explained why; I would suggest using a library like numeral.js, which will round things as you would expect.
Assuming toFixed casts to a 32-bit float:
Check with this utility...
19373.315 is stored as 19373.314453125 (an error of -0.000546875) in 32-bit floating point format.
This is despite (19373.315).toFixed(4) coming out as 19373.3150.
Even if this is "expected" or "intended", I'd still report it as a bug.
It should use a double during the rounding check, and thus do proper rounding during conversion to a fixed string.
I think the spec even says so. :\
In the V8 javascript engine source, the Number.prototype.toFixed function invokes DoubleToFixedCString in this file ...
There's probably some inappropriate optimization in there... (Looking into it.)
I'd suggest submitting an additional test case for V8 with 19373.315 specifically.
(19373.3150).toFixed(39) yields 19373.314999999998690327629446983337402343750.
Rounding occurs once to bring it up to 19373.315 - which is correct - but not at the right digit when rounding to 2 digits.
I think this should have a second pass on rounding here to catch edge cases like this. I think it might have to round to n+1 digits, then again to n digits. Maybe there's some other clever way to fix it though.
function toFixedFixed(a, n) {
  // Round the fractional part to n+1 digits first, then to n digits
  // (a second rounding pass), and glue it back onto the integer part.
  // Note that the result is a string.
  return (a | 0) + parseFloat((a % 1).toFixed(n + 1)).toFixed(n).substr(1);
}
console.log(toFixedFixed(19373.315,2)); // "19373.32"
console.log(toFixedFixed(19373.315,3)); // "19373.315"
console.log(toFixedFixed(19373.315,4)); // "19373.3150"
console.log(toFixedFixed(19373.315,37)); // "19373.3149999999986903276294469833374023438"
console.log(toFixedFixed(19373.315,38)); // "19373.31499999999869032762944698333740234375"
console.log(toFixedFixed(19373.315,39)); // "19373.314999999998690327629446983337402343750"
(Adapted from my comments on Vahid Rahmani's answer, who is correct.)

JavaScript Math.floor: how guarantee number will round down?

I want to normalize an array so that each value is in [0, 1), i.e. "the max will never be 1 but the min can be 0."
This is not unlike the random function returning numbers in the same range.
While looking at this, I found that .99999999999999999 === 1 is true!
Ditto (1 - Number.MIN_VALUE) === 1. But Math.ceil(Number.MIN_VALUE) is 1, as it should be.
Some others: Math.floor(.999999999999) is 0
while Math.floor(.99999999999999999) is 1
OK so there are rounding problems in JS.
Is there any way I can normalize a set of numbers to lie in the range [0,1)?
It may help to examine the steps that JavaScript performs for each of your expressions.
In .99999999999999999===1:
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result. (The next closest Number is 0.99999999999999988897769753748434595763683319091796875, which is 1 - 2^-53.)
Then 1 is compared to 1. The result is true.
In (1-Number.MIN_VALUE) === 1:
Number.MIN_VALUE is 2^-1074, about 5e-324.
1 - 2^-1074 is extremely close to one. The exact value cannot be represented as a Number, so the nearest value is used. Again, the nearest value is 1.
Then 1 is compared to 1. The result is true.
In Math.ceil(Number.MIN_VALUE):
Number.MIN_VALUE is 2^-1074, about 5e-324.
The ceiling function of that value is 1.
In Math.floor(.999999999999):
The source text .999999999999 is converted to a Number. The closest Number is 0.99999999999900002212172012150404043495655059814453125, so that is the result.
The floor function of that value is 0.
In Math.floor(.99999999999999999):
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result.
The floor function of 1 is 1.
There are only two surprising things here, at most. One is that the numerals in the source text are converted to internal Number values. But this should not be surprising. Of course text has to be converted to internal representations of numbers, and the Number type cannot perfectly store all the infinitely many numbers. So it has to round. And of course numbers very near 1 round to 1.
The other possibly surprising thing is that 1-Number.MIN_VALUE is 1. But this is actually the same issue: The exact result is not representable, but it is very near 1, so 1 is used.
The Math.floor function works correctly. It never introduces any error, and you do not have to do anything to guarantee that it will round down. It always does.
However, since you want to normalize numbers, it seems likely you are going to divide numbers at some point. When you divide, there may be rounding problems, because many results of division are not exactly representable, so they must be rounded.
However, that is a separate problem, and you have not given enough information in this question to address the specific calculations you plan to do. You should open a separate question for it.
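For what it's worth, here is one minimal sketch of a normalization that stays inside [0, 1); the (x - min) / (max - min) scheme and the clamp value are my own assumptions, not something specified in the question:

// Scale the values to [0, 1], then clamp anything that rounded up to
// exactly 1 down to the largest double below 1.
const MAX_BELOW_ONE = 1 - Number.EPSILON / 2; // 0.9999999999999999, the largest double < 1

function normalize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  if (min === max) return values.map(() => 0); // avoid dividing by zero

  return values.map(x => {
    const v = (x - min) / (max - min); // in [0, 1]; the division may round
    return Math.min(v, MAX_BELOW_ONE); // keep the result strictly below 1
  });
}

console.log(normalize([2, 5, 11])); // [0, 0.3333333333333333, 0.9999999999999999]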
Javascript will treat any number between 0.999999999999999994 and 1 as 1, so just subtract .000000000000000006.
Of course that's not as easy as it sounds, since .000000000000000006 is evaluated as 0 in Javascript, so you could do something like:
function trueFloor(x)
{
    x = x * 100;
    if (x > .0000000000000006)
        x = x - .0000000000000006;
    x = Math.floor(x / 100);
    return x;
}
EDIT: Or at least you'd think you could. Apparently JS casts .99999999999999999 to 1 before passing it to a function, so you'd have to try something like:
trueFloor("0.99999999999999999")
function trueFloor(str)
{
    var x = str.substring(0, 9) + 0; // string concatenation: "0.9999999" + 0 -> "0.99999990"
    return Math.floor(x); // => 0
}
Not sure why you'd need that level of precision, but in theory, I guess it works. You can see a working fiddle here
As long as you cast your insanely precise float as a string, that's probably your best bet.
Please understand one thing: this...
.999999999999999999
... is just a Number literal. Just as
.999999999999999998
.999999999999999997
.999999999999999996
...
... you see the pattern.
How JavaScript treats these literals is completely another story. And yes, this treatment is limited by the number of bits that can be used to store a Number value.
The number of possible floating point literals is infinite by definition, no matter how small the range set for them. For example, take the ones shown above: how many numbers very close to 1 can you express? Right, it's infinite: just keep appending 9 to the line.
But the container for each Number value is quite finite: it has 64 bits. That means it can represent at most 2^64 different values (Infinity, -Infinity and NaN among them), and that's all.
You want to work with such literals anyway? Use Strings to store them, not Numbers - and some BigMath JS library (take your pick) to work with those values - as Strings, again.
But from your question it looks like you're not, as you talked about an array of Numbers (Number values, that is). And there is no way .999999999999999999 can be stored there, as there is no such Number value in JavaScript.

Javascript .toFixed() more precise rounding alternative?

For rounding decimals (prices), I've been using .toFixed(2) for quite some time now. But I just recently discovered that Javascript can't "precisely" round decimals. I was a bit shocked that even 10.005 couldn't be rounded correctly to 10.01. It just got rounded down to 10.00. And other times it did round correctly. I like to have control over my code, so this is a big no-no for me.
And since I'm calculating prices, I think I need something more (100%) accurate for rounding only 2- or 3-decimal numbers, maybe a 4-decimal one.
Is there no straightforward way of doing basic rounding in javascript, the correct way?
UPDATE: Felix Kling has suggested processing my prices as integer numbers of cents. Are there drawbacks to that approach (besides more code)?
The reason that a number like 10.005 can't be rounded correctly is that you don't really have the number 10.005; you only have the closest number that can be represented in a double precision floating point variable.
The actual number that you have might be something like 10.00499999999276253, and that would naturally round to 10.00 rather than 10.01.
To handle monetary values you should use a data type that can represent the value exactly. As numbers in Javascript are always floating point numbers, what you are left with is representing the numbers as text, and writing your own functions to do the math (or find someone who has done that already).
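A minimal sketch of the integer-cents idea from the update (the helper names and the tenths-of-a-cent parsing are my own, chosen for illustration; negative values and other currencies are not handled):

// Keep money as an integer number of cents, so arithmetic is exact;
// only convert to a decimal string for display.
function toCents(price) {
  // Parse the price from a string, digit by digit, rather than via floating point.
  const [whole, frac = ''] = price.split('.');
  const fracPadded = (frac + '000').slice(0, 3); // work in tenths of a cent
  return Number(whole) * 1000 + Number(fracPadded);
}

function roundToCents(tenthsOfCents) {
  // Half-up rounding on integers: no floating point involved.
  return Math.floor((tenthsOfCents + 5) / 10);
}

function formatCents(cents) {
  return (cents / 100).toFixed(2);
}

const cents = roundToCents(toCents('10.005'));
console.log(cents);              // 1001
console.log(formatCents(cents)); // "10.01"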
This should work for you; here's a fiddle to play around with: http://jsfiddle.net/5ffyC/1/
function moneyRound(flt) {
  var splitStr = flt.toString().split('.'),
      whole = (flt * 100) | 0; // truncate to whole cents

  if (splitStr.length > 1 && splitStr[1].length > 2) {
    // Use the third decimal digit of the printed value to decide
    // whether the cents should be rounded up.
    return splitStr[1][2] > 4 ? (whole + 1) / 100 : whole / 100;
  } else {
    return flt;
  }
}
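For example (my own quick check, using the value from the question):

console.log(moneyRound(10.005));    // 10.01
console.log(moneyRound(19373.315)); // 19373.32
console.log(moneyRound(10.01));     // 10.01 (already two decimals, returned as-is)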

2.9999999999999999 >> .5?

I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found out that the highest possible value of two which, when right-shifted by .5, would yield 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with 2^x. The highest possible values of the following numbers that still truncate properly:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
Actually, you're simply ending up truncating the first operand to an integer, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine converts the two operands to 32-bit integers first (for these positive values that is the same as Math.floor):
2.999999 >> 0.5
Becomes:
Math.floor(2.999999) >> Math.floor(0.5)
Which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer.
The SpiderMonkey source code has:
switch (op) {
  case JSOP_LSH:
  case JSOP_RSH:
    if (!js_DoubleToECMAInt32(cx, d, &i))  // Same as Math.floor()
        return JS_FALSE;
    if (!js_DoubleToECMAInt32(cx, d2, &j)) // Same as Math.floor()
        return JS_FALSE;
    j &= 31;
    d = (op == JSOP_LSH) ? i << j : i >> j;
    break;
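A couple of quick checks of this conversion (my own examples; note that the ToInt32 conversion truncates toward zero, which only matches Math.floor for non-negative values):

console.log(2.9999 >> 0.5);             // 2  -> both operands truncated: 2 >> 0
console.log(2.9999999999999999 >> 0.5); // 3  -> the literal is already the double 3
console.log(-2.5 >> 0.5);               // -2 -> ToInt32 truncates toward zero...
console.log(Math.floor(-2.5));          // -3 -> ...while Math.floor rounds toward -Infinity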
You're seeing a "rounding up" with certain numbers because the JavaScript engine can't handle decimal digits beyond a certain precision, and therefore your number ends up getting rounded up to the next integer. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.
This is possibly the single worst idea I have ever seen. Its only possible purpose for existing is for winning an obfuscated code contest. There's no significance to the long numbers you posted -- they're an artifact of the underlying floating-point implementation, filtered through god-knows how many intermediate layers. Bit-shifting by a fractional number of bits is insane and I'm surprised it doesn't raise an exception -- but that's Javascript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple suspicions I had when reading the question:
Right-shifting any fractional number x by any fractional number y will simply truncate x, giving the same result as Math.floor() for non-negative x, while thoroughly confusing the reader.
2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
If you wanna go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest a floating point value can get to representing the second number is greater than or equal to 3, whereas the closest a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.
I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x = 2.9999999999999996 y=3
The shift right operator only operates on integers (both sides). So, shifting right by .5 bits should be exactly equivalent to shifting right by 0 bits. And the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor() for positive values (the conversion actually truncates toward zero).
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
Good guess, but no cigar.
As a double precision FP number has 53 mantissa bits, the last FP number before 3 is exactly:
2.999999999999999555910790149937383830547332763671875
But why, then, is the threshold
2.9999999999999997779553950749686919152736663818359375
(and this is exact, not ...49999!), which is higher than that last representable value? Rounding. The conversion routine (string to number) is simply, and correctly, programmed to round the input to the nearest floating point number.
2.999999999999999555910790149937383830547332763671875
.......(values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375
....... (values between, increasing) -> round up to 3
3
The conversion input must use full precision. If the number is exactly halfway between those two fp numbers (which is 2.9999999999999997779553950749686919152736663818359375), the rounding depends on the rounding mode in effect. The default is round half to even, meaning that the number will be rounded to the neighbor whose last mantissa bit is even.
Now
3 = 11. (binary)
2.999... = 10.111111111111... (binary)
All mantissa bits of the lower neighbor are set, so it is the odd one. That means the exact halfway value will be rounded up to 3, and you are getting the strange ...49999 tail because the input must be smaller than the exact halfway point to be distinguishable from 3.
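Two quick checks of that boundary in the console (my own additions; the string literals are the exact values quoted above):

// The exact halfway value rounds up to 3 under round-half-to-even:
console.log(Number("2.9999999999999997779553950749686919152736663818359375") === 3); // true

// Anything strictly below the halfway value rounds down to the largest double below 3:
console.log(Number("2.9999999999999997779553950749686919152736663818359374999999") < 3); // true
console.log(Number("2.9999999999999997779553950749686919152736663818359374999999"));    // 2.9999999999999996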
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
I don't know if JavaScript uses floating-point numbers or some kind of infinite-precision library, but either way, you're going to get rounding errors on an operation like this -- even if it's pretty well defined.
It should be noted that the number ".0000000000000007779553950749686919152736663818359374" is quite possibly the Epsilon, defined as "the smallest number E such that (1+E) > 1."
