Why do Java and JavaScript round Math.round(-1.5) to -1? - javascript

Today, I saw this behaviour of Java and JavaScript's Math.round function.
It rounds 1.40 to 1, and -1.40 to -1.
It rounds 1.60 to 2, and -1.60 to -2.
It also rounds 1.5 to 2,
but it rounds -1.5 to -1.
I checked this behaviour in the round equivalents of PHP and MySQL as well.
Both gave the result I expected, i.e. round(-1.5) gives -2.
Even the definition of Math.round says it should round to the nearest integer.
I wanted to know why this is so.

The problem is that the distance between 1 and 1.5 is exactly the same as the distance between 1.5 and 2 (0.5). There are several different ways you could break such a tie:
always towards positive infinity
always towards negative infinity
always towards zero
always away from zero
towards the nearest odd or even number
... (see Wikipedia)
Both Java and JS opted for the first one (which is not uncommon), while PHP and MySQL round away from zero.
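For example, in a browser console or Node you can see both behaviours side by side (the PHP result is shown in a comment for comparison):

console.log(Math.round(1.5));   // 2  - tie rounded towards +Infinity
console.log(Math.round(-1.5));  // -1 - tie rounded towards +Infinity; PHP's round(-1.5) gives -2
console.log(Math.round(1.4));   // 1  - not a tie, nearest integer wins
console.log(Math.round(-1.6));  // -2 - not a tie, nearest integer wins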

Rounding mode to round towards negative infinity. If the result is positive, behave as for RoundingMode.DOWN; if negative, behave as for RoundingMode.UP. Note that this rounding mode never increases the calculated value.
It is just a matter of where the whole number sits on the number line. From the Javadoc:
public static int round(float a)
Returns the closest int to the argument, with ties rounding up.
Special cases:
If the argument is NaN, the result is 0.
If the argument is negative infinity or any value less than or equal to the value of Integer.MIN_VALUE, the result is equal to the value of Integer.MIN_VALUE.
If the argument is positive infinity or any value greater than or equal to the value of Integer.MAX_VALUE, the result is equal to the value of Integer.MAX_VALUE.
Parameters:
a - a floating-point value to be rounded to an integer.
Returns:
the value of the argument rounded to the nearest int value.
Review this link too

From the ECMAScript documentation:
Returns the Number value that is closest to x and is equal to a mathematical integer. If two integer Number values are equally close to x, then the result is the Number value that is closer to +∞. If x is already an integer, the result is x.
where x is the number passed to Math.round().
So Math.round(1.5) returns 2, because 2 is closer to +∞ than 1 is. Similarly, Math.round(-1.5) returns -1, because -1 is closer to +∞ than -2 is.

Related

Why is the rounded value different?

I know that if I round -1.5, it should be -2.
So I tried it in C#, and it returns -2 correctly:
Console.WriteLine(Math.Round(-1.5, 0));
I also tried it in Excel, and it also returns -2:
=Round(-1.5,0)
But when I do it in JavaScript, it returns -1:
Math.round(-1.5)
Why are these values different?
And how can I get -2 instead of -1 when I do this in JavaScript?
Math.round(Math.abs(-1.5));
Your value is negative, which is why you get -1. Just take the absolute value, round it, and then multiply the result by -1 to get -2.
Yes, round in JavaScript works as you said. One solution is to convert your negative number to a positive one, use Math.round on it, and finally convert the result back to a negative number:
function myFunction() {
  var num = -1.5;
  document.getElementById("demo").innerHTML = Math.round(num);
  // For negative input, round the absolute value and flip the sign back
  if (num < 0)
    document.getElementById("demo").innerHTML = -1 * Math.round(Math.abs(num));
}
That's just how they made it. It is acknowledged that it is different from most languages.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/round
If the fractional portion of the argument is greater than 0.5, the argument is rounded to the integer with the next higher absolute value. If it is less than 0.5, the argument is rounded to the integer with the lower absolute value. If the fractional portion is exactly 0.5, the argument is rounded to the next integer in the direction of +∞. Note that this differs from many languages' round() functions, which often round this case to the next integer away from zero, instead giving a different result in the case of negative numbers with a fractional part of exactly 0.5.
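If you want the away-from-zero result that C#, Excel, PHP and MySQL give, one option is to wrap the absolute-value trick from the answers above in a small helper (a minimal sketch; roundHalfAwayFromZero is just an illustrative name, not a built-in):

function roundHalfAwayFromZero(num) {
  // Round the magnitude, then restore the sign, so -1.5 becomes -2
  return Math.sign(num) * Math.round(Math.abs(num));
}

console.log(roundHalfAwayFromZero(-1.5)); // -2
console.log(roundHalfAwayFromZero(1.5));  // 2
console.log(roundHalfAwayFromZero(-1.4)); // -1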

Finding JS max integer value in a funny way fails

Today I tried to find a funny and mysterious way to determine JavaScript's maximal integer value. One of the approaches was the following:
~(+!!![]) >>> (+!![]);
which actually evaluates to
~0 >>> 1
but it returns 2147483647 and not 4294967295 as it should. Why? Of course, the latter would be the result of this operation for an unsigned integer, while my result is correct for a signed one. But how do I force that?
You're finding the maximum integer, and then shifting it to the right 1 bit, which divides it by 2. Use:
~0 >>> 0
to get the maximum integer.
Converting that to the "funny" way I'll leave as an exercise for the reader.
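For reference, the variants side by side (>>> 0 is a zero-bit unsigned shift, which coerces the value to an unsigned 32-bit integer):

console.log(~0);        // -1, all 32 bits set, read as a signed integer
console.log(~0 >>> 0);  // 4294967295, the same bits read as unsigned
console.log(~0 >>> 1);  // 2147483647, shifted right by one, i.e. halved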

In JavaScript, why does zero divided by zero return NaN, but any other divided by zero return Infinity?

It seems to me that the code
console.log(1 / 0)
should return NaN, but instead it returns Infinity. However this code:
console.log(0 / 0)
does return NaN. Can someone help me to understand the reasoning for this functionality? Not only does it seem to be inconsistent, it also seems to be wrong, in the case of x / 0 where x !== 0
Because that's how floating-point is defined (more generally than just Javascript). See for example:
http://en.wikipedia.org/wiki/Floating-point#Infinities
http://en.wikipedia.org/wiki/NaN#Creation
Crudely speaking, you could think of 1/0 as the limit of 1/x as x tends to zero (from the right). And 0/0 has no reasonable interpretation at all, hence NaN.
In addition to answers based on the mathematical concept of zero, there is a special consideration for floating point numbers. Every underflow result, every non-zero number whose absolute magnitude is too small to represent as a non-zero number, is represented as zero.
0/0 may really be 1e-500/1e-600, or 1e-600/1e-500, or many other ratios of very small values.
The actual ratio could be anything, so there is no meaningful numerical answer, and the result should be a NaN.
Now consider 1/0. It does not matter whether the 0 represents 1e-500 or 1e-600. Regardless, the division would overflow and the correct result is the value used to represent overflows, Infinity.
I realize this is old, but I think it's important to note that in JS there is also a -0, which is different from 0 or +0, and that makes this feature of JS much more logical than it seems at first glance.
1 / 0 -> Infinity
1 / -0 -> -Infinity
which logically makes sense since in calculus, the reason dividing by 0 is undefined is solely because the left limit goes to negative infinity and the right limit to positive infinity. Since the -0 and 0 are different objects in JS, it makes sense to apply the positive 0 to evaluate to positive Infinity and the negative 0 to evaluate to negative Infinity
This logic does not apply to 0/0, which is indeterminate. Unlike with 1/0, we can get two results taking limits by this method with 0/0
lim h->0(0/h) = 0
lim h->0(h/0) = Infinity
which of course is inconsistent, so it results in NaN
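All of the cases discussed above are easy to reproduce in a console:

console.log(1 / 0);            // Infinity
console.log(1 / -0);           // -Infinity
console.log(0 / 0);            // NaN
console.log(0 / 0 === 0 / 0);  // false - NaN is not even equal to itself
console.log(Object.is(0, -0)); // false - 0 and -0 are distinct values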

JavaScript Math.floor: how to guarantee a number will round down?

I want to normalize an array so that each value is
in [0, 1), i.e. "the max will never be 1 but the min can be 0."
This is not unlike the random function returning numbers in the same range.
While looking at this, I found that .99999999999999999 === 1 is true!
Ditto (1 - Number.MIN_VALUE) === 1. But Math.ceil(Number.MIN_VALUE) is 1, as it should be.
Some others: Math.floor(.999999999999) is 0
while Math.floor(.99999999999999999) is 1
OK so there are rounding problems in JS.
Is there any way I can normalize a set of numbers to lie in the range [0,1)?
It may help to examine the steps that JavaScript performs for each of your expressions.
In .99999999999999999===1:
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result. (The next closest Number is 0.99999999999999988897769753748434595763683319091796875, which is 1 - 2^-53.)
Then 1 is compared to 1. The result is true.
In (1-Number.MIN_VALUE) === 1:
Number.MIN_VALUE is 2^-1074, about 5e-324.
1 - 2^-1074 is extremely close to one. The exact value cannot be represented as a Number, so the nearest value is used. Again, the nearest value is 1.
Then 1 is compared to 1. The result is true.
In Math.ceil(Number.MIN_VALUE):
Number.MIN_VALUE is 2^-1074, about 5e-324.
The ceiling function of that value is 1.
In Math.floor(.999999999999):
The source text .999999999999 is converted to a Number. The closest Number is 0.99999999999900002212172012150404043495655059814453125, so that is the result.
The floor function of that value is 0.
In Math.floor(.99999999999999999):
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result.
The floor function of 1 is 1.
There are only two surprising things here, at most. One is that the numerals in the source text are converted to internal Number values. But this should not be surprising. Of course text has to be converted to internal representations of numbers, and the Number type cannot perfectly store all the infinitely many numbers. So it has to round. And of course numbers very near 1 round to 1.
The other possibly surprising thing is that 1-Number.MIN_VALUE is 1. But this is actually the same issue: The exact result is not representable, but it is very near 1, so 1 is used.
The Math.floor function works correctly. It never introduces any error, and you do not have to do anything to guarantee that it will round down. It always does.
However, since you want to normalize numbers, it seems likely you are going to divide numbers at some point. When you divide, there may be rounding problems, because many results of division are not exactly representable, so they must be rounded.
However, that is a separate problem, and you have not given enough information in this question to address the specific calculations you plan to do. You should open a separate question for it.
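That said, as a rough illustration of one way to keep the result in [0, 1) (a sketch only: it assumes the usual (v - min) / (max - min) mapping, and clamping to the largest double below 1 is just one possible choice; normalize is an illustrative name):

function normalize(values) {
  var min = Math.min.apply(null, values);
  var max = Math.max.apply(null, values);
  var range = max - min;
  return values.map(function (v) {
    if (range === 0) return 0;               // all values equal
    var t = (v - min) / range;               // nominally in [0, 1]
    return Math.min(t, 1 - Number.EPSILON);  // force the maximum below 1
  });
}

console.log(normalize([2, 4, 6])); // [0, 0.5, 0.9999999999999998]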
Javascript will treat any number between 0.999999999999999994 and 1 as 1, so just subtract .000000000000000006.
Of course that's not as easy as it sounds, since .000000000000000006 is evaluated as 0 in Javascript, so you could do something like:
function trueFloor(x)
{
  x = x * 100;
  if (x > .0000000000000006)
    x = x - .0000000000000006;
  x = Math.floor(x / 100);
  return x;
}
EDIT: Or at least you'd think you could. Apparently JS casts .99999999999999999 to 1 before passing it to a function, so you'd have to try something like:
trueFloor("0.99999999999999999")
function trueFloor(str)
{
  var x = str.substring(0, 9) + 0;
  return Math.floor(x); //=> 0
}
Not sure why you'd need that level of precision, but in theory, I guess it works. You can see a working fiddle here
As long as you cast your insanely precise float as a string, that's probably your best bet.
Please understand one thing: this...
.999999999999999999
... is just a Number literal. Just as
.999999999999999998
.999999999999999997
.999999999999999996
...
... you see the pattern.
How JavaScript treats these literals is a completely different story. And yes, this treatment is limited by the number of bits that can be used to store a Number value.
The number of possible floating-point literals is infinite by definition, no matter how small a range you restrict them to. For example, take the ones shown above: how many numbers very close to 1 can you express? Right, it's infinite: just keep appending 9s to the line.
But the container for each Number value is quite finite: it has 64 bits. That means it can store 2^64 different values (Infinity, -Infinity and NaN among them) - and that's all.
You want to work with such literals anyway? Use Strings to store them, not Numbers - and some BigMath JS library (take your pick) to work with those values - as Strings, again.
But from your question it looks like you're not, as you talked about an array of Numbers - Number values, that is. And there is no way .999999999999999999 can be stored there, as there is no such Number value in JavaScript.
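You can see the literal-to-value conversion happen before any of your own code runs:

console.log(0.99999999999999999 === 1);    // true - the literal already parses to 1
console.log((1 - Number.MIN_VALUE) === 1); // true - the exact result is not representable
console.log(Number.MIN_VALUE);             // 5e-324, the smallest positive Number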

How Close is the Javascript Math.Round to the C# Math.Round?

I know from reading this Stackoverflow question that the compiler will look at your number, decide if the midpoint is an even or odd number and then return the even number. The example number was 2.5, which rounded to 3. I've tried my own little experiments to see what happens, but I have yet to find any specification about this, or even whether it is consistent between browsers.
Here's an example JavaScript snippet using jQuery for the display:
$(document).ready(function() {
  $("#answer").html(showRounding());
});

function showRounding() {
  var answer = Math.round(2.5);
  return answer;
}
This returns a '3'.
What I would like to know is this: How close is the rounding in JavaScript to the C# equivalent? The reason I'm asking is that I would like to take a JavaScript method that uses Math.round and rewrite the same method in C#, and I would like to know that I would get the same results after rounding a number.
Here's the complete javascript specification for Math.round(x):
15.8.2.15 round (x)
Returns the Number value that is closest to x and is equal to a mathematical integer. If two integer Number values are equally close to x, then the result is the Number value that is closer to +∞. If x is already an integer, the result is x.
If x is NaN, the result is NaN.
If x is +0, the result is +0.
If x is −0, the result is −0.
If x is +∞, the result is +∞.
If x is −∞, the result is −∞.
If x is greater than 0 but less than 0.5, the result is +0.
If x is less than 0 but greater than or equal to -0.5, the result is −0.
NOTE 1 Math.round(3.5) returns 4, but Math.round(−3.5) returns −3.
NOTE 2 The value of Math.round(x) is the same as the value of Math.floor(x+0.5), except when x is −0 or is less than 0 but greater than or equal to -0.5; for these cases Math.round(x) returns −0, but Math.floor(x+0.5) returns +0.
The C# Language Specification does not stipulate any particular rounding algorithm. The closest thing we have is the documentation for .NET's Math.Round. From that, you can see that some of the javascript cases don't apply (Math.Round only handles decimals and doubles, not infinity), and the method's overloads give you a lot more control over the result - you can specify the number of fractional digits in the result and the midpoint rounding method. By default, Math.Round uses 'banker's rounding' (to even).
ECMAScript's rounding is basically naive asymmetric rounding (with added checks for +/-Infinity). With regard to porting your JavaScript code to C#, you're probably best off avoiding .NET's Math.Round (as it is always symmetric) and using Math.Floor instead:
double d = -3.5d;
double rounded = Math.Floor(d + 0.5); // -3 matches JavaScript's Math.round
That is, if you want strict emulation of ECMAScript's Math.round.
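On the JavaScript side you can sanity-check that equivalence (leaving aside the -0 corner case from NOTE 2):

console.log(Math.round(-3.5));        // -3
console.log(Math.floor(-3.5 + 0.5));  // -3, same result
console.log(Math.round(2.5));         // 3
console.log(Math.floor(2.5 + 0.5));   // 3, same result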
