I know from reading this Stack Overflow question that the compiler will look at your number, decide if the midpoint is an even or odd number, and then return the even number. The example number was 2.5, which rounded to 3. I've tried my own little experiments to see what happens, but I have yet to find a specification for this, or even whether it is consistent between browsers.
Here's an example JavaScript snippet using jQuery for the display:
$(document).ready(function() {
    $("#answer").html(showRounding());
});

function showRounding() {
    var answer = Math.round(2.5);
    return answer;
}
This returns a '3'.
What I would like to know is this: how close is the rounding in JavaScript to the C# equivalent? The reason I'm asking is that I would like to take a JavaScript method that uses Math.round, rewrite the same method in C#, and know that I will get the same results after rounding a number.
Here's the complete JavaScript specification for Math.round(x):
15.8.2.15 round (x)
Returns the Number value that is closest to x and is equal to a mathematical integer. If two integer Number values are equally close to x, then the result is the Number value that is closer to +∞. If x is already an integer, the result is x.
If x is NaN, the result is NaN.
If x is +0, the result is +0.
If x is −0, the result is −0.
If x is +∞, the result is +∞.
If x is −∞, the result is −∞.
If x is greater than 0 but less than 0.5, the result is +0.
If x is less than 0 but greater than or equal to −0.5, the result is −0.
NOTE 1 Math.round(3.5) returns 4, but Math.round(−3.5) returns −3.
NOTE 2 The value of Math.round(x) is the same as the value of Math.floor(x+0.5), except when x is −0 or is less than 0 but greater than or equal to −0.5; for these cases Math.round(x) returns −0, but Math.floor(x+0.5) returns +0.
The C# Language Specification does not stipulate any particular rounding algorithm. The closest thing we have is the documentation for .NET's Math.Round. From that, you can see that some of the JavaScript cases don't apply (Math.Round only handles decimals and doubles, not infinity), and the method's overloads give you a lot more control over the result: you can specify the number of fractional digits in the result and the midpoint rounding mode. By default, Math.Round uses banker's rounding (round half to even).
ECMAScript's rounding is basically naive asymmetric rounding (with added checks for ±Infinity). With respect to porting your JavaScript code to C#, you're probably best off avoiding .NET's Math.Round (all of its midpoint modes are symmetric) and using Math.Floor instead:
double d = -3.5d;
double rounded = Math.Floor(d + 0.5); // -3 matches JavaScript's Math.round
That is, if you want strict emulation of ECMAScript's Math.round.
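As a quick sanity check of that equivalence (the sample values below are my own), you can compare both expressions in JavaScript directly:

```javascript
// Compare Math.round(x) with Math.floor(x + 0.5) on a few midpoints.
// Per the spec's NOTE 2 they agree everywhere except the sign of zero
// for inputs in [-0.5, 0).
const samples = [2.5, 3.5, -3.5, -2.5, -1.5, 1.4, -1.4, 0.5];
for (const x of samples) {
  console.log(x, Math.round(x), Math.floor(x + 0.5)); // the two results match
}
```

The one divergence: Math.round(-0.5) returns −0 while Math.floor(-0.5 + 0.5) returns +0; the two still compare equal with ===, so this only matters if you inspect the sign of the zero.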
Related
I know that if I round -1.5 away from zero, I get -2,
so I tried it in C#, and it returns -2 correctly:
Console.WriteLine(Math.Round(-1.5, 0));
I also tried it in Excel, and it also returns -2:
=ROUND(-1.5,0)
But when I do it in JavaScript, it returns -1:
Math.round(-1.5)
Why are these values different? And how can I get -2 instead of -1 when I do this in JavaScript?
Math.round(Math.abs(-1.5));
Your value is negative; that's why you get -1. Just take the absolute value, round it, and then multiply by -1 to get -2.
Yes, Math.round in JavaScript works as you said. One solution is to convert your negative number to positive, use Math.round on that, and then convert the result back to a negative number:
function myFunction() {
    var num = -1.5;
    document.getElementById("demo").innerHTML = Math.round(num);
    if (num < 0) {
        document.getElementById("demo").innerHTML = -1 * Math.round(Math.abs(num));
    }
}
That's just how they made it. It is acknowledged that it is different from most languages.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/round
If the fractional portion of the argument is greater than 0.5, the argument is rounded to the integer with the next higher absolute value. If it is less than 0.5, the argument is rounded to the integer with the lower absolute value. If the fractional portion is exactly 0.5, the argument is rounded to the next integer in the direction of +∞. Note that this differs from many languages' round() functions, which often round this case to the next integer away from zero, instead giving a different result in the case of negative numbers with a fractional part of exactly 0.5.
Today, I saw this behaviour of the Math.round function in both Java and JavaScript.
It rounds 1.40 to 1, and -1.40 to -1.
It rounds 1.60 to 2, and -1.60 to -2.
Now, it rounds 1.5 to 2.
But it rounds -1.5 to -1.
I checked this behaviour in the round equivalents of PHP and MySQL as well.
Both gave the result I expected, i.e. round(-1.5) gives -2.
Even the definition of Math.round says it should round to the nearest integer.
I wanted to know: why is it so?
The problem is that the distance from 1 to 1.5, and from 1.5 to 2, is exactly the same (0.5). There are several different ways you could break such a tie:
always towards positive infinity
always towards negative infinity
always towards zero
always away from zero
towards nearest odd or even number
... (see Wikipedia)
Obviously, both Java and JS opted for the first one (which is not uncommon), while PHP and MySQL round away from zero.
Rounding mode to round towards negative infinity. If the result is positive, behave as for RoundingMode.DOWN; if negative, behave as for RoundingMode.UP. Note that this rounding mode never increases the calculated value.
It is just a matter of where the number sits on the number line. The Javadoc says:
public static int round(float a)
Returns the closest int to the argument, with ties rounding up.
Special cases:
If the argument is NaN, the result is 0.
If the argument is negative infinity or any value less than or equal to the value of Integer.MIN_VALUE, the result is equal to the value of Integer.MIN_VALUE.
If the argument is positive infinity or any value greater than or equal to the value of Integer.MAX_VALUE, the result is equal to the value of Integer.MAX_VALUE.
Parameters:
a - a floating-point value to be rounded to an integer.
Returns:
the value of the argument rounded to the nearest int value.
From the ECMAScript documentation:
Returns the Number value that is closest to x and is equal to a
mathematical integer. If two integer Number values are equally close
to x, then the result is the Number value that is closer to +∞. If x
is already an integer, the result is x.
where x is the number passed to Math.round().
So Math.round(1.5) will return 2, since 2 is closer to +∞ than 1 is. Similarly, Math.round(-1.5) will return -1, since -1 is closer to +∞ than -2 is.
In JavaScript, if you divide by 0 you get Infinity
typeof Infinity; //number
isNaN(Infinity); //false
This implies that Infinity is a number (of course, no argument there).
What I learned is that anything divided by zero is an indeterminate form: it has no value and is not a number.
That definition, however, is from mathematics; I know that in programming, division by zero can yield Infinity, NaN, or throw an exception.
So why return Infinity? Does anybody have an explanation for that?
First off, resulting in Infinity is not due to some crazy math behind the scenes. The spec states that:
Division of a non-zero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
The logic of the spec authors goes along these lines:
2/1 = 2. Simple enough.
2/0.5 = 4. Halving the denominator doubles the result.
...and so on:
2/0.0000000000000005 = 4e+15. As the denominator tends toward zero, the result grows. Thus, the spec authors decided that division by zero should default to Infinity, as should any other operation that produces a number too big for JavaScript to represent [0] (instead of some quasi-numeric state or a divide-by-zero exception).
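That progression is easy to see directly (a small demonstration of my own):

```javascript
// As the denominator shrinks toward zero, the quotient grows without bound;
// at zero itself JavaScript yields Infinity rather than throwing.
for (const d of [1, 0.5, 0.25, 1e-8, 0]) {
  console.log(2 / d); // 2, 4, 8, 200000000, Infinity
}
```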
You can see this in action in the code of Google's V8 engine: https://github.com/v8/v8/blob/bd8c70f5fc9c57eeee478ed36f933d3139ee221a/src/hydrogen-instructions.cc#L4063
[0] "If the magnitude is too large to represent, the operation overflows; the result is then an infinity of appropriate sign."
JavaScript is a loosely typed language, which means that a function doesn't have to return the type you were expecting.
Infinity isn't actually an integer.
In a strongly typed language, if your function is supposed to return an int, then the only thing you can do when you get a value that isn't an int is throw an exception.
In a loosely typed language you have another option: return a new type that represents the result better (in this case, Infinity).
Infinity is very different from an indeterminate form.
If you compute x/0+ you get +infinity, and for x/0- you get -infinity (for x > 0 in this example).
JavaScript uses it to note that you have exceeded the capacity of the underlying floating-point storage.
You can then handle it to direct your software towards either exceptional cases or a big-number version of your computation.
Infinity is actually consistent in formulae. Without it, you have to break formulae into small pieces, and you end up with more complicated code.
Try this, and you get j as Infinity:
var i = Infinity;
var j = 2*i/5;
console.log("result = "+j);
This is because JavaScript uses floating-point arithmetic, and this is its convention for handling division by zero:
Division by zero (an operation on finite operands gives an exact infinite result, e.g., 1/0 or log(0)) (returns ±infinity by default).
(source: Wikipedia)
When x tends towards 0 in the formula y = 1/x, y tends towards infinity. So it makes sense that something which would end up as a really large number (following that logic) is represented by Infinity. Above roughly 1.8 × 10^308 (Number.MAX_VALUE), JavaScript returns Infinity instead of the actual result, so if a calculation would otherwise end up above that threshold, it just returns Infinity instead.
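The threshold is Number.MAX_VALUE, the largest finite double; anything beyond it overflows to Infinity:

```javascript
// Doubles top out at Number.MAX_VALUE; larger results overflow to Infinity.
console.log(Number.MAX_VALUE);     // 1.7976931348623157e+308
console.log(Number.MAX_VALUE * 2); // Infinity (overflow)
console.log(1e308 * 10);           // Infinity
```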
As determined by the ECMAScript language specification:
The sign of the result is positive if both operands have the same
sign, negative if the operands have different signs.
Division of an infinity by a zero results in an infinity. The sign is
determined by the rule already stated above.
Division of a nonzero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
As the denominator of an arithmetic fraction tends towards 0 (for a finite non-zero numerator) the result tends towards +Infinity or -Infinity depending on the signs of the operands. This can be seen by:
1/0.1 = 10
1/0.01 = 100
1/0.001 = 1000
1/0.0000000001 = 10000000000
1/1e-308 = 1e308
Taking this further then when you perform a division by zero then the JavaScript engine gives the result (as determined by the spec quoted above):
1/0 = Number.POSITIVE_INFINITY
-1/0 = Number.NEGATIVE_INFINITY
-1/-0 = Number.POSITIVE_INFINITY
1/-0 = Number.NEGATIVE_INFINITY
You get the same overflow result if you divide by a small enough (denormal) value:
1/1e-323 = Number.POSITIVE_INFINITY
We all know that 0^0 is indeterminate.
But, javascript says that:
Math.pow(0, 0) === 1 // true
and C++ says the same thing:
pow(0, 0) == 1 // true
WHY?
I know that:
>Math.pow(0.001, 0.001)
0.9931160484209338
But why does Math.pow(0, 0) throw no errors? Wouldn't NaN be a better result than 1?
In C++, the result of pow(0, 0) is basically implementation-defined behavior, since mathematically we have a contradictory situation: N^0 should always be 1, but 0^N should always be 0 for N > 0, so mathematically you should have no expectations as to the result. This Wolfram Alpha forum post goes into a bit more detail.
That said, having pow(0,0) result in 1 is useful for many applications, as the Rationale for International Standard—Programming Languages—C states in the section covering IEC 60559 floating-point arithmetic support:
Generally, C99 eschews a NaN result where a numerical value is useful. [...] The results of pow(∞,0) and pow(0,0) are both 1, because there are applications that can exploit this definition. For example, if x(p) and y(p) are any analytic functions that become zero at p = a, then pow(x,y), which equals exp(y*log(x)), approaches 1 as p approaches a.
Update C++
As leemes correctly pointed out, I originally linked to the reference for the complex version of pow, while the non-complex version claims it is a domain error. The draft C++ standard falls back to the draft C standard, and both C99 and C11, in section 7.12.7.4 The pow functions paragraph 2, say (emphasis mine):
[...]A domain error may occur if x is zero and y is zero.[...]
which, as far as I can tell, means this behavior is unspecified. Winding back a bit, section 7.12.1 Treatment of error conditions says:
[...]a domain error occurs if an input argument is outside the domain over
which the mathematical function is defined.[...] On a domain error, the function returns an implementation-defined value; if the integer expression math_errhandling & MATH_ERRNO is nonzero, the integer expression errno acquires the value EDOM; [...]
So if there were a domain error, then this would be implementation-defined behavior; but in the latest versions of both gcc and clang the value of errno is 0, so it is not a domain error for those compilers.
Update JavaScript
For JavaScript, the ECMAScript® Language Specification, in section 15.8 The Math Object under 15.8.2.13 pow (x, y), says (amongst other conditions) that:
If y is +0, the result is 1, even if x is NaN.
In JavaScript Math.pow is defined as follows:
If y is NaN, the result is NaN.
If y is +0, the result is 1, even if x is NaN.
If y is −0, the result is 1, even if x is NaN.
If x is NaN and y is nonzero, the result is NaN.
If abs(x)>1 and y is +∞, the result is +∞.
If abs(x)>1 and y is −∞, the result is +0.
If abs(x)==1 and y is +∞, the result is NaN.
If abs(x)==1 and y is −∞, the result is NaN.
If abs(x)<1 and y is +∞, the result is +0.
If abs(x)<1 and y is −∞, the result is +∞.
If x is +∞ and y>0, the result is +∞.
If x is +∞ and y<0, the result is +0.
If x is −∞ and y>0 and y is an odd integer, the result is −∞.
If x is −∞ and y>0 and y is not an odd integer, the result is +∞.
If x is −∞ and y<0 and y is an odd integer, the result is −0.
If x is −∞ and y<0 and y is not an odd integer, the result is +0.
If x is +0 and y>0, the result is +0.
If x is +0 and y<0, the result is +∞.
If x is −0 and y>0 and y is an odd integer, the result is −0.
If x is −0 and y>0 and y is not an odd integer, the result is +0.
If x is −0 and y<0 and y is an odd integer, the result is −∞.
If x is −0 and y<0 and y is not an odd integer, the result is +∞.
If x<0 and x is finite and y is finite and y is not an integer, the result is NaN.
emphasis mine
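A few of those cases, spot-checked directly in a console:

```javascript
console.log(Math.pow(NaN, 0));               // 1: y is +0, even though x is NaN
console.log(Math.pow(0, -1));                // Infinity: x is +0 and y < 0
console.log(Object.is(Math.pow(-0, 3), -0)); // true: x is -0, y an odd integer > 0
console.log(Math.pow(-2, 0.5));              // NaN: negative finite x, non-integer y
```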
As a general rule, native functions in any language should work as described in the language specification. Sometimes this includes explicitly "undefined behavior", where it's up to the implementer to determine what the result should be; however, this is not a case of undefined behavior.
It is just a convention to define it as 1, as 0, or to leave it undefined. The 0^0 = 1 definition is widespread because of the following specification:
ECMA-Script documentation says the following about pow(x,y):
If y is +0, the result is 1, even if x is NaN.
If y is −0, the result is 1, even if x is NaN.
[ http://www.ecma-international.org/ecma-262/5.1/#sec-15.8.2.13 ]
According to Wikipedia:
In most settings not involving continuity in the exponent, interpreting 0^0 as 1 simplifies formulas and eliminates the need for special cases in theorems.
There are several possible ways to treat 0**0 with pros and cons to each (see Wikipedia for an extended discussion).
The IEEE 754-2008 floating point standard recommends three different functions:
pow treats 0**0 as 1. This is the oldest defined version. If the power is an exact integer the result is the same as for pown, otherwise the result is as for powr (except for some exceptional cases).
pown treats 0**0 as 1. The power must be an exact integer. The value is defined for negative bases; e.g., pown(−3,5) is −243.
powr treats 0**0 as NaN (Not-a-Number – undefined). The value is also NaN for cases like powr(−3,2) where the base is less than zero. The value is defined by exp(power'×log(base)).
Donald Knuth sort of settled this debate in 1992, coming down firmly on the side of 0^0 = 1, and went into even more detail in his paper Two Notes on Notation.
Basically, while the limit of f(x)^g(x) is not 1 for all choices of functions f(x) and g(x), it still makes combinatorics much simpler to define 0^0 = 1, and then just make special cases in the few places where you need to consider functions such as 0^x, which are weird anyway. After all, x^0 comes up a lot more often.
Some of the best discussions I know of this topic (other than the Knuth paper) are:
http://www.quora.com/Mathematics/What-is-math-0-0-math?share=1
https://math.stackexchange.com/questions/475337/the-binomial-formula-and-the-value-of-00
When you want to know what value to give f(a) when f isn't directly computable at a, you compute the limit of f as x tends towards a.
In the case of x^y, the usual limits tend towards 1 when x and y tend to 0, and in particular x^x tends towards 1 as x tends to 0 from the right.
See http://www.math.hmc.edu/funfacts/ffiles/10005.3-5.shtml
The C language definition says (7.12.7.4/2):
A domain error may occur if x is zero and y is zero.
It also says (7.12.1/2):
On a domain error, the function returns an implementation-defined value; if the integer expression math_errhandling & MATH_ERRNO is nonzero, the integer expression errno acquires the value EDOM; if the integer expression math_errhandling & MATH_ERREXCEPT is nonzero, the ‘‘invalid’’ floating-point exception is raised.
By default, the value of math_errhandling is MATH_ERRNO, so check errno for the value EDOM.
I'd like to disagree with some of the previous answers' assertion that it's merely a matter of convention or convenience (covering special cases for various theorems, etc.) that 0^0 is defined as 1 instead of 0.
Exponentiation doesn't actually fit that well with our other mathematical notations, so the definition we all learn leaves room for confusion. A slightly different way of approaching it is to say that a^b (or exp(a, b), if you like) returns the value multiplicatively equivalent to multiplying some other thing by a, repeated b times.
When we multiply 5 by 4, 2 times, we get 80. We've multiplied 5 by 16. So 4^2 = 16.
When we multiply 14 by 0, 0 times, we are left with 14. We've multiplied it by 1. Hence, 0^0 = 1.
This line of thinking might also help to clarify negative and fractional exponents. 4^(-2) is a 16th, because 'negative multiplication' is division - we divide by four twice.
a^(1/2) is root(a), because multiplying something by the root of a is half the multiplicative work of multiplying it by a itself; you would have to do it twice to multiply something by 4: 4 = 4^1 = (4^(1/2))^2.
To understand this you need some calculus.
Writing x^x in exponential form and expanding around zero, we get:

x^x = e^(x·log x) = 1 + x·log x + (x·log x)^2/2! + (x·log x)^3/3! + ...

So to understand what happens in the limit as x goes to zero, we need to find out what happens to the second term, x·log x, because the other terms are just x·log x raised to higher powers. We use the transformation:

x·log x = log(x) / (1/x)

Now after this transformation we can use L'Hôpital's rule, which states that:

lim f(x)/g(x) = lim f'(x)/g'(x)

So, differentiating numerator and denominator, we get:

lim log(x)/(1/x) = lim (1/x)/(-1/x^2) = lim (-x) = 0   (as x → 0+)

So we've shown that the term x·log x approaches 0 as x approaches 0. It's easy to see that each subsequent term also approaches zero, and even faster than the second term. So in the limit, the series becomes 1 + 0 + 0 + 0 + ... and thus equals 1.
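You can also watch the limit numerically (a quick check of my own using Math.pow):

```javascript
// x ** x creeps up to 1 as x approaches 0 from the right.
for (const x of [0.1, 0.01, 0.001, 1e-6]) {
  console.log(x, Math.pow(x, x));
}
// 0.1   -> 0.7943...
// 0.001 -> 0.9931...
```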
It seems to me that the code
console.log(1 / 0)
should return NaN, but instead it returns Infinity. However this code:
console.log(0 / 0)
does return NaN. Can someone help me understand the reasoning for this behaviour? Not only does it seem inconsistent, it also seems wrong in the case of x / 0 where x !== 0.
Because that's how floating-point is defined (more generally than just Javascript). See for example:
http://en.wikipedia.org/wiki/Floating-point#Infinities
http://en.wikipedia.org/wiki/NaN#Creation
Crudely speaking, you could think of 1/0 as the limit of 1/x as x tends to zero (from the right). And 0/0 has no reasonable interpretation at all, hence NaN.
In addition to answers based on the mathematical concept of zero, there is a special consideration for floating point numbers. Every underflow result, every non-zero number whose absolute magnitude is too small to represent as a non-zero number, is represented as zero.
0/0 may really be 1e-500/1e-600, or 1e-600/1e-500, or many other ratios of very small values.
The actual ratio could be anything, so there is no meaningful numerical answer, and the result should be a NaN.
Now consider 1/0. It does not matter whether the 0 represents 1e-500 or 1e-600. Regardless, the division would overflow and the correct result is the value used to represent overflows, Infinity.
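Underflow is easy to provoke with literals (a small demonstration of my own):

```javascript
// Values too small for a double underflow to zero, keeping their sign.
console.log(1e-500);          // 0
console.log(-1e-500);         // -0
// After underflow, this is really 0 / 0, hence NaN:
console.log(1e-500 / 1e-600); // NaN
```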
I realize this is old, but I think it's important to note that in JS there is also a -0, which is distinct from +0; that makes this feature of JS much more logical than it seems at first glance.
1 / 0 -> Infinity
1 / -0 -> -Infinity
which logically makes sense, since in calculus the reason dividing by 0 is undefined is that the left-hand limit goes to negative infinity and the right-hand limit to positive infinity. Since -0 and 0 are distinct values in JS, it makes sense for the positive 0 to evaluate to positive Infinity and the negative 0 to negative Infinity.
This logic does not apply to 0/0, which is indeterminate. Unlike with 1/0, taking limits toward 0/0 along different paths gives different results:
lim h->0(0/h) = 0
lim h->0(h/0) = Infinity
which of course is inconsistent, so it results in NaN
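Since === cannot tell the two zeros apart, Object.is (or dividing 1 by the value) is the usual way to observe the distinction:

```javascript
console.log(0 === -0);         // true: they compare equal
console.log(Object.is(0, -0)); // false: but they are distinct values
console.log(1 / 0);            // Infinity
console.log(1 / -0);           // -Infinity
console.log(0 / 0);            // NaN
```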