We all know that 0^0 is indeterminate.
But JavaScript says that:
Math.pow(0, 0) === 1 // true
and C++ says the same thing:
pow(0, 0) == 1 // true
WHY?
I know that:
>Math.pow(0.001, 0.001)
0.9931160484209338
But why does Math.pow(0, 0) throw no error? Wouldn't NaN be a better result than 1?
In C++, the result of pow(0, 0) is essentially implementation-defined, since mathematically we have a contradictory situation: N^0 should always be 1, but 0^N should always be 0 for N > 0, so mathematically you should have no expectations as to the result either way. This Wolfram Alpha forum post goes into a bit more detail.
That said, having pow(0,0) result in 1 is useful for many applications, as the Rationale for International Standard—Programming Languages—C states in the section covering IEC 60559 floating-point arithmetic support:
Generally, C99 eschews a NaN result where a numerical value is useful. [...] The results of pow(∞,0) and pow(0,0) are both 1, because there are applications that can exploit this definition. For example, if x(p) and y(p) are any analytic functions that become zero at p = a, then pow(x,y), which equals exp(y*log(x)), approaches 1 as p approaches a.
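That limit claim is easy to check numerically in JavaScript. Here x(p) = p and y(p) = sin(p) are arbitrary example functions of my own choosing that both vanish at p = 0:

// pow(x(p), y(p)) approaches 1 as p approaches the common zero of x and y
for (const p of [1e-1, 1e-3, 1e-6, 1e-9]) {
  console.log(p, Math.pow(p, Math.sin(p)));
}
// the printed values climb toward 1 as p shrinks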
Update C++
As leemes correctly pointed out, I originally linked to the reference for the complex version of pow, while the non-complex version claims a domain error. The draft C++ standard falls back to the draft C standard, and both C99 and C11, in section 7.12.7.4 The pow functions, paragraph 2, say (emphasis mine):
[...]A domain error may occur if x is zero and y is zero.[...]
which, as far as I can tell, means this behavior is unspecified. Winding back a bit, section 7.12.1 Treatment of error conditions says:
[...] a domain error occurs if an input argument is outside the domain over which the mathematical function is defined. [...] On a domain error, the function returns an implementation-defined value; if the integer expression math_errhandling & MATH_ERRNO is nonzero, the integer expression errno acquires the value EDOM; [...]
So if there were a domain error, the result would be implementation-defined, but in the latest versions of both gcc and clang the value of errno is 0, so it is not a domain error for those compilers.
Update Javascript
For JavaScript, the ECMAScript® Language Specification, in section 15.8 The Math Object under 15.8.2.13 pow (x, y), says, amongst other conditions, that:
If y is +0, the result is 1, even if x is NaN.
In JavaScript Math.pow is defined as follows:
If y is NaN, the result is NaN.
If y is +0, the result is 1, even if x is NaN.
If y is −0, the result is 1, even if x is NaN.
If x is NaN and y is nonzero, the result is NaN.
If abs(x)>1 and y is +∞, the result is +∞.
If abs(x)>1 and y is −∞, the result is +0.
If abs(x)==1 and y is +∞, the result is NaN.
If abs(x)==1 and y is −∞, the result is NaN.
If abs(x)<1 and y is +∞, the result is +0.
If abs(x)<1 and y is −∞, the result is +∞.
If x is +∞ and y>0, the result is +∞.
If x is +∞ and y<0, the result is +0.
If x is −∞ and y>0 and y is an odd integer, the result is −∞.
If x is −∞ and y>0 and y is not an odd integer, the result is +∞.
If x is −∞ and y<0 and y is an odd integer, the result is −0.
If x is −∞ and y<0 and y is not an odd integer, the result is +0.
If x is +0 and y>0, the result is +0.
If x is +0 and y<0, the result is +∞.
If x is −0 and y>0 and y is an odd integer, the result is −0.
If x is −0 and y>0 and y is not an odd integer, the result is +0.
If x is −0 and y<0 and y is an odd integer, the result is −∞.
If x is −0 and y<0 and y is not an odd integer, the result is +∞.
If x<0 and x is finite and y is finite and y is not an integer, the result is NaN.
(emphasis mine)
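You can verify a few of the listed cases directly in a console:

// spot-checking the listed cases:
console.log(Math.pow(NaN, 0));      // 1 -- y is +0, even though x is NaN
console.log(Math.pow(0, 0));        // 1
console.log(Math.pow(1, Infinity)); // NaN -- abs(x) == 1 and y is +Infinity
console.log(Math.pow(-0, 3));       // -0 -- x is -0, y an odd positive integer
console.log(Math.pow(-2, 0.5));     // NaN -- negative finite x, non-integer y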
As a general rule, native functions in any language should work as described by the language specification. Sometimes the specification explicitly leaves behavior "undefined," where it's up to the implementer to determine what the result should be; however, this is not a case of undefined behavior.
It is just a convention to define it as 1, as 0, or to leave it undefined. Defining it as 1 is widespread because of the following specification:
ECMA-Script documentation says the following about pow(x,y):
If y is +0, the result is 1, even if x is NaN.
If y is −0, the result is 1, even if x is NaN.
[ http://www.ecma-international.org/ecma-262/5.1/#sec-15.8.2.13 ]
According to Wikipedia:
In most settings not involving continuity in the exponent, interpreting 0^0 as 1 simplifies formulas and eliminates the need for special cases in theorems.
There are several possible ways to treat 0**0 with pros and cons to each (see Wikipedia for an extended discussion).
The IEEE 754-2008 floating point standard recommends three different functions:
pow treats 0**0 as 1. This is the oldest defined version. If the power is an exact integer the result is the same as for pown, otherwise the result is as for powr (except for some exceptional cases).
pown treats 0**0 as 1. The power must be an exact integer. The value is defined for negative bases; e.g., pown(−3,5) is −243.
powr treats 0**0 as NaN (Not-a-Number – undefined). The value is also NaN for cases like powr(−3,2) where the base is less than zero. The value is defined by exp(power'×log(base)).
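Since JavaScript only exposes plain Math.pow, here is a rough sketch of powr via its defining formula (powr is a made-up name here, and this ignores the standard's other special cases):

// powr is defined by exp(y * log(x)); this is enough to see why
// powr(0, 0) is NaN: log(0) is -Infinity, and 0 * -Infinity is NaN.
function powr(x, y) {
  return Math.exp(y * Math.log(x));
}
console.log(powr(0, 0));  // NaN
console.log(powr(-3, 2)); // NaN -- log of a negative number is NaN
console.log(powr(2, 10)); // ~1024, up to floating-point rounding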
Donald Knuth sort of settled this debate in 1992, arguing that 0^0 has to be 1, and went into even more detail in his paper Two Notes on Notation.
Basically, while we don't have 1 as the limit of f(x)^g(x) for all functions f(x) and g(x) tending to zero, it still makes combinatorics so much simpler to define 0^0 = 1; then we just make special cases in the few places where we need to consider functions such as 0^x, which are weird anyway. After all, x^0 comes up a lot more often.
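As a small illustration of how 0^0 = 1 removes special cases, here is the binomial theorem evaluated at x = 0 (binom and binomialSum are my own helper names):

// (x + y)^n = sum over k of C(n,k) * x^k * y^(n-k); the k = 0 term
// contains x^0, and at x = 0 it only works out if 0^0 is 1.
function binom(n, k) {
  let r = 1;
  for (let i = 1; i <= k; i++) r = (r * (n - i + 1)) / i;
  return r;
}
function binomialSum(x, y, n) {
  let s = 0;
  for (let k = 0; k <= n; k++) s += binom(n, k) * x ** k * y ** (n - k);
  return s;
}
console.log(binomialSum(0, 2, 3)); // 8, matching (0 + 2) ** 3 === 8
console.log(0 ** 0);               // 1 -- the convention doing the work above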
Some of the best discussions I know of this topic (other than the Knuth paper) are:
http://www.quora.com/Mathematics/What-is-math-0-0-math?share=1
https://math.stackexchange.com/questions/475337/the-binomial-formula-and-the-value-of-00
When you want to know what value you should give to f(a) when f isn't directly computable at a, you compute the limit of f as x tends towards a.
In the case of x^y, the usual limits tend towards 1 when x and y tend to 0; in particular, x^x tends towards 1 as x tends to 0.
See http://www.math.hmc.edu/funfacts/ffiles/10005.3-5.shtml
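For instance, in JavaScript (this only probes the limit along the particular path x^x, not every path to (0, 0)):

// x^x creeps toward 1 as x tends to 0 from the right:
for (const x of [0.1, 0.01, 0.001, 0.0001]) {
  console.log(x, Math.pow(x, x));
}
// 0.1 -> 0.794..., 0.001 -> 0.993..., 0.0001 -> 0.999...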
The C language definition says (7.12.7.4/2):
A domain error may occur if x is zero and y is zero.
It also says (7.12.1/2):
On a domain error, the function returns an implementation-defined value; if the integer expression math_errhandling & MATH_ERRNO is nonzero, the integer expression errno acquires the value EDOM; if the integer expression math_errhandling & MATH_ERREXCEPT is nonzero, the ‘‘invalid’’ floating-point exception is raised.
By default, the value of math_errhandling is MATH_ERRNO, so check errno for the value EDOM.
I'd like to disagree with some of the previous answers' assertion that it's a matter of convention or convenience (covering some special cases for various theorems, etc) that 0^0 be defined as 1 instead of 0.
Exponentiation doesn't actually fit that well with our other mathematical notations, so the definition we all learn leaves room for confusion. A slightly different way of approaching it is to say that a^b (or exp(a, b), if you like) returns the value multiplicatively equivalent to multiplying some other thing by a, repeated b times.
When we multiply 5 by 4, 2 times, we get 80. We've multiplied 5 by 16. So 4^2 = 16.
When we multiply 14 by 0, 0 times, we are left with 14. We've multiplied it by 1. Hence, 0^0 = 1.
This line of thinking might also help to clarify negative and fractional exponents. 4^(-2) is a 16th, because 'negative multiplication' is division - we divide by four twice.
a^(1/2) is root(a), because multiplying something by the root of a is half the multiplicative work of multiplying it by a itself: you would have to do it twice to multiply by 4, since 4 = 4^1 = (4^(1/2))^2.
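A tiny sketch of that reading of exponentiation (naivePow is a made-up name, and it only handles non-negative integer exponents):

// Repeated multiplication starting from the multiplicative identity 1;
// with exp = 0 the loop never runs, so 0^0 falls out as 1 naturally.
function naivePow(base, exp) {
  let result = 1;
  for (let i = 0; i < exp; i++) result *= base;
  return result;
}
console.log(naivePow(4, 2)); // 16
console.log(naivePow(0, 5)); // 0
console.log(naivePow(0, 0)); // 1 -- the empty product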
To understand this, you need a little calculus.
Expanding x^x around zero using x^x = e^(x·log(x)) and the Taylor series of e^t, we get:
x^x = 1 + x·log(x) + (x·log(x))^2/2! + (x·log(x))^3/3! + ...
So to understand what happens to the limit as x goes to zero (from the right), we need to find out what happens to the second term, x·log(x), because the other terms are x·log(x) raised to some power.
We need to use the transformation:
x·log(x) = log(x) / (1/x)
Now, after this transformation, we can use L'Hôpital's rule, which states that:
lim f(x)/g(x) = lim f'(x)/g'(x)   (when both f and g tend to 0 or to ±∞)
So, differentiating numerator and denominator of that transformation, we get:
lim (log(x) / (1/x)) = lim ((1/x) / (-1/x^2)) = lim (-x) = 0
So we've calculated that the term x·log(x) approaches 0 as x approaches 0.
It's easy to see that the other consecutive terms also approach zero, and even faster than the second term.
So at the point x = 0, the series becomes 1 + 0 + 0 + 0 + ... and is thus equal to 1.
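As a quick numerical sanity check (truncating the series after four terms is my own arbitrary cutoff), the partial sums track Math.pow(x, x) closely for small x:

// comparing x^x with the first few terms of the expansion above:
for (const x of [0.1, 0.01, 0.001]) {
  const t = x * Math.log(x);
  const series = 1 + t + (t * t) / 2 + (t * t * t) / 6;
  console.log(x, Math.pow(x, x), series);
}
// both columns head toward 1 as x shrinks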
I know (-0 === 0) comes out to be true. I am curious to know why -0 < 0 happens?
When I run this code in the Stack Overflow snippet runner, it returns 0.
const arr = [+0, 0, -0];
console.log(Math.min(...arr));
But when I run the same code in the browser console, it returns -0. Why is that? I have tried searching on Google but didn't find anything useful. This question might not have a practical use; I just want to understand how JS calculates it.
const arr = [+0, 0, -0];
console.log(Math.min(...arr)); // -0
-0 is not less than 0 or +0; both -0 < 0 and -0 < +0 return false. You're mixing up the behavior of Math.min with the comparison of -0 against 0/+0.
The specification of Math.min is clear on this point:
b. If number is -0𝔽 and lowest is +0𝔽, set lowest to -0𝔽.
Without this exception, the behavior of Math.min and Math.max would depend on the order of arguments, which can be considered an odd behavior — you probably want Math.min(x, y) to always equal Math.min(y, x) — so that might be one possible justification.
Note: This exception was already present in the 1997 specification for Math.min(x, y), so that's not something that was added later on.
This is a specialty of Math.min, as specified:
21.3.2.25 Math.min ( ...args )
[...]
For each element number of coerced, do
a. If number is NaN, return NaN.
b. If number is -0𝔽 and lowest is +0𝔽, set lowest to -0𝔽.
c. If number < lowest, set lowest to number.
Return lowest.
Note that in most cases, +0 and -0 are treated equally, including in ToString conversion; thus (-0).toString() evaluates to "0". That you can observe the difference in the browser console is an implementation detail of the browser.
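A quick console check makes the behaviour visible (Object.is is used because, unlike ===, it distinguishes the two zeros):

// -0 and +0 compare as equal, yet Math.min prefers -0 regardless of order:
console.log(-0 < 0);                          // false
console.log(-0 === 0);                        // true
console.log(Object.is(Math.min(0, -0), -0));  // true
console.log(Object.is(Math.min(-0, 0), -0));  // true -- commutative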
The point of this answer is to explain why the language design choice of having Math.min be fully commutative makes sense.
I am curious to know why -0 < 0 happens?
It doesn't really; < is a separate operation from "minimum", and Math.min isn't based solely on IEEE < comparison like b<a ? b : a.
That would be non-commutative wrt. NaN as well as signed-zero. (< is false if either operand is NaN, so that would produce a).
As far as principle of least surprise, it would be at least as surprising (if not moreso) if Math.min(-1,NaN) was NaN but Math.min(NaN, -1) was -1.
The JS language designers wanted Math.min to be NaN-propagating, so basing it just on < wasn't possible anyway. They chose to make it fully commutative including for signed zero, which seems like a sensible decision.
OTOH, most code doesn't care about signed zero, so this language design choice costs a bit of performance for everyone to cater to the rare cases where someone wants well-defined signed-zero semantics.
If you want a simple operation that ignores NaN in an array, iterate yourself with current_min = x < current_min ? x : current_min. That will ignore all NaNs, and will also ignore -0 when current_min <= +0.0 (IEEE comparison). Or, if current_min starts out as NaN, it will stay NaN. Many of these properties are undesirable for a Math.min function, so it doesn't work that way.
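As a rough sketch of that do-it-yourself loop (rawMin is my own name for it, and starting at Infinity is one arbitrary choice of initial value):

// minimum based purely on <; skips NaN and doesn't prefer -0 over +0
function rawMin(arr) {
  let current_min = Infinity;
  for (const x of arr) {
    current_min = x < current_min ? x : current_min;
  }
  return current_min;
}
console.log(rawMin([3, NaN, 1]));            // 1 -- NaN compares false, so it's skipped
console.log(Object.is(rawMin([0, -0]), -0)); // false -- +0 wins here
console.log(Math.min(3, NaN, 1));            // NaN -- Math.min propagates NaN instead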
If you compare other languages, the C standard fmin function is commutative wrt. NaN (returning the non-NaN if there is one, opposite of JS), but is not required to be commutative wrt. signed zero. Some C implementations choose to work like JS for +-0.0 for fmin / fmax.
But C++ std::min is defined purely in terms of a < operation, so it does work that way. (It's intended to work generically, including on non-numeric types like strings; unlike std::fmin it doesn't have any FP-specific rules.) See What is the instruction that gives branchless FP min and max on x86? re: x86's minps instruction and C++ std::min which are both non-commutative wrt. NaN and signed zero.
IEEE 754 < doesn't give you a total order over distinct FP numbers. Math.min does except for NaNs (e.g. if you built a sorting network with it and Math.max.) Its order disagrees with Math.max: they both return NaN if there is one, so a sorting network using min/max comparators would produce all NaNs if there were any in the input array.
Math.min alone wouldn't be sufficient for sorting without something like == to see which arg it returned, but that breaks down for signed zero as well as NaN.
The spec is curiously contradictory. The < comparison rule explicitly says that -0 is not less than +0. However, the spec for Math.min() effectively says the opposite: if the current value (while iterating through the arguments) is -0, and the smallest value so far is +0, then the smallest value should be set to -0.
I would like for somebody to activate the T.J. Crowder signal for this one.
edit — it was suggested in some comments that a possible reason for the behavior is to make it possible to detect a -0 value, even though for almost all purposes in normal expressions the -0 is treated as being plain 0.
I came across the statement "If -0 is subtracted from +0, the result is -0" in a JavaScript book published in 2012.
However, when I compute +0 - (-0) in the browser, it returns 0 instead of -0. I would like to know whether ECMAScript has changed since then, or whether it is simply an error/typo in the book.
If what the book mentioned is true, I would like to hear explanation and elaboration on this part.
Book: Professional JavaScript for Web Developers, 3rd Ed. by Nicholas C. Zakas (Chapter 3 - pg 63)
The book is incorrect. Maybe it meant -0 - +0. From 12.7.5:
The sum of two negative zeroes is −0. The sum of two positive zeroes, or of two zeroes of opposite sign, is +0.
Given numeric operands a and b, it is always the case that a–b produces the same result as a +(–b).
and 12.5.0:
The unary - operator converts its operand to Number type and then negates it. Negating +0 produces −0, and negating −0 produces +0.
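A quick check of both subtractions against those rules (Object.is is used because === can't tell the zeros apart):

// a - b behaves as a + (-b), and the sum of two zeros of opposite sign is +0:
console.log(Object.is(0 - -0, -0));  // false -- +0 + +0 gives +0
console.log(Object.is(-0 - 0, -0));  // true  -- -0 + -0 gives -0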
Also, I skipped to another random page in the book and found this:
Comma Operator
The comma operator allows execution of more than one operation in a single statement, as illustrated here:
var num1=1, num2=2, num3=3;
which is not an instance of the comma operator. Two for two; get a refund.
Today, I noticed this behaviour of Java's and JavaScript's Math.round function.
It rounds 1.40 to 1, and -1.40 to -1.
It rounds 1.60 to 2, and -1.60 to -2.
Now, it rounds 1.5 to 2.
But it rounds -1.5 to -1.
I checked this behaviour in the round equivalents of PHP and MySQL as well.
Both gave the results I expected, i.e. round(-1.5) gives -2.
Even the definition of Math.round says it should round to the nearest integer.
I wanted to know why this is so.
The problem is that the distance between 1 and 1.5, and between 1.5 and 2, is exactly the same (0.5). There are several different ways you could now round:
always towards positive infinity
always towards negative infinity
always towards zero
always away from zero
towards nearest odd or even number
... (see Wikipedia)
Obviously, both Java and JS opted for the first one (which is not uncommon), while PHP and MySQL round away from zero.
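To see the difference, and to emulate PHP/MySQL-style rounding in JavaScript, something like this works (roundHalfAwayFromZero is my own helper name):

// Math.round breaks .5 ties toward +Infinity; rounding the absolute
// value and restoring the sign gives "away from zero" behaviour instead.
function roundHalfAwayFromZero(x) {
  return Math.sign(x) * Math.round(Math.abs(x));
}
console.log(Math.round(1.5));             //  2
console.log(Math.round(-1.5));            // -1 (toward +Infinity)
console.log(roundHalfAwayFromZero(-1.5)); // -2 (like PHP's round())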
This, for comparison, is how Java documents RoundingMode.FLOOR:
Rounding mode to round towards negative infinity. If the result is positive, behave as for RoundingMode.DOWN; if negative, behave as for RoundingMode.UP. Note that this rounding mode never increases the calculated value.
For Math.round itself, it is just a matter of which whole number is chosen when the value sits exactly between two. Here is what the javadoc says:
public static int round(float a)
Returns the closest int to the argument, with ties rounding up.
Special cases:
If the argument is NaN, the result is 0.
If the argument is negative infinity or any value less than or equal to the value of Integer.MIN_VALUE, the result is equal to the value of Integer.MIN_VALUE.
If the argument is positive infinity or any value greater than or equal to the value of Integer.MAX_VALUE, the result is equal to the value of Integer.MAX_VALUE.
Parameters:
a - a floating-point value to be rounded to an integer.
Returns:
the value of the argument rounded to the nearest int value.
From the ECMAScript documentation:
Returns the Number value that is closest to x and is equal to a mathematical integer. If two integer Number values are equally close to x, then the result is the Number value that is closer to +∞. If x is already an integer, the result is x.
where x is the number passed to Math.round().
So Math.round(1.5) returns 2, since 2 is closer to +∞ than 1 is. Similarly, Math.round(-1.5) returns -1, since -1 is closer to +∞ than -2 is.
It seems to me that the code
console.log(1 / 0)
should return NaN, but instead it returns Infinity. However this code:
console.log(0 / 0)
does return NaN. Can someone help me understand the reasoning behind this behaviour? Not only does it seem inconsistent, it also seems wrong in the case of x / 0 where x !== 0.
Because that's how floating-point is defined (more generally than just Javascript). See for example:
http://en.wikipedia.org/wiki/Floating-point#Infinities
http://en.wikipedia.org/wiki/NaN#Creation
Crudely speaking, you could think of 1/0 as the limit of 1/x as x tends to zero (from the right). And 0/0 has no reasonable interpretation at all, hence NaN.
In addition to answers based on the mathematical concept of zero, there is a special consideration for floating point numbers. Every underflow result, every non-zero number whose absolute magnitude is too small to represent as a non-zero number, is represented as zero.
0/0 may really be 1e-500/1e-600, or 1e-600/1e-500, or many other ratios of very small values.
The actual ratio could be anything, so there is no meaningful numerical answer, and the result should be a NaN.
Now consider 1/0. It does not matter whether the 0 represents 1e-500 or 1e-600. Regardless, the division would overflow and the correct result is the value used to represent overflows, Infinity.
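You can watch this happen in JavaScript, since number literals below the smallest representable double underflow to 0:

// two very different tiny quantities both underflow to 0 ...
console.log(1e-500);           // 0
console.log(1e-600);           // 0
// ... so their ratio is computed as 0/0, even though the "true"
// ratio would be 1e100:
console.log(1e-500 / 1e-600);  // NaN
// dividing a normal number by an underflowed zero overflows instead:
console.log(1 / 1e-500);       // Infinity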
I realize this is old, but I think it's important to note that in JS there is also a -0, which is a distinct value from 0/+0 (even though they compare as equal), and this makes the behaviour much more logical than it seems at first glance.
1 / 0 -> Infinity
1 / -0 -> -Infinity
which logically makes sense, since in calculus the reason dividing by 0 is undefined is solely that the left limit goes to negative infinity and the right limit goes to positive infinity. Since -0 and 0 are distinct values in JS, it makes sense for the positive 0 to evaluate to positive Infinity and the negative 0 to evaluate to negative Infinity.
This logic does not apply to 0/0, which is indeterminate. Unlike with 1/0, taking limits this way for 0/0 gives two different results:
lim h->0(0/h) = 0
lim h->0(h/0) = Infinity
which of course is inconsistent, so it results in NaN
I know from reading this Stack Overflow question that the compiler will look at your number, decide if the midpoint is an even or odd number, and then return the even number. The example number was 2.5, which rounded to 3. I've tried my own little experiments to see what happens, but I have yet to find any specification about this, or even whether it is consistent between browsers.
Here's an example JavaScript snippet using jQuery for the display:
$(document).ready(function() {
  $("#answer").html(showRounding());
});

function showRounding() {
  var answer = Math.round(2.5);
  return answer;
}
This returns a '3'.
What I would like to know is this: how close is the rounding in JavaScript to the C# equivalent? The reason I'm asking is that I would like to take a JavaScript method that uses Math.round and rewrite the same method in C#, and I want to know that I will get the same results after rounding a number.
Here's the complete JavaScript specification for Math.round(x):
15.8.2.15 round (x)
Returns the Number value that is closest to x and is equal to a mathematical integer. If two integer Number values are equally close to x, then the result is the Number value that is closer to +∞. If x is already an integer, the result is x.
If x is NaN, the result is NaN.
If x is +0, the result is +0.
If x is −0, the result is −0.
If x is +∞, the result is +∞.
If x is −∞, the result is −∞.
If x is greater than 0 but less than 0.5, the result is +0.
If x is less than 0 but greater than or equal to -0.5, the result is −0.
NOTE 1 Math.round(3.5) returns 4, but Math.round(−3.5) returns −3.
NOTE 2 The value of Math.round(x) is the same as the value of Math.floor(x+0.5), except when x is −0 or is less than 0 but greater than or equal to -0.5; for these cases Math.round(x) returns −0, but Math.floor(x+0.5) returns +0.
The C# Language Specification does not stipulate any particular rounding algorithm. The closest thing we have is the documentation for .NET's Math.Round. From that, you can see that some of the JavaScript cases don't apply (Math.Round only handles decimals and doubles, not infinities), and the method's overloads give you a lot more control over the result: you can specify the number of fractional digits in the result and the midpoint rounding method. By default, Math.Round uses "banker's rounding" (round half to even).
ECMAScript's rounding is basically naive asymmetric rounding (with added checks for ±Infinity). With respect to porting your JavaScript code to C#, you're probably best off avoiding .NET's Math.Round (as it is always symmetric) and using Math.Floor instead:
double d = -3.5d;
double rounded = Math.Floor(d + 0.5); // -3 matches JavaScript's Math.round
That is, if you want strict emulation of ECMAScript's Math.round.