How to calculate the ln of a given number? - javascript

Good day all.
The Google scientific calculator lets you calculate the ln of a given number.
examples:
ln(1) = 0
ln(2) = 0.69314718056
I've been trying to figure out the equation used by it to arrive at the answer. Any leads would be welcome.
I'm bad at math as you can tell. :(

If you want to verify the value for yourself, as some kind of programming exercise, the classical formula for the natural or Neperian (Napier's) logarithm is
ln(a)=limit(n -> inf) n*(root(n,a)-1),
so start with
n=1, a=2
and loop
n=n*2, a=sqrt(a),
output n*(a-1)
until some kind of convergence is reached. This will break down at some point due to the limits of floating point numbers, since the repeated square root converges very quickly towards 1.
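A minimal JavaScript sketch of that loop (the fixed iteration count is my own arbitrary choice, made to stop before the floating point breakdown sets in):
function lnByLimit(a, iterations = 20) {
    let n = 1;
    for (let i = 0; i < iterations; i++) {
        n *= 2;           // n = 2^(i+1)
        a = Math.sqrt(a); // a = original^(1/n) after repeated square roots
    }
    return n * (a - 1);
}

console.log(lnByLimit(2)); // ~0.6931474, good to about 6-7 digits; too many iterations eventually makes it worse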
The traditional definition without using the exponential function is via the integral
ln(a) = integral( 1/x, x=1..a)
where you can use the trapezoid or Simpson method of numerical integration to get increasingly more accurate results.
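For instance, a sketch of the composite Simpson rule applied to that integral (this assumes a >= 1; the interval count of 1000 is an arbitrary choice):
function lnByIntegral(a, intervals = 1000) { // intervals must be even
    const h = (a - 1) / intervals;
    let sum = 1 + 1 / a;                      // endpoint terms f(1) and f(a)
    for (let i = 1; i < intervals; i++) {
        sum += (i % 2 ? 4 : 2) / (1 + i * h); // interior Simpson weights 4, 2, 4, ...
    }
    return (h / 3) * sum;
}

console.log(lnByIntegral(2)); // ~0.69314718055995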
From the integral formula one gets, via the geometric series, the power series of the logarithm. A series-based formula that converges a little faster than the direct power series starts with the identity
ln(2)=ln(4/3)-ln(2/3)=ln(1+1/3)-ln(1-1/3)
from
a = (1+x)/(1-x) <==> x = (a-1)/(a+1).
Using
ln(1+x)=x-x^2/2+x^3/3-x^4/4+x^5/5-+...
the even powers in the above difference cancel, and
ln(1+x)-ln(1-x) = 2*x*(1 + x^2/3 + x^4/5 + ...).
So for the computation of ln(2) initialize
x=1/3, xx=x*x, n=1, xpow=1, sum=0
and loop
sum+=xpow/n, xpow *= xx, n+=2
output 2*x*sum
again until some kind of convergence is reached.
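Here is that loop as a JavaScript sketch, generalized to any a > 0 through the x = (a-1)/(a+1) identity above (the sum !== prev test is just one simple way to detect convergence):
function lnBySeries(a) {
    const x = (a - 1) / (a + 1); // maps any a > 0 into (-1, 1)
    const xx = x * x;
    let n = 1, xpow = 1, sum = 0, prev = -1;
    while (sum !== prev) {       // stop once adding a term no longer changes the sum
        prev = sum;
        sum += xpow / n;
        xpow *= xx;
        n += 2;
    }
    return 2 * x * sum;
}

console.log(lnBySeries(2)); // ~0.6931471805599453, compare Math.log(2)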

ln x gives you the natural logarithm of x (or the value of y that makes the equation e^y = x true, where e is Euler's number)
Math.log(2);
The result will be:
0.6931471805599453
The Math.log() method returns the natural logarithm (base e) of a number.
Note: If the parameter x is negative, NaN is returned.
Note: If the parameter x is 0, -Infinity is returned.
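You can see both notes in action:
console.log(Math.log(2));  // 0.6931471805599453
console.log(Math.log(1));  // 0
console.log(Math.log(0));  // -Infinity
console.log(Math.log(-1)); // NaN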

Related

Calculate googolplex binary from left to right

How do I calculate the leading N (e.g. 100) binary digits of googolplex (10^(10^100)), from the left?
I know how to calculate the binary digits from right to left, but that could take hundreds of years (Reference) to run...
Don't have an answer, but have a suggestion for further analysis.
If you want it in binary, then you want the bits starting at the Nth bit, where N = X + 1 and X is described as follows:
2^X = 10^(10^100)
Take log base 10 of both sides =>
X = 10^100 / log10(2) ≈ 3.32e100
Still not sure how to reduce it from there, but maybe playing with logarithm identities might be interesting. If you can compute X, maybe you can come up with a long division algorithm, though the running time argument on your reference makes me imagine the runtime for computing this could be the same. I.e. see you in ~600 years.
Another idea might be to investigate how numeric coprocessors create IEEE mantissas in binary.
Maybe there is an algorithm there you can leverage for something like this.
Just guessing though
So some basic math helps to outline the approach. I'll emphasize some highlights:
You want to compute the left digits accurately, in binary.
Digits, mathematically, are invariant with respect to multiplication/division by the base
Powers are equivalent to multiplication in log space
Multiplication/division by the base is equivalent to adding whole numbers in log space
BitBlitz has the right idea--you can use logarithms to solve this. In particular, take the logarithm of 10 in base 2, multiply it by 10^100, and ignore everything to the left of the (base 2) "decimal" point. To give you an idea of the scale: 10^100 is 100 decimal digits; using the 1K = 2^10 ≈ 10^3 approximation, that is about 100/3 ≈ 33 factors of 2^10, i.e. roughly 330 bits to shift past before you reach the bits you care about. Once you get past those and start hitting the binary "decimals" (the fractional bits of the logarithm), you are effectively computing the digits left to right. Gather a huge chunk of such fractional bits, take the inverse log (2 to that power), and the resulting binary digits will match the leading digits you want.
You're definitely going to need a bignum library for this task; long double just isn't going to cut it. But it should be quite possible, using this approach, to gather a reasonable number of leftmost digits.
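To make the approach concrete, here is a small-scale sketch with plain doubles. It can only handle modest exponents such as 10^30, since a double carries roughly 16 significant digits; googolplex itself needs log2(10) to well over 100 digits of precision, hence the bignum library:
const LOG2_10 = Math.log2(10);

// Leading binary digits of 10^exp10, read off from the fractional part of
// exp10 * log2(10): 10^e = 2^(e * log2(10)) = 2^intPart * 2^fracPart.
function leadingBits(exp10, nBits) {
    const x = exp10 * LOG2_10;
    let m = Math.pow(2, x - Math.floor(x)); // m is in [1, 2)
    let bits = "";
    for (let i = 0; i < nBits; i++) {
        const b = Math.floor(m);
        bits += b;
        m = (m - b) * 2; // shift the next binary digit into the integer position
    }
    return bits;
}

console.log(leadingBits(30, 20));
console.log((10n ** 30n).toString(2).slice(0, 20)); // exact value, for comparison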

Infinite from zero division

In JavaScript, if you divide by 0 you get Infinity
typeof Infinity; //number
isNaN(Infinity); //false
This insinuates that Infinity is a number (of course, no argument there).
What I learned is that anything divided by zero is an indeterminate form: it has no value and is not a number.
That definition however is for arithmetic, and I know that in programming it can either yield Infinity, Not a Number, or just throw an exception.
So why yield Infinity? Does anybody have an explanation for that?
First off, resulting in Infinity is not due to some crazy math behind the scenes. The spec states that:
Division of a non-zero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
The logic of the spec authors goes along these lines:
2/1 = 2. Simple enough.
2/0.5 = 4. Halving the denominator doubles the result.
...and so on:
2/0.0000000000000005 = 4e+15. As the denominator trends toward zero, the result grows. Thus, the spec authors decided that division by zero should default to Infinity, as should any other operation that produces a number too big for JavaScript to represent [0] (instead of some quasi-numeric state or a divide-by-zero exception).
You can see this in action in the code of Google's V8 engine: https://github.com/v8/v8/blob/bd8c70f5fc9c57eeee478ed36f933d3139ee221a/src/hydrogen-instructions.cc#L4063
[0] "If the magnitude is too large to represent, the operation overflows; the result is then an infinity of appropriate sign."
JavaScript is a loosely typed language, which means that it doesn't have to return the type you were expecting from a function.
Infinity isn't actually an integer.
In a strongly typed language, if your function was supposed to return an int, the only thing you could do when you get a value that isn't an int is throw an exception.
In a loosely typed language you have another option: return a new type that represents the result better (in this case, Infinity).
Infinity is very different from indetermination.
If you compute x/0+ you get +Infinity, and for x/0- you get -Infinity (taking x > 0 in that example).
JavaScript uses it to note that you have exceeded the capacity of the underlying floating point storage.
You can then handle it to direct your software towards either exceptional cases or a big-number version of your computation.
Infinity is actually consistent in formulae. Without it, you have to break formulae into small pieces, and you end up with more complicated code.
Try this, and you get j as Infinity:
var i = Infinity;
var j = 2*i/5;
console.log("result = "+j);
This is because JavaScript uses floating-point arithmetic, and this is how its division-by-zero exception is handled by default:
Division by zero (an operation on finite operands gives an exact infinite result, e.g., 1/0 or log(0)) (returns ±infinity by default).
wikipedia source
When x tends towards 0 in the formula y=1/x, y tends towards infinity. So it would make sense if something that would end up as a really high number (following that logic) were represented by Infinity. Somewhere around 1.8*10^308 (Number.MAX_VALUE), JavaScript runs out of representable numbers, so any calculation that would end up above that threshold just returns Infinity instead.
As determined by the ECMAScript language specification:
The sign of the result is positive if both operands have the same sign, negative if the operands have different signs.
Division of an infinity by a zero results in an infinity. The sign is determined by the rule already stated above.
Division of a nonzero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
As the denominator of an arithmetic fraction tends towards 0 (for a finite non-zero numerator) the result tends towards +Infinity or -Infinity depending on the signs of the operands. This can be seen by:
1/0.1 = 10
1/0.01 = 100
1/0.001 = 1000
1/0.0000000001 = 10000000000
1/1e-308 = 1e308
Taking this further then when you perform a division by zero then the JavaScript engine gives the result (as determined by the spec quoted above):
1/0 = Number.POSITIVE_INFINITY
-1/0 = Number.NEGATIVE_INFINITY
-1/-0 = Number.POSITIVE_INFINITY
1/-0 = Number.NEGATIVE_INFINITY
It is the same if you divide by a value small enough that the quotient overflows:
1/1e-309 = Number.POSITIVE_INFINITY

calculate a derivative in javascript

To calculate the value of the derivative of a given function Func at a given point x, one would think that for good precision you should do this:
a = Func( x - Number.MIN_VALUE)
b = Func( x + Number.MIN_VALUE)
return (b-a)/(2*Number.MIN_VALUE)
But in JavaScript, x + Number.MIN_VALUE (and likewise x - Number.MIN_VALUE) just returns x.
I tried different values, and 1 + 1e-15 returns 1.000000000000001. Trying to get more precision, 1 + 1e-16 returns 1. So I'll have to use 1e-15 instead of Number.MIN_VALUE, which is 5e-324.
Is there a way to get better precision in this case in JavaScript?
This is not at all about javascript, actually.
Reducing the separation between the points will not increase the precision of your derivative. The values of the function at the two close points are computed with some error in the last digits, so the error of their difference ends up much larger than the point separation itself. You then divide that very small difference, which has a huge relative error, by a very small number, and you most likely get complete rubbish.
The best way to get a better value of the derivative is to evaluate the function at multiple points (with not-too-small separation), construct an approximating polynomial, and differentiate it at the point of interest, as sketched below.
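A sketch of both points (the step heuristic h ~ cbrt(Number.EPSILON), scaled by |x|, is a common rule of thumb, not a magic constant; differentiating the cubic through four symmetric samples collapses to the classic five-point stencil):
function derivative(f, x) {
    // Moderate, scale-aware step: far larger than Number.MIN_VALUE, but
    // small enough to keep the truncation error low.
    const h = Math.cbrt(Number.EPSILON) * Math.max(1, Math.abs(x));
    // Derivative at x of the interpolating polynomial through
    // x-2h, x-h, x+h, x+2h:
    return (f(x - 2 * h) - 8 * f(x - h) + 8 * f(x + h) - f(x + 2 * h)) / (12 * h);
}

console.log(derivative(Math.sin, 0)); // ~1 (true value: cos 0 = 1)
console.log(derivative(Math.exp, 1)); // ~2.718281828 (true value: e)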

Preserving the floating point & addition of a bitwise operation in javascript

I am trying to understand the way to add, subtract, divide, and multiply by operating on the bits.
It is necessary to do some optimizing in my JavaScript program due to many calculations running after an event has happened.
Using the code below as a reference, I can see that carry holds the ANDed value, and that the XOR sets the sum var to the bits that do not match in n1 and n2.
Here is my question. ;) What does shifting (n1 & n2) left by 1 do? What is the goal of doing this? With the XOR it is obvious that nothing else needs to be done with those bits, because their values are already correct in the sum var. But I can't picture in my head what the AND-plus-shift operation accomplishes.
function add(n1, n2)
{
    var carry, sum;

    // Find out which bits will result in a carry.
    // Those bits will affect the bits directly to
    // the left, so we shall shift one bit.
    carry = (n1 & n2) << 1;

    // In digital electronics, an XOR gate is also known
    // as a quarter adder. Basically an addition is performed
    // on each individual bit, and the carry is discarded.
    //
    // All I'm doing here is applying the same concept.
    sum = n1 ^ n2;

    // If any bits match in position, then perform the
    // addition on the current sum and the results of
    // the carry.
    if (sum & carry)
    {
        return add(sum, carry);
    }
    // Return the sum.
    else
    {
        return sum ^ carry;
    }
}
The code above works as expected, but it does not handle floating point values. I need the total, including the fractional part, to be returned.
Does anyone have a function that I can use with the above that will help me with floating point values? Or a website with a clear explanation of what I am looking for? I've tried searching for the last day or so and cannot find anything to look over.
I got the code above from this resource.
http://www.dreamincode.net/code/snippet3015.htm
Thanks ahead of time!
After thinking about it: doing a left shift by one position is a multiplication by 2.
By ANDing like this: carry = (n1 & n2) << 1; the carry var will hold the binary pattern of the matched bit positions in n1 and n2. So, if n1 is 4 and n2 is 4, they both hold the same value. Combining the two and left-shifting by one position will multiply 4 x 2 = 8, so carry would then equal 8.
1.) carry = 00000100 (=4) & 00000100 (=4)
2.) carry now holds the single value of 00000100 = 4
A left shift will multiply 4 x 2 = 8, or 4 + 4 = 8:
3.) carry = carry << 1, shift all bits over one position
4.) carry now holds the single value of 00001000 = 8
I still cannot find anything on working with floating point values. If anyone has anything, do post a link.
I still cannot find anything on working with floating point values. If anyone has anything do post a link.
It doesn't work because the code assumes that the floating point numbers are represented as integers, which they aren't. Floating point numbers are represented using the IEEE 754 standard, which breaks a number into three parts: a sign bit, a group of bits representing an exponent, and another group representing a number between 1 (inclusive) and 2 (exclusive), the mantissa. The value is calculated as
(sign bit set ? -1 : 1) * mantissa * 2^(exponent - bias)
where the bias depends on the precision of the floating point number. So the algorithm you use for adding two numbers assumes that the bits represent an integer, which is not the case for floating point numbers. Operations such as bitwise-AND and bitwise-OR also don't give the results that you'd expect in an integer world.
Some examples, in double precision, the number 2.3 is represented as (in hex) 4002666666666666, while the number 5.3 is represented as 4015333333333333. OR-ing those two numbers will give you 4017777777777777, which represents (roughly) 5.866666.
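You can check those bit patterns from JavaScript itself. Here is a sketch using DataView to reinterpret a double's 64 bits (BigInt keeps the full 64-bit pattern intact across the bitwise OR):
function doubleToBits(x) {
    const view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, x);
    return view.getBigUint64(0); // the raw IEEE 754 bit pattern
}

function bitsToDouble(bits) {
    const view = new DataView(new ArrayBuffer(8));
    view.setBigUint64(0, bits);
    return view.getFloat64(0);
}

console.log(doubleToBits(2.3).toString(16)); // 4002666666666666
console.log(doubleToBits(5.3).toString(16)); // 4015333333333333
console.log(bitsToDouble(doubleToBits(2.3) | doubleToBits(5.3))); // ~5.86666...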
There are some good pointers on this format, I found the links at http://www.psc.edu/general/software/packages/ieee/ieee.php, http://babbage.cs.qc.edu/IEEE-754/ and http://www.binaryconvert.com/convert_double.html fairly good for understanding it.
Now, if you still want to implement the bitwise addition for those numbers, you can. But you'll have to break the number down into its parts, then normalize the numbers to the same exponent (otherwise you won't be able to add them), perform the addition on the mantissas, and finally normalize the result back to the IEEE 754 format. But, as @LukeGT said, you'll likely not get better performance than the JS engine you're running. And JS doesn't support bitwise operations on floating point numbers as such: it first casts the numbers to 32-bit integers, then performs the operation, which will make your results incorrect as well.
Floating point values have a complicated bit structure, which is very difficult to manipulate with bit operations. As a result, I doubt you could do any better than the Javascript engine at computing them. Floating point calculations are inherently slow, so you should try to avoid them if you're worried about speed.
Try using integers to represent a decimal number to x amount of digits instead. For example if you were working with currency, you could store things in terms of whole cents as opposed to dollars with fractional values.
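For example:
console.log(0.10 + 0.20);              // 0.30000000000000004 with floats
const cents = 10 + 20;                 // $0.10 + $0.20 kept as whole cents
console.log(cents);                    // 30, exact
console.log((cents / 100).toFixed(2)); // "0.30" -- convert only for display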
Hope that helps.

2.9999999999999999 >> .5?

I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found out that the highest possible value that, when right-shifted by .5, still yields 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with 2^x. Here are the highest possible values that still truncate to the integers shown:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
Actually, you're simply ending up doing a floor() on the first operand, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine converts the two operands to 32-bit integers first (truncating toward zero, which matches Math.floor() for non-negative numbers):
2.999999 >> 0.5
becomes:
Math.floor(2.999999) >> Math.floor(0.5)
which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift", and therefore you end up with the first operand, simply truncated to an integer.
The SpiderMonkey source code has:
switch (op) {
  case JSOP_LSH:
  case JSOP_RSH:
    if (!js_DoubleToECMAInt32(cx, d, &i))  // Same as Math.floor()
        return JS_FALSE;
    if (!js_DoubleToECMAInt32(cx, d2, &j)) // Same as Math.floor()
        return JS_FALSE;
    j &= 31;
    d = (op == JSOP_LSH) ? i << j : i >> j;
    break;
The "rounding up" you see with certain numbers is due to the fact that the JavaScript engine can't represent decimal digits beyond a certain precision, and therefore your number ends up getting rounded up to the next integer. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.
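You can confirm the truncation from the console (with one caveat: the ToInt32 conversion truncates toward zero, which differs from Math.floor() for negative numbers):
console.log(2.999999 >> 0.5);  // 2
console.log(2.999999 >> 0.99); // 2 -- the shift count truncates to 0 as well
console.log(-2.5 >> 0.5);      // -2, whereas Math.floor(-2.5) is -3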
This is possibly the single worst idea I have ever seen. Its only possible purpose for existing is for winning an obfuscated code contest. There's no significance to the long numbers you posted -- they're an artifact of the underlying floating-point implementation, filtered through god-knows how many intermediate layers. Bit-shifting by a fractional number of bits is insane and I'm surprised it doesn't raise an exception -- but that's JavaScript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple suspicions I had when reading the question:
Right-shifting any fractional number x by any fractional number y will simply truncate x, giving the same result as Math.floor() while thoroughly confusing the reader.
2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
If you wanna go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest a floating point value can get to representing the second number is greater than or equal to 3, whereas the closest a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.
I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x = 2.9999999999999996 y=3
The shift right operator only operates on integers (both sides). So, shifting right by .5 bits should be exactly equivalent to shifting right by 0 bits. And, the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor().
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
Good guess, but no cigar.
Since a double precision FP number has 53 mantissa bits, the last FP number before 3 is actually (exactly):
2.999999999999999555910790149937383830547332763671875
But why is the threshold
2.9999999999999997779553950749686919152736663818359375
(and this is exact, not 49999...!), which is higher than that last representable value? Rounding. The conversion routine (string to number) is simply correctly programmed to round the input to the nearest floating point number:
2.999999999999999555910790149937383830547332763671875
....... (values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375
....... (values between, increasing) -> round up to 3
3
The conversion input must use full precision. If the number is exactly halfway between those two fp numbers (which is 2.9999999999999997779553950749686919152736663818359375), the rounding depends on the flags that are set. The default rounding is round half to even, meaning that the number will be rounded to the neighbor whose last mantissa bit is even.
Now
3 = 11. (binary)
2.999... = 10.111111111111...... (binary)
All the bits are set, so the last mantissa bit is odd. That means the exact halfway value is rounded up to 3 (whose last bit is even), and you get the strange ...49999 period in the asker's experiment because the input must be smaller than that exact halfway value to be distinguishable from 3.
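You can observe that round-half-to-even behaviour directly in JavaScript, since Number.EPSILON is 2^-52, exactly the distance from the halfway value to 3:
console.log(3 - Number.EPSILON === 3); // true: the tie rounds up to 3 (even last bit)
console.log(3 - 2 * Number.EPSILON);   // 2.9999999999999996, the last double below 3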
And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
I don't know if JavaScript uses floating-point numbers or some kind of infinite-precision library, but either way, you're going to get rounding errors on an operation like this -- even if it's pretty well defined.
It should be noted that the tail .0000000000000007779553950749686919152736663818359374... is tied to the machine epsilon, defined as "the smallest number E such that (1+E) > 1": the threshold 2.9999999999999997779... is exactly 3 - 2^-52, i.e. 3 - Number.EPSILON.
