recursive functions - why not an infinite loop? - javascript

So this is apparently a valid bit of code, but I can't figure out how the second call to power can ever be completed if the exponent argument is anything besides 0.
function power(base, exponent) {
  if (exponent == 0)
    return 1;
  else
    return base * power(base, exponent - 1);
}
From: http://imgur.com/Sa2BfHJ

Because the second call keeps calling power with a smaller and smaller exponent until it reaches 0; at that point it returns 1, and the chain of calls unwinds, aggregating the result as it goes ...
I think you'll have to do some reading on recursion :)
Here's a simple case of how it'll look:
power(2,2)
  power(2,1)
    power(2,0)
      return 1
    return 2*1 = 2
  return 2*2 = 4
Taken and modified from this page.
Try this page for an animated view of the recursion (it didn't work for me; it's an old page that needs Java, which I don't have installed on my machine ...)
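If you'd rather watch the unwinding in JavaScript itself, here is a minimal sketch; the depth parameter is only added for indented output and is not part of the original function:

function power(base, exponent, depth = 0) {
  const pad = ' '.repeat(depth * 2);
  console.log(pad + 'power(' + base + ', ' + exponent + ')');
  if (exponent == 0) {
    console.log(pad + 'return 1');
    return 1;
  }
  const result = base * power(base, exponent - 1, depth + 1);
  console.log(pad + 'return ' + result);
  return result;
}

power(2, 2); // logs the call/return sequence above, then yields 4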
Edit:
It was bothering me that I couldn't find any simple example of this, so here's a quick console program that might help you visualize how this works:
using System;
using System.Threading;

namespace SO_Console
{
    class Program
    {
        static void Main(string[] args)
        {
            int base_value = 0;
            int exponent = 0;
            string[] parts = new string[2];
            int result = 0;
            Console.Out.WriteLine("Please enter the Power to calculate in this format: x^y "
                + Environment.NewLine + "(where x is the base (int) and y is the exponent (int))."
                + Environment.NewLine);
            var temp = Console.ReadLine();
            if (!string.IsNullOrWhiteSpace(temp))
            {
                parts = temp.Split('^');
                if (parts.Length != 2)
                    InvalidInput();
            }
            else
                InvalidInput();
            if (Int32.TryParse(parts[0], out base_value) && Int32.TryParse(parts[1], out exponent))
                result = Power(base_value, exponent, "");
            else
                InvalidInput();
            Console.Out.WriteLine(Environment.NewLine + "Final result = {0}", result);
            Console.Out.WriteLine(Environment.NewLine + "Hit any key to quit.");
            Console.Read();
        }

        /// <summary>
        /// Recursive call to calculate Power x^y
        /// </summary>
        /// <param name="base_value">The base</param>
        /// <param name="exponent">The exponent</param>
        /// <param name="padding">Padding, for output.</param>
        /// <returns>base_value raised to exponent</returns>
        private static int Power(int base_value, int exponent, string padding)
        {
            Console.Out.WriteLine(string.Format("{2}Power called with: {0}^{1}", base_value, exponent, padding));
            Thread.Sleep(750);
            if (exponent == 0)
            {
                Console.Out.WriteLine("{0}{1}Base case reached, returning 1.{0}", Environment.NewLine, padding);
                return 1;
            }
            else
            {
                var return_value = base_value * Power(base_value, exponent - 1, padding + "   ");
                Console.Out.WriteLine("{0}Going back in the recursion, returning {1}.", padding, return_value);
                Thread.Sleep(750);
                return return_value;
            }
        }

        /// <summary>
        /// Inform user about bad input and quit.
        /// </summary>
        private static void InvalidInput()
        {
            Console.Out.WriteLine("Invalid input.");
            Environment.Exit(1); // actually quit, as the summary says
        }
    }
}
You can just paste it into a console project and run it; for the input 2^3 the trace will look something like this (indentation grows with recursion depth):
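Power called with: 2^3
   Power called with: 2^2
      Power called with: 2^1
         Power called with: 2^0

         Base case reached, returning 1.

      Going back in the recursion, returning 2.
   Going back in the recursion, returning 4.
Going back in the recursion, returning 8.

Final result = 8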
Edit 2:
I've written an article about this, explaining in detail what happens, why, and where. You're welcome to have a look at it here: simple power recursion, console application.

Recursion only terminates when you have an edge case. In the case of exponentiation the edge case is:
n^0 = 1
It reads as "any number raised to the power of 0 is 1".
The general case of exponentiation is:
n^x = n × n^(x - 1)
In a mathematical language like Haskell exponentiation would be defined as follows:
n ^ 0 = 1
n ^ x = n * n ^ (x - 1)
Interestingly, if you give this function a negative integer, the edge case is never reached and the recursion runs forever, eventually terminating in a stack overflow.
However, since we only use this function with whole numbers (0 and the positive integers), you will never run into that infinite recursion.
Nevertheless, if you use a big enough exponent you will still run into a stack overflow, because computers only have so much space to store intermediate results.
In most JavaScript browsers you can calculate 2 ^ 2 ^ 14. However if you try to calculate 2 ^ 2 ^ 15 then you get a stack overflow: http://jsfiddle.net/9chrJ/
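A sketch of a guarded version that makes the edge case explicit; the range check is my addition, not part of the original function:

function power(base, exponent) {
  // Reject arguments for which the recursion would never reach the base case.
  if (!Number.isInteger(exponent) || exponent < 0)
    throw new RangeError('exponent must be a non-negative integer');
  if (exponent == 0)
    return 1;
  return base * power(base, exponent - 1);
}

console.log(power(2, 10)); // 1024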

Observe that x^n = x × x^(n-1) and x^0 = 1. This is why the code is correct.

Just try an example. Here's 2 to the power of 3:
power(2,3) = 2 * (power(2,2) = 2 * (power(2,1) = 2 * (power(2,0) = 1)))
So:
power(2,3) = 2 * (2 * (2 * 1)) = 8
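You can paste the function from the question into a console and confirm this:

console.log(power(2, 3)); // 8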


How to generate a fixed-length code from a set of integers of a specific bit count in JavaScript

The question Generate string from integer with arbitrary base in JavaScript received the following answer:
function parseInt(value, code) {
  return [...value].reduce((r, a) => r * code.length + code.indexOf(a), 0);
}

function toString(value, code) {
  var digit,
      radix = code.length,
      result = '';
  do {
    digit = value % radix;
    result = code[digit] + result;
    value = Math.floor(value / radix);
  } while (value);
  return result;
}
console.log(parseInt('dj', 'abcdefghijklmnopqrstuvwxyz0123456789+-'));
console.log(toString(123, 'abcdefghijklmnopqrstuvwxyz0123456789+-'));
console.log(parseInt('a', 'abcdefghijklmnopqrstuvwxyz0123456789+-'));
console.log(toString(0, 'abcdefghijklmnopqrstuvwxyz0123456789+-'));
I am interested in something slightly different. Whereas this generates the shortest code for a number, I would now like to generate a constant-length code based on a number of bits. I am not sure whether this is also a complex-radix problem.
Say I want to generate 8-bit codes using a 16-character alphabet. That means I should be able to take the first 4 bits to select 1 character, and the next 4 bits to select the second character. So I might end up with MV if my 16 character set was ABDHNMOPQRSTUVYZ. Likewise if I had a 16-bit range, I would have 4 character code, and 32-bit range would be an 8-character code. So calling code32(1, 'ABDHNMOPQRSTUVYZ') would give an 8 letter code, while code8(1, 'ABDHNMOPQRSTUVYZ') would give a 2 digit code.
How could that be implemented in JavaScript? Something along these lines?
code8(i, alpha)  // accepts 0 to 255
code16(i, alpha) // accepts 0 to 65535
code32(i, alpha) // accepts 0 to 2^32-1
Likewise, how would you get the string code back into the original number (or bit sequence)?
This really comes down to changing toString so that:
It only accepts a code whose length is a power of 2
It pads the result to a given number of "digits" (characters)
The actual number of digits you need for a 16-bit number depends on the size of the code. If the code has 16 characters, then each character covers 4 bits, so an output of 4 characters is needed. If however the code has 4 characters, then the output needs 8 characters. You can have cases where the match is not exact, like a code with 8 characters (3 bits each): then the output needs 6 characters.
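In other words, the required digit count is just ceil(bits / log2(code.length)); checking the three cases above:

console.log(Math.ceil(16 / Math.log2(16))); // 4 characters with a 16-character code
console.log(Math.ceil(16 / Math.log2(4)));  // 8 characters with a 4-character code
console.log(Math.ceil(16 / Math.log2(8)));  // 6 characters with an 8-character code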
Here I have marked the changes to the toString method with comments. My personal preference is to also put the value as the last parameter of toString.
function toString(digitCount, code, value) { // <-- added argument digitCount
  // Sanity check: the code must have a length that is a power of 2
  if (Math.log2(code.length) % 1) throw "code size not power of 2: " + code.length;
  var digit,
      radix = code.length,
      result = '';
  do {
    digit = value % radix;
    result = code[digit] + result;
    value = Math.floor(value / radix);
  } while (value);
  return result.padStart(digitCount, code[0]); // Pad to the desired output size
}
console.log(toString(4, 'abcdefghijklmnop', 123));
console.log(toString(4, 'abcdefghijklmnop', 0));
console.log(toString(4, 'abcdefghijklmnop', 0xFFFF));
// You could define some more specific functions
const code8 = (code, value) => toString(Math.ceil(8 / Math.log2(code.length)), code, value);
const code16 = (code, value) => toString(Math.ceil(16 / Math.log2(code.length)), code, value);
console.log(code16('abcdefghijklmnop', 123));
console.log(code16('abcdefghijklmnop', 0));
console.log(code16('abcdefghijklmnop', 0xFFFF));
console.log(code8('abcdefghijklmnop', 123));
console.log(code8('abcdefghijklmnop', 0));
console.log(code8('abcdefghijklmnop', 0xFF));
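Note that decoding should need no change at all: the parseInt function quoted in the question already handles these padded strings, because the pad character code[0] represents zero. For example, matching the toString(4, 'abcdefghijklmnop', 123) output above:

console.log(parseInt('aahl', 'abcdefghijklmnop')); // 123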
EDIT: I just noticed that you require a decoder as well. A non-optimal version is easy to implement, while an optimal one can be written by going through each letter and accumulating its value times its positional weight (see the sketch after the code below).
Is this what you want? I tested this code for bits=16 and bits=8, but with bits=32 the number of codewords becomes too large and hangs the browser's devtools. It is only demonstrative code and may need optimization before practical use ...
function genCode(len, alpha) {
  let tmp = [...alpha];
  for (let i = 1; i != len; ++i) {
    const ttmp = [];
    tmp.forEach(te => {
      [...alpha].forEach(e => {
        ttmp.push(te + e);
      });
    });
    tmp = ttmp;
  }
  return tmp;
}

function code(bits, i, alpha) {
  const len = Math.ceil(bits / Math.floor(Math.log2(alpha.length)));
  return genCode(len, alpha)[i];
}

function decode(bits, c, alpha) {
  const len = Math.ceil(bits / Math.floor(Math.log2(alpha.length)));
  const codes = genCode(len, alpha);
  return codes.indexOf(c);
}
console.log(code(16, 2, "ABDHNMOPQRSTUVYZ"));
console.log(decode(16, "AAAD", "ABDHNMOPQRSTUVYZ"));
console.log(code(8, 255, "ABDHNMOPQRSTUVYZ"));
console.log(decode(8, "ZZ", "ABDHNMOPQRSTUVYZ"));
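For completeness, here is a minimal sketch of the weight-based decoder mentioned in the edit above: each character contributes its index in the alphabet times a power of the alphabet size, so no codeword table is needed. The function name decodeFast is mine:

function decodeFast(c, alpha) {
  let value = 0;
  for (const ch of c) {
    // Previous value shifted up one "digit", plus this character's index.
    value = value * alpha.length + alpha.indexOf(ch);
  }
  return value;
}

console.log(decodeFast("AAAD", "ABDHNMOPQRSTUVYZ")); // 2, same as decode(16, "AAAD", ...) above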

javascript using Math.exp for Math.sinh results

I was just trying to write a polyfill for Math.sinh (here), which is required for writing a JVM in JavaScript (doppio).
The problem is that Java's result for Math.sinh(Double.MIN_VALUE) is 4.9E-324, while in JavaScript it is 0, because my polyfill requires Math.exp(4.9E-324):
(Math.exp(x) - Math.exp(-x)) / 2;
First, JavaScript turns 4.9E-324 into 5E-324; secondly, Math.exp(4.9E-324) (or Math.pow(Math.E, 4.9E-324)) results in 1, which then gives (1 - 1) / 2, and that is 0 :)
Also, Number.MIN_VALUE in JS is 5E-324, which equates to Double.MIN_VALUE = 4.9E-324.
Is there any way I can avoid Math.exp or Math.pow, or otherwise handle the precision? I have looked at the bigdecimal library, which is also not working.
Is there any other way to handle the significant figures?
Note: I have to pass all boundary test cases!
The Taylor expansion of sinh(x) is x + x^3/3! + x^5/5! + x^7/7! + ... This converges for all values of x, but will converge fastest (and give the best results) for x close to 0.
function mySinh(x) {
  var returning = x,
      xToN = x,
      factorial = 1,
      index = 1,
      nextTerm = 1;
  while (nextTerm != 0) {
    index++;
    factorial *= index;
    index++;
    factorial *= index;
    xToN *= x * x;
    nextTerm = xToN / factorial;
    returning += nextTerm;
  }
  return returning;
}
For x less than 1E-108, nextTerm will immediately underflow to 0, and you'll just get x back.
Where you switch from using the Taylor expansion to using the definition in terms of Math.exp may end up depending on what your test cases are looking at.
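As a rough sketch of that switching idea (the cutoff of |x| < 1 is my arbitrary choice, not a tested boundary; mySinh is the function above):

function sinhPolyfill(x) {
  // Near zero the Taylor series keeps the leading x term exactly,
  // while (Math.exp(x) - Math.exp(-x)) / 2 loses it to rounding.
  return Math.abs(x) < 1 ? mySinh(x) : (Math.exp(x) - Math.exp(-x)) / 2;
}

console.log(sinhPolyfill(Number.MIN_VALUE)); // 5e-324, matching Java's 4.9E-324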

Math.log2 precision has changed in Chrome

I've written a JavaScript program that calculates the depth of a binary tree based on the number of elements. My program has been working fine for months, but recently I've found a difference when the web page is viewed in Chrome vs Firefox.
In particular, on Firefox:
Math.log2(8) = 3
but now in Chrome:
Math.log2(8) = 2.9999999999999996
My JavaScript program was originally written to find the depth of the binary tree based on the number of elements as:
var tree_depth = Math.floor(Math.log2(n_elements)) + 1;
I made a simple modification to this formula so that it will still work correctly on Chrome:
var epsilon = 1.e-5;
var tree_depth = Math.floor(Math.log2(n_elements) + epsilon) + 1;
I have 2 questions:
Has anyone else noticed a change in the precision in Chrome recently for Math.log2?
Is there a more elegant modification than the one I made above by adding epsilon?
Note: Math.log2 hasn't actually changed since it's been implemented
in V8. Maybe you remembered incorrectly or you had included a shim that
happened to get the result correct for these special cases before Chrome
included its own implementation of Math.log2.
Also, it seems that you should be using Math.ceil(x) rather than
Math.floor(x) + 1.
How can I solve this?
To avoid relying on Math.log or Math.log2 being accurate across different implementations of JavaScript (the algorithm used is implementation-defined), you can use bitwise operators if you have fewer than 2^32 elements in your binary tree. This obviously isn't the fastest way of doing this (it is only O(n)), but it's a relatively simple example:
function log2floor(x) {
  // match the behaviour of Math.floor(Math.log2(x)); change it if you like
  if (x === 0) return -Infinity;
  for (var i = 0; i < 32; ++i) {
    if (x >>> i === 1) return i;
  }
}
console.log(log2floor(36) + 1); // 6
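If Math.clz32 (ES6) is available, the same floor-log can be computed in constant time; a minimal sketch, assuming 32-bit unsigned inputs:

function log2floor32(x) {
  // Math.clz32 counts the leading zero bits in the 32-bit representation,
  // so 31 - clz32(x) is the position of the highest set bit.
  if (x === 0) return -Infinity;
  return 31 - Math.clz32(x);
}

console.log(log2floor32(36) + 1); // 6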
How is Math.log2 currently implemented in different browsers?
The current implementation in Chrome is inaccurate as they rely on multiplying the value of Math.log(x) by Math.LOG2E, making it susceptible to rounding error (source):
// ES6 draft 09-27-13, section 20.2.2.22.
function MathLog2(x) {
  return MathLog(x) * 1.442695040888963407; // log2(x) = log(x)/log(2).
}
If you are running Firefox, it either uses the native log2 function (if present), or if not (e.g. on Windows), uses a similar implementation to Chrome (source).
The only difference is that instead of multiplying by log2(e), they divide by log(2):
#if !HAVE_LOG2
double log2(double x)
{
    return log(x) / M_LN2;
}
#endif
Multiplying or dividing: how much of a difference does it make?
To test the difference between dividing by Math.LN2 and multiplying by Math.LOG2E, we can use the following test:
function log2d(x) { return Math.log(x) / Math.LN2; }
function log2m(x) { return Math.log(x) * Math.LOG2E; }

// 2^1024 rounds to Infinity
for (var i = 0; i < 1024; ++i) {
  var resultD = log2d(Math.pow(2, i));
  var resultM = log2m(Math.pow(2, i));
  if (resultD !== i) console.log('log2d: expected ' + i + ', actual ' + resultD);
  if (resultM !== i) console.log('log2m: expected ' + i + ', actual ' + resultM);
}
Note that no matter which function you use, both still have floating-point errors for certain values [1]. It just so happens that the floating-point representation of log(2) is less than the actual value, resulting in a value higher than the actual value (while log2(e) is lower). This means that using log(2) will round down to the correct value for these special cases.
[1]: log(pow(2, 29)) / log(2) === 29.000000000000004
You could perhaps do this instead:
// Math.log2(n_elements) to 10 decimal places
var tree_depth = Math.floor(Math.round(Math.log2(n_elements) * 10000000000) / 10000000000);

Reading variable length bits from a binary string

I'm new to JavaScript and Node.js. I have a base64-encoded string of data from which I need to parse several values of various bit lengths.
I figured I would start by using the Buffer object to read the b64 string, but from there I am completely lost.
The data are a series of unsigned integers. The format is something akin to this:
Header:
8 bits - uint
3 bits - uint
2 bits - uint
3 bits - unused padding
6 bits - uint
After that there are recurring sections of either 23-bit or 13-bit length, each with a couple of fields I need to extract.
An example of a 23-bit section:
3 bit - uint
10 bit - uint
10 bit - uint
My question is this: what is the best way to take an arbitrary number of bits and put the resulting value in a separate uint? Note that some of the values are multi-byte (> 8 bits), so I can't step byte by byte.
I apologize if my explanation is kind of vague, but hopefully it will suffice.
One simple way to read any number of bits is e.g.:
function bufferBitReader(buffer) {
  var bitPos = 0;

  function readOneBit() {
    var offset = Math.floor(bitPos / 8),
        shift = 7 - bitPos % 8;
    bitPos += 1;
    return (buffer[offset] >> shift) & 1;
  }

  function readBits(n) {
    var i, value = 0;
    for (i = 0; i < n; i += 1) {
      value = value << 1 | readOneBit();
    }
    return value;
  }

  function isEnd() {
    return Math.floor(bitPos / 8) >= buffer.length;
  }

  return {
    readOneBit: readOneBit,
    readBits: readBits,
    isEnd: isEnd
  };
}
You just take your byte buffer and initialize the reader:
var bitReader = bufferBitReader(buffer);
Then you can read any number of bits by calling
bitReader.readBits(8);
bitReader.readBits(3);
bitReader.readBits(2);
...
You can test whether you have already read all the bits with:
bitReader.isEnd()
One thing to make sure of is the actual bit order that is expected ... some 'bit streams' are meant to deliver bits from the least significant to the most significant. This code expects the opposite: the first bit you read is the most significant bit of the first byte ...
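Tying it back to the header layout in the question, a hypothetical usage sketch (the field names are made up; Buffer.from decodes the base64 string in Node.js):

var b64String = '...';                                          // your base64-encoded data
var reader = bufferBitReader(Buffer.from(b64String, 'base64'));

var header = {
  first:  reader.readBits(8), // 8-bit uint
  second: reader.readBits(3), // 3-bit uint
  third:  reader.readBits(2), // 2-bit uint
  unused: reader.readBits(3), // 3 bits padding
  fourth: reader.readBits(6)  // 6-bit uint
};

while (!reader.isEnd()) {
  // e.g. a 23-bit section: readBits(3), readBits(10), readBits(10)
}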

Javascript Big Decimal - Nth Root

Recently I ran into the well-known floating-point precision errors of JavaScript. Usually I would avoid floating-point calculations on the thin client and rather leave them to the back-end.
I started using the big.js library created by Michael Mclaughlin. Though it has a square-root method, it has no nth-root method, nor does its power function support fractional arguments.
So I was wondering if anyone using the library has extended it to have such a function, or at least used it to calculate accurate nth-root results.
Michael Mclaughlin suggested that I implement such a function similar in structure to the square-root function. However, my attempts at understanding the logic proved my maths disability, with simple calculations yielding very wrong results.
Using the algorithm on Rosetta Code also yields incorrect results.
Here is the code from my last attempt:
P['nthrt'] = P['nthroot'] = function (n, prec)
{
  var negate, r,
      x = this,
      xc = x['c'],
      i = x['s'],
      e = x['e'];

  // Argument defaults
  n = n || 2;
  prec = prec || 12;

  // Zero?
  if (!xc[0]) {
    return new Big(x)
  }

  // Negative?
  negate = (n % 2 == 1 && i < 0);

  // Estimate.
  r = new Big(1); // Initial guess.
  for (var j = 0; j < prec; j++) {
    r = (ONE.div(n)).times(r.times(n - 1).plus(x.div(r.pow(n - 1))));
  }

  if (negate) r['s'] = -1;
  return r;
};
It does not even get obvious results correct: for example, the 4th root of 81 should be 3, but instead it gets 3.00000000xxx.
Newton's method only gives an approximation for the root, so 3.0000xxx should be expected. If you know that the answer should be an integer, you can round r down (Newton's method overestimates the root) and check that r^n=x.
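A minimal sketch of that round-and-check idea, assuming big.js's documented Big constructor, its plus/times/div/pow/round/eq methods, and a suitable Big.DP division precision:

function nthroot(x, n, iterations) {
  Big.DP = 40;            // decimal places used by big.js division
  var big_x = new Big(x);
  var r = new Big(1);     // initial guess

  // Newton's method: r <- ((n-1)*r + x / r^(n-1)) / n
  for (var k = 0; k < (iterations || 30); k++) {
    r = r.times(n - 1).plus(big_x.div(r.pow(n - 1))).div(n);
  }

  // If the root is actually an integer, snap to it.
  var rounded = r.round(0);
  if (rounded.pow(n).eq(big_x)) return rounded;
  return r;
}

console.log(nthroot(81, 4).toString()); // 3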
You can use the big-numbers library to solve your problem. It supports sqrt, pow, exp and many other operations.
Its pow method accepts positive, negative, integer and floating-point numbers:
var bn = new BigNumber();
var value = bn.of('81');
var xRoot = value.pow(0.25);
console.log('Result: ' + bn.format(xRoot));
You can use Basenumber.js to compute nth roots. Documentation here.
E.g.
// Set precision decimals required
Base.setDecimals(25);
let x = Base("1e+10");
console.log(x.root(10).toString());
<script src='https://cdn.jsdelivr.net/gh/AlexSp3/Basenumber.js#main/BaseNumber.min.js'></script>
