JavaScript: using Math.exp for Math.sinh results

I was trying to write a polyfill for Math.sinh, which is needed for doppio (a JVM written in JavaScript).
The problem is that Java's Math.sinh(Double.MIN_VALUE) returns 4.9E-324, while my JavaScript polyfill returns 0, because it evaluates Math.exp(4.9E-324):
(Math.exp(x) - Math.exp(-x)) / 2;
First, JavaScript displays 4.9E-324 as 5E-324; second, Math.exp(4.9E-324) (or Math.pow(Math.E, 4.9E-324)) evaluates to 1, which turns the expression into (1 - 1) / 2, and that is 0.
Also, Number.MIN_VALUE in JS is 5E-324, which is the same double that Java prints as 4.9E-324 for Double.MIN_VALUE.
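For reference, the underflow is easy to reproduce in any console:

console.log(Number.MIN_VALUE);                 // 5e-324 (the same double Java prints as 4.9E-324)
console.log(Math.exp(Number.MIN_VALUE));       // 1
console.log((Math.exp(Number.MIN_VALUE) - Math.exp(-Number.MIN_VALUE)) / 2); // 0, but sinh should give 5e-324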
Is there any way to avoid Math.exp or Math.pow, or to otherwise handle the precision? I have looked at a bigdecimal library, which also did not work.
Is there any other way to handle the significant figures?
Note: I have to pass all the boundary test cases!

The Taylor expansion of sinh(x) is x + x^3/3! + x^5/5! + x^7/7! + … This converges for all values of x, but will converge fastest (and give the best results) for x close to 0.
function mySinh(x) {
    var returning = x,   // running sum, starts with the x term
        xToN = x,        // x^n for the current term
        factorial = 1,   // n! for the current term
        index = 1,
        nextTerm = 1;
    while (nextTerm != 0) {
        index++;
        factorial *= index;
        index++;
        factorial *= index;
        xToN *= x * x;
        nextTerm = xToN / factorial;
        returning += nextTerm;
    }
    return returning;
}
For x less than 1E-108, nextTerm will immediately underflow to 0, and you'll just get x back.
Where you switch from using the Taylor expansion to using the definition in terms of Math.exp may end up depending on what your test cases are looking at.
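For example, here is a minimal sketch of such a hybrid; the cut-over point of |x| < 1 is an arbitrary choice for illustration and would need tuning against your boundary tests:

// Sketch: Taylor series near 0 (where the exp-based formula loses everything),
// exp-based definition elsewhere. The threshold of 1 is an assumption.
function sinhPolyfill(x) {
    if (Math.abs(x) < 1) {
        return mySinh(x);                        // Taylor series from above
    }
    return (Math.exp(x) - Math.exp(-x)) / 2;     // definition in terms of exp
}

console.log(sinhPolyfill(Number.MIN_VALUE));     // 5e-324, matching Java's Math.sinh
console.log(sinhPolyfill(2));                    // ≈ 3.62686 (sinh(2))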

Related

Overriding log2lin and lin2log for Highcharts does not appear to work

I've been trying to get negative values to display on a "logarithmic" axis - in the sense that the log function won't be mathematically accurate, but negative values will still have some sort of exponential effect where values with smaller magnitudes are more spaced out.
I've tried this solution (JSFiddle) which hides error messages then overrides Highcharts' native log2lin and lin2log methods such that this effect is supported for negative numbers.
(function (H) {
    // Pass error messages
    H.Axis.prototype.allowNegativeLog = true;

    // Override conversions
    H.Axis.prototype.log2lin = function (num) {
        var isNegative = num < 0,
            adjustedNum = Math.abs(num),
            result;
        if (adjustedNum < 10) {
            adjustedNum += (10 - adjustedNum) / 10;
        }
        result = Math.log(adjustedNum) / Math.LN10;
        return isNegative ? -result : result;
    };

    H.Axis.prototype.lin2log = function (num) {
        var isNegative = num < 0,
            absNum = Math.abs(num),
            result = Math.pow(10, absNum);
        if (result < 10) {
            result = (10 * (result - 1)) / (10 - 1);
        }
        return isNegative ? -result : result;
    };
}(Highcharts));
However, the only change I observed is that negative values were hidden. This behavior is observed on the demo site as well, so I suspect it's not an issue with my code.
This is what I see on the demo site.
Setting the axis extreme to a negative value would also produce the original error message. May I know how to fix this such that negative values can be displayed with the desired behavior on the logarithmic axis? Thanks.
Thank you for sharing this post. It seems like a regression, because it used to work fine in versions < 8.0.4.
Working demo: http://jsfiddle.net/BlackLabel/bj3dwe4t/
<script src="https://code.highcharts.com/8.0.4/highcharts.js"></script>
I reported it on Highcharts GitHub issue channel where you can follow this thread: https://github.com/highcharts/highcharts/issues/13914
If you don't need any of the new features, please use the last working version of the library.

javascript float dynamic precision?

How can I get dynamic precision for a float?
Example of what I need:
0.00019400000001.dynamicPrecision() //0.000194
0.0001940001.dynamicPrecision() //0.000194
0.0001941.dynamicPrecision() //0.0001941
0.0194.dynamicPrecision() //0.0194
0.01940000.dynamicPrecision() //0.0194
(It's important not to have useless zeros at the end.)
I can't use toFixed or toPrecision because the number of significant digits can change and is unknown. So how could this dynamicPrecision method be written?
While what you are asking for is a bit questionable, one approach would be to take successively longer slices of the decimal and compare each against the original. If a slice is within some threshold percentage of the original, consider it the answer.
const f = (v, threshold = .9999) => {
    let shift = 1;
    let part;
    do {
        shift *= 10;
        part = Math.floor(v * shift) / shift;
    } while (part / v < threshold);
    return part;
}

[0.194, 0.194000001, 0.19401, 0.194101]
    .forEach(v => console.log(f(v)));
This uses actual math to determine the significant digit.
Basically, for each step, it takes one more digit and compares it against the value. If it is within the threshold, then it will be returned.
For 1.9410001 it would:
part = 1.9
part = 1.94
part = 1.941 // part / v > threshold, returned
The threshold is then configurable. .9999 means it is 99.99% the same as the original value.
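For instance, reusing the f defined above (the 0.97 value is just an illustrative threshold, not anything special):

console.log(f(0.194000001));        // 0.194 (default threshold 0.9999)
console.log(f(0.194000001, 0.97));  // 0.19  (a looser threshold trims more aggressively)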
I hope this will help:
var number1 = 0.00019400000001;
console.log(parseFloat(number1.toString().replace(/0+[1-9]$/, '')));
You could remove any ending that consists of at least one zero followed by a single one, then take the numerical value.
function precision(v) {
    return +v.toString().replace(/0+1$/, '');
}

console.log([0.00019400000001, 0.0001940001, 0.0001941, 0.0194, 0.01940000].map(precision));
Number.prototype.dynamicPrecision = function () {
    return parseFloat(this.valueOf().toString().replace(/0+1$/, ''));
};

console.log(
    0.00019400000001.dynamicPrecision(), // 0.000194
    0.0001940001.dynamicPrecision(),     // 0.000194
    0.0001941.dynamicPrecision(),        // 0.0001941
    0.0194.dynamicPrecision(),           // 0.0194
    0.01940000.dynamicPrecision()        // 0.0194
)

Math.log2 precision has changed in Chrome

I've written a JavaScript program that calculates the depth of a binary tree based on the number of elements. My program has been working fine for months, but recently I've found a difference when the web page is viewed in Chrome vs Firefox.
In particular, on Firefox:
Math.log2(8) = 3
but now in Chrome:
Math.log2(8) = 2.9999999999999996
My JavaScript program was originally written to find the depth of the binary tree based on the number of elements as:
var tree_depth = Math.floor(Math.log2(n_elements)) + 1;
I made a simple modification to this formula so that it will still work correctly on Chrome:
var epsilon = 1.e-5;
var tree_depth = Math.floor(Math.log2(n_elements) + epsilon) + 1;
I have 2 questions:
Has anyone else noticed a change in the precision in Chrome recently for Math.log2?
Is there a more elegant modification than the one I made above by adding epsilon?
Note: Math.log2 hasn't actually changed since it's been implemented
in V8. Maybe you remembered incorrectly or you had included a shim that
happened to get the result correct for these special cases before Chrome
included its own implementation of Math.log2.
Also, it seems that you should be using Math.ceil(x) rather than
Math.floor(x) + 1.
How can I solve this?
To avoid relying on Math.log or Math.log2 being accurate across different implementations of JavaScript (the algorithm used is implementation-defined), you can use bitwise operators if you have fewer than 2^32 elements in your binary tree. This obviously isn't the fastest way of doing it (the loop below simply scans up to 32 bit positions), but it's a relatively simple example:
function log2floor(x) {
    // match the behaviour of Math.floor(Math.log2(x)), change it if you like
    if (x === 0) return -Infinity;
    for (var i = 0; i < 32; ++i) {
        if (x >>> i === 1) return i;
    }
}

console.log(log2floor(36) + 1); // 6
How is Math.log2 currently implemented in different browsers?
The current implementation in Chrome is inaccurate as they rely on multiplying the value of Math.log(x) by Math.LOG2E, making it susceptible to rounding error (source):
// ES6 draft 09-27-13, section 20.2.2.22.
function MathLog2(x) {
    return MathLog(x) * 1.442695040888963407; // log2(x) = log(x)/log(2).
}
If you are running Firefox, it either uses the native log2 function (if present), or if not (e.g. on Windows), uses a similar implementation to Chrome (source).
The only difference is that instead of multiplying, they divide by log(2):
#if !HAVE_LOG2
double log2(double x)
{
    return log(x) / M_LN2;
}
#endif
Multiplying or dividing: how much of a difference does it make?
To test the difference between dividing by Math.LN2 and multiplying by Math.LOG2E, we can use the following test:
function log2d(x) { return Math.log(x) / Math.LN2; }
function log2m(x) { return Math.log(x) * Math.LOG2E; }

// 2^1024 rounds to Infinity
for (var i = 0; i < 1024; ++i) {
    var resultD = log2d(Math.pow(2, i));
    var resultM = log2m(Math.pow(2, i));
    if (resultD !== i) console.log('log2d: expected ' + i + ', actual ' + resultD);
    if (resultM !== i) console.log('log2m: expected ' + i + ', actual ' + resultM);
}
Note that no matter which function you use, they still have floating point errors for certain values [1]. It just so happens that the floating point representation of log(2) is slightly less than the actual value, so dividing by it gives a result slightly higher than the true value (while multiplying by log2(e), which is also stored slightly low, gives a result slightly lower). This means that applying Math.floor to the log(2)-divided result still rounds down to the correct integer for these special cases.
[1]: log(pow(2, 29)) / log(2) === 29.000000000000004
You could perhaps do this instead
// Math.log2(n_elements) to 10 decimal places
var tree_depth = Math.floor(Math.round(Math.log2(n_elements) * 10000000000) / 10000000000);

mathematically represent a percentage of time without local storage

Is there a way to reliably execute a function a certain percentage of the time without using local storage?
eg: On page load, console.log('hello') 60% of the time.
Create a random number between 1 and 10.
If the number is 7, 8, 9, or 10, don't execute.
Otherwise, console.log('hello');
If each number has an equal probability of being selected, running a function when 6 of the 10 numbers are chosen = 60% run time.
Is something like this mathematically sound? Cookies and sessions aren't available.
Thanks
Update: I'm not looking for the code to do it. I'm asking about the math.
if (Math.random() < 0.6) {
    console.log('hello');
}
var rand = Math.floor((Math.random() * 10) + 1);
var display = rand < 7;

if (display) {
    console.log('hello');
}
No local storage needed. All you need is plain old JS.
NOTE: You could actually simplify display down to just Math.random() < 0.6, but I thought I'd write out the steps to make the logic easier to follow, especially since Math.random() produces floating point numbers and the question was dealing with whole numbers.
var number = Math.floor((Math.random()*10)+1);
if (number < 7) doSomething();
This seems to be what you are trying to achieve?
You can use function composition to make a function that sometimes will invoke a callback function.
function maybeCallback(callback) {
    function shouldCallback() {
        // 60% chance
        return Math.random() < 0.6;
    }
    return function () {
        if (shouldCallback()) {
            callback();
        }
    };
}
Usage:
function callback() {
    console.log('This might get called');
}
var maybeCallable = maybeCallback(callback);
setTimeout(maybeCallable, 1000);
As for the math side, take a look at Math.random:
returns a floating-point, pseudo-random number in the range [0, 1);
that is, from 0 (inclusive) up to but not including 1 (exclusive),
which you can then scale to your desired range.
The fraction W of an exclusive range R = [X, Y) that lies below a value Z is W = (Z - X) / (Y - X).
Given the values X = 0, Y = 1 and Z = 0.6 for Math.random(), W = 0.6, so the check Math.random() < 0.6 succeeds 60% of the time.
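Putting that together, here is a tiny helper along those lines (the name chancePercent is just for illustration) that fires a given percentage of the time without any storage:

// Returns true on roughly `percent` percent of calls; no cookies or storage involved.
function chancePercent(percent) {
    return Math.random() * 100 < percent;
}

if (chancePercent(60)) {
    console.log('hello'); // logged on about 60% of page loads
}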

Javascript Big Decimal - Nth Root

Recently I ran into the well-known floating point precision errors of JavaScript. Usually I would avoid floating point calculations on the thin client and rather leave them to the back-end.
I started using the big.js library created by Michael Mclaughlin. Though it has a square-root method, it does not have an nth-root method, nor does its power function support fractional values as arguments.
So I was wondering if anyone using the library has extended it to have such a function or at least use it to calculate accurate nth-root results.
Michael Mclaughlin suggested that I implement such a function, similar in structure to the square-root function. However, my attempts at understanding the logic only proved my maths disability, with simple calculations yielding very wrong results.
Using the algorithm on Rosetta Code also yields incorrect results.
Here is the code to my last attempt:
P['nthrt'] = P['nthroot'] = function (n, prec) {
    var negate, r,
        x = this,
        xc = x['c'],
        i = x['s'],
        e = x['e'];

    // Argument defaults
    n = n || 2;
    prec = prec || 12;

    // Zero?
    if (!xc[0]) {
        return new Big(x);
    }

    // Negative?
    negate = (n % 2 == 1 && i < 0);

    // Estimate.
    r = new Big(1); // Initial guess.
    for (var i = 0; i < prec; i++) {
        r = (ONE.div(n)).times(r.times(n - 1).plus(x.div(r.pow(n - 1))));
    }

    if (negate) r['s'] = -1;
    return r;
};
It does not even get obvious results right: for example, the 4th root of 81 should be 3, but instead it returns 3.00000000xxx.
Newton's method only gives an approximation for the root, so 3.0000xxx should be expected. If you know that the answer should be an integer, you can round r down (Newton's method overestimates the root) and check that r^n=x.
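A rough sketch of that check using big.js (assuming the nthroot prototype method from the question is installed; the helper name nthrootExact is hypothetical, and rounding mode 0 is big.js's round-down):

// Sketch: take the Newton estimate, round it towards zero, and accept the
// integer only if it really is an exact n-th root.
function nthrootExact(x, n, prec) {
    var r = new Big(x).nthroot(n, prec); // e.g. 3.00000000xxx for x = 81, n = 4
    var candidate = r.round(0, 0);       // round down to an integer
    if (candidate.pow(n).eq(x)) {
        return candidate;                // exact root, e.g. 3
    }
    return r;                            // otherwise keep the approximation
}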
You can use the big-numbers library to solve your problem. It supports sqrt, pow, exp and many other features.
The pow method accept positive, negative, integer and floating point numbers:
var bn = new BigNumber();
var value = bn.of('81');
var xRoot = value.pow(0.25);
console.log('Result: ' + bn.format(xRoot));
You can use Basenumber.js to compute nth roots. Documentation here.
E.g.
// Set precision decimals required
Base.setDecimals(25);
let x = Base("1e+10");
console.log(x.root(10).toString());
<script src='https://cdn.jsdelivr.net/gh/AlexSp3/Basenumber.js#main/BaseNumber.min.js'></script>
