I've written a JavaScript program that calculates the depth of a binary tree based on the number of elements. My program has been working fine for months, but recently I've found a difference when the web page is viewed in Chrome vs Firefox.
In particular, on Firefox:
Math.log2(8) = 3
but now in Chrome:
Math.log2(8) = 2.9999999999999996
My JavaScript program was originally written to find the depth of the binary tree based on the number of elements as:
var tree_depth = Math.floor(Math.log2(n_elements)) + 1;
I made a simple modification to this formula so that it will still work correctly on Chrome:
var epsilon = 1.e-5;
var tree_depth = Math.floor(Math.log2(n_elements) + epsilon) + 1;
I have 2 questions:
Has anyone else noticed a change in the precision in Chrome recently for Math.log2?
Is there a more elegant modification than the one I made above by adding epsilon?
Note: Math.log2 hasn't actually changed since it was implemented
in V8. Maybe you remembered incorrectly, or you had included a shim that
happened to get the result right for these special cases before Chrome
shipped its own implementation of Math.log2.
Also, note that Math.ceil(x) is not a drop-in replacement for
Math.floor(x) + 1: the two differ whenever x is an exact integer
(Math.ceil(3) is 3, while Math.floor(3) + 1 is 4), which is precisely the
power-of-two case at issue here.
How can I solve this?
To avoid relying on Math.log or Math.log2 being accurate across different implementations of JavaScript (the algorithm used is implementation-defined), you can use bitwise operators if you have fewer than 2^32 elements in your binary tree. This obviously isn't the fastest way of doing it (it's a linear scan over at most 32 bit positions), but it's a relatively simple example:
function log2floor(x) {
    // match the behaviour of Math.floor(Math.log2(x)), change it if you like
    if (x === 0) return -Infinity;
    for (var i = 0; i < 32; ++i) {
        if (x >>> i === 1) return i;
    }
}
console.log(log2floor(36) + 1); // 6
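On engines with ES6 support, there is also a constant-time variant of the same idea (my addition, not from the original answer): Math.clz32 counts the leading zero bits of the unsigned 32-bit representation, so 31 - Math.clz32(x) is the index of the highest set bit:

function log2floorFast(x) {
    // match log2floor above for the x === 0 edge case
    if (x === 0) return -Infinity;
    return 31 - Math.clz32(x); // index of the highest set bit
}

console.log(log2floorFast(36) + 1); // 6, same as log2floor(36) + 1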
How is Math.log2 currently implemented in different browsers?
The current implementation in Chrome is inaccurate, as it relies on multiplying the value of Math.log(x) by a constant approximating log2(e), making it susceptible to rounding error (source):
// ES6 draft 09-27-13, section 20.2.2.22.
function MathLog2(x) {
    return MathLog(x) * 1.442695040888963407; // log2(x) = log(x)/log(2).
}
If you are running Firefox, it either uses the native log2 function (if available), or if not (e.g. on Windows), falls back to an implementation similar to Chrome's (source).
The only difference is that instead of multiplying by log2(e), it divides by log(2):
#if !HAVE_LOG2
double log2(double x)
{
    return log(x) / M_LN2;
}
#endif
Multiplying or dividing: how much of a difference does it make?
To test the difference between dividing by Math.LN2 and multiplying by Math.LOG2E, we can use the following test:
function log2d(x) { return Math.log(x) / Math.LN2; }
function log2m(x) { return Math.log(x) * Math.LOG2E; }
// 2^1024 rounds to Infinity
for (var i = 0; i < 1024; ++i) {
    var resultD = log2d(Math.pow(2, i));
    var resultM = log2m(Math.pow(2, i));
    if (resultD !== i) console.log('log2d: expected ' + i + ', actual ' + resultD);
    if (resultM !== i) console.log('log2m: expected ' + i + ', actual ' + resultM);
}
Note that no matter which function you use, each still has floating-point errors for certain values¹. It just so happens that the floating-point representation of log(2) is slightly less than the true value, so dividing by it produces a result slightly higher than the true logarithm (while the representation of log2(e) is slightly lower than its true value, so multiplying produces a result slightly below it). This means that the log(2) version, after Math.floor, lands on the correct integer for these special cases.
¹ log(pow(2, 29)) / log(2) === 29.000000000000004
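A quick way to see the direction of the error is to compare the floored results (a sketch; the exact values depend on the engine's Math.log implementation):

var x = Math.pow(2, 29);
console.log(Math.floor(Math.log(x) / Math.LN2));   // 29: dividing lands slightly above the integer
console.log(Math.floor(Math.log(x) * Math.LOG2E)); // may print 28 on engines where the product rounds just below 29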
You could perhaps do this instead:
// round Math.log2(n_elements) to 10 decimal places before flooring
var tree_depth = Math.floor(Math.round(Math.log2(n_elements) * 10000000000) / 10000000000) + 1;
Related
I've been trying to get negative values to display on a "logarithmic" axis, in the sense that the log function won't be mathematically accurate, but negative values will still have some sort of exponential effect where values with smaller magnitudes are more spaced out.
I've tried this solution (JSFiddle), which suppresses the error messages and then overrides Highcharts' native log2lin and lin2log methods so that this effect is supported for negative numbers.
(function (H) {
    // Pass error messages
    H.Axis.prototype.allowNegativeLog = true;

    // Override conversions
    H.Axis.prototype.log2lin = function (num) {
        var isNegative = num < 0,
            adjustedNum = Math.abs(num),
            result;
        if (adjustedNum < 10) {
            adjustedNum += (10 - adjustedNum) / 10;
        }
        result = Math.log(adjustedNum) / Math.LN10;
        return isNegative ? -result : result;
    };

    H.Axis.prototype.lin2log = function (num) {
        var isNegative = num < 0,
            absNum = Math.abs(num),
            result = Math.pow(10, absNum);
        if (result < 10) {
            result = (10 * (result - 1)) / (10 - 1);
        }
        return isNegative ? -result : result;
    };
}(Highcharts));
However, the only change I observed is that negative values were hidden. This behavior is observed on the demo site as well, so I suspect it's not an issue with my code.
This is what I see on the demo site.
Setting the axis extreme to a negative value would also produce the original error message. May I know how to fix this such that negative values can be displayed with the desired behavior on the logarithmic axis? Thanks.
Thank you for sharing this post. It seems like a regression, because it used to work fine in versions up to 8.0.4.
Working demo: http://jsfiddle.net/BlackLabel/bj3dwe4t/
<script src="https://code.highcharts.com/8.0.4/highcharts.js"></script>
I reported it on Highcharts GitHub issue channel where you can follow this thread: https://github.com/highcharts/highcharts/issues/13914
If you don't need any of the new features, please use the last working version of the library.
I was just trying to write a polyfill for Math.sinh (see here), which is required for doppio, a JVM written in JavaScript.
The problem is that Java's Math.sinh(Double.MIN_VALUE) returns 4.9E-324, while in JavaScript it is 0, because my polyfill has to evaluate Math.exp(4.9E-324):
(Math.exp(x) - Math.exp(-x)) / 2;
First, JavaScript turns 4.9E-324 into 5E-324. Secondly, Math.exp(4.9E-324) (or Math.pow(Math.E, 4.9E-324)) results in 1, which then gives (1 - 1) / 2, and that is 0. :)
Also, Number.MIN_VALUE in JS is 5E-324, which is the same value as Java's Double.MIN_VALUE (4.9E-324); only the printed representation differs.
Is there any way I can avoid Math.exp and Math.pow, or otherwise handle the precision? I have looked at the bigdecimal library, which also doesn't work here.
Is there any other way to handle the significant figures?
Note: I have to pass all the boundary test cases!
The Taylor expansion of sinh(x) is x + x^3/3! + x^5/5! + x^7/7! + ... This converges for all values of x, but will converge fastest (and give the best results) for x close to 0.
function mySinh(x) {
    var returning = x,
        xToN = x,
        factorial = 1,
        index = 1,
        nextTerm = 1;
    while (nextTerm != 0) {
        index++;
        factorial *= index;
        index++;
        factorial *= index;
        xToN *= x * x;
        nextTerm = xToN / factorial;
        returning += nextTerm;
    }
    return returning;
}
For x less than 1E-108, nextTerm will immediately underflow to 0, and you'll just get x back.
Where you switch from using the Taylor expansion to using the definition in terms of Math.exp may end up depending on what your test cases are looking at.
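As a sketch of how the two approaches could be combined (the cut-over point of |x| < 1 is an arbitrary choice of mine, not something mandated by the spec; tune it against your test cases):

function sinhHybrid(x) {
    // near zero the series converges in a few terms and avoids the
    // Math.exp underflow described in the question
    if (Math.abs(x) < 1) return mySinh(x);
    // elsewhere, use the standard definition
    return (Math.exp(x) - Math.exp(-x)) / 2;
}

console.log(sinhHybrid(Number.MIN_VALUE)); // 5e-324 (the series returns x itself)
console.log(sinhHybrid(1));                // 1.1752011936438014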
Recently I ran into the well-known floating-point precision errors of JavaScript. Usually I would avoid floating-point calculations on the thin client and rather leave them to the back-end.
I started using the big.js library created by Michael Mclaughlin. Though it has a square-root method, it does not have an nth-root method, nor does its power function support fractional exponents.
So I was wondering if anyone using the library has extended it to have such a function or at least use it to calculate accurate nth-root results.
Michael Mclaughlin suggested that I implement such a function similar in structure to the square-root function. However, my attempts at understanding the logic proved beyond my mathematical ability, and resulted in simple calculations yielding very wrong results.
Using the algorithm on Rosetta Code also yields incorrect results.
Here is the code to my last attempt:
P['nthrt'] = P['nthroot'] = function (n, prec)
{
    var negate, r,
        x = this,
        xc = x['c'],
        i = x['s'],
        e = x['e'];

    // Argument defaults
    n = n || 2;
    prec = prec || 12;

    // Zero?
    if ( !xc[0] ) {
        return new Big(x)
    }

    // Negative?
    negate = ( n % 2 == 1 && i < 0 );

    // Estimate.
    r = new Big(1); // Initial guess.
    for (var i = 0; i < prec; i++) {
        r = (ONE.div(n)).times(r.times(n - 1).plus(x.div(r.pow(n - 1))));
    }

    if (negate) r['s'] = -1;
    return r;
};
It does not even get obvious results correct, like the 4th root of 81 = 3; instead it gets 3.00000000xxx.
Newton's method only gives an approximation for the root, so 3.0000xxx should be expected. If you know that the answer should be an integer, you can round r down (Newton's method overestimates the root) and check that r^n=x.
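A sketch of that round-and-verify step with plain numbers (with big.js you would do the same comparison using Big values and its pow method):

function nthRootRounded(x, n) {
    var r = Math.floor(Math.pow(x, 1 / n)); // rounded-down estimate
    if (Math.pow(r, n) === x) return r;     // exact integer root
    if (Math.pow(r + 1, n) === x) return r + 1;
    return Math.pow(x, 1 / n);              // no integer root: keep the approximation
}

console.log(nthRootRounded(81, 4)); // 3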
You can use the big-numbers library to solve your problem. It supports sqrt, pow, exp and many other features.
The pow method accepts positive, negative, integer and floating-point numbers:
var bn = new BigNumber();
var value = bn.of('81');
var xRoot = value.pow(0.25);
console.log('Result: ' + bn.format(xRoot));
You can use Basenumber.js to perform nth root. Documentation here.
E.g.
// Set precision decimals required
Base.setDecimals(25);
let x = Base("1e+10");
console.log(x.root(10).toString());
<script src='https://cdn.jsdelivr.net/gh/AlexSp3/Basenumber.js#main/BaseNumber.min.js'></script>
I'd like to display decimal places intelligently (i.e. without having to choose between lengthy decimal places or a ton of trailing zeroes) in JavaScript. This is my original list:
6
8
12.225252
Currently I'm using toFixed(1), and have output like this:
6.0
8.0
12.2
Is there a way I can get:
6
8
12.2
instead? Obviously I can write some custom code to do this, but is there anything in-built in JavaScript?
You can use Math.round.
function roundTo(n, decimals) {
    var d = Math.pow(10, decimals);
    return Math.round(n * d) / d;
}
Examples:
roundTo(6, 1)         // 6
roundTo(8, 1)         // 8
roundTo(12.623456, 1) // 12.6
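Alternatively, since toFixed returns a string, converting it back to a Number drops the trailing zeros:

Number((6).toFixed(1));         // 6
Number((12.225252).toFixed(1)); // 12.2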
You could check to see whether the floor of the value is the same as the value itself:
if (Math.floor(x) === x) {
    // no fractional part
}
Realize that floating point numbers are tricky and irritating, so you may end up with fractional parts in cases where, purely mathematically, you don't expect them.
Edit: also, of course, this won't help much with "6.001".
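For example, a result that is mathematically a whole number can fail the check:

var x = (0.1 + 0.2) * 10;         // mathematically 3
console.log(Math.floor(x) === x); // false: x is actually 3.0000000000000004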
Try adding a function to Number prototype like this:
Number.prototype.toFixedIfDecimal = function (places) {
    var isWhole_re = /^\s*-?\d+\s*$/; // allow an optional minus sign so negatives work too
    if (String(this).search(isWhole_re) != -1)
        return this;
    else
        return this.toFixed(places);
};
Then you can use it like this:
var myNumber1 = 8;
var myNumber2 = 10.512;
alert(myNumber1.toFixedIfDecimal(1)); // will return 8
alert(myNumber2.toFixedIfDecimal(1)); // will return 10.5
I know that 0x is a prefix for hexadecimal numbers in Javascript. For example, 0xFF stands for the number 255.
Is there something similar for binary numbers? I would expect 0b1111 to represent the number 15, but this doesn't work for me.
Update:
Newer versions of JavaScript -- specifically ECMAScript 6 -- have added support for binary (prefix 0b), octal (prefix 0o) and hexadecimal (prefix: 0x) numeric literals:
var bin = 0b1111; // bin will be set to 15
var oct = 0o17; // oct will be set to 15
var oxx = 017; // oxx will be set to 15
var hex = 0xF; // hex will be set to 15
// note: bB oO xX are all valid
This feature is already available in Firefox and Chrome. It's not currently supported in IE, but apparently will be when Spartan arrives.
(Thanks to Semicolon's comment and urish's answer for pointing this out.)
Original Answer:
No, there isn't an equivalent for binary numbers. JavaScript only supports numeric literals in decimal (no prefix), hexadecimal (prefix 0x) and octal (prefix 0) formats.
One possible alternative is to pass a binary string to the parseInt method along with the radix:
var foo = parseInt('1111', 2); // foo will be set to 15
In ECMAScript 6 this will be supported as a part of the language, i.e. 0b1111 === 15 is true. You can also use an uppercase B (e.g. 0B1111).
Look for NumericLiterals in the ES6 Spec.
I know that people say that extending prototypes is not a good idea, but since it's your script...
I do it this way:
Object.defineProperty(Number.prototype, 'b', {
    set: function () {
        return false;
    },
    get: function () {
        return parseInt(this, 2);
    }
});
100..b      // returns 4
11111111..b // returns 255
10..b + 1   // returns 3
// and so on
If your primary concern is display rather than coding, there's a built-in conversion system you can use:
var num = 255;
document.writeln(num.toString(16)); // Outputs: "ff"
document.writeln(num.toString(8)); // Outputs: "377"
document.writeln(num.toString(2)); // Outputs: "11111111"
Ref: MDN on Number.prototype.toString
As far as I know it is not possible to use a binary denoter in JavaScript. I have three solutions for you, all of which have their issues. I think alternative 3 is the most "good looking" for readability, and it is possibly much faster than the rest, except for its initial run-time cost. The problem is that it only supports values up to 255.
Alternative 1: "00001111".b()
String.prototype.b = function() { return parseInt(this,2); }
Alternative 2: b("00001111")
function b(i) { if(typeof i=='string') return parseInt(i,2); throw "Expects string"; }
Alternative 3: b00001111
This version allows you to type either 8-digit binary b00000000, 4-digit b0000, or variable-length b0. That is, b01 is illegal; you have to use b0001 or b1.
String.prototype.lpad = function (padString, length) {
    var str = this;
    while (str.length < length)
        str = padString + str;
    return str;
};

for (var i = 0; i < 256; i++)
    window['b' + i.toString(2)] = window['b' + i.toString(2).lpad('0', 8)] = window['b' + i.toString(2).lpad('0', 4)] = i;
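Usage, once the loop above has populated the globals:

console.log(b00001111); // 15
console.log(b1111);     // 15
console.log(b0);        // 0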
Maybe this will be useful:
var bin = 1111;
var dec = parseInt(bin, 2);
// 15
No, but you can use parseInt and optionally omit the quotes.
parseInt(110, 2); // this is 6
parseInt("110", 2); // this is also 6
The only disadvantage of omitting the quotes is that, for very large numbers, you will overflow faster:
parseInt(10000000000000000000000, 2); // this gives 1
parseInt("10000000000000000000000", 2); // this gives 4194304
I know this does not actually answer the question as asked (which was already answered several times), but I suggest that you (or others interested in this subject) consider that the most readable and backwards/future/cross-browser-compatible way would be to just use the hex representation.
From the phrasing of the question it would seem that you are only talking about using binary literals in your code, and not about processing binary representations of numeric values (for which parseInt is the way to go).
I doubt that there are many programmers who need to handle binary numbers and are not familiar with the mapping of 0-F to 0000-1111.
So basically, make groups of four bits and use hex notation. Instead of writing 101000000010 you would use 0xA02, which has exactly the same meaning and is far more readable and less likely to contain errors.
Just consider readability. Try comparing which of these is bigger:
10001000000010010 or 1001000000010010
and what if I write them like this:
0x11012 or 0x9012
Convert binary strings to numbers and vice versa.
var b = function (n) {
    if (typeof n === 'string')
        return parseInt(n, 2);
    else if (typeof n === 'number')
        return n.toString(2);
    throw "unknown input";
};
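Usage:

b('1111'); // 15
b(15);     // "1111"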
Using the Number() function works:
// using Number()
var bin = Number('0b1111'); // bin will be set to 15
var oct = Number('0o17');   // oct will be set to 15
var oxx = Number('0xF');    // oxx will be set to 15

// making a convTo function
const convTo = (prefix, n) => {
    return Number(`${prefix}${n}`); // prepend 0b, 0o or 0x to the digits
};

console.log(bin); // 15
console.log(oct); // 15
console.log(oxx); // 15

// using the convTo function
console.log(convTo('0b', 1111)); // 15