Recently I ran into the well-known floating point precision errors of JavaScript. Usually I would avoid floating point calculations on the thin client and rather leave them to the back-end.
I started using the big.js library created by Michael Mclaughlin. Though it has a square-root method, it does not have an nth-root method, nor does the power function support fractional values as arguments.
So I was wondering if anyone using the library has extended it to have such a function or at least use it to calculate accurate nth-root results.
Michael Mclaughlin suggested that I implement such a function similar in structure to the square-root function. However, my attempts at understanding the logic exposed my lack of maths ability, resulting in simple calculations yielding very wrong results.
Using the algorithm on Rosetta Code also yields incorrect results.
Here is the code to my last attempt:
P['nthrt'] = P['nthroot'] = function (n, prec)
{
    var negate, r,
        x = this,
        xc = x['c'],
        s = x['s'],
        e = x['e'];
    // Argument defaults
    n = n || 2;
    prec = prec || 12;
    // Zero?
    if ( !xc[0] ) {
        return new Big(x);
    }
    // Negative base: only an odd root keeps a real (negative) result.
    negate = ( n % 2 == 1 && s < 0 );
    // Newton's method: r_(k+1) = ((n - 1) * r_k + x / r_k^(n - 1)) / n
    r = new Big(1); // Initial guess.
    for (var k = 0; k < prec; k++) {
        r = (ONE.div(n)).times(r.times(n - 1).plus(x.div(r.pow(n - 1))));
    }
    if (negate) r['s'] = -1;
    return r;
};
It does not even get obvious results correct, like the 4th root of 81 = 3; instead it gets 3.00000000xxx.
Newton's method only gives an approximation for the root, so 3.0000xxx should be expected. If you know that the answer should be an integer, you can round r down (Newton's method overestimates the root) and check that r^n=x.
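For example, a rough sketch of that check with big.js (assuming r holds the Newton approximation from the code above, x the original Big value, and the usual big.js round/pow/eq methods) might be:
var candidate = r.round(0, 0);   // 0 decimal places, rounding mode 0 = round down
if (candidate.pow(n).eq(x)) {    // does candidate^n reproduce x exactly?
    r = candidate;               // yes: the nth root is an integer, use it as-is
}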
You can use the big-numbers library to solve your problem. It supports sqrt, pow, exp and many other features.
The pow method accepts positive, negative, integer and floating point numbers:
var bn = new BigNumber();
var value = bn.of('81');
var xRoot = value.pow(0.25);
console.log('Result: ' + bn.format(xRoot));
You can use Basenumber.js to perform nth root. Documentation here.
E.g.
// Set precision decimals required
Base.setDecimals(25);
let x = Base("1e+10");
console.log(x.root(10).toString());
<script src='https://cdn.jsdelivr.net/gh/AlexSp3/Basenumber.js#main/BaseNumber.min.js'></script>
How can I get dynamic precision for a float?
Example of what I need:
0.00019400000001.dynamicPrecision() //0.000194
0.0001940001.dynamicPrecision() //0.000194
0.0001941.dynamicPrecision() //0.0001941
0.0194.dynamicPrecision() //0.0194
0.01940000.dynamicPrecision() //0.0194
(it's important not to have useless zeros at the end)
I can't use toFixed or toPrecision because the number of significant digits can change and is unknown. So what is the way to write this dynamicPrecision method with dynamic precision?
While what you are asking for is a bit questionable, one approach would be to take slices of the decimal and then compare it to the original. If it is within some threshold percentage of the original, consider it an answer.
const f = (v, threshold = .9999) => {
    let shift = 1;
    let part;
    do {
        shift *= 10;
        part = Math.floor(v * shift) / shift;
    } while (part / v < threshold);
    return part;
}
[0.194, 0.194000001, 0.19401, 0.194101]
.forEach(v => console.log(f(v)));
This uses actual math to determine the significant digit.
Basically, for each step, it takes one more digit and compares it against the value. If it is within the threshold, then it will be returned.
For 1.9410001 it would:
part = 1.9
part = 1.94
part = 1.941 // part / v > threshold, returned
The threshold is then configurable. .9999 means it is 99.99% the same as the original value.
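For example, the second argument can be used to loosen that check (the outputs shown are just what the function above returns for these inputs):
console.log(f(0.1941));       // 0.1941 with the default 99.99% threshold
console.log(f(0.1941, 0.99)); // 0.194  with a looser 99% threshold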
I hope it will help,
var number1 = 0.00019400000001
console.log(parseFloat(number1.toString().replace(/0+[1-9]$/, '')));
You could replace all endings that have at least one zero followed by a single one at the end. Then take the numerical value.
function precision(v) {
return +v.toString().replace(/0+1$/, '');
}
console.log([0.00019400000001, 0.0001940001, 0.0001941, 0.0194, 0.01940000].map(precision));
Number.prototype.dynamicPrecision = function(){
return parseFloat(this.valueOf().toString().replace(/0+1$/, ''));
}
console.log(
0.00019400000001.dynamicPrecision(), //0.000194
0.0001940001.dynamicPrecision(), //0.000194
0.0001941.dynamicPrecision(), //0.0001941
0.0194.dynamicPrecision(), //0.0194
0.01940000.dynamicPrecision() //0.0194
)
I was just trying to write a polyfill for Math.sinh here,
which is required for writing a JVM in JavaScript (doppio),
but the problem is that the Java result for Math.sinh(Double.MIN_VALUE) = 4.9E-324,
while in JavaScript it's 0, because I am using a polyfill and it requires Math.exp(4.9E-324):
(Math.exp(x) - Math.exp(-x)) / 2;
First, JavaScript turns 4.9E-324 into 5E-324; secondly, Math.exp(4.9E-324) or Math.pow(Math.E, 4.9E-324) results in 1, which then results in (1-1)/2, and that is 0 :)
Also, Number.MIN_VALUE in JS is 5E-324, which equates to 4.9E-324 in Double.MIN_VALUE.
Is there any way I can avoid Math.exp or Math.pow, or otherwise handle the precision? I have looked at the bigdecimal library, which is also not working.
Is there any other way to handle significant figures?
Note: I have to pass all boundary test cases!
The Taylor expansion of sinh(x) is x + x^3/3! + x^5/5! + x^7/7! + ... This converges for all values of x, but will converge fastest (and give the best results) for x close to 0.
function mySinh(x) {
    var returning = x,   // running sum; the series starts with the x term
        xToN = x,        // x^n for the current odd power n
        factorial = 1,   // n! for the current odd n
        index = 1,
        nextTerm = 1;
    while ( nextTerm != 0 ) {
        // move from n to n + 2: multiply the factorial by the next two integers
        index++;
        factorial *= index;
        index++;
        factorial *= index;
        xToN *= x * x;
        nextTerm = xToN / factorial;
        returning += nextTerm;
    }
    return returning;
}
For x less than 1E-108, nextTerm will immediately underflow to 0, and you'll just get x back.
Where you switch from using the Taylor expansion to using the definition in terms of Math.exp may end up depending on what your test cases are looking at.
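A rough sketch of such a combined polyfill (the function name and the 1e-5 cutoff are assumptions, not tested boundaries) could look like:
function sinhPolyfill(x) {
    // Near zero the exp-based formula cancels to 0, so use the Taylor series;
    // for |x| this small the x^3/3! term is already the last one that matters.
    if (Math.abs(x) < 1e-5) {
        return x + (x * x * x) / 6;
    }
    return (Math.exp(x) - Math.exp(-x)) / 2;
}
console.log(sinhPolyfill(Number.MIN_VALUE)); // 5e-324 instead of 0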
I've written a JavaScript program that calculates the depth of a binary tree based on the number of elements. My program has been working fine for months, but recently I've found a difference when the web page is viewed in Chrome vs Firefox.
In particular, on Firefox:
Math.log2(8) = 3
but now in Chrome:
Math.log2(8) = 2.9999999999999996
My JavaScript program was originally written to find the depth of the binary tree based on the number of elements as:
var tree_depth = Math.floor(Math.log2(n_elements)) + 1;
I made a simple modification to this formula so that it will still work correctly on Chrome:
var epsilon = 1.e-5;
var tree_depth = Math.floor(Math.log2(n_elements) + epsilon) + 1;
I have 2 questions:
Has anyone else noticed a change in the precision in Chrome recently for Math.log2?
Is there a more elegant modification than the one I made above by adding epsilon?
Note: Math.log2 hasn't actually changed since it's been implemented
in V8. Maybe you remembered incorrectly or you had included a shim that
happened to get the result correct for these special cases before Chrome
included its own implementation of Math.log2.
Also, it seems that you should be using Math.ceil(x) rather than
Math.floor(x) + 1.
How can I solve this?
To avoid relying on Math.log or Math.log2 being accurate amongst different implementations of JavaScript (the algorithm used is implementation-defined), you can use bitwise operators if you have fewer than 2^32 elements in your binary tree. This obviously isn't the fastest way of doing this (this is only O(n)), but it's a relatively simple example:
function log2floor(x) {
    // match the behaviour of Math.floor(Math.log2(x)); change it if you like
    if (x === 0) return -Infinity;
    for (var i = 0; i < 32; ++i) {
        if (x >>> i === 1) return i;
    }
}
console.log(log2floor(36) + 1); // 6
How is Math.log2 currently implemented in different browsers?
The current implementation in Chrome is inaccurate as they rely on multiplying the value of Math.log(x) by Math.LOG2E, making it susceptible to rounding error (source):
// ES6 draft 09-27-13, section 20.2.2.22.
function MathLog2(x) {
return MathLog(x) * 1.442695040888963407; // log2(x) = log(x)/log(2).
}
If you are running Firefox, it either uses the native log2 function (if present), or if not (e.g. on Windows), uses a similar implementation to Chrome (source).
The only difference is that instead of multiplying, they divide by log(2):
#if !HAVE_LOG2
double log2(double x)
{
return log(x) / M_LN2;
}
#endif
Multiplying or dividing: how much of a difference does it make?
To test the difference between dividing by Math.LN2 and multiplying by Math.LOG2E, we can use the following test:
function log2d(x) { return Math.log(x) / Math.LN2; }
function log2m(x) { return Math.log(x) * Math.LOG2E; }
// 2^1024 rounds to Infinity
for (var i = 0; i < 1024; ++i) {
    var resultD = log2d(Math.pow(2, i));
    var resultM = log2m(Math.pow(2, i));
    if (resultD !== i) console.log('log2d: expected ' + i + ', actual ' + resultD);
    if (resultM !== i) console.log('log2m: expected ' + i + ', actual ' + resultM);
}
Note that no matter which function you use, they still have floating point errors for certain values1. It just so happens that the floating point representation of log(2) is less than the actual value, resulting in a value higher than the actual value (while log2(e) is lower). This means that using log(2) will round down to the correct value for these special cases.
1: log(pow(2, 29)) / log(2) === 29.000000000000004
You could perhaps do this instead
// Math.log2(n_elements) to 10 decimal places
var tree_depth = Math.floor(Math.round(Math.log2(n_elements) * 10000000000) / 10000000000);
This code works as a calculator, but the scratch pad at codeacademy tells me that eval is evil. Is there another way to do the same thing without using eval?
var calculate = prompt("Enter problem");
alert(eval(calculate));
eval evaluates the string input as JavaScript and coincidentally JavaScript supports calculations and understands 1+1, which makes it suitable as a calculator.
If you don't want to use eval, which is good, you have to parse that string yourself and, finally, do the computation yourself (not exactly yourself though). Have a look at this math processor, which does what you want.
Basically what you do is:
Read the input string char by char (with this kind of problem it's still possible)
Building a tree of actions you want to do
At the end of the string, you evaluate the tree and do some calculations
For example you have "1+2/3", this could evaluate to the following data structure:
"+"
/ \
"1" "/"
/ \
"2" "3"
You could then traverse that structure from top to bottom and do the computations.
At first you've got the "+", which has a 1 on the left side and some expression on the right side,
so you have to evaluate that expression first. So you go to the "/" node, which has two numeric children. Knowing that, you can now compute 2/3 and replace the whole "/" node with the result of that. Now you can go up again and compute the result of the "+" node: 1 + 0.66. Now you replace that node with the result and all you've got left is the result of the expression.
Some pseudo code on how this might look in your code:
calculation(operator, leftValue, rightValue):
    switch operator {
        case '+': return leftValue + rightValue
        case '-': return 42
    }

action(node):
    node.value = calculation(node.operator, action(node.left), action(node.right))
As you might have noticed, the tree is designed in such a way that it honors operator precedence. The / is at a lower level than the +, which means it gets evaluated first.
However you do this in detail, that's basically the way to go.
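If you want to try it without a library, here is a rough sketch of that idea as a recursive descent parser (function names are just placeholders; it only handles +, -, *, / and parentheses, and ignores error handling):
function evaluateExpression(input) {
    input = input.replace(/\s+/g, ''); // ignore whitespace
    var pos = 0;

    function peek() { return input[pos]; }

    function parseNumber() {
        var start = pos;
        while (pos < input.length && /[\d.]/.test(input[pos])) pos++;
        return parseFloat(input.slice(start, pos));
    }

    // factor := number | "(" expression ")"
    function parseFactor() {
        if (peek() === '(') {
            pos++;                         // consume "("
            var value = parseExpression();
            pos++;                         // consume ")"
            return value;
        }
        return parseNumber();
    }

    // term := factor (("*" | "/") factor)*, binds tighter than + and -
    function parseTerm() {
        var value = parseFactor();
        while (peek() === '*' || peek() === '/') {
            var op = input[pos++];
            var rhs = parseFactor();
            value = op === '*' ? value * rhs : value / rhs;
        }
        return value;
    }

    // expression := term (("+" | "-") term)*
    function parseExpression() {
        var value = parseTerm();
        while (peek() === '+' || peek() === '-') {
            var op = input[pos++];
            var rhs = parseTerm();
            value = op === '+' ? value + rhs : value - rhs;
        }
        return value;
    }

    return parseExpression();
}
console.log(evaluateExpression('1+2/3'));       // 1.666...
console.log(evaluateExpression('(1+2)*4 - 5')); // 7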
You can use the expression parser that is included in the math.js library:
http://mathjs.org
Example usage:
mathjs.evaluate('1.2 / (2.3 + 0.7)'); // 0.4
mathjs.evaluate('5.08 cm in inch'); // 2 inch
mathjs.evaluate('sin(45 deg) ^ 2'); // 0.5
mathjs.evaluate('9 / 3 + 2i'); // 3 + 2i
mathjs.evaluate('det([-1, 2; 3, 1])'); // -7
You can use eval safely for a simple arithmetic calculator by filtering the input: if you only accept digits, decimal points and operators (+, -, *, /), you won't get into much trouble. If you want advanced Math functions, you are better off with the parser suggestions.
function calculate() {
    "use strict";
    var s = prompt('Enter problem');
    if (/[^0-9()*+\/ .-]+/.test(s)) throw Error('bad input...');
    try {
        var ans = eval(s);
        alert(ans);
    }
    catch (er) {
        alert(er.message);
    }
}
calculate()
I wrote some functions when I had a problem like this. Maybe this can help:
data = [
    {id: 1, val1: "test", val2: "test2", val3: "test3"},
    {id: 2, val1: "test", val2: "test2", val3: "test3"},
    {id: 3, val1: "test", val2: "test2", val3: "test3"}
];
datakey = Object.keys(data[0]);
// here's a fix for e['datakey[f]'] >> e[x]
vix = function(e,f){
a = "string";
e[a] = datakey[f];
x = e.string;
end = e[x];
delete e.string;
return end;
};
// here's a fix to define that variable
vox = function(e,f,string){
a = "string";
e[a] = datakey[f];
x = e.string;
end = e[x] = string;
delete e.string;
};
row = 2 // 3rd row ==> {id:3, val1:"test", val2:"test2", val3:"test3"}
column = 1 //datakey 2 ==> val1
vox(data[row],column,"new value");
alert(data[2].val1); //the value that we have changed
I know that 0x is a prefix for hexadecimal numbers in Javascript. For example, 0xFF stands for the number 255.
Is there something similar for binary numbers? I would expect 0b1111 to represent the number 15, but this doesn't work for me.
Update:
Newer versions of JavaScript -- specifically ECMAScript 6 -- have added support for binary (prefix 0b), octal (prefix 0o) and hexadecimal (prefix: 0x) numeric literals:
var bin = 0b1111; // bin will be set to 15
var oct = 0o17; // oct will be set to 15
var oxx = 017; // oxx will be set to 15
var hex = 0xF; // hex will be set to 15
// note: bB oO xX are all valid
This feature is already available in Firefox and Chrome. It's not currently supported in IE, but apparently will be when Spartan arrives.
(Thanks to Semicolon's comment and urish's answer for pointing this out.)
Original Answer:
No, there isn't an equivalent for binary numbers. JavaScript only supports numeric literals in decimal (no prefix), hexadecimal (prefix 0x) and octal (prefix 0) formats.
One possible alternative is to pass a binary string to the parseInt method along with the radix:
var foo = parseInt('1111', 2); // foo will be set to 15
In ECMAScript 6 this will be supported as a part of the language, i.e. 0b1111 === 15 is true. You can also use an uppercase B (e.g. 0B1111).
Look for NumericLiterals in the ES6 Spec.
I know that people say that extending the prototypes is not a good idea, but since it's your script...
I do it this way:
Object.defineProperty(
    Number.prototype, 'b', {
        set: function() {
            return false;
        },
        get: function() {
            return parseInt(this, 2);
        }
    }
);

100..b      // returns 4
11111111..b // returns 255
10..b+1     // returns 3
// and so on
If your primary concern is display rather than coding, there's a built-in conversion system you can use:
var num = 255;
document.writeln(num.toString(16)); // Outputs: "ff"
document.writeln(num.toString(8)); // Outputs: "377"
document.writeln(num.toString(2)); // Outputs: "11111111"
Ref: MDN on Number.prototype.toString
As far as I know it is not possible to use a binary literal in JavaScript. I have three solutions for you, all of which have their issues. I think alternative 3 is the most "good looking" for readability, and it is possibly much faster than the rest - except for its initial run-time cost. The problem is it only supports values up to 255.
Alternative 1: "00001111".b()
String.prototype.b = function() { return parseInt(this,2); }
Alternative 2: b("00001111")
function b(i) { if(typeof i=='string') return parseInt(i,2); throw "Expects string"; }
Alternative 3: b00001111
This version allows you to type either 8-digit binary b00000000, 4-digit b0000, or variable-digit b0. That is, b01 is illegal; you have to use b0001 or b1.
String.prototype.lpad = function(padString, length) {
    var str = this;
    while (str.length < length)
        str = padString + str;
    return str;
}

for (var i = 0; i < 256; i++)
    window['b' + i.toString(2)] = window['b' + i.toString(2).lpad('0', 8)] = window['b' + i.toString(2).lpad('0', 4)] = i;
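After that loop has run, the generated globals can be used directly (only values 0 through 255 are defined, as noted above):
console.log(b00001111); // 15
console.log(b1010);     // 10
console.log(b11111111); // 255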
Maybe this will be useful:
var bin = 1111;
var dec = parseInt(bin, 2);
// 15
No, but you can use parseInt and optionally omit the quotes.
parseInt(110, 2); // this is 6
parseInt("110", 2); // this is also 6
The only disadvantage of omitting the quotes is that, for very large numbers, you will overflow faster (the numeric literal is converted to a string like "1e+22" before parsing, so parseInt stops at the "e"):
parseInt(10000000000000000000000, 2); // this gives 1
parseInt("10000000000000000000000", 2); // this gives 4194304
I know this does not actually answer the question as asked (which was already answered several times); however, I suggest that you (or others interested in this subject) consider that the most readable and backwards/future/cross-browser-compatible way would be to just use the hex representation.
From the phrasing of the question it would seem that you are only talking about using binary literals in your code and not about processing binary representations of numeric values (for which parseInt is the way to go).
I doubt that there are many programmers who need to handle binary numbers and are not familiar with the mapping of 0-F to 0000-1111.
So basically, make groups of four and use hex notation.
So instead of writing 101000000010 you would use 0xA02, which has exactly the same meaning and is far more readable and less likely to have errors.
Just consider readability. Try comparing which of these is bigger:
10001000000010010 or 1001000000010010
and what if I write them like this:
0x11012 or 0x9012
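A quick check in the console confirms that the two notations name the same numbers (purely illustrative):
console.log(parseInt('101000000010', 2) === 0xA02);        // true
console.log(parseInt('10001000000010010', 2) === 0x11012); // true
console.log(parseInt('1001000000010010', 2) === 0x9012);   // true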
Convert binary strings to numbers and vice versa.
var b = function(n) {
    if (typeof n === 'string')
        return parseInt(n, 2);
    else if (typeof n === 'number')
        return n.toString(2);
    throw "unknown input";
};
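Usage would then look something like this:
b('1111'); // 15
b(15);     // "1111"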
Using the Number() function works...
// using Number()
var bin = Number('0b1111'); // bin will be set to 15
var oct = Number('0o17'); // oct will be set to 15
var oxx = Number('0xF'); // hex will be set to 15
// making function convTo
const convTo = (prefix,n) => {
return Number(`${prefix}${n}`) //Here put prefix 0b, 0x and num
}
console.log(bin)
console.log(oct)
console.log(oxx)
// Using convTo function
console.log(convTo('0b',1111))