how to convert big decimal value to decimal in javascript - javascript

var zx = 1800;
I have a value as a BigDecimal and I need to convert it to a plain decimal number in JavaScript.
I have tried the zx.doubleValue() method, but I get the error "Uncaught ReferenceError: Double is not defined" in the browser console.

var bigdecimal = require("bigdecimal");
var i = new bigdecimal.BigInteger("1234567890abcdefghijklmn", 24);
console.log("i is " + i);
// Output: i is 60509751690538858612029415201127
var d = new bigdecimal.BigDecimal(i);
var x = new bigdecimal.BigDecimal("123456.123456789012345678901234567890");
console.log("d * x = " + d.multiply(x));
// Output: d * x = 7470299375046812977089832214047022056.555930270554343863089286012030
var two = new bigdecimal.BigDecimal('2');
console.log("Average = " + d.add(x).divide(two));
// Output: Average = 30254875845269429306014707662291.561728394506172839450617283945
var down = bigdecimal.RoundingMode.DOWN();
console.log("d / x (25 decimal places) = " + d.divide(x, 25, down));
// Output: d / x (25 decimal places) = 490131635404200348624039911.8662623025579331926181155

As I understand it, you are receiving a big decimal value from the backend. The problem is that JavaScript's native Number type has limited precision, so your big decimal will be truncated.
You can use a library, for example big-numbers:
https://www.npmjs.com/package/big-numbers
This lets you convert the values received from the backend into big-number objects:
const bigNumberValue = numbers.of([received from backend value]);
Once that is done, you can perform any required operation: add, multiply, etc. See the JavaScript tutorial here:
http://bignumbers.tech/tutorials/java-script
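If the backend value is a large integer (rather than a fractional decimal), native BigInt avoids a library dependency entirely. A minimal sketch, assuming the payload arrives as a string (the value below is borrowed from the example output above):

```javascript
// Hypothetical backend payload: an integer too large for Number to hold exactly.
// Keeping it as a string and converting with BigInt preserves every digit.
const raw = "60509751690538858612029415201127";
const big = BigInt(raw);

console.log(big * 2n);       // exact integer arithmetic, no precision loss
console.log(big.toString()); // back to a string for display or transport
```

Note that BigInt only handles whole numbers; for fractional big decimals you still need a library like the ones above.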

Related

Number of type double gets converted to exponential type

I am consuming a .NET Web API that generates a HighChart structure; its X and Y axes are of type double.
The X axis stores the DateTime in epoch format, for example (1554120000000). I get the correct value when I call the endpoint from Postman.
Expected number
But when I consume the same endpoint from my Angular application, the number gets converted to exponential notation (-1.7976931348623157e+308).
Parsed number
I referred to this question on Stack Overflow and used the method from the answer, but it does not give the correct number:
var epochTime = 1554120000000;
var exponentialNumber = -1.7976931348623157e+308;
function toFixed(x) {
  if (Math.abs(x) < 1.0) {
    var e = parseInt(x.toString().split('e-')[1]);
    if (e) {
      x *= Math.pow(10, e - 1);
      x = '0.' + (new Array(e)).join('0') + x.toString().substring(2);
    }
  } else {
    var e = parseInt(x.toString().split('+')[1]);
    if (e > 20) {
      e -= 20;
      x /= Math.pow(10, e);
      x += (new Array(e + 1)).join('0');
    }
  }
  return x;
}
console.log("expected:" + epochTime)
console.log("result: " + toFixed(exponentialNumber));
How can I avoid this conversion, or if that is not possible, how can I parse the number correctly?
Just work through this example and it should become clear:
https://playcode.io/281575?tabs=console&script.js&output
You can refer to this; you may need to create your own custom method to read such numbers:
How to avoid scientific notation for large numbers in JavaScript?
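As one sketch of a workaround: when the double holds an integer value (as an epoch timestamp does), converting it through BigInt prints the full digits, because BigInt's toString never uses exponent notation. The function name here is illustrative, and this only applies to integer-valued numbers:

```javascript
// Expand an integer-valued double to plain digits.
// BigInt(x) throws if x has a fractional part, so this is integers-only.
function toPlainIntegerString(x) {
  return BigInt(x).toString();
}

console.log(toPlainIntegerString(1554120000000)); // "1554120000000"
console.log(toPlainIntegerString(1e21));          // "1000000000000000000000"
```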

Javascript round / floor / toFixed on decimals

I am having an issue with how JavaScript divides and rounds numbers.
I have two floats, 0.11 and 0.12.
I want to calculate the midpoint of these two numbers and round it up to the nearest value with two decimal places.
For example, if I do this on a calculator:
(0.11 + 0.12) / 2 = 0.115, and I need to round it to 0.12 since it is at or above the midpoint.
If I do this with JavaScript, I do not get an accurate number.
Example:
var high = parseFloat(0.12);
var low = parseFloat(0.11);
var mid = (high + low) / 2;
document.getElementById("demo1").innerHTML = mid;
document.getElementById("demo2").innerHTML = mid.toFixed(2);
var another = mid.toFixed(3);
document.getElementById("demo3").innerHTML =another;
var last = Math.floor(another)
document.getElementById("demo4").innerHTML =last;
http://jsfiddle.net/gzqwbp6c/9/
Any input would be appreciated.
As the 0.11499999999999999 shows, the result is very slightly less than 0.115. This is because 0.11 and 0.12 cannot be represented with perfect accuracy as floating-point numbers.
When you don't want to deal with floating-point error, it's often easier to work with integers directly: small integers are represented exactly by floating-point numbers.
You can multiply by 100 before, and round, to ensure your numbers are integers, and only divide after you get your final result:
var a = Math.round(100 * parseFloat("0.12")) // 12
var b = Math.round(100 * parseFloat("0.11")) // 11
var mid = (a + b) / 2 // 11.5.
// 0.5 can be represented exactly in floating point for small numbers.
var midRound = (Math.round(mid) / 100).toFixed(2) // "0.12"
Need to multiply (work on the integer part, i.e. find the mid, then divide to convert back to the original scale). Note that the expression must follow return on the same line, otherwise automatic semicolon insertion makes the function return undefined:
function myMid(high, low, precision) {
  var convFactor = Math.pow(10, precision);
  return (Math.round((low * convFactor + high * convFactor) / 2) / convFactor).toFixed(precision);
}
Float is not precise; you can't rely on it, or you'll get unexpected results.
Multiply everything by 100 to prevent inaccuracies; .toFixed() does the rounding:
var a = 0.11;
var b = 0.12;
c = parseFloat((((a*100) + (b*100))/200).toFixed(2));
console.log(c);
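The integer-scaling idea from these answers can be wrapped in a small reusable helper. A sketch; the function name and the half-up rounding behavior are assumptions of this example, not taken from the answers above:

```javascript
// Midpoint of two decimals, rounded half-up at `places` decimal places.
// Arithmetic is done on scaled integers to dodge binary-fraction error.
function midRounded(low, high, places) {
  const scale = Math.pow(10, places);
  // Math.round repairs noise like 0.11 * 100 === 11.000000000000002
  const mid = (Math.round(low * scale) + Math.round(high * scale)) / 2;
  return (Math.round(mid) / scale).toFixed(places);
}

console.log(midRounded(0.11, 0.12, 2)); // "0.12"
```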

Javascript: Rounded numbers don't add up (33.3 + 33.3 + 33.3 = 99.89999999999999?) [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 8 years ago.
Consider this javascript that rounds to the nearest tenth, then totals the numbers together.
var log = function(key, value) {
  var li = document.createElement("li");
  li.appendChild(document.createTextNode(key + " = " + value));
  document.getElementById("results").appendChild(li);
};
var number1 = Math.round(10*33.333)/10; // gives 33.3
var number2 = Math.round(10*33.333)/10; // gives 33.3
var number3 = Math.round(10*33.333)/10; // gives 33.3
var totalOfAllNumbers = number1 + number2 + number3; //gives 99.89999999999999?
var totalOfAllRealNumbers = 33.3 + 33.3 + 33.3; //gives 99.9?
log("number1", number1);
log("number2", number2);
log("number3", number3);
log("totalOfAllNumbers", totalOfAllNumbers);
<ul id="results"></ul>
The numbers appear to be rounded, but the total doesn't add up to 99.9? Why?
What is the difference between the number entered as 33.3 manually, and the result of a division? Are they different types?
I hate to give a flippant response, but it gives that rounding-error number "because JavaScript".
A less flippant answer is that JavaScript numbers are always IEEE 754 double-precision floats, and 33.3 has no exact binary representation: each "33.3" is actually a value a hair below 33.3, and adding three of them accumulates enough error to surface in the result. There is no float-versus-double guessing going on; the representation error is baked in when the literal is parsed. If exact decimal arithmetic is critical to you, use a language or library that provides a decimal number type.
Edit:
For example consider python 2.7:
>>> a = round(10*33.333, 2)/10
>>> b = round(10*33.333, 2)/10
>>> c = round(10*33.333, 2)/10
>>> a + b + c
99.999
Or ruby:
2.1.2 :001 > a = (10*33.333).round(2)/10
=> 33.333
2.1.2 :002 > b = (10*33.333).round(2)/10
=> 33.333
2.1.2 :003 > c = (10*33.333).round(2)/10
=> 33.333
2.1.2 :004 > a + b + c
=> 99.999
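For completeness, the usual JavaScript-side fix is the same integer-scaling trick seen in the rounding answers above: keep the addends as exact integers and divide once at the end. A minimal sketch:

```javascript
// 33.3 is not exactly representable in binary, but 333 is,
// so sum tenths as integers and rescale once at the end.
const tenths = [333, 333, 333];
const total = tenths.reduce((a, b) => a + b, 0) / 10;

console.log(total); // 99.9
```

A single division introduces at most one rounding step, instead of one per addition.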

Node.js Secure Random Float from Crypto Buffer

Not for any practical purpose, of course, but I tried to generate a secure random floating point number using the Node's crypto module. Essentially, the following:
var crypto = require('crypto');
var buf = crypto.randomBytes(4);
var float = buf.readFloatBE();
As far as the following test can tell, this fails in an average of 0.40% of cases: instead of getting a float, I get NaN.
var colors = require('colors');
var crypto = require('crypto');
var buf = new Buffer(4);
var fails = 0, tries = 100000;
var failures = [];
for (var i = 0; i < tries; i++) {
  var num = crypto.randomBytes(4).readFloatBE(0);
  try {
    buf.writeFloatBE(num, 0);
  } catch (e) {
    fails++;
    failures.push(buf.readUInt32BE(0).toString(2));
  }
}
var percent = 100 * fails / tries;
if (fails)
percent = (percent.toFixed(2) + "%").red;
else
percent = '0.00%'.blue.bold;
console.log('Test ' + 'complete'.green.bold + ', ' + percent + ": " + fails + " / " + tries);
fails && console.log('Failed'.red + ' values:', failures.join(', '));
I'm guessing this is due to the IEEE single precision floating point number specification, but I'm not familiar with exactly how the float is stored as binary.
Why does this happen, exactly, and apart from simply generating floats until I get a valid number, how can I circumvent this?
EDIT: when looking at the filtered binary data, it appears they all follow the same pattern: the first 8 bits are all set. Everything else is random.
As the buffer library's source shows (line 906 in the then-current version of Node), the assertion step is what rejects NaN; passing noAssert skips that check, so with noAssert set you can write Infinity, NaN, and -Infinity to the buffer.
The ~0.40% failure rate also matches the IEEE 754 single-precision layout: a 32-bit pattern decodes to NaN exactly when its 8 exponent bits are all set and its mantissa is non-zero, which covers 2 × (2^23 − 1) = 16,777,214 of the 2^32 possible patterns, i.e. about 0.39%.

Seeded random number

I've been wondering for some time: is there a good (and fast) way to make a number random while it's seeded?
Is there a good algorithm to convert one number into a seemingly random number?
A little illustration:
specialrand(1) = 8
specialrand(2) = 5
specialrand(3) = 2
specialrand(4) = 5
specialrand(5) = 1
specialrand(1) = 8
specialrand(4) = 5
specialrand(1) = 8
It would be very nice if the output could also be huge numbers.
As a note: I don't want to fill an array and randomize the numbers, because I want to be able to feed it a huge range of inputs, and I want the same output whenever I restart the program.
You're not looking for a seeded random number. What I think you're looking for instead is a hashing function: if the same input always produces the same output, that's not random.
If you're looking to generate a sequence of random numbers for a run, but have the same sequence generated from run to run, you can use a random number generator that produces the same sequence given the same seed value.
That's how most basic PRNGs work. There are more cryptographically secure RNGs out there, but a simple seedable PRNG should accomplish your needs; note that JavaScript's built-in Math.random() does not accept a seed.
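A deterministic integer hash along those lines can be sketched in a few lines. The multiplier constant below is from a commonly used 32-bit mixing function, and the name specialrand just mirrors the question's illustration:

```javascript
// Deterministic scramble of a 32-bit integer: the same input always
// yields the same output, but outputs look unrelated for nearby inputs.
function specialrand(n) {
  let h = n >>> 0;                                  // force to uint32
  h = Math.imul(h ^ (h >>> 16), 0x45d9f3b) >>> 0;   // mix high bits into low
  h = Math.imul(h ^ (h >>> 16), 0x45d9f3b) >>> 0;   // and mix again
  return (h ^ (h >>> 16)) >>> 0;
}

console.log(specialrand(1) === specialrand(1)); // true: repeatable
```

Because every step (xorshift, multiply by an odd constant mod 2^32) is invertible, distinct inputs in the 32-bit range never collide.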
Maybe pseudorandom number generators are what you are looking for; for example, xorshift (shown here in C):
uint32_t xor128(void) {
  static uint32_t x = 123456789;
  static uint32_t y = 362436069;
  static uint32_t z = 521288629;
  static uint32_t w = 88675123;
  uint32_t t;

  t = x ^ (x << 11);
  x = y; y = z; z = w;
  return w = w ^ (w >> 19) ^ (t ^ (t >> 8));
}
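That snippet is C; a direct JavaScript port is straightforward, using >>> 0 to keep every intermediate value in unsigned 32-bit range. A sketch with the same seed constants:

```javascript
// xorshift128 ported to JavaScript. The >>> 0 coercions emulate
// C's uint32_t wraparound; >>> replaces C's unsigned right shift.
let x = 123456789, y = 362436069, z = 521288629, w = 88675123;

function xor128() {
  const t = (x ^ (x << 11)) >>> 0;
  x = y; y = z; z = w;
  w = (w ^ (w >>> 19) ^ (t ^ (t >>> 8))) >>> 0;
  return w;
}

console.log(xor128()); // deterministic: the same seeds give the same sequence
```

Reseeding x, y, z, w with the same values replays the identical sequence, which is the run-to-run reproducibility the question asks about.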
You could create something like this:
take a seed
specialrand(5) is a function which returns the fifth random number generated from that seed
or specialrand(5) is a function which returns the first random number generated from seed + 5
Maybe this is enough for your purpose.
Try setting a key, or a set of keys, then writing a function with an equation that returns a new number based on that key.
A very basic example would be:
function specialrand(value) {
  var key = [1, 2, 4, 6, 8];
  for (var k in key) {
    if (k % 2 === 0) {
      value -= key[k] * value;
    } else {
      value += key[k] / value;
    }
  }
  return value;
}
However, you could create a highly complex equation to generate your 'random' number while still returning the same number each time.
You can use Date functionality:
Math.valueOfSeed = function(n) {
  return Number(new Date(n % 9999, n % 12, n % 30, n % 24, n % 60, n % 60, n % 1000));
};
alert(Math.valueOfSeed(1) + " = " + Math.valueOfSeed(1));
alert(Math.valueOfSeed(2) + " = " + Math.valueOfSeed(2));
alert(Math.valueOfSeed(15) + " = " + Math.valueOfSeed(15));
alert(Math.valueOfSeed(5555) + " = " + Math.valueOfSeed(5555));
alert(Math.valueOfSeed(21212121) + " = " + Math.valueOfSeed(21212121));
alert(Math.valueOfSeed(6554654654) + " = " + Math.valueOfSeed(6554654654));
test is here
