How do I extract an individual digit from a float - javascript

I need to extract individual digits from a float without turning the float into a string, but I have no idea how to do that. I'm thinking of something like substr, but for a number.

You can turn it into a string, extract the digit you want, and then turn the string back into a number using the Number() method.
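For example, a minimal sketch of that string round-trip (the helper name is just illustrative; the index counts digits from the left with the point skipped):
// Hypothetical helper: i-th digit of x, 0-based, left to right, point skipped
function digitAt(x, i) {
    var s = Math.abs(x).toString().replace(".", "");
    return Number(s.charAt(i));
}
digitAt(45783.7304, 0); // 4
digitAt(45783.7304, 5); // 7 - the first digit after the point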

Let's say that you have a float like this:
x = 45783.7304
and you need the hundredths digit:
n = 0.01
result = Math.floor(x / n) % 10
If you need the thousands digit:
x = 45783.7304
n = 1000
result = Math.floor(x / n) % 10
Or you can split the digits into an array (though this does resort to strings):
digits = x.toString().replace(".", "").split("").map(Number)
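Putting the arithmetic version together, a small sketch (the function name is illustrative; it stays reliable only within double precision limits, since the division itself can round):
// Digit of x with place value n (n = 1000 for thousands, 0.01 for hundredths)
function digitOf(x, n) {
    return Math.floor(Math.abs(x) / n) % 10;
}
digitOf(45783.7304, 1000); // 5
digitOf(45783.7304, 0.01); // 3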

What you ask is already somewhat ill-defined if you are thinking of the decimal representation of a binary-based floating point number, even when resorting to a string representation, because there are several ways of printing a decimal representation of a float:
the exact decimal representation of a float
the shortest decimal representation of a float that would get interpreted back to the same float in a round-trip conversion
some approximate decimal representation of a float, rounded (or truncated, or ...) to a fixed number of digits or decimal places.
Let's take an example, say you start with the nearest float to 0.0012345
the exact representation of that float in IEEE 754 double precision is 0.0012344999999999999203137424075293893110938370227813720703125
the shortest decimal representation converting back to same float - assuming round to nearest, tie to even default rounding mode - is 0.0012345
rounded or truncated to 6 decimal places after the decimal point (4 significant digits), it leads to 0.001234
But let's take the nearest float to 0.012345
the exact decimal representation of that float is 0.01234500000000000007049916206369744031690061092376708984375
the shortest is 0.012345
the truncated to 5 places is 0.01234
the rounded to 5 places is 0.01235
We see that, depending on the chosen string representation, your result may vary slightly.
Without resorting to string representation, things get worse, because every operation performed by the floating point arithmetic unit rounds its result to the nearest representable float, and thus may induce slight differences in the digits. It gets even worse if you chain several of these inexact operations!
For example, using the shortest decimal representation for the sake of brevity, the most trivial chaining gives:
0.0012345 * 1000000 -> 1234.5
0.0012345 * 10 * 10 * 10 * 10 * 10 * 10 -> 1234.4999999999998
The exact value of those operations, 1234.4999999999999203137424075293893110938370227813720703125, is of course not representable as a float, the nearest float being exactly 1234.5.
The easiest thing you could think of is converting the float to some exact decimal or binary fraction as soon as possible and then operating on those exact numbers - you will almost certainly find dedicated JavaScript libraries to do so. But think twice about what you exactly want first, because mixing float and decimal representations is a recipe for surprising (unexpected) results, unless great care is taken!
Depending on your purposes, you may want to avoid the float representation entirely.
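For instance, a minimal sketch of avoiding floats entirely with exact scaled integers (native BigInt; the scale of 10^-7 is an illustrative choice):
// Store values as integer counts of 10^-7 units; every step below is exact.
const v = 12345n;                // 12345 * 10^-7 = 0.0012345, exactly
const r = v * 1000000n;          // 12345000000 * 10^-7 = 1234.5, exactly
console.log(r.toString());       // "12345000000" - no rounding at any step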

Related

How can I get exact value string of huge numbers in JavaScript?

I know JavaScript numbers are just "double" numbers and have only 52 bits of precision for the fraction part. However, actual JavaScript numbers seem to have more practical precision for huge numbers.
For example, the predefined constant Number.MAX_VALUE represents the largest positive finite value of the Number type, which is approximately 1.7976931348623157e+308. Here I can access trailing digits of this value using a modulus operator.
> Number.MAX_VALUE
1.7976931348623157e+308
> Number.MAX_VALUE % 10000000000
4124858368
From this result I can assume that this number is 7fef ffff ffff ffff, which represents (1 + (1 − 2 ** −52)) × 2 ** 1023 (Wikipedia), and can be transcribed in exact form as follows:
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
...and we only saw the trailing 10 digits of these 309 digits. So I think each JavaScript number must have exact digits in decimal form.
My question is: how do I get this 309-digit string in JavaScript? Attempts like Number.MAX_VALUE / 10000000000 % 10000000000 just fail because of the sheer magnitude (the division itself rounds).
Furthermore, how about tiny numbers such as Number.MIN_VALUE? This must be the following fraction in the decimal form.
0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004940656458412465441765687928682213723650598026143247644255856825006755072702087518652998363616359923797965646954457177309266567103559397963987747960107818781263007131903114045278458171678489821036887186360569987307230500063874091535649843873124733972731696151400317153853980741262385655911710266585566867681870395603106249319452715914924553293054565444011274801297099995419319894090804165633245247571478690147267801593552386115501348035264934720193790268107107491703332226844753335720832431936092382893458368060106011506169809753078342277318329247904982524730776375927247874656084778203734469699533647017972677717585125660551199131504891101451037862738167250955837389733598993664809941164205702637090279242767544565229087538682506419718265533447265625
With big.js, new Big(Number.MAX_VALUE).toFixed() gives:
179769313486231570000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Note that this differs from the exact expansion above: big.js parses the number through its shortest string form, "1.7976931348623157e+308", so you get that shortest representation padded with zeros rather than the exact binary expansion.
Check out the code at the link below (decoded, it loads big.js and runs var max = new Big( Number.MAX_VALUE ); max.toFixed();):
http://howjs.com/?%3Aload%20http%3A%2F%2Fwww.javascriptoo.com%2Fapplication%2Fhtml%2Fjs%2FMikeMcl%2Fbig.js%2Fbig.min.js%0A%0Avar%20max%20%3D%20new%20Big(%20Number.MAX_VALUE%20)%3B%0Amax.toFixed()%3B
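In modern JavaScript you can also get the exact digits with no library at all: Number.MAX_VALUE is an integral value, and Number-to-BigInt conversion is exact (this does not help for fractional values such as Number.MIN_VALUE):
const exact = BigInt(Number.MAX_VALUE).toString();
console.log(exact.length);      // 309
console.log(exact.slice(-10));  // "4124858368" - the trailing digits above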
The actual implementation of IEEE floating point numbers is a little (little!!!) confusing.
I find it helps to think of a simpler form; this behaves the same everywhere except near the overflows and underflows, where the IEEE format is just better.
This is the form:
A floating point number consists of:
A sign for the number (+/-)
An unsigned integer value called the "mantissa" -- make this 'v'
An unsigned integer value called the "exponent" -- make this 'n'
A "sign" for the exponent.
The sign of the number is simple -- does it have a minus in front.
The value is calculated as:
v*2ⁿ
If the sign of the exponent is positive, the factor is basically 2*2*2*...*2, for as many twos as you have specified. If a large number is represented in decimal it will have lots of digits all the way down to the decimal point, BUT the low ones carry no real information. If you display the number in binary, after about 53 binary digits all the rest will be zeros, and you can't change them.
Notice that with a positive exponent all of this is integers; floating point numbers (including IEEE ones) calculate exactly with integers as long as you don't overflow. When you overflow they are still well behaved, they just have zeros in the lower bits.
Only when the exponent is negative do you get strangeness:
v/(2ⁿ)
The value you get for a negative exponent is still based on the 2*2*2*...*2 value, but you divide by it instead. So you're trying to represent, say, a tenth with a sum of halves, quarters, eighths and so forth... but this doesn't work exactly, so you get rounding errors and all the lovely floating point problems.
Your example value:
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
In binary it is
1111111111111111111111111111111111111111111111111111100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
There's lots of zeros on the end.
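A quick way to check this (BigInt conversion is exact for integral doubles, so this shows the true bit pattern expanded in binary):
const bits = BigInt(Number.MAX_VALUE).toString(2);
console.log(bits.length);       // 1024 binary digits in total
console.log(bits.indexOf("0")); // 53 - ones up front, then zeros all the way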
What every computer scientist should know about floating point

javascript: why is it returning so many decimals after a multiply? [duplicate]

I know a little bit about how floating-point numbers are represented, but not enough, I'm afraid.
The general question is:
For a given precision (for my purposes, the number of accurate decimal places in base 10), what range of numbers can be represented for 16-, 32- and 64-bit IEEE-754 systems?
Specifically, I'm only interested in the range of 16-bit and 32-bit numbers accurate to +/-0.5 (the ones place) or +/- 0.0005 (the thousandths place).
For a given IEEE-754 floating point number X, if
2^E <= abs(X) < 2^(E+1)
then the distance from X to the next largest representable floating point number (epsilon) is:
epsilon = 2^(E-52) % For a 64-bit float (double precision)
epsilon = 2^(E-23) % For a 32-bit float (single precision)
epsilon = 2^(E-10) % For a 16-bit float (half precision)
The above equations allow us to compute the following:
For half precision...
If you want an accuracy of +/-0.5 (or 2^-1), the maximum size that the number can be is 2^10. Any X larger than this limit leads to the distance between floating point numbers greater than 0.5.
If you want an accuracy of +/-0.0005 (about 2^-11), the maximum size that the number can be is 1. Any X larger than this maximum limit leads to the distance between floating point numbers greater than 0.0005.
For single precision...
If you want an accuracy of +/-0.5 (or 2^-1), the maximum size that the number can be is 2^23. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.5.
If you want an accuracy of +/-0.0005 (about 2^-11), the maximum size that the number can be is 2^13. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.0005.
For double precision...
If you want an accuracy of +/-0.5 (or 2^-1), the maximum size that the number can be is 2^52. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.5.
If you want an accuracy of +/-0.0005 (about 2^-11), the maximum size that the number can be is 2^42. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.0005.
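As a small sketch of the double precision case in JavaScript (assumes a normal, nonzero x; the function name is illustrative):
// Distance from x to the next larger representable double (its "ulp")
function ulp(x) {
    var e = Math.floor(Math.log2(Math.abs(x)));
    return Math.pow(2, e - 52);
}
console.log(ulp(1));               // 2.220446049250313e-16, i.e. Number.EPSILON
console.log(ulp(Math.pow(2, 52))); // 1 - beyond 2^52 doubles cannot resolve +/-0.5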
For floating-point integers (I'll give my answer in terms of IEEE double-precision), every integer between 1 and 2^53 is exactly representable. Beyond 2^53, integers that are exactly representable are spaced apart by increasing powers of two. For example:
Every 2nd integer between 2^53 + 2 and 2^54 can be represented exactly.
Every 4th integer between 2^54 + 4 and 2^55 can be represented exactly.
Every 8th integer between 2^55 + 8 and 2^56 can be represented exactly.
Every 16th integer between 2^56 + 16 and 2^57 can be represented exactly.
Every 32nd integer between 2^57 + 32 and 2^58 can be represented exactly.
Every 64th integer between 2^58 + 64 and 2^59 can be represented exactly.
Every 128th integer between 2^59 + 128 and 2^60 can be represented exactly.
Every 256th integer between 2^60 + 256 and 2^61 can be represented exactly.
Every 512th integer between 2^61 + 512 and 2^62 can be represented exactly.
...
Integers that are not exactly representable are rounded to the nearest representable integer, so the worst case rounding is 1/2 the spacing between representable integers.
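You can observe this spacing directly in JavaScript (all the values involved are integer-valued doubles):
console.log(2 ** 53 === 2 ** 53 + 1); // true - 2^53 + 1 rounds back to 2^53
console.log(2 ** 53 + 2);             // 9007199254740994 - every 2nd integer is exact
console.log(2 ** 54 + 2 === 2 ** 54); // true - the spacing has grown to 4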
The precision quoted from Peter R's link to the MSDN ref is probably a good rule of thumb, but of course reality is more complicated.
The fact that the "point" in "floating point" is a binary point and not decimal point has a way of defeating our intuitions. The classic example is 0.1, which needs a precision of only one digit in decimal but isn't representable exactly in binary at all.
If you have a weekend to kill, have a look at What Every Computer Scientist Should Know About Floating-Point Arithmetic. You'll probably be particularly interested in the sections on Precision and Binary to Decimal Conversion.
First off, IEEE 754-1985 has no 16-bit floats; half precision (binary16) was added in IEEE 754-2008, with a 5-bit exponent and a 10-bit fraction. IEEE 754 uses a dedicated sign bit, so the positive and negative ranges are the same. Also, the fraction has an implied 1 in front, so you get an extra bit.
If you want accuracy to the ones place, as in being able to represent each integer, the answer is fairly simple: the exponent shifts the binary point to the right end of the fraction. So, a 10-bit fraction gets you ±2^11.
If you want one bit after the point, you give up one bit before it, so you have ±2^10.
Single precision has a 23-bit fraction, so you'd have ±2^24 integers.
How many bits of precision you need after the decimal point depends entirely on the calculations you're doing, and how many you're doing.
2^10 = 1,024
2^11 = 2,048
2^23 = 8,388,608
2^24 = 16,777,216
2^53 = 9,007,199,254,740,992 (double precision)
2^113 = 10,384,593,717,069,655,257,060,992,658,440,192 (quad precision)
See also
Double-precision
Half-precision
See IEEE 754-1985: note the (1 + fraction) form of the significand. As bendin points out, using binary floating point you cannot express simple decimal values such as 0.1 exactly. The implication is that you can introduce rounding errors by doing simple additions many, many times, or by calling things like truncation. If you are interested in any sort of decimal precision, the only way to achieve it is to use fixed-point decimal, which is basically a scaled integer.
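As a small sketch of that accumulation effect (plain JavaScript; the exact digits printed may vary in the last places, but the drift is real):
var s = 0;
for (var i = 0; i < 10000; i++) {
    s += 0.1;                  // each addition rounds to the nearest double
}
console.log(s);                // something like 1000.0000000001588 - not 1000
console.log(s === 1000);       // false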
If I understand your question correctly, it depends on your language.
For C#, check out the MSDN ref: float has 7 digits of precision and double has 15-16 digits of precision.
It took me quite a while to figure out that when using doubles in Java, I wasn't losing significant precision in calculations. Floating point actually has a very good ability to represent numbers to quite reasonable precision. The precision I was losing was lost immediately, upon converting the decimal numbers typed by users to the binary floating point representation that is natively supported. I've recently started converting all my numbers to BigDecimal. BigDecimal is much more work to deal with in the code than floats or doubles, since it's not one of the primitive types. But on the other hand, I'll be able to exactly represent the numbers that users type in.

Floating-point error mess

I have been trying to figure this floating-point problem out in javascript.
This is an example of what I want to do:
var x1 = 0;
for (var i = 0; i < 10; i++)
{
    x1 += 0.2;
}
However in this form I get a rounding error: the running total goes 0.2 -> 0.4 -> 0.6000000000000001, and so on.
I have tried parseFloat, toFixed and Math.round as suggested in other threads, but none of them have worked for me. So, is there anyone who can make this work? I feel that I have run out of options.
You can almost always ignore the floating point "errors" while you're performing calculations - they won't make any difference to the end result unless you really care about the 17th significant digit or so.
You normally only need to worry about rounding when you display those values, for which .toFixed(1) would do perfectly well.
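For example, a quick sketch with the loop from the question (the exact last digits may vary, but the idea holds):
var x1 = 0;
for (var i = 0; i < 10; i++) {
    x1 += 0.2;
}
console.log(x1);               // something like 1.9999999999999998
console.log(x1.toFixed(1));    // "2.0" - rounded only for display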
Whatever happens, you simply cannot coerce the number 0.6 into exactly that value. The closest IEEE 754 double precision value is exactly 0.59999999999999997779553950749686919152736663818359375, which, when displayed within typical precision limits in JS, shows as 0.5999999999999999778.
Indeed, JS can't even tell that 0.5999999999999999778 !== (e.g.) 0.5999999999999999300, since their binary representations are the same.
To better understand how the rounding errors accumulate, and to get more insight into what is happening at a lower level, here is a small explanation:
I will assume that the IEEE 754 double precision standard is used by the underlying software/hardware, with the default rounding mode (round to nearest, ties to even).
1/5 can be written in base 2 only with a pattern repeating infinitely:
0.00110011001100110011001100110011001100110011001100110011...
But in floating point, the significand - starting at the most significant 1 bit - has to be rounded to a finite number of bits (53).
So there is a small rounding error when representing 0.2 in binary:
0.0011001100110011001100110011001100110011001100110011010
Back in decimal representation, this rounding error corresponds to a small excess of 0.000000000000000011102230246251565404236316680908203125 above 1/5.
The first addition is then exact, because 0.2+0.2 is like 2*0.2 and thus does not introduce any additional error; it just shifts the fraction point:
0.0011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
0.0110011001100110011001100110011001100110011001100110100
But of course, the excess above 2/5 is doubled: 0.00000000000000002220446049250313080847263336181640625
The third operation 0.2+0.2+0.2 will result in this binary number
0.011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
0.1001100110011001100110011001100110011001100110011001110
But unfortunately, it requires 54 bits of significand (the span between the leading 1 and the trailing 1), so another rounding error is necessary to represent the result as a double:
0.10011001100110011001100110011001100110011001100110100
Notice that the number was rounded upward, because by default floats are rounded to the nearest even in case of a perfect tie. We already had an error by excess, so bad luck: the successive errors accumulated rather than cancelling...
So the excess above 3/5 is now 0.000000000000000088817841970012523233890533447265625
You could reduce this accumulation of errors a bit by using
x1 = i / 5.0
Since 5 is represented exactly as a float (101.0 in binary; 3 significand bits are enough), and since that is also the case for i (up to 2^53), there is a single rounding error when performing the division, and IEEE 754 then guarantees that you get the nearest possible representation.
For example 3/5.0 is represented as:
0.10011001100110011001100110011001100110011001100110011
Back in decimal, this value falls 0.00000000000000002220446049250313080847263336181640625 under 3/5.
Note that both errors are very tiny, but in the second case, 3/5.0, the error is four times smaller in magnitude than with 0.2+0.2+0.2.
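A quick sketch comparing the two approaches (standard JavaScript, double precision):
var sum = 0;
for (var i = 1; i <= 3; i++) {
    sum += 0.2;                // three additions, each one rounded
}
console.log(sum);              // 0.6000000000000001
console.log(3 / 5);            // 0.6 - a single correctly rounded division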
Depending on what you're doing, you may want to do fixed-point arithmetic instead of floating point. For example, if you are doing financial calculations in dollars with amounts that are always multiples of $0.01, you can switch to using cents internally, and then convert to (and from) dollars only when displaying values to the user (or reading input from the user). For more complicated scenarios, you can use a fixed-point arithmetic library.
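A minimal sketch of the cents idea (the helper names are just illustrative):
function toCents(dollars) { return Math.round(dollars * 100); }
function toDollars(cents) { return (cents / 100).toFixed(2); }

var total = toCents(0.10) + toCents(0.20); // 30 - exact integer arithmetic
console.log(toDollars(total));             // "0.30"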

Unexpected value on multiplication in javascript [duplicate]

There is some behavior I can't understand. Please look at this code:
<script type="text/javascript">
function math(x)
{
    var y = x * 10;
    alert(y);
}
</script>
<input type="button" onclick="math(0.011)">
What should be alerted after I click the button? I expected 0.11, but instead it alerts
0.10999999999999999
Please explain this behavior. Thanks in advance.
Ok. Let me try to explain it.
The basic thing to remember with floating point numbers is this: They occupy a limited amount of bits and try to represent the original number using base-2 arithmetic.
As you know, in base-2 arithmetic, integers are represented by the powers of 2 that they contain. Thus, 6 would be represented as 4 + 2, i.e. 110 in binary.
In order to understand how fractional numbers are represented, you have to think about how we represent fractional numbers in our decimal system. The fractional part of numbers (for example 0.11) is represented as multiples of inverse powers of 10 (since the base is 10). Thus 0.11 is actually 1/10 + 1/100. As you can appreciate, this is not powerful enough to represent all fractional numbers in a limited number of digits. For example, 1/3 would be 0.333333.... in a never ending fashion. If we had only 32 digits of space to write the number down, we would end up having only an approximation to the original number, 0.33333333333333333333333333333333. This number, for example, would give 0.99999999999999999999999999999999 if it was multiplied by 3 and not 1 as you would have expected.
The situation is similar in base-2. Each fractional number would be represented as multiples of inverse powers of 2. Thus 0.75 (in decimal) (ie 3/4) would be represented as 1/2 + 1/4, which would mean 0.11 (in base-2). Just as base 10 is not capable enough to represent every fractional number in a finite manner, base-2 cannot represent all fractional numbers given a limited amount of space.
Now, try to represent 0.11 in base-2: you start with 11/100 and look for the largest inverse power of 2 that fits into it. 1/2 doesn't work, neither does 1/4, nor 1/8. 1/16 fits, so you mark a 1 in the 4th place after the point and subtract 1/16 from 11/100, leaving 19/400. The next power of 2 that fits is 1/32, so mark the 5th place and subtract 1/32 from 19/400, leaving 13/800. The next is 1/64, leaving 1/1600, so the next mark is all the way down at 1/2048, and so on. So far we have 0.00011100001, but it goes on and on, and there is always a fraction remaining. I didn't carry the whole calculation through, but even after 32 binary digits after the point you will still have some fraction left (and that assumes all 32 bits of space are spent on the fractional part, which they are not). So I am sure you can appreciate that the resulting number may differ from the actual value by some small amount.
In your case, the difference is 0.00000000000000001 which is 1/100000000000000000 = 1/10^17 and I am sure that you can see why you might have that.
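The walkthrough above can be mechanized; here is a small sketch that generates the binary digits of any fraction by repeated doubling (exact integer arithmetic throughout; the function name is illustrative):
function binaryDigits(num, den, count) {
    var bits = "";
    for (var i = 0; i < count; i++) {
        num *= 2;                              // shift one binary place left
        if (num >= den) { bits += "1"; num -= den; }
        else { bits += "0"; }
    }
    return bits;
}
console.log(binaryDigits(11, 100, 11)); // "00011100001" - matches the digits above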
This is because you are dealing with floating point, and this is the expected behavior of floating point math.
What you need to do is format that number.
See this Java explanation, which also applies here, if you want to know why this is happening.
In JavaScript all numbers are represented as 64-bit floats, so you will run into this sort of thing often.
The quick overview of that article is that floating point tries to represent a range of values larger than would fit in 64 bits, so there is going to be some imprecise representation, and this is what you are seeing.
With a floating point number you get a representation of the number you tried to encode; mostly it is a number that is very close to the original. More information on encoding/storing floating point numbers can be found here.
Note:
If you display the value of x it still shows 0.011, but not because JavaScript is deferring a type decision - x is already a 64-bit float, and 0.011 is simply the shortest decimal string that maps back to that exact float. After the multiplication by 10, the rounding error becomes visible in the shortest printed representation.
You can try to fix the number of decimals with this:
// fl is a float with some number of decimals
// d is how many decimals you want to keep
function dec(fl, d) {
    var p = Math.pow(10, d);
    return Math.round(fl * p) / p;
}
Ex:
var n = 0.0012345;
console.log(dec(n,6)); // 0.001235
console.log(dec(n,5)); // 0.00123
console.log(dec(n,4)); // 0.0012
console.log(dec(n,3)); // 0.001
It works by first multiplying the float by 10^d (1000 for three decimals, 100 for two decimals), rounding that, and then dividing it back to the original scale. Math.pow(10, d) computes 10^d (so d = 3 gives us 1000).
In your case, use alert(dec(y, 2)); and it should work.

