JavaScript: Rounding Down in .5 Cases

I am in a situation where a JavaScript function produces numbers such as 2.5. I want these point-five numbers rounded down to 2, rather than the result of Math.round, which always rounds such cases up (ignoring the round-half-to-even rule), producing 3. Is there any more elegant way of doing this than subtracting 0.01 from the number before rounding? Thanks.

Just negate the input and the output to Math.round:
var result = -Math.round(-num);
In more detail: JavaScript's Math.round has the unusual property that it rounds halfway cases towards positive infinity, regardless of whether they're positive or negative. So for example 2.5 will round to 3.0, but -2.5 will round to -2.0. This is an uncommon rounding mode: it's much more common to round halfway cases either away from zero (so -2.5 would round to -3.0), or to the nearest even integer.
However, it does have the nice property that it's trivial to adapt it to round halfway cases towards negative infinity instead: if that's what you want, then all you have to do is negate both the input and the output:
Example:
function RoundHalfDown(num) {
    return -Math.round(-num);
}
document.write("1.5 rounds to ", RoundHalfDown(1.5), "<br>");
document.write("2.5 rounds to ", RoundHalfDown(2.5), "<br>");
document.write("2.4 rounds to ", RoundHalfDown(2.4), "<br>");
document.write("2.6 rounds to ", RoundHalfDown(2.6), "<br>");
document.write("-2.5 rounds to ", RoundHalfDown(-2.5), "<br>");

Or do this:
var result = (num - Math.floor(num)) > 0.5 ? Math.round(num) : Math.floor(num);

Another approach is to work with whole numbers (instead of 0.2 use 20, instead of 0.02 use 2, etc.) and keep a floatingPoints variable that tells you how far to shift the result back (in your case it's 2). The final value is then Number / (10 ^ floatingPoints).
This approach is widespread among Forex companies.
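A minimal sketch of that fixed-point idea (the function names are illustrative, not from any particular library):
function toFixedPoint(value, floatingPoints) {
    // Scale to a whole number, e.g. 0.02 -> 2 when floatingPoints is 2
    return Math.round(value * Math.pow(10, floatingPoints));
}
function fromFixedPoint(intValue, floatingPoints) {
    // Shift back: Number / (10 ^ floatingPoints)
    return intValue / Math.pow(10, floatingPoints);
}
var floatingPoints = 2;
var sum = toFixedPoint(0.2, floatingPoints) + toFixedPoint(0.1, floatingPoints); // 20 + 10 = 30
console.log(fromFixedPoint(sum, floatingPoints)); // 0.3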

You can also use this function to round with no decimal part and a round-.5-down rule (positive numbers only):
function customRound(number) {
    var decimalPart = number % 1;
    if (decimalPart === 0.5)
        return number - decimalPart;
    else
        return Math.round(number);
}
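For example:
console.log(customRound(2.5)); // 2
console.log(customRound(2.4)); // 2
console.log(customRound(2.6)); // 3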

Related

Why is the rounded value different?

I know that if I round -1.5, it should be -2.
So I tried it in C#, and it returns -2 correctly:
Console.WriteLine(Math.Round(-1.5, 0));
I also tried it in Excel, and it also returns -2:
=ROUND(-1.5,0)
But when I do it in JavaScript, it returns -1:
Math.round(-1.5)
Why are these values different? And how can I get -2 instead of -1 in JavaScript?
-1 * Math.round(Math.abs(-1.5));
Your value is negative; that's why you get -1. Just take the absolute value, round it, and multiply it by -1 to get -2.
Yes, Math.round in JavaScript works as you said. One solution is to convert your negative number to positive, then use Math.round. Finally, convert the result back to a negative number.
function myFunction() {
    var num = -1.5;
    document.getElementById("demo").innerHTML = Math.round(num);
    if (num < 0)
        document.getElementById("demo").innerHTML = -1 * Math.round(Math.abs(num));
}
That's just how they made it. It is acknowledged that it differs from most languages.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/round
If the fractional portion of the argument is greater than 0.5, the argument is rounded to the integer with the next higher absolute value. If it is less than 0.5, the argument is rounded to the integer with the lower absolute value. If the fractional portion is exactly 0.5, the argument is rounded to the next integer in the direction of +∞. Note that this differs from many languages' round() functions, which often round this case to the next integer away from zero, instead giving a different result in the case of negative numbers with a fractional part of exactly 0.5.
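If you want -1.5 to round to -2 as in the C# and Excel examples above (halves rounded away from zero), a minimal sketch is to round the magnitude and restore the sign (roundHalfAwayFromZero is an illustrative name, not a built-in):
function roundHalfAwayFromZero(num) {
    // Round the absolute value, then restore the sign
    return num < 0 ? -Math.round(-num) : Math.round(num);
}
console.log(roundHalfAwayFromZero(-1.5)); // -2
console.log(roundHalfAwayFromZero(1.5));  // 2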

JavaScript Math.round: Doesn't round object to 1 decimal

I have objects in an array:
var x = [{a: 2.99, b: 5.11}, {a: 4.99, b: 2.11}];
I want to display 1 decimal with Math.round; when I use Math.round(x[0].a*10)/10; it displays 3, while it works fine if I use a plain number, as in Math.round(2.99*10)/10.
Why is that?
Please try the code below:
x[0].a.toString().match(/^-?\d+(?:\.\d{0,2})?/)[0]
You can limit it to 1 decimal place by changing the regex pattern from {0,2} to {0,1}. (Note that this truncates the string rather than rounding it.)
This might work -- Math.floor() and toFixed(1) are the keys here. Rounding would push the original input to the nearest tenth (2.99 would become 3.0), whereas flooring keeps it below:
function roundToTenth(inputVal) {
    return Math.floor(inputVal * 10) * 0.1;
}
console.log(roundToTenth(2.99).toFixed(1));
Division is generally slower than multiplication -- and Regular Expression matching is definitely going to be slower than multiplication.
All I'm doing in the code above is saying "take the number times 10 and then turn it into a straight integer with no decimals",
so
2.99 * 10 = 29.9, which then becomes 29.
Finally, since I use the operation * 0.1, it becomes a float like 2.9000000000000004, and I use toFixed(1) to strip out all those pesky digits at the end.
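If you actually want rounding to one decimal rather than truncation, the original expression was already behaving correctly -- 2.99 genuinely rounds up to 3.0 at one decimal. A quick sketch of the difference:
console.log(Math.round(2.99 * 10) / 10);              // 3 -- 2.99 rounded to one decimal is 3.0
console.log((Math.round(2.99 * 10) / 10).toFixed(1)); // "3.0" -- formatted with one decimal shown
console.log(Math.floor(2.99 * 10) / 10);              // 2.9 -- floored (truncated), not rounded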

Comparing floating-point to integers in Javascript

So I ran across a small piece of code that looks like this:
Math.random() * 5 | 0
and was confused by what it did.
After some inspecting, it seems like the operation turns the decimal into an integer. Is that right? And so the piece of code is another way of saying "give me a random integer between 0 and 4". Can anyone explain why that is?
1) Math.random() always returns a decimal value less than one, e.g. 0.2131313. From the documentation for random():
Returns a double value with a positive sign, greater than or equal to 0.0 and less than 1.0.
2) Math.random() * 5 will therefore always be less than 5 (maximum value just under 5).
3) The bitwise operator '|' will truncate the decimal values.
Edit: Paul is correct, '|' does more than just truncate.
But in this case Math.random() * 5 | 0 truncates the decimal and returns the integer.
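A quick sketch of the difference between | 0 and the more explicit alternatives (bitwise operators first convert their operand to a 32-bit integer, so this trick only works for values in that range):
console.log(4.7 | 0);          // 4
console.log(Math.floor(4.7));  // 4
console.log(-4.7 | 0);         // -4 (truncates toward zero)
console.log(Math.floor(-4.7)); // -5 (rounds toward -Infinity)
console.log(2147483648.5 | 0); // -2147483648 (wraps around past 32 bits)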

Javascript precision while dividing

Is there a way to determine whether dividing one number by another will result in a whole number in JavaScript? For example, 18.4 / 0.002 gives us 9200, but 18.4 / 0.1 gives us 183.99999999999997. The problem is that both operands may be any float (0.1, 0.01, 1, 10, ...), which makes the standard modulo operator or repeated subtraction unreliable: floating-point precision issues mean we sometimes get non-whole-number results for numbers that should divide evenly, and whole-number results for ones that shouldn't.
One hacky way would be:
Convert both numbers to strings with toString()
Count the decimal digits (N) by stripping off everything up to and including the . and taking the length of what remains
Multiply both by 10^N to make them integers
Do the modulo and check the result
Updated Demo: http://jsfiddle.net/9HLxe/1/
function isDivisible(u, d) {
    function decimals(n) {
        // Digits after the decimal point; 0 for whole numbers
        var parts = n.toString().split(".");
        return parts.length > 1 ? parts[1].length : 0;
    }
    var numD = Math.max(decimals(u), decimals(d));
    u = Math.round(u * Math.pow(10, numD));
    d = Math.round(d * Math.pow(10, numD));
    return (u % d) === 0;
}
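For example, with the numbers from the question:
console.log(isDivisible(18.4, 0.002)); // true (18.4 / 0.002 = 9200)
console.log(isDivisible(18.4, 0.1));   // true (18.4 / 0.1 = 184)
console.log(isDivisible(18.4, 0.3));   // false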
I don't think you can do that with JavaScript's double-precision floating point numbers, not reliably across the entire range. Maybe within some constraints you could (although precision errors crop up in all sorts of -- to me -- unexpected locations).
The only way I see is to use any of the several "big decimal" libraries for JavaScript, that don't use Number at all. They're slower, but...
I assume that you want the remainder to be zero when you perform the division. Check the precision of the divisor and multiply both divisor and dividend by the matching power of 10.
For example, to check 2.14 / 1.245, multiply both dividend and divisor by 1000, since 1.245 has 3 digits of precision. You then have integers, 2140 and 1245, on which to perform the modulo.
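A quick sketch of that check (the Math.round calls guard against the scaling itself introducing float error):
var a = Math.round(2.14 * 1000);  // 2140
var b = Math.round(1.245 * 1000); // 1245
console.log(a % b); // 895, so 2.14 is not an integer multiple of 1.245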
Divide the first number by the second one and check whether the result is an integer? Only, when you check that the result is an integer, you need to specify a rounding threshold. In JavaScript, 3.39 / 1.13 is slightly more than 3.
Example:
/**
 * Returns true iff a is an integer multiple of b
 */
function isIntegerMultiple(a, b, precision) {
    if (precision === undefined) {
        precision = 10;
    }
    var quotient = a / b;
    return Math.abs(quotient - Math.round(quotient)) < Math.pow(10, -precision);
}
console.log(isIntegerMultiple(2, 1)); // true
console.log(isIntegerMultiple(2.4, 1.2)); // true
console.log(isIntegerMultiple(3.39, 1.13)); // true
console.log(isIntegerMultiple(3.39, 1.13, 20)); // false
console.log(isIntegerMultiple(3, 2)); // false
Have a look at this for more details on floating point rounding issues: Is floating point math broken?

Avoiding problems with JavaScript's weird decimal calculations

I just read on MDN that one of the quirks of JS's handling of numbers, due to everything being "double-precision 64-bit format IEEE 754 values", is that when you do something like .2 + .1 you get 0.30000000000000004 (that's what the article says, but I get 0.29999999999999993 in Firefox). Therefore:
(.2 + .1) * 10 == 3
evaluates to false.
This seems like it would be very problematic. So what can be done to avoid bugs due to the imprecise decimal calculations in JS?
I've noticed that if you do 1.2 + 1.1 you get the right answer. So should you just avoid any kind of math that involves values less than 1? Because that seems very impractical. Are there any other dangers to doing math in JS?
Edit:
I understand that many decimal fractions can't be stored as binary, but the way most other languages I've encountered appear to deal with the error (like JS handles numbers greater than 1) seems more intuitive, so I'm not used to this, which is why I want to see how other programmers deal with these calculations.
1.2 + 1.1 may be ok but 0.2 + 0.1 may not be ok.
This is a problem in virtually every language that is in use today. The problem is that 1/10 cannot be accurately represented as a binary fraction just like 1/3 cannot be represented as a decimal fraction.
The workarounds include rounding to only the number of decimal places that you need, and either working with strings, which compare exactly:
(0.2 + 0.1).toFixed(4) === 0.3.toFixed(4) // true
or you can convert it to numbers after that:
+(0.2 + 0.1).toFixed(4) === 0.3 // true
or using Math.round:
Math.round(0.2 * X + 0.1 * X) / X === 0.3 // true
where X is some power of 10 e.g. 100 or 10000 - depending on what precision you need.
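For example, with X = 10000:
Math.round(0.2 * 10000 + 0.1 * 10000) / 10000 === 0.3 // true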
Or you can use cents instead of dollars when counting money:
cents = 1499; // $14.99
That way you only work with integers and you don't have to worry about decimal and binary fractions at all.
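For example, a minimal sketch of keeping money in integer cents (the variable names are illustrative):
var priceA = 1499; // $14.99
var priceB = 999;  // $9.99
var totalCents = priceA + priceB;           // 2498 -- exact integer arithmetic
console.log((totalCents / 100).toFixed(2)); // "24.98" -- convert only for display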
2017 Update
The situation of representing numbers in JavaScript may be a little bit more complicated than it used to be. It used to be the case that we had only one numeric type in JavaScript:
64-bit floating point (the IEEE 754 double precision floating-point number - see: ECMA-262 Edition 5.1, Section 8.5 and ECMA-262 Edition 6.0, Section 6.1.6)
This is no longer the case - not only are there more numeric types in JavaScript today, more are on the way, including a proposal to add arbitrary-precision integers to ECMAScript, and hopefully arbitrary-precision decimals will follow - see this answer for details:
Difference between floats and ints in Javascript?
See also
Another relevant answer with some examples of how to handle the calculations:
Node giving strange output on sum of particular float digits
In situations like these you would typically make use of an epsilon comparison.
Something like (pseudo code):
if (abs(((.2 + .1) * 10) - 3) > epsilon)
where epsilon is something like 0.00000001, or whatever precision you require.
Have a quick read of Comparing floating point numbers.
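A runnable version of that pseudo code (the epsilon value here is an assumption -- pick one that matches the precision you need):
var epsilon = 0.00000001;
if (Math.abs(((0.2 + 0.1) * 10) - 3) > epsilon) {
    console.log("different");
} else {
    console.log("equal within tolerance"); // this branch runs
}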
(Math.floor((0.1 + 0.2) * 1000)) / 1000
This reduces the precision of the floats, but it solves the problem if you are not working with very small values. For example:
0.1 + 0.2 =
0.30000000000000004
After the proposed operation you will get 0.3. But any value from 0.3 up to (but not including) 0.301 will also be collapsed to 0.3.
There are libraries that seek to solve this problem but if you don't want to include one of those (or can't for some reason, like working inside a GTM variable) then you can use this little function I wrote:
Usage:
var a = 194.1193;
var b = 159;
a - b; // returns 35.11930000000001
doDecimalSafeMath(a, '-', b); // returns 35.1193
Here's the function:
function doDecimalSafeMath(a, operation, b, precision) {
    function decimalLength(numStr) {
        var pieces = numStr.toString().split(".");
        if (!pieces[1]) return 0;
        return pieces[1].length;
    }
    // Figure out what we need to multiply by to make everything a whole number
    precision = precision || Math.pow(10, Math.max(decimalLength(a), decimalLength(b)));
    // Round after scaling: the multiplication itself can introduce float error
    a = Math.round(a * precision);
    b = Math.round(b * precision);
    // Figure out which operation to perform.
    var operator;
    switch (typeof operation === 'string' ? operation.toLowerCase() : '') {
        case '-':
            operator = function(a, b) { return a - b; };
            break;
        case '+':
            operator = function(a, b) { return a + b; };
            break;
        case '*':
        case 'x':
            precision = precision * precision;
            operator = function(a, b) { return a * b; };
            break;
        case '÷':
        case '/':
            precision = 1;
            operator = function(a, b) { return a / b; };
            break;
        // Let us pass in a function to perform other operations.
        default:
            operator = operation;
    }
    var result = operator(a, b);
    // Remove our multiplier to put the decimal back.
    return result / precision;
}
Understanding rounding errors in floating point arithmetic is not for the faint-hearted! Basically, calculations are done as though there were infinitely many bits of precision available. The result is then rounded according to rules laid down in the relevant IEEE specifications.
This rounding can throw up some funky answers:
Math.floor(Math.log(1000000000) / Math.LN10) == 8 // true
This is an entire order of magnitude out. That's some rounding error!
For any floating point architecture, there is a number that represents the smallest interval between distinguishable numbers. It is called EPSILON.
It is now part of the ECMAScript standard as Number.EPSILON; in older environments, you can calculate it as follows:
function epsilon() {
    if ("EPSILON" in Number) {
        return Number.EPSILON;
    }
    var eps = 1.0;
    // Halve epsilon until we can no longer distinguish
    // 1 + (eps / 2) from 1
    do {
        eps /= 2.0;
    } while (1.0 + (eps / 2.0) != 1.0);
    return eps;
}
You can then use it, something like this:
function numericallyEquivalent(n, m) {
    var delta = Math.abs(n - m);
    return (delta < epsilon());
}
Or, since rounding errors can accumulate alarmingly, you may want to compare delta / 2 or delta * delta against epsilon, rather than delta itself.
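For example (the second check fails because the multiplication scaled the error up past EPSILON, which is exactly the accumulation problem just mentioned):
console.log(numericallyEquivalent(0.2 + 0.1, 0.3));      // true
console.log(numericallyEquivalent((0.2 + 0.1) * 10, 3)); // false: the scaled-up error exceeds EPSILON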
You need a bit of error control.
Make a little double-comparing method:
function compareDouble(a, b) {
    var epsilon = 0.00000001; // maximum error allowed
    if ((a < b + epsilon) && (a > b - epsilon)) {
        return 0;   // equal within tolerance
    } else if (a < b + epsilon) {
        return -1;  // a is smaller
    } else {
        return 1;   // a is larger
    }
}
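For example:
console.log(compareDouble(0.1 + 0.2, 0.3)); // 0 (equal within tolerance)
console.log(compareDouble(0.1, 0.3));       // -1
console.log(compareDouble(0.5, 0.3));       // 1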
As I found this while working with monetary values, I solved it by converting the values to cents, like so:
result = ((value1 * 100) + (value2 * 100)) / 100;
Working with monetary values, we have only two decimal places; that's why I multiplied and divided by 100.
If you're going to work with more decimal places, you'll have to scale by the corresponding power of 10 instead:
.0 -> 10
.00 -> 100
.000 -> 1000
.0000 -> 10000
...
With this, you'll always avoid working with decimal values.
Convert the decimals into integers by multiplying, then at the end convert the result back by dividing by the same number.
Example in your case:
(0.2 * 100 + 0.1 * 100) / 100 * 10 === 3 // true
