Is this a valid way to truncate a number? - javascript

I found this code in a SO answer as a way to truncate a number into an integer in Javascript:
var num = -20.536;
var result = num | 0;
//result = -20
Is this a valid way to truncate a number in Javascript, or is it some kind of hack? Why does it work only with numbers up to 2147483647?

That method works by implicitly converting the number to a 32-bit integer, as the bitwise operators use 32-bit integers in their calculations.
The drawbacks of that method are:
The desired operation is hidden as an implicit effect of the operator, so it's not easy to see what the intention of the code is.
It can only handle integers within the range of a 32-bit number.
For any regular case you should use the Math.floor or Math.ceil methods instead: they clearly show what the intention of the code is, and they handle any number within the integer precision range of a double, i.e. integers up to 53 bits (±2^53):
var num = 20.536;
var result = Math.floor(num); // 20
var num = -20.536;
var result = Math.ceil(num); // -20
There is no round-towards-zero method in older JavaScript, so to do that with floor/ceil you would need to check the sign before rounding:
var result = num < 0 ? Math.ceil(num) : Math.floor(num);
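Update: ES2015 added Math.trunc, which rounds toward zero directly and, unlike | 0, is not limited to 32 bits:

```javascript
// Math.trunc rounds toward zero for both signs
console.log(Math.trunc(-20.536)); // -20
console.log(Math.trunc(20.536));  // 20

// Unlike | 0, it works beyond the 32-bit range
console.log(Math.trunc(3000000000.5)); // 3000000000
console.log(3000000000.5 | 0);         // -1294967296 (32-bit wraparound)
```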

Use Javascript's parseInt like so:
var num = -20.536;
var num2int = parseInt(num, 10);
console.log(num2int); // -20
Tada! num2int is now a number with the value -20. (Note that parseInt first converts its argument to a string, then parses the leading integer part of it.)

If you use parseInt you can go from -2^53 to +2^53:
parseInt(-20.536) // -20
parseInt(9007199254740992.1234) // 9007199254740992
Why +/- 2^53? This is because JavaScript uses a 64-bit representation for floating point numbers, with a 52-bit mantissa (plus an implicit leading bit). Hence all integer values up to 2^53 can be represented exactly. Beyond this, whole numbers are approximated.
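These limits are exposed as constants, and the boundary is easy to observe; a quick sketch you can run in any modern engine:

```javascript
console.log(Number.MAX_SAFE_INTEGER);       // 9007199254740991, i.e. 2^53 - 1
console.log(Number.isSafeInteger(2 ** 53)); // false

// Beyond 2^53, adjacent integers can no longer be distinguished
console.log(9007199254740992 + 1 === 9007199254740992); // true
```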

How to convert a decimal (base 10) to 32-bit unsigned integer?
Example:
If n = 9 (base 10), how to convert it to something like: 00000000000000000000000000001001 (base 2)?
Let's be clear that when you're talking about number bases, you're talking about textual representations (which code will see as strings). Numbers don't have number bases, they're just numbers (but more on this below). A number base is a way of representing a number with a series of digits (text). So n = 9 isn't base 10 or base 36 or base 2, it's just a number. (The number literal is in base 10, but the resulting number has no concept of that.)
You have a couple of options:
Built in methods
The number type's toString accepts a radix (base) to use; valid values are 2 through 36. And the string padStart method lets you pad the start of a string so that it reaches the desired number of characters, specifying the padding character. So:
const n = 9;
const binaryText = n.toString(2).padStart(32, "0");
console.log(binaryText);
If your starting point is text (e.g., "9" rather than 9), you'd parse that first. My answer here has a full rundown of your number parsing options, but for instance:
const decimalText = "9";
const binaryText = (+decimalText).toString(2).padStart(32, "0");
console.log(binaryText);
Bit manipulation
"Numbers don't have number bases, they're just numbers" is true in the abstract, but at the bits-in-memory level, the bits are naturally assigned meaning. In fact, JavaScript's number type is an implementation of IEEE-754 double-precision binary floating point.
That wouldn't help us, except that JavaScript's bitwise & and | operators are defined in terms of 32-bit binary integers (even though numbers aren't 32-bit binary integers, they're 64-bit floating point). That means we could also implement the above by testing bits using &:
const n = 9;
let binaryText = "";
for (let i = 0; i < 32; i++) {
  // Test bit i; the unsigned shift keeps this correct for the sign bit too
  binaryText = ((n >>> i) & 1 ? "1" : "0") + binaryText;
}
console.log(binaryText);
Here's a related exercise (a common LeetCode problem) that flips all 32 bits of a number:
function flippingBits(n) {
  n = n.toString(2).padStart(32, "0");
  n = n.split("");
  for (let i = 0; i < 32; i++) {
    n[i] = n[i] === "1" ? "0" : "1";
  }
  n = n.join("");
  return parseInt(n, 2);
}
console.log(flippingBits(9)); // 4294967286
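The same 32-bit flip can be done without strings: bitwise NOT inverts all 32 bits, and an unsigned right shift by 0 reinterprets the signed result as an unsigned value:

```javascript
function flippingBitsFast(n) {
  // ~n inverts all 32 bits; >>> 0 reinterprets the signed result as unsigned
  return ~n >>> 0;
}
console.log(flippingBitsFast(9)); // 4294967286
```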

Does the number -0 exists in Javascript?

I was doing some money calculations and I got the number -0. Surely this number -0 does not exist, since 0 has no sign? Why is this happening? I know the original price has more digits in my database, but this behaviour is weird anyway.
I'm using this math expression: I take the item price, subtract the discount, then round the result:
(19.99923-20).toFixed(2)
And I get "-0.00"!? This is ugly to display. I tried using Number() to make it a "real number", but
Number((19.99923-20).toFixed(1))
still appears as "-0".
What's wrong with JavaScript? There is no number -0; it should be just "0".
JavaScript keeps the sign of negative floating-point values close to 0: both Math.round() and toFixed() can give you a -0. I solved that with a quick fix, which consists of checking whether the number falls in the range between -0.005 and 0 (values that would round to -0.00). I considered 2 decimal places since your example works with money, so a difference of 1 cent is relevant:
var num = 19.99923 - 20;
if (num > -0.005 && num < 0) {
  num = 0; // we set all -0 cases to 0
} else {
  num = num * 100;
  num = Math.round(num);
  num = num / 100;
  /* or
     num = num.toFixed(2);
     but that way we convert the number to a string */
}
Hope it helps.
toFixed converts a number to a string, not a number, so the -0.00 you are seeing is a string. It's the result of converting
19.99923-20 // which is about
-0.0007699999999992713 // in the internal representation
to a string using the toFixed method, which per the ECMAScript standard initialises the result to "-" for negative numbers and then converts the absolute (positive) value of the number.
Converting the string "-0.00" back to a number with either parseFloat("-0.00") or Number("-0.00") gives you a zero that displays as "0". (JavaScript stores numbers using the IEEE 754 standard for double-precision floats, which does have a negative zero value, and that is in fact what you get back; but since String(-0) is "0", that's not the problem here.)
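If you ever need to detect negative zero explicitly, Object.is distinguishes it while === does not; a small sketch:

```javascript
console.log(-0 === 0);         // true: strict equality can't tell them apart
console.log(Object.is(-0, 0)); // false
console.log(1 / -0);           // -Infinity (the classic way to detect -0)

// Normalizing a possible -0 before display
const normalize = x => x === 0 ? 0 : x;
console.log(Object.is(normalize(-0), -0)); // false: it is +0 now
```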
Looking at how toFixed works suggests the only problem is with a "-0.00" result, which can be checked using string comparison:
var number = 19.99923-20;
var str = number.toFixed(2);
if( str == "-0.00")
str = "0.00";
Alternatively you could consider using a conversion routine which never returns a negatively signed zero string such as:
function convertFixed(number, digits) {
  if (isNaN(number))
    return "NaN";
  var neg = number < 0;
  if (neg)
    number = -number;
  var string = number.toFixed(digits);
  if (neg && Number(string) === 0) // negative zero result
    neg = false;
  return neg ? "-" + string : string;
}

Converting a number into a string in JavaScript without dropping trailing zeros

I tried to convert a number to a string in JavaScript using toString(), but it drops insignificant trailing zeros from the number. For example:
var n1 = 250.00
var n2 = 599.0
var n3 = 056.0
n1.toString() // yields "250"
n2.toString() // yields "599"
n3.toString() // yields "56"
but I don't want to drop these insignificant zeros ("250.00"). Could you please provide any suggestions? Thank you for the help.
The number doesn't know how many trailing zeros there were, because they are not stored. In math, 250, 250.00 and 250.0000000000000 are all the same number, and they are all represented the same way in memory.
So in short, there is no way to do what you want. What you can do is format all numbers in a specific way. See Formatting a number with exactly two decimals in JavaScript.
As far as I know, you can't store a number with trailing zeros, but you can create a string with trailing zeros by using toFixed:
var n1 = 250;
var floatedN1 = n1.toFixed(2); //type 'string' value '250.00'
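toLocaleString offers similar control via minimumFractionDigits, if you also want locale-aware formatting (assuming an en-US locale for the grouping shown here):

```javascript
const n = 1250.5;
console.log(n.toFixed(2)); // "1250.50"
console.log(n.toLocaleString("en-US", { minimumFractionDigits: 2 })); // "1,250.50"
```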

Javascript precision while dividing

Is there a way to determine whether dividing one number by another will result in a whole number in JavaScript? For example, 18.4 / 0.002 gives us 9200, but 18.4 / 0.1 gives us 183.99999999999997. The problem is that either operand may be any float (like 0.1, 0.01, 1, 10, ...), which rules out the standard modulo operator or repeated subtraction: floating-point precision issues mean we will sometimes get non-whole results for quotients that should be whole, and whole results for ones that shouldn't be.
One hacky way would be
Convert both numbers to strings with toString()
Count the precision points (N) by stripping off the characters before the . (including the .) and taking the length of the remaining part
Multiply with 10^N to make them integers
Do modulo and get the result
Updated Demo: http://jsfiddle.net/9HLxe/1/
function isDivisible(u, d) {
  // Count decimal places; String(x).split(".")[1] is undefined for integers
  var decimals = function (x) { return (String(x).split(".")[1] || "").length; };
  var numD = Math.max(decimals(u), decimals(d));
  u = Math.round(u * Math.pow(10, numD));
  d = Math.round(d * Math.pow(10, numD));
  return (u % d) === 0;
}
I don't think you can do that with JavaScript's double-precision floating point numbers, not reliably across the entire range. Maybe within some constraints you could (although precision errors crop up in all sorts of -- to me -- unexpected locations).
The only way I see is to use any of the several "big decimal" libraries for JavaScript, that don't use Number at all. They're slower, but...
I assume that you want the remainder to be zero when you perform the division.
Check the precision of the divisor, and multiply both divisor and dividend by the matching power of 10.
For example:
if you want to check 2.14 / 1.245, multiply both dividend and divisor by 1000 (as 1.245 has 3 digits of precision); now you have integers like 2140 / 1245 to perform the modulo on.
Divide the first number by the second one and check whether the result is an integer? Almost: when you check that the result is an integer, you need to allow for a rounding threshold.
In javascript, 3.39/1.13 is slightly more than 3.
Example :
/**
 * Returns true iff a is an integer multiple of b
 */
function isIntegerMultiple(a, b, precision) {
  if (precision === undefined) {
    precision = 10;
  }
  var quotient = a / b;
  return Math.abs(quotient - Math.round(quotient)) < Math.pow(10, -precision);
}
console.log(isIntegerMultiple(2, 1)); // true
console.log(isIntegerMultiple(2.4, 1.2)); // true
console.log(isIntegerMultiple(3.39, 1.13)); // true
console.log(isIntegerMultiple(3.39, 1.13, 20)); // false
console.log(isIntegerMultiple(3, 2)); // false
Have a look at this for more details on floating point rounding issues: Is floating point math broken?
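A variant of the same idea uses Number.EPSILON (the gap between 1 and the next representable double) as a relative tolerance; the factor of 4 below is an arbitrary safety margin, not anything canonical:

```javascript
function isWholeQuotient(a, b) {
  const q = a / b;
  // Accept q as whole if it is within a few ULPs of the nearest integer
  return Math.abs(q - Math.round(q)) <= Number.EPSILON * Math.abs(q) * 4;
}
console.log(isWholeQuotient(3.39, 1.13)); // true (despite the quotient being 3.0000000000000004)
console.log(isWholeQuotient(3, 2));       // false
```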

Does Javascript handle integer overflow and underflow? If yes, how?

We know that Java does not handle underflows and overflows, but how does Javascript handle these for integers?
Does it go back to a minimum/maximum? If yes, which minimum/maximum?
I need to split a string and compute a hash value based on its characters.
In a simple test, when I try this:
var max = Number.MAX_VALUE;
var x = max + 10;
var min = Number.MIN_VALUE;
var y = min / 10;
I find that x and max have the same value (in Chrome, IE and Firefox) so it appears that some overflows are just pegged to the max value. And, y gets pegged to 0 so some underflows seem to go to zero.
Ahhh, but it is not quite that simple. Not all overflows go to Number.MAX_VALUE and not all underflows go to Number.MIN_VALUE. If you do this:
var max = Number.MAX_VALUE;
var z = max * 2;
Then, z will be Infinity.
It turns out that it depends upon how far you overflow/underflow. If you go too far, you will get Infinity instead. This is because of the use of IEEE 754 round-to-nearest mode, where the max value can be considered nearer than infinity. See Adding to Number.MAX_VALUE for more detail. Per that answer, values of 1.7976931348623158 × 10^308 or greater round to infinity. Values between Number.MAX_VALUE and that threshold round to Number.MAX_VALUE.
To make things even more complicated, there is also something called gradual underflow, which JavaScript supports. This is where the mantissa of the floating-point value has leading zeroes in it. Gradual underflow allows floating point to represent some smaller numbers that it could not otherwise represent, but they are represented at reduced precision.
You can see exactly where the limits are:
>>> Number.MAX_VALUE + 9.979201e291
1.7976931348623157e+308
>>> Number.MAX_VALUE + 9.979202e291
Infinity
Here's a runnable snippet you can try in any browser:
var max = Number.MAX_VALUE;
var x = max + 10;
var min = Number.MIN_VALUE;
var y = min / 10;
var z = max * 2;
document.getElementById("max").innerHTML = max;
document.getElementById("max10").innerHTML = x;
document.getElementById("min").innerHTML = min;
document.getElementById("min10").innerHTML = y;
document.getElementById("times2").innerHTML = z;
body {
font-family: "Courier New";
white-space:nowrap;
}
Number.MAX_VALUE = <span id="max"></span><br>
Number.MAX_VALUE + 10 = <span id="max10"></span><br>
<br>
Number.MIN_VALUE = <span id="min"></span><br>
Number.MIN_VALUE / 10 = <span id="min10"></span><br>
<br>
Number.MAX_VALUE * 2 = <span id="times2"></span><br>
The maximum and minimum exactly-representable integers are +/- 9007199254740992 (that is, ±2^53).
Try these Number type properties:
alert([Number.MAX_VALUE, Number.MIN_VALUE]);
From the ECMAScript 2020 language specification, section "The Number Type":
Note that all the positive and negative mathematical integers whose magnitude is no
greater than 2^53 are representable in the Number type (indeed, the
mathematical integer 0 has two representations, +0 and −0).
Test:
var x = 9007199254740992;
var y = -x;
x == x + 1; // true !
y == y - 1; // also true !
Number
In JavaScript, the number type is a 64-bit IEEE 754 floating-point number, not an integer, so it doesn't follow the integer overflow/underflow patterns common in other languages.
The floating-point format uses 53 bits for the significand, so it can represent numbers in the range Number.MIN_SAFE_INTEGER to Number.MAX_SAFE_INTEGER (-2^53+1 to 2^53-1) without floating-point error. For numbers out of this range, the result is rounded to the nearest representable number, or becomes Infinity if it is too large.
Bit-wise operator
Bitwise operators treat their operands as 32-bit integers, and the usual integer overflow can happen just as in other languages: only the last 32 bits are kept after the calculation. For example, 3 << 31 results in -2147483648.
>>> treats its operand as an unsigned 32-bit integer; all the other bitwise operators treat operands as signed 32-bit integers. If you want to convert a signed integer to unsigned, you can write value >>> 0 to do the trick. To convert back, use value | 0.
Shift counts are taken modulo 32, so shifting an integer by 33 actually shifts it by 1.
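Concretely:

```javascript
console.log(3 << 31);        // -2147483648: only the low 32 bits are kept
console.log(-1 >>> 0);       // 4294967295: signed -1 reinterpreted as unsigned
console.log(4294967295 | 0); // -1: and back to signed
console.log(1 << 33);        // 2: the shift count is taken mod 32, so << 33 acts like << 1
```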
BigInt
Just like Java's java.math.BigInteger, BigInt supports arbitrarily large integers (still bounded by your memory limit, of course), so integer overflow cannot happen here.
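A brief BigInt illustration:

```javascript
const big = 2n ** 64n; // far beyond Number's safe integer range
console.log(big + 1n);                             // 18446744073709551617n, exact
console.log(BigInt(Number.MAX_SAFE_INTEGER) + 2n); // 9007199254740993n, also exact
```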
TypedArray
For most TypedArray types, when an integer out of the supported range is assigned, it gets truncated the way other languages truncate when converting integers: by keeping the least significant bits. For example, new Int8Array([1000])[0] gives -24.
Uint8ClampedArray is a bit different from the other TypedArrays: it supports integers in the range 0 to 255, and when numbers out of that range are used, 0 or 255 is stored instead.
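For instance:

```javascript
console.log(new Int8Array([1000])[0]);         // -24: 1000 mod 256 = 232, reinterpreted as signed
console.log(new Uint8Array([1000])[0]);        // 232: same truncation, unsigned
console.log(new Uint8ClampedArray([1000])[0]); // 255: clamped to the top of the range
console.log(new Uint8ClampedArray([-5])[0]);   // 0: clamped to the bottom
```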
asm.js
The same rules as for the bitwise operators apply here: values are truncated back to 32 bits, just as | 0 or >>> 0 do.
