How to automatically set decimal values using JavaScript

I want to split a number into integer and decimal portions after a specific length. For example, if I enter more than 8 digits, the remaining digits should be displayed as decimal values.
input :
4567454857
output:
45674548.57

JavaScript Numbers without a decimal portion (i.e., they look like ints) can easily be converted to decimals through division (JS isn't like other languages where ints can't become floats). However, for your problem, here is a solution using substrings and the ability to convert freely between Number and String.
function truncateToEightDigits(num) {
  if (num > 99999999) { // if num has more than 8 digits
    var str = String(num);
    return Number(str.substr(0, 8) + '.' + str.substr(8, str.length));
  } else {
    return num;
  }
}
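For example, with the sample input from the question:
console.log(truncateToEightDigits(4567454857)); // 45674548.57
console.log(truncateToEightDigits(1234567));    // 1234567 (8 digits or fewer, returned unchanged)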
This also has the added benefit of avoiding weird issues with floating point math you might incur if you tried to do something such as divide by 10 repeatedly.
Side note: I'm not really sure what this has to do with ASP.NET and I'm kinda wondering why you tagged your question with that.

Related

Converting a Two's complement number to its binary representation

I am performing bitwise operations, the result of which is apparently being stored as a two's complement number. When I hover over the variable it's stored in, I see num = -2086528968.
The binary of that number that I want is (10000011101000100001100000111000).
But when I say num.toString(2) I get a completely different binary representation, the raw number's binary instead of the two's complement (-1111100010111011110011111001000).
How do I get the first string back?
Link to a converter: rapidtables.com/convert/number/decimal-to-binary.html
Put in this number: -2086528968
The result is below:
var number = -2086528968;
var bin = (number >>> 0).toString(2)
//10000011101000100001100000111000
console.log(bin)
pedro already answered this, but since this is a hack and not entirely intuitive I'll explain it.
I am performing bitwise operations, the result of which is apparently being stored as a two's complement number. When I hover over the variable it's stored in I see num = -2086528968
No, the result of most bit operations is a 32-bit signed integer. This means that the bit 0x80000000 is interpreted as a sign followed by 31 bits of value.
The weird bit sequence is because of how JS stringifies negative values: something like sign + Math.abs(value).toString(base).
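For example:
console.log((5).toString(2));  // "101"
console.log((-5).toString(2)); // "-101", a sign character plus the magnitude's bits, not two's complement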
How to deal with that? We need to tell JS to not interpret that bit as sign, but as part of the value. But how?
An easy-to-understand solution would be to add 0x100000000 to the negative numbers and therefore get their positive counterparts.
function print(value) {
  if (value < 0) {
    value += 0x100000000;
  }
  console.log(value.toString(2).padStart(32, "0"));
}
print(-2086528968);
Another way would be to convert the lower and the upper bits separately:
function print(value) {
  var signBit = value < 0 ? "1" : "0";
  var valueBits = (value & 0x7FFFFFFF).toString(2);
  console.log(signBit + valueBits.padStart(31, "0"));
}
print(-2086528968);
// or the lower and upper halves of the bits:
function print2(value) {
  var upperHalf = (value >> 16 & 0xFFFF).toString(2);
  var lowerHalf = (value & 0xFFFF).toString(2);
  console.log(upperHalf.padStart(16, "0") + lowerHalf.padStart(16, "0"));
}
print2(-2086528968);
Another way involves the "hack" that pedro uses. Remember how I said that most bit operations return an int32? There is one operation that actually returns an unsigned 32-bit integer, the so-called zero-fill right shift.
So number >>> 0 does not change the bits of the number, but the first bit is no longer interpreted as a sign.
function uint32(value) {
  return value >>> 0;
}
function print(value) {
  console.log(uint32(value).toString(2).padStart(32, "0"));
}
print(-2086528968);
Will I run this shifting code only when the number is negative, or always?
Generally speaking, there is no harm in running nr >>> 0 over positive integers, but be careful not to overdo it.
Technically, JS only supports Numbers, which are double values (64-bit floating point values). Internally the engines also use int32 values where possible, but no uint32 values. So when you convert your negative int32 into a uint32, the engine converts it to a double, and if you follow up with another bit operation, the first thing it does is convert it back.
So it's fine to do this when you need an actual uint32 value, as when printing the bits here, but you should avoid this conversion between operations, just "to fix it".
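As a small sketch of that advice (the constant and function name here are just illustrative, not from the answer above): keep intermediate results as ordinary signed bit-op results and apply >>> 0 once, at the end.
function hashBits(value) {
  var h = value ^ 0x9e3779b9;  // illustrative constant; intermediate results stay signed int32
  h = (h << 5) | (h >>> 27);   // a rotate, still without extra uint32 round-trips
  return h >>> 0;              // convert to an unsigned 32-bit value once, at the end
}
console.log(hashBits(-2086528968).toString(2).padStart(32, "0"));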

Using a float in Javascript in a hash function

I have a hash function like this.
class Hash {
  static rotate (x, b) {
    return (x << b) ^ (x >> (32 - b));
  }
  static pcg (a) {
    let b = a;
    for (let i = 0; i < 3; i++) {
      a = Hash.rotate((a ^ 0xcafebabe) + (b ^ 0xfaceb00c), 23);
      b = Hash.rotate((a ^ 0xdeadbeef) + (b ^ 0x8badf00d), 5);
    }
    return a ^ b;
  }
}
// source Adam Smith: https://groups.google.com/forum/#!msg/proceduralcontent/AuvxuA1xqmE/T8t88r2rfUcJ
I use it like this.
console.log(Hash.pcg(116)); // Output: -191955715
As long as I send an integer in, I get an integer out. Now here comes the problem: if I have a floating point number as input, rounding will happen. Hash.pcg(1.1) and Hash.pcg(1.2) will yield the same result. I want different inputs to yield different results. A possible solution could be to multiply the input so the decimal is not rounded down, but is there a more elegant and flexible solution to this?
Is there a way to convert a floating point number to a unique integer? Each floating point number would result in a different integer number.
Performance is important.
This isn't quite an answer, but I was running out of room to make it a comment. :)
You'll hit a problem with integers outside of the 32-bit range as well as with non-integer values.
JavaScript handles all numbers as 64-bit floating point. This gives you exact integers over the range -9007199254740991 to 9007199254740991 (±(2^53 - 1)), but the bit-wise operators used in your hash algorithm (^, <<, >>) only work in a 32-bit range.
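A quick illustration of that 32-bit truncation:
console.log(Math.pow(2, 53) - 1);       // 9007199254740991, exact as a double
console.log((Math.pow(2, 53) - 1) | 0); // -1, the value is first reduced to a signed 32-bit integer
console.log(1.5 | 0);                   // 1, the fractional part is dropped as well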
Since there are far more non-integer numbers possible than integers, no one-to-one mapping is possible with ordinary numbers. You could work something out with BigInts, but that will likely lead to comparatively much slower performance.
If you're willing to deal with the performance hit, you can use JavaScript buffer functions to get at the actual bits of a floating point number. (I'd say more now about how to do that, but I've got to run!)
Edit... back from dinner...
You can convert JavaScript's standard number type, which is 64-bit floating point, to a BigInt like this:
let dv = new DataView(new ArrayBuffer(8));
dv.setFloat64(0, Math.PI);
console.log(dv.getFloat64(0), dv.getBigInt64(0), dv.getBigInt64(0).toString(16).toUpperCase())
The output from this is:
3.141592653589793 4614256656552045848n "400921FB54442D18"
The first item shows that the number was properly stored as a byte array, the second shows the BigInt created from the same bits, and the last is the same BigInt again, but in hex to better show the floating point data format.
Once you've converted a number like this to a BigInt (which is not the same numeric value, but it is the same string of bits) every possible value of number will be uniquely represented.
The same bit-wise operators you used in your algorithm above will work with BigInts, but without the 32-bit limitation. I'm guessing that for best results you'd want to change the 32 in your code to 64, and use 16-digit (instead of 8-digit) hex constants as hash keys.
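That adaptation could look roughly like this (a sketch only: the function names and the widened hex constants are placeholders I made up, and their hash quality is untested):
const MASK64 = (1n << 64n) - 1n;

function floatToBits(num) {
  const dv = new DataView(new ArrayBuffer(8));
  dv.setFloat64(0, num);
  return dv.getBigUint64(0); // the double's raw 64 bits, as a BigInt
}

function rotate64(x, b) {
  return ((x << b) ^ (x >> (64n - b))) & MASK64;
}

function pcg64(a) {
  let b = a;
  for (let i = 0; i < 3; i++) {
    a = rotate64(((a ^ 0xcafebabecafebaben) + (b ^ 0xfaceb00cfaceb00cn)) & MASK64, 23n);
    b = rotate64(((a ^ 0xdeadbeefdeadbeefn) + (b ^ 0x8badf00d8badf00dn)) & MASK64, 5n);
  }
  return (a ^ b) & MASK64;
}

console.log(pcg64(floatToBits(1.1)));
console.log(pcg64(floatToBits(1.2))); // hashes a different 64-bit pattern than 1.1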

Can someone please explain this JavaScript decimal rounding function in detail?

function Round2DecimalPlaces(l_amt) {
  var l_dblRounded = +(Math.round(l_amt + "e+2") + "e-2");
  return l_dblRounded;
}
Fiddle: http://jsfiddle.net/1jf3ut3v/
I'm mainly confused about how Math.round works with "e+2" and how adding the "+" sign to the beginning of Math.round makes any difference at all.
I understand the basic of the function; the decimal gets moved n places to the right (as specified by e+2), rounded with this new integer, and then moved back. However, I'm not sure what 'e' is doing in this situation.
eX is a valid part of a Number literal and means *10^X, just like in scientific notation:
> 1e1 // 1 * Math.pow(10, 1)
10
> 1e2 // 1 * Math.pow(10, 2)
100
And because of that, converting a string containing such a character sequence results in a valid number:
> var x = 2;
> Number(x + "e1")
20
> Number(x + "e2")
200
For more information, have a look at the MDN JavaScript Guide.
But of course the way this notation is used in your example is horrible. Converting values back and forth to numbers and strings is already bad enough, but it also makes it more difficult to understand.
Simply multiply or divide by a power of 10 instead.
The single plus operator coerces the string back into a number. (See also: Single plus operator in javascript)
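A plainer version of the same rounding, along the lines suggested above (still subject to the usual floating point edge cases):
function round2(n) {
  return Math.round(n * 100) / 100;
}
console.log(round2(3.14159)); // 3.14
console.log(round2(2.005));   // 2, not 2.01, because 2.005 * 100 is actually 200.49999999999997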

Is there a way to distinguish integers from very near decimals in Javascript?

Look at these evaluations (an actual dump from Node 0.10.33):
> parseFloat(2.1e-17) === parseInt(2.1e-17)
false
> parseFloat(2.1e-17 + 2) === parseInt(2.1e-17 + 2)
true
> parseFloat(2.000000000000000000000000000000000009) === parseInt(2.00000000000000000000000000000000000009)
true
How can I tell integers from decimals very near to integers?
It seems that JS (or at least V8) doesn't care about digits smaller than 10^-16 when doing calculations, even though the 64-bit representation used by the language (reference) should handle it.
Your examples are pretty straightforward to explain. The first thing to note is that parseInt() and parseFloat() take a string as input, so your inputs first get converted to a string before actually being parsed.
The first is easy to see:
> parseFloat(2.1e-17) === parseInt(2.1e-17)
false
// look at the result of each side
parseFloat(2.1e-17) == 2.1e-17
parseInt(2.1e-17) == 2
When parsing the string "2.1e-17" as an integer, the parser stops at the dot, as that is not a valid digit, and returns everything it found until then, which is just 2.
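Spelled out, the implicit conversion looks like this:
console.log(String(2.1e-17));       // "2.1e-17"
console.log(parseInt("2.1e-17"));   // 2, parsing stops at the "."
console.log(parseFloat("2.1e-17")); // 2.1e-17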
> parseFloat(2.1e-17 + 2) === parseInt(2.1e-17 + 2)
true
// look at the result of each side
parseFloat(2.1e-17 + 2) == 2
parseInt(2.1e-17 + 2) == 2
Here the formula in the parameter will be evaluated first. Due to the limitations of floating point math (we have just 52 bits for the mantissa and can't represent something like 2.000000000000000021), this will result in just 2. So both parseX() functions get the same integer argument, which results in the same parsed number.
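A quick way to see that limit:
console.log(2 + 2.1e-17 === 2); // true, the tiny addend is lost to rounding
console.log(Number.EPSILON);    // 2.220446049250313e-16, the spacing between adjacent doubles near 1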
> parseFloat(2.000000000000000000000000000000000009) === parseInt(2.00000000000000000000000000000000000009)
true
Same argument as for the second case. The only difference is that instead of a formula being evaluated, this time it is the JavaScript parser that converts your numbers to just 2.
So to sum up: From JavaScript's point of view, your numbers are just the same. If you need more precision, you will have to use some library for arbitrary precision.
This is something I learned from ReSharper
instead of using expressions like
if (2.00001 == 2) {}
try
if (Math.abs(2.00001 - 2) < tolerance) {}
where tolerance should be an acceptable value for you, for example .001,
so all values whose difference is less than .001 will be considered equal.
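Wrapped up as a small helper (the name and the default tolerance are just illustrative):
function nearlyEqual(a, b, tolerance) {
  tolerance = tolerance || 0.001;
  return Math.abs(a - b) < tolerance;
}
console.log(nearlyEqual(2.00001, 2)); // true
console.log(nearlyEqual(2.1, 2));     // false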
Do you really need 10^-16 precision? I mean, that is why 1000 meters = 1 kilometer; just change the unit of the output so you don't have to work with all those decimals.

Why is 10000000000000.126.toString() 10000000000000.127 (and what can I do to prevent it)?

Why is 10000000000000.126.toString() 10000000000000.127 and 100000000000.126.toString() not?
I think it must have something to do with the maximum value of a Number in JS (as per this SO question), but is that related to floating point operations, and how?
I'm asking because I wrote this function to format a number using thousands separators and would like to prevent this.
function th(n, sep) {
  sep = sep || '.';
  var dec = n.toString().split(/[,.]/),
      nArr = dec[0].split(''),
      isDot = /\./.test(sep);
  return function tt(n) {
    return n.length > 3 ?
      tt(n.slice(0, n.length - 3)).concat(n.slice(n.length - 3).join('')) :
      [n.join('')];
  }(nArr)
  .join(sep)
  + (dec[1] ? (isDot ? ',' : '.') + dec[1] : '');
}
th(10000000000000.126); //=> 10.000.000.000.000,127
th(1000000000000.126); //=> 1.000.000.000.000,126
Because not all numbers can be exactly represented with floating point (JavaScript uses double-precision 64-bit format IEEE 754 numbers), rounding errors come in. For instance:
alert(0.1 + 0.2); // "0.30000000000000004"
All numbering systems with limited storage (which is to say, all of them) have this issue, but you and I are used to dealing with our decimal system (which can't accurately represent "one third") and so are surprised by some of the values that the floating-point formats used by computers can't accurately represent. This sort of thing is why you're seeing more and more "decimal" style types out there (Java has BigDecimal, C# has decimal, etc.), which use our style of number representation (at a cost) and so are useful for applications where rounding needs to align with our expectations more closely (such as financial apps).
Update: I haven't tried, but you may be able to work around this by manipulating the values a bit before you grab their strings. For instance, this works with your specific example (live copy):
Code:
function display(msg) {
  var p = document.createElement('p');
  p.innerHTML = msg;
  document.body.appendChild(p);
}

function preciseToString(num) {
  var floored = Math.floor(num),
      fraction = num - floored,
      rv,
      fractionString,
      n;
  rv = String(floored);
  n = rv.indexOf(".");
  if (n >= 0) {
    rv = rv.substring(0, n);
  }
  fractionString = String(fraction);
  if (fractionString.substring(0, 2) !== "0.") {
    return String(num); // punt
  }
  rv += "." + fractionString.substring(2);
  return rv;
}
display(preciseToString(10000000000000.126));
Result:
10000000000000.126953125
...which then can, of course, be truncated as you see fit. Of course, it's important to note that 10000000000000.126953125 != 10000000000000.126. But I think that ship had already sailed (e.g., the Number already contained an imprecise value), given that you were seeing .127. I can't see any way for you to know that the original went to only three places, not with Number.
I'm not saying the above is in any way reliable; you'd have to really put it through its paces to prove it's (which is to say, I'm) not doing something stoopid there. And again, since you don't know where the precision ended in the first place, I'm not sure how helpful it is.
It's about the maximum number of significant decimal digits a float can store.
If you look at http://en.wikipedia.org/wiki/IEEE_754-2008 you can see that a double precision float (binary64) can store about 16 (15.95) decimal digits.
If your number contains more digits, you effectively lose precision, which is the case in your sample.
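You can see that limit directly:
console.log(Number.MAX_SAFE_INTEGER);         // 9007199254740991 (2^53 - 1)
console.log((10000000000000.126).toString()); // "10000000000000.127", 17 significant digits requested, about 16 kept
console.log((1000000000000.126).toString());  // "1000000000000.126", fits within the precision budget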
