Bitwise logical operator ~ - javascript

My question is: how does it become -6 when doing the negation?
Edit: To put it another way: if we need to represent 6 in 2's complement it should be 110. But the second row above holds the value '4294967290' (decimal) when it is converted using the calculator here.
So how can it be -6 then?

The negation, as you call it, is a strict bit inversion, but the resulting bits are interpreted as a two's-complement signed integer in JavaScript.
So you'd basically need ~5 + 1 to get the equivalent representation of -5.
In two's-complement representation, positive numbers are simply represented as themselves, and negative numbers are represented by the two's complement of their absolute value.
See http://en.wikipedia.org/wiki/Two's_complement for more details.
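As a minimal sketch you can run in the console (the variable name is just for illustration; the >>> 0 step only reads the same bit pattern as an unsigned 32-bit value, which ties in with the 4294967290 mentioned in the question):
var x = 5;             // bits: 00000000000000000000000000000101
console.log(~x);       // -6, the inverted bits read as a signed two's-complement value
console.log(~x + 1);   // -5, inverting and adding 1 is two's-complement negation
console.log(~x >>> 0); // 4294967290, the same bit pattern read as an unsigned 32-bit integer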

Related

facing an issue with parseFloat when input is more than 16 digits

I am facing a weird issue.
parseFloat(11111111111111111) converts it to 11111111111111112.
I noticed that it works fine while the length is up to 16 digits but rounds off when the input length is greater than 16.
I want to retain the original value passed in parseFloat after it is executed.
Any help?
Integers (numbers without a period or exponent notation) are considered accurate up to 15 digits.
More information here
Numbers in JavaScript are represented using 64-bit floating point values (so-called doubles in other languages).
Doubles can hold at most 15-16 significant digits (depending on the number's magnitude). Since the range of a double is about 1.7E±308, some numbers can only be approximated by a double; in your case 11111111111111111 cannot be represented exactly but is approximated by the value 11111111111111112. If this sounds strange, remember that 0.3 cannot be represented exactly as a double either.
A double can hold exact integer values in the range ±2^53; when you are operating in this range you may expect exact values.
Javascript has a constant, Number.MAX_SAFE_INTEGER which is the highest integer that can be exactly represented.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect.
The value is 9007199254740991 (2^53 - 1) which makes a maximum of 15 digits safe.
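For example, a quick console check (nothing here beyond standard Number properties):
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true, both round to 9007199254740992
console.log(Number.isSafeInteger(11111111111111111)); // false, 17 digits is beyond 2^53 - 1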
JavaScript now has BigInt
BigInt is a built-in object that provides a way to represent whole numbers larger than 2^53 - 1, which is the largest number JavaScript can reliably represent with the Number primitive.
BigInt can be used for arbitrarily large integers.
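A short sketch of the BigInt approach for the original value (assuming a runtime with BigInt support; note the n suffix on BigInt literals):
var big = BigInt("11111111111111111"); // build from a string so no precision is lost on the way in
console.log(big.toString()); // "11111111111111111", all 17 digits preserved
console.log(big + 1n);       // 11111111111111112n, exact integer arithmetic
console.log(Number(big));    // 11111111111111112, converting back to Number loses precision again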
As you can see in the following blog post, JavaScript only supports 53 bit integers.
if you type in the console
var x = 11111111111111111
and then type
x
you'll get
11111111111111112
This has nothing to do with the parseFloat method.
There's also a related question here about working with big numbers in JavaScript.
Try using the unary + operator.
Like this + ("1111111111111111") + 1 = 1111111111111112

parseInt returning values that differs by 1 [duplicate]

This question already has answers here:
What is JavaScript's highest integer value that a number can go to without losing precision?
(21 answers)
Closed 7 years ago.
I have data like this:
var currentValue="12345678901234561";
and I'm trying to parse it:
var number = parseInt(currentValue, 10) || 0;
and my result is:
number = 12345678901234560
Now let's try:
currentValue="12345678901234567"
in this case parseInt(currentValue,10) will result in 12345678901234568
Can anyone explain why parseInt is adding/subtracting 1 from the values I provide?
Can anyone explain why parseInt is adding/subtracting 1 from the values I provide?
It's not, quite, but JavaScript numbers are IEEE-754 double-precision binary floating point (even when you're using parseInt), which have only about 15 digits of precision. Your number is 17 digits long, so precision suffers, and the lowest-order digits get spongy.
The maximum reliable integer value is 9,007,199,254,740,991, which is available from the property Number.MAX_SAFE_INTEGER on modern JavaScript engines. (Similarly, there's Number.MIN_SAFE_INTEGER, which is -9,007,199,254,740,991.)
Some integer-specific operations, like the bitwise operators ~, &, |, ^, <<, and >>, convert their floating-point number operands to signed 32-bit integers, which gives us a much smaller range: -2^31 (-2,147,483,648) through 2^31 - 1 (2,147,483,647). The unsigned right shift >>> converts its operand to an unsigned 32-bit integer instead, giving us the range 0 through 4,294,967,295. Finally, just to round out our integer discussion, the length of an array is always a number within the unsigned 32-bit integer range.
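A few quick console checks illustrating both limits (the parseInt call mirrors the question's value; the bitwise lines just demonstrate the 32-bit conversion):
var n = parseInt("12345678901234567", 10);
console.log(n);                       // 12345678901234568, already off by 1 after parsing
console.log(Number.isSafeInteger(n)); // false, the value is above 2^53 - 1
console.log(2147483648 | 0);          // -2147483648, wraps around the signed 32-bit range
console.log(4294967296 >>> 0);        // 0, since 2^32 wraps to 0 in the unsigned 32-bit range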

How to obtain only the integer part of a long floating precision number with JS?

I know there's
Math.floor
parseInt
But about this case:
Math.floor(1.99999999999999999999999999)
returning 2, how could I obtain only its integer part, equals to 1?
1.99999999999999999999999999 is not the actual value of the number. It has a value of 2 after the literal is parsed because this is 'as best' as JavaScript can represent such a value.
JavaScript numbers are IEEE-754 binary64 or "double precision" which allow for 15 to 17 significant [decimal] digits of precision - the literal shown requires 27 significant digits which results in information loss.
Test: 1.99999999999999999999999999 === 2 (which results in true).
Here is another answer of mine that describes the issue; the finite relative precision is the problem.
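To see this in the console (1.9999999999999998 is roughly the largest double below 2, so it is about as close as a literal can get and still floor to 1):
console.log(1.99999999999999999999999999 === 2);       // true, the literal is already parsed as 2
console.log(Math.floor(1.99999999999999999999999999)); // 2, because Math.floor receives 2
console.log(Math.floor(1.9999999999999998));           // 1, this literal still fits below 2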

parseInt rounds incorrectly

I stumbled upon this issue with parseInt and I'm not sure why this is happening.
console.log(parseInt("16980884512690999")); // gives 16980884512691000
console.log(parseInt("169808845126909101"));​ // gives 169808845126909100
I'm clearly not hitting any of JavaScript's number limits
(Number.MAX_VALUE = 1.7976931348623157e+308)
Running Win 7 64 bit if that matters.
What am I overlooking?
Fiddle
Don't confuse Number.MAX_VALUE with maximum accurate value. All numbers in javascript are stored as 64 bit floating point, which means you can get high (and low) numbers, but they'll only be accurate to a certain point.
Double floating points (i.e. Javascript's) have 53 bits of significand precision, which means the highest/lowest "certainly accurate" integer in javascript is +/-9007199254740992 (2^53). Numbers above/below that may turn out to be accurate (the ones that simply add 0's on the end, because the exponent bits can be used to represent that).
Or, in the words of ECMAScript: "Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and −0)."
Update
Just to add a bit to the existing question, the ECMAScript spec requires that if an integral Number has less than 22 digits, .toString() will output it in standard decimal notation (e.g. 169808845126909100000 as in your example). If it has 22 or more digits, it will be output in normalized scientific notation (e.g. 1698088451269091000000 - an additional 0 - is output as 1.698088451269091e+21).
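For example, using powers of ten so the output is easy to verify in any standard engine:
console.log((1e20).toString()); // "100000000000000000000", 21 digits, still decimal notation
console.log((1e21).toString()); // "1e+21", at 22 digits the output switches to exponential notation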
From this answer
All numbers in Javascript are 64 bit "double" precision IEEE 754
floating point.
The largest positive whole number that can therefore be accurately
represented is 2^53. The remaining bits are reserved for the exponent.
2^53 = 9007199254740992

JavaScript | operator [duplicate]

This question already has answers here:
Using bitwise OR 0 to floor a number
(7 answers)
Closed 8 years ago.
Anyone able to explain what "|" and the value after it do? I know the output with 0 produces sets of 13 of the numbers 3, 2, 1, 0. But what about | 1, or | 2?
var i = 52;
while (i--) {
  alert(i / 13 | 0);
}
It is the bitwise OR operator. There is both an explanation and an example over at MDC. Since doing a bitwise OR with one operand being 0 produces the value of the other operand, in this case it rounds the result of the division down (I originally wrote that it does exactly nothing; see the update below).
If it were written | 1 what it would do is always print odd numbers (because it would set the 1-bit to on); specifically, it would cause even numbers to be incremented by 1 while leaving odd numbers untouched.
Update: As the commenters correctly state, the bitwise operator causes both operands to be treated as integers, therefore removing any fraction of the division result. I stand corrected.
This is a clever way of accomplishing the same effect as:
Math.floor(i/13);
JavaScript developers seem to be good at these kinds of things :)
In JavaScript, all numbers are floating point. There is no integer type. So even when you do:
var i = 1;
i is really the floating point number 1.0. So if you just did i/13, you'd end up with a fractional portion of it, and the output would be 3.846... for example.
When using the bitwise or operator in JavaScript, the runtime has to convert the operands to 32 bit integers before it can proceed. Doing this chops away the fractional part, leaving you with just an integer left behind. Bitwise or of zero is a no op (well, a no op in a language that has true integers) but has the side effect of flooring in JavaScript.
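A quick comparison, using the loop's own division (output values assume a standard engine):
var i = 51;
console.log(i / 13);             // 3.923076923076923...
console.log(i / 13 | 0);         // 3, the OR with 0 forces conversion to a 32-bit integer
console.log(Math.floor(i / 13)); // 3, same result for non-negative values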
It's a bitwise operator. Specifically the OR Bitwise Operator.
What it basically does is treat your variable as an array of bits and OR each pair of corresponding bits with each other. The result bit is 1 if either of them is 1, and 0 if both are 0.
Example:
24 = 11000
10 = 1010
The two aren't of equal length so we pad with 0's
24 = 11000
10 = 01010
26 = 11010
24 | 10 = 26
The best way to learn this is to read up on it.
That is the bitwise OR. In evaluating the expression, the LHS is truncated to an integer and returned, so | 0 is effectively the same as Math.floor() for non-negative numbers.
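One caveat worth a quick check: for negative numbers, | 0 truncates toward zero rather than flooring:
console.log(-5.5 | 0);         // -5, bitwise OR truncates toward zero
console.log(Math.floor(-5.5)); // -6, Math.floor always rounds down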
