JavaScript bitwise operator question

In JavaScript, when I do this:
var num = 1;
~num == -2
why does ~num not equal 0?
In binary, 1 is stored as 1, so NOT 1 should be 0.
Or it is stored like 0001, so NOT 0001 would be 1110.
I think I am missing something... can someone clear this up?

Look up two's complement for signed binary numbers.
Let's assume that a JavaScript Number is 8 bits wide (which it's not):
then
1 = 0000 0001b
and
~1 = 1111 1110b
Which is the binary representation of -2
0000 0010b = 2
0000 0001b = 1
0000 0000b = 0
1111 1111b = -1
1111 1110b = -2

~ toggles the bits of the operand so
00000001
becomes
11111110
which is -2
Note: In JavaScript, bitwise operators work on 32-bit integers, but I shortened it to illustrate the point.

From the documentation:
Bitwise NOTing any number x yields -(x + 1). For example, ~5 yields -6.

The reason for this is that using a bitwise NOT reverses all the bits of a value. If you are storing the value of 1 in a signed 8-bit integer, you're storing the binary value 00000001. If you apply a bitwise NOT, you get 11111110, which for a signed 8-bit integer is the binary value for -2.
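A quick console check of that identity (a minimal sketch using plain Number values):
// Bitwise NOT flips all 32 bits; in two's complement this is the same as -(x + 1)
console.log(~5);              // -6
console.log(~1);              // -2
console.log(~0);              // -1
var x = 123;
console.log(~x === -(x + 1)); // true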

Related

When converting to integer from string it adds +1 on the value [duplicate]

Is this defined by the language? Is there a defined maximum? Is it different in different browsers?
JavaScript has two number types: Number and BigInt.
The most frequently-used number type, Number, is a 64-bit floating point IEEE 754 number.
The largest exact integral value of this type is Number.MAX_SAFE_INTEGER, which is:
2^53 - 1, or
+/- 9,007,199,254,740,991, or
nine quadrillion seven trillion one hundred ninety-nine billion two hundred fifty-four million seven hundred forty thousand nine hundred ninety-one
To put this in perspective: one quadrillion bytes is a petabyte (or one thousand terabytes).
"Safe" in this context refers to the ability to represent integers exactly and to correctly compare them.
From the spec:
Note that all the positive and negative integers whose magnitude is no
greater than 2^53 are representable in the Number type (indeed, the
integer 0 has two representations, +0 and -0).
To safely use integers larger than this, you need to use BigInt, which has no upper bound.
Note that the bitwise operators and shift operators operate on 32-bit integers, so in that case, the max safe integer is 2^31 - 1, or 2,147,483,647.
const log = console.log
var x = 9007199254740992
var y = -x
log(x == x + 1) // true !
log(y == y - 1) // also true !
// Arithmetic operators work, but bitwise/shifts only operate on int32:
log(x / 2) // 4503599627370496
log(x >> 1) // 0
log(x | 1) // 1
Technical note on the subject of the number 9,007,199,254,740,992: There is an exact IEEE-754 representation of this value, and you can assign and read this value from a variable, so for very carefully chosen applications in the domain of integers less than or equal to this value, you could treat this as a maximum value.
In the general case, you must treat this IEEE-754 value as inexact, because it is ambiguous whether it is encoding the logical value 9,007,199,254,740,992 or 9,007,199,254,740,993.
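For illustration, a minimal sketch of that boundary ambiguity, using only standard Number built-ins:
console.log(9007199254740993 === 9007199254740992);  // true: the odd literal rounds to 2^53
console.log(Number.isSafeInteger(9007199254740991)); // true  (this is MAX_SAFE_INTEGER)
console.log(Number.isSafeInteger(9007199254740992)); // false (2^53 is no longer "safe")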
>= ES6:
Number.MIN_SAFE_INTEGER;
Number.MAX_SAFE_INTEGER;
<= ES5
From the reference:
Number.MAX_VALUE;
Number.MIN_VALUE;
console.log('MIN_VALUE', Number.MIN_VALUE);
console.log('MAX_VALUE', Number.MAX_VALUE);
console.log('MIN_SAFE_INTEGER', Number.MIN_SAFE_INTEGER); //ES6
console.log('MAX_SAFE_INTEGER', Number.MAX_SAFE_INTEGER); //ES6
It is 2^53 == 9 007 199 254 740 992. This is because Numbers are stored as floating point with a 52-bit mantissa.
The min value is -2^53.
This makes some fun things happen:
Math.pow(2, 53) == Math.pow(2, 53) + 1
>> true
And can also be dangerous :)
var MAX_INT = Math.pow(2, 53); // 9 007 199 254 740 992
for (var i = MAX_INT; i < MAX_INT + 2; ++i) {
// infinite loop
}
Further reading: http://blog.vjeux.com/2010/javascript/javascript-max_int-number-limits.html
In JavaScript, there is a number called Infinity.
Examples:
(Infinity>100)
=> true
// Also worth noting
Infinity - 1 == Infinity
=> true
Math.pow(2,1024) === Infinity
=> true
This may be sufficient for some questions regarding this topic.
Jimmy's answer correctly represents the continuous JavaScript integer spectrum as -9007199254740992 to 9007199254740992 inclusive (sorry 9007199254740993, you might think you are 9007199254740993, but you are wrong!
Demonstration below or in jsfiddle).
console.log(9007199254740993);
However, there is no answer that finds/proves this programmatically (other than the one CoolAJ86 alluded to in his answer that would finish in 28.56 years ;), so here's a slightly more efficient way to do that (to be precise, it's more efficient by about 28.559999999968312 years :), along with a test fiddle:
/**
 * Checks if adding/subtracting one to/from a number yields the correct result.
 *
 * @param number The number to test
 * @return true if you can add/subtract 1, false otherwise.
 */
var canAddSubtractOneFromNumber = function(number) {
    var numMinusOne = number - 1;
    var numPlusOne = number + 1;
    return ((number - numMinusOne) === 1) && ((number - numPlusOne) === -1);
}

//Find the highest number
var highestNumber = 3; //Start with an integer 1 or higher

//Get a number higher than the valid integer range
while (canAddSubtractOneFromNumber(highestNumber)) {
    highestNumber *= 2;
}

//Find the lowest number you can't add/subtract 1 from
var numToSubtract = highestNumber / 4;
while (numToSubtract >= 1) {
    while (!canAddSubtractOneFromNumber(highestNumber - numToSubtract)) {
        highestNumber = highestNumber - numToSubtract;
    }
    numToSubtract /= 2;
}

//And there was much rejoicing. Yay.
console.log('HighestNumber = ' + highestNumber);
Many earlier answers have shown that 9007199254740992 === 9007199254740992 + 1 is true to verify that 9,007,199,254,740,991 is the maximum safe integer.
But what if we keep accumulating:
input: 9007199254740992 + 1 output: 9007199254740992 // expected: 9007199254740993
input: 9007199254740992 + 2 output: 9007199254740994 // expected: 9007199254740994
input: 9007199254740992 + 3 output: 9007199254740996 // expected: 9007199254740995
input: 9007199254740992 + 4 output: 9007199254740996 // expected: 9007199254740996
We can see that among numbers greater than 9,007,199,254,740,992, only even numbers are representable.
This entry explains how the double-precision 64-bit binary format works. Let's see how 9,007,199,254,740,992 is held (represented) in this binary format.
Using an abbreviated form to demonstrate it, starting from 4,503,599,627,370,496:
1 . 0000 ---- 0000 * 2^52 => 1 0000 ---- 0000.
|-- 52 bits --| |exponent part| |-- 52 bits --|
On the left side of the arrow, we have bit value 1, and an adjacent radix point. By consuming the exponent part on the left, the radix point is moved 52 steps to the right. The radix point ends up at the end, and we get 4503599627370496 in pure binary.
Now let's keep incrementing the fraction part by 1 until all the bits are set to 1, which equals 9,007,199,254,740,991 in decimal.
1 . 0000 ---- 0000 * 2^52 => 1 0000 ---- 0000.
(+1)
1 . 0000 ---- 0001 * 2^52 => 1 0000 ---- 0001.
(+1)
1 . 0000 ---- 0010 * 2^52 => 1 0000 ---- 0010.
(+1)
.
.
.
1 . 1111 ---- 1111 * 2^52 => 1 1111 ---- 1111.
Because the 64-bit double-precision format strictly allots 52 bits for the fraction part, no more bits are available if we add another 1, so what we can do is set all bits back to 0 and manipulate the exponent part:
┏━━▶ This bit is implicit and persistent.
┃
1 . 1111 ---- 1111 * 2^52 => 1 1111 ---- 1111.
|-- 52 bits --| |-- 52 bits --|
(+1)
1 . 0000 ---- 0000 * 2^52 * 2 => 1 0000 ---- 0000. * 2
|-- 52 bits --| |-- 52 bits --|
(By consuming the 2^52, radix
point has no way to go, but
there is still one 2 left in
exponent part)
=> 1 . 0000 ---- 0000 * 2^53
|-- 52 bits --|
Now we get 9,007,199,254,740,992, and for numbers greater than it, the format can only handle increments of 2, because every increment of 1 in the fraction part ends up being multiplied by the leftover 2 in the exponent part. That's why the double-precision 64-bit binary format cannot hold odd numbers when the number is greater than 9,007,199,254,740,992:
(consume 2^52 to move radix point to the end)
1 . 0000 ---- 0001 * 2^53 => 1 0000 ---- 0001. * 2
|-- 52 bits --| |-- 52 bits --|
Following this pattern, when the number gets greater than 9,007,199,254,740,992 * 2 = 18,014,398,509,481,984, only multiples of 4 can be held:
input: 18014398509481984 + 1 output: 18014398509481984 // expected: 18014398509481985
input: 18014398509481984 + 2 output: 18014398509481984 // expected: 18014398509481986
input: 18014398509481984 + 3 output: 18014398509481984 // expected: 18014398509481987
input: 18014398509481984 + 4 output: 18014398509481988 // expected: 18014398509481988
How about numbers between [ 2 251 799 813 685 248, 4 503 599 627 370 496 )?
1 . 0000 ---- 0001 * 2^51 => 1 0000 ---- 000.1
|-- 52 bits --| |-- 52 bits --|
The value 0.1 in binary is exactly 2^-1 (=1/2) (=0.5)
So when the number is less than 4,503,599,627,370,496 (2^52), there is one bit available to represent halves of the integer:
input: 4503599627370495.5 output: 4503599627370495.5
input: 4503599627370495.75 output: 4503599627370495.5
When the number is less than 2,251,799,813,685,248 (2^51), quarters can also be represented:
input: 2251799813685246.75 output: 2251799813685246.8 // expected: 2251799813685246.75
input: 2251799813685246.25 output: 2251799813685246.2 // expected: 2251799813685246.25
input: 2251799813685246.5 output: 2251799813685246.5
/**
Please note that if you try this yourself and, say, log
these numbers to the console, they will get rounded. JavaScript
rounds if the number of digits exceeds 17. The value
is internally held correctly:
*/
input: 2251799813685246.25.toString(2)
output: "111111111111111111111111111111111111111111111111110.01"
input: 2251799813685246.75.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"
input: 2251799813685246.78.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"
And what is the available range of the exponent part? The format allots 11 bits for it.
From Wikipedia (for more details, go there): the layout is 1 sign bit, 11 exponent bits, and 52 fraction bits.
So to make the exponent part 2^52, we need to set e = 1075 (the exponent field is biased by 1023, and 1075 - 1023 = 52).
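For illustration, a minimal sketch that reads the raw exponent field of 2^52 with a DataView (standard built-ins; DataView defaults to big-endian, so this is host-independent):
const buf = new ArrayBuffer(8);
const view = new DataView(buf);
view.setFloat64(0, Math.pow(2, 52));
// The high 32 bits hold the sign bit and the 11-bit exponent field.
const hi = view.getUint32(0);
const exponentField = (hi >>> 20) & 0x7ff;
console.log(exponentField);        // 1075
console.log(exponentField - 1023); // 52, the unbiased exponent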
To be safe
var MAX_INT = 4294967295;
Reasoning
I thought I'd be clever and find the value at which x + 1 === x with a more pragmatic approach.
My machine can only count 10 million per second or so... so I'll post back with the definitive answer in 28.56 years.
If you can't wait that long, I'm willing to bet that
Most of your loops don't run for 28.56 years
9007199254740992 === Math.pow(2, 53) + 1 is proof enough
You should stick to 4294967295, which is Math.pow(2, 32) - 1, so as to avoid expected issues with bit-shifting
Finding x + 1 === x:
(function () {
    "use strict";
    var x = 0
        , start = new Date().valueOf()
        ;
    while (x + 1 != x) {
        if (!(x % 10000000)) {
            console.log(x);
        }
        x += 1;
    }
    console.log(x, new Date().valueOf() - start);
}());
The short answer is “it depends.”
If you’re using bitwise operators anywhere (or if you’re referring to the length of an Array), the ranges are:
Unsigned: 0…(-1>>>0)
Signed: (-(-1>>>1)-1)…(-1>>>1)
(It so happens that the bitwise operators and the maximum length of an array are restricted to 32-bit integers.)
If you’re not using bitwise operators or working with array lengths:
Signed: (-Math.pow(2,53))…(+Math.pow(2,53))
These limitations are imposed by the internal representation of the “Number” type, which generally corresponds to IEEE 754 double-precision floating-point representation. (Note that unlike typical signed integers, the magnitude of the negative limit is the same as the magnitude of the positive limit, due to characteristics of the internal representation, which actually includes a negative 0!)
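For illustration, the compact expressions above evaluate to the familiar limits (a minimal console sketch):
console.log(-1 >>> 0);        // 4294967295  (max unsigned 32-bit)
console.log(-1 >>> 1);        // 2147483647  (max signed 32-bit)
console.log(-(-1 >>> 1) - 1); // -2147483648 (min signed 32-bit)
console.log(Math.pow(2, 53)); // 9007199254740992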
ECMAScript 6:
Number.MAX_SAFE_INTEGER = Math.pow(2, 53)-1;
Number.MIN_SAFE_INTEGER = -Number.MAX_SAFE_INTEGER;
Others may have already given the generic answer, but I thought it would be a good idea to give a fast way of determining it:
for (var x = 2; x + 1 !== x; x *= 2);
console.log(x);
Which gives me 9007199254740992 in less than a millisecond in Chrome 30.
It tests powers of 2 to find the first one that, when 1 is 'added', equals itself.
Anything you want to use for bitwise operations must be between 0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 - 1).
The console will tell you that 0x80000000 equals +2147483648, but 0x80000000 & 0x80000000 equals -2147483648.
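A quick console check of that behavior (a minimal sketch):
console.log(0x80000000);              // 2147483648 (the literal itself is a positive Number)
console.log(0x80000000 | 0);          // -2147483648 (ToInt32 wraps it into the signed range)
console.log(0x80000000 & 0x80000000); // -2147483648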
JavaScript received a new data type in ECMAScript 2020: BigInt. It introduced numeric literals with an "n" suffix and allows for arbitrary precision:
var a = 123456789012345678901012345678901n;
Precision will still be lost, of course, when such a big integer is (maybe unintentionally) coerced to the Number data type.
And, obviously, there will always be precision limitations due to finite memory, and a cost in terms of time in order to allocate the necessary memory and to perform arithmetic on such large numbers.
For instance, generating a number with a hundred thousand decimal digits will cause a noticeable delay before completion:
console.log(BigInt("1".padEnd(100000,"0")) + 1n)
...but it works.
Try:
maxInt = -1 >>> 1
In Firefox 3.6 it's 2^31 - 1.
I did a simple test with a formula, X-(X+1)=-1, and the largest value of X I can get to work on Safari, Opera and Firefox (tested on OS X) is 9e15. Here is the code I used for testing:
javascript: alert(9e15-(9e15+1));
I write it like this:
var max_int = 0x20000000000000;
var min_int = -0x20000000000000;
(max_int + 1) === 0x20000000000000; //true
(max_int - 1) < 0x20000000000000; //true
Same for int32
var max_int32 = 0x80000000;
var min_int32 = -0x80000000;
Let's get to the sources
Description
The MAX_SAFE_INTEGER constant has a value of 9007199254740991 (9,007,199,254,740,991 or ~9 quadrillion). The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
Because MAX_SAFE_INTEGER is a static property of Number, you always use it as Number.MAX_SAFE_INTEGER, rather than as a property of a Number object you created.
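A quick sketch of the comparison the quoted text describes (standard built-ins only):
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true (!)
console.log(Number.isSafeInteger(Number.MAX_SAFE_INTEGER));               // true
console.log(Number.isSafeInteger(Number.MAX_SAFE_INTEGER + 1));           // false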
In JavaScript, the largest safely representable integer is 2^53 - 1.
However, bitwise operations are calculated on 32 bits (4 bytes), meaning that if you exceed 32 bits in shifts, you will start losing bits.
In Google Chrome's built-in JavaScript, you can go up to approximately 2^1024 before the number is called Infinity.
Scato wrote:
anything you want to use for bitwise operations must be between
0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 -
1).
the console will tell you that 0x80000000 equals +2147483648, but
0x80000000 & 0x80000000 equals -2147483648
Hex numerals are unsigned positive values, so 0x80000000 = 2147483648 - that's mathematically correct. If you want to make it a signed value, you have to right shift: 0x80000000 >> 0 = -2147483648. You can write 1 << 31 instead, too.
Firefox 3 doesn't seem to have a problem with huge numbers.
1e+200 * 1e+100 will calculate fine to 1e+300.
Safari seems to have no problem with it either. (For the record, this is on a Mac, in case anyone else decides to test this.)
Unless I lost my brain at this time of day, this is way bigger than a 64-bit integer.
Node.js and Google Chrome both seem to use 64-bit floating point values (whose maximum exponent yields values up to about 2^1024), so:
Number.MAX_VALUE = 1.7976931348623157e+308

Reverse Bits: why it's giving me a negative value?

var reverseBits = function(n) {
    var re = 0;
    for (var i = 0; i < 32; i++) {
        re = (re << 1) | (n & 1);
        n >>>= 1;
    }
    return re;
};
This is my code to reverse bits in JavaScript, but when n = 1, it gives -2147483648 (-10000000000000000000000000000000). Shouldn't it be a positive number? Where am I wrong?
The reason you are getting a negative number is because of how computers store negative and positive numbers. The most significant bit (the bit with the greatest value) is used in signed numbers to determine if a number should be negative or positive. If this bit is a 0, then it's positive. If it's 1, then it's negative. Computers use a technique called 2's complement to convert a number between negative and positive. Here is how it works:
In your example, you assigned the number 1 to n. In a 32-bit computer, the binary would look like this:
0000 0000 0000 0000 0000 0000 0000 0001
After your reverse your bits, your binary looks like this:
1000 0000 0000 0000 0000 0000 0000 0000
If you pull out your binary calculator, punch in this number, and convert it to decimal, you'll see it's the value 2147483648. Because the leftmost bit is a 1, it's a negative number. Because JavaScript only has generic var variables, it assumes you want a signed result. The >>> operator in JavaScript is called a zero-fill right shift, and using it with an operand of 0 (>>> 0) tells JavaScript that you want an unsigned result.
In case you're curious (or some other reader of this post is curious), here is how a binary-based computer deals with negative numbers. Suppose you want to store the value -96. How does a computer store this? Well, first, just ignore the sign. 96 in binary looks like this:
0000 0000 0000 0000 0000 0000 0110 0000
Next, the computer takes the 2's complement. This is accomplished by first inverting each bit (1's become 0's and 0's become 1's):
1111 1111 1111 1111 1111 1111 1001 1111
Finally, you simply add 1, which looks like this:
1111 1111 1111 1111 1111 1111 1010 0000
Internally, this is how it's stored in your computer's memory. This bit pattern equates to 4,294,967,200 when read as unsigned, or -96 when read as signed.
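A quick JavaScript check of that bit pattern (a minimal sketch using the >>> 0 trick mentioned above):
console.log(-96 >>> 0);               // 4294967200
console.log((-96 >>> 0).toString(2)); // "11111111111111111111111110100000"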
should not it be a positive number? Where am I wrong?
No, it should not. All bitwise operators except >>> work with signed 32-bit integers, where a leading 1 (in two's complement) signifies a negative number.
As suggested by @Icemanind, you can use the unsignedness of >>> to fix that and "cast" it to an unsigned integer:
return re >>> 0;
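For completeness, a sketch of the function above with that fix applied:
var reverseBits = function(n) {
    var re = 0;
    for (var i = 0; i < 32; i++) {
        re = (re << 1) | (n & 1);
        n >>>= 1;
    }
    return re >>> 0; // interpret the 32-bit pattern as unsigned
};

console.log(reverseBits(1)); // 2147483648 instead of -2147483648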

Why does bitwise "not 1" equal -2?

Suppose we have 1 and this number in base 2 is:
00000000000000000000000000000001
Now I want to flip all bits to get following result:
11111111111111111111111111111110
As far as I know, the solution is to use the ~ (bitwise NOT operator) to flip all bits, but the result of ~1 is -2:
console.log(~1); //-2
console.log((~1).toString(2)); //-10 (binary representation)
Why do I get this strange result?
There are 2 integers between 1 and -2: 0 and -1
1 in binary is 00000000000000000000000000000001
0 in binary is 00000000000000000000000000000000
-1 in binary is 11111111111111111111111111111111
-2 in binary is 11111111111111111111111111111110
("binary" being 2's complement, in the case of a bitwise not ~ )
As you can see, it's not very surprising that ~1 equals -2, since ~0 equals -1.
As @Derek explained, these bitwise operators treat their operands as a sequence of 32 bits. parseInt, on the other hand, does not. That is why you get some different results.
Here's a more complete demo:
for (var i = 5; i >= -5; i--) {
    console.log('Decimal: ' + pad(i, 3, ' ') + ' | Binary: ' + bin(i));
    if (i === 0)
        console.log('Decimal: -0 | Binary: ' + bin(-0)); // There is no `-0`
}

function pad(num, length, char) {
    var out = num.toString();
    while (out.length < length)
        out = char + out;
    return out;
}

function bin(bin) {
    return pad((bin >>> 0).toString(2), 32, '0');
}
100 -4
101 -3
110 -2
111 -1
000 0
001 1
010 2
011 3
A simple way to remember how two's complement notation works is to imagine it's just normal binary, except that its last bit corresponds to the same place value negated. In my contrived three-bit two's complement, the first bit is worth 1, the second 2, and the third -4 (note the minus).
So as you can see, a bitwise not in two's complement is -(n + 1). Surprisingly enough, applying it to a number twice gives the same number:
-(-(n + 1) + 1) = (n + 1) - 1 = n
It is obvious when talking bitwise, but not so much in its arithmetical effect.
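A quick JavaScript illustration of the double-NOT identity (and, as a side note, the common ~~ idiom, which truncates toward zero via the ToInt32 conversion):
console.log(~(~7));  // 7
console.log(~~5);    // 5
console.log(~~-3.9); // -3 (the fraction is dropped by the int32 conversion)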
Several more observations that make remembering how it works a bit easier:
Notice how negative values ascend. Quite the same rules, with just 0 and 1 swapped. Bitwise NOTted, if you will.
100 -4 011 - I bitwise NOTted this half
101 -3 010
110 -2 001
111 -1 000
----------- - Note the symmetry of the last column
000 0 000
001 1 001
010 2 010
011 3 011 - This one's left as-is
By cycling that list of binaries by half of the total amount of numbers in there, you get a typical sequence of ascending binary numbers starting at zero.
- 100 -4 \
- 101 -3 |
- 110 -2 |-\ - these are in effect in signed types
- 111 -1 / |
*************|
000 0 |
001 1 |
010 2 |
011 3 |
*************|
+ 100 4 \ |
+ 101 5 |-/ - these are in effect in unsigned types
+ 110 6 |
+ 111 7 /
In computer science it's all about interpretation. For a computer everything is a sequence of bits that can be interpreted in many ways. For example 0100001 can be either the number 33 or ! (that's how ASCII maps this bit sequence).
Everything is a bit sequence for a computer, no matter if you see it as a digit, number, letter, text, Word document, pixel on your screen, displayed image or a JPG file on your hard drive. If you know how to interpret that bit sequence, it may be turned into something meaningful for a human, but in the RAM and CPU there are only bits.
So when you want to store a number in a computer, you have to encode it. For non-negative numbers it's pretty simple, you just have to use binary representation. But how about negative numbers?
You can use an encoding called two's complement. In this encoding, you have to decide how many bits each number will have (for example, 8 bits). The most significant bit is reserved as a sign bit. If it's 0, then the number should be interpreted as non-negative; otherwise it's negative. The other 7 bits contain the actual number.
00000000 means zero, just like for unsigned numbers. 00000001 is one, 00000010 is two and so on. The largest positive number that you can store on 8 bits in two's complement is 127 (01111111).
The next binary number (10000000) is -128. It may seem strange, but in a second I'll explain why it makes sense. 10000001 is -127, 10000010 is -126 and so on. 11111111 is -1.
Why do we use such strange encoding? Because of its interesting properties. Specifically, while performing addition and subtraction the CPU doesn't have to know that it's a signed number stored as two's complement. It can interpret both numbers as unsigned, add them together and the result will be correct.
Let's try this: -5 + 5. -5 is 11111011, 5 is 00000101.
  11111011
+ 00000101
----------
 100000000
The result is 9 bits long. The most significant bit overflows and we're left with 00000000, which is 0. It seems to work.
Another example: 23 + -7. 23 is 00010111, -7 is 11111001.
  00010111
+ 11111001
----------
 100010000
Again, the MSB is lost and we get 00010000 == 16. It works!
That's how two's complement works. Computers use it internally to store signed integers.
You may have noticed that in two's complement, when you negate the bits of a number N, it turns into -N-1. Examples:
0 negated == ~00000000 == 11111111 == -1
1 negated == ~00000001 == 11111110 == -2
127 negated == ~01111111 == 10000000 == -128
-128 negated == ~10000000 == 01111111 == 127
This is exactly what you have observed: JS is pretending it's using two's complement. So why is parseInt('11111111111111111111111111111110', 2) equal to 4294967294? Well, because it's only pretending.
Internally, JS always uses a floating point number representation. It works in a completely different way than two's complement, and its bitwise negation is mostly useless, so JS pretends a number is two's complement, then negates its bits and converts it back to the floating point representation. This does not happen with parseInt, so you get 4294967294, even though the binary value is seemingly the same.
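A short console sketch of that difference:
console.log(~1);                                              // -2
console.log((~1 >>> 0).toString(2));                          // "11111111111111111111111111111110"
console.log(parseInt('11111111111111111111111111111110', 2)); // 4294967294
console.log(~1 >>> 0);                                        // 4294967294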
A two's complement 32-bit signed integer (JavaScript insists that this is the format used for a 32-bit signed integer) will store -2 as
11111111111111111111111111111110
So all is as expected.
It's two's complement arithmetic, which is the equivalent of "tape counter" arithmetic. Tape recorders tended to have counters attached (adding machines would likely be an even better analogy, but they were obsolete already when 2's complement became hip).
When you wind backwards 2 steps from 000, you arrive at 998. So 998 is the tape counter's 10s complement arithmetic representation for -2: wind forward 2 steps, arrive at 000 again.
2s complement is just like that. Wind forward 1 from 1111111111111111 and you arrive at 0000000000000000, so 1111111111111111 is the representation of -1. Wind instead back another 1 from there, and you get 1111111111111110 which then is the representation of -2.
Numbers in JavaScript are floating point numbers, stored and represented by IEEE 754 standard.
However, for bitwise operations, the operands are internally treated as signed 32-bit integers represented by two's complement format:
The operands of all bitwise operators are converted to signed 32-bit
integers in two's complement format. Two's complement format means
that a number's negative counterpart (e.g. 5 vs. -5) is all the
number's bits inverted (bitwise NOT of the number, a.k.a. ones'
complement of the number) plus one.
A negative number's positive counterpart is calculated the same way. Thus we have:
1 = 00000000000000000000000000000001b
~1 = 11111111111111111111111111111110b
11111111111111111111111111111110b = -2
Note that Number.toString() is not supposed to return the two's complement representation for base-2.
The expression (-2).toString(2) yields -10, which is the minus sign (-) followed by the base-2 representation of 2 (10).
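If you do want to see the 32-bit two's complement bit pattern, a common trick (sketch) is to reinterpret the value as unsigned with >>> 0 before converting to a string:
console.log((-2 >>> 0).toString(2)); // "11111111111111111111111111111110"
console.log((~1 >>> 0).toString(2)); // same bit pattern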
This is the expected behavior, according to mdn:bitwise-not.
The part you probably don't understand is that
[11111111111111111111111111111110]₂ = [10]₂¹, if expressed as a signed integer. The leading 1s can be as many as you want and it's still the same number, similar to leading 0s in unsigned integers/decimal.
¹ [10]₂ specifies that 10 should be interpreted as base 2 (binary)

Why does ~-1 equal 0 and ~1 equal -2?

According to subsection 11.4.8 of the ECMAScript 5.1 standard:
The production UnaryExpression : ~ UnaryExpression is evaluated as follows:
Let expr be the result of evaluating UnaryExpression.
Let oldValue be ToInt32(GetValue(expr)).
Return the result of applying bitwise complement to oldValue. The result is a signed 32-bit integer.
The ~ operator will invoke the internal method ToInt32. In my understanding, ToInt32(1) and ToInt32(-1) will return the same value 1, but why does ~-1 equal 0 and ~1 equal -2?
Now my question is: why does ToInt32(-1) equal -1?
subsection 9.5 of the ECMAScript 5.1 standard:
The abstract operation ToInt32 converts its argument to one of 2^32 integer values
in the range −2^31 through 2^31 − 1, inclusive. This abstract operation functions as
follows:
Let number be the result of calling ToNumber on the input argument.
If number is NaN, +0, −0, +∞, or −∞, return +0.
Let posInt be sign(number) * floor(abs(number)).
Let int32bit be posInt modulo 2^32; that is, a finite integer value k of Number
type with positive sign and less than 2^32 in magnitude such that the mathematical
difference of posInt and k is mathematically an integer multiple of 2^32.
If int32bit is greater than or equal to 2^31, return int32bit − 2^32, otherwise
return int32bit.
When the argument is -1, according to 9.5:
in step 1, number will still be -1,
skip step 2,
in step 3, posInt will be -1,
in step 4, int32bit will be 1,
in step 5, it will return 1.
Which step is wrong?
-1 as a 32-bit integer is
1111 1111 1111 1111 1111 1111 1111 1111
So ~-1 will be
0000 0000 0000 0000 0000 0000 0000 0000
Which is zero.
1 as a 32-bit integer is
0000 0000 0000 0000 0000 0000 0000 0001
So ~1 will be
1111 1111 1111 1111 1111 1111 1111 1110
Which is -2.
You should read about two's complement to understand how negative integers are represented in binary.
Where did you get the idea that ToInt32(-1) evaluates to 1? It evaluates to -1: in step 4, posInt modulo 2^32 is 2^32 - 1 (not 1), and since that is greater than or equal to 2^31, step 5 returns (2^32 - 1) - 2^32 = -1. In 32-bit, two's complement binary representation, -1 is all bits set to 1. When you apply the ~ operator, every bit then becomes 0, which is the representation of 0 in 32-bit, two's complement binary.
The representation of 1 is all bits 0 except for bit 0. When the bits are inverted, the result is all bits 1 except for bit 0. This happens to be the two's complement representation of -2. (To see this, just subtract 1 from the two's complement representation of -1.)
ToInt32(1) will return 1
ToInt32(-1) will return -1
-1 is represented as a signed 32-bit integer by having all 32 bits set. The bitwise complement of that is all bits clear, thus ~-1 yields 0
1 is represented as a signed 32-bit integer by having all bits clear except the bottom bit. The bitwise complement of that has all bits set except the bottom bit, which is the signed 32-bit integer representation of the value -2. Thus, ~1 yields -2
