I would like to shift this unsigned number: 1479636484000 7 bits to the right. Is this possible in JavaScript?
Both
1479636484000 >> 7
and
1479636484000 >>> 7
return an incorrect answer (for me). The correct answer should be 11559660031.
I guess there's some sign bit involved here, and maybe the number is too large to be supported. But is there any clever way of getting around it?
Bitwise operations in JavaScript start by truncating the number to a 32-bit integer. Your numbers are too big. The "clever way" to get around that limitation is to implement your own numeric library.
Note that floating-point division by 128 gets you the right answer (if you drop the fraction).
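For example (a quick sketch; the second line assumes an engine with BigInt support):
console.log(Math.floor(1479636484000 / 128)); // 11559660031
console.log(1479636484000n >> 7n); // 11559660031n (BigInt shifts are not limited to 32 bits)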
You could convert the number to a binary string, remove the last 7 characters, and convert the result back to a number:
console.log((1479636484000).toString(2));
console.log((11559660031).toString(2));
console.log((1479636484000).toString(2).slice(0, -7));
console.log(parseInt((1479636484000).toString(2).slice(0, -7), 2));
I am trying to understand why I am not able to calculate the right modulo using JavaScript. The operation I have tried is:
Wrong answer:
28493595674446332150349236018567871332790652257295571471311614363091332152517 % 6 = 4
The result should be 1:
28493595674446332150349236018567871332790652257295571471311614363091332152517 % 6 = 1
I have tried to convert this number to BN, but unfortunately I always get the same answer. However, if you use Wolfram Alpha or other math software, it returns the right answer.
What's going on? What have I been doing wrong?
The integer number range in JavaScript is +/- 9007199254740991 (Number.MAX_SAFE_INTEGER). Your number is simply out of range for JS.
You can also use the BigInt notation to get the right answer.
28493595674446332150349236018567871332790652257295571471311614363091332152517n % 6n
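For example, in any console that supports BigInt, this prints 1n:
console.log(28493595674446332150349236018567871332790652257295571471311614363091332152517n % 6n); // 1n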
See What is JavaScript's highest integer value that a number can go to without losing precision?
First note that mod(3^146,293)=292. For some reason, inputting mod(3^146,293) in Matlab returns 275. Inputting Math.pow(3,146) % 293 in JS returns 275. This same error occurs (as far as I can tell) every time. This leads me to believe I am missing something obvious but cannot seem to tell what.
Any help is much appreciated.
As discussed in the answers to this related question, MATLAB uses double-precision floating point numbers by default, which have limits on their resolution (i.e. the floating point relative accuracy, eps). For example:
>> a = 3^146
a =
4.567759074507741e+69
>> eps(a)
ans =
7.662477704329444e+53
In this case, 3^146 is on the order of 10^69 and the relative accuracy is on the order of 10^53. With only 16 digits of precision, a double can't store the exact integer representation of an arbitrary 70-digit integer.
An alternative in MATLAB is to use the Symbolic Toolbox to create symbolic numbers with a greater resolution. This gives you the answer you expect:
>> a = sym('3^146')
a =
4567759074507740406477787437675267212178680251724974985372646979033929
>> mod(a, 293)
ans =
292
Math.pow(3, 146) is larger than the constant Number.MAX_SAFE_INTEGER in JavaScript, which represents the upper limit of numbers that can be represented without losing accuracy. Therefore JavaScript cannot accurately represent Math.pow(3, 146) within the 64-bit limit.
MATLAB also has limits on its integer size but can represent a large number with the Symbolic Math Toolbox.
There are also algorithms that you can implement to accomplish this without overflowing.
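For instance, here is a minimal sketch of modular exponentiation by repeated squaring (the helper name modPow is my own, not a built-in). It reduces modulo 293 at every step, so no intermediate value ever exceeds 293^2 and plain Numbers stay exact:

function modPow(base, exp, mod) {
  let result = 1;
  base = base % mod;
  while (exp > 0) {
    if (exp % 2 === 1) {
      result = (result * base) % mod; // fold in the current bit of the exponent
    }
    base = (base * base) % mod; // square for the next bit
    exp = Math.floor(exp / 2);
  }
  return result;
}

console.log(modPow(3, 146, 293)); // 292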
I am facing a weird issue.
parseFloat(11111111111111111) converts it to 11111111111111112.
I noticed that it works fine while the input is up to 16 digits long, but it rounds off when the input length is greater than 16.
I want to retain the original value passed in parseFloat after it is executed.
Any help?
Integers (numbers without a period or exponent notation) are considered accurate up to 15 digits.
More information here
Numbers in JavaScript are represented using 64-bit floating point values (so-called doubles in other languages).
Doubles can hold at most 15/16 significant digits (depending on the number's magnitude). Since the range of a double is 1.7E+/-308, some numbers can only be approximated by a double; in your case, 11111111111111111 cannot be represented exactly but is approximated by the value 11111111111111112. If this sounds strange, then remember that 0.3 cannot be represented exactly as a double either.
A double can hold exact integer values in the range +/-2^53; when you are operating in this range, you may expect exact values.
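A quick illustration in the console:
console.log(11111111111111111 === 11111111111111112); // true (both literals map to the same double)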
JavaScript has a constant, Number.MAX_SAFE_INTEGER, which is the highest integer that can be exactly represented.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect.
The value is 9007199254740991 (2^53 - 1), which means every integer of up to 15 digits is safe.
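For example:
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true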
JavaScript now has BigInt
BigInt is a built-in object that provides a way to represent whole numbers larger than 2^53 - 1, which is the largest number JavaScript can reliably represent with the Number primitive.
BigInt can be used for arbitrarily large integers.
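For example, with the n suffix the arithmetic is exact:
console.log(11111111111111111n); // 11111111111111111n (no rounding)
console.log(11111111111111111n + 1n); // 11111111111111112n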
As you can see in the following blog post, JavaScript only supports 53-bit integers.
If you type in the console
var x = 11111111111111111
and then type
x
you'll get
11111111111111112
This has nothing to do with the parseFloat method.
There's also a related question here about working with big numbers in JavaScript.
Try using the unary + operator.
Like this: +("1111111111111111") + 1 === 1111111111111112
Here:
var num=10;
console.log(num/2);
num=4;
console.log(num/2);
It gives me 5 and 2.
And this one:
var num=10;
console.log(num>>1);
num=4;
console.log(num>>1);
It also gives me 5 and 2.
So x/2 is the same as x>>1? But why?
For the same reason that dropping the last digit off a normal (decimal) number is the same as dividing it by 10 (ignoring, of course, any non-integer remainder).
In computers, integers are internally represented in binary (base 2). So each digit represents a power of 2 instead of a power of 10 that we're used to with the decimal system.
>> 1 just means to shift all the bits right by one, which is another way of saying "drop the last digit". Since the digits are in binary, that's equivalent to dividing by the base, which is 2.
Similarly, if you need to divide by any power of 2, you can do so using the right shift operator: To divide by 4, shift by 2; to divide by 8, shift by 3; and so on.
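A quick demonstration:
console.log(40 >> 1); // 20 (divide by 2)
console.log(40 >> 2); // 10 (divide by 4)
console.log(40 >> 3); // 5 (divide by 8)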
Note that internally, it's often more efficient to do a shift operation instead of a division operation, but any compiler worth its salt will do this optimization for you (so that you don't have to write obfuscated code to get the performance benefit -- generally, you would only use the shift operator when your intention is to manipulate bits directly, and use the division operator when your intention is to do math).
x>>1 is a bit shift, which operates on the number's binary representation. The effect is that x>>n is the same as x/(2^n) for non-negative integers, truncating any remainder (except that a bit shift is usually faster than division, as it is lower level).
When you >> something, you basically shift all its bits to the right.
When that happens, you shift the 2-place value into the 1-place value, and the 4-place value into the 2-place value, and so on.
This effectively divides a number in half.
Take the number 14, for example:
1110.
When you shift the bits, you get 111, or 7.
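In code:
console.log((14).toString(2)); // "1110"
console.log(14 >> 1); // 7, which is "111" in binary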
Take a look at this:
http://en.wikipedia.org/wiki/Division_by_two#Binary
Everything you need to know about >> can be found here and here.
I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found out that the highest possible value that still yields 2 when right-shifted by .5 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with 2^x. The highest possible values of the following numbers that still truncate properly:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
Actually, you're simply ending up doing a floor() on the first operand, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine is converting the two operands to integers first:
2.999999 >> 0.5
Becomes:
Math.floor(2.999999) >> Math.floor(0.5)
Which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer.
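You can confirm that the .5 contributes nothing:
console.log(2.999999 >> 0.5); // 2
console.log(2.999999 >> 0); // 2 (identical, since 0.5 is truncated to 0)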
The SpiderMonkey source code has:
switch (op) {
  case JSOP_LSH:
  case JSOP_RSH:
    if (!js_DoubleToECMAInt32(cx, d, &i))  // Same as Math.floor()
        return JS_FALSE;
    if (!js_DoubleToECMAInt32(cx, d2, &j)) // Same as Math.floor()
        return JS_FALSE;
    j &= 31;
    d = (op == JSOP_LSH) ? i << j : i >> j;
    break;
The "rounding up" you're seeing with certain numbers is due to the fact that the JavaScript engine can't handle decimal digits beyond a certain precision, and therefore your number ends up getting rounded up to the next integer. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.
This is possibly the single worst idea I have ever seen. Its only possible purpose for existing is for winning an obfuscated code contest. There's no significance to the long numbers you posted -- they're an artifact of the underlying floating-point implementation, filtered through god-knows how many intermediate layers. Bit-shifting by a fractional number of bits is insane and I'm surprised it doesn't raise an exception -- but that's JavaScript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple suspicions I had when reading the question:
Right-shifting any fractional number x by any fractional number y will simply truncate x, giving the same result as Math.floor() while thoroughly confusing the reader.
2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
If you wanna go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest a floating point value can get to representing the second number is greater than or equal to 3, whereas the closest a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.
I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x=2.9999999999999996 y=3
The shift right operator only operates on integers (both sides). So, shifting right by .5 bits should be exactly equivalent to shifting right by 0 bits. And, the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor().
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
Good guess, but no cigar.
As the double precision FP number has 53 bits, the last FP number before 3 is actually (exactly):
2.999999999999999555910790149937383830547332763671875
But why is the cutoff
2.9999999999999997779553950749686919152736663818359375
(and this is exact, not 49999...!), which is higher than the last representable number below 3? Rounding. The conversion routine (string to number) is simply, and correctly, programmed to round the input to the nearest floating point number.
2.999999999999999555910790149937383830547332763671875
.......(values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375
....... (values between, increasing) -> round up to 3
3
The conversion input must use full precision. If the number is exactly halfway between those two fp numbers (which is 2.9999999999999997779553950749686919152736663818359375), the rounding depends on the rounding-mode flags. The default is round-to-even, meaning that the number will be rounded to the neighbor whose last mantissa bit is even.
Now
3 = 11 (binary)
2.999... = 10.11111111111...... (binary)
All mantissa bits of 2.999... are set, so it is the odd neighbor, while 3 is the even one. That means the exact halfway value is rounded up to 3, and you are getting the strange .....49999 period because the literal must be smaller than the exact halfway point to be distinguishable from 3.
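You can see both the all-ones mantissa and the rounding in the console (a quick sketch; the second comment abbreviates the 51 ones in the output):
console.log((3).toString(2)); // "11"
console.log((2.9999999999999996).toString(2)); // "10.111...1", every mantissa bit set, the odd neighbor
console.log(2.9999999999999997779553950749686919152736663818359375 === 3); // true, the exact halfway case rounds to even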
And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
I don't know if JavaScript uses floating-point numbers or some kind of infinite-precision library, but either way, you're going to get rounding errors on an operation like this -- even if it's pretty well defined.
It should be noted that the number ".0000000000000007779553950749686919152736663818359374" is quite possibly the Epsilon, defined as "the smallest number E such that (1+E) > 1."