2.9999999999999999 >> .5? - javascript

I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found out that the highest value beginning with 2 which, when right-shifted by .5, still yields 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. In IE the number is 2.9999999999999997779¯.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with 2^x. These are the highest values that still truncate properly:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
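The doubling in those trailing digits tracks the spacing of adjacent floating point values, which doubles at each power of two. A minimal sketch, assuming standard 64-bit doubles:
// Largest double below each power of two; the gap doubles per binade.
for (var i = 0; i < 5; i++) {
    var n = Math.pow(2, i);
    console.log(n - Number.EPSILON * (n / 2));
}
// 0.9999999999999999
// 1.9999999999999998
// 3.9999999999999996
// 7.999999999999999
// 15.999999999999998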

Actually, you're simply ending up doing a floor() on the first operand, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine is converting the two operands to integers first:
2.999999 >> 0.5
Becomes:
Math.floor(2.999999) >> Math.floor(0.5)
Which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer.
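You can check this yourself with a minimal sketch (note the conversion truncates toward zero, so it only matches Math.floor() for non-negative values):
console.log(2.999999 >> 0.5);      // 2 -- same as 2.999999 >> 0
console.log(Math.floor(2.999999)); // 2
console.log(-2.5 >> 0.5);          // -2 (truncated toward zero)
console.log(Math.floor(-2.5));     // -3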
The SpiderMonkey source code has:
switch (op) {
    case JSOP_LSH:
    case JSOP_RSH:
        if (!js_DoubleToECMAInt32(cx, d, &i))  // truncates toward zero, like Math.floor() for positive values
            return JS_FALSE;
        if (!js_DoubleToECMAInt32(cx, d2, &j)) // same conversion for the shift count
            return JS_FALSE;
        j &= 31;
        d = (op == JSOP_LSH) ? i << j : i >> j;
        break;
The "rounding up" you're seeing with certain numbers is due to the fact that the JavaScript engine can't handle decimal digits beyond a certain precision, and therefore your number ends up getting rounded up to the next integer. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.

This is possibly the single worst idea I have ever seen. Its only possible purpose for existing is for winning an obfuscated code contest. There's no significance to the long numbers you posted -- they're an artifact of the underlying floating-point implementation, filtered through god-knows how many intermediate layers. Bit-shifting by a fractional number of bits is insane and I'm surprised it doesn't raise an exception -- but that's JavaScript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple of suspicions I had when reading the question:
Right-shifting any fractional number x by any fractional number y will simply truncate x, giving the same result as Math.floor() while thoroughly confusing the reader.
2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
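A quick way to see that cutoff, assuming standard 64-bit doubles:
var prev = 3 - Math.pow(2, -51); // doubles in [2, 4) are 2^-51 apart
console.log(prev);               // 2.9999999999999996
console.log(prev >> 0.5);        // 2
console.log(3 >> 0.5);           // 3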

If you want to go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest a floating point value can get to representing the second number is greater than or equal to 3, whereas the closest a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.

I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x=2.9999999999999996 y=3

The shift right operator only operates on integers (both sides). So, shifting right by .5 bits should be exactly equivalent to shifting right by 0 bits. And, the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor().

"I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3."
Good guess, but no cigar.
As the double precision FP number has 53 bits, the last FP number before 3 is actually
(exact): 2.999999999999999555910790149937383830547332763671875
But why is the cutoff
2.9999999999999997779553950749686919152736663818359375
(and this is exact, not 49999...!), which is higher than the last representable value below 3? Rounding. The string-to-number conversion routine is simply programmed correctly, rounding the input to the nearest floating point number.
2.999999999999999555910790149937383830547332763671875
.......(values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375
....... (values between, increasing) -> round up to 3
3
The conversion input must use full precision. If the number is exactly halfway between those two fp numbers (and 2.9999999999999997779553950749686919152736663818359375 is exactly that halfway point), the rounding depends on the rounding mode in effect. The default is round half to even, meaning the tie is broken toward the neighbor whose last mantissa bit is 0 (even).
Now
3 = 11. (binary)
2.999... = 10.11111111111...... (binary)
All mantissa bits are set, so 2.999... is the odd neighbor, while 3 is the even one. That means the exact halfway number gets rounded up to 3, and that is why you get the strange .....49999 period: the literal must be smaller than the exact halfway point to be distinguishable from 3.
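You can watch the tie-break happen; the first literal below is exactly a double, the second is exactly the halfway point:
var below = 2.999999999999999555910790149937383830547332763671875;
var mid   = 2.9999999999999997779553950749686919152736663818359375;
console.log(below); // 2.9999999999999996
console.log(mid);   // 3 (the exact tie rounds to even, i.e. up to 3)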

I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.

And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
I don't know if JavaScript uses floating-point numbers or some kind of infinite-precision library, but either way, you're going to get rounding errors on an operation like this -- even if it's pretty well defined.

It should be noted that 3 minus that cutoff is exactly 2⁻⁵² = .0000000000000002220446049250313..., which is the machine epsilon, defined as "the smallest number E such that (1+E) > 1."

Related

Where is the specification for conversion in JavaScript's left shift (<<) operator

I would like to determine where my lack of knowledge is with respect to JavaScript's number handling. The gap shows up in the way that JS handles the shift left operator.
It is my understanding that in JavaScript "all numbers are floating point". But that doesn't fit with the bit shifting operators, and indeed, the demonstrable behavior also tells me there's more to this than I'm aware of.
I checked the specification here: bitshift operators and all it says (at this point) is
The Left Shift Operator ( << ) Performs a bitwise left shift operation on the left operand by the amount specified by the right operand.
But if all numbers are in fact represented as a floating point format, then this, taken literally, would mean we'd be shifting the bit representations of mantissa and exponent, and would get total nonsense values as the exponent bits moved into the mantissa bits. So, my next thought was that perhaps it just takes the mantissa part, shifts that, and puts it back. But that doesn't appear to be the case either, since 2.5 << 1 is 4, not 5.
I also found that any number with a large exponent value seems to result in zeroes.
So, my guess, from this brief examination, is that the system actually takes the number, performs a conversion that would be described as "chop out a 32 bit two's complement integer value" and then does a shift on that, before converting the resulting int back to a floating representation. (Unless I was lied to about the "all numbers are floats" in the first place!)
Can anyone tell me:
a) was I lied to about the "all numbers are floats" thing?
b) if I'm right in my guess about the "convert to int, shift, then convert back" behavior, where is that documented? I have to believe it's in the spec, but it's a big one, and while I've done some searching, I've not read it all (nor do I particularly want to if I can get a hint!)
It can be found in the abstract operation Number::leftShift:
Let lnum be ! ToInt32(x).
Let rnum be ! ToUint32(y).
ToInt32, in turn, basically performs:
Let int32bit be int modulo 2³²
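In JavaScript terms, ToInt32 behaves roughly like this sketch (my own approximation of the spec steps, glossing over some edge cases):
function toInt32(x) {
    if (!isFinite(x) || x === 0) return 0;     // NaN, ±0, ±Infinity -> 0
    var int = Math.trunc(x);                   // discard the fraction
    var int32bit = ((int % 4294967296) + 4294967296) % 4294967296; // int modulo 2^32
    return int32bit >= 2147483648 ? int32bit - 4294967296 : int32bit;
}
console.log(toInt32(2.5));               // 2
console.log(toInt32(-1));                // -1
console.log(toInt32(4294967301));        // 5 (2^32 + 5 wraps around)
console.log((2.5 | 0) === toInt32(2.5)); // true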

Faster way to multiply in javascript?

I'm looking at ways to multiply faster in JavaScript, and I found this
https://medium.com/#p_arithmetic/4-more-javascript-hacks-to-make-your-javascript-faster-1f5fd88a219e#.306ophng2
which has this text
Instead of multiplying, use the bit shift operation. it looks a little more complex, but once you get the hang of it, it’s pretty simple. The formula to multiply x * y is simply x << (y-1)
As a bonus, everyone else will think you’re really smart!
// multiply by 1
[1,2,3,4].forEach(function(n){ return n<<0; }) // 1,2,3,4
// multiply by 2
[1,2,3,4].forEach(function(n){ return n<<1; }) // 2,4,6,8
// multiply by 3
[1,2,3,4].forEach(function(n){ return n<<2; }) // 3,6,9,12
// etc
However, this doesn't seem to work for me. Does anyone know what's wrong?
Thanks
Ignore that article. It's intended as humor -- the advice it's giving is intentionally terrible and wrong.
Multiplication does not involve "expensive logarithmic look up tables and extremely lucky guesses" with "dozens of guesses per second". It is a highly optimized hardware operation, and any modern CPU can perform hundreds of millions (or more!) of these operations per second.
Bitwise operations are not faster than multiplication in Javascript. In fact, they're much slower -- numbers are generally stored as double-precision floating point by default, so performing a bitwise operation requires them to be converted to integers, then back.
Bit-shifting is not equivalent to multiplication in the way that the article implies. While left-shifting by 0 and 1 are equivalent to multiplication by 1 and 2, the pattern continues with <<2 and <<3 being equivalent to a multiplication by 4 and 8, not 3 and 4.
Array.forEach does not return a value. The appropriate function to use here would be Array.map.
The other "Javascript hacks" described in the article are even worse. I won't bother going into details.
Bit shift is not the same as multiplying by an arbitrary number. This is how bit shift would work in decimal, for example:
1234 >> 0 = 1234
1234 >> 1 = 0123 # discards the last digit
0123 >> 2 = 0001 # discards the last two digits
So in decimal, shifting digits to the right is like integer division by 10. When shifted by two digits it's the same as dividing by 10^2.
Now computers work with binary words. So instead of a digit shift, we have bit shifts.
10101011 >> 1 = 01010101
This is basically dividing by 2^1.
10101011 >> n is simply dividing by 2^n.
Now in the same manner,
10101011 << n is multiplying by 2^n. This appends n zeroes to the binary word.
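For example, in JavaScript (a sketch using a binary literal):
var x = 0b10101011;  // 171
console.log(x >> 1); // 85  -- Math.floor(171 / 2)
console.log(x >> 3); // 21  -- Math.floor(171 / 8)
console.log(x << 2); // 684 -- 171 * 4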
Hope it's clearer now :)
Cheers
Incidentally, if you want to know why it isn't working for you, you will want to use map instead of forEach since forEach ignores the return.
So do this instead:
// multiply by 1
[1,2,3,4].map(function(n){ return n<<0; }) // 1,2,3,4
// multiply by 2
[1,2,3,4].map(function(n){ return n<<1; }) // 2,4,6,8
// multiply by 4
[1,2,3,4].map(function(n){ return n<<2; }) // 4,8,12,16
// multiply by 8
[1,2,3,4].map(function(n){ return n<<3; }) // 8, 16, 24, 32
And as others have mentioned - don't use this for multiplication. Just take this as a clarification of forEach vs map...

JavaScript Math.floor: how guarantee number will round down?

I want to normalize an array so that each value is
in [0, 1), i.e. "the max will never be 1 but the min can be 0."
This is not unlike the random function returning numbers in the same range.
While looking at this, I found that .99999999999999999===1 is true!
Ditto (1-Number.MIN_VALUE) === 1. But Math.ceil(Number.MIN_VALUE) is 1, as it should be.
Some others: Math.floor(.999999999999) is 0
while Math.floor(.99999999999999999) is 1
OK so there are rounding problems in JS.
Is there any way I can normalize a set of numbers to lie in the range [0,1)?
It may help to examine the steps that JavaScript performs for each of your expressions.
In .99999999999999999===1:
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result. (The next closest Number is 0.99999999999999988897769753748434595763683319091796875, which is 1 - 2⁻⁵³.)
Then 1 is compared to 1. The result is true.
In (1-Number.MIN_VALUE) === 1:
Number.MIN_VALUE is 2⁻¹⁰⁷⁴, about 5e-324.
1 - 2⁻¹⁰⁷⁴ is extremely close to one. The exact value cannot be represented as a Number, so the nearest value is used. Again, the nearest value is 1.
Then 1 is compared to 1. The result is true.
In Math.ceil(Number.MIN_VALUE):
Number.MIN_VALUE is 2⁻¹⁰⁷⁴, about 5e-324.
The ceiling function of that value is 1.
In Math.floor(.999999999999):
The source text .999999999999 is converted to a Number. The closest Number is 0.99999999999900002212172012150404043495655059814453125, so that is the result.
The floor function of that value is 0.
In Math.floor(.99999999999999999):
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result.
The floor function of 1 is 1.
There are only two surprising things here, at most. One is that the numerals in the source text are converted to internal Number values. But this should not be surprising. Of course text has to be converted to internal representations of numbers, and the Number type cannot perfectly store all the infinitely many numbers. So it has to round. And of course numbers very near 1 round to 1.
The other possibly surprising thing is that 1-Number.MIN_VALUE is 1. But this is actually the same issue: The exact result is not representable, but it is very near 1, so 1 is used.
The Math.floor function works correctly. It never introduces any error, and you do not have to do anything to guarantee that it will round down. It always does.
However, since you want to normalize numbers, it seems likely you are going to divide numbers at some point. When you divide, there may be rounding problems, because many results of division are not exactly representable, so they must be rounded.
However, that is a separate problem, and you have not given enough information in this question to address the specific calculations you plan to do. You should open a separate question for it.
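That said, here is a minimal sketch of one way to normalize into [0, 1) -- my own suggestion, clamping after the division:
function normalize(arr) {
    var min = Math.min.apply(null, arr);
    var max = Math.max.apply(null, arr); // assumes max > min
    return arr.map(function (v) {
        var t = (v - min) / (max - min);        // the max maps to exactly 1
        return Math.min(t, 1 - Number.EPSILON); // clamp strictly below 1
    });
}
console.log(normalize([2, 4, 6])); // [0, 0.5, 0.9999999999999998]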
Javascript will treat any number between 0.999999999999999994 and 1 as 1, so just subtract .000000000000000006.
Of course that's not as easy as it sounds, since subtracting .000000000000000006 from a number that close to 1 has no effect (the result rounds right back to the same double), so you could do something like:
function trueFloor(x)
{
    x = x * 100;
    if (x > .0000000000000006)
        x = x - .0000000000000006;
    x = Math.floor(x / 100);
    return x;
}
EDIT: Or at least you'd think you could. Apparently JS casts .99999999999999999 to 1 before passing it to a function, so you'd have to try something like:
trueFloor("0.99999999999999999")
function trueFloor(str)
{
    x = str.substring(0, 9) + 0;
    return Math.floor(x); // => 0
}
Not sure why you'd need that level of precision, but in theory, I guess it works. You can see a working fiddle here
As long as you cast your insanely precise float as a string, that's probably your best bet.
Please understand one thing: this...
.999999999999999999
... is just a Number literal. Just as
.999999999999999998
.999999999999999997
.999999999999999996
...
... you see the pattern.
How JavaScript treats these literals is completely another story. And yes, this treatment is limited by the number of bits that can be used to store a Number value.
The number of possible floating point literals is infinite by definition - no matter how small the range set for them. For example, take the ones shown above: how many numbers very close to 1 can you express? Right, it's infinite: just keep appending 9 to the line.
But the container for each Number value is quite finite: it has 64 bits. That means it can store 2^64 different values (Infinity, -Infinity and NaN among them) - and that's all.
You want to work with such literals anyway? Use Strings to store them, not Numbers - and some BigMath JS library (take your pick) to work with those values - as Strings, again.
But from your question it looks like you're not, as you talked about an array of Numbers - Number values, that is. And there is no way .999999999999999999 can be stored there, as there is no such Number value in JavaScript.

Preserving the floating point & addition of a bitwise operation in javascript

I am trying to understand the way to add, subtract, divide, and multiply by operating on the bits.
It is necessary to do some optimizing in my JavaScript program due to many calculations running after an event has happened.
Using the code below for reference, I am able to understand that the carry holds the result of the &. Then the XOr sets the sum var to the bits that do not match in n1 and n2.
Here is my question.;) What does shifting the (n1 & n2)<<1 by 1 do? What is the goal by doing this? As with the XOr it is obvious that there is no need to do anything else with those bits because their decimal values are ok as they are in the sum var. I can't picture in my head what is being accomplished by the & shift operation.
function add(n1, n2)
{
    var carry, sum;

    // Find out which bits will result in a carry.
    // Those bits will affect the bits directly to
    // the left, so we shall shift one bit.
    carry = (n1 & n2) << 1;

    // In digital electronics, an XOR gate is also known
    // as a quarter adder. Basically an addition is performed
    // on each individual bit, and the carry is discarded.
    //
    // All I'm doing here is applying the same concept.
    sum = n1 ^ n2;

    // If any bits match in position, then perform the
    // addition on the current sum and the results of
    // the carry.
    if (sum & carry)
    {
        return add(sum, carry);
    }
    // Return the sum.
    else
    {
        return sum ^ carry;
    }
}
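For reference, a quick check of what this does with integers versus fractions:
console.log(add(13, 29));    // 42 -- works for integers
console.log(add(2.5, 0.25)); // 2  -- the fractions are lost: bitwise ops coerce to int32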
The code above works as expected, but it does not handle floating point values. I've got to have the total returned along with its floating point part.
Does anyone have a function that I can use with the above that will help me with floating point values? Or a website with a clear explanation of what I am looking for? I've been searching for the last day or so and cannot find anything to look over.
I got the code above from this resource.
http://www.dreamincode.net/code/snippet3015.htm
Thanks ahead of time!
After thinking about it, doing a left shift by one position is a multiplication by 2.
By &ing like this: carry = (n1 & n2) << 1; the carry var will hold a string of binaries compiled of the matched positions in n1 and n2. So, if n1 is 4 and n2 is 4 they both hold the same value. Therefore, combining the two and left shifting by one position will multiply 4 x 2 = 8; so carry would now equal 8.
1.) var carry = 00001000 (= 8)
        &       00001000 (= 8)
2.) carry now holds the single value 00001000 (= 8)
3.) carry = carry << 1 shifts all bits over one position; a left shift multiplies 8 x 2 = 16, or 8 + 8 = 16
4.) carry now holds the single value 00010000 (= 16)
I still cannot find anything on working with floating point values. If anyone has anything, do post a link.
It doesn't work because the code assumes that the floating point numbers are represented as integer numbers, which they aren't. Floating point numbers are represented using the IEEE 754 standard, which breaks a number into three parts: a sign bit, a group of bits representing an exponent, and another group representing a number between 1 (inclusive) and 2 (exclusive), the mantissa, and the value is calculated as
(sign bit set ? -1 : 1) * mantissa * 2^(exponent - bias)
Where the bias depends on the precision of the floating point number. So the algorithm you use for adding two numbers assumes that the bits represent an integer which is not the case for floating point numbers. Operations such as bitwise-AND and bitwise-OR also don't give the results that you'd expect in an integer world.
Some examples, in double precision, the number 2.3 is represented as (in hex) 4002666666666666, while the number 5.3 is represented as 4015333333333333. OR-ing those two numbers will give you 4017777777777777, which represents (roughly) 5.866666.
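If you want to inspect those bit patterns yourself, here is a sketch using a DataView:
function hexBits(x) {
    var view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, x); // big-endian by default
    var hex = "";
    for (var i = 0; i < 8; i++) {
        hex += ("0" + view.getUint8(i).toString(16)).slice(-2);
    }
    return hex;
}
console.log(hexBits(2.3)); // "4002666666666666"
console.log(hexBits(5.3)); // "4015333333333333"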
There are some good pointers on this format; I found the links at http://www.psc.edu/general/software/packages/ieee/ieee.php, http://babbage.cs.qc.edu/IEEE-754/ and http://www.binaryconvert.com/convert_double.html fairly good for understanding it.
Now, if you still want to implement the bitwise addition for those numbers, you can. But you'll have to break the number down into its parts, then normalize the numbers to the same exponent (otherwise you won't be able to add them), perform the addition on the mantissa, and finally normalize it back to the IEEE 754 format. But, as @LukeGT said, you'll likely not get better performance than the JS engine you're running. And some JS implementations don't even support bitwise operations on floating point numbers, so what usually ends up happening is that they first cast the numbers to integers, then perform the operation, which will make your results incorrect as well.
Floating point values have a complicated bit structure, which is very difficult to manipulate with bit operations. As a result, I doubt you could do any better than the Javascript engine at computing them. Floating point calculations are inherently slow, so you should try to avoid them if you're worried about speed.
Try using integers to represent a decimal number to a fixed number of digits instead. For example, if you were working with currency, you could store things in terms of whole cents as opposed to dollars with fractional values.
Hope that helps.

Unexpected value on multiplication in javascript [duplicate]

There is some problem I can't understand.
Look at this code, please:
<script type="text/javascript">
function math(x)
{
    var y;
    y = x * 10;
    alert(y);
}
</script>
<input type="button" onclick="math(0.011)">
What should be alerted after I click the button?
I think 0.11, but no, it alerts
0.10999999999999999
Please explain this behavior.
Thanks in advance
Ok. Let me try to explain it.
The basic thing to remember with floating point numbers is this: They occupy a limited amount of bits and try to represent the original number using base-2 arithmetic.
As you know, in base-2 arithmetic integers are represented by the powers of 2 that they contain. Thus, 6 would be represented as 4 + 2, i.e. in binary as 110.
In order to understand how fractional numbers are represented, you have to think about how we represent fractional numbers in our decimal system. The fractional part of numbers (for example 0.11) is represented as multiples of inverse powers of 10 (since the base is 10). Thus 0.11 is actually 1/10 + 1/100. As you can appreciate, this is not powerful enough to represent all fractional numbers in a limited number of digits. For example, 1/3 would be 0.333333.... in a never ending fashion. If we had only 32 digits of space to write the number down, we would end up having only an approximation to the original number, 0.33333333333333333333333333333333. This number, for example, would give 0.99999999999999999999999999999999 if it was multiplied by 3 and not 1 as you would have expected.
The situation is similar in base-2. Each fractional number would be represented as multiples of inverse powers of 2. Thus 0.75 (in decimal) (ie 3/4) would be represented as 1/2 + 1/4, which would mean 0.11 (in base-2). Just as base 10 is not capable enough to represent every fractional number in a finite manner, base-2 cannot represent all fractional numbers given a limited amount of space.
Now, try to represent 0.11 in base-2; you start with 11/100 and try to find an inverse power of 2 that is just less than this number. 1/2 doesn't work, 1/4 neither, nor does 1/8. 1/16 fits the bill, so you mark a 1 in the 4th place after the decimal point and subtract 1/16 from 11/100. You are left with 19/400. Now try to find the next power of 2 that fits the description. 1/32 seems to be that one, mark the 5th place after the point and subtract 1/32 from 19/400, you get 13/800. Next one is 1/64 and you are left with 1/1600 thus the next one is all the way up at 1/2048, etc. etc. Thus we got as far as 0.00011100001 but it goes on and on; and you will see that there always is a fraction remaining. Now, I didn't go through the whole calculation, but after you have put in 32 binary digits after the dot you will still probably have some fraction left (and this is assuming that all of the 32 bits of space is spent representing the decimal part, which it is not). Thus, I am sure you can appreciate that the resulting number might differ from its actual value by some amount.
In your case, the difference is 0.00000000000000001 which is 1/100000000000000000 = 1/10^17 and I am sure that you can see why you might have that.
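You can make the hidden digits visible with toPrecision (a sketch; the exact trailing digits come from the stored double):
console.log((0.011).toPrecision(21));      // the stored value is not exactly 0.011
console.log((0.011 * 10).toPrecision(21)); // ...and the product is not exactly 0.11
console.log(0.011 * 10);                   // 0.10999999999999999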
This is because you are dealing with floating point, and this is the expected behavior of floating point math.
What you need to do is format that number.
See this Java explanation, which also applies here if you want to know why this is happening.
In JavaScript all numbers are represented as 64-bit floats, so you will run into this sort of thing often.
The quick overview of that article is that floating point tries to represent a range of values larger than would fit in 64 bits, so there is going to be some imprecise representation, and this is what you are seeing.
With floating point numbers you get a representation of the number you try to encode. Mostly it is a number that is very close to the original number. More information on encoding/storing floating point numbers can be found here.
Note:
If you show the value of x, it still displays as 0.011, because that is the shortest decimal string that converts back to the same stored double. The rounding error is already present in x; it only becomes visible after multiplying by 10.
You can try to fix the nr of decimals with this one:
// fl is a float number with some nr of decimals
// d is how many decimals you want
function dec(fl, d) {
    var p = Math.pow(10, d);
    return Math.round(fl * p) / p;
}
Ex:
var n = 0.0012345;
console.log(dec(n,6)); // 0.001235
console.log(dec(n,5)); // 0.00123
console.log(dec(n,4)); // 0.0012
console.log(dec(n,3)); // 0.001
It works by first multiplying the float by 10^d: 10^3 (1000) for three decimals, or 10^2 (100) for two decimals. Then it rounds that and divides it back down to the original scale.
Math.pow(10, d) computes 10^d (so d = 3 gives us 1000).
In your case, do alert(dec(y, 2)); it should work.
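If you only need the rounded value for display, the built-in toFixed does something similar but returns a string:
console.log((0.011 * 10).toFixed(2));         // "0.11" (a string)
console.log(Number((0.011 * 10).toFixed(2))); // 0.11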
