JavaScript: Is This Truly Signed Integer Division?

Given the following code, where both a and b are Numbers holding values within the range of 32-bit signed integers:
var quotient = ((a|0) / (b|0))|0;
and assuming that the runtime is in full compliance with the ECMAScript 6 specification, will the value of quotient always be the correct signed integer division of a and b as integers? In other words, is this a proper method to achieve true signed integer division in JavaScript, equivalent to the machine instruction?

I'm no expert on floating-point numbers, but Wikipedia says that doubles have 52 bits of precision. Logically, it seems that 52 bits should be enough to reliably approximate integer division of 32-bit integers.
Dividing the minimum and maximum 32-bit signed ints, -2147483648 / 2147483647, produces -1.0000000004656613, which is still a reasonable amount of significant digits. The same goes for its inverse, 2147483647 / -2147483648, which produces -0.9999999995343387.
An exception is division by zero, which I mentioned in a comment. As the linked SO question states, integer division by zero normally throws some sort of error, whereas floating-point coercion results in (1 / 0) | 0 == 0.
Update: According to another SO answer, integer division in C truncates towards zero, which is what |0 does in JavaScript. In addition, division by 0 is undefined, so JavaScript is technically not incorrect in returning zero. Unless I've missed anything else, the answer to the original question should be yes.
Update 2: Relevant sections of the ECMAScript 6 spec: how to divide numbers and how to convert to a 32-bit signed integer, which is what |0 does.
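To make the behavior concrete, here is a small sketch of the pattern under discussion (the function name idiv is chosen here for illustration). Note the two edge cases: division by zero yields 0 rather than throwing, and INT_MIN / -1 wraps via ToInt32 instead of overflowing like the machine instruction would:

```javascript
// Truncating 32-bit signed division via |0. Sketch only; "idiv" is an
// illustrative name, not a standard function.
function idiv(a, b) {
  return ((a | 0) / (b | 0)) | 0;
}

console.log(idiv(7, 2));             // 3
console.log(idiv(-7, 2));            // -3 (|0 truncates toward zero, it does not floor)
console.log(idiv(1, 0));             // 0  (no exception, unlike a machine idiv)
console.log(idiv(-2147483648, -1));  // -2147483648 (ToInt32 wraps 2147483648)
```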

Related

facing an issue with parseFloat when input is more than 16 digits

I am facing a weird issue.
parseFloat(11111111111111111) converts it to 11111111111111112.
I noticed that it works fine till length is 16 but rounds off higher when input length is > 16.
I want to retain the original value passed in parseFloat after it is executed.
Any help?
Integers (numbers without a period or exponent notation) are considered accurate up to 15 digits.
More information here
Numbers in javascript are represented using 64 bit floating point values (so called doubles in other languages).
Doubles can hold at most 15 or 16 significant digits (depending on the number's magnitude). Since the range of a double is ±1.7E308, some numbers can only be approximated by a double; in your case 11111111111111111 cannot be represented exactly and is approximated as 11111111111111112. If this sounds strange, remember that 0.3 cannot be represented exactly as a double either.
A double can hold exact integer values in the range ±2^53; when you operate within this range, you may expect exact values.
Javascript has a constant, Number.MAX_SAFE_INTEGER which is the highest integer that can be exactly represented.
Safe in this context refers to the ability to represent integers exactly and to compare them correctly. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 evaluates to true, which is mathematically incorrect.
The value is 9007199254740991 (2^53 - 1) which makes a maximum of 15 digits safe.
JavaScript now has BigInt
BigInt is a built-in object that provides a way to represent whole numbers larger than 2^53 - 1, which is the largest number JavaScript can reliably represent with the Number primitive.
BigInt can be used for arbitrarily large integers.
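A short sketch of the BigInt approach: parse the string yourself instead of going through parseFloat, and all digits survive:

```javascript
// parseFloat routes through a double and rounds; BigInt parses the string exactly.
// Sketch only: the input is assumed to be an integer string.
const input = "11111111111111111";        // 17 digits: too many for a double
console.log(parseFloat(input));           // 11111111111111112 (rounded)
console.log(BigInt(input).toString());    // "11111111111111111" (exact)
```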
As you can see in the following blog post, JavaScript only supports 53 bit integers.
if you type in the console
var x = 11111111111111111
and then type
x
you'll get
11111111111111112
This has nothing to do with the parseFloat method.
There's also a related question here about working with big numbers in JavaScript.
Try using the unary + operator, e.g. +("1111111111111111") + 1 === 1111111111111112. (This only helps while the result stays within Number.MAX_SAFE_INTEGER.)

What is the maximum integer it is safe to use in a Javascript bitmask flag value?

This is mostly just a sanity-check.
Mozilla says that
The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format.
and that
The numbers -2147483648 and 2147483647 are the minimum and the maximum integers representable through a 32-bit signed number.
Since 2147483647 is 0x7FFFFFFF, I believe that 0x40000000 (that is to say, not 0x80000000) is the maximum number I can safely use as a javascript flag value. But I'd like to make sure I haven't missed something or that there aren't other gotchas. Thank you in advance!
The value range is the full 32-bit range, i.e. 0 to 0xffffffff (or 2^32 - 1). Whether it is treated as signed or not depends on the operator. If treated as signed, this will produce -1:
document.write(0xffffffff>>0);
But you can use unsigned values too, which means the range is [0, 4294967295]:
document.write(0xffffffff>>>0);
The number 0x40000000 only gives you half your range (in the negative direction; in the positive direction it would be 0x40000000 - 1, or 0x3fffffff), so it is not the upper bound of the 32-bit signed range.
Your safe range for signed numbers is [-0x80000000, 0x7fffffff], so the common safe mask is 0x7fffffff; however, you would need to preserve the sign bit:
number = number < 0 ? number & 0xffffffff : number & 0x7fffffff;
And for unsigned your mask would always be 0xffffffff.
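In practice this means a signed 32-bit bitmask gives you 31 independent flag bits (bits 0 through 30) without touching the sign bit. A sketch, with flag names invented for illustration:

```javascript
// 31 usable flag bits with signed 32-bit bitwise operators.
const FLAG_A = 1 << 0;
const FLAG_B = 1 << 15;
const FLAG_C = 1 << 30;   // highest flag that stays positive: 0x40000000

let flags = 0;
flags |= FLAG_A | FLAG_C;
console.log((flags & FLAG_C) !== 0);  // true
console.log((flags & FLAG_B) !== 0);  // false
console.log(1 << 31);                 // -2147483648: bit 31 flips the sign
```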

How can I get exact value string of huge numbers in JavaScript?

I know JavaScript numbers are just "double" numbers and have only 52 bits of precision for the fraction part. However, real JavaScript numbers seem to have more practical precision for huge numbers.
For example, the predefined constant Number.MAX_VALUE represents the largest positive finite value of the Number type, which is approximately 1.7976931348623157e+308. Here I can access trailing digits of this value using a modulus operator.
> Number.MAX_VALUE
1.7976931348623157e+308
> Number.MAX_VALUE % 10000000000
4124858368
From this result I can assume that this number is 7fef ffff ffff ffff which represents (1 + (1 − 2 ** −52)) × 2 ** 1023 (Wikipedia) and can be transcribed in an exact form as following:
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
...and we only saw the trailing 10 digits of these 309 digits. So I think each JavaScript number must have an exact representation in decimal form.
My question is: how do I get this 309-digit string in JavaScript? Attempts like Number.MAX_VALUE / 10000000000 % 10000000000 just fail because of such hugeness.
Furthermore, how about tiny numbers such as Number.MIN_VALUE? This must be the following fraction in the decimal form.
0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004940656458412465441765687928682213723650598026143247644255856825006755072702087518652998363616359923797965646954457177309266567103559397963987747960107818781263007131903114045278458171678489821036887186360569987307230500063874091535649843873124733972731696151400317153853980741262385655911710266585566867681870395603106249319452715914924553293054565444011274801297099995419319894090804165633245247571478690147267801593552386115501348035264934720193790268107107491703332226844753335720832431936092382893458368060106011506169809753078342277318329247904982524730776375927247874656084778203734469699533647017972677717585125660551199131504891101451037862738167250955837389733598993664809941164205702637090279242767544565229087538682506419718265533447265625
The digits of MAX_VALUE are (note: this is the expansion of the rounded 17-digit string 1.7976931348623157e+308, not the exact binary value quoted in the question):
179769313486231570000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Check out the code below:
http://howjs.com/?%3Aload%20http%3A%2F%2Fwww.javascriptoo.com%2Fapplication%2Fhtml%2Fjs%2FMikeMcl%2Fbig.js%2Fbig.min.js%0A%0Avar%20max%20%3D%20new%20Big(%20Number.MAX_VALUE%20)%3B%0Amax.toFixed()%3B
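On engines with BigInt support, no library is needed for either value: Number.MAX_VALUE is an integer-valued double so BigInt converts it exactly, and Number.MIN_VALUE is 2^-1074 = 5^1074 / 10^1074, so its decimal digits are those of 5^1074 left-padded with zeros. A sketch:

```javascript
// Exact 309-digit decimal expansion of Number.MAX_VALUE.
const maxDigits = BigInt(Number.MAX_VALUE).toString();
console.log(maxDigits.length);      // 309
console.log(maxDigits.slice(-10));  // "4124858368", matching MAX_VALUE % 1e10

// Exact decimal expansion of Number.MIN_VALUE = 2^-1074:
// the digits of 5^1074, padded to 1074 places after the decimal point.
const minDigits = "0." + (5n ** 1074n).toString().padStart(1074, "0");
console.log(minDigits.endsWith("265625"));  // true
```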
The actual implementation of IEEE floating point numbers is a little (little!!!) confusing.
I find it helps to think of a simpler form; it behaves the same everywhere except near the overflows and underflows, where the IEEE format is just better.
This is the form:
A floating point number consists of:
A sign for the number (+/-)
An unsigned integer value called the "mantissa" -- make this 'v'
An unsigned integer value called the "exponent" -- make this 'n'
A "sign" for the exponent.
The sign of the number is simple -- does it have a minus in front.
The value is calculated as:
v*2ⁿ
If the sign for the exponent is positive, the exponent is basically 2*2*2*...*2 for as many twos as you have specified. If a large number is represented in decimal it will have lots of digits all the way down to the decimal point, BUT most of them are meaningless: if you display the number in binary, everything after about 53 binary digits is zeros, and you can't change them.
Notice that with a positive exponent all of this is integer arithmetic; floating point numbers (including IEEE ones) calculate exactly with integers as long as you don't overflow the mantissa. When you do overflow it they are still well behaved, they just have zeros in the lower bits.
Only when the exponent is negative do you have strangeness
v/(2ⁿ)
The value you get for a negative exponent is still based on the 2*2*2*...*2 value, but you divide by it instead. So you're trying to represent, say, a tenth with a sum of halves, quarters, eighths and so forth... but this doesn't work exactly, so you get rounding errors and all the lovely floating point problems.
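Both halves of this explanation can be checked directly in the console; a quick sketch:

```javascript
// Positive exponents: integers are exact up to 2^53, then the low bits go.
console.log(2 ** 53);       // 9007199254740992
console.log(2 ** 53 + 1);   // 9007199254740992 (the +1 cannot be represented)
console.log(2 ** 53 + 2);   // 9007199254740994 (spacing between doubles is now 2)

// Negative exponents: sums of halves, quarters, eighths... round.
console.log(0.1 + 0.2);     // 0.30000000000000004
```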
Your example value:
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
In binary it is
1111111111111111111111111111111111111111111111111111100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
There's lots of zeros on the end.
What every computer scientist should know about floating point

Adding to Number.MAX_VALUE

The answer to this question may be painfully obvious but I can't find it in the Mozilla docs nor on Google from a cursory search.
If you have some code like this
Number.MAX_VALUE + 1; // Infinity, right?
Number.MIN_VALUE - 1; // -Infinity, right?
Then I would expect adding anything to Number.MAX_VALUE would push it over to Infinity. The result is just Number.MAX_VALUE spat right back at me.
However, when playing around in the Chrome JS console, I noticed that it didn't actually become Infinity until I added/subtracted enough:
Number.MAX_VALUE + Math.pow(100,1000); // now we hit Infinity
Number.MIN_VALUE - Math.pow(100,1000); // -Infinity at last
What is the explanation for this "buffer" between Number.MAX_VALUE and Infinity?
Standardwise...
In ECMAScript, addition of two nonzero finite numbers is implemented as (ECMA-262 §11.6.3 "Applying the Additive Operators to Numbers"):
the sum is computed and rounded to the nearest representable value using IEEE 754 round-to-nearest mode. If the magnitude is too large to represent, the operation overflows and the result is then an infinity of appropriate sign.
IEEE-754's round-to-nearest mode specifies that (IEEE-754 2008 §4.3.1 "Rounding-direction attributes to nearest")
In the following two rounding-direction attributes, an infinitely precise result with magnitude at least b^emax (b − ½ b^(1−p)) shall round to ∞ with no change in sign; here emax and p are determined by the destination format (see 3.3). With:
roundTiesToEven, the floating-point number nearest to the infinitely precise result shall be delivered; if the two nearest floating-point numbers bracketing an unrepresentable infinitely precise result are equally near, the one with an even least significant digit shall be delivered
roundTiesToAway, the floating-point number nearest to the infinitely precise result shall be delivered; if the two nearest floating-point numbers bracketing an unrepresentable infinitely precise result are equally near, the one with larger magnitude shall be delivered.
ECMAScript does not specify which round-to-nearest variant is used, but it doesn't matter here because both give the same result. The Number type in ECMAScript is "double", in which
b = 2
emax = 1023
p = 53,
so the result must be at least 2^1024 − 2^970 ≈ 1.7976931348623158 × 10^308 in order to round to infinity. Otherwise it will just round to MAX_VALUE, because that is closer than Infinity.
Notice that MAX_VALUE = 2^1024 − 2^971, so you need to add at least 2^971 − 2^970 = 2^970 ≈ 9.979202 × 10^291 in order to get infinity. We can check:
>>> Number.MAX_VALUE + 9.979201e291
1.7976931348623157e+308
>>> Number.MAX_VALUE + 9.979202e291
Infinity
Meanwhile, your Math.pow(100,1000) ≈ 2^6643.9 is well beyond 2^1024 − 2^970. It is already infinity.
If you look at Number.MAX_VALUE.toString(2), you'll see that the binary representation of MAX_VALUE is 53 ones followed by 971 zeros. This is because IEEE 754 floating points are made of a mantissa (coefficient) multiplied by a power of 2 (the other half of the floating point number is the exponent). With MAX_VALUE, both the mantissa and the exponent are maxed out, so you see a bunch of ones bit-shifted up a lot.
In short, you need to increase MAX_VALUE enough to actually affect the mantissa, otherwise your additional value gets lost and rounded out.
Math.pow(2, 969) is the lowest power of 2 that will not tip MAX_VALUE into Infinity.
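That threshold can be demonstrated in two lines; both powers of 2 here are exactly representable, so no intermediate rounding interferes:

```javascript
// MAX_VALUE is 2^1024 - 2^971; sums below 2^1024 - 2^970 round back down
// to MAX_VALUE, and sums at or above it round to Infinity.
console.log(Number.MAX_VALUE + Math.pow(2, 969));  // 1.7976931348623157e+308
console.log(Number.MAX_VALUE + Math.pow(2, 970));  // Infinity
```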

parseInt rounds incorrectly

I stumbled upon this issue with parseInt and I'm not sure why this is happening.
console.log(parseInt("16980884512690999")); // gives 16980884512691000
console.log(parseInt("169808845126909101")); // gives 169808845126909100
I'm clearly not hitting any of JavaScript's number limits
(Number.MAX_VALUE = 1.7976931348623157e+308).
Running Win 7 64 bit if that matters.
What am I overlooking?
Fiddle
Don't confuse Number.MAX_VALUE with the maximum accurate value. All numbers in JavaScript are stored as 64-bit floating point, which means you can get high (and low) numbers, but they'll only be accurate to a certain point.
Double-precision floats (i.e. JavaScript's) have 53 bits of significand precision, which means the highest/lowest "certainly accurate" integer in JavaScript is ±9007199254740992 (2^53). Numbers above/below that may still turn out to be exact (the ones that simply add 0s on the end, because the exponent bits can represent that).
Or, in the words of ECMAScript: "Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and −0)."
Update
Just to add a bit to the existing question, the ECMAScript spec requires that if an integral Number has fewer than 22 digits, .toString() will output it in standard decimal notation (e.g. 169808845126909100000 as in your example). If it has 22 or more digits, it will be output in normalized scientific notation (e.g. 1698088451269091000000 - an additional 0 - is output as 1.698088451269091e+21).
From this answer
All numbers in JavaScript are 64-bit "double" precision IEEE 754 floating point.
The largest positive whole number that can therefore be accurately
represented is 2^53. The remaining bits are reserved for the exponent.
2^53 = 9007199254740992
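So parseInt itself is not the culprit: it returns a double, and the rounding happens there. If you need every digit, a sketch of the BigInt alternative:

```javascript
// parseInt goes through a double, so digits beyond 2^53 are lost; BigInt is exact.
console.log(parseInt("16980884512690999"));           // 16980884512691000
console.log(BigInt("16980884512690999").toString());  // "16980884512690999"
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991
```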
