Cross-browser JavaScript number precision - javascript

Within JavaScript, numbers are defined as 64-bit double-precision. I have a specific use in mind for a distributed web application, which would only work if I can rely on consistent results across all browsers.
Despite the spec using the IEEE standard, I naturally have a suspicion that there may be tiny differences in implementations of the maths library or even the underlying hardware, which could cause compound errors.
Is there any source of compatibility data, or a reliable test suite to verify double precision calculations in the browser? In particular, I also need to consider mobile browsers (usually ARM based).
Clarification -
This is a question about browser compatibility. I'm trying to understand whether all browsers can be relied upon to treat numbers in a reliable, consistent and repeatable way as defined for IEEE floating point. In most languages this is a safe assumption, but it's interesting that there's a little uncertainty about this in the browser.
There's been some great advice on how to avoid floating point problems due to lack of precision and rounding errors. In most cases, if you require accuracy you should follow this advice!
For this question, I'm not trying to avoid the problem but to understand it. Floating point numbers are inherently inexact by design, but as long as some care is taken with how builds are made, that inexactness can be completely predictable and consistent. IEEE-754 describes this to a level of detail that only a standards body could.
I've decided to offer a small bounty if anyone can cite,
Genuine compatibility data relating to the implementation of IEEE numbers in mainstream browsers.
A test suite intended to verify the implementation within the browsers, including verifying the correct internal use of a 64 bit floating point number (53 bit mantissa).
In this question I'm not looking for alternative options, workarounds or ways to avoid the problem. Thank you for the suggestions.

This is just for fun, as you already stated; I created a new answer because this one is in a different vein. But I still feel like there are a few random passersby who are ignoring the futility of the problem. So let's start by addressing your points:
Firstly:
Genuine compatibility data relating to the implementation of IEEE
numbers in mainstream browsers.
doesn't exist, and for that matter doesn't even make much sense; IEEE is just a standards body. I am not sure whether this was vague on purpose or by accident, so I will assume you meant IEEE 754, but therein lies the rub: there are technically two versions of this standard, IEEE 754-2008 and IEEE 754-1985. Basically, the former is newer and addresses the latter's oversights. Any sane person would assume that any maintained JavaScript implementation would update to the latest and greatest standard, but any sane person should know JavaScript better than that, and even if JavaScript weren't crazy, there is no specification saying that the implementation has to be (or stay) up to date (check the ECMA spec yourself if you don't believe me; they don't even talk about "versions"). To compound matters further, IEEE 754-2008 supports two encoding formats: the decimal encoding format and the binary encoding format. As one would expect, they are compatible with each other in the sense that you can go back and forth without loss of data, but that assumes we have access to the binary representation of the number, which we don't (without attaching a debugger and looking at the store the old-school way).
However, from what I can tell it seems to be general practice to "back" a JavaScript Number with an old-fashioned double, which of course means that we are at the mercy of the compiler used to actually build the browser. But even in that realm we can't and shouldn't assume equality, even if all the compilers were on the same version of the standard (they aren't) and even if all the compilers implemented the standard in its entirety (they don't). Here's an excerpt from this paper, which I have deemed an interesting, worthwhile and relevant-to-this-dialog read...
Many programmers like to believe that they can understand the behavior
of a program and prove that it will work correctly without reference
to the compiler that compiles it or the computer that runs it. In many
ways, supporting this belief is a worthwhile goal for the designers of
computer systems and programming languages. Unfortunately, when it
comes to floating-point arithmetic, the goal is virtually impossible
to achieve. The authors of the IEEE standards knew that, and they
didn't attempt to achieve it. As a result, despite nearly universal
conformance to (most of) the IEEE 754 standard throughout the computer
industry, programmers of portable software must continue to cope with
unpredictable floating-point arithmetic.
While finding that I also found this reference implementation done completely in JavaScript (note: I haven't actually verified the validity of the implementation).
All that said, let's move on to your second request:
A test suite intended to verify the implementation within the
browsers, including verifying the correct internal use of a 64 bit
floating point number (53 bit mantissa).
Since JavaScript is an interpreted platform, you should see now that there is no way to test the whole stack (script + compiler (VM/engine) + the compiler that compiled the engine + machine) in an absolute and reliable way from the point of view of JavaScript. You would have to build a test suite that acts as a browser host and actually "peeks" into the private memory of the process to ensure a valid representation, and that would most likely be fruitless anyway, since the numbers are most likely "backed" by a double that conforms just as it does in the C or C++ the browser was built in. There is no absolute way to do this from JavaScript, since all we have access to is the "object", and even when we view a Number in a console we are looking at a .toString version. For that matter, I would posit that this is the only form that matters, since it is determined from the binary and would only become a point of failure if, for the statement n1 === n2 && n1.toString() !== n2.toString(), you could find an n1, n2 that is relevant...
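To make that predicate concrete, the only divergence observable from inside JavaScript would have to satisfy a check like this hypothetical one (the function name is mine, not anything standard); within a single engine it should never return true, so the interesting use is feeding the same values through different engines and diffing the strings:

// Hypothetical check: a pair of Numbers that compare equal but stringify
// differently would be the only failure visible from script.
function stringifyDiverges(n1, n2) {
    return n1 === n2 && n1.toString() !== n2.toString();
}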
That said, we can test the string version, and in reality it is just as good as testing the binary as long as we keep a few oddities in mind, especially since nothing outside the JavaScript engine/VM ever touches the binary version. However, this puts you at the mercy of an oddly specific, possibly very finicky and poised-to-be-changed point of failure. Just for reference, here is an excerpt from WebKit's JavaScriptCore number-to-string code (NumberPrototype.cpp) displaying the complexity of the conversion:
// The largest finite floating point number is 1.mantissa * 2^(0x7fe-0x3ff).
// Since 2^N in binary is a one bit followed by N zero bits, 1 * 2^3ff requires
// at most 1024 characters to the left of a decimal point, in base 2 (1025 if
// we include a minus sign). For the fraction, a value with an exponent of 0
// has up to 52 bits to the right of the decimal point. Each decrement of the
// exponent down to a minimum of -0x3fe adds an additional digit to the length
// of the fraction. As such the maximum fraction size is 1075 (1076 including
// a point). We pick a buffer size such that we can simply place the point in the
// center of the buffer, and are guaranteed to have enough space in each direction
// for any number of digits an IEEE number may require to represent.
typedef char RadixBuffer[2180];

// Mapping from integers 0..35 to digit identifying this value, for radix 2..36.
static const char* const radixDigits = "0123456789abcdefghijklmnopqrstuvwxyz";

static char* toStringWithRadix(RadixBuffer& buffer, double number, unsigned radix)
{
    ASSERT(isfinite(number));
    ASSERT(radix >= 2 && radix <= 36);

    // Position the decimal point at the center of the string, set
    // the startOfResultString pointer to point at the decimal point.
    char* decimalPoint = buffer + sizeof(buffer) / 2;
    char* startOfResultString = decimalPoint;

    // Extract the sign.
    bool isNegative = number < 0;
    if (signbit(number))
        number = -number;
    double integerPart = floor(number);

    // We use this to test for odd values in odd radix bases.
    // Where the base is even, (e.g. 10), to determine whether a value is even we need only
    // consider the least significant digit. For example, 124 in base 10 is even, because '4'
    // is even. If the radix is odd, then the radix raised to an integer power is also odd.
    // E.g. in base 5, 124 represents (1 * 125 + 2 * 25 + 4 * 5). Since each digit in the value
    // is multiplied by an odd number, the result is even if the sum of all digits is even.
    //
    // For the integer portion of the result, we only need test whether the integer value is
    // even or odd. For each digit of the fraction added, we should invert our idea of whether
    // the number is odd if the new digit is odd.
    //
    // Also initialize digit to this value; for even radix values we only need track whether
    // the last individual digit was odd.
    bool integerPartIsOdd = integerPart <= static_cast<double>(0x1FFFFFFFFFFFFFull) && static_cast<int64_t>(integerPart) & 1;
    ASSERT(integerPartIsOdd == static_cast<bool>(fmod(integerPart, 2)));
    bool isOddInOddRadix = integerPartIsOdd;
    uint32_t digit = integerPartIsOdd;

    // Check if the value has a fractional part to convert.
    double fractionPart = number - integerPart;
    if (fractionPart) {
        // Write the decimal point now.
        *decimalPoint = '.';

        // Higher precision representation of the fractional part.
        Uint16WithFraction fraction(fractionPart);

        bool needsRoundingUp = false;
        char* endOfResultString = decimalPoint + 1;

        // Calculate the delta from the current number to the next & previous possible IEEE numbers.
        double nextNumber = nextafter(number, std::numeric_limits<double>::infinity());
        double lastNumber = nextafter(number, -std::numeric_limits<double>::infinity());
        ASSERT(isfinite(nextNumber) && !signbit(nextNumber));
        ASSERT(isfinite(lastNumber) && !signbit(lastNumber));
        double deltaNextDouble = nextNumber - number;
        double deltaLastDouble = number - lastNumber;
        ASSERT(isfinite(deltaNextDouble) && !signbit(deltaNextDouble));
        ASSERT(isfinite(deltaLastDouble) && !signbit(deltaLastDouble));

        // We track the delta from the current value to the next, to track how many digits of the
        // fraction we need to write. For example, if the value we are converting is precisely
        // 1.2345, so far we have written the digits "1.23" to a string leaving a remainder of
        // 0.45, and we want to determine whether we can round off, or whether we need to keep
        // appending digits ('4'). We can stop adding digits provided that the next possible
        // lower IEEE value is further from 1.23 than the remainder we'd be rounding off (0.45),
        // which is to say, less than 1.2255. Put another way, the delta between the prior
        // possible value and this number must be more than 2x the remainder we'd be rounding off
        // (or more simply half the delta between numbers must be greater than the remainder).
        //
        // Similarly we need track the delta to the next possible value, to determine whether
        // to round up. In almost all cases (other than at exponent boundaries) the deltas to
        // prior and subsequent values are identical, so we don't need track them separately.
        if (deltaNextDouble != deltaLastDouble) {
            // Since the deltas are different track them separately. Pre-multiply by 0.5.
            Uint16WithFraction halfDeltaNext(deltaNextDouble, 1);
            Uint16WithFraction halfDeltaLast(deltaLastDouble, 1);

            while (true) {
                // Examine the remainder to determine whether we should be considering rounding
                // up or down. If remainder is precisely 0.5 rounding is to even.
                int dComparePoint5 = fraction.comparePoint5();
                if (dComparePoint5 > 0 || (!dComparePoint5 && (radix & 1 ? isOddInOddRadix : digit & 1))) {
                    // Check for rounding up; are we closer to the value we'd round off to than
                    // the next IEEE value would be?
                    if (fraction.sumGreaterThanOne(halfDeltaNext)) {
                        needsRoundingUp = true;
                        break;
                    }
                } else {
                    // Check for rounding down; are we closer to the value we'd round off to than
                    // the prior IEEE value would be?
                    if (fraction < halfDeltaLast)
                        break;
                }

                ASSERT(endOfResultString < (buffer + sizeof(buffer) - 1));
                // Write a digit to the string.
                fraction *= radix;
                digit = fraction.floorAndSubtract();
                *endOfResultString++ = radixDigits[digit];
                // Keep track whether the portion written is currently even, if the radix is odd.
                if (digit & 1)
                    isOddInOddRadix = !isOddInOddRadix;
                // Shift the fractions by radix.
                halfDeltaNext *= radix;
                halfDeltaLast *= radix;
            }
        } else {
            // This code is identical to that above, except since deltaNextDouble == deltaLastDouble
            // we don't need to track these two values separately.
            Uint16WithFraction halfDelta(deltaNextDouble, 1);

            while (true) {
                int dComparePoint5 = fraction.comparePoint5();
                if (dComparePoint5 > 0 || (!dComparePoint5 && (radix & 1 ? isOddInOddRadix : digit & 1))) {
                    if (fraction.sumGreaterThanOne(halfDelta)) {
                        needsRoundingUp = true;
                        break;
                    }
                } else if (fraction < halfDelta)
                    break;

                ASSERT(endOfResultString < (buffer + sizeof(buffer) - 1));
                fraction *= radix;
                digit = fraction.floorAndSubtract();
                if (digit & 1)
                    isOddInOddRadix = !isOddInOddRadix;
                *endOfResultString++ = radixDigits[digit];
                halfDelta *= radix;
            }
        }

        // Check if the fraction needs rounding off (flag set in the loop writing digits, above).
        if (needsRoundingUp) {
            // Whilst the last digit is the maximum in the current radix, remove it.
            // e.g. rounding up the last digit in "12.3999" is the same as rounding up the
            // last digit in "12.3" - both round up to "12.4".
            while (endOfResultString[-1] == radixDigits[radix - 1])
                --endOfResultString;

            // Radix digits are sequential in ascii/unicode, except for '9' and 'a'.
            // E.g. the first 'if' case handles rounding 67.89 to 67.8a in base 16.
            // The 'else if' case handles rounding of all other digits.
            if (endOfResultString[-1] == '9')
                endOfResultString[-1] = 'a';
            else if (endOfResultString[-1] != '.')
                ++endOfResultString[-1];
            else {
                // One other possibility - there may be no digits to round up in the fraction
                // (or all may have been rounded off already), in which case we may need to
                // round into the integer portion of the number. Remove the decimal point.
                --endOfResultString;
                // In order to get here there must have been a non-zero fraction, in which case
                // there must be at least one bit of the value's mantissa not in use in the
                // integer part of the number. As such, adding to the integer part should not
                // be able to lose precision.
                ASSERT((integerPart + 1) - integerPart == 1);
                ++integerPart;
            }
        } else {
            // We only need to check for trailing zeros if the value does not get rounded up.
            while (endOfResultString[-1] == '0')
                --endOfResultString;
        }

        *endOfResultString = '\0';
        ASSERT(endOfResultString < buffer + sizeof(buffer));
    } else
        *decimalPoint = '\0';

    BigInteger units(integerPart);

    // Always loop at least once, to emit at least '0'.
    do {
        ASSERT(buffer < startOfResultString);
        // Read a single digit and write it to the front of the string.
        // Divide by radix to remove one digit from the value.
        digit = units.divide(radix);
        *--startOfResultString = radixDigits[digit];
    } while (!!units);

    // If the number is negative, prepend '-'.
    if (isNegative)
        *--startOfResultString = '-';

    ASSERT(buffer <= startOfResultString);
    return startOfResultString;
}
... as you can see, the number here is backed by a traditional double, and the conversion is anything but simple and straightforward. So what I devised was this: since I conjecture that the only spot where these implementations will differ is their "rendering" to strings, I built a test generator that is threefold:
tests the "string result" against a reference string result
tests their parsed equivalents (ignoring any epsilon, I mean exact!)
tests a special version of the strings that solely adjusts for the rounding "interpretation"
To accomplish this we need access to a reference build. My first thought was to use one from a native language, but with that I found that the numbers produced seemed to have higher precision than JavaScript in general, leading to far more errors. So then I thought, what if I just used an implementation already inside a JavaScript engine? WebKit/JavaScriptCore seemed like a really good choice, but it would also have been a lot of work to get the reference build up and running, so I opted for the simplicity of .NET, since it has access to "JScript", which, while not ideal, seemed upon initial examination to produce closer results than the native counterpart. I didn't really want to code in JScript since the language is all but deprecated, so I opted for C# bootstrapping JScript through a CodeDomProvider... After a little tinkering, here's what it produced: http://jsbin.com/afiqil (finally, demo sauce!!!!1!). Now you can run it in all browsers and compile your own data. Upon my personal inspection, the string rounding interpretation varies in EVERY browser I tried; however, I have yet to find a major browser that handled the numbers behind the scenes (other than the stringify-ing) differently...
now for the C# sauce:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.CodeDom.Compiler;
using System.Reflection;

namespace DoubleFloatJs
{
    public partial class Form1 : Form
    {
        private static string preamble = @"
var successes = [];
var failures = [];

function fpu_test_add(v1, v2) {
    return '' + (v1 + v2);
}

function fpu_test_sub(v1, v2) {
    return '' + (v1 - v2);
}

function fpu_test_mul(v1, v2) {
    return '' + (v1 * v2);
}

function fpu_test_div(v1, v2) {
    return '' + (v1 / v2);
}

function format(name, result1, result2, result3, received, expected) {
    return '<span style=""display:inline-block;width:350px;"">' + name + '</span>' +
           '<span style=""display:inline-block;width:60px;text-align:center;font-weight:bold; color:' + (result1 ? 'green;"">OK' : 'red;"">NO') + '</span>' +
           '<span style=""display:inline-block;width:60px;text-align:center;font-weight:bold; color:' + (result2 ? 'green;"">OK' : 'red;"">NO') + '</span>' +
           '<span style=""display:inline-block;width:60px;text-align:center;font-weight:bold; color:' + (result3 ? 'green;"">OK' : 'red;"">NO') + '</span>' +
           '<span style=""display:inline-block;width:200px;vertical-align:top;"">' + received + '<br />' + expected + '</span>';
}

function check_ignore_round(received, expected) {
    return received.length > 8 &&
           received.length == expected.length &&
           received.substr(0, received.length - 1) === expected.substr(0, expected.length - 1);
}

function check_parse_parity_no_epsilon(received, expected) {
    return parseFloat(received) === parseFloat(expected);
}

function fpu_test_result(v1, v2, textFn, received, expected) {
    var result = expected === received,
        resultNoRound = check_ignore_round(received, expected),
        resultParse = check_parse_parity_no_epsilon(received, expected),
        resDiv = document.createElement('div');
    resDiv.style.whiteSpace = 'nowrap';
    resDiv.style.fontFamily = 'Courier New, Courier, monospace';
    resDiv.style.fontSize = '0.74em';
    resDiv.style.background = result ? '#aaffaa' : '#ffaaaa';
    resDiv.style.borderBottom = 'solid 1px #696969';
    resDiv.style.padding = '2px';
    resDiv.innerHTML = format(textFn + '(' + v1 + ', ' + v2 + ')', result, resultNoRound, resultParse, received, expected);
    document.body.appendChild(resDiv);
    (result ? successes : failures).push(resDiv);
    return resDiv;
}

function fpu_test_run(v1, v2, addRes, subRes, mulRes, divRes) {
    var i, res,
        fnLst = [fpu_test_add, fpu_test_sub, fpu_test_mul, fpu_test_div],
        fnNam = ['add', 'sub', 'mul', 'div'];
    for (i = 0; i < fnLst.length; i++) {
        res = fnLst[i].call(null, v1, v2);
        fpu_test_result(v1, v2, fnNam[i], res, arguments[i + 2]);
    }
}

function setDisplay(s, f) {
    var i;
    for (i = 0; i < successes.length; i++) {
        successes[i].style.display = s;
    }
    for (i = 0; i < failures.length; i++) {
        failures[i].style.display = f;
    }
}

var test_header = fpu_test_result('value1', 'value2', 'func', 'received', 'expected'),
    test_header_cols = test_header.getElementsByTagName('span');
test_header_cols[1].innerHTML = 'string';
test_header_cols[2].innerHTML = 'rounded';
test_header_cols[3].innerHTML = 'parsed';
test_header.style.background = '#aaaaff';
failures.length = successes.length = 0;
";

        private static string summation = @"
var bs = document.createElement('button');
var bf = document.createElement('button');
var ba = document.createElement('button');
bs.innerHTML = 'show successes (' + successes.length + ')';
bf.innerHTML = 'show failures (' + failures.length + ')';
ba.innerHTML = 'show all (' + (successes.length + failures.length) + ')';
ba.style.width = bs.style.width = bf.style.width = '200px';
ba.style.margin = bs.style.margin = bf.style.margin = '4px';
ba.style.padding = bs.style.padding = bf.style.padding = '4px';
bs.onclick = function() { setDisplay('block', 'none'); };
bf.onclick = function() { setDisplay('none', 'block'); };
ba.onclick = function() { setDisplay('block', 'block'); };
document.body.insertBefore(bs, test_header);
document.body.insertBefore(bf, test_header);
document.body.insertBefore(ba, test_header);
document.body.style.minWidth = '700px';
";

        private void buttonGenerate_Click(object sender, EventArgs e)
        {
            var numberOfTests = this.numericNumOfTests.Value;
            var strb = new StringBuilder(preamble);
            var rand = new Random();
            for (int i = 0; i < numberOfTests; i++)
            {
                double v1 = rand.NextDouble();
                double v2 = rand.NextDouble();
                strb.Append("fpu_test_run(")
                    .Append(v1)
                    .Append(", ")
                    .Append(v2)
                    .Append(", '")
                    .Append(JsEval("" + v1 + '+' + v2))
                    .Append("', '")
                    .Append(JsEval("" + v1 + '-' + v2))
                    .Append("', '")
                    .Append(JsEval("" + v1 + '*' + v2))
                    .Append("', '")
                    .Append(JsEval("" + v1 + '/' + v2))
                    .Append("');")
                    .AppendLine();
            }
            strb.Append(summation);
            this.textboxOutput.Text = strb.ToString();
            Clipboard.SetText(this.textboxOutput.Text);
        }

        public Form1()
        {
            InitializeComponent();
            Type evalType = CodeDomProvider
                .CreateProvider("JScript")
                .CompileAssemblyFromSource(new CompilerParameters(), "package e{class v{public static function e(e:String):String{return eval(e);}}}")
                .CompiledAssembly
                .GetType("e.v");
            this.JsEval = s => (string)evalType.GetMethod("e").Invoke(null, new[] { s });
        }

        private readonly Func<string, string> JsEval;
    }
}
or a pre-compiled version if you should choose: http://uploading.com/files/ad4a85md/DoubleFloatJs.exe/ (this is an executable; download at your own risk)
I should mention that the purpose of the program is just to produce a JavaScript file in a text box and copy it to the clipboard for convenient pasting wherever you choose. You could easily turn this around, put it on an ASP.NET server, and add result reporting that pings the server and keeps track in some massive database... which is what I would do with it if I wanted the information...
...and, ...I'm, ...spent. I hope this helps you -ck

Summarizing everything below: you can expect compliance on the majority of systems, save for a few IE glitches, but you ought to use a sanity check as a precaution (a proposal is included below).
The ECMAScript specification (both rev. 3 and rev. 5) is very precise about IEEE 754's peculiarities, such as conversions, rounding mode, overflow/underflow and even signed zeros (sec. 5.2, 8.5, 9.*, 11.5.*, 11.8.5; 15.7 and 15.8 deal with floats too). It doesn't just "leave things to the underlying implementation". There are no apparent differences between vv. 3 and 5, and all major browsers support v. 3 at least. So its rules are honored by everyone, at least nominally. Let's see...
No browser passes the test262 ECMAScript compliance tests completely (WP#Conformance tests). However, no test262 errors found on Google are float-related.
IE5.5+ ([MS-ES3]) reports discrepancies in Number.toFixed, Number.toExponential and Number.toPrecision. Other differences aren't float-related. I couldn't run test262 in IE8.
FF uses special types for numbers and mandatory conversion functions between them and C types (see e.g. the JSAPI user guide), which suggests that things are in fact not handed down to C. A test on FF10 didn't show any float-related errors.
Opera users did report float-related errors half a year ago.
WebKit's test262 failures don't appear to be float-related.
To validate a system, you can use the float-related tests from test262. They are located at http://test262.ecmascript.org/json/ch<2-digit # of spec chapter>.json; the test code can be extracted with (Python 2.6+):
ch="05"; #substitute chapter #
import urllib,json,base64
j=json.load(urllib.urlopen("http://test262.ecmascript.org/json/ch%s.json"%ch))
tt=j['testsCollection']['tests']
f=open('ch%s.js'%ch,'w')
for t in tt:
print >>f
print >>f,base64.b64decode(t['code'])
f.close()
Another opportunity is IEEE 754 compliance tests in C.
Relevant sections from test262 (ones that compare floating point numbers) are as follows:
{
    "S11": "5.1.A4: T1-T8",
    "S15": {
        "7": "3: 2.A1 & 3.A1",
        "8": {
            "1": "1-8: A1",
            "2": {
                "4": "A4 & A5",
                "5": "A: 3,6,7,10-13,15,17-19",
                "7": "A6 & A7",
                "13": "A24",
                "16": "A6 & A7",
                "17": "A6",
                "18": "A6"
            }
        }
    },
    "S8": "5.A2: 1 & 2"
}
This list and the concatenated source of all the relevant test files (as of 3/9/2012, no files from the harness) can be found here: http://pastebin.com/U6nX6sKL

The general rule of thumb is that when number precision is important and you only have access to floating-point numbers, all of your calculations should be done as integer math to best ensure validity (you are assured 15 digits of valid data). And yes, there are a bunch of general numeric idiosyncrasies in JavaScript, but they are associated with the lack of precision inherent in floating-point numbers, not with UA implementations of the standard. Look around for the pitfalls of floating-point math; they are numerous and treacherous.
I feel as if I should elaborate a little. For instance, I wrote a program (in JavaScript) that used basic calculus to determine the area of a polygon with dimensions given in meters or feet. Instead of doing the calculations as-is, the program converted everything to micrometers and did its calculations there, where everything would be integral; a sketch of the idea follows.
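This is not the original program, just a minimal sketch of the approach: scale to integer micrometers, run the shoelace formula on exact integers, and convert back only at the end. It assumes inputs with at most 6 decimal places and coordinates small enough that the intermediate products stay below 2^53.

// Minimal sketch of scaled-integer geometry (illustrative names, not the
// original program). Assumes meter inputs with <= 6 decimal places.
var UM_PER_M = 1e6;

function polygonAreaSquareMeters(pointsInMeters) {
    // Scale to integer micrometers so every coordinate is exact.
    var pts = pointsInMeters.map(function (p) {
        return [Math.round(p[0] * UM_PER_M), Math.round(p[1] * UM_PER_M)];
    });
    // Shoelace formula on the integer coordinates; products must stay < 2^53.
    var twiceArea = 0;
    for (var i = 0; i < pts.length; i++) {
        var j = (i + 1) % pts.length;
        twiceArea += pts[i][0] * pts[j][1] - pts[j][0] * pts[i][1];
    }
    // Convert square micrometers back to square meters only at the end.
    return Math.abs(twiceArea) / 2 / (UM_PER_M * UM_PER_M);
}

console.log(polygonAreaSquareMeters([[0, 0], [2, 0], [2, 1], [0, 1]])); // 2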
hope this helps -ck
In response to your clarification, comments and concerns
I'm not going to repeat my comments below in their entirety; the short answer is that no one will ever be able to say that EVERY IMPLEMENTATION is 100% on 100% of devices. Period. What I can say, and others will tell you the same, is that on the current major browsers I have not seen nor heard of any browser-specific detrimental bug involving floating-point numbers. But your question itself is kind of a double-edged sword, since you want to "rely" upon "unreliable" results, or simply want all the browsers to be "consistently inconsistent". In other words, instead of trying to make sure a lion will play fetch, your time would be better spent looking for a dog, meaning: you can rely 110% on integer math AND the results of said math; the same goes for string math, which has already been suggested to you...
good luck -ck

(EDIT: The bug mentioned below was closed as fixed on 3 Mar 2016. So my answer is now "maybe".)
Unfortunately the answer is no. There is at least one outstanding bug in v8 that, due to double-rounding, means it might not match IEEE 754 double precision on 32-bit Linux.
v8 bug entry
Firefox bug entry - fixed in 2009
More info on the double-rounding issue
This can be tested with:
9007199254740994 + 0.99999 === 9007199254740994
I can verify that this fails (the left-hand side is 9007199254740996) on Chrome 26.0.1410.63 running on 32-bit Ubuntu. It passes on Firefox 20.0 on the same system. At the very least, this test should be added to your test suite, and maybe test262.

"I have a specific use in mind for a distributed web application, which would only work if I can rely on consistent results across all browsers."
Then the answer is no. You cannot rely on a specification to tell you that a browser correctly handles floats. Chrome updates every six weeks, so even if you have the specifications, Chrome could change its behavior in the next release.
You have to rely on feature testing that verifies your assumptions each time before your calculation runs.
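For example, a minimal sanity check along those lines might look like this (a sketch, not a complete suite; the expected values are standard IEEE 754 double results, and the last line is the 32-bit double-rounding case mentioned in another answer here):

// Minimal feature-test sketch: verify a few known IEEE 754 double results
// before trusting the engine with the real calculation.
function floatSanityCheck() {
    return 0.1 + 0.2 === 0.30000000000000004 &&
           (0.1 + 0.2).toString() === "0.30000000000000004" &&
           9007199254740994 + 0.99999 === 9007199254740994; // double-rounding case
}

if (!floatSanityCheck()) {
    throw new Error("This environment does not produce the expected IEEE 754 results");
}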

Maybe you should use a library for your calculations. For example, bignumber has good handling of floating-point numbers. There you should be safe from environment changes, because it uses its own storage format.
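A short sketch of what that looks like, assuming the bignumber.js library and its documented BigNumber/plus API; since values are kept in the library's own decimal representation, results do not depend on the host's binary doubles:

// Sketch assuming bignumber.js; inputs are decimal strings, so nothing is
// ever rounded through a native double.
var BigNumber = require('bignumber.js');

console.log(0.1 + 0.2);                                   // 0.30000000000000004
console.log(new BigNumber('0.1').plus('0.2').toString()); // "0.3"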

This has been a problem in computing for ages, and if you ask old programmers who matured from assembly language, they will tell you that you store important numbers in a different format and do your manipulations on them in a similar way.
For example, a currency value can be saved as an integer by multiplying the float value by 100 (to keep the two decimal places intact). You can then safely do calculations and, when you have to display the final result, divide by 100. Depending upon how many decimal places you have to keep safe, you may have to select a different number than 100. Store things in a long value and be care-free about such problems.
This is what gives me satisfactory results across platforms so far. I just keep myself away from the floating-point arithmetic nuances this way. A minimal sketch of the approach follows.
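Here is that cents-based idea in code (the function names are illustrative, not from any library):

// Keep currency in integer cents; convert back only for display.
function toCents(amount) { return Math.round(amount * 100); }
function formatCents(cents) { return (cents / 100).toFixed(2); }

var price = toCents(19.99);            // 1999
var tax = toCents(1.60);               // 160
console.log(formatCents(price + tax)); // "21.59", via exact integer addition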

Related

Converting a Two's complement number to its binary representation

I am performing bitwise operations, the result of which is apparently being stored as a two's complement number. When I hover over the variable it's stored in, I see num = -2086528968.
The binary of that number that I want is - (10000011101000100001100000111000).
But when I say num.toString(2) I get a completely different binary representation, the raw number's binary instead of the two's complement (-1111100010111011110011111001000).
How do I get the first string back?
Link to a converter: rapidtables.com/convert/number/decimal-to-binary.html
Put in this number: -2086528968
The result is below:
var number = -2086528968;
var bin = (number >>> 0).toString(2);
// 10000011101000100001100000111000
console.log(bin);
pedro already answered this, but since this is a hack and not entirely intuitive, I'll explain it.
I am performing bitwise operations, the result of which is apparently being stored as a two's complement number. When I hover over the variable its stored in I see num = -2086528968
No, the result of most bit operations is a 32-bit signed integer. This means that the bit 0x80000000 is interpreted as a sign, followed by 31 bits of value.
The weird bit sequence is because of how JS stringifies the value, something like sign + Math.abs(value).toString(base);
How to deal with that? We need to tell JS to not interpret that bit as sign, but as part of the value. But how?
An easy-to-understand solution would be to add 0x100000000 to the negative numbers and therefore get their positive counterparts:
function print(value) {
    if (value < 0) {
        value += 0x100000000;
    }
    console.log(value.toString(2).padStart(32, "0"));
}
print(-2086528968);
Another way would be to convert the sign bit and the value bits separately:
function print(value) {
    var signBit = value < 0 ? "1" : "0";
    var valueBits = (value & 0x7FFFFFFF).toString(2);
    console.log(signBit + valueBits.padStart(31, "0"));
}
print(-2086528968);

// or the lower and upper half of the bits:
function print2(value) {
    var upperHalf = (value >> 16 & 0xFFFF).toString(2);
    var lowerHalf = (value & 0xFFFF).toString(2);
    console.log(upperHalf.padStart(16, "0") + lowerHalf.padStart(16, "0"));
}
print2(-2086528968);
Another way involves the "hack" that pedro uses. Remember how I said that most bit operations return an int32? There is one operation that actually returns an unsigned 32-bit integer: the so-called zero-fill right shift.
So number >>> 0 does not change the bits of the number, but the first bit is no longer interpreted as sign.
function uint32(value) {
    return value >>> 0;
}
function print(value) {
    console.log(uint32(value).toString(2).padStart(32, "0"));
}
print(-2086528968);
will I run this shifting code only when the number is negative, or always?
Generally speaking, there is no harm in running nr >>> 0 over positive integers, but be careful not to overdo it.
Technically, JS only supports Numbers, which are double values (64-bit floating-point values). Internally the engines also use int32 values where possible, but no uint32 values. So when you convert your negative int32 into a uint32, the engine converts it to a double, and if you follow up with another bit operation, the first thing it does is convert it back.
So it's fine to do this when you need an actual uint32 value, like to print the bits here, but you should avoid the conversion between operations, like "just to fix it".
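To see both conversions in one place, note that any subsequent bit operation (| 0 here) folds the unsigned view back into a signed int32:

// Round trip: >>> 0 yields the unsigned interpretation (stored as a double),
// and the next bit operation re-interprets it as a signed int32 again.
var u = -2086528968 >>> 0; // 2208438328
console.log(u | 0);        // -2086528968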

Javascript: "+" sign concatenates instead of giving sum of variables

I am currently creating a site that will help me quickly answer physics questions.
As it happens, the code didn't run as expected; here is the code:
if (option == "dv") {
var Vinitial = prompt("What is the Velocity Initial?")
var acceleration = prompt("what is the acceleration?")
var time = prompt("what is the time?")
Vfinal = Vinitial + acceleration * time
displayV.innerHTML = "v= vf= " + Vfinal + "ms" + sup1.sup();
}
Now, let's say Vinitial was 9, acceleration was 2, and time was 3.
When the code runs, instead of getting 15 for "Vfinal", I get 96.
I figured out that it multiplies acceleration and time fine, and then just concatenates the 9 at the beginning, with 6 (the product of 2 * 3).
I have fixed it for now by using
Vfinal = acceleration * time - (-Vinitial)
which avoids using the "+" sign, but I don't want to have to keep doing this. How do I fix it?
You are dealing with strings here, and math operations on strings will go wrong. Remember: whenever you are doing math operations, you have to convert the data into actual numbers first and then perform the math.
Use parseInt(). More details here.
Your code should change to
Vfinal = parseInt(Vinitial,10) + parseInt(acceleration,10) * parseInt(time,10);
Edit 1: If the numbers are decimal values, then use parseFloat() instead.
So the code would be
Vfinal = parseFloat(Vinitial) + parseFloat(acceleration) * parseFloat(time);
Object-Oriented JavaScript - Second Edition: As you already know, when you use the plus sign with two numbers, this
is the arithmetic addition operation. However, if you use the plus
sign with strings, this is a string concatenation operation, and it
returns the two strings glued together:
var s1 = "web";
var s2 = "site";
s1 + s2; // website
The dual purpose of the + operator is a source of errors. Therefore,
if you intend to concatenate strings, it's always best to make sure
that all of the operands are strings. The same applies for addition;
if you intend to add numbers, make sure the operands are numbers.
You can use "+" operator with prompt() to convert returned values from string to int
var Vinitial = +prompt("What is the Velocity Initial?");
var acceleration = +prompt("what is the acceleration?");
var time = +prompt("what is the time?");
Explanation:
var a = prompt('Enter a digit');
typeof a; // "string"
typeof +a; // "number"
If you enter non-digit data, +a gives you NaN. typeof NaN is "number" too :)
You will get the same result with parseInt():
var Vinitial = parseInt(prompt("What is the Velocity Initial?"), 10);
var acceleration = parseInt(prompt("what is the acceleration?"), 10);
var time = parseInt(prompt("what is the time?"), 10);
developer.mozilla.org: parseInt(string, radix);
string: The value to parse.
radix: An integer between 2 and 36 that represents the radix (the base in mathematical numeral systems) of the above mentioned string.
Specify 10 for the decimal numeral system commonly used by humans.
Always specify this parameter to eliminate reader confusion and to
guarantee predictable behavior. Different implementations produce
different results when a radix is not specified, usually defaulting
the value to 10.
Epilogue:
Object-Oriented JavaScript - Second Edition: The safest thing to do is to always specify the radix. If you omit the radix, your code
will probably still work in 99 percent of cases (because most often
you parse decimals), but every once in a while it might cause you a
bit of hair loss while debugging some edge cases. For example, imagine
you have a form field that accepts calendar days or months and the
user types 06 or 08.
Epilogue II:
ECMAScript 5 removes the octal literal values and avoids the confusion
with parseInt() and unspecified radix.
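To illustrate the hazard the book describes:

// Why the radix argument matters:
parseInt("08", 10); // 8 -- always, on any engine
parseInt("08");     // 8 on ES5+ engines, but 0 on older engines that treated
                    // a leading zero as octal ("8" is not a valid octal digit)
parseInt("10", 2);  // 2 -- the radix changes the meaning entirely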
The problem is that your value was taken in as a string, so convert your value to an integer using parseInt(acceleration); then it will work.
Vfinal = parseInt(Vinitial) + parseInt(acceleration) * parseInt(time)
// use parseInt on each string before doing arithmetic
var a = "10", b = "10";
var sum = parseInt(a, 10) + parseInt(b, 10); // 20
e.g.:
Vfinal = parseInt(Vinitial, 10) + parseInt(acceleration, 10) * parseInt(time, 10);

Diffie-Hellman implementation doesn't work for bigger numbers

Context
I was looking at this video DHE explained
It talks about how two people can exchange a key without eavesdroppers learning much.
The implementation according to the video
// INITIALIZERS (video's values)-------------------------
var prefx = 3
var modulo = 17
// SECRET NUMBERS ---------------------------------------
var alice_secret_number = 19 // replaced 54 since there is a precision loss with it.
var bob_secret_number = 24
// PUBLIC KEYS ------------------------------------------
var public_alice = Math.pow(prefx , alice_secret_number)
var public_bob = Math.pow(prefx , bob_secret_number)
// Check potential overflow -----------------------------
console.log(public_alice , public_bob)
// Apply the modulo -------------------------------------
public_alice %= modulo
public_bob %= modulo
// Check the value again --------------------------------
console.log( public_alice , public_bob )
// Calculate the good number ------------------------------------------
var final_alice = Math.pow( public_bob , alice_secret_number ) % modulo
var final_bob = Math.pow( public_alice , bob_secret_number ) % modulo
console.log( final_alice , final_bob )
Problem
That doesn't always work. For one thing, JavaScript loses precision.
So you can try with small numbers only. The speaker talks about big moduli; even small ones won't work here.
I gave you the code, which is tailored toward readability, not performance.
Could someone give me his/her opinion on what I am doing wrong?
All numbers in JavaScript are floats (actually doubles). The corresponding specification is IEEE 754. To represent an integer without loss of precision, it must fit into the mantissa, which is 53 bits for 64-bit floats. You can check the maximum integer with Number.MAX_SAFE_INTEGER, which is 9007199254740991. Everything beyond that loses precision.
Why is this a problem? (Most) cryptography must be exact, otherwise the secret cannot be learned. What you try to do is exponentiate and then apply the modulus, but since you do this in separate steps, you get a very big number after exponentiation, before it can be reduced by the modulus operation.
The solution is to use some kind of BigNumber library (like BigInteger) which handles all those big numbers for you. Note that there is specifically a modPow(exp, mod) function which combines those two steps and calculates the result efficiently.
Note that user secrets should be smaller than the modulus.
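For illustration, here is a sketch of what modPow does (square-and-multiply, reducing at every step), written with native BigInt rather than the BigInteger library; it assumes an engine with BigInt support:

// Square-and-multiply modular exponentiation: the modulus is applied at
// every step, so nothing ever grows beyond mod^2.
function modPow(base, exp, mod) {
    base %= mod;
    var result = 1n;
    while (exp > 0n) {
        if (exp & 1n) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1n;
    }
    return result;
}

// The video's toy exchange; even the original secret 54 works now,
// since nothing overflows.
var p = 17n, g = 3n;
var a = 54n, b = 24n;                 // secret exponents
var A = modPow(g, a, p);              // Alice's public value
var B = modPow(g, b, p);              // Bob's public value
console.log(modPow(B, a, p) === modPow(A, b, p)); // true -- shared secret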

What if I need to count 9,007,199,254,740,993 of something?

I'm reading Effective Javascript by David Herman, and just learned this about how JavaScript handles numbers:
"all numbers in JavaScript are double-precision floating-point numbers, that is, the 64-bit encoding of numbers specified by the IEEE 754 standard -- commonly known as "doubles". If this fact leaves you wondering what happened to the integers, keep in mind that doubles can represent integers perfectly with up to 53 bits of precision. All of the integers from -9,007,199,254,740,992 (-2^53) to 9,007,199,254,740,992 (2^53) are valid doubles." (p. 7)
I was curious, so I threw together this jsfiddle to try it out:
var hilariouslyLargeNumber = 9007199254740992;
console.log(hilariouslyLargeNumber);
// 9007199254740992
console.log(hilariouslyLargeNumber + 1);
// 9007199254740992
console.log (hilariouslyLargeNumber === hilariouslyLargeNumber);
// true
console.log(hilariouslyLargeNumber === hilariouslyLargeNumber+1);
// true
console.log(hilariouslyLargeNumber === hilariouslyLargeNumber-1);
// false
I sort of understand why this is the case -- in simple (simple, simple) language, there aren't any more 'slots' for any more 0s and 1s for how JavaScript encodes numbers, and so it has to stop at 9,007,199,254,740,992.
So: what do I do if I find myself in possession of 9,007,199,254,740,993 puppies, and want to write some code to help me remember which one is which? Do I need to use something other than JavaScript? If so, why?
You have to do some work-around programming. An example would be:
var MAX_VALUE = 9007199254740992;
var lower_digit_set = 0;
var upper_digit_set = 0;

/* Do some calculations that will eventually result in lower_digit_set reaching 9007199254740992 */
lower_digit_set = MAX_VALUE;

if (lower_digit_set == MAX_VALUE) {
    lower_digit_set = 0;
    upper_digit_set = upper_digit_set + 1;
}

/* What you have to keep in mind is that your final number is something you calculate,
   however you cannot display it directly (you probably could, but it is a complex
   solution that I should give a longer thought). And therefore if we increase
   lower_digit_set as such: */
lower_digit_set = lower_digit_set + 1;

/* Then the new number will be a combination of both lower_digit_set and upper_digit_set */
console.log("The actual number is more than once, or twice, or thrice ...etc. of the max_value");
console.log("Number of times we multiply the max value: ", upper_digit_set);
console.log("Then we add our remainder: ", lower_digit_set);
Of course you can do it; you just can't do it with primitives. Libraries like JSDecimal represent numbers in other ways.
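These days there is also a built-in option: BigInt, which was added to the language after this question was asked and counts exactly past 2^53:

// BigInt counts exactly where Number stops: 2^53 + 1 is representable.
var puppies = 9007199254740993n;
console.log(puppies === 9007199254740992n + 1n); // true
console.log(puppies + 1n);                       // 9007199254740994n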

Why is 10000000000000.126.toString() 10000000000000.127 (and what can I do to prevent it)?

Why is 10000000000000.126.toString() 10000000000000.127, and 100000000000.126.toString() not?
I think it must have something to do with the maximum value of a Number in Js (as per this SO question), but is that related to floating point operations and how?
I'm asking because I wrote this function to format a number using thousands separators and would like to prevent this.
function th(n, sep) {
    sep = sep || '.';
    var dec = n.toString().split(/[,.]/),
        nArr = dec[0].split(''),
        isDot = /\./.test(sep);
    return function tt(n) {
        return n.length > 3 ?
            tt(n.slice(0, n.length - 3)).concat(n.slice(n.length - 3).join('')) :
            [n.join('')];
    }(nArr)
    .join(sep)
    + (dec[1] ? (isDot ? ',' : '.') + dec[1] : '');
}
th(10000000000000.126); //=> 10.000.000.000.000,127
th(1000000000000.126);  //=> 1.000.000.000.000,126
Because not all numbers can be exactly represented in floating point (JavaScript uses the double-precision 64-bit format of IEEE 754), rounding errors come in. For instance:
alert(0.1 + 0.2); // "0.30000000000000004"
All numbering systems with limited storage (e.g., all numbering systems) have this issue, but you and I are used to dealing with our decimal system (which can't accurately represent "one third") and so are surprised by some of the different values that the floating-point formats used by computers can't accurately represent. This sort of thing is why you're seeing more and more "decimal" style types out there (Java has BigDecimal, C# has decimal, etc.), which use our style of number representation (at a cost) and so are useful for applications where rounding needs to align with our expectations more closely (such as financial apps).
Update: I haven't tried, but you may be able to work around this by manipulating the values a bit before you grab their strings. For instance, this works with your specific example (live copy):
Code:
function display(msg) {
    var p = document.createElement('p');
    p.innerHTML = msg;
    document.body.appendChild(p);
}

function preciseToString(num) {
    var floored = Math.floor(num),
        fraction = num - floored,
        rv,
        fractionString,
        n;
    rv = String(floored);
    n = rv.indexOf(".");
    if (n >= 0) {
        rv = rv.substring(0, n);
    }
    fractionString = String(fraction);
    if (fractionString.substring(0, 2) !== "0.") {
        return String(num); // punt
    }
    rv += "." + fractionString.substring(2);
    return rv;
}

display(preciseToString(10000000000000.126));
display(preciseToString(10000000000000.126));
Result:
10000000000000.126953125
...which then can, of course, be truncated as you see fit. Of course, it's important to note that 10000000000000.126953125 != 10000000000000.126. But I think that ship had already sailed (e.g., the Number already contained an imprecise value), given that you were seeing .127. I can't see any way for you to know that the original went to only three places, not with Number.
I'm not saying the above is in any way reliable; you'd have to really put it through its paces to prove it's (which is to say, I'm) not doing something stoopid there. And again, since you don't know where the precision ended in the first place, I'm not sure how helpful it is.
It's about the maximum number of significant decimal digits a float can store.
If you look at http://en.wikipedia.org/wiki/IEEE_754-2008 you can see that a double precision float (binary64) can store about 16 (15.95) decimal digits.
If your number contains more digits, you effectively lose precision, which is the case in your sample.
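The question's own example shows this: 10000000000000.126 asks for 17 significant digits, a double holds roughly 16, so the literal is rounded to the nearest representable value before toString() ever runs:

// ~15.95 decimal digits fit in a double; extra digits are lost at parse time.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 -- 16 digits, exact
console.log(10000000000000.126);      // 10000000000000.127 -- 17 digits requested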
