JavaScript and Dealing with Floating Point Determinism

I'm looking to build a browser multiplayer game using rollback netcode that runs a deterministic simulation on the clients. I prototyped the netcode in Flash already before I ran into the floating point roadblock.
Basically, from what I understand, integer math in Flash is done by casting ints to Numbers, doing the math, then casting back to int. It's faster apparently, but it means that there's no chance of deterministic math across different computer architectures.
Before I dump all my eggs into the JavaScript basket then, I'd like to ask a few questions.
Is there true integer arithmetic on all major browsers in JavaScript? Or do some browsers do the Flash thing and cast to floats/doubles to do the math before casting back to int?
Does something like BigDecimal or BigNum work for deterministic math across different computer architectures? I don't mind some performance loss as long as it's within reason. If not, is there some JavaScript fixed point library out there that solves my problem?
This is a long shot, but is there a HTML5 2D game engine that has deterministic math for stuff like x/y positions and collisions? The list of game engines is overwhelming to say the least. I'm uneasy about building a deterministic cross browser compatible engine from scratch, but that might be what I have to do.
NOTE: Edited from HTML5 to JS as per responses. Apologies for my lack of knowledge.

This is a Javascript issue - not an HTML5 one.
All Javascript math is done using IEEE754 floating point double values - there are no "ints".
Although IEEE754 requires (AFAIK) a specific answer for each operation for any given input, you should be aware that JS interpreters are potentially free to optimise expressions, loops, etc, such that the floating point operations don't actually execute in the order you expect.
Over the course of a program this may result in different answers being produced on different browsers.
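To see why evaluation order matters, note that floating point addition is not associative; a tiny demonstration (the values are chosen arbitrarily):

var left = (0.1 + 0.2) + 0.3;  // 0.6000000000000001
var right = 0.1 + (0.2 + 0.3); // 0.6
console.log(left === right);   // false

If two engines evaluate the same expression in different orders, a long-running simulation will drift apart even though each individual IEEE754 operation is exactly specified.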

Related

Working with accurate currency values in Javascript

I'm working on a system that uses financial data. I'm getting subtle rounding errors due to the use of floating point numbers. I'm wondering if there's a better way to deal with this.
One of the issues is that I'm working with a mixture of different currencies, which might have up to 12 decimals, and large numbers for other currencies.
This means that the smallest number I need to represent is 0.000000000001 (1e-12) and the largest 100,000,000,000 (1e11).
Are there any recommended ways to work with numbers of this size and not lose precision?
If you're really trying to stay in the JS realm you might consider Decimal.js which should cover your precision range.
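A minimal sketch of what that might look like, assuming the decimal.js library and its Decimal type (check the library's docs for the exact configuration you need):

var Decimal = require('decimal.js'); // or load via a <script> tag in the browser
Decimal.set({ precision: 30 }); // enough significant digits to span 1e11 down to 1e-12

var small = new Decimal('0.000000000001'); // 1e-12
var large = new Decimal('100000000000');   // 1e11
console.log(large.plus(small).toString()); // "100000000000.000000000001" -- no precision lost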
If I were writing this and needed to make sure there were no rounding errors I would likely try and use a GMP extension for another lang inside a microservice which was only tasked with the financial math. GMPY2 for Python3 is probably a good bet for something quick and easy.

Can javascript be trusted when making calculations?

I am implementing an invoice system, where everything is dynamically added to the DOM through javascript, and I am making some calculations in the browser itself with javascript.
For example, I am calculating each invoice line from the quantity and unit price, and generating a total sum.
The price can be a floating point number,
but I am not sure if this should be trusted or not. If someone has the same thoughts about javascript, please comment :)
I don't know why, but javascript doesn't seem to me as trustworthy as other programming languages like PHP. This is just my opinion, but if you can convince me otherwise, please do.
Thanks
Javascript uses the same data type that almost all languages use for floating point calculations. The double precision floating point data type is very common, because processors have built in support for it.
Floating point numbers have a limited precision, and most numbers with a fractional part can't be represented exactly. However, for what you are going to use it for, the precision is more than enough to show a correct result.
You should just be aware of the limited precision. When displaying the result, you should make sure that it's formatted (and rounded) to the precision that you want to show. Otherwise the limited precision might show up as, for example, a price of 14.9500000000000001 instead of 14.95.
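For example, a tiny illustration (with made-up quantities):

var lineTotal = 0.1 * 3;           // 0.30000000000000004 internally
console.log(lineTotal.toFixed(2)); // "0.30" -- rounded for display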
According to JavaScript's specification, all numbers are 64-bit double-precision floating point values.
From this post, you have 3 solutions:
use some implementation of Decimal for JavaScript, such as BigDecimal.js
just choose a fixed number of digits to keep, like this: (Math.floor(y/x) * x).toFixed(2)
switch to pure integers, treating prices as numbers of cents (a sketch follows below). This could mean big changes across the whole project.
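A minimal sketch of that integer-cents approach (the item list and helper name here are invented for illustration):

// convert to integer cents once, at the boundary
function toCents(price) { return Math.round(price * 100); }

var items = [{ qty: 3, price: 14.95 }, { qty: 1, price: 0.1 }];
var totalCents = 0;
for (var i = 0; i < items.length; i++) {
  totalCents += items[i].qty * toCents(items[i].price); // pure integer math from here on
}
console.log((totalCents / 100).toFixed(2)); // "44.95"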
Financial calculations usually require specific fixed rules about (for example) when and how to round (in which direction), etc.
That means you'll often maintain an internal sub-total precision until you move to a next section of your calculation (like adding the tax, as per rules set).
IEEE-754 floating point (as used in javascript) will give you exact integers up to 2^53 (if you think about it like an integer).
Now your 'job' is to pretend javascript doesn't support floating point and substitute it yourself using the simplest possible way: decrease your maximum integer range to obtain the required floating point precision and see if that resulting range is suitable to your needs. If not, then you might need an external high precision math library (although most basic operations are pretty easy to implement).
First determine your desired internal precision (including an overflow digit for your expected rounding behaviour): for example, 3 digits:
FLOOR((2^53) / (10^3)) = FLOOR(9,007,199,254,740,992 / 1,000) = 9,007,199,254,740
If this range is sufficient, then you need no other library: just multiply your input by 10^float_digits and maintain that internal precision per calculation section, while rounding each step according to the rules required for your calculation (you'd still need to do that when using a high-precision external math library).
For (visual) output, again, apply proper rounding, then just divide your remaining value by 10^(floatDigits - roundingDigits) and pass it through Number.prototype.toFixed() (which then just pads zeroes when required).
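Putting that into code, a rough sketch with 3 internal digits (the helper names are mine, and the 21% tax rule is an invented example):

var DIGITS = 3;
var SCALE = Math.pow(10, DIGITS); // 10^float_digits

function toFixedPoint(x) { return Math.round(x * SCALE); } // e.g. 19.99 -> 19990

// one calculation section: sub-total first, then the tax rule, rounding each step
var net = toFixedPoint(19.99) + toFixedPoint(5.01); // 25000, i.e. 25.000
var gross = Math.round(net * 1.21);                 // 30250, tax applied and rounded once
console.log((gross / SCALE).toFixed(2));            // "30.25"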
As to your other question regarding trustworthiness of javascript vs other programming languages: one can even boot/run and use LINUX on javascript inside the browser: http://bellard.org/jslinux/
Let that sink in for a moment...
Now what if I told you this even works in IE6... Pretty humbling. Even servers can run on javascript (node.js).
Hope this helps (it didn't fit in a comment).
Other answers have addressed issues that JavaScript has with using floating point numbers to represent money.
There's a separate issue with using JavaScript for calculations involving financial transactions that comes to mind.
Because the code is executed in a browser on the client machine, you can only trust the result to the extent that you can trust the client.
Therefore you should really only rely on JavaScript to calculate something that you could take for granted if the client told you.
For instance, if you were writing an e-commerce site, you could trust code that told you what the client wanted to buy and what the client's shipping address was, but you would need to calculate the price of the goods yourself to prevent the client from telling you a lower price.
It's entirely possible that the invoicing system you're working on will only be used internally to your organisation.
If this is the case, you can disregard this entire answer.
But if your application is going to be used by customers to access and manipulate their invoices and orders, then this is something you'd have to consider.

Optimizing javascript code to use integer arithmetic

There are some algorithms which solve problems "very well" under the assumption that "very well" means minimizing the amount of floating point arithmetic operations in favor of integer arithmetic. Take for example Bresenham's line algorithm for figuring out which pixels to fill in order to draw a line on a canvas: the guy made practically the entire process doable with only some simple integer arithmetic.
This sort of thing is obviously good in many situations. But is it worth fretting about operations that require a lot of floating-point math in javascript? I understand that everything's a double-precision float as far as the language specification goes. I'm wondering if it is practically worth it to try to keep things as integer-like as possible -- do browsers make optimizations that could make it worth it?
You can use Int8, Uint8, Int16, etc. in javascript, but it requires a bit more effort than normal - see TypedArrays.
var n = 8; // pick a size; n was not defined in the original snippet
var A = new Uint32Array(new ArrayBuffer(4 * n));
var B = new Uint32Array(new ArrayBuffer(4 * n));
// assign some example values to A
for (var i = 0; i < n; i++) {
  A[i] = i; // note: RHS is implicitly converted to uint32
}
// assign some example values to B
for (var i = 0; i < n; i++) {
  B[i] = 4 * i + 3; // again, note RHS is implicitly converted to uint32
}
// this is true integer arithmetic
for (var i = 0; i < n; i++) {
  A[i] += B[i];
}
Recently, the asm.js project has made it possible to compile C/C++ code to strange-looking javascript that uses these TypedArrays in a rather extreme fashion, the benefit being that you can use your existing C/C++ code and it should run pretty fast in the browser (especially if browser vendors implement special optimisations for this kind of code, which is supposed to happen soon).
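The core trick asm.js leans on is the |0 annotation, which promises the engine that a value is a 32-bit integer; a hand-written flavour of it might look like this (a sketch, not compiler-generated asm.js):

function add(x, y) {
  x = x | 0; // coerce to int32
  y = y | 0;
  return (x + y) | 0; // the result stays int32, so the JIT can emit integer adds
}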
On a side note*, if your program can use SIMD parallelism (see Wikipedia), i.e. if your code can use the SSEx instruction set, your arithmetic will be much faster; in fact, using int8s will be twice as fast as using int16s, etc.
*I don't think this is relevant to browsers yet, as it's too difficult for them to take advantage of on the fly. Edit: it turns out that Firefox is experimenting with this kind of optimization. Also, Dart (true Dart, not Dart compiled to js) will be able to do this in Chrome.
Long ago, computers lacked dedicated FPUs and did floating point math entirely via software emulation.
Modern computers all have dedicated FPUs which handle floating point math just as well as integer. You should not need to worry about it unless you have a very specific circumstance.
Actually, it makes no difference. JavaScript has no concept of "integer". JS only uses double-precision floating-point numbers, which may or may not hold integral values.
Therefore, there is absolutely nothing to gain in terms of performance by limiting yourself to integers.
However, keep in mind that integers are only exact up to 2^53, whereas non-integers can very easily suffer from precision loss (example: 0.1), so you might gain precision, if not speed, this way.
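You can see that integer limit directly (using Math.pow rather than newer syntax):

var max = Math.pow(2, 53);    // 9007199254740992
console.log(max === max + 1); // true -- adjacent integers are no longer distinguishable
console.log(max + 2);         // 9007199254740994 -- above 2^53 only every other integer is representable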

Performance of bitwise operators in javascript

One of the main ideas behind using bitwise operators in languages like C++/java/C# is that they're extremely fast. But I've heard that in javascript they're very slow (admittedly a few milliseconds probably doesn't matter much today). Why is this so?
(this question discusses when bitwise operators are used, so I'm changing the focus of this question to performance.)
This is quite an old question, but no one seemed to answer the updated version.
The performance hit that you get with JavaScript that doesn't exist in C/C++ is the cast from the floating point representation (how JavaScript stores all of its numbers) to a 32-bit integer to perform the bit manipulation, and back.
Nobody uses hex anymore?
function hextoRgb(c) {
  c = '0x' + c.substring(1); // '#191970' -> '0x191970'
  // the bitwise ops below coerce the hex string to a 32-bit integer first
  return [(c >> 16) & 255, (c >> 8) & 255, c & 255];
}
var c1 = hextoRgb('#191970');
alert('rgb(' + c1.join(',') + ')');
I use bitwise shift of zero in JS to perform quick integer truncation:
var i=3.141532;
var iTrunc=i>>0; //3
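One caveat worth hedging on: bitwise operators work on 32-bit integers, so this truncation trick silently breaks outside the int32 range:

console.log(3.141532 >> 0);     // 3 -- fine
console.log(2147483648.5 >> 0); // -2147483648 -- wrapped around past 2^31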
When would you want to use them? You would want to use them when you want to do bitwise operations. Just like you'd use boolean operators to do boolean operations, and mathematical operators to do mathematical operations.
If you are comfortable with bitwise operators it is very natural to use them for some applications. They can be used for many purposes other than an over-optimized boolean array. Of course, these circumstances don't come up very often in Javascript programming, but that's no reason why the operators shouldn't be available.
I found some good info at http://dreaminginjavascript.wordpress.com/2009/02/09/bitwise-byte-foolish/
Apparently they perform very well these days. Why would you use them? Same reason you would anywhere else.
I'd think it's up to the implementer to make an operator efficient or inefficient. For example, there's nothing that prevents a JavaScript implementer from making a JITting VM, which turns a bitwise op into 1 machine instruction. So there's nothing inherently slow about "the bitwise operators in JavaScript".
There is an NES emulator written in JavaScript - it seems to make plenty of use of bitwise operations.
I am doubtful that bitwise operations are particularly slow in javascript. Since such operations can map directly to CPU operations, which are themselves quite efficient, there doesn't appear to be any inherent characteristic of bitwise operations that would force them to be irremediably slow in javascript.
Edit, December 2015: I stand corrected! The performance hit that Javascript suffers in regards to bitwise operations comes from the need to convert from float to int and back (as all numeric variables in Javascript are stored as floating point values). Thank you to Chad Schouggins for pointing that out.
Nevertheless, as indicated in several responses, there exist various applications of javascript which rely on bitwise operations (e.g. cryptography and graphics) and which are not particularly slow (see silky and Snarfblam on this page). This suggests that while slower than in C/C++ and other languages which translate bitwise ops directly to single native CPU instructions, bitwise operations are not all that sluggish.
Let's nevertheless entertain the possibility that some particular reasons caused the various implementers of javascript hosts to implement bitwise ops in a fashion that makes these extremely slow, and see if this even matters...
Although javascript has been used for other purposes, the most common use of this language is in providing user-interface-type services.
BTW, I do not mean this in any pejorative way at all; performing these smart UI functions, and considering various constraints imposed on the language and also the loose adherence to standards, has required -and keeps requiring- talented javascript hackers.
The point is that in the context of UI-type requirements, the need for a quantity of bitwise operations large enough to expose any slowness of javascript in handling them is uncommon at best. Consequently, for typical uses, programmers should use bitwise operations where and if this approach seems to flow well with the overall program/data, and they should do so with little concern for performance issues. In the unlikely case of a performance bottleneck arising from bitwise use, one can always refactor things, but one is better off steering clear of premature optimization.
The notable exception to the above is with the introduction of canvas: on modern browsers, we can expect that more primitive graphics functions will be required of javascript hosts, and such operations can require in some cases heavy doses of bitwise operations (as well as healthy doses of math functions). It is likely that these services will eventually be supported by way of javascript libraries (and may even end up as language additions). For such libraries, the common smarts of the industry will have been put to use to figure out the most efficient approaches. Furthermore, if indeed there is a weakness in javascript performance with bitwise ops, we'll get some help, for I predict that the javascript implementations on various hosts (browsers) will be modified to improve this particular area. (This would follow the typical pattern of evolution of javascript that we've seen over the years.)
When speed is paramount, you can use them for bit-masking: http://snook.ca/archives/javascript/storing_values/
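The classic bit-masking pattern looks like this (the flag names are invented for the example):

var FLAG_READ = 1, FLAG_WRITE = 2, FLAG_EXEC = 4; // one bit per flag
var perms = FLAG_READ | FLAG_WRITE; // pack several booleans into one number
if (perms & FLAG_WRITE) {
  // write permission is set
}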
Also, if you need to support Netscape 4, you'd use them to deal with Document.captureEvents(). Not that any respectable company would have you write JS for NS4...
People do interesting things in JavaScript.
For example there are a lot of cryptography algorithms implemented in it (for various reasons); so of course bitwise operators are used.
Using JavaScript in its Windows Scripting Host JScript incarnation, you might have cause to use bitwise operators to pick out flags in values returned from WMI or Active Directory calls. For example, the User Access value of a user's record in AD contains several flags packed into one long integer.
var ADS_UF_ACCOUNTDISABLE = 0x00000002;
// note the parentheses: == binds tighter than &, so the unparenthesised
// version would actually compute uac & (true)
if ((uac & ADS_UF_ACCOUNTDISABLE) === ADS_UF_ACCOUNTDISABLE) {
  // user account has been disabled
}
Or someone's arbitrary table structure may contain such a field, accessible through ADO with JScript.
Or you may want to convert some retrieved data into a binary representation on any platform, just because:
BinaryData = "L";
BinaryString = BinToStr(BinaryData, ".", "x");
// BinaryString => '.x..xx..'
So there are numerous reasons why one might want to do bit manipulation in JavaScript. As for performance, the only way to know is to write it and test it. I suspect in most cases it would be perfectly acceptable, not significantly worse than any other of the multitude of inefficiencies these systems contain.
A lot of bitwise operations are being benchmarked here: http://jsperf.com/rounding-numbers-down/3
However, feel free to create your own performance testcase on jsPerf!

Is there a definitive solution to javascript floating-point errors?

I write line-of-business applications. I'd like to build a front end using Javascript and am trying to figure out how to deal with what are, for a business user, floating point errors (I understand that from a computer science perspective they might not be considered errors). I've read plenty on this and seen all kinds of rounding hacks that work on the examples given but seem prone to break down unexpectedly. Is there a definitive way to do decimal math in javascript?
According to Douglas Crockford, the only way around this problem is to scale your values to integers. Make sure each value really is an integer by using Math.round on the scaled value. (DC does not talk about the rounding part, but I discovered it was necessary, e.g. Math.round(1.1 * 100).) Do your calculation(s). When you are done with the math, scale back to the original precision. See JavaScript: The Good Parts, "Floating Point" section.
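A quick check of why the Math.round step is needed (the numbers are just for illustration):

console.log(1.1 * 100);        // 110.00000000000001 -- scaling alone isn't exact
var a = Math.round(1.1 * 100); // 110 -- now a true integer
var b = Math.round(2.2 * 100); // 220
console.log((a + b) / 100);    // 3.3 -- math done on integers, then scaled back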
One answer is to do the math in decimal instead of binary. Then you never have to worry about the decimal <=> binary conversion errors. You'd represent the numbers as binary digits in an array or a string and write the math routines yourself.
Here are some bignumber libraries you can look into if you don't want to go to that trouble:
http://jsfromhell.com/classes/bignumber
http://stz-ida.de/html/oss/js_bigdecimal.html.en
The only definitive solution seems to be writing your own arbitrary-precision number type working on strings internally -- which will be complicated and horribly slow.
