How to implement (fast) bigint division? - javascript

I'm currently writing my own BigInt class, splitting numbers into chunks of 7 digits (i.e. working in base 10,000,000).
I have implemented addition, subtraction, and multiplication, and now I'm implementing division and mod. I wrote code that performs long division (estimating quotient digits from the most-significant digits), and it works.
However, it is too slow. When I test operations on a 108-digit number and a 67-digit number, division takes 1.9 ms, much slower than the other operations (0.007~0.008 ms for addition/subtraction, 0.1 ms for multiplication).
Like Karatsuba and the FFT for fast multiplication, what algorithms exist for fast division? Wikipedia describes some division algorithms (which compute the multiplicative inverse of the divisor and multiply it by the dividend), but I don't think that helps me much with implementing division. I read the 'Large integer methods' section too, but that didn't help either... :(

The standard reference for big-integer arithmetic is Donald Knuth's The Art of Computer Programming, Volume 2, Section 4.3. His division algorithm is basically the grade-school algorithm, with some small improvements.
By the way, most people who implement big-integer arithmetic use a power of two rather than a power of ten as the radix of their number system.
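To give a feel for the limb-level loop, here is a minimal sketch (not Knuth's full Algorithm D, and the names are my own) of the simplest case: long division of a little-endian limb array in the question's base 10,000,000 by a single small divisor:

    const BASE = 1e7;

    // Divide a little-endian base-1e7 limb array by a small divisor d < BASE,
    // returning the quotient limbs and the remainder.
    function divmodSmall(limbs, d) {
      const q = new Array(limbs.length);
      let r = 0;
      for (let i = limbs.length - 1; i >= 0; i--) {
        const cur = r * BASE + limbs[i]; // < 1e14, so exact in a double
        r = cur % d;                     // exact for safe integers
        q[i] = (cur - r) / d;            // exact integer quotient
      }
      return { q: q, r: r };
    }

    // 123456789012345678 as limbs: [2345678, 5678901, 1234]
    // divmodSmall([2345678, 5678901, 1234], 7) gives remainder 1

The full multi-limb-divisor case adds normalization and an estimated quotient digit that (after normalization) is at most two too large and gets corrected; Knuth's book covers those details.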

I'd suggest you have a look at the source code of the GMP library and port the functionality you need to JavaScript, or get an idea of how it's done.
If there is a good algorithm out there, this library will most likely have it; and it is distributed under the LGPL.

For a reasonably fast division algorithm, look at http://myweb.lmu.edu/dmsmith/MComp1996.pdf
It is still O(n^2) but efficient for moderate lengths. It is especially well suited when you are using bases that are smaller than the word size. Years ago, I implemented it in Python. The code is buried in http://home.comcast.net/~casevh/decint_041.tar.gz . Look for the functions smithdiv() and remainder_norm().

Related

What is probability of random generator repeating more than once?

Imagine we have two independent pseudo-random number generators using the same algorithm but seeded differently, and we are generating numbers of the same size with these generators, say 32-bit integers. Provided the algorithm gives us a uniform distribution, there is a 1/2^32 probability (or is it?) of a collision. If a collision has just happened, what is the probability that the very next pair will also be a collision? It seems to me this probability might be different from (higher than) that initial uniform-based collision chance. Most currently existing pseudo-random number generators hold internal state to maintain their own stability, and a collision that just happened might signal that those internal states are somewhat "entangled", giving a modified (higher) chance of a collision happening again.
The question is probably too broad to give any precise answer, but revealing general directions/trends would also be nice. Here are some interesting aspects:
- Does the size of the initial collision matter? Is there a difference after a collision of 8 consecutive bits vs 64 bits? How, approximately, does the chance of the next collision depend on the size of the generated sequence?
- Does the pattern of pair generation matter? For example, we could find the initial collision by executing the first generator only once and "searching" the second generator, or we could invoke each generator on every iteration.
I'm particularly interested in the default JavaScript Math.random(). 32-bit integers can be generated from it like this (for example). EDIT: As pointed out in the comments, conversion of a random value from the [0; 1) range should be done carefully, as the exponent of such values is very likely to repeat (and it takes up a decent part of the result extracted this way).
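For example, one careful way (a sketch; the function name is mine, not a standard API) is to combine two calls so each contributes only about 16 bits:

    // Build an unsigned 32-bit integer from two Math.random() calls,
    // so the result doesn't depend on the low-order mantissa bits of one call.
    function random32() {
      const hi = Math.floor(Math.random() * 0x10000); // top 16 bits
      const lo = Math.floor(Math.random() * 0x10000); // bottom 16 bits
      return ((hi << 16) | lo) >>> 0;                 // force unsigned
    }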

Working with accurate currency values in Javascript

I'm working on a system that uses financial data. I'm getting subtle rounding errors due to the use of floating point numbers. I'm wondering if there's a better way to deal with this.
One of the issues is that I'm working with a mixture of different currencies, some of which have up to 12 decimal places, while others involve very large numbers.
This means that the smallest number I need to represent is 0.000000000001 (1×10^-12) and the largest 100,000,000,000 (1×10^11).
Are there any recommended ways to work with numbers of this size and not lose precision?
If you're really trying to stay in the JS realm, you might consider Decimal.js, which should cover your precision range.
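As a quick illustration of what that might look like (assuming decimal.js is installed; the values just mirror the range from the question):

    const Decimal = require('decimal.js'); // npm install decimal.js

    const small = new Decimal('0.000000000001'); // 1e-12, stored exactly
    const large = new Decimal('100000000000');   // 1e11, stored exactly
    console.log(small.times(large).toString());             // '0.1'
    console.log(new Decimal('0.1').plus('0.2').toString()); // '0.3', no drift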
If I were writing this and needed to make sure there were no rounding errors, I would likely try to use a GMP extension for another language inside a microservice tasked only with the financial math. GMPY2 for Python 3 is probably a good bet for something quick and easy.

Can javascript be trusted when making calculations?

I am implementing an invoice system, where everything is dynamically added to the DOM through JavaScript, and I am making some calculations in the browser itself with JavaScript.
For example, I am calculating each invoice line from the quantity and unit price, and generating a total sum.
The price can be a floating point number.
But I am not sure if this should be trusted or not; if someone has the same thoughts about JavaScript, please comment :)
I don't know, but JavaScript doesn't seem to me as trustworthy as other programming languages like PHP or so. This is my opinion, but if you can convince me, please do.
Thanks
JavaScript uses the same data type that almost all languages use for floating point calculations. The double-precision floating point data type is very common, because processors have built-in support for it.
Floating point numbers have a limited precision, and most numbers with a fractional part can't be represented exactly. However, for what you are going to use it for, the precision is more than enough to show a correct result.
You should just be aware of the limited precision. When displaying the result, you should make sure that it's formatted (and rounded) to the precision that you want to show. Otherwise the limited precision might show up as, for example, a price of 14.9500000000000001 instead of 14.95.
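For example:

    const lineTotal = 0.1 * 3;         // 0.30000000000000004, not 0.3
    console.log(lineTotal.toFixed(2)); // '0.30' (rounded for display)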
According to the JavaScript specification, all numbers are 64-bit double-precision floating point values.
From this post, you have 3 solutions:
- use some implementation of Decimal for JavaScript, such as BigDecimal.js
- just choose a fixed number of digits to keep, like this: (Math.floor(y/x) * x).toFixed(2)
- switch to pure integers, treating prices as numbers of cents; this could lead to big changes across the whole project (see the sketch below)
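A minimal sketch of the third option (the names are mine):

    const priceCents = 1495;             // 14.95 stored as integer cents
    const qty = 3;
    const totalCents = priceCents * qty; // 4485, exact integer math
    console.log((totalCents / 100).toFixed(2)); // '44.85' for display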
Financial calculations usually require specific fixed rules about (for example) when and how to round (in which direction), etc.
That means you'll often maintain an internal sub-total precision until you move on to the next section of your calculation (like adding the tax, as per the rules set).
IEEE-754 floating point (as used in JavaScript) represents integers exactly up to 2^53 (if you think of a double as an integer).
Now your 'job' is to pretend JavaScript doesn't support floating point and substitute it yourself in the simplest possible way: shrink your maximum integer range to obtain the required floating point precision, and see if the resulting range suits your needs. If not, you might need an external high-precision math library (although most basic operations are pretty easy to implement).
First determine your desired internal precision (including an overflow digit for your expected rounding behavior), for example 3 digits:
FLOOR((2^53)/(10^3)) = FLOOR(9,007,199,254,740,992 / 1000) = 9,007,199,254,740 (i.e. a maximum of 9,007,199,254,740.000 at three decimal digits)
If this range is sufficient, then you need no other library: just multiply your input by 10^floatDigits and maintain that internal precision per calculation section, rounding each step according to the rules required for your calculation (you'd still need to do that when using a high-precision external math library).
For (visual) output, again apply proper rounding, divide your remaining value by 10^(floatDigits - roundingDigits), and pass it through Number.prototype.toFixed() (which just pads zeros when required).
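A rough sketch of that scheme, assuming three internal digits with one of them acting as the rounding guard (all names are mine):

    const FLOAT_DIGITS = 3;
    const SCALE = Math.pow(10, FLOAT_DIGITS); // 1000

    function toInternal(x) {
      return Math.round(x * SCALE); // scale input into integer thousandths
    }

    let subtotal = toInternal(19.99) * 3;       // 59970 (integer arithmetic)
    subtotal = Math.round(subtotal / 10) * 10;  // round at the guard digit
    console.log((subtotal / SCALE).toFixed(2)); // '59.97'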
As to your other question regarding the trustworthiness of JavaScript vs other programming languages: one can even boot, run, and use Linux in JavaScript inside the browser: http://bellard.org/jslinux/
Let that sink in for a moment...
Now what if I told you this even works in IE6... Pretty humbling. Even servers can run on JavaScript (node.js).
Hope this helps (it didn't fit in a comment).
Other answers have addressed issues that JavaScript has with using floating point numbers to represent money.
There's a separate issue with using JavaScript for calculations involving financial transactions that comes to mind.
Because the code is executed in a browser on the client machine, you can only trust the result to the extent that you can trust the client.
Therefore you should really only rely on JavaScript to calculate something that you could take for granted if the client told you.
For instance, if you were writing an e-commerce site, you could trust code that told you what the client wanted to buy and what the client's shipping address was, but you would need to calculate the price of the goods yourself to prevent the client from telling you a lower price.
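A hypothetical server-side sketch of that rule (the price table and all names are illustrative, not from any particular framework):

    // The server's own price list, in cents; never taken from the client.
    const PRICES = { 'sku-1': 1495, 'sku-2': 250 };

    // Recompute the total from client-supplied items; trust only sku and qty.
    function computeTotalCents(items) {
      let total = 0;
      for (const item of items) {
        const price = PRICES[item.sku];
        if (price === undefined) throw new Error('unknown SKU: ' + item.sku);
        total += price * item.qty;
      }
      return total;
    }

    // computeTotalCents([{ sku: 'sku-1', qty: 3 }]) -> 4485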
It's entirely possible that the invoicing system you're working on will only be used internally within your organisation.
If this is the case, you can disregard this entire answer.
But if your application is going to be used by customers to access and manipulate their invoices and orders, then this is something you'll have to consider.

Does using rounded numbers decrease CPU usage in Javascript?

I'm performing a lot of calculations in JavaScript. I was wondering if using rounded numbers would decrease CPU usage? When I look at the inner workings of my code using console.log, the numbers have upwards of 15 decimal places.
Sometimes highly optimized engines can tell the difference between an integer and a double. For instance, 1+1 might use integer math where 1.0+1 might not. Most likely this "integerness" gets lost quickly; functions like Math.pow, Math.sqrt, etc. will likely lose the property. However, I would not rely on this behavior, and even rounded numbers might not have this effect (i.e. they might still be floats after rounding).
Also, as an aside, there's probably so much overhead in the JS engine that the difference between using a float and an integer would not be that big (given that the difference is maybe a factor of 2-3 on the processor itself and the overhead is probably at least a factor of 10).
No. JavaScript does not distinguish between integers and real numbers. It only has double-precision floats.
This means that accuracy will be best with integers or binary fractions (within the range of about 15 significant digits), but actual performance won't vary much, if at all.
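If you want to check this on your own engine, here is a rough micro-benchmark sketch (JIT warm-up and engine differences make the numbers indicative at best):

    function sum(step) {
      let acc = 0;
      for (let i = 0; i < 1e7; i++) acc += step; // same loop, different operand
      return acc;
    }

    console.time('whole-number step');
    sum(1);
    console.timeEnd('whole-number step');

    console.time('fractional step');
    sum(1.5);
    console.timeEnd('fractional step');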

Why am I seeing inexact floating-point results in ECMAScript / ActionScript 3?

Hey all, let's jump straight to a code sample to show how ECMAScript/JavaScript/AS3 can't do simple math right (AS3 uses an 'IEEE-754 double-precision floating-point number' for the Number class, which is supposedly identical to that used in JavaScript)...
trace(1.1); //'1.1': Ok, fine, looks good.
trace(1.1*100); //'110.00000000000001': What!?
trace((1.1*100)/100); //'1.1': Brings it back to 1.1 (since we're apparently multiplying by *approximately* 100 and then dividing by the same *approximate* '100' amount)
trace(1.1*100-110); //'1.4210854715202004e-14': Proof that according to AS3, 1.1*100!=110 (i.e. this isn't just a bug in Number.toString())
trace(1.1*100==110); //'false': Even further proof that according to AS3, 1.1*100!=110
What gives?
Welcome to the wonderful world of floating point calculation accuracy. In general, floating point calculations will give you results that are very, very nearly correct, but comparing outputs for absolute equality is unlikely to give you the results you expect without the use of rounding functions.
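For instance, a common workaround is to compare within a small tolerance rather than with == (the epsilon value here is an arbitrary choice):

    // True when a and b differ by less than eps.
    function nearlyEqual(a, b, eps) {
      return Math.abs(a - b) < (eps || 1e-9);
    }

    console.log(1.1 * 100 == 110);            // false
    console.log(nearlyEqual(1.1 * 100, 110)); // true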
This is just a side effect of using floating point numbers: these are binary representations of decimal numbers, so there will always be some approximation.
Long explanation
Floating point inconsistencies are a known problem in many languages. This is because computers represent numbers in binary floating point, which can't exactly represent most decimal fractions.
Have fun
As moonshadow states, you're running into issues with floating point precision. Floating point numbers aren't suited to representing and performing arithmetic on decimal values in the manner you would expect. These kinds of problems are seen most often when people try to use floating point numbers for financial calculations. The Wikipedia entry is good, but you might get more out of this page, which steps through an error-prone financial calculation: http://c2.com/cgi/wiki?FloatingPointCurrency
To accurately deal with decimal numbers you need a decimal library. I've outlined two BigDecimal-style libraries written in JavaScript that may suit your needs in another SO post; hopefully you'll find them useful:
https://stackoverflow.com/questions/744099/javascript-bigdecimal-library/1575569
