This is probably a question with a really logical answer, but I don't understand it!
Why does this give different results? The only difference is a for loop versus a while loop, even though they loop exactly the same number of times.
var array = [1.2344, 2.47373, 3.444];
var total = 0,
    total2 = 0,
    i = array.length,
    whileLoops = 0,
    forLoops = 0;

while (i--) {
    whileLoops++;
    total += array[i];
}

for (var i = 0, len = array.length; i < len; i++) {
    forLoops++;
    total2 += array[i];
}

if (total !== total2) {
    console.log("BOE");
}
I tried parseFloat, but that didn't help either :(
Is it because the JavaScript engine rounds numbers in some way?
On request, the fiddle: http://jsfiddle.net/5dsx0ump/
UPDATE
Would the solution be to first do a * 1000 and, after all the calculations, divide by 1000 again, to keep the numbers whole?
The difference between the loops is the order in which you add the numbers.
For each of those additions there is a tiny loss of precision, as the result has to fit in the same data type as both operands. What is lost depends on the numbers involved, so adding them in a different order produces a slightly different result in the end.
If you print out the numbers, they may or may not look the same, but looking the same when printed doesn't mean they have the same value. The numbers have a precision of 15-17 significant digits, but they are rounded to slightly fewer digits when printed, precisely to hide that limit in precision.
This is normal behaviour for floating point numbers, and you would see the same result in any programming language that uses them. Floating point numbers are simply not exact, so in applications where numbers actually have to be exact (e.g. banking), other data types are used.
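To see the ordering effect in isolation, here is a minimal sketch using the three values from the question; the groupings mirror what the two loops do:

var a = 1.2344, b = 2.47373, c = 3.444;
console.log((a + b) + c); // 7.15213             (the for loop's order)
console.log((c + b) + a); // 7.1521300000000005  (the while loop's order)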
Floating-point math (in JavaScript or any other language) has some quirks that you wouldn't expect. Putting this at the end of your code:
console.log(total, total2);
prints the following:
7.1521300000000005 7.15213
Heck, just put 0.1 + 0.2 in a browser console and see what you get. Not what you'd expect.
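For example (the classic case, easy to verify in any console):

console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false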
Instead of re-hashing the entire explanation, there's a really good write-up and discussion here: Is floating point math broken?
Related
I have a hash function like this.
class Hash {
    // Mixes x by XOR-ing shifted copies of itself.
    // Note that >> is an arithmetic (sign-extending) shift in JavaScript.
    static rotate(x, b) {
        return (x << b) ^ (x >> (32 - b));
    }
    // Three rounds of mixing; the bitwise operators force 32-bit integer math.
    static pcg(a) {
        let b = a;
        for (let i = 0; i < 3; i++) {
            a = Hash.rotate((a ^ 0xcafebabe) + (b ^ 0xfaceb00c), 23);
            b = Hash.rotate((a ^ 0xdeadbeef) + (b ^ 0x8badf00d), 5);
        }
        return a ^ b;
    }
}
// source: Adam Smith, https://groups.google.com/forum/#!msg/proceduralcontent/AuvxuA1xqmE/T8t88r2rfUcJ
I use it like this.
console.log(Hash.pcg(116)); // Output: -191955715
As long as I send an integer in, I get an integer out. Now here comes the problem: if I pass in a floating-point number, it gets truncated, so Hash.pcg(1.1) and Hash.pcg(1.2) yield the same result. I want different inputs to yield different results. A possible solution could be to multiply the input so the decimals aren't lost, but is there a more elegant and flexible solution?
Is there a way to convert a floating point number to a unique integer? Each floating point number would result in a different integer number.
Performance is important.
This isn't quite an answer, but I was running out of room to make it a comment. :)
You'll hit a problem with integers outside of the 32-bit range as well as with non-integer values.
JavaScript handles all numbers as 64-bit floating point. This gives you exact integers over the range -9007199254740991 to 9007199254740991 (±(2^53 - 1)), but the bit-wise operators used in your hash algorithm (^, <<, >>) only work in a 32-bit range.
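A quick way to see that truncation (my example, not from the original post): 2^53 - 1 is exact as a number, but a bitwise operator keeps only the low 32 bits.

console.log(9007199254740991 | 0); // -1: ToInt32 keeps the low 32 bits, which are all ones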
Since there are far more non-integer numbers possible than integers, no one-to-one mapping is possible with ordinary numbers. You could work something out with BigInts, but that will likely lead to comparatively much slower performance.
If you're willing to deal with the performance hit, you can use JavaScript's buffer functions to get at the actual bits of a floating point number. (I'd say more now about how to do that, but I've got to run!)
Edit... back from dinner...
You can convert JavaScript's standard number type, which is 64-bit floating point, to a BigInt like this:
let dv = new DataView(new ArrayBuffer(8));
dv.setFloat64(0, Math.PI);
console.log(dv.getFloat64(0), dv.getBigInt64(0), dv.getBigInt64(0).toString(16).toUpperCase())
The output from this is:
3.141592653589793 4614256656552045848n "400921FB54442D18"
The first item shows that the number was properly stored as a byte array, the second shows the BigInt created from the same bits, and the last is the same BigInt again, but in hex, to better show the floating point data format.
Once you've converted a number like this to a BigInt (which is not the same numeric value, but is the same string of bits), every possible value of JavaScript's number type will be uniquely represented.
The same bit-wise operators you used in your algorithm above will work with BigInts, but without the 32-bit limitation. I'm guessing that for best results you'd want to change the 32 in your code to 64, and use 16-digit (instead of 8-digit) hex constants as hash keys.
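For illustration, here is a hedged sketch of those two pieces: the bit-reinterpretation step and a 64-bit analogue of the rotate helper. The names float64ToBits and rotate64 are mine, and the hash constants themselves would still need widening to 64 bits as described above:

function float64ToBits(x) {
    // Reinterpret the 64 bits of a double as a BigInt (no numeric conversion).
    const dv = new DataView(new ArrayBuffer(8));
    dv.setFloat64(0, x);
    return dv.getBigUint64(0);
}

function rotate64(x, b) {
    // Same XOR-of-shifts mixing step, masked to stay within 64 bits.
    const mask = (1n << 64n) - 1n;
    return ((x << b) ^ (x >> (64n - b))) & mask;
}

console.log(float64ToBits(1.1) === float64ToBits(1.2)); // false: distinct doubles give distinct integers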
I recently wrote this code to generate 10 characters randomly: Math.random() produces a decimal, toString(36) converts it to base 36, and the replace strips out everything that isn't a lowercase letter.
Math.random().toString(36).replace(/[^a-z]+/g,'').substr(1,10);
Does anybody have a hint why Firefox (47.0) and Chrome (51) don't handle this equally?
Chrome tests:
Math.random().toString(36).replace(/[^a-z]+/g,'').substr(1,10);
"spkcirhyzb"
"gcqbrmulxe"
"sallvbzqbk"
"pcdcufhqet"
"knfffqsytm"
Firefox tests:
Math.random().toString(36).replace(/[^a-z]+/g,'').substr(1,10);
"zxntpvn"
"hebfyxlt"
"zclj"
"ormtqw"
"cfbsnye"
Live version:
for (var n = 0; n < 5; ++n) {
    console.log(Math.random().toString(36).replace(/[^a-z]+/g,'').substr(1,10));
}
UPDATE (string average):
var test;
var count = 0;
for (var n = 0; n < 1000; ++n) {
    test = Math.random().toString(36).replace(/[^a-z]+/g,'').substr(1,10);
    count += test.length;
}
console.log(count);
console.log(count/1000);
My results:
Chrome - 9.999
Firefox - 6.794
Because Chrome's implementation of Number#toString(36) outputs more digits than Firefox's. Consider the number 0.9112907907957448:
Chrome: 0.wt16lcd3ae3m96qx2a3v7vi
Firefox: 0.wt16lcd3ae
You can try it here:
console.log((0.9112907907957448).toString(36));
The spec says the algorithm can be implementation-dependent; it just has to be a "generalization" of the ToString algorithm applied to the Number type. Apparently the V8 team (Chrome's JavaScript engine) and the SpiderMonkey team (Firefox's) differ in their interpretations.
The rules for converting IEEE-754 double-precision binary floating point ("double") numbers to strings are complex, because doubles routinely do not precisely store the value that we think of them as storing. For instance, 0.1 is not really 0.1 (which leads to the famous 0.1 + 0.2 != 0.3 issue). It's really, really close to 0.1, but it isn't 0.1. So in theory, (0.1).toString() should output 0.1000000000000000055511151231257827021181583404541015625 (the exact value of the double nearest to 0.1).
In general, though, algorithms that create strings for these values follow the rule of outputting only as many digits as are needed so that converting the string back to a floating-point double gives you the same double. That is, even though 0.1 isn't exactly 0.1, it's all the digits you need to get back to the original double value that's very nearly 0.1. Apparently Chrome's implementation of toString in base 36 outputs more digits than that, probably in accordance with a note in the spec, but I'm not an expert.
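You can check the full stored value yourself; in engines that support the extended precision range for toPrecision (up to 100 significant digits since ES2018), this prints the exact decimal expansion of the double nearest to 0.1:

console.log((0.1).toPrecision(55));
// "0.1000000000000000055511151231257827021181583404541015625"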
The technique is fundamentally flawed in any case: You're taking a string with a near-purely random series of letters and digits and removing the digits, then expecting to get at least ten remaining characters. There's no way to be sure that's actually going to be true, not even on Chrome.
This is a working solution to your initial question of generating a random string of 10 characters.
As T. J. Crowder has pointed out, your current approach won't reliably produce 10 characters in any browser.
var chars = "abcdefghijklmnopqrstuvwxyz";
var str = '';
for (var i = 0; i < 10; i++) {
    str += chars[Math.floor(Math.random() * chars.length)];
}
console.log(str);
I'm reading Effective Javascript by David Herman, and just learned this about how JavaScript handles numbers:
"all numbers in JavaScript are double-precision floating-point numbers, that is, the 64-bit encoding of numbers specified by the IEEE 754 standard -- commonly known as "doubles". If this fact leaves you wondering what happened to the integers, keep in mind that doubles can represent integers perfectly with up to 53 bits of precision. All of the integers from -9,007,199,254,740,992 (-2^53) to 9,007,199,254,740,992 (2^53) are valid doubles." (p. 7)
I was curious, so I threw together this jsfiddle to try it out:
var hilariouslyLargeNumber = 9007199254740992;
console.log(hilariouslyLargeNumber);
// 9007199254740992
console.log(hilariouslyLargeNumber + 1);
// 9007199254740992
console.log (hilariouslyLargeNumber === hilariouslyLargeNumber);
// true
console.log(hilariouslyLargeNumber === hilariouslyLargeNumber+1);
// true
console.log(hilariouslyLargeNumber === hilariouslyLargeNumber-1);
// false
I sort of understand why this is the case -- in simple (simple, simple) language, there aren't any more 'slots' for any more 0s and 1s for how JavaScript encodes numbers, and so it has to stop at 9,007,199,254,740,992.
So: what do I do if I find myself in possession of 9,007,199,254,740,993 puppies, and want to write some code to help me remember which one is which? Do I need to use something other than JavaScript? If so, why?
You have to do some work-around programming. An example would be:
var MAX_VALUE = 9007199254740992;
var lower_digit_set = 0;
var upper_digit_set = 0;

/* Do some calculations that will eventually make lower_digit_set reach 9007199254740992 */
lower_digit_set = MAX_VALUE;
if (lower_digit_set == MAX_VALUE) {
    lower_digit_set = 0;
    upper_digit_set = upper_digit_set + 1;
}

/* What you have to keep in mind is that your final number is something you calculate,
   but you cannot display it directly (you probably could, but it is a complex solution
   that would need a longer thought). So if we increase lower_digit_set like this: */
lower_digit_set = lower_digit_set + 1;

/* ...then the actual number is a combination of both lower_digit_set and upper_digit_set */
console.log("The actual number is the max value taken once, or twice, or thrice ...etc., plus a remainder");
console.log("Number of times we count the max value: ", upper_digit_set);
console.log("Then we add our remainder: ", lower_digit_set);
Please note that if you are going to handle negative numbers, you should account for that too. The solution also depends on your needs; this is just an example, and you may need to modify it to fit them, but it gives you a general idea of what you need to do, or at least a way of thinking, so to speak.
Of course you can do it; you just can't do it with primitives. Libraries like JSDecimal represent numbers in other ways.
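For what it's worth, engines that support it also ship a built-in arbitrary-precision integer type, BigInt; a minimal illustration of the difference:

console.log(9007199254740993 === 9007199254740992);   // true: the number literal is rounded to the nearest double
console.log(9007199254740993n === 9007199254740992n); // false: BigInts are exact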
var quantity = $(this).find('td:eq(2) input').val() * 1;
var unitprice = $(this).find('td:eq(3) input').val() * 1;
var totaltax = 0;
$(this).find('td:eq(4) input[name^=taxamount]').each(function () {
    totaltax = (totaltax * 1) + ($(this).val() * 1);
});
var subtotal = (unitprice + totaltax);
alert(subtotal + ' is unit subtotal, to multiply by ' + quantity);
var total = subtotal * quantity;
$(this).find('td:last').html('$' + total);
In this case, based on my DOM, the values are all numeric (especially because I'm applying the *1 modifier to each value to ensure they are numbers, not strings).
In this case, these are the values returned within the first 7 lines of the above code (and verified through alert):
quantity: 10
unitprice: 29
totaltax: 3.48
subtotal: 32.48
When I multiply subtotal*quantity for the total variable, total returns:
total: 324.79999999999995
So at the end, I get the td:last filled with $324.79999999999995 rather than $324.80 which would be more correct.
Bizarre, I know. I tried all sorts of alerts at different points to ensure there were no errors etc.
This has been asked one bizillion times.
Please read: What Every Computer Scientist Should Know About Floating-Point Arithmetic
You're coming up against a familiar issue with floating point values: certain values can't be precisely represented in a finite binary floating point number.
See here:
How to deal with floating point number precision in JavaScript?
This is the way floating point numbers work. There's nothing bizarre going on here.
I'd recommend that you round the value appropriately for display.
That's the joy of floating point arithmetic -- some base-10 decimals cannot be represented exactly in binary.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
Computers can't handle most decimal fractions exactly in binary: a value like 0.01 has no finite base-2 representation, so they need to store close approximations instead, and when you do arithmetic on those approximations the results can drift a little from the true result.
You can fix it with (Math.round(total*100)/100).toFixed(2);
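Using the numbers from the question above (subtotal 32.48 times quantity 10):

var total = 32.48 * 10;
console.log(total);                                      // 324.79999999999995
console.log((Math.round(total * 100) / 100).toFixed(2)); // "324.80"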
As others have mentioned, this is the way it's meant to work. A suggested workaround can be found below:
var v = "324.32999999999995";

function roundFloat(n, d) {
    var a = Math.pow(10, d);       // scale factor: 10^d
    var b = Math.round(n * a) / a; // round at the d-th decimal place
    return b;
}

$("body").append(roundFloat(v, 3));
Where v would be replaced with the desired value.
You can view the working example at: http://jsfiddle.net/QZXhc/
You could try rounding to 2 decimal digits as workaround
I'm trying to implement a BigInt type in JavaScript using an array of integers. For now each element has an upper bound of 256. I've finished implementing all the integer operations, but I can't figure out how to convert the BigInt to its string representation. Of course, the simple way is this:
BigInt.prototype.toString = function(base) {
    var s = '', total = 0, i, conv = [
        ,,
        '01',
        '012',
        '0123',
        '01234',
        '012345',
        '0123456',
        '01234567',
        '012345678',
        '0123456789',
        ,
        ,
        ,
        ,
        ,
        '0123456789abcdef'
    ];
    base = base || 10;
    for (i = this.bytes.length - 1; i >= 0; i--) {
        total += this.bytes[i] * Math.pow(BigInt.ByteMax, this.bytes.length - 1 - i);
    }
    while (total) {
        s = conv[base].charAt(total % base) + s;
        total = Math.floor(total / base);
    }
    return s || '0';
};
But when the BigInts actually get big, I won't be able to convert by adding anymore. How can I convert an array of base-x to an array of base-y?
See the example I gave in this answer to a similar question recently (it's for base-10 to base-3, but the principle should be transferable): C Fast base convert from decimal to ternary.
In summary:
Iterate over the input digits, from low to high. For each digit position, first calculate what 1000...000 (base-256) would be in the output representation (it's 256x the previous power of 256). Then multiply that result by the digit, and accumulate it into the output representation.
You will need routines that perform multiplication and addition in the output representation. The multiplication routine can be written in terms of the addition routine.
Note that I make no claims that this approach is in any way fast (I think it's O(n^2) in the number of digits); I'm sure there are algorithmically faster approaches than this.
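Here is a minimal sketch of that approach in JavaScript, assuming digits are stored most-significant first as in the question's this.bytes. The helper names (addInto, mulSmall, convertBase) are mine, not from the original answer:

function addInto(acc, addend, base) {
    // acc += addend; both are little-endian digit arrays in the given base
    var carry = 0, i, sum;
    for (i = 0; i < addend.length || carry; i++) {
        sum = (acc[i] || 0) + (addend[i] || 0) + carry;
        acc[i] = sum % base;
        carry = Math.floor(sum / base);
    }
}

function mulSmall(digits, factor, base) {
    // multiply a little-endian digit array by a small plain integer
    var result = [], carry = 0, i, prod;
    for (i = 0; i < digits.length || carry; i++) {
        prod = (digits[i] || 0) * factor + carry;
        result[i] = prod % base;
        carry = Math.floor(prod / base);
    }
    return result;
}

function convertBase(bytes, fromBase, toBase) {
    var power = [1]; // fromBase^0, expressed in the output base
    var out = [0];
    for (var i = bytes.length - 1; i >= 0; i--) { // low digits first
        addInto(out, mulSmall(power, bytes[i], toBase), toBase);
        power = mulSmall(power, fromBase, toBase);
    }
    while (out.length > 1 && out[out.length - 1] === 0) out.pop(); // strip leading zeros
    return out.reverse().join('');
}

console.log(convertBase([1, 0], 256, 10));    // "256"
console.log(convertBase([1, 2, 3], 256, 10)); // "66051" (1*65536 + 2*256 + 3)

The accumulator and the running power are kept little-endian internally so carries propagate upward naturally; the digits are reversed once at the end.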
If you're prepared to put on your math thinking cap more than I am right now, someone seems to have explained how to convert digit representations using Pascal's triangle:
http://home.ccil.org/~remlaps/DispConWeb/index.html
There are links to the source code near the bottom. They're in Java rather than JavaScript, but if you're willing to put in the effort to grok the math, you can probably write your own implementation or port the code...