In the following code, i will always be an even number, so the quotient i / 2 should always be an integer. Should I still use Math.floor(i / 2) to be on the safe side? I'm asking because JavaScript treats all numbers as floating point, so I'm concerned about rounding errors.
for (var i = 0; i < data.length; i = i + 2) {
    var name = names[i / 2];
    ...
}
No, you do not have to use Math.floor() in this situation: i is always even, and in any case names[1.00] is equivalent to names[1].
To check, try the code below in a JavaScript console. The array's length is 20, and the first 10 items will be printed:
var names = ["nums1", "nums2", "nums3", "nums4", "nums5", "nums6", "nums7",
             "nums8", "nums9", "nums10", "nums11", "nums12", "nums13", "nums14",
             "nums15", "nums16", "nums17", "nums18", "nums19", "nums20"];
for (var i = 0; i < names.length; i = i + 2) {
    console.log(names[i / 2]);
}
Dividing an even integer by 2 always returns an exact integer, as long as the value does not exceed Number.MAX_SAFE_INTEGER; beyond that point, integers are stored with power-of-two spacing, so they lose precision.
You cannot reach that territory through array indexing anyway: the maximum array length is 2^32 − 1, far below the maximum safe integer, so for any valid index i, the quotient i / 2 is exactly representable. So basically your code will always work. However, I would rather recommend you do:
var array = ["one", "two", "three", "four", "five", "six", "seven",
             "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
             "fifteen", "sixteen", "seventeen", "eighteen", "nineteen", "twenty"];
for (var i = 0, l = array.length / 2; i < l; i++) {
    console.log(array[i]);
}
It saves the unnecessary division on every iteration.
According to the JavaScript (ECMAScript®) Language Specification, if
i <= Number.MAX_SAFE_INTEGER + 1
then i can be exactly represented, and the division works out fine.
The value of Number.MAX_SAFE_INTEGER is the largest integer n such that n and n + 1 are both exactly representable as a Number value.
The value of Number.MAX_SAFE_INTEGER is 9007199254740991 (2^53 − 1).
Section 12.7.3.2 Applying the / Operator
In the remaining cases, where neither an infinity, nor a zero, nor NaN is involved, the quotient is computed and rounded to the nearest representable value using IEEE 754-2008 round to nearest, ties to even mode. If the magnitude is too large to represent, the operation overflows; the result is then an infinity of appropriate sign. If the magnitude is too small to represent, the operation underflows and the result is a zero of the appropriate sign. The ECMAScript language requires support of gradual underflow as defined by IEEE 754-2008.
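A quick sanity check you can paste into a console (halving an even integer stays exact right up to 2^53, the edge of the safe range):

const big = Number.MAX_SAFE_INTEGER + 1;  // 9007199254740992, i.e. 2^53, still exactly representable
console.log(big / 2);                     // 4503599627370496
console.log(Number.isInteger(big / 2));   // true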
Related
I would like to ask about the behavior of parseInt when called with a string which represents an integer value larger than Number.MAX_SAFE_INTEGER.
Technically, I assume I should be expecting the same outcome which I would get by using that value directly (i.e., as a number rather than as a string).
For example, the following two will yield the same value (whether it's accurate or not):
const x = parseInt("0x100000000000000000000000000000000");
const x = 0x100000000000000000000000000000000;
I do understand, however, that perhaps JS doesn't guarantee this.
So what I would really like to know is whether I can at least count on parseInt to return a value different than 0, when called with a string which represents an integer value larger than Number.MAX_SAFE_INTEGER.
The problem is fundamental to how numbers are represented: any numeric value above Number.MAX_SAFE_INTEGER is no longer safe to use. A basic example:
const max = Number.MAX_SAFE_INTEGER;
const maxPlus1 = Number.MAX_SAFE_INTEGER + 1;
const maxPlus2 = Number.MAX_SAFE_INTEGER + 2;
console.log("max:", max); //9007199254740991
console.log("max + 1:", maxPlus1); //9007199254740992
console.log("max + 2:", maxPlus2); //9007199254740992
console.log("max + 1 = max + 2:", maxPlus1 === maxPlus2); //true
As you can see, almost immediately after you breach the Number.MAX_SAFE_INTEGER barrier, you run into precision problems. JavaScript uses the IEEE 754 standard to represent numbers, and while it can represent numbers higher than its highest safe integer (as opposed to, say, an int field in another language, which would wrap around to zero or the maximum negative value), such representations are imprecise. Some big numbers, like 9007199254740993 (which is Number.MAX_SAFE_INTEGER + 2), cannot be represented at all, and you get a different number instead.
The exact same thing applies to parseInt: since it converts the string into a JavaScript number, there might not be a precise representation for it:
const maxPlus1String = "9007199254740992";
const maxPlus2String = "9007199254740993";
const maxPlus1 = parseInt(maxPlus1String);
const maxPlus2 = parseInt(maxPlus2String);
console.log("max + 1:", maxPlus1); //9007199254740992
console.log("max + 2:", maxPlus2); //9007199254740992
console.log("(string) max + 1 = max + 2:", maxPlus1String === maxPlus2String); //false
console.log("max + 1 = max + 2:", maxPlus1 === maxPlus2); //true
Ultimately, it's a question of how floating point numbers are represented. Wikipedia has a good article but I'll simplify it to the most important parts:
With floating point representation, you keep a mantissa (also called a significand, with a d at the end) and an exponent for each number. This works like scientific notation, so I'll use that for easier referencing:
1.23e5 = 1.23 × 10^5 = 123 000
1.23 is the mantissa of the number.
5 is the exponent.
With both of these, you can represent arbitrarily large numbers in very short form. However, with floating point representation, you are limited in how many bits of each you can keep, and this comes at the cost of accuracy: once you run out of digits for the mantissa, you lose precision. So, if we decide to allow only one decimal place in our scientific notation, we get the number 1.2e5, which could be 123 000 but might also be 120 000 or 124 999 - we cannot recreate the original from the shortened form. A similar thing happens with floating point numbers: once you don't have enough bits for the mantissa, the rest are discarded, so you don't get the exact number back.
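The same digit loss is easy to see in JavaScript itself:

console.log(9007199254740993);      // prints 9007199254740992 - the last digit is lost
console.log(123456789012345678901); // prints 123456789012345680000 - the tail is discarded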
When the exponent runs out of digits, you reach the highest representable number.
In JavaScript, the maximum number possible can be seen in Number.MAX_VALUE:
console.log(Number.MAX_VALUE)
1.7976931348623157e+308 is pretty large, with an exponent of 308. You can represent a lot of numbers with this, and if you parseInt anything under this value, you will get back some number in the region of what you parsed.
However, what happens if you go over? You get the special value that IEEE 754 reserves for magnitudes too large to represent: Infinity. If you happen to parse something larger than Number.MAX_VALUE, you'll get Infinity instead:
const largeNum = "17" + "0".repeat(307); //1.7e308
const tooLargeNum = "18" + "0".repeat(307); //1.8e308
console.log("large number string:", largeNum);
console.log("large number parsed:", parseInt(largeNum));
console.log("too large number string:", tooLargeNum);
console.log("too large number parsed:", parseInt(tooLargeNum));
So, even with astronomically large inputs, you are guaranteed to get back a value greater than zero, since Infinity > 0.
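If you need to detect that a parse has left the safe range, here is a small sketch (Number.isSafeInteger does the check; BigInt is an option when you need exactness):

const s = "9007199254740993";          // Number.MAX_SAFE_INTEGER + 2
const n = parseInt(s, 10);
console.log(n);                        // 9007199254740992 - off by one
console.log(Number.isSafeInteger(n));  // false - the parsed value may be imprecise
console.log(BigInt(s));                // 9007199254740993n - exact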
I have a hash function like this.
class Hash {
  static rotate(x, b) {
    return (x << b) ^ (x >> (32 - b));
  }

  static pcg(a) {
    let b = a;
    for (let i = 0; i < 3; i++) {
      a = Hash.rotate((a ^ 0xcafebabe) + (b ^ 0xfaceb00c), 23);
      b = Hash.rotate((a ^ 0xdeadbeef) + (b ^ 0x8badf00d), 5);
    }
    return a ^ b;
  }
}
// source Adam Smith: https://groups.google.com/forum/#!msg/proceduralcontent/AuvxuA1xqmE/T8t88r2rfUcJ
I use it like this.
console.log(Hash.pcg(116)); // Output: -191955715
As long as I pass an integer in, I get an integer out. Now here comes the problem: if the input is a floating point number, rounding happens, and Hash.pcg(1.1) and Hash.pcg(1.2) yield the same result. I want different inputs to yield different results. A possible solution could be to multiply the input so the decimal part is not rounded away, but is there a more elegant and flexible solution to this?
Is there a way to convert a floating point number to a unique integer? Each floating point number would result in a different integer number.
Performance is important.
This isn't quite an answer, but I was running out of room to make it a comment. :)
You'll hit a problem with integers outside of the 32-bit range as well as with non-integer values.
JavaScript handles all numbers as 64-bit floating point. This gives you exact integers over the range -9007199254740991 to 9007199254740991 (±(2^53 - 1)), but the bit-wise operators used in your hash algorithm (^, <<, >>) only work in a 32-bit range.
Since there are far more non-integer numbers possible than integers, no one-to-one mapping is possible with ordinary numbers. You could work something out with BigInts, but that will likely lead to comparatively much slower performance.
If you're willing to deal with the performance hit, you can use JavaScript buffer functions to get at the actual bits of a floating point number. (I'd say more now about how to do that, but I've got to run!)
Edit... back from dinner...
You can convert JavaScript's standard number type, which is 64-bit floating point, to a BigInt like this:
let dv = new DataView(new ArrayBuffer(8));
dv.setFloat64(0, Math.PI);
console.log(dv.getFloat64(0), dv.getBigInt64(0), dv.getBigInt64(0).toString(16).toUpperCase())
The output from this is:
3.141592653589793 4614256656552045848n "400921FB54442D18"
The first item shows that the number was properly stored in the byte array, the second shows the BigInt created from the same bits, and the last is the same BigInt again, but in hex, to better show the floating point data format.
Once you've converted a number like this to a BigInt (which is not the same numeric value, but it is the same string of bits), every possible number value will be uniquely represented.
The same bit-wise operators you used in your algorithm above will work with BigInts, but without the 32-bit limitation. I'm guessing that for best results you'd want to change the 32 in your code to 64, and use 16-digit (instead of 8-digit) hex constants as hash keys.
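As a hedged sketch of that idea (the rotation width and mask below are illustrative, not tuned; it just mirrors the 32-bit rotate above at 64 bits):

// Widen the rotate to 64 bits using BigInt.
const MASK64 = (1n << 64n) - 1n;

function rotate64(x, b) {
  return ((x << b) ^ (x >> (64n - b))) & MASK64;
}

// Hash the raw bits of a double instead of its numeric value.
const dv = new DataView(new ArrayBuffer(8));
dv.setFloat64(0, 1.1);
const bits = dv.getBigUint64(0); // distinct doubles give distinct bit patterns
console.log(rotate64(bits, 23n).toString(16));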
I am relatively unfamiliar with JavaScript, and I was recently told that a JavaScript array contains a length variable of type Number. This length is automatically updated as the array is updated to the number of elements in the array.
However, I was also told that internally, JavaScript uses a 64-bit floating point representation for its Number class. We know that floating point arithmetic cannot exactly represent all integers within its range.
So my question is, what happens with large arrays, where length + 1 cannot exactly represent the next largest integer in the sequence?
According to this, the maximum length of an Array is 4,294,967,295. Number.MAX_SAFE_INTEGER is 9,007,199,254,740,991, so you won't have to worry: the engine won't let you get that far. Example:
new Array(4294967296); // RangeError: Invalid array length
Relevant part of the spec:
c. Let newLen be ToUint32(Desc.[[Value]]).
d. If newLen is not equal to ToNumber(Desc.[[Value]]), throw a RangeError exception.
So given our example length 4294967296:
var length = 4294967296;
var int32length = length >>> 0; // Convert to Uint32
int32length === 0;              // 4294967296 cannot be represented as a Uint32; it wraps to 0
length !== int32length;         // Therefore a RangeError is thrown
The maximum length of an array according to the ECMA-262 5th Edition specification is bound by an unsigned 32-bit integer due to the ToUint32 abstract operation, so the longest possible array could have 2^32 − 1 = 4,294,967,295 ≈ 4.29 billion elements. This is according to Maximum size of an Array in JavaScript.
So I guess @RGraham is right.
We know that Java does not handle underflows and overflows, but how does JavaScript handle these for integers?
Does it go back to a minimum/maximum? If yes, which minimum/maximum?
I need to split a string and compute a hash value based on its characters.
In a simple test, when I try this:
var max = Number.MAX_VALUE;
var x = max + 10;
var min = Number.MIN_VALUE;
var y = min / 10;
I find that x and max have the same value (in Chrome, IE and Firefox), so it appears that some overflows are just pegged to the max value. And y gets pegged to 0, so some underflows seem to go to zero.
Ahhh, but it is not quite that simple. Not all overflows go to Number.MAX_VALUE and not all underflows go to Number.MIN_VALUE. If you do this:
var max = Number.MAX_VALUE;
var z = max * 2;
Then, z will be Infinity.
It turns out that it depends upon how far you overflow/underflow. If you go too far, you will get Infinity instead. This is because of the use of IEEE 754 round-to-nearest mode, where the max value can be considered nearer than infinity. See Adding to Number.MAX_VALUE for more detail. Per that answer, values of 1.7976931348623158 × 10^308 or greater round to Infinity; values between Number.MAX_VALUE and that will round to Number.MAX_VALUE.
To make things even more complicated, there is also something called gradual underflow, which JavaScript supports. This is where the mantissa of the floating point value has leading zeroes in it. Gradual underflow allows floating point to represent some smaller numbers that it otherwise could not, but they are represented at reduced precision.
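You can watch gradual underflow happen near the bottom of the range (the literal below is the smallest normal double):

console.log(Number.MIN_VALUE);            // 5e-324, the smallest subnormal
console.log(Number.MIN_VALUE / 2);        // 0 - finally underflows to zero
console.log(2.2250738585072014e-308 / 2); // 1.1125369292536007e-308, a subnormal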
You can see exactly where the limits are:
>>> Number.MAX_VALUE + 9.979201e291
1.7976931348623157e+308
>>> Number.MAX_VALUE + 9.979202e291
Infinity
Here's a runnable snippet you can try in any browser:
var max = Number.MAX_VALUE;
var x = max + 10;
var min = Number.MIN_VALUE;
var y = min / 10;
var z = max * 2;
document.getElementById("max").innerHTML = max;
document.getElementById("max10").innerHTML = x;
document.getElementById("min").innerHTML = min;
document.getElementById("min10").innerHTML = y;
document.getElementById("times2").innerHTML = z;
body {
font-family: "Courier New";
white-space:nowrap;
}
Number.MAX_VALUE = <span id="max"></span><br>
Number.MAX_VALUE + 10 = <span id="max10"></span><br>
<br>
Number.MIN_VALUE = <span id="min"></span><br>
Number.MIN_VALUE / 10 = <span id="min10"></span><br>
<br>
Number.MAX_VALUE * 2 = <span id="times2"></span><br>
The maximum and minimum are +/- 9007199254740992.
Try these Number type properties:
alert([Number.MAX_VALUE, Number.MIN_VALUE]);
From the ECMAScript 2020 language specification, section "The Number Type":
Note that all the positive and negative mathematical integers whose magnitude is no
greater than 2^53 are representable in the Number type (indeed, the
mathematical integer 0 has two representations, +0 and −0).
Test:
var x = 9007199254740992;
var y = -x;
x == x + 1; // true !
y == y - 1; // also true !
Number
In JavaScript, the number type is a 64-bit IEEE 754 floating point number, not an integer, so it doesn't follow the integer overflow/underflow patterns common in other languages.
Since a floating point number uses 53 bits for its significand, it can represent any number in the range Number.MIN_SAFE_INTEGER to Number.MAX_SAFE_INTEGER (−2^53 + 1 to 2^53 − 1) without floating point error. For numbers out of this range, the result is rounded to the nearest representable number, or becomes Infinity if it is too large.
Bitwise operators
Bitwise operators treat their operands as 32-bit integers, so the common integer overflow can happen here just as in other languages: only the last 32 bits are kept after a calculation. For example, 3 << 31 results in -2147483648.
>>> treats its operand as an unsigned 32-bit integer; all other bitwise operators treat their operands as signed 32-bit integers. If you want to convert a signed integer to unsigned, you can write value >>> 0; to convert back, use value | 0.
Also note that shift counts are taken modulo 32, so shifting an integer by 33 actually shifts it by 1.
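For example (nothing engine-specific here; it all follows from the 32-bit rules above):

console.log(3 << 31);        // -2147483648 - only the low 32 bits survive
console.log(-1 >>> 0);       // 4294967295  - signed -1 viewed as unsigned
console.log(4294967295 | 0); // -1          - and back to signed
console.log(1 << 33);        // 2           - the shift count 33 becomes 33 % 32 = 1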
BigInt
Just like Java's java.math.BigInteger, BigInt supports unbounded integers (still bound by your memory limit, though), so integer overflow never happens here.
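A quick check (the printed literal is exactly 2^128):

const big = 2n ** 64n;
console.log(big * big); // 340282366920938463463374607431768211456n - no overflow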
TypedArray
For most TypedArray types, when an integer out of the supported range is assigned, it is truncated the way other languages truncate when converting integers: by keeping the least significant bits. For example, new Int8Array([1000])[0] gives -24.
Uint8ClampedArray is a bit different from the other TypedArrays: it supports integers in the range 0 to 255, and when a number out of range is assigned, it is clamped, so 0 or 255 is stored instead.
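A side-by-side illustration of truncation versus clamping:

console.log(new Int8Array([1000])[0]);         // -24 - low 8 bits kept (truncation)
console.log(new Uint8ClampedArray([1000])[0]); // 255 - clamped to the top of the range
console.log(new Uint8ClampedArray([-5])[0]);   // 0   - clamped to the bottom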
asm.js
The same rules as for bitwise operators apply here. Values are truncated back to 32 bits, just as | 0 or >>> 0 do.
I am trying to understand the way to add, subtract, divide, and multiply by operating on the bits.
It is necessary to do some optimizing in my JavaScript program due to many calculations running after an event has happened.
By using the code below for reference, I am able to understand that carry holds the ANDed value, and that the XOR sets the sum variable to the bits that do not match between n1 and n2.
Here is my question. ;) What does shifting (n1 & n2) << 1 by 1 do? What is the goal of doing this? As with the XOR, it is obvious that nothing else needs to be done with those bits, because their decimal values are fine as they are in the sum variable. I can't picture in my head what the AND-and-shift operation accomplishes.
function add(n1, n2) {
  var carry, sum;

  // Find out which bits will result in a carry.
  // Those bits will affect the bits directly to
  // the left, so we shall shift one bit.
  carry = (n1 & n2) << 1;

  // In digital electronics, an XOR gate is also known
  // as a quarter adder. Basically an addition is performed
  // on each individual bit, and the carry is discarded.
  //
  // All I'm doing here is applying the same concept.
  sum = n1 ^ n2;

  // If any bits match in position, then perform the
  // addition on the current sum and the results of
  // the carry.
  if (sum & carry) {
    return add(sum, carry);
  }

  // Return the sum.
  return sum ^ carry;
}
The code above works as expected, but it does not handle floating point values. I've got to have the total returned along with the fractional part.
Does anyone have a function that I can use with the above that will help me with floating point values? Or a website with a clear explanation of what I am looking for? I've tried searching for the last day or so and cannot find anything to look over.
I got the code above from this resource.
http://www.dreamincode.net/code/snippet3015.htm
Thanks ahead of time!
After thinking about it: doing a left shift by one position is a multiplication by 2.
By ANDing like this: carry = (n1 & n2) << 1; the carry variable holds the bits that match in n1 and n2. So, if n1 is 4 and n2 is 4, they both hold the same value; ANDing the two and left-shifting by one position multiplies it, 4 x 2 = 8, so carry would now equal 8.
For example, with n1 = 8 and n2 = 8:
1.) 00001000 = 8
    &
    00001000 = 8
2.) n1 & n2 now holds the single value 00001000 = 8.
    A left shift multiplies it by 2: 8 x 2 = 16, or 8 + 8 = 16.
3.) carry = (n1 & n2) << 1 shifts all bits over one position.
4.) carry now holds the single value 00010000 = 16.
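To make the carry mechanics concrete, here is a worked trace of the add() function above (my own walkthrough, following the code exactly):

// add(3, 5): n1 = 011, n2 = 101
//   carry = (011 & 101) << 1 = 001 << 1 = 010 (2)
//   sum   =  011 ^ 101       = 110            (6)
//   sum & carry != 0, so recurse: add(6, 2)
//     carry = (110 & 010) << 1 = 100 (4)
//     sum   =  110 ^ 010       = 100 (4)
//     sum & carry != 0, recurse: add(4, 4)
//       carry = (100 & 100) << 1 = 1000 (8)
//       sum   =  100 ^ 100       = 0000 (0)
//       sum & carry == 0, so return 0000 ^ 1000 = 8
console.log(add(3, 5)); // 8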
I still cannot find anything on working with floating point values. If anyone has anything, do post a link.
It doesn't work because the code assumes that the floating point numbers are represented as integers, which they aren't. Floating point numbers are represented using the IEEE 754 standard, which breaks a number into three parts: a sign bit, a group of bits representing an exponent, and another group representing a number between 1 (inclusive) and 2 (exclusive), the mantissa. The value is calculated as
(sign is set ? -1 : 1) * mantissa * 2^(exponent - bias)
where the bias depends on the precision of the floating point number. So the algorithm you use for adding two numbers assumes that the bits represent an integer, which is not the case for floating point numbers. Operations such as bitwise AND and bitwise OR also don't give the results you'd expect in an integer world.
Some examples: in double precision, the number 2.3 is represented as (in hex) 4002666666666666, while the number 5.3 is represented as 4015333333333333. OR-ing those two numbers gives 4017777777777777, which represents (roughly) 5.866666 - certainly not 2.3 + 5.3 = 7.6.
There are some good pointers on this format; I found http://www.psc.edu/general/software/packages/ieee/ieee.php, http://babbage.cs.qc.edu/IEEE-754/ and http://www.binaryconvert.com/convert_double.html fairly good for understanding it.
Now, if you still want to implement the bitwise addition for those numbers, you can. But you'll have to break each number down into its parts, normalize both numbers to the same exponent (otherwise you can't add the mantissas), perform the addition on the mantissas, and finally normalize the result back to the IEEE 754 format. But, as @LukeGT said, you'll likely not get better performance than the JS engine. Also, JS bitwise operations are defined on 32-bit integers, so the numbers are first truncated to integers before the operation is performed, which will make your results incorrect as well.
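If you do want to take a double apart, here is a minimal sketch using a DataView (the field names are mine; the bias for doubles is 1023):

function decompose(x) {
  const dv = new DataView(new ArrayBuffer(8));
  dv.setFloat64(0, x);
  const bits = dv.getBigUint64(0);
  return {
    sign:     Number(bits >> 63n),                   // 1 sign bit
    exponent: Number((bits >> 52n) & 0x7FFn) - 1023, // 11 exponent bits, unbiased
    mantissa: bits & 0xFFFFFFFFFFFFFn                // 52 fraction bits
  };
}

console.log(decompose(2.3)); // { sign: 0, exponent: 1, mantissa: 675539944105574n }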
Floating point values have a complicated bit structure, which is very difficult to manipulate with bit operations. As a result, I doubt you could do any better than the JavaScript engine at computing them. Floating point calculations are inherently slow, so you should try to avoid them if you're worried about speed.
Try using integers to represent a decimal number to x digits instead. For example, if you were working with currency, you could store amounts as whole cents instead of dollars with fractional values.
Hope that helps.