I have a bunch of bins that double in size, starting from size 32. The bins are divided in half and added to the lower bin, but this isn't important for the question. I am currently hardcoding a "max" of 16777216, which is the size of the largest bin.
let bins = [
[], // 1 = 32
[], // 2 = 64
[], // 3 = 128
[], // 4 = 256
[], // 5 = 512
[], // 6 = 1024
[], // 7 = 2048
[], // 8 = 4096
[], // 9 = 8192
[], // 10 = 16384
[], // 11 = 32768
[], // 12 = 65536
[], // 13 = 131072
[], // 14 = 262144
[], // 15 = 524288
[], // 16 = 1048576
[], // 17 = 2097152
[], // 18 = 4194304
[], // 19 = 8388608
[0], // 20 = 16777216
];
I would like to dynamically determine the number of bins based on the available memory on the platform, as a multiple of 32. So if there were 24 TB of available memory on the machine, that is 1.92e14 bits, or 6e12 32-bit chunks. I would round that number up to the nearest size in this doubling sequence, following the same pattern of how the numbers grow.
I gathered these numbers by doing this manually:
a = 1 * 32
b = a * 2
c = b * 2
d = c * 2
...
How do I do this with a generic equation?
How do I round up efficiently to the nearest one of these numbers?
// Round n up to the nearest bin size (32, 64, 128, ...).
const smallestBin = n => {
  if (n <= 32) return 32;
  let size = 5; // 2**5 = 32, the smallest bin
  // Halve n/32 repeatedly to count how many doublings n spans.
  for (let x = Math.trunc(n / 32); x > 0; x = Math.trunc(x / 2), size++) { }
  size = 2 ** size;
  // An exact power of two overshoots by one doubling; return n itself in that case.
  return size == n * 2 ? n : size;
};
Note that we don't use bit-wise operators because they're limited to 32 bits.
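For illustration, here is that 32-bit truncation in action (values chosen to match the bins above):
console.log(2 ** 31 | 0);   // -2147483648 (the sign bit is already set)
console.log(16777216 << 8); // 0 (2^24 shifted up to 2^32 wraps away entirely)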
When you say 32 * 2 * 2 * 2... you are multiplying 32 by a certain power of 2, or basically:
32 * 2^i
Now, since in your example i starts from 1, the correct equation is actually:
16 * 2^i
And since 16 is also a power of 2, you can just write this as:
2^4 * 2^i
Which is equal to:
2^(4+i)
If you now have a random number, how do you round it up to the nearest power of 2? This is basically calculating the logarithm (base 2) of your number, then rounding the result up to the next integer.
This integer value is the exponent of the nearest power of two (rounding up).
If you want the nearest power of 2, this is just: 2^result. So the complete equation is:
2^ceil(log_2(num))
In JavaScript, you can do this with:
2**Math.ceil(Math.log2(num))
If you need your index i, remember that your numbers are 2^(4+i), so just subtract 4 from ceil(log_2(num)) and you get i.
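Putting it together as a sketch (the function name and the clamp to the smallest bin are my additions; for very large inputs keep in mind that Math.log2 is a floating-point operation):
// Round num up to the nearest bin size 2^(4+i), i >= 1.
const binFor = num => {
  const exp = Math.max(5, Math.ceil(Math.log2(num))); // never below 2^5 = 32
  return { size: 2 ** exp, index: exp - 4 };
};

binFor(16777216); // { size: 16777216, index: 20 } (matches bin 20 above)
binFor(6e12);     // { size: 8796093022208, index: 39 } (2^43)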
Given a bitfield like the following (numbers can only be 1 or 0):
const bits = [1, 1, 0, 1, 0, 0, 0, 1]
I want to produce a list of N average values where N is an arbitrary number.
In our example above, given N is 4, the result would be [1, 0.5, 0, 0.5] (group the array two by two, then calculate the average of each group).
If, instead, N was 2, the result would be [0.75, 0.25].
I have been using the following code so far:
const average = (...args) => args.reduce((a, b) => a + b) / args.length;
const averageNumbers = Ramda.splitEvery(Math.floor(bits.length / N), bits).map(nums => average(...nums))
The problem is that the above doesn't work when my bits array is composed of 696 values and I need 150 average numbers: I end up with 174 numbers instead of 150.
What am I doing wrong?
As mentioned in the comments, you end up with 174 numbers instead of 150 because floor(696 / 150) divides the bit field into 174 chunks of 4 bits.
To get an approximate value for the averages, you can first "expand" the bit field to a length that is a multiple of N before taking the averages.
// lcm is assumed here; a minimal definition via gcd:
const gcd = (a, b) => b === 0 ? a : gcd(b, a % b);
const lcm = (a, b) => a * b / gcd(a, b);
// Factor is the lowest number to expand bits by for its length to be a multiple of N
const factor = lcm(bits.length, N) / bits.length;
// Expand bits by the factor: each bit becomes `factor` copies of itself
const expandedBits = bits.map(bit => Array(factor).fill(bit)).flat(1);
// The expanded length is an exact multiple of N, so the division is exact
const averages = splitEvery(expandedBits.length / N, expandedBits).map(nums => average(...nums));
// averages.length should now always be N
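A quick check with the question's numbers, using the lcm helper above:
lcm(696, 150) / 696;  // 25: each bit expands into 25 copies, 17400 entries total
17400 / 150;          // 116 expanded bits per chunk, giving exactly 150 averages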
We are building a table in JavaScript with Handsontable representing currency amounts. We give the user the possibility of rendering the amounts with two decimal places or no decimal places (it's a requirement from the client). And then we find things like this:
Column A    Column B    Column C = A + B
-------------------------------------------------------
 -273.50      273.50     0                 Two decimals
 -273         274        0                 No decimals
Investigating a little, we found that the basic rounding function in JavaScript, Math.round(), works like this:
If the fractional portion is exactly 0.5, the argument is rounded to the next integer in the direction of +∞. Note that this differs from many languages' round() functions, which often round this case to the next integer away from zero, instead (giving a different result in the case of negative numbers with a fractional part of exactly 0.5).
As we are dealing with currency amounts, we do not care about what happens after the second decimal place, so we chose to add -0.0000001 to any negative value in the table. Thus, when rendering the values with two or no decimals, we now get the proper results, as Math.round(-273.5000001) = -274, and Math.round(-273.4900001) is still -273.
Nonetheless, we would like to find a finer solution to this problem. So what is the best, most elegant way to achieve this (that does not require modifying the original numeric value)? Note that we do not directly call Math.round(x), we just tell Handsontable to format a value with a given number of decimal places.
Here are some variations on how you could implement the desired behaviour, with or without using Math.round(), along with a demonstration that these functions work. It's up to you which version speaks to you.
const round1 = v => v < 0 ? Math.ceil(v - .5) : Math.floor(+v + .5);
const round2 = v => Math.trunc(+v + .5 * Math.sign(v));
const round3 = v => Math.sign(v) * Math.round(Math.abs(v));
const round4 = v => v < 0 ? -Math.round(-v) : Math.round(v);

const funcs = [Number, Math.round, round1, round2, round3, round4];

[
  -273.50, -273.49, -273.51,
   273.50,  273.49,  273.51
].forEach(value => console.log(
  Object.fromEntries(funcs.map(fn => [fn.name, fn(value)]))
));
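For the problematic -273.50 input, the logged object shows the built-in rounding differing from all four variants:
{ Number: -273.5, round: -273, round1: -274, round2: -274, round3: -274, round4: -274 }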
Math.round() works as wanted for zero and positive numbers. For negative numbers, negate to a positive, round, and then negate the result back:
/**
 * @arg {number} num
 * @return {number}
 */
function round(num) {
  return num < 0 ? -Math.round(-num) : Math.round(num);
}
This seems to work correctly for these inputs:
round(-3) = -3
round(-2.9) = -3
round(-2.8) = -3
round(-2.7) = -3
round(-2.6) = -3
round(-2.5) = -3
round(-2.4) = -2
round(-2.3) = -2
round(-2.2) = -2
round(-2.1) = -2
round(-2) = -2
round(-1.9) = -2
round(-1.8) = -2
round(-1.7) = -2
round(-1.6) = -2
round(-1.5) = -2
round(-1.4) = -1
round(-1.3) = -1
round(-1.2) = -1
round(-1.1) = -1
round(-1) = -1
round(-0.9) = -1
round(-0.8) = -1
round(-0.7) = -1
round(-0.6) = -1
round(-0.5) = -1
round(-0.4) = 0
round(-0.3) = 0
round(-0.2) = 0
round(-0.1) = 0
round(0) = 0
round(0.1) = 0
round(0.2) = 0
round(0.3) = 0
round(0.4) = 0
round(0.5) = 1
round(0.6) = 1
round(0.7) = 1
round(0.8) = 1
round(0.9) = 1
round(1) = 1
round(1.1) = 1
round(1.2) = 1
round(1.3) = 1
round(1.4) = 1
round(1.5) = 2
round(1.6) = 2
round(1.7) = 2
round(1.8) = 2
round(1.9) = 2
round(2) = 2
round(2.1) = 2
round(2.2) = 2
round(2.3) = 2
round(2.4) = 2
round(2.5) = 3
round(2.6) = 3
round(2.7) = 3
round(2.8) = 3
round(2.9) = 3
round(3) = 3
I think this is safe: a negative input always rounds to another negative number or to zero, so negating, rounding, and negating back cannot produce a wrong sign.
As @dandavis noted in the comments, the best way to handle this is to use .toFixed(2) instead of Math.round().
Math.round(), as you noted, rounds a half toward +∞.
.toFixed() rounds a half away from zero, and the 2 designates the number of decimal places to use when rounding.
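For example (note that .toFixed() returns a string, which is usually what you want for display):
(-273.5).toFixed(0); // "-274" (half away from zero)
( 273.5).toFixed(0); // "274"
(-273.5).toFixed(2); // "-273.50"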
By multiplying the random number (which is between 0 and 1) by 5, we make it a random number between 0 and 5 (for example, 3.1841). Math.floor() rounds this number down to a whole number, and adding 1 at the end changes the range from between 0 and 4 to between 1 and 5 (up to and including 5).
The explanation above confused me... my interpretation below:
--multiplying by 5 gives it a range of 5 numbers
--but it starts with 0 (like an array?)
--so it's technically 0 - 4
--and by adding the one, you make it 1 - 5
I am very new to JS, don't even know if this kind of question is appropriate here, but this site has been great so far. Thank you for any help!
From the Mozilla Developer Network's documentation on Math.random():
The Math.random() function returns a floating-point, pseudo-random number in the range [0, 1); that is, from 0 (inclusive) up to, but not including, 1 (exclusive).
Here are two example randomly generated numbers:
Math.random() // 0.011153860716149211
Math.random() // 0.9729151880834252
Because of this, when we multiply our randomly generated number by another number, the result ranges from 0 up to, but never reaching, that multiplier. Math.floor() then simply removes the decimal places rather than rounding (0.999 becomes 0 when processed with Math.floor(), not 1), so multiplying by 5 can never floor to more than 4.
Math.floor(0.011153860716149211 * 5) // 0
Math.floor(0.9729151880834252 * 5) // 4
Adding one simply offsets this to the value you're after:
Math.floor(0.011153860716149211 * 5) + 1 // 1
Math.floor(0.9729151880834252 * 5) + 1 // 5
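Generalizing the pattern (a common idiom, not from the question itself):
// Random integer between min and max, both inclusive:
const randInt = (min, max) => Math.floor(Math.random() * (max - min + 1)) + min;

randInt(1, 5); // 1, 2, 3, 4 or 5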
Math.random() returns a number between 0 and 1, excluding 1.
So when you multiply it with 5, you get a number between 0 and 5 but not 5.
Math.floor() on this number rounds down to a whole number.
So numbers you will get are either 0, 1, 2, 3 or 4.
Adding 1 to this range gives you a number in [1, 2, 3, 4, 5].
Note that:
0 <= Math.random() < 1
Math.floor(x.yz) = x
And therefore, the number given is an integer in the interval:
x = Math.floor((0..0.999999999) * 5 + 1)
  = Math.floor((0..4.999999995) + 1)
  = (0..4) + 1
  = (1..5)
From 15.8.2.14 Math.random in the ES5 spec:
Returns a Number value with positive sign, greater than or equal to 0 but less than 1, chosen randomly or pseudo randomly with approximately uniform distribution over that range, using an implementation-dependent algorithm or strategy. This function takes no arguments.
So,
x = Math.random();  // 0 ≤ x < 1
y = x * 5;          // 0 ≤ y < 5
z = y + 1;          // 1 ≤ z < 6
i = Math.floor(z);  // 1 ≤ i ≤ 5, i ∈ ℤ (the integers)
Which means
i ∈ {1, 2, 3, 4, 5}
How do I use JavaScript to calculate the x value in this formula?
(x * y) % z = 1
y and z are known integers.
For example, y = 7, z = 20. 3 multiplied by 7 gives 21, which divided by 20 leaves a remainder of 1. The solution is x = 3.
(3 * 7) % 20 = 1
This is a math question, not a JavaScript question. Use the Extended Euclidean Algorithm. For example, to find the inverse of 7 modulo 20, start with these two equations:
20 = 0•7 + 1•20.
7 = 1•7 + 0•20.
Next, divide the two numbers on the left (20/7) and take the integer part (2). Then subtract that multiple of the bottom equation from the one above it:
20 = 0•7 + 1•20.
7 = 1•7 + 0•20.
6 = -2•7 + 1•20.
Repeat: The integer part of 7/6 is 1. Subtract one times the bottom equation from the one above it. The new equation is:
1 = 3•7 - 1•20.
Now you can see that 3 times 7 is 1 modulo 20. (Simultaneously, -1 times 20 is 1 modulo 7.)
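If you do want it in JavaScript, here is a sketch of the extended Euclidean algorithm (the function name is mine; it returns null when no inverse exists, i.e. when y and z are not coprime):
function modInverse(y, z) {
  let [oldR, r] = [y, z];
  let [oldS, s] = [1, 0]; // running coefficients of y
  while (r !== 0) {
    const q = Math.floor(oldR / r);
    [oldR, r] = [r, oldR - q * r];
    [oldS, s] = [s, oldS - q * s];
  }
  if (oldR !== 1) return null; // gcd(y, z) must be 1
  return ((oldS % z) + z) % z; // normalize into 0..z-1
}

console.log(modInverse(7, 20)); // 3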
Well, there's more than one number that is valid for x. x could also be 23 in this case (23 * 7 = 161, and 161 % 20 = 1). So you need to express the problem a bit differently as a starting point, such as "what is the lowest x that can solve the equation?"
If you're solving for the lowest x, it is suddenly a different problem. Since (x * y) % z = 1 means (x * y) = (k * z) + 1 for some non-negative integer k, you can increase k until you find a value for which (k * z + 1) is evenly divisible by y; that quotient is x.
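A brute-force sketch of that idea, iterating over candidate values of x instead of k (helper name is mine; it assumes an inverse exists, i.e. y and z share no common factor):
// Find the lowest non-negative x with (x * y) % z === 1.
const lowestX = (y, z) => {
  for (let x = 0; x < z; x++) {
    if ((x * y) % z === 1) return x;
  }
  return null; // no inverse: y and z are not coprime
};

console.log(lowestX(7, 20)); // 3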
I'm trying to get a handle on the interaction between different types of typed arrays.
Example 1.
var buf = new ArrayBuffer(2);
var arr8 = new Int8Array(buf);
var arr8u = new Uint8Array(buf);
arr8[0] = 7.2;
arr8[1] = -45.3;
console.log(arr8[0]); // 7
console.log(arr8[1]); // -45
console.log(arr8u[0]); // 7
console.log(arr8u[1]); // 211
I have no problem with the first three readouts, but where does 211 come from in the last? Does this have something to do with bit-shifting because of the minus sign?
Example 2
var buf = new ArrayBuffer(4);
var arr8 = new Int8Array(buf);
var arr32 = new Int32Array(buf);
for (var i = 0; i < buf.byteLength; i++) {
  arr8[i] = i;
}
console.log(arr8); // [0, 1, 2, 3]
console.log(arr32); // [50462976]
So where does the 50462976 come from?
Example #1
Examine positive 45 as a binary number:
> (45).toString(2)
"101101"
Binary values are negated using a two's complement:
00101101 => 45 signed 8-bit value
11010011 => -45 signed 8-bit value
When we read 11010011 as an unsigned 8-bit value, it comes out to 211:
> parseInt("11010011", 2);
211
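You can reproduce the same reinterpretation directly, since JavaScript's bit-wise operators work on two's-complement integers:
-45 & 0xFF  // 211: the low 8 bits of two's-complement -45
256 - 45    // 211: equivalently, a negative value is stored as 2^8 + value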
Example #2
If you print 50462976 in base 2:
> (50462976).toString(2);
"11000000100000000100000000"
We can add leading zeros and rewrite this as:
00000011000000100000000100000000
And we can break it into octets:
00000011 00000010 00000001 00000000
This shows binary 3, 2, 1, and 0. The storage here is little-endian: typed-array views use the platform's byte order, and on little-endian machines (virtually all current ones) the first byte in memory, arr8[0], is the least significant. The 8-bit values 0 to 3 are therefore read in order of increasing significance when constructing the 32-bit value.
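If you need a specific byte order regardless of platform, a DataView lets you choose it explicitly; a minimal sketch:
const buf = new ArrayBuffer(4);
new Uint8Array(buf).set([0, 1, 2, 3]);
const view = new DataView(buf);
console.log(view.getInt32(0, true));  // 50462976 (little-endian read)
console.log(view.getInt32(0, false)); // 66051 = 0x00010203 (big-endian read)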
First question.
Signed 8-bit integers range from -128 to 127. The positive part (0 to 127) maps to binary values from 00000000 to 01111111, and the other half (-128 to -1) maps from 10000000 to 11111111.
If the first bit is set, the number is negative, and the remaining seven bits are an offset from the -128 boundary. In your case, the binary representation is 11010011: the first bit is 1, so the number is negative; the last 7 bits are 1010011, which gives us the value 83. Add it to the boundary: -128 + 83 = -45. That’s it.
Second question.
32-bit integers are represented by four bytes in memory. You are storing four 8-bit integers in the buffer. When the same buffer is viewed through an Int32Array, all those values are combined to form one value.
If this were the decimal system, you could think of it as combining "1" and "2" to give "12". It’s similar in this case, except the multipliers are different. The byte holding 3 gets the multiplier 2^24, then 2^16 for the 2, 2^8 for the 1, and finally 2^0 for the 0. Let’s do the math:
2^24 * 3 + 2^16 * 2 + 2^8 * 1 + 2^0 * 0 =
16777216 * 3 + 65536 * 2 + 256 * 1 + 1 * 0 =
50331648 + 131072 + 256 + 0 =
50462976
That’s why you’re seeing such a large number.
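The same combination can be written with bit shifts, which makes the byte positions explicit:
(3 << 24) | (2 << 16) | (1 << 8) | 0  // 50462976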