JavaScript: divide buffer into golden rectangles

I'm using JS in Max/MSP and want to slice the buffer into sections using the golden ratio. I know that this will be a recursive function using the formula (a+b)/a = a/b, but I'm not sure how to express this in JavaScript, or how to then divide the remaining sections into golden rectangles recursively.

The golden ratio of ~1.618 describes spiraling outwards, expanding.
If you start with a buffer and want to mark slice points inwards, you can multiply by its reciprocal to shrink instead.
golden_ratio = ( 1 / 1.61803398875 ) // 0.6180339887498547 of 1.00
//where 1.00 is the 'whole' of your buffer
//the following values are percentage points the slices are at in the buffer
slice_1 = 1.00 * golden_ratio // .618
slice_2 = slice_1 * golden_ratio // .618^2
slice_3 = slice_2 * golden_ratio // .618^3
You could calculate a slice at a given depth from an arbitrary buffer in one explicit call:
slicepoint = Math.pow(golden_ratio, slice_depth) * buffer_size
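As a quick sketch (the buffer length and depth below are just example values), the closed form and the repeated multiplication give the same slice point:

```javascript
// Sketch: the closed-form slice point equals repeated multiplication.
// buffer_size and slice_depth are example values.
var golden_ratio = 1 / 1.61803398875; // ≈ 0.618
var buffer_size = 44100;              // 1 sec at 44.1 kHz
var slice_depth = 3;

// One explicit call:
var direct = Math.pow(golden_ratio, slice_depth) * buffer_size;

// Same value built up one multiply at a time:
var iterative = buffer_size;
for (var d = 0; d < slice_depth; d++) {
    iterative *= golden_ratio;
}

console.log(direct, iterative); // both ≈ 10410.6
```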
Otherwise, to do it recursively, provide the initial condition (the buffer size) and keep multiplying the last slice you calculated by the golden ratio. The "last slice" starts out as that initial condition, whatever buffer length you've got.
some pseudocode:
A[0] = buffer_size
while n++
A[n] = A[n-1] * golden_ratio
For actual JavaScript code, slicing 1 sec. of audio into 10 sections (9 slice points needed for 10 sections):
arbitrary_buffer = 44100; // buffer length: 1 sec @ 44.1kHz
slices = 10; //slice count
// get the reciprocal of golden ratio
// we can just shrink the whole and mark those points as slices
golden_ratio = ( 1 / 1.61803398875 )
// creepily enough
// the reciprocal of the golden ratio is
// its same value without the integer in front of it: .6180339887498547
// so its a 61.80339887498547% of the original size per-iteration
// This is a list of our slice point locations
slice_points = new Array();
// add an initial condition (arbitrary_buffer) for the recursion
// like knowing the end of a loaf of bread before cutting
slice_points[0] = arbitrary_buffer;
// the following recursion is in the [i-1] back-tracking
// that's why this for-loop condition has i at 1 instead of 0 per usual
// we provided the initial 44100 and start with index 1 instead
// slice_points[1] should be about 27255.29890386859
for (i = 1; i < slices; i++)
    slice_points[i] = slice_points[i-1] * golden_ratio;
The array is populated in descending order, starting from the full buffer length of 44100.
You can switch those values to forward-facing positions if you just take
arbitrary_buffer - slice_points[i];
Or, outside of the js object and back in Max, use a [- ] object and subtract the js output value from the buffer length (the ms length from bufferinfo~'s right-most outlet, converted to samples).
You might also try using this as a velocity map for loudness.
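Putting the loop above into one self-contained sketch, with the forward-facing conversion applied:

```javascript
var arbitrary_buffer = 44100;         // 1 sec of audio at 44.1 kHz
var slices = 10;                      // 9 slice points for 10 sections
var golden_ratio = 1 / 1.61803398875; // ≈ 0.618

// Descending slice points, seeded with the full buffer length.
var slice_points = [arbitrary_buffer];
for (var i = 1; i < slices; i++) {
    slice_points[i] = slice_points[i - 1] * golden_ratio;
}

// Forward-facing positions, measured from the start of the buffer.
var forward = slice_points.map(function (p) {
    return arbitrary_buffer - p;
});

console.log(slice_points[1]); // ≈ 27255.2989
```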

Related

What is the indexing logic in the ImageData array?

This question is for a deeper understanding of my previous question about large size Canvas animation. The question is here: Repeat HTML canvas element (box) to fill whole viewport
I am trying to understand the logic behind the TypedArray - Uint8ClampedArray. I will first start with my research and get to the question itself later.
So, ImageData represents the pixel data of the HTML5 Canvas. It allows for much faster performance and is good for heavy animations. After we have our ImageData object, we create a buffer space for it. Because we cannot read and write directly from/to the buffer, we pass this buffer to a TypedArray, in this case a Uint8ClampedArray, which is like a normal array and allows access to the data inside it.
Each pixel on our canvas is represented by 4 integer values that stand for red, green, blue, alpha (as in RGBA), ranging from 0 to 255. Each of these values is assigned to a Uint8ClampedArray index starting from 0, and the array is divided into chunks of 4. So the first 4 values are the very first pixel, the second 4 values are the 2nd pixel and so on. The canvas pixels are read from left to right, row by row.
So if for example we want to get array index of red value of the pixel at xCoord = 3; yCoord = 1; canvasWidth = 10;. The formula from MDN: Pixel manipulation with canvas suggests that we do the following math:
var red = y * (width * 4) + x * 4; // = 1 * (10 * 4) + 3 * 4 = 52
But if we try to do the same manually, and just count pixel by pixel ourselves, we don't get the same value. It's always a little off. How would we calculate it manually? In this picture we start from 0 X 0, go to 0 X 9, and then to 1 X 3. Because we start from the top left and move toward the right, it's inverted: Y is our first coordinate and X is our second coordinate. From 0 X 0 to 0 X 9 we record 40 values in total (4 values per pixel, 10 pixels in total width); from 1 X 0 to 1 X 3 we record 16 values in total. We end up at 56, instead of the 52 we had calculated using the formula.
So, please help me understand the whole logic of Uint8ClampedArray and how the index is calculated.
From 1 X 0 to 1 X 3 we record 16 values in total
The last 4 bytes of these 16 do represent the pixel at (3, 1). The red channel is the first of these, preceded by 12 bytes for the pixels to the left and 40 bytes for the pixels in the first row. It is sitting at index 52 in the overall array.
Remember that arrays are indexed as
0 1 2
+---+---+--
| | |
+---+---+--
not as
+---+---+--
| 0 | 1 | 2
+---+---+--
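To make the arithmetic concrete, here is the formula from the question as a small helper (the function name pixelIndex is mine, not from any API):

```javascript
// Index of the red byte of pixel (x, y) in an ImageData array:
// y full rows of width * 4 bytes, plus 4 bytes per pixel left of x.
function pixelIndex(x, y, width) {
    return y * (width * 4) + x * 4;
}

// Pixel (3, 1) on a 10-pixel-wide canvas:
console.log(pixelIndex(3, 1, 10)); // 52

// Green, blue and alpha follow at +1, +2 and +3: the manual count
// of 56 values ends at index 55, and the pixel's 4 bytes occupy
// indices 52..55, so both ways of counting agree.
```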

Modifying image pixels using bitwise operators (JSFeat)

I am using the JSFeat Computer Vision Library and am trying to convert an image to greyscale. The function jsfeat.imgproc.grayscale outputs to a matrix (img_u8 below), where each element is an integer between 0 and 255. I was unsure how to apply this matrix to the original image so I went looking through their example at https://inspirit.github.io/jsfeat/sample_grayscale.htm.
Below is my code to convert an image to greyscale. I adopted their method to update the pixels in the original image, but I do not understand how it works.
/**
* I understand this stuff
*/
let canvas = document.getElementById('canvas');
let ctx = canvas.getContext('2d');
let img = document.getElementById('img-in');
ctx.drawImage(img, 0, 0, img.width, img.height);
let imageData = ctx.getImageData(0, 0, img.width, img.height);
let img_u8 = new jsfeat.matrix_t(img.width, img.height, jsfeat.U8C1_t);
jsfeat.imgproc.grayscale(imageData.data, img.width, img.height, img_u8);
let data_u32 = new Uint32Array(imageData.data.buffer);
let i = img_u8.cols*img_u8.rows, pix = 0;
/**
* Their logic to update the pixel values of the original image
* I need help understanding how the following works
*/
let alpha = (0xff << 24);
while (--i >= 0) {
    pix = img_u8.data[i];
    data_u32[i] = alpha | (pix << 16) | (pix << 8) | pix;
}
/**
* I understand this stuff
*/
ctx.putImageData(imageData, 0, 0);
Thanks in advance!
It's a wide topic, but I'll try to roughly cover the basics in order to understand what goes on here.
As we know, it's using 32-bit integer values, which means you can operate on four bytes simultaneously using fewer CPU instructions, and can therefore in many cases increase overall performance.
Crash course
A 32-bit value is often notated as hex like this:
0x00000000
and represents 32 bits, from the least significant bit (bit 0) on the right to the most significant bit (bit 31) on the left. A bit can of course only be either on/set/1 or off/unset/0. 4 bits make a nibble, and 2 nibbles make one byte. The hex notation has one digit per nibble, so here you have 8 nibbles = 4 bytes = 32 bits. As in decimal notation, leading 0s have no effect on the value, i.e. 0xff is the same as 0x000000ff. (The 0x prefix also has no effect on the value; it is just the traditional C notation for hexadecimal numbers, since taken over by most other common languages.)
Operands
You can bit-shift and perform logic operations such as AND, OR, NOT, XOR on these values directly (in assembler language you would fetch the value from a pointer/address and load it into a registry, then perform these operations on that registry).
So what happens is this:
The << means bit-shift to the left. In this case the value is:
0xff
or in binary (bits) representation (a nibble 0xf = 1111):
0b11111111
This is the same as:
0x000000ff
or in binary (we could not denote bit representation natively in JavaScript until the 0b prefix arrived in ES6):
0b00000000 00000000 00000000 11111111
and is then bit-shifted to the left 24 bit positions, making the new value:
0b00000000 00000000 00000000 11111111
<< 24 bit positions =
0b11111111 00000000 00000000 00000000
or
0xff000000
So why is this necessary here? Well, that's an excellent question!
The 32-bit value in relation to canvas represents RGBA, and each of the components can have a value between 0 and 255, or in hex a value between 0x00 and 0xff. However, since most consumer CPUs today use little-endian byte order, the color components are stored at memory level as ABGR instead of RGBA when viewed as 32-bit values.
We are normally abstracted away from this in a high-level language such as JavaScript of course, but since we now work directly with memory bytes through typed arrays we have to consider this aspect as well, and in relation to registry width (here 32-bits).
So here we try to set the alpha channel to 255 (fully opaque) and then shift it 24 bits to the left so it ends up in the correct position:
0xff000000
0xAABBGGRR
(Though, this is an unnecessary step here as they could just as well have set it directly as 0xff000000 which would be faster, but anyhoo).
Next we use the OR (|) operator combined with bit-shift. We shift first to get the value in the correct bit position, then OR it onto the existing value.
OR will set a bit if either the existing or the new bit is set, otherwise it remains 0. For example, starting with an existing value, now holding the alpha channel value:
0xff000000
We then want to combine it with the blue component of, say, value 0xcc (204 in decimal), which is currently represented in 32-bit as:
0x000000cc
so we need to first shift it 16 bits to the left in this case:
0x000000cc
<< 16 bits
0x00cc0000
When we now OR that value with the existing alpha value we get:
0xff000000
OR 0x00cc0000
= 0xffcc0000
Since the destination is all 0 bits only the value from source (0xcc) is set, which is what we want (we can use AND to remove unwanted bits but, that's for another day).
And so on for the green and red components (the order in which they are OR'ed doesn't matter much).
So, with say pix = 0xcc, this line:
data_u32[i] = alpha | (pix << 16) | (pix << 8) | pix;
which translates into:
alpha = 0xff000000 Alpha
pix = 0x000000cc Red
pix << 8 = 0x0000cc00 Green
pix << 16 = 0x00cc0000 Blue
and OR'ed together would become:
value = 0xffcccccc
and we have a grey value, since all components have the same value. We have the correct byte order and can write it back to the Uint32 buffer in a single operation (in JS, anyway).
You can optimize this line though by using a hard-coded value for alpha instead of a reference now that we know what it does (if alpha channel vary then of course you would need to read the alpha component value the same way as the other values):
data_u32[i] = 0xff000000 | (pix << 16) | (pix << 8) | pix;
Working with integers, bits and bit operators is as said a wide topic and this just scratches the surface, but hopefully enough to make it more clear what goes on in this particular case.
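If you want to observe the byte order for yourself, one way (a little sketch, assuming little-endian hardware, which covers practically all consumer machines) is to view the same buffer as both bytes and 32-bit words:

```javascript
// One RGBA pixel viewed two ways over the same memory.
var bytes = new Uint8ClampedArray(4);      // byte view (R, G, B, A)
var words = new Uint32Array(bytes.buffer); // 32-bit view

var pix = 0xcc; // grey level 204
words[0] = (0xff << 24) | (pix << 16) | (pix << 8) | pix; // 0xffcccccc

// On little-endian hardware the lowest byte lands first in memory,
// so the byte view reads R, G, B, A = 204, 204, 204, 255.
console.log(bytes[0], bytes[1], bytes[2], bytes[3]); // 204 204 204 255
```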

How much data does texImage2D in WebGL need?

I created a DataTexture in ThreeJS, which in turn will call texImage2D.
var twidth = 50;
var theight = 50;
var tsize = twidth * theight;
var tileData = new Uint8Array( tsize * 3);
//fill it with some data
tileDataTexture = new THREE.DataTexture(tileData, twidth, theight, THREE.RGBFormat);
As you can see, I used a size of 50x50 texels and three 8-bit channels (THREE.RGBFormat in three.js). When I used a Uint8Array of size 7500 (50*50*3), Firefox told me that it needs more data:
Error: WebGL: texImage2D: not enough data for operation (need 7598, have 7500)
I would like to know: where do these extra 98 bytes come from? I would guess alignment, but 7598 is not even divisible by 4, while 7500 is. (Now that I think of it, it is not divisible by 3, my number of channels, either.)
7600 would make sense, since that would mean 2 Bytes of padding per row, is the last row not padded?
(I get that one should use only multiples of four for the dimensions, still I would like to know the answer)
The row lengths are aligned to multiples of 4, except for the last row. In your case this means each row needs 2 bytes of padding, for a total of 152 bytes (3 * 50 + 2) per row.
152 * (50 - 1) + 3 * 50 = 7598
For reference, see the source code of Gecko.
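The rule above can be reproduced with a few lines of arithmetic (the function name texBytesNeeded is mine; 4 is the default unpack alignment):

```javascript
// Bytes texImage2D needs for width x height pixels of `channels`
// bytes each: every row except the last is padded up to a
// multiple of `alignment`.
function texBytesNeeded(width, height, channels, alignment) {
    var row = width * channels;
    var paddedRow = Math.ceil(row / alignment) * alignment;
    return paddedRow * (height - 1) + row; // last row is not padded
}

console.log(texBytesNeeded(50, 50, 3, 4)); // 7598, as in the error
console.log(texBytesNeeded(50, 50, 3, 1)); // 7500, no padding
```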
As user1421750 pointed out, the rows are padded.
But you can control that by setting texture.unpackAlignment. It defaults to 4, but you can set it to 8, 4, 2 or 1.
Personally I would have made the default 1, because I think you'd be less likely to be surprised like you were.

Custom linear congruential generator in JavaScript

I am trying to create a custom linear congruential generator (LCG) in JavaScript (the one used in glibc).
Its properties, as stated on Wikipedia, are: m = 2^31, a = 1103515245, c = 12345.
Now I am getting next seed value with
x = (1103515245 * x + 12345) % 0x80000000 ; // (The same as &0x7fffffff)
The generator seems to work, but when the numbers are tested on a canvas:
cx = (x & 0x3fffffff) % canvasWidth; // Coordinate x (the same for cy)
They seem to be horribly biased: http://jsfiddle.net/7VmR9/3/show/
Why does this happen? By choosing a different modulo, the result of a visual test looks much better.
The testing JSFiddle is here: http://jsfiddle.net/7VmR9/3/
Update
At last I fixed the transformation to canvas coordinates as in this formula:
var cx = ((x & 0x3fffffff)/0x3fffffff*canvasWidth)|0
Now the pixel coordinates are not so much malformed as when used the modulo operation.
Updated fiddle: http://jsfiddle.net/7VmR9/14/
For the generator the formula is (you forgot the division by the modulus in the first part):
current = ((multiplier * current + addend) % modulus) / modulus
I realize that you try to optimize it so I updated the fiddle with this so you can use it as a basis for the optimizations:
http://jsfiddle.net/AbdiasSoftware/7VmR9/12/
Yes, it looks like you solved it. I've done the same thing.
A linear congruential generator is in the form:
seed = (seed * factor + offset) % range;
But, most importantly, when obtaining an actual random number from it, the following does not work:
random = seed % random_maximum;
This won't work because the second modulus seems to counteract the effect of the generator. Instead, you need to use:
random = Math.floor(seed / range * random_maximum);
(This would be a random integer; remove the floor call to obtain a random float.)
Lastly, I will warn you: in JavaScript, when working with numbers that exceed the 53-bit integer precision of its doubles, there is a loss of precision. Thus, the results of your LCG may look random, but they most likely won't match the results of the same LCG implemented in C++ or another low-level language that actually supports exact integer math at that width.
Also, due to imprecision, the cycle of the LCG is liable to be greatly reduced. So, for instance, the cycle of the glibc LCG you reference is about 2 billion (that is, it will generate about 2 billion random numbers before starting over and re-generating the exact same sequence). This JavaScript implementation may get far less, due to the fact that the intermediate multiplication by the factor can exceed 2^53 and lose precision in doing so.
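Putting the pieces together, a glibc-style LCG with the division-based scaling might be sketched like this (the precision caveats above still apply to the multiplication; canvasWidth is just an example value):

```javascript
// glibc-style LCG: m = 2^31, a = 1103515245, c = 12345.
// Caveat: a * seed can exceed 2^53, so long sequences will not
// match a C implementation bit-for-bit.
function makeLcg(seed) {
    var m = 0x80000000; // 2^31
    return function () {
        seed = (1103515245 * seed + 12345) % m;
        return seed / m; // scale by division, not a second modulus
    };
}

var canvasWidth = 300; // example canvas size
var rand = makeLcg(1);
var cx = (rand() * canvasWidth) | 0; // canvas x without the modulo bias
console.log(cx);
```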

percent chance logic issue/refactoring (javascript)

I have an array of numbers, each of which represents a weight. The numbers in the array are between 0 and 1, and together they add up to 1. So if, say, array[1] is 0.6, that means I want it to show about 60% of the time. The array itself is not known to me in advance; the numbers are user input, for example.
I have a solution that will work, but I don't know if it is the most efficient way to do this. My solution just seems very inefficient. Here it is:
generate a random number
copy the user input array into a new array and sort it from smallest to largest
compare the random number to the numbers in this new array, from the smallest to the largest; when the random number is smaller than an array number, store that array number in a variable, say x, and exit the loop
finally, compare x with the original array to figure out what the index of x is in the original array
This seems like a lot of work to do; is there a simpler solution? My head does not spin that fast.
EDIT - the original array is not sorted in any way
EDIT 2 - Basically I am having trouble comparing this random number to the unsorted array. I need the unsorted to stay the same which is why I created that new array in my logic
If you have an array whose elements sum to 1, you can use the following algorithm:
Pick a uniformly distributed (pseudo)random number, r, between 0 and the sum of the weights.
For each weight,
If r is less than the weight, pick it and exit.
Otherwise subtract the weight from r, so it is now between 0 and the sum of the remaining weights.
Since r is always between 0 and the sum of the remaining weights, there is always a chance proportional to each weight that r is less than it when the loop reaches that weight.
In JavaScript:
var weights = [0.5, 0.15, 0.3, 0.05];
var index = weights.length - 1; // Last by default to avoid rounding error.
var r = Math.random();
for (var i = 0; i < weights.length - 1; ++i) {
if (weights[i] > r) { index = i; break; }
// The rest of the array sums to
// 1 - sum(weights[0]..weights[i])
// so rescale r appropriately.
r -= weights[i];
}
will give you the desired distribution.
The trick is the r -= which makes sure that r is always between 0 and the sum of the unprocessed array elements.
You can test it at http://jsfiddle.net/KdKdb/
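Wrapped up as a reusable function (the name pickWeighted is mine; r is passed in so the function is easy to test, and in real use you'd pass Math.random()):

```javascript
// Pick an index with probability proportional to weights[i].
// Weights are assumed to sum to 1; the last index is the fallback
// so floating-point rounding can never leave us without a result.
function pickWeighted(weights, r) {
    for (var i = 0; i < weights.length - 1; i++) {
        if (r < weights[i]) return i;
        r -= weights[i]; // rescale r to the remaining weights
    }
    return weights.length - 1;
}

var weights = [0.5, 0.15, 0.3, 0.05];
console.log(pickWeighted(weights, 0.25)); // 0: falls in [0, 0.5)
console.log(pickWeighted(weights, 0.7));  // 2: falls in [0.65, 0.95)
```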
