I posted a couple of questions on this a few days ago and got some excellent replies: JavaScript Typed Arrays - Different Views
My second question involved two views of a buffer, an 8-bit array and a 32-bit array. By placing 0, 1, 2, 3 in the 8-bit array I got 50462976 in the 32-bit array. As mentioned, the reason for the 32-bit value was well explained there.
I can achieve the same thing with the following code:
var buf = new ArrayBuffer(4);
var arr8 = new Int8Array(buf);
var arr32 = new Int32Array(buf);
for (var x = 0; x < buf.byteLength; x++) {
  arr8[x] = (x << 24) |
            (x << 16) |
            (x << 8)  |
            x;
}
console.log(arr8); // [0, 1, 2, 3]
console.log(arr32); // [50462976]
I can't find anything that explains the mechanics of this process. It seems to be saying that each arr8 element equals x bit-shifted 24 positions, ORed with x bit-shifted 16 positions, ORed with x bit-shifted 8 positions, ORed with x not shifted at all.
That doesn't really make sense to me. I'd appreciate it if someone could shed some light on this.
Thanks,
Basically, your buffer is like this:
00000000 00000001 00000010 00000011
When handled as an Int8Array, it reads each 8-bit group individually: 0, 1, 2, 3
When handled as an Int32Array, it reads 32-bit groups (i.e. four 8-bit groups) to get 50462976
The memory used by the buffer is interpreted as 8-bit bytes for the Int8Array and 32-bit words for the Int32Array. The ordering of the bytes in the 8-bit array is the same as the ordering of the bytes in the single 32-bit word in the other array because they're the same bytes. There are no "mechanics" involved; it's just two ways of looking at the same 4 bytes of memory.
You get the exact same effect in C if you allocate a four-byte array and then create an int pointer to the same location.
Furthermore, this expression here:
arr8[x] = (x << 24) |
          (x << 16) |
          (x << 8)  |
          x;
will do precisely the same thing as
arr8[x] = x;
You're shifting the value of x up into ranges that will be truncated away when the value is actually saved into the (8-bit) array element.
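A quick way to convince yourself of the truncation (my sketch, not from the original answers):

var probe = new Int8Array(1);
probe[0] = (3 << 24) | (3 << 16) | (3 << 8) | 3; // 0x03030303 = 50529027
console.log(probe[0]); // 3, because only the low byte (0x03) survives the store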
Related
I had a look at Jason Davies's Word Cloud source on GitHub, and within index.js there are some variables that are declared like this:
cw = 1 << 11 >> 5,
ch = 1 << 11;
I noticed this pattern:
value before "<<" multiplies the value after "<<";
value after "<<" is a 2 to the power of the value specified;
value after ">>" (following "<<") divides that number before (which is also 2 two the power of the value);
I was curious:
in general, what are the uses for this type of declaration, and where does it come from?
how does it add value to the code in the rest of Jason Davies' layout?
Basically, << and >> do bit-wise shifts. If you do a << b, it will represent a as a number in base 2 (0s and 1s) and shift all the digits to the left by b positions. This is mathematically equivalent to
a * 2^b
The >> is the same principle, but it shifts to the right. It's almost analogous to division by a power of 2, except that the result is floored:
⌊(a / 2^b)⌋
If you have 1 << 11 >> 5, the left and right shifts partially cancel each other, and in reality we end up with
1 << 6 === 64 === 1 * 2^6
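For reference, evaluating those declarations (a sketch; the interpretation in the comments is my reading of the layout code, not something stated in the answer):

var ch = 1 << 11;      // 2048 pixels
var cw = 1 << 11 >> 5; // 2048 / 32 = 64: plausibly the width counted in
                       // 32-bit integers, one bit per sprite pixel
console.log(cw, ch);   // 64 2048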
In JavaScript, I need to convert two bytes into a 16 bit integer, so that I can convert a stream of audio data into an array of signed PCM values.
Most answers online for converting bytes to 16 bit integers use the following, but it does not work correctly for negative numbers.
var result = (((byteA & 0xFF) << 8) | (byteB & 0xFF));
You need to consider that negatives are represented in two's complement, and that JavaScript uses 32-bit integers to perform bitwise operations. Because of this, if the value is negative, you need to fill the upper 16 bits of the number with 1's. So, here is a solution:
var sign = byteA & (1 << 7); // test the high bit of the most significant byte
var x = ((byteA & 0xFF) << 8) | (byteB & 0xFF);
var result = x;
if (sign) {
  result = 0xFFFF0000 | x; // fill the upper 16 bits with 1's
}
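As an aside, a more compact way to get the same sign extension (my sketch, equivalent to the code above) is to shift the 16-bit value up to the top of the 32-bit integer and bring it back down with the arithmetic right shift:

// << 16 moves the value to the top of the 32-bit integer; >> 16 is an
// arithmetic shift, so it copies the sign bit back down on the way out
var result = (((byteA & 0xFF) << 8) | (byteB & 0xFF)) << 16 >> 16;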
I am using the JSFeat Computer Vision Library and am trying to convert an image to greyscale. The function jsfeat.imgproc.grayscale outputs to a matrix (img_u8 below), where each element is an integer between 0 and 255. I was unsure how to apply this matrix to the original image so I went looking through their example at https://inspirit.github.io/jsfeat/sample_grayscale.htm.
Below is my code to convert an image to greyscale. I adapted their method to update the pixels in the original image, but I do not understand how it works.
/**
* I understand this stuff
*/
let canvas = document.getElementById('canvas');
let ctx = canvas.getContext('2d');
let img = document.getElementById('img-in');
ctx.drawImage(img, 0, 0, img.width, img.height);
let imageData = ctx.getImageData(0, 0, img.width, img.height);
let img_u8 = new jsfeat.matrix_t(img.width, img.height, jsfeat.U8C1_t);
jsfeat.imgproc.grayscale(imageData.data, img.width, img.height, img_u8);
let data_u32 = new Uint32Array(imageData.data.buffer);
let i = img_u8.cols*img_u8.rows, pix = 0;
/**
* Their logic to update the pixel values of the original image
* I need help understanding how the following works
*/
let alpha = (0xff << 24);
while(--i >= 0) {
pix = img_u8.data[i];
data_u32[i] = alpha | (pix << 16) | (pix << 8) | pix;
}
/**
* I understand this stuff
*/
ctx.putImageData(imageData, 0, 0);
Thanks in advance!
It's a wide topic, but I'll try to roughly cover the basics needed to understand what goes on here.
As we know, the code uses 32-bit integer values, which means you can operate on four bytes simultaneously using fewer CPU instructions, and can therefore in many cases increase overall performance.
Crash course
A 32-bit value is often notated as hex like this:
0x00000000
and represents the equivalent of bits starting with the least significant bit 0 on the right to the most significant bit 31 on the left. A bit can of course only be either on/set/1 or off/unset/0. 4 bits is a nibble, 2 nibbles are one byte. The hex value has each nibble as one digit, so here you have 8 nibbles = 4 bytes or 32 bits. As in decimal notation, leading 0s have no effect on the value, i.e. 0xff is the same as 0x000000ff (The 0x prefix also has no effect on the value; it is just the traditional C notation for hexadecimal numbers which was then taken over by most other common languages).
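A couple of quick console checks of that notation (my examples, not from the original answer):

console.log(0xff === 0x000000ff); // true, leading zeroes don't change the value
console.log(0xff);                // 255, one byte, two nibbles
console.log((0xf).toString(2));   // "1111", one nibble is four bits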
Operators
You can bit-shift and perform logic operations such as AND, OR, NOT, XOR on these values directly (in assembler language you would fetch the value from a pointer/address and load it into a register, then perform these operations on that register).
So what happens is this:
The << means bit-shift to the left. In this case the value is:
0xff
or in binary (bits) representation (a nibble 0xf = 1111):
0b11111111
This is the same as:
0x000000ff
or in binary (unfortunately we cannot denote bit representations natively in pre-ES6 JavaScript; ES6 adds the 0b prefix):
0b00000000 00000000 00000000 11111111
and is then bit-shifted to the left 24 bit positions, making the new value:
0b00000000 00000000 00000000 11111111
<< 24 bit positions =
0b11111111 00000000 00000000 00000000
or
0xff000000
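One JavaScript-specific caveat worth knowing (my addition, not part of the original explanation): bitwise operators work on signed 32-bit integers, so 0xff << 24 actually evaluates to a negative number. Writing it into a Uint32Array element converts it back to the unsigned bit pattern:

console.log(0xff << 24);                        // -16777216 (signed 32-bit view)
console.log((0xff << 24) >>> 0);                // 4278190080, the unsigned view
console.log(((0xff << 24) >>> 0).toString(16)); // "ff000000"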
So why is this necessary here? Well, that's an excellent question!
The 32-bit value in relation to canvas represents RGBA, and each of the components can have a value between 0 and 255, or in hex a value between 0x00 and 0xff. However, since most consumer CPUs today use little-endian byte order, the color components are stored at the memory level as ABGR instead of RGBA for 32-bit values.
We are normally abstracted away from this in a high-level language such as JavaScript, of course, but since we now work directly with memory bytes through typed arrays we have to consider this aspect as well, in relation to register width (here 32 bits).
So here we try to set the alpha channel to 255 (fully opaque) and then shift it 24 bits so that it ends up in the correct position:
0xff000000
0xAABBGGRR  (the component layout, for reference)
(Though this is an unnecessary step here, as they could just as well have set it directly to 0xff000000, which would be faster, but anyhoo.)
Next we use the OR (|) operator combined with bit-shift. We shift first to get the value in the correct bit position, then OR it onto the existing value.
OR will set a bit if either the existing or the new bit is set; otherwise it remains 0. For example, starting with an existing value, now holding the alpha channel value:
0xff000000
We then want to combine it with a blue component of, say, value 0xcc (204 in decimal), which on its own is represented in 32 bits as:
0x000000cc
so we need to first shift it 16 bits to the left in this case:
0x000000cc
<< 16 bits
0x00cc0000
When we now OR that value with the existing alpha value we get:
0xff000000
OR 0x00cc0000
= 0xffcc0000
Since the destination bits are all 0, only the value from the source (0xcc) is set, which is what we want (we can use AND to remove unwanted bits, but that's for another day).
And so on for the green and red components (the order in which they are OR'ed doesn't matter much).
So, letting pix = 0xcc, this line:
data_u32[i] = alpha | (pix << 16) | (pix << 8) | pix;
which translates into:
alpha = 0xff000000 Alpha
pix = 0x000000cc Red
pix << 8 = 0x0000cc00 Green
pix << 16 = 0x00cc0000 Blue
and OR'ed together would become:
value = 0xffcccccc
and we have a grey value, since all components have the same value. We have the correct byte order and can write it back to the Uint32 buffer using a single operation (in JS, anyway).
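For completeness (my addition, using the AND masking hinted at above), the components can be read back out of such a value with shifts and masks:

var value = 0xffcccccc;         // the grey pixel built above
var r =  value         & 0xff;  // 204, red (lowest byte in the ABGR layout)
var g = (value >>  8)  & 0xff;  // 204, green
var b = (value >> 16)  & 0xff;  // 204, blue
var a = (value >>> 24) & 0xff;  // 255, alpha (>>> avoids the sign bit)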
You can optimize this line by hard-coding the alpha value instead of referencing a variable, now that we know what it does (if the alpha channel varied, you would of course need to read the alpha component the same way as the other values):
data_u32[i] = 0xff000000 | (pix << 16) | (pix << 8) | pix;
Working with integers, bits and bit operators is as said a wide topic and this just scratches the surface, but hopefully enough to make it more clear what goes on in this particular case.
In JavaScript I want to extract bits 13 to 16 from an integer.
Example: if I extract bits 13 to 16 from the number 16640, the output will be 2.
I have searched on Google and found a few links, but they are in C.
Assuming your bit count starts at 0:
var extracted, orig;
orig = parseInt("16640", 10); // best practice on using parseInt: specify number base to avoid spurious octal interpretation on leading zeroes (thx Ken Fyrstenberg)
extracted = ((orig & ((1 << 16) - 1) & ~(((1 << 13) - 1))) >>> 13);
Explanation:
mask the lower 16 bits of the original number
mask the complement of the lower 13 bits of the result (ie. bits 13-31)
you now have bits 13-16 of the original number in their original position. Shift this bit pattern 13 bits to the right.
Note that this method only works reliably for numbers less than 2^31. See the bitwise operator docs on MDN.
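Tracing the expression with the question's example, 16640 (0x4100, bits 8 and 14 set), may help; a step-by-step sketch:

var orig = 16640;                      // 0x4100
var low16 = orig & ((1 << 16) - 1);    // 0x4100: keep the lower 16 bits
var masked = low16 & ~((1 << 13) - 1); // 0x4000: clear bits 0-12
console.log(masked >>> 13);            // 2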
JavaScript's bitwise operations work essentially the same way as they do in C:
var bits13to16 = (number >> 13) & 15;
This shifts the number 13 bits to the right (eliminating bits 0-12) and masks all but the last 4 remaining bits (which used to be bits 13-16). 15 = 2^4 - 1.
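A quick check with the question's number:

console.log((16640).toString(2)); // "100000100000000": bits 8 and 14 set
console.log((16640 >> 13) & 15);  // 2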
All the suggestions work, but the simplest, I think, is the one given by dandavis:
parseInt((16640).toString(2).slice(-16, -13), 2);
Suppose I have a hex number "4072508200000000" and I want the floating point number that it represents (293.03173828125000) in IEEE-754 double format to be put into a JavaScript variable.
I can think of a way that uses some masking and a call to pow(), but is there a simpler solution?
A client-side solution is needed.
This may help. It's a website that lets you enter a hex encoding of an IEEE-754 and get an analysis of mantissa and exponent.
http://babbage.cs.qc.edu/IEEE-754/64bit.html
Because people always tend to ask "why?", here's why: I'm trying to fill out an existing but incomplete implementation of Google's Protocol Buffers (protobuf).
I don't know of a good way. It certainly can be done the hard way, here is a single-precision example totally within JavaScript:
js> a = 0x41973333
1100428083
js> (a & 0x7fffff | 0x800000) * 1.0 / Math.pow(2,23) * Math.pow(2, ((a>>23 & 0xff) - 127))
18.899999618530273
A production implementation should consider that some of the fields have magic values, typically implemented by specifying a special interpretation for what would have been the largest or smallest. So, detect NaNs and infinities. The above example should also check for negatives: (a & 0x80000000).
Update: OK, I've got it for doubles, too. You can't directly extend the above technique, because the internal JS representation is a double, so by its definition it can handle at best a bit string of length 52, and it can't shift by more than 32 at all.
OK, to do a double you first chop off, as a string, the low 8 hex digits (32 bits) and process them separately. Then:
js> a = 0x40725082
1081233538
js> (a & 0xfffff | 0x100000) * 1.0 / Math.pow(2, 52 - 32) * Math.pow(2, ((a >> 52 - 32 & 0x7ff) - 1023))
293.03173828125
js>
I kept the above example because it's from the OP. A harder case is when the low 32 bits have a value. Here is the conversion of 0x40725082deadbeef, a full-precision double:
js> a = 0x40725082
1081233538
js> b = 0xdeadbeef
3735928559
js> e = (a >> 52 - 32 & 0x7ff) - 1023
8
js> (a & 0xfffff | 0x100000) * 1.0 / Math.pow(2,52-32) * Math.pow(2, e) +
b * 1.0 / Math.pow(2, 52) * Math.pow(2, e)
293.0319506442019
js>
There are some obvious subexpressions you can factor out but I've left it this way so you can see how it relates to the format.
A quick addition to DigitalRoss' solution, for those finding this page via Google as I did.
Apart from the edge cases for +/- Infinity and NaN, which I'd love input on, you also need to take into account the sign of the result:
s = a >> 31 ? -1 : 1
You can then include s in the final multiplication to get the correct result.
I think for a little-endian solution you'll also need to reverse the byte order in a and b and swap them.
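Putting DigitalRoss's pieces and the sign handling together, a sketch (hexToDouble is a hypothetical helper name of mine; NaN, Infinity, and denormals are still not handled, as noted above):

// hi = high 32 bits, lo = low 32 bits of the IEEE-754 double
function hexToDouble(hi, lo) {
  var sign = (hi >>> 31) ? -1 : 1;
  var exp  = ((hi >>> 20) & 0x7ff) - 1023;
  var mant = (hi & 0xfffff | 0x100000) / Math.pow(2, 20)  // implicit 1 + top 20 mantissa bits
           + (lo >>> 0)                / Math.pow(2, 52); // low 32 mantissa bits
  return sign * mant * Math.pow(2, exp);
}
console.log(hexToDouble(0x40725082, 0x00000000)); // 293.03173828125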
The new Typed Arrays mechanism allows you to do this (and is probably an ideal mechanism for implementing protocol buffers):
var buffer = new ArrayBuffer(8);
var bytes = new Uint8Array(buffer);
var doubles = new Float64Array(buffer); // not supported in Chrome
bytes[7] = 0x40; // Load the hex string "40 72 50 82 00 00 00 00"
bytes[6] = 0x72;
bytes[5] = 0x50;
bytes[4] = 0x82;
bytes[3] = 0x00;
bytes[2] = 0x00;
bytes[1] = 0x00;
bytes[0] = 0x00;
my_double = doubles[0];
document.write(my_double); // 293.03173828125
This assumes a little-endian machine.
Unfortunately Chrome does not have Float64Array, although it does have Float32Array. The above example does work in Firefox 4.0.1.
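On newer engines, DataView (part of the same typed-array family) sidesteps the endianness assumption entirely, since you specify the byte order per call; a sketch:

var buffer = new ArrayBuffer(8);
var view = new DataView(buffer);
view.setUint32(0, 0x40725082);   // DataView defaults to big-endian, so the
view.setUint32(4, 0x00000000);   // bytes land as written: 40 72 50 82 00 00 00 00
console.log(view.getFloat64(0)); // 293.03173828125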