Convert two bytes into signed 16 bit integer in JavaScript

In JavaScript, I need to convert two bytes into a 16 bit integer, so that I can convert a stream of audio data into an array of signed PCM values.
Most answers online for converting bytes to 16 bit integers use the following, but it does not work correctly for negative numbers.
var result = (((byteA & 0xFF) << 8) | (byteB & 0xFF));

You need to consider that negative values are represented in two's complement, and that JavaScript performs bitwise operations on 32-bit integers. Because of this, if the value is negative, you need to fill the upper 16 bits of the number with 1's. So, here is a solution:
var sign = byteA & (1 << 7);
var x = ((byteA & 0xFF) << 8) | (byteB & 0xFF);
var result = x;
if (sign) {
  result = 0xFFFF0000 | x; // fill in the upper 16 bits with 1's
}
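A common way to fold the sign handling into a single expression is to shift the 16-bit value into the top of the 32-bit integer and shift it back arithmetically, which sign-extends automatically. A minimal sketch (the function name `toInt16` is my own):

```javascript
// Combine two bytes into a signed 16-bit integer.
// Left-shifting into the top 16 bits and arithmetically shifting
// back down sign-extends the result for free.
function toInt16(byteA, byteB) {
  var x = ((byteA & 0xFF) << 8) | (byteB & 0xFF);
  return (x << 16) >> 16; // sign-extend from 16 to 32 bits
}

console.log(toInt16(0xFF, 0xFE)); // -2
console.log(toInt16(0x12, 0x34)); // 4660 (0x1234)
```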

Related

How do I convert a string hex representation to bytes? - javascript

My problem involves a stream (or array) of bytes.
Bytes like these are no problem:
'\u0000'
'\u0000'
'\u0001'
'\u0010'
But the problem arises when I decode special characters like this:
'\u0000'
'\u0000'
'\u0000'
'�'
Reading from right to left (here, bottom to top), I can combine the numeric values of these 4 bytes into one integer, but I don't know if this is correct:
toInt(buff) {
  console.log('buff ', typeof buff[0]);
  for (let b of buff) {
    console.log('b', b);
  }
  return (buff[3] & 0x000000ff)       |
         (buff[2] & 0x0000ff00) << 8  |
         (buff[1] & 0x00ff0000) << 16 |
         (buff[0] & 0xff000000) << 24;
}
I'm not sure I've understood your question correctly, but it sounds like you want to convert an array of strings representing hex bytes into a number.
If you've got a string representation of hex numbers you could convert them using something like:
function bufferToInt(buff) {
  var string = buff.join('');
  return parseInt(string, 16); // parseInt allows specifying a base
}
bufferToInt(['ff', 'ff']); // This returns 65535
Assuming each element in buff contains a string representation of a hex byte such as "ff", the above should allow simple conversion to a number using mainly built-in functions.
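If the buffer instead holds numeric bytes (as in the asker's toInt), the fix is to mask each byte to 8 bits *before* shifting rather than masking already-shifted positions. A sketch of a big-endian version (the function name `bytesToUint32` is my own):

```javascript
// Combine four numeric bytes into one 32-bit value, big-endian
// (buff[0] is the most significant byte). The final >>> 0 reads
// the result back as an unsigned 32-bit integer.
function bytesToUint32(buff) {
  return (((buff[0] & 0xff) << 24) |
          ((buff[1] & 0xff) << 16) |
          ((buff[2] & 0xff) << 8)  |
           (buff[3] & 0xff)) >>> 0;
}

console.log(bytesToUint32([0, 0, 1, 0x10])); // 272 (0x110)
console.log(bytesToUint32([0xff, 0xff, 0xff, 0xff])); // 4294967295
```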

Extract bits from start to end in javascript

In JavaScript I want to extract bits 13 to 16 from an integer.
Example: if I extract bits 13 to 16 from the number 16640, the output will be 2.
I have searched on Google and found a few links, but they are in C.
Assuming your bit count starts at 0:
var extracted, orig;
orig = parseInt("16640", 10); // best practice: specify the number base to avoid spurious octal interpretation of leading zeroes (thx Ken Fyrstenberg)
extracted = (orig & ((1 << 16) - 1) & ~((1 << 13) - 1)) >>> 13;
Explanation:
mask the lower 16 bits of the original number
mask the complement of the lower 13 bits of the result (i.e. bits 13-31)
you now have bits 13-16 of the original number in their original position; shift this bit pattern 13 bits to the right
Note that this method only works reliably for numbers less than 2^31; see the MDN documentation on bitwise operators for details.
JavaScript's bitwise operations work essentially the same way as they do in C:
var bits13to16 = (number >> 13) & 15;
This shifts the number 13 bits to the right (eliminating bits 0-12) and masks all but the last 4 remaining bits (which used to be bits 13-16). 15 = 2^4 - 1.
All suggestions work, but the simplest, I think, is the one given by @dandavis:
parseInt((16640).toString(2).slice(-16, -13), 2);
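The shift-and-mask answer above generalizes naturally to any inclusive bit range. A small helper sketch (the name `extractBits` is my own):

```javascript
// Extract bits [start, end] (inclusive, 0-indexed from the least
// significant bit). Like the answers above, this is only reliable
// for values below 2^31.
function extractBits(n, start, end) {
  var width = end - start + 1;
  return (n >> start) & ((1 << width) - 1);
}

console.log(extractBits(16640, 13, 16)); // 2
console.log(extractBits(0xf0, 4, 7));    // 15
```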

JavaScript Typed Arrays - Different Views - 2

I posted a couple of questions on this a few days ago and got some excellent replies: JavaScript Typed Arrays - Different Views
My second question involved two views of a buffer, an 8-bit array and a 32-bit array. By placing 0, 1, 2, 3 in the 8-bit array I got 50462976 in the 32-bit array, and the reason for that value was well explained.
I can achieve the same thing with the following code:
var buf = new ArrayBuffer(4);
var arr8 = new Int8Array(buf);
var arr32 = new Int32Array(buf);
for (var x = 0; x < buf.byteLength; x++) {
  arr8[x] = (x << 24) |
            (x << 16) |
            (x << 8)  |
            x;
}
console.log(arr8); // [0, 1, 2, 3]
console.log(arr32); // [50462976]
I can't find anything that explains the mechanics of this process. It seems to be saying that each arr8 element equals x bit-shifted 24 positions, OR'd with x bit-shifted 16 positions, OR'd with x bit-shifted 8 positions, OR'd with x itself.
That doesn't really make sense to me. I'd appreciate it if someone could shed some light on this.
Thanks,
Basically, your buffer is like this:
00000000 00000001 00000010 00000011
When handled as an Int8Array, it reads each 8-bit group individually: 0, 1, 2, 3
When handled as an Int32Array, it reads 32-bit groups (ie. 4 8-bit groups) to get 50462976
The memory used by the buffer is interpreted as 8-bit bytes for the Int8Array and 32-bit words for the Int32Array. The ordering of the bytes in the 8-bit array is the same as the ordering of the bytes in the single 32-bit word in the other array because they're the same bytes. There are no "mechanics" involved; it's just two ways of looking at the same 4 bytes of memory.
You get the exact same effect in C if you allocate a four-byte array and then create an int pointer to the same location.
Furthermore, this expression here:
arr8[x] = (x << 24) |
          (x << 16) |
          (x << 8)  |
          x;
will do precisely the same thing as
arr8[x] = x;
You're shifting the value of x up into ranges that will be truncated away when the value is actually saved into the (8-bit) array element.
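One thing the answers leave implicit is byte order: 50462976 is 0x03020100, i.e. the bytes 0, 1, 2, 3 read in little-endian order, which is what Int32Array produces on typical little-endian hardware. A quick sketch using DataView, which lets you choose the endianness explicitly:

```javascript
// Fill four bytes, then read them back as a 32-bit integer
// in each byte order.
var buf = new ArrayBuffer(4);
var arr8 = new Uint8Array(buf);
arr8.set([0, 1, 2, 3]);

var view = new DataView(buf);
// Little-endian: byte 0 is least significant.
console.log(view.getInt32(0, true));  // 50462976 (0x03020100)
// Big-endian: byte 0 is most significant.
console.log(view.getInt32(0, false)); // 66051 (0x00010203)
```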

using bitwise OR in javascript to convert to integer

we can do the following to convert:
var a = "129.13" | 0; // becomes 129
var b = 11.12 | 0;    // becomes 11
var c = "112" | 0;    // becomes 112
This seems to work, but I'm not sure if it is a standard JS feature. Does anyone know if this is safe to use for converting strings and decimals to integers?
Yes, it is standard behavior. Bitwise operators only operate on integers, so they convert whatever operand they're given to a signed 32-bit integer.
This means the maximum value is that of a signed 32-bit integer: 2^31 - 1, which is 2147483647.
(Math.pow(2, 32) / 2 - 1)|0; // 2147483647
(Math.pow(2, 32) / 2)|0; // -2147483648 (wrong result)
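In practice `|0` truncates toward zero and silently wraps anything outside the 32-bit range, so it only behaves like a safe integer conversion for moderate magnitudes. A few illustrative cases:

```javascript
// |0 converts its operand via ToInt32: coerce to a number,
// truncate toward zero, then wrap modulo 2^32.
console.log(-3.7 | 0);        // -3 (truncates toward zero, unlike Math.floor)
console.log("129.13" | 0);    // 129 (strings are coerced to numbers first)
console.log(NaN | 0);         // 0 (non-numeric input becomes 0)
console.log(2147483648 | 0);  // -2147483648 (wraps past the 32-bit limit)
```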

Convert a string with a hex representation of an IEEE-754 double into JavaScript numeric variable

Suppose I have a hex number "4072508200000000" and I want the floating point number that it represents (293.03173828125000) in IEEE-754 double format to be put into a JavaScript variable.
I can think of a way that uses some masking and a call to pow(), but is there a simpler solution?
A client-side solution is needed.
This may help. It's a website that lets you enter a hex encoding of an IEEE-754 and get an analysis of mantissa and exponent.
http://babbage.cs.qc.edu/IEEE-754/64bit.html
Because people always tend to ask "why?," here's why: I'm trying to fill out an existing but incomplete implementation of Google's Protocol Buffers (protobuf).
I don't know of a good way. It certainly can be done the hard way; here is a single-precision example totally within JavaScript:
js> a = 0x41973333
1100428083
js> (a & 0x7fffff | 0x800000) * 1.0 / Math.pow(2,23) * Math.pow(2, ((a>>23 & 0xff) - 127))
18.899999618530273
A production implementation should consider that most of the fields have magic values, typically implemented by specifying a special interpretation for what would have been the largest or smallest. So, detect NaNs and infinities. The above example should also check for negatives (a & 0x80000000).
Update: OK, I've got it for doubles, too. You can't directly extend the above technique because the internal JS representation is a double, and so by its definition it can handle at best a bit string of length 52, and it can't shift by more than 32 at all.
OK, to do a double you first chop off, as a string, the low 8 hex digits (32 bits) and process them separately. Then:
js> a = 0x40725082
1081233538
js> (a & 0xfffff | 0x100000) * 1.0 / Math.pow(2, 52 - 32) * Math.pow(2, ((a >> 52 - 32 & 0x7ff) - 1023))
293.03173828125
js>
I kept the above example because it's from the OP. A harder case is when the low 32 bits are nonzero. Here is the conversion of 0x40725082deadbeef, a full-precision double:
js> a = 0x40725082
1081233538
js> b = 0xdeadbeef
3735928559
js> e = (a >> 52 - 32 & 0x7ff) - 1023
8
js> (a & 0xfffff | 0x100000) * 1.0 / Math.pow(2,52-32) * Math.pow(2, e) +
b * 1.0 / Math.pow(2, 52) * Math.pow(2, e)
293.0319506442019
js>
There are some obvious subexpressions you can factor out but I've left it this way so you can see how it relates to the format.
A quick addition to DigitalRoss' solution, for those finding this page via Google as I did.
Apart from the edge cases for +/- Infinity and NaN, which I'd love input on, you also need to take into account the sign of the result:
s = a >> 31 ? -1 : 1
You can then include s in the final multiplication to get the correct result.
I think for a little-endian solution you'll also need to reverse the byte order within a and b and swap them.
The new Typed Arrays mechanism allows you to do this (and is probably an ideal mechanism for implementing protocol buffers):
var buffer = new ArrayBuffer(8);
var bytes = new Uint8Array(buffer);
var doubles = new Float64Array(buf); // not supported in Chrome at the time of writing
bytes[7] = 0x40; // Load the hex string "40 72 50 82 00 00 00 00"
bytes[6] = 0x72;
bytes[5] = 0x50;
bytes[4] = 0x82;
bytes[3] = 0x00;
bytes[2] = 0x00;
bytes[1] = 0x00;
bytes[0] = 0x00;
my_double = doubles[0];
document.write(my_double); // 293.03173828125
This assumes a little-endian machine.
Unfortunately, at the time of writing, Chrome did not have Float64Array, although it did have Float32Array. The above example works in Firefox 4.0.1.
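Tying the typed-array approach back to the original question: a DataView sidesteps the endianness caveat entirely, because you can request a big-endian read explicitly instead of depending on the machine's native byte order. A sketch (the function name `hexToDouble` is my own):

```javascript
// Parse a 16-hex-digit string as a big-endian IEEE-754 double.
function hexToDouble(hex) {
  var buf = new ArrayBuffer(8);
  var view = new DataView(buf);
  // Write each pair of hex digits as one byte, most significant first.
  for (var i = 0; i < 8; i++) {
    view.setUint8(i, parseInt(hex.slice(i * 2, i * 2 + 2), 16));
  }
  return view.getFloat64(0, false); // false = big-endian read
}

console.log(hexToDouble("4072508200000000")); // 293.03173828125
```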
