I have a Uint8Array that I would like to convert into a Uint16Array, so that each consecutive pair of elements in the Uint8Array becomes one Uint16 value. The resulting Uint16Array is thus half the length of the Uint8Array. In other words, I want to cast the whole array, not each value individually.
The only solution I found keeps the same length and changes the underlying type of each element of the array. So it does not 'cast' the array but each element of the array, which is not what I want.
If I understood the question correctly, the most native way that comes to mind (which is still not a cast, but requires only a few lines) is converting to a Buffer, then converting that to a Uint16Array.
The following snippet seems to achieve the desired result.
const ar8 = new Uint8Array([0, 1, 1, 0, 0, 2, 2, 0]);
const buf = Buffer.from(ar8); // the `new Buffer()` constructor is deprecated
const ar16 = new Uint16Array(buf.buffer, buf.byteOffset, buf.byteLength / Uint16Array.BYTES_PER_ELEMENT);
console.log(ar8, buf, ar16);
On my platform the conversion is little endian. Typed array views use the host platform's native byte order, so in principle the result is platform dependent (in practice, nearly all current platforms are little endian).
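If a guaranteed byte order matters, a DataView lets you pick it explicitly. A minimal sketch (still not a cast, but fully portable):
const src = new Uint8Array([0, 1, 1, 0, 0, 2, 2, 0]);
const view = new DataView(src.buffer, src.byteOffset, src.byteLength);
const out = new Uint16Array(src.length / 2);
for (let i = 0; i < out.length; i++) {
  out[i] = view.getUint16(i * 2, true); // true = little endian, false = big endian
}
console.log(out); // Uint16Array(4) [ 256, 1, 512, 2 ]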
You can instantiate a Uint16Array from the Uint8Array's underlying buffer:
const uint8Array = new Uint8Array([1, 164, 5, 57])
const uint16Array = new Uint16Array(uint8Array.buffer)
// Uint16Array(2) [ 41985, 14597 ]
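One caveat worth adding: this reinterprets the underlying ArrayBuffer, so it throws if the byte length is not a multiple of two. A minimal illustration:
const odd = new Uint8Array([1, 164, 5]);
// This would throw, since 3 bytes cannot form whole 16 bit elements:
// new Uint16Array(odd.buffer)
// -> RangeError: byte length of Uint16Array should be a multiple of 2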
Your current solution should work fine. The "underlying type" of each element in a Uint8Array is number, just like in a Uint16Array. Therefore, casting the whole array should preserve the types of the values.
I am getting MongoDB ObjectIds, which are 12 bytes and their hex string looks like this: 62d34be4f8cd489f6d5b8e0e.
I want to use the object id as a map key like this:
new Map().set(ObjectId('62d34be4f8cd489f6d5b8e0e'),"value")
That doesn't really work because each ObjectId object is unique so two with the same id are not equal objects.
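A quick demonstration of the problem, assuming the ObjectId from the mongodb/bson package:
const { ObjectId } = require('mongodb');
const m = new Map();
m.set(new ObjectId('62d34be4f8cd489f6d5b8e0e'), 'value');
// A second instance with the same id is a distinct object, and Map
// compares object keys by reference, so the lookup misses:
console.log(m.get(new ObjectId('62d34be4f8cd489f6d5b8e0e'))); // undefined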
There are a few ways I can think of:
Use the hex string as map key:
new Map().set("62d34be4f8cd489f6d5b8e0e", "value")
Use a buffer / Uint8Array, and use some special Map which works with arrays as keys
new Map().set(Buffer.from("62d34be4f8cd489f6d5b8e0e", "hex"), "value")
Turn the buffer into a 12 byte string which looks like b�K���H�m[�\x0E
new Map().set(Buffer.from("62d34be4f8cd489f6d5b8e0e", "hex").toString(), "value")
Turn the buffer into a number (readInt16BE() only reads the first 2 of the 12 bytes, so this seems a little weird)
new Map().set(Buffer.from("62d34be4f8cd489f6d5b8e0e", 'hex').readInt16BE(), "value")
Which method has the best performance?
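Not from the thread, but a quick way to compare the string-keyed options yourself; a minimal micro-benchmark sketch, assuming Node.js (exact numbers vary by machine and engine version). The Buffer option is omitted because a plain Map compares object keys by reference, so two Buffers with equal bytes would be different keys. Note that toString('latin1') is used for the binary string, since the default utf8 decoding is lossy for arbitrary bytes:
const { performance } = require('perf_hooks');
const ids = Array.from({ length: 100000 }, (_, i) => i.toString(16).padStart(24, '0'));

function bench(name, keyOf) {
  const map = new Map();
  const t0 = performance.now();
  for (const id of ids) map.set(keyOf(id), id); // insert
  for (const id of ids) map.get(keyOf(id));     // look up
  console.log(name, (performance.now() - t0).toFixed(1), 'ms');
}

bench('hex string   ', (id) => id);
bench('binary string', (id) => Buffer.from(id, 'hex').toString('latin1'));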
I'm trying to figure out how to work with a buffer's integer representation.
Simple code
var buffer = Buffer.alloc(4, 'a');
var interface16 = new Uint16Array(buffer);
var interface8 = new Uint8Array(buffer);
console.log(interface16);
console.log(interface8);
Why does the log output the same arrays (by value and length)? After all, I'm asking for representations with 1 and 2 bytes per integer. At least the manuals say that Uint8Array takes 1 byte per element from the buffer, which I think of as a sequence of 0s and 1s, in this case 32 of them (4 * 8). So I can accept that it is an array with 4 elements.
But Uint16Array takes 2 bytes per integer, so the numbers should be different and the array should have length 2.
What am I wrong about, and which buffer should I pass to these constructors in order to visually see the difference?
I suspect that either the returned array is always the same length as the number of bytes, or it is a matter of how the console generates the output. But I don't know enough to understand why.
Thank you for your attention.
P.S. If you also advise some non-brain-exploding literature directly on this issue, it will be great.
In your example, both typed arrays end up having 4 elements, but the Uint8Array only uses 4 bytes to represent its contents, while the Uint16Array uses 8. You can see this by inspecting the byteLength property:
var buffer = Buffer.alloc(4, 'a');
var interface16 = new Uint16Array(buffer);
var interface8 = new Uint8Array(buffer);
console.log('Uint16 length: %d', interface16.length);
console.log('Uint16 byteLength: %d', interface16.byteLength);
console.log('Uint8 length: %d', interface8.length);
console.log('Uint8 byteLength: %d', interface8.byteLength);
Output
Uint16 length: 4
Uint16 byteLength: 8
Uint8 length: 4
Uint8 byteLength: 4
So the typed array constructors are creating typed arrays with the same number of elements as the source buffer, but the Uint16Array constructor is using two bytes instead of one for each element, filling the high-order byte of each element with zero.
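If the goal is instead to reinterpret the same 4 bytes as two 16 bit values, pass the Buffer's underlying ArrayBuffer, along with the offset and element count. A sketch (the offset matters because Node Buffers can be views into a larger shared ArrayBuffer):
var buffer = Buffer.alloc(4, 'a');
// View the same bytes as 16 bit units instead of copying element by element.
var view16 = new Uint16Array(buffer.buffer, buffer.byteOffset, buffer.byteLength / Uint16Array.BYTES_PER_ELEMENT);
console.log(view16); // Uint16Array(2) [ 24929, 24929 ] ('aa' = 0x6161, little endian)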
First you create a buffer with four 'a's inside.
Then you call this constructor with the buffer.
The doc says: new Uint16Array(object);
When called with an object argument, a new typed array is created as if by the TypedArray.from() method.
see more : https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray/from
It creates an array of four uint16 values, one from each byte of the buffer:
a = 0x61 = 97
You then get four 97s.
Here is a Stack Overflow question with a successful buffer to uint16 conversion:
convert nodejs buffer to Uint16Array
Can a buffer have both string and image associated with it? If so, how to extract them separately.
An example case would be a buffer with image data and also file name data.
I have worked with sharedArrayBuffers/arrayBuffers before.
If you are storing image pixel data, it's going to be a 32 bit unsigned integer array, with four 8 bit segments controlling R, G, B, and A respectively. Yes, you CAN tack string data onto the front in the form of a 'header' if you encode it to int values and decode it back, but I have a hard time understanding why that might be desirable, because working with raw pixel data that is ONLY pixel data is simpler. (I usually just stick the pixel data on an object as a property, along with whatever other data I want to store.)
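For illustration, a minimal sketch of the header idea (the layout and function names are made up): a 2 byte name length, then the name bytes, then the RGBA pixel bytes.
// Hypothetical layout: [2-byte name length][name bytes][RGBA pixel bytes].
function packImage(name, pixels /* Uint8Array of RGBA bytes */) {
  const nameBytes = new TextEncoder().encode(name);
  const out = new Uint8Array(2 + nameBytes.length + pixels.length);
  new DataView(out.buffer).setUint16(0, nameBytes.length);
  out.set(nameBytes, 2);
  out.set(pixels, 2 + nameBytes.length);
  return out;
}
function unpackImage(packed) {
  const nameLen = new DataView(packed.buffer, packed.byteOffset).getUint16(0);
  const name = new TextDecoder().decode(packed.subarray(2, 2 + nameLen));
  return { name, pixels: packed.subarray(2 + nameLen) };
}
const { name, pixels } = unpackImage(packImage("cat.png", new Uint8Array([255, 0, 0, 255])));
console.log(name, pixels); // cat.png Uint8Array(4) [ 255, 0, 0, 255 ]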
Data Buffers
Typed arrays
You can use an ArrayBuffer to create a buffer to hold the data. You then create a view over it using a typed array, e.g. unsigned characters via Uint8Array. Types can be 8, 16, 32, or 64 bit (un/signed integers), or float and double (32 and 64 bit floating point).
One buffer can have many views. You can read and write to a view just like any JS array. The values are automatically converted to the correct type when you write to a buffer, and converted to Number when you read from a view.
Example
Using buffer and views to read different data types
For example, say you have file data with a 4 character header, followed by a 16 bit unsigned integer chunk length, then two signed 16 bit integer coordinates, and more data.
const fileBuffer = new ArrayBuffer(fileSizeInBytes); // ArrayBuffer requires `new`
// Create a view of the buffer so we can fill it with file data
const dataRaw = new Uint8Array(fileBuffer);
// load the data into dataRaw
// To get a string from the data we can create a util function
function readBufferString(buffer, start, length) {
// create a view at the position of the string in the buffer
const bytes = new Uint8Array(buffer, start, length);
// read each byte converting to JS unicode string
var str = "", idx = 0;
while (idx < length) { str += String.fromCharCode(bytes[idx++]) }
return str;
}
// get 4 char chunk header at start of buffer
const header = readBufferString(fileBuffer, 0, 4);
if (header === "HEAD") {
// Create views for 16 bit signed and unsigned integers
const ints = new Int16Array(fileBuffer);
const uints = new Uint16Array(fileBuffer);
const length = uints[2]; // get the length as unsigned int16
const x = ints[3]; // get the x coord as signed int16
const y = ints[4]; // get the y coord as signed int16
}
A DataView
The above example is one way of extracting the different types of data from a single buffer. However, there can be a problem with older files and some data sources regarding the order of the bytes that make up multi byte types (e.g. 32 bit integers). This is called endianness.
To help with using the correct endianness and to simplify access to all the different data types in a buffer you can use a DataView
The data view lets you read from the buffer by type and endianness. For example, to read an unsigned 64 bit integer from a buffer:
// fileBuffer is an ArrayBuffer with the data
// Create a view
const dataView = new DataView(fileBuffer);
// read the 64 bit uint starting at the first byte in the buffer
// Note the returned value is a BigInt not a Number
const bInt = dataView.getBigUint64(0);
// If the int was in little endian order you would instead use
const bIntLE = dataView.getBigUint64(0, true); // true for little endian
Notes
Buffers are not dynamic. They cannot grow or shrink, so you must know how large the buffer needs to be when you create it.
Buffers tend to be a little slower than JavaScript's standard arrays, as there is a lot of type coercion when reading from or writing to them.
Buffers can be transferred (zero copy transfer) across threads, making them ideal for distributing large data structures between WebWorkers. There is also a SharedArrayBuffer that lets you create true parallel processing solutions in JS.
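A minimal sketch of the zero copy transfer, assuming a browser-style Worker ("worker.js" is a hypothetical file name):
const buffer = new ArrayBuffer(1024 * 1024);
const worker = new Worker("worker.js");
// Listing the buffer in the transfer list moves ownership instead of copying.
worker.postMessage({ buffer }, [buffer]);
console.log(buffer.byteLength); // 0 - the buffer is detached after the transfer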
I have a binary string that must travel over a ws websocket (I can't use websocket.io) and so is being JSON.stringified, e.g. msg.data = data.toString('base64').
On the other end, I want that data back not as byte wide binary, but as an array of 32 bit integers. e.g. if the binary data is [0, 0, 0, 1] going in, I want [1] coming out. Each output element is 4 bytes.
If I just take the binary string directly, I can new Int32Array(data) and I'm golden; the result is 1/4 the length of the original and each 32 bit element is made up from 4 of the original byte wide elements.
But when I've encoded it, then decoded with var data = Buffer.from(msg.data, 'base64'), then new Int32Array(data) is the same length as the original, and each 32 bit element is made from ONE of the original 8 bit elements. Int32Array.from(data) does the same.
I'm not finding any answer by searching, everyone appears to be ok with byte wide data.
.buffer
I was forgetting .buffer.
new Int32Array(data.buffer) works perfectly.
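One caveat to add beyond the original answer: a Node Buffer is often a view into a larger pooled ArrayBuffer, so data.buffer can contain unrelated bytes. Passing the offset and element count makes the view exact:
const data = Buffer.from(msg.data, 'base64');
// Restrict the view to this Buffer's own bytes within the
// (possibly pooled and larger) underlying ArrayBuffer.
const ints = new Int32Array(data.buffer, data.byteOffset, data.byteLength / Int32Array.BYTES_PER_ELEMENT);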
I am trying to create a view of an ArrayBuffer object in order to JSONify it.
var data = { data: new Uint8Array(arrayBuffer) }
var json = JSON.stringify(data)
It seems that the size of the ArrayBuffer does not matter, even with the smallest Uint8Array; I have not hit a RangeError so far. :) If so, how do I decide which typed array to use?
You decide based on the data stored in the buffer, or, better said, based on your interpretation of that data.
Also, a Uint8Array is not an 8 bit array; it's an array of unsigned 8 bit integers. It can have any length. A Uint8Array created from the same ArrayBuffer as a Uint16Array is going to be twice as long, because every byte in the ArrayBuffer becomes one element of the Uint8Array, while for the Uint16Array each pair of bytes becomes one element of the array.
A good way to see what happens is to think in binary. Try running this:
var buffer = new ArrayBuffer(2);
var uint8View = new Uint8Array(buffer);
var uint16View = new Uint16Array(buffer);
uint8View[0] = 2;
uint8View[1] = 1;
console.log(uint8View[0].toString(2));
console.log(uint8View[1].toString(2));
console.log(uint16View[0].toString(2));
The output is going to be
10
1
100000010
because displayed as an unsigned 8 bit integer in binary, 2 is 00000010 and 1 is 00000001. (toString strips leading zeroes).
Uint8Array represents an array of bytes. As I said, an element is an unsigned 8 bit integer. We just pushed two bytes to it.
In memory those two bytes are stored side by side as 00000010 00000001 (binary form again, byte 0 first).
Now when you initialize a Uint16Array over the same buffer, it contains the same bytes, but because an element is an unsigned 16 bit integer (two bytes), accessing uint16View[0] combines the first two bytes into a single value. On a little endian platform the first byte supplies the low-order bits, so the value is 0000000100000010, which prints as 100000010 with the leading zeroes stripped.
Interpreted as a decimal (base 10) integer, 0000000100000010 is 258.
Neither Uint8Array nor Uint16Array store any data themselves. They are simply different ways of accessing bytes in an ArrayBuffer.
How does one choose which one to use? It's not based on preference but on the underlying data. An ArrayBuffer is to be used when you receive binary data from some external source (a web socket, maybe) and already know what the data represents. It might be a list of unsigned 8 bit integers, or one of signed 16 bit ones, or even a mixed list where you know the first element is an 8 bit integer and the next one is a 16 bit one. Then you can use a DataView to read typed items from it.
If you don't know what the data represents you can't choose what to use.
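For example, a minimal sketch of reading such a mixed layout with a DataView (the layout itself is made up: one unsigned 8 bit integer followed by a signed 16 bit one, little endian):
const buffer = new ArrayBuffer(3);
const view = new DataView(buffer);
view.setUint8(0, 200);
view.setInt16(1, -1234, true); // true = little endian
console.log(view.getUint8(0));       // 200
console.log(view.getInt16(1, true)); // -1234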