I have some data in a Uint8Array (a list of 3D x,y,z coordinates). I need to perform some floating-point operations (a matrix rotation transform multiplication) on these 3D vectors. I am using the http://glmatrix.net/ library for the calculations.
The data is coming from a 3rd-party API that packages it as a Uint8Array. The vertex data is formatted in sets of 8: [x,y,z,w,r,g,b,a].
Up until now, I have been performing this operation on the GPU, passing this Uint8Array and the rotation matrix to GLSL. I am moving the operation to a CPU-based script that is executed offline.
Update: Another piece of the equation. The offline script is a Node.js CLI script that writes the final buffer to stdout and eventually to a file using the Unix > operator. I then load this in the final app and wrap the ArrayBuffer with a typed array constructor.
When executed in GLSL, the operation looks like:
matrix * vec4(vert.x, vert.y, vert.z, 1.0)
and my results look like:
Now, on the CPU, my current code attempt is:
const vertData = getVertDataUint8();
const rotation = mat4.fromRotation(mat4.identity([]), -Math.PI / 2, [1, 0, 0]);
var i = 0;
while (i < vertData.length) {
    const rotatedVerts = vec4.transformMat4(
        [],
        vec4.clone([vertData[i], vertData[i + 1], vertData[i + 2], 1]),
        rotation
    );
    vertData[i] = rotatedVerts[0];
    vertData[i + 1] = rotatedVerts[1];
    vertData[i + 2] = rotatedVerts[2];
    i += 8;
}
and my results look like:
I noticed that when re-assigning the values to vertData, negative values in particular were being cast to unsigned ints; for example, -10 became 246.
you can reproduce in a browser console with:
var u = new Uint8Array(1)
undefined
u[0] = -10
-10
u[0]
246
I have tried converting the Uint8Array into a Float32Array by wrapping it in the Float32Array constructor and by using Float32Array.from, but it seems to produce the same results.
First of all: consider using a Float32Array for your vertex information; otherwise there can be massive problems (values get truncated to unsigned 8-bit integers). Or at least use a signed data format.
Now to your problem:
Since Uint8Array is an unsigned data format, it will only store positive 8-bit values. All negative values will be reinterpreted as positive ones (0b10000000 read as a signed integer is -128, but read as unsigned it is 128).
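A minimal sketch of that reinterpretation, viewing the same byte through an unsigned and a signed typed array:
// One byte of backing storage, viewed both ways
const oneByte = new ArrayBuffer(1);
const unsigned = new Uint8Array(oneByte);
const signed = new Int8Array(oneByte);

unsigned[0] = 0b10000000;   // write the bit pattern 1000 0000
console.log(unsigned[0]);   // 128
console.log(signed[0]);     // -128 (same bits, signed interpretation)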
But I guess the real problem is a different one.
From the code I assume that you are storing the vertex information like this:
[X1,Y1,Z1,...] or [X1,Y1,Z1,W1,...]
Your loop will iterate over the array in steps of 8 because of i += 8;.
Therefore, if the data really has only 3 or 4 values per vertex, at least every second vertex will be skipped by your script.
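Independent of the stride question, the truncation of negative values goes away once the transformed result is written into a Float32Array instead of back into the Uint8Array. A minimal sketch, assuming the [x,y,z,w,r,g,b,a] layout of 8 values per vertex described in the question and the gl-matrix API (getVertDataUint8 is the asker's own helper):
const { mat4, vec4 } = require('gl-matrix');

const vertData = getVertDataUint8();   // Uint8Array, 8 values per vertex
const rotation = mat4.fromRotation(mat4.create(), -Math.PI / 2, [1, 0, 0]);

// Separate float output buffer: same layout, but values keep their sign and fraction
const out = new Float32Array(vertData.length);
for (let i = 0; i < vertData.length; i += 8) {
    const v = vec4.fromValues(vertData[i], vertData[i + 1], vertData[i + 2], 1);
    vec4.transformMat4(v, v, rotation);
    out[i] = v[0];
    out[i + 1] = v[1];
    out[i + 2] = v[2];
    // copy the remaining components (w, r, g, b, a) through unchanged
    for (let j = 3; j < 8; j++) out[i + j] = vertData[i + j];
}

// In the Node CLI you could then write the raw bytes to stdout,
// and the consuming app can wrap the resulting ArrayBuffer with new Float32Array(...)
process.stdout.write(Buffer.from(out.buffer));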
Related
Can a buffer have both string and image data associated with it? If so, how do I extract them separately?
An example case would be a buffer with image data and also file-name data.
I have worked with SharedArrayBuffers/ArrayBuffers before.
If you are storing image pixel data, it's going to be a 32-bit integer array, with four 8-bit segments controlling r, g, b and a respectively. Yes, you CAN tack string data onto the front in the form of a 'header' if you encode it to integer values and decode it back, but I have a hard time understanding why that might be desirable, because working with raw pixel data that is ONLY pixel data is simpler. (I usually just stick the pixel buffer as a property of an object, along with whatever other data I want to store.)
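If you do want the header approach, a minimal sketch of packing a length-prefixed name in front of the pixel bytes (the 4-byte length prefix, the file name and the 16x16 image size are made-up values for illustration):
// Pack: [4-byte name length][name bytes][pixel bytes] into one buffer
const name = 'photo.png';
const nameBytes = new TextEncoder().encode(name);   // string -> Uint8Array
const pixels = new Uint8Array(4 * 16 * 16);         // pretend 16x16 RGBA image

const packed = new Uint8Array(4 + nameBytes.length + pixels.length);
new DataView(packed.buffer).setUint32(0, nameBytes.length); // length prefix
packed.set(nameBytes, 4);
packed.set(pixels, 4 + nameBytes.length);

// Unpack
const nameLen = new DataView(packed.buffer).getUint32(0);
const decodedName = new TextDecoder().decode(packed.subarray(4, 4 + nameLen));
const decodedPixels = packed.subarray(4 + nameLen);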
Data Buffers
Typed arrays
You can use an ArrayBuffer to create a buffer to hold the data. You then create a view of it using a typed array, e.g. unsigned bytes via Uint8Array. Types can be 8, 16, 32 or 64-bit (un/signed integers), or float / double (32 / 64-bit floating point).
One buffer can have many views. You can read and write to a view just like any JS array. The values are automatically converted to the correct type when you write through a view, and converted to Number when you read from a view.
Example
Using buffer and views to read different data types
For example, say you have file data that has a 4-character header, followed by a 16-bit unsigned integer chunk length, then two signed 16-bit integer coordinates, and more data.
const fileBuffer = new ArrayBuffer(fileSizeInBytes);
// Create a view of the buffer so we can fill it with the file data
const dataRaw = new Uint8Array(fileBuffer);
// load the data into dataRaw
// To get a string from the data we can create a util function
function readBufferString(buffer, start, length) {
    // create a view at the position of the string in the buffer
    const bytes = new Uint8Array(buffer, start, length);
    // read each byte, converting to a JS unicode string
    var str = "", idx = 0;
    while (idx < length) { str += String.fromCharCode(bytes[idx++]) }
    return str;
}
// get the 4 char chunk header at the start of the buffer
const header = readBufferString(fileBuffer, 0, 4);
if (header === "HEAD") {
    // Create views for 16 bit signed and unsigned integers
    const ints = new Int16Array(fileBuffer);
    const uints = new Uint16Array(fileBuffer);
    const length = uints[2]; // get the length as an unsigned int16
    const x = ints[3];       // get the x coord as a signed int16
    const y = ints[4];       // get the y coord as a signed int16
}
A DataView
The above example is one way of extracting the different types of data from a single buffer. However, there can be a problem with older files and some data sources regarding the order of the bytes that make up multi-byte types (e.g. 32-bit integers). This is called endianness.
To help with using the correct endianness, and to simplify access to all the different data types in a buffer, you can use a DataView.
The DataView lets you read from the buffer by type and endianness. For example, to read an unsigned 64-bit integer from a buffer:
// fileBuffer is an ArrayBuffer holding the data
// Create a view
const dataView = new DataView(fileBuffer);
// read the 64 bit uint starting at the first byte in the buffer
// Note the returned value is a BigInt not a Number
const bInt = dataView.getBigUint64(0);
// If the int was stored in little-endian order you would use
const bIntLE = dataView.getBigUint64(0, true); // true for little endian
Notes
Buffers are not dynamic. That means they can not grow or shrink, so you must know how large the buffer needs to be when you create it.
Buffers tend to be a little slower than JavaScript's standard arrays, as there is a lot of type coercion when reading from or writing to buffers.
Buffers can be transferred (zero-copy transfer) across threads, making them ideal for distributing large data structures between WebWorkers. There is also a SharedArrayBuffer that lets you create true parallel-processing solutions in JS.
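A minimal sketch of such a zero-copy transfer (the worker file name is hypothetical; after postMessage the buffer is detached in the sending thread):
// main thread
const big = new ArrayBuffer(1024 * 1024);        // 1 MiB of data
const worker = new Worker("worker.js");          // hypothetical worker script
worker.postMessage({ payload: big }, [big]);     // second argument = transfer list
console.log(big.byteLength);                     // 0 -- ownership has moved to the worker

// worker.js
onmessage = (event) => {
    const view = new Float32Array(event.data.payload); // use the transferred buffer
};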
I am trying to create a view of an ArrayBuffer object in order to JSONify it.
var data = { data: new Uint8Array(arrayBuffer) }
var json = JSON.stringify(data)
It seems that the size of the ArrayBuffer does not matter, even with the smallest Uint8Array; I did not get any RangeError so far :) If so, how do I decide which typed array to use?
You decide based on the data stored in the buffer, or, better said, based on your interpretation of that data.
Also, a Uint8Array is not an 8-bit array, it's an array of unsigned 8-bit integers. It can have any length. A Uint8Array created from the same ArrayBuffer as a Uint16Array is going to be twice as long, because every byte in the ArrayBuffer is going to be "placed" as one element of the Uint8Array, while for the Uint16Array each pair of bytes "becomes" one element in the array.
A good explanation of what happens is if we try thinking in binary. Try running this:
var buffer = new ArrayBuffer(2);
var uint8View = new Uint8Array(buffer);
var uint16View = new Uint16Array(buffer);
uint8View[0] = 2;
uint8View[1] = 1;
console.log(uint8View[0].toString(2));
console.log(uint8View[1].toString(2));
console.log(uint16View[0].toString(2));
The output is going to be
10
1
100000010
because displayed as an unsigned 8 bit integer in binary, 2 is 00000010 and 1 is 00000001. (toString strips leading zeroes).
Uint8Array represents an array of bytes. As I said, an element is an unsigned 8 bit integer. We just pushed two bytes to it.
In memory those two bytes are stored side by side, the byte holding 00000010 first and the byte holding 00000001 second (JavaScript engines run on little-endian hardware in practice).
Now when you initialize a Uint16Array over the same buffer it's going to contain the same bytes, but because an element is an unsigned 16-bit integer (two bytes), accessing uint16View[0] takes both bytes and combines them, with the second byte as the high byte on a little-endian machine. So you get 0000000100000010, which is 100000010 with the leading zeroes stripped.
Interpreted as a decimal integer, that bit pattern 0000000100000010 is 258.
Neither Uint8Array nor Uint16Array store any data themselves. They are simply different ways of accessing bytes in an ArrayBuffer.
How does one choose which one to use? It's not based on preference but on the underlying data. ArrayBuffer is to be used when you receive some binary data from an external source (a web socket maybe) and already know what the data represents. It might be a list of unsigned 8-bit integers, or one of signed 16-bit ones, or even a mixed list where you know the first element is an 8-bit integer and the next one is a 16-bit one. Then you can use a DataView to read typed items from it.
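For that mixed case, a minimal sketch using DataView (the layout here, one unsigned 8-bit flag followed by one signed 16-bit value, is just an assumed example):
// Assume a 3-byte message: [uint8 flag][int16 value, big-endian]
const mixed = new Uint8Array([0x01, 0xff, 0xfe]).buffer;
const view = new DataView(mixed);

const flag = view.getUint8(0);   // 1
const value = view.getInt16(1);  // -2 (bytes ff fe read as a signed, big-endian int16)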
If you don't know what the data represents you can't choose what to use.
I'm looking for help in understanding this line of code in the npm module hash-index.
The purpose of this module is to be a function which returns the SHA-1 hash of an input, modulo the second argument you pass.
The specific function in this module that I don't understand is this one that takes a Buffer as input and returns an integer:
var toNumber = function (buf) {
return buf.readUInt16BE(0) * 0xffffffff + buf.readUInt32BE(2)
}
I can't seem to figure out why those specific offsets of the buffer are chosen and what the purpose of multiplying by 0xffffffff is.
This module is really interesting to me and any help in understanding how it's converting buffers to integers would be greatly appreciated!
It builds a single Number from the first six bytes of the buffer.
First, it reads the first two bytes (a UInt16, unsigned 16-bit integer) of the buffer using big-endian byte order, then it multiplies that by 0xFFFFFFFF.
Then it reads the next four bytes (a UInt32) of the buffer, starting at offset 2, and adds that to the multiplied value, resulting in a number constructed from the first six bytes of the buffer.
Example: consider [Buffer BB AA CC CC DD EE ... ]
0xbbaa * 0xffffffff = 0xbba9ffff4456
0xbba9ffff4456 + 0xccccddee = 0xbbaacccc2244
And regarding the offsets, it reads them this way:
The first read covers bytes 0 to 1 (converted to a UInt16).
The second read covers bytes 2 to 5 (converted to a UInt32).
So to sum it up, it constructs a number from the first six bytes of the buffer using big-endian reads and returns it to the calling function (not an exact concatenation of the bytes, since the multiplier is 0xffffffff rather than 0x100000000, but close to it).
Hope that answers your question.
Wikipedia's Big Endian entry
EDIT
As someone pointed out in the comments, I was totally wrong about 0xFFFFFFFF being a left shift by 32; it's just a number multiplication. I'm assuming it's some kind of internal convention to produce a value in the range they expect.
EDIT 2
After looking at the function in its original context, I've come to this conclusion:
This function is a part of a hashing flow, and it works in that manner:
The main flow receives a string input and a maximum number for the hash output; it then takes the string input and plugs it into the SHA-1 hashing function.
SHA-1 hashing returns a Buffer; it takes that Buffer and applies the hash indexing to it, as can be seen in the following code excerpt:
return toNumber(crypto.createHash('sha1').update(input).digest()) % max
It also uses a modulo to make sure the hash index returned doesn't exceed the maximum allowed value.
Multiplication by 2 is equivalent to shifting the bits left by 1, so multiplying by 2^16 is equivalent to shifting the bits left 16 times.
Here is a similar question already answered:
Bitwise Logic in C
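A quick sketch of that equivalence in JavaScript (plain illustrative numbers, not taken from the module):
const n = 5;
console.log(n * 2, n << 1);          // 10 10
console.log(n * 0x10000, n << 16);   // 327680 327680  (multiplying by 2^16 == shifting left by 16)

// Note: 0xffffffff is 2^32 - 1, so multiplying by it is NOT the same as a 32-bit shift:
console.log(3 * 0xffffffff);         // 12884901885
console.log(3 * 0x100000000);        // 12884901888 (this would be the true "shift by 32")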
I'm writing a client-side Python bytecode interpreter in Javascript (specifically Typescript) for a class project. Parsing the bytecode was going fine until I tried out a negative number.
In Python, marshal.dumps(2) gives 'i\x02\x00\x00\x00' and marshal.dumps(-2) gives 'i\xfe\xff\xff\xff'. This makes sense as Python represents integers using two's complement with at least 32 bits of precision.
In my Typescript code, I use the equivalent of Node.js's Buffer class (via a library called BrowserFS, instead of ArrayBuffers etc.) to read the data. When I see the character 'i' (i.e. buffer.readUInt8(offset) == 105, signalling that the next thing is an int), I then call readInt32LE on the next offset to read a little-endian signed long (4 bytes). This works fine for positive numbers but not for negative numbers: for 1 I get '1', but for '-1' I get something like '-272777233'.
I guess that Javascript represents numbers in 64-bit (floating point?). So, it seems like the following should work:
var longval = buffer.readInt32LE(offset); // reads a 4-byte long, gives -272777233
var low32Bits = longval & 0xffff0000; //take the little endian 'most significant' 32 bits
var newval = ~low32Bits + 1; //invert the bits and add 1 to negate the original value
//but now newval = 272826368 instead of -2
I've tried a lot of different things and I've been stuck on this for days. I can't figure out how to recover the original value of the Python integer from the binary marshal string using Javascript/Typescript. Also I think I deeply misunderstand how bits work. Any thoughts would be appreciated here.
Some more specific questions might be:
Why would buffer.readInt32LE work for positive ints but not negative?
Am I using the correct method to get the 'most significant' or 'lowest' 32 bits (i.e. does & 0xffff0000 work how I think it does?)
Separate but related: in an actual 'long' number (i.e. longer than '-2'), I think there is a sign bit and a magnitude, and I think this information is stored in the 'highest' 2 bits of the number (i.e. at number & 0x000000ff?) -- is this the correct way of thinking about this?
The sequence ef bf bd is the UTF-8 encoding of the "Unicode replacement character", which UTF-8 decoders emit in place of invalid byte sequences.
It sounds like whatever method you're using to download the data is accidentally running it through a UTF-8 decoder and corrupting the raw data stream. Be sure you're using blob (or arraybuffer) instead of text, or whatever the equivalent is for the way you're downloading the bytecode.
This only got messed up for negative values because the bytes of small positive values fall within the ASCII range, which UTF-8 maps 1:1 from the original byte stream.
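A minimal sketch of requesting the raw bytes instead of text (the URL is a placeholder; both fetch and XMLHttpRequest support binary responses):
// fetch: ask for an ArrayBuffer, never a decoded string
fetch("/path/to/bytecode.pyc")                   // placeholder URL
    .then((response) => response.arrayBuffer())
    .then((buf) => {
        const bytes = new Uint8Array(buf);       // raw, undecoded bytes
    });

// XMLHttpRequest equivalent
const xhr = new XMLHttpRequest();
xhr.open("GET", "/path/to/bytecode.pyc");
xhr.responseType = "arraybuffer";                // or "blob"
xhr.onload = () => { const raw = new Uint8Array(xhr.response); };
xhr.send();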
I'd like to read a binary file with a few 32 bit float values at byte offset 31.
Unfortunately, new Float32Array(buffer, 31, 6); does not work. An offset of 32 instead of 31 works but I need 31.
According to this page, offset has to be a multiple of the element size, 4 in this case.
I'm interested in the reason behind this behaviour. Why does it matter where the view starts?
The best workaround I found thus far has not made it into gecko yet so I can't use it.
Do I really have to cut and copy the byte values into a new array to get my float values?
I'm interested in the reason behind this behaviour. Why does it matter where the view starts?
Some architectures do not allow unaligned word accesses, and there are performance penalties even on architectures that do allow them, such as x86 (though some instructions there must still be aligned).
Do I really have to cut and copy the byte values into a new array to get my float values?
Yes. Just like in Markus' example, you should create a new ArrayBuffer with both a Uint8Array view and a Float32Array view over it (copy the bytes in through the Uint8Array view and interpret them through the Float32Array view). Then you can read from your source data with a Uint8Array, copy that into your read buffer, and interpret the result using the Float32Array. It's quite a seamless process.
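A minimal sketch of that copy-and-reinterpret approach (the byte offset 31 and the count of 6 floats come from the question; sourceBuffer is an assumed name for the file's ArrayBuffer):
const FLOAT_COUNT = 6;
const sourceBytes = new Uint8Array(sourceBuffer, 31, FLOAT_COUNT * 4); // unaligned byte view is fine

// Fresh, aligned buffer with two views over the same memory
const aligned = new ArrayBuffer(FLOAT_COUNT * 4);
const alignedBytes = new Uint8Array(aligned);
const floats = new Float32Array(aligned);

alignedBytes.set(sourceBytes);   // byte-for-byte copy into the aligned buffer
console.log(floats);             // the 6 floats, now readable from offset 0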
DataView.getFloat32() would be the best way to do this. DataView is designed for packed data and allows unaligned access to the data in an ArrayBuffer, so you can pass in odd offsets like 31.
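For example, a sketch reading the question's 6 floats starting at byte 31 (this assumes the file was written little-endian; drop the true flag for big-endian data; sourceBuffer is an assumed name for the file's ArrayBuffer):
const view = new DataView(sourceBuffer);
const values = [];
for (let i = 0; i < 6; i++) {
    values.push(view.getFloat32(31 + i * 4, true)); // true = little-endian
}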
You can use slice to get a new ArrayBuffer whose contents are a copy of the original ArrayBuffer's bytes from begin (inclusive) up to end (exclusive).
const buffer = new ArrayBuffer(250);
const list = buffer.slice(10);            // copies bytes 10..249 (240 bytes)
const nums = new Int32Array(list, 0, 60); // 240 bytes / 4 = 60 int32 values