It seems there is nothing to handle endianness when working with Uint8. For example, when dealing with Uint16, you can specify whether you want little or big endian:
dataview.setUint16(byteOffset, value [, littleEndian])
vs
dataview.setUint8(byteOffset, value)
I guess this is because endianness is dealing with the order of the bytes and if I'm inserting one byte at a time, then I need to order them myself.
So how do I go about handling endianness myself? I'm creating a WAVE file header using this spec: http://soundfile.sapp.org/doc/WaveFormat/
The first part of the header is "ChunkID" in big endian and this is how I do it:
dataView.setUint8(0, 'R'.charCodeAt());
dataView.setUint8(1, 'I'.charCodeAt());
dataView.setUint8(2, 'F'.charCodeAt());
dataView.setUint8(3, 'F'.charCodeAt());
The second part of the header is "ChunkSize" in little endian and this is how I do it:
dataView.setUint8(4, 172);
Now I suppose that since the endianness of those chunks is different, I should be doing something different for each chunk. What should I be doing differently in those two instances?
Cheers!
EDIT
I'm asking this question, because the wav file I'm creating is invalid (according to https://indiehd.com/auxiliary/flac-validator/). I suspect this is because I'm not handling the endianness correctly. This is the full wave file:
const fs = require('fs');
const BITS_PER_BYTE = 8;
const BITS_PER_SAMPLE = 8;
const SAMPLE_RATE = 44100;
const NB_CHANNELS = 2;
const SUB_CHUNK_2_SIZE = 128;
const chunkSize = 36 + SUB_CHUNK_2_SIZE;
const blockAlign = NB_CHANNELS * (BITS_PER_SAMPLE / BITS_PER_BYTE);
const byteRate = SAMPLE_RATE * blockAlign;
const arrayBuffer = new ArrayBuffer(chunkSize + 8)
const dataView = new DataView(arrayBuffer);
// The RIFF chunk descriptor
// ChunkID
dataView.setUint8(0, 'R'.charCodeAt());
dataView.setUint8(1, 'I'.charCodeAt());
dataView.setUint8(2, 'F'.charCodeAt());
dataView.setUint8(3, 'F'.charCodeAt());
// ChunkSize
dataView.setUint8(4, chunkSize);
// Format
dataView.setUint8(8, 'W'.charCodeAt());
dataView.setUint8(9, 'A'.charCodeAt());
dataView.setUint8(10, 'V'.charCodeAt());
dataView.setUint8(11, 'E'.charCodeAt());
// The fmt sub-chunk
// Subchunk1ID
dataView.setUint8(12, 'f'.charCodeAt());
dataView.setUint8(13, 'm'.charCodeAt());
dataView.setUint8(14, 't'.charCodeAt());
// Subchunk1Size
dataView.setUint8(16, 16);
// AudioFormat
dataView.setUint8(20, 1);
// NumChannels
dataView.setUint8(22, NB_CHANNELS);
// SampleRate
dataView.setUint8(24, ((SAMPLE_RATE >> 8) & 255));
dataView.setUint8(25, SAMPLE_RATE & 255);
// ByteRate
dataView.setUint8(28, ((byteRate >> 8) & 255));
dataView.setUint8(29, byteRate & 255);
// BlockAlign
dataView.setUint8(32, blockAlign);
// BitsPerSample
dataView.setUint8(34, BITS_PER_SAMPLE);
// The data sub-chunk
// Subchunk2ID
dataView.setUint8(36, 'd'.charCodeAt());
dataView.setUint8(37, 'a'.charCodeAt());
dataView.setUint8(38, 't'.charCodeAt());
dataView.setUint8(39, 'a'.charCodeAt());
// Subchunk2Size
dataView.setUint8(40, SUB_CHUNK_2_SIZE);
// Data
for (let i = 0; i < SUB_CHUNK_2_SIZE; i++) {
dataView.setUint8(i + 44, i);
}
A single byte (uint8) doesn't have any endianness; endianness is a property of a sequence of bytes.
According to the spec you linked, the ChunkSize takes space for 4 bytes - with the least significant byte first (little endian). If your value is only one byte (not larger than 255), you would just write the byte at offset 4 as you did. If the 4 bytes were in big endian order, you'd have to write your byte at offset 7.
I would, however, recommend simply using setUint32:
dataView.setUint32(0, 0x52494646, false); // "RIFF", big-endian
dataView.setUint32(4, 172, true);         // ChunkSize, little-endian
dataView.setUint32(8, 0x57415645, false); // "WAVE", big-endian
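To answer the "how do I order them myself" part: writing a 32-bit little-endian value with setUint8 just means putting the least significant byte at the lowest offset. A minimal sketch (buffer size and offsets are illustrative only, using the ChunkSize example value of 172), equivalent to setUint32(4, value, true):

```javascript
const buffer = new ArrayBuffer(8); // illustrative size, not a full WAVE header
const dv = new DataView(buffer);
const value = 172; // the ChunkSize example value

dv.setUint8(4, value & 0xFF);         // byte 0 (least significant) at lowest offset
dv.setUint8(5, (value >> 8) & 0xFF);  // byte 1
dv.setUint8(6, (value >> 16) & 0xFF); // byte 2
dv.setUint8(7, (value >> 24) & 0xFF); // byte 3 (most significant)

console.log(dv.getUint32(4, true)); // 172
```

For a big-endian field you would write the most significant byte first instead, which is why the "RIFF" characters can simply be written in reading order.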
I got this from GitHub.
function LittleEndianView(size) {
Object.defineProperty(this, 'native', {
value: new Uint8Array(size)
})
}
LittleEndianView.prototype.get = function(bits, offset) {
let available = (this.native.length * 8 - offset)
if (bits > available) {
throw new Error('Range error')
}
let value = 0
let i = 0
// why loop through like this?
while (i < bits) {
// remaining bits
const remaining = bits - i
const bitOffset = offset & 7
const currentByte = this.native[offset >> 3]
const read = Math.min(remaining, 8 - bitOffset)
const a = 0xFF << read
const mask = ~a
const b = currentByte >> bitOffset
const readBits = b & mask
const c = readBits << i
value = value | c
offset += read
i += read
}
return value >>> 0
}
LittleEndianView.prototype.set = function(value, bits, offset) {
const available = (this.native.length * 8 - offset)
if (bits > available) {
throw new Error('Range error')
}
let i = 0
while (i < bits) {
const remaining = bits - i
const bitOffset = offset & 7
const byteOffset = offset >> 3
const finished = Math.min(remaining, 8 - bitOffset)
const mask = ~(0xFF << finished)
const writeBits = value & mask
value >>= finished
const destMask = ~(mask << bitOffset)
const byte = this.native[byteOffset]
this.native[byteOffset] = (byte & destMask) | (writeBits << bitOffset)
offset += finished
i += finished
}
}
After studying it for a few hours, I realize that the offset & 7 keeps only the low 3 bits of the offset, i.e. the bit position within one byte. The offset >> 3 simply divides the offset by 8 to go from bits to bytes. The 0xFF << read gets a run of 1s on the left, like 11111000; negating it puts them on the right. I understand most of it, just barely, in pieces. But I don't see how they figured out how to implement this solution using these techniques (and I don't fully grasp the whole solution -- how or why it works -- after a couple of days at this).
So that leads me to the question, I need to apply this same read/write functionality to a Uint32Array in JavaScript (rather than a Uint8Array like the above code would use). I need to read and write arbitrary bits (not bytes) to this Uint32Array. What needs to change in the existing get and set algorithms to accommodate this?
Seems like the offset & 7 might become something different, but I can't tell how to get it to be 32 bit representation. And the offset >> 3 might become divide by 32 or something like that. But I'm not sure. That may be mostly it.
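Your guesses point in the right direction. As a sketch (my own adaptation, not from the original repo), the byte-oriented constants generalize to 32-bit words like this: `offset & 7` becomes `offset & 31` (bit position within a word), `offset >> 3` becomes `offset >> 5` (divide by 32 to get the word index), and `8 - bitOffset` becomes `32 - bitOffset`. One JS-specific trap: shifting a 32-bit value by 32 is a no-op (`x << 32 === x`), so the mask has to special-case a full-word read:

```javascript
// Sketch: the get algorithm adapted from Uint8Array to Uint32Array.
// Values up to 32 bits, like the original.
function getBits(words, bits, offset) {
  let value = 0;
  let i = 0;
  while (i < bits) {
    const remaining = bits - i;
    const bitOffset = offset & 31;    // bit position inside the current word
    const word = words[offset >>> 5]; // offset / 32 = word index
    const read = Math.min(remaining, 32 - bitOffset);
    // (x << 32) === x in JS, so avoid a full 32-bit shift when building the mask
    const mask = read === 32 ? 0xFFFFFFFF : (1 << read) - 1;
    const readBits = (word >>> bitOffset) & mask;
    value |= readBits << i;
    offset += read;
    i += read;
  }
  return value >>> 0; // force unsigned
}
```

The set side adapts symmetrically with the same three constant changes (and the same full-shift caveat for its masks).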
I am making a Three.js application that needs to download some depth data. The data consists of 512x256 depth entries, stored in a compressed binary format with two bytes of precision each. The data must be readable from the CPU, so I cannot store it in a texture. Floating-point textures are not supported in many browsers anyway, such as Safari on iOS.
I have this working in Unity, but I am not sure how to go about downloading compressed depth like this using JavaScript / three.js. I am new to JavaScript, but it seems to have limited support for binary data handling and compression.
I was thinking of switching to a text format, but then memory footprint and download size are a concern. The user could potentially have to load hundreds of these depth buffers.
Is there a better way to download a readable depth buffer?
You can download a file as binary data with fetch and async/await
async function doIt() {
const response = await fetch('https://webglfundamentals.org/webgl/resources/eye-icon.png');
const arrayBuffer = await response.arrayBuffer();
// the data is now in arrayBuffer
}
doIt();
After that you can make TypedArray views to view the data.
async function doIt() {
const response = await fetch('https://webglfundamentals.org/webgl/resources/eye-icon.png');
const arrayBuffer = await response.arrayBuffer();
console.log('num bytes:', arrayBuffer.byteLength);
// arrayBuffer is now the binary data. To access it make one or more views
const bytes = new Uint8Array(arrayBuffer);
console.log('first 4 bytes:', bytes[0], bytes[1], bytes[2], bytes[3]);
const decoder = new TextDecoder();
console.log('bytes 1-3 as unicode:', decoder.decode(bytes.slice(1, 4)));
}
doIt();
As for a format for depth data, that's really up to you. Assuming your format is just 16-bit values representing depths ranging from min to max:
uint32 width
uint32 height
float min
float max
uint16 data[width * height]
Then, after you've loaded the data, you can use multiple array views.
const uint32s = new Uint32Array(arrayBuffer);
const floats = new Float32Array(arrayBuffer, 8); // skip first 8 bytes
const uint16s = new Uint16Array(arrayBuffer, 16); // skip first 16 bytes
const width = uint32s[0];
const height = uint32s[1];
const min = floats[0];
const max = floats[1];
const range = max - min;
const depthData = new Float32Array(width * height);
for (let i = 0; i < uint16s.length; ++i) {
depthData[i] = uint16s[i] / 0xFFFF * range + min;
}
If you care about endianness, for some future world where there are browsers running on big-endian hardware, then you can either write your own functions to read bytes and generate those values, or you can use a DataView.
Assuming you know the data is in little endian format
const data = new DataView(arrayBuffer);
const width = data.getUint32(0, true);
const height = data.getUint32(4, true);
const min = data.getFloat32(8, true);
const max = data.getFloat32(12, true);
const range = max - min;
const depthData = new Float32Array(width * height);
for (let i = 0; i < width * height; ++i) {
depthData[i] = data.getUint16(i * 2 + 16, true) / 0xFFFF * range + min;
}
If you want more complex compression, like an inflate/deflate format, you'll need a library or to write your own.
I'm trying to get exif data from JPG using JavaScript in the browser.
I'm using FileReader() class and readAsArrayBuffer() method.
For most operations I need Uint8Array, so that's what I'm casting ArrayBuffer to.
I've added a function to Uint8Array objects for when I need a short:
const getShort = function(position, bigEndian = true) {
const int1 = this[position];
const int2 = this[position+1];
let result = (int1 << 8) | (int2 & 0xFF);
if(!bigEndian) {
let buffer = new ArrayBuffer(16);
let view = new DataView(buffer);
view.setInt16(1,result);
result = view.getInt16(1, true) ;
}
return(result);
}
The problem is when parsing 0110 1001 and 1000 0111 I'm getting 1000 0111 0110 1001 and that is interpreted as -30871 instead of 34665.
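A note on what's happening here: the byte order is fine; the difference between -30871 and 34665 is signed vs. unsigned interpretation of the same 16 bits (0x8769). A minimal sketch of the distinction with DataView, using the two bytes from the question:

```javascript
// 0x87 (1000 0111) and 0x69 (0110 1001) read big-endian form 0x8769
const buf = new ArrayBuffer(2);
const view = new DataView(buf);
view.setUint8(0, 0x87);
view.setUint8(1, 0x69);

console.log(view.getInt16(0));  // -30871 (signed 16-bit interpretation)
console.log(view.getUint16(0)); // 34665  (unsigned - the 0x8769 EXIF pointer tag)
```

So reading the value with getUint16 (or masking the result with `& 0xFFFF` and forcing unsigned with `>>> 0`) yields the expected 34665.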
I have a BGRA array and need to draw it to a canvas.
Currently I am doing it like this:
var aVal = returnedFromChromeWorker;
var can = doc.createElementNS(NS_HTML, 'canvas');
can.width = aVal.width;
can.height = aVal.height;
var ctx = can.getContext('2d');
ctx.putImageData(aVal, 0, 0);
doc.documentElement.appendChild(can);
Is there some way to get a BGRA array onto the canvas? I was exploring: https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Interface/imgIEncoder
I can't re-order the array because my goal is to take screenshots and for large screens even just 1280x1024, it takes 2.3s to go through and re-order it all.
I tried re-ordering on the ctypes side, but it's giving me quirky issues: the alpha comes out as 0, making the whole image invisible >_< (related questions: "BITMAPV5HEADER getting RGBA, keep A at 255" and "How to put BGRA array into canvas without re-ordering").
There is none.
Reorganizing the byte order is necessary, as canvas can only hold data in RGBA format (which, read as a little-endian uint32, is ABGR). Here is one way to do it:
You could add an extra step for your worker to deal with the reordering. Create a DataView for the raw byte buffer (ArrayBuffer), then iterate each Uint32 value.
Below, a Uint32 is read as little-endian. This is because, in this case, that format is easier to swap around: we only need to right-shift BGR and put A back in front. If your original buffer is in big-endian, you will of course need to read it as big-endian (getUint32(pos, false)) and set it back as little-endian:
Example
var uint32 = new Uint32Array(1);        // create some dummy data
var view = new DataView(uint32.buffer); // create DataView for byte-buffer
var pos = 0;                            // byte-position (we'll skip 4 bytes each time)
// dummy data in BGRA format
uint32[0] = 0x7722ddff; // magenta-ish in BGRA format
document.write("BGRA: 0x" + (uint32[0]).toString(16) + "<br>");
// --- Iterate buffer, for each: ---
var v = view.getUint32(pos, true); // BGRA -> RGBA, read as little-endian
var n = (v >>> 8) | (v << 24); // rotate - move A from last to first position
view.setUint32(pos, n, true); // set back
pos += 4; // do this when inside the loop
// result
document.write("ABGR: 0x" + (uint32[0]>>>0).toString(16));
Update If the byte-order (endian-wise) is the same in both end you can skip the DataView and use Uint32Array directly which will speed things up a tad as well:
var uint32 = new Uint32Array(1), pos = 0; // create some dummy data
// inside loop:
var v = uint32[pos];
uint32[pos++] = (v >>> 8) | (v << 24); // pos index is now per uint32
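Combining the loop fragments above, a sketch of a helper (my naming, same assumptions: matching endianness in both ends) that rotates a whole BGRA buffer in place, one uint32 per pixel:

```javascript
// Rotate every pixel so A moves from the last byte to the first,
// converting BGRA pixel data to canvas order in place.
function bgraToCanvasOrder(buffer) {
  const uint32 = new Uint32Array(buffer);
  for (let i = 0; i < uint32.length; i++) {
    const v = uint32[i];
    uint32[i] = (v >>> 8) | (v << 24); // move A from last to first position
  }
  return buffer;
}

const buf = new Uint32Array([0x7722ddff]).buffer; // the dummy pixel from above
bgraToCanvasOrder(buf);
console.log((new Uint32Array(buf)[0]).toString(16)); // "ff7722dd"
```

For a 1280x1024 screenshot that is ~1.3 million iterations of shift-and-or, which should be far cheaper than reordering individual bytes.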
I need to loop over a binary file via an ArrayBuffer and retrieve sets of 1024 floating-point values. I'm doing this:
// chunk_size = 1024
// chunk_len = 48
// response_buffer = ArrayBuffer
// response_buffer.byteLength = 49152
for (i = chunk_len; i > 0; i -= 1) {
switch (i) {
case (chunk_len):
// create view from last chunk of 1024 out of 49152 total
float_buffer = new Float32Array(response_buffer, ((i - 1) * chunk_size));
// add_data(net_len, float_buffer);
break;
case 0:
break;
default:
float_buffer = new Float32Array(response_buffer, ((i - 1) * chunk_size)), chunk_size);
//add_data(net_len, float_buffer);
break;
}
}
My problem is if I call this on the first run for the end of my buffer:
// i - 1 = 47 * chunk_size
new Float32Array(response_buffer, ((i - 1) * chunk_size));
the same statement fails on the next run where I call:
new Float32Array(response_buffer, ((i - 1) * chunk_size), 1024);
Although I can read here, that
I can do this:
Float32Array Float32Array(
ArrayBuffer buffer,
optional unsigned long byteOffset,
optional unsigned long length
);
Question:
Why is my loop failing after declaring the first Float32Array view on my response_offer ArrayBuffer?
I think you have an extra ")" in the first line of your "default" case.
float_buffer = new Float32Array(response_buffer, ((i - 1) * chunk_size)), chunk_size);
Should be:
float_buffer = new Float32Array(response_buffer, ((i - 1) * chunk_size), chunk_size);
So. Finally I understand... maybe this helps the next person wondering:
First off - I was all wrong in trying to read my data, which is in 4-byte single (float) format.
If I have an arrayBuffer with byteLength = 49152, this means there are that many bytes in my array.
Since my data is 4-byte single format, I found out with some SO help and searching that it is readable with getFloat32 AND that 4 bytes comprise one "real" value.
My data contains 3 measurements of 4000 data points each, stored in units of 1024, column by column.
So if I have 12000 data points and 49152 / 4 = 12288 data point slots, I will have 288 empty data points at the end of my data structure.
So my binary data should be stored like this:
0     - 1023   a
1024  - 2047   a
2048  - 3071   a
3072  - 4095   [a ends at 3999, b starts at 4000]
4096  - 5119   b
5120  - 6143   b
6144  - 7167   b
7168  - 8191   [b ends at 7999, c starts at 8000]
8192  - 9215   c
9216  - 10239  c
10240 - 11263  c
11264 - 12287  [c ends at 11999, zero padding from 12000]
My final chunk will contain 288 empty results, because 3 x 4000 data points don't fill 12 x 1024 slots exactly.
To read, I found a nice snippet here (high dynamic range rendering), which helped me to this:
// ...
raw_data = ...
data = new DataView(raw_data);
...
tmp_data = new Float32Array(byte_len / Float32Array.BYTES_PER_ELEMENT);
len = tmp_data.length;
// Incoming data is raw floating point values with little-endian byte ordering.
for (i = 0; i < len; i += 1) {
tmp_data[i] = data.getFloat32(i * Float32Array.BYTES_PER_ELEMENT, true);
}
Now I have a single array with which I can work and build my processing structure.