Best way to download depth data in JavaScript?

I am making a Three.js application that needs to download some depth data. The data consists of 512x256 depth entries, stored in a compressed binary format with a precision of two bytes each. The data must be readable from the CPU, so I cannot store it in a texture. Floating point textures are not supported in many browsers anyway, such as Safari on iOS.
I have this working in Unity, but I am not sure how to go about downloading compressed depth data like this using JavaScript / three.js. I am new to JavaScript, but it seems to have limited support for binary data handling and compression.
I was thinking of switching to a text format, but then memory footprint and download size are a concern. The user could potentially have to load hundreds of these depth buffers.
Is there a better way to download a readable depth buffer?

You can download a file as binary data with fetch and async/await:
async function doIt() {
  const response = await fetch('https://webglfundamentals.org/webgl/resources/eye-icon.png');
  const arrayBuffer = await response.arrayBuffer();
  // the data is now in arrayBuffer
}
doIt();
After that you can make TypedArray views to view the data.
async function doIt() {
  const response = await fetch('https://webglfundamentals.org/webgl/resources/eye-icon.png');
  const arrayBuffer = await response.arrayBuffer();
  console.log('num bytes:', arrayBuffer.byteLength);
  // arrayBuffer is now the binary data. To access it make one or more views
  const bytes = new Uint8Array(arrayBuffer);
  console.log('first 4 bytes:', bytes[0], bytes[1], bytes[2], bytes[3]);
  const decoder = new TextDecoder();
  console.log('bytes 1-3 as unicode:', decoder.decode(bytes.slice(1, 4)));
}
doIt();
As for a format for the depth data, that's really up to you. Assuming your format is just 16-bit values representing depths ranging from min to max:
uint32 width
uint32 height
float min
float max
uint16 data[width * height]
Then after you've loaded the data you can use multiple array views:
const uint32s = new Uint32Array(arrayBuffer);
const floats = new Float32Array(arrayBuffer, 8); // skip first 8 bytes
const uint16s = new Uint16Array(arrayBuffer, 16); // skip first 16 bytes
const width = uint32s[0];
const height = uint32s[1];
const min = floats[0];
const max = floats[1];
const range = max - min;
const depthData = new Float32Array(width * height);
for (let i = 0; i < uint16s.length; ++i) {
  depthData[i] = uint16s[i] / 0xFFFF * range + min;
}
If you care about endianness, for some future world where there are browsers running on big-endian hardware, then you can either write your own functions to read the bytes and assemble the values, or use a DataView.
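If you go the roll-your-own route, a minimal sketch for reading little-endian 32-bit values from a byte view might look like this (readUint32LE is just an illustrative helper, not part of any library):
function readUint32LE(bytes, offset) {
  // assemble 4 bytes, least significant byte first
  return bytes[offset] +
         bytes[offset + 1] * 0x100 +
         bytes[offset + 2] * 0x10000 +
         bytes[offset + 3] * 0x1000000;
}
const bytes = new Uint8Array(arrayBuffer);
const width = readUint32LE(bytes, 0);
const height = readUint32LE(bytes, 4);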
Assuming you know the data is in little endian format
const data = new DataView(arrayBuffer);
const width = data.getUint32(0, true);
const height = data.getUint32(4, true);
const min = data.getFloat32(8, true);
const max = data.getFloat32(12, true);
const range = max - min;
const depthData = new Float32Array(width * height);
for (let i = 0; i < depthData.length; ++i) {
  depthData[i] = data.getUint16(i * 2 + 16, true) / 0xFFFF * range + min;
}
If you want more complex compression, like an inflate/deflate format, you'll need a library or to write your own.
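One option worth checking before reaching for a library: if you can serve the files gzip-compressed, recent browsers can inflate them natively with the DecompressionStream API (a minimal sketch; it assumes the response body is raw gzip data rather than something the browser already decompressed via Content-Encoding, and the function name is just for illustration):
async function loadGzippedDepth(url) {
  const response = await fetch(url);
  // pipe the body through the built-in gzip decompressor
  const stream = response.body.pipeThrough(new DecompressionStream('gzip'));
  return await new Response(stream).arrayBuffer();
}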

Related

Pica: cannot use getImageData on canvas, make sure fingerprinting protection isn't enabled

I am getting this error while using the npm pica library. I want to resize an image using this library and have tried two ways:
I. by passing the image url directly to pica.resize(imageurl, canvas), and
II. by passing an image buffer to pica.resize(imageBuffer, canvas). But it shows the error Pica: cannot use getImageData on canvas, make sure fingerprinting protection isn't enabled. when I run the code.
// Convert Base64 to BufferArray
const buf = Buffer.from(img, 'base64');
let binary = '';
const chunk = 8 * 1024;
let i;
for (i = 0; i < buf.length / chunk; i += 1) {
  binary += String.fromCharCode.apply(null, [
    ...buf.slice(i * chunk, (i + 1) * chunk),
  ]);
}
binary += String.fromCharCode.apply(null, [...buf.slice(i * chunk)]);
// Dimensions for the image
const width = 1200;
const height = 627;
// Instantiate the canvas object
const can = canvas.createCanvas(width,height);
// const context = canvas.getContext("2d");
console.log(binary)
pica.resize(binary, can)
.then(result => console.log(result));

Write raw binary data to Buffer

I am working on a function that iterates over PCM data. I am getting chunks of data of varying size and I am currently handling this by buffer concatenation. The problem is, I am quite sure that this approach is a performance killer.
One of the simplest algorithms consists of chunking 500 chunks of 4800 bytes (= grain) each and repeating them 3 times, like so:
buf = <grain1, grain1, grain1, ..., grain500, grain500, grain500>
function(){
  // ...
  let buf = Buffer.alloc(0) // returned buffer, mutated
  // nGrains is defined somewhere else in the function
  // example: nGrains = 500
  for (let i = 0; i < nGrains; i++) {
    // a chunk of PCM DATA
    // example: grain.byteLength = 4800
    const grain = Buffer.from(this._getGrain())
    // example: nRepeats = 3
    for (let j = 0; j < nRepeats; j++)
      buf = Buffer.concat([buf, grain])
  }
  return buf
}
I feel like these performance-heavy operations (1500 mutating concatenations) could be avoided if there were some way to directly write "raw data" at a given offset into a pre-allocated buffer. I wrote the following helper function, which gave me HUGE performance improvements, but I feel like I am doing something wrong...
const writeRaw = (buf, rawBytes, offset) => {
  for (let i = 0; i < rawBytes.byteLength; i++) {
    buf.writeUInt8(rawBytes.readUInt8(i), offset + i)
  }
  return buf
}
My function now looks like this:
function(){
  // ...
  const buf = Buffer.alloc(len) // returned buffer, immutable
  for (let i = 0; i < nGrains; i++) {
    const grain = Buffer.from(this._getGrain())
    for (let j = 0; j < nRepeats; j++)
      writeRaw(buf, grain, (i * nRepeats + j) * grainSize)
  }
  return buf
}
My question is: Is there a cleaner (or more standard) way to do this instead of iterating over bytes? Buffer.write only seems to work for strings, although this would be ideal...
There is Buffer.copy.
const buf = Buffer.alloc(len);
for (let i = 0; i < nGrains; i++) {
  const grain = Buffer.from(this._getGrain());
  for (let j = 0; j < nRepeats; j++)
    grain.copy(/*to*/ buf, /*at*/ (i * nRepeats + j) * grainSize);
}
You could also use Buffer.fill:
const buf = Buffer.alloc(len);
for (let i = 0; i < nGrains; i++) {
  const grain = Buffer.from(this._getGrain());
  buf.fill(grain, i * nRepeats * grainSize, (i + 1) * nRepeats * grainSize);
}

How to handle the endianness with UInt8 in a DataView?

It seems like there is nothing to handle endianness when working with UInt8. For example, when dealing with UInt16, you can specify whether you want little or big endian:
dataview.setUint16(byteOffset, value [, littleEndian])
vs
dataview.setUint8(byteOffset, value)
I guess this is because endianness deals with the order of the bytes, and if I'm inserting one byte at a time, then I need to order them myself.
So how do I go about handling endianness myself? I'm creating a WAVE file header using this spec: http://soundfile.sapp.org/doc/WaveFormat/
The first part of the header is "ChunkID" in big endian and this is how I do it:
dataView.setUint8(0, 'R'.charCodeAt());
dataView.setUint8(1, 'I'.charCodeAt());
dataView.setUint8(2, 'F'.charCodeAt());
dataView.setUint8(3, 'F'.charCodeAt());
The second part of the header is "ChunkSize" in little endian and this is how I do it:
dataView.setUint8(4, 172);
Now I suppose that since the endianness of those chunks is different, I should be doing something different for each chunk. What should I be doing differently in those two instances?
Cheers!
EDIT
I'm asking this question, because the wav file I'm creating is invalid (according to https://indiehd.com/auxiliary/flac-validator/). I suspect this is because I'm not handling the endianness correctly. This is the full wave file:
const fs = require('fs');
const BITS_PER_BYTE = 8;
const BITS_PER_SAMPLE = 8;
const SAMPLE_RATE = 44100;
const NB_CHANNELS = 2;
const SUB_CHUNK_2_SIZE = 128;
const chunkSize = 36 + SUB_CHUNK_2_SIZE;
const blockAlign = NB_CHANNELS * (BITS_PER_SAMPLE / BITS_PER_BYTE);
const byteRate = SAMPLE_RATE * blockAlign;
const arrayBuffer = new ArrayBuffer(chunkSize + 8)
const dataView = new DataView(arrayBuffer);
// The RIFF chunk descriptor
// ChunkID
dataView.setUint8(0, 'R'.charCodeAt());
dataView.setUint8(1, 'I'.charCodeAt());
dataView.setUint8(2, 'F'.charCodeAt());
dataView.setUint8(3, 'F'.charCodeAt());
// ChunkSize
dataView.setUint8(4, chunkSize);
// Format
dataView.setUint8(8, 'W'.charCodeAt());
dataView.setUint8(9, 'A'.charCodeAt());
dataView.setUint8(10, 'V'.charCodeAt());
dataView.setUint8(11, 'E'.charCodeAt());
// The fmt sub-chunk
// Subchunk1ID
dataView.setUint8(12, 'f'.charCodeAt());
dataView.setUint8(13, 'm'.charCodeAt());
dataView.setUint8(14, 't'.charCodeAt());
// Subchunk1Size
dataView.setUint8(16, 16);
// AudioFormat
dataView.setUint8(20, 1);
// NumChannels
dataView.setUint8(22, NB_CHANNELS);
// SampleRate
dataView.setUint8(24, ((SAMPLE_RATE >> 8) & 255));
dataView.setUint8(25, SAMPLE_RATE & 255);
// ByteRate
dataView.setUint8(28, ((byteRate >> 8) & 255));
dataView.setUint8(29, byteRate & 255);
// BlockAlign
dataView.setUint8(32, blockAlign);
// BitsPerSample
dataView.setUint8(34, BITS_PER_SAMPLE);
// The data sub-chunk
// Subchunk2ID
dataView.setUint8(36, 'd'.charCodeAt());
dataView.setUint8(37, 'a'.charCodeAt());
dataView.setUint8(38, 't'.charCodeAt());
dataView.setUint8(39, 'a'.charCodeAt());
// Subchunk2Size
dataView.setUint8(40, SUB_CHUNK_2_SIZE);
// Data
for (let i = 0; i < SUB_CHUNK_2_SIZE; i++) {
  dataView.setUint8(i + 44, i);
}
A single byte (uint8) doesn't have any endianness; endianness is a property of a sequence of bytes.
According to the spec you linked, the ChunkSize field takes up 4 bytes, with the least significant byte first (little endian). If your value fits in one byte (not larger than 255), you can just write that byte at offset 4 as you did; the remaining three bytes are already zero. If the 4 bytes were in big endian order, you'd have to write your byte at offset 7.
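For example, if you did want to write the 4-byte little-endian ChunkSize one uint8 at a time, a sketch using your chunkSize value would be:
dataView.setUint8(4, chunkSize & 0xFF);         // least significant byte first
dataView.setUint8(5, (chunkSize >> 8) & 0xFF);
dataView.setUint8(6, (chunkSize >> 16) & 0xFF);
dataView.setUint8(7, (chunkSize >> 24) & 0xFF);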
I would, however, recommend simply using setUint32:
dataView.setUint32(0, 0x52494646, false); // RIFF
dataView.setUint32(4, 172, true);
dataView.setUint32(8, 0x57415645, false); // WAVE
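The same applies to the other numeric header fields, which are all little endian according to the spec you linked; a sketch using the constants from your snippet:
dataView.setUint32(16, 16, true);               // Subchunk1Size
dataView.setUint16(20, 1, true);                // AudioFormat (PCM)
dataView.setUint16(22, NB_CHANNELS, true);      // NumChannels
dataView.setUint32(24, SAMPLE_RATE, true);      // SampleRate
dataView.setUint32(28, byteRate, true);         // ByteRate
dataView.setUint16(32, blockAlign, true);       // BlockAlign
dataView.setUint16(34, BITS_PER_SAMPLE, true);  // BitsPerSample
dataView.setUint32(40, SUB_CHUNK_2_SIZE, true); // Subchunk2Size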

How to save binary buffer to png file in nodejs?

I have a binary nodejs Buffer object that contains bitmap information. How do I make an image from the buffer and save it to a file?
Edit:
I tried using the file system package as #herchu said but if I do this:
let robot = require("robotjs")
let fs = require('fs')
let size = 200
let img = robot.screen.capture(0, 0, size, size)
let path = 'myfile.png'
let buffer = img.image
fs.open(path, 'w', function (err, fd) {
  if (err) {
    // Something wrong creating the file
  }
  fs.write(fd, buffer, 0, buffer.length, null, function (err) {
    // Something wrong writing contents!
  })
})
I get
Although solutions by #herchu and #Jake work, they are extremely slow (10-15s in my experience).
Jimp supports converting a raw pixel buffer into PNG out of the box and works a lot faster (sub-second).
const img = robot.screen.capture(0, 0, width, height).image;
new Jimp({ data: img, width, height }, (err, image) => {
  image.write(fileName);
});
Note: I am editing my answer according to your last edits
If you are using Robotjs, note that its Bitmap object contains a Buffer of raw pixel data -- not a PNG or any other file format, just pixels next to each other (exactly 200 x 200 elements in your case).
I have not found any function to write contents in another format in the Robotjs library (not that I know it well, either), so in this answer I am using a different library, Jimp, for the image manipulation.
let robot = require("robotjs")
let fs = require('fs')
let Jimp = require('jimp')
let size = 200
let rimg = robot.screen.capture(0, 0, size, size)
let path = 'myfile.png'
// Create a new blank image, same size as Robotjs' one
let jimg = new Jimp(size, size);
for (var x = 0; x < size; x++) {
  for (var y = 0; y < size; y++) {
    // hex is a string, rrggbb format
    var hex = rimg.colorAt(x, y);
    // Jimp expects an Int, with RGBA data,
    // so add FF as 'full opaque' to RGB color
    var num = parseInt(hex + "ff", 16)
    // Set pixel manually
    jimg.setPixelColor(num, x, y);
  }
}
jimg.write(path)
Note that the conversion is done by manually iterating through all pixels; this is slow in JS. There are also some details in how each library handles its pixel format, so some manipulation was needed in the loop -- it should be clear from the embedded comments.
Adding this as an addendum to the accepted answer from #herchu: this code sample processes/converts the raw bytes much more quickly (< 1s for me for a full screen). Hope this is helpful to someone.
var jimg = new Jimp(width, height);
for (var x = 0; x < width; x++) {
  for (var y = 0; y < height; y++) {
    var index = (y * rimg.byteWidth) + (x * rimg.bytesPerPixel);
    // robotjs buffers are BGRA, so these reads are actually blue, green, red;
    // the multiplier order below puts them back into Jimp's 0xRRGGBBAA layout
    var r = rimg.image[index];
    var g = rimg.image[index + 1];
    var b = rimg.image[index + 2];
    var num = (r * 256) + (g * 256 * 256) + (b * 256 * 256 * 256) + 255;
    jimg.setPixelColor(num, x, y);
  }
}
Four times faster!
About 280 ms and 550 KB for a full 1920x1080 screen, if you use this script.
I found this pattern when I compared the two byte streams byte by byte.
const robotjs = require('robotjs');
const Jimp = require('jimp');
const app = require('express').Router();
app.get('/screenCapture', (req, res) => {
  let image = robotjs.screen.capture();
  // swap the B and R byte of every pixel (robotjs returns BGRA)
  for (let i = 0; i < image.image.length; i++) {
    if (i % 4 == 0) {
      [image.image[i], image.image[i + 2]] = [image.image[i + 2], image.image[i]];
    }
  }
  var jimg = new Jimp(image.width, image.height);
  jimg.bitmap.data = image.image;
  jimg.getBuffer(Jimp.MIME_PNG, (err, result) => {
    res.set('Content-Type', Jimp.MIME_PNG);
    res.send(result);
  });
});
If you add this code before jimg.getBuffer, you'll get about 210 ms and 320 KB for a full screen:
jimg.rgba(true);
jimg.filterType(1);
jimg.deflateLevel(5);
jimg.deflateStrategy(1);
I suggest you take a look at sharp, as it has superior performance metrics compared to jimp.
The issue with robotjs screen capturing, which otherwise happens to be very efficient, is that it uses a BGRA color model, not RGBA, so you need to do an additional channel swap.
Also, since we are taking a screenshot of the desktop, I can't imagine a case where we would need transparency, so I suggest ignoring it.
const [left, top, width, height] = [0, 0, 100, 100]
const channels = 3
const {image, width: cWidth, height: cHeight, bytesPerPixel, byteWidth} = robot.screen.capture(left, top, width, height)
const uint8array = new Uint8Array(cWidth*cHeight*channels);
for (let h = 0; h < cHeight; h += 1) {
  for (let w = 0; w < cWidth; w += 1) {
    let offset = (h * cWidth + w) * channels
    let offset2 = byteWidth * h + w * bytesPerPixel
    uint8array[offset] = image.readUInt8(offset2 + 2)
    uint8array[offset + 1] = image.readUInt8(offset2 + 1)
    uint8array[offset + 2] = image.readUInt8(offset2 + 0)
  }
}
await sharp(Buffer.from(uint8array), {
  raw: {
    width: cWidth,
    height: cHeight,
    channels,
  }
}).toFile('capture.png')
I use an intermediate array here, but you could actually just do the swap in place on the result of the screen capture, as in the sketch below.
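A minimal sketch of that, assuming there is no row padding (byteWidth equals width * bytesPerPixel), so the capture buffer can be handed to sharp directly as 4-channel raw data:
const cap = robot.screen.capture(left, top, width, height)
// swap the B and R byte of every BGRA pixel in place -> RGBA
for (let i = 0; i < cap.image.length; i += cap.bytesPerPixel) {
  const b = cap.image[i]
  cap.image[i] = cap.image[i + 2]
  cap.image[i + 2] = b
}
await sharp(cap.image, {
  raw: { width: cap.width, height: cap.height, channels: 4 },
}).toFile('capture.png')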

Using Atomics and Float32Array in JavaScript

The Atomics.store/load methods (and others? didn't look) do not support Float32Array.
I read that this is to be consistent with the fact that it also doesn't support Float64Array for compatibility reasons (some computers don't support it).
Aside from the fact that I think this is stupid, does this also mean I must cast every float I want to use into an unsigned int?
Not only will this result in ugly code, it will also make it slower.
E.g.:
let a = new Float32Array(1); // Want the result here
Atomics.store(a, 0, 0.5); // Oops, can't use Float32Array
let b = new Float32Array(1); // Want the result here
let uint = new Uint32Array(1);
let float = new Float32Array(uint.buffer);
float[0] = 0.5;
Atomics.store(b, 0, uint[0]);
As you discovered, the Atomics methods don't support floating point values as arguments:
Atomics.store(typedArray, index, value)
typedArray
A shared integer typed array. One of Int8Array, Uint8Array, Int16Array, Uint16Array, Int32Array, or Uint32Array.
You can read the IEEE 754 representation as an integer from the underlying buffer, as you do in the example code you posted:
var buffer = new ArrayBuffer(4); // common buffer
var float32 = new Float32Array(buffer); // floating point
var uint32 = new Uint32Array(buffer); // IEEE754 representation
float32[0] = 0.5;
console.log("0x" + uint32[0].toString(16));
uint32[0] = 0x3f000000; /// IEEE754 32-bit representation of 0.5
console.log(float32[0]);
or you can use fixed-point numbers if the accuracy isn't critical. The accuracy is of course determined by the scale factor.
Scale up when storing:
Atomics.store(a, 0, Math.round(0.5 * 100)); // 0.5 -> 50 (max two decimals with 100)
read back and scale down:
value = Atomics.load(a, 0) * 0.01; // 50 -> 0.5
The other answer didn't help me much and it took a while for me to figure out a solution, but here's how I solved the same issue:
var data = new SharedArrayBuffer(LEN * 8);
var data_float = new Float32Array(data);
var data_int = new Uint32Array(data);
data_float[0] = 2.3; //some pre-existing data
var tmp = new ArrayBuffer(8);
var tmp_float = new Float32Array(tmp);
var tmp_int = new Uint32Array(tmp);
tmp_int[0] = Atomics.load(data_int, 0);
tmp_float[0] += 1.1; //some math
Atomics.store(data_int, 0, tmp_int[0]);
console.log(data_float[0]);
