I am trying to get pixel data from PNG images for processing. The current approach is to use canvas.drawImage followed by canvas.getImageData (example here). I am looking for alternatives.
The problem with the current approach is that browsers may alter the pixel values when alpha is involved (premultiplication and rounding), as discussed here and here.
This question has been asked before, but no satisfactory answers are available.
The only way to do this without using canvas and getImageData() is to load the PNG file as a binary typed array and parse the file in code "manually".
Prerequisites:
For this you need the PNG specification which you can find here.
You need to know how to use typed arrays (for this a DataView is the most suitable view).
PNG files are chunk-based, and you will need to know how to parse chunks.
A typical chunk-based file has a four-byte header, often called a FourCC identifier, followed by a size field and miscellaneous data depending on the file format definition.
Chunks are placed right after this, each usually containing a FourCC (four-character code) and the size of the chunk excluding the chunk header. In principle:
MAGIC FOURCC
SIZE/MISC - depending on definition
...
CHK1 - Chunk FourCC
SIZE - unsigned long
.... data
CHK2
SIZE
.... data
This format principle originated on the Commodore Amiga platform with EA IFF (Interchange File Format) back in the mid-'80s.
But in modern formats some vendors have extended or varied the chunk layout, so for PNG a chunk actually looks like this:
Header (always 8 bytes with the same byte values):
‰PNG (first byte is 0x89, see the specs for the reason)
CR + LF (0x0D 0x0A)
DOS EOF char + LF (0x1A 0x0A)
Chunks:
SIZE (4 bytes; may be 0, e.g. for IEND. Excludes the chunk header and CRC)
FOURCC (4 bytes, ie. "IHDR", "IDAT")
[...data] (length: SIZE x bytes)
CRC32 (4 bytes representing the CRC-32 checksum of the data)
(see the referenced specification link above for details).
The byte order (endianness) for PNG is always big-endian ("network" order).
This makes it easy to parse through the file supporting only some (or all) chunks. For PNG you would need to support at least (source):
IHDR must be the first chunk; it contains (in this order) the image's width, height, bit depth and color type.
IDAT contains the image, which may be split between multiple IDAT chunks. Such splitting increases the file size slightly, but makes it easier to stream the PNG. The IDAT chunk contains the actual image data, which is the output stream of the compression algorithm.
IEND marks the file end.
If you intend to support palette (color indexed) files you would also need to support the PLTE chunk. When you parse the IHDR chunk you will be able to see what color format is used (type 2 for RGB data, or 6 for RGBA and so on).
Parsing itself is easy, so your biggest challenge would be supporting things like ICC profiles (when present in the iCCP chunk) to adjust the image color data. A typical chunk is the gamma chunk (gAMA), which contains a single gamma value you can apply to convert the data to a linear format so that it displays correctly when display gamma is applied (there are also other special chunks related to color).
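For example, a rough sketch of reading the gAMA chunk (my own illustration, not a full colour-management solution; it assumes view is a DataView over the file buffer, offset points at the data of a gAMA chunk found by the parser further down, and sample is a hypothetical 8-bit sample value):
// gAMA stores gamma * 100000 as a 4-byte big-endian unsigned integer
var fileGamma = view.getUint32(offset) / 100000;     // e.g. 45455 -> 0.45455
// approximate linearisation of one 8-bit sample using that value:
var linear = Math.pow(sample / 255, 1 / fileGamma);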
The second-biggest challenge is the decompression, which uses INFLATE. You can use a project such as the pako zlib port to do this job for you; it has performance close to native zlib. In addition, if you want to do error checking on the data (recommended), CRC-32 checking should also be supported.
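A minimal sketch of the decompression step, assuming pako is loaded (e.g. via a script tag) and that chunks/buffer come from a parser like the one further down; note that all IDAT data has to be concatenated into one buffer before inflating:
var idatChunks = chunks.filter(function(c) {return c.fourCC === "IDAT"});
var totalSize = idatChunks.reduce(function(sum, c) {return sum + c.size}, 0);
var compressed = new Uint8Array(totalSize), p = 0;
idatChunks.forEach(function(c) {
  compressed.set(new Uint8Array(buffer, c.offset, c.size), p);
  p += c.size;
});
var raw = pako.inflate(compressed); // filtered scanlines; de-filtering still needed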
For security reasons you should always check that fields contain the data they're supposed to, and that reserved space is initialized with either 0 or the defined data.
Hope this helps!
Example chunk parser: (note: won't run in IE).
function pngParser(buffer) {
  var view = new DataView(buffer),
      len = buffer.byteLength,
      magic1, magic2,
      chunks = [],
      size, fourCC, crc, offset,
      pos = 0; // current offset in buffer ("file")

  // check header
  magic1 = view.getUint32(pos); pos += 4;
  magic2 = view.getUint32(pos); pos += 4;

  if (magic1 === 0x89504E47 && magic2 === 0x0D0A1A0A) {

    // parse chunks
    while (pos < len) {

      // chunk header
      size = view.getUint32(pos);
      fourCC = getFourCC(view.getUint32(pos + 4));

      // data offset
      offset = pos + 8;
      pos = offset + size;

      // crc
      crc = view.getUint32(pos);
      pos += 4;

      // store chunk
      chunks.push({
        fourCC: fourCC,
        size: size,
        offset: offset,
        crc: crc
      });
    }

    return {chunks: chunks};
  }
  else {
    return {error: "Not a PNG file."};
  }

  function getFourCC(int) {
    var c = String.fromCharCode;
    return c(int >>> 24) + c(int >>> 16 & 0xff) + c(int >>> 8 & 0xff) + c(int & 0xff);
  }
}
// USAGE: ------------------------------------------------
fetch("//i.imgur.com/GP6Q3v8.png")
  .then(function(resp) {return resp.arrayBuffer()})
  .then(function(buffer) {

    var info = pngParser(buffer);

    // parse each chunk here...
    for (var i = 0, chunks = info.chunks, chunk; chunk = chunks[i++];) {
      out("CHUNK : " + chunk.fourCC);
      out("SIZE  : " + chunk.size + " bytes");
      out("OFFSET: " + chunk.offset + " bytes");
      out("CRC   : 0x" + (chunk.crc >>> 0).toString(16).toUpperCase());
      out("-------------------------------");
    }

    function out(txt) {document.getElementById("out").innerHTML += txt + "<br>"}
  });
body {font: 14px monospace}
<pre id="out"></pre>
From here you can extract the IHDR to find the image size and color type, then the IDAT chunk(s) to inflate (PNG uses per-scanline filters which complicate things a bit, as well as an interlace mode, see the specs) and you're almost done ;)
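As a hedged starting point, reading the IHDR fields could look roughly like this (reusing info and buffer from the usage snippet above; IHDR is always the first chunk):
var ihdr = info.chunks[0];
var v = new DataView(buffer, ihdr.offset, ihdr.size);
var width     = v.getUint32(0);
var height    = v.getUint32(4);
var bitDepth  = v.getUint8(8);
var colorType = v.getUint8(9);   // 2 = RGB, 3 = palette, 6 = RGBA
var interlace = v.getUint8(12);  // 1 = Adam7 interlacing
console.log(width, height, bitDepth, colorType, interlace);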
Related
I am writing a small utility library in JavaScript (Node.js) to request the server status of a given Minecraft host. I am using the Server List Ping Protocol as outlined here (https://wiki.vg/Server_List_Ping) and have it mostly working as expected, although I had big trouble with the unsupported data types (VarInt) and had to scour the internet to find a way of converting JS numbers into VarInts in order to craft the necessary packet buffers:
function toVarIntBuffer(integer) {
    let buffer = Buffer.alloc(0);
    while (true) {
        let tmp = integer & 0b01111111;
        integer >>>= 7;
        if (integer != 0) {
            tmp |= 0b10000000;
        }
        buffer = Buffer.concat([buffer, Buffer.from([tmp])]);
        if (integer <= 0) break;
    }
    return buffer;
}
Right now I am able to request a server status by sending the handshake packet and then the query packet and do receive a JSON response with the length of the response prepended as a VarInt.
However, the issue is that I simply don't know how to safely identify the VarInt at the beginning of the JSON response (as it can be anywhere from 1 up to 5 bytes) and decode it back to a readable number, so I can get the proper length of the response byte stream.
[...] as with all strings this is prefixed by its length as a VarInt
(from the protocol documentation)
My current, super hacky workaround is to concatenate the chunks as a string until the concatenated string contains the same count of '{'s and '}'s (meaning a complete JSON object), and to slice the response at the first '{' before parsing it.
However, I am very unhappy with this hacky, inefficient, inelegant and possibly unreliable way of solving the issue, and would rather decode the VarInt in front of the JSON response in order to get a proper length to compare against.
I don't know this protocol, but VarInts in protobuf are encoded with a continuation bit in the MSB:
Each byte in a varint, except the last byte, has the most significant bit (msb) set – this indicates that there are further bytes to come. The lower 7 bits of each byte are used to store the two's complement representation of the number in groups of 7 bits, least significant group first.
Note: Too long for a comment, so posting as an answer.
Update: I browsed a bit through the URL you gave, and it is indeed the ProtoBuf VarInt. It is also described there with pseudo-code:
https://wiki.vg/Protocol#VarInt_and_VarLong
VarInt and VarLong: Variable-length format such that smaller numbers use fewer bytes. These are very similar to Protocol Buffer Varints: the 7 least significant bits are used to encode the value and the most significant bit indicates whether there's another byte after it for the next part of the number. The least significant group is written first, followed by each of the more significant groups; thus, VarInts are effectively little endian (however, groups are 7 bits, not 8). VarInts are never longer than 5 bytes, and VarLongs are never longer than 10 bytes.
The page also gives pseudocode to read and write VarInts and VarLongs.
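As a small worked example of that scheme (my own addition, not from the docs): the value 300 is 0b100101100; split into 7-bit groups, least significant group first, that is 0101100 and 0000010. Every byte except the last gets the continuation bit (0x80) set, so 300 encodes as 0xAC 0x02. You can verify this with the question's toVarIntBuffer:
console.log(toVarIntBuffer(300)); // <Buffer ac 02>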
Thanks to the reference material that #thst pointed me to, I was able to slap together a working way of reading VarInts in javascript.
function readVarInt(buffer) {
    let value = 0;
    let length = 0;
    let currentByte;

    while (true) {
        currentByte = buffer[length];
        value |= (currentByte & 0x7F) << (length * 7);
        length += 1;
        if (length > 5) {
            throw new Error('VarInt exceeds allowed bounds.');
        }
        if ((currentByte & 0x80) != 0x80) break;
    }
    return value;
}
buffer must be a byte stream starting with the VarInt, ideally using the std Buffer class.
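A possible usage sketch (hypothetical helper names; it assumes response is a Buffer that starts with the length VarInt, followed by the JSON payload):
function varIntSize(value) {
    // number of bytes the VarInt encoding of value occupies (1..5)
    let size = 0;
    do { size++; value >>>= 7; } while (value !== 0);
    return size;
}

const length = readVarInt(response);   // payload length in bytes
const offset = varIntSize(length);     // bytes consumed by the prefix itself
const json = JSON.parse(response.slice(offset, offset + length).toString('utf8'));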
Can a buffer have both a string and an image associated with it? If so, how can they be extracted separately?
An example case would be a buffer with image data and also file name data.
I have worked with sharedArrayBuffers/arrayBuffers before.
If you are storing image pixel data, it's going to be a 32-bit int array, with four 8-bit segments controlling r, g, b and a respectively. Yes: you CAN tack string data on at the front in the form of a 'header' if you encode it and decode it to int values, but I have a hard time understanding why that might be desirable, because working with raw pixel data that is ONLY pixel data is simpler. (I usually just stick it as a property of an object, along with whatever other data I want to store.)
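If you did want the header approach mentioned above, a minimal sketch might look like this (hypothetical layout: a 4-byte name length, the UTF-8 file name, then the pixel bytes):
// pack: [u32 nameLength][name bytes][pixel bytes]
function packImage(fileName, pixelBytes) {            // pixelBytes: Uint8Array
    const nameBytes = new TextEncoder().encode(fileName);
    const out = new Uint8Array(4 + nameBytes.length + pixelBytes.length);
    new DataView(out.buffer).setUint32(0, nameBytes.length);
    out.set(nameBytes, 4);
    out.set(pixelBytes, 4 + nameBytes.length);
    return out.buffer;                                 // one ArrayBuffer holding both
}

// unpack
function unpackImage(buffer) {
    const view = new DataView(buffer);
    const nameLength = view.getUint32(0);
    const name = new TextDecoder().decode(new Uint8Array(buffer, 4, nameLength));
    const pixels = new Uint8Array(buffer, 4 + nameLength);
    return { name: name, pixels: pixels };
}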
Data Buffers
Typed arrays
You can use ArrayBuffer to create a buffer to hold the data. You then create a view of it using a typed array, e.g. unsigned bytes with Uint8Array. Types can be 8, 16, 32 or 64-bit (signed/unsigned integers) or 32/64-bit floating point (float/double).
One buffer can have many views. You can read and write to a view just like any JS array. The values are automatically converted to the correct type when you write to the buffer, and converted to Number when you read from a view.
Example
Using buffer and views to read different data types
For example say you have file data that has a 4 character header, followed by a 16 bit unsigned integer chunk length, then 2 signed 16 bit integer coordinates, and more data
const fileBuffer = new ArrayBuffer(fileSizeInBytes);
// Create a view of the buffer so we can fill it with file data
const dataRaw = new Uint8Array(fileBuffer);
// load the data into dataRaw

// To get a string from the data we can create a util function
function readBufferString(buffer, start, length) {
    // create a view at the position of the string in the buffer
    const bytes = new Uint8Array(buffer, start, length);
    // read each byte converting to JS unicode string
    var str = "", idx = 0;
    while (idx < length) { str += String.fromCharCode(bytes[idx++]) }
    return str;
}

// get 4 char chunk header at start of buffer
const header = readBufferString(fileBuffer, 0, 4);
if (header === "HEAD") {
    // Create views for 16 bit signed and unsigned integers
    const ints = new Int16Array(fileBuffer);
    const uints = new Uint16Array(fileBuffer);
    const length = uints[2]; // get the length as unsigned int16
    const x = ints[3];       // get the x coord as signed int16
    const y = ints[4];       // get the y coord as signed int16
    // ...
}
A DataView
The above example is one way of extracting the different types of data from a single buffer. However, there could be a problem with older files and some data sources regarding the order of the bytes that make up multi-byte types (e.g. 32-bit integers). This is called endianness.
To help with using the correct endianness, and to simplify access to all the different data types in a buffer, you can use a DataView.
The data view lets you read from the buffer by type and endianness. For example, to read an unsigned 64-bit integer from a buffer:
// fileBuffer is an ArrayBuffer with the data
// Create a view
const dataView = new DataView(fileBuffer);

// read the 64 bit uint starting at the first byte in the buffer
// Note the returned value is a BigInt not a Number
const bInt = dataView.getBigUint64(0);

// If the int was in little endian order you would use
const bIntLE = dataView.getBigUint64(0, true); // true for little E
Notes
Buffers are not dynamic. They can not grow or shrink, so you must know how large the buffer needs to be when you create it.
Buffers tend to be a little slower than JavaScript's standard arrays, as there is a lot of type coercion when reading from or writing to them.
Buffers can be transferred (Zero copy transfer) across threads making them ideal for distributing large data structures between WebWorkers. There is also a SharedArrayBuffer that lets you create true parallel processing solutions in JS
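As a short illustration of that zero-copy transfer (the worker file name here is hypothetical):
// main thread: hand the buffer over to a worker without copying it
const worker = new Worker("process.js");
const pixels = new Uint8Array(1024 * 1024 * 4);        // e.g. RGBA pixel data
worker.postMessage({ pixels: pixels.buffer }, [pixels.buffer]);
// after the transfer, pixels.buffer is detached (byteLength === 0) in this thread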
I am not sure it's even possible, but can I get the image file size from a data URI?
For example, let's say there is an IMG element where src goes:
src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD...
Based on the src, can I get the image file size by using plain JavaScript? (without server request)
If you want file size, simply decode your base64 string and check the length.
var src ="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP/// yH5BAEAAAAALAAAAAABAAEAAAIBRAA7";
var base64str = src.substr(22);
var decoded = atob(base64str);
console.log("FileSize: " + decoded.length);
If you're okay with a (very good) estimate, the file size is 75% of the size of the base64 string. The true size is no larger than this estimate, and, at most, two bytes smaller.
If you want to write one line and be done with it, use atob() and check the length, as in the other answers.
If you want an exact answer with maximum performance (in the case of gigantic files or millions of files or both), use the estimate but account for the padding to get the exact size:
let base64Length = src.length - (src.indexOf(',') + 1);
let padding = (src.charAt(src.length - 2) === '=') ? 2 : ((src.charAt(src.length - 1) === '=') ? 1 : 0);
let fileSize = base64Length * 0.75 - padding;
This avoids parsing the entire string, and is entirely overkill unless you're hunting for microoptimizations or are short on memory.
Your best option is to calculate the length of the base64 string itself.
What is a base64 length in bytes?
You have to convert the base64 string to a binary string using atob() and then check its length, which gives you the size of the decoded image in bytes. Also, you don't need the data:image/jpeg;base64, part of the data URI to check the size.
This is a universal solution for all types of base64 strings based on Daniel Trans's code.
var src ="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP/// yH5BAEAAAAALAAAAAABAAEAAAIBRAA7";
var base64str = src.split('base64,')[1];
var decoded = atob(base64str);
console.log("FileSize: " + decoded.length);
The other solutions make use of atob, which is now marked as legacy/deprecated in Node.js. Here is an up-to-date example using Buffer instead.
const src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD...";
const base64str = src.split('base64,')[1]; //remove the image type metadata.
const imageFile = Buffer.from(base64str, 'base64'); // decode the base64 data into bytes
console.log('FileSize: ' + imageFile.length);
I'm using the Node module 'net' to create a client application that sends data through a TCP socket. The server-side application accepts the message if it starts and ends with the correct hex code; for example, the data packet would start with hex "0F" and end with hex "0F1C". How would I create these hex codes with JavaScript? I found code to convert a UTF-8 string into hex (below), but I'm not sure this is what I need, as I don't have much experience with TCP/IP socket connections. Does anyone have experience with TCP/IP transfers and/or JavaScript hex codes?
function toHex(str, hex) {
    try {
        hex = unescape(encodeURIComponent(str))
            .split('').map(function (v) {
                return v.charCodeAt(0).toString(16)
            }).join('')
    }
    catch (e) {
        hex = str
        console.log('invalid text input: ' + str)
    }
    return hex
}
First of all, you do not need to convert your data string into hex values, in order to send it over TCP. Every string in node.js is converted to bytes when sent over the network.
Normally, you'd send over a string like so:
var data = "ABC";
socket.write(data); // will send bytes 65 66 67, or in hex: 41 42 43
Node.JS also allows you to pass Buffer objects to functions like .write().
So, probably the easiest way to achieve what you wish, is to create an appropriate buffer to hold your data.
var data = "ABC";
var prefix = 0x0F; // JavaScript allows hex numbers.
var suffix = 0x0F1C;
var dataSize = Buffer.byteLength(data);
// compute the required buffer length
var bufferSize = 1 + dataSize + 2;
var buffer = Buffer.alloc(bufferSize);
// store first byte on index 0;
buffer.writeUInt8(prefix, 0);
// store string starting at index 1;
buffer.write(data, 1, dataSize);
// stores last two bytes, in big endian format for TCP/IP.
buffer.writeUInt16BE(suffix, bufferSize - 2);
socket.write(buffer);
Explanation:
The prefix hex value 0F requires 1 byte of space. The suffix hex value 0F1C actually requires two bytes (a 16-bit integer).
When computing the number of required bytes for a string (JavaScript strings are UTF-16 encoded!), str.length is not accurate most of the time, especially when your string has non-ASCII characters in it. The proper way of getting the byte size of a string is to use Buffer.byteLength() (see the short example after this explanation).
Buffers in node.js have static allocations, meaning you can't resize them after you create them. Hence, you'll need to compute the size of the buffer -in bytes- before creating it. Looking at our data, that is 1 (for our prefix) + Buffer.byteLength(data) (for our data) + 2 (for our suffix).
After that -imagine buffers as arrays of bytes (8-bit values)-, we'll populate the buffer, like so:
write the first byte (the prefix) using writeUInt8(byte, offset) with offset 0 in our buffer.
write the data string, using .write(string[, offset[, length]][, encoding]), starting at offset 1 in our buffer, and length dataSize.
write the last two bytes, using .writeUInt16BE(value, offset) with offset bufferSize - 2. We're using writeUInt16BE to write the 16-bit value in big-endian encoding, which is what you'd need for TCP/IP.
Once we've filled our buffer with the correct data, we can send it over the network, using socket.write(buffer);
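A small illustration of both points (my own addition, not part of the original packet code):
// str.length counts UTF-16 code units, not bytes, so it under-counts non-ASCII text
console.log('héllo'.length);               // 5
console.log(Buffer.byteLength('héllo'));   // 6 -- the "é" takes two bytes in UTF-8

// dumping the finished packet as hex is an easy way to verify the layout
console.log(buffer.toString('hex'));       // prefix byte + data bytes + 2 suffix bytes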
Additional tip:
If you really want to convert a large string to bytes, (e.g. to later print as hex), then Buffer is also great:
var buf = Buffer.from('a very large string');
// now you have a byte representation of the string.
Since bytes are all 0-255 decimal values, you can easily print them as hex values in console, like so:
for (let i = 0; i < buf.length; i++) {
    const byte = buf[i];
    const hexChar = byte.toString(16); // convert the decimal `byte` to a hex string
    // do something with hexChar, e.g. console.log(hexChar);
}
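As a side note, Buffer can also produce the whole hex dump in one call, which may be simpler than looping:
console.log(buf.toString('hex')); // same bytes as the loop above, as one hex string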
I'm trying to download 16-bit image data from a server and push it into a WebGL texture without browser plug-ins. texImage2D will work with: ImageData, HTMLImageElement, HTMLCanvasElement, or HTMLVideoElement. I'm looking for some JavaScript (a library or code sample) which can decode 16-bit TIFF or similar (hdf5, etc.) image data into one of these object types.
I have no problem doing this in 8-bit-per-channel RGB by using an <img> element to load a PNG, but this doesn't work with 16-bit-per-channel data, since there aren't any "standard" browser-supported image formats which are 16-bit.
In case of combining two PNG images, one with the top 8 bits and the second with the low 8 bits, I think it should be:
highp vec4 texCol = texture2D(tex_low, vec2(vTexCoord.s, vTexCoord.t)) * (1.0 / 257.0);
texCol += texture2D(tex_up, vec2(vTexCoord.s, vTexCoord.t)) * (256.0 / 257.0);
In 8 bits per channel RGB colors will range from 0 to 255 = 2^8 - 1.
In 16 bits per channel RGB colors will range from 0 to 65535 = 2^16 - 1 = 255*257.
Explanation
WebGL works with colour values from 0 to 1, which it gets by dividing the 8-bit colour value by 255, so the divided value lies in the range <0,1>.
In the case of 16 bits per channel we would like to divide by 65535 to get the proper number in the range <0,1>.
What we want is the 16-bit colour value reduced to the range <0,1>.
Let low and up be colour values in the range 0..255; up is the top 8 bits and low is the low 8 bits.
To get the 16-bit value we can compute low + up*256. Now we have a number in the range 0..65535. To get a value in the range <0,1> we divide it by 65535. Note that WebGL works with colour values from the range <0,1>, i.e. Lw = low/255 and Uw = up/255. So we don't have to multiply by 255 and then divide by 65535, because 65535 = 255*257; instead we just divide by 257.
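A quick numeric check of that formula (plain JS, just for illustration):
var up = 18, low = 52;                         // two 8-bit channel values
var direct = (up * 256 + low) / 65535;         // true 16-bit value scaled to <0,1>
var Lw = low / 255, Uw = up / 255;             // what the shader actually samples
var viaShader = Lw * (1.0 / 257.0) + Uw * (256.0 / 257.0);
console.log(direct, viaShader);                // both ~= 0.07110...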
Also, I could not find any software to split a 16-bit/channel image into two 8-bit/channel images, so here is my code, feel free to use it; it splits a 16-bit/channel TIFF into two 8-bit/channel PNGs:
https://github.com/czero69/ImageSplitter
PNGToy is a fairly full-featured library for extracting PNG chunks of almost all bit depths and channel modes with JavaScript (truly client-side / without node.js, with only a Promise.js dependency). The decode method returns the desired buffer. Here is an example for a 16-bit grayscale PNG (16-bit RGB should work as well):
var dataObj;
var img = new PngImage();
var buffer;

img.onload = function() {
    var pngtoy = this.pngtoy;
    dataObj = pngtoy.decode().then(function(results) {
        buffer = new Uint16Array(results.bitmap);
        for (var i = 0, j; i < buffer.length; i++) {
            j = buffer[i];
            buffer[i] = ((j & 0xff) << 8) | ((j & 0xff00) >>> 8); // swap bytes for correct unsigned integer values
        }
        console.log(buffer);
    });
};

img.onerror = function(e) {
    console.log(e.message);
};

img.src = "image.png";
I don't think the main browsers natively support any 16-bit/channel image format at the moment.
One way to achieve the same effect would be to create two PNG images, one with the top 8 bits of each colour channel in the image and one with the bottom 8 bits.
Then bind the images as two textures and combine the values in your shader, e.g.
highp vec4 val = texture2D(samplerTop8bits, tex_coord) * (256.0 / 257.0);
val += texture2D(samplerBottom8bits, tex_coord) * (1.0 / 257.0);
(Note: you need highp precision to represent your data correctly in a 16-bit range)
Another method is only possible if floating point textures are supported in your target browser(s). You would, in the browser, combine the two PNG images into a floating point texture then access that normally. This may not be any faster and will probably use twice the amount of texture memory.
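A rough sketch of that floating-point-texture route (WebGL1; it assumes gl is a WebGL context, width/height are the image dimensions, and topData/bottomData are Uint8ClampedArrays taken from the two PNGs, e.g. via a temporary canvas and getImageData):
if (!gl.getExtension('OES_texture_float')) throw new Error('float textures unsupported');
var floatPixels = new Float32Array(width * height * 4);
for (var i = 0; i < floatPixels.length; i++) {
    // rebuild the 16-bit value and normalise it to the <0,1> range
    floatPixels[i] = (topData[i] * 256 + bottomData[i]) / 65535;
}
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); // linear filtering on
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); // float needs another extension
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT, floatPixels);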