JavaScript: Converting an incoming buffer received as hex strings to a Buffer

const stringArray = ['0x00', '0x3c', '0xbc']
to
const array = [0x00, 0x3c, 0xbc]
const buf = Buffer.from(array)
How should I go about converting the hex strings above so they can be used in a buffer?

You appear to have an array of strings where the strings are byte values written as hexadecimal strings. So you need to:
Convert each hex string to a byte; that's easily done with parseInt(str, 16) (the 16 being hexadecimal). parseInt will accept the 0x prefix. Or you could use +str or Number(str), since the prefix tells them what number base to use.
Create a buffer and fill it in with the bytes.
If the array isn't massive and you can happily create a temporary array, use map and Buffer.from:
const buffer = Buffer.from(theArray.map(str => +str));
If you want to avoid an unnecessary intermediate array (I'm surprised there's no variant of Buffer.from that accepts a mapping function), we have to do the two steps separately:
const buffer = Buffer.alloc(theArray.length);
for (let index = 0; index < theArray.length; ++index) {
buffer[index] = +theArray[index];
}
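For example, a quick sketch applying either approach to the array from the question (assumes Node, where Buffer is a global):
const stringArray = ['0x00', '0x3c', '0xbc'];
// parseInt(str, 16) and unary + both understand the 0x prefix
const buffer = Buffer.from(stringArray.map(str => parseInt(str, 16)));
console.log(buffer); // <Buffer 00 3c bc>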

Related

How to convert a hex binary string to Uint8Array

I have this string of bytes represented in hex:
const s = "\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\x8bV23J15O4\xb14\xb1H61417KKLL\xb50L5U\x8a\x05\x00\xf6\xaa\x8e.\x1c\x00\x00\x00"
I would like to convert it to Uint8Array in order to further manipulate it.
How can it be done?
Update:
The binary string is coming from a Python backend. In Python I can create this representation correctly:
encoded = base64.b64encode(b'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\x8bV23J15O4\xb14\xb1H61417KKLL\xb50L5U\x8a\x05\x00\xf6\xaa\x8e.\x1c\x00\x00\x00')
Since JavaScript strings support \x escapes, this should work to convert a Python byte string to a Uint8Array:
const s = "\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\x8bV23J15O4\xb14\xb1H61417KKLL\xb50L5U\x8a\x05\x00\xf6\xaa\x8e.\x1c\x00\x00\x00";
const array = Uint8Array.from([...s].map(v => v.charCodeAt(0)));
console.log(array);
In Node.js, one uses Buffer.from to convert a (base64-encoded) string into a Buffer.
If the original argument is a base64 encoded string, as in Python:
const buffer = Buffer.from(encodedString, 'base64');
If it's a UTF-8 encoded string:
const buffer = Buffer.from(encodedString);
Buffers are instances of Uint8Array, so they can be used wherever a Uint8Array is expected. Quoting from the docs:
The Buffer class is a subclass of JavaScript's Uint8Array class and extends it with methods that cover additional use cases. Node.js APIs accept plain Uint8Arrays wherever Buffers are supported as well.
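Tying that together, a short sketch (assuming encodedString holds the base64 text produced by the Python snippet above):
const buffer = Buffer.from(encodedString, 'base64');
console.log(buffer instanceof Uint8Array); // true, so it can be passed wherever a Uint8Array is expected
console.log(buffer[0].toString(16)); // should log "1f", the first byte of the original payload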
const s = "\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\x8bV23J15O4\xb14\xb1H61417KKLL\xb50L5U\x8a\x05\x00\xf6\xaa\x8e.\x1c\x00\x00\x00"
// btoa encodes a binary string as base64 (an ASCII-only string)
let str = btoa(s)
let encoder = new TextEncoder()
let typedarr = encoder.encode(str) // encode() returns a Uint8Array (here it holds the UTF-8 bytes of the base64 text)
console.log(typedarr)

Why can String.prototype.charCodeAt() convert a binary string into a Uint8Array?

Suppose I have a base64 encoded string and I want to convert it into an ArrayBuffer. I can do it this way:
// base64 decode the string to get the binary data
const binaryString = window.atob(base64EncodedString);
// convert from a binary string to an ArrayBuffer
const buf = new ArrayBuffer(binaryString.length);
const bufView = new Uint8Array(buf);
for (let i = 0, strLen = binaryString.length; i < strLen; i++) {
bufView[i] = binaryString.charCodeAt(i);
}
// get ArrayBuffer: `buf`
String.prototype.charCodeAt() returns an integer between 0 and 65535 representing the UTF-16 code unit at the given index, but a Uint8Array's value range is only [0, 255].
I was initially thinking that the value we obtain from charCodeAt() could go out of the bounds of the Uint8Array range. Then I checked the built-in atob() function, which returns an ASCII string containing the decoded data. According to the documentation on binary strings, an ASCII string has a range from 0 to 127, which fits within the range of a Uint8Array, and that's why it's safe to use charCodeAt() in this case.
That's my understanding. I'm not sure if I interpret this correctly. Thanks for your help!
So it looks like my understanding is correct.
Thanks to @Konrad; here is their addition:
charCodeAt is designed to support UTF-16, and UTF-16 was designed to be compatible with existing encodings: its first 256 code points have the same values as in Latin-1 (which itself extends ASCII), so every byte value atob() can produce maps to exactly one code unit.
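A quick sketch showing that round trip staying within 0-255 (btoa and atob are browser globals):
const binary = atob(btoa('\x00\x7f\xff')); // a string whose char codes are 0, 127 and 255
console.log([...binary].map(c => c.charCodeAt(0))); // [0, 127, 255], all within a Uint8Array's range
const bytes = Uint8Array.from(binary, c => c.charCodeAt(0));
console.log(bytes); // Uint8Array [0, 127, 255]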

Extracting different types of data from a single buffer

Can a buffer have both a string and an image associated with it? If so, how do I extract them separately?
An example case would be a buffer with image data and also file name data.
I have worked with sharedArrayBuffers/arrayBuffers before.
If you are storing image pixel data, it's going to be a u32-int array, with four 8-bit segments controlling rgba respectively... yes: you CAN tack on string data at the front in the form of a 'header' if you encode it and decode it to int values... but I have a hard time understanding why that might be desirable, because working with raw pixel data that is ONLY pixel data is simpler. (I usually just stick it as a property of an object, with whatever other data I want to store.)
Data Buffers
Typed arrays
You can use an ArrayBuffer to create a buffer to hold the data. You then create a view over it using a typed array, e.g. unsigned bytes with Uint8Array. Types can be 8, 16, 32 or 64 bit (signed or unsigned integers), or float and double (32 and 64 bit floating point).
One buffer can have many views. You can read and write to a view just like any JS array. The values are automatically converted to the correct type when you write to a view, and converted to Number when you read from one.
Example
Using buffer and views to read different data types
For example, say you have file data that has a 4 character header, followed by a 16 bit unsigned integer chunk length, then 2 signed 16 bit integer coordinates, and more data.
const fileBuffer = new ArrayBuffer(fileSizeInBytes);
// Create a view of the buffer so we can fill it with the file data
const dataRaw = new Uint8Array(fileBuffer);
// load the data into dataRaw
// To get a string from the data we can create a util function
function readBufferString(buffer, start, length) {
// create a view at the position of the string in the buffer
const bytes = new Uint8Array(buffer, start, length);
// read each byte converting to JS unicode string
var str = "", idx = 0;
while (idx < length) { str += String.fromCharCode(bytes[idx++]) }
return str;
}
// get 4 char chunk header at start of buffer
const header = readBufferString(fileBuffer, 0, 4);
if (header === "HEAD") {
// Create views for 16 bit signed and unsigned integers
const ints = new Int16Array(fileBuffer);
const uints = new Uint16Array(fileBuffer);
const length = uints[2]; // get the length as an unsigned int16
const x = ints[3]; // get the x coord as a signed int16
const y = ints[4]; // get the y coord as a signed int16
// ... read the remaining data
}
A DataView
The above example is one way of extracting the different types of data from a single buffer. However, there could be a problem with older files and some data sources regarding the order of the bytes that make up multi-byte types (e.g. 32 bit integers). This is called endianness.
To help with using the correct endianness and to simplify access to all the different data types in a buffer you can use a DataView
The data view lets you read from the buffer by type and endianness. For example, to read an unsigned 64 bit integer from a buffer:
// fileBuffer is a array buffer with the data
// Create a view
const dataView = new DataView(fileBuffer);
// read the 64 bit uint starting at the first byte in the buffer
// Note the returned value is a BigInt not a Number
const bInt = dataView.getBigUint64(0);
// If the int was in little endian order you would use
const bIntLE = dataView.getBigUint64(0, true); // true for little endian
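As a sketch, the chunk fields from the typed-array example above could also be read with a DataView (same assumed layout: 4 byte header, uint16 length, two int16 coordinates; the endianness flag depends on the file format):
const view = new DataView(fileBuffer);
// offsets are in bytes; the multi-byte fields start after the 4 character header
const chunkLength = view.getUint16(4, true); // chunk length, little endian
const xCoord = view.getInt16(6, true);       // x coordinate
const yCoord = view.getInt16(8, true);       // y coordinate
// pass false (or omit the flag) if the file stores big endian values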
Notes
Buffers are not dynamic. That means they cannot grow or shrink, and you must know how large the buffer needs to be when you create it.
Buffers tend to be a little slower than JavaScript's standard arrays, as there is a lot of type coercion when reading from or writing to them.
Buffers can be transferred (Zero copy transfer) across threads making them ideal for distributing large data structures between WebWorkers. There is also a SharedArrayBuffer that lets you create true parallel processing solutions in JS
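A minimal sketch of such a zero copy transfer (assumes a browser context and a hypothetical worker script at worker.js):
const worker = new Worker('worker.js');
const buffer = new ArrayBuffer(1024 * 1024);
// listing the buffer in the transfer list moves ownership to the worker instead of copying it
worker.postMessage({ payload: buffer }, [buffer]);
console.log(buffer.byteLength); // 0, the buffer is detached on this side after the transfer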

javascript hex codes with TCP/IP communication

I'm using the Node module 'net' to create a client application that sends data through a TCP socket. The server-side application accepts the message only if it starts and ends with the correct hex codes; just for example, the data packet would start with hex "0F" and end with hex "0F1C". How would I create these hex codes with JavaScript? I found some code to convert a UTF-8 string into hex (below), but I'm not sure that's what I need, as I don't have much experience with TCP/IP socket connections. Does anyone have experience with TCP/IP transfers and/or JavaScript hex codes?
function toHex(str,hex){
try{
hex = unescape(encodeURIComponent(str))
.split('').map(function(v){
return v.charCodeAt(0).toString(16)
}).join('')
}
catch(e){
hex = str
console.log('invalid text input: ' + str)
}
return hex
}
First of all, you do not need to convert your data string into hex values in order to send it over TCP. Every string in node.js is converted to bytes when sent over the network.
Normally, you'd send over a string like so:
var data = "ABC";
socket.write(data); // will send bytes 65 66 67, or in hex: 41 42 43
Node.JS also allows you to pass Buffer objects to functions like .write().
So, probably the easiest way to achieve what you wish, is to create an appropriate buffer to hold your data.
var data = "ABC";
var prefix = 0x0F; // JavaScript allows hex numbers.
var suffix = 0x0F1C;
var dataSize = Buffer.byteLength(data);
// compute the required buffer length
var bufferSize = 1 + dataSize + 2;
var buffer = Buffer.alloc(bufferSize);
// store first byte on index 0;
buffer.writeUInt8(prefix, 0);
// store string starting at index 1;
buffer.write(data, 1, dataSize);
// stores last two bytes, in big endian format for TCP/IP.
buffer.writeUInt16BE(suffix, bufferSize - 2);
socket.write(buffer);
Explanation:
The prefix hex value 0F requires 1 byte of space. The suffix hex value 0F1C actually requires two bytes (a 16-bit integer).
When computing the number of required bytes for a string (JavaScript strings are UTF-16 encoded!), str.length is not accurate most of the time, especially when your string has non-ASCII characters in it. The proper way to get the byte size of a string is Buffer.byteLength().
Buffers in node.js have static allocations, meaning you can't resize them after you created them. Hence, you'll need to compute the size of the buffer -in bytes- before creating it. Looking at our data, that is 1 (for our prefix) + Buffer.byteLength(data) (for our data) + 2 (for our suffix).
After that -imagine buffers as arrays of bytes (8-bit values)-, we'll populate the buffer, like so:
write the first byte (the prefix) using writeUInt8(byte, offset) with offset 0 in our buffer.
write the data string, using .write(string[, offset[, length]][, encoding]), starting at offset 1 in our buffer, and length dataSize.
write the last two bytes, using .writeUInt16BE(value, offset) with offset bufferSize - 2. We're using writeUInt16BE to write the 16-bit value in big-endian encoding, which is what you'd need for TCP/IP.
Once we've filled our buffer with the correct data, we can send it over the network, using socket.write(buffer);
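As a quick sanity check (not part of the protocol itself), you could print the framed buffer as hex before sending it:
console.log(buffer.toString('hex')); // for data "ABC" this prints 0f4142430f1c: prefix, payload, suffix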
Additional tip:
If you really want to convert a large string to bytes (e.g. to later print them as hex), then Buffer is also great:
var buf = Buffer.from('a very large string');
// now you have a byte representation of the string.
Since bytes are all 0-255 decimal values, you can easily print them as hex values in console, like so:
for (let i = 0; i < buf.length; i++) {
const byte = buf[i];
const hexChar = byte.toString(16); // convert the decimal `byte` to a hex string
// do something with hexChar, e.g. console.log(hexChar);
}
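If you just want the whole dump rather than per-byte values, Buffer can also produce it in one call:
const hex = buf.toString('hex');
console.log(hex); // the entire buffer as one lowercase hex string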

JavaScript: Encode a byte[] array to base64

I have a simple question: how do I encode a byte[] array to base64 format?
I have the following code:
var hash = CryptoJS.SHA1("payLoad");
document.writeln(hash);
hash = hash.toString();
var bytes = [];
for (var i = 0; i < hash.length; ++i)
{
bytes.push(hash.charCodeAt(i));
}
Now I would like to encode bytes[ ] to base64 format. Is there a library to do that?
I will appreciate your help!
btoa and atob work with strings, which they treat as arrays of bytes, so to use these two functions, you should first convert your array of integers (provided they fall in the range 0-255) into a string.
Something that worked for me was these two simple functions:
b64encode = function(x) {
return btoa(x.map(function(v){return String.fromCharCode(v)}).join(''))
};
b64decode = function(x) {
return atob(x).split('').map(function(v) {return v.codePointAt(0)});
};
I'm sure you can write them in better style.
In your case, though, your data was already in the correct format before you converted it to a 32 bit integer array. The fact that you call the array 'bytes' is only misleading you, I think. After that, btoa converted the bloated array instead of the array of bytes you had in your hash.
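For example, a usage sketch of the helpers above on a plain array of byte values:
const bytes = [72, 101, 108, 108, 111]; // the bytes of "Hello"
console.log(b64encode(bytes));       // "SGVsbG8="
console.log(b64decode('SGVsbG8='));  // [72, 101, 108, 108, 111]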
