I'm using the Node module 'net' to create a client application that sends data through a TCP socket. The server-side application accepts a message only if it starts and ends with the correct hex codes; for example, a data packet would start with hex "0F" and end with hex "0F1C". How would I create these hex codes with JavaScript? Below is some JavaScript I found to convert a UTF-8 string into hex, but I'm not sure it's what I need, as I don't have much experience with TCP/IP socket connections. Does anyone have experience with TCP/IP transfers and/or JavaScript hex codes?
function toHex(str, hex) {
  try {
    hex = unescape(encodeURIComponent(str))
      .split('')
      .map(function (v) {
        return v.charCodeAt(0).toString(16);
      })
      .join('');
  } catch (e) {
    hex = str;
    console.log('invalid text input: ' + str);
  }
  return hex;
}
First of all, you do not need to convert your data string into hex values in order to send it over TCP. Every string in Node.js is converted to bytes when sent over the network.
Normally, you'd send over a string like so:
var data = "ABC";
socket.write(data); // will send bytes 65 66 67, or in hex: 41 42 43
Node.js also allows you to pass Buffer objects to functions like .write().
So probably the easiest way to achieve what you want is to create an appropriate buffer to hold your data.
var data = "ABC";
var prefix = 0x0F; // JavaScript allows hex numbers.
var suffix = 0x0FC1;
var dataSize = Buffer.byteLength(data);
// compute the required buffer length
var bufferSize = 1 + dataSize + 2;
var buffer = new Buffer(bufferSize);
// store first byte on index 0;
buffer.writeUInt8(prefix, 0);
// store string starting at index 1;
buffer.write(data, 1, dataSize);
// stores last two bytes, in big endian format for TCP/IP.
buffer.writeUInt16BE(suffix, bufferSize - 2);
socket.write(buffer);
Explanation:
The prefix hex value 0F requires 1 byte of space. The suffix hex value 0F1C actually requires two bytes (a 16-bit integer).
When computing the number of bytes required for a string (JavaScript strings are UTF-16 encoded!), str.length is often inaccurate, especially when the string contains non-ASCII characters. The proper way to get the byte size of a string is Buffer.byteLength().
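A quick illustration of the difference (the example string here is my own):
const s = 'héllo'; // contains one non-ASCII character
console.log(s.length); // 5 (UTF-16 code units)
console.log(Buffer.byteLength(s)); // 6 (UTF-8 bytes; 'é' takes two)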
Buffers in Node.js have static allocations, meaning you can't resize them after creating them. Hence, you'll need to compute the size of the buffer, in bytes, before creating it. Looking at our data, that is 1 (for our prefix) + Buffer.byteLength(data) (for our data) + 2 (for our suffix).
After that (imagine buffers as arrays of bytes, i.e. 8-bit values), we'll populate the buffer, like so:
- write the first byte (the prefix) using writeUInt8(byte, offset), with offset 0 in our buffer.
- write the data string using .write(string[, offset[, length]][, encoding]), starting at offset 1 in our buffer, with length dataSize.
- write the last two bytes using .writeUInt16BE(value, offset), with offset bufferSize - 2. We're using writeUInt16BE to write the 16-bit value in big-endian encoding, which is what you'd need for TCP/IP.
Once we've filled our buffer with the correct data, we can send it over the network using socket.write(buffer).
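Putting it all together, here is a minimal sketch of a complete client using the net module; the host, port and payload are made-up placeholders:
const net = require('net');

const HOST = '127.0.0.1'; // placeholder
const PORT = 5000; // placeholder

const socket = net.connect(PORT, HOST, function () {
  const data = 'ABC';
  const dataSize = Buffer.byteLength(data);
  const buffer = Buffer.alloc(1 + dataSize + 2);
  buffer.writeUInt8(0x0F, 0); // prefix byte
  buffer.write(data, 1, dataSize); // payload
  buffer.writeUInt16BE(0x0F1C, 1 + dataSize); // suffix, big endian
  socket.write(buffer);
  socket.end();
});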
Additional tip:
If you really want to convert a large string to bytes (e.g. to print them as hex later), then Buffer is also great:
var buf = Buffer.from('a very large string');
// now you have a byte representation of the string.
Since bytes are all 0-255 decimal values, you can easily print them as hex values in the console, like so:
for (let i = 0; i < buf.length; i++) {
  const byte = buf[i];
  const hexChar = byte.toString(16).padStart(2, '0'); // convert the decimal `byte` to a two-digit hex string
  // do something with hexChar, e.g. console.log(hexChar);
}
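As a side note, Node's Buffer can do this in a single call, which may be all you need:
console.log(buf.toString('hex')); // the whole buffer as one hex string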
Related
I am currently trying to implement the Huffman algorithm, and it works fine for encoding and decoding. However, I store the encoded data as follows.
The result of the encoding function is a list containing many strings made up of 0s and 1s, all of varying lengths.
If I saved them in a normal txt file, it would take up more space. If I stored them as-is in a binary file, an 'e' with the code 101, for example, would be stored in a full 8 bits as '00000101', which is wasteful and won't take up less storage than the original txt file. So I took all the strings in the list, put them into one string, and split it into equal parts of length 8 to store them more efficiently.
However, if I want to read the data now, instead of 0s and 1s I get UTF-8 characters, even some escape characters.
I'm reading the file with fs.readFileSync("./encoded.bin", "binary"); but JavaScript then thinks it's a buffer already and converts it to a string, and it gets all weird... Any solutions or ideas for converting it back to 0s and 1s?
I also tried switching the "binary" in fs.readFileSync("./encoded.bin", "binary"); to "utf-8", which helped with not crashing my terminal, but the result is still "#��C��Ʃ��Ԧ�y�Kf�g��<�e�t".
To clarify: my goal in the end is to read out the massive string of binary data, which would look like "00011001000101001010", and actually get it into a string...
You can convert a string of 1s and 0s to the numerical representation of a byte using Number.parseInt(str, 2), and convert it back with nr.toString(2).
The entire process will look something like this:
const original = '0000010100000111';
// Split the string into 8-character substrings
const stringBytes = original.match(/.{8}/g);
// Convert each 8-character string to its numerical byte value
const numBytes = stringBytes.map((s) => Number.parseInt(s, 2));
// Put the numbers into a Uint8Array (backed by an ArrayBuffer)
const buffer = Uint8Array.from(numBytes);
// Write to file
// Read from file and reverse the process
const decoded = [...buffer].map((b) => b.toString(2).padStart(8, '0')).join('');
console.log('original', original, 'decoded', decoded, 'same', original === decoded);
var binary = fs.readFileSync("./binary.bin");
binary = [...binary].map((b) => b.toString(2).padStart(8, "0")).join("");
console.log(binary);
// Output will be a string of 0s and 1s, e.g. 0100001110110100
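One caveat not covered above: if the concatenated bit string's length is not a multiple of 8, original.match(/.{8}/g) silently drops the trailing bits. A possible fix (my own sketch, not part of the original answer) is to pad the last group with zeros and store the real bit length alongside the file:
const bits = '000110010001010010100'; // 21 bits, not a multiple of 8
const bitLength = bits.length; // save this together with the data
const padded = bits.padEnd(Math.ceil(bitLength / 8) * 8, '0');
const bytes = Uint8Array.from(padded.match(/.{8}/g).map((s) => parseInt(s, 2)));
// after reading back, trim the padding again:
const restored = [...bytes].map((b) => b.toString(2).padStart(8, '0')).join('').slice(0, bitLength);
console.log(restored === bits); // true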
Suppose I have a base64-encoded string and I want to convert it into an ArrayBuffer. I can do it this way:
// base64 decode the string to get the binary data
const binaryString = window.atob(base64EncodedString);
// convert from a binary string to an ArrayBuffer
const buf = new ArrayBuffer(binaryString.length);
const bufView = new Uint8Array(buf);
for (let i = 0, strLen = binaryString.length; i < strLen; i++) {
bufView[i] = binaryString.charCodeAt(i);
}
// get ArrayBuffer: `buf`
From String.prototype.charCodeAt(), it will return an integer between 0 and 65535 representing the UTF-16 code unit at the given index. But a Uint8Array's value range is [0, 255].
I was initially thinking that the code unit we obtain from charCodeAt() could go out of the bounds of the Uint8Array range. Then I checked the built-in atob() function, which returns a binary string containing the decoded data. Every character of such a string stays in the range 0 to 255, which is exactly the range of a Uint8Array, and that's why we are safe to use charCodeAt() in this case.
That's my understanding. I'm not sure if I'm interpreting this correctly. Thanks for your help!
So it looks like my understanding is correct.
Thanks to #Konrad, and here is his/her add-up:
charCodeAt is designed to support UTF-16. And UTF-16 was designed to be compatible with ASCII, so the first 128 code points have exactly the same values as in ASCII (and the first 256 match Latin-1, which covers every character atob can return).
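A quick console check of that claim (this snippet is mine, purely for illustration):
const bin = atob('/////w=='); // base64 for four 0xFF bytes
console.log([...bin].map((c) => c.charCodeAt(0))); // [255, 255, 255, 255], never above 255, so Uint8Array is safe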
I am writing a small utility library in Node.js to request the server status of a given Minecraft host. I am using the Server List Ping protocol as outlined here (https://wiki.vg/Server_List_Ping) and got it mostly working as expected, albeit with big trouble working with data types JavaScript does not natively support (VarInt); I had to scour the internet to find a way of converting JS numbers into VarInts in order to craft the necessary packet buffers:
function toVarIntBuffer(integer) {
  let buffer = Buffer.alloc(0);
  while (true) {
    let tmp = integer & 0b01111111; // take the lowest 7 bits
    integer >>>= 7; // unsigned shift to the next 7-bit group
    if (integer != 0) {
      tmp |= 0b10000000; // set the MSB: more bytes follow
    }
    buffer = Buffer.concat([buffer, Buffer.from([tmp])]);
    if (integer <= 0) break;
  }
  return buffer;
}
Right now I am able to request a server status by sending the handshake packet followed by the query packet, and I do receive a JSON response with its length prepended as a VarInt.
However, here is the issue: I simply don't know how to safely identify the VarInt at the beginning of the JSON response (as it can be anywhere up to 5 bytes long) and decode it back to a readable number, so I can get the proper length of the response byte stream.
[...] as with all strings this is prefixed by its length as a VarInt
(from the protocol documentation)
My current super-hacky workaround is to concatenate the chunks as a string until it contains the same count of '{'s and '}'s (meaning a full JSON object), and to slice the response at the first '{' before parsing it.
However, I am very unhappy with this hacky, inefficient, inelegant and possibly unreliable way of solving the issue, and would rather decode the VarInt in front of the JSON response in order to get a proper length to compare against.
I don't know this protocol, but VarInts in protobuf are coded with the MSB bit:
"Each byte in a varint, except the last byte, has the most significant bit (msb) set – this indicates that there are further bytes to come. The lower 7 bits of each byte are used to store the two's complement representation of the number in groups of 7 bits, least significant group first."
Note: Too long for a comment, so posting as an answer.
Update: I browsed a bit through the URL you gave, and it is indeed the ProtoBuf VarInt. It is also described there with pseudo-code:
https://wiki.vg/Protocol#VarInt_and_VarLong
"VarInt and VarLong: Variable-length format such that smaller numbers use fewer bytes. These are very similar to Protocol Buffer Varints: the 7 least significant bits are used to encode the value and the most significant bit indicates whether there's another byte after it for the next part of the number. The least significant group is written first, followed by each of the more significant groups; thus, VarInts are effectively little endian (however, groups are 7 bits, not 8). VarInts are never longer than 5 bytes, and VarLongs are never longer than 10 bytes."
Pseudocode to read and write VarInts and VarLongs is given on that page as well.
Thanks to the reference material that #thst pointed me to, I was able to slap together a working way of reading VarInts in JavaScript.
function readVarInt(buffer) {
  let value = 0;
  let length = 0;
  let currentByte;
  while (true) {
    currentByte = buffer[length];
    value |= (currentByte & 0x7F) << (length * 7); // add the lower 7 bits at their position
    length += 1;
    if (length > 5) {
      throw new Error('VarInt exceeds allowed bounds.');
    }
    if ((currentByte & 0x80) != 0x80) break; // MSB clear: this was the last byte
  }
  return value;
}
buffer must be a byte stream starting with the VarInt, ideally using the standard Buffer class.
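To solve the framing problem from the question, you also need to know how many bytes the VarInt occupied. A possible extension (my own sketch, not part of the answer above) returns the value together with its byte count, so the JSON payload can be sliced off precisely:
function readVarIntWithSize(buffer) {
  let value = 0;
  let size = 0;
  let currentByte;
  do {
    currentByte = buffer[size];
    value |= (currentByte & 0x7F) << (size * 7);
    size += 1;
    if (size > 5) throw new Error('VarInt exceeds allowed bounds.');
  } while ((currentByte & 0x80) === 0x80);
  return { value, size };
}

// Usage sketch: once enough chunks have arrived, read the length prefix
// and slice out exactly that many bytes of JSON.
const { value: jsonLength, size } = readVarIntWithSize(response);
if (response.length >= size + jsonLength) {
  const json = JSON.parse(response.slice(size, size + jsonLength).toString('utf8'));
}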
Can a buffer have both a string and an image associated with it? If so, how do I extract them separately?
An example case would be a buffer with image data and also file name data.
I have worked with SharedArrayBuffers/ArrayBuffers before.
If you are storing image pixel data, it's going to be a 32-bit unsigned integer array, with four 8-bit segments controlling r, g, b and a respectively... yes: you CAN tack string data onto the front in the form of a 'header' if you encode it to int values and decode it back, but I have a hard time understanding why that might be desirable, because working with raw pixel data that is ONLY pixel data is simpler. (I usually just stick it in a property of an object, along with whatever other data I want to store.)
Data Buffers
Typed arrays
You can use an ArrayBuffer to create a buffer to hold the data. You then create a view over it using a typed array, e.g. unsigned 8-bit integers via Uint8Array. Views come in 8, 16, 32 and 64-bit flavors (signed and unsigned integers) as well as 32-bit (float) and 64-bit (double) floating point.
One buffer can have many views. You can read and write to a view just like any JS array. Values are automatically converted to the correct type when you write to a view, and converted to Number when you read from one.
Example
Using buffer and views to read different data types
For example, say you have file data with a 4-character header, followed by a 16-bit unsigned integer chunk length, then two signed 16-bit integer coordinates, and more data.
const fileBuffer = new ArrayBuffer(fileSizeInBytes); // note: ArrayBuffer requires `new`
// Create a view of the buffer so we can fill it with file data
const dataRaw = new Uint8Array(fileBuffer);
// load the data into dataRaw
// To get a string from the data we can create a util function
function readBufferString(buffer, start, length) {
// create a view at the position of the string in the buffer
const bytes = new Uint8Array(buffer, start, length);
// read each byte converting to JS unicode string
var str = "", idx = 0;
while (idx < length) { str += String.fromCharCode(bytes[idx++]) }
return str;
}
// get the 4-character chunk header at the start of the buffer
const header = readBufferString(fileBuffer, 0, 4);
if (header === "HEAD") {
  // Create views for 16-bit signed and unsigned integers
  const ints = new Int16Array(fileBuffer);
  const uints = new Uint16Array(fileBuffer);
  const length = uints[2]; // get the length as unsigned int16 (bytes 4-5)
  const x = ints[3]; // get the x coord as signed int16 (bytes 6-7)
  const y = ints[4]; // get the y coord as signed int16 (bytes 8-9)
}
A DataView
The above example is one way of extracting the different data types from a single buffer. However, there can be a problem with older files and some data sources regarding the order of the bytes that make up multi-byte types (e.g. 32-bit integers). This is called endianness.
To help with using the correct endianness, and to simplify access to all the different data types in a buffer, you can use a DataView.
The DataView lets you read from the buffer by type and endianness. For example, to read an unsigned 64-bit integer from the buffer:
// fileBuffer is a array buffer with the data
// Create a view
const dataView = new DataView(fileBuffer);
// read the 64 bit uint starting at the first byte in the buffer
// Note the returned value is a BigInt not a Number
const bInt = dataView.getBigUint64(0);
// If the int was in little-endian order you would instead use
const bIntLE = dataView.getBigUint64(0, true); // true for little endian
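For completeness, a DataView can write as well as read, so the example layout from above could be produced the same way (a sketch of mine, assuming the same HEAD/length/x/y layout):
const out = new ArrayBuffer(10);
const view = new DataView(out);
// write the 4-character header as individual bytes
"HEAD".split("").forEach((c, i) => view.setUint8(i, c.charCodeAt(0)));
view.setUint16(4, 1024, true); // chunk length, little endian
view.setInt16(6, -20, true); // x coord
view.setInt16(8, 35, true); // y coord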
Notes
Buffers are not dynamic. That means they cannot grow or shrink, so you must know how large the buffer needs to be when you create it.
Buffers tend to be a little slower than JavaScript's standard arrays, as there is a lot of type coercion when reading from or writing to them.
Buffers can be transferred (zero-copy transfer) across threads, making them ideal for distributing large data structures between WebWorkers. There is also SharedArrayBuffer, which lets you create true parallel-processing solutions in JS.
I have binary strings of length 60 representing yes/no states for the minutes of an hour and I would like to write them to file in Java. My three objectives are that this should be
compact (better than saving as string)
enabling an easy way to rebuild my binary strings when reading the file from JavaScript
without using third party libraries
My first thought was to convert the string to a Long (8 bytes) and save it as such, but it seems complicated to get my binary string back when reading the file in JavaScript, due to the floating-point number format and mantissa length. What is a good way to do this?
JavaScript can handle integers correctly up to 2^53 - 1, so you can use standard methods if you split the 60-bit data in two and store it as two 32-bit integers.
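A sketch of that first approach (the function names and the 32/28 split are my own choice):
// split a 60-character binary string into a 32-bit and a 28-bit integer
function binToInts(bits) {
  return [parseInt(bits.substr(0, 32), 2), parseInt(bits.substr(32, 28), 2)];
}
// and reassemble it, restoring any leading zeros
function intsToBin(high, low) {
  return high.toString(2).padStart(32, '0') + low.toString(2).padStart(28, '0');
}
const [high, low] = binToInts('0101'.repeat(15)); // a 60-bit example
console.log(intsToBin(high, low) === '0101'.repeat(15)); // true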
Alternatively, you could store the data e.g. as a 15-character hexadecimal string, and recode it into a binary string with something like this:
function hex2bin(s) {
return ("0000000000000000000000000000000" + parseInt(s.substr(0,8), 16).toString(2)).substr(-32)
+ ("000000000000000000000000000" + parseInt(s.substr(8,7), 16).toString(2)).substr(-28);
}
document.write(hex2bin("123456789ABCDEF"));
Or you could use a base-64 string to reduce the data size to 10 characters, and decode it with something like this:
function base642bin(s) {
var b = "", e = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
for (var i = 0; i < 10; i++) b += ("00000" + e.indexOf(s.charAt(i)).toString(2)).substr(-6);
return b;
}
document.write(base642bin("EjRWeJq83v"));
If you use a built-in Java function for base-64 encoding, check which encoding table it uses (sometimes the + and / are substituted for other characters).
Long binary strings can be converted to BigInt and back to binary.
const big = BigInt('0b' + a);
// store, perform math, etc.
const binaryStr = big.toString(2);
This supports values greater than 2^53 - 1 (Number.MAX_SAFE_INTEGER). E.g. here is a 95-character binary string being converted:
BigInt('0b' + '10100000100100110110010000010101111011011001101110111111111101000000101111001110001111100001101')
24847893154024981730169397005n
The prefix 0b tells the constructor that this is a binary representation.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
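One caveat for the fixed-length minute strings in the question (my own note): toString(2) drops leading zeros, so pad the result back to 60 characters:
const a = '000011' + '0'.repeat(54); // 60-bit string with leading zeros
const big = BigInt('0b' + a);
const restored = big.toString(2).padStart(60, '0');
console.log(restored === a); // true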