I need to add compression to my project, and I decided to use the LZJB algorithm because it is fast and the code is small. I found this library: https://github.com/copy/jslzjb-k
But the API is not very nice, because to decompress the file you need the length of the original input buffer (a Uint8Array is not dynamic, so you need to allocate the output up front). So I want to save the length of the input buffer in the first few bytes of the Uint8Array, so I can extract that value and create the output Uint8Array from it.
I want the function that converts an integer into a Uint8Array to be generic — maybe saving the number of bytes into the first byte, so you know how much data you need to extract to read the integer back. I guess I need to extract those bytes and use some bit shifting to get the original number, but I'm not exactly sure how to do this.
So how can I write a generic function that converts an integer into a Uint8Array that can be embedded in a bigger array, and then extract that number again?
Here are working functions (based on "Converting javascript Integer to byte array and back"):
function numberToBytes(number) {
  // you can use a constant number of bytes by using 8 or 4 instead;
  // the + 1 covers exact powers of 256, Math.max covers number === 0
  const len = Math.max(1, Math.ceil(Math.log2(number + 1) / 8));
  const byteArray = new Uint8Array(len);
  for (let index = 0; index < byteArray.length; index++) {
    const byte = number & 0xff;
    byteArray[index] = byte;
    number = (number - byte) / 256;
  }
  return byteArray;
}
function bytesToNumber(byteArray) {
  let result = 0;
  for (let i = byteArray.length - 1; i >= 0; i--) {
    result = (result * 256) + byteArray[i];
  }
  return result;
}
By using const len = Math.max(1, Math.ceil(Math.log2(number + 1) / 8)); the array has only the bytes it needs (the + 1 covers exact powers of 256, and Math.max covers zero). If you want a fixed size, you can use a constant 8 or 4 instead.
In my case, I just saved the length of the bytes in the first byte.
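For completeness, here is a sketch of how the length prefix can be framed together with the compressed payload using the two functions above (frame/unframe are illustrative names; compressed stands for the Uint8Array produced by the compressor):

function frame(originalLength, compressed) {
  // layout: [byteCount][length bytes...][compressed payload]
  const lenBytes = numberToBytes(originalLength);
  const out = new Uint8Array(1 + lenBytes.length + compressed.length);
  out[0] = lenBytes.length;                 // how many bytes encode the length
  out.set(lenBytes, 1);                     // the length itself
  out.set(compressed, 1 + lenBytes.length); // the compressed data
  return out;
}

function unframe(framed) {
  const count = framed[0];
  const originalLength = bytesToNumber(framed.subarray(1, 1 + count));
  const payload = framed.subarray(1 + count);
  return { originalLength, payload };
}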
General answer
These functions allow any integer (they use BigInts internally, but accept Number arguments too) to be encoded into, and decoded from, any part of a Uint8Array. It is somewhat overkill, but I wanted to learn how to work with arbitrary-sized integers in JS.
// n can be a bigint or a number
// bs is an optional Uint8Array of sufficient size
// if unspecified, a large-enough Uint8Array will be allocated
// start (optional) is the offset
// where the length-prefixed number will be written
// returns the resulting Uint8Array
function writePrefixedNum(n, bs, start) {
  start = start || 0;
  let len = start + 2; // start, length, and 1 byte min
  for (let i = 0x100n; i <= n; i <<= 8n, len++) /* increment length */;
  if (bs === undefined) {
    bs = new Uint8Array(len);
  } else if (bs.length < len) {
    throw `byte array too small; ${bs.length} < ${len}`;
  }
  let r = BigInt(n);
  for (let pos = start + 1; pos < len; pos++) {
    bs[pos] = Number(r & 0xffn);
    r >>= 8n;
  }
  bs[start] = len - start - 1; // write byte-count to start byte
  return bs;
}
// bs must be a Uint8Array from where the number will be read
// start (optional, defaults to 0)
// is where the length-prefixed number can be found
// returns a bigint, which can be coerced to int using Number()
function readPrefixedNum(bs, start) {
  start = start || 0;
  let size = bs[start]; // read byte-count from start byte
  let n = 0n;
  if (bs.length < start + size + 1) { // prefix byte + size data bytes
    throw `byte array too small; ${bs.length} < ${start + size + 1}`;
  }
  for (let pos = start + size; pos >= start + 1; pos--) {
    n <<= 8n;
    n |= BigInt(bs[pos]);
  }
  return n;
}
function test(n) {
  const offset = 2;
  let bs = writePrefixedNum(n, undefined, offset);
  console.log(bs);
  let result = readPrefixedNum(bs, offset);
  console.log(n, result, "correct?", n == result);
}
test(0)
test(0x1020304050607080n)
test(0x0807060504030201n)
Simple 4-byte answer
This answer encodes 4-byte integers to and from Uint8Arrays.
function intToArray(i) {
  return Uint8Array.of(
    (i & 0xff000000) >> 24,
    (i & 0x00ff0000) >> 16,
    (i & 0x0000ff00) >>  8,
    (i & 0x000000ff) >>  0);
}
function arrayToInt(bs, start) {
  start = start || 0;
  const bytes = bs.subarray(start, start + 4);
  let n = 0;
  for (const byte of bytes.values()) {
    n = (n << 8) | byte;
  }
  return n;
}
for (let v of [123, 123 << 8, 123 << 16, 123 << 24]) {
  let a = intToArray(v);
  let r = arrayToInt(a, 0);
  console.log(v, a, r);
}
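Alternatively, a DataView can do the same 4-byte, big-endian round trip; here is a minimal sketch (the DV-suffixed names are just illustrative):

function intToArrayDV(i) {
  const bs = new Uint8Array(4);
  new DataView(bs.buffer).setInt32(0, i, false); // false = big-endian
  return bs;
}

function arrayToIntDV(bs, start = 0) {
  // signed 32-bit result, matching arrayToInt above
  return new DataView(bs.buffer, bs.byteOffset + start, 4).getInt32(0, false);
}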
Posting this one-liner in case it is useful to anyone who needs to work with non-negative integers below 2^31 (JavaScript's bitwise operators work on 32-bit integers, so it does not cover the full double-safe range up to 2^53). It strictly uses bitwise operations and has no need for constants or values other than the input to be defined.
export const encodeUvarint = (n: number): Uint8Array => n >= 0x80
  ? Uint8Array.from([(n & 0x7f) | 0x80, ...encodeUvarint(n >> 7)])
  : Uint8Array.from([n & 0xff]);
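A matching decoder is not shown; a minimal sketch of one (same 32-bit limitation, decodeUvarint is an illustrative name) could be:

const decodeUvarint = (bytes) => {
  // read 7 bits per byte, least-significant group first,
  // stopping after a byte without the 0x80 continuation bit
  let n = 0, shift = 0;
  for (const b of bytes) {
    n |= (b & 0x7f) << shift;
    shift += 7;
    if ((b & 0x80) === 0) break;
  }
  return n;
};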
Related
I have found 3 methods to convert a Uint8Array to a BigInt, and all of them give different results for some reason. Could you please tell me which one is correct and which one I should use?
Using the bigint-conversion library. We can use the bigintConversion.bufToBigint() function to get a BigInt. Its implementation is as follows:
export function bufToBigint (buf: ArrayBuffer|TypedArray|Buffer): bigint {
  let bits = 8n
  if (ArrayBuffer.isView(buf)) bits = BigInt(buf.BYTES_PER_ELEMENT * 8)
  else buf = new Uint8Array(buf)
  let ret = 0n
  for (const i of (buf as TypedArray|Buffer).values()) {
    const bi = BigInt(i)
    ret = (ret << bits) + bi
  }
  return ret
}
Using DataView:
let view = new DataView(arr.buffer, 0);
let result = view.getBigUint64(0, true);
Using a FOR loop:
let result = BigInt(0);
for (let i = arr.length - 1; i >= 0; i++) {
  result = result * BigInt(256) + BigInt(arr[i]);
}
I'm honestly confused about which one is right, since all of them give different results, but they do all give results.
I'm fine with either BE or LE, but I'd just like to know why these 3 methods give different results.
One reason for the different results is that they use different endianness.
Let's turn your snippets into a form where we can execute and compare them:
let source_array = new Uint8Array([
  0xff, 0xee, 0xdd, 0xcc, 0xbb, 0xaa, 0x99, 0x88,
  0x77, 0x66, 0x55, 0x44, 0x33, 0x22, 0x11]);
let buffer = source_array.buffer;

function method1(buf) {
  let bits = 8n
  if (ArrayBuffer.isView(buf)) {
    bits = BigInt(buf.BYTES_PER_ELEMENT * 8)
  } else {
    buf = new Uint8Array(buf)
  }
  let ret = 0n
  for (const i of buf.values()) {
    const bi = BigInt(i)
    ret = (ret << bits) + bi
  }
  return ret
}

function method2(buf) {
  let view = new DataView(buf, 0);
  return view.getBigUint64(0, true);
}

function method3(buf) {
  let arr = new Uint8Array(buf);
  let result = BigInt(0);
  for (let i = arr.length - 1; i >= 0; i--) {
    result = result * BigInt(256) + BigInt(arr[i]);
  }
  return result;
}

console.log(method1(buffer).toString(16));
console.log(method2(buffer).toString(16));
console.log(method3(buffer).toString(16));
Note that this includes a bug fix for method3: where you wrote for (let i = arr.length - 1; i >= 0; i++), you clearly meant i-- at the end.
For "method1" this prints: ffeeddccbbaa998877665544332211
Because method1 is a big-endian conversion (first byte of the array is most-significant part of the result) without size limit.
For "method2" this prints: 8899aabbccddeeff
Because method2 is a little-endian conversion (first byte of the array is least significant part of the result) limited to 64 bits.
If you switch the second getBigUint64 argument from true to false, you get big-endian behavior: ffeeddccbbaa9988.
To eliminate the size limitation, you'd have to add a loop: using getBigUint64 you can get 64-bit chunks, which you can assemble using shifts similar to method1 and method3.
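A sketch of such a loop, assuming a big-endian result and a buffer whose length is a multiple of 8 bytes:

function bufferToBigIntBE(buf) {
  const view = new DataView(buf);
  let result = 0n;
  for (let offset = 0; offset < view.byteLength; offset += 8) {
    // shift the previous chunks up and append the next 64-bit chunk
    result = (result << 64n) | view.getBigUint64(offset, false); // false = big-endian
  }
  return result;
}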
For "method3" this prints: 112233445566778899aabbccddeeff
Because method3 is a little-endian conversion without size limit. If you reverse the for-loop's direction, you'll get the same big-endian behavior as method1: result * 256n gives the same value as result << 8n; the latter is a bit faster.
(Side note: BigInt(0) and BigInt(256) are needlessly verbose, just write 0n and 256n instead. Additional benefit: 123456789123456789n does what you'd expect, BigInt(123456789123456789) does not.)
So which method should you use? That depends on:
(1) Do your incoming arrays assume BE or LE encoding?
(2) Are your BigInts limited to 64 bits or arbitrarily large?
(3) Is this performance-critical code, or are all approaches "fast enough"?
Taking a step back: if you control both parts of the overall process (converting BigInts to Uint8Array, then transmitting/storing them, then converting back to BigInt), consider simply using hexadecimal strings instead: that'll be easier to code, easier to debug, and significantly faster. Something like:
function serialize(bigint) {
  return "0x" + bigint.toString(16);
}
function deserialize(serialized_bigint) {
  return BigInt(serialized_bigint);
}
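For example, a round trip with these two functions:

const s = serialize(0x1020304050607080n); // "0x1020304050607080"
console.log(deserialize(s) === 0x1020304050607080n); // true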
If you need to store really big integers that aren't bound to any fixed size such as 64 or 128 bits, and you also need to keep negative numbers, then this is a solution for you...
function encode(n) {
  let hex, bytes
  // shift all numbers 1 step to the left and xor if less than 0
  n = (n << 1n) ^ (n < 0n ? -1n : 0n)
  // convert to hex
  hex = n.toString(16)
  // pad if necessary
  if (hex.length % 2) hex = '0' + hex
  // convert hex to bytes
  bytes = hex.match(/.{1,2}/g).map(byte => parseInt(byte, 16))
  return bytes
}

function decode(bytes) {
  let hex, n
  // convert bytes back into hex
  hex = bytes.map(e => e.toString(16).padStart(2, '0')).join('')
  // convert hex to BigInt
  n = BigInt(`0x` + hex)
  // shift everything 1 step to the right and xor if the lowest (sign) bit was set
  n = (n >> 1n) ^ (n & 1n ? -1n : 0n)
  return n
}
const input = document.querySelector('input')
input.oninput = () => {
  console.clear()
  const bytes = encode(BigInt(input.value))
  // TODO: save or transmit these bytes
  // new Uint8Array(bytes)
  console.log(bytes.join(','))
  const n = decode(bytes)
  console.log(n.toString(10) + 'n') // cuz SO can't render bigints...
}
input.oninput()
<input type="number" value="-39287498324798237498237498273323423" style="width: 100%">
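To round-trip the encoded bytes through a Uint8Array, note that decode above expects a plain array (Uint8Array#map would coerce the hex strings back to numbers), so convert with Array.from first; a small sketch:

const packed = new Uint8Array(encode(-39287498324798237498237498273323423n));
const restored = decode(Array.from(packed));
console.log(restored === -39287498324798237498237498273323423n); // true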
Suppose I want to convert a base-36 encoded string to a BigInt, I can do this:
BigInt(parseInt(x,36))
But what if my string exceeds what can safely fit in a Number? e.g.
parseInt('zzzzzzzzzzzzz',36)
Then I start losing precision.
Are there any methods for parsing directly into a BigInt?
You could convert the string one digit at a time, accumulating the value in a BigInt.
function convert(value, radix) {
  return [...value.toString()]
    .reduce((r, v) => r * BigInt(radix) + BigInt(parseInt(v, radix)), 0n);
}

console.log(convert('zzzzzzzzzzzzz', 36).toString());
The same can be done with larger chunks, for example ten characters at a time (eleven gives a wrong result, because an eleven-digit base-36 chunk already exceeds Number.MAX_SAFE_INTEGER, so parseInt loses precision).
function convert(value, radix) { // value: string
  var size = 10,
      factor = BigInt(radix ** size),
      i = value.length % size || size,
      parts = [value.slice(0, i)];
  while (i < value.length) parts.push(value.slice(i, i += size));
  return parts.reduce((r, v) => r * factor + BigInt(parseInt(v, radix)), 0n);
}

console.log(convert('zzzzzzzzzzzzz', 36).toString());
Not sure if there's a built-in one, but base-X to BigInt is pretty easy to implement:
function parseBigInt(
  numberString,
  keyspace = "0123456789abcdefghijklmnopqrstuvwxyz",
) {
  let result = 0n;
  const keyspaceLength = BigInt(keyspace.length);
  // walk from the most significant digit (the first character) down
  for (let i = 0; i < numberString.length; i++) {
    const value = keyspace.indexOf(numberString[i]);
    if (value === -1) throw new Error("invalid string");
    result = result * keyspaceLength + BigInt(value);
  }
  return result;
}
console.log(parseInt("zzzzzzz", 36));
console.log(parseBigInt("zzzzzzz"));
console.log(parseBigInt("zzzzzzzzzzzzzzzzzzzzzzzzzz"));
outputs
78364164095
78364164095n
29098125988731506183153025616435306561535n
The default keyspace there is equivalent to what parseInt with base 36 uses, but should you need something else, the option's there. :)
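For example, with a hexadecimal keyspace (an illustrative call using the function above):

console.log(parseBigInt("abba", "0123456789abcdef")); // 43962n (0xabba)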
I have this hash function below.
I know that for an input string of length 8 I get a hash with the value of 16530092119764772
The input string can only consist of the characters "abcdefghijklmnop"
What is the best approach to find the input string?
Is there a way to break down the problem mathematically without relying on a brute-force approach to find the string?
Would a recursive solution overflow the stack?
function hash(str) {
  let g = 8;
  let charset = "abcdefghijklmnop";
  for (let i = 0; i < str.length; i++) {
    g = (g * 82 + charset.indexOf(str[i]));
  }
  return g;
}
As an example, the string "agile" hashes to 29662550362.
That’s not even really a hash, because charset doesn’t have 82 characters in it. It’s more like parsing a string as a base-82 number where you can only use the first 16 symbols. It’d be completely reversible if it didn’t use floating-point numbers, which are imprecise for integers that big. In case you’re not familiar with why, the simplified version is that the operation inside the loop:
g * 82 + d
gives a different result for every possible value of g and d as long as d is less than 82, because there’s enough space between g * 82 and (g + 1) * 82 to fit 82 different ds (from 0 to 81). Each different result is reversible back to g and d by dividing by 82; the whole value is g and the remainder is d. When every operation inside the loop is reversible, you can reverse the whole thing.
So, like you might convert a number to decimal manually with a loop that divides out one digit at a time, you can convert this imprecise number into base 82:
const getDigits = (value, base) => {
  const result = [];
  while (value) {
    result.push(value % base);
    value /= base;
  }
  return result.reverse();
};

const getLetter = index =>
  String.fromCharCode(97 + index);

const getPreimage = value =>
  getDigits(value, 82n)
    .map(Number)
    .map(getLetter)
    .join('');

console.log(getPreimage(29662550362n));
console.log(getPreimage(16530092119764772n));
The results start with “i” because g starts at 8 instead of 0. The second number is also big enough to not be unique (in contrast to agile’s “hash”, which can be represented exactly by a JavaScript number), but if you were just trying to find any preimage, it’s good enough.
function hash(str) {
  let g = 8;
  let charset = "abcdefghijklmnop";
  for (let i = 0; i < str.length; i++) {
    g = (g * 82 + charset.indexOf(str[i]));
  }
  return g;
}

for (const s of ['hijackec', 'hijacked', 'hijackee', 'hijackef', 'hijackeg']) {
  console.log(s, hash(s) === 16530092119764772);
}
You could make a recursive function that starts from 8, iterates over the charset indices and stops (returns) whenever the current value gets over the passed hash.
Check the comments below for more details:
const charset = 'abcdefghijklmnop';

function bruteforce(hash, base = 8, result = {value: ''}) {
  // Always multiply the previous value by 82
  base *= 82;
  let value;
  for (let i = 0; i < charset.length; i++) {
    // Add the char index to the value
    value = base + i;
    // If we found the hash, append the current char and return
    if (value === hash) {
      result.value += charset[i];
      return base === 656 ? result.value : value;
    }
    // If we went past the hash, return null to mark this iteration as failed
    if (value > hash) {
      return null;
    }
    // Otherwise, attempt the next level starting from the current value
    value = bruteforce(hash, value, result);
    // If we found the hash from there, prepend the current char and return
    if (value === hash) {
      result.value = charset[i] + result.value;
      return base === 656 ? result.value : value;
    }
  }
  // We tried everything, no match found :(
  return null;
}

console.log(bruteforce(29662550362));
I have a working script in Python that does string-to-integer conversion with a specified radix using long(value, 16):
modulus=public_key["n"]
modulusDecoded = long(public_key["n"], 16)
which prints:
8079d7ae567dd2c02dadd1068843136314fa3893fa1fb1ab331682c6a85cad62b208d66c9974bbbb15d52676fd9907efb158c284e96f5c7a4914fd927b7326c40efa14922c68402d05ff53b0e4ccda90bbee5e6c473613e836e2c79da1072e366d0d50933327e77651b6984ddbac1fdecf1fd8fa17e0f0646af662a8065bd873
and
90218878289834622370514047239437874345637539049004160177768047103383444023879266805615186962965710608753937825108429415800005684101842952518531920633990402573136677611127418094912644368840442620417414685225340199872975797295511475162170060618806831021437109054760851445152320452665575790602072479287289305203
respectively.
This looks like a hex-to-decimal conversion.
I tried to get the same result in JS, but parseInt() and parseFloat() produce something completely different. On top of that, JavaScript doesn't seem to like the letters in the input string and sometimes returns NaN.
Could anyone please provide a function or some guidance on how to get the same functionality as in the Python script?
Numbers in JavaScript are floating point, so they always lose precision after a certain digit. To handle unlimited numbers, one can instead use an array of digits from 0 to 9, which has an unlimited range. Starting from the hex string input, I do a hex-to-nibble-array conversion, then use the double dabble algorithm to convert the array to BCD, which can be printed easily:
const hexToArray = arr => arr.split("").map(n => parseInt(n, 16));

const doubleDabble = arr => {
  var l = arr.length;
  for (var b = l * 4; b--;) {
    // add && leftshift
    const overflow = arr.reduceRight((carry, n, i) => {
      // apply the >4 +3, then leftshift
      var shifted = ((i < (arr.length - l) && n > 4) ? n + 3 : n) << 1;
      // just take the right four bits and add the eventual carry value
      arr[i] = (shifted & 0b1111) | carry;
      // carry on
      return shifted > 0b1111;
    }, 0);
    // we've exceeded the current array, lets extend it:
    if (overflow) arr.unshift(overflow);
  }
  return arr.slice(0, -l);
};

const arr = hexToArray("8079d7");
const result = doubleDabble(arr);
console.log(result.join(""));
Using the built-in API parseInt, you can get up to 100 digits of accuracy on Firefox and 20 digits of accuracy on Chrome.
a = parseInt('8079d7ae567dd2c02dadd1068843136314fa3893fa1fb1ab331682c6a85cad62b208d66c9974bbbb15d52676fd9907efb158c284e96f5c7a4914fd927b7326c40efa14922c68402d05ff53b0e4ccda90bbee5e6c473613e836e2c79da1072e366d0d50933327e77651b6984ddbac1fdecf1fd8fa17e0f0646af662a8065bd873', 16)
a.toPrecision(110)
> Uncaught RangeError: toPrecision() argument must be between 1 and 21
# Chrome
a.toPrecision(20)
"9.0218878289834615508e+307"
# Firefox
a.toPrecision(100)
"9.021887828983461550807409292694387726882781812072572899692574101215517323445643340153182035092932819e+307"
From the ECMAScript Spec,
Let p be ? ToInteger(precision).
...
If p < 1 or p > 100, throw a RangeError exception.
As described in this answer, JavaScript numbers cannot represent integers larger than 9.007199254740991e+15 without loss of precision.
Working with larger integers in JavaScript requires a BigInt library or other special-purpose code, and large integers will then usually be represented as strings or arrays.
Re-using code from this answer helps to convert the hexadecimal number representation
8079d7ae567dd2c02dadd1068843136314fa3893fa1fb1ab331682c6a85cad62b208d66c9974bbbb15d52676fd9907efb158c284e96f5c7a4914fd927b7326c40efa14922c68402d05ff53b0e4ccda90bbee5e6c473613e836e2c79da1072e366d0d50933327e77651b6984ddbac1fdecf1fd8fa17e0f0646af662a8065bd873
to its decimal representation
90218878289834622370514047239437874345637539049004160177768047103383444023879266805615186962965710608753937825108429415800005684101842952518531920633990402573136677611127418094912644368840442620417414685225340199872975797295511475162170060618806831021437109054760851445152320452665575790602072479287289305203
as demonstrated in the following snippet:
function parseBigInt(bigint, base) {
  // convert bigint string to array of digit values
  for (var values = [], i = 0; i < bigint.length; i++) {
    values[i] = parseInt(bigint.charAt(i), base);
  }
  return values;
}

function formatBigInt(values, base) {
  // convert array of digit values to bigint string
  for (var bigint = '', i = 0; i < values.length; i++) {
    bigint += values[i].toString(base);
  }
  return bigint;
}

function convertBase(bigint, inputBase, outputBase) {
  // takes a bigint string and converts to different base
  var inputValues = parseBigInt(bigint, inputBase),
      outputValues = [], // output array, little-endian/lsd order
      remainder,
      len = inputValues.length,
      pos = 0,
      i;
  while (pos < len) { // while digits left in input array
    remainder = 0; // set remainder to 0
    for (i = pos; i < len; i++) {
      // long integer division of input values divided by output base
      // remainder is added to output array
      remainder = inputValues[i] + remainder * inputBase;
      inputValues[i] = Math.floor(remainder / outputBase);
      remainder -= inputValues[i] * outputBase;
      if (inputValues[i] == 0 && i == pos) {
        pos++;
      }
    }
    outputValues.push(remainder);
  }
  outputValues.reverse(); // transform to big-endian/msd order
  return formatBigInt(outputValues, outputBase);
}

var largeNumber =
  '8079d7ae567dd2c02dadd1068843136314fa389' +
  '3fa1fb1ab331682c6a85cad62b208d66c9974bb' +
  'bb15d52676fd9907efb158c284e96f5c7a4914f' +
  'd927b7326c40efa14922c68402d05ff53b0e4cc' +
  'da90bbee5e6c473613e836e2c79da1072e366d0' +
  'd50933327e77651b6984ddbac1fdecf1fd8fa17' +
  'e0f0646af662a8065bd873';

// convert largeNumber from base 16 to base 10
var largeIntDecimal = convertBase(largeNumber, 16, 10);

// show decimal result in console:
console.log(largeIntDecimal);

// check that it matches the expected output:
console.log('Matches expected:',
  largeIntDecimal === '90218878289834622370514047239437874345637539049' +
  '0041601777680471033834440238792668056151869629657106087539378251084294158000056' +
  '8410184295251853192063399040257313667761112741809491264436884044262041741468522' +
  '5340199872975797295511475162170060618806831021437109054760851445152320452665575' +
  '790602072479287289305203'
);

// check that conversion and back-conversion results in the original number
console.log('Converts back:',
  convertBase(convertBase(largeNumber, 16, 10), 10, 16) === largeNumber
);
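As a side note, in environments with native BigInt support (ES2020+), the same hex-to-decimal conversion can be written directly; a minimal sketch using the largeNumber string above:

// native BigInt parses a 0x-prefixed hex string directly
console.log(BigInt('0x' + largeNumber).toString(10));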
Hi there, I need a function that calculates a unique integer from a number (a double-precision real number) and an integer.
To explain: I am developing a GIS application in JavaScript, and I am working with complex vector objects like polygons (arrays of point objects with two coordinates per ring) and lines (arrays of points). I need a fast algorithm to recognize that an element has changed; it must be really fast because my vector objects are collections of thousands of points. In C#, I calculate a hash code from the coordinates using the bitwise XOR operation.
But JavaScript converts all operands of bitwise operations to integers, and I need to convert the double-precision value to an integer from its binary representation, the way C# does, before applying the bitwise operation. In Reflector I can see that C# calculates the hash code for a double like this, and I need this function in JavaScript, as fast as it can be.
public override unsafe int GetHashCode() // from System.Double
{
    double num = this;
    if (num == 0.0)
    {
        return 0;
    }
    long num2 = *((long*) &num);
    return (((int) num2) ^ ((int) (num2 >> 32)));
}
Example:
var rotation = function (n) {
  n = (n >> 1) | ((n & 0x001) << 31);
  return n;
}

var x: number = 1;
var y: number = 5;
var hash = x ^ rotation(y); // result is -2147483645

var x1: number = 1.1;
var y1: number = 5;
var hash1 = x1 ^ rotation(y1); // result is -2147483645
This example's result is not correct: hash == hash1.
Example 2: Using toString gives a correct result, but calculating the hash from a string is too complicated and, I think, not fast enough.
var rotation = function (n) {
  n = (n >> 1) | ((n & 0x001) << 31);
  return n;
}

var GetHashCodeString = function(str: string): number {
  var hash = 0, i, l, ch;
  if (str.length == 0) return hash;
  for (i = 0, l = str.length; i < l; i++) {
    ch = str.charCodeAt(i);
    hash = ((hash << 5) - hash) + ch;
    hash |= 0; // convert to 32-bit integer
  }
  return hash;
}

var x: number = 1;
var y: number = 5;
var hash = GetHashCodeString(x.toString()) ^ rotation(GetHashCodeString(y.toString()));
// result is -2147483605

var x1: number = 1.1;
var y1: number = 5;
var hash1 = GetHashCodeString(x1.toString()) ^ rotation(GetHashCodeString(y1.toString()));
// result is -2147435090
Example 2's result is correct: hash != hash1.
Is there some faster way than converting the number to a string and then calculating a hash from each character? My objects are very large, and doing it that way would take a lot of time and operations...
I tried to do it using TypedArrays, but so far I have not been successful.
Thanks very much for your help.
Hi there, I tried using TypedArrays to calculate the hash code from a number, and the result is interesting. In IE the performance is 4x better, in Chrome 2x; in Firefox this approach is about equal to the string version...
var GetHashCodeNumber = function (n: number): number {
  // create an 8 byte array buffer (a number in JS is a 64-bit float)
  var arr = new ArrayBuffer(8);
  // create a view into the array buffer
  var dv = new DataView(arr);
  // set the number into the buffer as a 64-bit float
  dv.setFloat64(0, n);
  // now get the first 32 bits from the buffer as an integer (offset 0)
  var c = dv.getInt32(0);
  // now get the next 32 bits from the buffer as an integer (offset 4)
  var d = dv.getInt32(4);
  // XOR the first and second integers
  return c ^ d;
}
I think this can be useful for someone.
EDIT: using one buffer and DataView is faster!
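A sketch of what that reuse can look like (hoisting a single buffer and DataView out of the function; the Fast-suffixed name is just illustrative):

const hashBuf = new DataView(new ArrayBuffer(8));
var GetHashCodeNumberFast = function (n) {
  // reuse the same buffer on every call: write the float, read the two int32 halves
  hashBuf.setFloat64(0, n);
  return hashBuf.getInt32(0) ^ hashBuf.getInt32(4);
}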
Here is a faster way to do this in JavaScript.
const kBuf = new ArrayBuffer(8);
const kBufAsF64 = new Float64Array(kBuf);
const kBufAsI32 = new Int32Array(kBuf);

function hashNumber(n) {
  // Remove this `if` if you want 0 and -0 to hash to different values.
  if (~~n === n) {
    return ~~n;
  }
  kBufAsF64[0] = n;
  return kBufAsI32[0] ^ kBufAsI32[1];
}
It's 250x faster than the DataView approach: see benchmark.
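For instance, combining per-coordinate hashes the same way as the C# example (using the rotation helper from the question):

const pointHash = hashNumber(1.1) ^ rotation(hashNumber(5));
const otherHash = hashNumber(1) ^ rotation(hashNumber(5));
console.log(pointHash !== otherHash); // true: 1.1 and 1 hash differently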
I looked up some hashing libraries to see how they did it: xxhashjs, jshashes, etc.
Most seem to take a string or an ArrayBuffer, and also depend on UINT32-like functionality. That is equivalent to needing a binary representation of the double (as in your C# example). Notably, I did not find any solution that handled more unusual types, other than in another (unanswered) question.
That solution uses a method proposed here, which converts the number to various typed arrays. This is most likely what you want, and the fastest accurate solution (I think).
I highly recommend that you structure your code to traverse objects/arrays as desired, and also benchmark the solution to see how comparable it is to your existing methods (the non-working one and the string one).