JavaScript Hexadecimal string to IEEE-754 Floating Point

I've looked around Stack Overflow for a few days trying to solve this problem and played in JSFiddle for a while, but no luck so far.
Here's my problem: I receive a base64-encoded string which is packed with several values, both unsigned integers and floating-point numbers. Decoding the base64 to a hex string goes fine, and so do the unsigned integers. The problem lies in decoding the floats.
b76a40e9000000350a3d422d00000059e37c409a0000002f90003b6e00000000
This is my data. As an example we'll use the first 32 bits, which are an IEEE-754 float with byte order 1-0-3-2.
So that's 0xb76a40e9, which I know is 7.30363941192626953125, and when we swap it to 3-2-1-0 it becomes 0x40e9b76a. When I put this hex string into https://www.h-schmidt.net/FloatConverter/IEEE754.html, that is confirmed.
My question is, and I've searched for this for a long time, whether there is an implementation in JavaScript that converts hexadecimal IEEE-754 float strings into a JavaScript float. JS's own parseInt will accept hex, but parseFloat doesn't for some reason?
Any input would be greatly appreciated; all the examples of code people have made thus far result in different numbers than I'm expecting.

Unpacking a float from a word using DataView:
> v = new DataView(new ArrayBuffer(4))
> v.setUint32(0, 0x40e9b76a)
> v.getFloat32(0)
7.3036394119262695
(Recommended over Uint32Array and Float32Array because it does not inherit the platform's endianness.)
You could also use this to swap the two 16-bit units:
> v.setUint32(0, 0xb76a40e9)
> ((hi, lo) => {
    v.setUint16(0, hi);
    v.setUint16(2, lo);
  })(v.getUint16(2), v.getUint16(0))
> v.getUint32(0).toString(16)
'40e9b76a'
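Combining those steps, here is a minimal sketch for the original question (the helper name hexWordToFloat and the fixed 1-0-3-2 input order are assumptions based on the example data, not a library API):
function hexWordToFloat(hexWord) {
  // hexWord is one 8-digit hex word as received, e.g. 'b76a40e9'
  const v = new DataView(new ArrayBuffer(4));
  v.setUint32(0, parseInt(hexWord, 16)); // bytes in 1-0-3-2 order
  const hi = v.getUint16(2);             // 0x40e9
  const lo = v.getUint16(0);             // 0xb76a
  v.setUint16(0, hi);                    // swap the 16-bit halves -> 0x40e9b76a
  v.setUint16(2, lo);
  return v.getFloat32(0);                // big-endian read
}
hexWordToFloat('b76a40e9') // 7.3036394119262695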

Related

How to convert 4 byte Hex to floating-point number in Node.js

I have four bytes of hex data, and I am trying to convert them to a floating-point number in Node.js.
i.e.
0x58 0x86 0x6B 0x42 --> 58.8812
0x76 0xD6 0xE3 0x42 --> 113.9189
0x91 0x2A 0xB4 0x41 --> 22.52078
I have tried converting with different functions found on the internet, but unfortunately I am not getting the desired outcome.
On https://www.scadacore.com/tools/programming-calculators/online-hex-converter/ I get the proper value in the "Float - Little Endian (DCBA)" cell by entering the hex string, but I do not know how to do it in Node.js.
I think maybe I am searching for the wrong thing or have understood it wrong.
Thank You.
Given that you have a string representation of your hex data (e.g. '58866B42' for your first example), do the following to convert it to a floating-point number:
let myNumber = Buffer.from(hexString, 'hex').readFloatLE()
The LE in readFloatLE stands for Little Endian.
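For example, applied to the three hex strings from the question (a quick sketch; the console rounds the printed floats):
const samples = ['58866B42', '76D6E342', '912AB441'];
for (const hexString of samples) {
  console.log(Buffer.from(hexString, 'hex').readFloatLE(0));
}
// prints approximately 58.8812, 113.9189 and 22.52078, matching the expected values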

I can't get a proper uint32 number in JavaScript

I am trying to convert a long number to a uint in JavaScript, but the result I get is different from the one I already have in C#.
c#:
var tt=431430059159441001;
var t=(UInt32)tt;//1570754153
js:
var arr =new Uint32Array(1);
arr[0]=431430059159441001;//1570754176
So could anybody explain why there is a difference?
That's because your number literal is really a 64-bit integer, and that cannot be represented in JavaScript's regular Number type. The Number type is a 64-bit floating-point number, which can only represent integers exactly up to around 2**53. So I would recommend just not using such a huge number literal.
A recent development in the JavaScript world is BigInts. If you can afford to use them, then your code is easy to fix:
var t = Number(BigInt.asUintN(32, 431430059159441001n));
console.log(t); // 1570754153
This is not about uints, but about floats. JavaScript uses floating point numbers, and your number exceeds the maximum range of integers that can safely be represented:
console.log(431430059159441001)
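To make the rounding concrete, here is a small sketch (assuming a BigInt-capable runtime, as in the answer above) comparing the lossy Number path with the exact BigInt path:
console.log(Number.isSafeInteger(431430059159441001)); // false - beyond 2**53 - 1
console.log(431430059159441001 % 2**32);               // 1570754176 - the literal was rounded before truncation
console.log(431430059159441001n % 2n**32n);            // 1570754153n - exact, matches the C# result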
You cannot convert 431430059159441001 to an unsigned 32-bit integer in C#: the maximum value of UInt32 is 4294967295, so the assignment var t = (UInt32)431430059159441001; gives a compiler error.
Also, 431430059159441001 is larger than Number.MAX_SAFE_INTEGER, so it cannot be represented exactly (JavaScript holds numbers in floating-point format).

Reassembling negative Python marshal int's into Javascript numbers

I'm writing a client-side Python bytecode interpreter in Javascript (specifically Typescript) for a class project. Parsing the bytecode was going fine until I tried out a negative number.
In Python, marshal.dumps(2) gives 'i\x02\x00\x00\x00' and marshal.dumps(-2) gives 'i\xfe\xff\xff\xff'. This makes sense as Python represents integers using two's complement with at least 32 bits of precision.
In my Typescript code, I use the equivalent of Node.js's Buffer class (via a library called BrowserFS, instead of ArrayBuffers etc.) to read the data. When I see the character 'i' (i.e. buffer.readUInt8(offset) == 105, signalling that the next thing is an int), I then call readInt32LE on the next offset to read a little-endian signed long (4 bytes). This works fine for positive numbers but not for negative numbers: for 1 I get '1', but for '-1' I get something like '-272777233'.
I guess that JavaScript represents numbers in 64 bits (floating point?). So, it seems like the following should work:
var longval = buffer.readInt32LE(offset); // reads a 4-byte long, gives -272777233
var low32Bits = longval & 0xffff0000; //take the little endian 'most significant' 32 bits
var newval = ~low32Bits + 1; //invert the bits and add 1 to negate the original value
//but now newval = 272826368 instead of -2
I've tried a lot of different things and I've been stuck on this for days. I can't figure out how to recover the original value of the Python integer from the binary marshal string using Javascript/Typescript. Also I think I deeply misunderstand how bits work. Any thoughts would be appreciated here.
Some more specific questions might be:
Why would buffer.readInt32LE work for positive ints but not negative?
Am I using the correct method to get the 'most significant' or 'lowest' 32 bits (i.e. does & 0xffff0000 work the way I think it does?)
Separate but related: in an actual 'long' number (i.e. longer than '-2'), I think there is a sign bit and a magnitude, and I think this information is stored in the 'highest' 2 bits of the number (i.e. at number & 0x000000ff?) -- is this the correct way of thinking about this?
The sequence ef bf bd is the UTF-8 sequence for the "Unicode replacement character", which Unicode encoders use to represent invalid encodings.
It sounds like whatever method you're using to download the data is getting accidentally run through a UTF-8 decoder and corrupting the raw datastream. Be sure you're using blob instead of text, or whatever the equivalent is for the way you're downloading the bytecode.
This got messed up only for negative values because positive values are within the normal mapping space of UTF-8 and thus get translated 1:1 from the original byte stream.
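As a rough sketch of that advice (the fetch call and URL are placeholders, and this uses a plain DataView instead of the BrowserFS Buffer): request the bytecode as a binary ArrayBuffer so no text decoding ever touches the bytes.
async function loadMarshalledInt(url) {
  const resp = await fetch(url);                          // url is hypothetical
  const bytes = new DataView(await resp.arrayBuffer());   // binary, not text
  if (bytes.getUint8(0) === 105) {                        // 'i' marks an int
    return bytes.getInt32(1, true);                       // little-endian signed long
  }
  throw new Error('unexpected marshal type code');
}
// loadMarshalledInt('/bytecode.marshal').then(console.log); // -2 for 'i\xfe\xff\xff\xff'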

Dealing With Binary / Bitshifts in JavaScript

I am trying to perform some bitshift operations and dealing with binary numbers in JavaScript.
Here's what I'm trying to do. A user inputs a value and I do the following with it:
// Square Input and mod with 65536 to keep it below that value
var squaredInput = (inputVal * inputVal) % 65536;
// Figure out how many bits is the squared input number
var bits = Math.floor(Math.log(squaredInput) / Math.log(2)) + 1;
// Convert that number to a 16-bit number using bitshift.
var squaredShifted = squaredInput >>> (16 - bits);
As long as the number is larger than 46, it works. Once it is less than 46, it does not work.
I know the problem is in the bitshift. Coming from a C background, I know this would be done differently there, since all numbers would be stored in 32-bit format (given an int). Does JavaScript do the same (since its vars are not typed)?
If so, is it possible to store a 16-bit number? If not, can I treat it as 32-bits and do the required calculations to assume it is 16-bits?
Note: I am trying to extract the middle 4-bits of the 16-bit value in squaredInput.
Another note: When printing out the var, it just prints the value without the padding, so I couldn't figure it out. I tried using parseInt and toString.
Thanks
Are you looking for this?
function get16bitnumber( inputVal ){
  return ("0000000000000000"+(inputVal * inputVal).toString(2)).substr(-16);
}
This function returns the last 16 bits of the (inputVal * inputVal) value. By having a binary string you can work with any range of bits.
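For instance (output worked out by hand from the code above):
get16bitnumber(7) // '0000000000110001' - 49 in binary, left-padded to 16 bits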
Don't use bitshifting in JS if you don't absolutely have to. The specs mention at least four number formats
IEEE 754
Int32
UInt32
UInt16
It's really confusing to know which is used when.
For example, ~ applies a bitwise inversion while converting to Int32. UInt16 seems to be used only in String.fromCharCode. Using bitshift operators converts the operands to either UInt32 or to Int32.
In your case, the right shift operator >>> forces conversion to UInt32.
When you type
a >>> b
this is what you get:
ToUInt32(a) >>> (ToUInt32(b) & 0x1f)
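If the underlying goal is the middle 4 bits of the 16-bit square (reading 'middle' as bits 6-9, which is an assumption), a plain mask and a fixed shift avoid the variable-width shift entirely; a minimal sketch:
function middleFourBits(inputVal) {
  var squaredInput = (inputVal * inputVal) % 65536; // keep 16 bits, as in the question
  return (squaredInput >>> 6) & 0x0F;               // bits 6-9 of the 16-bit value
}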

Difficulty converting unsigned int16 and signed int32 in javascript

I'm working with an api that sends data in a series of base64 strings that I'm converting into an array of bytes. I've been able to parse the time values sent in the data (year, day, hour, etc.; the api lists their datatype as unsigned char). I'm using parseInt(..., 2) in JavaScript.
The difficulty I'm having is converting signed int32 and unsigned int16 into their decimal values. For example, these are the bit strings for voltage and power:
Voltage (unsigned int16) 01101010 00001001 - Should be around 120.0
Power (signed int32) 10101010 00010110 00000000 00000000 - Should be 0-10 kWh
Does anyone know how I can convert these values? Also, I wrote a simple function to convert base64 to an array of bytes that I'm pretty sure is correct, but since the above values don't make any sense, maybe it isn't. If that's the case, does anyone know of a plugin that converts base64 to binary?
Thanks,
Tristan
I can't see how 0110101000001001 converts into 120... It's either 27145 or 2410 depending on endianness.
Your voltage as an unsigned int is 27145, is that what you're getting from your conversion, because it is the correct value. Your power is -1441398784 as a signed int.
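To check both readings yourself, here is a small sketch using a DataView over the raw bytes (the byte values are taken from the bit strings in the question; how to slice them out of the full base64 payload is not shown):
const voltage = new DataView(new Uint8Array([0x6A, 0x09]).buffer);
const power = new DataView(new Uint8Array([0xAA, 0x16, 0x00, 0x00]).buffer);
console.log(voltage.getUint16(0, false)); // 27145 (big-endian)
console.log(voltage.getUint16(0, true));  // 2410  (little-endian)
console.log(power.getInt32(0, false));    // -1441398784 (big-endian, as in the answer)
console.log(power.getInt32(0, true));     // 5802  (little-endian)
// For the base64 step, atob plus Uint8Array needs no plugin, e.g.
// const bytes = Uint8Array.from(atob(base64String), c => c.charCodeAt(0)); // base64String is a placeholder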
