Difficulty converting unsigned int16 and signed int32 in JavaScript - javascript

I'm working with an API that sends data in a series of base64 strings that I'm converting into an array of bytes. I've been able to parse the time values sent in the data (year, day, hour, etc.; the API lists their datatype as unsigned char). I'm using parseInt(..., 2) in JavaScript.
The difficulty I'm having is converting signed int32 and unsigned int16 into their decimal values. For example, these are the bit strings for voltage and power:
Voltage (unsigned int16) 01101010 00001001 - Should be around 120.0
Power (signed int32) 10101010 00010110 00000000 00000000 - Should be 0-10 kWh
Does anyone know how I can convert these values? Also, I wrote a simple function to convert base64 to an array of bytes that I'm pretty sure is correct, but since the above values don't make any sense, maybe it isn't. If that's the case, does anyone know of a plugin that converts base64 to binary?
Thanks,
Tristan

I can't see how 0110101000001001 converts into 120... It's either 27145 or 2410 depending on endianness.

Your voltage as an unsigned int is 27145; is that what you're getting from your conversion? It is the correct value for those bits. Your power is -1441398784 as a signed int.
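Here is a minimal sketch of that conversion using a DataView, assuming the bytes come out of your base64 decode in the order shown above (big-endian) and that you hold them as an array of numbers; the helper names are mine, not from the question:
function readUint16BE(bytes, offset) {
  // Wrap the bytes in a buffer and let DataView interpret them (big-endian by default)
  var view = new DataView(Uint8Array.from(bytes).buffer);
  return view.getUint16(offset);
}
function readInt32BE(bytes, offset) {
  var view = new DataView(Uint8Array.from(bytes).buffer);
  return view.getInt32(offset); // signed 32-bit, big-endian by default
}
console.log(readUint16BE([0b01101010, 0b00001001], 0));      // 27145
console.log(readInt32BE([0b10101010, 0b00010110, 0, 0], 0)); // -1441398784
If the device actually sends little-endian data, pass true as the second argument to getUint16/getInt32 and the same bytes decode to 2410 and 5802 instead.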

Related

output INT64 from js UDF

I'm trying to use BigQuery's INT64 type to hold bit encoded information. I have to use a javascript udf function and I'd like to use all the 64 bits.
My issue is that javascript only deals with int32 so 1 << 32 == 1 and I'm not sure how to use the full 64 range that BigQuery supports in the udf.
It's not possible to use BigQuery's INT64 type directly in a JavaScript UDF, either as input or output, since JavaScript does not support a 64-bit integer type [1]. You could use FLOAT64 instead, as long as the values are less than 2^53 - 1, since it follows the IEEE 754-2008 standard for double precision [2]. You can also use a string containing the number value. Here is the documentation for supported external UDF data types [3].
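This is not from the answer above, just a sketch of how to stay under that 2^53 - 1 limit inside the UDF: bitwise operators in JavaScript truncate to 32 bits (hence 1 << 32 == 1), but plain Number arithmetic is exact up to 2^53 - 1, so fields can be packed and unpacked with multiplication and division instead of shifts (the function names and the 21/32-bit split are hypothetical):
function packFields(hi21, lo32) {
  // hi21 must fit in 21 bits and lo32 in 32 bits so the result stays below 2^53
  return hi21 * 0x100000000 + lo32; // hi21 * 2^32 + lo32
}
function unpackFields(packed) {
  var lo32 = packed % 0x100000000;             // low 32 bits
  var hi21 = Math.floor(packed / 0x100000000); // remaining high bits
  return { hi: hi21, lo: lo32 };
}
console.log(packFields(5, 7));          // 21474836487
console.log(unpackFields(21474836487)); // { hi: 5, lo: 7 }
A value built this way fits exactly in a FLOAT64 column, or can be returned as a string and cast to INT64 on the SQL side.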

JavaScript Hexadecimal string to IEEE-754 Floating Point

I've looked around Stack Overflow for a few days trying to solve this problem and played in JSFiddle for a while, but no luck so far.
Here's my problem: I receive a base64 encoded string which is packed with several values, both unsigned integers and floating point numbers. The decoding of the base64 to a hex string goes fine. So do the unsigned integers. The problem lies in decoding the floats.
b76a40e9000000350a3d422d00000059e37c409a0000002f90003b6e00000000
This is my data; as an example we'll use the first 32 bits, which is an IEEE-754 float with byte order 1-0-3-2.
So that's 0xb76a40e9, which I know is 7.30363941192626953125, and when we swap it to be 3-2-0-1 it becomes 0x40e9b76a. When I put this hex string into https://www.h-schmidt.net/FloatConverter/IEEE754.html that is confirmed.
My question is, and I've searched for this for a long time, whether there is an implementation in JavaScript that converts hexadecimal IEEE-754 float strings into a JavaScript float. JS's own parseInt will accept hex, but parseFloat doesn't for some reason?
Any input would be greatly appreciated; all the examples of code people have made thus far result in different numbers than I'm expecting.
Unpacking a float from a word using DataView:
> v = new DataView(new ArrayBuffer(4))
> v.setUint32(0, 0x40e9b76a)
> v.getFloat32(0)
7.3036394119262695
(Recommended over Uint32Array and Float32Array because it does not inherit the platform's endianness.)
You could also use this to swap the two 16-bit units:
> v.setUint32(0, 0xb76a40e9)
> ((hi, lo) => {
    v.setUint16(0, hi);
    v.setUint16(2, lo);
  })(v.getUint16(2), v.getUint16(0))
> v.getUint32(0).toString(16)
'40e9b76a'
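To go from the hex string in the question straight to the float, the same idea can be wrapped in a small helper; this is only a sketch, and the word swap and function name are assumptions based on the 1-0-3-2 ordering described above:
function hexToFloat32WordSwapped(hex) {
  var raw = parseInt(hex, 16);                      // e.g. 0xb76a40e9
  var swapped = ((raw << 16) | (raw >>> 16)) >>> 0; // swap the two 16-bit words -> 0x40e9b76a
  var view = new DataView(new ArrayBuffer(4));
  view.setUint32(0, swapped);                       // big-endian by default
  return view.getFloat32(0);
}
console.log(hexToFloat32WordSwapped("b76a40e9")); // 7.3036394119262695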

Convert to string nodejs

0xc4115 0x4cf8
I'm not sure what data type this is, so my question would be:
What data type is this and how can I convert it to something more manageable using Node.js?
You can convert hexadecimal to decimal like this:
let hex_num = "0xc4115";
console.log(Number(hex_num)); // 803093
In general you have the prefix 0x for hexadecimal, 0b for binary and 0o (or a legacy leading 0) for octal. All of these represent Numbers. JavaScript converts all of these to decimal automatically. In case you want to do it yourself, you can use parseInt(number, base).
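As a quick illustration of both approaches (the second value from the question, 0x4cf8, is included for completeness):
console.log(Number("0xc4115")); // 803093  (hex string with 0x prefix)
console.log(Number("0b1010"));  // 10      (binary)
console.log(Number("0o17"));    // 15      (octal)
// parseInt lets you name the base explicitly and skip the prefix:
console.log(parseInt("c4115", 16)); // 803093
console.log(parseInt("4cf8", 16));  // 19704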

Reassembling negative Python marshal int's into Javascript numbers

I'm writing a client-side Python bytecode interpreter in JavaScript (specifically TypeScript) for a class project. Parsing the bytecode was going fine until I tried out a negative number.
In Python, marshal.dumps(2) gives 'i\x02\x00\x00\x00' and marshal.dumps(-2) gives 'i\xfe\xff\xff\xff'. This makes sense as Python represents integers using two's complement with at least 32 bits of precision.
In my TypeScript code, I use the equivalent of Node.js's Buffer class (via a library called BrowserFS, instead of ArrayBuffers etc.) to read the data. When I see the character 'i' (i.e. buffer.readUInt8(offset) == 105, signalling that the next thing is an int), I then call readInt32LE on the next offset to read a little-endian signed long (4 bytes). This works fine for positive numbers but not for negative numbers: for 1 I get '1', but for '-1' I get something like '-272777233'.
I guess that JavaScript represents numbers as 64-bit (floating point?). So it seems like the following should work:
var longval = buffer.readInt32LE(offset); // reads a 4-byte long, gives -272777233
var low32Bits = longval & 0xffff0000; //take the little endian 'most significant' 32 bits
var newval = ~low32Bits + 1; //invert the bits and add 1 to negate the original value
//but now newval = 272826368 instead of -2
I've tried a lot of different things and I've been stuck on this for days. I can't figure out how to recover the original value of the Python integer from the binary marshal string using Javascript/Typescript. Also I think I deeply misunderstand how bits work. Any thoughts would be appreciated here.
Some more specific questions might be:
Why would buffer.readInt32LE work for positive ints but not negative?
Am I using the correct method to get the 'most significant' or 'lowest' 32 bits (i.e. does & 0xffff0000 work how I think it does?)
Separate but related: in an actual 'long' number (i.e. longer than '-2'), I think there is a sign bit and a magnitude, and I think this information is stored in the 'highest' 2 bits of the number (i.e. at number & 0x000000ff?) -- is this the correct way of thinking about this?
The sequence ef bf bd is the UTF-8 encoding of the "Unicode replacement character" (U+FFFD), which decoders substitute for byte sequences they cannot decode.
It sounds like whatever method you're using to download the data is getting accidentally run through a UTF-8 decoder and corrupting the raw datastream. Be sure you're using blob instead of text, or whatever the equivalent is for the way you're downloading the bytecode.
This got messed up only for negative values because the bytes of small positive values fall in the ASCII range, which UTF-8 passes through 1:1 from the original byte stream, while the 0xfe/0xff bytes of negative values are not valid UTF-8.
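As a sanity check once the raw bytes arrive intact, readInt32LE already handles marshal's two's-complement negatives; here is a minimal Node.js sketch, where the byte literals are the marshal payloads quoted in the question:
const plusTwo  = Buffer.from([0x69, 0x02, 0x00, 0x00, 0x00]); // marshal.dumps(2):  'i' + 02 00 00 00
const minusTwo = Buffer.from([0x69, 0xfe, 0xff, 0xff, 0xff]); // marshal.dumps(-2): 'i' + fe ff ff ff
console.log(String.fromCharCode(plusTwo.readUInt8(0))); // 'i' marks an int
console.log(plusTwo.readInt32LE(1));  // 2
console.log(minusTwo.readInt32LE(1)); // -2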

Unpack BinaryString sent from JavaScript FileReader API to Python

I'm trying to unpack a binary string sent via JavaScript's FileReader readAsBinaryString method in my Python app. It seems I could use the struct module for this. I'm unsure what to provide as the format for the unpack exactly.
Can someone confirm this is the right approach, and if so, what format I should specify?
According to the JS documentation:
The result will contain the file's data as a binary string. Every byte is represented by an integer in the range [0..255].
It sounds as if you just have an ordinary string (or bytes object in Python 3), so I'm not sure what you need to unpack.
One method of accessing the byte data is to use a bytearray; this lets you index the byte data easily:
>>> your_data = b'\x00\x12abc'
>>> b = bytearray(your_data)
>>> b[0]
0
>>> b[1]
18
If you have it as a string and don't want to use a bytearray (which needs Python 2.6 or later), then use ord to convert the character to an integer.
>>> ord(your_data[1])
18
If your binary data has a particular interpretation in terms of groups of bytes representing integers or floats with particular endianness, then the struct module is certainly your friend, but you don't need it just to examine the byte data.
