Interpret 16-bit two's complement in JavaScript (Node.js)

Hello dear swarm intelligence,
One of my current private projects is in the field of the Internet of Things, specifically LoRaWAN and TTN. For easy data handling I decided to use Node-RED, a Node.js-based flow tool, to process the received data.
This is the first time I have ever come into contact with the JavaScript world (apart from minor reading ;)). Here's the problem:
I am transmitting a C-style int16_t signed value, split into two bytes, via TTN. On the receiving side I want to merge these two bytes back into a signed 16-bit value. The problem is that JavaScript's bitwise operations work on 32-bit integers, which means that by simply merging them like this:
newMsg.payload=(msg.payload[1]<<8)|(msg.payload[0]);
I lose the sign information and just get the unsigned interpretation of the data, since the value is not stored as a 32-bit two's complement number.
Since I am not yet firmly familiar with the JavaScript "standard library", this seems like a hard problem to me!
Any help will be appreciated.

var unsignedValue = (msg.payload[1] << 8) | msg.payload[0];
if (unsignedValue & 0x8000) {
    // If the sign bit is set, fill the upper 16 bits of the result with ones.
    newMsg.payload = unsignedValue | 0xffff0000;
} else {
    // If the sign bit is not set, the result is the same as the unsigned value.
    newMsg.payload = unsignedValue;
}
Note that this still stores the value as a signed 32-bit integer, but with the right value.
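For completeness, the same sign extension can be written more compactly with shifts, or delegated to Node's Buffer API. A minimal sketch, assuming msg.payload holds the two bytes as in the question:
// Shift-based sign extension: << 16 moves bit 15 into the 32-bit sign
// position, and the arithmetic >> 16 shifts it back, replicating the sign bit.
var raw = (msg.payload[1] << 8) | msg.payload[0];
newMsg.payload = (raw << 16) >> 16;
// Alternatively, let Buffer decode the little-endian signed 16-bit value:
newMsg.payload = Buffer.from(msg.payload).readInt16LE(0);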

Related

Understanding this websocket frame parsing code

Beginner here, trying to understand on a low level how websockets work. I am trying to create my own implementation; however, I am very confused by the logic of parsing the data frame that gets sent from client => server.
I know the buffer received on the server side consists of multiple bytes, with the first two carrying the main header information (fin bit, length, opcode, mask, etc.).
I found the following code on SO that parses both bytes, and from testing, it DOES indeed return the correct values.
let index = 0;
frame = {
    data: new Buffer(0),
    fin: (buffer[index] & 128) === 128,
    length: buffer[index + 1] & 127,
    masked: (buffer[index + 1] & 128) === 128,
    opcode: buffer[index] & 15
}
My main question, though, is: HOW exactly is this returning the correct values?
I know buffer[index] and buffer[index+1] refer to the first and second byte, and that the AND operator compares the binary values of each, outputting 1 wherever the bits in both numbers are 1, and 0 otherwise... but...
Where do the numbers after the & operator come from? e.g. opcode is 15, length is 127.
HOW exactly does using the AND operator on both these values give the right result? This is what I really don't understand.
I apologize if these are basic computer science concepts that I'm not understanding, but if anyone out there can explain what exactly is occurring in this code, it would be much appreciated.
I get that it looks like a normal boolean AND comparison, but it is actually a bitwise AND being performed.
To clarify a bit more specifically: buffer[index] & 15 for the opcode says "compare buffer[index] with 15 (the highest opcode allowed for websockets) as binary numbers, bit by bit, and return the result as an integer". In effect it masks off the low four bits, which hold the opcode, and the opcode tells which frame type is being sent. (If you are curious you can deep dive on this here: https://www.rfc-editor.org/rfc/rfc6455#section-11.8.)
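To make that concrete, here is a small worked example (the byte value 0x81 is just an illustration: a final text frame):
// Hypothetical first header byte: 0x81 = 10000001 (FIN set, opcode 1 = text)
const byte0 = 0x81;
const fin = (byte0 & 128) === 128; // 10000001 & 10000000 = 10000000, so true
const opcode = byte0 & 15;         // 10000001 & 00001111 = 00000001, so 1
console.log(fin, opcode);          // true 1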
For the length part (the 127 mask), I refer to this answer on SO, since it is a solid one: how to work out payload size from html5 websocket
For further reading on the operator see the section in my source on bitwise logical operators.
Source: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Expressions_and_Operators

Should I avoid the decimal type in C# when manipulating data in JavaScript?

In our current project we IMHO use the decimal type too much. We use it e.g. for properties like Mass, which is calculated (multiplications, additions, etc.) in the back end (C#) and the front end (JavaScript).
The number type in JavaScript is always a 64-bit floating point number (like double in C#). When converting double to decimal and back, there are situations where the conversion fails.
Question:
Should data manipulated in JavaScript always be double?
I created a test which shows when this conversion fails in C#.
[Fact]
public void Test()
{
    double value = 0.000001d;
    value *= 10;
    // Console.WriteLine(value); // 9.9999999999999991E-06
    ConvertToDecimalAndBack(value);
}

private static void ConvertToDecimalAndBack(double doubleValue)
{
    decimal decimalValue = (decimal)doubleValue;
    double doubleResult = (double)decimalValue;
    Assert.Equal(doubleValue, doubleResult);
}
I interpret your question "Should data manipulated in JavaScript always be double?" as one about development policy. As you and the comments indicate, the maximum precision in JavaScript is double. Also, as you point out, the C# decimal type has greater precision than double, so conversion issues can occur.
Generally, numbers should be consistent between processing platforms, especially if they are involved in financial transactions or are specifically displayed in the UI for some reason.
Here are two options, depending on your specific requirements:
If you have a requirement that JS operate on the original decimal values, you should institute a policy of converting with System.Decimal.ToDouble(System.Decimal) on the C# side.
If preserving the decimal value is of utmost importance (i.e. it is money), keep numeric calculations involving decimal values in one place: on the server.
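To see the JavaScript side of the mismatch, the classic double artifacts show up in any console (a generic illustration, not project code):
// JavaScript numbers are IEEE 754 doubles, so many decimal fractions
// that are exact in C#'s decimal type are only approximated here.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false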

JavaScript to Python math translation

I have a JavaScript function that I'm trying to replicate in Python 2, and the JavaScript is doing some kind of precision-error wraparound (or something) which I'm having trouble understanding. These are large numbers, but here's the example:
In JavaScript:
a = 3141592751
b = 1234567890
result = (a*31) ^ b
window.alert(result)
Here, result = -447877661. I'm assuming this is because of a bit limit on storing large numbers, and the related wraparound to a large negative number.
Using Python 2:
a = 3141592751
b = 1234567890
result = (a*31) ^ b
print result
Here, result = 98336370147, which is mathematically correct.
How can I replicate the behaviour of the JavaScript code in Python? What is the wraparound point? Thanks!
The largest integer a variable in JavaScript can represent exactly is
+/- 9007199254740991
i.e., 2^53 - 1 (Number.MAX_SAFE_INTEGER).
One more thing to consider: if you are dealing with bitwise and shift operators to get your job done, they operate on 32-bit ints, so in that case the max safe integer is
2^31 - 1, or 2147483647
and larger operands are wrapped modulo 2^32 before the operation is applied.
Hope it helps!
More reference: MDN and StackOverflow
So, you will have to wrap at this point in your Python code.
Note: the ^ symbol in the exponents above represents "power of", unlike the XOR operator in the code.
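To illustrate the wrap (a sketch; toInt32 is a hypothetical helper mirroring the ToInt32 conversion the spec applies to every bitwise operand):
function toInt32(n) {
    n = n % 0x100000000;                          // wrap modulo 2^32
    if (n < 0) n += 0x100000000;                  // normalize into [0, 2^32)
    return n >= 0x80000000 ? n - 0x100000000 : n; // reinterpret as signed
}
var a = 3141592751;
var b = 1234567890;
console.log(toInt32(a * 31));     // -1394872527, same as (a * 31) | 0
console.log(toInt32(a * 31) ^ b); // -447877661, matching the JavaScript result
The same mask-and-subtract logic ports directly to Python 2.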

Reassembling negative Python marshal ints into JavaScript numbers

I'm writing a client-side Python bytecode interpreter in Javascript (specifically Typescript) for a class project. Parsing the bytecode was going fine until I tried out a negative number.
In Python, marshal.dumps(2) gives 'i\x02\x00\x00\x00' and marshal.dumps(-2) gives 'i\xfe\xff\xff\xff'. This makes sense as Python represents integers using two's complement with at least 32 bits of precision.
In my TypeScript code, I use the equivalent of Node.js's Buffer class (via a library called BrowserFS, rather than ArrayBuffers etc.) to read the data. When I see the character 'i' (i.e. buffer.readUInt8(offset) == 105, signalling that the next thing is an int), I call readInt32LE on the next offset to read a little-endian signed long (4 bytes). This works fine for positive numbers but not for negative ones: for 1 I get '1', but for '-1' I get something like '-272777233'.
I guess that Javascript represents numbers in 64-bit (floating point?). So, it seems like the following should work:
var longval = buffer.readInt32LE(offset); // reads a 4-byte long, gives -272777233
var low32Bits = longval & 0xffff0000; //take the little endian 'most significant' 32 bits
var newval = ~low32Bits + 1; //invert the bits and add 1 to negate the original value
//but now newval = 272826368 instead of -2
I've tried a lot of different things and I've been stuck on this for days. I can't figure out how to recover the original value of the Python integer from the binary marshal string using Javascript/Typescript. Also I think I deeply misunderstand how bits work. Any thoughts would be appreciated here.
Some more specific questions might be:
Why would buffer.readInt32LE work for positive ints but not negative ones?
Am I using the correct method to get the 'most significant' or 'lowest' 32 bits (i.e. does & 0xffff0000 work the way I think it does)?
Separate but related: in an actual 'long' number (i.e. longer than '-2'), I think there is a sign bit and a magnitude, and I think this information is stored in the 'highest' 2 bits of the number (i.e. at number & 0x000000ff?) -- is this the correct way of thinking about it?
The sequence ef bf bd is the UTF-8 encoding of the "Unicode replacement character", which Unicode decoders substitute for invalid byte sequences.
It sounds like whatever method you're using to download the data is accidentally running it through a UTF-8 decoder and corrupting the raw datastream. Be sure you're using blob instead of text, or whatever the equivalent is for the way you're downloading the bytecode.
Only negative values got messed up, because the bytes of small positive values fall within the ASCII range, which UTF-8 translates 1:1 from the original byte stream, whereas the high bytes of negative values (like fe and ff) are invalid UTF-8.
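One way to check this theory in Node is to run the marshal bytes for -2 through a UTF-8 decode/encode round trip (a sketch of the suspected failure mode, not your actual download path):
// marshal.dumps(-2) produces 'i' followed by fe ff ff ff (little-endian int32).
const raw = Buffer.from([0x69, 0xfe, 0xff, 0xff, 0xff]);
console.log(raw.readInt32LE(1)); // -2, decoded correctly from intact bytes
// Simulate the corruption: decode as UTF-8 and re-encode. Each invalid byte
// (fe, ff) becomes the replacement character U+FFFD, encoded as ef bf bd.
const corrupted = Buffer.from(raw.toString('utf8'), 'utf8');
console.log(corrupted.readInt32LE(1)); // -272777233, the bogus value observed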

Angular / JavaScript 'rounding' long values?

I have the following JSON:
[{"hashcode": 4830991188237466859},{...}]
I have the following Angular/JS code:
var res = $resource('<something>');
...
res.query({}, function(json) { hashcode = json[0].hashcode; });
...
Surprisingly (to me; I'm no JS expert), I find that something (?) is rounding the value to a precision of 1000 (rounding away the last 3 digits). This is a problem, since the value is a hash code of something.
If, on the other hand, I write the value as a String in the JSON, e.g. -
[{"hashcode": "4830991188237466859"},{...}]
this does not happen. But it causes a different problem for me with JMeter/JSON Path, which extracts the value ["4830991188237466859"] when running my query $.hashcode. I can't use that as an HTTP request parameter: I need to add ?hashcode=... to the query, but I end up with ?hashcode=["..."]
So I appreciate help with:
Understanding who is rounding my hash (and why), and how to avoid it
Help with JMeter/JSON Path
Thanks!
JavaScript can only represent integers exactly up to a certain size; see Number.MAX_SAFE_INTEGER, or paste your number into the console. You'll see it happens at the JavaScript level and has nothing to do with Angular. Since the hash doesn't represent an amount of something, it's perfectly natural for it to be a string. Which leads me to:
There is nothing wrong with site.com/page?hashcode=4830991188237466859 - the hashcode is treated as a string there, and you should keep treating it as such.
The JavaScript Number type is floating-point based, and can only represent all integers in the range between -2^53 and 2^53. Some integers outside this range are therefore subject to "rounding", as you are experiencing.
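You can see the effect directly in a console (an illustration using the number from the question; the printed form may vary slightly between engines):
console.log(Number.MAX_SAFE_INTEGER);                   // 9007199254740991 (2^53 - 1)
console.log(Number.isSafeInteger(4830991188237466859)); // false
console.log(4830991188237466859);                       // 4830991188237467000, the nearest double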
As regards the JMeter JSON Path Extractor plugin, the correct JSON Path query for your hashcode will look like
$..hashcode[0]
See the Parsing JSON chapter of the guide for XPath-to-JSON-Path mappings and more details.
