MongoDB int64 and JavaScript

I write a Long value from Java to MongoDB which stores it as an int64.
Browsing the data via RoboMongo I can see the following value:
nanoTimestamp: 1467100788819818000
I then fetch the values in JS (using meteor) and I end up with the following object:
Object {_bsontype: "Long", low_: 932437528, high_: 341586032}
How can I work with this type on the client side?

The problem here is that JavaScript's number type is IEEE-754 double-precision binary floating point, which has roughly 15-16 significant decimal digits of precision (integers are exact only up to 2^53 - 1). So although you can get a JS number from that BSON Long:
// May not be precise!
// Coerce the parts to unsigned with >>> 0 before combining, since the
// BSON Long class stores low_ and high_ as *signed* 32-bit integers:
var num = (l.high_ >>> 0) * Math.pow(2, 32) + (l.low_ >>> 0);
...it won't be exactly the same number (in your example case, it'll come out 1467100837142847000).
If it's okay that it's imprecise (we are talking about nanoseconds here), you're all set.
If not, and you need to deal with these in JavaScript, you might consider recording them as a string rather than Long:
nanoTimestamp: "1467100788819818000"
...and then using one of the several JavaScript "big number" libraries that can do operations on arbitrarily-large integers or floating-point numbers.
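If your environment supports BigInt (ES2020+), you can also reconstruct the exact 64-bit value on the client side. A minimal sketch, assuming a BSON Long l with signed 32-bit low_/high_ parts as above:

var lo = BigInt(l.low_ >>> 0);  // coerce the signed low 32 bits to unsigned
var hi = BigInt(l.high_ >>> 0); // same for the high 32 bits
var exact = (hi << 32n) | lo;   // exact unsigned 64-bit value
console.log(exact.toString());  // "1467100837142847000" for the values above
// If the Long may be negative, reinterpret the bit pattern as signed:
console.log(BigInt.asIntN(64, exact).toString());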

Related

parseFloat stripping last digits and converting to zeros

I have a scenario where I need to parseFloat a 19-digit string into a number.
e.g. parseFloat("1000000000100000043") gives me 1000000000100000000
but the expected output required is 1000000000100000043
This is likely a precision overflow error.
The Number data type (like int and float in other languages) has a finite number of bits available to represent a number: typically around 15-16 decimal digits' worth.
When the length of the original number in the string exceeds that available precision, the number can no longer be represented exactly by the target data type.
In this case parseFloat does not report the problem; it silently rounds. If you want to catch this situation, you need to add code that checks the incoming data, or use another function, possibly a custom one.
Alternatively, you can convert the numeric value back to a string and compare it with the original to detect a discrepancy, as in the sketch below.
See also a question regarding double.Parse
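A minimal sketch of that round-trip check; it assumes the input string is in canonical form (no leading zeros, no trailing ".0"), since the comparison is textual:

function parseExactNumber(str) {
  var num = parseFloat(str);
  // Convert back and compare with the original to detect silent rounding.
  if (String(num) !== str) {
    throw new RangeError('"' + str + '" cannot be represented exactly as a Number');
  }
  return num;
}
parseExactNumber("123");                 // 123
parseExactNumber("1000000000100000043"); // throws: silently rounded to 1000000000100000000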
You are running into how Javascript numbers are stored. See, e.g., here: https://www.w3schools.com/js/js_numbers.asp
You can use a library like decimal.js to work with large, exact numbers. These libraries store the number in their own format (for example, as a string or an array of digits) but still let you do mathematical operations on it.
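A minimal sketch with decimal.js, assuming it has been installed (npm install decimal.js):

var Decimal = require("decimal.js");
var a = new Decimal("1000000000100000043"); // built from a string, so no digits are lost
var b = a.plus(1);                          // exact arithmetic on the full value
console.log(b.toFixed());                   // "1000000000100000044"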

Javascript does not support 64 bit integer, why new Date().getTime() return 41 bits number?

let number = new Date().getTime()
// number is 1523546797869
// binary: 10110001010111010011101110011011100101101
// (41 bits)
When I save it through GraphQL, I get an error saying it can only handle 32 bits because of a JavaScript language limitation:
In field "invoiceDate": Expected type "Int", found 1523546797869: Int cannot represent non 32-bit signed integer value: 1523546797869
My question is: if the JavaScript language is limited to 32-bit integers, why does getTime() return a number that is 41 bits?
I also read this thread. I think it is a little related, but can't fully understand the precision thing.
You've answered your own question with the links you have provided.
This link: Does JavaScript support 64-bit integers?, which explains that JavaScript integers are limited to 53 bits due to its use of the IEEE-754 double-precision (64-bit) format.
And this link: GraphQL BigInt, which explains that the package exists because GraphQL only supports 32-bit integers.
The GraphQL spec limits its Int type to 32-bits. Maybe you've seen this error before:
GraphQLError: Argument "num" has invalid value 9007199254740990.
Expected type "Int", found 9007199254740990.
Why? 64 bits would be too large for JavaScript's 53-bit limit. According to Lee Byron, a 52-bit integer spec would have been "too weird" (see this issue). The spec therefore settled on 32-bit integers to ensure portability to languages that can't represent 64-bit integers.
None of this has anything to do with Date.prototype.getTime() returning 41 bits, which (by the way) is all it takes for a numeric timestamp with millisecond resolution. So my confusion is: what is it you are confused about?
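To make the three limits in play concrete, a quick sketch:

console.log(Number.MAX_SAFE_INTEGER);          // 9007199254740991, i.e. 2^53 - 1: JS's exact-integer ceiling
console.log(Math.pow(2, 31) - 1);              // 2147483647: the ceiling of GraphQL's Int type
console.log(Math.ceil(Math.log2(Date.now()))); // ~41: bits needed for a current millisecond timestamp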

Should I avoid decimal type in C# when manipulating data in JavaScript

In our current project we (IMHO) use decimal types too much. We use them e.g. for properties like Mass, which is calculated (multiplications, additions, etc.) in the back end (C#) and the front end (JavaScript).
The number type in JavaScript is always a 64-bit floating point (like double in C#). When converting double to decimal and back, there are situations where the round trip fails.
Question:
Should the data being manipulated in JavaScript be always double?
I created a test which shows when this conversion fails in C#.
[Fact]
public void Test()
{
    double value = 0.000001d;
    value *= 10;
    // Console.WriteLine(value); // 9.9999999999999991E-06
    ConvertToDecimalAndBack(value);
}

private static void ConvertToDecimalAndBack(double doubleValue)
{
    decimal decimalValue = (decimal)doubleValue;
    double doubleResult = (double)decimalValue;
    Assert.Equal(doubleValue, doubleResult);
}
I interpret your question "Should the data being manipulated in JavaScript be always double?" to be one about development policy. As you and the comments indicate, the maximum precision in JavaScript is double. Also, as you point out, the C# decimal type has greater precision than double, so conversion issues can occur.
Generally, numbers should be consistent between processing platforms, especially if they are involved in financial transactions or are specifically displayed in the UI for some reason.
Here are two options, depending on your specific requirements:
If you have a requirement that JS operate on the original decimal values, institute a policy of converting with System.Decimal.ToDouble(System.Decimal) on the C# side.
If preserving the decimal value is of utmost importance (i.e. it is money), keep numeric calculations involving decimal values in one place: on the server.
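For what it's worth, the rounding in the test above is visible directly in JavaScript, since its number type is the same IEEE-754 double as C#'s double; a quick sketch:

console.log(0.000001 * 10 === 0.00001); // false: the product is not exactly 1e-5
console.log(0.000001 * 10);             // prints roughly 0.000009999999999999999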

Reassembling negative Python marshal int's into Javascript numbers

I'm writing a client-side Python bytecode interpreter in Javascript (specifically Typescript) for a class project. Parsing the bytecode was going fine until I tried out a negative number.
In Python, marshal.dumps(2) gives 'i\x02\x00\x00\x00' and marshal.dumps(-2) gives 'i\xfe\xff\xff\xff'. This makes sense as Python represents integers using two's complement with at least 32 bits of precision.
In my TypeScript code, I use the equivalent of Node.js's Buffer class (via a library called BrowserFS) rather than ArrayBuffers etc. to read the data. When I see the character 'i' (i.e. buffer.readUInt8(offset) == 105, signalling that the next thing is an int), I call readInt32LE on the next offset to read a little-endian signed long (4 bytes). This works fine for positive numbers but not for negative ones: for 1 I get 1, but for -1 I get something like -272777233.
I guess that Javascript represents numbers in 64-bit (floating point?). So, it seems like the following should work:
var longval = buffer.readInt32LE(offset); // reads a 4-byte long, gives -272777233
var low32Bits = longval & 0xffff0000; //take the little endian 'most significant' 32 bits
var newval = ~low32Bits + 1; //invert the bits and add 1 to negate the original value
//but now newval = 272826368 instead of -2
I've tried a lot of different things and I've been stuck on this for days. I can't figure out how to recover the original value of the Python integer from the binary marshal string using Javascript/Typescript. Also I think I deeply misunderstand how bits work. Any thoughts would be appreciated here.
Some more specific questions might be:
Why would buffer.readInt32LE work for positive ints but not negative?
Am I using the correct method to get the 'most significant' or 'lowest' 32 bits (i.e. does & 0xffff0000 work the way I think it does)?
Separate but related: in an actual 'long' number (i.e. longer than '-2'), I think there is a sign bit and a magnitude, and I think this information is stored in the 'highest' 2 bits of the number (i.e. at number & 0x000000ff?) -- is this the correct way of thinking about this?
The byte sequence ef bf bd is the UTF-8 encoding of the "Unicode replacement character" (U+FFFD), which UTF-8 decoders substitute for invalid byte sequences.
It sounds like whatever method you're using to download the data is accidentally being run through a UTF-8 decoder, corrupting the raw datastream. Be sure you're using blob instead of text, or whatever the equivalent is for the way you're downloading the bytecode.
This got messed up only for negative values because the bytes of small positive values fall within the ASCII range, which UTF-8 translates 1:1 from the original byte stream; the high bytes of negative values (0xfe, 0xff) are invalid UTF-8.
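A minimal Node.js sketch of that failure mode, using the marshal bytes for -2 from the question:

var bytes = Buffer.from([0x69, 0xfe, 0xff, 0xff, 0xff]); // 'i' + little-endian int32 -2
// A round trip through a UTF-8 decode replaces each invalid 0xfe/0xff byte
// with U+FFFD, which re-encodes as the three bytes ef bf bd:
var corrupted = Buffer.from(bytes.toString("utf8"), "utf8");
console.log(corrupted.length);     // 13, not 5: the payload was mangled
// Reading the raw bytes directly works for either sign:
console.log(bytes.readInt32LE(1)); // -2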

JSON transfer of bigint: 12000000000002539 is converted to 12000000000002540?

I'm transferring raw data like [{id: 12000000000002539, Name: "Some Name"}] and after parsing I get the object [{id: 12000000000002540, Name: "Some Name"}]. For now, converting the id to a string on the server side seems to help.
But is there a better way to transfer bigint data correctly?
The value is actually not exceeding the maximum numeric value in JavaScript (which is "only" about 1.7 x 10^308).
However, the value is exceeding the range of integral precision. It is not that the wrong number is sent: rather, the literal 12000000000002539 can only be represented as precisely as 12000000000002540, so the correct numeric value never existed in JavaScript. (The range of exact integers is about +/- 2^53.)
This is an interesting phenomenon of using a double-precision (binary64 in IEEE-754 speak) type to store all numeric values, including integers:
12000000000002539 === 12000000000002540 // true
The maximum number of significant decimal digits that can be stored precisely as a numeric value is 15 (15.95, really). The literal above has 17 significant digits, so some of the least significant information is silently lost as the JavaScript parser/engine reads in the literal value.
The only safe way to handle integral numbers of this magnitude in JavaScript is to use a string literal or to break the value down in another fashion (e.g. a custom numeric type or a "bigint library"). However, I recommend just using a string, as sketched below: it is human-readable, relatively compact (only two extra characters in JSON), and doesn't require special serialization. Since the value is just an "id" in this case, I hope that no math needs to be performed on it :)
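A sketch of the string approach end to end; the property names are the ones from the question:

var payload = '[{"id": "12000000000002539", "Name": "Some Name"}]';
var records = JSON.parse(payload);
console.log(records[0].id);        // "12000000000002539": every digit intact
// If arithmetic is ever needed after all, BigInt (ES2020+) can take over:
var id = BigInt(records[0].id);
console.log((id + 1n).toString()); // "12000000000002540"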
Happy coding.
