NaN payload storing 64 bit ints - javascript

I recently found out about NaN boxing. I'm also trying to build a dynamically typed programming language, so NaN boxing seemed like the right choice for representing types, but I'm still quite confused about it. One thing I don't understand is how I would store data like a 64-bit integer. One way I think would work is heap-allocating them, since a pointer is smaller than the whole integer, but wouldn't that be slow and inefficient? Part of this question comes from another confusion of mine: how can JavaScript represent numbers as high as Number.MAX_SAFE_INTEGER if it uses IEEE 754 doubles (wouldn't it be limited to 32-bit ints then)? Sorry if I sound stupid; I'm just new to this kind of bitwise work.

I think I've got it now - I was overcomplicating things. Since NaNs are doubles anyway and doubles are 64 bits, I could represent all numbers as doubles like JavaScript does, and store a value as a 32-bit int where possible, since ints are faster. I'd welcome further feedback.
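For what it's worth, the payload trick itself can be sketched in JavaScript using typed arrays (a real implementation would do this in C or similar; the helper names here are made up for illustration):

```javascript
// A quiet NaN has all exponent bits set plus the top mantissa bit,
// leaving the remaining mantissa bits free to carry a payload.
const view = new DataView(new ArrayBuffer(8));

function bitsToDouble(bits) {
  view.setBigUint64(0, bits);
  return view.getFloat64(0);
}

const QNAN = 0x7ff8000000000000n; // quiet-NaN bit pattern

// Box a small non-negative integer payload into the low 48 bits.
function boxInt(i) {
  return QNAN | BigInt(i);
}

function unboxInt(bits) {
  return Number(bits & 0xffffffffffffn); // keep the low 48 bits
}

const boxed = boxInt(42);
console.log(Number.isNaN(bitsToDouble(boxed))); // true: still a NaN to the FPU
console.log(unboxInt(boxed));                   // 42
```

Any payload that fits in those spare bits (small ints, tagged pointers) can ride inside the NaN; full 64-bit integers don't fit, which is why they usually end up heap-allocated or split.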

Related

Doing a bitwise NOT operation on a string in binary in Javascript without 2's complement

Recently I was asked to take a binary string input 10 and NOT it so the output is 01 in Javascript. My initial thought - loop over the characters and manually flip each bit - cannot be the best solution to this problem.
I am fairly sure you can use the tilde (bitwise NOT) operator to some degree, but I am awful with bit manipulation and have struggled to do this operation properly in Javascript. How could I use tilde in this instance to invert the binary? I assume I would first convert the binary to a base-ten number, invert it, and convert it back - but is there an easy way to get it out of two's complement so my final result is still 01?
Also, this was from an interview-style question, so I'm really looking to beat the time complexity of looping through the string - any alternative methods would also be appreciated.
After some testing, I have come to the conclusion that (for this particular instance) looping remains the most idiomatic, performant way to complete this binary operation. The alternative solutions were more complex, and the difference in measured ops/sec was negligible. A simple loop over the string, with memoization, remains the most performant option I tested.
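For reference, here is one non-looping sketch: XOR with an all-ones mask of the same width, which sidesteps two's complement entirely (it assumes the string fits in 31 bits; the function name is my own):

```javascript
// Invert a binary string by XORing its numeric value with a mask
// of the same bit width, e.g. "10" ^ "11" -> "01".
function invertBinary(s) {
  const n = parseInt(s, 2);
  const mask = (1 << s.length) - 1; // all ones; assumes s.length <= 31
  return (n ^ mask).toString(2).padStart(s.length, "0");
}

console.log(invertBinary("10"));   // "01"
console.log(invertBinary("1010")); // "0101"
```

Note this is still O(n) overall - parseInt and toString each walk the string - so it doesn't beat the loop asymptotically, it just hides it.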

What is the best way to store very large binary numbers in JavaScript?

I'm currently learning JavaScript and I'm very surprised there is no built-in way to work with binary numbers. I have to work with integers up to 2^4096, so I just can't use JS numbers for my calculations.
My first thought was to use arrays of 1s and 0s. But that's not satisfying, since a simple multiplication by 2 requires shifting the whole array.
So how can I work efficiently with binary numbers in JavaScript?
I'd like not to use any library. I'm interested in how it works, not in using other's abstraction.
Javascript doesn't have any biginteger type, so you would need to use an array to hold that much information.
An array of 0 and 1 values would make it easy to implement functions for it, but not very efficient.
A regular number in Javascript is a double-precision floating point number, so it can hold 52 bits of mantissa (53 bits of integer precision counting the implicit leading bit), but you would use slightly less to stay well away from any rounding errors, for example 48 bits.
The bitwise operators in Javascript work with 32 bit integers, i.e. a double is converted to a 32 bit integer when used with a bitwise operator. If you want to use the bitwise operators on the data, you could choose to store 32 bits per item in the array.
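A minimal sketch of the 32-bits-per-element idea, shown here just for doubling (the least-significant-limb-first layout and the function name are my own, not a standard API):

```javascript
// Store a big number as an array of 32-bit "limbs", least significant
// first, and double it by propagating a carry limb by limb.
function double(limbs) {
  let carry = 0;
  const out = limbs.map(limb => {
    const v = limb * 2 + carry;      // at most 33 bits, still exact in a double
    carry = v > 0xffffffff ? 1 : 0;  // did we overflow 32 bits?
    return v >>> 0;                  // keep the low 32 bits
  });
  if (carry) out.push(1);
  return out;
}

console.log(double([0x80000000])); // [0, 1], i.e. 2^31 * 2 = 2^32
```

Doubling is now O(number of limbs / 32) rather than shifting thousands of single-bit entries, and the same carry-propagation pattern extends to addition and multiplication.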
JavaScript numbers can only represent integers exactly up to 53 bits.
The best way to store "big integers" is to keep them as strings on the server side. And in case you want to manipulate them, I would suggest looking at this library: https://github.com/rauschma/strint

Does using rounded numbers decrease CPU usage in Javascript?

I'm performing a lot of calculations in javascript. I was wondering if using rounded numbers would decrease CPU usage? When I look at the inner workings of my code using console.log, the numbers have upwards of 15 decimal places.
Sometimes highly optimized engines can tell the difference between an integer and a double. For instance 1+1 might use integer math where 1.0+1 might not. Most likely this "integerness" gets lost quickly; functions like Math.pow, Math.sqrt, etc. will likely lose the property. However, I would not rely on this behavior, and even rounded numbers might not have this effect (i.e. they might still be floats after rounding).
Also, as an aside, there's probably so much overhead in the JS engine that the difference between using a float and an integer would not be that big (given that the difference is maybe a factor of 2-3 on the processor itself and the overhead is probably at least a factor of 10).
No. JavaScript does not distinguish between integers and real numbers. It only has double-precision floats.
This means that accuracy will be best with integers or binary fractions (within the range of about 15 significant digits), but actual performance won't vary much, if at all.

JavaScript 64 bit numeric precision

Is there a way to represent a number with higher than 53-bit precision in JavaScript? In other words, is there a way to represent 64-bit precision number?
I am trying to implement some logic in which each bit of a 64-bit number represents something. I lose the lower significant bits when I try to set bits higher than 2^53.
Math.pow(2,53) + Math.pow(2,0) == Math.pow(2,53)
Is there a way to implement a custom library or something to achieve this?
Google's Closure library has goog.math.Long for this purpose.
The GWT team has added long emulation support, so Java longs really hold 64 bits. Do you want 64-bit floats or whole numbers?
I'd just use either an array of integers or a string.
The numbers in Javascript are doubles; I think there is a rounding error involved in your equation.
Perhaps I should have added some technical detail. Basically the GWT long emulation uses a tuple of two numbers, the first holding the high 32 bits and the second the low 32 bits of the 64 bit long.
The library of course contains methods for things like adding two "longs" and getting a "long" result. Within your GWT Java code it just looks like two regular longs - one doesn't need to fiddle with or be aware of the tuple. By using this approach GWT avoids the problem you're probably alluding to, namely "longs" dropping the lower bits of precision, which isn't acceptable in many cases.
Whilst floats are by definition imprecise approximations of a value, a whole number like a long isn't. GWT always holds the full 64-bit long - maths on such longs never loses precision. The exception to this is overflow, but that accurately matches what occurs in Java etc. when you add two very large long values whose sum requires more than 64 bits - e.g. (2^63 - 1) + (2^63 - 1).
To do the same for floating point numbers will require a similar approach. You will need to have a library that uses a tuple.
The following code might work for you; I haven't tested it yet, though:
BigDecimal for JavaScript
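For illustration, the hi/lo tuple approach described above might look roughly like this in plain JavaScript (the names are illustrative, not GWT's actual API; bitwise operators are avoided in the arithmetic because they coerce to signed 32-bit values):

```javascript
// A 64-bit value held as two unsigned 32-bit halves.
function makeLong(hi, lo) {
  return { hi: hi >>> 0, lo: lo >>> 0 };
}

// Add two "longs", propagating the carry from the low half to the high half.
function addLongs(a, b) {
  const sum = a.lo + b.lo;                         // at most 2^33, exact in a double
  const lo = sum % 0x100000000;                    // low 32 bits
  const carry = sum >= 0x100000000 ? 1 : 0;
  const hi = (a.hi + b.hi + carry) % 0x100000000;  // overflow wraps, as in Java
  return makeLong(hi, lo);
}

// (2^32 - 1) + 1 = 2^32: the carry lands in the high half.
const r = addLongs(makeLong(0, 0xffffffff), makeLong(0, 1));
console.log(r.hi, r.lo); // 1 0
```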
Yes, 11 bits are reserved for the exponent; only 52 bits contain the value, also called the fraction.
Javascript allows bitwise operations on numbers, but only the first 32 bits are used in those operations according to the Javascript standard specification.
I do not understand the misleading GWT/Java/long answers to this Javascript/double question, though. Javascript is not Java.
Why would anyone need 64-bit precision in javascript?
Longs sometimes hold IDs of records in a DB, so it's important not to lose any of the lower bits... but floating point numbers are most of the time used for calculations. Using floats to hold monetary or similarly exacting values is plain wrong. If you truly need 64-bit precision, do the maths on the server, where it's faster and so on.

JavaScript Endian Encoding?

A response on SO got me thinking, does JavaScript guarantee a certain endian encoding across OSs and browsers?
Or put another way are bitwise shifts on integers "safe" in JavaScript?
Shifting is safe, but your question is flawed because endianness doesn't affect bit-shift operations anyway. Shifting left is the same on big-endian and little-endian systems in all languages. (Shifting right can differ, but only due to interpretation of the sign bit, not the relative positions of any bits.)
Endianness only comes into play when you have the option of interpreting some block of memory as bytes or as larger integer values. In general, Javascript doesn't give you that option since you don't get access to arbitrary blocks of memory, especially not the blocks of memory occupied by variables. Typed arrays offer views of data in an endian-sensitive way, but the ordering depends on the host system; it's not necessarily the same for all possible Javascript host environments.
Endianness describes physical storage order, not logical storage order. Logically, the rightmost bit is always the least significant bit. Whether that bit's byte is the one that resides at the lowest memory address is a completely separate issue, and it only matters when your language exposes such a concept as "lowest memory address," which Javascript does not. Typed arrays do, but then only within the context of typed arrays; they still don't offer access to the storage of arbitrary data.
Some of these answers are dated, because endianness can be relevant when using typed arrays! Consider:
var arr32 = new Uint32Array(1);
var arr8 = new Uint8Array(arr32.buffer);
arr32[0] = 255;
console.log(arr8[0], arr8[1], arr8[2], arr8[3]);
When I run this in Chrome's console, it yields 255 0 0 0, indicating that my machine is little-endian. However, typed arrays use the system endianness by default, so you might see 0 0 0 255 instead if your machine is big-endian.
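If you need a byte order you can rely on, a DataView takes an explicit endianness flag, so the result does not depend on the host machine the way raw typed-array views do:

```javascript
// DataView reads and writes with an explicit byte order, making the
// result identical on little- and big-endian hosts.
const buf = new ArrayBuffer(4);
const dv = new DataView(buf);

dv.setUint32(0, 255, /* littleEndian = */ true);

const bytes = new Uint8Array(buf);
console.log(bytes[0], bytes[1], bytes[2], bytes[3]); // 255 0 0 0 on every platform
```

This is the usual tool for parsing binary file formats or network protocols, where the byte order is fixed by the format rather than by the machine.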
Yes, they are safe. Although you're not getting the speed benefits you might hope for since JS bit operations are "a hack".
ECMAScript does actually have a concept of an integer type, but it is implicitly coerced to or from a double-precision floating-point value as necessary (if the number represented is too large or if it has a fractional component).
Many mainstream Javascript interpreters (SpiderMonkey is an example) take a shortcut in implementation and interpret all numeric values as doubles to avoid checking the actual native type of the value for each instruction. As a result of the implementation hack, bit operations are implemented as a cast to an integral type followed by a cast back to a double representation. It is therefore not a good idea to use bit-level operations in Javascript and you won't get a performance boost anyway.
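A quick sketch of what that round-trip through a 32-bit integer does to values (these are results defined by the language, easy to verify in any console):

```javascript
// Bitwise operators coerce the double to a 32-bit integer and back,
// so fractions and high bits are discarded along the way.
console.log(2.9 | 0);           // 2: fractional part dropped
console.log((2 ** 32 + 5) | 0); // 5: bits above 32 dropped
console.log(-1 >>> 0);          // 4294967295: reinterpreted as unsigned
```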
are bitwise shifts on integers "safe" in JavaScript?
Only for integers that fit within 32 bits (31+sign). Unlike, say, Python, you can't get 1<<40.
This is how the bitwise operators are defined to work by ECMA-262, even though JavaScript Numbers are actually floats. (Technically, double-precision floats, giving you 52 bits of mantissa, easily enough to cover the range of a 32-bit int.)
There is no issue of 'endianness' involved in bitwise arithmetic, and no byte-storage format where endianness could be involved is built into JavaScript.
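As a small illustration of the 32-bit limit: the spec masks the shift count to its low five bits, so shifting by 40 behaves like shifting by 8:

```javascript
// Shift counts are taken modulo 32 (40 & 31 === 8), so you never
// shift past the 32-bit window the way you can in Python.
console.log(1 << 40); // 256
console.log(1 << 8);  // 256
```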
JavaScript doesn't have an integer type, only a floating point type. You can never get close enough to the implementation details to worry about this.
