How does V8 store integers in memory?
For example the integer 5?
I know it stores it on the heap, but how exactly does it store it?
Things like metadata and the actual value itself.
Is there a constant added to the int before storing it?
V8 uses a pointer tagging scheme to distinguish small integers from heap object pointers. 5 would be stored as a Smi (small integer), a type which is not heap-allocated in V8.
You can check out the source code for the Smi class to learn more.
On 32-bit platforms, a Smi is a 31-bit signed integer shifted left by one, so its bottom bit is 0.
On 64-bit platforms, a Smi is a 32-bit signed integer stored in the upper 32 bits of the word, with the lower 32 bits (including the bottom tag bit) set to 0.
Pointers to heap objects have a 1 set for the bottom bit so that V8 can tell the difference between pointers and Smis without extra metadata.
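To make the tagging concrete, here is a purely conceptual sketch in JavaScript (not actual V8 code, which does this in C++) of the 32-bit Smi scheme described above:
var value = 5;
var tagged = value << 1;              // 0b1010 -- bottom bit is 0, so this is a Smi
var isSmi = (tagged & 1) === 0;       // true; a heap pointer would have bottom bit 1
var untagged = tagged >> 1;           // back to 5
console.log(tagged, isSmi, untagged); // 10 true 5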
In JavaScript, all numbers are stored as 64-bit floating-point values, the type C and C++ call double. There is no distinct "integer" type.
To some degree, you can use integer values naively and get the result you expect, without having to fear rounding errors. These are the so-called "safe" integers.
All integers in the range [-(2^53 - 1), +(2^53 - 1)] are "safe" integers, as described here. This means that if you add, subtract or multiply integers in that range, and the result is within that range too, then the calculation is free of rounding errors.
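For example, arithmetic stays exact inside the safe range and silently loses precision just outside it:
console.log(Number.isSafeInteger(Math.pow(2, 53) - 1));   // true
console.log(Math.pow(2, 53) - 1 + 1 === Math.pow(2, 53)); // true, still exact
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53));     // also true -- 2^53 + 1 cannot be represented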
Of course, all values in JavaScript/V8 are "boxed" to some degree, because a variable doesn't have a type (except for small integers, which use tagged pointers). If you have a variable x that is 5.25, the engine has to know both that it is a "number" and that that number is 5.25, so it will take more than 8 bytes of space. You will have to look at the V8 source code to find out more.
Related
From what I understand, integers range from -2147483648 through 2147483647.
I am confused because I noticed some references to BigInt in JavaScript. Can someone explain how I can store an integer larger than 2147483647?
To get past JavaScript's internal numeric limits, use something like a bignum library (just a random one I found; researching a good library is left as an exercise for the OP).
BigInteger.js is an arbitrary-length integer library for JavaScript, allowing arithmetic operations on integers of unlimited size, notwithstanding memory and time limitations.
Depending on what you are doing with the number, you can handle integers up to +/- (2^53) - 1, or 9007199254740991.
While not all browsers support Number.MAX_SAFE_INTEGER yet, you can calculate it with:
Math.pow(2, 53) - 1
Note: This works in pure math functions, but bitwise operations are limited to 32-bit integers, and so are array indices.
More details can be found here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER
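To see the safe-integer limit and the 32-bit limit on bitwise operations side by side:
console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991
console.log(Math.pow(2, 53) - 1);       // 9007199254740991 -- same value
console.log((Math.pow(2, 53) - 1) | 0); // -1 -- bitwise operators coerce their operands to 32 bits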
There are two ways to create a BigInt:
const bigIntVal = 123243435454656565657575n;
or
const bigIntVal = BigInt("123243435454656565657575");
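Both forms produce the same arbitrary-precision value, and arithmetic between BigInts keeps full integer precision (note that a BigInt and a regular Number can't be mixed in the same operation):
const a = 123243435454656565657575n;
const b = BigInt("123243435454656565657575");
console.log(a === b); // true
console.log(a + b);   // 246486870909313131315150n
console.log(a * 2n);  // 246486870909313131315150n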
Obviously numbers in JavaScript aren't explicitly typed, but are represented as types by the interpreter. I just saw a thing about Google's V8 JS engine that said it's greatly optimized for 32-bit numbers, and I found it odd that many JS programmers would have a need for doubles even with floating point. The only example I could think of personally is when I'm dividing two integers, which I do often in order to normalize screen coordinates between 0 and 1, and the interpreter keeps the result at 64 bits instead of 32. This also seems unlikely to me, but then again I don't know how else someone needing such precision would specify it. So now I'm wondering... is there a way to ensure the quotient of two (not gigantic) integers is under 32 bits in length?
I just saw a thing about Google's V8 JS engine that said it's greatly optimized for 32 bit numbers
This only means that V8 does internally store those numbers as integers when it can deduce that they will stay in the respective range. This is common for counters or array indices, for example.
Is there a way to ensure the quotient of two (not gigantic) integers is under 32 bits in length?
No - all arithmetic operations are carried out on 64-bit floating point numbers (like all numbers in JS). The only thing you can do is truncate the result back to a 32-bit integer. You can use the unsigned right shift operator (>>>) for that, which internally casts its operands to 32-bit integers:
var q = (a / b) >>> 0;
See What is the JavaScript >>> operator and how do you use it? for details.
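As a hypothetical comparison of the two common truncation idioms (| 0 keeps the sign, >>> 0 produces an unsigned 32-bit result):
var a = 7, b = 2;
console.log((a / b) >>> 0);  // 3
console.log((a / b) | 0);    // 3
console.log((-a / b) | 0);   // -3 -- truncates toward zero and keeps the sign
console.log((-a / b) >>> 0); // 4294967293 -- unsigned wrap-around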
I'm reading the Number Type section of the book Professional JavaScript for Web Developers. It seems to say that all ECMAScript numbers are binary64 floating point, which is corroborated by this MDN article. But the book author also says:
Because storing floating-point values uses twice as much memory as storing integer values, ECMAScript always looks for ways to convert values into integers.
I expected numbers to each occupy the same amount of memory: 64 bits. And the MDN article says, "There is no specific type for integers". Anyone know what the book author meant? How can integers take up less memory when they're stored as 64-bit floats (if I have that right)? You'll find the whole section at the link above (free sample of the book).
JavaScript doesn't have any other number type than double precision floating point (except in ECMAScript 6 typed arrays), but the underlying implementation may choose to store the numbers in any way it likes as long as the JavaScript code behaves the same.
JavaScript is compiled nowadays, which means that it can be optimised in many ways that are not obvious in the language.
If a local variable in a function only ever takes on an integer value and isn't exposed outside the function in any way, then it could actually be implemented using an integer type when the code is compiled.
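For instance, a function like the hypothetical one below only ever handles values in the int32 range, so an engine is free to keep them in machine integers internally (whether it actually does so is an implementation detail):
function sumTo(n) {
  var sum = 0;
  for (var i = 0; i < n; i++) {
    sum = (sum + i) | 0; // | 0 keeps the value in the int32 range
  }
  return sum;
}
console.log(sumTo(10)); // 45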
The implementation varies in different browsers. Currently it seems to make a huge difference in MS Edge, a big difference in Firefox, and no difference at all in Chrome: http://jsperf.com/int-vs-double-implementation (Note: jsperf thinks that MS Edge is Chrome 42.)
Further research:
The JS engines SpiderMonkey (Firefox), V8 (Chrome, Opera), JavaScriptCore (Safari), Chakra (IE) and Rhino (and possibly others, but those are harder to find implementation details about) use different ways of using integer types or storing numbers as integers when possible. Some quotes:
"To have an efficient representation of numbers and JavaScript
objects, V8 represents both of us with a 32 bits value. It uses a bit
to know if it is an object (flag = 1) or an integer (flag = 0) called
here SMall Integer or SMI because of its 31 bits."
http://thibaultlaurens.github.io/javascript/2013/04/29/how-the-v8-engine-works/
"JavaScript does not have a built-in notion of an integer value, but
for efficiency JavaScriptCore will represent most integers as int32
rather than as double."
http://trac.webkit.org/wiki/JavaScriptCore
"[...] non-double values are a 32-bit type tag and a 32-bit payload,
which is normally either a pointer or a signed 32-bit integer."
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/Internals
"In Windows 10 and Microsoft Edge, we’ve started optimizing Chakra’s
parser and the JIT compiler to identify non const variable
declarations of integers that are defined globally and are never
changed during the course of the execution time of the program."
https://blogs.windows.com/msedgedev/2015/05/20/delivering-fast-javascript-performance-in-microsoft-edge/
Because storing floating-point values uses twice as much memory as storing integer values, ECMAScript always looks for ways to convert values into integers.
This paragraph is complete nonsense. Ignore it!
Numbers are numbers. ECMAScript makes no distinction whatsoever between floating-point and integer numeric values.
Even within most JS runtimes, all numeric values are stored as double-precision floating point.
Not sure I fully understood your question, but "There is no specific type for integers" means that JavaScript doesn't recognize separate types for integers and floats, but they are both typed as Numbers. The int/float separation happens "behind the curtains", and that's what they meant by "ECMAScript always looks for ways to convert values into integers".
The bottom line is that you don't have to worry about it, unless you specifically need your variables to mimic integers or floats for use in other languages. In that case it's probably (did I say probably?) best to pass them as strings, because you'd have trouble passing, say, 5.0 as a float: JS would immediately convert it to 5, exactly because of the "ECMAScript always looks for ways to convert values into integers" part.
alert(5.0); // don't expect a float from this
Is there a way to represent a number with higher than 53-bit precision in JavaScript? In other words, is there a way to represent 64-bit precision number?
I am trying to implement some logic in which each bit of a 64-bit number represents something. I lose the lower significant bits when I try to set bits higher than 2^53.
Math.pow(2,53) + Math.pow(2,0) == Math.pow(2,53)
Is there a way to implement a custom library or something to achieve this?
Google's Closure library has goog.math.Long for this purpose.
The GWT team have added long emulation support, so Java longs really hold 64 bits. Do you want 64-bit floats or whole numbers?
I'd just use either an array of integers or a string.
The numbers in JavaScript are doubles; I think there is a rounding error involved in your equation.
Perhaps I should have added some technical detail. Basically the GWT long emulation uses a tuple of two numbers, the first holding the high 32 bits and the second the low 32 bits of the 64 bit long.
The library of course contains methods to add stuff like adding two "longs" and getting a "long" result. Within your GWT Java code it just looks like two regular longs - one doesn't need to fiddle or be aware of the tuple. By using this approach GWT avoids the problem you're probably alluding to, namely "longs" dropping the lower bits of precision which isn't acceptable in many cases.
Whilst floats are by definition imprecise approximations of a value, a whole number like a long isn't. GWT always holds a 64-bit long, and maths using such longs never loses precision. The exception to this is overflow, but that accurately matches what occurs in Java and the like when you add two very large long values whose sum requires more than 64 bits, e.g. (2^63 - 1) + (2^63 - 1).
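To make the tuple idea concrete, here is a minimal sketch (a hypothetical helper, not the actual GWT implementation) of adding two unsigned 64-bit values stored as {hi, lo} pairs of 32-bit halves:
function add64(a, b) {
  var lo = (a.lo >>> 0) + (b.lo >>> 0); // the sum of the low halves may exceed 32 bits
  var carry = lo > 0xFFFFFFFF ? 1 : 0;
  return {
    hi: (a.hi + b.hi + carry) >>> 0,    // wrap the high half at 32 bits
    lo: lo >>> 0                        // keep only the low 32 bits
  };
}
var x = { hi: 0x00000001, lo: 0xFFFFFFFF }; // 0x1FFFFFFFF
var y = { hi: 0x00000000, lo: 0x00000001 }; // 1
console.log(add64(x, y));                   // { hi: 2, lo: 0 }, i.e. 0x200000000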
Doing the same for floating-point numbers would require a similar approach: you would need a library that uses a tuple.
The following library might work for you; I haven't tested it yet, however:
BigDecimal for JavaScript
Yes, 11 bits are reserved for the exponent; only 52 bits contain the value, also called the fraction.
JavaScript allows bitwise operations on numbers, but only the first 32 bits are used in those operations, according to the JavaScript standard specification.
I do not understand the misleading GWT/Java long answers to a JavaScript/double question, though; JavaScript is not Java.
Why would anyone need 64-bit precision in JavaScript?
Longs sometimes hold the ID of something in a DB, so it's important not to lose any of the lower bits... but floating-point numbers are most of the time used for calculations. Using floats to hold monetary or similarly exacting values is plain wrong. If you truly need 64-bit precision, do the maths on the server, where it's faster, and so on.
A response on SO got me thinking, does JavaScript guarantee a certain endian encoding across OSs and browsers?
Or, put another way, are bitwise shifts on integers "safe" in JavaScript?
Shifting is safe, but your question is flawed because endianness doesn't affect bit-shift operations anyway. Shifting left is the same on big-endian and little-endian systems in all languages. (Shifting right can differ, but only due to interpretation of the sign bit, not the relative positions of any bits.)
Endianness only comes into play when you have the option of interpreting some block of memory as bytes or as larger integer values. In general, Javascript doesn't give you that option since you don't get access to arbitrary blocks of memory, especially not the blocks of memory occupied by variables. Typed arrays offer views of data in an endian-sensitive way, but the ordering depends on the host system; it's not necessarily the same for all possible Javascript host environments.
Endianness describes physical storage order, not logical storage order. Logically, the rightmost bit is always the least significant bit. Whether that bit's byte is the one that resides at the lowest memory address is a completely separate issue, and it only matters when your language exposes such a concept as "lowest memory address," which Javascript does not. Typed arrays do, but then only within the context of typed arrays; they still don't offer access to the storage of arbitrary data.
Some of these answers are dated, because endianness can be relevant when using typed arrays! Consider:
var arr32 = new Uint32Array(1);
var arr8 = new Uint8Array(arr32.buffer);
arr32[0] = 255;
console.log(arr8[0], arr8[1], arr8[2], arr8[3]);
When I run this in Chrome's console, it yields 255 0 0 0, indicating that my machine is little-endian. However, typed arrays use the system endianness by default, so you might see 0 0 0 255 instead if your machine is big-endian.
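If you need a specific byte order regardless of the host machine, DataView lets you choose it explicitly:
var buf = new ArrayBuffer(4);
var view = new DataView(buf);
view.setUint32(0, 255, true);     // third argument true = little-endian
console.log(new Uint8Array(buf)); // 255, 0, 0, 0 on any platform
view.setUint32(0, 255, false);    // false = big-endian
console.log(new Uint8Array(buf)); // 0, 0, 0, 255 on any platform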
Yes, they are safe, although you won't get the speed benefits you might hope for, since JS bit operations are "a hack".
ECMAScript does actually have a concept of an integer type, but it is implicitly coerced to or from a double-precision floating-point value as necessary (if the number represented is too large or if it has a fractional component).
Many mainstream JavaScript interpreters (SpiderMonkey is an example) take a shortcut in implementation and interpret all numeric values as doubles to avoid checking the actual native type of the value for each instruction. As a result of that implementation hack, bit operations are implemented as a cast to an integral type followed by a cast back to a double representation. It is therefore not a good idea to use bit-level operations in JavaScript, and you won't get a performance boost anyway.
are bitwise shifts on integers "safe" in JavaScript?
Only for integers that fit within 32 bits (31+sign). Unlike, say, Python, you can't get 1<<40.
This is how the bitwise operators are defined to work by ECMA-262, even though JavaScript Numbers are actually floats. (Technically, double-precision floats, giving you 52 bits of mantissa, easily enough to cover the range of a 32-bit int.)
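For example:
console.log(1 << 40); // 256 -- the shift count is taken modulo 32, so this is effectively 1 << 8
console.log(1 << 31); // -2147483648 -- the result is interpreted as a signed 32-bit integer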
There is no issue of 'endianness' involved in bitwise arithmetic, and no byte-storage format where endianness could be involved is built into JavaScript.
JavaScript doesn't have an integer type, only a floating point type. You can never get close enough to the implementation details to worry about this.