Bitwise AND in JavaScript with a 64-bit integer
I am looking for a way of performing a bitwise AND on a 64 bit integer in JavaScript.
JavaScript casts all of its double values to signed 32-bit integers before performing bitwise operations.
JavaScript represents all numbers as 64-bit double-precision IEEE 754 floating-point numbers (see the ECMAScript spec, section 8.5). All positive integers up to 2^53 can be encoded precisely; larger integers get their least significant bits clipped. This leaves the question of how you can even represent a 64-bit integer in JavaScript at all -- the native number data type clearly can't precisely represent a 64-bit int.
The following illustrates this. Although JavaScript appears to be able to parse hexadecimal literals representing 64-bit numbers, the underlying numeric representation does not hold 64 bits. Try the following in your browser:
<html>
  <head>
    <script type="text/javascript">
      function showPrecisionLimits() {
        document.getElementById("r50").innerHTML = 0x0004000000000001 - 0x0004000000000000;
        document.getElementById("r51").innerHTML = 0x0008000000000001 - 0x0008000000000000;
        document.getElementById("r52").innerHTML = 0x0010000000000001 - 0x0010000000000000;
        document.getElementById("r53").innerHTML = 0x0020000000000001 - 0x0020000000000000;
        document.getElementById("r54").innerHTML = 0x0040000000000001 - 0x0040000000000000;
      }
    </script>
  </head>
  <body onload="showPrecisionLimits()">
    <p>(2^50+1) - (2^50) = <span id="r50"></span></p>
    <p>(2^51+1) - (2^51) = <span id="r51"></span></p>
    <p>(2^52+1) - (2^52) = <span id="r52"></span></p>
    <p>(2^53+1) - (2^53) = <span id="r53"></span></p>
    <p>(2^54+1) - (2^54) = <span id="r54"></span></p>
  </body>
</html>
In Firefox, Chrome and IE I get the following. If numbers were stored in their full 64-bit glory, the result would have been 1 for all the subtractions. Instead, you can see how the difference between 2^53+1 and 2^53 is lost.
(2^50+1) - (2^50) = 1
(2^51+1) - (2^51) = 1
(2^52+1) - (2^52) = 1
(2^53+1) - (2^53) = 0
(2^54+1) - (2^54) = 0
So what can you do?
If you choose to represent a 64-bit integer as two 32-bit numbers, then applying a bitwise AND is as simple as applying two bitwise ANDs, one to the low and one to the high 32-bit 'word'.
For example:
var a = [ 0x0000ffff, 0xffff0000 ];
var b = [ 0x00ffff00, 0x00ffff00 ];
var c = [ a[0] & b[0], a[1] & b[1] ];
document.body.innerHTML = c[0].toString(16) + ":" + c[1].toString(16);
gets you:
ff00:ff0000
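If your 64-bit values arrive as hex strings, a small helper can split them into the two 32-bit words first. This is a hedged sketch of my own (the splitHex64 name and the [low, high] ordering are not from the answer above); it assumes a 16-hex-digit input:
function splitHex64(hex16) {                      // hex16: exactly 16 hex digits, e.g. '0000ffffffff0000'
    return [ parseInt(hex16.slice(8), 16),        // low 32 bits
             parseInt(hex16.slice(0, 8), 16) ];   // high 32 bits
}
var a = splitHex64('0000ffffffff0000');
var b = splitHex64('00ffff0000ffff00');
var c = [ a[0] & b[0], a[1] & b[1] ];
console.log(c.map(function (w) { return (w >>> 0).toString(16); })); // ["ff0000", "ff00"]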
Here is code for ANDing int64 numbers; you can replace the AND with another bitwise operation. Note that it assumes non-negative values within the 53-bit precision limit discussed above:
function and(v1, v2) {
    var hi = 0x80000000;        // 2^31: split point between the two halves
    var low = 0x7fffffff;       // mask for the low 31 bits
    var hi1 = ~~(v1 / hi);      // upper bits of v1 (truncating integer division)
    var hi2 = ~~(v2 / hi);      // upper bits of v2
    var low1 = v1 & low;        // low 31 bits of v1
    var low2 = v2 & low;        // low 31 bits of v2
    var h = hi1 & hi2;          // AND the upper halves
    var l = low1 & low2;        // AND the lower halves
    return h * hi + l;          // recombine
}
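A quick usage check (the values here are my own, chosen for illustration):
var x = Math.pow(2, 40) + 255;   // 1099511627 + high bit at position 40
var y = Math.pow(2, 40) + 15;
console.log(and(x, y));          // 1099511627791, i.e. 2^40 + 15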
This can now be done with the new BigInt built-in numeric type. BigInt is currently (July 2019) only available in certain browsers, see the following link for details:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
I have tested bitwise operations using BigInts in Chrome 67 and can confirm that they work as expected with up to 64 bit values.
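As a rough sketch (assuming a BigInt-capable engine; the values are my own), the n suffix creates BigInt literals and & then operates on the full 64-bit values:
const a = 0xffff0000ffff0000n;
const b = 0x00ffff0000ffff00n;
console.log((a & b).toString(16));                       // "ff000000ff0000"
// If you need wrap-around 64-bit semantics after a shift, clamp explicitly:
console.log(BigInt.asUintN(64, a << 8n).toString(16));   // "ff0000ffff000000"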
JavaScript doesn't support 64-bit integers out of the box. This is what I ended up doing:
Found long.js, a self-contained Long implementation, on github.
Convert the string value representing the 64-bit number to a Long.
Extract the high and low 32-bit values.
Do a 32-bit bitwise AND on the high words and on the low words, separately.
Initialise a new 64-bit Long from the resulting low and high bits.
If the number is > 0 then there is correlation between the two numbers.
Note: for the code example below to work you need to load
long.js.
// Handy to output leading zeros to make it easier to compare the bits when outputting to the console
function zeroPad(num, places) {
    var zero = places - num.length + 1;
    return Array(+(zero > 0 && zero)).join('0') + num;
}

// 2^3 = 8
var val1 = Long.fromString('8', 10);
var val1High = val1.getHighBitsUnsigned();
var val1Low = val1.getLowBitsUnsigned();

// 2^61 + 2^3 = 2305843009213693960
var val2 = Long.fromString('2305843009213693960', 10);
var val2High = val2.getHighBitsUnsigned();
var val2Low = val2.getLowBitsUnsigned();

console.log('2^3 & (2^3 + 2^61)');
console.log(zeroPad(val1.toString(2), 64));
console.log(zeroPad(val2.toString(2), 64));

var bitwiseAndResult = Long.fromBits(val1Low & val2Low, val1High & val2High, true);
console.log(bitwiseAndResult);
console.log(zeroPad(bitwiseAndResult.toString(2), 64));

console.log('Correlation between val1 and val2 ?');
console.log(bitwiseAndResult > 0);
Console output:
2^3
0000000000000000000000000000000000000000000000000000000000001000
2^3 + 2^61
0010000000000000000000000000000000000000000000000000000000001000
2^3 & (2^3 + 2^61)
0000000000000000000000000000000000000000000000000000000000001000
Correlation between val1 and val2?
true
The Closure library has goog.math.Long with a bitwise and() method.
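A hedged sketch of that approach (assuming the Closure library is loaded; fromString, and and toString are documented goog.math.Long methods, but the values here are mine):
var a = goog.math.Long.fromString('123456789abcdef0', 16);
var b = goog.math.Long.fromString('0000ffffffff0000', 16);
console.log(a.and(b).toString(16)); // "56789abc0000"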
Unfortunately, the accepted answer (and others) appears not to have been adequately tested. Confronted by this problem recently, I initially tried to split my 64-bit numbers into two 32-bit numbers as suggested, but there's another little wrinkle.
Open your JavaScript console and enter:
0x80000001
When you press Enter, you'll obtain 2147483649, the decimal equivalent. Next try:
0x80000001 & 0x80000003
This gives you -2147483647, not quite what you expected. It's clear that in performing the bitwise AND, the numbers are treated as signed 32-bit integers. And the result is wrong. Even if you negate it.
My solution was to apply ~~ to the 32-bit numbers after they were split off, check for a negative sign, and then deal with this appropriately.
This is clumsy. There may be a more elegant 'fix', but I can't see it on quick examination. There's a certain irony that something that can be accomplished by a couple of lines of assembly should require so much more labour in JavaScript.
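For what it's worth, a common alternative to the sign handling described above (not the answer's own code) is to coerce the 32-bit AND result back to an unsigned value with >>> 0 before recombining the halves:
var andResult = (0x80000001 & 0x80000003) >>> 0;
console.log(andResult);              // 2147483649
console.log(andResult.toString(16)); // "80000001"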
Related
How do bitwise AND, OR and XOR work on negative signed integers?
I was just solving random problems on bitwise operators and trying various other combinations for making personal notes, and somehow I just cannot figure out the solution. Say I wanted to check the bitwise AND between two integers, or between a ~number and a negative number (~num1 & -num2), and various other combos. I can see the answer, but I haven't been able to establish how it happened.

Console:

console.log(25 & 3);    // outputs 1 (I can solve this easily)
console.log(-25 & -3);  // outputs -27
console.log(~25 & ~3);  // outputs -28
console.log(25 & ~3);   // outputs -24
console.log(~25 & 3);   // outputs -2
console.log(~25 & -3);  // outputs -28
console.log(-25 & ~3);  // outputs -28

I know the logic behind console.log(25 & -3):

25 is 11001
-3 is 11101 (3 = 00011; the minus sign is like 2's complement + 1)
AND = 11001 = 25

But I cannot make it work the same way when both numbers are negative, or with the other cases mentioned above. I have tried various combinations of numbers too, not just these two, but I cannot solve the problem. Can somebody explain the binary logic used in the problems I cannot solve? (I've spent about 2 hrs here on SO and another 1 hr+ on Google, but I still haven't found the answer.) Thanks and regards.
JavaScript specifies that bitwise operations on integers are performed as though they were stored in two's-complement notation. Fortunately, most computer hardware nowadays uses this notation natively anyway. For brevity's sake I'm going to show the following numbers as 8-bit binary. They're actually 32-bit in JavaScript, but for the numbers in the original question this doesn't change the outcome. It does, however, let us drop a whole lot of leading bits.

console.log(-25 & -3); // outputs -27. How?

If we write the integers in binary, we get 11100111 & 11111101 respectively. AND those together and you get 11100101, which is -27.

In your later examples, you seem to be using the NOT operator (~) and negation (-) interchangeably. You can't do that in two's complement: ~ and - are not the same thing. ~25 is 11100110, which is -26, not -25. Similarly, ~3 is 11111100, which is -4, not -3. But when we put these together, we can work out the examples you gave.

console.log(~25 & ~3); // 11100110 & 11111100 = 11100100, which is -28 (not 28, as you wrote)
console.log(25 & ~3);  // 00011001 & 11111100 = 00011000, which is 24
console.log(~25 & 3);  // 11100110 & 00000011 = 00000010, which is 2
console.log(~25 & -3); // 11100110 & 11111101 = 11100100, which is -28
console.log(-25 & ~3); // 11100111 & 11111100 = 11100100, which is -28

The real key to understanding this is that you don't really use bitwise operations on integers. You use them on bags of bits of a certain size, and these bags of bits happen to be conveniently representable as integers. This is key to understanding what's going on here, because you've stumbled across a case where the difference matters.

There are specific circumstances in computer science where you can manipulate bags of bits in ways that, by coincidence, give the same results as if you'd done particular mathematical operations on numbers. But this only works in specific circumstances, and it requires you to assume certain things about the numbers you're working on; if your numbers don't fit those assumptions, things break down. This is one of the reasons Donald Knuth said "premature optimization is the root of all evil". If you want to use bitwise operations in place of actual integer math, you have to be absolutely certain that your inputs will actually follow the assumptions required for that trick to work. Otherwise, the results will start looking strange when you start using inputs outside of those assumptions.
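As a quick sanity check (the bits8 helper is my own, not part of the answer), you can print the 8-bit two's-complement strings used above by masking with 0xFF:
var bits8 = function (n) { return ((n & 0xFF) >>> 0).toString(2).padStart(8, '0'); };
console.log(bits8(-25));      // "11100111"
console.log(bits8(-3));       // "11111101"
console.log(bits8(-25 & -3)); // "11100101"  (i.e. -27)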
25 = 16+8+1 = 0b011001 (I've added another 0 digit as the sign digit). Practically you'll have at least 8 binary digits, but the two's complement math is the same.

To get -25 in 6-bit two's complement, you'd do -25 = ~25 + 1 = 0b100111. Similarly, 3 = 2+1 = 0b000011, so -3 = ~3 + 1 = 0b111101.

When you & the two, you get:

 -25 = ~25 + 1 = 0b100111
& -3 = ~3 + 1  = 0b111101
______________________
                 0b100101

The leftmost bit (sign bit) is set, so it's a negative number. To find what it's a negative of, you reverse the process: first subtract 1 and then do ~. ~(0b100101 - 1) = 0b011011, that's 1+2+0*4+8+16 = 27, so -25 & -3 = -27.

For 25 & ~3, it's:

  25 = 16+8+1 = 0b011001
& ~3 =          0b111100
______________________
                0b011000 = 24

For ~25 & 3, it's:

 ~25 = 0b100110
&  3 = 0b000011
______________________
       0b000010 = 2

For ~25 & -3, it's:

 ~25 =          0b100110
& -3 = ~3 + 1 = 0b111101
______________________
                0b100100   # negative
# find what it's a negative of:
~(0b100100 - 1) = ~0b100011 = 0b011100 = 4+8+16 = 28
0b100100 = -28
-27 has 6 binary digits in it, so you should be using numbers with at least that many digits. With 8-bit numbers we have:

00011001 = 25
00000011 = 3
00011011 = 27

and:

11100111 = -25
11111101 = -3
11100101 = -27

Now -25 & -3 = -27 because 11100111 & 11111101 = 11100101.
The binary string representation of a 32-bit integer can be found with:

(i >>> 0).toString(2).padStart(32, '0')

The bitwise ANDing of two binary strings is straightforward. The integer value of a signed, 32-bit binary string is either parseInt(bitwiseAndString, 2) if the string starts with a '0', or -~parseInt(bitwiseAndString, 2) - 1 if it starts with a '1'.

Putting all that together:

const tests = [
    ['-25', '-3'],
    ['~25', '-3'],
    ['25', '~3'],
    ['~25', '3'],
    ['~25', '~3'],
    ['-25', '~3']
];

const output = (s, t) => {
    console.log(`${`${s}:`.padEnd(20, ' ')}${t}`);
};

const bitwiseAnd = (i, j) => {
    console.log(`Calculating ${i} & ${j}`);
    const bitStringI = (eval(i) >>> 0).toString(2).padStart(32, '0');
    const bitStringJ = (eval(j) >>> 0).toString(2).padStart(32, '0');
    output(`bit string for ${i}`, bitStringI);
    output(`bit string for ${j}`, bitStringJ);
    const bitArrayI = bitStringI.split('');
    const bitArrayJ = bitStringJ.split('');
    const bitwiseAndString = bitArrayI.map((s, idx) => s === '1' && bitArrayJ[idx] === '1' ? '1' : '0').join('');
    output('bitwise and string', bitwiseAndString);
    const intValue = bitwiseAndString[0] === '1' ? -~parseInt(bitwiseAndString, 2) - 1 : parseInt(bitwiseAndString, 2);
    if (intValue === (eval(i) & eval(j))) {
        console.log(`integer value: ${intValue} ✓`);
    } else {
        console.error(`calculation failed: ${intValue} !== ${i & j}`);
    }
};

tests.forEach(([i, j]) => {
    bitwiseAnd(i, j);
});
Is there any way to see a number in its 64-bit float IEEE 754 representation?
JavaScript stores all numbers as double-precision 64-bit format IEEE 754 values, according to the spec:

The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic

Is there any way to see the number in this form in JavaScript?
You can use typed arrays to examine the raw bytes of a number. Create a Float64Array with one element, and then create a Uint8Array with the same buffer. You can then set the first element of the float array to your number, and examine the bytes via the Uint8Array. You'll have to do some shifting and combining for the various pieces of the number of course, but it's not hard. There are no built-in facilities to do things like extract the exponent.
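A rough sketch of that idea (my own code, not from the answer; it assumes a little-endian host, which is what current browsers use):
const f = new Float64Array(1);
const bytes = new Uint8Array(f.buffer);
f[0] = -12.5;
// On a little-endian machine, bytes[7] holds the sign bit and the top of the exponent.
const sign = bytes[7] >> 7;                                   // 1
const exponent = ((bytes[7] & 0x7f) << 4) | (bytes[6] >> 4);  // 1026 (biased)
console.log(sign, exponent, exponent - 1023);                 // 1 1026 3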
Based on @Pointy's suggestion I've implemented the following function to get a number in its 64-bit float IEEE 754 representation:

function to64bitFloat(number) {
    var f = new Float64Array(1);
    f[0] = number;
    var view = new Uint8Array(f.buffer);
    var i, result = "";
    for (i = view.length - 1; i >= 0; i--) {
        var bits = view[i].toString(2);
        if (bits.length < 8) {
            bits = new Array(8 - bits.length).fill('0').join("") + bits;
        }
        result += bits;
    }
    return result;
}

console.log(to64bitFloat(12));  // 0100000000101000000000000000000000000000000000000000000000000000
console.log(to64bitFloat(-12)); // 1100000000101000000000000000000000000000000000000000000000000000
You can use Basenumber.js to transform a number into its IEEE 754 representation:

let x = Base(326.9);
let y = Base(-326.9);

// specify double precision (64)
let a = x.toIEEE754(64);
let b = y.toIEEE754(64);

console.log(a);
console.log(b);

// You can join them in a single string this way
console.log(Object.values(a).join(""));
console.log(Object.values(b).join(""));

<script src='https://cdn.jsdelivr.net/gh/AlexSp3/Basenumber.js#main/BaseNumber.min.js'></script>
JavaScript: calculate the bitwise AND between two 64-bit numbers in a 32-bit browser?
I am running a SharePoint 2007 farm and am trying to calculate user permissions. I have read this post about the particular metadata field where I am getting my mask from, and I have been looking at the following guide to see what masks I need to compare to.

Here is my dilemma: whenever I run the following code in an IE JavaScript console I get back 0:

((0x4000000000000000).toString(16) & (0x400001F07FFF1BFF).toString(16))

Now I know this is incorrect because the respective binary values are:

100000000000000000000000000000000000000000000000000000000000000
100000000000000000000011111000001111111111111110001101111111111

which should AND to:

100000000000000000000000000000000000000000000000000000000000000

I have also put this into my Windows calculator just to make sure I wasn't crazy (and to get those super long binary numbers).

NOTE: As I got to this line I realized that my browser is 32-bit (which is a requirement for the site I am using this on) and this is a 64-bit number! How can I (preferably in one line) calculate the bitwise AND of two 64-bit numbers using a 32-bit browser? I do know that I could convert the number into a binary string and use a loop to check each bit, but is there a simpler method?

EDIT - Solution

Using the information from this post and the answer below, I came up with the following solution:

var canEdit = false;
var canEditMask = [0x00000000, 0x00000004];
var canApprove = false;
var canApproveMask = [0x00000000, 0x00000010];
var canRead = false;
var canReadMask = [0x00000000, 0x00000001];
var canDesign = false;
var canDesignMask = [0x00000000, 0x00000800];

var mask = [originalmask.substring(0, 10).toString(16), ("0x" + itemperms.substring(9)).toString(16)];

canEdit = (mask[0] & canEditMask[0]) > 0 || (mask[1] & canEditMask[1]) > 0;
canRead = (mask[0] & canReadMask[0]) > 0 || (mask[1] & canReadMask[1]) > 0;
canDesign = (mask[0] & canDesignMask[0]) > 0 || (mask[1] & canDesignMask[1]) > 0;
canApprove = (mask[0] & canApproveMask[0]) > 0 || (mask[1] & canApproveMask[1]) > 0;
I hit the same problem this evening, so I wrote the following to allow bitwise AND and OR of values above 2^32:

function bitand(val, bit) {
    if (bit > 0xFFFFFFF || val > 0xFFFFFFF) {
        var low = val & 0xFFFFFFF;
        var lowbit = bit & 0xFFFFFFF;
        val /= 0x10000000;
        bit /= 0x10000000;
        return (val & bit) * 0x10000000 + (low & lowbit);
    }
    return (val & bit);
}

function bitor(val, bit) {
    if (bit > 0xFFFFFFF || val > 0xFFFFFFF) {
        var low = val & 0xFFFFFFF;
        var lowbit = bit & 0xFFFFFFF;
        val /= 0x10000000;
        bit /= 0x10000000;
        return (val | bit) * 0x10000000 + (low | lowbit);
    }
    return (val | bit);
}

Cheers, Jason
It's not really your browser that's the issue but the language specification. In short, the bitwise operators &, |, etc. all treat their operands as 32-bit integers: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators

In order to do a 64-bit-wide operation you're going to have to rely on a library or write your own function.
Bitwise XOR in Javascript compared to C++
I am porting a simple C++ function to JavaScript, but it seems I'm running into problems with the way JavaScript handles bitwise operators.

In C++:

AnsiString MyClass::Obfuscate(AnsiString source)
{
    int sourcelength = source.Length();
    for (int i = 1; i <= sourcelength; i++) {
        source[i] = source[i] ^ 0xFFF;
    }
    return source;
}

Obfuscate("test") yields temporary int values -117, -102, -116, -117.
Obfuscate("test") yields string value ‹šŒ‹.

In JavaScript:

function obfuscate(str) {
    var obfuscated = "";
    for (i = 0; i < str.length; i++) {
        var a = str.charCodeAt(i);
        var b = a ^ 0xFFF;
        obfuscated = obfuscated + String.fromCharCode(b);
    }
    return obfuscated;
}

obfuscate("test") yields temporary int values 3979, 3994, 3980, 3979.
obfuscate("test") yields string value ྋྚྌྋ.

Now, I realize that there are a ton of threads where they point out that JavaScript treats all numbers as floats, and bitwise operations involve a temporary cast to a 32-bit int. It really wouldn't be a problem except that I'm obfuscating in JavaScript and reversing in C++, and the different results don't really match. How do I transform the JavaScript result into the C++ result? Is there some simple shift available?
Working demo

Judging from the result that XORing 116 with 0xFFF gives -117, we have to emulate 2's complement 8-bit integers in JavaScript:

function obfuscate(str) {
    var bytes = [];
    for (var i = 0; i < str.length; i++) {
        bytes.push(((( str.charCodeAt(i) ^ 0xFFF ) & 0xFF ) ^ 0x80) - 0x80);
    }
    return bytes;
}

OK, these bytes are interpreted in Windows cp1252 and, if they are negative, probably just subtracted from 256.

var ascii = [
     0x0000,0x0001,0x0002,0x0003,0x0004,0x0005,0x0006,0x0007,0x0008,0x0009,0x000A,0x000B,0x000C,0x000D,0x000E,0x000F
    ,0x0010,0x0011,0x0012,0x0013,0x0014,0x0015,0x0016,0x0017,0x0018,0x0019,0x001A,0x001B,0x001C,0x001D,0x001E,0x001F
    ,0x0020,0x0021,0x0022,0x0023,0x0024,0x0025,0x0026,0x0027,0x0028,0x0029,0x002A,0x002B,0x002C,0x002D,0x002E,0x002F
    ,0x0030,0x0031,0x0032,0x0033,0x0034,0x0035,0x0036,0x0037,0x0038,0x0039,0x003A,0x003B,0x003C,0x003D,0x003E,0x003F
    ,0x0040,0x0041,0x0042,0x0043,0x0044,0x0045,0x0046,0x0047,0x0048,0x0049,0x004A,0x004B,0x004C,0x004D,0x004E,0x004F
    ,0x0050,0x0051,0x0052,0x0053,0x0054,0x0055,0x0056,0x0057,0x0058,0x0059,0x005A,0x005B,0x005C,0x005D,0x005E,0x005F
    ,0x0060,0x0061,0x0062,0x0063,0x0064,0x0065,0x0066,0x0067,0x0068,0x0069,0x006A,0x006B,0x006C,0x006D,0x006E,0x006F
    ,0x0070,0x0071,0x0072,0x0073,0x0074,0x0075,0x0076,0x0077,0x0078,0x0079,0x007A,0x007B,0x007C,0x007D,0x007E,0x007F
];

var cp1252 = ascii.concat([
     0x20AC,0xFFFD,0x201A,0x0192,0x201E,0x2026,0x2020,0x2021,0x02C6,0x2030,0x0160,0x2039,0x0152,0xFFFD,0x017D,0xFFFD
    ,0xFFFD,0x2018,0x2019,0x201C,0x201D,0x2022,0x2013,0x2014,0x02DC,0x2122,0x0161,0x203A,0x0153,0xFFFD,0x017E,0x0178
    ,0x00A0,0x00A1,0x00A2,0x00A3,0x00A4,0x00A5,0x00A6,0x00A7,0x00A8,0x00A9,0x00AA,0x00AB,0x00AC,0x00AD,0x00AE,0x00AF
    ,0x00B0,0x00B1,0x00B2,0x00B3,0x00B4,0x00B5,0x00B6,0x00B7,0x00B8,0x00B9,0x00BA,0x00BB,0x00BC,0x00BD,0x00BE,0x00BF
    ,0x00C0,0x00C1,0x00C2,0x00C3,0x00C4,0x00C5,0x00C6,0x00C7,0x00C8,0x00C9,0x00CA,0x00CB,0x00CC,0x00CD,0x00CE,0x00CF
    ,0x00D0,0x00D1,0x00D2,0x00D3,0x00D4,0x00D5,0x00D6,0x00D7,0x00D8,0x00D9,0x00DA,0x00DB,0x00DC,0x00DD,0x00DE,0x00DF
    ,0x00E0,0x00E1,0x00E2,0x00E3,0x00E4,0x00E5,0x00E6,0x00E7,0x00E8,0x00E9,0x00EA,0x00EB,0x00EC,0x00ED,0x00EE,0x00EF
    ,0x00F0,0x00F1,0x00F2,0x00F3,0x00F4,0x00F5,0x00F6,0x00F7,0x00F8,0x00F9,0x00FA,0x00FB,0x00FC,0x00FD,0x00FE,0x00FF
]);

function toStringCp1252(bytes) {
    var byte, codePoint, codePoints = [];
    for (var i = 0; i < bytes.length; ++i) {
        byte = bytes[i];
        if (byte < 0) {
            byte = 256 + byte;
        }
        codePoint = cp1252[byte];
        codePoints.push(codePoint);
    }
    return String.fromCharCode.apply(String, codePoints);
}

Result:

toStringCp1252(obfuscate("test")) // "‹šŒ‹"
I'm guessing that AnsiString contains 8-bit characters (since the ANSI character set is 8 bits). When you assign the result of the XOR back to the string, it is truncated to 8 bits, and so the resulting value is in the range [-128...127]. (On some platforms it could be [0..255], and on others the range could be wider, since it's not specified whether char is signed or unsigned, or whether it's 8 bits or larger.)

JavaScript strings contain Unicode characters, which can hold a much wider range of values, so the result is not truncated to 8 bits. The result of the XOR will have a range of at least 12 bits, [0...4095], hence the large numbers you see there.

Assuming the original string contains only 8-bit characters, changing the operation to a ^ 0xff should give the same results in both languages.
I assume that AnsiString is, in some form, an array of chars, and this is the problem. In C, a char can typically only hold 8 bits, so when you XOR with 0xFFF and store the result in a char, it is the same as XORing with 0xFF. This is not the case with JavaScript: JavaScript uses Unicode. This is demonstrated by looking at the integer values: -117 == 0x8b and 3979 == 0xf8b. I would recommend XORing with 0xFF, as this will work in both languages. Or you can switch your C++ code to use Unicode.
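A minimal sketch of that suggestion (the obfuscate8 name is my own, and it assumes the input is plain 8-bit/ASCII text):
function obfuscate8(str) {
    var out = "";
    for (var i = 0; i < str.length; i++) {
        out += String.fromCharCode(str.charCodeAt(i) ^ 0xFF);
    }
    return out;
}
// 't' (116) ^ 0xFF = 139, which matches the C++ byte -117 interpreted as unsigned
console.log(obfuscate8("test").split('').map(function (c) { return c.charCodeAt(0); })); // [139, 154, 140, 139]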
First, convert your AnsiString to wchar_t*. Only then obfuscate its individual characters:

AnsiString MyClass::Obfuscate(AnsiString source)
{
    /// allocate string
    int num_wchars = source.WideCharBufSize();
    wchar_t* UnicodeString = new wchar_t[num_wchars];
    source.WideChar(UnicodeString, source.WideCharBufSize());

    /// obfuscate individual characters
    int sourcelength = source.Length();
    for (int i = 0; i < num_wchars; i++) {
        UnicodeString[i] = UnicodeString[i] ^ 0xFFF;
    }

    /// create obfuscated AnsiString
    AnsiString result = AnsiString(UnicodeString);

    /// delete tmp string
    delete [] UnicodeString;

    return result;
}

Sorry, I'm not an expert on C++ Builder, but my point is simple: in JavaScript you have UCS-2 (or UTF-16) symbols, so you have to convert AnsiString to wide chars first. Try using WideString instead of AnsiString.
I don't know AnsiString at all, but my guess is this relates to the width of its characters. Specifically, I suspect they're less than 32 bits wide, and of course in bitwise operations the width of what you're operating on matters, particularly when dealing with 2's complement numbers.

In JavaScript, your "t" in "test" is character code 116, which is b00000000000000000000000001110100. 0xFFF (4095) is b00000000000000000000111111111111, and the result you're getting (3979) is b00000000000000000000111110001011. We can readily see that you're getting the right result for the XOR:

 116 = 00000000000000000000000001110100
4095 = 00000000000000000000111111111111
3979 = 00000000000000000000111110001011

So I'm thinking you're getting some truncation or similar in your C++ code, not least because -117 is b10001011 in eight-bit 2's complement... which is exactly what we see as the last eight bits of 3979 above.
Opposite of Number.toExponential in JS
I need to get the value of an extremely large number in JavaScript in non-exponential form. Number.toFixed simply returns it in exponential form as a string, which is worse than what I had.

This is what Number.toFixed returns:

>>> x = 1e+31
1e+31
>>> x.toFixed()
"1e+31"

Number.toPrecision also does not work:

>>> x = 1e+31
1e+31
>>> x.toPrecision( 21 )
"9.99999999999999963590e+30"

What I would like is:

>>> x = 1e+31
1e+31
>>> x.toNotExponential()
"10000000000000000000000000000000"

I could write my own parser but I would rather use a native JS method if one exists.
You can use toPrecision with a parameter specifying how many digits you want to display:

x.toPrecision(31)

However, among the browsers I tested, the above code only works on Firefox. According to the ECMAScript specification, the valid range for toPrecision is 1 to 21, and both IE and Chrome throw a RangeError accordingly. This is due to the fact that the floating-point representation used in JavaScript is incapable of actually representing numbers to 31 digits of precision.
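To illustrate the limit described above (a small sketch of my own; the exact digits printed can vary by engine):
var x = 1e+31;
console.log(x.toPrecision(21)); // "9.99999999999999963590e+30"
try {
    console.log(x.toPrecision(31));
} catch (e) {
    console.log(e.name);        // "RangeError" in engines that enforce the 1-21 range
}
Note that more recent editions of the spec allow up to 100 digits here, so a current browser may print a long string instead of throwing.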
Use Number(string).

Example:

var a = Number("1.1e+2");

Return: a = 110
The answer is there's no such built-in function. I've searched high and low. Here's the RegExp I use to split the number into sign, coefficient (digits before the decimal point), fractional part (digits after the decimal point) and exponent:

/^([+-])?(\d+)\.?(\d*)[eE]([+-]?\d+)$/

"Roll your own" is the answer, which you already did.
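For instance (a quick illustration of my own; the input value is arbitrary):
var parts = "1.25e+31".match(/^([+-])?(\d+)\.?(\d*)[eE]([+-]?\d+)$/);
console.log(parts.slice(1)); // [undefined, "1", "25", "+31"] -> sign, integer digits, fraction digits, exponent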
It's possible to expand JavaScript's exponential output using string functions. Admittedly, what I came up with is somewhat cryptic, but it works if the exponent after the e is positive:

var originalNumber = 1e+31;
var splitNumber = originalNumber.toString().split('e');
var result;
if (splitNumber[1]) {
    var regexMatch = splitNumber[0].match(/^([^.]+)\.?(.*)$/);
    result = /* integer part */ regexMatch[1]
        + /* fractional part */ regexMatch[2]
        + /* trailing zeros */ Array(splitNumber[1] - regexMatch[2].length + 1).join('0');
} else {
    result = splitNumber[0];
}
"10000000000000000000000000000000"? Hard to believe that anybody would rather look at that than 1.0e+31, or in html: 1031. But here's one way, much of it is for negative exponents(fractions): function longnumberstring(n){ var str, str2= '', data= n.toExponential().replace('.','').split(/e/i); str= data[0], mag= Number(data[1]); if(mag>=0 && str.length> mag){ mag+=1; return str.substring(0, mag)+'.'+str.substring(mag); } if(mag<0){ while(++mag) str2+= '0'; return '0.'+str2+str; } mag= (mag-str.length)+1; while(mag> str2.length){ str2+= '0'; } return str+str2; } input: 1e+30 longnumberstring: 1000000000000000000000000000000 to Number: 1e+30 input: 1.456789123456e-30 longnumberstring: 0.000000000000000000000000000001456789123456 to Number: 1.456789123456e-30 input: 1.456789123456e+30 longnumberstring: 1456789123456000000000000000000 to Number: 1.456789123456e+30 input: 1e+80 longnumberstring: 100000000000000000000000000000000000000000000000000000000000000000000000000000000 to Number: 1e+80