I've been trying to implement part of the Rijndael (AES) algorithm in JS. One particular step (mixColumns) requires messing around with binary numbers. I decided to follow a guide, and at a particular point it does this operation:
It has to multiply d4 (hexadecimal) by 2. Now d4 in binary is 1101 0100, and it's turned into 1010 1000. I guess this is because the author left-shifted the number, so the leftmost bit goes away and a 0 is pushed in on the right. After that it does 1010 1000 XOR 0001 1011, and the result is 1011 0011.
I don't understand where that XOR and the 0001 1011 come from.
I tried to do 10101000 ^ 00011011 myself, yet I didn't manage to get the same result.
Could anyone explain why that is? Please try to be as clear as you can, since I'm new to binary and these operations.
(Also, sorry for my English and any imperfections; it's my first question here.)
By the way, I'm also trying to understand this one:
{03} . {bf} = {10 XOR 01} . {1011 1111}
= {1011 1111 . 10} XOR {1011 1111 . 01}
= {1011 1111 . 10} XOR {1011 1111}
= 0111 1110 XOR 0001 1011 XOR 1011 1111
= 1101 1010 (ans)
It does more or less the same thing: it converts bf and the 3 to binary, then the 3 becomes 10 XOR 01... and after that I'm lost.
I'm still far from the solution.
I also wonder if there is a simpler way to do what it does.
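A minimal sketch of the two steps described above (left shift, then XOR with 0001 1011 when the high bit was set), assuming the guide is doing the standard AES GF(2^8) doubling with the reduction constant 0x1b:

```javascript
// Sketch of GF(2^8) doubling ("xtime") as used in AES mixColumns,
// assuming the standard AES reduction constant 0x1b.
function xtime(b) {
    var shifted = (b << 1) & 0xff;                  // left shift, drop the bit that falls off
    return (b & 0x80) ? (shifted ^ 0x1b) : shifted; // XOR 0x1b only if the high bit was 1
}

console.log(xtime(0xd4).toString(2));           // "10110011" (the guide's result)
// {03} . {bf} = ({02} . {bf}) XOR {bf}
console.log((xtime(0xbf) ^ 0xbf).toString(16)); // "da"
```

Multiplication by 3 then falls out of the shown decomposition {03} = {10} XOR {01}: double, then XOR in the original byte.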
Thanks
I have the following code :
var a = parseInt('010001',2);
console.log(a.toString(2));
// 10001
var b = ~a;
console.log(b.toString(2));
// -10010
MSDN says:
~ Performs the NOT operator on each bit. NOT a yields the inverted
value (a.k.a. one's complement) of a.
010001 should thus return 101110.
This topic kind of confirms that.
So I can't understand how we get -10010 instead. The only potential explanation I can think of is that:
010001 is negated to 101110, but it is written as -10001,
and then for some obscure reason the two's complement is taken,
and -10001 becomes -10010.
But all this is very murky in my mind; do you have an idea of what happens, precisely?
JavaScript's bitwise operators convert their operands to 32-bit signed integers (the usual 2's complement), perform their operation, then return the result as the most appropriate value in JavaScript's number type (double-precision floating point; JavaScript doesn't have an integer type). More in §12.5.11 (Bitwise NOT Operator ~) and §7.1.5 (ToInt32).
So your 10001 is:
00000000 00000000 00000000 00010001
which when ~ is:
11111111 11111111 11111111 11101110
...which is indeed negative in 2s complement representation.
You may be wondering: if the bit pattern is as above, why did b.toString(2) give you -10010 instead? Because it's showing you signed binary, not the actual bit pattern: the - means negative, and 10010 is 18 in decimal. The bit pattern above is how that's represented in two's complement bits. (And yes, I did have to go check myself on that!)
Under the covers, when Javascript does bitwise operations, it converts to a 32-bit signed integer representation, and uses that, then converts the result back into its internal decimal representation.
As such, your input value, 010001 becomes 00000000 00000000 00000000 00010001.
This is then inverted:
~00000000 00000000 00000000 00010001 => 11111111 11111111 11111111 11101110
Converted into hex, the inverted value is 0xFFFFFFEE, which is equivalent to the decimal value of -18.
Since this is a signed integer with a value of -18, this value is converted to the underlying decimal representation of -18 by Javascript.
When Javascript tries to print it as a base-2 number, it sees the negative sign and the value of 18, and prints it as -10010, since 10010 is the binary representation of positive 18.
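A quick way to see both views at the console is to "cast" the result to unsigned with >>> 0 before formatting, since toString(2) on a negative number prints sign-plus-magnitude rather than the underlying bits:

```javascript
var a = parseInt('010001', 2); // 17
var b = ~a;                    // -18

console.log(b.toString(2));         // "-10010": sign + magnitude of 18
console.log((b >>> 0).toString(2)); // "11111111111111111111111111101110": the real bit pattern
```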
JavaScript uses 32-bit signed numbers, so
a (010001) (17) is 0000 0000 0000 0000 0000 0000 0001 0001
b = ~a (-18) is 1111 1111 1111 1111 1111 1111 1110 1110
The reason -18 is printed as -10010, and how to get at the actual bit pattern, is explained in the other answers here.
As per the documentation on the Mozilla Developer Network: bitwise NOTing any number x yields -(x + 1). For example, ~5 yields -6. That is why you are getting the negative sign in front of the number.
var reverseBits = function(n) {
    var re = 0;
    for (var i = 0; i < 32; i++) {
        re = (re << 1) | (n & 1);
        n >>>= 1;
    }
    return re;
};
This is my code to reverse bits in JavaScript, but when n = 1, it gives -2147483648 (-10000000000000000000000000000000). Shouldn't it be a positive number? Where am I wrong?
The reason you are getting a negative number is because of how computers store negative and positive numbers. The most significant bit (the bit with the greatest value) is used in signed numbers to determine whether a number is negative or positive. If this bit is 0, the number is positive; if it's 1, it's negative. Computers use a technique called two's complement to convert a number between negative and positive. Here is how it works:
In your example, you assigned the number 1 to n. In a 32-bit computer, the binary would look like this:
0000 0000 0000 0000 0000 0000 0000 0001
After your reverse your bits, your binary looks like this:
1000 0000 0000 0000 0000 0000 0000 0000
If you pull out your binary calculator, punch in this number, and convert it to decimal, you'll see it's the value 2147483648. Because the leftmost bit is a 1, it's a negative number. Because JavaScript only has generic var variables, it assumes you want a signed result. The >>> operator in JavaScript is called a zero-fill right shift; using it with an operand of 0 (>>> 0) tells JavaScript that you want an unsigned result.
In case you're curious (or some other reader of this post is curious), here is how a binary-based computer deals with negative numbers. Suppose you want to store the value -96. How does a computer store this? Well first, just ignore the sign. 96 in binary looks like this:
0000 0000 0000 0000 0000 0000 0110 0000
Next, the computer performs a two's complement. This is accomplished by first inverting each bit (1's become 0's and 0's become 1's):
1111 1111 1111 1111 1111 1111 1001 1111
Finally, you simply add 1, which looks like this:
1111 1111 1111 1111 1111 1111 1010 0000
Internally, this is how it's stored in your computer's memory. This bit pattern equates to 4,294,967,200 unsigned, or -96 signed.
should not it be a positive number? Where am I wrong?
No it should not. All bitwise operators except >>> work with signed 32 bit integers, where a leading 1 (in twos complement) signifies a negative number.
As suggested by #Icemanind, you can use the unsignedness of >>> to fix that and "cast" it to an unsigned integer:
return re >>> 0;
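Applying that fix, the function from the question with the unsigned cast on its return value behaves as expected:

```javascript
// reverseBits from the question, with the >>> 0 fix applied to the return value.
var reverseBits = function(n) {
    var re = 0;
    for (var i = 0; i < 32; i++) {
        re = (re << 1) | (n & 1);
        n >>>= 1;
    }
    return re >>> 0; // reinterpret the 32-bit pattern as unsigned
};

console.log(reverseBits(1)); // 2147483648, no longer -2147483648
```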
Why does this return 10010 instead of 00001?
0110 >> 2 // 10010
I thought the bits would be shifted to the right 2 times, but they're not. The output I expected was 0001 or 1 but I got 0 instead. Why is this?
0110 is an octal constant because it starts with a zero:
>>> 0110
72
>>> 0110 >> 2
18
>>> bin(_)
'0b10010'
This is Python, but the same is true of many other languages with octal constants (Java, C, JavaScript, ...). Not all languages provide binary constants. If you don't have them, you can use hexadecimal constants instead (0b0110 is 0x6, for example).
Your number is not being interpreted as binary, but rather octal (base 8). Octal 0110 is 72 in decimal, or 1001000 in binary. When you right shift by 2, that becomes 10010 as you are seeing.
It's common in programming languages that a leading zero means octal. Depending on the language you are using, there may or may not be a way to specify a binary literal.
A more universal way to express a binary number would be using hex since each nibble (hex digit) is exactly 4 bits.
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
A 1010
B 1011
C 1100
D 1101
E 1110
F 1111
So, to make 0110 (binary) we'd use 0x6. To make 01101101 we'd use 0x6D.
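In modern JavaScript (and several other languages) you can sidestep the leading-zero ambiguity entirely with explicit base prefixes: 0b for binary, 0o for octal, 0x for hex:

```javascript
console.log(0b0110 >> 2);              // 1: the result the question expected
console.log(0o110 >> 2);               // 18: same value the legacy octal literal 0110 produces
console.log((0o110 >> 2).toString(2)); // "10010"
console.log(0x6D.toString(2));         // "1101101"
```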
In Javascript when I do this
var num = 1;
~ num == -2
why does ~num not equal 0?
In binary, 1 is stored as 1; thus NOT 1 should be 0.
Or, if it is stored as 0001, then NOT 0001 would be 1110.
I think I am missing something... can someone clear this up?
Look up Two's complement for signed binary numbers
Let's assume that a JavaScript Number is 8 bits wide (which it's not):
then
1 = 0000 0001b
and
~1 = 1111 1110b
Which is the binary representation of -2
0000 0010b = 2
0000 0001b = 1
0000 0000b = 0
1111 1111b = -1
1111 1110b = -2
~ toggles the bits of the operand so
00000001
becomes
11111110
which is -2
Note: In javascript, the numbers are 32-bit, but I shortened it to illustrate the point.
From the documentation:
Bitwise NOTing any number x yields -(x + 1). For example, ~5 yields -6.
The reason for this is that using a bitwise NOT reverses all the bits of a value. If you are storing the value of 1 in a signed 8-bit integer, you're storing the binary value 00000001. If you apply a bitwise NOT, you get 11111110, which for a signed 8-bit integer is the binary value for -2.
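The identity is easy to check at the console:

```javascript
// ~x === -(x + 1) for any 32-bit integer x
console.log(~1);  // -2
console.log(~5);  // -6
console.log(~0);  // -1
console.log(~-3); //  2
```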
I'm attempting to produce a number from a set of bytes with JavaScript in Google Chrome, from an ArrayBuffer to get at MP3 tag information. The ID3v2 specification states that to get the tag size you must take 4 bytes at a certain location and get the integer value from them, except:
The ID3v2 tag size is encoded with four bytes where the most significant bit (bit 7) is set to zero in every byte, making a total of 28 bits. The zeroed bits are ignored, so a 257 bytes long tag is represented as $00 00 02 01.
The naive way to do this seems to be to go through each byte, take its 7 significant bits, and produce a new 4-byte value from the 7 bits of each of the original 4 bytes, so that, for example, if we have these 4 original bytes:
0111 1111 0111 1111 0111 1111 0111 1111
I create a new ArrayBuffer and loop through each bit to produce:
0000 1111 1111 1111 1111 1111 1111 1111
And then I calculate the integer value from this 32bit integer using Uint32Array
Is there an easier way to do this?
If you think about it, what you've got is a 4-digit base-128 number. Each of the bytes holds a single "digit", and each "digit" is a value between 0 and 127 (inclusive). Thus, to turn them into a usable number, you just multiply and add like you'd do with any other base: the least-significant "digit" is the "one's place" digit, the next one is the "128s", the next is the "16384s", and the most-significant digit is the "2097152s" place.
I'm not sure exactly how to show this in code because I'm not really familiar with the new ArrayBuffer APIs; you use an ArrayBufferView or something to get access to the values, right? Assuming it's easy to get the individual bytes, it should be a very simple function to do the multiplies and additions.
If you just target Chrome, you can also use DataViews to read out different datatypes of your bytestream: https://developer.mozilla.org/en/JavaScript_typed_arrays/DataView
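A minimal sketch of the multiply-and-add approach described above, reading the four size bytes through a DataView (byte values taken from the spec's $00 00 02 01 example; the offset where the size bytes live in a real file is up to the caller):

```javascript
// Decode a 4-byte ID3v2 "synchsafe" integer: 7 useful bits per byte, MSB ignored.
function readSynchsafe(view, offset) {
    var size = 0;
    for (var i = 0; i < 4; i++) {
        // shift the accumulator by 7 (not 8) and add in the low 7 bits of each byte
        size = (size << 7) | (view.getUint8(offset + i) & 0x7f);
    }
    return size;
}

var bytes = new Uint8Array([0x00, 0x00, 0x02, 0x01]); // $00 00 02 01 from the spec
console.log(readSynchsafe(new DataView(bytes.buffer), 0)); // 257
```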