Converting large integer to 8 byte array in JavaScript

I'm trying to convert a large number into an 8 byte array in javascript.
Here is an IMEI that I am passing in: 45035997012373300
var bytes = new Array(8);
for (var k = 0; k < 8; k++) {
    bytes[k] = value & 255;
    value = value / 256;
}
This ends up giving the byte array: 48,47,7,44,0,0,160,0. Converted back to a long, the value is 45035997012373296, which is 4 less than the correct value.
Any idea why this is and how I can fix it to serialize into the correct bytes?

Since you are converting from decimal to bytes, dividing by 256 is an operation that is pretty easily simulated by splitting up a number in a string into parts. There are two mathematical rules that we can take advantage of.
The right-most n digits of a decimal number can determine divisibility by 2^n.
10^n will always be divisible by 2^n.
Thus we can split off the right-most 8 digits, divide that right part by 256, and divide the left part of the number by 256 separately. The remainder from the left part can be shifted into the right part of the number (the right-most 8 digits) via the identity n*10^8 \ 256 = (q*256+r)*10^8 \ 256 = q*10^8 + r*5^8, where \ is integer division and q and r are the quotient and remainder, respectively, of n \ 256 (note that 10^8 \ 256 = 5^8 = 390625 exactly, since 256 divides 10^8). This yields the following method to do integer division by 256 for strings of up to 23 digits in length (15 digits of normal JS precision + 8 extra yielded by this method):
function divide256(n) {
    if (n.length <= 8) {
        return Math.floor(parseInt(n, 10) / 256).toString();
    } else {
        var top = n.substring(0, n.length - 8);
        var bottom = n.substring(n.length - 8);
        var topVal = Math.floor(parseInt(top, 10) / 256);
        var bottomVal = Math.floor(parseInt(bottom, 10) / 256);
        var rem = (100000000 / 256) * (parseInt(top, 10) % 256);
        bottomVal += rem;
        topVal += Math.floor(bottomVal / 100000000); // shift back possible carry
        bottomVal %= 100000000;
        if (topVal == 0) return bottomVal.toString();
        // pad the lower half to 8 digits so leading zeros are not lost
        else return topVal.toString() + ("0000000" + bottomVal).slice(-8);
    }
}
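To see the splitting rule on a small worked instance (a hypothetical number chosen for illustration, not the IMEI):
// floor(312373300 / 256): split into top = 3 and bottom = 12373300.
var q = Math.floor(3 / 256);   // 0
var r = 3 % 256;               // 3
// 390625 is 10^8 / 256, exact because 256 divides 10^8
var result = q * 1e8 + r * 390625 + Math.floor(12373300 / 256);
console.log(result);                      // 1220208
console.log(Math.floor(312373300 / 256)); // 1220208, same value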
Technically this could be implemented to divide an integer of any arbitrary size by 256, simply by recursively breaking the number into 8-digit parts and handling the division of each part separately using the same method.
Here is a working implementation that calculates the correct byte array for your example number (45035997012373300): http://jsfiddle.net/kkX2U/.
[52, 47, 7, 44, 0, 0, 160, 0]
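The fiddle itself is not reproduced here, but a minimal sketch of the surrounding loop might look like this; mod256 is a helper introduced for illustration, relying on the same fact that 10^8 is divisible by 256 (so n mod 256 depends only on the last 8 digits):
// Sketch: extract 8 bytes, least significant first, from a decimal string.
function mod256(n) {
    return parseInt(n.slice(-8), 10) % 256;
}

function toBytes(decimalString) {
    var bytes = [];
    var n = decimalString;
    for (var k = 0; k < 8; k++) {
        bytes.push(mod256(n));
        n = divide256(n);
    }
    return bytes;
}

toBytes("45035997012373300"); // [52, 47, 7, 44, 0, 0, 160, 0]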

Your value compared with the limit of JavaScript's exact integer range:
45035997012373300 // your value
 9007199254740992 // 2^53, the limit of exact integer precision in JavaScript
JavaScript cannot represent your original value exactly as an integer; any script that breaks it down is already starting from an inexact number.
Related:
var diff = 45035997012373300 - 45035997012373298;
// 0 (not 2)
Edit: If you can express your number as a hexadecimal string:
function bytesFromHex(str, pad) {
    if (str.length % 2) str = "0" + str;
    var bytes = str.match(/../g).map(function(s) {
        return parseInt(s, 16);
    });
    if (pad) for (var i = bytes.length; i < pad; ++i) bytes.unshift(0);
    return bytes;
}
var imei = "a000002c072f34";
var bytes = bytesFromHex(imei,8);
// [0,160,0,0,44,7,47,52]
If you need the bytes ordered from least-to-most significant, throw a .reverse() on the result.
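As a modern aside (not part of the original answer): on engines with BigInt support (ES2020+), exact integer math sidesteps the precision problem entirely. A minimal sketch:
// BigInt does exact math, so there is no drift past 2^53.
function bytesFromBigInt(value) {
    const bytes = [];
    let v = BigInt(value); // accepts a numeric string, preserving all digits
    for (let k = 0; k < 8; k++) {
        bytes.push(Number(v & 255n)); // low byte
        v >>= 8n;
    }
    return bytes; // least significant byte first
}

bytesFromBigInt("45035997012373300"); // [52, 47, 7, 44, 0, 0, 160, 0]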

Store the IMEI as a hex string (if you can), then parse the string two characters at a time; that way you keep full precision when you build the array.
Something like:
function parseHexString(str) {
    var array = [];
    for (var i = 0, j = 0; i < str.length; i += 2, j++) {
        array[j] = parseInt(str.substr(i, 2), 16);
    }
    return array;
}
or close to that whatever...

Related

In JavaScript, is there a way to make 0.84729347293923 into an integer without using any string or regex manipulation?

Given any number between 0 and 1, such as 0.84729347293923, is there a simple way to make it into 84729347293923 without string or regex manipulation? I can think of using a loop, which probably is no worse than using a string because it is O(n) with n being the number of digits. But is there a better way?
function getRandom() {
    let r = Math.random();
    while (Math.floor(r) !== r) r *= 10;
    return r;
}

for (let i = 0; i < 10; i++)
    console.log(getRandom());
Integers mod 1 = 0, non integers mod 1 != 0.
while ((r*=10) % 1);
OK, I just want to refactor my code (I realized the first version was bad, so this is what I came up with to correctly get the value as you requested).
NOTE: As the question asks about numbers "between 0 and 1", this solution only works for values in that range:
window.onload = () => {
    function getLen(num) {
        let currentNumb = num;
        let integratedArray = [];
        let realLen = 0;
        /* While the number is not an integer, we multiply a copy of the
         * original value by ten; when the loop detects that the number is
         * already an integer, the while simply breaks. Along the way we
         * store each transformation of the number in integratedArray. */
        while (!Number.isInteger(currentNumb)) {
            currentNumb *= 10;
            integratedArray.push(currentNumb);
        }
        /* We iterate over the array and compare each stored value with an
         * operation whose result should be exactly the same; when both are
         * equal we assign realLen to i, and when they differ we break the
         * loop, since differing values indicate we have reached the
         * "trash digits", which we simply skip. */
        for (let i = 0; i < integratedArray.length; i++) {
            if (Math.floor(integratedArray[i]) === Math.floor(num * Math.pow(10, i + 1))) {
                realLen = i;
            } else {
                break;
            }
        }
        return realLen;
    }
    // Get the fractional part of a number between 0 and 1 as an integer.
    function getShiftedNumber(num) {
        // First we need the length of the fractional part.
        const len = getLen(num);
        /* With the length known, we simply multiply the number by 10^len,
         * which removes the decimal point and turns the number into an
         * integer (in this case a large one). */
        return num * Math.pow(10, len);
    }
    console.log(getShiftedNumber(0.84729347293923));
};
So the explanation is as follows:
Because we want to convert this number without using any string or regex manipulation, we first need the length of the fractional part, which is a bit hard to get without string conversions... so I wrote the function getLen for this purpose.
In the function getLen, we have 3 variables:
currentNumb: a copy of the original value. It helps us find the length of the number, and we can transform it without changing the original reference.
We multiply this value by ten as many times as needed, with the help of a while loop, until the number becomes an integer (this method actually produces a "false integer").
NOTE: I say "false integer" because while testing I realized that more digits than expected get added to the number (very strange), so this small but important detail makes it necessary to filter out these "trash digits" later.
integratedArray: stores the intermediate results of the while loop, so the last number stored in this array is an integer, but one of the "fake integers". We iterate over this array later to find which of the stored values differ from the original value multiplied by 10^(i + 1). Here is the hint: in this case the first 12 values of the array match Math.floor(num * Math.pow(10, i + 1)) exactly, but the 13th value does not, so... yes! there are the "trash digits" we were searching for.
realLen: the variable where we store the real length of the fractional part, used to convert it into an integer.
Some binary search approach:
It is useless if the average length is < 8.
It contains floating point issues.
But hey, it is O(log n), with tons of wasted side computations; I guess if one counts them it is even worse than plain multiplication.
I prefer chiliNUT's answer. A one-line stamp.
function floatToIntBinarySearch(number) {
    const max_safe_int_length = 16;
    const powers = [
        1, 10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000,
        1000000000, 10000000000, 100000000000, 1000000000000,
        10000000000000, 100000000000000, 1000000000000000, 10000000000000000
    ];
    let currentLength = 16;
    let step = 16;
    let _number = number * powers[currentLength];
    while (_number % 1 != 0 || (_number % 10 | 0) == 0) {
        step /= 2;
        if ((_number % 10 | 0) == 0 && !(_number % 1 != 0)) {
            currentLength = currentLength - step;
        } else {
            currentLength = step + currentLength;
        }
        if (currentLength < 1 || currentLength > max_safe_int_length * 2)
            throw Error("length is weird: " + currentLength);
        _number = number * powers[currentLength];
        console.log(currentLength, _number);
        if (Number.isNaN(_number))
            throw Error("isNaN: " + ((number + "").length - 2) + " maybe greater than 16?");
    }
    return number * powers[currentLength];
}

let randomPower = 10 ** (Math.random() * 10 | 0);
let test = (Math.random() * randomPower | 0) / randomPower;
console.log(test);
console.log(floatToIntBinarySearch(test));

Compress group of arrays into smallest possible string

This question can be answered using JavaScript, Underscore, or jQuery functions.
given 4 arrays:
[17,17,17,17,17,18,18,18,18,18,19,19,19,19,19,20,20,20,20,20] => x coordinate of unit
[11,12,13,14,15,11,12,13,14,15,11,12,13,14,15,11,12,13,14,15] => y coordinate of unit
[92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92] => x (unit moves to this direction x)
[35,36,37,38,39,35,36,37,38,39,35,36,37,38,39,35,36,37,38,39] => y (unit moves to this direction y)
They are very related to each other.
For example, the first element of all four arrays, [17,11,92,35], gives one unit's x/y coordinates and the x/y coordinates of that unit's target.
So in total there are 5*4 = 20 units. Every unit has slightly different stats.
These 4 arrays of units x/y coordinates visually looks like an army of 20 units "x" (targeting "o"):
xxxxx o
xxxxx o
xxxxx o
xxxxx o
There will always be 4 arrays. Even for a single unit there will be 4 arrays, each of size 1. This is the simplest situation and the most common.
In a real situation, every unit has 20 different stats (keys) in total, and 14 of those keys are usually identical across a group of units.
So they are grouped as an army with the same stats. The only differences are the unit's coordinates and the coordinates of the unit's target.
I need to compress all this data into as small a form as possible, which can later be decompressed.
There can also be more complex situations, when all these 14 keys are accidentally the same, but the coordinates are totally different from the pattern. Example:
[17,17,17,17,17,18,18,18, 215, 18,18,19,19,19,19,19,20,20,20,20,20] => x coordinate of unit
[11,12,13,14,15,11,12,13, 418, 14,15,11,12,13,14,15,11,12,13,14,15] => y coordinate of unit
[92,92,92,92,92,92,92,92, -78, 92,92,92,92,92,92,92,92,92,92,92,92] => x (unit moves to this direction x)
[35,36,37,38,39,35,36,37, -887, 38,39,35,36,37,38,39,35,36,37,38,39] => y (unit moves to this direction y)
In this situation I need to treat the data as 2 different armies. When an army has fewer than 3 units, I simply write those units without the pattern, as [215,418,-78,-887], [..]; when an army has more than 2 units, I need a compressed string with a pattern, which can be decompressed later. In this example there are 21 units; they just have to be split into a 1-unit army and a (5x4 = 20)-unit army.
On the assumption that every n units form a pattern,
encode units with
n: sequence units count
ssx: start of source x
dsx: difference of source x
ssy: start of source y
dsy: difference of source y
stx: start of target x
dtx: difference of target x
sty: start of target y
dty: difference of target y
by the array: [n,ssx,dsx,ssy,dsy,stx,dtx,sty,dty]
so that the units:
[17,17,17,17,17],
[11,12,13,14,15],
[92,92,92,92,92],
[35,36,37,38,39]
are encoded:
[5,17,0,11,1,92,0,35,1]
Of course, if you know in advance that, for example, the y targets are always the same for such a sequence, you can drop the difference parameter, to have:
[n,ssx,dsx,ssy,---,stx,dtx,sty,---] => [n,ssx,dsx,ssy,stx,dtx,sty], and so on.
For an interruption of a pattern like the one in your last example, you can use additional "extra" arrays and then insert them into the sequence, with:
exsx: extra unit starting x
exsy: extra unit starting y
extx: extra unit target x
exty: extra unit target y
m: insert extra unit at
so that the special case is encoded:
{
    patterns: [
        [5,17,0,11,1,92,0,35,1],
        [5,18,0,11,1,92,0,35,1],
        [5,19,0,11,1,92,0,35,1],
        [5,20,0,11,1,92,0,35,1]
    ],
    extras: [
        [215,418,-78,-887,8] // 8 - because you want to insert this unit at index 8
    ]
}
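A minimal sketch of this scheme (not from the answer itself; the names follow the field list above, and it assumes the run really is arithmetic in all four coordinates):
// Encode one linear run of units into [n,ssx,dsx,ssy,dsy,stx,dtx,sty,dty].
function encodePattern(xs, ys, tx, ty) {
    var n = xs.length;
    var d = function(a) { return n > 1 ? a[1] - a[0] : 0; };
    return [n, xs[0], d(xs), ys[0], d(ys), tx[0], d(tx), ty[0], d(ty)];
}

function decodePattern(p) {
    var n = p[0], xs = [], ys = [], tx = [], ty = [];
    for (var i = 0; i < n; i++) {
        xs.push(p[1] + i * p[2]);
        ys.push(p[3] + i * p[4]);
        tx.push(p[5] + i * p[6]);
        ty.push(p[7] + i * p[8]);
    }
    return [xs, ys, tx, ty];
}

decodePattern([5, 17, 0, 11, 1, 92, 0, 35, 1]);
// [[17,17,17,17,17], [11,12,13,14,15], [92,92,92,92,92], [35,36,37,38,39]]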
Again, this is a general encoding. Any specific properties for the patterns may further reduce the encoded representation.
Hope this helps.
High compression using bitstreams
You can encode sets of values into a bit stream, allowing you to remove unused bits. The numbers you have shown are no larger in magnitude than 887, which means you can fit each one into 10 bits, saving 54 bits per number (JavaScript numbers are 64-bit).
Run length compression
You also have many repeated sets of numbers which you can use run length compression on. You set a flag in the bitstream that indicates that the following set of bits represents a repeated sequence of numbers, then you have the number of repeats and the value to repeat. For sequences of random numbers you just keep them as is.
If you use run-length compression you create a block-type structure in the bit stream, which makes it possible to embed further compression. As many of your numbers are below 128, they can be encoded in 7 bits, or even less. For a small overhead (in this case 2 bits per block) you can select the smallest bit size that fits all the numbers in that block.
Variable bit depth numbers
I have created a number type value that represent the number of bits used to store numbers in a block. Each block has a number type and all numbers in the block use that type. There are 4 number types that can be encoded into 2 bits.
00 = 4 bit numbers. Range 0-15
01 = 5 bit numbers. Range 0-31
10 = 7 bit numbers. Range 0-127
11 = 10 bit numbers. Range 0-1023
The bitstream
To make this easy you will need a bit stream read/write. It allows you to easily write and read bits from a stream of bits.
// Simple unsigned bit stream.
// Read and write to and from a bit stream.
// Numbers are stored as big endian.
// Does not comprehend sign, so word length should be less than 32 bits.
// Methods
//   eof();                        // returns true if read pos > buffer bit size
//   read(numberBits);             // number of bits to read as one number. No sign so < 32
//   write(value, numberBits);     // value to write, number of bits to write < 32
//   getBuffer();                  // returns object with buffer as an array of numbers, and bitLength the total number of bits
//   setBuffer(buffer, bitLength); // the buffer as an array of numbers, and bitLength the total number of bits
// Properties
//   wordLength;                   // read-only length of a word
function BitStream() {
    var buffer = [];
    var pos = 0;
    var numBits = 0;
    const wordLength = 16;
    this.wordLength = wordLength;
    // read a single bit
    var readBit = function() {
        var b = buffer[Math.floor(pos / wordLength)]; // get word
        b = (b >> ((wordLength - 1) - (pos % wordLength))) & 1;
        pos += 1;
        return b;
    };
    // write a single bit. Will fill bits with 0 if write pos is moved past buffer length
    var writeBit = function(bit) {
        var rP = Math.floor(pos / wordLength);
        if (rP >= buffer.length) { // check that the buffer has values at current pos
            var end = buffer.length; // fill buffer up to pos with zero
            while (end <= rP) {
                buffer[end] = 0;
                end += 1;
            }
        }
        var b = buffer[rP];
        bit &= 1; // mask out any unwanted bits
        bit <<= (wordLength - 1) - (pos % wordLength);
        b |= bit;
        buffer[rP] = b;
        pos += 1;
    };
    // returns true if past eof
    this.eof = function() {
        return pos >= numBits;
    };
    // reads number of bits as a Number
    this.read = function(bits) {
        var v = 0;
        while (bits > 0) {
            v <<= 1;
            v |= readBit();
            bits -= 1;
        }
        return v;
    };
    // writes value to bit stream
    this.write = function(value, bits) {
        while (bits > 0) {
            bits -= 1;
            writeBit((value >> bits) & 1);
        }
    };
    // returns the buffer and length
    this.getBuffer = function() {
        return {
            buffer: buffer,
            bitLength: pos,
        };
    };
    // sets the buffer and length and returns read/write pos to start
    this.setBuffer = function(_buffer, bitLength) {
        buffer = _buffer;
        numBits = bitLength;
        pos = 0;
    };
}
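A quick round-trip sketch (a usage example added here, not part of the original answer):
// Write a 4-bit and a 10-bit value, rewind, read them back.
var bs = new BitStream();
bs.write(5, 4);    // 0101
bs.write(300, 10); // 0100101100
var buf = bs.getBuffer();
bs.setBuffer(buf.buffer, buf.bitLength); // rewind to the start
console.log(bs.read(4));  // 5
console.log(bs.read(10)); // 300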
A format for your numbers
Now to design the format. The first bit read from a stream is a sequence flag: if 0, the following block holds a repeated value; if 1, the following block holds a sequence of random numbers.
Block bits : description
repeat block (holds a repeated number):
bit 0 : val 0 = repeat
bit 1 : val 0 = 4-bit repeat count, or val 1 = 5-bit repeat count
then either
bits 2,3,4,5 : 4-bit number of repeats - 1
bits 6,7 : 2-bit number type
or
bits 2,3,4,5,6 : 5-bit number of repeats - 1
bits 7,8 : 2-bit number type
followed by
the value to repeat, stored at the block's number type
End of block.
sequence block (holds a sequence of random numbers):
bit 0 : val 1 = sequence
bit 1 : val 0 = positive sequence, val 1 = negative sequence
bits 2,3,4,5 : 4-bit number of numbers in sequence - 1
bits 6,7 : 2-bit number type
then the sequence of numbers in the number format
End of block.
Keep reading blocks until the end of the file.
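As a worked instance (a hand-written example matching what the encoder below emits for the run 17,17,17,17,17):
// The run 17,17,17,17,17 becomes a single 13-bit repeat block.
var demo = new BitStream();
demo.write(0, 1);  // repeat block
demo.write(0, 1);  // 4-bit repeat count follows
demo.write(4, 4);  // repeats - 1 = 4
demo.write(1, 2);  // number type 01 = 5-bit numbers (17 needs 5 bits)
demo.write(17, 5); // the repeated value
console.log(demo.getBuffer().bitLength); // 13 bits for five numbers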
Encoder and decoder
The following object will encode and decode a flat array of numbers. It only handles numbers up to 10 bits long, so no values over 1023 or under -1023.
If you want larger numbers you will have to change the number types that are used. To do this, change the arrays:
const numberSize = [0,0,0,0,0,1,2,2,3,3,3]; // the number bit depth
const numberBits = [4,5,7,10]; // the number bit depth lookup;
For example, if you want the max number to be 12 bits, covering -4095 to 4095 (the sign bit is in the block encoding), use the arrays below; I have also changed the 7-bit number type to 8 bits. The first array is used to look up the bit depth: for a value that needs bitCount bits, the number type is numberSize[bitCount] and the bits used to store the number are numberBits[numberSize[bitCount]].
const numberSize = [0,0,0,0,0,1,2,2,2,3,3,3,3]; // the number bit depth
const numberBits = [4,5,8,12]; // the number bit depth lookup;
function ArrayZip() {
    const numberSize = [0,0,0,0,0,1,2,2,3,3,3]; // the number bit depth lookup
    const numberBits = [4,5,7,10];              // bits per number type
    this.encode = function(data) { // encodes the data
        var pos = 0;
        function getRepeat() { // returns the number of repeat values
            var p = pos + 1;
            if (data[pos] < 0) {
                return 1; // ignore negative numbers
            }
            while (p < data.length && data[p] === data[pos]) {
                p += 1;
            }
            return p - pos;
        }
        function getNoRepeat() { // returns the number of non-repeat values
            // if the sequence has negative numbers then
            // the length is returned as a negative
            var p = pos + 1;
            if (data[pos] < 0) { // negative numbers
                while (p < data.length && data[p] !== data[p-1] && data[p] < 0) {
                    p += 1;
                }
                return -(p - pos);
            }
            while (p < data.length && data[p] !== data[p-1] && data[p] >= 0) {
                p += 1;
            }
            return p - pos;
        }
        function getMax(count) {
            var max = 0;
            var p = pos;
            while (count > 0) {
                max = Math.max(Math.abs(data[p]), max);
                p += 1;
                count -= 1;
            }
            return max;
        }
        var out = new BitStream();
        while (pos < data.length) {
            var reps = getRepeat();
            if (reps > 1) {
                var bitCount = numberSize[Math.ceil(Math.log(getMax(reps) + 1) / Math.log(2))];
                if (reps < 16) {
                    out.write(0, 1);        // repeat header
                    out.write(0, 1);        // use 4 bit repeat count
                    out.write(reps - 1, 4); // write 4 bit number of reps
                    out.write(bitCount, 2); // write 2 bit number size
                    out.write(data[pos], numberBits[bitCount]);
                    pos += reps;
                } else {
                    if (reps > 32) { // if more than can fit in one repeat block, split it
                        reps = 32;
                    }
                    out.write(0, 1);        // repeat header
                    out.write(1, 1);        // use 5 bit repeat count
                    out.write(reps - 1, 5); // write 5 bit number of reps
                    out.write(bitCount, 2); // write 2 bit number size
                    out.write(data[pos], numberBits[bitCount]);
                    pos += reps;
                }
            } else {
                var seq = getNoRepeat();   // get number of non-repeats
                var neg = seq < 0 ? 1 : 0; // found negative numbers
                seq = Math.min(16, Math.abs(seq));
                // check if last value is the start of a repeating block
                if (seq > 1) {
                    var tempPos = pos;
                    pos += seq;
                    seq -= getRepeat() > 1 ? 1 : 0;
                    pos = tempPos;
                }
                // get the max bit count needed to hold the numbers
                var bitCount = numberSize[Math.ceil(Math.log(getMax(seq) + 1) / Math.log(2))];
                out.write(1, 1);        // sequence header
                out.write(neg, 1);      // write negative flag
                out.write(seq - 1, 4);  // write sequence length
                out.write(bitCount, 2); // write 2 bit number size
                while (seq > 0) {
                    out.write(Math.abs(data[pos]), numberBits[bitCount]);
                    pos += 1;
                    seq -= 1;
                }
            }
        }
        // get the bit stream buffer
        var buf = out.getBuffer();
        // start the bit stream with the number of trailing bits. Only 4 bits of 16
        // are used, so there is plenty of room for alternative encoding flags.
        var str = String.fromCharCode(buf.bitLength % out.wordLength);
        // convert bit stream to characters
        for (var i = 0; i < buf.buffer.length; i++) {
            str += String.fromCharCode(buf.buffer[i]);
        }
        // return encoded string
        return str;
    };
    this.decode = function(zip) {
        var count, rSize, header, _in, i, data, decompressed, endBits, numSize, val, neg;
        data = [];                   // holds character codes
        decompressed = [];           // holds the decompressed array of numbers
        endBits = zip.charCodeAt(0); // get the trailing bits count
        for (i = 1; i < zip.length; i++) { // convert string to numbers
            data[i - 1] = zip.charCodeAt(i);
        }
        _in = new BitStream(); // create a bitstream to read the bits
        // set the buffer data and length
        _in.setBuffer(data, (data.length - 1) * _in.wordLength + endBits);
        while (!_in.eof()) { // do until eof
            header = _in.read(1); // read header bit
            if (header === 0) { // is repeat header
                rSize = _in.read(1); // get repeat count size
                if (rSize === 0) {
                    count = _in.read(4); // get 4 bit repeat count
                } else {
                    count = _in.read(5); // get 5 bit repeat count
                }
                numSize = _in.read(2); // get 2 bit number size type
                val = _in.read(numberBits[numSize]); // get the repeated value
                while (count >= 0) { // add to the data count + 1 times
                    decompressed.push(val);
                    count -= 1;
                }
            } else {
                neg = _in.read(1);     // read neg flag
                count = _in.read(4);   // get 4 bit seq count
                numSize = _in.read(2); // get 2 bit number size type
                while (count >= 0) {
                    if (neg) { // if negative numbers convert to neg
                        decompressed.push(-_in.read(numberBits[numSize]));
                    } else {
                        decompressed.push(_in.read(numberBits[numSize]));
                    }
                    count -= 1;
                }
            }
        }
        return decompressed;
    };
}
The best way to store a bit stream is as a string: JavaScript has Unicode strings, so we can pack 16 bits into every character.
The results and how to use.
You need to flatten the array. If you need to add extra info to reinstate the multi/dimensional arrays just add that to the array and let the compressor compress it along with the rest.
// flatten the array
var data = [17,17,17,17,17,18,18,18,18,18,19,19,19,19,19,20,20,20,20,20,11,12,13,14,15,11,12,13,14,15,11,12,13,14,15,11,12,13,14,15,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,35,36,37,38,39,35,36,37,38,39,35,36,37,38,39,35,36,37,38,39];
var zipper = new ArrayZip();
var encoded = zipper.encode(data); // packs the 80 numbers in data into 21 characters.
// compression rate of the data array: 5120 bits down to 336 bits,
// 93% compression.
// or, as a flat 7-bit ASCII string of numbers, 239 characters (including ','):
// 239 * 7 bits = 1673 bits down to 336 bits, 80% compression.
var decoded = zipper.decode(encoded);
I did not notice the negative numbers at first so the compression does not do well with the negative values.
var data = [17,17,17,17,17,18,18,18, 215, 18,18,19,19,19,19,19,20,20,20,20,20, 11,12,13,14,15,11,12,13, 418, 14,15,11,12,13,14,15,11,12,13,14,15, 92,92,92,92,92,92,92,92, -78, 92,92,92,92,92,92,92,92,92,92,92,92, 35,36,37,38,39,35,36,37, -887, 38,39,35,36,37,38,39,35,36,37,38,39]
var encoded = zipper.encode(data); // packs the 84 numbers in data into 33 characters.
// compression rate of the data array 5376 bits to 528 bits
var decoded = zipper.decode(encoded);
Summary
As you can see, this results in a very high compression rate (almost twice as good as LZ compression). The code is far from optimal, and you could easily implement a multi-pass compressor with various settings (there are 12 spare bits at the start of the encoded string that can be used to select among many options to improve compression).
Also, I did not see the negative numbers until I came back to post, so the fix for negatives is not great; you can get some more out of it by modifying the BitStream to understand negatives (i.e., use the >>> operator).

Can someone explain this base conversion code

var ShortURL = new function() {
    var _alphabet = '23456789bcdfghjkmnpqrstvwxyzBCDFGHJKLMNPQRSTVWXYZ-_',
        _base = _alphabet.length;

    this.encode = function(num) {
        var str = '';
        while (num > 0) {
            str = _alphabet.charAt(num % _base) + str;
            num = Math.floor(num / _base);
        }
        return str;
    };

    this.decode = function(str) {
        var num = 0;
        for (var i = 0; i < str.length; i++) {
            num = num * _base + _alphabet.indexOf(str.charAt(i));
        }
        return num;
    };
};
I understand encode works by converting from decimal to custom base (custom alphabet/numbers in this case)
I am not quite sure how decode works.
Why do we multiply the accumulated number by the base and then add the alphabet position of the character? I know that to convert 010 base 2 to decimal, we would do
(0 × 2^2) + (1 × 2^1) + (0 × 2^0) = 2
Not sure how it is represented in that decode algorithm
EDIT:
My own decode version
this.decode2 = function(str) {
    var result = 0;
    var position = str.length - 1;
    var value;
    for (var i = 0; i < str.length; i++) {
        value = _alphabet.indexOf(str[i]);
        result += value * Math.pow(_base, position--);
    }
    return result;
};
This is how I wrote my own decode version (just like I would convert it on paper). I would like someone to explain in more detail how the first version of decode works; I still don't get why we multiply num * base and start num at 0.
OK, so what does 376 mean as a base-10 output of your encode() function? It means:
1 * 100 +
5 * 10 +
4 * 1
Why? Because in encode(), you divide by the base on every iteration. That means that, implicitly, the characters pushed onto the string on the earlier iterations gain in significance by a factor of the base each time through the loop.
The decode() function, therefore, multiplies by the base each time it sees a new character. That way, the first digit is multiplied by the base once for every digit position past the first that it represents, and so on for the rest of the digits.
Note that in the explanation above, the 1, 5, and 4 come from the positions of the characters 3, 7, and 6 in the "alphabet" list. That's how your encoding/decoding mechanism works. If you feed your decode() function a numeric string encoded by something trying to produce normal base-10 numbers, then of course you'll get a weird result; that's probably obvious.
edit To further elaborate on the decode() function: forget (for now) about the special base and encoding alphabet. The process is basically the same regardless of the base involved. So, let's look at a function that interprets a base-10 string of numeric digits as a number:
function decode10(str) {
    var num = 0, zero = '0'.charCodeAt(0);
    for (var i = 0; i < str.length; ++i) {
        num = (num * 10) + (str.charCodeAt(i) - zero);
    }
    return num;
}
The accumulator variable num is initialized to 0 first, because before examining any characters of the input numeric string the only value that makes sense to start with is 0.
The function then iterates through each character of the input string from left to right. On each iteration, the accumulator is multiplied by the base, and the digit value at the current string position is added.
If the input string is "214", then, the iteration will proceed as follows:
num is set to 0
First iteration: str[i] is 2, so (num * 10) + 2 is 2
Second iteration: str[i] is 1, so (num * 10) + 1 is 21
Third iteration: str[i] is 4, so (num * 10) + 4 is 214
The successive multiplications by 10 achieve what the call to Math.pow() does in your code. Note that 2 is multiplied by 10 twice, which effectively multiplies it by 100.
The decode() routine in your original code does the same thing, only instead of a simple character code computation to get the numeric value of a digit, it performs a lookup in the alphabet string.
Both the original and your own version of the decode function achieve the same thing, but the original version does it more efficiently.
In the following assignment:
num = num * _base + _alphabet.indexOf(str.charAt(i));
... there are two parts:
_alphabet.indexOf(str.charAt(i))
The indexOf returns the value of a digit in base _base. You have this part in your own algorithm, so that should be clear.
num * _base
This multiplies the so-far accumulated result. The rest of my answer is about that part:
In the first iteration this has no effect, as num is still 0 at that point. But at the end of the first iteration, num contains the value as if the str only had its left most character. It is the base-51 digit value of the left most digit.
From the next iteration onwards, the result is multiplied by the base, which makes room for the next value to be added to it. It functions like a digit shift.
Take this example input to decode:
bd35
The individual characters represent value 8, 10, 1 and 3. As there are 51 characters in the alphabet, we're in base 51. So bd35 this represents value:
8*51³ + 10*51² + 1*51 + 3
Here is a table with the value of num after each iteration:
8
8*51 + 10
8*51² + 10*51 + 1
8*51³ + 10*51² + 1*51 + 3
Just to make the visualisation cleaner, let's put the power of 51 in a column header, and remove that from the rows:
   3    2    1    0
--------------------
                  8
             8   10
        8   10    1
   8   10    1    3
Note how the 8 shifts to the left at each iteration and gets multiplied with the base (51). The same happens with 10, as soon as it is shifted in from the right, and the same with the 1, and 3, although that is the last one and doesn't shift any more.
The multiplication num * _base represents thus a shift of base-digits to the left, making room for a new digit to shift in from the right (through simple addition).
At the last iteration all digits have shifted in their correct position, i.e. they have been multiplied by the base just enough times.
Putting your own algorithm in the same scheme, you'd have this table:
   3    2    1    0
--------------------
   8
   8   10
   8   10    1
   8   10    1    3
Here, there is no shifting: the digits are immediately put in the right position, i.e. they are multiplied with the correct power of 51 immediately.
You ask
I would like to understand how the decode function works from logical perspective. Why are we using num * base and starting with num = 0.
and write that
I am not quite sure how decode works. Why do we multiply base by a
current number and then add the position number of the alphabet? I
know that to convert 010 base 2 to decimal, we would do
(2 * 0^2) + (2 * 1^1) + (2 * 0 ^ 0) = 2
The decode function uses an approach to base conversion known as Horner's rule, used because it is computationally efficient:
start with a variable set to 0, num = 0
multiply the variable num by the base
take the value of the most significant digit (the leftmost digit) and add it to num,
repeat step 2 and 3 for as long as there are digits left to convert,
the variable num now contains the converted value (in base 10)
Using an example of a hexadecimal number A5D:
start with a variable set to 0, num = 0
multiply by the base (16), num is now still 0
take the value of the most significant digit (the A has a digit value of 10) and add it to num, num is now 10
repeat step 2, multiply the variable num by the base (16), num is now 160
repeat step 3, add the hexadecimal digit 5 to num, num is now 165
repeat step 2, multiply the variable num by the base (16), num is now 2640
repeat step 3, add the hexadecimal digit D to num (add 13)
there are no digits left to convert, the variable num now contains the converted value (in base 10), which is 2653
Compare the expression of the standard approach:
(10 × 16²) + (5 × 16¹) + (13 × 16⁰) = 2653
to the use of Horner's rule:
(((10 × 16) + 5) × 16) + 13 = 2653
which is exactly the same computation, but rearranged in a form making it easier to compute. This is how the decode function works.
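In code, the same computation step by step (an illustration added here, not part of the original answer):
// Horner's rule for hexadecimal "A5D":
let num = 0;
num = num * 16 + 10; // 'A' = 10 -> num is 10
num = num * 16 + 5;  // '5' = 5  -> num is 165
num = num * 16 + 13; // 'D' = 13 -> num is 2653
console.log(num); // 2653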
Why are we using num * base and starting with num = 0.
The conversion algorithm needs a start value, therefore num is set to 0. For each repetition (each loop iteration), num is multiplied by base. This only has any effect on the second iteration, but is written like this to make it easier to write the conversion as a for loop.

JavaScript Typed Arrays - Different Views

I'm trying to get a handle on the interaction between different types of typed arrays.
Example 1.
var buf = new ArrayBuffer(2);
var arr8 = new Int8Array(buf);
var arr8u = new Uint8Array(buf);
arr8[0] = 7.2;
arr8[1] = -45.3;
console.log(arr8[0]); // 7
console.log(arr8[1]); // -45
console.log(arr8u[0]); // 7
console.log(arr8u[1]); // 211
I have no problem with the first three readouts, but where does 211 come from in the last one? Does this have something to do with bit-shifting because of the minus sign?
Example 2
var buf = new ArrayBuffer(4);
var arr8 = new Int8Array(buf);
var arr32 = new Int32Array(buf);

for (var i = 0; i < buf.byteLength; i++) {
    arr8[i] = i;
}

console.log(arr8);  // [0, 1, 2, 3]
console.log(arr32); // [50462976]
So where does the 50462976 come from?
Example #1
Examine positive 45 as a binary number:
> (45).toString(2)
"101101"
Binary values are negated using a two's complement:
00101101 => 45 signed 8-bit value
11010011 => -45 signed 8-bit value
When we read 11010011 as an unsigned 8-bit value, it comes out to 211:
> parseInt("11010011", 2);
211
Example #2
If you print 50462976 in base 2:
> (50462976).toString(2);
"11000000100000000100000000"
We can add leading zeros and rewrite this as:
00000011000000100000000100000000
And we can break it into octets:
00000011 00000010 00000001 00000000
This shows binary 3, 2, 1, and 0. The storage of 32-bit integers here is little-endian: the byte at the lowest index is the least significant, so the 8-bit values 0 to 3 are read in order of increasing significance when constructing the 32-bit value.
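As a cross-check (an aside added here, not part of the answer), DataView makes the byte order explicit:
var buf2 = new ArrayBuffer(4);
new Uint8Array(buf2).set([0, 1, 2, 3]);
var view = new DataView(buf2);
console.log(view.getInt32(0, true));  // 50462976 - little-endian, matches Int32Array here
console.log(view.getInt32(0, false)); // 66051    - big-endian reading of the same bytes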
First question.
Signed 8-bit integers range from -128 to 127. The positive part (0 to 127) maps to binary values from 00000000 to 01111111, and the other half (-128 to -1) maps from 10000000 to 11111111.
If you omit the first bit, you can create the number by adding a 7-bit number to a boundary. In your case, the binary representation is 11010011. The first bit is 1, which means the number is negative. The last 7 bits are 1010011, which gives us the value 83. Add it to the boundary: -128 + 83 = -45. That’s it.
Second question.
32bit integers are represented by four bytes in memory. You are storing four 8bit integers in the buffer. When converted to an Int32Array, all those values are combined to form one value.
If this were the decimal system, you could think of it as combining "1" and "2" to give "12". It’s similar in this case, except the multipliers are different. For the first value, it’s 2^24. Then it’s 2^16, then 2^8, and finally 2^0. Let’s do the math:
2^24 * 3 + 2^16 * 2 + 2^8 * 1 + 2^0 * 0 =
16777216 * 3 + 65536 * 2 + 256 * 1 + 1 * 0 =
50331648 + 131072 + 256 + 0 =
50462976
That’s why you’re seeing such a large number.

How do I convert an integer to binary in JavaScript?

I’d like to see integers, positive or negative, in binary.
Rather like this question, but for JavaScript.
function dec2bin(dec) {
    return (dec >>> 0).toString(2);
}
console.log(dec2bin(1)); // 1
console.log(dec2bin(-1)); // 11111111111111111111111111111111
console.log(dec2bin(256)); // 100000000
console.log(dec2bin(-256)); // 11111111111111111111111100000000
You can use the Number.toString(2) function, but it has some problems when representing negative numbers. For example, (-1).toString(2) outputs "-1".
To fix this issue, you can use the unsigned right shift bitwise operator (>>>) to coerce your number to an unsigned integer.
If you run (-1 >>> 0).toString(2) you will shift your number 0 bits to the right, which doesn't change the number itself but it will be represented as an unsigned integer. The code above will output "11111111111111111111111111111111" correctly.
This question has further explanation.
-3 >>> 0 (right logical shift) coerces its arguments to unsigned integers, which is why you get the 32-bit two's complement representation of -3.
Try
num.toString(2);
The 2 is the radix and can be any base between 2 and 36.
UPDATE:
This will only work for positive numbers; JavaScript represents negative binary integers in two's-complement notation. I made this little function which should do the trick (I haven't tested it out properly):
function dec2Bin(dec) {
    if (dec >= 0) {
        return dec.toString(2);
    } else {
        /* Here you could represent the number in 2s complement, but this is not what
           JS uses, as it's not sure how many bits are in your number range. There are
           some suggestions: https://stackoverflow.com/questions/10936600/javascript-decimal-to-binary-64-bit
        */
        return (~dec).toString(2);
    }
}
A simple way is just...
Number(42).toString(2);
// "101010"
The binary in 'convert to binary' can refer to three main things: the positional number system, the binary representation in memory, or 32-bit bitstrings. (For 64-bit bitstrings see Patrick Roberts' answer.)
1. Number System
(123456).toString(2) will convert numbers to the base 2 positional numeral system. In this system negative numbers are written with minus signs just like in decimal.
2. Internal Representation
The internal representation of numbers is 64 bit floating point and some limitations are discussed in this answer. There is no easy way to create a bit-string representation of this in javascript nor access specific bits.
3. Masks & Bitwise Operators
MDN has a good overview of how bitwise operators work. Importantly:
Bitwise operators treat their operands as a sequence of 32 bits (zeros and ones)
Before the operations are applied, the 64-bit floating point numbers are cast to 32-bit signed integers; afterwards they are converted back.
Here is the MDN example code for converting numbers into 32-bit strings.
function createBinaryString (nMask) {
// nMask must be between -2147483648 and 2147483647
for (var nFlag = 0, nShifted = nMask, sMask = ""; nFlag < 32;
nFlag++, sMask += String(nShifted >>> 31), nShifted <<= 1);
return sMask;
}
createBinaryString(0) //-> "00000000000000000000000000000000"
createBinaryString(123) //-> "00000000000000000000000001111011"
createBinaryString(-1) //-> "11111111111111111111111111111111"
createBinaryString(-1123456) //-> "11111111111011101101101110000000"
createBinaryString(0x7fffffff) //-> "01111111111111111111111111111111"
This answer attempts to address inputs with an absolute value in the range of 2147483648 (2^31) – 9007199254740991 (2^53−1).
In JavaScript, numbers are stored in 64-bit floating point representation, but bitwise operations coerce them to 32-bit integers in two's complement format, so any approach which uses bitwise operations restricts the range of output to −2147483648 (−2^31) – 2147483647 (2^31−1).
However, if bitwise operations are avoided and the 64-bit floating point representation is preserved by using only mathematical operations, we can reliably convert any safe integer to 64-bit two's complement binary notation by sign-extending the 53-bit twosComplement:
function toBinary(value) {
    if (!Number.isSafeInteger(value)) {
        throw new TypeError('value must be a safe integer');
    }

    const negative = value < 0;
    const twosComplement = negative ? Number.MAX_SAFE_INTEGER + value + 1 : value;
    const signExtend = negative ? '1' : '0';

    return twosComplement.toString(2).padStart(53, '0').padStart(64, signExtend);
}

function format(value) {
    console.log(value.toString().padStart(64));
    console.log(value.toString(2).padStart(64));
    console.log(toBinary(value));
}

format(8);
format(-8);
format(2**33 - 1);
format(-(2**33 - 1));
format(2**53 - 1);
format(-(2**53 - 1));
format(2**52);
format(-(2**52));
format(2**52 + 1);
format(-(2**52 + 1));
For older browsers, polyfills exist for the following functions and values:
Number.isSafeInteger()
Number.isInteger()
Number.MAX_SAFE_INTEGER
String.prototype.padStart()
As an added bonus, you can support any radix (2–36) if you perform the two's complement conversion for negative numbers in ⌈64 / log2(radix)⌉ digits by using BigInt:
function toRadix(value, radix) {
    if (!Number.isSafeInteger(value)) {
        throw new TypeError('value must be a safe integer');
    }

    const digits = Math.ceil(64 / Math.log2(radix));
    const twosComplement = value < 0
        ? BigInt(radix) ** BigInt(digits) + BigInt(value)
        : value;

    return twosComplement.toString(radix).padStart(digits, '0');
}
console.log(toRadix(0xcba9876543210, 2));
console.log(toRadix(-0xcba9876543210, 2));
console.log(toRadix(0xcba9876543210, 16));
console.log(toRadix(-0xcba9876543210, 16));
console.log(toRadix(0x1032547698bac, 2));
console.log(toRadix(-0x1032547698bac, 2));
console.log(toRadix(0x1032547698bac, 16));
console.log(toRadix(-0x1032547698bac, 16));
If you are interested in my old answer that used an ArrayBuffer to create a union between a Float64Array and a Uint16Array, please refer to this answer's revision history.
A solution I'd go with, fine for 32 bits, is the code at the end of this answer, which is from developer.mozilla.org (MDN), but with some lines added for (a) formatting and (b) checking that the number is in range.
Some suggested x.toString(2), which doesn't work for negatives; it just sticks a minus sign in front of them, which is no good.
Fernando mentioned the simple solution (x>>>0).toString(2);, which is fine for negatives but has a slight issue when x is positive: the output starts with 1, which for positive numbers isn't proper two's complement.
Anybody who doesn't understand why positive numbers start with 0 and negative numbers with 1 in two's complement could check this SO Q&A on the subject: What is “2's Complement”?
A solution could involve prepending a 0 for positive numbers, which I did in an earlier revision of this answer. One could accept sometimes having a 33-bit number, or make sure that the number to convert is within the range -(2^31) <= x <= 2^31 - 1, so the number is always 32 bits. But rather than do that, you can go with this solution on mozilla.org.
Patrick's answer and code are long and apparently work for 64-bit, but had a bug that a commenter found and fixed; Patrick also has a "magic number" in his code that he didn't comment on, has since forgotten about, and no longer fully understands why it works.
Annan had some incorrect and unclear terminology but mentioned a solution from developer.mozilla.org.
Note: the old link https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators now redirects elsewhere and doesn't have that content, but the proper old version, which comes up when archive.org retrieves pages, is available here: https://web.archive.org/web/20150315015832/https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators
The solution there works for 32-bit numbers.
The code is pretty compact, a function of three lines.
But I have added a regex to format the output in groups of 8 bits, based on How to format a number with commas as thousands separators? (I just amended it from grouping in 3s right to left with commas to grouping in 8s right to left with spaces.)
And while Mozilla made a comment about the size of nMask (the number fed in), that it has to be in range, they didn't test for or throw an error when the number is out of range, so I've added that.
I'm not sure why they named their parameter 'nMask', but I'll leave that as is.
https://web.archive.org/web/20150315015832/https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators
function createBinaryString(nMask) {
    // nMask must be between -2147483648 and 2147483647
    if (nMask > 2**31 - 1)
        throw "number too large. number shouldn't be > 2**31-1"; // added
    if (nMask < -1 * (2**31))
        throw "number too far negative, number shouldn't be < -(2**31)"; // added
    for (var nFlag = 0, nShifted = nMask, sMask = ''; nFlag < 32;
         nFlag++, sMask += String(nShifted >>> 31), nShifted <<= 1);
    sMask = sMask.replace(/\B(?=(.{8})+(?!.))/g, " "); // added
    return sMask;
}

console.log(createBinaryString(-1));    // "11111111 11111111 11111111 11111111"
console.log(createBinaryString(1024));  // "00000000 00000000 00000100 00000000"
console.log(createBinaryString(-2));    // "11111111 11111111 11111111 11111110"
console.log(createBinaryString(-1024)); // "11111111 11111111 11111100 00000000"

// added further console.log example
console.log(createBinaryString(2**31 - 1)); // "01111111 11111111 11111111 11111111"
You can write your own function that returns an array of bits.
Example: how to convert a number to bits.
Divisor | Dividend | bits/remainder
   2    |    9     |      1
   2    |    4     |      0
   2    |    2     |      0
   ~    |    1     |      ~
Example for the first row: 9 = 2 * 4 + 1, so the bit is the remainder 1 and the next dividend is 4.
So 9 = 1 0 0 1.
function numToBit(num) {
    var number = num;
    var result = [];
    while (number >= 1) {
        result.unshift(Math.floor(number % 2));
        number = number / 2;
    }
    return result;
}
Read the remainders from bottom to top: the final dividend 1 first, then each remainder going up the column.
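For instance:
numToBit(9);  // [1, 0, 0, 1]
numToBit(12); // [1, 1, 0, 0]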
This is how I managed to handle it:
const decbin = nbr => {
    if (nbr < 0) {
        nbr = 0xFFFFFFFF + nbr + 1;
    }
    return parseInt(nbr, 10).toString(2);
};
I got it from this link: https://locutus.io/php/math/decbin/
We can also calculate the binary for positive or negative numbers as below:
function toBinary(n) {
    let binary = "";
    if (n < 0) {
        n = n >>> 0;
    }
    while (Math.ceil(n / 2) > 0) {
        binary = n % 2 + binary;
        n = Math.floor(n / 2);
    }
    return binary;
}

console.log(toBinary(7));  // "111"
console.log(toBinary(-7)); // "11111111111111111111111111111001"
You could use a recursive solution:
function intToBinary(number, res = "") {
if (number < 1)
if (res === "") return "0"
else
return res
else return intToBinary(Math.floor(number / 2), number % 2 + res)
}
console.log(intToBinary(12))
console.log(intToBinary(546))
console.log(intToBinary(0))
console.log(intToBinary(125))
Works only with positive numbers.
An actual solution whose logic can be implemented in any programming language:
If you are sure it is positive only:
var a = 0;
var n = 12; // your input
var input = n; // keep the original for logging; the loop consumes n
var m = 1;
while (n) {
    a = a + n % 2 * m;
    n = Math.floor(n / 2);
    m = m * 10;
}
console.log(input, ':', a); // 12 : 1100
If it can be negative or positive:
(n >>> 0).toString(2)
I’d like to see integers, positive or negative, in binary.
This is an old question and I think there are very nice solutions here, but there is no explanation of how these clever solutions work.
First, we need to understand that a number can be positive or negative.
Also, JavaScript provides a MAX_SAFE_INTEGER constant that has a value of 9007199254740991. The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent integers between -(2^53 - 1) and 2^53 - 1.
So, now we know the range where numbers are "safe". Also, JavaScript ES6 has the built-in method Number.isSafeInteger() to check whether a number is a safe integer.
Logically, if we want to represent a number n as binary, it needs a length of 53 bits, but for better presentation let's use 7 groups of 8 bits = 56 bits and fill the left side with 0 or 1 based on the sign, using the padStart function.
Next, we need to handle positive and negative numbers: positive numbers will add 0s to the left while negative numbers will add 1s. Also, negative numbers need a two's-complement representation, which we can easily get by adding Number.MAX_SAFE_INTEGER + 1 to the number.
For example, say we want to represent -3 as binary, and let's pretend Number.MAX_SAFE_INTEGER were 00000000 11111111 (255); then Number.MAX_SAFE_INTEGER + 1 would be 00000001 00000000 (256). Now we add the number: Number.MAX_SAFE_INTEGER + 1 - 3 is 00000000 11111101 (253), and as we said we fill the left side with 1, like this: 11111111 11111101 (-3). This represents -3 in binary.
Another algorithm is to add 1 to the number and invert the sign, as in -(-3 + 1) = 2, which is 00000000 00000010 (2). Now we invert every bit, giving 11111111 11111101 (-3): again we have a binary representation of -3.
Here we have a working snippet of these algorithms:
function dec2binA(n) {
    if (!Number.isSafeInteger(n)) throw new TypeError('n value must be a safe integer');
    if (n > 2**31) throw 'number too large. number should not be greater than 2**31';
    if (n < -1 * (2**31)) throw 'number too far negative, number should not be less than -(2**31)';
    const bin = n < 0 ? Number.MAX_SAFE_INTEGER + 1 + n : n;
    const signBit = n < 0 ? '1' : '0';
    return bin.toString(2)
        .padStart(56, signBit)
        .replace(/\B(?=(.{8})+(?!.))/g, ' ');
}

function dec2binB(n) {
    if (!Number.isSafeInteger(n)) throw new TypeError('n value must be a safe integer');
    if (n > 2**31) throw 'number too large. number should not be greater than 2**31';
    if (n < -1 * (2**31)) throw 'number too far negative, number should not be less than -(2**31)';
    const bin = n < 0 ? -(1 + n) : n;
    const signBit = n < 0 ? '1' : '0';
    let bits = bin.toString(2);
    if (n < 0) bits = bits.replace(/[01]/g, d => +!+d); // invert bits only for negatives
    return bits.padStart(56, signBit)
        .replace(/\B(?=(.{8})+(?!.))/g, ' ');
}
const a = -805306368
console.log(a)
console.log('dec2binA:', dec2binA(a))
console.log('dec2binB:', dec2binB(a))
const b = -3
console.log(b)
console.log('dec2binA:', dec2binA(b))
console.log('dec2binB:', dec2binB(b))
One more alternative
const decToBin = dec => {
    let bin = '';
    let f = false;
    while (!f) {
        bin = bin + (dec % 2);
        dec = Math.trunc(dec / 2);
        if (dec === 0) f = true;
    }
    return bin.split("").reverse().join("");
};
console.log(decToBin(0));
console.log(decToBin(1));
console.log(decToBin(2));
console.log(decToBin(3));
console.log(decToBin(4));
console.log(decToBin(5));
console.log(decToBin(6));
I used a different approach to come up with something that does this. I've decided to not use this code in my project, but I thought I'd leave it somewhere relevant in case it is useful for someone.
Doesn't use bit-shifting or two's complement coercion.
You choose the number of bits that comes out (it checks for valid values of '8', '16', '32', but I suppose you could change that)
You choose whether to treat it as a signed or unsigned integer.
It will check for range issues given the combination of signed/unsigned and number of bits, though you'll want to improve the error handling.
It also has the "reverse" version of the function which converts the bits back to the int. You'll need that since there's probably nothing else that will interpret this output :D
function intToBitString(input, size, unsigned) {
    if ([8, 16, 32].indexOf(size) == -1) {
        throw "invalid params";
    }
    var min = unsigned ? 0 : -(2 ** size / 2);
    var limit = unsigned ? 2 ** size : 2 ** size / 2;
    if (!Number.isInteger(input) || input < min || input >= limit) {
        throw "out of range or not an int";
    }
    if (!unsigned) {
        input += limit;
    }
    var binary = input.toString(2).replace(/^-/, '');
    return binary.padStart(size, '0');
}

function bitStringToInt(input, size, unsigned) {
    if ([8, 16, 32].indexOf(size) == -1) {
        throw "invalid params";
    }
    input = parseInt(input, 2);
    if (!unsigned) {
        input -= 2 ** size / 2;
    }
    return input;
}
// EXAMPLES

var res;

console.log("(uint8)10");
res = intToBitString(10, 8, true);
console.log("intToBitString(10, 8, true)");
console.log(res);
console.log("reverse:", bitStringToInt(res, 8, true));
console.log("---");

console.log("(uint8)127");
res = intToBitString(127, 8, true);
console.log("intToBitString(127, 8, true)");
console.log(res);
console.log("reverse:", bitStringToInt(res, 8, true));
console.log("---");

console.log("(int8)127");
res = intToBitString(127, 8, false);
console.log("intToBitString(127, 8, false)");
console.log(res);
console.log("reverse:", bitStringToInt(res, 8, false));
console.log("---");

console.log("(int8)-128");
res = intToBitString(-128, 8, false);
console.log("intToBitString(-128, 8, false)");
console.log(res);
console.log("reverse:", bitStringToInt(res, 8, false));
console.log("---");

console.log("(uint16)5000");
res = intToBitString(5000, 16, true);
console.log("intToBitString(5000, 16, true)");
console.log(res);
console.log("reverse:", bitStringToInt(res, 16, true));
console.log("---");

console.log("(uint32)5000");
res = intToBitString(5000, 32, true);
console.log("intToBitString(5000, 32, true)");
console.log(res);
console.log("reverse:", bitStringToInt(res, 32, true));
console.log("---");
This is a method that I use. It's a very fast and concise method that works for whole numbers.
If you want, this method also works with BigInts. You just have to change each 1 to 1n.
// Assuming {num} is a whole number (non-negative: a negative num gets stuck
// at -1 under >>= 1 and would loop forever)
function toBin(num) {
    let str = "";
    do {
        str = `${num & 1}${str}`;
        num >>= 1;
    } while (num);
    return str;
}
Explanation
This method, in a way, goes through all the bits of the number as if it's already a binary number.
It starts with an empty string, and then it prepends the last bit. num & 1 will return the last bit of the number (1 or 0). num >>= 1 then removes the last bit and makes the second-to-last bit the new last bit. The process is repeated until all the bits have been read.
Of course, this is an extreme simplification of what's actually going on. But this is how I generalize it.
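For example (sticking to non-negative inputs, per the caveat noted in the code):
toBin(10); // "1010"
toBin(0);  // "0"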
This is my code:
var x = prompt("enter number", "7");
var i = 0;
var binaryvar = " ";
function add(n) {
if (n == 0) {
binaryvar = "0" + binaryvar;
}
else {
binaryvar = "1" + binaryvar;
}
}
function binary() {
while (i < 1) {
if (x == 1) {
add(1);
document.write(binaryvar);
break;
}
else {
if (x % 2 == 0) {
x = x / 2;
add(0);
}
else {
x = (x - 1) / 2;
add(1);
}
}
}
}
binary();
This is the solution. It's quite simple, as a matter of fact:
function binaries(num1) {
    var str = num1.toString(2);
    return console.log('The binary form of ' + num1 + ' is: ' + str);
}

binaries(3);
/*
According to MDN, Number.prototype.toString() overrides
Object.prototype.toString() with the useful distinction that you can
pass in a single integer argument. This argument is an optional radix,
numbers 2 to 36 allowed.So in the example above, we’re passing in 2 to
get a string representation of the binary for the base 10 number 100,
i.e. 1100100.
*/
