convert mebibyte to gigabyte - javascript

How can I convert mebibyte to gigabyte? Currently I'm just converting from megabytes to gigabytes.
export function megabyteToGigabyte(n: number) {
  return n / Math.pow(10, 3);
}
But what about mebibytes?

1 gigabyte is approximately 954 mebibytes,
so you can simply divide n by that ratio to convert MiB to GB:
function mebibyteToGigabyte(n) {
  return n / 953.674; // ≈ 1e9 / 2**20 = 953.67431640625, so close but not exact
}
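For reference, that divisor is just the ratio of the two units. A quick sketch deriving it exactly (the names are mine):

```javascript
// 1 GB = 10^9 bytes and 1 MiB = 2^20 bytes,
// so 1 GB = 10^9 / 2^20 ≈ 953.674 MiB.
const MIB_PER_GB = 1e9 / 2 ** 20; // 953.67431640625

// Dividing by the full ratio avoids the small error of the rounded 953.674.
const mibToGbExact = (n) => n / MIB_PER_GB;

console.log(mibToGbExact(1000)); // 1.048576
```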

Convert to bytes: 1 mebibyte = 1048576 bytes
Then convert to gigabytes: 1 gigabyte = 1000000000 bytes
This gives you:
const MiBToGB = n => n * 1048576 / 1000000000;
console.log(MiBToGB(1000)); // 1.048576

A mebibyte (MiB) is a binary unit, defined as 2^20, or 1024^2, bytes.
So, 1 MiB = 1,048,576 bytes.
A gigabyte (GB), on the other hand, is a decimal unit, defined as 10^9, or 1000^3, bytes.
That makes 1 GB = 1,000,000,000 bytes.
To convert n MiB to GB, we have to multiply n by 1,048,576 and divide by 1,000,000,000.
For example, 1 MiB in GB = 1 * 1,048,576 / 1,000,000,000 = 0.001048576.
From the above, the simplest way is to just multiply n by 0.001048576.
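Putting the above together, the conversion collapses into a single expression. A minimal sketch:

```javascript
// 1 MiB = 1,048,576 bytes and 1 GB = 1,000,000,000 bytes,
// so 1 MiB = 0.001048576 GB.
const mibToGb = (n) => (n * 1048576) / 1e9; // same as n * 0.001048576

console.log(mibToGb(1)); // 0.001048576
```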

Related

How can I convert kilobytes to GB?

I have
size: 0.20800000000000002 in KB
I need to convert it to a readable format in GB. I used
{size / 1000000}
output: 2.08e-7
but it is not correct in a readable format; I need to present it as a proper readable value in GB.
let size_gb = (size / 1000000).toFixed(2);
The toFixed(2) method formats the number to two decimal places.
Usually, 0.20800000000000002 KB would be 0 GB in a readable format, because the result is 0.000000208. However, if you want to show it as a decimal number, you can use the .toFixed() function:
let size = 0.20800000000000002;
let sizeGb = (size / 1000000).toFixed(7); // for this scenario
// the output will be "0.0000002"
or
let sizeGb = (size / 1000000).toFixed(2); // the output will be "0.00"
What do you define as a "readable" format? The hundredths place (if necessary)?
This will round your number to the hundredths place if necessary:
let kb = 0.20800000000000002;
let gb = kb / 1000000;
let gbFormat = Math.round((gb + Number.EPSILON) * 100) / 100;
console.log(gbFormat); // 0
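Another option is to pick the formatting from the magnitude, falling back to exponential notation when toFixed would print only zeros. A sketch (formatGb is a name I made up):

```javascript
const formatGb = (kb, places = 2) => {
  const gb = kb / 1e6; // decimal KB -> GB
  // If rounding to `places` decimals would display 0.00, switch to
  // exponential notation so very small sizes stay readable.
  return gb >= 10 ** -places / 2 ? gb.toFixed(places) : gb.toExponential(2);
};

console.log(formatGb(0.208));  // "2.08e-7"
console.log(formatGb(520000)); // "0.52"
```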

Calculate the size of a base64 image in Kb, Mb

I'm building a small JavaScript application. With this app you can take photos via the webcam. The app temporarily stores images in the browser's localStorage as base64. Now I want the app to report the amount of storage used, i.e. how big the image is in KB.
Each base64 character encodes 6 bits, and 8 bits are 1 byte. So you have the formula:
small Example
const base64Image = 'data:image/png;base64, ... ';
const yourBase64String =
  base64Image.substring(base64Image.indexOf(',') + 1);
const bits = yourBase64String.length * 6; // 567146
const bytes = bits / 8;
const kb = Math.ceil(bytes / 1000);
// or in one line
const kb = Math.ceil(((yourBase64String.length * 6) / 8) / 1000); // 426 kb
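If you also account for the trailing '=' padding characters, the figure gets slightly more accurate. A sketch (4 base64 characters encode 3 bytes, which is the same ratio as 6 bits per character):

```javascript
function base64SizeBytes(b64) {
  // Strip an optional "data:...;base64," prefix.
  const data = b64.substring(b64.indexOf(',') + 1);
  // Trailing '=' characters are padding and carry no data.
  const padding = data.match(/=*$/)[0].length;
  // Every 4 base64 characters encode 3 bytes of the original data.
  return (data.length * 3) / 4 - padding;
}

// btoa('hello') === 'aGVsbG8=' (5 bytes of data, 1 padding char)
console.log(base64SizeBytes('data:image/png;base64,aGVsbG8=')); // 5
```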

How to round to nearest doubling factor of 32?

I have a bunch of bins which double in size, starting from size 32. The bins are divided in half and added to the lower bin, but this isn't important for the question. I am currently hardcoding a "max" 16777216 which is the size of the largest bin.
let bins = [
[], // 1 = 32
[], // 2 = 64
[], // 3 = 128
[], // 4 = 256
[], // 5 = 512
[], // 6 = 1024
[], // 7 = 2048
[], // 8 = 4096
[], // 9 = 8192
[], // 10 = 16384
[], // 11 = 32768
[], // 12 = 65536
[], // 13 = 131072
[], // 14 = 262144
[], // 15 = 524288
[], // 16 = 1048576
[], // 17 = 2097152
[], // 18 = 4194304
[], // 19 = 8388608
[0], // 20 = 16777216
];
I would like to dynamically determine the number of bins based on the available memory on the platform, as a factor of 32. So if there was 24 TB of available memory on the machine, that is 1.92e+14 bits, or 6e+12 32-bit chunks. So I would round that number up to the nearest double factor multiple of 32, following this same pattern of how the numbers grow.
How do I do this programmatically with a generic equation? I gathered these numbers by doing this manually:
a = 1 * 32
b = a * 2
c = b * 2
d = c * 2
...
How do I do this with a generic equation?
How do I round up efficiently to the nearest one of these numbers?
const smallestBin = n => {
  if (n <= 32) return 32;
  let size = 5;
  for (let x = Math.trunc(n / 32); x > 0; x = Math.trunc(x / 2), size++) {}
  size = 2 ** size;
  return size == n * 2 ? n : size;
};
Note that we don't use bit-wise operators because they're limited to 32 bits.
When you say 32 * 2 * 2 * 2... you are multiplying 32 by a certain power of 2, or basically:
32 * 2^i
Now, since in your example i starts from 1, the correct equation is actually:
16 * 2^i
And since 16 is also a power of 2, you can just write this as:
2^4 * 2^i
Which is equal to:
2^(4+i)
If you now have a random number, how do you round it up to the nearest power of 2? This is basically calculating the logarithm (base 2) of your number, then rounding the result up to the next integer.
This integer value is the exponent of the nearest power of two (rounding up).
If you want the nearest power of 2 itself, this is just 2^result. So the complete equation is:
2^ceil(log_2(num))
In javascript, you can do this with:
2**Math.ceil(Math.log2(num))
If you need your index i, remember that your numbers are 2^(4+i), so just subtract 4 from ceil(log_2(num)) and you get i.
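Put together, the loop-based version above collapses into a single expression. A sketch, assuming n is a positive finite number (in practice Math.log2 is exact for powers of two, but the loop version avoids relying on that):

```javascript
// Round n up to the nearest bin size 32 * 2^(i-1), with a minimum of 32.
const smallestBinLog = (n) => 2 ** Math.max(5, Math.ceil(Math.log2(n)));

console.log(smallestBinLog(1));        // 32
console.log(smallestBinLog(33));       // 64
console.log(smallestBinLog(16777216)); // 16777216

// The 1-based bin index from the question (bin 1 = 32):
const binIndex = (n) => Math.max(5, Math.ceil(Math.log2(n))) - 4;
console.log(binIndex(16777216)); // 20
```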

About converting units into [B, KB, MB]: I don't know how to use Math.log()

The code is as follows.
function formatBytes(bytes, decimals) {
  if (bytes == 0) return '0 Bytes';
  var k = 1000; // or 1024 for binary
  var dm = decimals + 1 || 3;
  var sizes = ['Bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
  var i = Math.floor(Math.log(bytes) / Math.log(k));
  return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
}
But I don't know how to use Math.log().
The Math.log() function returns the natural logarithm (base e) of a number, that is:
for all x > 0, Math.log(x) = ln(x), the unique y such that e^y = x.
var i = Math.floor(Math.log(bytes) / Math.log(k));
i will be the index into sizes.
For example: bytes < 1000 => i = 0, because the floor method rounds a number downward to its nearest integer.
Consider that the size units bytes, KB, MB, GB etc. correspond to successive powers of 1024: 1024^0, 1024^1, 1024^2 and so on.
So to know which unit to use for an arbitrary number of bytes, one needs to find the highest power of 1024 below it.
The integral part of a positive number's base-2 logarithm, log2(bytes), is the exponent of the highest power of two below it, so an easy way to obtain the power of 1024 below a number is to divide its base-2 logarithm by 10 (since 1024 is 2^10) and take the integral part.
This yields a function
function p1024(n) { return Math.floor(Math.log2(n) / 10); }
to use as an index into the units abbreviation array.
The posted code uses the mathematical identity that log2(n) / 10 is equivalent to ln(n) / ln(1024), where ln is the natural logarithm and log2 is the base-2 logarithm.
In JavaScript, Math is a global object accessible to all JavaScript code. In Java you may need to import java.lang.Math or java.lang.* before use (not a Java expert here).
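To see that identity in action, both forms pick the same index. A quick check, using the binary k = 1024 the explanation assumes:

```javascript
const bytes = 5 * 1024 ** 2; // 5 MiB

// Base-2 logarithm divided by 10 (since 1024 = 2^10)...
const viaLog2 = Math.floor(Math.log2(bytes) / 10);
// ...gives the same index as natural logs with base 1024, as in the posted code.
const viaNatural = Math.floor(Math.log(bytes) / Math.log(1024));

console.log(viaLog2, viaNatural); // 2 2 -> 'MB' in the sizes array
```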

JavaScript Typed Arrays - Different Views

I'm trying to get a handle on the interaction between different types of typed arrays.
Example 1.
var buf = new ArrayBuffer(2);
var arr8 = new Int8Array(buf);
var arr8u = new Uint8Array(buf);
arr8[0] = 7.2;
arr8[1] = -45.3;
console.log(arr8[0]); // 7
console.log(arr8[1]); // -45
console.log(arr8u[0]); // 7
console.log(arr8u[1]); // 211
I have no problem with the first three readouts, but where does 211 come from in the last? Does this have something to do with bit-shifting because of the minus sign?
Example 2
var buf = new ArrayBuffer(4);
var arr8 = new Int8Array(buf);
var arr32 = new Int32Array(buf);
for (var i = 0; i < buf.byteLength; i++){
arr8[i] = i;
}
console.log(arr8); // [0, 1, 2, 3]
console.log(arr32); // [50462976]
So where does the 50462976 come from?
Example #1
Examine positive 45 as a binary number:
> (45).toString(2)
"101101"
Binary values are negated using a two's complement:
00101101 => 45 signed 8-bit value
11010011 => -45 signed 8-bit value
When we read 11010011 as an unsigned 8-bit value, it comes out to 211:
> parseInt("11010011", 2);
211
Example #2
If you print 50462976 in base 2:
> (50462976).toString(2);
"11000000100000000100000000"
We can add leading zeros and rewrite this as:
00000011000000100000000100000000
And we can break it into octets:
00000011 00000010 00000001 00000000
This shows binary 3, 2, 1, and 0. The 32-bit integer is stored little-endian here: the byte at the lowest index is the least significant, so the 8-bit values 0 to 3 are read in order of increasing significance when constructing the 32-bit value.
First question.
Signed 8-bit integers range from -128 to 127. The positive half (0 to 127) maps to binary values 00000000 to 01111111, and the negative half (-128 to -1) maps to 10000000 to 11111111.
If you omit the first bit, you can form a number by adding a 7-bit value to a base. In your case, the binary representation is 11010011. The first bit is 1, which means the number is negative. The remaining 7 bits are 1010011, which is 83. Add that to the base: -128 + 83 = -45. That’s it.
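You can reproduce the same reinterpretation directly in JavaScript: masking with 0xFF reads a signed byte's bit pattern as unsigned, and a shift pair goes the other way (a small sketch):

```javascript
// Read a signed 8-bit value's bit pattern as unsigned.
const toUint8 = (n) => n & 0xFF;
console.log(toUint8(-45)); // 211

// Read an unsigned byte's bit pattern as signed (sign-extend bit 7).
const toInt8 = (n) => ((n & 0xFF) << 24) >> 24;
console.log(toInt8(211)); // -45
```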
Second question.
32-bit integers are represented by four bytes in memory. You are storing four 8-bit integers in the buffer. When viewed through an Int32Array, those values are combined to form one 32-bit value.
If this were the decimal system, you could think of it as combining "1" and "2" to give "12". It’s similar in this case, except the multipliers are different. For the byte holding 3 the multiplier is 2^24, then 2^16, then 2^8, and finally 2^0. Let’s do the math:
2^24 * 3 + 2^16 * 2 + 2^8 * 1 + 2^0 * 0 =
16777216 * 3 + 65536 * 2 + 256 * 1 + 1 * 0 =
50331648 + 131072 + 256 + 0 =
50462976
That’s why you’re seeing such a large number.
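You can also confirm the byte order explicitly with a DataView, which lets you pick the endianness for each read (a sketch; the little-endian read matches the example above):

```javascript
const buf = new ArrayBuffer(4);
const arr8 = new Int8Array(buf);
for (let i = 0; i < 4; i++) arr8[i] = i; // bytes 0, 1, 2, 3

const view = new DataView(buf);
// Little-endian: lowest index is least significant -> 0x03020100.
console.log(view.getInt32(0, true));  // 50462976
// Big-endian: lowest index is most significant -> 0x00010203.
console.log(view.getInt32(0, false)); // 66051
```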
