How to get the last 2 bits in a byte - javascript

In an array of bytes, such as [40, 0, 156, 243], I need to make 1 byte from the last two bits in each byte of the array. This is for decoding and encoding PICO-8 cartridges (see here), which store the program data in PNG images. Here is the information I've been given on how the data is stored:
For PNG files, the lower 2 bits of each pixel component make up one byte, so 1 byte per pixel. The last 32 bytes are metadata for the PNG (since 160*205 = 0x8020, 32 extra bytes)
byte = (a << 6) | (r << 4) | (g << 2) | b;
Here's what I have so far, but the output is all gibberish:
var imgWidth = 160;
var imgHeight = 205;
var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');
var img = document.createElement('img');
img.src = 'Invasion.png';
canvas.width = imgWidth;
canvas.height = imgHeight;

img.onload = function() {
    var imgData = [];
    var str = '';
    var i = 0, n = 0, dataLen = 0;

    ctx.drawImage(img, 0, 0);
    imgData = ctx.getImageData(0, 0, imgWidth, imgHeight).data;
    dataLen = imgData.length;

    for(i; i < dataLen; i+=4) {
        var r = imgData[i];
        var g = imgData[i + 1];
        var b = imgData[i + 2];
        var a = imgData[i + 3];

        r = (r & 3) << 4;
        g = (g & 3) << 2;
        b = (b & 3);
        a = (a & 3) << 6;

        byte = a | r | g | b; // assembled byte
        str += String.fromCharCode(byte);
    }

    console.log(str);
};
Fiddle: https://jsfiddle.net/24o3fzg3/
Greatly appreciate any help on this, thanks!

To get the last two bits, try value & 3 rather than value % 100. (value % 100 takes the remainder of value after dividing by 100 in decimal, not binary.) You then need to shift those two bits into the appropriate position in the final byte. Something like this:
r = (pixels[n][0] & 3) << 4;
g = (pixels[n][1] & 3) << 2;
b = (pixels[n][2] & 3);
a = (pixels[n][3] & 3) << 6;
val = a | r | g | b; // assembled byte
(I'm not addressing whether this is the correct order for the bits in the assembled value or the correct pixel array order.)
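For example, with a channel value of 156 (binary 10011100), the two expressions give very different results:
console.log(156 & 3);   // 0  - just the two lowest bits (00)
console.log(156 % 100); // 56 - a decimal remainder, unrelated to the bit pattern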

I'm not a JS guy, but I think the code should be like this:
...
r = (pixels[n][0] & 3);
g = (pixels[n][1] & 3);
b = (pixels[n][2] & 3);
a = (pixels[n][3] & 3);
byte = (a << 6) | (r << 4) | (g << 2) | b;
str += String.fromCharCode(byte);
...
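To make the masking and the assembly order concrete, here is one pixel decoded by hand (the channel values are made up for illustration):
var r = 41, g = 135, b = 202, a = 255; // hypothetical RGBA channel values
// low two bits: r = 01, g = 11, b = 10, a = 11
var byte = ((a & 3) << 6) | ((r & 3) << 4) | ((g & 3) << 2) | (b & 3);
// a r g b -> 11 01 11 10 -> 0b11011110 === 222
console.log(String.fromCharCode(byte)); // "Þ"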

Related

How to decode google polyline, when bitwise OR is destructive?

I'm looking to decode an encoded google polyline:
`~oia#
However, reversing one of the steps requires reversing the bitwise OR operation, which is destructive.
I see it's done here: How to decode Google's Polyline Algorithm? But I can't see how to do that in JavaScript.
Here is what I have so far:
const partialDecodedPolyline = "`~oia#".split('').map(char => (char.codePointAt()-63).toString(2))
console.log(partialDecodedPolyline)
The next step is to reverse the bitwise OR... how is that possible?
There is a library for that: https://github.com/mapbox/polyline/blob/master/src/polyline.js
/*
https://github.com/mapbox/polyline/blob/master/src/polyline.js
*/
const decode = function(str, precision) {
    var index = 0,
        lat = 0,
        lng = 0,
        coordinates = [],
        shift = 0,
        result = 0,
        byte = null,
        latitude_change,
        longitude_change,
        factor = Math.pow(10, Number.isInteger(precision) ? precision : 5);

    // Coordinates have variable length when encoded, so just keep
    // track of whether we've hit the end of the string. In each
    // loop iteration, a single coordinate is decoded.
    while (index < str.length) {
        // Reset shift, result, and byte
        byte = null;
        shift = 0;
        result = 0;

        do {
            byte = str.charCodeAt(index++) - 63;
            result |= (byte & 0x1f) << shift;
            shift += 5;
        } while (byte >= 0x20);

        latitude_change = ((result & 1) ? ~(result >> 1) : (result >> 1));

        shift = result = 0;

        do {
            byte = str.charCodeAt(index++) - 63;
            result |= (byte & 0x1f) << shift;
            shift += 5;
        } while (byte >= 0x20);

        longitude_change = ((result & 1) ? ~(result >> 1) : (result >> 1));

        lat += latitude_change;
        lng += longitude_change;

        coordinates.push([lat / factor, lng / factor]);
    }

    return coordinates;
};
console.log(decode("`~oia#"));
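As for "reversing" the OR: in this encoding it never has to be undone destructively. Each pass of the do/while contributes a distinct 5-bit group, shifted to a position no other group occupies (0, 5, 10, ...), so the groups can be separated again just by masking. A small illustration:
const low = 17;                                     // 10001
const high = 9;                                     // 01001
const packed = (low & 0x1f) | ((high & 0x1f) << 5); // 305 - no bits collide
console.log(packed & 0x1f);                         // 17 - low group recovered
console.log((packed >> 5) & 0x1f);                  //  9 - high group recovered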

Javascript - turning on bits

I have some understanding of bits and bytes and the concept of shifting, but no actual experience with it.
So:
I need to turn an array of true and false values into a buffer of 1344 bits (which I send using UDP packets).
The other side will evaluate the buffer bit by bit.
Since I'm new to Node.js, feel free to add tips or point me in new directions.
var arrBinary = new Array(1344);
for(i=0;i<1344;i++)arrBinary[i]=0;
// some code here, which will turn some of the array's elements to 1
var arrForBuffer = new Array(168);
for(i=0;i<168;i++)arrForBuffer[i]=0;
var x = buffer.from(arr);
/****** the question ******/
// How to change and set arrForBuffer so it will represent the arrBinary Bits state?
You can use some bitshifting as you said:
// arrForBuffer must be initialized with 0s
for(let i = 0; i < 1344; i++)
arrForBuffer[ Math.floor(i / 8) ] += arrBinary[i] << (7 - (i % 8));
The first element of arrBinary, for example, is left-shifted by 7 and added to the first byte, the second is shifted left by 6, and so on. The ninth element (index 8) is shifted left by 7 again and added to the second byte.
It might be more readable (and possibly more performant), if it would be written as:
for(let byte = 0; byte < 168; byte++) {
    arrForBuffer[byte] =
        arrBinary[byte * 8 + 0] << 7 |
        arrBinary[byte * 8 + 1] << 6 |
        arrBinary[byte * 8 + 2] << 5 |
        arrBinary[byte * 8 + 3] << 4 |
        arrBinary[byte * 8 + 4] << 3 |
        arrBinary[byte * 8 + 5] << 2 |
        arrBinary[byte * 8 + 6] << 1 |
        arrBinary[byte * 8 + 7];
}
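Once arrForBuffer holds the 168 packed bytes, it can go straight into a Node Buffer for the UDP send. A minimal sketch, assuming Node's dgram module and placeholder port/host values:
const dgram = require('dgram');

const buf = Buffer.from(arrForBuffer);  // 168 bytes = 1344 bits
const socket = dgram.createSocket('udp4');
socket.send(buf, 41234, '127.0.0.1', function() {
    socket.close();                     // port and host are placeholders
});
Note that this layout puts the first array element into the most significant bit of each byte; the other answer below fills bytes least-significant-bit first, so whichever order you choose, the receiving side has to agree on it.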
JavaScript supports bitwise operations like every other major language. You can use the | and << operators to achieve this transformation:
const size = 16;
const packsize = 8;

const arrBinary = new Array(size).fill(false);
arrBinary[2] = true;
arrBinary[6] = true;
arrBinary[8] = true;

let arrForBuffer = new Array(size / packsize);
let acc = 0;
let byteCounter = 0;

for (let i = 0; i < arrBinary.length; i++) {
    if (arrBinary[i]) {
        acc |= 1 << (i % packsize);
    }
    if (i % packsize == packsize - 1) {
        arrForBuffer[byteCounter] = acc;
        byteCounter++;
        acc = 0;
    }
}

for (let i = 0; i < arrForBuffer.length; i++) {
    console.log(`${i}: ${arrForBuffer[i]}`);
}

Using bitwise operators with large numbers in javascript [duplicate]

This question already has answers here: Bitshift in javascript (4 answers). Closed 3 years ago.
I am writing a JavaScript version of this Microsoft string decoding algorithm and it's failing on large numbers. This seems to be because of sizing (int / long) issues. If I step through the code in C#, I can see that the JS implementation fails on this line:
n |= (b & 31) << k;
This happens when the values are as follows (the C# result is 240518168576):
(39 & 31) << 35
If I play around with these values in C#, I can replicate the JS issue when b is an int; if I set b to be a long, it works correctly.
So then I checked the max size of a JS number, and compared it to the C# long result
240518168576 < Number.MAX_SAFE_INTEGER = true
So I can see that there is some kind of number-size issue happening, but I don't know how to force JS to treat this number as a long.
Full JS code:
private getPointsFromEncodedString(encodedLine: string): number[][] {
    const EncodingString = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-";
    var points: number[][] = [];
    if (!encodedLine) {
        return points;
    }
    var index = 0;
    var xsum = 0;
    var ysum = 0;
    while (index < encodedLine.length) {
        var n = 0;
        var k = 0;
        debugger;
        while (true) {
            if (index >= encodedLine.length) {
                return points;
            }
            var b = EncodingString.indexOf(encodedLine[index++]);
            if (b == -1) {
                return points;
            }
            n |= (b & 31) << k;
            k += 5;
            if (b < 32) {
                break;
            }
        }
        var diagonal = ((Math.sqrt(8 * n + 5) - 1) / 2);
        n -= diagonal * (diagonal + 1) / 2;
        var ny = n;
        var nx = diagonal - ny;
        nx = (nx >> 1) ^ -(nx & 1);
        ny = (ny >> 1) ^ -(ny & 1);
        xsum += nx;
        ysum += ny;
        points.push([ysum * 0.000001, xsum * 0.000001]);
    }
    console.log(points);
    return points;
}
Expected input and output:
Encoded string
qkoo7v4q-lmB0471BiuuNmo30B
Decoded points:
35.89431, -110.72522
35.89393, -110.72578
35.89374, -110.72606
35.89337, -110.72662
Bitwise operators treat their operands as a sequence of 32 bits (zeroes and ones), rather than as decimal, hexadecimal, or octal numbers. For example, the decimal number nine has a binary representation of 1001. Bitwise operators perform their operations on such binary representations, but they return standard JavaScript numerical values.
(39 & 31) << 35 tries to shift by 35 bits when there are only 32.
Bitwise Operators
To solve this problem you could use BigInt to perform those operations and then convert the result back to a Number:
Number((39n & 31n) << 35n)
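A quick comparison shows the difference: with regular Numbers the shift count is taken modulo 32, while BigInt keeps every bit:
console.log((39 & 31) << 35);            // 56, because 35 % 32 === 3, so this is really 7 << 3
console.log(Number((39n & 31n) << 35n)); // 240518168576, matching the C# long result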
You can try this:
function getPointsFromEncodedString(encodedLine) {
    const EncodingString = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-";
    var points = [];
    if (!encodedLine) {
        return points;
    }
    var index = 0;
    var xsum = 0;
    var ysum = 0;
    while (index < encodedLine.length) {
        var n = 0n;
        var k = 0n;
        while (true) {
            if (index >= encodedLine.length) {
                return points;
            }
            var b = EncodingString.indexOf(encodedLine[index++]);
            if (b === -1) {
                return points;
            }
            // widen b to BigInt so the shift is not truncated to 32 bits
            n |= (BigInt(b) & 31n) << k;
            k += 5n;
            if (b < 32) {
                break;
            }
        }
        // back to Number for the floating-point math below
        n = Number(n);
        // the C# source truncates here (cast to long), so floor it
        var diagonal = Math.floor((Math.sqrt(8 * n + 5) - 1) / 2);
        n -= diagonal * (diagonal + 1) / 2;
        var ny = n;
        var nx = diagonal - ny;
        nx = (nx >> 1) ^ -(nx & 1);
        ny = (ny >> 1) ^ -(ny & 1);
        xsum += nx;
        ysum += ny;
        points.push([ysum * 0.000001, xsum * 0.000001]);
    }
    console.log(points);
    return points;
}
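Calling it with the sample string from the question should now log points close to the expected output:
getPointsFromEncodedString("qkoo7v4q-lmB0471BiuuNmo30B");
// expected (from the question): 35.89431, -110.72522 / 35.89393, -110.72578 / ...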

HTML5 pixel manipulation issues; canvas not reading my images (can't be online)

I'm trying canvas pixel manipulation, but it won't read my image and just displays a white screen. It works when I use fillRect, but not when reading an image from the hard drive. What I'm trying to do is use the sprite to edit the image; I want to read from images.
count = 3;
for (var i = 0; i < 172000; ++i) {
    switch (i % 480) {
        case 240:
            count = count - 1918;
            break;
        case 0:
            count = count + 1918;
            break;
    }
    var hold = 4 * i + count;
    var nn = (imageData.data[hold] >> 1) % 64;
    var mm = screen[i] * 8 + (imageData.data[hold + 1] >> 5);
    var hq = (imageData.data[hold + 1] % 18);
    var sq = 18 * (imageData.data[hold] % 2);
    var m = (RELUM[mm][nn] * 36 + hq + sq);
    // adding hue to luminance
    var value = (m % 216);
    // outputting color
    var r = imageData.data[4 * value];
    var g = imageData.data[4 * value + 1];
    var b = imageData.data[4 * value + 2];
    var a = imageData.data[4 * value + 3];
    // little endian
    data[i] = (a << 24) |
              (b << 16) |
              (g << 8) |
              r;
}
In order for canvas to load an image, it can't be on your hard drive; it must be on a server (local or otherwise), and it must be on the same domain or CORS-enabled.
An easy work-around is to use a base64 image: use a site like this one to convert your image into a base64 string, which you can use the same way you would use an image src URL, and then it will load.
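A minimal sketch of the data-URI work-around (the base64 payload is truncated here; paste in the full string produced by the converter):
var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');
var img = new Image();
img.onload = function() {
    canvas.width = img.width;
    canvas.height = img.height;
    ctx.drawImage(img, 0, 0);
    // getImageData works here because a data: URI does not taint the canvas
    var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
};
img.src = 'data:image/png;base64,iVBORw0KGgo...'; // truncated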

Guidance to understand Base64 encoding algorithm

I found this algorithm on the net, but I'm having a bit of trouble understanding exactly how it works. It encodes a Uint8Array to Base64. I would especially like to understand the sections under the comments "Combine the three bytes into a single integer" and "Use bitmasks to extract 6-bit segments from the triplet". I understand the concept of bit shifting used there, but I can't understand its purpose in those two sections.
function base64ArrayBuffer(bytes) {
    var base64 = ''
    var encodings = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'

    var byteLength = bytes.byteLength
    var byteRemainder = byteLength % 3
    var mainLength = byteLength - byteRemainder

    var a, b, c, d
    var chunk

    // Main loop deals with bytes in chunks of 3
    for (var i = 0; i < mainLength; i = i + 3) {
        // Combine the three bytes into a single integer
        chunk = (bytes[i] << 16) | (bytes[i + 1] << 8) | bytes[i + 2]

        // Use bitmasks to extract 6-bit segments from the triplet
        a = (chunk & 16515072) >> 18 // 16515072 = (2^6 - 1) << 18
        b = (chunk & 258048) >> 12   // 258048 = (2^6 - 1) << 12
        c = (chunk & 4032) >> 6      // 4032 = (2^6 - 1) << 6
        d = chunk & 63               // 63 = 2^6 - 1

        // Convert the raw binary segments to the appropriate ASCII encoding
        base64 += encodings[a] + encodings[b] + encodings[c] + encodings[d]
    }

    // Deal with the remaining bytes and padding
    if (byteRemainder == 1) {
        chunk = bytes[mainLength]

        a = (chunk & 252) >> 2 // 252 = (2^6 - 1) << 2

        // Set the 4 least significant bits to zero
        b = (chunk & 3) << 4 // 3 = 2^2 - 1

        base64 += encodings[a] + encodings[b] + '=='
    } else if (byteRemainder == 2) {
        chunk = (bytes[mainLength] << 8) | bytes[mainLength + 1]

        a = (chunk & 64512) >> 10 // 64512 = (2^6 - 1) << 10
        b = (chunk & 1008) >> 4   // 1008 = (2^6 - 1) << 4

        // Set the 2 least significant bits to zero
        c = (chunk & 15) << 2 // 15 = 2^4 - 1

        base64 += encodings[a] + encodings[b] + encodings[c] + '='
    }

    return base64
}
The first step takes each group of 3 bytes in the input and combines them into a 24-bit number. If we call them x = bytes[i], y = bytes[i+1], and z = bytes[i+2], it uses bit-shifting and bit-OR to create a 24-bit integer whose bits are:
xxxxxxxxyyyyyyyyzzzzzzzz
Then it extracts these bits in groups of 6 to get 4 numbers. The bits of a, b, c, and d correspond this way:
xxxxxxxxyyyyyyyyzzzzzzzz
aaaaaabbbbbbccccccdddddd
Then for each of these 6-bit numbers, it indexes the encodings string to get a corresponding character, and concatenates them into the base64 result string.
At the end there are some special cases to deal with the last 1 or 2 bytes in the input if it wasn't a multiple of 3 bytes long.
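As a worked example, the classic three bytes for "Man" (77, 97, 110) split into the four 6-bit indices like this:
chunk = (77 << 16) | (97 << 8) | 110 // 0b010011010110000101101110
a = (chunk & 16515072) >> 18         // 0b010011 = 19 -> 'T'
b = (chunk & 258048) >> 12           // 0b010110 = 22 -> 'W'
c = (chunk & 4032) >> 6              // 0b000101 =  5 -> 'F'
d = chunk & 63                       // 0b101110 = 46 -> 'u'
// base64ArrayBuffer(new Uint8Array([77, 97, 110])) === "TWFu"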
