Algorithm problem: Decode hex to set output - JavaScript

I have an algorithm problem that I need to solve. Unfortunately, I can't even find a clue about a solution. Please help me understand how to approach it.
The problem
An imaginary pen input device sends hex data to a display device to draw a line with color.
Example: Draw a simple green line
Set the color to green, draw a line from (0,0) to (4000, 4000). A filled circle in the diagram indicates a pen-down
position, an empty circle indicates a pen-up position.
Input Data: F0A04000417F4000417FC040004000804001C05F205F20804000
Output: CLR; CO 0 255 0 255; MV (0, 0); PEN DOWN;
MV (4000, 4000); PEN UP;
I have the information about each output command.
This code converts hex to binary. I guess the solution would be to convert the hex to binary and manipulate it to produce the correct output.
function hex2bin(hex) {
  return parseInt(hex, 16).toString(2);
}
The expected result is the same output as in the example above, given that input data.

First of all, some important information is missing from your question: how the numbers like 4000 (in the result) are encoded in the hex format.
I think I could derive it from the example, though.
The peculiar numeric encoding
Numbers seem to be encoded with 2 bytes (4 hex characters) each, where the most significant bits of these 2 bytes (bits 7 and 15) do not contribute to the value (they are always zero).
Furthermore, the remaining 14 bits are encoded in offset binary, where the most significant of those 14 bits is the inverted sign bit.
This means that "4000" is zero, "0000" is -8192 (the minimum), and "7F7F" is 8191 (the maximum). Note that the first and the second-to-last hex characters can never be greater than 7, since a higher value would set a bit (bit 15 or bit 7) that is not used in this (custom) encoding.
This was the hardest part to derive from the little info you provided.
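For illustration, the conversion can be wrapped in a small standalone helper (the name decodeNum is mine; the formula is the same one used in the decoder further down):
function decodeNum(hex4) {
  // 2 bytes, high bit of each byte unused, offset binary with bias 0x2000
  const raw = parseInt(hex4, 16);
  return ((raw & 0x7F00) >> 1) + (raw & 0x7F) - 0x2000;
}
decodeNum("4000"); // 0
decodeNum("417F"); // 255
decodeNum("5F20"); // 4000
decodeNum("0000"); // -8192 (minimum)
decodeNum("7F7F"); // 8191 (maximum)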
On the Example
The input example you provided can be broken down into pieces like this:
opcode | argument(s)
-------+----------------------------
"F0" |
"A0" | "4000" "417F" "4000" "417F"
"C0" | "4000" "4000"
"80" | "4001"
"C0" | "5F20" "5F20"
"80" | "4000"
Using the numeric conversion discussed above, this would translate to:
opcode | argument(s)
-------+------------
240 |
160 | 0 255 0 255
192 | 0 0
128 | 1
192 | 4000 4000
128 | 0
And then it is a matter of following the instructions to turn that into the required output.
So the algorithm could first decode the input string into commands, where each command consists of an opcode and zero or more numeric arguments.
And then those commands could be turned into the required output by keeping track of whether the pen is down and what the current coordinates are:
function decode(hex) {
  let commands = [];
  let command;
  for (let i = 0, len; i < hex.length; i += len) {
    // Opcodes take 1 byte (i.e. 2 hex characters), and
    // numbers take 2 bytes (4 characters)
    len = hex[i] >= "8" ? 2 : 4;
    let num = parseInt(hex.slice(i, i + len), 16);
    if (len === 2) { // opcode
      command = []; // start a new command
      commands.push(command);
    } else { // number
      // The encoded format is a custom one. This seems to be it:
      num = ((num & 0x7F00) >> 1) + (num & 0x7F) - 0x2000;
    }
    command.push(num); // push opcode or argument in current command
  }
  return commands;
}
function disassemble(hex) {
  let isPenDown = false;
  let x = 0, y = 0;
  let output = "";
  let commands = decode(hex);
  for (let [opcode, ...args] of commands) {
    if (opcode === 0xF0) {
      x = y = 0;
      isPenDown = false;
      output += "CLR;\n";
    } else if (opcode === 0x80) {
      isPenDown = args[0] > 0;
      output += "PEN " + (isPenDown ? "DOWN" : "UP") + ";\n";
    } else if (opcode === 0xA0) {
      output += "CO " + args.join(" ") + ";\n";
    } else if (opcode === 0xC0) {
      let allPos = "", lastPos;
      for (let i = 0; i < args.length; i += 2) {
        x += args[i];
        y += args[i + 1];
        lastPos = ` (${x}, ${y})`;
        if (isPenDown) allPos += lastPos;
      }
      output += "MV" + (allPos || lastPos) + ";\n";
    } // else: ignore unknown commands
  }
  return output;
}
// Sample:
console.log(disassemble("F0A04000417F4000417FC040004000804001C05F205F20804000"));
More to do
In the screenshot of the problem, near the end, there is also mention of clipping movements to a bounding box. This goes beyond your question about decoding the hex input, so I will leave it at this. For those interested, there are Q&As about calculating line segment intersections that cover this.

Related

How do you adapt this 8-bit bit-buffer implementation to be 32-bit?

I got this from GitHub.
function LittleEndianView(size) {
  Object.defineProperty(this, 'native', {
    value: new Uint8Array(size)
  })
}
LittleEndianView.prototype.get = function(bits, offset) {
  let available = (this.native.length * 8 - offset)
  if (bits > available) {
    throw new Error('Range error')
  }
  let value = 0
  let i = 0
  // why loop through like this?
  while (i < bits) {
    // remaining bits
    const remaining = bits - i
    const bitOffset = offset & 7
    const currentByte = this.native[offset >> 3]
    const read = Math.min(remaining, 8 - bitOffset)
    const a = 0xFF << read
    const mask = ~a
    const b = currentByte >> bitOffset
    const readBits = b & mask
    const c = readBits << i
    value = value | c
    offset += read
    i += read
  }
  return value >>> 0
}
LittleEndianView.prototype.set = function(value, bits, offset) {
  const available = (this.native.length * 8 - offset)
  if (bits > available) {
    throw new Error('Range error')
  }
  let i = 0
  while (i < bits) {
    const remaining = bits - i
    const bitOffset = offset & 7
    const byteOffset = offset >> 3
    const finished = Math.min(remaining, 8 - bitOffset)
    const mask = ~(0xFF << finished)
    const writeBits = value & mask
    value >>= finished
    const destMask = ~(mask << bitOffset)
    const byte = this.native[byteOffset]
    this.native[byteOffset] = (byte & destMask) | (writeBits << bitOffset)
    offset += finished
    i += finished
  }
}
After studying it for a few hours, I realize that the offset & 7 is only keeping the first 3 bits of the offset, which is for 1 byte. The offset >> 3 is simply dividing the offset by 8 to go from bits to bytes. The 0xFF << read is to get a bunch of 1s on the left like 11111000. Then negate it to get them on the right. I understand most of it just barely in pieces. But I don't see how they figured out how to implement this solution using these techniques (and I don't fully grasp the whole solution -- how or why it works -- after a couple days at this).
So that leads me to my question: I need to apply this same read/write functionality to a Uint32Array in JavaScript (rather than a Uint8Array like the above code uses). I need to read and write arbitrary bits (not bytes) to this Uint32Array. What needs to change in the existing get and set algorithms to accommodate this?
It seems like the offset & 7 might become something different, but I can't tell what the 32-bit equivalent would be. And the offset >> 3 might become a divide by 32 or something like that. But I'm not sure. That may be mostly it.
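For what it's worth, here is an untested sketch (my own assumption of the minimal change, not a verified port) of the get logic generalized to 32-bit words: offset & 7 becomes offset & 31, offset >> 3 becomes offset >>> 5, and 8 - bitOffset becomes 32 - bitOffset. The same caveat as in the original code applies: the value being read should still be less than 32 bits.
function get32(words /* Uint32Array */, bits, offset) {
  let value = 0;
  let i = 0;
  while (i < bits) {
    const remaining = bits - i;
    const bitOffset = offset & 31;           // bit position inside the 32-bit word (was offset & 7)
    const currentWord = words[offset >>> 5]; // word index, i.e. offset / 32 (was offset >> 3)
    const read = Math.min(remaining, 32 - bitOffset);
    // (0xFFFFFFFF << 32) wraps around to a shift by 0 in JavaScript,
    // so a full-word read needs to be handled explicitly
    const mask = read === 32 ? 0xFFFFFFFF : (~(0xFFFFFFFF << read)) >>> 0;
    const readBits = (currentWord >>> bitOffset) & mask;
    value = (value | (readBits << i)) >>> 0;
    offset += read;
    i += read;
  }
  return value >>> 0;
}
get32(new Uint32Array([0xDEADBEEF]), 8, 4); // 238 (0xEE)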

Why isn't my negative number properly obtained in NodeJS?

I am using NodeJS to parse a HEX string. I am trying to convert the HEX value into an integer value using parseInt, but I am running into some difficulties with the negative numbers that I don't understand.
I have the following HEX string D3FFBDFFF900 that encodes the integers x:-0.45*100 y:-0.67*100 z:2.49*100 in this way
D3FF | BDFF | F900 => -0.45*100 | -0.67*100 | 2.49*100
And I have created the following code snippet (I do know that the division by 100 is missing there)
var x = "D3FFBDFFF900".substring(0,4);
var y = "D3FFBDFFF900".substring(4,8);
var z = "D3FFBDFFF900".substring(8);
console.log("x:"+x);
console.log("y:"+y);
console.log("z:"+z);
console.log("parseInt x "+parseInt(x.toString(16),16));
console.log("parseInt y "+parseInt(y.toString(16),16));
console.log("parseInt z "+parseInt(z.toString(16),16));
Why isn't parseInt able to decode at least the values x=-45, y=-67 and z=249, and why do I get the output above instead?
Thanks in advance,
EDIT: the data is encoded as below, where print just writes the resulting HEX string to a serial bus
#define NIBBLE_TO_HEX_CHAR(i) ((i <= 9) ? ('0' + i) : ('A' - 10 + i))
#define HIGH_NIBBLE(i) ((i >> 4) & 0x0F)
#define LOW_NIBBLE(i) (i & 0x0F)
for (int i = 0; i < size; ++i) {
print(static_cast<char>(NIBBLE_TO_HEX_CHAR(HIGH_NIBBLE(payload[i]))));
print(static_cast<char>(NIBBLE_TO_HEX_CHAR(LOW_NIBBLE(payload[i]))));
}
and the values x, y, z are obtained as below, where accelerometer.getX() returns a double
x = (int16_t)(accelerometer.getX()*100)
y = (int16_t)(accelerometer.getX()*100)
z = (int16_t)(accelerometer.getX()*100)
How should the parser know that you swapped the bytes and are using 4-digit hex?
0xD3FF = 15 * 1 + 15 * 16 + 3 * 256 + 13 * 4096 = 54271
-45 = -0x2D
-67 = -0x43
249 = 0xF9
The parser does a correct job.
To parse the received hex values you have to swap the high and low bytes (the data is little-endian):
D3FF => FFD3
Next you parse the hex to decimal. If the value is >= 0x8000, you have to invert the binary representation, add 1, and negate the result (two's complement):
0xFFD3 = 65491 > 0x8000 = 32768
-((~65491 & 0xFFFF) + 1) = -45
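A small sketch of those steps in code (the helper name is mine; subtracting 0x10000 is equivalent to the invert-and-add-one step):
// Parse 4 hex characters as a little-endian signed 16-bit integer
function parseInt16LE(hex4) {
  const swapped = hex4.slice(2, 4) + hex4.slice(0, 2); // "D3FF" -> "FFD3"
  let value = parseInt(swapped, 16);
  if (value >= 0x8000) value -= 0x10000; // two's complement
  return value;
}
parseInt16LE("D3FF"); // -45
parseInt16LE("BDFF"); // -67
parseInt16LE("F900"); // 249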

HERE Polyline Encoding: JavaScript -> Swift

I am trying to implement HERE's Javascript polyline encoding algorithm (see below) in Swift. I have searched online and have not found a Swift version of this algorithm.
function hereEncodeFloat(value) {
  var ENCODING_CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_';
  var result = [];
  // convert to fixed point
  var fixedPoint = Math.round(value * 100000);
  // make room on the lowest bit
  fixedPoint = fixedPoint << 1;
  // flip bits of negative numbers and ensure that the last bit is set
  // (should actually always be the case, but for readability it is ok to do it explicitly)
  if (fixedPoint < 0) {
    fixedPoint = ~(fixedPoint) | 0x01
  }
  // var-length encode the number in chunks of 5 bits starting with the least significant
  // to the most significant
  while (fixedPoint > 0x1F) {
    result.push(ENCODING_CHARS[(fixedPoint & 0x1F) | 0x20]);
    fixedPoint >>= 5;
  }
  result.push(ENCODING_CHARS[fixedPoint]);
  return result.join('');
}
Is there someone who can help convert this to Swift?
Details of the algorithm may be found here:
https://developer.here.com/documentation/places/topics/location-contexts.html#location-contexts__here-polyline-encoding
Thanks in advance for your help,
Jason
I figured it out:
func hereEncodeNumber(_ value: Double) -> [Character] {
    let ENCODING_CHARS : [Character] = ["A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z","a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","0","1","2","3","4","5","6","7","8","9","-","_"]
    var result : [Character] = []
    // Convert value to fixed point
    let fixedPoint = (value * 100000).rounded(.toNearestOrAwayFromZero)
    // Convert fixed point to binary
    var binaryNum = Int32(exactly: fixedPoint)!
    // Make room on lowest bit
    binaryNum = binaryNum << 1
    // Flip bits of negative numbers and ensure that last bit is set
    // (should actually always be case, but for readability it is ok to do it explicitly)
    if binaryNum < 0 {
        binaryNum = ~(binaryNum) | 0x01
    }
    // Var-length encode number in chunks of 5 bits starting with least significant
    // to most significant
    while binaryNum > 0x1F {
        result.append(ENCODING_CHARS[Int((binaryNum & 0x1F) | 0x20)])
        binaryNum >>= 5
    }
    result.append(ENCODING_CHARS[Int(binaryNum)])
    return result
}
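If you want to sanity-check a port, a rough JavaScript sketch of the inverse operation can help (this is my own code, assuming the same alphabet and a single encoded value, not HERE's official decoder):
function hereDecodeFloat(encoded) {
  var ENCODING_CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_';
  var result = 0;
  var shift = 0;
  for (var i = 0; i < encoded.length; i++) {
    var chunk = ENCODING_CHARS.indexOf(encoded[i]);
    result |= (chunk & 0x1F) << shift;
    shift += 5;
    if ((chunk & 0x20) === 0) break; // last chunk carries no continuation bit
  }
  // undo the sign flip and the low-bit shift, then scale back down
  if (result & 1) result = ~result;
  return (result >> 1) / 100000;
}
hereDecodeFloat(hereEncodeFloat(-13.48185)); // ≈ -13.48185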

Compress group of arrays into smallest possible string

This question can be answered using JavaScript, Underscore or jQuery functions.
given 4 arrays:
[17,17,17,17,17,18,18,18,18,18,19,19,19,19,19,20,20,20,20,20] => x coordinate of unit
[11,12,13,14,15,11,12,13,14,15,11,12,13,14,15,11,12,13,14,15] => y coordinate of unit
[92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92] => x (unit moves to this direction x)
[35,36,37,38,39,35,36,37,38,39,35,36,37,38,39,35,36,37,38,39] => y (unit moves to this direction y)
They are very related to each other.
For example, the first elements of all the arrays, [17,11,92,35], are one unit's x/y coordinates together with the x/y coordinates of this unit's target.
So there are 5*4 = 20 units in total. Every unit has slightly different stats.
These 4 arrays of unit x/y coordinates visually look like an army of 20 units "x" (targeting "o"):
xxxxx o
xxxxx o
xxxxx o
xxxxx o
There will always be 4 arrays. Even for 1 unit there will be 4 arrays, each of size 1. This is the simplest and most common situation.
In a real situation, every unit has 20 different stats (keys) in total, and 14 of those keys are usually identical to those of the other units in the group - all 14 keys.
So they are grouped as an army with the same stats. The only difference is the coordinates of the unit and the coordinates of the unit's target.
I need to compress all this data into as small a representation as possible, which can later be decompressed.
There can also be more complex situations, when all these 14 keys are accidentally the same, but the coordinates are totally different from the pattern. Example:
[17,17,17,17,17,18,18,18, 215, 18,18,19,19,19,19,19,20,20,20,20,20] => x coordinate of unit
[11,12,13,14,15,11,12,13, 418, 14,15,11,12,13,14,15,11,12,13,14,15] => y coordinate of unit
[92,92,92,92,92,92,92,92, -78, 92,92,92,92,92,92,92,92,92,92,92,92] => x (unit moves to this direction x)
[35,36,37,38,39,35,36,37, -887, 38,39,35,36,37,38,39,35,36,37,38,39] => y (unit moves to this direction y)
In this situation I need to split this data into 2 different armies. When there are fewer than 3 units in an army, I simply write these units without the
pattern - as [215,418,-78,-887],[..] - and if an army has more than 2 units, I need a compressed string with a pattern, which can be decompressed later. In this example there are 21 units. It just has to be split into an army of 1 unit and a (5x4 = 20) unit army.
Assuming that every n units follow a pattern,
encode units with
n: sequence units count
ssx: start of source x
dsx: difference of source x
ssy: start of source y
dsy: difference of source y
stx: start of target x
dtx: difference of target x
sty: start of target y
dty: difference of target y
by the array: [n,ssx,dsx,ssy,dsy,stx,dtx,sty,dty]
so that the units:
[17,17,17,17,17],
[11,12,13,14,15],
[92,92,92,92,92],
[35,36,37,38,39]
are encoded:
[5,17,0,11,1,92,0,35,1]
Of course, if you know in advance that, for example, the y targets are always the same for such a sequence, you can drop the difference parameter, to have:
[n,ssx,dsx,ssy,---,stx,dtx,sty,---] => [n,ssx,dsx,ssy,stx,dtx,sty], and so on.
For an interruption of a pattern like the one you mentioned in your last example, you can use additional 'extra' arrays and then insert them into the sequence, with:
exsx: extra unit starting x
exsy: extra unit starting y
extx: extra unit target x
exty: extra unit target y
m: insert extra unit at
so that the special case is encoded:
{
  patterns: [
    [5,17,0,11,1,92,0,35,1],
    [5,18,0,11,1,92,0,35,1],
    [5,19,0,11,1,92,0,35,1],
    [5,20,0,11,1,92,0,35,1]
  ],
  extras: [
    [215,418,-78,-887,8] // 8 - because you want to insert this unit at index 8
  ]
}
Again, this is a general encoding. Any specific properties for the patterns may further reduce the encoded representation.
Hope this helps.
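To make the scheme concrete, here is a minimal sketch of the decoding side (decodePattern is a name I made up; it assumes the [n,ssx,dsx,ssy,dsy,stx,dtx,sty,dty] layout described above):
function decodePattern([n, ssx, dsx, ssy, dsy, stx, dtx, sty, dty]) {
  const units = [];
  for (let i = 0; i < n; i++) {
    // each unit as [x, y, targetX, targetY]
    units.push([ssx + i * dsx, ssy + i * dsy, stx + i * dtx, sty + i * dty]);
  }
  return units;
}
decodePattern([5, 17, 0, 11, 1, 92, 0, 35, 1]);
// => [[17,11,92,35], [17,12,92,36], [17,13,92,37], [17,14,92,38], [17,15,92,39]]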
High compression using bitstreams
You can encode sets of values into a bit stream, allowing you to remove unused bits. The numbers you have shown are no larger than 887 in magnitude (ignoring the sign), which means every number fits into 10 bits, saving 54 bits per number (JavaScript uses 64-bit numbers).
Run length compression
You also have many repeated sets of numbers which you can use run length compression on. You set a flag in the bitstream that indicates that the following set of bits represents a repeated sequence of numbers, then you have the number of repeats and the value to repeat. For sequences of random numbers you just keep them as is.
If you use run-length compression you create a block-type structure in the bit stream, which makes it possible to embed further compression. As many of your numbers are below 128, they can be encoded in 7 bits, or even fewer. For a small overhead (in this case 2 bits per block) you can select the smallest bit size that packs all the numbers in that block.
Variable bit depth numbers
I have created a number type value that represent the number of bits used to store numbers in a block. Each block has a number type and all numbers in the block use that type. There are 4 number types that can be encoded into 2 bits.
00 = 4 bit numbers. Range 0-15
01 = 5 bit numbers. Range 0-31
10 = 7 bit numbers. Range 0-127
11 = 10 bit numbers. Range 0-1023
The bitstream
To make this easy you will need a bit stream read/write. It allows you to easily write and read bits from a stream of bits.
// Simple unsigned bit stream
// Read and write to and from a bit stream.
// Numbers are stored as Big endian
// Does not comprehend sign so wordlength should be less than 32 bits
// methods
// eof(); // returns true if read pos > buffer bit size
// read(numberBits); // number of bits to read as one number. No sign so < 32
// write(value,numberBits); // value to write, number of bits to write < 32
// getBuffer(); // return object with buffer and array of numbers, and bitLength the total number of bits
// setBuffer(buffer,bitLength); // the buffers as an array of numbers, and bitLength the total number of bits
// Properties
// wordLength; // read only length of a word.
function BitStream(){
var buffer = [];
var pos = 0;
var numBits = 0;
const wordLength = 16;
this.wordLength = wordLength;
// read a single bit
var readBit = function(){
var b = buffer[Math.floor(pos / wordLength)]; // get word
b = (b >> ((wordLength - 1) - (pos % wordLength))) & 1;
pos += 1;
return b;
}
// write a single bit. Will fill bits with 0 if write pos is moved past buffer length
var writeBit = function(bit){
var rP = Math.floor(pos / wordLength);
if(rP >= buffer.length){ // check that the buffer has values at current pos.
var end = buffer.length; // fill buffer up to pos with zero
while(end <= rP){
buffer[end] = 0;
end += 1;
}
}
var b = buffer[rP];
bit &= 1; // mask out any unwanted bits
bit <<= (wordLength - 1) - (pos % wordLength);
b |= bit;
buffer[rP] = b;
pos += 1;
}
// returns true is past eof
this.eof = function(){
return pos >= numBits;
}
// reads number of bits as a Number
this.read = function(bits){
var v = 0;
while(bits > 0){
v <<= 1;
v |= readBit();
bits -= 1;
}
return v;
}
// writes value to bit stream
this.write = function(value,bits){
var v;
while(bits > 0){
bits -= 1;
writeBit( (value >> bits) & 1 );
}
}
// returns the buffer and length
this.getBuffer = function(){
return {
buffer : buffer,
bitLength : pos,
};
}
// set the buffer and length and returns read write pos to start
this.setBuffer = function(_buffer,bitLength){
buffer = _buffer;
numBits = bitLength;
pos = 0;
}
}
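A quick round trip to show how the stream is used (the values here are my own, not part of the final format):
var out = new BitStream();
out.write(5, 4);     // write 5 as a 4-bit value
out.write(300, 10);  // write 300 as a 10-bit value
var buf = out.getBuffer();
var inp = new BitStream();
inp.setBuffer(buf.buffer, buf.bitLength);
inp.read(4);   // => 5
inp.read(10);  // => 300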
A format for your numbers
Now to design the format. The first bit read from the stream is a sequence flag: if 0, the following block is a repeated value; if 1, the following block is a sequence of random numbers.
Block bits : description;
repeat block holds a repeated number
bit 0 : Val 0 = repeat
bit 1 : Val 0 = 4bit repeat count or 1 = 5bit repeat count
then either
bits 2,3,4,5 : 4 bit number of repeats - 1
bits 6,7 : 2 bit Number type
or
bits 2,3,4,5,6 : 5 bit number of repeats - 1
bits 7,8 : 2 bit Number type
Followed by
Then a value that will be repeated depending on the number type
End of block
sequence block holds a sequence of random numbers
bit 0 : Val 1 = sequence
bit 1 : Val 0 = positive sequence Val 1 = negative sequence
bits 2,3,4,5 : 4 bit number of numbers in sequence - 1
bits 6,7 : 2 bit Number type
then the sequence of numbers in the number format
End of block.
Keep reading blocks until the end of file.
Encoder and decoder
The following object will encode and decode a flat array of numbers. It only handles numbers up to 10 bits long, so no values over 1023 or under -1023.
If you want larger numbers you will have to change the number types that are used. To do this change the arrays
const numberSize = [0,0,0,0,0,1,2,2,3,3,3]; // the number bit depth
const numberBits = [4,5,7,10]; // the number bit depth lookup;
If you want the max number to be 12 bits, -4095 to 4095 (the sign bit is in the block encoding), use the arrays below; I have also changed the 7-bit number type to 8 bits. The first array is used to look up the bit depth: for a value that needs bitCount bits, you get the number type with numberSize[bitCount] and the bits used to store the number with numberBits[numberSize[bitCount]].
const numberSize = [0,0,0,0,0,1,2,2,2,3,3,3,3]; // the number bit depth
const numberBits = [4,5,8,12]; // the number bit depth lookup;
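To make the lookup concrete, here is a small worked example using the original 4/5/7/10-bit tables (the sample value is mine):
const numberSize = [0,0,0,0,0,1,2,2,3,3,3]; // bit count -> number type
const numberBits = [4,5,7,10];              // number type -> stored bit width
const max = 92;                                              // largest magnitude in a block
const bitCount = Math.ceil(Math.log(max + 1) / Math.log(2)); // 7 bits needed for 92
numberSize[bitCount];              // => 2, i.e. the 7-bit type (binary 10)
numberBits[numberSize[bitCount]];  // => 7 bits are used for every number in the block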
function ArrayZip(){
var zipBuffer = 0;
const numberSize = [0,0,0,0,0,1,2,2,3,3,3]; // the number bit depth lookup;
const numberBits = [4,5,7,10]; // the number bit depth lookup;
this.encode = function(data){ // encodes the data
var pos = 0;
function getRepeat(){ // returns the number of repeat values
var p = pos + 1;
if(data[pos] < 0){
return 1; // ignore negative numbers
}
while(p < data.length && data[p] === data[pos]){
p += 1;
}
return p - pos;
}
function getNoRepeat(){ // returns the number of non repeat values
// if the sequence has negative numbers then
// the length is returned as a negative
var p = pos + 1;
if(data[pos] < 0){ // negative numbers
while(p < data.length && data[p] !== data[p-1] && data[p] < 0){
p += 1;
}
return -(p - pos);
}
while(p < data.length && data[p] !== data[p-1] && data[p] >= 0){
p += 1;
}
return p - pos;
}
function getMax(count){
var max = 0;
var p = pos;
while(count > 0){
max = Math.max(Math.abs(data[p]),max);
p += 1;
count -= 1;
}
return max;
}
var out = new BitStream();
while(pos < data.length){
var reps = getRepeat();
if(reps > 1){
var bitCount = numberSize[Math.ceil(Math.log(getMax(reps) + 1) / Math.log(2))];
if(reps < 16){
out.write(0,1); // repeat header
out.write(0,1); // use 4 bit repeat count;
out.write(reps-1,4); // write 4 bit number of reps
out.write(bitCount,2); // write 2 bit number size
out.write(data[pos],numberBits[bitCount]);
pos += reps;
}else {
if(reps > 32){ // if more than can fit in one repeat block split it
reps = 32;
}
out.write(0,1); // repeat header
out.write(1,1); // use 5 bit repeat count;
out.write(reps-1,5); // write 5 bit number of reps
out.write(bitCount,2); // write 2 bit number size
out.write(data[pos],numberBits[bitCount]);
pos += reps;
}
}else{
var seq = getNoRepeat(); // get number no repeats
var neg = seq < 0 ? 1 : 0; // found negative numbers
seq = Math.min(16,Math.abs(seq));
// check if last value is the start of a repeating block
if(seq > 1){
var tempPos = pos;
pos += seq;
seq -= getRepeat() > 1 ? 1 : 0;
pos = tempPos;
}
// ge the max bit count to hold numbers
var bitCount = numberSize[Math.ceil(Math.log(getMax(seq) + 1) / Math.log(2))];
out.write(1,1); // sequence header
out.write(neg,1); // write negative flag
out.write(seq - 1,4); // write sequence length;
out.write(bitCount,2); // write 2 bit number size
while(seq > 0){
out.write(Math.abs(data[pos]),numberBits[bitCount]);
pos += 1;
seq -= 1;
}
}
}
// get the bit stream buffer
var buf = out.getBuffer();
// start bit stream with number of trailing bits. There are 4 bits used of 16 so plenty
// of room for alternative encoding flags.
var str = String.fromCharCode(buf.bitLength % out.wordLength);
// convert bit stream to characters
for(var i = 0; i < buf.buffer.length; i ++){
str += String.fromCharCode(buf.buffer[i]);
}
// return encoded string
return str;
}
this.decode = function(zip){
var count, rSize, header, _in, i, endBits, numSize, val, neg;
var data = []; // holds character codes
var decompressed = []; // holds the decompressed array of numbers
endBits = zip.charCodeAt(0); // get the trailing bits count
for(i = 1; i < zip.length; i ++){ // convert string to numbers
data[i-1] = zip.charCodeAt(i);
}
_in = new BitStream(); // create a bitstream to read the bits
// set the buffer data and length
_in.setBuffer(data,(data.length - 1) * _in.wordLength + endBits);
while(!_in.eof()){ // do until eof
header = _in.read(1); // read header bit
if(header === 0){ // is repeat header
rSize = _in.read(1); // get repeat count size
if(rSize === 0){
count = _in.read(4); // get 4 bit repeat count
}else{
count = _in.read(5); // get 5 bit repeat count
}
numSize = _in.read(2); // get 2 bit number size type
val = _in.read(numberBits[numSize]); // get the repeated value
while(count >= 0){ // add to the data count + 1 times
decompressed.push(val);
count -= 1;
}
}else{
neg = _in.read(1); // read neg flag
count = _in.read(4); // get 4 bit seq count
numSize = _in.read(2); // get 2 bit number size type
while(count >= 0){
if(neg){ // if negative numbers convert to neg
decompressed.push(-_in.read(numberBits[numSize]));
}else{
decompressed.push(_in.read(numberBits[numSize]));
}
count -= 1;
}
}
}
return decompressed;
}
}
The best way to store a bit stream is as a string. Javascript has Unicode strings so we can pack 16 bits into every character
The results and how to use.
You need to flatten the array. If you need to add extra info to reinstate the multi-dimensional arrays, just add that to the array and let the compressor compress it along with the rest.
// flatten the array
var data = [17,17,17,17,17,18,18,18,18,18,19,19,19,19,19,20,20,20,20,20,11,12,13,14,15,11,12,13,14,15,11,12,13,14,15,11,12,13,14,15,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,92,35,36,37,38,39,35,36,37,38,39,35,36,37,38,39,35,36,37,38,39];
var zipper = new ArrayZip();
var encoded = zipper.encode(data); // packs the 80 numbers in data into 21 characters.
// compression rate of the data array 5120 bits to 336 bits
// 93% compression.
// or as a flat 7-bit ASCII string of numbers: 239 characters (including ,)
// 239 * 7 bits = 1673 bits to 336 bits 80% compression.
var decoded = zipper.decode(encoded);
I did not notice the negative numbers at first so the compression does not do well with the negative values.
var data = [17,17,17,17,17,18,18,18, 215, 18,18,19,19,19,19,19,20,20,20,20,20, 11,12,13,14,15,11,12,13, 418, 14,15,11,12,13,14,15,11,12,13,14,15, 92,92,92,92,92,92,92,92, -78, 92,92,92,92,92,92,92,92,92,92,92,92, 35,36,37,38,39,35,36,37, -887, 38,39,35,36,37,38,39,35,36,37,38,39]
var encoded = zipper.encode(data); // packs the 84 numbers in data into 33 characters.
// compression rate of the data array 5376 bits to 528 bits
var decoded = zipper.decode(encoded);
Summary
As you can see this results in a very high compression rate (almost twice as good as LZ compression). The code is far from optimal and you could easily implement a multi-pass compressor with various settings (there are 12 spare bits at the start of the encoded string that can be used to select among many options to improve compression).
Also, I did not see the negative numbers until I came back to post, so the handling of negatives is not great; you could get some more out of it by modifying the BitStream to understand negative numbers (i.e. use the >>> operator).

Working with hex strings and hex values more easily in Javascript

I have some code which takes strings representing hexadecimal numbers - hex colors, actually - and adds them. For example, adding aaaaaa and 010101 gives the output ababab.
However, my method seems unnecessarily long and complicated:
var hexValue = "aaaaaa";
hexValue = "0x" + hexValue;
hexValue = parseInt(hexValue, 16);
hexValue = hexValue + 0x010101;
hexValue = hexValue.toString(16);
document.write(hexValue); // outputs 'ababab'
The hex value is still a string after concatenating 0x, so then I have to change it to a number, then I can add, and then I have to change it back into hex format! There are even more steps if the number I'm adding to it is a hexadecimal string to begin with, or if you take into consideration that I am removing the # from the hex color before all this starts.
Surely there's a simpler way to do such simple hexadecimal calculations! And just to be clear, I don't mean just putting it all on one line like (parseInt("0x"+"aaaaaa",16)+0x010101).toString(16) or using shorthand - I mean actually doing less operations.
Is there some way to get javascript to stop using decimal for all of its mathematical operations and use hex instead? Or is there some other method of making JS work with hex more easily?
No, there is no way to tell the JavaScript language to use hex integer format instead of decimal by default. Your code is about as concise as it gets but note that you do not need to prepend the "0x" base indicator when you use "parseInt" with a base.
Here is how I would approach your problem:
function addHexColor(c1, c2) {
  var hexStr = (parseInt(c1, 16) + parseInt(c2, 16)).toString(16);
  while (hexStr.length < 6) { hexStr = '0' + hexStr; } // Zero pad.
  return hexStr;
}
addHexColor('aaaaaa', '010101'); // => 'ababab'
addHexColor('010101', '010101'); // => '020202'
As mentioned by a commenter, the above solution is chock full of problems, so below is a function that does proper input validation and adds color channels separately while checking for overflow.
function addHexColor2(c1, c2) {
  const octetsRegex = /^([0-9a-f]{2})([0-9a-f]{2})([0-9a-f]{2})$/i
  const m1 = c1.match(octetsRegex)
  const m2 = c2.match(octetsRegex)
  if (!m1 || !m2) {
    throw new Error(`invalid hex color triplet(s): ${c1} / ${c2}`)
  }
  return [1, 2, 3].map(i => {
    const sum = parseInt(m1[i], 16) + parseInt(m2[i], 16)
    if (sum > 0xff) {
      throw new Error(`octet ${i} overflow: ${m1[i]}+${m2[i]}=${sum.toString(16)}`)
    }
    return sum.toString(16).padStart(2, '0')
  }).join('')
}
addHexColor2('aaaaaa', 'bogus!') // => Error: invalid hex color triplet(s): aaaaaa / bogus!
addHexColor2('aaaaaa', '606060') // => Error: octet 1 overflow: aa+60=10a
How about this:
var hexValue = "aaaaaa";
hexValue = (parseInt(hexValue, 16) + 0x010101).toString(16);
document.writeln(hexValue); // outputs 'ababab'
There is no need to add the 0x prefix if you use parseInt.
I think the accepted answer is wrong. A hexadecimal color representation is not one linear number; instead, 3 pairs of characters are given to R, G & B.
So you can't just add the values as whole numbers and expect the RGB channels to add up correctly.
For Example
n1 = '005500'; <--- green
n2 = '00ff00'; <--- brighter green
Adding these numbers should result in a greener green.
Adding greens should in no way cause RED to increase. But by doing what the accepted answer does - treating the whole value as one number - you get a carry whenever digits add up to more than f (f + 1 = 10).
You get `015400`, so by adding greens the RED increased.... WRONG.
Adding 005500 + 00ff00 should result in 00ff00. You can't add more green to max green.
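To see the carry problem in code, using the whole-number approach from the accepted answer:
// Whole-number addition lets the green overflow spill into the red octet:
(parseInt('005500', 16) + parseInt('00ff00', 16)).toString(16).padStart(6, '0');
// => '015400' instead of a clamped '00ff00'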
For folks looking for a function that can add and subtract HEX colors without going out of bounds on an individual tuple, I wrote this function a few minutes ago to do just that:
export function shiftColor(base, change, direction) {
  const colorRegEx = /^\#?[A-Fa-f0-9]{6}$/;
  // Missing parameter(s)
  if (!base || !change) {
    return '000000';
  }
  // Invalid parameter(s)
  if (!base.match(colorRegEx) || !change.match(colorRegEx)) {
    return '000000';
  }
  // Remove any '#'s
  base = base.replace(/\#/g, '');
  change = change.replace(/\#/g, '');
  // Build new color
  let newColor = '';
  for (let i = 0; i < 3; i++) {
    const basePiece = parseInt(base.substring(i * 2, i * 2 + 2), 16);
    const changePiece = parseInt(change.substring(i * 2, i * 2 + 2), 16);
    let newPiece = '';
    if (direction === 'add') {
      newPiece = (basePiece + changePiece);
      newPiece = newPiece > 255 ? 255 : newPiece;
    }
    if (direction === 'sub') {
      newPiece = (basePiece - changePiece);
      newPiece = newPiece < 0 ? 0 : newPiece;
    }
    newPiece = newPiece.toString(16);
    newPiece = newPiece.length < 2 ? '0' + newPiece : newPiece;
    newColor += newPiece;
  }
  return newColor;
}
You pass your base color as parameter 1, your change as parameter 2, and then 'add' or 'sub' as the last parameter depending on your intent.
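For example (the input colors here are mine, not from the original post):
shiftColor('66aaff', '202020', 'add'); // => '86caff' (blue channel clamped at ff)
shiftColor('66aaff', '702020', 'sub'); // => '008adf' (red channel clamped at 00)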
