Encoding strings to small sizes for QRCode generation - javascript

I'm generating QR codes from strings that could easily be longer than a QR code can handle. I'm looking for suggestions on algorithms to encode these strings as compactly as possible, or a proof that a given string cannot be shrunk any further.
Since I'm encoding a series of items, I can represent them using IDs and delimit them using pipes, as in the following lookup table:
function encodeLookUp(character){
    switch(character){
        case '0': return '0000';
        case '1': return '0001';
        case '2': return '0010';
        case '3': return '0011';
        case '4': return '0100';
        case '5': return '0101';
        case '6': return '0110';
        case '7': return '0111';
        case '8': return '1000';
        case '9': return '1001';
        case '|': return '1010';
        case ':': return '1011';
    }
    return false;
}
Using this table I am already doing a base-16 encoding: each ASCII character from the original string becomes half a character (4 bits) in the new string, effectively halving the length.
Starting String: 01251548|4654654:4465464 // ID1 | ID2 : ID3 demonstrates both delimiters.
Bit String: 000000010010010100010101010010001010010001100101010001100101010010110100010001100101010001100100
Result String: %H¤eFT´FTd // Half the length of the starting string.
This new ASCII string is then encoded according to the QR code specification.
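For reference, a minimal sketch of how that packing step might look, using the encodeLookUp() table above (the helper name packToBytes is only illustrative):
function packToBytes(str){
    let bits = '';
    for (const ch of str){
        const code = encodeLookUp(ch);
        if (code === false) throw new Error('unsupported character: ' + ch);
        bits += code;
    }
    const bytes = [];
    for (let i = 0; i < bits.length; i += 8){
        bytes.push(parseInt(bits.slice(i, i + 8).padEnd(8, '0'), 2)); // pad the final byte if needed
    }
    return bytes; // one byte per pair of input characters; the result string above is these bytes as ASCII
}
// packToBytes('01251548|4654654:4465464') yields 12 bytes for the 24-character input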
EDIT: The maximum number of characters currently encodable: 384
CLARIFICATION: Both the numeric length of each ID and the quantity of IDs (or pipes) are variable, with a tendency towards one. I am looking to reduce this algorithm so that, on average, the 'result string' contains the fewest characters possible.
NOTE: The result string is only an ASCII representation of the binary string I've encoded with the data, to conform with standard QR code specifications and readers.

If you have relatively non-random data, a Huffman encoding might be a good solution.

Using that function, you're going to lose a lot of space (since 4 bits is way too much storage for 12 combinations).
I'd start by looking at the maximum length possible for your IDs and find a suitable storage block.
If you are storing these items serially in a fixed count (say, 4 IDs), you would need id_length*id_count at most, and you won't need any separators.
Edit: Again, depending on the number of IDs you want to write and their expected maximum length, there may be different types of encodings to compress it down. RLE (run-length encoding) came to mind.

QR codes support a binary mode, and that's going to be the most efficient way for you to store your IDs. Either:
Pick a length (in bytes) that is sufficient to store all your IDs, and encode the QR-code as a series of fixed-length integers. 4 bytes (32 bits) is a standard choice that ought to cover the likely range, or
If you want to be able to encode a wide range of IDs, but expect most of the values to be small, use a variable-length encoding scheme. One example is to use the lowest 7 bits of each byte to store the integer, and the most significant bit to indicate whether any further bytes follow (see the sketch just after this list).
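As a rough sketch of the variable-length option (illustrative only, not a specific library's API): 7 data bits per byte, with the high bit as a continuation flag.
function encodeVarints(ids){
    const bytes = [];
    for (let id of ids){
        do {
            let byte = id % 128;               // lowest 7 bits (modulo avoids 32-bit truncation for large IDs)
            id = Math.floor(id / 128);
            if (id > 0) byte |= 0x80;          // high bit set means more bytes follow
            bytes.push(byte);
        } while (id > 0);
    }
    return Uint8Array.from(bytes);             // store these in the QR code's byte mode
}
// encodeVarints([1, 300]) -> Uint8Array [0x01, 0xAC, 0x02]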
Also note that QR codes can be a lot larger than 384 characters!
Edit: From your original question, though, it looks like you're encoding more than just a series of integers - you have at least two different types of delimiters. Where can they appear and in what circumstances? The encoding format is going to depend on those parameters.

QR codes already have special encoding modes that are optimized for digits, or just alphanumeric data. It would probably be easier to take advantage of these rather than invent a scheme.
If you're going to do something custom, I think you'll find it hard to beat something like gzip compression. Just gzip the bytes, encode the bytes in byte mode, and decompress on the other end.
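For instance, in Node.js that could be as simple as the following sketch using the built-in zlib module (a browser build would need a library such as pako instead):
const zlib = require('zlib');

const payload = '01251548|4654654:4465464';
const compressed = zlib.gzipSync(Buffer.from(payload));   // bytes to place in the QR code's byte mode
const restored = zlib.gunzipSync(compressed).toString();  // decompress on the other end
// Note: gzip adds roughly 20 bytes of header/trailer, so it only pays off for longer strings;
// zlib.deflateRawSync / zlib.inflateRawSync carry less overhead.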

As a start of an answer to my own question:
If I start with a string of numbers
I can parse that string for patterns and hold those patterns in special symbols that are able to take up the other 4 spaces available in my Huffman tree.
EDIT: Example: starting string 12222345, ending string 12x345, where x is a symbol that means 'repeat the last symbol 3 more times'.
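A minimal sketch of that substitution (illustrative only; 'x' stands in for one of the 4 spare 4-bit codes and means "repeat the previous symbol 3 more times"):
function encodeRuns(str){
    let out = '';
    let i = 0;
    while (i < str.length){
        if (str[i] === str[i + 1] && str[i] === str[i + 2] && str[i] === str[i + 3]){
            out += str[i] + 'x'; // a run of 4 identical symbols collapses to symbol + x
            i += 4;
        } else {
            out += str[i];
            i += 1;
        }
    }
    return out;
}
// encodeRuns('12222345') -> '12x345'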

Related

Implementing extendible hash table in javascript: how to use binary number as index

I'm studying data structures and trying to implement extendible hashing from scratch in JavaScript, and I'm confused. Here is an example I'm using as reference: hash table with binary labels.
Example: to store "john":35 in a table of size: 8 indexes / depth 3 (last 3 digits of binary hash)
"john" gets converted to a hash, example: 13,
13 is converted to a binary: 1101
find which index of the table 1101 belongs to, by looking at the last 3 digits "101"
This is where I'm stuck. Am I supposed to convert 101 back to decimal form (which would be 5), and then access the index by doing array[5]? Is there a way to label the array indexes in binary format like array[101] (but then wouldn't it be better to use an object?)? This seems like a lot of unnecessary extra steps to avoid just using modulo (13 % 8), so am I missing something? Is this implementation useful in a non-JavaScript language?
First post - thanks in advance!
Internally, all data in the computer is stored in binary, so you can't "convert" from decimal to binary, since everything is already binary (it's just shown to us as decimal). If you want to print out a number as binary for debugging purposes, you can do:
console.log((5).toString(2)); // will print "101"
The .toString(2) method converts the number to a string with the binary representation of the number.
You can also write numbers in binary by starting it with 0b:
let x = 0b1101; // == 13
If you want to get the last few binary digits of a number, use the modulo operator with 2 to the power of the number of digits you want:
(0b1101 % (2**3)).toString(2) // "101"
With the table selected, you probably want to use the rest of the number that you haven't used already as the index in the table. We can use the bitshift operator, >>, to do this:
(0b1101 >> 3).toString(2) // "1", right three bits cut off
With a longer number:
// Note that underscores don't mean anything, they are just used for spacing
(0b1101_1101 >> 3).toString(2) // "11011" you can see that the right three bits have been cut off
Keep in mind that you probably shouldn't be using .toString(2) to actually store anything in the table; it should only be used for debugging.
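Tying it back to the hash table: yes, just use the masked value directly as the array index; there is no separate "binary index". A minimal sketch (the bucket shape is an assumption):
const depth = 3;
const table = new Array(2 ** depth).fill(null).map(() => ({ keys: {} })); // 8 buckets

function insert(key, value, hash){
    const index = hash % (2 ** depth); // last `depth` bits of the hash; 0b101 is simply the number 5
    table[index].keys[key] = value;    // array[5] -- no binary labels needed
}

insert('john', 35, 13); // 13 is 0b1101, so it lands in table[0b101], i.e. table[5]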

Encode ArrayBuffer of arbitrary length to custom alphabet

I have had a few questions about how to convert integers into a custom alphabet, and some stuff about encoding which I still don't fully understand yet, but I'm getting there. What I'm wondering about now is how to convert an arbitrary-length ArrayBuffer (basically just a bunch of bits of arbitrary length) into a custom alphabet (without using major library helpers like JavaScript's toString or parseInt or others).
So this value is bigger than the max integer by far, as you could have a whole paragraph or document as input.
From my understanding so far, I would do this:
var array = new Uint8Array(500000)
array[0] = 123
array[1] = 123
array[2] = 123
// ... fill it in with some stuff.
stringify(array.buffer, '123abc')
// encode to 6-character alphabet, such as:
// 1a2ba3caa13a...
Then I feel stuck... There is this helpful example on how to do it for integers. But I am having difficulty applying it to this new situation.
Also would be helpful to know how to convert it back into the ArrayBuffer from the string that used the custom alphabet, so it would go both ways.
The conversion of the array.buffer to some example output like 1a2ba3caa13a... would happen similarly to the radix stringifying in the linked question (well, I don't know how it would work, actually). It would go through the bits somehow and encode them using characters from the custom alphabet, like hex encoding, base64 encoding, etc.
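One possible approach (a sketch, not a definitive answer): treat the bytes as one big base-256 integer using BigInt and repeatedly divide by the alphabet size, which also works in reverse. Note that this naive version drops leading zero bytes (real base-N encoders such as base58 prepend one alphabet character per leading zero byte) and will be slow for a 500,000-byte buffer:
function stringify(buffer, alphabet){
    const bytes = new Uint8Array(buffer);
    const base = BigInt(alphabet.length);
    let n = 0n;
    for (const b of bytes) n = (n << 8n) | BigInt(b);   // bytes -> one big integer
    let out = '';
    while (n > 0n){
        out = alphabet[Number(n % base)] + out;          // peel off one "digit" at a time
        n /= base;
    }
    return out === '' ? alphabet[0] : out;
}

function parse(str, alphabet){
    const base = BigInt(alphabet.length);
    let n = 0n;
    for (const ch of str) n = n * base + BigInt(alphabet.indexOf(ch));
    const bytes = [];
    while (n > 0n){
        bytes.unshift(Number(n & 0xffn));                // big integer -> bytes
        n >>= 8n;
    }
    return new Uint8Array(bytes).buffer;
}

// stringify(Uint8Array.from([123, 123, 123]).buffer, '123abc') -> 'bbc3b23ba'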

Reassembling negative Python marshal int's into Javascript numbers

I'm writing a client-side Python bytecode interpreter in Javascript (specifically Typescript) for a class project. Parsing the bytecode was going fine until I tried out a negative number.
In Python, marshal.dumps(2) gives 'i\x02\x00\x00\x00' and marshal.dumps(-2) gives 'i\xfe\xff\xff\xff'. This makes sense as Python represents integers using two's complement with at least 32 bits of precision.
In my Typescript code, I use the equivalent of Node.js's Buffer class (via a library called BrowserFS, instead of ArrayBuffers etc.) to read the data. When I see the character 'i' (i.e. buffer.readUInt8(offset) == 105, signalling that the next thing is an int), I then call readInt32LE on the next offset to read a little-endian signed long (4 bytes). This works fine for positive numbers but not for negative numbers: for 1 I get '1', but for '-1' I get something like '-272777233'.
I guess that Javascript represents numbers in 64-bit (floating point?). So, it seems like the following should work:
var longval = buffer.readInt32LE(offset); // reads a 4-byte long, gives -272777233
var low32Bits = longval & 0xffff0000; //take the little endian 'most significant' 32 bits
var newval = ~low32Bits + 1; //invert the bits and add 1 to negate the original value
//but now newval = 272826368 instead of -2
I've tried a lot of different things and I've been stuck on this for days. I can't figure out how to recover the original value of the Python integer from the binary marshal string using Javascript/Typescript. Also I think I deeply misunderstand how bits work. Any thoughts would be appreciated here.
Some more specific questions might be:
Why would buffer.readInt32LE work for positive ints but not negative?
Am I using the correct method to get the 'most significant' or 'lowest' 32 bits (i.e. does & 0xffff0000 work how I think it does?)
Separate but related: in an actual 'long' number (i.e. longer than '-2'), I think there is a sign bit and a magnitude, and I think this information is stored in the 'highest' 2 bits of the number (i.e. at number & 0x000000ff?) -- is this the correct way of thinking about this?
The sequence ef bf bd is the UTF-8 sequence for the "Unicode replacement character", which Unicode encoders use to represent invalid encodings.
It sounds like whatever method you're using to download the data is getting accidentally run through a UTF-8 decoder and corrupting the raw datastream. Be sure you're using blob instead of text, or whatever the equivalent is for the way you're downloading the bytecode.
This got messed up only for negative values because positive values are within the normal mapping space of UTF-8 and thus get translated 1:1 from the original byte stream.
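For example, if the bytecode is fetched over HTTP, something like this keeps the bytes intact (a sketch; the URL and the single-int layout are assumptions):
async function loadMarshalledInt(url){
    const resp = await fetch(url);
    const buf = await resp.arrayBuffer();  // raw bytes -- no text/UTF-8 decoding step
    const view = new DataView(buf);
    if (view.getUint8(0) === 0x69){        // 'i' type byte
        return view.getInt32(1, true);     // 4-byte little-endian signed int, so -2 comes back as -2
    }
    throw new Error('unexpected marshal type byte');
}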

Any way to reliably compress a short string?

I have a string exactly 53 characters long that contains a limited set of possible characters.
[A-Za-z0-9\.\-~_+]{53}
I need to reduce this to length 50 without loss of information and using the same set of characters.
I think it should be possible to compress most strings down to 50 length, but is it possible for all possible length 53 strings? We know that in the worst case 14 characters from the possible set will be unused. Can we use this information at all?
Thanks for reading.
If, as you stated, your output strings have to use the same set of characters as the input string, and if you don't know anything special about the requirements of the input string, then no, it's not possible to compress every possible 53-character string down to 50 characters. This is a simple application of the pigeonhole principle.
Your input strings can be represented as a 53-digit number in base 67, i.e., an integer from 0 to 67^53 - 1 ≈ 6×10^96.
You want to map those numbers to an integer from 0 to 67^50 - 1 ≈ 2×10^91.
So by the pigeonhole principle, you're guaranteed that 67^3 = 300,763 different inputs will map to each possible output -- which means that, when you go to decompress, you have no way to know which of those 300,763 originals you're supposed to map back to.
To make this work, you have to change your requirements. You could use a larger set of characters to encode the output (you could get it down to 50 characters if each one had 87 possible values, instead of the 67 in the input). Or you could identify redundancy in the input -- perhaps the first character can only be a '3' or a '5', the nineteenth and twentieth are a state abbreviation that can only have 62 different possible values, that sort of thing.
If you can't do either of those things, you'll have to use a compression algorithm, like Huffman coding, and accept the fact that some strings will be compressible (and get shorter) and others will not (and will get longer).
What you ask is not possible in the most general case, which can be proven very simply.
Say it was possible to encode an arbitrary 53 character string to 50 chars in the same set. Do that, then add three random characters to the encoded string. Then you have another arbitrary, 53 character string. How do you compress that?
So what you want can not be guaranteed to work for any possible data. However, it is possible that all your real data has low enough entropy that you can devise a scheme that will work.
In that case, you will probably want to do some variant of Huffman coding, which basically allocates variable-bit-length encodings for the characters in your set, using the shortest encodings for the most commonly used characters. You can analyze all your data to come up with a set of encodings. After Huffman coding, your string will be a (hopefully shorter) bitstream, which you encode to your character set at 6 bits per character. It may be short enough for all your real data.
A library-based encoding like Smaz (referenced in another answer) may work as well. Again, it is impossible to guarantee that it will work for all possible data.
One byte (character) can encode 256 values (0-255) but your set of valid characters uses only 67 values, which can be represented in 7 bits (alas, 6 bits gets you only 64) and none of your characters uses the high bit of the byte.
Given that, you can throw away the high bit and store only 7 bits, running the initial bits of the next character into the "spare" space of the first character. This would require only 47 bytes of space to store. (53 x 7 = 371 bits, 371 / 8 = 46.4 == 47)
This is not really considered compression, but rather a change in encoding.
For example "ABC" is 0x41 0x42 0x43
0x41 0x42 0x43 // hex values
0100 0001 0100 0010 0100 0011 // binary
100 0001 100 0010 100 0011 // drop high bit
// run it all together
100000110000101000011
// split as 8 bits (and pad to 8)
10000011 00001010 00011[000]
0x83 0x0A 0x18
In this example the 3 characters don't save any space, but your 53 characters will always come out as 47 bytes, guaranteed.
Note, however, that the output will not be in your original character set, if that is important to you.
The process becomes:
original-text --> encode --> store output-text (in database?)
retrieve --> decode --> original-text restored
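A sketch of the encode step (an illustration, not the answer's exact code; it assumes every input character is ASCII with the high bit clear):
function encode7bit(str){
    let bits = '';
    for (const ch of str){
        bits += ch.charCodeAt(0).toString(2).padStart(8, '0').slice(1); // drop the high bit
    }
    const bytes = [];
    for (let i = 0; i < bits.length; i += 8){
        bytes.push(parseInt(bits.slice(i, i + 8).padEnd(8, '0'), 2));   // pad the final byte with zeros
    }
    return Uint8Array.from(bytes);
}
// encode7bit('ABC') -> Uint8Array [0x83, 0x0A, 0x18], matching the worked example above
// 53 input characters always become Math.ceil(53 * 7 / 8) === 47 bytes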
If I remember correctly, Huffman coding is going to be the most compact way to store the data. It has been too long since I used it to write the algorithm quickly, but the general idea is covered here; if I remember correctly, what you do is (a sketch follows the list):
get the count for each character that is used
prioritize them based on how frequently they occurred
build a tree based off the prioritization
get the compressed bit representation of each character by traversing the tree (start at the root, left = 0 right = 1)
replace each character with the bits from the tree
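A compact sketch of those steps (illustrative only; a real implementation would also need to ship or rebuild the code table for decoding):
function huffmanEncode(str){
    // 1. count each character
    const freq = {};
    for (const ch of str) freq[ch] = (freq[ch] || 0) + 1;

    // 2./3. build the tree by repeatedly merging the two least frequent nodes
    let nodes = Object.entries(freq).map(([ch, f]) => ({ ch, f }));
    while (nodes.length > 1){
        nodes.sort((a, b) => a.f - b.f);
        const [left, right] = nodes.splice(0, 2);
        nodes.push({ f: left.f + right.f, left, right });
    }

    // 4. walk the tree: left = 0, right = 1
    const codes = {};
    (function walk(node, prefix){
        if (node.ch !== undefined){ codes[node.ch] = prefix || '0'; return; }
        walk(node.left, prefix + '0');
        walk(node.right, prefix + '1');
    })(nodes[0], '');

    // 5. replace each character with its bits
    return { codes, bits: [...str].map(ch => codes[ch]).join('') };
}
// huffmanEncode('12222345').bits -- '2', the most frequent character, gets the shortest code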
Smaz is a simple compression library suitable for compressing very short strings.

JSON transfer of bigint: 12000000000002539 is converted to 12000000000002540?

I'm transferring raw data like [{id: 12000000000002539, Name: "Some Name"}] and I'm getting the object [{id: 12000000000002540, Name: "Some Name"}] after parsing. For now, converting the id into a string on the server side seems to help.
But is there a better way to transfer bigint data correctly?
The value is actually not exceeding the maximum numeric value in JavaScript (which is "only" about 1.7×10^308).
However, the value is exceeding the range of "integral precision". It is not that the wrong number is sent: rather, it is that the literal 12000000000002539 can only be represented as precisely as 12000000000002540, so the correct numeric value never existed in JavaScript in the first place. (The range of exactly representable integers is about +/- 2^53.)
This is an interesting phenomenon of using a double-precision (binary64 in IEEE-754 speak) type to store all numeric values, including integers:
12000000000002539 === 12000000000002540 // true
The maximum number of significant decimal digits that can be precisely stored as a numeric value is 15 (15.95, really). In the above, there are 17 significant digits, so some of the least-significant information is silently lost as the JavaScript parser/engine reads in the literal value.
The only safe way to handle integral numbers of this magnitude in JavaScript is to use a string literal or to break it down in another fashion (e.g. a custom numeric type or a "bigint library"). However, I recommend just using a string, as it is human readable, relatively compact (only two extra characters in JSON), and doesn't require special serialization. Since the value is just an "id" in this case, I hope that math does not need to be performed upon it :)
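To make the string approach concrete (a short sketch; the field names mirror the question):
JSON.parse('[{"id":12000000000002539}]')[0].id;  // 12000000000002540 -- precision is lost during parsing

// With the server sending the id as a string instead:
const [row] = JSON.parse('[{"id":"12000000000002539","Name":"Some Name"}]');
row.id;                                          // "12000000000002539", intact
const idAsBigInt = BigInt(row.id);               // 12000000000002539n, if arithmetic is ever needed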
Happy coding.
