Using PHP gzdeflate() and pako.js - JavaScript

I am attempting to combine PHP's gzdeflate() and pako. To compress the string I am using:
const compressed = '<?php echo base64_encode(gzdeflate("Compress me")); ?>';
// compressed now contains: c87PLShKLS5WyE0FAA==
But I cannot seem to read this string back using pako. I have tried the following:
var enc = new TextEncoder("utf-8");
pako.ungzip(enc.encode(compressed));
I get this message back: Uncaught incorrect header check
Is there a simple way to compress using plain PHP and inflate using pako?
So far I have tried various gzdeflate() "levels" from one to nine, but none of them appear to make any difference, and at this point I am just guessing.
We would rather not install any special PHP extension if possible.
Thank you very much.

Update to #edwardsmarkf's answer: you can solve this without the atos function now. Most newer browsers have the TextDecoder API. You can use it like so:
const decoder = new TextDecoder();
const result = decoder.decode(pako.ungzip(atob(compressedBase64Data)));
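Note that recent pako releases expect a Uint8Array rather than the binary string atob() returns, so a small conversion helper may also be needed (the name base64ToBytes is ours, for illustration):

```javascript
// Convert a base64 payload to bytes: atob() yields a binary string,
// and each char code is one byte of the original data.
function base64ToBytes(b64) {
  const bin = atob(b64);
  const bytes = new Uint8Array(bin.length);
  for (let i = 0; i < bin.length; i++) bytes[i] = bin.charCodeAt(i);
  return bytes;
}
```

With it, the decode line becomes decoder.decode(pako.ungzip(base64ToBytes(compressedBase64Data))).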

I couldn't get the answers here to work, so I did some research.
As PleaseStand pointed out here, the problem is that PHP works with byte strings (typically UTF-8), while JS uses UTF-16 strings, so the binary-string-to-base64 encoding will differ.
The solution I used is to force JS to treat the data as raw bytes. This is straightforward, as pako accepts and returns Uint8Arrays, which simply hold the raw bytes:
Compress in JS, Decompress in PHP:
//JS
const pako = require('pako');
const compress = str => Buffer.from(pako.deflateRaw(str)).toString('base64');
console.log(compress('asdfasdfasdfasdf')); //SyxOSUtEwgA=
//PHP
function decompress($str) { return gzinflate(base64_decode($str)); }
echo decompress('SyxOSUtEwgA='); //asdfasdfasdfasdf
Compress in PHP, Decompress in JS:
//PHP
function compress($str) { return base64_encode(gzdeflate($str, 9)); }
echo compress('asdfasdfasdf'); //SyxOSUuEYgA=
//JS
const pako = require('pako');
const decompress = str => pako.inflateRaw(Buffer.from(str, 'base64'), {to: 'string'});
console.log(decompress('SyxOSUuEYgA=')); //asdfasdfasdf
Note: Buffer instances are also Uint8Array instances, hence we don't need to convert the Buffer to a Uint8Array before giving it to pako.
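This is easy to verify in Node:

```javascript
// Buffer subclasses Uint8Array, so a Buffer can be handed to any API
// (such as pako) that expects a Uint8Array.
const buf = Buffer.from('asdf');
console.log(buf instanceof Uint8Array); // true
```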
These functions are also compatible within the languages:
//JS
console.log(decompress(compress('asdfasdfasdfasdf'))); //asdfasdfasdfasdf
//PHP
echo decompress(compress('asdfasdfasdfasdf')); //asdfasdfasdfasdf
For JS, this works out of the box in Node.js. In a browser environment, you will need a polyfill for Buffer.
For PHP, make sure the Zlib extension is enabled.

I'm not familiar with PHP, so I struggled a bit with this problem; here is a minimal working solution in PHP:
$response = gzdeflate('My data', 9, ZLIB_ENCODING_DEFLATE);
header('Content-Encoding: deflate');
echo $response;
No need to use pako after this; the data will be decompressed by the browser.
Example, if you're requesting JSON-formatted data:
$.ajax({
type: 'GET',
url: 'http://target.com',
dataType: "json",
contentType: "application/json; charset=utf-8",
headers : {'Accept-Encoding': 'deflate'},
})
.done(function(res) {
console.log(res)
})
.fail(function(xhr, textStatus, errorThrown) {
});

This appears to work (below).
Steps involved:
Server side (PHP):
1) gzdeflate using the ZLIB_ENCODING_DEFLATE option
2) base64_encode
Client side (JavaScript):
1) atob
2) pako.ungzip
3) atos function
<script src='//cdnjs.cloudflare.com/ajax/libs/pako/1.0.5/pako_deflate.js' type='text/javascript'></script>
<script type='text/javascript'>
const compressedDEFLATE = '<?php echo base64_encode(gzdeflate("Compress me", 6, ZLIB_ENCODING_DEFLATE)); ?>';
// atos: decode a UTF-8 byte array into a JS string
function atos(arr) {
for (var i=0, l=arr.length, s='', c; c = arr[i++];)
s += String.fromCharCode(
c > 0xdf && c < 0xf0 && i < l-1
? (c & 0xf) << 12 | (arr[i++] & 0x3f) << 6 | arr[i++] & 0x3f
: c > 0x7f && i < l
? (c & 0x1f) << 6 | arr[i++] & 0x3f
: c
);
return s
}
alert ( atos(pako.ungzip( atob(compressedDEFLATE) ) ) );
</script>

Related

Cannot convert canvas.toDataURL('image/png') to an image with base64_decode

I am trying to save an image to a folder in PHP by decoding a canvas.toDataURL('image/png')-encoded image. toDataURL gives me the following string:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAXYAAAF1CAYAAAD8ysHLAAAgAElEQVR4XuydCbiOVff/lw...
which is around 43,000 characters long. I send this to PHP via AJAX. Then I tried to follow some answers online on how to decode it, but none worked as expected. The one that came up the most is the following approach:
$img64String = $_POST['file'];
$img64String = str_replace('data:image/png;base64,', '', $img64String);
$img64String = str_replace(' ', '+', $img64String);
$fileData = base64_decode($img64String);
Line 1 gets the original string, lines 2 and 3 remove the prefix (keeping everything after the comma) and then replace spaces with + according to the documentation. However, the 4th line gives me a result which looks something like this:
FileData: �PNG IHDRvu���� IDATx^� ��U���eʔ! !
%��Pр$C*�FC��FD��&�(RRƷdJ�䯼Pʔyʐ�����?�s�9g�������ֺ.W�s�}��|
��߽�����8�?&fF�#�#��5si�#�dka�����f�i�?
���d�1'ݔ'�#�k���Ʉ]7��V&�...
Basically a string with a bunch of nonsense (?) in it. What is going on?
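For what it's worth, that "nonsense" is exactly what correctly decoded PNG bytes look like when printed as text. A sketch of the same strip-and-decode steps in Node (the tiny payload here is just the 8-byte PNG signature, as a stand-in for real image data):

```javascript
// Stand-in payload: the 8-byte PNG file signature "\x89PNG\r\n\x1a\n"
const pngSignature = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
const dataUrl = 'data:image/png;base64,' + pngSignature.toString('base64');

// Same steps as the PHP snippet: strip the prefix, base64-decode the rest
const base64Part = dataUrl.replace('data:image/png;base64,', '');
const fileData = Buffer.from(base64Part, 'base64');

// The result is raw binary; printed as text it looks like garbage,
// but the bytes are correct and can be written straight to a .png file.
console.log(fileData[0].toString(16));        // 89
console.log(fileData.slice(1, 4).toString()); // PNG
```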
If there is of any help, here is the AJAX call:
form_data = new FormData();
form_data.append('file', newImage);
form_data.append('action', 'file_upload');
form_data.append('security', blog.security);
// Send to AJAX
$.ajax({
url: blog.ajaxurl,
type: 'POST',
contentType: false,
processData: false,
data: form_data, ...
What I want
Previously I sent the uploaded file itself, but now I want to resize it first; hence the new approach. Previously I used this on the PHP side, which works fine:
$file_type = $_FILES['file']['type'];
$file_name = $_FILES['file']['name'];
$file_temp_name = $_FILES['file']['tmp_name'];
$allowed_file_types = array('image/png', 'image/jpeg', 'image/jpg');
if (in_array($file_type, $allowed_file_types)) {
$upload = wp_upload_bits($file_name, null, file_get_contents($file_temp_name));
$status = "ok";
}
I want to get my string into something that I can use in a similar way if possible.

How to compress URL parameters

Say I have a single-page application that uses a third party API for content. The app’s logic is in-browser only; there is no backend I can write to.
To allow deep-linking into the state of the app, I use pushState() to keep track of a few variables that determine the state of the app. (Note that Ubersicht’s public version doesn’t do this yet.)
Variables: repos, labels, milestones, username, show_open (bool), with_comments (bool), and without_comments (bool).
URL format: ?label=label_1,label_2,label_3&repos=repo_1….
Values: the usual suspects. Roughly, [a-zA-Z][a-zA-Z0-9_-], or any boolean indicator.
So far so good.
Now, since the query string can be a bit long and unwieldy and I would like to be able to pass around URLs like http://espy.github.io/ubersicht/?state=SOMOPAQUETOKENTHATLOSSLESSLYDECOMPRESSESINTOTHEORIGINALVALUES#hoodiehq, the shorter the better.
My first attempt was going to use some zlib-like algorithm for this. Then #flipzagging pointed to antirez/smaz, which looks more suitable for short strings. (JavaScript version here.)
Since = and & are not specifically handled in the JavaScript version (see line 9 of the main lib file), we might be able to tweak things a little there.
Furthermore, there is an option for encoding the values in a fixed table. With this option, the order of arguments is pre-defined and all we need to keep track of is the actual value. Example: turn a=hamster&b=cat into 7hamster3cat (length+chars) or hamster|cat (value + |), potentially before the smaz compression.
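That fixed-order packing can be sketched in a few lines (the field list and helper names are ours, and values are assumed not to contain |):

```javascript
// Sketch: with a pre-agreed field order, only the values need to travel.
const FIELDS = ['repos', 'labels', 'milestones', 'username'];

function pack(params) {
  return FIELDS.map(k => params[k] ?? '').join('|');
}

function unpack(packed) {
  const values = packed.split('|');
  return Object.fromEntries(FIELDS.map((k, i) => [k, values[i] ?? '']));
}

console.log(pack({ repos: 'repo_1,repo_2', username: 'espy' }));
// "repo_1,repo_2|||espy"
```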
Is there anything else I should be looking for?
A working solution putting various bits of good (or so I think) ideas together
I did this for fun, mainly because it gave me an opportunity to implement a Huffman encoder in PHP, and I could not find a satisfactory existing implementation.
However, this might save you some time if you plan to explore a similar path.
Burrows-Wheeler+move-to-front+Huffman transform
I'm not quite sure BWT would be best suited for your kind of input.
This is not regular text, so recurring patterns would probably not occur as often as in source code or plain English.
Besides, a dynamic Huffman code would have to be passed along with the encoded data, which, for very short input strings, would badly harm the compression gain.
I might well be wrong, in which case I would gladly see someone prove it.
Anyway, I decided to try another approach.
General principle
1) define a structure for your URL parameters and strip the constant part
for instance, starting from:
repos=aaa,bbb,ccc&
labels=ddd,eee,fff&
milestones=ggg,hhh,iii&
username=kkk&
show_open=0&
show_closed=1&
show_commented=1&
show_uncommented=0
extract:
aaa,bbb,ccc|ddd,eee,fff|ggg,hhh,iii|kkk|0110
where , and | act as string and/or field terminators, while boolean values don't need any.
2) define a static repartition of symbols based on the expected average input and derive a static Huffman code
Since transmitting a dynamic table would take more space than your initial string, I think the only way to achieve any compression at all is to have a static Huffman table.
However, you can use the structure of your data to your advantage to compute reasonable probabilities.
You can start with the repartition of letters in English or other languages and throw in a certain percentage of numbers and other punctuation signs.
Testing with a dynamic Huffman coding, I saw compression rates of 30 to 50%.
This means with a static table you can expect maybe a .6 compression factor (reducing the length of your data by about 1/3), not much more.
3) convert this binary Huffman code into something a URI can handle
The 70 regular ASCII 7-bit chars in that list
!'()*-.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz
would give you an expansion factor of about 30%, practically no better than a base64 encode.
A 30% expansion would ruin the gain from a static Huffman compression, so this is hardly an option!
However, since you control the encoding client and server side, you can use about anything that is not a URI-reserved character.
An interesting possibility would be to complete the above set up to 256 with whatever Unicode glyphs, which would allow encoding your binary data with the same number of URI-compliant characters, thus replacing a painful and slow bunch of long integer divisions with a lightning-fast table lookup.
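A sketch of that table-lookup idea (in JS for brevity; the 256 Latin Extended code points used here are placeholders for a carefully chosen glyph set):

```javascript
// Sketch: map each byte to one glyph from a 256-entry table. Latin Extended-A/B
// is used here as a placeholder; a real table should avoid URI-reserved chars.
const TABLE = Array.from({ length: 256 }, (_, i) => String.fromCharCode(0x0100 + i));

const bytesToGlyphs = bytes => Array.from(bytes, b => TABLE[b]).join('');
const glyphsToBytes = s => Uint8Array.from(s, c => c.charCodeAt(0) - 0x0100);

console.log(bytesToGlyphs(new Uint8Array([0, 128, 255]))); // 3 glyphs, no division needed
```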
Structure description
The codec is meant to be used both client and server side, so it is essential that server and clients share a common data structure definition.
Since the interface is likely to evolve, it seems wise to store a version number for upward compatibility.
The interface definition will use a very minimalistic description language, like so:
v 1 // version number (between 0 and 63)
a en // alphabet used (English)
o 10 // 10% of digits and other punctuation characters
f 1 // 1% of uncompressed "foreign" characters
s 15:3 repos // list of 3 expected strings of average length 15
s 10:3 labels
s 8:3 milestones
s 10 username // single string of average length 10
b show_open // boolean value
b show_closed
b show_commented
b show_uncommented
Each language supported will have a frequency table for all its used letters
digits and other computerish symbols like -, . or _ will have a global frequency, regardless of languages
separators (, and |) frequencies will be computed according to the number of lists and fields present in the structure.
All other "foreign" characters will be escaped with a specific code and encoded as plain UTF-8.
Implementation
The bidirectional conversion path is as follows:
list of fields <-> UTF-8 data stream <-> huffman codes <-> URI
Here is the main codec
include ('class.huffman.codec.php');
class IRI_prm_codec
{
// available characters for IRI translation
static private $translator = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐőŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅ";
const VERSION_LEN = 6; // version number between 0 and 63
// ========================================================================
// constructs an encoder
// ========================================================================
public function __construct ($config)
{
$num_record_terminators = 0;
$num_record_separators = 0;
$num_text_sym = 0;
// parse config file
$lines = file($config, FILE_IGNORE_NEW_LINES|FILE_SKIP_EMPTY_LINES);
foreach ($lines as $line)
{
list ($code, $val) = preg_split('/\s+/', $line, 2);
switch ($code)
{
case 'v': $this->version = intval($val); break;
case 'a': $alphabet = $val; break;
case 'o': $percent_others = $val; break;
case 'f': $percent_foreign = $val; break;
case 'b':
$this->type[$val] = 'b';
break;
case 's':
list ($val, $field) = preg_split('/\s+/u', $val, 2);
list ($len,$num) = explode (':', $val);
if (!$num) $num=1;
$this->type[$field] = 's';
$num_record_terminators++;
$num_record_separators+=$num-1;
$num_text_sym += $num*$len;
break;
default: throw new Exception ("Invalid config parameter $code");
}
}
// compute symbol frequencies
$total = $num_record_terminators + $num_record_separators + $num_text_sym + 1;
$num_chars = $num_text_sym * (100-($percent_others+$percent_foreign))/100;
$num_sym = $num_text_sym * $percent_others/100;
$num_foreign = $num_text_sym * $percent_foreign/100;
$this->get_frequencies ($alphabet, $num_chars/$total);
$this->set_frequencies (" .-_0123456789", $num_sym/$total);
$this->set_frequencies ("|", $num_record_terminators/$total);
$this->set_frequencies (",", $num_record_separators/$total);
$this->set_frequencies ("\1", $num_foreign/$total);
$this->set_frequencies ("\0", 1/$total);
// create Huffman codec
$this->huffman = new Huffman_codec();
$this->huffman->make_code ($this->frequency);
}
// ------------------------------------------------------------------------
// grab letter frequencies for a given language
// ------------------------------------------------------------------------
private function get_frequencies ($lang, $coef)
{
$coef /= 100;
$frequs = file("$lang.dat", FILE_IGNORE_NEW_LINES|FILE_SKIP_EMPTY_LINES);
foreach ($frequs as $line)
{
$vals = explode (" ", $line);
$this->frequency[$vals[0]] = floatval ($vals[1]) * $coef;
}
}
// ------------------------------------------------------------------------
// set a given frequency for a group of symbols
// ------------------------------------------------------------------------
private function set_frequencies ($symbols, $coef)
{
$coef /= strlen ($symbols);
for ($i = 0 ; $i != strlen($symbols) ; $i++) $this->frequency[$symbols[$i]] = $coef;
}
// ========================================================================
// encodes a parameter block
// ========================================================================
public function encode($input)
{
// get back input values
$bools = '';
foreach (get_object_vars($input) as $prop => $val)
{
if (!isset ($this->type[$prop])) throw new Exception ("unknown property $prop");
switch ($this->type[$prop])
{
case 'b': $bools .= $val ? '1' : '0'; break;
case 's': $strings[] = $val; break;
default: throw new Exception ("Uh oh... type ".$this->type[$prop]." not handled ?!?");
}
}
// set version number and boolean values in front
$prefix = sprintf ("%0".self::VERSION_LEN."b$bools", $this->version);
// pass strings to our Huffman encoder
$strings = implode ("|", $strings);
$huff = $this->huffman->encode ($strings, $prefix, "UTF-8");
// translate into IRI characters
mb_internal_encoding("UTF-8");
$res = '';
for ($i = 0 ; $i != strlen($huff) ; $i++) $res .= mb_substr (self::$translator, ord($huff[$i]), 1);
// done
return $res;
}
// ========================================================================
// decodes an IRI string into a lambda object
// ========================================================================
public function decode($input)
{
// convert IRI characters to binary
mb_internal_encoding("UTF-8");
$raw = '';
$len = mb_strlen ($input);
for ($i = 0 ; $i != $len ; $i++)
{
$c = mb_substr ($input, 0, 1);
$input = mb_substr ($input, 1);
$raw .= chr(mb_strpos (self::$translator, $c));
}
$this->bin = '';
// check version
$version = $this->read_bits ($raw, self::VERSION_LEN);
if ($version != $this->version) throw new Exception ("Version mismatch: expected {$this->version}, found $version");
// read booleans
$res = new stdClass();
foreach ($this->type as $field => $type)
if ($type == 'b')
$res->$field = $this->read_bits ($raw, 1) != 0;
// decode strings
$strings = explode ('|', $this->huffman->decode ($raw, $this->bin));
$i = 0;
foreach ($this->type as $field => $type)
if ($type == 's')
$res->$field = $strings[$i++];
// done
return $res;
}
// ------------------------------------------------------------------------
// reads raw bit blocks from a binary string
// ------------------------------------------------------------------------
private function read_bits (&$raw, $len)
{
while (strlen($this->bin) < $len)
{
if ($raw == '') throw new Exception ("premature end of input");
$this->bin .= sprintf ("%08b", ord($raw[0]));
$raw = substr($raw, 1);
}
$res = bindec (substr($this->bin, 0, $len));
$this->bin = substr ($this->bin, $len);
return $res;
}
}
The underlying Huffman codec
include ('class.huffman.dict.php');
class Huffman_codec
{
public $dict = null;
// ========================================================================
// encodes a string in a given string encoding (default: UTF-8)
// ========================================================================
public function encode($input, $prefix='', $encoding="UTF-8")
{
mb_internal_encoding($encoding);
$bin = $prefix;
$res = '';
$input .= "\0";
$len = mb_strlen ($input);
while ($len--)
{
// get next input character
$c = mb_substr ($input, 0, 1);
$input = substr($input, strlen($c)); // avoid playing Schlemiel the painter
// check for foreign characters
if (isset($this->dict->code[$c]))
{
// output huffman code
$bin .= $this->dict->code[$c];
}
else // foreign character
{
// escape sequence
$lc = strlen($c);
$bin .= $this->dict->code["\1"]
. sprintf("%02b", $lc-1); // character length (1 to 4)
// output plain character
for ($i=0 ; $i != $lc ; $i++) $bin .= sprintf("%08b", ord($c[$i]));
}
// convert code to binary
while (strlen($bin) >= 8)
{
$res .= chr(bindec(substr ($bin, 0, 8)));
$bin = substr($bin, 8);
}
}
// output last byte if needed
if (strlen($bin) > 0)
{
$bin .= str_repeat ('0', 8-strlen($bin));
$res .= chr(bindec($bin));
}
// done
return $res;
}
// ========================================================================
// decodes a string (will be in the string encoding used during encoding)
// ========================================================================
public function decode($input, $prefix='')
{
$bin = $prefix;
$res = '';
$len = strlen($input);
for ($i=0 ;;)
{
$c = $this->dict->symbol($bin);
switch ((string)$c)
{
case "\0": // end of input
break 2;
case "\1": // plain character
// get char byte size
if (strlen($bin) < 2)
{
if ($i == $len) throw new Exception ("incomplete escape sequence");
$bin .= sprintf ("%08b", ord($input[$i++]));
}
$lc = 1 + bindec(substr($bin,0,2));
$bin = substr($bin,2);
// get char bytes
while ($lc--)
{
if ($i == $len) throw new Exception ("incomplete escape sequence");
$bin .= sprintf ("%08b", ord($input[$i++]));
$res .= chr(bindec(substr($bin, 0, 8)));
$bin = substr ($bin, 8);
}
break;
case null: // not enough bits to decode further
// get more input
if ($i == $len) throw new Exception ("no end of input mark found");
$bin .= sprintf ("%08b", ord($input[$i++]));
break;
default: // huffman encoded
$res .= $c;
break;
}
}
if (bindec ($bin) != 0) throw new Exception ("trailing bits in input");
return $res;
}
// ========================================================================
// builds a huffman code from an input string or frequency table
// ========================================================================
public function make_code ($input, $encoding="UTF-8")
{
if (is_string ($input))
{
// make dynamic table from the input message
mb_internal_encoding($encoding);
$frequency = array();
while ($input != '')
{
$c = mb_substr ($input, 0, 1);
$input = mb_substr ($input, 1);
if (isset ($frequency[$c])) $frequency[$c]++; else $frequency[$c]=1;
}
$this->dict = new Huffman_dict ($frequency);
}
else // assume $input is an array of symbol-indexed frequencies
{
$this->dict = new Huffman_dict ($input);
}
}
}
And the huffman dictionary
class Huffman_dict
{
public $code = array();
// ========================================================================
// constructs a dictionary from an array of frequencies indexed by symbols
// ========================================================================
public function __construct ($frequency = array())
{
// add terminator and escape symbols
if (!isset ($frequency["\0"])) $frequency["\0"] = 1e-100;
if (!isset ($frequency["\1"])) $frequency["\1"] = 1e-100;
// sort symbols by increasing frequencies
asort ($frequency);
// create an initial array of (frequency, symbol) pairs
foreach ($frequency as $symbol => $frequence) $occurences[] = array ($frequence, $symbol);
while (count($occurences) > 1)
{
$leaf1 = array_shift($occurences);
$leaf2 = array_shift($occurences);
$occurences[] = array($leaf1[0] + $leaf2[0], array($leaf1, $leaf2));
sort($occurences);
}
$this->tree = $this->build($occurences[0], '');
}
// -----------------------------------------------------------
// recursive build of lookup tree and symbol[code] table
// -----------------------------------------------------------
private function build ($node, $prefix)
{
if (is_array($node[1]))
{
return array (
'0' => $this->build ($node[1][0], $prefix.'0'),
'1' => $this->build ($node[1][1], $prefix.'1'));
}
else
{
$this->code[$node[1]] = $prefix;
return $node[1];
}
}
// ===========================================================
// extracts a symbol from a code stream
// if found : updates code stream and returns symbol
// if not found : returns null and leave stream intact
// ===========================================================
public function symbol(&$code_stream)
{
list ($symbol, $code) = $this->get_symbol ($this->tree, $code_stream);
if ($symbol !== null) $code_stream = $code;
return $symbol;
}
// -----------------------------------------------------------
// recursive search for a symbol from an huffman code
// -----------------------------------------------------------
private function get_symbol ($node, $code)
{
if (is_array($node))
{
if ($code == '') return null;
return $this->get_symbol ($node[$code[0]], substr($code, 1));
}
return array ($node, $code);
}
}
Example
include ('class.iriprm.codec.php');
$iri = new IRI_prm_codec ("config.txt");
$iri_prm = new stdClass();
foreach (array (
'repos' => "discussion,documentation,hoodie-cli",
'labels' => "enhancement,release-0.3.0,starter",
'milestones' => "1.0.0,1.1.0,v0.7",
'username' => "mklappstuhl",
'show_open' => false,
'show_closed' => true,
'show_commented' => true,
'show_uncommented' => false
) as $prop => $val) $iri_prm->$prop = $val;
$encoded = $iri->encode ($iri_prm);
echo "encoded as $encoded\n";
$decoded = $iri->decode ($encoded);
var_dump($decoded);
output:
encoded as 5ĶůťÊĕCOĔƀŪļŤłmĄZEÇŽÉįóšüÿjħũÅìÇēOĪäŖÏŅíŻÉĒQmìFOyäŖĞqæŠŹōÍĘÆŤŅËĦ
object(stdClass)#7 (8) {
["show_open"]=>
bool(false)
["show_closed"]=>
bool(true)
["show_commented"]=>
bool(true)
["show_uncommented"]=>
bool(false)
["repos"]=>
string(35) "discussion,documentation,hoodie-cli"
["labels"]=>
string(33) "enhancement,release-0.3.0,starter"
["milestones"]=>
string(16) "1.0.0,1.1.0,v0.7"
["username"]=>
string(11) "mklappstuhl"
}
In that example, the input got packed into 64 Unicode characters, for an input length of about 100, yielding a 1/3 reduction.
An equivalent string:
discussion,documentation,hoodie-cli|enhancement,release-0.3.0,starter|
1.0.0,1.1.0,v0.7|mklappstuhl|0110
Would be compressed by a dynamic Huffman table to 59 characters. Not much of a difference.
No doubt smart data reordering would reduce that, but then you would need to pass the dynamic table along...
Chinese to the rescue?
Drawing on ttepasse's idea, one could take advantage of the huge number of Asian characters and find a range of 0x1000 (12-bit) contiguous values, to code 3 bytes into 2 CJK characters, like so:
// translate into IRI characters
$res = '';
$len = strlen ($huff);
for ($i = 0 ; $i != $len ; $i++)
{
$byte = ord($huff[$i]);
$quartet[2*$i ] = $byte >> 4;
$quartet[2*$i+1] = $byte &0xF;
}
$len *= 2;
while ($len%3 != 0) $quartet[$len++] = 0;
$len /= 3;
for ($i = 0 ; $i != $len ; $i++)
{
$utf16 = 0x4E00 // CJK page base, enough range for 2**12 (0x1000) values
+ ($quartet[3*$i+0] << 8)
+ ($quartet[3*$i+1] << 4)
+ ($quartet[3*$i+2] << 0);
$c = chr ($utf16 >> 8) . chr ($utf16 & 0xFF);
$res .= $c;
}
$res = mb_convert_encoding ($res, "UTF-8", "UTF-16");
and back:
// convert IRI characters to binary
$input = mb_convert_encoding ($input, "UTF-16", "UTF-8");
$len = strlen ($input)/2;
for ($i = 0 ; $i != $len ; $i++)
{
$val = (ord($input[2*$i ]) << 8) + ord ($input[2*$i+1]) - 0x4E00;
$quartet[3*$i+0] = ($val >> 8) &0xF;
$quartet[3*$i+1] = ($val >> 4) &0xF;
$quartet[3*$i+2] = ($val >> 0) &0xF;
}
$len *= 3;
while ($len %2) $quartet[$len++] = 0;
$len /= 2;
$raw = '';
for ($i = 0 ; $i != $len ; $i++)
{
$raw .= chr (($quartet[2*$i+0] << 4) + $quartet[2*$i+1]);
}
The previous output of 64 Latin chars
5ĶůťÊĕCOĔƀŪļŤłmĄZEÇŽÉįóšüÿjħũÅìÇēOĪäŖÏŅíŻÉĒQmìFOyäŖĞqæŠŹōÍĘÆŤŅËĦ
would "shrink" to 42 Asian characters:
乙堽孴峴勀垧壩坸冫嚘佰嫚凲咩俇噱刵巋娜奾埵峼圔奌夑啝啯嶼勲婒婅凋凋伓傊厷侖咥匄冯塱僌
However, as you can see, the sheer bulk of your average ideogram makes the string actually longer (pixel-wise), so even if the idea was promising, the outcome is rather disappointing.
Picking thinner glyphs
On the other hand, you can try to pick "thin" characters as a base for URI encoding. For instance:
█ᑊᵄ′ӏᶟⱦᵋᵎiïᵃᶾ᛬ţᶫꞌᶩ᠇܂اlᶨᶾᛁ⁚ᵉʇȋʇίן᠙ۃῗᥣᵋĭꞌ៲ᛧ༚ƫܙ۔ˀȷˁʇʹĭ∕ٱ;łᶥյ;ᴶ⁚ĩi⁄ʈ█
instead of
█5ĶůťÊĕCOĔƀŪļŤłmĄZEÇŽÉįóšüÿjħũÅìÇēOĪäŖÏŅíŻÉĒQmìFOyäŖĞqæŠŹōÍĘÆŤŅËĦ█
That will shrink the length by half with proportional fonts, including in a browser address bar.
My best candidate set of 256 "thin" glyphs so far:
᠊།ᑊʲ་༌ᵎᵢᶤᶩᶪᶦᶧˡ ⁄∕เ'Ꞌꞌ꡶ᶥᵗᶵᶨ|¦ǀᴵ  ᐧᶠᶡ༴ˢᶳ⁏ᶴʳʴʵ։᛬⍮ʹ′ ⁚⁝ᵣ⍘༔⍿ᠵᥣᵋᵌᶟᴶǂˀˁˤ༑,.   ∙Ɩ៲᠙ᵉᵊᵓᶜᶝₑₔյⵏⵑ༝༎՛ᵞᵧᚽᛁᛂᛌᛍᛙᛧᶢᶾ৷⍳ɩΐίιϊᵼἰἱἲἳἴἵἶἷὶίῐῑῒΐῖῗ⎰⎱᠆ᶿ՝ᵟᶫᵃᵄᶻᶼₐ∫ª౹᠔/:;\ijltìíîïĩīĭįıĵĺļłţŧſƚƫƭǐǰȉȋțȴȷɉɨɪɫɬɭʇʈʝːˑ˸;·ϳіїјӏ᠇ᴉᵵᵻᶅᶖḭḯḷḹḻḽṫṭṯṱẗẛỉị⁞⎺⎻⎼⎽ⱡⱦ꞉༈ǁ‖༅༚ᵑᵝᵡᵦᵪา᠑⫶ᶞᚁᚆᚋᚐᚕᵒᵔᵕᶱₒⵗˣₓᶹๅʶˠ᛫ᵛᵥᶺᴊ
Conclusion
This implementation should be ported to JavaScript to allow client-server exchange.
You should also provide a way to share the structure and Huffman codes with the clients.
It is not difficult and rather fun to do, but that means even more work :).
The Huffman gain in term of characters is around 30%.
Of course these characters are multibyte for the most part, but if you aim for the shortest URI it does not matter.
Except for the booleans that can easily be packed to 1 bit, those pesky strings seem rather reluctant to be compressed.
It might be possible to better tune the frequencies, but I doubt you will get above 50% compression rate.
On the other hand, picking thin glyphs does actually more to shrink the string.
So all in all the combination of both might indeed achieve something, though it's a lot of work for a modest result.
Just as you yourself propose, I would first get rid of all the characters that are not carrying any information, because they are part of the "format".
E.g. turn "labels=open,ssl,cypher&repository=275643&username=ryanbrg&milestones=&with_comment=yes" to
"open,ssl,cyper|275643|ryanbrg||yes".
Then use a Huffman encoding with a fixed probability vector (resulting in a fixed mapping from characters to variable-length bitstrings - with the most probable characters mapped to shorter bitstrings and less probable characters mapped to longer ones).
You could even use different probability vectors for different parameters. For example, in the "labels" parameter the alphabetic characters will have high probability, but in the "repository" parameter the numeric characters will have the highest probability. If you do this, you should consider the separator "|" part of the preceding parameter.
And finally turn the long bitstring (which is the concatenation of all the bitstrings to which the characters were mapped) into something you can put into a URL by base64url-encoding it.
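The base64url step is simple enough to sketch (in Node; recent Node versions also support buf.toString('base64url') directly):

```javascript
// Sketch: base64url is base64 with '+' -> '-', '/' -> '_' and padding
// stripped, so the result is safe in a query string without percent-encoding.
function base64urlEncode(buf) {
  return buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

console.log(base64urlEncode(Buffer.from([0xfb, 0xff]))); // "-_8"
```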
If you could send me a set of representative parameter lists, I could run them through a Huffman coder to see how well they compress.
The probability vector (or equivalently the mapping from characters to bitstrings) should be encoded as constant arrays into the Javascript function that is sent to the browser.
Of course you could go even further and - for example - try to get a list of possible labels with their probabilities. Then you could map entire labels to bitstrings with a Huffman encoding. This will give you better compression, but you will have extra work for labels that are new (e.g. falling back to the single-character encoding), and of course the mapping (which - as mentioned above - is a constant array in the JavaScript function) will be much larger.
Why not use protocol buffers?
Protocol buffers are a flexible, efficient, automated mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages. You can even update your data structure without breaking deployed programs that are compiled against the "old" format.
ProtoBuf.js converts objects to protocol buffer messages and vice versa.
The following object converts to: CgFhCgFiCgFjEgFkEgFlEgFmGgFnGgFoGgFpIgNqZ2I=
{
repos : ['a', 'b', 'c'],
labels: ['d', 'e', 'f'],
milestones : ['g', 'h', 'i'],
username : 'jgb'
}
Example
The following example is built using require.js. Give it a try on this jsfiddle.
require.config({
paths : {
'Math/Long' : '//rawgithub.com/dcodeIO/Long.js/master/Long.min',
'ByteBuffer' : '//rawgithub.com/dcodeIO/ByteBuffer.js/master/ByteBuffer.min',
'ProtoBuf' : '//rawgithub.com/dcodeIO/ProtoBuf.js/master/ProtoBuf.min'
}
})
require(['message'], function(message) {
var data = {
repos : ['a', 'b', 'c'],
labels: ['d', 'e', 'f'],
milestones : ['g', 'h', 'i'],
username : 'jgb'
}
var request = new message.arguments(data);
// Convert request data to base64
var base64String = request.toBase64();
console.log(base64String);
// Convert base64 back
var decodedRequest = message.arguments.decode64(base64String);
console.log(decodedRequest);
});
// Protobuf message definition
// Message definition could also be stored in a .proto definition file
// See: https://github.com/dcodeIO/ProtoBuf.js/wiki
define('message', ['ProtoBuf'], function(ProtoBuf) {
var proto = {
package : 'message',
messages : [
{
name : 'arguments',
fields : [
{
rule : 'repeated',
type : 'string',
name : 'repos',
id : 1
},
{
rule : 'repeated',
type : 'string',
name : 'labels',
id : 2
},
{
rule : 'repeated',
type : 'string',
name : 'milestones',
id : 3
},
{
rule : 'required',
type : 'string',
name : 'username',
id : 4
},
{
rule : 'optional',
type : 'bool',
name : 'with_comments',
id : 5
},
{
rule : 'optional',
type : 'bool',
name : 'without_comments',
id : 6
}
],
}
]
};
return ProtoBuf.loadJson(proto).build('message')
});
I have a cunning plan! (And a gin and tonic.)
You don't seem to care about the length of the byte stream but about the length of the resulting glyphs, i.e. the string that is displayed to the user.
Browsers are pretty good at converting an IRI to the underlying URI while still displaying the IRI in the address bar. IRIs have a greater repertoire of possible characters, while your set of possible chars is rather limited.
That means you can encode bigrams of your chars (aa, ab, ac, …, zz & special chars) into one char of the full Unicode spectrum. Say you've got 80 possible ASCII chars: the number of possible combinations of two chars is 6400, which are easily found among Unicode's assigned chars, e.g. in the Han unified CJK range:
aa → 一
ab → 丁
ac → 丂
ad → 七
…
I picked CJK because this is only (slightly) reasonable if the target chars are assigned in Unicode and have glyphs on the major browsers and operating systems. For that reason the Private Use Area is out, and so is the more efficient trigram version (whose possible combinations could use all of Unicode's 1,114,112 possible code points).
To recap: the underlying bytes are still there and – given UTF-8 encoding – possibly even longer, but the string of displayed characters the user sees and copies is 50% shorter.
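The bigram mapping can be sketched in a few lines. The alphabet below and the even-length requirement are my own illustrative assumptions; U+4E00 is 一, which matches the aa → 一 table above:

```javascript
// Sketch: encode bigrams of a fixed alphabet as single CJK chars and back.
// The alphabet is an illustrative assumption; any fixed set of chars works.
const ALPHABET = 'abcdefghijklmnopqrstuvwxyz0123456789-_.~';
const BASE = 0x4E00; // first CJK Unified Ideograph, 一 (so 'aa' maps to 一)

function encodeBigrams(str) {
  if (str.length % 2) throw new Error('even length required');
  let out = '';
  for (let k = 0; k < str.length; k += 2) {
    const i = ALPHABET.indexOf(str[k]);
    const j = ALPHABET.indexOf(str[k + 1]);
    if (i < 0 || j < 0) throw new Error('char outside alphabet');
    out += String.fromCharCode(BASE + i * ALPHABET.length + j);
  }
  return out; // half as many displayed characters
}

function decodeBigrams(str) {
  let out = '';
  for (const ch of str) {
    const n = ch.charCodeAt(0) - BASE;
    out += ALPHABET[Math.floor(n / ALPHABET.length)] + ALPHABET[n % ALPHABET.length];
  }
  return out;
}
```

With this mapping, encodeBigrams('ab') yields 丁, as in the table.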
Ok, reasons why this solution is insane:
IRIs are not perfect. Many tools less capable than modern browsers have problems with them.
The algorithm obviously needs a lot more work. You'll need a function that maps the bigrams to the target chars and back, and it should preferably work arithmetically, to avoid big hash tables in memory.
The target chars should be checked: are they assigned, and are they simple chars rather than fancy Unicodian things like combining chars or stuff that gets lost somewhere in Unicode normalization? Also, is the target area a continuous span of assigned chars with glyphs?
Browsers are sometimes wary of IRIs, for good reason, given the IDN homograph attacks. Are they OK with all these non-ASCII chars in their address bar?
And the biggest: people are notoriously bad at remembering characters in scripts they don't know, and even worse at trying to (re-)type those chars. And copy'n'paste can go wrong in many different ways. There is a reason URL shorteners use Base64 and even smaller alphabets.
… speaking of which: that would be my solution. Offload the work of shortening links either to the user, or integrate goo.gl or bit.ly via their APIs.
Small tip: Both parseInt and Number#toString support radix arguments. Try using a radix of 36 to encode numbers (or indexes into lists) in URLs.
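For example, a round-trip through base 36 (digits plus a–z) shaves characters off any numeric ID:

```javascript
// Base-36 (0-9a-z) round-trip for numeric IDs or list indexes.
const id = 123456;
const short = id.toString(36);    // "2n9c" — 4 chars instead of 6
const back = parseInt(short, 36); // 123456 again
```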
Update: I released an NPM package with some more optimizations, see https://www.npmjs.com/package/#yaska-eu/jsurl2
Some more tips:
Base64 encodes with a..zA..Z0..9+/=, and un-encoded URI characters are a..zA..Z0..9-_.~. So Base64 results only need to swap +/= for -_. and it won't expand URIs.
You could keep an array of key names, so that objects could be represented with the first character being the offset in the array, e.g. {foo:3,bar:{g:'hi'}} becomes a3,b{c'hi'} given key array ['foo','bar','g']
Interesting libraries:
JSUrl specifically encodes JSON so it can be put in a URL without changes, even though it uses more characters than specified in the RFC. {"name":"John Doe","age":42,"children":["Mary","Bill"]} becomes ~(name~'John*20Doe~age~42~children~(~'Mary~'Bill)) and with a key dictionary ['name','age','children'] that could be ~(0~'John*20Doe~1~42~2~(~'Mary~'Bill)), thus going from 101 bytes URI encoded to 38.
Small footprint, fast, reasonable compression.
lz-string uses an LZW-based algorithm to compress strings to UTF16 for storing in localStorage. It also has a compressToEncodedURIComponent() function to produce URI-safe output.
Still only a few KB of code, pretty fast, good/great compression.
So basically I'd recommend picking one of these two libraries and consider the problem solved.
There are two main aspects to the problem: encoding and compression.
General-purpose compression doesn't seem to work well on small strings. And since browsers don't provide any API to compress strings, you would also need to load the compressor's source, which can be huge.
But a lot of characters can be saved by using an efficient encoding. I have written a library named μ to handle the encoding and decoding part.
The idea is to specify as much as information available about the structure and domain of the URL parameters as a specification. This specification can be then used to drive the encoding and decoding. For example:
booleans can be encoded using just one bit;
integers can be converted to base64 (thereby reducing the number of characters required);
object keys need not be encoded (because they can be inferred from the specification);
enums can be encoded using log2(numberOfAllowedValues) bits.
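A hand-rolled illustration of the same idea (the field layout below is my assumption, not μ's actual wire format):

```javascript
// Pack two booleans (1 bit each) and a 4-value enum (log2(4) = 2 bits)
// into one small integer, then render it compactly in base 36.
function pack(withComments, closed, state /* 0..3 */) {
  return (withComments ? 1 : 0) | ((closed ? 1 : 0) << 1) | (state << 2);
}
function unpack(n) {
  return { withComments: !!(n & 1), closed: !!(n & 2), state: n >> 2 };
}

const token = pack(true, false, 3).toString(36); // a single URL character
const fields = unpack(parseInt(token, 36));
```

Specifying the structure up front is what lets the decoder recover the field names: they never appear in the URL at all.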
Perhaps you can find a url shortener with a jsonp API, that way you could make all the URLs really short automatically.
http://yourls.org/ even has jsonp support.
It looks like the GitHub APIs use numeric IDs for many things under the covers (repos and users have them, but labels don't). It might be possible to use those numbers instead of names wherever advantageous. You then have to figure out how best to encode those in something that'll survive in a query string, e.g. a URL-safe base64.
For example, your hoodie.js repository has ID 4780572.
Packing that into a big-endian unsigned int (as many bytes as we need) gets us \x00H\xf2\x1c.
We'll just toss the leading zero, we can always restore that later, now we have H\xf2\x1c.
Encode as URL-safe base64, and you have SPIc (toss any padding you might get).
Going from hoodiehq/hoodie.js to SPIc seems like a good-sized win!
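That walk-through can be reproduced in node (the ID and the SPIc result come from the text above; Buffer is just one way to do the byte packing):

```javascript
// hoodie.js repo ID -> big-endian bytes -> drop leading zeros -> base64.
const id = 4780572;                    // 0x0048F21C
const buf = Buffer.alloc(4);
buf.writeUInt32BE(id, 0);              // <Buffer 00 48 f2 1c>
let start = 0;
while (start < buf.length - 1 && buf[start] === 0) start++; // toss leading zeros
const packed = buf.slice(start);       // <Buffer 48 f2 1c>
const b64 = packed.toString('base64')  // "SPIc" — 3 bytes, so no padding
  .replace(/\+/g, '-').replace(/\//g, '_'); // URL-safe alphabet
```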
More generally, if you're willing to invest the time, you can try to exploit a bunch of redundancies in your query strings. Other ideas are along the lines of packing the two boolean params into a single character, possibly along with other state (like which fields are included). If you use base64 encoding (which seems the best option here thanks to its URL-safe variant – I looked at base85, but it has a bunch of characters that won't survive in a URL), that gets you 6 bits of entropy per character… there's a lot you can do with that.
To add to Thomas Fuchs' note: yes, if there's some kind of inherent, immutable ordering in some of the things you're encoding, then that would obviously also help. However, that seems hard for both the labels and the milestones.
Maybe a simple JS minifier will help you. You'd only need to integrate it at the serialization and deserialization points. I think it'd be the easiest solution.
Why not use a third party link shortener?
(I am assuming you don't have a problem with URI length limits since you mentioned this is an existing application.)
It looks like you're writing a Greasemonkey script or thereabouts, so perhaps you have access to GM_xmlhttpRequest(), which would allow use of a third party link shortener.
Otherwise, you'd need to use XMLHttpRequest() and host your own link shortening service on the same server to avoid crossing the same-origin policy boundary. A quick online search for hosting your own shorteners supplied me with a list of 7 free/open source PHP link shortener scripts and one more on GitHub, though the question likely excludes this kind of approach since "The app’s logic is in-browser only, and there is no backend I can write to."
You can see example code implementing this kind of thing in the URL Shortener UserScript (for Greasemonkey), which pops up a shortened version of the current page's URL when you press SHIFT+T.
Of course, shorteners will redirect users to the long form URL, but this would be a problem in any non-server-side solution. At least a shortener can theoretically proxy (like Apache's RewriteRule with [P]) or use a <frame> tag.
Short
Use a URL packing scheme such as my own, starting only from the params section of your URL.
Longer
As others here have pointed out, typical compression systems don't work for short strings. But it's important to recognise that URLs and params are a serialization format for a data model: a human-readable text format with specific sections – we know that the scheme comes first, the host is found directly after, the port is implied but can be overridden, etc.
With the underlying conceptual data model in hand, one can serialize with a more bit-efficient scheme. In fact, I have created such a serialization myself, which achieves around 50% compression: see http://blog.alivate.com.au/packed-url/
My scheme was written with the conceptual data model in mind, but it doesn't deserialize the URL into that model as a distinct step. That's possible, though, and such a formal approach might yield greater efficiencies, since the bits wouldn't need to be in the same order as in the string URL.

ajax returns empty string instead of json [python cgi]

Basically, I have a CGI script that prints out valid JSON. I have checked, and I have similar functions that work the same way, but this one doesn't for some reason and I can't find why.
Javascript:
function updateChat(){
$.ajax({
type: "get",
url: "cgi-bin/main.py",
data: {'ajax':'1', 'chat':'1'},
datatype:"html",
async: false,
success: function(response) {
alert(response); //Returns an empty string
},
error:function(xhr,err)
{
alert("Error connecting to server, please contact system administator.");
}
});
}
Here is the JSON that python prints out:
[
"jon: Hi.",
"bob: Hello."
]
I used json.dumps to create the JSON it worked in previous functions that have pretty much the same JSON layout only different content.
There is a whole bunch more of server code, I tried to copy out the relevant parts. Basically I'm just trying to filter an ugly chat log for learning purposes. I filter it with regex and then create a json out of it.
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
print "Content-type: text/html\n\n"
print
import cgi, sys, cgitb, datetime, re, time, random, json
cgitb.enable()
formdata = cgi.FieldStorage()
def tail( f, window=20 ):
BUFSIZ = 1024
f.seek(0, 2)
bytes = f.tell()
size = window
block = -1
data = []
while size > 0 and bytes > 0:
if (bytes - BUFSIZ > 0):
# Seek back one whole BUFSIZ
f.seek(block*BUFSIZ, 2)
# read BUFFER
data.append(f.read(BUFSIZ))
else:
# file too small, start from beginning
f.seek(0,0)
# only read what was not read
data.append(f.read(bytes))
linesFound = data[-1].count('\n')
size -= linesFound
bytes -= BUFSIZ
block -= 1
return '\n'.join(''.join(data).splitlines()[-window:])
def updateChatBox():
try:
f = open('test.txt', 'r')
lines = tail(f, window = 20)
chat_array = lines.split("\n")
f.close()
except:
print "Failed to access data"
sys.exit(4)
i = 0
while i < len(chat_array):
#remove timer
time = re.search("(\[).*(\])", chat_array[i])
result_time = time.group()
chat_array[i] = chat_array[i].replace(result_time, "")
#Removes braces around user
user = re.search("(\\().*?(_)", chat_array[i])
result_user = user.group()
chat_array[i] = chat_array[i].replace("(", "")
chat_array[i] = chat_array[i].replace(")", "")
#Removes underscore and message end marker
message = re.search("(_).*?(\|)", chat_array[i])
result_message = message.group()
chat_array[i] = chat_array[i].replace("_", ":")
chat_array[i] = chat_array[i].replace("|", "")
data += chat_array[i] + "\n"
i = i + 1
data_array = data.split("\n")
json_string = json.dumps(data_array)
print json_string
if formdata.has_key("ajax"):
ajax = formdata["ajax"].value
if ajax == "1": #ajax happens
if formdata.has_key("chat"):
chat = formdata["chat"].value
if chat == 1:
updateChatBox()
else:
print "ERROR"
elif formdata.has_key("get_all_stats"):
get_all_stats = formdata["get_all_stats"].value
if get_all_stats == "1":
getTopScores()
else:
print "ERROR"
Here is also a function that works perfectly and is in the same python file
def getTopScores():
try:
f = open('test_stats.txt', 'r')
stats = f.read()
stats_list = stats.split("\n")
f.close()
except:
print "Failed reading file"
sys.exit(4)
json_string = json.dumps(stats_list)
print json_string
The only difference is using the tail function and regex, the end result JSON actually looks identical.
Are you certain that updateChatBox is even getting called? Note that you compare ajax to the string "1" but you compare chat to the integer 1. I bet one of those doesn't match (in particular the chat one). If that doesn't match, your script will fall through without ever returning a value.
Also, though it isn't the root cause, you should clean up your content types for correctness. Your Javascript AJAX call is declared as expecting html in response, and your cgi script is also set to return content-type:text/html. These should be changed to json and content-type:application/json, respectively.

How do I pack an unsigned int as a binary string in nodejs?

I wonder if anyone can help me. I'm new to nodejs and I've been trying to send a message to a server using nodejs as the client. The server is written in C; looking at a PHP installation, it uses pack('N', len) to send the length of the string to the server. I have tried to implement a similar thing in JavaScript but am hitting some problems. I wonder if you can point out where I am going wrong (credit to phpjs, from which I copied the string-packing code).
My client nodejs javascript code is:
var net = require('net');
var strs = Array("<test1>1</test1>",
"<test_2>A STRING</test_2>", "<test_3>0</test_3>",
"<test_4></test_4>", "<test_5></test_5>",
"<test_6></test_6>", "<test_7_></test_7>",
"<test_8></test_8>", "<test_9>10</test_9>",
"<test_10></test_10>", "<test_11></test11>",
"<test_12></test_12>", "<test_13></test_13>",
"<test_14></test_14>");
hmsg = strs[0] + strs[1] + strs[2] + strs[3] + strs[4] + strs[5] + strs[6];
console.log(hmsg.length);
msg = hmsg + "<test_20></test_20>";
msglen = hmsg.length;
astr = '';
astr += String.fromCharCode((msglen >>> 24) && 0xFF);
astr += String.fromCharCode((msglen >>> 16) && 0xFF);
astr += String.fromCharCode((msglen >>> 8) & 0xFF);
astr += String.fromCharCode((msglen >>> 0) & 0xFF);
var pmsg = astr + msg;
console.log(pmsg);
var client = net.createConnection({host: 'localhost', port: 1250});
console.log("client connected");
client.write(pmsg);
client.end();
Running 'node testApp' prints out the correct length of the header string. If I look at what the server is receiving I can see that as long as the header string is < 110 chars it decodes the correct length, but if the header string is > 110 (by adding strs[6] or more to the hmsg) the decoded length is incorrect. Including strs[6] I get a string of length 128
on the client side and 194 on the server side.
I am clearly doing something wrong in packing the integer, but I'm not familiar with packing bits and am not sure where I am going wrong. Can anyone point out where my error is?
Many thanks!
Update
Thanks to Fedor Indutny on the nodejs mailing list the following worked for me:
console.log(hmsg.length, msg.length);
var msglen = hmsg.length;
var buf = new Buffer(msg.length+4);
mslen = buf.writeUInt32BE(msglen, 0);
mslen = buf.write(msg, 4);
var client = net.createConnection({host: 'localhost', port: 8190});
console.log("client connected");
client.write(buf);
client.end();
I.e. using the Buffer's writeUInt32 was all that was needed for the length of the header message. I'm posting here in the hope it may help someone else.
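For what it's worth, the 194 the server saw is consistent with UTF-8 re-encoding (my reading, not spelled out in the thread): client.write(string) encodes a string as UTF-8 by default, so a char code of 128 in the hand-built header became the two bytes 0xC2 0x80, and 0xC2 is 194. A quick check:

```javascript
// A JS string char with code >= 128 becomes two bytes under 'utf8' encoding,
// which is what client.write(string) uses by default.
const oneChar = String.fromCharCode(128);
const bytes = Buffer.from(oneChar, 'utf8');
console.log(bytes.length, bytes[0]); // 2 194  (0xC2 0x80)
// Writing the length through a Buffer with writeUInt32BE sidesteps
// string encoding entirely, which is why the fix above works.
```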
I see that you managed to get it working on your own, but for a much easier fix you could just use the binary module.
https://github.com/substack/node-binary
Install with npm install binary

how do I access XHR responseBody (for binary data) from Javascript in IE?

I've got a web page that uses XMLHttpRequest to download a binary resource.
In Firefox and Gecko I can use responseText to get the bytes, even if the bytestream includes binary zeroes. I may need to coerce the mimetype with overrideMimeType() to make that happen. In IE, though, responseText doesn't work, because it appears to terminate at the first zero. If you read 100,000 bytes, and byte 7 is a binary zero, you will be able to access only 7 bytes. IE's XMLHttpRequest exposes a responseBody property to access the bytes. I've seen a few posts suggesting that it's impossible to access this property in any meaningful way directly from Javascript. This sounds crazy to me.
xhr.responseBody is accessible from VBScript, so the obvious workaround is to define a method in VBScript in the webpage, and then call that method from Javascript. See jsdap for one example. EDIT: DO NOT USE THIS VBScript!!
var IE_HACK = (/msie/i.test(navigator.userAgent) &&
!/opera/i.test(navigator.userAgent));
// no no no! Don't do this!
if (IE_HACK) document.write('<script type="text/vbscript">\n\
Function BinaryToArray(Binary)\n\
Dim i\n\
ReDim byteArray(LenB(Binary))\n\
For i = 1 To LenB(Binary)\n\
byteArray(i-1) = AscB(MidB(Binary, i, 1))\n\
Next\n\
BinaryToArray = byteArray\n\
End Function\n\
</script>');
var xml = (window.XMLHttpRequest)
? new XMLHttpRequest() // Mozilla/Safari/IE7+
: (window.ActiveXObject)
? new ActiveXObject("MSXML2.XMLHTTP") // IE6
: null; // Commodore 64?
xml.open("GET", url, true);
if (xml.overrideMimeType) {
xml.overrideMimeType('text/plain; charset=x-user-defined');
} else {
xml.setRequestHeader('Accept-Charset', 'x-user-defined');
}
xml.onreadystatechange = function() {
if (xml.readyState == 4) {
if (!binary) {
callback(xml.responseText);
} else if (IE_HACK) {
// call a VBScript method to copy every single byte
callback(BinaryToArray(xml.responseBody).toArray());
} else {
callback(getBuffer(xml.responseText));
}
}
};
xml.send('');
Is this really true? The best way? copying every byte? For a large binary stream that's not going to be very efficient.
There is also a possible technique using ADODB.Stream, which is a COM equivalent of a MemoryStream. See here for an example. It does not require VBScript but does require a separate COM object.
if (typeof (ActiveXObject) != "undefined" && typeof (httpRequest.responseBody) != "undefined") {
// Convert httpRequest.responseBody byte stream to shift_jis encoded string
var stream = new ActiveXObject("ADODB.Stream");
stream.Type = 1; // adTypeBinary
stream.Open ();
stream.Write (httpRequest.responseBody);
stream.Position = 0;
stream.Type = 1; // adTypeBinary;
stream.Read.... /// ???? what here
}
But that's not going to work well - ADODB.Stream is disabled on most machines these days.
In The IE8 developer tools - the IE equivalent of Firebug - I can see the responseBody is an array of bytes and I can even see the bytes themselves. The data is right there. I don't understand why I can't get to it.
Is it possible for me to read it with responseText?
hints? (other than defining a VBScript method)
Yes, the answer I came up with for reading binary data via XHR in IE is to use VBScript injection. This was distasteful to me at first, but I look at it as just one more bit of browser-dependent code.
(The regular XHR and responseText work fine in other browsers; you may have to coerce the mime type with XMLHttpRequest.overrideMimeType(), which isn't available on IE.)
This is how I got a thing that works like responseText in IE, even for binary data.
First, inject some VBScript as a one-time thing, like this:
if(/msie/i.test(navigator.userAgent) && !/opera/i.test(navigator.userAgent)) {
var IEBinaryToArray_ByteStr_Script =
"<!-- IEBinaryToArray_ByteStr -->\r\n"+
"<script type='text/vbscript' language='VBScript'>\r\n"+
"Function IEBinaryToArray_ByteStr(Binary)\r\n"+
" IEBinaryToArray_ByteStr = CStr(Binary)\r\n"+
"End Function\r\n"+
"Function IEBinaryToArray_ByteStr_Last(Binary)\r\n"+
" Dim lastIndex\r\n"+
" lastIndex = LenB(Binary)\r\n"+
" if lastIndex mod 2 Then\r\n"+
" IEBinaryToArray_ByteStr_Last = Chr( AscB( MidB( Binary, lastIndex, 1 ) ) )\r\n"+
" Else\r\n"+
" IEBinaryToArray_ByteStr_Last = "+'""'+"\r\n"+
" End If\r\n"+
"End Function\r\n"+
"</script>\r\n";
// inject VBScript
document.write(IEBinaryToArray_ByteStr_Script);
}
The JS class I'm using that reads binary files exposes a single interesting method, readCharAt(i), which reads the character (a byte, really) at the i'th index. This is how I set it up:
// see doc on http://msdn.microsoft.com/en-us/library/ms535874(VS.85).aspx
function getXMLHttpRequest()
{
if (window.XMLHttpRequest) {
return new window.XMLHttpRequest;
}
else {
try {
return new ActiveXObject("MSXML2.XMLHTTP");
}
catch(ex) {
return null;
}
}
}
// this fn is invoked if IE
function IeBinFileReaderImpl(fileURL){
this.req = getXMLHttpRequest();
this.req.open("GET", fileURL, true);
this.req.setRequestHeader("Accept-Charset", "x-user-defined");
// my helper to convert from responseBody to a "responseText" like thing
var convertResponseBodyToText = function (binary) {
var byteMapping = {};
for ( var i = 0; i < 256; i++ ) {
for ( var j = 0; j < 256; j++ ) {
byteMapping[ String.fromCharCode( i + j * 256 ) ] =
String.fromCharCode(i) + String.fromCharCode(j);
}
}
// call into VBScript utility fns
var rawBytes = IEBinaryToArray_ByteStr(binary);
var lastChr = IEBinaryToArray_ByteStr_Last(binary);
return rawBytes.replace(/[\s\S]/g,
function( match ) { return byteMapping[match]; }) + lastChr;
};
this.req.onreadystatechange = function(event){
if (that.req.readyState == 4) {
that.status = "Status: " + that.req.status;
//that.httpStatus = that.req.status;
if (that.req.status == 200) {
// this doesn't work
//fileContents = that.req.responseBody.toArray();
// this doesn't work
//fileContents = new VBArray(that.req.responseBody).toArray();
// this works...
var fileContents = convertResponseBodyToText(that.req.responseBody);
fileSize = fileContents.length-1;
if(that.fileSize < 0) throwException(_exception.FileLoadFailed);
that.readByteAt = function(i){
return fileContents.charCodeAt(i) & 0xff;
};
}
if (typeof callback == "function"){ callback(that);}
}
};
this.req.send();
}
// this fn is invoked if non IE
function NormalBinFileReaderImpl(fileURL){
this.req = new XMLHttpRequest();
this.req.open('GET', fileURL, true);
this.req.onreadystatechange = function(aEvt) {
if (that.req.readyState == 4) {
if(that.req.status == 200){
var fileContents = that.req.responseText;
fileSize = fileContents.length;
that.readByteAt = function(i){
return fileContents.charCodeAt(i) & 0xff;
}
if (typeof callback == "function"){ callback(that);}
}
else
throwException(_exception.FileLoadFailed);
}
};
//XHR binary charset opt by Marcus Granado 2006 [http://mgran.blogspot.com]
this.req.overrideMimeType('text/plain; charset=x-user-defined');
this.req.send(null);
}
The conversion code was provided by Miskun.
Very fast, works great.
I used this method to read and extract zip files from Javascript, and also in a class that reads and displays EPUB files in Javascript. Very reasonable performance. About half a second for a 500kb file.
XMLHttpRequest.responseBody is a VBArray object containing the raw bytes. You can convert these objects to standard arrays using the toArray() function:
var data = xhr.responseBody.toArray();
I would suggest two other (fast) options:
First, you can use
ADODB.Recordset to convert the byte array into a string. I would guess that this object is more common than ADODB.Stream, which is often disabled for security reasons. This option is VERY fast: less than 30ms for a 500kB file.
Second, if the Recordset component is not accessible, there is a trick to access the byte array data from Javascript. Send your xhr.responseBody to VBScript, pass it through any VBScript string function such as CStr (takes no time), and return it to JS. You will get a weird string with bytes concatenated into 16-bit unicode (in reverse). You can then convert this string quickly into a usable bytestring through a regular expression with dictionary-based replacement. Takes about 1s for 500kB.
For comparison, the byte-by-byte conversion through loops takes several minutes for this same 500kB file, so it's a no-brainer :) Below the code I have been using, to insert into your header. Then call the function ieGetBytes with your xhr.responseBody.
<!--[if IE]>
<script type="text/vbscript">
'Best case scenario when the ADODB.Recordset object exists
'We will do the existence test in Javascript (see after)
'Extremely fast, about 25ms for a 500kB file
Function ieGetBytesADO(byteArray)
Dim recordset
Set recordset = CreateObject("ADODB.Recordset")
With recordset
.Fields.Append "temp", 201, LenB(byteArray)
.Open
.AddNew
.Fields("temp").AppendChunk byteArray
.Update
End With
ieGetBytesADO = recordset("temp")
recordset.Close
Set recordset = Nothing
End Function
'Trick to return a Javascript-readable string from a VBScript byte array
'Yet the string is not usable as such by Javascript, since the bytes
'are merged into 16-bit unicode characters. Last character missing if odd length.
Function ieRawBytes(byteArray)
ieRawBytes = CStr(byteArray)
End Function
'Careful the last character is missing in case of odd file length
'We Will call the ieLastByte function (below) from Javascript
'Cannot merge directly within ieRawBytes as the final byte would be duplicated
Function ieLastChr(byteArray)
Dim lastIndex
lastIndex = LenB(byteArray)
if lastIndex mod 2 Then
ieLastChr = Chr( AscB( MidB( byteArray, lastIndex, 1 ) ) )
Else
ieLastChr = ""
End If
End Function
</script>
<script type="text/javascript">
try {
// best case scenario, the ADODB.Recordset object exists
// we can use the VBScript ieGetBytes function to transform a byte array into a string
var ieRecordset = new ActiveXObject('ADODB.Recordset');
var ieGetBytes = function( byteArray ) {
return ieGetBytesADO(byteArray);
}
ieRecordset = null;
} catch(err) {
// no ADODB.Recordset object, we will do the conversion quickly through a regular expression
// initializes for once and for all the translation dictionary to speed up our regexp replacement function
var ieByteMapping = {};
for ( var i = 0; i < 256; i++ ) {
for ( var j = 0; j < 256; j++ ) {
ieByteMapping[ String.fromCharCode( i + j * 256 ) ] = String.fromCharCode(i) + String.fromCharCode(j);
}
}
// since ADODB is not there, we replace the previous VBScript ieGetBytesADO function with a regExp-based function,
// quite fast, about 1.3 seconds for 500kB (versus several minutes for byte-by-byte loops over the byte array)
var ieGetBytes = function( byteArray ) {
var rawBytes = ieRawBytes(byteArray),
lastChr = ieLastChr(byteArray);
return rawBytes.replace(/[\s\S]/g, function( match ) {
return ieByteMapping[match]; }) + lastChr;
}
}
</script>
<![endif]-->
Thanks so much for this solution. The BinaryToArray() function in VBScript works great for me.
Incidentally, I need the binary data to provide it to an Applet. (Don't ask me why Applets can't be used for downloading binary data. Long story short: weird MS authentication that can't go through applet (URLConn) calls. It's especially weird in cases where users are behind a proxy.)
The Applet needs a byte array from this data, so here's what I do to get it:
String[] results = result.toString().split(",");
byte[] byteResults = new byte[results.length];
for (int i=0; i<results.length; i++){
byteResults[i] = (byte)Integer.parseInt(results[i]);
}
The byte array can then converted into a bytearrayinputstream for further processing.
Thank you for this post.
I found this link useful:
http://www.codingforums.com/javascript-programming/47018-help-using-responsetext-property-microsofts-xmlhttp-activexobject-ie6.html
Specially this part:
</script>
<script language="VBScript">
Function BinaryToString(Binary)
Dim I,S
For I = 1 to LenB(Binary)
S = S & Chr(AscB(MidB(Binary,I,1)))
Next
BinaryToString = S
End Function
</script>
I've added this to my htm page.
Then I call this function from my javascript:
responseText = BinaryToString(xhr.responseBody);
Works on IE8, IE9, IE10, FF & Chrome.
You could also just make a proxy script that fetches the address you're requesting and Base64-encodes it. Then you just have to pass a query string to the proxy script telling it the address. In IE you have to do the Base64 decoding manually in JS, though. But this is a way to go if you don't want to use VBScript.
I used this for my GameBoy Color emulator.
Here is the PHP script that does the magic:
<?php
//Binary Proxy
if (isset($_GET['url'])) {
try {
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, stripslashes($_GET['url']));
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
curl_setopt($curl, CURLOPT_POST, false);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 30);
$result = curl_exec($curl);
curl_close($curl);
if ($result !== false) {
header('Content-Type: text/plain; charset=ASCII');
header('Expires: '.gmdate('D, d M Y H:i:s \G\M\T', time() + (3600 * 24 * 7)));
echo(base64_encode($result));
}
else {
header('HTTP/1.0 404 File Not Found');
}
}
catch (Exception $error) { }
}
?>
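For the "manually do Base64 in JS" part (old IE has no atob()), a minimal decoder could look like this sketch (illustrative, not the emulator's actual code):

```javascript
// Minimal Base64 -> array-of-byte-values decoder for browsers without atob().
var B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
function base64ToBytes(s) {
  var bytes = [], buffer = 0, bits = 0;
  for (var i = 0; i < s.length; i++) {
    var v = B64.indexOf(s.charAt(i));
    if (v < 0) continue;                 // skip '=' padding and whitespace
    buffer = (buffer << 6) | v;          // accumulate 6 bits per Base64 char
    bits += 6;
    if (bits >= 8) {                     // emit a byte once 8 bits are ready
      bits -= 8;
      bytes.push((buffer >> bits) & 0xFF);
    }
  }
  return bytes;
}
// base64ToBytes('aGVsbG8=') -> [104, 101, 108, 108, 111], i.e. "hello"
```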
I was trying to download a file and then sign it using CAPICOM.DLL. The only way I could do it was by injecting a VBScript function that does the download. This is my solution:
if(/msie/i.test(navigator.userAgent) && !/opera/i.test(navigator.userAgent)) {
var VBConteudo_Script =
'<!-- VBConteudo -->\r\n'+
'<script type="text/vbscript">\r\n'+
'Function VBConteudo(url)\r\n'+
' Set objHTTP = CreateObject("MSXML2.XMLHTTP")\r\n'+
' objHTTP.open "GET", url, False\r\n'+
' objHTTP.send\r\n'+
' If objHTTP.Status = 200 Then\r\n'+
' VBConteudo = objHTTP.responseBody\r\n'+
' End If\r\n'+
'End Function\r\n'+
'\<\/script>\r\n';
// inject VBScript
document.write(VBConteudo_Script);
}
