Reading socket connection responses in Node.js - javascript

I'm in the process of converting a PHP function to JS for use with Node.
The PHP function:
- takes in a partially formed packet as an argument
- finalizes the packet
- creates a socket
- sends the packet over the socket
- reads the response code from the server
- looks up the error code in an array
- sends the error code back to the caller
I have most of the function converted except for the bit that reads the server response.
Original PHP Function
(Comments are my understanding of what the code does. May be incorrect)
function serverInteractive($buf) { // $buf = partially formed packet
    $fp = fsockopen($ip, $port, $errno, $errstr, 5);
    $rs = '';
    if (!$fp) return $this->fsockerror;
    $packet = pack("s", (strlen($buf) + 2)) . $buf; // Finalizes the packet
    fwrite($fp, $packet); // Sends the packet to the server.
    // ----- Read Server Response START -----//
    $len = unpack("v", fread($fp, 2));
    $rid = unpack("c", fread($fp, 1));
    for ($i = 0; $i < (($len[1] - 4) / 4); $i++) {
        $read = unpack("i", fread($fp, 4));
        $rs .= $read[1];
    }
    // ----- Read Server Response FINISH -----//
    fclose($fp); // Closes the socket
    $result = $this->socketerrors[$rs];
    // $socketerrors is an array of error messages.
    return($result);
}
My JavaScript Version
var net = require('net');
var submitPacket = function(packet) {
    // Generate final packet
    var p = pack('s', packet.length + 2) + packet;
    // Create socket to server
    var serverSocket = net.createConnection(config.serverConfig.port,
        config.serverConfig.host,
        function() {
            // Executes if the connection is OK.
            console.log("Connected to Server");
            if (serverSocket.write(p, function() {
                console.log("Buffer Flushed!");
            })) {
                console.log("Packet sent ok!");
            } else {
                console.log("There was a problem sending the packet!");
            }
        });
    serverSocket.on('error', function(error) {
        if (error) {
            console.log(error);
        }
    });
    serverSocket.on('data', function(data) {
        if (data) {
            console.log("Response: ", data);
            // Need to put the error code generation
            // here and fire a callback.
            serverSocket.end();
        }
    });
};
The response I get from the server looks something like this when everything is OK:
<Buffer 07 00 06 01 00 00 00>
When it's not OK, it looks something like this:
<Buffer 0b 00 06 00 00 00 00 03 00 00 00>
Any advice would be greatly appreciated.
UPDATE 1: This is what I've come up with so far, however the resulting code is undefined.
serverSocket.on('data', function(data) {
    if (data) {
        var response = toArrayBuffer(data);
        var len = response.slice(0, 2);
        var rid = response.slice(0, 1);
        var rs = '';
        for (var i = 0; i < ((len[1] - 4) / 4); i++) {
            var read = response.slice(0, 4);
            rs += read[1];
        }
        console.log("Code: ", rs);
    }
});
UPDATE 2: The PHP unpack function does indeed convert a buffer of binary data into an array. It looks like I can do the same thing with JSON.stringify then JSON.parse() to get it into an array. I now have an array object of the correct data, but the rest of the function doesn't seem to replicate the original.

I'll give it a try, although you haven't actually said what you want the "Code:" output to look like for the two input strings. We'll start with the differences between the PHP code and the JavaScript code.
Let's talk about these lines from the PHP code:
$len = unpack("v", fread($fp, 2));
$rid = unpack("c", fread($fp, 1));
Now those little fread() function calls are actually reading data from an input stream/file/whatever. The bottom line is that $len gets its value from the first two bytes of the stream and $rid gets its value from the third byte.
Now compare with the JavaScript code:
var len = response.slice(0,2);
var rid = response.slice(0,1);
I have a few observations about this code. First, the calls to slice() are both operating on the same array-like object, starting from the same location. So the value of rid will be wrong. Second, the value for rid is never used in the sample code, so if you never really need it you can eliminate that line entirely (which you could not do in the PHP code). The third observation is that calling slice() seems like overkill. Why not just use the square brackets on response?
One final puzzle about the original PHP code. Looking at the code that builds up $rs:
$rs .= $read[1];
It looks like this is a string concatenation of the successive integer data values. It's been a few years since I worked in PHP, so maybe I'm missing some subtlety, but this seems like a kind of odd thing to do. If you could tell me what the expected output codes are supposed to be it would help.
That brings me to the data itself. Just guessing from the data examples in the PHP code, it looks like the first two bytes are a little-endian encoded 16-bit buffer length. The next byte is the rid value, which is 6 in both examples. Then it looks like the rest of the buffer is composed of 32-bit little-endian values. Of course this is a WAG, but I'm just working with what has been provided.
So here's some code that processes the data values from the two sample buffers. For the first buffer it prints Code: 1 and for the second Code: 03. If these are not the correct values, then let me know what the expected output is and I'll see if I can make it work.
function toArrayBuffer(buffer) {
    var ab = new ArrayBuffer(buffer.length);
    var view = new Uint8Array(ab);
    for (var i = 0; i < buffer.length; ++i) {
        view[i] = buffer[i];
    }
    return ab;
}

function serverSocketOnData(data) {
    if (data) {
        // An ArrayBuffer itself can't be indexed, so wrap it in a Uint8Array view.
        var response = new Uint8Array(toArrayBuffer(data));
        var len = response[0] + 256 * response[1];
        var rid = response[2]; // get rid of this line?
        var rs = '';
        for (var i = 3; i < len; i += 4) {
            rs += response[i];
            // if the 32-bit value could ever exceed 255, then use this instead:
            // rs += response[i] + 256*response[i+1] +
            //       256*256*response[i+2] + 256*256*256*response[i+3];
        }
        console.log("Code: ", rs);
    }
}

function testArray(sourceArray) {
    var sourceBuffer = Buffer.from(sourceArray);
    console.log("source buffer");
    console.log(sourceBuffer);
    serverSocketOnData(sourceBuffer);
}

function main() {
    var sourceArray1 = [0x07, 0x00, 0x06, 0x01, 0x00, 0x00, 0x00];
    var sourceArray2 = [0x0b, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00];
    testArray(sourceArray1);
    testArray(sourceArray2);
}

main();
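A minimal alternative sketch, assuming the same framing guessed above (2-byte little-endian length, one rid byte, then 32-bit little-endian values): Node's Buffer can do the byte math itself, so the manual ArrayBuffer copy isn't strictly needed.

// Hedged sketch using Node's built-in Buffer readers.
function parseResponse(buf) {
    var len = buf.readUInt16LE(0);   // total packet length (first two bytes, LE)
    var rid = buf.readUInt8(2);      // response id byte (unused here)
    var rs = '';
    for (var offset = 3; offset + 4 <= len; offset += 4) {
        rs += buf.readInt32LE(offset); // concatenate values, like the PHP loop
    }
    return rs;
}

// parseResponse(Buffer.from([0x07,0x00,0x06,0x01,0x00,0x00,0x00]))                              -> "1"
// parseResponse(Buffer.from([0x0b,0x00,0x06,0x00,0x00,0x00,0x00,0x03,0x00,0x00,0x00]))          -> "03"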

Are you using slice appropriately?
When you call slice on a string, it returns a string.
var hello = "hello";
var hel = hello.slice(0, 3); // "hel"
var h = hel[0];              // "h" (indexing a string gives a one-character string)
var ello = hello.slice(1);   // "ello"

Related

Change segment text before processing using hls.js

Due to some security concerns I want to add some extra text to the beginning of each .ts file, but when the segment is parsed this causes buffering issues.
To fix this I decided to remove that 'extra' text I added before processing the segment. The issue is that I don't know how to manipulate an ArrayBuffer so I can remove that text, since I am not that knowledgeable in JS.
I tried many things, including downloading the hls.js file directly and editing the readystatechange handler:
// >= HEADERS_RECEIVED
if (readyState >= 2) {
    ....
    if (isArrayBuffer) {
        console.log(xhr.response);
        var ress = xhr.response;
        //console.log(ress.replace('FFmpeg',''));
        var enc = new TextDecoder('ASCII');
        var seg = enc.decode(ress);
        //var binaryArray = new Uint8Array(this.response.slice(0)); // use UInt8Array for binary
        //var blob = new Blob([seg], { type: "video/MP2T" });
        var enc = new TextEncoder(); // always utf-8
        var newww = enc.encode(seg);
        var ddd = newww.buffer;
        console.debug(newww);
        console.debug(newww.buffer);
        //dec = dec.replace('ÿØÿà �JFIF','');
        //xhr.response = Array.from(newww);
        data = ddd;
        len = data.byteLength;
The idea was to convert the ArrayBuffer to a string, remove that text, then convert it back to an ArrayBuffer.
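Assuming the injected marker is a known ASCII string, a minimal sketch (the 'EXTRA_TEXT' name below is only a placeholder for whatever was prepended) can strip it from the ArrayBuffer directly, with no string round-trip at all:

// Hedged sketch: drop a known ASCII prefix from the front of an ArrayBuffer.
function stripPrefix(arrayBuffer, marker) {
    var bytes = new Uint8Array(arrayBuffer);
    for (var i = 0; i < marker.length; i++) {
        if (bytes[i] !== marker.charCodeAt(i)) {
            return arrayBuffer;                  // marker not present, leave data untouched
        }
    }
    return arrayBuffer.slice(marker.length);     // ArrayBuffer.prototype.slice copies the rest
}

// usage inside the readystatechange handler (hypothetical marker):
// data = stripPrefix(xhr.response, 'EXTRA_TEXT');
// len = data.byteLength;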

How to convert Bigtable HBase API returning byte array to integer or float in nodeJS?

I am writing data from Dataflow into Bigtable, and need to retrieve the data from Node.js, but realised that the data comes back as a byte array. How do I convert it back to an integer or float?
The key "\u0000\u0000\u0000\u0000" was originally a 0, but I could never get it to output correctly in my nodeJS code.
I have tried the below methods using Buffer, bin2string, byteArrayToLong, but none of them worked correctly. The following is the code to query data.
async function query(table, start, end) {
    return new Promise((resolve, reject) => {
        table.createReadStream({
            start: start,
            end: end
        }).on('data', function(row) {
            for (var key in row.data.ch) {
                console.log(JSON.stringify(key)); // Output: "\u0000\u0000\u0000\u0000"
                console.log(`bin2string: ${bin2string(key)}`); // Output: bin2string:
                let keybuf = Buffer.from(key);
                console.log(keybuf); // Output: <Buffer 00 00 00 00>
                console.log(keybuf.toString('utf8')); // Output:
                const utf16Buffer = Buffer.from(key, 'utf16le'); // Output: <Buffer 00 00 00 00 00 00 00 00>
                console.log(utf16Buffer);
                console.log(utf16Buffer.toString()); // Output:
                console.log(byteArrayToLong(key)); // Output: NaN
            }
            // Nothing to do with data
            // We can measure the time needed to get the first row
        }).on('end', function() {
            resolve();
        });
    });
}

function bin2string(array) {
    var result = "";
    for (var i = 0; i < array.length; ++i) {
        result += String.fromCharCode(array[i]);
    }
    return result;
}

function byteArrayToLong(byteArray) {
    var value = 0;
    for (var i = byteArray.length - 1; i >= 0; i--) {
        value = value * 256 + byteArray[i];
    }
    return value;
}
I don't have a full answer to your question, but I have some pointers.
Cloud Bigtable numbers are all "64-bit integer encoded as an 8-byte big-endian value" (see here). The endian-ness of Node.js longs is system specific (see here). PHP has a similar issue with Cloud Bigtable (see here).
In Java, all numeric values are big-endian. The HBase Bytes class does all of the conversions between numbers and bytes (source code), and may provide some clues.
It may be worth posting an issue on https://github.com/googleapis/nodejs-bigtable/issues to get a better solution.
For Integers, the function below would work.
// Interprets the bytes as a big-endian integer.
function byteToInt(x) {
    let val = 0;
    for (let i = 0; i < x.length; ++i) {
        val += x[i];
        if (i < x.length - 1) val = val << 8;
    }
    return val;
}
For float, NodeJS already provides a method to read from Buffer:
let buf = Buffer.from(value, 'binary');
let num = buf.readFloatBE(0);
It is also possible to use the methods below, depending on the endianness and number of bits:
buf.readInt8(offset)
buf.readInt16BE(offset)
buf.readInt16LE(offset)
buf.readInt32BE(offset)
buf.readInt32LE(offset)
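For instance, here is a minimal sketch decoding the 4-byte key from the question; for full 8-byte big-endian Bigtable values, Node.js 12+ also offers readBigInt64BE:

// Hedged example: the "\u0000\u0000\u0000\u0000" key from the question.
// 'latin1' maps each char code 0-255 to a single byte, avoiding UTF-8 expansion.
let keyBuf = Buffer.from("\u0000\u0000\u0000\u0000", 'latin1');
console.log(keyBuf.readInt32BE(0)); // 0, as originally written

// For a full 8-byte big-endian Bigtable integer (Node.js 12+):
// let big = someBuf.readBigInt64BE(0);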

how to convert arraybuffer to string

I have written a simple TCP server in Node.js to send some data to a Chrome app. In the Chrome app, when I get the data and convert it to a string using the function below, I get the exception "byte length of Uint16Array should be a multiple of 2".
String.fromCharCode.apply(null, new Uint16Array(buffer))
I could not find any information about what could be causing this and how to fix it. Any pointers on this are highly appreciated.
Below is the code in node.js server for sending the data to client:
socket.on('data', function(data) {
    console.log('DATA ' + socket.remoteAddress + ': ' + data);
    // Write the data back to the socket,
    // the client will receive it as data from the server
    var r = socket.write('from server\r\n');
});
Below is the code from chrome app:
chrome.sockets.tcp.onReceive.addListener(function (info) {
    console.log('onListener registered');
    if (info.socketId != socketid)
        return;
    else {
        try {
            data = ab2str(info.data);
            console.log(data);
        }
        catch (e) {
            console.log(e);
        }
    }
    // info.data is an arrayBuffer.
});

function ab2str(buf) {
    return String.fromCharCode.apply(null, new Uint16Array(buf));
}
The modern (Chrome 38+) way to do this would be, assuming the encoding is UTF-8:
var decoder = new TextDecoder("utf-8");
function ab2str(buf) {
    return decoder.decode(new Uint8Array(buf));
}
This uses the TextDecoder API; see documentation for more options, such as a different encoding.
See also: Easier ArrayBuffer<->String conversion with the Encoding API (Google Developers)
You're probably seeing this problem because your app has received an odd number of bytes on the socket, but you're trying to create an array of 2-byte-wide items out of it (because that's what fits into a Uint16Array).
If your app receives the string "Hello" over the network (5 bytes), then you can cast that to a Uint8Array, and it will look like this:
Item:         0    1    2    3    4
Char:         H    e    l    l    o
Uint8 Value:  72   101  108  108  111
Casting it to a Uint16Array, though, will try to do this:
Item:    0      1      2
Chars:   He     ll     o?
IntVal:  25928  27756  ?????
Without a 6th byte to work with, it can't construct the array, and so you get an exception.
Using a Uint16Array for the data only makes sense if you are expecting UCS-2 string data on the socket. If you are receiving plain ASCII data, then you want to cast that to a Uint8Array instead, and map String.fromCharCode on that. If it's something else, such as UTF-8, then you'll have to do some other conversion.
No matter what, though, the socket layer is always free to send you data in chunks of any length. Your app will have to deal with odd sizes, and save any remainder that you can't deal with right away, so that you can use it when you receive the next chunk of data.
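A minimal sketch of that kind of reassembly, assuming the stream really is UTF-16LE text: keep any odd trailing byte and prepend it to the next chunk.

// Hedged sketch: accumulate socket chunks and only decode an even number of
// bytes, carrying an odd trailing byte over to the next onReceive event.
var leftover = new Uint8Array(0);

function handleChunk(arrayBuffer) {
    var incoming = new Uint8Array(arrayBuffer);
    var combined = new Uint8Array(leftover.length + incoming.length);
    combined.set(leftover, 0);
    combined.set(incoming, leftover.length);

    var usable = combined.length - (combined.length % 2); // largest even byte count
    leftover = combined.slice(usable);                    // save the odd byte, if any

    return String.fromCharCode.apply(
        null, new Uint16Array(combined.buffer, 0, usable / 2));
}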
Kind of old and late, but perhaps this function (original source) works better; it worked for me for decoding an ArrayBuffer to a string without turning some special chars into total garbage:
function decodeUtf8(arrayBuffer) {
    var result = "";
    var i = 0;
    var c = 0;
    var c2 = 0;
    var c3 = 0;
    var data = new Uint8Array(arrayBuffer);
    // If we have a BOM skip it
    if (data.length >= 3 && data[0] === 0xef && data[1] === 0xbb && data[2] === 0xbf) {
        i = 3;
    }
    while (i < data.length) {
        c = data[i];
        if (c < 128) {
            result += String.fromCharCode(c);
            i++;
        } else if (c > 191 && c < 224) {
            if (i + 1 >= data.length) {
                throw "UTF-8 Decode failed. Two byte character was truncated.";
            }
            c2 = data[i + 1];
            result += String.fromCharCode(((c & 31) << 6) | (c2 & 63));
            i += 2;
        } else {
            if (i + 2 >= data.length) {
                throw "UTF-8 Decode failed. Multi byte character was truncated.";
            }
            c2 = data[i + 1];
            c3 = data[i + 2];
            result += String.fromCharCode(((c & 15) << 12) | ((c2 & 63) << 6) | (c3 & 63));
            i += 3;
        }
    }
    return result;
}
There is an asynchronous way using Blob and FileReader.
You can specify any valid encoding.
function arrayBufferToString(buffer, encoding, callback) {
    var blob = new Blob([buffer], {type: 'text/plain'});
    var reader = new FileReader();
    reader.onload = function(evt) { callback(evt.target.result); };
    reader.readAsText(blob, encoding);
}

// example:
var buf = new Uint8Array([65, 66, 67]);
arrayBufferToString(buf, 'UTF-8', console.log.bind(console)); // "ABC"

Retrieving binary data in Javascript (Ajax)

I'm trying to get this remote binary file to read the bytes, which (of course) are supposed to come in the range 0..255. Since the response is given as a string, I need to use charCodeAt to get the numeric values for every character. I have come across the problem that charCodeAt returns the value in UTF8 (if I'm not mistaken), so for example the ASCII value 139 gets converted to 8249. This messes up my whole application because I need to get those values as they are sent from the server.
The immediate solution is to create a big switch that, for every given UTF8 code, will return the corresponding ASCII. But I was wondering if there is a more elegant and simpler solution. Thanks in advance.
The following code has been extracted from an answer to this StackOverflow question and should help you work around your issue.
function stringToBytesFaster(str) {
    var ch, st, re = [], j = 0;
    for (var i = 0; i < str.length; i++) {
        ch = str.charCodeAt(i);
        if (ch < 127) {
            re[j++] = ch & 0xFF;
        }
        else {
            st = []; // clear stack
            do {
                st.push(ch & 0xFF); // push byte to stack
                ch = ch >> 8;       // shift value down by 1 byte
            }
            while (ch);
            // add stack contents to result
            // done because chars have "wrong" endianness
            st = st.reverse();
            for (var k = 0; k < st.length; ++k)
                re[j++] = st[k];
        }
    }
    // return an array of bytes
    return re;
}
var str = "\x8b\x00\x01\x41A\u1242B\u4123C";
alert(stringToBytesFaster(str)); // 139,0,1,65,65,18,66,66,65,35,67
I would recommend encoding the binary data in some character-encoding-independent format like base64.
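For instance, a minimal sketch assuming the server (or a small proxy) returns the bytes base64-encoded; atob is available in every modern browser:

// Hedged sketch: turn a base64 response body into an array of byte values 0..255.
function base64ToBytes(b64) {
    var binary = atob(b64);            // one char per byte, codes 0-255
    var bytes = new Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return bytes;
}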

how do I access XHR responseBody (for binary data) from Javascript in IE?

I've got a web page that uses XMLHttpRequest to download a binary resource.
In Firefox and Gecko I can use responseText to get the bytes, even if the bytestream includes binary zeroes. I may need to coerce the mimetype with overrideMimeType() to make that happen. In IE, though, responseText doesn't work, because it appears to terminate at the first zero. If you read 100,000 bytes, and byte 7 is a binary zero, you will be able to access only 7 bytes. IE's XMLHttpRequest exposes a responseBody property to access the bytes. I've seen a few posts suggesting that it's impossible to access this property in any meaningful way directly from Javascript. This sounds crazy to me.
xhr.responseBody is accessible from VBScript, so the obvious workaround is to define a method in VBScript in the webpage, and then call that method from Javascript. See jsdap for one example. EDIT: DO NOT USE THIS VBScript!!
var IE_HACK = (/msie/i.test(navigator.userAgent) &&
               !/opera/i.test(navigator.userAgent));

// no no no! Don't do this!
if (IE_HACK) document.write('<script type="text/vbscript">\n\
Function BinaryToArray(Binary)\n\
    Dim i\n\
    ReDim byteArray(LenB(Binary))\n\
    For i = 1 To LenB(Binary)\n\
        byteArray(i-1) = AscB(MidB(Binary, i, 1))\n\
    Next\n\
    BinaryToArray = byteArray\n\
End Function\n\
</script>');
var xml = (window.XMLHttpRequest)
    ? new XMLHttpRequest()                    // Mozilla/Safari/IE7+
    : (window.ActiveXObject)
        ? new ActiveXObject("MSXML2.XMLHTTP") // IE6
        : null;                               // Commodore 64?

xml.open("GET", url, true);
if (xml.overrideMimeType) {
    xml.overrideMimeType('text/plain; charset=x-user-defined');
} else {
    xml.setRequestHeader('Accept-Charset', 'x-user-defined');
}
xml.onreadystatechange = function() {
    if (xml.readyState == 4) {
        if (!binary) {
            callback(xml.responseText);
        } else if (IE_HACK) {
            // call a VBScript method to copy every single byte
            callback(BinaryToArray(xml.responseBody).toArray());
        } else {
            callback(getBuffer(xml.responseText));
        }
    }
};
xml.send('');
Is this really true? The best way? copying every byte? For a large binary stream that's not going to be very efficient.
There is also a possible technique using ADODB.Stream, which is a COM equivalent of a MemoryStream. See here for an example. It does not require VBScript but does require a separate COM object.
if (typeof (ActiveXObject) != "undefined" && typeof (httpRequest.responseBody) != "undefined") {
    // Convert httpRequest.responseBody byte stream to shift_jis encoded string
    var stream = new ActiveXObject("ADODB.Stream");
    stream.Type = 1; // adTypeBinary
    stream.Open();
    stream.Write(httpRequest.responseBody);
    stream.Position = 0;
    stream.Type = 1; // adTypeBinary;
    stream.Read.... /// ???? what here
}
But that's not going to work well - ADODB.Stream is disabled on most machines these days.
In The IE8 developer tools - the IE equivalent of Firebug - I can see the responseBody is an array of bytes and I can even see the bytes themselves. The data is right there. I don't understand why I can't get to it.
Is it possible for me to read it with responseText?
hints? (other than defining a VBScript method)
Yes, the answer I came up with for reading binary data via XHR in IE is to use VBScript injection. This was distasteful to me at first, but I look at it as just one more browser-dependent bit of code.
(Regular XHR and responseText work fine in other browsers; you may have to coerce the mime type with XMLHttpRequest.overrideMimeType(), which isn't available in IE.)
This is how I got a thing that works like responseText in IE, even for binary data.
First, inject some VBScript as a one-time thing, like this:
if (/msie/i.test(navigator.userAgent) && !/opera/i.test(navigator.userAgent)) {
    var IEBinaryToArray_ByteStr_Script =
        "<!-- IEBinaryToArray_ByteStr -->\r\n"+
        "<script type='text/vbscript' language='VBScript'>\r\n"+
        "Function IEBinaryToArray_ByteStr(Binary)\r\n"+
        "   IEBinaryToArray_ByteStr = CStr(Binary)\r\n"+
        "End Function\r\n"+
        "Function IEBinaryToArray_ByteStr_Last(Binary)\r\n"+
        "   Dim lastIndex\r\n"+
        "   lastIndex = LenB(Binary)\r\n"+
        "   if lastIndex mod 2 Then\r\n"+
        "       IEBinaryToArray_ByteStr_Last = Chr( AscB( MidB( Binary, lastIndex, 1 ) ) )\r\n"+
        "   Else\r\n"+
        "       IEBinaryToArray_ByteStr_Last = "+'""'+"\r\n"+
        "   End If\r\n"+
        "End Function\r\n"+
        "</script>\r\n";

    // inject VBScript
    document.write(IEBinaryToArray_ByteStr_Script);
}
The JS class I'm using that reads binary files exposes a single interesting method, readCharAt(i), which reads the character (a byte, really) at the i'th index. This is how I set it up:
// see doc on http://msdn.microsoft.com/en-us/library/ms535874(VS.85).aspx
function getXMLHttpRequest() {
    if (window.XMLHttpRequest) {
        return new window.XMLHttpRequest;
    }
    else {
        try {
            return new ActiveXObject("MSXML2.XMLHTTP");
        }
        catch (ex) {
            return null;
        }
    }
}

// this fn is invoked if IE
function IeBinFileReaderImpl(fileURL) {
    var that = this; // keep a reference for use inside the callbacks
    this.req = getXMLHttpRequest();
    this.req.open("GET", fileURL, true);
    this.req.setRequestHeader("Accept-Charset", "x-user-defined");

    // my helper to convert from responseBody to a "responseText" like thing
    var convertResponseBodyToText = function (binary) {
        var byteMapping = {};
        for (var i = 0; i < 256; i++) {
            for (var j = 0; j < 256; j++) {
                byteMapping[String.fromCharCode(i + j * 256)] =
                    String.fromCharCode(i) + String.fromCharCode(j);
            }
        }
        // call into VBScript utility fns
        var rawBytes = IEBinaryToArray_ByteStr(binary);
        var lastChr = IEBinaryToArray_ByteStr_Last(binary);
        return rawBytes.replace(/[\s\S]/g,
            function (match) { return byteMapping[match]; }) + lastChr;
    };

    this.req.onreadystatechange = function (event) {
        if (that.req.readyState == 4) {
            that.status = "Status: " + that.req.status;
            //that.httpStatus = that.req.status;
            if (that.req.status == 200) {
                // this doesn't work
                //fileContents = that.req.responseBody.toArray();
                // this doesn't work
                //fileContents = new VBArray(that.req.responseBody).toArray();
                // this works...
                var fileContents = convertResponseBodyToText(that.req.responseBody);
                fileSize = fileContents.length - 1;
                if (that.fileSize < 0) throwException(_exception.FileLoadFailed);
                that.readByteAt = function (i) {
                    return fileContents.charCodeAt(i) & 0xff;
                };
            }
            if (typeof callback == "function") { callback(that); }
        }
    };
    this.req.send();
}
// this fn is invoked if non IE
function NormalBinFileReaderImpl(fileURL) {
    var that = this; // keep a reference for use inside the callback
    this.req = new XMLHttpRequest();
    this.req.open('GET', fileURL, true);
    this.req.onreadystatechange = function (aEvt) {
        if (that.req.readyState == 4) {
            if (that.req.status == 200) {
                var fileContents = that.req.responseText;
                fileSize = fileContents.length;
                that.readByteAt = function (i) {
                    return fileContents.charCodeAt(i) & 0xff;
                };
                if (typeof callback == "function") { callback(that); }
            }
            else
                throwException(_exception.FileLoadFailed);
        }
    };
    //XHR binary charset opt by Marcus Granado 2006 [http://mgran.blogspot.com]
    this.req.overrideMimeType('text/plain; charset=x-user-defined');
    this.req.send(null);
}
The conversion code was provided by Miskun.
Very fast, works great.
I used this method to read and extract zip files from Javascript, and also in a class that reads and displays EPUB files in Javascript. Very reasonable performance. About half a second for a 500kb file.
XMLHttpRequest.responseBody is a VBArray object containing the raw bytes. You can convert these objects to standard arrays using the toArray() function:
var data = xhr.responseBody.toArray();
I would suggest two other (fast) options:
First, you can use ADODB.Recordset to convert the byte array into a string. I would guess that this object is more common than ADODB.Stream, which is often disabled for security reasons. This option is VERY fast, less than 30ms for a 500kB file.
Second, if the Recordset component is not accessible, there is a trick to access the byte array data from Javascript. Send your xhr.responseBody to VBScript, pass it through any VBScript string function such as CStr (takes no time), and return it to JS. You will get a weird string with bytes concatenated into 16-bit unicode (in reverse). You can then convert this string quickly into a usable bytestring through a regular expression with dictionary-based replacement. Takes about 1s for 500kB.
For comparison, the byte-by-byte conversion through loops takes several minutes for this same 500kB file, so it's a no-brainer :) Below is the code I have been using; insert it into your header, then call the function ieGetBytes with your xhr.responseBody.
<!--[if IE]>
<script type="text/vbscript">
    'Best case scenario when the ADODB.Recordset object exists
    'We will do the existence test in Javascript (see after)
    'Extremely fast, about 25ms for a 500kB file
    Function ieGetBytesADO(byteArray)
        Dim recordset
        Set recordset = CreateObject("ADODB.Recordset")
        With recordset
            .Fields.Append "temp", 201, LenB(byteArray)
            .Open
            .AddNew
            .Fields("temp").AppendChunk byteArray
            .Update
        End With
        ieGetBytesADO = recordset("temp")
        recordset.Close
        Set recordset = Nothing
    End Function

    'Trick to return a Javascript-readable string from a VBScript byte array
    'Yet the string is not usable as such by Javascript, since the bytes
    'are merged into 16-bit unicode characters. Last character missing if odd length.
    Function ieRawBytes(byteArray)
        ieRawBytes = CStr(byteArray)
    End Function

    'Careful the last character is missing in case of odd file length
    'We will call the ieLastChr function (below) from Javascript
    'Cannot merge directly within ieRawBytes as the final byte would be duplicated
    Function ieLastChr(byteArray)
        Dim lastIndex
        lastIndex = LenB(byteArray)
        if lastIndex mod 2 Then
            ieLastChr = Chr( AscB( MidB( byteArray, lastIndex, 1 ) ) )
        Else
            ieLastChr = ""
        End If
    End Function
</script>

<script type="text/javascript">
    try {
        // best case scenario, the ADODB.Recordset object exists
        // we can use the VBScript ieGetBytes function to transform a byte array into a string
        var ieRecordset = new ActiveXObject('ADODB.Recordset');
        var ieGetBytes = function( byteArray ) {
            return ieGetBytesADO(byteArray);
        }
        ieRecordset = null;
    } catch(err) {
        // no ADODB.Recordset object, we will do the conversion quickly through a regular expression
        // initializes for once and for all the translation dictionary to speed up our regexp replacement function
        var ieByteMapping = {};
        for ( var i = 0; i < 256; i++ ) {
            for ( var j = 0; j < 256; j++ ) {
                ieByteMapping[ String.fromCharCode( i + j * 256 ) ] = String.fromCharCode(i) + String.fromCharCode(j);
            }
        }
        // since ADODB is not there, we replace the previous VBScript ieGetBytesADO function with a regExp-based function,
        // quite fast, about 1.3 seconds for 500kB (versus several minutes for byte-by-byte loops over the byte array)
        var ieGetBytes = function( byteArray ) {
            var rawBytes = ieRawBytes(byteArray),
                lastChr = ieLastChr(byteArray);
            return rawBytes.replace(/[\s\S]/g, function( match ) {
                return ieByteMapping[match]; }) + lastChr;
        }
    }
</script>
<![endif]-->
Thanks so much for this solution. The BinaryToArray() function in VBScript works great for me.
Incidentally, I need the binary data for providing it to an Applet. (Don't ask me why Applets can't be used for downloading binary data. Long story short: weird MS authentication that can't go through applet (URLConn) calls. It's especially weird in cases where users are behind a proxy.)
The Applet needs a byte array from this data, so here's what I do to get it:
String[] results = result.toString().split(",");
byte[] byteResults = new byte[results.length];
for (int i = 0; i < results.length; i++) {
    byteResults[i] = (byte) Integer.parseInt(results[i]);
}
The byte array can then be converted into a ByteArrayInputStream for further processing.
Thank you for this post.
I found this link useful:
http://www.codingforums.com/javascript-programming/47018-help-using-responsetext-property-microsofts-xmlhttp-activexobject-ie6.html
Especially this part:
<script language="VBScript">
    Function BinaryToString(Binary)
        Dim I, S
        For I = 1 to LenB(Binary)
            S = S & Chr(AscB(MidB(Binary, I, 1)))
        Next
        BinaryToString = S
    End Function
</script>
I've added this to my htm page.
Then I call this function from my javascript:
responseText = BinaryToString(xhr.responseBody);
Works on IE8, IE9, IE10, FF & Chrome.
You could also just make a proxy script that goes to the address you're requesting & base64's it. Then you just have to pass a query string to the proxy script that tells it the address. In IE you have to manually do base64 in JS though. But this is a way to go if you don't want to use VBScript.
I used this for my GameBoy Color emulator.
Here is the PHP script that does the magic:
<?php
// Binary Proxy
if (isset($_GET['url'])) {
    try {
        $curl = curl_init();
        curl_setopt($curl, CURLOPT_URL, stripslashes($_GET['url']));
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($curl, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
        curl_setopt($curl, CURLOPT_POST, false);
        curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 30);
        $result = curl_exec($curl);
        curl_close($curl);
        if ($result !== false) {
            header('Content-Type: text/plain; charset=ASCII');
            header('Expires: '.gmdate('D, d M Y H:i:s \G\M\T', time() + (3600 * 24 * 7)));
            echo(base64_encode($result));
        }
        else {
            header('HTTP/1.0 404 File Not Found');
        }
    }
    catch (Exception $error) { }
}
?>
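On the client side, a minimal sketch could look like the following (the proxy URL is just illustrative, and the base64 is decoded by hand because old IE lacks atob):

// Hedged sketch: decode the proxy's base64 output into an array of byte values.
var B64_CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function decodeBase64ToBytes(b64) {
    var bytes = [], buffer = 0, bits = 0;
    for (var i = 0; i < b64.length; i++) {
        var value = B64_CHARS.indexOf(b64.charAt(i));
        if (value < 0) continue;                   // skip '=' padding and whitespace
        buffer = ((buffer & 0xFF) << 6) | value;   // accumulate 6 bits at a time
        bits += 6;
        if (bits >= 8) {
            bits -= 8;
            bytes.push((buffer >> bits) & 0xFF);   // emit the top 8 accumulated bits
        }
    }
    return bytes;
}

// var bytes = decodeBase64ToBytes(xhr.responseText); // xhr pointed at the proxy script, e.g. ?url=...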
I was trying to download a file and then sign it using CAPICOM.DLL. The only way I could do it was by injecting a VBScript function that does the download. This is my solution:
if (/msie/i.test(navigator.userAgent) && !/opera/i.test(navigator.userAgent)) {
    var VBConteudo_Script =
        '<!-- VBConteudo -->\r\n'+
        '<script type="text/vbscript">\r\n'+
        'Function VBConteudo(url)\r\n'+
        '   Set objHTTP = CreateObject("MSXML2.XMLHTTP")\r\n'+
        '   objHTTP.open "GET", url, False\r\n'+
        '   objHTTP.send\r\n'+
        '   If objHTTP.Status = 200 Then\r\n'+
        '       VBConteudo = objHTTP.responseBody\r\n'+
        '   End If\r\n'+
        'End Function\r\n'+
        '\<\/script>\r\n';

    // inject VBScript
    document.write(VBConteudo_Script);
}
