Which encoding does application/dns-message use? - javascript

I am writing a DNS-over-HTTPS server which should resolve custom names, not just proxy them to some other DoH server like Google's. I am having trouble properly decoding the body of the request.
For example, I get the body of the request in binary format, specifically as an ArrayBuffer in JavaScript. I am using the following code to get a base64 representation of the array:
function _arrayBufferToBase64(buffer) {
  var binary = '';
  var bytes = new Uint8Array(buffer);
  var len = bytes.byteLength;
  for (var i = 0; i < len; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  return btoa(binary);
}
And I get something like this as a result:
AAABAAABAAAAAAABCmFwbngtbWF0Y2gGZG90b21pA2NvbQAAAQABAAApEAAAAAAAAE4ADABKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
Now, per the RFC 8484 standard this should be decoded as base64url, but when I decode it as such, I get the following:
apnx-matchdotomicom)NJ
I also used this "tutorial" as a reference, but they decode a similarly formatted blob and I get similar nonsense as before.
There is very little to no information about something like this on the internet. If it is of any help, the DoH standard uses the application/dns-message media type for the body.
If anyone has some insight on what I am doing wrong or how I could edit the question to make it more clear, please help me, cheers :)

As stated in the RFC:
Definition of the "application/dns-message" Media Type
The data payload for the "application/dns-message" media type is a
single message of the DNS on-the-wire format defined in Section 4.2.1
of [RFC1035], which in turn refers to the full wire format defined in
Section 4.1 of that RFC.
So what you get is exactly what is sent on the wire in the normal DNS-over-port-53 case.
I would recommend you use a DNS library that should have a from_wire or similar method to which you can feed this content and get back some structured data.
Showing an example in Python with the content you gave:
In [1]: import base64
In [2]: import dns.message
In [3]: payload = 'AAABAAABAAAAAAABCmFwbngtbWF0Y2gGZG90b21pA2NvbQAAAQABAAApEAAAAAAAAE4ADABKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA='
In [4]: raw = base64.b64decode(payload)
In [5]: msg = dns.message.from_wire(raw)
In [6]: print(msg)
id 0
opcode QUERY
rcode NOERROR
flags RD
edns 0
payload 4096
option Generic 12
;QUESTION
apnx-match.dotomi.com. IN A
;ANSWER
;AUTHORITY
;ADDITIONAL
So your message is a DNS query for the A record type on the name apnx-match.dotomi.com.
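Since your server is in JavaScript, the equivalent decoding can be done there too. A minimal Node.js sketch, assuming the third-party dns-packet npm package (my choice of library, not something the DoH spec mandates):
const dnsPacket = require('dns-packet'); // assumed installed via npm

// The base64 body from the question, converted back to raw wire-format bytes.
const payload = 'AAABAAABAAAAAAABCmFwbngtbWF0Y2gGZG90b21pA2NvbQAAAQABAAApEAAAAAAAAE4ADABKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=';
const raw = Buffer.from(payload, 'base64');

// decode() parses the RFC 1035 on-the-wire format into a plain object.
const msg = dnsPacket.decode(raw);
console.log(msg.questions);
// e.g. [ { name: 'apnx-match.dotomi.com', type: 'A', ... } ]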
Also about:
I am writing DNS-over-HTTPS server which should resolve custom names,
If you are not doing this just to learn (which is a fine goal), note that there is already various open source nameserver software that does DoH, so you don't need to reinvent it. For example: https://blog.nlnetlabs.nl/dns-over-https-in-unbound/

Related

How to deserialize dumped BSON with arbitrarily many documents in JavaScript?

I have a BSON file that comes from a mongoexport of a database. Let's assume the database is todo and the collection is items. Now I want to load the data offline into my RN app. Since the collection may contain arbitrarily many documents (let's say 2 currently), I want to use a method that parses the file however many documents it contains.
I have tried the following methods:
Use external bsondump executable.
We can convert the file to JSON using an external command:
bsondump --outFile items.json items.bson
But I am developing a mobile app, so invoking a third-party executable via a shell command is not ideal. Plus, the output contains several one-line JSON objects, so the output is technically not a valid JSON file, and parsing it afterwards is not graceful.
Use deserialize in js-bson library
According to the js-bson documentation, we can do
const bson = require('bson')
const fs = require('fs')
bson.deserialize(fs.readFileSync(PATH_HERE))
But this raises an error
Error: buffer length 173 must === bson size 94
and by adding this option,
bson.deserialize(fs.readFileSync(PATH_HERE), {
  allowObjectSmallerThanBufferSize: true
})
the error is resolved but only the first document is returned. Because the documentation doesn't mention that this function can only parse a 1-document collection, I wonder if there is some option that enables reading multiple documents.
Use deserializeStream in js-bson
let docs = []
bson.deserializeStream(fs.readFileSync(PATH_HERE), 0, 2, docs, 0)
But this method requires a parameter giving the document count (2 here).
Use bson-stream library
I am actually using react-native-fetch-blob instead of fs, and according to their documentation, the stream object does not have a pipe method, which is the one and only method demonstrated in the bson-stream docs. So although this method does not require the number of documents, I am confused about how to use it.
// fs
const BSONStream = require('bson-stream');
fs.createReadStream(PATH_HERE).pipe(new BSONStream()).on('data', callback);

// RNFetchBlob
const RNFetchBlob = require('react-native-fetch-blob');
RNFetchBlob.fs.readStream(PATH_HERE, ENCODING)
  .then(stream => {
    stream.open();
    stream.can_we_pipe_here(new BSONStream())
    stream.onData(callback)
  });
Also, I'm not sure what ENCODING should be above.
I have read the source code of js-bson and have figured out a way to solve the problem. I think it's better to keep a detailed record here:
Approach 1
Split documents by ourselves, and feed the documents to parser one-by-one.
BSON internal format
Let's say the .json dump of our todo/items.bson is
{_id: "someid#1", content: "Launch a manned rocket to the sun"}
{_id: "someid#2", content: "Wash my underwear"}
Which clearly violates the JSON syntax because there isn't an outer object wrapping things together.
The internal BSON is of similar shape, but it seems BSON allows this kind of multi-object stuffing in one file.
Then, for each document, the four leading bytes indicate the length of that document (as a little-endian 32-bit integer), including the prefix itself and the suffix. The suffix is simply a 0 byte.
The final BSON file resembles
LLLLDDDDDDD0LLLLDDD0LLLLDDDDDDDDDDDDDDDDDDDDDD0...
where L is length, D is binary data, 0 is literally 0.
The algorithm
Therefore, we can develop a simple algorithm: read the document length, call bson.deserialize with allowObjectSmallerThanBufferSize, which returns the first document from the start of the buffer, then slice off that document and repeat.
About encoding
One extra thing to mention is encoding in the React Native context. The libraries dealing with React Native persistent storage all seem to lack support for reading a raw buffer from a file. The closest choice we have is base64, which is a string representation of arbitrary binary data. We then use Buffer to convert the base64 string to a buffer and feed it into the algorithm above.
The code
deserialize.js
const BSON = require('bson');

function _getNextObjectSize(buffer) {
  // The first four bytes of a BSON document are its total length,
  // stored as a little-endian 32-bit integer.
  return buffer[0] | (buffer[1] << 8) | (buffer[2] << 16) | (buffer[3] << 24);
}

function deserialize(buffer, options) {
  let _buffer = buffer;
  let _result = [];
  while (_buffer.length > 0) {
    let nextSize = _getNextObjectSize(_buffer);
    if (_buffer.length < nextSize) {
      throw new Error("Corrupted BSON file: the last object is incomplete.");
    }
    else if (_buffer[nextSize - 1] !== 0) {
      throw new Error(`Corrupted BSON file: the ${_result.length + 1}-th object does not end with 0.`);
    }
    let obj = BSON.deserialize(_buffer, {
      ...options,
      allowObjectSmallerThanBufferSize: true,
      promoteBuffers: true // BSON supports raw buffers as a data type; this keeps them
                           // as Buffers, which is valid in a JS object but not in JSON
    });
    _result.push(obj);
    _buffer = _buffer.slice(nextSize);
  }
  return _result;
}

module.exports = deserialize;
App.js
import RNFetchBlob from 'rn-fetch-blob';
const deserialize = require('./deserialize.js');
const Buffer = require('buffer/').Buffer;

RNFetchBlob.fs.readFile('...', 'base64')
  .then(b64Data => Buffer.from(b64Data, 'base64'))
  .then(bufferData => deserialize(bufferData))
  .then(jsData => { /* Do anything here */ });
Approach 2
The above method reads the file as a whole. Sometimes, when we have a very large .bson file, the app may crash. Of course, one could change readFile to readStream above and add various checks to determine whether the current chunk contains the end of a document. That can be troublesome, and we would effectively be re-writing the bson-stream library!
So instead, we can create an RNFetchBlob file stream and another bson-stream parsing stream. This brings us back to attempt #4 in the question.
After reading the source code, the BSON parsing stream turns out to inherit from a Node.js Transform stream. Instead of piping, we can manually forward chunks from onData and onEnd into the parser's write() and end(), as in the sketch below.
Since bson-stream does not support passing options to underlying bson library calls, one may want to tweak the library source code a little in their own projects.
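A rough, untested sketch of that forwarding, using the same libraries as attempt #4 (rn-fetch-blob's readStream and bson-stream; method names are taken from their docs, and PATH_HERE is the same placeholder as in the question):
import RNFetchBlob from 'rn-fetch-blob';
const BSONStream = require('bson-stream');
const Buffer = require('buffer/').Buffer;

const parser = new BSONStream();
parser.on('data', doc => { /* handle one parsed document */ });
parser.on('end', () => { /* all documents have been parsed */ });

// Read the file as base64 chunks; a buffer size that is a multiple of 3
// keeps every chunk independently decodable back to bytes.
RNFetchBlob.fs.readStream(PATH_HERE, 'base64', 4095)
  .then(stream => {
    stream.open();
    // Forward chunks and the end event manually instead of piping.
    stream.onData(chunk => parser.write(Buffer.from(chunk, 'base64')));
    stream.onEnd(() => parser.end());
  });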

Javascript ArrayBuffer equivalent in Swift or iOS

I want to know what the JavaScript ArrayBuffer equivalent is in Swift or iOS.
Basically, I have a Swift struct that I want to store as a blob (data stored based on its memory layout) and pass this blob to a JavaScript ArrayBuffer, where I can extract data from the ArrayBuffer based on a defined layout.
I haven't yet managed to save the struct from Swift as a binary/memory blob. I'm struggling to understand the memory layout configuration. I thought it would be similar to structs in C, but it is not.
Any help or pointers would be appreciated. Thanks.
I'm not an expert in Javascript, so I may be talking nonsense.
It seems like you can achieve what you want if your struct S implements the Codable protocol. Then you can transform it to a Data blob using an encoder, like this:
let encoder = JSONEncoder()
do {
    let data = try encoder.encode(s)
    // do what you want with the blob
} catch {
    // handle error
}
And back to S, like this:
let decoder = JSONDecoder()
do {
    let s = try decoder.decode(S.self, from: data)
} catch {
    // handle error
}
If S is Codable, [S] (an Array<S>) will also be Codable.
You can probably pass the data to your script as a String; then you'll have to transform your data to a string with JSONSerialization.
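On the JavaScript side, the resulting JSON string can then be turned back into an object (a trivial sketch; the fields shown are hypothetical, just to illustrate):
// Hypothetical JSON produced by JSONEncoder for some struct S with two fields.
const jsonFromSwift = '{"id": 1, "name": "example"}';
const s = JSON.parse(jsonFromSwift);
console.log(s.name); // "example"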

Extracting gzip data in Javascript with Pako - encoding issues

I am trying to run what I expect is a very common use case:
I need to download a gzip file (of complex JSON datasets) from Amazon S3 and decompress (gunzip) it in JavaScript. I have everything working correctly except the final 'inflate' step.
I am using Amazon Gateway, and have confirmed that the Gateway is properly transferring the compressed file (used Curl and 7-zip to verify the resulting data is coming out of the API). Unfortunately, when I try to inflate the data in Javascript with Pako, I am getting errors.
Here is my code (note: response.data is the binary data transferred from AWS):
apigClient.dataGet(params, {}, {})
  .then( (response) => {
    console.log(response); // shows response including header and data
    const result = pako.inflate(new Uint8Array(response.data), { to: 'string' });
    // ERROR HERE: 'buffer error'
  }).catch( (itemGetError) => {
    console.log(itemGetError);
  });
I also tried a version that splits the binary data input into an array by adding the following before the inflate:
const charData = response.data.split('').map(function(x){return x.charCodeAt(0); });
const binData = new Uint8Array(charData);
const result = pako.inflate(binData, { to: 'string' });
//ERROR: incorrect header check
I suspect I have some sort of issue with the encoding of the data and I am not getting it into the proper format for Uint8Array to be meaningful.
Can anyone point me in the right direction to get this working?
For clarity:
As the code above is listed, I get a buffer error. If I drop the Uint8Array, and just try to process 'result.data' I get the error: 'incorrect header check', which is what makes me suspect that it is the encoding/format of my data which is the issue.
The original file was compressed in Java using GZIPOutputStream with UTF-8 and then stored as a static file (i.e. randomname.gz).
The file is transferred through the AWS Gateway as binary, so it is exactly the same coming out as the original file, i.e. 'curl --output filename.gz {URLtoS3Gateway}' === downloaded file from S3.
I had the same basic issue when I used the gateway to encode the binary data as 'base64', but did not try a whole lot around that effort, as it seems easier to work with the "real" binary data than to add the base64 encode/decode in the middle. If that is a needed step, I can add it back in.
I have also tried some of the example processing found halfway through this issue: https://github.com/nodeca/pako/issues/15, but that didn't help (I might be misunderstanding the binary format v. array v base64).
I was able to figure out my own problem. It was related to the format of the data being read in by JavaScript (either JavaScript itself or the Angular HttpClient implementation). I was reading it in a "binary" format, but it was not the same as the one recognized/used by pako. When I read the data in as base64 and then converted it to binary with 'atob', I was able to get it working. Here is what I actually implemented (starting with fetching from the S3 file storage).
1) Build an AWS API Gateway that will read a previously stored *.gz file from S3.
Create a standard "get" API request to S3 that supports binary.
(http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html)
Make sure the Gateway will recognize the input type by setting 'Binary types' (application/gzip worked for me, but others like application/octet-stream and image/png should work for other types of files besides *.gz). NOTE: that setting is under the main API selections list on the left of the API config screen.
Set the 'Content Handling' to "Convert to text (if needed)" by selecting the API Method/{GET} -> Integration Request box and updating the 'Content Handling' item. (NOTE: the example in the link above recommends "passthrough". DON'T use that, as it will pass the unreadable binary format.) This is the step that actually converts from binary to base64.
At this point you should be able to download a base64 version of your binary file via the URL (test in a browser or with Curl).
2) I then had the API Gateway generate the SDK and used the respective apiGClient.{get} call.
3) Within the call, translate the base64->binary->Uint8 and then decompress/inflate it. My code for that:
apigClient.myDataGet(params, {}, {})
  .then( (response) => {
    // HttpClient result is in response.data
    // convert the incoming base64 -> binary
    const strData = atob(response.data);
    // split it into an array rather than a "string"
    const charData = strData.split('').map(function(x){ return x.charCodeAt(0); });
    // convert to binary
    const binData = new Uint8Array(charData);
    // inflate
    const result = pako.inflate(binData, { to: 'string' });
    console.log(result);
  }).catch( (itemGetError) => {
    console.log(itemGetError);
  });

Parse a timestamp response file (tsr) using javascript

This code is written in python:
from asn1crypto import tsp, cms, util
response_file = open('timestamp-response.tsr','rb')
response = tsp.TimeStampResp.load(response_file.read())
token = response['time_stamp_token']
signed_data = token['content']
encap_content_info = signed_data['encap_content_info']
tst_info = encap_content_info['content'].parsed
signer_infos = signed_data['signer_infos']
signer_info = signer_infos[0]
signed_attrs = signer_info['signed_attrs']
signature = signer_info['signature']
I can't find a way to perform the same action using JavaScript, even though the APIs of the libraries look similar.
Helpful links:
https://kjur.github.io/jsrsasign/api/symbols/KJUR.asn1.tsp.TimeStampResp.html
https://github.com/wbond/asn1crypto/blob/master/asn1crypto/tsp.py
I am not aware of any ready-to-use library, but I believe it should be possible to use ASN1.js to parse the TimeStampResp structure with definitions from RFC 3161 and extract the data you need.
Parsing a DER-encoded structure when you have its ASN.1 definition is the same kind of task as parsing an XML structure when you have its XSD definition, but it will probably take more time until you get familiar with ASN.1 stuff.
You could try pkijs. I did not try it on timestamps (only X.509 certificates), but it seems this library does support it; it uses asn1js under the hood. Its feature list covers both requests and responses (a rough parsing sketch follows the list):
Time-stamping request:
Parsing internal values
Getting/setting any internal values
Creation of a new Time-stamping request "from scratch"
Validation of Time-stamping request signature
Time-stamping response:
Parsing internal values
Getting/setting any internal values
Creation of a new Time-stamping response "from scratch"
Validation of Time-stamping response signature
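For reference, here is a rough, untested sketch of the same extraction as the Python snippet using asn1js and pkijs; the class and property names (TimeStampResp, timeStampToken, SignedData, encapContentInfo, TSTInfo) follow the pkijs documentation and may differ between versions:
const fs = require('fs');
const asn1js = require('asn1js');
const pkijs = require('pkijs');

// Read the DER-encoded response and parse the outer ASN.1 structure.
const der = fs.readFileSync('timestamp-response.tsr');
const asn1 = asn1js.fromBER(new Uint8Array(der).buffer);

const response = new pkijs.TimeStampResp({ schema: asn1.result });
const token = response.timeStampToken;            // a CMS ContentInfo
const signedData = new pkijs.SignedData({ schema: token.content });

// The TSTInfo sits inside the encapsulated content as a DER-encoded blob.
const encap = signedData.encapContentInfo;
const tstInfoAsn1 = asn1js.fromBER(encap.eContent.valueBlock.valueHex);
const tstInfo = new pkijs.TSTInfo({ schema: tstInfoAsn1.result });

const signerInfo = signedData.signerInfos[0];
const signedAttrs = signerInfo.signedAttrs;
const signature = signerInfo.signature;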

Parameter retrieval for HTTP PUT requests under IIS5.1 and ASP-classic?

I'm trying to implement a REST interface under IIS5.1/ASP-classic (XP-Pro development box). So far, I cannot find the incantation required to retrieve request content variables under the PUT HTTP method.
With a request like:
PUT http://localhost/rest/default.asp?/record/1336
Department=Sales&Name=Jonathan%20Doe%203548
how do I read Department and Name values into my ASP code?
Request.Form appears to only support POST requests. Request.ServerVariables only gets me to header information. Request.QueryString doesn't get me to the content either...
Based on the replies from AnthonyWJones and ars I went down the BinaryRead path and came up with the first attempt below:
var byteCount = Request.TotalBytes;
var binContent = Request.BinaryRead(byteCount);

// Use an ADODB recordset field to convert the binary safe array to a string.
var rst = Server.CreateObject('ADODB.Recordset');
rst.Fields.Append('myBinary', 201, byteCount); // 201 = adLongVarChar
rst.Open();
rst.AddNew();
rst('myBinary').AppendChunk(binContent);
rst.update();
var binaryString = rst('myBinary');
var contentString = binaryString.Value;

// Split the url-encoded body into name/value pairs.
var parameters = {};
var pairs = HtmlDecode(contentString).split(/&/);
for (var pair in pairs) {
  var param = pairs[pair].split(/=/);
  parameters[param[0]] = decodeURI(param[1]);
}
This blog post by David Wang, and an HtmlDecode() function taken from Andy Oakley at blogs.msdn.com, also helped a lot.
Doing this splitting and escaping by hand, I'm sure there are a 1001 bugs in here but at least I'm moving again. Thanks.
Unfortunately ASP predates the REST concept by quite some years.
If you are going RESTful then I would consider not using URL-encoded form data. Use XML instead. You will be able to accept an XML entity body with:
Dim xml : Set xml = CreateObject("MSXML2.DOMDocument.3.0")
xml.async = false
xml.Load Request
Otherwise you will need to use BinaryRead on the Request object and then laboriously convert the byte array to text, then parse the URL encoding yourself along with decoding the escape sequences.
Try using the BinaryRead method in the Request object:
http://www.w3schools.com/ASP/met_binaryread.asp
Other options are to write an ASP server component or ISAPI filter:
http://www.codeproject.com/KB/asp/cookie.aspx
