Javascript ArrayBuffer equivalent in Swift or iOS

I want to know what the JavaScript ArrayBuffer equivalent is in Swift or iOS.
Basically, I have a Swift struct that I want to store as a blob (data laid out according to its memory layout) and pass to a JavaScript ArrayBuffer, where I can extract the data based on a defined layout.
I haven't yet managed to save the struct from Swift as a binary/memory blob. I'm struggling to understand the memory layout configuration. I thought it would be similar to structs in C, but it isn't.
Any help or pointers would be appreciated. Thanks.

I'm not an expert in Javascript, so I may be talking nonsense.
It seems like you can achieve what you want if your struct S conforms to the Codable protocol. Then you can transform it into a Data blob using an encoder, like this:
let encoder = JSONEncoder()
do {
    let data = try encoder.encode(s)
    // do what you want with the blob
} catch {
    // handle error
}
And back to S, like this:
let decoder = JSONDecoder()
do {
    let s = try decoder.decode(S.self, from: data)
} catch {
    // handle error
}
If S is Codable, [S] (an Array<S>) will also be Codable.
You can probably pass the data to your script as a String; then you'll have to transform your Data into a string, e.g. with JSONSerialization.
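On the JavaScript side, the received string can then be parsed back into an object with JSON.parse. A minimal sketch, where the field names are made up for illustration (they would match whatever your Codable struct S defines):

```javascript
// jsonString stands in for the string handed over from Swift;
// the fields are illustrative, not from the original struct.
const jsonString = '{"version":1,"items":[1,2,3]}';
const s = JSON.parse(jsonString);
console.log(s.version); // 1
console.log(s.items.length); // 3
```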

Related

Which encoding does application/dns-message use?

I am writing a DNS-over-HTTPS server which should resolve custom names, not just proxy them to some other DoH server like Google's. I am having trouble properly decoding the body of the request.
For example, I get the body of a request in binary format, specifically (in JavaScript) as a Uint8Array over an ArrayBuffer. I am using the following code to get the base64 form of the array:
function _arrayBufferToBase64(buffer) {
  var binary = '';
  var bytes = new Uint8Array(buffer);
  var len = bytes.byteLength;
  for (var i = 0; i < len; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  return btoa(binary);
}
And I get something like this as a result:
AAABAAABAAAAAAABCmFwbngtbWF0Y2gGZG90b21pA2NvbQAAAQABAAApEAAAAAAAAE4ADABKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
Now, per the RFC 8484 standard, this should be decoded as base64url, but when I decode it as such, I get the following:
apnx-matchdotomicom)NJ
I also used this "tutorial" as a reference, but they decode a similarly formatted blob and I get similar nonsense as before.
There is little to no information about something like this on the internet. If it is of any help, the DoH standard uses the application/dns-message media type for the body.
If anyone has some insight on what I am doing wrong or how I could edit the question to make it more clear, please help me, cheers :)
As stated in the RFC:
Definition of the "application/dns-message" Media Type
The data payload for the "application/dns-message" media type is a
single message of the DNS on-the-wire format defined in Section 4.2.1
of [RFC1035], which in turn refers to the full wire format defined in
Section 4.1 of that RFC.
So what you get is exactly what is sent on the wire in the normal DNS-over-port-53 case.
I would recommend using a DNS library; it should have a from_wire or similar method to which you can feed this content and get back structured data.
Showing an example in Python with the content you gave:
In [1]: import base64
In [3]: import dns.message
In [5]: payload = 'AAABAAABAAAAAAABCmFwbngtbWF0Y2gGZG90b21pA2NvbQAAAQABAAApEAAAAAAAAE4ADABKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA='
In [7]: raw = base64.b64decode(payload)
In [9]: msg = dns.message.from_wire(raw)
In [10]: print(msg)
id 0
opcode QUERY
rcode NOERROR
flags RD
edns 0
payload 4096
option Generic 12
;QUESTION
apnx-match.dotomi.com. IN A
;ANSWER
;AUTHORITY
;ADDITIONAL
So your message is a DNS query for the A record type on name apnx-match.dotomi.com.
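For completeness, the same header fields can be read in JavaScript with nothing but Node.js built-ins. This is only a sketch that decodes the fixed 12-byte header and the question name per RFC 1035 §4.1.1; a real server should still use a full DNS library:

```javascript
// Decode the DNS header and question name from the base64 payload above.
const payload = 'AAABAAABAAAAAAABCmFwbngtbWF0Y2gGZG90b21pA2NvbQAAAQABAAApEAAAAAAAAE4ADABKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=';
const raw = Buffer.from(payload, 'base64');

// Header: six big-endian 16-bit fields (RFC 1035 §4.1.1).
const id = raw.readUInt16BE(0);
const flags = raw.readUInt16BE(2);    // 0x0100 = RD bit
const qdcount = raw.readUInt16BE(4);  // number of questions

// The question name follows as length-prefixed labels ending in a 0 byte.
let offset = 12;
const labels = [];
while (raw[offset] !== 0) {
  const len = raw[offset];
  labels.push(raw.toString('ascii', offset + 1, offset + 1 + len));
  offset += len + 1;
}
console.log(id, qdcount, labels.join('.'));
// 0 1 apnx-match.dotomi.com
```

This matches the Python output: id 0, flags RD, one question for apnx-match.dotomi.com.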
Also about:
I am writing DNS-over-HTTPS server which should resolve custom names,
If you don't do that to learn (which is a fine goal), note that there are already various open-source nameserver implementations that do DoH, so you don't need to reinvent it. For example: https://blog.nlnetlabs.nl/dns-over-https-in-unbound/

Swift 5 - How to convert LONGBLOB/Buffer into Data

I am currently working on a project for school.
I have written an API using Express connected to a MySQL database, and now I am writing the iOS app.
My problem is that I need to save profile pictures. So I saved the PNG data of the picture into a **LONGBLOB** column in the db, and I want to recreate the image as a **UIImage**.
To do that I am trying to convert the buffer into ```Data```
So, the API is returning a buffer created that way:
let buffer = Buffer.from(ppData.data, 'binary').toString('base64');
And on the iOS side I tried:
guard let data = dict["data"] as? Data else {return nil}
Where dict["data"] is the buffer returned by the API.
But it always enters the "else" part.
What am I doing wrong?
Edit:
As was said in the comments, I decoded the Base64-encoded string. Now the data is decoded, but creating a UIImage from it fails, without any details. What I tried is:
let image = UIImage(from: base64DecodedData)
For example:
guard let strData = dict["data"] as? String else {
    return nil
}
guard let data = Data(base64Encoded: strData, options: .ignoreUnknownCharacters) else {
    return nil
}
guard let picture = UIImage(data: data) else {
    return nil
}
Thanks.
The mistake was not in the Swift code but in my API and database structure. After reading some MySQL and Node.js documentation, I switched from LONGBLOB (which is totally oversized) to MEDIUMTEXT.
Also, in the API I was trying to create a buffer from binary data rather than from base64-encoded string data, so I removed this line:
let buffer = Buffer.from(ppData.data, 'binary').toString('base64');
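The rule the fix relies on is that binary data survives transport as a string only when both sides agree the string is base64. A tiny Node.js sketch of the round trip (the four bytes are just the PNG magic number, used as sample data):

```javascript
// Sample binary data: the first four bytes of a PNG file.
const pngBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]);

// API side: encode once to a base64 string (safe to store in MEDIUMTEXT).
const b64 = pngBytes.toString('base64');
console.log(b64); // "iVBORw=="

// Client side: decoding the base64 string recovers the original bytes;
// this mirrors what Data(base64Encoded:) does in the Swift code above.
const roundTrip = Buffer.from(b64, 'base64');
console.log(roundTrip.equals(pngBytes)); // true
```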

How to deserialize dumped BSON with arbitrarily many documents in JavaScript?

I have a BSON file that comes from a mongoexport of a database. Let's assume the database is todo and the collection is items. Now I want to load the data offline into my RN app. Since the collection may contain arbitrarily many documents (let's say 2 currently), I want a method that parses the file however many documents it contains.
I have tried the following methods:
Use an external bsondump executable.
We can convert the file to JSON using an external command:
bsondump --outFile items.json items.bson
But I am developing a mobile app, so invoking a third-party executable via a shell command is not ideal. Plus, the output contains several one-line JSON objects on separate lines, so the output is technically not a valid JSON file, and parsing it afterwards is not graceful.
Use deserialize in js-bson library
According to the js-bson documentation, we can do
const bson = require('bson')
const fs = require('fs')
bson.deserialize(fs.readFileSync(PATH_HERE))
But this raises an error
Error: buffer length 173 must === bson size 94
and by adding this option,
bson.deserialize(fs.readFileSync(PATH_HERE), {
  allowObjectSmallerThanBufferSize: true
})
the error is resolved, but only the first document is returned. Because the documentation doesn't mention that this function can only parse one-document collections, I wonder if there is some option that enables reading multiple documents.
Use deserializeStream in js-bson
let docs = []
bson.deserializeStream(fs.readFileSync(PATH_HERE), 0, 2, docs, 0)
But this method requires the document count as a parameter (2 here).
Use bson-stream library
I am actually using react-native-fetch-blob instead of fs, and according to their documentation, the stream object does not have a pipe method, which is the one and only method demonstrated in the bson-stream doc. So although this method does not require the number of documents, I am confused about how to use it.
// fs
const BSONStream = require('bson-stream');
fs.createReadStream(PATH_HERE).pipe(new BSONStream()).on('data', callback);

// RNFetchBlob
const RNFetchBlob = require('react-native-fetch-blob');
RNFetchBlob.fs.readStream(PATH_HERE, ENCODING)
  .then(stream => {
    stream.open();
    stream.can_we_pipe_here(new BSONStream());
    stream.onData(callback);
  });
Also, I'm not sure what ENCODING above should be.
I have read the source code of js-bson and figured out a way to solve the problem. I think it's worth keeping a detailed record here:
Approach 1
Split the documents ourselves, and feed them to the parser one by one.
BSON internal format
Let's say the .json dump of our todo/items.bson is
{_id: "someid#1", content: "Launch a manned rocket to the sun"}
{_id: "someid#2", content: "Wash my underwear"}
Which clearly violates JSON syntax, because there is no outer object wrapping things together.
The internal BSON is of similar shape; BSON allows this kind of multi-object stuffing in one file.
For each document, the four leading bytes indicate the length of that document, including the four-byte prefix itself and the suffix. The suffix is simply a 0 byte.
The final BSON file resembles
LLLLDDDDDDD0LLLLDDD0LLLLDDDDDDDDDDDDDDDDDDDDDD0...
where L is length, D is binary data, 0 is literally 0.
The algorithm
Therefore, we can use a simple algorithm: read the document length, call bson.deserialize with allowObjectSmallerThanBufferSize to get the first document from the start of the buffer, then slice that document off and repeat.
About encoding
One extra thing I mentioned is encoding, in the React Native context. The libraries dealing with React Native persistence all seem to lack support for reading a raw buffer from a file. The closest choice we have is base64, which is a string representation of any binary file. We then use Buffer to convert the base64 strings to buffers and feed them into the algorithm above.
The code
deserialize.js
const BSON = require('bson');
function _getNextObjectSize(buffer) {
  // The first four bytes of a BSON document are its total size,
  // stored as a little-endian int32.
  return buffer[0] | (buffer[1] << 8) | (buffer[2] << 16) | (buffer[3] << 24);
}
function deserialize(buffer, options) {
  let _buffer = buffer;
  let _result = [];
  while (_buffer.length > 0) {
    let nextSize = _getNextObjectSize(_buffer);
    if (_buffer.length < nextSize) {
      throw new Error("Corrupted BSON file: the last object is incomplete.");
    } else if (_buffer[nextSize - 1] !== 0) {
      throw new Error(`Corrupted BSON file: the ${_result.length + 1}-th object does not end with 0.`);
    }
    let obj = BSON.deserialize(_buffer, {
      ...options,
      allowObjectSmallerThanBufferSize: true,
      // BSON supports raw buffers as a data type; this option keeps such
      // buffers as-is, which is valid in a JS object but not in JSON.
      promoteBuffers: true
    });
    _result.push(obj);
    _buffer = _buffer.slice(nextSize);
  }
  return _result;
}
module.exports = deserialize;
App.js
import RNFetchBlob from 'rn-fetch-blob';
const deserialize = require('./deserialize.js');
const Buffer = require('buffer/').Buffer;
RNFetchBlob.fs.readFile('...', 'base64')
  .then(b64Data => Buffer.from(b64Data, 'base64'))
  .then(bufferData => deserialize(bufferData))
  .then(jsData => { /* Do anything here */ });
Approach 2
The above method reads the file as a whole. Sometimes, when we have a very large .bson file, the app may crash. Of course, one can change readFile to readStream above and add various checks to determine whether the current chunk contains the end of a document. This can be troublesome, and we would effectively be rewriting the bson-stream library!
So instead, we can create an RNFetchBlob file stream and a bson-stream parsing stream. This brings us back to attempt #4 in the question.
After reading the source code, it turns out the BSON parsing stream inherits from a Node.js Transform stream. Instead of piping, we can manually forward chunks and events from the file stream's onData and onEnd into the parser's write and end.
Since bson-stream does not support passing options through to the underlying bson library calls, one may want to tweak the library source a little in their own project.

Extracting gzip data in Javascript with Pako - encoding issues

I am trying to implement what I expect is a very common use case:
I need to download a gzip file (of complex JSON datasets) from Amazon S3 and decompress (gunzip) it in JavaScript. I have everything working correctly except the final 'inflate' step.
I am using Amazon API Gateway, and have confirmed that the Gateway is properly transferring the compressed file (I used curl and 7-Zip to verify the resulting data coming out of the API). Unfortunately, when I try to inflate the data in JavaScript with pako, I get errors.
Here is my code (note: response.data is the binary data transferred from AWS):
apigClient.dataGet(params, {}, {})
  .then((response) => {
    console.log(response); // shows response including header and data
    const result = pako.inflate(new Uint8Array(response.data), { to: 'string' });
    // ERROR HERE: 'buffer error'
  }).catch((itemGetError) => {
    console.log(itemGetError);
  });
I also tried a version that splits the binary input into an array, by adding the following before the inflate:
const charData = response.data.split('').map(function(x){return x.charCodeAt(0); });
const binData = new Uint8Array(charData);
const result = pako.inflate(binData, { to: 'string' });
//ERROR: incorrect header check
I suspect I have some sort of issue with the encoding of the data and I am not getting it into the proper format for Uint8Array to be meaningful.
Can anyone point me in the right direction to get this working?
For clarity:
As the code above is listed, I get a buffer error. If I drop the Uint8Array and just try to process 'response.data', I get the error 'incorrect header check', which is what makes me suspect that the encoding/format of my data is the issue.
The original file was compressed in Java using GZIPOutputStream with UTF-8 and then stored as a static file (i.e. randomname.gz).
The file is transferred through the AWS Gateway as binary, so it is exactly the same coming out as the original file: 'curl --output filename.gz {URLtoS3Gateway}' === downloaded file from S3.
I had the same basic issue when I used the gateway to encode the binary data as 'base64', but did not try a whole lot around that effort, as it seems easier to work with the "real" binary data than to add the base64 encode/decode in the middle. If that is a needed step, I can add it back in.
I have also tried some of the example processing found halfway through this issue: https://github.com/nodeca/pako/issues/15, but that didn't help (I might be misunderstanding the binary format v. array v base64).
I was able to figure out my own problem. It was related to the format of the data being read in by JavaScript (either JavaScript itself or the Angular HttpClient implementation). I was reading the data in a "binary" format, but it was not the same as the one recognized/used by pako. When I read the data in as base64 and then converted it to binary with 'atob', I was able to get it working. Here is what I actually implemented (starting from fetching the file from S3 storage).
1) Build an AWS API Gateway that will read a previously stored *.gz file from S3.
Create a standard "get" API request to S3 that supports binary.
(http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html)
Make sure the Gateway will recognize the input type by setting 'Binary types' (application/gzip worked for me, but others like application/binary-octet and image/png should work for other types of files besides *.gz). NOTE: that setting is under the main API selections list on the left of the API config screen.
Set 'Content Handling' to "Convert to text (if needed)" by selecting the API Method/{GET} -> Integration Request box and updating the 'Content Handling' item. (NOTE: the example in the link above recommends "passthrough". DON'T use that, as it will pass through the unreadable binary format.) This is the step that actually converts from binary to base64.
At this point you should be able to download a base64 version of your binary file via the URL (test in a browser or with curl).
2) I then had the API Gateway generate the SDK and used the respective apigClient.{get} call.
3) Within the call, translate base64 -> binary -> Uint8Array, and then decompress/inflate it. My code for that:
apigClient.myDataGet(params, {}, {})
  .then((response) => {
    // HttpClient result is in response.data
    // convert the incoming base64 -> binary string
    const strData = atob(response.data);
    // split it into an array rather than a "string"
    const charData = strData.split('').map(function (x) { return x.charCodeAt(0); });
    // convert to a typed array
    const binData = new Uint8Array(charData);
    // inflate
    const result = pako.inflate(binData, { to: 'string' });
    console.log(result);
  }).catch((itemGetError) => {
    console.log(itemGetError);
  });

Creating a json obj from a string when working without a net connection?

I have a JSON object returned from a third-party API; it looks like:
{"version":"1.0","encoding":"UTF-8"}
I'm going to be working on my project without a network connection, so I have to do everything locally. How can I create an instance of a JSON object locally for testing? Say I copy the above string; can I do something like:
var json = null;
if (debugging_locally) {
  json = new jsonObj('{"version":"1.0","encoding":"UTF-8"}');
} else {
  json = doAjaxCall();
}
doStuffWithJsonObj(json);
so I just want to create a json object from a stored string if debugging locally - how can I do that?
Thanks
Simple as this:
if (debugging_locally) {
  json = {"version":"1.0","encoding":"UTF-8"};
}
JSON is valid JavaScript syntax.
Therefore, you can paste the JSON directly into the JavaScript (not as a string) and assign it to a variable.
Take a look at Resig's post as well. He covers some new JSON parsing capabilities that are currently in the JS engines of Safari, WebKit, Chrome, Firefox.
This way you can test a JSON string that you will be expecting from a web-service, your API etc.
eg.
instead of:
json = new jsonObj('{"version":"1.0","encoding":"UTF-8"}');
you could do:
json = JSON.parse('{"version":"1.0","encoding":"UTF-8"}');
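Putting both answers together: the object literal and the JSON.parse call produce equivalent plain objects, so either works in the local-debugging branch. A quick sketch:

```javascript
// Both forms yield an equivalent plain JavaScript object.
const fromLiteral = { "version": "1.0", "encoding": "UTF-8" };
const fromString = JSON.parse('{"version":"1.0","encoding":"UTF-8"}');

console.log(fromString.version); // "1.0"
console.log(fromLiteral.encoding === fromString.encoding); // true
```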
