AWS S3 browser upload using HTTP POST gives invalid signature - javascript

I'm working on a website where users should be able to upload video files to AWS. To avoid unnecessary traffic I would like the user to upload directly to AWS (and not through the API server). To avoid exposing my secret key in the JavaScript, I generate a signature in the API. However, when I try to upload, AWS tells me that the signature does not match.
For signature generation I have been using http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-UsingHTTPPOST.html
On the backend I'm running C#.
I generate the signature using
string policy = $#"{{""expiration"":""{expiration}"",""conditions"":[{{""bucket"":""dennisjakobsentestbucket""}},[""starts-with"",""$key"",""""],{{""acl"":""private""}},[""starts-with"",""$Content-Type"",""""],{{""x-amz-algorithm"":""AWS4-HMAC-SHA256""}}]}}";
which generates the following
{"expiration":"2016-11-27T13:59:32Z","conditions":[{"bucket":"dennisjakobsentestbucket"},["starts-with","$key",""],{"acl":"private"},["starts-with","$Content-Type",""],{"x-amz-algorithm":"AWS4-HMAC-SHA256"}]}
based on http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html (I base64 encode the policy). I have tried to keep it very simple, just as a starting point.
For generating the signature, I use code found on the AWS site.
static byte[] HmacSHA256(String data, byte[] key)
{
    String algorithm = "HmacSHA256";
    KeyedHashAlgorithm kha = KeyedHashAlgorithm.Create(algorithm);
    kha.Key = key;
    return kha.ComputeHash(Encoding.UTF8.GetBytes(data));
}

static byte[] GetSignatureKey(String key, String dateStamp, String regionName, String serviceName)
{
    byte[] kSecret = Encoding.UTF8.GetBytes(("AWS4" + key).ToCharArray());
    byte[] kDate = HmacSHA256(dateStamp, kSecret);
    byte[] kRegion = HmacSHA256(regionName, kDate);
    byte[] kService = HmacSHA256(serviceName, kRegion);
    byte[] kSigning = HmacSHA256("aws4_request", kService);
    return kSigning;
}
Which I use like this:
byte[] signingKey = GetSignatureKey(appSettings["aws:SecretKey"], dateString, appSettings["aws:Region"], "s3");
byte[] signature = HmacSHA256(encodedPolicy, signingKey);
where dateString is in the format yyyyMMdd.
I POST information from JavaScript using
let xmlHttpRequest = new XMLHttpRequest();
let formData = new FormData();
formData.append("key", "<path-to-upload-location>");
formData.append("acl", signature.acl); // private
formData.append("Content-Type", "$Content-Type");
formData.append("AWSAccessKeyId", signature.accessKey);
formData.append("policy", signature.policy); //base64 of policy
formData.append("x-amz-credential", signature.credentials); // <accesskey>/20161126/eu-west-1/s3/aws4_request
formData.append("x-amz-date", signature.date);
formData.append("x-amz-algorithm", "AWS4-HMAC-SHA256");
formData.append("Signature", signature.signature);
formData.append("file", file);
xmlHttpRequest.open("post", "http://<bucketname>.s3-eu-west-1.amazonaws.com/");
xmlHttpRequest.send(formData);
I have been using UTF-8 everywhere, as prescribed by AWS. In their examples the signature is in hex format, which I have tried as well.
No matter what I try, I get a 403 error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
My policy on AWS has "s3:Get*", "s3:Put*"
Am I missing something or does it just work completely different than what I expect?
Edit: The answer below is one of the steps. The other is that AWS distinguishes between upper- and lowercase hex strings: 0xFF != 0xff in the eyes of AWS. They want the signature in all lowercase.

You are generating the signature using Signature Version 4, but you are constructing the form as though you were using Signature Version 2... well, sort of.
formData.append("AWSAccessKeyId", signature.accessKey);
That's V2. It shouldn't be here at all.
formData.append("x-amz-credential", signature.credentials); // <accesskey>/20161126/eu-west-1/s3/aws4_request
This is V4. Note the redundant submission of the AWS Access Key ID here and above. This one is probably correct, although the examples have capitalization like X-Amz-Credential.
formData.append("x-amz-algorithm", "AWS4-HMAC-SHA256");
That is also correct, except it may need to be X-Amz-Algorithm. (The example seems to imply that capitalization is ignored).
formData.append("Signature", signature.signature);
This one is incorrect. This should be X-Amz-Signature. V4 signatures are hex, so that is what you should have here. V2 signatures are base64.
There's a full V4 example here, which even provides you with an example AWS key and secret, date, region, bucket name, etc., that you can use with your code to verify that you indeed get the same response. The form won't actually work, but the important question is whether your code can generate the same form, policy, and signature.
For any given request, there is only ever exactly one correct signature. For any given policy, however, there may be more than one valid JSON encoding (due to JSON's flexibility with whitespace), and for any given JSON encoding there is only one possible valid base64 encoding of the policy. So three outcomes are possible when you run your code against the example data: if it generates exactly the same form and signature as shown in the example, your code is certified as working correctly; if it generates the same form and policy with a different signature, your code is proven invalid; and if it generates a different base64 encoding of the policy, the test proves nothing conclusive, because a different encoding necessarily changes the signature yet might still represent a valid policy.
Note that Signature V2 is only supported on older S3 regions, while Signature V4 is supported by all S3 regions; so, even though you could alternatively fix this by making your entire signing process use V2, that wouldn't be recommended.
Note also that "The request signature we calculated does not match the signature you provided. Check your key and signing method." does not tell you anything about whether the bucket policy or any user policies allow or deny the request. This error is not a permissions error. It is thrown prior to the permissions checks, based solely on the validity of the signature, not on whether the AWS Access Key ID is authorized to perform the requested operation, which is something that is only tested after the signature is validated.
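To make the lowercase-hex point concrete, here is a minimal Node.js sketch of the same V4 signing chain, using only the built-in crypto module (secretKey and encodedPolicy are placeholders for your own values):
const crypto = require('crypto');

// HMAC-SHA256 returning a raw Buffer
const hmac = (key, data) => crypto.createHmac('sha256', key).update(data, 'utf8').digest();

// Derive the V4 signing key: secret -> date -> region -> service -> "aws4_request"
function getSignatureKey(secretKey, dateStamp, region, service) {
    const kDate = hmac('AWS4' + secretKey, dateStamp); // dateStamp like "20161126"
    const kRegion = hmac(kDate, region);               // e.g. "eu-west-1"
    const kService = hmac(kRegion, service);           // "s3"
    return hmac(kService, 'aws4_request');
}

const secretKey = 'YOUR_SECRET_KEY';                  // placeholder
const encodedPolicy = '...base64 of policy JSON...';  // placeholder

// X-Amz-Signature is the HMAC of the base64-encoded policy, hex-encoded
const signingKey = getSignatureKey(secretKey, '20161126', 'eu-west-1', 's3');
const signature = crypto.createHmac('sha256', signingKey)
    .update(encodedPolicy, 'utf8')
    .digest('hex'); // Node emits lowercase hex, which is the form AWS expects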

I suggest you create an auth token pair with permission to POST only, and send an HTTP request like this:
require 'rest-client'
require 'base64'
require 'openssl'
require 'json'
require 'active_support/time' # for 3.hours.from_now

class S3Uploader
  def initialize
    @options = {
      aws_access_key_id: "ACCESS_KEY",
      aws_secret_access_key: "ACCESS_SECRET",
      bucket: "BUCKET",
      acl: "private",
      expiration: 3.hours.from_now.utc,
      max_file_size: 524288000
    }
  end

  def fields
    {
      :key => key,
      :acl => @options[:acl],
      :policy => policy,
      :signature => signature,
      "AWSAccessKeyId" => @options[:aws_access_key_id],
      :success_action_status => "201"
    }
  end

  def key
    @key ||= "temp/${filename}"
  end

  def url
    "http://#{@options[:bucket]}.s3.amazonaws.com/"
  end

  def policy
    Base64.encode64(policy_data.to_json).delete("\n")
  end

  def policy_data
    {
      expiration: @options[:expiration],
      conditions: [
        ["starts-with", "$key", ""],
        ["content-length-range", 0, @options[:max_file_size]],
        { bucket: @options[:bucket] },
        { acl: @options[:acl] },
        { success_action_status: "201" }
      ]
    }
  end

  def signature
    Base64.encode64(
      OpenSSL::HMAC.digest(
        OpenSSL::Digest.new("sha1"),
        @options[:aws_secret_access_key], policy
      )
    ).delete("\n")
  end
end

uploader = S3Uploader.new
puts uploader.fields
puts uploader.url

begin
  RestClient.post(uploader.url, uploader.fields.merge(file: File.new('51bb26652134e98eae931fbaa10dc3a1.jpeg'), :multipart => true))
rescue RestClient::ExceptionWithResponse => e
  puts e.response
end
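On the browser side, a minimal sketch of consuming these fields (an assumption: the server exposes uploader.fields and uploader.url to the client as JSON; note that S3 ignores form fields that appear after the file field, so the file must be appended last):
function uploadToS3(url, fields, file) {
    const formData = new FormData();
    // copy the signed fields generated by the server
    Object.entries(fields).forEach(([name, value]) => formData.append(name, value));
    // the file itself must be the last field in the form
    formData.append('file', file);
    return fetch(url, { method: 'POST', body: formData });
}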

Related

Which encoding does application/dns-message use?

I am writing a DNS-over-HTTPS server which should resolve custom names, not just proxy them to some other DoH server, like Google's. I am having trouble properly decoding the body of the request.
For example, I get the body of a request in binary format, specifically as an ArrayBuffer (Uint8Array) in JavaScript. I am using the following code to get the base64 form of the array:
function _arrayBufferToBase64(buffer) {
    var binary = '';
    var bytes = new Uint8Array(buffer);
    var len = bytes.byteLength;
    for (var i = 0; i < len; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    return btoa(binary);
}
And I get something like this as a result:
AAABAAABAAAAAAABCmFwbngtbWF0Y2gGZG90b21pA2NvbQAAAQABAAApEAAAAAAAAE4ADABKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
Now, per the RFC 8484 standard this should be decoded as base64url, but when I decode it as such, I get the following:
apnx-matchdotomicom)NJ
I also used this "tutorial" as a reference, but they decode a similarly formatted blob and I get similar nonsense as before.
There is very little to no information about something like this on the internet, and, if it is of any help, the DoH standard uses the application/dns-message media type for the body.
If anyone has some insight into what I am doing wrong or how I could edit the question to make it clearer, please help me, cheers :)
As stated in the RFC:
Definition of the "application/dns-message" Media Type
The data payload for the "application/dns-message" media type is a single message of the DNS on-the-wire format defined in Section 4.2.1 of [RFC1035], which in turn refers to the full wire format defined in Section 4.1 of that RFC.
So what you get is exactly what is sent on the wire in the normal DNS-over-port-53 case.
I would recommend you use a DNS library that should have a from_wire or similar method to which you can feed this content and get back some structured data.
Showing an example in Python with the content you gave:
In [1]: import base64
In [3]: import dns.message
In [5]: payload = 'AAABAAABAAAAAAABCmFwbngtbWF0Y2gGZG90b21pA2NvbQAAAQABAAApEAAAAAAAAE4ADABKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA='
In [7]: raw = base64.b64decode(payload)
In [9]: msg = dns.message.from_wire(raw)
In [10]: print(msg)
id 0
opcode QUERY
rcode NOERROR
flags RD
edns 0
payload 4096
option Generic 12
;QUESTION
apnx-match.dotomi.com. IN A
;ANSWER
;AUTHORITY
;ADDITIONAL
So your message is a DNS query for the A record type on name apnx-match.dotomi.com.
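Since the question is about JavaScript, roughly the same decoding can be sketched in Node.js; this assumes the dns-packet npm package (any DNS library with a from_wire/decode equivalent will do):
const dnsPacket = require('dns-packet');

// the base64 string from the question
const payload = 'AAABAAABAAAAAAABCmFwbngtbWF0Y2gGZG90b21pA2NvbQAAAQABAAApEAAAAAAAAE4ADABKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=';

// decode the wire-format bytes into a structured message
const msg = dnsPacket.decode(Buffer.from(payload, 'base64'));
console.log(msg.questions); // [ { name: 'apnx-match.dotomi.com', type: 'A', class: 'IN' } ]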
Also about:
I am writing a DNS-over-HTTPS server which should resolve custom names,
If you are not doing this just to learn (which is a fine goal), note that various open-source nameservers already implement DoH, so you don't need to reinvent it. For example: https://blog.nlnetlabs.nl/dns-over-https-in-unbound/

Signing JWT - do I do it wrong?

I'm trying to make a JWT generator in JavaScript for educational purposes. There is a jwt.io tool to create and/or validate JWT.
I'm struggling to get my results match the results from the validator. The problem is the signature.
Here's my code:
function base64url(input) {
    return btoa(typeof input === 'string' ? input : JSON.stringify(input))
        .replace(/=+$/, '')
        .replace(/\+/g, '-')
        .replace(/\//g, '_');
}

const JWT = {
    encode(header, payload, secret) {
        const unsigned = [base64url(header), base64url(payload)].join('.');
        return [unsigned, base64url(sha256.hmac(secret, unsigned))].join('.');
    }
};
};
To compute the HMAC-SHA256 I'm using the js-sha256 library with the sha256.hmac(key, value) signature. I compared it with online tools and it works fine.
Now, I test it with the following code:
const jwt = JWT.encode(
{
alg: 'HS256',
typ: 'JWT'
},
123,
'xxx'
);
The result I get is:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.MTIz.NzhlNTFmYzUxOGQ2YjNlZDFiOTM0ZGRhOTUwNDFmMzEwMzdlNmZkZWRhNGFlMjdlNDU3ZTZhNWRhYjQ1YzFiMQ
On the other hand, the result from jwt.io is:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.MTIz.eOUfxRjWs-0bk03alQQfMQN-b97aSuJ-RX5qXatFwbE
As you can see, the two out of three chunks of JWT are identical in my result and jwt.io result. The signature is different and if you ask me, the signature generated by it is surprisingly short. That tool also marks my own JWT as invalid.
I checked with online HMAC SHA256 generators and it looks like my code creates a valid signature, so:
base64url(sha256.hmac('xxx', 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.MTIz')) ===
'NzhlNTFmYzUxOGQ2YjNlZDFiOTM0ZGRhOTUwNDFmMzEwMzdlNmZkZWRhNGFlMjdlNDU3ZTZhNWRhYjQ1YzFiMQ'
Is jwt.io just broken or does it do it some other way?
I wouldn't say you're doing it wrong, but you missed a small but important detail.
The result from jwt.io is correct and the hash you calculate is also correct. But the signature you create with your hash is not correct.
The hash you calculate with sha256.hmac(secret, unsigned) is a large number, but the return value of the function is a hexadecimal string representation of that number. For the signature you need to base64url-encode the original bytes instead of their string representation.
I modified your code, so that it encodes the hash value directly to base64url (node.js version):
const JWT = {
    encode(header, payload, secret) {
        const unsigned = [base64url(header), base64url(payload)].join('.');
        const hash = sha256.hmac(secret, unsigned);
        console.log(hash);
        // decode the hex string to raw bytes, then base64url-encode them
        var signature = Buffer.from(hash, 'hex').toString('base64')
            .replace(/\+/g, '-')
            .replace(/\//g, '_')
            .replace(/=+$/m, '');
        return [unsigned, signature].join('.');
    }
};
or, if you don't use node.js, you can use this instead (as suggested by Robo Robok):
const JWT = {
    encode(header, payload, secret) {
        const unsigned = [base64url(header), base64url(payload)].join('.');
        return [unsigned, base64url(sha256.hmac(secret, unsigned).replace(/\w{2}/g, byte => String.fromCharCode(parseInt(byte, 16))))].join('.');
    }
};
The result is a token, which is identical to the one created with jwt.io:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.MTIz.eOUfxRjWs-0bk03alQQfMQN-b97aSuJ-RX5qXatFwbE
See also my answer here, in which I explained the steps to compare the results from different tools.
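For reference, the same signature can also be produced with Node's built-in crypto module alone (a sketch assuming Node >= 15.7, where 'base64url' is a supported digest encoding):
const crypto = require('crypto');

const unsigned = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.MTIz';
const signature = crypto.createHmac('sha256', 'xxx')
    .update(unsigned)
    .digest('base64url'); // encodes the raw HMAC bytes, not their hex string

console.log(signature); // eOUfxRjWs-0bk03alQQfMQN-b97aSuJ-RX5qXatFwbE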

Problem with AttachmentService of SAP Cloud SDK for JavaScript

Currently we use SAP REST API for uploading and managing attachments.
We want to replace the standard requests with the SDK because we had problems getting the connection through a CloudConnector with the respective proxy settings and because we also use the SDK for all other requests.
var attContentSetBuilder = AttachmentContentSet.builder();
attContentSetBuilder.documentInfoRecordDocNumber("10000000008");
attContentSetBuilder.documentInfoRecordDocPart("000");
attContentSetBuilder.documentInfoRecordDocType("YBO");
attContentSetBuilder.documentInfoRecordDocVersion("01");
attContentSetBuilder.businessObjectTypeName("DRAW");
attContentSetBuilder.fileName("TEST.pdf")
attContentSetBuilder.content(fileToBase64("C:\\TEST.pdf"));
var attContentSet = attContentSetBuilder.build();
var requestBuilder = new AttachmentContentSetRequestBuilder();
var contentSetRequester = requestBuilder.create(attContentSet);
contentSetRequester.withCustomHeaders({ key: 'slug', value: 'TEST.pdf' }).execute({XXX}).then ...
function fileToBase64(filename: string): string {
    var fs = require('fs');
    return fs.readFileSync(filename, 'utf8');
}
Will the content/body with the binary data be set that way? Does the header value slug also have to be set?
Does the Attachment Service also support GOS?
So far we get the error:
"Attachment name cannot be empty"
The error message reads like a message you get from the S/4HANA API, so it seems like there is a semantic problem with your request. Unfortunately, the API Business Hub is not very good at communicating the required fields for a request, but here are some pointers:
If you take a look at the entity definition, the following fields are non-nullable:
documentInfoRecordDocType: string;
documentInfoRecordDocNumber: string;
documentInfoRecordDocVersion: string;
documentInfoRecordDocPart: string;
logicalDocument: string;
archiveDocumentId: string;
linkedSapObjectKey: string;
businessObjectTypeName: string;
so maybe providing values for the ones you're missing solves the problem.
There is more documentation on this API here (I got there by going to the API's page on the Business Hub, clicking on "Details" and then on "Business Documentation" at the bottom of the page).
Your .withCustomHeaders looks off; I'm guessing what you wanted to do is: .withCustomHeaders({ slug: 'TEST.pdf' })
Bonus: the builder and request builder have a fluent API, so you can also use it like this:
const attContentSet = AttachmentContentSet.builder()
    .documentInfoRecordDocNumber("10000000008")
    .documentInfoRecordDocPart("000")
    .documentInfoRecordDocType("YBO")
    .documentInfoRecordDocVersion("01")
    .businessObjectTypeName("DRAW")
    .fileName("TEST.pdf")
    .content(fileToBase64("C:\\TEST.pdf"))
    .build();
That's a matter of taste, of course; personally, I find this a little easier to parse mentally.

Extracting gzip data in Javascript with Pako - encoding issues

I am trying to run what I expect is a very common use case:
I need to download a gzip file (of complex JSON datasets) from Amazon S3, and decompress(gunzip) it in Javascript. I have everything working correctly except the final 'inflate' step.
I am using Amazon API Gateway, and have confirmed that the Gateway is properly transferring the compressed file (I used curl and 7-Zip to verify the resulting data coming out of the API). Unfortunately, when I try to inflate the data in JavaScript with Pako, I get errors.
Here is my code (note: response.data is the binary data transferred from AWS):
apigClient.dataGet(params, {}, {})
    .then( (response) => {
        console.log(response); // shows response including header and data
        const result = pako.inflate(new Uint8Array(response.data), { to: 'string' });
        // ERROR HERE: 'buffer error'
    }).catch( (itemGetError) => {
        console.log(itemGetError);
    });
I also tried a version that splits the binary data input into an array, by adding the following before the inflate:
const charData = response.data.split('').map(function(x){return x.charCodeAt(0); });
const binData = new Uint8Array(charData);
const result = pako.inflate(binData, { to: 'string' });
//ERROR: incorrect header check
I suspect I have some sort of issue with the encoding of the data and I am not getting it into the proper format for Uint8Array to be meaningful.
Can anyone point me in the right direction to get this working?
For clarity:
With the code as listed above, I get a buffer error. If I drop the Uint8Array and just try to process 'result.data', I get the error 'incorrect header check', which is what makes me suspect that the encoding/format of my data is the issue.
The original file was compressed in Java using GZIPOutputStream with UTF-8 and then stored as a static file (i.e. randomname.gz).
The file is transferred through the AWS Gateway as binary, so it is exactly the same coming out as the original file, so 'curl --output filename.gz {URLtoS3Gateway}' === downloaded file from S3.
I had the same basic issue when I used the gateway to encode the binary data as 'base64', but did not try a whole lot around that effort, as it seems easier to work with the "real" binary data than to add the base64 encode/decode in the middle. If that is a needed step, I can add it back in.
I have also tried some of the example processing found halfway through this issue: https://github.com/nodeca/pako/issues/15, but that didn't help (I might be misunderstanding the binary format v. array v base64).
I was able to figure out my own problem. It was related to the format of the data being read in by JavaScript (either JavaScript itself or the Angular HttpClient implementation). I was reading the data in a "binary" format, but it was not the same as that recognized/used by Pako. When I read the data in as base64 and then converted it to binary with 'atob', I was able to get it working. Here is what I actually implemented (starting with fetching from the S3 file storage).
1) Build AWS API Gateway that will read a previously stored *.gz file from S3.
Create a standard "get" API request to S3 that supports binary.
(http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html)
Make sure the Gateway will recognize the input type by setting 'Binary types' (application/gzip worked for me, but others like application/binary-octet and image/png should work for other types of files besides *.gz). NOTE: that setting is under the main API selections list on the left of the API config screen.
Set the 'Content Handling' to "Convert to text (if needed)" by selecting the API Method/{GET} -> Integration Request box and updating the 'Content Handling' item. (NOTE: the example in the link above recommends "passthrough". DON'T use that, as it will pass the unreadable binary format.) This is the step that actually converts from binary to base64.
At this point you should be able to download a base64 version of your binary file via the URL (test in a browser or with curl).
2) I then had the API Gateway generate the SDK and used the respective apiGClient.{get} call.
3) Within the call, translate base64 -> binary -> Uint8Array and then decompress/inflate it. My code for that:
apigClient.myDataGet(params, {}, {})
    .then( (response) => {
        // HttpClient result is in response.data
        // convert the incoming base64 -> binary string
        const strData = atob(response.data);
        // split it into an array rather than a "string"
        const charData = strData.split('').map(function(x){ return x.charCodeAt(0); });
        // convert to a typed byte array
        const binData = new Uint8Array(charData);
        // inflate
        const result = pako.inflate(binData, { to: 'string' });
        console.log(result);
    }).catch( (itemGetError) => {
        console.log(itemGetError);
    });
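As an aside, when the file can be fetched as raw binary (for example directly from S3 with CORS enabled, bypassing the Gateway's base64 conversion), a sketch without the atob step might look like this (the URL is a placeholder):
fetch('https://example-bucket.s3.amazonaws.com/randomname.gz')
    .then(response => response.arrayBuffer())
    .then(buffer => {
        // pako accepts a Uint8Array view over the raw bytes directly
        const result = pako.inflate(new Uint8Array(buffer), { to: 'string' });
        console.log(result);
    })
    .catch(err => console.log(err));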

Sending file direct from browser to S3 but changing file name

I am using signed, authorized S3 uploads so that users can upload files directly from their browser to S3, bypassing my server. This presently works, but the file name is the same as on the user's machine. I'd like to save it on S3 under a different name.
The formdata I post to amazon looks like this:
var formData = new FormData();
formData.append('key', targetPath); // e.g. /path/inside/bucket/myFile.mov
formData.append('AWSAccessKeyId', s3Auth.AWSAccessKeyId); // aws public key
formData.append('acl', s3Auth.acl); // e.g. 'public-read'
formData.append('policy', s3Auth.policy); // s3 policy including ['starts-with', '$key', '/path/inside/bucket/']
formData.append('signature', s3Auth.signature); // base64 sha1 hash of private key and base64 policy JSON
formData.append('success_action_status ', 200); // response code 200 on success
formData.append('file', file.slice()); // e.g. /path/on/user/computer/theirFile.mov
However instead of the file ending up at:
https://s3.amazonaws.com/mybucket/path/inside/bucket/myFile.mov
It ends up as:
https://s3.amazonaws.com/mybucket/path/inside/bucket/theirFile.mov
Note it has their filename but my base path.
I would like it to have the filename I specify as well.
UPDATE: this was working all along. I simply had other code that copied from one bucket to another and restored the original file name, thus confusing me.
Are you sure about the contents of targetPath and where that data comes from?
The behavior you are describing is what should happen in one particular case where targetPath doesn't actually contain /path/inside/bucket/myFile.mov.
I would suggest that targetPath actually contains the value /path/inside/bucket/${filename} -- and by that I mean, literally, the characters $ { f i l e n a m e } are at the end of that string, instead of the filename you intend.
If that's true, that's exactly how it's supposed to work.
If you do not know the name of the file a user will upload, the key value can include the special variable ${filename} which will be replaced with the name of the uploaded file. For example, the key value uploads/${filename} will become the object name uploads/Birthday Cake.jpg if the user uploads a file called Birthday Cake.jpg.
— https://aws.amazon.com/articles/1434
If you populate that variable with the literal filename you want to see in S3, then the uploads should behave as you expect, using your filename instead of the filename on the uploader's computer.
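In other words, the two variants look like this (paths are placeholders, reusing the formData from the question; use one or the other):
// keep the uploader's original filename: "${filename}" here is literal text
// that S3 substitutes, not a JavaScript template string
formData.append('key', '/path/inside/bucket/${filename}');

// force the name you choose:
formData.append('key', '/path/inside/bucket/myFile.mov');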
Also, a more secure approach would be to eliminate the key 'starts-with' logic in your policy, and instead explicitly sign a policy (dynamically) for each upload event, with the specific key you want the user to upload. Otherwise, it's not impossible to exploit this form to overwrite other files within the same key prefix.
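A sketch of what such a per-upload policy might look like, with an exact-match key condition in place of the 'starts-with' condition (bucket, key, acl, and expiration are placeholders):
{
  "expiration": "2016-12-01T12:00:00Z",
  "conditions": [
    { "bucket": "mybucket" },
    { "key": "path/inside/bucket/myFile.mov" },
    { "acl": "public-read" }
  ]
}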
