I'm trying to set up a local MinIO instance to upload and read files. I'm using pre-signed URLs to retrieve and upload files. The problem is that when I make a request to the URL I get a SignatureDoesNotMatch response. But when I get a pre-signed URL from the MinIO admin UI, I am able to download an image. It works when I connect to a Cloudflare R2 instance, but I don't want to use that on my local machine, nor do I want to use it in CI. Is my configuration wrong? I can't seem to find the issue.
My .env file
STORAGE_ENDPOINT="http://localhost:9000"
STORAGE_ACCESS_KEY_ID="user"
STORAGE_SECRET_ACCESS_KEY="password"
My docker-compose.yaml file
services:
  storage:
    container_name: coespace-storage
    image: minio/minio
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - coespace-storage:/data
    environment:
      MINIO_ACCESS_KEY: user
      MINIO_SECRET_KEY: password
      MINIO_DEFAULT_BUCKETS: 'coespace-studio'
    command: server --address 0.0.0.0:9000 --console-address 0.0.0.0:9001 /data
  # more unrelated services...
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

function createClient() {
  return new S3Client({
    region: 'auto',
    endpoint: process.env.STORAGE_ENDPOINT,
    forcePathStyle: true,
    credentials: {
      accessKeyId: process.env.STORAGE_ACCESS_KEY_ID,
      secretAccessKey: process.env.STORAGE_SECRET_ACCESS_KEY,
    },
  });
}

const s3 = createClient();

export function getPreSignedDownloadUrl(key: string) {
  return getSignedUrl(
    s3,
    new GetObjectCommand({
      Bucket: 'my-bucket',
      Key: key,
    }),
    {
      expiresIn: 60 * 60, // expires in an hour
    }
  );
}

export function getPreSignedUploadUrl(key: string) {
  return getSignedUrl(
    s3,
    new PutObjectCommand({
      Bucket: 'my-bucket',
      Key: key,
    }),
    {
      expiresIn: 60 * 60, // expires in an hour
    }
  );
}
It looks like you are using the AWS SDK to access the MinIO service on a non-default port. The SDK has a bug where the port is ignored when the request is signed, which results in an incorrect Authorization header: presigning an endpoint with a port doesn't work.
This is how I bypass the bug (add a custom signer so the host header is signed with the port included):
import { SignatureV4 } from '@aws-sdk/signature-v4';
import { Sha256 } from '@aws-crypto/sha256-browser';
import { HttpRequest } from '@aws-sdk/types';
import { S3Client, ListBucketsCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'xxxxxx',
    secretAccessKey: 'xxxxxx',
  },
  endpoint: 'http://127.0.0.1:9000',
  forcePathStyle: true,
  signer: async () => ({
    sign: async (request: HttpRequest) => {
      // Re-add the port to the host header before signing
      request.headers['host'] = `${request.hostname}:${request.port}`;

      const signatureV4 = new SignatureV4({
        credentials: {
          accessKeyId: 'xxxxxx',
          secretAccessKey: 'xxxxxx',
        },
        region: 'us-east-1',
        service: 's3',
        sha256: Sha256,
      });

      const authorizedRequest = await signatureV4.sign(request);
      return authorizedRequest;
    },
  }),
});
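With this client, ordinary SDK calls are signed with the host:port header included; a quick sanity check looks roughly like this (the output obviously depends on your buckets):

const buckets = await s3.send(new ListBucketsCommand({}));
console.log(buckets.Buckets);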
This has since been fixed in some AWS SDK versions, see https://github.com/minio/minio/issues/15693
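A quick way to check whether the SDK version you have installed is affected is to presign a GET against the local MinIO endpoint (with its port) and fetch it. A rough sketch, assuming the credentials and bucket from the question above, a placeholder object key, and Node 18+ for the built-in fetch:

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const client = new S3Client({
  region: 'us-east-1',
  endpoint: 'http://localhost:9000',
  forcePathStyle: true,
  credentials: { accessKeyId: 'user', secretAccessKey: 'password' },
});

const url = await getSignedUrl(client, new GetObjectCommand({ Bucket: 'coespace-studio', Key: 'some-object' }), { expiresIn: 60 });
const res = await fetch(url);
console.log(res.status); // 403 (SignatureDoesNotMatch) on affected versions; 200 or 404 once fixed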
Related
I have a React.js project created using create-react-app and an AWS S3 bucket in which I've saved some images that I want to display on my website.
I have created an aws.js file where I configure the client and make the call like this:
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const REGION = 'eu-central-1';

const credentials = {
  accessKeyId: accessKeyId,
  privateKeyId: privateKeyId,
};

const config = {
  region: REGION,
  credentials: credentials,
};

const bucketName = {
  Bucket: bucketName,
};

const s3Client = new S3Client(config);

export const run = async () => {
  try {
    const command = new ListObjectsV2Command(bucketName);
    const data = await s3Client.send(command);
    console.log("SUCCESS\n", data);
  } catch (err) {
    console.log("ERROR\n", err);
  }
};
I have also created a .env file where I saved the keys, both with and without the REACT_APP prefix, but the result is the same: the credentials are reported as invalid.
I've checked and rechecked the credentials ten times, and I also created a new user and used those keys, but nothing changed. I also configured CORS to allow access from my localhost.
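For reference, this is roughly how I read the keys from the .env file (the actual variable names are different; create-react-app only exposes variables that start with REACT_APP_):

const accessKeyId = process.env.REACT_APP_ACCESS_KEY_ID;
const privateKeyId = process.env.REACT_APP_SECRET_ACCESS_KEY;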
What am I doing wrong? And is there complete documentation, from A to Z, on how to use AWS services? Including v3, the API docs, credential setup, and everything.
P.S. It's my first time using AWS, so some docs would be much appreciated. Thanks in advance.
UPDATE---
I tried the AWS JavaScript SDK v2 and now it works. Here is the code I used to list objects inside a bucket.
It only works when I use AWS.config.update, though; if I pass the configuration to the S3 constructor it still throws an error.
const AWS = require('aws-sdk');

AWS.config.update({
  region: region,
  accessKeyId: accessKeyId,
  secretAccessKey: secretAccessKey
});

let s3 = new AWS.S3();

export const testFnc = () => {
  s3.listObjects({
    Bucket: 'artgalleryszili.digital'
  }, (err, res) => {
    if (err) {
      console.log(err);
    } else {
      console.log(res);
    }
  });
};
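For completeness, this is roughly the constructor-based variant (continuing from the snippet above, with the same placeholder variables) that still throws the credentials error for me:

let s3 = new AWS.S3({
  region: region,
  accessKeyId: accessKeyId,
  secretAccessKey: secretAccessKey
});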
I can generate the presigned URL following the steps described in this section, so I wanted to test uploading a specific image, marble.jpg. I used Postman to test the upload: I copied the presigned URL and hit the endpoint with a PUT request, and I got this error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
  <Key>records/marble_cave.jpg</Key>
  <BucketName>bucket</BucketName>
  <Resource>/bucket/records/marble.jpg</Resource>
  <RequestId>17E3999B521ABB65</RequestId>
  <HostId>50abb07a-2ad0-4948-96e0-23403f661cba</HostId>
</Error>
The following resources are set up:
I'm using the MinIO server to test this locally.
I'm using version 3 of the AWS SDK for Node.js.
I've triple-checked my credentials (simple MinIO creds with no special characters), and I'm definitely making a PUT request.
So, the questions are:
How do I set the signatureVersion using the new JavaScript AWS SDK version 3? (In v3, presigned URLs are generated with getSignedUrl, imported from '@aws-sdk/s3-request-presigner'.)
What causes might there be for this error to occur?
The code I use for presigned URL generation is:
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3';

const s3Client = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
  endpoint: 'http://172.21.0.2:9000',
  forcePathStyle: true,
});

const bucketParams = {
  Bucket: 'myBucket',
  Key: `marbles.jpg`,
};

const command = new PutObjectCommand(bucketParams);

const signedUrl = await getSignedUrl(s3Client, command, {
  expiresIn: 10000,
});
I stumbled on this issue myself a year ago. The new v3 SDK has a bug: it doesn't take the port into consideration when signing a URL.
See https://github.com/aws/aws-sdk-js-v3/issues/2726
The workaround I ended up implementing overrides getSignedUrl in my code and adds the missing port, as follows:
import { BuildMiddleware, MetadataBearer, RequestPresigningArguments } from '@aws-sdk/types';
import { Client, Command } from '@aws-sdk/smithy-client';
import { HttpRequest } from '@aws-sdk/protocol-http';
import { formatUrl } from '@aws-sdk/util-format-url';
import { S3RequestPresigner } from '@aws-sdk/s3-request-presigner';

export const getSignedUrl = async <
  InputTypesUnion extends object,
  InputType extends InputTypesUnion,
  OutputType extends MetadataBearer = MetadataBearer
>(
  client: Client<any, InputTypesUnion, MetadataBearer, any>,
  command: Command<InputType, OutputType, any, InputTypesUnion, MetadataBearer>,
  options: RequestPresigningArguments = {}
): Promise<string> => {
  const s3Presigner = new S3RequestPresigner({ ...client.config });

  const presignInterceptMiddleware: BuildMiddleware<InputTypesUnion, MetadataBearer> =
    (next, context) => async (args) => {
      const { request } = args;
      if (!HttpRequest.isInstance(request)) {
        throw new Error('Request to be presigned is not a valid HTTP request.');
      }

      // Retry information headers are not meaningful in presigned URLs
      delete request.headers['amz-sdk-invocation-id'];
      delete request.headers['amz-sdk-request'];

      // User agent header would leak sensitive information
      delete request.headers['x-amz-user-agent'];
      delete request.headers['x-amz-content-sha256'];

      delete request.query['x-id'];

      // This is the actual fix: include the port in the signed host header
      if (request.port) {
        request.headers['host'] = `${request.hostname}:${request.port}`;
      }

      const presigned = await s3Presigner.presign(request, {
        ...options,
        signingRegion: options.signingRegion ?? context['signing_region'],
        signingService: options.signingService ?? context['signing_service'],
      });

      return {
        // Intercept the middleware stack by returning a fake response
        response: {},
        output: {
          $metadata: { httpStatusCode: 200 },
          presigned,
        },
      } as any;
    };

  const middlewareName = 'presignInterceptMiddleware';
  client.middlewareStack.addRelativeTo(presignInterceptMiddleware, {
    name: middlewareName,
    relation: 'before',
    toMiddleware: 'awsAuthMiddleware',
    override: true,
  });

  let presigned: HttpRequest;
  try {
    const output = await client.send(command);
    // @ts-ignore the output is faked, so it's not actually OutputType
    presigned = output.presigned;
  } finally {
    client.middlewareStack.remove(middlewareName);
  }

  return formatUrl(presigned);
};
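The override is then a drop-in replacement for the SDK's getSignedUrl; I call it roughly like this (the import path, bucket, key, and endpoint are just examples):

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from './presigner'; // the override above

const client = new S3Client({
  region: 'us-east-1',
  endpoint: 'http://127.0.0.1:9000',
  forcePathStyle: true,
  credentials: { accessKeyId: 'minioadmin', secretAccessKey: 'minioadmin' },
});

const url = await getSignedUrl(client, new PutObjectCommand({ Bucket: 'my-bucket', Key: 'marbles.jpg' }), {
  expiresIn: 3600,
});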
The solution is probably the same as in my other question, so I'm simply copying the answer:
I was trying different ports, and the PUT command seems to work when I use only localhost (no port) for URL generation.
So, instead of this:
new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
  endpoint: 'http://172.21.0.2:9000',
  forcePathStyle: true,
});
I use:
new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
  endpoint: 'http://172.21.0.2', // or 'http://127.0.0.1'
  forcePathStyle: true,
});
Note that I haven't used any port number, so the default is 80.
If you're using docker-compose, add this config:
...
ports:
  - "80:9000"
and it works fine.
I am uploading files to an S3 bucket using the S3 upload function in Node.js. The frontend is built with Angular. But now the client's requirement is that all uploads should go directly to the S3 bucket via a presigned URL. Is this because of a security concern? The code I am currently using to upload files to the S3 bucket is:
async function uploadFile(object) {
  // object param contains two properties, 'image_data' and 'path'
  return new Promise(async (resolve, reject) => {
    var obj = object.image_data;
    var imageRemoteName = object.path + '/' + Date.now() + obj.name;

    AWS.config.update({
      accessKeyId: ACCESS_KEY,
      secretAccessKey: SECRET_KEY,
      region: REGION
    });

    var s3 = new AWS.S3();

    s3.upload({
      Bucket: BUCKET,
      Body: obj.data,
      Key: imageRemoteName
    })
      .promise()
      .then(response => {
        console.log(`done! - `, response);
        resolve(response.Location);
      })
      .catch(err => {
        console.log('failed:', err);
        reject(err);
      });
  });
}
Any help will be appreciated, thanks!
Security-wise it doesn't make a difference whether you call upload or first create a pre-signed URL, as long as the code you showed does not run within your Angular application, i.e. on the client. In that case every client of your application has access to your AWS access key and secret key, and swapping upload for a pre-signed URL won't solve that problem. However, if you use a server such as Express and that's where this code is running, you're basically fine.
AWS provides instructions on how to upload objects using a pre-signed URL. The basic steps are:
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3Client = new S3Client({
  region: REGION,
  credentials: {
    accessKeyId: ACCESS_KEY,
    secretAccessKey: SECRET_KEY,
  },
});

/* ... */

const command = new PutObjectCommand({
  Bucket: BUCKET,
  Body: obj.data,
  Key: imageRemoteName
});

// Generate a signed URL with an expiration time; the client
// can use it to upload (or, with a GetObjectCommand, download) the object.
const signedUrl = await getSignedUrl(s3Client, command, {
  expiresIn: 3600,
});
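The browser (Angular) client can then upload the file with a plain HTTP PUT to that URL; no SDK and no credentials are needed on the client. A minimal sketch (function and variable names are just examples):

// Sketch: `signedUrl` is the URL returned by the server-side code above.
async function uploadWithSignedUrl(signedUrl, file) {
  const response = await fetch(signedUrl, { method: 'PUT', body: file });
  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
}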
I'm trying to upload my HTML result file to AWS S3 after my Protractor test suite execution is complete. I use JavaScript for my automation. Please help me resolve the error here:
static uploadtoS3() {
  const AWS = require('aws-sdk');
  const fs = require('fs');
  var FILE_NAME_LOCAL;
  var crypt = require("crypto");

  fs.readdirSync("./reports/html/").forEach(file => {
    if (file.startsWith("execution_report")) {
      FILE_NAME_LOCAL = process.cwd() + "\\reports\\html\\" + file;
    }
  });
  console.log("File name: " + FILE_NAME_LOCAL);

  // Get file stream
  const fileStream = fs.createReadStream(FILE_NAME_LOCAL);
  var hash = crypt.createHash("md5")
    .update(Buffer.from(FILE_NAME_LOCAL, 'binary'))
    .digest("base64");
  console.log("Hash: " + hash);

  // Call S3 to upload the file to the specified bucket
  const uploadParams = {
    Bucket: 'my.bucket',
    Key: 'automation_report.html',
    Body: fileStream,
    ContentType: "text/html",
    ContentMD5: hash,
    // CacheControl: "max-age=0,no-cache,no-store,must-revalidate",
    ACL: 'public-read',
  };

  const s3 = new AWS.S3({
    endpoint: "https://3site-abc-wip1.nam.nsroot.net",
    accessKeyId: <access_key_id>,         // redacted
    secretAccessKey: <secret_access_key>, // redacted
    signatureVersion: 'v4',
    ca: fs.readFileSync('C:\\Users\\AB11111\\InternalCAChain_PROD.pem'),
    sslEnabled: true
  });

  // Create S3 service object and upload
  s3.upload(uploadParams, function (err, data) {
    console.log("Inside upload..");
    if (err) {
      throw err;
    }
    if (data) {
      console.log('Upload Success. File location: ' + data.Location);
    }
  });
}
Error: unable to get local issuer certificate
    at TLSSocket.onConnectSecure (_tls_wrap.js:1049:34)
    at TLSSocket.emit (events.js:182:13)
    at TLSSocket.EventEmitter.emit (domain.js:442:20)
    at TLSSocket._finishInit (_tls_wrap.js:631:8)
I made it work. I needed to add the certificate to AWS.config. The full working code is below; this might help someone. Note: the credentials and URLs below are for representation purposes only and aren't real.
const AWS = require('aws-sdk');
const https = require('https');
const fs = require('fs');
const path = require('path');
var FILE_NAME_LOCAL;

AWS.config.update({
  httpOptions: {
    agent: new https.Agent({
      // rejectUnauthorized: false, // Don't use this - it is insecure, just like --no-verify-ssl in the AWS CLI
      ca: fs.readFileSync('./support/InternalCAChain_PROD.pem')
    })
  }
});

const s3 = new AWS.S3({
  s3BucketEndpoint: true,
  endpoint: "https://my.bucket.3site-abc.nam.nsroot.net/",
  accessKeyId: "abABcdCD",
  secretAccessKey: "kjlJLlklkLlUYt",
});

// Find the report file
fs.readdirSync("./reports/html/").forEach(file => {
  if (file.startsWith("execution_report")) {
    FILE_NAME_LOCAL = process.cwd() + "\\reports\\html\\" + file;
  }
});

const fileStream = fs.readFileSync(FILE_NAME_LOCAL);

// Call S3 to upload the file to the specified bucket
const uploadParams = {
  Bucket: 'my.bucket',
  Key: path.basename(FILE_NAME_LOCAL),
  Body: fileStream,
  ContentType: "text/html",
  ContentEncoding: 'UTF-8',
  ACL: 'public-read',
};

// Create S3 service object and upload
s3.upload(uploadParams, function (err, data) {
  console.log("Inside upload..");
  if (err) {
    throw err;
  }
  if (data) {
    s3FileLocation = data.Location;
    console.log('Upload Success. File location: ' + data.Location);
  }
});
I implemented uploading a file to an Amazon S3 bucket like below and it works fine:
const S3 = require('aws-sdk/clients/s3');
const AWS = require('aws-sdk');

const accessKeyId = 'AKIAYVXDX*******';
const secretAccessKey = 'gxZpdSDnOfpM*****************';

const s3 = new S3({
  region: 'us-east-1',
  accessKeyId,
  secretAccessKey
});

s3.putObject({
  Body: 'Hello World',
  Bucket: "dev-amazon",
  Key: 'hello.txt'
}, (err, data) => {
  if (err) {
    console.log(err);
  }
});
And now I need to implement uploading a file to a Wasabi bucket.
I tried it like below:
const S3 = require('aws-sdk/clients/s3');
const AWS = require('aws-sdk');

const wasabiEndpoint = new AWS.Endpoint('s3.wasabisys.com');

const accessKeyId = 'PEIL4DYOY*******';
const secretAccessKey = 'D4jIz3tjJw*****************';

const s3 = new S3({
  endpoint: wasabiEndpoint,
  region: 'us-east-2',
  accessKeyId,
  secretAccessKey
});

s3.putObject({
  Body: 'Hello World',
  Bucket: "dev-wasabi",
  Key: 'hello.txt'
}, (err, data) => {
  if (err) {
    console.log(err);
  }
});
And the result of `console.log(err)` is:
err {"message":"The request signature we calculated does not match the signature you provided. Check your key and signing method.","code":"SignatureDoesNotMatch","region":null,"time":"2019-10-30T09:39:19.072Z","requestId":null,"statusCode":403,"retryable":false,"retryDelay":64.72166771381391}
Console error in devtools:
PUT https://dev-wasabi.s3.us-east-2.wasabisys.com/5efa9b286821fab7df3ece8dc3d6687ed32 403 (Forbidden)
What is wrong with my code?
After some research, I found that wasabiEndpoint was wrong.
It should be
const wasabiEndpoint = new AWS.Endpoint('s3.us-east-2.wasabisys.com');
According to the docs, service URLs differ based on the region:
Wasabi US East 1 (N. Virginia): s3.wasabisys.com or s3.us-east-1.wasabisys.com
Wasabi US East 2 (N. Virginia): s3.us-east-2.wasabisys.com
Wasabi US West 1 (Oregon): s3.us-west-1.wasabisys.com
Wasabi EU Central 1 (Amsterdam): s3.eu-central-1.wasabisys.com
Will be more than happy if this can help someone. ;)
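With that endpoint, the client from the question works; roughly, with placeholder keys:

const S3 = require('aws-sdk/clients/s3');
const AWS = require('aws-sdk');

// Region-specific endpoint for a us-east-2 bucket
const wasabiEndpoint = new AWS.Endpoint('s3.us-east-2.wasabisys.com');

const s3 = new S3({
  endpoint: wasabiEndpoint,
  region: 'us-east-2',
  accessKeyId: '<access-key-id>',          // placeholder
  secretAccessKey: '<secret-access-key>'   // placeholder
});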
When using the @aws-sdk/client-s3 package, the S3 client only needs the Wasabi endpoint defined. The following will create the S3 client with the correct endpoint:
const client = new S3Client({
  credentials: {
    accessKeyId: "<wasabi-access-key-id>",
    secretAccessKey: "<wasabi-secret-key>"
  },
  endpoint: {
    url: "https://s3.wasabisys.com"
  }
})
From here, putting an object is exactly the same as with a standard AWS S3 bucket. For example:
await client.send(new PutObjectCommand({
  Bucket: "bucket-name",
  Key: "object-key",
  Body: <whatever is being put>
}))
With the basic import statement being:
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3"
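Putting the pieces together, a minimal end-to-end sketch (the bucket, key, body, and region value are assumptions; the SDK still wants a region even though Wasabi routing is driven by the endpoint):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3"

const client = new S3Client({
  region: "us-east-1", // assumption: any valid region string works here
  credentials: {
    accessKeyId: "<wasabi-access-key-id>",
    secretAccessKey: "<wasabi-secret-key>"
  },
  endpoint: "https://s3.wasabisys.com"
})

await client.send(new PutObjectCommand({
  Bucket: "bucket-name",
  Key: "hello.txt",
  Body: "Hello World"
}))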