Amazon S3 Signature Does Not Match - AWS SDK Java

I have a Play application that needs to upload files to S3. We are developing in Scala and using the Java AWS SDK.
I'm having trouble uploading files: I keep getting 403 SignatureDoesNotMatch when using presigned URLs. The URL is generated with the AWS Java SDK by the following code:
def generatePresignedPutRequest(filename: String) = {
  val expiration = new java.util.Date();
  var msec = expiration.getTime() + 1000 * 60 * 60; // Add 1 hour.
  expiration.setTime(msec);
  s3 match {
    case Some(s3) => s3.generatePresignedUrl(bucketname, filename, expiration, HttpMethod.PUT).toString
    case None => {
      Logger.warn("S3 is not available. Cannot generate PUT request.")
      "URL not available"
    }
  }
}
For the frontend code we followed the ioncannon article.
The JS function that uploads the file (the same as the one used in the article):
function uploadToS3(file, url) {
    var xhr = createCORSRequest('PUT', url);
    if (!xhr) {
        setProgress(0, 'CORS not supported');
    } else {
        xhr.onload = function() {
            if (xhr.status == 200) {
                setProgress(100, 'Upload completed.');
            } else {
                setProgress(0, 'Upload error: ' + xhr.status);
            }
        };
        xhr.onerror = function() {
            setProgress(0, 'XHR error.');
        };
        xhr.upload.onprogress = function(e) {
            if (e.lengthComputable) {
                var percentLoaded = Math.round((e.loaded / e.total) * 100);
                setProgress(percentLoaded, percentLoaded == 100 ? 'Finalizing.' : 'Uploading.');
            }
        };
        xhr.setRequestHeader('Content-Type', 'image/png');
        xhr.setRequestHeader('x-amz-acl', 'authenticated-read');
        xhr.send(file);
    }
}
The server's response is
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<StringToSignBytes>50 55 bla bla bla...</StringToSignBytes>
<RequestId>F7A8F1659DE5909C</RequestId>
<HostId>q+r+2T5K6mWHLKTZw0R9/jm22LyIfZFBTY8GEDznfmJwRxvaVJwPiu/hzUfuJWbW</HostId>
<StringToSign>PUT
image/png
1387565829
x-amz-acl:authenticated-read
/mybucketname/icons/f5430c16-32da-4315-837f-39a6cf9f47a1</StringToSign>
<AWSAccessKeyId>myaccesskey</AWSAccessKeyId></Error>
I have configured CORS, double-checked the AWS credentials, and tried changing request headers. I always get the same result.
Why is Amazon telling me that the signatures don't match?

Doubt the OP still has a problem with this, but for anyone else who runs into this, here is the answer:
When making a signed request to S3, AWS checks to make sure that the signature exactly matches the HTTP Header information the browser sent. This is unfortunately required reading: http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
However, in the code above this is not actually the case; the JavaScript is sending:
xhr.setRequestHeader('Content-Type', 'image/png');
xhr.setRequestHeader('x-amz-acl', 'authenticated-read');
But in the Java/Scala, s3.generatePresignedUrl is being called without passing in either of them, so the resulting signature is actually telling S3 to reject anything with a Content-Type or x-amz-acl header set. Oops (I fell for it too).
I've seen browsers send Content-Types automatically, so even if they're not explicitly added to the header they could still be coming into S3. So the question is: how do we add the Content-Type and x-amz-acl headers into the signature?
There are several overloaded generatePresignedUrl functions in the AWS SDK, but only one of them allows us to pass in anything else besides the bucket-name, filename, expiration-date and http-method.
The solution is:
Create a GeneratePresignedUrlRequest object, with your bucket and filename.
Call setExpiration, setContentType, etc, to set all of your header info on it.
Pass that into s3.generatePresignedUrl as the only parameter.
Here's the javadoc for the proper generatePresignedUrl overload to use:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#generatePresignedUrl(com.amazonaws.services.s3.model.GeneratePresignedUrlRequest)
The function's code on the AWS GitHub repo was also helpful for me to see how to code up the solution. Hope this helps.
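For example, here's a minimal sketch of that approach (AWS SDK for Java v1; variable names follow the question's code, and carrying x-amz-acl as a signed request parameter is my assumption about the simplest way to include it):
// Sketch: sign the same headers the browser will send.
GeneratePresignedUrlRequest request =
        new GeneratePresignedUrlRequest(bucketname, filename)
                .withMethod(HttpMethod.PUT)
                .withExpiration(expiration)
                .withContentType("image/png"); // must exactly match the XHR Content-Type header
// Assumption: the ACL constraint can travel as a signed query parameter.
request.addRequestParameter("x-amz-acl", "authenticated-read");
URL url = s3.generatePresignedUrl(request);
Whatever you sign here the browser must send byte-for-byte, and vice versa.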

I faced a similar issue, and setting the config signatureVersion: 'v4' solved it in my case.
In JavaScript:
var s3 = new AWS.S3({
    signatureVersion: 'v4'
});
Adapted from https://github.com/aws/aws-sdk-js/issues/902#issuecomment-184872976
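In the Java SDK v1 the equivalent (my assumption, not from the linked issue) is to override the signer so Signature Version 4 is used:
// Sketch: force SigV4 in the AWS SDK for Java v1.
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setSignerOverride("AWSS3V4SignerType");
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withClientConfiguration(clientConfig)
        .withRegion(Regions.US_EAST_1) // with SigV4 the region must match the bucket's
        .build();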

I just encountered this problem using the Node.js AWS SDK.
It was due to using credentials that were valid but lacked sufficient permissions.
Changing to my admin key fixed this with no code changes!

I had the same issue, but removing the Content-Type header worked fine. Sharing the complete code below.
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class GeneratePresignedUrlAndUploadObject {
    private static final String BUCKET_NAME = "<YOUR_AWS_BUCKET_NAME>";
    private static final String OBJECT_KEY = "<YOUR_AWS_KEY>";
    private static final String AWS_ACCESS_KEY = "<YOUR_AWS_ACCESS_KEY>";
    private static final String AWS_SECRET_KEY = "<YOUR_AWS_SECRET_KEY>";

    public static void main(String[] args) throws IOException {
        BasicAWSCredentials awsCreds = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY);
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1)
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds)).build();
        try {
            System.out.println("Generating pre-signed URL.");
            java.util.Date expiration = new java.util.Date();
            long milliSeconds = expiration.getTime();
            milliSeconds += 1000 * 60 * 60; // Add 1 hour.
            expiration.setTime(milliSeconds);

            GeneratePresignedUrlRequest generatePresignedUrlRequest =
                    new GeneratePresignedUrlRequest(BUCKET_NAME, OBJECT_KEY);
            generatePresignedUrlRequest.setMethod(HttpMethod.PUT);
            generatePresignedUrlRequest.setExpiration(expiration);

            URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
            uploadObject(url);
            System.out.println("Pre-Signed URL = " + url.toString());
        } catch (AmazonServiceException exception) {
            System.out.println("Caught an AmazonServiceException, " +
                    "which means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message: " + exception.getMessage());
            System.out.println("HTTP Code: " + exception.getStatusCode());
            System.out.println("AWS Error Code: " + exception.getErrorCode());
            System.out.println("Error Type: " + exception.getErrorType());
            System.out.println("Request ID: " + exception.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, " +
                    "which means the client encountered " +
                    "an internal error while trying to communicate" +
                    " with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

    public static void uploadObject(URL url) throws IOException {
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true);
        connection.setRequestMethod("PUT");
        // No Content-Type header is set here, so none needs to be signed.
        OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
        out.write("This text uploaded as object.");
        out.close();
        int responseCode = connection.getResponseCode();
        System.out.println("Service returned response code " + responseCode);
    }
}

Had a problem where the MIME type on Windows came through as an empty string for fileType, so the upload didn't work. Handle empty strings by falling back to some default file type.
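A minimal sketch of that fallback (in Java; the guessContentTypeFromName call and the octet-stream default are my choices, not the original poster's):
import java.net.URLConnection;

// Sketch: never send an empty Content-Type; fall back to a default.
String contentType = URLConnection.guessContentTypeFromName(fileName);
if (contentType == null || contentType.isEmpty()) {
    contentType = "application/octet-stream"; // assumed safe default
}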

I faced the SignatureDoesNotMatch error using the Java AWS SDK. In my case, the error occurred after upgrading Maven dependencies without any changes to my code (so the credentials were correct and had not changed). After upgrading the dependency org.apache.httpcomponents:httpclient from version 4.5.6 to 4.5.7 (actually it was an upgrade of Spring Boot from 2.1.2 to 2.1.3, whose BOM specifies the httpclient version), the code began throwing exceptions on some AWS SDK S3 requests such as AmazonS3.getObject.
After digging into the root cause, I found that the httpclient library made a breaking change to its URI normalization that affected the Java AWS SDK for S3. See the open GitHub ticket "org.apache.httpcomponents:httpclient:4.5.7 breaks fetching S3 objects" for more details.
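One workaround until the SDK catches up (my assumption; the ticket discusses the root cause, not this fix) is to pin httpclient back to 4.5.6, e.g. by overriding the Spring Boot BOM property in pom.xml:
<properties>
    <!-- Sketch: override the BOM-managed httpclient version (assumed workaround). -->
    <httpclient.version>4.5.6</httpclient.version>
</properties>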

If your access key and secret key are good but you still get SignatureDoesNotMatch, check your secret key: it probably contains special characters, e.g. +, /, or *.
Go to AWS and generate another access key whose secret does not contain those characters, then try again :)

Related

Cloud Storage for Firebase: How to recover a pdf

I have a PDF stored in Cloud Storage and I'm trying to fetch this file to send it through email.
When I try to retrieve it I get back an access denied error:
Uncaught (in promise): FirebaseError: Firebase Storage: User does not
have permission to access
My code:
const storageRef = firebase.storage().ref();
var forestRef = storageRef.child('/uploads/' + offerId + '/' + offerId + '.pdf');
forestRef.getDownloadURL()
    .then(function (url) {
        console.log("url ", url)
        var xhr = new XMLHttpRequest();
        xhr.responseType = 'blob';
        xhr.onload = function (event) {
            var blob = xhr.response;
        };
        xhr.open('GET', url);
    })
I think the problem is that I'm not using the access token, but I don't know how to retrieve it. (I have also tried getMetadata, but the result is the same.)
Edit:
I also have the URL with the token.
The files in Firebase Storage follow a specific URL format; use the format below. The URL generated from getDownloadURL() has a token associated with it, which causes the link to expire after a few days.
https://firebasestorage.googleapis.com/v0/b/<PROJECT-NAME>.appspot.com/o/<PATH>%2F<TOFILE>?alt=media
So your URL string for /uploads/${offerId}/${offerId}.pdf will be:
https://firebasestorage.googleapis.com/v0/b/<PROJECT-NAME>.appspot.com/o/uploads%2F${offerId}%2F${offerId}.pdf?alt=media
Thus, with string manipulation you can create the file URLs.
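A small sketch of that manipulation (in Java, for consistency with the rest of this page; PROJECT_NAME is a placeholder): the storage path must be percent-encoded so its slashes become %2F:
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: build the public download URL by encoding the storage path.
String path = "uploads/" + offerId + "/" + offerId + ".pdf";
String url = "https://firebasestorage.googleapis.com/v0/b/" + PROJECT_NAME
        + ".appspot.com/o/" + URLEncoder.encode(path, StandardCharsets.UTF_8) // '/' -> %2F
        + "?alt=media";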
While download URLs provide public, read-only access to files in Cloud Storage for Firebase, calling getDownloadURL to generate such a URL requires that you have read permission on that file.
The error message indicates that the code does not meet your security rules, i.e. that there is no user signed in when you run this code.
If that is not what you expect, I recommend checking that right before you call the Storage API:
const storageRef = firebase.storage().ref();
var forestRef = storageRef.child('/uploads/' + offerId + '/' + offerId + '.pdf');
if (!firebase.auth().currentUser) throw new Error("No user signed in, can't get download URL");
forestRef.getDownloadURL()
...

AWS Upload with multipart/form-data Invalid

I am sending a file to a presigned POST URL to upload it to AWS S3, and according to other resources I've found, the way to send a file with form data is to switch to multipart/form-data.
This is the form data I've created:
fields['file'] = new File([this.get_compressed_photo], manifest.photo, {type: "image/jpeg"});
var form = new FormData();
for (let field in fields) {
    form.append(field + "", fields[field]);
}
try {
    response = await axios.post(my_url, form, {
        headers: {
            "Content-Type": "multipart/form-data",
        }
    });
} catch (error) {
    console.log(error);
}
This is the file field in the form params for the request:
Content-Disposition: form-data; name="file"; filename="file_name.jpg"
Content-Type: image/jpeg
function() {
[native code]
}
Is something going wrong here?
UPDATE:
AWS does respond, but with an error that isn't about the file itself. I'm not sure whether that means the file part is still valid; looking at the serialized value of the image file above, I don't see how it could be.
<Error><Code>SignatureDoesNotMatch</Code>....
I'm using the aws-sdk and creating the presigned POST URL like so:
....
let path = process.env.PATH + identifier + "/" + file_name;
var url = false;
try {
    const url = await s3.createPresignedPost({
        Bucket: process.env.BUCKET,
        Expires: (60 * 5),
        Fields: {
            key: path,
            AWSAccessKeyId: process.env.KEY,
        },
    });
    return url;
} catch (error) {
    return false;
}
....
Do I still need to add a signature to this?
I just wasted a day trying to get multipart POSTs to S3 working with AWS signature v4.
The POST kept failing with a 403 Forbidden response with SignatureDoesNotMatch. I was 100% certain my signature was correct as I was using the AWS SDK to generate it, and I knew my keys were correct.
I had my POST field name for the signature as 'Signature' rather than 'x-amz-signature' as per the docs, because changing it to 'x-amz-signature' just resulted in a 400 response instead of the 403, with an error message saying I was missing the Signature field!
It then dawned on me that S3 was trying to verify my request as if it was signed using AWS signature version 2! The fix was to use 'x-amz-signature' as per the docs, but also to make sure the 'x-amz-algorithm' field in the multipart POST data was before all others! The AWS docs do not show it like this. Clearly S3 requires this field to come first so it knows what algorithm to use.
I ended up with the following order that now works:
x-amz-algorithm
x-amz-credential
policy
x-amz-date
x-amz-signature
...
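A minimal sketch of that ordering (in Java; LinkedHashMap preserves insertion order, and every value here is a placeholder for whatever your policy/signing code produces):
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: keep x-amz-algorithm first so S3 verifies with SigV4 (per the answer above).
Map<String, String> postFields = new LinkedHashMap<>();
postFields.put("x-amz-algorithm", "AWS4-HMAC-SHA256");
postFields.put("x-amz-credential", credential); // placeholder
postFields.put("policy", base64Policy);         // placeholder
postFields.put("x-amz-date", amzDate);          // placeholder
postFields.put("x-amz-signature", signature);   // placeholder
postFields.put("key", objectKey);               // placeholder
// ...then append the file part last when writing the multipart body.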
I removed the unneeded AWSAccessKeyId from the fields object. I had added it initially because I saw it in an example somewhere.
Removing it makes it work like a charm, and I think it was messing up the specific field order AWS requires.
....
let path = process.env.PATH + identifier + "/" + file_name;
var url = false;
try {
    const url = await s3.createPresignedPost({
        Bucket: process.env.BUCKET,
        Expires: (60 * 5),
        Fields: {
            key: path, // key is the only required field here
            //AWSAccessKeyId: process.env.KEY, << I COMMENTED OUT THIS LINE
        },
    });
    return url;
} catch (error) {
    return false;
}
....

Upload image using JavaScript port

Hi, I'm trying to upload an image via the JavaScript port. Logging seems to work, but it seems the server is not receiving the "file" object. Here is my code (note this works in the simulator):
Display.getInstance().openGallery(new ActionListener() {
    @Override
    public void actionPerformed(ActionEvent evt) {
        picture = (String) evt.getSource();
        if (picture != null) {
            String url = "...";
            MultipartRequest request = new MultipartRequest();
            request.setUrl(url);
            try {
                request.addData("file", picture, "image/png");
                request.setFilename("file", "myPicture.png");
                request.setPost(true);
                request.addArgument("submit", "yes");
                NetworkManager.getInstance().addToQueueAndWait(request);
                Log.p("initVars(..) MultipartRequest error code: "
                        + request.getResponseCode(), Log.DEBUG);
                String data = new String(request.getResponseData());
                Log.p(data, Log.DEBUG);
            } catch (IOException err) {
                err.printStackTrace();
            }
        }
    }
}, Display.GALLERY_IMAGE);
Is the JavaScript deployed on the same server as the upload destination?
Assuming you are on the same server, try using the build hint javascript.inject_proxy=false. This will disable the proxy servlet and create direct communication with the JavaScript port.
If you are not on the same server, make sure you are using the WAR distribution with the proxy servlet inside so it can redirect your upload.

AuthorizedHandler Blocked wrong request! url: /socket.io/

I'm using mrniko/netty-socketio (Java) to start a websocket server like this:
config = new Configuration();
config.setHostname("localhost");
config.setPort(8001);
server = new SocketIOServer(config);
server.addListeners(serviceClass);
server.start();
Then I'm using (the recommended) socketio/socket.io-client (JavaScript) to try to connect to the websocket server like this (all on the same server):
var socket = io("http://localhost:8001");
The connection is "blocked" at the server with the server printing:
8239 [nioEventLoopGroup-5-1] WARN com.corundumstudio.socketio.handler.AuthorizeHandler - Blocked wrong request! url: /socket.io/, ip: /127.0.0.1:48915
28889 [nioEventLoopGroup-5-2] WARN com.corundumstudio.socketio.handler.AuthorizeHandler - Blocked wrong request! url: /socket.io/, ip: /127.0.0.1:48916
Which occurs endlessly, as the client continues to retry the connection.
I can't seem to get the server to accept the connection. I've tried:
var socket = io("ws://localhost:8001");
But that gives the same outcome. I've also tried putting a trailing slash after the URL for both cases - makes no difference. I've also tried all combinations of using "localhost" or "127.0.0.1" at both the server and client, and so on.
The JavaScript page itself is being served up from a http server on localhost:8000. This does not appear to be a cross site issue as that gives an entirely different error at the browser.
Does anyone know what is going wrong and how to fix it?
In my case, network monitoring accesses that port every 10 seconds. I had temporarily changed log4j.properties to ERROR-level logging, but wanted to give the monitoring a path to use that would not cause excessive WARN logging. Not sure if this was the best approach, but this is what I ended up doing.
config.setAllowCustomRequests(true);
By allowing custom requests, the piece of code in AuthorizeHandler that logs the warning is bypassed.
I created a custom pipeline that let me swap out the wrongUrlHandler with a custom one, providing a safe path for monitoring to use.
public class CustomSocketIOChannelInitializer extends SocketIOChannelInitializer {
    CustomWrongUrlHandler customWrongUrlHandler = null;

    public CustomSocketIOChannelInitializer(Configuration configuration) {
        customWrongUrlHandler = new CustomWrongUrlHandler(configuration);
    }

    protected void initChannel(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        addSslHandler(pipeline);
        addSocketioHandlers(pipeline);
        // Replace the wrong-url handler with our custom one to allow network monitoring without logging warnings.
        pipeline.replace(SocketIOChannelInitializer.WRONG_URL_HANDLER, "CUSTOM_WRONG_URL_HANDLER", customWrongUrlHandler);
    }
}
This is my custom handler:
@Sharable
public class CustomWrongUrlHandler extends ChannelInboundHandlerAdapter {
    private final Logger log = LoggerFactory.getLogger(getClass());
    Configuration configuration = null;

    /**
     * @param configuration
     */
    public CustomWrongUrlHandler(Configuration configuration) {
        this.configuration = configuration;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof FullHttpRequest) {
            FullHttpRequest req = (FullHttpRequest) msg;
            Channel channel = ctx.channel();
            QueryStringDecoder queryDecoder = new QueryStringDecoder(req.getUri());
            // Don't log when the port is pinged for monitoring. Must use a context that starts with /ping.
            if (configuration.isAllowCustomRequests() && queryDecoder.path().startsWith("/ping")) {
                HttpResponse res = new DefaultHttpResponse(HTTP_1_1, HttpResponseStatus.BAD_REQUEST);
                channel.writeAndFlush(res).addListener(ChannelFutureListener.CLOSE);
                req.release();
                //log.info("Blocked wrong request! url: {}, ip: {}", queryDecoder.path(), channel.remoteAddress());
                return;
            }
            // This is the last channel handler in the pipeline, so if it is not a ping then log a warning.
            HttpResponse res = new DefaultHttpResponse(HTTP_1_1, HttpResponseStatus.BAD_REQUEST);
            ChannelFuture f = channel.writeAndFlush(res);
            f.addListener(ChannelFutureListener.CLOSE);
            req.release();
            log.warn("Blocked wrong socket.io-context request! url: {}, params: {}, ip: {}",
                    queryDecoder.path(), queryDecoder.parameters(), channel.remoteAddress());
        }
    }
}
CustomSocketIOChannelInitializer customSocketIOChannelInitializer = new CustomSocketIOChannelInitializer(config);
server.setPipelineFactory(customSocketIOChannelInitializer);

Spring 4 / ExtJs 6 File Upload - ExtJs connection refused error when file size limit exceed

I have a file upload form that's working correctly except when I send a file that is larger than the size that I've configured in Spring.
I've used the same code before on another application written in Spring; the differences are that I was using ExtJs 4.2 and am now using ExtJs 6.0, and the old app used Spring Security 3 while the new one uses 4.
The Spring side has been configured to block files that exceed 3MB.
From WebMvcConfig.java:
@Bean(name="multipartResolver")
public CommonsMultipartResolver commonsMultipartResolver()
{
    ToolkitCommonsMultipartResolver resolver = new ToolkitCommonsMultipartResolver();
    resolver.setMaxUploadSize(Constants.UPLOAD_MAX_FILE_SIZE);
    return resolver;
}
From ToolkitCommonsMultipartResolver:
public class ToolkitCommonsMultipartResolver extends CommonsMultipartResolver {
    @SuppressWarnings({ "rawtypes", "unchecked" })
    @Override
    protected MultipartParsingResult parseRequest(final HttpServletRequest request) {
        String encoding = determineEncoding(request);
        FileUpload fileUpload = prepareFileUpload(encoding);
        List fileItems;
        try {
            fileItems = ((ServletFileUpload) fileUpload).parseRequest(request);
        }
        catch (FileUploadBase.SizeLimitExceededException ex) {
            System.out.println("******* MultipartParsingResult limit exceeded");
            request.setAttribute("fileSizeExceeded", ex);
            fileItems = Collections.EMPTY_LIST;
        }
        catch (FileUploadException ex) {
            throw new MultipartException("Could not parse multipart servlet request", ex);
        }
        return parseFileItems(fileItems, encoding);
    }
}
My custom Controller:
@PreAuthorize("hasAuthority('ACTIVITY_CREATE_UPDATE')")
@RequestMapping(value = "/activity/editActivity", method = RequestMethod.POST, consumes = "multipart/form-data", produces = MediaType.TEXT_HTML_VALUE)
public @ResponseBody String editActivity(@Valid ActivityBean bean, BindingResult result, HttpServletRequest request) {
    // WebMvcConfig.commonsMultipartResolver will throw an exception if the file size exceeds the max size.
    // The exception is passed along as a request attribute.
    Object exception = request.getAttribute("fileSizeExceeded");
    if (exception != null && FileUploadBase.SizeLimitExceededException.class.equals(exception.getClass()))
    {
        log.info("File too large");
        String msg = "The file you sent has exceeded the maximum upload size of " + (Constants.UPLOAD_MAX_FILE_SIZE / 1000000L) + " MB.";
        return "{\"success\" : false, \"msg\" : \"" + msg + "\"}";
    }
    ...other code to process request
}
The Spring Security http tag has the following configuration to allow frame content to be displayed from the server (X-Frame-Options). Before I added this, all responses were blocked (whether the save was successful or not):
<headers>
<frame-options policy="SAMEORIGIN"/>
</headers>
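(For reference, a sketch of the equivalent in Spring Security 4 Java config, assuming you configure via HttpSecurity rather than XML:
// Sketch: emit X-Frame-Options: SAMEORIGIN from Java config.
http.headers()
        .frameOptions()
        .sameOrigin();
)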
Spring will return success: false with the message I've set up in the controller. In Chrome, I see the connection aborted (net::ERR_CONNECTION_ABORTED). Deep in the ExtJs code I found the method onComplete:
/**
 * Callback handler for the upload function. After we've submitted the form via the
 * iframe this creates a bogus response object to simulate an XHR and populates its
 * responseText from the now-loaded iframe's document body (or a textarea inside the
 * body). We then clean up by removing the iframe.
 * @private
 */
onComplete: function()
Inside onComplete() there is a call after the upload
doc = me.getDoc();
This call tries to access the content returned from the server in the iframe, and that access is blocked. It seems as though the Spring Security header isn't working in this case. There is a catch (e) that throws the error:
Uncaught DOMException: Blocked a frame with origin "http://localhost:8080" from accessing a cross-origin frame.(…)
Does anyone know how to resolve this issue? I can disable the multipart resolver, accept the complete file, and do size validation in my custom code, but it makes more sense to block the upload first if it exceeds the size limit. Is this a Spring Security 4 or ExtJs 6 issue?
