AWS Upload with multipart/form-data Invalid - javascript

I am sending a file to a presigned POST URL to upload it to AWS S3. According to other resources I've found, the way to send a file with form data is to use multipart/form-data.
This is the form data I've created:
fields['file'] = new File([this.get_compressed_photo], manifest.photo, { type: "image/jpeg" });

var form = new FormData();
for (let field in fields) {
  form.append(field + "", fields[field]);
}

try {
  response = await axios.post(my_url, form, {
    headers: {
      "Content-Type": "multipart/form-data",
    }
  });
} catch (error) {
  console.log(error);
}
This is the file field as it appears in the request params:
Content-Disposition: form-data; name="file"; filename="file_name.jpg"
Content-Type: image/jpeg
function() {
[native code]
}
Is something going wrong here?
UPDATE:
AWS does respond, but with an error that doesn't seem relevant to the file. I'm not sure if this means that the file is still valid, but just looking at the value sent for the image file, I don't see how it could be.
<Error><Code>SignatureDoesNotMatch</Code>....
I'm using the aws-sdk and creating the presigned POST URL like so:
....
let path = process.env.PATH + identifier + "/" + file_name;
try {
  const url = await s3.createPresignedPost({
    Bucket: process.env.BUCKET,
    Expires: (60 * 5),
    Fields: {
      key: path,
      AWSAccessKeyId: process.env.KEY,
    },
  });
  return url;
} catch (error) {
  return false;
}
....
Do I still need to add a signature to this?

I just wasted a day trying to get multipart POSTs to S3 working with AWS signature v4.
The POST kept failing with a 403 Forbidden response with SignatureDoesNotMatch. I was 100% certain my signature was correct as I was using the AWS SDK to generate it, and I knew my keys were correct.
I had the POST field name for the signature as 'Signature' rather than 'x-amz-signature' (the name in the v4 docs), because changing it to 'x-amz-signature' had just resulted in a 400 response instead of the 403, with an error message saying I was missing the Signature field!
It then dawned on me that S3 was trying to verify my request as if it was signed using AWS signature version 2! The fix was to use 'x-amz-signature' as per the docs, but also to make sure the 'x-amz-algorithm' field in the multipart POST data was before all others! The AWS docs do not show it like this. Clearly S3 requires this field to come first so it knows what algorithm to use.
I ended up with the following order that now works:
x-amz-algorithm
x-amz-credential
policy
x-amz-date
x-amz-signature
...

I removed the unneeded AWSAccessKeyId from the Fields object. I had added it initially because I saw it in an example somewhere.
Removing it makes it work like a charm; I think it was messing up AWS's specific required order of the fields.
....
let path = process.env.PATH + identifier + "/" + file_name;
try {
  const url = await s3.createPresignedPost({
    Bucket: process.env.BUCKET,
    Expires: (60 * 5),
    Fields: {
      key: path, // key is the only required field here
      //AWSAccessKeyId: process.env.KEY, << I COMMENTED OUT THIS LINE
    },
  });
  return url;
} catch (error) {
  return false;
}
....
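
For reference, createPresignedPost resolves to an object of the shape { url, fields }, so a client-side sketch of consuming it looks like this (getPresignedPostFromServer is a hypothetical call to your endpoint above, and file is the File object being uploaded):

const { url, fields } = await getPresignedPostFromServer();
const form = new FormData();
for (const [name, value] of Object.entries(fields)) {
  form.append(name, value); // policy, credential, signature, key, etc.
}
form.append('file', file);  // must come after all the signed fields
await axios.post(url, form);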

How do I set/update AWS s3 object metadata during image upload PUT request to signed url?

I'm trying to include the name of the uploaded file with the object that is stored in AWS S3 under a random/unique name, in a Next.js app.
I can set metadata from the backend, but I would like to set it from my PUT request (where the image is actually uploaded) to the signed URL. How would I do this?
To be clear: I would like to set metadata when I do a PUT request to the signed URL. I currently have it set to "none" on the backend to avoid forbidden errors (and this shows up as metadata in S3). Is it possible to update that metadata from my PUT request, or is there another approach I should take? Thanks!
// Backend code to get signed URL
async function handler(req, res) {
  if (req.method === 'GET') {
    const key = `content/${uuidv4()}.jpeg`;
    s3.getSignedUrl(
      'putObject',
      {
        Bucket: 'assets',
        ContentType: 'image/jpeg',
        Key: key,
        Expires: 5 * 60,
        Metadata: {
          'file-name': "none",
        }
      },
      (err, url) => res.send({ key, url })
    );
  }
}
// Frontend code
const [file, setFile] = useState(null);

const onFileUpload = async (e) => {
  e.preventDefault();
  const uploadConfig = await fetch('/api/upload');
  const uploadURL = await uploadConfig.json();
  await fetch(uploadURL.url, {
    body: file,
    method: 'PUT',
    headers: {
      'Content-Type': file.type,
      'x-amz-meta-file-name': 'updated test name',
    },
  });
};
It isn't possible to do that with presigned URLs. When you create the presigned URL, all properties are pre-filled; you can't change them when uploading the file. The only thing you can control is the object's data. The bucket, the key, and the metadata (and all other parameters of put_object) are predefined. This is also the case for generate_presigned_post: all fields are pre-filled.
This makes sense, as the backend grants the permissions and needs to decide on these. The implementation would also be much more complicated, as presigned URLs support all client methods, which have different parameters.
The only way you could do it is to generate URLs on demand: first generate the presigned URL based on the name selected by the client, and then do the upload. You will need two round-trips for every file, one to your server to generate the URL and one to S3 for the upload.
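As a sketch of that two-round-trip flow (reusing the handler shape from the question; the fileName query parameter is illustrative):

// Backend: the client sends its chosen name first, the server signs it in.
async function handler(req, res) {
  if (req.method === 'GET') {
    const fileName = String(req.query.fileName || 'none'); // chosen by the client
    const key = `content/${uuidv4()}.jpeg`;
    s3.getSignedUrl(
      'putObject',
      {
        Bucket: 'assets',
        ContentType: 'image/jpeg',
        Key: key,
        Expires: 5 * 60,
        Metadata: { 'file-name': fileName },
      },
      (err, url) => res.send({ key, url })
    );
  }
}
// The client then PUTs to the returned url and must send the matching
// 'x-amz-meta-file-name': fileName header, since that header was signed.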

POST cutting off PDF data

I posted a question yesterday (linked here) where I had been trying to send a PDF to a database and then retrieve it at a later date. Since then I have been advised that it is best (in my case, as I cannot use cloud computing services) to upload the PDF files to local storage and save the URL of the file to the database instead. I have now begun implementing this, but I have come across some trouble.
I am currently using FileReader() as documented below to process the input file and send it to the server:
var input_file = "";
let reader = new FileReader();
reader.readAsText(document.getElementById("input_attachment").files[0]);
reader.onloadend = function () {
  input_file = "&file=" + reader.result;
  const body = /*all the rest of my data*/ + input_file;
  const method = {
    method: "POST",
    body: body,
    headers: {
      "Content-type": "application/x-www-form-urlencoded"
    }
  };
};
After this block of code I do the stock-standard fetch(), and a route on my server receives it. Almost all the data comes in 100% as expected, but the file comes in cut off somewhere around 1300 characters in (making it quite an incomplete PDF). What does come in matches the first 1300 characters of the original PDF I uploaded.
I have seen suggestions that you are meant to use the "multipart/form-data" content type to upload files, but when I do this I seem to only receive the first 700 characters or so of my PDF. I have tried using the middleware Multer to handle the "multipart/form-data", but it just doesn't seem to upload anything (though I can't guarantee that I am using it correctly).
I also initially had trouble with a fetch "payload too large" error message, but have currently resolved this through the following:
app.use(bodyParser.urlencoded({ limit: "50mb", extended: false, parameterLimit: 50000 }));
Though I have suspicions that this may not be correctly implemented, as I have seen some discussion that the urlencoded limit is set prior to the file loading and cannot be changed in the middle of the program.
Any and all help is greatly appreciated, and I will likely use any information here to construct an answer on my original question from yesterday so that anybody else facing these sort of issues have a resource to go to.
I personally found the solution to this problem as follows. On the client side of my application, this is an example of what was implemented:
formData = new FormData();
formData.append("username", "John Smith");
formData.append("fileToUpload", document.getElementById("input_attachment").files[0]);

const method = {
  method: "POST",
  body: formData
};

fetch(url, method)
  .then(res => res.json())
  .then(res => alert("File uploaded!"))
  .catch(err => alert(err.message))
As can be noted, I have changed from the "application/x-www-form-urlencoded" encoding to "multipart/form-data" to upload files. Node.js and Express, however, do not natively support this encoding type. I chose to use the library Formidable (I found this to be the easiest to use without too much overhead), which you can read about here. Below is an example of my server-side implementation of this middleware (Formidable):
const express = require('express');
const app = express();
const formidable = require('formidable');

app.post('/upload', (req, res) => {
  const form = formidable({ uploadDir: `${__dirname}/file/`, keepExtensions: true });
  form.parse(req, (err, fields, files) => {
    if (err) console.log(err.stack);
    else {
      console.log(fields.username);
    }
  });
});
The file(s) are automatically saved to the directory specified in uploadDir, and keepExtensions ensures that the file extension is kept as well. The non-file inputs are accessible through the fields object, as seen in the fields.username example above.
From what I have found, this is the easiest way to set up a simple file upload system.
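One caveat worth flagging (based on my reading of the Formidable docs, so check against the version you install): the property holding the saved file's location was renamed between major versions, and the client above expects a JSON response, so send one:

form.parse(req, (err, fields, files) => {
  if (err) return console.log(err.stack);
  const upload = files.fileToUpload;
  // v1 exposes the temp location as upload.path; v2+ renamed it to
  // upload.filepath (and v3 wraps fields/files values in arrays).
  console.log(upload.filepath || upload.path);
  res.json({ status: "uploaded" }); // lets the client's res.json() resolve
});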

How to send a HTTP POST REQUEST by parameters using typescript

I would like to send a POST request with parameters, similar to this:
http://127.0.0.1:9000/api?command={"command":"value","params":{"key":"value","key":"value","key":"value","key":value,}}
I tried this, but it's not working:
let command: HttpParams = new HttpParams();
let params: HttpParams = new HttpParams();

command = command.append('command', 'value');
params = params.append('key', value);
params = params.append('key', value);
params = params.append('key', value);
params = params.append('key', value);
command = command.append('params', params.toString());

this.httpClient.post('/api?', null, {
  params: command
});
The error is: 500 (Internal Server Error)
Could you please help me?
The 500 code given by the server has the following description:

    The HyperText Transfer Protocol (HTTP) 500 Internal Server Error server error response code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request.

So the server is receiving your request but failing while processing it. Try removing ".toString()" from this line:
command = command.append('params', params.toString());
Then open Chrome DevTools (F12 or Ctrl + Shift + I) and go to the Network tab. You will see all your calls; check whether your request has the format you want.
The solution is:
1) First, create the object containing the command info and parameters, similar to this:
const object = {
  command: 'command description',
  params: {
    property1: value,
    property2: value,
    property3: value,
  }
};
2) Then, convert this object to JSON using JSON.stringify():
// convert object to json data
const jsonData = JSON.stringify(object);
3) After the second step, encapsulate the JSON data in HttpParams:
// encapsulate json data in http param
let httpParams: HttpParams = new HttpParams();
httpParams = httpParams.append('command', jsonData);
4) Finally, send the POST with the params using httpClient, similar to this:
this.httpClient.post<T>(url, body, {
  params: httpParams
});
NOTE: I don't know if it's the best solution, but it's working for me.

How to use Google Drive API to download files with Javascript

I want to download files from Google Drive with the JavaScript API. I have managed to authenticate and load a list of files using a gapi.client.drive.files request. However, I am stuck at downloading those files.
My attempt to download the file:
var request = gapi.client.drive.files.get({
  fileId: id,
  alt: 'media'
});
request.execute(function (resp) {
  console.log(resp);
});
I get these errors when trying to run the above:
(403) The user has not granted the app 336423653212 read access to the file 0B0UFTVo1BFVmeURrWnpDSloxQlE.
(400) Bad Request
I recognize that the files which aren't regular Drive files (Google Docs, Google Slides) return the 403 error.
I am new to this. Any help is really appreciated.
Update 0
From the Google Drive documentation about Handling API Errors, here is part of the explanation for 400 errors:

    This can mean that a required field or parameter has not been provided, the value supplied is invalid, or the combination of provided fields is invalid.

This is because I have alt:'media' in my parameter object.
I tried gapi.client.drive.files.export, but it doesn't work either; it returns (403) Insufficient Permission, although my Google Drive account has edit permission for those files. Here is my code:
var request = gapi.client.drive.files.get({
  fileId: element.id,
});
request.then(function (resp) {
  console.log(resp.result);
  type = resp.result.mimeType;
  id = resp.result.id;
  var request = gapi.client.drive.files.export({
    fileId: id,
    mimeType: type
  });
  request.execute(function (resp) {
    console.log(resp);
  });
});
Update 1
Based on abielita's answer, I have tried making an authorized HTTP request, but it doesn't download the file. It actually returns the file information in the response and responseText attributes of the XMLHttpRequest object.
function test() {
  var accessToken = gapi.auth.getToken().access_token;
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "https://www.googleapis.com/drive/v3/files/" + '1A1RguZpYFLyO9qEs-EnnrpikIpzAbDcZs3Gcsc7Z4nE', true);
  xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
  xhr.onload = function () {
    console.log(xhr);
  };
  xhr.send('alt=media');
}
______________________________________________________
I found out that I can actually retrieve the URLs of all those files from the folder using the files' webViewLink or webContentLink attributes.
A file of a Google Drive type (Google Doc, Google Sheet, etc.) will have a webViewLink attribute. A webViewLink opens the file in Google Drive.
A non-Google-Drive-type file will have a webContentLink. A webContentLink downloads the file.
My code:
var request = gapi.client.drive.files.list({
  q: "'0Bz9_vPIAWUcSWWo0UHptQ005cnM' in parents", // folder ID
  fields: "files(id, name, webContentLink, webViewLink)"
});
request.execute(function (resp) {
  console.log(resp);
});
Based on this documentation, if you're using alt=media, you need to make an authorized HTTP GET request to the file's resource URL and include the query parameter alt=media:
GET https://www.googleapis.com/drive/v3/files/0B9jNhSvVjoIVM3dKcGRKRmVIOVU?alt=media
Authorization: Bearer ya29.AHESVbXTUv5mHMo3RYfmS1YJonjzzdTOFZwvyOAUVhrs
See here for examples of performing a file download with the Drive API client libraries:
String fileId = "0BwwA4oUTeiV1UVNwOHItT0xfa2M";
OutputStream outputStream = new ByteArrayOutputStream();
driveService.files().get(fileId)
    .executeMediaAndDownloadTo(outputStream);
For the error (403) Insufficient Permission, maybe this is a problem with your access token, not with your project configuration.
The insufficient permissions error is returned when you have not requested the scopes you need when you retrieved your access token. You can check which scopes you have requested by passing your access_token to this endpoint: https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=ACCESS_TOKEN
Check these links:
google plus api: "insufficientPermissions" error
Google drive Upload returns - (403) Insufficient Permission
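A quick way to perform that scope check from the browser console (a sketch using the tokeninfo endpoint linked above):

var accessToken = gapi.auth.getToken().access_token;
fetch('https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=' + accessToken)
  .then(function (res) { return res.json(); })
  .then(function (info) { console.log(info.scope); }); // should list a Drive scope, e.g. .../auth/drive.readonly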
Remember you are uploading to the service account's Google Drive account. If you want to be able to see the file from your own Google Drive account, you will have to insert permissions on it to give yourself access.
Phu, you were so close!
Thank you for sharing your method of using the webContentLink and webViewLink. I think that is best for most purposes, but in my app I couldn't use webContentLink, because I need to be able to draw the image into a canvas, and the image Google provides is not CORS-ready.
So here is a method:
var fileId = '<your file id>';
var accessToken = gapi.auth2.getAuthInstance().currentUser.get().getAuthResponse().access_token; // or this: gapi.auth.getToken().access_token;
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://www.googleapis.com/drive/v3/files/" + fileId + '?alt=media', true);
xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
xhr.responseType = 'arraybuffer';
xhr.onload = function () {
  // base64ArrayBuffer from https://gist.github.com/jonleighton/958841
  var base64 = 'data:image/png;base64,' + base64ArrayBuffer(xhr.response);
  // do something with the base64 image here
};
xhr.send();
Notice that I set the response type to arraybuffer and moved alt=media up to the xhr.open call. I also grabbed a function that converts the array buffer to base64 from https://gist.github.com/jonleighton/958841.
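If you don't specifically need base64, a variation that skips the conversion step is to ask for a blob and let the browser mint an object URL (a sketch under the same auth setup; blob object URLs can be drawn to a canvas without tainting it):

xhr.responseType = 'blob';
xhr.onload = function () {
  var img = new Image();
  img.onload = function () {
    // ctx.drawImage(img, 0, 0); // draw onto your canvas here
    URL.revokeObjectURL(img.src); // free the blob once drawn
  };
  img.src = URL.createObjectURL(xhr.response);
};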
Task: download the file and create a File object. Environment: browser.
const URL = 'https://www.googleapis.com/drive/v3/files';
const FIELDS = 'name, mimeType, modifiedTime';

const getFile = async (fileId) => {
  const { gapi: { auth, client: { drive: { files } } } } = window;
  const { access_token: accessToken } = auth.getToken();
  const fetchOptions = { headers: { Authorization: `Bearer ${accessToken}` } };
  const {
    result: { name, mimeType, modifiedTime }
  } = await files.get({ fileId, fields: FIELDS });
  const blob = await fetch(`${URL}/${fileId}?alt=media`, fetchOptions).then(res => res.blob());
  const fileOptions = {
    type: mimeType,
    lastModified: new Date(modifiedTime).getTime(),
  };
  return new File([blob], name, fileOptions);
};
I was able to download using the files.get API:
var fileId = '<your file id>';
gapi.client.drive.files.get(
  { fileId: fileId, alt: 'media' }
).then(function (response) {
  // response.body has the file data
}, function (reason) {
  alert(`Failed to get file: ${reason}`);
});
let url = `https://drive.google.com/uc?id=${file_id}&export=download`;
Make sure to pass the file_id in this link.
You can get the file ID of the file you want to download via Get link -> General access. Make sure the file is public.

Amazon S3 Signature Does Not Match - AWS SDK Java

I have a play application that needs to upload files to S3. We are developing in scala and using the Java AWS SDK.
I'm having trouble trying to upload files: I keep getting 403 SignatureDoesNotMatch when using presigned URLs. The URL is being generated with the AWS Java SDK by the following code:
def generatePresignedPutRequest(filename: String) = {
  val expiration = new java.util.Date();
  var msec = expiration.getTime() + 1000 * 60 * 60; // Add 1 hour.
  expiration.setTime(msec);
  s3 match {
    case Some(s3) => s3.generatePresignedUrl(bucketname, filename, expiration, HttpMethod.PUT).toString
    case None => {
      Logger.warn("S3 is not available. Cannot generate PUT request.")
      "URL not available"
    }
  }
}
For the front-end code we followed ioncannon's article.
The JS function that uploads the file (the same as the one used in the article):
function uploadToS3(file, url) {
  var xhr = createCORSRequest('PUT', url);
  if (!xhr) {
    setProgress(0, 'CORS not supported');
  } else {
    xhr.onload = function () {
      if (xhr.status == 200) {
        setProgress(100, 'Upload completed.');
      } else {
        setProgress(0, 'Upload error: ' + xhr.status);
      }
    };
    xhr.onerror = function () {
      setProgress(0, 'XHR error.');
    };
    xhr.upload.onprogress = function (e) {
      if (e.lengthComputable) {
        var percentLoaded = Math.round((e.loaded / e.total) * 100);
        setProgress(percentLoaded, percentLoaded == 100 ? 'Finalizing.' : 'Uploading.');
      }
    };
    xhr.setRequestHeader('Content-Type', 'image/png');
    xhr.setRequestHeader('x-amz-acl', 'authenticated-read');
    xhr.send(file);
  }
}
The server's response is
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<StringToSignBytes>50 55 bla bla bla...</StringToSignBytes>
<RequestId>F7A8F1659DE5909C</RequestId>
<HostId>q+r+2T5K6mWHLKTZw0R9/jm22LyIfZFBTY8GEDznfmJwRxvaVJwPiu/hzUfuJWbW</HostId>
<StringToSign>PUT
image/png
1387565829
x-amz-acl:authenticated-read
/mybucketname/icons/f5430c16-32da-4315-837f-39a6cf9f47a1</StringToSign>
<AWSAccessKeyId>myaccesskey</AWSAccessKeyId></Error>
I have configured CORS, double-checked the AWS credentials, and tried changing the request headers. I always get the same result.
Why is Amazon telling me that the signatures don't match?
Doubt the OP still has a problem with this, but for anyone else who runs into this, here is the answer:
When making a signed request to S3, AWS checks to make sure that the signature exactly matches the HTTP Header information the browser sent. This is unfortunately required reading: http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
However in the code above this is not actually the case, the Javascript is sending:
xhr.setRequestHeader('Content-Type', 'image/png');
xhr.setRequestHeader('x-amz-acl', 'authenticated-read');
But in the Java/Scala, s3.generatePresignedUrl is being called without passing in either of them. So the resulting signature is actually telling S3 to reject anything with a Content-Type or x-amz-acl header set. Oops (I fell for it too).
I've seen browsers send Content-Types automatically, so even if they're not explicitly added to the header they could still be coming into S3. So the question is, how do we add Content-Type and x-amz-acl headers into the signature?
There are several overloaded generatePresignedUrl functions in the AWS SDK, but only one of them allows us to pass in anything else besides the bucket-name, filename, expiration-date and http-method.
The solution is:
Create a GeneratePresignedUrlRequest object, with your bucket and filename.
Call setExpiration, setContentType, etc, to set all of your header info on it.
Pass that into s3.generatePresignedUrl as the only parameter.
Here's the proper function definition of GeneratePresignedUrlRequest to use:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#generatePresignedUrl(com.amazonaws.services.s3.model.GeneratePresignedUrlRequest)
The function's code on the AWS GitHub repo was also helpful for me to see how to code up the solution. Hope this helps.
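For anyone hitting the same mismatch from the JavaScript SDK instead, the analogous fix (a sketch; these are v2 SDK parameter names) is to sign the headers in when generating the URL:

var url = s3.getSignedUrl('putObject', {
  Bucket: bucketName,
  Key: fileName,
  Expires: 60 * 60,
  ContentType: 'image/png',  // must match the Content-Type header the browser sends
  ACL: 'authenticated-read'  // must match the x-amz-acl header
});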
I faced a similar issue, and setting the config signatureVersion: 'v4' solved it in my case.
In JavaScript:
var s3 = new AWS.S3({
  signatureVersion: 'v4'
});
Adapted from https://github.com/aws/aws-sdk-js/issues/902#issuecomment-184872976
I just encountered this problem using the Node.js AWS SDK.
It was due to using credentials that were valid but without sufficient permissions.
Changing to my admin key fixed this with no code changes!
I had the same issue, but removing the content type works fine. Here is the complete code:
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class GeneratePresignedUrlAndUploadObject {

    private static final String BUCKET_NAME = "<YOUR_AWS_BUCKET_NAME>";
    private static final String OBJECT_KEY = "<YOUR_AWS_KEY>";
    private static final String AWS_ACCESS_KEY = "<YOUR_AWS_ACCESS_KEY>";
    private static final String AWS_SECRET_KEY = "<YOUR_AWS_SECRET_KEY>";

    public static void main(String[] args) throws IOException {
        BasicAWSCredentials awsCreds = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY);
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1)
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds)).build();
        try {
            System.out.println("Generating pre-signed URL.");
            java.util.Date expiration = new java.util.Date();
            long milliSeconds = expiration.getTime();
            milliSeconds += 1000 * 60 * 60;
            expiration.setTime(milliSeconds);

            GeneratePresignedUrlRequest generatePresignedUrlRequest =
                    new GeneratePresignedUrlRequest(BUCKET_NAME, OBJECT_KEY);
            generatePresignedUrlRequest.setMethod(HttpMethod.PUT);
            generatePresignedUrlRequest.setExpiration(expiration);

            URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
            UploadObject(url);
            System.out.println("Pre-Signed URL = " + url.toString());
        } catch (AmazonServiceException exception) {
            System.out.println("Caught an AmazonServiceException, " +
                    "which means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message: " + exception.getMessage());
            System.out.println("HTTP Code: " + exception.getStatusCode());
            System.out.println("AWS Error Code:" + exception.getErrorCode());
            System.out.println("Error Type: " + exception.getErrorType());
            System.out.println("Request ID: " + exception.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, " +
                    "which means the client encountered " +
                    "an internal error while trying to communicate" +
                    " with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

    public static void UploadObject(URL url) throws IOException {
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true);
        connection.setRequestMethod("PUT");
        OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
        out.write("This text uploaded as object.");
        out.close();
        int responseCode = connection.getResponseCode();
        System.out.println("Service returned response code " + responseCode);
    }
}
I had a problem where the MIME type on Windows was setting the fileType to an empty string, and the upload didn't work. Just handle empty strings and add some file type.
I ran into the SignatureDoesNotMatch error using the Java AWS SDK. In my case, the error occurred after upgrading Maven dependencies without any changes in my code (so the credentials were correct and unchanged). After upgrading the dependency org.apache.httpcomponents:httpclient from version 4.5.6 to 4.5.7 (actually it was an upgrade of Spring Boot from 2.1.2 to 2.1.3, whose BOM specifies the httpclient version), the code began throwing exceptions on some AWS SDK S3 requests such as AmazonS3.getObject.
After digging into the root cause, I found that the httpclient library made breaking changes to URI normalization that affected the Java AWS SDK for S3. Please take a look at the open GitHub ticket org.apache.httpcomponents:httpclient:4.5.7 breaks fetching S3 objects for more details.
If your access key and secret key are good but it is still saying SignatureDoesNotMatch, check your secret key: it probably contains special characters such as +, /, or *.
Go to AWS and generate another access key whose secret key does not have those. Then try again :)
