WebKitFormBoundary included in file payload on direct upload to s3 - javascript

I have a dropzone.js instance that uploads files directly to an S3 bucket using CORS and then passes me the file information inside of javascript to use. This is the tutorial I followed for it...
The file upload itself seems to work fine and the files show up in the S3 bucket at the correct file path; however, every file has something like the following wrapped around it:
------WebKitFormBoundaryMH4lrj8VmFKgt1Ar
Content-Disposition: form-data; name="files[0]"; filename="image-name.png"
Content-Type: image/png
IMAGE CONTENT HERE
------WebKitFormBoundaryMH4lrj8VmFKgt1Ar--
I cannot for the life of me figure out why this is happening. It doesn't matter what type/mime of file I upload, everything includes it.
Any help would be greatly appreciated!

Inside your init: function() { ... }, add the following:
self.on("sending", function(file, xhr, formData) {
    // Override xhr.send so the raw file is sent as the request body
    // instead of the multipart/form-data payload Dropzone builds.
    var _send = xhr.send;
    xhr.send = function() {
        _send.call(xhr, file);
    };
});
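For context, here is a minimal sketch of where that handler can sit in a full Dropzone configuration for a signed S3 PUT. The element id myDropzone, the url function, and the signedUrl property are placeholders for however the signed request is obtained in your setup, not part of the original answer:
Dropzone.options.myDropzone = {
    // PUT the raw file body to the pre-signed S3 URL.
    method: "put",
    url: function (files) {
        // Placeholder: return the signed URL previously fetched for this file.
        return files[0].signedUrl;
    },
    init: function () {
        var self = this;
        self.on("sending", function (file, xhr, formData) {
            // Replace xhr.send so the file itself is the request body,
            // not the multipart/form-data payload Dropzone builds.
            var _send = xhr.send;
            xhr.send = function () {
                _send.call(xhr, file);
            };
        });
    }
};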

@TadasTamosauskas is correct that catching the 'sending' event to patch xhr will not work for chunked uploads.
Below is another method that patches xhr with a params function passed in as an option to Dropzone. The chunked execution path also adds the headers required for a resumable file upload using the OneDrive API as documented here: https://learn.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_createuploadsession?view=odsp-graph-online
const CHUNK_SIZE = 10485760; // 10 MiB

Dropzone.options.dropzone = {
    method: "put",
    headers: {
        'Cache-Control': null,
        'X-Requested-With': null
    },
    filesizeBase: 1024,
    maxFilesize: 102400, // 100 GiB expressed in MiB, the max OneDrive file size
    chunking: true,
    chunkSize: CHUNK_SIZE,
    params: function(files, xhr, chunk) {
        if (chunk) {
            // Chunked path: add the Content-Range header required by the
            // OneDrive resumable upload session, then send the raw chunk.
            const chunk_start = chunk.index * CHUNK_SIZE;
            xhr.setRequestHeader('Content-Range',
                'bytes ' + chunk_start
                + '-' + (chunk_start + chunk.dataBlock.data.size - 1)
                + '/' + files[0].size);
            var _send = xhr.send;
            xhr.send = function() {
                _send.call(xhr, chunk.dataBlock.data);
            };
        } else {
            // Non-chunked path: send the whole file as the request body.
            var _send = xhr.send;
            xhr.send = function() {
                _send.call(xhr, files[0]);
            };
        }
    }
};

Related

formdata via ajax with groovy

I'm trying to transfer an image file and its corresponding information via ajax to a Groovlet server.
Problem:
I can't get the data out of the HttpServletRequest object.
Here is the JavaScript code that I use to transfer the data:
$("#submitButton").click( function(){
if ( submitButtonCondition == true ) {
//Gathering Data
var enabledValue = false;
if ($("#activate").val()){
enabledValue = true;
}
var metadata = $("#metaTextarea").val();
var inputFile = $("#fileInput")[0].files[0];
// Creating FormData-Object filled with necessary Data
var formData = new FormData();
formData.append('file', inputFile);
formData.append('enabled', enabledValue);
formData.append('metadata', metadata);
// Sending FormData to Server
$.ajax({
type : 'POST',
url : '/createNewEntry.groovy',
contentType: false,
processData: false,
data: formData,
success: function(resultData){
console.log("Upload successful");
},
failure: function(resultData){
console.log("Upload failed");
}
});
}
});
The only way I have been able to verify that data has been sent is by accessing the reader attached to the request object: System.out.println(request.reader.text);
Output looks like this:
------WebKitFormBoundaryzNUfRksUAVW2ioCa
Content-Disposition: form-data; name="file"; filename="blatest.png"
Content-Type: image/png
------WebKitFormBoundaryzNUfRksUAVW2ioCa
Content-Disposition: form-data; name="enabled"
true
------WebKitFormBoundaryzNUfRksUAVW2ioCa
Content-Disposition: form-data; name="metadata"
asdfasdfasdf
------WebKitFormBoundaryzNUfRksUAVW2ioCa--
So apparently the data has been transferred?
Still, I'm struggling to get information out of the methods getParameter, getParameterMap, getParameterNames and getParameterValues, which all give me no output.
You have a multipart request on the server side. Normally your request should be an instance of HttpServletRequest (http://docs.oracle.com/javaee/6/api/javax/servlet/http/HttpServletRequest.html) and you can use these methods:
Part getPart(java.lang.String name) gets the Part with the given name.
java.util.Collection<Part> getParts() gets all the Parts of the request.
Managed to get the Parts using the following external libraries:
org.apache.commons.fileupload
org.apache.commons.io
Code then looks like this:
// Create a factory for disk-based file items
DiskFileItemFactory factory = new DiskFileItemFactory();
// Figure out the ServletContext
ServletContext servletContext = context;
// Configure a repository (to ensure a secure temp location is used)
File repository = (File) servletContext.getAttribute("javax.servlet.context.tempdir");
// Set factory constraints
factory.setSizeThreshold(50000);
factory.setRepository(repository);
// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload(factory);
// Parse the request
List<FileItem> items = upload.parseRequest(request);
// Process the uploaded items
Iterator<FileItem> iter = items.iterator();
while (iter.hasNext()) {
    FileItem item = iter.next();
    if (item.isFormField()) {
        processFormField(item);
    } else {
        processUploadedFile(item, servletContext);
    }
}
request and response are provided by the groovlet object.
The methods processFormField() and processUploadedFile() can access the form data and the cached files.
processFormField(), for example, accesses the information the following way:
private void processFormField(FileItem item) {
    String name = item.getFieldName()
    String value = item.getString()
    if (name == "enabled") {
        queryEnabledValue = value;
    }
    if (name == "metadata") {
        queryMetadata = value;
    }
}

Upload file with nodeJS

I am having trouble uploading a file with nodeJS and Angular.
I found solutions, but only ones using Ajax, which I'm not familiar with. Is it possible to do it without Ajax?
With the following code I get this error :
POST http://localhost:2000/database/sounds 413 (Payload Too Large)
Code:
HTML:
<div class="form-group">
<label for="upload-input">This needs to be a .WAV file</label>
<form enctype="multipart/form-data" action="/database/sounds" method="post">
<input type="file" class="form-control" name="uploads[]" id="upload-input" multiple="multiple">
</form>
<button class="btn-primary" ng-click="uploadSound()">UPLOAD</button>
</div>
Javascript:
$scope.uploadSound = function(){
    var x = document.getElementById("upload-input");
    if ('files' in x) {
        if (x.files.length == 0) {
            console.log("Select one or more files.");
        } else {
            var formData = new FormData();
            for (var i = 0; i < x.files.length; i++) {
                var file = x.files[i];
                if (file.type == ("audio/wav")) {
                    console.log("Importing :");
                    if ('name' in file) {
                        console.log("-name: " + file.name);
                    }
                    if ('size' in file) {
                        console.log("-size: " + file.size + " bytes");
                    }
                    formData.append('uploads[]', file, file.name);
                } else {
                    console.log("Error with: '"+file.name+"': the type '"+file.type+"' is not supported.");
                }
            }
            $http.post('/database/sounds', formData).then(function(response){
                console.log("Upload :");
                console.log(response.data);
            });
        }
    }
}
NodeJS:
// Upload a sound
app.post('/database/sounds', function(req, res){
    var form = new formidable.IncomingForm();
    // specify that we want to allow the user to upload multiple files in a single request
    form.multiples = true;
    // store all uploads in the /uploads directory
    form.uploadDir = path.join(__dirname, '/database/sounds');
    // every time a file has been uploaded successfully,
    // rename it to its original name
    form.on('file', function(field, file) {
        fs.rename(file.path, path.join(form.uploadDir, file.name));
    });
    // log any errors that occur
    form.on('error', function(err) {
        console.log('An error has occurred: \n' + err);
    });
    // once all the files have been uploaded, send a response to the client
    form.on('end', function() {
        res.end('success');
    });
    // parse the incoming request containing the form data
    form.parse(req);
});
EDIT:
The error became
POST http://localhost:2000/database/sounds 400 (Bad Request)
If you are using bodyParser:
app.use(bodyParser.urlencoded({limit: '100mb', extended: true}));
app.use(bodyParser.json({limit: '100mb'}));
This will allow you to upload files up to 100 MB.
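As a rough sketch, assuming a plain Express app like the one in the question, those limit settings would go before the upload route is registered (the 100mb value is just the example limit from above):
var express = require('express');
var bodyParser = require('body-parser');
var app = express();

// Raise the body size limits before any routes are registered.
app.use(bodyParser.urlencoded({limit: '100mb', extended: true}));
app.use(bodyParser.json({limit: '100mb'}));

app.post('/database/sounds', function (req, res) {
    // ... formidable handling as in the question ...
});

app.listen(2000);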
For the json/urlencoded limit, it's recommended to configure them in server/config.json as follows:
{
  "remoting": {
    "json": {"limit": "50mb"},
    "urlencoded": {"limit": "50mb", "extended": true}
  }
}
Please note that the LoopBack REST API has its own express router with bodyParser.json/urlencoded middleware. When you add a global middleware, it has to come before the boot() call.
var loopback = require('loopback');
var boot = require('loopback-boot');
var app = module.exports = loopback();
// request limit: 524288000 bytes (500 MB)
app.use(loopback.bodyParser.json({limit: 524288000}));
app.use(loopback.bodyParser.urlencoded({limit: 524288000, extended: true}));
With regards to checking that the data is actually a WAV file, your best bet is to look at the contents of the file and determine if it looks like a WAV file or not.
The WAVE PCM soundfile format article goes into the details of the format.
To be absolutely sure that this is a proper WAV file, and that it's not broken in some way, you need to check that all of the fields defined there make sense. But a quick solution might be to just check that the first four bytes of the content are the letters 'RIFF'. It won't guard against corrupted files or malicious content, but it's a good place to start, I think.
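As an illustration, here is a minimal client-side sketch of that quick check; the checkIsRiff helper name is my own, and the check is asynchronous, so the POST would need to wait for its callback:
// Reads the first four bytes of a File/Blob and checks for the ASCII
// letters 'RIFF' that open a WAV (RIFF) container.
function checkIsRiff(file, callback) {
    var reader = new FileReader();
    reader.onload = function () {
        var bytes = new Uint8Array(reader.result);
        var magic = String.fromCharCode(bytes[0], bytes[1], bytes[2], bytes[3]);
        callback(magic === 'RIFF');
    };
    reader.readAsArrayBuffer(file.slice(0, 4));
}

// Usage inside the upload loop, instead of trusting file.type alone:
checkIsRiff(file, function (isRiff) {
    if (isRiff) {
        formData.append('uploads[]', file, file.name);
    } else {
        console.log("'" + file.name + "' does not look like a WAV file.");
    }
});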
I tried to change the object sent into URL params, as suggested in Very Simple AngularJS $http POST Results in '400 (Bad Request)' and 'Invalid HTTP status code 400':
$http.post({
    method: 'POST',
    url: '/upload',
    data: formData,
    headers: {'Content-Type': 'application/x-www-form-urlencoded'},
    transformRequest: function(obj) {
        var str = [];
        for (var p in obj)
            str.push(encodeURIComponent(p) + "=" + encodeURIComponent(obj[p]));
        return str.join("&");
    }
}).success(function(response){
    console.log("Uploaded :");
    console.log(response.data);
});
But I get a bad request error.
Why is no data received? The console.log just before this shows I have one file in my formData.
Error: $http:badreq
Bad Request Configuration
Http request configuration url must be a string or a $sce trusted object. Received:
{"method":"POST","url":"/upload","data":{},"headers":{"Content-Type":"application/x-www-form-urlencoded"}}

How to abort/stop an Amazon AWS s3 upload in progress

I am using the javascript version of the aws sdk to upload a file to an amazon s3 bucket.
code :
AWS.config.update({
    accessKeyId : 'access-key',
    secretAccessKey : 'secret-key'
});
AWS.config.region = 'region';
var bucket = new AWS.S3({params: {Bucket: 'bucket-name'}});
//var fileChooser = document.getElementById('file');
var files = event.target.files;
$.each(files, function(i, file){
    //console.log(file.name);
    if (file) {
        var params = {Key: file.name, ContentType: file.type, Body: file};
        bucket.upload(params).on('httpUploadProgress', function(evt) {
            console.log("Uploaded :: " + parseInt((evt.loaded * 100) / evt.total) + '%');
            if ("Uploaded :: " + parseInt((evt.loaded * 100) / evt.total) + '%' == 'Uploaded :: 20%') {
                console.log("abort upload");
                bucket.abort.bind(bucket);
            }
        }).send(function(err, data) {
            if (err != "null") {
                console.log(data);
                //alert("Upload Success \nETag:"+ data.ETag + "\nLocation:"+ data.Location);
                var filename = data.Location.substr(data.Location.lastIndexOf("/")+1, data.Location.length-1);
                console.log(filename);
                fileData = filename;
                filename = filename.replace("%20"," ");
                $('.aws-file-content').append('<i id="delete-aws-file'+i+'" class="delete-aws-file icon-remove-sign" data-filename=' + fileData +'></i><a href="'+data.Location+'" target=_blank >'+filename+'</a><br>');
            } else {
                console.log(err);
            }
        });
    }
});
While the file is uploading parts of the file successfully and is still in progress, I want to abort/stop the file upload.
I tried:
bucket.abort(); // not working
bucket.abort.bind(bucket); // not working
Thanks for any help.
Found the solution:
// replaced bucket.upload() with bucket.putObject()
var params = {Key: file.name, ContentType: file.type, Body: file};
request = bucket.putObject(params);
Then, to abort the request:
abort: function(){
    request.abort();
}
You cannot bind abort from the bucket, which is your S3 object; it must be called on the upload itself.
Change it to something like this:
var upload = bucket.upload(params);
upload.send(....);
so you can bind on the upload, like:
upload.abort.bind(upload);
You can also call it within a timeout, as shown in the example:
// abort request in 1 second
setTimeout(upload.abort.bind(upload), 1000);
Calling abort() in the browser environment will not abort any requests that are already in flight. If a multipart upload was created, any parts not yet uploaded will not be sent, and the multipart upload will be cleaned up.
The default part size is 5 * 1024 * 1024 bytes (5 MiB).
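Putting those pieces together, a minimal sketch of an abortable managed upload might look like this; the 20% threshold mirrors the question, and bucket and params are assumed to be set up as in the code above:
var upload = bucket.upload(params);

upload.on('httpUploadProgress', function (evt) {
    var percent = parseInt((evt.loaded * 100) / evt.total);
    console.log("Uploaded :: " + percent + '%');
    if (percent >= 20) {
        console.log("abort upload");
        // abort() must be called on the managed upload, not on the S3 client.
        upload.abort();
    }
});

upload.send(function (err, data) {
    if (err) {
        // An aborted upload surfaces here as an error.
        console.log(err);
    } else {
        console.log(data.Location);
    }
});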
Through dumb luck I've stumbled upon a way to do this for multipart uploads.
The accepted answer forces you to use the putObject method, which does not chunk uploads or send them using the multipart upload API.
The following solution uses the s3.upload method of the AWS SDK for JavaScript in the Browser, and it seems to work just fine, even though the example from the official documentation doesn't work.
var bucket = new AWS.S3({params: {Bucket: 'bucket-name'}});
var params = {Key: file.name, ContentType: file.type, Body: file};
bucket.upload(params).send();
setTimeout(bucket.abort, 1000);
That's it. I just tried calling bucket.abort() and it just worked. Not sure why AWS hasn't documented this.

How can I upload files to Amazon S3 using Cordova FileTransfer?

I'm following Heroku's tutorial on direct uploads to Amazon S3.
After getting a signed request from AWS through the Node.js app, they use a "normal" XMLHttpRequest to send the file.
This is their function:
function upload_file(file, signed_request, url){
    var xhr = new XMLHttpRequest();
    xhr.open("PUT", signed_request);
    xhr.setRequestHeader('x-amz-acl', 'public-read');
    xhr.onload = function() {
        if (xhr.status === 200) {
            document.getElementById("preview").src = url;
            document.getElementById("avatar_url").value = url;
        }
    };
    xhr.onerror = function() {
        alert("Could not upload file.");
    };
    xhr.send(file);
}
Now, I'm working with Cordova and, since I don't get a File object from the camera plugin, but only the file URI, I used Cordova's FileTransfer to upload pictures to my Node.js app with multipart/form-data and it worked fine.
However, I can't manage to make it work for Amazon S3.
Here's what I have:
$scope.uploadPhoto = function () {
    $scope.getSignedRequest(function (signedRequest) {
        if (!signedRequest)
            return;
        var options = new FileUploadOptions();
        options.fileKey = 'file';
        options.httpMethod = 'PUT';
        options.mimeType = 'image/jpeg';
        options.headers = {
            'x-amz-acl': 'public-read'
        };
        options.chunkedMode = false;
        var ft = new FileTransfer();
        ft.upload($scope.photoURI, encodeURI(signedRequest.signed_request), function () {
            // success
        }, function () {
            // error
        }, options);
    });
};
I've tried both chunkedMode = true and chunkedMode = false, but neither the success nor the error callback is called.
So, is there a way to upload a file to S3 with FileTransfer?
Do I actually need the signed request or is it only necessary if I use XHR?
Any hint is appreciated.
I ended up with this function in Cordova:
$scope.uploadPhoto = function () {
    $scope.getSignedRequest(function (signedRequest) {
        if (!signedRequest)
            return;
        var options = new FileUploadOptions();
        options.chunkedMode = false;
        options.httpMethod = 'PUT';
        options.headers = {
            'Content-Type': 'image/jpeg',
            'X-Amz-Acl': 'public-read'
        };
        var ft = new FileTransfer();
        ft.upload($scope.photoURI, signedRequest.signedUrl, function () {
            $scope.$apply(function () {
                // success
            });
        }, function () {
            $scope.$apply(function () {
                // failure
            });
        }, options);
    });
};
The important bits are setting the Content-Type header, so that multipart/form-data won't be used, and chunkedMode = false to send the file with a single request.
EDIT: Removed changes to the plugin code which were, in hindsight, useless (outdated plugin).
Not able to add a comment, so answering here:
Strange, it works for me using $cordovaFileTransfer.upload. I don't have the 'x-amz-acl': 'public-read' header, and I also don't use encodeURI on the signed URL. Have you been able to debug it and see any errors? I used chrome://inspect and port forwarding to connect to my app running on the phone, so I was able to debug the response from Amazon. There might be another reason why it's failing.
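For reference, a rough sketch of that $cordovaFileTransfer variant, assuming the same signedRequest.signedUrl and $scope.photoURI as in the answer above and that $cordovaFileTransfer is injected into the controller:
var options = new FileUploadOptions();
options.httpMethod = 'PUT';
options.chunkedMode = false;
options.headers = { 'Content-Type': 'image/jpeg' };

// $cordovaFileTransfer wraps the FileTransfer plugin in a promise-based API.
$cordovaFileTransfer.upload(signedRequest.signedUrl, $scope.photoURI, options)
    .then(function (result) {
        // success
    }, function (err) {
        // failure
    });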

Direct uploading to AWS S3 : SignatureDoesNotMatch only for IE

I use Amazon Web Services S3 to upload and store my files. I generate a pre-signed URL server-side with the AWS SDK for Node.js, so that files can be uploaded directly from the browser using this pre-signed URL.
How it works
Server-side, I have a method which returns the pre-signed URL.
AWS.config.loadFromPath(__dirname + '/../properties/aws-config.json');
AWS.config.region = 'eu-west-1';
//Credentials are loaded
var s3 = new AWS.S3();
var docId = req.query.doc;
var params = {
    Bucket: res.locals.user.bucketId,
    Key: docId + "." + req.query.fileExtension,
    ACL: "bucket-owner-read",
    ContentType: req.query.fileType
};
s3.getSignedUrl('putObject', params, function (err, url) {
    if (url) {
        res.writeHead(200);
        var result = {
            AWSUrl: url
        };
        //Generates the pre-signed URL with a signature param
        res.end(JSON.stringify(result));
    }
});
I upload my file directly to S3 client-side:
var loadToAWSS3 = function(url, file, inputFileId){
    var data = new FormData();
    data.append('file', file);
    $.ajax({
        url: url, // URL obtained from the server-side method
        type: 'PUT',
        data: data,
        headers: {
            'Content-Type': file.type
        },
        processData: false,
        xhr: function() {
            var myXhr = $.ajaxSettings.xhr();
            if (myXhr.upload) {
                myXhr.upload.addEventListener('progress', function(e){
                    if (e.lengthComputable) {
                        var max = e.total;
                        var current = e.loaded;
                        var percentage = (current * 100) / max;
                        //stuff to handle progress...
                    }
                },
                false);
            }
            return myXhr;
        },
        statusCode: {
            200: function () {
                //some stuff
            }
        }
    });
}
Chrome & Firefox behaviors
Works as expected: the pre-signed URL is retrieved, then the file is uploaded, and I can see it in my AWS S3 console.
Lovely IE 11
SignatureDoesNotMatch error! IE adds some extra content to the Content-Type request header that AWS does not expect, which causes the signature comparison to fail. Server-side, the SDK generates the signature based on:
ContentType : req.query.fileType //(something like application/pdf)
whereas when I inspect the request with the IE debugger, I see
Content-Type application/pdf, multipart/form-data; boundary=---------------------------7df2f3283091c
In Chrome my request header is fine:
Content-Type: application/pdf
What can I do to remove this extra Content-Type in IE? If that is not possible, how can I generate this extra content before sending the request, so that the pre-signed URL is generated with it included in the signature?
OK, I finally figured it out.
Using FormData() simulates sending files through a form. That's why IE always adds
multipart/form-data; boundary=---------------------------7**********
To get around the problem, I use a raw XMLHttpRequest, thanks to this answer:
var xmlHttpRequest = new XMLHttpRequest();
xmlHttpRequest.open('PUT', url, true);
xmlHttpRequest.setRequestHeader('Content-Type', file.type);
xmlHttpRequest.send(file);
And it works with Chrome, Firefox, IE 11 (I have not tested with IE<11 but according to W3Schools it works for IE7+). No more extra content type with IE.
Hope this helps
If you have access to the xhr object, you can do:
var _send = xhr.send;
xhr.send = function() {
    _send.call(xhr, file);
}
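In the jQuery setup shown earlier, that patch could live inside the xhr factory, for example (a sketch only; it reuses the myXhr variable already returned there):
xhr: function () {
    var myXhr = $.ajaxSettings.xhr();
    // Replace send() so the raw file becomes the request body,
    // bypassing the FormData wrapper (and its boundary) entirely.
    var _send = myXhr.send;
    myXhr.send = function () {
        _send.call(myXhr, file);
    };
    return myXhr;
}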
