I am using Plupload to upload to S3. My problem is that I want to change the names of the files so that, by the time they reside in S3, they follow a format I want. I managed to change the file names of the uploaded files in this handler:
FilesAdded: function (up, files) {
    for (var i in files) {
        // Replace underscores with spaces and trim whitespace
        files[i].name = files[i].name.split('_').join(' ').trim();
        alert('Selected files: ' + files[i].name);
    }
}
The file name changes in the control, but when I check S3 the file name is unchanged.
I made sure the unique_names property is false and the rename property is true, but it did not work. Any help?
I faced the same problem, i.e. using Plupload for S3 and wanting to normalize file names before uploading. Unfortunately the files parameter appears to be read-only, and changes to a file's name don't show up in the submitted form.
But if we set the 'key' param to an exact string (the normalized name), S3 will save the file under that name. We can build the normalized name and use it as the 'key' in the multipart_params inside the 'FilesAdded' callback.
....
FilesAdded: function(up, files) {
    console.log("Uploader.FilesAdded ", up, files);
    var file = files[0];
    // Replace unwanted characters with "_"
    var new_name = file.name.replace(/[^a-z0-9\.]+/gi, "_").toLowerCase();
    console.log("Changing file name to ", new_name);
    // Create multipart_params here (or just set 'key' and 'Filename')
    var multipart_params = {
        'key': config.key + new_name, // instead of ${filename}
        'Filename': config.key + new_name,
        'acl': config.acl,
        'Content-Type': '',
        'AWSAccessKeyId': config.accessId,
        'policy': config.policy,
        'signature': config.signature
    };
    up.settings.multipart_params = multipart_params;
}
....
I had the requirement to add a user-definable prefix to all the files in the upload queue just before executing the multipart upload to S3. In my case the prefix could be edited at any time before starting the upload, so using the FilesAdded event wasn't acceptable.
The solution that worked for me was to use the BeforeUpload event, which fires for each file in the queue just before its upload starts.
BeforeUpload: function(up, file) {
    // Change the file name just before this file is uploaded
    file.name = appendUniqueId(file.name);
    file.name = addPrefix(file.name);
    // Point the S3 key at the new name
    var params = up.settings.multipart_params;
    params.key = file.name;
    params.Filename = file.name;
}
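appendUniqueId and addPrefix aren't defined in the answer; here is a minimal sketch of what they might look like (both hypothetical, assuming the prefix comes from a text input with id "prefix"):
// Hypothetical helpers -- adjust to your own naming scheme
function appendUniqueId(name) {
    // Insert a timestamp before the extension, e.g. photo.jpg -> photo_1510000000000.jpg
    var dot = name.lastIndexOf('.');
    if (dot === -1) return name + '_' + Date.now();
    return name.slice(0, dot) + '_' + Date.now() + name.slice(dot);
}

function addPrefix(name) {
    // Read the user-definable prefix from an input field (assumed id "prefix")
    var prefix = document.getElementById('prefix').value || '';
    return prefix + name;
}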
Try unique_names: true together with rename: true. It works for me.
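For reference, a minimal sketch of where those settings would go (the URL is a placeholder, and in some Plupload versions rename belongs to the jQuery UI queue widget rather than the core uploader):
$("#uploader").plupload({
    url: 'https://your-bucket.s3.amazonaws.com/', // placeholder upload URL
    unique_names: true, // let Plupload generate a unique name per file
    rename: true,       // allow renaming files in the queue UI
    multipart_params: { /* key, policy, signature, ... */ }
});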
The only fully working solution I have found for this problem is to make use of the multipart_params property on the Plupload uploader.
The code I am using is:
uploader.bind('BeforeUpload', function (up, file) {
    uploader.settings.multipart_params = { 'renamedFileName': 'yourRenamedFileName' };
});
Then, when you are processing your image on the server, you can read this value from the posted form data.
C# code:
var renamedFileName = context.Request["renamedFileName"].ToString();
For your code you could try:
uploader.bind('BeforeUpload', function (up, file) {
    uploader.settings.multipart_params = { 'renamedFileName': file.name.split('_').join(' ').trim() };
});
You can then use this additional information to rename the file on your server.
So I uploaded an image file and stored it in a local folder. The assignment requirement is to move the image file from folder A to folder B, and I have no clue how to do this.
app.get('/fineupload003', function(req, res) {
    function moveApprovedFile(file, uuid, success, failure) {
        var sourcePath = uploadedFilesPath;
        var desPath = 'approved';
        var desDir = desPath + "/";
        var fileDes = desDir + file.name;
        fs.access() // this is where I'm stuck
    }
});
If the requirement is just to move the file and not copy it, you can rename the file, which acts as a move.
fs.rename(sourcePath, desPath, (err) => {
    if (err) throw err;
});
Read more on rename: https://nodejs.org/docs/latest/api/fs.html#fs_fs_rename_oldpath_newpath_callback
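For completeness, a minimal sketch of how it could slot into moveApprovedFile from the question (note the destination should be the full path fileDes, not just the directory):
const fs = require('fs');

// Move the file by renaming it; this works as long as source and
// destination are on the same filesystem.
fs.rename(sourcePath, fileDes, (err) => {
    if (err) return failure(err);
    success();
});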
I'm trying to add a local file to the zip, so when the user downloads and unzips it, they'll get a folder with a .dll and a config.json file:
var zip = new JSZip();
options.forEach(option => {
    zip.folder("REST." + option + ".Connector")
        .file("config.json", "//config for " + option)
        // I want this file to be from a local directory within my project
        // e.g. {dir}\custom_rest_connector_repository\src\dlls\Connectors.RestConnector.dll
        .file('../dlls/Connectors.RestConnector.dll', null);
});
zip.generateAsync({type: "blob"}).then(function (blob) {
    FileSaver.saveAs(blob, "REST_Connectors_" + dateStr + ".zip");
});
I read through the JSZip documentation but couldn't find an example or any information on whether this can actually be done.
If it can't, is there another, more robust library that supports this operation?
Found the answer to my own question, using jszip-utils:
JSZipUtils.getBinaryContent("../dlls/Connectors.RestConnector.dll", function (err, data) {
    if (err) {
        throw err; // or handle the error
    }
    zip.file("../dlls/Connectors.RestConnector.dll", data, {binary: true});
});
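Note that getBinaryContent is asynchronous, so the zip has to be generated after the callback fires. A minimal sketch of how it could fit into the original loop (the path, options, dateStr, and FileSaver are taken from the question):
// Fetch the DLL once, then build and save the zip inside the callback
JSZipUtils.getBinaryContent("../dlls/Connectors.RestConnector.dll", function (err, data) {
    if (err) { throw err; }

    var zip = new JSZip();
    options.forEach(option => {
        zip.folder("REST." + option + ".Connector")
            .file("config.json", "//config for " + option)
            .file("Connectors.RestConnector.dll", data, { binary: true });
    });

    zip.generateAsync({ type: "blob" }).then(function (blob) {
        FileSaver.saveAs(blob, "REST_Connectors_" + dateStr + ".zip");
    });
});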
I am trying to upload a file to AWS S3. Before I upload it, I want to rename the file by adding a timestamp to its name, but I am getting the error 'Cannot assign to read only property 'name' of object '#<File>''.
Here is the code:
let file = e.target.files[0];
let timeStamp = (new Date()).getTime();
let fileExt = file.name.split('.')[file.name.split('.').length - 1];
let fileNameWithoutExt = file.name.replace(`.${fileExt}`, '');
let newFileName = fileNameWithoutExt + '_' + timeStamp + '.' + fileExt;
file.name = newFileName; // throws: File.name is read-only
Yep, it may seem like a weird rule to make name read-only, but it's what it is...
So the workaround, which is not so hard, is to create a new File object from your previous one...
var previous_file = new File(['foo'], 'file.txt', {type: 'text/plain'});
try {
    previous_file.name = 'hello.txt';
}
catch (e) {}
console.log(previous_file.name); // didn't work
// so we just create a new File from it...
var new_file = new File([previous_file], 'hello.txt');
console.log(new_file);
But also note that if you need to support older browsers that don't have the File constructor, you can override the file name in the FormData that you send to your server:
var file = new File(['foo'], 'text.txt', {type: 'text/plain'});
var formdata = new FormData();
// this will override the file name
formdata.append('file', file, 'hello.txt');
// and now you can send this formdata through xhr
// for demo, we will just log its content
for (let entry of formdata.entries()) {
    console.log(entry);
}
The append() method of FormData accepts a third optional filename parameter.
// new file name as a variable with timestamp
const newName = new Date().getTime() + event.target.files[0].name;
fd.append('file[]', event.target.files[0], newName);
You can't change the name of an already created file.
You can create a new instance of File with a new file name, as in the post above, but the File constructor is not supported by all browsers (it is not supported in IE or legacy Edge; see the support table).
Or you can put the new file name in the key property of your Amazon upload, https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-post-example.html :
instead of key = "folder1/folder2/${filename}"
you can write key = "folder1/folder2/yourfilename.txt"
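In Plupload terms that just means assigning a literal key in multipart_params; a sketch (the folder and naming scheme are placeholders):
// Sketch: hard-code the S3 object key instead of using ${filename}
uploader.bind('BeforeUpload', function (up, file) {
    up.settings.multipart_params.key = 'folder1/folder2/' + Date.now() + '_' + file.name;
});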
I have a Meteor application in which I get a base64 image. I want to save the image on a DigitalOcean instance, so I would convert it to a png (or another image format) and send it to the server to get a URL for the image.
But I didn't find a Meteor package that does this.
Do you know how I can do that?
I was running into a similar issue.
Run the following:
meteor npm install --save file-api
This allows, for example, the following code on the server:
import FileAPI from 'file-api';
const { File } = FileAPI;

const getFile = function(name, image) {
    // Strip everything up to and including "base64," and decode the payload
    const i = image.indexOf('base64,');
    const buffer = Buffer.from(image.slice(i + 7), 'base64');
    const file = new File({buffer: buffer, name, type: 'image/jpeg'});
    return file;
}
Simply call it with any name of file you prefer, and the base64 string as the image parameter.
I hope this helps. I have tested this and it works on the server. I have not tested it on the client but I don't see why it wouldn't work.
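A usage sketch (base64Image is assumed to hold a full data URL):
// Wrap a base64 data URL in a File object
const file = getFile('tile.jpg', base64Image);
console.log(file.name, file.type); // "tile.jpg" "image/jpeg"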
I solved my problem using fs.writeFile from the Node File System module.
This is my JavaScript code on the client side. I get a base64 image (img) from a plugin, and when I click my save button I do this:
$("#saveImage").click(function() {
var img = $image.cropper("getDataURL")
preview.setAttribute('src', img);
insertionImage(img);
});
var insertionImage = function(img){
//some things...
Meteor.call('saveTileImage', img);
//some things...
}
And on the server side, I have :
Meteor.methods({
    saveTileImage: function(fileData) {
        var fs = Npm.require('fs');
        var path = process.env.PWD + '/var/uploads/';
        // Strip the data-URL header and decode the base64 payload
        var base64Data = fileData.replace(/^data:image\/png;base64,/, "");
        var binaryData = Buffer.from(base64Data, 'base64');
        // currentTileId is assumed to be available in this scope
        var imageName = "tileImg_" + currentTileId + ".png";
        fs.writeFile(path + imageName, binaryData, Meteor.bindEnvironment(function (err) {
            if (err) {
                throw (new Meteor.Error(500, 'Failed to save file.', err));
            } else {
                insertionTileImage(imageName);
            }
        }));
    }
});
var insertionTileImage = function(fileName) {
    tiles.update({_id: currentTileId}, {$set: {image: "upload/" + fileName}});
}
So the Meteor method saveTileImage turns the base64 image into a png file on the server, and insertionTileImage stores its path in the database.
Would a blob URL be a better option for you?
Save the images to the server as you like, in base64 or whatever, and then when you display an image on a page, generate a blob URL for it. The URL is only valid at that moment, which prevents others from using your URL on other websites and overloading your image server...
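A minimal sketch of the idea (the image path and the #preview element are placeholders):
// Fetch the stored image and show it through a short-lived blob: URL
fetch('/upload/tileImg_123.png')
    .then(res => res.blob())
    .then(blob => {
        const url = URL.createObjectURL(blob);
        document.querySelector('#preview').src = url;
        // call URL.revokeObjectURL(url) once the image is no longer needed
    });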
I am using the JavaScript version of the AWS SDK to upload a file to an Amazon S3 bucket.
Code:
AWS.config.update({
    accessKeyId: 'access-key',
    secretAccessKey: 'secret-key'
});
AWS.config.region = 'region';
var bucket = new AWS.S3({params: {Bucket: 'bucket-name'}});
//var fileChooser = document.getElementById('file');
var files = event.target.files;
$.each(files, function(i, file) {
    //console.log(file.name);
    if (file) {
        var params = {Key: file.name, ContentType: file.type, Body: file};
        bucket.upload(params).on('httpUploadProgress', function(evt) {
            console.log("Uploaded :: " + parseInt((evt.loaded * 100) / evt.total) + '%');
            if ("Uploaded :: " + parseInt((evt.loaded * 100) / evt.total) + '%' == 'Uploaded :: 20%') {
                console.log("abort upload");
                bucket.abort.bind(bucket); // has no effect -- this is my problem
            }
        }).send(function(err, data) {
            if (!err) {
                console.log(data);
                //alert("Upload Success \nETag:"+ data.ETag + "\nLocation:"+ data.Location);
                var filename = data.Location.substr(data.Location.lastIndexOf("/") + 1);
                console.log(filename);
                fileData = filename;
                filename = filename.replace("%20", " ");
                $('.aws-file-content').append('<i id="delete-aws-file' + i + '" class="delete-aws-file icon-remove-sign" data-filename=' + fileData + '></i><a href="' + data.Location + '" target=_blank >' + filename + '</a><br>');
            } else {
                console.log(err);
            }
        });
    }
});
While the file is still uploading (some parts have already been uploaded successfully), I want to abort/stop the upload.
I tried:
bucket.abort(); // not working
bucket.abort.bind(bucket); // not working
Thanks for help.
Found the solution:
// replaced bucket.upload() with bucket.putObject()
var params = {Key: file.name, ContentType: file.type, Body: file};
request = bucket.putObject(params);
Then, to abort the request:
abort: function() {
    request.abort();
}
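Putting it together, a minimal sketch (the cancel button id is a placeholder):
// Keep a handle to the request so it can be aborted later
var request = bucket.putObject({Key: file.name, ContentType: file.type, Body: file});
request.send(function (err, data) {
    // an aborted request completes with a RequestAbortedError
    if (err) console.log(err);
});

// Abort on demand, e.g. from a cancel button
document.getElementById('cancel').onclick = function () {
    request.abort();
};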
You cannot call abort on the bucket, which is your S3 service object; it must be called on the upload request itself.
Change it to something like this:
var upload = bucket.upload(params)
upload.send(....)
so you can bind abort to the upload:
upload.abort.bind(upload);
You can call it within a timeout, as shown in the example:
// abort request in 1 second
setTimeout(upload.abort.bind(upload), 1000);
Calling abort() in the browser environment will not abort any requests that are already in flight. If a multipart upload was created, any parts not yet uploaded will not be sent, and the multipart upload will be cleaned up.
The default part size is 5 * 1024 * 1024 (5 MB).
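For reference, the part size and concurrency of a managed upload can be tuned via the second argument to upload; a minimal sketch:
// Sketch: explicit part size and concurrency for s3.upload
var upload = bucket.upload(params, {
    partSize: 10 * 1024 * 1024, // 10 MB parts instead of the 5 MB default
    queueSize: 2                // at most two parts in flight at a time
});
upload.send();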
Through dumb luck I've stumbled upon a way to do this for multipart uploads.
The accepted answer forces you to use the putObject method, which does not chunk uploads or send them using the multipart upload API.
The following solution uses the s3.upload method of the AWS SDK for JavaScript in the Browser, and it seems to work just fine, even though the example from the official documentation doesn't.
var bucket = new AWS.S3({params: {Bucket: 'bucket-name'}});
var params = {Key: file.name, ContentType: file.type, Body: file};
bucket.upload(params).send();
setTimeout(bucket.abort, 1000);
That's it. I just tried calling bucket.abort() and it just worked. Not sure why AWS hasn't documented this.