Sending file direct from browser to S3 but changing file name - javascript

I am using signed authorized S3 uploads so that users can upload files directly from their browser to S3 bypassing my server. This presently works, but the file name is the same as on the user's machine. I'd like to save it on S3 as a different name.
The formdata I post to amazon looks like this:
var formData = new FormData();
formData.append('key', targetPath); // e.g. /path/inside/bucket/myFile.mov
formData.append('AWSAccessKeyId', s3Auth.AWSAccessKeyId); // aws public key
formData.append('acl', s3Auth.acl); // e.g. 'public-read'
formData.append('policy', s3Auth.policy); // s3 policy including ['starts-with', '$key', '/path/inside/bucket/']
formData.append('signature', s3Auth.signature); // base64 sha1 hash of private key and base64 policy JSON
formData.append('success_action_status', '200'); // response code 200 on success
formData.append('file', file.slice()); // e.g. /path/on/user/computer/theirFile.mov
However instead of the file ending up at:
https://s3.amazonaws.com/mybucket/path/inside/bucket/myFile.mov
It ends up as:
https://s3.amazonaws.com/mybucket/path/inside/bucket/theirFile.mov
Note it has their filename but my base path.
I would like it to have the filename I specify as well.
UPDATE: This was working all along. I simply had other code that copied from one bucket to another and restored the original file name, which confused me.

Are you sure about the contents of targetPath and where that data comes from?
The behavior you are describing is what should happen in one particular case where targetPath doesn't actually contain /path/inside/bucket/myFile.mov.
I would suggest that targetPath actually contains the value /path/inside/bucket/${filename} -- and by that I mean, literally, the characters $ { f i l e n a m e } are at the end of that string, instead of the filename you intend.
If that's true, that's exactly how it's supposed to work.
If you do not know the name of the file a user will upload, the key value can include the special variable ${filename} which will be replaced with the name of the uploaded file. For example, the key value uploads/${filename} will become the object name uploads/Birthday Cake.jpg if the user uploads a file called Birthday Cake.jpg.
— https://aws.amazon.com/articles/1434
If you populate that variable with the literal filename you want to see in S3, then the uploads should behave as you expect, using your filename instead of the filename on the uploader's computer.
Also, a more secure approach would be to eliminate the key 'starts-with' logic in your policy, and instead explicitly sign a policy (dynamically) for each upload event, with the specific key you want the user to upload. Otherwise, it's not impossible to exploit this form to overwrite other files within the same key prefix.
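For illustration, here is roughly what a per-upload policy pinned to an exact key looks like (a sketch; the expiration, bucket, and key values are placeholders):

// signed server-side, freshly, for each individual upload
var policy = {
  expiration: "2015-01-01T00:00:00Z",
  conditions: [
    { bucket: "mybucket" },
    { key: "path/inside/bucket/myFile.mov" }, // exact key instead of ['starts-with', '$key', ...]
    { acl: "public-read" }
  ]
};

// the browser must then post that same exact key
formData.append('key', 'path/inside/bucket/myFile.mov');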

How to create temp files in nodejs

I want to attach an xml file and send it as an email. I have a string of text which I want to write into an xml file, but I don't want to actually create a file every time.
I am using nodemailer(https://community.nodemailer.com/using-attachments/) to send mail and it supports stream to attach file.
Does stream mean it has to actually create a file? Can't I just use a stream somehow to send it as an attachment without creating a file?
I have xml string like this which I want to put in an xml file and send email:
const xmlStringStart = `<ENVELOPE>
<HEADER>
<TALLYREQUEST>Import Data</TALLYREQUEST>
</HEADER>
<BODY>
<IMPORTDATA>
<REQUESTDESC>
<REPORTNAME>All Masters</REPORTNAME>
<STATICVARIABLES>
<SVCURRENTCOMPANY>X</SVCURRENTCOMPANY>
</STATICVARIABLES>
</REQUESTDESC>
<REQUESTDATA>...`;
I saw PassThrough https://nodejs.org/api/stream.html#stream_class_stream_passthrough but can't figure out how to use it.
From the nodemailer documentation, you can use a UTF-8 string as an attachment:
attachments: [
  { // utf-8 string as an attachment
    filename: 'text1.txt',
    content: xmlStringStart // your string
  }
]
I think this is simpler and better in your case. The stream option just reads a file as a stream and puts its contents into the attachment.
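For instance, a minimal sketch, assuming an SMTP transport (the host, credentials, and addresses are placeholders):

const nodemailer = require('nodemailer');

// placeholder transport settings -- substitute your own
const transporter = nodemailer.createTransport({
  host: 'smtp.example.com',
  port: 587,
  auth: { user: 'user@example.com', pass: 'secret' }
});

transporter.sendMail({
  from: 'user@example.com',
  to: 'recipient@example.com',
  subject: 'Tally export',
  text: 'XML attached.',
  attachments: [
    {
      filename: 'export.xml',
      content: xmlStringStart // the XML string from the question -- no file on disk needed
    }
  ]
}, (err, info) => {
  if (err) console.error(err);
  else console.log('Sent:', info.messageId);
});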
I would prefer it if I could have a code example to work on, but the basic concept remains the same.
You can just create a file, send it as an attachment through nodemailer, and then delete the file once that action is complete, thus making it act like a temporary file.
const fs = require("fs");

// create a txt file and write Hello World into it
fs.writeFileSync("/foo/bar.txt", "Hello World");

/*
Send the file as an attachment through nodemailer
*/

// once you're done sending the file, go ahead and delete it
fs.unlinkSync("/foo/bar.txt");
I think the filename here refers to the displayed filename when it's downloaded as an attachment; the path option is the file location.
As for PassThrough, it's probably used like this:
const { PassThrough } = require('stream');
const pt = new PassThrough();
attachments: [
  { // stream as an attachment
    filename: 'text4.txt',
    content: pt
  }
]
then
pt.write("....");
and to close: pt.end("...");

What is the purpose of the computed_hashes.json and verified_contents.json files in a secure Chrome extension?

I've seen some Chrome extensions that hash their folder and file names. They have a folder named 'metadata' and two files inside it: 'computed_hashes.json' and 'verified_contents.json'. What are these files, what do they do, and how can I get them or use them?
computed_hashes.json
computed_hashes.json records the SHA256 hashes of blocks of the files included in the extension, which is presumably used for file integrity and/or security purposes to ensure the files haven't been corrupted/tampered with.
I go into this in depth in this StackOverflow answer, where I reference the various relevant sections in the Chromium source code.
The main relevant files are:
extensions/browser/computed_hashes.h
extensions/browser/computed_hashes.cc
And within this, the main relevant functions are:
Compute
ComputeAndCheckResourceHash
GetHashesForContent
And the actual hash calculation can be seen in the ComputedHashes::GetHashesForContent function.
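For orientation, the resulting computed_hashes.json is roughly of this shape (a sketch; field names as used by the ComputedHashes serialization code, and the hash value is an abbreviated placeholder):

{
  "version": 2,
  "block_size": 4096,
  "file_hashes": [
    {
      "path": "manifest.json",
      "block_size": 4096,
      "block_hashes": ["kDTg0Zn..."]
    }
  ]
}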
verified_contents.json
Short Answer
This is used for file integrity and/or security purposes to ensure that the extension files haven't been corrupted/tampered with.
verified_contents.json ensures that the Base64 encoded payload of the signed_content object within the object with a description of treehash per file validates against the signature of the object within signatures that has a header.kid of webstore. This is validated using crypto::SignatureVerifier::RSA_PKCS1_SHA256 across the concatenated values of protected + . + payload.
If the signature validates correctly, the SHA256 hash of the blocks of the files included in the extension are then calculated and compared as per computed_hashes.json (described above).
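Concretely, the file has roughly this shape (a sketch based on the fields above; the Base64 values are abbreviated placeholders):

[
  {
    "description": "treehash per file",
    "signed_content": {
      "payload": "eyJpdGVtX2lkIjoi...",
      "signatures": [
        {
          "header": { "kid": "webstore" },
          "protected": "eyJhbGciOi...",
          "signature": "pfNviZ..."
        }
      ]
    }
  }
]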
Deep Dive Explanation
To determine the internal specifics of how verified_contents.json is created/validated, we can search the chromium source code for verified_contents as follows:
https://source.chromium.org/search?q=verified_contents
This returns a number of interesting files, including:
extensions/browser/verified_contents.h
extensions/browser/verified_contents.cc
Looking in verified_contents.h we can see a comment describing the purpose of verified_contents.json, and how it's created by the webstore:
// This class encapsulates the data in a "verified_contents.json" file
// generated by the webstore for a .crx file. That data includes a set of
// signed expected hashes of file content which can be used to check for
// corruption of extension files on local disk.
We can also see a number of function prototypes that sound like they are used for parsing and validating the verified_contents.json file:
// Returns verified contents after successfully parsing verified_contents.json
// file at |path| and validating the enclosed signature. Returns nullptr on
// failure.
// Note: |public_key| must remain valid for the lifetime of the returned
// object.
static std::unique_ptr<VerifiedContents> CreateFromFile(
    base::span<const uint8_t> public_key,
    const base::FilePath& path);

// Returns verified contents after successfully parsing |contents| and
// validating the enclosed signature. Returns nullptr on failure. Note:
// |public_key| must remain valid for the lifetime of the returned object.
static std::unique_ptr<VerifiedContents> Create(
    base::span<const uint8_t> public_key,
    base::StringPiece contents);

// Returns the base64url-decoded "payload" field from the |contents|, if
// the signature was valid.
bool GetPayload(base::StringPiece contents, std::string* payload);

// The |protected_value| and |payload| arguments should be base64url encoded
// strings, and |signature_bytes| should be a byte array. See comments in the
// .cc file on GetPayload for where these come from in the overall input
// file.
bool VerifySignature(const std::string& protected_value,
                     const std::string& payload,
                     const std::string& signature_bytes);
We can find the function definitions for these in verified_contents.cc:
VerifiedContents::CreateFromFile calls base::ReadFileToString (which then calls ReadFileToStringWithMaxSize that reads the file as a binary file with mode rb) to load the contents of the file, and then passes this to Create
VerifiedContents::Create
calls VerifiedContents::GetPayload to extract/validate/decode the contents of the Base64 encoded payload field within verified_contents.json (see below for deeper explanation of this)
parses this as JSON with base::JSONReader::Read
extracts the item_id key, validates it with crx_file::id_util::IdIsValid, and adds it to verified_contents as extension_id_
extracts the item_version key, validates it with Version::IsValid(), and adds it to verified_contents as version_
extracts all of the content_hashes objects and
verifies that the format of each is treehash
extracts the block_size and hash_block_size, ensures they have the same value, and adds block_size to verified_contents as block_size_
extracts all of the files objects
extracts the path and root_hash keys and ensures that root_hash is Base64 decodeable
calculates the canonical_path using content_verifier_utils::CanonicalizeRelativePath and base::FilePath::FromUTF8Unsafe, and inserts it into root_hashes_ in the verified_contents
finally, returns the verified_contents (a sketch of the decoded payload follows this list)
VerifiedContents::GetPayload
parses the contents as JSON with base::JSONReader::Read
finds an object in the JSON that has the description key set to treehash per file
extracts the signed_content object
extracts the signatures array
finds an object in the signatures array that has a header.kid set to webstore
extracts the protected / signature keys and Base64 decodes the signature into signature_bytes
extracts the payload key
calls VerifySignature with protected / payload / signature_bytes
if the signature is valid, Base64 decodes the payload into a JSON string
VerifiedContents::VerifySignature
calls SignatureVerifier::VerifyInit using crypto::SignatureVerifier::RSA_PKCS1_SHA256
uses this to validate protected_value + . + payload
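Putting together the fields extracted in VerifiedContents::Create above, the decoded payload is roughly of this shape (a sketch; all values are placeholders):

{
  "item_id": "abcdefghijklmnopabcdefghijklmnop",
  "item_version": "1.2.3",
  "content_hashes": [
    {
      "format": "treehash",
      "block_size": 4096,
      "hash_block_size": 4096,
      "files": [
        { "path": "manifest.json", "root_hash": "kDTg0Zn..." },
        { "path": "background.js", "root_hash": "x1flhOU..." }
      ]
    }
  ]
}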
Since this didn't show how the file hashes themselves were verified, I then searched for where VerifiedContents::CreateFromFile was called:
https://source.chromium.org/search?q=VerifiedContents::CreateFromFile
Which pointed me to the following files:
extensions/browser/content_verifier/content_hash.cc
Where
VerifiedContents::CreateFromFile is called by ReadVerifiedContents
ReadVerifiedContents is called by ContentHash::GetVerifiedContents, and when the contents are successfully verified, it will pass them to verified_contents_callback
GetVerifiedContents is called by ContentHash::Create, which passes ContentHash::GetComputedHashes as the verified_contents_callback
ContentHash::GetComputedHashes calls ContentHash::BuildComputedHashes
ContentHash::BuildComputedHashes will read/create the computed_hashes.json file by calling file_util::GetComputedHashesPath, ComputedHashes::CreateFromFile, CreateHashes (which calls ComputedHashes::Compute), etc
Note that ComputedHashes::CreateFromFile and ComputedHashes::Compute are the functions described in the computed_hashes.json section above (used to calculate the SHA256 hash of the blocks of the files included in the extension), and which I go into much more detail about in this StackOverflow answer.

AWS S3 browser upload using HTTP POST gives invalid signature

I'm working on a website where the users should be able to upload video files to AWS. In order to avoid unnecessary traffic I would like the user to upload directly to AWS (and not through the API server). In order to not expose my secret key in the JavaScript I'm trying to generate a signature in the API. It does, however, tell me when I try to upload, that the signature does not match.
For signature generation I have been using http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-UsingHTTPPOST.html
On the backend I'm running C#.
I generate the signature using
string policy = $@"{{""expiration"":""{expiration}"",""conditions"":[{{""bucket"":""dennisjakobsentestbucket""}},[""starts-with"",""$key"",""""],{{""acl"":""private""}},[""starts-with"",""$Content-Type"",""""],{{""x-amz-algorithm"":""AWS4-HMAC-SHA256""}}]}}";
which generates the following
{"expiration":"2016-11-27T13:59:32Z","conditions":[{"bucket":"dennisjakobsentestbucket"},["starts-with","$key",""],{"acl":"private"},["starts-with","$Content-Type",""],{"x-amz-algorithm":"AWS4-HMAC-SHA256"}]}
based on http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html (I base64 encode the policy). I have tried to keep it very simple, just as a starting point.
For generating the signature, I use code found on the AWS site.
static byte[] HmacSHA256(String data, byte[] key)
{
    String algorithm = "HmacSHA256";
    KeyedHashAlgorithm kha = KeyedHashAlgorithm.Create(algorithm);
    kha.Key = key;
    return kha.ComputeHash(Encoding.UTF8.GetBytes(data));
}

static byte[] GetSignatureKey(String key, String dateStamp, String regionName, String serviceName)
{
    byte[] kSecret = Encoding.UTF8.GetBytes(("AWS4" + key).ToCharArray());
    byte[] kDate = HmacSHA256(dateStamp, kSecret);
    byte[] kRegion = HmacSHA256(regionName, kDate);
    byte[] kService = HmacSHA256(serviceName, kRegion);
    byte[] kSigning = HmacSHA256("aws4_request", kService);
    return kSigning;
}
Which I use like this:
byte[] signingKey = GetSignatureKey(appSettings["aws:SecretKey"], dateString, appSettings["aws:Region"], "s3");
byte[] signature = HmacSHA256(encodedPolicy, signingKey);
where dateString is in the format yyyyMMdd.
I POST information from JavaScript using
let xmlHttpRequest = new XMLHttpRequest();
let formData = new FormData();
formData.append("key", "<path-to-upload-location>");
formData.append("acl", signature.acl); // private
formData.append("Content-Type", "$Content-Type");
formData.append("AWSAccessKeyId", signature.accessKey);
formData.append("policy", signature.policy); //base64 of policy
formData.append("x-amz-credential", signature.credentials); // <accesskey>/20161126/eu-west-1/s3/aws4_request
formData.append("x-amz-date", signature.date);
formData.append("x-amz-algorithm", "AWS4-HMAC-SHA256");
formData.append("Signature", signature.signature);
formData.append("file", file);
xmlHttpRequest.open("post", "http://<bucketname>.s3-eu-west-1.amazonaws.com/");
xmlHttpRequest.send(formData);
I have been using UTF-8 everywhere, as prescribed by AWS. In their examples the signature is in hex format, which I have tried as well.
No matter what I try, I get a 403 error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
My policy on AWS has "s3:Get*", "s3:Put*"
Am I missing something or does it just work completely different than what I expect?
Edit: The answer below is one of the steps. The other is that AWS distinguishes between uppercase and lowercase hex strings: 0xFF != 0xff in the eyes of AWS. They want the signature in all lowercase.
You are generating the signature using Signature Version 4, but you are constructing the form as though you were using Signature Version 2... well, sort of.
formData.append("AWSAccessKeyId", signature.accessKey);
That's V2. It shouldn't be here at all.
formData.append("x-amz-credential", signature.credentials); // <accesskey>/20161126/eu-west-1/s3/aws4_request
This is V4. Note the redundant submission of the AWS Access Key ID here and above. This one is probably correct, although the examples have capitalization like X-Amz-Credential.
formData.append("x-amz-algorithm", "AWS4-HMAC-SHA256");
That is also correct, except it may need to be X-Amz-Algorithm. (The example seems to imply that capitalization is ignored).
formData.append("Signature", signature.signature);
This one is incorrect. This should be X-Amz-Signature. V4 signatures are hex, so that is what you should have here. V2 signatures are base64.
There's a full V4 example here, which even provides you with an example aws key and secret, date, region, bucket name, etc., that you can use with your code to verify that you indeed get the same response. The form won't actually work but the important question is whether your code can generate the same form, policy, and signature.
For any given request, there is only ever exactly one correct signature; however, for any given policy, there may be more than one valid JSON encoding (due to JSON's flexibility with whitespace) -- but for any given JSON encoding there is only one possible valid base64-encoding of the policy. This means that your code, using the example data, is certified as working correctly if it generates exactly the same form and signature as shown in the example -- and it means that your code is proven invalid if it generates the same form and policy with a different signature -- but there is a third possibility: the test actually proves nothing conclusive about your code if your code generates a different base64 encoding of the policy, because that will necessarily change the signature to not match, yet might still be a valid policy.
Note that Signature V2 is only supported on older S3 regions, while Signature V4 is supported by all S3 regions, so even though you could alternately fix this by making your entire signing process use V2, that wouldn't be recommended.
Note also that The request signature we calculated does not match the signature you provided. Check your key and signing method does not tell you anything about whether the bucket policy or any users policies allow or deny the request. This error is not a permissions error. It will be thrown prior to the permissions checks, based solely on the validity of the signature, not whether the AWS Access Key id is authorized to perform the requested operation, which is something that is only tested after the signature is validated.
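Putting those corrections together, the V4 form fields would look roughly like this (a sketch; the values are placeholders, and each x-amz-* form field must also be matched by a condition in the signed policy):

let formData = new FormData();
formData.append("key", "<path-to-upload-location>");
formData.append("acl", "private");
formData.append("Content-Type", file.type);
formData.append("X-Amz-Credential", "<accesskey>/20161126/eu-west-1/s3/aws4_request");
formData.append("X-Amz-Algorithm", "AWS4-HMAC-SHA256");
formData.append("X-Amz-Date", "20161126T000000Z");
formData.append("policy", signature.policy); // base64 of the UTF-8 policy JSON
formData.append("X-Amz-Signature", signature.signature); // lowercase hex, not base64
formData.append("file", file); // keep the file field last; S3 ignores fields after it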
I suggest you create an auth key pair with permission to POST only, and send an HTTP request like this:
require 'rest-client'

class S3Uploader
  def initialize
    @options = {
      aws_access_key_id: "ACCESS_KEY",
      aws_secret_access_key: "ACCESS_SECRET",
      bucket: "BUCKET",
      acl: "private",
      expiration: 3.hours.from_now.utc,
      max_file_size: 524288000
    }
  end

  def fields
    {
      :key => key,
      :acl => @options[:acl],
      :policy => policy,
      :signature => signature,
      "AWSAccessKeyId" => @options[:aws_access_key_id],
      :success_action_status => "201"
    }
  end

  def key
    @key ||= "temp/${filename}"
  end

  def url
    "http://#{@options[:bucket]}.s3.amazonaws.com/"
  end

  def policy
    Base64.encode64(policy_data.to_json).delete("\n")
  end

  def policy_data
    {
      expiration: @options[:expiration],
      conditions: [
        ["starts-with", "$key", ""],
        ["content-length-range", 0, @options[:max_file_size]],
        { bucket: @options[:bucket] },
        { acl: @options[:acl] },
        { success_action_status: "201" }
      ]
    }
  end

  def signature
    Base64.encode64(
      OpenSSL::HMAC.digest(
        OpenSSL::Digest.new("sha1"),
        @options[:aws_secret_access_key], policy
      )
    ).delete("\n")
  end
end

uploader = S3Uploader.new
puts uploader.fields
puts uploader.url

begin
  RestClient.post(uploader.url, uploader.fields.merge(file: File.new('51bb26652134e98eae931fbaa10dc3a1.jpeg'), :multipart => true))
rescue RestClient::ExceptionWithResponse => e
  puts e.response
end

How do I encode/decode a file correctly after reading it through javascript and pass the file data through ajax?

I have a django File field with multiple attribute set to true. I am trying to make a multiple file uploader where I get the file objects with a simple javascript FileReader object. After looping through the file list I read the file data through
reader.readAsBinaryString(file);
and get the desired file data result. After passing this data to my views through ajax I am trying to create a copy of the file into the media folder. I am presently using the following views function :
@csrf_exempt
def storeAttachment(request):
    '''
    stores the files in the media folder
    '''
    data = simplejson.loads(request.raw_post_data)
    user_org = data['user_org']
    fileName = data['fileName']
    fileData = data['fileData']
    file_path = MEDIA_ROOT + 'icts_attachments/'
    try:
        path = open((file_path + str(user_org) + "_" + '%s') % fileName, "w+")
        path.write(fileData)
        path.close()
        return HttpResponse(1)
    except IOError:
        return HttpResponse(2)
I am able to write simple text files, .js, .html, and a few other formats, but when I try to upload pdf, word, excel, or rar formats, I get the following error in my response, even though a file with invalid data is saved at my MEDIA path (the file does not open).
'ascii' codec can't encode characters in position 41-42: ordinal not in range(128)
I tried to encode/decode the file data using various references but with no effect..Any advice will be greatly appreciated..
You got this error because Python 2's default ASCII encoding was used. Characters greater than 127 cause an exception, so use str.encode to encode from Unicode to text/bytes.
It is also good practice to use the with keyword when dealing with file objects.
path = u''.join((file_path, user_org, '_', fileName)).encode('utf-8')
with open(path, 'w+') as f:
    f.write(fileData)

Convert base64 image to file in javascript or how to pass a big base64 string with jquery ajax

This can't take any more of my time. I've tried to solve this a very long time now.
I will give you my whole scenario and then what the problem is.
I have this web site. On one page the user can choose between three image input types.
This is a radio button group:
o Twitter Logo
o Twitter Profile Picture
o Upload picture
If the user chooses option 1 or 2, an img tag src is updated with a local-project file (http://localhost:9000/public/images/image.png) and this image src is stored in an html5 Web Session Storage variable.
If the user chooses option 3, he/she gets to choose a file from their computer (an input type="file" appears under the radio group) and the img tag src is updated with this image.
This time, the src that I will store in the session variable won't be a path to the file (which I know is because of security reasons); instead, the src will be a base64 string. A really big one if the user chooses a big image.
So now I have this image stored in the session variable, either a path to the image file included in the project folder or a base64 encoded image.
What I do now is fetch this value from the session variable in JavaScript. I want to pass this image to my code on the server side, for making an actual image of it and uploading it to places, but that part isn't really necessary.
My problem is that in JavaScript, I can't pass this with a POST using $.ajax.
The base64 string is too big, I think, and I can't figure out how I can convert it to something else, say a byte[].
What should I do?
I want to pass the image that the user chose to the server side for further processing.
Then on the server side I want to convert it to an actual Image object, or BufferedImage.
Here's a code-block of how it looks now:
function gatherSessionValuesAndGenerateCode(userEmail) {
    var email = userEmail;
    var category = getSessionValue("category");
    var itemName = getSessionValue("itemName");
    var service = getSessionValue("service");
    var accountName = getSessionValue("accountName");
    var action = getSessionValue("action");
    var imageType = getSessionValue("imageType");
    var imageFile = getSessionValue("imageFile");
    var expirationDate = getSessionValue("expirationDate");
    $.ajax({
        type: "POST",
        url: "http://localhost:9000/quikkly/business/create/generate",
        data: {
            email: email,
            category: category,
            itemName: itemName,
            service: service,
            accountName: accountName,
            action: action,
            imageType: imageType,
            imageFile: imageFile, // This is making me feel ill, don't know how to solve it.
            expirationDate: expirationDate
        },
        success: function(response) { console.log("Horayyy " + response); }
    });
}
OK, I get your point, but what I am trying to say is: maybe your string contains 20K characters, but I don't think it can be larger than 2-3 MB, and I would hope your server settings allow you to post data of 2-3 MB.
Apart from this, I think putting the image path in base64 is not a good idea. If you think someone can steal the data from your session, then they can do anything with your complete website concept.
After all, a person can see the image path anyway when you display an image on a web page.
Still, if you don't want the path in the session, you can keep it in a file as a key-value pair, or keep it in a database.
Or you can do one more thing: keep just the image file name in your session data and prepend the exact path when you use it internally on the server side.
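As for the conversion the question asks about, a common approach is to decode the base64 data URL into a Blob on the client and post it as multipart form data instead of a giant string field. A minimal sketch, assuming imageFile holds a data: URL (the endpoint is the one from the question):

function dataUrlToBlob(dataUrl) {
    var parts = dataUrl.split(',');
    var mime = parts[0].match(/:(.*?);/)[1]; // e.g. "image/png"
    var binary = atob(parts[1]);             // decode the base64 payload
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return new Blob([bytes], { type: mime });
}

// post as multipart instead of a giant string field
var fd = new FormData();
fd.append("imageFile", dataUrlToBlob(imageFile), "upload.png");
$.ajax({
    type: "POST",
    url: "http://localhost:9000/quikkly/business/create/generate",
    data: fd,
    processData: false, // let the browser build the multipart body
    contentType: false,
    success: function(response) { console.log("Horayyy " + response); }
});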
