Problem with AttachmentService of SAP Cloud SDK for JavaScript

Currently we use the SAP REST API for uploading and managing attachments.
We want to replace the standard requests with the SDK, because we had problems getting the connection through a Cloud Connector with the respective proxy settings, and because we also use the SDK for all other requests.
var attContentSetBuilder = AttachmentContentSet.builder();
attContentSetBuilder.documentInfoRecordDocNumber("10000000008");
attContentSetBuilder.documentInfoRecordDocPart("000");
attContentSetBuilder.documentInfoRecordDocType("YBO");
attContentSetBuilder.documentInfoRecordDocVersion("01");
attContentSetBuilder.businessObjectTypeName("DRAW");
attContentSetBuilder.fileName("TEST.pdf")
attContentSetBuilder.content(fileToBase64("C:\\TEST.pdf"));
var attContentSet = attContentSetBuilder.build();
var requestBuilder = new AttachmentContentSetRequestBuilder();
var contentSetRequester = requestBuilder.create(attContentSet);
contentSetRequester.withCustomHeaders({ key: 'slug', value: 'TEST.pdf' }).execute({XXX}).then ...
function fileToBase64(filename: string): string {
  var fs = require('fs');
  return fs.readFileSync(filename, 'utf8');
}
Will the content/body with the binary data be set correctly that way? Does the slug header value also have to be set?
Does the Attachment Service also support GOS?
So far we get the error:
"Attachment name cannot be empty"

The error message reads like a message you get from the S/4HANA API, so it seems there is a semantic problem with your request. Unfortunately, the API Business Hub is not very good at communicating the required fields for a request, but here are some pointers:
If you take a look at the entity definition, the following fields are non-nullable:
documentInfoRecordDocType: string;
documentInfoRecordDocNumber: string;
documentInfoRecordDocVersion: string;
documentInfoRecordDocPart: string;
logicalDocument: string;
archiveDocumentId: string;
linkedSapObjectKey: string;
businessObjectTypeName: string;
So maybe providing values for the ones you're missing solves the problem.
There is more documentation on this API here (I got there by going to the API's page on the Business Hub, clicking on "Details" and then on "Business Documentation" at the bottom of the page).
Your .withCustomHeaders looks off; I'm guessing what you wanted to do is: .withCustomHeaders({ slug: 'TEST.pdf' })
Bonus: the builder and request builder have a fluent API, so you can also use it like this:
const attContentSet = AttachmentContentSet.builder()
  .documentInfoRecordDocNumber("10000000008")
  .documentInfoRecordDocPart("000")
  .documentInfoRecordDocType("YBO")
  .documentInfoRecordDocVersion("01")
  .businessObjectTypeName("DRAW")
  .fileName("TEST.pdf")
  .content(fileToBase64("C:\\TEST.pdf"))
  .build();
That's a matter of taste, of course; personally, I find this a little easier to parse mentally.
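For reference, here is a minimal sketch of what the corrected request could look like end to end. The destination name 'S4HANA' is an assumption and stands for whatever you currently pass in place of the {XXX} placeholder:

// Sketch only: 'S4HANA' is an assumed destination name.
new AttachmentContentSetRequestBuilder()
  .create(attContentSet)
  .withCustomHeaders({ slug: 'TEST.pdf' })
  .execute({ destinationName: 'S4HANA' })
  .then(result => console.log(result))
  .catch(err => console.error(err));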

Related

mailgun.js inline image attachments

I want to embed inline images with Mailgun; however, the documentation for sending attachments doesn't seem to match the latest mailgun.js (NOT mailgun-js):
https://www.npmjs.com/package/mailgun.js
The docs mentioned in this readme do not exist: https://documentation.mailgun.com/api-sending.html#sending (the link 404s).
The readme docs for this version say this:
But where do I even add an attachment? How do I set Mailgun to use multipart/form-data encoding? Which parameter do I use to add an attachment? There are no good type definitions either.
If I have the code below, how do I just add an attachment? What's the object schema? Does it even go here? I have no clue.
mg.messages.create('sandbox-123.mailgun.org', {
  from: "Excited User <mailgun@sandbox-123.mailgun.org>",
  to: ["test@example.com"],
  subject: "Hello",
  text: "Testing some Mailgun awesomness!",
  html: "<h1>Testing some Mailgun awesomness!</h1>"
})
Even once set as an attachment, how do I reference it in the email for use in an <img> tag?
Anyone who has done this with the latest mailgun.js, please share your reference code. The documentation is outdated.
The documentation of mailgun.js can be a bit confusing, at least for Node.js, since their website refers to the old SDK whereas their GitHub page points to the new one. That said, their GitHub page does offer examples of including figures in an email. There are two ways to do so:
Sent as attachment files but NOT displayed in the email.
Sent as inline figure and displayed in the email.
Below is an example (original source linked in the snippet) of sending figures both as an attachment and inline. Notice that for inline figures to display, you must reference the file name with the cid: prefix in the HTML template.
To send multiple attachments or inline figures, include all files in a list (the example below uses a list with one file each for inline and attachment).
/**
 * Copied from https://github.com/mailgun/mailgun-js/blob/master/examples/send-email.js
 */
const fs = require('fs');

// Client setup (not part of the original snippet): the standard mailgun.js
// initialization, with a placeholder API key.
const formData = require('form-data');
const Mailgun = require('mailgun.js');
const mailgun = new Mailgun(formData);
const mg = mailgun.client({ username: 'api', key: 'YOUR_API_KEY' });

const domain = 'sandbox-123.mailgun.com';
const fromEmail = 'Excited User <mailgun@sandbox-123.mailgun.com>';
const toEmails = ['you@example.com'];

const mailgunLogo = fs.createReadStream(`${__dirname}/mailgun.png`);
const rackspaceLogo = fs.createReadStream(`${__dirname}/rackspace.png`);

mg.messages.create(domain, {
  from: fromEmail,
  to: toEmails,
  subject: 'Hello',
  html: '<img src="cid:mailgun.png" width="200px"><br><h3>Testing some Mailgun awesomness!</h3>',
  text: 'Testing some Mailgun awesomness!',
  inline: [mailgunLogo],
  attachment: [rackspaceLogo]
})
  .then((msg) => console.log(msg))
  .catch((err) => console.log(err));

Add Signature field to pdf in javascript

After hours of searching for a solution, I've decided to ask my first question on Stack Overflow.
Our application uses pdf-lib (https://www.npmjs.com/package/pdf-lib) to modify existing PDFs, e.g. add images. We're now looking for a way to add signature form fields to the PDF as well.
With pdf-lib it is possible to add a bunch of form fields, but not signature fields. It is possible to get them (https://pdf-lib.js.org/docs/api/classes/pdfform#getsignature), but unlike for other fields, there's no create method (e.g. https://pdf-lib.js.org/docs/api/classes/pdfform#createtextfield).
I've dug deeper into the code and found access to the PDFForm's acroForm (https://pdf-lib.js.org/docs/api/classes/pdfform#acroform). It's possible to add fields with it, but I wasn't able to create the correct field beforehand (in my opinion it has to be a PDFSignature or PDFAcroSignature).
I found out that other fields like PDFAcroText have create methods:
class PDFAcroText extends PDFAcroTerminal {
  static fromDict = (dict: PDFDict, ref: PDFRef) => new PDFAcroText(dict, ref);

  static create = (context: PDFContext) => {
    const dict = context.obj({
      FT: 'Tx',
      Kids: [],
    });
    const ref = context.register(dict);
    return new PDFAcroText(dict, ref);
  };
}
Those get called by wrapper functions (like the createTextField mentioned above):
createTextField(name: string): PDFTextField {
  assertIs(name, 'name', ['string']);
  const nameParts = splitFieldName(name);
  const parent = this.findOrCreateNonTerminals(nameParts.nonTerminal);
  const text = PDFAcroText.create(this.doc.context);
  text.setPartialName(nameParts.terminal);
  addFieldToParent(parent, [text, text.ref], nameParts.terminal);
  return PDFTextField.of(text, text.ref, this.doc);
}
I looked for other JS libs that provide the possibility to add signature form fields, but I wasn't able to find an answer, except for pay-to-use libs like pdfjs.express.
Assuming that they are capable of adding such fields, there must be a way to do this!
Please let me know if any of you has figured out how to do this, or if there's another solution.
Thank you in advance!
Greetings
Alex
Acrobat Pro itself doesn't have an option to place a plain "Signature" field. You may "request" a signature, but only through Adobe's services, and an email address is required.
If you plan to add a signature from code, take a look at pdf-lib's "Fill Form" example. They put an image on top of a button field, but an image field also works.
const signatureImageField2 = form.getButton('button-signature-field')
signatureImageField2.setImage(signatureImage)
const factionImageField = form.getField('image-signature-field_af_image')
factionImageField.setImage(signatureImage)
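If you also need to create the placeholder field yourself, here is a minimal sketch of that approach with pdf-lib: create a plain button field as the signature placeholder, then "sign" later by stamping an image onto it. The field name, page size, coordinates and file names are made up:

const fs = require('fs');
const { PDFDocument } = require('pdf-lib');

(async () => {
  // Create (or load) a document and add a placeholder button field.
  const pdfDoc = await PDFDocument.create();
  const page = pdfDoc.addPage([600, 400]);
  const form = pdfDoc.getForm();
  const placeholder = form.createButton('button-signature-field'); // hypothetical field name
  placeholder.addToPage('Sign here', page, { x: 50, y: 100, width: 200, height: 50 });

  // Later, "sign" by stamping an image onto the button field.
  const signatureImage = await pdfDoc.embedPng(fs.readFileSync('signature.png'));
  form.getButton('button-signature-field').setImage(signatureImage);

  fs.writeFileSync('out.pdf', await pdfDoc.save());
})();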

serverless - How to dynamically add resources generated from a javascript file and merge them with other resources?

I want to create a serverless file that deploys Cognito resources to AWS. I have a config.yml file that holds all the scopes that should be created in the Cognito Resource Server.
config.yml
- name: scope1
  description: Description of scope1
- name: scope2
  description: Description of scope2
What I want to accomplish is to dynamically generate one Cognito App Client for each scope we register, as well as add these scopes to a Cognito Resource Server (in my case, the Cognito User Pool and Domain Name are already created).
To do that, I tried to make a JavaScript file that loads the config.yml file and generates two variables:
userPoolClientList, which holds a list of Cognito App Client resources.
scopeList, which represents the list of scopes that should be registered in the Cognito Resource Server.
sls-template.js
const fs = require("fs");
const yaml = require("js-yaml");

const scopeList = yaml.safeLoad(fs.readFileSync("config.yml"));

module.exports = {
  scopeList: function () {
    return scopeList.map(({ name, description }) => ({
      ScopeName: name,
      ScopeDescription: description,
    }));
  },
  userPoolClientList: function (serverless) {
    const { cognitoUserPoolId } = serverless.service.custom;
    const scopeResourceList = scopeList.map(({ name, description }) => ({
      [`cognitoUserPoolClient-${name}`]: {
        Type: "AWS::Cognito::UserPoolClient",
        Properties: {
          AllowedOAuthScopes: [`server/${name}`],
          UserPoolId: cognitoUserPoolId,
        },
        DependsOn: "cognitoResourceServer",
      },
    }));
    return Object.assign({}, ...scopeResourceList);
  },
};
Now this looks like it returns exactly what I wanted (I've tested it and it works great).
My problem is rather in the implementation on the serverless.yml file and how to combine a fixed Resources and a dynamically generated one.
serverless.yml
resources:
  Resources:
    ${file(sls-template.js):userPoolClientList}
    cognitoResourceServer:
      Type: AWS::Cognito::UserPoolResourceServer
      Properties:
        Identifier: server
        Name: Server
        Scopes: ${file(sls-template.js):scopeList}
        UserPoolId: ${self:custom.cognitoUserPoolId}
This throws an error, as the syntax is not correct. However, when I try to deploy the resources individually (one time just the cognitoResourceServer resource, the other time the generated variable from the JavaScript file), everything works fine.
The problem really is how I should combine or merge these two resources.
I've been trying a lot of different combinations to make it work, but it always gives me an invalid template.
So I was wondering if what I'm trying to accomplish is even possible in serverless, and if so, how I can change my final serverless.yml file to make it work.
Thanks a lot.
Yes, resource blocks can be merged.
I believe this should do it:
resources:
  - Resources: ${file(sls-template.js):userPoolClientList}
  - Resources:
      cognitoResourceServer:
        Type: AWS::Cognito::UserPoolResourceServer
        Properties:
          Identifier: server
          Name: Server
          Scopes: ${file(sls-template.js):scopeList}
          UserPoolId: ${self:custom.cognitoUserPoolId}
Not 100% sure whether the dynamic block needs to generate the Resources top-level key itself or whether you can hard-code it like I've shown in the example.
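If it turns out the dynamic block does need to produce the Resources top-level key itself, one option (an untested sketch reusing the logic from the question's sls-template.js; resourcesBlock is a made-up export name) is to return the complete Resources map from a single function and reference it as one list item, e.g. - ${file(sls-template.js):resourcesBlock}:

// Sketch only (untested): return the whole Resources map from one export.
const fs = require("fs");
const yaml = require("js-yaml");

const scopes = yaml.safeLoad(fs.readFileSync("config.yml"));

module.exports.resourcesBlock = function (serverless) {
  const { cognitoUserPoolId } = serverless.service.custom;

  // One App Client per scope, exactly as in the question's userPoolClientList.
  const clients = Object.assign(
    {},
    ...scopes.map(({ name }) => ({
      [`cognitoUserPoolClient-${name}`]: {
        Type: "AWS::Cognito::UserPoolClient",
        Properties: {
          AllowedOAuthScopes: [`server/${name}`],
          UserPoolId: cognitoUserPoolId,
        },
        DependsOn: "cognitoResourceServer",
      },
    }))
  );

  return {
    Resources: {
      ...clients,
      cognitoResourceServer: {
        Type: "AWS::Cognito::UserPoolResourceServer",
        Properties: {
          Identifier: "server",
          Name: "Server",
          Scopes: scopes.map(({ name, description }) => ({
            ScopeName: name,
            ScopeDescription: description,
          })),
          UserPoolId: cognitoUserPoolId,
        },
      },
    },
  };
};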

How to send javascript object literals from a Node backend to the browser frontend?

I am using Node where I have JavaScript object literals with methods in the backend, e.g.:
const report = {
  id: 1,
  title: 'Quarterly Report for Department 12345',
  abstract: 'This report shows the results of the sales and marketing divisions.',
  searchText: function () {
    return this.title + '|' + this.abstract;
  }
};
And I want to send these object literals to the frontend via AJAX and be able to call their methods there, just as I can in the backend.
But even though I can send the objects to the frontend without JSON.stringify(), they are still converted to plain JSON by the time they reach my frontend.
Am I missing something, or is there no way to send full object literals from backend to frontend? I'm using Axios.
But even though I can send the objects to the frontend without JSON.stringify(),
It sounds like you are using JSON.stringify … just indirectly (via a library).
JSON has no function data type. So you can't just use JSON.
You have a few options.
Listed in the order I'd recommend them in.
Resolve the methods
In your example, your function simply returns this.title + '|' + this.abstract, so you could replace it with a string:
const report = {
  id: 1,
  title: 'Quarterly Report for Department 12345',
  abstract: 'This report shows the results of the sales and marketing divisions.',
  searchText: 'Quarterly Report for Department 12345|This report shows the results of the sales and marketing divisions.'
};
You could use the replacer argument of JSON.stringify to do this automatically for any method on the object.
This is the simplest option but results in data that doesn't update dynamically so it might not be suitable for your needs.
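For illustration, a sketch of such a replacer, assuming every function-valued property on the object can safely be called with no arguments:

// Resolve zero-argument methods to their return values while stringifying.
// Inside a (non-arrow) replacer, `this` is the object holding the current key.
const json = JSON.stringify(report, function (key, value) {
  return typeof value === 'function' ? value.call(this) : value;
});
// json now contains searchText as a plain string.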
Add the methods with client-side code
Send a simple object which doesn't have the method, but does have a field describing what type of object it is.
Then inflate it on the client:
const type = ajaxResponse.type;
const Constructor = collectionOfConstructorFunctions[type];
const data = new Constructor(ajaxResponse.data);
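The server-side counterpart would then send plain data plus the type tag. A rough sketch (the Express-style res.json call and the Report constructor are assumptions, not part of the question's code):

// Server (sketch): send plain data plus a type tag.
res.json({ type: 'Report', data: { id: 1, title: '...', abstract: '...' } });

// Client (sketch): a constructor that re-attaches the method.
function Report(data) {
  Object.assign(this, data);
}
Report.prototype.searchText = function () {
  return this.title + '|' + this.abstract;
};
const collectionOfConstructorFunctions = { Report };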
Send JavaScript instead
You could use JSONP instead of Axios.
The response would be application/javascript instead of application/json so you could encode function expressions in the returned data.
I don't recommend this option.
Encode the functions in the JSON and then include them client-side
This is horrible.
const report = {
  id: 1,
  title: 'Quarterly Report for Department 12345',
  abstract: 'This report shows the results of the sales and marketing divisions.',
  searchText: "return this.title + '|' + this.abstract;"
};
and then, on the client:
report.searchText = new Function(report.searchText);
console.log(report.searchText());
This is effectively using eval. Don't do it.

AWS S3 browser upload using HTTP POST gives invalid signature

I'm working on a website where users should be able to upload video files to AWS. To avoid unnecessary traffic, I would like the user to upload directly to AWS (and not through the API server). To avoid exposing my secret key in the JavaScript, I'm trying to generate a signature on the API server. However, when I try to upload, it tells me that the signature does not match.
For signature generation I have been using http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-UsingHTTPPOST.html
On the backend I'm running C#.
I generate the signature using
string policy = $@"{{""expiration"":""{expiration}"",""conditions"":[{{""bucket"":""dennisjakobsentestbucket""}},[""starts-with"",""$key"",""""],{{""acl"":""private""}},[""starts-with"",""$Content-Type"",""""],{{""x-amz-algorithm"":""AWS4-HMAC-SHA256""}}]}}";
which generates the following
{"expiration":"2016-11-27T13:59:32Z","conditions":[{"bucket":"dennisjakobsentestbucket"},["starts-with","$key",""],{"acl":"private"},["starts-with","$Content-Type",""],{"x-amz-algorithm":"AWS4-HMAC-SHA256"}]}
based on http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html (I base64 encode the policy). I have tried to keep it very simple, just as a starting point.
For generating the signature, I use code found on the AWS site.
static byte[] HmacSHA256(String data, byte[] key)
{
    String algorithm = "HmacSHA256";
    KeyedHashAlgorithm kha = KeyedHashAlgorithm.Create(algorithm);
    kha.Key = key;
    return kha.ComputeHash(Encoding.UTF8.GetBytes(data));
}

static byte[] GetSignatureKey(String key, String dateStamp, String regionName, String serviceName)
{
    byte[] kSecret = Encoding.UTF8.GetBytes(("AWS4" + key).ToCharArray());
    byte[] kDate = HmacSHA256(dateStamp, kSecret);
    byte[] kRegion = HmacSHA256(regionName, kDate);
    byte[] kService = HmacSHA256(serviceName, kRegion);
    byte[] kSigning = HmacSHA256("aws4_request", kService);
    return kSigning;
}
Which I use like this:
byte[] signingKey = GetSignatureKey(appSettings["aws:SecretKey"], dateString, appSettings["aws:Region"], "s3");
byte[] signature = HmacSHA256(encodedPolicy, signingKey);
where dateString is in the format yyyymmdd.
I POST information from JavaScript using
let xmlHttpRequest = new XMLHttpRequest();
let formData = new FormData();
formData.append("key", "<path-to-upload-location>");
formData.append("acl", signature.acl); // private
formData.append("Content-Type", "$Content-Type");
formData.append("AWSAccessKeyId", signature.accessKey);
formData.append("policy", signature.policy); //base64 of policy
formData.append("x-amz-credential", signature.credentials); // <accesskey>/20161126/eu-west-1/s3/aws4_request
formData.append("x-amz-date", signature.date);
formData.append("x-amz-algorithm", "AWS4-HMAC-SHA256");
formData.append("Signature", signature.signature);
formData.append("file", file);
xmlHttpRequest.open("post", "http://<bucketname>.s3-eu-west-1.amazonaws.com/");
xmlHttpRequest.send(formData);
I have been using UTF-8 everywhere, as prescribed by AWS. In their examples the signature is in hex format, which I have tried as well.
No matter what I try, I get a 403 error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
My policy on AWS has "s3:Get*", "s3:Put*"
Am I missing something, or does it just work completely differently from what I expect?
Edit: The answer below is one of the steps. The other is that AWS distinguishes between upper- and lowercase hex strings. 0xFF != 0xff in the eyes of AWS. They want the signature in all lowercase.
You are generating the signature using Signature Version 4, but you are constructing the form as though you were using Signature Version 2... well, sort of.
formData.append("AWSAccessKeyId", signature.accessKey);
That's V2. It shouldn't be here at all.
formData.append("x-amz-credential", signature.credentials); // <accesskey>/20161126/eu-west-1/s3/aws4_request
This is V4. Note the redundant submission of the AWS Access Key ID here and above. This one is probably correct, although the examples have capitalization like X-Amz-Credential.
formData.append("x-amz-algorithm", "AWS4-HMAC-SHA256");
That is also correct, except it may need to be X-Amz-Algorithm. (The example seems to imply that capitalization is ignored).
formData.append("Signature", signature.signature);
This one is incorrect. This should be X-Amz-Signature. V4 signatures are hex, so that is what you should have here. V2 signatures are base64.
There's a full V4 example here, which even provides you with an example AWS key and secret, date, region, bucket name, etc., that you can use with your code to verify that you indeed get the same result. The form won't actually work, but the important question is whether your code can generate the same form, policy, and signature.
For any given request, there is only ever exactly one correct signature. However, for any given policy there may be more than one valid JSON encoding (due to JSON's flexibility with whitespace), and for any given JSON encoding there is only one possible valid base64 encoding of the policy. This means that your code, using the example data, is certified as working correctly if it generates exactly the same form and signature as shown in the example, and it is proven invalid if it generates the same form and policy with a different signature. There is a third possibility: the test proves nothing conclusive about your code if it generates a different base64 encoding of the policy, because that necessarily changes the signature so that it doesn't match, yet the policy might still be valid.
Note that Signature V2 is only supported on older S3 regions, while Signature V4 is supported by all S3 regions, so even though you could alternatively fix this by making your entire signing process use V2, that wouldn't be recommended.
Note also that "The request signature we calculated does not match the signature you provided. Check your key and signing method" does not tell you anything about whether the bucket policy or any user policies allow or deny the request. This error is not a permissions error. It is thrown prior to the permission checks, based solely on the validity of the signature, not on whether the AWS Access Key ID is authorized to perform the requested operation, which is only checked after the signature is validated.
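Putting those corrections together, the browser-side form for a V4 POST would look roughly like this. This is only a sketch: the field values mirror the signature object from the question, and the x-amz-date value shown in the comment is an assumption based on the SigV4 examples:

// Sketch of a V4-only POST form: no AWSAccessKeyId and no "Signature" field.
let formData = new FormData();
formData.append("key", "<path-to-upload-location>");
formData.append("acl", "private");
formData.append("policy", signature.policy);                 // base64-encoded policy
formData.append("x-amz-credential", signature.credentials);  // <accesskey>/<yyyymmdd>/<region>/s3/aws4_request
formData.append("x-amz-algorithm", "AWS4-HMAC-SHA256");
formData.append("x-amz-date", signature.date);               // e.g. 20161126T000000Z
formData.append("x-amz-signature", signature.signature);     // lowercase hex, not base64
formData.append("Content-Type", file.type);
formData.append("file", file);                                // file must come last
xmlHttpRequest.open("post", "http://<bucketname>.s3-eu-west-1.amazonaws.com/");
xmlHttpRequest.send(formData);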
I suggest you create a dedicated pair of auth credentials with permission to POST only, and send an HTTP request like this:
require 'rest-client'

class S3Uploader
  def initialize
    @options = {
      aws_access_key_id: "ACCESS_KEY",
      aws_secret_access_key: "ACCESS_SECRET",
      bucket: "BUCKET",
      acl: "private",
      expiration: 3.hours.from_now.utc,
      max_file_size: 524288000
    }
  end

  def fields
    {
      :key => key,
      :acl => @options[:acl],
      :policy => policy,
      :signature => signature,
      "AWSAccessKeyId" => @options[:aws_access_key_id],
      :success_action_status => "201"
    }
  end

  def key
    @key ||= "temp/${filename}"
  end

  def url
    "http://#{@options[:bucket]}.s3.amazonaws.com/"
  end

  def policy
    Base64.encode64(policy_data.to_json).delete("\n")
  end

  def policy_data
    {
      expiration: @options[:expiration],
      conditions: [
        ["starts-with", "$key", ""],
        ["content-length-range", 0, @options[:max_file_size]],
        { bucket: @options[:bucket] },
        { acl: @options[:acl] },
        { success_action_status: "201" }
      ]
    }
  end

  def signature
    Base64.encode64(
      OpenSSL::HMAC.digest(
        OpenSSL::Digest.new("sha1"),
        @options[:aws_secret_access_key], policy
      )
    ).delete("\n")
  end
end

uploader = S3Uploader.new
puts uploader.fields
puts uploader.url

begin
  RestClient.post(uploader.url, uploader.fields.merge(file: File.new('51bb26652134e98eae931fbaa10dc3a1.jpeg'), :multipart => true))
rescue RestClient::ExceptionWithResponse => e
  puts e.response
end
