I'm following this little tutorial for sending an email when an object is uploaded to an S3 bucket. To use it with Lambda I created a .zip file with the following structure:
mail.js
/node_modules
The mail.js has following code:
var aws = require('aws-sdk');
var MailComposer = require('mailcomposer').MailComposer;

var s3 = new aws.S3();
var ses = new aws.SES({
    accessKeyId: 'xxxxxxxxxxxx',
    secretAccessKey: 'xxxxxxxxxxxx'
});

exports.handler = function(event, context) {
    var bucket = event.Records[0].s3.bucket.name;
    var key = event.Records[0].s3.object.key;
    var mailcomposer = new MailComposer();

    s3.getObject({Bucket: bucket, Key: key}, function(err, data) {
        if (err) {
            // error handling
        } else {
            mailcomposer.setMessageOption({
                from: 'chirer@gmail.com',
                to: 'sjuif@gmail.com',
                subject: 'Test',
                body: 's3://' + bucket + '/' + key,
                html: 's3://' + bucket + '/' + key +
                    '<br/><img src="cid:' + key + '" />'
            });
            var attachment = {
                contents: data.Body,
                contentType: 'image/png',
                cid: key
            };
            mailcomposer.addAttachment(attachment);
            mailcomposer.buildMessage(function(err, messageSource) {
                if (err) {
                    // error handling
                } else {
                    ses.sendRawEmail({RawMessage: {Data: messageSource}}, function(err, data) {
                        if (err) {
                            // error handling
                        } else {
                            context.done(null, data);
                        }
                    });
                }
            });
        }
    });
};
When I create a Lambda function I do the following:
In the select blueprint menu I select "s3-get-object-python"
I choose my bucket
As event I choose "Put"
I click "next"
I give a name to the lambda function and choose "upload a .zip file"
I upload the zip file with mail.js and the node_modules directory
As handler I fill in "mail.handler"
As role I choose "S3 execution role". The wizard now shows a new screen where I click "view policy document". I edit the document so it looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "ses:SendEmail",
                "ses:SendRawEmail"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
I click 'Allow' and go back to the previous screen
Then I click next and enable the Lambda function.
When I now upload a .png file I get the following error in my log:
START RequestId: a4401d96-c0ef-11e5-9ae4-8f38a4f750b6 Version: $LATEST
**Unable to import module 'mail': No module named mail**
END RequestId: a4401d96-c0ef-11e5-9ae4-8f38a4f750b6
REPORT RequestId: a4401d96-c0ef-11e5-9ae4-8f38a4f750b6 Duration: 0.35 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 9 MB
I don't know why, because I'm sure mail.js is in the root of my .zip file.
There are just so many gotchas you can run into while creating deployment packages for AWS Lambda (for Python). I have spent hours and hours in debugging sessions until I found a formula that rarely fails.
I have created a script that automates the entire process and therefore makes it less error-prone. I have also written a tutorial that explains how everything works. You may want to check it out:
Hassle-Free Python Lambda Deployment [Tutorial + Script]
That error means that Lambda can't find the module. It can't be in proj/lib/python2.7/site-packages or proj/lib64/python2.7/site-packages.
It MUST be inside proj/ itself. I ran into the same problem with MySQL-python and wrote a howto:
http://www.iheavy.com/2016/02/14/getting-errors-building-amazon-lambda-python-functions-help-howto/
HTH
-Sean
I'd like to use the npm package "request" in an AWS lambda function.
I'm trying to follow the procedure outlined in this article: https://medium.com/@anjanava.biswas/nodejs-runtime-environment-with-aws-lambda-layers-f3914613e20e
I've created a directory structure like this:
nodejs
│ package-lock.json
│ package.json
└───node_modules
My package.json looks like this:
{
    "name": "my-package-name",
    "version": "1.0.0",
    "description": "whatever",
    "author": "My Name",
    "license": "MIT",
    "dependencies": {
        "request": "^2.88.0"
    }
}
As far as I can tell from the article, all I should have to do with the above is run npm i, zip up the directory, upload it as a layer, and add the layer to my lambda function.
I've done all of that, but all that I get when I try to test my function is this:
{
"errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module 'request'\nRequire stack:\n- /var/task/index.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
"trace": [
"Runtime.ImportModuleError: Error: Cannot find module 'request'",
"Require stack:",
...
...as if the layer had never been added. The error is exactly the same whether the layer is added or not. If there's some sort of permissions issue that needs to be resolved, there's nothing in the article that indicates that.
I've tried a few different things, like whether or not my .zip file contains the top-level directory "nodejs" or just its contents. I've tried adding "main": "index.js", to my package.json, with an index.js file like this:
export.modules.request = require('request');
...all to no avail.
What am I missing?
Oh, I can't believe it's just this!
The top-level directory for the .zip file must LITERALLY be named "nodejs"! I was using a different name, and only changed it back to "nodejs" in the text of this post to be more generic, but the directory name was the real problem all along.
Sigh.
Usually, it's got to do with the names of the folders/files inside. And if those files are referred to elsewhere, it's gonna percolate and complain there as well. Just check the folder structure thoroughly and you will be able to catch the thief. I struggled for a day to figure it out; it was a silly typo.
For me, what was causing these issues was a copy of package.json still inside an older version of the .build folder, which had also been deployed. Once I removed that, packages were installed as expected.
I got this error also. The src.zip file should have the source code directly without any parent folder.
For example, if you want to zip src folder, you need to do this.
cd src/ && zip -r ../src.zip .
Ok so I found my issue. I was zipping the folder containing my lambda instead of just the lambda's root. This was causing the Lambda runtime to look for my handler at ./index.js, but not find it, as it was located at ./nodejs/index.js.
Here is the command I used to properly zip my files from the root:
cd nodejs/
ls # should look like this: node_modules index.js package-lock.json package.json
zip -r ../nodejs.zip ./*
This zips everything properly so that Lambda finds your files at the root of the function package, as in the default configuration when creating a Lambda through the AWS UI.
Accessing table data from RDS using a Lambda function with an encrypted key (KMS) and environment variables
Step 1: Create and enable a key in KMS (Key Management Service).
Review your key policy and you are done with KMS creation:
{
    "Id": "key-consolepolicy-3",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::163806924483:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow access for Key Administrators",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::163806924483:user/User1@gmail.com"
            },
            "Action": [
                "kms:Create*",
                "kms:Describe*",
                "kms:Enable*",
                "kms:List*",
                "kms:Put*",
                "kms:Update*",
                "kms:Revoke*",
                "kms:Disable*",
                "kms:Get*",
                "kms:Delete*",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::163806924483:user/User1@gmail.com",
                    "arn:aws:iam::163806924483:user/User2@gmail.com",
                    "arn:aws:iam::163806924483:user/User3@gmail.com"
                ]
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::163806924483:user/User1.dilip@gmail.com",
                    "arn:aws:iam::163806924483:user/User2@gmail.com",
                    "arn:aws:iam::163806924483:user/User3@gmail.com"
                ]
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
    ]
}
Step 2: Create a policy in IAM for KMS and assign it to each of your Lambda functions:
"StringEquals": {
    "kms:EncryptionContext:LambdaFunctionName": [
        "LambdaFunction-1",
        "LambdaFunction-2",
        "LambdaFunction-3"
    ]
}
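Put together, a complete statement for that policy might look like the following sketch; the key ARN, region, and function names are placeholders you would substitute with your own:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:163806924483:key/your-key-id",
            "Condition": {
                "StringEquals": {
                    "kms:EncryptionContext:LambdaFunctionName": [
                        "LambdaFunction-1",
                        "LambdaFunction-2",
                        "LambdaFunction-3"
                    ]
                }
            }
        }
    ]
}
```

The encryption-context condition means the key can only be decrypted from within the named Lambda functions, not by anyone who merely holds the ciphertext.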
Step 3: Assign the policy created in Step 2 to your default Lambda role (the first Lambda function must be created to get the default Lambda role).
Step 4: Create the Lambda function.
Node.js code for the Lambda function:
const mysql = require('mysql');
const aws = require("aws-sdk");

const functionName = process.env.AWS_LAMBDA_FUNCTION_NAME;
let res;
let response = {};

exports.handler = async (event) => {
    reset_globals();
    // load env variables
    const rds_user = await kms_decrypt(process.env.RDS_USERNAME);
    const rds_pwd = await kms_decrypt(process.env.RDS_PASSWORD);

    // set up rds connection
    var db_connection = await mysql.createConnection({
        host: process.env.RDS_HOSTNAME,
        user: rds_user,
        password: rds_pwd,
        port: process.env.RDS_PORT,
        database: process.env.RDS_DATABASE
    });

    var sqlQuery = `SELECT doc_id from documents`;
    await getValues(db_connection, sqlQuery);
};

async function getValues(db_connection, sql) {
    await new Promise((resolve, reject) => {
        db_connection.query(sql, function (err, result) {
            if (err) {
                response = {statusCode: 500, body: {message: "Database Connection Failed", error: err}};
                console.log(response);
                resolve();
            } else {
                console.log("Number of records retrieved: " + JSON.stringify(result));
                res = result;
                resolve();
            }
        });
    });
}

async function kms_decrypt(encrypted) {
    const kms = new aws.KMS();
    const req = {
        CiphertextBlob: Buffer.from(encrypted, 'base64'),
        EncryptionContext: {LambdaFunctionName: functionName}
    };
    const decrypted = await kms.decrypt(req).promise();
    let cred = decrypted.Plaintext.toString('ascii');
    return cred;
}

function reset_globals() {
    res = undefined;
    response = {};
}
Now you should see the KMS options in the Lambda console.
Step 5: Set the environment variables and encrypt them.
Lambda → Functions → Configuration → Environment variables → Edit
RDS_DATABASE docrds
RDS_HOSTNAME docrds-library.c1k3kcldebmp.us-east-1.rds.amazonaws.com
RDS_PASSWORD root123
RDS_PORT 3306
RDS_USERNAME admin
In the Lambda function, use the code below to decrypt the encrypted environment variables:
async function kms_decrypt(encrypted) {
    const kms = new aws.KMS();
    const req = {
        CiphertextBlob: Buffer.from(encrypted, 'base64'),
        EncryptionContext: {LambdaFunctionName: functionName}
    };
    const decrypted = await kms.decrypt(req).promise();
    let cred = decrypted.Plaintext.toString('ascii');
    return cred;
}
My RDS documents table looks like this:
I am accessing the column doc_id using a sqlQuery in the Lambda function:
var sqlQuery = `SELECT doc_id from documents`;
After testing the Lambda function, I get the output below.
If you get a module import error, you must add a layer:
"errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module 'mysql'\nRequire stack:\n- /var/task/index.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
"trace": [
    "Runtime.ImportModuleError: Error: Cannot find module 'mysql'",
You can configure your Lambda function to use additional code and
content in the form of layers. A layer is a ZIP archive that contains
libraries, a custom runtime, or other dependencies. With layers, you
can use libraries in your function without needing to include them in
your deployment package.
To include libraries in a layer, place them in the directory structure
that corresponds to your programming language.
Node.js – nodejs/node_modules
Python – python
Ruby – ruby/gems/2.5.0
Java – java/lib
First create a zip archive that contains the mysql module:
First create a project.
Then in terminal: $project-path > npm init
Then: $project-path > npm install mysql
You should see a node_modules folder created.
Place the node_modules folder inside a directory named nodejs, zip that directory, and upload it as a layer, as shown below.
Then go to Lambda → Layers → Create layer.
I have the Career PersistedModel for storing the data in the database, and I have the attachment model for file storage, to store files in some location. Now I want to send an email with the data. I am able to send only the career data, but I want to send the attachment with the same email too. I cannot fetch the file name, because it is not in the Career model; it is in the attachment model. How do I get the file name and send it? Help me out.
career.js
const app = require('../../server/server');

module.exports = function(Career) {
    Career.afterRemote('create', function(context, remoteMethodOutput, next) {
        console.log(remoteMethodOutput);
        Career.app.models.Email.send({
            to: 'lakshmipriya.l@gmail.com',
            from: 'lakshmipriya.l@gmail.com',
            subject: 'my subject',
            html: 'Hello-world',
            attachments: [
                {
                    path: '../files/resume/' + remoteMethodOutput.resume
                }
            ]
        }, function(err, mail) {
            // console.log(context.result.email)
            console.log('email sent!');
            next(err);
        });
    });
};
attachment.json
{
    "name": "attachment",
    "base": "Model",
    "idInjection": true,
    "options": {
        "validateUpsert": true
    },
    "properties": {},
    "validations": [],
    "relations": {},
    "acls": [],
    "methods": {}
}
My project structure, where I store the files, is:
Using an absolute path for your files is always more robust than a relative path. Use __dirname for that:
const filePath = __dirname + '/files/resume/' + remoteMethodOutput.resume;
If you need to go up one level and then enter the files directory, you need Node's path module to resolve it:
const path = require("path"),
      filePath = path.normalize(__dirname + '/../files/resume/' + remoteMethodOutput.resume);
I have another project where this same code works successfully, so it may be some configuration option I've missed this time around. I'm using the Google Cloud API to access Firebase storage.
For clarity, the file does exist.
var storage = require('@google-cloud/storage')({
    keyFilename: 'serviceAccountKey.json',
    projectId: 'my-id'
});

var bucket = storage.bucket('my-id.appspot.com');
var file = bucket.file('directory/file.json'); // this exists!

file.exists(function(err, exists) {
    console.log("Checking for challenges file. Results:" + exists + ", err:" + err); // returns "Checking for challenges file. Results:true, err:null"
    if (exists) {
        console.log("File exists. Printing."); // prints "File exists. Printing."
        file.download().then(function(currentFileData) {
            console.log("This line is never reached.");
        }).catch(err => {
            console.error('ERROR:', err); // gives a 404 error
        });
    }
});
Instead of printing "this line is never reached.", it prints the following caught error:
ERROR: { ApiError: Not Found at Object.parseHttpRespMessage (/user_code/node_modules/#google-cloud/storage/node_modules/#google-cloud/common/src/util.js:156:33) at Object.handleResp ... ... The full error is colossal, so I won't post it here in its entirety unless required.
It's possible the user trying to access the file only has access to the bucket but not to the file. Check the ACLs of both the bucket and the file in both projects and compare what you get:
myBucket.acl.get()
.then(acls => console.log("Bucket ACLs:", acls));
myFile.acl.get()
.then(acls => console.log("File ACLs:", acls));
You should see an output like this:
[ [ { entity: 'user-abenavides333@gmail.com', role: 'OWNER' },
    { entity: 'user-dwilches@gmail.com', role: 'OWNER' } ],
  { kind: 'storage#objectAccessControls',
    items: [ [Object], [Object] ] } ]
If there is no difference there, try the following more verbose versions of the same code:
myBucket.acl.get()
.then(acls => console.log("Bucket ACLs:", JSON.stringify(acls, null, '\t')));
myFile.acl.get()
.then(acls => console.log("File ACLs:", JSON.stringify(acls, null, '\t')));
I am developing an application where I need to schedule a task, so I am using AWS Lambda for it. However, the scheduled time is dynamic: since it depends on the user request, it can't be scheduled using the AWS Console, so I use the AWS JavaScript SDK to schedule it.
This is the flow:
Create a CloudWatch rule (this is successful; I can see the rule being created in the console).
Add permission to the policy of the Lambda function so that the CloudWatch event can invoke it (the Lambda function code is the same for all requests, so I created the Lambda function in the AWS Console instead of using the SDK).
Add a target to the rule created in step 1 (this step fails). The error I get is: RoleArn is not supported for target arn:aws:lambda:eu-west-1:629429065286:function:prebook.
Below is the Node.js code I wrote
schedule_aws_lambda: function(booking_id, cronTimeIST, callback) {
    var event = new AWS.CloudWatchEvents({
        accessKeyId: accessKeyId,
        secretAccessKey: secretAccessKey,
        region: 'eu-west-1'
    });
    var lambda = new AWS.Lambda({
        accessKeyId: accessKeyId,
        secretAccessKey: secretAccessKey,
        region: 'eu-west-1'
    });

    var year = cronTimeIST.utc().year();
    var month = cronTimeIST.utc().month() + 1;
    var date = cronTimeIST.utc().date();
    var hour = cronTimeIST.utc().hour();
    var minute = cronTimeIST.utc().minute();
    var cronExpression = "cron(" + minute + " " + hour + " " + date + " " + month + " ? " + year + ")";
    var hour_minute = cronTimeIST.format("HH_mm");

    var ruleParams = {
        Name: 'brodcast_' + booking_id + '_' + hour_minute,
        Description: 'prebook brodcast for ' + booking_id + '_' + hour_minute,
        ScheduleExpression: cronExpression,
        RoleArn: 'arn:aws:iam::629429065286:role/service-role/prebook_lambda_role',
        State: 'ENABLED'
    };

    event.putRule(ruleParams).promise()
        .then(data => {
            var lambdaPermission = {
                FunctionName: 'arn:aws:lambda:eu-west-1:629429065286:function:prebook',
                StatementId: 'brodcast_' + booking_id + '_' + hour_minute,
                Action: 'lambda:*',
                Principal: 'events.amazonaws.com'
            };
            return lambda.addPermission(lambdaPermission).promise();
        })
        .then(data => {
            var targetParams = {
                Rule: ruleParams.Name,
                Targets: [
                    {
                        Id: 'default',
                        Arn: 'arn:aws:lambda:eu-west-1:629429065286:function:prebook',
                        RoleArn: ruleParams.RoleArn,
                        Input: JSON.stringify({booking_id: booking_id})
                    }
                ]
            };
            return event.putTargets(targetParams).promise();
        })
        .then(data => {
            callback(null, data);
        })
        .catch(err => {
            callback(err);
        });
}
I know it has something to do with the role, which doesn't have some permission, but I can't figure out the exact cause. I have given the following access to the role.
And this is the role's trust policy document:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "events.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Basically, I want to attach many triggers (the trigger times are not known to me; they depend on user requests) to the Lambda function; however, the Lambda function code is the same for all of them.
Try removing the RoleArn property from the target. If you are adding permissions to the Lambda function to allow CloudWatch Events to invoke it, you don't need it.
In the function policy, make sure you add the SourceArn of the event rule.
Here's the reference from the docs that explains the error. You must use a resource policy (= Lambda permission), not an identity policy (= role) to invoke Lambda from EventBridge:
Docs: Amazon SQS, Amazon SNS, Lambda, CloudWatch Logs, and EventBridge bus targets do not use roles, and permissions to EventBridge must be granted via a resource policy. API Gateway targets can use either resource policies or IAM roles.
The Lambda AddPermission API creates the resource policy.
Not a Node expert, and this is the first time I'm using log4js-node.
I am trying to get my ERROR logs, and any of my console logs, written to a log_file.log file with log4js on a Node.js server running Express. Here is my config file:
{
    "replaceConsole": true,
    "appenders": [
        {
            "type": "file",
            "filename": "log_file.log",
            "maxLogSize": 20480,
            "backups": 3,
            "category": "relative-logger"
        },
        {
            "type": "logLevelFilter",
            "level": "ERROR",
            "appender": {
                "type": "file",
                "filename": "log_file.log"
            }
        },
        {
            "appender": {
                "type": "smtp",
                "recipients": "myemail@gmail.com",
                "sender": "myemailadd@gmail.com",
                "sendInterval": 60,
                "transport": "SMTP",
                "SMTP": {
                    "host": "localhost",
                    "port": 25
                }
            }
        }
    ]
}
And here is how I'm requiring log4js in my app.js file:
var log4js = require("log4js");
log4js.configure("log_config.json");
var logger = log4js.getLogger();
I'm sending manual errors to log4js with this (I can get this to log to the console fine; I just can't get the log file written):
logger.error('A mandrill error occurred: ' + e.name + ' - ' + e.message);
And I'm hoping log4js catches the application's normal ERROR messages.
How do I get log4js to log to log_file.log, then send me an email of that log? I have installed nodemailer 0.7, FYI, to handle SMTP.
Maybe you could remove "category": "relative-logger" from your file appender.
Yes, remove "category": "relative-logger"; it somehow blocks the data transfer into your log file. Or try something like this:
// Setup Logging
log4js.configure({
    appenders: [
        { type: 'console' },
        { type: 'file', filename: '.\\logs\\PesaFastaArchiveData.log' }
    ]
});
The path is, of course, a Windows path.
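For completeness, a combined config along those lines (an untested sketch; the addresses, host, and file path are placeholders carried over from the question) would route everything to the file and only ERROR-level events to the SMTP appender:

```json
{
    "replaceConsole": true,
    "appenders": [
        {
            "type": "file",
            "filename": "log_file.log",
            "maxLogSize": 20480,
            "backups": 3
        },
        {
            "type": "logLevelFilter",
            "level": "ERROR",
            "appender": {
                "type": "smtp",
                "recipients": "myemail@gmail.com",
                "sender": "myemailadd@gmail.com",
                "sendInterval": 60,
                "transport": "SMTP",
                "SMTP": { "host": "localhost", "port": 25 }
            }
        }
    ]
}
```

The idea is that the logLevelFilter wraps the appender it should gate: here it gates the SMTP appender, so the file appender still receives every level while only errors trigger email.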