Consider a piece of serverless code:
functions:
  MyFunc:
    handler: index.handler
    name: "my_name"
    runtime: nodejs12.x
    memorySize: 512
    timeout: 30
    inlineCode: |
      exports.handler = function(event, context) {
        console.log("ok");
      };
    description: description
This leads Serverless to package everything in the source folder, and I cannot disable it. Even if I add:
package:
  artifact: dummy.zip
the deploy fails because dummy.zip is an empty file. But why do I need a zip file at all when inlineCode is specified? Is there a way to disable packaging and deploy a Node.js function with only the inlineCode parameter?
The workaround is to define the Lambda function as a plain CloudFormation resource, like this:
resources:
  Resources:
    MyFunc:
      Type: AWS::Lambda::Function
      Properties:
        FunctionName: "my_name"
        Handler: index.handler
        Runtime: nodejs10.x
        Role: !GetAtt LambdaRole.Arn # do not forget to define the role by hand :(
        Code:
          ZipFile: |
            exports.handler = function(event, context, callback) {
              console.log(event);
              const response = {
                statusCode: 200,
                body: JSON.stringify('Hello Node')
              };
              callback(null, response);
            };
An inlineCode parameter is supported by AWS::Serverless::Function (AWS SAM), but not by the Serverless Framework. The YAML you pasted is not a 1:1 mapping to AWS::Serverless::Function; it's specific to sls itself.
Store your code in files/directories until the sls team adds support for inlineCode. I didn't see any feature request for it; I'm sure they'd be glad to get one from you.
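For reference, inline code in an actual AWS SAM template is expressed with the InlineCode property of AWS::Serverless::Function. Here is a minimal sketch (the logical ID and handler body are illustrative, and this is SAM, not the Serverless Framework):
Resources:
  MyInlineFunc:                      # illustrative logical ID
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      InlineCode: |
        exports.handler = async () => {
          console.log("ok");
        };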
I would like to take a parameter from the result of an external JS function, but I get this error:
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "resources.Resources.FileBucket.Properties.BucketName": Value not found at "file" source
This is (a piece of) my serverless file:
service: backend-uploader
frameworkVersion: '3'
variablesResolutionMode: 20210326
provider:
  name: aws
  runtime: nodejs16.x
  region: eu-west-1
resources:
  Resources:
    FileBucket:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        BucketName: ${file(./unique-bucket-name.cjs):bucketName}
This is my file unique-bucket-name.cjs in the same directory:
module.export = async function () {
  return { bucketName: 'something' }
}
I have tried using self and importing the file as custom, but the error persists.
I have tried using a JSON file with the same content and it works.
Why can't my JS file be read by Serverless?
Thanks.
Technically there's not much difference between using a JSON file to store the variables and a .cjs file. It looks like internally Serverless has some logic to parse the files and fetch the values.
Serverless documentation points towards using a JSON file:
${file(./config.${opt:stage, 'dev'}.json):CREDS}
But I've also seen some examples of people referencing YML files instead.
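For what it's worth, here is a minimal sketch of the shape the Serverless v3 file() resolver expects for a JS variable file; the resolver calls the exported function and reads the requested key from the returned object (the bucket name below is purely illustrative):
// unique-bucket-name.cjs: a sketch of a variable file that the
// ${file(...):bucketName} resolver can read; note module.exports (plural).
module.exports = async function () {
  // Any sync or async logic can go here; Serverless awaits the result.
  return { bucketName: 'my-unique-bucket-name' }; // illustrative value
};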
When deploying lambdas with serverless, the following error occurs:
The CloudFormation template is invalid: Template format error: Output ServerlessDeploymentBucketName is malformed. The Name field of every Export member must be specified and consist only of alphanumeric characters, colons, or hyphens.
I don't understand what the problem is.
Serverless config file:
service: lambdas-${opt:region}
frameworkVersion: '2'
provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 512
  timeout: 10
  lambdaHashingVersion: 20201221
  region: ${opt:region}
  stackName: lambdas-${opt:region}
  logRetentionInDays: 14
  deploymentBucket:
    name: lambdas-${opt:region}
plugins:
  - serverless-deployment-bucket
functions:
  function1:
    handler: function1/index.handler
    name: function1-${opt:stage}
    description: This function should call specific API on Backend server
    events:
      - schedule: cron(0 0 * * ? *)
    environment:
      ENV: ${opt:stage}
  function2:
    handler: function2/index.handler
    name: function2-${opt:stage}
    description: Function should be triggered by invocation from backend.
    environment:
      ENV: ${opt:stage}
I ran into this same problem.
In serverless.yml I changed the service name, which I had as lambda_function, to lambdaFunction.
The error was solved and it deployed correctly.
Most likely your stage name contains an illegal character. Serverless auto-generates a name for your s3 bucket based on your stage name. If you look at the generated template file you will see the full export, which will look something like the following:
"ServerlessDeploymentBucketName": {
"Value": "api-deployment",
"Export": {
"Name": "sls-api_stage-ServerlessDeploymentBucketName"
}
}
The way around this (assuming you don't want to change your stage name) is to explicitly set the output by adding something like this to your serverless config (in this case the illegal character was the underscore):
resources: {
  Outputs: {
    ServerlessDeploymentBucketName: {
      Export: {
        Name: `sls-${stageKey.replace('api_', 'api-')}-ServerlessDeploymentBucketName`
      }
    }
  }
}
Unfortunately, this has to be done for every export. It is a better option to update your stage name so that it does not include illegal characters.
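If your config is plain serverless.yml rather than JS/TS, the same override can be sketched in YAML, mirroring the snippet above (the export name here is illustrative; it just needs to contain only legal characters):
resources:
  Outputs:
    ServerlessDeploymentBucketName:
      Export:
        Name: sls-api-stage-ServerlessDeploymentBucketName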
I am trying to create a Deployment or ReplicaSet with the Kubernetes JavaScript client. The Kubernetes JavaScript client documentation is virtually non-existent.
Is there any way to achieve this?
Assuming that by createDeployment() you are referring to createNamespacedDeployment(), you can use the code snippet below to create a Deployment with the JavaScript client library:
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

const k8sApi = kc.makeApiClient(k8s.AppsV1Api); // <-- notice the AppsV1Api

// Definition of the deployment
var amazingDeployment = {
  metadata: {
    name: 'nginx-deployment'
  },
  spec: {
    selector: {
      matchLabels: {
        app: 'nginx'
      }
    },
    replicas: 3,
    template: {
      metadata: {
        labels: {
          app: 'nginx'
        }
      },
      spec: {
        containers: [
          {
            name: 'nginx',
            image: 'nginx'
          }
        ]
      }
    }
  }
};

// Sending the request to the API
k8sApi.createNamespacedDeployment('default', amazingDeployment).then(
  (response) => {
    console.log('Yay! \nYou spawned: ' + amazingDeployment.metadata.name);
  },
  (err) => {
    console.log('Oh no. Something went wrong :(');
    // console.log(err) <-- Get the full output!
  }
);
Disclaimer!
This code assumes that you have your ~/.kube/config already configured!
Running this code for the first time with:
$ node deploy.js
should output:
Yay!
You spawned: nginx-deployment
You can check if the Deployment exists by:
$ kubectl get deployment nginx-deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 6m57s
Running this code once again will output (deployment already exists!):
Oh no. Something went wrong :(
Additional resources:
Github.com: Kubernetes-client: Javascript
Be careful when you try to deploy different kinds of resources, such as a Deployment or a Service: you need to make the client for the correct API group.
Use const k8sApi = kc.makeApiClient(k8s.AppsV1Api) for Deployments, or kc.makeApiClient(k8s.CoreV1Api) for Namespaces, Services, and so on.
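For illustration, a minimal sketch of picking the client per API group (it assumes a working kubeconfig; the calls in the comments follow the positional-argument style used in the answer above):
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

const appsApi = kc.makeApiClient(k8s.AppsV1Api); // Deployments, ReplicaSets, StatefulSets, ...
const coreApi = kc.makeApiClient(k8s.CoreV1Api); // Namespaces, Services, Pods, ConfigMaps, ...

// e.g. appsApi.createNamespacedDeployment('default', deploymentBody)
//      coreApi.createNamespacedService('default', serviceBody)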
First, you create a kube config object and then create the associated API type. I.e,
import k8s from '@kubernetes/client-node';

const kubeConfig = new k8s.KubeConfig();
kubeConfig.loadFromCluster(); // Or whatever method you choose.

const api = kubeConfig.makeApiClient(k8s.CoreV1Api); // Or whatever API you'd like to use.

const namespace = 'default';
const manifest = new k8s.V1ConfigMap();
// ... additional manifest setup code...

await api.createNamespacedConfigMap(namespace, manifest);
This is the gist of it. If you'd like, I recently created a library intended to simplify interactions with the Kubernetes JavaScript API, and it can be found here:
https://github.com/ThinkDeepTech/k8s
If it doesn't help you directly, perhaps it can serve as an example of how to interact with the API. I hope that helps!
Also, make sure the application executing this code has the permissions (i.e., the K8s Role, RoleBinding, and ServiceAccount configs) necessary to perform the actions you're attempting. Otherwise, it'll error out.
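As a rough illustration of those RBAC objects (all names, the namespace, and the resource/verb list are made up for this example; adjust them to whatever your code actually does):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-writer
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-configmap-writer
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-writer
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: default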
I'm working on a sample AWS project that creates two lambda functions. These functions share common code from node_modules which has been placed in a separate layer (specifically AWS::Lambda::LayerVersion, not AWS::Serverless::LayerVersion). I can deploy this code and it works correctly when I test the deployed version.
However, when I try to test the code locally using sam local invoke, the common code is not found. I get this error (I'm trying to use the npm package "axios"):
{"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'axios'\nRequire stack:\n- /var/task/get-timezone.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js"}
This is my template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: AWS Sample
Globals:
  Function:
    Timeout: 30
Resources:
  SampleCommonLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      CompatibleRuntimes:
        - nodejs12.x
      Content: nodejs.zip
      Description: Sample Common LayerVersion
      LayerName: SampleCommonLayer
  GetTimezoneFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: dist/get-timezone
      Handler: get-timezone.getTimezone
      Runtime: nodejs12.x
      Layers:
        - !Ref SampleCommonLayer
      Events:
        GetTimezone:
          Type: Api
          Properties:
            Path: /get-timezone
            Method: get
  ReverseFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: dist/reverse
      Handler: reverse.reverse
      Runtime: nodejs12.x
      Layers:
        - !Ref SampleCommonLayer
      Events:
        Reverse:
          Type: Api
          Properties:
            Path: /reverse
            Method: get
Outputs:
  GetTimezoneApi:
    Description: "API Gateway endpoint URL for Prod stage for getTimezone function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/get-timezone/"
  GetTimezoneFunction:
    Description: "getTimezone Lambda Function ARN"
    Value: !GetAtt GetTimezoneFunction.Arn
  GetTimezoneFunctionIamRole:
    Description: "Implicit IAM Role created for getTimezone function"
    Value: !GetAtt GetTimezoneFunctionRole.Arn
  ReverseApi:
    Description: "API Gateway endpoint URL for Prod stage for reverse function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/reverse/"
  ReverseFunction:
    Description: "reverse Lambda Function ARN"
    Value: !GetAtt ReverseFunction.Arn
  ReverseFunctionIamRole:
    Description: "Implicit IAM Role created for reverse function"
    Value: !GetAtt ReverseFunctionRole.Arn
I'm invoking the GetTimezone function like this:
sam local invoke --layer-cache-basedir layer-cache --force-image-build \"GetTimezoneFunction\" --event events/event-timezone.json -d 5858
Nothing ever gets copied into the layer-cache directory, and I'm sure that's part of the problem, but I can't figure out how I'd fix that.
I've searched for answers to this problem, but so far I've only found unanswered questions, or answers that don't match my particular situation.
Most of the somewhat-related questions involve AWS::Serverless::LayerVersion, not AWS::Lambda::LayerVersion. I've tried using Serverless instead, but that hasn't helped.
UPDATE:
If I change...
      Layers:
        - !Ref SampleCommonLayer
...to...
      Layers:
        - arn:aws:lambda:us-east-2:xxxxxxxxxxxx:layer:SampleCommonLayer:y
...using an already-deployed layer (where xxxxxxxxxxxx and y are a specific ID and version), then sam local invoke works. But I don't want to use something I have to deploy first; I want to use the latest local, not-yet-deployed code.
This is a known issue: https://github.com/awslabs/aws-sam-cli/issues/947
Currently the workaround is to point the layer's content at the layer's directory instead of a zip file.
I am quite new to Node.js.
I am using the Node.js module node-workflow.
Basically, this module is an orchestrator that takes a custom JavaScript script (= a workflow definition), serializes it, stores it in a Redis DB (for example), and executes it on demand later on.
A workflow definition is composed of tasks, like this:
var my_external_module = require('my_external_module');

var workflow = module.exports = {
  name: 'Workflow Test',
  chain: [{
    name: 'TASK 1',
    timeout: 30,
    retry: 1,
    body: function(job, cb) {
      // Execute external function
      my_external_module.hello("Monkey");
      return cb(null);
    },
  },
  ...
First I put my function my_external_module.hello() in a .js file beside the workflow script.
When I run the node-workflow module I get the following error:
Error initializing runner:
[ReferenceError: my_external_module is not defined]
So I have created a module my_external_module, with the following in ./node_modules/my_external_module/index.js:
module.exports = {
  hello: function(name) {
    console.log("Hello, " + name);
  }
};
When I run the node-workflow module I get the same error:
Error initializing runner:
[ReferenceError: my_external_module is not defined]
It seems that the require(...) would have to live in one of the .js files of the node-workflow module itself, so I would have to hack one of the module's files, which is a bit dirty.
Is there something I missed?
Or is there a way to define something like a $PATH (as in Python) so that my function is accessible from everywhere?
You have to require it as:
var my_external_module = require('./my_external_module');
Notice the ./ prefix: it means Node should look for a file like ./my_external_module.js.
If you omit the ./, Node looks for an installed module in the (usually) node_modules directory.
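For illustration, a minimal sketch of the two resolution modes (the file names and layout are made up):
// Assumed layout:
//   workflow.js              <-- the workflow definition
//   my_external_module.js    <-- the helper file next to it
//
// Relative path: Node resolves this against the requiring file's directory.
var my_external_module = require('./my_external_module');

// Bare name (no ./): Node instead searches node_modules directories,
// e.g. ./node_modules/my_external_module/index.js.
// var my_external_module = require('my_external_module');

my_external_module.hello('Monkey'); // prints "Hello, Monkey"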