How can I view resolved variables in serverless.yml? - javascript

Consider this simple example:
service: my-service
frameworkVersion: ">=1.38.0 <2.0.0"
plugins:
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-cf-vars
  - serverless-parameters
  - serverless-scriptable-plugin
  - serverless-s3-deploy
provider:
  name: aws
  region: us-east-1
custom:
  myVariable: "some var value"
  assets:
    auto: true
    targets:
      - bucket: ${self:custom.myVariable}
        prefix: ${self:custom.myVariable}/
        acl: private
        files:
          - source: my file
            glob: "*"
The problem is that when Serverless generates the JSON CloudFormation template and uploads it to CloudFormation, I cannot see what actual value ended up in bucket: ${self:custom.myVariable}.
Is there a way to output the Serverless template with the variables already resolved?

You can use the serverless package command, which packages your entire infrastructure into the .serverless directory.
That is where you can see the results of any local variable resolution.
Note that any CloudFormation variables (e.g. Fn::* config) won't have been compiled, as those are handled by CloudFormation at deployment time.
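For example, with the serverless.yml above, running something like the following leaves the compiled output in the .serverless directory, where the resolved values can be inspected (the exact file names can differ between framework versions, so treat the paths below as a sketch):

serverless package
# compiled CloudFormation template, with local variables resolved:
cat .serverless/cloudformation-template-update-stack.json
# resolved service configuration, including the custom section:
cat .serverless/serverless-state.json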

Related

Serverless Cannot resolve variable

I would like to take a parameter from the result of an external JS function, but I get this error:
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "resources.Resources.FileBucket.Properties.BucketName": Value not found at "file" source
This is (a piece of) my serverless file:
service: backend-uploader
frameworkVersion: '3'
variablesResolutionMode: 20210326
provider:
  name: aws
  runtime: nodejs16.x
  region: eu-west-1
resources:
  Resources:
    FileBucket:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        BucketName: ${file(./unique-bucket-name.cjs):bucketName}
This is my file unique-bucket-name.cjs in the same directory:
module.export = async function () {
  return { bucketName: 'something' }
}
I have tried using self and importing the file under custom, but the error remains.
I have tried a JSON file with the same content and it works.
Why can't my JS file be read by Serverless?
Thanks.
Technically there's not much difference between using a JSON file and a .cjs file to store the variables; internally Serverless has logic to parse the files and fetch the values.
The Serverless documentation points towards using a JSON file:
${file(./config.${opt:stage, 'dev'}.json):CREDS}
But I've also seen examples of people referencing YML files instead.
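Incidentally, one thing worth double-checking in the question's unique-bucket-name.cjs: CommonJS modules export via module.exports (with an s), while module.export is just an ordinary property assignment, so the file would export nothing and Serverless would find no bucketName at the file source. A corrected sketch of the same file:

// unique-bucket-name.cjs
// note: module.exports, not module.export
module.exports = async function () {
  return { bucketName: 'something' }
}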

Error building schema with gatsby-source-filesystem config with official tutorial

I'm quite new to Gatsby, so I wanted to start by following the official tutorial. Everything went well until I came across part 4: https://www.gatsbyjs.com/docs/tutorial/part-4/#task-use-graphiql-to-build-the-query-1
When I update my gatsby-config.js file with the gatsby-source-filesystem configuration, I can't run my website locally.
When I run the gatsby develop command in my terminal, I get the following error while it tries to build the schema:
gatsby develop
success compile gatsby files - 1.288s
success load gatsby config - 0.013s
success load plugins - 0.303s
success onPreInit - 0.004s
success initialize cache - 0.074s
success copy gatsby files - 0.049s
success Compiling Gatsby Functions - 0.080s
success onPreBootstrap - 0.087s
success createSchemaCustomization - 0.002s
success Checking for changed pages - 0.002s
success source and transform nodes - 0.089s
ERROR
Missing onError handler for invocation 'building-schema', error was 'Error: TypeError[File.publicURL]: Cannot convert to OutputType the following value: Object({ type: String, args: Object({ }), description: "Copy file to static directory and return public url to it", resolve: [function resolve] })'. Stacktrace was 'Error: TypeError[File.publicURL]: Cannot convert to OutputType the following value: Object({ type: String, args: Object({ }), description: "Copy file to static directory and return public url to it", resolve: [function resolve] })
    at TypeMapper.convertOutputFieldConfig (/Users/nicolas/my-gatsby-site/node_modules/graphql-compose/src/TypeMapper.ts:419:13)
    at ObjectTypeComposer.setField (/Users/nicolas/my-gatsby-site/node_modules/graphql-compose/src/ObjectTypeComposer.ts:445:40)
    at /Users/nicolas/my-gatsby-site/node_modules/graphql-compose/src/ObjectTypeComposer.ts:479:14
    at Array.forEach (<anonymous>)
    at ObjectTypeComposer.addNestedFields (/Users/nicolas/my-gatsby-site/node_modules/graphql-compose/src/ObjectTypeComposer.ts:468:28)
    at forEach (/Users/nicolas/my-gatsby-site/node_modules/gatsby/src/schema/schema.js:764:39)
    at Array.forEach (<anonymous>)
    at /Users/nicolas/my-gatsby-site/node_modules/gatsby/src/schema/schema.js:764:18
    at async Promise.all (index 54)
    at updateSchemaComposer (/Users/nicolas/my-gatsby-site/node_modules/gatsby/src/schema/schema.js:168:3)
    at buildSchema (/Users/nicolas/my-gatsby-site/node_modules/gatsby/src/schema/schema.js:71:3)
    at build (/Users/nicolas/my-gatsby-site/node_modules/gatsby/src/schema/index.js:112:18)
    at buildSchema (/Users/nicolas/my-gatsby-site/node_modules/gatsby/src/services/build-schema.ts:19:3)'
⠸ building schema
Then the building schema step runs forever and my site never launches.
I know the issue comes from adding gatsby-source-filesystem to the gatsby-config.js file, because when I remove it, gatsby develop runs without any issue.
Here is my gatsby-config.js file, nearly identical to the one in the tutorial (I just changed the blog title):
module.exports = {
  siteMetadata: {
    title: `Arckablog`,
    siteUrl: `https://www.yourdomain.tld`,
  },
  plugins: [
    "gatsby-plugin-image",
    "gatsby-plugin-sharp",
    {
      resolve: "gatsby-source-filesystem",
      options: {
        name: `blog`,
        path: `${__dirname}/blog`,
      }
    },
  ],
}
I have seen a similar question on Stack Overflow here: Error building schema with gatsby-source-filesystem config (following official tutorial). I tried updating both my Gatsby version and the gatsby-source-filesystem version, but neither worked for me.
Can you please advise me?
Thank you for your help!
Nicolas
I encountered the same problem.
I was running
npm install gatsby-source-filesystem
from a different folder. Once I changed my current folder to the project directory and ran npm install gatsby-source-filesystem again, everything worked fine.
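In other words, the plugin has to be installed into the same project that gatsby develop is run from; roughly like this (the project path is only illustrative):

cd ~/my-gatsby-site   # the folder that contains gatsby-config.js and package.json
npm install gatsby-source-filesystem
gatsby develop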

How to Deploy from Gitlab-ci to multiple kubernetes namespaces?

I have two variables containing my namespace names:
$KUBE_NAMESPACE_DEV = "stellacenter-dev"
$KUBE_NAMESPACE_STAGE = "stellacenter-stage-uat"
Now I want to modify the following .gitlab-ci.yml configuration to include the namespace logic:
deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f ./provider-service.yml
  only:
    - developer
provider-service.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-app
  namespace: "stellacenter-dev" or "stellacenter-stage-uat"
  labels:
    app: provider-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider-app
  template:
    metadata:
      labels:
        app: provider-app
    spec:
      containers:
        - name: provider-app
          image: registry.gitlab.com/stella-center/backend-services/provider-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8092
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: "stellacenter-dev" "stellacenter-stage-uat"
spec:
  type: NodePort
  selector:
    app: provider-app
  ports:
    - port: 8092
      targetPort: 8092
I don't know how to integrate the variables and the values correctly. I'm facing an error when I run the pipeline. Kindly help me sort it out.
You can remove the namespace field from the manifest and apply the resource to a namespace from the command line:
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_STAGE}
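A sketch of how the two deploy jobs could then differ only in the namespace they target; the deploy_stage job name, its branch filter, and the stage credentials are assumptions modelled on the deploy_dev job from the question, and the before_script setup (aws configure, kubeconfig) is omitted for brevity:

deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  script:
    - kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
  only:
    - developer

deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  script:
    - kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_STAGE}
  only:
    - main   # assumed branch for the stage environment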
Just add one line above the apply command: with sed you can substitute the desired namespace into the YAML file before applying it.
- export KUBECONFIG=$HOME/.kube/config
- sed -i "s,NAMESPACE,$KUBE_NAMESPACE_DEV," provider-service.yml
- kubectl apply -f ./provider-service.yml
Inside that YAML file, keep something like:
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: NAMESPACE
spec:
  type: NodePort
You can keep one variable instead of two for namespace management; either way, with sed you can set the namespace in the YAML and then apply that YAML.
Inside your repo the file acts like a template: when CI runs, NAMESPACE gets replaced by the sed command and the resulting YAML is applied to Kubernetes. In the same way, you can keep other fields as placeholders and replace them with sed as needed.
apiVersion: v1
kind: Service
metadata:
  name: SERVICE_NAME
  namespace: NAMESPACE
spec:
  type: SERVICE_TYPE
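Following the same template idea, here is a sketch of replacing several placeholders in one pass before applying; the concrete service name and type are assumptions:

- sed -i "s,NAMESPACE,${KUBE_NAMESPACE_DEV},g; s,SERVICE_NAME,provider-service,g; s,SERVICE_TYPE,NodePort,g" provider-service.yml
- kubectl apply -f ./provider-service.yml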

Serverless error - The CloudFormation template is invalid - during deployment

When deploying lambdas with serverless, the following error occurs:
The CloudFormation template is invalid: Template format error: Output ServerlessDeploymentBucketName is malformed. The Name field of every Export member must be specified and consist only of alphanumeric characters, colons, or hyphens.
I don't understand what the problem is.
Serverless config file:
service: lambdas-${opt:region}
frameworkVersion: '2'
provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 512
  timeout: 10
  lambdaHashingVersion: 20201221
  region: ${opt:region}
  stackName: lambdas-${opt:region}
  logRetentionInDays: 14
  deploymentBucket:
    name: lambdas-${opt:region}
plugins:
  - serverless-deployment-bucket
functions:
  function1:
    handler: function1/index.handler
    name: function1-${opt:stage}
    description: This function should call specific API on Backend server
    events:
      - schedule: cron(0 0 * * ? *)
    environment:
      ENV: ${opt:stage}
  function2:
    handler: function2/index.handler
    name: function2-${opt:stage}
    description: Function should be triggered by invocation from backend.
    environment:
      ENV: ${opt:stage}
I ran into this same problem.
In my serverless.yml I changed the service name from lambda_function to lambdaFunction.
The error was solved and it deployed correctly.
Most likely your stage name contains an illegal character. Serverless auto-generates an export name for your deployment bucket based on your stage name. If you look at the generated template file you will see the full export, which will look something like the following:
"ServerlessDeploymentBucketName": {
"Value": "api-deployment",
"Export": {
"Name": "sls-api_stage-ServerlessDeploymentBucketName"
}
}
The way around this (assuming you don't want to change your stage name) is to explicitly set the output by adding something like this to your serverless config (in this case the illegal character was the underscore):
resources: {
  Outputs: {
    ServerlessDeploymentBucketName: {
      Export: {
        Name: `sls-${stageKey.replace('api_', 'api-')}-ServerlessDeploymentBucketName`
      }
    }
  }
}
Unfortunately this has to be done for every export... It is a better option to update your stage name so it doesn't include illegal characters.
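If the config is a plain serverless.yml rather than a JS file, a roughly equivalent override would look like the sketch below; the export name is an assumption based on the api_stage example above, and depending on how your framework version merges outputs you may also need to repeat the Value field:

resources:
  Outputs:
    ServerlessDeploymentBucketName:
      Export:
        Name: sls-api-stage-ServerlessDeploymentBucketName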

Node process.env variable in eval() function gives different result

I have a React app running in Node with server-side rendering.
The following environment variable is set to test through Kubernetes in my test environment: process.env.NODE_ENV.
When I run the following two commands they give different results. I expect the value to always be test.
log.debug(process.env.NODE_ENV) // logs development
log.debug(eval('process.env.NODE_ENV')) // logs test
Somehow, it looks like the variable is first interpreted as development (which can happen in my code if it is undefined), but it is interpreted correctly as test by the eval() function.
What can cause Node to interpret the value differently between the two expressions?
EDIT: Added the Kubernetes YAML config.
The ${} variables are replaced by Azure DevOps during the release process.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: ${KUBERNETES_NAMESPACE}
data:
  NODE_ENV: ${NODE_ENV}
---
kind: Service
apiVersion: v1
metadata:
  name: ${SERVICE_NAME}
spec:
  selector:
    app: ${SERVICE_NAME}
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3000
  loadBalancerIP: ${IP_NUMBER}
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ${SERVICE_NAME}
  labels:
    app: ${SERVICE_NAME}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ${SERVICE_NAME}
  template:
    metadata:
      labels:
        app: ${SERVICE_NAME}
    spec:
      containers:
        - name: ${SERVICE_NAME}
          image: {IMAGE_PATH}/${IMAGE_REPO}:${BUILD_NUMBER}
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 1024Mi
          envFrom:
            - configMapRef:
                name: config
      imagePullSecrets:
        - name: ${IMAGEPULLSECRETNAME}
I seem to have found the cause of the issue.
We use webpack for bundling (which I maybe should have mentioned), and in the server code webpack outputs I can see that it has resolved process.env.NODE_ENV to a static value, but it doesn't do the same for eval('process.env.NODE_ENV').
It seems my post was unnecessary, but I hope it might help someone in the future.
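For context, webpack's DefinePlugin (which the mode option configures for process.env.NODE_ENV) does a compile-time text substitution: only the literal expression process.env.NODE_ENV in the source is replaced, while a string passed to eval() is left alone and therefore reads the real runtime environment. A minimal sketch of the kind of configuration that produces this behaviour, with an assumed value:

// webpack.config.js (illustrative sketch)
const webpack = require('webpack');

module.exports = {
  plugins: [
    // replaces every literal occurrence of process.env.NODE_ENV at build time
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify('development'),
    }),
  ],
};

After bundling, log.debug(process.env.NODE_ENV) effectively becomes log.debug("development"), while eval('process.env.NODE_ENV') still performs a runtime lookup and sees the test value injected by Kubernetes.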
