I have two variables containing my namespace names:
KUBE_NAMESPACE_DEV="stellacenter-dev"
KUBE_NAMESPACE_STAGE="stellacenter-stage-uat"
Now I want to modify the following .gitlab-ci.yml configuration to include the namespace logic:
deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f ./provider-service.yml
  only:
    - developer
provider-service.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-app
  namespace: "stellacenter-dev" or "stellacenter-stage-uat"
  labels:
    app: provider-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider-app
  template:
    metadata:
      labels:
        app: provider-app
    spec:
      containers:
        - name: provider-app
          image: registry.gitlab.com/stella-center/backend-services/provider-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8092
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: "stellacenter-dev" "stellacenter-stage-uat"
spec:
  type: NodePort
  selector:
    app: provider-app
  ports:
    - port: 8092
      targetPort: 8092
I don't know how to integrate the variables and values correctly, and I'm getting an error when I run the pipeline. Kindly help me sort it out.
You can remove the namespace field from the manifest and apply the resource to a namespace from the command line instead:
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_STAGE}
In the dev job's script, the final lines then become:
- export KUBECONFIG=$HOME/.kube/config
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
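For the stage environment, one option is a mirrored job. This is only a sketch based on your dev job; the STAGE_* credential variables, KUBE_CONFIG_STAGE, and the stage branch name are assumptions:

deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    # assumed stage-specific AWS credentials, mirroring the dev job
    - aws configure set aws_access_key_id ${STAGE_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${STAGE_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${STAGE_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_STAGE $HOME/.kube/config   # assumed stage kubeconfig variable
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_STAGE}
  only:
    - stage   # assumed branch name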
Using sed, you can substitute the namespace variable into the YAML file:
sed -i "s,NAMESPACE,${KUBE_NAMESPACE_DEV}," provider-service.yml
Inside that YAML file, keep the namespace as a placeholder, something like:
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: NAMESPACE
spec:
  type: NodePort
You can keep one variable instead of two for namespace management; either way, with sed you can set the namespace in the YAML and then apply that YAML.
Inside your repo the file acts as a template: when CI runs, NAMESPACE gets replaced by the sed command and the YAML is applied to Kubernetes. In the same way, you can keep other fields as placeholders and replace them with sed as needed.
apiVersion: v1
kind: Service
metadata:
  name: SERVICE_NAME
  namespace: NAMESPACE
spec:
  type: SERVICE_TYPE
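For example, a CI step could fill in all three placeholders before applying the file; the concrete substitution values here are illustrative assumptions:

# replace each placeholder in the template (values are assumptions)
sed -i "s,SERVICE_NAME,provider-service," provider-service.yml
sed -i "s,NAMESPACE,${KUBE_NAMESPACE_DEV}," provider-service.yml
sed -i "s,SERVICE_TYPE,NodePort," provider-service.yml
kubectl apply -f ./provider-service.yml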
When deploying lambdas with serverless, the following error occurs:
The CloudFormation template is invalid: Template format error: Output ServerlessDeploymentBucketName is malformed. The Name field of every Export member must be specified and consist only of alphanumeric characters, colons, or hyphens.
I don't understand what the problem is.
Serverless config file:
service: lambdas-${opt:region}
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 512
  timeout: 10
  lambdaHashingVersion: 20201221
  region: ${opt:region}
  stackName: lambdas-${opt:region}
  logRetentionInDays: 14
  deploymentBucket:
    name: lambdas-${opt:region}

plugins:
  - serverless-deployment-bucket

functions:
  function1:
    handler: function1/index.handler
    name: function1-${opt:stage}
    description: This function should call specific API on Backend server
    events:
      - schedule: cron(0 0 * * ? *)
    environment:
      ENV: ${opt:stage}
  function2:
    handler: function2/index.handler
    name: function2-${opt:stage}
    description: Function should be triggered by invocation from backend.
    environment:
      ENV: ${opt:stage}
I ran into this same problem.
In serverless.yml I renamed the service, which I had as lambda_function, to lambdaFunction.
The error was solved and it deployed correctly.
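In other words, the underscore was the illegal character. A sketch of the change:

# before: the underscore propagates into the generated export name
service: lambda_function
# after: camelCase avoids the illegal character
service: lambdaFunction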
Most likely your stage name contains an illegal character. Serverless auto-generates a name for your S3 bucket based on your stage name. If you look at the generated template file, you will see the full export, which will look something like the following:
"ServerlessDeploymentBucketName": {
"Value": "api-deployment",
"Export": {
"Name": "sls-api_stage-ServerlessDeploymentBucketName"
}
}
The way around this (assuming you don't want to change your stage name) is to explicitly set the output by adding something like this to your serverless config (in this case the illegal character was the underscore):
resources: {
  Outputs: {
    ServerlessDeploymentBucketName: {
      Export: {
        Name: `sls-${stageKey.replace('api_', 'api-')}-ServerlessDeploymentBucketName`
      }
    }
  }
}
Unfortunately, this has to be done for every export, so it is a better option to update your stage name so it does not include illegal characters.
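If your config is plain YAML rather than JS, a hedged equivalent (hard-coding the sanitized name for a hypothetical api_stage stage) would look like this:

resources:
  Outputs:
    ServerlessDeploymentBucketName:
      Export:
        # sanitized by hand; "api_stage" is the assumed offending stage name
        Name: sls-api-stage-ServerlessDeploymentBucketName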
I'm currently running my automation from a pipeline; please see the YAML file below:
jobs:
  - job: master
    pool:
      vmImage: ubuntu-latest
    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: '10.14'
        displayName: 'Install Node.js'
      - script: npm install
        displayName: 'Install TestCafe'
      - script: npm test
        displayName: 'Run Tests'
      - task: PublishTestResults@2
        inputs:
          testResultsFiles: 'report.xml'
          testResultsFormat: 'JUnit'
This works well. The problem I have is that I would like to be able to make the URL dynamic, based on variables entered into Azure DevOps.
The updated YAML file is below:
trigger:
  - master

parameters:
  - name: env
    type: string
    default: testing
    values:
      - testing
      - bdev
      - fdev
  - name: person
    type: string
    default: uat
    values:
      - bs
      - nk
      - uat
      - mc
      - rm
      - pe
      - mv
      - mm

variables:
  webapp: 'Test-rt5-${{ parameters.env }}-app-${{ parameters.person }}'

stages:
  - stage: 'Build'
    displayName: 'Build ${{ parameters.env }}-${{ parameters.person }}'
    jobs:
      - job: master
        pool:
          vmImage: ubuntu-latest
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '10.14'
            displayName: 'Install Node.js'
          - script: npm install
            displayName: 'Install TestCafe'
          - script: npm test
            displayName: 'Run Tests'
          - task: PublishTestResults@2
            inputs:
              testResultsFiles: 'report.xml'
              testResultsFormat: 'JUnit'
How do I use these variables to form my URL for each test/fixture? Currently I am using:

const URL = 'https://test-rt5-bdev-app-rm.com/';

fixture("SmokeFixture")
    .page(URL);
Azure Docs about defining variables state the following:
Notice that variables are also made available to scripts through environment variables.
So, you can use your variable with a dynamically created URL just by accessing the corresponding environment variable in the TestCafe test:
fixture("SmokeFixture")
.page(process.env.WEBAPP)
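Note that Azure DevOps uppercases variable names when exposing them as environment variables, so webapp becomes WEBAPP. If the variable holds just the app name rather than a full URL, a sketch of building the URL might look like this (the ".com" suffix and the lowercasing are assumptions based on the hard-coded URL above):

// WEBAPP carries the pipeline variable `webapp`, e.g. "Test-rt5-bdev-app-rm"
const URL = `https://${process.env.WEBAPP.toLowerCase()}.com/`; // domain suffix assumed

fixture("SmokeFixture")
    .page(URL);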
I am new to Electron. I have an Angular application wrapped in Electron, and I want to build the package/installer using electron-builder. I am using an electron-builder-config.yaml file to build the installer.
I would like to know how to read values from a .env environment file into the electron-builder-config.yaml file.
I want to set the version of the package that is generated by the command electron-builder -w --publish always -c ./builder-config.yaml.
I did try using the buildVersion property, but the problem is that there is an installer.nsh file that needs to run as part of the NSIS installer to set the path, and that file uses ${version}.
There is very little documentation on environment variable usage in electron-builder-config.yaml.
Here is my electron-builder-config.yaml
directories:
  output: ./dist/electron
  buildResources: ./electron/build
  app: ''
electronVersion: X.Y.Z
appId: com.sample.app
copyright: "Copyright © 2020 ${author}"
productName: TestApp
forceCodeSigning: true
artifactName: "${productName}-${os}-${version}.${ext}"
files:
  - "**/dist/electron/*"
  - "**/electron/*"
asar: true
compression: maximum
mac:
  category: public.app-category.reference
  icon: "./icon-file.icns"
  publish: [{
    "provider": "generic",
    "url": "http://localhost:8080"
  }]
dmg:
  background: "./build/sample.jpg"
  icon: "./build/nw.icns"
  iconSize: 96
  contents:
    - x: 650
      y: 230
      type: link
      path: /Applications
    - x: 350
      y: 230
      type: file
win:
  cscLink: "./somelink.pfx"
  cscKeyPassword: "XXXXXX"
  target: [nsis]
  icon: "./appinfo.ico"
  publish: [{
    "provider": "generic",
    "url": "http://localhost:8080"
  }]
msi:
  shortcutName: "TestApp - ${version}"
  createDesktopShortcut: true
  createStartMenuShortcut: true
nsis:
  include: "./installer.nsh"
  installerIcon: "./appinfo.ico"
  uninstallerIcon: "./appinfo.ico"
  packElevateHelper: true
  allowToChangeInstallationDirectory: true
  perMachine: true
  oneClick: false
  createDesktopShortcut: true
  createStartMenuShortcut: true
  shortcutName: "TestApp - ${version}"
  guid: "someguid"
npmRebuild: true
nodeGypRebuild: false
Also, I am not sure about the ${ext} macro. Where does the electron-builder-config.yaml file pick up that value? Even in the documentation for file macros, ${version} does not have a clear definition. Any suggestions?
I got it figured out. In case someone else is looking for the answer to this question, here is how I got it working.
Step 1: Create a file named electron-builder.env at the root level, where your package.json resides. Please make sure the file is named exactly electron-builder.env.
Step 2: Define the variables you need inside the electron-builder.env file, for example ELECTRON_BUILD_VERSION=99.99.
Step 3: Inside your builder-config.yaml file, access the environment variable with the syntax ${env.ELECTRON_BUILD_VERSION}.
There you go. Have fun. Happy coding 😊
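A minimal sketch of how the pieces line up (ELECTRON_BUILD_VERSION is just the example variable from Step 2; wiring it to buildVersion and artifactName is an assumption about your setup):

# electron-builder.env (next to package.json)
ELECTRON_BUILD_VERSION=99.99

# builder-config.yaml (relevant lines only)
buildVersion: "${env.ELECTRON_BUILD_VERSION}"
artifactName: "${productName}-${os}-${env.ELECTRON_BUILD_VERSION}.${ext}"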
Consider this simple example:
service: my-service
frameworkVersion: ">=1.38.0 <2.0.0"

plugins:
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-cf-vars
  - serverless-parameters
  - serverless-scriptable-plugin
  - serverless-s3-deploy

provider:
  name: aws
  region: us-east-1

custom:
  myVariable: "some var value"
  assets:
    auto: true
    targets:
      - bucket: ${self:custom.myVariable}
        prefix: ${self:custom.myVariable}/
        acl: private
        files:
          - source: my file
            glob: "*"
The problem here is that when Serverless generates a JSON CloudFormation template and uploads it to CloudFormation, I cannot see what the actual value of bucket: ${self:custom.myVariable} was.
Is there a way to output the Serverless template with the variables already resolved?
You can use the serverless package command, which packages your entire infrastructure into the .serverless directory.
This is where you can see the results of any local variables.
Note that any CloudFormation variables (e.g. Fn::* config) won't have been compiled, as those are handled by CloudFormation at deployment time.
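A quick sketch of that workflow (the compiled template's file name is the one Serverless conventionally writes; treat it as an assumption for your version):

# package locally without deploying
serverless package

# inspect the compiled CloudFormation template with local variables resolved
cat .serverless/cloudformation-template-update-stack.json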
I have a React app running in Node with server-side rendering.
The following environment variable is set to test through Kubernetes in my test environment: process.env.NODE_ENV.
When I run the following two commands they give different results. I expect the value to always be test.
log.debug(process.env.NODE_ENV) // logs development
log.debug(eval('process.env.NODE_ENV')) // logs test
Somehow, it looks like the variable is first interpreted as development (which can happen in my code if it is undefined), but it is interpreted correctly as test by the eval() function.
What can cause Node to interpret the value differently between the two expressions?
EDIT: Added the Kubernetes YAML config below.
The ${} variables are replaced by Azure DevOps during the release process.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: ${KUBERNETES_NAMESPACE}
data:
  NODE_ENV: ${NODE_ENV}
---
kind: Service
apiVersion: v1
metadata:
  name: ${SERVICE_NAME}
spec:
  selector:
    app: ${SERVICE_NAME}
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3000
  loadBalancerIP: ${IP_NUMBER}
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ${SERVICE_NAME}
  labels:
    app: ${SERVICE_NAME}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ${SERVICE_NAME}
  template:
    metadata:
      labels:
        app: ${SERVICE_NAME}
    spec:
      containers:
        - name: ${SERVICE_NAME}
          image: ${IMAGE_PATH}/${IMAGE_REPO}:${BUILD_NUMBER}
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 1024Mi
          envFrom:
            - configMapRef:
                name: config
      imagePullSecrets:
        - name: ${IMAGEPULLSECRETNAME}
I seem to have found the cause of the issue.
We use webpack for bundling (which I maybe should have mentioned), and in the server code webpack outputs, I can see that it has resolved process.env.NODE_ENV to a static value, but it doesn't do the same for eval('process.env.NODE_ENV').
It seems my post was unnecessary, but I hope it might help someone in the future.
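For future readers, the usual mechanism behind this is webpack's DefinePlugin (configured implicitly by webpack's mode option), which performs a compile-time text substitution of the literal token process.env.NODE_ENV. A sketch of what the bundle ends up containing, assuming it was built with mode: 'development':

// original source:
log.debug(process.env.NODE_ENV);
log.debug(eval('process.env.NODE_ENV'));

// what webpack emits (sketch): the bare token is replaced at build time,
// while the string inside eval() is untouched and resolved at runtime
log.debug("development");
log.debug(eval('process.env.NODE_ENV')); // reads the real env var: "test"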