I have two variables containing my namespace names:
$KUBE_NAMESPACE_DEV = "stellacenter-dev"
$KUBE_NAMESPACE_STAGE = "stellacenter-stage-uat"
Now I want to modify the following .gitlab-ci.yml configuration to include the namespace logic:
deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f ./provider-service.yml
  only:
    - developer
provider-service.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-app
  namespace: "stellacenter-dev" or "stellacenter-stage-uat"
  labels:
    app: provider-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider-app
  template:
    metadata:
      labels:
        app: provider-app
    spec:
      containers:
        - name: provider-app
          image: registry.gitlab.com/stella-center/backend-services/provider-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8092
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: "stellacenter-dev" or "stellacenter-stage-uat"
spec:
  type: NodePort
  selector:
    app: provider-app
  ports:
    - port: 8092
      targetPort: 8092
I don't know how to integrate the variables and values correctly. I'm getting an error when I run the pipeline. Kindly help me sort it out.
You can remove the namespace field from the manifest and apply the resource to a namespace from the command line:
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_STAGE}
In your CI job, the line to change is the apply command just below the export line:
- export KUBECONFIG=$HOME/.kube/config
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
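For completeness, a minimal sketch of how the deploy_dev job could look after this change, assuming KUBE_NAMESPACE_DEV is defined as a GitLab CI/CD variable (the before_script with the aws configure lines is unchanged and omitted here):

deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    # the namespace now comes from the CI/CD variable instead of the manifest
    - kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
  only:
    - developer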
Alternatively, using sed you can substitute the respective variable into the YAML file:
sed -i "s,NAMESPACE,${KUBE_NAMESPACE_DEV}," provider-service.yml
Inside that YAML file, keep it something like:
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: NAMESPACE
spec:
  type: NodePort
You can keep one variable instead of two for namespace management; using sed, you set the namespace in the YAML and then apply that YAML.
Inside your repo the file acts as a template: when CI runs, NAMESPACE is replaced by the sed command and the YAML is applied to Kubernetes. In the same way, you can keep other fields as placeholders and replace them with sed as needed, as in the template below.
apiVersion: v1
kind: Service
metadata:
  name: SERVICE_NAME
  namespace: NAMESPACE
spec:
  type: SERVICE_TYPE
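A minimal sketch of the corresponding CI script lines (the SERVICE_NAME and SERVICE_TYPE values here are illustrative, not taken from the question):

script:
  # fill each placeholder in the template before applying it
  - sed -i "s,NAMESPACE,${KUBE_NAMESPACE_DEV}," provider-service.yml
  - sed -i "s,SERVICE_NAME,provider-service," provider-service.yml
  - sed -i "s,SERVICE_TYPE,NodePort," provider-service.yml
  - kubectl apply -f ./provider-service.yml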
When deploying lambdas with serverless, the following error occurs:
The CloudFormation template is invalid: Template format error: Output ServerlessDeploymentBucketName is malformed. The Name field of every Export member must be specified and consist only of alphanumeric characters, colons, or hyphens.
I don't understand what the problem is.
Serverless config file:
service: lambdas-${opt:region}
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 512
  timeout: 10
  lambdaHashingVersion: 20201221
  region: ${opt:region}
  stackName: lambdas-${opt:region}
  logRetentionInDays: 14
  deploymentBucket:
    name: lambdas-${opt:region}

plugins:
  - serverless-deployment-bucket

functions:
  function1:
    handler: function1/index.handler
    name: function1-${opt:stage}
    description: This function should call specific API on Backend server
    events:
      - schedule: cron(0 0 * * ? *)
    environment:
      ENV: ${opt:stage}
  function2:
    handler: function2/index.handler
    name: function2-${opt:stage}
    description: Function should be triggered by invocation from backend.
    environment:
      ENV: ${opt:stage}
I ran into this same problem.
In serverless.yml I had the service named lambda_function; I changed it to lambdaFunction.
The error was resolved and it deployed correctly.
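In other words, a sketch of the change (the service names are from my case):

# before: the underscore ends up in the generated export name,
# which CloudFormation rejects
service: lambda_function

# after: alphanumeric only
service: lambdaFunction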
Most likely your stage name contains an illegal character. Serverless auto-generates the export name for your S3 deployment bucket based on your stage name. If you look at the generated template file, you will see the full export, which will look something like the following:
"ServerlessDeploymentBucketName": {
"Value": "api-deployment",
"Export": {
"Name": "sls-api_stage-ServerlessDeploymentBucketName"
}
}
The way around this (assuming you don't want to change your stage name) is to explicitly set the output by adding something like the following to your serverless config (in this case the illegal character was the underscore):
resources: {
  Outputs: {
    ServerlessDeploymentBucketName: {
      Export: {
        Name: `sls-${stageKey.replace('api_', 'api-')}-ServerlessDeploymentBucketName`
      }
    }
  }
}
Unfortunately this has to be done for every export, so the better option is to update your stage name so that it contains no illegal characters.
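If you do keep the workaround and your config is plain serverless.yml rather than JavaScript, a hedged sketch of the same override (the corrected name is hard-coded here, since YAML has no replace(); serverless merges this with the generated output, which already carries the Value):

resources:
  Outputs:
    ServerlessDeploymentBucketName:
      Export:
        # assumption: the stage is "api_stage"; the underscore is fixed by hand
        Name: sls-api-stage-ServerlessDeploymentBucketName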
Currently running my automation from a pipeline; please see the YAML file below:
jobs:
  - job: master
    pool:
      vmImage: ubuntu-latest
    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: '10.14'
        displayName: 'Install Node.js'
      - script: npm install
        displayName: 'Install TestCafe'
      - script: npm test
        displayName: 'Run Tests'
      - task: PublishTestResults@2
        inputs:
          testResultsFiles: 'report.xml'
          testResultsFormat: 'JUnit'
This works well. The problem I have is that I would love to be able to make the URL dynamic, based on variables entered into Azure DevOps.
The updated YAML file is below:
trigger:
  - master

parameters:
  - name: env
    type: string
    default: testing
    values:
      - testing
      - bdev
      - fdev
  - name: person
    type: string
    default: uat
    values:
      - bs
      - nk
      - uat
      - mc
      - rm
      - pe
      - mv
      - mm

variables:
  webapp: 'Test-rt5-${{ parameters.env }}-app-${{ parameters.person }}'

stages:
  - stage: 'Build'
    displayName: 'Build ${{ parameters.env }}-${{ parameters.person }}'
    jobs:
      - job: master
        pool:
          vmImage: ubuntu-latest
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '10.14'
            displayName: 'Install Node.js'
          - script: npm install
            displayName: 'Install TestCafe'
          - script: npm test
            displayName: 'Run Tests'
          - task: PublishTestResults@2
            inputs:
              testResultsFiles: 'report.xml'
              testResultsFormat: 'JUnit'
How do I use these variables to form my URL for each test/fixture?
Currently I am using:
const URL = 'https://test-rt5-bdev-app-rm.com/';

fixture("SmokeFixture")
    .page(URL);
Azure Docs about defining variables state the following:
Notice that variables are also made available to scripts through environment variables.
So, you can use your variable with a dynamically created URL just by accessing the corresponding environment variable in the TestCafe test:
fixture("SmokeFixture")
.page(process.env.WEBAPP)
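Note that Azure Pipelines uppercases variable names when exposing them as environment variables, which is why webapp arrives as WEBAPP. If the variable does not show up automatically, it can also be mapped explicitly on the test step, as in this sketch against the pipeline above:

- script: npm test
  displayName: 'Run Tests'
  env:
    WEBAPP: $(webapp)  # pipeline variable passed into the script's environment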
Consider this simple example:
service: my-service
frameworkVersion: ">=1.38.0 <2.0.0"

plugins:
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-cf-vars
  - serverless-parameters
  - serverless-scriptable-plugin
  - serverless-s3-deploy

provider:
  name: aws
  region: us-east-1

custom:
  myVariable: "some var value"
  assets:
    auto: true
    targets:
      - bucket: ${self:custom.myVariable}
        prefix: ${self:custom.myVariable}/
        acl: private
        files:
          - source: my file
            glob: "*"
The problem here is that when serverless generates a JSON CloudFormation template and uploads it to CloudFormation, I cannot see the actual values that ended up in bucket: ${self:custom.myVariable}.
Is there a way to output the serverless template with the variables already resolved?
You can use the serverless package command, which packages your entire infrastructure into the .serverless directory.
That is where you can see the results of any local variables.
Note that any CloudFormation variables (e.g. Fn::* config) won't have been compiled, as those are handled by CloudFormation at deployment time.
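For example (the exact template file name may differ by framework version):

# package without deploying; the compiled template lands in .serverless/
serverless package

# inspect the template with the serverless variables already resolved
cat .serverless/cloudformation-template-update-stack.json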
Seeing this error message when I'm trying to run Intern tests from within my test files directory. The (relevant) structure of the directory is:
test/
    resources/
    rest/
        pickup.js
        cashManagement.js
    .gitignore
    intern.js
    packages.js
    packages.sample.js
    ...
The intern.js contains references to pickup.js and cashManagement.js in the suites section. I played a bit with the way these two files are referenced and got a "Failed to load module..." error, so I guess that part is solved and the ReferenceError is the one I need to figure out. I did go over the existing threads and nothing seems to solve my issue. The full error message is pasted below. I'll gladly supply more relevant info if needed.
Thanks,
Eran
ReferenceError: document is not defined
at /Applications/dojo-release-1.10.2/dojo/has.js:31:33
at execModule (/Users/eranbrand/src/MobilePosSolution/ovc-build/ovc-repo/src/test/node_modules/intern/node_modules/dojo/dojo.js:515:54)
at /Users/eranbrand/src/MobilePosSolution/ovc-build/ovc-repo/src/test/node_modules/intern/node_modules/dojo/dojo.js:504:12
at Array.map (native)
at execModule (/Users/eranbrand/src/MobilePosSolution/ovc-build/ovc-repo/src/test/node_modules/intern/node_modules/dojo/dojo.js:499:17)
at /Users/eranbrand/src/MobilePosSolution/ovc-build/ovc-repo/src/test/node_modules/intern/node_modules/dojo/dojo.js:582:7
at guardCheckComplete (/Users/eranbrand/src/MobilePosSolution/ovc-build/ovc-repo/src/test/node_modules/intern/node_modules/dojo/dojo.js:566:4)
at checkComplete (/Users/eranbrand/src/MobilePosSolution/ovc-build/ovc-repo/src/test/node_modules/intern/node_modules/dojo/dojo.js:574:27)
at onLoadCallback (/Users/eranbrand/src/MobilePosSolution/ovc-build/ovc-repo/src/test/node_modules/intern/node_modules/dojo/dojo.js:656:7)
at /Users/eranbrand/src/MobilePosSolution/ovc-build/ovc-repo/src/test/node_modules/intern/node_modules/dojo/dojo.js:761:5
Here's the content of the intern.js file:
// Learn more about configuring this file at <https://github.com/theintern/intern/wiki/Configuring-Intern>.
// These default settings work OK for most people. The options that *must* be changed below are the
// packages, suites, excludeInstrumentation, and (if you want functional tests) functionalSuites.

serviceURL = "http://ovc.local:8080/POSMClient/json/process/execute/";

define(['./packages'], function(Packages) {
    var returnValue = {
        // Configuration options for the module loader; any AMD configuration options supported by the specified AMD loader
        // can be used here
        loader: {
            packages: Packages.packages
        },

        // The port on which the instrumenting proxy will listen
        proxyPort: 9000,

        // A fully qualified URL to the Intern proxy
        proxyUrl: 'http://localhost:9000/',

        // Default desired capabilities for all environments. Individual capabilities can be overridden by any of the
        // specified browser environments in the `environments` array below as well. See
        // https://code.google.com/p/selenium/wiki/DesiredCapabilities for standard Selenium capabilities and
        // https://saucelabs.com/docs/additional-config#desired-capabilities for Sauce Labs capabilities.
        // Note that the `build` capability will be filled in with the current commit ID from the Travis CI environment
        // automatically
        capabilities: {
            'selenium-version': '2.39.0'
        },

        // Browsers to run integration testing against. Note that version numbers must be strings if used with Sauce
        // OnDemand. Options that will be permutated are browserName, version, platform, and platformVersion; any other
        // capabilities options specified for an environment will be copied as-is
        environments: [/*
            { browserName: 'internet explorer', version: '11', platform: 'Windows 8.1' },
            { browserName: 'internet explorer', version: '10', platform: 'Windows 8' },
            { browserName: 'internet explorer', version: '9', platform: 'Windows 7' },
            { browserName: 'firefox', version: '27', platform: [ 'OS X 10.6', 'Windows 7', 'Linux' ] },
            { browserName: 'chrome', version: '32', platform: [ 'OS X 10.6', 'Windows 7', 'Linux' ] },
            { browserName: 'safari', version: '6', platform: 'OS X 10.8' },
            { browserName: 'safari', version: '7', platform: 'OS X 10.9' }*/
        ],

        // Maximum number of simultaneous integration tests that should be executed on the remote WebDriver service
        maxConcurrency: 3,

        // Whether or not to start Sauce Connect before running tests
        useSauceConnect: false,

        // Connection information for the remote WebDriver service. If using Sauce Labs, keep your username and password
        // in the SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables unless you are sure you will NEVER be
        // publishing this configuration file somewhere
        webdriver: {
            host: 'localhost',
            port: 4444
        },

        // The desired AMD loader to use when running unit tests (client.html/client.js). Omit to use the default Dojo
        // loader
        useLoader: {
            'host-node': 'dojo/dojo',
            'host-browser': 'node_modules/dojo/dojo.js'
        },

        // Non-functional test suite(s) to run in each browser
        suites: [
            "rest/pickup",
            "rest/cashManagement"
        ],

        // Functional test suite(s) to run in each browser once non-functional tests are completed
        functionalSuites: [ /* 'myPackage/tests/functional' */ ],

        // A regular expression matching URLs to files that should not be included in code coverage analysis
        excludeInstrumentation: /^tests\//
    };

    return returnValue;
});
It looks like you're using the release build of Dojo, which assumes the document object will be available. To use that build you'll need to run Intern with intern-runner or the browser client (client.html). If you'd prefer to run your tests with the Node client (intern-client), you'll need to use the source distribution of Dojo or the dojo npm package.
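A sketch of the two invocations, assuming the config file is test/intern.js and an Intern version that ships the intern-runner/intern-client binaries (2.x/3.x):

# WebDriver mode, where a real browser provides the document object
node_modules/.bin/intern-runner config=test/intern

# Node-only mode: first switch to the npm build of Dojo
npm install dojo
node_modules/.bin/intern-client config=test/intern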