Can we cache yarn globals in github actions - javascript

I have some global packages, such as the Serverless Framework and ESLint. I've implemented the GitHub Actions cache for Yarn. Below is my code.
- uses: actions/cache@v1
  id: yarn-cache # use this to check for `cache-hit` (`steps.yarn-cache.outputs.cache-hit != 'true'`)
  with:
    path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
    key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
    restore-keys: |
      ${{ runner.os }}-yarn-
- name: Adding serverless globally
  run: yarn global add serverless
- name: Yarn Install
  if: steps.yarn-cache.outputs.cache-hit != 'true'
  run: |
    echo "cache hit failed"
    yarn install
  env:
    CI: false
But my global packages are not cached. Is there any way to cache Yarn globals?

I'm pasting the full workflow file for the solution.
name: global-test
on:
  push:
    branches:
      - dev
  pull_request:
    branches:
      - dev
jobs:
  aws-deployment:
    runs-on: ubuntu-latest
    steps:
      - name: CHECKOUT ACTION
        uses: actions/checkout@v2
      - name: NODE SETUP ACTION
        uses: actions/setup-node@v1
        with:
          node-version: '12.x'
      - name: Get yarn cache directory path
        id: yarn-cache-dir-path
        run: |
          echo "::set-output name=dir::$(yarn cache dir)"
      - name: Set yarn global bin path
        run: |
          yarn config set prefix $(yarn cache dir)
      - name: Add yarn bin path to system path
        run: |
          echo $(yarn global bin) >> $GITHUB_PATH
      - name: Set yarn global installation path
        run: |
          yarn config set global-folder $(yarn cache dir)
      - name: CACHE ACTION
        uses: actions/cache@v2
        env:
          cache-version: v1
        id: yarn-cache
        with:
          path: |
            ${{ steps.yarn-cache-dir-path.outputs.dir }}
            **/node_modules
          key: ${{ runner.os }}-yarn-${{ env.cache-version }}-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-${{ env.cache-version }}-
            ${{ runner.os }}-yarn-
            ${{ runner.os }}-
      - name: Installing dependencies
        if: steps.yarn-cache.outputs.cache-hit != 'true'
        run: |
          echo "YARN CACHE CHANGED"
          yarn install
      - name: Adding serverless globally
        if: steps.yarn-cache.outputs.cache-hit != 'true'
        run: |
          echo "NO CACHE HIT"
          yarn global add serverless
I named the steps so they're easy to follow.
UPDATED the answer on 2020-12-06
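If you want to confirm that the cached globals are actually restored on a warm run, a minimal verification step can be added after the cache action. This step is my addition, not part of the original workflow:
- name: Verify cached globals
  run: |
    # serverless should resolve from the yarn global bin dir added to $GITHUB_PATH above
    which serverless
    serverless --version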

Related

Lerna publish gets stuck after first package

Current Behavior
When I run lerna publish, it gets stuck after packaging the first package. This is where it gets stuck:
lerna WARN ENOLICENSE One way to fix this is to add a LICENSE.md file to the root of this repository.
lerna WARN ENOLICENSE See https://choosealicense.com for additional guidance.
(#########⠂⠂⠂⠂⠂⠂⠂⠂⠂) ⠏ publish: verb packed packages/react
This line in particular:
(#########⠂⠂⠂⠂⠂⠂⠂⠂⠂) ⠏ publish: verb packed packages/react
Expected Behavior
It should publish to NPM successfully
Failure Logs / Configuration
lerna.json
{
  "packages": [
    "packages/*",
    "docs"
  ],
  "version": "independent",
  "stream": true,
  "hoist": true,
  "command": {
    "bootstrap": {
      "npmClientArgs": ["--no-package-lock"]
    },
    "publish": {
      "ignoreChanges": ["**/stories/**", "**/tests/**"]
    }
  }
}
Environment
System:
  OS: macOS 12.3
  CPU: (8) arm64 Apple M1 Pro
Binaries:
  Node: 14.21.2 - ~/.nvm/versions/node/v14.21.2/bin/node
  npm: 6.14.17 - ~/.nvm/versions/node/v14.21.2/bin/npm
Utilities:
  Git: 2.32.1 - /usr/bin/git
npmPackages:
  lerna: ^5.6.2 => 5.6.2
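A hedged debugging sketch (my suggestion, using lerna's standard --loglevel option): re-running the publish with maximum verbosity can show which sub-step hangs after packing:
npx lerna publish --loglevel silly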

Yarn install error: NX Cannot read properties of undefined (reading 'endsWith')

I am trying to set up CircleCI in an Nx workspace for a React app.
On the step where yarn install is executed, I get the following error:
error /home/circleci/project/node_modules/@nrwl/js/node_modules/nx,
/home/circleci/project/node_modules/@nrwl/remix/node_modules/nx:
Command failed.
Exit code: 1
Command: node ./bin/init
Arguments:
Directory: /home/circleci/project/node_modules/@nrwl/js/node_modules/nx
Output:
NX Cannot read properties of undefined (reading 'endsWith')
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
Exited with code exit status 1
This is my CircleCI config:
version: 2.1
orbs:
  nx: nrwl/nx@1.4.0
jobs:
  agent:
    resource_class: xlarge
    docker:
      - image: cimg/node:lts-browsers
    parameters:
      ordinal:
        type: integer
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: |
            yarn install
      - run:
          name: Start the agent << parameters.ordinal >>
          command: yarn nx-cloud start-agent
          no_output_timeout: 60m
  main:
    resource_class: xlarge
    docker:
      - image: cimg/node:lts-browsers
    environment:
      NX_CLOUD_DISTRIBUTED_EXECUTION: 'true'
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: |
            yarn install
      - nx/set-shas:
          main-branch-name: 'main'
      - run:
          name: Initialize the Nx Cloud distributed CI run
          command: yarn nx-cloud start-ci-run
      - run:
          name: Run workspace lint
          command: yarn nx-cloud record -- yarn nx workspace-lint
      - run:
          name: Check format
          command: yarn nx-cloud record -- yarn nx format:check --base=$NX_BASE --head=$NX_HEAD
      - run:
          name: Run lint
          command: yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=lint --parallel=3
      - run:
          name: Run test
          command: yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=test --parallel=3 --ci --code-coverage
      - run:
          name: Run build
          command: yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=build --parallel=3
      - run:
          name: Stop all agents
          command: yarn nx-cloud stop-all-agents
          when: always
workflows:
  version: 2
  ci:
    jobs:
      - agent:
          name: Nx Cloud Agent << matrix.ordinal >>
          matrix:
            parameters:
              ordinal: [1, 2, 3]
      - main:
          name: Nx Cloud Main
Has anyone had a similar problem?
I encounter this issue a lot.
Just remove the .cache/nx folder in your node_modules and re-run your command.
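For example, a sketch of that fix from the repo root (assuming the default Nx cache location inside node_modules):
rm -rf node_modules/.cache/nx
yarn install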

How to Deploy from Gitlab-ci to multiple kubernetes namespaces?

I have two variables containing my namespace names:
$KUBE_NAMESPACE_DEV = "stellacenter-dev"
$KUBE_NAMESPACE_STAGE = "stellacenter-stage-uat"
Now I want to modify the following .gitlab-ci.yaml configuration to include the namespace logic:
deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f ./provider-service.yml
  only:
    - developer
provider-service.yml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-app
  namespace: "stellacenter-dev" # or "stellacenter-stage-uat"
  labels:
    app: provider-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider-app
  template:
    metadata:
      labels:
        app: provider-app
    spec:
      containers:
        - name: provider-app
          image: registry.gitlab.com/stella-center/backend-services/provider-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8092
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: "stellacenter-dev" # or "stellacenter-stage-uat"
spec:
  type: NodePort
  selector:
    app: provider-app
  ports:
    - port: 8092
      targetPort: 8092
I don't know how to integrate the variables and values correctly. I'm facing an error when I run the pipeline. Kindly help me sort it out.
You can remove the namespace: NAMESPACE field from the manifest and apply the resource to a namespace from the command line:
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_STAGE}
In your script, this only changes the line after the KUBECONFIG export:
- export KUBECONFIG=$HOME/.kube/config
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
Using sed, you can substitute the respective variable into the YAML file:
sed -i "s,NAMESPACE,$KUBE_NAMESPACE_DEV," provider-service.yml
Inside that YAML file, keep something like:
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: NAMESPACE
spec:
  type: NodePort
You can keep one variable instead of two for namespace management; using sed, you set the namespace in the YAML and then apply that YAML.
Inside your repo the file acts as a template: when CI runs, NAMESPACE is replaced by the sed command and the resulting YAML is applied to Kubernetes. Accordingly, you can keep other fields as placeholders too and replace them with sed as needed, for example:
apiVersion: v1
kind: Service
metadata:
  name: SERVICE_NAME
  namespace: NAMESPACE
spec:
  type: SERVICE_TYPE
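Putting the pieces together, a deploy job using this template approach might look like the following sketch (the stage and image are copied from the question's deploy_dev job; the job name and wiring are illustrative, not from the answer):
deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  script:
    # render the namespace into the template, then apply it
    - sed -i "s,NAMESPACE,$KUBE_NAMESPACE_STAGE," provider-service.yml
    - kubectl apply -f ./provider-service.yml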

Lerna publish workflow not publishing as expected

For a smoother CI experience, I have made a GitHub Actions workflow to publish the monorepo packages with a prerelease version every time any member opens a PR against master with the particular label 'publish'. This workflow should ideally publish all the packages changed since the last publish, with a preid -pr{pr#}, e.g. package-pr1049.0. I have also added a dist-tag and pre-dist-tag here.
Background:
Before publishing the packages, I also run a Make target (make -j init) to clean and bootstrap all the packages. After this, the workflow fetches the repo, checks out the required branch, and runs the publish command with the PR number parameter.
There are 2 problems I am facing in this workflow:
1. It publishes all the packages on the first commit of the PR.
To debug the issue, I added a logger to check that the workflow sees the correct record of the last 10 commits, which it does. From the second commit onwards, only the changed packages are published, which is as expected. Refer to the log:
lerna notice cli v3.20.2
lerna info versioning independent
lerna info ci enabled
lerna info Assuming all packages changed
lerna info getChangelogConfig Successfully resolved preset "conventional-changelog-angular"
Changes:
 - @swiggy-private/package-1: 1.0.4 => 1.1.0-pr19000.0
 - @swiggy-private/package-2: 1.1.4 => 1.2.0-pr19000.0
 - @swiggy-private/package-3: 2.41.2 => 2.42.0-pr19000.0 (private)
2. It always updates the patch version as 0 and increments the preid (PR number with a -pr prefix): PACKAGEv0-pr{##}.0, PACKAGEv0-pr{##}.1, ....
To speed up debugging, I have limited the monorepo packages in lerna.json to only 3 packages.
My GH workflow
name: Branch Publish
on:
  pull_request:
    types: [opened, synchronize, reopened, labeled]
    branches:
      - master
jobs:
  check:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    outputs:
      author: ${{ steps.step1.outputs.author }}
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - id: "step1"
        run: |
          AUTHOR_NAME=$(git show ${{ github.event.pull_request.head.sha }} | grep Author)
          echo "::set-output name=author::$AUTHOR_NAME"
  init:
    if: "!contains(needs.check.outputs.author, 'GitHub Action Branch') && !contains(github.event.head_commit.message, '[skip ci]')"
    runs-on: ubuntu-latest
    timeout-minutes: 15
    needs: [check]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: "12.x"
      - run: git fetch --prune --unshallow
      - run: |
          make -j init
        env:
          NPM_TOKEN: ${{ secrets.GH_TOKEN }}
      - uses: actions/cache@v1
        id: cache-build
        with:
          path: "."
          key: ${{ github.sha }}
  release:
    if: "contains(github.event.pull_request.labels.*.name, 'publish')"
    runs-on: ubuntu-latest
    timeout-minutes: 15
    needs: [init]
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: "0"
      - uses: actions/setup-node@v1
        with:
          node-version: "12.x"
      - uses: actions/cache@v1
        id: restore-build
        with:
          path: "."
          key: ${{ github.sha }}
      - name: Setup Git
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.GHA_DEPLOY_KEY }}
      - name: Lerna Publish
        if: success()
        env:
          GH_TOKEN: ${{ secrets.GH_TOKEN }}
          NODE_ENV: production
        run: |
          git config user.email "action@github.com"
          git config user.name "GitHub Action Branch"
          git remote set-url origin "git@github.com:${{ github.repository }}"
          git fetch --depth=1 origin +refs/tags/*:refs/tags/*
          git checkout -- .
          git log --pretty=oneline -n 10
          git checkout --track origin/$(echo $GITHUB_HEAD_REF | cut -d'/' -f 3)
          NUMBER=${{ github.event.number }} npm run publish-branch
      - name: Possible Package lock update
        if: success()
        run: |
          git config user.email "action@github.com"
          git config user.name "GitHub Action Branch"
          git remote set-url origin "git@github.com:${{ github.repository }}"
          npx lerna clean -y
          npx lerna exec -- npm i --package-lock-only --ignore-scripts --no-audit
          echo `git add . && git commit -m "chore: package lock update" --no-verify && git push`
Publish command
"publish-branch": "lerna publish --conventional-prerelease --exact --no-changelog --preid pr$NUMBER --dist-tag beta --pre-dist-tag beta --no-verify-access --yes"
Lerna.json
{
  "packages": ["*"],
  "version": "independent",
  "command": {
    "publish": {
      "npmClient": "npm",
      "graphType": "all",
      "allowBranch": ["master", "integration", "*"],
      "conventionalCommits": true,
      "message": "chore(release): publish",
      "includeMergedTags": true,
      "ignoreChanges": ["**/__tests__/**", "**/*.md"]
    }
  }
}
Make Script to bootstrap packages
init: clean-all
	$(MAKE) create-npmrc-all
	npm ci
	npm run bootstrap:ci
	NODE_ENV=production npm run prepare:all

create-npmrc-all:
	echo $(GITHUB_SCOPE_REGISTRY) >> .npmrc
	echo $(GITHUB_REGISTRY_TOKEN) >> .npmrc
	$(foreach source, $(DIRECTORY), $(call pass-to-npmrc, $(source), $(GITHUB_SCOPE_REGISTRY)))
	$(foreach source, $(DIRECTORY), $(call pass-to-npmrc, $(source), $(GITHUB_REGISTRY_TOKEN)))

clean-all:
	rm -rf node_modules
	$(foreach source, $(SOURCES), \
		$(call clean-source-all, $(source)))
	rm -rf .npmrc
	rm -rf packages/*/.npmrc
	rm -rf coverage
	rm -rf packages/*/npm-debug*
Instead of creating the .npmrc file from a script, you can just provide that data directly in the pipeline; these two lines work even with the predefined GitHub token. I'm not sure which environment you are using, but I think you'll get the point.
name: Setting Up NPM
run: |
  npm set @organization:registry=https://npm.pkg.github.com/organization
  npm set "//npm.pkg.github.com/:_authToken=${{ secrets.GITHUB_TOKEN }}"

Currently trying to get my smoketest.js to run on a different URL depending on parameters provided on Azure dev ops (TestCafe)

I'm currently running my automation from a pipeline; please see the YAML file below:
jobs:
  - job: master
    pool:
      vmImage: ubuntu-latest
    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: '10.14'
        displayName: 'Install Node.js'
      - script: npm install
        displayName: 'Install TestCafe'
      - script: npm test
        displayName: 'Run Tests'
      - task: PublishTestResults@2
        inputs:
          testResultsFiles: 'report.xml'
          testResultsFormat: 'JUnit'
This works well. The problem I have is that I would like to be able to make the URL dynamic, based on variables entered into Azure DevOps.
The updated YAML file is below:
trigger:
  - master
parameters:
  - name: env
    type: string
    default: testing
    values:
      - testing
      - bdev
      - fdev
  - name: person
    type: string
    default: uat
    values:
      - bs
      - nk
      - uat
      - mc
      - rm
      - pe
      - mv
      - mm
variables:
  webapp: 'Test-rt5-${{ parameters.env }}-app-${{ parameters.person }}'
stages:
  - stage: 'Build'
    displayName: 'Build ${{ parameters.env }}-${{ parameters.person }}'
    jobs:
      - job: master
        pool:
          vmImage: ubuntu-latest
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '10.14'
            displayName: 'Install Node.js'
          - script: npm install
            displayName: 'Install TestCafe'
          - script: npm test
            displayName: 'Run Tests'
          - task: PublishTestResults@2
            inputs:
              testResultsFiles: 'report.xml'
              testResultsFormat: 'JUnit'
How do I use these variables to form the URL for each test/fixture?
Currently I am using:
const URL = 'https://test-rt5-bdev-app-rm.com/';
fixture("SmokeFixture")
    .page(URL);
The Azure docs about defining variables state the following:
Notice that variables are also made available to scripts through environment variables.
So you can use your variable to build the URL dynamically just by accessing the corresponding environment variable in the TestCafe test:
fixture("SmokeFixture")
    .page(process.env.WEBAPP);
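One caveat (my reading, not from the answer): the webapp variable above holds an app name like Test-rt5-bdev-app-rm rather than a full URL, so you may want to map a ready-made URL onto the environment explicitly. A sketch, assuming the site really is reachable at https://<webapp>.com/ (the env: mapping and the URL shape are assumptions):
- script: npm test
  displayName: 'Run Tests'
  env:
    # explicit mapping; pipeline variables are also auto-exported as env vars (webapp -> WEBAPP)
    WEBAPP: 'https://$(webapp).com/'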
