VS Code dev containers from a git repo - JavaScript

I keep getting errors from VS Code when trying to set up containers as a dev environment. I'm running Ubuntu 22 Linux with the latest VS Code.
Here is what I have done so far.
I cloned my git repo locally.
I added a Dockerfile:
# Use an official Node.js image as the base image
FROM node:18.13.0
# Set the working directory in the image
WORKDIR /app
# Copy the package.json and package-lock.json files from the host to the image
COPY package.json package-lock.json ./
# Install the dependencies from the package.json file
RUN npm ci
# Copy the rest of the application code from the host to the image
COPY . .
# Build the Next.js application
RUN npm run build
# Specify the command to run when the container starts
CMD [ "npm", "start" ]
This is a basic Next.js (latest) app with nothing but Tailwind added.
Then I build the image:
docker build -t filename .
Then I run a container from the image:
docker run -p 3000:3000 -d containerName
Then I go to VS Code and select:
Dev Containers: Open Folder in Container
VS Code then gives this message: Command failed:
/usr/share/code/code --ms-enable-electron-run-as-node /home/ellsium/.vscode/extensions/ms-vscode-remote.remote-containers-0.275.0/dist/spec-node/devContainersSpecCLI.js up --user-data-folder /home/ellsium/.config/Code/User/globalStorage/ms-vscode-remote.remote-containers/data --container-session-data-folder tmp/devcontainers-b4794c92-ea56-497d-9059-03ea0ea3cb4a1675620049507 --workspace-folder /srv/http/Waldo --workspace-mount-consistency cached --id-label devcontainer.local_folder=/srv/http/Waldo --id-label devcontainer.config_file=/srv/http/Waldo/.devcontainer/devcontainer.json --log-level debug --log-format json --config /srv/http/Waldo/.devcontainer/devcontainer.json --default-user-env-probe loginInteractiveShell --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
From what I understand, VS Code needs to see a running Docker image? Then it jumps inside and I can use the environment? Can this image be running on the host or over SSH? I only want to run on the host. I hope the method above is correct?
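For reference, the Dev Containers extension normally builds and starts the container itself from .devcontainer/devcontainer.json; no docker build or docker run beforehand is required. A minimal sketch of such a config pointing at the Dockerfile above (the name and the forwarded port are assumptions based on the question, not the asker's actual file):
// .devcontainer/devcontainer.json (JSONC, comments allowed)
{
  "name": "Waldo",
  "build": {
    // paths are resolved relative to this devcontainer.json
    "dockerfile": "../Dockerfile",
    "context": ".."
  },
  // expose the Next.js port used in the docker run above
  "forwardPorts": [3000]
}
With a file like this in place, Dev Containers: Open Folder in Container builds the image and attaches on its own.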

Related

Disabling GPU Acceleration in Cypress

I'm running Cypress in a Docker container in Jenkins.
This is my Dockerfile:
# Base image taken from: https://github.com/cypress-io/cypress-docker-images
FROM cypress/browsers:node14.17.0-chrome91-ff89
# Create the folder where our project will be stored
RUN mkdir /my-cypress-project
# Make it our working directory
WORKDIR /my-cypress-project
# Copy the essential files that we MUST use to run our scripts
COPY ./package.json .
COPY ./cypress/tsconfig.json .
COPY ./cypress.config.ts .
COPY ./cypress ./cypress
RUN pwd
RUN ls
# Install the Cypress dependencies in the working directory
RUN npm install
RUN npm audit fix
RUN npx cypress verify
RUN apt-get install -y xvfb
RUN google-chrome --disable-gpu --no-sandbox --headless
# Executable command the container will run [exec form]
ENTRYPOINT ["npx", "cypress", "run"]
# With CMD in this case, we can pass extra parameters to the ENTRYPOINT
CMD [""]
I'm building it like this:
docker build -t my-cypress-image:1.1.0 .
and running like this:
docker run -v '$PWD':/my-cypress-project -t my-cypress-image:1.1.0 --spec cypress/e2e/pom/homeSauce.spec.js --headless --browser chrome --config-file=/my-cypress-project/cypress.config.ts
and I get this error in the console:
libva error: va_getDriverName() failed with unknown libva error,driver_name=(null)
[218:0822/100658.356057:ERROR:gpu_memory_buffer_support_x11.cc(44)] dri3 extension not supported.
Could not find a Cypress configuration file.
We looked but did not find a cypress.config.ts file in this folder: /my-cypress-project
Now as far as I know, this is due to the browser running with GPU acceleration... how do I disable that?
I tried pasting this in my index.js file:
// cypress/plugins/index.js
module.exports = (on, config) => {
on('before:browser:launch', (browser = {}, launchOptions) => {
console.log(launchOptions.args)
if (browser.name == 'chrome') {
launchOptions.args.push('--disable-gpu')
}
return launchOptions
})
}
but I still get the exact same error...
Any help would be appreciated!
Cheers
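Worth noting: the project has a cypress.config.ts, which implies Cypress 10+, and on those versions cypress/plugins/index.js is no longer read automatically; the equivalent hook lives in setupNodeEvents inside the config file. A minimal sketch of the same --disable-gpu logic, assuming Cypress 10+ with the e2e testing type:
// cypress.config.ts
import { defineConfig } from 'cypress'

export default defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      on('before:browser:launch', (browser, launchOptions) => {
        if (browser.name === 'chrome') {
          // disable GPU acceleration inside the container
          launchOptions.args.push('--disable-gpu')
        }
        return launchOptions
      })
    },
  },
})
If the plugins file is silently ignored on this Cypress version, that would explain why pushing the flag there changes nothing.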

Why do we install dependencies during Docker's final "CMD" command, in development?

I'm working through a book about bootstrapping microservices, and the author provides the following Dockerfile, which is meant to be used in development.**
FROM node:12.18.1-alpine
WORKDIR /usr/src/app
COPY package*.json .
CMD npm config set cache-min 999999 && \
    npm install && \
    npm run start:dev
The CMD command here is obviously somewhat unusual. The rationale provided is as follows: By doing the npm install when the container starts, we can "make use of npm caching so it's much faster to install at container startup than if we installed it during the build process."
What is going on behind the scenes here with the CMD command? How is this different from having a RUN command that installs the dependencies prior to the CMD command? And relatedly, why do we need to set a cache-min policy?
**The source files are not copied over here because they are included in a mounted volume.
EDIT: Here is the docker compose file as well
version: '3'
services:
  history:
    image: history
    build:
      context: ./history
      dockerfile: Dockerfile-dev
    container_name: history
    volumes:
      - /tmp/history/npm-cache:/root/.npm:z
      - ./history/src:/usr/src/app/src/:z
    ports:
      - '4002:80'
    environment:
      - PORT=80
      - NODE_ENV=development
    restart: 'no'
...
When you develop, you often change which packages are included in the project. By doing it this way, you don't need to build a new image when that happens: you can just stop and start the container and it will install the new packages.
I am a little surprised by the copying of package*.json, though. I would have assumed it would be passed into the image using a volume, like you say the source code is. It can still be done like that, and maybe it is; we would need to see your docker run command to know.
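Two details tie this together. The compose file mounts /tmp/history/npm-cache over /root/.npm, so npm's download cache survives container restarts, and cache-min 999999 tells npm to treat anything in that cache as fresh (in npm 5+, a very large cache-min effectively behaves like --prefer-offline), so the startup npm install resolves mostly from the cache instead of the registry. For contrast, a minimal sketch of the conventional build-time alternative the question asks about, where a RUN step bakes dependencies into an image layer (fast container startup, but every dependency change forces an image rebuild):
FROM node:12.18.1-alpine
WORKDIR /usr/src/app
COPY package*.json .
# Installed once at build time; this layer is cached until
# package*.json changes, at which point the image must be rebuilt.
RUN npm install
CMD npm run start:dev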

Dockerfile, switch between dev / prod

I'm new to Docker; I've done their tutorial and some other things on the web, but that's all, so I guess I'm doing this in a very wrong way.
I have spent a day looking for a way to publish a Dockerfile that will launch either npm run dev or npm start, depending on whether the environment is dev or prod.
Playground
What I've got so far:
# Specify the node base image version such as node:<version>
FROM node:10
# Define environment variable; can be overridden by running docker run with -e "NODE_ENV=prod"
ENV NODE_ENV dev
# Set the working directory to /usr/src/app
WORKDIR /usr/src/app
# Install nodemon for hot reload
RUN npm install -g nodemon
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install && \
    npm cache clean --force
# Set the port used by the app
EXPOSE 8080
# Bundle app source
COPY . .
# Launch the app
CMD [ "nodemon", "server.js" ]
From what I've seen on the web, people tend to use bash for that kind of operation, or mount a volume in docker-compose; however, that seems like a lot of verbosity for just an if/else condition inside a Dockerfile.
Goal
Without using any other file (to keep things simple).
What I'm looking for is something like:
if [ "$NODE_ENV" = "dev" ]; then
CMD ["nodemon", "server.js"] // dev env
else
CMD ["node", "server.js"] // prod env
fi
Maybe I'm wrong; any good advice on how to do such a thing in Docker would be welcome.
Also, note that I'm not sure how to enable hot reload in my container when modifying a file on my host. I guess it's all about volumes, but again I'm not sure how to do it.
Sadly there is no way to express this logic in Dockerfile syntax; it all has to happen in the entrypoint. To avoid adding extra files, you can implement the logic as a one-liner (note that exec-form CMD requires double quotes in the JSON array, so the inner shell quotes are escaped):
ENTRYPOINT ["/bin/bash"]
CMD ["-c", "if [ \"$NODE_ENV\" = \"dev\" ]; then nodemon server.js; else node server.js; fi"]
Alternatively, you can use ENTRYPOINT or CMD to execute a bash script inside the container as the first command:
ENTRYPOINT ["your/script.sh"]
CMD ["your/script.sh"]
and do your thing inside the script!
You don't even need to pass the env variable in, since the script can access it directly.
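For illustration, a minimal sketch of such a script (the name docker-entrypoint.sh is hypothetical; it would be copied into the image, made executable, and referenced from ENTRYPOINT):
#!/bin/sh
# docker-entrypoint.sh: choose the start command from NODE_ENV
if [ "$NODE_ENV" = "dev" ]; then
  exec nodemon server.js
else
  exec node server.js
fi
Using exec replaces the shell with the node process, so signals like docker stop's SIGTERM reach the app directly.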

Running nuxt js application in Docker

I'm trying to run a Nuxt application in a Docker container. In order to do so, I created the following Dockerfile:
FROM node:6.10.2
RUN mkdir -p /app
EXPOSE 3000
COPY . /app
WORKDIR /app
RUN npm install
RUN npm run build
CMD [ "npm", "start" ]
However, when I build the image and run the container (docker run -p 3000:3000 <image-id>) I get nothing while hitting localhost:3000 in my browser. What could be the cause?
By default, the application inside the Docker container accepts network traffic only on http://127.0.0.1:3000. That interface does not accept external traffic, so it is no wonder it does not work. To make it work, we need to set the HOST environment variable for the Nuxt app to 0.0.0.0 (all IP addresses). We can do this either in the Dockerfile, like this:
FROM node:6.10.2
ENV HOST 0.0.0.0
# rest of the file
or in package.json in the script's "start" command:
"scripts": { "start": "HOST=0.0.0.0 nuxt start" ...}
Or any other way that makes the Nuxt application listen on something other than localhost-only inside the container.
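For example, a sketch using the server block in nuxt.config.js (assuming a Nuxt version that supports it, i.e. Nuxt 2+):
// nuxt.config.js
module.exports = {
  server: {
    host: '0.0.0.0', // listen on all interfaces, not just the container's loopback
    port: 3000
  }
}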

Jenkins in docker, workspace location

I have a Docker container running Jenkins. I can successfully build my program, but I couldn't find out where it is stored.
At the end of my build I make a zip file of the JavaScript project (the dist directory) and store it at /var/jenkins_home/canopy.zip, using this script:
npm install
npm install -g bower
npm install -g grunt-cli
bower install
grunt build
zip /var/jenkins_home/canopy.zip /var/jenkins_home/workspace/Canopy/dist
The build is successful, and in the Jenkins UI I can successfully see the workspace; however, when I try to find it at the path /var/jenkins_home, the directory is empty.
I would like to know the location of the workspace so I can easily get my zip file back.
I finally found it.
The directory is in the _data folder of the Docker volume on the host, at the following path:
/var/lib/docker/volumes/dockerID/_data/workspace
It can also be found at this path on the host where Jenkins is running:
/var/lib/docker/volumes/jenkins_home/_data/workspace
For example:
[root@localmav workspace]# ls -lr firstjob/
total 0
Here firstjob is the Jenkins job.
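Two host-side commands can also help locate or retrieve the file (a sketch; the container name jenkins is an assumption):
# Print where the named volume lives on the host
docker volume inspect jenkins_home --format '{{ .Mountpoint }}'
# Or copy the archive straight out of the running container
docker cp jenkins:/var/jenkins_home/canopy.zip .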
