When I run my .sh script I get "syntax error: unexpected end of file".
#!/usr/bin/env bash

# Make sure the port is not already bound
if ss -lnt | grep -q :"$SERVER_PORT"; then
  echo "Another process is already listening on port $SERVER_PORT"
  exit 1
fi

RETRY_INTERVAL=${RETRY_INTERVAL:-0.2}

if ! systemctl is-active --quiet elasticsearch.service; then
  sudo systemctl start elasticsearch.service
  # Wait until Elasticsearch is ready to respond
  until curl --silent "$ELASTICSEARCH_HOSTNAME":"$ELASTICSEARCH_PORT" -w "" -o /dev/null; do
    sleep "$RETRY_INTERVAL"
  done
fi

# Run our API server as a background process
npm run serve &
until ss -lnt | grep -q :"$SERVER_PORT"; do
  sleep "$RETRY_INTERVAL"
done

npx cucumber-js spec/cucumber/features --require-module @babel/register --require spec/cucumber/steps

kill -15 0
The actual result when I run my script is "syntax error: unexpected end of file".
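One frequent cause of this exact error (an assumption on my part, not something stated in the question) is Windows-style CRLF line endings: bash then sees stray carriage returns at the end of `fi`/`done` keywords and fails to find the matching block terminators. A quick check-and-fix sketch, using a demo file to stand in for the real script:

```shell
# Write a demo script with CRLF endings to stand in for the real file
printf 'echo hi\r\nexit 0\r\n' > /tmp/demo.sh

# A carriage return before the newline is the telltale sign; this counts
# the lines that end in CR (prints 2 for the demo file)
grep -c "$(printf '\r')" /tmp/demo.sh

# Strip the carriage returns in place (what dos2unix would do)
sed -i 's/\r$//' /tmp/demo.sh
```

After stripping, the script parses and runs normally.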
I have a TypeScript function for AWS Lambda, defined at src/index.ts below. For the purpose of this question, it simply sends a JSON body with a "hello world" statement. While running the Docker container locally and making a file change, the Lambda return value should change accordingly.
import { Context, APIGatewayProxyResult, APIGatewayEvent } from "aws-lambda";

export const handler = async (
  event: APIGatewayEvent,
  context: Context
): Promise<APIGatewayProxyResult> => {
  return {
    statusCode: 200,
    headers: {
      "content-type": "application/json",
    },
    body: JSON.stringify({ text: "hello world" }),
  };
};
My yarn develop command is equal to the following shell command:
esbuild src/index.ts --watch --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=build/index.js
Paired with yarn develop, I use the following Dockerfile.dev to set up my local development environment:
FROM public.ecr.aws/lambda/nodejs:16
RUN npm install --global yarn
WORKDIR /usr/app
RUN yarn install --frozen-lockfile
# where the transpiled index.js is mounted
VOLUME /var/task/build
CMD [ "build/index.handler" ]
I use the following Makefile with a develop target to run my lambda locally.
SHELL := /bin/bash
APPNAME = hello-world
.PHONY: develop
develop:
	@docker build -f Dockerfile.dev -t $(APPNAME):dev .
	@docker run -p 9000:8080 -v $(shell pwd)/build:/var/task/build $(APPNAME):dev & yarn develop
Lastly, this is how I test my lambda output locally:
cfaruki@hello-world[main] curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 102 100 100 100 2 161 3 --:--:-- --:--:-- --:--:-- 166
{
"statusCode": 200,
"headers": {
"content-type": "application/json"
},
"body": "{\"text\":\"hello world\"}"
}
With this context in mind, here is the issue: when I change the response body to anything else, the file change is acknowledged in my terminal output, but the resultant output of my AWS Lambda does not change.
docker run -p 9000:8080 -v /path/to/hello-world/build:/var/task/build hello-world:dev & yarn develop
yarn run v1.22.19
$ esbuild src/index.ts --watch --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=build/index.js
01 Feb 2023 14:54:54,725 [INFO] (rapid) exec '/var/runtime/bootstrap' (cwd=/usr/app, handler=)
[watch] build finished, watching for changes...
01 Feb 2023 14:55:00,290 [INFO] (rapid) extensionsDisabledByLayer(/opt/disable-extensions-jwigqn8j) -> stat /opt/disable-extensions-jwigqn8j: no such file or directory
01 Feb 2023 14:55:00,290 [WARNING] (rapid) Cannot list external agents error=open /opt/extensions: no such file or directory
START RequestId: 5b738544-3713-4b41-86f8-7eca4355d230 Version: $LATEST
END RequestId: 5b738544-3713-4b41-86f8-7eca4355d230
REPORT RequestId: 5b738544-3713-4b41-86f8-7eca4355d230 Init Duration: 1.35 ms Duration: 1143.56 ms Billed Duration: 1144 ms Memory Size: 3008 MB Max Memory Used: 3008 MB
[watch] build started (change: "src/index.ts") // <----- HERE IS THE CHANGE
[watch] build finished
I have validated that any change to my src/index.ts results in a change in build/index.js. I verify this by exec-ing into the Docker container and printing the contents of build/index.js:
docker exec -it <CONTAINER-ID> /bin/bash
cat /var/task/build/index.js
It seems to me that the issue I am encountering is that the AWS Lambda Docker image does not support live reloading of the function on file change. If that is the case, is there a way I can change this behavior without adding another dependency to my development workflow?
Or is my only option to destroy and rebuild my docker image each time I make a file change? I am aware that docker-compose simplifies the process of rebuilding docker images. But I want to minimize the amount of configuration required by my project.
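One low-configuration workaround (a sketch under assumptions, not part of the original setup; the container name and bundle path are hypothetical) is to restart the dev container whenever esbuild rewrites the bundle, which forces the Lambda runtime to load the new handler without rebuilding the image:

```shell
#!/usr/bin/env bash
# Sketch: poll the bundle's mtime and restart the dev container on change.
CONTAINER=${CONTAINER:-hello-world-dev}   # hypothetical container name
BUNDLE=${BUNDLE:-build/index.js}

# True when the file's current mtime differs from a previously recorded one.
bundle_changed() {
  [ "$(stat -c %Y "$1")" != "$2" ]
}

# Guarded so sourcing the file only defines the helper; pass "watch" to run.
if [ "${1:-}" = "watch" ]; then
  last=$(stat -c %Y "$BUNDLE")
  while sleep 1; do
    if bundle_changed "$BUNDLE" "$last"; then
      docker restart "$CONTAINER"
      last=$(stat -c %Y "$BUNDLE")
    fi
  done
fi
```

Run alongside yarn develop (e.g. ./watch-restart.sh watch). A docker restart is heavier than true hot reload, but it avoids rebuilding the image and adds no new dependencies.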
When I run npm run moralis:sync, I run into this error:
my-app@0.1.0 moralis:sync
moralis-admin-cli connect-local-devchain --chain hardhat --moralisSubdomain cxdjddn5lxdh.usemoralis.com --frpcPath ./frp/frpc
Starting connection to Hardhat
exec error: Error: Command failed: "./frp/frpc" http -s cxdjddn5lxdh.usemoralis.com:7000 -t GCqiUSu6wG -l 8545 -d cxdjddn5lxdh.usemoralis.com
'"./frp/frpc"' is not recognized as an internal or external command,
operable program or batch file.
Hello, which version of frp have you installed? Make sure your frpcPath is correct.
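To verify the frpcPath before rerunning the command, a small shell check like this can confirm the binary actually exists and is executable (a sketch; the path is the one from the error above):

```shell
# Fail loudly when a configured binary path is wrong or not executable
check_binary() {
  [ -x "$1" ] || { echo "missing or not executable: $1" >&2; return 1; }
}

# e.g. run `check_binary ./frp/frpc` from the project root before
# invoking moralis-admin-cli again
```

If the check fails, either the download is in the wrong place or the file still needs chmod +x.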
Two related threads:
https://forum.moralis.io/t/unable-to-connect-hardhat-local-node-to-moralis/17154
https://forum.moralis.io/t/which-frpc-file-to-download-to-connect-moralis-server-to-blockchain/17294/15
If you have more questions or issues with this, you can post on the Moralis forum: https://forum.moralis.io
I've got a Node.js server inside Docker:
FROM node:12
WORKDIR /app
COPY package.json /app
RUN yarn install
COPY . /app
EXPOSE 8080
CMD [ "yarn", "start" ]
And then I've got a Dockerfile that is used for converting 3D models:
FROM leon/usd:latest
WORKDIR /usr/src/ufg
# Configuration
ARG UFG_RELEASE="3bf441e0eb5b6cfbe487bbf1e2b42b7447c43d02"
ARG UFG_SRC="/usr/src/ufg"
ARG UFG_INSTALL="/usr/local/ufg"
ENV USD_DIR="/usr/local/usd"
ENV LD_LIBRARY_PATH="${USD_DIR}/lib:${UFG_SRC}/lib"
ENV PATH="${PATH}:${UFG_INSTALL}/bin"
ENV PYTHONPATH="${PYTHONPATH}:${UFG_INSTALL}/python"
# Build + install usd_from_gltf
RUN git init && \
git remote add origin https://github.com/google/usd_from_gltf.git && \
git fetch --depth 1 origin "${UFG_RELEASE}" && \
git checkout FETCH_HEAD && \
python "${UFG_SRC}/tools/ufginstall/ufginstall.py" -v "${UFG_INSTALL}" "${USD_DIR}" && \
cp -r "${UFG_SRC}/tools/ufgbatch" "${UFG_INSTALL}/python" && \
rm -rf "${UFG_SRC}" "${UFG_INSTALL}/build" "${UFG_INSTALL}/src"
RUN mkdir /usr/app
WORKDIR /usr/app
# Start the service
ENTRYPOINT ["usd_from_gltf"]
CMD ["usd_from_gltf"]
The image works like so: when run, the 3D model passed as an argument is converted, and then the container stops.
I want to have my node.js server running all the time, and when there's a request for conversion, the second image converts the file. I don't really care whether the second image runs on request, or runs all the time.
How can I do this?
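One way to sketch this (names and paths are assumptions, not from the question): keep the Node server running all the time, and have it shell out to docker run --rm with the converter image for each conversion request. A helper that builds the command line the server would execute:

```shell
# Build the `docker run` invocation the server would execute per request.
# Image name and mount point are assumptions for illustration.
convert_cmd() {
  local input=$1 output=$2
  printf 'docker run --rm -v %s:/data leon/usd:latest usd_from_gltf /data/%s /data/%s' \
    "$PWD/models" "$input" "$output"
}
```

In Node this command would be invoked via child_process for each request; --rm keeps the converter container short-lived, which matches the "runs on request, then stops" behavior described above.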
I have an application that runs its tests with Jasmine and WebdriverIO which I would like to automate in CircleCI. I'm new to testing in general so I'm not sure what to do.
Here's what I know:
To run the tests, I invoke npm test
A selenium server is required on port 4444 (which I can start with npm start)
The application should be running on port 80 (which I can serve with another npm command)
When the tests complete, I'm returned to the command line, but the other services (on ports 4444 and 80) are still running
Here's what I don't fully understand:
Locally, these require 3 terminals running concurrently; is there a way to do this with CircleCI?
If so, how do I tell when the services on ports 4444 and 80 are ready to test against, or cancel them when the tests are done?
Is my issue with Docker, or CircleCI?
In order to answer your questions clearly, I'm going to refer to each of your commands as follows:
To run tests you run npm test
To run selenium you run npm start selenium
To run your app you run npm start app
Questions/Answers:
Locally, these require 3 terminals running concurrently; is there a way to do this with CircleCI?
Yes. You just need to start the process with background set to true.
For example, to start Selenium in the background you can run the following:
- run:
    name: Start Selenium in background
    command: |
      npm start selenium
    background: true
After starting a process, but before using it, wait for it to be ready on the given port:
- run:
    name: Waiting for Selenium server to be ready
    command: |
      for i in `seq 1 10`; do
        nc -z localhost 4444 && echo Success && exit 0
        echo -n .
        sleep 1
      done
      echo Failed waiting for Selenium && exit 1
Note that if you replace 4444 in the above command, you can wait for a process on another port.
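The loop above can also be wrapped in a reusable function. This variant (a generalization, not part of the original answer) uses bash's built-in /dev/tcp redirection instead of nc, so it works even in images where nc is not installed:

```shell
# Wait until something accepts TCP connections on localhost:$1,
# trying up to $2 times (default 10), one second apart.
wait_for_port() {
  local port=$1 tries=${2:-10} i
  for ((i = 0; i < tries; i++)); do
    # bash opens a TCP connection via the special /dev/tcp path;
    # the subshell exits non-zero when the connect fails
    (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null && return 0
    echo -n .
    sleep 1
  done
  return 1
}

# Example: wait_for_port 4444 10 || { echo "Failed waiting for Selenium"; exit 1; }
```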
How do I tell when the services on ports 4444 and 80 are ready to test against, or cancel them when the tests are done?
Your CircleCi commands might look like this
- run:
    name: Start Selenium in background
    command: |
      npm start selenium
    background: true
- run:
    name: Start App in background
    command: |
      npm start app
    background: true
- run:
    name: Waiting for Selenium server to be ready
    command: |
      for i in `seq 1 10`; do
        nc -z localhost 4444 && echo Success && exit 0
        echo -n .
        sleep 1
      done
      echo Failed waiting for Selenium && exit 1
- run:
    name: Waiting for App server to be ready
    command: |
      for i in `seq 1 10`; do
        nc -z localhost 80 && echo Success && exit 0
        echo -n .
        sleep 1
      done
      echo Failed waiting for App && exit 1
- run:
    name: Run Tests
    command: |
      npm test
You asked a separate question: how do I cancel the processes on ports 4444 and 80 when the tests are done? You don't really need to. When the test job finishes, the container is disposed of and the helper apps stop with it.
However, if you want to stop those processes in order to run some other job steps, you can run kill commands (I can elaborate if this is unclear)
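For completeness, one simple way to do that (a sketch, not CircleCI-specific): record the PID when starting each helper in the background, then send it SIGTERM from a later step:

```shell
# Start a helper in the background and remember its PID
# (`sleep 300` is a stand-in for e.g. `npm start selenium`)
sleep 300 & echo $! > /tmp/helper.pid

# ...later, in a cleanup step, once the tests are done:
kill -TERM "$(cat /tmp/helper.pid)"
```

The same pattern works for the app server on port 80: one pid file per helper.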
Is my issue with Docker, or CircleCI?
It looks like it's just an issue in understanding how to run a series of commands in CircleCI.
If you follow the steps above you should be able to accomplish your goal.
Context
I have always run my Karma tests locally in PhantomJS, Google Chrome, and Firefox without any problems. Currently, I'm looking to run the Karma tests in Docker, and I have been having problems running them in Firefox inside a Docker container, although the same container can run the Karma tests in Google Chrome without any problems.
Problem
I created a Docker container that contains Google Chrome, Firefox, JS libraries (node, npm, grunt, etc.), and VNC utilities (Xvfb, x11vnc). I started the VNC server and ran the tests. Firefox started and the socket was created with a unique ID. When I entered a VNC session, I could see that Firefox had started, the URL was loaded into the URL bar, and the Karma page was loaded. However, after about 2 seconds, the webpage would freeze and Firefox would hang. Therefore I could not see the LOG: 'INFO[2015-10-16 20:19:15]: Router Started' message either.
Interesting Find while Reproducing this Manually
I've tried commenting out the lines that start Firefox, so that running the Karma tests only starts the Karma server. I then tried to run the tests with the following 2 methods:
Start a Bash session through docker exec -it <container_tag>, execute firefox, and type the server URL with the corresponding ID of the test run. Firefox didn't hang in this case and proceeded to start the test run.
Start a Bash session through docker exec -it <container_tag> and execute firefox <server_url_with_corresponding_id>. Firefox didn't hang in this case either and proceeded to start the test run.
My Dockerfile
FROM ubuntu:14.04
#========================
# Environment Variables for Configuration
#========================
ENV GEOMETRY 1920x1080x24
ENV DISPLAY :0
#========================
# Install Required Packages
#========================
RUN apt-get update -qq && apt-get install -qqy \
    wget \
    firefox \
    xvfb \
    x11vnc \
    nodejs \
    npm
#========================
# Install Google Chrome (Latest Stable Version)
#========================
RUN \
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
echo "deb http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list && \
apt-get update -qq && \
apt-get install -qqy google-chrome-stable
#========================
# Clean up Apt
#========================
RUN \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
#========================
# Setup VNC Server
#========================
RUN \
mkdir -p ~/.vnc && \
x11vnc -storepasswd 1234 ~/.vnc/passwd
#========================
# Symlink NodeJS
#========================
RUN ln -s /usr/bin/nodejs /usr/bin/node
#========================
# Install Grunt and Grunt-CLI
#========================
RUN \
npm install -g grunt && \
npm install -g grunt-cli
#========================
# Setup Entry Point
#========================
COPY entry_point.sh /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/entry_point.sh
ENTRYPOINT ["/opt/bin/entry_point.sh"]
I believe that this is a problem relating to the karma-firefox-launcher or karma main library. If anyone can give me some pointers and ideas, that would be great!
I have already submitted PR to karma-firefox-launcher https://github.com/karma-runner/karma-firefox-launcher/pull/45.
This is just for others who might have run into this.
Firefox has an issue with having its profile folder on VirtualBox shared folders (see https://bugzilla.mozilla.org/show_bug.cgi?id=801274), which are used with a Docker setup. The trick is to specify a profile folder outside of the shared folder, like so:
in karma.conf.js:
browsers: [ 'FirefoxDocker' ],
browserNoActivityTimeout: 30000, // < might be necessary for slow machines
customLaunchers: {
  FirefoxDocker: {
    base: 'Firefox',
    profile: '/tmp/firefox' // < location is up to you but make sure folder exists
  }
},
Remember to update karma-firefox-launcher to v0.1.7 to make this work.
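Since neither Karma nor the launcher creates that profile directory for you, make sure it exists before the test run, e.g. as part of the container's entry point (the path is the one from the snippet above):

```shell
# Create the Firefox profile directory outside the shared folder;
# -p makes this a no-op if it already exists
mkdir -p /tmp/firefox
```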