I have a TypeScript function for AWS Lambda, defined at src/index.ts and shown below. For the purposes of this question, it simply returns a JSON body with a "hello world" message. The goal is that, while the Docker container is running locally, a file change should be reflected in the Lambda's return value.
import { Context, APIGatewayProxyResult, APIGatewayEvent } from "aws-lambda";

export const handler = async (
  event: APIGatewayEvent,
  context: Context
): Promise<APIGatewayProxyResult> => {
  return {
    statusCode: 200,
    headers: {
      "content-type": "application/json",
    },
    body: JSON.stringify({ text: "hello world" })
  };
};
My yarn develop command is equivalent to the following shell command:
esbuild src/index.ts --watch --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=build/index.js
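(For reference, this maps onto package.json roughly as follows; other fields omitted:)

{
  "scripts": {
    "develop": "esbuild src/index.ts --watch --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=build/index.js"
  }
}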
Paired with yarn develop, I use the following Dockerfile.dev to set up my local development environment:
FROM public.ecr.aws/lambda/nodejs:16
RUN npm install --global yarn
WORKDIR /usr/app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# where the transpiled index.js is mounted
VOLUME /var/task/build
CMD [ "build/index.handler" ]
I use the following Makefile with a develop target to run my lambda locally.
SHELL := /bin/bash
APPNAME = hello-world
.PHONY: develop
develop:
	docker build -f Dockerfile.dev -t $(APPNAME):dev .
	docker run -p 9000:8080 -v $(shell pwd)/build:/var/task/build $(APPNAME):dev & yarn develop
Lastly, this is how I test my lambda output locally:
cfaruki#hello-world[main] curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 102 100 100 100 2 161 3 --:--:-- --:--:-- --:--:-- 166
{
"statusCode": 200,
"headers": {
"content-type": "application/json"
},
"body": "{\"text\":\"hello world\"}"
}
With this context in mind, the issue arises when I make a file change. If I change the response body to anything else, the change is acknowledged in my terminal output, but the resulting output of my AWS Lambda does not change.
docker run -p 9000:8080 -v /path/to/hello-world/build:/var/task/build hello-world:dev & yarn develop
yarn run v1.22.19
$ esbuild src/index.ts --watch --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=build/index.js
01 Feb 2023 14:54:54,725 [INFO] (rapid) exec '/var/runtime/bootstrap' (cwd=/usr/app, handler=)
[watch] build finished, watching for changes...
01 Feb 2023 14:55:00,290 [INFO] (rapid) extensionsDisabledByLayer(/opt/disable-extensions-jwigqn8j) -> stat /opt/disable-extensions-jwigqn8j: no such file or directory
01 Feb 2023 14:55:00,290 [WARNING] (rapid) Cannot list external agents error=open /opt/extensions: no such file or directory
START RequestId: 5b738544-3713-4b41-86f8-7eca4355d230 Version: $LATEST
END RequestId: 5b738544-3713-4b41-86f8-7eca4355d230
REPORT RequestId: 5b738544-3713-4b41-86f8-7eca4355d230 Init Duration: 1.35 ms Duration: 1143.56 ms Billed Duration: 1144 ms Memory Size: 3008 MB Max Memory Used: 3008 MB
[watch] build started (change: "src/index.ts") // <----- HERE IS THE CHANGE
[watch] build finished
I have validated that any change to src/index.ts results in a change to build/index.js. I validate this by exec-ing into the Docker container and printing the contents of build/index.js:
docker exec -it <CONTAINER-ID> /bin/bash
cat /var/task/build/index.js
It seems to me that the AWS Lambda Docker image does not support live reloading of the function on file change. If that is the case, is there a way I can change this behavior without adding another dependency to my development workflow?
Or is my only option to destroy and rebuild my Docker image each time I make a file change? I am aware that docker-compose simplifies the process of rebuilding Docker images, but I want to minimize the amount of configuration my project requires.
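For reference, one workaround I have been considering (sketched below, untested) is to drive esbuild from a small script and restart the container after each successful rebuild, on the theory that the runtime emulator loads the handler only once when the bootstrap process starts. This assumes esbuild >= 0.17's context/watch API and a container started with a known name; hello-world-dev below is hypothetical:

// watch.ts -- a sketch, not a confirmed fix: rebuild with esbuild, then
// restart the local Lambda container so that the emulator re-imports
// build/index.js on the next invocation.
// Assumes the container was started with: docker run --name hello-world-dev -p 9000:8080 ...
import { context, type Plugin } from "esbuild";
import { execSync } from "node:child_process";

const CONTAINER = "hello-world-dev"; // hypothetical container name

const restartLambda: Plugin = {
  name: "restart-lambda",
  setup(build) {
    // onEnd fires after every rebuild in watch mode.
    build.onEnd((result) => {
      if (result.errors.length === 0) {
        execSync(`docker restart ${CONTAINER}`, { stdio: "inherit" });
      }
    });
  },
};

async function main() {
  const ctx = await context({
    entryPoints: ["src/index.ts"],
    bundle: true,
    minify: true,
    sourcemap: true,
    platform: "node",
    target: "es2020",
    outfile: "build/index.js",
    plugins: [restartLambda],
  });
  await ctx.watch(); // keeps running until interrupted
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

The trade-off is that every change pays a container restart, but it avoids adding a new dependency beyond what is already installed.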
Related
I've got a nodejs server inside docker:
FROM node:12
WORKDIR /app
COPY package.json /app
RUN yarn install
COPY . /app
EXPOSE 8080
CMD [ "yarn", "start" ]
And then I've got a Dockerfile that is used for converting 3D models:
FROM leon/usd:latest
WORKDIR /usr/src/ufg
# Configuration
ARG UFG_RELEASE="3bf441e0eb5b6cfbe487bbf1e2b42b7447c43d02"
ARG UFG_SRC="/usr/src/ufg"
ARG UFG_INSTALL="/usr/local/ufg"
ENV USD_DIR="/usr/local/usd"
ENV LD_LIBRARY_PATH="${USD_DIR}/lib:${UFG_SRC}/lib"
ENV PATH="${PATH}:${UFG_INSTALL}/bin"
ENV PYTHONPATH="${PYTHONPATH}:${UFG_INSTALL}/python"
# Build + install usd_from_gltf
RUN git init && \
git remote add origin https://github.com/google/usd_from_gltf.git && \
git fetch --depth 1 origin "${UFG_RELEASE}" && \
git checkout FETCH_HEAD && \
python "${UFG_SRC}/tools/ufginstall/ufginstall.py" -v "${UFG_INSTALL}" "${USD_DIR}" && \
cp -r "${UFG_SRC}/tools/ufgbatch" "${UFG_INSTALL}/python" && \
rm -rf "${UFG_SRC}" "${UFG_INSTALL}/build" "${UFG_INSTALL}/src"
RUN mkdir /usr/app
WORKDIR /usr/app
# Start the service
ENTRYPOINT ["usd_from_gltf"]
CMD ["usd_from_gltf"]
The image works like this: when the container is run, the 3D model passed as an argument is converted, and then the container exits.
I want my Node.js server running all the time, and when there's a request for conversion, the second image should convert the file. I don't really care whether the second container runs on request or runs all the time.
How can I do this?
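One direction that might work (a sketch under assumptions, not a confirmed recipe) is to have the Node server shell out to docker run for each conversion request; if the server itself runs in a container, it would also need access to the Docker daemon, e.g. by mounting /var/run/docker.sock. The image tag usd-converter, the models directory, and the fixed file names below are all illustrative:

// convert-server.ts -- sketch: spawn the converter image per request.
import { execFile } from "node:child_process";
import * as http from "node:http";

// Run the converter container on one file; resolves when the container exits.
function convert(input: string, output: string): Promise<void> {
  return new Promise((resolve, reject) => {
    execFile(
      "docker",
      [
        "run", "--rm",
        // Share a host directory so both sides see the files (illustrative path).
        "-v", `${process.cwd()}/models:/usr/app`,
        "usd-converter:latest", // hypothetical tag for the Dockerfile above
        input, output,          // arguments passed to the usd_from_gltf entrypoint
      ],
      (err) => (err ? reject(err) : resolve())
    );
  });
}

http.createServer((req, res) => {
  // Illustrative fixed file names; a real server would parse the request.
  convert("model.gltf", "model.usdz")
    .then(() => { res.writeHead(200); res.end("converted\n"); })
    .catch((err) => { res.writeHead(500); res.end(String(err) + "\n"); });
}).listen(8080);

Since the converter container exits after each run, starting it per request is fine; the alternative is keeping a long-running worker container and feeding it jobs through a queue.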
Context
I have always been running my Karma tests locally in PhantomJS, Google Chrome, and Firefox without any problems. Currently, I'm looking to run the Karma tests in Docker: the container runs the tests in Google Chrome without any problems, but I have been having problems running them in Firefox.
Problem
I created a Docker container that contains Google Chrome, Firefox, JS libraries (node, npm, grunt, etc.), and VNC utilities (Xvfb, x11vnc). I started the VNC server and ran the tests. Firefox started and the socket was created with a unique ID. When I entered a VNC session, I could see that Firefox had started, the URL was loaded into the URL bar, and the Karma page was loaded. However, after about 2 seconds, the webpage would freeze and Firefox would hang. As a result, I never saw the LOG: 'INFO [2015-10-16 20:19:15]: Router Started' message either.
Interesting Find while Reproducing this Manually
I've tried commenting out the lines that start Firefox, so that only the Karma server starts when I run the tests. I then tried to run the tests with the following two methods:
Start a Bash session through docker exec -it <container_tag> /bin/bash, run firefox, and type the server URL with the corresponding ID of the test run. Firefox didn't hang in this case and proceeded to start the test run.
Start a Bash session through docker exec -it <container_tag> /bin/bash and run firefox <server_url_with_corresponding_id>. Firefox didn't hang in this case either and proceeded to start the test run.
My Dockerfile
FROM ubuntu:14.04
#========================
# Environment Variables for Configuration
#========================
ENV GEOMETRY 1920x1080x24
ENV DISPLAY :0
#========================
# Install Required Packages
#========================
RUN apt-get update -qq && apt-get install -qqy \
    wget \
    firefox \
    xvfb \
    x11vnc \
    nodejs \
    npm
#========================
# Install Google Chrome (Latest Stable Version)
#========================
RUN \
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
echo "deb http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list && \
apt-get update -qq && \
apt-get install -qqy google-chrome-stable
#========================
# Clean up Apt
#========================
RUN \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
#========================
# Setup VNC Server
#========================
RUN \
mkdir -p ~/.vnc && \
x11vnc -storepasswd 1234 ~/.vnc/passwd
#========================
# Symlink NodeJS
#========================
RUN ln -s /usr/bin/nodejs /usr/bin/node
#========================
# Install Grunt and Grunt-CLI
#========================
RUN \
npm install -g grunt && \
npm install -g grunt-cli
#========================
# Setup Entry Point
#========================
COPY entry_point.sh /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/entry_point.sh
ENTRYPOINT ["/opt/bin/entry_point.sh"]
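entry_point.sh itself is not shown; a minimal sketch of what such a script typically does in this setup, with every detail assumed rather than copied from the real file, would be:

#!/bin/bash
# Sketch of entry_point.sh (the actual script may differ).
# Start the virtual framebuffer on $DISPLAY with the configured $GEOMETRY.
Xvfb $DISPLAY -screen 0 $GEOMETRY &
# Expose the display over VNC, using the password stored via x11vnc -storepasswd.
x11vnc -display $DISPLAY -usepw -forever &
# Hand control to whatever command the container was given.
exec "$@"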
I believe that this is a problem related to karma-firefox-launcher or the main karma library. If anyone can give me some pointers and ideas, that would be great!
I have already submitted a PR to karma-firefox-launcher: https://github.com/karma-runner/karma-firefox-launcher/pull/45.
This is just for others who might have run into this.
Firefox has an issue with keeping its profile folder on VirtualBox shared folders (see https://bugzilla.mozilla.org/show_bug.cgi?id=801274), which are commonly used with Docker setups. The trick is to specify a profile folder outside of the shared folder, like so:
in karma.conf.js:
browsers: ['FirefoxDocker'],
browserNoActivityTimeout: 30000, // might be necessary for slow machines
customLaunchers: {
  FirefoxDocker: {
    base: 'Firefox',
    profile: '/tmp/firefox' // location is up to you, but make sure the folder exists
  }
},
Remember to update karma-firefox-launcher to v0.1.7 to make this work.
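For example, to pin that version as a dev dependency (assuming npm):

npm install --save-dev karma-firefox-launcher@0.1.7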
Two Node.js scripts are being managed by forever. The system is running forever v0.11.1 and node v0.10.29.
# forever list
info: Forever processes running
data: uid command script forever pid logfile uptime
data: [0] D34J userdown app/main.js 7441 10950 /root/.forever/D34J.log 0:2:31:45.572
data: [1] P0BX userdown app/main.js 11242 11261 /root/.forever/P0BX.log 0:2:20:22.157
# forever logs 0
error: undefined
# forever logs 1
error: undefined
Question: Why are the log files created by forever missing? Restarting the two processes still doesn't create any log files.
The directory /root/.forever does not contain the log files either:
# ls -la /root/.forever
total 20
drwxr-xr-x 4 root root 4096 Jul 4 11:37 .
drwx------ 8 root root 4096 Jul 10 13:24 ..
-rw-r--r-- 1 root root 259 Jul 10 19:34 config.json
drwxr-xr-x 2 root root 4096 Jul 4 11:37 pids
drwxr-xr-x 2 root root 4096 Jul 10 17:12 sock
If you start your Node process with forever your_script.js and don't specify a log file, forever will write your logs to the terminal (or cmd on Windows). The log file shown when you run forever list or forever logs does not reflect reality, since it's never created in that scenario. But if you specify log files with the following options:
-l LOGFILE Logs the forever output to LOGFILE
-o OUTFILE Logs stdout from child script to OUTFILE
-e ERRFILE Logs stderr from child script to ERRFILE
(for example, forever -l console.log -e error.log your_script.js), they will be created.
If you want forever to automatically create a log file for you, you have to start your script as a daemon, with forever start your_script.js. In this case, you can also specify your log files.
The docs page lists all the available command-line options.
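Putting it together, a minimal daemon example (the log paths are illustrative):

forever start -l /var/log/myapp/forever.log -o /var/log/myapp/out.log -e /var/log/myapp/err.log app/main.js
forever logs 0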
To answer Shreejibawa (since I can't comment yet)...
forever is very sensitive to the order of the arguments. Take a look at their documentation and notice that the options must come before the script.
forever [action] [options] SCRIPT [script-options]
Instead of: forever start bin/www -e logs/error.log -l logs/logs.log
Try: forever start -e /path/to/logs/error.log -l /path/to/logs/logs.log your_script.js
I had the same problem on OS X. On OS X (at least), the command forever app.js starts the forever process in the foreground and doesn't write the log to a file, even when -l and -e are provided. In this case, forever list lists the file even though it isn't there, and forever logs i produces error: undefined.
When the forever daemon is started using forever start app.js, the log files do appear and can be viewed with forever logs i.
I have a DigitalOcean droplet that I am trying to deploy the most basic of Meteor apps to, but the deployment fails. Any idea why this is happening?
UPDATE: added entire output
Anderss-iMac:microscope-deploy anderskitson$ mup deploy
Meteor-UP : Production Quality Meteor Deployments
--------------------------------------------------
Bundling Started: /Users/anderskitson/sites/microscope
Started TaskList: Deploying App
[bray.anderskitson.ca] uploading bundle
[bray.anderskitson.ca] uploading bundle: SUCCESS
[bray.anderskitson.ca] setting up env vars
[bray.anderskitson.ca] setting up env vars: SUCCESS
[bray.anderskitson.ca] invoking deployment process
[bray.anderskitson.ca] invoking deployment process: FAILED
-----------------------------------STDERR-----------------------------------
Warning: Permanently added 'bray.anderskitson.ca,162.243.52.235' (RSA) to the list of known hosts.
npm WARN package.json http-proxy#1.0.0 No repository field.
npm http GET https://registry.npmjs.org/fibers
npm http 304 https://registry.npmjs.org/fibers
stop: Unknown instance:
bash: line 46: wait-for-mongo: command not found
-----------------------------------STDOUT-----------------------------------
> fibers#1.0.1 install /opt/meteor/tmp/bundle/programs/server/node_modules/fibers
> node ./build.js
`linux-x64-v8-3.14` exists; testing
Binary is fine; exiting
fibers#1.0.1 node_modules/fibers
meteor start/running, process 10373
wait for mongo(5 minutes) to initiaze
----------------------------------------------------------------------------
Completed TaskList: Deploying App
I ran into the same problem and figured out that this command wasn't actually executed successfully:
sudo npm install -g forever userdown wait-for-mongo
I ran it manually, after which wait-for-mongo was a valid command.
See if that helps you too.
I'm trying to build a simple "Twitter" style short messaging app in Node.js which uses Redis as the database (although I've heard that MongoDB might be easier)...
I have found a few links that point me in the direction of https://github.com/mranney/node_redis so I set up a new Node.js project using Brunch and ran the following in my project directory as instructed:
npm install redis hiredis
I then added the following from the auth.js example to vendor/script.js
var redis = require("redis"),
client = redis.createClient();
However, when I run brunch w -s I get the following error in the console:
Uncaught Error: Cannot find module "redis"
I'm assuming that it's something to do with modules not being included in my project, but I'm not really sure where to start. I added
"redis": "latest"
to my package.json file but that doesn't appear to do anything.
I also tried to install the redis module globally by running
sudo npm install -g redis
But still no luck.
I should also add that I have redis-server installed on OS X, and I can run it in the terminal:
$ redis-server
[2221] 17 Aug 10:48:42 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
[2221] 17 Aug 10:48:42 * Server started, Redis version 2.4.13
[2221] 17 Aug 10:48:42 * The server is now ready to accept connections on port 6379
[2221] 17 Aug 10:48:42 - 0 clients connected (0 slaves), 922304 bytes in use
[2221] 17 Aug 10:48:47 - 0 clients connected (0 slaves), 922304 bytes in use
My application directory is a standard Brunch install:
app
config.coffee
generators
node_modules
package.json
public
README.md
test
vendor
What am I doing wrong?
Brunch is an HTML5 application assembler, not Node.js; you can't require Node modules there.
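In other words, vendor/script.js is bundled for the browser, where the redis module cannot run. The usual split is to keep Redis access in a separate Node process and let the Brunch-built front end talk to it over HTTP. A minimal sketch (file name, key, and port are all illustrative):

// server.js -- runs under plain Node, NOT bundled by Brunch.
var http = require("http"),
    redis = require("redis"),
    client = redis.createClient(); // connects to localhost:6379 by default

http.createServer(function (req, res) {
  // Illustrative: read one value and return it as JSON to the browser app.
  client.get("latest_message", function (err, value) {
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify({ message: value }));
  });
}).listen(3333);

The front end would then fetch from that endpoint instead of requiring redis directly.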