docker production code build - javascript

When creating a production build with Docker, what strategy do people use for compiling and bundling the code?
Outside the Docker world, I would create a build (using some sort of npm command) that produces a dist folder (no source code, just uglified and compressed JavaScript, for example), and then I point a web server at that dist folder.
In the Docker world, where would you build the code: inside the Docker image, or on the host OS, copying only the dist folder into the image? Basically, I do not want the whole node_modules folder and all the source files in the Docker image/container.
Any idea how to achieve this?
Thanks
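For reference, the non-Docker flow being described usually boils down to something like this (the script name and bundler are examples only, not taken from the question):
# package.json (relevant part)
"scripts": {
  "build": "webpack --mode production"
}
# then, on the build machine:
npm run build
# point nginx (or any other web server) at the resulting dist/ folder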

It sounds like you're worried about two different, valid problems:
Ensuring an isolated/reproducible production environment.
Ensuring an isolated/reproducible build environment (for building #1).
You can achieve both with the approach you suggested - have the build steps run as part of your Dockerfile. But this has the disadvantages that you mention - you're left with all of your source/development artifacts at runtime, unless you take explicit steps to remove them all.
Docker introduced multi-stage builds to somewhat alleviate this issue - you run the build in one stage and COPY only the resulting artifacts into a clean final stage. But you still have to be explicit about what does and doesn't end up in that final image.
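A minimal sketch of what that looks like for a JavaScript build (the image tags, the build script and the dist/ output path are assumptions, not something stated in the question):
# build stage: install dependencies and produce the bundled output
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# runtime stage: only the built assets, no node_modules or source files
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html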
So in my experience, the most common solution is indeed to build your artifact externally, and then COPY it into your production image.
That solves problem #1, but not #2. So go one step further - build your Docker image inside a Docker container! CI platforms are increasingly supporting this approach as a first-class concept - see e.g. Circle CI's Docker executor.
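One low-tech way to do this without a CI platform is to run docker build itself from a container that has the Docker CLI and talks to the host daemon (a sketch only; the docker:cli image and the paths are assumptions, and be aware that mounting the Docker socket gives that container full control of the daemon):
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/src -w /src \
  docker:cli docker build -t myapp:latest .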

In Docker, in general, you want to create an image of your software that contains everything necessary for your application to run on any machine. This is what Docker is for: bundling the application and its dependencies into a single artifact that can run anywhere Docker is installed.
A self-sufficient image is very handy when you use an orchestrator like Docker Swarm. The orchestrator can run the container on any machine that is part of the network (i.e. the swarm) by pulling the image and starting a container. If neither the image nor the host contains everything the application needs, the container fails.
There are cases where you need to change files inside the container easily while it runs, for example in development. In that case you mount a local directory from the host into the container (see volumes); you then modify the files on the host and the changes are immediately visible inside the container. Even in this case, your image should contain the files needed to run the application on another machine; the volume just shadows them in the development environment.
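In the development case that typically looks something like this (the image name and paths are examples only):
# the host's ./src shadows the copy of src/ already baked into the image
docker run --rm -it -v "$PWD/src":/usr/src/app/src myapp:dev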

Related

How to create a common code module for use with cloud functions or docker

I have a typescript monorepo with front end, cloud functions, and back end (docker) packages in it. I'd like to add a "common" package that can be used by all of those.
Some production environments like Google Cloud Functions and Docker (at least in one common usage) package up the current app's directory and send it to a remote server, where it does "yarn install" to install all the dependencies. This means the obvious ways of referencing the common package (import * as common from "../../common" and the like) don't work because the build server won't find common there.
Also, sadly, docker and GCF don't follow symlinks, so the yarn link: protocol isn't helpful.
And since I'm working on the common code just as much as the apps, I really don't want to publish it to a private NPM registry every time I make a change. I'd like that change process to be as seamless as possible.
The most seamless way is to just symlink common into each app's node_modules, but (a) yarn deletes those, and (b) the builders don't follow symlinks. So I assume I'll need some kind of build/push process when changing the common code.
It seems like a fairly common thing to want to abstract common code out into its own module that can be used anywhere in the monorepo, but I can't find a solution that works. (For extra points I'd like "jump to definition" in my IDE to go to the actual definition, not a local copy.) Any ideas? Can Yarn 2 help with this use case?
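To make the constraint concrete, the two styles of import in question look roughly like this (the package name @myorg/common is hypothetical):
// works locally, but breaks on the remote build server, because ../../common
// is never uploaded alongside the app's own directory:
import * as common from "../../common";
// what would survive a remote "yarn install": a real package dependency
import * as common from "@myorg/common";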

Copy file into build directory prior to running Docker build script in Elastic Beanstalk?

I'm trying to set up an SSH key on my Docker container, which I'm deploying to Elastic Beanstalk. This question and answer have helped me a lot, but as I'm using Dockerized NodeJS and not NodeJS directly, the exact steps for copying in SSH keys don't really work for me.
I have an ssh_setup.config file inside .ebextensions, and right now I can copy my SSH key from my private S3 bucket to /root/.ssh; however, this is /root/.ssh on my EC2 instance, not inside my Docker container. It's particularly difficult because I can't figure out where the Docker container is actually being built (it seems to be a different directory every time, something like /var/lib/docker/tmp/docker-builder<number>/).
I've tried this:
commands:
  copy-git-token:
    command: cp -R /root/.ssh/ ./ssh/
Which doesn't work - that copies into /opt/elasticbeanstalk/eb_infra. I've also tried:
container_commands:
  copy-git-token:
    command: cp -R /root/.ssh/ ./ssh/
Which also doesn't work - that copies into /var/app/current, but I've since found out that container_commands only run after the Docker container has already been built (hence too late).
From what I can tell, there are three main ways I can end up solving this:
1. Figure out how to copy a file into the Docker build directory, and then use COPY to copy that into the Docker container; or
2. Figure out how to configure the docker build command, so as to use a --build-arg or something of the sort to get my SSH key into the Docker container; or
3. Figure out how to retrieve the file from outside of the Docker build directory.
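As a rough illustration of pathway 2 (a sketch only; note that build args end up in the image history, so this is not a safe way to handle a real private key):
# Dockerfile
ARG SSH_KEY
RUN mkdir -p /root/.ssh && echo "$SSH_KEY" > /root/.ssh/id_rsa && chmod 600 /root/.ssh/id_rsa
# build command
docker build --build-arg SSH_KEY="$(cat /root/.ssh/id_rsa)" -t myapp .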
Any assistance on any of these three pathways (or others if they do exist, which I'm sure they do) would be greatly appreciated. The Elastic Beanstalk documentation on this is sorely lacking.
Thanks in advance.

Error: Module did not self register

I have the exact same error which says
at bindings (/node_modules/pg-native/node_modules/libpq/node_modules/bindings/bindings.js:76:44)
This might seem similar to:
Error: Module did not self-register.
but the difference is that I am using Docker to build images, so it is not possible for me to go back, remove node_modules, and run npm install again for every container.
Is there a more elegant solution?
One of the advantages of Docker is that it should be easy to upgrade your images and replace your containers. If you have a bunch of Node apps which all start from the same image:
FROM node
Then you just need to rebuild your images and they will use the latest version of the Node base image (which currently has NPM 3.10.3). In a non-production environment, just stop your container and run a new one from the new image. In production, look at rolling upgrades in swarm mode.
Ideally you should be working towards an automated workflow where you commit a change, that builds a new image and replaces your running container. You shouldn't need to do any maintenance on running containers - they are meant to be disposable.
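In practice that boils down to something like the following (the container and image names are examples):
docker build --pull -t myapp:latest .
docker stop myapp && docker rm myapp
docker run -d --name myapp myapp:latest
The --pull flag makes the build fetch the newest "node" base image instead of reusing a cached one.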
I was getting this error when I ran docker-compose. Also, in my docker-compose I was mounting the current folder. To fix this issue I rebuilt my node modules with npm rebuild.

Building two applications into one

So I have two separate applications - one is a REST API written in Java which is exported as a WAR. The other is a React JavaScript application which is built into a folder containing 1 HTML, 1 CSS and 1 JS file.
Now my organization wants me to package the two applications with Gradle and offer a single WAR for deployment.
Personally I think it's a terrible idea, but well, can't always choose.
My question is: can that even be done, and if so, how? Important to note that it has to be a Gradle build.
Theoretically, you should be able to package the static assets into your .war and have Tomcat (or whatever container you use) serve them alongside your actual application.
You should be able to place them at the same level as the WEB-INF folder; the container will serve them directly.
You probably want a build step that runs whatever JS tooling needs to run (minification, etc.). Another build step could take the output of that and copy it into the WAR.
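A rough sketch of what those build steps could look like in build.gradle (the task name, the frontend directory and its dist output are assumptions about your layout, not a standard convention):
// run the JS build (minification, bundling, ...) in the React project
task buildFrontend(type: Exec) {
    workingDir 'frontend'
    commandLine 'npm', 'run', 'build'
}
// pack the bundled output into the root of the WAR so the servlet container serves it
war {
    dependsOn buildFrontend
    from 'frontend/dist'
}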

NodeWebkit - deploy the application

I have one code base for both Web and NodeWebkit (NW) application.
I use the following stack:
- React
- Hapi
- Sequelize
- Windows environment
The web version of the application uses MySQL, while the NW version uses SQLite. It all works fine. I have a config file that builds the application for whichever target I need (web or NW).
The problem I face now is how to deploy the NW application. The idea is to provide the NW application to a client, who will open it by clicking an icon.
Since I use Node for the NW version, and the application uses many modules stored in node_modules, I face the challenge of how to package it all up.
My idea is to make a Windows installer. The user will run it, the installer will extract all files to the destination, and it will also create an icon on the user's desktop to run the application.
The problem is the Windows path length limitation. Inside node_modules there are many deeply nested subdirectories that simply exceed that limit. I can't even copy the node_modules folder, or delete it. Well, sure, I can copy it if I zip it first... or manually remove the long folders.
I have not yet started working on the installer, but I suspect I will hit a wall with this approach.
Does anyone have an idea how to make this deployment?
How can I integrate NPM3 in NW?
My plan now is to make a Windows installer. The installer will install the application files normally. The node_modules folder will be zipped beforehand and placed inside the installer, and the installer will then simply unzip it to the destination folder.
I will post my progress here.
Some update here.
The main issue here was the depth of node_modules. I have many modules in node_modules, and after some thinking I realized there is a simple distinction: some modules are server-side, while others are only used by React.
And since Webpack already creates a big bundle in which all of those front-end modules are included, I simply do not need them at all.
So I removed all the front-end modules (Babel modules, react-*) and left only the server-side ones (Hapi, Sequelize...). A miracle happened: the application ran, and startup was much faster.
I am going to use Inno Setup to create the installer script, and it should be good to go.
I am still not out of the danger zone, as a developer might need a server-side module with deeply nested dependencies. But I will deal with that if it happens.
More to follow...
Actually, in Node.js you can do the following:
1. Create another folder inside your project folder, for example "server_modules".
2. In that folder, create another package.json file and install any modules needed for the server there.
3. All these modules will be accessible like normal node_modules using require('module_name'), and you can delete the "server_modules" folder when you package your desktop version if you don't need it.
Note: this approach is used by some developers to achieve microservices in Node.js, but it is useful in your case too.
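A rough sketch of the layout this describes (names are illustrative only; it assumes the server entry point lives inside server_modules, so Node's normal resolution finds server_modules/node_modules):
project/
  package.json        <- front-end dependencies only (bundled by Webpack, not shipped)
  dist/               <- Webpack output shipped to the client
  server_modules/
    package.json      <- server-side dependencies (hapi, sequelize, ...)
    node_modules/     <- created by running npm install inside server_modules
    server.js         <- require('hapi') resolves from server_modules/node_modules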
