Can't send request during build time in Next.js with Docker? - javascript

I'm trying to send a request from my getStaticProps function to my backend API, which runs in another Docker container. Even though the API URL is written correctly, the static page is still not created. For the static page to be built, the backend has to be up already, but because this happens at build time the other container is not up yet: it waits for the build to finish, and the build cannot finish without the backend.
So what is the solution here? I tried setting depends_on to point at my other container, but it still doesn't work. What would you suggest?
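For reference, the failing call is of this shape (the URL and field names are placeholders, not my actual code):

// pages/index.js (sketch)
export async function getStaticProps() {
  // runs at `next build` time, so the backend container must already be up
  const res = await fetch('http://api:4000/posts'); // hypothetical backend URL
  const posts = await res.json();
  return { props: { posts } };
}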

There are two solutions I can think of.
Apparently the Next.js build fails because the service it queries is not running. So why not build and start the services it depends on explicitly first, then build the rest:
docker-compose build some_services
docker-compose up -d some_services
docker-compose build the_rest
This way the Next.js app will be able to make the request. Keep in mind that you still need to configure the ports and networks correctly. I'm fairly sure this will resolve the issue.
A fancier solution is to use build-time networks, which were added in Compose file format 3.4.
docker-compose.yml
build:
  context: ./service_directory
  network: some_network
For more details, please see the docker-compose network documentation.
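A fuller sketch of how the pieces could fit together, using a pre-created external network; the service names (api, web) and the network name are assumptions, and you would still start api before running docker-compose build web:

# created once beforehand: docker network create build_net
version: "3.4"
networks:
  build_net:
    external: true
services:
  api:
    build: ./api
    networks:
      - build_net
  web:
    build:
      context: ./web
      network: build_net   # RUN steps during `docker-compose build web` can reach api
    depends_on:
      - api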

Starting a service only when another service is up is a tricky part of Docker orchestration. Keep in mind that even with a healthcheck, there is no guarantee your database is ready before the next stage. depends_on is not going to be very helpful, because it only determines the order in which services start. From the Docker documentation:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
What the Docker docs suggest is to write a wait-for-it script (or use wait-for) and run it in front of your main command. For example:
build: next_service
command: sh -c './wait-for db:5432 -- npm start'
depends_on:
  - db
And of course, you can explicitly run separate docker-compose commands, but that defeats the point of Docker orchestration.
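Regarding healthchecks: with the modern Compose Specification (docker compose v2; classic v3 files dropped this form), depends_on can also wait for a healthcheck via condition: service_healthy. A minimal sketch, with the image and test command as assumptions; note this helps at run time only, not during an image build:

services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  web:
    build: ./next_service
    depends_on:
      db:
        condition: service_healthy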

Related

nuxt.js - SSR build with missing pages

I have a problem with the Nuxt SSR build: pages are missing on my website. In our deployment process the same steps are executed on different instances, yet they result in unexplained errors. The instances are built identically and the same code is pulled before the build starts; nevertheless, in some cases the builds differ.
[screenshot: instance1 - server section]
[screenshot: instance2 - server section]
It seems as if the pages don't exist at all, which is why users of the site only get an error page. The server-built pages are missing, although they appear to be available on the client.
[screenshot: instance1 - client section]
Repeating the deployment process only for the "broken" instance produces a correct result. I am using Node v14.17.0, Yarn v1.22.4, Webpack v4.46.0, and Nuxt v2.15.4, and I have not found any reports of similar problems. I have no idea where to start.
P.S. The same procedure works in my development environment and on our staging server. The staging and production servers are structured identically and use the same packages.
deployment steps (sketched as a script after this list):
connect to the server
check out master, git pull
stop the service currently running via yarn start
run yarn install (package.json may have been updated by the git pull)
run yarn build
start the service, which runs yarn start
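A rough sketch of those steps as a shell script; the question does not say how the service is run, so the app directory and the use of pm2 as process manager are assumptions:

#!/bin/sh
# run on the server after connecting
set -e
cd /srv/my-nuxt-app                          # assumed app directory
git checkout master && git pull
pm2 stop my-nuxt-app || true                 # stop whatever runs `yarn start` (pm2 assumed)
yarn install                                 # package.json may have changed
yarn build
pm2 start yarn --name my-nuxt-app -- start   # or however your service is started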

Checking for update of a file in gitlab project

I am writing a service with Node.js, and the way the service works will change depending on whether a file in my GitLab project has been recently updated. How can I tell, on the Node.js side, whether the content of a file in my GitLab project has changed? Is there a GitLab API for this?
GitLab does offer a robust API, but could you not just run a git fetch and diff?
git fetch origin master
git diff origin/master:./ --compact-summary
That will list any files that have changed compared to your local copy. If you'd like to access that from Node, you could put it in a shell script, run it as a spawned child_process, and parse the stdout.
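A sketch of that from Node, assuming git is installed and the repository is already cloned; it uses --name-only instead of --compact-summary to keep the parsing trivial:

// list files that differ from origin/master (sketch)
const { execFile } = require('child_process');

function changedFiles(repoDir, callback) {
  execFile('git', ['fetch', 'origin', 'master'], { cwd: repoDir }, (fetchErr) => {
    if (fetchErr) return callback(fetchErr);
    execFile('git', ['diff', 'origin/master', '--name-only'],
      { cwd: repoDir },
      (diffErr, stdout) => {
        if (diffErr) return callback(diffErr);
        callback(null, stdout.split('\n').filter(Boolean));
      });
  });
}

// usage: react if a particular file changed upstream
changedFiles('.', (err, files) => {
  if (err) throw err;
  if (files.includes('config.json')) console.log('config.json changed upstream');
});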
This method relies on your service checking for changes manually. If instead you want your service to be alerted whenever a change happens, you may want to look into webhooks: https://docs.gitlab.com/ee/user/project/integrations/webhooks.html
You can monitor the project's commits (with a crontab) or use webhooks (as @Dave said), and then fetch the commit diff from the GitLab commits API to see whether your file was modified.
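A sketch of the polling variant; it uses the commits endpoint's path filter so only commits touching the file are returned. The project ID, file path, and token are placeholders:

// poll GitLab for the latest commit touching one file (sketch)
const https = require('https');

const url =
  'https://gitlab.com/api/v4/projects/PROJECT_ID/repository/commits' +
  '?path=' + encodeURIComponent('path/to/file.txt') + '&per_page=1';

let lastSeenSha = null; // persist this between runs in a real service

https.get(url, { headers: { 'PRIVATE-TOKEN': process.env.GITLAB_TOKEN } }, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const [latest] = JSON.parse(body);
    if (latest && latest.id !== lastSeenSha) {
      lastSeenSha = latest.id;
      console.log('file changed in commit', latest.id);
    }
  });
});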

NodeJS server side application deployment considerations

I am writing a Node.js application with Angular as my front end.
I am using Git as my code management server.
For the client, I do minification and it is ready for production.
But I am not sure how to prepare the server-side files for production.
Do we just need to copy all the Git folders to the production server?
Let me know the best way to deploy a Node.js server application.
You could use pm2 as your daemon to keep your Node.js app up all the time.
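For example, a minimal pm2 flow, assuming app.js is your entry point:

pm2 start app.js --name my-api   # keep the process alive, restart on crash
pm2 startup                      # generate an init script so pm2 survives reboots
pm2 save                         # remember the current process list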
Try not to include node_modules in the repo, because different machines have different setups; you cannot tell whether a package will work until you npm install it and run it.
If you are familiar with Docker, use it: pre-bundle all files (including node_modules) into the Docker image. You do not need pm2 here, since Docker itself can restart the container automatically. This is the ideal approach; a sketch follows.
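A minimal sketch of such a Dockerfile; the Node version and the server.js entry point are assumptions:

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev        # production dependencies baked into the image
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]    # assumed entry point

Run it with a restart policy so Docker takes over the restart job:

docker run -d --restart unless-stopped -p 3000:3000 my-api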
It really depends on how you (or your company) want to organize the workflow and the size of the project.
Sometimes I too use a Git repository, because updating is then really simple: just a git pull and, if server files were edited, a pm2 restart N.
This way you don't have to install the whole development stack on the server in order to compile (and minify) the bundles; I assume you work on your local machine, where all the development tools are installed.
Keep in mind to install packages that are only required in development with the --save-dev flag (so they land in devDependencies); on the production server you can then skip them and keep it as slim as possible.
A good practice I found is to add a content-hash token to the final bundle filenames (both JS and CSS), which then gets injected into the final static HTML files; this avoids stale caches that force users into a refresh-the-page loop.
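For example, with webpack (an assumption; the answer doesn't name a bundler), content hashes give you exactly such tokens:

// webpack.config.js (sketch): hash tokens in bundle filenames
module.exports = {
  output: {
    filename: '[name].[contenthash].js', // changes whenever the content changes
  },
  // a plugin such as html-webpack-plugin can then inject the hashed
  // filenames into the static HTML automatically
};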
Once you have the bundle files on your dev machine, just upload them to the server (FTP, Git, rsync, an sshfs mount, whatever you like) and, if server files were edited, restart/reload the Node process (I'm using pm2 for this; it's really great). If you only edited client files, no reload is needed.
Starting from here, there are many more or less sophisticated ways to do the job, such as Git pipelines, but it depends on the situation.
Edit: there is a good article about task runners (gulp vs. grunt vs. vanilla npm); while it may be a little off topic, it analyzes some aspects of the common deployment process.

Is it possible to deploy a Node.js application to Heroku without a web dyno?

For some backstory and reference, here are some quotes from a few Heroku documentation pages.
From the Heroku Node.js Support > Activation:
The Heroku Node.js buildpack is employed when the application has a package.json file in the root directory.
From Heroku Node.js Support > Default web process type:
First, Heroku looks for a Procfile specifying your process types.
If no Procfile is present in the root directory of your app during the build process, your web process will be started by running npm start, [...]
From Process Types and the Procfile > Process types as templates:
A Procfile contains a number of process type declarations, each on a new line. Each process type is a declaration of a command that is executed when a dyno of that process type is started.
For example, if a web process type is declared, then when a dyno of this type is started, the command associated with the web process type, will be executed. This could mean starting a web server, for example.
I have a package.json file in the root (which will activate the Node.js buildpack), and I've also included a Procfile in the root with the following contents:
service: npm start
I would assume that not defining a web dyno would cause it to not be created; only the service dyno should be created, following the configuration declared in the Procfile.
Instead, what happened is that an active web dyno was automatically created using npm start and an inactive service dyno was created using the definition in Procfile. I then had to:
heroku ps:scale web=0
heroku ps:scale service=1
I can definitely imagine wanting to run a Node.js "service" application on Heroku that does not accept any incoming connections, only making outgoing ones. Is there a way to configure the Node.js buildpack to not automatically create a web dyno when one is not defined? I've looked through lots of documentation looking for a way to either: (1) define it as such or (2) remove the automatically generated web dyno; but, I haven't found anything.
Thanks for the help!
I ended up opening a helpdesk ticket with Heroku on this one. Got a response from them, so I'll post it here. Thanks Heroku support!
The short answer is that, no, currently you'll need to heroku scale web=0 service=1 in order to run a service without a public web process. For a longer explanation:
Early on, the Node.js Buildpack checked for the presence of a Procfile and, if missing, created a default one with web: npm start. This made it easy to create apps without a web process, since you could just provide a Procfile that defined some processes, omitting web from the list.
However, as more and more users needed arrays of buildpacks instead of a single one, that solution created issues. Node is the most popular first buildpack, since it's frequently used by Java, Python, PHP, and Ruby apps to build front-end assets. Whenever an app without a Procfile ran Node first, and another buildpack second, Node would inject its own default Procfile (web: npm start), and the second buildpack would then not create its default Procfile as one already existed in the filesystem. So injecting a default Procfile when one is missing from the app creates problems downstream for multilingual apps.
So, we stopped creating a default Procfile and instead used default_process_types in bin/release. This fixes the issue of subsequent buildpacks inheriting incorrect default Procfiles, but since default_process_types is extended rather than replaced by the Procfile process list, apps without a web process defined in their Procfile will get the default web process merged in. This is why web appears even without a web entry in Procfile.
We also don't want to surprise any customers with unexpected bills. Some apps have many process types, some of which are only to be run occasionally, some limited to a single instance, some which need to be scaled up and down, etc, so defaulting everything to 1 rather than 0 could cause extra billing as well as app malfunctions. This is why non-web processes are scaled to zero by default.
I just ran into the same problem and worked around it by putting this in my Procfile, after reading Shibumi's answer:
web: echo "useless"
service: node index.js

What's the easiest way to deploy Meteor app?

I've spent a whole day on this without success. I tried Heroku with https://github.com/jordansissel/heroku-buildpack-meteor, but it gives an error and the logs don't provide any useful information. I want a free service with the ability to scale once the app gets more traffic. I just want to write as few lines as possible, or simply drop in a bundle. It shouldn't be this difficult. Thank you.
IMO the easiest way to deploy a Meteor app to production is to use meteor-up and your own server (DigitalOcean, Linode, ...).
meteor-up sets up the server for you (installs Node.js, MongoDB, etc.) and gives you an easy way to deploy:
mup deploy
You can get a server that's good enough to start with for only $5/month.
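The usual meteor-up flow looks roughly like this (a sketch; details vary between meteor-up versions):

mup init     # scaffolds a config file for your app and server
# edit the generated config with your server's host and credentials
mup setup    # installs Node.js, MongoDB, etc. on the server
mup deploy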
It doesn’t get much simpler than meteor deploy.
$ meteor deploy myapp.meteor.com
Where myapp is a not-taken subdomain of your choice.
From the documentation:
You can also deploy to your own domain. Just set up the hostname you want to use as a CNAME to origin.meteor.com, then deploy to that name.
$ meteor deploy www.myapp.com
If you want scalable, it's not going to be free (to my knowledge). But you can use AWS, Linode, or pretty much any of the cloud services. Just install Meteor on your host and run this command from the project directory:
$ cd my_project_directory && meteor
If you want it to run in the background:
$ cd my_project_directory && meteor &>.log &
$ disown %1  # or whatever job number meteor runs as
I made a few tutorial vids for using Meteor Up with Amazon EC2. You can start out with the free EC2 Micro Tier.
Setting up EC2
https://www.youtube.com/watch?v=OXdPdSekVtg&list=UUs2gDoWu9gHHR0aOklT3nvg
EC2 SSH
https://www.youtube.com/watch?v=K-IRgEge6jA&list=UUs2gDoWu9gHHR0aOklT3nvg
Meteor Deployment onto EC2
https://www.youtube.com/watch?v=Lyyh2fkXovo&list=UUs2gDoWu9gHHR0aOklT3nvg
This seems to be an old question by now, but in case anybody stumbles upon it:
after doing my research and trying lots of different things, I ended up with the process below, which uses the amazing Phusion Passenger. I've been using it for many of my projects so far.
1 - Install Meteor on the server:
curl https://install.meteor.com/ | sh
2 - Install Phusion Passenger by following its documentation
3 - Build your app locally (it is important to add meteor packages)
4 - Delete .meteor/local/build and .meteor/local/cordova-build (don't delete db if you want to keep your local db)
5 - Using FTP, create a folder on your server and upload all the files, including the .meteor folder
6 - Run Passenger Standalone:
sudo -E passenger start --port 80 --user root --environment production --daemonize --sticky-sessions
Of course, you should change the variables before running that command. You can reuse the last four steps for every app you have. If you want to publish a Cordova app, just use your domain with the selected port instead of yourapp.meteor.com.
Since Meteor reloads itself automatically unless you tell it not to, you can just upload new client files to the server and wait for the reload when you want to make a quick change. If the change is on the server side, stop Passenger with
passenger stop --port 80
upload your files, and start Passenger again.
I hope this helps someone out there.
Best
