I have a problem with nuxt build with SSR: pages are missing on my website. Our deployment process executes the same steps on different instances, yet it sometimes produces unexplained errors. The instances are set up identically and the same code is pulled before the build is started; nevertheless, in some cases the resulting builds differ.
[build output screenshot: instance1 - server section]
[build output screenshot: instance2 - server section]
It seems as if the pages don't exist at all, which is why users of the site only get an error page. The server-built pages are missing, although they appear to be available in the client build.
[build output screenshot: instance1 - client section]
Repeating the deployment process for only the "broken" instance produces a correct result. I am using Node v14.17.0, Yarn v1.22.4, Webpack v4.46.0 and Nuxt v2.15.4, have not found any reports of similar problems, and have no idea where to start.
P.S. The same procedure works in my development environment and on our staging server. The staging and production servers are structured identically and use the same packages.
Deployment steps (sketched as a script below):
connect to the server
check out master, git pull
stop the service that currently runs yarn start
run yarn install (package.json could have been updated by the git pull)
run yarn build
start the service again (which executes yarn start)
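For reference, here is a minimal sketch of those steps as a deploy script. The app directory and the systemd service name are assumptions for illustration; the original setup only says that a service wrapping yarn start is stopped and started.

#!/usr/bin/env bash
# Hypothetical deploy script mirroring the steps above.
# APP_DIR and the service name "my-nuxt-app" are placeholders.
set -euo pipefail

APP_DIR=/srv/my-nuxt-app
cd "$APP_DIR"

git checkout master
git pull

# stop the service that currently runs `yarn start`
sudo systemctl stop my-nuxt-app

# package.json may have changed with the pull
yarn install

yarn build

# start the service again (it runs `yarn start`)
sudo systemctl start my-nuxt-app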
Related
I'm trying to send a request in my getStaticProps function to my backend API, which runs in another Docker container. However, even though the API URL is written correctly, the static page is still not created. For the static page to be built, the backend has to be up already; but since this happens at build time, the other container is not up yet: it waits for the build to finish, and the build cannot finish without the backend.
So what's the solution here? I tried setting depends_on to point at the other container, but it still doesn't work. What solution would you suggest?
There are two solutions I can think of.
Apparently, the Next.js build fails because the service it queries is not running. So why not build and start the service it depends on explicitly first, and then build the rest, like this:
docker-compose build some_services
docker-compose up -d some_services
docker-compose build the_rest
This way the Next.js app will be able to make the request during its build. Keep in mind that you still need to configure the ports and networks correctly. I'm fairly sure this will resolve the issue.
A more 'fancy' solution would be to use build-time networks, which were added in later Compose file versions (3.4+, if I am not mistaken).
docker-compose.yml:
build:
  context: ./service_directory
  network: some_network
For more details please see Docker-compose network
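As a rough illustration of how that option fits into a full file (the service and network names are placeholders, and this assumes the network already exists, e.g. created beforehand with docker network create, so the backend is reachable on it during the build):

# docker-compose.yml (Compose file format 3.4+)
version: "3.4"
services:
  next_app:
    build:
      context: ./service_directory
      # containers run for build steps (RUN ...) are attached to this network,
      # so they can reach services that are already up on it
      network: some_network
networks:
  some_network:
    external: true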
Starting a service that depends on another service being up is a tricky part of Docker orchestration. Keep in mind that even with a healthcheck, there is no guarantee the database is ready before the next stage; depends_on is not really helpful either, because it only determines the order in which services are started. From the Docker documentation:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
What the Docker docs suggest is to write a wait-for-it script (or use wait-for) and combine it with depends_on. For example:
build: next_service
command: sh -c './wait-for db:5432 -- npm start'
depends_on:
  - db
And of course, you could explicitly run separate docker-compose commands (as in the first solution), but that rather defeats the point of Docker orchestration.
I am writing a Node.js application with Angular as my front end.
I am using Git as my code management server.
For the client, I am doing minification and it is ready for production.
But I am not sure how to prepare the server-side files for production.
Do we just need to copy all the Git folders onto the production server?
Let me know the best way to deploy a Node.js server application.
You could use pm2 as your daemon to keep your Node.js app up all the time.
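For instance, a typical pm2 setup looks something like the following; the entry file server.js and the app name my-app are placeholders, not taken from the question.

# install pm2 globally on the server
npm install -g pm2

# start the app under pm2 ("server.js" and "my-app" are placeholders)
pm2 start server.js --name my-app

# persist the process list and generate a boot script so it survives reboots
pm2 save
pm2 startup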
Try not to include node_modules in the repo, because different machines have different setups/installations (native modules in particular may be built differently), so you cannot tell whether a package will work until you npm install it on the target machine.
If you are familiar with Docker, use it: pre-bundle all files (including node_modules) into the Docker image. You do not need pm2 in that case, since Docker itself can restart the container automatically. This is the ideal approach.
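A minimal sketch of that approach, assuming the app's entry point is server.js and it listens on port 3000 (both are placeholders):

# Dockerfile (illustrative)
FROM node:14-alpine
WORKDIR /app

# install production dependencies inside the image
COPY package*.json ./
RUN npm install --production

# copy the rest of the application code
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]

Built with docker build -t my-app . and run with a restart policy, e.g. docker run -d --restart unless-stopped -p 3000:3000 my-app, the container is brought back up automatically after crashes or reboots.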
It really depends on how you (or your company) want to organize the workflow and the size of the project.
Sometimes I too use a Git repository on the server, because updating is then really simple: just a git pull and (if server files changed) a pm2 restart N command.
This way, you don't have to install the whole development stack on the server in order to compile (and minify) the bundles; I guess you work on your local machine, where all the development tools are already installed.
Keep in mind to install packages that are only required during development with the --save-dev (or -D) flag, so they end up in devDependencies and the production server can stay as slim as possible.
A good practice I found is to add a content hash (or random token) to the final bundle filenames (both JS and CSS), which is then injected into the final static HTML files, to avoid stale cached bundles and the resulting refresh-the-page loop.
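If the bundles are built with webpack, for example, this can be done with the built-in [contenthash] placeholder; the entry and output paths below are assumptions:

// webpack.config.js (illustrative)
const path = require('path');

module.exports = {
  mode: 'production',
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    // the hash changes whenever the bundle content changes,
    // so browsers fetch the new file instead of a stale cached one
    filename: 'bundle.[contenthash].js',
  },
};

A plugin such as html-webpack-plugin can then inject the hashed filenames into the generated HTML.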
Once you have the bundle files on your dev machine, just upload them to the server (FTP, git, rsync, an sshfs mount, whatever you like) and, if server files changed, restart/reload the Node process (I'm using pm2 for this, it's really great). If you only edited client files, no reload is needed.
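One possible sketch of that upload-and-reload step; the paths, host and process name are placeholders:

# copy the built client bundles to the server
rsync -avz ./dist/ user@example.com:/var/www/my-app/public/

# only needed when server-side files changed
ssh user@example.com 'pm2 reload my-app'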
Starting from here, there are many more or less sophisticated ways to do the job, such as Git pipelines, but it depends on the situation.
Edit: this is a good article about task runners (Gulp vs Grunt vs vanilla npm); while it may be a little off topic, it analyses some aspects of the common deployment process.
I am currently getting my feet wet using Express. To start out, I used express-generator to scaffold a simple app.
While examining the project, I noticed that the npm start command is mapped to a binary (bin/www). Upon further inspection I noticed that this file actually contains code to be executed by Node, hence the #!/usr/bin/env node shebang line. For anyone with a deeper understanding of Express/Node the answer may be obvious, but I am still wondering: why didn't they simply use a .js file to bootstrap the framework? That file could then be run using node www.js, I imagine.
There are probably a few reasons why the script was made an executable:
npm scripts can be mapped to execute local JS files in the project or executables on the system.
Mapping npm start to bin/www is effectively the same as running ./bin/www on the command line, with the important distinction that running it via npm start also works cross-platform (e.g. on systems that ignore the shebang line, like Windows); otherwise you would need to run it as node bin/www on those systems.
There's a binary ready to add to startup scripts.
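For reference, express-generator wires this up in the scaffolded package.json roughly like this (the package name is a placeholder):

{
  "name": "my-express-app",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  }
}

On Unix-like systems the start script could also point straight at ./bin/www and rely on the shebang line; prefixing it with node keeps it working on platforms that ignore the shebang, such as Windows.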
For some backstory and reference, here are some quotes from a few Heroku documentation pages.
From the Heroku Node.js Support > Activation:
The Heroku Node.js buildpack is employed when the application has a package.json file in the root directory.
From Heroku Node.js Support > Default web process type:
First, Heroku looks for a Procfile specifying your process types.
If no Procfile is present in the root directory of your app during the build process, your web process will be started by running npm start, [...]
From Process Types and the Procfile > Process types as templates:
A Procfile contains a number of process type declarations, each on a new line. Each process type is a declaration of a command that is executed when a dyno of that process type is started.
For example, if a web process type is declared, then when a dyno of this type is started, the command associated with the web process type, will be executed. This could mean starting a web server, for example.
I have a package.json file in the root (which will activate the Node.js buildpack), and I've also included a Procfile in the root with the following contents:
service: npm start
I would assume that not defining a web dyno would cause it to not be created; only the service dyno should be created, following the configuration declared in the Procfile.
Instead, what happened is that an active web dyno was automatically created using npm start and an inactive service dyno was created using the definition in Procfile. I then had to:
heroku ps:scale web=0
heroku ps:scale service=1
I can definitely imagine wanting to run a Node.js "service" application on Heroku that does not accept any incoming connections, only making outgoing ones. Is there a way to configure the Node.js buildpack to not automatically create a web dyno when one is not defined? I've looked through lots of documentation for a way to either (1) define it as such or (2) remove the automatically generated web dyno, but I haven't found anything.
Thanks for the help!
I ended up opening a helpdesk ticket with Heroku on this one. Got a response from them, so I'll post it here. Thanks Heroku support!
The short answer is that, no, currently you'll need to heroku scale web=0 service=1 in order to run a service without a public web process. For a longer explanation:
Early on, the Node.js Buildpack checked for the presence of a Procfile and, if missing, created a default one with web: npm start. This made it easy to create apps without a web process, since you could just provide a Procfile that defined some processes, omitting web from the list.
However, as more and more users needed arrays of buildpacks instead of a single one, that solution created issues. Node is the most popular first buildpack, since it's frequently used by Java, Python, PHP, and Ruby apps to build front-end assets. Whenever an app without a Procfile ran Node first, and another buildpack second, Node would inject its own default Procfile (web: npm start), and the second buildpack would then not create its default Procfile as one already existed in the filesystem. So injecting a default Procfile when one is missing from the app creates problems downstream for multilingual apps.
So, we stopped creating a default Procfile and instead used default_process_types in bin/release. This fixes the issue of subsequent buildpacks inheriting incorrect default Procfiles, but since default_process_types is extended rather than replaced by the Procfile process list, apps without a web process defined in their Procfile will get the default web process merged in. This is why web appears even without a web entry in Procfile.
We also don't want to surprise any customers with unexpected bills. Some apps have many process types, some of which are only to be run occasionally, some limited to a single instance, some which need to be scaled up and down, etc, so defaulting everything to 1 rather than 0 could cause extra billing as well as app malfunctions. This is why non-web processes are scaled to zero by default.
I just ran into the same problem and worked around it by doing this in my Procfile, after reading Shibumi's answer:
web: echo "useless"
service: node index.js
I would like you to share your way of deploying complicated JS projects where Grunt or Gulp is used.
For example, the grunt build command concatenates CSS and JS files and puts minified Bower dependencies into a dist folder. As far as I know, we should not store build results in the version control repo, should we? Also, the development environment is not needed on the production server.
That is why a flow like git push production, then grunt build on the production server, then restarting the app is not good practice, is it?
The purpose of the question is to find out how I should deploy complicated JS projects when:
Building is necessary.
Building should not be done on the production server. (Or is that a normal practice?)
Build results should not be tracked by the version control system.
Deployment should not be done manually.