Docker Compose network not resolving hostname in JavaScript HTTP request

I am currently writing a small fullstack application using Docker Compose, Vue, and Python. All my containers work in isolation, but I can't seem to get them to communicate using host names... Here's my code:
Docker Compose
version: "3.8"
services:
  web:
    build: ./TranscriptionFrontend
    ports:
      - 4998:4998
  api:
    build: ./TranscriptionAPI
JavaScript Frontend Request
fetch("http://api:4999/transcribe", {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify(data)
}).then(res => {
  res.json().then((json_obj) => {
    this.transcription_result = json_obj['whisper-response']
  })
}).catch(e => {
  this.transcription_result = "Error communicating with api: " + e;
})
I know my API service works because originally I was mapping it to a port on my localhost, but that got messy and I want to keep access to it within my Docker network. In all cases the host name could not be resolved from my JS request. However, curl-ing between my containers using host names does get a response, i.e. docker-compose exec web curl api works, and vice versa. I'm a beginner to JavaScript and Docker, so apologies if I'm missing something basic.
What I've tried:
- XMLHttpRequest
- Making the call without http://

Your Docker service name won't resolve in the URL because the fetch runs in your browser, which is on your host machine's network, not inside the Compose network. You could configure a virtual host that points to localhost, as in How to change the URL from "localhost" to something else, on a local system using wampserver?, if that is what you really want to do, but I feel it might be defeating the point.
My approach would be to pass the API URL as an environment variable to your front-end service directly in the Docker Compose file. Something like:
services:
  web:
    build: ./TranscriptionFrontend
    ports:
      - 4998:4998
    environment:
      API_URL: http://127.0.0.1:4999
  api:
    build: ./TranscriptionAPI
    ports:
      - 4999:4999
and then inject the env variable into your JavaScript app build.
For example, assuming you're using a Node.js stack for compiling your JS, you could have it as a process.env property and then do:
fetch(`${process.env.API_URL}/transcribe`, { ... })
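Since the question uses Vue, note that a Vue CLI project only exposes environment variables prefixed with VUE_APP_ to the client bundle, so the variable would need renaming accordingly. A sketch under that assumption (the VUE_APP_API_URL name is mine, and the variable must be set when the bundle is built or served):

environment:
  VUE_APP_API_URL: http://127.0.0.1:4999

// Vue CLI statically replaces process.env.VUE_APP_* at build time
fetch(`${process.env.VUE_APP_API_URL}/transcribe`, { ... })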

Host names for Docker container-to-container communication only work on a user-defined bridge network (Docker Compose attaches your services to one by default). Here's an example:
(On your docker-compose.yml)
...
networks:
  my_network:
    driver: bridge
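Note that defining the network alone isn't enough; each service also has to attach to it. A sketch extending the compose file from the question:

services:
  web:
    build: ./TranscriptionFrontend
    networks:
      - my_network
  api:
    build: ./TranscriptionAPI
    networks:
      - my_network
networks:
  my_network:
    driver: bridge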
Here's a link to the Docker docs just in case. Let me know if this helped!

Related

How to route frontend API calls to Docker network

I developed a Docker Compose stack with, let's say, an API and a frontend.
The frontend queries the API service with JavaScript fetch calls. What I am trying to achieve is to reach the API through the http://api/endpoint URL.
version: "3"
services:
  api:
    build: ./api
    ports:
      - "5000:80"
  frontend:
    build: ./frontend
    ports:
      - "80:80"
Whenever I try to reach an API endpoint from the frontend:
- from a terminal in the frontend container: curl http://api:5000/endpoint works
- from my local browser: a fetch from my frontend to http://localhost:5000 works
- from my local browser: a fetch from my frontend to http://api:5000 or http://api is not found (of course, my local network is not the Docker Compose network)
Is there a possibility to map my container's traffic from my browser to the Docker Compose network? Or should I access my container from the browser differently?
I get that my JavaScript is executed at runtime, therefore on my machine's network, but the "DNS" feature that Docker Compose offers is too good not to be used even for frontend development purposes.
Many thanks in advance, guys, for your ideas!
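No answer is recorded here, but a common workaround, as a sketch of my own (not from the thread): browser-side code can only reach ports published on localhost, while code running inside the Compose network can use service hostnames, so pick the base URL according to where the code runs:

// Sketch: choose the API base URL depending on where this code executes.
// In the browser only the host-published port (5000 here) is reachable;
// the "api" hostname resolves only inside the Compose network.
const API_BASE =
  typeof window !== 'undefined'
    ? 'http://localhost:5000'  // browser, via the published port
    : 'http://api';            // server-side code inside the network

fetch(`${API_BASE}/endpoint`)
  .then(res => res.json())
  .then(data => console.log(data));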

Azure: Connect/Proxy Frontend (React) to Backend (Express/node.js)

I am currently trying to connect a frontend (React) to a backend (Express/Node.js) within Azure App Services. I am using Windows, since "Virtual applications and directories" are currently not available for Linux. However, according to my research, that is necessary in this case.
Backend sample: server.js
const express = require('express');
const app = express();
const port = 3003;
require("dotenv").config(); // For process.env
[...]
app.get("/api/getBooks", async (req, res) => {
  const books = await Books.find();
  res.send(books);
});
Frontend sample: App.js
const getBooks = () => {
  axios.get('/api/getBooks')
    .then(res => {
      setBooks(res.data);
      console.log("Got books: ");
      console.log(res.data);
    })
    .catch(err => {
      console.log(err);
    });
}
Azure: Folder structure
site/server/server.js (Express)
site/wwwroot/index.html (React)
I successfully executed "npm install" via "Development Tools/Console".
The two are already connected via Virtual applications in Azure by using the following configuration.
Virtual applications
The app generally loads successfully. However, the connection to the backend is not working.
How can I start the node.js server now on Azure and make the proxy working?
I tried to start the server via "node server" on the console. But this does not seem to be working.
I discovered two possible ways to solve this issue.
Assuming you have a client (client/App.js) and a server (server/server.js).
Serve the React app via Node.js/Express
Based on the above architecture, a little bit of the structure needs to change here, because the React app is no longer served through its own server but directly through Express.
In server/server.js, the following must be called after express is declared:
app.use(express.static("../client/build"));
After defining the API endpoints, the last route to define is the default route - the static output of the React build.
app.get("/", (res) => {
res.sendFile(path.resolve(__dirname, "client", "build", "index.html"));
});
Using an FTP client, you can now create the /client/build directory that will contain the built React app. Of course, another directory structure can be used.
The client files from the built React app are then simply uploaded there.
The deployment of the server is best done via Visual Studio Code and the Azure plugin.
In the above structure, /server would then be deployed to your app in the Azure extension (Azure/App Services --> right-click on "myapp" --> Deploy to Web App ...).
Create two App Services
For example: myapp.azurewebsites.net & myapp-api.azurewebsites.net
myapp must simply contain the built React app (/build) in the wwwroot directory. This can be achieved via FTP.
The deployment of /server to myapp-api is best done via Visual Studio Code and the Azure plugin.
In the above structure, /server would then be deployed to myapp-api in the Azure extension (Azure/App Services --> right-click on "myapp-api" --> Deploy to Web App ...).
Also worth mentioning is that CORS should be configured so that API calls can only be made from myapp.azurewebsites.net. This can be configured in the Azure Portal.
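Alternatively (a sketch of my own, not part of the original answer), the same restriction can be enforced in Express itself with the cors middleware package:

const cors = require('cors');

// Only allow the React app's origin to call this API
app.use(cors({ origin: 'https://myapp.azurewebsites.net' }));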
Occasionally the Node dependencies have to be installed afterwards via the SSH console in the Azure Portal. For me it sometimes worked automatically and sometimes not.
To do this, simply change to the wwwroot directory (of the /server) and execute the following command:
npm cache clean --force && npm install
Combine this with React Router
React Router is usually used with React. This can easily be combined with a statically served web app from Express.
https://create-react-app.dev/docs/deployment/#other-solutions
Excerpt
How to handle React Router with Node Express routing
https://dev.to/nburgess/creating-a-react-app-with-react-router-and-an-express-backend-33l3

How to point all node js microservices in single port locally and consume it in react app

In my React and Node application, there is a microservice architecture on the server side, written in Node.js. All the microservices run on different ports, like
http://localhost:3001,
http://localhost:3002
and so on.
I want to expose all the services on a single port so that I can consume them in React through one single URL as a base path.
I want to do this on a local server/local system, as I want to run the application end to end locally.
Try using an API gateway with a message broker (RabbitMQ, for example). In each service's index.js, consume the message broker's queue; before responding, publish the response to the message broker, and that message is consumed by your gateway.
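A message broker isn't strictly required just to get a single port, though; a plain HTTP reverse-proxy gateway achieves the same thing. A minimal sketch (my addition, assuming Express and the http-proxy-middleware package; the route prefixes and ports are hypothetical):

// gateway.js - a single port (4000) in front of all microservices
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Route each path prefix to the microservice that owns it
app.use('/users', createProxyMiddleware({ target: 'http://localhost:3001', changeOrigin: true }));
app.use('/orders', createProxyMiddleware({ target: 'http://localhost:3002', changeOrigin: true }));

app.listen(4000, () => console.log('Gateway listening on http://localhost:4000'));

The React app can then use http://localhost:4000 as its single base path.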
Do you perhaps mean something like this?
You put this into the .env file:
CONNECTION_URL='put your db url'
SECRET=...
BASE_URL='http://localhost:3000/'
NODE_ENV=env
As for the front end, you can just call your endpoints by URL.
Example:
const getItem = () => {
  Axios.get("http://localhost:5000/Items").then((response) => {
    setListItem(response.data);
  });
};
Using Nginx on WSL with Node.js resolved my issue by creating a proxy server.

vue.config.js (devServer) not used in npm run serve

I'm trying to set up a reverse proxy on the development server for my Vue.js web app, to get around the CORS issues I was getting when trying to use my Flask HTTP APIs with the Vue.js web app.
I did this by creating a vue.config.js file in the root of the project directory:
module.exports = {
  devServer: {
    proxy: 'http://localhost:5001/'
  }
}
When I run npm run serve and try to use a REST API defined on port 5001, I don't see the request going to port 5001; it uses the same port as the web app.
And there are no useful logs being written to stdout to help me debug this either.
Has anyone come across this issue before?
I had a similar issue and found that the port was already in use by another application, and hence requests were not going to the correct port. Once I shut down the other app, it started working as expected.
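For reference, a more explicit form of the proxy configuration (a sketch of my own, using the webpack-dev-server options that Vue CLI passes through; the /api prefix is an assumption). Note that the dev-server proxy only applies to requests sent to the dev server itself, so the frontend has to use relative URLs such as fetch('/api/things'):

// vue.config.js
module.exports = {
  devServer: {
    proxy: {
      '/api': {
        target: 'http://localhost:5001', // the Flask backend
        changeOrigin: true
      }
    }
  }
}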

What's the cause of the error 'getaddrinfo EAI_AGAIN'?

My server threw this today, which is a Node.js error I've never seen before:
Error: getaddrinfo EAI_AGAIN my-store.myshopify.com:443
    at Object.exports._errnoException (util.js:870:11)
    at errnoException (dns.js:32:15)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:78:26)
I'm wondering if this is related to the Dyn DNS DDoS attack that affected Shopify and many other services today. Here's an article about that.
My main question is what does dns.js do? What part of node is it a part of? How can I recreate this error with a different domain?
If you get this error with Firebase Cloud Functions, this is due to the limitations of the free tier (outbound networking only allowed to Google services).
Upgrade to the Flame or Blaze plans for it to work.
EAI_AGAIN is a DNS lookup timeout error, meaning it is a network connectivity error or a proxy-related error.
My main question is what does dns.js do?
dns.js is the part of Node.js that resolves a domain name to an IP address (in brief).
Some more info:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
If you get this error from within a Docker container, e.g. when running npm install inside an Alpine container, the cause could be that the network changed since the container was started.
To solve this, just stop and restart the container:
docker-compose down
docker-compose up
Source: https://github.com/moby/moby/issues/32106#issuecomment-578725551
As xerq's excellent answer explains, this is a DNS timeout issue.
I wanted to contribute another possible answer for those of you using Windows Subsystem for Linux - there are some cases where something seems to be askew in the client OS after Windows resumes from sleep. Restarting the host OS will fix these issues (it's also likely restarting the WSL service will do the same).
For those who perform thousands or millions of requests per day and need a solution to this issue:
It's quite normal to get getaddrinfo EAI_AGAIN errors when performing a lot of requests on your server. Node.js itself doesn't perform any DNS caching; it delegates everything DNS-related to the OS.
Keep in mind that every http/https request performs a DNS lookup. This can become quite expensive; to avoid this bottleneck and the getaddrinfo errors, you can implement a DNS cache.
http.request (and https) accepts a lookup property, which defaults to dns.lookup():
http.get('http://example.com', { lookup: yourLookupImplementation }, response => {
  // do something here with response
});
I strongly recommend using an already tested module instead of writing a DNS cache yourself, since you'll have to handle TTL correctly, among other things, to avoid hard-to-track bugs.
I personally use cacheable-lookup, which is the one the got HTTP client uses (see its dnsCache option).
You can use it on specific requests:
const http = require('http');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();

http.get('http://example.com', {lookup: cacheable.lookup}, response => {
  // Handle the response here
});
or globally:
const http = require('http');
const https = require('https');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();

cacheable.install(http.globalAgent);
cacheable.install(https.globalAgent);
NOTE: keep in mind that if a request is not performed through Node.js' http/https modules, using .install on the global agent won't have any effect on that request, for example requests made using undici.
The OP's error specifies a host (my-store.myshopify.com).
The error I encountered is the same in all respects except that no domain is specified.
My solution may help others who are drawn here by the title "Error: getaddrinfo EAI_AGAIN"
I encountered the error when trying to serve a Node.js & Vue.js app from a different VM from the one where the code was originally developed.
The file vue.config.js read:
module.exports = {
  devServer: {
    host: 'tstvm01',
    port: 3030,
  },
};
When served on the original machine, the start-up output is:
App running at:
- Local: http://tstvm01:3030/
- Network: http://tstvm01:3030/
Using the same settings on a VM tstvm07 got me an error very similar to the one the OP describes:
INFO Starting development server...
10% building modules 1/1 modules 0 active
events.js:183
    throw er; // Unhandled 'error' event
    ^
Error: getaddrinfo EAI_AGAIN
    at Object._errnoException (util.js:1022:11)
    at errnoException (dns.js:55:15)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
If it ain't already obvious, changing vue.config.js to read ...
module.exports = {
  devServer: {
    host: 'tstvm07',
    port: 3030,
  },
};
... solved the problem.
I started getting this error (different stack trace though) after making a trivial update to my GraphQL API application that is operated inside a docker container. For whatever reason, the container was having difficulty resolving a back-end service being used by the API.
After poking around to see if some change had been made in the docker base image I was building from (node:13-alpine, incidentally), I decided to try the oldest computer science trick of rebooting... I stopped and started the docker container and all went back to normal.
Clearly, this isn't a meaningful solution to the underlying problem - I am merely posting this since it did clear up the issue for me without going too deep down rabbit holes.
I was having this issue on docker-compose. It turns out I forgot to add my custom isolated named network to the service which couldn't be found.
TL;DR: Make sure, in your compose file, you have your custom networks defined on both services that need to talk to each other.
My error looked like this: Error: getaddrinfo EAI_AGAIN minio-service. The error was coming from my server's backend when making a call to the minio-service using the minio-service hostname. This told me that minio-service's running service was not reachable by my server's running service. The way I fixed this issue was to change the minio-service in my docker-compose from this:
docker-compose.yml
version: "3.8"
# ...
services:
  server:
    # ...
    networks:
      my-network:
    # ...
  minio-service:
    # ... (missing networks: section)
# ...
networks:
  my-network:
To include my custom isolated named network, like this:
docker-compose.yml
version: "3.8"
# ...
services:
  server:
    # ...
    networks:
      my-network:
    # ...
  minio-service:
    # ...
    networks:
      my-network:
    # ...
# ...
networks:
  my-network:
More details on docker-compose networking can be found here.
This issue can be related to your hosts file setup. Add the following line to your hosts file:
In Ubuntu (/etc/hosts):
127.0.0.1 localhost
In Windows (c:\windows\System32\drivers\etc\hosts):
127.0.0.1 localhost
In my case the problem was the docker networks ip allocation range, see this post for details
As xerq correctly pointed out, here's some more reference:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
I got the same error and solved it by updating the "hosts" file present under this location in Windows:
C:\Windows\System32\drivers\etc
Hope it helps!
In my case, while connected to a VPN, the error happened when running Ubuntu from inside Windows Terminal, but didn't happen when opening Ubuntu directly from Windows (not from inside Windows Terminal).
I had the same problem with AWS and Serverless. I tried the eu-central-1 region and it didn't work, so I had to change it to us-east-2 for the example.
I was getting this error after I recently added a new network to my docker-compose file.
I initially had these services:
services:
  frontend:
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
I decided to add a new network which hosts other services I wanted my frontend service to have access to, so I did this:
networks:
  moar:
    name: moar-network
    attachable: true
services:
  frontend:
    networks:
      - moar
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Unfortunately, the above made it so that my frontend service was no longer visible on the default network, and only visible in the moar network. This meant that the frontend service could no longer proxy requests to backend, therefore I was getting errors like:
Error occured while trying to proxy to: localhost:3005/graphql/
The solution is to add the default network to the frontend service's network list, like so:
networks:
  moar:
    name: moar-network
    attachable: true
services:
  frontend:
    networks:
      - moar
      - default # here
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Now we're peachy!
One last thing, if you want to see which services are running within a given network, you can use the docker network inspect <network_name> command to do so. This is what helped me discover that the frontend service was not part of the default network anymore.
Enabled Blaze and it still doesn't work?
Most probably you need to load .env from the right path: require('dotenv').config({ path: __dirname + './../.env' }); won't work (nor will any other path). Simply put the .env file in the functions directory, from which you deploy to Firebase.
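In other words (a small sketch of my own, relying on dotenv's documented default of resolving .env against the current working directory):

// functions/index.js - with functions/.env next to it,
// dotenv's default lookup (process.cwd() + '/.env') finds the file
// without any explicit path option.
require('dotenv').config();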
