I am running a Quarkus RESTful web service inside a Docker container on a VM (Ubuntu 22.04). The stack consists of an Angular frontend (named `frontend`), a Quarkus backend (named `quarkus`), an authorization server (named `keycloak-w`), and a MySQL database (named `mysql-kc`) that Keycloak uses as persistent storage. The goal is to forward only the ports of the frontend service, which is served via nginx. However, for testing purposes I exposed port 8080 of the Quarkus web service to check whether I can access it. Unfortunately, I cannot. First, here is the `docker-compose.yml`; the setup works on my local machine:
```yaml
version: '3.9'
services:
  frontend:
    build:
      context: ../
      network: host
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    environment:
      KEYCLOAK_URL: 'http://keycloak-w:8080'
      KEYCLOAK_REALM: code2uml
      KEYCLOAK_CLIENT_ID: code2uml
      BACKEND: 'http://quarkus:8080'
      DOMAIN: 'my.domain'
    volumes:
      - ./imports/java/:/opt/jboss/container/java/run/
    networks:
      - keycloak-and-mysql-network
  quarkus:
    build:
      context: ./backend
      network: host
    ports:
      - "8080:8080"
    restart: unless-stopped
    environment:
      KEYCLOAK: keycloak-w
    volumes:
      - ./imports/java/:/opt/jboss/container/java/run/
    networks:
      - keycloak-and-mysql-network
  mysql-kc:
    image: mysql:8.0.27
    ports:
      - 3366:3306
    restart: unless-stopped
    environment:
      # The user, password and database that Keycloak
      # is going to create and use
      MYSQL_USER: keycloak_user
      MYSQL_PASSWORD: keycloak_password
      MYSQL_DATABASE: keycloak_db
      # Self-explanatory
      MYSQL_ROOT_PASSWORD: root_password
    healthcheck:
      test: [ "CMD-SHELL", "mysqladmin ping -P 3306 -proot_password | grep 'mysqld is alive' || exit 1" ]
      interval: 10s
      timeout: 30s
      retries: 10
    volumes:
      - keycloak-and-mysql-volume:/var/lib/mysql
      # - ./db-data:/var/lib/mysql
    networks:
      - keycloak-and-mysql-network
  keycloak-w:
    image: keycloak/keycloak:22.0.5
    ports:
      - 8081:8080
    restart: unless-stopped
    command:
      - start-dev
      - --import-realm
    environment:
      # User and password for the Administration Console
      KEYCLOAK_ADMIN: timmyOTool
      KEYCLOAK_ADMIN_PASSWORD: myPassword
      KEYCLOAK_IMPORT: /opt/keycloak/data/import/code2uml-realm.json
      KC_HOSTNAME_STRICT: 'false'
      KC_DB: mysql
      KC_DB_URL: 'jdbc:mysql://mysql-kc:3306/keycloak_db'
      KC_DB_USERNAME: keycloak_user
      KC_DB_PASSWORD: keycloak_password
      KC_Hostname: localhost
      KC_HTTPS_KEY_STORE_PASSWORD: secret
      DB: keycloak_db
      DB_URL_HOST: mysql-kc
      DB_SCHEMA: keycloak
      DB_USERNAME: keycloak_user
      DB_PASSWORD: keycloak_password
      DB_PORT: 3306
    depends_on:
      mysql-kc:
        condition: service_healthy
    volumes:
      - ./imports:/opt/keycloak/data/import
      - ./export:/export
    networks:
      - keycloak-and-mysql-network
networks:
  keycloak-and-mysql-network:
volumes:
  keycloak-and-mysql-volume:
```
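One useful first step is to confirm that the service is reachable from inside the compose network before debugging the host side. A sketch, assuming `curl` is available in the `quarkus` image and using `curlimages/curl` as one convenient throwaway client (`<project>` stands for your compose project name, by default the directory name):

```shell
# From inside the quarkus container itself (if the image ships curl):
docker compose exec quarkus curl -s http://localhost:8080

# Or from a throwaway container attached to the same user-defined network;
# compose prefixes the network name with the project name:
docker run --rm --network <project>_keycloak-and-mysql-network \
  curlimages/curl -s http://quarkus:8080
```

If these succeed while `curl localhost:8080` on the host fails, the problem lies in the host-side port publishing rather than in Quarkus itself.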
So I checked the following on the VM (via PuTTY and SSH): if I run `curl localhost:8080`, it takes some time and then fails with `curl: (56) Recv failure: Connection reset by peer`. In `/etc/hosts` there is a mapping from `127.0.0.1` to `localhost`. I also found this question (docker-compose up not forwarding port), so I took a look at the logs of the Quarkus web service: it tells me that it is listening on `http://0.0.0.0:8080`, so the application is not binding to `localhost`:

```
2024-09-16 15:19:38,712 INFO [io.quarkus] (main) my-service on JVM (powered by Quarkus 3.11.2) started in 6.259s. Listening on: http://0.0.0.0:8080
```
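The exact failure mode is informative here. If nothing were listening on the host port at all, `curl` would fail immediately with "connection refused" (exit code 7); a "Recv failure: Connection reset by peer" (exit code 56) instead means something on the host accepted the TCP connection but dropped it. A minimal illustration of the refused case, assuming port 9999 is unused on the machine:

```shell
# Nothing listens on port 9999, so the kernel refuses the connection outright:
curl -s http://localhost:9999
echo "exit code: $?"    # exit code: 7 ("couldn't connect")

# Exit code 56 ("Recv failure: Connection reset by peer"), as seen on the VM,
# means the TCP handshake succeeded -- docker-proxy accepted the connection on
# the published port -- but it was torn down before any data came back,
# i.e. the proxy could not reach the container behind it.
```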
When I remove the `networks` section in the `docker-compose.yml` (under `quarkus`) and replace it with `network_mode: host`, everything works fine and I can even reach the web service from my local machine (`curl my.domain:8080` works and returns some Quarkus-specific HTML). However, that is not what I want: in the end I want no port forwarding at all (except for the frontend service, which is my reverse proxy); this is just for testing purposes.

On the other hand, the setup works fine on my local machine, where the same `curl localhost:8080` returns some Quarkus-specific HTML.

Does anybody have an idea why the service is not accessible although it is up and running (at least on the VM) and the port is forwarded? Do I maybe need some kind of network bridge? Any help is kindly appreciated.
UPDATE: I compared the network settings (via `docker inspect container_id`) of the locally running web service with those of the corresponding container on the VM, and noticed the following difference:

Local:
```json
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "5b43932cc258003afdef4d9403f53beb0ff5365e0285ca233546895d10105a67",
    "SandboxKey": "/var/run/docker/netns/5b43932cc258",
    "Ports": {
        "8080/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "8080"
            }
        ]
    },
```
VM:

```json
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "720e610735d12a295499e70146256fdcb2b1c9320f8974794323fddaf6ade942",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {
        "8080/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "8080"
            },
            {
                "HostIp": "::",
                "HostPort": "8080"
            }
        ]
    },
```
Is the problem perhaps somehow related to the use of IPv6 addresses?
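To compare the two machines side by side, the inspect output can be narrowed down to just the port bindings with `docker inspect --format '{{json .NetworkSettings.Ports}}' <container_id>`. A sketch against a saved excerpt of the VM output above:

```shell
# Saved excerpt of the VM's port bindings; the extra "::" entry is the
# IPv6 wildcard binding that the local machine does not show:
cat > /tmp/vm-ports.json <<'EOF'
{"8080/tcp": [{"HostIp": "0.0.0.0", "HostPort": "8080"},
              {"HostIp": "::", "HostPort": "8080"}]}
EOF

# Two host bindings on the VM (IPv4 wildcard + IPv6 wildcard), one locally:
grep -c '"HostIp"' /tmp/vm-ports.json    # 2
```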
I was finally able to find a solution to my problem (the original link can be found here). In case the links above disappear in the future, these are the steps I executed on my VM:

Edit `/etc/netplan/50-cloud-init.yaml` and add `en` in front of the `*`:

```yaml
network:
  ethernets:
    all:
      dhcp4: true
      dhcp6: true
      match:
        name: 'en*'
  renderer: networkd
  version: 2
```
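The key is the `match` stanza: netplan interprets `name` as a glob, so `en*` matches the usual Ubuntu interface names (for example `enp0s3` or `ens160`) but not `docker0`. The same glob semantics can be illustrated in plain shell:

```shell
# 'en*' matches the physical NICs but not Docker's bridge interface:
for iface in enp0s3 ens160 docker0; do
  case "$iface" in
    en*) echo "$iface: managed by netplan (DHCP)";;
    *)   echo "$iface: left alone";;
  esac
done
```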
Then re-apply the configuration and restart the affected services:

```shell
sudo netplan apply
sudo systemctl restart systemd-networkd
# Docker needs to be restarted to request a new IP
sudo systemctl restart docker
```
Explanation: Ubuntu 22.04 uses netplan to assign IP addresses via DHCP. The change in the configuration file above restricts netplan's DHCP to interfaces matching `en*`, so it no longer touches Docker's `docker0` interface.