I have OpenShift Local (CRC) installed on my Windows notebook and would like to deploy a simple WildFly server to OpenShift. The goal is to provide two routes, one for the application port 8080 and one for the admin port 9990 of quay.io/wildfly/wildfly:33.0.2.Final-jdk21, via these URLs: app-route.example.com and adm-route.example.com.
I have configured a deployment, services and routes. If I port-forward the pod or the services, I can call http://localhost:9990 or http://localhost:8080; both work. But it is not possible to call the admin page via http://adm-route.example.com (a request to the application page via http://app-route.example.com is possible).
The deployment for the WildFly image:
-----deployment.yaml-----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wildfly-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wildfly
  template:
    metadata:
      labels:
        app: wildfly
    spec:
      containers:
        - name: wildfly-container
          image: quay.io/wildfly/wildfly:33.0.2.Final-jdk21
          ports:
            - containerPort: 8080
            - containerPort: 9990
-----deployment.yaml-----
Then I define two services, one for each port:
-----service-app.yaml-----
kind: Service
apiVersion: v1
metadata:
  name: app-service
  namespace: test
spec:
  ipFamilies:
    - IPv4
  ports:
    - name: 8080-tcp
      protocol: TCP
      port: 8080
      targetPort: 8080
  internalTrafficPolicy: Cluster
  type: ClusterIP
  ipFamilyPolicy: SingleStack
  selector:
    app: wildfly
-----service-app.yaml-----
-----service-adm.yaml-----
kind: Service
apiVersion: v1
metadata:
  name: adm-service
  namespace: test
spec:
  ipFamilies:
    - IPv4
  ports:
    - name: 9990-tcp
      protocol: TCP
      port: 9990
      targetPort: 9990
  internalTrafficPolicy: Cluster
  type: ClusterIP
  ipFamilyPolicy: SingleStack
  selector:
    app: wildfly
-----service-adm.yaml-----
For each service I have a route configured:
-----route-app.yaml-----
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: app-route
  namespace: test
spec:
  host: example.com
  to:
    kind: Service
    name: app-service
    weight: 100
  port:
    targetPort: 8080-tcp
  wildcardPolicy: None
-----route-app.yaml-----
-----route-adm.yaml-----
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: adm-route
  namespace: test
spec:
  host: example.com
  to:
    kind: Service
    name: adm-service
    weight: 100
  port:
    targetPort: 9990-tcp
  wildcardPolicy: None
-----route-adm.yaml-----
I create the project test:
oc login -u developer -p developer example.com
oc new-project test-project
This is so that the server can start with the configured jboss user:
oc login -u kubeadmin -p kubeadmin example.com
oc adm policy add-scc-to-user anyuid -n test -z default
And finally I apply the files:
oc login -u developer -p developer example.com
oc apply -f deployment.yaml
oc apply -f service-app.yaml
oc apply -f service-adm.yaml
oc apply -f route-app.yaml
oc apply -f route-adm.yaml
Status after configuration:
oc get pod
NAME READY STATUS RESTARTS AGE
wildfly-deployment-65d59dbcb5-4ql2x 1/1 Running 0 107s
oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
adm-service ClusterIP 10.217.4.237 <none> 9990/TCP 61s
app-service ClusterIP 10.217.5.148 <none> 8080/TCP 65s
oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
adm-route adm-route.apps-crc.testing adm-service 9990-tcp None
app-route app-route.apps-crc.testing app-service 8080-tcp None
I can port-forward the pod to 8080 and 9990, and I can also do a port-forward for each service. In every case a call to the admin page or the application page works (curl http://localhost:8080 or curl http://localhost:9990).
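For reference, the port-forward checks look roughly like this (a sketch; the pod name is the generated one shown by oc get pod above):
oc port-forward pod/wildfly-deployment-65d59dbcb5-4ql2x 8080:8080 9990:9990
oc port-forward svc/app-service 8080:8080
oc port-forward svc/adm-service 9990:9990
curl http://localhost:8080
curl http://localhost:9990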
But if I use the route URLs, the request to app-route.example.com works fine, while the request to adm-route.example.com is answered with an error.
This is the error message when I call adm-route.example.com:
curl http://adm-route.example.com
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<div>
<h1>Application is not available</h1>
<p>The application is currently not serving requests at this endpoint. It may not have been started or is still starting.</p>
<div class="alert alert-info">
<p class="info">
Possible reasons you are seeing this page:
</p>
<ul>
<li>
<strong>The host doesn't exist.</strong>
Make sure the hostname was typed correctly and that a route matching this hostname exists.
</li>
<li>
<strong>The host exists, but doesn't have a matching path.</strong>
Check if the URL path was typed correctly and that the route was created using the desired path.
</li>
<li>
<strong>Route and path matches, but all pods are down.</strong>
Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
</li>
</ul>
</div>
</div>
</body>
</html>
I have also tried to configure two Ingress resources:
-----ingress-app.yaml-----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: app-ingress.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 8080
-----ingress-app.yaml-----
-----ingress-adm.yaml-----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: adm-ingress
spec:
  rules:
    - host: adm-ingress.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: adm-service
                port:
                  number: 9990
-----ingress-adm.yaml-----
I can apply both files
oc apply -f ingress-app.yaml
oc apply -f ingress-adm.yaml
oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
adm-ingress-mzwx6 adm-ingress.apps-crc.testing / adm-service 9990-tcp None
adm-route adm-route.apps-crc.testing adm-service 9990-tcp None
app-ingress-bm7j6 app-ingress.apps-crc.testing / app-service 8080-tcp None
app-route app-route.apps-crc.testing app-service 8080-tcp None
And here it is the same situation: a call to http://app-ingress.example.com works, but a call to http://adm-ingress.example.com does not.
I can also check the endpoints of the services:
oc describe svc adm-service
Name: adm-service
Namespace: test
Labels: <none>
Annotations: <none>
Selector: app=wildfly
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.217.4.237
IPs: 10.217.4.237
Port: 9990-tcp 9990/TCP
TargetPort: 9990/TCP
Endpoints: 10.217.0.46:9990
Session Affinity: None
Events: <none>
oc describe svc app-service
Name: app-service
Namespace: test
Labels: <none>
Annotations: <none>
Selector: app=wildfly
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.217.5.148
IPs: 10.217.5.148
Port: 8080-tcp 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.217.0.46:8080
Session Affinity: None
Events: <none>
If I do a curl to the endpoint of the application service, I get a normal answer:
tmp-shell ~ curl http://10.217.0.46:8080
-> HTML of the normal application
tmp-shell ~
If I do a curl to the endpoint of the admin service, I receive an error:
tmp-shell ~ curl http://10.217.0.46:9990
curl: (7) Failed to connect to 10.217.0.46 port 9990 after 0 ms: Couldn't connect to server
It seems there is something wrong - perhaps with the service?
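To rule out the service, a further check (assuming curl is available inside the WildFly image, which I have not verified) would be to compare the pod's loopback address with its pod IP from inside the pod:
oc rsh wildfly-deployment-65d59dbcb5-4ql2x curl http://127.0.0.1:9990
oc rsh wildfly-deployment-65d59dbcb5-4ql2x curl http://10.217.0.46:9990
If only the loopback call answers on 9990, the service and the route are fine and the problem lies inside the container.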
------------------
The core issue probably lies in how WildFly handles its management interface (port 9990).
In the WildFly configuration file, check whether the management interface is bound to 127.0.0.1 (loopback). If it is, that is the problem: it only listens for connections originating from within the same container (or from the host when port-forwarded).
You have to change the management interface to bind to 0.0.0.0 (all interfaces), so it listens on every available network interface and accepts connections from any IP address that can reach the container.
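A quick way to verify this in the running pod (a sketch; it assumes the image keeps its configuration under /opt/jboss/wildfly and that grep is available in the container):
oc rsh wildfly-deployment-65d59dbcb5-4ql2x grep jboss.bind.address.management /opt/jboss/wildfly/standalone/configuration/standalone.xml
In the default standalone.xml the management interface uses ${jboss.bind.address.management:127.0.0.1}, so it stays on loopback unless -bmanagement is passed at startup, while (per the upstream image documentation) the default start command passes only -b 0.0.0.0 for the public interface. That matches what you see: 8080 answers on the pod IP, 9990 does not.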
127.0.0.1 is the loopback address: services bound to it are only reachable from within the same machine or container. 0.0.0.0 means "listen on all available network interfaces": services bound to it are reachable from any IP address that can reach the machine or container.
A port-forward (oc port-forward) acts as a proxy: the connection is tunnelled into the pod and arrives on its loopback interface, so even a port bound to 127.0.0.1 appears to work. The OpenShift router, on the other hand, connects to the pod IP (10.217.0.46), where nothing is listening on port 9990. Because of this proxy behavior the application port appears to work everywhere, while the management port only works through a port-forward.
The management interface (port 9990) is designed for administrative tasks and is intentionally restricted to loopback by default for security reasons. Port-forwarding does not change that binding: as long as the management interface is bound to 127.0.0.1, connections arriving on the pod IP, such as those from the router, are refused, which is exactly the "Couldn't connect to server" you get when curling 10.217.0.46:9990.
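A minimal sketch of the fix in deployment.yaml, assuming the image keeps its start script at /opt/jboss/wildfly/bin/standalone.sh (the path documented for the upstream WildFly image): override the container command so that -bmanagement 0.0.0.0 is passed in addition to the default -b 0.0.0.0.
-----deployment.yaml (container section)-----
      containers:
        - name: wildfly-container
          image: quay.io/wildfly/wildfly:33.0.2.Final-jdk21
          # bind both the public and the management interface to all interfaces
          command: ["/opt/jboss/wildfly/bin/standalone.sh"]
          args: ["-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
          ports:
            - containerPort: 8080
            - containerPort: 9990
-----deployment.yaml (container section)-----
After re-applying the deployment and letting the pod restart, the curl against the pod IP on port 9990 and the request to the adm-route should both answer. Since the management interface is restricted by default for security reasons, exposing it through a public route is really only appropriate for a local test setup like CRC.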