I'm using K3D to test a local cluster with a simple deployment. I'm following https://github.com/jpetazzo/container.training/tree/main/dockercoins which has 4 deployments, one of them being an Express app which serves some JS and HTML (https://github.com/jpetazzo/container.training/blob/main/dockercoins/webui/webui.js)
K3D cluster is created as such:
k3d cluster create k8cluster -p 8081:80@loadbalancer
So localhost:8081 reaches all my ingress routes. The ingress controller is in the kube-system namespace and my deployment and service are in the default namespace. I don't believe this is an issue, since the ingress controller manages rules across all namespaces.
Below is the deployment for the webui:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2025-02-23T03:20:59Z"
  generation: 1
  labels:
    app: webui
  name: webui
  namespace: default
  resourceVersion: "29220"
  uid: 0c8f8ba6-70c9-4a27-ba72-98b1607340b4
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: webui
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webui
    spec:
      containers:
      - image: dockercoins/webui:v0.1
        imagePullPolicy: IfNotPresent
        name: webui
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2025-02-23T03:20:59Z"
    lastUpdateTime: "2025-02-23T03:21:14Z"
    message: ReplicaSet "webui-6d54d8646b" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2025-02-23T18:43:56Z"
    lastUpdateTime: "2025-02-23T18:43:56Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
I have a simple service exposing the deployment as such:
apiVersion: v1
kind: Service
metadata:
  name: webui
  labels:
    app: webui
spec:
  selector:
    app: webui
  ports:
  - port: 80
    targetPort: 80
Exposing the pod through a NodePort service works:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webui
  name: webuinodeport
spec:
  ports:
  - name: 80-80
    nodePort: 30080
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: webui
  type: NodePort
But trying to create an ingress rule to access this deployment via its service is not working:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue
  #annotations:
  #  ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: traefik #nginx
  rules:
  - host: localhost
    http:
      paths:
      - path: /webui
        pathType: Prefix
        backend:
          service:
            name: webui
            port:
              number: 80
Below are things I confirmed are working:
#3, #4, and #5 all return the correct output and show index.html. But when I try to access the ingress route, all I get is: Cannot GET /webui
What am I missing? The service is valid and points to the correct pod, and everything works within the cluster, but I can't access it through the ingress.
This was not an issue with my ingress rules or any Kubernetes configuration. It was with how the path was defined in the ingress versus how the Express app serves requests. With pathType: Prefix, Traefik forwards the request to the pod with its original path (/webui) intact, but the Express app was serving static files on the root "/" path instead of the /webui path I defined in the ingress, so Express had no matching route and returned "Cannot GET /webui".
I had to adjust the static file middleware in Express to serve on the /webui path, and after adjusting the necessary routes I am able to access things properly.
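For reference, the fix on the Express side looks roughly like this. This is a sketch rather than the actual webui.js; the files directory and the /json route are taken from the dockercoins webui layout, but treat the exact names as assumptions:

```javascript
const express = require('express');
const app = express();

// Before: middleware and routes were registered at the root, so a request
// forwarded by the ingress as /webui had no matching route in Express:
//   app.use(express.static('files'));

// After: mount the static middleware under the same prefix the ingress
// uses, so /webui and /webui/index.html resolve to files in ./files.
app.use('/webui', express.static('files'));

// Any API routes the pages call need the same prefix (route name assumed):
app.get('/webui/json', (req, res) => {
  // ...existing handler logic, now reachable at /webui/json
  res.json({ ok: true });
});

app.listen(80);
```

The alternative would have been to keep the app serving at "/" and rewrite the path before it reaches the pod, but since I control the app, moving its routes under /webui was the simpler change.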