I've been toying with DevSpace (using Helm charts) and am considering migrating to it from Skaffold and plain Kubernetes manifests. I can't seem to get the ingress controller working for local development: it comes back with 404 Not Found. I can reach the app via port-forwarding at localhost:3000, however.
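A rough way to tell who is actually answering the request (not part of the setup above, just a diagnostic sketch):

# a "Server: nginx" header on the 404 means the controller's default backend
# replied, i.e. no ingress rule matched the request
curl -v http://localhost/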
Like I've always done, I installed the ingress-nginx controller first for docker-desktop with:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml
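That manifest puts everything in the ingress-nginx namespace, so the install itself can be checked with:

kubectl get pods -n ingress-nginx
# on docker-desktop the LoadBalancer service should report localhost as its external IP
kubectl get svc -n ingress-nginx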
Then in my devspace.yaml I have the following:
version: v1beta10
images:
  client:
    image: app/client
    dockerfile: client/Dockerfile
    context: client/
deployments:
  - name: client
    helm:
      componentChart: true
      values:
        containers:
          - image: app/client
        service:
          ports:
            - port: 3000
        ingress:
          name: ingress
          rules:
            - host: localhost
              path: /
              pathType: Prefix
              servicePort: 3000
              serviceName: client
dev:
  ports:
    - name: client
      imageSelector: app/client
      forward:
        - port: 3000
          remotePort: 3000
  sync:
    - name: client
      imageSelector: app/client
      localSubPath: ./client
      excludePaths:
        - .git/
        - node_modules/
The Dockerfile is the same for both configurations.
FROM node:14-alpine
WORKDIR /app
COPY ./package.json ./
ENV CI=true
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
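If it helps, the image itself can be sanity-checked outside the cluster with something like this (tag and build context are illustrative):

docker build -t app/client ./client
docker run --rm -p 3000:3000 app/client
# the app should then answer on http://localhost:3000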
Furthermore, I've noticed that as I add services (e.g. /api, /admin, etc.) with corresponding ingress.rules, it creates an ingress for each service instead of just one for the entire application.
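What I'm after is a single rendered ingress that covers every path, roughly along these lines (api here is just a placeholder for one of the other services):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client
                port:
                  number: 3000
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api        # placeholder service
                port:
                  number: 5000   # placeholder port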
For reference, this is what I used to do with skaffold and manifests:
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: ingress-dev
spec:
  rules:
    - host: localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client-cluster-ip-service-dev
                port:
                  number: 3000
# client.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment-dev
spec:
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      component: client
      environment: development
  template:
    metadata:
      labels:
        component: client
        environment: development
    spec:
      containers:
        - name: client
          image: client
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service-dev
spec:
  type: ClusterIP
  selector:
    component: client
    environment: development
  ports:
    - port: 3000
      targetPort: 3000
# skaffold.yaml
apiVersion: skaffold/v2beta1
kind: Config
build:
  artifacts:
    - image: client
      context: client
      sync:
        manual:
          - src: 'src/**/*.js'
            dest: .
          - src: 'src/**/*.jsx'
            dest: .
          - src: 'package.json'
            dest: .
          - src: 'public/**/*.html'
            dest: .
          - src: 'src/assets/sass/**/*.scss'
            dest: .
          - src: 'src/build/**/*.js'
            dest: .
      docker:
        dockerfile: Dockerfile.dev
  local:
    push: false
deploy:
  kubectl:
    manifests:
      - k8s/ingress.yaml
      - k8s/client.yaml
I prefer using the ingress controller during development instead of port-forwarding. That way I can just go to localhost/, localhost/admin, localhost/api, etc. I've run into serious bugs before that never showed up with port-forwarding but did with the ingress controller, so I just don't trust port-forwarding.
Any suggestions for the devspace.yaml so that it creates one ingress instead of one for each service? Here is the devspace render output:
---
# Source: component-chart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "client"
  labels:
    "app.kubernetes.io/name": "client"
    "app.kubernetes.io/managed-by": "Helm"
  annotations:
    "helm.sh/chart": "component-chart-0.8.2"
spec:
  externalIPs:
  ports:
    - name: "port-0"
      port: 3000
      targetPort: 3000
      protocol: "TCP"
  selector:
    "app.kubernetes.io/name": "devspace-app"
    "app.kubernetes.io/component": "client"
  type: "ClusterIP"
---
# Source: component-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "client"
  labels:
    "app.kubernetes.io/name": "devspace-app"
    "app.kubernetes.io/component": "client"
    "app.kubernetes.io/managed-by": "Helm"
  annotations:
    "helm.sh/chart": "component-chart-0.8.2"
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      "app.kubernetes.io/name": "devspace-app"
      "app.kubernetes.io/component": "client"
      "app.kubernetes.io/managed-by": "Helm"
  template:
    metadata:
      labels:
        "app.kubernetes.io/name": "devspace-app"
        "app.kubernetes.io/component": "client"
        "app.kubernetes.io/managed-by": "Helm"
      annotations:
        "helm.sh/chart": "component-chart-0.8.2"
    spec:
      imagePullSecrets:
      nodeSelector:
        null
      nodeName:
        null
      affinity:
        null
      tolerations:
        null
      dnsConfig:
        null
      hostAliases:
        null
      overhead:
        null
      readinessGates:
        null
      securityContext:
        null
      topologySpreadConstraints:
        null
      terminationGracePeriodSeconds: 5
      ephemeralContainers:
        null
      containers:
        - image: "croner-app/client:AtrvTRR"
          name: "container-0"
          command:
          args:
          env:
            null
          envFrom:
            null
          securityContext:
            null
          lifecycle:
            null
          livenessProbe:
            null
          readinessProbe:
            null
          startupProbe:
            null
          volumeDevices:
            null
          volumeMounts:
      initContainers:
      volumes:
  volumeClaimTemplates:
---
# Source: component-chart/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "ingress"
  labels:
    "app.kubernetes.io/name": "client"
    "app.kubernetes.io/managed-by": "Helm"
  annotations:
    "helm.sh/chart": "component-chart-0.8.2"
spec:
  rules:
    - host: "localhost"
      http:
        paths:
          - backend:
              serviceName: client
              servicePort: 3000
            path: "/"
            pathType: "Prefix"
---
The biggest difference I can see is that what I used to use is apiVersion: networking.k8s.io/v1, while the devspace one is apiVersion: extensions/v1beta1. Perhaps the controller-v1.0.0 ingress controller I'm applying isn't compatible? Not sure...
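To see which Ingress API versions the cluster actually serves (a quick check against the docker-desktop context):

kubectl api-versions | grep -E 'networking|extensions'
kubectl api-resources | grep -i ingress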
In this particular case, the solution was to use an older version of the ingress-nginx controller, one that still supports the Ingress apiVersion DevSpace renders. The controller-v1.0.0 release only watches networking.k8s.io/v1 Ingress resources, so it ignores the extensions/v1beta1 ingress that this version of the component chart generates and answers every request with its default backend's 404. In my case, I was using devspace v5.16.0-alpha.0, and the following controller works with it:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.49.0/deploy/static/provider/cloud/deploy.yaml
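After applying it, the controller can be verified the same way as before (the service name below is the one the official manifest creates):

kubectl get pods -n ingress-nginx
kubectl get svc ingress-nginx-controller -n ingress-nginx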
Since this solution will change with newer versions of devspace and ingress-nginx, in general:
- Make sure the ingress-nginx controller version and the devspace version are compatible.
- Use devspace render to see how the ingress config is being generated and check that its apiVersion is compatible with the ingress controller version you kubectl apply.
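For example, to pull just the rendered ingress out of the output and eyeball its apiVersion (a rough sketch; any YAML-aware tool works just as well):

devspace render | grep -B 3 -A 15 'kind: Ingress'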