I'm experiencing an issue with my AdGuard Home (AGH) pod: it has to be reconfigured every time the container shuts down, whether manually or at server restart.
These are the various YAMLs:
---
apiVersion: v1
kind: Namespace
metadata:
  name: adguard
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: adguard-data-pv
  namespace: adguard
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/tank/apps/adguard/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: adguard-conf-pv
  namespace: adguard
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/tank/apps/adguard/conf"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: adguard-data-pvc
  namespace: adguard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: adguard-data-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: adguard-conf-pvc
  namespace: adguard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: adguard-conf-pv
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: adguard-config
  namespace: adguard
data:
  AdGuardHome.yaml: |
    bind_host: 0.0.0.0
    bind_port: 3000
    auth_name: "admin"
    auth_pass: "[REDACTED]"
    language: "en"
    rlimit_nofile: 0
    rlimit_nproc: 0
    log_file: ""
    log_syslog: false
    log_syslog_srv: ""
    pid_file: ""
    verbose: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adguard-deployment
  namespace: adguard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: adguard
  template:
    metadata:
      labels:
        app: adguard
    spec:
      containers:
        - name: adguard-home
          image: adguard/adguardhome:latest
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "1000m"
          env:
            - name: AGH_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: adguard-config
                  key: AdGuardHome.yaml
          ports:
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 53
              name: dns-udp
              protocol: UDP
            - containerPort: 67
              name: dhcp-one
              protocol: UDP
            - containerPort: 68
              name: dhcp-two
              protocol: TCP
            - containerPort: 68
              name: dhcp-three
              protocol: UDP
            - containerPort: 80
              name: http-tcp
              protocol: TCP
            - containerPort: 443
              name: doh-tcp
              protocol: TCP
            - containerPort: 443
              name: doh-udp
              protocol: UDP
            - containerPort: 3000
              name: http-initial
            - containerPort: 784
              name: doq-one
              protocol: UDP
            - containerPort: 853
              name: dot
              protocol: TCP
            - containerPort: 853
              name: doq-two
              protocol: UDP
            - containerPort: 5443
              name: dnscrypt-tcp
              protocol: TCP
            - containerPort: 5443
              name: dnscrypt-udp
              protocol: UDP
          volumeMounts:
            - name: adguard-data
              mountPath: /opt/adguardhome/work
            - name: adguard-conf
              mountPath: /opt/adguardhome/conf
      volumes:
        - name: adguard-data
          persistentVolumeClaim:
            claimName: adguard-data-pvc
        - name: adguard-conf
          persistentVolumeClaim:
            claimName: adguard-conf-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: adguard-service
  namespace: adguard
spec:
  selector:
    app: adguard
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      name: http-initial
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http-tcp
    - protocol: UDP
      port: 53
      targetPort: 53
      name: dns-udp
    - protocol: TCP
      port: 53
      targetPort: 53
      name: dns-tcp
    - protocol: UDP
      port: 67
      targetPort: 67
      name: dhcp-one
    - protocol: TCP
      port: 68
      targetPort: 68
      name: dhcp-two
    - protocol: UDP
      port: 68
      targetPort: 68
      name: dhcp-three
    - protocol: TCP
      port: 443
      targetPort: 443
      name: doh-tcp
    - protocol: UDP
      port: 443
      targetPort: 443
      name: doh-udp
    - protocol: UDP
      port: 784
      targetPort: 784
      name: doq-one
    - protocol: TCP
      port: 853
      targetPort: 853
      name: dot
    - protocol: UDP
      port: 853
      targetPort: 853
      name: doq-two
    - protocol: TCP
      port: 5443
      targetPort: 5443
      name: dnscrypt-tcp
    - protocol: UDP
      port: 5443
      targetPort: 5443
      name: dnscrypt-udp
  type: LoadBalancer
  externalTrafficPolicy: Local
I have to admit that I am new to Kubernetes, so maybe I am doing something wrong? I do, however, find it puzzling that Plex, deployed in a similar fashion, works just fine: I can stop it, destroy it, re-deploy it, and it starts as if nothing ever happened.
I'm using microk8s and MetalLB, over ZFS (for the data).
I found out what the issue was: applying the various YAMLs for the first time spins up the pod/stack, which creates an AdGuardHome.yaml file from the template/params in adguard-config.yml. If you apply changes through the web UI and then do a cat /path/to/AdGuardHome.yaml, you can see the file's contents change (i.e. changes made in the web UI get written to that file). However, a ConfigMap is essentially static in k8s: changes made inside the pod never flow back into it (unless you set up some synchronization/reloading mechanism yourself), which basically means that each time you reboot the system or destroy/re-deploy the pod, the original ConfigMap contents get applied again, clobbering whatever was configured through the web UI.
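You can watch this drift happen yourself; as a rough sketch (assuming the deployment and ConfigMap names from the manifests above), compare the live file inside the pod with what the ConfigMap holds:
microk8s kubectl exec -n adguard deploy/adguard-deployment -- cat /opt/adguardhome/conf/AdGuardHome.yaml
microk8s kubectl get configmap adguard-config -n adguard -o jsonpath='{.data.AdGuardHome\.yaml}'
After a change in the web UI, the first command reflects it and the second doesn't; after a re-deploy, the two match again.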
My workaround for this, at the moment, is just to comment out the following bit inside adguard-deployment.yml:
...
#env:
#  - name: AGH_CONFIG
#    valueFrom:
#      configMapKeyRef:
#        name: adguard-config
#        key: AdGuardHome.yaml
...
And then running # microk8s kubectl apply -f adguard-deployment.yml.
I know this might not be the optimal/right way to do it, but it works for now; at least until I reach a better understanding of k8s.
As a minor addendum: I think a viable workaround would be to add an initContainer that checks whether the file already exists at the given path and, if it doesn't, creates it from the contents of adguard-config.yml; see the sketch below.
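An untested sketch of that idea, to be merged into the Deployment above (the seed-config name, the busybox image, and the adguard-bootstrap volume are my own placeholders; the ConfigMap is mounted as a volume for the initContainer instead of being passed via the env var):
spec:
  template:
    spec:
      initContainers:
        - name: seed-config
          image: busybox:1.36
          # Only seed the bootstrap config if AdGuardHome.yaml doesn't exist yet,
          # so edits made through the web UI survive restarts/re-deploys.
          command:
            - sh
            - -c
            - |
              if [ ! -f /opt/adguardhome/conf/AdGuardHome.yaml ]; then
                cp /bootstrap/AdGuardHome.yaml /opt/adguardhome/conf/AdGuardHome.yaml
              fi
          volumeMounts:
            - name: adguard-conf
              mountPath: /opt/adguardhome/conf
            - name: adguard-bootstrap
              mountPath: /bootstrap
      volumes:
        - name: adguard-bootstrap
          configMap:
            name: adguard-config
The adguard-conf volume is the same PVC-backed volume the main container already mounts, and adguard-bootstrap would simply be added alongside the existing volumes; on first start the initContainer seeds the config, and on every later start it sees the file and does nothing.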