I would like to deploy an ssh bastion jumper as a deployment in a Kubernetes cluster. This should receive its sshd_config as well as the authorized_keys via a ConfigMap or Secret. These can of course change over time, so that a reload of the sshd service becomes necessary.
How can I automate this process? Existing ssh connections should not be killed when updating the config or authorized_keys file.
My Dockerfile is:
FROM docker.io/alpine:latest
RUN apk add --no-cache openssh-server
EXPOSE 22/tcp
CMD ["/usr/sbin/sshd", "-D", "-e"]
My deployment looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sshd-server
  namespace: sshd
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: sshd-server
    spec:
      containers:
        - name: my-sshd-server
          image: my-sshd-server-image:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 22
          volumeMounts:
            - mountPath: /etc/ssh/sshd_config
              name: sshd_config
            - mountPath: /user/.ssh/authorized_keys
              name: authorized_keys
...
If you mount a ConfigMap as a directory, the directory contents will update when you update the ConfigMap (possibly after a short delay). That means if you were just concerned about your authorized_keys file, you could do something like this:
Create the following ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-config
data:
  authorized_keys: |
    ssh-rsa ...
    ssh-rsa ...
  sshd_config: |
    StrictModes no
    AuthorizedKeysFile /config/authorized_keys
And deploy your ssh pod using something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sshtest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sshtest
  template:
    metadata:
      labels:
        app: sshtest
    spec:
      containers:
        - image: quay.io/larsks/alpine-sshd:5
          imagePullPolicy: Always
          name: sshtest
          ports:
            - containerPort: 22
              name: ssh
          volumeMounts:
            - name: ssh-config
              mountPath: /config
            - name: ssh-config
              mountPath: /etc/ssh/sshd_config
              subPath: sshd_config
            - name: ssh-data
              mountPath: /etc/ssh
      volumes:
        - name: ssh-config
          configMap:
            name: ssh-config
            defaultMode: 0440
        - name: ssh-data
          emptyDir: {}
Where quay.io/larsks/alpine-sshd:5 is simply alpine + sshd + an ENTRYPOINT that runs ssh-keygen -A. You should build your own rather than use some random person's image :). This will work on straight Kubernetes but will not run on OpenShift without additional work.
With this configuration (and an appropriate Service) you can ssh into the container as root using the private key that corresponds to one of the public keys contained in the authorized_keys part of the ssh-config ConfigMap.
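For reference, a minimal Service for this Deployment might look like the sketch below; the NodePort type and the app: sshtest selector are assumptions chosen to match the labels on the Deployment above, so adjust them to your environment:
apiVersion: v1
kind: Service
metadata:
  name: sshtest
spec:
  type: NodePort          # assumption: use LoadBalancer or ClusterIP if that fits better
  selector:
    app: sshtest          # matches the pod labels in the Deployment above
  ports:
    - name: ssh
      port: 22
      targetPort: 22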
When you update the ConfigMap, the container will eventually see the updated values, no restarts required.
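For example, one way to push new keys or a new config is to regenerate the ConfigMap from local files and re-apply it (the local file names here are assumptions):
kubectl create configmap ssh-config \
  --from-file=authorized_keys=./authorized_keys \
  --from-file=sshd_config=./sshd_config \
  --dry-run=client -o yaml | kubectl apply -f -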
If you really want to respond to changes in sshd_config, that becomes a little more complicated. sshd itself doesn't have any built-in facility for responding to changes in the configuration file, so you'll need to add a sidecar container that watches for config file updates and then sends the appropriate signal (SIGHUP) to sshd. Something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sshtest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sshtest
  template:
    metadata:
      labels:
        app: sshtest
    spec:
      shareProcessNamespace: true
      containers:
        - image: docker.io/alpine:latest
          name: reloader
          volumeMounts:
            - name: ssh-config
              mountPath: /config
            - name: ssh-data
              mountPath: /etc/ssh
          command:
            - /bin/sh
            - -c
            - |
              # Poll for differences between the ConfigMap copy and the live
              # config; when they diverge, install the new file and HUP sshd.
              while true; do
                if [ -f /etc/ssh/sshd_config ] && [ -f /etc/ssh/sshd.pid ]; then
                  if ! diff -q /config/sshd_config /etc/ssh/sshd_config; then
                    cp /config/sshd_config /etc/ssh/sshd_config
                    kill -HUP $(cat /etc/ssh/sshd.pid)
                  fi
                fi
                sleep 10
              done
        - image: quay.io/larsks/alpine-sshd:6
          imagePullPolicy: Always
          name: sshd
          ports:
            - containerPort: 22
              name: ssh
          volumeMounts:
            - name: ssh-config
              mountPath: /config
            - name: ssh-data
              mountPath: /etc/ssh
      volumes:
        - name: ssh-config
          configMap:
            name: ssh-config
            defaultMode: 0600
        - name: ssh-data
          emptyDir: {}
This requires a slightly modified container image that includes the following ENTRYPOINT script:
#!/bin/sh

# Seed the /etc/ssh emptyDir with the config from the ConfigMap,
# generate host keys, then exec the container command (sshd).
if [ -f /config/sshd_config ]; then
  cp /config/sshd_config /etc/ssh/sshd_config
fi
ssh-keygen -A
exec "$@"
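For completeness, a sketch of a Dockerfile that wires in such an ENTRYPOINT; the entrypoint.sh name is an assumption, and otherwise this mirrors the Dockerfile from the question:
FROM docker.io/alpine:latest
RUN apk add --no-cache openssh-server
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 22/tcp
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D", "-e"]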
With this configuration, the reloader container watches for changes in the configuration file supplied by the ConfigMap. When it detects a change, it copies the updated file to the correct location and then sends a SIGHUP to sshd, which reloads its configuration. This does not interrupt existing ssh connections.
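One way to check this, assuming the resource names used above: open an ssh session through the bastion, edit the ConfigMap, and watch the sshd container's logs for the reload.
# Change sshd_config in the ConfigMap (names assume the manifests above)
kubectl edit configmap ssh-config

# Within roughly the kubelet sync delay plus the reloader's 10-second poll,
# sshd should log that it received SIGHUP; the pre-existing session stays up.
kubectl logs deploy/sshtest -c sshd | grep -i sighup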