I have a Kubernetes cluster with 3 nodes, and its default StorageClass uses Ceph with dynamic provisioning.
My question is:
Is there a way to copy files that are available on the CI/CD server (on-premises GitLab) to one or more of the Pods' PVCs?
These files must be available before the application starts.
Something like a bind mount in Docker would be ideal.
I looked into ConfigMaps, but they're too small (1Mi).
In the general case, no, there's no way to read or write an arbitrary (persistent) volume from outside the cluster. For some storage types it might be possible; if you have an NFS-backed volume, for example, you probably have enough details to mount the same storage from another system.
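As a rough sketch of the NFS case, assuming the PV is NFS-backed and the CI runner can reach the NFS server (the PV name and mount point here are placeholders):

```bash
# Read the NFS server and export path out of the PersistentVolume spec
kubectl get pv <pv-name> -o jsonpath='{.spec.nfs.server}:{.spec.nfs.path}'

# Mount the same export from the CI runner and copy files in
sudo mount -t nfs <server>:<export-path> /mnt/shared
cp -r build-artifacts/ /mnt/shared/
```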
If the volume will be mountable (it has the ReadWriteMany access mode), then one hacky option is to create a temporary do-nothing Pod (`command: [sleep, infinity]`) that mounts the volume. You know the Pod's name, and so you can `kubectl cp` files into it, and thence into the mounted volume.
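A minimal sketch of that approach, assuming an existing ReadWriteMany PVC named `shared-data` (all names here are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-loader                     # temporary helper Pod; delete it when done
spec:
  containers:
    - name: loader
      image: busybox
      command: ["sleep", "infinity"]   # do nothing, just keep the Pod alive
      volumeMounts:
        - name: data
          mountPath: /data             # the PVC contents appear here inside the Pod
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-data         # your existing ReadWriteMany PVC
```

Then, from the CI job (using a kubeconfig that can reach the cluster), copy the files in and remove the helper:

```bash
kubectl cp ./artifacts pvc-loader:/data/artifacts
kubectl delete pod pvc-loader
```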
For the use case you describe, a PersistentVolumeClaim might not be the right approach: it's hard to get a ReadWriteMany volume, and as noted here it's hard to access from outside the cluster. Some other options:
If this is static data that is tied to an application build, `COPY` it into your image in your CI pipeline. (This has the usual advantages of making sure applications don't see other versions' data, and of being able to easily roll back if needed.)
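A sketch of what that could look like; the base image and paths are placeholders:

```dockerfile
# Hypothetical Dockerfile built by the CI pipeline
FROM registry.example.com/myapp:base

# Bake the static data produced by the CI job into the image,
# so it is present before the application starts
COPY build-output/data/ /app/data/

CMD ["/app/run"]
```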
You could move the logic to build the data into a Kubernetes Job, at which point it's running inside the cluster and can mount the RWX PVC.
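A minimal sketch of such a Job, reusing the hypothetical `shared-data` PVC from above and a placeholder image that produces the files:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: load-shared-data
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: loader
          image: registry.example.com/data-builder:latest   # placeholder image that generates the files
          command: ["sh", "-c", "cp -r /build-output/. /data/"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-data     # the ReadWriteMany PVC the application Pods also mount
```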
Assuming the data is read-only, you could run an HTTP server outside the cluster and have the application fetch it from there. (As a variant, if the data must exist as local files, have an init container or an entrypoint wrapper script fetch it at Pod/container startup.)
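A sketch of the init-container variant, assuming an HTTP endpoint you control (the URL and image names are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  initContainers:
    - name: fetch-data
      image: busybox
      # Download and unpack the files before the main container starts; URL is a placeholder
      command: ["sh", "-c", "wget -qO- http://files.example.internal/data.tar.gz | tar -xz -C /data"]
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: app
      image: registry.example.com/myapp:latest   # placeholder application image
      volumeMounts:
        - name: data
          mountPath: /app/data                   # the fetched files are visible here at startup
  volumes:
    - name: data
      emptyDir: {}                               # no PVC needed; data is refetched on each Pod start
```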