The cluster nodes are on-prem VMware servers; we used Rancher just to build the Kubernetes cluster.
The build was successful, but when we try to host apps that use PVCs we run into problems: dynamic volume provisioning isn't happening, and the PVCs are stuck in the Pending state.
The vSphere storage class is being used, and our vSphere admins confirmed that the VMs have visibility to the datastores, so in principle it should work.
While configuring the cluster we supplied the cloud provider credentials according to the Rancher docs:
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    disk:
      scsicontrollertype: pvscsi
    global:
      datacenters: nxs
      insecure-flag: true
      port: '443'
      soap-roundtrip-count: 0
      user: k8s_volume_svc@vsphere.local
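(For context, the RKE docs' version of this block also carries a `virtual_center` entry naming the vCenter host the credentials belong to. A sketch of that companion section — the hostname and password below are placeholders; only the user, port, and datacenter are from our config:)

```yaml
# Sketch: companion virtual_center entry from the RKE cloud provider docs.
# vcenter.example.local and the password are placeholders, not our real values.
virtual_center:
  vcenter.example.local:
    user: k8s_volume_svc@vsphere.local
    password: "<redacted>"
    port: '443'
    datacenters: nxs
```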
StorageClass YAML:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nxs01-k8s-0004
parameters:
  datastore: ds1_K8S_0004
  diskformat: zeroedthick
reclaimPolicy: Delete
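(For comparison, the Rancher example passes a storage policy instead of a datastore. A sketch of what that variant of our class would look like — `gold` is a placeholder SPBM policy name, and `kubernetes.io/vsphere-volume` is the in-tree provisioner the Rancher example uses:)

```yaml
# Sketch: the storage-policy variant of the class, per the Rancher example.
# 'gold' is a placeholder policy name, not one that exists in our vCenter.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nxs01-k8s-0004
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagePolicyName: gold
reclaimPolicy: Delete
```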
PVC YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: arango
  namespace: arango
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: nxs01-k8s-0004
Now I want to understand why my PVCs are stuck in the Pending state. Is there some other step I've missed?
I saw in the Rancher documentation that a Storage Policy has to be given as an input: https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/#creating-a-storageclass
A VMware document refers to it as an optional parameter, and also has a statement at the top saying it doesn't apply to tools that use CSI (Container Storage Interface):
https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/storageclass.html
I found that Rancher is using a CSI driver called rshared.
So is this storage policy mandatory? Is it what's stopping me from provisioning a VMDK file?
I gave the documentation on creating the storage policy to the vSphere admins, but they said it is for vSAN, while our datastores are on VMAX. I couldn't understand the difference, or find an equivalent doc for VMAX.
It would be a great help if this got fixed! :)
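(For anyone debugging the same symptom: the provisioner usually records *why* a claim is stuck. These are the standard places to look — `arango` is the claim/namespace from the manifests above, and the controller-manager container name assumes an RKE-built cluster like this one:)

```shell
# Events on the claim itself; the in-tree vSphere provisioner reports
# failures (bad datastore path, auth errors, missing policy) here:
kubectl describe pvc arango -n arango

# Recent events in the namespace, sorted so the newest come last:
kubectl get events -n arango --sort-by=.metadata.creationTimestamp

# On an RKE cluster the provisioning loop runs inside the
# kube-controller-manager container; its log has the raw vSphere error:
docker logs kube-controller-manager 2>&1 | grep -i vsphere | tail -n 20
```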
The whole thing came down to the path defined for the storage end: in the cloud config YAML the path was wrong. The vSphere admins had given us the path where the VMs reside, when they should have given the path where the storage resides.
Once this was corrected, the PVC went to the Bound state.
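(The post above doesn't record the exact key that was wrong, but in the vSphere cloud provider config the storage-side location lives in the `workspace` section. A sketch of that section — the server and folder values are placeholders; the point is that it must name where the storage lives, not the folder holding the app VMs:)

```yaml
# Sketch: workspace must point at the storage side (datastore), not at the
# path where the application VMs reside. Server and folder are placeholders.
workspace:
  server: vcenter.example.local
  datacenter: nxs
  folder: kubernetes-volumes
  default-datastore: ds1_K8S_0004
```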