Tags: azure, kubernetes, azure-storage, azure-aks, persistent-volume-claims

Error when switching MariaDB data folder from Pod to Azure file share volume on AKS cluster


I am trying to deploy a MariaDB instance on AKS using an Azure Storage file share as the volume.

If I create my MariaDB pod without specifying the volume, everything is fine: the Pod is created and I can access the database. But when I switch to a volume backed by an Azure Storage file share, I get a bunch of errors at container startup:

2023-04-14 14:29:11+00:00 [Note] [Entrypoint]: Initializing database files
2023-04-14 14:29:11 0 [ERROR] InnoDB: The Auto-extending data file './ibdata1' is of a different size 0 pages than specified by innodb_data_file_path
2023-04-14 14:29:11 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2023-04-14 14:29:11 0 [ERROR] Plugin 'InnoDB' init function returned error.
2023-04-14 14:29:11 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2023-04-14 14:29:11 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2023-04-14 14:29:11 0 [ERROR] Aborting

Installation of system tables failed!  Examine the logs in
/var/lib/mysql/ for more information.

If I look in the file share, MariaDB has created some files such as ibdata1 with a size of 0. I also got two binary files: aria_log.00000001 and aria_log_control.

If I create another pod with a different image (such as nginx) using the same volume and touch a file from inside the pod, it appears correctly in the file share on the storage side.
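
Such a test pod would look like this (a minimal sketch; the pod name is illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: volume-test   # illustrative name
spec:
  volumes:
    - name: datadir
      persistentVolumeClaim:
        claimName: mariadb-pvc
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: datadir
          mountPath: /mnt/test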

My MariaDB instance is deployed as a StatefulSet (see below).

My PersistentVolume:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mariadb-pv
  annotations:
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
spec:
  capacity:
    storage: 50Gi
  azureFile:
    secretName: storage-account
    shareName: mariadb
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi
  mountOptions:
    - file_mode=0777
    - nobrl          # do not send byte-range lock requests to the server (needed for databases on CIFS)
    - mfsymlinks     # emulate symlinks on the share
    - gid=999        # gid of the mysql group in the mariadb image
    - uid=999        # uid of the mysql user in the mariadb image
    - dir_mode=0777
    - nosharesock    # use a dedicated TCP connection for this mount
    - cache=strict
  volumeMode: Filesystem
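
As an aside, this PV mixes the legacy in-tree azureFile field with a CSI annotation; on recent AKS versions the in-tree form is translated to the CSI driver anyway, but the purely CSI-style equivalent would be a sketch like the following (the volumeHandle is an illustrative unique ID, the secret namespace is an assumption, and the mount options above are omitted for brevity):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mariadb-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi
  csi:
    driver: file.csi.azure.com
    volumeHandle: mariadb-pv   # must be unique across the cluster; illustrative
    volumeAttributes:
      shareName: mariadb
    nodeStageSecretRef:
      name: storage-account
      namespace: default       # assumption: the secret lives in the default namespace
  volumeMode: Filesystem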

My PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  volumeName: mariadb-pv
  storageClassName: azurefile-csi
  volumeMode: Filesystem

And my MariaDB StatefulSet:

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: mariadb-sts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: mariadb-pvc
      containers:
        - name: mariadb
          image: mariadb:10.11
          ports:
            - name: mariadb-port
              containerPort: 3306
              protocol: TCP
          env:
            - name: MARIADB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb
                  key: password
                  optional: false
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/mysql
              mountPropagation: None
      restartPolicy: Always
  serviceName: mariadb
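
Not shown above: serviceName: mariadb implies a matching headless Service. A minimal sketch of it (assuming the usual setup) would be:

kind: Service
apiVersion: v1
metadata:
  name: mariadb
spec:
  clusterIP: None   # headless, as required for StatefulSet pod DNS
  selector:
    app: mariadb
  ports:
    - name: mariadb-port
      port: 3306
      targetPort: 3306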

What is wrong?

I've edited the uid and gid mount options to make sure the mount uses the 'mysql:mysql' user, but that doesn't change anything.


Solution

  • It seems that a Standard Azure Storage account is not as fast as MariaDB requires to work in the cluster (see WordPress-MariaDB app crashes when deployed with AzureFile storage).

    Since I would need to use a Premium account (more expensive) and there are many other details to handle (security, SSL, etc.), I abandoned the idea of embedding the database in the cluster. (For reference, the Premium route would have been the PVC sketch at the end of this answer.)

    It is simpler to create an Azure Database for MariaDB resource in Azure and access it from the cluster.
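
    For anyone who still wants to keep the database on Azure Files, the Premium route essentially comes down to a different storage class on the PVC, dynamically provisioned so there is no hand-written PV. A minimal sketch (which I did not pursue):

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: mariadb-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Gi   # Premium file shares start at 100 GiB provisioned
      storageClassName: azurefile-csi-premium   # built-in AKS class backed by a Premium storage account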