I'm developing an application using Flink Kubernetes Operator version 1.1.0, but I'm getting the error below in the spawned TaskManager pods:
MountVolume.SetUp failed for volume "hadoop-config-volume" : "hadoop-config-name" not found
Unable to attach or mount volumes: unmounted volumes=[hadoop-config-volume], unattached volumes=[hadoop-xml hadoop-config-volume flink-config-volume flink-token-kk558]: timed out waiting for the condition
My flink app.yaml:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: "${DEPLOYMENT_NAME}"
  namespace: data
spec:
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  flinkVersion: v1_15
  image: "${IMAGE_TAG}"
  imagePullPolicy: Always
  job:
    jarURI: local:///opt/flink/opt/executable.jar
    parallelism: 2
    state: running
    upgradeMode: stateless
  jobManager:
    resource:
      cpu: 1
      memory: 1024m
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      namespace: bigdata
    spec:
      containers:
        - name: flink-main-container
          env:
            - name: HADOOP_CONF_DIR
              value: /hadoop/conf
          envFrom:
            - configMapRef:
                name: data-connection
          volumeMounts:
            - mountPath: /hadoop/conf
              name: hadoop-xml
      imagePullSecrets:
        - name: registry
      serviceAccount: flink
      volumes:
        - name: hadoop-xml
          configMap:
            name: hadoop-conf
  serviceAccount: flink
  taskManager:
    resource:
      cpu: 2
      memory: 5000m
From the documentation, I believe hadoop-config-name refers to an internal ConfigMap that Flink creates to ship the HDFS configuration to the TaskManagers. I have already mounted my own ConfigMap (containing "core-site.xml" and "hdfs-site.xml") at the $HADOOP_CONF_DIR directory.
Is this a Flink bug, or did I do something wrong in my setup?
For anyone facing the same issue: I fixed it by changing the HADOOP_CONF_DIR environment variable to HADOOP_CLASSPATH.
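Concretely, the only change needed in the pod template above is the env entry; a sketch (the /hadoop/conf path is the volumeMount from the question, and the comment reflects my understanding that setting HADOOP_CONF_DIR makes the operator try to mount its own internal hadoop-config ConfigMap, which doesn't exist here):

```yaml
# podTemplate > spec > containers > flink-main-container
env:
  - name: HADOOP_CLASSPATH   # was HADOOP_CONF_DIR, which caused Flink to expect
    value: /hadoop/conf      # its internal hadoop-config ConfigMap in the TaskManager pods
```

With this change, the TaskManager pods no longer reference the missing "hadoop-config-volume" and start normally.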