Context: I am using managed Kafka (an AWS MSK cluster) and running Strimzi KafkaConnect worker pods in an AWS EKS cluster. I removed the replicas field from the KafkaConnect manifest, and it spins up 3 KafkaConnect pods by default.
I have deployed 6 MS SQL Debezium connectors on the Kafka Connect cluster, and they are running fine.
I created a ScaledObject:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafkaconnect-autoscaler
  namespace: devtest
spec:
  scaleTargetRef:
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnect
    name: kafka-connect-cluster
  minReplicaCount: 1
  maxReplicaCount: 5
  pollingInterval: 10
  cooldownPeriod: 60
  triggers:
    - type: cpu
      metadata:
        type: Utilization
        value: "75"
In the idle state, CPU usage is just 2% (per the metrics-server API), so I assumed KEDA would scale the KafkaConnect pods down from 3 to 1, but it didn't.
When I describe the ScaledObject, it says:
Status:
  Conditions:
    Message:  ScaledObject doesn't have correct scaleTargetRef specification
    Reason:   ScaledObjectCheckFailed
    Status:   False
    Type:     Ready
    Message:  ScaledObject check failed
    Reason:   UnknownState
    Status:   Unknown
    Type:     Active
    Status:   Unknown
    Type:     Fallback
    Status:   Unknown
    Type:     Paused
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ScaledObjectCheckFailed 13s (x12 over 23s) keda-operator ScaledObject doesn't have correct scaleTargetRef specification
Warning ScaledObjectCheckFailed 2s (x13 over 23s) keda-operator Target resource doesn't expose /scale subresource
I have seen in many posts that the /scale subresource is exposed for the KafkaConnect custom resource, but I don't know why it's not working for me.
I checked that the scale subresource is present with:
kubectl get crd kafkaconnects.kafka.strimzi.io -o yaml
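As a quicker check than reading the full CRD YAML, you can print only the scale subresource definition per served version (a sketch; this just filters the same CRD shown above):

```shell
# Show the scale subresource (if any) for each version of the Strimzi KafkaConnect CRD
kubectl get crd kafkaconnects.kafka.strimzi.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{": "}{.subresources.scale}{"\n"}{end}'
```

If a version prints an empty scale block, the apiserver will not serve /scale for objects of that version.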
And even if I am able to scale it up, will the connectors be redistributed among the scaled workers?
You definitely should set the .spec.replicas field to the initial number of replicas you want to use. When the field is missing from the resource, the scale subresource has no current replica count to report, which is likely why KEDA complains that the target doesn't expose /scale even though the CRD defines it. As for redistribution: yes, Kafka Connect in distributed mode rebalances connectors and tasks across workers whenever a worker joins or leaves the group.
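For example (a minimal sketch, reusing the cluster name and API version from the question; the rest of your existing spec stays unchanged):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: kafka-connect-cluster
  namespace: devtest
spec:
  replicas: 3   # initial count; KEDA/HPA takes over scaling from here
  # ... rest of your existing KafkaConnect spec
```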