I am trying to resolve the connection.uri of my MongoDB sink connector using FileConfigProvider, following this example:
https://docs.confluent.io/platform/current/connect/security.html#externalizing-secrets
I send the following PUT request (PUT /connectors/{name}/config creates or updates the connector configuration):
PUT http://localhost:8083/connectors/my-sink/config
{
  "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
  "topics": "topic",
  "database": "my-database",
  "connection.uri": "${file:/home/appuser/my-file.txt:mongo_uri}",
  "config.providers": "file",
  "config.providers.file.class": "org.apache.kafka.common.config.provider.FileConfigProvider"
}
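For context, FileConfigProvider reads the referenced file as a plain Java properties file and looks up the key named after the last colon of the placeholder. A minimal sketch of /home/appuser/my-file.txt (the URI value is an assumption for illustration, using the root credentials from the compose file below):

# /home/appuser/my-file.txt — key=value pairs; mongo_uri is the key referenced by the placeholder
mongo_uri=mongodb://root:password@mongo:27017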
I get the following error:
{
  "error_code": 400,
  "message": "Connector configuration is invalid and contains the following 1 error(s):\nInvalid value ${file:/home/appuser/my-file.txt:mongo_uri} for configuration connection.uri: The connection string is invalid. Connection strings must start with either 'mongodb://' or 'mongodb+srv://\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"
}
It appears that the config validation runs before the secret value is resolved, so the literal placeholder string "${file:/home/appuser/my-file.txt:mongo_uri}" is rejected as an invalid MongoDB connection string.
Is there a way to fix this?
Source code:
/MyFolder
├── kafka-connect
│ └── Dockerfile
└── docker-compose.yml
MyFolder/docker-compose.yml:
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:6.0.0
    container_name: kafka
    hostname: kafka
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  kafka-connect:
    build:
      context: ./kafka-connect
      dockerfile: Dockerfile
    container_name: kafka_connect
    depends_on:
      - kafka
    ports:
      - "8083:8083"
  mongo:
    image: mongo
    container_name: mongo
    restart: unless-stopped
    depends_on:
      - kafka-connect
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
MyFolder/kafka-connect/Dockerfile:
FROM confluentinc/cp-kafka-connect:6.0.0
COPY ./plugins/ /usr/local/share/kafka/plugins/
ENV CONNECT_BOOTSTRAP_SERVERS=PLAINTEXT://kafka:29092
ENV CONNECT_REST_ADVERTISED_HOST_NAME=kafka_connect
ENV CONNECT_GROUP_ID=kafka-connect-group
ENV CONNECT_CONFIG_STORAGE_TOPIC=kafka-connect-group-config
ENV CONNECT_OFFSET_STORAGE_TOPIC=connect-group-offset
ENV CONNECT_STATUS_STORAGE_TOPIC=kafka-connect-group-status
ENV CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1
ENV CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1
ENV CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1
ENV CONNECT_KEY_CONVERTER=org.apache.kafka.connect.storage.StringConverter
ENV CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.storage.StringConverter
ENV CONNECT_PLUGIN_PATH=/usr/local/share/kafka/plugins
EXPOSE 8083
I create the containers with docker-compose up and configure the MongoSinkConnector through the Kafka Connect REST API.
The properties for setting up the config provider belong to the Connect worker (the process started by the container), not to an individual connector. Remove config.providers and config.providers.file.class from the connector config and set them on the worker instead, e.g. via the container environment in docker-compose.yml:
kafka-connect:
  ...
  depends_on:
    - kafka
    - mongo
  ports:
    - "8083:8083"
  environment:
    ...
    CONNECT_CONFIG_PROVIDERS: file
    CONNECT_CONFIG_PROVIDERS_FILE_CLASS: org.apache.kafka.common.config.provider.FileConfigProvider
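With the provider defined on the worker, the secrets file must also exist inside the kafka-connect container. A minimal sketch, assuming my-file.txt sits next to docker-compose.yml on the host (the host path is an assumption):

kafka-connect:
  ...
  volumes:
    # assumption: the secrets file lives next to docker-compose.yml on the host
    - ./my-file.txt:/home/appuser/my-file.txt:ro

The connector config then keeps only the placeholder, which the worker resolves before handing the value to the connector:

{
  "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
  "topics": "topic",
  "database": "my-database",
  "connection.uri": "${file:/home/appuser/my-file.txt:mongo_uri}"
}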