My topic, despite being up and running, doesn't register events occurring in my MongoDB. Every time I insert or modify a record, I no longer get any output from the `kafka-console-consumer` command.
Is there a way to clear Kafka's cache/offsets? The source and sink connectors are up and running, and the whole cluster is healthy. Everything used to work as expected, but the problem comes back every couple of weeks, or when I log into my Mongo cloud from another location.
The `--partition 0` parameter didn't help, and neither did changing `retention.ms` to `1`.
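For reference, the retention change can be applied roughly like this (the topic name is a placeholder for your actual topic):

# Sketch: lower the topic's retention to 1 ms so old segments become eligible for deletion
kafka-configs --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=1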
I checked both connectors' status and got `RUNNING` for each:
curl localhost:8083/connectors | jq
curl localhost:8083/connectors/monit_people/status | jq
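For comparison, a healthy status response from the Connect REST API generally looks like this (the `worker_id` is from my setup and will differ):

{
  "name": "monit_people",
  "connector": { "state": "RUNNING", "worker_id": "connect:8083" },
  "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "connect:8083" } ],
  "type": "source"
}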
Running `docker-compose logs connect`, I found:
WARN Failed to resume change stream: Resume of change stream was not possible, as the resume point may no longer be in the oplog. 286
If the resume token is no longer available then there is the potential for data loss.
Saved resume tokens are managed by Kafka and stored with the offset data.
When running Connect in standalone mode offsets are configured using the:
`offset.storage.file.filename` configuration.
When running Connect in distributed mode the offsets are stored in a topic.
Use the `kafka-consumer-groups.sh` tool with the `--reset-offsets` flag to reset offsets.
Resetting the offset will allow for the connector to be resume from the latest resume token.
Using `copy.existing=true` ensures that all data will be outputted by the connector but it will duplicate existing data.
Future releases will support a configurable `errors.tolerance` level for the source connector and make use of the `postBatchResumeToken`.
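Following that hint: in distributed mode the source connector's resume token is stored in the Connect offsets topic (commonly `connect-offsets`, configured via `offset.storage.topic`). Here is a sketch of how to inspect it and how to clear a single connector's offset with a tombstone, assuming kcat is available; the key below is a placeholder and must be copied verbatim from the consumer output:

# Inspect the stored offsets; keys look like ["<connector-name>", {<source partition>}]
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic connect-offsets --from-beginning --property print.key=true

# Write a NULL value (tombstone) for the connector's key to drop its resume token.
# kcat's -Z flag turns the empty value after the '#' delimiter into NULL.
echo '["monit_people",{"ns":"mydb.people"}]#' | \
  kcat -P -b localhost:9092 -t connect-offsets -K '#' -Z

After clearing the offset, restart the connector; as the log message notes, setting `copy.existing=true` re-emits existing documents but will duplicate data that was already produced.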
This issue requires more experience with the Confluent Platform, so for now I rebuilt the entire environment by stopping and removing all containers:

docker container stop $(docker container ls -a -q -f "label=io.confluent.docker")
docker system prune -a -f --volumes
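A narrower alternative, if you don't want to wipe every unused image and volume on the machine, should be to tear down just this compose project (it removes only the project's containers and named volumes):

docker-compose down -v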
After running `docker-compose up -d`, everything is up and working again.
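To confirm the pipeline is live again, a quick smoke test is to insert a document and watch the topic; the connection string, database/collection, and topic name below are placeholders for your own:

# Insert a test document into the watched collection
mongosh "mongodb+srv://cluster0.example.net/mydb" --eval 'db.people.insertOne({ name: "smoke-test" })'

# The MongoDB source connector defaults to topic names like <db>.<collection>
kafka-console-consumer --bootstrap-server localhost:9092 --topic mydb.people --from-beginning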