Subject of the issue
I deploy Brooklin on Kubernetes to migrate data from one Kafka cluster to another.
For some reason the container dies and restarts during the migration. At the end of the migration I noticed that there are more messages in the target cluster than in the source cluster; most of the messages are duplicates.
I'd like to know how I can ensure that no message is duplicated in the same datastream, even when the pod restarts after a failure.
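For context on why I think this happens: as far as I understand, Kafka-to-Kafka mirroring of this kind gives at-least-once delivery, because the connector periodically commits consumer offsets and, after a crash, resumes from the last committed offset rather than from the last message actually produced to the target. The sketch below is my own minimal illustration of that crash window, not Brooklin's actual code; the broker addresses, topic name, and class name are all hypothetical.

```java
// Hypothetical consume-produce-commit mirror loop illustrating the
// at-least-once crash window; NOT Brooklin's implementation.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MirrorLoopSketch {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put("bootstrap.servers", "source-kafka:9092"); // assumption: source cluster
        cProps.put("group.id", "mirror-sketch");
        cProps.put("enable.auto.commit", "false");            // offsets committed manually below
        cProps.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        cProps.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        Properties pProps = new Properties();
        pProps.put("bootstrap.servers", "target-kafka:9092"); // assumption: target cluster
        pProps.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        pProps.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(List.of("some-topic"));        // hypothetical topic
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<byte[], byte[]> r : records) {
                    producer.send(new ProducerRecord<>(r.topic(), r.key(), r.value()));
                }
                producer.flush();
                // If the pod dies HERE -- after producing to the target but before
                // committing -- the next run re-reads from the last committed offset
                // and the records above are produced to the target a second time.
                consumer.commitSync();
            }
        }
    }
}
```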
The datastream:
${BROOKLIN_HOME}/bin/brooklin-rest-client.sh -o CREATE -u http://localhost:32311/ -n test-mirroring-stream -s "kafkassl://kafka.kafka-non-prod:9092/.*" -c kafkaMirroringC -t kafkaTP -m '{"owner":"test-user","system.reuseExistingDestination":"false"}'
Config:
Your environment
Steps to reproduce
Expected behaviour
The datastream should resume without duplicating any message that has already been copied.
Actual behaviour
The datastream doesn't resume from where it left off; most of the messages are duplicated.
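One workaround I'm considering while waiting for an answer (a sketch only, not a Brooklin feature): deduplicating on the consumer side of the target cluster. This assumes every message carries a unique ID in its key, which Brooklin does not add by itself, and it uses an in-memory set purely for illustration; a real deployment would need a bounded or persistent store. All names and addresses below are hypothetical.

```java
// Hypothetical consumer-side deduplication on the target cluster,
// assuming each message key is a unique ID.
import java.time.Duration;
import java.util.HashSet;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DedupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "target-kafka:9092"); // assumption: target cluster
        props.put("group.id", "dedup-sketch");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Set<String> seenKeys = new HashSet<>(); // illustration only; unbounded in a real workload
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("some-topic"));        // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    // add() returns false when the key was already seen: skip the duplicate
                    if (r.key() != null && !seenKeys.add(r.key())) {
                        continue;
                    }
                    process(r);
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> r) {
        System.out.printf("processing %s offset %d%n", r.topic(), r.offset());
    }
}
```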