Conversation
src/main/java/de/azapps/kafkabackup/common/partition/cloud/S3BatchDeserializer.java
loffek left a comment:
LGTM! But see my question around the de-dup logic.
src/main/java/de/azapps/kafkabackup/restore/message/PartitionMessageWriterWorker.java
src/main/java/de/azapps/kafkabackup/restore/message/RestoreMessageS3Service.java
Unify gradle task for all restoration modes - topics, messages and of…
…ithub.com:getdreams/kafka-backup into TC-128/read-messages-from-s3-and-write-to-cluster
How it works: the worker obtains a list of JSON files containing backup data and tries to send the content of each batch in one transaction. After a message is sent, its offset is saved in memory to prevent duplicates: before producing a message, we check whether its offset has already been produced. It is possible to force the producer to use a transaction for a single produced record; to do this, change RestoreMessageProducer.PRODUCER_BATCH_SIZE to 1. Please note that we produce each record with the timestamp we have from the backup. Looking at the documentation HERE, I would argue that we can always give a timestamp to a produced message, but for
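To illustrate the de-dup check described above, here is a minimal, self-contained sketch. It only models the in-memory offset tracking, not the actual Kafka producer or transaction handling; the class and method names (`RestoreDedupSketch`, `shouldProduce`) are illustrative and do not come from the PR.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the offset de-dup check: remember every offset
// already produced and skip records whose offset has been seen before.
public class RestoreDedupSketch {
    private final Set<Long> producedOffsets = new HashSet<>();

    /** Returns true if the offset has not been produced yet and should be sent. */
    public boolean shouldProduce(long offset) {
        // Set.add returns false when the element is already present,
        // so replayed records are filtered out in one call.
        return producedOffsets.add(offset);
    }

    public static void main(String[] args) {
        RestoreDedupSketch dedup = new RestoreDedupSketch();
        long[] batch = {10, 11, 12, 11, 13, 10}; // offsets 11 and 10 are replayed
        List<Long> toSend = new ArrayList<>();
        for (long offset : batch) {
            if (dedup.shouldProduce(offset)) {
                toSend.add(offset);
            }
        }
        System.out.println(toSend); // [10, 11, 12, 13]
    }
}
```

In the real worker, a check like this would run just before producing each record inside the batch transaction, so a retried batch does not duplicate already-sent messages.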
💯
…t of records