-
Apache Pulsar version: 2.11.1
We are using Apache Pulsar's delayed delivery functionality, so messages are produced with a delay. We also have a message retention policy configured.
Can someone help explain why previously acknowledged messages are consumed again? Our Pulsar config is attached.
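Below is a minimal sketch of how we produce the delayed messages with the Pulsar Java client (service URL, topic name, and payload are placeholders; the real code differs):

```java
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class DelayedProducerExample {
    public static void main(String[] args) throws Exception {
        // Placeholder service URL and topic name.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/delayed-topic")
                .create();

        // The message is dispatched only after the configured delay has elapsed.
        // Delayed delivery works with the Shared subscription type on the consumer side.
        producer.newMessage()
                .value("payload".getBytes())
                .deliverAfter(10, TimeUnit.MINUTES)
                .send();

        producer.close();
        client.close();
    }
}
```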
-
One possible reason is simply bugs in the Pulsar client or broker where acknowledgements were lost. If an acknowledgement is missing, messages will get redelivered. The redelivery moment depends on the subscription type and client settings. There have been bugs in the past where acknowledgements were lost due to client or broker bugs; however, those are a rare minority of the problems.

There are also cases where so-called individual acknowledgements ("individually deleted", nonContiguousDeletedMessagesRanges in stats, totalNonContiguousDeletedMessagesRange in stats-internal) can overflow the limits. By default, Pulsar will keep up to 10000 of such "ack holes". This is explained in the comment of the managedLedgerMaxUnackedRangesToPersist setting in the broker configuration (lines 1274 to 1281 at commit bcf8afb). When adjusting this limit, it's possible that the maximum size of a single Netty message or ZNode is reached, and this needs to be taken into account when allowing higher limits. To reduce the size of ZNodes, there's ManagedCursorInfo compression, which was introduced in PIP-146 (ManagedLedgerInfo compression was introduced earlier, in PR #11490). In your case, you'd need to check your topic's stats for nonContiguousDeletedMessagesRanges (see the sketch at the end of this reply). There have also been many improvements in this area in newer Pulsar versions. There was also PIP-299, "Stop dispatch messages if the individual acks will be lost in the persistent storage", which introduced a setting to stop dispatching messages when the individual acks would be lost.

Since you are running 2.11.1, you are way behind a maintained and supported Pulsar version. In the OSS project, we would ask Pulsar users to first try to reproduce the issue with a supported version, since we don't maintain old versions. This is explained in the Pulsar release policy and supported versions document. Newer Pulsar clients are compatible with all older Pulsar broker versions, with some small exceptions: the latest Pulsar 3.0.x or 4.0.x client is compatible with Pulsar 2.11.x brokers. On your side, the first step would be to upgrade the clients and then get to a maintained broker version. In your case, you'd first have to upgrade to 2.11.4 and then to 3.0.9 to get onto a supported Pulsar broker version. Upgrade compatibility between release versions is explained in the release policy document.
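As a concrete illustration, here is a minimal sketch for reading those per-subscription counters with the Java admin client. It assumes a placeholder admin URL and topic name, and that the SubscriptionStats getters match the stats field names mentioned above; the same values are also visible in the output of the pulsar-admin topics stats and stats-internal commands.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class CheckAckHoles {
    public static void main(String[] args) throws Exception {
        // Placeholder admin URL and topic name.
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build()) {
            String topic = "persistent://public/default/delayed-topic";

            // Per-subscription "ack hole" counters from the topic stats.
            admin.topics().getStats(topic).getSubscriptions().forEach((name, sub) ->
                    System.out.printf(
                            "subscription=%s nonContiguousDeletedMessagesRanges=%d serializedSize=%d%n",
                            name,
                            sub.getNonContiguousDeletedMessagesRanges(),
                            sub.getNonContiguousDeletedMessagesRangesSerializedSize()));
        }
    }
}
```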
-
Thanks, we will check the topic stats and possibly perform an upgrade. Any idea why the storage is not cleared even after acknowledgment?
-
Attached here are the topic stats for reference.
-
This is the count of nonContiguousDeletedMessagesRanges in our case; it is greater than 10000 (the default managedLedgerMaxUnackedRangesToPersist=10000). Can we increase the managedLedgerMaxUnackedRangesToPersist config further to accommodate this? Since our primary use case is delayed delivery, we will certainly see out-of-sequence acks.
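For example, raising it would be a broker.conf change along these lines (illustrative value only; the concerns from the earlier reply about ZNode and Netty message size still apply):

```properties
# broker.conf (illustrative value; the default is 10000)
# Maximum number of "acknowledgment holes" (non-contiguous deleted message ranges)
# that are persisted for each subscription's cursor.
managedLedgerMaxUnackedRangesToPersist=50000
```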
-
Thanks for the clarifications. I will check and get back if anything further is required.
-
Hi @lhotari, we increased the limit for nonContiguousDeletedMessagesRanges in the config (managedLedgerMaxUnackedRangesToPersist). Now the nonContiguousDeletedMessagesRangesSerializedSize for the entire topic (6 partitions) is 10 MB. What issues might we face if it is greater than 5 MB?
-
Hi @lhotari, can you please check this?