
[Bug]: Kafka reconciliation stuck in dynamic config update loop likely due to precision value issues #12293

@shk3


Bug Description

We observed a repeated log entry, "Updating cluster configuration : log.cleaner.io.max.bytes.per.second -> 102400000", along with a full printout of the broker configs. This happened once per minute on each broker pod and ended up generating more than 90% extra log volume.

Our Kafka config looked like:
log.cleaner.io.max.bytes.per.second: 102400000

The temporary workaround that stopped this reconciliation loop was to set the value as a scientific-notation string:
log.cleaner.io.max.bytes.per.second: "1.024E8"

I suspect there is a double-precision/formatting issue in how the dynamic config diff is computed, since Kafka can print double values in scientific notation. The part to fix might be around:

isConfigUpdated = updateOrAdd(entry.name(), configModel, desiredMap, updatedCE);

I haven't dug further into how to fix it, but I wanted to report it here first.
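For illustration, here is a minimal Java sketch of how such a loop could arise. The class and method names are hypothetical and not Strimzi's actual code; it only assumes the desired value from the Kafka CR is kept as the literal string "102400000" while the broker-side value is rendered via `Double.toString`, which uses scientific notation for magnitudes >= 10^7.

```java
public class ConfigDiffSketch {

    // Naive comparison: plain string equality between the desired CR
    // value and the value reported back by the broker.
    static boolean naiveEquals(String desired, String reported) {
        return desired.equals(reported);
    }

    // Numeric-aware comparison (one possible fix): parse both sides as
    // doubles when they are numeric, so "102400000" and "1.024E8"
    // compare equal; fall back to string equality otherwise.
    static boolean numericEquals(String desired, String reported) {
        try {
            return Double.parseDouble(desired) == Double.parseDouble(reported);
        } catch (NumberFormatException e) {
            return desired.equals(reported);
        }
    }

    public static void main(String[] args) {
        String desired = "102400000";
        // Double.toString renders this value in scientific notation.
        String reported = Double.toString(102400000.0);

        System.out.println(reported);                         // 1.024E8
        System.out.println(naiveEquals(desired, reported));   // false -> diff never converges
        System.out.println(numericEquals(desired, reported)); // true  -> no spurious update
    }
}
```

With the naive comparison the diff reports a change on every reconciliation, re-applies the same value, and logs the "Updating cluster configuration" line each time, matching the observed behavior.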

Steps to reproduce

No response

Expected behavior

No response

Strimzi version

0.49.1, 0.46.0

Kubernetes version

1.29.1

Installation method

No response

Infrastructure

No response

Configuration files and logs

No response

Additional context

No response
