
KAFKA-20372: Fix flaky test in NamedTopologyIntegrationTest #21968

Open
chickenchickenlove wants to merge 1 commit into apache:trunk from chickenchickenlove:KAFKA-20372-2

Conversation

@chickenchickenlove (Contributor) commented Apr 4, 2026

This PR fixes a flaky test in NamedTopologyIntegrationTest:
shouldAddToEmptyInitialTopologyRemoveResetOffsetsThenAddSameNamedTopologyWithRepartitioning.

The failure is caused by a race in the task lifecycle during named
topology removal.

In some runs, the local tasks for the topology reach the active/running
state quickly. In other runs, some tasks can linger briefly in a
transitional state, such as restoring (state update), before becoming
fully active.

The test currently removes and cleans up the topology without waiting
for this lifecycle to settle. Because of that,
removeNamedTopology(...) may run while some local tasks are still
transitioning, and cleanUpNamedTopology(...) may follow before those
tasks are fully gone. This makes the subsequent re-add flow
non-deterministic and leads to intermittent failures.

To make the test deterministic, this PR adds explicit waits around the
remove / cleanup sequence:

  • wait until all local tasks for the topology are running before calling
    removeNamedTopology(...)
  • wait until no local tasks remain for the topology before calling
    cleanUpNamedTopology(...)
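
Concretely, the added synchronization looks roughly like this (a minimal
sketch assembled from the snippets quoted in the review threads below;
`streams` is the test's KafkaStreams wrapper from the diff context, and
result futures and surrounding assertions are elided):

    // 1) Don't remove the topology until all of its local tasks have
    //    actually reached RUNNING; otherwise removal can race with tasks
    //    that are still restoring / transitioning.
    TestUtils.waitForCondition(
        () -> streams.allLocalTasksRunningForTopology(TOPOLOGY_1),
        "topology tasks are still transitioning before remove"
    );
    streams.removeNamedTopology(TOPOLOGY_1);

    // 2) Don't clean up local state until removal has fully taken effect,
    //    i.e. no local task for the topology remains.
    TestUtils.waitForCondition(
        () -> !streams.hasAnyLocalTaskForTopology(TOPOLOGY_1),
        "topology tasks still exist internally after remove"
    );
    streams.cleanUpNamedTopology(TOPOLOGY_1);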

This keeps the test semantics unchanged and only stabilizes task
lifecycle timing in the test.

Test results from my local runs

  • Before: 12/17 runs passed
  • After: 89/89 runs passed

Considered Alternatives

  1. Send records to partition 1 as well
    This would make it more likely that both partition 0 and partition 1
    tasks are initialized before removal.

However, this changes the test input and can also introduce additional
rebalance and task-transition timing effects, since partition 1
processing becomes part of the exercised path. In other words, it may
reduce one symptom while introducing a different concurrency surface
related to task initialization and reassignment.

More importantly, the root cause of the flake is that the test does not
wait for task lifecycle transitions to settle before calling
removeNamedTopology(...) and cleanUpNamedTopology(...). Adding more
input does not address that directly.

  2. Reduce the input topic partition count from 2 to 1
    This would avoid the multi-partition timing issue by construction.
    However, this also weakens the coverage of the test by removing the
    multi-partition setup entirely. Since the production code supports
    multiple partitions, reducing the topic to a single partition would hide
    the race instead of synchronizing with the intended lifecycle of the
    test.

Reviewers: Lianet Magrans <98415067+lianetm@users.noreply.github.com>

@github-actions bot added the triage (PRs from the community), streams, and tests (Test fixes, including flaky tests) labels Apr 4, 2026
@chickenchickenlove (Contributor, Author) commented:

@lianetm Hi!
When you get a chance, could you take a look?

@chickenchickenlove (Contributor, Author) commented:

@lianetm
Gentle ping!
When you get a chance, please take a look. 🙇‍♂️

@lianetm (Member) left a comment:

Thanks for looking into this!

(diff context: tail of allLocalTasksRunningForTopology)

            }
        }

        return true;
lianetm (Member) commented:

if there are no tasks, this allLocalTasksRunningForTopology would still return true, which is OK in the current test usage, but I wonder if it may be confusing. Should we add a Javadoc to call out the behaviour?
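
For illustration, the suggested doc could look something like this
(hypothetical wording and signature; the loop body is inferred from the
`return true;` tail quoted above, and `localTasksForTopology` is a
made-up helper standing in for however the real code iterates its tasks):

    /**
     * Returns {@code true} if every local task belonging to the given
     * named topology is in RUNNING state.
     *
     * <p>Note: if there are currently no local tasks for the topology at
     * all, this method vacuously returns {@code true}.
     */
    public boolean allLocalTasksRunningForTopology(final String topologyName) {
        for (final Task task : localTasksForTopology(topologyName)) { // hypothetical helper
            if (task.state() != Task.State.RUNNING) {
                return false;
            }
        }
        return true; // also reached when there are no tasks at all
    }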


TestUtils.waitForCondition(
    () -> streams.allLocalTasksRunningForTopology(TOPOLOGY_1),
    "topology tasks are still transitioning before remove"
);
lianetm (Member) commented:

nit: seems clearer if we explicitly mention the final state that we didn't get to (~ tasks did not transition to running state as expected)


TestUtils.waitForCondition(
    () -> !streams.hasAnyLocalTaskForTopology(TOPOLOGY_1),
    "topology tasks still exist internally after remove"
);
lianetm (Member) commented:

nit

Suggested change:
-    "topology tasks still exist internally after remove"
+    "tasks still exist internally after topology removed"
