
OCPBUGS-78016: Fix double counting of pod restart events#30862

Open
xueqzhan wants to merge 1 commit into openshift:main from xueqzhan:double-pod-termination

Conversation

@xueqzhan (Contributor)

@xueqzhan xueqzhan commented Mar 11, 2026

Fix double-counting of container restart events

Problem

Container exit events are being double-counted when containers restart with non-zero exit codes. This causes inflated failure metrics in test monitoring.

Root Cause

When a container exits and restarts, Kubernetes moves the termination state through this sequence:

  1. Container exits → State.Terminated appears → Monitor fires Event
  2. Container restarts → termination moves to LastTerminationState.Terminated → Monitor fires Event (duplicate!)

The monitor has two code paths that detect container exits:

  • Case 1: LastTerminationState.Terminated appears (line 220)
  • Case 2: State.Terminated appears (line 256)

Both fire for the same exit, creating duplicate events.

Fix

Added deduplication check in Case 1 to skip events when we already recorded the exit in Case 2:

alreadyRecorded := oldContainerStatus.State.Terminated != nil &&
	containerStatus.State.Terminated == nil &&
	oldContainerStatus.State.Terminated.FinishedAt.Equal(&containerStatus.LastTerminationState.Terminated.FinishedAt)

This prevents double-counting by checking if:

  1. We previously saw this termination in State.Terminated
  2. It has now moved to LastTerminationState.Terminated
  3. The timestamps match (same termination event)

Summary by CodeRabbit

  • Bug Fixes
    • Prevented duplicate container termination events in pod monitoring. Improved detection now skips repeated termination records and emits a single, accurate event (error or normal completion), ensuring correct event counts and more reliable pod lifecycle dashboards and alerts.

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will utilize /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To trigger manually all jobs from second stage use /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci-robot added the jira/valid-reference (indicates that this PR references a valid Jira ticket of any type) and jira/invalid-bug (indicates that a referenced Jira bug is invalid for the branch this PR is targeting) labels Mar 11, 2026
@openshift-ci-robot

@xueqzhan: This pull request references Jira Issue OCPBUGS-78016, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai

coderabbitai bot commented Mar 11, 2026

Important

Review skipped

Auto reviews are limited based on label configuration.

🚫 Review skipped — only excluded labels are configured. (1)
  • do-not-merge/work-in-progress

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 811fd8cc-d75e-48ed-b87f-845ead179728

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Modified container exit event handling in the watchpods collection logic to include a guard check that compares termination states and their FinishedAt timestamps. If a termination event has already been recorded, the code skips duplicate event emission; otherwise, it emits the appropriate interval based on the exit code.

Changes

  • pkg/monitortests/node/watchpods/collection.go (Container Exit Event Handling): Added an alreadyRecorded guard condition to prevent duplicate event emission when handling terminated container states. Compares previous and current termination states using FinishedAt timestamps to determine if an exit event has already been recorded. Emits Error or Info intervals conditionally based on exit code only when the event is new, replacing unconditional emission with idempotent behavior.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title 'OCPBUGS-78016: Fix double counting of pod restart events' directly and clearly describes the main change: adding a guard to prevent duplicate pod restart event emissions.
  • Stable And Deterministic Test Names (✅ Passed): the modified file contains pod monitoring and event collection logic with no Ginkgo test definitions, making the stable test names check inapplicable.
  • Test Structure And Quality (✅ Passed): the custom check reviews Ginkgo test code for quality requirements, but this PR modifies production code (collection.go) that implements pod monitoring functionality, not Ginkgo test code.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 45e82f29-a3bb-4691-ae5b-2d62d2989052

📥 Commits

Reviewing files that changed from the base of the PR and between 02b4172 and 90a6967.

📒 Files selected for processing (1)
  • pkg/monitortests/node/watchpods/collection.go

Comment on lines +221 to 253
// if we are transitioning to a terminated state in LastTerminationState
// Check if we already recorded this exit when it was in State.Terminated
// If oldContainerStatus.State.Terminated matches the current LastTerminationState.Terminated,
// then we already recorded this exit and should skip to avoid double-counting
alreadyRecorded := oldContainerStatus.State.Terminated != nil &&
	oldContainerStatus.State.Terminated.FinishedAt.Equal(&containerStatus.LastTerminationState.Terminated.FinishedAt)

if !alreadyRecorded {
	// We missed the original exit event, record it now as a safety net
	if containerStatus.LastTerminationState.Terminated.ExitCode != 0 {
		intervals = append(intervals,
			monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Error).
				Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
				Message(monitorapi.NewMessage().
					Reason(monitorapi.ContainerReasonContainerExit).
					WithAnnotation(monitorapi.AnnotationContainerExitCode, fmt.Sprintf("%d", containerStatus.LastTerminationState.Terminated.ExitCode)).
					Cause(containerStatus.LastTerminationState.Terminated.Reason).
					HumanMessage(containerStatus.LastTerminationState.Terminated.Message),
				).BuildNow(),
		)
	} else {
		intervals = append(intervals,
			monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Info).
				Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
				Message(monitorapi.NewMessage().
					Reason(monitorapi.ContainerReasonContainerExit).
					WithAnnotation(monitorapi.AnnotationContainerExitCode, "0").
					Cause(containerStatus.LastTerminationState.Terminated.Reason).
					HumanMessage(containerStatus.LastTerminationState.Terminated.Message)).
				BuildNow(),
		)
	}
}
@coderabbitai coderabbitai bot Mar 11, 2026

⚠️ Potential issue | 🟠 Major

Inconsistent interval level for exit code 0 between new and existing code paths.

The alreadyRecorded guard logic correctly prevents double-counting by comparing FinishedAt timestamps.

However, there's an inconsistency in interval levels:

  • New code (line 243): Uses monitorapi.Info when ExitCode == 0
  • Existing code (line 271): Uses monitorapi.Error when ExitCode == 0

This means a successful container exit (code 0) could be logged as Info or Error depending on which code path fires first, leading to inconsistent monitoring data.

Please align the interval levels—either both should use Info for successful exits, or both should use Error.

🔧 Proposed fix to align with existing behavior (use Error)
 				} else {
 					intervals = append(intervals,
-						monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Info).
+						monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Error).
 							Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
 							Message(monitorapi.NewMessage().
 								Reason(monitorapi.ContainerReasonContainerExit).
🔧 Alternative fix: Update existing code to use Info for exit code 0

If Info is semantically correct for successful exits, update line 271 as well:

 			} else {
 				intervals = append(intervals,
-					monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Error).
+					monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Info).
 						Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
 						Message(monitorapi.NewMessage().
 							Reason(monitorapi.ContainerReasonContainerExit).

@xueqzhan (Contributor Author)

This wasn't really new code. It just puts the old code in a new scope.

@coderabbitai coderabbitai bot

@xueqzhan, you're right — looking at the diff more carefully, the monitorapi.Info for exit code 0 was already present in the original code before this PR. The change only wrapped the existing logic in the if !alreadyRecorded guard without altering the interval levels. My comment was incorrect. Sorry for the noise!

(ᵕ—ᴗ—)

@openshift-ci openshift-ci bot requested review from deads2k and sjenning March 11, 2026 15:43
@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 11, 2026
@xueqzhan (Contributor Author)

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

@openshift-ci

openshift-ci bot commented Mar 11, 2026

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/6c7edd10-1d62-11f1-90ca-56d2448f888b-0

@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@xueqzhan (Contributor Author)

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

@openshift-ci

openshift-ci bot commented Mar 12, 2026

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9aa03e20-1e3c-11f1-8846-9847a77dd631-0

@openshift-ci-robot

@xueqzhan: This pull request references Jira Issue OCPBUGS-78016, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/monitortests/node/watchpods/collection.go`:
- Around line 221-229: The current guard using alreadyRecorded (comparing
oldContainerStatus.State.Terminated to
containerStatus.LastTerminationState.Terminated.FinishedAt) can suppress the
only emission when the first observed status is Terminated; instead, change the
logic to dedupe against an explicit record of emissions: add an
emitted-termination registry keyed by the unique container identifier (e.g.,
pod+container name or UID) that stores the emitted termination FinishedAt
timestamp, update that registry when you emit a termination, and replace the
alreadyRecorded check to compare
containerStatus.LastTerminationState.Terminated.FinishedAt against the registry
entry (skip only if the same FinishedAt was already emitted); ensure you update
the registry when emitting in the code paths that currently call the emission
logic so future transitions into LastTerminationState are correctly recognized
as already emitted.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 179cffda-6591-4976-84aa-3cc7f505373e

📥 Commits

Reviewing files that changed from the base of the PR and between 90a6967 and 70264ef.

📒 Files selected for processing (1)
  • pkg/monitortests/node/watchpods/collection.go

@xueqzhan (Contributor Author)

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

@openshift-ci

openshift-ci bot commented Mar 13, 2026

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/8e31ee80-1f14-11f1-8be4-348d9668c0b9-0

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/resourcewatch/git/git_store.go`:
- Around line 254-281: The execGit retry loop can hang because it uses
exec.Command rather than a context-aware command; change the per-attempt
invocation in execGit to use exec.CommandContext with a child context derived
from the parent ctx and a short per-attempt timeout (e.g., 1s or a configurable
duration) so that if ctx is cancelled or the attempt exceeds the timeout the git
subprocess is killed and CombinedOutput returns; ensure you create the command
with exec.CommandContext, set cmd.Dir = s.path, capture CombinedOutput from that
cmd, and propagate the resulting error (and log output) as before, and keep
lastErr/retry logic unchanged.
- Around line 310-323: In commitModify, stop using exec.Command directly for the
git diff check and instead call the existing execGit helper so the semaphore,
retries, and context cancellation are honored; after running "git diff --cached
--quiet -- <path>" via execGit, handle its result precisely: if err == nil
return nil (no changes), if err is an *exec.ExitError with exit status 1 treat
that as "changes exist" and continue to commit, but for any other non-nil error
return it (wrap or annotate with context) so real errors (permissions, repo
corruption, etc.) are not masked; update references in commitModify to use
execGit and explicit exit-code inspection rather than assuming any non-nil error
means changes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7aa239bf-70b5-4365-b011-219994e730bf

📥 Commits

Reviewing files that changed from the base of the PR and between 70264ef and d9529ed.

📒 Files selected for processing (1)
  • pkg/resourcewatch/git/git_store.go

Comment on lines +310 to +323
// Check if there are actually any changes staged
// git diff --cached --quiet returns 0 if no changes, 1 if changes exist
osCommand := exec.Command("git", "diff", "--cached", "--quiet", "--", path)
osCommand.Dir = s.path
err := osCommand.Run()
if err == nil {
	// No changes, skip commit
	klog.V(4).Infof("Skipping commit for unchanged file %s", path)
	return nil
}
// err != nil means there are changes (exit code 1) or a real error
// If it's exit code 1, that's expected and means we should commit
// Any other error will be caught by the commit call below

@coderabbitai coderabbitai bot

⚠️ Potential issue | 🟠 Major


Distinguish expected exit code 1 from real errors in commitModify.

The direct exec.Command("git", "diff", ...) call bypasses execGit's semaphore, retry, and context cancellation safeguards. More critically, the code treats all non-nil errors as "changes exist" and proceeds to commit, which masks genuine failures (e.g., permission errors, repo corruption). Extract the exit code and return early on non-1 errors before proceeding to commit.

Proposed fix

 import (
 	"context"
+	"errors"
 	"fmt"
 ...
 	osCommand := exec.Command("git", "diff", "--cached", "--quiet", "--", path)
 	osCommand.Dir = s.path
 	err := osCommand.Run()
 	if err == nil {
 		// No changes, skip commit
 		klog.V(4).Infof("Skipping commit for unchanged file %s", path)
 		return nil
 	}
-	// err != nil means there are changes (exit code 1) or a real error
-	// If it's exit code 1, that's expected and means we should commit
-	// Any other error will be caught by the commit call below
+	var exitErr *exec.ExitError
+	if !errors.As(err, &exitErr) || exitErr.ExitCode() != 1 {
+		return fmt.Errorf("checking staged changes for %s failed: %w", path, err)
+	}
+	// Exit code 1 means changes exist; proceed with commit.

@xueqzhan (Contributor Author)

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

@openshift-ci

openshift-ci bot commented Mar 14, 2026

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/b7adb7c0-1fab-11f1-9f3d-decb961d64e6-0

@openshift-ci-robot

@xueqzhan: An error was encountered searching for bug OCPBUGS-78016 on the Jira server at https://issues.redhat.com. No known errors were detected, please see the full error message for details.

Full error message. No response returned: Get "https://issues.redhat.com/rest/api/2/issue/OCPBUGS-78016": GET https://issues.redhat.com/rest/api/2/issue/OCPBUGS-78016 giving up after 5 attempt(s)

Please contact an administrator to resolve this issue, then request a bug refresh with /jira refresh.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
pkg/resourcewatch/git/git_store.go (1)

324-344: ⚠️ Potential issue | 🟠 Major

commitModify bypasses the shared git execution path.

Line 328 uses exec.CommandContext directly instead of execGit, so this path skips the common git-operation controls (notably semaphore gating and centralized behavior). This creates inconsistent execution policy and can spike subprocess count under load.

Use this to verify the bypass is still present:

#!/bin/bash
set -euo pipefail

echo "Inspect commitModify command execution:"
sed -n '319,350p' pkg/resourcewatch/git/git_store.go

echo
echo "Inspect shared executor controls:"
sed -n '247,296p' pkg/resourcewatch/git/git_store.go

Expected result: commitModify shows direct exec.CommandContext(... "git", "diff", ...) while semaphore acquisition is only in execGit.
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/resourcewatch/git/git_store.go` around lines 324 - 344, commitModify
currently calls exec.CommandContext directly for "git diff --cached --quiet"
(using diffCtx and s.path), bypassing the centralized execGit executor and its
semaphore; change this to call the shared execGit helper (or a new wrapper that
accepts context, args, and Dir) so the diff check goes through the same
semaphore/gating and centralized behavior as other git operations (preserve the
10s timeout context.WithTimeout, use the same error handling around
exec.ExitError, and keep s.path as the working directory). Ensure you update the
call site in commitModify and adapt execGit's signature if needed to accept a
pre-created context/timeout and working dir so you don't spawn
exec.CommandContext directly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/resourcewatch/git/git_store.go`:
- Around line 28-31: The gitSemaphore currently only limits single command
concurrency causing interleaving across multi-step operations; modify commitAdd,
commitModify, and commitRemove to acquire gitSemaphore once at the start of the
entire high-level transaction and release it only after the final commit (i.e.,
wrap the full sequence of index mutations, add/remove calls, and commit call in
a single semaphore lock/unlock), or provide a helper like a
doGitWrite/withGitLock that obtains the semaphore and runs the whole transaction
atomically; ensure no inner calls reacquire/release the semaphore separately and
keep repo/index/commit operations inside that single serialized block to avoid
mixed staged content.

---

Duplicate comments:
In `@pkg/resourcewatch/git/git_store.go`:
- Around line 324-344: commitModify currently calls exec.CommandContext directly
for "git diff --cached --quiet" (using diffCtx and s.path), bypassing the
centralized execGit executor and its semaphore; change this to call the shared
execGit helper (or a new wrapper that accepts context, args, and Dir) so the
diff check goes through the same semaphore/gating and centralized behavior as
other git operations (preserve the 10s timeout context.WithTimeout, use the same
error handling around exec.ExitError, and keep s.path as the working directory).
Ensure you update the call site in commitModify and adapt execGit's signature if
needed to accept a pre-created context/timeout and working dir so you don't
spawn exec.CommandContext directly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 837ac442-7a1e-4cbf-8561-d78c2a391fc3

📥 Commits

Reviewing files that changed from the base of the PR and between d9529ed and fbf644c.

📒 Files selected for processing (1)
  • pkg/resourcewatch/git/git_store.go

Comment on lines 28 to 31
	repo         *git.Repository
	path         string
	gitSemaphore chan struct{} // Limits concurrent git operations
}

⚠️ Potential issue | 🟠 Major

Serialize full git write transactions, not individual commands.

gitSemaphore (Line 251) limits single command concurrency, but commitAdd/commitModify/commitRemove run multi-step index mutations across separate calls. Those steps can interleave across goroutines and produce commits with unintended staged content.

Proposed fix
 import (
 	"context"
 	"errors"
 	"fmt"
 	"os"
 	"os/exec"
 	"path"
 	"path/filepath"
 	"strings"
+	"sync"
 	"time"
@@
 type GitStorage struct {
 	repo         *git.Repository
 	path         string
 	gitSemaphore chan struct{} // Limits concurrent git operations
+	commitTxnMu  sync.Mutex    // Serializes add/diff/commit and rm/commit transactions
 }
@@
 func (s *GitStorage) commitAdd(ctx context.Context, timestamp time.Time, path, author, ocCommand string) error {
+	s.commitTxnMu.Lock()
+	defer s.commitTxnMu.Unlock()
+
 	if err := s.execGit(ctx, "add", path); err != nil {
 		return err
 	}
@@
 func (s *GitStorage) commitModify(ctx context.Context, timestamp time.Time, path, author, ocCommand string) error {
+	s.commitTxnMu.Lock()
+	defer s.commitTxnMu.Unlock()
+
 	if err := s.execGit(ctx, "add", path); err != nil {
 		return err
 	}
@@
 func (s *GitStorage) commitRemove(ctx context.Context, timestamp time.Time, path, author, ocCommand string) error {
+	s.commitTxnMu.Lock()
+	defer s.commitTxnMu.Unlock()
+
 	if err := s.execGit(ctx, "rm", path); err != nil {
 		return err
 	}

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

Also applies to: 251-257, 305-367

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/resourcewatch/git/git_store.go` around lines 28 - 31, The gitSemaphore
currently only limits single command concurrency causing interleaving across
multi-step operations; modify commitAdd, commitModify, and commitRemove to
acquire gitSemaphore once at the start of the entire high-level transaction and
release it only after the final commit (i.e., wrap the full sequence of index
mutations, add/remove calls, and commit call in a single semaphore lock/unlock),
or provide a helper like a doGitWrite/withGitLock that obtains the semaphore and
runs the whole transaction atomically; ensure no inner calls reacquire/release
the semaphore separately and keep repo/index/commit operations inside that
single serialized block to avoid mixed staged content.

@xueqzhan xueqzhan force-pushed the double-pod-termination branch from fbf644c to 70264ef Compare March 15, 2026 00:29
@openshift-ci-robot

@xueqzhan: An error was encountered searching for bug OCPBUGS-78016 on the Jira server at https://issues.redhat.com. No known errors were detected, please see the full error message for details.

Full error message. No response returned: Get "https://issues.redhat.com/rest/api/2/issue/OCPBUGS-78016": GET https://issues.redhat.com/rest/api/2/issue/OCPBUGS-78016 giving up after 5 attempt(s)

Please contact an administrator to resolve this issue, then request a bug refresh with /jira refresh.

Details

In response to this:

Summary by CodeRabbit

  • Bug Fixes
  • Prevented duplicate container termination events in pod monitoring. Improved detection now skips repeated termination records and emits a single, accurate event (error or normal completion), ensuring correct event counts and more reliable pod lifecycle dashboards and alerts.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@xueqzhan
Contributor Author

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

@openshift-ci
Contributor

openshift-ci bot commented Mar 15, 2026

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/312113e0-2006-11f1-906d-a972c6521d04-0

@xueqzhan xueqzhan force-pushed the double-pod-termination branch from 70264ef to 3b1e7b0 Compare March 15, 2026 20:43

@openshift-ci-robot
Copy link

@xueqzhan: An error was encountered searching for bug OCPBUGS-78016 on the Jira server at https://issues.redhat.com. No known errors were detected, please see the full error message for details.

Full error message. No response returned: Get "https://issues.redhat.com/rest/api/2/issue/OCPBUGS-78016": GET https://issues.redhat.com/rest/api/2/issue/OCPBUGS-78016 giving up after 5 attempt(s)

Please contact an administrator to resolve this issue, then request a bug refresh with /jira refresh.

Details

In response to this:

Fix double-counting of container restart events

Problem

Container exit events are being double-counted when containers restart with non-zero exit codes. This causes inflated failure metrics in test monitoring.

Root Cause

When a container exits and restarts, Kubernetes moves the termination state through this sequence:

  1. Container exits → State.Terminated appears → Monitor fires Event
  2. Container restarts → termination moves to LastTerminationState.Terminated → Monitor fires Event ❌ (duplicate!)

The monitor has two code paths that detect container exits:

  • Case 1: LastTerminationState.Terminated appears (line 220)
  • Case 2: State.Terminated appears (line 256)

Both fire for the same exit, creating duplicate events.

Fix

Added deduplication check in Case 1 to skip events when we already recorded the exit in Case 2:

alreadyRecorded := oldContainerStatus.State.Terminated != nil &&
    containerStatus.State.Terminated == nil &&
    oldContainerStatus.State.Terminated.FinishedAt.Equal(&containerStatus.LastTerminationState.Terminated.FinishedAt)

This prevents double-counting by checking if:

  1. We previously saw this termination in State.Terminated
  2. It has now moved to LastTerminationState.Terminated
  3. The timestamps match (same termination event)

Summary by CodeRabbit

  • Bug Fixes
  • Prevented duplicate container termination events in pod monitoring. Improved detection now skips repeated termination records and emits a single, accurate event (error or normal completion), ensuring correct event counts and more reliable pod lifecycle dashboards and alerts.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.



@openshift-ci-robot
Copy link

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@openshift-trt

openshift-trt bot commented Mar 16, 2026

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New Test Risks for sha: 3b1e7b0

Job Name: pull-ci-openshift-origin-main-e2e-aws-ovn-microshift
New Test Risk: High - "[Jira:"Cloud Credential Operator"] Cluster_Operator CCO is disabled [Suite:cco/conformance/parallel][OTP][PolarionID:68220] NonHyperShiftHOST-Critical-Leverage Composable OpenShift feature to make cloud-credential optional" is a new test, was only seen in one job, and failed 1 time(s) against the current commit.

New tests seen in this PR at sha: 3b1e7b0

  • "[Jira:"Cloud Credential Operator"] Cluster_Operator CCO is disabled [Suite:cco/conformance/parallel][OTP][PolarionID:68220] NonHyperShiftHOST-Critical-Leverage Composable OpenShift feature to make cloud-credential optional" [Total: 1, Pass: 0, Fail: 1, Flake: 0]

@petr-muller
Member

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 16, 2026
@openshift-ci
Contributor

openshift-ci bot commented Mar 16, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: petr-muller, xueqzhan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:
  • OWNERS [petr-muller,xueqzhan]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@xueqzhan
Contributor Author

/verified by @xueqzhan

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label Mar 16, 2026
@openshift-ci-robot

@xueqzhan: This PR has been marked as verified by @xueqzhan.

Details

In response to this:

/verified by @xueqzhan


@xueqzhan
Contributor Author

/jira refresh

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Mar 16, 2026
@openshift-ci-robot

@xueqzhan: This pull request references Jira Issue OCPBUGS-78016, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @wangke19

Details

In response to this:

/jira refresh


@openshift-ci openshift-ci bot requested a review from wangke19 March 16, 2026 22:09
@xueqzhan
Contributor Author

/override ci/prow/e2e-vsphere-ovn

@openshift-ci
Contributor

openshift-ci bot commented Mar 16, 2026

@xueqzhan: Overrode contexts on behalf of xueqzhan: ci/prow/e2e-vsphere-ovn

Details

In response to this:

/override ci/prow/e2e-vsphere-ovn

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-ci
Contributor

openshift-ci bot commented Mar 16, 2026

@xueqzhan: all tests passed!

Full PR test history. Your PR dashboard.

Details


Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged. verified Signifies that the PR passed pre-merge verification criteria
