OCPBUGS-78016: Fix double counting of pod restart events#30862
xueqzhan wants to merge 1 commit into openshift:main
Conversation
@xueqzhan: This pull request references Jira Issue OCPBUGS-78016, which is invalid:
The bug has been updated to refer to the pull request using the external bug tracker. Instructions for interacting with the bot using PR comments are available; questions or suggestions about its behavior can be filed against the openshift-eng/jira-lifecycle-plugin repository.
Walkthrough: Modified container exit event handling in the watchpods collection logic to include a guard check that compares termination states and their FinishedAt timestamps. If a termination event has already been recorded, the code skips duplicate event emission; otherwise, it emits the appropriate interval based on the exit code.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/monitortests/node/watchpods/collection.go`:
- Around line 221-253: The two code paths for recording container exit in the
LastTerminationState branch are inconsistent: in the !alreadyRecorded branch the
ExitCode == 0 path uses monitorapi.Info while the other (existing) path uses
monitorapi.Error; change the new branch to use monitorapi.Error instead of
monitorapi.Info so both paths use the same severity. Locate the !alreadyRecorded
block that constructs intervals with
containerStatus.LastTerminationState.Terminated and replace the monitorapi.Info
call in the ExitCode == 0 branch with monitorapi.Error so the interval level
matches the existing behavior.
📒 Files selected for processing (1)
pkg/monitortests/node/watchpods/collection.go
```go
// if we are transitioning to a terminated state in LastTerminationState
// Check if we already recorded this exit when it was in State.Terminated
// If oldContainerStatus.State.Terminated matches the current LastTerminationState.Terminated,
// then we already recorded this exit and should skip to avoid double-counting
alreadyRecorded := oldContainerStatus.State.Terminated != nil &&
	oldContainerStatus.State.Terminated.FinishedAt.Equal(&containerStatus.LastTerminationState.Terminated.FinishedAt)

if !alreadyRecorded {
	// We missed the original exit event, record it now as a safety net
	if containerStatus.LastTerminationState.Terminated.ExitCode != 0 {
		intervals = append(intervals,
			monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Error).
				Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
				Message(monitorapi.NewMessage().
					Reason(monitorapi.ContainerReasonContainerExit).
					WithAnnotation(monitorapi.AnnotationContainerExitCode, fmt.Sprintf("%d", containerStatus.LastTerminationState.Terminated.ExitCode)).
					Cause(containerStatus.LastTerminationState.Terminated.Reason).
					HumanMessage(containerStatus.LastTerminationState.Terminated.Message),
				).BuildNow(),
		)
	} else {
		intervals = append(intervals,
			monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Info).
				Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
				Message(monitorapi.NewMessage().
					Reason(monitorapi.ContainerReasonContainerExit).
					WithAnnotation(monitorapi.AnnotationContainerExitCode, "0").
					Cause(containerStatus.LastTerminationState.Terminated.Reason).
					HumanMessage(containerStatus.LastTerminationState.Terminated.Message)).
				BuildNow(),
		)
	}
}
```
Inconsistent interval level for exit code 0 between new and existing code paths.
The alreadyRecorded guard logic correctly prevents double-counting by comparing FinishedAt timestamps.
However, there's an inconsistency in interval levels:
- New code (line 243): uses `monitorapi.Info` when `ExitCode == 0`
- Existing code (line 271): uses `monitorapi.Error` when `ExitCode == 0`
This means a successful container exit (code 0) could be logged as Info or Error depending on which code path fires first, leading to inconsistent monitoring data.
Please align the interval levels—either both should use Info for successful exits, or both should use Error.
🔧 Proposed fix to align with existing behavior (use Error):

```diff
 } else {
 	intervals = append(intervals,
-		monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Info).
+		monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Error).
 			Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
 			Message(monitorapi.NewMessage().
 				Reason(monitorapi.ContainerReasonContainerExit).
```

🔧 Alternative fix: if Info is semantically correct for successful exits, update the existing code at line 271 instead:

```diff
 } else {
 	intervals = append(intervals,
-		monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Error).
+		monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Info).
 			Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
 			Message(monitorapi.NewMessage().
 				Reason(monitorapi.ContainerReasonContainerExit).
```
This wasn't really new code. It just puts the old code in a new scope.
@xueqzhan, you're right — looking at the diff more carefully, the monitorapi.Info for exit code 0 was already present in the original code before this PR. The change only wrapped the existing logic in the if !alreadyRecorded guard without altering the interval levels. My comment was incorrect. Sorry for the noise!
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command. See details on https://pr-payload-tests.ci.openshift.org/runs/ci/6c7edd10-1d62-11f1-90ca-56d2448f888b-0
Scheduling required tests:

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips
@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command. See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9aa03e20-1e3c-11f1-8846-9847a77dd631-0
@xueqzhan: This pull request references Jira Issue OCPBUGS-78016, which is invalid.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/monitortests/node/watchpods/collection.go`:
- Around line 221-229: The current guard using alreadyRecorded (comparing
oldContainerStatus.State.Terminated to
containerStatus.LastTerminationState.Terminated.FinishedAt) can suppress the
only emission when the first observed status is Terminated; instead, change the
logic to dedupe against an explicit record of emissions: add an
emitted-termination registry keyed by the unique container identifier (e.g.,
pod+container name or UID) that stores the emitted termination FinishedAt
timestamp, update that registry when you emit a termination, and replace the
alreadyRecorded check to compare
containerStatus.LastTerminationState.Terminated.FinishedAt against the registry
entry (skip only if the same FinishedAt was already emitted); ensure you update
the registry when emitting in the code paths that currently call the emission
logic so future transitions into LastTerminationState are correctly recognized
as already emitted.
📒 Files selected for processing (1)
pkg/monitortests/node/watchpods/collection.go
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command. See details on https://pr-payload-tests.ci.openshift.org/runs/ci/8e31ee80-1f14-11f1-8be4-348d9668c0b9-0
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/resourcewatch/git/git_store.go`:
- Around line 254-281: The execGit retry loop can hang because it uses
exec.Command rather than a context-aware command; change the per-attempt
invocation in execGit to use exec.CommandContext with a child context derived
from the parent ctx and a short per-attempt timeout (e.g., 1s or a configurable
duration) so that if ctx is cancelled or the attempt exceeds the timeout the git
subprocess is killed and CombinedOutput returns; ensure you create the command
with exec.CommandContext, set cmd.Dir = s.path, capture CombinedOutput from that
cmd, and propagate the resulting error (and log output) as before, and keep
lastErr/retry logic unchanged.
- Around line 310-323: In commitModify, stop using exec.Command directly for the
git diff check and instead call the existing execGit helper so the semaphore,
retries, and context cancellation are honored; after running "git diff --cached
--quiet -- <path>" via execGit, handle its result precisely: if err == nil
return nil (no changes), if err is an *exec.ExitError with exit status 1 treat
that as "changes exist" and continue to commit, but for any other non-nil error
return it (wrap or annotate with context) so real errors (permissions, repo
corruption, etc.) are not masked; update references in commitModify to use
execGit and explicit exit-code inspection rather than assuming any non-nil error
means changes.
📒 Files selected for processing (1)
pkg/resourcewatch/git/git_store.go
```go
// Check if there are actually any changes staged
// git diff --cached --quiet returns 0 if no changes, 1 if changes exist
osCommand := exec.Command("git", "diff", "--cached", "--quiet", "--", path)
osCommand.Dir = s.path
err := osCommand.Run()
if err == nil {
	// No changes, skip commit
	klog.V(4).Infof("Skipping commit for unchanged file %s", path)
	return nil
}
// err != nil means there are changes (exit code 1) or a real error
// If it's exit code 1, that's expected and means we should commit
// Any other error will be caught by the commit call below
```
Distinguish expected exit code 1 from real errors in commitModify.
The direct exec.Command("git", "diff", ...) call bypasses execGit's semaphore, retry, and context cancellation safeguards. More critically, the code treats all non-nil errors as "changes exist" and proceeds to commit, which masks genuine failures (e.g., permission errors, repo corruption). Extract the exit code and return early on non-1 errors before proceeding to commit.
Proposed fix:

```diff
 import (
 	"context"
+	"errors"
 	"fmt"
```

```diff
 	osCommand := exec.Command("git", "diff", "--cached", "--quiet", "--", path)
 	osCommand.Dir = s.path
 	err := osCommand.Run()
 	if err == nil {
 		// No changes, skip commit
 		klog.V(4).Infof("Skipping commit for unchanged file %s", path)
 		return nil
 	}
-	// err != nil means there are changes (exit code 1) or a real error
-	// If it's exit code 1, that's expected and means we should commit
-	// Any other error will be caught by the commit call below
+	var exitErr *exec.ExitError
+	if !errors.As(err, &exitErr) || exitErr.ExitCode() != 1 {
+		return fmt.Errorf("checking staged changes for %s failed: %w", path, err)
+	}
+	// Exit code 1 means changes exist; proceed with commit.
```
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command. See details on https://pr-payload-tests.ci.openshift.org/runs/ci/b7adb7c0-1fab-11f1-9f3d-decb961d64e6-0
@xueqzhan: An error was encountered searching for bug OCPBUGS-78016 on the Jira server at https://issues.redhat.com. No known errors were detected; please see the full error message for details.

No response returned: Get "https://issues.redhat.com/rest/api/2/issue/OCPBUGS-78016": GET https://issues.redhat.com/rest/api/2/issue/OCPBUGS-78016 giving up after 5 attempt(s)

Please contact an administrator to resolve this issue, then request a bug refresh.
Actionable comments posted: 1
♻️ Duplicate comments (1)
pkg/resourcewatch/git/git_store.go (1)
324-344: ⚠️ Potential issue | 🟠 Major

`commitModify` bypasses the shared git execution path. Line 328 uses `exec.CommandContext` directly instead of `execGit`, so this path skips the common git-operation controls (notably semaphore gating and centralized behavior). This creates inconsistent execution policy and can spike subprocess count under load.

Use this to verify the bypass is still present:

```shell
#!/bin/bash
set -euo pipefail
echo "Inspect commitModify command execution:"
sed -n '319,350p' pkg/resourcewatch/git/git_store.go
echo
echo "Inspect shared executor controls:"
sed -n '247,296p' pkg/resourcewatch/git/git_store.go
```

Expected result: `commitModify` shows a direct `exec.CommandContext(... "git", "diff", ...)` call while semaphore acquisition is only in `execGit`.

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pkg/resourcewatch/git/git_store.go` around lines 324 - 344: commitModify currently calls exec.CommandContext directly for "git diff --cached --quiet" (using diffCtx and s.path), bypassing the centralized execGit executor and its semaphore; change this to call the shared execGit helper (or a new wrapper that accepts context, args, and Dir) so the diff check goes through the same semaphore/gating and centralized behavior as other git operations (preserve the 10s timeout context.WithTimeout, use the same error handling around exec.ExitError, and keep s.path as the working directory). Ensure you update the call site in commitModify and adapt execGit's signature if needed to accept a pre-created context/timeout and working dir so you don't spawn exec.CommandContext directly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/resourcewatch/git/git_store.go`:
- Around line 28-31: The gitSemaphore currently only limits single command
concurrency causing interleaving across multi-step operations; modify commitAdd,
commitModify, and commitRemove to acquire gitSemaphore once at the start of the
entire high-level transaction and release it only after the final commit (i.e.,
wrap the full sequence of index mutations, add/remove calls, and commit call in
a single semaphore lock/unlock), or provide a helper like a
doGitWrite/withGitLock that obtains the semaphore and runs the whole transaction
atomically; ensure no inner calls reacquire/release the semaphore separately and
keep repo/index/commit operations inside that single serialized block to avoid
mixed staged content.
📒 Files selected for processing (1)
pkg/resourcewatch/git/git_store.go
```go
	repo         *git.Repository
	path         string
	gitSemaphore chan struct{} // Limits concurrent git operations
}
```
Serialize full git write transactions, not individual commands.
gitSemaphore (Line 251) limits single command concurrency, but commitAdd/commitModify/commitRemove run multi-step index mutations across separate calls. Those steps can interleave across goroutines and produce commits with unintended staged content.
Proposed fix:

```diff
 import (
 	"context"
 	"errors"
 	"fmt"
 	"os"
 	"os/exec"
 	"path"
 	"path/filepath"
 	"strings"
+	"sync"
 	"time"
@@
 type GitStorage struct {
 	repo         *git.Repository
 	path         string
 	gitSemaphore chan struct{} // Limits concurrent git operations
+	commitTxnMu  sync.Mutex    // Serializes add/diff/commit and rm/commit transactions
 }
@@
 func (s *GitStorage) commitAdd(ctx context.Context, timestamp time.Time, path, author, ocCommand string) error {
+	s.commitTxnMu.Lock()
+	defer s.commitTxnMu.Unlock()
+
 	if err := s.execGit(ctx, "add", path); err != nil {
 		return err
 	}
@@
 func (s *GitStorage) commitModify(ctx context.Context, timestamp time.Time, path, author, ocCommand string) error {
+	s.commitTxnMu.Lock()
+	defer s.commitTxnMu.Unlock()
+
 	if err := s.execGit(ctx, "add", path); err != nil {
 		return err
 	}
@@
 func (s *GitStorage) commitRemove(ctx context.Context, timestamp time.Time, path, author, ocCommand string) error {
+	s.commitTxnMu.Lock()
+	defer s.commitTxnMu.Unlock()
+
 	if err := s.execGit(ctx, "rm", path); err != nil {
 		return err
 	}
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
Also applies to: 251-257, 305-367
fbf644c to 70264ef (force-push)
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command. See details on https://pr-payload-tests.ci.openshift.org/runs/ci/312113e0-2006-11f1-906d-a972c6521d04-0
70264ef to 3b1e7b0 (force-push)
Scheduling required tests:
Risk analysis has seen new tests most likely introduced by this PR. New test risks for sha: 3b1e7b0

New tests seen in this PR at sha: 3b1e7b0
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: petr-muller, xueqzhan.
/verified by @xueqzhan
@xueqzhan: This PR has been marked as verified.
/jira refresh
@xueqzhan: This pull request references Jira Issue OCPBUGS-78016, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug. Requesting review from the QA contact.
/override ci/prow/e2e-vsphere-ovn
@xueqzhan: Overrode contexts on behalf of xueqzhan: ci/prow/e2e-vsphere-ovn
@xueqzhan: all tests passed!
Fix double-counting of container restart events
Problem
Container exit events are being double-counted when containers restart with non-zero exit codes. This causes inflated failure metrics in test monitoring.
Root Cause
When a container exits and restarts, Kubernetes moves the termination state through a sequence: the exit first appears in `State.Terminated`, and once the container restarts it moves to `LastTerminationState.Terminated`.

The monitor has two code paths that detect container exits: one fires when the exit shows up in `State.Terminated`, and a safety-net path fires when it later appears in `LastTerminationState.Terminated`.

Both fire for the same exit, creating duplicate events.
Fix
Added a deduplication check in Case 1 to skip events when the exit was already recorded in Case 2:

```go
alreadyRecorded := oldContainerStatus.State.Terminated != nil &&
	containerStatus.State.Terminated == nil &&
	oldContainerStatus.State.Terminated.FinishedAt.Equal(&containerStatus.LastTerminationState.Terminated.FinishedAt)
```
This prevents double-counting by checking that:
- the old status already recorded the exit in `State.Terminated`,
- the current status no longer has it there (the container restarted), and
- the `FinishedAt` timestamps match, confirming it is the same termination.