Add collect keyless params task & related keyless improvements #3171

Open

simonbaird wants to merge 5 commits into conforma:main from simonbaird:collect-keyless-params-task

Conversation

@simonbaird
Member

There's quite a lot here, but it's broken down into a (hopefully) sensible set of commits for your reviewing convenience.

Ref: https://issues.redhat.com/browse/EC-1695

@simonbaird simonbaird marked this pull request as draft March 11, 2026 21:30
@coderabbitai

coderabbitai bot commented Mar 11, 2026

📝 Walkthrough

This pull request introduces TUF (The Update Framework) infrastructure for acceptance testing, adds keyless signing parameter collection capabilities, implements certificate identity regexp variants for verification tasks, and extends the Kubernetes acceptance test framework with ConfigMap and namespace management features.

Changes

• Build Configuration (.gitignore, Makefile): Added a TUF ConfigMap generation workflow with phony targets (tuf-yaml, tuf-yaml-clean, tuf-yaml-refresh), integrated TUF YAML generation into the acceptance test pipeline, and updated .gitignore to exclude generated TUF ConfigMap YAML files.
• TUF Infrastructure (hack/tuf/create-tuf-files.sh, hack/tuf/kustomization.yaml, hack/tuf/namespace.yaml, hack/tuf/tuf.yaml, hack/test/kustomization.yaml): Added a complete TUF server deployment: a Bash script to generate ConfigMaps from WireMock recordings, Kubernetes manifests for the TUF namespace, deployment, and nginx configuration, and Kustomize configurations defining the TUF resources.
• Kubernetes Acceptance Test API (acceptance/kubernetes/types/types.go): Extended the Cluster interface with two new methods, CreateNamedNamespace() and CreateConfigMap(), for programmatic namespace and ConfigMap management in tests.
• Kubernetes Acceptance Test Implementation (acceptance/kubernetes/kind/kubernetes.go, acceptance/kubernetes/kind/kind.go, acceptance/kubernetes/stub/stub.go): Implemented ConfigMap creation with JSON parsing, RBAC setup for ConfigMap access, namespace creation, Rekor environment variable injection, and stub implementations; minor error message improvement for port availability.
• Acceptance Test Step Definitions (acceptance/kubernetes/kubernetes.go): Added Rekor integration for snapshot testing and new step definitions for ConfigMap creation, namespace management, and policy helpers (basic with known key, SLSA provenance, golden container).
• Image and Signature Processing (acceptance/image/image.go): Added handling to skip empty DSSE signature annotations in keyless image creation and to filter empty signatures from attestation results.
• TUF and WireMock Support (acceptance/tuf/tuf.go, acceptance/wiremock/wiremock.go): Added a TUF Stub function to construct the TUF endpoint URL and refactored WireMock startup to support port-specific configuration with conditional URL resolution.
• Tekton Task: Keyless Signing Parameters (tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml): New Tekton task that collects keyless signing configuration from a ConfigMap in a specified namespace, with results for the OIDC issuer, Rekor/Fulcio/TUF URLs, build identity, and an enablement flag.
• Tekton Tasks: Certificate Regexp Support (tasks/verify-conforma-konflux-ta/0.1/verify-conforma-konflux-ta.yaml, tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml): Added CERTIFICATE_IDENTITY_REGEXP and CERTIFICATE_OIDC_ISSUER_REGEXP parameters to both verification tasks, with precedence rules and updated logic to support regexp-based certificate matching alongside exact matching.
• Documentation (docs/modules/ROOT/pages/collect-keyless-signing-params.adoc, docs/modules/ROOT/pages/verify-conforma-konflux-ta.adoc, docs/modules/ROOT/pages/verify-enterprise-contract.adoc, docs/modules/ROOT/partials/tasks_nav.adoc): Added documentation for the new keyless signing parameter collection task, documented the new regexp parameters in the verification task docs, and updated navigation references.
• Feature Tests (features/task_validate_image.feature): Replaced explicit policy JSON blocks with policy helper steps, added keyless signing verification scenarios with local test images and regexp parameter variants, and introduced ConfigMap collection test scenarios with various edge cases (enabled/disabled, malformed, missing).

Sequence Diagram(s)

sequenceDiagram
    participant Test as Test Runner
    participant KubAPI as Kubernetes API
    participant ConfigMap as ConfigMap<br/>(cluster-config)
    participant VerifyTask as Verify Task
    participant TUF as TUF Server
    participant Rekor as Rekor

    Test->>KubAPI: Create namespace
    KubAPI-->>Test: namespace created
    
    Test->>KubAPI: Create ConfigMap<br/>(keyless config)
    KubAPI-->>Test: ConfigMap created
    
    Test->>VerifyTask: Execute with keyless params<br/>(CERTIFICATE_IDENTITY,<br/>CERTIFICATE_OIDC_ISSUER)
    activate VerifyTask
    VerifyTask->>VerifyTask: Detect keyless mode<br/>(non-empty cert params)
    VerifyTask->>TUF: Fetch root.json
    TUF-->>VerifyTask: root metadata
    VerifyTask->>Rekor: Verify transparency log
    Rekor-->>VerifyTask: verification result
    VerifyTask->>VerifyTask: Validate certificate identity<br/>against OIDC issuer
    deactivate VerifyTask
    VerifyTask-->>Test: verification result
    
    Note over Test,Rekor: Alternatively with regexp params<br/>uses CERTIFICATE_IDENTITY_REGEXP<br/>and CERTIFICATE_OIDC_ISSUER_REGEXP

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~35 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

• Docstring Coverage ⚠️ Warning: Docstring coverage is 59.09%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

• Title check ✅ Passed: The title clearly summarizes the main change: adding a new Tekton task for collecting keyless signing parameters, along with related improvements to support keyless signing verification.
• Description check ✅ Passed: The description is related to the changeset, referencing issue EC-1695 and explaining that the work involves a new Tekton task, test coverage, and acceptance-test scaffolding for keyless signing scenarios.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.3)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions



@qodo-code-review
Contributor

Review Summary by Qodo

Add collect-keyless-signing-params task and keyless signing improvements

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Add collect-keyless-signing-params Tekton task for reading keyless signing configuration from
  ConfigMaps
• Support running TUF service inside Kind cluster using nginx with wiremock-extracted files
• Add keyless image verification acceptance test with local cluster-internal TUF
• Enhance verification tasks with certificate identity/issuer regexp parameters
• Add ConfigMap and namespace creation capabilities to Kubernetes test infrastructure
Diagram
flowchart LR
  A["Tekton Tasks"] -->|add regexp params| B["verify-enterprise-contract<br/>verify-conforma-konflux-ta"]
  C["Kubernetes Infrastructure"] -->|create ConfigMaps| D["Test Cluster"]
  E["TUF Service"] -->|nginx in cluster| D
  F["Acceptance Tests"] -->|keyless verification| B
  G["collect-keyless-signing-params"] -->|read cluster config| D


File Changes

1. tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml (✨ Enhancement, +160/-0): New Tekton task for collecting keyless signing parameters
2. acceptance/image/image.go (🐞 Bug fix, +35/-1): Handle DSSE envelope signatures and empty signature filtering
3. acceptance/kubernetes/kind/kubernetes.go (✨ Enhancement, +136/-0): Add ConfigMap and namespace creation with RBAC support
4. acceptance/kubernetes/kind/kind.go (Miscellaneous, +1/-1): Improve error message for free port detection
5. acceptance/kubernetes/kubernetes.go (✨ Enhancement, +28/-0): Add BDD steps for ConfigMap and namespace creation
6. acceptance/kubernetes/stub/stub.go (✨ Enhancement, +8/-0): Add stub implementations for ConfigMap operations
7. acceptance/kubernetes/types/types.go (✨ Enhancement, +2/-0): Extend Cluster interface with ConfigMap methods
8. acceptance/tuf/tuf.go (📝 Documentation, +1/-0): Add documentation comment for external WireMock TUF
9. acceptance/wiremock/wiremock.go (✨ Enhancement, +26/-6): Support starting WireMock on a specific port for testing
10. hack/tuf/tuf.sh (✨ Enhancement, +132/-0): New script to generate TUF Kubernetes resources
11. hack/tuf/create-tuf-files.sh (✨ Enhancement, +82/-0): Script to extract TUF files from WireMock recordings
12. hack/tuf/kustomization.yaml (⚙️ Configuration changes, +27/-0): Kustomization for TUF service deployment
13. hack/tuf/namespace.yaml (⚙️ Configuration changes, +21/-0): Kubernetes namespace for TUF service
14. hack/tuf/tuf.yaml (⚙️ Configuration changes, +161/-0): TUF service deployment with nginx and ConfigMaps
15. tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml (✨ Enhancement, +51/-5): Add certificate identity/issuer regexp parameters
16. tasks/verify-conforma-konflux-ta/0.1/verify-conforma-konflux-ta.yaml (✨ Enhancement, +51/-5): Add certificate identity/issuer regexp parameters
17. docs/modules/ROOT/pages/collect-keyless-signing-params.adoc (📝 Documentation, +41/-0): Documentation for the collect-keyless-signing-params task
18. docs/modules/ROOT/pages/verify-enterprise-contract.adoc (📝 Documentation, +2/-0): Document new regexp certificate parameters
19. docs/modules/ROOT/pages/verify-conforma-konflux-ta.adoc (📝 Documentation, +2/-0): Document new regexp certificate parameters
20. docs/modules/ROOT/partials/tasks_nav.adoc (📝 Documentation, +1/-0): Add collect-keyless-signing-params to navigation
21. features/task_validate_image.feature (🧪 Tests, +136/-0): Add keyless signing acceptance test scenarios
22. hack/test/kustomization.yaml (⚙️ Configuration changes, +1/-0): Include TUF resources in test kustomization
23. Makefile (⚙️ Configuration changes, +23/-2): Add TUF ConfigMap generation and test targets



@qodo-code-review
Contributor

qodo-code-review bot commented Mar 11, 2026

Code Review by Qodo

🐞 Bugs (4) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. ConfigMap update missing RV 🐞 Bug ✓ Correctness
Description
acceptance/kubernetes/kind.(*kindCluster).CreateConfigMap attempts to Update an existing ConfigMap
using a newly constructed object without ResourceVersion, so updates will be rejected whenever the
ConfigMap already exists. This breaks the function’s intended idempotent "create or update"
behavior.
Code

acceptance/kubernetes/kind/kubernetes.go[R229-235]

+	// Create the ConfigMap (or update if it already exists)
+	if _, err := k.client.CoreV1().ConfigMaps(namespace).Create(ctx, configMap, metav1.CreateOptions{}); err != nil {
+		if strings.Contains(err.Error(), "already exists") {
+			// ConfigMap exists, so update it with the new content
+			if _, err := k.client.CoreV1().ConfigMaps(namespace).Update(ctx, configMap, metav1.UpdateOptions{}); err != nil {
+				return fmt.Errorf("failed to update existing ConfigMap: %w", err)
+			}
Evidence
The code constructs a new ConfigMap with only Name/Namespace/Data and, on "already exists", calls
Update with that object without first fetching the current ConfigMap to populate metadata like
ResourceVersion.

acceptance/kubernetes/kind/kubernetes.go[221-239]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`CreateConfigMap` attempts to update an existing ConfigMap using a locally-constructed object that does not include `metadata.resourceVersion`, which Kubernetes requires for Update.

### Issue Context
This code path triggers when the ConfigMap already exists, so the function is currently not reliably idempotent.

### Fix Focus Areas
- acceptance/kubernetes/kind/kubernetes.go[221-239]

### Implementation notes
- Replace `strings.Contains(err.Error(), "already exists")` with `apierrors.IsAlreadyExists(err)`.
- On already-exists:
 - `existing, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})`
 - set `configMap.ResourceVersion = existing.ResourceVersion`
 - then `Update`.
- Consider using Patch (server-side apply or strategic merge) to avoid RV handling if preferred.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Kubectl errors treated NotFound 🐞 Bug ⛯ Reliability
Description
The collect-keyless-signing-params task treats any kubectl failure as "ConfigMap not found" because
stderr is discarded and error types aren’t distinguished. This contradicts the task description
(which only promises fallback when the ConfigMap is not found) and can silently disable
keyless-signing configuration on RBAC/cluster errors.
Code

tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml[R114-137]

+        # Read from the ConfigMap
+        echo "Reading ConfigMap ${configMapNamespace}/${configMapName}"
+        KFLX_CONFIG_PATH='/tmp/cluster-config.json'
+        if kubectl get configmap "${configMapName}" -n "${configMapNamespace}" -o json > "${KFLX_CONFIG_PATH}" 2>/dev/null; then
+            echo "ConfigMap found, extracting keyless signing parameters"
+
+            # First we read "keylessSigningEnabled"
+            keylessSigningEnabled=$(jq -r '.data.enableKeylessSigning // "false"' "$KFLX_CONFIG_PATH")
+
+            if [ "$keylessSigningEnabled" = "true" ]; then
+              # If that is set to "true" then read the other values
+              defaultOIDCIssuer=$(jq -r '.data.defaultOIDCIssuer // ""' "$KFLX_CONFIG_PATH")
+              rekorExternalUrl=$(jq -r '.data.rekorExternalUrl // ""' "$KFLX_CONFIG_PATH")
+              fulcioExternalUrl=$(jq -r '.data.fulcioExternalUrl // ""' "$KFLX_CONFIG_PATH")
+              tufExternalUrl=$(jq -r '.data.tufExternalUrl // ""' "$KFLX_CONFIG_PATH")
+              buildIdentity=$(jq -r '.data.buildIdentity // ""' "$KFLX_CONFIG_PATH")
+              buildIdentityRegexp=$(jq -r '.data.buildIdentityRegexp // ""' "$KFLX_CONFIG_PATH")
+            else
+              # Otherwise we ignore the reast of the ConfigMap
+              echo "enableKeylessSigning is not set, using default empty values"
+            fi
+        else
+            echo "ConfigMap not found, using default empty values"
+        fi
Evidence
The task spec states fallback behavior specifically for the "ConfigMap is not found" case, but the
script redirects kubectl stderr to /dev/null and treats all failures the same, proceeding with empty
defaults.

tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml[29-36]
tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml[114-137]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The task currently treats all `kubectl get configmap` failures as "not found" because stderr is discarded. This can silently disable keyless signing parameters on RBAC or cluster/API failures.

### Issue Context
The task description promises fallback only when the ConfigMap is not found.

### Fix Focus Areas
- tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml[114-137]

### Implementation notes
- Capture stderr and branch on error type:
 - If NotFound: continue with defaults.
 - Otherwise: print the error to stderr and exit non-zero (or emit a clear warning and still continue if that is the intended contract, but then update the task description).
- Consider using `kubectl get ... -o json 2>err` and checking for NotFound markers, or `--ignore-not-found` + explicit empty-output detection.

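Assuming NotFound is identified by matching kubectl's stderr text, a minimal sketch of the branching could look like this (`classify_kubectl_error` is a hypothetical helper, not part of the task):

```shell
#!/bin/bash
# Hypothetical helper: takes kubectl's exit code and captured stderr and
# decides between the "not found" fallback and a hard failure.
classify_kubectl_error() {
    local exit_code="$1" stderr_text="$2"
    if [ "$exit_code" -eq 0 ]; then
        echo "ok"
    elif echo "$stderr_text" | grep -q "NotFound"; then
        echo "notfound"   # safe to fall back to empty defaults
    else
        echo "fatal"      # RBAC/apiserver/network errors must not be masked
    fi
}

# Intended call site in the task script (kubectl not exercised here):
#   stderr_file=$(mktemp)
#   kubectl get configmap "${configMapName}" -n "${configMapNamespace}" -o json \
#       > "${KFLX_CONFIG_PATH}" 2> "$stderr_file"
#   case "$(classify_kubectl_error $? "$(cat "$stderr_file")")" in
#       ok)       echo "ConfigMap found, extracting keyless signing parameters" ;;
#       notfound) echo "ConfigMap not found, using default empty values" ;;
#       fatal)    cat "$stderr_file" >&2; exit 1 ;;
#   esac

classify_kubectl_error 0 ""
classify_kubectl_error 1 'Error from server (NotFound): configmaps "cluster-config" not found'
classify_kubectl_error 1 'Error from server (Forbidden): configmaps is forbidden'
```

The three sample calls classify a success, a genuine NotFound, and an RBAC failure respectively; only the middle one should take the empty-defaults branch.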



Remediation recommended

3. Cluster-wide ConfigMap RBAC 🐞 Bug ⛨ Security
Description
acceptance/kubernetes/kind.ensureConfigMapRBAC grants every service account in the cluster list/get
access to all ConfigMaps cluster-wide via a ClusterRoleBinding to the system:serviceaccounts group.
Even for acceptance tests, this is broader than necessary and can mask missing/incorrect RBAC
assumptions.
Code

acceptance/kubernetes/kind/kubernetes.go[R250-297]

+// ensureConfigMapRBAC creates necessary RBAC permissions for ConfigMap access across namespaces
+func (k *kindCluster) ensureConfigMapRBAC(ctx context.Context) error {
+	// Create ClusterRole for ConfigMap reading (idempotent)
+	clusterRole := &rbacv1.ClusterRole{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "acceptance-configmap-reader",
+		},
+		Rules: []rbacv1.PolicyRule{
+			{
+				APIGroups: []string{""},
+				Resources: []string{"configmaps"},
+				Verbs:     []string{"get", "list"},
+			},
+		},
+	}
+
+	if _, err := k.client.RbacV1().ClusterRoles().Create(ctx, clusterRole, metav1.CreateOptions{}); err != nil {
+		// Ignore error if ClusterRole already exists
+		if !strings.Contains(err.Error(), "already exists") {
+			return fmt.Errorf("failed to create ClusterRole: %w", err)
+		}
+	}
+
+	// Create ClusterRoleBinding for all service accounts (idempotent)
+	clusterRoleBinding := &rbacv1.ClusterRoleBinding{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "acceptance-configmap-reader-binding",
+		},
+		RoleRef: rbacv1.RoleRef{
+			APIGroup: "rbac.authorization.k8s.io",
+			Kind:     "ClusterRole",
+			Name:     "acceptance-configmap-reader",
+		},
+		Subjects: []rbacv1.Subject{
+			{
+				Kind:     "Group",
+				Name:     "system:serviceaccounts",
+				APIGroup: "rbac.authorization.k8s.io",
+			},
+		},
+	}
+
+	if _, err := k.client.RbacV1().ClusterRoleBindings().Create(ctx, clusterRoleBinding, metav1.CreateOptions{}); err != nil {
+		// Ignore error if ClusterRoleBinding already exists
+		if !strings.Contains(err.Error(), "already exists") {
+			return fmt.Errorf("failed to create ClusterRoleBinding: %w", err)
+		}
+	}
Evidence
The ClusterRole allows get/list on configmaps, and the ClusterRoleBinding binds it to the
"system:serviceaccounts" group, which includes all service accounts in all namespaces.

acceptance/kubernetes/kind/kubernetes.go[250-299]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Acceptance test scaffolding grants all service accounts cluster-wide read access to all ConfigMaps.

### Issue Context
This is test-only, but it’s broader than necessary and can hide RBAC regressions/assumptions.

### Fix Focus Areas
- acceptance/kubernetes/kind/kubernetes.go[250-299]

### Implementation notes
- Prefer a `Role` in `konflux-info` with `resources: [configmaps]`, `verbs: [get]`, and `resourceNames: [<configmap name>]`.
- Bind that Role via `RoleBinding` to a narrower subject:
 - `system:serviceaccounts:<taskrun-namespace>` group, or
 - the specific `ServiceAccount` used by TaskRuns.
- If you need multi-namespace, create bindings per acceptance namespace rather than cluster-wide.

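A narrower alternative along the lines of the implementation notes might look like the fragment below; the namespace `konflux-info`, the ConfigMap name `cluster-config`, and the `acceptance` subject namespace are assumptions for illustration, not verbatim from the PR:

```yaml
# Scoped sketch: one named ConfigMap, get-only, bound to one namespace's
# service accounts rather than system:serviceaccounts cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: keyless-config-reader
  namespace: konflux-info
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-config"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: keyless-config-reader-binding
  namespace: konflux-info
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: keyless-config-reader
subjects:
  - kind: Group
    name: system:serviceaccounts:acceptance   # one namespace, not cluster-wide
    apiGroup: rbac.authorization.k8s.io
```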


4. tuf-yaml needs kubectl PATH 🐞 Bug ⛯ Reliability
Description
hack/tuf/create-tuf-files.sh invokes kubectl from PATH to generate YAML, but the build already
produces a kubectl binary under dist/ and the script does not use it. This introduces a host-tooling
dependency that can break make acceptance in environments that otherwise wouldn’t need a
preinstalled kubectl.
Code

hack/tuf/create-tuf-files.sh[R45-80]

+# Build kubectl command with dynamic file list
+build_kubectl_command() {
+    local files_dir="$WIREMOCK_TUF_DIR/__files"
+    local cmd_args=("kubectl" "create" "configmap" "tuf-files" "--namespace=tuf-service")
+
+    local body_files
+    readarray -t body_files < <(extract_body_files)
+
+    if [[ ${#body_files[@]} -eq 0 ]]; then
+        echo "Error: No TUF body files found in wiremock recordings" >&2
+        exit 1
+    fi
+
+    echo "Found ${#body_files[@]} TUF files to include in ConfigMap:" >&2
+
+    for body_file in "${body_files[@]}"; do
+        echo "  - $body_file" >&2
+        # Use the filename as the key in the ConfigMap
+        cmd_args+=("--from-file=$body_file=$files_dir/$body_file")
+    done
+
+    cmd_args+=("--dry-run=client" "-o" "yaml" "--validate=false")
+
+    printf '%s\n' "${cmd_args[@]}"
+}
+
+# Create ConfigMap YAML files
+echo "Creating TUF files ConfigMap YAML..." >&2
+readarray -t kubectl_cmd < <(build_kubectl_command)
+"${kubectl_cmd[@]}" > "$SCRIPT_DIR/tuf-files-configmap.yaml"
+
+echo "Creating TUF root ConfigMap YAML..." >&2
+kubectl create configmap tuf-root-data \
+  --namespace=tuf-service \
+  --from-file=root.json="$PROJECT_ROOT/acceptance/tuf/root.json" \
+  --dry-run=client -o yaml --validate=false > "$SCRIPT_DIR/tuf-root-configmap.yaml"
Evidence
The acceptance Makefile runs make build then make tuf-yaml, and build.sh copies kubectl into
dist/. However, create-tuf-files.sh calls kubectl ... directly, requiring an external kubectl on
PATH instead of using the repo-built one.

Makefile[122-155]
hack/tuf/create-tuf-files.sh[45-80]
build.sh[68-84]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`make acceptance` now generates TUF ConfigMap YAML using a script that calls `kubectl` from PATH, even though the repo build already produces a kubectl binary under `dist/`.

### Issue Context
This can fail in minimal CI/dev environments without a preinstalled kubectl.

### Fix Focus Areas
- hack/tuf/create-tuf-files.sh[45-80]
- Makefile[122-155]

### Implementation notes
- In the script, introduce `KUBECTL_BIN="${KUBECTL:-kubectl}"` and replace hardcoded `kubectl` calls.
- In Makefile, set `KUBECTL=./dist/kubectl_$$(go env GOOS)_$$(go env GOARCH)` when invoking the script (or ensure PATH includes `dist/`).
- Keep a fallback to system kubectl if the dist binary is missing.

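A minimal sketch of the suggested override, assuming the Makefile exports `KUBECTL` pointing at the repo-built binary under dist/ (`resolve_kubectl` is a hypothetical helper):

```shell
#!/bin/bash
# Prefer the binary passed in via $KUBECTL, falling back to whatever
# kubectl is on PATH when that binary is absent or not executable.
resolve_kubectl() {
    local candidate="${KUBECTL:-}"
    if [ -n "$candidate" ] && [ -x "$candidate" ]; then
        echo "$candidate"   # repo-built binary supplied by the Makefile
    else
        echo "kubectl"      # system fallback
    fi
}

KUBECTL_BIN="$(resolve_kubectl)"
echo "using: $KUBECTL_BIN"
# ...then replace the hardcoded calls, e.g.:
#   "$KUBECTL_BIN" create configmap tuf-files --namespace=tuf-service ...
```

The Makefile side would then invoke the script with something like `KUBECTL=./dist/kubectl_$$(go env GOOS)_$$(go env GOARCH)`, as the notes suggest.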




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (3)
acceptance/kubernetes/kind/kubernetes.go (2)

241-248: RBAC is created on every ConfigMap creation.

ensureConfigMapRBAC is called on every CreateConfigMap invocation. While the function is idempotent, this adds unnecessary API calls. Consider caching whether RBAC has been created or moving RBAC setup to cluster initialization.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acceptance/kubernetes/kind/kubernetes.go` around lines 241 - 248, The code
currently calls ensureConfigMapRBAC on every CreateConfigMap call causing
unnecessary API chatter; update the logic to run RBAC setup once and skip
subsequent calls by adding a guarded/cache flag (e.g., a field on the Kubernetes
struct like rbacConfigured bool or sync.Once to protect ensureConfigMapRBAC) and
have CreateConfigMap check that flag/Once before invoking ensureConfigMapRBAC;
alternatively move the ensureConfigMapRBAC call to cluster initialization path
(where cluster setup functions run) so CreateConfigMap no longer invokes
ensureConfigMapRBAC on each call.
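The sync.Once approach suggested in the prompt can be sketched as follows; the `cluster` type and its counter are illustrative stand-ins for the kindCluster implementation, and the sketch ignores retry-on-error for brevity:

```go
package main

import (
	"fmt"
	"sync"
)

// Guard the one-time RBAC setup with sync.Once so repeated CreateConfigMap
// calls don't re-issue the ClusterRole/ClusterRoleBinding API requests.
type cluster struct {
	rbacOnce  sync.Once
	setupRuns int // counts how often the expensive setup actually executes
}

func (c *cluster) ensureConfigMapRBAC() error {
	c.setupRuns++ // stands in for the RBAC API calls
	return nil
}

func (c *cluster) CreateConfigMap(name string) error {
	var err error
	c.rbacOnce.Do(func() { err = c.ensureConfigMapRBAC() })
	if err != nil {
		return err
	}
	// ...create or update the ConfigMap itself...
	fmt.Println("created", name)
	return nil
}

func main() {
	c := &cluster{}
	for _, n := range []string{"a", "b", "c"} {
		if err := c.CreateConfigMap(n); err != nil {
			panic(err)
		}
	}
	fmt.Println("setup ran", c.setupRuns, "time(s)") // setup ran 1 time(s)
}
```

One caveat of sync.Once: if the guarded setup fails on its first run, it is never retried, so the cluster-initialization alternative mentioned above may be the more robust choice.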

229-239: Use errors.IsAlreadyExists instead of string matching for error handling.

String-based error checking is fragile and could break if error message wording changes. The Kubernetes client-go library provides proper error type checking.

♻️ Suggested fix using proper error type checking
+import (
+	k8serrors "k8s.io/apimachinery/pkg/api/errors"
+)

 	// Create the ConfigMap (or update if it already exists)
 	if _, err := k.client.CoreV1().ConfigMaps(namespace).Create(ctx, configMap, metav1.CreateOptions{}); err != nil {
-		if strings.Contains(err.Error(), "already exists") {
+		if k8serrors.IsAlreadyExists(err) {
 			// ConfigMap exists, so update it with the new content
 			if _, err := k.client.CoreV1().ConfigMaps(namespace).Update(ctx, configMap, metav1.UpdateOptions{}); err != nil {
 				return fmt.Errorf("failed to update existing ConfigMap: %w", err)
 			}
 		} else {
 			return err
 		}
 	}

Apply the same pattern to lines 268, 294, and 311.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acceptance/kubernetes/kind/kubernetes.go` around lines 229 - 239, Replace
fragile string matching on err.Error() with Kubernetes API error type checking
using errors.IsAlreadyExists: when Create returns an error, call
apierrors.IsAlreadyExists(err) (importing "k8s.io/apimachinery/pkg/api/errors"
as apierrors) and if true perform the Update; otherwise return the error. Apply
this change to the ConfigMap create/update block that uses
k.client.CoreV1().ConfigMaps(namespace).Create(...) and the analogous create
handlers at the other locations mentioned (the blocks around the usages at lines
referenced in the review) so all Create/Update flows use
apierrors.IsAlreadyExists(err) instead of strings.Contains(err.Error(), "already
exists").
hack/tuf/tuf.yaml (1)

27-97: Consider adding security context for containers.

Static analysis flagged missing security hardening (CKV_K8S_20, CKV_K8S_23). While this is acceptable for test infrastructure in a Kind cluster, adding a security context would align with best practices and silence these warnings.

♻️ Optional security context addition
       initContainers:
         - name: setup-tuf-data
           image: busybox:1.35
+          securityContext:
+            allowPrivilegeEscalation: false
+            runAsNonRoot: true
+            runAsUser: 65534
           command:
             - sh

Note: If runAsNonRoot causes permission issues with volume mounts, this can be omitted for test infrastructure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hack/tuf/tuf.yaml` around lines 27 - 97, Summary: Add a securityContext to
the initContainer "setup-tuf-data" (and the following containers block) to
satisfy CKV_K8S_20/23. Instruction: In the initContainers entry for name:
setup-tuf-data (and each container under containers) add a securityContext with
fields such as runAsNonRoot: true (or runAsUser: 1000 if needed),
allowPrivilegeEscalation: false, readOnlyRootFilesystem: true, and capabilities:
drop: ["ALL"]; ensure any volume-mounted paths still have correct ownership or
switch to a non-root UID that matches volume permissions if mounting fails;
apply the same securityContext pattern to the main containers to silence the
static analysis warnings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@acceptance/image/image.go`:
- Around line 955-983: The DSSE parsing in unmarshallSignatures incorrectly
short-circuits fallback because dsseEnvelope only declares Signatures; update
dsseEnvelope to include DSSE-specific fields (e.g., "payload" and "payloadType"
or other required DSSE fields used by your format) and in unmarshallSignatures
validate those fields (and that envelope.Signatures has exactly one entry)
before returning a cosign.Signatures; if those DSSE-specific fields are missing
or invalid, proceed to unmarshal into cosign.AttestationPayload as the fallback.
Ensure you reference dsseEnvelope and unmarshallSignatures when making the
changes so the parser only accepts genuine DSSE envelopes.
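A hedged sketch of that stricter DSSE detection, using illustrative types rather than the PR's actual ones (field names follow the DSSE envelope format):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Only treat a blob as a DSSE envelope when the DSSE-specific fields are
// present and it carries exactly one signature; otherwise the caller should
// fall back to parsing it as a cosign attestation payload.
type dsseSignature struct {
	KeyID string `json:"keyid"`
	Sig   string `json:"sig"`
}

type dsseEnvelope struct {
	Payload     string          `json:"payload"`
	PayloadType string          `json:"payloadType"`
	Signatures  []dsseSignature `json:"signatures"`
}

func isDSSEEnvelope(raw []byte) bool {
	var env dsseEnvelope
	if err := json.Unmarshal(raw, &env); err != nil {
		return false
	}
	// Require the DSSE-specific fields, not just a "signatures" array.
	return env.Payload != "" && env.PayloadType != "" && len(env.Signatures) == 1
}

func main() {
	dsse := []byte(`{"payload":"eyJmb28iOiJiYXIifQ==","payloadType":"application/vnd.in-toto+json","signatures":[{"sig":"abc"}]}`)
	other := []byte(`{"signatures":[{"keyid":"","sig":"abc"}]}`) // would previously match
	fmt.Println(isDSSEEnvelope(dsse), isDSSEEnvelope(other))     // true false
}
```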

In `@features/task_validate_image.feature`:
- Around line 488-514: Add the missing assertion asserting the task result
"buildIdentity" is empty for the "Collect keyless signing parameters from
ConfigMap with keyless signing disabled" scenario: after the other task result
checks (e.g., "buildIdentityRegexp" and "keylessSigningEnabled") add an
assertion that the task result "buildIdentity" equals "" so the step
"collect-signing-params" output is validated consistently with the disabled
case.

In `@hack/tuf/create-tuf-files.sh`:
- Around line 1-5: Add the project's standard license header at the top of the
script so lint passes: insert the repository's required comment block (the
standard license/header text) immediately before the existing shebang
(#!/bin/bash) in hack/tuf/create-tuf-files.sh, preserving the rest of the file
(including set -e); ensure the header format and exact text match other files in
the repo.
- Around line 34-36: The condition currently excludes any filename containing
"root.json" (e.g., "1.root.json"), so change the test to only skip the exact
unversioned file "root.json"; update the if that checks body_file (the branch
that appends to body_files via body_files+=("$body_file")) to use an exact
string comparison ([[ "$body_file" != "root.json" ]]) or an anchored regex ([[ !
"$body_file" =~ ^root\.json$ ]]) so rotated files like "1.root.json" are allowed
through.
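The exact-match skip described in that prompt can be sketched as follows (`select_body_files` is a hypothetical stand-in for the script's filtering loop):

```shell
#!/bin/bash
# Only the unversioned "root.json" is excluded; rotated roots such as
# "1.root.json" pass through to the ConfigMap.
select_body_files() {
    local body_files=()
    for body_file in "$@"; do
        if [[ "$body_file" != "root.json" ]]; then
            body_files+=("$body_file")
        fi
    done
    printf '%s\n' "${body_files[@]}"
}

select_body_files root.json 1.root.json timestamp.json snapshot.json
```

The sample call emits 1.root.json, timestamp.json, and snapshot.json, dropping only the exact root.json entry.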

In `@hack/tuf/tuf.sh`:
- Around line 81-94: The nginx document root only mounts the tuf-root-data
ConfigMap; add the generated tuf-files ConfigMap into the same mounted volume by
changing the volume named "tuf-data" into a projected volume that includes both
configMap sources (tuf-root-data and tuf-files) so the volumeMount at
/usr/share/nginx/html continues to serve timestamp.json, snapshot.json and
targets; leave the "nginx-config" volume/volumeMount as-is and ensure the
projected volume uses the same name "tuf-data" so the existing volumeMount
remains valid.
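A projected-volume fragment matching that description might look like this; the ConfigMap and volume names follow the prompt, and `tuf-nginx-config` is an assumed name for the unchanged nginx ConfigMap:

```yaml
# Both ConfigMaps feed the same "tuf-data" volume, so the existing
# volumeMount at /usr/share/nginx/html keeps working unchanged.
volumes:
  - name: tuf-data
    projected:
      sources:
        - configMap:
            name: tuf-root-data
        - configMap:
            name: tuf-files
  - name: nginx-config
    configMap:
      name: tuf-nginx-config   # assumed name; left as-is per the prompt
```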

In `@hack/tuf/tuf.yaml`:
- Around line 1-8: Add the Apache 2.0 license header at the very top of this
YAML file (before the existing document start '---'), ensuring the header is a
commented block so it doesn't interfere with YAML parsing; update the file that
contains the ConfigMap named "tuf-port-number" (kind: ConfigMap, data key: PORT)
to include the full Apache 2.0 license comment header.

In
`@tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml`:
- Around line 117-137: The current kubectl get call hides all errors and always
treats failures as “ConfigMap not found”; change the logic in the block that
runs kubectl get configmap "${configMapName}" -n "${configMapNamespace}" -o json
> "${KFLX_CONFIG_PATH}" so that you capture kubectl’s stderr and exit code,
treat a true “NotFound” (or 404) response as the existing “not found” branch,
but for any other non-zero exit status (RBAC/authorization errors, apiserver
errors, network issues) log the stderr (or a clear error message referencing
"${configMapName}" and "${configMapNamespace}") and exit non-zero instead of
defaulting keylessSigningEnabled to false; keep the subsequent jq reads
(keylessSigningEnabled, defaultOIDCIssuer, etc.) unchanged once the ConfigMap is
successfully fetched to "${KFLX_CONFIG_PATH}".
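One way to implement the branching described above is to capture kubectl's stderr and classify the message before choosing a branch. This is a hedged sketch: it assumes kubectl's usual error format (e.g. `Error from server (NotFound): configmaps "cluster-config" not found`), and the helper name is invented.

```shell
# Classify a captured kubectl stderr message so a true NotFound can
# fall back to defaults while RBAC/apiserver/network errors abort.
classify_kubectl_error() {
  case "$1" in
    *"(NotFound)"*) echo "notfound" ;;
    *)              echo "error" ;;
  esac
}

# Intended wiring (sketch, not run here):
#   err=$(mktemp)
#   if kubectl get configmap "$name" -n "$ns" -o json > "$out" 2> "$err"; then
#     ...parse with jq as before...
#   elif [ "$(classify_kubectl_error "$(cat "$err")")" = "notfound" ]; then
#     echo "ConfigMap not found, using default empty values"
#   else
#     cat "$err" >&2; exit 1
#   fi
```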

---

Nitpick comments:
In `@acceptance/kubernetes/kind/kubernetes.go`:
- Around line 241-248: The code currently calls ensureConfigMapRBAC on every
CreateConfigMap call causing unnecessary API chatter; update the logic to run
RBAC setup once and skip subsequent calls by adding a guarded/cache flag (e.g.,
a field on the Kubernetes struct like rbacConfigured bool or sync.Once to
protect ensureConfigMapRBAC) and have CreateConfigMap check that flag/Once
before invoking ensureConfigMapRBAC; alternatively move the ensureConfigMapRBAC
call to cluster initialization path (where cluster setup functions run) so
CreateConfigMap no longer invokes ensureConfigMapRBAC on each call.
- Around line 229-239: Replace fragile string matching on err.Error() with
Kubernetes API error type checking using errors.IsAlreadyExists: when Create
returns an error, call apierrors.IsAlreadyExists(err) (importing
"k8s.io/apimachinery/pkg/api/errors" as apierrors) and if true perform the
Update; otherwise return the error. Apply this change to the ConfigMap
create/update block that uses
k.client.CoreV1().ConfigMaps(namespace).Create(...) and the analogous create
handlers at the other locations mentioned (the blocks around the usages at lines
referenced in the review) so all Create/Update flows use
apierrors.IsAlreadyExists(err) instead of strings.Contains(err.Error(), "already
exists").

In `@hack/tuf/tuf.yaml`:
- Around line 27-97: Summary: Add a securityContext to the initContainer
"setup-tuf-data" (and the following containers block) to satisfy CKV_K8S_20/23.
Instruction: In the initContainers entry for name: setup-tuf-data (and each
container under containers) add a securityContext with fields such as
runAsNonRoot: true (or runAsUser: 1000 if needed), allowPrivilegeEscalation:
false, readOnlyRootFilesystem: true, and capabilities: drop: ["ALL"]; ensure any
volume-mounted paths still have correct ownership or switch to a non-root UID
that matches volume permissions if mounting fails; apply the same
securityContext pattern to the main containers to silence the static analysis
warnings.
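A securityContext along the lines the comment suggests might look like this (UID 1000 is a placeholder and must match the ownership of the mounted volumes; apply the same block to the main containers):

```yaml
initContainers:
  - name: setup-tuf-data
    # ...image, command, volumeMounts elided...
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000              # placeholder non-root UID
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```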

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 792d01cd-dd7b-4142-9146-28ad0fe7334a

📥 Commits

Reviewing files that changed from the base of the PR and between 3ce85d1 and 2fd2640.

⛔ Files ignored due to path filters (1)
  • features/__snapshots__/task_validate_image.snap is excluded by !**/*.snap
📒 Files selected for processing (24)
  • .gitignore
  • Makefile
  • acceptance/image/image.go
  • acceptance/kubernetes/kind/kind.go
  • acceptance/kubernetes/kind/kubernetes.go
  • acceptance/kubernetes/kubernetes.go
  • acceptance/kubernetes/stub/stub.go
  • acceptance/kubernetes/types/types.go
  • acceptance/tuf/tuf.go
  • acceptance/wiremock/wiremock.go
  • docs/modules/ROOT/pages/collect-keyless-signing-params.adoc
  • docs/modules/ROOT/pages/verify-conforma-konflux-ta.adoc
  • docs/modules/ROOT/pages/verify-enterprise-contract.adoc
  • docs/modules/ROOT/partials/tasks_nav.adoc
  • features/task_validate_image.feature
  • hack/test/kustomization.yaml
  • hack/tuf/create-tuf-files.sh
  • hack/tuf/kustomization.yaml
  • hack/tuf/namespace.yaml
  • hack/tuf/tuf.sh
  • hack/tuf/tuf.yaml
  • tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml
  • tasks/verify-conforma-konflux-ta/0.1/verify-conforma-konflux-ta.yaml
  • tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml

simonbaird and others added 2 commits March 18, 2026 15:17
The goal is to be able to have a working TUF root in the Kind
cluster so we can use it in a Tekton task during acceptance tests.

Also, expose some env vars that we'll need in upcoming Tekton
task acceptance tests.

The way this is done is that we run nginx in the cluster, but with
files that are extracted (using a bash script added in this commit)
from the existing wiremock data.

Beware we now have two different ways of stubbing/mocking a TUF
instance: a new way that works in the cluster and can be used when
testing Tekton tasks, and an old way that works directly in a test
container and uses wiremock.

(In case it's not obvious, there was a lot of heavy lifting done by
Claude here.)

Ref: https://issues.redhat.com/browse/EC-1695
Co-authored-by: Claude Code <noreply@anthropic.com>
Also namespaces, since we want the ConfigMap in a particular
namespace. RBAC rules are also created so the ConfigMap is readable
by every service account.

This will be used in the acceptance test added in an upcoming
commit.

Ref: https://issues.redhat.com/browse/EC-1695
Co-authored-by: Claude Code <noreply@anthropic.com>
@simonbaird simonbaird force-pushed the collect-keyless-params-task branch from 2fd2640 to 92180c0 Compare March 18, 2026 19:19
simonbaird and others added 3 commits March 18, 2026 15:59
I got tired of looking at the repeated policies in
task_validate_image.feature.

This technique could perhaps be extended to other feature files, but
I don't want to do that right now.

Unrelated to, but created while working on the PR for...

Ref: https://issues.redhat.com/browse/EC-1695
Co-authored-by: Claude Code <noreply@anthropic.com>
- Add acceptance tests that use the local test image instead of the
  "real" image in quay. This was enabled by the previous commit
  related to running nginx TUF in the cluster.
- Add support for regexp-style keyless signature verification
  params
- Minor tweak to the handling of ignore-rekor so we don't ignore
  Rekor when doing keyless verification.
- Minor improvements/tidying with the task param logic

Note: I was trying to maintain several smaller commits, but it
became difficult after some (possibly unrelated) local acceptance
test problems.

Ref: https://redhat.atlassian.net/browse/EC-1695
@simonbaird simonbaird force-pushed the collect-keyless-params-task branch from 92180c0 to 142a144 Compare March 18, 2026 20:56
@simonbaird simonbaird marked this pull request as ready for review March 18, 2026 21:12
@qodo-code-review

Review Summary by Qodo

Add collect-keyless-signing-params task and keyless signing improvements

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Add collect-keyless-signing-params Tekton task to extract signing configuration from ConfigMaps
• Support running nginx TUF server inside Kind cluster for acceptance testing
• Add ConfigMap and namespace creation capabilities to Kubernetes test infrastructure
• Implement regexp support for certificate identity/issuer matching in verification tasks
• Reduce policy repetition in feature files with helper step definitions
• Add acceptance tests for keyless signing with local test images
Diagram
flowchart LR
  A["Tekton Task<br/>collect-keyless-signing-params"] -->|reads| B["ConfigMap<br/>cluster-config"]
  B -->|provides| C["Signing Parameters<br/>OIDC, Rekor, TUF URLs"]
  D["TUF Server<br/>nginx in cluster"] -->|serves| E["Acceptance Tests<br/>keyless verification"]
  F["Test Infrastructure<br/>ConfigMap/Namespace creation"] -->|supports| E
  G["Verification Tasks<br/>regexp certificate matching"] -->|uses| C


File Changes

1. tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml ✨ Enhancement +167/-0

New Tekton task for collecting keyless signing parameters

tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml


2. acceptance/image/image.go 🐞 Bug fix +9/-1

Skip empty DSSE envelope signatures in attestations

acceptance/image/image.go


3. acceptance/kubernetes/kind/kubernetes.go ✨ Enhancement +148/-0

Add ConfigMap and namespace creation methods with RBAC

acceptance/kubernetes/kind/kubernetes.go


4. acceptance/kubernetes/kind/kind.go 📝 Documentation +1/-1

Improve error message for free port detection

acceptance/kubernetes/kind/kind.go


5. acceptance/kubernetes/kubernetes.go ✨ Enhancement +81/-1

Add step definitions for ConfigMap and namespace creation

acceptance/kubernetes/kubernetes.go


6. acceptance/kubernetes/kubernetes.go ✨ Enhancement +81/-1

Add helper functions for common policy definitions

acceptance/kubernetes/kubernetes.go


7. acceptance/kubernetes/kubernetes.go ✨ Enhancement +81/-1

Add TUF and Rekor environment variables for Tekton tasks

acceptance/kubernetes/kubernetes.go


8. acceptance/kubernetes/stub/stub.go ✨ Enhancement +8/-0

Add stub implementations for ConfigMap operations

acceptance/kubernetes/stub/stub.go


9. acceptance/kubernetes/types/types.go ✨ Enhancement +2/-0

Add ConfigMap and namespace creation to Cluster interface

acceptance/kubernetes/types/types.go


10. acceptance/tuf/tuf.go 📝 Documentation +1/-0

Add documentation comment for TUF stub endpoint

acceptance/tuf/tuf.go


11. acceptance/wiremock/wiremock.go ✨ Enhancement +26/-6

Support starting WireMock on specific port for TUF service

acceptance/wiremock/wiremock.go


12. hack/tuf/create-tuf-files.sh ✨ Enhancement +98/-0

Script to generate TUF ConfigMaps from WireMock data

hack/tuf/create-tuf-files.sh


13. hack/tuf/kustomization.yaml ⚙️ Configuration changes +27/-0

Kustomize configuration for TUF service deployment

hack/tuf/kustomization.yaml


14. hack/tuf/namespace.yaml ⚙️ Configuration changes +21/-0

Kubernetes namespace for TUF service

hack/tuf/namespace.yaml


15. hack/tuf/tuf.yaml ⚙️ Configuration changes +177/-0

TUF server deployment with nginx and ConfigMaps

hack/tuf/tuf.yaml


16. hack/test/kustomization.yaml ⚙️ Configuration changes +1/-0

Include TUF service in test cluster setup

hack/test/kustomization.yaml


17. Makefile ⚙️ Configuration changes +23/-2

Add targets for generating and managing TUF ConfigMaps

Makefile


18. features/task_validate_image.feature 🧪 Tests +166/-142

Add keyless signing acceptance tests and reduce policy repetition

features/task_validate_image.feature


19. docs/modules/ROOT/pages/collect-keyless-signing-params.adoc 📝 Documentation +41/-0

Documentation for collect-keyless-signing-params task

docs/modules/ROOT/pages/collect-keyless-signing-params.adoc


20. docs/modules/ROOT/pages/verify-enterprise-contract.adoc 📝 Documentation +2/-0

Document regexp certificate identity/issuer parameters

docs/modules/ROOT/pages/verify-enterprise-contract.adoc


21. docs/modules/ROOT/pages/verify-conforma-konflux-ta.adoc 📝 Documentation +2/-0

Document regexp certificate identity/issuer parameters

docs/modules/ROOT/pages/verify-conforma-konflux-ta.adoc


22. docs/modules/ROOT/partials/tasks_nav.adoc 📝 Documentation +1/-0

Add collect-keyless-signing-params to task navigation

docs/modules/ROOT/partials/tasks_nav.adoc


23. tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml ✨ Enhancement +51/-5

Add regexp support for certificate matching in verification

tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml


24. tasks/verify-conforma-konflux-ta/0.1/verify-conforma-konflux-ta.yaml ✨ Enhancement +51/-5

Add regexp support for certificate matching in verification

tasks/verify-conforma-konflux-ta/0.1/verify-conforma-konflux-ta.yaml




@qodo-code-review

qodo-code-review bot commented Mar 18, 2026

Code Review by Qodo

🐞 Bugs (6) 📘 Rule violations (0) 📎 Requirement gaps (0) 📐 Spec deviations (0)



Action required

1. Acceptance go test now fails 🐞 Bug ⛯ Reliability ⭐ New
Description
Acceptance tests run via the documented cd acceptance && go test ./... flow can fail because the
kind cluster kustomization now includes hack/tuf, which references generated ConfigMap YAMLs that
won’t exist unless make tuf-yaml has been run first. This yields a kustomize render error before
tests even start.
Code

hack/tuf/kustomization.yaml[R21-27]

+resources:
+  - namespace.yaml
+  - tuf.yaml
+  # These two files are generated using `make tuf-yaml` from wiremock data.
+  # See also the hack/tuf/create-tuf-files.sh script in this directory.
+  - tuf-files-configmap.yaml
+  - tuf-root-configmap.yaml
Evidence
The test cluster render path always runs kustomize on hack/test from disk; hack/test now
includes ../tuf, and the hack/tuf kustomization explicitly requires two generated YAML files.
The repo’s acceptance README states you can run tests with cd acceptance && go test ./..., but
only the Makefile acceptance targets generate these files first.

hack/tuf/kustomization.yaml[21-27]
hack/test/kustomization.yaml[21-26]
acceptance/kubernetes/kind/kind.go[281-296]
acceptance/kustomize/kustomize.go[29-42]
acceptance/README.md[13-28]
Makefile[122-162]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Acceptance tests can fail when run directly from the `acceptance` module (`cd acceptance && go test ./...`) because `hack/tuf/kustomization.yaml` references `tuf-files-configmap.yaml` and `tuf-root-configmap.yaml`, which are generated by `make tuf-yaml` and are not present by default.

### Issue Context
`kind.Start()` renders `hack/test` via `kustomize.Render("test")`. `hack/test/kustomization.yaml` now includes `../tuf`, and `hack/tuf/kustomization.yaml` includes the generated YAMLs as resources. The Makefile’s `acceptance` target runs `make tuf-yaml`, but the acceptance README also documents running `go test` directly.

### Fix Focus Areas
- acceptance/kubernetes/kind/kind.go[281-296]
- hack/tuf/kustomization.yaml[21-27]
- acceptance/README.md[13-28]
- Makefile[122-162]

### Suggested fix approaches
1) Add a preflight check in `renderTestConfiguration` (or before calling it) that verifies the generated YAML files exist; if not, return a clear error instructing users to run `make tuf-yaml` (or `make acceptance`).
2) Alternatively (or additionally), update `acceptance/README.md` to explicitly require `make tuf-yaml` before `go test`, or add an automated generation step in the acceptance test harness (with clear logging).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. ConfigMap update missing RV 🐞 Bug ✓ Correctness
Description
acceptance/kubernetes/kind.(*kindCluster).CreateConfigMap attempts to Update an existing ConfigMap
using a newly constructed object without ResourceVersion, so updates will be rejected whenever the
ConfigMap already exists. This breaks the function’s intended idempotent "create or update"
behavior.
Code

acceptance/kubernetes/kind/kubernetes.go[R229-235]

+	// Create the ConfigMap (or update if it already exists)
+	if _, err := k.client.CoreV1().ConfigMaps(namespace).Create(ctx, configMap, metav1.CreateOptions{}); err != nil {
+		if strings.Contains(err.Error(), "already exists") {
+			// ConfigMap exists, so update it with the new content
+			if _, err := k.client.CoreV1().ConfigMaps(namespace).Update(ctx, configMap, metav1.UpdateOptions{}); err != nil {
+				return fmt.Errorf("failed to update existing ConfigMap: %w", err)
+			}
Evidence
The code constructs a new ConfigMap with only Name/Namespace/Data and, on "already exists", calls
Update with that object without first fetching the current ConfigMap to populate metadata like
ResourceVersion.

acceptance/kubernetes/kind/kubernetes.go[221-239]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`CreateConfigMap` attempts to update an existing ConfigMap using a locally-constructed object that does not include `metadata.resourceVersion`, which Kubernetes requires for Update.
### Issue Context
This code path triggers when the ConfigMap already exists, so the function is currently not reliably idempotent.
### Fix Focus Areas
- acceptance/kubernetes/kind/kubernetes.go[221-239]
### Implementation notes
- Replace `strings.Contains(err.Error(), "already exists")` with `apierrors.IsAlreadyExists(err)`.
- On already-exists:
- `existing, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})`
- set `configMap.ResourceVersion = existing.ResourceVersion`
- then `Update`.
- Consider using Patch (server-side apply or strategic merge) to avoid RV handling if preferred.



3. Kubectl errors treated NotFound 🐞 Bug ⛯ Reliability
Description
The collect-keyless-signing-params task treats any kubectl failure as "ConfigMap not found" because
stderr is discarded and error types aren’t distinguished. This contradicts the task description
(which only promises fallback when the ConfigMap is not found) and can silently disable
keyless-signing configuration on RBAC/cluster errors.
Code

tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml[R114-137]

+        # Read from the ConfigMap
+        echo "Reading ConfigMap ${configMapNamespace}/${configMapName}"
+        KFLX_CONFIG_PATH='/tmp/cluster-config.json'
+        if kubectl get configmap "${configMapName}" -n "${configMapNamespace}" -o json > "${KFLX_CONFIG_PATH}" 2>/dev/null; then
+            echo "ConfigMap found, extracting keyless signing parameters"
+
+            # First we read "keylessSigningEnabled"
+            keylessSigningEnabled=$(jq -r '.data.enableKeylessSigning // "false"' "$KFLX_CONFIG_PATH")
+
+            if [ "$keylessSigningEnabled" = "true" ]; then
+              # If that is set to "true" then read the other values
+              defaultOIDCIssuer=$(jq -r '.data.defaultOIDCIssuer // ""' "$KFLX_CONFIG_PATH")
+              rekorExternalUrl=$(jq -r '.data.rekorExternalUrl // ""' "$KFLX_CONFIG_PATH")
+              fulcioExternalUrl=$(jq -r '.data.fulcioExternalUrl // ""' "$KFLX_CONFIG_PATH")
+              tufExternalUrl=$(jq -r '.data.tufExternalUrl // ""' "$KFLX_CONFIG_PATH")
+              buildIdentity=$(jq -r '.data.buildIdentity // ""' "$KFLX_CONFIG_PATH")
+              buildIdentityRegexp=$(jq -r '.data.buildIdentityRegexp // ""' "$KFLX_CONFIG_PATH")
+            else
+              # Otherwise we ignore the reast of the ConfigMap
+              echo "enableKeylessSigning is not set, using default empty values"
+            fi
+        else
+            echo "ConfigMap not found, using default empty values"
+        fi
Evidence
The task spec states fallback behavior specifically for the "ConfigMap is not found" case, but the
script redirects kubectl stderr to /dev/null and treats all failures the same, proceeding with empty
defaults.

tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml[29-36]
tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml[114-137]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The task currently treats all `kubectl get configmap` failures as "not found" because stderr is discarded. This can silently disable keyless signing parameters on RBAC or cluster/API failures.
### Issue Context
The task description promises fallback only when the ConfigMap is not found.
### Fix Focus Areas
- tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml[114-137]
### Implementation notes
- Capture stderr and branch on error type:
- If NotFound: continue with defaults.
- Otherwise: print the error to stderr and exit non-zero (or emit a clear warning and still continue if that is the intended contract, but then update the task description).
- Consider using `kubectl get ... -o json 2>err` and checking for NotFound markers, or `--ignore-not-found` + explicit empty-output detection.




Remediation recommended

4. AlreadyExists check is brittle 🐞 Bug ⚙ Maintainability ⭐ New
Description
The new kind helpers treat “already exists” as a substring of err.Error() for
ClusterRole/ClusterRoleBinding/Namespace creation, which is brittle and inconsistent with the file’s
existing apierrors.IsAlreadyExists usage. This can cause spurious failures if error strings are
wrapped/changed even though the correct typed predicate is available.
Code

acceptance/kubernetes/kind/kubernetes.go[R275-306]

+	if _, err := k.client.RbacV1().ClusterRoles().Create(ctx, clusterRole, metav1.CreateOptions{}); err != nil {
+		// Ignore error if ClusterRole already exists
+		if !strings.Contains(err.Error(), "already exists") {
+			return fmt.Errorf("failed to create ClusterRole: %w", err)
+		}
+	}
+
+	// Create ClusterRoleBinding for all service accounts (idempotent)
+	clusterRoleBinding := &rbacv1.ClusterRoleBinding{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "acceptance-configmap-reader-binding",
+		},
+		RoleRef: rbacv1.RoleRef{
+			APIGroup: "rbac.authorization.k8s.io",
+			Kind:     "ClusterRole",
+			Name:     "acceptance-configmap-reader",
+		},
+		Subjects: []rbacv1.Subject{
+			{
+				Kind:     "Group",
+				Name:     "system:serviceaccounts",
+				APIGroup: "rbac.authorization.k8s.io",
+			},
+		},
+	}
+
+	if _, err := k.client.RbacV1().ClusterRoleBindings().Create(ctx, clusterRoleBinding, metav1.CreateOptions{}); err != nil {
+		// Ignore error if ClusterRoleBinding already exists
+		if !strings.Contains(err.Error(), "already exists") {
+			return fmt.Errorf("failed to create ClusterRoleBinding: %w", err)
+		}
+	}
Evidence
CreateConfigMap correctly uses apierrors.IsAlreadyExists(err) for idempotent create/update, but
ensureConfigMapRBAC and CreateNamedNamespace instead match on err.Error() text. Since the
package already imports k8s.io/apimachinery/pkg/api/errors, the typed check should be used
consistently.

acceptance/kubernetes/kind/kubernetes.go[231-247]
acceptance/kubernetes/kind/kubernetes.go[275-306]
acceptance/kubernetes/kind/kubernetes.go[311-324]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ensureConfigMapRBAC` and `CreateNamedNamespace` handle idempotency by checking whether `err.Error()` contains the substring "already exists". This is brittle and inconsistent with the typed `apierrors.IsAlreadyExists(err)` check already used in `CreateConfigMap`.

### Issue Context
These helpers are used during acceptance test setup and may run multiple times; using the typed predicate avoids dependency on error message text formatting.

### Fix Focus Areas
- acceptance/kubernetes/kind/kubernetes.go[275-306]
- acceptance/kubernetes/kind/kubernetes.go[311-324]
- acceptance/kubernetes/kind/kubernetes.go[231-247]

### Suggested fix
- Replace `strings.Contains(err.Error(), "already exists")` checks with `apierrors.IsAlreadyExists(err)` (and similar typed checks where appropriate).
- For consistency, also update the namespace creation path to use `apierrors.IsAlreadyExists` instead of substring matching.



5. Cluster-wide ConfigMap RBAC 🐞 Bug ⛨ Security
Description
acceptance/kubernetes/kind.ensureConfigMapRBAC grants every service account in the cluster list/get
access to all ConfigMaps cluster-wide via a ClusterRoleBinding to the system:serviceaccounts group.
Even for acceptance tests, this is broader than necessary and can mask missing/incorrect RBAC
assumptions.
Code

acceptance/kubernetes/kind/kubernetes.go[R250-297]

+// ensureConfigMapRBAC creates necessary RBAC permissions for ConfigMap access across namespaces
+func (k *kindCluster) ensureConfigMapRBAC(ctx context.Context) error {
+	// Create ClusterRole for ConfigMap reading (idempotent)
+	clusterRole := &rbacv1.ClusterRole{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "acceptance-configmap-reader",
+		},
+		Rules: []rbacv1.PolicyRule{
+			{
+				APIGroups: []string{""},
+				Resources: []string{"configmaps"},
+				Verbs:     []string{"get", "list"},
+			},
+		},
+	}
+
+	if _, err := k.client.RbacV1().ClusterRoles().Create(ctx, clusterRole, metav1.CreateOptions{}); err != nil {
+		// Ignore error if ClusterRole already exists
+		if !strings.Contains(err.Error(), "already exists") {
+			return fmt.Errorf("failed to create ClusterRole: %w", err)
+		}
+	}
+
+	// Create ClusterRoleBinding for all service accounts (idempotent)
+	clusterRoleBinding := &rbacv1.ClusterRoleBinding{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "acceptance-configmap-reader-binding",
+		},
+		RoleRef: rbacv1.RoleRef{
+			APIGroup: "rbac.authorization.k8s.io",
+			Kind:     "ClusterRole",
+			Name:     "acceptance-configmap-reader",
+		},
+		Subjects: []rbacv1.Subject{
+			{
+				Kind:     "Group",
+				Name:     "system:serviceaccounts",
+				APIGroup: "rbac.authorization.k8s.io",
+			},
+		},
+	}
+
+	if _, err := k.client.RbacV1().ClusterRoleBindings().Create(ctx, clusterRoleBinding, metav1.CreateOptions{}); err != nil {
+		// Ignore error if ClusterRoleBinding already exists
+		if !strings.Contains(err.Error(), "already exists") {
+			return fmt.Errorf("failed to create ClusterRoleBinding: %w", err)
+		}
+	}
Evidence
The ClusterRole allows get/list on configmaps, and the ClusterRoleBinding binds it to the
"system:serviceaccounts" group, which includes all service accounts in all namespaces.

acceptance/kubernetes/kind/kubernetes.go[250-299]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Acceptance test scaffolding grants all service accounts cluster-wide read access to all ConfigMaps.
### Issue Context
This is test-only, but it’s broader than necessary and can hide RBAC regressions/assumptions.
### Fix Focus Areas
- acceptance/kubernetes/kind/kubernetes.go[250-299]
### Implementation notes
- Prefer a `Role` in `konflux-info` with `resources: [configmaps]`, `verbs: [get]`, and `resourceNames: [<configmap name>]`.
- Bind that Role via `RoleBinding` to a narrower subject:
- `system:serviceaccounts:<taskrun-namespace>` group, or
- the specific `ServiceAccount` used by TaskRuns.
- If you need multi-namespace, create bindings per acceptance namespace rather than cluster-wide.

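The narrower Role plus RoleBinding the notes describe might look roughly like this (the `konflux-info` namespace comes from the notes; the ConfigMap name and the TaskRun namespace in the group subject are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: konflux-info
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
    resourceNames: ["cluster-config"]   # placeholder ConfigMap name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-reader-binding
  namespace: konflux-info
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-reader
subjects:
  - kind: Group
    name: system:serviceaccounts:acceptance   # placeholder TaskRun namespace
    apiGroup: rbac.authorization.k8s.io
```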


6. tuf-yaml needs kubectl PATH 🐞 Bug ⛯ Reliability
Description
hack/tuf/create-tuf-files.sh invokes kubectl from PATH to generate YAML, but the build already
produces a kubectl binary under dist/ and the script does not use it. This introduces a host-tooling
dependency that can break make acceptance in environments that otherwise wouldn’t need a
preinstalled kubectl.
Code

hack/tuf/create-tuf-files.sh[R45-80]

+# Build kubectl command with dynamic file list
+build_kubectl_command() {
+    local files_dir="$WIREMOCK_TUF_DIR/__files"
+    local cmd_args=("kubectl" "create" "configmap" "tuf-files" "--namespace=tuf-service")
+
+    local body_files
+    readarray -t body_files < <(extract_body_files)
+
+    if [[ ${#body_files[@]} -eq 0 ]]; then
+        echo "Error: No TUF body files found in wiremock recordings" >&2
+        exit 1
+    fi
+
+    echo "Found ${#body_files[@]} TUF files to include in ConfigMap:" >&2
+
+    for body_file in "${body_files[@]}"; do
+        echo "  - $body_file" >&2
+        # Use the filename as the key in the ConfigMap
+        cmd_args+=("--from-file=$body_file=$files_dir/$body_file")
+    done
+
+    cmd_args+=("--dry-run=client" "-o" "yaml" "--validate=false")
+
+    printf '%s\n' "${cmd_args[@]}"
+}
+
+# Create ConfigMap YAML files
+echo "Creating TUF files ConfigMap YAML..." >&2
+readarray -t kubectl_cmd < <(build_kubectl_command)
+"${kubectl_cmd[@]}" > "$SCRIPT_DIR/tuf-files-configmap.yaml"
+
+echo "Creating TUF root ConfigMap YAML..." >&2
+kubectl create configmap tuf-root-data \
+  --namespace=tuf-service \
+  --from-file=root.json="$PROJECT_ROOT/acceptance/tuf/root.json" \
+  --dry-run=client -o yaml --validate=false > "$SCRIPT_DIR/tuf-root-configmap.yaml"
Evidence
The acceptance Makefile runs make build then make tuf-yaml, and build.sh copies kubectl into
dist/. However, create-tuf-files.sh calls kubectl ... directly, requiring an external kubectl on
PATH instead of using the repo-built one.

Makefile[122-155]
hack/tuf/create-tuf-files.sh[45-80]
build.sh[68-84]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`make acceptance` now generates TUF ConfigMap YAML using a script that calls `kubectl` from PATH, even though the repo build already produces a kubectl binary under `dist/`.
### Issue Context
This can fail in minimal CI/dev environments without a preinstalled kubectl.
### Fix Focus Areas
- hack/tuf/create-tuf-files.sh[45-80]
- Makefile[122-155]
### Implementation notes
- In the script, introduce `KUBECTL_BIN="${KUBECTL:-kubectl}"` and replace hardcoded `kubectl` calls.
- In Makefile, set `KUBECTL=./dist/kubectl_$$(go env GOOS)_$$(go env GOARCH)` when invoking the script (or ensure PATH includes `dist/`).
- Keep a fallback to system kubectl if the dist binary is missing.
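The KUBECTL override described in the notes could be sketched like this (the variable names follow the reviewer's suggestion; nothing here is existing script API):

```shell
# Prefer a caller-supplied kubectl (e.g. the repo-built dist/ binary),
# falling back to whatever kubectl is on PATH if it doesn't resolve.
KUBECTL_BIN="${KUBECTL:-kubectl}"

resolve_kubectl() {
  if command -v "$KUBECTL_BIN" >/dev/null 2>&1; then
    echo "$KUBECTL_BIN"
  else
    echo "kubectl"
  fi
}
```

The script would then invoke `"$(resolve_kubectl)" create configmap ...` instead of a hardcoded `kubectl`.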








@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
features/task_validate_image.feature (1)

417-423: ⚠️ Potential issue | 🟡 Minor

Add the missing buildIdentity fallback assertion in the missing-ConfigMap scenario.

This scenario validates most empty defaults but skips buildIdentity, which leaves a small regression gap for collect-signing-params output consistency.

🔧 Suggested test assertion
      And the task result "tufExternalUrl" should equal ""
+     And the task result "buildIdentity" should equal ""
      And the task result "buildIdentityRegexp" should equal ""
      And the task result "keylessSigningEnabled" should equal "false"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@features/task_validate_image.feature` around lines 417 - 423, Missing
assertion: the scenario that checks empty defaults for the
"collect-signing-params" step omits the "buildIdentity" task result. Add an
assertion alongside the other task result checks to verify that task result
"buildIdentity" equals "" (i.e., the empty fallback) so the test covers the
missing-ConfigMap fallback; place this new assertion with the other lines
checking "defaultOIDCIssuer", "rekorExternalUrl", "fulcioExternalUrl",
"tufExternalUrl", "buildIdentityRegexp", and "keylessSigningEnabled".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: c12462d3-731a-4e75-ab5c-b8dd85512824

📥 Commits

Reviewing files that changed from the base of the PR and between 2fd2640 and 142a144.

⛔ Files ignored due to path filters (1)
  • features/__snapshots__/task_validate_image.snap is excluded by !**/*.snap
📒 Files selected for processing (23)
  • .gitignore
  • Makefile
  • acceptance/image/image.go
  • acceptance/kubernetes/kind/kind.go
  • acceptance/kubernetes/kind/kubernetes.go
  • acceptance/kubernetes/kubernetes.go
  • acceptance/kubernetes/stub/stub.go
  • acceptance/kubernetes/types/types.go
  • acceptance/tuf/tuf.go
  • acceptance/wiremock/wiremock.go
  • docs/modules/ROOT/pages/collect-keyless-signing-params.adoc
  • docs/modules/ROOT/pages/verify-conforma-konflux-ta.adoc
  • docs/modules/ROOT/pages/verify-enterprise-contract.adoc
  • docs/modules/ROOT/partials/tasks_nav.adoc
  • features/task_validate_image.feature
  • hack/test/kustomization.yaml
  • hack/tuf/create-tuf-files.sh
  • hack/tuf/kustomization.yaml
  • hack/tuf/namespace.yaml
  • hack/tuf/tuf.yaml
  • tasks/collect-keyless-signing-params/0.1/collect-keyless-signing-params.yaml
  • tasks/verify-conforma-konflux-ta/0.1/verify-conforma-konflux-ta.yaml
  • tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml
🚧 Files skipped from review as they are similar to previous changes (9)
  • docs/modules/ROOT/partials/tasks_nav.adoc
  • tasks/verify-conforma-konflux-ta/0.1/verify-conforma-konflux-ta.yaml
  • acceptance/tuf/tuf.go
  • acceptance/image/image.go
  • hack/test/kustomization.yaml
  • tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml
  • docs/modules/ROOT/pages/verify-conforma-konflux-ta.adoc
  • Makefile
  • docs/modules/ROOT/pages/verify-enterprise-contract.adoc

Comment on lines +282 to +299
// Create ClusterRoleBinding for all service accounts (idempotent)
clusterRoleBinding := &rbacv1.ClusterRoleBinding{
    ObjectMeta: metav1.ObjectMeta{
        Name: "acceptance-configmap-reader-binding",
    },
    RoleRef: rbacv1.RoleRef{
        APIGroup: "rbac.authorization.k8s.io",
        Kind:     "ClusterRole",
        Name:     "acceptance-configmap-reader",
    },
    Subjects: []rbacv1.Subject{
        {
            Kind:     "Group",
            Name:     "system:serviceaccounts",
            APIGroup: "rbac.authorization.k8s.io",
        },
    },
}
⚠️ Potential issue | 🟠 Major

RBAC binding is too broad (system:serviceaccounts cluster-wide).

Line 295 binds ConfigMap read access to all service accounts in the cluster. For acceptance tests this should be scoped to the specific task service account/namespace only.

🔒 Scope RBAC to the test service account instead of all SAs
-func (k *kindCluster) ensureConfigMapRBAC(ctx context.Context) error {
+func (k *kindCluster) ensureConfigMapRBAC(ctx context.Context, configMapNamespace string) error {
+  t := testenv.FetchState[testState](ctx)
@@
-  clusterRoleBinding := &rbacv1.ClusterRoleBinding{
+  roleBinding := &rbacv1.RoleBinding{
     ObjectMeta: metav1.ObjectMeta{
-      Name: "acceptance-configmap-reader-binding",
+      Name:      fmt.Sprintf("acceptance-configmap-reader-%s", t.namespace),
+      Namespace: configMapNamespace,
     },
     RoleRef: rbacv1.RoleRef{
       APIGroup: "rbac.authorization.k8s.io",
       Kind:     "ClusterRole",
       Name:     "acceptance-configmap-reader",
     },
     Subjects: []rbacv1.Subject{
       {
-        Kind:     "Group",
-        Name:     "system:serviceaccounts",
-        APIGroup: "rbac.authorization.k8s.io",
+        Kind:      "ServiceAccount",
+        Name:      "default",
+        Namespace: t.namespace,
       },
     },
   }
-  if _, err := k.client.RbacV1().ClusterRoleBindings().Create(ctx, clusterRoleBinding, metav1.CreateOptions{}); err != nil {
+  if _, err := k.client.RbacV1().RoleBindings(configMapNamespace).Create(ctx, roleBinding, metav1.CreateOptions{}); err != nil {
     ...
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acceptance/kubernetes/kind/kubernetes.go` around lines 282 - 299, The
ClusterRoleBinding currently grants read access to all cluster service accounts
via the Subject Group "system:serviceaccounts"; update the ClusterRoleBinding
(variable clusterRoleBinding of type rbacv1.ClusterRoleBinding) to scope access
to the specific test service account by replacing the Subjects entry with a
Subject of Kind "ServiceAccount", Name set to the test SA (e.g. the task service
account name used in tests), and Namespace set to the test namespace; keep the
RoleRef (acceptance-configmap-reader) but ensure the binding name and Subject
Namespace match the acceptance test namespace so the permission is limited to
only that service account.
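For reference, the scoped binding the prompt describes corresponds to a namespace-scoped RoleBinding along these lines; the namespace and service account names are illustrative, not taken from the test code:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: acceptance-configmap-reader
  namespace: acceptance-test-ns   # illustrative test namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: acceptance-configmap-reader
subjects:
  - kind: ServiceAccount
    name: default                 # the test task's service account
    namespace: acceptance-test-ns
```

Unlike the `system:serviceaccounts` group binding, this grants ConfigMap read access only to the one service account in the one namespace.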

Comment on lines +42 to +129
spec:
  initContainers:
    - name: setup-tuf-data
      image: busybox:1.35
      command:
        - sh
        - -c
        - |
          cp /tuf-files/* /tuf-data/

          # Create the versioned metadata files that TUF expects
          cp /tuf-data/body-timestamp.json-*.json /tuf-data/timestamp.json
          cp /tuf-data/body-1.snapshot.json-*.json /tuf-data/1.snapshot.json
          cp /tuf-data/body-1.targets.json-*.json /tuf-data/1.targets.json

          # Create root.json from embedded content
          cp /tuf-root/root.json /tuf-data/root.json

          # Create targets directory and copy target files with hash-based names dynamically
          mkdir -p /tuf-data/targets

          # Process all target files using patterns instead of hardcoded names
          for file in /tuf-data/body-targets-*; do
            [ -f "$file" ] || continue

            # Extract the original filename by removing the body-targets- prefix and the -HASH.ext suffix
            basename_file=$(basename "$file")

            # Pattern: body-targets-<LONG-HASH>.<filename>-<SHORT-HASH>.<ext>
            # We want to extract <LONG-HASH>.<filename>.<ext>
            # Handle different patterns by removing the suffix intelligently
            without_prefix=$(echo "$basename_file" | sed 's/^body-targets-//')

            # Special case for tsa_intermedi -> tsa_intermediate_0.crt
            if echo "$without_prefix" | grep -q "tsa_intermedi-"; then
              target_name=$(echo "$without_prefix" | sed 's/tsa_intermedi-.*/tsa_intermediate_0.crt.pem/')
            # For files ending with .-HASH.ext, remove the .-HASH part
            elif echo "$without_prefix" | grep -q '\.-[A-Za-z0-9]*\.'; then
              target_name=$(echo "$without_prefix" | sed 's/\.-[A-Za-z0-9]*\./\./')
            # For files ending with -HASH.ext, remove the -HASH part but keep the extension
            elif echo "$without_prefix" | grep -q '\-[A-Za-z0-9]*\.[a-z]*$'; then
              # Extract everything before -HASH, then add the extension
              ext=$(echo "$without_prefix" | sed 's/.*\.//')
              base=$(echo "$without_prefix" | sed 's/\(.*\)-[A-Za-z0-9]*\.[a-z]*$/\1/')
              target_name="$base.$ext"
            else
              # Fallback: remove -HASH.ext and add the extension
              ext=$(echo "$basename_file" | sed 's/.*\.//')
              base=$(echo "$without_prefix" | sed 's/-[A-Za-z0-9]*\.[a-z]*$//')
              target_name="$base.$ext"
            fi

            if [ -n "$target_name" ]; then
              cp "$file" "/tuf-data/targets/$target_name"
              echo "Copied $basename_file -> targets/$target_name"
            else
              echo "Warning: Could not parse filename pattern for $basename_file"
            fi
          done

          ls -la /tuf-data/
          ls -la /tuf-data/targets/
      volumeMounts:
        - name: tuf-files
          mountPath: /tuf-files
          readOnly: true
        - name: tuf-root
          mountPath: /tuf-root
          readOnly: true
        - name: tuf-data
          mountPath: /tuf-data
  containers:
    - name: tuf-server
      image: docker.io/nginx:1.25-alpine
      ports:
        - name: http
          containerPort: 8080
      env:
        - name: NGINX_PORT
          value: "8080"
      volumeMounts:
        - name: tuf-data
          mountPath: /usr/share/nginx/html
          readOnly: true
        - name: nginx-config
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: nginx.conf
  volumes:
⚠️ Potential issue | 🟠 Major

Harden the TUF deployment with explicit container security context.

Line 42 onward defines containers without explicit hardening (allowPrivilegeEscalation, dropped capabilities, seccomp). This weakens security guarantees in CI/shared clusters.

🔐 Suggested hardening baseline
 spec:
   template:
     metadata:
       labels:
         app.kubernetes.io/name: tuf-server
     spec:
+      securityContext:
+        seccompProfile:
+          type: RuntimeDefault
       initContainers:
         - name: setup-tuf-data
           image: busybox:1.35
+          securityContext:
+            allowPrivilegeEscalation: false
+            capabilities:
+              drop: ["ALL"]
           command:
             - sh
             - -c
             - |
               cp /tuf-files/* /tuf-data/
@@
       containers:
         - name: tuf-server
           image: docker.io/nginx:1.25-alpine
+          securityContext:
+            allowPrivilegeEscalation: false
+            capabilities:
+              drop: ["ALL"]
           ports:
             - name: http
               containerPort: 8080
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hack/tuf/tuf.yaml` around lines 42 - 129, Add an explicit securityContext to
both the initContainer "setup-tuf-data" and main container "tuf-server": set
runAsNonRoot: true (and runAsUser: 1000 or a non-root UID),
allowPrivilegeEscalation: false, capabilities: drop: ["ALL"],
readOnlyRootFilesystem: true, and a seccompProfile (type: RuntimeDefault) so
both containers are constrained by seccomp; ensure these keys are placed under
each container spec (initContainers[] -> name: setup-tuf-data and containers[]
-> name: tuf-server) and adjust runAsUser if the busybox copy logic or nginx
needs a specific non-root UID.
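The filename-normalisation logic from the init container can also be exercised standalone, which makes its branches easier to review. This is a copy of that logic as a function; the sample filename is hypothetical, in the WireMock-recording style the script expects:

```shell
#!/bin/sh
# Mirror of the init container's body-targets-* renaming logic, runnable
# outside the cluster. The sample input below is hypothetical.
parse_target_name() {
  basename_file=$1
  without_prefix=$(echo "$basename_file" | sed 's/^body-targets-//')
  # Special case for tsa_intermedi -> tsa_intermediate_0.crt.pem
  if echo "$without_prefix" | grep -q "tsa_intermedi-"; then
    echo "$without_prefix" | sed 's/tsa_intermedi-.*/tsa_intermediate_0.crt.pem/'
  # Files ending with .-HASH.ext: remove the .-HASH part
  elif echo "$without_prefix" | grep -q '\.-[A-Za-z0-9]*\.'; then
    echo "$without_prefix" | sed 's/\.-[A-Za-z0-9]*\./\./'
  # Files ending with -HASH.ext: drop -HASH but keep the extension
  elif echo "$without_prefix" | grep -q '\-[A-Za-z0-9]*\.[a-z]*$'; then
    ext=$(echo "$without_prefix" | sed 's/.*\.//')
    base=$(echo "$without_prefix" | sed 's/\(.*\)-[A-Za-z0-9]*\.[a-z]*$/\1/')
    echo "$base.$ext"
  else
    # Fallback: remove -HASH.ext and re-append the extension
    ext=$(echo "$basename_file" | sed 's/.*\.//')
    base=$(echo "$without_prefix" | sed 's/-[A-Za-z0-9]*\.[a-z]*$//')
    echo "$base.$ext"
  fi
}

parse_target_name "body-targets-abc123.fulcio_v1.crt-XYZ9.pem"
# prints: abc123.fulcio_v1.crt.pem
```

Keeping the logic in a testable function like this would also make it easier to add cases when new target filename patterns appear in the recordings.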

Comment on lines +21 to +27
resources:
- namespace.yaml
- tuf.yaml
# These two files are generated using `make tuf-yaml` from wiremock data.
# See also the hack/tuf/create-tuf-files.sh script in this directory.
- tuf-files-configmap.yaml
- tuf-root-configmap.yaml
Action required

1. Acceptance go test now fails 🐞 Bug ⛯ Reliability

Acceptance tests run via the documented `cd acceptance && go test ./...` flow can fail because the
kind cluster kustomization now includes `hack/tuf`, which references generated ConfigMap YAMLs that
won't exist unless `make tuf-yaml` has been run first. This yields a kustomize render error before
tests even start.
Agent Prompt
### Issue description
Acceptance tests can fail when run directly from the `acceptance` module (`cd acceptance && go test ./...`) because `hack/tuf/kustomization.yaml` references `tuf-files-configmap.yaml` and `tuf-root-configmap.yaml`, which are generated by `make tuf-yaml` and are not present by default.

### Issue Context
`kind.Start()` renders `hack/test` via `kustomize.Render("test")`. `hack/test/kustomization.yaml` now includes `../tuf`, and `hack/tuf/kustomization.yaml` includes the generated YAMLs as resources. The Makefile’s `acceptance` target runs `make tuf-yaml`, but the acceptance README also documents running `go test` directly.

### Fix Focus Areas
- acceptance/kubernetes/kind/kind.go[281-296]
- hack/tuf/kustomization.yaml[21-27]
- acceptance/README.md[13-28]
- Makefile[122-162]

### Suggested fix approaches
1) Add a preflight check in `renderTestConfiguration` (or before calling it) that verifies the generated YAML files exist; if not, return a clear error instructing users to run `make tuf-yaml` (or `make acceptance`).
2) Alternatively (or additionally), update `acceptance/README.md` to explicitly require `make tuf-yaml` before `go test`, or add an automated generation step in the acceptance test harness (with clear logging).
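Fix approach (1) amounts to an existence check with an actionable error message. A minimal sketch, written here as a shell helper rather than the Go preflight the prompt describes (the function name is hypothetical):

```shell
# Verify the generated TUF ConfigMap YAMLs exist in the given directory
# before kustomize rendering; fail with a hint pointing at `make tuf-yaml`.
check_tuf_yaml() {
  dir=$1
  for f in "$dir/tuf-files-configmap.yaml" "$dir/tuf-root-configmap.yaml"; do
    if [ ! -f "$f" ]; then
      echo "error: $f not found; run 'make tuf-yaml' (or 'make acceptance') first" >&2
      return 1
    fi
  done
  return 0
}
```

The same check translated into `kind.Start()` (returning an `error` instead of exiting) would cover the direct `go test` flow without changing the Makefile path.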

