From 9d01998eb4228a06ff754e79c9b80fb8d1b809f6 Mon Sep 17 00:00:00 2001 From: James Devine Date: Sun, 10 May 2026 16:46:37 +0100 Subject: [PATCH 1/2] ci(test): lint compiled bash bodies with shellcheck MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Adds tests/bash_lint_tests.rs, an integration test that compiles a representative set of fixtures and runs shellcheck against every literal bash: body in the generated YAML. The lint catches the actual silent-failure patterns ADO's "fail on last command" default lets through (SC2164 cd-without-||, SC2155 masked-return, SC2086/2046 unquoted variables, SC2154 unset refs, SC2088 tilde-in-quotes). This replaces the previously proposed approach of sprinkling `set -eo pipefail` across every bash step (PR #492). That approach added boilerplate to ~27 sites without enforcement, drifted as new steps were added, and in two spots actually masked errors more than the original code (`grep ... | tail -1 || true`). Real bugs surfaced and fixed by the new lint: * `src/engine.rs` — `Engine::Copilot::log_dir()` returned `~/.copilot/logs`. Tilde does not expand inside the double-quoted `[ -d "..." ]` test that consumes this value, so the directory check always failed and Copilot logs were silently never collected to the pipeline artifact. Replaced with `$HOME/.copilot/logs`. * `src/runtimes/node/mod.rs` and `src/runtimes/dotnet/mod.rs` — the ensure-`.npmrc` and ensure-`nuget.config` step generators used Rust `\` line continuations in their format strings, which strip leading whitespace. The emitted YAML had body lines flush-left against `- bash: |`, producing invalid YAML. Replaced with raw string literals so indentation is preserved. * Multiple `cd "$DOWNLOAD_DIR"` in `base.yml` / `1es-base.yml` had no `|| exit` guard. Added. * `exit $AGENT_EXIT_CODE` (multiple sites) — quoted. * `mkdir -p {{ working_directory }}/safe_outputs` and the matching `cp -a ...` — quoted the substitution. 
* `JSON_CONTENT=$(echo "$RESULT_LINE" | sed 's/.*PFX://')` rewritten to `${RESULT_LINE##*PFX:}` (avoids forking sed and removes a shellcheck SC2001 finding). Targeted `set -eo pipefail` additions (only where masked-pipeline exit codes matter): * `base.yml` / `1es-base.yml` ado-aw download steps (3 stages × 2 templates): `grep "ado-aw-linux-x64" checksums.txt | sha256sum -c -` silently passes when grep matches nothing because sha256sum returns 0 on empty stdin. Without pipefail, the unverified binary would install successfully. * `src/compile/extensions/trigger_filters.rs` script-download step: same `grep | sha256sum` pattern. * `src/runtimes/lean/mod.rs` install step: `curl ... | sh` would silently install nothing on curl failure. The two pre-existing `set -eo pipefail` instances on the AWF download + docker pull steps (introduced in PR #439) and on the `tee`-piped agent / threat-analysis runs are preserved — those were correct. Skip vs. enforce: * Locally, the test prints a notice and returns early when shellcheck is missing. * CI installs shellcheck and sets `ENFORCE_BASH_LINT=1` so a missing shellcheck becomes a hard failure rather than a silent skip. A new `tests/fixtures/runtime-coverage-agent.md` exercises the Lean, Node-with-feed-url, and .NET-with-feed-url runtimes plus the cache-memory tool, ensuring every code-generated bash step is reached. The lint enforces a `REQUIRED_STEP_DISPLAY_NAMES` coverage list to catch fixture/generator drift. Documented in AGENTS.md and docs/extending.md. 
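For illustration, with a hypothetical captured line (the real prefix is `THREAT_DETECTION_RESULT:`), the sed pipeline and the parameter-expansion rewrite produce identical output; the latter just avoids the fork:

```shell
# Hypothetical log line; only the prefix matters.
RESULT_LINE='2026-05-10 THREAT_DETECTION_RESULT:{"verdict":"safe"}'

# Old form: forks echo | sed, and trips shellcheck SC2001.
JSON_OLD=$(echo "$RESULT_LINE" | sed 's/.*THREAT_DETECTION_RESULT://')

# New form: pure-shell longest-prefix strip, no subshell.
JSON_NEW="${RESULT_LINE##*THREAT_DETECTION_RESULT:}"

echo "$JSON_NEW"   # {"verdict":"safe"}
```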
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .github/workflows/rust-tests.yml | 8 + AGENTS.md | 19 ++ docs/extending.md | 42 +++ src/compile/extensions/trigger_filters.rs | 1 + src/data/1es-base.yml | 26 +- src/data/base.yml | 26 +- src/engine.rs | 6 +- src/runtimes/dotnet/mod.rs | 34 +-- src/runtimes/lean/mod.rs | 1 + src/runtimes/node/mod.rs | 18 +- tests/bash_lint_tests.rs | 354 ++++++++++++++++++++++ tests/fixtures/runtime-coverage-agent.md | 22 ++ 12 files changed, 510 insertions(+), 47 deletions(-) create mode 100644 tests/bash_lint_tests.rs create mode 100644 tests/fixtures/runtime-coverage-agent.md diff --git a/.github/workflows/rust-tests.yml b/.github/workflows/rust-tests.yml index ff13a1d..37a0c22 100644 --- a/.github/workflows/rust-tests.yml +++ b/.github/workflows/rust-tests.yml @@ -22,8 +22,16 @@ jobs: - uses: Swatinem/rust-cache@v2 + - name: Install shellcheck + # ubuntu-latest already ships shellcheck, but install explicitly to + # guarantee it on every refresh of the runner image and to surface + # a clear failure when the bash-lint integration test is enforced. + run: sudo apt-get update && sudo apt-get install -y shellcheck + - name: Build run: cargo build --verbose - name: Run tests + env: + ENFORCE_BASH_LINT: "1" run: cargo test --verbose diff --git a/AGENTS.md b/AGENTS.md index 10f3bea..4deb9bb 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -273,6 +273,25 @@ cargo test cargo clippy ``` +### Bash step lint + +The `tests/bash_lint_tests.rs` integration test compiles a representative set +of fixtures and runs `shellcheck` against every literal `bash:` body in the +generated YAML. It catches silent-failure patterns that ADO's "fail on last +command" default would let through (e.g. `cd "$X"` without `|| exit`, tilde +inside double quotes, masked-return assignments). + +The test is skipped if `shellcheck` is not on PATH. 
Install locally with +`brew install shellcheck` (macOS) or `apt-get install -y shellcheck` (Debian +/ Ubuntu); CI installs it in `.github/workflows/rust-tests.yml` and sets +`ENFORCE_BASH_LINT=1` so a missing shellcheck becomes a hard failure rather +than a silent skip. + +When adding a new bash step, run `cargo test --test bash_lint_tests` and fix +anything it flags. If a finding is genuinely intentional, add a +`# shellcheck disable=SCxxxx` comment immediately above the offending line in +the bash body — shellcheck honours the directive and it's inert at runtime. + ## Common Tasks ### Compile a markdown pipeline diff --git a/docs/extending.md b/docs/extending.md index 636b7a7..2f68076 100644 --- a/docs/extending.md +++ b/docs/extending.md @@ -90,3 +90,45 @@ To add a new filter type: `validate_pr_filters()` or `validate_pipeline_filters()` 5. **Write tests** — lowering test, validation test, and codegen test in `filter_ir.rs` + +## Bash steps in pipeline templates + +Pipeline templates and Rust step generators emit dozens of multi-line `bash:` +steps. ADO bash steps fail only on the *last* command's exit status by +default, so a chain like `mkdir … && curl … && cd … && cmd` can silently +swallow earlier failures. + +Rather than spread `set -eo pipefail` boilerplate across every step, the +project enforces hygiene via `tests/bash_lint_tests.rs`, which compiles a set +of fixtures and runs `shellcheck` against every literal `bash:` body in the +generated YAML. The lint catches: + +- **SC2164** — `cd $X` without `|| exit` (the canonical silent-failure) +- **SC2155** — `local var=$(cmd)` masking the inner exit code +- **SC2086 / SC2046** — unquoted variables / command substitutions +- **SC2154** — variables referenced but never assigned +- **SC2088** — tilde inside double quotes (does not expand at all) + +When you add or modify a bash step: + +1. 
Run `cargo test --test bash_lint_tests` (locally requires `shellcheck` on
+   PATH; install with `brew install shellcheck` or
+   `apt-get install -y shellcheck`). CI sets `ENFORCE_BASH_LINT=1` so a
+   missing shellcheck becomes a hard failure rather than a silent skip.
+2. Fix any finding by adjusting the bash. Common fixes: `cd "$X" || exit 1`,
+   `exit "$CODE"`, `"$HOME/.foo"` instead of `"~/.foo"`, quoting variable
+   expansions.
+3. If a finding is genuinely intentional, add a
+   `# shellcheck disable=SCxxxx` comment immediately above the line in the
+   bash body. Such directives are bash comments and have no runtime effect.
+
+Do **not** sprinkle `set -eo pipefail` into every step to silence the lint —
+that approach was tried (PR #492) and was rejected because it adds noise,
+drifts as new steps are added, and doesn't address the actual silent-failure
+patterns that the lint surfaces. Use targeted `set -eo pipefail` only when a
+step has a real fail-fast requirement that the lint cannot express (the
+current uses are on AWF/MCPG download and the `tee`-piped agent run).
+
+The exclude list (`SC1090`, `SC1091`) is documented in
+`tests/bash_lint_tests.rs`. Each entry has a justification — do not extend
+without one.
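The most common fixes can be sketched concretely (paths and variable names below are illustrative, not taken from the templates):

```shell
# Before (flagged by the lint):
#   cd $DOWNLOAD_DIR              -> SC2086 (unquoted) + SC2164 (unguarded)
#   [ -d "~/.copilot/logs" ]      -> SC2088 (tilde never expands in quotes)

# After (clean): quote and guard the cd; use $HOME, which does expand
# inside double quotes where a tilde does not.
DOWNLOAD_DIR="${TMPDIR:-/tmp}"
cd "$DOWNLOAD_DIR" || exit 1
LOG_DIR="$HOME/.copilot/logs"
[ -d "$LOG_DIR" ] || echo "no logs at $LOG_DIR"
```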
diff --git a/src/compile/extensions/trigger_filters.rs b/src/compile/extensions/trigger_filters.rs index a0d84a6..1b0be9c 100644 --- a/src/compile/extensions/trigger_filters.rs +++ b/src/compile/extensions/trigger_filters.rs @@ -101,6 +101,7 @@ impl CompilerExtension for TriggerFiltersExtension { let mut steps = Vec::new(); steps.push(format!( r#"- bash: | + set -eo pipefail mkdir -p /tmp/ado-aw-scripts curl -fsSL "{RELEASE_BASE_URL}/v{version}/checksums.txt" -o /tmp/ado-aw-scripts/checksums.txt curl -fsSL "{RELEASE_BASE_URL}/v{version}/scripts.zip" -o /tmp/ado-aw-scripts/scripts.zip diff --git a/src/data/1es-base.yml b/src/data/1es-base.yml index 920da5e..dfda3fe 100644 --- a/src/data/1es-base.yml +++ b/src/data/1es-base.yml @@ -59,6 +59,7 @@ extends: {{ engine_install_steps }} - bash: | + set -eo pipefail COMPILER_VERSION="{{ compiler_version }}" DOWNLOAD_DIR="$(Pipeline.Workspace)/agentic-pipeline-compiler" DOWNLOAD_URL="https://github.com/githubnext/ado-aw/releases/download/v${COMPILER_VERSION}/ado-aw-linux-x64" @@ -70,7 +71,7 @@ extends: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." - cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "ado-aw-linux-x64" checksums.txt | sha256sum -c - mv ado-aw-linux-x64 ado-aw chmod +x ado-aw @@ -156,7 +157,7 @@ extends: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." 
- cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "awf-linux-x64" checksums.txt | sha256sum -c - mv awf-linux-x64 awf chmod +x awf @@ -204,6 +205,7 @@ extends: # Wait for server to be ready READY=false + # shellcheck disable=SC2034 # i is intentionally unused; wait-N-times loop for i in $(seq 1 30); do if curl -sf "http://localhost:$SAFE_OUTPUTS_PORT/health" > /dev/null 2>&1; then echo "SafeOutputs HTTP server is ready" @@ -267,6 +269,7 @@ extends: # Wait for MCPG to be ready READY=false + # shellcheck disable=SC2034 # i is intentionally unused; wait-N-times loop for i in $(seq 1 30); do if curl -sf "http://localhost:{{ mcpg_port }}/health" > /dev/null 2>&1; then echo "MCPG is ready" @@ -284,6 +287,7 @@ extends: # Health check passing doesn't guarantee stdout is flushed, so poll. echo "Waiting for gateway output file..." GATEWAY_READY=false + # shellcheck disable=SC2034 # i is intentionally unused; wait-N-times loop for i in $(seq 1 15); do if [ -s "$GATEWAY_OUTPUT" ] && jq -e '.mcpServers' "$GATEWAY_OUTPUT" > /dev/null 2>&1; then echo "Gateway output is ready" @@ -361,7 +365,7 @@ extends: "$(Pipeline.Workspace)/awf/awf" logs summary --source "$(Agent.TempDirectory)/staging/logs/firewall" 2>/dev/null || true fi - exit $AGENT_EXIT_CODE + exit "$AGENT_EXIT_CODE" displayName: "Run copilot (AWF network isolated)" workingDirectory: {{ working_directory }} env: @@ -433,6 +437,7 @@ extends: {{ engine_install_steps }} - bash: | + set -eo pipefail COMPILER_VERSION="{{ compiler_version }}" DOWNLOAD_DIR="$(Pipeline.Workspace)/agentic-pipeline-compiler" DOWNLOAD_URL="https://github.com/githubnext/ado-aw/releases/download/v${COMPILER_VERSION}/ado-aw-linux-x64" @@ -444,7 +449,7 @@ extends: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." 
- cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "ado-aw-linux-x64" checksums.txt | sha256sum -c - mv ado-aw-linux-x64 ado-aw chmod +x ado-aw @@ -469,7 +474,7 @@ extends: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." - cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "awf-linux-x64" checksums.txt | sha256sum -c - mv awf-linux-x64 awf chmod +x awf @@ -487,8 +492,8 @@ extends: displayName: "Pre-pull AWF container images (v{{ firewall_version }})" - bash: | - mkdir -p {{ working_directory }}/safe_outputs - cp -a "$(Pipeline.Workspace)/agent_outputs_$(Build.BuildId)/." {{ working_directory }}/safe_outputs + mkdir -p "{{ working_directory }}/safe_outputs" + cp -a "$(Pipeline.Workspace)/agent_outputs_$(Build.BuildId)/." "{{ working_directory }}/safe_outputs" displayName: "Prepare safe outputs for analysis" - bash: | @@ -526,7 +531,7 @@ extends: | tee "$THREAT_OUTPUT_FILE" \ && AGENT_EXIT_CODE=0 || AGENT_EXIT_CODE=$? - exit $AGENT_EXIT_CODE + exit "$AGENT_EXIT_CODE" displayName: "Run threat analysis (AWF network isolated)" workingDirectory: {{ working_directory }} env: @@ -550,7 +555,7 @@ extends: RESULT_LINE=$(grep "THREAT_DETECTION_RESULT:" "$(Agent.TempDirectory)/threat-analysis-output.txt" | tail -1) if [ -n "$RESULT_LINE" ]; then # Extract JSON after the prefix - JSON_CONTENT=$(echo "$RESULT_LINE" | sed 's/.*THREAT_DETECTION_RESULT://') + JSON_CONTENT="${RESULT_LINE##*THREAT_DETECTION_RESULT:}" echo "$JSON_CONTENT" > "$(Agent.TempDirectory)/analyzed_outputs/threat-analysis.json" echo "Extracted threat analysis JSON:" cat "$(Agent.TempDirectory)/analyzed_outputs/threat-analysis.json" @@ -635,6 +640,7 @@ extends: artifact: analyzed_outputs_$(Build.BuildId) - bash: | + set -eo pipefail COMPILER_VERSION="{{ compiler_version }}" DOWNLOAD_DIR="$(Pipeline.Workspace)/agentic-pipeline-compiler" DOWNLOAD_URL="https://github.com/githubnext/ado-aw/releases/download/v${COMPILER_VERSION}/ado-aw-linux-x64" @@ -646,7 +652,7 @@ 
extends: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." - cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "ado-aw-linux-x64" checksums.txt | sha256sum -c - mv ado-aw-linux-x64 ado-aw chmod +x ado-aw diff --git a/src/data/base.yml b/src/data/base.yml index d5ca150..83c6d92 100644 --- a/src/data/base.yml +++ b/src/data/base.yml @@ -30,6 +30,7 @@ jobs: {{ engine_install_steps }} - bash: | + set -eo pipefail COMPILER_VERSION="{{ compiler_version }}" DOWNLOAD_DIR="$(Pipeline.Workspace)/agentic-pipeline-compiler" DOWNLOAD_URL="https://github.com/githubnext/ado-aw/releases/download/v${COMPILER_VERSION}/ado-aw-linux-x64" @@ -41,7 +42,7 @@ jobs: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." - cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "ado-aw-linux-x64" checksums.txt | sha256sum -c - mv ado-aw-linux-x64 ado-aw chmod +x ado-aw @@ -127,7 +128,7 @@ jobs: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." - cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "awf-linux-x64" checksums.txt | sha256sum -c - mv awf-linux-x64 awf chmod +x awf @@ -175,6 +176,7 @@ jobs: # Wait for server to be ready READY=false + # shellcheck disable=SC2034 # i is intentionally unused; wait-N-times loop for i in $(seq 1 30); do if curl -sf "http://localhost:$SAFE_OUTPUTS_PORT/health" > /dev/null 2>&1; then echo "SafeOutputs HTTP server is ready" @@ -238,6 +240,7 @@ jobs: # Wait for MCPG to be ready READY=false + # shellcheck disable=SC2034 # i is intentionally unused; wait-N-times loop for i in $(seq 1 30); do if curl -sf "http://localhost:{{ mcpg_port }}/health" > /dev/null 2>&1; then echo "MCPG is ready" @@ -255,6 +258,7 @@ jobs: # Health check passing doesn't guarantee stdout is flushed, so poll. echo "Waiting for gateway output file..." 
GATEWAY_READY=false + # shellcheck disable=SC2034 # i is intentionally unused; wait-N-times loop for i in $(seq 1 15); do if [ -s "$GATEWAY_OUTPUT" ] && jq -e '.mcpServers' "$GATEWAY_OUTPUT" > /dev/null 2>&1; then echo "Gateway output is ready" @@ -332,7 +336,7 @@ jobs: "$(Pipeline.Workspace)/awf/awf" logs summary --source "$(Agent.TempDirectory)/staging/logs/firewall" 2>/dev/null || true fi - exit $AGENT_EXIT_CODE + exit "$AGENT_EXIT_CODE" displayName: "Run copilot (AWF network isolated)" workingDirectory: {{ working_directory }} env: @@ -402,6 +406,7 @@ jobs: {{ engine_install_steps }} - bash: | + set -eo pipefail COMPILER_VERSION="{{ compiler_version }}" DOWNLOAD_DIR="$(Pipeline.Workspace)/agentic-pipeline-compiler" DOWNLOAD_URL="https://github.com/githubnext/ado-aw/releases/download/v${COMPILER_VERSION}/ado-aw-linux-x64" @@ -413,7 +418,7 @@ jobs: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." - cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "ado-aw-linux-x64" checksums.txt | sha256sum -c - mv ado-aw-linux-x64 ado-aw chmod +x ado-aw @@ -438,7 +443,7 @@ jobs: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." - cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "awf-linux-x64" checksums.txt | sha256sum -c - mv awf-linux-x64 awf chmod +x awf @@ -456,8 +461,8 @@ jobs: displayName: "Pre-pull AWF container images (v{{ firewall_version }})" - bash: | - mkdir -p {{ working_directory }}/safe_outputs - cp -a "$(Pipeline.Workspace)/agent_outputs_$(Build.BuildId)/." {{ working_directory }}/safe_outputs + mkdir -p "{{ working_directory }}/safe_outputs" + cp -a "$(Pipeline.Workspace)/agent_outputs_$(Build.BuildId)/." "{{ working_directory }}/safe_outputs" displayName: "Prepare safe outputs for analysis" - bash: | @@ -495,7 +500,7 @@ jobs: | tee "$THREAT_OUTPUT_FILE" \ && AGENT_EXIT_CODE=0 || AGENT_EXIT_CODE=$? 
- exit $AGENT_EXIT_CODE + exit "$AGENT_EXIT_CODE" displayName: "Run threat analysis (AWF network isolated)" workingDirectory: {{ working_directory }} env: @@ -519,7 +524,7 @@ jobs: RESULT_LINE=$(grep "THREAT_DETECTION_RESULT:" "$(Agent.TempDirectory)/threat-analysis-output.txt" | tail -1) if [ -n "$RESULT_LINE" ]; then # Extract JSON after the prefix - JSON_CONTENT=$(echo "$RESULT_LINE" | sed 's/.*THREAT_DETECTION_RESULT://') + JSON_CONTENT="${RESULT_LINE##*THREAT_DETECTION_RESULT:}" echo "$JSON_CONTENT" > "$(Agent.TempDirectory)/analyzed_outputs/threat-analysis.json" echo "Extracted threat analysis JSON:" cat "$(Agent.TempDirectory)/analyzed_outputs/threat-analysis.json" @@ -603,6 +608,7 @@ jobs: artifact: analyzed_outputs_$(Build.BuildId) - bash: | + set -eo pipefail COMPILER_VERSION="{{ compiler_version }}" DOWNLOAD_DIR="$(Pipeline.Workspace)/agentic-pipeline-compiler" DOWNLOAD_URL="https://github.com/githubnext/ado-aw/releases/download/v${COMPILER_VERSION}/ado-aw-linux-x64" @@ -614,7 +620,7 @@ jobs: curl -fsSL -o "$DOWNLOAD_DIR/checksums.txt" "$CHECKSUM_URL" echo "Verifying checksum..." - cd "$DOWNLOAD_DIR" + cd "$DOWNLOAD_DIR" || exit 1 grep "ado-aw-linux-x64" checksums.txt | sha256sum -c - mv ado-aw-linux-x64 ado-aw chmod +x ado-aw diff --git a/src/engine.rs b/src/engine.rs index 0d8e182..182d74b 100644 --- a/src/engine.rs +++ b/src/engine.rs @@ -105,7 +105,11 @@ impl Engine { /// Used by log collection steps to copy engine logs to pipeline artifacts. pub fn log_dir(&self) -> &str { match self { - Engine::Copilot => "~/.copilot/logs", + // `$HOME` (not `~`) so that the bash `[ -d "..." ]` test below + // actually expands. Tilde does not expand inside double quotes, + // so the previous value caused the directory check to always + // fail and Copilot logs were silently never collected. 
+ Engine::Copilot => "$HOME/.copilot/logs", } } diff --git a/src/runtimes/dotnet/mod.rs b/src/runtimes/dotnet/mod.rs index d91e705..eb885cb 100644 --- a/src/runtimes/dotnet/mod.rs +++ b/src/runtimes/dotnet/mod.rs @@ -207,23 +207,23 @@ pub fn generate_ensure_nuget_config(config: &DotnetRuntimeConfig) -> String { let feed_url = config.feed_url().unwrap_or("https://api.nuget.org/v3/index.json"); format!( - "\ -- bash: |\n\ - if [ ! -f nuget.config ] && [ ! -f NuGet.config ] && [ ! -f NuGet.Config ]; then\n\ - cat > nuget.config <<'EOF'\n\ - \n\ - \n\ - \n\ - \n\ - \n\ - \n\ - \n\ - EOF\n\ - echo 'Created nuget.config with source={feed_url}'\n\ - else\n\ - echo 'nuget.config already exists, skipping creation'\n\ - fi\n\ - displayName: 'Ensure nuget.config exists'" + r#"- bash: | + set -eo pipefail + if [ ! -f nuget.config ] && [ ! -f NuGet.config ] && [ ! -f NuGet.Config ]; then + cat > nuget.config <<'EOF' + + + + + + + + EOF + echo 'Created nuget.config with source={feed_url}' + else + echo 'nuget.config already exists, skipping creation' + fi + displayName: 'Ensure nuget.config exists'"# ) } diff --git a/src/runtimes/lean/mod.rs b/src/runtimes/lean/mod.rs index fb07019..3772844 100644 --- a/src/runtimes/lean/mod.rs +++ b/src/runtimes/lean/mod.rs @@ -91,6 +91,7 @@ pub fn generate_lean_install(config: &LeanRuntimeConfig) -> String { let toolchain = config.toolchain().unwrap_or("stable"); let script = format!( "\ +set -eo pipefail curl https://elan.lean-lang.org/elan-init.sh -sSf | sh -s -- -y --default-toolchain {toolchain} echo \"##vso[task.prependpath]$HOME/.elan/bin\" export PATH=\"$HOME/.elan/bin:$PATH\" diff --git a/src/runtimes/node/mod.rs b/src/runtimes/node/mod.rs index 8933b81..6ace540 100644 --- a/src/runtimes/node/mod.rs +++ b/src/runtimes/node/mod.rs @@ -155,15 +155,15 @@ pub fn generate_ensure_npmrc(config: &NodeRuntimeConfig) -> String { .unwrap_or("https://registry.npmjs.org/"); format!( - "\ -- bash: |\n\ - if [ ! 
-f .npmrc ]; then\n\ - echo 'registry={registry}' > .npmrc\n\ - echo 'Created .npmrc with registry={registry}'\n\ - else\n\ - echo '.npmrc already exists, skipping creation'\n\ - fi\n\ - displayName: 'Ensure .npmrc exists'" + r#"- bash: | + set -eo pipefail + if [ ! -f .npmrc ]; then + echo 'registry={registry}' > .npmrc + echo 'Created .npmrc with registry={registry}' + else + echo '.npmrc already exists, skipping creation' + fi + displayName: 'Ensure .npmrc exists'"# ) } diff --git a/tests/bash_lint_tests.rs b/tests/bash_lint_tests.rs new file mode 100644 index 0000000..6d9d4f8 --- /dev/null +++ b/tests/bash_lint_tests.rs @@ -0,0 +1,354 @@ +//! Integration test that lints the bash bodies of compiled pipeline YAML +//! using `shellcheck`. +//! +//! ## Why this test exists +//! +//! Pipeline templates contain dozens of multi-line `bash:` steps. ADO bash +//! steps fail only on the *last* command's exit code by default, which makes +//! it easy for an earlier command to fail silently and the step to still +//! report green. Rather than spread `set -eo pipefail` boilerplate across +//! every step, we lint each bash body with shellcheck. Real silent-failure +//! patterns surface here: +//! +//! * **SC2164** — `cd $X` without `|| exit` (the canonical silent-failure) +//! * **SC2155** — `local var=$(cmd)` masking the inner exit code +//! * **SC2086 / SC2046** — unquoted variables / command substitutions +//! * **SC2154** — variables referenced but never assigned +//! * **SC2088** — tilde inside double quotes (does not expand) +//! +//! ## How it works +//! +//! 1. Compiles a representative set of fixtures with `ado-aw compile`. +//! 2. For each generated `*.lock.yml`, walks the YAML and collects every +//! `bash:` body that is the value of a step entry (i.e., a mapping that +//! is itself an element of a sequence). This avoids false positives from +//! arbitrary `bash` keys nested inside `env:` blocks or comments. +//! 3. 
Pipes each body to `shellcheck --shell=bash --format=json -`. +//! 4. Aggregates findings; the test fails with a structured report listing +//! every finding by fixture / step / line / code / message. +//! +//! ## Skip vs. enforce +//! +//! By default, if `shellcheck` is not installed locally the test prints a +//! notice and returns early. CI runners are expected to set the +//! `ENFORCE_BASH_LINT` environment variable so a missing shellcheck becomes +//! a hard failure rather than a silent skip. To install shellcheck locally: +//! +//! * macOS: `brew install shellcheck` +//! * Debian / Ubuntu: `apt-get install -y shellcheck` + +use std::collections::BTreeMap; +use std::io::Write; +use std::path::{Path, PathBuf}; +use std::process::{Command, Stdio}; + +use serde_yaml::Value; +use tempfile::TempDir; + +/// Shellcheck rule codes that are intentionally suppressed for ADO bash steps. +/// +/// Each entry has a justification — do not extend this list without one. +/// The list is deliberately short: project-specific suppressions belong as +/// per-line `# shellcheck disable=SCxxxx` comments inside the bash body, not +/// as a global override. +/// +/// * **SC1090, SC1091** — `source` paths that include ADO macros +/// (e.g. `$(Pipeline.Workspace)`) are dynamic and cannot be resolved by +/// shellcheck. +const SHELLCHECK_EXCLUDE: &str = "SC1090,SC1091"; + +/// Fixtures exercised by the lint. Chosen to collectively cover every bash-step +/// generator in the codebase: standalone + 1ES templates, every runtime that +/// emits bash steps (Lean, Node with feed-url, .NET with feed-url), and every +/// first-class tool that emits bash (cache-memory). Add a fixture here only +/// when a new generator is introduced that none of the existing fixtures +/// exercises. 
+const FIXTURES: &[&str] = &[ + "minimal-agent.md", + "complete-agent.md", + "1es-test-agent.md", + "azure-devops-mcp-agent.md", + "pipeline-trigger-agent.md", + "pipeline-filter-agent.md", + "runtime-coverage-agent.md", +]; + +/// Step display names that the lint expects to find at least once across all +/// fixtures. If any of these is missing it means the corresponding generator +/// is not being exercised — almost always because a fixture was deleted or +/// the generator's output changed without updating the coverage list. +const REQUIRED_STEP_DISPLAY_NAMES: &[&str] = &[ + // Static templates (standalone + 1ES) + "Prepare MCPG config", + "Prepare tooling", + "Prepare agent prompt", + "Run copilot (AWF network isolated)", + "Run threat analysis (AWF network isolated)", + "Evaluate threat analysis", + "Execute safe outputs (Stage 3)", + // Rust generators + "Install Lean 4 (elan)", // src/runtimes/lean/mod.rs + "Append Lean 4 prompt", // src/runtimes/lean/extension.rs + "Ensure .npmrc exists", // src/runtimes/node/mod.rs + "Ensure nuget.config exists", // src/runtimes/dotnet/mod.rs + "Restore previous agent memory", // src/tools/cache_memory/extension.rs + "Initialize empty agent memory (clearMemory=true)", + "Generate GITHUB_PATH file", // src/compile/common.rs (AWF path step) +]; + +fn ado_aw_binary() -> PathBuf { + PathBuf::from(env!("CARGO_BIN_EXE_ado-aw")) +} + +fn fixtures_dir() -> PathBuf { + PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("tests/fixtures") +} + +/// Probe for the shellcheck binary. Returns `None` if it is not on PATH or +/// fails to report a version. 
+fn shellcheck_version() -> Option<String> {
+    let output = Command::new("shellcheck").arg("--version").output().ok()?;
+    if !output.status.success() {
+        return None;
+    }
+    let text = String::from_utf8_lossy(&output.stdout);
+    text.lines()
+        .find(|l| l.starts_with("version:"))
+        .map(|l| l.trim().to_string())
+}
+
+/// A fresh `TempDir` plus a `.git/` marker so `ado-aw compile` can resolve
+/// repo-relative paths. RAII cleans up on drop, even on panic.
+fn fresh_workspace() -> TempDir {
+    let dir = tempfile::Builder::new()
+        .prefix("ado-aw-bash-lint-")
+        .tempdir()
+        .expect("create temp dir");
+    std::fs::create_dir(dir.path().join(".git")).expect("create .git dir");
+    dir
+}
+
+/// Compile a fixture by copying it into `workspace` and invoking
+/// `ado-aw compile`. Returns the path to the generated `.lock.yml`.
+fn compile_fixture(workspace: &Path, fixture: &str) -> PathBuf {
+    let src = fixtures_dir().join(fixture);
+    let dest = workspace.join(fixture);
+    std::fs::copy(&src, &dest)
+        .unwrap_or_else(|e| panic!("copy fixture {fixture}: {e}"));
+
+    let output = Command::new(ado_aw_binary())
+        .args(["compile", dest.to_str().unwrap()])
+        .current_dir(workspace)
+        .output()
+        .unwrap_or_else(|e| panic!("spawn ado-aw compile: {e}"));
+
+    assert!(
+        output.status.success(),
+        "ado-aw compile failed for {fixture}\nstdout:\n{}\nstderr:\n{}",
+        String::from_utf8_lossy(&output.stdout),
+        String::from_utf8_lossy(&output.stderr),
+    );
+
+    let lock = dest.with_extension("lock.yml");
+    assert!(lock.exists(), "expected lock file {}", lock.display());
+    lock
+}
+
+/// A single bash body extracted from a compiled pipeline.
+struct BashBody {
+    display_name: String,
+    body: String,
+}
+
+/// Walk a parsed YAML document and collect every step that has a literal
+/// block-scalar `bash:` body. Only mappings reached via a sequence element
+/// are considered candidate steps — this avoids treating an arbitrary
+/// `bash:` key inside, e.g., an `env:` block as a step.
+fn extract_bash_bodies(yml_path: &Path) -> Vec<BashBody> {
+    let content = std::fs::read_to_string(yml_path)
+        .unwrap_or_else(|e| panic!("read {}: {e}", yml_path.display()));
+    let doc: Value = serde_yaml::from_str(&content)
+        .unwrap_or_else(|e| panic!("parse YAML {}: {e}", yml_path.display()));
+
+    let mut out = Vec::new();
+    collect(&doc, /* in_sequence_element = */ false, &mut out);
+    out
+}
+
+fn collect(node: &Value, in_sequence_element: bool, out: &mut Vec<BashBody>) {
+    match node {
+        Value::Mapping(map) => {
+            // Only treat this mapping as a step candidate if we reached it
+            // by descending into a sequence (i.e., it's `[bash: |, …]`).
+            if in_sequence_element
+                && let Some(Value::String(body)) = map.get(Value::String("bash".into()))
+            {
+                let display_name = map
+                    .get(Value::String("displayName".into()))
+                    .and_then(Value::as_str)
+                    .unwrap_or("")
+                    .to_string();
+                out.push(BashBody {
+                    display_name,
+                    body: body.clone(),
+                });
+            }
+            for (_, v) in map {
+                collect(v, /* in_sequence_element = */ false, out);
+            }
+        }
+        Value::Sequence(seq) => {
+            for v in seq {
+                collect(v, /* in_sequence_element = */ true, out);
+            }
+        }
+        _ => {}
+    }
+}
+
+/// Run shellcheck on a bash body. Returns the parsed JSON findings.
+fn run_shellcheck(body: &str) -> serde_json::Value {
+    let mut child = Command::new("shellcheck")
+        .args([
+            "--shell=bash",
+            "--format=json",
+            &format!("--exclude={SHELLCHECK_EXCLUDE}"),
+            "-",
+        ])
+        .stdin(Stdio::piped())
+        .stdout(Stdio::piped())
+        .stderr(Stdio::piped())
+        .spawn()
+        .expect("spawn shellcheck");
+
+    child
+        .stdin
+        .as_mut()
+        .expect("shellcheck stdin")
+        .write_all(body.as_bytes())
+        .expect("write to shellcheck stdin");
+    drop(child.stdin.take());
+
+    let output = child.wait_with_output().expect("wait for shellcheck");
+
+    // shellcheck exits 0 when clean and 1 when findings exist; both produce
+    // valid JSON on stdout. Higher exit codes (e.g. parse error) are real
+    // failures and should surface in the test output.
+ let exit = output.status.code().unwrap_or(-1); + if exit > 1 { + panic!( + "shellcheck failed (exit {exit}):\nstdout:\n{}\nstderr:\n{}", + String::from_utf8_lossy(&output.stdout), + String::from_utf8_lossy(&output.stderr), + ); + } + + let stdout = String::from_utf8_lossy(&output.stdout); + let trimmed = stdout.trim(); + if trimmed.is_empty() { + return serde_json::Value::Array(Vec::new()); + } + + serde_json::from_str(trimmed) + .unwrap_or_else(|e| panic!("parse shellcheck JSON: {e}\nraw:\n{stdout}")) +} + +/// Format a single shellcheck finding for the test failure report. +fn format_finding(fixture: &str, display_name: &str, finding: &serde_json::Value) -> String { + let code = finding["code"].as_i64().unwrap_or(0); + let level = finding["level"].as_str().unwrap_or("?"); + let line = finding["line"].as_i64().unwrap_or(0); + let message = finding["message"].as_str().unwrap_or("?"); + format!(" [{level}] SC{code} {fixture} :: {display_name:?} (body L{line}): {message}") +} + +#[test] +fn compiled_bash_bodies_pass_shellcheck() { + let enforce = std::env::var_os("ENFORCE_BASH_LINT").is_some(); + + let Some(version) = shellcheck_version() else { + if enforce { + panic!( + "ENFORCE_BASH_LINT is set but `shellcheck` is not on PATH. \ + Install it in CI (e.g. `apt-get install -y shellcheck`) \ + or unset ENFORCE_BASH_LINT for local development." + ); + } + eprintln!( + "skipping bash lint test: `shellcheck` not found on PATH. \ + Install via your OS package manager (e.g. `brew install shellcheck`, \ + `apt-get install -y shellcheck`). \ + Set ENFORCE_BASH_LINT=1 to make this a hard failure (CI does)." 
+        );
+        return;
+    };
+    eprintln!("using {version}");
+
+    let workspace = fresh_workspace();
+    let mut report: BTreeMap<String, Vec<String>> = BTreeMap::new();
+    let mut all_display_names: Vec<String> = Vec::new();
+
+    for fixture in FIXTURES {
+        let lock = compile_fixture(workspace.path(), fixture);
+        for body in extract_bash_bodies(&lock) {
+            all_display_names.push(body.display_name.clone());
+            let findings = run_shellcheck(&body.body);
+            if let Some(arr) = findings.as_array() {
+                for finding in arr {
+                    report
+                        .entry((*fixture).to_string())
+                        .or_default()
+                        .push(format_finding(fixture, &body.display_name, finding));
+                }
+            }
+        }
+    }
+
+    // Coverage check — every required generator must appear in the harvested
+    // step list, otherwise a fixture has stopped exercising its generator.
+    let missing: Vec<&str> = REQUIRED_STEP_DISPLAY_NAMES
+        .iter()
+        .copied()
+        .filter(|name| !all_display_names.iter().any(|d| d == name))
+        .collect();
+    assert!(
+        missing.is_empty(),
+        "the following step display names were not produced by any fixture, \
+         meaning their generator is not being linted:\n {}\n\
+         Either add a fixture exercising the generator, or update \
+         REQUIRED_STEP_DISPLAY_NAMES if the generator was removed.",
+        missing.join("\n ")
+    );
+
+    if !report.is_empty() {
+        let mut msg = String::from(
+            "shellcheck flagged silent-failure patterns in compiled bash bodies. \
+             Each finding represents a real or stylistic concern; fix the \
+             offending bash, or if intentional add `# shellcheck disable=SCxxxx` \
+             inline in the bash body (the directive is a comment and does not \
+             affect runtime behaviour).\n",
+        );
+        for (fixture, lines) in &report {
+            msg.push_str(&format!("\n--- {fixture} ---\n"));
+            for line in lines {
+                msg.push_str(line);
+                msg.push('\n');
+            }
+        }
+        panic!("{msg}");
+    }
+}
+
+/// Sanity check: every listed fixture exists. Catches typos in `FIXTURES`
+/// without paying the cost of compiling.
+#[test]
+fn every_listed_fixture_exists() {
+    for fixture in FIXTURES {
+        let path = fixtures_dir().join(fixture);
+        assert!(
+            path.exists(),
+            "fixture listed in FIXTURES but not on disk: {}",
+            path.display()
+        );
+    }
+}
diff --git a/tests/fixtures/runtime-coverage-agent.md b/tests/fixtures/runtime-coverage-agent.md
new file mode 100644
index 0000000..1a727de
--- /dev/null
+++ b/tests/fixtures/runtime-coverage-agent.md
@@ -0,0 +1,22 @@
+---
+name: Runtime Coverage Agent
+description: Lint-only fixture exercising Lean, Node, .NET runtimes and the cache-memory tool
+on:
+  schedule: daily
+runtimes:
+  lean: true
+  node:
+    version: "22.x"
+    feed-url: "https://pkgs.dev.azure.com/example/example/_packaging/example/npm/registry/"
+  dotnet:
+    version: "8.0.x"
+    feed-url: "https://pkgs.dev.azure.com/example/example/_packaging/example/nuget/v3/index.json"
+tools:
+  cache-memory: true
+---
+
+## Runtime Coverage Agent
+
+This agent enables every runtime that produces a code-generated bash step,
+plus the `cache-memory` tool. Its sole job is to compile cleanly so the
+bash-step lint can analyse those generated bodies.

From 5ebab3140b84387bbf2d3c7e4016dd0a7799557f Mon Sep 17 00:00:00 2001
From: James Devine
Date: Sun, 10 May 2026 21:17:00 +0100
Subject: [PATCH 2/2] ci(workflows): add daily bash-step hygiene auditor agentic workflow
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Adds `.github/workflows/bash-lint-auditor.md`, a daily agentic workflow
that complements the PR-gate lint added in PR #496. The PR gate gives
fast feedback on every PR; this workflow runs once a day and lands small,
mechanical improvements that the gate can't:

* When a finding does slip onto main (e.g. via merge conflict), the
  auditor fixes it the next morning instead of waiting for the next
  contributor PR.
* Audits stale `# shellcheck disable=` directives — removes ones that
  no longer fire (i.e.
  the underlying code has been cleaned up but the suppression was
  forgotten).
* Audits whether the lint's exclude list could be tightened.
* Verifies fixture coverage of every bash-step generator and proposes
  fixture additions when a new generator appears.

When the auditor finds something actionable, it opens a focused PR (one
concern per PR) with the structured "what was found / how it was fixed /
verification" body. When the lint is green and no proactive improvement
is feasible, it exits cleanly with `noop`.

Configuration notes:

* `schedule: daily around 09:00` — fuzzy schedule scattering across the
  hour, matching the convention of other daily workflows in this repo
  (e.g. `cyclomatic-complexity-reducer.md`).
* `allowed-files` restricts the auditor to bash-generator code paths
  plus the tests/fixtures it depends on. `protected-files:
  fallback-to-issue` ensures that if it tries to edit anything else,
  the change falls back to an issue rather than a PR.
* `cache-memory: true` persists state across runs so the auditor
  doesn't loop on the same suggestion if a maintainer rejects it.
* `bash: ["*"]` + `network.allowed: [defaults, rust]` gives the agent
  what it needs to install shellcheck (via apt with a static-binary
  fallback) and run cargo against the rust ecosystem.

Compiled with `gh aw compile bash-lint-auditor`; the matching
`.lock.yml` is included along with new SHAs in
`.github/aw/actions-lock.json` (cache, checkout, download-artifact
registered for the first time by this workflow's setup steps).

Stacked on top of branch `lint-bash-steps` (PR #496) because the auditor
relies on `tests/bash_lint_tests.rs` and
`tests/fixtures/runtime-coverage-agent.md`, which are introduced there.
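The stale-directive audit described above is mechanical in nature. As an
illustration only (this helper is hypothetical and not part of the patch),
the first step of such an audit can be sketched as enumerating the SC codes
a script suppresses, so each directive can then be dropped in turn and
re-checked against a fresh shellcheck run:

```shell
#!/usr/bin/env bash
# Hypothetical helper (illustration, not part of this patch): print the
# unique SC codes that a script suppresses via inline
# `# shellcheck disable=...` directives. An auditor can drop each listed
# directive and re-run shellcheck to see whether the code still fires;
# if it doesn't, the suppression is stale.
list_suppressions() {
  grep -ho 'shellcheck disable=[SC0-9,]*' "$1" \
    | sed 's/.*disable=//' \
    | tr ',' '\n' \
    | sort -u
}
```

For a script containing `# shellcheck disable=SC2086,SC2155`, the helper
prints one code per line, deduplicated.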
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 .github/aw/actions-lock.json                 |   35 +
 .github/workflows/bash-lint-auditor.lock.yml | 1437 ++++++++++++++++++
 .github/workflows/bash-lint-auditor.md       |  206 +++
 3 files changed, 1678 insertions(+)
 create mode 100644 .github/workflows/bash-lint-auditor.lock.yml
 create mode 100644 .github/workflows/bash-lint-auditor.md

diff --git a/.github/aw/actions-lock.json b/.github/aw/actions-lock.json
index dbb565e..18b82e5 100644
--- a/.github/aw/actions-lock.json
+++ b/.github/aw/actions-lock.json
@@ -1,5 +1,25 @@
 {
   "entries": {
+    "actions/cache/restore@v5.0.5": {
+      "repo": "actions/cache/restore",
+      "version": "v5.0.5",
+      "sha": "27d5ce7f107fe9357f9df03efb73ab90386fccae"
+    },
+    "actions/cache/save@v5.0.5": {
+      "repo": "actions/cache/save",
+      "version": "v5.0.5",
+      "sha": "27d5ce7f107fe9357f9df03efb73ab90386fccae"
+    },
+    "actions/checkout@v6.0.2": {
+      "repo": "actions/checkout",
+      "version": "v6.0.2",
+      "sha": "de0fac2e4500dabe0009e67214ff5f5447ce83dd"
+    },
+    "actions/download-artifact@v8.0.1": {
+      "repo": "actions/download-artifact",
+      "version": "v8.0.1",
+      "sha": "3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c"
+    },
     "actions/github-script@v8": {
       "repo": "actions/github-script",
       "version": "v8",
@@ -15,6 +35,21 @@
       "version": "v9.0.0",
       "sha": "d746ffe35508b1917358783b479e04febd2b8f71"
     },
+    "actions/setup-node@v6.4.0": {
+      "repo": "actions/setup-node",
+      "version": "v6.4.0",
+      "sha": "48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e"
+    },
+    "actions/upload-artifact@v7.0.1": {
+      "repo": "actions/upload-artifact",
+      "version": "v7.0.1",
+      "sha": "043fb46d1a93c77aae656e7c1c64a875d1fc6a0a"
+    },
+    "github/gh-aw-actions/setup@v0.71.5": {
+      "repo": "github/gh-aw-actions/setup",
+      "version": "v0.71.5",
+      "sha": "b8068426813005612b960b5ab0b8bd2c27142323"
+    },
     "github/gh-aw/actions/setup@v0.68.1": {
       "repo": "github/gh-aw/actions/setup",
       "version": "v0.68.1",
diff --git a/.github/workflows/bash-lint-auditor.lock.yml
b/.github/workflows/bash-lint-auditor.lock.yml new file mode 100644 index 0000000..071b933 --- /dev/null +++ b/.github/workflows/bash-lint-auditor.lock.yml @@ -0,0 +1,1437 @@ +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"7ca23adaaef432fb59f7fb090965307b24c67206a5b06d41aabd39da54b83159","compiler_version":"v0.71.5","strict":true,"agent_id":"copilot"} +# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/cache/restore","sha":"27d5ce7f107fe9357f9df03efb73ab90386fccae","version":"v5.0.5"},{"repo":"actions/cache/save","sha":"27d5ce7f107fe9357f9df03efb73ab90386fccae","version":"v5.0.5"},{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/github-script","sha":"3a2844b7e9c422d3c10d287c895573f7108da1b3","version":"v9.0.0"},{"repo":"actions/github-script","sha":"d746ffe35508b1917358783b479e04febd2b8f71","version":"v9.0.0"},{"repo":"actions/setup-node","sha":"48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e","version":"v6.4.0"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"b8068426813005612b960b5ab0b8bd2c27142323","version":"v0.71.5"}],"containers":[{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.40","digest":"sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504","pinned_image":"ghcr.io/github/gh-aw-firewall/agent:0.25.40@sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.40","digest":"sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280","pinned_image":"ghcr.io/g
ithub/gh-aw-firewall/api-proxy:0.25.40@sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.40","digest":"sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51","pinned_image":"ghcr.io/github/gh-aw-firewall/squid:0.25.40@sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.3.6","digest":"sha256:2bb8eef86006a4c5963c55616a9c51c32f27bfdecb023b8aa6f91f6718d9171c","pinned_image":"ghcr.io/github/gh-aw-mcpg:v0.3.6@sha256:2bb8eef86006a4c5963c55616a9c51c32f27bfdecb023b8aa6f91f6718d9171c"},{"image":"ghcr.io/github/github-mcp-server:v1.0.3","digest":"sha256:2ac27ef03461ef2b877031b838a7d1fd7f12b12d4ace7796d8cad91446d55959","pinned_image":"ghcr.io/github/github-mcp-server:v1.0.3@sha256:2ac27ef03461ef2b877031b838a7d1fd7f12b12d4ace7796d8cad91446d55959"},{"image":"node:lts-alpine","digest":"sha256:d1b3b4da11eefd5941e7f0b9cf17783fc99d9c6fc34884a665f40a06dbdfc94f","pinned_image":"node:lts-alpine@sha256:d1b3b4da11eefd5941e7f0b9cf17783fc99d9c6fc34884a665f40a06dbdfc94f"}]} +# ___ _ _ +# / _ \ | | (_) +# | |_| | __ _ ___ _ __ | |_ _ ___ +# | _ |/ _` |/ _ \ '_ \| __| |/ __| +# | | | | (_| | __/ | | | |_| | (__ +# \_| |_/\__, |\___|_| |_|\__|_|\___| +# __/ | +# _ _ |___/ +# | | | | / _| | +# | | | | ___ _ __ _ __| |_| | _____ ____ +# | |/\| |/ _ \ '__| |/ /| _| |/ _ \ \ /\ / / ___| +# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ +# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ +# +# This file was automatically generated by gh-aw (v0.71.5). DO NOT EDIT. +# +# To update this file, edit the corresponding .md file and run: +# gh aw compile +# Not all edits will cause changes to this file. +# +# For more information: https://github.github.com/gh-aw/introduction/overview/ +# +# Audits bash bodies in compiled pipeline YAML, applies shellcheck-driven fixes, and opens a PR with the changes. 
+# +# Secrets used: +# - COPILOT_GITHUB_TOKEN +# - GH_AW_CI_TRIGGER_TOKEN +# - GH_AW_GITHUB_MCP_SERVER_TOKEN +# - GH_AW_GITHUB_TOKEN +# - GITHUB_TOKEN +# +# Custom actions used: +# - actions/cache/restore@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5 +# - actions/cache/save@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5 +# - actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 +# - actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 +# - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 +# - actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 +# - actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 +# - actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0 +# - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 +# - github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 +# +# Container images used: +# - ghcr.io/github/gh-aw-firewall/agent:0.25.40@sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504 +# - ghcr.io/github/gh-aw-firewall/api-proxy:0.25.40@sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280 +# - ghcr.io/github/gh-aw-firewall/squid:0.25.40@sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51 +# - ghcr.io/github/gh-aw-mcpg:v0.3.6@sha256:2bb8eef86006a4c5963c55616a9c51c32f27bfdecb023b8aa6f91f6718d9171c +# - ghcr.io/github/github-mcp-server:v1.0.3@sha256:2ac27ef03461ef2b877031b838a7d1fd7f12b12d4ace7796d8cad91446d55959 +# - node:lts-alpine@sha256:d1b3b4da11eefd5941e7f0b9cf17783fc99d9c6fc34884a665f40a06dbdfc94f + +name: "Bash Step Hygiene Auditor" +"on": + schedule: + - cron: "17 9 * * *" + # Friendly format: daily around 09:00 (scattered) + workflow_dispatch: + inputs: + aw_context: + default: "" + description: Agent caller context (used internally by Agentic Workflows). 
+ required: false + type: string + +permissions: {} + +concurrency: + group: "gh-aw-${{ github.workflow }}" + +run-name: "Bash Step Hygiene Auditor" + +jobs: + activation: + runs-on: ubuntu-slim + permissions: + actions: read + contents: read + outputs: + comment_id: "" + comment_repo: "" + engine_id: ${{ steps.generate_aw_info.outputs.engine_id }} + lockdown_check_failed: ${{ steps.generate_aw_info.outputs.lockdown_check_failed == 'true' }} + model: ${{ steps.generate_aw_info.outputs.model }} + secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }} + setup-trace-id: ${{ steps.setup.outputs.trace-id }} + stale_lock_file_failed: ${{ steps.check-lock-file.outputs.stale_lock_file_failed == 'true' }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/bash-lint-auditor.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Generate agentic run info + id: generate_aw_info + env: + GH_AW_INFO_ENGINE_ID: "copilot" + GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI" + GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || 'claude-sonnet-4.6' }} + GH_AW_INFO_VERSION: "1.0.40" + GH_AW_INFO_AGENT_VERSION: "1.0.40" + GH_AW_INFO_CLI_VERSION: "v0.71.5" + GH_AW_INFO_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + GH_AW_INFO_EXPERIMENTAL: "false" + GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" + GH_AW_INFO_STAGED: "false" + GH_AW_INFO_ALLOWED_DOMAINS: '["defaults","rust"]' + GH_AW_INFO_FIREWALL_ENABLED: "true" + GH_AW_INFO_AWF_VERSION: "v0.25.40" + GH_AW_INFO_AWMG_VERSION: "" + GH_AW_INFO_FIREWALL_TYPE: "squid" + GH_AW_COMPILED_STRICT: "true" + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + with: + script: | + const { 
setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/generate_aw_info.cjs'); + await main(core, context); + - name: Validate COPILOT_GITHUB_TOKEN secret + id: validate-secret + run: bash "${RUNNER_TEMP}/gh-aw/actions/validate_multi_secret.sh" COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default + env: + COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }} + - name: Checkout .github and .agents folders + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + persist-credentials: false + sparse-checkout: | + .github + .agents + .claude + .codex + .crush + .gemini + .opencode + .pi + sparse-checkout-cone-mode: true + fetch-depth: 1 + - name: Save agent config folders for base branch restoration + env: + GH_AW_AGENT_FOLDERS: ".agents .claude .codex .crush .gemini .github .opencode .pi" + GH_AW_AGENT_FILES: ".crush.json AGENTS.md CLAUDE.md GEMINI.md PI.md opencode.jsonc" + # poutine:ignore untrusted_checkout_exec + run: bash "${RUNNER_TEMP}/gh-aw/actions/save_base_github_folders.sh" + - name: Check workflow lock file + id: check-lock-file + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_WORKFLOW_FILE: "bash-lint-auditor.lock.yml" + GH_AW_CONTEXT_WORKFLOW_REF: "${{ github.workflow_ref }}" + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/check_workflow_timestamp_api.cjs'); + await main(); + - name: Check compile-agentic version + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_COMPILED_VERSION: "v0.71.5" + with: + script: | + const { setupGlobals } = require('${{ 
runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/check_version_updates.cjs'); + await main(); + - name: Create prompt with built-in context + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl + GH_AW_GITHUB_ACTOR: ${{ github.actor }} + GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} + GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} + GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} + GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} + GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} + GH_AW_GITHUB_RUN_ID: ${{ github.run_id }} + GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }} + # poutine:ignore untrusted_checkout_exec + run: | + bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" + { + cat << 'GH_AW_PROMPT_a275553da7c95aee_EOF' + + GH_AW_PROMPT_a275553da7c95aee_EOF + cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" + cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" + cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" + cat "${RUNNER_TEMP}/gh-aw/prompts/cache_memory_prompt.md" + cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" + cat << 'GH_AW_PROMPT_a275553da7c95aee_EOF' + + Tools: create_pull_request, missing_tool, missing_data, noop + GH_AW_PROMPT_a275553da7c95aee_EOF + cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" + cat << 'GH_AW_PROMPT_a275553da7c95aee_EOF' + + GH_AW_PROMPT_a275553da7c95aee_EOF + cat "${RUNNER_TEMP}/gh-aw/prompts/mcp_cli_tools_prompt.md" + cat << 'GH_AW_PROMPT_a275553da7c95aee_EOF' + + The following GitHub context information is available for this workflow: + {{#if __GH_AW_GITHUB_ACTOR__ }} + - **actor**: __GH_AW_GITHUB_ACTOR__ + {{/if}} + {{#if __GH_AW_GITHUB_REPOSITORY__ }} + - **repository**: __GH_AW_GITHUB_REPOSITORY__ 
+ {{/if}} + {{#if __GH_AW_GITHUB_WORKSPACE__ }} + - **workspace**: __GH_AW_GITHUB_WORKSPACE__ + {{/if}} + {{#if __GH_AW_GITHUB_EVENT_ISSUE_NUMBER__ }} + - **issue-number**: #__GH_AW_GITHUB_EVENT_ISSUE_NUMBER__ + {{/if}} + {{#if __GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER__ }} + - **discussion-number**: #__GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER__ + {{/if}} + {{#if __GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER__ }} + - **pull-request-number**: #__GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER__ + {{/if}} + {{#if __GH_AW_GITHUB_EVENT_COMMENT_ID__ }} + - **comment-id**: __GH_AW_GITHUB_EVENT_COMMENT_ID__ + {{/if}} + {{#if __GH_AW_GITHUB_RUN_ID__ }} + - **workflow-run-id**: __GH_AW_GITHUB_RUN_ID__ + {{/if}} + + + GH_AW_PROMPT_a275553da7c95aee_EOF + cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" + cat << 'GH_AW_PROMPT_a275553da7c95aee_EOF' + + {{#runtime-import .github/workflows/bash-lint-auditor.md}} + GH_AW_PROMPT_a275553da7c95aee_EOF + } > "$GH_AW_PROMPT" + - name: Interpolate variables and render templates + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_ENGINE_ID: "copilot" + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/interpolate_prompt.cjs'); + await main(); + - name: Substitute placeholders + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_ALLOWED_EXTENSIONS: '' + GH_AW_CACHE_DESCRIPTION: '' + GH_AW_CACHE_DIR: '/tmp/gh-aw/cache-memory/' + GH_AW_GITHUB_ACTOR: ${{ github.actor }} + GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} + GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} + GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} 
+ GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} + GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} + GH_AW_GITHUB_RUN_ID: ${{ github.run_id }} + GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }} + GH_AW_MCP_CLI_SERVERS_LIST: '- `safeoutputs` — run `safeoutputs --help` to see available tools' + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + + const substitutePlaceholders = require('${{ runner.temp }}/gh-aw/actions/substitute_placeholders.cjs'); + + // Call the substitution function + return await substitutePlaceholders({ + file: process.env.GH_AW_PROMPT, + substitutions: { + GH_AW_ALLOWED_EXTENSIONS: process.env.GH_AW_ALLOWED_EXTENSIONS, + GH_AW_CACHE_DESCRIPTION: process.env.GH_AW_CACHE_DESCRIPTION, + GH_AW_CACHE_DIR: process.env.GH_AW_CACHE_DIR, + GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, + GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, + GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, + GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, + GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, + GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, + GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID, + GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE, + GH_AW_MCP_CLI_SERVERS_LIST: process.env.GH_AW_MCP_CLI_SERVERS_LIST + } + }); + - name: Validate prompt placeholders + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + # poutine:ignore untrusted_checkout_exec + run: bash "${RUNNER_TEMP}/gh-aw/actions/validate_prompt_placeholders.sh" + - name: Print prompt + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + # poutine:ignore untrusted_checkout_exec + run: bash "${RUNNER_TEMP}/gh-aw/actions/print_prompt_summary.sh" + - name: Upload activation 
artifact + if: success() + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + with: + name: activation + include-hidden-files: true + path: | + /tmp/gh-aw/aw_info.json + /tmp/gh-aw/aw-prompts/prompt.txt + /tmp/gh-aw/github_rate_limits.jsonl + /tmp/gh-aw/base + if-no-files-found: ignore + retention-days: 1 + + agent: + needs: activation + runs-on: ubuntu-latest + permissions: + contents: read + issues: read + pull-requests: read + concurrency: + group: "gh-aw-copilot-${{ github.workflow }}" + env: + DEFAULT_BRANCH: ${{ github.event.repository.default_branch }} + GH_AW_ASSETS_ALLOWED_EXTS: "" + GH_AW_ASSETS_BRANCH: "" + GH_AW_ASSETS_MAX_SIZE_KB: 0 + GH_AW_MCP_LOG_DIR: /tmp/gh-aw/mcp-logs/safeoutputs + GH_AW_WORKFLOW_ID_SANITIZED: bashlintauditor + outputs: + agentic_engine_timeout: ${{ steps.detect-copilot-errors.outputs.agentic_engine_timeout || 'false' }} + checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }} + effective_tokens: ${{ steps.parse-mcp-gateway.outputs.effective_tokens }} + has_patch: ${{ steps.collect_output.outputs.has_patch }} + inference_access_error: ${{ steps.detect-copilot-errors.outputs.inference_access_error || 'false' }} + mcp_policy_error: ${{ steps.detect-copilot-errors.outputs.mcp_policy_error || 'false' }} + model: ${{ needs.activation.outputs.model }} + model_not_supported_error: ${{ steps.detect-copilot-errors.outputs.model_not_supported_error || 'false' }} + output: ${{ steps.collect_output.outputs.output }} + output_types: ${{ steps.collect_output.outputs.output_types }} + setup-trace-id: ${{ steps.setup.outputs.trace-id }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ needs.activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + 
GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/bash-lint-auditor.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Set runtime paths + id: set-runtime-paths + run: | + { + echo "GH_AW_SAFE_OUTPUTS=${RUNNER_TEMP}/gh-aw/safeoutputs/outputs.jsonl" + echo "GH_AW_SAFE_OUTPUTS_CONFIG_PATH=${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" + echo "GH_AW_SAFE_OUTPUTS_TOOLS_PATH=${RUNNER_TEMP}/gh-aw/safeoutputs/tools.json" + } >> "$GITHUB_OUTPUT" + - name: Checkout repository + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + persist-credentials: false + - name: Create gh-aw temp directory + run: bash "${RUNNER_TEMP}/gh-aw/actions/create_gh_aw_tmp_dir.sh" + - name: Configure gh CLI for GitHub Enterprise + run: bash "${RUNNER_TEMP}/gh-aw/actions/configure_gh_for_ghe.sh" + env: + GH_TOKEN: ${{ github.token }} + # Cache memory file share configuration from frontmatter processed below + - name: Create cache-memory directory + run: bash "${RUNNER_TEMP}/gh-aw/actions/create_cache_memory_dir.sh" + - name: Restore cache-memory file share data + uses: actions/cache/restore@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5 + with: + key: memory-none-nopolicy-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }} + path: /tmp/gh-aw/cache-memory + restore-keys: | + memory-none-nopolicy-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}- + - name: Setup cache-memory git repository + env: + GH_AW_CACHE_DIR: /tmp/gh-aw/cache-memory + GH_AW_MIN_INTEGRITY: none + run: bash "${RUNNER_TEMP}/gh-aw/actions/setup_cache_memory_git.sh" + - name: Configure Git credentials + env: + REPO_NAME: ${{ github.repository }} + SERVER_URL: ${{ github.server_url }} + GITHUB_TOKEN: ${{ github.token }} + run: | + git config --global user.email "github-actions[bot]@users.noreply.github.com" + git config --global user.name "github-actions[bot]" + git config --global am.keepcr true + # Re-authenticate git with GitHub token + 
SERVER_URL_STRIPPED="${SERVER_URL#https://}" + git remote set-url origin "https://x-access-token:${GITHUB_TOKEN}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git" + echo "Git configured with standard GitHub Actions identity" + - name: Checkout PR branch + id: checkout-pr + if: | + github.event.pull_request || github.event.issue.pull_request + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + with: + github-token: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/checkout_pr_branch.cjs'); + await main(); + - name: Install GitHub Copilot CLI + run: bash "${RUNNER_TEMP}/gh-aw/actions/install_copilot_cli.sh" 1.0.40 + env: + GH_HOST: github.com + - name: Install AWF binary + run: bash "${RUNNER_TEMP}/gh-aw/actions/install_awf_binary.sh" v0.25.40 + - name: Determine automatic lockdown mode for GitHub MCP Server + id: determine-automatic-lockdown + uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 + env: + GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }} + GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }} + with: + script: | + const determineAutomaticLockdown = require('${{ runner.temp }}/gh-aw/actions/determine_automatic_lockdown.cjs'); + await determineAutomaticLockdown(github, context, core); + - name: Download activation artifact + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + name: activation + path: /tmp/gh-aw + - name: Restore agent config folders from base branch + if: steps.checkout-pr.outcome == 'success' + env: + GH_AW_AGENT_FOLDERS: ".agents .claude .codex 
.crush .gemini .github .opencode .pi" + GH_AW_AGENT_FILES: ".crush.json AGENTS.md CLAUDE.md GEMINI.md PI.md opencode.jsonc" + run: bash "${RUNNER_TEMP}/gh-aw/actions/restore_base_github_folders.sh" + - name: Download container images + run: bash "${RUNNER_TEMP}/gh-aw/actions/download_docker_images.sh" ghcr.io/github/gh-aw-firewall/agent:0.25.40@sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504 ghcr.io/github/gh-aw-firewall/api-proxy:0.25.40@sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280 ghcr.io/github/gh-aw-firewall/squid:0.25.40@sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51 ghcr.io/github/gh-aw-mcpg:v0.3.6@sha256:2bb8eef86006a4c5963c55616a9c51c32f27bfdecb023b8aa6f91f6718d9171c ghcr.io/github/github-mcp-server:v1.0.3@sha256:2ac27ef03461ef2b877031b838a7d1fd7f12b12d4ace7796d8cad91446d55959 node:lts-alpine@sha256:d1b3b4da11eefd5941e7f0b9cf17783fc99d9c6fc34884a665f40a06dbdfc94f + - name: Generate Safe Outputs Config + run: | + mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" + mkdir -p /tmp/gh-aw/safeoutputs + mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_77c36be41c26e888_EOF' + 
{"create_pull_request":{"allowed_files":["src/data/**","src/runtimes/**/mod.rs","src/compile/extensions/**.rs","src/compile/common.rs","src/engine.rs","src/tools/**/extension.rs","tests/bash_lint_tests.rs","tests/fixtures/**","AGENTS.md","docs/extending.md"],"max":1,"max_patch_files":100,"max_patch_size":1024,"protect_top_level_dot_folders":true,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","DESIGN.md","README.md","CONTRIBUTING.md","CHANGELOG.md","SECURITY.md","CODE_OF_CONDUCT.md","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_files_policy":"fallback-to-issue"},"create_report_incomplete_issue":{},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_77c36be41c26e888_EOF + - name: Generate Safe Outputs Tools + env: + GH_AW_TOOLS_META_JSON: | + { + "description_suffixes": { + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created." 
+ }, + "repo_params": {}, + "dynamic_tools": [] + } + GH_AW_VALIDATION_JSON: | + { + "create_pull_request": { + "defaultMax": 1, + "fields": { + "base": { + "type": "string", + "sanitize": true, + "maxLength": 128 + }, + "body": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 65000 + }, + "branch": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 256 + }, + "draft": { + "type": "boolean" + }, + "labels": { + "type": "array", + "itemType": "string", + "itemSanitize": true, + "itemMaxLength": 128 + }, + "repo": { + "type": "string", + "maxLength": 256 + }, + "title": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 128 + } + } + }, + "missing_data": { + "defaultMax": 20, + "fields": { + "alternatives": { + "type": "string", + "sanitize": true, + "maxLength": 256 + }, + "context": { + "type": "string", + "sanitize": true, + "maxLength": 256 + }, + "data_type": { + "type": "string", + "sanitize": true, + "maxLength": 128 + }, + "reason": { + "type": "string", + "sanitize": true, + "maxLength": 256 + } + } + }, + "missing_tool": { + "defaultMax": 20, + "fields": { + "alternatives": { + "type": "string", + "sanitize": true, + "maxLength": 512 + }, + "reason": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 256 + }, + "tool": { + "type": "string", + "sanitize": true, + "maxLength": 128 + } + } + }, + "noop": { + "defaultMax": 1, + "fields": { + "message": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 65000 + } + } + }, + "report_incomplete": { + "defaultMax": 5, + "fields": { + "details": { + "type": "string", + "sanitize": true, + "maxLength": 65000 + }, + "reason": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 1024 + } + } + } + } + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp 
}}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/generate_safe_outputs_tools.cjs'); + await main(); + - name: Generate Safe Outputs MCP Server Config + id: safe-outputs-config + run: | + # Generate a secure random API key (360 bits of entropy, 40+ chars) + # Mask immediately to prevent timing vulnerabilities + API_KEY=$(openssl rand -base64 45 | tr -d '/+=') + echo "::add-mask::${API_KEY}" + + PORT=3001 + + # Set outputs for next steps + { + echo "safe_outputs_api_key=${API_KEY}" + echo "safe_outputs_port=${PORT}" + } >> "$GITHUB_OUTPUT" + + echo "Safe Outputs MCP server will run on port ${PORT}" + + - name: Start Safe Outputs MCP HTTP Server + id: safe-outputs-start + env: + DEBUG: '*' + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + GH_AW_SAFE_OUTPUTS_PORT: ${{ steps.safe-outputs-config.outputs.safe_outputs_port }} + GH_AW_SAFE_OUTPUTS_API_KEY: ${{ steps.safe-outputs-config.outputs.safe_outputs_api_key }} + GH_AW_SAFE_OUTPUTS_TOOLS_PATH: ${{ runner.temp }}/gh-aw/safeoutputs/tools.json + GH_AW_SAFE_OUTPUTS_CONFIG_PATH: ${{ runner.temp }}/gh-aw/safeoutputs/config.json + GH_AW_MCP_LOG_DIR: /tmp/gh-aw/mcp-logs/safeoutputs + run: | + # Environment variables are set above to prevent template injection + export DEBUG + export GH_AW_SAFE_OUTPUTS + export GH_AW_SAFE_OUTPUTS_PORT + export GH_AW_SAFE_OUTPUTS_API_KEY + export GH_AW_SAFE_OUTPUTS_TOOLS_PATH + export GH_AW_SAFE_OUTPUTS_CONFIG_PATH + export GH_AW_MCP_LOG_DIR + + bash "${RUNNER_TEMP}/gh-aw/actions/start_safe_outputs_server.sh" + + - name: Start MCP Gateway + id: start-mcp-gateway + env: + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + GH_AW_SAFE_OUTPUTS_API_KEY: ${{ steps.safe-outputs-start.outputs.api_key }} + GH_AW_SAFE_OUTPUTS_PORT: ${{ steps.safe-outputs-start.outputs.port }} + GITHUB_MCP_GUARD_MIN_INTEGRITY: ${{ 
steps.determine-automatic-lockdown.outputs.min_integrity }} + GITHUB_MCP_GUARD_REPOS: ${{ steps.determine-automatic-lockdown.outputs.repos }} + GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + run: | + set -eo pipefail + mkdir -p "${RUNNER_TEMP}/gh-aw/mcp-config" + + # Export gateway environment variables for MCP config and gateway script + export MCP_GATEWAY_PORT="8080" + export MCP_GATEWAY_DOMAIN="host.docker.internal" + export MCP_GATEWAY_HOST_DOMAIN="localhost" + MCP_GATEWAY_API_KEY=$(openssl rand -base64 45 | tr -d '/+=') + echo "::add-mask::${MCP_GATEWAY_API_KEY}" + export MCP_GATEWAY_API_KEY + export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads" + mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}" + export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288" + export DEBUG="*" + + export GH_AW_ENGINE="copilot" + MCP_GATEWAY_UID=$(id -u 2>/dev/null || echo '0') + MCP_GATEWAY_GID=$(id -g 2>/dev/null || echo '0') + DOCKER_SOCK_GID=$(stat -c '%g' /var/run/docker.sock 2>/dev/null || echo '0') + export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host --add-host host.docker.internal:127.0.0.1 --user '"${MCP_GATEWAY_UID}"':'"${MCP_GATEWAY_GID}"' --group-add '"${DOCKER_SOCK_GID}"' -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_GUARD_MIN_INTEGRITY -e GITHUB_MCP_GUARD_REPOS -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e 
GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.3.6' + + mkdir -p /home/runner/.copilot + GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) + cat << GH_AW_MCP_CONFIG_b04aa29c412452ce_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + { + "mcpServers": { + "github": { + "type": "stdio", + "container": "ghcr.io/github/github-mcp-server:v1.0.3", + "env": { + "GITHUB_HOST": "\${GITHUB_SERVER_URL}", + "GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}", + "GITHUB_READ_ONLY": "1", + "GITHUB_TOOLSETS": "context,repos,issues,pull_requests" + }, + "guard-policies": { + "allow-only": { + "min-integrity": "$GITHUB_MCP_GUARD_MIN_INTEGRITY", + "repos": "$GITHUB_MCP_GUARD_REPOS" + } + } + }, + "safeoutputs": { + "type": "http", + "url": "http://host.docker.internal:$GH_AW_SAFE_OUTPUTS_PORT", + "headers": { + "Authorization": "\${GH_AW_SAFE_OUTPUTS_API_KEY}" + }, + "guard-policies": { + "write-sink": { + "accept": [ + "*" + ] + } + } + } + }, + "gateway": { + "port": $MCP_GATEWAY_PORT, + "domain": "${MCP_GATEWAY_DOMAIN}", + "apiKey": "${MCP_GATEWAY_API_KEY}", + "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" + } + } + GH_AW_MCP_CONFIG_b04aa29c412452ce_EOF + - name: Mount MCP servers as CLIs + id: mount-mcp-clis + continue-on-error: true + env: + MCP_GATEWAY_API_KEY: ${{ steps.start-mcp-gateway.outputs.gateway-api-key }} + MCP_GATEWAY_DOMAIN: ${{ steps.start-mcp-gateway.outputs.gateway-domain }} + MCP_GATEWAY_PORT: ${{ steps.start-mcp-gateway.outputs.gateway-port }} + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # 
v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io); + const { main } = require('${{ runner.temp }}/gh-aw/actions/mount_mcp_as_cli.cjs'); + await main(); + - name: Clean credentials + continue-on-error: true + run: bash "${RUNNER_TEMP}/gh-aw/actions/clean_git_credentials.sh" + - name: Audit pre-agent workspace + id: pre_agent_audit + continue-on-error: true + run: bash "${RUNNER_TEMP}/gh-aw/actions/audit_pre_agent_workspace.sh" + - name: Execute GitHub Copilot CLI + id: agentic_execution + # Copilot CLI tool arguments (sorted): + timeout-minutes: 20 + run: | + set -o pipefail + touch /tmp/gh-aw/agent-step-summary.md + GH_AW_NODE_BIN=$(command -v node 2>/dev/null || true) + export GH_AW_NODE_BIN + (umask 177 && touch /tmp/gh-aw/agent-stdio.log) + printf '%s\n' '{"$schema":"https://github.com/github/gh-aw-firewall/releases/download/v0.25.40/awf-config.schema.json","network":{"allowDomains":["api.business.githubcopilot.com","api.enterprise.githubcopilot.com","api.github.com","api.githubcopilot.com","api.individual.githubcopilot.com","api.snapcraft.io","archive.ubuntu.com","azure.archive.ubuntu.com","crates.io","crl.geotrust.com","crl.globalsign.com","crl.identrust.com","crl.sectigo.com","crl.thawte.com","crl.usertrust.com","crl.verisign.com","crl3.digicert.com","crl4.digicert.com","crls.ssl.com","github.com","host.docker.internal","index.crates.io","json-schema.org","json.schemastore.org","keyserver.ubuntu.com","ocsp.digicert.com","ocsp.geotrust.com","ocsp.globalsign.com","ocsp.identrust.com","ocsp.sectigo.com","ocsp.ssl.com","ocsp.thawte.com","ocsp.usertrust.com","ocsp.verisign.com","packagecloud.io","packages.cloud.google.com","packages.microsoft.com","ppa.launchpad.net","raw.githubusercontent.com","registry.npmjs.org","s.symcb.com","s.symcd.com","security.ubuntu.com","sh.rustup.rs","static.crates.io","static.rust-lang.org","telemetry.enterprise.githubcopi
lot.com","ts-crl.ws.symantec.com","ts-ocsp.ws.symantec.com","www.googleapis.com"]},"apiProxy":{"enabled":true,"models":{"auto":["large"],"deep-research":["copilot/deep-research*","google/deep-research*"],"gemini-flash":["copilot/gemini-*flash*","google/gemini-*flash*"],"gemini-pro":["copilot/gemini-*pro*","google/gemini-*pro*"],"gpt-4.1":["copilot/gpt-4.1*","openai/gpt-4.1*"],"gpt-5":["copilot/gpt-5*","openai/gpt-5*"],"gpt-5-codex":["copilot/gpt-5*codex*","openai/gpt-5*codex*"],"gpt-5-mini":["copilot/gpt-5*mini*","openai/gpt-5*mini*"],"gpt-5-nano":["copilot/gpt-5*nano*","openai/gpt-5*nano*"],"gpt-5-pro":["copilot/gpt-5*pro*","openai/gpt-5*pro*"],"haiku":["copilot/*haiku*","anthropic/*haiku*"],"large":["sonnet","gpt-5-pro","gpt-5","gemini-pro"],"mini":["haiku","gpt-5-mini","gpt-5-nano","gemini-flash"],"opus":["copilot/*opus*","anthropic/*opus*"],"reasoning":["copilot/o1*","copilot/o3*","copilot/o4*","openai/o1*","openai/o3*","openai/o4*"],"small":["mini"],"sonnet":["copilot/*sonnet*","anthropic/*sonnet*"]}},"container":{"imageTag":"0.25.40,squid=sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51,agent=sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504,api-proxy=sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280,cli-proxy=sha256:3e7152911d4b4b7b97beef9d3d7d924ff7902227e86001ef3838fb728d5d514c"}}' > "${RUNNER_TEMP}/gh-aw/awf-config.json" && cp "${RUNNER_TEMP}/gh-aw/awf-config.json" /tmp/gh-aw/awf-config.json + # shellcheck disable=SC1003 + sudo -E awf --config "${RUNNER_TEMP}/gh-aw/awf-config.json" --container-workdir "${GITHUB_WORKSPACE}" --mount "${RUNNER_TEMP}/gh-aw:${RUNNER_TEMP}/gh-aw:ro" --mount "${RUNNER_TEMP}/gh-aw:/host${RUNNER_TEMP}/gh-aw:ro" --env-all --exclude-env COPILOT_GITHUB_TOKEN --exclude-env GITHUB_MCP_SERVER_TOKEN --exclude-env MCP_GATEWAY_API_KEY --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --audit-dir /tmp/gh-aw/sandbox/firewall/audit --enable-host-access 
--allow-host-ports 80,443,8080 --skip-pull \ + -- /bin/bash -c 'export PATH="${RUNNER_TEMP}/gh-aw/mcp-cli/bin:$PATH" && export PATH="$(find /opt/hostedtoolcache /home/runner/work/_tool -maxdepth 4 -type d -name bin 2>/dev/null | tr '\''\n'\'' '\'':'\'')$PATH"; [ -n "$GOROOT" ] && export PATH="$GOROOT/bin:$PATH" || true && GH_AW_NODE_EXEC="${GH_AW_NODE_BIN:-}"; if [ -z "$GH_AW_NODE_EXEC" ] || [ ! -x "$GH_AW_NODE_EXEC" ]; then GH_AW_NODE_EXEC="$(command -v node 2>/dev/null || echo node)"; fi; "$GH_AW_NODE_EXEC" ${RUNNER_TEMP}/gh-aw/actions/copilot_harness.cjs /usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --no-ask-user --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --add-dir "${GITHUB_WORKSPACE}" --prompt-file /tmp/gh-aw/aw-prompts/prompt.txt' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log + env: + COPILOT_AGENT_RUNNER_TYPE: STANDALONE + COPILOT_API_KEY: dummy-byok-key-for-offline-mode + COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }} + COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || 'claude-sonnet-4.6' }} + GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json + GH_AW_PHASE: agent + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + GH_AW_VERSION: v0.71.5 + GITHUB_API_URL: ${{ github.api_url }} + GITHUB_AW: true + GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows + GITHUB_HEAD_REF: ${{ github.head_ref }} + GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + GITHUB_REF_NAME: ${{ github.ref_name }} + GITHUB_SERVER_URL: ${{ github.server_url }} + GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md + GITHUB_WORKSPACE: ${{ github.workspace }} + GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com + GIT_AUTHOR_NAME: github-actions[bot] + GIT_COMMITTER_EMAIL: 
github-actions[bot]@users.noreply.github.com + GIT_COMMITTER_NAME: github-actions[bot] + XDG_CONFIG_HOME: /home/runner + - name: Detect Copilot errors + id: detect-copilot-errors + if: always() + continue-on-error: true + run: node "${RUNNER_TEMP}/gh-aw/actions/detect_copilot_errors.cjs" + - name: Configure Git credentials + env: + REPO_NAME: ${{ github.repository }} + SERVER_URL: ${{ github.server_url }} + GITHUB_TOKEN: ${{ github.token }} + run: | + git config --global user.email "github-actions[bot]@users.noreply.github.com" + git config --global user.name "github-actions[bot]" + git config --global am.keepcr true + # Re-authenticate git with GitHub token + SERVER_URL_STRIPPED="${SERVER_URL#https://}" + git remote set-url origin "https://x-access-token:${GITHUB_TOKEN}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git" + echo "Git configured with standard GitHub Actions identity" + - name: Copy Copilot session state files to logs + if: always() + continue-on-error: true + run: bash "${RUNNER_TEMP}/gh-aw/actions/copy_copilot_session_state.sh" + - name: Stop MCP Gateway + if: always() + continue-on-error: true + env: + MCP_GATEWAY_PORT: ${{ steps.start-mcp-gateway.outputs.gateway-port }} + MCP_GATEWAY_API_KEY: ${{ steps.start-mcp-gateway.outputs.gateway-api-key }} + GATEWAY_PID: ${{ steps.start-mcp-gateway.outputs.gateway-pid }} + run: | + bash "${RUNNER_TEMP}/gh-aw/actions/stop_mcp_gateway.sh" "$GATEWAY_PID" + - name: Redact secrets in logs + if: always() + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/redact_secrets.cjs'); + await main(); + env: + GH_AW_SECRET_NAMES: 'COPILOT_GITHUB_TOKEN,GH_AW_GITHUB_MCP_SERVER_TOKEN,GH_AW_GITHUB_TOKEN,GITHUB_TOKEN' + SECRET_COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN 
}} + SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }} + SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }} + SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + - name: Append agent step summary + if: always() + run: bash "${RUNNER_TEMP}/gh-aw/actions/append_agent_step_summary.sh" + - name: Copy Safe Outputs + if: always() + env: + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + run: | + mkdir -p /tmp/gh-aw + cp "$GH_AW_SAFE_OUTPUTS" /tmp/gh-aw/safeoutputs.jsonl 2>/dev/null || true + - name: Ingest agent output + id: collect_output + if: always() + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crates.io,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,index.crates.io,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,sh.rustup.rs,static.crates.io,static.rust-lang.org,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com" + GITHUB_SERVER_URL: ${{ github.server_url }} + GITHUB_API_URL: ${{ github.api_url }} + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + 
setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/collect_ndjson_output.cjs'); + await main(); + - name: Parse agent logs for step summary + if: always() + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: /tmp/gh-aw/sandbox/agent/logs/ + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/parse_copilot_log.cjs'); + await main(); + - name: Parse MCP Gateway logs for step summary + if: always() + id: parse-mcp-gateway + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/parse_mcp_gateway_log.cjs'); + await main(); + - name: Print firewall logs + if: always() + continue-on-error: true + env: + AWF_LOGS_DIR: /tmp/gh-aw/sandbox/firewall/logs + run: | + # Fix permissions on firewall logs/audit dirs so they can be uploaded as artifacts + # AWF runs with sudo, creating files owned by root + sudo chmod -R a+r /tmp/gh-aw/sandbox/firewall 2>/dev/null || true + # Only run awf logs summary if awf command exists (it may not be installed if workflow failed before install step) + if command -v awf &> /dev/null; then + awf logs summary | tee -a "$GITHUB_STEP_SUMMARY" + else + echo 'AWF binary not installed, skipping firewall log summary' + fi + - name: Parse token usage for step summary + if: always() + continue-on-error: true + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + 
setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/parse_token_usage.cjs'); + await main(); + - name: Print AWF reflect summary + if: always() + continue-on-error: true + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/awf_reflect_summary.cjs'); + await main(); + - name: Write agent output placeholder if missing + if: always() + run: | + if [ ! -f /tmp/gh-aw/agent_output.json ]; then + echo '{"items":[]}' > /tmp/gh-aw/agent_output.json + fi + - name: Commit cache-memory changes + if: always() + env: + GH_AW_CACHE_DIR: /tmp/gh-aw/cache-memory + run: bash "${RUNNER_TEMP}/gh-aw/actions/commit_cache_memory_git.sh" + - name: Upload cache-memory data as artifact + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + if: always() + with: + name: cache-memory + path: /tmp/gh-aw/cache-memory + - name: Upload agent artifacts + if: always() + continue-on-error: true + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + with: + name: agent + path: | + /tmp/gh-aw/aw-prompts/prompt.txt + /tmp/gh-aw/sandbox/agent/logs/ + /tmp/gh-aw/redacted-urls.log + /tmp/gh-aw/mcp-logs/ + /tmp/gh-aw/agent_usage.json + /tmp/gh-aw/agent-stdio.log + /tmp/gh-aw/pre-agent-audit.txt + /tmp/gh-aw/agent/ + /tmp/gh-aw/github_rate_limits.jsonl + /tmp/gh-aw/safeoutputs.jsonl + /tmp/gh-aw/agent_output.json + /tmp/gh-aw/aw-*.patch + /tmp/gh-aw/aw-*.bundle + /tmp/gh-aw/awf-config.json + /tmp/gh-aw/sandbox/firewall/logs/ + /tmp/gh-aw/sandbox/firewall/audit/ + /tmp/gh-aw/sandbox/firewall/awf-reflect.json + if-no-files-found: ignore + + conclusion: + needs: + - activation + - agent + - detection + - safe_outputs + - 
update_cache_memory + if: > + always() && (needs.agent.result != 'skipped' || needs.activation.outputs.lockdown_check_failed == 'true' || + needs.activation.outputs.stale_lock_file_failed == 'true') + runs-on: ubuntu-slim + permissions: + contents: write + issues: write + pull-requests: write + concurrency: + group: "gh-aw-conclusion-bash-lint-auditor" + cancel-in-progress: false + outputs: + incomplete_count: ${{ steps.report_incomplete.outputs.incomplete_count }} + noop_message: ${{ steps.noop.outputs.noop_message }} + tools_reported: ${{ steps.missing_tool.outputs.tools_reported }} + total_count: ${{ steps.missing_tool.outputs.total_count }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ needs.activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/bash-lint-auditor.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Download agent output artifact + id: download-agent-output + continue-on-error: true + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + name: agent + path: /tmp/gh-aw/ + - name: Setup agent output environment variable + id: setup-agent-output-env + if: steps.download-agent-output.outcome == 'success' + run: | + mkdir -p /tmp/gh-aw/ + find "/tmp/gh-aw/" -type f -print + echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/agent_output.json" >> "$GITHUB_OUTPUT" + - name: Process no-op messages + id: noop + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_NOOP_MAX: "1" + GH_AW_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository 
}}/actions/runs/${{ github.run_id }} + GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }} + GH_AW_NOOP_REPORT_AS_ISSUE: "true" + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/handle_noop_message.cjs'); + await main(); + - name: Log detection run + id: detection_runs + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} + GH_AW_DETECTION_CONCLUSION: ${{ needs.detection.outputs.detection_conclusion }} + GH_AW_DETECTION_REASON: ${{ needs.detection.outputs.detection_reason }} + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/handle_detection_runs.cjs'); + await main(); + - name: Record missing tool + id: missing_tool + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_MISSING_TOOL_CREATE_ISSUE: "true" + GH_AW_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/missing_tool.cjs'); + 
await main(); + - name: Record incomplete + id: report_incomplete + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_REPORT_INCOMPLETE_CREATE_ISSUE: "true" + GH_AW_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/report_incomplete_handler.cjs'); + await main(); + - name: Handle agent failure + id: handle_agent_failure + if: always() + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} + GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }} + GH_AW_WORKFLOW_ID: "bash-lint-auditor" + GH_AW_ACTION_FAILURE_ISSUE_EXPIRES_HOURS: "168" + GH_AW_ENGINE_ID: "copilot" + GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }} + GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }} + GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }} + GH_AW_MCP_POLICY_ERROR: ${{ needs.agent.outputs.mcp_policy_error }} + GH_AW_AGENTIC_ENGINE_TIMEOUT: ${{ needs.agent.outputs.agentic_engine_timeout }} + GH_AW_MODEL_NOT_SUPPORTED_ERROR: ${{ needs.agent.outputs.model_not_supported_error }} + GH_AW_ENGINE_API_HOSTS: "api.enterprise.githubcopilot.com,api.githubcopilot.com,api.business.githubcopilot.com,api.individual.githubcopilot.com" + GH_AW_CODE_PUSH_FAILURE_ERRORS: ${{ needs.safe_outputs.outputs.code_push_failure_errors }} + 
GH_AW_CODE_PUSH_FAILURE_COUNT: ${{ needs.safe_outputs.outputs.code_push_failure_count }} + GH_AW_LOCKDOWN_CHECK_FAILED: ${{ needs.activation.outputs.lockdown_check_failed }} + GH_AW_STALE_LOCK_FILE_FAILED: ${{ needs.activation.outputs.stale_lock_file_failed }} + GH_AW_GROUP_REPORTS: "false" + GH_AW_FAILURE_REPORT_AS_ISSUE: "true" + GH_AW_MISSING_TOOL_REPORT_AS_FAILURE: "true" + GH_AW_MISSING_DATA_REPORT_AS_FAILURE: "true" + GH_AW_TIMEOUT_MINUTES: "20" + GH_AW_CACHE_MEMORY_ENABLED: "true" + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/handle_agent_failure.cjs'); + await main(); + + detection: + needs: + - activation + - agent + if: > + always() && needs.agent.result != 'skipped' && (needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true') + runs-on: ubuntu-latest + permissions: + contents: read + outputs: + detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }} + detection_reason: ${{ steps.detection_conclusion.outputs.reason }} + detection_success: ${{ steps.detection_conclusion.outputs.success }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ needs.activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/bash-lint-auditor.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Download agent output artifact + id: download-agent-output + continue-on-error: true + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + 
name: agent + path: /tmp/gh-aw/ + - name: Setup agent output environment variable + id: setup-agent-output-env + if: steps.download-agent-output.outcome == 'success' + run: | + mkdir -p /tmp/gh-aw/ + find "/tmp/gh-aw/" -type f -print + echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/agent_output.json" >> "$GITHUB_OUTPUT" + - name: Checkout repository for patch context + if: needs.agent.outputs.has_patch == 'true' + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + persist-credentials: false + # --- Threat Detection --- + - name: Clean stale firewall files from agent artifact + run: | + rm -rf /tmp/gh-aw/sandbox/firewall/logs + rm -rf /tmp/gh-aw/sandbox/firewall/audit + - name: Download container images + run: bash "${RUNNER_TEMP}/gh-aw/actions/download_docker_images.sh" ghcr.io/github/gh-aw-firewall/agent:0.25.40@sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504 ghcr.io/github/gh-aw-firewall/api-proxy:0.25.40@sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280 ghcr.io/github/gh-aw-firewall/squid:0.25.40@sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51 + - name: Check if detection needed + id: detection_guard + if: always() + env: + OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }} + HAS_PATCH: ${{ needs.agent.outputs.has_patch }} + run: | + if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then + echo "run_detection=true" >> "$GITHUB_OUTPUT" + echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH" + else + echo "run_detection=false" >> "$GITHUB_OUTPUT" + echo "Detection skipped: no agent outputs or patches to analyze" + fi + - name: Clear MCP Config for detection + if: always() && steps.detection_guard.outputs.run_detection == 'true' + run: | + rm -f "${RUNNER_TEMP}/gh-aw/mcp-config/mcp-servers.json" + rm -f /home/runner/.copilot/mcp-config.json + rm -f "$GITHUB_WORKSPACE/.gemini/settings.json" + - name: Prepare threat detection files + if: always() 
&& steps.detection_guard.outputs.run_detection == 'true' + run: | + mkdir -p /tmp/gh-aw/threat-detection/aw-prompts + cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true + cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true + for f in /tmp/gh-aw/aw-*.patch; do + [ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true + done + for f in /tmp/gh-aw/aw-*.bundle; do + [ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true + done + echo "Prepared threat detection files:" + ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true + - name: Setup threat detection + if: always() && steps.detection_guard.outputs.run_detection == 'true' + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + WORKFLOW_NAME: "Bash Step Hygiene Auditor" + WORKFLOW_DESCRIPTION: "Audits bash bodies in compiled pipeline YAML, applies shellcheck-driven fixes, and opens a PR with the changes." 
+ HAS_PATCH: ${{ needs.agent.outputs.has_patch }} + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/setup_threat_detection.cjs'); + await main(); + - name: Ensure threat-detection directory and log + if: always() && steps.detection_guard.outputs.run_detection == 'true' + run: | + mkdir -p /tmp/gh-aw/threat-detection + touch /tmp/gh-aw/threat-detection/detection.log + - name: Setup Node.js + uses: actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0 + with: + node-version: '24' + package-manager-cache: false + - name: Install GitHub Copilot CLI + run: bash "${RUNNER_TEMP}/gh-aw/actions/install_copilot_cli.sh" 1.0.40 + env: + GH_HOST: github.com + - name: Install AWF binary + run: bash "${RUNNER_TEMP}/gh-aw/actions/install_awf_binary.sh" v0.25.40 + - name: Execute GitHub Copilot CLI + if: always() && steps.detection_guard.outputs.run_detection == 'true' + continue-on-error: true + id: detection_agentic_execution + # Copilot CLI tool arguments (sorted): + timeout-minutes: 20 + run: | + set -o pipefail + touch /tmp/gh-aw/agent-step-summary.md + GH_AW_NODE_BIN=$(command -v node 2>/dev/null || true) + export GH_AW_NODE_BIN + (umask 177 && touch /tmp/gh-aw/threat-detection/detection.log) + printf '%s\n' 
'{"$schema":"https://github.com/github/gh-aw-firewall/releases/download/v0.25.40/awf-config.schema.json","network":{"allowDomains":["api.business.githubcopilot.com","api.enterprise.githubcopilot.com","api.github.com","api.githubcopilot.com","api.individual.githubcopilot.com","github.com","host.docker.internal","telemetry.enterprise.githubcopilot.com"]},"apiProxy":{"enabled":true},"container":{"imageTag":"0.25.40,squid=sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51,agent=sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504,api-proxy=sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280,cli-proxy=sha256:3e7152911d4b4b7b97beef9d3d7d924ff7902227e86001ef3838fb728d5d514c"}}' > "${RUNNER_TEMP}/gh-aw/awf-config.json" && cp "${RUNNER_TEMP}/gh-aw/awf-config.json" /tmp/gh-aw/awf-config.json + # shellcheck disable=SC1003 + sudo -E awf --config "${RUNNER_TEMP}/gh-aw/awf-config.json" --container-workdir "${GITHUB_WORKSPACE}" --mount "${RUNNER_TEMP}/gh-aw:${RUNNER_TEMP}/gh-aw:ro" --mount "${RUNNER_TEMP}/gh-aw:/host${RUNNER_TEMP}/gh-aw:ro" --env-all --exclude-env COPILOT_GITHUB_TOKEN --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --audit-dir /tmp/gh-aw/sandbox/firewall/audit --enable-host-access --allow-host-ports 80,443,8080 --skip-pull \ + -- /bin/bash -c 'export PATH="$(find /opt/hostedtoolcache /home/runner/work/_tool -maxdepth 4 -type d -name bin 2>/dev/null | tr '\''\n'\'' '\'':'\'')$PATH"; [ -n "$GOROOT" ] && export PATH="$GOROOT/bin:$PATH" || true && GH_AW_NODE_EXEC="${GH_AW_NODE_BIN:-}"; if [ -z "$GH_AW_NODE_EXEC" ] || [ ! 
-x "$GH_AW_NODE_EXEC" ]; then GH_AW_NODE_EXEC="$(command -v node 2>/dev/null || echo node)"; fi; "$GH_AW_NODE_EXEC" ${RUNNER_TEMP}/gh-aw/actions/copilot_harness.cjs /usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --no-ask-user --allow-all-tools --add-dir "${GITHUB_WORKSPACE}" --prompt-file /tmp/gh-aw/aw-prompts/prompt.txt' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log + env: + COPILOT_AGENT_RUNNER_TYPE: STANDALONE + COPILOT_API_KEY: dummy-byok-key-for-offline-mode + COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }} + COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || 'claude-sonnet-4.6' }} + GH_AW_PHASE: detection + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_VERSION: v0.71.5 + GITHUB_API_URL: ${{ github.api_url }} + GITHUB_AW: true + GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows + GITHUB_HEAD_REF: ${{ github.head_ref }} + GITHUB_REF_NAME: ${{ github.ref_name }} + GITHUB_SERVER_URL: ${{ github.server_url }} + GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md + GITHUB_WORKSPACE: ${{ github.workspace }} + GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com + GIT_AUTHOR_NAME: github-actions[bot] + GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com + GIT_COMMITTER_NAME: github-actions[bot] + XDG_CONFIG_HOME: /home/runner + - name: Upload threat detection log + if: always() && steps.detection_guard.outputs.run_detection == 'true' + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + with: + name: detection + path: /tmp/gh-aw/threat-detection/detection.log + if-no-files-found: ignore + - name: Parse and conclude threat detection + id: detection_conclusion + if: always() + continue-on-error: true + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }} + GH_AW_DETECTION_CONTINUE_ON_ERROR: "true" + with: 
+ script: | + try { + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/parse_threat_detection_results.cjs'); + await main(); + } catch (loadErr) { + const continueOnError = process.env.GH_AW_DETECTION_CONTINUE_ON_ERROR !== 'false'; + const msg = 'ERR_SYSTEM: \u274C Unexpected error loading threat detection module: ' + (loadErr && loadErr.message ? loadErr.message : String(loadErr)); + core.error(msg); + core.setOutput('reason', 'parse_error'); + if (continueOnError) { + core.warning('\u26A0\uFE0F ' + msg); + core.setOutput('conclusion', 'warning'); + core.setOutput('success', 'false'); + } else { + core.setOutput('conclusion', 'failure'); + core.setOutput('success', 'false'); + core.setFailed(msg); + } + } + + safe_outputs: + needs: + - activation + - agent + - detection + if: (!cancelled()) && needs.agent.result != 'skipped' && needs.detection.result == 'success' + runs-on: ubuntu-slim + permissions: + contents: write + issues: write + pull-requests: write + timeout-minutes: 15 + env: + GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/bash-lint-auditor" + GH_AW_DETECTION_CONCLUSION: ${{ needs.detection.outputs.detection_conclusion }} + GH_AW_DETECTION_REASON: ${{ needs.detection.outputs.detection_reason }} + GH_AW_EFFECTIVE_TOKENS: ${{ needs.agent.outputs.effective_tokens }} + GH_AW_ENGINE_ID: "copilot" + GH_AW_ENGINE_MODEL: ${{ needs.agent.outputs.model }} + GH_AW_ENGINE_VERSION: "1.0.40" + GH_AW_WORKFLOW_ID: "bash-lint-auditor" + GH_AW_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + outputs: + code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }} + code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }} + create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }} + 
create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }} + created_pr_number: ${{ steps.process_safe_outputs.outputs.created_pr_number }} + created_pr_url: ${{ steps.process_safe_outputs.outputs.created_pr_url }} + process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }} + process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ needs.activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/bash-lint-auditor.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Download agent output artifact + id: download-agent-output + continue-on-error: true + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + name: agent + path: /tmp/gh-aw/ + - name: Setup agent output environment variable + id: setup-agent-output-env + if: steps.download-agent-output.outcome == 'success' + run: | + mkdir -p /tmp/gh-aw/ + find "/tmp/gh-aw/" -type f -print + echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/agent_output.json" >> "$GITHUB_OUTPUT" + - name: Download patch artifact + continue-on-error: true + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + name: agent + path: /tmp/gh-aw/ + - name: Checkout repository + if: (!cancelled()) && needs.agent.result != 'skipped' && contains(needs.agent.outputs.output_types, 'create_pull_request') + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + ref: ${{ github.base_ref || github.event.pull_request.base.ref || github.ref_name || github.event.repository.default_branch }} + token: 
${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + persist-credentials: false + fetch-depth: 1 + - name: Configure Git credentials + if: (!cancelled()) && needs.agent.result != 'skipped' && contains(needs.agent.outputs.output_types, 'create_pull_request') + env: + REPO_NAME: ${{ github.repository }} + SERVER_URL: ${{ github.server_url }} + GIT_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + run: | + git config --global user.email "github-actions[bot]@users.noreply.github.com" + git config --global user.name "github-actions[bot]" + git config --global am.keepcr true + # Re-authenticate git with GitHub token + SERVER_URL_STRIPPED="${SERVER_URL#https://}" + git remote set-url origin "https://x-access-token:${GIT_TOKEN}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git" + echo "Git configured with standard GitHub Actions identity" + - name: Configure GH_HOST for enterprise compatibility + id: ghes-host-config + shell: bash + run: | + # Derive GH_HOST from GITHUB_SERVER_URL so the gh CLI targets the correct + # GitHub instance (GHES/GHEC). On github.com this is a harmless no-op. 
+ GH_HOST="${GITHUB_SERVER_URL#https://}" + GH_HOST="${GH_HOST#http://}" + echo "GH_HOST=${GH_HOST}" >> "$GITHUB_ENV" + - name: Process Safe Outputs + id: process_safe_outputs + uses: actions/github-script@d746ffe35508b1917358783b479e04febd2b8f71 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crates.io,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,index.crates.io,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,sh.rustup.rs,static.crates.io,static.rust-lang.org,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com" + GITHUB_SERVER_URL: ${{ github.server_url }} + GITHUB_API_URL: ${{ github.api_url }} + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"create_pull_request\":{\"allowed_files\":[\"src/data/**\",\"src/runtimes/**/mod.rs\",\"src/compile/extensions/**.rs\",\"src/compile/common.rs\",\"src/engine.rs\",\"src/tools/**/extension.rs\",\"tests/bash_lint_tests.rs\",\"tests/fixtures/**\",\"AGENTS.md\",\"docs/extending.md\"],\"max\":1,\"max_patch_files\":100,\"max_patch_size\":1024,\"protect_top_level_dot_folders\":true,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"DESIGN.md\",\"README.md\",\"CONTRIBUTING.md\",\"CHANGELOG.md\",\"SECURITY.md\",\"CODE_OF_CONDUCT.md\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_files_policy\":\"fallback-to-issue\"},\"create_report_incomplete_issue\":{},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/safe_output_handler_manager.cjs'); + await main(); + - name: Upload Safe Outputs Items + if: always() + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + with: + name: safe-outputs-items + path: | + /tmp/gh-aw/safe-output-items.jsonl + 
/tmp/gh-aw/temporary-id-map.json + if-no-files-found: ignore + + update_cache_memory: + needs: + - activation + - agent + - detection + if: > + always() && (needs.detection.result == 'success' || needs.detection.result == 'skipped') && + needs.agent.result == 'success' + runs-on: ubuntu-slim + permissions: {} + env: + GH_AW_WORKFLOW_ID_SANITIZED: bashlintauditor + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ needs.activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Bash Step Hygiene Auditor" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/bash-lint-auditor.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Download cache-memory artifact (default) + id: download_cache_default + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + continue-on-error: true + with: + name: cache-memory + path: /tmp/gh-aw/cache-memory + - name: Check if cache-memory folder has content (default) + id: check_cache_default + shell: bash + run: | + if [ -d "/tmp/gh-aw/cache-memory" ] && [ "$(ls -A /tmp/gh-aw/cache-memory 2>/dev/null)" ]; then + echo "has_content=true" >> "$GITHUB_OUTPUT" + else + echo "has_content=false" >> "$GITHUB_OUTPUT" + fi + - name: Save cache-memory to cache (default) + if: steps.check_cache_default.outputs.has_content == 'true' + uses: actions/cache/save@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5 + with: + key: memory-none-nopolicy-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }} + path: /tmp/gh-aw/cache-memory + diff --git a/.github/workflows/bash-lint-auditor.md b/.github/workflows/bash-lint-auditor.md new file mode 100644 index 0000000..95b7317 --- /dev/null +++ b/.github/workflows/bash-lint-auditor.md @@ -0,0 +1,206 @@ +--- +on: + schedule: daily around 09:00 
+description: Audits bash bodies in compiled pipeline YAML, applies shellcheck-driven fixes, and opens a PR with the changes. +permissions: + contents: read + pull-requests: read + issues: read +tools: + github: + toolsets: [default] + bash: ["*"] + web-fetch: + cache-memory: true +network: + allowed: [defaults, rust] +safe-outputs: + create-pull-request: + max: 1 + protected-files: fallback-to-issue + allowed-files: + - "src/data/**" + - "src/runtimes/**/mod.rs" + - "src/compile/extensions/**.rs" + - "src/compile/common.rs" + - "src/engine.rs" + - "src/tools/**/extension.rs" + - "tests/bash_lint_tests.rs" + - "tests/fixtures/**" + - "AGENTS.md" + - "docs/extending.md" +--- + +# Bash Step Hygiene Auditor + +You are a senior Rust engineer responsible for the quality of bash steps emitted by the **ado-aw** compiler. The repository already has a PR-time lint (`tests/bash_lint_tests.rs`) that blocks regressions; your job is the *proactive* layer that runs daily and improves the situation between PRs. + +Your goal each run is to land **at most one** focused, reviewable PR that fixes real issues. If there is nothing to fix, exit cleanly without opening a PR. + +## Step 1 — Load Previous State + +Persistent memory lives at `/tmp/gh-aw/cache-memory/`. Read what the last run did so you do not propose the same change twice: + +```bash +cat /tmp/gh-aw/cache-memory/bash-hygiene-state.json 2>/dev/null || echo '{"history":[]}' +``` + +If the most recent entry says you proposed something that is still open as a PR, exit early — wait for the maintainer to act before piling on more. + +## Step 2 — Install shellcheck + +The PR-time lint requires `shellcheck`. Install it before doing anything else: + +```bash +# Prefer apt (fastest on Ubuntu runners). Fall back to the upstream static +# binary if apt is unavailable for any reason. 
+if sudo apt-get install -y shellcheck > /dev/null 2>&1; then + shellcheck --version +else + SC_VERSION="v0.10.0" + curl -fsSL -o /tmp/sc.tar.xz \ + "https://github.com/koalaman/shellcheck/releases/download/${SC_VERSION}/shellcheck-${SC_VERSION}.linux.x86_64.tar.xz" + tar -C /tmp -xJf /tmp/sc.tar.xz + export PATH="/tmp/shellcheck-${SC_VERSION}:$PATH" + shellcheck --version +fi +``` + +Confirm `shellcheck --version` runs and the version is `>= 0.9`. + +## Step 3 — Baseline the Lint + +Run the existing integration test under enforce mode and capture the result: + +```bash +set -o pipefail +ENFORCE_BASH_LINT=1 cargo test --test bash_lint_tests -- --nocapture 2>&1 | tee /tmp/lint-baseline.log +echo "exit=$?" +``` + +(The `set -o pipefail` matters here: without it, `$?` would report `tee`'s status, not cargo's — the very masking pattern this workflow exists to eliminate.) There are three possible outcomes; each takes a different path. + +**A. Lint is green (exit 0).** The PR gate is doing its job. Move to Step 4 and look for proactive improvements. + +**B. Lint is red with findings (panic with `shellcheck flagged …`).** Latent issues are on `main` — somebody bypassed the gate, or the gate's allowlist accepts something it shouldn't. Move to Step 5. + +**C. Lint is red with a coverage gap (panic with `step display names were not produced by any fixture`).** A new generator has been added but no fixture exercises it. Move to Step 6. + +## Step 4 — Proactive Improvements (when lint is green) + +When the lint is already green, audit the *quality* of the bash hygiene story. Do exactly one of the following per run, in order of priority: + +### 4a. Stale disable directives + +Find `# shellcheck disable=SCxxxx` directives that no longer fire on the bash body that contains them: + +```bash +grep -rn "shellcheck disable=" src/data/ src/runtimes/ src/compile/ src/tools/ src/engine.rs 2>/dev/null +``` + +For each hit, temporarily delete the directive, rerun `cargo test --test bash_lint_tests -- --nocapture` (with `ENFORCE_BASH_LINT=1`), and check whether the test still passes.
If the directive is now unnecessary (test still passes), remove it permanently. Restore the source file if the test fails. + +### 4b. Lint exclude-list audit + +The lint excludes `SC1090,SC1091` globally (documented in `tests/bash_lint_tests.rs`). Check whether tightening would surface new findings by temporarily removing an exclusion from that list and rerunning: + +```bash +# Probe with an exclusion temporarily removed (revert the test file afterwards) +ENFORCE_BASH_LINT=1 cargo test --test bash_lint_tests 2>&1 | head -50 +``` + +If you propose tightening, add a per-line `# shellcheck disable=` comment inside the offending bash body rather than expanding the global exclude list. Keep the exclude list minimal. + +### 4c. Expand fixture coverage + +Walk `src/runtimes/`, `src/tools/`, `src/compile/extensions/` and check whether every code path that emits a `- bash: |` step is exercised by some fixture. A generator that the lint never reaches is a generator with no quality story. Add a fixture (or extend an existing one) only if you find a real, currently-unreached generator. + +If none of 4a / 4b / 4c finds anything, **exit cleanly** — use the `noop` safe output with the message "Bash hygiene is current; no actionable findings."
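Whichever branch you take next, the masked-pipeline failure mode is the one worth internalizing: it is the only finding class that stays invisible in green CI runs. A two-line demonstration — plain bash, nothing repo-specific assumed:

```bash
# A pipeline's exit status defaults to the LAST command's status,
# so a failing producer piped into a succeeding consumer reports success:
false | tee /dev/null
echo "default: $?"    # prints "default: 0" -- the failure is masked

# With pipefail, the pipeline reports the rightmost non-zero status:
set -o pipefail
false | tee /dev/null
echo "pipefail: $?"   # prints "pipefail: 1" -- the failure surfaces
```

This is exactly how `grep … | sha256sum -c -` can wave through an unverified binary: the downstream command exits 0 on empty input, and without `pipefail` that 0 is the step's verdict.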
+ + ## Step 5 — Fix Real Findings + +For each finding in `/tmp/lint-baseline.log`, apply the most direct, least invasive fix: + +| Finding | Canonical fix | +|---|---| +| **SC2164** `cd "$X"` without `\|\|` | `cd "$X" \|\| exit 1` | +| **SC2086** unquoted variable | wrap in `"$VAR"` | +| **SC2046** unquoted `$(…)` | wrap in `"$(…)"` | +| **SC2155** `local var=$(cmd)` | split into `local var; var=$(cmd)` so `cmd`'s exit code is visible | +| **SC2154** unset variable | quote and confirm it really is set by the surrounding ADO macros; if not, set a sane default before use | +| **SC2088** tilde in double quotes | replace with `$HOME` | +| **`grep \| sha256sum`-style masked pipeline** | prepend `set -eo pipefail` to the bash body, OR rewrite as `checksum=$(grep …) \|\| exit 1; printf '%s\n' "$checksum" \| sha256sum -c -` | + +For each fix, also confirm the generator (not just the compiled output) is updated. If the offending bash lives in a static template (`src/data/base.yml`, `src/data/1es-base.yml`), edit there. If it comes from a Rust generator (`src/runtimes/*/mod.rs`, `src/compile/common.rs`, etc.), edit the generator and verify the next compile reproduces the fix. + +After fixes, **re-run the full lint** and confirm exit 0: + +```bash +ENFORCE_BASH_LINT=1 cargo test --test bash_lint_tests -- --nocapture +``` + +## Step 6 — Add Missing Fixture Coverage + +When the coverage check is the red signal, a new generator was introduced without a fixture that exercises it. Inspect the `REQUIRED_STEP_DISPLAY_NAMES` list, compare it against the display names the current fixtures actually produce, and identify which required names are missing. Add a fixture (or extend `runtime-coverage-agent.md`) so the missing display name appears at least once in the harvested set.
+ +Re-run the lint to confirm: + +```bash +ENFORCE_BASH_LINT=1 cargo test --test bash_lint_tests -- --nocapture +``` + +## Step 7 — Full Validation + +Before opening a PR, run the full test suite and clippy to make sure your changes haven't broken anything else: + +```bash +cargo test +cargo clippy --all-targets +``` + +Both must be clean. If they aren't, your fix introduced a regression — revert and rethink before continuing. + +## Step 8 — Save State + +Write the run outcome to memory so the next run knows what to skip: + +```json +{ + "history": [ + { + "date": "", + "outcome": "fixed|no-action|coverage-added|disable-removed", + "details": "", + "pr_title": "" + } + ] +} +``` + +Truncate history to the last 30 entries. Write to `/tmp/gh-aw/cache-memory/bash-hygiene-state.json`. + +## Step 9 — Open the PR + +If you made changes, open a PR with: + +- **Title** — conventional-commits format, scope `lint` or `templates` or `runtimes` depending on what changed. Examples: + - `fix(templates): quote $AGENT_EXIT_CODE in agent run step` + - `fix(runtimes): split masked-return assignment in lean install` + - `test(bash-lint): cover dotnet-with-config generator` + - `chore(bash-lint): remove stale shellcheck disable for SC2086` +- **Body** — three short sections: + 1. **What the lint found** — copy the relevant lines from `/tmp/lint-baseline.log`. + 2. **How it was fixed** — name the rule, name the canonical fix, point at the file(s) touched. + 3. **Verification** — confirm `ENFORCE_BASH_LINT=1 cargo test --test bash_lint_tests`, `cargo test`, and `cargo clippy --all-targets` all pass. + +Use the `create-pull-request` safe-output. Restrict the PR to the files you actually touched — the `allowed-files` filter in this workflow's front matter already enforces this; do not attempt to edit anything outside that list. 
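As a concrete companion to the Step 8 state write above: one way to append an entry and truncate the history, assuming `jq` is available on the runner (it is preinstalled on GitHub-hosted Ubuntu images). The entry values here are illustrative placeholders, not prescribed output:

```bash
STATE=/tmp/gh-aw/cache-memory/bash-hygiene-state.json
mkdir -p "$(dirname "$STATE")"
# Seed an empty history on first run so jq always has valid input.
[ -f "$STATE" ] || echo '{"history":[]}' > "$STATE"
# Append this run's entry and keep only the 30 most recent.
jq --arg date "$(date -u +%F)" \
   --arg outcome "no-action" \
   --arg details "lint green; no actionable findings" \
   --arg pr_title "" \
   '.history = ((.history + [{$date, $outcome, $details, $pr_title}]) | .[-30:])' \
   "$STATE" > "$STATE.tmp" && mv "$STATE.tmp" "$STATE"
```

The write-to-temp-then-`mv` keeps the state file intact if jq fails partway, so a bad run never corrupts the next run's memory.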
+ +If you find that a fix requires editing a file outside `allowed-files` (e.g., a new bug in the safe-output Rust code), use `report-incomplete` with a precise description so a maintainer can take over manually. + +## When NOT to Open a PR + +- The lint is green and no proactive improvement is actionable — emit `noop`. +- The previous run's PR is still open — exit without doing anything, log "waiting on PR #N". +- The change you'd need to make crosses into business logic (not just bash hygiene) — file a `missing-data` report so a maintainer reviews it. +- You cannot get `cargo test` to pass after your fix — revert and emit `report-incomplete`. + +Keep each PR small, mechanical, and reviewable. One run, one concern, one PR.