17 changes: 17 additions & 0 deletions IMPLEMENTATION_PLAN.md
@@ -0,0 +1,17 @@
## Stage 1: Inspect Current CLI Launch Flow
**Goal**: Confirm how PromptKit detects and spawns supported CLIs on Windows.
**Success Criteria**: Relevant launcher code and tests identified.
**Tests**: Read existing `cli/lib/launch.js`, `cli/bin/cli.js`, and `cli/tests/launch.test.js`.
**Status**: Complete

## Stage 2: Add Codex Windows-Compatible Launch Support
**Goal**: Support `--cli codex` and ensure Windows launches the npm `.cmd` shim instead of relying on shell-specific resolution.
**Success Criteria**: `codex` is accepted, launched correctly on Windows, and existing CLIs retain behavior.
**Tests**: Update launch unit tests for `codex` command resolution and dry-run output.
**Status**: Complete

## Stage 3: Verify and Finalize
**Goal**: Run the CLI test suite and confirm the change is isolated.
**Success Criteria**: Relevant verification completed and docs/help text reflect Codex support.
**Tests**: `node .\\bin\\cli.js interactive --cli codex --dry-run`; attempted `node --test --test-concurrency=1 tests\\launch.test.js`
**Status**: Complete
10 changes: 10 additions & 0 deletions README.md
@@ -73,6 +73,7 @@ To use the interactive mode, you'll also need one of the following LLM CLI tools

- **GitHub Copilot CLI** — Install the [GitHub CLI](https://cli.github.com/), authenticate with `gh auth login`, ensure Copilot access is enabled for your account/organization, then run `gh extension install github/gh-copilot`
- **Claude Code** — [Install Claude Code](https://docs.anthropic.com/en/docs/claude-code)
- **OpenAI Codex CLI** — [Install Codex CLI](https://github.com/openai/codex)

Not using a CLI tool? See [Using with any LLM (manual)](#using-with-any-llm-manual).

@@ -179,6 +180,15 @@ cd promptkit
claude "Read and execute bootstrap.md"
```

### Using with Codex CLI

Codex also supports reading the bootstrap file directly from the repo root:

```bash
cd promptkit
codex "Read and execute bootstrap.md"
```

### Using with any LLM (manual)

If your tool doesn't support skills or file access, paste the bootstrap
2 changes: 1 addition & 1 deletion cli/bin/cli.js
@@ -55,7 +55,7 @@ program
.description("Launch an interactive session with your LLM CLI (default)")
.option(
"--cli <name>",
"LLM CLI to use (copilot, gh-copilot, claude)"
"LLM CLI to use (copilot, gh-copilot, claude, codex)"
)
.option(
"--dry-run",
48 changes: 40 additions & 8 deletions cli/lib/launch.js
@@ -8,18 +8,37 @@ const fs = require("fs");
const path = require("path");
const os = require("os");

function pathDirs() {
return (process.env.PATH || "").split(path.delimiter).filter(Boolean);
}

function windowsPathExts() {
return (process.env.PATHEXT || ".EXE;.COM;.BAT;.CMD")
.split(";")
.map((e) => e.toLowerCase());
}

function isExactFileOnPath(fileName) {
for (const dir of pathDirs()) {
try {
fs.accessSync(path.join(dir, fileName), fs.constants.F_OK);
return true;
} catch {
// not found in this directory, continue
}
}
return false;
}

function isOnPath(cmd) {
// Search PATH entries directly rather than shelling out to `which`/`where`.
// This avoids requiring `which` to be on PATH itself (important in test
// environments where PATH is restricted to a mock directory).
const pathDirs = (process.env.PATH || "").split(path.delimiter).filter(Boolean);
const exts = process.platform === "win32"
? (process.env.PATHEXT || ".EXE;.COM;.BAT;.CMD").split(";").map((e) => e.toLowerCase())
: [""];
const exts = process.platform === "win32" ? windowsPathExts() : [""];
// On Windows, X_OK is not meaningful — any file with a matching PATHEXT
// extension is considered executable, so we check for existence (F_OK) only.
const accessFlag = process.platform === "win32" ? fs.constants.F_OK : fs.constants.X_OK;
for (const dir of pathDirs) {
for (const dir of pathDirs()) {
for (const ext of exts) {
try {
fs.accessSync(path.join(dir, cmd + ext), accessFlag);
@@ -32,6 +51,13 @@ function isOnPath(cmd) {
return false;
}

function resolveSpawnCommand(cmd) {
if (process.platform !== "win32") return cmd;

const shim = `${cmd}.cmd`;
return isExactFileOnPath(shim) ? shim : cmd;
}

function detectCli() {
// Check for GitHub Copilot CLI first (most common)
if (isOnPath("copilot")) return "copilot";
@@ -45,6 +71,7 @@ function detectCli() {
}
}
if (isOnPath("claude")) return "claude";
if (isOnPath("codex")) return "codex";
return null;
}

@@ -76,7 +103,8 @@ function launchInteractive(contentDir, cliName, { dryRun = false } = {}) {
"No supported LLM CLI found on PATH.\n\n" +
"Install one of:\n" +
" - GitHub Copilot CLI: gh extension install github/gh-copilot\n" +
" - Claude Code: https://docs.anthropic.com/en/docs/claude-code\n\n" +
" - Claude Code: https://docs.anthropic.com/en/docs/claude-code\n" +
" - OpenAI Codex CLI: https://github.com/openai/codex\n\n" +
"Alternatively, load bootstrap.md in your LLM manually from:\n" +
` ${contentDir}`
);
@@ -107,7 +135,7 @@ function launchInteractive(contentDir, cliName, { dryRun = false } = {}) {
let cmd, args;
switch (cli) {
case "copilot":
cmd = "copilot";
cmd = resolveSpawnCommand("copilot");
// --add-dir grants file access to the staging directory.
args = ["--add-dir", tmpDir, "-i", bootstrapPrompt];
break;
@@ -117,7 +145,11 @@ function launchInteractive(contentDir, cliName, { dryRun = false } = {}) {
break;
case "claude":
// --add-dir grants file access to the staging directory.
cmd = "claude";
cmd = resolveSpawnCommand("claude");
args = ["--add-dir", tmpDir, bootstrapPrompt];
break;
case "codex":
cmd = resolveSpawnCommand("codex");
args = ["--add-dir", tmpDir, bootstrapPrompt];
break;
default:
1 change: 1 addition & 0 deletions cli/package.json
@@ -37,6 +37,7 @@
"llm",
"ai",
"copilot",
"codex",
"prompt-templates",
"agentic-ai",
"developer-tools"
15 changes: 10 additions & 5 deletions cli/specs/design.md
@@ -108,7 +108,7 @@ validate content availability.
by category, and displays the result. No separate `manifest.js` module
is used (see REQ-CLI-103).
- The `--cli` flag documents valid values (`copilot`, `gh-copilot`,
`claude`) in its help text (see REQ-CLI-011).
`claude`, `codex`) in its help text (see REQ-CLI-011).

**Key function**:

@@ -146,10 +146,15 @@ interactive session.
- CLI detection uses `execFileSync` with `where` (Windows) or `which`
(Unix) — this is the most reliable cross-platform way to check if a
command exists on PATH without actually executing it.
- The detection order (copilot → gh-copilot → claude) prioritizes GitHub
- The detection order (copilot → gh-copilot → claude → codex) prioritizes GitHub
Copilot CLI as the primary target. The `gh copilot` variant is checked
by actually running `gh copilot --help` to verify the extension is
installed, not just that `gh` exists.

> **Review comment:** Pre-existing nit (optional fix): This line says detection uses `execFileSync` with `where`/`which`, but the implementation was changed to use direct PATH scanning in a previous PR. Since you're editing this section anyway, it might be worth correcting. Totally optional — not something you introduced.
- On Windows, npm-installed CLIs such as `copilot`, `claude`, and `codex`
may need their `.cmd` shims invoked explicitly because Node's
`child_process.spawn()` does not resolve commands the same way an
interactive shell does. The launcher therefore prefers `<name>.cmd`
when present on `PATH`.
- Content is copied to a temp directory (`os.tmpdir()` + `mkdtempSync`)
because LLM CLIs need to read the files from their CWD, and the npm
package's `content/` directory may be in a read-only or non-obvious
@@ -176,7 +181,7 @@ Internal helper. Checks if a command exists on PATH using platform-
appropriate lookup.

```
detectCli() → "copilot" | "gh-copilot" | "claude" | null
detectCli() → "copilot" | "gh-copilot" | "claude" | "codex" | null
```
Probes PATH for supported LLM CLIs in priority order.

@@ -397,15 +402,15 @@ Global options:

Interactive options:
--cli <name> Override LLM CLI auto-detection
Valid values: copilot, gh-copilot, claude
Valid values: copilot, gh-copilot, claude, codex
```

### 5.2 Module Exports

**launch.js**:
```javascript
module.exports = {
detectCli, // () → "copilot" | "gh-copilot" | "claude" | null
detectCli, // () → "copilot" | "gh-copilot" | "claude" | "codex" | null
launchInteractive, // (contentDir: string, cliName: string | null) → never
copyContentToTemp // (contentDir: string) → string (tmpDir path)
}
27 changes: 25 additions & 2 deletions cli/tests/launch.test.js
@@ -140,6 +140,14 @@ describe("Launch Module", () => {
assert.strictEqual(runDetectCli(), "claude");
});

it("TC-CLI-072A: detectCli finds codex after claude", () => {
removeMockCmd("copilot");
removeMockCmd("gh");
removeMockCmd("claude");
createMockCmd("codex");
assert.strictEqual(runDetectCli(), "codex");
});

it("TC-CLI-074: gh without copilot extension is not detected as gh-copilot", () => {
removeMockCmd("copilot");
removeMockCmd("claude");
@@ -322,7 +330,7 @@ describe("Launch Module", () => {
return JSON.parse(fs.readFileSync(captureFile, "utf8"));
}

for (const cliName of ["claude", "copilot", "gh-copilot"]) {
for (const cliName of ["claude", "copilot", "gh-copilot", "codex"]) {
// TC-CLI-082 and TC-CLI-083 combined — run once per CLI
it(`TC-CLI-082/083: ${cliName} spawned with originalCwd and --add-dir for staging dir`, () => {
const mockBinDir = path.join(cwdTestTmpDir, `mock-bin-${cliName}`);
@@ -371,7 +379,7 @@
});

describe("--dry-run flag", () => {
for (const cliName of ["copilot", "gh-copilot", "claude"]) {
for (const cliName of ["copilot", "gh-copilot", "claude", "codex"]) {
it(`TC-CLI-085: --dry-run prints spawn command for ${cliName} without launching`, () => {
// --dry-run must print the command and args then exit 0 without
// spawning the real LLM CLI. We run with an empty PATH so that
@@ -406,10 +414,25 @@

// Parse the args line as JSON so we verify structure, not wording.
const lines = stdout.split("\n");
const cmdLine = lines.find((l) => l.trim().startsWith("cmd:"));
const argsLine = lines.find((l) => l.trim().startsWith("args:"));
assert.ok(cmdLine, `--dry-run output should include a 'cmd:' line for ${cliName}`);
assert.ok(argsLine, `--dry-run output should include an 'args:' line for ${cliName}`);
const parsedCmd = cmdLine.trim().slice("cmd:".length).trim();
const parsedArgs = JSON.parse(argsLine.trim().slice("args:".length).trim());

if (cliName === "gh-copilot") {
assert.strictEqual(parsedCmd, "gh", "gh-copilot should spawn gh");
} else if (process.platform === "win32") {
assert.strictEqual(
parsedCmd,
`${cliName}.cmd`,
`${cliName} should spawn the Windows .cmd shim`
);
} else {
assert.strictEqual(parsedCmd, cliName, `${cliName} should spawn its bare command`);
}

> **Review comment:** This assertion will fail on Windows — which is ironic given the PR's intent! 😄
>
> The dry-run test uses an empty PATH directory (`emptyBinDir`), so when `resolveSpawnCommand()` runs, it can't find any `.cmd` file and correctly falls back to the bare command name. But this assertion expects `${cliName}.cmd` on Windows.
>
> To fix, create a `.cmd` stub in `emptyBinDir` so `resolveSpawnCommand()` can find it:
>
> ```
> // Create a .cmd stub so resolveSpawnCommand finds it
> if (process.platform === "win32" && cliName !== "gh-copilot") {
>   fs.writeFileSync(path.join(emptyBinDir, `${cliName}.cmd`), "");
> }
> ```
>
> Alternatively, accept the bare command name when no `.cmd` shim is present on PATH (since that is the correct `resolveSpawnCommand` behavior).

// The bootstrap prompt must appear as exactly one element containing bootstrap.md,
// not split across multiple elements (the shell: true regression).
const bootstrapArgs = parsedArgs.filter((a) => a.includes("bootstrap.md"));
8 changes: 4 additions & 4 deletions docs/faq.md
@@ -23,12 +23,12 @@ Yes. PromptKit generates standard Markdown prompts. The assembled output
can be pasted into any LLM interface — ChatGPT, Claude, Gemini, Copilot
Chat, or any other tool that accepts text input.

Interactive mode requires GitHub Copilot CLI or Claude Code, but the
Interactive mode requires GitHub Copilot CLI, Claude Code, or OpenAI Codex CLI, but the
`assemble` command produces a plain text file usable anywhere.

### Do I need GitHub Copilot CLI?

No. GitHub Copilot CLI (or Claude Code) is only needed for **interactive
No. GitHub Copilot CLI (or Claude Code or OpenAI Codex CLI) is only needed for **interactive
mode**, which launches a live prompt-building session. The `list` and
`assemble` commands work standalone with just Node.js 18+.

@@ -205,8 +205,8 @@ npx promptkit list

### Interactive mode says "No supported LLM CLI found"

Interactive mode requires GitHub Copilot CLI (`copilot`) or Claude Code
(`claude`) on your PATH. Install one of them, or use `assemble` mode
Interactive mode requires GitHub Copilot CLI (`copilot`), Claude Code
(`claude`), or OpenAI Codex CLI (`codex`) on your PATH. Install one of them, or use `assemble` mode
instead.

### The assembled prompt is missing a section
7 changes: 4 additions & 3 deletions docs/getting-started.md
@@ -16,7 +16,8 @@ your behalf.

- **Node.js 18+** — required for the `npx` CLI
- **Optional:** [GitHub Copilot CLI](https://docs.github.com/en/copilot)
or [Claude Code](https://docs.anthropic.com/en/docs/claude-code) for
or [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or
[OpenAI Codex CLI](https://github.com/openai/codex) for
interactive mode

## Quick Start
@@ -59,8 +60,8 @@ and you're running.
npx promptkit
```

Interactive mode auto-detects your LLM CLI (GitHub Copilot CLI or Claude
Code), copies PromptKit's content to a temp directory, and launches an
Interactive mode auto-detects your LLM CLI (GitHub Copilot CLI, Claude
Code, or OpenAI Codex CLI), copies PromptKit's content to a temp directory, and launches an
interactive session with `bootstrap.md` as the custom instruction. The
bootstrap engine walks you through:
