feat(evals): Level 3 DOCX agent benchmark suite #2664

tupizz merged 43 commits into andrii/sd-2451-refactor-mcp-set-up from feat/level3-agent-benchmark
Conversation
- Fix cwd ENOENT: create stateDir before passing to SDK query()
- Fix Claude Code provider: clean up, remove pathToClaudeCodeExecutable hacks
- Fix Codex provider: match real SDK API (command_execution items, approvalPolicy)
- Fix test assertions to match actual fixture content:
  - contract.docx -> report-with-formatting.docx for heading tasks
  - [Employee Name] -> [Candidate Name] for employment-offer.docx
  - Fix $150M collateral check (XML extraction splits it as "1 50")
- Upgrade @anthropic-ai/claude-agent-sdk to ^0.2.87
- Copy fixture into stateDir so agents can write within their sandbox
- Add stateDir fallback for output file detection
- Add useClaudeSettings option to inherit local Claude Code config (MCP servers, skills, CLAUDE.md) via settingSources
- Add CC-local condition for testing with the user's own Claude Code setup
- Wire superdocMcp config to attach the SuperDoc MCP server via mcpServers
- Add preeval:benchmark script to build the MCP server before runs
- Add model, maxTurns, systemPrompt config options
Standalone test script that verifies both providers end-to-end:
- Claude baseline read/edit (without SuperDoc)
- Claude superdoc-skill with MCP (superdoc_open → get_content → close)
- Claude local with useClaudeSettings
- Codex baseline read/edit (without SuperDoc)
- Codex with SuperDoc MCP

Run: node evals/scripts/smoke-test-benchmark.mjs --claude --codex
- Add system prompt for superdoc conditions instructing agents to use SuperDoc MCP tools exclusively, not raw unzip/XML
- Write AGENTS.md in the working directory reinforcing SuperDoc tool usage
- Restrict CC-superdoc-skill allowedTools to Read/Glob/Grep (no Bash) so agents cannot fall back to raw DOCX manipulation
- Add prompt reinforcement for Codex superdoc conditions
- Verified: Claude superdoc-skill read + edit both use MCP exclusively (superdoc_open → search → edit → save → close, zero Bash calls)
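A minimal sketch of the restricted tool configuration described above; the option names follow the Claude Agent SDK, and the prompt wording is illustrative:

```javascript
// Read-only tool allowlist for the CC-superdoc-skill condition: with no
// Bash available, the agent cannot unzip the DOCX and must go through
// the SuperDoc MCP tools instead.
const superdocSkillOptions = {
  allowedTools: ['Read', 'Glob', 'Grep'],
  systemPrompt:
    'Use the SuperDoc MCP tools exclusively. Do not unzip the DOCX or edit its XML directly.',
};
```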
- Pass process.env.OPENAI_API_KEY to new Codex({ apiKey }) so the SDK
uses API key auth instead of relying on codex login session
- Add Claude edit + MCP tests to smoke test script
- Verified: Codex baseline read + edit pass with API key auth
- Known: Codex MCP calls fail due to rmcp protocol incompatibility
in the Codex CLI (serde error on tool calls, Transport closed)
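The API-key selection above can be sketched as a small helper; `codexClientOptions` is our name, not part of the `@openai/codex-sdk` API:

```javascript
// Prefer explicit API-key auth when OPENAI_API_KEY is set; otherwise
// return no apiKey so the SDK falls back to the `codex login` session.
function codexClientOptions(env) {
  return env.OPENAI_API_KEY ? { apiKey: env.OPENAI_API_KEY } : {};
}

// Usage (sketch): new Codex(codexClientOptions(process.env))
```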
Root cause: console.debug('[super-editor] Telemetry: enabled') in
Editor.ts writes to stdout when superdoc_open initializes the editor.
The Codex CLI's Rust MCP client (rmcp) parses stdout as JSON-RPC and
dies with "serde error expected value at line 1 column 2" on the
non-JSON line, closing the transport.
Fixes:
- Redirect all console methods (log/info/debug/warn) to stderr in
the MCP server entry point, before any imports run
- Add mcp_auto_approve config for Codex to auto-approve MCP tool calls
(approval_policy=never only covers shell commands, not MCP)
- Add stdio wrapper script for transport debugging (logs raw bytes)
- Use runStreamed() in Codex provider to capture full MCP event lifecycle
- Pass minimal env to prevent other stdout pollution from deps
- Add preflight check for MCP server build artifact
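The stdout-redirect fix above can be sketched as an entry-point guard; this assumes the entry point controls import order, so the rebinding runs before any module that might log:

```javascript
// Rebind every stdout-writing console method to stderr BEFORE importing
// the editor, so stray logs like "[super-editor] Telemetry: enabled"
// can never corrupt the JSON-RPC stream rmcp parses on stdout.
for (const method of ['log', 'info', 'debug', 'warn']) {
  console[method] = (...args) => {
    process.stderr.write(args.map(String).join(' ') + '\n');
  };
}
// Only after the guard is in place do we load modules that may log, e.g.:
// await import('./superdoc-mcp-server.mjs');
```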
Reduce from 18 to 6 tasks (3 reading + 3 editing) for faster iteration. Full suite: 12 runs in 3 minutes, 100% pass rate on Codex baseline + superdoc-skill conditions. Tasks: extract headings, extract entities, extract financials, replace entity, insert section, fill placeholders.
- Add per-task detail table with every metric per condition
- Add input/output token breakdown (not just total)
- Add p95 latency alongside median
- Add estimated cost per task (based on model token pricing)
- Add comprehensive recommendation with latency, token, cost, steps, and collateral comparisons between conditions
- Fix task description extraction from vars.task fallback
Replace single benchmarkMetrics assertion with separate per-metric assertions (steps, latency, tokens, path), each with its own metric tag. Promptfoo displays these as individual columns with actual numeric values instead of a single "efficiency 1.00" score. Columns visible in UI: correctness, collateral, steps, latency, tokens, path
…ition

The superdocOnPath flag was a no-op because the SuperDoc CLI was never installed as a binary on PATH. Now creates a shell wrapper script in the stateDir's bin/ that delegates to apps/cli/dist/index.js and prepends it to the agent's PATH.

Finding: even with superdoc on PATH, Codex doesn't discover or use it without explicit instruction. All superdoc-cli runs fall back to raw unzip/XML. This is valid benchmark data.
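The wrapper installation above can be sketched roughly as follows; the function name and CLI entry path are illustrative, not taken from the provider source:

```javascript
import fs from 'node:fs';
import path from 'node:path';

// Write a shell wrapper into <stateDir>/bin that delegates to the built
// CLI via the exact node binary, then return a PATH string with that
// bin directory prepended for the agent process.
function installSuperdocOnPath(stateDir, cliEntry) {
  const binDir = path.join(stateDir, 'bin');
  fs.mkdirSync(binDir, { recursive: true });
  const wrapper = path.join(binDir, 'superdoc');
  fs.writeFileSync(
    wrapper,
    `#!/bin/sh\nexec "${process.execPath}" "${cliEntry}" "$@"\n`,
    { mode: 0o755 },
  );
  return `${binDir}${path.delimiter}${process.env.PATH ?? ''}`;
}
```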
- benchmarkPath assertion now FAILS when superdoc-skill or superdoc-cli conditions don't use SuperDoc (it always passed before)
- Add AGENTS.md + prompt hint for the superdoc-cli condition telling agents the CLI exists on PATH, with common commands
- Split MCP and CLI AGENTS.md templates in both providers
- Verified: all 3 Codex conditions use the correct path (baseline=raw, superdoc-skill=MCP, superdoc-cli=CLI)
Add a _summary line at the top of provider JSON output showing path | steps | latency | tokens at a glance. Promptfoo renders the start of the output in each table cell, so this gives immediate visibility without clicking into the detail view.
- Add derivedMetrics (avg_latency, avg_steps, avg_tokens, superdoc_usage_pct), computed per provider after evaluation
- Set weight: 0 on steps/latency/tokens assertions so they report values without affecting the pass/fail score
- Only correctness, collateral, and path drive pass/fail
- Click "Show Charts" in the Promptfoo UI for visual comparison
Add the Anthropic DOCX skill (from the anthropics/skills repo) as the vendor condition. When vendorSkill: true, the skill is installed as AGENTS.md in the working directory, teaching agents to use unzip/XML for reading and docx-js for creation. This completes the benchmark matrix:
- baseline: no skill, agent figures it out
- vendor: Anthropic's DOCX skill (unzip + docx-js)
- superdoc-skill: SuperDoc MCP server
- superdoc-cli: SuperDoc CLI on PATH
- choice: all available, agent picks
Claude Agent SDK reads CLAUDE.md (not AGENTS.md) for project context. Write vendor skill and CLI instructions as CLAUDE.md in the stateDir, and enable settingSources: ['project'] so the SDK loads it.
This reverts commit 85108ac.
Creates 4 DOCX fixtures designed to be fragile under raw XML edits:
- consulting-agreement.docx: bold defined terms, italic refs, 6 heading sections, $250k indemnification cap, net 45 payment terms
- pricing-proposal.docx: 4-row pricing table with shaded header, right-aligned prices, US Letter page size
- contract-redlines.docx: 3 tracked insertions + 2 deletions by Jane Editor, 2 reviewer comments by Bob Reviewer
- policy-manual.docx: 3-level nested numbered list (1./1.1/a)), header/footer with page numbers, page breaks between sections

Adds a create-v2-fixtures.mjs generator script and a docx@9.6.1 dev dependency.
New capabilities:
- docx-fidelity.mjs: OOXML structural checker (formatting, styles, numbering, tracked changes, comments, tables, XML diff)
- benchmarkFidelity assertion: runs fidelity checks on the output DOCX
- benchmarkDiff assertion: measures XML change ratio (surgical vs rewrite)

New fixtures (all synthetic names):
- consulting-agreement.docx: bold terms, italic refs, numbered sections
- pricing-proposal.docx: table with alignment and styled header
- contract-redlines.docx: existing tracked changes and comments
- policy-manual.docx: 3-level nested numbered lists

6 new fidelity tasks (CEO examples):
- Mixed formatting replace (bold preservation)
- Table cell edit (structure preservation)
- Tracked changes edit (annotation survival)
- Nested list insert (numbering continuation)
- Multi-step workflow (heading style check)
- Edit with existing annotations (comment survival)

92 tests total: 69 in checks.cjs + 23 in docx-fidelity
1. outputFile pointed to the unedited fixture copy instead of localDocPath (the file the agent actually edits in stateDir)
2. Comment IDs in fidelity checks used "0","1" but the fixture has "1","2"
3. Table cell text used exact match instead of includes
4. Remove overly strict paragraphStyle check on the multi-step task
Category A (structural creation, SuperDoc proven):
- Create heading with Heading1 style
- Create table with borders and data rows

Category B (formatting, SuperDoc proven):
- Make specific text bold
- Replace text preserving formatting

Category C (complex edits, track improvement):
- Tracked change replacement
- Add comment to clause
Merge branch 'andrii/sd-2451-refactor-mcp-set-up' into feat/level3-agent-benchmark
Remove settingSources which loaded ALL user MCP servers (43 Linear, 5 Excalidraw, Gmail, etc.) adding ~4000 tokens per turn. Pass CLAUDE.md content as systemPrompt instead. Result: 30% cost reduction ($0.97 -> $0.68 for NDA creation).
…r clarity

Changed provider labels in promptfooconfig.benchmark.yaml to better reflect their functionality: renamed 'CC-vendor' to 'CC-with-docx-skill', 'CC-superdoc-skill' to 'CC-superdoc-mcp', and others for consistency.
💡 Codex Review
Line 82 in 663709b
checks.cjs eagerly loads packages/sdk/tools/tools.openai.json at module import time and throws if it is missing, but that file is generated and may not exist in a fresh clone/CI job. This makes every eval assertion fail to load (including checks unrelated to formatting), breaking evals test execution before any test logic runs. Load this schema lazily inside correctFormatArgs (or provide a fallback) so non-format checks can still run.
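A sketch of the lazy loading the review suggests; the schema path is the one named above, while the function and caching names are ours:

```javascript
import fs from 'node:fs';
import path from 'node:path';

let toolSchema; // undefined = not yet attempted, null = file absent

// Load the generated tool schema only when a format check first needs
// it, so a fresh clone without the build artifact can still run every
// other assertion.
function loadToolSchema() {
  if (toolSchema === undefined) {
    const schemaPath = path.resolve('packages/sdk/tools/tools.openai.json');
    toolSchema = fs.existsSync(schemaPath)
      ? JSON.parse(fs.readFileSync(schemaPath, 'utf8'))
      : null;
  }
  return toolSchema;
}

// Hypothetical usage inside correctFormatArgs:
// const schema = loadToolSchema();
// if (!schema) return { pass: true, reason: 'schema unavailable, check skipped' };
```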
```js
codexOpts.config = {
  mcp_servers: {
    superdoc: {
      command: process.execPath, // Use exact node binary, not bare 'node'
      args: [MCP_WRAPPER_PATH, process.execPath, MCP_SERVER_PATH],
    },
  },
};
```
Preserve MCP auto-approve when configuring Codex MCP server
When superdocMcp is enabled, codexOpts.config is reassigned to a new object containing only mcp_servers, which drops the earlier mcp_auto_approve setting. In unattended benchmark runs this can force interactive approval for MCP tool calls, causing superdoc-skill conditions to stall or fail instead of executing end-to-end. Merge mcp_servers into the existing config instead of replacing it.
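A sketch of the merge the review asks for, spreading the existing config so `mcp_auto_approve` (and any other prior keys) survive; the two paths are stand-ins, not the real build locations:

```javascript
// Stand-in values; in the provider these come from the surrounding module.
const MCP_WRAPPER_PATH = '/tmp/mcp-stdio-wrapper.mjs';
const MCP_SERVER_PATH = '/tmp/superdoc-mcp-server.mjs';
const codexOpts = { config: { mcp_auto_approve: true } };

codexOpts.config = {
  ...codexOpts.config, // keep mcp_auto_approve and other prior keys
  mcp_servers: {
    ...(codexOpts.config.mcp_servers ?? {}),
    superdoc: {
      command: process.execPath, // exact node binary, not bare 'node'
      args: [MCP_WRAPPER_PATH, process.execPath, MCP_SERVER_PATH],
    },
  },
};
```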
```js
maxTurns: this.config.maxTurns || 20,
permissionMode: 'bypassPermissions',
allowDangerouslySkipPermissions: true,
settingSources: [], // SDK isolation mode: don't load user MCP servers (Linear, Excalidraw, etc.)
```
Honor useClaudeSettings when building Claude query options
The provider documents a useClaudeSettings mode, but queryOptions always sets settingSources: [], so local Claude settings are never loaded even when the flag is true. This makes the advertised local-settings condition nonfunctional and can skew benchmark comparisons because the runtime behavior cannot match the configured scenario.
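One way to honor the flag, sketched as a helper; the function name and config shape are assumptions, while the source names match the Claude Agent SDK's settingSources values:

```javascript
// Derive settingSources from the provider's documented useClaudeSettings
// flag instead of hard-coding isolation mode.
function resolveSettingSources(config) {
  // true  -> inherit local Claude Code config (MCP servers, CLAUDE.md)
  // false -> isolated benchmark run, no user settings loaded
  return config.useClaudeSettings ? ['user', 'project', 'local'] : [];
}
```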
Summary

evals/ Promptfoo infrastructure

What's new

Providers:
- claude-code-agent.mjs: Claude Agent SDK (@anthropic-ai/claude-agent-sdk) with query()
- codex-agent.mjs: OpenAI Codex SDK (@openai/codex-sdk) with Codex.startThread().run()

Tests:

Shared utilities:
- extractDocxText(): lightweight DOCX text extractor (SuperDoc CLI, falling back to unzip+XML)
- benchmarkMetrics assertion: captures steps, cost, duration, tokens, path used

Report:
- benchmark-report.mjs: generates summary table, path usage, per-task breakdown, recommendation

Config:
- promptfooconfig.benchmark.yaml: 10-condition matrix
- Scripts: eval:benchmark, eval:benchmark:claude, eval:benchmark:codex, eval:benchmark:report

Smoke test

Codex baseline verified end-to-end:
- Read: extract all headings on report-with-formatting.docx → PASS

Test plan
- extractDocxText