# OpenFox

Local-LLM-first agentic coding assistant: an autonomous coding agent for local LLMs with contract-driven execution.
- **Session** — Criteria tracking, tool calls, and streaming responses
- **Providers** — Local LLM backend configuration
- **Workflows** — Contract-driven execution pipeline
## Installation

```bash
npm i -g openfox
```

On first run, OpenFox automatically detects your local LLM backend (vLLM, sglang, ollama, llamacpp) and configures itself.
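One plausible way to picture the auto-detection step is probing the well-known default ports of each supported backend. The sketch below is an illustrative assumption, not OpenFox's actual implementation; the port numbers are the common upstream defaults for each server.

```typescript
// Illustrative sketch only: how a CLI might guess which local LLM
// backend is running by checking well-known default ports.
// These are the upstream defaults, not OpenFox configuration.
const DEFAULT_PORTS: Record<string, number> = {
  vllm: 8000,     // vLLM OpenAI-compatible server default
  sglang: 30000,  // sglang server default
  ollama: 11434,  // ollama default
  llamacpp: 8080, // llama.cpp server default
};

// Given the set of ports that answered a probe, return the first
// backend whose default port is open (detection order is arbitrary).
function pickBackend(openPorts: Set<number>): string | null {
  for (const [backend, port] of Object.entries(DEFAULT_PORTS)) {
    if (openPorts.has(port)) return backend;
  }
  return null;
}
```

A real detector would also have to handle non-default ports, which is presumably why the `provider` subcommands exist for manual configuration.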
## Usage

```bash
# Start server for current project
openfox

# Start on custom port
openfox --port 8080

# Start without opening browser
openfox --no-browser

# Show current configuration
openfox config

# Manage LLM providers
openfox provider add     # Add new provider
openfox provider list    # List configured providers
openfox provider use     # Switch active provider
openfox provider remove  # Remove provider
```

### Options

| Option | Description | Default |
|---|---|---|
| `-p, --port <number>` | Specify server port | `10369` |
| `--no-browser` | Don't open browser on start | Opens browser |
| `-h, --help` | Show help message | - |
| `-v, --version` | Show version number | - |
## Requirements

- Node.js >= 24.0.0
- Local LLM backend with an OpenAI-compatible API:
  - vLLM
  - sglang
  - ollama
  - llamacpp
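"OpenAI-compatible API" here means the backend exposes the standard `/v1/chat/completions` endpoint and accepts the usual request shape, which all four listed backends do. The sketch below builds such a payload; the model name and base URL are placeholders, since each backend names models differently.

```typescript
// Minimal request body for an OpenAI-compatible /v1/chat/completions
// endpoint, as exposed by vLLM, sglang, ollama, and llama.cpp.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(model: string, prompt: string) {
  const messages: ChatMessage[] = [{ role: "user", content: prompt }];
  return {
    model,    // placeholder: backend-specific model identifier
    messages,
    stream: true, // agents typically stream tokens for responsiveness
  };
}

const body = buildChatRequest("MODEL_NAME", "Refactor this function.");
// POST JSON.stringify(body) to e.g. http://localhost:11434/v1/chat/completions
```

Because the request shape is identical across backends, switching providers only changes the base URL and model name, not the client code.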
## Features

- **Plan → Builder Workflow**: Interactive task breakdown followed by autonomous implementation
- **Contract-Driven Execution**: Acceptance criteria serve as an immutable contract
- **Iterative Verification**: The agent loops until all criteria pass
- **LSP Integration**: Immediate feedback on code validity
- **Real-Time Metrics**: Prefill time, generation speed, context usage
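The contract-driven loop described above can be sketched as: treat each acceptance criterion as a predicate, and keep invoking the agent until every predicate passes or an iteration budget runs out. All names below (`Criterion`, `runAgentStep`, `runUntilCriteriaPass`) are hypothetical illustrations, not OpenFox's API.

```typescript
// Conceptual sketch of contract-driven execution: acceptance criteria
// are immutable predicates; the agent iterates until all of them pass.
type Criterion = { description: string; check: () => boolean };

function runUntilCriteriaPass(
  criteria: readonly Criterion[], // the immutable "contract"
  runAgentStep: () => void,       // one edit/verify cycle by the agent
  maxIterations = 10,
): boolean {
  for (let i = 0; i < maxIterations; i++) {
    const failing = criteria.filter((c) => !c.check());
    if (failing.length === 0) return true; // contract satisfied
    runAgentStep(); // let the agent attempt fixes for failing criteria
  }
  return false; // budget exhausted with criteria still failing
}
```

The key design point is that the criteria are fixed before the loop starts, so the agent can only satisfy the contract, never renegotiate it.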
## Screenshots

- **Homepage** — Project overview and session history
- **Project Selected** — Active session with context stats
- **Stats** — Prefill time, generation speed, token usage
- **Terminal** — Integrated terminal for running commands
- **Notifications** — Event log and system messages
- **Agents** — Sub-agent management and execution
- **General Instructions** — Global custom instructions
- **Vision Fallback** — Image processing configuration

## License

MIT