Exploring the human-agent collaboration space — what it means to build with AI agents that have genuine context, memory, and judgment — rather than just prompt engineering around the edges.
I treat AI systems as collaborators, not tools. That means iterating on direction together, expecting the agent to develop genuine context rather than receiving instructions fresh each session, and valuing closed loops over polished presentations. A change isn't done when it's discussed — it's done when it's verified, filed, and actually working tomorrow.
This extends to how I evaluate tools and interfaces: the best UI is one that disappears. What matters isn't the elegance of the interaction itself, but whether the task got completed and the system is in a state you'd trust to continue without you watching. I default to async, background-capable workflows, and I think most UI polish is compensating for a missing abstraction.
Good defaults > configurability. Explicit is better than smart. Verification is not optional.
MCP ecosystem, CLI productivity tools, AI agent orchestration.
mcpmate — MCP management center for production AI agent deployments. Proxy + Bridge architecture with tool aggregation, per-client access control, and LLM cost observability. Contributed to the MCP Rust SDK.
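The proxy pattern described here typically means clients register a single aggregating endpoint instead of many individual servers. A minimal sketch of what that could look like in a standard MCP client config (the `mcpmate` command name, subcommand, and flag are hypothetical; only the `mcpServers` JSON shape is the common convention):

```json
{
  "mcpServers": {
    "mcpmate-proxy": {
      "command": "mcpmate",
      "args": ["proxy", "--profile", "default"]
    }
  }
}
```

The client sees one server; the proxy fans requests out to the underlying MCP servers, applying per-client access rules and recording usage along the way.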
CodMate — macOS app for managing CLI AI sessions (Codex, Claude Code, Gemini CLI). Three-column UI with session search, project/task organization, and one-click resume/new workflows. Archived in favor of CLI-first exploration.
MCPMate @ Cursor Meetup Chengdu (July 2025) — cursor.meetup.2025.umate.ai
Talk sharing practice and reflections from building the MCPMate project with AI-driven development.



