Token-bound PAT scope caching to prevent stale tool filtering decisions #2205
Replies: 2 comments
This feels like exactly the right fix direction. If scope-filter decisions can outlive the token they were derived from, then the system can look healthy while silently making authorization decisions against stale context. That’s one of the hardest classes of policy failure to debug. Binding the cached scope context to token identity seems like the minimum safe contract. I’d probably go one step further and make the cache key conceptually include:
And then fail closed on any mismatch. A few principles here seem especially important:
I also think this connects back to a broader MCP concern: clients benefit a lot when discovery-time hints are available, but runtime authorization must still be computed against the current token and current execution context. So overall, this token-bound approach seems like the right operational model. Least-privilege is much easier to reason about when cached authorization state is explicitly token-scoped and a mismatch means "recompute", not "best-effort reuse".
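The fail-closed contract described above can be sketched in Go. This is an illustrative sketch only, not code from the project: the type and field names (`scopeCacheKey`, `TokenID`, `Host`, `lookupScopes`) are hypothetical, and the key composition is an assumption about what "token identity plus resolution context" might include.

```go
package main

import "fmt"

// scopeCacheKey binds cached scope decisions to the token identity that
// produced them. All names here are hypothetical illustrations.
type scopeCacheKey struct {
	TokenID string // stable identifier for the token (e.g. a fingerprint)
	Host    string // API host the scopes were resolved against
}

type scopeEntry struct {
	key    scopeCacheKey
	scopes []string
}

// lookupScopes fails closed: any key mismatch returns ok=false,
// signalling "recompute", never "best-effort reuse".
func lookupScopes(cached *scopeEntry, current scopeCacheKey) ([]string, bool) {
	if cached == nil || cached.key != current {
		return nil, false
	}
	return cached.scopes, true
}

func main() {
	entry := &scopeEntry{
		key:    scopeCacheKey{TokenID: "tok-A", Host: "api.github.com"},
		scopes: []string{"repo", "read:org"},
	}
	// Same token, same host: cache hit.
	_, ok := lookupScopes(entry, scopeCacheKey{TokenID: "tok-A", Host: "api.github.com"})
	fmt.Println(ok) // true
	// Rotated token: fail closed, force recompute.
	_, ok = lookupScopes(entry, scopeCacheKey{TokenID: "tok-B", Host: "api.github.com"})
	fmt.Println(ok) // false
}
```

Because `scopeCacheKey` contains only comparable fields, a plain `!=` covers every dimension of the key at once, so adding a new binding dimension later cannot silently widen cache reuse.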
This is not the case. We don't cache. The STDIO server checks at startup and doesn't re-check; adding checks on every call wouldn't make sense there, and it has no impact on the auth boundary. The HTTP server does check every request, and also supports up-scoping dynamically.
Problem observed
Scope-filter decisions can be wrong when a request context contains cached scopes that were computed for a different token. In that case, tool visibility may reflect old permissions after a token rotation or swap, which is a quiet policy failure: behavior appears successful while the authorization state is stale.
Why it matters operationally
This server is used as a permission-scoped control surface over GitHub actions. If scope filtering reuses stale context, downstream automation can act on an inaccurate capability set. That undermines least-privilege guarantees and makes incidents hard to classify (policy vs product). Token churn is common in real usage, so cache correctness has to be explicit and fail closed.
Minimal repro
Fix approach
I introduced token-bound scope context helpers and switched filter/middleware call sites to token-bound reads:
The intent is straightforward: cached scopes are only valid for the token that produced them.
Validation evidence
go test ./pkg/context ./pkg/http/middleware ./pkg/http passed.
Open follow-up question for maintainers
Would you like token fingerprints (rather than raw token string equality) used for context binding to make cross-layer logging/debug safer by default?
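To make the fingerprint suggestion concrete, here is one common shape: a short SHA-256 prefix derived from the raw token, so cache binding and debug logs never carry the secret itself. This is a sketch of the general technique, not code from this change; the function name and prefix length are assumptions.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// tokenFingerprint derives a stable, non-reversible identifier from a
// raw token. Comparing fingerprints instead of raw strings gives the
// same binding semantics while keeping logs safe by default.
func tokenFingerprint(token string) string {
	sum := sha256.Sum256([]byte(token))
	// A short prefix is enough to correlate log lines and cache keys;
	// 12 hex chars is an arbitrary illustrative choice.
	return hex.EncodeToString(sum[:])[:12]
}

func main() {
	// Distinct tokens yield distinct fingerprints.
	fmt.Println(tokenFingerprint("ghp_example") != tokenFingerprint("ghp_other")) // true
	// The raw token never appears in the identifier.
	fmt.Println(len(tokenFingerprint("ghp_example"))) // 12
}
```

One trade-off to weigh: raw string equality is trivially correct, while fingerprints add a hashing step but let the same identifier flow through cache keys, metrics, and logs without ever exposing the token.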
Inspired by research context: CAISI publishes independent, reproducible AI agent governance research: https://caisi.dev