feat(binding-mcp): implement mcp client binding #1711
Conversation
| private void doBeginMcp( |
| private void doBeginMcp( | |
| private void doBegin( |
Consistency with coding patterns.
| newStream = new McpApplication( | ||
| sender, | ||
| originId, | ||
| routedId, | ||
| initialId, | ||
| route.id, | ||
| affinity, | ||
| authorization)::onAppMessage; |
Suggest having a different stream implementation per kind, but with inheritance to handle the common parts of the code.
For example, McpLifecycleSession, McpToolsListRequest, McpToolsCallRequest etc.
There may be multiple levels of inheritance, such as McpStream (root supertype), then McpRequestStream as super type for all Mcp*Request streams.
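The suggested hierarchy might be sketched roughly like this (class and hook names follow the review; the bodies are illustrative placeholders, not the binding's real logic):

```java
// Sketch of the suggested per-kind stream hierarchy: shared bookkeeping in a
// root supertype, a middle layer for the request kinds, and one concrete
// class per kind. Method bodies are placeholders.
abstract class McpStream
{
    // common flow-control state shared by every kind
    long initialSeq;
    long initialAck;

    final void onAppEnd()
    {
        // shared bookkeeping first, then the kind-specific hook
        initialSeq = initialAck;
        onAppEndImpl();
    }

    abstract void onAppEndImpl();
}

abstract class McpRequestStream extends McpStream
{
    // common to all Mcp*Request kinds, e.g. request id bookkeeping
    int requestId;
}

final class McpLifecycleSession extends McpStream
{
    @Override
    void onAppEndImpl() { /* terminate session */ }
}

final class McpToolsListRequest extends McpRequestStream
{
    @Override
    void onAppEndImpl() { /* send buffered tools/list body */ }
}

final class McpToolsCallRequest extends McpRequestStream
{
    @Override
    void onAppEndImpl() { /* send buffered tools/call body */ }
}

public class HierarchySketch
{
    public static void main(String[] args)
    {
        McpStream s = new McpToolsListRequest();
        s.onAppEnd(); // shared path runs, then the subclass hook
        System.out.println(s instanceof McpRequestStream); // true
    }
}
```

The template-method split keeps flow-control and state transitions in one place while each kind only supplies its own hook.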
| .extension(httpBeginEx.buffer(), httpBeginEx.offset(), httpBeginEx.sizeof()) | ||
| .build(); | ||
| net = streamFactory.newStream(netBegin.typeId(), netBegin.buffer(), |
This does not follow the standard of separate implementation classes for each side of the client binding; it needs an HttpStream.
| final byte[] bodyBytes = jsonRpcBody.getBytes(StandardCharsets.UTF_8); | ||
| final UnsafeBuffer bodyBuffer = new UnsafeBuffer(bodyBytes); |
Does not follow guidance that we should avoid allocations on the data path.
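The general pattern behind this guidance, encoding into a scratch buffer allocated once rather than materializing a fresh byte[] plus wrapper per message, might look like the following sketch (a plain byte[] stands in for the engine's MutableDirectBuffer; names are illustrative):

```java
// Sketch: avoid per-message allocation on the data path by reusing one
// preallocated scratch buffer instead of String.getBytes + new UnsafeBuffer.
public class ZeroAllocEncode
{
    // allocated once, reused for every request on the data path
    static final byte[] SCRATCH = new byte[64 * 1024];

    // encodes ASCII JSON-RPC text into the scratch buffer, returns length
    static int encodeBody(String jsonRpcBody, byte[] scratch, int offset)
    {
        int i = offset;
        for (int c = 0; c < jsonRpcBody.length(); c++)
        {
            scratch[i++] = (byte) jsonRpcBody.charAt(c);
        }
        return i - offset;
    }

    public static void main(String[] args)
    {
        int length = encodeBody("{\"jsonrpc\":\"2.0\"}", SCRATCH, 0);
        System.out.println(length); // prints 17
    }
}
```

This mirrors what `putStringWithoutLengthAscii` on a long-lived codec buffer achieves: steady-state zero garbage per frame.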
| if (responseSlot != NO_SLOT) | ||
| { | ||
| final DirectBuffer buf = bufferPool.buffer(responseSlot); | ||
| final int resultStart = findResultStart(buf, responseOffset); |
Use decoder approach similar to McpServerFactory instead.
Also use encodeNet approach similar to McpServerFactory, to simplify flow control handling.
| private static final int PHASE_INITIALIZE = 0; | ||
| private static final int PHASE_NOTIFY = 1; | ||
| private final String upstreamSessionId; |
Let's ignore the upstreamSessionId for now, so we just have sessionId.
| { | ||
| super(sender, originId, routedId, initialId, resolvedId, affinity, authorization, sessionId); | ||
| super(sender, originId, routedId, initialId, resolvedId, affinity, authorization, ""); | ||
| this.upstreamSessionId = sessionId != null ? sessionId : ""; |
Please stop defaulting strings to ""; handle the null case if it is actually permitted, and treat a missing value as an error when the spec requires it.
| final BeginFW netBegin = beginRW.wrap(writeBuffer, 0, writeBuffer.capacity()) | ||
| .originId(originId) | ||
| .routedId(resolvedId) | ||
| .streamId(http.netInitialId) | ||
| .sequence(0) | ||
| .acknowledge(0) | ||
| .maximum(0) | ||
| .traceId(traceId) | ||
| .authorization(authorization) | ||
| .affinity(affinity) | ||
| .extension(httpBeginEx.buffer(), httpBeginEx.offset(), httpBeginEx.sizeof()) | ||
| .build(); | ||
| http.net = streamFactory.newStream(netBegin.typeId(), netBegin.buffer(), | ||
| netBegin.offset(), netBegin.sizeof(), http::onNetMessage); |
See how this is abstracted into a newStream method in McpServerFactory.
| final int bodyLength = writeRequestBody(codecBuffer, 0); | ||
| doData(http.net, originId, resolvedId, http.netInitialId, | ||
| traceId, authorization, DATA_FLAGS_COMPLETE, 0, bodyLength, | ||
| codecBuffer, 0, bodyLength); | ||
| doEnd(http.net, originId, resolvedId, http.netInitialId, traceId, authorization); |
This is too proactive, since we have no window yet. We need encodeNet approach to solve this part.
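The encodeNet idea referenced here can be sketched in miniature: bytes written before credit exists are parked in an encode slot, flushed from onNetWindow, and END is deferred until the slot drains. All names and the toy framing below are illustrative, not the factory's actual code:

```java
import java.util.Arrays;

// Sketch of window-gated encoding: data sent before the peer grants credit
// is buffered, and END is only emitted once the pending slot is empty.
public class EncodeNetSketch
{
    byte[] encodeSlot = new byte[0]; // pending bytes awaiting credit
    int credit;                      // window granted by the peer
    boolean endPending;              // doNetEnd called before drain
    final StringBuilder wire = new StringBuilder(); // frames "sent"

    void encodeNet(byte[] payload)
    {
        // append to any pending bytes, then send as much as credit allows
        byte[] pending = new byte[encodeSlot.length + payload.length];
        System.arraycopy(encodeSlot, 0, pending, 0, encodeSlot.length);
        System.arraycopy(payload, 0, pending, encodeSlot.length, payload.length);

        int length = Math.min(credit, pending.length);
        if (length > 0)
        {
            wire.append("DATA[").append(length).append(']');
            credit -= length;
        }
        encodeSlot = Arrays.copyOfRange(pending, length, pending.length);

        if (endPending && encodeSlot.length == 0)
        {
            wire.append("END");
        }
    }

    void doNetEnd()
    {
        endPending = true;
        encodeNet(new byte[0]); // emits END now only if the slot is empty
    }

    void onNetWindow(int newCredit)
    {
        credit += newCredit;
        encodeNet(new byte[0]); // drain pending bytes now that credit exists
    }

    public static void main(String[] args)
    {
        EncodeNetSketch s = new EncodeNetSketch();
        s.encodeNet(new byte[10]); // no credit yet: fully buffered
        s.doNetEnd();              // END deferred until slot drains
        s.onNetWindow(10);         // credit arrives: DATA[10] then END
        System.out.println(s.wire); // DATA[10]END
    }
}
```

The key property is exactly the reviewer's point: neither DATA nor END ever races ahead of the advertised window.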
…#1711)
Per jfallows review:
- Replace McpStream subclasses with HttpStream subclasses: HttpInitializeRequest, HttpNotifyInitialized, and one Http*Stream per request kind (tools/list, tools/call, prompts/list, prompts/get, resources/list, resources/read)
- Each HttpStream subclass owns its HTTP headers (buildHttpBeginEx) and request body encoding (writeRequestBody); onNetBeginImpl constructs the kind-specific McpBeginExFW and calls mcp.doAppBegin()
- Add encodeNet approach: buffer request body in encodeSlot after doNetBegin, flush to HTTP on onNetWindow so data is only sent when window is available
- Replace streamFactory.newStream inline with factory-level newStream helper (matching McpServerFactory pattern)
- Remove upstreamSessionId field; lifecycle uses sessionId directly
- Remove empty-string defaults; null sessionId handled explicitly per case
https://claude.ai/code/session_0174raBeXFTgt98bp4DTyRDm
- Adds McpClientFactory (client kind) to the binding-mcp module; accepts inbound mcp streams and produces outbound http streams carrying HttpBeginExFW headers taken from the route with.headers block.
- Adds McpWithConfig / McpWithConfigBuilder public config classes and McpWithConfigAdapter (registered via WithConfigAdapterSpi) to deserialize the with: block in zilla.yaml.
- Registers the CLIENT kind alongside SERVER in McpBindingContext using an EnumMap.
- Adds client.yaml config spec, extends mcp.schema.patch.json for kind: client and the with.headers block.
- Adds MCP_CLIENT_NAME / MCP_CLIENT_VERSION configuration properties and McpClientIT integration test.
Closes #1670
Force-pushed d7f655c to 4f28b67
- Set MCP_CLIENT_NAME to 'test' to match clientInfo.name in the network server.rpt scripts that were taken from develop.
- Restore the toolsList reply BEGIN read in app/tools.list.canceled/client.rpt that was missing from develop's server-only version.
…t.canceled/server.rpt
Develop's version of the server.rpt was the server-only variant that didn't include the reply BEGIN with toolsList extension. Restore the feature branch version so the peer-to-peer ApplicationIT test (shouldListToolsThenAbort) passes against the complementary client.rpt.
…ling to HttpStream
Initial targeted refactor of McpClientFactory to align with McpServerFactory design patterns:
- Add replySeq/replyAck/replyMax fields to HttpStream for proper flow-control tracking on the network reply direction.
- Consolidate per-subclass doWindow(net, ...) calls in onNetBeginImpl into a single base-class flushNetReplyWindow() called from onNetBegin.
- Wire onNetMessage dispatcher for FlushFW (previously silently dropped) and propagate replySeq advancement from DATA/FLUSH reserved bytes.
- Overload doWindow(...) with an 11-arg variant that takes explicit sequence/acknowledge; keep the 9-arg form as a delegating shim for existing callers.
Remaining work (deferred pending evaluation):
- Proper McpState utility usage instead of boolean appClosed.
- Per-stream initialSeq/Ack/Max/Bud/Pad + replySeq/Ack/Max/Bud/Pad fields on McpStream with correct values in doApp* frame builders (currently hardcoded zeros).
- Decoder strategy pattern for any incremental parsing needs.
- Abort of the original HTTP stream in doNotifyCancelled to avoid orphaning it after cancellation.
19 McpServerIT + 10 of 11 McpClientIT + 25 ApplicationIT/NetworkIT pass. McpClientIT#shouldListToolsThenCancel still fails with the same pattern as before refactor (stalls at k3po 'write flush' after 200 reply BEGIN).
…doNetBegin
Rather than waiting for the peer's reply BEGIN before advertising receive window capacity, send WINDOW on netReplyId immediately after opening the stream. This avoids a potential deadlock where k3po-like peers hold reply BEGIN emission until they see credit on the reply direction.
Item 4 investigation (abort orphan stream on cancellation) showed the failing McpClientIT#shouldListToolsThenCancel root cause is upstream of the cancellation path: HttpToolsListStream.onNetBeginImpl is never reached because the test's net-side k3po script never emits the 200 BEGIN frame. The cancellation spawn path itself is proven correct by debug tracing of the passing normal-case test.
- Add int state field on McpStream and HttpStream.
- Replace boolean appClosed with appClosedEmpty (narrower semantic: closed without body) + int state tracking via McpState bitmasks.
- Wire openingInitial/openedInitial on app BEGIN receipt and closedInitial on app END.
- Wire openedReply on net BEGIN receipt, openedInitial on net WINDOW, closedInitial/closedReply on abort/reset/end paths.
- Guard doNetEnd/doNetAbort/doNetReset with McpState.initialClosed and McpState.replyClosed so repeat close calls are safe.
- Guard doAppEnd/doAppAbort/doAppReset similarly on the McpStream side.
10/11 McpClientIT and 19/19 McpServerIT still pass.
Replace hardcoded sequence/acknowledge/maximum zeros in doApp* frame builders with tracked per-direction flow-control fields, following CLAUDE.md convention:
- initialSeq/initialAck/initialMax for the app-to-us direction; populated from the app's BEGIN/DATA/END frames.
- replySeq/replyAck/replyMax/replyBud/replyPad for the us-to-app direction; populated from the app's WINDOW frames.
doAppBegin/doAppData/doAppEnd/doAppAbort now emit frames with the tracked values. doAppData advances replySeq by reserved (length + replyPad) after sending. onAppData emits a WINDOW to advance initialAck so upstream can continue.
10/11 McpClientIT and 19/19 McpServerIT still pass; same failure profile as before refactor.
| final String sessionId = mcpBeginEx.lifecycle().sessionId().asString(); | ||
| final McpStream mcp = new McpStream( | ||
| sender, originId, routedId, initialId, | ||
| route.id, affinity, authorization, sessionId); | ||
| mcp.isLifecycle = true; | ||
| mcp.http = new HttpInitializeRequest( | ||
| supplyInitialId.applyAsLong(route.id), mcp); | ||
| sessions.put(sessionId, mcp); | ||
| newStream = mcp::onAppMessage; |
| final String sessionId = mcpBeginEx.lifecycle().sessionId().asString(); | |
| final McpStream mcp = new McpStream( | |
| sender, originId, routedId, initialId, | |
| route.id, affinity, authorization, sessionId); | |
| mcp.isLifecycle = true; | |
| mcp.http = new HttpInitializeRequest( | |
| supplyInitialId.applyAsLong(route.id), mcp); | |
| sessions.put(sessionId, mcp); | |
| newStream = mcp::onAppMessage; | |
| final String sessionId = mcpBeginEx.lifecycle().sessionId().asString(); | |
| newStream = new McpLifecycleStream( | |
| sender, | |
| originId, | |
| routedId, | |
| initialId, | |
| route.id, | |
| sessionId)::onAppMessage; |
Let's tidy this up with a more specific Mcp subclass and move things into the constructor instead of this switch statement.
| final McpStream mcp = new McpStream( | ||
| sender, originId, routedId, initialId, | ||
| route.id, affinity, authorization, sessionId); | ||
| mcp.assignedRequestId = requestId++; | ||
| mcp.http = new HttpToolsListStream( | ||
| supplyInitialId.applyAsLong(route.id), mcp); | ||
| newStream = mcp::onAppMessage; |
| final McpStream mcp = new McpStream( | |
| sender, originId, routedId, initialId, | |
| route.id, affinity, authorization, sessionId); | |
| mcp.assignedRequestId = requestId++; | |
| mcp.http = new HttpToolsListStream( | |
| supplyInitialId.applyAsLong(route.id), mcp); | |
| newStream = mcp::onAppMessage; | |
| newStream = new McpToolsListStream( | |
| sender, | |
| originId, | |
| routedId, | |
| initialId, | |
| route.id, | |
| sessionId)::onAppMessage; |
Similar approach here. Please apply this approach to other cases as well.
If all the cases use constructors with the same signature, then we can create a functional interface and resolve via a pre-initialized map from McpBeginEx kind to a constructor lambda matching the functional interface, such as McpToolsListStream::new. That simplifies the large case statement while also making it straightforward to add more cases later by adding to the map.
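A minimal sketch of that map-based dispatch (the kind constants and the factory signature below are simplified stand-ins for the generated McpBeginExFW kinds and the real constructor signature, which carries stream ids, affinity, and authorization):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a functional interface for the shared constructor signature plus a
// kind -> constructor map, so adding a new case is a one-line map entry.
public class KindDispatchSketch
{
    interface McpStreamFactory
    {
        String newStream(String sessionId); // real signature carries ids too
    }

    static final int KIND_TOOLS_LIST = 2;
    static final int KIND_TOOLS_CALL = 3;

    static final Map<Integer, McpStreamFactory> FACTORIES = new HashMap<>();
    static
    {
        // with real constructor refs this would read McpToolsListStream::new
        FACTORIES.put(KIND_TOOLS_LIST, sid -> "McpToolsListStream(" + sid + ")");
        FACTORIES.put(KIND_TOOLS_CALL, sid -> "McpToolsCallStream(" + sid + ")");
    }

    static String newStream(int kind, String sessionId)
    {
        McpStreamFactory factory = FACTORIES.get(kind);
        return factory != null ? factory.newStream(sessionId) : null;
    }

    public static void main(String[] args)
    {
        System.out.println(newStream(KIND_TOOLS_LIST, "session-1"));
    }
}
```

An unknown kind falls through to null, which replaces the switch's empty default branch.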
| private HttpStream http; | ||
| boolean isLifecycle; | ||
| int state; | ||
| boolean appClosedEmpty; |
This should not be needed, state is sufficient.
This feedback was missed.
| private int appDataSlot = NO_SLOT; | ||
| private int appDataOffset; |
Why do we need a buffer slot at all for app data to net data?
Instead, the net side should have encodeSlot to handle this, where app just propagates to net side.
| private final String sessionId; | ||
| private HttpStream http; | ||
| boolean isLifecycle; |
No longer needed after we introduce McpStream subclasses.
| final long traceId = begin.traceId(); | ||
| responseSessionId = mcp.sessionId; | ||
| final OctetsFW ext = begin.extension(); | ||
| if (ext.sizeof() > 0) | ||
| { | ||
| final HttpBeginExFW httpBeginEx = httpBeginExRO.tryWrap( | ||
| ext.buffer(), ext.offset(), ext.limit()); | ||
| if (httpBeginEx != null) | ||
| { | ||
| final HttpHeaderFW sessionHeader = httpBeginEx.headers() | ||
| .matchFirst(h -> HTTP_HEADER_SESSION.equals(h.name().asString())); | ||
| if (sessionHeader != null) | ||
| { | ||
| responseSessionId = sessionHeader.value().asString(); | ||
| } | ||
| } | ||
| } |
Let's try without the overrides for these methods (inline instead, per subclass) and instead just have inherited state fields and abstract doNet... method so they can be called by McpStream implementation.
| void onNetEndImpl( | ||
| EndFW end) | ||
| { | ||
| final long traceId = end.traceId(); | ||
| final long netInitialId2 = supplyInitialId.applyAsLong(mcp.resolvedId); | ||
| final HttpNotifyInitialized notify = new HttpNotifyInitialized( | ||
| netInitialId2, mcp, responseSessionId); | ||
| mcp.http = notify; | ||
| notify.doNetBegin(traceId, mcp.authorization); |
Same as above regarding this override.
| int writeRequestBody( | ||
| MutableDirectBuffer buffer, | ||
| int offset) | ||
| { | ||
| return buffer.putStringWithoutLengthAscii(offset, | ||
| "{\"jsonrpc\":\"2.0\",\"method\":\"notifications/initialized\"}"); | ||
| } |
In McpServerFactory, we added doEncodedResponseBegin etc, so for McpClientFactory, that would be doEncodedRequestBegin etc, agree?
| read zilla:begin.ext ${mcp:matchBeginEx() | ||
| .typeId(zilla:id("mcp")) | ||
| .toolsList() | ||
| .sessionId("session-1") | ||
| .build() | ||
| .build()} | ||
Let's be consistent and add this to all app client scripts equivalently for the lifecycle stream.
Same for app peer server scripts.
- Delete McpWithConfig / McpWithConfigBuilder / McpWithConfigAdapter (unused, no IT scenario exercised it); drop WithConfigAdapterSpi provides entry from module-info; remove service registration file; remove with.headers block from the schema patch.
- Strip the 'with' field from McpRouteConfig.
- Reorder specs/pom.xml so binding-mcp.spec appears before binding-mqtt.spec.
- Rename HttpStream.netInitialId / netReplyId to initialId / replyId for consistency with McpServerFactory naming.
- Rename HttpStream.responseSlot / responseOffset to decodeSlot / decodeSlotOffset to match McpServerFactory.
- Move McpStream and HttpStream 'state' fields below the initial*/reply* flow-control fields and mark private.
No functional behavior change. 10/10 McpClientIT pass, 19/19 McpServerIT pass, 25/25 binding-mcp.spec ITs pass. shouldListToolsThenCancel continues to be flaky (k3po emission timing).
Per PR review: each Mcp*Stream should be a dedicated subclass with inheritance handling common parts, rather than a monolithic McpStream with an isLifecycle flag and kind-specific branches in onAppEnd.
- Convert McpStream to abstract with onAppBeginImpl(traceId) and onAppEndImpl(traceId) template methods called from onAppBegin / onAppEnd after the shared flow-control and state work.
- Introduce McpRequestStream (abstract) as super for all Mcp*Request kinds; centralises assignedRequestId assignment and the onAppEndImpl branch that either sends buffered body or marks appClosedEmpty.
- Introduce seven concrete subclasses (McpLifecycleStream, McpToolsListStream, McpToolsCallStream, McpPromptsListStream, McpPromptsGetStream, McpResourcesListStream, McpResourcesReadStream). Each constructor spawns its matching Http*Stream and the McpLifecycleStream constructor registers the session.
- Replace the verbose switch-case in newStream with kind-specific subclass instantiations.
- Drop the isLifecycle field; the Mcp subclass encodes the intent.
- HttpRequestStream and HttpNotifyCancelled now typed on McpRequestStream to access assignedRequestId without casts.
10/10 McpClientIT, 19/19 McpServerIT, 25/25 ApplicationIT+NetworkIT all pass.
…ttpStream subclass
Per PR review: avoid the buildHttpBeginEx abstraction; each HttpStream subclass now builds its HTTP BEGIN extension directly inside its doNetBegin override and inlines its request body encoding.
- Replace abstract HttpStream.buildHttpBeginEx + writeRequestBody with a single abstract doNetBegin(traceId, authorization).
- Keep a protected template doNetBegin(traceId, authorization, ext, bodyLength) on the base to handle the common newStream + preemptive reply WINDOW + encodeAndEnd steps.
- Each concrete HttpStream subclass (HttpInitializeRequest, HttpNotifyInitialized, HttpToolsListStream, HttpToolsCallStream, HttpPromptsListStream, HttpPromptsGetStream, HttpResourcesListStream, HttpResourcesReadStream) overrides doNetBegin to build its BEGIN ex inline, encode its body into codecBuffer inline, then delegate to the protected template.
- HttpRequestStream no longer overrides buildHttpBeginEx (removed with the abstraction).
…pers
Per PR review: use factory-level doBegin/doData/doEnd/doAbort/doReset instead of inlining frame builders inside McpStream methods.
- Add a factory doBegin(receiver, originId, routedId, streamId, sequence, acknowledge, maximum, traceId, authorization, affinity, extBuffer, extOffset, extLength) helper.
- Add (sequence, acknowledge, maximum)-taking overloads to the existing factory doData/doEnd/doAbort/doReset helpers; keep the legacy no-seq signatures as delegating shims for existing callers.
- McpStream.doAppBegin/Data/End/Abort/Reset now delegate to the factory helpers using their tracked flow-control fields, replacing the inline beginRW/dataRW/endRW/abortRW/resetRW builders.
- Invert the McpState guards to use an outer if-block instead of preamble-if with early return, matching McpServerFactory style.
… scripts
Per PR review: be consistent across app client/server scripts for each
request kind by explicitly reading/writing the reply BEGIN with matching
kind ext (matching the tools.list.canceled exemplar).
For each app/<kind>/client.rpt, insert after the request stream's
'write close':
read zilla:begin.ext ${mcp:matchBeginEx()
.typeId(zilla:id("mcp"))
.<kind>()
<params>
.build()
.build()}
For each app/<kind>/server.rpt, insert the matching write BEGIN ext:
- simple kinds (tools.list, prompts.list, resources.list): before
'read closed' on the request stream's accepted block.
- params kinds (tools.call, prompts.get, resources.read + 10k/100k
variants): after 'read closed' and before the response body write.
Applies to: tools.list, tools.call, tools.call.10k, tools.call.100k,
prompts.list, prompts.get, resources.list, resources.read,
resources.read.10k, resources.read.100k.
All 10/10 McpClientIT, 19/19 McpServerIT, 25/25 ApplicationIT+NetworkIT
remain green.
…ad of byte-scan
Per PR review: replace the crude findResultStart byte-scan with proper JSON navigation using StreamingJson from common-json, consistent with McpServerFactory's parsing approach.
- Add DirectBufferInputStreamEx inputRO flyweight field.
- Import StreamingJson + DirectBufferInputStreamEx + JsonParser.
- findResultStart now wraps the buffered response, creates a JsonParser, walks to depth-1 'result' key, and returns the byte offset of the value's START_OBJECT / START_ARRAY via JsonLocation.getStreamOffset(). This is robust against 'result' appearing in string values or nested objects.
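For illustration only, the depth-and-string tracking that makes this robust can be shown in plain Java (the actual commit uses StreamingJson / JsonParser from common-json rather than this hand-rolled scanner):

```java
// Plain-Java stand-in for the JsonParser walk: track string/escape state and
// nesting depth so a top-level "result" key is found even when the word
// "result" also appears inside string values or nested objects.
public class ResultOffsetSketch
{
    static int findResultStart(byte[] buf, int offset, int limit)
    {
        int depth = 0;
        boolean inString = false;
        boolean escaped = false;
        int keyStart = -1; // opening quote of the most recently seen string
        for (int i = offset; i < limit; i++)
        {
            char c = (char) buf[i];
            if (inString)
            {
                if (escaped) escaped = false;
                else if (c == '\\') escaped = true;
                else if (c == '"') inString = false;
                continue;
            }
            switch (c)
            {
            case '"': inString = true; keyStart = i; break;
            case '{': case '[': depth++; break;
            case '}': case ']': depth--; break;
            case ':':
                // a ':' outside strings follows an object key; match only at depth 1
                if (depth == 1 && keyStart >= 0 &&
                    new String(buf, keyStart, i - keyStart).startsWith("\"result\""))
                {
                    int j = i + 1;
                    while (j < limit && Character.isWhitespace(buf[j])) j++;
                    return j; // offset of the value's first byte
                }
                break;
            default: break;
            }
        }
        return -1;
    }

    public static void main(String[] args)
    {
        byte[] json = "{\"id\":1,\"result\":{\"tools\":[]}}".getBytes();
        int at = findResultStart(json, 0, json.length);
        System.out.println((char) json[at]); // '{'
    }
}
```

A streaming parser gives the same guarantee with far less fragile code, which is the point of the commit.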
…methods
Per PR review: mirror McpServerFactory's approach of asserting internal consistency of sequence/acknowledge/maximum on each inbound flow-control event.
- HttpStream.onNetBegin: assert acknowledge <= sequence; assert replyAck <= replySeq after initialization.
- HttpStream.onNetData / onNetFlush: assert acknowledge <= sequence, sequence >= replySeq, acknowledge <= replyAck before advancing, and assert replyAck <= replySeq after.
- HttpStream.onNetWindow: assert acknowledge <= sequence on inbound WINDOW acking our outbound initial direction.
- McpStream.onAppData: assert acknowledge <= sequence, sequence >= initialSeq, acknowledge <= initialAck before advancing; assert initialAck <= initialSeq after.
- McpStream.onAppWindow: assert acknowledge <= sequence, acknowledge >= replyAck, maximum >= replyMax before storing; assert replyAck <= replySeq after.
Per PR review: HttpStream methods should reference http-local fields, not mcp.originId / mcp.resolvedId / mcp.affinity / mcp.authorization. And authorization should be passed explicitly, not reached via mcp.
- Add protected originId, routedId, affinity fields on HttpStream, populated in the constructor from the mcp instance.
- Replace mcp.originId -> originId, mcp.resolvedId -> routedId, mcp.affinity -> affinity throughout HttpStream subclasses.
- flushNetReplyWindow now takes an explicit authorization parameter rather than reading mcp.authorization; the three callers (onNetBegin / onNetData / onNetFlush) pass it from the respective frame.
- HttpInitializeRequest.onNetEndImpl takes authorization from end.authorization() rather than mcp.authorization.
| doWindow(sender, originId, routedId, initialId, | ||
| initialSeq, initialAck, initialMax, | ||
| traceId, authorization, 0L, 0); |
This should be doAppWindow, following pattern from McpServerFactory.
| private HttpStream http; | ||
| boolean isLifecycle; | ||
| int state; | ||
| boolean appClosedEmpty; |
This feedback was missed.
| protected int appDataSlot = NO_SLOT; | ||
| protected int appDataOffset; |
No slot needed by app stream, only by net stream, for both encode and decode.
| { | ||
| this.originId = mcp.originId; | ||
| this.routedId = mcp.resolvedId; | ||
| this.initialId = initialId; |
| this.initialId = initialId; | |
| this.initialId = supplyInitialId.applyAsLong(routedId); |
| private int state; | ||
| HttpStream( | ||
| long initialId, |
Remove parameter, not needed by any HttpStream subclass either.
| cleanupEncodeSlot(); | ||
| encodeNet(traceId, authorization, encodeBuffer, 0, limit); | ||
| doNetEnd(traceId, authorization); |
This should be conditional on McpState.initialClosing(state) and moved into encodeNet, not done in onNetWindow.
| long traceId, | ||
| long authorization, | ||
| HttpBeginExFW httpBeginEx, | ||
| int bodyLength) |
Remove this parameter, not appropriate for doNetBegin.
| if (net != null) | ||
| { | ||
| replyMax = writeBuffer.capacity(); |
This is wrong, we can only receive up to decodeMax bytes, defined by bufferPool.slotCapacity(), same as McpServerFactory approach.
| doWindow(net, originId, routedId, replyId, | ||
| 0L, 0L, replyMax, | ||
| traceId, authorization, 0L, 0); |
Use doNetWindow instead for brevity.
| protected void encodeAndEnd( | ||
| long traceId, | ||
| long authorization, | ||
| int bodyLength) |
Remove this method, only need encodeNet.
Per PR review: use a doAppWindow helper on McpStream and a doNetWindow helper on HttpStream instead of direct doWindow(sender, ...) / doWindow(net, ...) calls, matching the McpServerFactory pattern.
- McpStream.doAppWindow(traceId, authorization, budgetId, padding) wraps the factory doWindow for sender/originId/routedId/initialId using the stream's tracked initialSeq/initialAck/initialMax.
- HttpStream.doNetWindow(traceId, authorization, budgetId, padding) replaces flushNetReplyWindow; wraps the factory doWindow for net/originId/routedId/replyId using tracked replySeq/replyAck/replyMax.
- McpStream.onAppBegin and onAppData now call doAppWindow.
- HttpStream.onNetBegin / onNetData / onNetFlush / doNetBegin all route through doNetWindow.
Per PR review: remove the initialId parameter from HttpStream (and HttpTerminateSession / HttpNotifyCancelled) constructors; compute it internally as supplyInitialId.applyAsLong(routedId) / mcp.resolvedId.
- HttpStream constructor now takes only McpStream; initializes initialId via supplyInitialId.applyAsLong(routedId).
- All 8 HttpStream subclass constructors follow suit, dropping the redundant long initialId parameter.
- HttpTerminateSession and HttpNotifyCancelled constructors similarly drop initialId and compute internally.
- All call sites updated: no more new HttpXxx(supplyInitialId.applyAsLong(resolvedId), this, ...) or netInitialId2 locals at construction points.
…plyMax
Per PR review: replyMax should reflect the receive buffer bound, which is bufferPool.slotCapacity() (same as McpServerFactory's decodeMax), not the engine writeBuffer capacity.
- Add private final int decodeMax field on McpClientFactory, initialized to bufferPool.slotCapacity().
- HttpStream sets replyMax = decodeMax in doNetBegin (preemptive window) and onNetBegin rather than writeBuffer.capacity().
…odeNet
Per PR review: no slot needed by the app stream for encoding; the net side's encodeSlot handles any credit-limited buffering. Also encodeNet should internally buffer on short credit and emit END when the stream is closing, instead of a separate encodeAndEnd helper + an onNetWindow flush block. And doNetBegin shouldn't take a bodyLength.
- Remove McpStream appDataSlot / appDataOffset fields. McpStream.onAppData now forwards payload directly via http.doNetData(...).
- McpRequestStream.onAppEndImpl tracks appHasData and flags appClosedEmpty when no body was streamed, then calls http.doNetEnd.
- HttpStream grows a public doNetData(traceId, auth, buffer, off, lim) and doNetEnd(traceId, auth) method. doNetEnd transitions to McpState.closingInitial and either calls doEnd directly when the encodeSlot is empty or flushes via encodeNet to drain the pending slot before the END is emitted.
- HttpStream grows initialSeq / initialAck fields to support the new credit-aware encodeNet.
- encodeNet reworked: writes length = min(initialMax - (initialSeq - initialAck), maxLength) bytes to net, buffers any remainder in encodeSlot, and on full drain emits doEnd + transitions to closedInitial when the state is initialClosing.
- onNetWindow no longer does its own encode-slot flush + end: it updates initialAck / initialMax and lets encodeNet handle any pending bytes.
- doNetBegin no longer takes bodyLength; the body is written by each HttpStream subclass via doNetData immediately after doNetBegin returns, followed by doNetEnd.
- HttpToolsCallStream writes its JSON prefix at open time via doNetData, overrides doNetEnd to write the closing '}' suffix before delegating.
- encodeAndEnd method removed.
| switch (mcpBeginEx.kind()) | ||
| { | ||
| case McpBeginExFW.KIND_LIFECYCLE: | ||
| newStream = new McpLifecycleStream( | ||
| sender, originId, routedId, initialId, route.id, affinity, authorization, | ||
| mcpBeginEx.lifecycle().sessionId().asString())::onAppMessage; | ||
| break; | ||
| case McpBeginExFW.KIND_TOOLS_LIST: | ||
| newStream = new McpToolsListStream( | ||
| sender, originId, routedId, initialId, route.id, affinity, authorization, | ||
| mcpBeginEx.toolsList().sessionId().asString())::onAppMessage; | ||
| break; | ||
| case McpBeginExFW.KIND_TOOLS_CALL: | ||
| newStream = new McpToolsCallStream( | ||
| sender, originId, routedId, initialId, route.id, affinity, authorization, | ||
| mcpBeginEx.toolsCall().sessionId().asString(), | ||
| mcpBeginEx.toolsCall().name().asString())::onAppMessage; | ||
| break; | ||
| case McpBeginExFW.KIND_PROMPTS_LIST: | ||
| newStream = new McpPromptsListStream( | ||
| sender, originId, routedId, initialId, route.id, affinity, authorization, | ||
| mcpBeginEx.promptsList().sessionId().asString())::onAppMessage; | ||
| break; | ||
| case McpBeginExFW.KIND_PROMPTS_GET: | ||
| newStream = new McpPromptsGetStream( | ||
| sender, originId, routedId, initialId, route.id, affinity, authorization, | ||
| mcpBeginEx.promptsGet().sessionId().asString(), | ||
| mcpBeginEx.promptsGet().name().asString())::onAppMessage; | ||
| break; | ||
| case McpBeginExFW.KIND_RESOURCES_LIST: | ||
| newStream = new McpResourcesListStream( | ||
| sender, originId, routedId, initialId, route.id, affinity, authorization, | ||
| mcpBeginEx.resourcesList().sessionId().asString())::onAppMessage; | ||
| break; | ||
| case McpBeginExFW.KIND_RESOURCES_READ: | ||
| newStream = new McpResourcesReadStream( | ||
| sender, originId, routedId, initialId, route.id, affinity, authorization, | ||
| mcpBeginEx.resourcesRead().sessionId().asString(), | ||
| mcpBeginEx.resourcesRead().uri().asString())::onAppMessage; | ||
| break; | ||
| default: | ||
| break; | ||
| } |
Replace with Int2ObjectHashMap<McpRequestStreamFactory> where McpRequestStreamFactory is an inner interface.
private interface McpRequestStreamFactory
{
McpRequestStream newStream(
MessageConsumer sender,
long originId,
long routedId,
long initialId,
long resolvedId,
long affinity,
long authorization,
String sessionId);
}
Then special case only the lifecycle stream and lookup the rest by beginEx kind.
| String sessionId) | ||
| { | ||
| super(sender, originId, routedId, initialId, resolvedId, affinity, authorization, sessionId); | ||
| this.assignedRequestId = requestId++; |
requestId should be scoped by session. This means the lifecycle session needs to be passed into the request streams, which are then added implicitly to the session-managed requests map by id; when the lifecycle session stream ends, it needs to clean up any remaining request streams.
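One way this ownership could be wired, sketched with toy classes (the ids, registration, and cleanup mirror the review's description; everything else is placeholder):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: the lifecycle session owns the per-session request id counter and
// the registry of outstanding requests, aborting any that remain at end.
public class SessionScopedRequests
{
    static class McpLifecycleSession
    {
        private final Map<Integer, McpRequest> requests = new HashMap<>();
        private int nextRequestId = 2; // id 1 reserved for initialize
        final List<String> aborted = new ArrayList<>();

        int register(McpRequest request)
        {
            int id = nextRequestId++;
            requests.put(id, request);
            return id;
        }

        void unregister(int id)
        {
            requests.remove(id);
        }

        void onSessionEnd()
        {
            // clean up any request streams still outstanding
            for (McpRequest request : requests.values())
            {
                aborted.add("abort#" + request.id);
            }
            requests.clear();
        }
    }

    static class McpRequest
    {
        final McpLifecycleSession session;
        final int id;

        McpRequest(McpLifecycleSession session)
        {
            this.session = session;
            this.id = session.register(this); // implicit registration
        }

        void onComplete()
        {
            session.unregister(id);
        }
    }

    public static void main(String[] args)
    {
        McpLifecycleSession session = new McpLifecycleSession();
        McpRequest first = new McpRequest(session);  // id 2
        McpRequest second = new McpRequest(session); // id 3
        first.onComplete();
        session.onSessionEnd();
        System.out.println(second.id + " " + session.aborted); // 3 [abort#3]
    }
}
```

Completed requests deregister themselves; only the still-outstanding ones are swept when the session ends.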
Per PR review: previous feedback missed the ordering — the subclass onAppBeginImpl should run before the initial app WINDOW is emitted, matching McpServerFactory's pattern.
Per PR review: replace the per-kind switch in newStream with an Int2ObjectHashMap<McpRequestStreamFactory> keyed by McpBeginExFW kind. Only the lifecycle stream remains special-cased; every request kind is resolved via the map.
- Add a @FunctionalInterface McpRequestStreamFactory that takes all shared stream ids + affinity/authorization + the mcpBeginEx flyweight (so each factory extracts its own kind-specific fields).
- Build the requestFactories map in the McpClientFactory constructor, with one entry per request kind: toolsList, toolsCall, promptsList, promptsGet, resourcesList, resourcesRead.
- newStream checks kind == KIND_LIFECYCLE inline; otherwise looks up the factory, returns null if unknown.
Per PR review: requestId should be allocated per session rather than
from a global factory counter, and the lifecycle session should own its
outstanding request streams and clean them up when it ends.
- McpLifecycleStream now owns:
- an Int2ObjectHashMap<McpRequestStream> requests map keyed by the
per-session request id;
- a nextRequestId counter starting at 2 (id 1 is reserved for
initialize);
- registerRequest(McpRequestStream) -> int id which assigns a new id
and adds the stream to the map;
- unregisterRequest(int id) which removes it.
- McpRequestStream constructor now takes the McpLifecycleStream session
(not a raw sessionId string); it calls session.registerRequest(this)
to assign its id and register itself.
- McpRequestStream.onAppEndImpl calls session.unregisterRequest(id) on
normal completion.
- McpLifecycleStream.onAppEndImpl now iterates the requests map and
aborts each still-outstanding request (both on the app side and net
side) before terminating the session, guaranteeing cleanup.
- Factory-level requestId counter removed.
- McpRequestStreamFactory lookup now resolves the session via a
lookupSession(sessionId) helper before constructing the request; the
factory returns null when the session is unknown and newStream falls
through to returning null.
- McpToolsListStream / McpToolsCallStream / McpPromptsListStream /
McpPromptsGetStream / McpResourcesListStream / McpResourcesReadStream
constructors updated to take McpLifecycleStream session parameter.
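A minimal sketch of the per-session ownership described above, with plain JDK collections standing in for the engine's Int2ObjectHashMap and a boolean flag standing in for the app/net abort path:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative shape only: the real McpLifecycleStream registers
// McpRequestStream instances and aborts them on both sides.
class SessionSketch
{
    static class Request
    {
        int id;
        boolean aborted;
    }

    final Map<Integer, Request> requests = new HashMap<>();
    int nextRequestId = 2; // id 1 is reserved for initialize

    int registerRequest(Request request)
    {
        request.id = nextRequestId++;
        requests.put(request.id, request);
        return request.id;
    }

    void unregisterRequest(int id)
    {
        requests.remove(id);
    }

    void onSessionEnd()
    {
        // abort any still-outstanding requests before terminating the session
        requests.values().forEach(r -> r.aborted = true);
        requests.clear();
    }
}
```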
…ing session map ownership
…oCancel
HttpStream.doNotifyCancelled and HttpRequestStream's override are removed. McpStream.onAppAbortImpl/onAppResetImpl now call doCancel(traceId, authz), which is a no-op on McpStream and overridden by McpRequestStream to fire HttpNotifyCancelled(this) when the session is still active. HttpNotifyCancelled now takes McpRequestStream directly instead of reaching through HttpRequestStream.
Replaces onAppEndImpl/onAppAbortImpl/onAppResetImpl with a single onAppClosed hook on McpStream (empty default), mirroring the pattern in McpServerFactory. onAppEnd/Abort/Reset are now fully concrete in McpStream: they update state, propagate to http (doEncodeRequestEnd / doNetAbort / doNetReset), and call onAppClosed.
- McpLifecycleStream.onAppClosed is idempotent (guarded by sessions.remove) and tears down the session: aborts pending requests and sends HttpTerminateSession.
- McpRequestStream.onAppClosed fires only when McpState.closed(state) is true and the request is still registered; it sends HttpNotifyCancelled.
Replaces HttpStream.onNetBeginImpl with an onNetBegin hook on McpStream. HttpStream.onNetBegin now delegates to mcp.onNetBegin(begin) after state updates. McpStream provides an empty default; subclasses override as needed.
- McpLifecycleStream.onNetBegin captures the server-assigned session id from the initialize response and stores it as responseSessionId. HttpNotifyInitialized reads this via cast.
- McpXxxStream subclasses (tools/list, tools/call, prompts/list, prompts/get, resources/list, resources/read) each override onNetBegin to build their kind-specific McpBeginExFW and call doAppBegin.
- Tool name, prompt name, and resource uri now live on the Mcp subclasses (not the Http subclasses), captured in onAppBeginImpl from mcpBeginEx. The Http subclass constructors no longer take those parameters.
…eNet from onNetData
Mirrors McpServerFactory's decoder structure:
- decodeJsonRpc (new initial state) sets up the StreamingJson parser and transitions to decodeJsonRpcStart on the next call
- decodeJsonRpcStart consumes the outer START_OBJECT
- decodeJsonRpcNext scans top-level keys, transitioning to decodeJsonRpcResult on "result" or decodeIgnore on END_OBJECT
- decodeJsonRpcResult captures the result value start offset
- decodeIgnore drains any remainder
The decode loop driver is renamed to decodeNet and is now invoked from onNetDataImpl as each chunk arrives (buffered into decodeSlot), so parsing runs incrementally. onNetEndImpl runs a final decodeNet pass, then emits the captured result slice to the app.
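The decoder chain described in this commit follows the dispatch-through-a-current-decoder-field shape used by the Zilla stream factories. The toy below reproduces only that shape over a String input; the real decoders consume DirectBuffer slices through a streaming JSON parser, so every name and simplification here (substring scanning, the indexOf shortcut) is illustrative rather than the actual implementation.

```java
class DecoderSketch
{
    @FunctionalInterface
    interface Decoder
    {
        int decode(String input, int progress, int limit);
    }

    Decoder decoder = this::decodeJsonRpc;
    int resultStart = -1;

    int decodeJsonRpc(String input, int progress, int limit)
    {
        decoder = this::decodeJsonRpcStart; // set-up state, consumes nothing
        return progress;
    }

    int decodeJsonRpcStart(String input, int progress, int limit)
    {
        if (progress < limit && input.charAt(progress) == '{')
        {
            decoder = this::decodeJsonRpcNext; // consumed outer START_OBJECT
            return progress + 1;
        }
        return progress; // need more bytes
    }

    int decodeJsonRpcNext(String input, int progress, int limit)
    {
        final int at = input.indexOf("\"result\":", progress);
        if (at >= 0 && at + 9 <= limit)
        {
            resultStart = at + 9; // capture result value start offset
            decoder = (in, p, l) -> l; // decodeIgnore: drain any remainder
            return resultStart;
        }
        return progress; // keep buffering until the "result" key arrives
    }

    // decodeNet drives the loop while a decoder advances or transitions
    void decodeNet(String input)
    {
        Decoder previous = null;
        int before = -1;
        int progress = 0;
        while (decoder != previous || progress != before)
        {
            previous = decoder;
            before = progress;
            progress = decoder.decode(input, progress, input.length());
        }
    }
}
```

Each state either consumes bytes, transitions by reassigning `decoder`, or does both; the driver re-enters until neither happens, which is what lets parsing resume cleanly when the next DATA chunk arrives.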
…reams
Request streams (tools/list, tools/call, prompts/list, prompts/get,
resources/list, resources/read) no longer echo their McpBeginExFW back
to the app on the reply direction. The caller initiated the request and
already knows what was called; echoing back was redundant and forced
each McpXxxStream to carry request-specific state (toolName, promptName,
resourceUri) solely for begin-reply construction.
McpRequestStream.onNetBegin now calls doAppBegin(traceId, authz, null) —
the empty BEGIN frame still opens the reply direction. Per-subclass
onNetBegin overrides are removed, as are the name/uri fields that held
begin-initial state across the request/reply boundary.
Lifecycle streams still echo — the server's initialize response may carry
a newly-assigned mcp-session-id that differs from the one the app
requested, so the reply extension still conveys responseSessionId.
Spec scripts updated in lockstep:
- app/{kind}/client.rpt: drop the `read zilla:begin.ext ${matchBeginEx...}`
directive for non-lifecycle reply BEGIN.
- app/{kind}/server.rpt: drop the matching `write zilla:begin.ext
${beginEx...}` on reply BEGIN, keeping only `write flush` so the empty
reply BEGIN frame still fires — the mcp-server binding uses the
app-side reply BEGIN as the trigger to write the HTTP response
preamble via doEncodeBeginResponse.
Known issue: McpClientIT.shouldListToolsThenCancel times out. In the
previous flow, the explicit `read zilla:begin.ext ${matchBeginEx.toolsList()}`
directive on the app side appears to have advanced k3po's reply-direction
state machine in a way that `read abort` alone does not, allowing the
downstream cancel notification to propagate. The exact mechanic isn't
yet understood; follow-up can either adjust the test (e.g., proactive
app-side abort) or tweak the client binding to trigger HttpNotifyCancelled
from onAppEnd when the reply is still pending.
McpServerIT (19/19), ApplicationIT (18/18), McpClientIT (9/10) pass.
…, not mcp.originId
HttpNotifyCancelled now tracks initialSeq/initialAck/initialMax and replySeq/replyAck/replyMax with a state int, like HttpStream. The bodySent boolean is replaced with McpState.initialClosed checks. doNetData/doNetEnd/doNetAbort/doNetReset are exposed with guards (initialClosed for end/abort, replyClosed for reset). Message handlers onNetEnd/onNetAbort/onNetReset delegate to their doNet* counterparts. The factory-level doData convenience overload (with implicit 0L sequence/acknowledge/0 maximum) is removed; all callers now pass sequences explicitly.
…eam.onNetEnd
HttpStream.onNetMessage now routes EndFW to a private local onNetEnd that applies sequence/acknowledge asserts and sets McpState.closedReply before delegating to mcp.onNetEnd(end). This mirrors the onNetBegin pattern introduced earlier.
- McpStream.onNetEnd: empty default hook
- McpLifecycleStream.onNetEnd: chains HttpInitializeRequest -> HttpNotifyInitialized, then fires doAppBegin with the captured responseSessionId on the second end
- McpRequestStream.onNetEnd: flushResponseToApp + doAppEnd + session.unregister
- HttpRequestStream exposes flushResponseToApp as an instance method for McpRequestStream.onNetEnd to invoke
The abstract onNetEndImpl is removed from HttpStream, along with the per-subclass overrides on HttpInitializeRequest, HttpNotifyInitialized, and HttpRequestStream.
… in client binding
Rework McpClientFactory flow control so that tool/prompt/resource call requests and responses can span multiple DATA frames. Advertise app-initial window bounded by the encode slot capacity and reflect buffered bytes via initialAck, so the app throttles when the downstream net cannot yet accept. On the response path, drain result bytes incrementally to the app (holding back one trailing byte for the outer JSON-RPC object's closing brace) under the app's reply window, and only advance net-side replyAck as decoded bytes are consumed. Add 10k/100k ITs for tools.call and resources.read mirroring the existing McpServerIT coverage to prove fragmentation in both directions.
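The initial-window arithmetic described in this commit reduces to tracking buffered bytes as the gap between two cumulative counters. A minimal sketch, assuming plain long counters in place of the engine's WINDOW frames (field names follow the Zilla convention, the class is hypothetical):

```java
// Sketch of app-initial window accounting: the app may send up to
// encodeMax bytes beyond what the binding has acknowledged.
class WindowSketch
{
    final int encodeMax;   // encode slot capacity advertised as initialMax
    long initialSeq;       // cumulative bytes the app has sent
    long initialAck;       // cumulative bytes released downstream

    WindowSketch(int encodeMax)
    {
        this.encodeMax = encodeMax;
    }

    int buffered()
    {
        return (int) (initialSeq - initialAck);
    }

    // credit the app may still send before it must wait for a WINDOW update
    int availableCredit()
    {
        return encodeMax - buffered();
    }
}
```

When the downstream net stalls, `initialAck` stops advancing, `buffered()` approaches `encodeMax`, and the app's credit reaches zero, which is exactly the backpressure the commit describes.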
…ream.onNetData
Mirror the onNetBegin/onNetEnd hook pattern: HttpStream.onNetData keeps the flow-control asserts and the replySeq update locally, then delegates to mcp.onNetData. Remove the abstract onNetDataImpl and the per-subclass overrides; McpLifecycleStream acks lifecycle response bytes directly via http.doReplyConsumed, and McpRequestStream forwards to HttpRequestStream.onNetResponseData for decode + drain.
…constructor
Pass Function<McpStream, HttpStream> httpFactory to McpStream so the http peer is created once, up front, rather than inside each subclass's onAppBeginImpl. McpLifecycleStream supplies HttpInitializeRequest::new; each McpRequestStream subclass supplies its kind-specific HttpXxx::new. HttpStream subclass constructors now take McpStream (HttpRequestStream casts to McpRequestStream) so the method references match the factory signature.
…onAppBegin
Now that every McpStream is constructed with its http peer, the base onAppBegin can emit the http request BEGIN uniformly. onAppBeginImpl becomes a no-op default and is only overridden by McpLifecycleStream, which still needs to chain doEncodeRequestEnd after the initialize preamble since the lifecycle request has no app-side body.
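The constructor wiring from these two commits can be sketched with simplified stand-ins; the real classes carry engine stream state and the supplied references are HttpInitializeRequest::new and the kind-specific HttpXxx::new, so this is shape only:

```java
import java.util.function.Function;

class HttpFactorySketch
{
    static class HttpStream
    {
        final McpStream mcp;

        HttpStream(McpStream mcp)
        {
            this.mcp = mcp;
        }
    }

    static class McpStream
    {
        final HttpStream http;

        McpStream(Function<McpStream, HttpStream> httpFactory)
        {
            // http peer created once, up front, not inside onAppBeginImpl
            this.http = httpFactory.apply(this);
        }
    }
}
```

Because the peer exists before any frame is handled, the base onAppBegin can emit the http BEGIN uniformly instead of each subclass doing it.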
…ming result via parser
Move decoder ownership up to HttpStream so its onNetData accumulates the response payload into decodeSlot and invokes a no-arg decodeNet helper (wraps input with parser-progress delta, runs the decoder loop, compacts the slot, bumps replyAck). onAppWindow on HttpStream now re-enters decodeNet so blocked drains resume when the app grants reply credit. mcp.onNetData is no longer a hook.

Split decodeJsonRpcResult into a single transition plus a streaming decodeJsonRpcResultValue decoder modeled on McpServerFactory's decodeJsonRpcParamsEnd: it advances the JsonParser through the result object tracking nesting depth, drains the spanned bytes to mcp.doAppData under replyMax credit, and transitions to a terminal decodeJsonRpcResultEnd once depth == 0 and every drained byte is flushed. HttpRequestStream.onNetEndDecoded sets netEnded and re-runs decodeNet; doAppEnd fires only once the drain is complete via flushAppEnd.

Fix tools.call spec scripts to escape `\n` as `\\n` so the on-wire JSON stays RFC 8259-compliant (StreamingJsonParser rejects unescaped control characters in strings, matching the spec); both network and application sides now carry the legal two-byte backslash-n escape and the parser can traverse the result body end-to-end.
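The nesting-depth traversal described above can be illustrated with a brace-counting scan. Note that the real decoder walks the JsonParser event stream (so braces inside strings are handled correctly) and drains under reply-window credit, neither of which this toy does; the method name is hypothetical.

```java
class DepthScanSketch
{
    // returns the index one past the end of the JSON value starting at `start`,
    // assuming `start` points at the value's opening '{' or '['
    static int scanValueEnd(String json, int start)
    {
        int depth = 0;
        int at = start;
        do
        {
            final char c = json.charAt(at);
            if (c == '{' || c == '[')
            {
                depth++; // entering a nested object/array
            }
            else if (c == '}' || c == ']')
            {
                depth--; // leaving one
            }
            at++;
        }
        while (depth > 0);
        return at; // depth back to zero: the result value is complete
    }
}
```

In the streaming version the scan can stop mid-value at the end of a DATA chunk; depth is carried across chunks so the decoder knows whether the result object has closed yet.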
Add a kind=ping app-level stream so the mcp client binding can emit
`{"jsonrpc":"2.0","id":N,"method":"ping"}` over its established lifecycle
session and surface the empty `{}` result back to the app. IDL gains
McpPingBeginEx and the KIND_PING = 7 discriminator; McpFunctions gains
the ping() builder and matcher so spec scripts can model the ping
stream. McpClientFactory wires resolvers.put(KIND_PING, ...) and
requestFactories.put(KIND_PING, McpPingStream::new), and HttpPingStream
emits the ping preamble on the shared http encode path.
Switch the existing net/lifecycle.ping scripts from id:3 to id:2 so the
client's first post-init requestId matches the wire id; this keeps the
NetworkIT peer-to-peer test and McpServerIT shouldPingLifecycle passing
because the server just echoes the id it receives.
Add app/lifecycle.ping/{client,server}.rpt to exercise the app-level
ping stream peer-to-peer, a shouldPingLifecycle IT in ApplicationIT,
and a shouldPingLifecycle IT in McpClientIT pairing the new app-client
script with the existing net-server script.
…ing" This reverts commit 7d16fc1.
…lient binding
Wire Signaler + MCP_INACTIVITY_TIMEOUT into McpClientFactory so the
lifecycle session auto-pings upstream every inactivityTimeout/2 without
any app-level ping stream (symmetric with the mcp-server handling ping
purely at the JSON-RPC layer). McpLifecycleStream schedules a
KEEPALIVE_SIGNAL via signaler.signalAt on replyId (the stream id
registered in the engine throttle table for app-initiated streams);
when the signal fires onAppSignal opens an internal HttpKeepalive net
stream that emits the `{"jsonrpc":"2.0","id":N,"method":"ping"}`
preamble, reads the `{"result":{}}` reply, and reschedules the next
keepalive in the net END handler.
Keepalive is cancelled when the session is torn down via
doAppTerminate. HttpKeepalive uses the lifecycle's nextRequestId to
stay consistent with the per-session request-id counter.
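The scheduling decision above reduces to comparing idle time against half the inactivity timeout. A sketch under stated assumptions: timestamps are plain millis, and the real signaler.signalAt call is replaced by a returned next-fire time (the class and method names are hypothetical).

```java
// Sketch of the keepalive fire/reschedule decision: ping only when the
// session has truly been idle for inactivityTimeout/2 since lastActiveAt.
class KeepaliveSketch
{
    final long inactivityTimeout;
    long lastActiveAt; // bumped on request register/unregister and ping reply

    KeepaliveSketch(long inactivityTimeout, long lastActiveAt)
    {
        this.inactivityTimeout = inactivityTimeout;
        this.lastActiveAt = lastActiveAt;
    }

    // on signal fire: ping now, or reschedule if activity happened since
    boolean shouldPing(long now)
    {
        return now >= lastActiveAt + inactivityTimeout / 2;
    }

    long nextFireAt(long now)
    {
        return Math.max(now, lastActiveAt) + inactivityTimeout / 2;
    }
}
```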
Flip the on-wire id in net/lifecycle.ping/{client,server}.rpt from id=3
to id=2 so the client's first post-init requestId lines up with what
the net peer reads; server still just echoes the incoming id, so
NetworkIT.shouldPingLifecycle and McpServerIT.shouldPingLifecycle
stay green.
Add app/lifecycle.ping/client.rpt as a test-only coordinator that opens
the lifecycle, reads the extension, awaits a KEEPALIVE_COMPLETE barrier
emitted by the net peer after the ping exchange, then lets the test
finish. McpClientIT.shouldPingLifecycle pairs it with net/lifecycle.ping
and configures MCP_INACTIVITY_TIMEOUT=PT0.2S so the keepalive fires
within the 10s IT budget.
…cutive failed pings
Teach McpLifecycleStream to count consecutive failed keepalives and terminate the lifecycle session when the count reaches a configurable tolerance (zilla.binding.mcp.keepalive.tolerance, default 2).

A ping is only emitted when the session has actually been idle for inactivityTimeout/2. The lifecycle stream tracks lastActiveAt (bumped on request register/unregister and on a successful ping response via touch()); when the keepalive signal fires we compare lastActiveAt + inactivityTimeout/2 against now; if activity happened since we scheduled, we reschedule further out instead of pinging. Once the gap really is reached we preemptively increment failedKeepalives, fire the ping, and reset the counter on the HttpKeepalive net END. If the keepalive net stream aborts/resets, we schedule the next fire without touching lastActiveAt, so failedKeepalives accumulates; when it hits the tolerance we call doAppTerminate, which now also sends doAppEnd on the reply so the app observes the session ending.

To reuse the existing abort/reset hooks from other http peers without terminating the mcp lifecycle on keepalive failure, HttpStream.onNetAbort and HttpStream.onNetReset now dispatch through mcp.onNetAbort / mcp.onNetReset hooks. The default implementations preserve the previous behavior (doAppAbort / doAppReset); McpLifecycleStream overrides them to treat failures from HttpKeepalive as ping failures (reschedule) and all other http peers identically to before.

Also add a shouldTimeoutLifecycleKeepalive IT exercising the new path with MCP_INACTIVITY_TIMEOUT=PT0.2S and a net scenario that aborts two consecutive ping attempts, then accepts the DELETE triggered by the client-side terminate.
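The tolerance counting above can be sketched as a small state holder; names follow the commit (failedKeepalives, tolerance) but the terminate action is reduced to a boolean, so treat this as shape only:

```java
// Sketch: count consecutive rejected keepalives, reset on a successful
// reply, terminate once the configured tolerance is exhausted.
class ToleranceSketch
{
    final int tolerance;
    int failedKeepalives;
    boolean terminated;

    ToleranceSketch(int tolerance)
    {
        this.tolerance = tolerance;
    }

    void onPingFired()
    {
        failedKeepalives++; // preemptive: counts as failed until the reply
    }

    void onPingResponse()
    {
        failedKeepalives = 0; // net END observed: keepalive succeeded
    }

    void onPingAborted()
    {
        if (failedKeepalives >= tolerance)
        {
            terminated = true; // doAppTerminate in the real binding
        }
        // otherwise: reschedule the next fire without touching lastActiveAt
    }
}
```

With the default tolerance of 2, this matches the IT scenario: two consecutive aborted pings terminate the session, but any successful reply in between resets the count.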
…e.timeout.rejected
Rename the scenario from "keepalive" (mechanism) to "rejected" (failure mode) so it reads as a sibling of lifecycle.timeout; both scenarios tell the session-timeout story, one for the idle path and one for the path where the upstream actively rejects each keepalive ping until the configured tolerance is exhausted. The IT method is renamed from shouldTimeoutLifecycleKeepalive to shouldTimeoutLifecycleRejected to match.
Add a new lifecycle.initialize.aborted and per-request-kind .aborted
scenarios (tools.list, tools.call, prompts.list, prompts.get,
resources.list, resources.read). Each scenario exercises the path
where the upstream server aborts the reply mid-response (after BEGIN
but before body completion) and the binding-under-test propagates
that abort to the app.
For each kind add four scripts under
specs/binding-mcp.spec/src/main/scripts/.../streams/{application,network}/
<kind>.aborted/{client,server}.rpt and wire four IT methods named
shouldAbort<Kind> across NetworkIT, ApplicationIT, McpClientIT, and
McpServerIT. The client-role scripts close their initial direction
normally and read abort on the reply; the server-role scripts write
response BEGIN (and for body-bearing requests, write flush) and then
write abort instead of the response body.
lifecycle.initialize.aborted is only wired into NetworkIT,
ApplicationIT, and McpClientIT because McpServerFactory generates the
initialize response inline without opening an app stream, so there is
no server-side app-abort path to exercise.
All 49 binding ITs (McpConfigurationTest 1, McpClientIT 23,
McpServerIT 25) and all 51 spec ITs (SchemaTest 1, McpFunctionsTest
24, NetworkIT 26, ApplicationIT 25) remain green.
…timeout.rejected
The scenario previously shipped only with the two scripts used by McpClientIT.shouldTimeoutLifecycleRejected: app/client.rpt and net/server.rpt. Every other scenario in this spec ships all four scripts plus NetworkIT/ApplicationIT peer-to-peer coverage; this one should too.

Add app/lifecycle.timeout.rejected/server.rpt as the app-side peer of the existing app-client script: it accepts the lifecycle BEGIN, writes the matching reply BEGIN, then write close to mirror the binding's doAppEnd after the tolerance is hit.

Add net/lifecycle.timeout.rejected/client.rpt as the net-side peer of the existing net-server script: it replays the exact on-wire sequence the client binding emits when keepalive.tolerance consecutive pings are aborted: init POST + response, notify POST + 202, ping #1 + aborted, ping #2 + aborted, DELETE + 200.

Cross-connect coordination uses new LIFECYCLE_INITIALIZING / LIFECYCLE_INITIALIZED / PING_1_REJECTED / PING_2_REJECTED barriers. Wire shouldTimeoutLifecycleRejected into NetworkIT (net peer-to-peer, 27 tests) and ApplicationIT (app peer-to-peer, 26 tests).

No binding code changes; McpClientIT.shouldTimeoutLifecycleRejected continues to pair the app-client with the net-server through the engine and still passes.
Summary

Adds the `mcp` client binding (closes #1670), alongside refactors to the existing `mcp` server binding so the two share a consistent coding style. The client binding makes Zilla an MCP HTTP client that maps app-level `mcp` streams (lifecycle, tools, prompts, resources) to upstream JSON-RPC over HTTP POST, with full flow-control, fragmentation, keepalive, and abort-propagation support.

What's new

runtime/binding-mcp/src/main/java/.../stream/McpClientFactory.java
- `initialize` → `notifications/initialized` → per-request POSTs (`tools/list`, `tools/call`, `prompts/list`, `prompts/get`, `resources/list`, `resources/read`) → `DELETE` on session teardown. Each app stream kind maps to a dedicated `HttpRequestStream` subclass that emits its JSON-RPC preamble.
- Response decoding modeled on `decodeJsonRpcParamsEnd`: `decodeJsonRpc` → `decodeJsonRpcNext` → `decodeJsonRpcResultStart` → `decodeJsonRpcResultValue` streams the `result` value bytes to `mcp.doAppData` under `mcp.replyMax` credit, using `decodedParserProgress`/`decodedResultProgress` to handle fragmented responses larger than one DATA frame.
- `McpStream` advertises `initialMax = encodeMax` and uses `flushAppWindow(...)` so the app throttles when the downstream encode slot fills; `HttpStream.onNetWindow` drains the slot and propagates credit back.
- Keepalive via `Signaler`. `McpLifecycleStream` schedules a `KEEPALIVE_SIGNAL` at `lastActiveAt + inactivityTimeout/2`; on fire it either reschedules (if activity has happened since) or sends a ping via a new `HttpKeepalive` stream. After `keepalive.tolerance` consecutive ping rejections, the lifecycle is terminated via `doAppTerminate` (which emits `doAppEnd` on the reply, then sends `DELETE` via `HttpTerminateSession`). Both `MCP_INACTIVITY_TIMEOUT` and a new `MCP_KEEPALIVE_TOLERANCE` (default `2`) are exposed on `McpConfiguration`.

runtime/binding-mcp/src/main/java/.../stream/McpServerFactory.java
- Refactored for consistency with the client (`HttpStream.onNetBegin/onNetData/onNetEnd/onNetAbort/onNetReset` hooks dispatching to `mcp.*`; `onAppBegin` emitting `http.doEncodeRequestBegin` centrally; `HttpStream` initialization moved into the `McpStream` constructor via `Function<McpStream, HttpStream> httpFactory`).
- `tools.list.canceled` can cleanly abort without leaving partial body bytes in the client's decoder.

specs/binding-mcp.spec
- New scenarios: `lifecycle.ping`, `lifecycle.timeout.rejected`, `lifecycle.initialize.aborted`, `{tools.list, tools.call, prompts.list, prompts.get, resources.list, resources.read}.aborted`, plus `tools.call.10k`, `tools.call.100k`, `resources.read.10k`, `resources.read.100k` fragmentation scenarios.

Test coverage

- runtime/binding-mcp
- specs/binding-mcp.spec

All scenarios have peer-to-peer `NetworkIT`/`ApplicationIT` coverage (self-consistent k3po scripts without the engine) in addition to full-path `McpClientIT`/`McpServerIT` coverage.

Test plan

- `./mvnw verify -pl runtime/binding-mcp`
- `./mvnw verify -pl specs/binding-mcp.spec`
- `./mvnw notice:check -pl specs/binding-mcp.spec/runtime/binding-mcp`