MCP App: Progressive tool discovery and inline image output for anomaly plots #16
MatthewKhouzam wants to merge 3 commits into eclipse-tmll:main from
Conversation
Signed-off-by: Matthew Khouzam <matthew.khouzam@ericsson.com>
Allow less context usage Signed-off-by: Matthew Khouzam <matthew.khouzam@ericsson.com>
```python
@mcp.tool()
def cluster_data(experiment_id: str, keywords: Optional[list[str]] = None, n_clusters: Optional[int] = None, method: Optional[str] = None) -> str:
    """Perform clustering analysis on trace data (kmeans, dbscan, hierarchical)."""
    args = build_args({"keywords": ("-k", keywords or ["cpu usage"]), "n_clusters": ("-n", n_clusters or 3), "method": ("-m", method or "kmeans")})
    return run_cli("cluster", experiment_id, *args)
```
This tool is redundant, as the clustering module isn't doing anything meaningful yet. You may remove it for now.
```python
def _process(self, outputs: Optional[List[Output]] = None, **kwargs) -> None:
    super()._process(outputs=outputs,
                     normalize=False,
                     min_size=kwargs.get("min_size", MINIMUM_REQUIRED_DATAPOINTS),
```
I could see that we now pass the min_size through the CLI, but I'm not sure if removing it from here would help. Although it removes a duplication, if the caller is something other than the MCP CLI, we may still need this optional parameter passing (e.g., the user instantiates the anomaly detection module in their code without passing min_size).
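The pattern under discussion can be kept compatible with both call paths. A minimal sketch (the constant value and function name here are illustrative, not the real module):

```python
MINIMUM_REQUIRED_DATAPOINTS = 10  # illustrative stand-in for the real constant

def process(**kwargs):
    # Fall back to the module-level default when the caller (e.g. a user
    # instantiating the anomaly detection module directly, outside the MCP
    # CLI) does not pass min_size explicitly.
    min_size = kwargs.get("min_size", MINIMUM_REQUIRED_DATAPOINTS)
    return min_size

print(process())             # module default: 10
print(process(min_size=50))  # explicit override, e.g. from the CLI: 50
```

This way the CLI can forward its flag while direct library users still get a sensible default, which is the case the comment above is worried about.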
```json
"command": "python3",
"args": ["/path/to/tmll/mcp_server_cli.py"]
```
This is correct, but just for further development (out of the scope of this PR), we need to decouple TMLL MCP from TMLL itself, so we can run the MCP without needing to set up TMLL. Also, as TMLL has lots of dependencies, I think using python3 as the command would lead to lots of ModuleNotFound exceptions. One workaround I found was to tell it exactly which Python environment to use:
```json
"mcpServers": {
    "tmll": {
        "type": "stdio",
        "command": "/home/kavehshahedi/Desktop/tmll-eclipse/venv/bin/python",
        "args": [
            "-m",
            "tmll.mcp.server"
        ],
        "env": {
            "PYTHONPATH": "/home/kavehshahedi/Desktop/tmll-eclipse"
        }
    }
}
```

- `analyze_correlation`: Perform root cause correlation analysis
- `detect_idle_resources`: Identify underutilized resources
- `plan_capacity`: Run capacity planning predictions
- `cluster_data`: Perform clustering analysis
Remove cluster_data if possible.
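The venv-specific config suggested earlier doesn't have to be written by hand. A sketch that generates such an entry from whatever interpreter is currently active (`sys.executable` resolves to the venv's python when run inside it; the module name mirrors the snippet above and is otherwise an assumption):

```python
import json
import sys

# Emit an mcpServers entry that pins the exact interpreter of the active
# virtual environment, avoiding ModuleNotFoundError from a bare "python3".
config = {
    "mcpServers": {
        "tmll": {
            "type": "stdio",
            "command": sys.executable,  # absolute path to this venv's python
            "args": ["-m", "tmll.mcp.server"],
        }
    }
}
print(json.dumps(config, indent=2))
```

Running this inside the project's venv and pasting the output into the client config sidesteps the dependency-resolution problem described in the comment.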
```shell
tmll_cli.py plan-capacity --experiment <UUID> --horizon 30

# Perform clustering
tmll_cli.py cluster --experiment <UUID> --method kmeans --n-clusters 3
```
See the other comment.
Overall, it looks super cool to me! I tried it, and being able to have the plots through AI agents was very fun :D
Signed-off-by: Matthew Khouzam <matthew.khouzam@ericsson.com>
What it does
This PR reworks the MCP server in two commits to improve context
efficiency and enable richer AI output.
Progressive MCP tool discovery (99b7fcb)
Migrates from the low-level Server API (with manual list_tools/call_tool
dispatching) to FastMCP with decorated @mcp.tool() functions. This
eliminates the monolithic tool schema that was sent upfront in every
conversation, replacing it with progressive discovery where the AI only
loads tool definitions as needed.
- less code to maintain and less context consumed per MCP session
- no more manual dispatching in a giant if/elif chain
- tool arguments and return types are handled properly
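The decorator-based registration this commit moves to can be illustrated with a minimal stand-in (the real API is FastMCP from the MCP Python SDK; this registry class is a simplified sketch of the shape, not the SDK itself):

```python
from typing import Callable, Dict

class ToolRegistry:
    """Minimal stand-in for FastMCP's decorator-based registration."""
    def __init__(self) -> None:
        self.tools: Dict[str, Callable] = {}

    def tool(self) -> Callable:
        def register(fn: Callable) -> Callable:
            # Each tool registers itself at definition time; the schema can
            # be derived from the function signature and docstring on demand,
            # so no monolithic schema list has to be shipped upfront.
            self.tools[fn.__name__] = fn
            return fn
        return register

mcp = ToolRegistry()

@mcp.tool()
def detect_anomalies(experiment_id: str) -> str:
    """Run anomaly detection on the given experiment."""
    return f"anomalies for {experiment_id}"

print(sorted(mcp.tools))
```

Compare this with the old low-level approach, where a single call_tool handler had to switch on the tool name and every schema was declared by hand in list_tools.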
MCP App with image output (ff6b48a)
Adds plot_xy_with_anomalies — a new tool that returns ImageContent
directly in the MCP response, allowing the AI to embed annotated anomaly-
detection charts inline in its output rather than just returning text
summaries.
- renders the XY chart with anomaly regions and outlier points marked
- also returns a text summary
- starts the trace server if not running
- passes plot data via a new _add_dataframe helper
- silences stdout to prevent stray prints from corrupting the MCP stdio transport
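The inline-image mechanism boils down to returning base64-encoded PNG bytes in the response. A sketch of the payload shape (the dict fields mirror MCP's ImageContent; the real server would use the SDK's typed class, and the helper name here is hypothetical):

```python
import base64

def to_image_content(png_bytes: bytes) -> dict:
    # Wrap raw PNG bytes as an MCP-style image payload. The field names
    # mirror the MCP ImageContent shape; this plain dict is only a sketch.
    return {
        "type": "image",
        "data": base64.b64encode(png_bytes).decode("ascii"),
        "mimeType": "image/png",
    }

# A real tool would render the annotated anomaly chart (e.g. with
# matplotlib) and pass the figure's PNG bytes here; this payload is just
# the 8-byte PNG magic number to show the shape.
content = to_image_content(b"\x89PNG\r\n\x1a\n")
print(content["type"], content["mimeType"])
```

Because the image travels inside the tool response itself, the AI client can embed it inline instead of describing the chart in text.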
Context savings: The old server shipped all 11 tool schemas in a single
list_tools response on every connection. With FastMCP's progressive
discovery, the client only resolves schemas for tools it actually invokes,
reducing per-session context overhead — particularly valuable when the AI
has a limited context window and every token counts.
How to test
Load in an AI IDE and ask it to plot an XY graph.
tc-mcp-app.mp4
Follow-ups
Review checklist