
feat: add VibrationAgent MCP server for vibration analysis benchmarks#190

Open
LGDiMaggio wants to merge 7 commits into IBM:main from LGDiMaggio:feat/vibration-mcp-server

Conversation

@LGDiMaggio

Summary

This PR adds a VibrationAgent MCP server to AssetOpsBench, introducing industrial vibration diagnostics capabilities as described in Issue #178.

What's included

| Area | Files | Description |
|---|---|---|
| MCP Server | src/servers/vibration/main.py | 8 MCP tools (FFT, envelope spectrum, ISO 10816, bearing frequencies, full diagnosis) |
| DSP Core | src/servers/vibration/dsp/ | FFT analysis, envelope (Hilbert), bearing characteristic frequencies, fault detection |
| Data Layer | data_store.py, couchdb_client.py | In-memory signal store + CouchDB integration (reuses IoTAgent env vars) |
| Tests | src/servers/vibration/tests/ | 26 DSP unit tests + 17 MCP tool tests |
| Scenarios | vibration_utterance.json | 20 benchmark scenarios (IDs 301-320) |
| Integration | pyproject.toml, executor.py, INSTRUCTIONS.md | scipy dep, entry point, DEFAULT_SERVER_PATHS registration, full docs |

Tools provided

| Tool | Description |
|---|---|
| get_vibration_data | Fetch vibration time-series from CouchDB |
| list_vibration_sensors | List available sensor fields for an asset |
| compute_fft_spectrum | FFT amplitude spectrum with top-N peaks |
| compute_envelope_spectrum | Envelope spectrum for bearing fault detection |
| assess_vibration_severity | ISO 10816 severity classification (Zones A-D) |
| calculate_bearing_frequencies | BPFO, BPFI, BSF, FTF computation |
| list_known_bearings | Built-in bearing database (9 designations) |
| diagnose_vibration | Full automated diagnostic pipeline with markdown report |
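For a sense of what a tool like compute_fft_spectrum does internally, here is a hedged sketch using scipy. The actual implementation lives in src/servers/vibration/dsp/ and may differ; the helper name and signature below are mine, not the PR's.

```python
import numpy as np
from scipy.fft import rfft, rfftfreq
from scipy.signal import find_peaks

def fft_top_peaks(signal, fs, n_peaks=5, window="blackman"):
    """Return the n_peaks largest spectral peaks as (freq_hz, amplitude)."""
    x = np.asarray(signal, dtype=float)
    w = np.blackman(len(x)) if window == "blackman" else np.ones(len(x))
    # Amplitude-corrected single-sided spectrum (divide by coherent gain)
    spec = np.abs(rfft(x * w)) * 2.0 / w.sum()
    freqs = rfftfreq(len(x), d=1.0 / fs)
    idx, _ = find_peaks(spec)
    top = sorted(idx, key=lambda i: spec[i], reverse=True)[:n_peaks]
    return [(freqs[i], spec[i]) for i in sorted(top)]

# Example: a 50 Hz unit-amplitude sine sampled at 4096 Hz for 1 s
fs = 4096
t = np.arange(fs) / fs
peaks = fft_top_peaks(np.sin(2 * np.pi * 50 * t), fs, n_peaks=1)
# -> one peak at 50 Hz with amplitude close to 1.0
```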

Origin

DSP core adapted from vibration-analysis-mcp (Apache-2.0), with reliability fixes (kurtosis standardisation, velocity vectorisation, ddof consistency).
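As an aside on the ddof consistency fix: excess kurtosis computed with the population standard deviation (ddof=0) differs from the sample-std (ddof=1) version, and that gap is the class of discrepancy such a standardisation removes. A minimal illustration (the helper name is mine, not the codebase's):

```python
import numpy as np

def excess_kurtosis(x, ddof=1):
    """Fourth standardised moment minus 3, with a configurable std ddof."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std(ddof=ddof)
    return float(np.mean((x - m) ** 4) / s ** 4 - 3.0)

rng = np.random.default_rng(0)
x = rng.normal(size=50)
k0 = excess_kurtosis(x, ddof=0)
k1 = excess_kurtosis(x, ddof=1)
# ddof=1 yields a larger std, hence a strictly smaller kurtosis estimate
```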

Testing

  • 26/26 DSP tests pass (uv run pytest src/servers/vibration/tests/test_dsp.py -v)
  • MCP tool tests (test_tools.py): blocked by Pydantic 2.12.5 + Python 3.14 incompatibility (repo-wide issue affecting all servers, not specific to this PR)

Notes

  • Development assisted by GitHub Copilot (Claude)
  • No breaking changes to existing servers or workflows

Ref #178

@florenzi002
Member

@LGDiMaggio I've seen the same issue with Pydantic in #187. That PR downgrades to 3.12 and the tests run just fine.

@ShuxinLin given this is a repo-wide issue, do you want a separate PR to pin compatible Python and Pydantic versions (for visibility), or are you happy with any of the incoming PRs fixing it?

@LGDiMaggio
Author

Thanks for the heads-up on #187 — good to know the 3.12 downgrade fixes it. Happy to align this PR with whatever versioning approach you settle on. Just let me know if any changes are needed on my side.

@DhavalRepo18 DhavalRepo18 requested a review from ShuxinLin March 4, 2026 16:03
@ShuxinLin
Collaborator

I downgraded the Python version in commit 8ef012c. It should be in the main branch, no?

@LGDiMaggio
Author

Yes, 8ef012c is already in main — this PR branch is based on top of it.

I also just verified: after installing Python 3.12 via uv python install 3.12, all 42 vibration tests pass (26 DSP + 16 MCP tool tests), with 2 integration tests correctly skipped (no CouchDB).

======================== 42 passed, 2 skipped in 2.28s ========================

I'll push a small follow-up commit shortly to fix a minor test helper compatibility issue with fastmcp 2.14.5's call_tool return type.

@DhavalRepo18 DhavalRepo18 requested a review from nianjunz March 4, 2026 20:36
@DhavalRepo18
Collaborator

@LGDiMaggio, we will review this PR progressively.

"id": 306,
"type": "Vibration",
"text": "What is the vibration severity classification for a machine with an RMS velocity of 4.5 mm/s? It is a medium-sized machine on rigid foundations.",
"category": "ISO Assessment",
Collaborator

ISO might be too specific. How about Condition Assessment?

Author

Makes sense. Renamed the category from 'ISO Assessment' to 'Condition Assessment' in both utterances (306, 307).
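For context, a zone lookup consistent with this scenario might look like the sketch below. The boundary values are the commonly quoted ISO 10816-1 Class II (medium machines, rigid foundation) velocity limits in mm/s RMS; the server's actual thresholds may differ.

```python
def iso10816_zone(rms_mm_s, bounds=(1.12, 2.8, 7.1)):
    """Map an RMS velocity (mm/s) to Zone A (good) .. D (unacceptable).

    Default bounds are the commonly cited ISO 10816-1 Class II limits
    (assumed here, not taken from the server's table).
    """
    for zone, upper in zip("ABC", bounds):
        if rms_mm_s <= upper:
            return zone
    return "D"

# The utterance above: 4.5 mm/s on a medium machine falls in Zone C
zone = iso10816_zone(4.5)
```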

{
"id": 308,
"type": "Vibration",
"text": "Fetch vibration sensor data from Chiller 6, sensor Current, starting from 2020-06-01.",
Collaborator

The ending time is a little bit fuzzy here. If the vibration is measured at high frequency, say every couple of seconds, the data volume could be too large. It is better to have an ending time.

Author

Good point. Updated the utterance to include an explicit end time:

Fetch vibration sensor data from Chiller 6, sensor Current, from 2020-06-01 to 2020-06-07 at site MAIN.

Additionally, the CouchDB client already enforces a server-side limit=10000 documents per query as a safety net against unbounded fetches.
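As a sketch of what such a bounded fetch might look like: the /_find endpoint and selector shape follow CouchDB's standard Mango query API, but the field names (asset_id, timestamp) and helper functions below are assumptions about the client's internals, not the PR's couchdb_client.py (authentication is also omitted for brevity).

```python
import json
import os
import urllib.request

def build_query(asset_id, sensor, start, end, limit=10000):
    """Mango selector with an explicit time window and a hard document cap."""
    return {
        "selector": {
            "asset_id": asset_id,
            "timestamp": {"$gte": start, "$lt": end},
            sensor: {"$exists": True},
        },
        "limit": limit,  # server-side safety net against unbounded fetches
    }

def fetch_window(asset_id, sensor, start, end):
    """POST the query to CouchDB's Mango /_find endpoint and return docs."""
    url = f"{os.environ['COUCHDB_URL']}/{os.environ['COUCHDB_DBNAME']}/_find"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_query(asset_id, sensor, start, end)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["docs"]
```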

{
"id": 311,
"type": "Vibration",
"text": "Generate the FFT spectrum using a Blackman window for the loaded signal.",
Collaborator

This is a little bit fuzzy. We need to know which signal from the loaded dataset, for example the sensor name or signal ID. We may also need to specify that the return should be the peak frequencies (say top 1, 2, ...).

Author

Agreed. Updated all signal analysis and diagnosis utterances (IDs 310-320) to reference explicit signal IDs (e.g. 'vib_001') and specify the expected output format. For example:

Compute the FFT spectrum for signal 'vib_001' and return the top 5 peak frequencies with amplitudes.

Compute the FFT spectrum of signal 'vib_001' using a Blackman window and return the top 5 peaks.

{
"id": 312,
"type": "Vibration",
"text": "Perform envelope analysis on the vibration signal to look for bearing defect frequencies.",
Collaborator

Again, this is a little bit fuzzy: the implemented system might not know that a bandpass filter is needed (or it might, with certain prompt examples). Ideally, we can be more specific:

  1. Add the signal ID.
  2. Specify the bandpass filter.
  3. Use a certain algorithm for the envelope.

The current utterance could work, but it increases the opportunity for hallucination.

Author

Updated. The envelope utterance now specifies all three points:

  1. Signal ID: 'vib_001'
  2. Bandpass filter: 500 Hz to 1500 Hz
  3. Algorithm: Hilbert transform

Compute the envelope spectrum of signal 'vib_001' using a bandpass filter from 500 Hz to 1500 Hz (Hilbert transform) and return the top 5 peaks.

This removes ambiguity and lets the grader verify exact parameters.
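For reference, the pipeline the utterance now pins down (bandpass 500-1500 Hz, Hilbert-transform envelope, spectrum of the envelope) can be sketched as below. This is an illustrative reimplementation, not the PR's envelope.py.

```python
import numpy as np
from scipy.fft import rfft, rfftfreq
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_spectrum(x, fs, band=(500.0, 1500.0), order=4):
    """Bandpass -> Hilbert envelope -> single-sided spectrum of the envelope."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, np.asarray(x, dtype=float))
    env = np.abs(hilbert(filtered))   # instantaneous amplitude
    env = env - env.mean()            # drop the DC component
    spec = np.abs(rfft(env)) * 2.0 / len(env)
    return rfftfreq(len(env), 1.0 / fs), spec

# A 1 kHz carrier amplitude-modulated at 107 Hz (a BPFO-like defect rate)
fs = 8192
t = np.arange(2 * fs) / fs
x = (1 + 0.5 * np.cos(2 * np.pi * 107 * t)) * np.sin(2 * np.pi * 1000 * t)
freqs, spec = envelope_spectrum(x, fs)
peak_hz = freqs[np.argmax(spec)]   # the modulation rate, ~107 Hz
```

The defect rate shows up as the dominant envelope-spectrum peak even though it never appears directly in the raw spectrum, which is the point of the technique.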

Collaborator

The current implementation is based on the scipy package. An alternative is to use existing packages for FFT, time-series analysis, and so on. That could help expand the available analytic functions and let us focus on scenario development.

Author

Thanks for the feedback @nianjunz.

The DSP modules are thin wrappers around scipy.signal, not reimplementations. The actual scipy-specific code is about 25 lines across fft_analysis.py and envelope.py (calls to welch, hilbert, butter, sosfilt, find_peaks). The remaining roughly 950 lines are domain knowledge that no existing package provides:

  • bearing_freqs.py is rolling-element bearing kinematics (BPFO/BPFI/BSF/FTF formulas and a bearing database). Zero scipy.
  • fault_detection.py is ISO 10816 thresholds, shaft feature extraction, and rule-based fault classification. Zero scipy.

scipy.signal is the de facto standard Python DSP library. I evaluated alternatives, but dedicated vibration packages in the Python ecosystem are either unmaintained or internally depend on scipy anyway.

That said, I'm very open to integrating a specific package if you have one in mind; happy to discuss. The modular dsp/ layout makes it straightforward to swap implementations without touching the MCP tool layer.

Regarding focusing on scenario development: fully agreed, and that's the direction for the next iteration (addressing your other comments on utterance specificity and missing categories).
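The kinematics in bearing_freqs.py follow the standard rolling-element formulas; a minimal sketch is below. The SKF 6205 geometry values (9 balls, ~7.94 mm ball diameter, ~39 mm pitch diameter) are approximate literature numbers, not necessarily those in the PR's bearing database.

```python
import math

def bearing_frequencies(rpm, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Standard BPFO/BPFI/BSF/FTF kinematics for a rolling-element bearing."""
    fr = rpm / 60.0                       # shaft rotation frequency, Hz
    ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    return {
        "BPFO": n_balls * fr / 2.0 * (1.0 - ratio),   # outer-race pass
        "BPFI": n_balls * fr / 2.0 * (1.0 + ratio),   # inner-race pass
        "BSF": pitch_d * fr / (2.0 * ball_d) * (1.0 - ratio ** 2),  # ball spin
        "FTF": fr / 2.0 * (1.0 - ratio),              # cage frequency
    }

# Approximate SKF 6205 geometry at 1800 RPM (values assumed)
freqs = bearing_frequencies(1800, n_balls=9, ball_d=7.94, pitch_d=39.04)
# BPFO comes out near 3.58x shaft speed, i.e. roughly 107 Hz
```

A useful sanity check on any implementation: BPFO + BPFI must equal n_balls times the shaft frequency, by construction of the two formulas.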

Collaborator

The knowledge query utterances are good enough for now. We can incrementally add the other types of predictive and decision support utterances over time.

Author

Agreed — the current 24 utterances cover knowledge extraction, condition assessment, diagnostic and decision support well enough as a first iteration. Happy to add predictive utterances (e.g. RUL estimation, trend extrapolation) incrementally once the TSFM integration is in place. Thanks for the thorough review!

Collaborator

@nianjunz left a comment

I made some initial comments on the fork: feat: add VibrationAgent MCP server for vibration analysis benchmarks #190.
The work is great. It opens an opportunity to integrate a new asset-management application using vibration signals, and it brings AssetOpsBench closer to real engineering practice.

The main comments are: 1. Should we use existing packages to implement the MCP tools? 2. The utterances in general are a little bit fuzzy. 3. The utterances mainly focus on knowledge extraction; we do not have utterances for prediction, diagnostics, and decision support.

@LGDiMaggio
Author

Thanks @nianjunz for the thorough review. Here is a summary of all changes addressing your feedback:

  1. Existing packages (scipy): Replied inline on fft_analysis.py. The DSP modules are thin wrappers around scipy.signal (about 25 lines of scipy calls). The remaining 950 lines are pure domain knowledge (bearing kinematics, ISO 10816, fault rules) that no existing package provides.

  2. Utterance specificity: All utterances updated:

    • Data retrieval (308): explicit start and end time, site name
    • Signal analysis (310, 311, 312): explicit signal ID, output format (top N peaks), bandpass filter params and algorithm
    • Diagnosis (313-316): explicit signal ID
    • Fault classification (317-320): explicit signal ID
    • CouchDB client already enforces limit=10000 as server-side safety net
  3. Category rename: 'ISO Assessment' renamed to 'Condition Assessment' (306, 307)

  4. New scenario categories: Added 4 new utterances (321-324):

    • Diagnostic (321-322): symptom-driven scenarios requiring multi-tool orchestration to identify root cause
    • Decision Support (323-324): scenarios requiring the agent to combine tool results and provide justified maintenance recommendations

    Regarding prediction: I intentionally deferred prognostic scenarios. Reliable prediction requires integration with TSFMAgent as a time-series regressor (e.g., trending RMS velocity over time). Without a defined prognostic workflow, the LLM would hallucinate predictions without verifiable backing. This is planned as a follow-up once the TSFM integration path is established.

All changes are committed locally and will be pushed shortly.

@LGDiMaggio LGDiMaggio force-pushed the feat/vibration-mcp-server branch from b21bf19 to 86efdf2 Compare March 11, 2026 18:09
@LGDiMaggio
Author

One additional note on multi-server orchestration and hallucination mitigation:

I have been exploring the use of SKILL.md files as a coordination layer for complex MCP workflows — essentially structured instructions that guide the LLM through multi-tool pipelines with explicit constraints and verification steps. Early results in my project (https://github.com/LGDiMaggio/claude-stwinbox-diagnostics) show significant improvements in both task completion accuracy and hallucination reduction.

However, this approach is currently tied to a specific LLM and is not model-agnostic, so it would not be appropriate for AssetOpsBench at this stage. Mentioning it here as a direction worth discussing for future iterations, especially for cross-server scenarios (e.g., VibrationAgent + TSFMAgent prognostic workflows).

@DhavalRepo18
Collaborator

DhavalRepo18 commented Mar 11, 2026

@ShuxinLin, I would like to request that you review this PR now. Please make all cross-checks on your side of implementation. And pay special attention to the question I have: how data is loaded into CouchDB and how it's accessed.

@DhavalRepo18
Collaborator

> One additional note on multi-server orchestration and hallucination mitigation: […]

I acknowledge that this was our discussion to start slow and then grow faster.

@DhavalRepo18
Collaborator

DhavalRepo18 commented Mar 12, 2026

@LGDiMaggio I have one question about

  1. "id": 309,
    "type": "Vibration",
    "text": "What sensors are available for Chiller 6 at site MAIN?",
    "category": "Data Retrieval",
    "characteristic_form": "The expected response should list available sensor fields for Chiller 6 from CouchDB."
    }

---> The question and answer are a bit disconnected. Shall we list the available sensor fields?
---> Also, is the query part of your server?

@LGDiMaggio
Author

@DhavalRepo18 good questions, two answers:

  1. The characteristic_form for scenario 309 was indeed too vague — updated to explicitly state that the expected response should return the list of vibration sensor field names available for the asset in CouchDB (e.g., Vibration_X, Vibration_Y, Vibration_Z).

  2. Yes, list_vibration_sensors is one of the 8 MCP tools in src/servers/vibration/main.py. It calls list_sensor_fields(asset_id) on the CouchDB client.

Regarding the broader question of how data is loaded into CouchDB and accessed: the couchdb_client.py is a generic client that queries any asset and sensor field present in the database. It reuses the same environment variables as IoTAgent (COUCHDB_URL, COUCHDB_DBNAME, COUCHDB_USERNAME, COUCHDB_PASSWORD). The current sample data in src/couchdb/sample_data/ contains only thermal and energy sensors for the Chillers, which are not suitable for vibration analysis.

For this reason I also updated the data retrieval scenarios (308, 309, 321, 322) to reference a dedicated vibration asset (Motor_01, sensor Vibration_X) rather than Chiller 6. Populating CouchDB with vibration time-series data (acceleration in g, sampled at >= 1 kHz) is a prerequisite for the data retrieval scenarios to execute end-to-end. The characteristic_form for those scenarios now makes this dependency explicit.

@DhavalRepo18
Collaborator

> @DhavalRepo18 good questions, two answers: […]

Did you add the dataset?

@LGDiMaggio
Author

@DhavalRepo18 not yet — let me explain the options and the tradeoff.

The couchdb_client.py is intentionally generic: it queries any asset and sensor field stored with the structure {asset_id, timestamp, sensor_field: value}. This is the same structure used by the existing IoT sample data, so VibrationAgent is fully compatible with the existing CouchDB schema.

Two options for the dataset:

  1. Synthetic data in this PR: generate a bulk_docs_vibration.json with sinusoidal signals at known frequencies (e.g., 1x/2x shaft harmonics, BPFO bearing component at 4096 Hz) and load it into CouchDB via the existing couchdb_setup.sh. This gives reproducible ground truth, is small in size, and is immediately usable for benchmarking.

  2. Real open-source dataset (e.g., CWRU Bearing): widely used in the literature, but the raw files are MATLAB .mat format with dataset-specific field names (DE_time, FE_time). Loading CWRU would require a dedicated parser and field mapping, which would make the server specific to that dataset and break the generic CouchDB schema that makes VibrationAgent adaptable to any deployment.

My recommendation is option 1 for this PR (synthetic, reproducible, schema-compatible), with the CouchDB loader script serving as the template for anyone who wants to adapt a real dataset later. Happy to proceed with that if you agree.
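For illustration, one CouchDB document per sample under the schema described above might look like the following; the concrete values are made up.

```python
# One document per sample: {asset_id, timestamp, sensor_field: value}.
# Field values below are illustrative, not taken from the actual dataset.
doc = {
    "asset_id": "Motor_01",
    "timestamp": "2020-06-01T00:00:00.000244Z",
    "Vibration_X": 0.0132,  # acceleration in g
}
```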

@DhavalRepo18
Collaborator

> @DhavalRepo18 not yet — let me explain the options and the tradeoff. […]

Yes, I agree with option 1; synthetic data also reduces our data-licensing needs. We can additionally point to a real dataset so users can take the extra steps to benchmark on real data, and provide a Python script for any processing needed so we avoid copying any data. A pointer to one real dataset hosted elsewhere, while still getting the benefits of this work, would be highly appreciated.

We intend to merge this PR by early next week.

@DhavalRepo18
Collaborator

DhavalRepo18 commented Mar 13, 2026

@ShuxinLin can you please initiate a code review and share all the other observations you have from a code perspective? If you can provide your full review by Monday, @LGDiMaggio can make the necessary changes and we can close the PR early next week.

@LGDiMaggio
Author

Synthetic vibration data & generation script

@DhavalRepo18 here's an update on the data side. This commit adds:

  1. generate_synthetic_vibration.py — a self-contained, documented generation script in src/servers/vibration/sample_data/. It implements the McFadden & Smith (1984) impulsive bearing-fault model:

    • Periodic impulse train at BPFO (outer-race defect, SKF 6205 @ 1800 RPM)
    • Structural resonance ring-down at 3200 Hz
    • Load-zone amplitude modulation at shaft frequency
    • The script includes a --check flag that prints RMS, peak, crest factor, kurtosis — the same features the server computes, so you can verify signal quality before ingestion
    • Motor slip is neglected (documented as a known simplification)
  2. bulk_docs_vibration.json — 4096 CouchDB documents (1 s at 4096 Hz), Motor_01/Vibration_X. Signal stats: RMS=0.17g, peak=2.1g, crest factor=12.3, excess kurtosis=48 — consistent with a moderate outer-race fault.

  3. Updated couchdb_setup.sh — loads vibration data after the existing chiller data. Please verify this is compatible with your CouchDB deployment workflow — I had active tool support on this part and would appreciate a human check.
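A compressed sketch of the item-1 generation model: an impulse train at the BPFO rate exciting a resonance ring-down, amplitude-modulated by the load zone, over a noise floor. The decay rate and noise level below are assumed; the actual script is more detailed.

```python
import numpy as np

def synthetic_outer_race_fault(fs=4096, duration=1.0, bpfo=107.5,
                               f_res=3200.0, shaft_hz=30.0, seed=0):
    """McFadden & Smith-style outer-race fault signal (simplified sketch)."""
    rng = np.random.default_rng(seed)
    n = int(fs * duration)
    t = np.arange(n) / fs
    x = 0.02 * rng.standard_normal(n)   # broadband noise floor (assumed level)
    decay = 800.0                       # ring-down decay rate, 1/s (assumed)
    hit = 0.0
    while hit < duration:
        tau = t - hit
        mask = tau >= 0
        # Load-zone modulation: impulse strength varies at shaft frequency
        amp = 1.0 + 0.5 * np.cos(2 * np.pi * shaft_hz * hit)
        x[mask] += amp * np.exp(-decay * tau[mask]) * np.sin(
            2 * np.pi * f_res * tau[mask])
        hit += 1.0 / bpfo               # next impact, one BPFO period later
    return t, x

t, x = synthetic_outer_race_fault()
crest = np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))
# Impulsive faults show a high crest factor, well above a sine's ~1.4
```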

Regarding real datasets: I am currently working on making vibration datasets from my research and teaching activities available in an open format, but they are not yet publicly released under a compatible license. We can evaluate including them in a future iteration once the licensing is resolved.

Regarding testing with the new vibration data: It would be valuable if someone from the IBM team could run an end-to-end smoke test (CouchDB load -> scenario execution) to validate that the data pipeline works correctly in the containerized environment.

Also fixed a minor inconsistency: data_store.py was using ddof=0 for kurtosis while main.py and fault_detection.py both use ddof=1 — now aligned everywhere.

Added myself (LGDiMaggio) to .all-contributorsrc.

@DhavalRepo18
Collaborator

> Added myself (LGDiMaggio) to .all-contributorsrc.

The moment your PR is merged, an automated hook will revise this; we have an auto policy.

Yes, the code will be executed before merge; this is being handled by @ShuxinLin.

@DhavalRepo18
Collaborator

DhavalRepo18 commented Mar 16, 2026

@LGDiMaggio final question from me.

  1. Given our repo is mainly a benchmark, what leaderboard can we prepare using your 24 scenarios? Can we show that scenarios were answered better with the help of the vibration server vs. without it?

@LGDiMaggio
Author

@DhavalRepo18 great question. The 24 scenarios naturally support a tool-augmented vs. baseline leaderboard using the existing 6-dimensional evaluation framework (Task Completion, Data Retrieval Accuracy, Result Verification, Agent Sequence, Clarity & Justification, Hallucination Check).

Experimental design:

  • Setup A (with vibration-mcp-server): LLM agent has access to all 8 vibration MCP tools via plan-execute
  • Setup B (without vibration-mcp-server): Same LLM, same questions, no vibration tools. The LLM relies on parametric knowledge only

Expected results by category:

| Category | Scenarios | With Tools | Without Tools |
|---|---|---|---|
| Knowledge Query | 301-303 | High | Medium. LLMs have general knowledge but can't enumerate tool-specific capabilities |
| Bearing Analysis | 304-305 | High | Low. Exact numerical computation vs. hallucinated values |
| Condition Assessment | 306-307 | High | Medium. ISO thresholds exist in training data but tools guarantee precision |
| Data Retrieval | 308-309 | High | Fail. Impossible without CouchDB access |
| Signal Analysis | 310-312 | High | Fail. FFT/envelope require actual DSP computation |
| Diagnosis | 313-316 | High | Very Low. Needs multi-tool orchestration |
| Fault Classification | 317-320 | High | Low. Requires spectral evidence |
| Diagnostic (multi-step) | 321-322 | High | Fail. Requires data retrieval + analysis chain |
| Decision Support | 323-324 | High | Low. Recommendations need analysis results as evidence |

At least 11 of 24 scenarios (Data Retrieval, Signal Analysis, multi-step Diagnostic) are structurally impossible without the tools because they require real DSP computation or CouchDB access. This alone guarantees a significant delta on the leaderboard. The remaining 13 test whether tools also improve accuracy where LLMs have parametric knowledge (bearing math, ISO thresholds), where the Hallucination dimension should show the clearest improvement (exact computed values vs. approximations).

Leaderboard format (consistent with the existing Kaggle benchmark):

  • Rows: LLM models
  • Columns: overall accuracy, per-category, per-dimension (6D)
  • Two sections: with / without vibration-mcp-server
  • Delta (Δ) column showing improvement

This directly demonstrates the benchmark's core thesis: domain-specific tools measurably improve agent performance on industrial tasks.

@DhavalRepo18
Collaborator

@nianjunz if all your questions are answered, please approve the PR. We will also wait for @ShuxinLin to give her view.

Add a new VibrationAgent MCP server that provides 8 tools for industrial
vibration diagnostics:

- get_vibration_data / list_vibration_sensors (CouchDB integration)
- compute_fft_spectrum / compute_envelope_spectrum (DSP analysis)
- assess_vibration_severity (ISO 10816 classification)
- calculate_bearing_frequencies / list_known_bearings (bearing analysis)
- diagnose_vibration (full automated diagnostic pipeline)

DSP core adapted from vibration-analysis-mcp (Apache-2.0):
https://github.com/LGDiMaggio/claude-stwinbox-diagnostics/tree/main/mcp-servers/vibration-analysis-mcp

Also includes:
- 20 benchmark scenarios (vibration_utterance.json, IDs 301-320)
- Registration in workflow executor (DEFAULT_SERVER_PATHS)
- scipy>=1.10.0 dependency and vibration-mcp-server entry point
- 26 unit tests (DSP) + 17 MCP tool tests
- Full documentation in INSTRUCTIONS.md

Ref IBM#178 -- ready for review and testing

Signed-off-by: Luigi Di Maggio <luigi.dimaggio@polito.it>
- conftest.py: handle both tuple and direct return from FastMCP.call_tool()
- test_tools.py: fix bearing name assertion (includes description suffix)
- test_tools.py: fix key name 'bearing' (was 'bearing_name') in to_dict()

All 42 unit tests pass on Python 3.12; 2 integration tests skipped (no CouchDB).

Signed-off-by: Luigi Di Maggio <luigi.dimaggio@polito.it>
- Add explicit end time to data retrieval scenario (308)
- Add signal IDs and output format to all analysis/diagnosis/fault utterances (310-320)
- Specify bandpass filter params and algorithm for envelope analysis (312)
- Rename 'ISO Assessment' category to 'Condition Assessment' (306, 307)
- Add Diagnostic scenarios (321-322): symptom-driven multi-tool orchestration
- Add Decision Support scenarios (323-324): maintenance recommendations
- Total scenarios: 20 -> 24 (IDs 301-324)

Signed-off-by: Luigi Di Maggio <luigi.dimaggio@polito.it>
- Replace 'Chiller 6 / sensor Current' with 'Motor_01 / sensor Vibration_X'
  in scenarios 308, 309, 321, 322
- Add note in characteristic_form that CouchDB must be populated with
  vibration time-series data (acceleration in g, >= 1 kHz) for data
  retrieval scenarios to execute

Signed-off-by: Luigi Di Maggio <luigi.dimaggio@polito.it>
- Add generate_synthetic_vibration.py (McFadden & Smith 1984 impulsive model)
- Add bulk_docs_vibration.json (4096 docs, Motor_01/Vibration_X, BPFO fault)
- Update couchdb_setup.sh to load vibration sample data
- Fix kurtosis ddof inconsistency in data_store.py (ddof=0 -> ddof=1)
- Add LGDiMaggio to .all-contributorsrc

Signed-off-by: Luigi Di Maggio <luigi.dimaggio@polito.it>
Signed-off-by: Luigi Di Maggio <luigi.dimaggio@polito.it>
@LGDiMaggio LGDiMaggio force-pushed the feat/vibration-mcp-server branch from a7deebe to ae33722 Compare March 17, 2026 21:36
@LGDiMaggio
Author

Rebased on latest main (includes WorkOrderAgent from #191). All 4 conflicts resolved:

  • pyproject.toml: both entry points kept (wo-mcp-server + vibration-mcp-server)
  • executor.py: both agents registered in DEFAULT_SERVER_PATHS
  • INSTRUCTIONS.md: documentation for all 6 servers merged
  • couchdb_setup.sh: vibration data loading adapted to the new Python-based init pattern (reuses init_asset_data.py)

All 42 tests pass, 2 integration tests correctly skipped (no CouchDB).

@DhavalRepo18
Collaborator

@ShuxinLin, can you please prioritize this PR now?

@LGDiMaggio
Author

Conflicts resolved, ready to merge.

@DhavalRepo18
Collaborator

We typically run the PR prior to merge. We plan to get this in the mainstream in a week.

Collaborator

tmp/ is going to be removed soon. Consolidate the scenarios to HF @DhavalRepo18

Collaborator

@ShuxinLin left a comment

I have not run the vibration server locally, since I suspect the CouchDB setup with the vibration data is not working.

--db "${WO_DBNAME:-workorder}"

# Load vibration sample data (Motor_01 bearing fault) into the IoT database
VIBRATION_FILE="/sample_data/bulk_docs_vibration.json"
Collaborator

I found that bulk_docs_vibration.json is under servers/vibration/sample_data/. Is there a file-move operation I missed?

"fmsr": "fmsr-mcp-server",
"tsfm": "tsfm-mcp-server",
"wo": "wo-mcp-server",
"IoTAgent": "iot-mcp-server",
Collaborator

We are removing the "Agent" suffix from the MCP server names; this conflict merge is not correct.

- [FMSRAgent](#fmsragent)
- [TSFMAgent](#tsfmagent)
- [WorkOrderAgent](#workorderagent)
- [VibrationAgent](#vibrationagent)
Collaborator

Similarly, we renamed the MCP servers; you can change "VibrationAgent" to just "vibration".

if [ -f "$VIBRATION_FILE" ]; then
echo "Loading vibration data..."
COUCHDB_URL="http://localhost:5984" \
python3 /couchdb/init_asset_data.py \
Collaborator

You are running init_asset_data.py to load the vibration data, but init_asset_data.py was not modified. Is the change complete?

5 participants