
test: enhance test framework with comprehensive fixtures and mocks#5354

Merged
Soulter merged 3 commits into AstrBotDevs:master from whatevertogo:test/test-framework-foundation on Feb 23, 2026

Conversation

@whatevertogo (Contributor) commented Feb 23, 2026

Enhance the test framework with reusable mock utilities and helper functions to reduce code duplication across platform adapter tests and improve test maintainability.

Modifications

  • Added tests/fixtures/mocks/ with mock builders for Telegram, Discord, and Aiocqhttp

  • Added tests/fixtures/helpers.py with utility functions for creating test data

  • Expanded tests/conftest.py with test profile support (all/blocking)

  • Updated coverage workflow to target correct module

  • This is NOT a breaking change.

Screenshots or Test Results

# Verification: Mock builders can be imported
$ python -c "from tests.fixtures.mocks import MockTelegramBuilder, MockDiscordBuilder, MockAiocqhttpBuilder; print('Mock builders imported successfully')"
Mock builders imported successfully

# Verification: Conftest fixtures work
$ pytest tests/conftest.py --collect-only
collected 0 items
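As a sketch of how such shared builders are typically consumed in an adapter test: the class and method names (`MockTelegramBuilder.create_bot`) follow the API described in this PR, but the body below is a generic stand-in built on `unittest.mock`, not the actual fixture code.

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock


class MockTelegramBuilder:
    """Illustrative stand-in for the PR's builder (the real one lives in
    tests/fixtures/mocks/telegram.py); only create_bot is sketched here."""

    @staticmethod
    def create_bot() -> MagicMock:
        bot = MagicMock()
        # Adapter code awaits send_message, so back it with an AsyncMock.
        bot.send_message = AsyncMock(return_value={"message_id": 1})
        return bot


bot = MockTelegramBuilder.create_bot()
reply = asyncio.run(bot.send_message(chat_id=123, text="hi"))
print(reply)
```

Centralizing this kind of setup is what removes the per-test boilerplate the PR description mentions: each adapter test asks the builder for a pre-wired mock instead of hand-assembling `AsyncMock` attributes.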

Checklist

  • 😊 If there are new features added in the PR, I have discussed them with the authors through issues/emails, etc.
  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
  • 😮 My changes do not introduce malicious code.

Summary by Sourcery


Enhance the testing infrastructure with shared fixtures, mocks, and helpers to support platform adapter and plugin tests while tightening coverage configuration.

Enhancements:

  • Add centralized pytest configuration for test ordering, profiles, and common AstrBot-related fixtures.
  • Introduce reusable helper utilities for constructing platform configs, message components, and LLM responses.
  • Provide a minimal test plugin and static fixture structure for plugin and message testing.

Build:

  • Adjust coverage workflow to measure coverage only for the astrbot package.

Tests:

  • Add shared mock modules and builders for Telegram, Discord, and Aiocqhttp to simplify adapter testing.
  • Establish a fixtures package with helpers, configs, messages, and plugin metadata for reuse across tests.

- Add shared mock builders for aiocqhttp, discord, telegram
- Add test helpers for platform configs and mock objects
- Expand conftest.py with test profile support
- Update coverage test workflow configuration

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings February 23, 2026 00:56
@dosubot dosubot bot added the size:XXL This PR changes 1000+ lines, ignoring generated files. label Feb 23, 2026
@gemini-code-assist (Contributor)

Summary of Changes

Hello @whatevertogo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the project's testing infrastructure by introducing a more organized and reusable approach to writing tests. It centralizes common test setup, mock objects, and helper functions, aiming to reduce boilerplate code, improve test maintainability, and provide flexible test execution profiles. These enhancements will lead to a more robust and efficient testing workflow across the project.

Highlights

  • Enhanced Test Framework: Introduced a comprehensive conftest.py for Pytest, centralizing fixtures, test utilities, and test execution profiles.
  • Structured Mocking Utilities: Added a dedicated tests/fixtures/mocks directory with reusable mock builders for Telegram, Discord, and Aiocqhttp platforms, significantly reducing boilerplate in adapter tests.
  • Flexible Test Profiles: Implemented new test profiles (all and blocking) in conftest.py to control test execution based on predefined tiers (unit, integration, slow, platform, provider, db).
  • Centralized Test Helpers: Provided a tests/fixtures/helpers.py with utility functions for creating various mock objects, such as LLM responses, message components, and platform configurations.
  • Example Test Data and Plugins: Established a structured way to manage test data, configurations, and example plugins within the tests/fixtures directory, including sample JSON configs and a minimal test plugin.
Changelog
  • tests/conftest.py
    • Added a new conftest.py file to centralize pytest configurations and fixtures.
    • Implemented pytest_collection_modifyitems to reorder tests (unit first, then integration) and support test profiles (all, blocking).
    • Defined custom pytest markers for unit, integration, slow, platform, provider, db, tier_c, and tier_d tests.
    • Included fixtures for temporary directories, event queues, platform settings, and temporary data/config/db files.
    • Provided mock fixtures for core components like Provider, Platform, Conversation, AstrMessageEvent, AstrBotConfig, MainAgentBuildConfig, and Context.
    • Added utility functions create_mock_llm_response and create_mock_message_component.
    • Implemented pytest_runtest_setup for conditional test skipping based on environment variables (API keys, platform enablement).
  • tests/fixtures/__init__.py
    • Added __init__.py to define the fixtures directory as a Python package.
    • Exported FIXTURES_DIR, load_fixture, get_fixture_path, and various helper functions and mock builders.
  • tests/fixtures/configs/test_cmd_config.json
    • Added a sample JSON configuration file for testing purposes, including provider and platform settings.
  • tests/fixtures/helpers.py
    • Added NoopAwaitable class for mocking async operations.
    • Implemented make_platform_config to generate mock platform configurations for Telegram, Discord, Aiocqhttp, Webchat, and Wecom.
    • Provided create_mock_update and create_mock_file for Telegram-specific mocks.
    • Included create_mock_discord_attachment, create_mock_discord_user, and create_mock_discord_channel for Discord-specific mocks.
    • Added create_mock_message_component and create_mock_llm_response for generic message and LLM response mocks.
  • tests/fixtures/messages/test_messages.json
    • Added a JSON file containing various mock message structures (plain, image, at, reply, file, combined) for testing message parsing and handling.
  • tests/fixtures/mocks/__init__.py
    • Added __init__.py to define the mocks directory as a Python package.
    • Exported mock utilities and builders for Telegram, Discord, and Aiocqhttp.
  • tests/fixtures/mocks/aiocqhttp.py
    • Added create_mock_aiocqhttp_modules to create mock aiocqhttp modules.
    • Provided mock_aiocqhttp_modules pytest fixture for auto-mocking aiocqhttp in tests.
    • Implemented MockAiocqhttpBuilder with a static method create_bot for building mock CQHttp bot instances.
  • tests/fixtures/mocks/discord.py
    • Added create_mock_discord_modules to create mock discord modules and submodules.
    • Provided mock_discord_modules pytest fixture for auto-mocking discord in tests.
    • Implemented MockDiscordBuilder with a static method create_client for building mock Discord client instances.
  • tests/fixtures/mocks/telegram.py
    • Added create_mock_telegram_modules to create mock telegram, telegram.ext, telegramify_markdown, and apscheduler modules.
    • Provided mock_telegram_modules pytest fixture for auto-mocking Telegram-related modules in tests.
    • Implemented MockTelegramBuilder with static methods create_bot, create_application, and create_scheduler for building mock Telegram bot, application, and APScheduler instances.
  • tests/fixtures/plugins/fixture_plugin.py
    • Added a minimal TestPlugin class, demonstrating command, LLM tool, and regex handlers for plugin system testing.
  • tests/fixtures/plugins/metadata.yaml
    • Added a YAML metadata file for the test_plugin, including name, description, version, author, and repository.
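The "unit first, then integration" reordering attributed to `pytest_collection_modifyitems` in the changelog above can be sketched as a stable sort over collected paths. This is a hypothetical pure-function illustration; the real hook reorders pytest `Item` objects in place and may also consult markers, not just paths.

```python
def order_test_paths(paths: list[str]) -> list[str]:
    """Stable sort: unit tests first, integration tests after.

    A sketch of the ranking idea only -- the actual hook works on
    pytest Items, and the path convention here is assumed.
    """
    def rank(path: str) -> int:
        # Integration tests sort after everything else.
        return 1 if "integration" in path else 0

    return sorted(paths, key=rank)


collected = [
    "tests/integration/test_platform.py",
    "tests/unit/test_helpers.py",
    "tests/unit/test_config.py",
]
print(order_test_paths(collected))
```

Because `sorted` is stable, tests within each group keep their original collection order; only the unit/integration partition moves.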
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/coverage_test.yml
Activity
  • No human activity (comments, reviews) was detected on this pull request yet.

@dosubot dosubot bot added the area:platform The bug / feature is about IM platform adapter, such as QQ, Lark, Telegram, WebChat and so on. label Feb 23, 2026

dosubot bot commented Feb 23, 2026

Related Documentation

Checked 1 published document(s) in 1 knowledge base(s). No updates required.



@gemini-code-assist bot left a comment


Code Review

This pull request significantly enhances the test framework by introducing a comprehensive set of reusable fixtures, mocks, and helper functions. The new structure under tests/fixtures is well-organized and will greatly improve test maintainability and reduce code duplication across platform adapter tests. The introduction of mock builders and module-level mocks for external libraries is a solid approach. I've identified a couple of areas for improvement, mainly around code duplication and using more idiomatic pytest patterns for better efficiency and readability. Overall, this is a great contribution to the project's test infrastructure.


@sourcery-ai bot left a comment



Hey - I've found 3 issues, and left some high level feedback:

  • The helper functions create_mock_llm_response and create_mock_message_component are implemented both in tests/conftest.py and tests/fixtures/helpers.py; consider consolidating them in a single module (e.g., helpers.py) and importing from there to avoid divergence between implementations.
  • The module-scoped, autouse=True fixtures in tests/fixtures/mocks/* will affect any test module that imports them, which can make test behavior less explicit; consider switching to explicitly requested fixtures (dropping autouse=True) or scoping them more narrowly where possible.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The helper functions `create_mock_llm_response` and `create_mock_message_component` are implemented both in `tests/conftest.py` and `tests/fixtures/helpers.py`; consider consolidating them in a single module (e.g., `helpers.py`) and importing from there to avoid divergence between implementations.
- The module-scoped, `autouse=True` fixtures in `tests/fixtures/mocks/*` will affect any test module that imports them, which can make test behavior less explicit; consider switching to explicitly requested fixtures (dropping `autouse=True`) or scoping them more narrowly where possible.

## Individual Comments

### Comment 1
<location> `tests/conftest.py:33-42` </location>
<code_context>
+def pytest_collection_modifyitems(session, config, items):  # noqa: ARG001
</code_context>

<issue_to_address>
**suggestion (testing):** Add tests that cover the new collection ordering and `--test-profile` behavior

These hooks encode a lot of policy (ordering unit vs integration, auto-marking, `tier_c`/`tier_d`, and `ASTRBOT_TEST_PROFILE`/`--test-profile` interaction), but none of it is exercised by tests. Please add tests that: (1) build synthetic `Item`s with different paths/markers and assert the resulting order from `pytest_collection_modifyitems`; (2) check that `profile == 'blocking'` deselects tier_c/tier_d tests and triggers `pytest_deselected`; and (3) verify the intended precedence between the env var and CLI option. This will help keep collection behavior stable during future changes.

Suggested implementation:

```python
import pytest


def _write_basic_tests(pytester: pytest.Pytester) -> None:
    """Create a minimal test layout with unit / integration and tier markers."""
    pytester.makepyfile(
        **{
            # Simulate conventional unit-test location
            "unit/test_unit_ordering.py": """
import pytest

def test_unit_first():
    pass

@pytest.mark.tier_c
def test_unit_tier_c():
    pass
""",
            # Simulate conventional integration-test location
            "integration/test_integration_ordering.py": """
import pytest

@pytest.mark.integration
def test_integration_second():
    pass

@pytest.mark.tier_d
def test_integration_tier_d():
    pass
""",
        }
    )


def test_collection_ordering_unit_before_integration(pytester: pytest.Pytester) -> None:
    """
    Unit tests should be ordered before integration tests after collection.

    This asserts the ordering policy enforced by `pytest_collection_modifyitems`.
    """
    _write_basic_tests(pytester)

    result = pytester.runpytest("--collect-only", "-q")
    result.stdout.fnmatch_lines(
        [
            # Both unit tests should appear before integration tests
            "unit/test_unit_ordering.py::test_unit_first",
            "unit/test_unit_ordering.py::test_unit_tier_c",
            "integration/test_integration_ordering.py::test_integration_second",
            "integration/test_integration_ordering.py::test_integration_tier_d",
        ],
        consecutive=True,
    )
    result.assert_outcomes()  # just ensure collection succeeded


def test_blocking_profile_deselects_tier_c_and_tier_d(pytester: pytest.Pytester) -> None:
    """
    When running with `--test-profile=blocking`, tier_c and tier_d tests
    should be deselected and reported via `pytest_deselected`.
    """
    _write_basic_tests(pytester)

    result = pytester.runpytest("--test-profile=blocking", "-q")
    # All tests should be collected successfully, but tier_c/tier_d deselected
    outcomes = result.parseoutcomes()
    # We expect at least the two tiered tests to be deselected
    assert outcomes.get("deselected", 0) >= 2
    # The un-tiered tests should run
    assert outcomes.get("passed", 0) >= 2


def test_env_profile_blocks_when_cli_not_set(pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch) -> None:
    """
    If only ASTRBOT_TEST_PROFILE=blocking is set (no CLI option),
    tier_c/tier_d tests should be deselected.
    """
    _write_basic_tests(pytester)
    monkeypatch.setenv("ASTRBOT_TEST_PROFILE", "blocking")

    result = pytester.runpytest("-q")
    outcomes = result.parseoutcomes()
    # Environment profile should be honored
    assert outcomes.get("deselected", 0) >= 2
    assert outcomes.get("passed", 0) >= 2


def test_cli_profile_overrides_env_profile(pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch) -> None:
    """
    CLI `--test-profile` should take precedence over ASTRBOT_TEST_PROFILE
    according to the `pytest_collection_modifyitems` implementation.

    With env=blocking but CLI=all, tier_c/tier_d tests should NOT be deselected.
    """
    _write_basic_tests(pytester)
    monkeypatch.setenv("ASTRBOT_TEST_PROFILE", "blocking")

    # Explicitly request profile=all via CLI, which should override env
    result = pytester.runpytest("--test-profile=all", "-q")
    outcomes = result.parseoutcomes()

    # Nothing should be deselected because CLI profile=all overrides env=blocking
    assert outcomes.get("deselected", 0) == 0
    # All four tests should run
    assert outcomes.get("passed", 0) == 4

```

These tests assume the following about your existing hook implementation:

1. Unit vs integration ordering is primarily driven by the filesystem layout (`unit/` vs `integration/`) or equivalent logic that keeps unit tests before integration tests after `pytest_collection_modifyitems`.
2. Tier markers are named `tier_c` and `tier_d`, and when `profile == "blocking"` the hook deselects those items and triggers `pytest_deselected`.
3. `pytest_collection_modifyitems` determines the profile with:
   `profile = config.getoption("--test-profile") or os.environ.get("ASTRBOT_TEST_PROFILE", "all")`, meaning the CLI option takes precedence over the env var.
4. The `--test-profile` option is registered (e.g. in `pytest_addoption`) and supports at least `"all"` and `"blocking"` values.

If any of these assumptions differ from your actual implementation, you should adjust:
- The paths/markers in `_write_basic_tests` to match your real unit/integration/tier-marking conventions.
- The expected outcomes in the assertions (e.g. counts of deselected/passed tests) to align with the exact policy your hooks implement.
</issue_to_address>

### Comment 2
<location> `tests/conftest.py:361-370` </location>
<code_context>
+# ============================================================
+
+
+def create_mock_llm_response(
+    completion_text: str = "Hello! How can I help you?",
+    role: str = "assistant",
+    tools_call_name: list[str] | None = None,
+    tools_call_args: list[dict] | None = None,
+    tools_call_ids: list[str] | None = None,
+):
+    """创建模拟的 LLM 响应。"""
+    from astrbot.core.provider.entities import LLMResponse, TokenUsage
+
+    return LLMResponse(
+        role=role,
+        completion_text=completion_text,
+        tools_call_name=tools_call_name or [],
+        tools_call_args=tools_call_args or [],
+        tools_call_ids=tools_call_ids or [],
+        usage=TokenUsage(input_other=10, output=5),
+    )
+
</code_context>

<issue_to_address>
**suggestion:** Avoid duplicating `create_mock_llm_response` and `create_mock_message_component` in both `conftest.py` and `tests/fixtures/helpers.py`

These helpers now exist here and in `tests/fixtures/helpers.py` with effectively identical behavior. Centralize them in one module (for example, keep them in `tests/fixtures/helpers.py` and import into `conftest.py`, or the other way around) to avoid drift and inconsistent test behavior based on import path.

Suggested implementation:

```python
# ============================================================
# Utility functions
# ============================================================

# Use the shared helpers from tests/fixtures/helpers.py instead of
# re-implementing them in conftest.py.
from .fixtures.helpers import create_mock_llm_response, create_mock_message_component
```

1. Remove the local definitions of `create_mock_llm_response` and `create_mock_message_component` (their `def ...` implementations) from `tests/conftest.py`, so these utilities are imported only from `tests/fixtures/helpers.py`.
2. Confirm that `tests/fixtures/helpers.py` already implements both functions and that their signatures match how they are currently used in the test code.
3. If anything else imports these functions directly from `tests.conftest` (rather than using them indirectly via the pytest fixture machinery), change those imports to `tests.fixtures.helpers`, or use the relative import `.fixtures.helpers`.
</issue_to_address>

### Comment 3
<location> `tests/fixtures/plugins/fixture_plugin.py:11-20` </location>
<code_context>
+@star.register("test_plugin", "AstrBot Team", "测试插件 - 用于插件系统测试", "1.0.0")
</code_context>

<issue_to_address>
**suggestion (testing):** Add integration-style tests that exercise this fixture plugin with the new `mock_context` and event fixtures

To validate the new shared fixtures and plugin wiring, please add tests that: (1) instantiate `TestPlugin` via `mock_context` and assert `initialized` toggles on `terminate`; (2) send an `AstrMessageEvent` and verify `test_command`/`test_regex_handler` produce the expected `MessageEventResult`; and (3) invoke `test_llm_tool` directly to confirm tools can be called in isolation.

Suggested implementation:

```python
import re

from astrbot.api import llm_tool, star
from astrbot.api.event import AstrMessageEvent, MessageEventResult, filter

```

```python
    def __init__(self, context: star.Context) -> None:
        super().__init__(context)
        self.initialized = True

    async def terminate(self) -> None:
        """Plugin teardown."""
        self.initialized = False

    @filter.command("test")
    async def test_command(self, event: AstrMessageEvent) -> MessageEventResult:
        """Test command handler, used in integration tests."""
        return MessageEventResult(
            content="test_command_ok",
            event=event,
        )

    @filter.regex(r"test-regex:(.+)")
    async def test_regex_handler(
        self,
        event: AstrMessageEvent,
        match: re.Match[str],
    ) -> MessageEventResult:
        """Test regex handler, used in integration tests."""
        matched = match.group(1)
        return MessageEventResult(
            content=f"test_regex_ok:{matched}",
            event=event,
        )

    @llm_tool
    async def test_llm_tool(self, query: str) -> str:
        """Test LLM tool; can be invoked in isolation."""
        return f"test_llm_tool_ok:{query}"

```

To fully implement your review comment, you’ll also want a new integration-style test module, for example `tests/integration/test_fixture_plugin.py`, that uses your shared fixtures:

1. Use the `mock_context` fixture to instantiate the plugin and assert:
   ```python
   async def test_fixture_plugin_initialized_and_terminated(mock_context):
       plugin = TestPlugin(mock_context)
       assert plugin.initialized is True
       await plugin.terminate()
       assert plugin.initialized is False
   ```
2. Use your `AstrMessageEvent` / event fixtures (e.g. something like `message_event_factory` or `mock_event`) to:
   - Build an event for `/test` (or whatever command syntax your framework routes to `filter.command("test")`),
   - Dispatch it through the normal plugin/event-processing path,
   - Assert the resulting `MessageEventResult` has `content == "test_command_ok"` and any other expected fields.
3. Similarly, create an event whose content matches `test-regex:hello` and verify the routed handler produces a `MessageEventResult` whose `content == "test_regex_ok:hello"`.
4. Finally, call `plugin.test_llm_tool("foo")` directly in a test and assert the return value is `"test_llm_tool_ok:foo"`.

You may need to adjust how `MessageEventResult` is constructed in the plugin methods above to match the actual constructor or helper APIs in your codebase (e.g. `MessageEventResult.reply("...")` or similar) and mirror that in your assertions.
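The direct-call check in item (4) needs no event routing at all. A minimal, self-contained sketch of the pattern, using a stand-in class instead of the real `TestPlugin` (which requires the AstrBot runtime and the `mock_context` fixture):

```python
# Item (4) sketched in isolation: await the tool coroutine directly, without
# going through the event bus. _FakePlugin is a hypothetical stand-in for
# TestPlugin so the pattern runs without an AstrBot installation.
import asyncio


class _FakePlugin:
    async def test_llm_tool(self, query: str) -> str:
        return f"test_llm_tool_ok:{query}"


async def _call_tool() -> str:
    plugin = _FakePlugin()
    return await plugin.test_llm_tool("foo")


result = asyncio.run(_call_tool())
```

In the real test, `plugin = TestPlugin(mock_context)` replaces the fake, and the assertion stays the same: the returned string should be `"test_llm_tool_ok:foo"`.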
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Contributor

Copilot AI left a comment

Pull request overview

This PR enhances AstrBot’s Python test infrastructure by adding reusable fixtures, mock module builders for platform adapters, and shared test data/config files, aiming to reduce duplication across tests and improve maintainability.

Changes:

  • Added reusable mock-module fixtures and builder utilities for Telegram/Discord/Aiocqhttp adapters under tests/fixtures/mocks/.
  • Added shared fixture helpers and static fixture data (messages/configs/plugins) under tests/fixtures/.
  • Introduced a new tests/conftest.py with test ordering/marking and a --test-profile selection mode; adjusted CI coverage target to astrbot.
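The `--test-profile` switch mentioned above could be wired into `conftest.py` roughly as follows. This is a sketch: the option name comes from the PR description, while the `blocking` marker name and the skip-based filtering are assumptions about how the profiles are implemented.

```python
# conftest.py sketch of a --test-profile selection mode.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--test-profile",
        action="store",
        default="all",
        choices=["all", "blocking"],
        help="Which subset of tests to run.",
    )


def pytest_collection_modifyitems(config, items):
    # "all" runs everything; "blocking" skips tests without the marker.
    if config.getoption("--test-profile") != "blocking":
        return
    skip = pytest.mark.skip(reason="excluded by --test-profile=blocking")
    for item in items:
        if "blocking" not in item.keywords:
            item.add_marker(skip)
```

Running `pytest --test-profile=blocking` would then collect everything but skip any test not marked `@pytest.mark.blocking`.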

Reviewed changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 8 comments.

Show a summary per file
File Description
tests/fixtures/plugins/metadata.yaml Adds a minimal plugin metadata fixture for plugin-system tests.
tests/fixtures/plugins/fixture_plugin.py Adds a minimal test plugin implementation (commands/regex/tool) for plugin-system testing.
tests/fixtures/mocks/telegram.py Provides Telegram + apscheduler module mocks and a builder for bot/app/scheduler test doubles.
tests/fixtures/mocks/discord.py Provides Discord module mocks and a client builder for adapter tests.
tests/fixtures/mocks/aiocqhttp.py Provides aiocqhttp module mocks and a bot builder for adapter tests.
tests/fixtures/mocks/__init__.py Exposes mock fixtures/builders via a single import surface.
tests/fixtures/messages/test_messages.json Adds representative message payload fixtures for component parsing tests.
tests/fixtures/helpers.py Adds shared helper functions for configs, message components, and LLM responses.
tests/fixtures/configs/test_cmd_config.json Adds a config fixture for command/config-related tests.
tests/fixtures/__init__.py Adds fixture loading helpers and re-exports common helper functions.
tests/conftest.py Adds pytest configuration, ordering/marking, selection profiles, and shared fixtures.
.github/workflows/coverage_test.yml Updates pytest coverage collection to target the astrbot module.

Comment on lines +33 to +37
mock_aiocqhttp = create_mock_aiocqhttp_modules()
monkeypatch = pytest.MonkeyPatch()

monkeypatch.setitem(sys.modules, "aiocqhttp", mock_aiocqhttp)
monkeypatch.setitem(sys.modules, "aiocqhttp.exceptions", mock_aiocqhttp.exceptions)
Copilot AI Feb 23, 2026
Same issue as the other mock modules: aiocqhttp is inserted into sys.modules as a MagicMock, but the code imports aiocqhttp.exceptions as a submodule. If aiocqhttp doesn't look like a package (no __path__), from aiocqhttp.exceptions import ... can fail with "'aiocqhttp' is not a package". Prefer types.ModuleType + __path__ for package-like mocks.
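A package-like mock along the lines the comment suggests, using `types.ModuleType` with a `__path__` attribute so the import system treats `aiocqhttp` as a package (the `Error` attribute here is a hypothetical stand-in for the real exception class):

```python
# Build package-like mocks instead of bare MagicMocks in sys.modules.
import sys
import types
from unittest.mock import MagicMock

pkg = types.ModuleType("aiocqhttp")
pkg.__path__ = []  # an empty __path__ is enough to mark the module as a package

exceptions = types.ModuleType("aiocqhttp.exceptions")
exceptions.Error = MagicMock(name="Error")  # stand-in exception class
pkg.exceptions = exceptions

sys.modules["aiocqhttp"] = pkg
sys.modules["aiocqhttp.exceptions"] = exceptions

# Both import styles now resolve against the mocks:
from aiocqhttp.exceptions import Error
import aiocqhttp
```

In the fixture, these `sys.modules` assignments would go through `monkeypatch.setitem` as in the snippet above, so they are undone after the test.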

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Feb 23, 2026
@Soulter Soulter merged commit 7b731eb into AstrBotDevs:master Feb 23, 2026
6 checks passed
astrbot-doc-agent bot pushed a commit to AstrBotDevs/AstrBot-docs that referenced this pull request Feb 23, 2026
@astrbot-doc-agent

Generated docs update PR (pending manual review):
AstrBotDevs/AstrBot-docs#143
Trigger: PR merged


AI change summary:

  • Added plugin testing guide documents zh/dev/star/guides/testing.md and en/dev/star/guides/testing.md, covering the test framework structure, fixtures, platform adapter mocks, and best practices.
  • Updated .vitepress/config.mjs to add a "Plugin Testing" navigation link under the "Plugin Development" sidebar.
  • i18n: Chinese and English versions are synchronized.

Experimental bot notice:

  • This output is generated by AstrBot-Doc-Agent for review only.
  • It does not represent the final documentation form.


Labels

  • area:platform — The bug / feature is about IM platform adapter, such as QQ, Lark, Telegram, WebChat and so on.
  • lgtm — This PR has been approved by a maintainer.
  • size:XXL — This PR changes 1000+ lines, ignoring generated files.

3 participants