test: enhance test framework with comprehensive fixtures and mocks (#5354)
Conversation
- Add shared mock builders for aiocqhttp, discord, telegram
- Add test helpers for platform configs and mock objects
- Expand conftest.py with test profile support
- Update coverage test workflow configuration

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary of Changes

Hello @whatevertogo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly upgrades the project's testing infrastructure by introducing a more organized and reusable approach to writing tests. It centralizes common test setup, mock objects, and helper functions, aiming to reduce boilerplate code, improve test maintainability, and provide flexible test execution profiles. These enhancements will lead to a more robust and efficient testing workflow across the project.
Code Review
This pull request significantly enhances the test framework by introducing a comprehensive set of reusable fixtures, mocks, and helper functions. The new structure under tests/fixtures is well-organized and will greatly improve test maintainability and reduce code duplication across platform adapter tests. The introduction of mock builders and module-level mocks for external libraries is a solid approach. I've identified a couple of areas for improvement, mainly around code duplication and using more idiomatic pytest patterns for better efficiency and readability. Overall, this is a great contribution to the project's test infrastructure.
Hey - I've found 3 issues, and left some high level feedback:

- The helper functions `create_mock_llm_response` and `create_mock_message_component` are implemented both in `tests/conftest.py` and `tests/fixtures/helpers.py`; consider consolidating them in a single module (e.g., `helpers.py`) and importing from there to avoid divergence between implementations.
- The module-scoped, `autouse=True` fixtures in `tests/fixtures/mocks/*` will affect any test module that imports them, which can make test behavior less explicit; consider switching to explicitly requested fixtures (dropping `autouse=True`) or scoping them more narrowly where possible.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The helper functions `create_mock_llm_response` and `create_mock_message_component` are implemented both in `tests/conftest.py` and `tests/fixtures/helpers.py`; consider consolidating them in a single module (e.g., `helpers.py`) and importing from there to avoid divergence between implementations.
- The module-scoped, `autouse=True` fixtures in `tests/fixtures/mocks/*` will affect any test module that imports them, which can make test behavior less explicit; consider switching to explicitly requested fixtures (dropping `autouse=True`) or scoping them more narrowly where possible.
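A minimal sketch of the second comment's suggestion, assuming a module-scoped mock fixture (the fixture and test names here are illustrative, not taken from the PR):

```python
import pytest

# Before (implicit): scope="module" with autouse=True applies the mock to
# every test in any module that imports this fixture file.
#
# @pytest.fixture(scope="module", autouse=True)
# def mock_telegram_modules(): ...

# After (explicit): tests opt in by naming the fixture in their signature.
@pytest.fixture(scope="module")
def mock_telegram_modules():
    # Placeholder mock payload; a real builder would patch sys.modules here.
    yield {"telegram": object()}


def test_adapter_uses_mock(mock_telegram_modules):
    # The mock is only active because this test requested it by name.
    assert "telegram" in mock_telegram_modules
```

Tests that never name `mock_telegram_modules` run without the mock, which makes each test's dependencies visible at the call site.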
## Individual Comments
### Comment 1
<location> `tests/conftest.py:33-42` </location>
<code_context>
+def pytest_collection_modifyitems(session, config, items): # noqa: ARG001
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests that cover the new collection ordering and `--test-profile` behavior
These hooks encode a lot of policy (ordering unit vs integration, auto-marking, `tier_c`/`tier_d`, and `ASTRBOT_TEST_PROFILE`/`--test-profile` interaction), but none of it is exercised by tests. Please add tests that: (1) build synthetic `Item`s with different paths/markers and assert the resulting order from `pytest_collection_modifyitems`; (2) check that `profile == 'blocking'` deselects tier_c/tier_d tests and triggers `pytest_deselected`; and (3) verify the intended precedence between the env var and CLI option. This will help keep collection behavior stable during future changes.
Suggested implementation:
```python
import pytest


def _write_basic_tests(pytester: pytest.Pytester) -> None:
    """Create a minimal test layout with unit / integration and tier markers."""
    pytester.makepyfile(
        **{
            # Simulate conventional unit-test location
            "unit/test_unit_ordering.py": """
                import pytest

                def test_unit_first():
                    pass

                @pytest.mark.tier_c
                def test_unit_tier_c():
                    pass
            """,
            # Simulate conventional integration-test location
            "integration/test_integration_ordering.py": """
                import pytest

                @pytest.mark.integration
                def test_integration_second():
                    pass

                @pytest.mark.tier_d
                def test_integration_tier_d():
                    pass
            """,
        }
    )


def test_collection_ordering_unit_before_integration(pytester: pytest.Pytester) -> None:
    """
    Unit tests should be ordered before integration tests after collection.

    This asserts the ordering policy enforced by `pytest_collection_modifyitems`.
    """
    _write_basic_tests(pytester)
    result = pytester.runpytest("--collect-only", "-q")
    result.stdout.fnmatch_lines(
        [
            # Both unit tests should appear before integration tests
            "unit/test_unit_ordering.py::test_unit_first",
            "unit/test_unit_ordering.py::test_unit_tier_c",
            "integration/test_integration_ordering.py::test_integration_second",
            "integration/test_integration_ordering.py::test_integration_tier_d",
        ],
        consecutive=True,
    )
    result.assert_outcomes()  # just ensure collection succeeded


def test_blocking_profile_deselects_tier_c_and_tier_d(pytester: pytest.Pytester) -> None:
    """
    When running with `--test-profile=blocking`, tier_c and tier_d tests
    should be deselected and reported via `pytest_deselected`.
    """
    _write_basic_tests(pytester)
    result = pytester.runpytest("--test-profile=blocking", "-q")
    # All tests should be collected successfully, but tier_c/tier_d deselected
    outcomes = result.parseoutcomes()
    # We expect at least the two tiered tests to be deselected
    assert outcomes.get("deselected", 0) >= 2
    # The un-tiered tests should run
    assert outcomes.get("passed", 0) >= 2


def test_env_profile_blocks_when_cli_not_set(
    pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch
) -> None:
    """
    If only ASTRBOT_TEST_PROFILE=blocking is set (no CLI option),
    tier_c/tier_d tests should be deselected.
    """
    _write_basic_tests(pytester)
    monkeypatch.setenv("ASTRBOT_TEST_PROFILE", "blocking")
    result = pytester.runpytest("-q")
    outcomes = result.parseoutcomes()
    # Environment profile should be honored
    assert outcomes.get("deselected", 0) >= 2
    assert outcomes.get("passed", 0) >= 2


def test_cli_profile_overrides_env_profile(
    pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch
) -> None:
    """
    CLI `--test-profile` should take precedence over ASTRBOT_TEST_PROFILE
    according to the `pytest_collection_modifyitems` implementation.

    With env=blocking but CLI=all, tier_c/tier_d tests should NOT be deselected.
    """
    _write_basic_tests(pytester)
    monkeypatch.setenv("ASTRBOT_TEST_PROFILE", "blocking")
    # Explicitly request profile=all via CLI, which should override env
    result = pytester.runpytest("--test-profile=all", "-q")
    outcomes = result.parseoutcomes()
    # Nothing should be deselected because CLI profile=all overrides env=blocking
    assert outcomes.get("deselected", 0) == 0
    # All four tests should run
    assert outcomes.get("passed", 0) == 4
```
These tests assume the following about your existing hook implementation:
1. Unit vs integration ordering is primarily driven by the filesystem layout (`unit/` vs `integration/`) or equivalent logic that keeps unit tests before integration tests after `pytest_collection_modifyitems`.
2. Tier markers are named `tier_c` and `tier_d`, and when `profile == "blocking"` the hook deselects those items and triggers `pytest_deselected`.
3. `pytest_collection_modifyitems` determines the profile with:
`profile = config.getoption("--test-profile") or os.environ.get("ASTRBOT_TEST_PROFILE", "all")`, meaning the CLI option takes precedence over the env var.
4. The `--test-profile` option is registered (e.g. in `pytest_addoption`) and supports at least `"all"` and `"blocking"` values.
If any of these assumptions differ from your actual implementation, you should adjust:
- The paths/markers in `_write_basic_tests` to match your real unit/integration/tier-marking conventions.
- The expected outcomes in the assertions (e.g. counts of deselected/passed tests) to align with the exact policy your hooks implement.
</issue_to_address>
### Comment 2
<location> `tests/conftest.py:361-370` </location>
<code_context>
+# ============================================================
+
+
+def create_mock_llm_response(
+ completion_text: str = "Hello! How can I help you?",
+ role: str = "assistant",
+ tools_call_name: list[str] | None = None,
+ tools_call_args: list[dict] | None = None,
+ tools_call_ids: list[str] | None = None,
+):
+ """创建模拟的 LLM 响应。"""
+ from astrbot.core.provider.entities import LLMResponse, TokenUsage
+
+ return LLMResponse(
+ role=role,
+ completion_text=completion_text,
+ tools_call_name=tools_call_name or [],
+ tools_call_args=tools_call_args or [],
+ tools_call_ids=tools_call_ids or [],
+ usage=TokenUsage(input_other=10, output=5),
+ )
+
</code_context>
<issue_to_address>
**suggestion:** Avoid duplicating `create_mock_llm_response` and `create_mock_message_component` in both `conftest.py` and `tests/fixtures/helpers.py`
These helpers now exist here and in `tests/fixtures/helpers.py` with effectively identical behavior. Centralize them in one module (for example, keep them in `tests/fixtures/helpers.py` and import into `conftest.py`, or the other way around) to avoid drift and inconsistent test behavior based on import path.
Suggested implementation:
```python
# ============================================================
# 工具函数 (utility functions)
# ============================================================
# Reuse the shared helpers from tests/fixtures/helpers.py instead of
# re-implementing them in conftest.py.
from .fixtures.helpers import create_mock_llm_response, create_mock_message_component
```
1. Delete the local definitions of `create_mock_llm_response` and `create_mock_message_component` in `tests/conftest.py` (i.e. their `def ...` implementations) so these utilities are only imported from `tests/fixtures/helpers.py`.
2. Confirm that `tests/fixtures/helpers.py` already implements both functions and that their signatures match how the current test code uses them.
3. If anything imports these functions directly from `tests.conftest` (rather than using them indirectly via pytest fixtures), change those imports to `tests.fixtures.helpers`, or use the relative import `.fixtures.helpers`.
</issue_to_address>
### Comment 3
<location> `tests/fixtures/plugins/fixture_plugin.py:11-20` </location>
<code_context>
+@star.register("test_plugin", "AstrBot Team", "测试插件 - 用于插件系统测试", "1.0.0")
</code_context>
<issue_to_address>
**suggestion (testing):** Add integration-style tests that exercise this fixture plugin with the new `mock_context` and event fixtures
To validate the new shared fixtures and plugin wiring, please add tests that: (1) instantiate `TestPlugin` via `mock_context` and assert `initialized` toggles on `terminate`; (2) send an `AstrMessageEvent` and verify `test_command`/`test_regex_handler` produce the expected `MessageEventResult`; and (3) invoke `test_llm_tool` directly to confirm tools can be called in isolation.
Suggested implementation:
```python
import re
from astrbot.api import llm_tool, star
from astrbot.api.event import AstrMessageEvent, MessageEventResult, filter
```
```python
    def __init__(self, context: star.Context) -> None:
        super().__init__(context)
        self.initialized = True

    async def terminate(self) -> None:
        """Plugin teardown."""
        self.initialized = False

    @filter.command("test")
    async def test_command(self, event: AstrMessageEvent) -> MessageEventResult:
        """Command handler used by integration tests."""
        return MessageEventResult(
            content="test_command_ok",
            event=event,
        )

    @filter.regex(r"test-regex:(.+)")
    async def test_regex_handler(
        self,
        event: AstrMessageEvent,
        match: "re.Match[str]",
    ) -> MessageEventResult:
        """Regex handler used by integration tests."""
        matched = match.group(1)
        return MessageEventResult(
            content=f"test_regex_ok:{matched}",
            event=event,
        )

    @llm_tool
    async def test_llm_tool(self, query: str) -> str:
        """Test LLM tool that can be invoked in isolation."""
        return f"test_llm_tool_ok:{query}"
```
To fully implement your review comment, you’ll also want a new integration-style test module, for example `tests/integration/test_fixture_plugin.py`, that uses your shared fixtures:
1. Use the `mock_context` fixture to instantiate the plugin and assert:
```python
async def test_fixture_plugin_initialized_and_terminated(mock_context):
    plugin = TestPlugin(mock_context)
    assert plugin.initialized is True
    await plugin.terminate()
    assert plugin.initialized is False
```
2. Use your `AstrMessageEvent` / event fixtures (e.g. something like `message_event_factory` or `mock_event`) to:
- Build an event for `/test` (or whatever command syntax your framework routes to `filter.command("test")`),
- Dispatch it through the normal plugin/event-processing path,
- Assert the resulting `MessageEventResult` has `content == "test_command_ok"` and any other expected fields.
3. Similarly, create an event whose content matches `test-regex:hello` and verify the routed handler produces a `MessageEventResult` whose `content == "test_regex_ok:hello"`.
4. Finally, call `plugin.test_llm_tool("foo")` directly in a test and assert the return value is `"test_llm_tool_ok:foo"`.
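Step 4's direct invocation can be sketched framework-free, with the `@llm_tool` decorator and `star.Context` wiring stubbed out as assumptions, to show how an async tool method is exercised in isolation:

```python
import asyncio


class FakePlugin:
    # Stand-in for the fixture plugin: mirrors test_llm_tool's contract
    # without the real decorator or plugin base class.
    async def test_llm_tool(self, query: str) -> str:
        return f"test_llm_tool_ok:{query}"


# Drive the coroutine to completion without any event-dispatch machinery.
result = asyncio.run(FakePlugin().test_llm_tool("foo"))
assert result == "test_llm_tool_ok:foo"
```

In the real test, `pytest.mark.asyncio` (or the project's async test setup) would replace the explicit `asyncio.run` call.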
You may need to adjust how `MessageEventResult` is constructed in the plugin methods above to match the actual constructor or helper APIs in your codebase (e.g. `MessageEventResult.reply("...")` or similar) and mirror that in your assertions.
</issue_to_address>
Pull request overview
This PR enhances AstrBot’s Python test infrastructure by adding reusable fixtures, mock module builders for platform adapters, and shared test data/config files, aiming to reduce duplication across tests and improve maintainability.
Changes:

- Added reusable mock-module fixtures and builder utilities for Telegram/Discord/Aiocqhttp adapters under `tests/fixtures/mocks/`.
- Added shared fixture helpers and static fixture data (messages/configs/plugins) under `tests/fixtures/`.
- Introduced a new `tests/conftest.py` with test ordering/marking and a `--test-profile` selection mode; adjusted CI coverage target to `astrbot`.
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 8 comments.
Show a summary per file
| File | Description |
|---|---|
| `tests/fixtures/plugins/metadata.yaml` | Adds a minimal plugin metadata fixture for plugin-system tests. |
| `tests/fixtures/plugins/fixture_plugin.py` | Adds a minimal test plugin implementation (commands/regex/tool) for plugin-system testing. |
| `tests/fixtures/mocks/telegram.py` | Provides Telegram + apscheduler module mocks and a builder for bot/app/scheduler test doubles. |
| `tests/fixtures/mocks/discord.py` | Provides Discord module mocks and a client builder for adapter tests. |
| `tests/fixtures/mocks/aiocqhttp.py` | Provides aiocqhttp module mocks and a bot builder for adapter tests. |
| `tests/fixtures/mocks/__init__.py` | Exposes mock fixtures/builders via a single import surface. |
| `tests/fixtures/messages/test_messages.json` | Adds representative message payload fixtures for component parsing tests. |
| `tests/fixtures/helpers.py` | Adds shared helper functions for configs, message components, and LLM responses. |
| `tests/fixtures/configs/test_cmd_config.json` | Adds a config fixture for command/config-related tests. |
| `tests/fixtures/__init__.py` | Adds fixture loading helpers and re-exports common helper functions. |
| `tests/conftest.py` | Adds pytest configuration, ordering/marking, selection profiles, and shared fixtures. |
| `.github/workflows/coverage_test.yml` | Updates pytest coverage collection to target the astrbot module. |
```python
mock_aiocqhttp = create_mock_aiocqhttp_modules()
monkeypatch = pytest.MonkeyPatch()

monkeypatch.setitem(sys.modules, "aiocqhttp", mock_aiocqhttp)
monkeypatch.setitem(sys.modules, "aiocqhttp.exceptions", mock_aiocqhttp.exceptions)
```
Same issue as the other mock modules: `aiocqhttp` is inserted into `sys.modules` as a `MagicMock`, but the code imports `aiocqhttp.exceptions` as a submodule. If `aiocqhttp` doesn't look like a package (no `__path__`), `from aiocqhttp.exceptions import ...` can fail with "'aiocqhttp' is not a package". Prefer `types.ModuleType` + `__path__` for package-like mocks.
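A minimal sketch of the recommended pattern, assuming the adapter code does `from aiocqhttp.exceptions import ActionFailed` (the helper name `install_mock_package` is illustrative):

```python
import sys
import types
from unittest.mock import MagicMock


def install_mock_package(name: str, submodules: dict) -> types.ModuleType:
    """Install a package-like mock into sys.modules so that
    `from <name>.<sub> import <attr>` resolves against the mock."""
    pkg = types.ModuleType(name)
    pkg.__path__ = []  # having __path__ marks the module as a package
    sys.modules[name] = pkg
    for sub, attrs in submodules.items():
        mod = types.ModuleType(f"{name}.{sub}")
        for attr, value in attrs.items():
            setattr(mod, attr, value)
        sys.modules[f"{name}.{sub}"] = mod  # register the submodule too
        setattr(pkg, sub, mod)
    return pkg


install_mock_package("aiocqhttp", {"exceptions": {"ActionFailed": MagicMock()}})

# Both import styles now resolve against the mock instead of raising
# "'aiocqhttp' is not a package".
from aiocqhttp.exceptions import ActionFailed
```

Registering each submodule in `sys.modules` (as the PR's snippet already does) also satisfies the import machinery, but setting `__path__` on the parent keeps the mock robust if other code imports the submodule by dotted path first.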
Enhance the test framework with reusable mock utilities and helper functions to reduce code duplication across platform adapter tests and improve test maintainability.
Modifications / 改动点
- Added `tests/fixtures/mocks/` with mock builders for Telegram, Discord, and Aiocqhttp
- Added `tests/fixtures/helpers.py` with utility functions for creating test data
- Expanded `tests/conftest.py` with test profile support (all/blocking)
- Updated coverage workflow to target the correct module
This is NOT a breaking change. / 这不是一个破坏性变更。
Screenshots or Test Results / 运行截图或测试结果
Checklist / 检查清单
- I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in `requirements.txt` and `pyproject.toml`.

Summary by Sourcery

Enhance the testing infrastructure with shared fixtures, mocks, and helpers to support platform adapter and plugin tests while tightening coverage configuration.

Enhancements:

Build:
- Target the `astrbot` package for coverage collection.

Tests: