52 changes: 45 additions & 7 deletions AGENTS.md
@@ -44,13 +44,22 @@ The engine follows the SCXML run-to-completion (RTC) model with two processing l
### Error handling (`error_on_execution`)

- `StateChart` has `error_on_execution=True` by default; `StateMachine` has `False`.
- Errors are caught at the **block level** (per onentry/onexit block), not per microstep.
- This means `after` callbacks still run even when an action raises — making `after_<event>()`
a natural **finalize** hook (runs on both success and failure paths).
- Errors are caught at the **block level** (per onentry/onexit/transition `on` block), not per
microstep. This means `after` callbacks still run even when an action raises — making
`after_<event>()` a natural **finalize** hook (runs on both success and failure paths).
- `error.execution` is dispatched as an internal event; define transitions for it to handle
errors within the statechart.
- Error during `error.execution` handling → ignored to prevent infinite loops.

#### `on_error` asymmetry: transition `on` vs onentry/onexit

Transition `on` content uses `on_error` **only for non-`error.execution` events**. During
`error.execution` processing, `on_error` is disabled for transition `on` content — errors
propagate to `microstep()` where `_send_error_execution` ignores them. This prevents infinite
loops in self-transition error handlers (e.g., `error_execution = s1.to(s1, on="handler")`
where `handler` raises). `onentry`/`onexit` blocks always use `on_error` regardless of the
current event.
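
A minimal sketch of the shape described above (not part of this diff; `Guarded` and the failing `handler` are illustrative, reusing the `error_execution = s1.to(s1, on="handler")` declaration style quoted in the text):

```py
from statemachine import State, StateChart

class Guarded(StateChart):
    s1 = State(initial=True)

    # Self-transition declared for the internal error.execution event.
    error_execution = s1.to(s1, on="handler")

    def handler(self, error=None, **kwargs):
        # If this raises while error.execution is being processed, on_error is
        # NOT applied to the transition 'on' block; the exception propagates to
        # microstep(), where _send_error_execution ignores it, so no second
        # error.execution event is queued and no infinite loop occurs.
        ...
```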

### Eventless transitions

- Bare transition statements (not assigned to a variable) are **eventless** — they fire
@@ -68,6 +77,21 @@ The engine follows the SCXML run-to-completion (RTC) model with two processing l
- `on_error_execution()` works via naming convention but **only** when a transition for
`error.execution` is declared — it is NOT a generic callback.

### Invoke (`<invoke>`)

- `invoke.py` — `InvokeManager` on the engine manages the lifecycle: `mark_for_invoke()`,
`cancel_for_state()`, `spawn_pending_sync/async()`, `send_to_child()`.
- `_cleanup_terminated()` only removes invocations that are both terminated **and** cancelled.
A terminated-but-not-cancelled invocation means the handler's `run()` returned but the owning
state is still active — it must stay in `_active` so `send_to_child()` can still route events.
- **Child machine constructor blocks** in the processing loop. Use a listener pattern (e.g.,
`_ChildRefSetter`) to capture the child reference during the first `on_enter_state`, before
the loop spins.
- `#_<invokeid>` send target: routed via `_send_to_invoke()` in `io/scxml/actions.py` →
`InvokeManager.send_to_child()` → handler's `on_event()`.
- **Tests with blocking threads**: use `threading.Event.wait(timeout=)` instead of
`time.sleep()` for interruptible waits — avoids thread leak errors in teardown; see the
sketch below.
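
A minimal illustration of the last point (not from this diff; the test body is a placeholder):

```py
import threading

def test_worker_thread_is_interruptible():
    stop = threading.Event()

    def worker():
        # Wakes up as soon as stop is set, or after 5s at most,
        # instead of sleeping unconditionally like time.sleep(5) would.
        stop.wait(timeout=5)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    try:
        ...  # exercise the state machine here
    finally:
        stop.set()          # wake the worker immediately
        t.join(timeout=1)   # no leaked thread left for teardown to complain about
```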

## Environment setup

```bash
@@ -77,11 +101,11 @@ pre-commit install

## Running tests

Always use `uv` to run commands:
Always use `uv` to run commands. Also, use a timeout so a leaked thread or an infinite loop cannot hang the run:

```bash
# Run all tests (parallel)
uv run pytest -n auto
timeout 120 uv run pytest -n 4

# Run a specific test file
uv run pytest tests/test_signature.py
@@ -98,10 +122,24 @@ Don't specify the directory `tests/`, because this will exclude doctests from bo
(`--doctest-glob=*.md`) (enabled by default):

```bash
uv run pytest -n auto
timeout 120 uv run pytest -n 4
```

Tests normally run in under 60s (~40s on average), so take a closer look if they take longer; it may indicate a regression.

Coverage is enabled by default (`--cov` is in `pyproject.toml`'s `addopts`). To write a
coverage report to a file, pass `--cov-report` **in addition to** `--cov`:

```bash
# JSON report (machine-readable, includes missing_lines per file)
timeout 120 uv run pytest -n auto --cov=statemachine --cov-report=json:cov.json

# Terminal report with missing lines
timeout 120 uv run pytest -n auto --cov=statemachine --cov-report=term-missing
```

Coverage is enabled by default.
Note: `--cov=statemachine` is required to activate coverage collection; `--cov-report`
alone only changes the output format.
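
If the JSON report needs post-processing, a small helper along these lines works (assumes the `cov.json` path used above; `files` and `missing_lines` are standard keys in coverage.py's JSON output):

```py
import json

with open("cov.json") as fh:
    report = json.load(fh)

# List files that still have uncovered lines, worst offenders first.
for path, data in sorted(
    report["files"].items(), key=lambda kv: len(kv[1]["missing_lines"]), reverse=True
):
    if data["missing_lines"]:
        print(path, data["missing_lines"])
```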

### Testing both sync and async engines

32 changes: 18 additions & 14 deletions docs/invoke.md
@@ -456,24 +456,28 @@ is cancelled.

Pass a `StateChart` subclass to spawn a child machine:

```python
from statemachine import State, StateChart

class ChildMachine(StateChart):
    start = State(initial=True)
    end = State(final=True)
    go = start.to(end)

    def on_enter_start(self, **kwargs):
        self.send("go")

class ParentMachine(StateChart):
    loading = State(initial=True, invoke=ChildMachine)
    ready = State(final=True)
    done_invoke_loading = loading.to(ready)
```

```py
>>> class ChildMachine(StateChart):
...     start = State(initial=True)
...     end = State(final=True)
...     go = start.to(end)
...
...     def on_enter_start(self, **kwargs):
...         self.send("go")

>>> class ParentMachine(StateChart):
...     loading = State(initial=True, invoke=ChildMachine)
...     ready = State(final=True)
...     done_invoke_loading = loading.to(ready)

>>> sm = ParentMachine()
>>> time.sleep(0.2)

>>> "ready" in sm.configuration_values
True

```

The child machine is instantiated and run when the parent's `loading` state is entered.
When the child terminates (reaches a final state), a `done.invoke` event is sent to the
parent, triggering the `done_invoke_loading` transition. See
`tests/test_invoke.py::TestInvokeStateChartChild` for a working example.
parent, triggering the `done_invoke_loading` transition.
6 changes: 4 additions & 2 deletions docs/processing_model.md
@@ -111,8 +111,10 @@ and executes them atomically:

If an error occurs during steps 1–4 and `error_on_execution` is enabled, the error is
caught at the **block level** — meaning remaining actions in that block are skipped, but
the microstep continues and `after` callbacks still run (see
{ref}`cleanup / finalize pattern <sphx_glr_auto_examples_statechart_cleanup_machine.py>`).
the microstep continues and `after` callbacks still run. Each phase (exit, `on`, enter)
is an independent block, so an error in the transition `on` action does not prevent target
states from being entered. See {ref}`block-level error catching <error-execution>` and the
{ref}`cleanup / finalize pattern <sphx_glr_auto_examples_statechart_cleanup_machine.py>`.

### Macrostep

16 changes: 11 additions & 5 deletions docs/releases/3.0.0.md
@@ -83,6 +83,10 @@ machines can receive context at creation time:

```

Invoke also supports child state machines (pass a `StateChart` subclass) and SCXML
`<invoke>` with `<finalize>`, autoforward, and `#_<invokeid>` / `#_parent` send targets
for parent-child communication.

See {ref}`invoke` for full documentation.

### Compound states
@@ -336,6 +340,11 @@ True

```

Errors are caught at the **block level**: each microstep phase (exit, transition `on`,
enter) is an independent block. An error in one block does not prevent subsequent blocks
from executing — in particular, `after` callbacks always run, making `after_<event>()` a
natural finalize hook. See {ref}`block-level error catching <error-execution>`.
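
A compact sketch of that behaviour (illustrative names, not taken from the release notes; per the semantics above, `after_start` still runs even though `on_start` raises):

```py
from statemachine import State, StateChart

class Job(StateChart):
    idle = State(initial=True)
    running = State(final=True)

    start = idle.to(running)

    def on_start(self):
        # Raises inside the transition 'on' block; caught at block level and
        # queued as an internal error.execution event.
        raise RuntimeError("boom")

    def after_start(self, **kwargs):
        # Still executes: the natural place for cleanup / finalize logic.
        print("cleanup")
```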

The error object is available as `error` in handler kwargs. See {ref}`error-execution`
for full details.

@@ -504,11 +513,8 @@ TODO.

The following SCXML features are **not yet implemented** and are deferred to a future release:

- `<invoke>` — invoking external services or sub-machines from within a state
- HTTP and other external communication targets
- `<finalize>` — processing data returned from invoked services

These features are tracked for v3.1+.
- HTTP and other external communication targets (only `#_internal`, `#_parent`, and
`#_<invokeid>` send targets are supported)

```{seealso}
For a step-by-step migration guide with before/after examples, see
26 changes: 25 additions & 1 deletion docs/statecharts.md
@@ -213,12 +213,36 @@ If an error occurs while processing the `error.execution` event itself, the engi
ignores the second error (logging a warning) to prevent infinite loops. The state machine
remains in the configuration it was in before the failed error handler.

### Block-level error catching

`StateChart` catches errors at the **block level**, not the microstep level.
Each phase of the microstep — `on_exit`, transition `on` content, `on_enter` — is an
independent block. An error in one block:

- **Stops remaining actions in that block** (per SCXML spec, execution MUST NOT continue
within the same block after an error).
- **Does not affect other blocks** — subsequent phases of the microstep still execute.
In particular, `after` callbacks always run regardless of errors in earlier blocks.

This means that even if a transition's `on` action raises an exception, the transition
completes: target states are entered and `after_<event>()` callbacks still run. The error
is caught and queued as an `error.execution` internal event, which can be handled by a
separate transition.
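
A sketch of what this means in practice (illustrative names; behaviour as described in this section):

```py
from statemachine import State, StateChart

class Download(StateChart):
    idle = State(initial=True)
    fetching = State()
    failed = State(final=True)

    fetch = idle.to(fetching)
    # Separate transition that consumes the queued error.execution event.
    error_execution = fetching.to(failed)

    def on_fetch(self):
        # Error in the transition 'on' block: remaining actions in this block
        # are skipped, but the enter block still runs.
        raise OSError("network down")

    def on_enter_fetching(self):
        print("entered fetching despite the error")
```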

```{note}
During `error.execution` processing, errors in transition `on` content are **not** caught
at block level — they propagate to the microstep, where they are silently ignored. This
prevents infinite loops when an error handler's own action raises (e.g., a self-transition
`error_execution = s1.to(s1, on="handler")` where `handler` raises). Entry/exit blocks
always use block-level error catching regardless of the current event.
```

### Cleanup / finalize pattern

A common need is to run cleanup code after a transition **regardless of success or failure**
— for example, releasing a lock or closing a resource.

Because `StateChart` catches errors at the **block level** (not the microstep level),
Because `StateChart` catches errors at the **block level** (see above),
`after_<event>()` callbacks still run even when an action raises an exception. This makes
`after_<event>()` a natural **finalize** hook — no need to duplicate cleanup logic in
an error handler.
2 changes: 2 additions & 0 deletions pyproject.toml
@@ -90,6 +90,8 @@ python_files = ["tests.py", "test_*.py", "*_tests.py"]
xfail_strict = true
log_cli = true
log_cli_level = "DEBUG"
log_cli_format = "%(relativeCreated)6.0fms %(threadName)-18s %(name)-35s %(message)s"
log_cli_date_format = "%H:%M:%S"
asyncio_default_fixture_loop_scope = "module"

[tool.coverage.run]
64 changes: 56 additions & 8 deletions statemachine/engines/async_.py
@@ -13,6 +13,7 @@
from ..exceptions import TransitionNotAllowed
from ..orderedset import OrderedSet
from ..state import State
from .base import _ERROR_EXECUTION
from .base import BaseEngine

if TYPE_CHECKING:
@@ -178,6 +179,7 @@ async def _exit_states( # type: ignore[override]
args, kwargs = await self._get_args_kwargs(info.transition, trigger_data)

if info.state is not None: # pragma: no branch
logger.debug("%s Exiting state: %s", self._log_id, info.state)
await self.sm._callbacks.async_call(
info.state.exit.key, *args, on_error=on_error, **kwargs
)
@@ -198,10 +200,24 @@ async def _enter_states( # noqa: C901
self._prepare_entry_states(enabled_transitions, states_to_exit, previous_configuration)
)

# For transition 'on' content, use on_error only for non-error.execution
# events. During error.execution processing, errors in transition content
# must propagate to microstep() where _send_error_execution's guard
# prevents infinite loops (per SCXML spec: errors during error event
# processing are ignored).
on_error_transition = on_error
if (
on_error is not None
and trigger_data.event
and str(trigger_data.event) == _ERROR_EXECUTION
):
on_error_transition = None

result = await self._execute_transition_content(
enabled_transitions,
trigger_data,
lambda t: t.on.key,
on_error=on_error_transition,
previous_configuration=previous_configuration,
new_configuration=new_configuration,
)
@@ -218,7 +234,7 @@
target=target,
)

logger.debug("Entering state: %s", target)
logger.debug("%s Entering state: %s", self._log_id, target)
self._add_state_to_configuration(target)

on_entry_result = await self.sm._callbacks.async_call(
@@ -257,6 +273,14 @@
return result

async def microstep(self, transitions: "List[Transition]", trigger_data: TriggerData):
self._microstep_count += 1
logger.debug(
"%s macro:%d micro:%d transitions: %s",
self._log_id,
self._macrostep_count,
self._microstep_count,
transitions,
)
previous_configuration = self.sm.configuration
try:
result = await self._execute_transition_content(
@@ -342,18 +366,23 @@ async def processing_loop( # noqa: C901
return None

_ctx_token = _in_processing_loop.set(True)
logger.debug("Processing loop started: %s", self.sm.current_state_value)
logger.debug("%s Processing loop started: %s", self._log_id, self.sm.current_state_value)
first_result = self._sentinel
try:
took_events = True
while took_events:
while took_events and self.running:
self.clear_cache()
took_events = False
macrostep_done = False

# Phase 1: eventless transitions and internal events
while not macrostep_done:
logger.debug("Macrostep: eventless/internal queue")
self._microstep_count = 0
logger.debug(
"%s Macrostep %d: eventless/internal queue",
self._log_id,
self._macrostep_count,
)

self.clear_cache()
internal_event = TriggerData(self.sm, event=None) # null object for eventless
@@ -365,7 +394,9 @@
internal_event = self.internal_queue.pop()
enabled_transitions = await self.select_transitions(internal_event)
if enabled_transitions:
logger.debug("Enabled transitions: %s", enabled_transitions)
logger.debug(
"%s Enabled transitions: %s", self._log_id, enabled_transitions
)
took_events = True
await self._run_microstep(enabled_transitions, internal_event)

@@ -380,7 +411,9 @@
await self._run_microstep(enabled_transitions, internal_event)

# Phase 3: external events
logger.debug("Macrostep: external queue")
logger.debug(
"%s Macrostep %d: external queue", self._log_id, self._macrostep_count
)
while not self.external_queue.is_empty():
self.clear_cache()
took_events = True
@@ -393,7 +426,14 @@
# transitions can be processed while we wait.
break

logger.debug("External event: %s", external_event.event)
self._macrostep_count += 1
self._microstep_count = 0
logger.debug(
"%s macrostep %d: event=%s",
self._log_id,
self._macrostep_count,
external_event.event,
)

# Handle lazy initial state activation.
# Break out of phase 3 so the outer loop restarts from phase 1
@@ -406,10 +446,15 @@
)
break

# Finalize + autoforward for active invocations
self._invoke_manager.handle_external_event(external_event)

event_future = external_event.future
try:
enabled_transitions = await self.select_transitions(external_event)
logger.debug("Enabled transitions: %s", enabled_transitions)
logger.debug(
"%s Enabled transitions: %s", self._log_id, enabled_transitions
)
if enabled_transitions:
result = await self.microstep(
list(enabled_transitions), external_event
@@ -448,9 +493,12 @@
_in_processing_loop.reset(_ctx_token)
self._processing.release()

logger.debug("%s Processing loop ended", self._log_id)
result = first_result if first_result is not self._sentinel else None
# If the caller has a future, await it (already resolved by now).
if caller_future is not None:
# Resolve the future if it wasn't processed (e.g. machine terminated).
self._resolve_future(caller_future, result)
return await caller_future
return result
